Amazon MSK expands Express brokers to Africa (Cape Town) and Asia Pacific (Taipei) regions

You can now create provisioned Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters with Express brokers in the Africa (Cape Town) and Asia Pacific (Taipei) Regions.

Express brokers are a new broker type for Amazon MSK Provisioned, designed to deliver up to 3x more throughput per broker, scale up to 20x faster, and reduce recovery time by 90% compared to standard Apache Kafka brokers. Express brokers come pre-configured with Kafka best practices by default, support all Kafka APIs, and provide the same low-latency performance that Amazon MSK customers expect, so they can continue using existing client applications without any changes.
To get started, create a new cluster with Express brokers through the Amazon MSK console or the AWS CLI, and read our Amazon MSK Developer Guide for more information.
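As a rough sketch, a cluster with Express brokers could also be created programmatically with boto3's `create_cluster_v2` API. The subnet and security-group IDs, Kafka version, and the `express.m7g.large` instance type below are illustrative assumptions, not values from the announcement; check the MSK documentation for the broker sizes available in your Region.

```python
# Hypothetical sketch: an MSK Provisioned cluster with Express brokers.
# All resource IDs and the instance type are placeholder assumptions.
import json

cluster_request = {
    "ClusterName": "express-demo",
    "Provisioned": {
        "KafkaVersion": "3.6.0",
        "NumberOfBrokerNodes": 3,
        "BrokerNodeGroupInfo": {
            # Assumed Express broker size; verify against the MSK docs.
            "InstanceType": "express.m7g.large",
            "ClientSubnets": ["subnet-aaa", "subnet-bbb", "subnet-ccc"],
            "SecurityGroups": ["sg-0123456789abcdef0"],
        },
    },
}

print(json.dumps(cluster_request, indent=2))

# With AWS credentials configured, the request could then be sent via:
# import boto3
# boto3.client("kafka").create_cluster_v2(**cluster_request)
```

The actual API call is left commented out, since it requires credentials and a VPC with the referenced subnets and security group.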
Source: aws.amazon.com

SageMaker Training Plans now enables extending of existing capacity commitments without workload reconfiguration

SageMaker Training Plans allows you to reserve GPU capacity within specified time frames in cluster sizes of up to 64 instances. Today, Amazon SageMaker AI announces that Training Plans can now be extended when your AI workloads take longer than anticipated, ensuring uninterrupted access to capacity. You can extend plans in 1-day increments up to 14 days, or in 7-day increments up to 182 days (26 weeks). Extensions can be initiated via the API or the SageMaker console. Once the extension is purchased, the workload continues to run uninterrupted without you needing to reconfigure it.

SageMaker AI helps you create the most cost-efficient training plan that fits within your timeline and AI budget. Once you create and purchase your training plan, SageMaker automatically provisions the infrastructure and runs the AI workloads on these compute resources without requiring any manual intervention. See the SageMaker AI pricing page for a detailed breakdown of instance availability by AWS Region. To learn more about training plan extensions, see the Amazon SageMaker Training Plans User Guide.
Source: aws.amazon.com

Amazon Neptune now supports reading S3 data using openCypher

Amazon Neptune now supports reading data from Amazon S3 within openCypher queries. Through the new `neptune.read()` procedure, customers now have the option of federating queries over external data stored in S3 rather than first loading that data into Neptune. Organizations using Neptune for graph analytics can now dynamically incorporate S3-stored data without the traditional multi-step load workflow.
Key use cases include real-time graph analytics that combine S3 data with existing graph structures, dynamic node and edge creation from external datasets, and complex graph queries requiring external reference data. The procedure supports comprehensive data types including standard and Neptune-specific formats such as geometry and datetime, while maintaining security through the caller’s IAM credentials.
Reading from S3 is available in all AWS Regions where Amazon Neptune Database is currently offered. To learn more, check out the Neptune Database documentation.
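As a loose sketch of the use case above, an openCypher query could call `neptune.read()` to pull rows from S3 and create nodes from them. The argument shape of `neptune.read()`, the `YIELD row` clause, and the bucket/key are assumptions for illustration only; consult the Neptune Database documentation for the supported syntax. Neptune does expose an HTTP openCypher endpoint that accepts the query text as a `query` parameter, which is how the request payload is framed here.

```python
# Hypothetical openCypher query using neptune.read(); the procedure's
# argument shape and the S3 path below are illustrative assumptions.
query = """
CALL neptune.read("s3://my-bucket/customers.csv")
YIELD row
CREATE (:Customer {id: row.id, name: row.name})
"""

# Neptune's openCypher HTTP endpoint takes the query as a form parameter.
payload = {"query": query.strip()}
print(payload["query"])

# The request could then be sent with, e.g.:
# import requests
# requests.post("https://<neptune-endpoint>:8182/openCypher", data=payload)
```

The query runs under the caller's IAM credentials, so the caller needs read access to the referenced S3 object.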
Source: aws.amazon.com

SageMaker HyperPod now supports idle resource sharing for dynamic cluster utilization

Amazon SageMaker HyperPod task governance now supports dynamic resource sharing, allowing teams to borrow unallocated compute capacity in HyperPod clusters beyond their guaranteed quotas. Administrators can also configure borrow limits for specific resource types, such as accelerators, vCPUs, or memory, to ensure fair distribution across teams.

Administrators running shared compute clusters for generative AI workloads often face underutilization challenges: when data scientists do not fully consume their allocated quotas, expensive compute instances sit idle. Idle resource sharing solves this by automatically identifying unallocated cluster capacity and making it available for teams to borrow on a best-effort basis. HyperPod task governance monitors your cluster state and automatically recalculates borrowable resources when instances and compute quota policies change, eliminating manual configuration. Eligible instances in a ready and schedulable state, including instances with partitioned GPU configurations, contribute to the borrowable pool of unallocated compute capacity. Administrators can also define absolute borrow limits in addition to percentage-based borrow limits on idle compute. This helps administrators maximize compute utilization and maintain fine-grained control over how idle capacity is distributed across teams, while preserving guaranteed compute quota isolation for each team.

This capability is currently available for Amazon SageMaker HyperPod clusters using the EKS orchestrator in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Jakarta), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (São Paulo). To learn more, visit the SageMaker HyperPod webpage and the HyperPod task governance documentation.
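The interaction between percentage-based and absolute borrow limits can be illustrated with simple arithmetic. This is not a SageMaker API, just a sketch of how a team's borrowable share of idle capacity might be capped by whichever configured limit is stricter.

```python
# Illustrative arithmetic only (not a SageMaker API): a team's borrowable
# capacity is bounded by the idle pool, a percentage-based limit on that
# pool, and an absolute limit, as described in the announcement.
def borrowable(idle_units: int, percent_limit: float, absolute_limit: int) -> int:
    """Units a team may borrow: the strictest of the configured bounds."""
    by_percent = int(idle_units * percent_limit)
    return min(by_percent, absolute_limit, idle_units)

# 40 idle GPUs, team capped at 50% of idle capacity and at most 16 GPUs:
print(borrowable(40, 0.5, 16))  # → 16
```

Here the percentage limit would allow 20 GPUs, but the absolute limit of 16 is stricter and wins.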
Source: aws.amazon.com

Amazon CloudWatch Logs now supports log ingestion using HTTP-based protocol

Amazon CloudWatch Logs now supports HTTP Log Collector (HLC), ND-JSON, Structured JSON, and OpenTelemetry (OTEL) endpoints for sending logs over an HTTP-based protocol with a bearer token. With this launch, customers can ingest logs where AWS SDK integration is not feasible, such as with third-party or packaged software. The new endpoints are:

HTTP Log Collector (HLC) Logs (https://logs.<region>.amazonaws.com/services/collector/event) — for JSON events, ideal for migrating existing log pipelines.

ND-JSON Logs (https://logs.<region>.amazonaws.com/ingest/bulk) — for newline-delimited JSON, where each line is an independent log event. Perfect for high-volume streaming and bulk log ingestion. 

Structured JSON Logs (https://logs.<region>.amazonaws.com/ingest/json) — for a single JSON object or a JSON array of objects.

OpenTelemetry Logs (https://logs.<region>.amazonaws.com/v1/logs) — for OTLP-formatted logs in JSON or Protobuf encoding.

To enable the HLC endpoint, navigate to CloudWatch Settings in the AWS Console and generate an API key. CloudWatch creates the necessary IAM user with service-specific credentials and permissions. API keys can be configured with expiration periods of 1, 5, 30, 90, or 365 days. Customers must enable bearer token authentication on each log group before it can accept logs, which protects against unintended ingestion. Customers can use service control policies to block the creation of service-specific credentials.
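For the ND-JSON endpoint, the batch body is simply one JSON object per line. The sketch below builds such a body; the log events, the API key, and the `Content-Type` header value are assumptions for illustration, while the endpoint path is taken from the announcement above.

```python
# Building an ND-JSON batch for the /ingest/bulk endpoint: one
# independent JSON object per newline-delimited line.
import json

events = [
    {"timestamp": 1700000000000, "message": "user login", "level": "INFO"},
    {"timestamp": 1700000001000, "message": "payment failed", "level": "ERROR"},
]

body = "\n".join(json.dumps(e) for e in events)
print(len(body.splitlines()))  # → 2 (one line per event)

# With an API key generated in CloudWatch Settings, the batch could be
# posted with a bearer token (header value below is a placeholder):
# import urllib.request
# req = urllib.request.Request(
#     "https://logs.us-east-1.amazonaws.com/ingest/bulk",
#     data=body.encode(),
#     headers={
#         "Authorization": "Bearer <api-key>",
#         "Content-Type": "application/x-ndjson",  # assumed content type
#     },
# )
# urllib.request.urlopen(req)
```

Remember that bearer token authentication must be enabled on the target log group before the endpoint will accept these events.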
These endpoints are available in the following AWS Regions: US East (N. Virginia), US West (N. California), US West (Oregon), and US East (Ohio). To learn more about the HLC endpoint and security best practices, refer to the CloudWatch Logs Documentation. 
Source: aws.amazon.com