Amazon Bedrock is now available in Asia Pacific (New Zealand)

Starting today, customers can use Amazon Bedrock in the Asia Pacific (New Zealand) Region to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful supporting tools. Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, OpenAI, Stability AI, and Amazon via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustainable growth from generative AI while maintaining privacy and security. With this launch, customers can now use models from Anthropic (Sonnet 4.5, Sonnet 4.6, Opus 4.5, Opus 4.6, Haiku 4.5) and Amazon (Nova 2 Lite) in New Zealand with cross-Region inference. To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.
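Invoking one of these models through cross-Region inference means calling the Bedrock Converse API with an inference profile ID instead of a plain model ID. As a minimal sketch, the snippet below only builds the request payload; the profile ID and Region code are assumptions for illustration, so check the Bedrock console for the actual values available in Asia Pacific (New Zealand).

```python
import json

# Assumed cross-Region inference profile ID for illustration only --
# verify the actual IDs offered in Asia Pacific (New Zealand) before use.
MODEL_ID = "apac.anthropic.claude-haiku-4-5-v1:0"

def build_converse_request(prompt: str) -> dict:
    """Build the keyword arguments for the bedrock-runtime Converse API."""
    return {
        "modelId": MODEL_ID,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": 256, "temperature": 0.5},
    }

request = build_converse_request("Summarize this release note in one sentence.")
print(json.dumps(request, indent=2))
# With AWS credentials configured, you would send it with boto3
# (the Region code below is an assumption -- confirm it for New Zealand):
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="ap-southeast-6")
#   response = client.converse(**request)
```

Separating payload construction from the network call keeps the example runnable offline and makes the request shape easy to unit-test.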
Source: aws.amazon.com

Amazon MSK expands Express brokers to Africa (Cape Town) and Asia Pacific (Taipei) regions

You can now create provisioned Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters with Express brokers in Africa (Cape Town) and Asia Pacific (Taipei) regions.
Express brokers are a new broker type for Amazon MSK Provisioned, designed to deliver up to 3x more throughput per broker, scale up to 20x faster, and reduce recovery time by 90% compared to standard Apache Kafka brokers. Express brokers come pre-configured with Kafka best practices, support all Kafka APIs, and provide the same low-latency performance Amazon MSK customers expect, so existing client applications continue to work without any changes.
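Because Express brokers are just another broker type for MSK Provisioned, creating a cluster uses the same CreateClusterV2 call with an `express.*` instance type. The sketch below only assembles the request body; the subnet and security-group IDs are placeholders, and the Kafka version and instance size are assumptions to verify against the MSK documentation.

```python
import json

def build_create_cluster_request(name: str, subnets: list, security_groups: list) -> dict:
    """Assemble a CreateClusterV2 request for an MSK Provisioned cluster
    with Express brokers (placeholder IDs, assumed version and size)."""
    return {
        "ClusterName": name,
        "Provisioned": {
            "KafkaVersion": "3.6.0",          # assumed; pick a supported version
            "NumberOfBrokerNodes": 3,          # must be a multiple of the AZ count
            "BrokerNodeGroupInfo": {
                "InstanceType": "express.m7g.large",  # Express broker type, assumed size
                "ClientSubnets": subnets,
                "SecurityGroups": security_groups,
            },
        },
    }

request = build_create_cluster_request(
    "demo-express-cluster",
    ["subnet-aaa", "subnet-bbb", "subnet-ccc"],
    ["sg-123"],
)
print(json.dumps(request, indent=2))
# With AWS credentials configured, e.g. in the new Cape Town Region:
#   import boto3
#   boto3.client("kafka", region_name="af-south-1").create_cluster_v2(**request)
```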
To get started, create a new cluster with Express brokers through the Amazon MSK console or the AWS CLI, and read our Amazon MSK Developer Guide for more information.
Source: aws.amazon.com

SageMaker Training Plans now supports extending existing capacity commitments without workload reconfiguration

SageMaker Training Plans allows you to reserve GPU capacity within specified time frames in cluster sizes of up to 64 instances. Today, Amazon SageMaker AI announces that Training Plans can now be extended when your AI workloads take longer than anticipated, ensuring uninterrupted access to capacity. You can extend plans in 1-day increments up to 14 days, or in 7-day increments up to 182 days (26 weeks). Extensions can be initiated via the API or the SageMaker console. Once the extension is purchased, the workload continues to run uninterrupted without you needing to reconfigure it. SageMaker AI helps you create the most cost-efficient training plan that fits within your timeline and AI budget. Once you create and purchase your training plan, SageMaker automatically provisions the infrastructure and runs the AI workloads on these compute resources without requiring any manual intervention. See the SageMaker AI pricing page for a detailed breakdown of instance availability by AWS Region. To learn more about training plan extensions, see the Amazon SageMaker Training Plans User Guide.
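The extension increments described above can be encoded as a small validation rule: any whole number of days up to 14, or whole weeks up to 182 days. The helper below sketches one reading of that rule for illustration; the Training Plans User Guide remains authoritative.

```python
def is_valid_extension(days: int) -> bool:
    """Check an extension length against the announced increments:
    1-day increments up to 14 days, or 7-day increments up to 182 days.
    (One reading of the stated rule, for illustration only.)"""
    if days <= 0:
        return False
    if days <= 14:                         # any whole number of days up to two weeks
        return True
    return days % 7 == 0 and days <= 182   # whole weeks up to 26 weeks

print(is_valid_extension(10))   # True: 10 x 1-day increments
print(is_valid_extension(21))   # True: 3 x 7-day increments
print(is_valid_extension(20))   # False: beyond 14 days and not a whole week
print(is_valid_extension(189))  # False: exceeds the 182-day maximum
```

A check like this could sit in automation that requests extensions programmatically, rejecting invalid lengths before any API call is made.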
Source: aws.amazon.com