AWS Batch now provides AMI status and supports AWS Health Planned Lifecycle Events

AWS Batch now provides enhanced visibility into your compute environments with two new capabilities that help you maintain operational best practices. When you describe a compute environment, you can now see the status of your Batch-provided default Amazon Machine Images (AMIs), indicating when updates are available. Additionally, AWS Batch now publishes AWS Health Planned Lifecycle Events to help you prepare for and track changes affecting your batch computing resources.

The AMI status indicator shows whether you are using the latest AMI (LATEST) or an update is available (UPDATE_AVAILABLE), helping you identify compute environments that may be running outdated AMIs. AWS Health Planned Lifecycle Events provide advance notification of upcoming changes, such as AMI deprecations, help you monitor the migration status of affected compute environments, and let you automate responses using Amazon EventBridge.

The AMI status indicator and AWS Health Planned Lifecycle Events are available today in all AWS Regions where AWS Batch is available. For more information, see the Managing AMI versions and AWS Health Planned Lifecycle Events pages in the AWS Batch User Guide.
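As a sketch of how you might surface this from the AWS SDK for Python, the following filters compute environments by the new status value. `describe_compute_environments` is a real AWS Batch API, but the exact response key carrying the AMI status (`amiStatus` below) is an assumption; verify it against the API reference.

```python
# Hedged sketch: flag compute environments whose Batch-provided AMI is outdated.
# The "amiStatus" response key is an assumption; check the
# DescribeComputeEnvironments API reference for the actual field name.

def outdated_environments(compute_environments):
    """Return the names of compute environments reporting UPDATE_AVAILABLE."""
    return [
        env["computeEnvironmentName"]
        for env in compute_environments
        if env.get("amiStatus") == "UPDATE_AVAILABLE"
    ]

# Typical usage (requires AWS credentials):
# import boto3
# batch = boto3.client("batch")
# envs = batch.describe_compute_environments()["computeEnvironments"]
# print(outdated_environments(envs))
```

A helper like this could feed a scheduled check that alerts when any environment drifts off the latest AMI.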
Source: aws.amazon.com

Accelerate AI-assisted development with Agent Plugin for AWS Serverless

AWS announces the Agent Plugin for AWS Serverless, enabling developers to easily build, deploy, troubleshoot, and manage serverless applications using AI coding assistants like Kiro, Claude Code, and Cursor.
Agent plugins extend AI coding assistants with structured, reusable capabilities by packaging skills, sub-agents, hooks, and Model Context Protocol (MCP) servers into a single modular unit. The Agent Plugin for AWS Serverless dynamically loads the relevant guidance and expertise required throughout the development lifecycle for building production-ready serverless applications on AWS.

You can create AWS Lambda functions that integrate with popular event sources like Amazon EventBridge, Amazon Kinesis, and AWS Step Functions, while following built-in best practices for observability, performance optimization, and troubleshooting. As you adopt Infrastructure as Code (IaC), you can streamline project setup with the AWS Serverless Application Model (SAM) and AWS Cloud Development Kit (CDK), with reusable constructs, proven architectural patterns, automated CI/CD pipelines, and local testing workflows. For long-running, stateful workflows, you can build with confidence using Lambda durable functions, which provide a checkpoint-replay model, advanced orchestration patterns, and error-handling capabilities. Lastly, you can design and manage APIs as part of your application using Amazon API Gateway, with guidance across REST APIs, HTTP APIs, and WebSocket APIs.

These capabilities are packaged as agent skills in the open Agent Skills format, making them usable across compatible AI tools such as Kiro, Claude Code, and Cursor.
The Agent Plugin for AWS Serverless is available in any AI coding assistant tool that supports agent plugins, such as Claude Code and Cursor. In Claude Code, you can install it from the official Claude Marketplace with a single command: ‘/plugin install aws-serverless@claude-plugins-official’. You can also install agent skills from the plugin individually in any AI coding assistant tool that supports agent skills. To learn more about the plugin and its capabilities, visit GitHub.
Source: aws.amazon.com

Amazon Bedrock AgentCore Runtime now supports managed session storage for persistent agent filesystem state (preview)

Amazon Bedrock AgentCore Runtime now offers managed session storage in public preview, enabling agents to persist their filesystem state across stop and resume cycles. Modern agents write code, install packages, generate artifacts, and manage state through the filesystem. Until now, that work was lost when a session stopped. With managed session storage, everything your agent writes to a configured mount path persists automatically, even after the compute environment terminates.
When you configure session storage, each session gets a persistent directory at the mount path you specify. Your agent reads and writes files as normal, and AgentCore Runtime transparently replicates data to durable storage. When the session stops, data is flushed during graceful shutdown. When you resume with the same session ID, a new microVM mounts the same storage and the agent continues from where it left off: source files, installed packages, build artifacts, and git history all remain intact. No checkpoint logic, no save-and-restore code, and no changes to your agent application are required.

Session storage supports standard Linux filesystem operations, including regular files, directories, and symlinks, with up to 1 GB per session and data retained for 14 days of idle time. Storage access is confined to a single session's data and cannot reach other sessions or other AgentCore Runtime environments.
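The stop/resume contract above can be illustrated with plain Python: the agent only uses ordinary filesystem calls at the mount path, and persistence happens underneath. This is purely illustrative; a temporary directory stands in for the configured mount path (e.g. a path like /workspace, which is your configuration choice, not a fixed value).

```python
# Illustrative sketch of the stop/resume contract: the agent uses normal
# filesystem calls at the configured mount path; persistence is transparent.
# A temp directory stands in here for the real mount path.
import tempfile
from pathlib import Path

MOUNT_PATH = Path(tempfile.mkdtemp())  # stand-in for the configured mount path

def session_one(workdir: Path) -> None:
    # The agent writes ordinary files; no checkpoint or save/restore logic.
    (workdir / "notes.txt").write_text("state from session 1\n")

def session_two(workdir: Path) -> str:
    # After resuming with the same session ID, the same files are mounted back.
    return (workdir / "notes.txt").read_text()

session_one(MOUNT_PATH)
# ... session stops; AgentCore Runtime flushes the directory to durable storage ...
# ... resume with the same session ID; a new microVM mounts the same data ...
print(session_two(MOUNT_PATH), end="")
```

The point of the sketch is what is absent: no serialization, no explicit save or restore, just file I/O at one path.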
Session storage is available in public preview across fourteen AWS Regions: US (N. Virginia, Ohio, Oregon), Canada (Central), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, London, Paris, Stockholm).
To learn more, see persist files across stop/resume in the Amazon Bedrock AgentCore documentation.
Source: aws.amazon.com

Amazon SageMaker HyperPod now supports continuous provisioning for Slurm-orchestrated clusters

Amazon SageMaker HyperPod now extends continuous provisioning support to clusters using the Slurm orchestrator, enabling greater flexibility and efficiency for enterprise customers running large-scale AI/ML training workloads. AI/ML customers running Slurm-based clusters need to start training quickly, scale seamlessly, perform maintenance without disrupting operations, and have granular visibility into cluster operations. Previously, if any instance group could not be fully provisioned, the entire cluster creation or scaling operation failed and rolled back, causing delays and requiring manual intervention.

With continuous provisioning for Slurm, SageMaker HyperPod automatically provisions remaining capacity in the background while training jobs can begin immediately on available instances. The system uses priority-based provisioning to bring up the Slurm controller node first, followed by login and worker nodes in parallel, so your cluster reaches an operational state as quickly as possible. HyperPod retries failed node launches asynchronously and adds nodes to the Slurm cluster automatically as they become available, ensuring clusters reliably reach their desired scale without requiring manual intervention. You can now perform concurrent, non-blocking scaling operations across multiple instance groups simultaneously: a capacity shortage in one instance group no longer blocks scaling in others. These capabilities help customers reduce time-to-training, maximize resource utilization, and focus on innovation rather than infrastructure management.

This feature is available for new SageMaker HyperPod clusters using the Slurm orchestrator. You can enable continuous provisioning by setting the NodeProvisioningMode parameter to “Continuous” when creating new HyperPod clusters using the CreateCluster API. Continuous provisioning can also be enabled when creating new clusters through the AWS CLI and the SageMaker AI console.
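As a minimal sketch of opting in via the API: `create_cluster` is a real SageMaker API and `NodeProvisioningMode` is the parameter named above, but the instance-group shape below is abbreviated rather than a complete request; consult the CreateCluster API reference for the full schema.

```python
# Sketch of a CreateCluster request that opts in to continuous provisioning.
# "NodeProvisioningMode": "Continuous" is the setting named in the announcement;
# the instance-group content is deliberately abbreviated here.

def continuous_cluster_request(cluster_name, instance_groups):
    """Build CreateCluster keyword arguments with continuous provisioning enabled."""
    return {
        "ClusterName": cluster_name,
        "InstanceGroups": instance_groups,
        # Without this, a partially provisionable cluster fails and rolls back.
        "NodeProvisioningMode": "Continuous",
    }

# Typical usage (requires AWS credentials):
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_cluster(**continuous_cluster_request("my-hyperpod", [...]))
```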
This feature is available in all AWS Regions where Amazon SageMaker HyperPod is supported. To learn more about continuous provisioning for Slurm clusters, see the Amazon SageMaker HyperPod User Guide.
Source: aws.amazon.com

AWS Backup expands support for Amazon DocumentDB to 12 Regions

AWS Backup now supports Amazon DocumentDB in 12 additional AWS Regions: Asia Pacific (Malaysia, Thailand, Osaka, Hong Kong, Jakarta, Melbourne), Europe (Stockholm, Spain, Zurich), Africa (Cape Town), Israel (Tel Aviv), and Mexico (Central).
This expansion brings policy-based data protection and recovery to your Amazon DocumentDB clusters in these newly supported Regions.
To start protecting your DocumentDB clusters with AWS Backup, add your DocumentDB clusters to your existing backup plans, or create a new backup plan and attach your DocumentDB clusters to it. To learn more about AWS Backup for Amazon DocumentDB, visit the product page, pricing page, and documentation. To get started, visit the AWS Backup console, AWS Command Line Interface (CLI), or AWS SDKs.
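Attaching clusters to a plan can be sketched with the AWS Backup API: `create_backup_selection` and the BackupSelection shape below are part of the AWS Backup API, while the plan ID, role ARN, and cluster ARN are placeholders you would substitute.

```python
# Sketch: attach DocumentDB clusters to an existing backup plan by resource ARN.
# The BackupSelection shape matches the AWS Backup CreateBackupSelection API;
# all identifiers below are placeholders.

def docdb_backup_selection(selection_name, iam_role_arn, cluster_arns):
    """Build the BackupSelection document listing DocumentDB cluster ARNs."""
    return {
        "SelectionName": selection_name,
        "IamRoleArn": iam_role_arn,
        "Resources": list(cluster_arns),
    }

# Typical usage (requires AWS credentials):
# import boto3
# backup = boto3.client("backup")
# backup.create_backup_selection(
#     BackupPlanId="<your-plan-id>",
#     BackupSelection=docdb_backup_selection(
#         "docdb-clusters",
#         "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
#         ["arn:aws:rds:eu-north-1:123456789012:cluster:my-docdb-cluster"],
#     ),
# )
```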
Source: aws.amazon.com

Amazon EC2 I7ie instances now available in additional AWS regions

Starting today, Amazon EC2 I7ie instances are available in the AWS Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Melbourne), Asia Pacific (Thailand), Europe (Zurich), Europe (Milan), and Mexico (Central) Regions. Designed for large, storage-I/O-intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance than I3en instances.
I7ie instances offer up to 120 TB of local NVMe storage density for storage optimized instances, and up to twice the vCPUs and memory of prior-generation instances. Powered by 3rd-generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.
I7ie instances are high-density storage optimized instances, ideal for workloads that require fast local storage with high random read/write performance and consistently very low latency when accessing large data sets. These instances are available in 9 virtual sizes and deliver up to 100 Gbps of network bandwidth and 60 Gbps of bandwidth for Amazon Elastic Block Store (EBS).
To learn more, visit the I7ie instances page.
Source: aws.amazon.com

AWS Batch now supports quota management and preemption for SageMaker Training jobs

AWS Batch now supports quota management with job preemption for SageMaker Training jobs, enabling you to efficiently allocate and share compute resources across your teams and projects. If you’re using GPU capacity in SageMaker Training jobs, you can now intelligently allocate compute resources, prioritize your business-critical training jobs, and automatically preempt lower-priority workloads when your urgent experiments arrive.

With quota management, you can create up to 20 quota shares per job queue that function as virtual queues with dedicated capacity limits and configurable resource sharing strategies. The service automatically uses cross-share preemption to restore borrowed capacity when the original owner submits jobs, and supports in-share preemption to allow high-priority jobs to preempt lower-priority jobs within the same quota share. You can monitor capacity utilization at the queue, quota share, and job-level granularity, update job priorities after submission to influence preemption decisions, and configure preemption retry limits to control behavior. The feature integrates directly with the SageMaker Python SDK via the aws_batch module.

Quota management with job preemption for SageMaker Training jobs is available today in all AWS Regions where AWS Batch is available. For more information, see our Quota Management example notebook on GitHub and the AWS Batch User Guide.
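As a hedged sketch of targeting a share from the API side: `shareIdentifier` and `schedulingPriorityOverride` are existing AWS Batch SubmitJob parameters for fair-share scheduling, but exactly how they map onto the new quota shares for SageMaker Training jobs is an assumption here; check the AWS Batch User Guide and the example notebook for the supported workflow.

```python
# Sketch: submit a job against a fair-share queue. shareIdentifier and
# schedulingPriorityOverride are existing AWS Batch SubmitJob parameters;
# their mapping onto the new quota shares is an assumption to verify.

def training_job_request(name, queue, job_definition, share, priority):
    """Build SubmitJob keyword arguments targeting a specific quota share."""
    return {
        "jobName": name,
        "jobQueue": queue,
        "jobDefinition": job_definition,
        "shareIdentifier": share,                # the share this job draws from
        "schedulingPriorityOverride": priority,  # higher values win preemption
    }

# Typical usage (requires AWS credentials):
# import boto3
# batch = boto3.client("batch")
# batch.submit_job(**training_job_request(
#     "llm-finetune", "training-queue", "sm-training-jobdef:1",
#     "research-team", 90,
# ))
```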
Source: aws.amazon.com

Amazon Polly expands Generative TTS engine with 10 new voices, 2 new regions, and Bidirectional Streaming API

Today, we are excited to announce the general availability of 10 new highly expressive Amazon Polly Generative voices across 8 locales: Tiffany (American English), Brian (British English), Aria (New Zealand English), Jasmine (Singapore English), Florian (French), Ambre (French), Lorenzo (Italian), Beatrice (Italian), Lennart (German), and Sabrina (Swiss German).
Alongside these new voices, we have expanded the Generative engine to two new AWS Regions: Europe (London) and Canada (Central). We have also introduced Bidirectional Streaming API support for the Generative engine, allowing customers to stream text to Polly and receive synthesized audio back simultaneously. This makes it easy to feed output from a large language model (LLM) directly into speech synthesis, enabling real-time applications like chatbots and bespoke characters in games.
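For a simple starting point with one of the new voices, the existing synchronous SynthesizeSpeech call accepts the Generative engine; the new Bidirectional Streaming API has its own interface, which this sketch does not attempt to show.

```python
# Sketch: the existing synchronous SynthesizeSpeech call with the Generative
# engine and one of the new voices. (The new Bidirectional Streaming API has
# its own interface; see the Amazon Polly documentation for that.)

def generative_tts_request(text, voice_id="Tiffany", output_format="mp3"):
    """Build SynthesizeSpeech keyword arguments for the Generative engine."""
    return {
        "Engine": "generative",
        "VoiceId": voice_id,           # e.g. one of the new voices listed above
        "OutputFormat": output_format,
        "Text": text,
    }

# Typical usage (requires AWS credentials):
# import boto3
# polly = boto3.client("polly", region_name="eu-west-2")  # Europe (London)
# resp = polly.synthesize_speech(**generative_tts_request("Hello there!"))
# open("hello.mp3", "wb").write(resp["AudioStream"].read())
```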
Amazon Polly is a fully managed service that turns text into lifelike speech. This expansion addresses the growing demand for natural-sounding, lifelike speech generation in conversational AI and content creation. Developers building LLM-based interactive systems and speech-enabled applications can take advantage of the enhanced voice quality and variety, expanded language and feature support, as well as broader AWS region availability. 
To hear how Polly voices sound, go to Amazon Polly Features. For more details on the Polly offerings and use, see the Amazon Polly documentation and pricing page.
Source: aws.amazon.com

AWS HealthImaging is now available in Europe (London)

AWS HealthImaging is now available in the AWS Europe (London) Region. AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers, life sciences organizations, and their software partners to store, analyze, and share medical images at petabyte scale.
AWS HealthImaging offers fully managed infrastructure for storing medical imaging data, with both DICOMWeb APIs for easy integration with existing applications and AWS-native APIs for cloud-first implementations. With AWS HealthImaging, organizations can reduce storage costs by up to 40% compared to do-it-yourself solutions, enable faster image access for clinical workflows, and accelerate the development of AI-powered diagnostic applications while maintaining strict security controls over sensitive data.
AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Europe (Ireland), and Europe (London). To learn more, see the AWS HealthImaging Developer Guide.
Source: aws.amazon.com

AWS Firewall Manager launches in AWS Asia Pacific (New Zealand) Region

AWS Firewall Manager is now available in the AWS Asia Pacific (New Zealand) Region. AWS Firewall Manager helps cloud security administrators and site reliability engineers protect applications while reducing the operational overhead of manually configuring and managing rules. With AWS Firewall Manager, customers can apply defense-in-depth policies across the full range of AWS security services for applications and workloads hosted in the Asia Pacific (New Zealand) Region. Customers who want to secure assets with AWS WAF can create and maintain security policies with AWS Firewall Manager.

To learn more about how AWS Firewall Manager works, see the AWS Firewall Manager documentation, and see the AWS Region Table for the list of Regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features, and its pricing, visit the AWS Firewall Manager website.
Source: aws.amazon.com