Amazon EC2 C8a instances now available in the Europe (Frankfurt) and Europe (Ireland) regions

Starting today, the compute-optimized Amazon EC2 C8a instances are available in the Europe (Frankfurt) and Europe (Ireland) regions. C8a instances are powered by 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to C7a instances. C8a instances also deliver 33% more memory bandwidth than C7a instances, making them ideal for latency-sensitive workloads, and they are up to 57% faster for GroovyJVM, allowing better response times for Java-based applications.

C8a instances offer 12 sizes, including 2 bare metal sizes, allowing customers to precisely match their workload requirements. Built on the AWS Nitro System, C8a instances are ideal for high-performance, compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 C8a instance page.
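The console steps above can also be scripted. A minimal sketch of launching a C8a instance programmatically follows; the instance size, AMI ID, and subnet shown are placeholder assumptions for illustration, not values from the announcement:

```python
# Sketch: parameters for launching a C8a instance in Europe (Frankfurt).
# The AMI ID is a placeholder and "c8a.xlarge" is an assumed size name;
# pick one of the 12 C8a sizes that matches your workload.
run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",  # replace with an AMI in your region
    "InstanceType": "c8a.xlarge",        # assumed C8a size for illustration
    "MinCount": 1,
    "MaxCount": 1,
}

# With boto3 installed and credentials configured, the call would be:
# import boto3
# ec2 = boto3.client("ec2", region_name="eu-central-1")  # Europe (Frankfurt)
# response = ec2.run_instances(**run_instances_params)
```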
Source: aws.amazon.com

Amazon Bedrock reinforcement fine-tuning adds support for open-weight models with OpenAI-compatible APIs

Amazon Bedrock now extends reinforcement fine-tuning (RFT) support to popular open-weight models, including OpenAI GPT-OSS and Qwen models, and introduces OpenAI-compatible fine-tuning APIs. These capabilities make it easier for developers to improve open-weight model accuracy without requiring deep machine learning expertise or large volumes of labeled data. Reinforcement fine-tuning in Amazon Bedrock automates the end-to-end customization workflow, allowing models to learn from feedback on multiple possible responses using a small set of prompts, rather than traditional large training datasets. Reinforcement fine-tuning enables customers to use smaller, faster, and more cost-effective model variants while maintaining high quality.

Organizations often struggle to adapt foundation models to their unique business requirements, forcing tradeoffs between generic models with limited performance and complex, expensive customization pipelines that require specialized infrastructure and expertise. Amazon Bedrock removes this complexity by providing a fully managed, secure reinforcement fine-tuning experience. Customers define reward functions using verifiable rule-based graders or AI-based judges, including built-in templates for both objective tasks such as code generation and math reasoning, and subjective tasks such as instruction following or conversational quality. During training, customers can use AWS Lambda functions for custom grading logic, and access intermediate model checkpoints to evaluate, debug, and select the best-performing model, improving iteration speed and training efficiency. All proprietary data remains within AWS’s secure, governed environment throughout the customization process. Models supported at this launch are qwen.qwen3-32b and openai.gpt-oss-20b.
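The custom grading hook described above can be sketched as a Lambda handler. The event fields ("prompt", "response") and the return shape ("score") below are illustrative assumptions, not the documented Bedrock RFT grader contract:

```python
# Hypothetical AWS Lambda grader for reinforcement fine-tuning.
# The event shape and return shape are assumptions for illustration;
# consult the Amazon Bedrock documentation for the actual contract.
def lambda_handler(event, context):
    response_text = event.get("response", "")
    # Toy rule-based grader: reward responses that are non-empty and concise.
    if not response_text:
        score = 0.0
    elif len(response_text.split()) <= 100:
        score = 1.0
    else:
        score = 0.5
    return {"score": score}
```

A rule-based grader like this suits objective tasks; for subjective tasks such as conversational quality, the announcement notes that AI-based judges are also available.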
After fine-tuning completes, customers can immediately use the resulting fine-tuned model for on-demand inference through Amazon Bedrock’s OpenAI-compatible APIs (the Responses API and the Chat Completions API) without any additional deployment steps. To learn more, see the Amazon Bedrock documentation.
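As a sketch, an on-demand inference call through the OpenAI-compatible Chat Completions API might look like the following; the endpoint URL shape and bearer-token auth shown are assumptions, so see the Amazon Bedrock documentation for the authoritative form:

```python
import json

# Hypothetical request to Bedrock's OpenAI-compatible Chat Completions API.
# The endpoint path and auth header below are assumptions for illustration.
ENDPOINT = "https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1/chat/completions"
payload = {
    "model": "openai.gpt-oss-20b",  # or your fine-tuned model's identifier
    "messages": [
        {"role": "user", "content": "Explain reinforcement fine-tuning in one sentence."}
    ],
}
body = json.dumps(payload)

# With an API key in hand, the request could be sent with urllib:
# import urllib.request
# req = urllib.request.Request(
#     ENDPOINT, data=body.encode(), method="POST",
#     headers={"Authorization": "Bearer <api-key>",
#              "Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request body follows the OpenAI Chat Completions shape, existing OpenAI-compatible client code can typically be pointed at the Bedrock endpoint with minimal changes.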
Source: aws.amazon.com

Claude Sonnet 4.6 now available in Amazon Bedrock

Starting today, Amazon Bedrock supports Claude Sonnet 4.6, which offers frontier performance across coding, agents, and professional work at scale. According to Anthropic, Claude Sonnet 4.6 is their best computer use model yet, allowing organizations to deploy browser-based automation across business tools with near-human reliability. Claude Sonnet 4.6 approaches Opus 4.6 intelligence at a lower cost. It enables faster, high-quality task completion, making it ideal for high-volume coding and knowledge work use cases. 
 
Claude Sonnet 4.6 serves as a direct upgrade to Sonnet 4.5 across use cases that require consistent conversational quality and efficient multi-step orchestration. For search and chat applications, it delivers reliable performance across single and multi-turn exchanges at a price point that makes high-volume deployment practical, maintaining quality standards while optimizing for scale. Developers can leverage Claude Sonnet 4.6 for agentic workflows, seamlessly filling both lead agent and subagent roles in multi-model pipelines with precise workflow management and context compaction capabilities. Enterprise teams can use Claude Sonnet 4.6 to power domain-specific applications with professional precision, including spreadsheet and financial model creation that accelerates analysis workflows, compliance review processes that require meticulous attention to detail, and data summarization tasks where iteration speed and accuracy are paramount. Claude Sonnet 4.6 requires only minor prompting adjustments from Sonnet 4.5, ensuring smooth migration for existing implementations.
 
Claude Sonnet 4.6 is now available in Amazon Bedrock. For the full list of available regions, refer to the documentation. To learn more and get started with Claude Sonnet 4.6 in Amazon Bedrock, read the About Amazon blog and visit the Amazon Bedrock console.
Source: aws.amazon.com

Amazon Connect now includes agent time-off requests in draft schedules

Amazon Connect now includes agent time-off requests in draft schedules, making it easier for you to view why an agent was not scheduled on a particular day or part of the day. For example, when generating schedules for next month, you can see that an agent who typically works Monday to Friday wasn’t scheduled for the first week because they’re on leave, without needing to check the published schedules or troubleshoot configuration to determine why the agent was not scheduled. This launch helps schedulers quickly identify coverage gaps and adjust schedules before publishing them to agents. This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
Source: aws.amazon.com

AWS Backup announces PrivateLink support for SAP HANA on AWS

AWS Backup now supports AWS PrivateLink for SAP HANA systems running on Amazon EC2. This enables customers to route all backup traffic through private network connections without traversing the public internet, helping organizations meet security and compliance requirements for regulated workloads.
Customers in regulated industries such as financial services, healthcare, and government agencies often require that all traffic remain on private networks. Previously, while SAP HANA application workloads could use AWS PrivateLink for secure, private communication with AWS services, backup traffic to AWS Backup had to traverse public endpoints. With this release, you can now use AWS PrivateLink for AWS Backup storage endpoints, ensuring your SAP HANA workloads on EC2 maintain end-to-end private connectivity for both application traffic and backup data. This helps organizations subject to HIPAA, EU/US Privacy Shield, and PCI DSS regulations implement fully private data protection strategies.
This feature is available in all AWS Regions where AWS Backup supports SAP HANA databases on EC2. To get started, update your Backint agent and add the backup-storage VPC endpoint (VPCE) to your VPC.
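Creating the interface endpoint can be sketched as follows. The service name shown is an assumption (check the AWS Backup documentation for the exact name in your Region), and the VPC and subnet IDs are placeholders:

```python
# Sketch: creating a backup-storage VPC endpoint so SAP HANA backup traffic
# stays on the private network. The ServiceName is an assumed value for
# illustration; VPC/subnet IDs are placeholders.
create_endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.eu-central-1.backup-storage",  # assumed name
    "SubnetIds": ["subnet-0123456789abcdef0"],
    "PrivateDnsEnabled": True,
}

# With boto3 installed and credentials configured:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_vpc_endpoint(**create_endpoint_params)
```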
Source: aws.amazon.com

Amazon EC2 M7i instances are now available in the Israel (Tel Aviv) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Israel (Tel Aviv) region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers. M7i instances deliver up to 15% better price-performance compared to M6i instances. M7i instances are a great choice for workloads that need the largest instance sizes or continuous high CPU usage, such as gaming servers, CPU-based machine learning (ML), and video streaming. M7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare metal sizes support built-in Intel accelerators (Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology), which facilitate efficient offload and acceleration of data operations and help optimize performance for workloads. To learn more, visit Amazon EC2 M7i Instances. To get started, see the AWS Management Console.
Source: aws.amazon.com

Announcing new high performance computing Amazon EC2 Hpc8a instances

AWS announces Amazon EC2 Hpc8a instances, the next generation of high performance computing (HPC) optimized instances, powered by 5th Gen AMD EPYC processors (formerly code named Turin). With a maximum frequency of 4.5 GHz, Hpc8a instances deliver up to 40% higher performance and up to 25% better price-performance compared to Hpc7a instances, helping customers accelerate compute-intensive workloads while optimizing costs.

Built on the latest sixth-generation AWS Nitro Cards, Hpc8a instances are designed for compute-intensive, latency-sensitive HPC workloads. They are ideal for tightly coupled applications such as computational fluid dynamics (CFD), weather forecasting, explicit finite element analysis (FEA), and multiphysics simulations that require fast inter-node communication and consistent high performance. Hpc8a instances feature 192 cores, 768 GiB of memory, and 300 Gbps of Elastic Fabric Adapter (EFA) network bandwidth, enabling fast, low-latency cluster scaling for large-scale HPC workloads. Compared to Hpc7a instances, Hpc8a instances also provide up to 42% higher memory bandwidth, further improving performance for memory-intensive simulations and scientific computing workloads.

Hpc8a instances are available today in US East (Ohio) and Europe (Stockholm). Customers can purchase Hpc8a instances via Savings Plans or On-Demand Instances. To get started, sign in to the AWS Management Console. For more information, visit the Amazon EC2 Hpc8a instance page or the AWS News Blog.
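Tightly coupled workloads like those above are typically launched into a cluster placement group with EFA enabled. A minimal parameter sketch follows; the AMI and subnet IDs are placeholders, the size name is assumed, and the exact network-interface settings should be checked against the EFA documentation:

```python
# Sketch: launching Hpc8a instances into a cluster placement group with an
# EFA network interface for low-latency inter-node communication.
# IDs are placeholders and "hpc8a.96xlarge" is an assumed size name.
placement_group = {"GroupName": "hpc8a-cluster", "Strategy": "cluster"}

run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "hpc8a.96xlarge",  # assumed size for illustration
    "MinCount": 2,
    "MaxCount": 2,
    "Placement": {"GroupName": placement_group["GroupName"]},
    "NetworkInterfaces": [
        {
            "DeviceIndex": 0,
            "InterfaceType": "efa",  # enable Elastic Fabric Adapter
            "SubnetId": "subnet-0123456789abcdef0",
        }
    ],
}

# With boto3 and credentials configured:
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-2")  # US East (Ohio)
# ec2.create_placement_group(**placement_group)
# ec2.run_instances(**run_instances_params)
```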
Source: aws.amazon.com

AWS HealthImaging launches additional metrics for monitoring data stores

AWS HealthImaging has launched additional metrics through Amazon CloudWatch that enable monitoring of storage at the account and data store levels. These new metrics help customers better understand their medical imaging storage and growth trends over time. HealthImaging now provides customers with granular CloudWatch metrics to monitor their data stores: customers can track storage by volume, number of image sets, and the number of DICOM studies, series, and instances. These metrics provide the insights needed to manage both single-tenant and multi-tenant workloads at petabyte scale. To learn more, visit Using Amazon CloudWatch with HealthImaging.

AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers and their software partners to store, analyze, and share medical images. AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland).
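Querying one of these storage metrics can be sketched as follows. The namespace, metric name, and dimension name shown are assumptions for illustration; the authoritative names are in "Using Amazon CloudWatch with HealthImaging":

```python
import datetime

# Sketch: a CloudWatch GetMetricStatistics query for a HealthImaging data
# store. Namespace, MetricName, and the dimension name are assumed values.
now = datetime.datetime.now(datetime.timezone.utc)
metric_query = {
    "Namespace": "AWS/HealthImaging",  # assumed namespace
    "MetricName": "ImageSetCount",     # assumed metric name
    "Dimensions": [{"Name": "DatastoreId", "Value": "<your-datastore-id>"}],
    "StartTime": now - datetime.timedelta(days=7),
    "EndTime": now,
    "Period": 86400,  # one data point per day
    "Statistics": ["Maximum"],
}

# With boto3 and credentials configured:
# import boto3
# cw = boto3.client("cloudwatch", region_name="us-east-1")
# print(cw.get_metric_statistics(**metric_query)["Datapoints"])
```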
Source: aws.amazon.com

Amazon EC2 High Memory U7i instances now available in additional regions

Amazon EC2 High Memory instances are now available in new regions: U7i-6tb.112xlarge instances in South America (Sao Paulo) and Europe (Milan), U7i-12tb.224xlarge instances in AWS GovCloud (US-East), and U7in-16tb.224xlarge instances in Europe (London). U7i instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-6tb instances offer 6 TiB of DDR5 memory, U7i-12tb instances offer 12 TiB, and U7in-16tb instances offer 16 TiB, enabling customers to scale transaction processing throughput in a fast-growing data environment. U7i-6tb instances offer 448 vCPUs, while U7i-12tb and U7in-16tb instances offer 896 vCPUs. All three sizes support up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth; U7i-6tb and U7i-12tb instances deliver up to 100 Gbps of network bandwidth, while U7in-16tb instances deliver up to 200 Gbps of network bandwidth for faster data loading and backups. All U7i instances support ENA Express.
U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.
Source: aws.amazon.com

Amazon EC2 supports nested virtualization on virtual Amazon EC2 instances

Starting today, customers can create nested environments within virtualized Amazon EC2 instances. Previously, customers could only create and manage virtual machines inside bare metal EC2 instances. With this launch, customers can create nested virtual machines by running KVM or Hyper-V on virtual EC2 instances. Customers can leverage this capability for use cases such as running emulators for mobile applications, simulating in-vehicle hardware for automobiles, and running Windows Subsystem for Linux on Windows workstations.
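Before installing KVM on a guest, it can be useful to confirm that hardware virtualization extensions are exposed to the instance. A small, Linux-only sketch (on other platforms /proc/cpuinfo does not exist, and the function simply reports no flags):

```python
import re

# Sketch: detect hardware virtualization extensions visible to the guest,
# a prerequisite for running KVM inside a nested-capable EC2 instance.
def virtualization_flags():
    try:
        with open("/proc/cpuinfo") as f:
            cpuinfo = f.read()
    except FileNotFoundError:
        return []  # not Linux, or /proc not mounted
    # vmx = Intel VT-x, svm = AMD-V
    return sorted(set(re.findall(r"\b(vmx|svm)\b", cpuinfo)))

if __name__ == "__main__":
    flags = virtualization_flags()
    print("virtualization extensions:", flags or "none visible")
```

An empty result means the CPU flags are not exposed to the guest, in which case KVM-based nesting will not work on that instance.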
 
Source: aws.amazon.com