NVIDIA Nemotron 3 Super now available on Amazon Bedrock

Amazon Bedrock now supports NVIDIA Nemotron 3 Super, an open hybrid Mixture-of-Experts (MoE) model designed for complex multi-agent applications. Built for agentic workloads, Nemotron 3 Super delivers fast, cost-efficient inference, enabling AI agents to maintain focus and accuracy across long, multi-step tasks without losing context. Fully open with weights, datasets, and recipes, the model supports easy customization and secure deployment, making it well suited for enterprises, startups, and individual developers building multi-agent workflows and advanced reasoning applications.
Amazon Bedrock gives customers access to Nemotron 3 Super through a single, fully managed API — with no infrastructure to provision or models to host. Bedrock’s serverless inference, built-in security controls, and compatibility with OpenAI API specifications make it easy to integrate Nemotron 3 Super into existing workflows and deploy at production scale with confidence.
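Because Bedrock exposes OpenAI-compatible Chat Completions endpoints, calling Nemotron 3 Super can look like the following standard-library-only sketch. The model ID "nvidia.nemotron-3-super" is an assumption; check the Bedrock model catalog for the exact identifier and Region availability.

```python
# Minimal sketch of calling Nemotron 3 Super through Bedrock's
# OpenAI-compatible Chat Completions endpoint, using only the standard
# library. The model ID "nvidia.nemotron-3-super" is an assumption --
# look up the exact identifier in the Bedrock model catalog.
import json
import urllib.request

def chat_endpoint(region: str) -> str:
    # Bedrock serves its OpenAI-compatible routes under /openai/v1
    return f"https://bedrock-runtime.{region}.amazonaws.com/openai/v1/chat/completions"

def build_request(prompt: str, model: str = "nvidia.nemotron-3-super") -> dict:
    # Standard OpenAI-style chat payload; no Bedrock-specific fields needed
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(prompt: str, region: str, api_key: str) -> str:
    # api_key is a Bedrock API key (bearer token), not an OpenAI key
    req = urllib.request.Request(
        chat_endpoint(region),
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Since the endpoint follows the OpenAI API specification, an existing OpenAI SDK client can also be pointed at the same base URL instead of hand-rolling HTTP requests.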
NVIDIA Nemotron 3 Super is now available in Amazon Bedrock across select AWS Regions. For the full list of available AWS Regions, refer to the documentation. To learn more and get started, visit the Amazon Bedrock console or the service documentation. To get started with the Amazon Bedrock OpenAI API-compatible service endpoints, see the documentation.
Source: aws.amazon.com

MiniMax M2.5 and GLM 5 models now available on Amazon Bedrock

Amazon Bedrock expands model selection for customers by adding support for GLM 5 and MiniMax M2.5. GLM 5 is a frontier-class, general-purpose large language model optimized for complex systems engineering and long-horizon agentic tasks. It builds on the GLM 4.5 agent-centric lineage and is designed to support multi-step reasoning, math (including AIME-style benchmarks), advanced coding, and tool-augmented workflows, with long context support suitable for sophisticated agents and enterprise applications. MiniMax M2.5 is an agent-native frontier model trained explicitly to reason efficiently, decompose tasks optimally, and complete complex workflows under real-world time and cost constraints. It achieves task completion speeds comparable to or faster than leading proprietary frontier models by combining high inference throughput with reinforcement learning focused on token-efficient reasoning and better decision-making in agentic scaffolds.
MiniMax M2.5 and GLM 5 are now available in Amazon Bedrock across select AWS Regions. For the full list of available AWS Regions, refer to the documentation.
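As with other Bedrock models, both can be invoked through the Converse API. A minimal sketch follows; the model ID "minimax.m2-5" is a placeholder assumption, so check the Bedrock model catalog for the real identifiers.

```python
# Sketch of invoking one of the newly added models via the Bedrock
# Converse API. The model ID "minimax.m2-5" is a placeholder assumption.
def converse_request(model_id: str, prompt: str) -> dict:
    # Request shape expected by bedrock-runtime's Converse operation
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }

def ask(prompt: str, model_id: str = "minimax.m2-5", region: str = "us-east-1") -> str:
    import boto3  # local import keeps the sketch importable without boto3 installed
    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.converse(**converse_request(model_id, prompt))
    # Converse returns the assistant reply under output.message.content
    return resp["output"]["message"]["content"][0]["text"]
```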
Source: aws.amazon.com

Amazon Inspector expands agentless EC2 scanning and introduces Windows KB-based findings

Amazon Inspector now offers expanded agentless EC2 scanning with enhanced detection coverage, including new support for Windows operating system vulnerability scanning without requiring an agent. Security teams and IT administrators can now detect vulnerabilities across a broader range of software and applications on their EC2 instances — including WordPress, Apache HTTP Server, Python packages, and Ruby gems — as well as Windows OS vulnerabilities, all through agentless scanning. Customers automatically receive findings for newly supported software and applications with no configuration changes required.
Amazon Inspector is also introducing Windows Knowledge Base (KB)-based findings for Windows OS vulnerabilities. Rather than receiving a separate finding for each CVE addressed by a single Microsoft patch, customers now receive a single consolidated KB finding that groups all related CVEs together. Each KB finding surfaces the highest CVSS score, EPSS score, and exploit availability from its constituent CVEs, and includes a direct link to the relevant Microsoft KB article — making it straightforward to understand exactly which patch to apply and why. All existing CVE-based Windows OS findings will automatically transition to KB-based findings, and customers do not need to take any additional action.
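The consolidation described above can be illustrated conceptually. This is not the Inspector API, just a plain-Python sketch of how per-CVE findings roll up into a single KB finding, with illustrative field names.

```python
# Conceptual sketch of KB-based consolidation: group per-CVE findings by
# Microsoft KB ID and surface the highest CVSS score, highest EPSS score,
# and overall exploit availability. Field names here are illustrative,
# not the Inspector finding schema.
def consolidate_by_kb(cve_findings):
    kb_findings = {}
    for f in cve_findings:
        kb = kb_findings.setdefault(f["kb_id"], {
            "kb_id": f["kb_id"],
            "cves": [],
            "cvss": 0.0,
            "epss": 0.0,
            "exploit_available": False,
        })
        kb["cves"].append(f["cve_id"])
        kb["cvss"] = max(kb["cvss"], f["cvss"])
        kb["epss"] = max(kb["epss"], f["epss"])
        kb["exploit_available"] = kb["exploit_available"] or f["exploit_available"]
    return list(kb_findings.values())
```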
Both capabilities are available in all AWS Regions where Amazon Inspector is available. To learn more, visit the Amazon Inspector product page and the Amazon Inspector documentation. 
Source: aws.amazon.com

Amazon EC2 C8a instances now available in the Asia Pacific (Tokyo) region

Starting today, the compute-optimized Amazon EC2 C8a instances are available in the Asia Pacific (Tokyo) Region. C8a instances are powered by 5th Gen AMD EPYC processors (formerly code-named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price performance compared to C7a instances. They also deliver 33% more memory bandwidth than C7a instances, making them well suited for latency-sensitive workloads, and are up to 57% faster for GroovyJVM, enabling better response times for Java-based applications. C8a instances offer 12 sizes, including 2 bare metal sizes, allowing customers to precisely match their workload requirements. Built on the AWS Nitro System, C8a instances are ideal for high-performance, compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.
To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 C8a instance page.
Source: aws.amazon.com

Amazon S3 Access Grants are now available in the AWS Asia Pacific (New Zealand) Region

You can now create Amazon S3 Access Grants in the AWS Asia Pacific (New Zealand) Region.
Amazon S3 Access Grants map identities in directories such as Microsoft Entra ID, or AWS Identity and Access Management (IAM) principals, to datasets in S3. This helps you manage data permissions at scale by automatically granting S3 access to end users based on their corporate identity.
Visit the AWS Region Table for complete regional availability information. To learn more about Amazon S3 Access Grants, visit our product page.
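A grant is created against a registered Access Grants location. The sketch below shows the request shape for the s3control CreateAccessGrant operation as I understand it, with placeholder account, location, and grantee values; verify the parameter set against the CreateAccessGrant API reference.

```python
# Sketch of the s3control CreateAccessGrant request shape: map a directory
# identity to an S3 prefix with a READ/WRITE/READWRITE permission. The
# account ID, location ID, and grantee identifier are placeholders.
def access_grant_params(account_id, location_id, grantee_id, prefix, permission="READ"):
    return {
        "AccountId": account_id,
        "AccessGrantsLocationId": location_id,
        # Narrow the grant to a sub-prefix of the registered location
        "AccessGrantsLocationConfiguration": {"S3SubPrefix": prefix},
        # DIRECTORY_USER / DIRECTORY_GROUP for directory identities, IAM for principals
        "Grantee": {"GranteeType": "DIRECTORY_USER", "GranteeIdentifier": grantee_id},
        "Permission": permission,
    }

def create_grant(region, **kwargs):
    import boto3  # local import keeps the sketch importable without boto3 installed
    s3control = boto3.client("s3control", region_name=region)
    return s3control.create_access_grant(**access_grant_params(**kwargs))
```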
Source: aws.amazon.com

Amazon ECR now supports pull through cache for Chainguard

Amazon Elastic Container Registry (Amazon ECR) pull through cache now supports Chainguard’s registry as an upstream source. With today’s release, customers benefit from the security and availability of Amazon ECR for private Chainguard images. As customers continue to scale their use of Chainguard images, keeping them synchronized with Chainguard’s registry becomes increasingly important. With ECR’s pull through cache feature, customers can keep Chainguard images in sync without additional workflows or tools to manage. Amazon ECR’s pull through cache supports frequent registry syncs, helping to keep container images sourced from Chainguard up to date. Customers can also apply ECR features such as image scanning and lifecycle policies to their cached Chainguard images. Pull through cache for Chainguard is available in all AWS Regions where Amazon ECR pull through cache is supported. To get started, review our documentation.
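Creating the cache rule is a one-time setup. A sketch follows, assuming cgr.dev as Chainguard’s upstream registry host and a Secrets Manager secret holding Chainguard credentials; the account details and secret name are placeholders (ECR expects credential secret names to begin with ecr-pullthroughcache/).

```python
# Sketch: parameters for ecr:CreatePullThroughCacheRule with Chainguard's
# registry (cgr.dev) as the upstream. The secret ARN is a placeholder;
# ECR expects credential secrets named ecr-pullthroughcache/...
def cache_rule_params(prefix, secret_arn):
    return {
        "ecrRepositoryPrefix": prefix,      # cached images appear under <prefix>/...
        "upstreamRegistryUrl": "cgr.dev",   # Chainguard's registry host
        "credentialArn": secret_arn,        # Secrets Manager secret with Chainguard creds
    }

def create_rule(region, prefix, secret_arn):
    import boto3  # local import keeps the sketch importable without boto3 installed
    ecr = boto3.client("ecr", region_name=region)
    return ecr.create_pull_through_cache_rule(**cache_rule_params(prefix, secret_arn))
```

Once the rule exists, pulling an image through the ECR registry path that starts with the chosen prefix populates and refreshes the cached copy.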
Source: aws.amazon.com

Amazon EC2 High Memory U7i-6TB instances now available in Asia Pacific (Malaysia)

Amazon EC2 High Memory U7i instances with 6TB of memory (u7i-6tb.112xlarge) are now available in the AWS Asia Pacific (Malaysia) Region. U7i instances are part of the AWS 7th generation of EC2 instances and are powered by custom 4th Generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-6tb instances offer 6TiB of DDR5 memory, enabling customers to scale transaction processing throughput in fast-growing data environments.
U7i-6tb instances deliver 448 vCPUs with up to 100 Gbps of Amazon EBS bandwidth for faster data loading and backups, 100 Gbps of network bandwidth, and ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.
To learn more about U7i instances, visit the High Memory instances page.
Source: aws.amazon.com

AWS Blu Insights is now AWS Transform for mainframe refactor

AWS Blu Insights capabilities are now available as part of AWS Transform, enabling customers to launch mainframe refactoring projects from the AWS Transform console. This launch unifies all three mainframe modernization patterns (refactor, replatform, and reimagine) within AWS Transform for mainframe. Code transformation is now offered at no cost, replacing the previous lines-of-code based pricing model.
With this launch, you can access AWS Transform for mainframe refactor directly from the AWS Transform console using your existing AWS credentials. The mandatory three-level certification requirement to access the Transformation Center has been removed, lowering the barrier to exploring refactor projects. Self-paced training content remains available within the application for those who want to build deeper knowledge.
AWS Transform for mainframe refactor is available in 18 AWS Regions. In Regions where AWS Transform for mainframe is not yet available, you can continue to access the service through the AWS Mainframe Modernization console. To get started, visit the AWS Transform for mainframe refactor user guide.
Source: aws.amazon.com

Amazon SageMaker Unified Studio supports aggregated view of data lineage

Amazon SageMaker Unified Studio now provides an aggregated view of data lineage, displaying all jobs contributing to your dataset. Previously, SageMaker Unified Studio showed the lineage graph as it existed at a specific point in time, which is useful for troubleshooting and investigating specific data processing events. The aggregated view instead gives you a complete picture of data transformations and dependencies across multiple levels of the lineage graph, helping you understand the full scope of jobs impacting your datasets and quickly identify all upstream sources and downstream consumers.
The aggregated view is the default lineage view in Amazon SageMaker Unified Studio for IAM Identity Center (IdC)-based domains. You can switch to the previous view by toggling the “display in event timestamp order” option. You can also query the lineage graph using the new QueryGraph API, which provides lineage node graphs with metadata and augmented business context. The aggregated view of lineage is available in all existing Amazon SageMaker Unified Studio Regions. For detailed information on how to get started with lineage using these new features, refer to the documentation and API reference.
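Conceptually, the aggregated view answers a reachability query over the lineage graph. The plain-Python sketch below (not the QueryGraph API itself) shows the idea of collecting every upstream source or downstream consumer of a dataset.

```python
from collections import deque

# Conceptual sketch of an aggregated lineage query: given directed edges
# (producer -> consumer), collect every node reachable from a dataset in
# one direction. This only illustrates the idea; the actual QueryGraph
# API returns lineage node graphs with metadata and business context.
def reachable(edges, start, direction="downstream"):
    adj = {}
    for src, dst in edges:
        # Flip edge orientation when walking upstream toward sources
        a, b = (src, dst) if direction == "downstream" else (dst, src)
        adj.setdefault(a, set()).add(b)
    seen, queue = set(), deque([start])
    while queue:
        for nxt in adj.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```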
Source: aws.amazon.com

Amazon Connect voice AI agents now support 13 new languages

Amazon Connect now supports 13 new languages for voice AI agents, bringing the total to 40 language locales. New languages include Arabic (Saudi Arabia), Czech, Danish, Dutch (Belgium), English (Ireland), English (New Zealand), English (Wales), German (Switzerland), Icelandic, Romanian, Spanish (Mexico), Turkish, and Welsh.
Amazon Connect’s agentic self-service capabilities enable AI agents to understand, reason, and take action across voice and digital channels to automate routine and complex customer service tasks across multiple languages.  
To learn more about this feature, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, a complete AI-powered contact center solution delivering personalized customer experiences at scale, visit the Amazon Connect website.
Source: aws.amazon.com