Announcing new metal sizes for Amazon EC2 M8gn and M8gb instances

Today, AWS announces the general availability of metal-24xl and metal-48xl sizes for Amazon Elastic Compute Cloud (Amazon EC2) M8gn and M8gb instances. These instances are powered by AWS Graviton4 processors, which deliver up to 30% better compute performance than AWS Graviton3 processors. M8gn instances feature the latest 6th generation AWS Nitro Cards and offer up to 600 Gbps of network bandwidth, the highest among network-optimized EC2 instances. M8gb instances offer up to 300 Gbps of EBS bandwidth, providing higher EBS performance than equivalently sized Graviton4-based instances.
M8gn and M8gb instances offer sizes up to 48xlarge and metal-48xl, with up to 768 GiB of memory. M8gn instances offer up to 600 Gbps of networking bandwidth and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS), making them ideal for network-intensive workloads such as high-performance file systems, distributed web-scale in-memory caches, caching fleets, real-time big data analytics, and telco applications such as the 5G User Plane Function (UPF). M8gb instances offer up to 300 Gbps of EBS bandwidth and up to 400 Gbps of networking bandwidth, making them ideal for workloads requiring high block storage performance, such as high-performance databases and NoSQL databases.
M8gn and M8gb instances support Elastic Fabric Adapter (EFA) networking on 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes. EFA networking enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters.
The new metal-24xl and metal-48xl sizes are available in the AWS US East (N. Virginia) Region.
To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, use the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the AWS SDKs.
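As a minimal sketch of launching one of the new bare-metal sizes programmatically, the helper below builds a RunInstances parameter set for boto3. The AMI and subnet IDs are caller-supplied placeholders, and the restriction to the two metal sizes simply mirrors this announcement.

```python
def metal_launch_params(ami_id: str, subnet_id: str, size: str = "48xl") -> dict:
    """Build EC2 RunInstances parameters for an M8gn bare-metal size.

    Per the announcement, the available bare-metal sizes are
    metal-24xl and metal-48xl.
    """
    if size not in ("24xl", "48xl"):
        raise ValueError("M8gn bare-metal sizes are metal-24xl and metal-48xl")
    return {
        "ImageId": ami_id,                      # must be an arm64 (Graviton) AMI
        "InstanceType": f"m8gn.metal-{size}",   # e.g. m8gn.metal-48xl
        "SubnetId": subnet_id,
        "MinCount": 1,
        "MaxCount": 1,
    }

# With AWS credentials configured, the dict can be passed straight to boto3:
#   import boto3
#   boto3.client("ec2").run_instances(**metal_launch_params("ami-...", "subnet-..."))
```

The same parameter shape works for the m8gb.metal-* sizes by swapping the instance family in the type string.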
Source: aws.amazon.com

Amazon RDS Snapshot Export to S3 now available in AWS GovCloud (US) Regions

Amazon RDS Snapshot Export to S3 is now available in the AWS GovCloud (US) Regions, enabling you to export snapshot data in Apache Parquet format for analytics, data retention, and machine learning use cases.

Snapshot Export to S3 supports all DB snapshot types (manual, automated system, and AWS Backup snapshots) and runs directly on the snapshot without impacting database performance. The exported data in Apache Parquet format can be analyzed using other AWS services such as Amazon Athena, Amazon SageMaker, or Amazon Redshift Spectrum, or with big data processing frameworks such as Apache Spark.

You can create a snapshot export with just a few clicks in the Amazon RDS Management Console or by using the AWS SDK or CLI. Snapshot Export to S3 is supported for Amazon Aurora PostgreSQL-Compatible Edition, Amazon Aurora MySQL-Compatible Edition, Amazon RDS for PostgreSQL, Amazon RDS for MySQL, and Amazon RDS for MariaDB snapshots. For more information, including instructions on getting started, read the Aurora documentation or the Amazon RDS documentation.
Source: aws.amazon.com

AWS Observability now available as a Kiro power

Today, AWS announces AWS Observability as a Kiro power, enabling developers and operators to investigate infrastructure and application health issues faster with AI agent-assisted workflows in Kiro. Kiro Powers is a repository of curated and pre-packaged Model Context Protocol (MCP) servers, steering files, and hooks validated by Kiro partners to accelerate specialized software development and deployment use cases.

The AWS Observability power packages four specialized MCP servers with targeted observability guidance: the CloudWatch MCP server for observability data; the Application Signals MCP server for application performance monitoring; the CloudTrail MCP server for security analysis and compliance; and the AWS Documentation MCP server for contextual reference access. This unified platform gives Kiro agents instant context for comprehensive workflows including alarm response, anomaly detection, distributed tracing, SLO compliance monitoring, and security investigation. Additionally, the power includes automated gap analysis that helps you identify and fix missing instrumentation.

With the AWS Observability power, developers can now accelerate troubleshooting their distributed applications and infrastructure in minutes, directly in their IDE. The power addresses two critical needs: reducing mean time to resolution (MTTR) for active incidents and proactively improving your observability stack. For faster incident response, when investigating an active alarm, the power dynamically loads relevant guidance and operational signals so AI agents receive only the context needed for the specific troubleshooting task at hand. For stack improvement, the automated gap analysis examines your code to identify missing instrumentation patterns—such as unlogged errors, missing correlation IDs, or absent distributed tracing—and provides actionable recommendations.
The power includes eight comprehensive steering guides covering incident response, alerting, performance monitoring, security auditing, and gap analysis. The AWS Observability power is available for one-click installation within the Kiro IDE and on the Kiro powers webpage in all AWS Regions, with each underlying MCP server functional based on regional support of the corresponding AWS service. To learn more about the AWS observability MCP servers, visit our documentation.
Source: aws.amazon.com

AWS Compute Optimizer now applies AWS-generated tags to EBS snapshots created during automation

AWS Compute Optimizer now makes it easier to identify the snapshots it creates when snapshotting and deleting unattached Amazon Elastic Block Store (EBS) volumes, by automatically applying an AWS-generated tag at creation time. This enhancement improves visibility and tracking of EBS snapshots created through Compute Optimizer Automation.
When Compute Optimizer creates a snapshot before deleting an unattached EBS volume—whether initiated through manual actions or automation rules—the snapshot now receives the tag aws:compute-optimizer:automation-event-id with a tag value that links the snapshot to the unique identifier of the automation event that created it. This allows you to easily identify, track, and manage snapshots created through the automated optimization process, helping you maintain better governance over your backup resources and understand the source of snapshots in your environment.
This capability is available in all AWS Regions where AWS Compute Optimizer Automation is available. To get started with automated optimization, go to the AWS Compute Optimizer console or visit the user guide documentation.
Source: aws.amazon.com

Amazon Bedrock now supports server-side tool execution with AgentCore Gateway

Amazon Bedrock now enables server-side tool execution through Amazon Bedrock AgentCore Gateway integration with the Responses API. Customers can connect their AgentCore Gateway tools to Amazon Bedrock models, enabling server-side tool execution without client-side orchestration.
With this launch, customers can specify an AgentCore Gateway ARN as a tool connector in Responses API requests. Amazon Bedrock automatically discovers available tools from the gateway, presents them to the model during inference, and executes tool calls server-side when the model selects them, all within a single API call. This eliminates the need for customers to build and maintain client-side tool orchestration loops, reducing application complexity and latency for agentic workflows. Customers retain full control over tool access through their existing AgentCore Gateway configurations and AWS IAM permissions.
Server-side tool execution with AgentCore Gateway supports all models available through the Amazon Bedrock Responses API. Customers define tools using the MCP server connector type with their gateway ARN, and Amazon Bedrock handles tool discovery, model-driven tool selection, execution, and result injection automatically. Multiple tool calls within a single conversation turn are supported, and tool results are streamed back to the client in real time.
This capability is generally available in all AWS Regions where both Amazon Bedrock’s Responses API and Amazon Bedrock AgentCore Gateway are available. To get started, visit the Amazon Bedrock documentation or the Amazon Bedrock console. For more information about Amazon Bedrock AgentCore Gateway, see the AgentCore documentation.
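The announcement describes the request shape only at a high level, so the sketch below is illustrative rather than a documented schema: the connector type ("mcp") and the gateway ARN field name ("server_arn") are assumptions based on the description of the MCP server connector type, and should be checked against the Amazon Bedrock Responses API documentation before use.

```python
def responses_request(model_id: str, prompt: str, gateway_arn: str) -> dict:
    """Illustrative Responses API request body with an AgentCore Gateway
    tool connector.

    NOTE: "type": "mcp" and "server_arn" are assumed field names, not a
    confirmed schema; Bedrock discovers the gateway's tools, lets the
    model select them, and executes calls server-side in one API call.
    """
    return {
        "model": model_id,
        "input": prompt,
        "tools": [
            {
                "type": "mcp",               # MCP server connector type (assumed name)
                "server_arn": gateway_arn,   # AgentCore Gateway ARN (assumed name)
            }
        ],
    }
```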
Source: aws.amazon.com

Amazon EKS Node Monitoring Agent is now open source

Amazon Elastic Kubernetes Service (Amazon EKS) Node Monitoring Agent is now open source. You can access the Amazon EKS Node Monitoring Agent source code and contribute to its development on GitHub.

Running workloads reliably in Kubernetes clusters can be challenging. Cluster administrators often have to resort to manual methods of monitoring and repairing degraded nodes in their clusters. The Amazon EKS Node Monitoring Agent simplifies this process by automatically monitoring and publishing node-level system, storage, networking, and accelerator issues as node conditions, which are used by Amazon EKS for automatic node repair. With the Amazon EKS Node Monitoring Agent’s source code available on GitHub, you now have visibility into the agent’s implementation, can customize it to fit your requirements, and can contribute directly to its ongoing development.

The Amazon EKS Node Monitoring Agent is included in Amazon EKS Auto Mode and is available as an Amazon EKS add-on in all AWS Regions where Amazon EKS is available. To learn more about the Amazon EKS Node Monitoring Agent and node repair, visit the Amazon EKS documentation.
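Installing the agent as an EKS add-on can be sketched with the boto3 EKS `create_addon` call; the helper below builds its parameters. The add-on name `eks-node-monitoring-agent` matches the published EKS add-on, and the cluster name is a placeholder. On EKS Auto Mode clusters this step is unnecessary, since the agent is already included.

```python
# Published EKS add-on name for the Node Monitoring Agent.
NODE_MONITORING_ADDON = "eks-node-monitoring-agent"

def addon_params(cluster_name: str) -> dict:
    """Build parameters for eks.create_addon to install the
    Node Monitoring Agent on an existing (non-Auto Mode) cluster."""
    return {
        "clusterName": cluster_name,
        "addonName": NODE_MONITORING_ADDON,
    }

# With credentials configured:
#   import boto3
#   boto3.client("eks").create_addon(**addon_params("my-cluster"))
```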
Source: aws.amazon.com

AWS AppConfig integrates with New Relic for automated rollbacks

AWS AppConfig today launched a new integration that enables automated, intelligent rollbacks during feature flag and dynamic configuration deployments using New Relic Workflow Automation. Building on AWS AppConfig’s third-party alert capability, this integration gives teams using New Relic a way to automatically detect degraded application health and trigger rollbacks in seconds, eliminating manual intervention.

When you deploy feature flags using AWS AppConfig’s gradual deployment strategy, the AWS AppConfig New Relic Extension continuously monitors your application health against configured alert conditions. If issues such as increased error rates or elevated latency are detected during a feature flag update and deployment, the New Relic Workflow automatically sends a notification to trigger an immediate rollback, reverting the feature flag to its previous state. This closed-loop automation reduces the time between detection and remediation from minutes to seconds, minimizing customer impact during failed deployments.
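A sketch of the deployment that the extension then monitors: the helper below builds parameters for the boto3 AppConfig `start_deployment` call using a gradual deployment strategy. All IDs are placeholders; the New Relic extension watches the deployment and, on a triggered alert condition, initiates the rollback.

```python
def gradual_deployment_params(app_id: str, env_id: str, profile_id: str,
                              version: str, strategy_id: str) -> dict:
    """Build parameters for appconfig.start_deployment (boto3 API names).

    strategy_id should reference a gradual deployment strategy (for
    example, rolling the flag out a percentage at a time) so the
    New Relic extension has a monitoring window before full rollout.
    """
    return {
        "ApplicationId": app_id,
        "EnvironmentId": env_id,
        "ConfigurationProfileId": profile_id,   # the feature flag profile
        "ConfigurationVersion": version,        # flag version to deploy
        "DeploymentStrategyId": strategy_id,    # gradual rollout strategy
    }

# With credentials configured:
#   import boto3
#   boto3.client("appconfig").start_deployment(
#       **gradual_deployment_params("abc1234", "def5678", "ghi9012", "1", "jkl3456"))
```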
 
Source: aws.amazon.com

Amazon EC2 M8a instances now available in AWS Europe (Frankfurt) Region

Starting today, general-purpose Amazon EC2 M8a instances are available in the AWS Europe (Frankfurt) Region. M8a instances are powered by 5th Gen AMD EPYC processors (formerly code-named Turin) with a maximum frequency of 4.5 GHz and deliver up to 30% higher performance and up to 19% better price-performance compared to M7a instances. They also deliver 45% more memory bandwidth compared to M7a instances, making them ideal even for latency-sensitive workloads. Gains are larger still for specific workloads: M8a instances are up to 60% faster on the GroovyJVM benchmark and up to 39% faster on the Cassandra benchmark compared to Amazon EC2 M7a instances.

M8a instances are SAP-certified and offer 12 sizes, including 2 bare-metal sizes, allowing customers to precisely match their workload requirements. Built using the latest sixth-generation AWS Nitro Cards, they are ideal for applications that benefit from high performance and high throughput, such as financial applications, gaming, rendering, application servers, simulation modeling, mid-size data stores, application development environments, and caching fleets.

To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 M8a instance page.
Source: aws.amazon.com

Announcing AWS Elemental Inference

AWS Elemental Inference, a fully managed Artificial Intelligence (AI) service that enables broadcasters and streamers to automatically generate vertical content and highlight clips for mobile and social platforms in real time, is now generally available. The service applies AI capabilities to live and on-demand video in parallel with encoding and helps companies and creators reach audiences in any format without requiring AI expertise or dedicated production teams.
With Elemental Inference you can process video once and optimize it everywhere, creating main broadcasts while simultaneously generating vertical versions for TikTok, Instagram Reels, YouTube Shorts, Snapchat, and other mobile platforms in parallel with the live video. For example, sports broadcasters can automatically generate vertical highlight clips during live games and distribute them to social platforms in real time, capturing viral moments as they happen rather than hours later.
The service launches with two AI features: vertical video cropping that transforms live and on-demand landscape broadcasts into mobile-optimized formats, and advanced metadata analysis that identifies key moments to generate highlight clips from live content. Using an agentic AI application that requires no prompts or human-in-the-loop intervention, broadcasters can scale content production without adding manual workflows or production staff—the system automatically adapts content for each platform. In beta testing, large media companies achieved 34% or more savings on AI-powered live video workflows compared to using multiple point solutions.
AWS Elemental Inference is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), and Europe (Ireland).
For more information, visit the AWS News Blog or explore the AWS Elemental Inference documentation.
Source: aws.amazon.com

AWS IAM Policy Autopilot is now available as a Kiro Power

AWS IAM Policy Autopilot, the open source static code analysis tool launched at re:Invent 2025, is now available as a Kiro power to bring policy expertise to agentic AI development. This tool helps developers quickly create baseline AWS IAM policies that can be refined as applications evolve, eliminating the need for manual IAM policy creation.
The Kiro power delivers significant benefits through one-click installation directly from the Kiro IDE and web interface, removing the need for manual MCP server configuration. This streamlined workflow enables faster policy creation and integrates seamlessly into AI-assisted development environments. Key use cases include rapid prototyping of AWS applications requiring IAM policies, baseline policy creation for new AWS projects, and enhanced productivity within IDE environments where developers can generate policies without leaving their coding workflow.
To learn more about AWS IAM Policy Autopilot and access the integration, visit the AWS IAM Policy Autopilot GitHub repository. To learn more about Kiro powers, visit the Kiro powers page. 
Source: aws.amazon.com