Amazon RDS Blue/Green Deployments now supports Amazon RDS Proxy

Amazon RDS Blue/Green Deployments now supports Amazon RDS Proxy, enabling faster application recovery during switchover by eliminating DNS propagation delays. Blue/Green Deployments create a fully managed staging environment (Green) where you can deploy and test production changes while keeping your current production database (Blue) safe. When ready, you can switch over to the new production environment, and your applications begin accessing it immediately without any configuration changes.

During a Blue/Green Deployment switchover for single-Region configurations, RDS Proxy actively monitors database instances and detects when the Green environment becomes the new production environment. This lets RDS Proxy quickly redirect connections to the Green environment, enabling faster application recovery. You don’t need to modify your drivers or change your existing application setup.

Amazon RDS Blue/Green Deployments with Amazon RDS Proxy is available for Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, Amazon RDS for MySQL, Amazon RDS for PostgreSQL, and Amazon RDS for MariaDB in all commercial AWS Regions where RDS Proxy is available. In a few clicks, you can update your databases using RDS Blue/Green Deployments via the Amazon RDS console or the AWS CLI. To learn more, see the Blue/Green Deployments overview in the Amazon RDS documentation.
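A minimal sketch of driving a switchover with boto3 might look like the following. The deployment identifier and timeout are placeholders; `create_blue_green_deployment` and `switchover_blue_green_deployment` are the documented RDS API operations.

```python
def build_switchover_request(deployment_id: str, timeout_seconds: int = 300) -> dict:
    """Assemble the SwitchoverBlueGreenDeployment parameters.

    A shorter timeout makes the switchover fail fast rather than
    holding writes for an extended period.
    """
    return {
        "BlueGreenDeploymentIdentifier": deployment_id,
        "SwitchoverTimeout": timeout_seconds,
    }


def switch_over(deployment_id: str) -> None:
    """Perform the switchover; requires AWS credentials to actually run."""
    import boto3

    rds = boto3.client("rds")
    # Earlier, the Green environment would have been created with e.g.:
    #   rds.create_blue_green_deployment(
    #       BlueGreenDeploymentName="my-bg-deployment",   # placeholder name
    #       Source="arn:aws:rds:us-east-1:123456789012:db:prod-db")
    rds.switchover_blue_green_deployment(**build_switchover_request(deployment_id))
```

With RDS Proxy in front of the database, applications keep pointing at the proxy endpoint throughout; only the proxy's target changes at switchover.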
Source: aws.amazon.com

Amazon OpenSearch Service supports Managed Prometheus and agent tracing

Amazon OpenSearch Service now provides a unified observability experience that brings together metrics, logs, traces, and AI agent tracing in a single interface. This release introduces native integration with Amazon Managed Service for Prometheus and comprehensive agent tracing capabilities, addressing the dual challenges of prohibitive costs from premium observability platforms and operational complexity from fragmented tooling. Site Reliability Engineers, DevOps Engineers, and Platform Engineering teams can now consolidate their observability stack without costly data duplication or constant context switching between multiple tools.
You can now query Prometheus metrics directly using native PromQL syntax alongside logs and traces in OpenSearch UI’s observability workspace—without duplicating data. Combined with new application monitoring workflows powered by RED metrics (Rate, Errors, Duration) and AI agent tracing using OpenTelemetry GenAI semantic conventions, operations teams can correlate slow traces to application logs, overlay Prometheus metrics on service dashboards, and trace LLM agent execution—all without switching tools. This live query architecture delivers significant cost reduction compared to premium platforms while maintaining operational excellence.
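The RED-style queries described above can be expressed in standard PromQL. The metric and label names below (`http_requests_total`, `service`, and so on) are assumptions for illustration; substitute the series your applications actually emit.

```python
def red_queries(window: str = "5m") -> dict:
    """Sample PromQL for Rate, Errors, and Duration over a time window.

    Metric names are hypothetical; any Prometheus counter/histogram
    following common naming conventions would work the same way.
    """
    return {
        # Rate: requests per second, broken down by service
        "rate": f"sum(rate(http_requests_total[{window}])) by (service)",
        # Errors: fraction of requests returning a 5xx status
        "errors": (
            f'sum(rate(http_requests_total{{status=~"5.."}}[{window}]))'
            f" / sum(rate(http_requests_total[{window}]))"
        ),
        # Duration: 95th-percentile latency from a histogram's buckets
        "duration": (
            f"histogram_quantile(0.95, "
            f"sum(rate(http_request_duration_seconds_bucket[{window}])) by (le))"
        ),
    }
```

Each query string can be pasted into the observability workspace's PromQL input as-is.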
The new unified observability experience is available on OpenSearch UI in 20 AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Asia Pacific (Hong Kong, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm), Canada (Central), and South America (São Paulo).
To learn more, visit the OpenSearch Service observability documentation and direct query documentation.
Source: aws.amazon.com

Amazon Bedrock now supports cost allocation by IAM user and role

Amazon Bedrock now supports cost allocation by IAM principal, such as IAM users and IAM roles, in AWS Cost and Usage Report 2.0 (CUR 2.0) and Cost Explorer. This enables customers to understand and attribute Bedrock model inference costs across users, teams, projects, and applications. With this launch, customers can tag their IAM users and roles with attributes like team, project, or cost center, activate them as cost allocation tags, and analyze Bedrock model inference costs by these tags in Cost Explorer or at the line-item level in CUR 2.0.

To get started, tag your IAM users and roles and activate the tags as cost allocation tags in the Billing and Cost Management console. Then create a CUR 2.0 data export and select “Include caller identity (IAM principal) allocation data”, or filter by tags in Cost Explorer. This feature is available in all AWS commercial Regions where Amazon Bedrock is available. To learn more, see the Using IAM principal for Cost Allocation documentation. To get started with Amazon Bedrock, visit the Amazon Bedrock documentation.
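A hedged sketch of the two steps, tagging a role and then querying Cost Explorer grouped by that tag, might look like this with boto3. The role name, tag values, and dates are placeholders, and the `SERVICE` filter value `"Amazon Bedrock"` should be verified against your own billing data.

```python
def bedrock_cost_by_tag_request(tag_key: str, start: str, end: str) -> dict:
    """Build a GetCostAndUsage request grouping Bedrock spend by a
    cost allocation tag (the tag must already be activated in the
    Billing and Cost Management console)."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "Filter": {"Dimensions": {"Key": "SERVICE", "Values": ["Amazon Bedrock"]}},
        "GroupBy": [{"Type": "TAG", "Key": tag_key}],
    }


def tag_and_query(tag_key: str) -> dict:
    """End-to-end example; requires AWS credentials to actually run."""
    import boto3

    iam = boto3.client("iam")
    # Tag the role so its Bedrock usage can be attributed.
    iam.tag_role(
        RoleName="my-app-role",  # placeholder role name
        Tags=[{"Key": tag_key, "Value": "search-team"}],
    )

    ce = boto3.client("ce")
    return ce.get_cost_and_usage(
        **bedrock_cost_by_tag_request(tag_key, "2025-01-01", "2025-02-01")
    )
```

Cost Explorer returns one result group per tag value, so each team or project shows up as its own line.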
Source: aws.amazon.com

Amazon Timestream for InfluxDB Now Supports Customer-Defined Maintenance Windows

Amazon Timestream for InfluxDB now supports customer-defined maintenance windows, giving you control over when routine maintenance is performed on your InfluxDB databases. This feature is available for both InfluxDB 2 instances and InfluxDB 3 clusters across all supported editions.

With this launch, you can specify a weekly maintenance window using a day-and-time format in your preferred timezone. Timestream for InfluxDB supports IANA timezone identifiers such as America/New_York, Europe/London, and Asia/Tokyo, and automatically handles Daylight Saving Time transitions, so you don’t need to manually adjust your schedule. If you don’t specify a maintenance window, the service continues to manage maintenance timing automatically. You can set or update your preferred maintenance window when creating or modifying a resource using the Amazon Timestream for InfluxDB console, AWS CLI, or AWS SDKs.

You can use customer-defined maintenance windows in all Regions where Timestream for InfluxDB is offered. To get started, visit the Amazon Timestream for InfluxDB console. For more information, see the Amazon Timestream for InfluxDB documentation and pricing page.
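The DST behavior described above is exactly what IANA timezone identifiers buy you, and it can be demonstrated with Python's standard-library `zoneinfo`. The window spec's key names below are illustrative, not the Timestream for InfluxDB API shape.

```python
from datetime import datetime
from zoneinfo import ZoneInfo


def maintenance_window(day: str, hhmm: str, tz: str) -> dict:
    """Illustrative day-and-time window spec (field names are assumptions,
    not the actual API shape). Raises if tz is not a valid IANA identifier."""
    ZoneInfo(tz)  # validation: unknown identifiers raise ZoneInfoNotFoundError
    assert day in {"mon", "tue", "wed", "thu", "fri", "sat", "sun"}
    return {"day": day, "time": hhmm, "timezone": tz}


def utc_hour(tz: str, month: int, local_hour: int = 3) -> int:
    """UTC hour corresponding to a fixed local hour on the 1st of a month.

    Because the local wall-clock time is fixed, the UTC hour shifts when
    the zone enters or leaves Daylight Saving Time, which is why a
    timezone-aware schedule never needs manual adjustment.
    """
    local = datetime(2025, month, 1, local_hour, tzinfo=ZoneInfo(tz))
    return local.astimezone(ZoneInfo("UTC")).hour
```

For example, 03:00 in America/New_York falls at 08:00 UTC in January (EST, UTC-5) but 07:00 UTC in July (EDT, UTC-4); the wall-clock schedule stays put while the underlying UTC time moves.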
Source: aws.amazon.com

Amazon EC2 Capacity Manager now supports tag-based dimensions

Starting today, Amazon EC2 Capacity Manager supports tag-based dimensions, enabling you to use tags from your EC2 resources to group and filter capacity metrics. EC2 Capacity Manager helps you monitor and optimize capacity usage across On-Demand Instances, Spot Instances, and Capacity Reservations. This launch also introduces Account Name as a new built-in dimension.
You can activate up to five custom tag keys, such as environment, team, or cost-center, and use them alongside built-in dimensions like Region, Instance Type, and Availability Zone. This lets you group and filter capacity metrics by tag values in the console and APIs, and include tag data as additional columns in newly created S3 data exports. Capacity Manager also includes four Capacity Manager-provided tags by default: EC2 Auto Scaling group name, EKS cluster name, EKS Kubernetes node pool, and Karpenter node pool. The new Account Name dimension makes it easier to identify accounts when analyzing cross-account capacity data across your organization.
This feature is available in all AWS Regions where EC2 Capacity Manager is available. To get started, navigate to the Settings tab in Capacity Manager and choose Manage tag keys, or use the AWS CLI. To learn more, see Managing monitored tag keys in the Amazon EC2 User Guide. For more information about Amazon EC2 Capacity Manager, visit the EC2 Capacity Manager documentation.
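The prerequisite for tag-based dimensions is that your EC2 resources actually carry the tags; a small boto3 sketch of that step follows. The key names mirror the examples in the announcement, and the instance IDs are placeholders; activating the keys as Capacity Manager dimensions is then done under Settings, Manage tag keys.

```python
def capacity_tags(env: str, team: str) -> list:
    """Tags to activate later as Capacity Manager tag-based dimensions
    (up to five custom keys are supported)."""
    return [
        {"Key": "environment", "Value": env},
        {"Key": "team", "Value": team},
    ]


def tag_instances(instance_ids: list, env: str, team: str) -> None:
    """Apply the tags to running instances; requires AWS credentials."""
    import boto3

    ec2 = boto3.client("ec2")
    # create_tags accepts any EC2 resource IDs (instances, reservations, ...)
    ec2.create_tags(Resources=instance_ids, Tags=capacity_tags(env, team))
```

Once the keys are activated in Capacity Manager, the tag values appear as grouping and filtering dimensions alongside Region and Instance Type.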
Source: aws.amazon.com

AWS Marketplace announces the Discovery API for programmatic access to catalog data

Today, AWS Marketplace announces the Discovery API, giving you programmatic access to product and pricing information across the AWS Marketplace catalog — including SaaS, AI agents and tools, AMI, containers, and machine learning models. With the Discovery API, buyers can embed catalog data into internal portals, enrich procurement tools with current pricing and offer terms, and streamline vendor evaluation workflows. Sellers and channel partners can surface product listings, public pricing, and private offer details directly within their own websites and storefronts — helping customers browse, compare, and move to purchase without leaving the partner experience. The API provides access to product descriptions, categories, pricing across public and private offers, and offer terms, so you can build experiences tailored to how your organization discovers and procures software through AWS Marketplace. The AWS Marketplace Discovery API is available in US East (N. Virginia), US West (Oregon), and Europe (Ireland). You can get started by configuring IAM permissions for your AWS account and calling the API through the AWS SDK. For more information, see the AWS Marketplace Discovery API Reference.
Source: aws.amazon.com

AWS Agent Registry for centralized agent discovery and governance is now available in Preview

AWS Agent Registry, available through Amazon Bedrock AgentCore, is now in preview: a private, governed catalog and discovery layer for agents, tools, skills, MCP servers, and custom resources within your organization. It gives teams complete visibility into their AI landscape, enabling them to discover existing agents and tools instead of rebuilding capabilities that already exist. The registry can be accessed via the AgentCore console UI, APIs (AWS CLI, AWS SDK), or as an MCP server that builders can query and invoke directly from their IDEs. The registry supports both IAM-based and OAuth (custom JWT)-based access.
Teams can register resources manually through the console or API, or use URL-based discovery, which automatically retrieves metadata such as tool schemas and capability descriptions from a live MCP server or agent endpoint. Records go through an approval workflow in which administrators approve records before they become discoverable, and administrators can plug the registry into their existing approval workflows to enforce governance policies. AWS CloudTrail provides complete audit trails of all registry access and administrative actions, supporting compliance and security oversight. For discovery, the registry offers both semantic and keyword search, so developers can quickly find agents by describing their use case in natural language.
AWS Agent Registry (preview) is available in five AWS Regions where AgentCore is available: US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Sydney), Europe (Ireland), and US East (N. Virginia). Learn more about the registry through the blog, and deep dive using the documentation.
Source: aws.amazon.com

Amazon S3 Lifecycle pauses actions on objects that are unable to replicate

Amazon S3 Lifecycle now prevents expiration and transition actions on objects that failed replication, helping you coordinate replication configuration or permissions changes with the actions defined in your lifecycle rules.
Incorrect permissions or replication configuration can prevent objects from being replicated. With this change, S3 Lifecycle no longer expires or transitions objects that have failed replication, even if they match one of the lifecycle rules that you have defined. Once you have corrected your replication configuration or permissions, you can use S3 Batch Replication to replicate objects that previously failed. After successful replication, S3 Lifecycle will automatically process these objects according to your configured rules.
This change applies automatically to all existing and new S3 Lifecycle configurations, across 37 AWS Regions, including the AWS China and AWS GovCloud (US) Regions. We are in the process of deploying this change and plan to complete the deployment in the coming days. To learn more, visit S3 Lifecycle documentation and S3 Replication troubleshooting documentation.
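To find the objects affected by this behavior, you can inspect the `ReplicationStatus` field that S3 returns from `HeadObject`. A small sketch, with the bucket and key names as placeholders:

```python
def failed_replication(heads: dict) -> list:
    """Filter a mapping of key -> head_object response down to the keys
    whose ReplicationStatus is FAILED; these are the objects S3 Lifecycle
    now leaves in place until replication succeeds."""
    return [
        key
        for key, head in heads.items()
        if head.get("ReplicationStatus") == "FAILED"
    ]


def scan_bucket(bucket: str, keys: list) -> list:
    """Fetch head_object for each key; requires AWS credentials to run."""
    import boto3

    s3 = boto3.client("s3")
    heads = {k: s3.head_object(Bucket=bucket, Key=k) for k in keys}
    return failed_replication(heads)
```

The keys this returns are the candidates to re-drive with S3 Batch Replication once the underlying configuration or permissions issue is fixed.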
Source: aws.amazon.com

Amazon WorkSpaces Advisor now available for AI-powered troubleshooting

Amazon WorkSpaces Advisor is a new AI-powered tool that helps administrators quickly troubleshoot and resolve issues with Amazon WorkSpaces Personal. Using generative AI capabilities, it analyzes WorkSpace configurations, identifies problems, and provides actionable recommendations to restore service and optimize performance.
WorkSpaces Advisor streamlines administrative workflows by reducing the time needed to investigate and fix common issues. Administrators can leverage AI-driven insights to proactively maintain their virtual desktop infrastructure, improve end-user experience, and minimize downtime across their WorkSpaces.
Amazon WorkSpaces Advisor is now available in all AWS commercial Regions where Amazon WorkSpaces is offered. Visit the Amazon WorkSpaces console to access WorkSpaces Advisor and begin troubleshooting your environment. Learn more in the feature blog and user guide.
Source: aws.amazon.com

Amazon EKS managed node groups now support EC2 Auto Scaling warm pools

Amazon Elastic Kubernetes Service (Amazon EKS) managed node groups now support Auto Scaling warm pools, enabling you to maintain pre-initialized EC2 instances ready for rapid scale-out. This reduces node provisioning latency for applications with burst traffic patterns, time-sensitive workloads, or long instance boot times due to complex initialization scripts and software dependencies.

With warm pools enabled, your EKS managed node group maintains a pool of instances that have already completed OS initialization, user data execution, and software configuration. When demand increases and the Auto Scaling group scales out, instances transition from the warm pool to active service without repeating the full cold-start sequence. You can configure instances in the warm pool as Stopped (lower cost, longer transition) or Running (higher cost, faster transition). You can also enable reuse on scale-in, which returns instances to the warm pool during scale-down instead of terminating them. Warm pools work with Cluster Autoscaler without requiring any additional configuration.

You can enable warm pools through the EKS API, AWS CLI, AWS Management Console, or AWS CloudFormation by adding a warmPoolConfig to your CreateNodegroup or UpdateNodegroupConfig requests. Existing managed node groups that do not enable warm pools are unaffected. This feature is available in all AWS Regions where Amazon EKS is available, except for the China (Beijing) Region, operated by Sinnet, and the China (Ningxia) Region, operated by NWCD. To get started, see the Amazon EKS managed node groups documentation.
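A sketch of a CreateNodegroup request carrying the warmPoolConfig block the launch describes might look like the following. The inner field names (`status`, `reuseOnScaleIn`, `size`) are assumptions patterned on EC2 Auto Scaling warm pools, and required fields such as subnets and the node IAM role are omitted for brevity; check the EKS API reference for the exact shape.

```python
def nodegroup_with_warm_pool(cluster: str, nodegroup: str) -> dict:
    """Partial CreateNodegroup request with an assumed warmPoolConfig
    shape (hypothetical field names; verify against the EKS API docs)."""
    return {
        "clusterName": cluster,
        "nodegroupName": nodegroup,
        "scalingConfig": {"minSize": 2, "maxSize": 10, "desiredSize": 2},
        "warmPoolConfig": {
            "status": "Stopped",     # Stopped = lower cost, Running = faster
            "reuseOnScaleIn": True,  # return nodes to the pool on scale-down
            "size": 3,               # assumed field: pre-initialized instances
        },
    }


def create(cluster: str, nodegroup: str) -> None:
    """Submit the request; requires credentials plus the omitted
    required fields (subnets, nodeRole, ...) to actually succeed."""
    import boto3

    eks = boto3.client("eks")
    eks.create_nodegroup(**nodegroup_with_warm_pool(cluster, nodegroup))
```

The same warmPoolConfig block would go into an UpdateNodegroupConfig request to enable warm pools on an existing node group.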
Source: aws.amazon.com