AWS Deadline Cloud supports monitor creation in multiple regions

Today, AWS Deadline Cloud announces support for creating monitors in multiple AWS Regions without additional configuration of your IAM Identity Center instance. AWS Deadline Cloud is a fully managed service that helps creative teams manage and scale their rendering workloads in the cloud.

You can now deploy render farms with monitors across multiple Regions without adjusting your existing IAM Identity Center configuration. This lets you operate more efficiently by placing rendering resources in the Regions closest to your artists and studios worldwide, and run and compare workloads across Regions to optimize your rendering strategy or diversify your instance types. Deadline Cloud automatically routes authentication requests to the IAM Identity Center instance in its primary Region, so your identity data remains in place without replication and your identity management setup needs no changes.

To learn more, see Getting Started with Deadline Cloud in the AWS Deadline Cloud User Guide.
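As a rough sketch of what creating a per-Region monitor looks like programmatically, the following assembles parameters for Deadline Cloud's CreateMonitor API. The parameter names reflect my reading of the API and should be checked against the API reference; all ARNs, names, and the Region are placeholder assumptions, and the actual boto3 call is left commented so the sketch runs without AWS credentials.

```python
def build_monitor_request(display_name: str, subdomain: str,
                          role_arn: str, idc_instance_arn: str) -> dict:
    """Assemble assumed CreateMonitor parameters; one monitor per Region."""
    return {
        "displayName": display_name,
        "subdomain": subdomain,
        "roleArn": role_arn,
        "identityCenterInstanceArn": idc_instance_arn,
    }

# Per this launch, the same Identity Center instance (in its primary Region)
# can back monitors created in several Regions; values below are examples.
request = build_monitor_request(
    "eu-render-farm", "eu-render",
    "arn:aws:iam::111122223333:role/DeadlineMonitorRole",
    "arn:aws:sso:::instance/ssoins-EXAMPLE",
)
# boto3.client("deadline", region_name="eu-west-3").create_monitor(**request)
```

Repeating the call with a different `region_name` (but the same Identity Center instance ARN) would mirror the multi-Region setup this announcement describes.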
Source: aws.amazon.com

Amazon CloudWatch pipelines now supports drop and conditional processing

Amazon CloudWatch pipelines now supports conditional processing and a new drop events processor, giving you more control over how your log data is transformed. CloudWatch pipelines is a fully managed service that ingests, transforms, and routes log data to CloudWatch without requiring you to manage infrastructure. Until now, processors applied to all log entries uniformly. With conditional processing, you can define rules that determine when a processor runs and which individual log entries it acts on, so you only transform the data that matters.
Conditional processing is available across 21 processors including Add Entries, Delete Entries, Copy Values, Grok, Rename Key, and more. For each processor, you can set a “run when” condition to skip the entire processor if the condition is not met, or an entry-level condition to control whether each individual action within the processor is applied. The new Drop Events processor lets you filter out unwanted log entries from third-party pipeline connectors based on conditions you define, helping reduce noise and lower costs.
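To make the semantics concrete, here is a minimal local model of the two condition types and the Drop Events processor; this is an illustration of the described behavior, not the service's configuration syntax or API.

```python
def apply_processor(entries, transform, run_when=lambda batch: True,
                    entry_condition=lambda e: True):
    """Apply `transform` per entry; skip the whole processor if run_when fails."""
    if not run_when(entries):
        return entries  # "run when" condition not met: processor is skipped
    # Entry-level condition controls which individual entries are transformed.
    return [transform(e) if entry_condition(e) else e for e in entries]

def drop_events(entries, condition):
    """Filter out entries matching `condition` (the Drop Events behavior)."""
    return [e for e in entries if not condition(e)]

logs = [{"level": "DEBUG", "msg": "x"}, {"level": "ERROR", "msg": "y"}]
kept = drop_events(logs, lambda e: e["level"] == "DEBUG")   # noise removed
tagged = apply_processor(kept, lambda e: {**e, "alert": True},
                         entry_condition=lambda e: e["level"] == "ERROR")
```

The point of the entry-level condition is visible in the last line: only entries satisfying it are transformed, while the rest pass through unchanged.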
Conditional processing and the Drop Events processor are available at no additional cost in all AWS Regions where CloudWatch pipelines is generally available. Standard CloudWatch Logs ingestion and storage rates still apply.
To get started, visit the CloudWatch pipelines page in the Amazon CloudWatch console. To learn more, see the CloudWatch pipelines documentation.
Source: aws.amazon.com

Amazon EC2 X8i instances are now available in Europe (Paris)

Amazon Web Services (AWS) is announcing the general availability of Amazon EC2 X8i instances, next-generation memory-optimized instances powered by custom Intel Xeon 6 processors available only on AWS. X8i instances are SAP-certified and deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. They deliver up to 43% higher performance, 1.5x more memory capacity (up to 6 TB), and 3.3x more memory bandwidth than previous-generation X2i instances.

X8i instances are designed for memory-intensive workloads like SAP HANA, large databases, data analytics, and Electronic Design Automation (EDA). Compared to X2i instances, they offer up to 50% higher SAPS performance, up to 47% faster PostgreSQL performance, 88% faster Memcached performance, and 46% faster AI inference performance.

X8i instances come in 14 sizes, from large to 96xlarge, including two bare metal options. They are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Stockholm), and Europe (Paris). To get started, visit the AWS Management Console. X8i instances can be purchased via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the X8i instances page.
Source: aws.amazon.com

AWS Billing and Cost Management Dashboards Now Supports Scheduled Email Delivery

AWS Billing and Cost Management Dashboards now support scheduled email delivery for your reports. You can now automate report distribution on flexible recurring schedules, eliminating manual compilation work and ensuring financial insights reach decision-makers without requiring console access.
Scheduled email reports enable you to configure daily, weekly, or monthly delivery schedules for your dashboards. Recipients receive emails containing secure links to password-protected PDF reports optimized for offline viewing. Manage recipients through AWS User Notifications, and once configured, reports generate and distribute automatically on your chosen schedule. You can also access these capabilities programmatically through AWS SDKs and CLI tools.
This feature is available at no additional cost in all commercial AWS Regions, excluding AWS China Regions. To get started, open the AWS Billing and Cost Management console, navigate to Dashboards, select a dashboard, and choose ‘Manage email reports’ from the Actions menu. For more information, see the Dashboards user guide and announcement blog post.
Source: aws.amazon.com

Amazon RDS Blue/Green Deployments now supports Amazon RDS Proxy

Amazon RDS Blue/Green Deployments now supports Amazon RDS Proxy, enabling faster application recovery during switchover by eliminating DNS propagation delays. Blue/Green Deployments create a fully managed staging environment (Green) that allows you to deploy and test production changes while keeping your current production database (Blue) safe. When ready, you can switch over to the new production environment, and your applications begin accessing it immediately without any configuration changes.

During a Blue/Green Deployment switchover for single-Region configurations, RDS Proxy actively monitors database instances and detects when the Green environment becomes the new production environment. This allows RDS Proxy to quickly redirect connections to the Green environment, enabling faster application recovery. You don’t need to modify your drivers or change your existing application setup.

Amazon RDS Blue/Green Deployments with Amazon RDS Proxy is available for Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, Amazon RDS for MySQL, Amazon RDS for PostgreSQL, and Amazon RDS for MariaDB in all commercial AWS Regions where RDS Proxy is available. In a few clicks, you can update your databases using RDS Blue/Green Deployments via the Amazon RDS console or the AWS CLI. To learn more, see the Blue/Green Deployments overview in the Amazon RDS documentation.
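For orientation, a hedged sketch of the two RDS calls involved: `create_blue_green_deployment` and `switchover_blue_green_deployment` are real boto3 operations, but the deployment name and source ARN below are placeholders, and the calls themselves are commented out so the sketch runs without AWS credentials.

```python
def blue_green_params(name: str, source_arn: str) -> dict:
    """Minimal parameters for CreateBlueGreenDeployment.

    With this launch, an RDS Proxy fronting the Blue database needs no
    changes: it detects the switchover and redirects connections itself.
    """
    return {"BlueGreenDeploymentName": name, "Source": source_arn}

params = blue_green_params(
    "orders-db-upgrade",
    "arn:aws:rds:eu-west-1:111122223333:db:orders-db",  # placeholder ARN
)
# rds = boto3.client("rds")
# bg = rds.create_blue_green_deployment(**params)
# ...after validating the Green environment:
# rds.switchover_blue_green_deployment(
#     BlueGreenDeploymentIdentifier=bg["BlueGreenDeployment"]
#                                     ["BlueGreenDeploymentIdentifier"])
```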
Source: aws.amazon.com

Amazon OpenSearch Service supports Managed Prometheus and agent tracing

Amazon OpenSearch Service now provides a unified observability experience that brings together metrics, logs, traces, and AI agent tracing in a single interface. This release introduces native integration with Amazon Managed Service for Prometheus and comprehensive agent tracing capabilities, addressing the dual challenges of prohibitive costs from premium observability platforms and operational complexity from fragmented tooling. Site Reliability Engineers, DevOps Engineers, and Platform Engineering teams can now consolidate their observability stack without costly data duplication or constant context switching between multiple tools.
You can now query Prometheus metrics directly using native PromQL syntax alongside logs and traces in OpenSearch UI’s observability workspace—without duplicating data. Combined with new application monitoring workflows powered by RED metrics (Rate, Errors, Duration) and AI agent tracing using OpenTelemetry GenAI semantic conventions, operations teams can correlate slow traces to application logs, overlay Prometheus metrics on service dashboards, and trace LLM agent execution—all without switching tools. This live query architecture delivers significant cost reduction compared to premium platforms while maintaining operational excellence.
The new unified observability experience is available on OpenSearch UI in 20 AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Asia Pacific (Hong Kong, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm), Canada (Central), and South America (São Paulo).
To learn more, visit the OpenSearch Service observability documentation and direct query documentation.
Source: aws.amazon.com

Amazon Bedrock now supports cost allocation by IAM user and role

Amazon Bedrock now supports cost allocation by IAM principal, such as IAM users and IAM roles, in AWS Cost and Usage Report 2.0 (CUR 2.0) and Cost Explorer. This enables customers to understand and attribute Bedrock model inference costs across users, teams, projects, and applications. With this launch, customers can tag their IAM users and roles with attributes like team, project, or cost center, activate them as cost allocation tags, and analyze Bedrock model inference costs by those tags in Cost Explorer or at the line-item level in CUR 2.0.

To get started, tag your IAM users and roles and activate the tags as cost allocation tags in the Billing and Cost Management console. Then create a CUR 2.0 data export and select “Include caller identity (IAM principal) allocation data”, or filter by tags in Cost Explorer. This feature is available in all AWS commercial Regions where Amazon Bedrock is available. To learn more, see the Using IAM principal for Cost Allocation documentation. To get started with Amazon Bedrock, visit the Amazon Bedrock documentation.
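A small sketch of the first step, tagging the IAM principals your applications assume when calling Bedrock: `iam.tag_role` is a real boto3 call, but it is left commented so the sketch runs without credentials, and the tag keys, values, and role name are illustrative assumptions.

```python
def cost_allocation_tags(team: str, project: str) -> list:
    """Example tags to attach to an IAM role for Bedrock cost attribution."""
    return [{"Key": "team", "Value": team},
            {"Key": "project", "Value": project}]

tags = cost_allocation_tags("ml-platform", "chat-assistant")
# boto3.client("iam").tag_role(RoleName="BedrockInvokeRole", Tags=tags)
# After tagging, activate "team" and "project" as cost allocation tags in
# the Billing and Cost Management console, then group or filter Bedrock
# inference costs by those tags in Cost Explorer or CUR 2.0 line items.
```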
Source: aws.amazon.com

Amazon Timestream for InfluxDB Now Supports Customer-Defined Maintenance Windows

Amazon Timestream for InfluxDB now supports customer-defined maintenance windows, giving you control over when routine maintenance is performed on your InfluxDB databases. This feature is available for both InfluxDB 2 instances and InfluxDB 3 clusters across all supported editions.

With this launch, you can specify a weekly maintenance window using a day-and-time format in your preferred timezone. Timestream for InfluxDB supports IANA timezone identifiers such as America/New_York, Europe/London, and Asia/Tokyo, and automatically handles Daylight Saving Time transitions so you don’t need to manually adjust your schedule. If you don’t specify a maintenance window, the service continues to manage maintenance timing automatically. You can set or update your preferred maintenance window when creating or modifying a resource using the Amazon Timestream for InfluxDB console, AWS CLI, or AWS SDKs.

Customer-defined maintenance windows are available in all Regions where Timestream for InfluxDB is offered. To get started with Amazon Timestream for InfluxDB, visit the Amazon Timestream for InfluxDB console. For more information, see the Amazon Timestream for InfluxDB documentation and pricing page.
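Since the announcement specifies IANA timezone identifiers but not the exact request format, here is an illustrative client-side check of a weekly window preference. The day/time/timezone triple is an assumed shape, not the service's schema; only the IANA-identifier validation (via the standard library's zoneinfo) is grounded in the text above.

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

DAYS = {"mon", "tue", "wed", "thu", "fri", "sat", "sun"}

def valid_window(day: str, hhmm: str, tz: str) -> bool:
    """True if day, HH:MM time, and IANA timezone look like a usable window."""
    if day.lower()[:3] not in DAYS:
        return False
    try:
        h, m = (int(part) for part in hhmm.split(":"))
        ZoneInfo(tz)  # raises if tz is not a known IANA identifier
    except (ValueError, ZoneInfoNotFoundError):
        return False
    return 0 <= h < 24 and 0 <= m < 60

# The service handles DST itself, so "Sun 03:30 America/New_York" stays
# Sunday 03:30 local time year-round with no manual adjustment.
ok = valid_window("Sun", "03:30", "America/New_York")
```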
Source: aws.amazon.com

Amazon EC2 Capacity Manager now supports tag-based dimensions

Starting today, Amazon EC2 Capacity Manager supports tag-based dimensions, enabling you to use tags from your EC2 resources to group and filter capacity metrics. EC2 Capacity Manager helps you monitor and optimize capacity usage across On-Demand Instances, Spot Instances, and Capacity Reservations. This launch also introduces Account Name as a new built-in dimension.
You can activate up to five custom tag keys — such as environment, team, or cost-center — and use them alongside built-in dimensions like Region, Instance Type, and Availability Zone to group and filter capacity metrics by tag values in the console and APIs, and include tag data as additional columns in newly created S3 data exports. Capacity Manager also includes four Capacity Manager-provided tags by default: EC2 Auto Scaling group name, EKS cluster name, EKS Kubernetes node pool, and Karpenter node pool. The new Account Name dimension makes it easier to identify accounts when analyzing cross-account capacity data across your organization.
This feature is available in all AWS Regions where EC2 Capacity Manager is available. To get started, navigate to the Settings tab in Capacity Manager and choose Manage tag keys, or use the AWS CLI. To learn more, see Managing monitored tag keys in the Amazon EC2 User Guide. For more information about Amazon EC2 Capacity Manager, visit the EC2 Capacity Manager documentation.
Source: aws.amazon.com

AWS Marketplace announces the Discovery API for programmatic access to catalog data

Today, AWS Marketplace announces the Discovery API, giving you programmatic access to product and pricing information across the AWS Marketplace catalog — including SaaS, AI agents and tools, AMI, containers, and machine learning models. With the Discovery API, buyers can embed catalog data into internal portals, enrich procurement tools with current pricing and offer terms, and streamline vendor evaluation workflows.

Sellers and channel partners can surface product listings, public pricing, and private offer details directly within their own websites and storefronts — helping customers browse, compare, and move to purchase without leaving the partner experience. The API provides access to product descriptions, categories, pricing across public and private offers, and offer terms, so you can build experiences tailored to how your organization discovers and procures software through AWS Marketplace.

The AWS Marketplace Discovery API is available in US East (N. Virginia), US West (Oregon), and Europe (Ireland). You can get started by configuring IAM permissions for your AWS account and calling the API through the AWS SDK. For more information, see the AWS Marketplace Discovery API Reference.
Source: aws.amazon.com