Amazon Connect now provides conversational analytics for voice and chat bots

Amazon Connect now provides conversational analytics for end-customer self-service interactions across voice and digital channels, helping you better understand and improve your customers’ self-service experiences. Supported channels include PSTN/telephony, in-app and web calling, web and mobile chat, SMS, WhatsApp Business messaging, and Apple Messages for Business. With this launch, Connect provides rich conversational analytics across both human-agent interactions and end-customer self-service interactions. You can now automatically analyze the quality of automated self-service interactions (including customer sentiment), redact sensitive data, discover top contact drivers and themes, flag compliance risks, and proactively surface areas for improvement through easy-to-customize dashboards. Connect’s conversational analytics also lets you use semantic matching rules to categorize interactions based on customer behavior, keywords, sentiment, or issue types, such as billing inquiries or agent escalation requests. Amazon Connect is an AI-powered application that provides one seamless experience for your contact center customers, agents, and supervisors. To learn more about Amazon Connect and its conversational analytics capabilities, refer to the following resources:

Amazon Connect website and pricing
Conversational analytics in the Administrator Guide
Supported languages and Regions

Source: aws.amazon.com

Amazon ECR introduces archive storage class for rarely accessed container images

Amazon ECR now offers a new archive storage class to reduce storage costs for large volumes of rarely accessed container images. The new archive storage class helps you meet your compliance and retention requirements while optimizing storage cost. As part of this launch, ECR lifecycle policies now support archiving images based on last pull time, allowing you to use lifecycle rules to automatically archive images based on usage patterns. To get started, configure lifecycle rules that automatically archive images based on criteria such as image age, count, or last pull time, or use the ECR Console or API to archive images individually. You can archive an unlimited number of images, and archived images do not count against your per-repository image limit. Once archived, images are no longer accessible for pulls, but they can be restored via the ECR Console, CLI, or API within 20 minutes; once restored, images can be pulled normally. All archival and restore operations are logged through CloudTrail for auditability. The new ECR archive storage class is available in all AWS Commercial and AWS GovCloud (US) Regions. For pricing, visit the pricing page. To learn more, visit the documentation.
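A lifecycle rule for last-pull-based archiving might look like the sketch below. It follows the existing ECR lifecycle policy JSON shape, but note that the `sinceImagePulled` count type and the `archive` action type are assumptions inferred from the announcement, not verified field names; check the ECR lifecycle policy reference before using them.

```python
import json

# Sketch: archive any image that has not been pulled for 90 days.
# ASSUMPTIONS: countType "sinceImagePulled" and action type "archive" are
# guesses based on the announcement; verify against the ECR docs.
lifecycle_policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Archive images not pulled in the last 90 days",
            "selection": {
                "tagStatus": "any",
                "countType": "sinceImagePulled",  # assumed new last-pull criterion
                "countUnit": "days",
                "countNumber": 90,
            },
            "action": {"type": "archive"},  # assumed new archive action
        }
    ]
}

policy_json = json.dumps(lifecycle_policy, indent=2)
print(policy_json)
```

Such a policy would be attached to a repository the usual way, e.g. with the `put-lifecycle-policy` CLI command or the equivalent SDK call.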
Source: aws.amazon.com

Amazon OpenSearch Service launches Cluster Insights for improved operational visibility

Amazon OpenSearch Service now includes Cluster Insights, a monitoring solution that provides comprehensive operational visibility into your clusters through a single dashboard. This eliminates the complexity of having to analyze and correlate various logs and metrics to identify potential risks to cluster availability or performance. The solution automates the consolidation of critical operational data across nodes, indices, and shards, transforming complex troubleshooting into a streamlined process. When investigating performance issues such as slow search queries, Cluster Insights displays relevant performance metrics, affected cluster resources, top-N query analysis, and specific remediation steps in one comprehensive view. The solution operates through OpenSearch UI’s resilient architecture, maintaining monitoring capabilities even during cluster unavailability. Users gain immediate access to account-level cluster summaries, enabling efficient management of multiple deployments. Cluster Insights is available at no additional cost for OpenSearch version 2.17 or later in all Regions where OpenSearch UI is available. View the complete list of supported Regions here. To learn more about Cluster Insights, refer to our technical documentation.
Source: aws.amazon.com

Amazon CloudWatch now supports scheduled queries in Logs Insights

Amazon CloudWatch Logs now supports automatically running Logs Insights queries on a recurring schedule for your log analysis needs. With scheduled queries, you can now automate log analysis tasks and deliver query results to Amazon S3 and Amazon EventBridge.
With today’s launch, you can track trends, monitor key operational metrics, and detect anomalies without needing to manually re-run queries or maintain custom automation. This feature makes it easier to maintain continuous visibility into your applications and infrastructure, streamline operational workflows, and ensure consistent insight generation at scale. For example, you can set up scheduled queries for your weekly audit reporting. The query results can also be stored in Amazon S3 for analysis, or can trigger incident response workflows through Amazon EventBridge. The feature supports all CloudWatch Logs Insights query languages and helps teams improve operational efficiency by eliminating manual query execution.
Scheduled queries is available in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), and South America (São Paulo).
You can configure a scheduled query using the Amazon CloudWatch console, AWS Command Line Interface (AWS CLI), AWS Cloud Development Kit (AWS CDK), and AWS SDKs. For more information, visit the Amazon CloudWatch documentation.
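As a concrete illustration of the weekly-audit example above, the sketch below shows a Logs Insights query you might schedule, plus a weekly recurrence written in EventBridge-style cron syntax. The log filter pattern is illustrative (adapt it to your own logs), and the exact scheduling API surface is not shown here; configure the schedule via the console, CLI, CDK, or SDKs as described in the documentation.

```python
# An illustrative Logs Insights query for weekly audit reporting:
# count access-denied events per day in the covered period.
audit_query = """
fields @timestamp, @message
| filter @message like /AccessDenied/
| stats count() as denied_requests by bin(1d)
| sort @timestamp desc
""".strip()

# A weekly recurrence (every Monday at 06:00 UTC) in cron syntax, one
# plausible way to express the schedule when configuring the query.
schedule_expression = "cron(0 6 ? * MON *)"

print(audit_query)
print(schedule_expression)
```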
Source: aws.amazon.com

Get Invoice PDF API is now generally available

Today, AWS announces the general availability of the Get Invoice PDF API, enabling customers to programmatically download AWS invoices via SDK calls. Customers can retrieve individual invoice PDF artifacts by invoking the API with an AWS Invoice ID as input and receiving a pre-signed Amazon S3 URL for immediate download of the AWS invoice and supplemental documents in PDF format. For bulk invoice retrieval, customers can first call the List Invoice Summaries API to get Invoice IDs for a specific billing period, then use those Invoice IDs as input to the Get Invoice PDF API to download each invoice PDF artifact. The Get Invoice PDF API is available in the US East (N. Virginia) Region. Customers from any commercial Region (except the China Regions) can use the service. To get started with the Get Invoice PDF API, please visit the API documentation.
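The bulk-retrieval flow described above can be sketched as follows. The operation and field names (`list_invoice_summaries`, `get_invoice_pdf`, `InvoiceId`, `PresignedUrl`) are assumptions modeled on the announcement, and a stub stands in for the real AWS Invoicing client so the flow is runnable here; consult the API documentation for the actual request and response shapes.

```python
# Stub standing in for the AWS Invoicing client; method and field names are
# assumptions based on the announcement, not the verified SDK surface.
class StubInvoicingClient:
    def list_invoice_summaries(self, BillingPeriod):
        return {"InvoiceSummaries": [{"InvoiceId": "INV-001"},
                                     {"InvoiceId": "INV-002"}]}

    def get_invoice_pdf(self, InvoiceId):
        # The real API returns a pre-signed S3 URL for the PDF artifact.
        return {"PresignedUrl":
                f"https://example-bucket.s3.amazonaws.com/{InvoiceId}.pdf"}

def pdf_urls_for_period(client, billing_period):
    """List invoices for a billing period, then fetch a PDF URL for each."""
    summaries = client.list_invoice_summaries(BillingPeriod=billing_period)
    return {
        s["InvoiceId"]: client.get_invoice_pdf(InvoiceId=s["InvoiceId"])["PresignedUrl"]
        for s in summaries["InvoiceSummaries"]
    }

urls = pdf_urls_for_period(StubInvoicingClient(), "2025-11")
print(urls)
```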
Source: aws.amazon.com

Amazon RDS Optimized Reads now supports R8gd and M8gd database instances

Amazon Relational Database Service (RDS) now supports R8gd and M8gd database instances for Optimized Reads on Amazon Aurora PostgreSQL and RDS for PostgreSQL, MySQL, and MariaDB. R8gd and M8gd database instances offer improved price-performance. For example, Optimized Reads on R8gd instances deliver up to 165% better throughput and up to 120% better price-performance over R6g instances for Aurora PostgreSQL. Optimized Reads uses local NVMe-based SSD block storage available on these instances to store ephemeral data, such as temporary tables, reducing data access to/from network-based storage and improving read latency and throughput. The result is improved query performance for complex queries and faster index rebuild operations. Aurora PostgreSQL Optimized Reads instances using the I/O-Optimized configuration additionally use the local storage to extend their caching capacity. Database pages that are evicted from the in-memory buffer cache are cached in local storage to speed subsequent retrieval of that data. Customers can get started with Optimized Reads through the AWS Management Console, CLI, and SDK by modifying their existing Aurora and RDS databases or creating a new database using R8gd or M8gd instances. These instances are available in the US East (N. Virginia, Ohio), US West (Oregon), Europe (Spain, Frankfurt), and Asia Pacific (Tokyo) Regions. For complete information on pricing and regional availability, please refer to the pricing page. For information on specific engine versions that support these DB instance types, please see the Aurora and RDS documentation.
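For the "modify an existing database" path, the sketch below shows the kind of parameters you might pass to the RDS ModifyDBInstance operation. The instance identifier is illustrative, and `db.r8gd.4xlarge` simply follows RDS's `db.<family>.<size>` class naming; verify engine-version support for your database before changing the class.

```python
# Illustrative ModifyDBInstance parameters for moving an existing database
# onto an R8gd class that supports Optimized Reads. The identifier is a
# placeholder; the size shown is one of several available R8gd sizes.
modify_params = {
    "DBInstanceIdentifier": "my-postgres-db",  # placeholder identifier
    "DBInstanceClass": "db.r8gd.4xlarge",
    "ApplyImmediately": False,  # apply during the next maintenance window
}
print(modify_params)
```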
Source: aws.amazon.com

EC2 Auto Scaling now offers a synchronous API to launch instances inside an Auto Scaling group

Today, EC2 Auto Scaling is launching a new API, LaunchInstances, which gives customers more control and flexibility over how EC2 Auto Scaling provisions instances while providing instant feedback on capacity availability. Customers use EC2 Auto Scaling for automated fleet management. With scaling policies, EC2 Auto Scaling can automatically add instances when demand spikes and remove them when traffic drops, ensuring customers’ applications always have the right amount of compute. EC2 Auto Scaling also offers the ability to monitor and replace unhealthy instances. In certain use cases, customers may want to specify exactly where EC2 Auto Scaling should launch additional instances and need immediate feedback on capacity availability. The new LaunchInstances API allows customers to precisely control where instances are launched by specifying an override for any Availability Zone and/or subnet in an Auto Scaling group, while providing immediate feedback on capacity availability. This synchronous operation gives customers real-time insight into scaling operations, enabling them to quickly implement alternative strategies if needed. For additional flexibility, the API includes optional asynchronous retries to help reach the desired capacity. This feature is now available in US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore), at no additional cost beyond standard EC2 and EBS usage. To get started, use the AWS Command Line Interface (CLI) or the AWS SDKs. To learn more about this feature, visit the AWS documentation.
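The "alternative strategies" pattern the synchronous API enables can be sketched as a simple AZ fallback loop: try the preferred Availability Zone, and on an immediate insufficient-capacity response, pivot to the next one. The call shape and error type below are assumptions for illustration, with a stub standing in for the EC2 Auto Scaling client; see the API documentation for the real parameters.

```python
# Stub mimicking a synchronous LaunchInstances call; parameter names and the
# error type are illustrative assumptions, not the verified API surface.
class InsufficientCapacityError(Exception):
    pass

class StubAutoScalingClient:
    def launch_instances(self, AutoScalingGroupName, AvailabilityZone, Count):
        if AvailabilityZone == "us-east-1a":  # pretend this AZ is out of capacity
            raise InsufficientCapacityError(AvailabilityZone)
        return {"LaunchedCount": Count, "AvailabilityZone": AvailabilityZone}

def launch_with_fallback(client, group, azs, count):
    """Try each AZ in order; return the first successful synchronous launch."""
    for az in azs:
        try:
            return client.launch_instances(
                AutoScalingGroupName=group, AvailabilityZone=az, Count=count
            )
        except InsufficientCapacityError:
            continue  # synchronous feedback lets us pivot to the next AZ
    raise RuntimeError("no capacity in any candidate AZ")

result = launch_with_fallback(StubAutoScalingClient(), "web-asg",
                              ["us-east-1a", "us-east-1b"], 3)
print(result)
```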
Source: aws.amazon.com

Amazon Bedrock introduces Priority and Flex inference service tiers

Today, Amazon Bedrock introduces two new inference service tiers to optimize costs and performance for different AI workloads. The new Flex tier offers cost-effective pricing for non-time-critical applications like model evaluations and content summarization, while the Priority tier provides premium performance and preferential processing for mission-critical applications. For most models that support the Priority tier, customers can realize up to 25% better output tokens per second (OTPS) compared to the Standard tier. These tiers join the existing Standard tier for everyday AI applications with reliable performance.
These service tiers address key challenges that organizations face when deploying AI at scale. The Flex tier is designed for non-interactive workloads that can tolerate longer latencies, making it ideal for model evaluations, content summarization, labeling and annotation, and multistep agentic workflows, and it is priced at a discount relative to the Standard tier. During periods of high demand, Flex requests receive lower priority relative to the Standard tier. The Priority tier is an ideal fit for mission-critical applications, real-time end-user interactions, and interactive experiences where consistent, fast responses are essential. During periods of high demand, Priority requests receive processing priority, at a premium price, over other service tiers. These new service tiers are available today for a range of leading foundation models, including OpenAI (gpt-oss-20b, gpt-oss-120b), DeepSeek (DeepSeek V3.1), Qwen3 (Coder-480B-A35B-Instruct, Coder-30B-A3B-Instruct, 32B dense, Qwen3-235B-A22B-2507), and Amazon Nova (Nova Pro and Nova Premier). With these new options, Amazon Bedrock helps customers gain greater control over balancing cost efficiency with performance requirements, enabling them to scale AI workloads economically while ensuring optimal user experiences for their most critical applications.
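Tier selection presumably happens per request. The sketch below builds illustrative request parameters with a hypothetical `serviceTier` field; that field name, its values, and the model IDs shown are assumptions for illustration, not the verified Bedrock Runtime request shape, so check the documentation for the actual parameter.

```python
# Build illustrative Bedrock invocation parameters with a per-request tier.
# ASSUMPTION: the "serviceTier" field name and its values are guesses based
# on the announcement; the model IDs are also illustrative.
def build_invoke_params(model_id, prompt, tier="standard"):
    assert tier in {"standard", "flex", "priority"}, "unknown tier"
    return {
        "modelId": model_id,
        "serviceTier": tier,  # assumed per-request tier selector
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }

# Flex for a batch summarization job; Priority for an interactive assistant.
flex_req = build_invoke_params("openai.gpt-oss-120b", "Summarize this report.", "flex")
priority_req = build_invoke_params("amazon.nova-pro-v1:0", "Hi!", "priority")
print(flex_req["serviceTier"], priority_req["serviceTier"])
```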
For more information about the AWS Regions where Amazon Bedrock Priority and Flex inference service tiers are available, see the AWS Regions table.
Learn more about service tiers in our News Blog and documentation.
Source: aws.amazon.com

Amazon OpenSearch Serverless now adds audit logs for data plane APIs

Amazon OpenSearch Serverless now supports detailed audit logging of data plane requests via AWS CloudTrail. This feature enables customers to record user actions on their collections, helping meet compliance regulations, improve security posture, and provide evidence for security investigations. Customers can now track user activities such as authorization attempts, index modifications, and search queries. Customers can use CloudTrail to configure filters for OpenSearch Serverless collections with read-only and write-only options, or use advanced event selectors for more granular control over logged data events. All OpenSearch Serverless data events are delivered to an Amazon S3 bucket and optionally to Amazon CloudWatch Logs, creating a comprehensive audit trail. This enhanced visibility into who made API calls, and when, helps security and operations teams monitor data access and respond to events in real time. Once configured, audit logs are continuously streamed to CloudTrail with no additional customer action required and can be further analyzed there. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
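An advanced event selector for this might look like the sketch below, which restricts logging to write-type data events on collections. It follows CloudTrail's standard advanced-event-selector JSON shape, but the `resources.type` value `AWS::AOSS::Collection` is an assumption inferred from the announcement; verify it against the CloudTrail data-events documentation.

```python
import json

# Sketch: log only write-type data events for OpenSearch Serverless
# collections. ASSUMPTION: the resources.type value "AWS::AOSS::Collection"
# is inferred from the announcement, not verified.
advanced_event_selectors = [
    {
        "Name": "OpenSearch Serverless write data events",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            {"Field": "resources.type", "Equals": ["AWS::AOSS::Collection"]},  # assumed
            {"Field": "readOnly", "Equals": ["false"]},
        ],
    }
]
print(json.dumps(advanced_event_selectors, indent=2))
```

Such selectors would typically be applied to a trail with the `put-event-selectors` CLI command or the equivalent SDK call.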
Source: aws.amazon.com

Amazon EC2 P6-B300 instances with NVIDIA Blackwell Ultra GPUs are now available

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B300 instances, accelerated by NVIDIA Blackwell Ultra B300 GPUs. Amazon EC2 P6-B300 instances provide 8 NVIDIA Blackwell Ultra GPUs with 2.1 TB of high-bandwidth GPU memory, 6.4 Tbps EFA networking, 300 Gbps of dedicated ENA throughput, and 4 TB of system memory.
P6-B300 instances deliver 2x networking bandwidth, 1.5x GPU memory size, and 1.5x GPU TFLOPS (at FP4, without sparsity) compared to P6-B200 instances, making them well suited to train and deploy large trillion-parameter foundation models (FMs) and large language models (LLMs) with sophisticated techniques. The higher networking and larger memory deliver faster training times and more token throughput for AI workloads. 
P6-B300 instances are now available in the p6-b300.48xlarge size through Amazon EC2 Capacity Blocks for ML and Savings Plans in the following AWS Region: US West (Oregon). For on-demand reservation of P6-B300 instances, please reach out to your account manager.
To learn more about P6-B300 instances, visit Amazon EC2 P6 instances.
Source: aws.amazon.com