Amazon RDS Optimized Reads now supports R8gd and M8gd database instances

Amazon Relational Database Service (RDS) now supports R8gd and M8gd database instances for Optimized Reads on Amazon Aurora PostgreSQL and on RDS for PostgreSQL, MySQL, and MariaDB. R8gd and M8gd database instances offer improved price-performance. For example, Optimized Reads on R8gd instances delivers up to 165% higher throughput and up to 120% better price-performance than R6g instances for Aurora PostgreSQL. Optimized Reads uses the local NVMe-based SSD block storage available on these instances to store ephemeral data, such as temporary tables, reducing data access to and from network-based storage and improving read latency and throughput. The result is improved query performance for complex queries and faster index rebuild operations. Aurora PostgreSQL Optimized Reads instances using the I/O-Optimized configuration additionally use the local storage to extend their caching capacity: database pages evicted from the in-memory buffer cache are cached in local storage to speed up subsequent retrieval of that data. Customers can get started with Optimized Reads through the AWS Management Console, CLI, and SDK by modifying their existing Aurora and RDS databases or by creating a new database using R8gd or M8gd instances. These instances are available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Spain), Europe (Frankfurt), and Asia Pacific (Tokyo) Regions. For complete information on pricing and regional availability, refer to the pricing page. For the specific engine versions that support these DB instance types, see the Aurora and RDS documentation.
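As a minimal sketch of the modify path via the SDK: the boto3 call below moves an existing instance onto an R8gd class; the instance identifier and instance size are illustrative placeholders, not values from the announcement.

```python
import boto3

rds = boto3.client("rds")

# Switch an existing database instance to an R8gd class so Optimized Reads
# can use the instance's local NVMe-based SSD storage. The identifier and
# size below are placeholders for illustration.
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-instance",
    DBInstanceClass="db.r8gd.4xlarge",
    ApplyImmediately=True,  # set False to apply in the next maintenance window
)
```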
Source: aws.amazon.com

EC2 Auto Scaling now offers a synchronous API to launch instances inside an Auto Scaling group

Today, EC2 Auto Scaling is launching a new API, LaunchInstances, which gives customers more control and flexibility over how EC2 Auto Scaling provisions instances while providing instant feedback on capacity availability. Customers use EC2 Auto Scaling for automated fleet management. With scaling policies, EC2 Auto Scaling can automatically add instances when demand spikes and remove them when traffic drops, ensuring customers’ applications always have the right amount of compute. EC2 Auto Scaling also monitors and replaces unhealthy instances. In certain use cases, customers may want to specify exactly where EC2 Auto Scaling should launch additional instances and need immediate feedback on capacity availability. The new LaunchInstances API lets customers control precisely where instances are launched by specifying an override for any Availability Zone and/or subnet in an Auto Scaling group, while providing immediate feedback on capacity availability. This synchronous operation gives customers real-time insight into scaling operations, enabling them to quickly implement alternative strategies if needed. For additional flexibility, the API includes optional asynchronous retries to help reach the desired capacity. This feature is now available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore) Regions, at no additional cost beyond standard EC2 and EBS usage. To get started, use the AWS Command Line Interface (CLI) or the AWS SDKs. To learn more about this feature, see the AWS documentation.
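The sketch below is inferred from the announcement rather than the API reference: the operation name follows the announced LaunchInstances API, but the parameter names (group name, Availability Zone override, instance count) are assumptions made to illustrate the synchronous feedback flow.

```python
import boto3
from botocore.exceptions import ClientError

autoscaling = boto3.client("autoscaling")

# Hypothetical parameter names inferred from the announcement; consult the
# EC2 Auto Scaling API reference for the authoritative request shape.
try:
    response = autoscaling.launch_instances(
        AutoScalingGroupName="my-asg",   # assumed: an existing Auto Scaling group
        AvailabilityZone="us-east-1a",   # assumed: per-request AZ override
        InstanceCount=2,                 # assumed: number of instances to add
    )
    print("Launch succeeded:", response)
except ClientError as err:
    # Because the call is synchronous, a capacity shortfall surfaces
    # immediately, so a fallback (for example, another AZ) can be tried.
    print("Launch failed:", err)
```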
Source: aws.amazon.com

Amazon Bedrock introduces Priority and Flex inference service tiers

Today, Amazon Bedrock introduces two new inference service tiers to optimize costs and performance for different AI workloads. The new Flex tier offers cost-effective pricing for non-time-critical applications like model evaluations and content summarization, while the Priority tier provides premium performance and preferential processing for mission-critical applications. For most models that support the Priority tier, customers can realize up to 25% higher output tokens per second (OTPS) compared to the Standard tier. These tiers join the existing Standard tier, which serves everyday AI applications with reliable performance.
These service tiers address key challenges that organizations face when deploying AI at scale. The Flex tier is designed for non-interactive workloads that can tolerate longer latencies, making it ideal for model evaluations, content summarization, labeling and annotation, and multistep agentic workflows, and it’s priced at a discount relative to the Standard tier. During periods of high demand, Flex requests receive lower priority relative to the Standard tier. The Priority tier is an ideal fit for mission-critical applications, real-time end-user interactions, and interactive experiences where consistent, fast responses are essential. During periods of high demand, Priority requests receive processing priority, at a premium price, over other service tiers. These new service tiers are available today for a range of leading foundation models, including OpenAI (gpt-oss-20b, gpt-oss-120b), DeepSeek (DeepSeek V3.1), Qwen3 (Coder-480B-A35B-Instruct, Coder-30B-A3B-Instruct, 32B dense, Qwen3-235B-A22B-2507), and Amazon Nova (Nova Pro and Nova Premier). With these new options, Amazon Bedrock helps customers gain greater control over balancing cost efficiency with performance requirements, enabling them to scale AI workloads economically while ensuring optimal user experiences for their most critical applications.
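A sketch of selecting a tier per request is shown below. Both the serviceTier field name (and its values) and the model ID are assumptions based on this announcement's wording, so verify them against the Bedrock API reference before use.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# "serviceTier" and its values are assumed from the announcement; the model
# ID is illustrative. Check the Bedrock docs for the exact request shape.
response = bedrock.converse(
    modelId="openai.gpt-oss-120b-1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize this report."}]}],
    serviceTier="flex",  # assumed values: "standard" | "flex" | "priority"
)
print(response["output"]["message"]["content"][0]["text"])
```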
For more information about the AWS Regions where Amazon Bedrock Priority and Flex inference service tiers are available, see the AWS Regions table.
Learn more about service tiers in our News Blog and documentation.
Source: aws.amazon.com

Amazon OpenSearch Serverless now adds audit logs for data plane APIs

Amazon OpenSearch Serverless now supports detailed audit logging of data plane requests via AWS CloudTrail. This feature enables customers to record user actions on their collections, helping them meet compliance regulations, improve their security posture, and provide evidence for security investigations. Customers can now track user activities such as authorization attempts, index modifications, and search queries. Customers can use CloudTrail to configure filters for OpenSearch Serverless collections with read-only and write-only options, or use advanced event selectors for more granular control over logged data events. All OpenSearch Serverless data events are delivered to an Amazon S3 bucket and optionally to Amazon CloudWatch Logs, creating a comprehensive audit trail. This enhanced visibility into who made API calls, and when, helps security and operations teams monitor data access and respond to events in real time. Once configured, audit logs are continuously streamed to CloudTrail with no additional customer action required, and they can be analyzed further there. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
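As a sketch, the advanced event selector below enables logging of write (non-read) data events for OpenSearch Serverless collections on an existing trail. The trail name is a placeholder, and the resources.type value is our assumption for the collection resource type; confirm both in the CloudTrail documentation.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Trail name is a placeholder; "AWS::AOSS::Collection" is assumed to be the
# CloudTrail resource type for OpenSearch Serverless collections.
cloudtrail.put_event_selectors(
    TrailName="my-audit-trail",
    AdvancedEventSelectors=[
        {
            "Name": "OpenSearch Serverless write data events",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::AOSS::Collection"]},
                {"Field": "readOnly", "Equals": ["false"]},  # write-only filter
            ],
        }
    ],
)
```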
Source: aws.amazon.com

Amazon EC2 P6-B300 instances with NVIDIA Blackwell Ultra GPUs are now available

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B300 instances, accelerated by NVIDIA Blackwell Ultra B300 GPUs. Amazon EC2 P6-B300 instances provide 8x NVIDIA Blackwell Ultra GPUs with 2.1 TB of high-bandwidth GPU memory, 6.4 Tbps of EFA networking, 300 Gbps of dedicated ENA throughput, and 4 TB of system memory.
P6-B300 instances deliver 2x the networking bandwidth, 1.5x the GPU memory, and 1.5x the GPU TFLOPS (at FP4, without sparsity) of P6-B200 instances, making them well suited for training and deploying large trillion-parameter foundation models (FMs) and large language models (LLMs) with sophisticated techniques. The higher networking bandwidth and larger memory deliver faster training times and higher token throughput for AI workloads.
P6-B300 instances are now available in the p6-b300.48xlarge size through Amazon EC2 Capacity Blocks for ML and Savings Plans in the following AWS Region: US West (Oregon). For on-demand reservation of P6-B300 instances, please reach out to your account manager.
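A minimal sketch of reserving capacity through EC2 Capacity Blocks for ML follows; the duration and instance count are illustrative placeholders, and real usage should compare the returned offerings before purchasing.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # US West (Oregon)

# Find Capacity Block offerings for a P6-B300 instance; the duration and
# count here are placeholders for illustration.
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p6-b300.48xlarge",
    InstanceCount=1,
    CapacityDurationHours=24,
)

# Purchase the first matching offering (compare prices and dates in practice).
offering_id = offerings["CapacityBlockOfferings"][0]["CapacityBlockOfferingId"]
ec2.purchase_capacity_block(
    CapacityBlockOfferingId=offering_id,
    InstancePlatform="Linux/UNIX",
)
```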
To learn more about P6-B300 instances, visit Amazon EC2 P6 instances.
Source: aws.amazon.com

Amazon Redshift announces support for the SUPER data type in Databases with Case-Insensitive Collation

Amazon Redshift announces support for the SUPER data type in databases with case-insensitive collation, enabling analytics on semi-structured and nested data in these databases. Using the SUPER data type with PartiQL in Amazon Redshift, you can perform advanced analytics that combine structured SQL data (such as string, numeric, and timestamp) with semi-structured SUPER data (such as JSON) with flexibility and ease of use. This enhancement allows you to leverage the SUPER data type for your structured and semi-structured data processing needs in databases with case-insensitive collation. Using the COLLATE function, you can now explicitly specify case sensitivity preferences for SUPER columns, providing greater flexibility in handling data with varying case patterns. This is particularly valuable when working with JSON documents, APIs, or application data where case consistency isn’t guaranteed. Whether you’re processing user-defined identifiers or integrating data from multiple sources, you can now perform complex queries across both case-sensitive and case-insensitive data without additional normalization overhead. Amazon Redshift support for the SUPER data type in databases with case-insensitive collation is available in all AWS Regions, including the AWS GovCloud (US) Regions, where Amazon Redshift is available. See the AWS Region Table for more details. To learn more about the SUPER data type in databases with case-insensitive collation, please visit our documentation.
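As a sketch via the Redshift Data API (the workgroup, database, table, and attribute names are placeholders), the query below navigates a SUPER column with PartiQL and forces a case-sensitive comparison with COLLATE inside a database assumed to have been created with case-insensitive collation.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Workgroup, database, table, and attribute names are placeholders; the
# database is assumed to use case-insensitive collation.
resp = redshift_data.execute_statement(
    WorkgroupName="my-workgroup",
    Database="ci_db",
    Sql="""
        SELECT doc
        FROM events
        WHERE COLLATE(doc.customer.name::varchar, 'case_sensitive') = 'Alice';
    """,
)
print(resp["Id"])  # statement ID; fetch rows with get_statement_result
```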
Source: aws.amazon.com

Amazon EC2 I7i instances now available in additional AWS Regions

Amazon Web Services (AWS) announces the availability of high-performance, storage optimized Amazon EC2 I7i instances in the Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), and Middle East (UAE) Regions. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these instances deliver up to 23% better compute performance and more than 10% better price performance than previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45 TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances. I7i instances are ideal for I/O-intensive and latency-sensitive workloads that demand very high random IOPS performance with real-time latency to access small to medium size (multi-TB) datasets. I7i instances support the torn write prevention feature with block sizes up to 16 KB, enabling customers to eliminate database performance bottlenecks. I7i instances are available in eleven sizes – nine virtual sizes up to 48xlarge and two bare metal sizes – delivering up to 100 Gbps of network bandwidth and 60 Gbps of Amazon Elastic Block Store (EBS) bandwidth. To learn more, visit the I7i instances page.
Source: aws.amazon.com

Amazon Polly expands Generative TTS engine with additional languages and region support

Today, we are excited to announce the general availability of five highly expressive Amazon Polly Generative voices in Austrian German (Hannah), Irish English (Niamh), Brazilian Portuguese (Camila), Belgian Dutch (Lisa), and Korean (Seoyeon). This release follows our October launch of the Netherlands Dutch (Laura) Generative voice, bringing our total Generative engine offering to thirty-one voices across twenty locales. Additionally, we have expanded the Generative engine to three new regions in Asia Pacific: Asia Pacific (Seoul), Asia Pacific (Singapore), and Asia Pacific (Tokyo). Amazon Polly is a fully managed service that turns text into lifelike speech, allowing developers and builders to enable their applications for conversational AI or for speech content creation. All new and existing Generative voices are now available in the US East (N. Virginia), Europe (Frankfurt), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), and Asia Pacific (Tokyo) Regions. To hear how Polly voices sound, go to Amazon Polly Features. To learn more about how to use the Generative engine, see the AWS Blog. For more details on the Polly offerings and their use, please read the Amazon Polly documentation and visit our pricing page.
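As a quick sketch, the call below synthesizes speech with the new Austrian German Generative voice; the region, sample text, and output file name are arbitrary choices for illustration.

```python
import boto3

polly = boto3.client("polly", region_name="eu-central-1")  # Europe (Frankfurt)

# Synthesize speech with the new Austrian German Generative voice.
response = polly.synthesize_speech(
    Engine="generative",
    VoiceId="Hannah",
    LanguageCode="de-AT",
    OutputFormat="mp3",
    Text="Grüß Gott! Willkommen bei Amazon Polly.",
)

with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```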
Source: aws.amazon.com

AWS Transfer Family announces Terraform module to automate scanning of transferred files

The AWS Transfer Family Terraform module now supports deployment of automated malware scanning workflows for files transferred using Transfer Family resources. This release streamlines centralized provisioning of threat detection workflows using Amazon GuardDuty S3 Protection, helping you meet data security requirements by identifying potential threats in transferred files. AWS Transfer Family provides fully managed file transfers over SFTP, AS2, FTPS, FTP, and web browser-based interfaces for AWS storage services. Using the new module, you can programmatically provision workflows that scan incoming files, dynamically route files based on scan results, and generate threat notifications, all in a single deployment. You can implement threat detection granularly for specific S3 prefixes while preserving folder structures after scanning, and ensure that only verified clean files reach your business applications and data lakes. This eliminates the overhead and risks associated with manual configuration and provides a scalable deployment option for data security compliance. Customers can get started by using the new module from the Terraform Registry. To learn more about Transfer Family, visit the product page and user guide. To see all the Regions where Transfer Family is available, visit the AWS Region table.
Source: aws.amazon.com

Amazon EC2 I7ie instances now available in AWS Asia Pacific (Singapore) Region

Starting today, Amazon EC2 I7ie instances are available in the Asia Pacific (Singapore) Region. Designed for large, storage I/O-intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance than existing I3en instances. I7ie instances offer up to 120 TB of local NVMe storage, the highest density among storage optimized instances, and up to twice the vCPUs and memory of prior generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances. I7ie instances are high-density storage optimized instances, ideal for workloads that require fast local storage with high random read/write performance and consistently very low latency when accessing large datasets. These instances are available in nine virtual sizes and deliver up to 100 Gbps of network bandwidth and 60 Gbps of bandwidth for Amazon Elastic Block Store (EBS). To learn more, visit the I7ie instances page.
Source: aws.amazon.com