Amazon RDS for Oracle now supports cross-Region replicas with additional storage volumes

Amazon RDS for Oracle now supports cross-Region replicas with additional storage volumes. With additional storage volumes, customers can add up to three storage volumes, each with up to 64 TiB, in addition to the primary storage volume for their database instance. As a result, customers get the flexibility to add or remove storage as workload demands evolve, without incurring application downtime, and to set up their database instance with up to 256 TiB of storage.

Now, with support for cross-Region replicas, customers that set up database instances with cross-Region replicas for business-critical applications also get the benefit of using additional storage volumes for storage flexibility. When you create a cross-Region replica for a database instance that is set up with additional storage volumes, Amazon RDS for Oracle automatically configures the same storage layout on the replica. Subsequently, you can apply changes to additional storage volumes on the primary instance and the replica using the AWS Management Console, AWS CLI, or AWS SDK.

In disaster recovery situations, you can promote a cross-Region replica to serve as the new standalone database, or execute a switchover to reverse roles between the primary database and the replica, meeting low recovery point objective (RPO) and recovery time objective (RTO) targets for business-critical applications.

You will need an Oracle Database Enterprise Edition (EE) license to use replicas in mounted mode, and an additional Oracle Active Data Guard license to use replicas in read-only mode. We recommend consulting your legal team or licensing expert to verify Oracle license requirements for your specific use case.

Amazon RDS for Oracle cross-Region replicas with additional storage volumes are available in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more, see the Amazon RDS for Oracle User Guide.
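Creating a cross-Region replica as described above can be sketched with boto3. This is a hedged example: the instance identifiers, account ID, and Regions are placeholders, not values from the announcement.

```python
import json

# Placeholder identifiers for illustration only.
params = {
    "DBInstanceIdentifier": "orders-db-replica",
    # ARN of the primary instance in the source Region
    "SourceDBInstanceIdentifier": "arn:aws:rds:us-east-1:111122223333:db:orders-db",
    "SourceRegion": "us-east-1",
}

# Calling with a client in the destination Region creates the replica; per the
# announcement, RDS configures the same storage layout (primary volume plus any
# additional storage volumes) on the replica automatically:
#   rds = boto3.client("rds", region_name="eu-west-1")
#   rds.create_db_instance_read_replica(**params)
print(json.dumps(params, indent=2))
```

After creation, changes to additional storage volumes are applied to the primary and the replica separately, per the announcement.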
Source: aws.amazon.com

New Partner Revenue Measurement gives visibility into AWS service consumption

Today, AWS announces the launch of Partner Revenue Measurement, a new capability that gives AWS Partners visibility into how their solutions impact AWS service consumption across partner-managed and customer-managed accounts.
Partner Revenue Measurement allows Partners to better understand their AWS revenue impact and product consumption patterns. Partners can now tag AWS resources using the product code from their AWS Marketplace listing, with tag key aws-apn-id and tag value pc:<AWS Marketplace product-code>, to quantify and measure the AWS revenue impact of that solution.
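The tag format above can be sketched as follows. The product code and resource ARN are hypothetical placeholders; the tag key and value format (aws-apn-id / pc:<product-code>) come from the announcement.

```python
# Hypothetical AWS Marketplace product code for illustration.
marketplace_product_code = "prod-abc123example"

# Tag structure defined by Partner Revenue Measurement.
tags = {"aws-apn-id": f"pc:{marketplace_product_code}"}

# One way to apply the tag across resources is the Resource Groups Tagging API:
#   client = boto3.client("resourcegroupstaggingapi")
#   client.tag_resources(
#       ResourceARNList=["arn:aws:s3:::example-bucket"],  # placeholder ARN
#       Tags=tags,
#   )
print(tags)
```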
Partner Revenue Measurement is generally available in all commercial AWS Regions. To learn more about implementing Partner Revenue Measurement, review the onboarding guide.
Source: aws.amazon.com

Change the server-side encryption type of Amazon S3 objects

You can now change the server-side encryption type of encrypted objects in Amazon S3 without any data movement. You can use the UpdateObjectEncryption API to atomically change the encryption key of your objects, regardless of the object size or storage class. With S3 Batch Operations, you can use UpdateObjectEncryption at scale to standardize the encryption type on entire buckets of objects while preserving object properties and S3 Lifecycle eligibility.

Customers across many industries face increasingly stringent audit and compliance requirements on data security and privacy. A common requirement in these compliance frameworks is more rigorous encryption standards for data at rest, where organizations must encrypt data using a key management service. With UpdateObjectEncryption, customers can now change the encryption type of existing encrypted objects, moving from server-side encryption with Amazon S3 managed keys (SSE-S3) to server-side encryption with AWS KMS keys (SSE-KMS). You can also change the customer-managed KMS key used to encrypt your data to comply with custom key rotation standards, or enable S3 Bucket Keys to reduce your KMS requests.

The Amazon S3 UpdateObjectEncryption API is available in all AWS Regions. To get started, you can use the AWS Management Console or the latest AWS SDKs to update the server-side encryption type of your objects. To learn more, please visit the documentation.
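The SSE-S3-to-SSE-KMS move described above can be sketched as follows. The target encryption fields mirror the standard S3 server-side encryption settings; the SDK method name and call shape for the new UpdateObjectEncryption API are assumptions inferred from the API name, and the KMS key ARN, bucket, and key are placeholders.

```python
def kms_encryption_config(kms_key_arn, use_bucket_key=True):
    """Target encryption settings for moving an object from SSE-S3 to SSE-KMS."""
    return {
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": kms_key_arn,
        # S3 Bucket Keys reduce the number of requests made to AWS KMS.
        "BucketKeyEnabled": use_bucket_key,
    }

# Placeholder KMS key ARN.
config = kms_encryption_config("arn:aws:kms:us-east-1:111122223333:key/example")

# Hypothetical call shape (check the S3 API reference for the real signature):
#   s3 = boto3.client("s3")
#   s3.update_object_encryption(Bucket="example-bucket", Key="data.csv", **config)
print(config)
```

For whole-bucket standardization, the announcement points to S3 Batch Operations rather than per-object calls.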
Source: aws.amazon.com

Amazon Keyspaces (for Apache Cassandra) introduces pre-warming with WarmThroughput for your tables

Amazon Keyspaces (for Apache Cassandra) now supports table pre-warming, allowing you to proactively prepare both new and existing tables to meet future traffic demands. This capability is available for tables in both provisioned and on-demand capacity modes, including multi-Region replicated tables.

Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service. Amazon Keyspaces is serverless, so you pay for only the resources that you use, and you can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. While Amazon Keyspaces automatically scales to accommodate growing workloads, certain scenarios like application launches, marketing campaigns, or seasonal events can create sudden traffic spikes that exceed normal scaling patterns.

With pre-warming, you can now manually specify your expected peak throughput requirements during table creation or update operations, ensuring your tables are immediately ready to handle large traffic surges without scaling delays or increased error rates. The pre-warming process is non-disruptive and runs asynchronously, allowing you to continue making other table modifications while pre-warming is in progress. Pre-warming incurs a one-time charge based on the difference between your specified values and the baseline capacity.

The feature is now available in all AWS Commercial and AWS GovCloud (US) Regions where Amazon Keyspaces is offered. To learn more, visit the pre-warming launch blog or the Amazon Keyspaces documentation.
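The one-time-charge rule above can be illustrated with a small calculation. The throughput numbers are made up for illustration; the WarmThroughput name comes from the announcement title, but how the service reports baseline capacity is not specified here.

```python
def prewarm_delta(requested_rps, baseline_rps):
    """Capacity actually pre-warmed: the one-time charge is based on the
    difference between the requested WarmThroughput value and the table's
    current baseline (no charge if the table is already at or above target)."""
    return max(0, requested_rps - baseline_rps)

# e.g. a campaign expected to peak at 50,000 requests/sec on a table whose
# baseline warm capacity is 12,000 requests/sec (hypothetical numbers):
delta = prewarm_delta(50_000, 12_000)
print(delta)  # 38000

# A table already warmed past the target incurs no additional charge:
print(prewarm_delta(10_000, 25_000))  # 0
```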
Source: aws.amazon.com

Amazon Bedrock now supports server-side custom tools using the Responses API

Amazon Bedrock now supports server-side tools in the Responses API using OpenAI API-compatible service endpoints. Bedrock already supports client-side tool use with the Converse, Chat Completions, and Responses APIs. Now, with the launch of server-side tool use for Responses API, Amazon Bedrock calls the tools directly without going through a client, enabling your AI applications to perform real-time, multi-step actions such as searching the web, executing code, and updating databases within the organizational, governance, compliance, and security boundaries of your AWS accounts. You can either submit your own custom Lambda function to run custom tools or use AWS-provided tools, such as notes and tasks.
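As a sketch, a Responses API request body registering a custom server-side tool backed by a Lambda function might look like the following. The tool-definition field names, the Lambda ARN, and the model identifier are assumptions for illustration; consult the service documentation for the actual request schema.

```python
import json

request_body = {
    # Placeholder model identifier for the GPT OSS 120B model named in the
    # announcement; verify the exact model ID in the Bedrock documentation.
    "model": "openai.gpt-oss-120b",
    "input": "Summarize open tasks and add a note for tomorrow's standup.",
    "tools": [
        {
            # Hypothetical shape for a customer-supplied server-side tool that
            # Bedrock would invoke directly, without a client round-trip.
            "type": "custom",
            "name": "update_tasks",
            "lambda_arn": "arn:aws:lambda:us-east-1:111122223333:function:update-tasks",
        }
    ],
}
print(json.dumps(request_body, indent=2))
```

The key difference from client-side tool use is that the tool call and its result never return to your application code; Bedrock runs the tool within your account's boundaries and continues the model turn.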
Server-side tools using the Responses API is available starting today with OpenAI’s GPT OSS 20B/120B models in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Mumbai), South America (São Paulo), Europe (Ireland), Europe (London), and Europe (Milan) AWS Regions. Support for other regions and models is coming soon.
To get started, visit the service documentation.
Source: aws.amazon.com

AWS announces Deployment Agent SOPs in AWS MCP Server (preview)

AWS announces the launch of deployment Standard Operating Procedures (SOPs) in the AWS MCP Server. SOPs are structured, natural language instructions that guide AI agents through complex, multi-step tasks to ensure consistent, reliable, and efficient behavior. With these automated procedures, customers can deploy web applications to their AWS account using natural language prompts from any MCP-compatible IDE or CLI, including Kiro, Kiro CLI, Cursor, and Claude Code. Deployment works by generating AWS CDK infrastructure, deploying CloudFormation stacks, and creating CI/CD pipelines with recommended AWS security best practices.

Previously, developers struggled to take their vibe-coded applications to production with DevOps best practices in place. Now, developers can move from prototype to production in as little as one prompt. When you ask your AI assistant configured with the AWS MCP Server to deploy your web application, your AI agent will follow the multi-step plan defined in the Agent SOPs to analyze the project structure, generate CDK infrastructure, and deploy a preview environment hosted on Amazon S3 and Amazon CloudFront. Once you are ready, it can configure AWS CodePipeline for automated production deployments from source repositories, setting up CI/CD automatically for your application.

The Agent SOPs support web applications built with popular frameworks including React, Vue.js, Angular, and Next.js. Deployment documentation is automatically created in the repository, enabling agents to handle future deployments, query logs for troubleshooting, and resume work across sessions.

The Agent SOPs are available in preview as part of the AWS MCP Server at no additional cost in the US East (N. Virginia) Region. You pay only for the AWS resources you create and applicable data transfer costs. To get started, see the AWS MCP Server documentation.
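The workflow above starts from an MCP client configuration that registers the AWS MCP Server with your IDE or CLI. The config structure below follows the common mcpServers convention; the launch command and package name are assumptions, so check the AWS MCP Server documentation for the exact values for your client.

```python
import json

# Hedged sketch of an MCP client configuration entry for the AWS MCP Server.
mcp_config = {
    "mcpServers": {
        "aws": {
            "command": "uvx",            # assumed launcher
            "args": ["aws-mcp-server"],  # assumed package name
        }
    }
}

# With the server registered, a deployment starts from a plain prompt, e.g.:
prompt = "Deploy this Next.js app to my AWS account with a preview environment."

print(json.dumps(mcp_config, indent=2))
```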
Source: aws.amazon.com

Amazon Cognito introduces inbound federation Lambda triggers

Amazon Cognito introduces inbound federation Lambda triggers that enable you to transform and customize federated user attributes during the authentication process. You can now modify responses from external SAML and OIDC providers before they are stored in your user pool, providing complete programmatic control over the federation flow without requiring changes to your identity provider configuration.
The inbound federation Lambda trigger addresses current limitations in federated authentication workflows, particularly issues caused by attribute size limits and the need for selective attribute storage from external identity providers. For example, large group attributes from external SAML or OIDC identity providers that exceed Cognito’s 2,048-character limit per attribute can block the authentication flow. This capability allows you to add, override, or suppress attribute values, such as modifying large group attributes, before creating new federated users or updating existing federated user profiles in Cognito.
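A trigger handling the oversized-groups case above might look like the following sketch. The event and response field names are assumptions modeled on other Cognito Lambda triggers (the real contract is in the Developer Guide), and the "app-" group prefix and custom:groups attribute name are hypothetical.

```python
# Cognito's documented per-attribute size limit.
MAX_ATTR_LEN = 2048

def handler(event, context):
    """Hypothetical inbound federation trigger: keep only the groups this
    application cares about instead of storing the full (oversized) list
    from the external IdP."""
    attrs = event["request"]["userAttributes"]
    groups = attrs.get("custom:groups", "")
    if len(groups) > MAX_ATTR_LEN:
        # Suppress irrelevant groups, then defensively truncate to the limit.
        relevant = [g for g in groups.split(",") if g.startswith("app-")]
        attrs["custom:groups"] = ",".join(relevant)[:MAX_ATTR_LEN]
    event["response"] = {"userAttributes": attrs}
    return event
```

The same pattern extends to adding or overriding attributes before Cognito creates or updates the federated user profile.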
The new inbound federation Lambda trigger is available through hosted UI (classic) and managed login in all AWS Regions where Amazon Cognito is available. To get started, configure the trigger using the AWS Management Console, AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), Cloud Development Kit (CDK), or AWS CloudFormation by adding the new parameter to your User Pool LambdaConfig. To learn more, see the Amazon Cognito Developer Guide for implementation examples and best practices.
Source: aws.amazon.com

Announcing increased 1 MB payload size support in Amazon EventBridge

Amazon EventBridge has increased the maximum event payload size from 256 KB to 1 MB, enabling developers to ingest richer, more complex payloads for their event-driven workloads without the need to split, compress, or externalize data. Amazon EventBridge is a serverless event router that enables you to create scalable event-driven applications by routing events between your applications, third-party SaaS applications, and AWS services. These applications often need to process rich contextual data, including large language model prompts, telemetry signals, and complex JSON structures for machine learning outputs. The new 1 MB payload support in EventBridge Event Buses enables developers to streamline their architectures by including comprehensive data in a single event, reducing the need for complex data chunking or external storage solutions.

This feature is available in all commercial AWS Regions where Amazon EventBridge is offered, except Asia Pacific (New Zealand), Asia Pacific (Thailand), Asia Pacific (Malaysia), Asia Pacific (Taipei), and Mexico (Central). For a full list, see the AWS Regional Services List. To learn more, visit the EventBridge documentation.
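A payload like the ones above would have required chunking or external storage under the old 256 KB cap but now fits in a single event. In this sketch the event source, detail type, and payload contents are placeholders, and checking only the Detail field's size is a simplification (entry metadata also counts toward the limit).

```python
import json

MAX_EVENT_BYTES = 1 * 1024 * 1024  # new 1 MB limit

# Hypothetical large ML payload: a long prompt plus structured model output.
detail = {"prompt": "x" * 500_000, "model_output": {"tokens": list(range(1000))}}
entry = {
    "Source": "app.ml-pipeline",          # placeholder
    "DetailType": "InferenceCompleted",   # placeholder
    "Detail": json.dumps(detail),
    "EventBusName": "default",
}

size = len(entry["Detail"].encode("utf-8"))
# Larger than the old 256 KB cap, but within the new 1 MB limit:
assert 256 * 1024 < size < MAX_EVENT_BYTES

# Sent as a single event, with no chunking or S3 pointer indirection:
#   events = boto3.client("events")
#   events.put_events(Entries=[entry])
print(size)
```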
Source: aws.amazon.com

Amazon DynamoDB global tables with multi-Region strong consistency now supports application resiliency testing with AWS Fault Injection Service

Amazon DynamoDB global tables with multi-Region strong consistency (MRSC) now supports application resiliency testing with AWS Fault Injection Service (FIS), a fully managed service for running controlled fault injection experiments to improve application performance, observability, and resilience. With this launch, you can apply real-world failure scenarios to MRSC global tables, such as regional failures, enabling you to observe how your applications respond to these disruptions and validate your resilience mechanisms.

MRSC global tables automatically replicate your DynamoDB tables across your choice of AWS Regions to achieve fast, strongly consistent read and write performance, providing you 99.999% availability, increased application resiliency, and improved business continuity. You can use the new FIS action to observe how your application responds to a pause in regional replication, and tune monitoring and recovery processes to improve resiliency and application availability.

MRSC global tables support for FIS is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Osaka), Asia Pacific (Seoul), Europe (Ireland), Europe (London), Europe (Frankfurt), and Europe (Paris). To get started, visit the DynamoDB FIS actions documentation.
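The replication-pause experiment can be sketched as an FIS experiment template. This follows the general FIS template structure, but the action identifier, target resource type, and parameter names are assumptions; verify them against the DynamoDB FIS actions documentation. The ARNs are placeholders.

```python
import json

template = {
    "description": "Pause MRSC replication to observe application failover behavior",
    "targets": {
        "mrsc-table": {
            "resourceType": "aws:dynamodb:global-table",  # assumed
            "resourceArns": ["arn:aws:dynamodb:us-east-1:111122223333:table/orders"],
            "selectionMode": "ALL",
        }
    },
    "actions": {
        "pause-replication": {
            "actionId": "aws:dynamodb:global-table-pause-replication",  # assumed
            "parameters": {"duration": "PT5M"},  # 5-minute fault window
            "targets": {"Tables": "mrsc-table"},
        }
    },
    # A real experiment should use a CloudWatch alarm as a stop condition.
    "stopConditions": [{"source": "none"}],
    "roleArn": "arn:aws:iam::111122223333:role/fis-experiment-role",
}
print(json.dumps(template, indent=2))
```

During the fault window you would watch application-side error rates and latency to confirm your monitoring and recovery processes behave as expected.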
Source: aws.amazon.com

Amazon MSK Replicator is now available in Asia Pacific (New Zealand)

You can now use Amazon MSK Replicator to replicate streaming data across Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters in the Asia Pacific (New Zealand) Region. MSK Replicator is a feature of Amazon MSK that enables you to reliably replicate data across Amazon MSK clusters in the same or different AWS Regions in a few clicks. With MSK Replicator, you can easily build regionally resilient streaming applications for increased availability and business continuity.

MSK Replicator provides automatic asynchronous replication across MSK clusters, eliminating the need to write custom code, manage infrastructure, or set up cross-Region networking. MSK Replicator automatically scales the underlying resources so that you can replicate data on demand without having to monitor or scale capacity. MSK Replicator also replicates the necessary Kafka metadata, including topic configurations, Access Control Lists (ACLs), and consumer group offsets. If an unexpected event occurs in a Region, you can fail over to the other AWS Region and seamlessly resume processing.

You can get started with MSK Replicator from the Amazon MSK console or the AWS CLI. With this launch, MSK Replicator is now available in thirty-six AWS Regions. To learn more, visit the MSK Replicator documentation, product page, and pricing page.
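Setting up cross-Region replication as described above can be sketched with the boto3 kafka client's create_replicator call. The request shape here is simplified (the real request also specifies VPC configuration and topic/consumer-group replication settings), and all ARNs and Region codes are placeholders.

```python
# Hedged sketch of replicator inputs: source and target clusters plus the
# service execution role. Placeholder ARNs and Regions throughout.
replicator_params = {
    "ReplicatorName": "nz-dr-replicator",
    "KafkaClusters": [
        {"AmazonMskCluster": {
            "MskClusterArn": "arn:aws:kafka:us-west-2:111122223333:cluster/source/uuid-1"}},
        {"AmazonMskCluster": {
            "MskClusterArn": "arn:aws:kafka:ap-southeast-2:111122223333:cluster/target/uuid-2"}},
    ],
    "ServiceExecutionRoleArn": "arn:aws:iam::111122223333:role/msk-replicator-role",
}

# The replicator is created in the target cluster's Region:
#   kafka = boto3.client("kafka", region_name="ap-southeast-2")
#   kafka.create_replicator(**replicator_params)
print(replicator_params["ReplicatorName"])
```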
Source: aws.amazon.com