Change the server-side encryption type of Amazon S3 objects

You can now change the server-side encryption type of encrypted objects in Amazon S3 without any data movement. The UpdateObjectEncryption API atomically changes the encryption key of your objects regardless of object size or storage class. With S3 Batch Operations, you can apply UpdateObjectEncryption at scale to standardize the encryption type across entire buckets of objects while preserving object properties and S3 Lifecycle eligibility.

Customers across many industries face increasingly stringent audit and compliance requirements for data security and privacy. A common requirement in these compliance frameworks is more rigorous encryption standards for data at rest, where organizations must encrypt data using a key management service. With UpdateObjectEncryption, you can now change the encryption type of existing encrypted objects, for example moving from Amazon S3 managed server-side encryption (SSE-S3) to server-side encryption with AWS KMS keys (SSE-KMS). You can also change the customer-managed KMS key used to encrypt your data to comply with custom key-rotation standards, or enable S3 Bucket Keys to reduce your KMS requests.

The Amazon S3 UpdateObjectEncryption API is available in all AWS Regions. To get started, you can use the AWS Management Console or the latest AWS SDKs to update the server-side encryption type of your objects. To learn more, please visit the documentation.
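A minimal sketch of such an SSE-S3-to-SSE-KMS change with the AWS SDK for Python. The parameter shape and the `update_object_encryption` method name are assumptions based on the announcement and standard boto3 naming conventions; check the S3 API reference for the real signature.

```python
def build_update_encryption_request(bucket, key, kms_key_id=None):
    """Build request parameters for an UpdateObjectEncryption call
    that moves an object to SSE-KMS (parameter names are assumptions)."""
    params = {
        "Bucket": bucket,
        "Key": key,
        "ServerSideEncryption": "aws:kms",  # target encryption type
    }
    if kms_key_id:
        # Customer-managed KMS key; omit to fall back to the AWS managed key.
        params["SSEKMSKeyId"] = kms_key_id
    return params

# Usage (requires boto3 and AWS credentials; the method name is an assumption):
# import boto3
# s3 = boto3.client("s3")
# s3.update_object_encryption(**build_update_encryption_request(
#     "my-bucket", "reports/2024.csv",
#     kms_key_id="arn:aws:kms:us-east-1:111122223333:key/example"))
```

For a bucket-wide migration, the same parameters would be supplied through an S3 Batch Operations job instead of per-object calls.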
Source: aws.amazon.com

Amazon Keyspaces (for Apache Cassandra) introduces pre-warming with WarmThroughput for your tables

Amazon Keyspaces (for Apache Cassandra) now supports table pre-warming, allowing you to proactively prepare both new and existing tables to meet future traffic demands. This capability is available for tables in both provisioned and on-demand capacity modes, including multi-Region replicated tables.

Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service. Amazon Keyspaces is serverless, so you pay only for the resources that you use, and you can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. While Amazon Keyspaces automatically scales to accommodate growing workloads, certain scenarios such as application launches, marketing campaigns, or seasonal events can create sudden traffic spikes that exceed normal scaling patterns. With pre-warming, you can now manually specify your expected peak throughput requirements during table creation or update operations, ensuring your tables are immediately ready to handle large traffic surges without scaling delays or increased error rates. The pre-warming process is non-disruptive and runs asynchronously, allowing you to continue making other table modifications while pre-warming is in progress. Pre-warming incurs a one-time charge based on the difference between your specified values and the baseline capacity.

The feature is now available in all AWS Commercial and AWS GovCloud (US) Regions where Amazon Keyspaces is offered. To learn more, visit the pre-warming launch blog or the Amazon Keyspaces documentation.
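The update flow described above can be sketched as a Keyspaces UpdateTable request that carries the expected peak throughput. The `warmThroughput` field name and its shape are assumptions mirroring the feature name; consult the Amazon Keyspaces API reference for the exact parameters.

```python
def build_prewarm_request(keyspace, table, read_units, write_units):
    """Build UpdateTable parameters that pre-warm a table for an expected
    peak of read/write request units per second (field names assumed)."""
    return {
        "keyspaceName": keyspace,
        "tableName": table,
        "warmThroughput": {  # assumed field, mirrors the WarmThroughput feature
            "readUnitsPerSecond": read_units,
            "writeUnitsPerSecond": write_units,
        },
    }

# Usage (requires boto3 and AWS credentials):
# import boto3
# keyspaces = boto3.client("keyspaces")
# keyspaces.update_table(**build_prewarm_request(
#     "my_keyspace", "orders", read_units=50000, write_units=20000))
```

Because pre-warming runs asynchronously, the call returns immediately; the table remains usable while capacity is prepared in the background.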
Source: aws.amazon.com

Amazon Bedrock now supports server-side custom tools using the Responses API

Amazon Bedrock now supports server-side tools in the Responses API using OpenAI API-compatible service endpoints. Bedrock already supports client-side tool use with the Converse, Chat Completions, and Responses APIs. Now, with the launch of server-side tool use for Responses API, Amazon Bedrock calls the tools directly without going through a client, enabling your AI applications to perform real-time, multi-step actions such as searching the web, executing code, and updating databases within the organizational, governance, compliance, and security boundaries of your AWS accounts. You can either submit your own custom Lambda function to run custom tools or use AWS-provided tools, such as notes and tasks.
Server-side tool use with the Responses API is available starting today with OpenAI’s GPT OSS 20B/120B models in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Mumbai), South America (São Paulo), Europe (Ireland), Europe (London), and Europe (Milan) AWS Regions. Support for other Regions and models is coming soon.
To get started, visit the service documentation.
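A minimal sketch of a Responses API request that registers a Lambda-backed server-side tool. The endpoint URL pattern, tool type identifier, and Lambda field name are all assumptions; see the Bedrock OpenAI-compatibility documentation for the real values.

```python
def build_responses_request(model, prompt, lambda_arn=None):
    """Build a Responses API payload enabling a server-side custom tool
    backed by a Lambda function (tool fields are assumptions)."""
    tools = []
    if lambda_arn:
        tools.append({
            "type": "custom",          # assumed tool type identifier
            "lambda_arn": lambda_arn,  # assumed field for the Lambda backend
        })
    return {
        "model": model,   # e.g. a GPT OSS model ID available on Bedrock
        "input": prompt,
        "tools": tools,
    }

# Usage (requires the openai package; the base_url pattern is an assumption):
# from openai import OpenAI
# client = OpenAI(
#     base_url="https://bedrock-runtime.us-east-1.amazonaws.com/openai/v1")
# resp = client.responses.create(**build_responses_request(
#     "openai.gpt-oss-120b", "Summarize today's open tickets",
#     lambda_arn="arn:aws:lambda:us-east-1:111122223333:function:my-tool"))
```

Unlike client-side tool use, Bedrock invokes the tool itself during the request, so no tool-call round trip through your application is needed.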
Source: aws.amazon.com