Amazon CloudFront announces mutual TLS support for origins

Amazon CloudFront announces support for mutual TLS authentication (mTLS) for origins, a security protocol that enables customers to verify that requests to their origin servers come only from their authorized CloudFront distributions using TLS certificates. This certificate-based authentication provides cryptographic verification of CloudFront’s identity, eliminating the need for customers to manage custom security controls.

Previously, verifying that requests came from CloudFront distributions required customers to build and maintain custom authentication solutions such as shared secret headers or IP allow-lists, particularly for public or externally hosted origins. These approaches required ongoing operational overhead to rotate secrets, update allow-lists, and maintain custom code. Now, with origin mTLS support, customers can implement a standardized, certificate-based authentication approach that eliminates this operational burden. This enables organizations to enforce strict authentication for their proprietary content, ensuring that only verified CloudFront distributions can establish connections to backend infrastructure ranging from AWS origins and on-premises servers to third-party cloud providers and external CDNs.

Customers can use client certificates issued by AWS Private Certificate Authority or by third-party private Certificate Authorities, which they import through AWS Certificate Manager. Origin mTLS can be configured using the AWS Management Console, CLI, SDK, CDK, or CloudFormation, and is supported for all origins that support mutual TLS on AWS, such as Application Load Balancer and API Gateway, as well as on-premises and custom origins. There is no additional charge for origin mTLS, and it is also available in the Business and Premium flat-rate pricing plans. For detailed implementation guidance and best practices, visit the CloudFront origin mutual TLS documentation.
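The certificate-import step described above can be sketched with boto3. This is a minimal sketch under assumptions: the file paths, region, and function names are illustrative, and the actual call to ACM is only made inside the helper (the real certificate ARN would then be referenced in your CloudFront origin configuration).

```python
"""Hedged sketch: importing a client certificate into AWS Certificate
Manager (ACM) for use with CloudFront origin mTLS. Paths, region, and
helper names are assumptions, not part of the announcement."""


def build_import_params(cert_pem, key_pem, chain_pem=None):
    """Assemble keyword arguments for acm.import_certificate()."""
    params = {"Certificate": cert_pem, "PrivateKey": key_pem}
    if chain_pem:
        params["CertificateChain"] = chain_pem
    return params


def import_client_certificate(cert_path, key_path, chain_path=None,
                              region="us-east-1"):
    """Read PEM files and import them into ACM; returns the certificate ARN."""
    import boto3  # lazy import: requires AWS credentials at call time

    acm = boto3.client("acm", region_name=region)
    with open(cert_path, "rb") as f:
        cert = f.read()
    with open(key_path, "rb") as f:
        key = f.read()
    chain = None
    if chain_path:
        with open(chain_path, "rb") as f:
            chain = f.read()
    resp = acm.import_certificate(**build_import_params(cert, key, chain))
    return resp["CertificateArn"]
```

The chain is optional because ACM accepts imports without one when the issuing CA is directly trusted.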
Source: aws.amazon.com

AWS STS now supports validation of select identity provider specific claims from Google, GitHub, CircleCI and OCI

AWS Security Token Service (STS) now supports validation of select identity provider specific claims from Google, GitHub, CircleCI and Oracle Cloud Infrastructure in IAM role trust policies and resource control policies for OpenID Connect (OIDC) federation into AWS via the AssumeRoleWithWebIdentity API. With this new capability, you can reference these custom claims as condition keys in IAM role trust policies and resource control policies, expanding your ability to implement fine-grained access control for federated identities and helping you establish your data perimeters. This enhancement builds upon IAM’s existing OIDC federation capabilities, which allow you to grant temporary AWS credentials to users authenticated through external OIDC-compatible identity providers.
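Provider claims appear in trust policies as condition keys prefixed with the provider hostname. A minimal sketch, assuming a GitHub Actions OIDC provider: the account ID and repository are placeholders, and the example pins the long-standing `sub` and `aud` claims; the newly supported provider-specific claims are referenced as condition keys in the same `<provider-host>:<claim>` form.

```python
"""Hedged sketch: building an IAM role trust policy for GitHub Actions
OIDC federation via sts:AssumeRoleWithWebIdentity. The account ID and
repository are illustrative assumptions."""
import json

GITHUB_OIDC = "token.actions.githubusercontent.com"


def build_trust_policy(account_id, repo):
    """Return a trust policy allowing only one repo's main branch."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated":
                    f"arn:aws:iam::{account_id}:oidc-provider/{GITHUB_OIDC}",
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    # Token must be minted for STS ...
                    f"{GITHUB_OIDC}:aud": "sts.amazonaws.com",
                    # ... and for this repository's main branch only.
                    f"{GITHUB_OIDC}:sub": f"repo:{repo}:ref:refs/heads/main",
                },
            },
        }],
    }


policy = build_trust_policy("111122223333", "example-org/example-repo")
print(json.dumps(policy, indent=2))
```

Additional provider claims would be added as further keys inside the same `Condition` block.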
Source: aws.amazon.com

Announcing memory-optimized instance bundles for Amazon Lightsail

Amazon Lightsail now offers memory-optimized instance bundles with up to 512 GB of memory. The new instance bundles are available in 7 sizes, with Linux and Windows operating system (OS) and application blueprints, for both IPv6-only and dual-stack networking types. You can create instances using the new bundles with pre-configured OS and application blueprints including WordPress, cPanel & WHM, Plesk, Drupal, Magento, MEAN, LAMP, Node.js, Ruby on Rails, Amazon Linux, Ubuntu, CentOS, Debian, AlmaLinux, and Windows.

The new memory-optimized instance bundles enable you to run memory-intensive workloads that require high RAM-to-vCPU ratios in Lightsail. These high-memory bundles are ideal for workloads such as in-memory databases, real-time big data analytics, in-memory caching systems, high-performance computing (HPC) applications, and large-scale enterprise applications that process extensive datasets in memory.

The new bundles are now available in all AWS Regions where Amazon Lightsail is available. For pricing details, see the Amazon Lightsail pricing page.
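Creating an instance from one of the new bundles can be sketched with boto3's Lightsail client. This is a sketch under assumptions: the bundle and blueprint IDs below are placeholders, not real identifiers; the actual IDs for the memory-optimized bundles can be listed with `get_bundles()` and `get_blueprints()`.

```python
"""Hedged sketch: launching a Lightsail instance from a memory-optimized
bundle with boto3. Bundle/blueprint IDs and the zone are illustrative
assumptions -- enumerate real IDs via get_bundles()/get_blueprints()."""


def build_create_params(name, zone, blueprint_id, bundle_id):
    """Assemble keyword arguments for lightsail.create_instances()."""
    return {
        "instanceNames": [name],
        "availabilityZone": zone,
        "blueprintId": blueprint_id,
        "bundleId": bundle_id,
    }


def create_instance(name, zone="us-east-1a",
                    blueprint_id="ubuntu_22_04",  # assumed blueprint ID
                    bundle_id="memory_large"):    # hypothetical bundle ID
    import boto3  # lazy import: requires AWS credentials at call time

    lightsail = boto3.client("lightsail")
    return lightsail.create_instances(
        **build_create_params(name, zone, blueprint_id, bundle_id))
```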
Source: aws.amazon.com

DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct models are now available on SageMaker JumpStart

Today, AWS announced the availability of DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These three models bring specialized capabilities spanning document intelligence, multilingual coding, advanced multimodal reasoning, and vision-language understanding, enabling customers to build sophisticated AI applications across diverse use cases on AWS infrastructure.

Each model addresses a different enterprise AI challenge: DeepSeek OCR explores visual-text compression for document processing and can extract structured information from forms, invoices, diagrams, and complex documents with dense text layouts. MiniMax M2.1 is optimized for coding, tool use, instruction following, and long-horizon planning; it automates multilingual software development and executes complex, multi-step office workflows, empowering developers to build autonomous applications. Qwen3-VL-8B-Instruct delivers superior text understanding and generation, deeper visual perception and reasoning, extended context length, enhanced spatial and video dynamics comprehension, and stronger agent interaction capabilities.

With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
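Deployment via the SageMaker Python SDK follows the usual JumpStart pattern. A minimal sketch, assuming a hypothetical model ID and instance type (look up the real IDs for these three models in the JumpStart model catalog); the request payload shape also varies by model and is assumed here.

```python
"""Hedged sketch: deploying a JumpStart model with the SageMaker Python
SDK. The model ID, instance type, and payload shape are illustrative
assumptions -- check the JumpStart catalog for the real values."""


def build_chat_payload(prompt, max_tokens=256):
    """Messages-style inference payload; the exact schema is model-specific."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def deploy(model_id="qwen3-vl-8b-instruct",   # hypothetical model ID
           instance_type="ml.g5.2xlarge"):    # assumed instance type
    # Lazy import: requires the sagemaker SDK and AWS credentials.
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id=model_id)
    return model.deploy(initial_instance_count=1,
                        instance_type=instance_type)
```

The returned predictor's `predict()` would then be called with a payload like the one `build_chat_payload` produces.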
Source: aws.amazon.com

Amazon Connect launches improved wait time estimates

Amazon Connect now delivers improved estimated wait time metrics for queues and enqueued contacts. The improved metrics allow contact centers to set accurate customer expectations, offer convenient options such as callbacks when hold times are extended, and balance workloads effectively across multiple queues. By leveraging them, contact centers can make more strategic routing choices across queues while gaining enhanced visibility for better resource planning. For example, a customer calling about billing during peak hours with a 15-minute wait can be seamlessly transferred to a cross-trained team with 2-minute availability, getting help faster without repeating their issue. The metric works seamlessly with routing criteria and agent proficiency configurations.
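Queue metrics like these can be pulled programmatically with the Connect `GetMetricDataV2` API. A minimal sketch, with assumptions: the ARNs and queue ID are placeholders, and the metric name passed in is hypothetical; consult the Connect metrics documentation for the exact name of the improved estimated wait time metric.

```python
"""Hedged sketch: querying Amazon Connect queue metrics via
GetMetricDataV2 with boto3. The ARNs, queue ID, and metric name are
illustrative assumptions."""
from datetime import datetime, timedelta, timezone


def build_metric_request(instance_arn, queue_id, metric_name,
                         window_minutes=30):
    """Assemble keyword arguments for connect.get_metric_data_v2()."""
    end = datetime.now(timezone.utc)
    return {
        "ResourceArn": instance_arn,
        "StartTime": end - timedelta(minutes=window_minutes),
        "EndTime": end,
        "Filters": [{"FilterKey": "QUEUE", "FilterValues": [queue_id]}],
        "Metrics": [{"Name": metric_name}],  # metric name: assumed placeholder
    }


def fetch_queue_metric(instance_arn, queue_id, metric_name):
    import boto3  # lazy import: requires AWS credentials at call time

    connect = boto3.client("connect")
    return connect.get_metric_data_v2(
        **build_metric_request(instance_arn, queue_id, metric_name))
```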
Source: aws.amazon.com