AWS STS now supports validation of select identity-provider-specific claims from Google, GitHub, CircleCI, and OCI

AWS Security Token Service (STS) now supports validation of select identity-provider-specific claims from Google, GitHub, CircleCI, and Oracle Cloud Infrastructure in IAM role trust policies and resource control policies for OpenID Connect (OIDC) federation into AWS via the AssumeRoleWithWebIdentity API. With this new capability, you can reference these custom claims as condition keys in IAM role trust policies and resource control policies, expanding your ability to implement fine-grained access control for federated identities and helping you establish your data perimeters. This enhancement builds upon IAM's existing OIDC federation capabilities, which allow you to grant temporary AWS credentials to users authenticated through external OIDC-compatible identity providers.
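As an illustration, a role trust policy for GitHub OIDC federation might condition access on a provider-specific claim. This is a minimal sketch: the account ID and organization name are placeholders, and the exact condition key names for the newly supported claims (e.g. `repository_owner`) are an assumption here — consult the IAM documentation for the condition keys your provider exposes.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:repository_owner": "example-org"
        }
      }
    }
  ]
}
```

A policy like this restricts role assumption to workflows whose OIDC token carries the expected audience and repository-owner claim, rather than trusting the provider as a whole.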
Source: aws.amazon.com

Announcing memory-optimized instance bundles for Amazon Lightsail

Amazon Lightsail now offers memory-optimized instance bundles with up to 512 GB of memory. The new instance bundles are available in 7 sizes, with Linux and Windows operating system (OS) and application blueprints, for both IPv6-only and dual-stack networking types. You can create instances using the new bundles with pre-configured OS and application blueprints including WordPress, cPanel & WHM, Plesk, Drupal, Magento, MEAN, LAMP, Node.js, Ruby on Rails, Amazon Linux, Ubuntu, CentOS, Debian, AlmaLinux, and Windows. The new memory-optimized instance bundles enable you to run memory-intensive workloads that require high RAM-to-vCPU ratios in Lightsail. These high-memory instance bundles are ideal for workloads such as in-memory databases, real-time big data analytics, in-memory caching systems, high-performance computing (HPC) applications, and large-scale enterprise applications that process extensive datasets in memory. These new bundles are now available in all AWS Regions where Amazon Lightsail is available. For more information on pricing, see the Amazon Lightsail pricing page.
Source: aws.amazon.com

DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct models are now available on SageMaker JumpStart

Today, AWS announced the availability of DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These three models bring specialized capabilities spanning document intelligence, multilingual coding, advanced multimodal reasoning, and vision-language understanding, enabling customers to build sophisticated AI applications across diverse use cases on AWS infrastructure. Each model addresses a different enterprise AI challenge: DeepSeek OCR explores visual-text compression for document processing, extracting structured information from forms, invoices, diagrams, and complex documents with dense text layouts. MiniMax M2.1 is optimized for coding, tool use, instruction following, and long-horizon planning; it automates multilingual software development and executes complex, multi-step office workflows, empowering developers to build autonomous applications. Qwen3-VL-8B-Instruct delivers superior text understanding and generation, deeper visual perception and reasoning, extended context length, enhanced spatial and video dynamics comprehension, and stronger agent interaction capabilities. With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started with these models, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
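A deployment via the SageMaker Python SDK might look like the following sketch. It assumes an AWS account with SageMaker permissions; the `model_id` and `instance_type` values are placeholders — look up the exact model identifier and supported instance types in the JumpStart model catalog before running.

```python
# Sketch: deploying a JumpStart model with the SageMaker Python SDK.
# Requires AWS credentials and the sagemaker package; model_id is a
# placeholder, not a confirmed identifier for this release.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="deepseek-ocr")  # placeholder ID
predictor = model.deploy(
    instance_type="ml.g5.2xlarge",  # choose per the model's requirements
)

response = predictor.predict({"inputs": "..."})

predictor.delete_endpoint()  # clean up to stop incurring charges
```

The `deploy()` call provisions a real-time endpoint, so remember to delete it when you are done experimenting.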
Source: aws.amazon.com

Amazon Connect launches improved wait time estimates

Amazon Connect now delivers improved estimated wait time metrics for queues and enqueued contacts. This allows contact centers to set accurate customer expectations, provide convenient options such as callbacks when hold times are extended, and balance workloads effectively across multiple queues. By leveraging the improved estimated wait time metrics, contact centers can make more strategic routing choices across queues while gaining enhanced visibility for better resource planning. For example, a customer calling about billing during peak hours with a 15-minute wait can be seamlessly transferred to a cross-trained team with 2-minute availability, getting help faster without repeating their issue. The metric works seamlessly with routing criteria and agent proficiency configurations.
Source: aws.amazon.com

AWS HealthImaging adds JPEG XL support

AWS HealthImaging now supports storing and retrieving lossy compressed medical images in the JPEG XL transfer syntax (1.2.840.10008.1.2.4.112). It is now simpler than ever to integrate HealthImaging with applications that require JPEG XL encoded DICOM data, such as digital pathology whole slide imaging systems.
With this launch, HealthImaging stores your JPEG XL lossy image data without transcoding, which maintains the fidelity of your data and reduces your storage costs. Further, you can retrieve stored image frames in the JPEG XL format without the latency of transcoding at retrieval time.
Source: aws.amazon.com