AWS Lambda supports up to 32 GB of memory and 16 vCPUs for Lambda Managed Instances

AWS Lambda now supports up to 32 GB of memory and 16 vCPUs for functions running on Lambda Managed Instances, enabling customers to run compute-intensive workloads such as large-scale data processing, media transcoding, and scientific simulations without managing any infrastructure. Lambda Managed Instances lets you run Lambda functions on managed Amazon EC2 instances with built-in routing, load balancing, and auto-scaling, giving you access to specialized compute configurations, including the latest-generation processors and high-bandwidth networking, with no operational overhead.

Customers building compute-intensive applications such as data processing pipelines, high-throughput API backends, and batch computation workloads require substantial memory and CPU resources to process large datasets, serve low-latency responses at scale, and run complex computations efficiently. Previously, function execution environments on Lambda were limited to 10 GB of memory and approximately 6 vCPUs, with no option to customize the memory-to-vCPU ratio. Functions on Lambda Managed Instances can now be configured with up to 32 GB of memory and a memory-to-vCPU ratio of 2:1, 4:1, or 8:1, allowing customers to select the right balance of memory and compute for their workload. For example, at 32 GB of memory, customers can configure 16 vCPUs (2:1), 8 vCPUs (4:1), or 4 vCPUs (8:1), depending on whether their workload is CPU-intensive or memory-intensive.

This feature is available in all AWS Regions where Lambda Managed Instances is generally available. You can configure these settings using the AWS Console, AWS CLI, AWS CloudFormation, AWS CDK, or AWS SAM. To learn more, visit the AWS Lambda Managed Instances product page and documentation.
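As a rough illustration, the sketch below uses the standard Lambda configuration API (Python, boto3) to raise a function's memory to 32 GB. The function name is a placeholder, and the memory-to-vCPU ratio is a separate Managed Instances setting whose parameter is intentionally not shown; treat this as a partial sketch and consult the Lambda Managed Instances documentation for the full configuration surface.

```python
# Hedged sketch: raising a function's memory via the standard Lambda API.
# MemorySize is specified in MB; 32 GB = 32768 MB is assumed to be accepted
# for functions on Lambda Managed Instances per this announcement. The
# memory-to-vCPU ratio (2:1, 4:1, or 8:1) is configured separately on the
# Managed Instances side and is not shown here.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="my-data-processing-fn",  # placeholder function name
    MemorySize=32 * 1024,                  # 32 GB; at a 2:1 ratio this pairs with 16 vCPUs
)
```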
Source: aws.amazon.com

AWS Lambda increases the file descriptor limit to 4,096 for functions running on Lambda Managed Instances

AWS Lambda increases the file descriptor limit from 1,024 to 4,096, a 4x increase, for functions running on Lambda Managed Instances (LMI). This capability enables customers to run I/O-intensive workloads, such as high-concurrency web services and file-heavy data processing pipelines, without running into file descriptor limits. LMI enables you to run Lambda functions on managed Amazon EC2 instances with built-in routing, load balancing, and auto-scaling, giving you access to specialized compute configurations, including the latest-generation processors and high-bandwidth networking, with no operational overhead.

Customers use Lambda functions to build a wide range of serverless applications such as event-driven workloads, web applications, and AI-driven workflows. These applications rely on file descriptors for operations such as opening files, establishing network socket connections to external services and databases, and managing concurrent I/O streams for data processing. Each open file, network socket, or internal resource consumes one file descriptor. Previously, Lambda supported a maximum of 1,024 file descriptors. However, LMI allows multiple requests to be processed simultaneously, which often requires a higher number of file descriptors. With this launch, AWS Lambda increases the file descriptor limit to 4,096, allowing customers to run I/O-intensive workloads, maintain larger connection pools, and effectively utilize multi-concurrency for functions running on LMI.

This feature is available in all AWS Regions where AWS Lambda Managed Instances is generally available. To get started, visit the AWS Lambda Managed Instances documentation.
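Since the limit applies per execution environment, a quick way to confirm what your function actually gets is to read the process's file descriptor rlimit at runtime. A minimal sketch in Python (the handler is illustrative):

```python
# Minimal sketch: reading the file descriptor limit from inside a Lambda
# handler to confirm the environment's rlimit (4,096 on Lambda Managed
# Instances per this announcement, 1,024 elsewhere).
import resource

def handler(event, context):
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return {"fd_soft_limit": soft, "fd_hard_limit": hard}
```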
Source: aws.amazon.com

Amazon ECS Managed Instances now supports FIPS-certified workloads on Graviton and GPU-accelerated instances in AWS GovCloud (US) Regions

Starting today, customers can deploy their Graviton-based and GPU-accelerated workloads on Amazon Elastic Container Service (Amazon ECS) Managed Instances in a Federal Information Processing Standard (FIPS) compliant mode in the AWS GovCloud (US) Regions. FIPS is a U.S. and Canadian government standard that specifies the security requirements for cryptographic modules that protect sensitive information.
In the AWS GovCloud (US) Regions, Amazon ECS Managed Instances enables FIPS compliance by default. ECS Managed Instances communicates through FIPS-compliant endpoints, uses appropriately configured cryptographic modules, and boots the underlying kernel in FIPS mode. Customers with federal compliance requirements can run workloads with FIPS-validated cryptographic modules across a broad range of instance types, including Graviton-based, GPU-accelerated, network-optimized, and burstable performance instances.
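On the client side, AWS SDKs can be pointed at FIPS endpoints with a standard configuration switch. A minimal sketch using Python and boto3 (the service and Region are illustrative; the same behavior is available through the AWS_USE_FIPS_ENDPOINT environment variable):

```python
# Minimal sketch: directing an AWS SDK client to FIPS endpoints where they
# exist. use_fips_endpoint is a standard botocore Config option.
import boto3
from botocore.config import Config

ecs = boto3.client(
    "ecs",
    region_name="us-gov-west-1",            # illustrative GovCloud (US) Region
    config=Config(use_fips_endpoint=True),  # route API calls to the FIPS endpoint
)
print(ecs.meta.endpoint_url)  # e.g. https://ecs-fips.us-gov-west-1.amazonaws.com
```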
To learn more about FIPS, refer to FIPS on AWS and AWS Fargate Federal Information Processing Standard (FIPS-140). To get started with ECS Managed Instances, use the AWS Console, Amazon ECS MCP Server, ECS Express Mode, or your favorite infrastructure-as-code tooling to enable it in a new or existing Amazon ECS cluster. You are charged for the management of the compute provisioned, in addition to your regular Amazon EC2 costs. To learn more about ECS Managed Instances, visit the feature page, documentation, and AWS News launch blog.
Source: aws.amazon.com

Amazon SageMaker Studio launches support for Kiro and Cursor IDEs as remote IDEs

Today, AWS announces the ability to connect remotely from the Kiro and Cursor IDEs to Amazon SageMaker Studio. This new capability allows data scientists, ML engineers, and developers to leverage their Kiro and Cursor setups – including spec-driven development, conversational coding, and automated feature generation – while accessing the scalable compute resources of Amazon SageMaker Studio. By connecting Kiro or Cursor to SageMaker Studio using the AWS Toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing agentic development workflows within a single environment for all your AWS analytics and AI/ML services.

SageMaker Studio offers a broad set of fully managed cloud interactive development environments (IDEs), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software), as well as remote connectivity from VS Code. Starting today, you can also use your customized local Kiro or Cursor setup – complete with specs, steering files, and hooks – while accessing your compute resources and data on Amazon SageMaker. You can authenticate using the AWS Toolkit extension in Kiro or Cursor, or through SageMaker Studio’s web interface. Once authenticated, connect to any of your SageMaker Studio development environments in a few clicks. You maintain the same security boundaries as SageMaker Studio’s web-based environments while developing AI models and analyzing data in the local IDE of your choice – Kiro or Cursor. To learn more, refer to the SageMaker user guide.
Source: aws.amazon.com

Amazon EC2 High Memory U7i instances now available in Europe (Milan)

Amazon EC2 High Memory U7i-8TB instances (u7i-8tb.112xlarge) and U7i-12TB instances (u7i-12tb.224xlarge) are now available in the AWS Europe (Milan) Region. U7i instances are part of the AWS 7th generation and are powered by custom fourth-generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-8tb instances offer 8 TiB of DDR5 memory, and U7i-12tb instances offer 12 TiB of DDR5 memory, enabling customers to scale transaction processing throughput in fast-growing data environments.
U7i-8tb instances deliver 448 vCPUs; U7i-12tb instances deliver 896 vCPUs. Both instances support up to 100 Gbps of Amazon EBS bandwidth for faster data loading and backups, 100 Gbps of network bandwidth, and ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.
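To confirm these sizes are offered in a given Region, you can query EC2’s instance type offerings. A minimal sketch with Python and boto3 (eu-south-1 is Europe (Milan)):

```python
# Minimal sketch: verifying that the U7i high-memory sizes are offered in
# Europe (Milan) using the EC2 DescribeInstanceTypeOfferings API.
import boto3

ec2 = boto3.client("ec2", region_name="eu-south-1")  # Europe (Milan)

resp = ec2.describe_instance_type_offerings(
    LocationType="region",
    Filters=[{
        "Name": "instance-type",
        "Values": ["u7i-8tb.112xlarge", "u7i-12tb.224xlarge"],
    }],
)
print([o["InstanceType"] for o in resp["InstanceTypeOfferings"]])
```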
To learn more about U7i instances, visit the High Memory instances page.
Source: aws.amazon.com

Amazon EC2 R8gd instances are now available in additional AWS Regions

Amazon Elastic Compute Cloud (Amazon EC2) R8gd instances with up to 11.4 TB of local NVMe-based SSD block-level storage are now available in the US West (N. California), Asia Pacific (Seoul, Hong Kong, Jakarta), Africa (Cape Town), and Canada West (Calgary) AWS Regions. These instances are powered by AWS Graviton4 processors, delivering up to 30% better performance than Graviton3-based instances. They offer up to 40% higher performance for I/O-intensive database workloads and up to 20% faster query results for I/O-intensive real-time data analytics than comparable AWS Graviton3-based instances.

These instances are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low-latency local storage. Each instance is available in 12 sizes. They provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). Additionally, customers can adjust the network and Amazon EBS bandwidth on these instances by 25% using the EC2 instance bandwidth weighting configuration, providing greater flexibility in allocating bandwidth resources to better optimize workloads. These instances offer Elastic Fabric Adapter (EFA) networking on the 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes.

To learn more, see Amazon R8gd Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
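Bandwidth weighting can be adjusted on a supported instance through the EC2 API. A hedged sketch with Python and boto3 (the instance ID is a placeholder; 'vpc-1' would favor network bandwidth instead, and 'default' restores the baseline split):

```python
# Hedged sketch: shifting an R8gd instance's bandwidth allocation toward EBS
# using the EC2 bandwidth weighting configuration described above.
import boto3

ec2 = boto3.client("ec2")

ec2.modify_instance_network_performance_options(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    BandwidthWeighting="ebs-1",        # weight bandwidth toward Amazon EBS
)
```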
Source: aws.amazon.com

AWS AppConfig adds enhanced targeting during feature flag rollout

AWS AppConfig enhances its deployment capabilities with new controls that allow customers to target feature flag and configuration data values to specific segments or individual users during the lifecycle of a gradual rollout. One of AWS AppConfig’s key safety guardrails is the ability for customers to roll out feature flag or configuration data changes slowly, over the course of minutes or hours. This progressive delivery allows customers to move more safely and limit the impact of unexpected changes. AWS AppConfig uses customer-provided entity identifiers to make specific feature flag or dynamic configuration data “sticky” to individual target segments during the lifecycle of these gradual rollouts. This targeting capability, delivered through the AppConfig Agent, provides fine-grained control, down to an individual user ID or set of IDs, while updates are being deployed.
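In practice, the entity identifier travels with each configuration request to the locally running AppConfig Agent. A hedged sketch in Python (the application, environment, profile, and context key are illustrative; check the AppConfig Agent documentation for the exact context syntax your agent version supports):

```python
# Hedged sketch: fetching flags through the AppConfig Agent's local HTTP
# endpoint (port 2772 by default) while passing an entity identifier so the
# agent can keep this user pinned to the same values during a gradual rollout.
import json
import urllib.parse
import urllib.request

APP, ENV, PROFILE = "my-app", "prod", "my-feature-flags"  # illustrative names
context = urllib.parse.quote("userId=user-1234")          # entity identifier

url = (
    f"http://localhost:2772/applications/{APP}/environments/{ENV}"
    f"/configurations/{PROFILE}?context={context}"
)

with urllib.request.urlopen(url) as resp:
    flags = json.load(resp)

print(flags)
```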
Source: aws.amazon.com

Aurora DSQL launches connector that simplifies building Ruby applications

Today we are announcing the release of the Aurora DSQL Connector for Ruby (pg gem), which makes it easy to build Ruby applications on Aurora DSQL. The connector streamlines authentication and eliminates security risks associated with traditional user-generated passwords by automatically generating a token for each connection, ensuring valid tokens are always used while maintaining full compatibility with existing pg gem features. The connector handles IAM token generation, SSL configuration, and connection pooling, enabling customers to scale from simple scripts to production workloads without changing their authentication approach. It also provides opt-in optimistic concurrency control (OCC) retries with exponential backoff, custom IAM credential providers, and AWS profile support, giving customers flexibility in how they manage their AWS credentials and handle transient failures.

To get started, visit the Connectors for Aurora DSQL documentation page. For code examples, visit our GitHub page for the Ruby connector. Get started with Aurora DSQL for free with the AWS Free Tier. To learn more about Aurora DSQL, visit the webpage.
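For context, the chore the connector takes over is per-connection IAM token generation. A hedged illustration of the manual flow, shown in Python with boto3's Aurora DSQL client (the cluster endpoint is a placeholder, and the helper-method signature should be checked against the boto3 documentation):

```python
# Hedged sketch of the manual IAM auth-token flow that the Aurora DSQL
# connectors automate. Tokens are short-lived and must be regenerated for
# each new connection; the connector does this for pg gem users.
import boto3

CLUSTER_ENDPOINT = "your-cluster.dsql.us-east-1.on.aws"  # placeholder
REGION = "us-east-1"

client = boto3.client("dsql", region_name=REGION)
token = client.generate_db_connect_admin_auth_token(CLUSTER_ENDPOINT, REGION)

# The token is then supplied as the database password when opening an
# SSL-required connection (e.g. via the pg gem in Ruby).
print(token[:32], "...")
```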
Source: aws.amazon.com

Amazon Bedrock AgentCore adds support for Chrome policies and custom root CA

Amazon Bedrock AgentCore now enables customers to configure Chrome Enterprise policies for AgentCore Browser and to specify custom root Certificate Authority (CA) certificates for both AgentCore Browser and Code Interpreter. These enhancements help meet enterprise requirements when AI agents operate within organizations that have strict security policies and internal infrastructure using custom certificates.

With Chrome policies, you can leverage more than 100 configurable policies for managing browser behavior across security, URL filtering, content settings, and more to enforce organizational compliance requirements. For example, restrict agents to specific URLs for kiosk-mode operations, disable password managers and downloads for data-entry tasks, or implement URL blocklists for regulatory compliance. Custom root CA support enables agents to seamlessly connect to internal services like Artifactory, Jira, and finance portals that use SSL certificates signed by your organization’s internal Certificate Authority, and to work with corporate proxies performing TLS interception.

These features are available in all 14 AWS Regions where Amazon Bedrock AgentCore Browser and Code Interpreter are available: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), and Canada (Central). To learn more, visit the AgentCore Browser documentation.
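For a sense of what such a policy set looks like, here is an illustrative example expressed as a Python dict. The policy names are standard Chrome Enterprise policies; how the document is attached to an AgentCore Browser resource is not shown here, so see the AgentCore Browser documentation for the exact API shape:

```python
# Illustrative Chrome Enterprise policy set of the kind described above.
# URLAllowlist, URLBlocklist, PasswordManagerEnabled, and DownloadRestrictions
# are standard Chrome Enterprise policy names; the URL is a placeholder.
chrome_policies = {
    # Kiosk-style confinement: allow only the sites the agent must use.
    "URLAllowlist": ["https://internal.example.com/*"],
    "URLBlocklist": ["*"],  # block everything else
    # Harden data-entry tasks: no stored credentials, no downloads.
    "PasswordManagerEnabled": False,
    "DownloadRestrictions": 3,  # 3 = block all downloads
}
```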
Source: aws.amazon.com

Amazon SageMaker Unified Studio launches support for remote connection from Cursor IDE

Today, AWS announces remote connection from Cursor IDE to Amazon SageMaker Unified Studio via the AWS Toolkit extension. This new capability allows data scientists, ML engineers, and developers to leverage their Cursor setup – including its AI-powered code completion, natural language editing, and multi-file editing capabilities – while accessing the scalable compute resources of Amazon SageMaker. By connecting Cursor to SageMaker Unified Studio using the AWS Toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing AI-assisted development workflows within a single environment for all your AWS analytics and AI/ML services.
SageMaker Unified Studio, part of the next generation of Amazon SageMaker, offers a broad set of fully managed cloud interactive development environments (IDEs), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software). Starting today, you can also use your customized local Cursor setup – complete with custom rules, extensions, and AI model preferences – while accessing your compute resources and data on Amazon SageMaker. Since Cursor is built on Code-OSS, authentication is secure via IAM through the AWS Toolkit extension, giving you access to all your SageMaker Unified Studio domains and projects. This integration provides a convenient path from your local AI-powered development environment to scalable infrastructure for running workloads across data processing, SQL analytics services like Amazon EMR, AWS Glue, and Amazon Athena, and ML workflows – all with enterprise-grade security including customer-managed encryption keys and AWS IAM integration.
This feature is available in all AWS Regions where Amazon SageMaker Unified Studio is available. To learn more, visit the local IDE support documentation.
Source: aws.amazon.com