AWS Management Console now supports settings to control service and Region visibility

Today, AWS announces the general availability of Visible services and Visible Regions account settings in the AWS Management Console. These settings allow you to customize which services and Regions appear in the Management Console for authorized users in your account, helping your users easily identify what is available to them and simplifying navigation.

You can configure these settings in the AWS Management Console under Unified Settings in the Account Settings tab. You can also configure these settings programmatically via User Experience Customization (UXC) in the AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), AWS Cloud Development Kit (CDK), or AWS CloudFormation. The Visible services and Visible Regions settings are available in AWS Commercial Regions at no additional cost. Visit the AWS User Experience Customization documentation page and API guide to learn more.
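The announcement itself includes no code; as a rough sketch of the programmatic path using the AWS SDK for Python (boto3), note that the service identifier, operation names, and request fields below are assumptions for illustration only, so consult the AWS User Experience Customization API guide for the actual shapes.

```python
import boto3

# Hypothetical sketch: the "uxc" service name and both the operation and
# field names are assumptions, not confirmed API surface; see the UXC
# API guide for the real client and request shapes.
uxc = boto3.client("uxc")

# Restrict the console to a curated set of services for this account.
uxc.update_visible_services(  # hypothetical operation name
    visibleServices=["s3", "lambda", "dynamodb"]
)

# Limit which Regions appear in the console's Region selector.
uxc.update_visible_regions(  # hypothetical operation name
    visibleRegions=["us-east-1", "eu-central-1"]
)
```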
Source: aws.amazon.com

AWS Lambda supports up to 32 GB of memory and 16 vCPUs for Lambda Managed Instances

AWS Lambda now supports up to 32 GB of memory and 16 vCPUs for functions running on Lambda Managed Instances, enabling customers to run compute-intensive workloads such as large-scale data processing, media transcoding, and scientific simulations without managing any infrastructure. Customers can also configure the memory-to-vCPU ratio (2:1, 4:1, or 8:1) to match the resource profile of their workload.

Lambda Managed Instances lets you run Lambda functions on managed Amazon EC2 instances with built-in routing, load balancing, and auto-scaling, giving you access to specialized compute configurations, including the latest-generation processors and high-bandwidth networking, with no operational overhead. Customers building compute-intensive applications such as data processing pipelines, high-throughput API backends, and batch computation workloads require substantial memory and CPU resources to process large datasets, serve low-latency responses at scale, and run complex computations efficiently. Previously, function execution environments on Lambda were limited to 10 GB of memory and approximately 6 vCPUs, with no option to customize the memory-to-vCPU ratio. Functions on Lambda Managed Instances can now be configured with up to 32 GB of memory and a choice of memory-to-vCPU ratio, allowing customers to select the right balance of memory and compute for their workload. For example, at 32 GB of memory, customers can configure 16 vCPUs (2:1), 8 vCPUs (4:1), or 4 vCPUs (8:1), depending on whether their workload is CPU-intensive or memory-intensive.

This feature is available in all AWS Regions where Lambda Managed Instances is generally available. You can configure these settings using the AWS Console, AWS CLI, AWS CloudFormation, AWS CDK, or AWS SAM. To learn more, visit the AWS Lambda Managed Instances product page and documentation.
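As a minimal sketch with the AWS SDK for Python (boto3): update_function_configuration and its MemorySize parameter (in MB) are standard Lambda API surface, while the function name is a placeholder and the ratio arithmetic simply restates the example above; the actual field for selecting the memory-to-vCPU ratio on Managed Instances is described in the documentation.

```python
import boto3

def vcpus_for(memory_gb: int, ratio: int) -> float:
    """Ratio arithmetic from the announcement: a 2:1 ratio yields one vCPU
    per 2 GB of memory, 4:1 one per 4 GB, and 8:1 one per 8 GB."""
    return memory_gb / ratio

assert vcpus_for(32, 2) == 16  # CPU-intensive profile
assert vcpus_for(32, 4) == 8
assert vcpus_for(32, 8) == 4   # memory-intensive profile

# Raise a function to the new 32 GB ceiling; MemorySize is specified in MB.
# The function name is a hypothetical placeholder, and the memory-to-vCPU
# ratio itself is set through the Managed Instances configuration (see docs).
lam = boto3.client("lambda")
lam.update_function_configuration(
    FunctionName="my-batch-transcoder",
    MemorySize=32 * 1024,
)
```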
Source: aws.amazon.com

AWS Lambda increases the file descriptor limit to 4,096 for functions running on Lambda Managed Instances

AWS Lambda increases the file descriptor limit from 1,024 to 4,096, a 4x increase, for functions running on Lambda Managed Instances (LMI). This capability enables customers to run I/O-intensive workloads, such as high-concurrency web services and file-heavy data processing pipelines, without running into file descriptor limits. LMI enables you to run Lambda functions on managed Amazon EC2 instances with built-in routing, load balancing, and auto-scaling, giving you access to specialized compute configurations, including the latest-generation processors and high-bandwidth networking, with no operational overhead.

Customers use Lambda functions to build a wide range of serverless applications such as event-driven workloads, web applications, and AI-driven workflows. These applications rely on file descriptors for operations such as opening files, establishing network socket connections to external services and databases, and managing concurrent I/O streams for data processing. Each open file, network socket, or internal resource consumes one file descriptor. Previously, Lambda supported a maximum of 1,024 file descriptors; however, LMI allows multiple requests to be processed simultaneously, which often requires a higher number of file descriptors. With this launch, AWS Lambda is increasing the limit to 4,096, allowing customers to run I/O-intensive workloads, maintain larger connection pools, and effectively utilize multi-concurrency for functions running on LMI.

This feature is available in all AWS Regions where AWS Lambda Managed Instances is generally available. To get started, visit the AWS Lambda Managed Instances documentation.
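For a quick check of the limit from inside a function, a handler can read the process's RLIMIT_NOFILE using Python's standard resource module; a minimal sketch (the handler uses the standard Lambda signature, and the reported numbers depend on where the function runs):

```python
import resource

def handler(event, context):
    # RLIMIT_NOFILE is the per-process ceiling on open file descriptors;
    # every open file, network socket, or pipe consumes one slot.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # On Lambda Managed Instances this should now report 4,096 rather
    # than the previous 1,024.
    return {"soft_limit": soft, "hard_limit": hard}
```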
Source: aws.amazon.com

Amazon ECS Managed Instances now supports FIPS-certified workloads on Graviton and GPU-accelerated instances in AWS GovCloud (US) Regions

Starting today, customers can deploy their Graviton-based and GPU-accelerated workloads on Amazon Elastic Container Service (Amazon ECS) Managed Instances in a Federal Information Processing Standard (FIPS)-compliant mode in the AWS GovCloud (US) Regions. FIPS is a U.S. and Canadian government standard that specifies the security requirements for cryptographic modules that protect sensitive information.
In the AWS GovCloud (US) Regions, Amazon ECS Managed Instances automatically enable FIPS compliance by default. ECS Managed Instances communicate through FIPS-compliant endpoints, use appropriately configured cryptographic modules, and boot the underlying kernel in FIPS mode. Customers with federal compliance requirements can run workloads with FIPS-validated cryptographic modules across a broad range of instance types, including Graviton-based, GPU-accelerated, network-optimized, and burstable performance instances.
To learn more about FIPS, refer to FIPS on AWS and AWS Fargate Federal Information Processing Standard (FIPS-140). To get started with ECS Managed Instances, use the AWS Console, Amazon ECS MCP Server, ECS Express Mode, or your favorite infrastructure-as-code tooling to enable it in a new or existing Amazon ECS cluster. You will be charged for the management of compute provisioned, in addition to your regular Amazon EC2 costs. To learn more about ECS Managed Instances, visit the feature page, documentation, and AWS News launch blog.
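On the client side, the AWS SDKs can resolve FIPS endpoints automatically. A minimal sketch with the AWS SDK for Python (boto3), where the cluster name is an illustrative placeholder:

```python
import boto3
from botocore.config import Config

# use_fips_endpoint asks the SDK to resolve the service's FIPS endpoint
# (e.g., ecs-fips.us-gov-west-1.amazonaws.com) instead of the standard one;
# setting the AWS_USE_FIPS_ENDPOINT=true environment variable is equivalent.
ecs = boto3.client(
    "ecs",
    region_name="us-gov-west-1",
    config=Config(use_fips_endpoint=True),
)

# Any call on this client now travels over the FIPS endpoint.
print(ecs.list_services(cluster="my-govcloud-cluster"))
```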
Source: aws.amazon.com

Amazon SageMaker Studio launches support for Kiro and Cursor IDEs as remote IDEs

Today, AWS announces the ability to remotely connect from the Kiro and Cursor IDEs to Amazon SageMaker Studio. This new capability allows data scientists, ML engineers, and developers to leverage their Kiro or Cursor setup, including spec-driven development, conversational coding, and automated feature generation capabilities, while accessing the scalable compute resources of Amazon SageMaker Studio. By connecting Kiro or Cursor to SageMaker Studio using the AWS Toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing agentic development workflows within a single environment for all your AWS analytics and AI/ML services.

SageMaker Studio offers a broad set of fully managed cloud interactive development environments (IDEs), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software), as well as VS Code as a remote IDE. Starting today, you can also use your customized local Kiro or Cursor setup, complete with specs, steering files, and hooks, while accessing your compute resources and data on Amazon SageMaker. You can authenticate using the AWS Toolkit extension in Kiro or Cursor, or through SageMaker Studio's web interface. Once authenticated, connect to any of your SageMaker Studio development environments in a few clicks. You maintain the same security boundaries as SageMaker Studio's web-based environments while developing AI models and analyzing data in the local IDE of your choice, Kiro or Cursor. To learn more, refer to the SageMaker user guide.
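For the web-interface authentication path, one scripted option is the SageMaker CreatePresignedDomainUrl API, which issues a time-limited Studio sign-in URL for a user profile; the sketch below uses the AWS SDK for Python (boto3) with placeholder domain and profile identifiers, while the AWS Toolkit extension in Kiro or Cursor handles the equivalent sign-in interactively.

```python
import boto3

sm = boto3.client("sagemaker")

# Generate a short-lived sign-in URL for a Studio user profile.
# DomainId and UserProfileName are illustrative placeholders.
resp = sm.create_presigned_domain_url(
    DomainId="d-example123",
    UserProfileName="data-scientist-1",
    ExpiresInSeconds=300,  # validity window for the URL itself
)
print(resp["AuthorizedUrl"])
```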
Source: aws.amazon.com