The Model Context Protocol (MCP) Proxy for AWS is now generally available

Today, AWS announces the general availability of the Model Context Protocol (MCP) Proxy for AWS, a client-side proxy that enables MCP clients to connect to remote, AWS-hosted MCP servers using AWS SigV4 authentication. The Proxy supports popular agentic AI development tools such as Amazon Q Developer CLI, Kiro, and Cursor, as well as agent frameworks like Strands Agents. Using the Proxy, customers can connect to remote MCP servers with their AWS credentials; the Proxy handles MCP protocol communications and signs them with SigV4 automatically. The Proxy also helps customers connect to MCP servers built on Amazon Bedrock AgentCore Gateway or Runtime using SigV4 authentication. This release allows developers and agents to extend development workflows to include AWS service interactions through AWS MCP server tools. For example, you can use AWS MCP servers to work with resources such as Amazon S3 buckets or Amazon RDS tables through existing MCP servers over SigV4-authenticated connections. The MCP Proxy for AWS includes safety controls such as a read-only mode to prevent unintended changes, configurable retry logic for reliability, and logging for troubleshooting. Customers can install the Proxy from source, through Python package managers, or by using a container, making it simple to configure with their preferred MCP-supported development tool. The MCP Proxy for AWS is open-source and available now. Visit the AWS GitHub repository to view the installation and configuration options and start connecting with remote AWS MCP servers today.
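
The SigV4 handling that the Proxy performs on the client's behalf rests on the standard AWS signing-key derivation. The sketch below is a minimal stdlib-only illustration (not the Proxy's actual code) of how a SigV4 signature is derived from a secret key, date, Region, and service; request canonicalization is omitted for brevity.

```python
import hashlib
import hmac

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sigv4_signature(secret_key: str, date_stamp: str, region: str,
                    service: str, string_to_sign: str) -> str:
    # Derive the SigV4 signing key: an HMAC chain over date, Region,
    # service, and the literal "aws4_request", per the SigV4 specification.
    k_date = _hmac_sha256(("AWS4" + secret_key).encode(), date_stamp)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    k_signing = _hmac_sha256(k_service, "aws4_request")
    # Sign the (already canonicalized) string-to-sign and hex-encode it.
    return hmac.new(k_signing, string_to_sign.encode(),
                    hashlib.sha256).hexdigest()
```

In the real protocol, the string-to-sign is built from a hash of the canonical request, which the Proxy assembles from each outgoing MCP call.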
Source: aws.amazon.com

Amazon Connect now supports scheduling of individual agents

Amazon Connect now supports scheduling of individual agents, giving you more flexibility in scheduling your workforce. For example, when onboarding 100 new agents to a business unit with schedules already published for the next two months, you can create schedules for only those new agents and automatically merge them with the existing schedules. This eliminates the need for workarounds such as manually copying schedules from existing agents to new agents or regenerating schedules for the entire business unit, improving manager productivity and operational efficiency. This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more, see the Amazon Connect agent scheduling documentation.
Source: aws.amazon.com

Amazon DynamoDB Accelerator now supports AWS PrivateLink

Amazon DynamoDB Accelerator (DAX) now supports AWS PrivateLink, enabling you to securely access DAX management APIs such as CreateCluster, DescribeClusters, and DeleteCluster over private IP addresses within your virtual private cloud (VPC). DAX clusters already run inside your VPC, and all data plane operations like GetItem and Query are handled privately within the VPC. With this launch, you can now perform cluster management operations privately, without connecting to the public regional endpoint. With AWS PrivateLink, you can simplify private network connectivity between virtual private clouds (VPCs), DAX, and your on-premises data centers using interface VPC endpoints and private IP addresses. It helps you meet compliance regulations and eliminates the need to use public IP addresses, configure firewall rules, or configure an Internet gateway to access DAX from your on-premises data centers. AWS PrivateLink for DAX is available in all Regions where DAX is available today. For information about DAX Regional availability, see the “Service endpoints” section in Amazon DynamoDB endpoints and quotas. There is an additional cost to use the feature. Please see AWS PrivateLink pricing for more details. To get started with DAX and PrivateLink, see AWS PrivateLink for DAX.
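
Creating the interface VPC endpoint goes through the EC2 CreateVpcEndpoint API. The helper below builds those call arguments in plain Python; the DAX service name shown follows the usual `com.amazonaws.<region>.<service>` pattern but is an assumption, so confirm it with `ec2.describe_vpc_endpoint_services` before use.

```python
def dax_interface_endpoint_params(vpc_id: str, subnet_ids: list[str],
                                  security_group_ids: list[str],
                                  region: str = "us-east-1") -> dict:
    # Arguments for boto3's ec2.create_vpc_endpoint(**params). The
    # ServiceName below is an assumption based on the common AWS naming
    # pattern -- verify with ec2.describe_vpc_endpoint_services.
    return {
        "VpcEndpointType": "Interface",
        "ServiceName": f"com.amazonaws.{region}.dax",
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": security_group_ids,
        # Private DNS lets existing SDK clients resolve the regional
        # endpoint name to the private interface endpoint.
        "PrivateDnsEnabled": True,
    }
```

With private DNS enabled, management-API calls such as DescribeClusters resolve to private IPs without any client-side endpoint override.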
Source: aws.amazon.com

Amazon Aurora DSQL now supports FIPS 140-3 compliant endpoints

Amazon Aurora DSQL now supports Federal Information Processing Standards (FIPS) 140-3 compliant endpoints, helping companies that contract with the US federal government meet the FIPS security requirement to encrypt sensitive data in supported Regions. With this launch, you can use Aurora DSQL for workloads that require a FIPS 140-3 validated cryptographic module when sending requests over public or VPC endpoints. Aurora DSQL is a serverless, distributed SQL database with single- and multi-Region clusters providing active-active high availability and strong consistency. Aurora DSQL enables you to build applications with virtually unlimited scalability, high availability, and zero infrastructure management. Aurora DSQL FIPS compliant endpoints are now available in the following Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). To learn more about FIPS 140-3 at AWS, visit FIPS 140-3 Compliance.
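
For SDK-based access, the usual way to opt into FIPS endpoints across AWS services is the AWS_USE_FIPS_ENDPOINT environment variable (boto3 also accepts this per client via botocore's Config). A minimal sketch; how this maps onto DSQL's own connection endpoints is best confirmed in the DSQL documentation.

```python
import os

def enable_fips_endpoints() -> None:
    # Opt every AWS SDK/CLI client launched from this process into FIPS
    # endpoints; AWS_USE_FIPS_ENDPOINT is honored across AWS SDKs.
    # Per-client, boto3 alternatively accepts
    # botocore.config.Config(use_fips_endpoint=True).
    os.environ["AWS_USE_FIPS_ENDPOINT"] = "true"

enable_fips_endpoints()
```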
Source: aws.amazon.com

Split Cost Allocation Data for Amazon EKS supports Kubernetes labels

Starting today, Split Cost Allocation Data for Amazon EKS allows you to import up to 50 Kubernetes custom labels per pod as cost allocation tags. You can attribute costs of your Amazon EKS cluster at the pod level using custom attributes, such as cost center, application, business unit, and environment, in the AWS Cost and Usage Report (CUR). With this new capability, you can better align your cost allocation with specific business requirements and the organizational structure driven by your cloud financial management needs. This enables granular cost visibility for EKS clusters running multiple application containers on shared EC2 instances, allowing you to allocate those shared costs. For new split cost allocation data customers, you can enable this feature in the AWS Billing and Cost Management console. For existing customers, EKS will automatically import the labels, but you must activate them as cost allocation tags. After activation, Kubernetes custom labels are available in your CUR within 24 hours. You can use the Containers Cost Allocation dashboard to visualize the costs in Amazon QuickSight and the CUR query library to query the costs using Amazon Athena. This feature is available in all AWS Regions where Split Cost Allocation Data for Amazon EKS is available. To get started, visit Understanding Split Cost Allocation Data.
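
Activation of imported labels can also be done through the Cost Explorer UpdateCostAllocationTagsStatus API rather than the console. The helper below builds that request body in plain Python; the exact tag keys to pass are whatever the imported labels appear as in your Billing console, so verify the key format there first.

```python
def activation_request(label_keys: list[str]) -> dict:
    # Payload for boto3's ce.update_cost_allocation_tags_status(**req),
    # marking each imported Kubernetes label key as an active cost
    # allocation tag. Key names here are caller-supplied examples.
    return {
        "CostAllocationTagsStatus": [
            {"TagKey": key, "Status": "Active"} for key in label_keys
        ]
    }
```

Once active, the keys show up as CUR columns within 24 hours, as noted above.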
Source: aws.amazon.com

TwelveLabs’ Pegasus 1.2 model now available in three additional AWS regions

Amazon announces the expansion of the TwelveLabs’ Pegasus 1.2 video understanding model to the US East (Ohio), US West (N. California), and Europe (Frankfurt) AWS Regions. This expansion makes it easier for customers to build and scale generative AI applications that can understand and interact with video content at an enterprise level. Pegasus 1.2 is a powerful video-first language model that can generate text based on the visual, audio, and textual content within videos. Specifically designed for long-form video, it excels at video-to-text generation and temporal understanding. With Pegasus 1.2’s availability in these additional regions, you can now build video-intelligence applications closer to your data and end users in key geographic locations, reducing latency and simplifying your architecture. With today’s expansion, Pegasus 1.2 is now available in Amazon Bedrock across 7 regions: US East (N. Virginia), US West (Oregon), US East (Ohio), US West (N. California), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Seoul). To get started with Pegasus 1.2, visit the Amazon Bedrock console. To learn more, read the blog, product page, Amazon Bedrock pricing, and documentation. 
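
Invoking the model from code goes through the Bedrock Runtime InvokeModel API. The sketch below builds the call arguments in plain Python; both the model ID and the request-body shape shown are assumptions for illustration, so check the Bedrock model catalog and the TwelveLabs model documentation for the exact values.

```python
import json

def pegasus_invoke_args(s3_video_uri: str, prompt: str,
                        model_id: str = "twelvelabs.pegasus-1-2-v1:0") -> dict:
    # Keyword arguments for boto3's bedrock_runtime.invoke_model(**args).
    # The model_id default and the body field names (inputPrompt,
    # mediaSource) are assumptions -- confirm against the Bedrock docs.
    body = {
        "inputPrompt": prompt,
        "mediaSource": {"s3Location": {"uri": s3_video_uri}},
    }
    return {
        "modelId": model_id,
        "contentType": "application/json",
        "body": json.dumps(body),
    }
```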
Source: aws.amazon.com

Amazon WorkSpaces announces USB redirection support for DCV WorkSpaces

AWS announces USB redirection support for WorkSpaces running Amazon DCV protocol, enabling users to access locally connected USB devices from their virtual desktop environments. With this feature, customers can now connect a wide range of USB peripherals to their virtual desktops, including credit card readers, 3D mice, and other specialized devices. USB redirection addresses the need for direct access to USB devices that require specialized drivers or lack dedicated protocols. This capability is currently limited to WorkSpaces Personal with Windows desktops accessed from Windows client devices. Performance and device compatibility may vary, so testing with your specific USB peripherals is recommended before adding them to the allowlist. This feature is available in all AWS Regions where Amazon WorkSpaces is offered. For more information about USB redirection in Amazon WorkSpaces, see USB Redirection for DCV in the Amazon WorkSpaces Administration Guide, or visit the Amazon WorkSpaces page to learn more about virtual desktop solutions from AWS.
Source: aws.amazon.com

Amazon ECS Service Connect enhances observability with Envoy Access Logs

Amazon Elastic Container Service (Amazon ECS) Service Connect now supports Envoy access logs, providing deeper observability into request-level traffic patterns and service interactions. This new capability captures detailed per-request telemetry for end-to-end tracing, debugging, and compliance monitoring. Amazon ECS Service Connect makes it simple to build secure, resilient service-to-service communication across clusters, VPCs, and AWS accounts. It integrates service discovery and service mesh capabilities by automatically injecting AWS-managed Envoy proxies as sidecars that handle traffic routing, load balancing, and inter-service connectivity. Envoy access logs capture detailed traffic metadata, enabling request-level visibility into service communication patterns. This enables you to perform network diagnostics, troubleshoot issues efficiently, and maintain audit trails for compliance requirements. You can now configure access logs within ECS Service Connect by updating the ServiceConnectConfiguration to enable access logging. Query strings are redacted by default to protect sensitive data. Envoy access logs are written to the standard output (STDOUT) stream alongside application logs and flow through the existing ECS log pipeline without requiring additional infrastructure. This configuration supports all existing application protocols (HTTP, HTTP/2, gRPC, and TCP). This feature is available in all AWS Regions where Amazon ECS Service Connect is supported. To learn more, visit the Amazon ECS Developer Guide.
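
In practice the configuration is attached through the UpdateService API. The sketch below builds a serviceConnectConfiguration in plain Python; the accessLogConfiguration field name and its contents are hypothetical placeholders, so consult the ECS API reference for the shipped shape before relying on them.

```python
def service_connect_config(namespace: str,
                           enable_access_logs: bool = True) -> dict:
    # Value for the serviceConnectConfiguration parameter of boto3's
    # ecs.update_service. The accessLogConfiguration key below is a
    # hypothetical placeholder for illustration only -- the real field
    # name and schema must be taken from the ECS API reference.
    cfg = {"enabled": True, "namespace": namespace}
    if enable_access_logs:
        cfg["accessLogConfiguration"] = {"format": "TEXT"}  # hypothetical
    return cfg
```

The resulting logs appear on the Envoy sidecar's STDOUT and follow whatever log driver the task definition already uses.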
Source: aws.amazon.com

AWS Elastic Beanstalk adds support for Amazon Corretto 25

AWS Elastic Beanstalk now enables customers to build and deploy Java applications using Amazon Corretto 25 on the Amazon Linux 2023 (AL2023) platform. This latest platform support allows developers to leverage the newest Java 25 features while benefiting from AL2023’s enhanced security and performance capabilities. AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Corretto 25 on AL2023 allows developers to take advantage of the latest Java language features, including compact object headers, ahead-of-time (AOT) caching, and structured concurrency. Developers can create Elastic Beanstalk environments running Corretto 25 through the Elastic Beanstalk console, CLI, or API. This platform is generally available in the commercial AWS Regions where Elastic Beanstalk is available, as well as in the AWS GovCloud (US) Regions. For a complete list of Regions and service offerings, see AWS Regions. For more information about Corretto 25 and Linux platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.
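
Programmatically, environment creation works the same as for earlier Corretto platforms: CreateEnvironment with a solution stack name. The helper below builds the call arguments; the stack name shown follows the usual AL2023 Corretto naming pattern but is an assumption, so list the exact current names with ListAvailableSolutionStacks.

```python
def corretto25_env_params(app_name: str, env_name: str) -> dict:
    # Arguments for boto3's elasticbeanstalk.create_environment(**params).
    # The SolutionStackName below mimics the usual AL2023 Corretto pattern
    # and is an assumption -- fetch the exact name with
    # elasticbeanstalk.list_available_solution_stacks().
    return {
        "ApplicationName": app_name,
        "EnvironmentName": env_name,
        "SolutionStackName": "64bit Amazon Linux 2023 running Corretto 25",
    }
```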
Source: aws.amazon.com

Introducing the Capacity Reservation Topology API for AI, ML, and HPC instance types

AWS announces the general availability of the Amazon Elastic Compute Cloud (EC2) Capacity Reservation Topology API. It joins the Instance Topology API in enabling customers to efficiently manage capacity, schedule jobs, and rank nodes for Artificial Intelligence, Machine Learning, and High-Performance Computing distributed workloads. The Capacity Reservation Topology API gives customers a unique per-account hierarchical view of the relative location of their capacity reservations.
Customers running distributed parallel workloads are managing thousands of instances across tens to hundreds of capacity reservations. With the Capacity Reservation Topology API, customers can describe the topology of their reservations as a network node set, which will show the relative proximity of their capacity without the need to launch an instance. This enables efficient capacity planning and management as customers provision workloads on tightly coupled capacity. Customers can then use the Instance Topology API, which provides consistent network nodes from the Capacity Reservation Topology API with further granularity, enabling a consistent and seamless way to schedule jobs and rank nodes for optimal performance in distributed parallel workloads.
The Capacity Reservation Topology API is available in the following AWS regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Spain), Europe (Stockholm), Europe (Zurich), Middle East (Bahrain), Middle East (UAE), and South America (São Paulo), and it is supported on all instances available with the Instance Topology API.
To learn more, please visit the latest EC2 user guide.
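
Because the API returns each reservation's position as an ordered list of network nodes, proximity ranking reduces to grouping on that path. A minimal sketch in plain Python, assuming a response shape with CapacityReservationId and NetworkNodes fields (verify the actual field names against the EC2 API reference):

```python
from collections import defaultdict

def group_by_network_node(topology_entries: list[dict]) -> dict:
    # topology_entries: items shaped like the (assumed) Capacity
    # Reservation Topology API response, each with a CapacityReservationId
    # and an ordered NetworkNodes list (coarsest to finest).
    groups = defaultdict(list)
    for entry in topology_entries:
        # Reservations sharing the same full network-node path are the
        # closest candidates for tightly coupled placement.
        groups[tuple(entry["NetworkNodes"])].append(
            entry["CapacityReservationId"])
    return dict(groups)
```

A scheduler can then prefer filling reservations within one group before spilling to a sibling node, and refine placement further with the Instance Topology API once instances launch.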
Source: aws.amazon.com