Multi-party approval now supports approval team baselining

Multi-party approval (MPA) now lets MPA administrators run test approvals to confirm that their approval team is set up correctly and that approvers are active and reachable. With this new capability, customers can ensure their approval teams do not become unresponsive due to natural attrition, incorrect approver selection, or reduced engagement. MPA administrators and security teams can now proactively assess their approval configurations before relying on them for sensitive operations.
The baseline feature enables proactive team health management by allowing manual initiation of test approval sessions through the AWS Organizations console. Customers can verify approver availability, identify inactive team members, and maintain compliance with internal governance requirements. Key use cases include regular team responsiveness verification (AWS recommends testing every 90 days using the MPA console), onboarding validation for new approval configurations, and operational health checks to ensure approval workflows function when needed.
This feature is available in all AWS commercial Regions. To learn more about implementing baseline testing for your multi-party approval workflows, visit the Multi-party approval documentation.
Source: aws.amazon.com

Amazon OpenSearch Ingestion now supports unified ingestion endpoint for OpenTelemetry data

Amazon OpenSearch Ingestion now supports a unified ingestion endpoint that can accept all three OpenTelemetry observability signals (logs, metrics, and traces) through a single pipeline. Previously, customers who wanted to ingest all three OpenTelemetry data types had to create and manage three separate pipelines, one for each signal type. With this launch, a single pipeline can receive any combination of OpenTelemetry signals, simplifying pipeline architecture and reducing operational overhead.
Customers can now build centralized observability pipelines that consolidate logs, metrics, and traces in one place, making it easier to correlate signals and gain a holistic view of application health. Teams operating at scale can reduce the number of pipelines they manage, lowering infrastructure costs and simplifying access control, monitoring, and lifecycle management. It also becomes easier to adopt OpenTelemetry incrementally: teams can begin with one signal type and add others over time without any pipeline reconfiguration.
The unified ingestion endpoint for OpenTelemetry data is supported in all AWS Regions where Amazon OpenSearch Ingestion is currently available. To get started, use the new unified OpenTelemetry source in your pipeline configuration via the AWS Management Console or the AWS CLI, and point your OpenTelemetry clients to the new unified endpoint. To learn more, visit the Amazon OpenSearch Ingestion documentation.
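As a rough illustration of what a unified pipeline might look like, the sketch below builds a pipeline body in Python. The source name, path, endpoint host, and index pattern are assumptions made for illustration, not the documented schema; the `create_pipeline` call uses the real OpenSearch Ingestion (OSIS) SDK operation but is only defined here, not invoked.

```python
# Hypothetical pipeline body: one unified OTel source for logs,
# metrics, and traces, routed to an OpenSearch destination. Key names
# ("otel", the path, the index pattern) are illustrative assumptions --
# check the service documentation for the actual schema.
pipeline_body = """
version: "2"
otel-unified-pipeline:
  source:
    otel:                 # assumed name of the new unified OTel source
      path: "/v1/ingest"
  sink:
    - opensearch:
        hosts: ["https://search-my-domain.us-east-1.es.amazonaws.com"]
        index: "observability-%{yyyy.MM.dd}"
"""

def create_pipeline(osis_client, name="otel-unified-pipeline"):
    """Submit the pipeline via the OSIS API (requires AWS credentials)."""
    return osis_client.create_pipeline(
        PipelineName=name,
        MinUnits=1,
        MaxUnits=4,
        PipelineConfigurationBody=pipeline_body,
    )

# Local sanity check: a single source now serves every signal type.
assert pipeline_body.count("source:") == 1
```

The point of the shape above is that what used to be three pipeline definitions, each with its own source block, collapses into one source block feeding one set of sinks.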
Source: aws.amazon.com

Amazon SageMaker HyperPod now provides comprehensive observability for Restricted Instance Groups

Amazon SageMaker HyperPod now offers comprehensive observability for Restricted Instance Groups (RIG), enabling teams training foundation models with Nova Forge to gain deep visibility into their compute resources and training workloads. This new capability eliminates the manual effort of collecting and correlating metrics across the infrastructure stack, providing a unified view of GPU performance, system health, network throughput, and Kubernetes cluster state through a pre-configured Amazon Managed Grafana dashboard backed by Amazon Managed Service for Prometheus.
You can now monitor GPU utilization, NVLink bandwidth, CPU pressure, FSx for Lustre usage, and pod lifecycle from a single Grafana dashboard, with metrics collected across four exporters covering GPU performance, host-level system health, network fabric, and Kubernetes object state. In addition, curated logs are automatically made available in these dashboards, covering epoch progress, step-level training logs, pipeline errors, and Python tracebacks, so you can quickly diagnose training failures. HyperPod observability for Restricted Instance Groups is automatically enabled when you create a new cluster using RIGs, or can be enabled for existing clusters in a few clicks in the HyperPod cluster management console.
Amazon SageMaker HyperPod RIG observability is available in all AWS Regions where SageMaker HyperPod RIG is supported. To learn more, visit the documentation.
Source: aws.amazon.com

AWS simplifies IAM role creation and setup in service workflows

AWS Identity and Access Management (IAM) now makes it easier to create and configure IAM roles directly within service workflows, allowing you to customize role permissions without switching between browser tabs. When you perform console tasks that involve role configuration, a new panel appears where you can set the required permissions.
IAM roles enable secure cross-service connections in AWS using temporary credentials, eliminating the need for hardcoded access keys. This launch integrates role creation with custom permissions directly into service workflows, so you can configure roles and permissions without navigating to the IAM console. You can use default policies or the simplified statement builder to customize your permissions, streamlining resource setup while retaining the full functionality of IAM role management.
This feature is available when working with Amazon EC2, AWS Lambda, Amazon EKS, Amazon ECS, AWS Glue, AWS CloudFormation, AWS Database Migration Service, AWS Systems Manager, AWS Secrets Manager, Amazon Relational Database Service, and AWS IoT Core in the US East (N. Virginia) Region, and will gradually become available across additional AWS services and Regions. To learn more, refer to the individual service User Guides or the IAM documentation.
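To make the two pieces of an IAM role concrete, here is a sketch of the pair of policy documents a role like this consists of: a trust policy naming the service that may assume the role, and a permissions policy built from individual statements. The Lambda principal and the CloudWatch Logs actions are an illustrative example, not the panel's actual defaults.

```python
import json

# Trust policy: who may assume the role (here, the Lambda service).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# A minimal permissions statement of the kind the simplified statement
# builder assembles (example actions; scope Resource down in practice).
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
        ],
        "Resource": "*",
    }],
}

# With credentials configured, the equivalent SDK call would be:
#   boto3.client("iam").create_role(
#       RoleName="my-function-role",
#       AssumeRolePolicyDocument=json.dumps(trust_policy))
print(json.dumps(trust_policy, indent=2))
```

The in-workflow panel spares you from writing these documents by hand, but the role it creates has exactly this structure.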
Source: aws.amazon.com

Amazon GameLift Servers launches DDoS Protection

We’re excited to announce Amazon GameLift Servers DDoS Protection, a new feature that helps game developers protect session-based multiplayer games hosted on Amazon GameLift Servers and improve overall game session resiliency. DDoS Protection is designed to defend against denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks, providing proactive, User Datagram Protocol (UDP)-based traffic protection without the need for manual byte matching and with negligible added latency.
Amazon GameLift Servers DDoS Protection co-locates a relay network directly alongside your game servers. The relay authenticates client traffic using access tokens so that only authorized traffic reaches the server. The feature also enforces per-player traffic limits to help prevent disruptions, even from seemingly legitimate sources. Game developers can use DDoS Protection to protect against targeted disruptions to specific players or entire game sessions. Check out the Amazon GameLift Servers release notes to get started through the console or API, with sample code provided for popular game engines including Unreal Engine and native C++.
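The per-player limit can be pictured as a token bucket keyed by player ID; the sketch below is a conceptual illustration of that general technique, not Amazon GameLift Servers' actual implementation.

```python
import time

class PerPlayerLimiter:
    """Token-bucket rate limiter keyed by player ID (conceptual sketch)."""

    def __init__(self, rate=100.0, burst=200.0):
        self.rate = rate              # packets refilled per second
        self.burst = burst            # maximum bucket size
        self.buckets = {}             # player_id -> (tokens, last_seen)

    def allow(self, player_id, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(player_id, (self.burst, now))
        # Refill tokens for the time elapsed since this player's last packet.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[player_id] = (tokens - 1.0, now)
            return True               # forward packet to the game server
        self.buckets[player_id] = (tokens, now)
        return False                  # drop: player exceeded its limit

limiter = PerPlayerLimiter(rate=10.0, burst=5.0)
# A burst of 20 packets at the same instant: only the first 5 pass.
results = [limiter.allow("player-1", now=0.0) for _ in range(20)]
# sum(results) == 5
```

Because the bucket is per player, one flooding sender exhausts only its own budget, so a single abusive client (even an authenticated one) cannot disrupt the other players in the session.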
Amazon GameLift Servers DDoS Protection is available at no additional cost to Amazon GameLift Servers customers and is initially available in the following regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Seoul).
Source: aws.amazon.com

Amazon OpenSearch Ingestion now supports Amazon Managed Service for Prometheus as a sink

Amazon OpenSearch Ingestion now supports Amazon Managed Service for Prometheus as a sink, making it possible to build fully managed, end-to-end metrics ingestion pipelines without any custom forwarding infrastructure. With this launch, customers can now manage their entire metrics ingestion workflow using the same pipeline infrastructure they already use for logs and traces.
Customers can now choose the right destination for each observability signal — sending logs and traces to Amazon OpenSearch Service for powerful full-text search, log analytics, and trace correlation, while routing metrics to Amazon Managed Service for Prometheus for time-series storage and analysis. This flexibility allows teams to build purpose-fit observability pipelines that leverage the strengths of each service without compromising on data fidelity or analytical capability. Amazon OpenSearch Ingestion’s built-in data transformation and enrichment capabilities allow customers to prepare and refine metrics before they land in Amazon Managed Service for Prometheus, improving data quality and consistency. Once metrics are in Amazon Managed Service for Prometheus, customers can query them using Prometheus Query Language to analyze trends, configure alerting rules to get notified when metrics cross defined thresholds, and visualize their data using Amazon Managed Grafana for rich, customizable views of infrastructure and application health.
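A minimal sketch of a metrics pipeline body targeting an Amazon Managed Service for Prometheus workspace, written as a Python string for illustration: the sink key and option names are assumptions, and the workspace ID, role ARN, and Region are placeholders; consult the documentation for the actual schema before deploying.

```python
# Hypothetical pipeline body: an OTel metrics source writing to an
# Amazon Managed Service for Prometheus workspace via its standard
# Prometheus remote-write endpoint. Sink key names are assumed.
metrics_pipeline = """
version: "2"
metrics-pipeline:
  source:
    otel_metrics_source:
      path: "/v1/metrics"
  sink:
    - prometheus:   # assumed sink name
        url: "https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write"
        aws:
          region: "us-east-1"
          sts_role_arn: "arn:aws:iam::123456789012:role/osis-amp-write"
"""

# Each Amazon Managed Service for Prometheus workspace exposes a
# remote-write ingestion path like the one above.
assert "remote_write" in metrics_pipeline
```

A companion pipeline for logs or traces would keep the same source style but point its sink at Amazon OpenSearch Service, which is the per-signal routing described above.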
The feature is supported in all AWS Regions where Amazon OpenSearch Ingestion is currently available. To get started, add the new Amazon Managed Service for Prometheus sink to your pipeline configuration via the AWS Management Console or the AWS CLI and start ingesting metrics into your Amazon Managed Service for Prometheus workspace.
To learn more and get started, visit the Amazon OpenSearch Ingestion documentation.
Source: aws.amazon.com

Amazon Lightsail now offers OpenClaw, a private self-hosted AI assistant

Amazon Lightsail now lets you deploy OpenClaw, a private self-hosted AI assistant, on your own cloud infrastructure in a simple and secure manner. Every Lightsail OpenClaw instance ships with built-in security controls, pre-configured and ready to use. Sandboxing isolates each agent session for an improved security posture. One-click HTTPS access puts the OpenClaw dashboard in your browser securely, without requiring manual TLS configuration. Device pairing authentication ensures only your authorized devices can connect to your assistant. Automatic snapshots back up your configuration continuously, so you never lose your setup. Amazon Bedrock serves as the default model provider for Lightsail OpenClaw, and you can swap models or connect to Slack, Telegram, WhatsApp, and Discord as needed.
Amazon Lightsail is available in 15 AWS Regions, including US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (London), Asia Pacific (Tokyo), and Asia Pacific (Jakarta). To get started, visit the Lightsail console. For pricing and other details, visit the Amazon Lightsail pricing and quick start documentation pages.
Source: aws.amazon.com

Policy in Amazon Bedrock AgentCore is now generally available

Policy in Amazon Bedrock AgentCore is now generally available, providing organizations with centralized, fine-grained controls for agent-tool interactions. Policy operates outside your agent code, enabling security, compliance, and operations teams to define tool access and input validation rules without modifying the agents themselves. Teams can author policies using natural language that automatically converts to Cedar, the AWS open-source policy language. Policies are stored in a policy engine and attached to an AgentCore Gateway, which intercepts agent-tool traffic and evaluates each request against the policies before allowing or denying tool access. Policy helps ensure agents operate within defined parameters while maintaining organizational visibility and governance.
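To make the Cedar step concrete, here is the kind of statement a natural-language rule might compile to. The entity types (`Agent`, `Tool`), the action name, and the context attribute are hypothetical examples invented for illustration; the `permit`/`when` structure is standard Cedar.

```python
# Hypothetical Cedar policy for an AgentCore Gateway. The Agent/Tool
# entity types and the ticket_id attribute are invented; Cedar's
# permit/when syntax and the `like` pattern operator are real.
cedar_policy = """
permit(
  principal == Agent::"support-agent",
  action == Action::"invoke_tool",
  resource == Tool::"ticket-lookup"
)
when { context.input.ticket_id like "TKT-*" };
"""

# The Gateway intercepts each agent-tool request and evaluates it
# against statements like this before allowing or denying the call.
assert cedar_policy.strip().startswith("permit(")
```

Because evaluation happens at the Gateway, a statement like this constrains every agent routed through it, with no change to any agent's own code.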
Policy in AgentCore is available in thirteen AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Stockholm).
Learn more about Policy in AgentCore through the documentation, and get started with the AgentCore Starter Toolkit.
Source: aws.amazon.com

Amazon SageMaker Unified Studio launches support for remote connection from Kiro IDE

Today, AWS announces the ability to remotely connect from Kiro IDE to Amazon SageMaker Unified Studio. This new capability allows data scientists, ML engineers, and developers to leverage their Kiro setup, including its spec-driven development, conversational coding, and automated feature generation capabilities, while accessing the scalable compute resources of Amazon SageMaker. By connecting Kiro to SageMaker Unified Studio using the AWS Toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing agentic development workflows within a single environment for all your AWS analytics and AI/ML services.
SageMaker Unified Studio, part of the next generation of Amazon SageMaker, offers a broad set of fully managed cloud interactive development environments (IDEs), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software). Starting today, you can also use your customized local Kiro setup, complete with specs, steering files, and hooks, while accessing your compute resources and data on Amazon SageMaker. Because Kiro is built on Code-OSS, it supports the AWS Toolkit extension, which provides secure IAM-based authentication and gives you access to all your SageMaker Unified Studio domains and projects. This integration provides a convenient path from your local AI-powered development environment to scalable infrastructure for running workloads across data processing, SQL analytics services like Amazon EMR, AWS Glue, and Amazon Athena, and ML workflows, all with enterprise-grade security including customer-managed encryption keys and AWS IAM integration.
This feature is available in all Regions where Amazon SageMaker Unified Studio is available. To learn more, refer to the SageMaker user guide.
Source: aws.amazon.com

Amazon SageMaker Unified Studio adds metadata sync with third-party catalogs

Amazon SageMaker Unified Studio now supports metadata and context sync across Atlan, Collibra, and Alation. These integrations synchronize catalog metadata between Amazon SageMaker Catalog and each partner platform, giving teams a consistent view of their data and AI assets regardless of which tool they use day to day. Organizations can maintain aligned glossary terms, asset descriptions, and ownership information across platforms without manual reconciliation.
All three integrations synchronize key metadata elements including projects, assets, descriptions, glossary terms, and their hierarchies. With the Collibra integration, you can synchronize metadata in both directions between SageMaker Catalog and Collibra, so updates you make in one are reflected in the other; you can also manage SageMaker Unified Studio data access requests from Collibra. With the Atlan and Alation integrations, you can ingest metadata from SageMaker Catalog into those platforms, with additional enhancements coming soon. You set up the Atlan and Alation integrations by creating a connection to SageMaker Unified Studio from within each platform, while the Collibra integration is available as an open-source solution on GitHub.
To learn more, visit the Amazon SageMaker Unified Studio documentation. For implementation details, see the Atlan, Collibra, and Alation blog posts.
Source: aws.amazon.com