Amazon WorkSpaces Advisor now available for AI-powered troubleshooting

Amazon WorkSpaces Advisor is a new AI-powered tool that helps administrators quickly troubleshoot and resolve issues with Amazon WorkSpaces Personal. Using generative AI capabilities, it analyzes WorkSpace configurations, identifies problems, and provides actionable recommendations to restore service and optimize performance.
WorkSpaces Advisor streamlines administrative workflows by reducing the time needed to investigate and fix common issues. Administrators can leverage AI-driven insights to proactively maintain their virtual desktop infrastructure, improve end-user experience, and minimize downtime across their WorkSpaces.
Amazon WorkSpaces Advisor is now available in all AWS commercial regions where Amazon WorkSpaces is offered. Visit the Amazon WorkSpaces console to access WorkSpaces Advisor and begin troubleshooting your environment. Learn more in the feature blog and user guide.
Source: aws.amazon.com

Amazon EKS managed node groups now support EC2 Auto Scaling warm pools

Amazon Elastic Kubernetes Service (Amazon EKS) managed node groups now support Auto Scaling warm pools, enabling you to maintain pre-initialized EC2 instances ready for rapid scale-out. This reduces node provisioning latency for applications with burst traffic patterns, time-sensitive workloads, or long instance boot times due to complex initialization scripts and software dependencies.
With warm pools enabled, your EKS managed node group maintains a pool of instances that have already completed OS initialization, user data execution, and software configuration. When demand increases and the Auto Scaling group scales out, instances transition from the warm pool to active service without repeating the full cold-start sequence. You can configure instances in the warm pool as Stopped (lower cost, longer transition) or Running (higher cost, faster transition). You can also enable reuse on scale-in, which returns instances to the warm pool during scale-down instead of terminating them.
Warm pools work with Cluster Autoscaler without requiring any additional configuration. You can enable warm pools through the EKS API, AWS CLI, AWS Management Console, or AWS CloudFormation by adding a warmPoolConfig to your CreateNodegroup or UpdateNodegroupConfig requests. Existing managed node groups that do not enable warm pools are unaffected.
This feature is available in all AWS Regions where Amazon EKS is available, except for the China (Beijing) Region, operated by Sinnet, and the China (Ningxia) Region, operated by NWCD. To get started, see the Amazon EKS managed node groups documentation.
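As a rough illustration of what such a request body might look like, the fragment below sketches an UpdateNodegroupConfig call with a warmPoolConfig block. The announcement confirms the warmPoolConfig key itself; the field names and values inside it are assumptions modeled on the existing EC2 Auto Scaling warm pool API (pool state, minimum size, prepared capacity, reuse on scale-in) and may differ from the actual EKS schema.

```json
{
  "clusterName": "prod-cluster",
  "nodegroupName": "burst-workers",
  "warmPoolConfig": {
    "poolState": "Stopped",
    "minSize": 2,
    "maxGroupPreparedCapacity": 10,
    "instanceReusePolicy": {
      "reuseOnScaleIn": true
    }
  }
}
```

Here "Stopped" trades a slower warm-pool-to-active transition for lower cost, and reuseOnScaleIn returns drained instances to the pool instead of terminating them, matching the behavior described above.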
Source: aws.amazon.com

Amazon IVS Real-Time Streaming now supports redundant ingest

Amazon Interactive Video Service (Amazon IVS) Real-Time Streaming now supports redundant ingest, helping protect your live streams against source encoder failures and first-mile network issues. With redundant ingest, you can stream from two encoders simultaneously to a single stage with automated failover, ensuring uninterrupted delivery to your viewers.
Redundant ingest is ideal for live events, 24/7 live streams, or any scenario where uninterrupted delivery is essential. This capability helps you maintain viewer engagement during unexpected disruptions and enables continuous 24/7 streaming. 
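The failover behavior described above happens server-side in IVS, but the core selection logic is easy to picture: deliver the primary feed while it is healthy, and switch to the backup the moment it drops. The sketch below is purely illustrative (the names IngestSource and select_active_source are invented for this example, not IVS API entities):

```python
from dataclasses import dataclass


@dataclass
class IngestSource:
    """One of the two encoders streaming simultaneously to a stage."""
    name: str
    healthy: bool


def select_active_source(primary: IngestSource, backup: IngestSource):
    """Pick which of two simultaneous ingest feeds to deliver to viewers.

    Prefer the primary while it is healthy; fail over to the backup when
    the primary drops (encoder crash, first-mile network loss). Returns
    None only when both feeds are down.
    """
    if primary.healthy:
        return primary
    if backup.healthy:
        return backup
    return None


# Example: primary encoder fails mid-stream, backup takes over.
primary = IngestSource("encoder-a", healthy=True)
backup = IngestSource("encoder-b", healthy=True)
active = select_active_source(primary, backup)      # encoder-a
primary.healthy = False
failover = select_active_source(primary, backup)    # encoder-b
```

Because both encoders are already streaming when the failure occurs, the switch does not wait for a new connection to be established, which is what makes delivery to viewers uninterrupted.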
Amazon IVS is a managed live streaming solution designed to make low-latency or real-time video available to viewers around the world. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available.
To learn more, please visit the Amazon IVS Real-Time Streaming RTMP ingest documentation page.
Source: aws.amazon.com

SageMaker HyperPod now supports gang scheduling for distributed training workloads

Amazon SageMaker HyperPod task governance now supports gang scheduling, which ensures all pods required for a distributed training job are ready before training begins. Administrators can configure gang scheduling to prevent wasted compute from partial job runs and avoid deadlocks from jobs waiting for resources.
Data scientists running distributed AI/ML training jobs on Amazon SageMaker HyperPod clusters using the EKS orchestrator require multiple pods to work together across nodes with pod-to-pod communication. When some pods start but others do not, jobs can hold onto resources without making progress, block other workloads, and increase costs. Gang scheduling resolves this by monitoring all pods in a workload and pulling the workload back if not all pods are ready within a set time. Pulled-back workloads are automatically requeued to prevent stalling.
Administrators can adjust settings in the HyperPod console, such as how long to wait for pods to be ready, how to handle node failures, whether to admit workloads one at a time to avoid deadlocks on busy clusters, and how retries are scheduled.
This capability is currently available for Amazon SageMaker HyperPod clusters using the EKS orchestrator in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Jakarta), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (São Paulo). To learn more, visit the SageMaker HyperPod webpage and the HyperPod task governance documentation.
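The pull-back-and-requeue behavior described above can be sketched as a small simulation. This is illustrative only, not the HyperPod implementation: the function and parameter names (gang_schedule, ready_timeout) are invented, and requeued workloads are recorded rather than retried so the sketch stays finite.

```python
import time
from collections import deque


def gang_schedule(workloads, pod_ready_fn, ready_timeout=5.0, poll_interval=0.05):
    """Admit each workload only when ALL of its pods report ready.

    If any pod is still not ready after `ready_timeout` seconds, the
    workload is pulled back (releasing its resources) and marked for
    requeue so it cannot hold the cluster in a partial-start deadlock.
    """
    queue = deque(workloads)
    admitted, requeued = [], []
    while queue:
        wl = queue.popleft()
        deadline = time.monotonic() + ready_timeout
        while time.monotonic() < deadline:
            if all(pod_ready_fn(wl, pod) for pod in wl["pods"]):
                admitted.append(wl["name"])  # full gang is ready: start training
                break
            time.sleep(poll_interval)
        else:
            # Timed out with the gang incomplete: pull back and requeue.
            requeued.append(wl["name"])
    return admitted, requeued


# job-a has all pods ready; job-b's gang never completes and is pulled back.
ready = {("job-a", "p0"), ("job-a", "p1"), ("job-b", "p0")}
admitted, requeued = gang_schedule(
    [{"name": "job-a", "pods": ["p0", "p1"]},
     {"name": "job-b", "pods": ["p0", "p1", "p2"]}],
    lambda wl, pod: (wl["name"], pod) in ready,
    ready_timeout=0.2,
)
```

The "admit workloads one at a time" console setting mentioned above corresponds to processing the queue serially, as this sketch does: no workload begins acquiring resources while another gang is still assembling.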
Source: aws.amazon.com