Amazon SageMaker Unified Studio launches support for remote connection from Cursor IDE

Today, AWS announces remote connection from Cursor IDE to Amazon SageMaker Unified Studio via the AWS Toolkit extension. This new capability allows data scientists, ML engineers, and developers to leverage their Cursor setup – including its AI-powered code completion, natural language editing, and multi-file editing capabilities – while accessing the scalable compute resources of Amazon SageMaker. By connecting Cursor to SageMaker Unified Studio using the AWS Toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing AI-assisted development workflows within a single environment for all your AWS analytics and AI/ML services.
SageMaker Unified Studio, part of the next generation of Amazon SageMaker, offers a broad set of fully managed, cloud-based interactive development environments (IDEs), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software). Starting today, you can also use your customized local Cursor setup – complete with custom rules, extensions, and AI model preferences – while accessing your compute resources and data on Amazon SageMaker. Because Cursor is built on Code-OSS, it works with the AWS Toolkit extension, which provides secure authentication via IAM and gives you access to all your SageMaker Unified Studio domains and projects. This integration provides a convenient path from your local AI-powered development environment to scalable infrastructure for running workloads across data processing, SQL analytics services such as Amazon EMR, AWS Glue, and Amazon Athena, and ML workflows – all with enterprise-grade security, including customer-managed encryption keys and AWS IAM integration.
This feature is available in all AWS Regions where Amazon SageMaker Unified Studio is available. To learn more, visit the local IDE support documentation.
Source: aws.amazon.com

AWS Batch now provides AMI status and supports AWS Health Planned Lifecycle Events

AWS Batch now provides enhanced visibility into your compute environments with two new capabilities that help you maintain operational best practices. When you describe a compute environment, you can now see the status of your Batch-provided default Amazon Machine Images (AMIs), indicating when updates are available. Additionally, AWS Batch now publishes AWS Health Planned Lifecycle Events to help you prepare for and track changes affecting your batch computing resources.

The AMI status indicator shows whether you are using the latest AMI (LATEST) or whether an update is available (UPDATE_AVAILABLE), helping you identify compute environments that may be running outdated AMIs. AWS Health Planned Lifecycle Events provide advance notification of upcoming changes such as AMI deprecations, help you monitor the migration status of affected compute environments, and let you automate responses using Amazon EventBridge.

The AMI status indicator and AWS Health Planned Lifecycle Events are available today in all AWS Regions where AWS Batch is available. For more information, see the Managing AMI versions and AWS Health Planned Lifecycle Events pages in the AWS Batch User Guide.
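As a minimal sketch of how this status could be checked programmatically: the announcement does not show the API response shape, so the "amiStatus" field name below is an assumption; in practice you would inspect an actual DescribeComputeEnvironments response from your SDK.

```python
# Sketch: flag AWS Batch compute environments whose Batch-provided default
# AMI has an update available. The "amiStatus" field name is an assumption,
# not a confirmed API field -- check DescribeComputeEnvironments in your SDK.

def environments_needing_ami_update(describe_response):
    """Return names of compute environments reporting UPDATE_AVAILABLE."""
    return [
        env["computeEnvironmentName"]
        for env in describe_response.get("computeEnvironments", [])
        if env.get("amiStatus") == "UPDATE_AVAILABLE"  # hypothetical field name
    ]

if __name__ == "__main__":
    # In practice the response would come from boto3, e.g.:
    #   import boto3
    #   response = boto3.client("batch").describe_compute_environments()
    sample = {
        "computeEnvironments": [
            {"computeEnvironmentName": "ce-current", "amiStatus": "LATEST"},
            {"computeEnvironmentName": "ce-stale", "amiStatus": "UPDATE_AVAILABLE"},
        ]
    }
    print(environments_needing_ami_update(sample))  # ['ce-stale']
```

A check like this could run on a schedule, or you could instead react to the corresponding AWS Health Planned Lifecycle Events via Amazon EventBridge.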
Source: aws.amazon.com

Accelerate AI-assisted development with Agent Plugin for AWS Serverless

AWS announces the Agent Plugin for AWS Serverless, enabling developers to easily build, deploy, troubleshoot, and manage serverless applications using AI coding assistants like Kiro, Claude Code, and Cursor.
Agent plugins extend AI coding assistants with structured, reusable capabilities by packaging skills, sub-agents, hooks, and Model Context Protocol (MCP) servers into a single modular unit. The Agent Plugin for AWS Serverless dynamically loads the relevant guidance and expertise required throughout the development lifecycle for building production-ready serverless applications on AWS.

You can create AWS Lambda functions that integrate with popular event sources like Amazon EventBridge, Amazon Kinesis, and AWS Step Functions, while following built-in best practices for observability, performance optimization, and troubleshooting. As you adopt Infrastructure as Code (IaC), you can streamline project setup with the AWS Serverless Application Model (SAM) and the AWS Cloud Development Kit (CDK), with reusable constructs, proven architectural patterns, automated CI/CD pipelines, and local testing workflows. For long-running, stateful workflows, you can build with confidence using Lambda durable functions, which provide a checkpoint-replay model, advanced orchestration patterns, and error-handling capabilities. Lastly, you can design and manage APIs as part of your application using Amazon API Gateway, with guidance across REST APIs, HTTP APIs, and WebSocket APIs. These capabilities are packaged as agent skills in the open Agent Skills format, making them usable across compatible AI tools such as Kiro, Claude Code, and Cursor.
The Agent Plugin for AWS Serverless is available in any AI coding assistant that supports agent plugins, such as Claude Code and Cursor. In Claude Code, you can install it from the official Claude Marketplace with a single command: '/plugin install aws-serverless@claude-plugins-official'. You can also install agent skills from the plugin individually in any AI coding assistant that supports agent skills. To learn more about the plugin and its capabilities, visit GitHub.
Source: aws.amazon.com

Amazon Bedrock AgentCore Runtime now supports managed session storage for persistent agent filesystem state (preview)

Amazon Bedrock AgentCore Runtime now offers managed session storage in public preview, enabling agents to persist their filesystem state across stop and resume cycles. Modern agents write code, install packages, generate artifacts, and manage state through the filesystem. Until now, that work was lost when a session stopped. With managed session storage, everything your agent writes to a configured mount path persists automatically, even after the compute environment terminates.
When you configure session storage, each session gets a persistent directory at the mount path you specify. Your agent reads and writes files as normal, and AgentCore Runtime transparently replicates the data to durable storage. When the session stops, data is flushed during graceful shutdown. When you resume with the same session ID, a new microVM mounts the same storage and the agent continues where it left off: source files, installed packages, build artifacts, and git history are all intact. No checkpoint logic, no save-and-restore code, and no changes to your agent application are required. Session storage supports standard Linux filesystem operations, including regular files, directories, and symlinks, with up to 1 GB per session and data retained for 14 days of idle time. Storage access is confined to a single session's data; an agent cannot access the data of other sessions or other AgentCore Runtime environments.
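The pattern from the agent's point of view can be sketched as follows: the agent simply uses ordinary file I/O under the configured mount path, and persistence comes for free. The mount path below is a stand-in (the real path is whatever you configure), simulated here with a local temporary directory so the example is runnable.

```python
# Sketch of the session-storage pattern: an agent writes ordinary files under
# its configured mount path; on resume with the same session ID, the same
# files are there. MOUNT_PATH is a local stand-in for the real mount path.
import json
import tempfile
from pathlib import Path

MOUNT_PATH = Path(tempfile.gettempdir()) / "agentcore-session-demo"

def save_progress(step, notes):
    """Persist working state as plain files; no checkpoint API is needed."""
    MOUNT_PATH.mkdir(parents=True, exist_ok=True)
    (MOUNT_PATH / "progress.json").write_text(
        json.dumps({"step": step, "notes": notes})
    )

def resume_progress():
    """On resume, the same session ID mounts the same storage."""
    state_file = MOUNT_PATH / "progress.json"
    if state_file.exists():
        return json.loads(state_file.read_text())
    return {"step": 0, "notes": []}  # fresh session: start from scratch

# "Before stop": the agent records where it got to.
save_progress(3, ["installed deps", "generated report.csv"])
# "After resume": the state is read back from the mounted directory.
print(resume_progress()["step"])  # 3
```

The point of the feature is that the two halves of this example can run in different microVMs, separated by a stop/resume cycle, with no extra code in between.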
Session storage is available in public preview across fourteen AWS Regions: US (N. Virginia, Ohio, Oregon), Canada (Central), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, London, Paris, Stockholm).
To learn more, see Persist files across stop/resume in the Amazon Bedrock AgentCore documentation.
Source: aws.amazon.com

Amazon SageMaker HyperPod now supports continuous provisioning for Slurm-orchestrated clusters

Amazon SageMaker HyperPod now extends continuous provisioning support to clusters using the Slurm orchestrator, enabling greater flexibility and efficiency for enterprise customers running large-scale AI/ML training workloads. AI/ML customers running Slurm-based clusters need to start training quickly, scale seamlessly, perform maintenance without disrupting operations, and have granular visibility into cluster operations. Previously, if any instance group could not be fully provisioned, the entire cluster creation or scaling operation failed and rolled back, causing delays and requiring manual intervention.

With continuous provisioning for Slurm, SageMaker HyperPod automatically provisions the remaining capacity in the background while training jobs begin immediately on available instances. The system uses priority-based provisioning to bring up the Slurm controller node first, followed by the login and worker nodes in parallel, so your cluster reaches an operational state as quickly as possible. HyperPod retries failed node launches asynchronously and adds nodes to the Slurm cluster automatically as they become available, ensuring clusters reliably reach their desired scale without manual intervention. You can now perform concurrent, non-blocking scaling operations across multiple instance groups simultaneously: a capacity shortage in one instance group no longer blocks scaling in others. These capabilities help customers reduce time-to-training, maximize resource utilization, and focus on innovation rather than infrastructure management.

This feature is available for new SageMaker HyperPod clusters using the Slurm orchestrator. You can enable continuous provisioning by setting the NodeProvisioningMode parameter to "Continuous" when creating new HyperPod clusters with the CreateCluster API. Continuous provisioning can also be enabled when creating new clusters through the AWS CLI and the SageMaker AI console.
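As a minimal sketch of opting in via the CreateCluster API: NodeProvisioningMode="Continuous" is the switch named in the announcement, while the instance-group details below are illustrative placeholders, and required fields such as the execution role, lifecycle configuration, and VPC settings are omitted.

```python
# Sketch: building a CreateCluster request with continuous provisioning
# enabled for a Slurm-orchestrated HyperPod cluster. Instance groups are
# illustrative; other required fields (execution role, lifecycle config,
# VPC configuration) are omitted for brevity.

def build_create_cluster_request(cluster_name):
    return {
        "ClusterName": cluster_name,
        "NodeProvisioningMode": "Continuous",  # opt in to continuous provisioning
        "InstanceGroups": [
            {
                "InstanceGroupName": "controller",  # Slurm controller comes up first
                "InstanceType": "ml.m5.2xlarge",
                "InstanceCount": 1,
            },
            {
                "InstanceGroupName": "workers",  # provisioned in the background;
                "InstanceType": "ml.p5.48xlarge",  # jobs can start on available nodes
                "InstanceCount": 16,
            },
        ],
    }

# In practice you would pass the request to the SageMaker client, e.g.:
#   import boto3
#   boto3.client("sagemaker").create_cluster(**build_create_cluster_request("my-cluster"))
```

With continuous provisioning enabled, a shortfall in the "workers" group would no longer roll back cluster creation; HyperPod would keep retrying in the background while training starts on the nodes that did come up.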
This feature is available in all AWS Regions where Amazon SageMaker HyperPod is supported. To learn more about continuous provisioning for Slurm clusters, see the Amazon SageMaker HyperPod User Guide.
Source: aws.amazon.com