AWS launches Sustainability console for carbon emissions tracking

AWS launches the AWS Sustainability console, a free, standalone service that shows customers the environmental impact associated with their AWS usage. Expanding on the features of the Customer Carbon Footprint Tool (CCFT) in the AWS Billing console, the new service addresses a critical access barrier by enabling sustainability professionals to view carbon emissions data without requiring billing permissions, so organizations can ensure the right teams have access to this environmental data. Like the CCFT, the AWS Sustainability console provides customers with their estimated carbon emissions from using AWS, calculated using both market-based (MBM) and location-based (LBM) methods and available by AWS Region, service, and emissions scope (1, 2, 3). The console also delivers additional capabilities, including improved, customizable visualizations, the ability to set the month your fiscal year starts, customizable CSV reports, and API/SDK access for seamless integration of emissions data into existing reporting workflows.
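Because the console exports customizable CSV reports, a few lines of pandas are enough to fold the data into an existing reporting workflow. The sketch below is illustrative only: the file name and column names (region, scope, emissions_mtco2e) are assumptions, not the published export schema, so match them to the headers in your own report.

# Minimal sketch: summarize an emissions CSV exported from the console.
# Column names are assumptions -- adjust to the headers in your export.
import pandas as pd

df = pd.read_csv("aws-emissions-export.csv")

# Total estimated emissions per Region and emissions scope, largest first.
by_region_scope = (
    df.groupby(["region", "scope"])["emissions_mtco2e"]
      .sum()
      .sort_values(ascending=False)
)
print(by_region_scope.head(10))
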
The AWS Sustainability console is now available in the US East (N. Virginia) Region and provides carbon emissions data for all AWS commercial Regions. Access the service globally through the AWS Management Console.
Source: aws.amazon.com

Amazon CloudWatch now supports ingesting Security Hub CSPM findings with organization-wide enablement

Amazon CloudWatch now supports ingesting AWS Security Hub CSPM findings, enabling customers to centrally analyze and monitor security findings directly in CloudWatch Logs. Security Hub CSPM findings are supported in AWS Security Finding Format (ASFF) and Open Cybersecurity Schema Framework (OCSF) format using CloudWatch Pipelines, providing standardized security data ingestion. Customers can now use CloudWatch Logs Insights to query findings, create metric filters for monitoring, and leverage Amazon S3 Tables integration for advanced analytics, helping security teams identify and respond to threats faster across their AWS environment.
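As an illustration of querying the delivered findings with CloudWatch Logs Insights, the sketch below uses boto3's start_query and get_query_results. The log group name is a placeholder and the field names assume ASFF-formatted findings; adjust both to your delivery configuration.

# Sketch: query Security Hub findings delivered to CloudWatch Logs.
# Log group name is a placeholder; field names assume ASFF format.
import time
import boto3

logs = boto3.client("logs")

query = """
fields @timestamp, Title, Severity.Label, AwsAccountId
| filter Severity.Label in ["HIGH", "CRITICAL"]
| sort @timestamp desc
| limit 50
"""

start = logs.start_query(
    logGroupName="/aws/security-hub/findings",  # placeholder log group
    startTime=int(time.time()) - 24 * 3600,     # last 24 hours
    endTime=int(time.time()),
    queryString=query,
)

results = logs.get_query_results(queryId=start["queryId"])
while results["status"] in ("Scheduled", "Running"):
    time.sleep(2)
    results = logs.get_query_results(queryId=start["queryId"])

for row in results["results"]:
    print({field["field"]: field["value"] for field in row})
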
With today’s launch, customers can automatically enable Security Hub findings delivery to CloudWatch Logs using CloudWatch enablement rules that apply to the entire organization or specific accounts, to standardize security monitoring coverage. For example, a security team can create an enablement rule to automatically send Security Hub findings to CloudWatch Logs for all production accounts, ensuring consistent visibility into security posture.
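For the monitoring side, a rough sketch follows, again assuming findings land in a log group in ASFF format (log group name and metric namespace are placeholders): a metric filter counts CRITICAL findings and an alarm fires whenever any appear.

# Sketch: count CRITICAL findings with a metric filter, then alarm on them.
# Log group name, namespace, and the ASFF field path are assumptions.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

logs.put_metric_filter(
    logGroupName="/aws/security-hub/findings",   # placeholder
    filterName="critical-security-hub-findings",
    filterPattern='{ $.Severity.Label = "CRITICAL" }',
    metricTransformations=[{
        "metricName": "CriticalFindings",
        "metricNamespace": "SecurityHub/Findings",  # placeholder namespace
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="critical-security-hub-findings",
    Namespace="SecurityHub/Findings",
    MetricName="CriticalFindings",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
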
Security Hub findings delivery to CloudWatch Logs is available in all AWS commercial Regions.
Security Hub findings delivered to CloudWatch Logs are charged according to tiered pricing. For pricing information, see the CloudWatch pricing page. To learn more about Security Hub findings in CloudWatch Logs and organization-level enablement, visit the Amazon CloudWatch documentation.
Source: aws.amazon.com

Docker Sandboxes: Run Agents in YOLO Mode, Safely

Agents have crossed a threshold.

Over a quarter of all production code is now AI-authored, and developers who use agents are merging roughly 60% more pull requests. But these gains only come when you let agents run autonomously, and to unlock that, you have to get out of the way. That means letting agents run without stopping to ask permission at every step, often called YOLO mode.

Doing that on your own machine is risky. An autonomous agent can access files or directories you did not intend for it to touch, read sensitive data, execute destructive commands, or make broad changes while trying to help.

So yes, guardrails matter, but only when they’re enforced outside the agent, not by it. Agents need a true bounding box: constraints defined before execution and clear limits on what they can access and execute. Inside that box, an agent should be able to move fast.

That’s exactly what Docker Sandboxes provide.

They let you run agents in fully autonomous mode with a boundary you define. And Docker Sandboxes are standalone; you don’t need Docker Desktop. That dramatically expands who can use them. Whether you’re part of the newest class of builders just getting started with agents or you’re building advanced workflows, you can run agents safely from day one.

Docker Sandboxes work out of the box with today’s coding agents like Claude Code, GitHub Copilot CLI, OpenCode, Gemini CLI, Codex, Docker Agent, and Kiro. They also make it practical to run next-generation autonomous systems like NanoClaw and OpenClaw locally, without needing dedicated hardware like a Mac mini.

Here’s what Docker Sandboxes unlock.

You Actually Get the Productivity Agents Promise

The difference between a cautious agent and a fully autonomous one isn’t just speed. The interaction model changes entirely. In a constrained setup, you become the bottleneck: approving actions instead of deciding what to build next. In a sandbox, you give direction, step away, and come back to a cloned repo, passing tests, and an open pull request. No interruptions. That’s what a real boundary makes possible.

You Stop Worrying About Damage

Running an agent directly on your machine exposes everything it can reach. Mistakes are not hypothetical. Commands like rm -rf, accidental exposure of environment variables, or unintended edits to directories like .ssh can all happen.

Docker Sandboxes offer strongly isolated environments for autonomous agents. Under the hood, each sandbox runs in its own lightweight microVM, built for strong isolation without sacrificing speed. There is no shared state, no unintended access, and no bleed-through between environments. Environments spin up in seconds (now, even on Windows), run the task, and disappear just as quickly.

Other approaches introduce tradeoffs. Mounting the Docker socket exposes the host daemon. Docker-in-Docker relies on privileged access. Running directly on the host provides almost no isolation. A microVM-based approach avoids these issues by design. 

Run Any Agent

Docker Sandboxes are fully standalone and work with the tools developers already use, including Claude Code, Codex, GitHub Copilot, Docker Agent, Gemini, and Kiro. They also support emerging autonomous systems like OpenClaw and NanoClaw. There is no new workflow to adopt. Agents continue to open ports, access secrets, and execute multi-step tasks. The only difference is the environment they run in. Each sandbox can be inspected and interacted with through a terminal interface, so you always have visibility into what the agent is doing.

What Teams Are Saying

“Every team is about to have their own team of AI agents doing real work for them. The question is whether it can happen safely. Sandboxes is what that looks like at the infrastructure level.” — Gavriel Cohen, Creator of NanoClaw

“Docker Sandboxes let agents have the autonomy to do long-running tasks without compromising safety.” — Ben Navetta, Engineering Lead, Warp

Start in Seconds

For macOS: brew install docker/tap/sbx

For Windows: winget install Docker.sbx

Read the docs to learn more, or get in touch if you’re deploying for a team. If you’re already using Docker Desktop, the new Sandboxes experience is coming there soon. Stay tuned.

What’s Next

You already trust Docker to build, ship, and run your software. Sandboxes extend that trust to agents, giving them room to operate without giving them access to everything.

Autonomous agents are becoming more capable. The limiting factor is no longer what they can do, but whether you can safely let them do it.

Sandboxes make that possible.
Source: https://blog.docker.com/feed/

Run and Iterate on LLMs Faster with Docker Model Runner on DGX Station

Back in October, we showed how Docker Model Runner on the NVIDIA DGX Spark makes it remarkably easy to run large AI models locally with the same familiar Docker experience developers already trust. That post struck a chord: hundreds of developers discovered that a compact desktop system paired with Docker Model Runner could replace complex GPU setups and cloud API calls.

Recently, at NVIDIA GTC 2026, NVIDIA raised the bar with the NVIDIA DGX Station, and we’re excited to add support for it in Docker Model Runner! The new DGX Station brings serious performance, and Model Runner helps make it practical to use day to day. With Model Runner, you can run and iterate on larger models on a DGX Station, using the same intuitive Docker experience you already know and trust.

From NVIDIA DGX Spark to DGX Station: What has changed and why does this matter?

NVIDIA DGX Spark, powered by the GB10 Grace Blackwell Superchip, gave developers 128GB of unified memory and petaflop-class AI performance in a compact form factor. A fantastic entry point for running models.

NVIDIA DGX Station is a different beast entirely. Built around the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, it connects a 72-core NVIDIA Grace CPU and NVIDIA Blackwell Ultra GPU through NVIDIA NVLink-C2C, creating a unified, high-bandwidth architecture built for frontier AI workloads. It brings data-center-class performance to a deskside form factor. Here are the headline specs:

Spec                     DGX Spark (GB10)                       DGX Station (GB300)
GPU Memory               128 GB unified                         252 GB
GPU Memory Bandwidth     273 GB/s                               7.1 TB/s
Total Coherent Memory    128 GB                                 748 GB
Networking               200 Gb/s                               800 Gb/s
GPU Architecture         Blackwell (5th-gen Tensor Cores, FP4)  Blackwell Ultra (5th-gen Tensor Cores, FP4)

With 252GB of GPU memory at 7.1 TB/s of bandwidth and a total of 748GB of coherent memory, the DGX Station doesn’t just let you run frontier models; it lets you run trillion-parameter models, fine-tune massive architectures, and serve multiple models simultaneously, all from your desk.

Here’s what 748GB of coherent memory and 7.1 TB/s of bandwidth unlock in practice:

Run the largest open models. DGX Station can run the largest open 1T-parameter models with quantization.

Serve a team, not just yourself. NVIDIA Multi-Instance GPU (MIG) technology lets you partition NVIDIA Blackwell Ultra GPUs into up to seven isolated instances. Combined with Docker Model Runner’s containerized architecture, a single DGX Station can serve as a shared AI development node for an entire team — each member getting their own sandboxed model endpoint.

Faster iteration on agentic workflows. Agentic AI pipelines often require multiple models running concurrently — a reasoning model, a code generation model, a vision model. With 7.1 TB/s of memory bandwidth, switching between and serving these models is dramatically faster than anything a desktop system has offered before.
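
To make that concrete, here is a small sketch of driving two locally served models side by side through Model Runner’s OpenAI-compatible API. It assumes the default host-side TCP endpoint on port 12434, and the model names are examples; adjust the base URL and models to your Model Runner setup.

# Sketch: call two models served by Docker Model Runner via its
# OpenAI-compatible API. Base URL and model names are assumptions.
import requests

BASE_URL = "http://localhost:12434/engines/v1"  # assumed default TCP endpoint

def chat(model, prompt):
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# One model reasons about the task, another generates the code.
plan = chat("ai/llama3.3", "Outline the steps to parse a CSV of emissions data.")
code = chat("ai/qwen2.5-coder", f"Write Python for this plan:\n{plan}")
print(code)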

Bottom line: the DGX Spark made running large models locally fast. The DGX Station makes it transformative. And raw hardware is only half the story. With Docker Model Runner, the setup stays effortless and the developer experience stays smooth, no matter how powerful the machine underneath becomes.

Getting Started: It’s the Same Docker Experience

For the full step-by-step walkthrough, check out our guide for DGX Spark. Every instruction applies to the DGX Station as well.

NVIDIA’s new DGX Station puts data-center-class AI on your desk with 252GB of GPU memory, 7.1 TB/s bandwidth, and 748GB of total coherent memory. Docker Model Runner makes all of that power accessible with the same familiar commands developers already use on the DGX Spark. Pull a trillion-parameter model, serve a whole team, and iterate on agentic workflows. No cloud required, no new tools to learn.

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there’s always room to grow. To get involved:

Star the repository: Show your support by starring the Docker Model Runner repo.

Contribute your ideas: Create an issue or submit a pull request. We’re excited to see what ideas you have!

Spread the word: Tell your friends and colleagues who might be interested in running AI models with Docker.

Learn More

Read our original post on Docker Model Runner + DGX Spark 

Check out the Docker Model Runner General Availability announcement

Visit our Model Runner GitHub repo

Get started with a simple hello GenAI application

Source: https://blog.docker.com/feed/

AWS DevOps Agent is now generally available

Now generally available, AWS DevOps Agent is your always-available operations teammate that resolves and proactively prevents incidents, optimizes application reliability and performance, and handles on-demand SRE tasks across AWS, multicloud, and on-prem environments. Building on the preview launch, DevOps Agent now adds new use cases, broader integrations, enhanced intelligence, and enterprise-ready features, including the ability to investigate applications in Azure and on-prem environments, add custom agent skills to extend capabilities, and create custom charts and reports for deeper operational insights.
DevOps Agent investigates incidents and identifies operational improvements as an experienced teammate would: by learning your applications and their relationships, working with your observability tools, runbooks, code repositories, and CI/CD pipelines, and correlating telemetry, code, and deployment data. It autonomously triages incidents and guides teams to rapid resolution, reducing mean time to resolution (MTTR) from hours to minutes, while analyzing patterns across historical incidents to deliver actionable recommendations that prevent future outages.
For the full list of AWS Regions where AWS DevOps Agent is available, visit the Regions list. Pricing details are available on the AWS DevOps Agent pricing page. AWS Support customers receive monthly DevOps Agent credits based on the prior month’s gross AWS Support spend: 100% for Unified Operations, 75% for Enterprise Support, or 30% for Business Support+. For many customers, this significantly reduces or eliminates DevOps Agent costs. For details, visit the support compare page.
If you are a preview customer, review the migration documentation to ensure seamless access to new AWS DevOps Agent capabilities. To learn more, read the launch blog and see getting started.
Source: aws.amazon.com