Running NanoClaw in a Docker Shell Sandbox

Ever wanted to run a personal AI assistant that monitors your WhatsApp messages 24/7, but worried about giving it access to your entire system? Docker Sandboxes’ new shell sandbox type is the perfect solution. In this post, I’ll show you how to run NanoClaw, a lightweight Claude-powered WhatsApp assistant, inside a secure, isolated Docker sandbox.

What is the Shell Sandbox?

Docker Sandboxes provides pre-configured environments for running AI coding agents like Claude Code, Gemini CLI, and others. But what if you want to run a different agent or tool that isn't built in? That's where the shell sandbox comes in. It's a minimal sandbox that drops you into an interactive bash shell inside an isolated microVM. No pre-installed agent, no opinions: just a clean Ubuntu environment with Node.js, Python, git, and common dev tools. You install whatever you need.

Why Run NanoClaw in a Sandbox?

NanoClaw already runs its agents in containers, so it’s security-conscious by design. But running the entire NanoClaw process inside a Docker sandbox adds another layer:

Filesystem isolation – NanoClaw can only see the workspace directory you mount, not your home directory

Credential management – API keys are injected via Docker’s proxy, never stored inside the sandbox

Clean environment – No conflicts with your host’s Node.js version or global packages

Disposability – Nuke it and start fresh anytime with docker sandbox rm

Prerequisites

Docker Desktop installed and running

Docker Sandboxes CLI, with the docker sandbox command available (v0.12.0, in the nightly build as of Feb 13)

An Anthropic API key set in an environment variable

Setting It Up

Create the sandbox

Pick a directory on your host that will be mounted as the workspace inside the sandbox. This is the only part of your filesystem the sandbox can see:

mkdir -p ~/nanoclaw-workspace
docker sandbox create --name nanoclaw shell ~/nanoclaw-workspace

Connect to it

docker sandbox run nanoclaw

You’re now inside the sandbox – an Ubuntu shell running in an isolated VM. Everything from here on happens inside the sandbox.
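Before installing anything, it's worth a quick look at what the shell environment already provides. A small sketch (exact versions will vary by sandbox release, and each tool is guarded in case an image omits it):

```shell
# Print the kernel and the versions of the preinstalled toolchain.
uname -sr
for tool in node python3 git; do
  command -v "$tool" >/dev/null 2>&1 && "$tool" --version || true
done
```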

Install Claude Code

The shell sandbox comes with Node.js 20 pre-installed, so we can install Claude Code directly via npm:

npm install -g @anthropic-ai/claude-code

Configure the API key

This is the one extra step needed in a shell sandbox. The built-in claude sandbox type does this automatically, but since we’re in a plain shell, we need to tell Claude Code to get its API key from Docker’s credential proxy:

mkdir -p ~/.claude && cat > ~/.claude/settings.json << 'EOF'
{
  "apiKeyHelper": "echo proxy-managed",
  "defaultMode": "bypassPermissions",
  "bypassPermissionsModeAccepted": true
}
EOF

What this does: apiKeyHelper tells Claude Code to run echo proxy-managed to get its API key. The sandbox’s network proxy intercepts outgoing API calls and swaps this sentinel value for your real Anthropic key, so the actual key never exists inside the sandbox.
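If you want to confirm the swap is working before launching Claude Code, a minimal API call made with the sentinel key should succeed inside the sandbox and fail with an authentication error anywhere else. This is just a sanity-check sketch; the model name is an example and may need updating:

```shell
# Send a tiny request using the sentinel value as the API key. Inside
# the sandbox, the network proxy replaces "proxy-managed" with your
# real Anthropic key in transit; outside it, expect a 401.
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: proxy-managed" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model":"claude-3-5-haiku-20241022","max_tokens":16,"messages":[{"role":"user","content":"ping"}]}'
```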

Clone NanoClaw and install dependencies

cd ~/workspace
git clone https://github.com/†/nanoclaw.git
cd nanoclaw
npm install

Run Claude and set up NanoClaw

NanoClaw uses Claude Code for its initial setup – configuring WhatsApp authentication, the database, and the container runtime:

claude

Once Claude starts, run /setup and follow the prompts. Claude will walk you through scanning a WhatsApp QR code and configuring everything else.

Start NanoClaw

After setup completes, start the assistant:

npm start

NanoClaw is now running and listening for WhatsApp messages inside the sandbox.

Managing the Sandbox

# List all sandboxes
docker sandbox ls

# Stop the sandbox (stops NanoClaw too)
docker sandbox stop nanoclaw

# Start it again
docker sandbox start nanoclaw

# Remove it entirely
docker sandbox rm nanoclaw

What Else Could You Run?

The shell sandbox isn’t specific to NanoClaw. Anything that runs on Linux and talks to AI APIs is a good fit:

Custom agents built with the Claude Agent SDK, or other AI coding agents: Claude Code, Codex, GitHub Copilot, OpenCode, Kiro, and more

AI-powered bots and automation scripts

Experimental tools you don’t want running on your host

The pattern is always the same: create a sandbox, install what you need, configure credentials via the proxy, and run it.

docker sandbox create --name my-shell shell ~/my-workspace
docker sandbox run my-shell

Source: https://blog.docker.com/feed/

Amazon EC2 High Memory U7i instances now available in additional regions

Amazon EC2 High Memory instances are now available in additional regions: U7i-6tb.112xlarge instances in South America (São Paulo) and Europe (Milan), U7i-12tb.224xlarge instances in AWS GovCloud (US-East), and U7in-16tb.224xlarge instances in Europe (London). U7i instances are part of AWS's 7th generation and are powered by custom fourth-generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-6tb instances offer 6 TiB of DDR5 memory, U7i-12tb instances offer 12 TiB, and U7in-16tb instances offer 16 TiB, enabling customers to scale transaction processing throughput in fast-growing data environments. U7i-6tb instances offer 448 vCPUs, and U7i-12tb instances offer 896 vCPUs; both support up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth and deliver up to 100 Gbps of network bandwidth. U7in-16tb instances offer 896 vCPUs, support up to 100 Gbps of EBS bandwidth, and deliver up to 200 Gbps of network bandwidth for faster data loading and backups. All U7i instances support ENA Express.
U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.
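To compare the sizes above yourself, the AWS CLI can report memory and vCPU counts per instance type. A sketch, not part of the announcement (requires configured AWS credentials and a region where the types are offered):

```shell
# List memory (MiB) and default vCPU count for the U7i sizes mentioned.
aws ec2 describe-instance-types \
  --instance-types u7i-6tb.112xlarge u7i-12tb.224xlarge u7in-16tb.224xlarge \
  --query 'InstanceTypes[].[InstanceType,MemoryInfo.SizeInMiB,VCpuInfo.DefaultVCpus]' \
  --output table
```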
Source: aws.amazon.com

Amazon EC2 supports nested virtualization on virtual Amazon EC2 instances

Starting today, customers can create nested environments within virtualized Amazon EC2 instances. Previously, customers could only create and manage virtual machines inside bare metal EC2 instances. With this launch, customers can create nested virtual machines by running KVM or Hyper-V on virtual EC2 instances. Customers can leverage this capability for use cases such as running emulators for mobile applications, simulating in-vehicle hardware for automobiles, and running Windows Subsystem for Linux on Windows workstations.
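A quick way to see whether a guest exposes the hardware virtualization extensions KVM needs is to inspect the CPU flags. This is a generic Linux check, not an AWS-specific command:

```shell
# The vmx (Intel) or svm (AMD) CPU flags indicate that hardware
# virtualization support is visible to this guest.
if grep -q -E 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
  echo "virtualization extensions visible to this guest"
else
  echo "no vmx/svm flags visible to this guest"
fi
```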
 
Source: aws.amazon.com

Announcing Amazon DocumentDB long-term support (LTS) on 5.0

Starting today, Amazon DocumentDB (with MongoDB compatibility) offers Long-Term Support (LTS) on DocumentDB 5.0, enabling customers to reduce database upgrade frequency and maintenance overhead. LTS versions receive only critical stability and security patches, without introducing new features. To get started, create a new DocumentDB cluster on engine version 5.0.0, or patch your existing 5.0.0 cluster during your next maintenance window. Verify you're on the required engine patch version by connecting to your cluster and running db.runCommand({getEngineVersion: 1}); ensure it reports Engine Patch Version 3.0.17983 or later. This LTS release is available in all AWS Regions where DocumentDB is offered. For more details about DocumentDB LTS, and how to check which engine patch version you're on, refer to the Long-Term Support (LTS) release documentation for Amazon DocumentDB.
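The version check from the announcement looks roughly like this when run via mongosh; the endpoint, credentials, and TLS options below are placeholders, not values from the announcement:

```shell
# Query the engine patch version on an existing cluster; confirm it
# reports 3.0.17983 or later before relying on LTS behavior.
mongosh "mongodb://<cluster-endpoint>:27017/?tls=true" \
  --username <user> --password \
  --eval 'db.runCommand({getEngineVersion: 1})'
```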
Source: aws.amazon.com