How to Connect MCP Servers to Claude Desktop with Docker MCP Toolkit

What if you could turn Claude from a conversational assistant into a development partner that actually does things—safely, securely, and without touching your local machine?

If you’ve been exploring Claude Desktop and wondering how to connect it with real developer tools, Docker MCP Toolkit is the missing piece you’ve been looking for.

Here’s the reality: Today’s AI assistants are brilliant at reasoning and explaining concepts. But when it comes to acting on that knowledge? They’re stuck. They can’t deploy containers, manage repos, or analyze data without trusted tools to bridge that gap.

That’s where Docker MCP Toolkit comes in.

Think of it this way:

Claude is the reasoning engine—the brain that understands what you want

Claude Desktop is the translator, turning your requests into actionable commands

Docker MCP Toolkit is the hand that safely executes those actions in isolated containers

Together, they transform Claude from something that just talks to something that builds alongside you—all through Docker’s trusted, security-hardened workflow.

In this guide, I’ll walk you through setting up Claude Desktop as an MCP client, connecting it to MCP servers running in Docker Desktop, and automating real developer tasks securely and reproducibly. 

What if you could turn 4 hours of work into 10 minutes? Imagine taking a screenshot of any app UI—a Stripe checkout page, a dashboard component, anything—and watching Claude recreate it as a fully tested React component, complete with Jest tests and a GitHub repo. That’s exactly what we’ll build together.

What is Model Context Protocol (MCP)?

Before we dive into the setup, let’s clarify what MCP actually is.

Model Context Protocol (MCP) is the standardized way AI agents like Claude connect to tools, APIs, and services. It’s what lets Claude go beyond conversation and perform real-world actions—like deploying containers, analyzing datasets, or managing GitHub repositories.

In short: MCP is the bridge between Claude’s reasoning and your developer stack. And Docker? Docker provides the guardrails that make it safe.

Why use Docker MCP Toolkit with Claude Desktop?

I’ve been working with AI tools for a while now, and this Docker MCP setup is one of the most impressive things I’ve seen. Docker MCP Toolkit bridges Claude’s intelligence with Docker’s trusted developer workflow.

Docker MCP Toolkit doesn’t just connect Claude to your tools—it creates a secure, containerized environment where Claude can work without any risk to your local machine. Every action happens in an isolated container that gets torn down when you’re done. No mess, no security risks, complete reproducibility.

Once connected, Claude Desktop can tap into Docker’s curated MCP servers to automate developer tasks that used to eat up hours of your day. Here’s what that looks like in practice:

GitHub Integration: Want to create a repo, push some code, or review a pull request? Claude’s got you covered. No more switching between tabs constantly.

Kubernetes Management: If you’re into K8s (and let’s be honest, who isn’t these days?), Claude can deploy pods, manage services, and even handle Helm charts.

Data Analysis: Upload a CSV file and watch Claude analyze it, create visualizations, and give you insights you didn’t even know you needed.

Web Scraping: With Firecrawl integration, Claude can go fetch data from websites and bring it back to you, all formatted and ready to use.

Each of these MCP servers can be enabled directly through the MCP Toolkit in Docker Desktop. The setup is straightforward, the execution is secure, and the results are reproducible every single time.

Setting up Claude Desktop with Docker MCP Toolkit

Prerequisites

Before you begin, make sure you have:

A machine with 8GB RAM minimum, ideally 16GB

Docker Desktop installed

Step 1: Install and sign in to Claude Desktop

Head over to claude.ai/desktop and download Claude Desktop if you haven’t already. The installation is pretty straightforward – just follow the prompts. Once it’s installed, sign in with your Anthropic account.

Step 2: Enable Docker MCP Toolkit

Open Docker Desktop and enable the MCP Toolkit from the Settings menu.

Step 3. Connect Claude Desktop as MCP Client

Click “MCP Toolkit” in the left sidebar of Docker Desktop and click on “Connect” under Claude Desktop.

Step 4. Verify the MCP_DOCKER connection 

Restart Claude Desktop and verify that the MCP_DOCKER connection appears under the Claude Desktop menu.

Step 5. View the Claude Configuration

You can view the Claude configuration file claude_desktop_config.json accessible via Claude > Settings > Developers > Edit Config.

Click “Edit Config” and open it via Visual Studio Code or your preferred IDE.
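For example, on macOS (where the file lives under ~/Library/Application Support/Claude, as noted later in this section), you can open it from a terminal with VS Code’s code command; a quick sketch, assuming the code CLI is on your PATH:

code "$HOME/Library/Application Support/Claude/claude_desktop_config.json"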

How it works

When you connect Claude Desktop to Docker MCP Toolkit, this configuration gets added to Claude Desktop’s claude_desktop_config.json file:
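{
  "mcpServers": {
    "MCP_DOCKER": {
      "command": "docker",
      "args": ["mcp", "gateway", "run"]
    }
  }
}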

Here’s what each part does:

mcpServers: The parent object that defines all MCP server connections available to Claude Desktop

“MCP_DOCKER”: The name identifier for this connection. This is what you’ll see referenced in Claude Desktop’s settings and logs

“command”: “docker”: Tells Claude Desktop to use the Docker CLI as the execution command

“args”: [“mcp”, “gateway”, “run”]: The arguments passed to Docker, which translate to running:

$ docker mcp gateway run

What this actually does

When Claude Desktop needs to access MCP tools, it executes docker mcp gateway run, which:

Starts the Docker MCP Gateway – This acts as the central router/bridge

Connects Claude Desktop (MCP client) to all enabled MCP servers in Docker Desktop

Routes tool requests from Claude → Gateway → Appropriate MCP Server → Back to Claude

Runs everything in isolated containers for security and reproducibility

In simple terms:

Think of it like this:

Claude Desktop = Your AI assistant (the client)

Docker MCP Gateway = The switchboard operator (the router)

MCP Servers = The actual tools (GitHub, Firecrawl, Node.js Sandbox, etc.)

This one configuration line is what makes all the magic happen—it’s the handshake that lets Claude safely access and execute all those containerized developer tools.

Where to find this file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

Windows: %APPDATA%\Claude\claude_desktop_config.json

Note: When you click “Connect” in Docker Desktop’s MCP Client settings, this configuration gets added automatically—you don’t need to edit it manually.

Real-World Demo: From Screenshot to Deployed App (With Tests!)

Now that you’ve connected Claude Desktop to Docker MCP Toolkit, let’s see it in action with a practical example. What if you could take a screenshot of any app, and have Claude recreate it with working code AND full test coverage in minutes?

The Problem

Right now, the typical workflow looks like this:

Designers create mockups

Developers manually rebuild them in code

QA writes and runs tests

Repeat until it works

It’s slow, manual, and riddled with back-and-forth communication gaps.

The Solution

Claude Desktop + Docker MCP Toolkit automates this entire loop. You provide a screenshot, and Claude handles the rest—analysis, code generation, testing, debugging, and pushing the result to GitHub.

What You’ll Build

A functional React component from a screenshot, complete with:

Jest unit tests 

A GitHub repo (basic structure)

Configuration files (package.json, babel.config.js, .gitignore)

Files ready for download

Note: This is a code foundation, not production deployment. Does NOT include: CI/CD, live deployment, Playwright tests, or a11y validation.

The Demo Setup

For this demonstration, we’ll use five MCP servers:

Firecrawl – Captures and analyzes screenshots

Node.js Sandbox – Runs tests, installs dependencies, validates code (in isolated containers)

GitHub – Handles version control and deployment

Sequential Thinking – Debugs failing tests and optimizes code

Context7 – Provides code documentation for LLMs and AI code editors

The Walkthrough (Step-by-Step)

Phase 1: Screenshot to Component

– You provide a screenshot; it could be a Stripe checkout page, a dashboard card, or whatever UI you need.

– Claude analyzes the layout, identifies components, measures spacing, captures colors, and generates a React/Next.js component that matches the design.

Phase 2: Generate Test Suite

– Jest unit tests for component logic (41 tests covering rendering, interactions, filters and state management).

Phase 3: Run & Debug with Node.js Sandbox

This is where Docker really shines

– The Node.js Sandbox MCP spins up an isolated container, installs all dependencies safely, and runs your test suite.

– Everything happens in a sandboxed environment; nothing touches your local machine.

– If tests fail? Sequential Thinking kicks in. Claude iterates, fixes the code, spins up a fresh container, and runs the tests again. It repeats this loop until everything passes.

Phase 4: Deploy to GitHub

Once tests are green:

– Claude creates a GitHub repo with proper structure

– Pushes the code

The Results: 

Before: A screenshot of a UI

After: A fully tested component, with code and tests pushed to a GitHub repo

Time saved: What used to take 4 hours now takes 10 minutes.

Why This Matters

Security First: Code runs in isolated containers, not on your machine. No risk of malicious code affecting your system. Every execution has an audit trail.

Reproducibility: Same Docker image = same results, every time. Works on your machine, your teammate’s machine, and in CI/CD. No more “works on my machine” excuses.

Speed + Safety: Test potentially dangerous AI-generated code without fear. Iterate quickly in isolated environments. Roll back instantly if something breaks.

Visibility: See exactly what’s running and where. Monitor resource usage in Docker Desktop. Access full logs and debugging capabilities.

Configure MCP Servers

Assuming that Docker MCP Toolkit is already configured with Claude Desktop, follow the steps below to configure and add MCP servers.

Click “MCP Toolkit” and select “Catalog” to search for the following MCP servers and add them one by one.

Firecrawl (web scraping, screenshots)

GitHub Official (repo management, PRs, issues, commits)

Sequential Thinking (systematic problem-solving)

Context7 (up-to-date code documentation for LLMs and code editors)

Note: The Node.js Sandbox MCP server implements the Docker-out-of-Docker (DooD) pattern by mounting /var/run/docker.sock, giving the sandbox container full access to the Docker daemon API. This allows it to spawn ephemeral sibling containers for code execution: when Claude requests JavaScript execution, the sandbox container makes Docker API calls to create temporary Node.js containers (node:lts-slim, Playwright, etc.) with resource limits (512MB RAM, 0.75 CPU cores), executes the code in isolation, and auto-removes the container. The Docker socket mount is a privilege escalation vector (effectively granting root-level host access), but it enables dynamic container orchestration without the overhead of true Docker-in-Docker. Files are persisted via a volume mount to ~/Desktop/sandbox-output, mapped to /root inside the execution containers. This architecture trades security for flexibility; that is acceptable for local development, but it requires Docker Scout vulnerability scanning and careful consideration for production use. Because of this setup, the Node.js Sandbox MCP server needs its own entry in the Claude configuration file. You’ll see how to configure it later in this article.

Configure Firecrawl MCP Server

The Firecrawl MCP server gives Claude Desktop powerful web scraping and search capabilities. To set up the Firecrawl MCP server, you’ll need a Firecrawl API key from https://www.firecrawl.dev/app/api-keys. Create a new account if you’re visiting the https://www.firecrawl.dev/ portal for the first time, then click “API Keys” in the left sidebar to create a new key.

In Docker Desktop:

Open Docker Desktop → MCP Toolkit → Catalog

Search for “Firecrawl”

Find Firecrawl in the results

Select Configurations

Add the Firecrawl API key (firecrawl.api_key) that you created earlier

Leave all the other entries blank

Click Save and Add Server

The Firecrawl MCP server should now appear under “My Servers” in Docker MCP Toolkit.
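You can also confirm this from a terminal; a quick check, assuming the docker mcp CLI plugin that ships with the MCP Toolkit is available on your PATH:

docker mcp server list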

What you get:

6+ Firecrawl tools including:

firecrawl_check_crawl_status – Check the status of a crawl job.

firecrawl_crawl – Start a crawl job on a website and extract content from all pages.

firecrawl_map – Map a website to discover all indexed URLs on the site.

firecrawl_extract – Extract structured information from web pages using LLM capabilities.

firecrawl_scrape – Scrape content from a single URL with advanced options.

firecrawl_search – Search the web and optionally extract content from search results.

Configure GitHub Official MCP Server

The GitHub MCP enables Claude Desktop to create issues, PRs, and manage repositories on your behalf.

Option 1: OAuth Authentication (Recommended – Easiest)

In MCP Toolkit → Catalog, search “GitHub Official”

Click + Add

Go to the OAuth tab in Docker Desktop

Find the GitHub entry

Click “Authorize”

Your browser opens GitHub’s authorization page

Click “Authorize Docker” on GitHub

You’re redirected back to Docker Desktop

Return to Catalog tab, find GitHub Official

Click Start Server

Advantage: No manual token creation. Authorization happens through GitHub’s secure OAuth flow with automatic token refresh.

Option 2: Personal Access Token (For Granular Control)

If you prefer manual control or need specific scopes:

Step 1: Create GitHub Personal Access Token

Go to https://github.com  and sign in

Click your profile picture → Settings

Scroll to “Developer settings” in the left sidebar

Click “Personal access tokens” → “Tokens (classic)”

Click “Generate new token” → “Generate new token (classic)”

Name it: “Docker MCP Browser Testing”

Select scopes:

repo (Full control of repositories)

workflow (Update GitHub Actions workflows)

Click “Generate token”

Copy the token immediately (you won’t see it again!)

Step 2: Configure in Docker Desktop

In MCP Toolkit → Catalog, find GitHub Official

Click + Add (if not already added)

Go to Configuration tab

Select “Personal Access Token” as the authentication method

Paste your token

Click Start Server

Or via CLI:

docker mcp secret set GITHUB.PERSONAL_ACCESS_TOKEN=github_pat_YOUR_TOKEN_HERE

Configure Sequential Thinking MCP Server

The Sequential Thinking MCP server gives Claude Desktop the ability to perform dynamic, reflective problem-solving through thought sequences. Adding the Sequential Thinking MCP server is straightforward; it doesn’t require an API key. Just search for Sequential Thinking in the Catalog and add it to your MCP server list.

In Docker Desktop:

Open Docker Desktop → MCP Toolkit → Catalog

Search for “Sequential Thinking”

Find Sequential Thinking in the results

Click “Add MCP Server” to add without any configuration

The Sequential Thinking MCP server should now appear under “My Servers” in Docker MCP Toolkit.

What you get:

A single Sequential Thinking tool that includes:

sequentialthinking – A detailed tool for dynamic and reflective problem-solving through thoughts. This tool helps analyze problems through a flexible thinking process that can adapt and evolve. Each thought can build on, question, or revise previous insights as understanding deepens.

Configure Node.js Sandbox MCP Server

The Node.js Sandbox MCP server enables Claude Desktop to spin up disposable Docker containers to execute arbitrary JavaScript. To add it to Claude Desktop, replace the contents of the Claude Desktop configuration file with the following JSON structure. This configuration tells Claude Desktop to start the Node.js Sandbox with access to specific directories:

{
  "mcpServers": {
    "MCP_DOCKER": {
      "command": "docker",
      "args": ["mcp", "gateway", "run"]
    },
    "node-code-sandbox": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-v",
        "/var/run/docker.sock:/var/run/docker.sock",
        "-v",
        "/Users/YOUR_USERNAME/Desktop/sandbox-output:/root",
        "mcp/node-code-sandbox"
      ],
      "env": {
        "FILES_DIR": "/root",
        "SANDBOX_MEMORY_LIMIT": "512m",
        "SANDBOX_CPU_LIMIT": "0.75"
      }
    }
  }
}

Before you restart Claude Desktop, make sure that the /Users/YOUR_USERNAME/Desktop/sandbox-output directory exists on your local system and that it is made available to containers via Docker Desktop > Settings > Resources > File Sharing.
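For example, on macOS you can create the folder ahead of time (a minimal sketch; swap in your own username or preferred location):

mkdir -p "$HOME/Desktop/sandbox-output"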

What you get:

7 Node.js Sandbox tools including:

get_dependency_types – Given an array of npm package names (and optional versions), fetch whether each package ships its own TypeScript definitions or has a corresponding @types/… package, and return the raw .d.ts text. Useful when you’re about to run a Node.js script against an unfamiliar dependency and want to inspect what APIs and types it exposes.

run_js – Install npm dependencies and run JavaScript code inside a running sandbox container.

run_js_ephemeral – Run a JavaScript snippet in a temporary disposable container with optional npm dependencies, then automatically clean up.

sandbox_exec – Execute one or more shell commands inside a running sandbox container. Requires a sandbox initialized beforehand.

sandbox_initialize – Start a new isolated Docker container running Node.js. Used to set up a sandbox session for multiple commands and scripts.

sandbox_stop – Terminate and remove a running sandbox container. Should be called after finishing work in a sandbox initialized with sandbox_initialize.

search_npm_packages – Search for npm packages by a search term and get their name, description, and a README snippet.

Configure Context7 MCP Server

The Context7 MCP server gives Claude Desktop access to the latest, up-to-date code documentation for LLMs and AI code editors. Adding the Context7 MCP server is straightforward; it doesn’t require an API key. Just search for Context7 in the Catalog and add it to your MCP server list.

In Docker Desktop:

Open Docker Desktop → MCP Toolkit → Catalog

Search for “Context7”

Find Context7 in the results

Click “Add MCP Server” to add without any configuration

The Context7 MCP server should now appear under “My Servers” in Docker MCP Toolkit

What you get:

2 Context7 tools including:

get-library-docs – Fetches up-to-date documentation for a library.

resolve-library-id – Resolves a package/product name to a Context7-compatible library ID and returns a list of matching libraries. 

Verify the available tools under Claude Desktop

Once you have added all the MCP servers, click “Disconnect” and then “Connect” in Docker Desktop so that the new MCP tools show up under MCP_DOCKER in Claude Desktop.

That’s it. It’s time to start interacting with your MCP servers and tools.

Let’s Test it Out

Prompt 1:

I'm going to upload a screenshot of a UI component. Please execute this complete workflow using all available MCP tools:

PHASE 0: STRATEGIC PLANNING

Use Sequential Thinking to:

1. Analyze what type of component this appears to be
2. Determine what research and documentation we'll need
3. Plan the component architecture and structure
4. Identify testing requirements and edge cases
5. Create a step-by-step implementation strategy
6. Estimate the number of tests needed for full coverage

PHASE 1: DESIGN RESEARCH

Use Firecrawl to analyze the screenshot and extract:
– Complete color palette (hex values)
– All spacing and padding measurements
– Typography specifications (font family, sizes, weights)
– Layout structure (grid, flexbox patterns)
– Component boundaries and hierarchy
– Interactive elements (buttons, inputs, dropdowns)

PHASE 2: DOCUMENTATION RESEARCH

Use Context7 to fetch the latest documentation for:
– React (latest hooks patterns and best practices)
– Next.js (if applicable)
– Jest (current testing patterns and matchers)
– React Testing Library (current query methods and best practices)
– Lucide React or any icon library needed
– Any other libraries identified in the planning phase

PHASE 3: CODE GENERATION

Based on the research from Phases 0-2, generate:
1. A React/Next.js component that:
– Uses exact colors and spacing from Firecrawl analysis
– Follows current best practices from Context7 docs
– Implements the architecture from Sequential Thinking plan
– Includes proper TypeScript types
– Has clean, readable, production-ready code

2. Complete project structure:
– package.json with correct dependencies
– babel.config.js for Jest
– .gitignore
– README.md with setup instructions

PHASE 4: COMPREHENSIVE TESTING

Generate Jest + React Testing Library tests covering:
– Component rendering (all variants)
– User interactions (clicks, inputs, selections)
– State management and updates
– Edge cases (empty states, loading, errors)
– Accessibility (ARIA roles, keyboard navigation)
– Props validation and defaults

Follow current best practices from Context7 documentation. Aim for the test count estimated in Phase 0.

PHASE 5: VALIDATION & DEBUGGING

Use Node.js Sandbox to:
1. Install all dependencies in isolated container
2. Run the complete test suite
3. Capture all output and results

If ANY tests fail:
– Use Sequential Thinking to:
* Analyze the error messages
* Generate 3 hypotheses for root cause
* Rank hypotheses by likelihood
* Test hypothesis 1 with a fix
* Re-run in Node.js Sandbox
* If still failing, move to hypothesis #2
* Continue until all tests pass

If tests need clarification on errors:
– Use Context7 to look up specific error messages or API changes

Make sure you upload the screenshot along with this prompt.

Here’s how Claude Desktop responds.

I'll execute this comprehensive workflow step-by-step using all available MCP tools. Let me start by analyzing the uploaded screenshot and planning our approach.
PHASE 0: STRATEGIC PLANNING
Let me use Sequential Thinking to analyze this component and create a comprehensive implementation strategy.
Perfect! I've completed the strategic planning. Now let's move to Phase 1.
PHASE 1: DESIGN RESEARCH

For Phase 1, I'll use Firecrawl to scrape the uploaded image and extract visual specifications. However, Firecrawl is designed for web scraping, not image analysis. I should use the image directly for visual analysis and describe what I see.
..

PHASE 2: DOCUMENTATION RESEARCH
Now let me fetch the latest documentation using Context7:

Now let me fetch React Testing Library documentation:…
I'll use /websites/testing-library_com-docs as it has the most code snippets (565) and comprehensive documentation.
Perfect! I have enough documentation. Now let's proceed to PHASE 3: CODE GENERATION

Perfect! Now let's move to PHASE 6: DEPLOYMENT. Let me create a GitHub repository
..

Now let's proceed with PHASE 4: COMPREHENSIVE TESTING. I'll create comprehensive test files for all components:

Now I'll create comprehensive tests for the resource library component based on the screenshot. Let me generate a complete test suite.

Excellent progress! CSS issue resolved – 35 of 41 tests now passing! Let me analyze the remaining 6 failures:

BUG IDENTIFIED: Type Filter Mismatch
Problem: Type filter keys don't match resource.type values:
Filter keys: 'Infographics', 'Videos', 'White Papers' (plural/different)
Resource types: 'Infographic', 'Video', 'White Paper' (singular/different)
Fixing the component:
SUCCESS! ALL 41 TESTS PASSING!
Test Suites: 1 passed, 1 total
Tests: 41 passed, 41 total
Time: 3.046 s
..
Due to character limits, let me push the updated files more efficiently. Since all tests pass locally, let me commit the fixes…

Within minutes, Claude will: 

Analyze the design 

Generate the component 

Write comprehensive tests 

Run them in an isolated Docker container 

Debug any failures 

Push the final code to GitHub with a live repo link

Example Result: https://github.com/ajeetraina/resources-browser-component

You can verify whether the Node.js Sandbox container is running from the Docker Desktop dashboard. The logs show the MCP (Model Context Protocol) server successfully initializing inside a Docker container. The container starts a scavenger process that automatically cleans up after 1 hour of inactivity (checking every 60 seconds). It then performs a handshake announcing that it supports MCP Protocol 2025-06-18 and server version 1.3.0.

The server registers 7 tools for JavaScript execution: 

sandbox_initialize (create persistent containers), 

sandbox_exec (run shell commands), 

run_js (run JS in persistent container), 

run_js_ephemeral (one-shot JS with auto-cleanup), 

sandbox_stop (cleanup), 

get_dependency_types (fetch TypeScript definitions), and 

search_npm_packages (search npm registry). 

All tools that execute JavaScript require ESModules syntax and use the ./files directory for file persistence. The server is fully operational and ready to execute code! 

Wrapping Up

You’ve just witnessed how Docker MCP Toolkit transforms Claude Desktop from a chat assistant into a genuine development partner.

What used to require hours of manual work—analyzing designs, writing code, creating tests, debugging failures, setting up repos—now happens in minutes with a single prompt.

This is the new paradigm for AI-assisted development. You’re not just using AI anymore. You’re collaborating with it in a way that’s secure, reproducible, and production-ready.

Ready to try it? Open Docker Desktop to get started with MCP Toolkit (requires v4.48 or newer to launch automatically).

Learn more

New to Docker? Download Docker Desktop today.

Explore the MCP Catalog: Discover containerized, security-hardened MCP servers.

Get started with the MCP Toolkit: Run MCP servers easily and securely.

Read our MCP Horror Stories for real-life MCP security cases.

Source: https://blog.docker.com/feed/

Docker Hub Incident Report – October 20, 2025

Docker experienced significant disruptions due to a widespread outage in AWS’s US-East-1 region on October 20, 2025. Developers worldwide rely on Docker as part of their daily workflow, and we regret the disruption this caused. In this post, we want to provide transparency about what happened, what we have learned, and how we are strengthening our systems for the future.

What Happened

Beginning October 20, 2025 at 06:48 UTC, Docker Hub, Hardened Images, Scout, Build Cloud, Automated Builds, and Testcontainers Cloud experienced an increase in failure rate when AWS’s largest region, US-East-1, experienced an outage affecting DynamoDB, EC2, Network Load Balancer, and other AWS services. See the AWS summary of the service disruption for further details. This increasing failure rate led to a complete outage of Docker services across the aforementioned products beginning at 2025-10-20 08:01 UTC.

Starting at 2025-10-20 09:40 UTC, AWS reported progress and partial restoration, which resulted in Docker services being partially restored to operation. Full restoration of Docker services was completed by 2025-10-21 09:42 UTC. The complete timeline and user impacts can be seen below.

Timeline & Impact of Events

2025-10-20 06:48 UTC

AWS DynamoDB and EC2 APIs begin failing, causing degraded performance across Docker Hub, Build Cloud, Testcontainers Cloud, and other related services.

2025-10-20 06:51 UTC

AWS STS begins failing with cascading failures across AWS services

Degradation of Docker services increases. 

Users experience widespread, increased intermittent failures across all requests

2025-10-20 08:01 UTC

All services unavailable

2025-10-20 09:21 UTC

AWS SQS recovers

AWS STS recovers

AWS EC2 still failing

Users continue experiencing high error rates across all Docker services – over 90%

2025-10-20 09:40 UTC

AWS DynamoDB recovery begins

Docker Hub recovery begins – error rate less than 20%

Docker Hardened recovery begins – error rate less than 20%

2025-10-20 12:28 UTC

AWS EC2 recovery begins with throttling in effect

Docker Scout recovery begins

Docker Offload recovery begins

Docker Build Cloud recovery begins

Docker Testcontainers Cloud recovery begins

Automated builds remain unavailable

2025-10-20 18:52 UTC

Docker Hub and Scout fully recover

Docker Build Cloud and Testcontainers Cloud seeing improvements – error rate ~50%

Automated Builds remain unavailable

2025-10-20 20:50 UTC

AWS EC2 recovers fully

Docker Build Cloud, Offload, & Testcontainers Cloud fully recover

Automated Builds remain unavailable

2025-10-21 09:42 UTC

Automated builds fully recover

All services operational

Ongoing Monitoring

All Docker systems are currently operational and we continue to monitor the status of our infrastructure. For real-time operational details, visit our status page where you can subscribe to notifications.

Resilience and Next Steps

We take Docker Hub’s reliability seriously and understand its critical role in development workflows worldwide. Among Docker’s services, Hub’s registry operations, especially image pulls, are the most heavily used and the most essential to keeping developer workflows moving.

Our first priority is ensuring Docker Hub remains available even in the event of a regional cloud failure. To that end, our immediate focus areas include:

Caching strategies: Expanding and optimizing cache layers to reduce the blast radius of upstream failures, ensuring customers can continue accessing frequently used images even during partial outages.

Multi-region resilience: Enabling regional redundancy for Docker Hub’s read operations, beginning with image pulls. This will allow pulls to continue seamlessly even if a single cloud region experiences disruption. We are also exploring approaches to extend these capabilities to write operations such as image pushes, which involve significantly more complexity across regions.

The Docker community depends on Hub’s reliability, and we take that trust seriously. We are committed to learning from this event so that Docker Hub remains a dependable foundation for developers everywhere.
Source: https://blog.docker.com/feed/

Why More People Are Taking Control of Their Digital Lives with Self-Hosted Alternatives

Remember when owning software meant you bought a CD, installed it, and it was yours until your computer died? Even if you got a new computer, you could install that same software on the new one. Only “serious” software packages had strict licensing restrictions.

These days, most of our tools live in the cloud, guarded by login screens and monthly invoices. That does sound more convenient—until the subscription fee increases, the features change, or the service shuts down overnight.

If you’ve ever thought, “There must be a better way”, you’re not alone. Self-hosting is on the rise, and it’s not just because people enjoy tinkering with servers in their basements (though some do). Whether you only want a way to store your family photos and documents or you want to do your own password management, there’s probably an open-source option you can host yourself.

In this article, I dig into why self-hosting is growing, the different ways to host your own services (from a Raspberry Pi to a rented server), and some beginner-friendly software to get started with. 

Something to keep in mind: self-hosting and open-source software often go hand in hand, but they’re not the same thing. Many popular self-hosted apps are open-source, but you can also self-host closed-source tools, just as there are open-source apps that only run in someone else’s cloud. In other words, self-hosting is about who controls the infrastructure, not necessarily how the software is licensed.

What Motivates People to Try Self-Hosting?

Here are some of the biggest reasons you might decide to run things yourself:

Privacy and Data Sovereignty

When you use a third-party service, your data often lives on their servers under their rules. That means trusting them to keep it secure, private, and not to sell it to advertisers or hand it over at the drop of a subpoena. With self-hosting, you decide where your data lives and who can see it. 

There’s also something inherently creepy about providers like Google Photos wanting to normalize the idea of deepfaking your family photos to create memories that never happened. These are *your* photos, not training data for Google’s AI models.

Escaping Subscription Fatigue

Five dollars here, ten dollars there… and suddenly you’re paying more for cloud services than for your actual internet connection. To make matters worse, subscription tiers are often set up so that the one feature you actually need sits behind the premium option. 

Self-hosted software is often free, or it might carry a one-time cost; you only need to provide the hardware and electricity. 

For a home user, that can lead to serious savings over time.

Avoiding Platform Lock-in

Have you ever tried to switch from one cloud service to another, only to find your data is trapped in a weird file format or missing entirely? That’s platform lock-in, and it’s as fun as it sounds. 

With self-hosted, open-source tools, you control the formats, backups, and migration process, making it easier to change things later.

When you’re in control of your data, you can move it to a new platform in an afternoon instead of spending days untangling exported files—or worse, discovering that some of your data is simply gone for good.

The Joy of Customization and Independence

Many self-hosted open-source projects let you tweak things to your liking, from the frontend color scheme to adding features no commercial platform offers. There’s something satisfying about knowing your setup runs exactly how you want it to.

Updates can still cause trouble even in self-hosted setups, but at least you decide when you want to upgrade. Unlike when OpenAI recently “upgraded” to ChatGPT 5, and users were clamoring to get ChatGPT 4 back!

It also means you decide how often you update, which features to enable, and how your data is organized. Your server won’t suddenly rearrange your photos or change its interface just because someone in a corporate boardroom decided it was time for a “fresh new look.”

A mainstream cloud service usually has to adopt a one-size-fits-all policy, but if you try self-hosting, you’ve got the time to try out a custom-fitted solution.

What Is Self-Hosting?

At its core, self-hosting simply means running software on infrastructure you control: your own hardware, a rented server, or some mix of the two. Instead of trusting a third-party provider to store your files, stream your media, or manage your notes, you’re the one who’s in charge.

There’s no one “right” way to self-host. The best option for you depends on how much control you want, how hands-on you’re willing to be, and what kind of hardware you have access to.

Running Your Own Hardware

This could be anything from a powerful network-attached storage (NAS) box to an old laptop you’ve repurposed or a Raspberry Pi quietly humming in the corner. You get maximum control and keep your data entirely in your own hands, without relying on an outside company to keep it safe.

You might even be brave enough to go the route of a refurbished enterprise server for more horsepower. But be aware: those older machines can be noisy and surprisingly power-hungry. Your electricity bill might notice before you do. You’ll also most likely be running Linux as an operating system because most self-hosted projects are built and supported for Linux first. This opens up a world of possibilities—but it also comes with a bit of a learning curve if you’ve never worked with Linux before.

Of course, you’re also in control of your own backups, which can be a double-edged sword. You should be fine if you follow the 3-2-1 rule of backups, which advocates for keeping three copies of your data on at least two different mediums, with at least one copy being stored off-site.

Remember, RAID is good, but RAID is not a backup!

Using a VPS or Rented Server

A VPS, or virtual private server, is basically a small slice of someone else’s much larger computer in a data center, rented on a month-to-month basis. It can provide excellent uptime, stable internet speeds, and the ability to run almost anything you could run at home, without worrying about power cuts or router failures.

Depending on the provider and the pricing tier, there might also be some extras included: automated backups, easy scaling if you need more resources, and sometimes even built-in security tools.

The trade-off is that you’re still trusting the hosting provider, who has full control over the physical host machine your VPS runs on, which in turn gives them the ability to access or manage your virtual server if they choose. If you have particularly strong privacy concerns, then the VPS route might not be for you.

Oh, and you’ll probably still be running Linux.

The Hybrid Approach

Many people prefer to use a mix of these two approaches: maybe your photo library runs on a VPS for easy access anywhere in the world, while your media server lives at home to stream movies directly over your local network.

Hybrid setups can give you the best of both worlds in terms of convenience and control, without putting all your eggs in one basket.

Popular Self-Hosted Alternatives

Once you decide to give self-hosting a try, the next question is: what should you run first? The good news is there’s a huge range of open-source software that’s not only powerful, but also easy to set up—especially if they’re available to run as a [container](https://www.docker.com/resources/what-container/).

Containers are tightly packaged software delivery mechanisms that contain all the necessary pieces for running a particular application, like software libraries and configuration. And while you’d typically run different services (like a web server and database) as separate containers, tools like Docker Compose let you start them all together with a single command, instead of manually configuring everything one piece at a time.
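As a rough illustration, a minimal Compose file for a hypothetical two-service stack might look like this (the image names, port, and credentials are placeholders, not the official setup of any particular project):

services:
  app:
    image: nginx:alpine          # placeholder web front end
    ports:
      - "8080:80"                # reach the app at localhost:8080
    depends_on:
      - db
  db:
    image: postgres:16           # placeholder database
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/postgresql/data   # keep data across restarts

volumes:
  db-data:

Save it as docker-compose.yml and a single docker compose up -d brings the whole stack up.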

Below are some popular self-hosted options to get started with. They’re sourced from recommendations that are mentioned again and again on communities like r/selfhosted or r/homelab.

Photos

If you’ve been looking for a Google Photos replacement, Immich is worth consideration. It can automatically back up your photos and videos from your phone, offers popular Google-esque features like facial recognition, and has a slick web interface for browsing your library.

It’s open-source, and you can download the source code directly from their GitHub repository. They also ask that users support the project by buying a support license. The license does not change anything about the product; it’s simply a way for fans of the project to contribute to a great, open-source application.

Productivity

Everyone needs a word processor, and even though Google Docs is free to use with a Google account, it’s not really free: you’re still paying with your data and privacy.

LibreOffice is a fully-featured, open-source office productivity suite with applications for word processing, spreadsheets, and presentations, to mention a few. It is also compatible with most of the new Microsoft Office document formats like *.docx* and *.pptx*.

You can also try “Collabora Online Development Edition” (CODE for short), an online, collaborative editing platform built on top of LibreOffice. Its Docker image makes running it rather effortless.

File Storage

Seafile is a file storage application for hosting and sharing any file from your own servers. While they do have a Pro edition with more features, the completely free Community Edition can run on your own hardware.

Another option is Nextcloud, an open-source content collaboration platform with a rich app ecosystem for features like file storage, calendars, contacts, and more, all running on infrastructure you control.

Media

If you’ve ever dreamed of running your own Netflix-style media library, Jellyfin can make it happen. It streams your movies, TV shows, and music to any device you own: no ads, no data tracking, and no sudden content removals.

Supporting Tools

If you’re considering running a self-hosted setup, there are a few supporting tools you should almost always include.

One of these is Vaultwarden, a lightweight, self-hosted password manager that syncs across devices and works with the Bitwarden browser extensions and apps.

Another is Netbird. When you start hosting your own hardware and software, one of the more common challenges is securing your network access to your infrastructure. Netbird is a zero-configuration networking platform that makes setting up VPNs to your various devices a much less daunting task.

Taking Back Control of Your Digital Life

Self-hosting isn’t just for enterprise sysadmins anymore. Whether you want to protect your privacy, cut down on subscriptions, avoid lock-in, or just build something that works exactly the way you want it, there’s never been an easier time to take control of your digital life.

As you’ve seen in this article, there’s more than one way to do it: running a quiet little server in your home, renting a VPS with built-in backups, or mixing the two options in a hybrid setup. And with so many great open-source tools available—from Immich and Nextcloud to Jellyfin and Vaultwarden—you can build your own digital toolkit without sacrificing convenience or features.

Container platforms like Docker make it even easier to get started. Many self-hosted projects provide official Docker images, so you can deploy apps consistently on almost any machine. With a little time and patience, you could be running powerful, private, and subscription-free services in no time.

If you’ve ever felt like the cloud has taken a little too much control over your data (and your wallet), maybe it’s time to explore the idea of owning your own little piece of cloud.

Source: https://blog.docker.com/feed/

AI Guide to the Galaxy: MCP Toolkit and Gateway, Explained

This is an abridged version of the interview we had in AI Guide to the Galaxy, where host Oleg Šelajev spoke with Jim Clark, Principal Software Engineer at Docker, to unpack Docker’s MCP Toolkit and MCP Gateway.

TL;DR

What they are: The MCP Toolkit helps you discover, run, and manage MCP servers; the MCP Gateway unifies and securely exposes them to your agent clients.

Why Docker: Everything runs as containers with supply-chain checks, secret isolation, and OAuth support.

How to use: Pick servers from the MCP Catalog, start the MCP Gateway, and your client (e.g., Claude) instantly sees the tools.

First things first: if you want the official overview and how-tos, start with the Docker MCP Catalog and Toolkit.

A quick origin story (why MCP and Docker?)

Oleg: You’ve been deep in agents for a while. Where did this all start?

Jim: When tool calling arrived, we noticed something simple but powerful: tools look a lot like containers. So we wrapped tools in Docker images, gave agents controlled “hands,” and everything clicked. That was even before the Model Context Protocol (MCP) spec landed. When Anthropic published MCP, it put a name to what we were already building.

What the MCP Toolkit actually solves

Oleg: So, what problem does the Toolkit solve on day one?

Jim: Installation and orchestration. The Toolkit gives you a catalog of MCP servers (think: YouTube transcript, Brave search, Atlassian, etc.) packaged as containers and ready to run. No cloning, no environment drift. Just grab the image, start it, and go. As Docker builds these images and publishes them to Hub, you get consistency and governance on pull.

Oleg: And it presents a single, client-friendly surface?

Jim: Exactly. The Toolkit can act as an MCP server to clients, aggregating whatever servers you enable so clients can list tools in one place.

How the MCP Gateway fits in

Oleg: I see “Toolkit” inside Docker Desktop. Where does the MCP Gateway come in?

Jim: The Gateway is a core piece inside the Toolkit: a process (and open source project) that unifies which MCP servers are exposed to which clients. The CLI and UI manage both local containerized servers and trusted remote MCP servers. That way you can attach a client, run through OAuth where needed, and use those remote capabilities securely via one entry point.

Oleg: Can we see it from a client’s perspective?

Jim: Sure. Fire up the Gateway, connect Claude, run mcp list, and you’ll see the tools (e.g., Brave Web Search, Get Transcript) available to that session, backed by containers the Gateway spins up on demand.

Security: provenance, secrets, and OAuth without drama

Oleg: What hardening happens before a server runs?

Jim: On pull/run, we do provenance verification, ensuring Docker built the image, checking for an SBOM, and running supply-chain checks (via Docker Scout) so you’re not executing something tampered with.

Oleg: And credentials?

Jim: Secrets you add (say, for Atlassian) are mounted only into the target container at runtime, nothing else can see them. For remote servers, the Gateway can handle OAuth flows, acquiring or proxying tokens into the right container or request path. It’s two flavors of secret management: local injection and remote OAuth, both controlled from Docker Desktop and the CLI.

Profiles, filtering, and “just the tools I want”

Oleg: If I have 30 servers, can I scope what a given client sees?

Jim: Yes. Choose the servers per Gateway run, then filter tools, prompts, and resources so the client only gets the subset you want. Treat it like “profiles” you can version alongside your code; compose files and config make it repeatable for teams. You can even run multiple gateways for different configurations (e.g., “chess tools” vs. “cloud ops tools”).

From local dev to production (and back again)

Oleg: How do I move from tinkering to something durable?

Jim: Keep it Compose-first. The Gateway and servers are defined as services in your compose files, so your agent stack is reproducible. From there, push to cloud: partners like Google Cloud Run already support one-command deploys from Compose, with Azure integrations in progress. Start locally, then graduate to remote runs seamlessly.

Oleg: And choosing models?

Jim: Experiment locally, swap models as needed, and wire in the MCP tools that fit your agent’s job. The pattern is the same: pick models, pick tools, compose them, and ship.

Getting started with MCP Gateway (in minutes)

Oleg: Summarize the path for me.

Jim:

Pick servers from the catalog in Docker Desktop (or CLI).

Start the MCP Gateway and connect your client.

Add secrets or flow through OAuth as needed.

Filter tools into a profile.

Capture it in Compose and scale out.

Why the MCP Toolkit and Gateway improve team workflows

Fast onboarding: No glue code or conflicting envs, servers come containerized.

Security built-in: Supply-chain checks and scoped secret access reduce risk.

One workflow: Local debug, Compose config, cloud deploys. Same primitives, fewer rewrites.

Try it out

Spin up your first profile and point your favorite client at the Gateway. When you’re ready to expand your agent stack, explore tooling like Docker Desktop for local iteration and Docker Offload for on-demand cloud resources — then keep everything declarative with Compose.

Ready to build? Explore the Docker MCP Catalog and Toolkit to get started.

Learn More

Watch the rest of the AI Guide to the Galaxy series

Explore the MCP Catalog: Discover containerized, security-hardened MCP servers

Open Docker Desktop and get started with the MCP Toolkit (Requires version 4.48 or newer to launch the MCP Toolkit automatically)

Check out our latest guide on how to set up Claude Code with Docker’s MCP Toolkit

Source: https://blog.docker.com/feed/

Your Org, Your Tools: Building a Custom MCP Catalog

I’m Mike Coleman, a staff solutions architect at Docker. In this role, I spend a lot of time talking to enterprise customers about AI adoption. One thing I hear over and over again is that these companies want to ensure appropriate guardrails are in place when it comes to deploying AI tooling. 

For instance, many organizations want tighter control over which tools developers and AI assistants can access via Docker’s Model Context Protocol (MCP) tooling. Some have strict security policies that prohibit pulling images directly from Docker Hub. Others simply want to offer a curated set of trusted MCP servers to their teams or customers.

In this post, we walk through how to build your own MCP catalog. You’ll see how to:

Fork Docker’s official MCP catalog

Host MCP server images in your own container registry

Publish a private catalog

Use MCP Gateway to expose those servers to clients

Whether you’re pulling existing MCP servers from Docker’s MCP Catalog or building your own, you’ll end up with a clean, controlled MCP environment that fits your organization.

Introducing Docker’s MCP Tooling

Docker’s MCP ecosystem has three core pieces:

MCP Catalog

A YAML-based index of MCP server definitions. These describe how to run each server and what metadata (description, image, repo) is associated with it. The MCP Catalog hosts more than 220 containerized MCP servers, ready to run with just a click.

The official docker-mcp catalog is read-only. But you can fork it, export it, or build your own.

MCP Gateway

The MCP Gateway connects your clients to your MCP servers. It doesn’t “host” anything — the servers are just regular Docker containers. But it provides a single connection point to expose multiple servers from a catalog over HTTP SSE or STDIO.

Traditionally, with X servers and Y clients, you needed X * Y configuration entries. MCP Gateway reduces that to just Y entries (one per client). Servers are managed behind the scenes based on your selected catalog.

You can start the gateway using a specific catalog:

docker mcp gateway run --catalog my-private-catalog

MCP Gateway is open source: https://github.com/docker/mcp-gateway

Figure 1: The MCP Gateway provides a single connection point to expose multiple MCP servers

MCP Toolkit (GUI)

Built into Docker Desktop, the MCP Toolkit provides a graphical way to work with the MCP Catalog and MCP Gateway. This allows you to:

Access Docker’s MCP Catalog via a rich GUI

Handle secrets (like GitHub tokens) securely

Easily enable MCP servers

Connect your selected MCP servers with one click to a variety of clients like Claude code, Claude Desktop, Codex, Cursor, Continue.dev, and Gemini CLI

Workflow Overview

The workflow below will show you the steps necessary to create and use a custom MCP catalog. 

The basic steps are:

Export the official MCP Catalog to inspect its contents

Fork the Catalog so you can edit it

Create your own private catalog

Add specific server entries

Pull (or rebuild) images and push them to your registry

Update your catalog to use your images

Run the MCP Gateway using your catalog

Connect clients to it

Step-by-Step Guide: Creating and Using a Custom MCP Catalog

We start by setting a few environment variables to make this process repeatable and easy to modify later. For the purposes of this example, assume we are migrating an existing MCP server (DuckDuckGo) to a private registry (ghcr.io/mikegcoleman). You can also add your own custom MCP server images to the catalog, and we mention that below as well.

export MCP_SERVER_NAME="duckduckgo"
export GHCR_REGISTRY="ghcr.io"
export GHCR_ORG="mikegcoleman"
export GHCR_IMAGE="${GHCR_REGISTRY}/${GHCR_ORG}/${MCP_SERVER_NAME}:latest"
export FORK_CATALOG="my-fork"
export PRIVATE_CATALOG="my-private-catalog"
export FORK_EXPORT="./my-fork.yaml"
export OFFICIAL_DUMP="./docker-mcp.yaml"
export MCP_HOME="${HOME}/.docker/mcp"
export MCP_CATALOG_FILE="${MCP_HOME}/catalogs/${PRIVATE_CATALOG}.yaml"

Step 1: Export the official MCP Catalog 

Exporting the official Docker MCP Catalog gives you a readable local YAML file listing all servers. This makes it easy to inspect metadata like images, descriptions, and repository sources outside the CLI.

docker mcp catalog show docker-mcp --format yaml > "${OFFICIAL_DUMP}"

Step 2: Fork the official MCP Catalog

Forking the official catalog creates a copy you can modify. Since the built-in Docker catalog is read-only, this fork acts as your editable version.

docker mcp catalog fork docker-mcp "${FORK_CATALOG}"
docker mcp catalog ls

Step 3: Create a new catalog

Now create a brand-new catalog that will hold only the servers you explicitly want to support. This ensures your organization runs a clean, controlled catalog that you fully own.

docker mcp catalog create "${PRIVATE_CATALOG}"

Step 4: Add specific server entries

Export your forked catalog to a file so you can copy over just the entries you want. Here we’ll take only the duckduckgo server and add it to your private catalog.

docker mcp catalog export "${FORK_CATALOG}" "${FORK_EXPORT}"
docker mcp catalog add "${PRIVATE_CATALOG}" "${MCP_SERVER_NAME}" "${FORK_EXPORT}"
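If you want to sanity-check what ended up in the new catalog, you can dump it the same way you exported the official one (a quick sketch, assuming catalog show accepts your catalog’s name just as it accepts docker-mcp):

docker mcp catalog show "${PRIVATE_CATALOG}" --format yaml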

Step 5: Pull (or rebuild) images and push them to your registry

At this point you have two options:

If you are able to pull from Docker Hub, find the image key for the server you’re interested in by looking at the YAML file you exported earlier. Then pull that image down to your local machine. After you’ve pulled it down, retag it for whatever registry you want to use.

Example for duckduckgo

vi "${OFFICIAL_DUMP}" # look for the duckduck go entry and find the image: key which will look like this:
# image: mcp/duckduckgo@sha256:68eb20db6109f5c312a695fc5ec3386ad15d93ffb765a0b4eb1baf4328dec14f

# pull the image to your machine
docker pull mcp/duckduckgo@sha256:68eb20db6109f5c312a695fc5ec3386ad15d93ffb765a0b4eb1baf4328dec14f

# tag the image with the appropriate registry
docker image tag mcp/duckduckgo@sha256:68eb20db6109f5c312a695fc5ec3386ad15d93ffb765a0b4eb1baf4328dec14f ${GHCR_IMAGE}

# push the image
docker push ${GHCR_IMAGE}

At this point you can move on to editing the MCP Catalog file in the next section.

If you cannot download from Docker Hub, you can always rebuild the MCP server from its GitHub repo. To do this, open the exported YAML and look for your target server’s GitHub source repository. You can use tools like vi, cat, or grep to find it — it’s usually listed under a source key.

Example for duckduckgo:
source: https://github.com/nickclyde/duckduckgo-mcp-server/tree/main

export SOURCE_REPO="https://github.com/nickclyde/duckduckgo-mcp-server.git"

Next, you’ll rebuild the MCP server image from the original GitHub repository and push it to your own registry. This gives you full control over the image and eliminates dependency on Docker Hub access.

echo "${GH_PAT}" | docker login "${GHCR_REGISTRY}" -u "${GHCR_ORG}" –password-stdin

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  "${SOURCE_REPO}" \
  -t "${GHCR_IMAGE}" \
  --push

Step 6: Update your catalog 

After publishing the image to GHCR, update your private catalog so it points to that new image instead of the Docker Hub version. This step links your catalog entry directly to the image you just built.

vi "${MCP_CATALOG_FILE}"

# Update the image line for the duckduckgo server to point to the image you created in the previous step (e.g. ghcr.io/mikegcoleman/duckduckgo:latest)

Remove the forked version of the catalog as you no longer need it

docker mcp catalog rm "${FORK_CATALOG}"

Step 7: Run the MCP Gateway 

Enabling the server activates it within your MCP environment. Once enabled, the gateway can load it and make it available to connected clients. You may see warnings about “overlapping servers”; that’s because the same server is listed in two places (your catalog and the original catalog).

docker mcp server enable "${MCP_SERVER_NAME}"
docker mcp server list

Step 8: Connect to popular clients 

Now integrate the MCP Gateway with your chosen client. The raw command to run the gateway is: 

docker mcp gateway run --catalog "${PRIVATE_CATALOG}"

But that just runs an instance on your local machine, when what you probably want is to integrate with some client application. 

To do this you need to format the raw command so that it works for the client you wish to use. For example, with VS Code you’d want to update the mcp.json as follows:

"servers": {
"docker-mcp-gateway-private": {
"type": "stdio",
"command": "docker",
"args": [
"mcp",
"gateway",
"run",
"–catalog",
"my-private-catalog"
]
}
}
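Other MCP clients follow the same pattern. For Claude Desktop, for example, the equivalent entry in claude_desktop_config.json would look something like this (a sketch; the “docker-mcp-gateway-private” name is arbitrary, and the layout follows Claude Desktop’s standard mcpServers format):

{
  "mcpServers": {
    "docker-mcp-gateway-private": {
      "command": "docker",
      "args": ["mcp", "gateway", "run", "--catalog", "my-private-catalog"]
    }
  }
}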

Finally, verify that the gateway is using your new GHCR image and that the server is properly enabled. This quick check confirms everything is configured as expected before connecting clients.

docker mcp server inspect "${MCP_SERVER_NAME}" | grep -E 'name|image'

Summary of Key Commands

You might find the following CLI commands handy:

docker mcp catalog show docker-mcp --format yaml > ./docker-mcp.yaml
docker mcp catalog fork docker-mcp my-fork
docker mcp catalog export my-fork ./my-fork.yaml
docker mcp catalog create my-private-catalog
docker mcp catalog add my-private-catalog duckduckgo ./my-fork.yaml
docker buildx build --platform linux/amd64,linux/arm64 https://github.com/nickclyde/duckduckgo-mcp-server.git \
  -t ghcr.io/mikegcoleman/duckduckgo:latest --push
docker mcp server enable duckduckgo
docker mcp gateway run --catalog my-private-catalog

Conclusion

By using Docker’s MCP Toolkit, Catalog, and Gateway, you can fully control the tools available to your developers, customers, or AI agents. No more one-off setups, scattered images, or cross-client connection headaches.

Your next steps:

Add more servers to your catalog

Set up CI to rebuild and publish new server images

Share your catalog internally or with customers

Docs:

https://docs.docker.com/ai/mcp-catalog-and-toolkit/

https://github.com/docker/mcp-gateway/

Happy curating. 

We’re working on some exciting enhancements to make creating custom catalogs even easier. Stay tuned for updates!

Learn more

Explore the MCP Catalog: Discover containerized, security-hardened MCP servers

Open Docker Desktop and get started with the MCP Toolkit (Requires version 4.48 or newer to launch the MCP Toolkit automatically)

Read about How Open Source Genius Cut Entropy Debt with Docker MCP Toolkit and Claude Desktop

Source: https://blog.docker.com/feed/

Docker + E2B: Building the Future of Trusted AI

Trusted Software Starts Here

The era of agents is here. Some teams are experimenting, others are just getting started, and a few are already running agents in production. But one challenge stands out: trust. Trust that your agents will act securely. Over 20 million developers already rely on Docker to build and ship software safely and fast. Now, we’re helping you build what’s next.

Over the past few months, Docker has continued to make significant investments in driving developer productivity and building a trusted foundation for AI, with a focus on simplicity and portability, from how you build and run agents to how you secure them. We’ve shipped new capabilities: a toolkit for MCP, support for running open-weight models locally, and a catalog of 200+ MCP servers. 

Today, we’re taking that commitment to the next level through a new partnership with E2B, a company that provides secure cloud sandboxes for AI agents. Together, we’re giving developers fast, secure access to hundreds of real-world tools, without sacrificing safety or speed.

The Next Frontier of AI: Trust

Agents can code, connect, and act, but without control, that power creates risk in two areas.

First, when agents run code. AI-generated code often executes in live environments without safeguards, accessing files or APIs it shouldn’t. Teams are learning that “auto-run” doesn’t mean “safe to run.”

Second, when agents connect to real-world tools. Integrating with external tools can create security trade-offs. And the challenge keeps growing. With the rise of the Model Context Protocol (MCP), every connected tool becomes its own potential risk surface with hundreds of servers and tokens to manage. Today, developers use a variety of DIY isolation techniques, custom-built sandboxes, and resource limits, but all of them tend to slow productivity.

Developers have seen this before. Before containers, developers struggled with messy dependencies and inconsistent environments when building applications. 

Today’s AI builders face a similar challenge. Models need access to APIs and databases, but enabling that access slows them down and introduces complexity and risk. Just as containers standardized how applications run, Docker is now standardizing how agents connect and act.

This is where E2B and Docker come together. E2B secures agent-generated code execution with cloud sandboxes. Docker secures tool access through the MCP Gateway and Catalog, offering direct connectivity to 200+ real-world tools. Each MCP server is curated by Docker and automatically audited for exploits and malicious behavior.

Docker + E2B: Secure Access to Hundreds of MCP Tools

Starting today, every E2B sandbox includes direct access to Docker’s MCP Catalog, a collection of 200+ tools such as GitHub, Perplexity, Browserbase, and ElevenLabs, all enabled by the Docker MCP Gateway. Developers can now run and connect agents more confidently, without slowing down.

Try it here.

What’s Next 

This is only the beginning. Today’s partnership is the first step in a larger journey. Together, our goal is to build the future of a secure AI stack, where every agent runs securely, every connection is verifiable, and developers can move fast without compromise.

From Docker’s perspective, our goal is to ensure developers can trust the MCP servers they pull, validate them through Docker, and govern their AI stack with the same visibility and confidence they already have for containers today.

The next wave of AI development will be built on trust. Trust in the tools, the data, and the environments behind them. At Docker, we’re building that foundation.

Source: https://blog.docker.com/feed/

Getting Started with Offload: Automating Everyday Workflows with Docker

Every developer eventually hits a wall with their local machine. Maybe it’s training an AI model that drains your CPU, compiling a massive codebase that makes your fan sound like a jet engine, or simply trying to run GPU workloads on a laptop that doesn’t have one. The result is the same: slow builds, limited resources, and wasted time.

Docker Offload was built to solve exactly that problem. Instead of forcing you to upgrade your hardware or spend days setting up cloud infrastructure, Offload extends your existing Docker workflow into the cloud. You keep using the same docker build and docker run commands you already know — but behind the scenes, the heavy lifting happens on powerful, cloud-hosted machines (including GPUs).

Think of it as a turbo button for Docker: the same developer experience, just supercharged.

Docker Offload is a fully managed service that lets you execute Docker builds and run containers in the cloud while maintaining your familiar local development experience. It provides on-demand cloud infrastructure for fast, consistent builds and compute-intensive workloads like running LLMs, machine learning pipelines, and GPU-accelerated applications. This is ideal if you want to leverage cloud resources or if your local machine doesn’t meet the hardware requirements to run the model locally.

Why use Docker Offload

Docker Offload is designed to support modern development teams working across local and cloud environments. It helps you:

Offload heavy builds and runs to fast, scalable infrastructure

Accelerate feedback loops in development and testing

Run containers that require more resources than your local setup can provide

Use Docker Compose to manage complex, multi-service apps that need cloud resources

Docker Offload is ideal for high-velocity development workflows that need the flexibility of the cloud without sacrificing the simplicity of local tools.

How to get started

Step 1: Sign up and subscribe to Docker Offload for access

To access Docker Offload, you must sign up and subscribe.

Step 2: Start Docker Offload

Start Docker Desktop and sign in with your account.

Open a terminal and run the following command to start Docker Offload:

$ docker offload start

When prompted, select your Docker account to use for Docker Offload. This account will consume credits for your Docker Offload usage.

When prompted, select whether to enable GPU support. If you choose to enable GPU support, Docker Offload will run in an instance with an NVIDIA L4 GPU, which is useful for machine learning or compute-intensive workloads.

Note: Enabling GPU support consumes more budget. For more details, see Docker Offload usage.

Step 3: Run a container with Docker Offload

After starting Docker Offload, Docker Desktop connects to a secure cloud environment that mirrors your local experience. When you run builds or containers, they execute remotely, but behave just like local ones.

To verify that Docker Offload is working, run a container:

$ docker run --rm hello-world

If you enabled GPU support, you can also run a GPU-enabled container:

$ docker run --rm --gpus all hello-world

If Docker Offload is working, you’ll see Hello from Docker! in the terminal output.
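
Note that hello-world only proves the remote daemon is reachable; it doesn’t exercise the GPU. As a rough additional check, assuming the remote instance injects the NVIDIA driver utilities into GPU-enabled containers (as a standard --gpus setup does), nvidia-smi should report the instance’s NVIDIA L4:

$ docker run --rm --gpus all ubuntu nvidia-smi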

Step 4: Stop Docker Offload

When you’re done using Docker Offload, you can stop it. Stopping halts budget consumption, and after a few minutes the offload worker is disposed of completely. Once stopped, you’re back to building images and running containers locally.

$ docker offload stop

To start Docker Offload again, run the docker offload start command.

Save money

Docker Offload runs your builds remotely, not on the machine where you invoke the build. This means that files must be transferred from your local system to the cloud over the network.

Transferring files over the network introduces higher latency and lower bandwidth compared to local transfers. To reduce these effects, Docker Offload includes several performance optimizations:

It uses attached storage volumes for build cache, which makes reading and writing cache fast.

When pulling build results back to your local machine, it only transfers files that changed since the previous build.

Even with these optimizations, large projects or slower network connections can lead to longer transfer times. Here are several ways to optimize your build setup for Docker Offload:

Use .dockerignore files

Choose slim base images

Use multi-stage builds

Fetch remote files during the build

Leverage multi-threaded tools

For general Dockerfile tips, see Building best practices.
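
As a small illustration of the first tip in that list, a .dockerignore keeps bulky, build-irrelevant paths from being sent to the remote builder at all. The entries below are only common examples; tailor them to your project:

$ cat > .dockerignore <<'EOF'
.git
node_modules
dist
*.log
EOF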
Source: https://blog.docker.com/feed/

Introducing a Richer ”docker model run” Experience

The command line is where developers live and breathe. A powerful and intuitive CLI can make the difference between a frustrating task and a joyful one. That’s why we’re excited to announce a major upgrade to the interactive chat experience in Docker Model Runner, our tool for running AI workloads locally.

We’ve rolled out a new, fully-featured interactive prompt for the “docker model run” command that brings a host of quality-of-life improvements, making it faster, easier, and more intuitive to chat with your local models. Let’s dive into what’s new.

A True Readline-Style Prompt with Keyboard Shortcuts

The most significant change is the move to a new readline-like implementation. If you spend any time in a modern terminal, you’ll feel right at home. This brings advanced keyboard support for navigating and editing your prompts right on the command line.

You can now use familiar keyboard shortcuts to work with your text more efficiently. Here are some of the new key bindings you can start using immediately:

Move to Start/End: Use “Ctrl + a” to jump to the beginning of the line and “Ctrl + e” to jump to the end.

Word-by-Word Navigation: Quickly move through your prompt using “Alt + b” to go back one word and “Alt + f” to go forward one word.

Efficient Deletions:

“Ctrl + k”: Delete everything from the cursor to the end of the line.

“Ctrl + u”: Delete everything from the cursor to the beginning of the line.

“Ctrl + w”: Delete the word immediately before the cursor.

Screen and Session Management:

“Ctrl + l”: Clear the terminal screen to reduce clutter.

“Ctrl + d”: Exit the chat session, just like the /bye command.

Take Back Control with “Ctrl + c”

We’ve all been there: you send a prompt to a model, and it starts generating a long, incorrect, or unwanted response. Previously, you had to wait for it to finish. Not anymore.

You can now press “Ctrl + c” at any time while the model is generating a response to immediately stop it. We’ve implemented this using context cancellation in our client, which sends a signal to halt the streaming response from the model. This gives you full control over the interaction, saving you time and frustration. This feature has also been added to the basic interactive mode for users who may not be in a standard terminal environment. “Ctrl + c” ends that interaction but does not exit. “Ctrl + d” exits “docker model run”.

Improved Multi-line and History Support

Working with multi-line prompts, like pasting in code snippets, is now much smoother. The prompt intelligently changes from ">" to a more subtle "." to indicate that you’re in multi-line mode.

Furthermore, the new prompt includes command history. Simply use the Up and Down arrow keys to cycle through your previous prompts, making it easy to experiment, correct mistakes, or ask follow-up questions without retyping everything. For privacy or scripting purposes, you can disable history writing by setting the DOCKER_MODEL_NOHISTORY environment variable.
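
For example, assuming any non-empty value is enough to turn history off, you could start a history-free session with the same model used in the demo below:

$ DOCKER_MODEL_NOHISTORY=1 docker model run ai/gemma3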

Get Started Today!

These improvements make “docker model run” a more powerful and pleasant tool for all your local AI experiments. Pull a model from Docker Hub and start a chat to experience the new prompt yourself:

$ docker model run ai/gemma3
> Tell me a joke about docker containers.
Why did the Docker container break up with the Linux host?

… Because it said, "I need some space!"

Would you like to hear another one?

Help Us Build the Future of Local AI

Docker Model Runner is an open-source project, and we’re building it for the community. These updates are a direct result of our effort to create the best possible experience for developers working with AI.

We invite you to get involved!

Star, fork, and contribute to the project on GitHub: https://github.com/docker/model-runner

Report issues and suggest new features you’d like to see.

Share your feedback with us and the community.

Your contributions help shape the future of local AI development and make powerful tools accessible to everyone. We can’t wait to see what you build!

Learn more

Check out the Docker Model Runner General Availability announcement

Visit our Model Runner GitHub repo! Docker Model Runner is open-source, and we welcome collaboration and contributions from the community!

Get started with Docker Model Runner with a simple hello GenAI application

Source: https://blog.docker.com/feed/

Docker Model Runner Meets Open WebUI: A Simpler Way to Run Local AI Models

Hi, I’m Sergei Shitikov – a Docker Captain and Lead Software Engineer living in Berlin. I’m focused on DevOps, developer experience, open source, and local AI tools. I created this extension to make it easier for anyone – even without a technical background – to get started with local LLMs using Docker Model Runner and Open WebUI.

This new Docker extension lets you run Docker Model Runner as the inference service behind Open WebUI, powering a richer chat experience. This blog walks you through why Docker Model Runner and Open WebUI make a powerful duo, how to set up your own local AI assistant, and what’s happening under the hood.

Local LLMs are no longer just experimental toys. Thanks to rapid advances in model optimization and increasingly powerful consumer hardware, local large language models have gone from proof-of-concept curiosities to genuinely useful tools.

Even a MacBook with an M-series chip can now run models that deliver fast, meaningful responses offline, without an internet connection or API keys.

Docker Model Runner, accessible via Docker Desktop (and also available as a plugin for Docker CE and of course fully OSS), makes getting started easy: just pick a model in the UI or run a single docker model run from the CLI. You’ll have a fully operational model up and running in seconds.

Docker Model Runner + Open WebUI: A powerful duo for running richer, local AI chat experiences

Docker Model Runner is designed as an inference service in Docker Desktop and the CLI, allowing developers to run models locally with the familiar workflows and commands they know and trust. From a design perspective, it provides only the bare minimum: a prompt box and a response field. There’s no memory. No file upload. No chat flow. No interface that feels like a real assistant. That’s intentional: there is no intention to replicate a full chat experience within Docker Desktop, since that is already well implemented by other offerings in the wider ecosystem.

That’s where Open WebUI comes in: a modern, self-hosted interface designed specifically for working with local LLMs.

It brings chat history, file uploads, prompt editing, and more. All local. All private.

That’s why an extension was created: to combine the two.

This Docker Extension launches Open WebUI and hooks it directly into your running model via Docker Model Runner. No configuration. No setup. Just open and go.

Let’s see how it works.

From Zero to Local AI Assistant in a Few Clicks

If you already have Docker Desktop installed, you’re almost there.

Head over to the Models tab and pick any model from the Docker Hub section: GPT-OSS, Gemma, LLaMA 3, Mistral or others.

One click, and Docker Model Runner will pull the model and start serving it locally.

Prefer the CLI? A single docker model pull does the same job.

Next, you might want something more capable than a single input box.

Open the Extensions Marketplace inside Docker Desktop and install the Open WebUI extension, a feature-rich interface for local LLMs.

It automatically provisions the container and connects to your local Docker Model Runner.

All models you’ve downloaded will appear in the WebUI, ready to use; no manual config, no environment variables, no port mapping.

Once setup completes, you’ll see a screen confirming the extension is running, along with a button to launch the interface in your browser.

Alternatively, you can open it manually at http://localhost:8090 (default port) or bookmark it for future use.

Note: The first-time startup may take a couple of minutes as Open WebUI installs required components and configures integration.

Subsequent launches are much faster – nearly instant.

What You Can Do with Open WebUI

Once installed, Open WebUI feels instantly familiar, like using ChatGPT, but running entirely on your own machine.

You get a full chat experience, with persistent conversations, system prompt editing, and the ability to switch between models on the fly.

Upload files and chat with them

Drop in PDFs, Markdown files, presentations, spreadsheets, or even images.

Open WebUI extracts the content and makes it queryable through the chat.

Need a summary, quick answer, or content overview? Just ask: all processing happens locally.

Speak instead of type

With voice input turned on, you can talk to your assistant right from the browser.

This is great for hands-free tasks, quick prompts, or just demoing your local AI setup to a friend.

Requires permission setup for microphone access.

Define how your model behaves

Open WebUI supports full control over system prompts with templates, variables, and chat presets.

Whether you’re drafting code, writing blog posts, or answering emails, you can fine-tune how the model thinks and responds.

Switch between models instantly

Already downloaded multiple models using Docker Model Runner?

Open WebUI detects them automatically. Pick any model from the dropdown and start chatting; no restart required.

Save insights to memory

Want the model to remember something specific?

You can store facts or reminders manually in the personal memory panel and edit or remove them at any time.

More Things You Can Do

Open WebUI goes beyond chat with advanced tools that power real workflows:

Function Calling & Plugins

Use prebuilt tools or define your own Python functions via the Pipelines framework, ideal for automations, translations, or data lookups.

Multilingual UI

Open WebUI supports a wide range of interface languages and is fully localizable, perfect for international teams or non-English users.

Secure, local-first by design

No sign-up and no cloud storage. Everything stays on your machine, under your control.

Note: Not all features are universally available. Some depend on the model’s capabilities (e.g., function calling, image understanding), your current Open WebUI settings (e.g., voice input, plugins), or the hardware you’re running on (e.g., GPU acceleration, local RAG performance). Open WebUI aims to provide a flexible platform, but actual functionality may vary based on your setup.

How it works inside

Under the hood, the extension brings together two key components: integration between Open WebUI and Docker Model Runner, and a dynamic container provisioner built into the Docker extension.

Open WebUI supports Python-based “functions”, lightweight plugins that extend model behavior. This extension includes a function that connects to Docker Model Runner via its local API, allowing the interface to list and access all downloaded models automatically.
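
If you want to poke at that API yourself, here’s a hedged sketch. It assumes you’ve enabled Model Runner’s host-side TCP access (port 12434 by default) and that its OpenAI-compatible endpoints are exposed under /engines/v1:

# list the models the extension's function would see
curl http://localhost:12434/engines/v1/models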

When you install the extension, Docker spins up the Open WebUI container on demand. It’s not a static setup; the container is configured dynamically based on your environment. You can:

Switch to a different Open WebUI image (e.g., CUDA-enabled or a specific version)

Change the default port

Support for custom environments and advanced flags – coming soon

The extension handles all of this behind the scenes, but gives you full control when needed.

Conclusion

You’ve just seen how the Docker Open WebUI Extension turns Docker Model Runner from a simple model launcher into a fully-featured local AI assistant with memory, file uploads, multi-model support, and more.

What used to require custom configs, manual ports, or third-party scripts now works out of the box, with just a few clicks.

Next steps

Install the Open WebUI Extension from the Docker Desktop Marketplace

Download a model via Docker Model Runner (e.g., GPT-OSS, Gemma, LLaMA 3, Mistral)

Launch the interface at http://localhost:8090 and start chatting locally

Explore advanced features: file chat, voice input, system prompts, knowledge, plugins

Switch models anytime or try new ones without changing your setup

The future of local AI is modular, private, and easy to use.

This extension brings us one step closer to that vision and it’s just getting started.

Get involved

Star or contribute to Open WebUI Docker Extension on GitHub

Follow updates and releases in the Docker Extensions Marketplace

Contribute to the Docker Model Runner repo: it’s open source and community-driven

Share feedback or use cases with the Docker and Open WebUI communities

Source: https://blog.docker.com/feed/

Why I Still Use jQuery

jQuery is a household name among web developers who have been around the block. Initially released in 2006, it took the web development world by storm with its easy and intuitive syntax for navigating a document, selecting DOM elements, handling events, and making AJAX requests. At its peak in 2015, jQuery featured on 62.7 percent of the top one million websites and 17 percent of all Internet websites.

A decade later, jQuery is not the shiny new kid on the block anymore. Most of the original pain points jQuery solved, such as DOM manipulation and inconsistent browser behavior, are gone thanks to modern browser APIs. 

But jQuery is still widely used. According to SimilarWeb, as of August 11, 2025, nearly 195 million websites use it. That means many developers, like me, still use it every day. And like me, you might prefer it in certain cases. 

So, in this article, I’ll share when it still makes sense to use jQuery and when not. Don’t worry: I’m not arguing we should replace React with jQuery. And I’m not here to romanticize 2008. In 2025, I simply still find myself reaching for jQuery because it’s the right tool for the job. 

A Brief History of jQuery

To determine when it makes sense to use jQuery and when not, it helps to know why it was created in the first place and what problems it aimed to solve.

When John Resig launched jQuery at BarCamp NYC in January 2006, the web was a different place. Features we take for granted today were absent from most browsers:

No querySelectorAll: Selecting DOM elements across browsers was messy. In the mid-2000s, none of the available element selectors, like getElementById or getElementsByClassName, could select elements using complex CSS queries.

Inconsistent event handling: addEventListener wasn’t universal. While browsers like Firefox, Safari, and Chrome supported the W3C event model with addEventListener, Internet Explorer (before IE9) used Microsoft’s proprietary model with attachEvent. These two models differed from each other in almost all functional aspects.

Inconsistent AJAX APIs: Different browsers had different APIs for XMLHttpRequest. While browsers like Firefox and Safari offered the familiar XMLHttpRequest, Internet Explorer (before IE7) used ActiveX objects to give JavaScript network capabilities. This meant you had to use a bunch of if-else blocks to make an AJAX request.

CSS manipulation quirks: In the 2000s and early 2010s, many CSS features were implemented inconsistently across browsers, which made it difficult to manipulate CSS with JS.

jQuery solved all of this with a simple, chainable syntax and consistent cross-browser behavior, offering a streamlined API for DOM traversal, event handling, and AJAX—far simpler than cross-browser native JavaScript at the time. These features made jQuery the go-to JavaScript library in the 2010s, powering everything from personal blogs to Fortune 500 sites. In 2012, a W3Techs survey found that jQuery was running on 50 percent of all websites, and by 2015, 62.7 percent of the top one million websites used it.

Where jQuery Still Makes Sense

Although jQuery’s glory days are clearly behind us, it still works well in some situations. Here are the scenarios where I still choose jQuery:

Legacy Projects

Even now, a W3Techs survey shows that jQuery is used on 77.8 percent of the top 10M websites in 2025. This is mostly legacy usage—old apps that use jQuery because switching to a more modern framework is a costly endeavour. This is clear when you look at the version statistics: in a 2023 survey across 500 organizations, only 44 percent use maintained versions (3.6.0 or newer), while 59 percent run older versions (1.x to 3.5.1).

I maintain a few legacy projects like these that were written with jQuery, and I can tell you why they’re still around: they just work. So as the adage goes, “If it ain’t broke, don’t fix it.” 

Many large enterprises, government sites, corporate intranets, and many WordPress plugins and themes still rely on jQuery. Rewriting these sites to pure JavaScript or a modern framework is a time-consuming, expensive endeavour that can also introduce new challenges and bugs. Most of the time, all that effort and risk aren’t worth the relatively small benefits in the short term.

The truth is this: the codebase I inherited, built in the jQuery era, works. The business logic is robust, the profit margins are healthy, and—most surprisingly—shipping new features feels like slipping into a worn leather jacket: unfashionable, but comfortable. – Marc Boisvert-Duprs

Yes, most jQuery plugins are no longer actively maintained or have been deprecated, so depending on them is a security risk. Abandoned plugins may become incompatible or insecure as browsers continue to evolve. So, legacy projects that use jQuery and jQuery plugins should eventually migrate away from jQuery.

Quick Prototyping without Build Tools

Developers often need to prototype very simple frontend apps, be it for throwaway demos, internal tools, or proof-of-concept pages. Sometimes the spec may even require a very basic frontend with minimal interactivity (for example, a static page with a simple form and a button).

jQuery is a perfect choice for these situations. Simply drop in a <script> tag from a CDN and get animations, DOM manipulation, and AJAX in minutes—no need for npm, bundlers, transpilers, or complicated frameworks with hundreds of dependencies. It’s also great for running quick commands from the DevTools console, especially if you want to experiment with an app. 

But why not use a more modern but lightweight framework like Alpine.js? Personally, I’m intimately familiar with jQuery: I’ve used it since the beginning of my web development journey. I love its simplicity and ease of use. The minor improvements a new framework can make in this scenario don’t offset the time spent learning a new tool.

Complex DOM Manipulation in Different Browser Contexts

Hopefully, you don’t have to support older browsers that lack the standard querySelector, or browsers like Internet Explorer, notorious for their non-standard behavior. Unfortunately, some of us still need to maintain apps that run on these browsers.

While native JS is perfectly fine for modern browsers, if you’re building something that has to run on older embedded browsers (think: kiosk software, older enterprise or university intranets, or web apps inside legacy desktop apps), jQuery’s normalization saves you from manual polyfilling, and its CSS selector lets you perform complex DOM manipulations easily.

Simple Animations without CSS Keyframes

As someone who primarily works with backend apps, I don’t often need to code animations for the frontend. But when I do need to create basic chained animations (fading, sliding, sequencing multiple elements, etc.), jQuery’s .animate() is simpler (and more lightweight) to write than juggling CSS animations and JS event callbacks.
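
For instance, a chained sequence like the following (the #banner selector is just a placeholder) reads almost like a sentence, with no keyframes or transitionend bookkeeping:

// fade in, nudge, pause, then slide away; jQuery queues each step automatically
$('#banner')
  .fadeIn(300)
  .animate({ opacity: 0.6, marginLeft: '40px' }, 400)
  .delay(200)
  .slideUp(300);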

Simple AJAX with HTML Server Responses

I was recently tasked with making some upgrades to an ancient app with a PHP backend. Imagine my surprise when I discovered that the server returns HTML fragments, not JSON. In this case, jQuery’s .load() and .html() methods can be simpler and more efficient than writing fetch() boilerplate with DOM parsing.

For example, I can extract a DOM element from the results of an AJAX request, and load it into an element like so:

// Replace #comments with just the #comments-list from the server response
$('#comments').load('/article/1 #comments-list');

Whereas the same thing in native JS would be:

fetch('/article/1')
  .then(res => res.text())
  .then(html => {
    const doc = new DOMParser().parseFromString(html, 'text/html');
    const comments = doc.querySelector('#comments-list');
    document.querySelector('#comments').innerHTML = comments.outerHTML;
  });

While the jQuery syntax is more straightforward, both approaches do the same thing under the hood, so there’s no meaningful performance gain. The jQuery version also carries the overhead of bundling the jQuery library, so it’s a tradeoff between simplicity and bundle size.

When You Should Not Use jQuery

While jQuery still makes sense in some situations, there are some cases where I would never use jQuery.

Building a Modern, Component-Driven Frontend

If I’m building a modern frontend app with lots of reactivity and reusable components, I’d use a modern framework like React or Vue with native features for DOM manipulation.

Frameworks like React, Vue, Svelte, and Angular manage DOM rendering themselves, whether through a virtual DOM or compiled reactivity. Direct DOM manipulation with jQuery conflicts with their data-binding approach, causing state mismatches and bugs.

For example, in React, calling $('#el').html('…') bypasses React’s virtual DOM and React won’t know about the change. This will inevitably lead to bugs that are difficult to diagnose.

When Simple Vanilla JS Is Enough

Most of jQuery’s once-killer features, such as selectors, AJAX, events, and animations, are now native in JavaScript:

document.querySelectorAll() replaces $().

fetch() replaces $.ajax().

element.classList replaces .addClass()/ .removeClass().

element.animate() handles animations.

If I’m just toggling classes or making a fetch call, adding jQuery is wasteful.
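
To make that concrete, here’s roughly what those everyday tasks look like with nothing but built-in APIs (the selectors and URL are placeholders):

// toggle a class on every matching element
document.querySelectorAll('.card').forEach(el => el.classList.toggle('active'));

// fetch JSON without $.ajax (run inside an async function or a module)
const res = await fetch('/api/items');
const items = await res.json();

// a simple fade-in via the Web Animations API
document.querySelector('#box').animate(
  [{ opacity: 0 }, { opacity: 1 }],
  { duration: 300, fill: 'forwards' }
);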

Targeting Modern Browsers Only

jQuery’s major draw between 2008 and 2015 was its cross-browser compatibility, which was necessary due to quirks in IE6–IE9. It simply wasn’t practical to write browser-specific JS for all the different versions of IE. With jQuery, the quirks were abstracted away.

Now that IE has been discontinued, that compatibility layer is no longer relevant.

So if the app I’m working on needs to support only modern browsers, I don’t need most of jQuery’s compatibility layer.

Projects Already Using Modern Tooling

Mixing jQuery and framework code leads to a “hybrid monster” that’s difficult to maintain. 

jQuery can conflict with existing frameworks, which can cause hard-to-fix bugs. If my project is already written in another framework, I avoid including jQuery.

Alternatives to jQuery

Sometimes, I need to use some features of jQuery, but I can’t justify including it in its entirety. Here are some libraries I use in cases like these.

DOM Selection and Traversal

Native DOM API (most common replacement) using document.querySelector() and document.querySelectorAll()

Cash: jQuery-like API, tiny (~10KB), works with modern browsers

Zepto.js: lightweight jQuery-compatible library for mobile-first projects

AJAX/HTTP Requests

Native fetch() API

Axios: promise-based HTTP client with interceptors and JSON handling.

Event Handling

Native events using element.addEventListener()

delegate-it: small utility for jQuery-style event delegation

Animations

CSS transitions and animations (native, GPU-accelerated)

Web Animations API

GSAP: Powerful animation library, much more capable than .animate() in jQuery.

Utilities

Lodash: collection iteration, object/array utilities, throttling, debouncing

Day.js: date manipulation in a tiny package (instead of jQuery’s date plugins)

All-in-One Mini jQuery Replacements

If you still like a single API but want it lighter than jQuery:

Umbrella JS: ~3KB, jQuery-like API

Bliss: focuses on modern features, syntactic sugar, and chaining

Cash: as mentioned above, the closest modern equivalent

 jQuery Still Has a Job

In 2025, jQuery isn’t the cutting-edge choice for building complex, highly interactive single-page applications that it was in the 2010s, and that’s perfectly fine. While modern frameworks dominate the headlines, jQuery remains a reliable, well-understood tool that solves the problems it was designed for, simply and effectively.

In the end, the “right” tool is the one that meets your project’s needs, and for countless developers and businesses, jQuery continues to be that.
Source: https://blog.docker.com/feed/