Introducing Docker Hardened Images: Secure, Minimal, and Ready for Production

From the start, Docker has focused on enabling developers to build, share, and run software efficiently and securely. Today, Docker Hub powers software delivery at a global scale, with over 14 million images and more than 11 billion pulls each month. That scale gives us a unique vantage point into how modern software is built and the challenges teams face in securing it.

That’s why we’ve made security a cornerstone of our platform. From trusted Docker Official Images to SBOM support for transparency, the launch of Docker Scout for real-time vulnerability insights, and a hardened Docker Desktop to secure local development, every investment reflects our commitment to making software supply chain security more accessible, actionable, and developer-first.

Now, we’re taking that commitment even further.

We’re excited to introduce Docker Hardened Images (DHI) — secure-by-default container images purpose-built for modern production environments.

These images go far beyond being just slim or minimal. Docker Hardened Images start with a dramatically reduced attack surface, up to 95% smaller, to limit exposure from the outset. Each image is curated and maintained by Docker, kept continuously up to date to ensure near-zero known CVEs. They support widely adopted distros like Alpine and Debian, so teams can integrate them without retooling or compromising compatibility.

Plus, they’re designed to work seamlessly with the tools you already depend on. We’ve partnered with a range of leading security and DevOps platforms, including Microsoft, NGINX, Sonatype, GitLab, Wiz, Grype, Neo4j, JFrog, Sysdig and Cloudsmith, to ensure seamless integration with scanning tools, registries, and CI/CD pipelines.

What we’re hearing from customers

We talk to teams every day, from fast-moving startups to global enterprises, and the same themes keep coming up.

Integrity is a growing concern: “How do we know every component in our software is exactly what it claims to be—and hasn’t been tampered with?” With so many dependencies, it’s getting harder to answer that with confidence.

Then there’s the attack surface problem. Most teams start with general-purpose base images like Ubuntu or Alpine. But over time, these containers get bloated with unnecessary packages and outdated software, creating more ways in for attackers.

And of course, operational overhead is through the roof. Security teams are flooded with CVEs. Developers are stuck in a loop of patching and re-patching, instead of shipping new features. We’re hearing about vulnerability scanners lighting up constantly, platform teams stretched thin by centralized dependencies, and developers resorting to manual upgrades just to stay afloat. These challenges aren’t isolated — they’re systemic. And they’re exactly what we designed Docker Hardened Images to address.

Inside Docker Hardened Images

Docker Hardened Images aren’t just trimmed-down versions of existing containers — they’re built from the ground up with security, efficiency, and real-world usability in mind. They’re designed to meet teams where they are. Here’s how they deliver value across three essential areas:

Seamless Migration

First, they integrate seamlessly into existing workflows. Unlike other minimal or “secure” images that force teams to change base OSes, rewrite Dockerfiles, or abandon tooling, DHI supports the distributions developers already use, including familiar Debian and Alpine variants. In fact, switching to a hardened image can be as simple as updating one line in your Dockerfile:
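For example, here is a minimal sketch of what that one-line change might look like (the repository path below is illustrative, not an actual published image name; hardened images are typically pulled from the DHI repository mirrored into your own organization's namespace):

# Before: a general-purpose base image
# FROM node:20-alpine

# After: the hardened equivalent (illustrative path; substitute the DHI
# repository available in your organization's namespace)
FROM myorg/dhi-node:20-alpine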

Flexible customization

Second, they strike the right balance between security and flexibility. Security shouldn’t mean sacrificing usability. DHI supports the customizations teams rely on, including certificates, packages, scripts, and configuration files, without compromising the hardened foundation. You get the security posture you need with the flexibility to tailor images to your environment.

Under the hood, Docker Hardened Images follow a distroless philosophy, stripping away unnecessary components like shells, package managers, and debugging tools that commonly introduce risk. While these extras might be helpful during development, they significantly expand the attack surface in production, slow down startup times, and complicate security management.

By including only the essential runtime dependencies needed to run your application, DHI delivers leaner, faster containers that are easier to secure and maintain. This focused, minimal design leads to up to a 95% reduction in attack surface, giving teams a dramatically stronger security posture right out of the box.

Automated Patching & Rapid CVE Response

Finally, patching and updates are continuous and automated. Docker monitors upstream sources, OS packages, and CVEs across all dependencies. When updates are released, DHI images are rebuilt, subjected to extensive testing, and published with fresh attestations—ensuring integrity and compliance within our SLSA Build Level 3–compliant build system. The result: you’re always running the most secure, verified version—no manual intervention required.

Most importantly, essential components are built directly from source, allowing us to deliver critical patches faster and remediate vulnerabilities promptly. We patch Critical and High-severity CVEs within 7 days, faster than typical industry response times, and back it all with an enterprise-grade SLA for added peace of mind.

Internal Adoption: Validating Docker Hardened Images in Production Environments

We’ve been using DHI internally across several key projects — putting them to the test in real-world, production environments. One standout example is our internal use of a hardened Node image. 

By replacing the standard Node base image with a Docker Hardened Image, we saw immediate and measurable results: vulnerabilities dropped to zero, and the package count was reduced by over 98%. 

That reduction in packages isn’t just a matter of image size, it directly translates to a smaller attack surface, fewer moving parts to manage, and significantly less overhead for our security and platform teams. This shift gave us a stronger security posture and simplified operational complexity — exactly the kind of outcome we designed DHI to deliver.

Ready to get started?

Docker Hardened Images are designed to help you ship software with confidence by dramatically reducing your attack surface, automating patching, and integrating seamlessly into your existing workflows. Developers stay focused on building. Security teams get the assurance they need.

Looking to reduce your vulnerability count?

We’re here to help. Get in touch with us and let’s harden your software supply chain, together.

Source: https://blog.docker.com/feed/

Docker at Microsoft Build 2025: Where Secure Software Meets Intelligent Innovation

This year at Microsoft Build, Docker will blend developer experience, security, and AI innovation with our latest product announcements. Whether you attend in person at the Seattle Convention Center or tune in online, you’ll see how Docker is redefining the way teams build, secure, and scale modern applications.

Docker’s Vision for Developers

At Microsoft Build 2025, Docker’s EVP of Product and Engineering, Tushar Jain, will present the company’s vision for AI-native software delivery, prioritizing simplicity, security, and developer flow. His session will explore how Docker is helping teams adopt AI without complexity and scale confidently from local development to production using the workflows they already trust.

This vision starts with security. Today’s developers are expected to manage a growing number of vulnerabilities, stay compliant with evolving standards, and still ship software on time. Docker helps teams simplify container security by integrating with tools like Microsoft Defender, Azure Container Registry, and AKS. This makes it easier to build secure, production-ready applications without overhauling existing workflows.

This session explores how Docker is streamlining agentic AI development by bringing models and MCP tools together in one familiar environment. Learn how to build agentic AI with your existing workflows and commands. Explore curated AI tools on Docker Hub to get inspired and jumpstart your projects. No steep learning curve is required! With built-in security, access control, and secret management, Docker handles the heavy lifting so you can focus on building smarter, more capable agents.

Don’t miss our follow-up demo session with Principal Engineer Jim Clark. He’ll show how to build an agentic app that uses Docker’s latest AI tools and familiar workflows.

Visit Docker at Booth #400 to see us in action

Throughout the conference, Docker will be live at Booth #400. Drop by for demos, expert walkthroughs, and to try out Docker Hardened Images, Model Runner, and MCP Catalog and Toolkit. Our product, engineering, and DevRel teams will be on-site to answer questions and help you get hands-on.

Party with your fellow Developers at MOPOP

We’re hosting an evening event at one of Seattle’s most iconic pop culture venues to celebrate the launch of our latest tools.

Docker MCP @ MOPOP
Date: Monday, May 19
Time: 7:00–10:00 PM
Location: Museum of Pop Culture, Seattle

Enjoy live demos, food and drinks, access to Docker engineers and leaders, and private after-hours access to the museum. Space is limited. RSVP now to reserve your spot!

Source: https://blog.docker.com/feed/

Securing Model Context Protocol: Safer Agentic AI with Containers

Model Context Protocol (MCP) tools remain primarily in the hands of early adopters, but broader adoption is accelerating. Alongside this growth, MCP security concerns are becoming more urgent. By increasing agent autonomy, MCP tools introduce new risks related to misalignment between agent behavior and user expectations and uncontrolled execution. These systems also present a novel attack surface, creating new software supply chain threats. As a result, MCP adoption raises critical questions about trust, isolation, and runtime control before these systems are integrated into production environments.

Where MCP tools fall short on security

Most of us first experimented with MCP tools by configuring files like the one shown below. This workflow is fast, flexible, and productive, ideal for early experimentation. But it also comes with trade-offs. MCP servers are pulled directly from the internet, executed on the host machine, and configured with sensitive credentials passed as plaintext environment variables. It’s a bit like setting off fireworks in your living room: thrilling, but not very safe.

{
  "mcpServers": {
    "mcpserver": {
      "command": "npx",
      "args": [
        "-y",
        "@org/mcp-server",
        "--user", "me"
      ],
      "env": {
        "SECRET_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}

As MCP tools move closer to production use, they force us to confront a set of foundational questions:

Can we trust the MCP server?

Can we guarantee the right software is installed on the host? Without that baseline, reproducibility and reliability fall apart. How do we verify the provenance and integrity of the MCP server itself? If we can’t trace where it came from or confirm what it contains, we can’t trust it to run safely. Even if it runs, how do we know it hasn’t been tampered with — either before it reached us or while it’s executing?

Are we managing secrets and access securely?

Secret management also becomes a pressing concern. Environment variables are convenient, but they’re not secure. We need ways to safely inject sensitive data into only the runtimes permitted to read it and nowhere else. The same goes for access control. As teams scale up their use of MCP tools, it becomes essential to define which agents are allowed to talk to which servers and ensure those rules are enforced at runtime.

Figure 1: Discussions on not storing secrets in .env files on Reddit. Credit: amirshk

How do we detect threats early? 

And then there’s the question of detection. Are we equipped to recognize the kinds of threats that are emerging around MCP tools? From prompt injection to malicious server responses, new attack vectors are already appearing. Without purpose-built tooling and clear security standards, we risk walking into these threats blind. Some recent threat patterns include:

MCP Rug Pull – A malicious MCP server can perform a “rug pull” by altering a tool’s description after it’s been approved by the user.

MCP Shadowing – A malicious server injects a tool description that alters the agent’s behavior toward a trusted service or tool. 

Tool Poisoning – Malicious instructions in MCP tool descriptions, hidden from users but readable by AI models.

What’s clear is that the practices that worked for early-stage experimentation won’t scale safely. As adoption grows, the need for secure, standardized mechanisms to package, verify, and run MCP servers becomes critical. Without them, the very autonomy that makes MCP tools powerful could also make them dangerous.

Why Containers for MCP servers

Developers quickly realized that the same container technology used to deliver cloud-native applications is also a natural fit for safely powering agentic systems. Containers aren’t just about packaging, they give us a controlled runtime environment where we can add guardrails and build a safer path toward adopting MCP servers.

Making MCP servers portable and secure 

Most of us are familiar with how containers are used to move software around, providing runtime consistency and easy distribution. Containers also provide a strong layer of isolation between workloads, helping prevent one application from interfering with another or with the host system. This isolation limits the blast radius of a compromise and makes it easier to enforce least-privilege access. In addition, containers can provide us with verification of both provenance and integrity. This continues to be one of the important lessons from software supply chain security. Together, these properties help mitigate the risks of running untrusted MCP servers directly on the host.

As a first step, we can use what we already know about cloud native delivery and simply distribute the MCP servers in a container. 

{
  "mcpServers": {
    "mcpserver": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "org/mcpserver:latest",
        "--user", "me"
      ],
      "env": {
        "SECRET_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}

But containerizing the server is only half the story. Developers would still need to specify arguments for the MCP server runtime and secrets. If those arguments are misconfigured, or worse, intentionally altered, they could expose sensitive data or make the server unsafe to run.

In the next section, we’ll cover key design considerations, guardrails, and best practices for mitigating these risks.

Designing secure containerized architectures for MCP servers and clients

Containers provide a solid foundation for securely running MCP servers, but they’re just the beginning. It’s important to consider additional guardrails and designs, such as how to handle secrets, defend against threats, and manage tool selection and authorization as the number of MCP servers and clients increases. 

Secure secrets handling

When these servers require runtime configuration secrets, container-based solutions must provide a secure interface for users to supply that data. Sensitive information like credentials, API keys, or OAuth access tokens should then be injected into only the authorized container runtimes. As with cloud-native deployments, secrets remain isolated and scoped to the workloads that need them, reducing the risk of accidental exposure or misuse.
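As a minimal illustration of the principle (a sketch only, not the MCP Toolkit’s own mechanism), Compose-style secrets show how a credential can be mounted into just the service that needs it instead of being passed as a plaintext environment variable:

# Sketch: scope an API key to a single containerized MCP server.
# The value is mounted at /run/secrets/api_key inside this service only,
# and never appears as an environment variable or in other workloads.
services:
  mcpserver:
    image: org/mcpserver:latest
    secrets:
      - api_key

secrets:
  api_key:
    file: ./secrets/api_key.txt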

Defenses against new MCP threats

Many of the emerging threats in the MCP ecosystem involve malicious servers attempting to trick agents and MCP servers into taking actions that conflict with the user’s intent. These attacks often begin with poisoned data flowing from the server to the client.

To mitigate this, it’s recommended to route all MCP client traffic through a single connection endpoint: an MCP Gateway, or a proxy built on top of containers. Think of MCP servers like passengers at an airport: by establishing one centralized security checkpoint (the Gateway), you ensure that everyone is screened before boarding the plane (the MCP client). This Gateway becomes the critical interface where threats like MCP Rug Pull Attacks, MCP Shadowing, and Tool Poisoning can be detected early and stopped. Mitigations include:

MCP Rug Pull: Prevents a server from changing its tool description after user consent. Clients must re-authorize if a new version is introduced.

MCP Shadowing: Detects agent sessions with access to sets of tools with semantically close descriptions, or outright conflicts.

Tool Poisoning: Uses heuristics or signature-based scanning to detect suspicious patterns in tool metadata, such as manipulative prompts or misleading capabilities, that are common in poisoning attacks.

Managing MCP server selection and authorization

As agentic systems evolve, it’s important to distinguish between two separate decisions: which MCP servers are trusted across an environment, and which are actually needed by a specific agent. The first defines a trusted perimeter, determining which servers can be used. The second is about intent and scope — deciding which servers should be used by a given client.

With the number of available MCP servers expected to grow rapidly, most agents will only require a small, curated subset. Managing this calls for clear policies around trust, selective exposure, and strict runtime controls. Ideally, these decisions should be enforced through platforms that already support container-based distribution, with built-in capabilities for storing, managing, and securely sharing workloads, along with the necessary guardrails to limit unintended access.

MCP security best practices

As the MCP spec evolves, we are already seeing helpful additions such as tool-level annotations like readOnlyHint and destructiveHint.  A readOnlyHint can direct the runtime to mount file systems in read-only mode, minimizing the risk of unintentional changes. Networking hints can isolate an MCP from the internet entirely or restrict outbound connections to a limited set of routes. Declaring these annotations in your tool’s metadata is strongly recommended. They can be enforced at container runtime and help drive adoption — users are more likely to trust and run tools with clearly defined boundaries.
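As a rough sketch of how this looks in practice (an abbreviated tool definition; the annotation fields follow the current MCP specification, though exact placement may shift as the spec evolves), a tool can declare its boundaries directly in its metadata:

{
  "name": "query_orders",
  "description": "Run a read-only SQL query against the orders database.",
  "annotations": {
    "readOnlyHint": true,
    "destructiveHint": false
  }
}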

We’re starting by focusing on developer productivity. But making these guardrails easy to adopt and test means they won’t get in the way, and that’s a critical step toward building safer, more resilient agentic systems by default.

How Docker helps  

Containers offer a natural way to package and isolate MCP tools, making them easier and safer to run. Docker extends this further with its latest MCP Catalog and Toolkit, streamlining how trusted tools are discovered, shared, and executed.

While many developers know that Docker provides an API for containerized workloads, the Docker MCP Toolkit builds on that by enabling MCP clients to securely connect to any trusted server listed in your MCP Catalog. This creates a controlled interface between agents and tools, with the familiar benefits of container-based delivery: portability, consistency, and isolation.

Figure 2: Docker MCP Catalog and Toolkit securely connects MCP servers to clients by running them in containers

The MCP Catalog, a part of Docker Hub, helps manage the growing ecosystem of tools by letting you identify trusted MCP servers while still giving you the flexibility to configure your MCP clients. Developers can not only decide which servers to make available to any agent, but also scope specific servers to their agents. The MCP Toolkit simplifies this further by exposing any set of trusted MCP servers through a single, unified connection, the MCP Gateway. 

Developers stay in control, defining how secrets are stored and which MCP servers are authorized to access them. Each server is referenced by a URL that points to a fully configured, ready-to-run Docker container. Since the runtime handles both content and configuration, agents interact only with MCP runtimes that are reproducible, verifiable, and self-contained.   These runtimes are tamper-resistant, isolated, and constrained to access only the resources explicitly granted by the user. Since all MCP messages pass through one gateway, the MCP Toolkit offers a single enforcement point for detecting threats before they become visible to the MCP client. 

Going back to the earlier example, our configuration is now a single connection to the Catalog with an allowed set of configured MCP server containers. The MCP client sees a managed view of the configured MCP servers over STDIO. The result: MCP clients have a safe connection to the MCP ecosystem!

{
  "mcpServers": {
    "mcpserver": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "alpine/socat", "STDIO", "TCP:host.docker.internal:8811"
      ]
    }
  }
}

Summary

We’re at a pivotal point in the evolution of MCP tool adoption. The ecosystem is expanding rapidly, and while it remains developer-led, more users are exploring ways to safely extend their agentic systems. Containers are proving to be the ideal delivery model for MCP tools — providing isolation, reproducibility, and security with minimal friction.

Docker’s MCP Catalog and Toolkit build on this foundation, offering a lightweight way to share and run trusted MCP servers. By packaging tools as containers, we can introduce guardrails without disrupting how users already consume MCP from their existing clients. The Catalog is compatible with any MCP client today, making it easy to get started without vendor lock-in.

Our goal is to support this fast-moving space by making MCP adoption as safe and seamless as possible, without getting in the way of innovation. We’re excited to keep working with the community to make MCP adoption not just easy and productive, but secure by default.

Learn more

Check out the MCP Catalog and Toolkit product launch blog 

Get started with Docker MCP Catalog and Toolkit

Join the webinar for a live technical walkthrough.

Visit our MCP webpage 

Subscribe to the Docker Navigator Newsletter.

New to Docker? Create an account. 

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

Simplifying Enterprise Management with Docker Desktop on the Microsoft Store

We’re excited to announce that Docker Desktop is now available on the Microsoft Store! This new distribution channel enhances both the installation and update experience for individual developers while significantly simplifying management for enterprise IT teams.

This milestone reinforces our commitment to Windows, our most widely used platform among Docker Desktop users. By partnering with the Microsoft Store, we’re ensuring seamless compatibility with enterprise management tools while delivering a more consistent experience to our shared customers.

Figure 1: Docker Desktop listing on the Microsoft Store: https://apps.microsoft.com/detail/xp8cbj40xlbwkx?hl=en-GB&gl=GB

Seamless deployment and control for enterprises

For developers:

Automatic Updates: The Microsoft Store handles all update processes automatically, ensuring you’re always running the latest version without manual intervention.

Streamlined Installation: Experience a more reliable setup process with fewer startup errors.

Unified Management: Manage Docker Desktop alongside your other applications in one familiar interface.

For IT administrators:

Native Intune MDM Integration: Deploy Docker Desktop across your organization using Microsoft’s enterprise management tools — Learn how to add Microsoft Store apps via Intune.

Centralized Control: Easily roll out Docker Desktop through the Microsoft Store’s enterprise distribution channels.

Security-Compatible Updates: Updates are handled automatically by the Microsoft Store infrastructure, even in organizations where users don’t have direct store access.

Updates Without Direct Store Access: The native integration with Intune allows automatic updates to function even when users don’t have Microsoft Store access — a significant advantage for security-conscious organizations with restricted environments.

Familiar Workflow: The update mechanism works similarly to winget commands (winget install --id=XP8CBJ40XLBWKX --source=msstore), providing consistency with other enterprise software management.

Why it matters for businesses and developers 

With 99% of enterprise users not running the latest version of Docker Desktop, the Microsoft Store’s automatic update capabilities directly address compliance and security concerns while minimizing downtime. IT administrators can now:

Increase Productivity: Developers can focus on innovation instead of managing installations.

Improve Operational Efficiency: Better control over Docker Desktop deployments reduces IT bottlenecks.

Enhance Compliance: Automatic updates and secure installations support enterprise security protocols.

Conclusion

Docker Desktop’s availability on the Microsoft Store represents a significant step forward in simplifying how organizations deploy and maintain development environments. By focusing on seamless updates, reliability, and enterprise-grade management, Docker and Microsoft are empowering teams to innovate with greater confidence.

Ready to try it out? Download Docker Desktop from the Microsoft Store today!

Learn more

Docker Desktop on Microsoft Store

Learn how to add Microsoft Store apps via Intune

Configure Microsoft Store access

Authenticate and update to receive your subscription level’s newest Docker Desktop features

New to Docker? Create an account 

Subscribe to the Docker Newsletter

Source: https://blog.docker.com/feed/

Introducing Docker MCP Catalog and Toolkit: The Simple and Secure Way to Power AI Agents with MCP

The Model Context Protocol (MCP) is quickly becoming the standard for connecting AI agents to external tools, but the developer experience hasn’t caught up. Discovery is fragmented, setup is clunky, and security is too often bolted on last. Fixing this experience isn’t a solo mission—it will take an industry-wide effort. A secure, scalable, and trusted MCP ecosystem demands collaboration across platforms and vendors.

That’s why we’re excited to announce Docker MCP Catalog and Toolkit are now available in Beta. The Docker MCP Catalog, now a part of Docker Hub, is your starting point for discovery, surfacing a curated set of popular, containerized MCP servers to jumpstart agentic AI development. But discovery alone isn’t enough. That’s where the MCP Toolkit comes in. It simplifies installation, manages credentials, enforces access control, and secures the runtime environment. Together, Docker MCP Catalog and MCP Toolkit give developers and teams a complete foundation for working with MCP tools, making them easier to find, safer to use, and ready to scale across projects and teams.

We’re partnering with some of the most trusted names in cloud, developer tooling, and AI, including Stripe, Elastic, Heroku, Pulumi, Grafana Labs, Kong Inc., Neo4j, New Relic, Continue.dev, and many more, to shape a secure ecosystem for MCP tools. With a one-click connection right from Docker Desktop to leading MCP clients like Gordon (Docker AI Agent), Claude, Cursor, VSCode, Windsurf, continue.dev, and Goose, building powerful, intelligent AI agents has never been easier.

This aligns perfectly with our mission. Docker pioneered the container revolution, transforming how developers build and deploy software. Today, over 20 million registered developers rely on Docker to build, share, and run modern applications. Now, we’re bringing that same trusted experience to the next frontier: Agentic AI with MCP tools.

Model Context Protocol is gaining momentum — what improvements are still needed?

As MCPs become the backbone of agentic AI systems, the developer experience still faces key challenges. Here are some of the major hurdles:

Discovering the right, official, and/or trustworthy tools is hard

Finding MCP servers is fragmented. Developers search across registries, community-curated lists, and blog posts—yet it’s still hard to know which ones are official and trustworthy.

Complex installations and distribution

Getting started with MCP tools remains complex. Developers often have to clone repositories, wrangle conflicting dependencies in environments like Node.js or Python, and self-host local services—many of which aren’t containerized, making setup and portability even harder. On top of that, connecting MCP clients adds more friction, with each one requiring custom configuration that slows down onboarding and adoption.

Auth and permissions fall short

Many MCP tools run with full access to the host, launched via npx or uvx, with no isolation or sandboxing. Credentials are commonly passed as plaintext environment variables, exposing sensitive data and increasing the risk of leaks. Moreover, these tools often aren’t designed for scale and security. They’re missing enterprise-ready features like policy enforcement, audit logs, and standardized security. 

How Docker can help solve these challenges

The Docker MCP Catalog and Toolkit are designed to address the above pain points by securely streamlining the discovery, installation, and authentication of MCP servers — making it easy to connect with your favorite MCP clients. 

Discover and run MCP servers easily in secure, isolated containers

The MCP Catalog makes it easy to discover and access 100+ MCP servers — including Stripe, Elastic, Neo4j, and many more — all available on Docker Hub. With the MCP Toolkit Docker Desktop extension, you can quickly and securely run and interact with these servers. By packaging MCP servers as containers, developers can sidestep common challenges such as runtime setup, dependency conflicts, and environment inconsistencies — just run the container, and it works. 

Figure 1: Discover curated and popular MCP servers in the Docker MCP Catalog, part of Docker Hub

We’re not just simplifying discovery and installation — we’re placing security at the heart of the MCP experience. Because MCPs run inside Docker container images, they inherit the same built-in security features developers already trust and a rich ecosystem of tools for securing software throughout the supply chain. And we’re going further. The Docker MCP Toolkit addresses emerging threats unique to MCP servers like Tool Poisoning and Tool Rug Pulls, by leveraging Docker’s strong position as both a provider of secure content and secure runtimes.

Figure 2: The MCP Toolkit Docker Desktop Extension allows you to easily and securely run MCP servers in containers.

Go to the extensions menu of Docker Desktop to get started with Docker MCP Catalog and Toolkit, or use this for installation. Check out our doc for more information.

One-Click MCP Client Integration with Built-In Secure Authentication

While a curated list of MCPs and simplified security is a great starting point, it’s just the beginning. You can connect popular MCP servers from the Docker MCP Catalog to any MCP client. For clients like Gordon (Docker AI Agent), Claude, Cursor, VSCode, Windsurf, continue.dev, and Goose, one-click setup will make integration seamless. 

The Docker MCP Toolkit includes built-in OAuth support and secure credential storage, enabling clients to authenticate with MCP servers and third-party services without hardcoding secrets into environment variables. This ensures your MCP tools run securely and reliably right from the start.

Figure 3: Easily connect to your favorite MCP clients like Gordon, Claude, Cursor, and continue.dev with one click.

Enterprise-Ready MCP Tooling: Build, manage, and share in Docker Hub

Soon, you’ll be able to build and share your own MCPs on Docker Hub—home to over 14 million images, millions of active users, and a robust ecosystem of trusted content. Teams count on Docker Hub for verified images, deep image analysis, lifecycle management, and enterprise-grade tooling. Those same trusted capabilities will soon extend to MCPs, giving teams access to the latest tools and a secure, reliable way to distribute their own. And just like container images, MCPs will integrate with enterprise features like Registry Access Management and Image Access Management, ensuring secure, streamlined developer workflows from end to end. 

Wrapping up

Docker MCP Catalog and Toolkit bring much-needed structure, security, and simplicity to the fast-growing world of MCP tools. By standardizing how MCP servers are discovered, installed, and secured, we’re removing friction for developers building smarter, more capable AI-powered applications and agents.

Whether you’re connecting to external tools, customizing workflows, or scaling automation inside your IDE, Docker makes the entire process easy and secure. And this is just the beginning. With ongoing investments in expanding the MCP ecosystem and streamlining how tools are managed, we’re committed to making powerful AI tooling accessible to every team.

With Docker MCP Catalog and Toolkit, your AI agent isn’t limited by what’s built in — it’s empowered by everything you can plug in.

Go to the extensions menu of Docker Desktop to get started with Docker MCP Catalog and Toolkit, or use this for installation. See it in action during our upcoming webinar. Interested in hosting your MCP servers on Docker? Let’s connect.

Learn more

Get started with Docker MCP Catalog and Toolkit

Join the webinar for a live technical walkthrough.

Visit our MCP webpage 

Authenticate and update today to receive your subscription level’s newest Docker Desktop features.

Subscribe to the Docker Navigator Newsletter.

New to Docker? Create an account. 

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

Update on the Docker DX extension for VS Code

It’s now been a couple of weeks since we released the new Docker DX extension for Visual Studio Code. This launch reflects a deeper collaboration between Docker and Microsoft to better support developers building containerized applications.

Over the past few weeks, you may have noticed some changes to your Docker extension in VS Code. We want to take a moment to explain what’s happening—and where we’re headed next.

What’s Changing?

The original Docker extension in VS Code is being migrated to the new Container Tools extension, maintained by Microsoft. It’s designed to make it easier to build, manage, and deploy containers—streamlining the container development experience directly inside VS Code.

As part of this partnership, we decided to bundle the new Docker DX extension with the existing Docker extension so that it installs automatically, making the process seamless.

While the automatic installation was intended to simplify the experience, we realize it may have caught some users off guard. To provide more clarity and choice, the next release will make the Docker DX extension an opt-in installation, giving you full control over when and how you want to use it.

What’s New from Docker?

Docker is introducing the new Docker DX extension, focused on delivering a best-in-class authoring experience for Dockerfiles, Compose files, and Bake files.

Key features include:

Dockerfile linting: Get build warnings and best-practice suggestions directly from BuildKit and Buildx—so you can catch issues early, right inside your editor (see the example after this list).

Image vulnerability remediation (experimental): Automatically flag references to container images with known vulnerabilities, directly in your Dockerfiles.

Bake file support: Enjoy code completion, variable navigation, and inline suggestions when authoring Bake files—including the ability to generate targets based on your Dockerfile stages.

Compose file outline: Easily navigate and understand complex Compose files with a new outline view in the editor.
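As an illustration of the kind of feedback to expect (a sketch; the warning names referenced in the comments, such as FromAsCasing and JSONArgsRecommended, come from BuildKit’s build checks), the extension flags patterns like these inline:

# Sketch: a Dockerfile with two issues that Dockerfile linting typically flags.
FROM node:20 as build            # warning: "as" should match the case of FROM (FromAsCasing)
WORKDIR /app
COPY . .
RUN npm ci && npm run build

FROM node:20-slim
COPY --from=build /app/dist /app
CMD npm start                    # warning: shell form; JSON form recommended (JSONArgsRecommended)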

Better Together

These two extensions are designed to work side-by-side, giving you the best of both worlds:

Powerful tooling to build, manage, and deploy your containers

Smart, contextual authoring support for Dockerfiles, Compose files, and Bake files

And the best part? Both extensions are free and fully open source.

Thank You for Your Patience

We know changes like this can be disruptive. While our goal was to make the transition as seamless as possible, we recognize that the approach caused some confusion, and we sincerely apologize for the lack of early communication.

The teams at Docker and Microsoft are committed to delivering the best container development experience possible—and this is just the beginning.

Where Docker DX is Going Next

At Docker, we’re proud of the contributions we’ve made to the container ecosystem, including Dockerfiles, Compose, and Bake.

We’re committed to ensuring the best possible experience when editing these files in your IDE, with instant feedback while you work.

Here’s a glimpse of what’s coming:

Expanded Dockerfile checks: More best-practice validations, actionable tips, and guidance—surfaced right when you need them.

Stronger security insights: Deeper visibility into vulnerabilities across your Dockerfiles, Compose files, and Bake configurations.

Improved debugging and troubleshooting: Soon, you’ll be able to live debug Docker builds—step through your Dockerfile line-by-line, inspect the filesystem at each stage, see what’s cached, and troubleshoot issues faster.

We Want Your Feedback!

Your feedback is critical in helping us improve the Docker DX extension and your overall container development experience.

If you encounter any issues or have ideas for enhancements you’d like to see, please let us know:

Open an issue on the Docker DX VS Code extension GitHub repo

Or submit feedback through the Docker feedback page

We’re listening and excited to keep making things better for you! 
Source: https://blog.docker.com/feed/

Docker Desktop 4.41: Docker Model Runner supports Windows, Compose, and Testcontainers integrations, Docker Desktop on the Microsoft Store

Big things are happening in Docker Desktop 4.41! Whether you’re building the next AI breakthrough or managing development environments at scale, this release is packed with tools to help you move faster and collaborate smarter. From bringing Docker Model Runner to Windows (with NVIDIA GPU acceleration!) and integrating it with Compose and Testcontainers, to new ways to manage models in Docker Desktop, we’re making AI development more accessible than ever. Plus, we’ve got fresh updates for your favorite workflows — like a new Docker DX Extension for Visual Studio Code, a speed boost for Mac users, and even a new location for Docker Desktop on the Microsoft Store. Also, we’re enabling ACH transfer as a payment option for self-serve customers. Let’s dive into what’s new!

Docker Model Runner now supports Windows, Compose & Testcontainers

This release brings Docker Model Runner to Windows users with NVIDIA GPU support. We’ve also introduced improvements that make it easier to manage, push, and share models on Docker Hub and integrate with familiar tools like Docker Compose and Testcontainers. Docker Model Runner works with Docker Compose projects to orchestrate model pulls and inject model runner services, and with Testcontainers via its libraries. These updates continue our focus on helping developers build AI applications faster using existing tools and workflows.

In addition to CLI support for managing models, Docker Desktop now includes a dedicated “Models” section in the GUI. This gives developers more flexibility to browse, run, and manage models visually, right alongside their containers, volumes, and images.

Figure 1: Easily browse, run, and manage models from Docker Desktop

Further extending the developer experience, you can now push models directly to Docker Hub, just like you would with container images. This creates a consistent, unified workflow for storing, sharing, and collaborating on models across teams. With models treated as first-class artifacts, developers can version, distribute, and deploy them using the same trusted Docker tooling they already use for containers — no extra infrastructure or custom registries required.

docker model push <model>
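A brief sketch of the surrounding workflow (the model and repository names below are illustrative, and subcommand behavior may vary by Docker Desktop version):

# Pull a published model from Docker Hub, try it locally, then publish your own
docker model pull ai/gemma3
docker model run ai/gemma3 "Give me a fact about whales."
docker model push myorg/my-model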

The Docker Compose integration makes it easy to define, configure, and run AI applications alongside traditional microservices within a single Compose file. This removes the need for separate tools or custom configurations, so teams can treat models like any other service in their dev environment.

Figure 2: Using Docker Compose to declare services, including running AI models

Similarly, the Testcontainers integration extends testing to AI models, with initial support for Java and Go and more languages on the way. This allows developers to run applications and create automated tests using AI services powered by Docker Model Runner. By enabling full end-to-end testing with Large Language Models, teams can confidently validate application logic, their integration code, and drive high-quality releases.

// Start a Docker Model Runner container via Testcontainers and point an
// OpenAI-compatible chat client at the endpoint it exposes.
String modelName = "ai/gemma3";
DockerModelRunnerContainer modelRunnerContainer = new DockerModelRunnerContainer()
        .withModel(modelName);
modelRunnerContainer.start();

// The Model Runner speaks the OpenAI API, so a standard OpenAI chat client works.
OpenAiChatModel model = OpenAiChatModel.builder()
        .baseUrl(modelRunnerContainer.getOpenAIEndpoint())
        .modelName(modelName)
        .logRequests(true)
        .logResponses(true)
        .build();

String answer = model.chat("Give me a fact about Whales.");
System.out.println(answer);

Docker DX Extension in Visual Studio Code: Catch issues early, code with confidence

The Docker DX Extension is now live on the Visual Studio Marketplace. This extension streamlines your container development workflow with rich editing, linting features, and built-in vulnerability scanning. You’ll get inline warnings and best-practice recommendations for your Dockerfiles, powered by Build Check — a feature we introduced last year. 

It also flags known vulnerabilities in container image references, helping you catch issues early in the dev cycle. For Bake files, it offers completion, variable navigation, and inline suggestions based on your Dockerfile stages. And for those managing complex Docker Compose setups, an outline view makes it easier to navigate and understand services at a glance.

Figure 3: Docker DX Extension in Visual Studio Code provides actionable recommendations for fixing vulnerabilities and optimizing Dockerfiles

Read more about this in our announcement blog and GitHub repo. Get started today by installing Docker DX from the Visual Studio Marketplace.

macOS QEMU virtualization option deprecation

The QEMU virtualization option in Docker Desktop for Mac will be deprecated on July 14, 2025. 

With the new Apple Virtualization Framework, you’ll experience improved performance, stability, and compatibility with macOS updates as well as tighter integration with Apple Silicon architecture. 

What this means for you:

If you’re using QEMU as your virtualization backend on macOS, you’ll need to switch to either Apple Virtualization Framework (default) or Docker VMM (beta) options.

This does NOT affect QEMU’s role in emulating non-native architectures for multi-platform builds.

Your multi-architecture builds will continue to work as before.

For complete details, please see our official announcement. 

Introducing Docker Desktop in the Microsoft Store

Docker Desktop is now available for download from the Microsoft Store! We’re rolling out an EXE-based installer for Docker Desktop on Windows. This new distribution channel provides an enhanced installation and update experience for Windows users while simplifying deployment management for IT administrators across enterprise environments.

Key benefits

For developers:

Automatic Updates: The Microsoft Store handles all update processes automatically, ensuring you’re always running the latest version without manual intervention.

Streamlined Installation: Experience a more reliable setup process with fewer startup errors.

Simplified Management: Manage Docker Desktop alongside your other applications in one familiar interface.

For IT admins: 

Native Intune MDM Integration: Deploy Docker Desktop across your organization with Microsoft’s native management tools.

Centralized Deployment Control: Roll out Docker Desktop more easily through the Microsoft Store’s enterprise distribution channels.

Automatic Updates Regardless of Security Settings: Updates are handled automatically by the Microsoft Store infrastructure, even in organizations where users don’t have direct store access.

Familiar Process: The update mechanism maps to the winget command, providing consistency with other enterprise software management tools.

This new distribution option represents our commitment to improving the Docker experience for Windows users while providing enterprise IT teams with the management capabilities they need.

Unlock greater flexibility: Enable ACH transfer as a payment option for self-serve customers

We’re focused on making it easier for teams to scale, grow, and innovate, all on their own terms. That’s why we’re excited to announce an upgrade to the self-serve purchasing experience: customers can pay via ACH transfer starting on April 30, 2025.

Historically, self-serve purchases were limited to credit card payments, forcing many customers who could not use credit cards into manual sales processes, even for small seat expansions. With the introduction of an ACH transfer payment option, customers can choose the payment method that works best for their business. Fewer delays and less unnecessary friction.

This payment option upgrade empowers customers to:

Purchase more independently without engaging sales

Choose between credit card or ACH transfer with a verified bank account

By empowering enterprises and developers, we’re freeing up your time, and ours, to focus on what matters most: building, scaling, and succeeding with Docker.

Visit our documentation to explore the new payment options, or log in to your Docker account to get started today!

Wrapping up 

With Docker Desktop 4.41, we’re continuing to meet developers where they are — making it easier to build, test, and ship innovative apps, no matter your stack or setup. Whether you’re pushing AI models to Docker Hub, catching issues early with the Docker DX Extension, or enjoying faster virtualization on macOS, these updates are all about helping you do your best work with the tools you already know and love. We can’t wait to see what you build next!

Learn more

Authenticate and update today to receive your subscription level’s newest Docker Desktop features.

Subscribe to the Docker Navigator Newsletter.

Learn about our sign-in enforcement options.

New to Docker? Create an account. 

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

How to build and deliver an MCP server for production

In December of 2024, we published a blog post with Anthropic about their brand-new spec for running tools with AI agents: the Model Context Protocol, or MCP. Since then, we’ve seen an explosion in developer appetite to build, share, and run their tools with agentic AI – all using MCP. We’ve seen new MCP clients pop up, and big players like Google and OpenAI committing to this standard. However, nearly immediately, early growing pains have led to friction when it comes to actually building and using MCP tools. At the moment, we’ve hit a major bump in the road.

MCP Pain Points

Runtime:

Getting up and running with MCP servers is a headache for devs. The standard runtimes for MCP servers rely on a specific version of Python or NodeJS, and combining tools means managing those versions, on top of extra dependencies an MCP server may require.

Security:

Giving an LLM direct access to run software on the host system is unacceptable to devs outside of hobbyist environments. In the event of hallucinations or incorrect output, significant damage could be done.

Users are asked to configure sensitive data in plaintext json files. An MCP config file contains all of the necessary data for your agent to act on your behalf, but likewise it centralizes everything a bad actor needs to exploit your accounts.

Discoverability:

The tools are out there, but there isn’t a single good place to find the best MCP servers. Marketplaces are beginning to crop up, but the developers are still required to hunt out good sources of tools for themselves.

Later on in the MCP user experience, it’s very easy to end up with enough servers and tools to overwhelm your LLM – leading to incorrect tools being used, and worse outcomes. When an LLM has the right tools for the job, it can execute more efficiently. When an LLM gets the wrong tools – or too many to choose from – hallucinations spike while evals plummet.

Trust:

When the tools are run by LLMs on behalf of the developer, it’s critical to trust the publisher of MCP servers. The current MCP publisher landscape looks like a gold rush, and is therefore vulnerable to supply-chain attacks from untrusted authors.

Docker as an MCP Runtime

Docker is a tried and true runtime to stabilize the environment in which tools run. Instead of managing multiple Node or Python installations, using Dockerized MCP servers allows anyone with the Docker Engine to run MCP servers.

Docker provides sandboxed isolation for tools so that undesirable LLM behavior can’t damage the host configuration. The LLM has no access to the host filesystem for example, unless that MCP container is explicitly bound. 

The MCP Gateway

In order for LLMs to work autonomously, they need to be able to discover and run tools for themselves. This is nearly impossible when every MCP server must be configured individually. Every time a new tool is added, a config file needs to be updated and the MCP client needs to be reloaded. The current workaround is to develop MCP servers which configure new MCP servers, but even this requires reloading. A much better approach is to simply use one MCP server: Docker. This MCP server acts as a gateway into a dynamic set of containerized tools. But how can tools be dynamic?

The MCP Catalog 

A dynamic set of tools in one MCP server means that users can go somewhere to add or remove MCP tools without modifying any config. This is achieved through a simple UI in Docker Desktop to maintain a list of tools which the MCP gateway can serve out. Users gain the ability to configure their MCP clients to use hundreds of Dockerized servers, all by “connecting” to the gateway MCP server.

Much like Docker Hub, Docker MCP Catalog delivers a trusted, centralized hub to discover tools for developers. And for tool authors, that same hub becomes a critical distribution channel: a way to reach new users and ensure compatibility with platforms like Claude, Cursor, OpenAI, and VS Code. 

Docker Secrets

Finally, in order to securely pass access tokens and other secrets around containers, we’ve developed a feature as part of Docker Desktop to manage secrets. When configured, secrets are only exposed to the MCP’s container process. That means the secret won’t appear even when inspecting the running container. Keeping secrets tightly scoped to the tools that need them means you no longer risk a data breach from MCP config files left lying around.
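A sketch of how this might look from the command line (the subcommand names below are an assumption for illustration, not confirmed CLI syntax):

# Assumed commands: store a secret in Docker Desktop's secret store so it is
# injected only into the MCP containers that need it, never into config files.
docker mcp secret set GITHUB_TOKEN=ghp_example_not_real
docker mcp secret ls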
Source: https://blog.docker.com/feed/

Dockerizing MCP – Bringing Discovery, Simplicity, and Trust to the Ecosystem

AI agents are moving fast—from labs to real-world apps. And as they go from generating text to taking real action, the Model Context Protocol (MCP) has emerged as the de facto standard for connecting agents to tools.

MCP is exciting. It’s simple, modular, and built on web-native principles. We believe it has the potential to do for agentic AI interaction what containers did for app deployment – standardize and simplify a complex, fragmented landscape.

But, that leaves us at a classic inflection point. MCP Clients and Servers hold enormous potential, but the experience isn’t production-ready – yet. Discovery is fragmented, trust is manual, and core capabilities like security and authentication are still patched together with workarounds. 

To move from prototypes to production, a few things need to become non-negotiable. First, developers need a trusted, centralized hub to discover tools – no more digging through Discord threads or Twitter replies. And for tool authors, that same hub becomes a critical distribution channel: a way to reach new users and ensure compatibility with platforms like Claude, Cursor, OpenAI, and VS Code. Today, that channel simply doesn’t exist. Second, containerization should be the default; cloning repos and wrangling dependencies just to get started is unnecessary friction. Third, credential management must be seamless and secure – centralized, encrypted, and built to fit modern pipelines. And finally, security has to be foundational. Sandbox it. Permission it. Audit it. Trust can’t be an afterthought—it needs to be built in from day one. And it needs to be simple to use: accessible to all developers.

This moment for MCP reminds us a lot of the early days of the cloud and containers – high potential, a few sharp edges, and massive opportunity ahead. These aren’t abstract problems – they’re the same challenges developers face every time a new technology hits its inflection point. We’ve seen it before. And we know how to help. Back in the early days of the cloud, Docker brought structure to chaos by making immutability and isolation the standard, building in authentication, and launching Docker Hub as a central discovery layer. It didn’t just streamline deployment – it redefined how software gets built, shared, and trusted. Today, Docker serves over 20 million developers and powers billions of image pulls every month. If we bring that same clarity, trust, and scalability to MCP, we unlock a whole new generation of intelligent agents and real-world automation. That’s exactly what we’re doing – with Docker MCP Catalog and Docker MCP Toolkit.

And we’re not doing it alone. We’re partnering with leaders like Stripe, Elastic, Heroku, Pulumi, Grafana Labs, Kong Inc., Neo4j, New Relic, Continue.dev, and more – each contributing their expertise to help shape a robust, open, and secure MCP ecosystem. This isn’t just another product launch – it’s the foundation of a platform shift. And we’re building it together.

The world we’ve envisioned is one we’re building together with our partners — and it all begins this May. Starting then, the Docker MCP Catalog will serve as the trusted home for discovering MCP tools – seamlessly integrated into Docker Hub. At launch, it will include over 100 verified tools from leading partners like Stripe, Elastic, Neo4j, and more. Each tool will feature publisher verification, versioned releases, and curated collections to help developers find exactly what they need, faster. And just like container images, MCP tools will be distributed via Docker’s proven pull-based infrastructure – the same trusted backbone behind billions of downloads every month.

Alongside it, the Docker MCP Toolkit brings these tools to life – making them secure, seamless, and instantly usable on your local machine or anywhere Docker runs. With one-click launch from Docker Desktop, you can spin up MCP servers in seconds and connect them to clients like Docker AI Agent, Claude, Cursor, VS Code, Windsurf, continue.dev, and Goose – no complex setup required. It also includes built-in credentials and OAuth management, integrated with your Docker Hub account, ensuring smooth authentication and making it easy to revoke credentials when necessary. A Gateway MCP Server dynamically exposes enabled tools to compatible clients, while the new docker mcp CLI lets you build, run, and manage them with ease. And with built-in memory, network and disk isolation, every tool runs securely by default, ready for production from day one.

So what does the future look like with Docker MCP Catalog and Toolkit? Picture this: browsing hundreds of ready-to-run MCP servers directly on Docker Hub and spinning them up as easily as Redis or Postgres. Instantly connecting them to agents with a few clicks. No more hardcoded secrets, no more launching tools with full host access via npx or uvx, and no more compromising on isolation or security. Best of all? Run a Docker container, and the MCP tools just work. With familiar commands and tooling, the learning curve is nearly zero—and the possibilities are massive.

Whether you’re building tools, creating agents, or just exploring what’s possible with MCP—we’d love to hear from you. Eager to try the Docker MCP Toolkit and MCP Catalog? Click here to join our alert list. Want a sneak peek? Schedule a session with our DevRel team here. Interested in hosting your own tools on the MCP Catalog? Get in touch with us here. Let’s build this ecosystem – together.
Source: https://blog.docker.com/feed/

Docker Desktop for Mac: QEMU Virtualization Option to be Deprecated in 90 Days

We are announcing the upcoming deprecation of QEMU as a virtualization option for Docker Desktop on Apple Silicon Macs. After serving as our legacy virtualization solution during the early transition to Apple Silicon, QEMU will be fully deprecated 90 days from today, on July 14, 2025. This deprecation does not affect QEMU’s role in emulating non-native architectures for multi-platform builds. By moving to Apple Virtualization Framework or Docker VMM, you will ensure optimal performance.

Why We’re Making This Change

Our telemetry shows that a very small percentage of users are still using the QEMU option. We’ve maintained QEMU support for backward compatibility, but both Docker VMM and Apple Virtualization Framework now offer:

Significantly better performance

Improved stability

Enhanced compatibility with macOS updates

Better integration with Apple Silicon architecture

What This Means For You

If you’re currently using QEMU as your Virtual Machine Manager (VMM) on Docker Desktop for Mac:

Your current installation will continue to work normally during the 90-day transition period

After July 14, 2025, Docker Desktop releases will automatically migrate your environment to Apple Virtualization Framework

You’ll experience improved performance and stability with the newer virtualization options

Migration Plan

The migration process will be smooth and straightforward:

Users on the latest Docker Desktop release will be automatically migrated to Apple Virtualization Framework after the 90-day period

During the transition period, you can manually switch to either Docker VMM (our fastest option for Apple Silicon Macs) or Apple Virtualization Framework through Settings > General > Virtual Machine Options

For 30 days after the deprecation date, the QEMU option will remain available in settings for users who encounter migration issues

After this extended period, the QEMU option will be fully removed

Note: This deprecation does not affect QEMU’s role in emulating non-native architectures for multi-platform builds.

What You Should Do Now

We recommend proactively switching to one of our newer VMM options before the automatic migration:

Update to the latest version of Docker Desktop for Mac

Open Docker Desktop Settings > General

Under “Choose Virtual Machine Manager (VMM)” select either:

Docker VMM (BETA) – Our fastest option for Apple Silicon Macs

Apple Virtualization Framework – A mature, high-performance alternative

Questions or Concerns?

If you have questions or encounter any issues during migration, please:

Visit our documentation

Reach out to us via GitHub issues

Join the conversation on the Docker Community Forums

We’re committed to making this transition as seamless as possible while delivering the best development experience on macOS.
Source: https://blog.docker.com/feed/