Broadcom’s New Bitnami Restrictions? Migrate Easily with Docker

For years, Bitnami has played a vital role in the open source and cloud-native community, making it easier for developers and operators to deploy popular applications with reliable, prebuilt container images and Helm charts. Countless teams have benefited from their work standardizing installation and updates for everything from WordPress to PostgreSQL. We want to acknowledge and thank Bitnami’s contributors for that important contribution.

Recently, however, Bitnami announced significant changes to how their images are distributed. Starting this month, access to most versioned images will move behind a paid subscription under Bitnami Secure Images (BSI), with only the :latest tags remaining free. Older images are being shifted into a Bitnami Legacy archive that will no longer receive updates. For many teams, this raises real challenges around cost, stability, and compliance.

Docker remains committed to being a trusted partner for developers and enterprises alike. Docker Official Images (DOI) are one of the two most widely used catalogs of open source container images in the world, and by far the more widely adopted of the two. While Bitnami has been valuable to the community, Docker Official Images see billions of pulls every month and are trusted by developers, maintainers, and enterprises globally. This is the standard foundation teams already rely on.

For production environments that require added security and compliance, Docker Hardened Images (DHI) are a seamless drop-in replacement for DOI. They combine the familiarity and compatibility of DOI with enterprise-ready features: minimal builds, non-root by default, signed provenance, and near-zero-CVE baselines. Unlike Bitnami’s new paid model, DHI is designed to be affordable and transparent, giving organizations the confidence they need without unpredictable costs.

Bitnami’s Access Changes Are Already Underway

On July 16, Broadcom’s Bitnami team announced changes to their container image distribution model, effective September 29. Here’s what’s changing:

Freely available prebuilt images and Helm charts are going away. The bitnami organization will be deleted.

New Bitnami Secure Images offering. Users who want to continue using Bitnami images will need a paid subscription to the new Bitnami Secure Images offering, hosted on the Bitnami registry. This provides access to stable tags and version history.

Free tier of Bitnami Secure Images. The bitnamisecure org has been created to provide a set of hardened, more secure images. Only the :latest tags will be available and the images are intended for development purposes only.

Unsupported legacy fallback. Older images are moved to a “Bitnami Legacy Registry”, available on Docker Hub in the bitnamilegacy org. These images are unsupported, will no longer receive updates or patches, and are intended to be used while making plans for alternatives.

Image and Helm chart source still available. While the built artifacts won’t be published, organizations will still be able to access the source code for Debian-based images and Helm charts. They can build and publish these on their own.
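For teams that choose this route, the build itself uses standard Docker tooling. A rough sketch (the checkout path below is illustrative; the actual layout of the bitnami/containers and bitnami/charts repositories varies by application, version, and branch):

git clone https://github.com/bitnami/containers.git
cd containers/bitnami/postgresql   # illustrative path; pick the app and version directory you need
docker build -t registry.example.com/postgresql:17 .
docker push registry.example.com/postgresql:17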

The timeline is tight too. Brownouts have already begun, and the public catalog deletion is set for September 29, 2025.

What Bitnami Users Need to Know

For many teams, this means Helm charts, CI/CD pipelines, and Kubernetes clusters relying on Bitnami will soon face broken pulls, compliance risks, or steep new costs.

The community reaction has been strong. Developers and operators have voiced concerns about:

Trust and stability concerns. Many see this as a “bait and switch,” with long-standing free infrastructure suddenly paywalled.

Increased operational risk. Losing version pinning or relying on :latest tags introduces deployment chaos, security blind spots, and audit failures.

Cost and budget pressure. Early pricing reports suggest that for organizations running hundreds of workloads, Bitnami’s new model could mean six-figure annual costs.

In short: teams depending on Bitnami for reliable, stable images and Helm charts now face an urgent decision.

Your Fastest Path Forward: Docker

At Docker, we believe developers and enterprises deserve choice and stability. That’s why we continue to offer two strong paths forward:

Docker Official Images – Free and Widely Available

Docker is committed to building and maintaining its Docker Official Image catalog. This catalog is:

Fully supported with a dedicated team. This team reviews, publishes, and maintains the Docker Official Images.

Focused on collaboration. The team works with upstream software maintainers, security experts, and the broader Docker community to ensure images work, are patched, and support the needs of the Docker community.

Trusted by millions of developers worldwide. The Docker Official Images are pulled billions of times per month for development, learning, and production.

Docker Hardened Images – Secure, Minimal, Production-Ready

Docker Hardened Images are secure, production-ready container images designed for enterprise use.

Smaller images with near-zero known CVEs. Start with images that are up to 95% smaller, with fewer packages and a much-reduced attack surface.

Fast, SLA-backed remediation. Critical and High severity CVEs are patched within 7 days, faster than typical industry response times, and backed by an enterprise-grade SLA.

Multi-distro support. Use the distros you’re familiar with, including trusted Linux distros like Alpine and Debian.

Signed provenance, SBOMs, and VEX data – for compliance confidence.

SLSA Level 3 builds, non-root by default, distroless options – following secure-by-default practices.

Self-service customization. Add certificates, packages, environment variables, and other configuration right into the build pipelines without forking or secondary patching.

Fully integrated into Docker Hub for a familiar developer workflow.

Start Your Move Today

If your organization is affected by the Bitnami changes, we are here to help. Docker offers you a fast path forward:

Audit your Bitnami dependencies. Identify which images you’re pulling.

Choose your path. Explore the Docker Official Images catalog or learn more about Docker Hardened Images. Many of the Bitnami images can be easily swapped with images from either catalog (see the sketch after this list).

Need help? Contact our sales team to learn how Docker Hardened Images can provide secure, production-ready images at scale.
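As a concrete illustration of such a swap, here is a minimal Compose sketch for moving from the Bitnami PostgreSQL image to the Docker Official postgres image (tags and credentials are illustrative; the two images use different environment variable names and data directories, so those need to be mapped):

services:
  db:
    # was: image: bitnami/postgresql:16
    image: postgres:16   # Docker Official Image
    environment:
      # Bitnami's POSTGRESQL_USERNAME / POSTGRESQL_PASSWORD / POSTGRESQL_DATABASE
      # map to the official image's POSTGRES_* variables
      POSTGRES_USER: app
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: app
    volumes:
      # Bitnami stores data under /bitnami/postgresql; the official image uses this path
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata: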

Source: https://blog.docker.com/feed/

Boost Your Copilot with SonarQube via Docker MCP Toolkit and Gateway

In the era of AI copilots and code generation tools, productivity is skyrocketing, but so is the risk of insecure, untested, or messy code slipping into production. How do you ensure AI-generated code doesn’t introduce vulnerabilities, bugs, or bad practices?

A widely adopted tool to help address these concerns is SonarQube. It provides a rich set of rules and quality gates to analyze code for bugs, test coverage, code smells, and security issues. But there’s a common pain point: the feedback loop. You often need to switch between your IDE and SonarQube’s results, breaking focus and slowing iteration.

What if your AI agent could see code quality issues the moment they appear, right in your IDE, without you switching tabs or breaking your flow? In this post, we’ll focus on enhancing your development workflow by integrating SonarQube analysis directly into your IDE using the Sonar MCP server and Docker MCP Toolkit.

Getting Started with Sonar MCP from the Docker MCP Toolkit

The solution is here: Sonar MCP Server – a Model Context Protocol (MCP) server that integrates with SonarQube (Cloud or Server) and allows AI agents (like GitHub Copilot) to access code quality metrics and insights directly from your IDE.

To enable Sonar MCP easily and securely, we’ll use the Docker MCP Toolkit. It provides a catalog of over 150 MCP servers – including SonarQube.

We won’t dive deep into how MCP servers and the MCP Toolkit work (check out the links below for that); instead, we’ll walk through a hands-on example of using the Docker MCP Toolkit with Sonar MCP in a Java project.

Further reading about MCP Catalog and Toolkit:

How Docker MCP Toolkit Works with VS Code Copilot Agent Mode

Introducing Docker MCP Catalog and Toolkit

Demo Project: Java Local Development with Testcontainers

For our demo, we’ll use the Java Local Development Testcontainers Workshop project, a Spring Boot-based microservice for managing a product catalog, complete with APIs and Testcontainers-based tests.

GitHub repo: GannaChernyshova/java-testcontainers-local-development

Before diving into MCP integration, ensure your Java project is already set up for SonarQube analysis. In this demo project, that includes:

Using the JaCoCo plugin to collect test coverage data

Adding the SonarQube Maven plugin for code scanning

We also created a corresponding project in SonarQube Cloud and linked it to the GitHub repository. The details of SonarQube setup are outside the scope of this post, but if you need guidance, check out the official SonarQube documentation.
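For reference, a SonarQube Cloud analysis is typically parameterized roughly like this (the organization, project key, and token are placeholders; the demo project may set these in its pom.xml or CI configuration instead):

mvn clean verify sonar:sonar \
  -Dsonar.host.url=https://sonarcloud.io \
  -Dsonar.organization=<your-org> \
  -Dsonar.projectKey=<your-project-key> \
  -Dsonar.token=<your-sonar-token>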

Step 1: Start the Sonar MCP Server via Docker Desktop

The Docker MCP Toolkit, available in Docker Desktop, makes it quick and secure to spin up MCP servers from a pre‑curated catalog without worrying about manual setup or complex dependencies. 

To get started:

Open Docker Desktop and navigate to the MCP Toolkit tab.

Browse the Catalog to find SonarQube.

Configure it with your SonarQube URL, organization, and access token.

Hit Start to launch the MCP server.

Figure 1: SonarQube MCP settings in the Docker Desktop MCP Toolkit

Your MCP server should now be up and running.

Step 2: Connect Sonar MCP to GitHub Copilot (IntelliJ)

We’ll use GitHub Copilot in IntelliJ, which now supports Agent Mode and MCP integration. GitHub provides detailed instructions on how to use the Model Context Protocol (MCP) to extend Copilot Chat.

Open Copilot Settings.

Edit or create the mcp.json file with:

{
  "servers": {
    "MCP_DOCKER": {
      "command": "docker",
      "args": [
        "mcp",
        "gateway",
        "run"
      ],
      "type": "stdio"
    }
  }
}

With this configuration, you enable the Docker MCP Gateway, a secure enforcement point between agents and external tools, which connects the MCP servers from the MCP Toolkit to your clients and agents.

Now when you switch to Agent Mode in Copilot Chat, you’ll see a list of tools available from the connected MCP server – in this case, the Sonar MCP tools.

Figure 2: Tools that SonarQube MCP server provides

Step 3: Analyze and Improve Your Code

Let’s scan the project:

mvn clean verify sonar:sonar

In our case, the default quality gate passed. However, the scan flagged 4 security issues, a few maintainability issues, and 72.1% test coverage, leaving room for improvement.

Figure 3: Initial SonarQube scanning overview

Time to bring in Copilot + Sonar MCP!

We can now ask Copilot Chat to list the issues, suggest fixes, help with adding missing tests, and iterate faster – all within IntelliJ, without switching context.

Through several iterations, the agent successfully:

Detected open issues, suggested and applied fixes:

Figure 4: GitHub Copilot Agent detects and fixes issues reported by SonarQube 

Improved test coverage based on the SonarQube report of uncovered code lines:

Figure 5: GitHub Copilot Agent writes tests for uncovered code detected in SonarQube report 

Resolved security problems and improved code maintainability:

Figure 6: GitHub Copilot Agent implements fixes based on the SonarQube open security and maintainability issues

As a result, the final SonarQube scan showed an A rating in every analysis category, and test coverage increased by over 15%, reaching 91.1%.

Figure 7: SonarQube scanning results after the fixes made with the help of Copilot

Conclusion

With the rapid rise of generative AI tools, developers can move faster than ever. But that speed comes with responsibility. The combination of Sonar MCP + Docker MCP Toolkit turns AI copilots into security- and quality-aware coding partners. It’s not just about writing code faster, it’s about writing better code first. 

Learn More

Discover hundreds of curated MCP servers on the Docker MCP Catalog

Learn more about Docker MCP Toolkit

Explore Docker MCP Gateway on GitHub

Source: https://blog.docker.com/feed/

Secure by Design: A Shift-Left Approach with Testcontainers, Docker Scout, and Hardened Images

In today’s fast-paced world of software development, product teams are expected to move quickly: building features, shipping updates, and reacting to user needs in real-time. But moving fast should never mean compromising on quality or security.

Thanks to modern tooling, developers can now maintain high standards while accelerating delivery. In a previous article, we explored how Testcontainers supports shift-left testing by enabling fast and reliable integration tests within the inner dev loop. In this post, we’ll look at the security side of this shift-left approach and how Docker can help move security earlier in the development lifecycle, using practical examples.

A Shift-Left Approach: Testing a Movie Catalog API

We’ll use a simple demo project to walk through our workflow. This is a Node.js + TypeScript API backed by PostgreSQL and tested with Testcontainers.

Movie API Endpoints:

POST /movies – Add a new movie to the catalog

GET /movies – Retrieve all movies, sorted by title

GET /movies/search?q=… – Search movies by title or description (fuzzy match)

Before deploying this app to production, we want to make sure it functions correctly and is free from critical vulnerabilities.

Shift-Left Testing with Testcontainers: Recap

We verify the application against a real PostgreSQL instance by using Testcontainers to spin up containers for both the database and the application. A key advantage of Testcontainers is that it creates these containers dynamically during test execution. Another feature of the Testcontainers libraries is the ability to start containers directly from a Dockerfile. This allows us to run the containerized application along with any required services, such as databases, effectively reproducing the local environment needed to test the application at the API or end-to-end (E2E) level. This approach provides an additional layer of quality assurance and brings even more testing into the inner development loop.

For a more detailed explanation of how Testcontainers enables a shift-left testing approach into the developer inner loop, refer to the introductory blog post.

Here’s a beforeAll setup that prepares our test environment, including PostgreSQL and the application under development, started from the Dockerfile:

beforeAll(async () => {
  const network = await new Network().start();

  // 1. Start Postgres
  db = await new PostgreSqlContainer("postgres:17.4")
    .withNetwork(network)
    .withNetworkAliases("postgres")
    .withDatabase("catalog")
    .withUsername("postgres")
    .withPassword("postgres")
    .withCopyFilesToContainer([
      {
        source: path.join(__dirname, "../dev/db/1-create-schema.sql"),
        target: "/docker-entrypoint-initdb.d/1-create-schema.sql"
      },
    ])
    .start();

  // 2. Build movie catalog API container from the Dockerfile
  const container = await GenericContainer
    .fromDockerfile("../movie-catalog")
    .withTarget("final")
    .withBuildkit()
    .build();

  // 3. Start movie catalog API container with environment variables for DB connection
  app = await container
    .withNetwork(network)
    .withExposedPorts(3000)
    .withEnvironment({
      PGHOST: "postgres",
      PGPORT: "5432",
      PGDATABASE: "catalog",
      PGUSER: "postgres",
      PGPASSWORD: "postgres",
    })
    .withWaitStrategy(Wait.forListeningPorts())
    .start();
}, 120000);

We can now test the movie catalog API:

it("should create and retrieve a movie", async () => {
const baseUrl = `http://${app.getHost()}:${app.getMappedPort(3000)}`;
const payload = {
title: "Interstellar",
director: "Christopher Nolan",
genres: ["sci-fi"],
releaseYear: 2014,
description: "Space and time exploration"
};

const response = await axios.post(`${baseUrl}/movies`, payload);
expect(response.status).toBe(201);
expect(response.data.title).toBe("Interstellar");
}, 120000);

This approach allows us to validate that:

The application is properly containerized and starts successfully.

The API behaves correctly in a containerized environment with a real database.

However, that’s just one part of the quality story. Now, let’s turn our attention to the security aspects of the application under development.

Introducing Docker Scout and Docker Hardened Images 

To follow modern best practices, we want to containerize the app and eventually deploy it to production. Before doing so, we must ensure the image is secure by using Docker Scout.

Our Dockerfile takes a multi-stage build approach and is based on the node:22-slim image.

###########################################################
# Stage: base
# This stage serves as the base for all of the other stages.
# By using this stage, it provides a consistent base for both
# the dev and prod versions of the image.
###########################################################
FROM node:22-slim AS base
WORKDIR /usr/local/app
RUN useradd -m appuser && chown -R appuser /usr/local/app
USER appuser
COPY --chown=appuser:appuser package.json package-lock.json ./

###########################################################
# Stage: dev
# This stage is used to run the application in a development
# environment. It installs all app dependencies and will
# start the app in a dev mode that will watch for file changes
# and automatically restart the app.
###########################################################
FROM base AS dev
ENV NODE_ENV=development
RUN npm ci --ignore-scripts
COPY --chown=appuser:appuser ./src ./src
EXPOSE 3000
CMD ["npx", "nodemon", "src/app.js"]

###########################################################
# Stage: final
# This stage serves as the final image for production. It
# installs only the production dependencies.
###########################################################
# Deps: install only prod deps
FROM base AS prod-deps
ENV NODE_ENV=production
RUN npm ci --production --ignore-scripts && npm cache clean --force
# Final: clean prod image
FROM base AS final
WORKDIR /usr/local/app
COPY --from=prod-deps /usr/local/app/node_modules ./node_modules
COPY ./src ./src
EXPOSE 3000
CMD [ "node", "src/app.js" ]

Let’s build our image with SBOM and provenance metadata. First, make sure that the containerd image store is enabled in Docker Desktop. We’ll also use the buildx command (a Docker CLI plugin that extends docker build) with the --provenance=true and --sbom=true flags. These options attach build attestations to the image, which Docker Scout uses to provide more detailed and accurate security analysis.

docker buildx build --provenance=true --sbom=true -t movie-catalog-service:v1 .

Then set up a Docker organization with security policies and scan the image with Docker Scout: 

docker scout config organization demonstrationorg
docker scout quickview movie-catalog-service:v1

Figure 1: Docker Scout CLI quickview output for node:22 based movie-catalog-service image

Docker Scout also offers a visual analysis via Docker Desktop.

Figure 2: Image layers and CVEs view in Docker Desktop for node:22 based movie-catalog-service image

In this example, no vulnerabilities were found in the application layer. However, several CVEs were introduced by the base node:22-slim image, including a high-severity CVE-2025-6020, a vulnerability present in Debian 12. This means that any Node.js image based on Debian 12 inherits this vulnerability. A common way to address this is by switching to an Alpine-based Node image, which does not include this CVE. However, Alpine uses musl libc instead of glibc, which can lead to compatibility issues depending on your application’s runtime requirements and deployment environment.
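To focus on just the higher-severity findings from the terminal, Docker Scout can filter the CVE list; something along these lines (flag names follow the current Docker Scout CLI, so adjust for your version):

docker scout cves --only-severity critical,high movie-catalog-service:v1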

So, what’s a more secure and compatible alternative?

That’s where Docker Hardened Images (DHI) come in. These images follow a distroless philosophy, removing unnecessary components to significantly reduce the attack surface. The result? Smaller images that pull faster, run leaner, and provide a secure-by-default foundation for production workloads:

Near-zero exploitable CVEs: Continuously updated, vulnerability-scanned, and published with signed attestations to minimize patch fatigue and eliminate false positives.

Seamless migration: Drop-in replacements for popular base images, with -dev variants available for multi-stage builds.

Up to 95% smaller attack surface: Unlike traditional base images that include full OS stacks with shells and package managers, distroless images retain only the essentials needed to run your app.

Built-in supply chain security: Each image includes signed SBOMs, VEX documents, and SLSA provenance for audit-ready pipelines.

For developers, DHI means fewer CVE-related disruptions, faster CI/CD pipelines, and trusted images you can use with confidence.

Making the Switch to Docker Hardened Images

Switching to a Docker Hardened Image is straightforward. All we need to do is replace the base image node:22-slim with a DHI equivalent.

Docker Hardened Images come in two variants:

Dev variant (demonstrationorg/dhi-node:22-dev) – includes a shell and package managers, making it suitable for building and testing.

Runtime variant (demonstrationorg/dhi-node:22) – stripped down to only the essentials, providing a minimal and secure footprint for production.

This makes them perfect for use in multi-stage Dockerfiles. We can build the app in the dev image, then copy the built application into the runtime image, which will serve as the base for production.

Here’s what the updated Dockerfile would look like:

###########################################################
# Stage: base
# This stage serves as the base for all of the other stages.
# By using this stage, it provides a consistent base for both
# the dev and prod versions of the image.
###########################################################
# Changed node:22-slim to dhi-node:22-dev
FROM demonstrationorg/dhi-node:22-dev AS base
WORKDIR /usr/local/app
# DHI comes with nonroot user built-in.
COPY --chown=nonroot package.json package-lock.json ./

###########################################################
# Stage: dev
# This stage is used to run the application in a development
# environment. It installs all app dependencies and will
# start the app in a dev mode that will watch for file changes
# and automatically restart the app.
###########################################################
FROM base AS dev
ENV NODE_ENV=development
RUN npm ci --ignore-scripts
# DHI comes with nonroot user built-in.
COPY --chown=nonroot ./src ./src
EXPOSE 3000
CMD ["npx", "nodemon", "src/app.js"]

###########################################################
# Stage: final
# This stage serves as the final image for production. It
# installs only the production dependencies.
###########################################################
# Deps: install only prod deps
FROM base AS prod-deps
ENV NODE_ENV=production
RUN npm ci --production --ignore-scripts && npm cache clean --force
# Final: clean prod image
# Changed base to dhi-node:22
FROM demonstrationorg/dhi-node:22 AS final
WORKDIR /usr/local/app
COPY --from=prod-deps /usr/local/app/node_modules ./node_modules
COPY ./src ./src
EXPOSE 3000
CMD [ "node", "src/app.js" ]

Let’s rebuild and scan the new image:

docker buildx build --provenance=true --sbom=true -t movie-catalog-service-dhi:v1 .
docker scout quickview movie-catalog-service-dhi:v1

Figure 3: Docker Scout CLI quickview output for dhi-node:22 based movie-catalog-service image

As you can see, all critical and high CVEs are gone, thanks to the clean and minimal footprint of the Docker Hardened Image.

One of the key benefits of using DHI is the security SLA it provides. If a new CVE is discovered, the DHI team commits to resolving:

Critical and high vulnerabilities within 7 days of a patch becoming available,

Medium and low vulnerabilities within 30 days.

This means you can significantly reduce your CVE remediation burden and give developers more time to focus on innovation and feature development instead of chasing vulnerabilities.

Comparing images with Docker Scout

Let’s also look at the image size and package count advantages of using distroless Hardened Images.

Docker Scout offers a helpful command, docker scout compare, which allows you to analyze and compare two images. We’ll use it to evaluate the difference in size and package footprint between the node:22-slim and dhi-node:22 based images.

docker scout compare local://movie-catalog-service:v1 --to local://movie-catalog-service-dhi:v1

Figure 4: Comparison of the node:22 and dhi-node:22 based movie-catalog-service images

As you can see, the original node:22-slim based image was 80 MB in size and included 427 packages, while the dhi-node:22 based image is just 41 MB with only 123 packages. 

By switching to a Docker Hardened Image, we reduced the image size by nearly 50 percent and cut the number of packages by more than a factor of three, significantly reducing the attack surface.

Final Step: Validate with local API tests

Last but not least, after migrating to a DHI base image, we should verify that the application still functions as expected.

Since we’ve already implemented Testcontainers-based tests, we can easily ensure that the API remains accessible and behaves correctly.

Let’s run the tests using the npm test command. 

Figure 5: Local API test execution results

As you can see, the container was built and started successfully. In less than 20 seconds, we were able to verify that the application functions correctly and integrates properly with Postgres.

At this point, we can push the changes to the remote repository, confident that the application is both secure and fully functional, and move on to the next task. 

Further integration with external security tools

In addition to providing a minimal and secure base image, Docker Hardened Images include a comprehensive set of attestations. These include a Software Bill of Materials (SBOM), which details all components, libraries, and dependencies used during the build process, as well as Vulnerability Exploitability eXchange (VEX). VEX offers contextual insights into vulnerabilities, specifying whether they are actually exploitable in a given environment, helping teams prioritize remediation.

Let’s say you’ve committed your code changes, built the application, and pushed a container image. Now you want to verify the security posture using an external scanning tool you already use, such as Grype or Trivy. That requires vulnerability information in a compatible format, which Docker Scout can generate for you.

First, you can view the list of available attestations using the docker scout attest command:

docker scout attest list demonstrationorg/movie-catalog-service-dhi:v1 --platform linux/arm64

This command returns a detailed list of attestations bundled with the image. For example, you might see two OpenVEX files: one for the DHI base image and another for any custom exceptions (like no-dsa) specific to your image.

Then, to integrate this information with external tools, you can export the VEX data into a vex.json file. Starting with Docker Scout v1.18.3, you can use the docker scout vex get command to get the merged VEX document from all VEX attestations:

docker scout vex get demonstrationorg/movie-catalog-service-dhi:v1 --output vex.json

This generates a vex.json file containing all VEX statements for the specified image. Tools that support VEX can then use this file to suppress known non-exploitable CVEs.

To use the VEX information with Grype or Trivy, pass the --vex flag during scanning:

trivy image demonstrationorg/movie-catalog-service-dhi:v1 --vex vex.json
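The Grype equivalent looks much the same (assuming a recent Grype release with OpenVEX support):

grype demonstrationorg/movie-catalog-service-dhi:v1 --vex vex.json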

This ensures your security scanning results are consistent across tools, leveraging the same set of vulnerability contexts provided by Docker Scout.

Conclusion

Shifting left is about more than just early testing. It’s a proactive mindset for building secure, production-ready software from the beginning.

This shift-left approach combines:

Real infrastructure testing using Testcontainers

End-to-end supply chain visibility and actionable insights with Docker Scout

Trusted, minimal base images through Docker Hardened Images

Together, these tools help catch issues early, improve compliance, and reduce security risks in the software supply chain.

Learn more and request access to Docker Hardened Images!
Source: https://blog.docker.com/feed/

Docker Desktop Accelerates Innovation with Faster Release Cadence

We’re excited to announce a major evolution in how we deliver Docker Desktop updates to you. Starting with Docker Desktop release 4.45.0 on 28 August, we’re moving to releases every two weeks, with the goal of reaching weekly releases by the end of 2025.

Why We’re Making This Change

You’ve told us you want faster access to new features, bug fixes, and security updates. By moving from a monthly to a two-week cadence, you get:

Earlier access to new features and improvements

Reduced wait times for critical updates

Faster bug fixes and security patches

Built on Proven Quality Processes

Our accelerated releases are backed by the same robust quality assurance that enterprise customers depend on:

Comprehensive automated testing across platforms and configurations

Docker Captains Community continues as our valued early adopter program, providing crucial feedback through beta channels

Real-time reliability monitoring to catch issues early

Feature flags for controlled rollout of major changes

Canary deployments reaching a small percentage of users first

Coming Soon

Along with faster releases, we’re completely redesigning how updates work. The following changes are going to be rolled out very soon:

Smarter Component Updates

Independent tools like Scout, Compose, Ask Gordon, and Model Runner update silently in the background

No workflow interruption – the component updates happen when you’re not actively developing

GUI updates (Docker Desktop dashboard) happen automatically when you close and reopen Docker Desktop

Clearer Update Information

Simplified update flow

In-app release highlights showcasing key improvements

Enterprise Control Maintained

We know enterprises need precise control over updates. The new model maintains the ability to disable in-app updates for local users or set defaults via the cloud admin console.

Getting Started

The new release cadence and update experience are rolling out in phases to all Docker Desktop users starting with version 4.45.0. Enterprise customers can access governance features through existing Docker Business subscriptions.

We’re excited to get improvements into your hands faster while maintaining the enterprise-grade reliability you expect from Docker Desktop. Download Docker Desktop here or update in-app!

Source: https://blog.docker.com/feed/

Prototyping an AI Tutor with Docker Model Runner

Every developer remembers their first docker run hello-world. The mix of excitement and wonder as that simple command pulls an image, creates a container, and displays a friendly message. But what if AI could make that experience even better?

As a technical writer on Docker’s Docs team, I spend my days thinking about developer experience. Recently, I’ve been exploring how AI can enhance the way developers learn new tools. Instead of juggling documentation tabs and ChatGPT windows, what if we could embed AI assistance directly into the learning flow? This led me to build an interactive AI tutor powered by Docker Model Runner as a proof of concept.

The Case for Embedded AI Tutors

The landscape of developer education is shifting. While documentation remains essential, we are seeing more developers coding alongside AI assistants. But context-switching between your terminal, documentation, and an external AI chat breaks concentration and flow. An embedded AI tutor changes this dynamic completely.

Imagine learning Docker with an AI assistant that:

Lives alongside your development environment

Maintains context about what you’re trying to achieve

Responds quickly without network latency

Keeps your code and questions completely private

This isn’t about replacing documentation. It’s about offering developers a choice in how they learn. Some prefer reading guides, others learn by doing, and increasingly, many want conversational guidance through complex tasks.

Building the AI Tutor

To build the AI tutor, I kept the architecture rather simple:

The frontend is a React app with a chat interface. Nothing fancy, just a message history, input field, and loading states.

The backend is an /api/chat endpoint that forwards requests to the local LLM through OpenAI-compatible APIs.

The AI powering it all is where Docker Model Runner comes in. Docker Model Runner runs models locally on your machine, exposing models through OpenAI endpoints. I decided to use Docker Model Runner because it promised local development and fast iteration.
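Because Docker Model Runner exposes an OpenAI-compatible endpoint, the backend’s /api/chat handler is essentially a thin pass-through. As a minimal sketch, a request against the host-side TCP endpoint looks roughly like this (the port, path, and model name follow the Model Runner documentation at the time of writing and may differ in your setup):

curl http://localhost:12434/engines/llama.cpp/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/smollm2",
        "messages": [
          {"role": "system", "content": "You are a Docker tutor..."},
          {"role": "user", "content": "Is this my first time using Docker?"}
        ]
      }'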

The system prompt was designed with running docker run hello-world in mind:

You are a Docker tutor with ONE SPECIFIC JOB: helping users run their first "hello-world" container.

YOUR ONLY TASK: Guide users through these exact steps:
1. Check if Docker is installed: docker --version
2. Run their first container: docker run hello-world
3. Celebrate their success

STRICT BOUNDARIES:
– If a user says they already know Docker: Respond with an iteration of "I'm specifically designed to help beginners run their first container. For advanced help, please review Docker documentation at docs.docker.com or use Ask Gordon."
– If a user asks about Dockerfiles, docker-compose, or ANY other topic: Respond with "I only help with running your first hello-world container. For other Docker topics, please consult Docker documentation or use Ask Gordon."
– If a user says they've already run hello-world: Respond with "Great! You've completed what I'm designed to help with. For next steps, check out Docker's official tutorials at docs.docker.com."

ALLOWED RESPONSES:
– Helping install Docker Desktop (provide official download link)
– Troubleshooting "docker --version" command
– Troubleshooting "docker run hello-world" command
– Explaining what the hello-world output means
– Celebrating their success

CONVERSATION RULES:
– Use short, simple messages (max 2-3 sentences)
– One question at a time
– Stay friendly but firm about your boundaries
– If users persist with off-topic questions, politely repeat your purpose

EXAMPLE BOUNDARY ENFORCEMENT:
User: "Help me debug my Dockerfile"
You: "I'm specifically designed to help beginners run their first hello-world container. For Dockerfile help, please check Docker's documentation or Ask Gordon."

Start by asking: "Hi! I'm your Docker tutor. Is this your first time using Docker?"

Setting Up Docker Model Runner

Getting started with Docker Model Runner proved straightforward. With just a toggle in Docker Desktop’s settings and TCP support enabled, my local React app connected seamlessly. The setup delivered on Docker Model Runner’s promise of simplicity.
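If you want to smoke-test a model outside the app first, the docker model CLI (available once Model Runner is enabled in Docker Desktop) makes that quick; the model name here is illustrative rather than the one the tutor necessarily uses:

docker model pull ai/smollm2
docker model run ai/smollm2 "Explain docker run hello-world in one sentence."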

During initial testing, the model performed well. I could interact with it through the OpenAI-compatible endpoint, and my React frontend connected without requiring modifications or fine-tuning. I had my prototype up and running in no time.

To properly evaluate the AI tutor, I approached it from two paths. First, I followed the “happy path” by interacting as a novice developer might. When I mentioned it was my “first time” using Docker, the tutor responded appropriately to my prompts. It walked me through checking if Docker was installed using my terminal before running my container. 

Next, I ventured down the “unhappy path” to test the tutor’s boundaries. Acting as an experienced developer, I attempted to push beyond basic container operations. The AI tutor maintained its focus and stayed within its designated scope.

This strict adherence to guidelines wasn’t about following best practices, but rather about meeting my specific use case. I needed to prototype an AI tutor with clear guardrails that served a single, well-defined purpose. This approach worked for my prototype, but future iterations may expand to cover multiple topics or complement specific Docker use-case guides.

Reflections on Docker Model Runner

Docker Model Runner delivered on its core promise: making AI models accessible through familiar Docker workflows. The vision of models as first-class citizens in the Docker ecosystem proved valuable for rapid local prototyping. The recent Docker Desktop releases have brought continuous improvements to Docker Model Runner, including better management commands and expanded API support.

What worked really well for me:

Native integration with Docker Desktop, a tool I use all day, every day

OpenAI-compatible APIs that require no frontend modifications

GPU acceleration support for faster local inference

Growing model selection available on Docker Hub

More than anything, simplicity is its standout feature. Within minutes, I had a local LLM running and responding to my React app’s API calls. The speed from idea to working prototype is exactly what developers need when experimenting with AI tools.

Moving Forward

This prototype proved that embedded AI tutors aren’t just an idea, they’re a practical learning tool. Docker Model Runner provided the foundation I needed to test whether contextual AI assistance could enhance developer learning.

For anyone curious about Docker Model Runner:

Start experimenting now! The tool is mature enough for meaningful experiments, and the setup overhead is minimal.

Keep it simple. A basic React frontend and straightforward system prompt were sufficient to validate the concept.

Think local-first. Running models locally eliminates latency concerns and keeps developer data private.

Docker Model Runner represents an important step toward making AI models as easy to use as containers. While my journey had some bumps, the destination was worth it: an AI tutor that helps developers learn.

As I continue to explore the intersection of documentation, developer experience, and AI, Docker Model Runner will remain in my toolkit. The ability to spin up a local model as easily as running a container opens up possibilities for intelligent, responsive developer tools. The future of developer experience might just be a docker model run away.

Try It Yourself

Ready to build your own AI-powered developer tools? Get started with Docker Model Runner.

Have feedback? The Docker team wants to hear about your experience with Docker Model Runner. Share what’s working, what isn’t, and what features you’d like to see. Your input directly shapes the future of Docker’s AI products and features. Share feedback with Docker.

Source: https://blog.docker.com/feed/

The Supply Chain Paradox: When “Hardened” Images Become a Vendor Lock-in Trap

The market for pre-hardened container images is experiencing explosive growth as security-conscious organizations pursue the ultimate efficiency: instant security with minimal operational overhead. The value proposition is undeniably compelling—hardened images with minimal dependencies promise security “out of the box,” enabling teams to focus on building and shipping applications rather than constantly revisiting low-level configuration management.

For good reason, enterprises are adopting these pre-configured images to reduce attack surface area and simplify security operations. In theory, hardened images deliver reduced setup time, standardized security baselines, and streamlined compliance validation with significantly less manual intervention.

Yet beneath this attractive surface lies a fundamental contradiction. While hardened images can genuinely reduce certain categories of supply chain risk and strengthen security posture, they simultaneously create a more subtle form of vendor dependency than traditional licensing models. Organizations are unknowingly building critical operational dependencies on a single vendor’s design philosophy, build processes, institutional knowledge, responsiveness, and long-term market viability.

The paradox is striking: in the pursuit of supply chain independence, many organizations are inadvertently creating more concentrated dependencies and potentially weakening their security through stealth vendor lock-in that becomes apparent only when it’s costly to reverse.

The Mechanics of Modern Vendor Lock-in

Unfamiliar Base Systems Create Switching Friction

The first layer of lock-in emerges from architectural choices that seem benign during initial evaluation but become problematic at scale. Some hardened image vendors deviate from mainstream distributions, opting to bake their own Linux variants rather than offering widely-adopted options like Debian, Alpine, or Ubuntu. This deviation creates immediate friction for platform engineering teams who must develop vendor-specific expertise to effectively manage these systems. Even if the differences are small, this raises the spectre of edge-cases – the bane of platform teams. Add enough edge cases and teams will start to fear adoption.

While vendors try to standardize their approach to hardening, in reality, it remains a bespoke process. This can create differences from image to image across different open source versions, up and down the stack – even from the same vendor. In larger organizations, platform teams may need to offer hardened images from multiple vendors. This creates further compounding complexity. In the end, teams find themselves managing a heterogeneous environment that requires specialized knowledge across multiple proprietary approaches. This increases toil, adds risk, increases documentation requirements and raises the cost of staff turnover.

Compatibility Barriers and Customization Constraints

More problematic is how hardened images often break compatibility with standard tooling and monitoring systems that organizations have already invested in and optimized. Open source compatibility gaps emerge when hardened images introduce modifications that prevent seamless integration with established DevOps workflows, forcing organizations to either accept reduced functionality or invest in vendor-specific alternatives.

Security measures, while well-intentioned, can become so restrictive they prevent necessary business customizations. Configuration lockdown reaches levels where platform teams cannot implement organization-specific requirements without vendor consultation or approval, transforming what should be internal operational decisions into external dependencies.

Perhaps most disruptive is how hardened images force changes to established CI/CD pipelines and operational practices. Teams discover that their existing automation, deployment scripts, and monitoring configurations require substantial modification to accommodate the vendor’s approach to security hardening.

The Hidden Migration Tax

The vendor lock-in trap becomes most apparent when organizations attempt to change direction. While vendors excel at streamlining initial adoption—providing migration tools, professional services, and comprehensive onboarding support—they systematically downplay the complexity of eventual exit scenarios.

Organizations accumulate sunk costs through investments in training and vendor-specific tooling that create psychological and financial barriers to switching providers. More critically, expertise about these systems becomes concentrated within vendor organizations rather than distributed among internal teams. Platform engineers find themselves dependent on vendor documentation, support channels, and institutional knowledge to troubleshoot issues and implement changes.

The Open Source Transparency Problem

The hardened image industry leverages the credibility of open source. But it can also undermine the spirit of open source transparency by creating what amounts to a fork without the benefits of community. While vendors may provide source code access, this availability doesn’t guarantee system understanding or maintainability. The knowledge required to comprehend complex hardening processes often remains concentrated within small vendor teams, making independent verification and modification practically impossible.

Heavily modified images become difficult for internal teams to audit and troubleshoot. Platform engineers encounter systems that appear familiar on the surface but behave differently under stress or during incident response, creating operational blind spots that can compromise security during critical moments.

Trust and Verification Gaps

Transparency is only half the equation. Security doesn’t end at a vendor’s brand name or marketing claims. Hardened images are part of your production supply chain and should be scrutinized like any other critical dependency. Questions platform teams should ask include:

How are vulnerabilities identified and disclosed? Is there a public, time-bound process, and is it tied to upstream commits and advisories rather than just public CVEs?

Could the hardening process itself introduce risks through untested modifications?

Have security claims been independently validated through audits, reproducible builds, or public attestations?

Does your SBOM metadata accurately reflect the full context of your hardened image?

Transparency plus verification and full disclosure builds durable trust. Without both, hardened images can be difficult to audit, slow to patch, and nearly impossible to replace. Failing to provide easy-to-understand, easy-to-consume verification artefacts and answers functions as a form of lock-in: it forces the customer to trust without allowing them to verify.

Building Independence: A Strategic Framework

For platform teams that want the security gains and ease of use of hardened images while avoiding lock-in, a structured approach to hardened-image vendor decisions is critical.

Distribution Compatibility as Foundation

Platform engineering leaders must establish mainstream distribution adherence as a non-negotiable requirement. Hardened images should be built from widely-adopted distributions like Debian, Ubuntu, Alpine, or RHEL rather than vendor-specific variants that introduce unnecessary complexity and switching costs.

Equally important is preserving compatibility with standard package managers and maintaining adherence to the Filesystem Hierarchy Standard (FHS) to preserve tool compatibility and operational familiarity across teams. Key requirements include:

Package manager preservation: Compatibility with standard tools (apt, yum, apk) for independent software installation and updates 

File system layout standards: Adherence to FHS for seamless integration with existing tooling

Library and dependency compatibility: No proprietary dependencies that create additional vendor lock-in

Enabling Rapid Customization Without Security Compromise

Security enhancements should be architected as modular, configurable layers rather than baked-in modifications that resist change. This approach allows organizations to customize security posture while maintaining the underlying benefits of hardened configurations.

Built-in capability to modify security settings through standard configuration management tools preserves existing operational workflows and prevents the need for vendor-specific automation approaches. Critical capabilities include:

Modular hardening layers: Security enhancements as removable, configurable components

Configuration override mechanisms: Integration with standard tools (Ansible, Chef, Puppet)

Whitelist-based customization: Approved modifications without vendor consultation

Continuous validation: Continuous verification that customizations don’t compromise security baselines

Community Integration and Upstream Collaboration

Organizations should demand that hardened image vendors contribute security improvements back to original distribution maintainers. This requirement ensures that security enhancements benefit the broader community and aren’t held hostage by vendor business models.

Evaluating vendor participation in upstream security discussions, patch contributions, and vulnerability disclosure processes provides insight into their long-term commitment to community-driven security rather than proprietary advantage. Essential evaluation criteria include:

Upstream contribution requirements: Active contribution of security improvements to distribution maintainers

True community engagement: Participation in security discussions and vulnerability disclosure processes

Compatibility guarantees: Contractual requirements for backward and forward compatibility with official distributions

Intelligent Migration Tooling and Transparency

AI-powered Dockerfile conversion capabilities should provide automated translation between vendor hardened images and standard distributions, handling complex multi-stage builds and dependency mappings without requiring manual intervention.

Migration tooling must accommodate practical deployment patterns including multi-service containers and legacy application constraints rather than forcing organizations to adopt idealized single-service architectures. Essential tooling requirements include:

Automated conversion capabilities: AI-powered translation between hardened images and standard distributions

Transparent migration documentation: Open source tools that generate equivalent configurations for different providers

Bidirectional conversion: Tools that work equally well for migrating to and away from hardened images

Real-world architecture support: Accommodation of practical deployment patterns rather than forcing idealized architectures

Practical Implementation Framework

Standardized compatibility testing protocols should verify that hardened images integrate seamlessly with existing toolchains, monitoring systems, and operational procedures before deployment at scale. Self-service customization interfaces for common modifications eliminate dependency on vendor support for routine operational tasks.

Advanced image merging capabilities allow organizations to combine hardened base images with custom application layers while maintaining security baselines, providing flexibility without compromising protection. Implementation requirements include:

Compatibility testing protocols: Standardized verification of integration with existing toolchains and monitoring systems

Self-service customization: User-friendly tools for common modifications (CA certificates, custom files, configuration overlays)

Image merging capabilities: Advanced tooling for combining hardened bases with custom application layers

Vendor SLAs: Service level agreements for maintaining compatibility and providing migration support

Conclusion: Security Without Surrendering Control

The real question platform teams must ask is this: does my hardened image vendor strengthen or weaken my control of my own supply chain? The risks of lock-in aren’t theoretical. All of the factors described above can turn security into an unwanted operational constraint. Platform teams can demand hardened images and a hardening process built for independence from the start: rooted in mainstream distributions, transparent in their build processes, modular in their security layers, supported by strong community involvement, and buttressed by tooling that makes migration a choice, not a crisis.

When security leaders adopt hardened images that preserve compatibility, encourage upstream collaboration, and fit seamlessly into existing workflows, they protect more than just their containers. They protect their ability to adapt and they minimize lock-in while actually improving their security posture. The most secure organizations will be the ones that can harden without handcuffing themselves.
Source: https://blog.docker.com/feed/

Building AI Agents with Docker MCP Toolkit: A Developer’s Real-World Setup

Building AI agents in the real world often involves more than just making model calls — it requires integrating with external tools, handling complex workflows, and ensuring the solution can scale in production.

In this post, we’ll walk through a real-world developer setup for creating an agent using the Docker MCP Toolkit.

To make things concrete, I’ve built an agent that takes a Git repository as input and can answer questions about its contents — whether it’s explaining the purpose of a function, summarizing a module, or finding where a specific API call is made. This simple but practical use case serves as a foundation for exploring how agents can interact with real-world data sources and respond intelligently.

I built and ran it using the Docker MCP Toolkit, which made setup and integration fast, portable, and repeatable. This blog walks you through that developer setup and explains why Docker MCP is a game changer for building and running agents.

Use Case: GitHub Repo Question-Answering Agent

The goal: Build an AI agent that can connect to a GitHub repository, retrieve relevant code or metadata, and answer developer questions in plain language.

Example queries:

“Summarize this repo: https://github.com/owner/repo”

“Where is the authentication logic implemented?”

“List main modules and their purpose.”

“Explain the function parse_config and show where it’s used.”

This goes beyond a simple code demo — it reflects how developers work in real-world environments:

The agent acts like a code-aware teammate you can query anytime.

The MCP Gateway handles tooling integration (GitHub API) without bloating the agent code.

Docker Compose ties the environment together so it runs the same in dev, staging, or production.

Role of Docker MCP Toolkit

Without MCP Toolkit, you’d spend hours wiring up API SDKs, managing auth tokens, and troubleshooting environment differences.

With MCP Toolkit:

Containerized connectors – Run the GitHub MCP Gateway as a ready-made service (docker/mcp-gateway:latest), no SDK setup required.

Consistent environments – The container image has fixed dependencies, so the setup works identically for every team member.

Rapid integration – The agent connects to the gateway over HTTP; adding a new tool is as simple as adding a new container.

Iterate faster – Restart or swap services in seconds using docker compose.

Focus on logic, not plumbing – The gateway handles the GitHub-specific heavy lifting while you focus on prompt design, reasoning, and multi-agent orchestration.

Role of Docker Compose 

Running everything via Docker Compose means you treat the entire agent environment as a single deployable unit:

One-command startup – docker compose up brings up the MCP Gateway (and your agent, if containerized) together.

Service orchestration – Compose ensures dependencies start in the right order.

Internal networking – Services talk to each other by name (http://mcp-gateway-github:8080) without manual port wrangling.

Scaling – Run multiple agent instances for concurrent requests (see the one-liners after this list).

Unified logging – View all logs in one place for easier debugging.
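Scaling and unified logging, for instance, each come down to a single command once the agent itself is packaged as a Compose service (the service name agent is an assumption; this repository’s Compose file currently defines only the gateway):

docker compose up -d --scale agent=3   # run three agent replicas behind the same gateway
docker compose logs -f                 # follow logs from every service in one stream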

Architecture Overview

This setup connects a developer’s local agent to GitHub through a Dockerized MCP Gateway, with Docker Compose orchestrating the environment. Here’s how it works step-by-step:

User Interaction

The developer runs the agent from a CLI or terminal.

They type a question about a GitHub repository — e.g., “Where is the authentication logic implemented?”

Agent Processing

The Agent (LLM + MCPTools) receives the question.

The agent determines that it needs repository data and issues a tool call via MCPTools.

MCPTools → MCP Gateway

 MCPTools sends the request using streamable-http to the MCP Gateway running in Docker.

This gateway is defined in docker-compose.yml and configured for the GitHub server (--servers=github --port=8080).

GitHub Integration

The MCP Gateway handles all GitHub API interactions — listing files, retrieving content, searching code — and returns structured results to the agent.

LLM Reasoning

The agent sends the retrieved GitHub context to OpenAI GPT-4o as part of a prompt.

 The LLM reasons over the data and generates a clear, context-rich answer.

Response to User

The agent prints the final answer back to the CLI, often with file names and line references.

Code Reference & File Roles

The detailed source code for this setup is available in the rajeshsgr/mcp-demo-agents repository on GitHub.

Rather than walk through it line-by-line, here’s what each file does in the real-world developer setup:

docker-compose.yml

Defines the MCP Gateway service for GitHub.

Runs the docker/mcp-gateway:latest container with GitHub as the configured server.

Exposes the gateway on port 8080.

Can be extended to run the agent and additional connectors as separate services on the same network; a minimal sketch follows below.
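
Based purely on the description above, a minimal docker-compose.yml could look roughly like this; the repository's actual file may differ:

services:
  mcp-gateway-github:
    image: docker/mcp-gateway:latest              # prebuilt MCP Gateway image from Docker
    command: ["--servers=github", "--port=8080"]  # enable the GitHub server on port 8080, as described above
    ports:
      - "8080:8080"                               # expose the gateway so the agent on the host can reach it
    # GitHub credentials are supplied via `docker mcp secret set` (see Setup and Execution);
    # the exact wiring, plus an optional agent service, lives in the repository's real Compose file.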

app.py

Implements the GitHub Repo Summarizer Agent.

Uses MCPTools to connect to the MCP Gateway over streamable-http.

Sends queries to GitHub via the gateway, retrieves results, and passes them to GPT-4o for reasoning.

Handles the interactive CLI loop so you can type questions and get real-time responses.

In short: the Compose file manages infrastructure and orchestration, while the Python script handles intelligence and conversation.
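
To make that split concrete, here is a minimal sketch of the agent side of the flow. It uses the official MCP Python SDK's streamable HTTP client rather than the repo's MCPTools wrapper, and the gateway endpoint path, the tool name, and the environment-variable handling are assumptions, so treat app.py in the repository as the source of truth:

import asyncio
import os

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
from openai import OpenAI

GATEWAY_URL = "http://localhost:8080/mcp"  # assumed path; check the gateway's configuration for the real endpoint

async def answer(question: str) -> str:
    # Connect to the MCP Gateway over streamable HTTP and fetch repository context.
    async with streamablehttp_client(GATEWAY_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("GitHub tools exposed by the gateway:", [t.name for t in tools.tools])
            # Tool name and arguments are hypothetical; pick one of the tools listed above.
            result = await session.call_tool("search_code", {"query": question})
            context = "\n".join(getattr(block, "text", str(block)) for block in result.content)
    # Hand the retrieved GitHub context to GPT-4o for reasoning.
    llm = OpenAI(api_key=os.environ["OPEN_AI_KEY"])  # same variable name as the .env file in the setup below
    completion = llm.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You answer questions about a GitHub repository."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(asyncio.run(answer("Where is the authentication logic implemented?")))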

Setup and Execution

Clone the repository 

git clone https://github.com/rajeshsgr/mcp-demo-agents.git

cd mcp-demo-agents

Configure environment

Create a .env file in the root directory and add your OpenAI API key:

OPEN_AI_KEY=<your OpenAI API key>

Configure GitHub Access

To allow the MCP Gateway to access GitHub repositories, set your GitHub personal access token:

docker mcp secret set github.personal_access_token=<YOUR_GITHUB_TOKEN>

Start MCP Gateway

Bring up the GitHub MCP Gateway container using Docker Compose:

docker compose up -d

Install Dependencies & Run Agent

python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python app.py

Ask Queries

Enter your query: Summarize https://github.com/owner/repo

Real-World Agent Development with Docker, MCP, and Compose

This setup is built with production realities in mind:

Docker ensures each integration (GitHub, databases, APIs) runs in its own isolated container with all dependencies preconfigured.

MCP acts as the bridge between your agent and real-world tools, abstracting away API complexity so your agent code stays clean and focused on reasoning.

Docker Compose orchestrates all these moving parts, managing startup order, networking, scaling, and environment parity between development, staging, and production.

From here, it’s easy to add:

More MCP connectors (Jira, Slack, internal APIs).

Multiple agents specializing in different tasks.

CI/CD pipelines that spin up this environment for automated testing (a sketch of such a job follows below).
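
For example, a CI job could reuse the exact same Compose file. The commands below are only a sketch; the test command is a placeholder for whatever suite exercises the agent:

docker compose up --wait   # start the gateway (and agent) and wait until services report as running/healthy
pytest tests/              # run integration tests against the running agent (hypothetical test suite)
docker compose down -v     # tear down containers and volumes after the run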

Final Thoughts

By combining Docker for isolation, MCP for seamless tool integration, and Docker Compose for orchestration, we’ve built more than just a working AI agent — we’ve created a repeatable, production-ready development pattern. This approach removes environment drift, accelerates iteration, and makes it simple to add new capabilities without disrupting existing workflows. Whether you’re experimenting locally or deploying at scale, this setup ensures your agents are reliable, maintainable, and ready to handle real-world demands from day one.

Before vs. After: The Developer Experience

Each aspect below is compared without and with Docker + MCP + Compose.

Environment Setup
– Without: Manual SDK installs, dependency conflicts, “works on my machine” issues.
– With: Prebuilt container images with fixed dependencies ensure identical environments everywhere.

Integration with Tools (GitHub, Jira, etc.)
– Without: Custom API wiring in the agent code; high maintenance overhead.
– With: MCP handles integrations in separate containers; agent code stays clean and focused.

Startup Process
– Without: Multiple scripts/terminals; manual service ordering.
– With: docker compose up launches and orchestrates all services in the right order.

Networking
– Without: Manually configuring ports and URLs; prone to errors.
– With: Internal Docker network with service name resolution (e.g., http://mcp-gateway-github:8080).

Scalability
– Without: Scaling services requires custom scripts and reconfigurations.
– With: Scale any service instantly with docker compose up --scale.

Extensibility
– Without: Adding a new integration means changing the agent’s code and redeploying.
– With: Add new MCP containers to docker-compose.yml without modifying the agent.

CI/CD Integration
– Without: Hard to replicate environments in pipelines; brittle builds.
– With: The same Compose file works locally, in staging, and in CI/CD pipelines.

Iteration Speed
– Without: Restarting services or switching configs is slow and error-prone.
– With: Containers can be stopped, replaced, and restarted in seconds.

Source: https://blog.docker.com/feed/

Streamline NGINX Configuration with Docker Desktop Extension

Docker periodically highlights blog posts featuring use cases and success stories from Docker partners and practitioners. This story was contributed by Dylen Turnbull and Timo Stark. With over 29 years in enterprise and open-source software development, Dylen Turnbull has held roles at Symantec, Veritas, F5 Networks, and most recently as a Developer Advocate for NGINX. Timo is a Docker Captain, Head of IT at DoHo Engineering, and was formerly a Principal Technical Product Manager at NGINX.

Modern application developers face challenges in managing dependencies, ensuring consistent environments, and scaling applications. Docker Desktop simplifies these tasks with intuitive containerization, delivering reliable environments, easy deployments, and scalable architectures. Managing NGINX servers in containers still leaves room for improvement, which the NGINX Development Center addresses with user-friendly tools for optimizing configuration, performance, and web server management.

Opportunities for Increased Workflow Efficiency

Docker Desktop streamlines container workflows, but NGINX configuration can be further improved with the NGINX Development Center:

Easier Configuration: NGINX setup often requires command-line expertise. The NGINX Development Center offers intuitive interfaces to simplify the process.

Simplified Multi-Server Management: Managing multiple configurations involves complex volume mounting. The NGINX Development Center centralizes and streamlines configuration handling.

Improved Debugging: Debugging requires manual log access and container inspection. The NGINX Development Center provides clear diagnostic tools for faster resolution.

Faster Iteration: Reverse proxy updates need frequent restarts. The NGINX Development Center enables quick configuration changes with minimal downtime.

By integrating Docker Desktop’s seamless containerization with the NGINX Development Center’s tools, developers can achieve a more efficient workflow for modern applications.

The NGINX Development Center, available in the Docker Extensions Marketplace with over 51,000 downloads, addresses these frictions, streamlining NGINX configuration management for developers.

The Advantage for App/Web Server Development

The NGINX Development Center enhances app and web server development by offering an intuitive GUI-based interface integrated into Docker Desktop, simplifying server configuration file management without requiring command-line expertise. It provides streamlined access to runtime configuration previews, minimizing manual container inspection, and enables rapid iteration without container restarts for faster development and testing cycles.

Centralized configuration management ensures consistency across development, testing, and production environments. Seamlessly integrated with Docker Desktop, the extension reduces the complexity of traditional NGINX workflows, allowing developers to focus on application development rather than infrastructure management.

Overview of the NGINX Development Center

The NGINX Development Center, developed by Timo Stark, is designed to enhance the developer experience for NGINX server configuration in containerized environments. Available in the Docker Extensions Marketplace, the extension leverages Docker Desktop’s extensibility to provide a dedicated NGINX development environment. Key features include:

Graphical Configuration Interface

A user-friendly UI for creating and editing NGINX server blocks, routing rules, and SSL configurations.

Run-Time Configuration Updates

Apply changes to NGINX instances without container restarts, supporting rapid iteration.

Integrated Debugging Tools

Validate configurations and troubleshoot issues directly within Docker Desktop.

How Does the NGINX Development Center Work?

The NGINX Development Center Docker extension, based on the NGINX Docker Desktop Extension public repository, simplifies NGINX configuration and management within Docker Desktop. It operates as a containerized application with a React-based user interface and a Node.js backend, integrated into Docker Desktop via the Extensions Marketplace and Docker API.

Here’s how it works in simplified terms:

Installation and Setup: The extension is installed from the Docker Extensions Marketplace or built locally using a Dockerfile that compiles the UI and backend components. It runs as a container within Docker Desktop, pulling the image nginx/nginx-docker-extension:latest.

User Interface: The React-based UI, accessible through the NGINX Development Center tab in Docker Desktop, allows developers to create and edit NGINX configurations, such as server blocks, routing rules, and SSL settings.

Configuration Management: The Node.js backend processes user inputs from the UI, generates NGINX configuration files, and applies them to a managed NGINX container. Changes are deployed dynamically using NGINX’s reload mechanism, avoiding container restarts.

Integration with Docker: The extension communicates with Docker Desktop’s API to manage NGINX containers and uses Docker volumes to store configuration files and logs, ensuring seamless interaction with the Docker ecosystem.

Debugging Support: While it doesn’t provide direct log access, the extension supports debugging by validating configurations in real-time and leveraging Docker Desktop’s native tools for indirect log viewing.

The extension’s backend, built with Node.js, handles configuration generation and NGINX instance management, while the React-based frontend provides an intuitive user experience. For development, the extension supports hot reloading, allowing developers to test changes without rebuilding the image.
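
Under the hood, the dynamic update described above is essentially NGINX's standard validate-and-reload cycle. The extension drives it for you; the equivalent manual commands would look roughly like this (the container name is illustrative):

docker exec my-nginx nginx -t          # validate the generated configuration inside the managed container
docker exec my-nginx nginx -s reload   # apply the new configuration without restarting the container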

Architecture Diagram

Below is a simplified architecture diagram illustrating how the NGINX Development Center integrates with Docker Desktop:

NGINX Development Center architecture showing integration with Docker Desktop, featuring a Node.js backend and React UI, managing NGINX containers and configuration files.

Docker Desktop: Hosts the extension and provides access to the Docker API and Extensions Marketplace.

NGINX Development Center: Runs as a container, with a Node.js backend for configuration management and a React UI for user interaction.

NGINX Container: The managed NGINX instance, configured dynamically by the extension.

Configuration Files: Generated and monitored by the extension, stored in Docker volumes for persistence.

Why run NGINX configuration management as a Docker Desktop Extension?

Running NGINX configuration management as a Docker Desktop Extension provides a unified, streamlined experience for developers already working within the Docker ecosystem. By integrating directly into Docker Desktop’s interface, the extension eliminates the friction of switching between multiple tools and command-line interfaces, allowing developers to manage NGINX configurations alongside their containerized applications in a single, familiar environment.

The extension approach leverages Docker’s inherent benefits of isolation and consistency, ensuring that NGINX configuration management operates reliably across different development machines and operating systems. This containerized approach prevents conflicts with local system configurations and removes the complexity of installing and maintaining separate NGINX management tools.

Furthermore, Docker Desktop serves as the only prerequisite for the NGINX Development Center. Once Docker Desktop is installed, developers can immediately access sophisticated NGINX configuration capabilities without additional software installations, complex environment setup, or specialized NGINX expertise. The extension transforms what traditionally requires command-line proficiency into an intuitive, graphical workflow that integrates seamlessly with existing Docker-based development practices.

Getting Started

Follow these steps to set up and use the NGINX Development Center Docker extension. Prerequisites: Docker Desktop and one running NGINX container.

NGINX Development Center Setup in Docker Desktop:

Ensure Docker Desktop is installed and running on your machine (Windows, macOS, or Linux).

Installing the NGINX Development Center:

Open Docker Desktop and navigate to the Extensions Marketplace (left-hand menu).

Search for “NGINX” or “NGINX Development Center”.

Click “Install” to pull and install the NGINX Development Center image.

Accessing the NGINX Development Center:

After installation, a new “NGINX” tab appears in Docker Desktop’s left-hand menu.

Click the tab to open the NGINX Development Center, where you can manage configurations and monitor NGINX instances.

Configuration Management with the NGINX Development Center:

Use the GUI configuration editor to create new NGINX config files.

Edit existing NGINX configuration files.

Preview and validate configurations before applying them.

Save changes, which are applied dynamically via hot reloading without restarting the NGINX container.

Real-world use case example: Development Proxy for Local Services

In modern application development, NGINX serves as a reverse proxy that’s useful for developers on full-stack or microservices projects. It manages traffic routing between components, mitigates CORS issues in browser-based testing, enables secure local HTTPS setups, and supports efficient workflows by letting multiple services share a single entry point without direct port exposure. This aids local environments for simulating production setups, testing API integrations, or handling real-time features like WebSockets, while avoiding manual restarts and complex configurations. NGINX can proxy diverse local services, including frontend frameworks (e.g., React or Angular apps), backend APIs (e.g., Node.js/Express servers), databases with web interfaces (e.g., phpMyAdmin), static file servers, or third-party tools like mock services and caching layers.

Developers often require a local proxy to route traffic between services (e.g., frontend on port 3000 and backend API) and avoid CORS issues, but manual NGINX setup demands file edits and restarts.

With the Docker Extension: NGINX Development Center

Setup: Install the NGINX Development Center via Docker Extensions Marketplace in Docker Desktop. Ensure local services (e.g., Node.js backend on port 3000) run in separate containers. Open the NGINX Development Center tab.

Containers run separately.

Configuration: In the UI, create a new server. Set the upstream to serve the frontend at localhost. Add a proxy for /api/* to http://backend:3000. Publish via the graphical options. (An illustrative server block follows these steps.)

Server config editing via the Docker Desktop UI

App server configuration

Validation and Testing: Preview the config in the NGINX Development Center UI to check for errors. Test by accessing http://localhost/ and http://localhost/api in a browser; confirm routing to backend.

Deployment: Save and apply changes dynamically (no restart needed). Export config for reuse in a Docker Compose file to orchestrate services.
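
The configuration created in the steps above corresponds roughly to a server block like the one below. This is a sketch, not the extension's exact output: the frontend upstream name and port are assumptions, while the /api/ route to http://backend:3000 follows the configuration step above.

# Placed inside the http context (e.g., a conf.d include); the extension generates and applies this for you.
server {
    listen 80;

    # Frontend served at the root path (service name and port are assumptions)
    location / {
        proxy_pass http://frontend:5173;
    }

    # API requests forwarded to the backend container by its service name; the /api/ prefix is preserved
    location /api/ {
        proxy_pass http://backend:3000;
    }
}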

This use case utilizes the NGINX Development Center’s React UI for proxy configuration, Node.js backend for config generation, and Docker API for seamless networking. Try setting up your own local proxy today by installing the extension and exploring the NGINX Development Center.

Try it out and come visit us

This post has examined how the NGINX Development Center, integrated into Docker Desktop as an extension, tackles developer challenges in managing NGINX configurations for containerized web applications. It provides a UI and backend to simplify dependency management, ensure consistent environments, and support scalable setups. The graphical interface reduces the need for command-line expertise, managing server blocks, routing, and SSL settings, while dynamic updates and real-time previews aid iteration and debugging. Docker volumes help maintain consistency across development, testing, and production.

We’ve highlighted a practical use case, a development proxy for local services, that is feasible within Docker Desktop using the extension. The architecture leverages Docker Desktop’s API and a containerized design to support the workflow. If you’re a developer interested in improving NGINX management, try installing the NGINX Development Center from the Docker Extensions Marketplace and exploring its features. For deeper engagement, visit the GitHub repository to review the codebase, suggest features, or contribute to its development, and consider joining the discussions to connect with others.
Source: https://blog.docker.com/feed/

A practitioner’s view on how Docker enables security by default and makes developers work better

This blog post was written by Docker Captains, experienced professionals recognized for their expertise with Docker. It shares their firsthand, real-world experiences using Docker in their own work or within the organizations they lead. Docker Captains are technical experts and passionate community builders who drive Docker’s ecosystem forward. As active contributors and advocates, they share Docker knowledge and help shape Docker products. To learn more about how to become or to contact a Docker Captain, visit the Docker Captains’ website.

Security has been a primary concern for all types of organizations around the world, through every era of technology. First we had mainframes, then servers, then the cloud, each with its own public and private variations. With each evolution, security concerns grew and compliance became harder.

Once we advanced into the world of distributed systems, security teams had to deal with the faster evolution of the environment. New programming languages, new libraries, new packages, new images, new everything.

For security to be handled correctly, security engineers need a strong, well-designed security architecture that guarantees the developer experience won’t be impacted. And that’s where Docker comes in!

Container Security Basics

Container security covers a wide range of different topics. The field is so broad that there are entire books written exclusively about this subject. But when entering an enterprise environment, we can narrow it down to a few specific topics that need to be prioritized:

Artifacts

Code

Build file (e.g. Dockerfile) creation

Vulnerability management

Culture/Processes

Let’s get a little more in depth with those topics.

Artifacts

That’s the first step to a secure environment. Having trustworthy resources available for your engineers.

To reduce friction between security teams and developers, security engineers have to make secure resources available to developers, so they can simply pull their images, libraries, and dependencies and start using them in their systems.

Docker Hardened Images (which we’ll cover a couple of sections into the article) can help you with that.

In enterprise environments, we usually see a centralized repository for approved artifacts. This helps teams manage resources and the components used in their environments, while also helping developers know where to look when they want something.

Code

Everything really starts with the code that’s written. Having problematic code pushed into production might not seem bad at first, but in the long run it will cause you a lot of trouble.

In security, every surface has to be considered. We can create the most secure build file in the world, have the most robust process for managing assets, have great IAM (Identity and Access Management) workflows, but we are exposed if our code isn’t well written.

Beyond relying only on the developer’s expertise, we need to create guardrails to identify and mitigate problems as they are found. This adds a second layer of protection over all the work that’s done. Having tools in place can catch mistakes developers might not see at first.

Having well trained developers and the right controls in the CI/CD pipelines our code goes through allows us to rest easy at night knowing we’re not sending bad code into production.

A couple of controls that can be applied to the pipelines:

SCA (Software Composition Analysis)

SAST/DAST/IAST

Secret Scanning

Dependency Scanning

Build file

At the beginning of the SDLC (Software Development Life Cycle), our engineers have to create a build file (usually a Dockerfile) that downloads the application’s dependencies and turns it into a container image.

Creating a build file is easy, as it’s just a sequence of steps. You download something (e.g., a package or a library), install it, create a folder or a file, then download the next component, install it, and so on until all the steps have been completed. But even though the default values and settings usually get the job done, they don’t have all the security guardrails and best practices applied by default. Because of that, you need to be careful with what’s being pushed into production.

While coding a build file, it’s crucial to ensure:

That there aren’t any secrets hard coded in it;

That the container is not configured to run as root, which could allow an attacker to elevate their privileges and gain access to the host;

That there aren’t any sensitive files copied to your container (like certificates and credentials).

Taking these steps at the beginning and starting strong guarantees that the rest of the SDLC will be minimally exposed.
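
As a minimal sketch of what those guardrails can look like in a Dockerfile, consider something along these lines; the base image, file layout, and user name are assumptions rather than a prescription:

# Small, well-defined base image
FROM python:3.12-slim

WORKDIR /app

# Copy only what the application needs; no certificates, credentials, or .env files
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ ./src/

# Create an unprivileged user and stop running as root
RUN useradd --create-home appuser
USER appuser

# Secrets are injected at runtime (environment variables or mounted secrets), never hard coded here
CMD ["python", "src/main.py"]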

Vulnerability management

Now we’re starting to move away from the code and the artifacts that engineers deliver.

Vulnerabilities can be found in everything: in technologies, in processes, everywhere. We need good vulnerability management to keep the engine going.

Companies need well-established processes to identify vulnerabilities as they appear, fix them, and, when needed, accept them. Usually, frameworks are developed internally to decide whether a risk is worth taking or should be fixed before moving on.

Those vulnerabilities can be new or already known. They can be in libraries used in the code, in container images used in our systems, and in versions of the solutions used in our environment.

They are everywhere! Be sure to identify them, keep them registered, and fix them when needed.

Culture/Processes

Not only technology presents a risk to enterprise security. Poorly trained engineers and bad processes also represent a real threat to a company’s security structure.

A flaw in a process might result in the wrong code being pushed into production, or the wrong version of a container image being used in a system.

If we put into perspective how people, processes, and technology are related, we can understand why a problem in the vulnerability assessment of a library might cause an entire cluster to be compromised, or why a role wrongly assigned to a user presents a serious risk to the integrity of an entire cloud environment.

These are exaggerated examples, but serve to show us that in tech, everything is connected, even if we don’t see it.

That’s why processes are so important. Solid processes mean we are focused on set outcomes instead of pleasing stakeholders. It’s important to take feedback into consideration and to make adjustments as we move forward, but we need to ensure these processes are followed, even when there isn’t unanimous agreement.

To have successful processes established, we have to:

Design guardrails

Implement steps

Train teams

Repeat

That’s the only way to enable teams effectively!

How Docker protects engineers and companies

Docker has been an ally of software engineers and security teams for a while now. Not only by enabling the success of distributed systems, but also by improving how developers write and containerize their applications.

As the Docker platform evolved, security was treated as the number one priority, just as it is for Docker’s customers.

Today, developers have access to different Docker security solutions in different parts of the platform.

Docker Scout

Docker Scout is a service created by Docker to analyze container images and their layers for known vulnerabilities. It checks against publicly known CVEs and gives the user information about the vulnerabilities in their images. To help with mitigation, Docker Scout also provides a “fixable” value, indicating whether a given vulnerability can be fixed.

This is very useful in a corporate environment because it allows security teams to recognize the risks an image brings to the organization and decide whether that amount of risk is acceptable.

We all love the CLI, but sometimes having a GUI (Graphical User Interface) helps. Docker knows what developers like, and for that reason, Scout is available in both places. Developers can use it to scan their images and see a quick summary in their terminal, or they can use Docker Desktop to see a complete report with links and explanations for the vulnerabilities found in their images.
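
For example, the terminal summary and the detailed report can be generated with commands like these (the image name is a placeholder):

docker scout quickview myapp:latest   # quick, high-level summary of vulnerabilities by severity
docker scout cves myapp:latest        # detailed list of CVEs, including whether each one is fixable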

Docker Scout terminal report

Docker Scout Desktop report

By providing users with those reports, they can make smarter choices when adopting different libraries and packages into their applications and can also work closely with the security teams to provide faster feedback on whether that technology is safe to use or not.

Docker Hardened Images

Now focusing on providing engineers and companies with safe and recommended resources, Docker recently announced Docker Hardened Images (DHI), a list of near-zero CVE images and optimized resources for you to start building your applications.

Even though it’s common for large organizations to have private container registries to store vetted images and dependencies, DHI provides a safer starting point for security teams, since the available resources have been through extensive examination and auditing.

Docker Hardened Images report

DHI is a very helpful resource not only for enterprises but also for independent and open source software developers. Docker-backed images make the internet and the cloud safer, allowing businesses to build trustworthy and reliable platforms for their customers!

From an engineer’s perspective, the true value of Docker Hardened Images is the trust we have in Docker and the value that this security-ready solution brings. Managing image security is hard if you have to do it end to end: it’s hard to keep images ready to use, and the difficulty only increases when developers are requesting newer versions every day. By using Hardened Images, we’re able to provide our end users (developers and engineers) with the latest versions of the most popular solutions available while taking load off the security team.

Final Thoughts

We can approach security in a lot of different ways, but the main thing is: security CANNOT slow down engineers. We need to design our controls so that we cover everything, filling all identified gaps while still allowing developers to deliver code fast.

Guarantee your engineers have the best of both worlds with Docker: security and developer experience (DevEx).

Get in touch with the authors:

Pedro Ignácio:

Linkedin

Blog

Denis Rodrigues:

Linkedin

Blog

Learn more about Docker’s security solutions:

Docker Desktop

Docker Scout

Docker Hardened Images

Source: https://blog.docker.com/feed/

Docker @ Black Hat 2025: CVEs have everyone’s attention, here’s the path forward

CVEs dominated the conversation at Black Hat 2025. Across sessions, booth discussions, and hallway chatter, it was clear that teams are feeling the pressure to manage vulnerabilities at scale. While scanning remains an important tool, the focus is shifting toward removing security debt before it enters the software supply chain. Hardened images, compliance-ready tooling, and strong ecosystem partnerships are emerging as the path forward.

Community Highlights

The Docker community was out in full force. Thank you all! Our booth at Black Hat was busy all week with nonstop conversations, hands-on demos, and a steady stream of limited-edition hoodies and Docker socks spotted around Las Vegas.

The Docker + Wiz evening party brought together the DevSecOps community to swap stories, compare challenges, and celebrate progress toward a more secure software supply chain. It was a great way to hear firsthand what’s top of mind for teams right now.

Across sessions, booth conversations, and the Wiz + Docker party, six key security themes stood out.

A busy Docker booth @ Black Hat 2025

What We Learned: Six Key Themes

Scanning isn’t enough. Teams are looking for secure, zero-CVE starting points that eliminate security debt from the outset.

Security works best when it meets teams where they are. The right hardened distro makes all the difference. For example, Debian for compatibility and Alpine for a minimal footprint.

Flexibility is essential. Customizations to minimal images are a crucial business requirement for enterprises running custom, mission-critical apps.

Hardening is expanding quickly to regulated industries, with FedRAMP-ready variants in high demand.

AI security doesn’t require reinvention; proven container patterns still protect emerging workloads.

“Better together” ecosystems and partnerships still matter. We’re cooking up some great things with Wiz to cut through alert fatigue, focus on exploitable risks, and speed up hardened image adoption.

Technical Sessions Highlights

In our Lunch and Learn event, Docker’s Mike Donovan, Brian Pratt, and Britney Blodget shared how Docker Hardened Images provide a zero-CVE starting point backed by SLAs, SBOMs, and signed provenance. This approach removes the need to choose between usability and security. Debian and Alpine variants meet teams where they are, while customization capabilities allow organizations to add certificates, packages, or configurations and still inherit updates from the base image. Interest in FedRAMP-ready images reinforced that secure-by-default solutions are in demand across highly regulated industries, and can accelerate an organization’s FedRAMP process.

Docker Hardened Images Customization

On the AI Stage, Per Krogslund explored how emerging AI agents raise new questions around trust and governance, but do not require reinventing security from scratch. Proven container security patterns—including isolation, gateway controls, and pre-runtime validation—apply directly to these workloads. Hardened images provide a crucial, trusted launchpad for AI systems too, ensuring a secure and compliant foundation before a single agent is deployed.

Black Hat 2025 is in the books, but the conversation about building secure foundations is just getting started. In response to the fantastic customer feedback, Docker Hardened Images’ roadmap now features more workflow integrations, many more verified images in the catalog, and a lot more. Watch this space!

Ready to eliminate security debt from day one? Docker Hardened Images provide zero-CVE base images, built-in compliance tooling, and the flexibility to fit your workflows. 

Learn more and request access to Docker Hardened Images!

Source: https://blog.docker.com/feed/