Shift-Left Testing with Testcontainers: Catching Bugs Early with Local Integration Tests

Modern software development emphasizes speed and agility, making efficient testing crucial. DORA research reveals that elite teams thrive with both high performance and reliability. They can achieve 127x faster lead times, 182x more deployments per year, 8x lower change failure rates, and, most impressively, 2,293x faster recovery times after incidents. The secret sauce is that they “shift left.”

Shift-Left is a practice that moves integration activities like testing and security earlier in the development cycle, allowing teams to detect and fix issues before they reach production. By incorporating local and integration tests early, developers can prevent costly late-stage defects, accelerate development, and improve software quality. 

In this article, you’ll learn how integration tests can help you catch defects earlier in the development inner loop and how Testcontainers can make them feel as lightweight and easy as unit tests. Finally, we’ll break down the impact that shifting integration tests left has on development velocity and lead time for changes, according to DORA metrics.

Real-world example: Case sensitivity bug in user registration

In a traditional workflow, integration and E2E tests are often executed in the outer loop of the development cycle, leading to delayed bug detection and expensive fixes. For example, if you are building a user registration service where users enter their email addresses, you must ensure that email addresses are handled case-insensitively and are not duplicated when stored.

If case sensitivity is not handled properly and is assumed to be managed by the database, testing a scenario where users can register with duplicate emails differing only in letter case would only occur during E2E tests or manual checks. At that stage, it’s too late in the SDLC and can result in costly fixes.

By shifting testing earlier and enabling developers to spin up real services locally — such as databases, message brokers, cloud emulators, or other microservices — the testing process becomes significantly faster. This allows developers to detect and resolve defects sooner, preventing expensive late-stage fixes.

Let’s dive deep into this example scenario and how different types of tests would handle it.

Scenario

A new developer is implementing a user registration service and preparing for production deployment.

Code Example of the registerUser method

async registerUser(email: string, username: string): Promise<User> {
  const existingUser = await this.userRepository.findOne({
    where: {
      email: email
    }
  });

  if (existingUser) {
    throw new Error("Email already exists");
  }

  // The email is stored exactly as provided, so two addresses that differ
  // only in letter case are treated as different users.
  const user = this.userRepository.create({ email, username });
  return this.userRepository.save(user);
}

The Bug

The registerUser method doesn’t handle case sensitivity itself and relies on the database or the UI framework to handle it by default. In practice, users can therefore register duplicate emails that differ only in letter case (e.g., user@example.com and USER@example.com).

Impact

Authentication issues arise because email case mismatches cause login failures.

Security vulnerabilities appear due to duplicate user identities.

Data inconsistencies complicate user identity management.

Testing method 1: Unit tests. 

These tests only validate the application code itself; whether the email lookup is case-sensitive depends on the database where the SQL queries are executed. Since unit tests don’t run against a real database, they can’t catch issues like case sensitivity.

Testing method 2: End-to-end test or manual checks. 

These verifications will only catch the issue after the code is deployed to a staging environment. While automation can help, detecting issues this late in the development cycle delays feedback to developers and makes fixes more time-consuming and costly.

Testing method 3: Using mocks to simulate database interactions in unit tests.

One approach that could work and allow us to iterate quickly would be to mock the database layer and define a mock repository that responds with the error. Then, we could write a unit test that executes really fast:

test('should prevent registration with same email in different case', async () => {
  const userService = new UserRegistrationService(new MockRepository());
  await userService.registerUser({ email: 'user@example.com', password: 'password123' });
  await expect(userService.registerUser({ email: 'USER@example.com', password: 'password123' }))
    .rejects.toThrow('Email already exists');
});

In the above example, the User service is created with a mock repository that holds an in-memory representation of the database, i.e. a map of users. This mock repository detects whether the same user has been registered twice, probably using the email as a case-insensitive key, and returns the expected error.
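For illustration, a minimal sketch of such a mock repository (hypothetical code of our own, assuming the service only needs findOne, create, and save):

class MockRepository {
  private usersByEmail = new Map<string, { email: string }>();

  async findOne(options: { where: { email: string } }) {
    // The case-insensitivity rule lives here, duplicating what the service
    // or the database should enforce.
    return this.usersByEmail.get(options.where.email.toLowerCase()) ?? null;
  }

  create(data: { email: string }) {
    return data;
  }

  async save(user: { email: string }) {
    this.usersByEmail.set(user.email.toLowerCase(), user);
    return user;
  }
}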

Here, we have to code the validation logic in the mock, replicating what the User service or the database should do. Whenever the user validation needs a change (e.g., disallowing special characters), we have to change the mock too. Otherwise, our tests will assert against an outdated version of the validation. If mocks like this are spread across the entire codebase, that maintenance becomes very hard.

To avoid that, we turn to integration tests that use real representations of the services we depend on. In the above example, using a real database repository is much better than a mock because it gives us more confidence in what we are testing.

Testing method 4: Shift-left local integration tests with Testcontainers 

Instead of using mocks, or waiting for staging to run the integration or E2E tests, we can detect the issue earlier by letting developers run the project’s integration tests locally, in the developer’s inner loop, using Testcontainers with a real PostgreSQL database.

Benefits

Time Savings: Tests run in seconds, catching the bug early.

More Realistic Testing: Uses an actual database instead of mocks.

Confidence in Production Readiness: Ensures business-critical logic behaves as expected.

Example integration test

First, let’s set up a PostgreSQL container using the Testcontainers library and create a userRepository to connect to this PostgreSQL instance:

import { DataSource } from "typeorm";
import { PostgreSqlContainer, StartedPostgreSqlContainer } from "@testcontainers/postgresql";

let container: StartedPostgreSqlContainer;
let dataSource: DataSource;
let userService: UserRegistrationService;

beforeAll(async () => {
  container = await new PostgreSqlContainer("postgres:16").start();

  dataSource = new DataSource({
    type: "postgres",
    host: container.getHost(),
    port: container.getMappedPort(5432),
    username: container.getUsername(),
    password: container.getPassword(),
    database: container.getDatabase(),
    entities: [User],
    synchronize: true,
    logging: true,
    connectTimeoutMS: 5000
  });
  await dataSource.initialize();

  const userRepository = dataSource.getRepository(User);
  userService = new UserRegistrationService(userRepository);
}, 30000);
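For completeness, a matching teardown releases the connection and stops the container once the suite finishes (a small sketch; the variable names match the setup above):

afterAll(async () => {
  await dataSource?.destroy();
  await container?.stop();
});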

Now, with the userService initialized, we can use the registerUser method to test user registration against the real PostgreSQL instance:

test('should prevent registration with same email in different case', async () => {
  await userService.registerUser({ email: 'user@example.com', password: 'password123' });
  await expect(userService.registerUser({ email: 'USER@example.com', password: 'password123' }))
    .rejects.toThrow('Email already exists');
});
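The “Why This Works” list below also mentions verifying the email storage format. A sketch of such a check (our own, hedged example: it assumes registerUser normalizes emails to lowercase, as in the fix sketched after that list, and uses TypeORM’s query builder):

test('stores the email in lowercase', async () => {
  await userService.registerUser({ email: 'Another.User@Example.com', password: 'password123' });

  const saved = await dataSource
    .getRepository(User)
    .createQueryBuilder('u')
    .where('LOWER(u.email) = :email', { email: 'another.user@example.com' })
    .getOneOrFail();

  // Assumes the service stores the normalized form.
  expect(saved.email).toBe('another.user@example.com');
});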

Why This Works

Uses a real PostgreSQL database via Testcontainers

Validates case-insensitive email uniqueness

Verifies email storage format
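For reference, one possible fix is to normalize the email before both the lookup and the insert. This is a sketch of our own, not necessarily the service’s final implementation; a unique index on the column (or PostgreSQL’s citext type) is a complementary database-level guard against races between concurrent registrations:

async registerUser(email: string, username: string): Promise<User> {
  // Normalize once, so lookup and storage always see the same casing.
  const normalizedEmail = email.trim().toLowerCase();

  const existingUser = await this.userRepository.findOne({
    where: { email: normalizedEmail }
  });

  if (existingUser) {
    throw new Error("Email already exists");
  }

  const user = this.userRepository.create({ email: normalizedEmail, username });
  return this.userRepository.save(user);
}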

How Testcontainers helps

Testcontainers modules provide preconfigured implementations for the most popular technologies, making it easier than ever to write robust tests. Whether your application relies on databases, message brokers, cloud services like AWS (via LocalStack), or other microservices, Testcontainers has a module to streamline your testing workflow.

With Testcontainers, you can also mock and simulate service-level interactions or use contract tests to verify how your services interact with others. By combining this approach with local testing against real dependencies, Testcontainers provides a comprehensive solution for local integration testing and eliminates the need for shared integration testing environments, which are often difficult and costly to set up and manage. To run Testcontainers tests, you need a Docker context to spin up containers. Docker Desktop ensures seamless compatibility with Testcontainers for local testing.

Testcontainers Cloud: Scalable Testing for High-Performing Teams

Testcontainers is a great solution to enable integration testing with real dependencies locally. If you want to take testing a step further — scaling Testcontainers usage across teams, monitoring images used for testing, or seamlessly running Testcontainers tests in CI — you should consider using Testcontainers Cloud. It provides ephemeral environments without the overhead of managing dedicated test infrastructure. Using Testcontainers Cloud locally and in CI ensures consistent testing outcomes, giving you greater confidence in your code changes. Additionally, Testcontainers Cloud allows you to seamlessly run integration tests in CI across multiple pipelines, helping to maintain high-quality standards at scale. Finally, Testcontainers Cloud is more secure and ideal for teams and enterprises that have more stringent container security requirements.

Measuring the business impact of shift-left testing

As we have seen, shift-left testing with Testcontainers significantly improves the defect detection rate, shortens detection time, and reduces context switching for developers. Let’s take the example above and compare different production deployment workflows and how early-stage testing impacts developer productivity.

Traditional workflow (shared integration environment)

Process breakdown:

The traditional workflow comprises writing feature code, running unit tests locally, committing changes, and creating pull requests for the verification flow in the outer loop. If a bug is detected in the outer loop, developers have to go back to their IDE, apply a fix, and repeat the whole verification process, starting again with running the unit tests locally.

Figure 1: Workflow of a traditional shared integration environment broken down by time taken for each step.

Lead Time for Changes (LTC): It takes at least 1 to 2 hours to discover and fix the bug (more depending on CI/CD load and established practices). In the best-case scenario, it would take approximately 2 hours from code commit to production deployment. In the worst-case scenario, it may take several hours or even days if multiple iterations are required.

Deployment Frequency (DF) Impact: Since fixing a pipeline failure can take around 2 hours and there’s a daily time constraint (8-hour workday), you can realistically deploy only 3 to 4 times per day. If multiple failures occur, deployment frequency can drop further.

Additional associated costs: Pipeline workers’ runtime minutes and Shared Integration Environment maintenance costs.

Developer Context Switching: Since bug detection occurs about 30 minutes after the code commit, developers lose focus. This increases cognitive load, as they have to constantly context switch, debug, and then context switch again.

Shift-left workflow (local integration testing with Testcontainers)

Process breakdown:

The shift-left workflow is much simpler and starts with writing code and running unit tests. Instead of running integration tests in the outer loop, developers can run them locally in the inner loop to troubleshoot and fix issues. The changes are verified again before proceeding to the next steps and the outer loop. 

Figure 2: Shift-Left Local Integration Testing with Testcontainers workflow broken down by time taken for each step. The feedback loop is much faster and saves developers time and headaches downstream.

Lead Time for Changes (LTC): It takes less than 20 minutes to discover and fix the bug in the developers’ inner loop. Therefore, local integration testing enables at least 65% faster defect identification than testing on a Shared Integration Environment.  

Deployment Frequency (DF) Impact: Since the defect was identified and fixed locally within 20 minutes, the pipeline would run to production, allowing for 10 or more deployments daily.

Additional associated costs: 5 Testcontainers Cloud minutes are consumed.  

Developer Context Switching: No context switching for the developer, as tests running locally provide immediate feedback on code changes and let the developer stay focused within the IDE and in the inner loop.

Key Takeaways

Faster Lead Time for Changes (LTC)
Traditional workflow (shared integration environment): Code changes validated in hours or days. Developers wait for shared CI/CD environments.
Shift-left workflow (local integration testing with Testcontainers): Code changes validated in minutes. Testing is immediate and local.
Improvements and further references: >65% faster Lead Time for Changes (LTC) – Microsoft reduced lead time from days to hours by adopting shift-left practices.

Higher Deployment Frequency (DF)
Traditional workflow: Deployment happens daily, weekly, or even monthly due to slow validation cycles.
Shift-left workflow: Continuous testing allows multiple deployments per day.
Improvements and further references: 2x higher Deployment Frequency – the 2024 DORA Report shows shift-left practices more than double deployment frequency. Elite teams deploy 182x more often.

Lower Change Failure Rate (CFR)
Traditional workflow: Bugs that escape into production can lead to costly rollbacks and emergency fixes.
Shift-left workflow: More bugs are caught earlier in CI/CD, reducing production failures.
Improvements and further references: Lower Change Failure Rate – IBM’s Systems Sciences Institute estimates defects found in production cost 15x more to fix than those caught early.

Faster Mean Time to Recovery (MTTR)
Traditional workflow: Fixes take hours, days, or weeks due to complex debugging in shared environments.
Shift-left workflow: Rapid bug resolution with local testing. Fixes verified in minutes.
Improvements and further references: Faster MTTR – DORA’s elite performers restore service in less than one hour, compared to weeks to a month for low performers.

Cost Savings
Traditional workflow: Expensive shared environments, slow pipeline runs, high maintenance costs.
Shift-left workflow: Eliminates costly test environments, reducing infrastructure expenses.
Improvements and further references: Significant cost savings – ThoughtWorks Technology Radar highlights shared integration environments as fragile and expensive.

Table 1: Summary of key metric improvements from adopting a shift-left workflow with local testing using Testcontainers

Conclusion

Shift-left testing improves software quality by catching issues earlier, reducing debugging effort, enhancing system stability, and overall increasing developer productivity. As we’ve seen, traditional workflows relying on shared integration environments introduce inefficiencies, increasing lead time for changes, deployment delays, and cognitive load due to frequent context switching. In contrast, by introducing Testcontainers for local integration testing, developers can achieve:

Faster feedback loops – Bugs are identified and resolved within minutes, preventing delays.

More reliable application behavior – Testing in realistic environments ensures confidence in releases.

Reduced reliance on expensive staging environments – Minimizing shared infrastructure cuts costs and streamlines the CI/CD process.

Better developer flow state – Easily setting up local test scenarios and re-running them fast for debugging helps developers stay focused on innovation.

Testcontainers provides an easy and efficient way to test locally and catch expensive issues earlier. To scale across teams, developers can use Docker Desktop and Testcontainers Cloud to run unit and integration tests locally, in CI, or in ephemeral environments without the complexity of maintaining dedicated test infrastructure. Learn more about Testcontainers and Testcontainers Cloud in our docs.

Further Reading

Sign up for a Testcontainers Cloud account.

Follow the guide: Mastering Testcontainers Cloud by Docker: streamlining integration testing with containers

Connect on the Testcontainers Slack.

Get started with the Testcontainers guide.

Learn about Testcontainers best practices.

Learn about Spring Boot Application Testing and Development with Testcontainers

Subscribe to the Docker Newsletter.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Desktop 4.39: Smarter AI Agent, Docker Desktop CLI in GA, and Effortless Multi-Platform Builds

Developers need a fast, secure, and reliable way to build, share, and run applications — and Docker makes that easy. With the Docker Desktop 4.39 release, we’re excited to announce several developer productivity enhancements, including Docker AI Agent with Model Context Protocol (MCP) and Kubernetes support, general availability of the Docker Desktop CLI, and --platform flag support for more seamless multi-platform image management.

Docker AI Agent: Smarter, more capable, and now with MCP & Kubernetes

In our last release, we introduced the Docker AI Agent in beta as an AI-powered, context-aware assistant built into Docker Desktop and the CLI. It simplifies container management, troubleshooting, and workflows with guidance and automation. And the response has been incredible: a 9x increase in weekly active users. With each Docker Desktop release, we’re making Docker AI Agent smarter, more helpful, and more versatile across developer container workflows. And if you’re using Docker for GitHub Copilot, you’ll get these upgrades automatically — so you’re always working with the latest and greatest.

Docker AI Agent now supports Model Context Protocol (MCP) and Kubernetes, along with usability upgrades like multiline prompts and easy copying. The agent can now also interact with the Docker Engine to list and clean up containers, images, and volumes. Plus, with access to the Kubernetes cluster, Docker AI Agent can list namespaces, deploy and expose, for example, an Nginx service, and analyze pod logs. 

How Docker AI Agent Uses MCP

MCP is a new standard for connecting AI agents and models to external data and tools. It lets AI-powered apps and agents retrieve data and information from external sources, perform operations with third-party services, and interact with local filesystems, unlocking new and expanded capabilities. MCP introduces the concepts of MCP clients and MCP servers: clients request resources, and servers handle those requests and perform the requested actions.

The Docker AI Agent acts as an MCP client and can interact with MCP servers running as containers. When running the docker ai command in the terminal or in the Docker Desktop AI Agent window to ask a question, the agent looks for a gordon-mcp.yml file in the working directory for a list of MCP servers that should be used when in that context. For example, as a specialist in all things Docker, Docker AI Agent can:

Access the internet via the MCP fetch server.

Create a project on GitHub with the MCP GitHub server.

To make MCP adoption easier and more secure, Docker has collaborated with Anthropic to build container images for the reference implementations of MCP servers, available on Docker Hub under the mcp namespace. Check out our docs for examples of using MCP with Docker AI Agent. 
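For illustration, a minimal gordon-mcp.yml wiring up the two servers mentioned above might look like the following. This is a sketch based on the Compose file format; check the docs and the image READMEs for the exact server images and configuration options:

services:
  fetch:
    image: mcp/fetch
  github:
    image: mcp/github
    # The GitHub server needs credentials; see the mcp/github image docs for the exact variables.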

Containerizing apps in multiple popular languages: More coming soon

Docker AI Agent is also more capable, and can now support the containerization of applications in new programming languages including:

JavaScript/TypeScript applications using npm, pnpm, yarn and bun;

Go applications using Go modules;

Python applications using pip, poetry, and uv;

C# applications using NuGet.

Try it out — just ask, “Can you containerize my application?” 

Once the agent runs through steps such as determining the number of services in the project, the language, package manager, and relevant information for containerization, it’ll generate Docker-related assets. You’ll have an optimized Dockerfile, Docker Compose file, dockerignore file, and a README to jumpstart your application with Docker. 

More language and package manager support will be available soon!

Figure 1: Docker AI Agent helps with containerizing your app and shows steps of its work

No need to write scripts, just ask Docker AI Agent

The Docker AI Agent also comes with built-in capabilities such as interfacing with containers, images, and volumes. Instead of writing scripts, you can simply ask in natural language to perform complex operations. For example, it can combine various servers to carry out complex tasks such as finding and cleaning up unused images.

Figure 2: Finding and optimizing unused images storage with a simple ask to Docker AI Agent

Docker Desktop CLI: Now in GA

With the Docker Desktop 4.37 release, we introduced the Docker Desktop CLI controller in Beta, a command-line tool to manage Docker Desktop. In addition to performing tasks like starting, stopping, restarting, and checking the status of Docker Desktop directly from the command line, developers can also print logs and update to the latest version of Docker Desktop. 

Docker meets developers where they work — whether in the CLI or GUI. With the Docker Desktop CLI, developers can seamlessly switch between GUI and command-line workflows, tailoring their workflows to their needs. 

This feature lets you automate Docker Desktop operations in CI/CD pipelines, expedite troubleshooting directly from the terminal, and create a smoother, distraction-free workflow. IT admins also benefit from this feature; for example, they can use these commands in automation scripts to manage updates.
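For example, the day-to-day lifecycle operations described above look roughly like this in a terminal (exact subcommand names may vary by version; run docker desktop --help for the full list):

docker desktop start     # start Docker Desktop
docker desktop status    # check whether it is running
docker desktop restart   # restart it
docker desktop logs      # print log output
docker desktop update    # update to the latest version
docker desktop stop      # stop it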

Improve multi-platform image management with the new --platform flag

Containerized applications often need to run across multiple architectures, making efficient platform-specific image management essential. To simplify this, we’ve introduced a --platform flag for docker save, docker load, and docker history. This addition will let developers explicitly select and manage images for specific architectures like linux/amd64, linux/arm64, and more.

The new --platform flag gives you full control over platform variants when saving or loading. For example, exporting only the linux/arm64 version of an image is now as simple as running:

docker save --platform linux/arm64 -o my-image.tar my-app:latest

Similarly, docker load --platform linux/amd64 ensures that only the amd64 variant is imported from a multi-architecture archive, reducing ambiguity and improving cross-platform workflows. For debugging and optimization, docker history --platform provides detailed insights into the build history of a specific architecture.
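Assuming the same flag placement as the save example above, those commands look roughly like:

docker load --platform linux/amd64 -i my-image.tar
docker history --platform linux/arm64 my-app:latest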

These enhancements streamline multi-platform development by giving developers full control over how they build, store, and distribute images. 

Head over to our history, load, and save documentation to learn more! 

Wrapping up 

Docker Desktop 4.39 reinforces our commitment to streamlining the developer experience. With Docker AI Agent’s expanded support for MCP, Kubernetes, built-in capabilities of interacting with containers, and more, developers can simplify and customize their workflow. They can also seamlessly switch between the GUI and command-line, while creating automations with the Docker Desktop CLI. Plus, with the new --platform flag, developers now have full control over how they build, store, and distribute images.

Less friction, more flexibility — we can’t wait to see what you build next!

Authenticate and update today to receive your subscription level’s newest Docker Desktop features.

Learn more

Subscribe to the Docker Navigator Newsletter.

Learn about our sign-in enforcement options.

New to Docker? Create an account. 

Have questions? The Docker community is here to help.


Docker Engine v28: Hardening Container Networking by Default

Docker simplifies containerization by removing runtime complexity and making app development seamless. With Docker Engine v28, we’re taking another step forward in security by ensuring containers aren’t unintentionally accessible from local networks. This update isn’t about fixing a single vulnerability — it’s about security hardening so your containers stay safe. 

What happened?

When you run a container on the default Docker “bridge” network, Docker sets up NAT (Network Address Translation) rules using your system’s firewall (via iptables). For example, the following command forwards traffic from port 8080 on your host to port 80 in the container. 

docker run -d -p 8080:80 my-web-app

However, if your host’s FORWARD chain (in the iptables filter table) is permissive (i.e., ACCEPT by default) and net.ipv4.ip_forward is enabled, unpublished container ports could also be remotely accessible under certain conditions.

This only affects hosts on the same physical/link-layer network as the Docker host. In multi-tenant LAN environments or other shared local networks, someone connected on an RFC1918 subnet (such as 192.168.x.x or 10.x.x.x) could reach unpublished container ports if they knew (or guessed) the container’s IP address.
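To check whether a host is in this permissive configuration, you can inspect both settings directly on a typical Linux host:

sysctl net.ipv4.ip_forward              # 1 means kernel forwarding is enabled
sudo iptables -S FORWARD | head -n 1    # prints the chain policy, e.g. -P FORWARD ACCEPT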

Who’s affected?

This behavior only affects Linux users running Docker versions earlier than 28.0.0 with iptables. Docker Desktop is not affected.

If you installed Docker on a single machine and used our defaults, without manually customizing your firewall settings, you’re likely unaffected by upgrading. However, you could be impacted if:

You deliberately set your host’s FORWARD chain to ACCEPT and rely on accessing containers by their IP from other machines on the LAN.

You bound containers to 127.0.0.1 or another loopback interface but still route external traffic to them using advanced host networking tricks.

You require direct container access from subnets, VLANs, or other broadcast domains without explicitly publishing ports (i.e., no -p flags).

If any of these exceptions apply to you, you might notice that containers previously reachable by direct IP now appear blocked unless you opt out or reconfigure your networking.

What’s the impact?

This exposure would’ve required being on the same local network or otherwise having route-level access to the container’s RFC1918 IP range. It did not affect machines across the public internet. However, a malicious user could discover unpublished container ports, within LAN settings or corporate environments, and connect to them. 

For instance, they might add a custom route to the container’s subnet, using the following command:

ip route add 172.17.0.0/16 via 192.168.0.10

From there, if 192.168.0.10 is your Docker host with a permissive firewall, the attacker could send packets straight to 172.17.0.x (the container’s IP).

What are my next steps?

If you’ve been impacted, we recommend taking the following three steps:

1. Upgrade to Docker Engine 28.0

Docker Engine 28.0 now drops traffic to unpublished ports by default. 

This “secure by default” approach prevents containers from being unexpectedly accessible on the LAN. Most users won’t notice a difference: published ports will continue to function as usual (-p 8080:80), and unpublished ports remain private as intended.

2. Decide if you need the old behavior (opt-out)

If you connect to containers over the LAN without publishing ports, you have a couple of options:

Option 1: Disable Docker’s DROP policy. In /etc/docker/daemon.json, add the following:

{
  "ip-forward-no-drop": true
}

Or you can run Docker with the --ip-forward-no-drop flag. This preserves a globally open FORWARD chain. But keep in mind that Docker still drops traffic for unpublished ports using separate rules. See Option 2 for a fully unprotected alternative.

Option 2: Create a “nat-unprotected” network

docker network create -d bridge \
  -o com.docker.network.bridge.gateway_mode_ipv4=nat-unprotected \
  my_unprotected_net

3. Consider custom iptables management

Advanced users with complex network setups can manually configure iptables to allow exactly the traffic they need. But this route is only recommended if you’re comfortable managing firewall rules.
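For example, if you only want one trusted LAN subnet to reach containers on the default bridge network, a rule in the DOCKER-USER chain (the chain Docker reserves for user-managed rules and evaluates before its own) could look like this; the subnet values are illustrative and should be adapted to your network:

sudo iptables -I DOCKER-USER -s 192.168.0.0/24 -d 172.17.0.0/16 -j ACCEPT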

Technical details

In previous Docker Engine versions, Docker’s reliance on a permissive FORWARD chain meant that containers on the default bridge network could be reached if:

net.ipv4.ip_forward was enabled (often auto-enabled by Docker if it wasn’t already).

The system-wide FORWARD chain was set to ACCEPT.

Another machine on the same LAN (or an attached subnet) routed traffic to the container’s IP address.

In Docker 28.0, we now explicitly drop unsolicited inbound traffic to each container’s internal IP unless that port was explicitly published (-p or --publish). This doesn’t affect local connections from the Docker host itself, but it does block remote LAN connections to unpublished ports.

Attacks: a real-world example

Suppose you have two hosts on the same RFC1918 subnet:

Attacker at 10.0.0.2

DockerHost at 10.0.0.3

DockerHost runs a container with IP 172.17.0.2, and Docker’s firewall policy is effectively “ACCEPT.” In this situation, an attacker could run the following command:

ip route add 172.17.0.2/32 via 10.0.0.3
nc 172.17.0.2 3306

If MySQL was listening on port 3306 in the container (unpublished), Docker might still forward that traffic. The container sees a connection from 10.0.0.2, and no authentication is enforced by Docker’s network stack alone. 

Mitigations in Docker Engine 28.0

By enforcing a drop rule for unpublished ports, Docker 28.0 now prevents the above scenario by default.

1. Default drop for unpublished ports

Ensures that local traffic to container IPs is discarded unless explicitly published.

2. Targeted adjustments to FORWARD policy

Originally, if Docker had to enable IP forwarding, it would set the FORWARD policy to DROP by default. Now, Docker only applies that when necessary and extends the same logic to IPv6. You can opt out with --ip-forward-no-drop or in the config file.

3. Unpublished ports stay private

The container remains accessible from the host itself, but not from other devices on the LAN (unless you intentionally choose “nat-unprotected”).

Why we’re making this change now

Some users have raised concerns about local-network exposure for years (see issues like #14041 and #22054). But there’s been a lot of change since then:

Docker was simpler: Use cases often revolved around single-node setups where any extra exposure was mitigated by typical dev/test workflows.

Ecosystem evolution: Overlay networks, multi-host orchestration, and advanced routing became mainstream. Suddenly, scenarios once relegated to specialized setups became common.

Security expectations: Docker now underpins critical workloads in complex environments. Everyone benefits from safer defaults, even if it means adjusting older assumptions.

By introducing these changes in Docker Engine 28.0, we align with today’s best practices: don’t expose anything without explicit user intent. It’s a shift away from relying on users to configure their own firewalls if they want to lock things down.

Backward compatibility and the path forward

These changes are not backward compatible for users who rely on direct container IP access from a local LAN. For the majority, the new defaults lock down containers more securely without breaking typical docker run -p scenarios.

Still, we strongly advise upgrading to 28.0.1 or later so you can benefit from:

Safer defaults: Unpublished ports remain private unless you publish or explicitly opt out.

Clearer boundaries: Published vs. unpublished ports are now unambiguously enforced by iptables rules.

Easier management: Users can adopt the new defaults without becoming iptables experts.

Quick upgrade checklist

Before you upgrade, here’s a recommended checklist to run through to make sure you’re set up for success with the latest release:

Examine current firewall settings: Run iptables -L -n and ip6tables -L -n. If you see ACCEPT in the FORWARD chain, and you depend on that for multi-subnet or direct container access, plan accordingly.

Test in a controlled environment: Spin up Docker Engine 28.0 in a staging environment. Attempt to reach containers that you previously accessed directly. Then, verify which connections still work and which are now blocked.

Decide on opt-outs: If you do need the old behavior, set "ip-forward-no-drop": true or use a "nat-unprotected" network. Otherwise, enjoy the heightened security defaults.

Monitor logs and metrics: Watch for unexpected connection errors or service downtime. If something breaks, check whether it’s caused by Docker’s new drop rules before rolling back any changes.

Conclusion

By hardening default networking rules in Docker Engine 28.0, we’re reducing accidental container exposure on local networks. Most users can continue without disruption and can just enjoy the extra peace of mind. But if you rely on older behaviors, you can opt out or create specialized networks that bypass these rules.

Ready to upgrade? Follow our official installation and upgrade instructions to get Docker Engine 28.0 on your system today.

We appreciate the feedback from the community and encourage you to reach out if you have questions or run into surprises after upgrading. You can find us on GitHub issues, our community forums, or your usual support channels.

Learn more

Review the Docker Engine v28 release notes

Read Docker’s iptables documentation

Issues that shaped these changes:

#14041

#22054

#48815


Revisiting Docker Hub Policies: Prioritizing Developer Experience

At Docker, we are committed to ensuring that Docker Hub remains the best place for developers, engineering teams, and operations teams to build, share, and collaborate. As part of this, we previously announced plans to introduce image pull consumption fees and storage-based billing. After further evaluating how developers use Docker Hub and what will best support the ecosystem, we have refined our approach—one that prioritizes developer experience and enables developers to scale with confidence while reinforcing Docker Hub as the foundation of the cloud-native ecosystem.

What’s Changing?

We’re making important updates to our previously announced pull limits and storage policies to ensure Docker Hub remains a valuable resource for developers:

No More Pull Count Limits or Consumption Charges – We’re cancelling pull consumption charges entirely. Our focus is on making Docker Hub the best place for developers to build, share, and collaborate—ensuring teams can scale with confidence.

Unlimited Pull rates for Paid Users (As Announced Earlier) – Starting April 1, 2025, all paid Docker subscribers will have unlimited image pulls (with fair use limits) to ensure a seamless experience.

Updated Pull Rate Limits for Free & Unauthenticated Users – To ensure a reliable and seamless experience for all users, we are updating authenticated and free pull limits:

Unauthenticated users: Limited to 10 pulls per hour (as announced previously)

Free authenticated users: Increased to 100 pulls per hour (up from 40 pulls / hour)

System accounts & automation: As previously shared, automated systems and service accounts can easily authenticate using Personal Access Tokens (PATs) or Organizational Access Tokens (OATs), ensuring access to higher pull limits and a more reliable experience for automated authenticated pulls.

Storage Charges Delayed Indefinitely – Previously, we announced plans to introduce storage-based billing, but we have decided to indefinitely delay any storage charges. Instead, we are focusing on delivering new tools that will allow users to actively manage their storage usage. Once these tools are available, we will assess storage policies in the best interest of our users. If and when storage charges are introduced, we will provide a six-month notice, ensuring teams have ample time to adjust.

Why This Matters

The Best Place to Build and Share – Docker Hub remains the world’s leading container registry, trusted by over 20 million developers and organizations. We’re committed to keeping it the best place to distribute and consume software.

Growing the Ecosystem – We’re making these changes to support more developers, teams, and businesses as they scale, reinforcing Docker Hub as the foundation of the cloud-native world.

Investing in the Future – Our focus is on delivering more capabilities that help developers move faster, from better storage management to strengthening security to better protect the software supply chain.

Committed to Developers – Every decision we make is about strengthening the platform and enabling developers to build, share, and innovate without unnecessary barriers.

We appreciate your feedback, and we’re excited to keep evolving Docker Hub to meet the needs of developers and teams worldwide. Stay tuned for more updates, and as always—happy building! 

Powered by Docker: Streamlining Engineering Operations as a Platform Engineer

Powered by Docker is a series of blog posts featuring use cases and success stories from Docker partners and practitioners. This story was contributed by Neal Patel from Siimpl.io. Neal has more than ten years of experience developing software and is a Docker Captain.

Background

As a platform engineer at a mid-size startup, I’m responsible for identifying bottlenecks and developing solutions to streamline engineering operations to keep up with the velocity and scale of the engineering organization. In this post, I outline some of the challenges we faced with one of our clients, how we addressed them, and provide guides on how to tackle these challenges at your company.

One of our clients faced critical engineering challenges, including poor synchronization between development and CI/CD environments, slow incident response due to inadequate rollback mechanisms, and fragmented telemetry tools that delayed issue resolution. Siimpl implemented strategic solutions to enhance development efficiency, improve system reliability, and streamline observability, turning obstacles into opportunities for growth.

Let’s walk through the primary challenges we encountered.

Inefficient development and deployment

Problem: We lacked parity between developer tooling and CI/CD tooling, which made it difficult for engineers to test changes confidently.

Goal: We needed to ensure consistent environments across development, testing, and production.

Unreliable incident response

Problem: If a rollback was necessary, we did not have the proper infrastructure to accomplish this efficiently.

Goal: We wanted to revert to stable versions in case of deployment issues easily.

Lack of comprehensive telemetry

Problem: Our SRE team created tooling to simplify collecting and publishing telemetry, but distribution and upgradability were poor. Also, we found adoption to be extremely low.

Goal: We needed to standardize how we configure telemetry collection, and simplify the configuration of auto-instrumentation libraries so the developer experience is turnkey.

Solution: Efficient development and deployment

CI/CD configuration with self-hosted GitHub runners and Docker Buildx

We had a requirement for multi-architecture support (arm64/amd64), which we initially implemented in CI/CD with Docker Buildx and QEMU. However, we noticed an extreme dip in performance due to the emulated architecture build times.

We were able to reduce build times by almost 90% by ditching QEMU (emulated builds), and targeting arm64 and amd64 self-hosted runners. This gave us the advantage of blazing-fast native architecture builds, but still allowed us to support multi-arch by publishing the manifest after-the-fact. 
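The “publish the manifest after-the-fact” step can be done with docker buildx imagetools, which stitches the per-architecture images into a single multi-arch tag (the registry and tag names here are illustrative):

docker buildx imagetools create \
  -t registry.example.com/myapp:1.0.0 \
  registry.example.com/myapp:1.0.0-amd64 \
  registry.example.com/myapp:1.0.0-arm64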

Here’s a working example of the solution we will walk through: https://github.com/siimpl/multi-architecture-cicd

If you’d like to deploy this yourself, there’s a guide in the README.md.

Prerequisites

This project uses the following tools:

Docker Build Cloud (included in all paid Docker subscriptions)

DBC cloud driver

GitHub/GitHub Actions

A managed container orchestration service like Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or  Google Kubernetes Engine (GKE)

Terraform

Helm

Because this project uses industry-standard tooling like Terraform, Kubernetes, and Helm, it can be easily adapted to any CI/CD or cloud solution you need.

Key features

The secret sauce of this solution is provisioning the self-hosted runners in a way that allows our CI/CD to specify which architecture to execute the build on.

The first step is to provision two node pools — an amd64 node pool and an arm64 node pool, which can be found in the aks.tf. In this example, the node_count is fixed at 1 for both node pools but for better scalability/flexibility you can also enable autoscaling for a dynamic pool.

resource "azurerm_kubernetes_cluster_node_pool" "amd64" {
  name                  = "amd64pool"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.cicd.id
  vm_size               = "Standard_DS2_v2" # amd64 (x86_64) instance
  node_count            = 1
  os_type               = "Linux"
  tags = {
    environment = "dev"
  }
}

resource "azurerm_kubernetes_cluster_node_pool" "arm64" {
  name                  = "arm64pool"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.cicd.id
  vm_size               = "Standard_D4ps_v5" # arm64 instance
  node_count            = 1
  os_type               = "Linux"
  tags = {
    environment = "dev"
  }
}

Next, we need to update the self-hosted runners’ values.yaml to have a configurable nodeSelector. This will allow us to deploy one runner scale set to the arm64pool and one to the amd64pool.
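For example, if the runner scale sets are deployed with a Helm chart whose template section follows a Kubernetes pod spec (as in actions-runner-controller-style charts), the arm64 runner values could pin pods to the arm64 pool like this; the exact keys depend on the chart you use and are an assumption here:

template:
  spec:
    nodeSelector:
      kubernetes.io/arch: arm64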

Once the Terraform resources are successfully created, the runners should be registered to the organization or repository you specified in the GitHub config URL. We can now update the REGISTRY values for the emulated-build and the native-build.

After creating a pull request with those changes, navigate to the Actions tab to witness the results.

You should see two jobs kick off: one using the emulated build path with QEMU, and the other using the self-hosted runners for native-node builds. Depending on cache hits and the Dockerfile being built, the performance improvement can be up to 90%. Even with this substantial improvement, utilizing Docker Build Cloud can improve performance by up to 95%. More importantly, you can reap the benefits during development builds! Take a look at the docker-build-cloud.yml workflow for more details. All you need is a Docker Build Cloud subscription and a cloud driver to take advantage of the improved pipeline.

Getting Started

1. Generate GitHub PAT

2. Update the variables.tf

3. Initialise AZ CLI

4. Deploy Cluster

5. Create a PR to validate pipelines

README.md for reference

Reliable Incident Response

Leveraging SemVer Tagged Containers for Easy Rollback

Recognizing that deployment issues can arise unexpectedly, we needed a mechanism to quickly and reliably rollback production deployments. Below is an example workflow for properly rolling back a deployment based on the tagging strategy we implemented above.

Rollback Process:

In case of a problematic build, deployment was rolled back to a previous stable version using the tagged images.

AWS CLI commands were used to update ECS services with the desired image tag:

on:
  workflow_call:
    inputs:
      image-version:
        required: true
        type: string

jobs:
  rollback:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Rollback to previous version
        run: |
          # The desired rollback image is ${{ secrets.REGISTRY }}/myapp:${{ inputs.image-version }}.
          # aws ecs update-service takes a task definition rather than an image, so the service is
          # pointed at the revision registered for that image (the my-task:<revision> mapping is illustrative).
          aws ecs update-service \
            --cluster my-cluster \
            --service my-service \
            --task-definition my-task:${{ inputs.image-version }} \
            --force-new-deployment

Comprehensive Telemetry

Configuring Sidecar Containers in ECS for Aggregating/Publishing Telemetry Data (OTEL)

As we adopted OpenTelemetry to standardize observability, we quickly realized that adoption was one of the toughest hurdles. As a team, we decided to bake as much configuration as possible into the infrastructure (Terraform modules) so that we could easily distribute and maintain observability instrumentation.

Sidecar Container Setup:

Sidecar containers were defined in the ECS task definitions to run OpenTelemetry collectors.

The collectors were configured to aggregate and publish telemetry data from the application containers.

Task Definition Example:

{
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "myapp:1.0.0",
      "essential": true,
      "portMappings": [{ "containerPort": 8080 }]
    },
    {
      "name": "otel-collector",
      "image": "otel/opentelemetry-collector:latest",
      "essential": false,
      "portMappings": [{ "containerPort": 4317 }],
      "environment": [
        { "name": "OTEL_RESOURCE_ATTRIBUTES", "value": "service.name=myapp" }
      ]
    }
  ],
  "family": "my-task"
}

Configuring Multi-Stage Dockerfiles for OpenTelemetry Auto-Instrumentation Libraries (Node.js)

At the application level, configuring auto-instrumentation posed a challenge since most applications varied in their build process. By leveraging multi-stage Dockerfiles, we were able to standardize the way auto-instrumentation libraries are initialized across microservices. We were primarily a Node.js shop, so below is an example Dockerfile for that.

Multi-Stage Dockerfile:

The Dockerfile is divided into stages to separate the build environment from the final runtime environment, ensuring a clean and efficient image.

OpenTelemetry libraries are installed in the build stage and copied to the runtime stage:

# Stage 1: Build stage
FROM node:20 AS build
WORKDIR /app
COPY package.json package-lock.json ./
# package.json defines the otel libs (e.g., @opentelemetry/node, @opentelemetry/tracing)
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Runtime stage
FROM node:20
WORKDIR /app
COPY --from=build /app /app
CMD ["node", "dist/index.js"]

Results

By addressing these challenges, we were able to reduce build times by ~90%, which alone dropped our DORA metrics for Lead Time for Changes and Time to Restore by ~50%. With the rollback strategy and telemetry changes, we were able to reduce our Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR) by ~30%. We believe this could reach 50-60% with alert tuning and the addition of runbooks (automated and manual).

Enhanced Development Efficiency: Consistent environments across development, testing, and production stages sped up the development process, with roughly 90% faster build times from the native-architecture solution.

Reliable Rollbacks: Quick and efficient rollbacks minimized downtime and maintained system integrity.

Comprehensive Telemetry: Sidecar containers enabled detailed monitoring of system health and security without impacting application performance, and were baked right into the infrastructure developers were deploying. Auto-instrumentation of the application code was simplified drastically with the adoption of our Dockerfiles.

Siimpl: Transforming Enterprises with Cloud-First Solutions

With Docker at the core, Siimpl.io’s solutions demonstrate how teams can build faster, more reliable, and scalable systems. Whether you’re optimizing CI/CD pipelines, enhancing telemetry, or ensuring secure rollbacks, Docker provides the foundation for success. Try Docker today to unlock new levels of developer productivity and operational efficiency.

Learn more from our website or contact us at solutions@siimpl.io

Introducing the Beta Launch of Docker’s AI Agent, Transforming Development Experiences

For years, Docker has been an essential partner for developers, empowering everyone from small startups to the world’s largest enterprises. Today, AI is transforming organizations across industries, creating opportunities for those who embrace it to gain a competitive edge. Yet, for many teams, the question of where to start and how to effectively integrate AI into daily workflows remains a challenge. True to its developer-first philosophy, Docker is here to bridge that gap.

We’re thrilled to introduce the beta launch of Docker AI Agent (also known as Project: Gordon)—an embedded, context-aware assistant seamlessly integrated into the Docker suite. Available within Docker Desktop and CLI, this innovative agent delivers tailored guidance for tasks like building and running containers, authoring Dockerfiles and Docker-specific troubleshooting—eliminating disruptive context-switching. By addressing challenges precisely when and where developers encounter them, Docker AI Agent ensures a smoother, more productive workflow.

As the AI Agent evolves, enterprise teams will unlock even greater capabilities, including customizable features that streamline collaboration, enhance security, and help developers work smarter. With the Docker AI Agent, we’re making Docker even easier and more effective to use than it has ever been — AI accessible, actionable, and indispensable for developers everywhere.

How Docker’s AI Agent Simplifies Development Challenges  

Developing in today’s fast-paced tech landscape is increasingly complex, with developers having to learn an ever growing number of tools, libraries and technologies.

By integrating a GenAI Agent into Docker’s ecosystem, we aim to provide developers with a powerful assistant that can help them navigate these complexities. 

The Docker AI Agent helps developers accelerate their work, providing real-time assistance, actionable suggestions, and automations that remove many of the manual tasks associated with containerized application development. Delivering the most helpful, expert-level guidance on Docker-related questions and technologies, Gordon serves as a powerful support system for developers, meeting them exactly where they are in their workflow. 

If you’re a developer who favors graphical interfaces, Docker Desktop AI UI will help you navigate container running issues, image size management and more generic Dockerfile oriented questions. If you’re a command line interface user, you can call, and share context with the agent directly in your favorite terminal.

So what can Docker’s AI Agent do today? 

We’re delivering an expert assistant for every Docker-related concept and technology, whether it’s getting started, optimizing an existing Dockerfile or Compose file, or understanding Docker technologies in general. With Docker AI Agent, you also have the ability to delegate actions while maintaining full control and review over the process.

As a first example, if you want to run a container from an image, our agent can suggest the most appropriate docker run command tailored to your needs. This eliminates guesswork and the need to search Docker Hub, saving you time and effort. The result combines a custom prompt, live data from Docker Hub, Docker container expertise, and private usage insights unique to Docker Inc.

We’ve intentionally designed the output to be concise and actionable, avoiding the overwhelming verbosity often associated with AI-generated commands. We also provide sources for most of the AI agent recommendations, pointing directly to our documentation website. Our goal is to continuously refine this experience, ensuring that Docker’s AI Agent always provides the best possible command based on your specific local context.

Besides helping you run containers, the Docker AI Agent can today:

Explain, rate, and optimize Dockerfiles, leveraging the latest version of Docker.

Help you run containers in an effective, concise way, leveraging the local context (checking ports already in use or existing volumes).

Answer any Docker-related question using the latest version of our documentation for the whole tool suite, covering all Docker tools and technologies.

Containerize a software project, helping you run your software in containers.

Help with Docker-related GitHub Actions.

Suggest fixes when a container fails to start in Docker Desktop.

Provide contextual help for containers, images, and volumes.

Augment its answers with per-directory MCP servers (see docs).

For the Node.js experts: in the screenshot above, the AI recommends Node 20.12, which is not the latest version but the one it found in the project’s package.json.

With every future version of Docker Desktop and thanks to the feedback that you provide, the agent will be able to do so much more in the future.

How can you try Docker AI Agent? 

This first beta release of Docker AI Agent is now progressively available for all signed-in users*. By default, the Docker AI agent is disabled. To enable it you will need to follow the steps below. Here’s how to get started:

Install or update to the latest release of Docker Desktop 4.38

Enable Docker AI in Docker Desktop Settings -> Features in Development

For the best experience, ensure the Docker terminal is enabled by going to Settings → General

Apply Changes 

* If you’re a business subscriber, your administrator needs to enable the Docker AI Agent for the organization first. This can be done through Settings Management. If this is your case, feel free to contact us through support for further information.

Docker Agent’s Vision for 2025

By 2025, we aim to expand the agent’s capabilities with features like customizing your experience with more context from your registry, enhanced GitHub Copilot integrations, and deeper presence across the development tools you already use. With regular updates and your feedback, Docker AI Agent is being built to become an indispensable part of your development process.

For now this beta is the start of an exciting evolution in how we approach developer productivity. Stay tuned for more updates as we continue to shape a smarter, more streamlined way to build, secure, and ship applications. We want to hear from you, if you like or want more information you can contact us.

Learn more

Learn more about Docker’s AI Agent in our docs


Docker Desktop 4.38: New AI Agent, Multi-Node Kubernetes, and Bake in GA

At Docker, we’re committed to simplifying the developer experience and empowering enterprises to scale securely and efficiently. With the Docker Desktop 4.38 release, teams can look forward to improved developer productivity and enterprise governance. 

We’re excited to announce the General Availability of Bake, a powerful feature for optimizing build performance, as well as multi-node Kubernetes testing to help teams “shift left.” We’re also expanding availability for several enterprise features designed to boost operational efficiency. And last but not least, Docker AI Agent (formerly Project: Agent Gordon) is now in Beta, delivering intelligent, real-time Docker-related suggestions across Docker CLI, Desktop, and Hub. It’s here to help developers navigate Docker concepts, fix errors, and boost productivity.

Docker’s AI Agent boosts developer productivity  

We’re thrilled to introduce Docker AI Agent (also known as Project: Agent Gordon) — an embedded, context-aware assistant seamlessly integrated into the Docker suite. Available within Docker Desktop and CLI, this innovative agent delivers real-time, tailored guidance for tasks like container management and Docker-specific troubleshooting — eliminating disruptive context-switching. Docker AI agent can be used for every Docker-related concept and technology, whether you’re getting started, optimizing an existing Dockerfile or Compose file, or understanding Docker technologies in general. By addressing challenges precisely when and where developers encounter them, Docker AI Agent ensures a smoother, more productive workflow. 

The first iteration of Docker’s AI Agent is now available in Beta for all signed-in users. The agent is disabled by default, so user activation is required. Read more about Docker’s New AI Agent and how to use it to accelerate developer velocity here. 

Figure 1: Asking questions to Docker AI Agent in Docker Desktop

Simplify build configurations and boost performance with Docker Bake

Docker Bake is an orchestration tool that simplifies and speeds up Docker builds. After launching as an experimental feature, we’re thrilled to make it generally available with exciting new enhancements.

While Dockerfiles are great for defining build steps, teams often juggle docker build commands with various options and arguments — a tedious and error-prone process. Bake changes the game by introducing a declarative file format that consolidates all options and image dependencies (also known as targets) in one place. No more passing flags to every build command! Plus, Bake’s ability to parallelize and deduplicate work ensures faster and more efficient builds.

Key benefits of Docker Bake

Simplicity: Abstract complex build configurations into one simple command.

Flexibility: Write build configurations in a declarative syntax, with support for custom functions, matrices, and more.

Consistency: Share and maintain build configurations effortlessly across your team.

Performance: Bake parallelizes multi-image workflows, enabling faster and more efficient builds.

Developers can simplify multi-service builds by integrating Bake directly into their Compose files — Bake supports Compose files natively. It enables easy, efficient building of multiple images from a single repository with shared configurations. Plus, it works seamlessly with Docker Build Cloud locally and in CI. With Bake-optimized builds as the foundation, developers can achieve more efficient Docker Build Cloud performance and faster builds.
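As a quick illustration, a minimal docker-bake.hcl that builds one image for two platforms might look like this (the target and tag names are illustrative); running docker buildx bake then builds every target in the default group in parallel:

group "default" {
  targets = ["app"]
}

target "app" {
  context    = "."
  dockerfile = "Dockerfile"
  tags       = ["myapp:latest"]
  platforms  = ["linux/amd64", "linux/arm64"]
}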

Learn more about streamlining build configurations, boosting performance, and improving team workflows with Bake in our announcement blog. 

Shift Left with Multi-Node Kubernetes testing in Docker Desktop

In today’s complex production environments, “shifting left” is more essential than ever. By addressing concerns earlier in the development cycle, teams reduce costs and simplify fixes, leading to more efficient workflows and better outcomes. That’s why we continue to bring new features and enhancements that integrate feedback directly into the developer’s inner loop.

Docker Desktop now includes Multi-Node Kubernetes integration, enabling easier and more extensive testing directly on developers’ machines. While single-node clusters allow for quick verification of app deployments, they fall short when it comes to testing resilience and handling the complex, unpredictable issues of distributed systems. To tackle this, we’re updating our Kubernetes distribution with kind — a lightweight, fast, and user-friendly solution for local testing and multi-node cluster simulation.

Figure 2: Selecting Kubernetes version and cluster number for testing

Key Benefits:

Multi-node cluster support: Replicate a more realistic production environment to test critical features like node affinity, failover, and networking configurations.

Multiple Kubernetes versions: Easily test across different Kubernetes versions, which is a must for validating migration paths.

Up-to-date maintenance: Since kind is an actively maintained open-source project, developers can update to the latest version on demand without waiting for the next Docker Desktop release.
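As a hedged illustration of what a local multi-node cluster unlocks (the node name below is a placeholder; use whatever kubectl get nodes reports in your setup), you can list the nodes and then drain one to watch workloads reschedule onto the others:

kubectl get nodes -o wide

kubectl drain desktop-worker --ignore-daemonsets --delete-emptydir-data

kubectl get pods -o wide --watch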

Head over to our documentation to discover how to use multi-node Kubernetes clusters for local testing and simulation.

General availability of administration features for Docker Business subscription

With the Docker Desktop 4.36 release, we introduced Beta enterprise admin tools to streamline administration, improve security, and enhance operational efficiency. The feedback from our Early Access Program customers has been overwhelmingly positive. 

For instance, enforcing sign-in with macOS configuration files and across multiple organizations makes deployment easier and more flexible for large enterprises. Also, the PKG installer simplifies managing large-scale Docker Desktop deployments on macOS by eliminating the need to convert DMG files into PKG first.

Today, the following features are available to all Docker Business customers:

Enforce sign-in with macOS configuration profiles 

Enforce sign-in for more than one organization at a time 

Deploy Docker Desktop for Mac in bulk with the PKG installer 

Looking ahead, Docker is dedicated to continuing to expand its enterprise administration capabilities. Stay tuned for more announcements!

Wrapping up 

Docker Desktop 4.38 reinforces our commitment to simplifying the developer experience while equipping enterprises with robust tools. 

With Bake now in GA, developers can streamline complex build configurations into a single command. The new Docker AI Agent offers real-time, on-demand guidance within their preferred Docker tools. Plus, with Multi-node Kubernetes testing in Docker Desktop, they can replicate realistic production environments and address issues earlier in the development cycle. Finally, we made a few new admin tools available to all our Business customers, simplifying deployment, management, and monitoring. 

We look forward to how these innovations accelerate your workflows and supercharge your operations! 

Learn more

Authenticate and update to receive your subscription level’s newest Docker Desktop features.

Subscribe to the Docker Navigator Newsletter.

Learn about our sign-in enforcement options.

New to Docker? Create an account. 

Have questions? The Docker community is here to help.

Quelle: https://blog.docker.com/feed/

Docker Bake is Now Generally Available in Docker Desktop 4.38!

We’re excited to announce the General Availability of Docker Bake with Docker Desktop 4.38! This powerful build orchestration tool takes the hassle out of managing complex builds and offers simplicity, flexibility, and performance for teams of all sizes.

What is Docker Bake?

Docker Bake is an orchestration tool that streamlines Docker builds, similar to how Compose simplifies managing runtime environments. With Bake, you can define build stages and deployment environments in a declarative file, making complex builds easier to manage. It also leverages BuildKit’s parallelization and optimization features to speed up build times.

While Dockerfiles are excellent for defining image build steps, teams often need to build multiple images and execute helper tasks like testing, linting, and code generation. Traditionally, this meant juggling numerous docker build commands with their own options and arguments – a tedious and error-prone process.

Bake changes the game by introducing a declarative file format that encapsulates all options and image dependencies, referred to as targets. Additionally, Bake’s ability to parallelize and deduplicate work ensures faster and more efficient builds.
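To make the format concrete, here is a minimal, hypothetical docker-bake.hcl; the target names, tags, and the test build stage are illustrative, not part of this announcement:

group "default" {
  # Built when you run `docker buildx bake` with no arguments
  targets = ["app", "test"]
}

target "app" {
  dockerfile = "Dockerfile"
  tags = ["user/app:latest"]
}

target "test" {
  # Reuses the app configuration but stops at the `test` build stage
  inherits = ["app"]
  target = "test"
  output = ["type=cacheonly"]  # keep results in the build cache, no image export
}

A single docker buildx bake command then builds both targets in parallel, deduplicating the shared work.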

Why should you use Bake?

Challenges with complex Docker Build configuration:

Managing long, complex build commands filled with countless flags and environment variables.

Tedious workflows for building multiple images.

Difficulty declaring builds for specific targets or environments.

Requiring scripts or third-party tools to make things manageable.

Docker Bake tackles these challenges with a better way to manage complex builds with a simple, declarative approach.

Key benefits of Docker Bake

Simplicity: Replace complex chains of Docker build commands and scripts with a single docker buildx bake command while maintaining clear, version-controlled configuration files that are easy to understand and modify.

Flexibility: Express sophisticated build logic through HCL syntax and matrix builds, enabling dynamic configurations that adapt to different environments and requirements while supporting custom functions for advanced use cases.

Consistency: Maintain standardized build configurations across teams and environments through version-controlled files and inheritance patterns, eliminating environment-specific build issues and reducing configuration drift.

Performance: Automatically parallelize independent builds and eliminate redundant operations through context deduplication and intelligent caching, dramatically reducing build times for complex multi-image workflows.

Figure 1: One simple Docker buildx bake command to replace all the flags and environment variables.

Use cases for Docker Bake

1. Monorepo and Image Bakery

Docker Bake can help developers efficiently manage and build multiple related Docker images from a single source repository. Plus, they can leverage shared configurations and automated dependency handling to enforce organizational standards.

Development Efficiency: Teams can maintain consistent build logic across dozens or hundreds of microservices in a single repository, reducing configuration drift and maintenance overhead.

Resource Optimization: Shared base images and contexts are automatically deduplicated, dramatically reducing build times and storage costs.

Standardization: Enforce organizational standards through inherited configurations, ensuring all services follow security, tagging, and testing requirements.

Change Management: A single source of truth for build configurations makes it easier to implement organization-wide changes like base image updates or security patches.

2. Compose users

Docker Bake provides seamless compatibility with existing docker-compose.yml files, allowing direct use of your current configurations. Existing Compose users can get started with Bake with minimal effort; a minimal sketch follows the list below.

Gradual Adoption: Teams can incrementally adopt advanced build features while still leveraging their existing compose workflows and knowledge.

Development Consistency: Use the same configuration for both local development (via compose) and production builds (via Bake), eliminating “works on my machine” issues.

Enhanced Capabilities: Access powerful features like matrix builds and HCL expressions while maintaining compatibility with familiar compose syntax.

CI/CD Integration: Seamlessly integrate with existing CI/CD pipelines that already understand compose files while adding Bake’s advanced build capabilities.
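As that sketch (service and file names are illustrative), Bake reads the Compose file directly and treats each service with a build section as a target. The first command below builds every service; the second builds only a hypothetical frontend service:

docker buildx bake -f docker-compose.yml

docker buildx bake -f docker-compose.yml frontend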

3. Complex build configurations

Developers can use targets, groups, variables, functions, matrix targets, and other Bake tools to simplify their build configurations across projects and teams; a brief matrix sketch follows the list below.

Cross-Platform Compatibility: Matrix builds enable teams to efficiently manage builds across multiple architectures, OS versions, and dependency combinations from a single configuration.

Dynamic Adaptation: HCL expressions allow builds to adapt to different environments, git branches, or CI variables without maintaining multiple configurations.

Build Optimization: Custom functions enable sophisticated logic for things like version calculation, tag generation, and conditional builds based on git history.

Quality Control: Variable validation and inheritance ensure consistent configuration across complex build scenarios, reducing errors and maintenance burden.

Scale Management: Groups and targets help organize large-scale build systems with dozens or hundreds of permutations, making them manageable and maintainable.
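As one hedged example of the matrix feature (the flavors, file names, and tags below are invented for illustration), a single target definition can fan out into several parallel builds:

target "app" {
  # Each value of `flavor` produces its own expanded target
  matrix = {
    flavor = ["alpine", "bookworm"]
  }
  name = "app-${flavor}"
  dockerfile = "Dockerfile.${flavor}"
  tags = ["user/app:${flavor}"]
}

Running docker buildx bake app should then build app-alpine and app-bookworm in parallel.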

4. Docker Build Cloud

With Bake-optimized builds as the foundation, developers can achieve more efficient Docker Build Cloud performance and faster builds.

Enhanced Docker Build Cloud Performance: Instantly parallelize matrix builds across cloud infrastructure, turning hour-long build pipelines into minutes without managing build infrastructure.

Resource Optimization: Leverage Build Cloud’s distributed caching and deduplication to dramatically reduce bandwidth usage and build times, which is especially valuable for remote teams.

Cost Management: Save costs with Docker Build Cloud — Bake’s precise target definitions mean you only consume cloud resources for exactly what needs to be built.

Developer Experience: Teams can run complex multi-architecture builds without powerful local machines, enabling development from any device while maintaining build performance.

CI/CD Enhancement: Offload resource-intensive builds from CI runners to Build Cloud, reducing CI costs and queue times while improving reliability.

What’s New in Bake for GA?

Docker Bake has been an experimental feature for several years, allowing us to refine and improve it based on user feedback. So, there is already a strong set of ingredients that users love, such as targets and groups, variables, HCL Expression Support, inheritance capabilities, matrix targets, and additional contexts. With this GA release, Bake is now ready for production use, and we’ve added several enhancements to make it more efficient, secure, and easier to use:

Deduplicated Context Transfers: Significantly speeds up build pipelines by eliminating redundant file transfers when multiple targets share the same build context.

Entitlements: Enhances security and resource management by providing fine-grained control over what capabilities and resources builders can access during the build process.

Composable Attributes: Simplifies configuration management by allowing teams to define reusable attribute sets that can be mixed, matched, and overridden across different targets.

Variable Validation: Prevents wasted time and resources by catching configuration errors before the actual build process begins.

Deduplicate context transfers

Previously, when you built targets concurrently using groups, build contexts were loaded independently for each target. If the same context was used by multiple targets in a group, that context was transferred once for every target that used it, which could significantly increase build time depending on your build configuration.

The workaround was to define a named context that loads the context files and have each target reference that named context.

Now, Bake automatically deduplicates context transfers across targets that share the same context: the shared context is transferred only once, which leads to much faster builds. 
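A minimal, hypothetical configuration where this matters: both targets below share the repository root as their context, so Bake now transfers it a single time instead of once per target.

group "default" {
  targets = ["api", "worker"]
}

target "api" {
  context = "."  # same context as "worker"
  dockerfile = "Dockerfile.api"
}

target "worker" {
  context = "."  # deduplicated with "api" at transfer time
  dockerfile = "Dockerfile.worker"
}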

Read more about how to speed up your build time in our docs. 

Entitlements

Bake now includes entitlements to control access to privileged operations, consistent with docker build. This prevents unintended side effects and security risks. If Bake detects a potential issue — like a privileged access request or an attempt to access files outside the current directory — the build will fail unless explicitly allowed.

To be consistent, the Bake command now supports the --allow=ENTITLEMENT flag to grant access to additional entitlements. The following entitlements are currently supported for Bake.

Build equivalents

--allow network.host — Allows executions with host networking.

--allow security.insecure — Allows executions without a sandbox (i.e., --privileged).

File system: Grants filesystem access for builds that need to access files outside the working directory. This affects context, output, cache-from, cache-to, dockerfile, and secret.

--allow fs=<path|*> — Grant read and write access to files outside the working directory.

--allow fs.read=<path|*> — Grant read access to files outside the working directory.

--allow fs.write=<path|*> — Grant write access to files outside the working directory.

ssh

--allow ssh — Allows exposing the SSH agent.
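For example (paths and target names are illustrative), a build that reads certificates outside the working directory, or one that needs host networking, must be granted the corresponding entitlements explicitly:

docker buildx bake --allow fs.read=/etc/ssl/certs app

docker buildx bake --allow network.host app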

Composable attributes

Several attributes previously had to be defined in CSV (e.g. type=provenance,mode=min). These were challenging to read and couldn’t be easily overridden. The following can now be defined as structured objects:

target "app" {
  attest = [
    { type = "provenance", mode = "max" },
    { type = "sbom", disabled = true }
  ]

  cache-from = [
    { type = "registry", ref = "user/app:cache" },
    { type = "local", src = "path/to/cache" }
  ]

  cache-to = [
    { type = "local", dest = "path/to/cache" },
  ]

  output = [
    { type = "oci", dest = "../out.tar" },
    { type = "local", dest = "../out" }
  ]

  secret = [
    { id = "mysecret", src = "path/to/secret" },
    { id = "mysecret2", env = "TOKEN" },
  ]

  ssh = [
    { id = "default" },
    { id = "key", paths = ["path/to/key"] },
  ]
}

As such, the attributes are now composable. Teams can mix, match, and override attributes across different targets, which simplifies configuration management.

target "app-dev" {
  attest = [
    { type = "provenance", mode = "min" },
    { type = "sbom", disabled = true }
  ]
}

target "app-prod" {
  inherits = ["app-dev"]

  attest = [
    { type = "provenance", mode = "max" },
  ]
}

Variable validation

Bake now supports validation for variables, similar to Terraform, to help developers catch and resolve configuration errors early. The GA release supports the following use cases.

Basic validation

To verify that the value of a variable conforms to an expected type, value range, or other condition, you can define custom validation rules using the validation block.

variable "FOO" {
  validation {
    condition = FOO != ""
    error_message = "FOO is required."
  }
}

target "default" {
  args = {
    FOO = FOO
  }
}

Multiple validations

To evaluate more than one condition, define multiple validation blocks for the variable. All conditions must be true.

variable "FOO" {
  validation {
    condition = FOO != ""
    error_message = "FOO is required."
  }
  validation {
    condition = strlen(FOO) > 4
    error_message = "FOO must be longer than 4 characters."
  }
}

target "default" {
  args = {
    FOO = FOO
  }
}

Dependency on other variables

You can reference other Bake variables in your condition expression, enabling validations that enforce dependencies between variables. This ensures that dependent variables are set correctly before proceeding.

variable "FOO" {}
variable "BAR" {
  validation {
    condition = FOO != ""
    error_message = "BAR requires FOO to be set."
  }
}

target "default" {
  args = {
    BAR = BAR
  }
}

New Bake options

In addition to updating the Bake configuration, we’ve added a new --list option. Previously, if you were unfamiliar with a project or wanted a reminder of the supported targets and variables, you had to read through the Bake file. Now, the --list option lets you quickly query them. It also supports a JSON format option if you need programmatic access.

List target

Quickly get a list of the targets available in your Bake configuration.

docker buildx bake --list targets

docker buildx bake --list type=targets,format=json

List variables

Get a list of variables available for your Bake configuration.

docker buildx bake --list variables

docker buildx bake --list type=variables,format=json

These improvements build on a powerful feature set, ensuring Bake is both reliable and future-ready.

Get started with Docker Bake

Ready to simplify your builds? Update to Docker Desktop 4.38 today and start using Bake. With its declarative syntax and advanced features, Docker Bake is here to help you build faster, more efficiently, and with less effort.

Explore the documentation to learn how to create your first Bake file and experience the benefits of streamlined builds firsthand.

Let’s bake something amazing together!

Quelle: https://blog.docker.com/feed/

How Docker Streamlines the Onboarding Process and Sets Up Developers for Success

Nearly half (45%) of developers say they don’t have enough time for learning and development, according to a developer experience research study by Harness and Wakefield Research. Additionally, developer onboarding is a slow and painful process, with 71% of executive buyers saying that onboarding new developers takes at least two months. 

To accelerate innovation and bring products to market faster, organizations must empower developers with robust support and intuitive guardrails, enabling them to succeed within a structured yet flexible environment. That’s where Docker fits in: We help developers onboard quickly and help organizations set up the right guardrails to give developers the flexibility to innovate within the boundaries of company policies. 

Setting up developer teams for success 

Docker is recognized as one of the most used, desired, and admired developer tools, making it an essential component of any development team’s toolkit. For developers who are new to Docker, you can quickly get them up and running with Docker’s integrated development workflows, verified secure content, and accessible learning resources and community support.

Streamlined developer onboarding

When new developers join a team, Docker Desktop can significantly reduce the time and effort required to set up their development environments. Docker Desktop integrates seamlessly with popular IDEs, such as Visual Studio Code, allowing developers to containerize directly within familiar tools, accelerating learning within their usual workflows. Docker Extensions expand Docker Desktop’s capabilities and establish new functionalities, integrating developers’ favorite development tools into their application development and deployment workflows. 

Developers can also use Docker for GitHub Copilot for seamless onboarding with assistance for containerizing applications, generating Docker assets, and analyzing project vulnerabilities. In fact, the Docker extension is a top choice among developers in GitHub Copilot’s extension leaderboard, as highlighted by Visual Studio Magazine.

Docker Build Cloud integrates with Docker Compose and CI workflows, making it a seamless transition for dev teams. Verified content on Docker Hub gives developers preconfigured, trusted images, reducing setup time and ensuring a secure foundation as they onboard onto projects. 

Docker Scout provides actionable insights and recommendations, allowing developers to enhance their container security awareness, scan for vulnerabilities, and improve security posture with real-time feedback. And, Testcontainers Cloud lets developers run reliable integration tests, with real dependencies defined in code. With these tools, developers can be confident about delivering high-quality and reliable apps and experiences in production.  

Continuous learning with accessible knowledge resources

Continuous learning is a priority for Docker, with a wide range of accessible resources and tools designed to help developers deepen their knowledge and stay current in their containerization journey.

Docker Docs offers beginner-friendly guides, tutorials, and AI tools to guide developers through foundational concepts, empowering them to quickly build their container skills. Our collection of guides takes developers step by step to learn how Docker can optimize development workflows and how to use it with specific languages, frameworks, or technologies.

Docker Hub’s AI Catalog empowers developers to discover, pull, and integrate AI models into their workflows, bridging the gap between innovation and implementation. 

Docker also offers regular webinars and tech talks that help developers stay updated on new features and best practices and provide a platform to discuss real-world challenges. If you’re a Docker Business customer, you can even request additional, customized training from our Docker experts. 

Docker’s partnerships with educational platforms and organizations, such as Udemy Training and LinkedIn Learning, ensure developers have access to comprehensive training — from beginner tutorials to advanced containerization topics.

Docker’s global developer community

One of Docker’s greatest strengths is its thriving global developer community, offering organizations a unique advantage by connecting them with a wealth of shared expertise, resources, and real-world solutions.

With more than 20 million monthly active users, Docker’s community forums and events foster vibrant collaboration, giving developers access to a collective knowledge base that spans industries and expertise levels. Developers can ask questions, solve challenges, and gain insights from a diverse range of peers — from beginners to seasoned experts. Whether you’re troubleshooting an issue or exploring best practices, the Docker community ensures you’re never working in isolation.

A key pillar of this ecosystem is the Docker Captains program — a network of experienced and passionate Docker advocates who are leaders in their fields. Captains share technical knowledge through blog posts, videos, webinars, and workshops, giving businesses and teams access to curated expertise that accelerates onboarding and productivity.

Beyond forums and the Docker Captains program, Docker’s community-driven events, such as meetups and virtual workshops (Figure 1), provide developers with direct access to real-world use cases, innovative workflows, and emerging trends. These interactions foster continuous learning and help developers and their organizations keep pace with the ever-evolving software development landscape.

Figure 1: Docker DevTools Day 1.0 Meetup in Singapore.

For businesses, tapping into Docker’s extensive community means access to a vast pool of knowledge, support, and inspiration, which is a critical asset in driving developer productivity and innovation.

Empowering developers with enhanced user management and security

In previous articles, we looked at how Docker simplifies complexity and boosts developer productivity (the right tool) and how to unlock efficiency with Docker for AI and cloud-native development (the right process).

To scale and standardize app development processes across the entire company, you also need to have the right guardrails in place for governance, compliance, and security, which is often handled through enterprise control and admin management tools. Ideally, organizations provide guardrails without being overly prescriptive and slowing developer productivity and innovation. 

Modern enterprises require a layered security approach, beginning with trusted content as the foundation for building robust and compliant applications. This approach gives your dev teams a good foundation for building securely from the start. 

Throughout the software development process, you need a secure platform. For regulated industries like finance and public sectors, this means fortified dev environments. Security vulnerability analysis and policy evaluation tools also help inform improvements and remediation. 

Additionally, you need enterprise controls and dashboards that ensure enterprise IT and security teams can confidently monitor and manage risk. 

Setting up the right guardrails 

Docker provides a number of admin tools to safeguard your software with integrated container security in the Docker Business plan. Our goal is to improve security and compliance of developer environments with minimal impact on developer experience or productivity. 

Centralized settings for improved dev environments security 

Docker provides developer teams with access to a vast library of trusted and certified application content, including Docker Official Images, Docker Verified Publisher, and Docker Trusted Open Source content. Coupled with advanced image and registry management rules — with tools like Image Access Management and Registry Access Management — you can ensure that your developers only use software that satisfies your company’s security policies. 

With a solid foundation to build securely from the start, your organization can further enhance security throughout the software development process. Docker ensures software supply chain integrity through vulnerability scanning and image analysis with Docker Scout. Rapid remediation capabilities paired with detailed CVE reporting help developers quickly find and fix vulnerabilities, resulting in speedy time to resolution.
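For instance, developers can get a quick read on an image straight from the CLI with Docker Scout; the image name below is a placeholder:

docker scout quickview myorg/myapp:latest

docker scout cves myorg/myapp:latest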

Although containers are generally secure, container development tools still must be properly secured to reduce the risk of security breaches in the developer’s environment. Hardened Docker Desktop is an example of Docker’s fortified development environments with enhanced container isolation. It lets you enforce strict security settings and prevent developers and their containers from bypassing these controls. With air-gapped containers, you can further restrict containers from accessing network resources, limiting where data can be uploaded to or downloaded from.

Continuous monitoring and managing risks

With the Admin Console and Docker Desktop Insights, IT administrators and security teams can visualize and understand how Docker is used within their organizations and manage the implementation of organizational configurations and policies (Figure 2). 

These insights help teams streamline processes and improve efficiency. For example, you can enforce sign-in for developers who don’t sign in to an account associated with your organization. This step ensures that developers receive the benefits of your Docker subscription and work within the boundaries of the company policies. 

Figure 2: Docker Desktop Insights Dashboard provides information on product usage.

For business and engineering leaders, full visibility and governance over the development process help ensure compliance and mitigate risk while driving developer productivity. 

Unlock innovation with Docker’s development suite

Docker is the leading suite of tools purpose-built for cloud-native development, combining a best-in-class developer experience with enterprise-grade security and governance. With Docker, your organization can streamline onboarding, foster innovation, and maintain robust compliance — all while empowering your teams to deliver impactful solutions to market faster and more securely. 

Explore the Docker Business plan today and unlock the full potential of your development processes.

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Quelle: https://blog.docker.com/feed/