Creating Kubernetes Extensions in Docker Desktop

This guest post is courtesy of one of our Docker Captains! James Spurin, a DevOps Consultant and Course/Content Creator at DiveInto, recalls his experience creating the Kubernetes Extension for Docker Desktop. Of course, every journey has its challenges. But being able to leverage the powerful open source foundations of the vcluster Extension was well worth the effort!

Ever wondered what it would take to create your own Kubernetes Extensions in Docker Desktop? In this blog, we’ll walk through the steps and lessons I learned while creating the k9s Docker Extension and how it leverages the incredible open source efforts of the vcluster Extension as crucial infrastructure components.

Why build a Kubernetes Docker Extension?

When I initially encountered Docker Extensions, I wondered:

“Can we use Docker Extensions to communicate with the inbuilt Docker-managed Kubernetes server provided in Docker Desktop?”

Docker Extensions open many opportunities with the convenient full-stack interface within the Extensions pane.

Traditionally when using Docker, we’d run a container through the UI or CLI. We’d then expose the container’s service port (for example, 8080) to our host system. Next, we’d access the user interface via our web browser with a URL such as http://localhost:8080.

While the UI/CLI makes this relatively simple, this would still involve multiple steps between different components, namely Docker Desktop and a web browser. We may also need to repeat these steps each time we restart the service or close our browser.

Docker Extensions solve this problem by helping us visualize our backend services through the Docker Dashboard.

Combining Docker Desktop, Docker Extensions, and Kubernetes opens up even more opportunities. This toolset lets us productively leverage Docker Desktop from the beginning stages of development to container creation, execution, and testing, leading up to container orchestration with Kubernetes.

Challenges creating the k9s Extension

Wanting to see this in action, I experimented with different ways to leverage Docker Desktop with the inbuilt Kubernetes server. Eventually, I was able to bridge the gap and provide Kubernetes access to a Docker Extension.

At the time, this required a privileged container — a security risk. As a result, this approach was less than ideal and wasn’t something I was comfortable sharing…


Let’s dive deeper into this.

Docker Desktop uses a hidden virtual machine to run Docker. The Docker-managed Kubernetes instance also runs within this virtual machine, deployed via kubeadm:

Docker Desktop conveniently provides the user with a local preconfigured kubeconfig file and kubectl command within the user’s home area. This makes accessing Kubernetes less of a hassle. It works and is a fantastic way to fast-track access for those looking to leverage Kubernetes from the convenience of Docker.

However, this simplicity poses some challenges from an extension’s viewpoint. Specifically, we’d need to find a way to provide our Docker Extension with an appropriate kubeconfig file for accessing the in-built Kubernetes service.
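For context, the kind of file the extension needs looks roughly like this minimal kubeconfig. All values here are illustrative placeholders, not the actual contents of a Docker Desktop kubeconfig:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: docker-desktop
  cluster:
    server: https://kubernetes.docker.internal:6443   # Docker Desktop's API server endpoint
    certificate-authority-data: <base64-ca>           # illustrative placeholder
contexts:
- name: docker-desktop
  context:
    cluster: docker-desktop
    user: docker-desktop
current-context: docker-desktop
users:
- name: docker-desktop
  user:
    client-certificate-data: <base64-cert>            # illustrative placeholder
    client-key-data: <base64-key>                     # illustrative placeholder
```

An extension container can’t read this file from the user’s home area directly, which is exactly the gap described above.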

Finding a solution with vcluster

Fortunately, the team behind vcluster was able to address this challenge! Their efforts provide a solid foundation for those looking to create their own Kubernetes-based Extensions in Docker Desktop.

When launching the vcluster Docker Extension, you’ll see that it uses a control loop that verifies Docker Desktop is running Kubernetes.

From an open source viewpoint, this has tremendous reusability for those creating their own Docker Extensions with Kubernetes. The progress indicator shows vcluster checking for a running Kubernetes service, as we can see in the following:

If the service is running, the UI loads accordingly:

If not, an error is displayed as follows:

While internally verifying that the Kubernetes server is running, the vcluster Extension cleverly captures the Docker Desktop Kubernetes kubeconfig. It does this with a JavaScript hostCli call, using kubectl binaries included in the extension (to provide compatibility across Windows, Mac, and Linux).

Then, it posts the captured output to a service running within the extension. The service in turn writes a local kubeconfig file for use by the vcluster Extension. 🚀

// Gets the docker-desktop kubeconfig file from the host and saves it in the container's /root/.kube/config file.
// We have to use the vm.service to call the post API to store the retrieved kubeconfig. Without the post API
// in vm.service, all the combinations of commands fail.
export const updateDockerDesktopK8sKubeConfig = async (ddClient: v1.DockerDesktopClient) => {
  // kubectl config view --raw
  let kubeConfig = await hostCli(ddClient, "kubectl", ["config", "view", "--raw", "--minify", "--context", DockerDesktop]);
  if (kubeConfig?.stderr) {
    console.log("error", kubeConfig?.stderr);
    return false;
  }

  // call the backend to store the retrieved kubeconfig
  try {
    await ddClient.extension.vm?.service?.post("/store-kube-config", {data: kubeConfig?.stdout});
  } catch (err) {
    console.log("error", JSON.stringify(err));
    return false;
  }
  return true;
};

How the k9s Extension for Docker Desktop works

With the vcluster Extension’s ‘Docker Desktop Kubernetes Service is Running’ control loop and the kubeconfig capture logic, we have the key ingredients to create our own Kubernetes-based Docker Extensions.


The k9s Extension that I released for Docker Desktop is essentially these components, with a splash of k9s and ttyd (for the web terminal). It’s the vcluster codebase, reduced to a minimum set of components with k9s added.

The source code is available on GitHub.

While the vcluster Extension stores the kubeconfig file in a particular directory, the k9s Extension expands on this by combining the service with a Docker volume. When the service receives the post request with the kubeconfig, it’s saved as expected.

The kubeconfig file is now in a shared volume that other containers can access, such as the k9s container, as shown in the following example:
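To make the shared-volume idea concrete, here’s a hedged sketch of what such a docker-compose.yaml might look like. The service, image, and volume names are hypothetical, not the ones from the actual extension:

```yaml
services:
  backend:
    image: example/k9s-extension-backend    # hypothetical: receives the kubeconfig via POST /store-kube-config
    volumes:
      - kube-config:/root/.kube             # writes the captured kubeconfig here
  k9s:
    image: example/k9s-ttyd                 # hypothetical image bundling k9s and ttyd
    environment:
      - KUBECONFIG=/root/.kube/config
    ports:
      - "35781:35781"                       # web terminal rendered in the extension's UI
    volumes:
      - kube-config:/root/.kube             # same shared volume, so k9s can read the kubeconfig
volumes:
  kube-config:
```

The key design point is the named volume mounted into both services: the backend writes the file, and the k9s container reads it.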

When the k9s container starts, it reads the environment variable KUBECONFIG (defined in the container image). Then, it exposes a web-based terminal service on port 35781 with k9s running.

If Kubernetes is running as expected in Docker Desktop, we’ll reuse the vcluster Extension’s Kubernetes control loop to render an iframe pointing to the service on port 35781.

if (isDDK8sEnabled) {
  const myHTML = '<style>:root { --dd-spacing-unit: 0px; }</style><iframe src="http://localhost:35781" frameborder="0" style="overflow:hidden;height:99vh;width:100%" height="100%" width="100%"></iframe>';
  component = <React.Fragment>
    <div dangerouslySetInnerHTML={{ __html: myHTML }} />
  </React.Fragment>;
} else {
  component = <Box>
    <Alert iconMapping={{
      error: <ErrorIcon fontSize="inherit"/>,
    }} severity="error" color="error">
      Seems like Kubernetes is not enabled in your Docker Desktop. Please take a look at the <a href="…">documentation</a> on how to enable the Kubernetes server.
    </Alert>
  </Box>;
}

This renders k9s within the Extension pane when accessing the k9s Docker Extension.


With that, I hope that sharing my experiences creating the k9s Docker Extension inspires you. By leveraging the source code for the Kubernetes k9s Docker Extension (standing on the shoulders of the vcluster team), we open the gate to countless opportunities.

You’ll be able to fast-track the creation of a Kubernetes Extension in Docker Desktop, through changes to just two files: the docker-compose.yaml (for your own container services) and the UI rendering in the control loop.

Of course, all of this wouldn’t be possible without the minds behind vcluster. I’d like to give special thanks to Lian Li, whom I met at KubeCon and who introduced me to the project. And I’d also like to thank the development team, who are referenced both in the vcluster Extension source code and the forked version of k9s!

Thanks for reading – James Spurin

Not sure how to get started or want to learn more about Docker Extensions like this one? Check out the following additional resources:

- Learn how to create your own Docker Extension.
- Get started by installing Docker Desktop for Mac, Windows, or Linux.
- Read similar blogs covering Docker Extensions.
- Find more details on

You can also learn more about James, his top tips for working with Docker, and more in his feature on our Docker Captain Take 5 series. 

Bring Continuous Integration to Your Laptop With the Drone CI Docker Extension

Continuous Integration (CI) is a key element of cloud native application development. With containers forming the foundation of cloud-native architectures, developers need to integrate their version control system with a CI tool. 

There’s a myth that continuous integration needs cloud-based infrastructure. Even though cloud-based CI makes sense for production releases, developers also need the ability to build and test their pipelines on their laptops before sharing them with their team. Is that really possible today? 

Introducing the Drone CI pipeline

An open-source project called Drone CI makes that a reality. With over 25,700 GitHub stars and 300-plus contributors, Drone is a cloud-native, self-service CI platform. Drone CI offers a mature, container-based system that leverages the scaling and fault-tolerance characteristics of cloud-native architectures. It helps you build container-friendly pipelines that are simple, decoupled, and declarative. 

Drone is a container-based pipeline engine that lets you run any existing containers as part of your pipeline or package your build logic into reusable containers called Drone Plugins. 

Drone plugins are configurable to fit your needs, which lets you distribute them within your organization or to the community in general.

Running Drone CI pipelines from Docker Desktop

For a developer working with decentralized tools, the task of building and deploying microservice applications can be monumental. It’s tricky to install, manage, and use these apps in those environments. That’s where Docker Extensions come in. With Docker Extensions, developer tools are integrated right into Docker Desktop — giving you streamlined management workflows. It’s easier to optimize and transform your development processes. 

The Drone CI extension for Docker Desktop brings CI to development machines. You can now import Drone CI pipelines into Docker Desktop and run them locally. You can also run specific steps of a pipeline, monitor execution results, and inspect logs.

Setting up a Drone CI pipeline

In this guide, you’ll learn how to set up a Drone CI pipeline from scratch on Docker Desktop. 

First, you’ll install the Drone CI Extension within Docker Desktop. Second, you’ll learn how to discover Drone pipelines. Third, you’ll see how to open a Drone pipeline on Visual Studio Code. Lastly, you’ll discover how to run CI pipelines in trusted mode, which grants them elevated privileges on the host machine. Let’s jump in.


You’ll need to download Docker Desktop 4.8 or later before getting started. Make sure to choose the correct version for your OS and then install it. 

Next, hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Click the Settings gear > Extensions tab > check the “Enable Docker Extensions” box.

Installing the Drone CI Docker extension

Drone CI isn’t currently available on the Extensions Marketplace, so you’ll have to download it via the CLI. Launch your terminal and run the following command to install the Drone CI Extension:

docker extension install drone/drone-ci-docker-extension:latest

The Drone CI extension will soon appear in the Docker Dashboard’s left sidebar, underneath the Extensions heading:

Import Drone pipelines

You can click the “Import Pipelines” option to specify the host filesystem path where your Drone CI pipelines (drone.yml files) are. If this is your first time with Drone CI pipelines, you can use the examples from our GitHub repo.

In the recording above, we’ve used the long-run-demo sample to run a local pipeline that executes a long running sleep command. This occurs within a Docker container.

kind: pipeline
type: docker
name: sleep-demos
steps:
- name: sleep5
  image: busybox
  pull: if-not-exists
  commands:
  - x=0;while [ $x -lt 5 ]; do echo "hello"; sleep 1; x=$((x+1)); done
- name: an error step
  image: busybox
  pull: if-not-exists
  commands:
  - yq --help

You can download this pipeline YAML file from the Drone CI GitHub page.

The file starts with a pipeline object that defines your CI pipeline. The type attribute defines your preferred runtime while executing that pipeline. 

Drone supports numerous runners like docker, kubernetes, and more. The extension currently only supports docker pipelines. Each pipeline step spins up a Docker container with the corresponding image defined as part of the step’s image attribute.

Each step defines an attribute called commands. This is a list of shell commands that we want to execute as part of the build. The defined list of commands is converted into a shell script and set as the Docker container’s ENTRYPOINT. If any command (for example, the missing yq command, in this case) returns a non-zero exit code, the pipeline fails and exits.
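The commands-to-ENTRYPOINT behavior can be sketched in plain shell. This is a simplified simulation of what the docker runner does, not Drone’s actual implementation; the missing binary stands in for the absent yq command, and the sleep is dropped for brevity:

```shell
# Simulate a Drone step: its "commands" list becomes a shell script that
# aborts at the first non-zero exit code.
step_script=$(mktemp)
cat > "$step_script" <<'EOF'
set -e
x=0; while [ $x -lt 5 ]; do echo "hello"; x=$((x+1)); done
this-binary-does-not-exist --help
echo "never reached"
EOF

step_output=$(sh "$step_script" 2>/dev/null)
step_status=$?
echo "$step_output"
echo "exit status: $step_status"
rm -f "$step_script"
```

Running it prints the five hello lines, never reaches the final echo, and surfaces a non-zero exit status, which is exactly how the failing yq step fails the whole pipeline.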

Edit your pipeline faster in VS Code via Drone CI

Visual Studio Code (VS Code) is a lightweight, highly-popular IDE. It supports JavaScript, TypeScript, and Node.js. VS Code also has a rich extensions ecosystem for numerous other languages and runtimes. 

Opening your Drone pipeline project in VS Code takes just seconds from within Docker Desktop:

This feature helps you quickly view your pipeline and add, edit, or remove steps — then run them from Docker Desktop. It lets you iterate faster while testing new pipeline changes.

Running specific steps in the CI pipeline

The Drone CI Extension lets you run individual steps within the CI pipeline at any time. To better understand this functionality, let’s inspect the following Drone YAML file:

kind: pipeline
type: docker
name: sleep-demos
steps:
- name: sleep5
  image: busybox
  pull: if-not-exists
  commands:
  - x=0;while [ $x -lt 5 ]; do echo "hello"; sleep 1; x=$((x+1)); done
- name: an error step
  image: busybox
  pull: if-not-exists
  commands:
  - yq --help

In this example, the first pipeline step, defined as sleep5, executes a shell script (echo "hello") for five seconds and then stops (ignoring the error step). The video below shows you how to run the specific sleep-demos stage within the pipeline:

Running steps in trusted mode

Sometimes, you’re required to run a CI pipeline with elevated privileges. These privileges enable a user to systematically do more than a standard user. This is similar to how we pass the --privileged=true parameter within a docker run command. 

When you execute docker run --privileged, Docker will permit access to all host devices and set configurations in AppArmor or SELinux. These settings may grant the container nearly the same access to the host as processes running outside containers on the host.

Drone’s trusted mode tells your container runtime to run the pipeline containers with elevated privileges on the host machine. Among other things, trusted mode can help you:

- Mount the Docker host socket onto the pipeline container
- Mount the host path to the Docker container

Run pipelines using environment variable files

The Drone CI Extension lets you define environment variables for individual build steps. You can set these within a pipeline step. Just as docker run provides a way to pass environment variables to running containers, Drone lets you pass usable environment variables to your build. Consider the following Drone YAML file:

kind: pipeline
type: docker
name: default
platform:
  os: linux
  arch: arm64
steps:
- name: display environment variables
  image: busybox
  pull: if-not-exists
  commands:
  - printenv

The file starts with a pipeline object that defines your CI pipeline. The type attribute defines your preferred runtime (Docker, in our case) while executing that pipeline. The platform section helps configure the target OS and architecture (like arm64) and routes the pipeline to the appropriate runner. If unspecified, the system defaults to Linux amd64. 

The steps section defines a series of shell commands. These commands run within a busybox Docker container as the ENTRYPOINT. As shown, the command prints any environment variables you’ve declared in your my-env file.
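As a rough illustration of the env-file mechanism (the file name and variable names here are made up for the example, not taken from the Drone docs), loading such a file into a step’s environment and printing it back looks like this in plain shell:

```shell
# Create an illustrative my-env style file.
envfile=$(mktemp)
cat > "$envfile" <<'EOF'
DRONE_DEMO_GREETING=hello
DRONE_DEMO_TARGET=world
EOF

# Load it the way an env-file option would: every assignment becomes
# an exported environment variable for the step.
set -a
. "$envfile"
set +a

# The step's `printenv` command would now show the declared variables.
env | grep '^DRONE_DEMO_' | sort
rm -f "$envfile"
```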


You can choose your preferred environment file and run the CI pipeline (pictured below):

If you import and run the CI pipeline, it prints every environment variable.

Run pipelines with secrets files

We use repository secrets to store and manage sensitive information like passwords, tokens, and ssh keys. Storing this information as a secret is considered safer than storing it within a plain text configuration file. 

Note: Drone masks all values used from secrets while printing them to standard output and error.

The Drone CI Extension lets you choose your preferred secrets file and use it within your CI pipeline as shown below:

Remove pipelines

You can remove a CI pipeline in just one step. Select one or more Drone pipelines and remove them by clicking the red minus (“-”) button on the right side of the Dashboard. This action will only remove the pipelines from Docker Desktop — without deleting them from your filesystem.

Bulk remove all pipelines

Remove a single pipeline


Drone is a modern, powerful, container-friendly CI that empowers busy development teams to automate their workflows. This dramatically shortens building, testing, and release cycles. With a Drone server, development teams can build and deploy cloud apps. These harness the scaling and fault-tolerance characteristics of cloud-native architectures like Kubernetes. 

Check out Drone’s documentation to get started with CI on your machine. With the Drone CI extension, developers can now run their Drone CI pipelines locally as they would in their CI systems.

Want to dive deeper into Docker Extensions? Check out our intro documentation, or discover how to build your own extensions. 

Kubernetes in Production Environments

Whalecome, dear reader, to our second installment of Dear Moby. In this developer-centric advice column, our Docker subject matter experts (SMEs) answer real questions from you — the Docker community. Think Dear Abby, but better, because it’s just for developers!

Since we announced this column, we’ve received a tidal wave of questions. And you can submit your own questions too!

In this edition, we’ll be talking about the best way to develop in production environments running Kubernetes (spoiler alert: there are more ways than one!).

Without further ado, let’s dive into today’s top question.

The question

What is the best way to develop if my prod environment runs Kubernetes? – Amos

The answer 

SME: Engineering Manager and Docker Captain, Michael Irwin. 

First and foremost, there isn’t one “best way” to develop, as there are quite a few options, each with its own tradeoffs.

Option #1 is to simply run Kubernetes locally! 

Docker Desktop allows you to spin up a Kubernetes cluster with just a few clicks. If you need more flexibility in versioning, you can look into minikube or KinD (Kubernetes-in-Docker), which are both well-supported options. Other fantastic tools like Tilt can also do wonders for your development experience by watching for file changes and rebuilding and redeploying container images (among other things).

Note: Docker Desktop currently only ships the latest version of Kubernetes. 

The biggest advantage to this option is you can leverage very similar manifests to what’s used in your prod environment. If you mount source code into your containers for development (dev), your manifests will need to be flexible enough to support different configurations for prod versus dev. That being said, you can also test most of the system out the same way your prod environments run.

However, there are a few considerations to think about:

- Docker Desktop needs more resources (CPU/memory) to run Kubernetes.
- There’s a good chance you’ll need to learn more about Kubernetes if you need to debug your application. This can add a bit of a learning curve.
- Even if you sync the capabilities of your prod cluster locally, there’s still a chance things will differ. This typically stems from things like custom controllers and resources, access or security policies, service meshes, ingress and certificate management, and/or other factors that can be hard to replicate locally.

Option #2 is to simply use Docker Compose. 

While Kubernetes can be used to run containers, so can many other tools. Docker Compose provides the ability to spin up an entire development environment using a much smaller and more manageable configuration. It leverages the Compose specification, “a developer-focused standard for defining cloud and platform agnostic container-based applications.”

There are a couple of advantages to using Compose. It has a more gradual learning curve and a lighter footprint. You can simply run docker compose up and have everything running! Instead of having to set up Kubernetes, apply manifests, potentially configure Helm, and more, Compose is already ready to go. This saves us from running a full orchestration system on our machines (which we wouldn’t wish on anyone).
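For instance, a small Compose file like this hypothetical one (service names, images, and credentials are illustrative) is enough to bring up an app and its database with a single docker compose up:

```yaml
services:
  web:
    build: .                    # build the app image from the local Dockerfile
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://app:secret@db:5432/app   # illustrative
    depends_on:
      - db
  db:
    image: postgres:14
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=app
```

Compare that with the manifests, Services, and possibly Helm charts needed to express the same two containers in Kubernetes.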

However, using Compose does come with conditions:

- It’s another tool in your arsenal, which means another set of manifests to maintain and update.
- If you need to define a new environment variable, you’ll need to add it to both your Compose file and your Kubernetes manifests.
- You’ll have to vet changes against either prod or a staging environment, since you’re not running Kubernetes locally.

To recap, it depends!

There are great teams building amazing apps with each approach. We’re super excited to explore how we can make this space better for all developers, so stay tuned for more!

Whale, that does it for this week’s issue. Have another question you’d like the Docker team to tackle? Submit it here!

Announcing Docker Hub Export Members

Docker Hub’s Export Members functionality is now available, giving you the ability to export a full list of all your Docker users into a single CSV file. The file will contain their username, full name, and email address — as well as the user’s current status and if the user belongs to a given team. If you’re an administrator, that means you can quickly view your entire organization’s usage of Docker.

In the Members Tab, you can download a CSV file by pressing the Export members button. The file can be used to verify user status, confirm team structure, and quickly audit Docker usage.

The Export Members feature is only available for Docker Business subscribers. This feature will help organizations better track their utilization of Docker, while also simplifying the steps needed for an administrator to review their users within Docker Hub. 

At Docker, we continually listen to our customers, and strive to build the tools needed to make them successful. Feel free to check out our public roadmap and leave feedback or requests for more features like this!

Learn more about exporting users on our docs page, or sign in to your Docker Hub account to try it for yourself.

Clarifying Misconceptions About Web3 and Its Relevance With Docker

This blog is the first in a two-part series. We’ll talk about the challenges of defining Web3 plus some interesting connections between Web3 and Docker.

Part two will highlight technical solutions and demonstrate how to use Docker and Web3 together.

We’ll build upon the presentation, “Docker and Web 3.0 — Using Docker to Utilize Decentralized Infrastructure & Build Decentralized Apps,” by JT Olio, Krista Spriggs, and Marton Elek from DockerCon 2022. However, you don’t have to view that session before reading this post.

What’s Web3, after all?

If you ask a group what Web3 is, you’ll likely receive a different answer from each person. The definition of Web3 causes a lot of confusion, but this lack of clarity also offers an opportunity. Since there’s no consensus, we can offer our own vision.

One problem is that many definitions are based on specific technologies, as opposed to goals:

- “Web3 is an idea […] which incorporates concepts such as decentralization, blockchain technologies, and token-based economics” (Wikipedia)
- “Web3 refers to a decentralized online ecosystem based on the blockchain.” (Gavin Wood)

There are three problems with defining Web3 based on technologies instead of (or in addition to) high-level goals and visions. In general, these definitions unfortunately confuse the “what” with the “how.” We’ll focus our Web3 definition on the “what” and leave the “how” for a discussion on implementation with technologies. Let’s discuss each issue in more detail.

Problem #1: It should be about “what” problems to solve instead of “how”

To start, most people aren’t really interested in “token-based economics.” But, they can passionately critique the current internet (”Web2”) through many common questions:

- Why’s it so hard to move between platforms and export or import our data?
- Why’s it so hard to own our data?
- Why’s it so tricky to communicate with friends who use other social or messaging services?
- Why can a service provider shut down my user without proper explanation or possibility of appeal? Most terms of service agreements can’t help in practicality. They’re long and hard to understand. Nobody reads them (just envision the lengthy new terms for websites and user-data treatment stemming from GDPR regulations). In a debate against service providers, we’re disadvantaged and less likely to win.
- Why can’t we have better privacy? Full encryption for our data? Or the freedom to choose who can read or use our personal data, posts, and activities?
- Why couldn’t we sell our content in a more flexible way? Are we really forced to accept high margins from central marketplaces to be successful?
- How can we avoid being dependent on any one person or organization?
- How can we ensure that our data and sensitive information are secured?

These are well-known problems. They’re also key usability questions — and ultimately the “what” that we need to solve. We’re not necessarily looking to require new technologies like blockchain or NFT. Instead, we want better services with improved security, privacy, control, sovereignty, economics, and so on. Blockchain technology, NFT, federation, and more, are only useful if they can help us address these issues and enjoy better services. Those are potential tools for “how” to solve the “what.”

What if we had an easier, fairer system for connecting artists with patrons and donors, to help fund their work? That’s just one example of how Web3 could help.

As a result, I believe Web3 should be defined as “the movement to improve the internet’s UX, including for — but not limited to — security, privacy, control, sovereignty, and economics.”

Problem #2: Blockchain, but not Web3?

We can use technologies in so many different ways. Blockchains can create a currency system with more sovereignty, control, and economics, but they can also support fraudulent projects. Since we’ve seen so much of that, it’s not surprising that many people are highly skeptical.

However, those comments are usually critical towards unfair or fraudulent projects that use Web3’s core technologies (e.g. blockchain) to siphon money from people. They’re not usually directed at big problems related to usability.

Healthy skepticism can save us, but we at least need some cautious optimism. Always keep inventing and looking for better solutions. Maybe better technologies are required. Or, maybe using current technologies differently could best help us achieve the “how” of Web3.

Problem #3: Web3, but not blockchain?

We can also view the previous problem from the opposite perspective. It’s not just blockchain or NFTs that can help us solve the internet’s current challenges related to Problem #1. Some projects don’t use blockchain at all, yet qualify as Web3 due to the internet challenges they solve.

One good example is federation — one of the oldest ways of achieving decentralization. Our email system is still fairly decentralized, even if big players handle a significant proportion of email accounts. And this decentralization helped new players provide better privacy, security, or control.

Thankfully, there are newer, promising projects like Matrix, which is one of very few chat apps designed for federation from the ground up. How easy would communication be if all chat apps allowed federated message exchanges between providers? 

Docker and Web3

Since we’re here to talk about Docker, how can we connect everything to containers?

While there are multiple ways to build and deploy software, containers are usually involved on some level. Wherever we use technology, containers can probably help.

But, I believe there’s a fundamental, hidden connection between Docker and Web3. These three similarities are small, but together form a very interesting, common link.

Usability as a motivation

We first defined the Web3 movement based on the need to improve user experiences (privacy, control, security, etc.). Docker containers can provide the same benefits.

Containers quickly became popular because they solved real user problems. They gave developers reproducible environments, easy distribution, and just enough isolation.

Since day one, Docker’s been based on existing, proven technologies like namespace isolation or Linux kernel cgroups. By building upon leading technologies, Docker relieved many existing pain points.

Web3 is similar. We should pick the right technologies to achieve our goals. And luckily innovations like blockchains have become mature enough to support the projects where they’re needed.

Content-addressable world

One barrier to creating a fully decentralized system is creating globally unique, decentralized identifiers for all services and items. When somebody creates a new identifier, we must ensure it’s truly one of a kind.

There’s no easy fix, but blockchains can help. After all, chains are the central source of truth (agreed on by thousands of participants in a decentralized way). 

There’s another way to solve this problem. It’s very easy to choose a unique identifier if there’s only one option and the choice is obvious. For example, if content is identified by its hash, then that hash is the unique identifier. If the content is the same, the unique identifier (the hash itself) will always be the same.
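A quick shell sketch shows the idea: hashing the same bytes always yields the same identifier, so independent parties can agree on it without any coordination (sha256 here is just one example of a content hash):

```shell
# Content addressing in miniature: the identifier is derived from the
# content itself, so equal content implies an equal identifier.
id1=$(printf 'hello' | sha256sum | cut -d' ' -f1)
id2=$(printf 'hello' | sha256sum | cut -d' ' -f1)
id3=$(printf 'hello!' | sha256sum | cut -d' ' -f1)

echo "id1=$id1"
[ "$id1" = "$id2" ] && echo "same content, same identifier"
[ "$id1" != "$id3" ] && echo "different content, different identifier"
```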

One example is Git, which is made for distribution. Every commit is identified by its hash (metadata, pointers to parents, pointers to the file trees). This made Git decentralization-friendly. While most repositories are hosted by big companies, it’s pretty easy to shift content between providers. This was an earlier problem we were trying to solve.

IPFS — as a decentralized content routing protocol — also pairs hashes with pieces to avoid any confusion between decentralized nodes. It also created a full ecosystem to define notation for different hashing types (multihash), or different data structures (IPLD).

We see exactly the same thing when we look at Docker containers! The digest acts as a content-based hash and can identify layers and manifests. This makes it easy to verify them and get them from different sources without confusion. Docker was designed to be decentralized from the get-go.


Content-based digests of container layers and manifests help us, since Docker is usable with any kind of registry.

This is a type of federation. Even if Docker Hub is available, it’s very easy to start new registries. There’s no vendor lock-in, and there’s no grueling process behind being listed on one single possible marketplace. Publishing and sharing new images is as painless as possible.

As we discussed above, I believe federation is one form of decentralization, and decentralization is one approach to get what we need: better control and ownership. There are stances against federation, but I believe federation offers more benefits despite its complexity. Many hard-forks, soft-forks, and blockchain restarts prove that control (especially democratic control) is possible with federation.

Whatever we choose to call it, I believe that the freedom to use different container registries and the straightforward process of deploying containers are important factors in the success of Docker containers.


We’ve successfully defined Web3 based on end goals and user feedback — in other words, “what” needs to be achieved. And this definition seems to work very well. It stays agnostic about “how” we achieve those goals. It also includes the use of existing “Web2” technologies and many future projects, even those without NFTs or blockchains. It even excludes the fraudulent projects which have drawn so much skepticism.

We’ve also found some interesting intersections between Web3 and Docker!

Our job is to keep working and keep innovating. We should focus on the goals ahead and find the right technologies based on those goals.

Next up, we’ll discuss fields that are more technical. Join us as we explore using Docker with fully distributed storage options.

What is the Best Container Security Workflow for Your Organization?

Since containers are a primary means for developing and deploying today’s microservices, keeping them secure is highly important. But where should you start? A solid container security workflow often begins with assessing your images. These images can contain a wide spectrum of vulnerabilities. Per Sysdig’s latest report, 75% of images have vulnerabilities considered either highly or critically severe. 

There’s good news though — you can patch these vulnerabilities! And with better coordination and transparency, it’s possible to catch these issues in development before they impact your users. This protects everyday users and enterprise customers who require strong security. 

Snyk’s Fani Bahar and Hadar Mutai dove into this container security discussion during their DockerCon session. By taking a shift-left approach and rallying teams around key security goals, stronger image security becomes much more attainable. 

Let’s hop into Fani and Hadar’s talk and digest their key takeaways for developers and organizations. You’ll learn how attitudes, structures, and tools massively impact container security.

Security requires the right mindset across organizations

Mindset is one of the most difficult hurdles to overcome when implementing stronger container security. While teams widely consider security to be important, many often find it annoying in practice. That’s because security has traditionally taken monumental effort to get right. Even today, container security has become “the topic that most developers tend to avoid,” according to Hadar. 

And while teams scramble to meet deadlines or launch dates, the discovery of high-severity vulnerabilities can cause delays. Security soon becomes an enemy rather than a friend. So how do we flip the script? Ideally, a sound container-security workflow should do the following:

- Support the agile development principles we’ve come to appreciate with microservices development
- Promote improved application security in production
- Unify teams around shared security goals instead of creating conflicting priorities

Two main personas are invested in improving application security: developers and DevSecOps. These separate personas have very similar goals. Developers want to ship secure applications that run properly. Meanwhile, DevSecOps teams want everything that’s deployed to be secured. 

The trick to unifying these goals is creating an effective container-security workflow that benefits everyone. Plus, this workflow must overcome the top challenges impacting container security — today and in the future. Let’s analyze those challenges that Hadar highlighted. 

Organizations face common container security challenges

Unraveling the mystery behind security seems daunting, but understanding common challenges can help you form a strategy. Organizations grapple with the following: 

- Vulnerability overload (container images can introduce upwards of 900)
- Prioritizing some security fixes over others
- Understanding how container security fundamentally works (this impacts whether a team can fix issues)
- Lengthier development pipelines stemming from security issues (and testing)
- Integrating useful security tools that developers support into existing workflows and systems

From this, we can see that teams have to work together to align on security. This includes identifying security outcomes and defining roles and responsibilities, while causing minimal disruption. Container security should be as seamless as possible. 

DevSecOps maturity and organizational structures matter

DevSecOps stands for Development, Security, and Operations, but what does that mean? Security under a DevSecOps system becomes a shared responsibility and a priority quite early in the software development lifecycle. While some companies have this concept down pat, many others are new to it. Others lie somewhere in the middle. 

As Fani mentioned, a company’s development processes and security maturity determine how they’re categorized. We have two extremes. On one hand, a company might’ve fully “realized” DevSecOps, meaning they’ve successfully scaled their processes and bolstered security. Conversely, a company might be in the exploratory phase. They’ve heard about DevSecOps and know they want it (or need it). But, their development processes aren’t well-entrenched, and their security posture isn’t very strong. 

Those in the exploratory phase might find themselves asking the following questions:

- Can we improve our security?
- Which organizations can we learn from?
- Which best practices should we follow?

Meanwhile, other companies are either DevOps mature (but security immature) or DevSecOps ready. Knowing where your company sits can help you take the correct next steps to either scale processes or security. 

The impact of autonomy vs. centralization on security

You’ll typically see two methodologies used to organize teams. One focuses on autonomy, while the other prioritizes centralization.

Autonomous approaches

Autonomous organizations might house multiple teams that are more or less siloed. Each works on its own application and oversees that application’s security. This involves building, testing, and validation. Security ownership falls on those developers and anyone else integrated within the team. 

But that’s not to say DevSecOps fades completely into the background! Instead, it fills a support and enablement role. This DevSecOps team could work directly with developers on a case-by-case basis or even build useful, internal tools to make life easier. 

Centralized approaches

Alternatively, your individual developers could rally around a centralized DevOps and AppSec (application security) team. This group is responsible for testing and setting standards across different development teams. For example, this central team would define approved base images and lay out a framework for container design that meets stringent security protocols. This plan must harmonize with each application team throughout the organization.

Why might you even use approved parent images? These images have undergone rigorous testing to ensure no show-stopping vulnerabilities exist. They also contain basic sets of functionality aimed at different projects. DevSecOps has to find an ideal compromise between functionality and security to support ongoing engineering efforts. 

Whichever camp you fall into will essentially determine how “piecemeal” your plan is. How your developers work best will also influence your security plan. For instance, your teams might be happiest using their own specialized toolsets. In this case, moving to centralization might cause friction or kick off a transition period. 

On the flip side, will autonomous teams have the knowledge to employ strong security after relying on centralized policies? 

It’s worth mentioning that plenty of companies will keep their existing structures. However, any structural changes like those above can affect container security in the short and long term. 

Diverse tools define the container security workflow

Next, Fani showed us just how robust the container security tooling market is. For each step in the development pipeline, and therefore workflow, there are multiple tools for the job. You have your pick between IDEs. You have repositories and version control. You also have integration tools, storage, and orchestration. 

These serve a purpose for the following facets of development: 

- Local development
- GitOps
- CI/CD
- Registry
- Production container management

Thankfully, there’s no overarching best or “worst” tool for a given job. But, your organization should choose a tool that delivers exceptional container security with minimal disruption. You should even consider how platforms like Docker Desktop can contribute directly or indirectly to your security workflows, through tools like image management and our Software Bill of Materials (SBOM) feature.

You don’t want to redesign your processes to accommodate a tool. For example, it’s possible that Visual Studio Code suits your teams better than IntelliJ IDEA. The same goes for Jenkins vs. CircleCI, or GitHub vs. Bitbucket. Your chosen tool should fit within existing security processes and even enhance them. Not only that, but these tools should mesh well together to avoid productivity hurdles. 

Container security workflow examples

The theories behind security are important but so are concrete examples. Fani kicked off these examples by hopping into an autonomous team workflow. More and more organizations are embracing autonomy since it empowers individual teams. 

Examining an autonomous workflow

As with any modern workflow, development and security will lean on varying degrees of automation. This is the case with Fani’s example, which begins with a code push to a Git repository. That action initiates a Jenkins job, which is a set of sequential, user-defined tasks. Next, something like the Snyk plugin scans for build-breaking issues. 

If Snyk detects no issues, then the Jenkins job is deemed successful. Snyk monitors continuously from then on and alerts teams to any new issues: 
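This pass/fail gate can be sketched in a few lines of shell. The real scanner invocation is shown in a comment (it assumes the Snyk CLI); the scanner is stubbed here so the build-breaking control flow is visible on its own:

```shell
# A build-breaking scan gate. Stub the scanner so the pass/fail
# flow is runnable without a Snyk account.
scan_image() {
  # Real pipeline: snyk container test "$1" --severity-threshold=high
  true  # stub: pretend the scan found no build-breaking issues
}

if scan_image "myapp:latest"; then
  echo "scan passed: promoting build"
else
  echo "scan failed: breaking build" >&2
  exit 1
fi
```

In a Jenkins job, the nonzero exit code from a failed scan is what marks the build as broken.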


When issues are found, your container security tool might flag those build issues, notify developers, provide artifact access, and offer any appropriate remediation steps. From there, the cycle repeats itself. Or, it might be safer to replace vulnerable components or dependencies with alternatives. 

Examining a common base workflow

With DevSecOps at the security helm, processes can look a little different. Hadar walked us through these unique container security stages to highlight DevOps’ key role. This is adjacent to — but somewhat separate from — the developer’s workflows. However, they’re centrally linked by a common registry: 


DevOps begins by choosing an appropriate base image, customizing it, optimizing it, and putting it through its paces to ensure strong security. Approved images travel to the common development registry. If vulnerabilities are found, DevOps fixes them before making the image available internally.

Each developer then starts with a safe, vetted image that passes scanning without sacrificing important, custom software packages. Issues require fixing and bounce you back to square one, while success means pushing your container artifacts to a downstream registry. 

Creating safer containers for the future 

Overall, container security isn’t as complex as many think. By aligning on security and developing core processes alongside tooling, it’s possible to make rapid progress. Automation plays a huge role. And while there are many ways to tackle container security workflows, no single approach definitively takes the cake. 

Safer public base images and custom images are important ingredients while building secure applications. You can watch Fani and Hadar’s complete talk to learn more. You can also read more about the Snyk Extension for Docker Desktop on Docker Hub.

Back Up and Share Docker Volumes with This Extension

When you need to back up, restore, or migrate data from one Docker host to another, volumes are generally the best choice. You can stop containers using the volume, then back up the volume’s directory (such as /var/lib/docker/volumes/<volume-name>). Other alternatives, such as bind mounts, rely on the host machine’s filesystem having a specific directory structure available, for example /tmp/source on UNIX systems like Linux and macOS and C:/Users/John on Windows.

Normally, if you want to back up a data volume, you run a new container that mounts the volume you want to back up (here busybox, though any image that ships tar works), then execute the tar command to produce an archive of the volume content:

docker run --rm \
  -v "$VOLUME_NAME":/backup-volume \
  -v "$(pwd)":/backup \
  busybox \
  tar -zcvf /backup/my-backup.tar.gz /backup-volume

To restore a volume with an existing backup, you can run a new container that mounts the target volume and executes the tar command to decompress the archive into the target volume. 
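That restore might look something like the following sketch, which mirrors the backup command above (busybox is an assumption, as any image that ships tar works, and the archive path matches the earlier example):

```shell
# Restore a volume from the backup archive created earlier.
# Extracting at / recreates the archive's backup-volume/ paths
# inside the mounted volume.
docker run --rm \
  -v "$VOLUME_NAME":/backup-volume \
  -v "$(pwd)":/backup \
  busybox \
  tar -xzvf /backup/my-backup.tar.gz -C /
```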

A quick Google search returns a number of bash scripts that can help you back up volumes, like this one from Docker Captain Bret Fisher. With this script, you can get the job done with the simpler ./vackup export my-volume backup.tar.gz. While scripts like this are totally valid approaches, the Extensions team was wondering: what if we could integrate this tool into Docker Desktop to deliver a better developer experience? Interestingly enough, it all started as a simple demo just the day before going live on Bret’s streaming show!

Now you can back up volumes from Docker Desktop

You can now back up volumes with just a few clicks using the new Volumes Backup & Share extension. This extension is available in the Marketplace and works on macOS, Windows, and Linux. And you can check out the OSS code on GitHub to see how the extension was developed.

How to back up a volume to a local file in your host

What can I do with the extension?

The extension allows you to:

- Back up data that is persisted in a volume (for example, database data from Postgres or MySQL) into a compressed file.
- Upload your backup to Docker Hub and share it with anyone.
- Create a new volume from an existing backup or restore the state of an existing volume.
- Transfer your local volumes to a different Docker host (through SSH).
- Perform other basic volume operations, like cloning, emptying, and deleting a volume.

In the scenario below, John, Alex, and Emma are using Docker Desktop with the Volume Backup & Share extension. John is using the extension to share his volume (my-app-volume) with the rest of their teammates via Docker Hub. The volume is uploaded to Docker Hub as an image (john/my-app-volume:0.0.1) by using the “Export to Registry” option. His colleagues, Alex and Emma, will use the same extension to import the volume from Docker Hub into their own volumes by using the “Import from Registry” option.

Create different types of volume backups

When backing up a volume from the extension, you can select the type of backup:

- A local file: creates a compressed file (gzip’ed tarball) in a desired directory of the host filesystem with the content of the volume.
- A local image: saves the volume data into the /volume-data directory of an existing image’s filesystem. If you inspect the filesystem of this image, you’ll find the backup stored in /volume-data.
- A new image: saves the volume data into the /volume-data directory of a newly created image.
- A registry: pushes a local volume to any image registry, whether local (such as localhost:5000) or hosted, like Docker Hub or GitHub Container Registry. This lets you share a volume with your team in a couple of clicks.

> As of today, the maximum volume size the extension supports pushing to Docker Hub is 10GB. This limit may change in future versions of the extension depending on user feedback.

Restore or import from a volume

Similarly to the different types of volume backups described above, you can import or restore a backup into a new or an existing volume.

You can also select whether you want to restore a volume from a local file, a local image, a new image, or from a registry.

Transfer a volume to another Docker host

You might also want to copy the content of a volume to another host where Docker is running (either Docker Engine or Docker Desktop), like an Ubuntu server or a Raspberry Pi.

From the extension, you can specify both the destination host that the local volume will be copied to (for example, user@) and the destination volume.

> SSH must be enabled and configured between the source and destination Docker hosts. Check to make sure you have the remote host SSH public key in your known_hosts file.

Below is an example of transferring a local volume from Docker Desktop to a Raspberry Pi.
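Conceptually, the transfer is a tar stream piped over SSH between two containers. A manual sketch of the same idea (user@host and my-volume are placeholders, and the docker variant requires Docker and SSH access on both ends):

```shell
# Over SSH, the extension's transfer is conceptually:
#
#   docker run --rm -v my-volume:/from alpine tar -cf - -C /from . \
#     | ssh user@host docker run --rm -i -v my-volume:/to alpine tar -xf - -C /to
#
# The same stream-through-tar pattern, runnable locally with plain directories:
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "volume data" > "$SRC/file.txt"
tar -cf - -C "$SRC" . | tar -xf - -C "$DST"
cat "$DST/file.txt"   # prints: volume data
```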

Perform other operations

The extension provides other volume operations such as view, clone, empty, or delete.

How does it work behind the scenes?

In a nutshell, when a backup or restore operation is about to be carried out, the extension stops all the containers attached to the specified volume to avoid data corruption, then restarts them once the operation is completed.

These operations happen in the background, which means you can carry out more of them in parallel, or leave the extension screen and navigate to other parts of Docker Desktop to continue with your work while the operations are running.

For instance, if you have a Postgres container that uses a volume to persist the database data (i.e. -v my-volume:/var/lib/postgresql/data), the extension will stop the Postgres container attached to the volume, generate a .tar.gz file with all the files that are inside the volume, then start the containers and put the file on the local directory that you have specified.

Note that for open files like databases, it’s usually better to use their preferred backup tool to create a backup file. But if you store that file on a Docker volume, the extension can still help you get the volume into an image or tarball for moving to remote storage for safekeeping.

What’s next?

We invite you to try out the extension and give us feedback here.

And if you haven’t tried Docker Extensions, we encourage you to explore the Extensions Marketplace and install some of them! You can also start developing your own Docker Extensions on all platforms: Windows, WSL2, macOS (both Intel and Apple Silicon), and Linux.

To learn more about the Extensions SDK, have a look at the official documentation. You’ll find tutorials, design guidelines, and everything else you need to build an extension. Once your extension is ready, you can submit it to the Extensions Marketplace here.

Related posts

- Vackup project by Bret Fisher
- Building Vackup – Live Stream on YouTube


Containerizing a Slack Clone App Built with the MERN Stack

The MERN Stack is a fast-growing, open source JavaScript stack that’s gained huge momentum among today’s web developers. MERN is a diverse collection of robust technologies (namely, MongoDB, Express, React, and Node) for developing scalable web applications — supported by frontend, backend, and database components. Node, Express, and React even ranked highly among most-popular frameworks or technologies in Stack Overflow’s 2022 Developer Survey.

How does the MERN Stack work?

MERN has four components:

- MongoDB – a NoSQL database
- ExpressJS – a backend web-application framework for NodeJS
- ReactJS – a JavaScript library for developing UIs from UI components
- NodeJS – a JavaScript runtime environment that enables running JavaScript code outside the browser, among other things

Here’s how those pieces interact within a typical application:

- A user interacts with the frontend, via the web browser, which is built with ReactJS UI components.
- The backend server delivers frontend content, via ExpressJS running atop NodeJS.
- Data is fetched from the MongoDB database before it returns to the frontend. Here, your application displays it for the user.
- Any interaction that causes a data-change request is sent to the Node-based Express server.

Why is the MERN stack so popular?

The MERN stack is popular for the following reasons:

- Easy learning curve – If you’re familiar with JavaScript and JSON, then it’s easy to get started. MERN’s structure lets you easily build a three-tier architecture (frontend, backend, database) with just JavaScript and JSON.
- Reduced context switching – Since MERN uses JavaScript for both frontend and backend development, developers don’t need to switch languages. This boosts development efficiency.
- Open source with active community support – The MERN stack is purely open source, so any developer can build robust web applications. Its frameworks improve coding efficiency and promote faster app development.
- Model-view architecture – MERN supports the model-view-controller (MVC) architecture, enabling a smooth and seamless development process.

Running the Slack Clone app

Key Components

- MongoDB
- Express
- React.js
- Node
- Docker Desktop

Deploying a Slack Clone app is a fast process. You’ll clone the repository, set up the client and backend, then bring up the application. Complete the following steps:

git clone
cd slack-clone-docker
yarn install
yarn start

You can then access Slack Clone App at http://localhost:3000 in your browser:

Why containerize the MERN stack?

The MERN stack gives developers the flexibility to build pages on their server as needed. However, developers can encounter issues as their projects grow. Challenges with compatibility, third-party integrations, and steep learning curves are common for non-JavaScript developers.

First, for the MERN stack to work, developers must run a Node version that’s compatible with each additional stack component. Second, React extensively uses third-party libraries that might lower developer productivity due to integration hurdles and unfamiliarity. Since React is merely a library, it might not help prevent common coding errors during development. As a result, completing a large project with many developers becomes difficult with MERN.

How can you make things easier? Docker simplifies and accelerates your workflows by letting you freely innovate with your choice of tools, application stacks, and deployment environments for each project. You can set up a MERN stack with a single Docker Compose file. This lets you quickly create microservices. This guide will help you completely containerize your Slack clone app.

Containerizing your Slack clone app

Docker helps you containerize your MERN Stack — letting you bundle together your complete Slack clone application, runtime, configuration, and OS-level dependencies. This includes everything needed to ship a cross-platform, multi-architecture web application. 

We’ll explore how to run this app within a Docker container using Docker Official Images. First, you’ll need to download Docker Desktop and complete the installation process. This includes the Docker CLI, Docker Compose, and a user-friendly management UI. These components will each be useful later on.

Docker uses a Dockerfile to create each image’s layers. Each layer stores important changes stemming from your base image’s standard configuration. Let’s create an empty Dockerfile in the root of our project repository.

Containerizing your React frontend

We’ll build a Dockerfile to containerize our React.js frontend and Node.js backend.

A Dockerfile is a plain-text file that contains instructions for assembling a Docker container image. When Docker builds our image via the docker build command, it reads these instructions, executes them, and creates a final image.

Let’s walk through the process of creating a Dockerfile for our application. First create the following empty file with the name Dockerfile.reactUI in the root of your React app:

touch Dockerfile.reactUI

You’ll then need to define your base image in the Dockerfile.reactUI file. Here, we’ve chosen the stable LTS version of the Node Docker Official Image. This comes with every tool and package needed to run a Node.js application:

FROM node:16

Next, let’s quickly create a directory to house our image’s application code. This acts as the working directory for your application:

WORKDIR /app
The following COPY instructions copy the package.json file and the public directory from the host machine into the container image. COPY takes two parameters: the first tells Docker what file(s) you’d like to copy into the image, and the second tells Docker where you want those files to be copied. We’ll copy everything into our working directory called /app:

COPY ./package.json ./package.json
COPY ./public ./public

Next, we need to add our source code into the image. We’ll use the COPY command just like we previously did with our package.json file:

COPY ./src ./src

Then, use yarn install to install the package:

RUN yarn install

The EXPOSE instruction tells Docker which port the container listens on at runtime. You can specify whether the port listens on TCP or UDP. The default is TCP if the protocol isn’t specified:

EXPOSE 3000
Finally, we’ll start a project by using the yarn start command:

CMD ["yarn","start"]

Here’s our complete Dockerfile.reactUI file:

FROM node:16
WORKDIR /app
COPY ./package.json ./package.json
COPY ./public ./public
COPY ./src ./src
RUN yarn install
EXPOSE 3000
CMD ["yarn","start"]

Now, let’s build our image. We’ll run the docker build command with the -f Dockerfile.reactUI flag, which specifies your Dockerfile’s name. The “.” argument tells Docker to use the current directory as the build context and look for the Dockerfile there. The -t flag tags the resulting image:

docker build . -f Dockerfile.reactUI -t slackclone-fe:1

Containerizing your Node.js backend

Let’s walk through the process of creating a Dockerfile for our backend as the next step. First, create the following empty Dockerfile.node in the root of your backend Node app (i.e., the server/ directory). Here’s your complete Dockerfile.node:

FROM node:16
COPY ./package.json ./package.json
COPY ./server.js ./server.js
COPY ./messageModel.js ./messageModel.js
COPY ./roomModel.js ./roomModel.js
COPY ./userModel.js ./userModel.js
RUN yarn install
CMD ["node", "server.js"]

Now, let’s build our image. We’ll run the following docker build command:

docker build . -f Dockerfile.node -t slackclone-be:1

Defining services using a Compose file

Here’s how our services appear within a Docker Compose file:

services:
  slackfrontend:
    build:
      context: .
      dockerfile: Dockerfile.reactUI
    ports:
      - "3000:3000"
    depends_on:
      - db
  nodebackend:
    build:
      context: ./server
      dockerfile: Dockerfile.node
    ports:
      - "9000:9000"
    depends_on:
      - db
  db:
    image: mongo:latest
    volumes:
      - slack_db:/data/db
    ports:
      - "27017:27017"

volumes:
  slack_db:

Your sample application has the following parts:

- Three services backed by Docker images: your React.js frontend, Node.js backend, and Mongo database
- A frontend accessible via port 3000
- The depends_on parameter, which ensures the database service starts before the frontend and backend services
- One persistent named volume called slack_db, attached to the database service, which ensures the Mongo data persists across container restarts

You can clone the repository or download the docker-compose.yml file directly from here.

Bringing up the container services

You can start the MERN application stack by running the following command:

docker compose up -d --build

Then, use the docker compose ps command to confirm that your stack is running properly. Your terminal will produce the following output:

docker compose ps
Name                                 Command   State   Ports
------------------------------------------------------------------------------
slack-clone-docker_db_1              mongod    Up      0.0.0.0:27017->27017/tcp
slack-clone-docker_nodebackend_1     node …    Up      0.0.0.0:9000->9000/tcp
slack-clone-docker_slackfrontend_1   yarn …    Up      0.0.0.0:3000->3000/tcp

Viewing the containers via Docker Dashboard

You can also leverage the Docker Dashboard to view your container’s ID and easily access or manage your application:

Viewing the Messages

You can download and use MongoDB Compass — an intuitive GUI for querying, optimizing, and analyzing your MongoDB data. This tool provides detailed schema visualization, real-time performance metrics, and sophisticated query abilities. It lets you view key insights, drag and drop to build pipelines, and more.
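Because the Compose file maps MongoDB’s default port to the host, Compass can connect with the standard local connection string (a sketch; this assumes the 27017 port mapping shown above):

```
mongodb://localhost:27017
```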


Congratulations! You’ve successfully learned how to containerize a MERN-backed Slack application with Docker. With a single YAML file, we’ve demonstrated how Docker Compose helps you easily build and deploy your MERN stack in seconds. With just a few extra steps, you can apply this tutorial while building applications with even greater complexity. Happy developing. 


- View the project source code
- Learn about MongoDB
- Get started with React
- Get started with ExpressJS
- Build Your NodeJS Docker image


Four Ways Docker Boosts Enterprise Software Development

In this guest post, David Balakirev, Regional CTO at Adnovum, describes how they show the benefits of container technology based on Docker. Adnovum is a Swiss software company which offers comprehensive support in the fast and secure digitalization of business processes from consulting and design to implementation and operation.

1. Containers provide standardized development

Everybody wins when solution providers focus on providing value and not on the intricacies of the target environment. This is where containers shine.

With the wide-scale adoption of container technology products (like Docker) and the continued spread of standard container runtime platforms (like Kubernetes), developers have fewer compatibility concerns to consider. While it’s still important to be familiar with the target environment, the specific operating system, installed utilities, and services are less of a concern as long as we can work with the same platform during development. We believe this is one of the reasons for the growing number of new container runtime options.

For workloads targeting on-premises environments, the runtime platform can be selected based on the level of orchestration needed. Some teams decide to run their handful of services via Docker Compose; this is typical for development and testing environments, and not unheard of for production installations. For use cases that warrant a full-blown container orchestrator, Kubernetes (and derivatives like OpenShift) is still dominant.

Those developing for the cloud can choose from a plethora of options. Kubernetes is present in all major cloud platforms, but there are also options for those with monolithic workloads, from semi to fully managed services to get those simple web applications out there (like Azure App Services or App Engine from the Google Cloud Platform).

For those venturing into serverless, the deployment unit is typically either a container image or source code that the platform then turns into a container.

With all of these options, it’s been interesting to follow how our customers adopted container technology. Smaller firms’ IT strategies seemed to adapt faster to working with solution providers like us.

But larger companies are also catching up. We welcome the trend where enterprise customers recognize the benefits of building and shipping software using containers — and other cloud-native technologies.

Overall, we can say that shipping solutions as containers is becoming the norm. We use Docker at Adnovum, and we’ve seen specific benefits for our developers. Let’s look at those benefits more.

2. Limited exposure means more security

Targeting container platforms (as opposed to traditional OS packages) also comes with security consequences. For example, say we’re given a completely managed Kubernetes platform. This means the client’s IT team is responsible for configuring and operating the cluster in a secure fashion. In these cases, our developers can focus their attention on the application we deliver. Thanks to container technology, we can further limit exposure to various attacks and vulnerabilities.

This ties into the basic idea of containers: by only packaging what is strictly necessary for your application, you may also reduce the possible attack surface. This can be achieved by building images from scratch or by choosing secure base images to enclose your deliverables. When choosing secure base images on Docker Hub, we recommend filtering for container images produced by verified parties:

There are also cases when the complete packaging process is handled by your development tool(s). We use Spring Boot in many of our web application projects. Spring Boot incorporates buildpacks, which can build Docker OCI images from your web applications in an efficient and reliable way. This relieves developers from hunting for base images and reduces (but does not completely eliminate) the need to do various optimizations.


Developers using Docker Desktop can also try local security scanning to spot vulnerabilities before they enter your code and artifact repositories.

3. Containers support diverse developer environments

While Adnovum specializes in web and mobile application development, within those boundaries we utilize a wide range of technologies. Supporting such heterogeneous environments can be tricky.

Imagine we have one Spring Boot developer who works on Linux, and another who develops the Angular frontend on a Mac. They both rely on a set of tools and dependencies to develop the project on their machines:

- A local database instance
- Test doubles (mocks, etc.) for third-party services
- Browsers, sometimes in multiple versions
- Developer tooling, including runtimes and build tools

In our experience, it can be difficult to support these tools across multiple operating systems if they’re installed natively. Instead, we try to push as many of these into containers as possible. This helps us to align the developer experience and reduce maintenance costs across platforms.

Our developers working on Windows or Mac can use Docker Desktop, which not only allows them to run containers but also brings along some additional functionality. (Docker Desktop is also available on Linux; alternatively, you may opt to use Docker Engine directly.) For example, we can use docker-compose out of the box, which means we don’t need to worry about ensuring people can install it on various operating systems. Doing this over many such tools can add up to a significant cognitive and cost relief for your support team.
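As a sketch of this approach, a docker-compose.yml can declare the shared project dependencies once, so every developer gets the same database and test double regardless of host OS (the service names and image versions below are illustrative, not from an actual Adnovum project):

```yaml
# Illustrative docker-compose.yml: project dependencies run as containers
# identically on Linux, macOS, and Windows.
services:
  db:
    image: postgres:15-alpine          # local database instance
    environment:
      POSTGRES_PASSWORD: devonly
    ports:
      - "5432:5432"
  payments-mock:
    image: wiremock/wiremock:3.3.1     # test double for a 3rd-party service
    ports:
      - "8089:8080"
```

A single `docker compose up -d` then replaces per-OS installation instructions for each of these dependencies.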

Outsourcing your dependencies this way is also useful if your developers need to work across multiple projects at once. After all, nobody enjoys installing multiple versions of databases, browsers, and tools.

We can typically apply this technique to our more recent projects, whereas for older projects with technology predating the mass adoption of Docker, we still have homework to do.

4. Containers aid reproducibility

As professional software makers, we want to ensure that not only do we provide excellent solutions for our clients, but if there are any concerns (functionality or security), we can trace back the issue to the exact code change which produced the artifact — typically a container image for web applications. Eventually, we may also need to rebuild a fixed version of said artifact, which can prove to be challenging. This is because build environments also evolve over time, continuously shifting the compatibility window of what they offer.

In our experience, automation (specifically Infrastructure-as-code) is key for providing developers with a reliable and scalable build infrastructure. We want to be able to re-create environments swiftly in case of software or hardware failure, or provision infrastructure components according to older configuration parameters for investigations. Our strategy is to manage all infrastructure via tools like Ansible or Terraform, and we strongly encourage engineers to avoid managing services by hand. This is true for our data-center and cloud environments as well.

Whenever possible, we also prefer running services as containers, instead of installing them as traditional packages. You’ll find many of the trending infrastructure services like NGINX and PostgreSQL on Docker Hub.

We try to push hermetic builds because they can bootstrap their own dependencies, which significantly decreases their reliance on what is installed in the build context that your specific CI/CD platform offers. Historically, we had challenges with supporting automated UI tests which relied on browsers installed on the machine. As the number of our projects grew, their expectations for browser versions diverged. This quickly became difficult to support even with our dedication to automation. Later, we faced similar challenges with tools like Node.js and the Java JDK where it was almost impossible to keep up with demand.

Eventually, we decided to adopt bootstrapping and containers in our automated builds, allowing teams to define what version of Chrome or Java their project needs. During the CI/CD pipeline, the required dependency version is downloaded before the build if it’s not already cached.
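A minimal sketch of such version bootstrapping, assuming the project declares its tool versions in a `.tool-versions`-style file — the file format and helper below are hypothetical illustrations, not Adnovum’s actual tooling:

```shell
#!/bin/sh
set -eu

# Hypothetical helper: read the version a project declares for a tool
# from a ".tool-versions"-style file ("<tool> <version>" per line).
resolve_version() {
  awk -v tool="$1" '$1 == tool { print $2 }' "$2"
}

# Example project declaration (normally checked into the repo):
cat > .tool-versions.demo <<'EOF'
java 17.0.8
node 18.8.0
chrome 116.0
EOF

# The CI pipeline resolves the declared version before the build,
# then downloads that version only if it isn't already cached.
resolve_version java .tool-versions.demo    # prints 17.0.8
resolve_version chrome .tool-versions.demo  # prints 116.0
```

The point is that the version lives in the repository, not on the build machine, so rebuilding an old commit resolves the old tool versions.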

Immutability means our dependencies, and our products for that matter, never change after they’re built. Unfortunately, this isn’t exactly how Docker tags work. In fact, Docker tags are mutable by design, and this can be confusing at first if you are accustomed to SemVer.

Let’s say your Dockerfile starts like this:

FROM acme:1.2.3

It would only be logical to assume that whenever you (re-)build your own image, the same base image is used. In reality, the tag could point to a different image if somebody decides to publish a new image under the same tag. They may do this for a number of reasons: sometimes out of necessity, but it could also be malicious.

If you want to make sure you’ll be using the exact same image as before, you can refer to images by their digest. This is a trade-off between usability and security. While using digests brings you closer to truly reproducible builds, it also means that if the authors of a base image issue a new version under the same tag, your builds won’t pick up the latest version. Whichever side you lean towards, you should use base images from trusted sources and introduce vulnerability scanning into your pipelines.
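The digest-pinned variant of the earlier FROM line looks like this (the digest shown is a placeholder, not a real published digest):

```dockerfile
# Tag-based reference: mutable, may resolve to a different image over time.
#   FROM acme:1.2.3

# Digest-based reference: immutable, always resolves to the same bytes.
# The digest below is a placeholder; look up the real one locally with
#   docker images --digests acme
FROM acme@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

With a digest, a rebuild either uses exactly the same base image or fails outright; it can never silently pick up a replaced one.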

Combining immutability (with all its challenges), automation, and hermetic builds, we’ll be able to rebuild older versions of our code. You may need to do this to reproduce a bug — or to address vulnerabilities before you ship a fixed artifact.

While we still see opportunities for ourselves to improve in our journey towards reproducibility, employing containers along the way was a decision we would make again.


Containers, and specifically Docker, can be a significant boost for all groups of developers from small shops to enterprises. As with most topics, getting to know the best practices comes through experience and using the right sources for learning.

To get the most out of Docker’s wide range of features, make sure to consult the documentation.

To learn more about how Adnovum helps companies and organizations to reach their digital potential, please visit our website.

How to Use the Alpine Docker Official Image

With its container-friendly design, the Alpine Docker Official Image (DOI) helps developers build and deploy lightweight, cross-platform applications. It’s based on Alpine Linux, which debuted in 2005, making it one of today’s newest major Linux distros.

While some developers express security concerns when using relatively newer images, Alpine has earned a solid reputation. Developers favor Alpine for the following reasons:  

- It has a smaller footprint, and therefore a smaller attack surface (it even evaded 2014’s ShellShock Bash exploit!).
- It takes up less disk space.
- It offers a strong base for customization.
- It’s built with simplicity in mind.

In fact, the Alpine DOI is one of our most popular container images on Docker Hub. To help you get started, we’ll discuss this image in greater detail and how to use the Alpine Docker Official Image with your next project. Plus, we’ll explore using Alpine to grab the slimmest image possible. Let’s dive in!

In this tutorial:

- What is the Alpine Docker Official Image?
- When to use Alpine
- How to run Alpine in Docker
- Use a quick pull command
- Build your Dockerfile
- Grabbing the slimmest possible image
- Get up and running with Alpine today

What is the Alpine Docker Official Image?

The Alpine DOI is a building block for Alpine Linux Docker containers. It’s an executable software package that tells Docker and your application how to behave. The image includes source code, libraries, tools, and other core dependencies that your application needs. These components help Alpine Linux function while enabling developer-centric features. 

The Alpine Docker Official Image differs from other Linux-based images in a few ways. First, Alpine is based on the musl libc implementation of the C standard library — and uses BusyBox instead of GNU coreutils. While GNU packages many Linux-friendly programs together, BusyBox bundles a smaller number of core functions within one executable. 

While our Ubuntu and Debian images leverage glibc and coreutils, Alpine’s alternatives (musl and BusyBox) are comparatively lightweight and resource-friendly, containing fewer extensions and less bloat.

As a result, Alpine appeals to developers who don’t need uncompromising compatibility or functionality from their image. Our Alpine DOI is also user-friendly and straightforward since there are fewer moving parts.

Alpine Linux performs well on resource-limited devices, which is fitting for developing simple applications or spinning up servers. Your containers will consume less RAM and less storage space. 

The Alpine Docker Official Image also offers the following features:

- The robust apk package manager
- A rapid, consistent development-and-release cycle vs. other Linux distributions
- Multiple supported tags and architectures, like amd64, arm/v6+, arm64, and ppc64le

Multi-arch support lets you run Alpine on desktops, mobile devices, rack-mounted servers, Raspberry Pis, and even newer M-series Macs. Overall, Alpine pairs well with a wide variety of embedded systems. 

These are only some of the advantages to using the Alpine DOI. Next, we’ll cover how to harness the image for your application. 

When to use Alpine

You may be interested in using Alpine, but find yourself asking, “When should I use it?” Containerized Alpine shines in some key areas: 

- Creating servers
- Router-based networking
- Development/testing environments

While there are some other uses for Alpine, most projects will fall under these three categories. Overall, our Alpine container image excels in situations where space savings and security are critical.

How to run Alpine in Docker

Before getting started, download Docker Desktop and then install it. Docker Desktop is built upon Docker Engine and bundles together the Docker CLI, Docker Compose, and other core components. Launching Docker Desktop also lets you use Docker CLI commands (which we’ll get into later). Finally, the included Docker Dashboard will help you visually manage your images and containers. 

After completing these steps, you’re ready to Dockerize Alpine!

Note: For Linux users, Docker will still work perfectly fine if you have it installed externally on a server, or through your distro’s package manager. However, Docker Desktop for Linux does save time and effort by bundling all necessary components together — while aiding productivity through its user-friendly GUI. 

Use a quick pull command

You’ll first have to pull the Alpine Docker Official Image before using it for your project. The fastest method involves running docker pull alpine from your terminal. This grabs the alpine:latest image (the most current available version) from Docker Hub and downloads it locally to your machine.

Your terminal output should show when your pull is complete — and which alpine version you’ve downloaded. You can also confirm this within Docker Desktop. Navigate to the Images tab from the left sidebar, and a list of downloaded images will populate on the right. You’ll see your alpine image, its tag, and its minuscule (yes, you saw that right) 5.29 MB size.

Other Linux distro images like Ubuntu, Debian, and Fedora are many, many times larger than Alpine.

That’s a quick introduction to using the Alpine Official Image alongside Docker Desktop. But it’s important to remember that every Alpine DOI version originates from a Dockerfile. This plain-text file contains instructions that tell Docker how to build an image layer by layer. Check out the Alpine Linux GitHub repository for more Dockerfile examples. 

Next up, we’ll cover the significance of these Dockerfiles to Alpine Linux, some CLI-based workflows, and other key information.

Build your Dockerfile

Because Alpine is a standard base for container images, we recommend building on top of it within a Dockerfile. Specify your preferred alpine image tag and add instructions to create this file. Our example takes alpine:3.14 and runs an executable mysql client with it: 

FROM alpine:3.14
RUN apk add --no-cache mysql-client
ENTRYPOINT ["mysql"]

In this case, we’re starting from a slim base image and adding our mysql-client using Alpine’s standard package manager. Overall, this lets us run commands against our MySQL database from within our application. 

This is just one of the many ways to get your Alpine DOI up and running. In particular, Alpine is well-suited to server builds. To see this in action, check out Kathleen Juell’s presentation on serving static content with Docker Compose, Next.js, and NGINX. Navigate to timestamp 7:07 within the embedded video. 

The Alpine Official Image has a close relationship with other technologies (something that other images lack). Many of our Docker Official Images support -alpine tags. For instance, our earlier example of serving static content leverages the node:16-alpine image as a builder. 

This relationship makes Alpine and multi-stage builds an ideal pairing. Since the primary goal of a multi-stage build is to reduce your final image size, we recommend starting with one of the slimmest Docker Official Images.
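A sketch of such a pairing, along the lines of the static-content example above (the file paths are illustrative):

```dockerfile
# Build stage: install dependencies and build the static assets
# on an Alpine-based Node image.
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: serve the built assets from an Alpine-based NGINX image.
# The build tooling and node_modules never reach the final image.
FROM nginx:1.23.1-alpine
COPY --from=builder /app/build /usr/share/nginx/html
```

Only the compiled output crosses the stage boundary, so the final image carries NGINX and your assets — nothing else.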

Grabbing the slimmest possible image

Pulling an -alpine version of a given image typically yields the slimmest result. You can do this using our earlier docker pull [image] command. Or you can create a Dockerfile and specify this image version — while leaving room for customization with added instructions. 

In either case, here are some results using a few of our most popular images. You can see how image sizes change with these tags:

| Image tag | Image size | image:[version number]-alpine size |
| --- | --- | --- |
| python:3.9.13 | 867.66 MB | 46.71 MB |
| node:18.8.0 | 939.71 MB | 164.38 MB |
| nginx:1.23.1 | 134.51 MB | 22.13 MB |

We’ve used the :latest tag since this is the default image tag Docker grabs from Docker Hub. As shown above with Python, pulling the -alpine image version reduces its footprint by nearly 95%! 

From here, the build process (when working from a Dockerfile) becomes much faster. Applications based on slimmer images spin up quicker. You’ll also notice that docker pull and various docker run commands execute more swiftly with -alpine images.

However, remember that you’ll likely have to use this tag with a specified version number for your parent image. Running docker pull python-alpine or docker pull python:latest-alpine won’t work. Docker will alert you that the image isn’t found, the repo doesn’t exist, the command is invalid, or login information is required. This applies to any image. 

Get up and running with Alpine today

The Alpine Docker Official Image shines thanks to its simplicity and small size. It’s a fantastic base image — perhaps the most popular amongst Docker users — and offers plenty of room for customization. Alpine is arguably the most user-friendly, containerized Linux distro. We’ve tackled how to use the Alpine Official Image, and showed you how to get the most from it. 

Want to use Alpine for your next application or server? Pull the Alpine Official Image today to jumpstart your build process. You can also learn more about supported tags on Docker Hub. 

Additional resources

- Browse the official Alpine Wiki.
- Learn some Alpine fundamentals via the Alpine newbie Wiki page.
- Read similar articles about Docker images.
- Download and install the latest version of Docker Desktop.