Back Up and Share Docker Volumes with This Extension

When you need to back up, restore, or migrate data from one Docker host to another, volumes are generally the best choice. You can stop the containers using the volume, then back up the volume’s directory (such as /var/lib/docker/volumes/<volume-name>). Alternatives such as bind mounts rely on the host machine’s filesystem having a specific directory structure available (for example, /tmp/source on UNIX systems such as Linux and macOS, or C:/Users/John on Windows).

Normally, if you want to back up a data volume, you run a new container using the volume you want to back up, then execute the tar command to produce an archive of the volume content:

docker run --rm \
-v "$VOLUME_NAME":/backup-volume \
-v "$(pwd)":/backup \
busybox \
tar -zcvf /backup/my-backup.tar.gz /backup-volume

To restore a volume with an existing backup, you can run a new container that mounts the target volume and executes the tar command to decompress the archive into the target volume. 
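
For example, a restore along these lines (a minimal sketch reusing the variable and file names from the backup command above) would unpack the archive back into the volume:

# The archive stores paths relative to /, so extracting at / restores into /backup-volume
docker run --rm \
-v "$VOLUME_NAME":/backup-volume \
-v "$(pwd)":/backup \
busybox \
tar -xzvf /backup/my-backup.tar.gz -C /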

A quick Google search returns a number of bash scripts that can help you back up volumes, like this one from Docker Captain Bret Fisher. With this script, you can get the job done with a single, simpler command: ./vackup export my-volume backup.tar.gz. While scripts like this are perfectly valid approaches, the Extensions team wondered: what if we could integrate this tool into Docker Desktop to deliver a better developer experience? Interestingly enough, it all started as a simple demo just the day before going live on Bret’s streaming show!

Now you can back up volumes from Docker Desktop

You can now back up volumes with just a few clicks using the new Volumes Backup & Share extension. This extension is available in the Marketplace and works on macOS, Windows, and Linux. And you can check out the OSS code on GitHub to see how the extension was developed.

How to back up a volume to a local file in your host

What can I do with the extension?

The extension allows you to:

Back up data that is persisted in a volume (for example, database data from Postgres or MySQL) into a compressed file.
Upload your backup to Docker Hub and share it with anyone.
Create a new volume from an existing backup, or restore the state of an existing volume.
Transfer your local volumes to a different Docker host (through SSH).
Perform other basic volume operations, such as cloning, emptying, or deleting a volume.

In the scenario below, John, Alex, and Emma are using Docker Desktop with the Volume Backup & Share extension. John is using the extension to share his volume (my-app-volume) with the rest of their teammates via Docker Hub. The volume is uploaded to Docker Hub as an image (john/my-app-volume:0.0.1) by using the “Export to Registry” option. His colleagues, Alex and Emma, will use the same extension to import the volume from Docker Hub into their own volumes by using the “Import from Registry” option.

Create different types of volume backups

When backing up a volume from the extension, you can select the type of backup:

A local file: creates a compressed file (gzip’ed tarball) in a desired directory of the host filesystem with the content of the volume.
A local image: saves the volume data into the /volume-data directory of an existing image filesystem. If you inspect the filesystem of this image, you’ll find the backup stored in /volume-data.
A new image: saves the volume data into the /volume-data directory of a newly created image.
A registry: pushes a local volume to any image registry, whether local (such as localhost:5000) or hosted like Docker Hub or GitHub Container Registry. This allows you to share a volume with your team in a couple of clicks.

> As of today, the maximum volume size the extension supports pushing to Docker Hub is 10GB. This limit may change in future versions of the extension depending on user feedback.

Restore or import from a volume

Just as with the backup types described above, you can import or restore a backup into a new or an existing volume.

You can also select whether you want to restore a volume from a local file, a local image, a new image, or from a registry.

Transfer a volume to another Docker host

You might also want to copy the content of a volume to another host where Docker is running (either Docker Engine or Docker Desktop), like an Ubuntu server or a Raspberry Pi.

From the extension, you can specify both the destination host that the local volume will be copied to (for example, user@192.168.1.50) and the destination volume.

> SSH must be enabled and configured between the source and destination Docker hosts. Check to make sure you have the remote host’s SSH public key in your known_hosts file.
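
If the remote host isn’t in your known_hosts file yet, one way to add it is with ssh-keyscan (shown here with the example address from above; verify the key fingerprint through another channel before trusting it):

# Fetch and append the remote host's SSH public key
ssh-keyscan -H 192.168.1.50 >> ~/.ssh/known_hosts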

Below is an example of transferring a local volume from Docker Desktop to a Raspberry Pi.

Perform other operations

The extension provides other volume operations such as view, clone, empty, or delete.

How does it work behind the scenes?

In a nutshell, when a backup or restore operation is about to be carried out, the extension stops all the containers attached to the specified volume to avoid data corruption, and then restarts them once the operation is completed.

These operations happen in the background, which means you can run several of them in parallel, or leave the extension screen and navigate to other parts of Docker Desktop to continue with your work while the operations are running.

For instance, if you have a Postgres container that uses a volume to persist the database data (i.e., -v my-volume:/var/lib/postgresql/data), the extension will stop the Postgres container attached to the volume, generate a .tar.gz file with all the files inside the volume, then start the containers and place the file in the local directory you specified.
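
Conceptually, what the extension does for that Postgres example is roughly equivalent to running the following commands yourself (the container and volume names here are just placeholders):

# Stop the container(s) attached to the volume to avoid data corruption
docker stop my-postgres

# Archive the volume contents into the current directory
docker run --rm \
-v my-volume:/backup-volume \
-v "$(pwd)":/backup \
busybox \
tar -zcvf /backup/my-volume-backup.tar.gz /backup-volume

# Restart the container(s) once the backup is complete
docker start my-postgres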

Note that for open files like databases, it’s usually better to use the application’s preferred backup tool to create the backup file. But if you store that file on a Docker volume, the extension is still a handy way to get the volume into an image or tarball for moving to remote storage for safekeeping.

What’s next?

We invite you to try out the extension and give us feedback here.

And if you haven’t tried Docker Extensions, we encourage you to explore the Extensions Marketplace and install some of them! You can also start developing your own Docker Extensions on all platforms: Windows, WSL2, macOS (both Intel and Apple Silicon), and Linux.

To learn more about the Extensions SDK, have a look at the official documentation. You’ll find tutorials, design guidelines, and everything else you need to build an extension. Once your extension’s ready, you can submit it to the Extensions Marketplace here.

Related posts

Vackup project by Bret Fisher
Building Vackup – Live Stream on YouTube


Containerizing a Slack Clone App Built with the MERN Stack

The MERN Stack is a fast-growing, open source JavaScript stack that’s gained huge momentum among today’s web developers. MERN is a diverse collection of robust technologies (namely, Mongo, Express, React, and Node) for developing scalable web applications — supported by frontend, backend, and database components. Node, Express, and React even ranked highly among the most popular frameworks and technologies in Stack Overflow’s 2022 Developer Survey.

How does the MERN Stack work?

MERN has four components:

MongoDB – a NoSQL database
ExpressJS – a backend web-application framework for NodeJS
ReactJS – a JavaScript library for developing UIs from UI components
NodeJS – a JavaScript runtime environment that enables running JavaScript code outside the browser, among other things

Here’s how those pieces interact within a typical application:

A user interacts with the frontend, via the web browser, which is built with ReactJS UI components.
The backend server delivers frontend content, via ExpressJS running atop NodeJS.
Data is fetched from the MongoDB database before it returns to the frontend. Here, your application displays it for the user.
Any interaction that causes a data-change request is sent to the Node-based Express server.

Why is the MERN stack so popular?

MERN stack is popular due to the following reasons:

Easy learning curve – If you’re familiar with JavaScript and JSON, then it’s easy to get started. MERN’s structure lets you easily build a three-tier architecture (frontend, backend, database) with just JavaScript and JSON.
Reduced context switching – Since MERN uses JavaScript for both frontend and backend development, developers don’t need to worry about switching languages. This boosts development efficiency.
Open source and active community support – The MERN stack is purely open source. All developers can build robust web applications. Its frameworks improve the coding efficiency and promote faster app development.
Model-view architecture – MERN supports the model-view-controller (MVC) architecture, enabling a smooth and seamless development process.

Running the Slack Clone app

Key Components

MongoDB
Express
React.js
Node
Docker Desktop

Deploying a Slack Clone app is a fast process. You’ll clone the repository, set up the client and backend, then bring up the application. Complete the following steps:

git clone https://github.com/dockersamples/slack-clone-docker
cd slack-clone-docker
yarn install
yarn start

You can then access Slack Clone App at http://localhost:3000 in your browser:

Why containerize the MERN stack?

The MERN stack gives developers the flexibility to build pages on their server as needed. However, developers can encounter issues as their projects grow. Challenges with compatibility, third-party integrations, and steep learning curves are common for non-JavaScript developers.

First, for the MERN stack to work, developers must run a Node version that’s compatible with each additional stack component. Second, React extensively uses third-party libraries that might lower developer productivity due to integration hurdles and unfamiliarity. React is merely a library and might not help prevent common coding errors during development. Completing a large project with many developers becomes difficult with MERN.

How can you make things easier? Docker simplifies and accelerates your workflows by letting you freely innovate with your choice of tools, application stacks, and deployment environments for each project. You can set up a MERN stack with a single Docker Compose file. This lets you quickly create microservices. This guide will help you completely containerize your Slack clone app.

Containerizing your Slack clone app

Docker helps you containerize your MERN Stack — letting you bundle together your complete Slack clone application, runtime, configuration, and OS-level dependencies. This includes everything needed to ship a cross-platform, multi-architecture web application. 

We’ll explore how to run this app within a Docker container using Docker Official Images. First, you’ll need to download Docker Desktop and complete the installation process. This includes the Docker CLI, Docker Compose, and a user-friendly management UI. These components will each be useful later on.

Docker uses a Dockerfile to create each image’s layers. Each layer stores important changes stemming from your base image’s standard configuration. Let’s create an empty Dockerfile in the root of our project repository.

Containerizing your React frontend

We’ll build a Dockerfile to containerize our React.js frontend and Node.js backend.

A Dockerfile is a plain-text file that contains instructions for assembling a Docker container image. When Docker builds our image via the docker build command, it reads these instructions, executes them, and creates a final image.

Let’s walk through the process of creating a Dockerfile for our application. First create the following empty file with the name Dockerfile.reactUI in the root of your React app:

touch Dockerfile.reactUI

You’ll then need to define your base image in the Dockerfile.reactUI file. Here, we’ve chosen the stable LTS version of the Node Docker Official Image. This comes with every tool and package needed to run a Node.js application:

FROM node:16

Next, let’s quickly create a directory to house our image’s application code. This acts as the working directory for your application:

WORKDIR /app

The following COPY instructions copy the package.json file and the public directory from the host machine to the container image. The COPY command takes two parameters. The first tells Docker what file(s) you’d like to copy into the image. The second tells Docker where you want those files to be copied. We’ll copy everything into our working directory called /app:

COPY ./package.json ./package.json
COPY ./public ./public

Next, we need to add our source code into the image. We’ll use the COPY command just like we previously did with our package.json file:

COPY ./src ./src

Then, use yarn install to install the project’s dependencies:

RUN yarn install

The EXPOSE instruction tells Docker which port the container listens on at runtime. You can specify whether the port listens on TCP or UDP. The default is TCP if the protocol isn’t specified:

EXPOSE 3000

Finally, we’ll start a project by using the yarn start command:

CMD ["yarn","start"]

Here’s our complete Dockerfile.reactUI file:

FROM node:16
WORKDIR /app
COPY ./package.json ./package.json
COPY ./public ./public
COPY ./src ./src
RUN yarn install
EXPOSE 3000
CMD ["yarn","start"]

Now, let’s build our image. We’ll run the docker build command with the -f Dockerfile.reactUI flag. The -f flag specifies your Dockerfile name. The “.” argument tells Docker to use the current directory as the build context and look for the Dockerfile there. The -t flag tags the resulting image:

docker build . -f Dockerfile.reactUI -t slackclone-fe:1
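
If you’d like to sanity-check the image on its own before wiring everything together with Compose, you can run it directly (a quick sketch; the frontend won’t be able to reach the backend or database yet):

docker run --rm -p 3000:3000 slackclone-fe:1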

Containerizing your Node.js backend

Let’s walk through the process of creating a Dockerfile for our backend as the next step. First, create a file named Dockerfile.node in the root of your backend Node app (i.e., the server/ directory) and add the following instructions. Here’s your complete Dockerfile.node:

FROM node:16
WORKDIR /app
COPY ./package.json ./package.json
COPY ./server.js ./server.js
COPY ./messageModel.js ./messageModel.js
COPY ./roomModel.js ./roomModel.js
COPY ./userModel.js ./userModel.js
RUN yarn install
EXPOSE 9000
CMD ["node", "server.js"]

Now, let’s build our image. We’ll run the following docker build command:

docker build . -f Dockerfile.node -t slackclone-be:1

Defining services using a Compose file

Here’s how our services appear within a Docker Compose file:

services:
  slackfrontend:
    build:
      context: .
      dockerfile: Dockerfile.reactUI
    ports:
      - "3000:3000"
    depends_on:
      - db
  nodebackend:
    build:
      context: ./server
      dockerfile: Dockerfile.node
    ports:
      - "9000:9000"
    depends_on:
      - db
  db:
    volumes:
      - slack_db:/data/db
    image: mongo:latest
    ports:
      - "27017:27017"
volumes:
  slack_db:

Your sample application has the following parts:

Three services backed by Docker images: your React.js frontend, Node.js backend, and Mongo database
A frontend accessible via port 3000
The depends_on parameter, which ensures the database service is created before the frontend and backend services start
One persistent named volume called slack_db, which is attached to the database service and ensures the Mongo data is persisted across container restarts

You can clone the repository or download the docker-compose.yml file directly from here.

Bringing up the container services

You can start the MERN application stack by running the following command:

docker compose up -d --build

Then, use the docker compose ps command to confirm that your stack is running properly. Your terminal will produce the following output:

docker compose ps
Name                                 Command                        State   Ports
----------------------------------------------------------------------------------------------------
slack-clone-docker_db_1              docker-entrypoint.sh mongod    Up      0.0.0.0:27017->27017/tcp
slack-clone-docker_nodebackend_1     docker-entrypoint.sh node …    Up      0.0.0.0:9000->9000/tcp
slack-clone-docker_slackfrontend_1   docker-entrypoint.sh yarn …    Up      0.0.0.0:3000->3000/tcp
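
From here you can also follow the logs of an individual service, or tear the stack down when you’re finished (service names match those in the Compose file above):

# Tail the backend logs
docker compose logs -f nodebackend

# Stop and remove the containers (the slack_db named volume is preserved)
docker compose down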

Viewing the containers via Docker Dashboard

You can also leverage the Docker Dashboard to view your container’s ID and easily access or manage your application:

Viewing the Messages

You can download and use MongoDB Compass — an intuitive GUI for querying, optimizing, and analyzing your MongoDB data. This tool provides detailed schema visualization, real-time performance metrics, and sophisticated querying abilities. It lets you view key insights, drag and drop to build pipelines, and more.

Conclusion

Congratulations! You’ve successfully learned how to containerize a MERN-backed Slack application with Docker. With a single YAML file, we’ve demonstrated how Docker Compose helps you easily build and deploy your MERN stack in seconds. With just a few extra steps, you can apply this tutorial while building applications with even greater complexity. Happy developing. 

References:

View the project source code
Learn about MongoDB
Get started with React
Get started with ExpressJS
Build Your NodeJS Docker image


Four Ways Docker Boosts Enterprise Software Development

In this guest post, David Balakirev, Regional CTO at Adnovum, describes how they show the benefits of container technology based on Docker. Adnovum is a Swiss software company which offers comprehensive support in the fast and secure digitalization of business processes from consulting and design to implementation and operation.

1. Containers provide standardized development

Everybody wins when solution providers focus on providing value and not on the intricacies of the target environment. This is where containers shine.

With the wide-scale adoption of container technology products (like Docker) and the continued spread of standard container runtime platforms (like Kubernetes), developers have less compatibility aspects to consider. While it’s still important to be familiar with the target environment, the specific operating system, installed utilities, and services are less of a concern as long as we can work with the same platform during development. We believe this is one of the reasons for the growing number of new container runtime options.

For workloads targeting on-premises environments, the runtime platform can be selected based on the level of orchestration needed. Some teams decide to run their handful of services via Docker Compose; this is typical for development and testing environments, and not unheard of for production installations. For use cases which warrant a full-blown container orchestrator, Kubernetes (and derivatives like OpenShift) are still dominant.

Those developing for the cloud can choose from a plethora of options. Kubernetes is present in all major cloud platforms, but there are also options for those with monolithic workloads, from semi to fully managed services to get those simple web applications out there (like Azure App Services or App Engine from the Google Cloud Platform).

For those venturing into serverless, the deployment unit is typically either a container image or source code which then a platform turns into a container.

With all of these options, it’s been interesting to follow how our customers have adopted container technology. The IT strategies of smaller firms seemed to adapt faster to working with solution providers like us.

But larger companies are also catching up. We welcome the trend where enterprise customers recognize the benefits of building and shipping software using containers — and other cloud-native technologies.

Overall, we can say that shipping solutions as containers is becoming the norm. We use Docker at Adnovum, and we’ve seen specific benefits for our developers. Let’s look at those benefits more.

2. Limited exposure means more security

Targeting container platforms (as opposed to traditional OS packages) also comes with security consequences. For example, say we’re given a completely managed Kubernetes platform. This means the client’s IT team is responsible for configuring and operating the cluster in a secure fashion. In these cases, our developers can focus their attention on the application we deliver. Thanks to container technology, we can further limit exposure to various attacks and vulnerabilities.

This ties into the basic idea of containers: by only packaging what is strictly necessary for your application, you may also reduce the possible attack surface. This can be achieved by building images from scratch or by choosing secure base images to enclose your deliverables. When choosing secure base images on Docker Hub, we recommend filtering for container images produced by verified parties:

There are also cases when the complete packaging process is handled by your development tool(s). We use Spring Boot in many of our web application projects. Spring Boot incorporates buildpacks, which can build Docker OCI images from your web applications in an efficient and reliable way. This relieves developers from hunting for base images and reduces (but does not completely eliminate) the need to do various optimizations.

Source: https://buildpacks.io/docs/concepts/

Developers using Docker Desktop can also try local security scanning to spot vulnerabilities before they enter your code and artifact repositories: https://docs.docker.com/engine/scan/
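
For example, as of this writing you could scan a locally built image straight from the CLI like this (the image name is just a placeholder):

docker scan myorg/myapp:latest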

3. Containers support diverse developer environments

While Adnovum specializes in web and mobile application development, within those boundaries we utilize a wide range of technologies. Supporting such heterogeneous environments can be tricky.

Imagine we have one Spring Boot developer who works on Linux, and another who develops the Angular frontend on a Mac. They both rely on a set of tools and dependencies to develop the project on their machine:

A local database instance
Test-doubles (mocks, etc.) for 3rd party services
Browsers — sometimes multiple versions
Developer tooling, including runtimes and build tools

In our experience, it can be difficult to support these tools across multiple operating systems if they’re installed natively. Instead, we try to push as many of these into containers as possible. This helps us to align the developer experience and reduce maintenance costs across platforms.

Our developers working on Windows or Mac can use Docker Desktop, which not only allows them to run containers but brings along some additional functionality (Docker Desktop is also available on Linux; alternatively, you may opt to use Docker Engine directly). For example, we can use docker-compose out of the box, which means we don’t need to worry about ensuring people can install it on various operating systems. Doing this over many such tools can add up to a significant cognitive and cost relief for your support team.

Outsourcing your dependencies this way is also useful if your developers need to work across multiple projects at once. After all, nobody enjoys installing multiple versions of databases, browsers, and tools.

We can typically apply this technique to our more recent projects, whereas for older projects with technology predating the mass adoption of Docker, we still have homework to do.

4. Containers aid reproducibility

As professional software makers, we want to ensure that not only do we provide excellent solutions for our clients, but if there are any concerns (functionality or security), we can trace back the issue to the exact code change which produced the artifact — typically a container image for web applications. Eventually, we may also need to rebuild a fixed version of said artifact, which can prove to be challenging. This is because build environments also evolve over time, continuously shifting the compatibility window of what they offer.

In our experience, automation (specifically Infrastructure-as-code) is key for providing developers with a reliable and scalable build infrastructure. We want to be able to re-create environments swiftly in case of software or hardware failure, or provision infrastructure components according to older configuration parameters for investigations. Our strategy is to manage all infrastructure via tools like Ansible or Terraform, and we strongly encourage engineers to avoid managing services by hand. This is true for our data-center and cloud environments as well.

Whenever possible, we also prefer running services as containers, instead of installing them as traditional packages. You’ll find many of the trending infrastructure services like NGINX and PostgreSQL on Docker Hub.

We try to push hermetic builds because they can bootstrap their own dependencies, which significantly decreases their reliance on what is installed in the build context that your specific CI/CD platform offers. Historically, we had challenges with supporting automated UI tests which relied on browsers installed on the machine. As the number of our projects grew, their expectations for browser versions diverged. This quickly became difficult to support even with our dedication to automation. Later, we faced similar challenges with tools like Node.js and the Java JDK where it was almost impossible to keep up with demand.

Eventually, we decided to adopt bootstrapping and containers in our automated builds, allowing teams to define what version of Chrome or Java their project needed. During the CI/CD pipeline, the required version dependency will be downloaded before the build, in case it’s not already cached.

Immutability means our dependencies, and our products, for that matter, never change after they’re built. Unfortunately, this isn’t exactly how Docker tags work. In fact, Docker tags are mutable by design, and this can be confusing at first if you are accustomed to SemVer.

Let’s say your Dockerfile starts like this:

FROM acme:1.2.3

It would be only logical to assume that whenever you (re-)build your own image, the same base image would be used. In reality, the tag could point to a different image if somebody decides to publish a new image under the same tag. They may do this for a number of reasons: sometimes out of necessity, but it could also be for malicious reasons.

In case you want to make sure you’ll be using the exact same image as before, you can start to refer to images via their digest. This is a trade-off between usability and security. While using digests brings you closer to truly reproducible builds, it also means that if the authors of a base image issue a new image version under the same tag, your builds won’t be using the latest version. Whichever side you’re leaning towards, you should use base images from trusted sources and introduce vulnerability scanning into your pipelines.
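
As a rough sketch, you can look up the digest of an image you’ve already pulled and then pin your Dockerfile to it (acme:1.2.3 is the hypothetical image from the example above):

# Print the repository digest of the locally pulled image
docker inspect --format '{{index .RepoDigests 0}}' acme:1.2.3

# In your Dockerfile, reference the digest instead of the tag, e.g.:
# FROM acme@sha256:<digest>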

Combining immutability (with all its challenges), automation, and hermetic builds, we’ll be able to rebuild older versions of our code. You may need to do this to reproduce a bug — or to address vulnerabilities before you ship a fixed artifact.

While we still see opportunities for ourselves to improve in our journey towards reproducibility, employing containers along the way was a decision we would make again.

Conclusion

Containers, and specifically Docker, can be a significant boost for all groups of developers from small shops to enterprises. As with most topics, getting to know the best practices comes through experience and using the right sources for learning.

To get the most out of Docker’s wide range of features, make sure to consult the documentation.

To learn more about how Adnovum helps companies and organizations to reach their digital potential, please visit our website.

How to Use the Alpine Docker Official Image

With its container-friendly design, the Alpine Docker Official Image (DOI) helps developers build and deploy lightweight, cross-platform applications. It’s based on Alpine Linux, which debuted in 2005, making it one of today’s newest major Linux distros.

While some developers express security concerns when using relatively newer images, Alpine has earned a solid reputation. Developers favor Alpine for the following reasons:  

It has a smaller footprint, and therefore a smaller attack surface (even evading 2014’s ShellShock Bash exploit!).
It takes up less disk space.
It offers a strong base for customization.
It’s built with simplicity in mind.

In fact, the Alpine DOI is one of our most popular container images on Docker Hub. To help you get started, we’ll discuss this image in greater detail and how to use the Alpine Docker Official Image with your next project. Plus, we’ll explore using Alpine to grab the slimmest image possible. Let’s dive in!

In this tutorial:

What is the Alpine Docker Official Image?
When to use Alpine
How to run Alpine in Docker
Use a quick pull command
Build your Dockerfile
Grabbing the slimmest possible image
Get up and running with Alpine today

What is the Alpine Docker Official Image?

The Alpine DOI is a building block for Alpine Linux Docker containers. It’s an executable software package that tells Docker and your application how to behave. The image includes source code, libraries, tools, and other core dependencies that your application needs. These components help Alpine Linux function while enabling developer-centric features. 

The Alpine Docker Official Image differs from other Linux-based images in a few ways. First, Alpine is based on the musl libc implementation of the C standard library — and uses BusyBox instead of GNU coreutils. While GNU packages many Linux-friendly programs together, BusyBox bundles a smaller number of core functions within one executable. 

While our Ubuntu and Debian images leverage glibc and coreutils, these alternatives are comparatively lightweight and resource-friendly, containing fewer extensions and less bloat.

As a result, Alpine appeals to developers who don’t need uncompromising compatibility or functionality from their image. Our Alpine DOI is also user-friendly and straightforward since there are fewer moving parts.

Alpine Linux performs well on resource-limited devices, which is fitting for developing simple applications or spinning up servers. Your containers will consume less RAM and less storage space. 

The Alpine Docker Official Image also offers the following features:

The robust apk package manager
A rapid, consistent development-and-release cycle vs. other Linux distributions
Multiple supported tags and architectures, like amd64, arm/v6+, arm64, and ppc64le

Multi-arch support lets you run Alpine on desktops, mobile devices, rack-mounted servers, Raspberry Pis, and even newer M-series Macs. Overall, Alpine pairs well with a wide variety of embedded systems. 

These are only some of the advantages to using the Alpine DOI. Next, we’ll cover how to harness the image for your application. 

When to use Alpine

You may be interested in using Alpine, but find yourself asking, “When should I use it?” Containerized Alpine shines in some key areas: 

Creating servers
Router-based networking
Development/testing environments

While there are some other uses for Alpine, most projects will fall under these categories. Overall, our Alpine container image excels in situations where space savings and security are critical.

How to run Alpine in Docker

Before getting started, download Docker Desktop and then install it. Docker Desktop is built upon Docker Engine and bundles together the Docker CLI, Docker Compose, and other core components. Launching Docker Desktop also lets you use Docker CLI commands (which we’ll get into later). Finally, the included Docker Dashboard will help you visually manage your images and containers. 

After completing these steps, you’re ready to Dockerize Alpine!

Note: For Linux users, Docker will still work perfectly fine if you have it installed externally on a server, or through your distro’s package manager. However, Docker Desktop for Linux does save time and effort by bundling all necessary components together — while aiding productivity through its user-friendly GUI. 

Use a quick pull command

You’ll have to first pull the Alpine Docker Official Image before using it for your project. The fastest method involves running docker pull alpine from your terminal. This grabs the alpine:latest image (the most current available version) from Docker Hub and downloads it locally on your machine: 
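
If it helps, here’s that workflow as plain commands (the size reported will vary as new Alpine versions are released):

# Pull the latest Alpine image from Docker Hub
docker pull alpine

# Confirm the download and check the image size locally
docker images alpine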

Your terminal output should show when your pull is complete — and which alpine version you’ve downloaded. You can also confirm this within Docker Desktop. Navigate to the Images tab from the left sidebar. And a list of downloaded images will populate on the right. You’ll see your alpine image, tag, and its minuscule (yes, you saw that right) 5.29 MB size:

Other Linux distro images like Ubuntu, Debian, and Fedora are many, many times larger than Alpine.

That’s a quick introduction to using the Alpine Official Image alongside Docker Desktop. But it’s important to remember that every Alpine DOI version originates from a Dockerfile. This plain-text file contains instructions that tell Docker how to build an image layer by layer. Check out the Alpine Linux GitHub repository for more Dockerfile examples. 

Next up, we’ll cover the significance of these Dockerfiles to Alpine Linux, some CLI-based workflows, and other key information.

Build your Dockerfile

Because Alpine is a standard base for container images, we recommend building on top of it within a Dockerfile. Specify your preferred alpine image tag and add instructions to create this file. Our example takes alpine:3.14 and runs an executable mysql client with it: 

FROM alpine:3.14
RUN apk add --no-cache mysql-client
ENTRYPOINT ["mysql"]

In this case, we’re starting from a slim base image and adding our mysql-client using Alpine’s standard package manager. Overall, this lets us run commands against our MySQL database from within our application. 
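
As a usage sketch, you could build and run that image like so (the database host and user below are placeholders; anything after the image name is passed straight to the mysql entrypoint):

# Build the client image from the Dockerfile above
docker build -t alpine-mysql-client .

# Connect to a (hypothetical) MySQL server
docker run --rm -it alpine-mysql-client -h db.example.com -u appuser -p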

This is just one of the many ways to get your Alpine DOI up and running. In particular, Alpine is well-suited to server builds. To see this in action, check out Kathleen Juell’s presentation on serving static content with Docker Compose, Next.js, and NGINX. Navigate to timestamp 7:07 within the embedded video. 

The Alpine Official Image has a close relationship with other technologies (something that other images lack). Many of our Docker Official Images support -alpine tags. For instance, our earlier example of serving static content leverages the node:16-alpine image as a builder. 

This relationship makes Alpine and multi-stage builds an ideal pairing. Since the primary goal of a multi-stage build is to reduce your final image size, we recommend starting with one of the slimmest Docker Official Images.

Grabbing the slimmest possible image

Pulling an -alpine version of a given image typically yields the slimmest result. You can do this using our earlier docker pull [image] command. Or you can create a Dockerfile and specify this image version — while leaving room for customization with added instructions. 
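
For instance, to grab the slim variant of one of the images in the table below, you’d pull its -alpine tag explicitly (version numbers will change over time):

docker pull python:3.9.13-alpine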

In either case, here are some results using a few of our most popular images. You can see how image sizes change with these tags:

Image tag        Image size    image:[version number]-alpine size
python:3.9.13    867.66 MB     46.71 MB
node:18.8.0      939.71 MB     164.38 MB
nginx:1.23.1     134.51 MB     22.13 MB

We’ve used the :latest tag since this is the default image tag Docker grabs from Docker Hub. As shown above with Python, pulling the -alpine image version reduces its footprint by nearly 95%! 

From here, the build process (when working from a Dockerfile) becomes much faster. Applications based on slimmer images spin up quicker. You’ll also notice that docker pull and various docker run commands execute swifter with -alpine images. 

However, remember that you’ll likely have to use this tag with a specified version number for your parent image. Running docker pull python-alpine or docker pull python:latest-alpine won’t work. Docker will alert you that the image isn’t found, the repo doesn’t exist, the command is invalid, or login information is required. This applies to any image. 

Get up and running with Alpine today

The Alpine Docker Official Image shines thanks to its simplicity and small size. It’s a fantastic base image — perhaps the most popular amongst Docker users — and offers plenty of room for customization. Alpine is arguably the most user-friendly, containerized Linux distro. We’ve tackled how to use the Alpine Official Image, and showed you how to get the most from it. 

Want to use Alpine for your next application or server? Pull the Alpine Official Image today to jumpstart your build process. You can also learn more about supported tags on Docker Hub. 

Additional resources

Browse the official Alpine Wiki.
Learn some Alpine fundamentals via the Alpine newbie Wiki page.
Read similar articles about Docker Images.
Download and install the latest version of Docker Desktop.

In Case You Missed It: Docker Community All-Hands

That’s a wrap! Community All-Hands has officially come to a close. Our sixth All-Hands featured over 35 talks across 10 channels — with topics ranging from “getting started with Docker” to running machine learning on AI hardware accelerators.

As always, every channel was buzzing with activity. Your willingness to jump in, ask questions, and help others is what the Docker community’s all about. And we loved having the chance to chat with everyone directly! 

Couldn’t attend our recent Community All-Hands event? We’ll cover some important announcements, interesting presentations, and more that you missed.

Docker CPO looks back at a year of developer obsession

Headlining Community All-Hands were some important announcements on the main stage, kicked off by Jake Levirne, our Head of Products. This past year, our engineers focused on improving developer experiences across every product. Integrated features like Dev Environments, Docker Extensions, SBOM, and Compose V2 have helped streamline workflows — along with numerous usability and OS-specific improvements. 

Over the last year, the Docker engineering team:

Released 24 new features
Made 37,000 internal commits
Curated 52 extensions and counting within Docker Desktop and Docker Hub
Hosted over eight million Docker Desktop downloads

We couldn’t have made these improvements without your feedback. Keep your votes, comments, and messages coming — they’re essential for helping us ship the features you need. Keep an eye out for continued updates about UX enhancements, Trusted Open Source, and user-centric partnerships.

How to use SBOMs to support multiple layers

Following Jake, our presenters dove deeper into the technical depths. Next up was a session on viewing images through layered software bills of materials (SBOMs), led by Docker Principal Software Engineer Jim Clark. 

SBOMs are extremely helpful for knowing what’s in your images and apps. But where it gets complex is that many images stem from base images. And even those base images can have their own base images, making full image transparency difficult. Multi-layer images have historically been harder to analyze. To get a full picture of a multi-layer image, you’ll need to know things like:

Which packages are included
How those packages are distributed between layers
How image rebuilds can impact packages
If security fixes are available for individual packages

Jim shared that it’s now possible to gather this information. While this feature is still under development, users will soon see layer sizes, total packages per layer, and be able to view complete Dockerfiles on GitHub. 

And as a next step, the team is also focused on understanding shared content and tracking public data. This is another step toward building developer trust, and knowing exactly what’s going into your projects.

Docker Desktop meets multi-platform image support via containerd

Rounding out our major announcements was Djordje Lukic, Staff Software Engineer, with a session on containerd image management. Containerd has been our container runtime since 2016. Since then, we’ve extended its integration within Docker Desktop and Docker Engine.

Containerd migration offers some key benefits: 

There’s less code to maintain
We can ship features more rapidly and shorten release cycles
It’s easier to improve our developer tooling
We can bring multi-platform support to Docker, while following the Open Container Initiative (OCI) more closely and supporting different snapshotters

Leveraging containerd more heavily means we can consolidate portions of the Docker Daemon. Check out our containerd announcement blog to learn more. 

Showcasing attendees’ favorite talks

Every Community All-Hands channel hosted unique sets of topics, while each session highlighted relationships between Docker and today’s top technologies. Here are some popular talks from Community All-Hands and why they’re worth watching. 

Developing Go Apps with Docker

From the “Best Practices” channel.

Go (or Golang) is a language well-loved and highly sought after by professional developers. We support it as a core language and maintain a Go language-specific use guide within our docs. 

Follow along with Muhammad Quanit as he explores containerized Go applications. Muhammad covers best practices, the importance of multi-stage builds, and other tips for optimizing your Dockerfiles. By using a Go web server, he demonstrates the “Dockerization” process and the usefulness of IDE extensions.

Integration Testing Your Legacy Java Microservice with docker-maven-plugin

From the “Demos” channel.

Enterprises and development teams often maintain Java code bases upwards of 10 years old. While these services may still be functional, it’s been challenging to bind automated testing to each individual microservice repository. Docker Compose does enable batch testing, but that extra granularity is needed.

Join Terry Brady as he shows you how to run JUnit microservices tests, automated maven testing, code coverage calculation, and even test-resource management. Don’t worry about rewriting your legacy code. Instead, learn how integration testing and dedicated test containers help make life easier. 

How Does Docker Run Machine Learning on Specialized AI Hardware Accelerators

From the “Cutting Edge” channel.

Currently, 35% of companies report using AI in some fashion, while another 42% of respondents say they’re considering it. Machine learning (ML) — a subset of AI — has been critical to creating predictive models, extracting value from big data, and automating many tedious processes. 

Shashank Prasanna outlines just how important specialized hardware is to powering these algorithms. And while ML gains steam, companies are unveiling numerous supporting chipsets and GPUs. How does Docker handle these accelerators? Follow along as Shashank highlights Docker’s capabilities within multi-processor systems, and how these differ from traditional, single-CPU systems from an AI standpoint.

But wait, there’s more! 

The above talks are just a small sample of our learning sessions. Swing by our Docker YouTube channel to browse through our entire content library. 

You can also check out playlists from each event channel: 

Mainstage – showcases of the community and Docker’s latest developments
Best Practices – tips to get the most from your Docker applications
Demos – in-depth presentations that tackle unique use cases, step by step
Security – best practices for building stronger, attack-resistant containers and applications
Extensions – the basics of building extensions while demonstrating their usefulness in different scenarios
Cutting Edge – talks about how Docker and today’s leading technologies unite
International Waters – multilingual tech talks and panel discussions on trends
Open Source – panels on the Docker Sponsored Open-Source Program and the value of open source
Unconference – informal talks on getting started with Docker and Docker experiences

Thank you and see you next time!

From key Docker announcements, to technical talks, to our buzzworthy Community Awards ceremony, we had an absolute blast with you at Community All-Hands. Also, a huge special thanks to DJ Alessandro Vozza for keeping the music and excitement going!

And don’t forget to download the latest Docker Desktop to check out the releases and try out any new tricks you’ve learned.

See you at our next All-Hands event, and thank you for making this community stronger. Happy developing!

Learn about our recent releases

Extending Docker’s Integration with containerd
The Docker-Sponsored Open Source Program has a new look!
Integrated Terminal for Running Containers, Extended Integration with Containerd, and More in Docker Desktop 4.12

Docker Captain Take 5 — Sebastien Flochlay

Docker Captains are select members of the community that are both experts in their field and are passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Sebastien Flochlay who recently joined the Captain’s Program. He is a Tech Ambassador and Co-Founder of Stack Labs and is based in Toulouse, France.

How/when did you first discover Docker?

I discovered Docker in 2016, during my first meetup. It was a Software Craftsmanship group that presented an interesting tool — Docker!

It was an amazing discovery for me! Simplicity, rapidity, portability, and scalability —  I had just discovered containerization.

Today, Docker is an integral part of my daily life as a developer. I deliver my applications in Docker images. I build and run my CI/CD pipelines into Docker containers. I deploy locally my development environments using Docker compose or Kind. And I use Kubernetes or Google Cloud Run for release.

What is your favorite Docker command?

My favorite command is definitely this one: docker run.

It’s the most complete and interesting one, yet it’s almost certainly the one you started with.

In fact, it’s through this command that we discover the world of Docker. It’s with this command that we learn how and why to publish container ports, manage volumes, or define environment variables. It’s with this command that we learn to play with networking, or to control how much memory or CPU a container can use. In short, it’s definitely my favorite Docker command.
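
For illustration only, here’s a single (hypothetical) command that touches most of the features Sebastien mentions, from published ports and volumes to environment variables, networking, and memory/CPU limits:

# Run NGINX detached with a name, published port, volume, env var, and resource limits
# (assumes a user-defined network called my-network already exists)
docker run -d \
  --name web \
  --network my-network \
  -p 8080:80 \
  -v my-data:/usr/share/nginx/html \
  -e TZ=Europe/Paris \
  --memory 256m \
  --cpus 0.5 \
  nginx:alpine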

What is your top tip for working with Docker that others may not know?

I don’t have any secret pro tips, just simple advice: learn how to write Dockerfiles correctly. Most of the time, when I help a team with their Docker usage, the errors are focused on the Dockerfiles and how they’re written. There are tons of best practices for writing a Dockerfile (e.g., exclude files with a .dockerignore, use multi-stage builds, minimize the number of layers, and so many more), so use them! If you are interested, there is an amazing documentation page on the subject: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
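
As one small example of those practices, a minimal .dockerignore for a Node-style project might look like this (a sketch; adjust the entries to your own project):

# Create a .dockerignore so these paths never reach the build context
cat > .dockerignore <<'EOF'
node_modules
.git
dist
*.log
EOF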

What’s the coolest Docker demo you have done/seen?

I’ve seen so many that I can’t choose. But I take this opportunity to thank all those who work to share their knowledge on Docker.

What have you worked on in the past six months that you’re particularly proud of?

I created a 3-day training on Flutter. I put a lot of time and energy into it, and finally, I had the chance to perform it several times, and I liked it. I’m really proud of it! 😋

What do you anticipate will be Docker’s biggest announcement this year?

I would say Docker extensions, but that’s cheating. Eva Bojorges told me about it at the Devoxx2022 conference. 🤫

What are some personal goals for the next year with respect to the Docker community?

I’d like to create some training resources on Docker such as videos, blog posts, workshops, or meetups. I don’t know yet, but I want to participate in the Docker community.

What was your favorite thing about DockerCon 2022?

There are two things that I really liked. The first is Shy Ruparel’s excellent workshop “Getting Started with Docker“. It’s a perfect and very complete introduction to Docker. It’s only been a month, and I’ve already recommended it to three development teams.

The second is Amy Bass’ “What are Docker Extensions” presentation, which gave me a quick understanding of what it was.

Looking to the distant future, what is the technology that you’re most excited about and that you think holds a lot of promise?

I’m excited to see how Flutter develops over the next few years. It is an easy-to-access, very comprehensive technology that opens up many possibilities. 

Rapid fire questions…

What new skill have you mastered during the pandemic?

I learned to develop on Flutter with Dart.

Soon after, I worked on different projects for different clients. 

Then, I created a training on Flutter to pass on what I had learned. Now I have the chance to do many talks on this subject.

Cats or Dogs?

Cats, dogs, lizards, snakes, octopuses, or spiders. I love all animals!! ❤️ Except centipedes! I hate centipedes!! 🐛😱

Salty, sour or sweet?

Salty or Malty (I like beers 😋)

Beach or mountains?

Mountains! I like hiking in the mountains, walking in the forest, or strolling along the rivers. 

Your most often used emoji?

😋

How to Colorize Black & White Pictures With OpenVINO™ on Ubuntu Containers

If you’re looking to bring a stack of old family photos back to life, check out Ubuntu’s demo on how to use OpenVINO on Ubuntu containers to colorize monochrome pictures. This magical use of containers, neural networks, and Kubernetes is packed with helpful resources and a fun way to dive into deep learning!

A version of Part 1 and Part 2 of this article was first published on Ubuntu’s blog.

Table of contents:

OpenVINO on Ubuntu containers: making developers’ lives easier
Why Ubuntu Docker images?
Why OpenVINO?
OpenVINO and Ubuntu container images
Neural networks to colorize a black & white image
gRPC vs REST APIs
Ubuntu minimal container images
Demo architecture
Neural network – OpenVINO Model Server
Backend – Ubuntu-based Flask app (Python)
Frontend – Ubuntu-based NGINX container and Svelte app
Deployment with Kubernetes
Build the components’ Docker images
Apply the Kubernetes configuration files

OpenVINO on Ubuntu containers: making developers’ lives easier

If you’re curious about AI/ML and what you can do with OpenVINO on Ubuntu containers, this blog is an excellent read for you too.

Docker image security isn’t only about provenance and supply chains; it’s also about the user experience. More specifically, the developer experience.

Removing toil and friction from your app development, containerization, and deployment processes avoids encouraging developers to use untrusted sources or bad practices in the name of getting things done. As AI/ML development often requires complex dependencies, it’s the perfect proof point for secure and stable container images.

Why Ubuntu Docker images?

As the most popular container image in its category, the Ubuntu base image provides a seamless, easy-to-set-up experience. From public cloud hosts to IoT devices, the Ubuntu experience is consistent and loved by developers.

One of the main reasons for adopting Ubuntu-based container images is the software ecosystem. More than 30,000 packages are available in one “install” command, with the option to subscribe to enterprise support from Canonical. It just makes things easier.

In this blog, you’ll see that using Ubuntu Docker images greatly simplifies component containerization. We even used a prebuilt & preconfigured container image for the NGINX web server from the LTS images portfolio maintained by Canonical for up to 10 years.

Beyond providing a secure, stable, and consistent experience across container images, Ubuntu is a safe choice from bare metal servers to containers. Additionally, it comes with hardware optimization on clouds and on-premises, including Intel hardware.

Why OpenVINO?

When you’re ready to deploy deep learning inference in production, binary size and memory footprint are key considerations – especially when deploying at the edge. OpenVINO provides a lightweight Inference Engine with a binary size of just over 40MB for CPU-based inference. It also provides a Model Server for serving models at scale and managing deployments.

OpenVINO includes open-source developer tools to improve model inference performance. The first step is to convert a deep learning model (trained with TensorFlow, PyTorch,…) to an Intermediate Representation (IR) using the Model Optimizer. In fact, it cuts the model’s memory usage in half by converting it from FP32 to FP16 precision. You can unlock additional performance by using low-precision tools from OpenVINO. The Post-training Optimisation Tool (POT) and Neural Network Compression Framework (NNCF) provide quantization, binarisation, filter pruning, and sparsity algorithms. As a result, Intel devices’ throughput increases on CPUs, integrated GPUs, VPUs, and other accelerators.

Open Model Zoo provides pre-trained models that work for real-world use cases to get you started quickly. Additionally, Python and C++ sample codes demonstrate how to interact with the model. More than 280 pre-trained models are available to download, from speech recognition to natural language processing and computer vision.

For this blog series, we will use the pre-trained colorization models from Open Model Zoo and serve them with Model Server.

OpenVINO and Ubuntu container images

The Model Server – by default – ships with the latest Ubuntu LTS, providing a consistent development environment and an easy-to-layer base image. The OpenVINO tools are also available as prebuilt development and runtime container images.

To learn more about Canonical LTS Docker Images and OpenVINO™, read:

Intel and Canonical to secure containers software supply chain – Ubuntu blog
OpenVINO Documentation – OpenVINO™
Webinar: Secure AI deployments at the edge – Canonical and Intel

Neural networks to colorize a black & white image

Now, back to the matter at hand: how will we colorize grandma and grandpa’s old pictures? Thanks to Open Model Zoo, we won’t have to train a neural network ourselves and will only focus on the deployment. (You can still read about it.)

Architecture diagram of the colorizer demo app running on MicroK8s

Our architecture consists of three microservices: a backend, a frontend, and the OpenVINO Model Server (OVMS) to serve the neural network predictions. The Model Server component hosts two different demonstration neural networks to compare their results (V1 and V2). These components all use the Ubuntu base image for a consistent software ecosystem and containerized environment.

A few reads if you’re not familiar with this type of microservices architecture:

What are container images?
What is Kubernetes?

gRPC vs REST APIs

The OpenVINO Model Server provides inference as a service via HTTP/REST and gRPC endpoints for serving models in OpenVINO IR or ONNX format. It also offers centralized model management to serve multiple different models or different versions of the same model and model pipelines.

The server offers two sets of APIs to interface with it: REST and gRPC. Both APIs are compatible with TensorFlow Serving and expose endpoints for prediction, checking model metadata, and monitoring model status. For use cases where low latency and high throughput are needed, you’ll probably want to interact with the model server via the gRPC API. Indeed, it introduces a significantly smaller overhead than REST. (Read more about gRPC.)

OpenVINO Model Server is distributed as a Docker image with minimal dependencies. For this demo, we will use the Model Server container image deployed to a MicroK8s cluster. This combination of lightweight technologies is suitable for small deployments. It suits edge computing devices, performing inferences where the data is being produced – for increased privacy, low latency, and low network usage.
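
Outside of Kubernetes, you can also try the Model Server image directly with plain Docker. A rough sketch (the model path, model name, and ports are illustrative; check the OVMS documentation for the exact flags your version expects):

docker run -d --rm \
  -p 9000:9000 -p 8000:8000 \
  -v "$(pwd)/models":/models \
  openvino/model_server:latest \
  --model_path /models/colorization \
  --model_name colorization \
  --port 9000 --rest_port 8000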

Ubuntu minimal container images

Since 2019, the Ubuntu base images have been minimal, with no “slim” flavors. While there’s room for improvement (keep posted), the Ubuntu Docker image is a less than 30MB download, making it one of the tiniest Linux distributions available on containers.

In terms of Docker image security, size is one thing, and reducing the attack surface is a fair investment. However, as is often the case, size isn’t everything. In fact, maintenance is the most critical aspect. The Ubuntu base image, with its rich and active software ecosystem community, is usually a safer bet than smaller distributions.

A common trap is to start smaller and install loads of dependencies from many different sources. The end result will have poor performance, use non-optimized dependencies, and not be secure. You probably don’t want to end up effectively maintaining your own Linux distribution … So, let us do it for you.

“What are you looking at?” (original picture source)

Demo architecture

“As a user, I can drag and drop black and white pictures to the browser so that it displays their ready-to-download colorized version.” – said the PM (me).

For that – replied the one-time software engineer (still me) – we only need:

- A fancy yet lightweight frontend component.
- OpenVINO™ Model Server to serve the neural network colorization predictions.
- A very light backend component.

Whilst we could target the Model Server directly with the frontend (it exposes a REST API), we need to apply transformations to the submitted image. The colorization models, in fact, each expect a specific input.

Finally, we’ll deploy these three services with Kubernetes because … well … because it’s groovy. And if you think otherwise (everyone is allowed to have a flaw or two), you’ll find a fully functional docker-compose.yaml in the source code repository.

Architecture diagram for the demo app (originally colored tomatoes)

In the upcoming sections, we will first look at each component and then show how to deploy them with Kubernetes using MicroK8s. Don’t worry; the full source code is freely available, and I’ll link you to the relevant parts.

Neural network – OpenVINO Model Server

The colorization neural network is published under the BSD 2-clause License, accessible from the Open Model Zoo. It’s pre-trained, so we don’t need to understand it in order to use it. However, let’s look closer to understand what input it expects. I also strongly encourage you to read the original work from Richard Zhang, Phillip Isola, and Alexei A. Efros. They made the approach super accessible and understandable on this website and in the original paper.

Neural network architecture (from arXiv:1603.08511 [cs.CV])

As you can see on the network architecture diagram, the neural network uses an unusual color space: LAB. There are many 3-dimensional spaces to encode colors: RGB, HSL, HSV, etc. The LAB format is relevant here because it fully isolates the color information from the lightness information. A grayscale image can therefore be encoded with only the L (for Lightness) axis. We will send only the L axis to the neural network’s input, and it will generate predictions for the colors encoded on the two remaining axes: A and B.

From the architecture diagram, we can also see that the model expects a 256×256 pixel input. For these reasons, we cannot just send our RGB-encoded grayscale picture in its original size to the network. We need to transform it first.

We compare the results of two different model versions for the demo. Let’s call them ‘V1’ (SIGGRAPH) and ‘V2’. The models are served with the same instance of the OpenVINO™ Model Server as two different models. (We could also have done it with two different versions of the same model – read more in the documentation.)

Finally, to build the Docker image, we use the first stage from the Ubuntu-based development kit to download and convert the model. We then rebase on the more lightweight Model Server image.

# Dockerfile
FROM openvino/ubuntu20_dev:latest AS omz
# download and convert the model

FROM openvino/model_server:latest
# copy the model files and configure the Model Server

Backend – Ubuntu-based Flask app (Python)

For the backend microservice that interfaces between the user-facing frontend and the Model Server hosting the neural network, we chose Python. It has many valuable libraries for manipulating data, including images, especially for machine learning applications. To provide web serving capabilities, Flask is an easy choice.

The backend takes an HTTP POST request with the to-be-colorized picture. It synchronously returns the colorized result using the neural network predictions. In between – as we’ve just seen – it needs to convert the input to match the model architecture and to prepare the output to show a displayable result.
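
As a rough illustration (not the project’s exact code), the backend entry point might look like the sketch below; the route and the colorize_image helper are hypothetical:

# Hedged sketch of the backend entry point; the route and helper names are illustrative only.
import io

from flask import Flask, request, send_file

app = Flask(__name__)


def colorize_image(image_bytes: bytes) -> bytes:
    # Hypothetical helper: decode the upload, preprocess it for the network,
    # call the Model Server over gRPC, and re-encode the colorized result as PNG.
    raise NotImplementedError


@app.route("/api/colorize", methods=["POST"])
def colorize():
    uploaded = request.files["image"].read()  # the black & white picture
    return send_file(io.BytesIO(colorize_image(uploaded)), mimetype="image/png")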

Here’s what the transformation pipeline looks like on the input:
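
Here is a minimal OpenCV/NumPy sketch of that idea (the project’s actual code may differ slightly):

# Hedged input-preprocessing sketch: BGR picture -> 256x256 L channel for the network.
import cv2
import numpy as np


def prepare_network_input(bgr_image: np.ndarray) -> np.ndarray:
    # Convert to LAB and keep only the L (lightness) axis.
    lab = cv2.cvtColor(bgr_image.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
    l_channel = lab[:, :, 0]  # L is in the [0, 100] range
    # Resize to the 256x256 input the colorization model expects.
    l_resized = cv2.resize(l_channel, (256, 256))
    # Return a (batch, channels, height, width) tensor for the Model Server.
    return l_resized.reshape(1, 1, 256, 256)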

And the output pipeline looks something like this:
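
Again as a hedged sketch rather than the project’s exact code: the predicted A and B channels are resized back to the original resolution, recombined with the original L channel, and converted back to a displayable image:

# Hedged output-postprocessing sketch: predicted A/B channels -> full-size color picture.
import cv2
import numpy as np


def rebuild_color_image(original_l: np.ndarray, predicted_ab: np.ndarray) -> np.ndarray:
    # original_l: (height, width) L channel of the original picture, in the [0, 100] range.
    # predicted_ab: (1, 2, 256, 256) network output holding the A and B channels.
    height, width = original_l.shape
    ab = predicted_ab[0].transpose(1, 2, 0)    # -> (256, 256, 2)
    ab_full = cv2.resize(ab, (width, height))  # back to the original resolution
    lab = np.dstack([original_l, ab_full]).astype(np.float32)
    bgr = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)  # float BGR values in [0, 1]
    return (np.clip(bgr, 0.0, 1.0) * 255).astype(np.uint8)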

To containerize our Python Flask application, we use a first build stage with all the development dependencies to prepare our execution environment. We then copy it onto a fresh Ubuntu base image to run it, configuring the Model Server’s gRPC connection.

Frontend – Ubuntu-based NGINX container and Svelte app

Finally, I put together a fancy UI for you to try the solution out. It’s an effortless single-page application with a file input field. It can display side-by-side the results from the two different colorization models.

I used Svelte to build the demo as a dynamic frontend. Below each colorization result, there’s even a saturation slider (using a CSS transformation) so that you can emphasize the predicted colors and better compare the before and after.

To ship this frontend application, we again use a Docker image. We first build the application using the Node base image. We then rebase it on top of the preconfigured NGINX LTS image maintained by Canonical. A reverse proxy on the frontend side serves as a passthrough to the backend on the /api endpoint to simplify the deployment configuration. We do that directly in an nginx.conf configuration file copied to the NGINX templates directory. The container image is preconfigured to use these template files with environment variables.

Deployment with Kubernetes

I hope you had the time to scan some black and white pictures because things are about to get serious(ly colorized).

From the next section onward, we’ll assume you already have a running Kubernetes installation. If not, I encourage you to run the following steps or go through this MicroK8s tutorial.

# https://microk8s.io/docs
sudo snap install microk8s --classic

# Add current user ($USER) to the microk8s group
sudo usermod -a -G microk8s $USER && sudo chown -f -R $USER ~/.kube
newgrp microk8s

# Enable the DNS, Storage, and Registry addons required later
microk8s enable dns storage registry

# Wait for the cluster to be in a Ready state
microk8s status --wait-ready

# Create an alias to enable the `kubectl` command
sudo snap alias microk8s.kubectl kubectl

Yes, you deployed a Kubernetes cluster in about two command lines.

Build the components’ Docker images

Every component comes with a Dockerfile to build itself in a standard environment and ship its deployment dependencies (read What are containers for more information). They all create an Ubuntu-based Docker image for a consistent developer experience.

Before deploying our colorizer app with Kubernetes, we need to build and push the components’ images. They need to be hosted in a registry accessible from our Kubernetes cluster. We will use the built-in local registry with MicroK8s. Depending on your network bandwidth, building and pushing the images will take a few minutes or more.

sudo snap install docker
cd ~ && git clone https://github.com/valentincanonical/colouriser-demo.git

# Backend
docker build backend -t localhost:32000/backend:latest
docker push localhost:32000/backend:latest

# Model Server
docker build modelserver -t localhost:32000/modelserver:latest
docker push localhost:32000/modelserver:latest

# Frontend
docker build frontend -t localhost:32000/frontend:latest
docker push localhost:32000/frontend:latest

Apply the Kubernetes configuration files

All the components are now ready for deployment. The Kubernetes configuration files are available as deployment and service YAML descriptors in the ./k8s folder of the demo repository. We can apply them all at once, in one command:

kubectl apply -f ./k8s

Give it a few minutes. You can watch the app being deployed with watch kubectl get pods. Of all the services, the frontend one has a specific NodePort configuration to make it publicly accessible by targeting the node’s IP address.

Once ready, you can access the demo app at http://localhost:30000/ (or replace localhost with a cluster node IP address if you’re using a remote cluster). Pick an image from your computer, and get it colorized!

All in all, the project was pretty easy considering the task we accomplished. Thanks to Ubuntu containers, building each component’s image with multi-stage builds was a consistent and straightforward experience. And thanks to OpenVINO™ and the Open Model Zoo, serving a pre-trained model with excellent inference performance was a simple task accessible to all developers.

That’s a wrap!

You didn’t even have to share your pics over the Internet to get it done. Thanks for reading this article; I hope you enjoyed it. Feel free to reach out on socials. I’ll leave you with the last colorization example.

Christmassy colorization example (original picture source)

To learn more about Ubuntu, the magic of Docker images, or even how to make your own Dockerfiles, see below for related resources:

- Find more helpful Docker images on Docker Hub.
- Check out Ubuntu’s Docker Hub profile.
- Learn how to create your own Dockerfiles for Docker Desktop.
- Read about secure AI deployments at the edge.
- Learn more about Canonical’s maintained Ubuntu-based OCI images.
- Read Ubuntu’s take on running EKS locally.
- Get started and download Docker Desktop for Windows, Mac, or Linux.

Extending Docker’s Integration with containerd

We’re extending Docker’s integration with containerd to include image management! To share this work early and get feedback, this integration is available as an opt-in experimental feature with the latest Docker Desktop 4.12.0 release.

What is containerd?

In the simplest terms, containerd is a broadly adopted open container runtime. It manages the complete container lifecycle of its host system! This includes pulling and pushing images as well as handling the starting and stopping of containers. Not to mention, containerd is a low-level building block in the container experience. Rather than being used directly by developers, it’s designed to be embedded into systems like Docker and Kubernetes.

Docker’s involvement in the containerd project can be traced all the way back to 2016. You could say it’s a bit of a passion project for us! While we had many reasons for starting the project, our goal was to move container supervision out of the core Docker Engine and into a separate daemon. This way, it could be reused in projects like Kubernetes. It was donated to the Cloud Native Computing Foundation (CNCF) in 2017 and has been a graduated (stable) project since 2019.

What does containerd replace in the Docker Engine?

As we mentioned earlier, Docker has used containerd as part of Docker Engine for managing the container lifecycle (creating, starting, and stopping) for a while now! This new work is a step towards a deeper integration of containerd into the Docker Engine. It lets you use containerd to store images and then push and pull them. Containerd also uses snapshotters instead of graph drivers for mounting the root file system of a container. Due to containerd’s pluggable architecture, it can support multiple snapshotters as well. 

Want to learn more? Michael Crosby wrote a great explanation about snapshotters on the Moby Blog.

Why migrate to containerd for image management?

Containerd is the leading open container runtime and, better yet, it’s already a part of Docker Engine! By switching to containerd for image management, we’re better aligning ourselves with the broader industry tooling. 

This migration modifies two main things:

- We’re replacing Docker’s graph drivers with containerd’s snapshotters.
- We’ll be using containerd to push, pull, and store images.

What does this mean for Docker users?

We know developers love how Docker commands work today and that many tools rely on the existing Docker API. With this in mind, we’re fully invested in making sure that the integration is as transparent as possible and doesn’t break existing workflows. To do this, we’re first rolling it out as an experimental, opt-in feature so that we can get early feedback. When enabled in the latest Docker Desktop, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save.

This integration has the following benefits:

- Containerd’s snapshotter implementation helps you quickly plug in new features. Some examples include using stargz to lazy-pull images on startup, or nydus and dragonfly for peer-to-peer image distribution.
- The containerd content store can natively store multi-platform images and other OCI-compatible objects. This enables features like the ability to build and manipulate multi-platform images using Docker Engine (and possibly other content in the future!).

If you plan to build a multi-platform image, the graphic below shows what to expect when you run the build command with the containerd store enabled.

Without the experimental feature enabled, you will get an error message stating that this feature is not supported for the docker driver, as shown in the graphic below.

If you decide not to enable the experimental feature, no big deal! Things will work like before. If you have additional questions, you can access details in our release notes.

Roadmap for the containerd integration

We want to be as transparent as possible with the Docker community when it comes to this containerd integration (no surprises here!). For this reason, we’ve laid out a roadmap. The integration will happen in two key steps:

- We’ll ship an initial version in Docker Desktop which enables common workflows but doesn’t touch existing images, to prove that this approach works.
- Next, we’ll write the code to migrate user images to use containerd and activate the feature for all our users.

We work to make expanding integrations like this as seamless as possible so you, our end user, can reap the benefits! This way, you can create new, exciting things while leveraging existing features in the ecosystem such as namespaces, containerd plug-ins, and more.

We’ve released this experimental feature first in Docker Desktop so that we can get feedback quickly from the community. But, you can also expect this feature in a future Docker Engine release.  

The details on the ongoing integration work can be accessed here. 

Conclusion

In summary, Docker users can now look forward to full containerd integration. This brings many exciting features from native multi-platform support to encrypted images and lazy pulls. So make sure to download the latest version of Docker Desktop and enable the containerd experimental feature to take it for a spin! 

We love sharing things early and getting feedback from the Docker community — it helps us build products that work better for you. Please join us on our community Slack channel or drop us a line using our feedback form.

The Docker-Sponsored Open Source Program has a new look!

It’s no secret that developers love open source software. About 70–90% of a typical codebase is made up of open source code! Plus, using open source has a ton of benefits like cost savings and scalability. But most importantly, it promotes faster innovation.

That’s why Docker announced our community program, the Docker-Sponsored Open Source (DSOS) Program, in 2020. While our mission and vision for the program haven’t changed (yes, we’re still committed to building a platform for collaboration and innovation!), some of the criteria and benefits have received a bit of a facelift.

We recently discussed these updates at our quarterly Community All-Hands, so check out that video if you haven’t yet. But since you’re already here, let’s give you a rundown of what’s new.

New criteria & benefits

Over the past two years, we’ve been working to incorporate all of the amazing community feedback we’ve received about the DSOS program. And we heard you! Not only have we updated the application process which will decrease the wait time for approval, but we’ve also added on some major benefits that will help improve the reach and visibility of your projects.

- New application process: The new, streamlined application process lets you apply with a single click and provides status updates along the way.
- Updated funding criteria: You can now apply for the program even if your project is commercially funded! However, you must not currently have a pathway to commercialization (this is reevaluated yearly). Adjusting our qualification criteria opens the door for even more projects to join the 300+ we already have!
- Insights & Analytics: Exclusive to DSOS members, you now have access to a plethora of data to help you better understand how your software is being used.
- DSOS badge on Docker Hub: This makes it easier for your project to be discovered and build brand awareness.

What hasn’t changed

Despite all of these updates, we made sure to keep the popular program features you love. Docker knows the importance of open source as developers create new technologies and turn their innovations into a reality. That’s why there are no changes to the following program benefits:

- Free autobuilds: Docker will automatically build images from source code in your external repository and push the built image to your Docker Hub repository.
- Unlimited pulls and egress: This applies to all users pulling public images from your project namespace.
- Free 1-year Docker Team subscription: This is for core contributors of your project namespace and includes Docker Desktop, 15 concurrent builds, unlimited Docker Hub image vulnerability scans, unlimited scoped tokens, role-based access control, and audit logs.

We’ve also kept the majority of our qualification criteria the same (aside from what was mentioned above). To qualify for the program, your project namespace must:

- Be shared in public repos
- Meet the Open Source Initiative’s definition of open source
- Be in active development (meaning image updates are pushed regularly within the past 6 months or dependencies are updated regularly, even if the project source code is stable)
- Not have a pathway to commercialization. Your organization must not seek to make a profit through services or by charging for higher tiers. Accepting donations to sustain your efforts is allowed.

Want to learn more about the program? Reach out to OpenSource@Docker.com with your questions! We look forward to hearing from you.