In Case You Missed It: Docker Community All-Hands

That’s a wrap! Community All-Hands has officially come to a close. Our sixth All-Hands featured over 35 talks across 10 channels — with topics ranging from “getting started with Docker” to running machine learning on AI hardware accelerators.

As always, every channel was buzzing with activity. Your willingness to jump in, ask questions, and help others is what the Docker community’s all about. And we loved having the chance to chat with everyone directly! 

Couldn’t attend our recent Community All-Hands event? We’ll cover some important announcements, interesting presentations, and more that you missed.

Docker CPO looks back at a year of developer obsession

Headlining Community All-Hands were some important announcements on the main stage, kicked off by Jake Levirne, our Head of Products. This past year, our engineers focused on improving developer experiences across every product. Integrated features like Dev Environments, Docker Extensions, SBOM, and Compose V2 have helped streamline workflows — along with numerous usability and OS-specific improvements. 

Over the last year, the Docker engineering team:

Released 24 new featuresMade 37,000 internal commitsCurated 52 extensions and counting within Docker Desktop and Docker HubHosted over eight million Docker Desktop downloads

We couldn’t have made these improvements without your feedback. Keep your votes, comments, and messages coming — they’re essential for helping us ship the features you need. Keep an eye out for continued updates about UX enhancements, Trusted Open Source, and user-centric partnerships.

How to use SBOMs to support multiple layers

Following Jake, our presenters dove deeper into the technical details. Next up was a session on viewing images through layered software bills of materials (SBOMs), led by Docker Principal Software Engineer Jim Clark.

SBOMs are extremely helpful for knowing what’s in your images and apps. But where it gets complex is that many images stem from base images. And even those base images can have their own base images, making full image transparency difficult. Multi-layer images have historically been harder to analyze. To get a full picture of a multi-layer image, you’ll need to know things like:

- Which packages are included
- How those packages are distributed between layers
- How image rebuilds can impact packages
- If security fixes are available for individual packages

Jim shared that it’s now possible to gather this information. While this feature is still under development, users will soon be able to see layer sizes and total packages per layer, and to view complete Dockerfiles on GitHub.
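
If you want to experiment with SBOMs today, Docker Desktop already ships an experimental CLI plugin that generates a flat (not yet layered) SBOM from any image. A quick sketch:

# Generate an SBOM for an image (experimental docker sbom plugin, powered by Syft)
docker sbom nginx:latest

# Other output formats are supported, e.g. SPDX JSON
docker sbom --format spdx-json nginx:latest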

And as a next step, the team is also focused on understanding shared content and tracking public data. This is another step toward building developer trust, and knowing exactly what’s going into your projects.

Docker Desktop meets multi-platform image support via containerd

Rounding out our major announcements was Djordje Lukic, Staff Software Engineer, with a session on containerd image management. Containerd has been our container runtime since 2016. Since then, we’ve extended its integration within Docker Desktop and Docker Engine.

Containerd migration offers some key benefits: 

- There’s less code to maintain
- We can ship features more rapidly and shorten release cycles
- It’s easier to improve our developer tooling
- We can bring multi-platform support to Docker, while following the Open Container Initiative (OCI) more closely and supporting different snapshotters.

Leveraging containerd more heavily means we can consolidate portions of the Docker Daemon. Check out our containerd announcement blog to learn more. 

Showcasing attendees’ favorite talks

Every Community All-Hands channel hosted unique sets of topics, while each session highlighted relationships between Docker and today’s top technologies. Here are some popular talks from Community All-Hands and why they’re worth watching. 

Developing Go Apps with Docker

From the “Best Practices” channel.

Go (or Golang) is a language well-loved and highly sought after by professional developers. We support it as a core language and maintain a Go language-specific use guide within our docs. 

Follow along with Muhammad Quanit as he explores containerized Go applications. Muhammad covers best practices, the importance of multi-stage builds, and other tips for optimizing your Dockerfiles. By using a Go web server, he demonstrates the “Dockerization” process and the usefulness of IDE extensions.
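
As a flavor of what the talk covers, here is a minimal multi-stage Dockerfile sketch for a Go web server. It is not Muhammad’s exact demo; the versions, paths, and port are illustrative assumptions:

# Build stage: compile a static binary with the full Go toolchain
FROM golang:1.19-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/server .

# Final stage: ship only the binary on a minimal base image
FROM alpine:3.16
COPY --from=build /bin/server /usr/local/bin/server
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/server"]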

Integration Testing Your Legacy Java Microservice with docker-maven-plugin

From the “Demos” channel.

Enterprises and development teams often maintain Java code bases upwards of 10 years old. While these services may still be functional, it’s been challenging to bind automated testing to each individual microservice repository. Docker Compose does enable batch testing, but finer, per-repository granularity is often needed.

Join Terry Brady as he shows you how to run JUnit microservices tests, automated maven testing, code coverage calculation, and even test-resource management. Don’t worry about rewriting your legacy code. Instead, learn how integration testing and dedicated test containers help make life easier. 

How Does Docker Run Machine Learning on Specialized AI Hardware Accelerators

From the “Cutting Edge” channel.

Currently, 35% of companies report using AI in some fashion, while another 42% of respondents say they’re considering it. Machine learning (ML) — a subset of AI — has been critical to creating predictive models, extracting value from big data, and automating many tedious processes. 

Shashank Prasanna outlines just how important specialized hardware is to powering these algorithms. And as ML gains steam, companies are unveiling numerous supporting chipsets and GPUs. How does Docker handle these accelerators? Follow along as Shashank highlights Docker’s capabilities within multi-processor systems, and how these differ from traditional, single-CPU systems from an AI standpoint.

But wait, there’s more! 

The above talks are just a small sample of our learning sessions. Swing by our Docker YouTube channel to browse through our entire content library. 

You can also check out playlists from each event channel: 

- Mainstage – showcases of the community and Docker’s latest developments
- Best Practices – tips to get the most from your Docker applications
- Demos – in-depth presentations that tackle unique use cases, step by step
- Security – best practices for building stronger, attack-resistant containers and applications
- Extensions – the basics of building extensions while demonstrating their usefulness in different scenarios
- Cutting Edge – talks about how Docker and today’s leading technologies unite
- International Waters – multilingual tech talks and panel discussions on trends
- Open Source – panels on the Docker Sponsored Open-Source Program and the value of open source
- Unconference – informal talks on getting started with Docker and Docker experiences

Thank you and see you next time!

From key Docker announcements, to technical talks, to our buzzworthy Community Awards ceremony, we had an absolute blast with you at Community All-Hands. Also, a huge special thanks to DJ Alessandro Vozza for keeping the music and excitement going!

And don’t forget to download the latest Docker Desktop to check out the releases and try out any new tricks you’ve learned.

See you at our next All-Hands event, and thank you for making this community stronger. Happy developing!

Learn about our recent releases

- Extending Docker’s Integration with containerd
- The Docker-Sponsored Open Source Program has a new look!
- Integrated Terminal for Running Containers, Extended Integration with Containerd, and More in Docker Desktop 4.12

Docker Captain Take 5 — Sebastien Flochlay

Docker Captains are select members of the community that are both experts in their field and are passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Sebastien Flochlay who recently joined the Captain’s Program. He is a Tech Ambassador and Co-Founder of Stack Labs and is based in Toulouse, France.

How/when did you first discover Docker?

I discovered Docker in 2016, during my first meetup. It was a Software Craftsmanship group that presented an interesting tool — Docker!

It was an amazing discovery for me! Simplicity, speed, portability, and scalability — I had just discovered containerization.

Today, Docker is an integral part of my daily life as a developer. I deliver my applications in Docker images. I build and run my CI/CD pipelines in Docker containers. I deploy my development environments locally using Docker Compose or Kind. And I use Kubernetes or Google Cloud Run for releases.

What is your favorite Docker command?

My favorite command is definitely this one: docker run.

It’s the most complete and interesting command, and it’s certainly the one you started with.

In fact, it’s through this command that we discover the world of Docker. It’s with this command that we learn how and why to publish container ports, manage volumes, or define environment variables. It’s also how we learn to play with networking, or to control how much memory or CPU a container can use. In short, it’s definitely my favorite Docker command.
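
(For illustration only: a single docker run invocation touching all of those areas. The image, names, and values below are arbitrary.)

# Publish a port, mount a named volume, set an env var,
# join a user-defined network, and cap memory and CPU
docker run -d --name web \
  -p 8080:80 \
  -v site-data:/usr/share/nginx/html \
  -e APP_ENV=production \
  --network my-net \
  --memory 512m \
  --cpus 1.5 \
  nginx:alpine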

What is your top tip for working with Docker that others may not know?

I don’t have any secret pro tips, just simple advice: learn how to write Dockerfiles correctly.

Most of the time, when I help a team with their Docker usage, the errors are focused on the Dockerfiles and their writing. There are tons of best practices for writing a Dockerfile (e.g., exclude files with a .dockerignore, use multi-stage builds, minimize the number of layers, and so many more), so use them!

If you are interested, there is an amazing documentation page on the subject: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
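
To make the first of those concrete, here is what a typical .dockerignore might contain. The entries are common examples, not taken from Sebastien’s projects:

# .dockerignore: keep these out of the build context so images stay slim
# and incidental changes (like logs) don't invalidate the build cache
.git
node_modules
*.log
Dockerfile
.dockerignore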

What’s the coolest Docker demo you have done/seen?

I’ve seen so many that I can’t choose. But I take this opportunity to thank all those who work to share their knowledge on Docker.

What have you worked on in the past six months that you’re particularly proud of?

I created a 3-day training course on Flutter. I put a lot of time and energy into it, and finally, I had the chance to deliver it several times, and I liked it. I’m really proud of it! 😋

What do you anticipate will be Docker’s biggest announcement this year?

I would say Docker Extensions, but that’s cheating. Eva Bojorges told me about it at the Devoxx 2022 conference. 🤫

What are some personal goals for the next year with respect to the Docker community?

I’d like to create some training resources on Docker such as videos, blog posts, workshops, or meetups. I don’t know yet, but I want to participate in the Docker community.

What was your favorite thing about DockerCon 2022?

There are two things that I really liked. The first is Shy Ruparel’s excellent workshop “Getting Started with Docker”. It’s a perfect and very complete introduction to Docker. It’s only been a month, and I’ve already recommended it to three development teams.

The second is Amy Bass’ “What are Docker Extensions” presentation, which gave me a quick understanding of what they are.

Looking to the distant future, what is the technology that you’re most excited about and that you think holds a lot of promise?

I’m excited to see how Flutter develops over the next few years. It is an easy-to-access, very comprehensive technology that opens up many possibilities. 

Rapid fire questions…

What new skill have you mastered during the pandemic?

I learned to develop on Flutter with Dart.

Soon after, I worked on different projects for different clients. 

Then, I created a training on Flutter to pass on what I had learned. Now I have the chance to do many talks on this subject.

Cats or Dogs?

Cats, dogs, lizards, snakes, octopuses, or spiders. I love all animals!! ❤️ Except centipedes! I hate centipedes!! 🐛😱

Salty, sour or sweet?

Salty or Malty (I like beers 😋)

Beach or mountains?

Mountains! I like hiking in the mountains, walking in the forest, or strolling along the rivers. 

Your most often used emoji?

😋

How to Colorize Black & White Pictures With OpenVINO™ on Ubuntu Containers

If you’re looking to bring a stack of old family photos back to life, check out Ubuntu’s demo on how to use OpenVINO on Ubuntu containers to colorize monochrome pictures. This magical use of containers, neural networks, and Kubernetes is packed with helpful resources and a fun way to dive into deep learning!

A version of Part 1 and Part 2 of this article was first published on Ubuntu’s blog.

Table of contents:

- OpenVINO on Ubuntu containers: making developers’ lives easier
- Why Ubuntu Docker images?
- Why OpenVINO?
- OpenVINO and Ubuntu container images
- Neural networks to colorize a black & white image
- gRPC vs REST APIs
- Ubuntu minimal container images
- Demo architecture
- Neural network – OpenVINO Model Server
- Backend – Ubuntu-based Flask app (Python)
- Frontend – Ubuntu-based NGINX container and Svelte app
- Deployment with Kubernetes
- Build the components’ Docker images
- Apply the Kubernetes configuration files

OpenVINO on Ubuntu containers: making developers’ lives easier

Suppose you’re curious about AI/ML and what you can do with OpenVINO on Ubuntu containers. In that case, this blog is an excellent read for you too.

Docker image security isn’t only about provenance and supply chains; it’s also about the user experience. More specifically, the developer experience.

Removing toil and friction from your app development, containerization, and deployment processes avoids encouraging developers to use untrusted sources or bad practices in the name of getting things done. As AI/ML development often requires complex dependencies, it’s the perfect proof point for secure and stable container images.

Why Ubuntu Docker images?

As the most popular container image in its category, the Ubuntu base image provides a seamless, easy-to-set-up experience. From public cloud hosts to IoT devices, the Ubuntu experience is consistent and loved by developers.

One of the main reasons for adopting Ubuntu-based container images is the software ecosystem. More than 30,000 packages are available in one “install” command, with the option to subscribe to enterprise support from Canonical. It just makes things easier.

In this blog, you’ll see that using Ubuntu Docker images greatly simplifies component containerization. We even used a prebuilt & preconfigured container image for the NGINX web server from the LTS images portfolio maintained by Canonical for up to 10 years.

Beyond providing a secure, stable, and consistent experience across container images, Ubuntu is a safe choice from bare metal servers to containers. Additionally, it comes with hardware optimization on clouds and on-premises, including Intel hardware.

Why OpenVINO?

When you’re ready to deploy deep learning inference in production, binary size and memory footprint are key considerations – especially when deploying at the edge. OpenVINO provides a lightweight Inference Engine with a binary size of just over 40MB for CPU-based inference. It also provides a Model Server for serving models at scale and managing deployments.

OpenVINO includes open-source developer tools to improve model inference performance. The first step is to convert a deep learning model (trained with TensorFlow, PyTorch,…) to an Intermediate Representation (IR) using the Model Optimizer. In fact, it cuts the model’s memory usage in half by converting it from FP32 to FP16 precision. You can unlock additional performance by using low-precision tools from OpenVINO. The Post-training Optimisation Tool (POT) and Neural Network Compression Framework (NNCF) provide quantization, binarisation, filter pruning, and sparsity algorithms. As a result, Intel devices’ throughput increases on CPUs, integrated GPUs, VPUs, and other accelerators.
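
As a rough sketch of that conversion step (the tool’s flags vary across OpenVINO releases, and the model file name here is an assumption; check the docs for your version):

# Convert a trained model to OpenVINO IR, storing weights in FP16
mo --input_model model.onnx --data_type FP16 --output_dir ir/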

Open Model Zoo provides pre-trained models that work for real-world use cases to get you started quickly. Additionally, Python and C++ sample codes demonstrate how to interact with the model. More than 280 pre-trained models are available to download, from speech recognition to natural language processing and computer vision.

For this blog series, we will use the pre-trained colorization models from Open Model Zoo and serve them with Model Server.

OpenVINO and Ubuntu container images

The Model Server – by default – ships with the latest Ubuntu LTS, providing a consistent development environment and an easy-to-layer base image. The OpenVINO tools are also available as prebuilt development and runtime container images.

To learn more about Canonical LTS Docker Images and OpenVINO™, read:

- Intel and Canonical to secure containers software supply chain – Ubuntu blog
- OpenVINO Documentation – OpenVINO™
- Webinar: Secure AI deployments at the edge – Canonical and Intel

Neural networks to colorize a black & white image

Now, back to the matter at hand: how will we colorize grandma and grandpa’s old pictures? Thanks to Open Model Zoo, we won’t have to train a neural network ourselves and will only focus on the deployment. (You can still read about it.)

Architecture diagram of the colorizer demo app running on MicroK8s

Our architecture consists of three microservices: a backend, a frontend, and the OpenVINO Model Server (OVMS) to serve the neural network predictions. The Model Server component hosts two different demonstration neural networks to compare their results (V1 and V2). These components all use the Ubuntu base image for a consistent software ecosystem and containerized environment.

A few reads if you’re not familiar with this type of microservices architecture:

- What are container images?
- What is Kubernetes?

gRPC vs REST APIs

The OpenVINO Model Server provides inference as a service via HTTP/REST and gRPC endpoints for serving models in OpenVINO IR or ONNX format. It also offers centralized model management to serve multiple different models or different versions of the same model and model pipelines.

The server offers two sets of APIs to interface with it: REST and gRPC. Both APIs are compatible with TensorFlow Serving and expose endpoints for prediction, checking model metadata, and monitoring model status. For use cases where low latency and high throughput are needed, you’ll probably want to interact with the model server via the gRPC API. Indeed, it introduces a significantly smaller overhead than REST. (Read more about gRPC.)

OpenVINO Model Server is distributed as a Docker image with minimal dependencies. For this demo, we will use the Model Server container image deployed to a MicroK8s cluster. This combination of lightweight technologies is suitable for small deployments. It suits edge computing devices, performing inferences where the data is being produced – for increased privacy, low latency, and low network usage.
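
To make that concrete, here is a sketch of running the Model Server standalone with Docker and querying its TensorFlow Serving-compatible REST status endpoint. The model name, paths, and host ports are assumptions, not the demo’s actual values:

# Serve a model over gRPC (9000) and REST (8000)
docker run -d -p 9000:9000 -p 8000:8000 \
  -v "$(pwd)/models:/models" \
  openvino/model_server:latest \
  --model_name colorization --model_path /models/colorization \
  --port 9000 --rest_port 8000

# Check model status via the REST API
curl http://localhost:8000/v1/models/colorization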

Ubuntu minimal container images

Since 2019, the Ubuntu base images have been minimal, with no “slim” flavors. While there’s room for improvement (keep posted), the Ubuntu Docker image is a less than 30MB download, making it one of the tiniest Linux distributions available on containers.

In terms of Docker image security, size is one thing, and reducing the attack surface is a fair investment. However, as is often the case, size isn’t everything. In fact, maintenance is the most critical aspect. The Ubuntu base image, with its rich and active software ecosystem community, is usually a safer bet than smaller distributions.

A common trap is to start smaller and install loads of dependencies from many different sources. The end result will have poor performance, use non-optimized dependencies, and not be secure. You probably don’t want to end up effectively maintaining your own Linux distribution … So, let us do it for you.

“What are you looking at?” (original picture source)

Demo architecture

“As a user, I can drag and drop black and white pictures to the browser so that it displays their ready-to-download colorized version.” – said the PM (me).

For that – replied the one-time software engineer (still me) – we only need:

- A fancy yet lightweight frontend component.
- OpenVINO™ Model Server to serve the neural network colorization predictions.
- A very light backend component.

Whilst we could target the Model Server directly with the frontend (it exposes a REST API), we need to apply transformations to the submitted image. The colorization models, in fact, each expect a specific input.

Finally, we’ll deploy these three services with Kubernetes because … well … because it’s groovy. And if you think otherwise (everyone is allowed to have a flaw or two), you’ll find a fully functional docker-compose.yaml in the source code repository.

Architecture diagram for the demo app (originally colored tomatoes)

In the upcoming sections, we will first look at each component and then show how to deploy them with Kubernetes using MicroK8s. Don’t worry; the full source code is freely available, and I’ll link you to the relevant parts.

Neural network – OpenVINO Model Server

The colorization neural network is published under the BSD 2-clause License, accessible from the Open Model Zoo. It’s pre-trained, so we don’t need to understand it in order to use it. However, let’s look closer to understand what input it expects. I also strongly encourage you to read the original work from Richard Zhang, Phillip Isola, and Alexei A. Efros. They made the approach super accessible and understandable on this website and in the original paper.

Neural network architecture (from arXiv:1603.08511 [cs.CV])

As you can see on the network architecture diagram, the neural network uses an unusual color space: LAB. There are many 3-dimensional spaces to code colors: RGB, HSL, HSV, etc. The LAB format is relevant here as it fully isolates the color information from the lightness information. Therefore, a grayscale image can be coded with only the L (for Lightness) axis. We will send only the L axis to the neural network’s input. It will generate predictions for the colors coded on the two remaining axes: A and B.

From the architecture diagram, we can also see that the model expects a 256×256 pixels input size. For these reasons, we cannot just send our RGB-coded grayscale picture in its original size to the network. We need to first transform it.

We compare the results of two different model versions for the demo. Let them be called ‘V1’ (Siggraph) and ‘V2’. The models are served with the same instance of the OpenVINO™ Model Server as two different models. (We could also have done it with two different versions of the same model – read more in the documentation.)
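
Serving several models from one instance is driven by the Model Server’s configuration file, passed in with the --config_path flag. A minimal sketch, with assumed model names and paths:

{
  "model_config_list": [
    { "config": { "name": "colorization-v1", "base_path": "/models/v1" } },
    { "config": { "name": "colorization-v2", "base_path": "/models/v2" } }
  ]
}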

Finally, to build the Docker image, we use the first stage from the Ubuntu-based development kit to download and convert the model. We then rebase on the more lightweight Model Server image.

# Dockerfile
FROM openvino/ubuntu20_dev:latest AS omz
# download and convert the model

FROM openvino/model_server:latest
# copy the model files and configure the Model Server

Backend – Ubuntu-based Flask app (Python)

For the backend microservice that interfaces between the user-facing frontend and the Model Server hosting the neural network, we chose to use Python. There are many valuable libraries to manipulate data, including images, specifically for machine learning applications. To provide web serving capabilities, Flask is an easy choice.

The backend takes an HTTP POST request with the to-be-colorized picture. It synchronously returns the colorized result using the neural network predictions. In between – as we’ve just seen – it needs to convert the input to match the model architecture and to prepare the output to show a displayable result.

On the input side, the pipeline resizes the picture to the 256×256 input the model expects and extracts its L (lightness) channel; on the output side, the predicted A and B channels are merged back with the original lightness data and rescaled to the original image size to produce a displayable result.

To containerize our Python Flask application, we use the first stage with all the development dependencies to prepare our execution environment. We copy it onto a fresh Ubuntu base image to run it, configuring the model server’s gRPC connection.

Frontend – Ubuntu-based NGINX container and Svelte app

Finally, I put together a fancy UI for you to try the solution out. It’s an effortless single-page application with a file input field. It can display side-by-side the results from the two different colorization models.

I used Svelte to build the demo as a dynamic frontend. Below each colorization result, there’s even a saturation slider (using a CSS transformation) so that you can emphasize the predicted colors and better compare the before and after.

To ship this frontend application, we again use a Docker image. We first build the application using the Node base image. We then rebase it on top of the preconfigured NGINX LTS image maintained by Canonical. A reverse proxy on the frontend side serves as a passthrough to the backend on the /api endpoint to simplify the deployment configuration. We do that directly in an nginx.conf configuration file copied to the NGINX templates directory. The container image is preconfigured to use these template files with environment variables.
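
That passthrough amounts to only a few lines of NGINX configuration. A minimal sketch, assuming the backend service is reachable by its service name backend on Flask’s default port 5000:

# nginx.conf fragment: forward /api requests to the backend service
location /api {
    proxy_pass http://backend:5000;
}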

Deployment with Kubernetes

I hope you had the time to scan some black and white pictures because things are about to get serious(ly colorized).

For the next section, we’ll assume you already have a running Kubernetes installation. If not, I encourage you to run the following steps or go through this MicroK8s tutorial.

# https://microk8s.io/docs
sudo snap install microk8s --classic

# Add current user ($USER) to the microk8s group
sudo usermod -a -G microk8s $USER && sudo chown -f -R $USER ~/.kube
newgrp microk8s

# Enable the DNS, Storage, and Registry addons required later
microk8s enable dns storage registry

# Wait for the cluster to be in a Ready state
microk8s status --wait-ready

# Create an alias to enable the `kubectl` command
sudo snap alias microk8s.kubectl kubectl

Yes, you deployed a Kubernetes cluster in about two command lines.

Build the components’ Docker images

Every component comes with a Dockerfile to build itself in a standard environment and ship its deployment dependencies (read What are containers for more information). They all create an Ubuntu-based Docker image for a consistent developer experience.

Before deploying our colorizer app with Kubernetes, we need to build and push the components’ images. They need to be hosted in a registry accessible from our Kubernetes cluster. We will use the built-in local registry with MicroK8s. Depending on your network bandwidth, building and pushing the images will take a few minutes or more.

sudo snap install docker
cd ~ && git clone https://github.com/valentincanonical/colouriser-demo.git

# Backend
docker build backend -t localhost:32000/backend:latest
docker push localhost:32000/backend:latest

# Model Server
docker build modelserver -t localhost:32000/modelserver:latest
docker push localhost:32000/modelserver:latest

# Frontend
docker build frontend -t localhost:32000/frontend:latest
docker push localhost:32000/frontend:latest

Apply the Kubernetes configuration files

All the components are now ready for deployment. The Kubernetes configuration files are available as deployments and services YAML descriptors in the ./k8s folder of the demo repository. We can apply them all at once, in one command:

kubectl apply -f ./k8s

Give it a few minutes. You can watch the app being deployed with watch kubectl get pods. Of all the services, the frontend one has a specific NodePort configuration to make it publicly accessible by targeting the Node IP address.
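
For reference, the frontend Service descriptor looks roughly like this. It’s a sketch: the names and selector labels are assumptions, while the 30000 node port matches the URL below:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30000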

Once ready, you can access the demo app at http://localhost:30000/ (or replace localhost with a cluster node IP address if you’re using a remote cluster). Pick an image from your computer, and get it colorized!

All in all, the project was pretty easy considering the task we accomplished. Thanks to Ubuntu containers, building each component’s image with multi-stage builds was a consistent and straightforward experience. And thanks to OpenVINO™ and the Open Model Zoo, serving a pre-trained model with excellent inference performance was a simple task accessible to all developers.

That’s a wrap!

You didn’t even have to share your pics over the Internet to get it done. Thanks for reading this article; I hope you enjoyed it. Feel free to reach out on socials. I’ll leave you with the last colorization example.

Christmassy colorization example (original picture source)

To learn more about Ubuntu, the magic of Docker images, or even how to make your own Dockerfiles, see below for related resources:

- Find more helpful Docker images on Docker Hub.
- Check out Ubuntu’s Docker Hub profile.
- Learn how to create your own Dockerfiles for Docker Desktop.
- Read about secure AI deployments at the edge.
- Learn more about Canonical’s maintained Ubuntu-based OCI images.
- Read Ubuntu’s take on running EKS locally.
- Get started and download Docker Desktop for Windows, Mac, or Linux.

Extending Docker’s Integration with containerd

We’re extending Docker’s integration with containerd to include image management! To share this work early and get feedback, this integration is available as an opt-in experimental feature with the latest Docker Desktop 4.12.0 release.

What is containerd?

In the simplest terms, containerd is a broadly-adopted open container runtime. It manages the complete container lifecycle of its host system! This includes pulling and pushing images as well as handling the starting and stopping of containers. Not to mention, containerd is a low-level brick in the container experience. Rather than being used directly by developers, it’s designed to be embedded into systems like Docker and Kubernetes.

Docker’s involvement in the containerd project can be traced all the way back to 2016. You could say, it’s a bit of a passion project for us! While we had many reasons for starting the project, our goal was to move the container supervision out of the core Docker Engine and into a separate daemon. This way, it could be reused in projects like Kubernetes. It was donated to the Cloud Native Computing Foundation (CNCF) in 2017 and has since become a graduated (stable) project.

What does containerd replace in the Docker Engine?

As we mentioned earlier, Docker has used containerd as part of Docker Engine for managing the container lifecycle (creating, starting, and stopping) for a while now! This new work is a step towards a deeper integration of containerd into the Docker Engine. It lets you use containerd to store images and then push and pull them. Containerd also uses snapshotters instead of graph drivers for mounting the root file system of a container. Due to containerd’s pluggable architecture, it can support multiple snapshotters as well. 

Want to learn more? Michael Crosby wrote a great explanation about snapshotters on the Moby Blog.

Why migrate to containerd for image management?

Containerd is the leading open container runtime and, better yet, it’s already a part of Docker Engine! By switching to containerd for image management, we’re better aligning ourselves with the broader industry tooling. 

This migration modifies two main things:

- We’re replacing Docker’s graph drivers with containerd’s snapshotters.
- We’ll be using containerd to push, pull, and store images.

What does this mean for Docker users?

We know developers love how Docker commands work today and that many tools rely on the existing Docker API. With this in mind, we’re fully invested in making sure that the integration is as transparent as possible and doesn’t break existing workflows. To do this, we’re first rolling it out as an experimental, opt-in feature so that we can get early feedback. When enabled in the latest Docker Desktop, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save.

This integration has the following benefits:

- Containerd’s snapshotter implementation helps you quickly plug in new features. Some examples include using stargz to lazy-pull images on startup or nydus and dragonfly for peer-to-peer image distribution.
- The containerd content store can natively store multi-platform images and other OCI-compatible objects. This enables features like the ability to build and manipulate multi-platform images using Docker Engine (and possibly other content in the future!).

If you plan to build a multi-platform image, the below graphic shows what to expect when you run the build command with the containerd store enabled.
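
In command form, that build is an ordinary docker build with multiple platforms listed. A sketch (the image name is arbitrary):

# Build a single image for two architectures (requires the containerd store)
docker build --platform linux/amd64,linux/arm64 -t myorg/myapp:latest .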

Without the experimental feature enabled, you will get an error message stating that this feature is not supported on docker driver as shown in the graphic below. 

If you decide not to enable the experimental feature, no big deal! Things will work like before. If you have additional questions, you can access details in our release notes.

Roadmap for the containerd integration

We want to be as transparent as possible with the Docker community when it comes to this containerd integration (no surprises here!). For this reason, we’ve laid out a roadmap. The integration will happen in two key steps:

1. We’ll ship an initial version in Docker Desktop which enables common workflows but doesn’t touch existing images to prove that this approach works.
2. Next, we’ll write the code to migrate user images to use containerd and activate the feature for all our users.

We work to make expanding integrations like this as seamless as possible so you, our end user, can reap the benefits! This way, you can create new, exciting things while leveraging existing features in the ecosystem such as namespaces, containerd plug-ins, and more.

We’ve released this experimental feature first in Docker Desktop so that we can get feedback quickly from the community. But, you can also expect this feature in a future Docker Engine release.  

The details on the ongoing integration work can be accessed here. 

Conclusion

In summary, Docker users can now look forward to full containerd integration. This brings many exciting features from native multi-platform support to encrypted images and lazy pulls. So make sure to download the latest version of Docker Desktop and enable the containerd experimental feature to take it for a spin! 

We love sharing things early and getting feedback from the Docker community — it helps us build products that work better for you. Please join us on our community Slack channel or drop us a line using our feedback form.

The Docker-Sponsored Open Source Program has a new look!

It’s no secret that developers love open source software. An estimated 70–90% of the code in modern applications is open source! Plus, using open source has a ton of benefits like cost savings and scalability. But most importantly, it promotes faster innovation.

That’s why Docker announced our community program, the Docker-Sponsored Open Source (DSOS) Program, in 2020. While our mission and vision for the program haven’t changed (yes, we’re still committed to building a platform for collaboration and innovation!), some of the criteria and benefits have received a bit of a facelift.

We recently discussed these updates at our quarterly Community All-Hands, so check out that video if you haven’t yet. But since you’re already here, let’s give you a rundown of what’s new.

New criteria & benefits

Over the past two years, we’ve been working to incorporate all of the amazing community feedback we’ve received about the DSOS program. And we heard you! Not only have we updated the application process which will decrease the wait time for approval, but we’ve also added on some major benefits that will help improve the reach and visibility of your projects.

- New application process — The new, streamlined application process lets you apply with a single click and provides status updates along the way
- Updated funding criteria — You can now apply for the program even if your project is commercially funded! However, you must not currently have a pathway to commercialization (this is reevaluated yearly). Adjusting our qualification criteria opens the door for even more projects to join the 300+ we already have!
- Insights & Analytics — Exclusive to DSOS members, you now have access to a plethora of data to help you better understand how your software is being used.
- DSOS badge on Docker Hub — This makes it easier for your project to be discovered and build brand awareness.

What hasn’t changed

Despite all of these updates, we made sure to keep the popular program features you love. Docker knows the importance of open source as developers create new technologies and turn their innovations into a reality. That’s why there are no changes to the following program benefits:

- Free autobuilds — Docker will automatically build images from source code in your external repository and automatically push the built image to your Docker Hub repository.
- Unlimited pulls and egress — This is for all users pulling public images from your project namespace.
- Free 1-year Docker Team subscription — This feature is for core contributors of your project namespace. This includes Docker Desktop, 15 concurrent builds, unlimited Docker Hub image vulnerability scans, unlimited scoped tokens, role-based access control, and audit logs.

We’ve also kept the majority of our qualification criteria the same (aside from what was mentioned above). To qualify for the program, your project namespace must:

- Be shared in public repos
- Meet the Open Source Initiative’s definition of open source
- Be in active development (meaning image updates are pushed regularly within the past 6 months or dependencies are updated regularly, even if the project source code is stable)
- Not have a pathway to commercialization. Your organization must not seek to make a profit through services or by charging for higher tiers. Accepting donations to sustain your efforts is allowed.

Want to learn more about the program? Reach out to OpenSource@Docker.com with your questions! We look forward to hearing from you.

Integrated Terminal for Running Containers, Extended Integration with Containerd, and More in Docker Desktop 4.12

Docker Desktop 4.12 is now live! This release brings some key quality-of-life improvements to the Docker Dashboard. We’ve also made some changes to our container image management and added it as an experimental feature. Finally, we’ve made it easier to find useful Extensions. Let’s dive in.

Execute commands in a running container straight from the Docker Dashboard

Developers often need to explore a running container’s contents to understand its current state or debug it when issues arise. With Docker Desktop 4.12, you can quickly start an interactive session in a running container directly through a Docker Dashboard terminal. This easy access lets you run commands without needing an external CLI. 

Opening this integrated terminal is equivalent to running docker exec -it <container-id> /bin/sh (or docker exec -it <container-id> cmd.exe if you’re using Windows containers) in your system terminal. Docker detects a running container’s default user from the image’s Dockerfile. If there’s none specified, it defaults to root. Placing this in the Docker Dashboard gives you real-time access to logs and other information about your running containers.

Your session is persisted if you navigate throughout the Dashboard and return — letting you easily pick up where you left off. The integrated terminal also supports copy, paste, search, and session clearing.

Still want to use your external terminal? No problem. We’ve added two easy ways to launch a session externally.

Option 1: Use the “Open in External Terminal” button straight from this tab. Even if you prefer an integrated terminal, this might help you run commands and watch logs simultaneously, for example.

Option 2: Change your default settings to always open your system default terminal. We’ve added the option to choose what fits your workflow. After applying this setting, the “Open in terminal” button from the Containers tab will always open your system terminal.

Extending Docker Desktop’s integration with containerd

We’re extending Docker Desktop’s integration with containerd to include image management. This integration is available as an opt-in, experimental feature within this latest release.

Docker’s involvement in the containerd project extends all the way back to 2016. Docker has used containerd within the Docker Engine to manage the container lifecycle (creating, starting, and stopping) for a while now! 

This new feature is a step towards deeper containerd integration with Docker Engine. It lets you use containerd to store images and then push and pull them. When enabled in the latest Docker Desktop version, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save. 

This integration has the following benefits:

- Containerd’s snapshotter implementation helps you quickly plug in new features. One example is using stargz to lazy pull images on startup.
- The containerd content store can natively store multi-platform images and other OCI-compatible objects. This lets you build and manipulate multi-platform images, for example, or leverage other related features.

You can learn more in our recent announcement, which fully explains containerd’s integration with Docker.

Easily discover extensions

We’ve added two new ways to interact with extensions in Docker Desktop 4.12.

Docker Extensions are now available directly within the Docker menu. From there, you can browse the Marketplace for new extensions, manage your installed extensions, or change extension settings. 

You can also search for extensions in the Extensions Marketplace! Narrow things down by name or keyword to find the tool you need.

Two new extensions have also joined the Extensions Marketplace:

Docker Volumes Backup & Share

Docker Volumes Backup & Share lets you effortlessly back up, clone, restore, and share Docker volumes. You can now easily create copies of your volumes and share them through SSH or by pushing them to a registry. Learn more about Volumes Backup & Share on Docker Hub. 

Mini Cluster

Mini Cluster enables developers who work with Apache Mesos to deploy and test their Mesos applications with ease. Learn more about Mini Cluster on Docker Hub.

Try out Dev Environments with Awesome Compose samples

We’ve updated our GitHub Awesome Compose samples to highlight projects that you can easily launch as Dev Environments in Docker Desktop. This helps you quickly understand how to add multi-service applications as Dev Environment projects. Look for the following green icon in the list of Docker Compose application samples:

Here’s our new Awesome Compose/Dev Environments feature in action:

Get started with Docker Desktop 4.12 today

While we’ve explored some headlining features in this release, Docker Desktop 4.12 also adds important security enhancements under the hood. To learn about these fixes and more, browse our full release notes. 

Have any feedback for us? Upvote, comment, or submit new ideas via our in-product links or our public roadmap. 

Looking to become a new Docker Desktop user? Visit our Get Started page to jumpstart your development journey.

How to Build and Run Next.js Applications with Docker, Compose, & NGINX

At DockerCon 2022, Kathleen Juell, a Full Stack Engineer at Sourcegraph, shared some tips for combining Next.js, Docker, and NGINX to serve static content. With nearly 400 million active websites today, efficient content delivery is key to attracting new web application users.

In some cases, using Next.js can boost deployment efficiency, accelerate time to market, and help attract web users. Follow along as we tackle building and running Next.js applications with Docker. We’ll also cover key processes and helpful practices for serving that static content. 

Why serve static content with a web application?

According to Kathleen, the following are the benefits of serving static content: 

- Fewer moving parts, like databases or other microservices, directly impact page rendering. This backend simplicity minimizes attack surfaces.
- Static content stands up better (with fewer uncertainties) to higher traffic loads.
- Static websites are fast since they don’t require repeated rendering.
- Static website code is stable and relatively unchanging, improving scalability.
- Simpler content means more deployment options.

Since we know why building a static web app is beneficial, let’s explore how.

Building our services stack

To serve static content efficiently, a three-pronged services approach composed of Next.js, NGINX, and Docker is useful. While it’s possible to run a Next.js server, offloading those tasks to an NGINX server is preferable. NGINX is event-driven and excels at rapidly serving content thanks to its single-threaded architecture. This enables performance optimization even during periods of higher traffic.  

Luckily, containerizing a cross-platform NGINX server instance is pretty straightforward. This setup is also resource friendly. Below are some of the reasons why Kathleen — explicitly or perhaps implicitly — leveraged three technologies. 

Docker Desktop also gives us the tools needed to build and deploy our application. It’s important to install Docker Desktop before recreating Kathleen’s development process. 

The following trio of services will serve our static content:

First, our auth-backend has a build context rooted in a directory and a port mapping. It’s based on a slimmer alpine flavor of the Node.js Docker Official Image and uses named Dockerfile build stages to prevent reordered COPY instructions from breaking. 

Second, our client service has its own build context and a named volume mapped to the staticbuild:/app/out directory. This lets us mount our volume within our NGINX container. We’re not mapping any ports since NGINX will serve our content.

Third, we’ll containerize an NGINX server that’s based on the NGINX Docker Official Image.

As Kathleen mentions, ending this client service’s Dockerfile with a RUN command is key. We want the container to exit after completing the yarn build process. This process generates our static content and should only happen once for a static web application.
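
A sketch of that client Dockerfile, assuming the package.json build script runs Next.js’ static export so the result lands in /app/out:

FROM node:16-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
# Final instruction: generate the static site; there is no long-running process
RUN yarn build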

Each component is accounted for within its own container. Now, how do we seamlessly spin up this multi-container deployment and start serving content? Let’s dive in!

Using Docker Compose and Docker volumes

The simplest way to orchestrate multi-container deployments is with Docker Compose. This lets us define multiple services within a unified configuration, without having to juggle multiple files or write complex code. 

We use a compose.yml file to describe our services, their contexts, networks, ports, volumes, and more. These configurations influence app behavior. 

Here’s what our complete Docker Compose file looks like: 

services:
  auth-backend:
    build:
      context: ./auth-backend
    ports:
      - "3001:3001"
    networks:
      - dev

  client:
    build:
      context: ./client
    volumes:
      - staticbuild:/app/out
    networks:
      - dev

  nginx:
    build:
      context: ./nginx
    volumes:
      - staticbuild:/app/public
    ports:
      - "8080:80"
    networks:
      - dev

networks:
  dev:
    driver: bridge

volumes:
  staticbuild:

You’ll also see that we’ve defined our networks and volumes in this file. These services all share the dev network, which lets them communicate with each other while remaining discoverable. You’ll also see a common volume between these services. We’ll now explain why that’s significant.

Using mounted volumes to share files

Specifically, this example leverages named volumes to share files between containers. By mapping the staticbuild volume to Next.js’ default out directory location, you can export your build and serve content with your NGINX server. This typically exists as one or more HTML files. Note that NGINX uses the app/public directory by comparison. 

While Next.js helps present your content on the frontend, NGINX delivers those important resources from the backend. 

Leveraging A/B testing to create tailored user experiences

You can customize your client-side code to change your app’s appearance, and ultimately the end-user experience. This code impacts how page content is displayed while something like an NGINX server is running. It may also determine which users see which content — something that’s common based on sign-in status, for example. 

Testing helps us understand how application changes can impact these user experiences, both positively and negatively. A/B testing helps us uncover the “best” version of our application by comparing features and page designs. How does this look in practice? 

Specifically, you can use cookies and hooks to track user login activity. When a user logs in, they’ll see something like user stories (from Kathleen’s example). Logged-out users won’t see this content. Alternatively, a web user might only have access to certain pages once they’re authenticated. It’s your job to monitor user activity, review any feedback, and determine if those changes bring clear value. 

These are just two use cases for A/B testing, and the possibilities are nearly endless when it comes to conditionally rendering static content with Next.js. 

Containerize your Next.js static web app

There are many different ways to serve static content. However, Kathleen’s three-service method remains an excellent example. It’s useful both during exploratory testing and in production. To learn more, check out Kathleen’s complete talk. 

By containerizing each service, your application remains flexible and deployable across any platform. Docker can help developers craft accessible, customizable user experiences within their web applications. Get started with Next.js and Docker today to begin serving your static web content! 

Additional Resources

- Check out the NGINX Docker Official Image
- Read about the Node Docker Official Image
- Learn about getting started with Docker Compose
- View our awesome-compose sample GitHub projects

August 2022 Newsletter

Community All-Hands: September 1st
Join us tomorrow at our Community All-Hands on September 1st! This virtual event is an opportunity for the community to come together with Docker staff to learn, share, and collaborate. Don’t miss your opportunity to win Docker swag!

Register Now

News you can use and monthly highlights:
6 Docker Compose Best Practices for Dev and Prod – Give your development team a quick knowledge boost with these tips and best practices for using Docker Compose in development and production environments.
Docker Multistage Builds for Hugo – Learn how to keep your Docker container images nice and slim with the use of multistage builds for a hugo documentation project.
How to create a dockerized Nuxt 3 development environment – Learn how Docker simplifies and accelerates the Nuxt.js development workflow. It can also help you build a bleeding-edge, local web app while ensuring consistency between development and production.
Optimize Dockerfile images for NextJS – Is the size of your Next.js Docker image impacting your overall CI/CD pipeline? Here’s an article that’ll help you to improve the development and production lifecycle by optimizing your Docker images efficiently.
Building and Testing Multi-Arch Images with Docker Buildx and QEMU – Building Docker images for multiple architectures has become increasingly popular. This guide walks you through how to build a Docker image for linux/amd64 and linux/arm64 using Docker Buildx. It’ll also walk you through using QEMU to emulate an ARM environment for multiple platform builds.
Implementation And Containerization Of Microservices Using .NET Core 6 And Docker – This article helps you create microservices using the .NET Core 6 Web API and Docker. It’ll also walk you through how to connect them using Ocelot API Gateway.​

Testing with Telepresence
Wishing for a way to synchronize local changes with a remote Kubernetes environment? There is a Docker Extension for that! Learn how Telepresence partners with Docker Desktop to help you run integration tests quickly and where to get started.

Learn More

The latest tips and tricks from the community:

How to Build and Deploy a Task Management Application Using Go
Containerizing a Legendary PetClinic App Built with Spring Boot
Build and Deploy a Retail Store Items Detection System Using No-Code AI Vision at the Edge
Slim.AI Docker Extension for Docker Desktop

Dear Moby: Advice for developers
Looking for developer-specific advice and insights? Introducing our Dear Moby collection — a web series and advice column inspired by and made for YOU, our Docker community. Check out the show, read the column, and submit your app dev questions here.

Watch Dear Moby

Educational content created by the experts at Docker:

How I Built My First Containerized Java Web Application
How to Use the Apache httpd Docker Official Image
How to Use the Redis Docker Official Image
How to Develop and Deploy a Customer Churn Prediction Model Using Python, Streamlit, and Docker

Docker Captain: Julien Maitrehenry
Julien has been working with Docker since 2014 and is now joining as a Docker Captain! His friends call him “Mister Docker” because he’s always sharing his knowledge with others. Julien’s top tip for working with Docker is to build cross-platform images.

Meet the Captain

See what the Docker team has been up to:

Bulk User Add for Docker Business and Teams
Virtual Desktop Support, Mac Permission Changes, & New Extensions in Docker Desktop 4.11
Demo: Managing and Securing Your Docker Workflows
Conversation with RedMonk: Developer Engagement in the Remote Work Era

Docker Hub v1 API deprecation
On September 5th, 2022, Docker plans to deprecate the Docker Hub v1 API endpoints that access information related to Docker Hub repositories. Please update to the v2 endpoints to continue using the Docker Hub API.

Learn More

Subscribe to our newsletter to get the latest news, blogs, tips, how-to guides, best practices, and more from Docker experts sent directly to your inbox once a month.
