January Virtual Meetup Recap: Improve Image Builds Using the Features in BuildKit

This is a guest post by Docker Captain Nicholas Dille, a blogger, speaker and author with 15 years of experience in virtualization and automation. He works as a DevOps Engineer at Haufe Group, a digital media company located in Freiburg, Germany. He is also a Microsoft Most Valuable Professional.

In this virtual meetup, I share how to improve image builds using the features in BuildKit. BuildKit is an alternative builder with great features like caching, concurrency and the ability to separate your image build into multiple stages – which is useful for separating the build environment from the runtime environment. 

The default builder in Docker is the legacy builder. It is recommended when you need to build Windows containers. In nearly every other case, using BuildKit is recommended because of its faster build times, support for custom BuildKit front-ends, parallel building of stages and other features.

Catch the full replay below and view the slides to learn:

- Build cache in BuildKit: instead of relying on a locally present image, BuildKit will pull the appropriate layers of the previous image from a registry.
- How BuildKit helps prevent disclosure of credentials by allowing files to be mounted into the build process. They are kept in memory and are not written to the image layers.
- How BuildKit supports access to remote systems through SSH by mounting the SSH agent socket into the build, without adding the SSH private key to the image.
- How to use the CLI plugin buildx to cross-build images for different platforms.
- How, using the new “docker context”, the CLI is able to manage connections to multiple instances of the Docker Engine. Note that it supports SSH remoting to the Docker Engine.
- And finally, a tip that extends beyond image builds: when troubleshooting a running container, a debugging container can be started that shares the network and PID namespaces. This allows debugging without changing the misbehaving container.
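As a sketch of the SSH agent forwarding mentioned above, a Dockerfile can mount the forwarded agent socket for a single RUN instruction (the repository URL here is a hypothetical example, not one from the talk):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN apk add --no-cache git openssh-client
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# The agent socket is only available during this step; the private key
# never enters an image layer.
RUN --mount=type=ssh git clone git@github.com:example/private-repo.git /src
```

The agent must be forwarded explicitly at build time, e.g. `DOCKER_BUILDKIT=1 docker build --ssh default .`.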

I also covered a few tools that I use in my workflow, namely:

- goss, which allows images to be tested against a configuration expressed in YAML. It comes with a nice wrapper called `dgoss` to use it with Docker easily, and it even provides a health endpoint to integrate into your image.
- trivy, an open source tool from Aqua Security that scans images for known vulnerabilities in the OS as well as in well-known package managers.
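To give a flavour of what a goss configuration looks like, here is a minimal, hypothetical `goss.yaml` asserting that a web server image is healthy (the service name and port are illustrative assumptions):

```yaml
# Minimal goss.yaml sketch: dgoss runs these checks inside the container.
port:
  tcp:80:
    listening: true
process:
  nginx:
    running: true
```

With the wrapper, checks like these could be validated via `dgoss run nginx`.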

And finally, answered some of your questions:

Why not use BuildKit by default? 

If your workflow involves building images often, then we recommend that you do set BuildKit as the default builder. Here is how to enable BuildKit by default in the docker daemon config. 
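Concretely, enabling BuildKit by default means adding the `buildkit` feature flag to `/etc/docker/daemon.json` (or the equivalent field in the Docker Desktop daemon settings) and restarting the daemon:

```json
{
  "features": {
    "buildkit": true
  }
}
```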

Does docker-compose work with BuildKit? 

Support for BuildKit was added in docker-compose 1.25.0 which can be enabled by setting DOCKER_BUILDKIT=1 and COMPOSE_DOCKER_CLI_BUILD=1.

What are the benefits of using BuildKit? 

In addition to the features presented, BuildKit also improves build performance in many cases.

When would I use BuildKit Secrets? (A special thank you to Captain Brandon Mitchell for answering this question)

BuildKit secrets are a good way to use a secret at build time, without saving the secret in the image. Think of it as pulling a private git repo without saving your ssh key to the image. For runtime, it’s often different compose files to support compose vs swarm mode, each mounting the secret a different way, i.e. a volume vs. swarm secret.
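A minimal sketch of a build-time secret, assuming a hypothetical secret id `mytoken` stored in a local file `token.txt`:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
# The secret is mounted at /run/secrets/mytoken for this step only
# and never lands in an image layer (here just read to demonstrate access).
RUN --mount=type=secret,id=mytoken \
    cat /run/secrets/mytoken
```

The build would then be invoked with `DOCKER_BUILDKIT=1 docker build --secret id=mytoken,src=token.txt .`.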

How do I enable BuildKit for Jenkins Docker build plugin? 

The only reference to BuildKit I was able to find refers to adding support in the Docker Pipeline plugin.

Does BuildKit share the build cache with the legacy builder? 

No, the caches are separate.

What are your thoughts on having the testing step as a stage in a multi-stage build? 

The test step can be a separate stage in the build. If the test step requires a special tool to be installed, it can be a second final stage. If your multi-stage build increases in complexity, take a look at CI/CD tools.
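One way to structure such a build, sketched here for a hypothetical Go project (stage and file names are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Test stage: test tooling installed here never reaches the runtime image.
FROM build AS test
RUN go test ./...

# Runtime stage: contains only the compiled binary.
FROM alpine AS runtime
COPY --from=build /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
```

`docker build --target test .` runs the tests, while a plain `docker build .` produces the runtime image; with BuildKit, stages not needed for the requested target are skipped.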

How does pulling the previous image save time over just doing the build?

The download can be significantly faster than redoing all the work.

Is the created image still “identical” or is there any real difference in the final image artifact? 

Both the legacy builder and BuildKit produce identical (or rather, equivalent) images.

Will Docker inspect show that the image was built using BuildKit? 

No.

Do you know any combination of debugging with docker images/containers (I use the following technologies: python and Django and Pycharm)?

No. Anyone have any advice here? 

Is Docker BuildKit supported with maven Dockerfile plugin? 

If the question is referring to Spotify’s Dockerfile Maven plugin (which is unmaintained), the answer is no. Other plugins may be able to use BuildKit when providing the environment variable DOCKER_BUILDKIT=1. Instead of changing the way the client works, you could configure the daemon to use BuildKit by default (see first question above).

What do you think about CRI-O? 

I think that containerd has gained more visibility and has been adopted by many cloud providers as the runtime in Kubernetes offerings. But I have no experience myself with CRI-O.

To be notified of upcoming meetups, join the Docker Virtual Meetup Group using your Docker ID or on Meetup.com.
The post January Virtual Meetup Recap: Improve Image Builds Using the Features in BuildKit appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Community Collaboration on Notary v2

One of the most productive meetings I had at KubeCon in San Diego last November was with Docker, Amazon and Microsoft, to plan a collaboration around a new version of the CNCF project Notary. We held the Notary v2 kickoff meeting a few weeks later in Seattle in the Amazon offices.

Emphasising that this is a cross-industry collaboration, we had eighteen people in the room (with more dialed in) from Amazon, Microsoft, Docker, IBM, Google, Red Hat, Sylabs and JFrog. This represented all the container registry providers and developers, other than the VMware Harbor developers who could unfortunately not make it in person. Unfortunately, we forgot to take a picture of everyone!

@awscloud, @GCPcloud, @Azure, @Docker, @RedHat, @jfrog collaborating on @CloudNativeFdn Notary v2 – touring the amazon spheres. Who would have thought… https://t.co/6VL3OucX0c pic.twitter.com/rNglQIO5ZM— Steve Lasker (@SteveLasker) December 15, 2019

The consensus and community are important because of the aims of Notary v2. But let’s go back a bit as some of you may not know what Notary is and what it is for.

The Notary project was originally started at Docker back in 2015 to provide a general signing infrastructure for containers based on The Update Framework (TUF), a model for package management security developed by Justin Cappos and his team at New York University. This is what supports the “docker trust” set of commands that allow signing containers, and the DOCKER_CONTENT_TRUST settings for validating signatures.

In 2017, Notary was donated to the CNCF, along with the TUF specification, to make it a cross-industry standard. It began to be shipped in other places as well as Docker Hub, including the Docker Trusted Registry (now a Mirantis product), IBM’s container registry, the Azure Container Registry, and with the Harbor project, another CNCF project. TUF also expanded its use cases, in the package management community, and in projects such as Uptane, a framework for updating firmware on automobiles.

So why a version 2 now? Part of the answer is that we learnt a lot of things about the usage of containers since 2015. Are container years like dog years? I am not sure, but a lot has happened since then, and the usage of containers has expanded enormously. I covered a lot of the reasons in-depth in my KubeCon talk:

Supply chain security – making sure that you ship what you intended to ship into production – has become increasingly important, as attacks on software supply chains have increased in recent years. Signatures are an important part of the validation needed in container supply chains.

Integrating Signatures in the Registry

The first big change we want to make stems from the fact that, at present, not every registry supports Notary. This means that if you use a mixture of registries, some may support signatures while others do not. In addition, you cannot move signatures between registries. Both of these limitations are related to the design of Notary as, in effect, a registry sidecar. While Notary shares the same authentication as a registry, it is built as a separate service, with its own database and API.

Back when Notary was designed this did not seem so important. But now many people use, or want to use, complex registry configurations with local registries close to a production cluster, at the cloud provider the code is running on, or in an edge location which may be disconnected. The solution we are working on is that, rather than being a standalone sidecar service, signatures will be integrated into the OCI image specification and supported by all registries. The details are still being worked out, but this will make portability much easier, as signatures will be able to be pushed and pulled with images.

Improving Usability

The second big set of changes is around usability. The current way of signing containers and checking signatures is complex, as is the key management. One of the aims of Notary v2 is to have signatures and checking on by default where possible. There have been many issues stopping this with the current Notary, many of which are detailed in the KubeCon talk, including the large number of keys involved due to lack of hierarchy and delegation, and lack of standard interfaces for signature checking on different platforms such as Kubernetes.

If you want to learn more, there are weekly meetings on Mondays at 10 a.m. Pacific Time – see the CNCF community calendar for details. The Slack channel is #notary-v2 in the CNCF Slack. There will be two sessions at KubeCon Amsterdam, one introductory overview and state of where we are, and another deep dive working session on current issues. Hope to see you there!
The post Community Collaboration on Notary v2 appeared first on Docker Blog.

Faster builds in Docker Compose 1.25.1 thanks to BuildKit Support

One of the most requested features for the docker-compose tool is definitely support for building with BuildKit, an alternative builder with great capabilities like caching, concurrency and the ability to use custom BuildKit front-ends, to mention just a few… ahhh, and a nice blue output! The good news is that Docker Compose 1.25.1 – released in early January – includes BuildKit support!

BuildKit support for Docker Compose is actually achieved by redirecting the docker-compose build to the Docker CLI with a limited feature set.

Enabling Buildkit build

To enable this, we have to align some stars.

First, it requires that the Docker CLI binary is present in your PATH:

$ which docker
/usr/local/bin/docker

Second, docker-compose has to be run with the environment variable COMPOSE_DOCKER_CLI_BUILD set to 1 like in:

$ COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build

This instruction tells docker-compose to use the Docker CLI when executing a build. You should see the same build output, but starting with the experimental warning.

As docker-compose passes its environment variables to the Docker CLI, we can also tell the CLI to use BuildKit instead of the default builder. To accomplish that, we can execute this:

$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build

A short video is worth a thousand words:

Please note that BuildKit support in docker-compose was initially released with Docker Compose 1.25.0. This feature is marked as experimental for now.

Want to know more?

- Discover more options using docker-compose build --help
- Learn more about BuildKit: docs.docker.com/develop/develop-images/build_enhancements

Share your feedback

Have nice and fast builds with docker-compose and please share your feedback with us!
The post Faster builds in Docker Compose 1.25.1 thanks to BuildKit Support appeared first on Docker Blog.

Docker Desktop release 2.2 is here!

We are excited to announce that we released a new Docker Desktop version today! Thanks to the user feedback on the new features initially released in the Edge channel, we are now ready to publish them into Stable. 

Before getting into each feature in detail, let’s see what’s new in Docker Desktop 2.2:

- WSL 2 as a technical preview, allowing access to the full system resources, improved boot time, access to Linux workspaces and improved file system performance
- A new file sharing implementation for Windows, improving the developer inner loop user experience
- A new integrated Desktop Dashboard, to see at one glance your local running containers and Compose applications, and easily manage them

WSL 2 – New architecture 

Back in July we released on Edge the technical preview of Docker Desktop for WSL 2, where we included an experimental integration of Docker running on an existing user Linux distribution. We learnt from our experience and re-architected our solution (covered in Simon’s blog).

This new architecture for WSL 2 allows users to: 

- Use Kubernetes on the WSL 2 backend
- Work with just WSL 2 / turn off the traditional Hyper-V VM while working with WSL 2
- Continue to work as they did in the traditional Docker Desktop, with a friendly networking stack, support for HTTP proxy settings, and trusted CA synchronization
- Start Docker Desktop in <5 seconds
- Use Linux workspaces

To make use of the WSL 2 features you will need to be on a Windows preview version that supports WSL 2.

Read More on WSL 2

File system improvements on Windows 

For existing Windows users not on Windows Insider builds we have been working on improving the user experience we have today for the inner loop. Traditionally with Docker Desktop on Windows, we have relied upon the Samba protocol to manage the interaction between the Linux file system working with Docker and the Windows file system. We have now replaced this with gRPC FUSE, which:

- uses caching to (for example) reduce page load time in Symfony by up to 60%;
- supports Linux inotify events, triggering automatic recompilation/reload when the source code is changed;
- is independent of how you authenticate to Windows: smartcard, Azure AD are all fine;
- always works irrespective of whether your VPN is connected or disconnected;
- reduces the amount of code running as Administrator.

Read More on new File Sharing

New Integrated Desktop Dashboard

Last but not least, Docker Desktop now includes an interactive Dashboard UI for managing your local running containers and Compose applications. We have been listening to developers and working hard to incorporate a single user interface across Mac and Windows so that we could look at how we can make it easier to work with Docker locally. Historically Docker offered similar capability with Kitematic, which we plan to archive in 2020 and replace with the new Desktop Dashboard.

Read More on Desktop Dashboard

Get started today

You can try all of the new features now by getting Docker Desktop 2.2!

Download Docker Desktop 2.2 for Windows

Download Docker Desktop 2.2 for macOS

The post Docker Desktop release 2.2 is here! appeared first on Docker Blog.

Capturing Logs in Docker Desktop

Docker Desktop runs a Virtual Machine to host Docker containers. Each component within the VM (including the Docker engine itself) runs as a separate isolated container. This extra layer of isolation introduces an interesting new problem: how do we capture all the logs so we can include them in Docker Desktop diagnostic reports? If we do nothing then the logs will be written separately into each individual container which obviously isn’t very useful!

The Docker Desktop VM boots from an ISO which is built using LinuxKit from a list of Docker images together with a list of capabilities and bind mounts. For a minimal example of a LinuxKit VM definition, see https://github.com/linuxkit/linuxkit/blob/master/examples/minimal.yml — more examples and documentation are available in the LinuxKit repository. The LinuxKit VM in Docker Desktop boots in two phases: in the first phase, the init process executes a series of one-shot “on-boot” actions sequentially using runc to isolate them in containers. These actions typically format disks, enable swap, configure sysctl settings and network interfaces. The second phase contains “services” which are started concurrently and run forever as containerd tasks.
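The shape of such a LinuxKit definition, heavily abbreviated and with image tags deliberately elided (see the linked minimal.yml for a real one):

```yaml
kernel:
  image: linuxkit/kernel:<version>
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
onboot:               # phase 1: one-shot actions, run sequentially via runc
  - name: sysctl
    image: linuxkit/sysctl:<tag>
services:             # phase 2: started concurrently as containerd tasks
  - name: getty
    image: linuxkit/getty:<tag>
```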

The following diagram shows a simplified high-level view of the boot process:

By default the “on-boot” actions’ stdout and stderr are written both to the VM console and files in /var/log/onboot.* while the “services” stdout and stderr are connected directly to open files in /var/log which are left to grow forever.

Initially we considered adding logging to the VM by running a syslog compatible logging daemon as a regular service that exposes /dev/log or a port (or both). Other services would then connect to syslog to write logs. Unfortunately a logging daemon running in a service would start later — and therefore miss all the logs from — the “on-boot” actions. Furthermore, since services start concurrently, there would be a race between the syslog daemon starting and syslog clients starting: either logs would be lost or each client startup would have to block waiting for the syslog service to start. Running a syslog daemon as an “on-boot” action would avoid the race with services, but we would have to choose where to put it in the “on-boot” actions list. Ideally we would start the logging daemon at the beginning so that no logs are lost, but then we would not have access to persistent disks or the network to store the logs anywhere useful.

In summary we wanted to add a logging mechanism to Docker Desktop that:

- was able to capture all the logs — both the on-boot actions and the service logs;
- could write the logs to separate files to make them easier to read in a diagnostics report;
- could rotate the log files so they don’t grow forever;
- could be developed within the upstream LinuxKit project; and
- would not force existing LinuxKit users to rewrite their YAML definitions or modify their existing code.

We decided to implement first-class support for logging by adding a “memory log daemon” called memlogd which starts before the first on-boot action and buffers in memory the last few thousand lines of console output from each container. Since it is only buffering in memory, memlogd does not require any network or persistent storage. A log downloader starts later, after the network and persistent storage is available, connects to memlogd and streams the logs somewhere permanent.

As long as the logs are streamed before the in-memory buffer is full, no lines will be lost. The use of memlogd is entirely optional in LinuxKit; if it is not included in the image then logs are written to the console and directly to open files as before.

Design

We decided to use the Go library container/ring to create a bounded circular buffer. The buffer is bounded to prevent a spammy logging client consuming too much memory in the VM. However if the buffer does fill, then the oldest log lines will be dropped. The following diagram shows the initial design:

Logging clients send log entries via a file descriptor (labelled “linuxkit-external-logging.sock”). Log downloading programs connect to a query socket (labelled “memlogdq.sock”), read logs from the internal buffer and write them somewhere else.

Recall that one of our design goals was to avoid making changes to each individual container definition to use the new logging system. We don’t want to explicitly bind-mount a logging socket into the container or have to modify the container’s code to connect to it. How then do we capture the output from containers automatically and pass it all to the linuxkit-external-logging.sock?

When an on-boot action or service is launched, the VM’s init system creates a FIFO (for containerd) or a socketpair (for runc) for the stdout and stderr. By convention LinuxKit containers normally write their log entries to stderr. Therefore if we modify the init system, we can capture the logs written to the stderr FIFOs and the socketpairs without having to change the container definition or the code. Once the logs have been captured, the next step is to send them to memlogd — how do we do that?

A little known feature of Linux is that you can pass open file descriptors to other processes via Unix domain sockets. We can, instead of proxying log lines, just pass an open socket directly to memlogd. We modified the design for memlogd to take advantage of this:

When the container is started, the init system passes the stdout and stderr file descriptors to memlogd along with the name of the container. Memlogd monitors all its file descriptors in a select-loop. When data is available it will be read, tagged with the container name and timestamped before it is appended to the in-memory ringbuffer. When the container terminates, the fd is closed and memlogd removes the fd from the loop.

So this means:

- we don’t have to modify container YAML definitions or code to be aware of the logging system; and
- we don’t have to proxy logs between the container and memlogd.

Querying memlogd

To see memlogd in action on a Docker Desktop system, try the following command:

docker run -it --privileged --pid=host justincormack/nsenter1 /usr/bin/logread -F -socket /run/guest-services/memlogdq.sock

This will run a privileged container in the root namespace (containing the “memlogdq.sock” used for querying the logs) and run the utility “logread”, telling it to “follow” the stream i.e. to keep copying from memlogd to the terminal until interrupted. The output looks like this:

2019-02-22T16:04:23Z,docker;time="2019-02-22T16:04:23Z" level=debug msg="registering ttrpc server"

Where the initial timestamp indicates when memlogd received the message and “docker” shows that the log came from the docker service. The rest of the line is the output written to stderr.

Kernel logs (kmsg)

In Docker Desktop we include the Linux kernel logs in diagnostic reports to help us understand and fix Linux kernel bugs. We created the kmsg package for this purpose. When this service is started, it will connect to /dev/kmsg, stream the kernel logs and output them to stderr. As the stderr is automatically sent to memlogd, the kernel logs will then also be included in the VM’s logs and will be included in the diagnostic report. Note that reading kernel logs via /dev/kmsg is a privileged operation and so the kmsg service needs the capability CAP_SYSLOG.

Persisting the logs

In Docker Desktop we persist the log entries to files (one per service), rotate them when they become large and then delete the oldest to avoid leaking space. We created the logwrite package for this purpose. When this service is started, it connects to the query socket memlogdq.sock, downloads the logs as they are written and manages the log files.

Summary

We now have a relatively simple and lightweight, yet extendable logging system that provides the features we need in Docker Desktop: it captures logs from both “on-boot” actions and services, and persists logs to files with rotation after the file system has been mounted. We developed the logging system in the upstream LinuxKit project where we hope the simple and modular design will allow it to be easily extended by other LinuxKit developers.

References

- Documentation for memlogd in the LinuxKit repo
- An example use of memlogd
- The source code for memlogd
- The kmsg service which allows kernel log messages to be included
The post Capturing Logs in Docker Desktop appeared first on Docker Blog.

5 Software Development Predictions for 2020

Photo by Jamie Street on Unsplash

To kick off the new year, we sat down with Docker CEO Scott Johnston and asked him what the future holds for software development. Here are his 2020 predictions and trends to keep an eye on.
Existing Code and Apps Become New Again
Developers will find new ways to reuse existing code instead of reinventing the wheel and starting from scratch. Additionally, we’ll see companies extend the value of existing apps by adding more functionality via microservices.
The Changing Definition of a Modern Application
Today’s applications are more complex than those of yesterday. In 2020, modern apps will power tomorrow’s innovation and this requires a diverse set of tools, languages and frameworks for developers. Developers need even more flexibility to address this new wave of modern apps and evolve with the rest of the industry.
Containers Pave the Way to New Application Trends
Now that containers are typically considered a common deployment mechanism, the conversation will evolve from the packaging of individual containers to the packaging of the entire application (which are becoming increasingly diverse and distributed). Organizations will increasingly look for guidance and solutions that help them unify how they build and manage their entire application portfolio no matter the environment: on premises, hybrid/multi-cloud, edge, etc.
Digital Transformation Transforms Itself 
Digital transformation was the buzzword of 2019, but we’ll see the term become less ambiguous this year. It’ll evolve to have a more specific meaning: a process for creating business impact through modernizing existing technology investments and delivering new applications/services.
A Container-First Strategy Proves Itself 
Developers have long been proponents of containers, but there’s been a huge shift toward establishing container-first strategies that are foundational to business transformation. 2020 will mark the year that these container-centric initiatives become the go-to approach and play out on a larger scale, across enterprises and industries. The approach proves immediate impact by providing a clear path to the cloud for all applications, regardless of programming language or whether they’re three-tier brownfield apps or cloud-native greenfield microservices, while reducing cost and risk.
The post 5 Software Development Predictions for 2020 appeared first on Docker Blog.

2019 Docker Community Awards

The Docker Community is the heart of Docker’s success and a huge reason why Docker was named the most wanted and second most loved developer tool in the 2019 Stack Overflow Survey. This year, we honored the following members of the Docker Community for their exemplary contributions to Docker users around the globe. On behalf of Docker and developers everywhere, thank you for your passion and commitment to this community!

Ajeet Singh Raina, Bangalore, India
Ajeet is a Docker Captain and Docker Community Leader for Docker Bangalore, the largest Docker meetup in the world with nearly 8,000 members. His meetups are more like mini-conferences, commonly exceeding hundreds of RSVPs and involving free hands-on workshop and training content that he and his Docker community have developed. Ajeet is also a prolific blogger, sharing Docker and Kubernetes content on his blog Collabnix, which had over a million views in 2019. Ajeet also helped to organize or speak at more than 30 events over the past year. This year, Ajeet was recognized by his fellow Captains with the Tip of the Captain’s Hat Award for his tireless dedication to sharing his expertise with the broader tech community. Keep up with Ajeet by following him on Twitter, GitHub, or his blog.

Dave Henderson, Ottawa, Canada

Dave has organized Docker Ottawa meetups since 2016 and is a frequent speaker to his local community. “My involvement in the Docker community over these past few years has been a particular highlight for me. It’s rewarding to see how enthusiastic the members of the Ottawa community are to be connecting and developing their skills, especially when those new to Docker begin to understand and master key concepts. Thank you to the Ottawa Docker community, my fellow community leaders, and especially Docker for a great 2019; I’m excited to see what 2020 will bring!”

Dominique Top, London
Dominique Top, or Dom to those that know her, has been the Docker London Community Leader for nearly 2 years. She knows how to keep meetups fun and is full of innovative ideas to support her local tech community. Last year she helped to create Meetup Mates to ensure that those who wanted to attend a meetup didn’t have to do it alone.

Gloria Gonzalez Palma, Mexico City
Gloria organized a record 12 meetups in 2019, including the Docker Birthday, Fall #LearnDocker workshop, and a Docker Posada, a traditional Mexican party held in December, complete with a pinata. “Being a community leader means to me, part of my life, I have grown with the community and the community has grown with me. It is a relation win, win, I love it.”

Imre Nagi, Jakarta, Indonesia
Imre started sharing Docker with others as a student at Carnegie Mellon University and then formed the only Docker meetup in Indonesia once he returned home. “I decided to come back home after my master grad in the US simply because I want to be part of an exciting journey in the Indonesia tech scene. Being in the Docker community gives me the opportunity to do that and to reach more people and cities in Indonesia and to do something greater and greater every day for the community :)” Imre organized two #LearnDocker workshops this fall to be able to support a wider swath of his country’s demand for the content.

Taygan Pillay, Cape Town, South Africa
Taygan organizes the Docker community in Cape Town, South Africa, sharing containers across the tech scene there. “Being a Docker CL has been a rewarding experience by being part of the container movement, at the same time helping build the community. Meeting people from diverse industries striving for a common goal of streamlined deployments and development life cycles allows insights very few other opportunities provide.”

Connect with the Docker community by attending a local meetup, joining the virtual Docker meetup group, and chatting with other users on the Docker Community Slack Channel.

The post 2019 Docker Community Awards appeared first on Docker Blog.

Year in Review: The Most Loved Docker Articles, Blogs and Tweets of 2019

Photo by NordWood Themes on Unsplash
As this decade comes to a close, we are rounding up some of your favorite content from 2019. Catch up on anything you missed and get ready for a lot more to come in 2020!
Docker Captain Content
Brian Christner did an analysis of VMware, Docker, and Kubernetes Google Trends and the results just might surprise you… or maybe not.
John Lees-Miller updated his 2016 Lessons from Building a Node App in Docker. Run through the updated tutorial to learn how to Dockerize your Node.js apps by setting up the socket.io chat example with Docker, from scratch to production-ready.
Ajeet Singh Raina wrote nearly 30 blogs in 2019, and the most popular was 5 Minutes to Kubernetes Dashboard running on Docker Desktop for Windows 2.0.0.3. Find yourself five minutes before the end of the year to try this out yourself.
Łukasz Lach and Thomas Shaw spread holiday cheer with some seasonal docker run commands:

$ docker run -it lukaszlach/merry-christmas

$ docker run --rm -t tomwillfixit/hohoho

Bret Fisher hosts a weekly Docker and DevOps YouTube live show – a fun and educational way to spend an hour on Thursdays. Check out his top episodes of 2019 here.
Top Blog Posts from Docker’s Blog

Hands down the most popular blog of 2019 was Docker Engineer Tibor Vass's Guide to Dockerfile Best Practices. And if video is more your style, check out his and Sebastiaan van Stijn's DockerCon talk on the same topic.
Docker developers love to tinker, so it was no surprise that Paulo Frazao’s Happy Pi Day tutorial, walking developers through how to install Docker Engine – Community (CE) 18.09 on Raspberry Pi made the top five this year. Learn more about how to build multi-arch apps with Elton Stoneman’s Online Meetup: Building Multi-Arch Apps with BuildX.
Updates to Docker Hub were also popular reads in 2019, including Shanea Leven's Two-Factor Authentication and Personal Access Tokens posts.
Lastly, Docker Desktop Product Manager Ben De St Paer-Gotch unveiled the Technical Preview of Docker Desktop for WSL 2 and shared five things to try out.
Top Tweets
Holidays, cheat sheets, developer love and how to’s were a hit on Twitter this year:

Articles
Docker CEO Scott Johnston outlines his predictions on What 2020 Holds for Developers.
StackOverflow released its 2019 Developer Survey, and Docker ranked as the #1 most wanted, #2 most loved and #3 most used platform.
A glimpse into what Docker will be up to next year as Docker CEO Scott Johnston Foresees Developer Tool Advances in 2020 and Beyond.
Docker is listed first in this overview of the Best Open Source Innovations of the Decade! 
Thanks for being a part of the Docker Community! We look forward to more great content in the year ahead.
The post Year in Review: The Most Loved Docker Articles, Blogs and Tweets of 2019 appeared first on Docker Blog.

Containers Today Recap: The Future of the Developer Journey

There was amazing attendance at Containers Today in Stockholm a couple of weeks ago. For those who were unable to make it, here is a quick overview of what I talked about at the event in my session around the future of the developer journey. 
Before we talk about what we think will change the journey, we need to think about why it changes. The fundamental goal of any change to the way developers work should be to reduce the number of boring, mundane and repetitive tasks they have to do, or to allow them to reach new customers and solve new problems. Developers create amazing value for companies and provide solutions to customers' real-world problems. But if they have to spend half their time working out how to get things into the hands of those customers, then you are only getting half the value.

Developer Evolution
The role of developers has changed a lot over the last ~40 years. Developers no longer deploy to mainframes or in-house hardware, they don't do waterfall deployments, and not many of them write in machine code. Developers now have to think about web languages and ML, work in Extreme Agile DevOps teams (ok, I made that up a bit) and deploy to the Cloud or to Edge devices. This change keeps happening as the entire industry tries to find new ways for developers to deliver value, deliver that value faster, and reduce the number of mundane tasks developers face.
Today the process for getting value to customers is often described as the ‘outer loop’ of development, while the creative process of development, i.e. creating new features, is described as the ‘inner loop.’ 
In an ideal world, that outer loop is automated and uses modern CI/CD technologies to create a cycle of feedback that allows developers to create new features and fix bugs ever faster. This looks something like: 

This ideal is not yet the reality for a lot of developers, though. In "The State of Developer Ecosystem 2019" report, only 45% of developers said that CI/CD was part of their regular tool set; the other 55% have yet to adopt it. This is one of the changes we think will start to accelerate in the near future. As CI/CD is democratized with new tools like GitHub Actions, we will see even small teams starting to adopt CI/CD over manual deployments.
Another big change in the outer loop is how people look at what they are passing through this outer loop. Developers are trying to get value out to customers but the concept of value is changing. As we moved away from monoliths, we have started to create individual services that are easier to work with and deploy. The idea of bundling these services with systems like Helm or CNAB tools is becoming more prominent as businesses consider that in isolation, a single container does not provide value to a customer. But a collection of these services together is the minimum set to deliver customer value. 
These may each be a separate microservice, but just a notifications service or a payment service won’t drive business value in isolation. 
For the outer loop, some of the big changes we see are in how things are deployed. 57% of developers are still not using orchestration (Slashdata Developer Report, 16th Edition). The growth in orchestration, and in particular K8s, is going to change how we think about the handoff between developers and operations as we move more microservices into production than ever before. The complexity of this change will be compounded as production is likely to span multiple clouds or on-premises environments, with over 50% of companies pursuing both a hybrid and a multi-cloud strategy.
For developers, all of these things are going to be impactful at a significant scale as we aim to develop the next 500 million apps by 2023 (IDC Analyse the Future). With Edge/IoT growing 10X between 2017-2025 (Allied Market Research), we are also going to need to think of new ways for developers to work with these technologies and extend their inner loop outside of their immediate machine. 
So in summary: we are going to see more bundled apps deployed via CI/CD to orchestrators in the cloud and on-premises, with movement between them. We will be building more of these apps than ever, and they will have to run anywhere, interface with the Edge, and fit into an inner loop that increasingly extends to the Cloud. And finally, we strongly believe that containers are going to be the driving factor and the underlying technology that enables all of this.
What’s Next for Developers
At Docker, we have started to look ahead at how we can build technology to help developers adopt some of these tools. 

For local developers, we have looked at how to accelerate the creation of the next 500 million new applications with Application Templates, enabling developers to create new containerized applications in seconds from existing examples. We have been working out how we can improve our IDE integration to make developers' lives better where they work today. We have also added a layer of GUI abstraction into Docker Desktop to make it simpler to understand all of these components.
We are also thinking about how we can extend the local development environment and the inner loop to new targets. We have Docker Context to allow developers to work against a remote instance within their inner loop, and ARM builds in Docker Desktop for working with the Edge.
And to deploy value in the future, we have Docker App for bundling the value of more than one container service to get it running in production.
We are also thinking about things like CI, moving between platforms and generally how to make it easier for the next 10 million developers to get started with Docker. We know that a lot of these technologies have been embraced by early adopters. But as all of these new technologies and changes start to reach the early majority, we need to think about how we are going to scale all of said technologies and keep mundane tasks away from developers! 
Containers Today was a fantastic event with great turnout. Hopefully this post gives some insight into my talk at the event and the trends we see. If you have other thoughts on what the future of the developer journey holds, please reach out to us! 
The post Containers Today Recap: The Future of the Developer Journey appeared first on Docker Blog.

Deep Dive Into the New Docker Desktop filesharing Implementation Using FUSE

The latest Edge release of Docker Desktop for Windows 2.1.7.0 has a completely new filesharing system using FUSE instead of Samba. The initial blog post we released presents the performance improvements of this new implementation and explains how to give feedback. Please try it out and let us know what you think. Now, we are going to go into details to give you more insight about the new architecture.
New Architecture
Instead of Samba running over a Hyper-V virtual network, the new system uses a Filesystem in Userspace (FUSE) server running over gRPC over Hypervisor sockets.
The following diagram shows the path taken by a single request from a container, for example to read a PHP file:

In step (1) the web-server in the container calls “read” which is a Linux system call handled by the kernel’s Virtual File System (VFS) layer. The VFS is modular and supports many different filesystem implementations. In our case we use Filesystem in Userspace (FUSE) which sends the request to a helper process running inside the VM labelled “FUSE client.” This process runs within the same namespace as the Docker engine. The FUSE client can handle some requests locally, but when it needs to access the host filesystem it connects to the host via “Hypervisor sockets.”
Hypervisor Sockets
Hypervisor sockets are a shared-memory communication mechanism which enables VMs to communicate with each other and with the host. Hypervisor sockets have a number of advantages over using regular virtual networking, including:

since the traffic does not flow over a virtual ethernet/IP network, it is not affected by firewall policies
since the traffic is not routed like IP, it cannot be mis-routed by VPN clients
since the traffic uses shared memory, it can never leave the machine so we don’t have to worry about third parties intercepting it

Docker Desktop already uses these sockets to forward the Docker API, to forward container ports, and now we use them for filesharing on Windows too!
Returning to the diagram, the FUSE client creates sockets using the AF_VSOCK address family, see step (3). The kernel contains a number of low-level transports, one per hypervisor. Since the underlying hypervisor is Hyper-V, we use the VMBus transport. In step (4) filesystem requests are written into the shared memory and read by the VMBus implementation in the Windows kernel. A FUSE server userspace process running in Windows reads the filesystem request over an AF_HYPERV socket in step (5).
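To make the transport concrete, here is a minimal sketch of how a guest process such as the FUSE client might dial the host over an AF_VSOCK socket. This is an illustration, not Docker's code: the well-known CID for the host comes from the Linux vsock headers, but the port any real service listens on is an implementation detail.

```python
import socket

# Well-known vsock CIDs (from <linux/vm_sockets.h>); the host is
# always addressable as CID 2 from inside a guest.
VMADDR_CID_HOST = 2

def connect_vsock(cid: int, port: int) -> socket.socket:
    """Open an AF_VSOCK stream socket to (cid, port).

    AF_VSOCK is Linux-only (exposed in Python 3.7+), so we check for
    it rather than assuming it exists on this platform.
    """
    if not hasattr(socket, "AF_VSOCK"):
        raise OSError("AF_VSOCK is not available on this platform")
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    s.connect((cid, port))  # vsock addresses are (cid, port) tuples
    return s
```

Note that vsock addresses are `(cid, port)` tuples rather than the `(host, port)` pairs used by AF_INET, which is part of why this traffic cannot be routed or intercepted like ordinary IP.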
FUSE Server
The request to open/close/read/write etc is received by the FUSE server, which is running as a regular Windows process. Finally in step (6) the FUSE server uses the Windows APIs to perform the read or write and then returns the result to the caller.
The FUSE server runs as the user who is running the Docker app, so it only has access to the user’s files and folders. There is no possibility of the VM gaining access to any other files, as could happen in the previous design if a local admin account is used to mount the drive in the VM.
Event Injection
When files are modified in Linux, the kernel generates inotify events. Interested applications can watch for these events and take action. For example, a React app run with
$ npm start
will watch for inotify events and automatically recompile when code changes and trigger the browser to refresh automatically, as shown in this video. In previous versions of Docker Desktop on Windows we weren’t able to generate inotify events so these styles of development simply wouldn’t work.
Injecting inotify events is quite tricky. Normally a Linux VFS implementation like FUSE wouldn’t generate the events itself; instead the common code in the higher layer generates events as a side-effect of performing actions. For example when the VFS “unlink” is called and returns successfully then the “unlink” event will be generated. So when the user calls “unlink” under Windows, how does Linux find out about it?
Docker Desktop watches for events on the host when the user runs docker run -v. When an “unlink” event is received on the host, a request to “please inject unlink” is forwarded over gRPC to the Linux VM. The following diagram shows the sequence of operations:

A thread with a well-known pid inside the FUSE client in Linux “replays” the request by calling “unlink,” even though the directory has actually already been removed. The FUSE client intercepts requests from this well-known pid and pretends that the unlink hasn’t happened yet. For example, when FUSE_GETATTR is called, the FUSE client will say, “yes the directory is still here” (instead of ENOENT). When the FUSE_UNLINK is called, the FUSE client will say, “yes that worked” (instead of ENOENT). As a result of the successful FUSE_UNLINK the Linux kernel generates the inotify event.
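The interception trick can be modelled in a few lines. The sketch below is a toy, not Docker's implementation: a fake FUSE client answers requests from one designated "replayer" pid as if an already-deleted path still existed, so that the replayed unlink succeeds (and, in the real system, causes the kernel to emit the inotify event). The pid value and path are made up.

```python
REPLAYER_PID = 42  # hypothetical well-known pid of the replay thread

class ToyFuseClient:
    """Toy model of the FUSE client's event-injection behaviour."""

    def __init__(self, pending_unlinks):
        # Paths the host has already deleted, but which the Linux
        # kernel has not yet been told about.
        self.pending_unlinks = set(pending_unlinks)

    def getattr(self, pid, path):
        if pid == REPLAYER_PID and path in self.pending_unlinks:
            return "dir"       # pretend the directory is still there
        return "ENOENT"        # everyone else sees the truth

    def unlink(self, pid, path):
        if pid == REPLAYER_PID and path in self.pending_unlinks:
            self.pending_unlinks.discard(path)
            return "OK"        # success -> kernel emits the inotify event
        return "ENOENT"

client = ToyFuseClient({"/app/tmp"})
# The replay thread re-runs the unlink that already happened on the host:
assert client.getattr(REPLAYER_PID, "/app/tmp") == "dir"
assert client.unlink(REPLAYER_PID, "/app/tmp") == "OK"
```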
Caching
As you can see from the architecture diagram above, each I/O request has to make several user/kernel transitions and VM/host transitions to complete. This means the latency of a filesystem operation is much higher than the case when all the files are local in the VM. We have mitigated this with aggressive use of kernel caching, so many requests can be avoided altogether. We:

use the file attribute cache which minimises FUSE_GETATTR requests
set FOPEN_CACHE_DIR which caches directory contents in the kernel page cache
set FOPEN_KEEP_CACHE which caches file contents
set CAP_MAX_PAGES to increase the maximum request size
use a modern 4.19-series kernel with the latest FUSE patches backported

Since we have enabled so many caches we have to carefully handle cache invalidation. When a user runs docker run -v and we are monitoring the filesystem events for inotify event injection, we also use these events to invalidate cache entries. When the docker run -v exits and the watches are disabled we invalidate all the cache entries.
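The invalidation policy can be sketched as a toy cache. Again, this is an illustration of the rules described above rather than Docker's code: lookups are served from cache while an entry is valid, a host filesystem event evicts one entry, and stopping the watch drops everything.

```python
class ToyAttrCache:
    """Toy model of the attribute cache and its invalidation rules."""

    def __init__(self):
        self.cache = {}
        self.round_trips = 0  # counts simulated FUSE_GETATTR host round trips

    def getattr(self, path):
        if path not in self.cache:
            self.round_trips += 1              # expensive VM/host transition
            self.cache[path] = {"path": path}  # stand-in for a host stat()
        return self.cache[path]

    def on_host_event(self, path):
        # A create/modify/delete event on the host invalidates one entry.
        self.cache.pop(path, None)

    def on_watch_stopped(self):
        # Without events we can no longer trust anything we cached.
        self.cache.clear()

cache = ToyAttrCache()
cache.getattr("/src/app.php")
cache.getattr("/src/app.php")      # served from cache, no extra round trip
assert cache.round_trips == 1
cache.on_host_event("/src/app.php")
cache.getattr("/src/app.php")      # invalidated, so one more round trip
assert cache.round_trips == 2
```

The same event stream thus serves two purposes: driving inotify injection into the VM and keeping the host-side caches honest.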
Future Evolution
We have lots of ideas for improving performance further through even more aggressive caching. For example, in the Symfony benchmark above, the majority of the remaining FUSE calls in the cached case are calls to open and close file handles, even though the file contents are themselves cached (and haven't changed). We may be able to make these open and close calls lazy and only issue them when needed.
The new filesystem implementation is not relevant on WSL 2 (currently available on early Windows Insider builds), since that already has a native filesharing mode which uses 9P. Of course we will keep benchmarking, optimising and incorporating user feedback to always use the best available filesharing implementation across all OS versions.
The post Deep Dive Into the New Docker Desktop filesharing Implementation Using FUSE appeared first on Docker Blog.