Docker Secure Images: What Are They and How Do I Use Them?

One of the major challenges in today’s development environments is balancing innovation with security. Focusing on both is never a small effort and might seem tedious or constraining at times, but when security is implemented from the start, it can shorten development time and prevent exposure to vulnerabilities.

This is why Docker rolled out Docker Official Images (now also available on Amazon Elastic Container Registry Public) and the Docker Verified Publisher Program – so developers know they are starting development with reliable building blocks that have been curated and vetted by Docker. 

If you’ve seen those green and blue badges – “Official Image” and “Verified Publisher” – next to certain images on Docker Hub, then you’re already one step ahead of the rest. Docker Verified Publisher images come from repositories published by Docker partners, so you know you’re pulling your image from a trusted source. Docker Official Images are a curated set of images that are reviewed and published by a dedicated team, working in collaboration with upstream software maintainers, security experts, and the broader Docker community. You can use these images as fully-furnished starting points or drop-in solutions.

Your next question is likely, how do I start using those images? The good news is that our recent guide, “Jump-Starting Development with Secure Images from Docker”, lays this out for you, providing a step-by-step look at how to build with Docker Official Images and Verified Publisher Images, specifically installing a Python image and setting up a Ruby on Rails environment with multiple images. 
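If you just want to kick the tires on one of these images right away, a minimal example with the docker CLI looks like this (the Python tag shown is only illustrative):

# pull the Docker Official Image for Python
docker pull python:3.10-slim

# quick sanity check: run the interpreter from the pulled image
docker run --rm python:3.10-slim python --version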

Docker helps developers build, share, and run applications that are secure from the start. The Docker Verified Publisher Program and Docker Official Images are just one of the ways we provide a solid foundation for your applications, so you can focus on building better software.

Get started with Docker Official and Verified Publisher images today by downloading our guide. Interested in joining the Docker Verified Publisher Program? Sign up here!

Resources

Blog: Docker Verified Publisher: Trusted Sources, Trusted Content
Blog: Welcome Canonical to Docker Hub and the Docker Verified Publisher Program
Press Release: Docker Expands Trusted Content Offering for Developers
Blog: Secure Software Supply Chain Best Practices

DockerCon Live 2022

Join us for DockerCon Live 2022 on Tuesday, May 10. DockerCon Live is a free, one-day virtual event that is a unique experience for developers and development teams building the next generation of modern applications. If you want to learn how to go from code to cloud fast and how to solve your development challenges, DockerCon Live 2022 offers engaging live content to help you build, share, and run your applications. Register today at https://www.docker.com/dockercon/

Do the New Terms of Docker Desktop Apply If You Don’t Use the Docker Desktop UI?

Even if you’re not taking advantage of the user interface that Docker provides with Docker Desktop and are simply using the CLI, you may still need a paid subscription to use Docker Desktop. Much of the value of Docker Desktop comes from making it easy to develop with containers locally on Windows and Mac.

We announced updates to our product subscriptions back in August and as part of that change, Docker Desktop now requires a per-user paid subscription (Pro, Team, or Business) for professional use in larger companies (larger than 250 employees OR greater than $10 million in annual revenue). 

If you meet the criteria above for a large business and you’ve installed Docker Desktop, you need a paid subscription, which starts at as little as $5 per user per month.

Docker Desktop remains free for small businesses (fewer than 250 employees AND less than $10 million in annual revenue), personal use, education, and non-commercial open source projects.

There is a grace period until January 31, 2022, for those that require a paid subscription to use Docker Desktop.

Okay, so what do I get with Docker Desktop?  

With Docker Desktop, installation, configuration, and maintenance are as easy as one click. Starting from the top, Docker Desktop comes as one single package for Mac or Windows. There is a single installer that, in one click, sets up everything you need to use Docker in minutes. 

Docker simplifies configuration under Docker Desktop, taking care of port mappings, file system concerns, and other default settings, making it seamless to develop on your local machine. Docker also maintains and regularly updates Docker Desktop with bug fixes and security updates. 

You can learn more about all the magic behind the scenes of Docker Desktop in this blog. Or check out these Twitter threads from @glours and @BretFisher. Bret Fisher, one of our Docker Captains, also has a great rundown on his YouTube show here. 

Docker Desktop features – many of which are not related to the UI:

Here’s a summarized list of the features you get with Docker Desktop. You can also learn more about the difference between Docker Desktop vs. DIY with Docker Engine here.  

How can I check if I have Docker Desktop installed?

Checking to see if you are using Docker Desktop is simple. An easy way to determine whether Docker Desktop is currently running on your machine is to check whether the Docker whale icon is present in your menu bar (Mac) or system tray (Windows).

You can also check to see if Docker Desktop is installed via the filesystem. 

On Mac, look for “/Applications/Docker.app”
On Windows, look for “C:\Program Files\Docker\Docker”
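If you prefer a command-line check, here is a minimal sketch (the version template is only a hint; on recent Docker Desktop installs the reported server platform name typically starts with "Docker Desktop"):

# macOS: check for the Docker Desktop application bundle
ls -d /Applications/Docker.app

# any platform: ask the CLI which engine it is talking to
docker version --format '{{.Server.Platform.Name}}'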

Picking the Docker Subscription that meets your needs

Check out the Docker pricing page to compare all the features in each subscription. If you’d like some help picking the subscription that best suits your needs, you can also check out the Docker Subscription Cheat Sheet. It highlights some of the key differences between each of the subscriptions: Personal, Pro, Team, and Business. And if you still have questions, you can always check out our FAQ page that has additional details.  


Docker Verified Publisher: Trusted Sources, Trusted Content

Six months since its launch at DockerCon, the Docker Verified Publisher program delivers on its promise to developers and partners alike

The Docker Verified Publisher program means trusted content and trusted sources for the millions of Docker users. At the May 2021 DockerCon, Docker announced its Secure Software Supply Chain initiative, highlighting Docker Verified Publisher as a key component of that trusted content. 

The trusted images in Docker Hub help development teams build secure software supply chains, minimizing exposure to malicious content early in the process to save time and money later. Docker allows developers to quickly and confidently discover and use images in their applications from known, trusted sources. 

Docker Verified Publisher partners join the trusted content Docker provides, along with Docker Official Images and the Docker Open Source program. In short, the Docker Verified Publisher program promises developers that the images they use come from a trusted software publisher. And a Docker Hub search shows trusted sources first.

Trusted images and software security are at the forefront of what the new Docker Business subscription tier offers, too. These trusted images can be allowed into large organizations – while preventing unverified, untrusted community images via the Docker Business Image Management features in the Docker Hub organization control plane. And of course, those trusted images include Docker Verified Publisher partners.

Dozens of software publishers have joined the Docker Verified Publisher program already, and more are poised to join before Docker’s new Docker Desktop license policies take effect (31 January 2022).

Docker Verified Publisher partners enjoy benefits such as:

Removal of rate limiting on all repos in the DVP partners’ namespace, providing a premium user experience: all Docker users, whether they have a Docker subscription or not, are able to pull the partner’s images as much as they want
DVP badging on the partner namespace and repos, indicating trusted content from a verified source (part of Docker’s Secure Software Supply Chain initiative)
Priority search ranking in Docker Hub
Co-marketing opportunities, including social shares, posts on the popular Docker blog, the exclusive right to sponsor DockerCon 2022, etc.
Inclusion as one of two trusted sources in the image access controls included in the Docker Business subscription tier, bringing essential security and management capabilities to larger Docker customers
Regular reporting to track key partner repo metrics such as pull requests, unique IP addresses, and more
And more benefits added regularly

To learn more and join the Docker Verified Publisher program, just email partners@docker.com or visit this page to contact us.

Faster Multi-Platform Builds: Dockerfile Cross-Compilation Guide

There are some important changes happening in the software industry. With Apple moving all of their machines to their custom ARM-based silicon and AWS offering the best performance-per-cost ratio with their Graviton2 instances, one can no longer expect that all software only needs to run on x86 processors. If you work with containers there is some good tooling available for building multi-platform images when your development teams are using different architectures or you want to deploy to a different architecture from the one that you develop on. In this post, I’ll show some patterns that you can use if you want to get the best performance out of such builds.

In order to build multi-platform container images, we will use the docker buildx command. Buildx is a Docker component that enables many powerful build features with a familiar Docker user experience. All builds executed via buildx run with the Moby BuildKit builder engine. Buildx can also be used standalone or, for example, to run builds in a Kubernetes cluster. In the next version of the Docker CLI, the docker build command will also start to use Buildx by default.

By default, a build executed with Buildx will build an image for the architecture that matches your machine. This way, you get an image that runs on the same machine you are working on. In order to build for a different architecture, you can set the --platform flag, e.g. --platform=linux/arm64. To build for multiple platforms together, you can set multiple values with a comma separator.

# building an image for two platforms
docker buildx build --platform=linux/amd64,linux/arm64 .

In order to build multi-platform images, we also need to create a builder instance, as building multi-platform images is currently only supported when using BuildKit with the docker-container and kubernetes drivers. Setting a single target platform is allowed on all buildx drivers.

docker buildx create --use
# building an image for two platforms
docker buildx build --platform=linux/amd64,linux/arm64 .

When building a multi-platform image from a Dockerfile, effectively your Dockerfile gets built once for each platform. At the end of the build, all of these images are merged together into a single multi-platform image.

FROM alpine
RUN echo "Hello" > /hello

For example, in the case of a simple Dockerfile like this that is built for two architectures, BuildKit will pull two different versions of the Alpine image, one containing x86 binaries and another containing arm64 binaries, and then run their respective shell binary on each of them.

Different methods of building

Generally, the CPU of your machine can only run binaries for its native architecture. An x86 CPU can’t run ARM binaries, and vice versa. So when we are running the above example on an Intel machine, how can it run the shell binary for ARM? It does this by running the binary through a software emulator instead of doing so directly.

docker buildx ls shows which emulators are installed for each of the builders. If you don’t see them listed for your system, you can install them with the tonistiigi/binfmt image, as shown below.
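For example (the install command below is the one documented for the tonistiigi/binfmt image and needs a privileged container):

# list builders and the platforms they can build for
docker buildx ls

# install QEMU emulators for all supported architectures
docker run --privileged --rm tonistiigi/binfmt --install all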

Using an emulator this way is very easy. We don’t need to modify our Dockerfile at all and can build for multiple platforms automatically. But it doesn’t come without downsides. The binaries running this way need to constantly convert their instructions between architectures and therefore don’t run with native speed. Occasionally you might also find a case that triggers a bug in the emulation layer.

One way to avoid this overhead is to modify your Dockerfile so that the longest-running commands don’t run through an emulator. Instead, we can use a cross-compilation stage.

The difference between emulation and cross-compilation is that in the former, we emulate the full system of another architecture in software, while in cross-compilation we only use binaries built for our native architecture with a special configuration option that makes them generate new binaries for our target architecture. As the name says, this technique can not be used for all processes but mostly only when you are running a compiler. Luckily the two techniques can be combined. For example, your Dockerfile can use emulation to install packages from the package manager and use cross-compilation to build your source code.

Emulation vs. cross-compilation build with "--platform=linux/amd64,linux/arm64" as run on an Intel/AMD machine. Blue contains x86 binaries, yellow ARM binaries.

When deciding whether to use emulation or cross-compilation, the most important thing to consider is if your process is using a lot of CPU processing power or not. Emulation is usually a fine approach for installing packages or if you need to create some files or run a one-off script. But if using cross-compilation can make your builds (possibly tens of) minutes faster, it is probably worth updating your Dockerfile. If you want to run tests as part of the build then cross-compilation can not achieve that. For the best performance in that case, another option is to use a remote build cluster with multiple machines with different architectures.
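As a sketch of that last option, buildx lets you combine several nodes into one builder; the SSH endpoints below are placeholders for machines you would provide yourself:

# one builder backed by a native amd64 node and a native arm64 node
docker buildx create --name remote --node amd64 --platform linux/amd64 ssh://user@amd64-host
docker buildx create --append --name remote --node arm64 --platform linux/arm64 ssh://user@arm64-host
docker buildx use remote

With such a builder, each platform in --platform is built natively on the matching node, so neither emulation nor cross-compilation is needed.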

Preparing Dockerfile

In order to add cross-compilation to our Dockerfile, we will use multi-stage builds. The most common pattern to use in multi-stage builds is to define a build stage(s) where we prepare our build artifacts and a runtime stage that we export as a final image. We will use the same method here with an extra condition that we want our build stage to always run binaries for our native architecture and our runtime stage to contain binaries for the target architecture.

When we start a build stage with a command like FROM debian, it instructs the builder to pull the Debian image that matches the value that was set with the --platform flag during your build. What we want to do instead is to make sure this Debian image is always native to our current machine. When we are on an x86 system we could instead use a command like FROM --platform=linux/amd64 debian. Now, no matter what platform was set during the build, this stage will always be based on amd64. Except what happens now if we switch to an ARM machine like the new Apple Macs? Do we now need to change all our Dockerfiles? The answer is no; instead of writing a constant platform value into our Dockerfile, we should use a variable: FROM --platform=$BUILDPLATFORM debian.

BUILDPLATFORM is part of a set of automatically defined (global scope) build arguments that you can use. It will always match the platform of your current system, and the builder will fill in the correct value for us.

Here is a complete list of such variables:

BUILDPLATFORM: the platform of the machine running the build (e.g. linux/amd64)
BUILDOS: the OS component of BUILDPLATFORM, e.g. linux
BUILDARCH: the architecture component of BUILDPLATFORM, e.g. amd64, arm64, riscv64
BUILDVARIANT: used to set the ARM variant, e.g. v7
TARGETPLATFORM: the value set with the --platform flag on build
TARGETOS: the OS component of --platform, e.g. linux
TARGETARCH: the architecture component of --platform, e.g. arm64
TARGETVARIANT: the ARM variant component of --platform, e.g. v7
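If you want to see what these resolve to for a given build, a small throwaway debug stage can print them. This is a minimal illustrative sketch (the stage name debug is arbitrary):

# illustrative debug stage: print the automatic build arguments during a build
FROM --platform=$BUILDPLATFORM alpine AS debug
# redeclare the automatic build arguments so they are usable in this stage
ARG BUILDPLATFORM TARGETPLATFORM TARGETOS TARGETARCH
RUN echo "building on $BUILDPLATFORM, targeting $TARGETPLATFORM ($TARGETOS/$TARGETARCH)"

Building it with something like docker buildx build --platform=linux/arm64 --target=debug --progress=plain . prints the values in the build log.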

Now in our build stage, we can pull in our source code, install the compiler package we want to use, etc. These commands should be identical to the ones you are already using in your single-platform or emulation-based Dockerfile.

The only additional change that needs to be done now is that when you are calling your compiler process you need to pass it a parameter that configures it to return artifacts for your actual target architecture. Remember that now that our build stage always contains binaries for the host’s native architecture, the compiler can’t determine the target’s architecture automatically from the environment anymore.

In order to pass the target architecture, we can use the same automatically defined build arguments shown before, this time with the TARGET* prefix. As we are using these build arguments inside the stage, they need to be in the local scope and declared with an ARG command before being used.

FROM --platform=$BUILDPLATFORM alpine AS build
# RUN <install build dependencies/compiler>
# COPY <source> .
ARG TARGETPLATFORM
RUN compile --target=$TARGETPLATFORM -o /out/mybinary

The only thing left to do now is to create a runtime stage that we will export as the result of our build. For this stage, we will not use --platform in the FROM definition. We could write FROM --platform=$TARGETPLATFORM, but that is the default value for all build stages anyway, so using the flag is redundant.

FROM alpine
# RUN <install runtime dependencies installed via emulation>
COPY --from=build /out/mybinary /bin

To confirm, let’s look at what happens if the above Dockerfile is built for two platforms with docker buildx build --platform=linux/amd64,linux/arm64 . invoked on an ARM64-based system like a new Apple M1 machine.

First, the builder will pull down the Alpine image for ARM64, install the build dependencies and copy over the source. Note that these steps execute only once, even though we are building for two different platforms. BuildKit is smart enough to understand that both of these platforms depend on the same compiler and source code and automatically deduplicates the steps.

Now two separate instances of containers running the compiler process will be invoked, with a different value passed to the --target flag.

For the export stage, BuildKit now pulls down both ARM64 and x86 versions of the Alpine image. If any runtime packages were used then the x86 versions are installed with the help of the emulation layer. All these steps already ran in parallel to the build stage as they did not share dependencies. As the last step, the binary created by the respective compiler process is copied to the stage.

Sample Dockerfile commands as run on Apple M1 machine. Blue contains x86 binaries, yellow ARM.

Both of the runtime stages are then converted into OCI images, and BuildKit prepares an OCI Image Index structure (also called a manifest list) that contains both of these images.
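Once the image has been pushed to a registry, you can inspect that index with buildx’s imagetools command (the image name below is just a placeholder):

# show the manifest list and the per-platform manifests it references
docker buildx imagetools inspect myrepo/myapp:latest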

Go example

For a functional example, let’s look at a project written in the Go programming language.

A typical multi-stage Dockerfile building a simple Go application would look something like:

FROM golang:1.17-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /out/myapp .

FROM alpine
COPY --from=build /out/myapp /bin

Using cross-compilation in Go is very easy. The only thing you need to do is pass the target architecture with environment variables. go build understands the GOOS and GOARCH environment variables. There is also GOARM for specifying the ARM version for 32-bit systems.

The GOOS and GOARCH values are the same format as the TARGETOS and TARGETARCH values which we saw earlier that BuildKit makes available inside the Dockerfile.
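Outside of Docker, the same mechanism looks like this (a quick illustration, run from the project directory; the output path is arbitrary):

# build a linux/arm64 binary on any host, no emulation involved
GOOS=linux GOARCH=arm64 go build -o /tmp/myapp-linux-arm64 .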

When we apply all the steps we learned before (pinning the build stage to the build platform, defining the ARG TARGET* variables, and passing cross-compilation parameters to the compiler), we get:

FROM --platform=$BUILDPLATFORM golang:1.17-alpine AS build
WORKDIR /src
COPY . .
ARG TARGETOS TARGETARCH
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/myapp .

FROM alpine
COPY --from=build /out/myapp /bin

As you can see, we only needed to make three small modifications, and our Dockerfile is much more powerful. Note that there are no downsides to these changes: the Dockerfile is still portable and works on all architectures. It’s just that when we build for a non-native architecture, our builds are much faster.

Let’s look at some additional optimizations you might want to consider as well.

When Go applications depend on other Go modules, they usually do so either by including the sources of the dependencies inside a vendor directory, or, if the project does not include such a directory, the Go compiler pulls the dependencies listed in the go.mod file while the go build command is running.

In the latter case, because the go build process was invoked twice for our multi-platform build (even though our own source code was copied only once), these dependencies would also be pulled twice. It’s better to avoid that by telling Go to download them before we branch our build stage with the ARG TARGETARCH command.

FROM --platform=$BUILDPLATFORM golang:1.17-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
ARG TARGETOS TARGETARCH
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/myapp .

FROM alpine
COPY --from=build /out/myapp /bin

Now when the two go build processes run, they already have access to the pre-pulled dependencies. We also copied only the go.mod and go.sum files before downloading the packages, so that when our regular source code changes we don’t invalidate the cache for the module downloads.

For completeness, let’s also include cache mounts inside our Dockerfile. RUN --mount options allow exposing new mountpoints to the command that may be used for accessing your source code, build secrets, temporary and cache directories. Cache mounts create persistent directories where you can write your application-specific cache files that reappear the next time you invoke the builder again. This results in big performance gains when you are doing incremental builds after making changes to your source code.

In Go, the directories that you want to turn into cache mounts are /root/.cache/go-build and /go/pkg. The first is the default location of the Go build cache and the second is where go mod downloads modules. This assumes your user is root and GOPATH is /go.

RUN --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/go/pkg \
    GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/myapp .

You can also use a type=bind mount (the default type) to mount in your source code. This helps avoid the overhead of actually copying the files with COPY. When cross-compiling in a Dockerfile, this is sometimes especially important if you don’t want to copy your source before defining ARG TARGETPLATFORM, as changes in the source code would invalidate the cache for your target-specific dependencies. Note that type=bind mounts are mounted read-only by default. If the commands you are running need to write files to your source code, you might still want to use COPY or set the rw option for the mount, as sketched below.
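Here is what the rw variant could look like as a sketch; generate.sh stands in for a hypothetical step that writes into the source tree, and anything written to the mount is discarded after the RUN instruction finishes rather than persisted to your build context:

# writable bind mount: the step may modify the mounted sources temporarily
RUN --mount=type=bind,target=.,rw \
    ./generate.sh && \
    GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/myapp .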

This leads to our complete, fully-optimized cross-compiling Go Dockerfile:

FROM --platform=$BUILDPLATFORM golang:1.17-alpine AS build
WORKDIR /src
ARG TARGETOS TARGETARCH
RUN --mount=target=. \
    --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/go/pkg \
    GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/myapp .

FROM alpine
COPY --from=build /out/myapp /bin

As an example, I measured how much time it takes to build the Docker CLI binary with the multi-stage Dockerfile we started with and then the one with the optimizations applied. As you can see, the difference is quite drastic.

https://github.com/docker/cli build time with test Dockerfiles (seconds, lower is better)

For the initial build only for our native architecture, the difference is minimal — only a small change from not needing to run the COPY instruction. But when we build an image both for ARM and x86 CPUs the difference is huge. For our new Dockerfile, doubling architectures increases build time only by 70% (because some parts of the builds were shared), while when the second architecture builds with QEMU emulation, our build time is almost seven times longer.

With the additional help from the cache mounts that we added, our incremental rebuilds with a Go source code changes are reaching a ridiculous 100x speed improvement territory compared to the old Dockerfile.