Helping Developers Simplify Apps, Toolchains, and Open Source

It’s been an exciting four months since we announced that Docker is refocusing on developers. We have spent much of that time listening to you, our developer community, in meetups, on GitHub, through social media, with our Docker Captains, and in face-to-face one-on-ones. Your support and feedback on our refocused direction have been helpful and positive, and we’re fired up for the year ahead!

What’s driving our enthusiasm for making developers successful? Quite simply, it’s in recognition of the enormous impact your creativity – manifested in the applications you ship – has on all of our lives. Widespread adoption of smartphones and near-pervasive Internet connectivity only accelerates consumer demand for new applications. And businesses recognize that applications are key to engaging their customers, partnering effectively with their supply chain ecosystem, and empowering their employees.

As a result, the demand for developers has never been higher. The current worldwide population of 18 million developers is growing approximately 20% every year (in contrast to the 0.6% annual growth of the overall US labor force). Yet, despite this torrid growth, demand for developers in 2020 will outstrip supply by an estimated 1 million. Thus, we see tremendous opportunities in helping every developer to be even more creative and productive as quickly as possible.

But how best to super-charge developer creativity and productivity? More than half of our employees at Docker are developers, and they, our Docker Captains, and our developer community collectively say that reducing complexity is key. In particular, there is an opportunity to reduce complexity stemming from three potential sources:

Applications. Developers want to ship their ideas from code to cloud as quickly as possible. But, while cloud-native microservices-based apps offer many compelling benefits, these can come at the cost of complexity. Orders of magnitude more app components, multiple languages, multiple service implementations – Containers? Serverless functions? Cloud-hosted services? – and more, all risk increasing the cognitive load on development teams.

Toolchains. In shipping code-to-cloud, developers want the freedom to select their own tools for each stage of their app delivery toolchains, and there is a rich breadth and depth of innovative products from which to select. But integrating multiple point products across the toolchain stages of source code management, build/CI, deployment, and others can be challenging. Often, it results in custom, one-off scripts that subsequently need to be maintained, lossy hand-offs of app state between delivery stages, and subpar developer experiences.

Open Source. No surprise to the Docker community, an increasing number of developers are attracted by the creativity and velocity of innovation in open source technologies. But development teams often struggle with how to integrate and get the most out of open source components in their apps, how to manage the lifecycle of open source updates and patches, and how to navigate open source licensing dos and don’ts.

And for all the complexities above, development teams are seeking code-to-cloud solutions that won’t slow them down or lock them into any specific tool or runtime environment.

At Docker, we view our mission as helping developers bring their ideas to life by conquering the complexities of application development. In conquering these complexities, we believe that developers shouldn’t have to trade off freedom of choice for simplicity, agility, or portability.

We are fortunate that today there are millions of developers already using Docker Desktop and Docker Hub – rated “Second Most Loved Platform” in Stack Overflow’s 2019 survey – to conquer the complexity of building, sharing, and running cloud-native microservices-based applications. In 2020 we will help development teams further reduce complexity so they can ship creative applications even faster. How? Stay tuned for more this week!
The post Helping Developers Simplify Apps, Toolchains, and Open Source appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker Desktop for Windows Home is here!

Last year we announced that Docker had released a preview of Docker Desktop with WSL 2 integration. We are now pleased to announce that we have completed the work to enable experimental support for Windows Home WSL 2 integration. This means that Windows Insider users on 19040 or higher can now install and use Docker Desktop!

Feedback on this first version of Docker Desktop for Windows Home is welcome! To get started, you will need to be on Windows Insider Preview build 19040 or higher and install Docker Desktop Edge 2.2.2.0.

What’s in Docker Desktop for Windows Home?

Docker Desktop for Windows Home with WSL 2 is a full version of Docker Desktop for Linux container development. It comes with the same feature set as our existing Docker Desktop WSL 2 backend. This gives you:

- Latest version of Docker on your Windows machine
- Install Kubernetes in one click on Windows Home
- Integrated UI to view/manage your running containers
- Start Docker Desktop in <5 seconds
- Use Linux Workspaces
- Dynamic resource/memory allocation
- Networking stack, support for HTTP proxy settings, and trusted CA synchronization

How do I get started developing with Docker Desktop? 

For the best experience of developing with Docker and WSL 2, we suggest having your code inside a Linux distribution. This improves file system performance, and thanks to products like VSCode, you can still do all of your work inside the Windows UI, in an IDE you know and love.

First, make sure you are on the Windows Insider program, are on build 19040 or higher, and have installed Docker Desktop Edge.

Next, install a WSL distribution of Linux (for this example I will assume something like Ubuntu from the Microsoft Store).

You may want to check that your distro is set to V2. To check, run the following in PowerShell:

wsl -l -v 

If you see your distro is version 1, you will need to run:

wsl --set-version DistroName 2

Once you have a V2 WSL distro, Docker Desktop will automatically set this up with Docker.

The next step is to start working with your code inside this Ubuntu distro and ideally with your IDE still in Windows. In VSCode this is pretty straightforward.

You will want to open up VSCode and install the Remote WSL extension. This will allow you to work with a remote server in the Linux distro while your IDE client stays on Windows.

Now we need to start working in VSCode remotely. The easiest way to do this is to open up your terminal and type:

wsl code .

This will open a new VSCode connected remotely to your default distro which you can check in the bottom corner of the screen. 

(Or you can just look for Ubuntu in your Start menu, open it, and then run code . )

Once in VSCode, I use the terminal in VSCode to pull my code and start working natively in Linux with Docker from my Windows Home machine!

Other tips and tricks:

- If you want to get the best file system performance, avoid mounting from the Windows file system (even from a WSL distro), e.g. avoid docker run -v /mnt/c/users:/users
- If you are worried about the size of the docker-desktop-data VHDX, or need to change it, you can do this through the WSL tooling built into Windows: https://docs.microsoft.com/en-us/windows/wsl/wsl2-ux-changes#understanding-wsl-2-uses-a-vhd-and-what-to-do-if-you-reach-its-max-size
- If you are worried about CPU/memory usage, you can put limits on memory/CPU/swap size of the WSL 2 utility VM: https://docs.microsoft.com/en-us/windows/wsl/release-notes#build-18945
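For the CPU/memory limits in the last tip, WSL 2 reads a .wslconfig file from your Windows user profile. A minimal sketch, assuming the [wsl2] keys from Microsoft's WSL release notes (the values here are just examples, not recommendations):

```ini
; %UserProfile%\.wslconfig -- limits for the WSL 2 utility VM
[wsl2]
memory=6GB      ; cap the RAM available to the VM
processors=4    ; cap the number of virtual CPUs
swap=2GB        ; size of the swap file
```

Run wsl --shutdown afterwards so the utility VM restarts with the new limits.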

Your feedback needed!

We are excited to get your feedback on the first version of Docker Desktop for Windows Home and for you to tell us how we can make it even better.

To get started with WSL 2 Docker Desktop on Windows Home today, you will need to be on Windows Insider Preview build 19040 or higher and install Docker Desktop Edge 2.2.2.0.
The post Docker Desktop for Windows Home is here! appeared first on Docker Blog.

How to deploy on remote Docker hosts with docker-compose

The docker-compose tool is pretty popular for running dockerized applications in a local development environment. All we need to do is write a Compose file containing the configuration for the app’s services and have a running Docker engine for deployment. From here, we can get the app running locally in a few seconds with a single  `docker-compose up` command. 

This was the initial scope but…

As developers look to have the same ease of deployment in CI pipelines and production environments as in their development environment, we find today that docker-compose is being used in different ways and beyond its initial scope. The challenge in such cases is that docker-compose only provided support for running on remote Docker engines through the use of the DOCKER_HOST environment variable and the -H/--host command-line option. This is not very user friendly, and managing deployments of Compose applications across multiple environments becomes a burden.

To address this issue, we rely on Docker Contexts to securely deploy Compose applications across different environments and manage them effortlessly from our localhost. The goal of this post is to show how to use contexts to target different environments for deployment and easily switch between them.

We’ll start by defining a sample application to use throughout this exercise, then we’ll show how to deploy it on the localhost. Next, we’ll have a look at a Docker Context and the information it holds to allow us to safely connect to remote Docker engines. Finally, we’ll exercise the use of Docker Contexts with docker-compose to deploy on remote engines.

Before proceeding, docker and docker-compose must be installed on the localhost. Docker Engine and Compose are included in Docker Desktop for Windows and macOS. For Linux you will need to get Docker Engine and docker-compose. Make sure you get docker-compose with the context support feature. This is available starting with release 1.26.0-rc2 of docker-compose.

Sample Compose application

Let’s define a Compose file describing an application consisting of two services: frontend and backend.  The frontend service will run an nginx proxy that will forward the HTTP requests to a simple Go app server. 
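The sample's nginx.conf is essentially one reverse-proxy rule. A sketch of what it looks like — the exact upstream port is an assumption here, so check the downloaded sample for the real values (the service name backend comes from the Compose file):

```nginx
server {
  listen 80;
  location / {
    # forward every request to the Go app server;
    # "backend" resolves through Compose's built-in service discovery
    proxy_pass http://backend:8080;
  }
}
```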

A sample with all necessary files for this exercise can be downloaded from here or any other sample from the Compose samples repository can be used instead.

The project structure and the Compose file can be found below:

$ tree hello-docker
hello-docker
├── backend
│   ├── Dockerfile
│   └── main.go
├── docker-compose.yml
└── frontend
    ├── Dockerfile
    └── nginx.conf

docker-compose.yml

version: "3.6"
services:
  frontend:
    build: frontend
    ports:
    - 8080:80
    depends_on:
    - backend
  backend:
    build: backend

Running on localhost

To deploy the application we defined previously, go to the project directory and run docker-compose:

$ cd hello-docker/
$ docker-compose up -d
Creating network "hello-docker_default" with the default driver
Creating hello-docker_backend_1  ... done
Creating hello-docker_frontend_1 ... done
$

Check all containers are running and port 80 of the frontend service container is mapped to port 8080 of the localhost as described in the docker-compose.yml.

$ docker ps
CONTAINER ID  IMAGE                 COMMAND                  CREATED        STATUS        PORTS                 NAMES
07b55d101e74  nginx:latest          "nginx -g 'daemon of…"   6 seconds ago  Up 5 seconds  0.0.0.0:8080->80/tcp  hello-docker_frontend_1
48cdf1b8417c  hello-docker_backend  "/usr/local/bin/back…"   6 seconds ago  Up 5 seconds                        hello-docker_backend_1

Query the web service on port 8080 to get the hello message from the go backend.

$ curl localhost:8080

          ##         .
    ## ## ##        ==
 ## ## ## ## ##     ===
/"""""""""""""""""\___/ ===
{                       /  ===-
\______ O           __/
 \    \         __/
  \____\_______/

Hello from Docker!

Running on a remote host

A remote Docker host is a machine, inside or outside our local network which is running a Docker Engine and has ports exposed for querying the Engine API.
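As an illustration, on a Linux host the Engine API can be exposed over TCP through /etc/docker/daemon.json. This is just a sketch of the mechanism — an unauthenticated TCP socket is insecure, so in practice protect it with TLS or prefer the SSH transport used in the rest of this post:

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
```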

The sample application can be deployed on a remote host in several ways. Assume we have SSH access to a remote docker host with key-based authentication to avoid a password prompt when deploying the application.

There are three ways to deploy it on the remote host:

1. Manual deployment by copying project files, installing docker-compose, and running it

A common usage of Compose is to copy the project source with the docker-compose.yml, install docker-compose on the target machine where we want to deploy the compose app and finally run it.

$ scp -r hello-docker user@remotehost:/path/to/src
$ ssh user@remotehost
$ pip install docker-compose
$ cd /path/to/src/hello-docker
$ docker-compose up -d

The disadvantage in this case is that for any change in the application sources or Compose file, we have to copy the files, connect to the remote host, and re-run.

2. Using DOCKER_HOST environment variable to set up the target engine

Throughout this exercise we use the DOCKER_HOST environment variable to target docker hosts, but the same can be achieved by passing the -H/--host argument to docker-compose.

$ cd hello-docker
$ DOCKER_HOST="ssh://user@remotehost" docker-compose up -d

This is a better approach than the manual deployment. But it gets quite annoying, as it requires setting/exporting the remote host endpoint on every application change or host change.

3. Using docker contexts 

$ docker context ls
NAME     DESCRIPTION   DOCKER ENDPOINT            KUBERNETES ENDPOINT   ORCHESTRATOR
...
remote                 ssh://user@remotemachine

$ cd hello-docker
$ docker-compose --context remote up -d

Docker Contexts are an efficient way to automatically switch between different deployment targets. We’ll discuss contexts in the next section in order to understand how Docker Contexts can be used with compose to ease / speed up deployment.

Docker Contexts

A Docker Context is a mechanism to provide names to Docker API endpoints and store that information for later usage. The Docker Contexts can be easily managed with the Docker CLI as shown in the documentation. 

Create and use context to target remote host

To access the remote host in an easier way with the Docker client, we first create a context that will hold the connection path to it.

$ docker context create remote --docker "host=ssh://user@remotemachine"
remote
Successfully created context "remote"

$ docker context ls
NAME       DESCRIPTION              DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *  Current DOCKER_HOST...   unix:///var/run/docker.sock                         swarm
remote                              ssh://user@remotemachine

Make sure we have set up key-based authentication for SSH-ing to the remote host. Once this is done, we can list containers on the remote host by passing the context name as an argument.

$ docker --context remote ps
CONTAINER ID    IMAGE   COMMAND   CREATED   STATUS   NAMES

We can also set the “remote” context as the default context for our docker commands. This will allow us to run all the docker commands directly on the remote host without passing the context argument on each command.

$ docker context use remote
remote
Current context is now "remote"

$ docker context ls
NAME      DESCRIPTION              DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default   Current DOCKER_HOST...   unix:///var/run/docker.sock                         swarm
remote *                           ssh://user@remotemachine

docker-compose context usage

The latest release of docker-compose now supports the use of contexts for accessing Docker API endpoints. This means we can run docker-compose and specify the context “remote” to automatically target the remote host. If no context is specified, docker-compose will use the current context just like the Docker CLI.

$ docker-compose --context remote up -d
/tmp/_MEI4HXgSK/paramiko/client.py:837: UserWarning: Unknown ssh-ed25519 host key for 10.0.0.52: b'047f5071513cab8c00d7944ef9d5d1fd'
Creating network "hello-docker_default" with the default driver
Creating hello-docker_backend_1  ... done
Creating hello-docker_frontend_1 ... done

$ docker --context remote ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS          PORTS                  NAMES
ddbb380635aa   hello-docker_frontend  "nginx -g 'daemon of…"   24 seconds ago  Up 23 seconds   0.0.0.0:8080->80/tcp   hello-docker_web_1
872c6a55316f   hello-docker_backend   "/usr/local/bin/back…"   25 seconds ago  Up 24 seconds                          hello-docker_backend_1

Compose deployments across multiple targets

Many developers may have several development/test environments that they need to switch between. Deployment across all these is now effortless with the use of contexts in docker-compose.

We now try to exercise context switching between several Docker engines. For this, we define three targets:

- Localhost running a local Docker engine
- A remote host accessible through SSH
- A Docker-in-Docker container acting as another remote host

The table below shows the mapping of contexts to Docker targets:

Target environment   Context name   API endpoint
localhost            default        unix:///var/run/docker.sock
Remote host          remote         ssh://user@remotemachine
Docker-in-Docker     dind           tcp://127.0.0.1:2375

To run a Docker-in-Docker container with the port 2375 mapped to localhost run:

$ docker run --rm -d -p "2375:2375" --privileged -e "DOCKER_TLS_CERTDIR=" --name dind docker:19.03.3-dind
ed92bc991bade2d41cab08b8c070c70b788d8ecf9dffc89e8c6379187aed9cdc

$ docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                              NAMES
ed92bc991bad   docker:19.03.3-dind   "dockerd-entrypoint.…"   17 seconds ago   Up 15 seconds   0.0.0.0:2375->2375/tcp, 2376/tcp   dind

Create a new context ‘dind’ to easily target the container:

$ docker context create dind --docker "host=tcp://127.0.0.1:2375" --default-stack-orchestrator swarm
dind
Successfully created context "dind"

$ docker context ls
NAME       DESCRIPTION              DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *  Current DOCKER_HOST...   unix:///var/run/docker.sock                         swarm
dind                                tcp://127.0.0.1:2375                                swarm
remote                              ssh://user@remotemachine                            swarm

We can now target any of the environments to deploy the Compose application from the localhost.

$ docker context use dind
dind
Current context is now "dind"
$ docker-compose up -d
Creating network "hello-docker_default" with the default driver
Creating hello-docker_backend_1  ... done
Creating hello-docker_frontend_1 ... done
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS          PORTS                  NAMES
951784341a0d   hello-docker_frontend  "nginx -g 'daemon of…"   34 seconds ago  Up 33 seconds   0.0.0.0:8080->80/tcp   hello-docker_frontend_1
872c6a55316f   hello-docker_backend   "/usr/local/bin/back…"   35 seconds ago  Up 33 seconds                          hello-docker_backend_1

$ docker --context default ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                              NAMES
ed92bc991bad   docker:19.03.3-dind   "dockerd-entrypoint.…"   28 minutes ago   Up 28 minutes   0.0.0.0:2375->2375/tcp, 2376/tcp   dind

$ docker-compose --context remote up -d
/tmp/_MEIb4sAgX/paramiko/client.py:837: UserWarning: Unknown ssh-ed25519 host key for 10.0.0.52: b'047f5071513cab8c00d7944ef9d5d1fd'
Creating network "hello-docker_default" with the default driver
Creating hello-docker_backend_1  ... done
Creating hello-docker_frontend_1 ... done

$ docker context use default
default
Current context is now "default"
$ docker-compose up -d
Creating network "hello-docker_default" with the default driver
Creating hello-docker_backend_1  ... done
Creating hello-docker_frontend_1 ... done
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                              NAMES
077b5e5b72e8   hello-docker_frontend  "nginx -g 'daemon of…"   About a minute ago   Up About a minute   0.0.0.0:8080->80/tcp               hello-docker_frontend_1
fc01878ad14e   hello-docker_backend   "/usr/local/bin/back…"   About a minute ago   Up About a minute                                      hello-docker_backend_1
ed92bc991bad   docker:19.03.3-dind    "dockerd-entrypoint.…"   34 minutes ago       Up 34 minutes       0.0.0.0:2375->2375/tcp, 2376/tcp   dind

The sample application runs now on all three hosts. Querying the frontend service on each of these hosts as shown below should return the same message:

$ curl localhost:8080

$ docker exec -it dind sh -c "wget -O - localhost:8080"

$ curl 10.0.0.52:8080

Output:

          ##         .
    ## ## ##        ==
 ## ## ## ## ##     ===
/"""""""""""""""""\___/ ===
{                       /  ===-
\______ O           __/
 \    \         __/
  \____\_______/

Hello from Docker!

Summary

Deploying to remote hosts with docker-compose has been a common use-case for quite some time. 

The Docker Contexts support in docker-compose offers an easy and elegant approach to target different remote hosts. Switching between different environments is now easy to manage, and deployment risks across them are reduced. We have shown an example of how to access remote docker hosts via the SSH and TCP protocols, hoping these cover a large number of use cases.
The post How to deploy on remote Docker hosts with docker-compose appeared first on Docker Blog.

Getting Started with Istio Using Docker Desktop

This is a guest post from Docker Captain Elton Stoneman, a Docker alumnus who is now a freelance consultant and trainer, helping organizations at all stages of their container journey. Elton is the author of the book Learn Docker in a Month of Lunches, and numerous Pluralsight video training courses – including Managing Apps on Kubernetes with Istio and Monitoring Containerized Application Health with Docker.

Istio is a service mesh – a software component that runs in containers alongside your application containers and takes control of the network traffic between components. It’s a powerful architecture that lets you manage the communication between components independently of the components themselves. That’s useful because it simplifies the code and configuration in your app, removing all network-level infrastructure concerns like routing, load-balancing, authorization and monitoring – which all become centrally managed in Istio.

There’s a lot of good material for digging into Istio. My fellow Docker Captain Lee Calcote is the co-author of Istio: Up and Running, and I’ve just published my own Pluralsight course Managing Apps on Kubernetes with Istio. But it can be a difficult technology to get started with because you really need a solid background in Kubernetes before you get too far. In this post, I’ll try and keep it simple. I’ll focus on three scenarios that Istio enables, and all you need to follow along is Docker Desktop.

Setup

Docker Desktop gives you a full Kubernetes environment on your laptop. Just install the Mac or Windows version – be sure to switch to Linux containers if you’re using Windows – then open the settings from the Docker whale icon, and select Enable Kubernetes in the Kubernetes section. You’ll also need to increase the amount of memory Docker can use, because Istio and the demo app use a fair bit – in the Resources section increase the memory slider to at least 6GB.

Now grab the sample code for this blog post, which is in my GitHub repo:

git clone https://github.com/sixeyed/istio-samples.git
cd istio-samples


The repo has a set of Kubernetes manifests that will deploy Istio and the demo app, which is a simple bookstore website (this is the Istio team’s demo app, but I use it in different ways so be sure to use my repo to follow along). Deploy everything using the Kubernetes control tool kubectl, which is installed as part of Docker Desktop:

kubectl apply -f ./setup/

You’ll see dozens of lines of output as Kubernetes creates all the Istio components along with the demo app – which will all be running in Docker containers. It will take a few minutes for all the images to download from Docker Hub, and you can check the status using kubectl:

# Istio – will have “1/1” in the “READY” column when fully running:
kubectl get deploy -n istio-system

# demo app – will have “2/2” in the “READY” column when fully running:
kubectl get pods

When all the bits are ready, browse to http://localhost/productpage and you’ll see this very simple demo app:

And you’re good to go. If you’re happy working with Kubernetes YAML files you can look at the deployment spec for the demo app, and you’ll see it’s all standard Kubernetes resources – services, service accounts and deployments. Istio is managing the communication for the app, but we haven’t deployed any Istio configurations, so it isn’t doing much yet.

The demo application is a distributed app. The homepage runs in one container and it consumes data from REST APIs running in other containers. The book details and book reviews you see on the page are fetched from other containers. Istio is managing the network traffic between those components, and it’s also managing the external traffic which comes into Kubernetes and on to the homepage.

We’ll use this demo app to explore the main features of Istio: traffic management, security and observability.

Managing Traffic – Canary Deployments with Istio

The homepage is kinda boring, so let’s liven it up with a new release. We want to do a staged release so we can check out how the update gets received, and Istio supports both blue-green and canary deployments. Canary deployments are generally more useful and that’s what we’ll use. We’ll have two versions of the home page running, and Istio will send a proportion of the traffic to version 1 and the remainder to version 2:

We’re using Istio for service discovery and routing here: all incoming traffic comes into Istio and we’re going to set rules for how it forwards that traffic to the product page component. We do that by deploying a VirtualService, which is a custom Istio resource. That contains this routing rule for HTTP traffic:

gateways:
- bookinfo-gateway
http:
- route:
  - destination:
      host: productpage
      subset: v1
      port:
        number: 9080
    weight: 70
  - destination:
      host: productpage
      subset: v2
      port:
        number: 9080
    weight: 30

There are a few moving pieces here:

- The gateway is the Istio component which receives external traffic. The bookinfo-gateway object is configured to listen to all HTTP traffic, but gateways can be restricted to specific ports and host names.
- The destination is the actual target where traffic will be routed (which can be different from the requested domain name). In this case, there are two subsets: v1, which will receive 70% of traffic, and v2, which receives 30%.
- Those subsets are defined in a DestinationRule object, which uses Kubernetes labels to identify pods within a service. In this case the v1 subset finds pods with the label version=v1, and the v2 subset finds pods with the label version=v2.
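For reference, the DestinationRule behind those subsets is shaped roughly like this (a sketch consistent with the description above — check the repo's manifests for the exact resource names and API version):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  subsets:
  - name: v1
    labels:
      version: v1   # selects pods labeled version=v1
  - name: v2
    labels:
      version: v2   # selects pods labeled version=v2
```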

Sounds complicated, but all it’s really doing is defining the rules to shift traffic between different pods. Those definitions come in Kubernetes manifest YAML files, which you deploy in the same way as your applications. So we can do our canary deployment of version 2 with a single command – this creates the new v2 pod, together with the Istio routing rules:

# deploy:
kubectl apply -f ./canary-deployment

# check the deployment – it’s good when all pods show “2/2” in “READY”:
kubectl get pods

Now if you refresh the bookstore demo app a few times, you’ll see that most of the responses are the same boring v1 page, but a lucky few times you’ll see the v2 page which is the result of much user experience testing:

As the positive feedback rolls in you can increase the traffic to v2 just by altering the weightings in the VirtualService definition and redeploying. Both versions of your app are running throughout the canary stage, so when you shift traffic you’re sending it to components that are already up and ready to handle traffic, so there won’t be additional latency from new pods starting up.
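For example, flipping the split is just an edit to the two weight fields in the VirtualService route, followed by a kubectl apply:

```yaml
  - destination:
      host: productpage
      subset: v1
      port:
        number: 9080
    weight: 30   # was 70
  - destination:
      host: productpage
      subset: v2
      port:
        number: 9080
    weight: 70   # was 30
```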

Canary deployments are just one aspect of traffic management which Istio makes simple. You can do much more, including adding fault tolerance with retries and circuit breakers, all with Istio components and without any changes to your apps.

Securing Traffic – Authentication and Authorization with mTLS

Istio handles all the network traffic between your components transparently, without the components themselves knowing that it’s interfering. It does this by running all the application container traffic through a network proxy, which applies Istio’s rules. We’ve seen how you can use that for traffic management, and it works for security too.

If you need encryption in transit between app components, and you want to enforce access rules so only certain consumers can call services, Istio can do that for you too. You can keep your application code and config simple, use basic unauthenticated HTTP and then apply security at the network level.

Authentication and authorization are security features of Istio which are much easier to use than they are to explain. Here’s the diagram of how the pieces fit together:

Here the product page component on the left is consuming a REST API from the reviews component on the right. Those components run in Kubernetes pods, and you can see each pod has one Docker container for the application and a second Docker container running the Istio proxy, which handles the network traffic for the app.

This setup uses mutual-TLS for encrypting the HTTP traffic and authenticating and authorizing the caller:

- The authentication Policy object applied to the service requires mutual TLS, which means the service proxy listens on port 443 for HTTPS traffic, even though the service itself is only configured to listen on port 80 for HTTP traffic.
- The AuthorizationPolicy object applied to the service specifies which other components are allowed access. In this case, everything is denied access, except the product page component which is allowed HTTP GET access.
- The DestinationRule object is configured for mutual TLS, which means the proxy for the product page component will upgrade HTTP calls to HTTPS, so when the app calls the reviews component it will be a mutual-TLS conversation.

Mutual-TLS means the client presents a certificate to identify itself, as well as the service presenting a certificate for encryption (only the server cert is standard HTTPS behavior). Istio can generate and manage all those certs, which removes a huge burden from normal mTLS deployments. 
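As an illustration, the AuthorizationPolicy for the reviews service looks roughly like this — a sketch, since the exact names, namespace, and service account come from the repo's service-authorization manifests:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: reviews
spec:
  selector:
    matchLabels:
      app: reviews   # applies to the reviews pods
  rules:
  - from:
    - source:
        # only the product page's service account identity is allowed in
        principals: ["cluster.local/ns/default/sa/bookinfo-productpage"]
    to:
    - operation:
        methods: ["GET"]   # and only for HTTP GET requests
```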

There’s a lot to take in there, but the deployment and management of all that is super simple, it’s just the same kubectl process:

kubectl apply -f ./service-authorization/

Istio uses the Kubernetes Service Account for identification, and you’ll see when you try the app that nothing has changed: it all works as before. The difference is that no other components running in the cluster can access the reviews component now; the API is locked down so only the product page can consume it.

You can verify that by connecting to another container – the details component is running in the same cluster. Try to consume the reviews API from the details container:

docker container exec -it $(docker container ls --filter name=k8s_details --format '{{ .ID }}') sh

curl http://reviews:9080/1

You’ll see an error – RBAC: access denied, which is Istio enforcing the authorization policy. This is powerful stuff, especially having Istio manage the certs for you. It generates certs with a short lifespan, so even if they do get compromised they’re not usable for long. All this without complicating your app code or dealing with self-signed certs.

Observability – Visualising the Service Mesh with Kiali

All network traffic runs through Istio, which means it can monitor and record all the communication. Istio uses a pluggable architecture for storing telemetry, which has support for standard systems like Prometheus and Elasticsearch. 

Collecting and storing telemetry for every network call can be expensive, so this is all configurable. The deployment of Istio we’re using is the demo configuration, which has telemetry configured so we can try it out. Telemetry data is sent from the service proxies to the Istio component called Mixer, which can send it out to different back-end stores, in this case, Prometheus:

(This diagram is a simplification – Prometheus actually pulls the data from Istio, and you can use a single Prometheus instance to collect metrics from Istio and your applications).

The data in Prometheus includes response codes and durations, and Istio comes with a bunch of Grafana dashboards you can use to drill down into the metrics. And it also has support for a great tool called Kiali, which gives you a very useful visualization of all your services and the network traffic between them.

Kiali is already running in the demo deployment, but it’s not published by default. You can gain access by deploying a Gateway and a VirtualService:

kubectl apply -f ./visualization-kiali/
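The manifests in that folder boil down to a Gateway listening on the Kiali port and a VirtualService routing to the Kiali service. A sketch of what they might look like (hostnames and object names are illustrative assumptions – the real files are in ./visualization-kiali/):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kiali-gateway          # hypothetical name
spec:
  selector:
    istio: ingressgateway      # use Istio's ingress gateway proxy
  servers:
    - port:
        number: 15029          # the port we browse to on localhost
        name: http-kiali
        protocol: HTTP
      hosts: ["*"]
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
spec:
  hosts: ["*"]
  gateways: ["kiali-gateway"]
  http:
    - route:
        - destination:
            host: kiali.istio-system.svc.cluster.local
            port:
              number: 20001    # Kiali's default service port
```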

Now refresh the app a few times at http://localhost/productpage and then check out the service mesh visualization in Kiali at http://localhost:15029. Log in with the username admin and password admin, then browse to the Graph view and you’ll see the live traffic for the bookstore app:

I’ve turned on “requests percentage” for the labels here, and I can see the traffic split between my product page versions is 67% to 34%, which is pretty close to my 70-30 weighting (the more traffic you have, the closer you’ll get to the specified weightings).
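For context, a weighted split like that 70-30 comes from a VirtualService of roughly this shape (an illustrative sketch with assumed subset names, not the exact manifest from the demo):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts: ["productpage"]
  http:
    - route:
        - destination:
            host: productpage
            subset: v1         # assumed subset names, defined in a DestinationRule
          weight: 70           # 70% of requests
        - destination:
            host: productpage
            subset: v2
          weight: 30           # 30% of requests
```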

Kiali is just one of the observability tools Istio supports. The demo deployment also runs Grafana with multiple dashboards and Jaeger for distributed tracing – which is a very powerful tool for diagnosing issues with latency in distributed applications. All the data to power those visualizations is collected automatically by Istio.

Wrap-Up

A service mesh makes the communication layer for your application into a separate entity, which you can control centrally and independently from the app itself. Istio is the most fully-featured service mesh available now, although there is also Linkerd (which tends to have better baseline performance), and the Service Mesh Interface project (which aims to standardise mesh features). 

Using a service mesh comes with a cost – there are runtime costs for hosting additional compute for the proxies and organizational costs for getting teams skilled in Istio. But the scenarios it enables will outweigh the cost for a lot of people, and you can very quickly test if Istio is for you, using it with your own apps in Docker Desktop.
The post Getting Started with Istio Using Docker Desktop appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker Donates the cnab-to-oci Library to cnab.io

Docker is proud and happy to announce the donation of our cnab-to-oci library to the CNAB project. This project was created last year after Microsoft and Docker moved the CNAB specification to the Linux Foundation’s Joint Development Foundation. At that time, the CNAB specification repository was moved from the deislab GitHub organization to the new cnabio organization. The reference implementations – cnab-go, the Golang library implementation of the specification, and duffle, the CLI reference implementation – were also moved.

What is cnab-to-oci for?

Docker helped with the development of the CNAB specification and its reference implementations, and led the work on the cnab-to-oci library for sharing a CNAB bundle using an existing container registry. This library is now used by three CNAB tools – Docker App, Porter and duffle – as well as Docker Hub. It successfully demonstrated how to push, pull and share a CNAB bundle using a registry. This work will be used as a foundation for the future CNAB Registries specification.

The transfer is already in effect, so starting now please refer to github.com/cnabio/cnab-to-oci in your Golang imports.

How does cnab-to-oci store a CNAB bundle into a registry?

As you may know, the OCI image specification introduces two main objects: the OCI Manifest and the OCI Image Index. The first one is well known and represents the classic Docker image. The other one was, at first, used to store multi-architecture images (see nginx as an example).

But what you may not know is that the specification doesn’t restrict the use of OCI Indexes to multi-arch images. You can store almost anything you want, as long as you meet the specification, and it is quite open.

cnab-to-oci uses this openness to push not only the bundle.json but also the invocation image and the component images (or service images for a Docker App). It pushes everything to the same repository, which guarantees that when someone pulls the bundle, all of its components can be pulled as well.

Demo Time

While cnab-to-oci is implemented as a library that can be used by other tools, the repository contains a handy CLI tool that can perform push and pull of any CNAB bundle.json.

With the following command we push a bundle example to the Docker Hub repository. It pushes all the manifests found in the bundle, then creates an OCI Index and pushes it at the end. The digest we get as a result is pointing to the OCI Index of the bundle.

$ make bin/cnab-to-oci
...
$ ./bin/cnab-to-oci push examples/helloworld-cnab/bundle.json -t hubusername/repo:demo --log-level=debug --auto-update-bundle
DEBU[0000] Fixing up bundle docker.io/hubusername/repo:demo
DEBU[0000] Updating entry in relocation map for "cnab/helloworld:0.1.1"
Starting to copy image cnab/helloworld:0.1.1...
Completed image cnab/helloworld:0.1.1 copy
DEBU[0004] Bundle fixed
DEBU[0004] Pushing CNAB Bundle docker.io/hubusername/repo:demo
DEBU[0004] Pushing CNAB Bundle Config
DEBU[0004] Trying to push CNAB Bundle Config
DEBU[0004] CNAB Bundle Config Descriptor
DEBU[0004] {
  "mediaType": "application/vnd.cnab.config.v1+json",
  "digest": "sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b",
  "size": 498
}
DEBU[0005] Trying to push CNAB Bundle Config Manifest
DEBU[0005] CNAB Bundle Config Manifest Descriptor
DEBU[0005] {
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "digest": "sha256:6ec4fd695cace0e3d4305838fdf9fcd646798d3fea42b3abb28c117f903a6a5f",
  "size": 188
}
DEBU[0006] Failed to push CNAB Bundle Config Manifest, trying with a fallback method
DEBU[0006] Trying to push CNAB Bundle Config
DEBU[0006] CNAB Bundle Config Descriptor
DEBU[0006] {
  "mediaType": "application/vnd.oci.image.config.v1+json",
  "digest": "sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b",
  "size": 498
}
DEBU[0006] Trying to push CNAB Bundle Config Manifest
DEBU[0006] CNAB Bundle Config Manifest Descriptor
DEBU[0006] {
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "digest": "sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549",
  "size": 193
}
DEBU[0006] CNAB Bundle Config pushed
DEBU[0006] Pushing CNAB Index
DEBU[0006] Trying to push OCI Index
DEBU[0006] {"schemaVersion":2,"manifests":[{"mediaType":"application/vnd.oci.image.manifest.v1+json","digest":"sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549","size":193,"annotations":{"io.cnab.manifest.type":"config"}},{"mediaType":"application/vnd.docker.distribution.manifest.v2+json","digest":"sha256:a59a4e74d9cc89e4e75dfb2cc7ea5c108e4236ba6231b53081a9e2506d1197b6","size":942,"annotations":{"io.cnab.manifest.type":"invocation"}}],"annotations":{"io.cnab.keywords":"[\"helloworld\",\"cnab\",\"tutorial\"]","io.cnab.runtime_version":"v1.0.0","org.opencontainers.artifactType":"application/vnd.cnab.manifest.v1","org.opencontainers.image.authors":"[{\"name\":\"Jane Doe\",\"email\":\"jane.doe@example.com\",\"url\":\"https://example.com\"}]","org.opencontainers.image.description":"A short description of your bundle","org.opencontainers.image.title":"helloworld","org.opencontainers.image.version":"0.1.1"}}
DEBU[0006] OCI Index Descriptor
DEBU[0006] {
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "digest": "sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2",
  "size": 926
}
DEBU[0007] CNAB Index pushed
DEBU[0007] CNAB Bundle pushed
Pushed successfully, with digest "sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2"

Let’s check that our bundle has been pushed on Docker Hub:

We can now pull our bundle back from the registry. It will only fetch the bundle.json file, but as you may notice this now has a digested reference for the image manifest of every component, inside the same registry repository. The Docker Engine will pull any images required by the bundle at runtime. So pulling a bundle is a lightweight process.

$ ./bin/cnab-to-oci pull hubusername/repo:demo --log-level=debug
DEBU[0000] Pulling CNAB Bundle docker.io/hubusername/repo:demo
DEBU[0000] Getting OCI Index Descriptor
DEBU[0001] {
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "digest": "sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2",
  "size": 926
}
DEBU[0001] Fetching OCI Index sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2
DEBU[0001] {
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549",
      "size": 193,
      "annotations": {
        "io.cnab.manifest.type": "config"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:a59a4e74d9cc89e4e75dfb2cc7ea5c108e4236ba6231b53081a9e2506d1197b6",
      "size": 942,
      "annotations": {
        "io.cnab.manifest.type": "invocation"
      }
    }
  ],
  "annotations": {
    "io.cnab.keywords": "[\"helloworld\",\"cnab\",\"tutorial\"]",
    "io.cnab.runtime_version": "v1.0.0",
    "org.opencontainers.artifactType": "application/vnd.cnab.manifest.v1",
    "org.opencontainers.image.authors": "[{\"name\":\"Jane Doe\",\"email\":\"jane.doe@example.com\",\"url\":\"https://example.com\"}]",
    "org.opencontainers.image.description": "A short description of your bundle",
    "org.opencontainers.image.title": "helloworld",
    "org.opencontainers.image.version": "0.1.1"
  }
}
DEBU[0001] Getting Bundle Config Manifest Descriptor
DEBU[0001] {
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "digest": "sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549",
  "size": 193,
  "annotations": {
    "io.cnab.manifest.type": "config"
  }
}
DEBU[0001] Getting Bundle Config Manifest sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549
DEBU[0001] {
  "schemaVersion": 2,
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b",
    "size": 498
  },
  "layers": null
}
DEBU[0001] Fetching Bundle sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b
DEBU[0002] {
  "schemaVersion": "v1.0.0",
  "name": "helloworld",
  "version": "0.1.1",
  "description": "A short description of your bundle",
  "keywords": [
    "helloworld",
    "cnab",
    "tutorial"
  ],
  "maintainers": [
    {
      "name": "Jane Doe",
      "email": "jane.doe@example.com",
      "url": "https://example.com"
    }
  ],
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "cnab/helloworld:0.1.1",
      "contentDigest": "sha256:a59a4e74d9cc89e4e75dfb2cc7ea5c108e4236ba6231b53081a9e2506d1197b6",
      "size": 942,
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json"
    }
  ]
}

cnab-to-oci was integrated with Docker App in the latest beta release, v0.9.0-beta1, letting you push and pull your entire application with the same UX as pushing a regular Docker container image. As Docker App is a standard CNAB runtime, it can also run this generic CNAB example:

$ docker app pull hubusername/repo:demo
Successfully pulled "helloworld" (0.1.1) from docker.io/hubusername/repo:demo
$ docker app run hubusername/repo:demo
Port parameter was set to
Install action
Action install complete for upbeat_nobel
App "upbeat_nobel" running on context "default"

Want to Know More?

If you’re interested in getting more details about CNAB, a few blog posts are available:

Multi-arch All The Things
Building Multi-Arch Images for Arm and x86 with Docker Desktop
Announcing CNAB
Docker App and CNAB
Next Steps for Cloud Native Application Bundles

Please note that we will give a talk about this topic at KubeCon Europe 2020: “Sharing is Caring! Push your Cloud Application to an OCI Registry – Silvin Lubecki & Djordje Lukic”

And of course, you can also find more information directly on the cnab-to-oci GitHub repository.

Contributions are welcome!!!
The post Docker Donates the cnab-to-oci Library to cnab.io appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Hack Week: How Docker Drives Innovation from the Inside

Since its founding, Docker’s mission has been to help developers bring their ideas to life by conquering the complexity of app development. With millions of Docker developers worldwide, Docker is the de facto standard for building and sharing containerized apps. 

So what is one source of ideas we use to simplify the lives of developers? It starts with being a company of software developers that builds products for software developers. One of the more creative ways Docker has been driving innovation internally is through hackathons. These hackathons have proven to be a great platform for Docker employees to showcase their talent and provide unique opportunities for teams across Docker’s business functions to come together. Our employees get to have fun while creating solutions to problems that simplify the lives of Docker developers.

At Docker, our engineers are always looking for ways to improve their own workflows so as to ship quality code faster. Hack Week gives us a chance to explore the boundaries of what’s possible, and the winning ‘hacks’ make their way into our products to benefit our global developer community.

-Scott Johnston, Docker CEO

With that context, let’s break down how Docker runs employee hackathons. Docker is an open source company, and in the spirit of openness, I am sharing all the gory details here of our hackathon. 

First of all, our hackathon is known as “Hack Week.” We conduct hackathons twice a year. Docker uses Slack channels to manage employee communications, Confluence for team workspaces and Zoom for video conferencing and recording of demos. For example, we have a Confluence Hack Week site with all the info an employee needs to participate: hackathon rules, team sign-ups, calendar and schedule, demo recordings and results.

Because we still need to perform our day jobs, we run Hack Week for a full work week where employees can manage their time but are granted 20% of that time to work on their hackathon project during work hours. Below is a screenshot of Docker’s internal site for Hack Week that provides simple guidance and voting criteria – every employee gets a vote!

Docker Hackathon Home Page

What makes this fun at Docker is the fact that we have employees participating from Paris, Cambridge (UK) and San Francisco. There are no constraints on how teams form. You can have members from all three locations form as one team. Signing up is simple – all we require is a team name, your team members, your region and a 1-3 sentence description of your “hack.” Below is the calendar from Docker’s last Hack Week which we ran back in December 2019. This should give you a good overview of how we execute Hack Week. This actually runs quite smoothly for Docker despite the 8-9 hour time difference between our teams in San Francisco and the teams in the UK and France. 

The winning team for December’s Hack Week was Team M&Ms (s/o to Mathieu Champion in Paris and Michael Parker in Cambridge) after garnering the most employee votes. The description of their hack was “run everything from Docker Desktop.” The hack enables auto-generation of Dockerfiles from Docker Desktop. (A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble a container image.)

I spoke with Michael Parker regarding his motivations for participation in Hack Week. “Hack Week is a great innovation platform – it lets employees show what can be easily implemented with our current systems and dream a bit bigger about what might be possible rather than focusing on the incremental feature tweaks and bug fixes.” 

Finally, I have shared the recorded video below from our Hack Week winning team. This will give you a good idea as to how we present, collaborate and vote in a virtual work environment with teams spread across two continents and an island. It’s a 6-minute video and will give you a great view of how passionate we are about making the lives of developers that much better by making their jobs that much easier and productive.

Feel free to let any of this content we have shared inspire your organization’s employees to plan and conduct your own hackathons. I remember back in 2012 when I was participating in a public hackathon at Austin’s SXSW Interactive Conference seeing none other than Deepak Chopra kicking off the event and inspiring developers. He talked about hackathons as a form of “creative chaos” and how conflict and destruction of established patterns can often lead to creativity and innovation. I think this is a great description of a hackathon. Are you ready for some creative chaos inside your own organization?

The post Hack Week: How Docker Drives Innovation from the Inside appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Our Favourite Picks from the KubeCon Europe 2020 Schedule

Last Wednesday, the CNCF released the KubeCon Europe 2020 schedule. There are so many talks at KubeCon that it can be daunting even to decide what to see! Here are some talks by the team at Docker, and some others we think will be particularly interesting. Looking forward to seeing you in Amsterdam!

Simplify Your Cloud-Native Application Packaging and Deployments – Chris Crone

Chris is an engineer in our Paris office and is also co-executive director of the CNAB project. CNAB (Cloud Native Application Bundle) is a specification for bundling up cloud-native applications, which can consist of multiple containers, into a single object that can be pushed to a registry. Open source projects using CNAB, like Docker App or Porter, allow you to package apps that would normally require multiple tools (Terraform, Helm, shell scripts) to deploy into a single tooling-agnostic packaging format. These packages can then be shared using existing container registries and used with other CNAB-compliant tools. This can really simplify cloud-native development.

Sharing is Caring! Push your Cloud Application to an OCI Registry – Silvin Lubecki & Djordje Lukic

Did you know that you can store anything into a container registry? Did you ever wonder what black magic is behind multi-architecture images? The OCI Image specification is a standard purposely generic enough to enable use cases other than “just” container images.

This talk will give an overview of how images in registries work, and how you can push CNAB applications and other custom resources into a registry. It will also cover our battle scars with the different interpretations of the OCI spec by the mainstream registries. 

How to Work in Cloud Native Security: Demystifying the Security Role – Justin Cormack, Docker

Working in security can be intimidating and the shortage of people in the space makes hiring difficult. But especially in cloud-native environments, security is something everyone must own. If you’ve ever asked yourself, “what does it take to work in security in a cloud-native environment? How can you move into security from a dev or an ops position? Where should you start and what should you learn about?” then this talk is for you. I decided to submit this talk as my journey into working in security was fairly accidental, and I realised that this is true for many people. I meet a lot of people interested in getting into security, through the CNCF SIG Security and elsewhere, and hope I can give help and encouragement.

More interesting talks

I wrote about the work the community is doing in the CNCF on Notary v2 last week. If you found this interesting and want to learn more, we have an introductory session, with me and Omar Paul from Amazon, which will give a beginner’s view and a working session for in-depth work with Steve Lasker from Microsoft and me.

If you want even more on container signing, Justin Cappos and Lukas Puehringer from New York University have a session on securing container delivery with TUF and another on supply chain security with in-toto.

The containerd community continues to grow and innovate. Phil Estes from IBM and Derek McGowan from Docker are covering the Introduction to containerd, while Akihiro Suda and Wei Fu are doing the containerd deep dive. Also on the containerd theme, the great teachers Bret Fisher and Jerome Petazzoni are giving a tutorial: Kubernetes Runtimes: Translating your Docker skills to containerd.

Dominique Top and Ivan Pedrazas run the London Docker meetup and are both lovely people who have built up a great community. Learn from them with 5 Things you Could do to Improve your Local Community.

Lastly, my friend Lee Calcote always gives great talks, and this one about how to understand the details of traffic control appeals to my geek side: Discreetly Studying the Effects of Individual Traffic Control Functions.
The post Our Favourite Picks from the KubeCon Europe 2020 Schedule appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Changes to dockerproject.org APT and YUM repositories

While many people know about Docker, not that many know its history and where it came from. Docker was started as a project in the dotCloud company, founded by Solomon Hykes, which provided a PaaS solution. The project became so successful that dotCloud renamed itself to Docker, Inc. and focused on Docker as its primary product.

As the “Docker project” grew from being a proof of concept shown off at various meetups and at PyCon in 2013 to a real community project, it needed a website where people could learn about it and download it. This is why the “dockerproject.org” and “dockerproject.com” domains were registered.

With the move from dotCloud to Docker, Inc. and the shift of focus onto the Docker product, it made sense to move everything to the “docker.com” domain. This is where you now find the company website and documentation, and the APT and YUM repositories have been at download.docker.com since 2017.

On the 31st of March 2020, we will be shutting down the legacy APT and YUM repositories hosted at dockerproject.org and dockerproject.com. These repositories haven’t been updated with the latest releases of Docker, so the packages hosted there contain security vulnerabilities. Removing these repositories will make sure that people download the latest version of Docker, ensuring their security and providing the best experience possible.

What do I need to do?

If you are currently using the APT or YUM repositories from dockerproject.org or dockerproject.com, please update to use the repositories at download.docker.com.
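As an illustrative example for Ubuntu (the exact steps for each distribution are in the documentation – treat the package names and paths below as the standard ones, and adjust for your system):

```shell
# Remove any APT sources still pointing at the legacy domains
sudo sed -i -E '/dockerproject\.(org|com)/d' /etc/apt/sources.list
sudo rm -f /etc/apt/sources.list.d/docker.list

# Add Docker's official GPG key and the current repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Install the up-to-date packages
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
```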

You can find instructions for CentOS, Debian, Fedora and Ubuntu in the documentation.
The post Changes to dockerproject.org APT and YUM repositories appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Introducing the Docker Index: Insight from the World’s Most Popular Container Registry

8 billion pulls! Yes, that’s billion with a B! This number represents a little known level of activity and innovation happening across the community and ecosystem, all in just one average month. How do we know? From the number of pulls and most popular images to top architectures, data from Docker Hub and Docker Desktop provide a window into application development trends in the age of containers. 

Today, we are sharing these findings in something we call the Docker Index – a look at developers’ preferences and trends, drawn from anonymized data from five million Docker Hub and two million Docker Desktop users, as well as countless other developers engaging with content on Hub.

At Docker, we’re always looking for ways to make life easier for developers. Understanding the what, why and how behind these projects is imperative. As these trends evolve, we will continue to share updates on the findings.

Whether containers will become mainstream is no longer a topic of debate. As the Docker Index data suggests, containers have become a mainstay to how modern, distributed apps are built and shared so they can run anywhere. 

Usage is showing no signs of slowing down. Docker Desktop and Docker Hub are reaching an increasing number of developers and users are engaging with content from Hub at higher rates. Content from community developers and open source projects continues to make Hub a central and valuable source for developers looking to build containerized applications. 

Collaboration is key when building apps so that developers aren’t starting from scratch. Containers have helped to make building blocks the new norm. With container images readily accessible and shareable, everyone can be more productive. 

Modern apps also give rise to increasingly diverse development environments, drawing more attention to the importance of choice. The ability to select your preferred framework, operating system and architecture go a long way in creating a more productive experience for modern app development.

The ecosystem and community are shaping the future of software development and containers are at the heart of this transformation. The level of activity and collaboration is hitting a new gear and with it, continued advancements in how developers build and share apps. We look forward to sharing updates on the Docker Index data over the course of this year. 

To get started with Docker, download Docker Desktop and take a tutorial here https://www.docker.com/get-started.
The post Introducing the Docker Index: Insight from the World’s Most Popular Container Registry appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

How We Solved a Report on docker-compose Performance on macOS Catalina

Photo by Caspar Camille Rubin on Unsplash

As a Docker Compose maintainer, my daily duty is to check for newly reported issues and try to help users through misunderstandings and possible underlying bugs. Sometimes issues are very well documented, sometimes they are little more than a “please help” message. And sometimes they look really weird and can result in funny investigations. Here is the story of how we solved one such report…

A one-line bug report

An issue was reported as “docker-compose super slow on macOS Catalina” – no version, no details. How should I prioritize this? I don’t even know if the reporter is using the latest version of the tool – the opened issue doesn’t follow the bug reporting template. This is just a one-liner. But for some reason, I decided to take a look at it anyway and diagnose the issue.

Without any obvious explanation for the super-slowness, I decided to take a risk and upgrade my own MacBook to OSX Catalina. I was able to reproduce a significant slowdown in docker-compose execution, waiting several seconds for the very first line to be printed on the console – even just to display usage for an invalid command.

Investigating the issue

In the meantime, some users reported getting correct performance when installing docker-compose as a plain Python package rather than as the packaged executable. The docker-compose executable is packaged using PyInstaller, which embeds a Python runtime and libraries with the application code in a single executable file. As a result, one gets a distributable binary that can be created for Windows, Linux and OSX. I wrote a minimalist “hello world” Python application and was able to reproduce the same weird behaviour once it was packaged the same way docker-compose is, i.e. a few seconds’ startup delay.
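A reproduction of that experiment can be as small as this (the script name is illustrative, and pyinstaller has to be installed separately with pip install pyinstaller):

```python
# hello.py - the minimalist "hello world" used to isolate the startup delay
import sys
import time

def main():
    # Any work done *inside* the script is fast; on an affected Catalina
    # machine the delay happens before this process even starts, while the
    # one-file bundle extracts itself and the extracted runtime is scanned.
    started = time.time()
    print("hello from a PyInstaller-style bundle")
    print(f"user code ran in {time.time() - started:.3f}s", file=sys.stderr)

if __name__ == "__main__":
    main()
```

Packaging it the same way docker-compose is packaged, then timing it, makes the difference obvious: pyinstaller --onefile hello.py followed by time ./dist/hello.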

Here comes the funny part. I’m a remote worker on the Docker team, and I sometimes have trouble with my Internet connection. It happened this exact day, as my network router had to reboot. And during the reboot sequence, docker-compose performance suddenly became quite good … but eventually, the initial execution delay came back. How do you explain such a thing?

So I installed Charles proxy to analyze network traffic, and discovered a request sent to api.apple-cloudkit.com each and every time docker-compose was run. Apple CloudKit is Apple’s cloud storage SDK, and there’s no obvious relation between docker-compose and this service.

As the Docker Desktop team was investigating Catalina support during this period, I heard about the notarization constraints introduced by the Apple OS upgrade. I decided to reconfigure my system with system integrity check disabled (you have to run ‘csrutil disable’ from recovery console on boot). Here again, docker-compose suddenly went reasonably fast.

Looking into PyInstaller’s implementation details: when executed, the docker-compose binary extracts itself into a temporary folder, then executes the embedded Python runtime to run the packaged application. This bootstrap sequence takes a blink of an eye on a recent computer with the tmp folder mapped to memory, but on my Catalina-upgraded MacBook it took up to 10 seconds – until I disabled the integrity check.

Confirming the hypothesis

My assumption was: OSX Catalina’s reinforced security constraints apply to the Python runtime as it gets extracted – the system runs a security scan and sends a scan report to Apple over its own cloud storage service. I can’t remember having approved sending such data to Apple, but I admit I didn’t carefully read the upgrade guide and service agreement before I hit the “upgrade to Catalina” button. As a fresh Python runtime is extracted for temporary execution, this takes place each and every time we run a docker-compose command: a new system scan and a new report sent to Apple – not even as a background task.

To confirm this hypothesis, I built a custom flavour of docker-compose using an alternate PyInstaller configuration, so it doesn’t create a single binary, but a folder with runtime and libraries. The first execution of this custom docker-compose packaging took 10 seconds again (initial scan by the system), but subsequent commands were as efficient as expected.

The resolution

A few weeks later, a release candidate build was included in the Docker Desktop Edge channel to confirm that Catalina users get good performance using this alternate packaging, while not introducing unexpected bugs. Docker-compose 1.25.1 was released one month later with the bug fix confirmed. Starting with this release, docker-compose is available both as single binary packaging and as a tar.gz for OSX Catalina.

The post How We Solved a Report on docker-compose Performance on macOS Catalina appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/