Getting Started with Istio Using Docker Desktop

This is a guest post from Docker Captain Elton Stoneman, a Docker alumnus who is now a freelance consultant and trainer, helping organizations at all stages of their container journey. Elton is the author of the book Learn Docker in a Month of Lunches, and numerous Pluralsight video training courses – including Managing Apps on Kubernetes with Istio and Monitoring Containerized Application Health with Docker.

Istio is a service mesh – a software component that runs in containers alongside your application containers and takes control of the network traffic between components. It’s a powerful architecture that lets you manage the communication between components independently of the components themselves. That’s useful because it simplifies the code and configuration in your app, removing all network-level infrastructure concerns like routing, load-balancing, authorization and monitoring – which all become centrally managed in Istio.

There’s a lot of good material for digging into Istio. My fellow Docker Captain Lee Calcote is the co-author of Istio: Up and Running, and I’ve just published my own Pluralsight course Managing Apps on Kubernetes with Istio. But it can be a difficult technology to get started with because you really need a solid background in Kubernetes before you get too far. In this post, I’ll try and keep it simple. I’ll focus on three scenarios that Istio enables, and all you need to follow along is Docker Desktop.

Setup

Docker Desktop gives you a full Kubernetes environment on your laptop. Just install the Mac or Windows version – be sure to switch to Linux containers if you’re using Windows – then open the settings from the Docker whale icon, and select Enable Kubernetes in the Kubernetes section. You’ll also need to increase the amount of memory Docker can use, because Istio and the demo app use a fair bit – in the Resources section increase the memory slider to at least 6GB.

Now grab the sample code for this blog post, which is in my GitHub repo:

git clone https://github.com/sixeyed/istio-samples.git
cd istio-samples


The repo has a set of Kubernetes manifests that will deploy Istio and the demo app, which is a simple bookstore website (this is the Istio team’s demo app, but I use it in different ways so be sure to use my repo to follow along). Deploy everything using the Kubernetes control tool kubectl, which is installed as part of Docker Desktop:

kubectl apply -f ./setup/

You’ll see dozens of lines of output as Kubernetes creates all the Istio components along with the demo app – which will all be running in Docker containers. It will take a few minutes for all the images to download from Docker Hub, and you can check the status using kubectl:

# Istio – will have "1/1" in the "READY" column when fully running:
kubectl get deploy -n istio-system

# demo app – will have "2/2" in the "READY" column when fully running:
kubectl get pods

When all the bits are ready, browse to http://localhost/productpage and you’ll see this very simple demo app:

And you’re good to go. If you’re happy working with Kubernetes YAML files you can look at the deployment spec for the demo app, and you’ll see it’s all standard Kubernetes resources – services, service accounts and deployments. Istio is managing the communication for the app, but we haven’t deployed any Istio configurations, so it isn’t doing much yet.

The demo application is a distributed app. The homepage runs in one container and it consumes data from REST APIs running in other containers. The book details and book reviews you see on the page are fetched from other containers. Istio is managing the network traffic between those components, and it’s also managing the external traffic which comes into Kubernetes and on to the homepage.

We’ll use this demo app to explore the main features of Istio: traffic management, security and observability.

Managing Traffic – Canary Deployments with Istio

The homepage is kinda boring, so let’s liven it up with a new release. We want to do a staged release so we can check out how the update gets received, and Istio supports both blue-green and canary deployments. Canary deployments are generally more useful and that’s what we’ll use. We’ll have two versions of the home page running, and Istio will send a proportion of the traffic to version 1 and the remainder to version 2:

We’re using Istio for service discovery and routing here: all incoming traffic comes into Istio and we’re going to set rules for how it forwards that traffic to the product page component. We do that by deploying a VirtualService, which is a custom Istio resource. That contains this routing rule for HTTP traffic:

gateways:
- bookinfo-gateway
http:
- route:
  - destination:
      host: productpage
      subset: v1
      port:
        number: 9080
    weight: 70
  - destination:
      host: productpage
      subset: v2
      port:
        number: 9080
    weight: 30

There are a few moving pieces here:

- The gateway is the Istio component which receives external traffic. The bookinfo-gateway object is configured to listen to all HTTP traffic, but gateways can be restricted to specific ports and host names.
- The destination is the actual target where traffic will be routed (which can be different from the requested domain name). In this case there are two subsets: v1, which will receive 70% of traffic, and v2, which receives 30%.
- Those subsets are defined in a DestinationRule object, which uses Kubernetes labels to identify pods within a service. In this case the v1 subset finds pods with the label version=v1, and the v2 subset finds pods with the label version=v2 – a sketch of such a DestinationRule follows this list.
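
To make the subset idea concrete, here is a minimal sketch of what that DestinationRule could look like. The resource name and label values are assumptions based on the description above, not a copy of the manifest in the repo:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage      # name is an assumption
spec:
  host: productpage      # the Kubernetes service for the home page
  subsets:
  - name: v1
    labels:
      version: v1        # matches pods labelled version=v1
  - name: v2
    labels:
      version: v2        # matches pods labelled version=v2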

Sounds complicated, but all it’s really doing is defining the rules to shift traffic between different pods. Those definitions come in Kubernetes manifest YAML files, which you deploy in the same way as your applications. So we can do our canary deployment of version 2 with a single command – this creates the new v2 pod, together with the Istio routing rules:

# deploy:
kubectl apply -f ./canary-deployment

# check the deployment – it's good when all pods show "2/2" in "READY":
kubectl get pods

Now if you refresh the bookstore demo app a few times, you’ll see that most of the responses are the same boring v1 page, but a lucky few times you’ll see the v2 page which is the result of much user experience testing:

As the positive feedback rolls in you can increase the traffic to v2 just by altering the weightings in the VirtualService definition and redeploying. Both versions of your app are running throughout the canary stage, so when you shift traffic you're sending it to components that are already up and ready to handle it, and there won't be additional latency from new pods starting up.
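
For example, a later revision of the same routing rule could shift to a 90/10 split in favour of v2 just by swapping the two weight values – a hypothetical next step, not a file from the repo:

http:
- route:
  - destination:
      host: productpage
      subset: v1
      port:
        number: 9080
    weight: 10            # v1 now only gets 10% of requests
  - destination:
      host: productpage
      subset: v2
      port:
        number: 9080
    weight: 90            # v2 takes the bulk of the traffic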

Canary deployments are just one aspect of traffic management which Istio makes simple. You can do much more, including adding fault tolerance with retries and circuit breakers, all with Istio components and without any changes to your apps.

Securing Traffic – Authentication and Authorization with mTLS

Istio handles all the network traffic between your components transparently, without the components themselves knowing that it’s interfering. It does this by running all the application container traffic through a network proxy, which applies Istio’s rules. We’ve seen how you can use that for traffic management, and it works for security too.

If you need encryption in transit between app components, and you want to enforce access rules so only certain consumers can call services, Istio can do that for you too. You can keep your application code and config simple, use basic unauthenticated HTTP and then apply security at the network level.

Authentication and authorization are security features of Istio which are much easier to use than they are to explain. Here’s the diagram of how the pieces fit together:

Here the product page component on the left is consuming a REST API from the reviews component on the right. Those components run in Kubernetes pods, and you can see each pod has one Docker container for the application and a second Docker container running the Istio proxy, which handles the network traffic for the app.

This setup uses mutual-TLS for encrypting the HTTP traffic and authenticating and authorizing the caller:

- The authentication Policy object applied to the service requires mutual TLS, which means the service proxy listens on port 443 for HTTPS traffic, even though the service itself is only configured to listen on port 80 for HTTP traffic.
- The AuthorizationPolicy object applied to the service specifies which other components are allowed access. In this case everything is denied access, except the product page component which is allowed HTTP GET access (sketched just after this list).
- The DestinationRule object is configured for mutual TLS, which means the proxy for the product page component will upgrade HTTP calls to HTTPS, so when the app calls the reviews component it will be a mutual-TLS conversation.
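
Here is a rough sketch of what the authentication and authorization pieces could look like. It assumes the pre-1.5 Istio authentication Policy API, the default namespace and the bookinfo-productpage service account name, so treat it as illustrative rather than the exact manifests from the repo:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: reviews               # name is an assumption
spec:
  targets:
  - name: reviews             # applies to the reviews service
  peers:
  - mtls:
      mode: STRICT            # require mutual TLS from callers
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: reviews
spec:
  selector:
    matchLabels:
      app: reviews
  action: ALLOW
  rules:
  - from:
    - source:
        # principal assumes the product page's service account in the default namespace
        principals: ["cluster.local/ns/default/sa/bookinfo-productpage"]
    to:
    - operation:
        methods: ["GET"]      # only HTTP GET is allowed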

Mutual TLS means the client presents a certificate to identify itself, in addition to the server presenting a certificate for encryption (standard HTTPS only uses the server certificate). Istio can generate and manage all those certs, which removes a huge burden from normal mTLS deployments.

There's a lot to take in there, but the deployment and management of it all is super simple, because it's just the same kubectl process:

kubectl apply -f ./service-authorization/

Istio uses the Kubernetes Service Account for identification, and you'll see when you try the app that nothing's changed – it all works as before. The difference is that no other components running in the cluster can access the reviews component now: the API is locked down so only the product page can consume it.

You can verify that by connecting to another container – the details component is running in the same cluster. Try to consume the reviews API from the details container:

docker container exec -it $(docker container ls --filter name=k8s_details --format '{{ .ID}}') sh

curl http://reviews:9080/1

You’ll see an error – RBAC: access denied, which is Istio enforcing the authorization policy. This is powerful stuff, especially having Istio manage the certs for you. It generates certs with a short lifespan, so even if they do get compromised they’re not usable for long. All this without complicating your app code or dealing with self-signed certs.

Observability – Visualising the Service Mesh with Kiali

All network traffic runs through Istio, which means it can monitor and record all the communication. Istio uses a pluggable architecture for storing telemetry, which has support for standard systems like Prometheus and Elasticsearch. 

Collecting and storing telemetry for every network call can be expensive, so this is all configurable. The deployment of Istio we’re using is the demo configuration, which has telemetry configured so we can try it out. Telemetry data is sent from the service proxies to the Istio component called Mixer, which can send it out to different back-end stores, in this case, Prometheus:

(This diagram is a simplification – Prometheus actually pulls the data from Istio, and you can use a single Prometheus instance to collect metrics from Istio and your applications).

The data in Prometheus includes response codes and durations, and Istio comes with a bunch of Grafana dashboards you can use to drill down into the metrics. And it also has support for a great tool called Kiali, which gives you a very useful visualization of all your services and the network traffic between them.

Kiali is already running in the demo deployment, but it’s not published by default. You can gain access by deploying a Gateway and a VirtualService:

kubectl apply -f ./visualization-kiali/
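
That folder contains standard Istio routing resources. A simplified sketch of the idea looks something like this – the object names and port numbers here are assumptions, so check the files in ./visualization-kiali/ for the real definitions:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kiali-gateway          # name is an assumption
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15029            # the port you browse Kiali on
      name: http-kiali
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - kiali-gateway
  http:
  - route:
    - destination:
        host: kiali            # the Kiali service inside istio-system
        port:
          number: 20001        # assumed default Kiali service port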

Now refresh the app a few times at http://localhost/productpage and then check out the service mesh visualization in Kiali at http://localhost:15029. Log in with the username admin and password admin, then browse to the Graph view and you’ll see the live traffic for the bookstore app:

I’ve turned on “requests percentage” for the labels here, and I can see the traffic split between my product page versions is 67% to 34%, which is pretty close to my 70-30 weighting (the more traffic you have, the closer you’ll get to the specified weightings).

Kiali is just one of the observability tools Istio supports. The demo deployment also runs Grafana with multiple dashboards and Jaeger for distributed tracing – which is a very powerful tool for diagnosing issues with latency in distributed applications. All the data to power those visualizations is collected automatically by Istio.

Wrap-Up

A service mesh makes the communication layer for your application into a separate entity, which you can control centrally and independently from the app itself. Istio is the most fully-featured service mesh available now, although there is also Linkerd (which tends to have better baseline performance), and the Service Mesh Interface project (which aims to standardise mesh features). 

Using a service mesh comes with a cost – there are runtime costs for hosting additional compute for the proxies and organizational costs for getting teams skilled in Istio. But the scenarios it enables will outweigh the cost for a lot of people, and you can very quickly test if Istio is for you, using it with your own apps in Docker Desktop.

Docker Donates the cnab-to-oci Library to cnab.io

Docker is proud and happy to announce the donation of our cnab-to-oci library to the CNAB project. This project was created last year after Microsoft and Docker moved the CNAB specification to the Linux Foundation’s Joint Development Foundation. At that time, the CNAB specification repository was moved from the deislab GitHub organization to the new cnabio organization. The reference implementations – cnab-go which is the Golang library implementation of the specification and duffle which is the CLI reference implementation – were also moved.

What is cnab-to-oci for?

Docker helped with the development of the CNAB specification and its reference implementations, and led the work on the cnab-to-oci library for sharing a CNAB bundle using an existing container registry. This library is now used by three CNAB tools – Docker App, Porter and duffle – as well as Docker Hub. It successfully demonstrated how to push, pull and share a CNAB bundle using a registry. This work will be used as a foundation for the future CNAB Registries specification.

The transfer is already in effect, so starting now please refer to github.com/cnabio/cnab-to-oci in your Golang imports.

How does cnab-to-oci store a CNAB bundle into a registry?

As you may know, the OCI image specification introduces two main objects: the OCI Manifest and the OCI Image Index. The first one is well known and represents the classic Docker image. The other one was, at first, used to store multi-architecture images (see nginx as an example).

But what you may not know is that the specification doesn’t restrict the use of OCI Indexes to multi-arch images. You can store almost anything you want, as long as you meet the specification, and it is quite open.

cnab-to-oci uses this openness to push not only the bundle.json but also the invocation image and the component images (or service images for a Docker App). It pushes everything into the same repository, which guarantees that when someone pulls the bundle, all of its components can be pulled as well.

Demo Time

While cnab-to-oci is implemented as a library that can be used by other tools, the repository contains a handy CLI tool that can perform push and pull of any CNAB bundle.json.

With the following command we push a bundle example to the Docker Hub repository. It pushes all the manifests found in the bundle, then creates an OCI Index and pushes it at the end. The digest we get as a result points to the OCI Index of the bundle.

$ make bin/cnab-to-oci
…
$ ./bin/cnab-to-oci push examples/helloworld-cnab/bundle.json -t hubusername/repo:demo --log-level=debug --auto-update-bundle
DEBU[0000] Fixing up bundle docker.io/hubusername/repo:demo
DEBU[0000] Updating entry in relocation map for "cnab/helloworld:0.1.1"
Starting to copy image cnab/helloworld:0.1.1...
Completed image cnab/helloworld:0.1.1 copy
DEBU[0004] Bundle fixed
DEBU[0004] Pushing CNAB Bundle docker.io/hubusername/repo:demo
DEBU[0004] Pushing CNAB Bundle Config
DEBU[0004] Trying to push CNAB Bundle Config
DEBU[0004] CNAB Bundle Config Descriptor
DEBU[0004] {
  "mediaType": "application/vnd.cnab.config.v1+json",
  "digest": "sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b",
  "size": 498
}
DEBU[0005] Trying to push CNAB Bundle Config Manifest
DEBU[0005] CNAB Bundle Config Manifest Descriptor
DEBU[0005] {
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "digest": "sha256:6ec4fd695cace0e3d4305838fdf9fcd646798d3fea42b3abb28c117f903a6a5f",
  "size": 188
}
DEBU[0006] Failed to push CNAB Bundle Config Manifest, trying with a fallback method
DEBU[0006] Trying to push CNAB Bundle Config
DEBU[0006] CNAB Bundle Config Descriptor
DEBU[0006] {
  "mediaType": "application/vnd.oci.image.config.v1+json",
  "digest": "sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b",
  "size": 498
}
DEBU[0006] Trying to push CNAB Bundle Config Manifest
DEBU[0006] CNAB Bundle Config Manifest Descriptor
DEBU[0006] {
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "digest": "sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549",
  "size": 193
}
DEBU[0006] CNAB Bundle Config pushed
DEBU[0006] Pushing CNAB Index
DEBU[0006] Trying to push OCI Index
DEBU[0006] {"schemaVersion":2,"manifests":[{"mediaType":"application/vnd.oci.image.manifest.v1+json","digest":"sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549","size":193,"annotations":{"io.cnab.manifest.type":"config"}},{"mediaType":"application/vnd.docker.distribution.manifest.v2+json","digest":"sha256:a59a4e74d9cc89e4e75dfb2cc7ea5c108e4236ba6231b53081a9e2506d1197b6","size":942,"annotations":{"io.cnab.manifest.type":"invocation"}}],"annotations":{"io.cnab.keywords":"[\"helloworld\",\"cnab\",\"tutorial\"]","io.cnab.runtime_version":"v1.0.0","org.opencontainers.artifactType":"application/vnd.cnab.manifest.v1","org.opencontainers.image.authors":"[{\"name\":\"Jane Doe\",\"email\":\"jane.doe@example.com\",\"url\":\"https://example.com\"}]","org.opencontainers.image.description":"A short description of your bundle","org.opencontainers.image.title":"helloworld","org.opencontainers.image.version":"0.1.1"}}
DEBU[0006] OCI Index Descriptor
DEBU[0006] {
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "digest": "sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2",
  "size": 926
}
DEBU[0007] CNAB Index pushed
DEBU[0007] CNAB Bundle pushed
Pushed successfully, with digest "sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2"

Let’s check that our bundle has been pushed on Docker Hub:

We can now pull our bundle back from the registry. It will only fetch the bundle.json file, but as you may notice this now has a digested reference for the image manifest of every component, inside the same registry repository. The Docker Engine will pull any images required by the bundle at runtime. So pulling a bundle is a lightweight process.

$ ./bin/cnab-to-oci pull hubusername/repo:demo --log-level=debug
DEBU[0000] Pulling CNAB Bundle docker.io/hubusername/repo:demo
DEBU[0000] Getting OCI Index Descriptor
DEBU[0001] {
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "digest": "sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2",
  "size": 926
}
DEBU[0001] Fetching OCI Index sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2
DEBU[0001] {
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549",
      "size": 193,
      "annotations": {
        "io.cnab.manifest.type": "config"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:a59a4e74d9cc89e4e75dfb2cc7ea5c108e4236ba6231b53081a9e2506d1197b6",
      "size": 942,
      "annotations": {
        "io.cnab.manifest.type": "invocation"
      }
    }
  ],
  "annotations": {
    "io.cnab.keywords": "[\"helloworld\",\"cnab\",\"tutorial\"]",
    "io.cnab.runtime_version": "v1.0.0",
    "org.opencontainers.artifactType": "application/vnd.cnab.manifest.v1",
    "org.opencontainers.image.authors": "[{\"name\":\"Jane Doe\",\"email\":\"jane.doe@example.com\",\"url\":\"https://example.com\"}]",
    "org.opencontainers.image.description": "A short description of your bundle",
    "org.opencontainers.image.title": "helloworld",
    "org.opencontainers.image.version": "0.1.1"
  }
}
DEBU[0001] Getting Bundle Config Manifest Descriptor
DEBU[0001] {
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "digest": "sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549",
  "size": 193,
  "annotations": {
    "io.cnab.manifest.type": "config"
  }
}
DEBU[0001] Getting Bundle Config Manifest sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549
DEBU[0001] {
  "schemaVersion": 2,
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b",
    "size": 498
  },
  "layers": null
}
DEBU[0001] Fetching Bundle sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b
DEBU[0002] {
  "schemaVersion": "v1.0.0",
  "name": "helloworld",
  "version": "0.1.1",
  "description": "A short description of your bundle",
  "keywords": [
    "helloworld",
    "cnab",
    "tutorial"
  ],
  "maintainers": [
    {
      "name": "Jane Doe",
      "email": "jane.doe@example.com",
      "url": "https://example.com"
    }
  ],
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "cnab/helloworld:0.1.1",
      "contentDigest": "sha256:a59a4e74d9cc89e4e75dfb2cc7ea5c108e4236ba6231b53081a9e2506d1197b6",
      "size": 942,
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json"
    }
  ]
}

cnab-to-oci has been integrated with Docker App in the latest beta release, v0.9.0-beta1, to let you push and pull your entire application with the same UX as pushing a regular Docker container image. As Docker App is a standard CNAB runtime, it can also run this generic CNAB example:

$ docker app pull hubusername/repo:demo
Successfully pulled "helloworld" (0.1.1) from docker.io/hubusername/repo:demo
$ docker app run hubusername/repo:demo
Port parameter was set to
Install action
Action install complete for upbeat_nobel
App "upbeat_nobel" running on context "default"

Want to Know More?

If you’re interested in getting more details about CNAB, a few blog posts are available:

- Multi-arch All The Things
- Building Multi-Arch Images for Arm and x86 with Docker Desktop
- Announcing CNAB
- Docker App and CNAB
- Next Steps for Cloud Native Application Bundles

Please note that we will give a talk about this topic at KubeCon Europe 2020: “Sharing is Caring! Push your Cloud Application to an OCI Registry – Silvin Lubecki & Djordje Lukic”

And of course, you can also find more information directly on the cnab-to-oci GitHub repository.

Contributions are welcome!!!

Hack Week: How Docker Drives Innovation from the Inside

Since its founding, Docker’s mission has been to help developers bring their ideas to life by conquering the complexity of app development. With millions of Docker developers worldwide, Docker is the de facto standard for building and sharing containerized apps. 

So what is one source of ideas we use to simplify the lives of developers? It starts with being a company of software developers building products for software developers. One of the more creative ways Docker has been driving innovation internally is through hackathons. These hackathons have proven to be a great platform for Docker employees to showcase their talent and provide unique opportunities for teams across Docker’s business functions to come together. Our employees get to have fun while creating solutions to problems that simplify the lives of Docker developers.

At Docker, our engineers are always looking for ways to improve their own workflows so as to ship quality code faster. Hack Week gives us a chance to explore the boundaries of what’s possible, and the winning ‘hacks’ make their way into our products to benefit our global developer community.

-Scott Johnston, Docker CEO

With that context, let’s break down how Docker runs employee hackathons. Docker is an open source company, and in the spirit of openness, I am sharing all the gory details here of our hackathon. 

First of all, our hackathon is known as “Hack Week.” We conduct hackathons twice a year. Docker uses Slack channels to manage employee communications, Confluence for team workspaces and Zoom for video conferencing and recording of demos. For example, we have a Confluence Hack Week site with all the info an employee needs to participate: hackathon rules, team sign-ups, calendar and schedule, demo recordings and results.

Because we still need to perform our day jobs, we run Hack Week for a full work week where employees can manage their time but are granted 20% of that time to work on their hackathon project during work hours. Below is a screenshot of Docker’s internal site for Hack Week that provides simple guidance and voting criteria – every employee gets a vote!

Docker Hackathon Home Page

What makes this fun at Docker is the fact that we have employees participating from Paris, Cambridge (UK) and San Francisco. There are no constraints on how teams form. You can have members from all three locations form as one team. Signing up is simple – all we require is a team name, your team members, your region and a 1-3 sentence description of your “hack.” Below is the calendar from Docker’s last Hack Week which we ran back in December 2019. This should give you a good overview of how we execute Hack Week. This actually runs quite smoothly for Docker despite the 8-9 hour time difference between our teams in San Francisco and the teams in the UK and France. 

The winning team for December’s Hack Week was Team M&Ms (s/o to Mathieu Champion in Paris and Michael Parker in Cambridge) after garnering the most employee votes. The description of their hack was “run everything from Docker Desktop.” The hack enables auto-generation of dockerfiles from Docker Desktop. (A dockerfile is a text document that contains all the commands a user could call on the command line to assemble a container image). 

I spoke with Michael Parker regarding his motivations for participation in Hack Week. “Hack Week is a great innovation platform – it lets employees show what can be easily implemented with our current systems and dream a bit bigger about what might be possible rather than focusing on the incremental feature tweaks and bug fixes.” 

Finally, I have shared the recorded video below from our Hack Week winning team. This will give you a good idea as to how we present, collaborate and vote in a virtual work environment with teams spread across two continents and an island. It’s a 6-minute video and will give you a great view of how passionate we are about making the lives of developers that much better by making their jobs that much easier and productive.

Feel free to let any of this content we have shared inspire your organization’s employees to plan and conduct your own hackathons. I remember back in 2012 when I was participating in a public hackathon at Austin’s SXSW Interactive Conference seeing none other than Deepak Chopra kicking off the event and inspiring developers. He talked about hackathons as a form of “creative chaos” and how conflict and destruction of established patterns can often lead to creativity and innovation. I think this is a great description of a hackathon. Are you ready for some creative chaos inside your own organization?


Our Favourite Picks from the KubeCon Europe 2020 Schedule

Last Wednesday, the CNCF released the KubeCon Europe 2020 schedule. There are so many talks at KubeCon that it can be daunting just to decide what to see! Here are some talks by the team at Docker, and some others we think will be particularly interesting. Looking forward to seeing you in Amsterdam!

Simplify Your Cloud-Native Application Packaging and Deployments – Chris Crone

Chris is an engineer in our Paris office and is also co-executive director of the CNAB project. CNAB (Cloud Native Application Bundle) is a specification for bundling up cloud-native applications, which can consist of multiple containers, into a single object that can be pushed to a registry. Open source projects using CNAB, like Docker App or Porter allow you to package apps that would normally require multiple tools like Terraform, Helm, and shell to deploy, into a single tooling agnostic packaging format. These packages can then be shared using existing container registries and used with other CNAB compliant tools. This can really simplify cloud-native development.

Sharing is Caring! Push your Cloud Application to an OCI Registry – Silvin Lubecki & Djordje Lukic

Did you know that you can store anything into a container registry? Did you ever wonder what black magic is behind multi-architecture images? The OCI Image specification is a standard purposely generic enough to enable use cases other than “just” container images.

This talk will give an overview of how images in registries work, and how you can push CNAB applications and other custom resources into a registry. It will also cover our battle scars with the different interpretations of the OCI spec by the mainstream registries. 

How to Work in Cloud Native Security: Demystifying the Security Role – Justin Cormack, Docker

Working in security can be intimidating and the shortage of people in the space makes hiring difficult. But especially in cloud-native environments, security is something everyone must own. If you’ve ever asked yourself, “what does it take to work in security in a cloud-native environment? How can you move into security from a dev or an ops position? Where should you start and what should you learn about?” then this talk is for you. I decided to submit this talk as my journey into working in security was fairly accidental, and I realised that this is true for many people. I meet a lot of people interested in getting into security, through the CNCF SIG Security and elsewhere, and hope I can give help and encouragement.

More interesting talks

I wrote about the work the community is doing in the CNCF on Notary v2 last week. If you found this interesting and want to learn more, we have an introductory session, with me and Omar Paul from Amazon, which will give a beginner’s view and a working session for in-depth work with Steve Lasker from Microsoft and me.

If you want even more on container signing, Justin Cappos and Lukas Puehringer from New York University have a session on securing container delivery with TUF and another on supply chain security with in-toto.

The containerd community continues to grow and innovate. Phil Estes from IBM and Derek McGowan from Docker are covering the Introduction to containerd, while Akihiro Suda and Wei Fu are doing the containerd deep dive. Also on the containerd theme, the great teachers Bret Fisher and Jerome Petazzoni are giving a tutorial: Kubernetes Runtimes: Translating your Docker skills to containerd.

Dominique Top and Ivan Pedrazas run the London Docker meetup and are both lovely people who have built up a great community. Learn from them with 5 Things you Could do to Improve your Local Community.

Lastly, my friend Lee Calcote always gives great talks, and this one about how to understand the details of traffic control appeals to my geek side: Discreetly Studying the Effects of Individual Traffic Control Functions.

Changes to dockerproject.org APT and YUM repositories

While many people know about Docker, not that many know its history and where it came from. Docker was started as a project in the dotCloud company, founded by Solomon Hykes, which provided a PaaS solution. The project became so successful that dotCloud renamed itself to Docker, Inc. and focused on Docker as its primary product.

As the “Docker project” grew from being a proof of concept shown off at various meetups and at PyCon in 2013 to a real community project, it needed a website where people could learn about it and download it. This is why the “dockerproject.org” and “dockerproject.com” domains were registered.

With the move from dotCloud to Docker, Inc. and the shift of focus onto the Docker product, it made sense to move everything to the “docker.com” domain. This is where you now find the company website and documentation, and of course the APT and YUM repositories at download.docker.com, which have been there since 2017.

On the 31st of March 2020, we will be shutting down the legacy APT and YUM repositories hosted at dockerproject.org and dockerproject.com. These repositories haven’t been updated with the latest releases of Docker, so the packages hosted there contain security vulnerabilities. Removing these repositories will make sure that people download the latest version of Docker, ensuring their security and providing the best experience possible.

What do I need to do?

If you are currently using the APT or YUM repositories from dockerproject.org or dockerproject.com, please update to use the repositories at download.docker.com.

You can find instructions for CentOS, Debian, Fedora and Ubuntu in the documentation.

Introducing the Docker Index: Insight from the World’s Most Popular Container Registry

8 billion pulls! Yes, that’s billion with a B! This number represents a little known level of activity and innovation happening across the community and ecosystem, all in just one average month. How do we know? From the number of pulls and most popular images to top architectures, data from Docker Hub and Docker Desktop provide a window into application development trends in the age of containers. 

Today, we are sharing these findings in something we call the Docker Index – a look at developers’ preferences and trends, as told by anonymized data from five million Docker Hub and two million Docker Desktop users, as well as countless other developers engaging with content on Hub.

At Docker, we’re always looking for ways to make life easier for developers. Understanding the what, why and how behind these projects is imperative. As these trends evolve, we will continue to share updates on the findings.

Whether containers will become mainstream is no longer a topic of debate. As the Docker Index data suggests, containers have become a mainstay to how modern, distributed apps are built and shared so they can run anywhere. 

Usage is showing no signs of slowing down. Docker Desktop and Docker Hub are reaching an increasing number of developers and users are engaging with content from Hub at higher rates. Content from community developers and open source projects continues to make Hub a central and valuable source for developers looking to build containerized applications. 

Collaboration is key when building apps so that developers aren’t starting from scratch. Containers have helped to make building blocks the new norm. With container images readily accessible and shareable, everyone can be more productive. 

Modern apps also give rise to increasingly diverse development environments, drawing more attention to the importance of choice. The ability to select your preferred framework, operating system and architecture go a long way in creating a more productive experience for modern app development.

The ecosystem and community are shaping the future of software development and containers are at the heart of this transformation. The level of activity and collaboration is hitting a new gear and with it, continued advancements in how developers build and share apps. We look forward to sharing updates on the Docker Index data over the course of this year. 

To get started with Docker, download Docker Desktop and take a tutorial here https://www.docker.com/get-started.

How We Solved a Report on docker-compose Performance on macOS Catalina


As a Docker Compose maintainer, my daily duty is to check for newly reported issues and try to help users through misunderstandings and possible underlying bugs. Sometimes issues are very well documented, sometimes they are nothing more than a “please help” message. And sometimes they look really weird and can result in funny investigations. Here is the story of how we solved one such report…

A one-line bug report

An issue was reported as “docker-compose super slow on macOS Catalina” – no version, no details. How should I prioritize this? I don’t even know if the reporter is using the latest version of the tool – the opened issue doesn’t follow the bug reporting template. This is just a one-liner. But for some reason, I decided to take a look at it anyway and diagnose the issue.

Without any obvious explanation for the super-slowness, I decided to take a risk and upgrade my own MacBook to OSX Catalina. I was able to reproduce a significant slowdown in docker-compose execution, waiting seconds for the very first line printed on the console – even just to display usage for an invalid command.

Investigating the issue

In the meantime, some users reported getting correct performance when installing docker-compose as a plain Python package, not with the packaged executable. The docker-compose executable is packaged using PyInstaller, which embeds a Python runtime and libraries with application code in a single executable file. As a result, one gets a distributable binary that can be created for Windows, Linux and OSX. I wrote a minimalist “hello world” Python application and was able to reproduce the same weird behaviour once it was packaged the same way docker-compose is, i.e. a startup delay of a few seconds.

Here comes the funny part. I’m a remote worker on the Docker team, and I sometimes have trouble with my Internet connection. It happened this exact day, as my network router had to reboot. And during the reboot sequence, docker-compose performance suddenly became quite good … but eventually, the initial execution delay came back. How do you explain such a thing?

So I installed Charles proxy to analyze network traffic, and discovered a request sent to api.apple-cloudkit.com each and every time docker-compose was run. Apple CloudKit is Apple’s cloud storage SDK, and there’s no obvious relation between docker-compose and this service.

As the Docker Desktop team was investigating Catalina support during this period, I heard about the notarization constraints introduced by the Apple OS upgrade. I decided to reconfigure my system with the system integrity check disabled (you have to run 'csrutil disable' from the recovery console on boot). Here again, docker-compose suddenly went reasonably fast.

Looking into PyInstaller implementation details: when executed, the docker-compose binary extracts itself into a temporary folder, then executes the embedded Python runtime to run the packaged application. This bootstrap sequence takes a blink of an eye on a recent computer with the tmp folder mapped to memory, but on my Catalina-upgraded MacBook it took up to 10 seconds – until I disabled the integrity check.

Confirming the hypothesis

My assumption was that OSX Catalina’s reinforced security constraints apply to the Python runtime as it gets extracted: the system runs a security scan and sends a scan report to Apple over its own cloud storage service. I can’t remember having approved sending such data to Apple, but I admit I didn’t carefully read the upgrade guide and service agreement before I hit the “upgrade to Catalina” button. As a fresh new Python runtime is extracted for temporary execution, this takes place each and every time we run a docker-compose command: a new system scan, a new report sent to Apple – not even as a background task.

To confirm this hypothesis, I built a custom flavour of docker-compose using an alternate PyInstaller configuration, so it doesn’t create a single binary, but a folder with runtime and libraries. The first execution of this custom docker-compose packaging took 10 seconds again (initial scan by the system), but subsequent commands were as efficient as expected.

The resolution

A few weeks later, a release candidate build was included in the Docker Desktop Edge channel to confirm that Catalina users get good performance using this alternate packaging, while not introducing unexpected bugs. Docker-compose 1.25.1 was released one month later with the bug fix confirmed. Starting with this release, docker-compose is available both as single binary packaging and as a tar.gz for OSX Catalina.


January Virtual Meetup Recap: Improve Image Builds Using the Features in BuildKit

This is a guest post by Docker Captain Nicholas Dille, a blogger, speaker and author with 15 years of experience in virtualization and automation. He works as a DevOps Engineer at Haufe Group, a digital media company located in Freiburg, Germany. He is also a Microsoft Most Valuable Professional.

In this virtual meetup, I share how to improve image builds using the features in BuildKit. BuildKit is an alternative builder with great features like caching, concurrency and the ability to separate your image build into multiple stages – which is useful for separating the build environment from the runtime environment. 

The default builder in Docker is the legacy builder. This is recommended for use when you need support for Windows. However, in nearly every other case, using BuildKit is recommended because of the fast build time, ability to use custom BuildKit front-ends, building stages in parallel and other features.

Catch the full replay below and view the slides to learn:

- Build cache in BuildKit – instead of relying on a locally present image, BuildKit will pull the appropriate layers of the previous image from a registry.
- How BuildKit helps prevent disclosure of credentials by allowing files to be mounted into the build process. They are kept in memory and are not written to the image layers.
- How BuildKit supports access to remote systems through SSH by mounting the SSH agent socket into the build, without adding the SSH private key to the image.
- How to use the CLI plugin buildx to cross-build images for different platforms.
- How, using the new “docker context”, the CLI is able to manage connections to multiple instances of the Docker Engine. Note that it supports SSH remoting to the Docker Engine.
- And finally, a tip that extends beyond image builds: when troubleshooting a running container, a debugging container can be started sharing the network and PID namespaces. This allows debugging without changing the misbehaving container.

I also covered a few tools that I use in my workflow, namely:

- goss, which allows images to be tested to match a configuration expressed in YAML. It comes with a nice wrapper called `dgoss` to use it with Docker easily. And it even provides a health endpoint to integrate into your image – a minimal goss file sketch follows this list.
- trivy, an open source tool from Aqua Security that scans images for known vulnerabilities in the OS as well as in well-known package managers.
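
To give a feel for the goss format, here is a minimal, hypothetical goss.yaml for a web server image – the specific checks are illustrative examples of mine, not taken from the meetup:

# goss.yaml – assertions that goss/dgoss evaluates against the running container
port:
  tcp:80:
    listening: true          # the web server must be listening on port 80
process:
  nginx:
    running: true            # assumes an nginx-based image
file:
  /etc/nginx/nginx.conf:
    exists: true             # the expected config file is present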

And finally, answered some of your questions:

Why not use BuildKit by default? 

If your workflow involves building images often, then we recommend that you do set BuildKit as the default builder. Here is how to enable BuildKit by default in the docker daemon config. 
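
On a Linux host that means adding the feature flag below to /etc/docker/daemon.json (the usual default location – adjust it for your setup) and restarting the daemon; Docker Desktop exposes the same JSON configuration in its settings.

{
  "features": {
    "buildkit": true
  }
}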

Does docker-compose work with BuildKit? 

Support for BuildKit was added in docker-compose 1.25.0 which can be enabled by setting DOCKER_BUILDKIT=1 and COMPOSE_DOCKER_CLI_BUILD=1.

What are the benefits of using BuildKit? 

In addition to the features presented, BuildKit also improves build performance in many cases.

When would I use BuildKit Secrets? (A special thank you to Captain Brandon Mitchell for answering this question)

BuildKit secrets are a good way to use a secret at build time, without saving the secret in the image. Think of it as pulling a private git repo without saving your ssh key to the image. For runtime, you often end up with different compose files to support Compose vs swarm mode, each mounting the secret in a different way, i.e. as a volume vs. a swarm secret.
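
As a rough illustration of the runtime side, a compose file can declare a secret from a local file and attach it to a service; under swarm mode the same secrets block would typically reference an external, Swarm-managed secret instead. The service and file names here are made up:

version: "3.7"
services:
  api:
    image: example/api:latest
    secrets:
      - db_password               # mounted at /run/secrets/db_password in the container
secrets:
  db_password:
    file: ./db_password.txt       # swarm variant: external: true instead of a local file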

How do I enable BuildKit for Jenkins Docker build plugin? 

The only reference to BuildKit I was able to find refers to adding support in the Docker Pipeline plugin.

Does BuildKit share the build cache with the legacy builder? 

No, the caches are separate.

What are your thoughts on having the testing step as a stage in a multi-stage build? 

The test step can be a separate stage in the build. If the test step requires a special tool to be installed, it can be a second final stage. If your multi-stage build increases in complexity, take a look at CI/CD tools.

How does pulling the previous image save time over just doing the build?

The download can be significantly faster than redoing all the work.

Is the created image still “identical” or is there any real difference in the final image artifact? 

The legacy builder, as well as BuildKit, produces identical (or rather equivalent) images.

Will Docker inspect show that the image was built using BuildKit? 

No.

Do you know of any good setup for debugging with Docker images/containers? (I use the following technologies: Python, Django and PyCharm.)

No. Anyone have any advice here? 

Is Docker BuildKit supported with maven Dockerfile plugin? 

If the question is referring to Spotify’s Dockerfile Maven plugin (which is unmaintained), the answer is no. Other plugins may be able to use BuildKit when providing the environment variable DOCKER_BUILDKIT=1. Instead of changing the way the client works, you could configure the daemon to use BuildKit by default (see first question above).

What do you think about CRI-O? 

I think that containerd has gained more visibility and has been adopted by many cloud providers as the runtime in Kubernetes offerings. But I have no experience myself with CRI-O.

To be notified of upcoming meetups, join the Docker Virtual Meetup Group using your Docker ID or on Meetup.com.

Community Collaboration on Notary v2

One of the most productive meetings I had at KubeCon in San Diego last November was a meeting with Docker, Amazon and Microsoft to plan a collaboration around a new version of the CNCF project Notary. We held the Notary v2 kickoff meeting a few weeks later in Seattle in the Amazon offices.

Emphasising that this is a cross-industry collaboration, we had eighteen people in the room (with more dialed in) from Amazon, Microsoft, Docker, IBM, Google, Red Hat, Sylabs and JFrog. This represented all the container registry providers and developers, other than the VMware Harbor developers, who could not make it in person. Unfortunately, we forgot to take a picture of everyone!

“@awscloud, @GCPcloud, @Azure, @Docker, @RedHat, @jfrog collaborating on @CloudNativeFdn Notary v2 – touring the amazon spheres. Who would have thought… https://t.co/6VL3OucX0c pic.twitter.com/rNglQIO5ZM”
– Steve Lasker (@SteveLasker), December 15, 2019

The consensus and community are important because of the aims of Notary v2. But let’s go back a bit as some of you may not know what Notary is and what it is for.

The Notary project was originally started at Docker back in 2015 to provide a general signing infrastructure for containers based on The Update Framework (TUF), a model for package management security developed by Justin Cappos and his team at New York University. This is what supports the “docker trust” set of commands that allow signing containers, and the DOCKER_CONTENT_TRUST settings for validating signatures.

In 2017, Notary was donated to the CNCF, along with the TUF specification, to make it a cross-industry standard. It began to be shipped in other places as well as Docker Hub, including the Docker Trusted Registry (now a Mirantis product), IBM’s container registry, the Azure Container Registry, and with the Harbor project, another CNCF project. TUF also expanded its use cases, in the package management community, and in projects such as Uptane, a framework for updating firmware on automobiles.

So why a version 2 now? Part of the answer is that we have learnt a lot about the usage of containers since 2015. Are container years like dog years? I am not sure, but a lot has happened since then, and the usage of containers has expanded enormously. I covered a lot of the reasons in-depth in my KubeCon talk:

Supply chain security – making sure that you ship what you intended to ship into production – has become increasingly important, as attacks on software supply chains have increased in recent years. Signatures are an important part of the validation needed in container supply chains.

Integrating Signatures in the Registry

The first big change that we want to make is driven by the fact that at present not every registry supports Notary. This means that if you use a mixture of registries, some may support signatures while others do not. In addition, you cannot move signatures between registries. Both of these are related to the design of Notary as, in effect, a registry sidecar. While Notary shares the same authentication as a registry, it is built as a separate service, with its own database and API.

Back when Notary was designed this did not seem so important. But now many people use, or want to use, complex registry configurations with local registries close to a production cluster, or at the cloud provider the code is running on, or in an edge location which may be disconnected. The solution that we are working on is that rather than being a standalone sidecar service, signatures will be integrated into the OCI image specification and supported by all registries. The details of this are still being worked out, but this will make portability much easier, as signatures will be able to be pushed and pulled with images.

Improving Usability

The second big set of changes is around usability. The current way of signing containers and checking signatures is complex, as is the key management. One of the aims of Notary v2 is to have signatures and checking on by default where possible. There have been many issues stopping this with the current Notary, many of which are detailed in the KubeCon talk, including the large number of keys involved due to lack of hierarchy and delegation, and lack of standard interfaces for signature checking on different platforms such as Kubernetes.

If you want to learn more, there are weekly meetings on Mondays at 10 a.m. Pacific Time – see the CNCF community calendar for details. The Slack channel is #notary-v2 in the CNCF Slack. There will be two sessions at KubeCon Amsterdam, one introductory overview and state of where we are, and another deep dive working session on current issues. Hope to see you there!

Faster builds in Docker Compose 1.25.1 thanks to BuildKit Support

One of the most requested features for the docker-compose tool is definitely support for building using BuildKit, an alternative builder with great capabilities like caching, concurrency and the ability to use custom BuildKit front-ends, just to mention a few… Ahhh, and with a nice blue output! And the good news is that Docker Compose 1.25.1 – which was just released in early January – includes BuildKit support!

BuildKit support for Docker Compose is actually achieved by redirecting the docker-compose build to the Docker CLI with a limited feature set.

Enabling BuildKit builds

To enable this, we have to align some stars.

First, it requires that the Docker CLI binary is present in your PATH:

$ which docker
/usr/local/bin/docker

Second, docker-compose has to be run with the environment variable COMPOSE_DOCKER_CLI_BUILD set to 1, as in:

$ COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build

This instruction tells docker-compose to use the Docker CLI when executing a build. You should see the same build output, but starting with the experimental warning.

As docker-compose passes its environment variables to the Docker CLI, we can also tell the CLI to use BuildKit instead of the default builder. To accomplish that, we can execute this:

$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build
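
Any service with a build section in your Compose file will then be built through BuildKit. For instance, a minimal, made-up docker-compose.yml to try this with could look like:

version: "3.7"
services:
  web:
    build:
      context: .               # assumes a Dockerfile in the current directory
    image: example/web:latest  # tag applied to the built image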

A short video is worth a thousand words:

Please note that BuildKit support in docker-compose was initially released with Docker Compose 1.25.0. This feature is marked as experimental for now.

Want to know more?

- Discover more options using docker-compose build --help
- Learn more about BuildKit: docs.docker.com/develop/develop-images/build_enhancements

Share your feedback

Have nice and fast builds with docker-compose and please share your feedback with us!