Docker Open Sources Compose for Amazon ECS and Microsoft ACI

Today we are open sourcing the code for the Amazon ECS and Microsoft ACI Compose integrations. This is the first time that Docker has made Compose available for the cloud, allowing developers to take the Compose projects they run locally and deploy them to the cloud simply by switching context.

With Docker focusing on developers, we’ve been doubling down on the parts of Docker that developers love, like Desktop, Hub, and of course Compose. Millions of developers all over the world use Compose to develop their applications and love its simplicity, but until now there was no simple way to get those applications running in the cloud.

Docker is working to make it easier to get code running in the cloud in two ways. First, we moved the Compose specification into a community project. This allows Compose to evolve with the community, better serve more user needs, and remain agnostic of the runtime platform. Second, we’ve been working with Amazon and Microsoft on CLI integrations for Amazon ECS and Microsoft ACI that let you use docker compose up to deploy Compose applications directly to the cloud.
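To make that concrete, here is a rough sketch of what the workflow looks like. The context name myecscontext and the sample service are illustrative choices of ours rather than anything prescribed by the integrations; any ordinary Compose file can be deployed the same way:

# docker-compose.yml -- a minimal example application
services:
  web:
    image: nginx
    ports:
      - "80:80"

# Deploying it to Amazon ECS is then just a context switch away:
#   docker context create ecs myecscontext
#   docker context use myecscontext
#   docker compose up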

While implementing these integrations, we wanted to make sure that existing CLI commands were not impacted. We also wanted an architecture that would make it easy to add new backends and provide SDKs in popular languages. We achieved this with the following architecture:

The Node SDK and Compose CLI parts of this architecture are what we have open sourced today. This architecture is not final, and we plan to merge the Compose CLI with the existing CLI at a later time.

Depending on the Docker context the user selects, the Compose CLI switches which backend is used for the command or API call. This allows commands that use existing contexts to be passed through to the existing CLI transparently. The backend interface abstraction allows a backend to be implemented for any container runtime, so users get the same Docker CLI UX they know and love, along with the new APIs and SDK.

The Compose CLI can serve a gRPC API to provide similar functionality to that of the CLI commands. We chose to use gRPC as this allows us to generate high quality SDKs in popular languages like Node.js, Python, and Golang. While we currently only provide a Node SDK that supports single container management on ACI, there are plans to add Compose support, extend it to ECS and other backends, and add other language SDKs in the near future. The Node SDK is already used by VS Code to implement its Docker experience on ACI.

This work wouldn’t have been possible without help from our partners at Microsoft and AWS who helped us build the best possible experience for their respective platforms. Our team has enjoyed working with all of you! From Microsoft we’d specifically like to thank Mike Morton, Karol Zadora-Przylecki, Brandon Waterloo, MacKenzie Olson, and Paul Yuknewicz. From AWS we’d like to thank Carmen Puccio, David Killmon, Sravan Rengarajan, Uttara Sridhar, and David Duffey.

These tools are currently in beta so feedback and pull requests are welcome!

Compose CLI source: https://github.com/docker/compose-cli
Node SDK source: https://github.com/docker/node-sdk

To get started working with Compose in the cloud, download Docker Desktop here and get a free Hub account to deploy your images from here. Once you have your image saved to Docker Hub, you will be able to deploy it to either ECS or ACI. To find out more about how to do this:

Read about how to use the Amazon ECS integration here: https://docs.docker.com/engine/context/ecs-integration/
Read about how to use the Microsoft ACI integration here: https://docs.docker.com/engine/context/aci-integration/

Docker GitHub Actions

In the first post of our series on CI/CD we went over some of the high-level best practices for using Docker. Today we are going to go a bit deeper and look at GitHub Actions.

We have just released a v2 of our GitHub Action that makes using the cache easier as well! We also want to call out a huge THANK YOU to @crazy-max (Kevin :D) for all of the work he put into the v2 of the action. We could not have done this without him!

Right, now let’s have a look at what we can do!

To start we will need to get a project set up. I am going to use one of my existing simple Docker projects to test this out:

The first thing I need to do is ensure that I will be able to access Docker Hub from any workflow I create. To do this, I will need to add my Docker ID and a Personal Access Token (PAT) as secrets in GitHub. I can get a PAT by going to https://hub.docker.com/settings/security and clicking ‘New Access Token’; in this instance I will call my token ‘whaleCI’.

I can then add this and my username as secrets into the GitHub secrets UI:

Great, we can now start to set up our action workflow to build and store our images in Hub. In this CI flow I am using two Docker actions: the first lets me log in to Docker Hub using the secrets stored in my GitHub repository; the second is the build and push action, in which I set the push flag to true (as I want to push!) and tag the image to always go to latest. Lastly, I echo my image digest to see what was pushed.

name: CI to Docker Hub

on:
  push:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository so the build has a context to work from
      - name: Check out the repo
        uses: actions/checkout@v2

      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}

      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./Dockerfile
          push: true
          tags: bengotch/simplewhale:latest

      # Print the digest of the image that was just pushed
      - name: Image digest
        run: echo ${{ steps.docker_build.outputs.digest }}

Great, now I will just let that run for the first time and then tweak my Dockerfile to make sure the CI is running and pushing the new image changes:

Next we can look at how we can optimize this. The first thing I want to do is make use of my build cache. This has two advantages: first, it will reduce my build time, as the build will not have to re-download all of my images; and second, it will reduce the number of pulls I make against Docker Hub. To do this we are going to leverage the GitHub cache, which means I need to set up my builder with a build cache.

The first thing I want to do is set up a builder. This uses BuildKit under the hood and is done very simply using the Buildx action.

steps:
  - name: Set up Docker Buildx
    id: buildx
    uses: docker/setup-buildx-action@master

Next I need to set up the cache for my builder. Here I am adding the path and keys to store this under, using the GitHub cache action.


- name: Cache Docker layers
  uses: actions/cache@v2
  with:
    path: /tmp/.buildx-cache
    key: ${{ runner.os }}-buildx-${{ github.sha }}
    restore-keys: |
      ${{ runner.os }}-buildx-

And lastly, having added these two bits to the top of my action file, I need to add the extra attributes to my build and push step. Here I am setting the builder to use the output of the Buildx step, and then using the cache I set up to store to and retrieve from.


- name: Login to Docker Hub
  uses: docker/login-action@v1
  with:
    username: ${{ secrets.DOCKER_HUB_USERNAME }}
    password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}

- name: Build and push
  id: docker_build
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./Dockerfile
    builder: ${{ steps.buildx.outputs.name }}
    push: true
    tags: bengotch/simplewhale:latest
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache

- name: Image digest
  run: echo ${{ steps.docker_build.outputs.digest }}

Great, now we can run it again and I can see that I am using the cache!

Now we can look at how we can improve this more functionally by having the tagged versions we want to release to Docker Hub behave differently from our commits to master (rather than everything updating latest on Docker Hub!). You might want to do something like this to have your commits go to a local registry to then use in nightly tests, so you can always test what is latest while reserving your tagged versions for release to Hub.

To start we will need to modify our previous GitHub workflow to only push to Hub if we get a particular tag:

on:
  push:
    tags:
      - "v*.*.*"

This now means our main CI will only fire if we tag our commit with a version of the form vN.N.N. Let’s have a quick go and test this:

And when I check my GitHub action: 

Great!

Now we need to set up a second GitHub Actions file to store our latest commit as an image in the GitHub registry. You may want to do this to run your nightly or recurring tests against it, or to share work-in-progress images with colleagues. To start, I am going to clone my previous GitHub action and add back in our previous logic for all pushes, as sketched below.
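As a sketch of that “all pushes” trigger (assuming the same master branch as before), the cloned workflow would fire on both pushes and pull requests:

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]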

Next I am going to change our Docker Hub login out for a GitHub Container Registry login:

- name: Login to GitHub Container Registry
  if: github.event_name != 'pull_request'
  uses: docker/login-action@v1
  with:
    registry: ghcr.io
    username: ${{ github.repository_owner }}
    password: ${{ secrets.ghcr_TOKEN }}

And I will also need to remember to change how my image is tagged. I have opted to keep latest as my only tag, but you could always add logic for this:

  tags: ghcr.io/nebuk89/simplewhale:latest

Now we will have two different flows: one for our changes to master and one for our pull requests. Next we need to modify what we had before so that we are pushing our PRs to the GitHub registry rather than to Hub.
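Putting the pieces above together, a sketch of this second workflow might look like the following. Note that ghcr_TOKEN is a secret you create yourself (a PAT with package write scope), and because the login step is guarded, pull requests only build the image while pushes to master actually publish it:

name: CI to GitHub Container Registry

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2

      # Only log in when we are going to push the image
      - name: Login to GitHub Container Registry
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.ghcr_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: ./
          push: ${{ github.event_name != 'pull_request' }}
          tags: ghcr.io/nebuk89/simplewhale:latest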

We could now look at how to set up nightly tests against our latest tag, how we want to test each PR, or whether we want to do something more elegant with the tags we are using and make use of the Git tag as the image tag. If you would like to see how to do one of these, or get a full example of how to set up what we have gone through today, please check out Chad’s repo, which runs you through this and more details on our latest GitHub action: https://github.com/metcalfc/docker-action-examples

And keep an eye on our blog for new posts coming in the next couple of weeks looking at how we can get this set up on other CIs; if there are some in particular you would like to see, reach out to us on Twitter at @docker. To get started setting up your GitHub CI with Docker Hub today, sign up for a Docker account and have a go with Docker’s official GitHub Actions.

Best practices for using Docker Hub for CI/CD

According to the 2020 JetBrains developer survey, 44% of developers are now using some form of continuous integration and deployment with Docker containers. We know a ton of developers have set this up using Docker Hub as their container registry for part of their workflow, so we decided to dig out the best practices for doing this and provide some guidance on how to get started. To support this we will be publishing a series of blog posts over the next few weeks answering the common questions we see with the top CI providers.

We have also heard feedback that, given the changes Docker introduced relating to network egress and the number of pulls for free users, there are questions around the best way to use Docker Hub as part of CI/CD workflows without hitting these limits. This blog post covers best practices that improve your experience and encourage sensible consumption of Docker Hub, mitigating the risk of hitting these limits, and looks at how to raise the limits depending on your use case.

To get started, one of the most important things when working with Docker, and really any CI/CD, is to work out when you need to test with the CI and when you can test locally. At Docker we think about how developers work in terms of their inner loop (code, build, run, test) and their outer loop (push change, CI build, CI test, deployment).

Before you think about optimizing your CI/CD, it is always important to think about your inner loop and how it relates to the outer loop (the CI). We know that most people aren’t fans of ‘debugging via the CI’, so it is always better if your inner loop and outer loop are as similar as possible. To this end it can be a good idea to run unit tests as part of your docker build command by adding a target for them in your Dockerfile. That way, as you are making changes and re-building locally, you can run the same unit tests you would run in the CI on your local machine with a simple command. Chris wrote a blog post earlier in the year about Go development with Docker; it is a great example of how you can use tests in your Docker project and re-use them in the CI. This creates a shorter feedback loop on issues and reduces the number of pulls and builds your CI needs to do.
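As a minimal sketch of this idea (assuming your Dockerfile defines a build stage named test that runs your unit tests — the stage name is our own illustrative choice), the same build target works locally and, for example, as a GitHub Actions step:

# Locally:
#   docker build --target test .
# In CI (GitHub Actions syntax), the equivalent step could be:
- name: Run unit tests
  run: docker build --target test .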

Once you get into your actual outer loop and Docker Hub, there are a few things we can do to get the most out of your CI and deliver the fastest Docker experience.

First and foremost, stay secure! When you are setting up your CI, make sure you are using a Docker Hub access token rather than your password. You can create new access tokens from your security page on Docker Hub.

Once you have this and have added it to whatever secrets store is available on your platform, you will want to look at when you push and pull in your CI/CD, and where from, depending on the change you are making. The first thing you can do here to reduce build time and your number of calls is to make use of the build cache to reuse layers you have already pulled. This can be done on many platforms by using Buildx/BuildKit’s caching functionality together with whatever cache your platform provides.
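As an illustrative sketch using GitHub Actions (the image and cache tag names here are hypothetical, and this assumes a Buildx builder has been set up, for example with the setup-buildx action), layers can even be cached in a registry rather than on the CI runner:

- name: Build and push with a registry cache
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: myorg/myapp:latest
    # Reuse layers cached in the registry from previous builds,
    # and write the cache for all layers back on completion
    cache-from: type=registry,ref=myorg/myapp:buildcache
    cache-to: type=registry,ref=myorg/myapp:buildcache,mode=max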

The other change you may want to make is to have only your release images go to Docker Hub. This would mean setting up functions to push your PR images to a more local image store, to be quickly pulled and tested, rather than promoting them all the way to production.

We know there are a lot more tips and tricks for using Docker in CI, but looking at these in light of the recent Hub rate changes, we think these are the top things you can do.

If you are still finding you have issues with pull limits once you are authenticated, you can consider upgrading to either a Pro or a Team account. This will give you unlimited authenticated pulls from Docker Hub, along with unlimited private repos and unlimited image retention. In the near future this will also include image scanning (powered by Snyk) on push of new images to Docker Hub.

Look out for the next blog post in the series about how to put some of these practices into place with GitHub Actions, and feel free to give us ideas of which CI providers you would like to see us cover by dropping us a message on Twitter @Docker.

A Chat With Docker’s New Community Manager

Community is the backbone of all sustainable open source projects, so at Docker we’re particularly thrilled to announce that William Quiviger has joined the team as our new Head of Community.

William is a seasoned community manager based in Paris, having worked with open source communities for the past 15 years for a wide range of organizations including Mozilla Firefox, the United Nations and the Open Networking Foundation. His particular area of expertise is in nurturing, building and scaling communities, as well as developing mentorship and advocacy programs that help push leadership to the edges of a community. 

To get to know William a bit more, we thought we’d ask him a few questions about his experience as a community manager and what he plans to focus on in his new role: 

What motivated you most about joining Docker? 

I started following Docker closely back in 2016 when I joined the Open Networking Foundation. There, I was properly introduced to cloud technologies and containerization and quickly realised how Docker was radically simplifying the lives of our developers and had become the de facto standard for anything deployed in the cloud. I was particularly impressed by the incredible passion and ever-growing size of Docker’s community. Naturally, as a community manager, it’s a dream to have the opportunity to serve a community like Docker’s.

What are your main goals now that you’re part of the Docker team?

One of my main goals is to bring in my experience and learnings from my 15 years as a community manager in very different types of organizations and in different parts of the world. Through a lot of experimentation and trial and error, I’ve learned a ton. I want to take best practices and good ideas from other communities and apply them to the needs of Docker. 

What will you focus on most in the next few months as you work to engage and help grow the Docker community?

That’s a tough question because there are so many areas I will be focusing on. Scaling a community is a big challenge, and I want to make sure that the passion and excitement around Docker is translated into a growing, sustainable community that continues to bring value to our users and helps us achieve our business goals. A major challenge with growth is that processes and dynamics that worked well when the community was smaller can break down as it grows, so the key is to empower leaders within the community to help scale efforts and push authority to the edges. That’s why the Docker Captains program will be a major focus for me. The Captains have been doing incredible work over the years, and I want to help that program have even more impact in terms of engaging our existing community and the developer community at large. Another key area I will focus on is developing community programs and initiatives that help us gather and surface user insights to our engineering and product teams. The more insights we gather about the way developers use Docker in their working lives, the better we can shape the direction of our products to fit their needs and use cases.

When you’re not building communities, what do you usually do in your spare time?

When I’m not hunched over my laptop, I’m likely experimenting with a new recipe in my kitchen, reading history books or digging up rare recordings of my favorite Jazz artists. Lately though, I’ve become a chess addict so if you’re reading this and you’re into chess, ping me for a game!

Understanding Docker container escapes

blog.trailofbits.com – Trail of Bits recently completed a security assessment of Kubernetes, including its interaction with Docker. Felix Wilhelm’s recent tweet of a Proof of Concept (PoC) “container escape” sparked our in…