The Stars Are Aligning: Announcing our first round of speakers at DockerCon LIVE 2021

With just over a month to go before DockerCon LIVE 2021, we’re thrilled to announce our first round of speakers. We have returning favorites and compelling new first-time speakers to round out your DockerCon experience.

We received hundreds of amazing speaker proposals, which made it difficult to select just a few. This year we set up a small team of seven Docker staff members and three Docker Captains to diligently review each proposal and deliberate once a week. We have more speakers and sessions to announce, so stay tuned.

Remember, if you haven’t registered for DockerCon, please make sure to do so now to get an early peek at the conference website.

Melissa McKay (Developer Advocate @ JFrog) – The Docker and Container Ecosystem 101

Lukonde Mwila (Senior Software Engineer @ Entelect) – Docker Swarm: A Journey to the AWS Cloud

Peter McKee (Head of Developer Relations @ Docker) – Event Emcee and Panel Moderator

Bret Fisher (DevOps Consultant and Docker Captain) – Panel Moderator

Julie Lerman (Software Coach and Docker Captain) – Panel Member

Nick Janetakis (Full-Stack Developer and Docker Captain) – Best Practices around Creating a Production Ready Web App with Docker and Docker Compose

Anuj Sharma (Software Development Engineer @ AWS) – Migrate and Modernize applications with a consistent developer experience

Matt Jarvis (Senior Developer Advocate @ Snyk) – My container image has 500 vulnerabilities, now what?

Alex Iankoulski (Principal Solutions Architect @ AWS and Docker Captain) – Deploy and Scale your ML Workloads with Docker on AWS

Jacob Howard (Founder @ Mutagen and Docker Captain) – A Pragmatic Tour of Docker Filesystems

Michael Irwin (Application Architect @ Virginia Tech and Docker Captain) – Write Once, Configure to Deploy Anywhere

Benjamin De St Paer-Gotch (Principal Product Manager @ Docker) – Dev Environments

Join Us for DockerCon LIVE 2021

Join us for DockerCon LIVE 2021 on Thursday, May 27. DockerCon LIVE is a free, one day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn about how to go from code to cloud fast and how to solve your development challenges, DockerCon LIVE 2021 offers engaging live content to help you build, share and run your applications. Register today at https://dockr.ly/2PSJ7vn

The post The Stars Are Aligning: Announcing our first round of speakers at DockerCon LIVE 2021 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Released: Docker Desktop for Mac [Apple Silicon]

Today we are excited to announce the general availability of Docker Desktop for Mac [Apple Silicon], continuing to support developers in our community with their choice of local development environments.  

First, we want to say a big thank you to our community. The excitement you have shown about being able to run Docker Desktop on the new M1 chip has been tremendous and hugely motivating to us. Your engagement on testing builds and reporting problems has been invaluable. As soon as Apple announced the new M1 chip, you let us know on our public roadmap that this was a high priority for you, and it quickly became by far our most upvoted roadmap item ever. You also responded very positively to our previous blog posts.

After the M1 machines were publicly available, those of you on our developer preview program tested some very early builds. And then as we moved into public tech previews and release candidates, many more of you joined in with testing your enormous variety of use cases, and reporting bugs. In total we have had 45,000 downloads of the various preview builds, and 140 tickets raised on our public bug tracker, not to mention countless messages on our community Slack.

We know that Docker Desktop is an essential part of the development process for so many of you. We are very grateful that we have such an active and supportive community, and that you have shared both your excitement and your feedback with us. We couldn’t have gotten here without you.

Thank you!

Where can you get it? 

Download it here!

Release notes can be found here!

Looking for support?

Did you know that you can get Premium Customer Support for Docker Desktop with a Pro or Team subscription?  With this GA release, we’re now ready to officially help support you if you’re thinking about using Docker Desktop for Mac [Apple Silicon], for Mac [Intel] or for Windows. Check out our pricing page to learn more about what’s included in a Pro or Team subscription, and if it’s right for you.

Have you tried multi-platform builds?

Many developers are going to experience multi-platform development for the first time with the Macs powered by the M1 chip. This is one of the key areas where Docker shines. Docker has had support for multi-platform images for a long time, meaning that you can build and run both amd64 (Intel) and arm64 (Apple Silicon) images on Docker Desktop today. The new Docker Desktop for Apple Silicon is no exception; you can build and run images for both x86 and ARM architectures without having to set up a complex cross-compilation development environment.

Docker Hub also makes it easy to identify and share repositories that provide multi-platform images.

Using docker buildx you can also easily integrate multi-platform builds into your build pipeline.
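As a minimal sketch of what this looks like in practice (the builder name and image tag below are placeholders, not from this post), a multi-platform build with `docker buildx` can be run in a single invocation:

```shell
# Create a buildx builder that supports multi-platform builds and select it.
docker buildx create --name multiarch --use

# Build the current directory's Dockerfile for both Intel and Apple Silicon,
# and push the resulting multi-platform image to a registry in one step.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t yourname/yourapp:latest \
  --push .
```

Pulling `yourname/yourapp:latest` on either architecture will then select the matching image variant automatically.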

Try it today.

Join Us for DockerCon LIVE 2021

Join us for DockerCon LIVE 2021 on Thursday, May 27. DockerCon LIVE is a free, one day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn about how to go from code to cloud fast and how to solve your development challenges, DockerCon LIVE 2021 offers engaging live content to help you build, share and run your applications. Register today at https://dockr.ly/2PSJ7vn


Captain Take 5 – Nuno do Carmo

Docker Captains are select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions, ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Nuno do Carmo, who has been a Docker Captain since 2019. He is a Senior System Analyst for a pharmaceutical company in Switzerland and is based in Montreux.

How/when did you first discover Docker?

Back in 2015, I was hanging out with friends and we would meet once a week to check on technologies. We found a training course on Pluralsight, given by a certain Nigel Poulton, and we decided to “temporarily” download it, **cough**.

Both the training method from Nigel and the technology of Docker were an instant hit for us. We started to learn as hobbyists and fast forward, I guess I took it more at heart than my friends, haha.

What is your favorite Docker command?

`docker run`, everything starts with a `docker run`. That first time that we launch “something”, we don’t know exactly what; it’s not a VM, and yet an instance of another OS is running.

Also, the power of simplicity of `docker run`: if the container image, network, or volume does not exist, Docker creates it for us. We can start small with a `docker run -it alpine` and go up to a way more complex command with ports, secrets, and privileges.

Special mention to `docker app` and the whole CNAB community, as it was the first time I contributed back to a project which used Docker (read below).

What is your top tip you think other people don’t know for working with Docker?

For the ones who know me, I’m a huge fan(boy) of Windows Subsystem for Linux (WSL) and while I do use Docker Desktop for Windows, I keep on reminding people who ask for help, that Docker on WSL2 can be installed like we would do in a native Linux OS.

Therefore we can have Docker Desktop running, let’s say with Windows containers, and Docker inside WSL2 running in parallel.

And special mention (yes, again) to `docker context` that makes it even easier to manage the whole setup.
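To illustrate that setup (a hedged sketch; the context name, endpoint, and description below are illustrative, not from the interview), `docker context` lets you switch between the two engines from one CLI:

```shell
# Register the engine running inside WSL2 as a named context.
docker context create wsl2-engine \
  --docker "host=unix:///var/run/docker.sock" \
  --description "Docker Engine running inside WSL2"

# Point the CLI at the WSL2 engine...
docker context use wsl2-engine

# ...or back at the default Docker Desktop engine.
docker context use default

# List all configured contexts; the active one is starred.
docker context ls
```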

What’s the coolest Docker demo you have done/seen?

Done, without hesitation: converting a Docker container into a WSL2 distribution, which I have since remixed many times.

The demo consists of running a Docker container with almost any Linux-based OS, exporting it as a compressed file, and re-importing it into WSL2 as a new distro.
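The steps above can be sketched roughly as follows (distro and path names are illustrative; the second half runs from a Windows shell with WSL2 installed):

```shell
# 1. Run a container from the image you want to turn into a distro.
docker run -t --name wsl-export ubuntu ls /

# 2. Export the container's filesystem as a tarball.
docker export wsl-export -o ubuntu.tar

# 3. From Windows: import the tarball into WSL2 as a brand-new distro.
wsl --import UbuntuFromDocker C:\wsldistros\ubuntu ubuntu.tar

# 4. Start the new distro.
wsl -d UbuntuFromDocker
```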

I was able to automate it with a CNAB tool called Porter.sh, and the community was so surprised (it was not an initially intended use case) that I got my first KubeCon invite to showcase it during a Day 0 CNAB event led by another amazing Captain, Scott Coulton.

A massive thank you to the CNAB team and especially Carolyn Van Slyck, Ralf Squillace and Jeremy Rickard.

Seen, again without hesitation, it’s the demo from Jessie Frazelle called “Willy Wonka of Containers” (https://youtu.be/GsLZz8cZCzc).

This demo had such a huge impact on me, and now, 6 years later, I will finally be able to “mimic” it 100% in WSL2. That’s how far “in the future” this demo was, and Jessie is just incredible.

What have you worked on in the past six months that you’re particularly proud of?

Being a hobbyist (read: I do not work with Docker in my daily job), I can definitely say that I’m proud to have helped community members as a Captain and overall fan.

As for the technical part, I’m hard at work bringing rootless containers to WSL2 where, once again, I adapt the existing work of the great Akihiro Suda.

What do you anticipate will be Docker’s biggest announcement this year?

Docker Desktop for Linux, what else?

What do you think is going to be Docker’s biggest challenge this year?

These past years, Docker Inc (the company), needed to find its place in a Cloud Native world led by Kubernetes and associated projects.

I really think Scott Johnston has a very good vision, refocusing on what Docker does and speaks to best: the Developers.

Still, the road is not an easy one and this year I think Docker will be cementing its “new” position within the Cloud Native ecosystem.

What are some personal goals for the next year with respect to the Docker community?

Being a “new generation” of Captain, it’s always hard for me to believe I am in the same group as Legends that motivate me, still to this day, to use and enjoy Docker.

Being the “WSL Corsair”, I found my niche, and I simply would love to see more adoption of Docker Desktop for Windows with WSL2 backend.

Another point is to keep helping the Docker Windows containers side too. It’s there and my own impression is that it lacks some momentum right now. So bringing back the fire with the help of the community will be lots of fun.

What talk would you most love to see at DockerCon 2021?

Docker Desktop for Linux, what else? (bis)

But also, a lot of community members coming together and having fun, and maybe, why not, someone doing a crazy demo and motivating at least 1 person to become a Captain too in the future.

Looking to the distant future, what is the technology that you’re most excited about and that you think holds a lot of promise?

WSL2, and that’s not just the fanboy talking right now. Having the possibility of a mix of two worlds, not colliding but merging, is just mind-blowing.

In terms of hardware, even if ARM devices have existed for a long time, the Apple M1 really shook the world and the speed at which Software makers are porting their applications really opens a new ecosystem.

Rapid fire questions…

What new skill have you mastered during the pandemic?

Cooking new recipes

Cats or Dogs?

4 cats at home…

Salty, sour or sweet?

Salty

Beach or mountains?

I’m Portuguese living in Switzerland: both

Your most often used emoji?

  and

DockerCon Live 2021

Join us for DockerCon LIVE 2021 on Thursday, May 27. DockerCon Live is a free, one day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn about how to go from code to cloud fast and how to solve your development challenges, DockerCon 2021 offers engaging live content to help you build, share and run your applications. Register today at https://dockr.ly/2PSJ7vn

Changing How Updates Work with Docker Desktop 3.3

Today we are pleased to announce the release of Docker Desktop 3.3.

We’ve been listening to your feedback on our Public Roadmap and we are consistently asked for three things: smaller downloads, more flexible installation options, and more frequent feature releases, bug fixes, and security updates.

We also heard from our community that while smaller updates are appreciated, requiring immediate installation is inconvenient, and automatic background downloads are problematic for developers on constrained or metered bandwidth.

We’ve heard you and are changing how updates to Docker Desktop work, while still maintaining the ability to provide you with smaller, faster updates. We are also providing additional flexibility to developers with Pro or Team subscriptions.

Flexibility for Updates 

With Docker Desktop 3.3, when a new update to Docker Desktop is available, it will no longer be automatically downloaded and installed on your next restart. You can now choose when to start the download and installation process.

To encourage developers to stay up to date, we have built in increasingly persistent reminders after an update has become available.

If you use Docker Desktop at work you may need to skip a specific update. For this reason, Pro or Team subscription developers can skip notifications for a particular update when a reminder appears.

Finally, for developers in larger organizations who don’t have administrative access to install updates to Docker Desktop, or who are only allowed to upgrade to IT-approved versions, there is now an option in the Settings menu to opt out of update notifications altogether if your Docker ID is part of a Team subscription.

It’s your positive feedback that helps us continue to improve the Docker experience. We truly appreciate it. Please keep that feedback coming by raising tickets on our Public Roadmap.

See the release notes for Docker Desktop for Mac and Docker Desktop for Windows for the complete set of changes in Docker Desktop 3.3 including the latest Compose release, update to Linux Kernel 5.10, and several other bug fixes and improvements you’ve been waiting for.

And check out our Tech Preview page for the latest updates on support for Apple Silicon (there’s an RC3!).

Interested in learning more about what else is included with a Pro or Team subscription in addition to more flexible update options? Check out our pricing page for a detailed breakdown.

Get Involved with Docker

Every day, hundreds of passionate Docker users around the world contribute to Docker. Whether you are just getting started or are an expert in your field, there are many ways to get involved and start contributing to Docker. If you’re into technical writing, you can easily publish and/or edit articles in docs.docker.com. If you’re more into code contribution, there are dozens of open source Docker projects you can dive into. Or if you’re just interested in sharing knowledge and spreading Docker goodness, you can organize a local meetup or a virtual workshop on our community events page. 

There are literally countless ways one can contribute to Docker. This makes it sometimes a bit difficult to find the right project or activity that maps to your interests and level of Docker expertise. That’s why we’ve been working to make it easier for anyone to learn more about ways to contribute and find the right project or activity. To this end, we created a community-driven website that aims to make it easier than ever to navigate the many different contribution opportunities that exist at Docker, and ultimately, to find the right contribution pathway to get started. 

The website is entirely built on top of GitHub, is editable by the community and is organized into six distinct sections, from technical to non-technical contributions. 

We also put emphasis on “community events”, which are central in our efforts to engage more contributors. You’ll find lots of tools and resources that will be continuously updated, including event handbooks. These handbooks are specifically designed with step-by-step guidance on how to run a workshop, with a full list of topics to cover, e.g. a “Docker for Node.js developers” workshop. Again, the website is entirely editable by anyone with a GitHub account, so if you have content to share, a bug to flag or a recommendation to make, just make a pull request.

This is an experimental website: we’re still building out sections and figuring out the right format and structure. We look forward to seeing it evolve and improve over time with the contributions from the community to become a very useful resource for the Docker community.

A *huge* hat tip to Docker Captain Ajeet Raina for driving this initiative forward!

References and links:

- Get Involved with Docker website
- How to contribute to the Get Involved site
- Community Leaders Handbooks

Compiling Containers – Dockerfiles, LLVM and BuildKit

Today we’re featuring a blog from Adam Gordon Bell at Earthly who writes about how BuildKit, a technology developed by Docker and the community, works and how to write a simple frontend. Earthly uses BuildKit in their product.

Introduction

How are containers made? Usually, from a series of statements like `RUN`, `FROM`, and `COPY`, which are put into a Dockerfile and built.  But how are those commands turned into a container image and then a running container?  We can build up an intuition for how this works by understanding the phases involved and creating a container image ourselves. We will create an image programmatically and then develop a trivial syntactic frontend and use it to build an image.

On `docker build`

We can create container images in several ways. We can use Buildpacks, or build tools like Bazel or sbt, but by far the most common way images are built is using `docker build` with a Dockerfile. The familiar base images Alpine, Ubuntu, and Debian are all created this way.

Here is an example Dockerfile:

FROM alpine
COPY README.md README.md
RUN echo "standard docker build" > /built.txt

We will be using variations on this Dockerfile throughout this tutorial. 

We can build it like this:

docker build . -t test

But what is happening when you call `docker build`? To understand that, we will need a little background.

Background

A Docker image is made up of layers. Those layers form an immutable filesystem. A container image also has some descriptive data, such as the start-up command, the ports to expose, and volumes to mount. When you `docker run` an image, it starts up inside a container runtime.

 I like to think about images and containers by analogy. If an image is like an executable, then a container is like a process. You can run multiple containers from one image, and a running image isn’t an image at all but a container.

Continuing our analogy, BuildKit is a compiler, just like LLVM.  But whereas a compiler takes source code and libraries and produces an executable, BuildKit takes a Dockerfile and a file path and creates a container image.

Docker build uses BuildKit, to turn a Dockerfile into a docker image, OCI image, or another image format.  In this walk-through, we will primarily use BuildKit directly.

This primer on using BuildKit supplies some helpful background on using BuildKit, `buildkitd`, and `buildctl` via the command-line. However, the only prerequisite for today is running `brew install buildkit` or the appropriate OS equivalent steps.

How Do Compilers Work?

A traditional compiler takes code in a high-level language and lowers it to a lower-level language. In most conventional ahead-of-time compilers, the final target is machine code. Machine code is a low-level programming language that your CPU understands.

Fun Fact: Machine Code VS. Assembly

Machine code is written in binary. This makes it hard for a human to understand. Assembly code is a plain-text representation of machine code that is designed to be somewhat human-readable. There is generally a 1-1 mapping between the instructions the machine understands (in machine code) and the opcodes in Assembly.

Compiling the classic C “Hello, World” into x86 assembly code using the Clang frontend for LLVM looks like this:

Creating an image from a Dockerfile works in a similar way:

BuildKit is passed the Dockerfile and the build context, which is the present working directory in the above diagram. In simplified terms, each line in the Dockerfile is turned into a layer in the resulting image. One significant way image building differs from compiling is this build context. A compiler’s input is limited to source code, whereas `docker build` takes a reference to the host filesystem as an input and uses it to perform actions such as `COPY`.

There Is a Catch

The earlier diagram of compiling “Hello, World” in a single step missed a vital detail. Computer hardware is not a singular thing. If every compiler were a hand-coded mapping from a high-level language to x86 machine code, then moving to the Apple M1 processor would be quite challenging because it has a different instruction set.  

Compiler authors have overcome this challenge by splitting compilation into phases.  The traditional phases are the frontend, the backend, and the middle. The middle phase is sometimes called the optimizer, and it deals primarily with an internal representation (IR).

This staged approach means you don’t need a new compiler for each new machine architecture. Instead, you just need a new backend. Here is an example of what that looks like in LLVM:

Intermediate Representations

This multiple backend approach allows LLVM to target ARM, X86, and many other machine architectures using LLVM Intermediate Representation (IR) as a standard protocol.  LLVM IR is a human-readable programming language that backends need to be able to take as input. To create a new backend, you need to write a translator from LLVM IR to your target machine code. That translation is the primary job of each backend.

Once you have this IR, you have a protocol that various phases of the compiler can use as an interface, and you can build not just many backends but many frontends as well. LLVM has frontends for numerous languages, including C++, Julia, Objective-C, Rust, and Swift.  

If you can write a translation from your language to LLVM IR, LLVM can translate that IR into machine code for all the backends it supports. This translation function is the primary job of a compiler frontend.

In practice, there is much more to it than that. Frontends need to tokenize and parse input files, and they need to return pleasant errors. Backends often have target-specific optimizations to perform and heuristics to apply. But for this tutorial, the critical point is that having a standard representation ends up being a bridge that connects many front ends with many backends. This shared interface removes the need to create a compiler for every combination of language and machine architecture. It is a simple but very empowering trick!

BuildKit

Images, unlike executables, have their own isolated filesystem. Nevertheless, the task of building an image looks very similar to compiling an executable. They can have varying syntax (dockerfile1.0, dockerfile1.2), and the result must target several machine architectures (arm64 vs. x86_64).

“LLB is to Dockerfile what LLVM IR is to C” – BuildKit Readme

This similarity was not lost on the BuildKit creators.  BuildKit has its own intermediate representation, LLB.  And where LLVM IR has things like function calls and garbage-collection strategies, LLB has mounting filesystems and executing statements.

LLB is defined as a protocol buffer, and this means that BuildKit frontends can make gRPC requests against `buildkitd` to build a container directly.

Programmatically Making An Image

Alright, enough background.  Let’s programmatically generate the LLB for an image and then build an image.  

Using Go

In this example, we will be using Go which lets us leverage existing BuildKit libraries, but it’s possible to accomplish this in any language with Protocol Buffer support.

Import LLB definitions:

import (
	"context"
	"os"

	"github.com/moby/buildkit/client/llb"
)

Create LLB for an Alpine image:

func createLLBState() llb.State {
	return llb.Image("docker.io/library/alpine").
		File(llb.Copy(llb.Local("context"), "README.md", "README.md")).
		Run(llb.Args([]string{"/bin/sh", "-c", "echo \"programmatically built\" > /built.txt"})).
		Root()
}

We are accomplishing the equivalent of a `FROM` by using `llb.Image`. Then, we copy a file from the local file system into the image using `File` and `Copy`.  Finally, we `RUN` a command to echo some text to a file.  LLB has many more operations, but you can recreate many standard images with these three building blocks.

The final thing we need to do is turn this into protocol-buffer and emit it to standard out:

func main() {
	dt, err := createLLBState().Marshal(context.TODO(), llb.LinuxAmd64)
	if err != nil {
		panic(err)
	}
	llb.WriteTo(dt, os.Stdout)
}

Let’s look at what this generates using the `dump-llb` option of buildctl:

go run ./writellb/writellb.go |
buildctl debug dump-llb |
jq .

We get this JSON formatted LLB:

{
"Op": {
"Op": {
"source": {
"identifier": "local://context",
"attrs": {
"local.unique": "s43w96rwjsm9tf1zlxvn6nezg"
}
}
},
"constraints": {}
},
"Digest": "sha256:c3ca71edeaa161bafed7f3dbdeeab9a5ab34587f569fd71c0a89b4d1e40d77f6",
"OpMetadata": {
"caps": {
"source.local": true,
"source.local.unique": true
}
}
}
{
"Op": {
"Op": {
"source": {
"identifier": "docker-image://docker.io/library/alpine:latest"
}
},
"platform": {
"Architecture": "amd64",
"OS": "linux"
},
"constraints": {}
},
"Digest": "sha256:665ba8b2cdc0cb0200e2a42a6b3c0f8f684089f4cd1b81494fbb9805879120f7",
"OpMetadata": {
"caps": {
"source.image": true
}
}
}
{
"Op": {
"inputs": [
{
"digest": "sha256:665ba8b2cdc0cb0200e2a42a6b3c0f8f684089f4cd1b81494fbb9805879120f7",
"index": 0
},
{
"digest": "sha256:c3ca71edeaa161bafed7f3dbdeeab9a5ab34587f569fd71c0a89b4d1e40d77f6",
"index": 0
}
],
"Op": {
"file": {
"actions": [
{
"input": 0,
"secondaryInput": 1,
"output": 0,
"Action": {
"copy": {
"src": "/README.md",
"dest": "/README.md",
"mode": -1,
"timestamp": -1
}
}
}
]
}
},
"platform": {
"Architecture": "amd64",
"OS": "linux"
},
"constraints": {}
},
"Digest": "sha256:ba425dda86f06cf10ee66d85beda9d500adcce2336b047e072c1f0d403334cf6",
"OpMetadata": {
"caps": {
"file.base": true
}
}
}
{
"Op": {
"inputs": [
{
"digest": "sha256:ba425dda86f06cf10ee66d85beda9d500adcce2336b047e072c1f0d403334cf6",
"index": 0
}
],
"Op": {
"exec": {
"meta": {
"args": [
"/bin/sh",
"-c",
"echo \"programmatically built\" > /built.txt"
],
"cwd": "/"
},
"mounts": [
{
"input": 0,
"dest": "/",
"output": 0
}
]
}
},
"platform": {
"Architecture": "amd64",
"OS": "linux"
},
"constraints": {}
},
"Digest": "sha256:d2d18486652288fdb3516460bd6d1c2a90103d93d507a9b63ddd4a846a0fca2b",
"OpMetadata": {
"caps": {
"exec.meta.base": true,
"exec.mount.bind": true
}
}
}
{
"Op": {
"inputs": [
{
"digest": "sha256:d2d18486652288fdb3516460bd6d1c2a90103d93d507a9b63ddd4a846a0fca2b",
"index": 0
}
],
"Op": null
},
"Digest": "sha256:fda9d405d3c557e2bd79413628a435da0000e75b9305e52789dd71001a91c704",
"OpMetadata": {
"caps": {
"constraints": true,
"platform": true
}
}
}

Looking through the output, we can see how our code maps to LLB.

Here is our `Copy` as part of a FileOp:

"Action": {
"copy": {
"src": "/README.md",
"dest": "/README.md",
"mode": -1,
"timestamp": -1
}
}

Here is mapping our build context for use in our `COPY` command:

"Op": {
"source": {
"identifier": "local://context",
"attrs": {
"local.unique": "s43w96rwjsm9tf1zlxvn6nezg"
}
}
}

Similarly, the output contains LLB that corresponds to our  `RUN` and `FROM` commands. 

Building Our LLB

To build our image, we must first start `buildkitd`:

docker run --rm --privileged -d --name buildkit moby/buildkit
export BUILDKIT_HOST=docker-container://buildkit

Then we can generate our LLB and pipe it to `buildctl build`:

go run ./writellb/writellb.go |
buildctl build \
--local context=. \
--output type=image,name=docker.io/agbell/test,push=true

The output flag lets us specify what backend we want BuildKit to use.  We will ask it to build an OCI image and push it to docker.io. 

Real-World Usage

In a real-world tool, we might want to programmatically make sure `buildkitd` is running and send the RPC request directly to it, as well as provide friendly error messages. For tutorial purposes, we will skip all that.
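For the curious, the "make sure `buildkitd` is running" part could look roughly like this; a hedged sketch rather than part of the tutorial, and it assumes the container is named `buildkit` as above:

```shell
# Start buildkitd only if a container named "buildkit" is not already running.
if [ "$(docker container inspect -f '{{.State.Running}}' buildkit 2>/dev/null)" != "true" ]; then
  docker run --rm --privileged -d --name buildkit moby/buildkit
fi

# Point buildctl at the container.
export BUILDKIT_HOST=docker-container://buildkit
```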

We can run it like this:

> docker run -it --pull always agbell/test:latest /bin/sh

And we can then see the results of our programmatic `COPY` and `RUN` commands:

/ # cat built.txt
programmatically built
/ # ls README.md
README.md

There we go! The full code example can be a great starting place for your own programmatic docker image building.

A True Frontend for BuildKit

A true compiler front end does more than just emit hardcoded IR.  A proper frontend takes in files, tokenizes them, parses them, generates a syntax tree, and then lowers that syntax tree into the internal representation. Mockerfiles are an example of such a frontend:

#syntax=r2d4/mocker
apiVersion: v1alpha1
images:
- name: demo
  from: ubuntu:16.04
  package:
    install:
    - curl
    - git
    - gcc

And because `docker build` supports the `#syntax` directive, we can even build a Mockerfile directly with `docker build`.

docker build -f mockerfile.yaml .

To support the `#syntax` directive, all that is needed is to put the frontend in a docker image that accepts a gRPC request in the correct format, and publish that image somewhere. At that point, anyone can use your frontend with `docker build` by just adding `#syntax=yourimagename`.

Building Our Own Example Frontend for `docker build`

Building a tokenizer and a parser as a gRPC service is beyond the scope of this article. But we can get our feet wet by extracting and modifying an existing frontend. The standard dockerfile frontend is easy to disentangle from the moby project. I’ve pulled the relevant parts out into a stand-alone repo. Let’s make some trivial modifications to it and test it out.

So far, we’ve only used the Dockerfile commands `FROM`, `RUN` and `COPY`. At a surface level, with its capitalized commands, Dockerfile syntax looks a lot like the programming language INTERCAL. Let’s change these commands to their INTERCAL equivalents and develop our own Ickfile format.

Dockerfile    Ickfile
FROM          COME FROM
RUN           PLEASE
COPY          STASH

The modules in the dockerfile frontend split the parsing of the input file into several discrete steps, with execution flowing this way:

For this tutorial, we are only going to make trivial changes to the frontend.  We will leave all the stages intact and focus on customizing the existing commands to our tastes.  To do this, all we need to do is change `command.go`:

package command

// Define constants for the command strings
const (
	Copy = "stash"
	Run  = "please"
	From = "come_from"
)

And we can then see the results of our `STASH` and `PLEASE` commands:

/ # cat built.txt
custom frontend built
/ # ls README.md
README.md

I’ve pushed this image to Docker Hub. Anyone can start building images using our `ickfile` format by adding `#syntax=agbell/ick` to an existing Dockerfile. No manual installation is required!
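For instance, an ickfile matching the output above might look like this (the file contents here are my own illustration, using the `agbell/ick` image just mentioned):

```
#syntax=agbell/ick
COME FROM alpine
PLEASE echo "custom frontend built" > /built.txt
STASH README.md .
```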

Enabling BuildKit

BuildKit is enabled by default on Docker Desktop. It is not enabled by default in the current version of Docker Engine for Linux (`version 20.10.5`). To instruct `docker build` to use BuildKit, set the environment variable `DOCKER_BUILDKIT=1` or enable it in the Engine config.
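The Engine config route is a feature flag in `/etc/docker/daemon.json` (restart the daemon after editing):

```json
{
  "features": {
    "buildkit": true
  }
}
```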

Conclusion

We have learned that a three-phased structure borrowed from compilers powers building images, and that an intermediate representation called LLB is the key to that structure. Empowered by that knowledge, we have produced two frontends for building images.

This deep dive on frontends still leaves much to explore. If you want to learn more, I suggest looking into BuildKit workers. Workers do the actual building and are the secret behind `docker buildx` and multi-architecture builds. `docker build` also has support for remote workers and cache mounts, both of which can lead to faster builds.

Earthly uses BuildKit internally for its repeatable build syntax. Without it, our containerized Makefile-like syntax would not be possible. If you want a saner CI process, then you should check it out.

There is also much more to explore about how modern compilers work. Modern compilers often have many stages and more than one intermediate representation, and they are often able to do very sophisticated optimizations.
The post Compiling Containers – Dockerfiles, LLVM and BuildKit appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

A Birthday Challenge as Docker Turns 8

Time flies. Eight years ago Docker was introduced to the world and forever changed the way applications are developed. We have enjoyed watching developers from all walks of life and from every corner of the globe bring their ideas to life using our technology. 

As is our tradition in the Docker community, and as announced during our last Community All-Hands, we are celebrating Docker’s big day with a birthday challenge, where Docker users are encouraged to learn some of our Docker Captains’ favorite tips and tricks by completing 8 hands-on interactive exercises. Unlike last year’s challenge, this year completing an exercise earns you not only badges but also points based on speed and accuracy, displayed on a leaderboard organised by individual score, country score and Captain score.

The challenge is on for the next month and we will announce the winners and award special prizes to the top three individual scores. 

So let’s celebrate 8 years of Docker and let the challenge begin!

DockerCon LIVE 2021 Registration is Now Open

We’re excited to announce that registration for DockerCon LIVE 2021 is now officially open!

Taking place on Thursday, May 27th, the one day virtual event brings together all of the application development technology, skills, tools and people to help you build, share and run applications faster. And the best part? It’s FREE.  

Attendees will:

- Learn about the latest Docker features and technology updates
- See live, on-demand technical demos
- Talk to a panel of experts and industry leaders who can help you build better apps
- Connect with peers and network with a thriving, vibrant community of developers
- Share experiences with other developers about creating leading edge cloud native applications for any cloud environment
- Attend tutorials on how to get started with containers and how to use multiple languages
- Get best practices tips and insights from innovative organizations that are building next generation applications with Docker
- Hear about what’s new with tools and partner integrations
- Participate in live sessions with Docker Captains

Be in on the Action

Our Call For Presentations is open until April 1st, so there’s still time for you to submit a talk. If you have any questions about our CFP or the conference in general, don’t hesitate to drop us a line in #dockercon2021 on our Community Slack.

We look forward to welcoming you in May for what promises to be our best DockerCon yet!  Register today.

Advanced Image Management in Docker Hub

We are excited to announce the latest feature for Docker Pro and Team users, our new Advanced Image Management Dashboard, available on Docker Hub. The new dashboard gives you a new level of access to all of the content you have stored in Docker Hub, with more fine-grained control over removing old content and exploring old versions of pushed images.

Historically in Docker Hub we have had visibility into the latest version of a tag that a user has pushed, but it has been very hard to see, or even understand, what happened to all of the old versions you pushed. When you push an image to Docker Hub you are pushing a manifest, a list of all of the layers of your image, and the layers themselves.

When you update an existing tag, only the new layers will be pushed, along with a new manifest which references these layers. This new manifest will be given the tag you specify when you push, such as bengotch/simplewhale:latest. But this does mean that all of those old manifests which point at the previous layers that made up your image become untagged. They are still in Hub; there has just been no way to easily see them or to manage that content. You can in fact still use and reference them using the digest of the manifest, if you know it. You can think of this like the commit history (the old digests) of a particular branch (your tag) of your repo (your image repo!).
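The digest itself is nothing magical: it is just the SHA-256 hash of the manifest bytes, which is why it pins one exact version forever. A quick sketch in Python (the manifest below is abbreviated and made up for illustration):

```python
import hashlib
import json

# Abbreviated, illustrative manifest -- a real one is generated on push
# and contains the real layer digests and sizes.
manifest_bytes = json.dumps({
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "layers": [
        {
            "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
            "size": 1024,
            "digest": "sha256:" + "0" * 64,
        }
    ],
}).encode("utf-8")

# The manifest digest is simply the SHA-256 of the manifest bytes.
digest = "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()
print(digest)

# That digest can then pin the exact version, no matter where the tag moves:
#   docker pull bengotch/simplewhale@sha256:<hash>
```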

This means you can have hundreds of old versions of images which your systems may still be pulling by hash rather than by tag, and you may be unaware which old versions are still in use. On top of this, until now the only way to remove these old versions was to delete the entire repo and start again!

With the release of the Image Management Dashboard we have provided a new GUI with all of this information available to you, including whether those currently untagged old manifests are still ‘active’ (have been pulled in the last month) or inactive. Combined with the new bulk delete for these objects and for current tags, this provides a more powerful tool for batch managing your content in Docker Hub.

To get started you will find a new banner on your repos page if you have inactive images:

This will tell you how many images you have, tagged or old, which have not been pushed or pulled in the last month. By clicking view, you can go through to the new Advanced Image Management Dashboard to check out all your content. From here you can see what the tags of certain manifests used to be and use the multi-select option to bulk delete them.

For a full product tour check out our overview video of the feature below.

We hope that you are excited about this first step in providing greater insight into your content on Docker Hub. If you want to get started exploring your content, all users can see how many inactive images they have, and Pro and Team users can also see which tags those images used to be associated with and what their hashes are, and can start removing them today. To find out more about becoming a Pro or Team user, check out this page.