Multi-Platform Docker Builds

This is a guest post from Docker Captain Adrian Mouat who is Chief Scientist at Container Solutions, a cloud-native consultancy and Kubernetes Certified Service Provider. Adrian is the author of “Using Docker,” published by O’Reilly Media. He is currently developing Trow, a container image registry designed to securely manage the flow of images in a Kubernetes cluster. Adrian is a regular conference speaker and trainer and he has spoken at several events including KubeCon EU, DockerCon, CraftConf, TuringFest and GOTO Amsterdam.

Docker images have become a standard tool for testing and deploying new and third-party software. I’m the main developer of the open source Trow registry and Docker images are the primary way people install the tool. If I didn’t provide images, others would end up rolling their own which would duplicate work and create maintenance issues.

By default, the Docker images we create run on the linux/amd64 platform. This works for the majority of development machines and cloud providers but leaves users of other platforms out in the cold. This is a substantial audience – think of home-labs built from Raspberry Pis, companies producing IoT devices, organisations running on IBM mainframes and clouds utilising low-power arm64 chips. Users of these platforms are typically building their own images or finding another solution.

So how can you build images for these other platforms? The most obvious way is simply to build the image on the target platform itself. This can work in a lot of cases, but if you're targeting s390x, I hope you have access to an IBM mainframe (try Phil Estes, as I've heard he has several in his garage). More common platforms like Raspberry Pis and IoT devices are typically limited in power and are slow or incapable of building images.

So what can we do instead? There are two more options: 1) emulate the target platform or 2) cross-compile. Interestingly, I've found that a blend of the two can work best.

Emulation

Let’s start by looking at the first option, emulation. There’s a fantastic project called QEMU that can emulate a whole bunch of platforms. With the recent buildx work, it’s easier than ever to use QEMU with Docker.

The QEMU integration relies on a Linux kernel feature with the slightly cryptic name of the binfmt_misc handler. When Linux encounters an executable file format it doesn't recognise (i.e. one for a different architecture), it checks with the handler whether there are any "user space applications" configured to deal with the format (i.e. an emulator or VM). If there are, it passes the executable to that application.

For this to work, we need to register the platforms we're interested in with the kernel. If you're using Docker Desktop, this will already have been done for you for the most common platforms. If you're using Linux, you can register handlers in the same way as Docker Desktop by running the latest docker/binfmt image, e.g.:

docker run --privileged --rm docker/binfmt:a7996909642ee92942dcd6cff44b9b95f08dad64

You may need to restart Docker after doing this. If you’d like a little more control over which platforms you want to register or want to use a more esoteric platform (e.g. PowerPC) take a look at the qus project.
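To see what's registered, you can look at the binfmt_misc entries the kernel exposes. The snippet below is a quick sketch; on a machine where no handlers have been registered (or binfmt_misc isn't mounted), the fallback messages will print instead:

```shell
# Inspect the kernel's binfmt_misc support (prints "enabled" if available)
cat /proc/sys/fs/binfmt_misc/status 2>/dev/null || echo "binfmt_misc not mounted"
# Handlers registered by docker/binfmt show up as qemu-* entries
ls /proc/sys/fs/binfmt_misc/ 2>/dev/null | grep qemu || echo "no qemu handlers registered"
```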

There are a couple of different ways to use buildx, but the easiest is probably to enable experimental features on the Docker CLI if you haven't already – just edit ~/.docker/config.json to include the following:

{
  "experimental": "enabled"
}

You should now be able to run docker buildx ls and get output similar to the following:

$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
default docker
default default running linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6

Let’s try building an image for another platform. Start with this Dockerfile:

FROM debian:buster

CMD uname -m

If we build it normally and run it:

$ docker buildx build -t local-build .

$ docker run --rm local-build
x86_64

But if we explicitly name a platform to build for:

$ docker buildx build --platform linux/arm/v7 -t arm-build .

$ docker run --rm arm-build
armv7l

Success! We’ve managed to build and run an armv7 image on an x86_64 laptop with little work. This technique is effective, but for more complex builds you may find it runs too slowly or you hit bugs in QEMU. In those cases, it’s worth looking into whether or not you can cross-compile your image.

Cross-Compilation

Several compilers are capable of emitting binaries for foreign platforms, most notably Go and Rust. With the Trow registry project, we found cross-compilation to be the quickest and most reliable way to create images for other platforms. For example, here is the Dockerfile for the Trow armv7 image. The most relevant line is:

RUN cargo build --target armv7-unknown-linux-gnueabihf -Z unstable-options --out-dir ./out

This explicitly tells Rust which platform we want our binary to run on. We can then use a multistage build to copy this binary into a base image for the target architecture (we could also use scratch if we statically compiled) and we're done. However, in the case of the Trow registry, there are a few more things I want to set in the final image, so the final stage actually begins with:

FROM --platform=linux/arm/v7 debian:stable-slim

Because of this, I’m actually using a blend of both emulation and cross-compilation – cross-compilation to create the binary and emulation to run and configure our final image.
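Putting those pieces together, a minimal multi-stage Dockerfile for this blend might look something like the following. This is a sketch under assumptions, not Trow's actual Dockerfile: the binary name ("myapp"), the toolchain package, and the linker variable are illustrative:

```Dockerfile
# Build stage runs natively on the build host
FROM rust:1 AS builder
WORKDIR /src
COPY . .
# Install the armv7 cross toolchain and matching Rust target
RUN apt-get update && apt-get install -y gcc-arm-linux-gnueabihf \
    && rustup target add armv7-unknown-linux-gnueabihf
# Tell cargo which linker to use for the foreign target
ENV CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER=arm-linux-gnueabihf-gcc
RUN cargo build --release --target armv7-unknown-linux-gnueabihf

# Final stage runs under QEMU emulation, so we can use an armv7 base image
FROM --platform=linux/arm/v7 debian:stable-slim
COPY --from=builder /src/target/armv7-unknown-linux-gnueabihf/release/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Only the final stage needs emulation; the compile step runs at native speed, which is where cross-compilation wins over pure emulation.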

Manifest Lists

In the above advice about emulation, you might have noticed we used the --platform argument to set the build platform, but we left the image specified in the FROM line as debian:buster. It might seem this doesn't make sense – surely the platform depends on the base image and how it was built, not on what the user decides at a later stage?

What is happening here is that Docker is using something called manifest lists. These are lists for a given image that contain pointers to images for different architectures. Because the official debian image has a manifest list defined, when I pull the image on my laptop I automagically get the amd64 image, and when I pull it on my Raspberry Pi I get the armv7 image.
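You can see a manifest list for yourself with docker manifest inspect (which also requires the experimental CLI features enabled above). The output below is trimmed and the digests are illustrative placeholders:

```shell
$ docker manifest inspect debian:buster
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    { "platform": { "architecture": "amd64", "os": "linux" }, "digest": "sha256:..." },
    { "platform": { "architecture": "arm", "os": "linux", "variant": "v7" }, "digest": "sha256:..." },
    { "platform": { "architecture": "s390x", "os": "linux" }, "digest": "sha256:..." }
  ]
}
```

When you pull the image, Docker matches your platform against this list and fetches the corresponding manifest.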

To keep our users happy, we can create manifest lists for our own images. If we go back to our earlier example, first we need to rebuild and push the images to a repository:

$ docker buildx build --platform linux/arm/v7 -t amouat/arch-test:armv7 .

$ docker push amouat/arch-test:armv7

$ docker buildx build -t amouat/arch-test:amd64 .

$ docker push amouat/arch-test:amd64

Next, we create a manifest list that points to these two separate images and push that:

$ docker manifest create amouat/arch-test:blog amouat/arch-test:amd64 amouat/arch-test:armv7
Created manifest list docker.io/amouat/arch-test:blog
$ docker manifest push amouat/arch-test:blog
sha256:039dd768fc0758fbe82e3296d40b45f71fd69768f21bb9e0da02d0fb28c67648

Now Docker will pull and run the appropriate image for the current platform:

$ docker run amouat/arch-test:blog
Unable to find image 'amouat/arch-test:blog' locally
blog: Pulling from amouat/arch-test
Digest: sha256:039dd768fc0758fbe82e3296d40b45f71fd69768f21bb9e0da02d0fb28c67648
Status: Downloaded newer image for amouat/arch-test:blog
x86_64

Somebody with a Raspberry Pi to hand can try running the image and confirm that it does indeed work on that platform as well!

To recap: not all users of Docker images run amd64. With buildx and QEMU, it's possible to support these users with a small amount of extra work.

Happy Birthday, Docker!
The post Multi-Platform Docker Builds appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

COVID-19 public dataset program: Making data freely accessible for better public outcomes

Data always plays a critical role in the ability to research, study, and combat public health emergencies, and nowhere is this more true than in the case of a global crisis. Access to datasets, and to tools that can analyze that data at cloud scale, is increasingly essential to the research process, and is particularly necessary in the global response to the novel coronavirus (COVID-19).

To aid researchers, data scientists, and analysts in the effort to combat COVID-19, we are making a hosted repository of public datasets, like Johns Hopkins Center for Systems Science and Engineering (JHU CSSE), the Global Health Data from the World Bank, and OpenStreetMap data, free to access and query through our COVID-19 Public Dataset Program. Researchers can also use BigQuery ML to train advanced machine learning models with this data right inside BigQuery at no additional cost.

"Making COVID-19 data open and available in BigQuery will be a boon to researchers and analysts in the field," says Sam Skillman, head of engineering at Descartes Labs. "In particular, having queries be free will allow greater participation, and the ability to quickly share results and analysis with colleagues and the public will accelerate our shared understanding of how the virus is spreading."

These datasets remove barriers and provide access to critical information quickly and easily, eliminating the need to search for and onboard large data files. Researchers can access the datasets from within the Google Cloud Console, along with a description of the data and sample queries to advance research. All data we include in the program will be public and freely available. The program will remain in effect until September 15, 2020.

"Developing data-driven models for the spread of this infectious disease is critical," said Matteo Chinazzi, Associate Research Scientist, Northeastern University. "Our team is working intensively to model and better understand the spread of the COVID-19 outbreak. By making COVID-19 data open and available in BigQuery, researchers and public health officials can better understand, study, and analyze the impact of this disease."

The contents of these datasets are provided to the public strictly for educational and research purposes only. We are not onboarding or managing PHI or PII data as part of the COVID-19 Public Dataset Program. Google has practices and policies in place to ensure that data is handled in accordance with widely recognized patient privacy and data security policies.

We on the Google Cloud team sincerely hope that the COVID-19 Public Dataset Program will enable better and faster research to combat the spread of this disease. Get started today.
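As an illustration, a query against the hosted JHU CSSE dataset might look like the following. The table and column names are assumptions based on the public bigquery-public-data project and may have changed since publication:

```sql
-- Top 10 countries by total confirmed COVID-19 cases on a given date
SELECT
  country_region,
  SUM(confirmed) AS total_confirmed
FROM
  `bigquery-public-data.covid19_jhu_csse.summary`
WHERE
  date = '2020-03-15'
GROUP BY
  country_region
ORDER BY
  total_confirmed DESC
LIMIT 10;
```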
Source: Google Cloud Platform

Extending the power of Azure AI to Microsoft 365 users

Today, Yusuf Mehdi, Corporate Vice President of Modern Life and Devices, announced the availability of new Microsoft 365 Personal and Family subscriptions. In his blog, he shared a few examples of how Microsoft 365 is innovating to deliver experiences powered by artificial intelligence (AI) to billions of users every day. Whether through familiar products like Outlook and PowerPoint, or through new offerings such as Presenter Coach and Microsoft Editor across Word, Outlook, and the web, Microsoft 365 relies on Azure AI to offer new capabilities that make their users even more productive.

What is Azure AI?

Azure AI is a set of AI services built on Microsoft’s breakthrough innovation from decades of world-class research in vision, speech, language processing, and custom machine learning. What is particularly exciting is that Azure AI provides our customers with access to the same proven AI capabilities that power Microsoft 365, Xbox, HoloLens, and Bing. In fact, there are more than 20,000 active paying customers—and more than 85 percent of the Fortune 100 companies have used Azure AI in the last 12 months.

Azure AI helps organizations:

Develop machine learning models that can help with scenarios such as demand forecasting, recommendations, or fraud detection using Azure Machine Learning.
Incorporate vision, speech, and language understanding capabilities into AI applications and bots, with Azure Cognitive Services and Azure Bot Service.
Build knowledge-mining solutions to make better use of untapped information in their content and documents using Azure Search.

Microsoft 365 provides innovative product experiences with Azure AI

The announcement of Microsoft Editor is one example of this innovation. Editor, your personal intelligent writing assistant, is available across Word, Outlook.com, and browser extensions for Edge and Chrome. Editor is an AI-powered service, available in more than 20 languages, that has traditionally helped writers with spell check and grammar recommendations. Powered by AI models built with Azure Machine Learning, Editor can now recommend clear and concise phrasing, suggest more formal language, and provide citation recommendations.

Additionally, Microsoft PowerPoint utilizes Azure AI in multiple ways. PowerPoint Designer uses Azure Machine Learning to recommend design layouts to users based on the content of the slide. In the example image below, Designer made the design recommendation based on the context in the slide. It can also intelligently crop objects and people in images and place them in an optimal layout on a slide. Since its launch, PowerPoint Designer users have kept nearly two billion Designer slides in their presentations.

You can take a closer look at how the PowerPoint team built this feature with Azure Machine Learning in this blog.

PowerPoint also uses Azure Cognitive Services such as the Speech service to power live captions and subtitles for presentations in real-time, making it easier for all audience members to follow along. Additionally, PowerPoint also uses Translator Text to provide live translations into over 60 languages to reach an even wider audience. These AI-powered capabilities in PowerPoint are providing new experiences for users, allowing them to connect with diverse audiences they were unable to reach before.

These same innovations can also be found in Microsoft Teams. As we look to stay connected with co-workers, Teams has some helpful capabilities intended to make it easier to collaborate and communicate while working remotely. For example, Teams offers live captioning for meetings, which leverages the Speech API for speech transcription. But it doesn't stop there. As you saw with PowerPoint, Teams also uses Azure AI for live translations when you set up Live Events. This functionality is particularly useful for company town hall meetings, or for any virtual event with up to ten thousand attendees, allowing presenters to reach audiences worldwide.

These are just a few of the ways Microsoft 365 applications utilize Azure AI to deliver industry-leading experiences to billions of users. When you consider the fact that other Microsoft products such as Microsoft 365, Xbox, HoloLens 2, Dynamics 365, and Power Platform all rely on Azure AI, you begin to see the massive scale and the breadth of scenarios that only Azure can offer. Best of all, these same capabilities are available to anyone in Azure AI. 
Source: Azure