Beta IPv6 Support on Docker Hub Registry

At Docker we’re all about our community, so we listened to your excitement about Docker Hub support for IPv6 on the public roadmap, and we are pleased to introduce beta IPv6 support for the Docker Hub Registry! This means that if you’re on an IPv6-only network, you can now opt in to use the registry directly, with no NAT64 gateway.

Internet Protocol version 4 (IPv4), in use since the 1980s, can no longer meet the world’s growing demand for globally unique addresses, and its address pool will eventually be depleted. IPv6 was created as a replacement for IPv4, and it is anticipated that it will become the new internet protocol standard. This move not only increases access to Docker Hub, but positions Hub to remain easily accessible as the world transitions to IPv6.

IPv6 adoption among Google users

Docker will now be one of the few container registries that supports IPv6. This update enables more of our community to use the world’s most popular container registry, while making sure Docker Hub is positioned to support our users in the next stage of the internet.

What does this mean for you?

IPv4 Users: Your access to Hub does not change

Dualstack Users: Can choose between IPv4 and IPv6 endpoints

Dualstack users will now be able to use the new IPv6-only endpoints while in beta. At a future point in time, the primary endpoints will also support IPv6.

IPv6-Only Users: Can access the new IPv6-only domain

IPv6-only users will now be able to use the beta IPv6 endpoint without the need for a NAT64 gateway!

How to use the beta IPv6-only endpoint

If you are on a network with IPv6 support, you can begin using the IPv6-only endpoint registry.ipv6.docker.com! To log in to this new endpoint, simply run the following command (using your regular Docker Hub credentials):

docker login registry.ipv6.docker.com

Once logged in, prefix the image you wish to push or pull with the IPv6-only endpoint. For example, if you wish to pull the official ubuntu image, instead of running the following:

docker pull ubuntu:latest

you will run:

docker pull registry.ipv6.docker.com/library/ubuntu:latest

Note: library is only used for official images; replace it with a namespace when applicable. For example, pulling docker/dockerfile becomes:

docker pull registry.ipv6.docker.com/docker/dockerfile:latest
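
The same prefix applies when pushing. As a minimal sketch, assuming a hypothetical repository my-namespace/my-app that you have already built locally (both names are placeholders, not real repositories), a push through the beta endpoint would look like:

docker tag my-app:latest registry.ipv6.docker.com/my-namespace/my-app:latest
docker push registry.ipv6.docker.com/my-namespace/my-app:latest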

This endpoint is only supported for pushes and pulls against the Docker Hub Registry with the Docker CLI; Docker Desktop is not supported. The Docker Hub website and other systems will see updates for IPv6 in the future based on what we learn here.

Please note that this new endpoint is only a beta: there is no guarantee of functionality or uptime, and it will be removed in the future. Do not use this endpoint for anything other than testing.

Implementation

Updating networking infrastructure correctly and in an automated fashion on a high traffic network such as Docker Hub requires precision, delicacy and rigorous testing. A significant number of changes were made across our Amazon Web Services (AWS) network resources and routing stack in order to support IPv6. To give an idea of the process involved, here are some notable highlights:

Rate Limiting

In order to prevent abuse and to enforce our Docker Hub rate limiting, we limit requests based on a user’s IP address. Previously, we rate limited based on the full 32-bit IPv4 address. To keep this consistent, we now limit based on full IPv4 addresses and the first 64 bits (the /64 prefix) of IPv6 addresses.

We also updated our allowlist systems, which provide our large organization customers and cloud partners with unlimited access to Hub downloads. Similarly, our regulatory blocklist system was updated to include IPv6 addresses.

Load Balancing

For IPv6 connections, we’ve provisioned brand new Network Load Balancers (NLBs) which will be handling all AAAA (IPv6) traffic. These give us more performance and better scalability.

Likewise, our application load balancer configurations were updated to understand IPv6 addresses, pass those along properly to the backend applications, and correctly create logs and metrics based on those.
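
If you want to confirm from your own network that the beta endpoint publishes IPv6 (AAAA) records, a standard DNS lookup is enough; this is just an illustrative client-side check, not something the registry requires:

dig AAAA registry.ipv6.docker.com +short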

Software Compatibility

Docker Hub receives billions of requests per day, all of which are logged so we can ensure access compliance and security, and to give us better debugging capabilities. Because of this, our tooling and configuration required updates to ensure our logs are consistent across both IPv4 and IPv6.

Alongside logging, some applications needed an update to support Dualstack endpoints, in particular our distribution service, which now provides IPv6 access to our blob storage! Code changes were made to the registry middleware and authentication services to make sure we can serve IPv6 requests across the whole registry push/pull flow.
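
As a quick end-to-end check of your own, you can force curl onto IPv6 and hit the standard Registry API base path; an unauthenticated request will typically come back with a 401 challenge, which is enough to show the IPv6 path works (this is only an illustrative sanity check):

curl -6 -i https://registry.ipv6.docker.com/v2/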

The Future

We’re thrilled that more users (specifically those on IPv6-only networks) will now have better accessibility to Docker Hub! We’re also happy to be supporting the internet and our industry as we step into this new IP space.

If you have feedback on this beta release, please let us know here: https://github.com/docker/hub-feedback/issues/2165
The post Beta IPv6 Support on Docker Hub Registry appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Notary v2 Project Update

Supply chain security is something that has become increasingly important to all of us in the last few years. Almost as important as the global supply chains that are having problems distributing goods around the world! There have been many attacks via the supply chain, where some piece of software that you use turns out to be compromised or to contain vulnerabilities that in turn compromise your production environment.

We have written about secure supply chain best practices. Docker is committed to helping you build security into your supply chain, and we are working on more tools to help you with this. We provide Docker Trusted Content, including Docker Official Images and Docker Verified Publisher images, for you to use as a trusted starting point for building your applications.

We have also been heavily involved with many community projects around supply chain security. In particular we are heavily involved in the Notary v2 project in the Cloud Native Computing Foundation (CNCF). We last wrote about this in January. This project is the next generation of the original Notary project that Docker started in 2015 and then donated to the CNCF. Notary (to simplify!) is a project for adding cryptographic signatures to container images so that you can make sure that the image someone produced is the same one that you are using, and that it has not been tampered with on the way.

Over the years we have learned a lot about how it is used and the problems that have hindered wider adoption, and these lessons are part of the community feedback into the design of Notary v2. We are looking to build a signing framework that can be used in every registry, and where signatures can be pushed and pulled with images, so that you can identify that an image you pull from your private on-premises registry is the same as the Docker Official Image on Docker Hub, for example. This is one of the many use cases that are important to the community and which Notary v1 did not adequately address. We also want to make it much simpler to use, so we can have signature checks on by default for all users, rather than having opt-in signatures.

Today the project has released an early alpha prototype for further experimentation and for your feedback. Steve Lasker has written a blog post with the details. Check out the demos and please give feedback on whether these workflows fit your use cases, or how we can improve them.

Remember you can give us feedback about any aspect of our products on the Docker public roadmap. We are especially interested in your feedback around supply chain security and what you would like to see; we have had lots of really helpful feedback recently that is helping us work out where to take our products and tools.
The post Notary v2 Project Update appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Screaming In the Cloud with Corey Quinn and Docker CEO Scott Johnston

On August 31st, Docker announced updates to our product subscriptions — Docker Personal, Pro, Team and Business. Our CEO Scott Johnston recently joined Corey Quinn on an episode of Screaming in the Cloud to go over all the details and discuss how the changes have been received by businesses and the broader developer community. 

The episode with Scott is titled “Heresy in the Church of Docker Desktop with Scott Johnston.” It’s a play on the title of a talk Corey once gave (“Heresy in the Church of Docker”) after he met Scott when they both worked at Puppet.

There’s a substantial discussion around Docker Desktop. Scott describes it as a unique hybrid — one that’s based on upstream open-source technologies (Docker Engine, Docker Compose, BuildKit, etc.), while also being a commercial product that’s engineered for the native environments of Mac and Windows, and soon Linux. He also recalls life before Docker Desktop when developers had to contend with complex setup, maintenance, and “tricky stuff that can go wrong” — all of which Docker Desktop handles so that developers can simply focus on building great apps.

Scott and Corey also discuss the why behind the new subscription tiers, and Docker Business in particular. A key factor was large organizations who use Docker Desktop at scale — as in hundreds and thousands of developers — requesting capabilities to help them manage those large developer environments. Another factor was the need to balance continuing investment in Docker Desktop to give organizations increased productivity, flexibility and security, while also sustainably scaling the Docker business, and still providing a generous free experience with the Docker Personal subscription.

According to Scott, the response from businesses to the updated subscriptions has been overwhelmingly positive. Not only have there turned out to be far more Docker Desktop users inside organizations than previously thought, but many companies have already proactively purchased a Docker subscription. The positive momentum is allowing Docker to accelerate items in the company’s roadmap for developers, such as Docker Desktop for Linux.

You can listen to Episode 264 of “Screaming in the Cloud,” titled “Heresy in the Church of Docker Desktop with Scott Johnston,” here.

Considering an Alternative to Docker Desktop?

Read this blog recapping Docker Captain Bret Fisher’s video where he reminded his audience of the many things — some of them complex and subtle — that Docker Desktop does that make it such a valuable developer tool.
The post Screaming In the Cloud with Corey Quinn and Docker CEO Scott Johnston appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Docker Captain Take 5 – Aurélie Vache

Docker Captains are select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions, ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Aurélie Vache, who has been a Docker Captain since 2021. She is a DevRel at OVHcloud and is based in Toulouse, France.

How/when did you first discover Docker?

I’ve been a developer (with Ops & Data sensibility) for over 15 years. I’m a former Java / JEE Developer, Ops, then Web, Full-Stack and Lead Developer.

Four years ago I was a cloud developer working on connected and autonomous vehicle projects. I discovered the magical world of cloud technologies: managed services from cloud providers, containers, orchestrators, observability, monitoring, infrastructure as code, CI/CD, and a real DevOps approach and culture. I fell in love with these technologies.

It was not easy to understand all the new concepts, but it was very interesting and enriching.

It was also the moment I discovered Docker. And since then I have used Docker daily.

What is your favorite Docker command?

$ docker system prune -a

This command helped me in the past to save Jenkins agents :-D.

What is your top tip for working with Docker that others may not know?

I don’t know if it’s a “top tip” but I think it’s useful to know that containers have an exit status code, which can tell us why a container failed.
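
For example, the exit code of a stopped container shows up in docker ps -a and can be read directly with (the container name below is a placeholder):

docker inspect --format '{{.State.ExitCode}}' <container-name>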

If you are interested, you can find my sketchnote about it here:

What’s the coolest Docker demo you have done/seen?

It was when a coworker showed me how easily he packaged his app into a Docker image, pushed it to a Docker registry, and ran it in a container. He could deploy an application anywhere, without having to install tools or dependencies or deal with version conflicts, and he could deploy his application in multiple environments without having to set up each environment manually.

It was so magical and powerful.

What have you worked on in the past six months that you’re particularly proud of?

I have helped evangelize and democratize Docker, as well as Kubernetes and the world of containers and the cloud in general, across multiple companies. I also presented several talks entitled “Docker, Kubernetes, Istio: Tips, tricks & tools” across France.

I also created a new way to teach cloud technologies, such as Docker, through a series of sketchnotes: “Understanding Docker | Kubernetes | Istio” in a visual way. 

All technical books are written in the same way. Personally I understand more when I see diagrams, schemas, and illustrations rather than a ton of words. I have found this is helpful for other people too :-).

What do you anticipate will be Docker’s biggest announcement this year?

Honestly I don’t know. I like to be surprised ^^.

What are some personal goals for the next year with respect to the Docker community?

I plan to create a new series of videos about Docker, always in a visual way, on my YouTube channel, in order to continue to share and spread my knowledge in an easy way.

What talk would you most love to see at DockerCon 2022?

As usual, I like to be surprised, so I don’t have any expectations.

Rapid fire questions…

What new skill have you mastered during the pandemic?

During the pandemic I created a new way to explain complex and abstract concepts in a more simple and visual way, in sketchnotes, titled “Understanding xx in a visual way”. I sketched every evening and weekend and published the results on Twitter and dev.to. And finally, I’ve published 3 books, about Kubernetes, Istio and Docker. I love to try to explain abstract, complex concepts with simple illustrations and words.

Cats or Dogs?

As a child, I would have answered dog, but now I would answer cat. For a few days now I’ve had a kitten named “Sushi”!

Salty, sour or sweet?

salty

Beach or mountains?

mountain

Your most often used emoji?

^^ or
The post Docker Captain Take 5 – Aurélie Vache appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Speed up Building with Docker BuildX and Graviton2 EC2

As the expansion in arm usage continues, building your images on arm is crucial to making images available and performant across all architectures, which is why we’ve invested in making it super easy to build arm and multi-arch images. In a previous blog we outlined how to build multi-arch images locally using the QEMU emulator that comes pre-packaged with Docker Desktop. In this blog we outline how to get started using a remote builder to accomplish the same goal; for our purposes we will be using an Amazon EC2 instance. In December of 2019, Amazon announced the new Amazon EC2 instances powered by AWS Graviton2 processors that significantly improve performance over the first-generation AWS Graviton processors. Using a Graviton2 instance to build your arm images remotely will speed up the process, making it even easier to develop containers on, and for, arm servers and devices.

We will walk through the following:

Using a Graviton2 EC2 instance as a remote host
Registering the Graviton2 EC2 instance as a remote builder
Building the Docker image on the Graviton2 EC2 instance
Adding additional instances as required

To learn more about Buildx and remote builders, you can visit our documentation.

Getting Started With a Remote Builder

Prerequisites

First, ensure that you are using a remote host instead of your local machine. Common ways to access remote Docker instances are via mTLS or SSH; here we will use SSH for simplicity. To start using a remote builder with Buildx and BuildKit on Graviton2, the host needs to be accessible through the Docker CLI using an ssh://<USERNAME>@<HOST> URL, as follows:

$ docker -H ssh://me@graviton2-instance info
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Build with BuildKit (Docker Inc., v0.6.1-65-gad9dddc3)
compose: Docker Compose (Docker Inc., v2.0.0-rc.3)
scan: Docker Scan (Docker Inc., v0.8.0)

Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.9
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 5b46e404f6b9f661a205e28d59c982d3634148f8
runc version: v1.0.2-0-g52b36a2
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.4.0-1045-aws
Operating System: Ubuntu 20.04.2 LTS
OSType: linux
Architecture: aarch64
CPUs: 2
Total Memory: 1.837GiB
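
If you would rather not pass -H on every invocation, the same result can be achieved by exporting DOCKER_HOST, or by creating a named Docker context for the remote host (the context name graviton2-host below is just an arbitrary label):

export DOCKER_HOST=ssh://me@graviton2-instance
docker info

# or, equivalently, with a named context
docker context create graviton2-host --docker host=ssh://me@graviton2-instance
docker context use graviton2-host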

Create the remote builder with Buildx

Now you can register this remote Graviton2 instance to Docker Buildx using the create command:

$ docker buildx create --name graviton2 \
  --driver docker-container \
  --platform linux/arm64 \
  ssh://me@graviton2-instance
graviton2

--platform is specified so that this node will be preferred for arm64 builds when we add other nodes to this build cluster.

Bootstrap the builder to check and create the BuildKit container on the remote host:

$ docker buildx inspect --bootstrap --builder graviton2
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 2.2s done
#1 creating container buildx_buildkit_node1
#1 creating container buildx_buildkit_node1 1.4s done
#1 DONE 3.7s
Name: graviton2
Driver: docker-container

Nodes:
Name: node1
Endpoint: ssh://me@graviton2-instance
Status: running
Platforms: linux/arm64*, linux/arm/v7, linux/arm/v6
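
At this point the new builder should also appear when you list builders (the exact output will vary with your setup):

docker buildx ls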

Build your image

Now we will create a simple Dockerfile:

FROM busybox as build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
FROM busybox
COPY --from=build /log /log
RUN cat /log
RUN uname -a

And let’s build the image against our builder:

docker buildx build --builder graviton2 \
  --push -t example.com/hello:latest \
  --platform linux/arm64 .
#2 [internal] load .dockerignore
#2 transferring context:
#2 transferring context: 2B 0.2s done
#2 DONE 0.2s

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 246B 0.3s done
#1 DONE 0.4s

#3 [internal] load metadata for docker.io/library/busybox:latest
#3 …

#4 [auth] library/busybox:pull token for registry-1.docker.io
#4 DONE 0.0s

#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 1.5s

#5 [build 1/2] FROM docker.io/library/busybox@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57
#5 resolve docker.io/library/busybox@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 0.0s done
#5 DONE 0.0s

#5 [build 1/2] FROM docker.io/library/busybox@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57
#5 sha256:7560ee4921c3fab4f1d34c83600f6f65841ec863e072374f4e8044ff01df156f 821.72kB / 821.72kB 0.1s done
#5 extracting sha256:7560ee4921c3fab4f1d34c83600f6f65841ec863e072374f4e8044ff01df156f 0.0s done
#5 DONE 0.2s

#6 [build 2/2] RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
#6 DONE 0.1s

#7 [stage-1 2/4] COPY --from=build /log /log
#7 DONE 0.0s

#8 [stage-1 3/4] RUN cat /log
#8 0.078 I am running on $BUILDPLATFORM, building for $TARGETPLATFORM
#8 DONE 0.1s

#9 [stage-1 4/4] RUN uname -a
#9 0.081 Linux buildkitsandbox 5.4.0-1045-aws #47-Ubuntu SMP Tue Apr 13 07:04:23 UTC 2021 aarch64 GNU/Linux
#9 DONE 0.1s

#10 exporting to image
#10 exporting layers
#10 exporting layers 0.3s done
#10 exporting manifest sha256:7097ec1c09675617e2c44b5924b76f7863c4ff685c640b32dfaa1b1e8f2bc641 0.0s done
#10 exporting config sha256:d18ca92a45b563373606f0a06d0a1d2280d0c11976b1ca64dba0e567d540a3e2 0.0s done
#10 pushing layers
#10 pushing layers 0.7s done
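
This example pushes the result straight to a registry with --push. If you instead want to test the arm64 image locally, one option is to use --load, which loads the single-platform result into the Docker engine your CLI points at (note that --load only works for one platform at a time):

docker buildx build --builder graviton2 \
  --load -t example.com/hello:latest \
  --platform linux/arm64 .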

Add other instances

You can also append other Graviton2 instances (node) to the same builder with:

docker buildx create --name graviton2 \
  --node node2 \
  --driver docker-container \
  --platform linux/arm64 \
  --append ssh://me@graviton2-instance2

Note the --append flag to add a node to the existing graviton2 builder.
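
When you are done experimenting, the builder and its remote BuildKit containers can be removed with:

docker buildx rm graviton2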

Now you’ve built your image remotely using a Graviton2 instance! These are just some of the things you can do with Buildx. We’d love your feedback on how it went and what you’d like to see us do next; you can submit feedback to our public roadmap.
The post Speed up Building with Docker BuildX and Graviton2 EC2 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Online Machine Learning: INTO THE DEEP

radicalbit.medium.com – Online Learning is a branch of Machine Learning that has gained significant interest in recent years thanks to its peculiarities, which perfectly fit numerous kinds of tasks in today’s world. Let’s…
Source: news.kubernauts.io

Join Docker This Month at KubeCon and the Cloud Engineering Summit

Two cloud-related conferences are coming up this month, and Docker will have speakers at both. First up, Docker CTO Justin Cormack will present at KubeCon next week. The week after that, Peter McKee, Docker’s head of Developer Relations, will speak at the Pulumi Cloud Engineering Summit.

At KubeCon, Justin and co-presenter Steve Lasker of Microsoft will speak on the topic of tooling for supply chain security with special reference to the Notary project. They’ll also look at the future roadmap and the supply chain landscape. KubeCon, the flagship conference of the Cloud Native Computing Foundation, is geared toward adopters and technologists from leading open source and cloud native communities. The conference runs Oct. 11 – 15 in Los Angeles and virtually. Justin’s presentation, titled Notary: State of the Container Supply Chain, takes place Thursday, Oct. 14 at 4:30 p.m. – 5:05 p.m. Pacific.

At the Cloud Engineering Summit, Peter will team up with Uffizzi’s Josh Thurman to speak about Continuous Previews — a cousin of Continuous Integration and Continuous Deployments that allows developers to easily share new features and changes to a wide audience within their organization, thereby speeding the delivery of features to users. The Wednesday, Oct. 20 summit is a virtual day of learning for cloud practitioners that focuses on best practices for building, deploying and managing modern cloud infrastructure. Peter’s presentation, titled Continuous Previews: Using Infrastructure as Code to Continuously Share and Preview Your Application, takes place at 3:00 p.m. – 3:30 p.m. Pacific.
The post Join Docker This Month at KubeCon and the Cloud Engineering Summit appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/