Welcome Canonical to Docker Hub and the Docker Verified Publisher Program

Today, we are thrilled to announce that Canonical will distribute its free and commercial software through Docker Hub as a Docker Verified Publisher. Canonical and Docker will partner to ensure that hardened free and commercial Ubuntu images are available to developer software supply chains for multi-cloud app development.

Canonical is the publisher of the Ubuntu OS and a global provider of enterprise open source software for use cases ranging from cloud to IoT. Canonical’s Ubuntu is one of the most popular Docker Official Images on Docker Hub, with over one billion images pulled. With Canonical as a Docker Verified Publisher, developers who pull Ubuntu images from Docker Hub can be confident they are getting the latest images, backed by both Canonical and Docker.

The Ideal Container Registry for Multi-Cloud 

Canonical is the latest publisher to choose Docker Hub for globally sharing their container images. With millions of users, Docker Hub is the world’s largest container registry, ensuring Canonical can reach their developers regardless of where they build and deploy their applications. 

This partnership covers both free and commercial Canonical LTS images, so developers can confidently pull the latest images straight from the source without concern for rate limits.

Canonical chose Docker Hub as its primary distribution for its Ubuntu images to developers for three key reasons: 

Canonical wanted a container registry with developer ubiquity, simple integrations with developer automation, and an independent, un-opinionated registry provider. Canonical wants to enable developers to build new apps on top of Ubuntu with maximum flexibility and optionality for where their apps will run, both today and tomorrow. These qualities of Docker Hub fit well with Canonical’s focus on delivering secure, trusted images to customers through image provenance and ongoing maintenance and updates.

As a Docker Verified Publisher, Canonical joins a list of over 200 ISVs using Docker Hub to distribute their software to developers where they get their work done. When Docker Hub users see the Docker “Verified Publisher” mark, they know that the containers they are pulling come straight from, and are supported by, the ISV publisher.

With 13 billion container image pulls per month from nearly 8 million repositories by over 11 million developers, Docker Hub is the industry’s leading container registry. Docker Hub delivers developers the largest breadth and depth of container images and plays a central role in building and sharing cloud-native applications. Docker Verified Publishers like Canonical ensure that the millions of Docker developers can easily and confidently find images and get to the business of app innovation.

As part of this agreement, Docker and Canonical will also collaborate in the coming months on the Ubuntu versions of Docker Official Images to extend the quality of these already trusted and widely used images.

You can get more information about Canonical’s announcement here, or browse the Canonical LTS offerings on Docker Hub. Software publishers and ISVs interested in joining the Docker Verified Publisher program can get more information by filling out this form.

Docker Captain Take 5 – Ajeet Singh Raina

Docker Captains are select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others. Today, we’re introducing “Docker Captains Take 5”, a regular blog series where we get a closer look at the Docker experts who share their knowledge online and offline around the world. A different Captain will be featured each time, and we will ask them the same broad set of questions, ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). To kick us off, we’re interviewing Ajeet Singh Raina, who has been a Docker Captain since 2016 and is a DevRel Manager at Redis Labs. He is based in Bangalore, India.

How/when did you first discover Docker?

It was the year 2013 when I watched Solomon Hykes for the first time presenting “The Future of Linux Containers” at PyCon in Santa Clara. This video inspired me to write my first blog post on Docker and the rest is history.

What is your favorite Docker command?

The docker buildx CLI is one of my favorite commands. It lets you build and run multi-architecture Docker images with a single one-line command:

$ docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 .

I frequently use this tool to build Docker images for my tiny $99 NVIDIA Jetson Nano board as well as Raspberry Pi.
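
If you want to try this end to end, here is a minimal sketch of the flow. The builder name and image repository below are placeholders, and the push step assumes you are logged in to Docker Hub:

# Create and select a buildx builder that can target multiple platforms
$ docker buildx create --name multiarch-builder --use

# Build for several architectures and push the resulting multi-arch image
# (replace <your-docker-id>/hello-multiarch with your own repository)
$ docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t <your-docker-id>/hello-multiarch:latest --push .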

What is your top tip you think other people don’t know for working with Docker?

If you’re looking for a way to automate Docker container base image updates, Watchtower is a promising tool. Watchtower monitors running containers and watches for changes to the images those containers were originally started from. Whenever an image changes, the tool automatically restarts the container using the new image. Cool, isn’t it?
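
The interview doesn’t include a command, but a minimal sketch of the usual way to run Watchtower looks like the following. It assumes the commonly published containrrr/watchtower image, and it needs access to the Docker socket so it can watch and restart your other containers:

# Run Watchtower in the background with access to the local Docker engine
$ docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower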

What’s the coolest Docker demo you have done/seen?

Early this year, I ran a Kubernetes 101 workshop for almost 4 hours at one of the Docker Bangalore Community Meetup events at SAP Labs, India, in front of an audience of more than 550 people. It was an amazing experience going LIVE and covering the overall KubeLabs tutorials running on the Play with Kubernetes playground.

What have you worked on in the past 6 months that you’re particularly proud of?

One of the most exciting projects I have worked on in the last 6 months is “Pico”. The Pico project is all about object detection and text analytics using Docker, Apache Kafka, IoT, and Amazon Rekognition. Imagine being able to capture live video streams, identify objects using deep learning, and then trigger actions or notifications based on the identified objects, all using Docker containers. With Pico, you can set up and run a live video capture, analysis, and alerting solution prototype. This project excited dozens of Indian universities and gave me opportunities to travel and showcase it to larger communities.

The project is hosted in the ARM Software Developer GitHub repository.

What will be big news for Docker in the next year?

Docker Inc. announcing 10+ million Docker Hub repositories.

What is the biggest challenge that we as a community will need to tackle in 2021?

In 2021, the sustainability of community events despite the pandemic and lockdowns is going to be the biggest challenge for us.

What are your goals for the Docker community in the next year? 

As a Docker Captain as well as a community leader, here is my list of goals for 2021:

Grow Docker Bangalore Community members from 10k to 12k
Target 250+ blogs around Docker and Ecosystem in Collabnix by 2021
Conduct Joint Meetup with all other leading Docker communities across India
Take OSCONF (An Open Source Community Conference) – a conference dedicated to the Docker & Kubernetes community – to the international level

What talk would you most love to see at the next DockerCon?

Exciting use cases around emerging AI, Docker and IoT Edge Platforms

What is the technology that you’re most excited about and holds a lot of promise?

I’m excited about the emerging “no-code” development platforms. A no-code platform uses a visual development environment that lets non-programmers create apps by, for example, dragging and dropping application components to assemble a complete application. With no-code, you don’t need coding knowledge to create apps.

Rapid fire questions…

What new skill have you mastered during the pandemic?

Artificial Intelligence using Docker

Cats or Dogs?

Dogs

Salty, sour or sweet?

Sweet

Beach or mountains?

Beach

Your most often used emoji?

  

Docker Compose for Amazon ECS Now Available

Docker is pleased to announce that as of today the integration between Docker Compose and Amazon ECS has reached V1 and is now GA!

We started this work way back at the beginning of the year with our first step: moving the Compose specification into a community-run project. Then in July we announced how we were working together with AWS to make it easier to deploy Compose applications to ECS using the Docker command line. As of today, all Docker Desktop users have the stable ECS experience available to them, allowing developers to use docker compose commands with an ECS context to run their containers against ECS.

As part of this we want to thank the AWS team who have helped us make this happen: Carmen Puccio, David Killmon, Sravan Rengarajan, Uttara Sridhar, Massimo Re Ferre, Jonah Jones and David Duffey.

Getting started with Docker Compose & ECS

Whether you are an existing ECS user or a new starter, all you need to do is update to the latest Docker Desktop Community version (2.5.0.1 or greater), store your image on Docker Hub so you can deploy it (you can get started with Hub here), get yourself set up on AWS, and lastly create an ECS context using that account. You are then ready to use your Compose file to start running your applications in ECS.
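
For a concrete sense of what that looks like at the command line, here is a minimal sketch; the context name is a placeholder, and it assumes your AWS credentials and Compose file are already in place:

# Log in so ECS can pull your image from Docker Hub
$ docker login

# Create an ECS context backed by your AWS account, then switch to it
$ docker context create ecs myecscontext
$ docker context use myecscontext

# Bring your Compose application up on ECS, and tear it down when finished
$ docker compose up
$ docker compose down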

We have done a couple of blog posts and videos along with AWS to give you an idea of how to get started or use the ECS experience. 

Amazon’s GA announcement of the experience
Docker announcement / AWS announcement
Open sourcing the integration
Deploying WordPress to ECS
Amazon unboxing of the ECS experience
Docker Docs

If you have other questions about the experience or would like to give us feedback, drop us a message in the Compose CLI repo or in the #docker-ecs channel in our community Slack.

New in the Docker Compose ECS integration 

We have been adding new features to the ECS integration over the last few months, and we wanted to run you through some of the ones we are most excited about:

GPU support 

In the more recent versions of the ECS integration we have provided the ability to deploy to EC2 (rather than the default Fargate) so that developers can make use of unique instance types and features, like GPUs, within EC2.

To do this, all you need is to specify a GPU instance type as part of your Compose file and the Compose CLI will take care of the rest!

services:
  learn:
    image: itamarost/object-detection-app:latest-gpu
    command: python app.py
    ports:
      - target: 8000
        protocol: tcp
        x-aws-protocol: http
    deploy:
      resources:
        # devices:
        #   - capabilities: ["gpu"]
        reservations:
          memory: 30Gb
          generic_resources:
            - discrete_resource_spec:
                kind: gpus
                value: 1

EFS support

We heard early feedback from developers that when you are trying to move to the cloud you may not be ready to move to a managed service to persist your data, and may still want to use volumes with your application. To solve this we have added Elastic File System (EFS) volume support to the Compose CLI, allowing users to create volumes and use them as part of their Compose applications. Volumes are created with a Retain policy, so data won’t be deleted on application shut-down. If the same application (same project name) is deployed again, the file system will be re-attached to offer the same user experience developers are used to locally with docker-compose.

To do this I can either specify an existing file system that I have already created:

volumes:
  my-data:
    external: true
    name: fs-123abcd

Or I can create a new one from scratch by providing information about how I want it configured:

volumes:
  my-data:
    driver_opts:
      # Filesystem configuration
      backup_policy: ENABLED
      lifecycle_policy: AFTER_14_DAYS
      performance_mode: maxIO
      throughput_mode: provisioned
      provisioned_throughput: 1024

I can also manage these through the docker volume command, which lets me list my resources and remove them when I no longer need them.
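
As a rough sketch of that management flow (the context name and file system ID are placeholders, and this assumes the volume subcommands shown are available against your ECS context):

# List the EFS volumes the integration manages
$ docker --context myecscontext volume ls

# Remove a file system once it is no longer needed
$ docker --context myecscontext volume rm fs-123abcd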

Context creation improvements 

We have also been looking at how we can improve the context creation flow to make it simpler and more interactive, while still allowing power users to specify things up front if they know how they want to configure their context.

When you get started we now have 3 options for creating a new context: 

? Create a Docker context using:  [Use arrows to move, type to filter]
> An existing AWS profile
  A new AWS profile
  AWS environment variables

If you select an existing profile, we will list your available profiles to choose from and allow you to simply select the profile you want to have associated with this context. 

$ docker context create ecs test2
? Create a Docker context using: An existing AWS profile
? Select AWS Profile nondefault
Successfully created ecs context "test2"

$ docker context inspect test2
[
  {
    "Name": "test2",
    "Metadata": {
      "Description": "(eu-west-3)",
      "Type": "ecs"
    },
    "Endpoints": {
      "ecs": {
        "Profile": "nondefault"
      }
    }
  }
]

If you want to create a new profile, we will ask you for the credentials needed to do this as part of the creation flow and will save this profile for you:

? Create a Docker context using: A new AWS profile
? AWS Access Key ID fiasdsdkngjgwka
? Enter AWS Secret Access Key *******************
? Region eu-west-3
Saving to profile "test3"
Successfully created ecs context "test3"

$ docker context inspect test3
[
  {
    "Name": "test3",
    "Metadata": {
      "Description": "(eu-west-3)",
      "Type": "ecs"
    },
    "Endpoints": {
      "ecs": {
        "Profile": "test3"
      }
    }
  }
]

If you want to do this using your existing AWS environment variables, you can choose this option and we will create the context with a reference to those env vars, so we continue to respect them as you work with the context:

$ docker context create ecs test1
? Create a Docker context using: AWS environment variables
Successfully created ecs context "test1"
$ docker context inspect test1
[
  {
    "Name": "test1",
    "Metadata": {
      "Description": "credentials read from environment",
      "Type": "ecs"
    },
    "Endpoints": {
      "ecs": {
        "CredentialsFromEnv": true
      }
    }
  }
]

We hope this new, simplified way of getting started, along with the flags we have added to let you override parts of it, will help you get started with ECS even faster than before.

We are really excited about the new experience we have built with ECS. If you have any feedback on the experience or ideas for other backends for the Compose CLI, please let us know via our Public Roadmap.

Join our workshop “I Didn’t Know I Could Do That with Docker – AWS ECS Integration” with Docker’s Peter McKee and AWS’ Jonah Jones on Tuesday, November 24, 2020, at 10:00am PT / 1:00pm ET. Register here.


Rate Limiting by the Numbers

As a critical part of Docker’s transition into sustainability, we’ve been gradually rolling out limits on docker pulls to the heaviest users of Docker Hub. As we near the end of the implementation of the rate limits, we thought we’d share some of the facts and figures behind our effort. Our goal is to ensure that Docker becomes sustainable for the long term, while continuing to offer developers 100% free tools to build, share, and run their applications.

We announced this plan in August with an effective date of November 1. We also shared that “roughly 30% of all downloads on Hub come from only 1% of our anonymous users,” illustrated in this chart:

This chart shows the dramatic impact that a very small percentage of anonymous, free users has on all of Docker Hub. That excessive usage by just 1%–2% of our users not only results in an unsustainable model for Docker but also slows performance for the other 98%–99% of the 11.3 million developers, CI services, and other platforms using Docker Hub every month. Those developers rely upon us to save and share their own container images, as well as to pull images from Docker Verified Publishers and our own trusted library of Docker Official Images, amounting to more than 13.6 billion pulls per month.

Based on our goal of ensuring the vast majority of developers can remain productive, we designed limits of 100 or 200 pulls in a 6-hour window for anonymous and authenticated free users, respectively. In the context of a developer’s daily workflow, 100 pulls in 6 hours amounts to a docker pull every 3.6 minutes on average, for 6 consecutive hours. We considered this more than adequate for an individual developer, while other use cases involving high pull rates such as CI/CD pipelines or production container platforms can decrease their usage or subscribe to a paid plan.
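
One practical step for anyone near the anonymous limit is simply to authenticate their pulls, since logged-in free users get the higher allowance. In CI this is typically done non-interactively; the environment variable and Docker ID below are placeholders:

# Log in with a Docker ID (use an access token stored in a secret for CI pipelines)
$ echo "$DOCKERHUB_TOKEN" | docker login --username <your-docker-id> --password-stdin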

Over the course of a month, a single anonymous developer can (with the help of automation) make up to 12,000 docker pulls. By authenticating, that number increases to 24,000 docker pulls for free. As Docker container images vary in size from a few MB to well above 1 GB, focusing on pulls rather than size provides predictability to developers. They can pull images as they’re building applications, without worrying about their size but rather about their value.
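
If you want to check where your own usage stands, one way is to read the rate-limit headers the registry returns. The sketch below uses the ratelimitpreview/test repository that Docker has documented for this purpose, assumes curl and jq are installed, and the header names may evolve over time:

# Request an anonymous pull token scoped to the rate-limit preview repository
$ TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

# Inspect the ratelimit-limit and ratelimit-remaining response headers
$ curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit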

Based on these limits, we expected only 1.5% of daily unique IP addresses to be affected — roughly 40,000 IPs in total, out of more than 2 million IPs that pull from Docker Hub every day. The other 98.5% of IPs using Docker Hub can carry on unaffected, or more likely receive improved performance as usage by the heaviest users decreases.

As November 1st approached, we created a rollout plan that provided additional advance notice and decreased impact — even to developers we haven’t been able to reach through our emails or blog posts. We’ve put a few things in place to ease the transition for anyone affected:

Providing a grace period after November 1 prior to full enforcement for all usage, so only a small fraction of the heaviest users were limited early in our rollout;
Progressive rollout of enforcement across the affected population, to provide additional opportunities for communications to reach Docker developers and to minimize any inadvertent impact; and
Temporary time windows of full enforcement, to raise awareness of unknown reliance upon Docker Hub and to reach developers without Docker IDs whom we could not otherwise reach.

On Wednesday, November 18, we expect to complete our progressive rollout of the new limits of 100 pulls and 200 pulls per 6-hour window for anonymous and authenticated free users, respectively. At that point, anyone who has not yet encountered the limits can reasonably conclude that they are among the 98.5% of Docker Hub users whose docker pull usage is unaffected.

As we’ve progressed down this path toward creating a sustainable Docker, we’ve heard multiple times from developers that the temporary full-enforcement windows were valuable. They surfaced unknown reliance upon Docker Hub, as well as areas where our paying customers had not yet authenticated their usage. We’ve also worked with customers to identify problems that were unknowingly causing some of the massive downloads, like runaway processes downloading once every 3 seconds. Alongside this, we’ve created additional paid offerings to support large enterprises, ISVs, and service providers with needs like IP whitelisting or namespace whitelisting.

We greatly appreciate the trust placed in Docker by the entire software community, and we look forward to helping you continue to build the great applications of the future!

Apple Silicon M1 Chips and Docker

At Apple’s ‘One More Thing’ event on Nov 10th, Docker was excited to see the newly revealed Macs featuring Apple silicon and the M1 chip. At Docker we have been looking at the new hypervisor features and support that are required for Docker Desktop for Mac to continue to delight our millions of customers. We saw the first spotlight of these efforts at Apple WWDC in June, when Apple highlighted Docker Desktop on stage. Our goal at Docker is to provide the same great experience on the new Macs as we do today for our millions of users of Docker Desktop for Mac, and to make this transition as seamless as possible.

Building the right experience for our customers means getting quite a few things right before we push a release. Although Apple has released Rosetta 2 to help move applications over to the new M1 chips, this does not get us all the way with Docker Desktop. Under the hood of Docker Desktop, we run a virtual machine; to achieve this on Apple’s new hardware we need to move onto Apple’s new hypervisor framework. We also need to do all the plumbing that provides the core experience of Docker Desktop, allowing you to docker run from your terminal as you can today.

Along with this, we have technical dependencies upstream of us that need to make changes prior to making a new version of Docker Desktop GA. We rely on things like Go for the backend of Docker Desktop and Electron for the Docker Dashboard to view your Desktop content. We know these projects are hard at work getting ready for M1 chips, and we are watching them closely. 

We also want to make sure we get the quality of our release right, which means putting the right tooling in place for our team to support repeatable, reliable testing. To do this we need to complete work including setting up CI for M1 chips to supplement the 25 Mac Minis that we use for automated testing of Docker Desktop. Apple’s announcement means we can start to get these set up and put in place to start automating the testing of Desktop on M1 chips. 

Last but by no means least, we also need to review the experience in the product for docker build. We know that developers will look at doing more multi-architecture builds than before. We have support for multi-architecture builds today behind buildx, and we will need to work on how we are going to make this simpler as part of this release. We want developers to continue to work locally in Docker and have the same confidence that they can just build, share, and run their content as easily as they do now, regardless of the architecture.

If you are excited for the new Mac hardware and want to be kept up to date on the status of Docker on M1 chips, please sign up for a Docker ID to get our newsletter for the latest updates. We are also happy to let you know that the latest version of Docker Desktop runs on Big Sur. If you have any feedback, please let us know either by our issue tracker or our public roadmap!

Also, a big thank you to all of you who have engaged on the public roadmap, on Twitter, and in our issue trackers; it highlights how much you care about Docker for Mac. Your interest and energy are greatly appreciated! Keep providing feedback and check in with us as we work on this going forward.