Docker Series B: More Fuel To Help Dev Teams Get Ship Done

Today we’re excited and humbled to announce Docker’s Series B raise of $23 million to accelerate our mission of delivering tools development teams love to quickly take their ideas from code to cloud. The round was led by Tribe Capital with participation from our existing investors, Benchmark and Insight Partners. Arjun Sethi, Tribe co-founder and partner, will join the Docker board. 

This would not have been possible without the Docker team: Thank you for giving each other your best, every day, despite the disruptions of the refocusing, the pandemic, the overnight switch to work-from-home, and so much more. We also thank our developer community of users, contributors, customers, partners, and Docker Captains – your enthusiastic engagement throughout this past year was invaluable.

Hit Refresh

Tribe sees in Docker what we saw in November 2019 when we refocused the company: the opportunity to build on the bottom-up developer love of the Docker experience and provide a collaborative app development platform for development teams to accelerate getting their ideas from code to cloud. And that’s just what we did.

Key inputs to Tribe’s investment decision were our results over the last year, which included attracting 80,000 developer participants to DockerCon 2020, adding 1.8 million new registered developers for a total of more than 7.3 million, and growing our annual recurring revenue (ARR) 170% year-over-year. From their own “Magic 8 Ball” analysis of Docker, Tribe concluded that Docker has “a brand halo with a peer set you can count on one hand among developers. Such an asset is rare and hard-earned. By doubling down on what earned that place in developers’ hearts, Docker seeks to further entrench and evangelize that brand.” It’s an opportunity to create an “N-of-1” company, like Tribe portfolio companies Carta and Slack.

The Road Ahead

Financings are milestones in company-building, not the destination. This raise will accelerate the build-out of our collaborative app development platform for development teams, so they can spend more time building and sharing applications that impact their organizations. Specifically:

Collaboration. The complexity of microservices-based app development and the new “virtual-first” nature of development teams increases friction when collaborating, which in turn slows shipping. To address this, we’re focusing on helping team members easily and quickly share with each other their in-process work, get better visibility into colleagues’ output and pipeline state, and benefit from the intermediate build and task results of others.

Content. Microservices’ modularity, that is, the ability for development teams to compose their app from a mix of custom code and standard components such as databases and base images, can significantly compress the time it takes to get an app working. Of course, any development team using third-party components must trust its software supply chain. This is the root of the popularity of Docker Official Images and Docker Verified Publisher images, as well as of our integrated secure supply chain tools with Snyk and JFrog. With this raise, you’ll see us expand the breadth and depth of content from both open source projects and ISVs, and deliver additional tools to help development teams increase software supply chain confidence, security, and visibility.

Ecosystem. Docker already simplifies app development for teams while providing them choice. Whether it’s interoperability with popular container orchestrators (e.g., Kubernetes, AWS ECS, Azure ACI, Swarm), 100% compatibility with the major container runtimes (e.g., Docker Engine, containerd), or simultaneous builds of multi-architecture apps (e.g., x86, ARM), Docker provides it without any additional burden on the development team. To enable teams to continue to benefit from partner innovations throughout the container ecosystem, look for us to continue driving open standards together with ecosystem partners (e.g., OCI, the Compose spec, Notary v2). In addition, we will deliver more APIs and SDKs to enable development teams and partners to integrate their tools much more quickly and easily.

The above reflects our continued focus on helping development teams by simplifying app development complexities while expanding choice, enabling them to get their ideas from code to cloud as quickly as possible. In parallel, we remain committed to growing a sustainable business that continues to provide a 100% free experience for developers while meeting the demands of professional development teams with additional subscription services. Doing so allows us to scale the Docker experience to the next 10 million developers, making our ecosystem that much stronger.

Join Us!

We couldn’t be more excited to continue our journey and double down on the successes of the past year. As always, you – the Docker developer community – play a critical role in defining and contributing to our direction, and we invite you to join us, whether it’s participating in DockerCon 2021, contributing to the Docker product roadmap, becoming a Docker Verified Publisher, or simply trying Docker for free.

Let’s together go and get ship done!
The post Docker Series B: More Fuel To Help Dev Teams Get Ship Done appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker Compose: From Local to Amazon ECS

By using cloud platforms, we can take advantage of different resource configurations and compute capacities. However, deploying containerized applications on cloud platforms can be quite challenging, especially for users new to a given platform. Because each platform exposes its own APIs, orchestrating the deployment of a containerized application can become a hassle.

Docker Compose is a very popular tool for managing containerized applications deployed on Docker hosts. Its popularity is perhaps due to the simplicity of defining an application and its components in a Compose file, and the compact set of commands for managing its deployment.

Since cloud platforms for containers have emerged, the ability to deploy a Compose application to them has become one of the most requested features among developers who use Docker Compose for local development.

In this blog post, we discuss how to use Docker Compose to deploy containerized applications to Amazon ECS. We aim to show that the transition from deploying to a local Docker environment to deploying to Amazon ECS is effortless: the application is managed in the same way in both environments.

Requirements

To follow the examples in this blog post, the following tools need to be installed locally:

- Windows and macOS: install Docker Desktop
- Linux: install Docker Engine and Compose CLI
- To deploy to Amazon ECS: an AWS account

For deploying a Compose file to Amazon ECS, we rely on the new Docker Compose implementation embedded into the Docker CLI binary. Therefore, we are going to run docker compose commands instead of docker-compose. For local deployments, both implementations of Docker Compose should work. If you find a missing feature that you use, report it on the issue tracker.
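If you are unsure which implementation a given setup uses, both expose a version command. The following is a quick sanity check (the exact version strings depend on your installation):

```shell
# The new Compose implementation is a subcommand of the Docker CLI:
docker compose version

# The classic Python implementation is a standalone binary:
docker-compose version
```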

Throughout this blog post, we discuss how to:

- Build and ship a Compose application. We exercise how to run an application defined in a Compose file locally, and how to build and ship its images to Docker Hub to make them accessible from anywhere.
- Create an ECS context to target Amazon ECS.
- Run the Compose application on Amazon ECS.

Build and Ship a Compose application

Let us take an example application with the following structure:

$ tree myproject/
myproject/
├── backend
│   ├── Dockerfile
│   ├── main.py
│   └── requirements.txt
├── compose.yaml
└── frontend
    ├── Dockerfile
    └── nginx.conf

2 directories, 6 files

The content of the files can be found here. The Compose file defines only two services, as follows:

$ cat compose.yaml
services:
  frontend:
    build: frontend
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    build: backend

Deploying this file locally on a Docker engine is quite straightforward:

$ docker compose up -d
[+] Running 3/3
⠿ Network "myproject_default"     Created                     0.5s
⠿ Container myproject_backend_1   Started                     0.7s
⠿ Container myproject_frontend_1  Started                     1.4s

Check that the application is running locally:

$ docker ps
CONTAINER ID   IMAGE                COMMAND                    CREATED         STATUS         PORTS                NAMES
eec2dd88fd67   myproject_frontend   "/docker-entrypoint.…"     4 seconds ago   Up 3 seconds   0.0.0.0:80->80/tcp   myproject_frontend_1
2c64e62b933b   myproject_backend    "python3 /app/main.py"     4 seconds ago   Up 3 seconds                        myproject_backend_1

Query the frontend:

$ curl localhost:80

          ##         .
    ## ## ##        ==
 ## ## ## ## ##    ===
/"""""""""""""""""\___/ ===
{                       /  ===-
\______ O           __/
 \    \         __/
  \____\_______/

Hello from Docker!

To remove the application:

$ docker compose down
[+] Running 3/3
⠿ Container myproject_frontend_1  Removed                      0.5s
⠿ Container myproject_backend_1   Removed                     10.3s
⠿ Network "myproject_default"     Removed                      0.4s

In order to deploy this application on ECS, we need to have the images for the application frontend and backend stored in a public image registry such as Docker Hub. This enables the images to be pulled from anywhere.
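Setting image names in the Compose file, as shown next, is the smoothest route, but the same result can also be achieved manually with docker tag and docker push. A sketch, where myhubuser is a placeholder for your own Docker Hub account:

```shell
# Tag the locally built images with a Docker Hub repository name.
# "myhubuser" is a placeholder; substitute your own account.
docker tag myproject_frontend myhubuser/starter-front
docker tag myproject_backend myhubuser/starter-back

# Push them so they can be pulled from anywhere.
docker push myhubuser/starter-front
docker push myhubuser/starter-back
```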

To upload the images to Docker Hub, we can set the image names in the compose file as follows:

$ cat compose.yaml
services:
  frontend:
    image: myhubuser/starter-front
    build: frontend
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    image: myhubuser/starter-back
    build: backend

Build the images with Docker Compose:

$ docker compose build
[+] Building 1.2s (16/16) FINISHED
 => [myhubuser/starter-front internal] load build definition from Dockerfile   0.0s
 => => transferring dockerfile: 31B                                            0.0s
 => [myhubuser/starter-back internal] load build definition from Dockerfile    0.0s
…

In the build output, we can see that the images have been named and tagged according to the image field from the Compose file.

Before pushing the images to Docker Hub, make sure you are logged in:

$ docker login
…
Login Succeeded

Push the images:

$ docker compose push
[+] Running 0/16
⠧ Pushing frontend: f009a503aca1 Pushing [===========================================>      ]   2.7s
…

The images should now be stored in Docker Hub.

Create an ECS Docker Context

To make Docker Compose target the Amazon ECS platform, we first need to create a Docker context of the ECS type. A Docker context is a mechanism that allows redirecting commands to different Docker hosts or cloud platforms.

We assume at this point that we have AWS credentials set up in the local environment for authenticating with the ECS platform. 

To create an ECS context, run the following command:

$ docker context create ecs myecscontext
? Create a Docker context using:  [Use arrows to move, type to filter]
  An existing AWS profile
  AWS secret and token credentials
> AWS environment variables

We are prompted to choose between three context setups, depending on how familiar we are with AWS credentials and AWS tooling. To skip the details of AWS credential setup, we choose the option of using environment variables.

$ docker context create ecs myecscontext
? Create a Docker context using: AWS environment variables
Successfully created ecs context "myecscontext"

This requires the AWS_ACCESS_KEY and AWS_SECRET_KEY variables to be set in the local environment when running Docker commands that target Amazon ECS.

The current context in use is marked by an asterisk (*) in the context listing output:

$ docker context ls
NAME             TYPE   DESCRIPTION                               DOCKER ENDPOINT
default *        moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
myecscontext     ecs    credentials read from environment

To make all subsequent commands target Amazon ECS, make the newly created ECS context the one in use by running:

$ docker context use myecscontext
myecscontext
$ docker context ls
NAME             TYPE   DESCRIPTION                               DOCKER ENDPOINT
default          moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
myecscontext *   ecs    credentials read from environment

Run the Compose application on Amazon ECS

An alternative to setting it as the current context is to pass the context flag on every command targeting ECS.
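For example, the Docker CLI's global --context flag lets a single command target ECS while the current context stays local. A sketch, using the myecscontext context created above:

```shell
# Target ECS for one command only, without switching contexts:
docker --context myecscontext compose ps

# Other commands keep using the default local engine:
docker --context default ps
```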

WARNING: Check in advance the costs that the ECS deployment may incur for the two ECS services, load balancing (ALB), Cloud Map (DNS resolution), etc.

For the following commands, we keep the ECS context as the current context in use. Before running commands on ECS, make sure the AWS account credentials grant access to manage resources for the application, as detailed in the documentation.

We can now run a command to check that we can successfully access ECS.

$ AWS_ACCESS_KEY="*****" AWS_SECRET_KEY="******" docker compose ls
NAME                                STATUS

Export the AWS credentials to avoid setting them for every command.

$ export AWS_ACCESS_KEY="*****"
$ export AWS_SECRET_KEY="******"

To deploy the sample application to ECS, we can run the same command as in the local deployment:

$ docker compose up
WARNING services.build: unsupported attribute
WARNING services.build: unsupported attribute
[+] Running 18/18
⠿ myproject                      CreateComplete   206.0s
⠿ FrontendTCP80TargetGroup       CreateComplete     0.0s
⠿ CloudMap                       CreateComplete    46.0s
⠿ FrontendTaskExecutionRole      CreateComplete    19.0s
⠿ Cluster                        CreateComplete     5.0s
⠿ DefaultNetwork                 CreateComplete     5.0s
⠿ BackendTaskExecutionRole       CreateComplete    19.0s
⠿ LogGroup                       CreateComplete     1.0s
⠿ LoadBalancer                   CreateComplete   122.0s
⠿ Default80Ingress               CreateComplete     1.0s
⠿ DefaultNetworkIngress          CreateComplete     0.0s
⠿ BackendTaskDefinition          CreateComplete     2.0s
⠿ FrontendTaskDefinition         CreateComplete     3.0s
⠿ FrontendServiceDiscoveryEntry  CreateComplete     1.0s
⠿ BackendServiceDiscoveryEntry   CreateComplete     2.0s
⠿ BackendService                 CreateComplete    65.0s
⠿ FrontendTCP80Listener          CreateComplete     3.0s
⠿ FrontendService                CreateComplete    66.0s

Docker Compose converts the Compose file to a CloudFormation template defining a set of AWS resources. Details on the resource mapping can be found in the documentation. To review the CloudFormation template generated, we can run the command:

$ docker compose convert
WARNING services.build: unsupported attribute
WARNING services.build: unsupported attribute
AWSTemplateFormatVersion: 2010-09-09
Resources:
  BackendService:
    Properties:
      Cluster:
        Fn::GetAtt:
        - Cluster
        - Arn
      DeploymentConfiguration:
        MaximumPercent: 200
        MinimumHealthyPercent: 100
…

To check the state of the services, we can run the command:

$ docker compose ps
NAME                                              SERVICE    STATUS    PORTS
task/myproject/8c142dea1282499c83050b4d3e689566   backend    Running
task/myproject/a608f6df616e4345b92a3d596991652d   frontend   Running   mypro-LoadB-1ROWIHLNOG5RZ-1172432386.eu-west-3.elb.amazonaws.com:80->80/http

As with the local run, we can query the frontend of the application:

$ curl mypro-LoadB-1ROWIHLNOG5RZ-1172432386.eu-west-3.elb.amazonaws.com:80

          ##         .
    ## ## ##        ==
 ## ## ## ## ##    ===
/"""""""""""""""""\___/ ===
{                       /  ===-
\______ O           __/
 \    \         __/
  \____\_______/

Hello from Docker!

We can retrieve logs from the ECS containers by running the compose logs command:

$ docker compose logs
backend   |  * Serving Flask app "main" (lazy loading)
backend   |  * Environment: production
backend   |    WARNING: This is a development server. Do not use it in a production deployment.
backend   |    Use a production WSGI server instead.
…
frontend  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
frontend  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
frontend  | /docker-entrypoint.sh: Configuration complete; ready for start up
frontend  | 172.31.22.98 - - [02/Mar/2021:08:35:27 +0000] "GET / HTTP/1.1" 200 212 "-" "ELB-HealthChecker/2.0" "-"
backend   | 172.31.0.11 - - [02/Mar/2021 08:35:27] "GET / HTTP/1.0" 200 -
backend   | 172.31.0.11 - - [02/Mar/2021 08:35:57] "GET / HTTP/1.0" 200 -
frontend  | 172.31.22.98 - - [02/Mar/2021:08:35:57 +0000] "GET / HTTP/1.1" 200 212 "-" "curl/7.75.0" "94.239.119.152"
frontend  | 172.31.22.98 - - [02/Mar/2021:08:35:57 +0000] "GET / HTTP/1.1" 200 212 "-" "ELB-HealthChecker/2.0" "-"

To terminate the Compose application and release AWS resources, run:

$ docker compose down
[+] Running 2/4
⠴ myproject              DeleteInProgress User Initiated    8.5s
⠿ DefaultNetworkIngress  DeleteComplete                     1.0s
⠿ Default80Ingress       DeleteComplete                     1.0s
⠴ FrontendService        DeleteInProgress                   7.5s
…

The Docker documentation provides several examples of Compose files, supported features, details on how to deploy and how to update a Compose application running in ECS, etc.

The following features are discussed in detail:

- use of private images
- service discovery
- volumes and secrets definition
- AWS-specific service properties for auto-scaling, IAM roles, and load balancing
- use of existing AWS resources

Summary

We have covered the transition from the local deployment of a Compose application to its deployment on Amazon ECS. We used a minimal, generic example to demonstrate the cloud capability of Docker Compose. For a better understanding of how to update the Compose file and use specific AWS features, the documentation provides much more detail.

Resources:

- Docker Compose embedded in the Docker CLI: https://github.com/docker/compose-cli/blob/main/INSTALL.md
- Compose to ECS support: https://docs.docker.com/cloud/ecs-integration/
- ECS-specific Compose examples: https://docs.docker.com/cloud/ecs-compose-examples/
- Deploying Docker containers to ECS: https://docs.docker.com/cloud/ecs-integration/
- Sample used to demonstrate Compose commands: https://github.com/aiordache/demos/tree/master/ecsblog-demo

Kubernetes: what are Endpoints

rtfm.co.ua – Usually, we don’t see Endpoints objects when using Kubernetes Services, as they are working under the hood, similarly to ReplicaSets which are “hidden” behind Kubernetes Deployments. So, Service is a…
Quelle: news.kubernauts.io

GitOps with Flux and Helm Operator

itnext.io – These are some notes I took for myself because I’ve done this exercise over and over again over the last couple of years. The idea is to use GitOps to deploy services to a Kubernetes cluster. The…
Quelle: news.kubernauts.io

Guest Post: Calling the Docker CLI from Python with Python-on-whales

Image: Alice Lang, alicelang-creations@outlook.fr

At Docker, we are incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. The following is a guest post by Docker community member Gabriel de Marmiesse. Are you working on something awesome with Docker? Send your contributions to William Quiviger (@william) on the Docker Community Slack and we might feature your work!   

The most common way to call and control Docker is by using the command line.

With the increased usage of Docker, users want to call Docker from programming languages other than shell. One popular way to use Docker from Python has been to use docker-py. This library has had so much success that even docker-compose is written in Python, and leverages docker-py.

The goal of docker-py, though, is not to replicate the Docker client (written in Golang), but to talk to the Docker Engine HTTP API. The Docker client is extremely complex and is hard to duplicate in another language. Because of this, a lot of features that are in the Docker client are not available in docker-py. Users would sometimes get frustrated because docker-py did not behave exactly like the CLI.

Today, we’re presenting a new project built by Gabriel de Marmiesse from the Docker community: Python-on-whales. The goal of this project is to have a 1-to-1 mapping between the Docker CLI and the Python library. We do this by communicating with the Docker CLI instead of calling the Docker Engine HTTP API directly.

If you need to call the Docker command line, use Python-on-whales. And if you need to call the Docker engine directly, use docker-py.

In this post, we’ll take a look at some of the features that are not available in docker-py but are available in Python-on-whales:

- Building with Docker buildx
- Deploying to Swarm with docker stack
- Deploying to the local Engine with Compose

Start by downloading Python-on-whales with 

pip install python-on-whales

and you’re ready to rock!

Docker Buildx

Here we build a Docker image. Python-on-whales uses buildx by default and gives you the output in real time.

>>> from python_on_whales import docker
>>> my_image = docker.build(".", tags="some_name")
[+] Building 1.6s (17/17) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/python:3.6 1.4s
=> [python_dependencies 1/5] FROM docker.io/library/python:3.6@sha256:293 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 72.86kB 0.0s
=> CACHED [python_dependencies 2/5] RUN pip install typeguard pydantic re 0.0s
=> CACHED [python_dependencies 3/5] COPY tests/test-requirements.txt /tmp 0.0s
=> CACHED [python_dependencies 4/5] COPY requirements.txt /tmp/ 0.0s
=> CACHED [python_dependencies 5/5] RUN pip install -r /tmp/test-requirem 0.0s
=> CACHED [tests_ubuntu_install_without_buildx 1/7] RUN apt-get update && 0.0s
=> CACHED [tests_ubuntu_install_without_buildx 2/7] RUN curl -fsSL https: 0.0s
=> CACHED [tests_ubuntu_install_without_buildx 3/7] RUN add-apt-repositor 0.0s
=> CACHED [tests_ubuntu_install_without_buildx 4/7] RUN apt-get update & 0.0s
=> CACHED [tests_ubuntu_install_without_buildx 5/7] WORKDIR /python-on-wh 0.0s
=> CACHED [tests_ubuntu_install_without_buildx 6/7] COPY . . 0.0s
=> CACHED [tests_ubuntu_install_without_buildx 7/7] RUN pip install -e . 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:e1c2382d515b097ebdac4ed189012ca3b34ab6be65ba0c 0.0s
=> => naming to docker.io/library/some_image_name

Docker Stacks

Here we deploy a simple Swarmpit stack on a local Swarm. You get a Stack object that has several methods: remove(), services(), ps().

>>> from python_on_whales import docker
>>> docker.swarm.init()
>>> swarmpit_stack = docker.stack.deploy("swarmpit", compose_files=["./docker-compose.yml"])
Creating network swarmpit_net
Creating service swarmpit_influxdb
Creating service swarmpit_agent
Creating service swarmpit_app
Creating service swarmpit_db
>>> swarmpit_stack.services()
[<python_on_whales.components.service.Service object at 0x7f9be5058d60>,
<python_on_whales.components.service.Service object at 0x7f9be506d0d0>,
<python_on_whales.components.service.Service object at 0x7f9be506d400>,
<python_on_whales.components.service.Service object at 0x7f9be506d730>]
>>> swarmpit_stack.remove()

Docker Compose

Here we show how we can run a Docker Compose application with Python-on-whales. Note that, behind the scenes, it uses the new version of Compose written in Golang. This version of Compose is still experimental. Take appropriate precautions.

$ git clone https://github.com/dockersamples/example-voting-app.git
$ cd example-voting-app
$ python
>>> from python_on_whales import docker
>>> docker.compose.up(detach=True)
Network "example-voting-app_back-tier" Creating
Network "example-voting-app_back-tier" Created
Network "example-voting-app_front-tier" Creating
Network "example-voting-app_front-tier" Created
example-voting-app_redis_1 Creating
example-voting-app_db_1 Creating
example-voting-app_db_1 Created
example-voting-app_result_1 Creating
example-voting-app_redis_1 Created
example-voting-app_worker_1 Creating
example-voting-app_vote_1 Creating
example-voting-app_worker_1 Created
example-voting-app_result_1 Created
example-voting-app_vote_1 Created
>>> for container in docker.compose.ps():
...     print(container.name, container.state.status)
example-voting-app_vote_1 running
example-voting-app_worker_1 running
example-voting-app_result_1 running
example-voting-app_redis_1 running
example-voting-app_db_1 running
>>> docker.compose.down()
>>> print(docker.compose.ps())
[]

Bonus section: Docker object attributes as Python attributes

All information that you can access with docker inspect is available as Python attributes:

>>> from python_on_whales import docker
>>> my_container = docker.run("ubuntu", ["sleep", "infinity"], detach=True)
>>> my_container.state.started_at
datetime.datetime(2021, 2, 18, 13, 55, 44, 358235, tzinfo=datetime.timezone.utc)
>>> my_container.state.running
True
>>> my_container.kill()
>>> my_container.remove()

>>> my_image = docker.image.inspect("ubuntu")
>>> print(my_image.config.cmd)
['/bin/bash']

What’s next for Python-on-whales?

We’re currently improving the integration of Python-on-whales with the new Compose in the Docker CLI (currently beta).

You can consider that Python-on-whales is in beta. Some small API changes are still possible. 

We encourage the community to try it out and give feedback in the issues!

To learn more about Python-on-whales:

- Documentation
- GitHub repository