New Filesharing Implementation in Docker Desktop Windows Improves Developer Inner Loop UX


A common developer workflow when using frameworks like Symfony or React is to edit the source code using a Windows IDE while running the app itself in a Docker container. The source is shared between the host and the container with a command like the following:
$ docker run -v C:\Users\me:/code -p 8080:8080 my-symfony-app
This allows the developer to edit the source code, save the changes and immediately see the results in their browser. This is where file sharing performance becomes critical.
The latest Edge release of Docker Desktop for Windows has a completely new filesharing implementation that uses Filesystem in Userspace (FUSE) instead of Samba. The new implementation:

uses caching to (for example) reduce page load time in Symfony by up to 60%;
supports Linux inotify events, triggering automatic recompilation / reload when the source code is changed;
is independent of how you authenticate to Windows: smartcard and Azure AD logins work fine;
always works irrespective of whether your VPN is connected or disconnected;
reduces the amount of code running as Administrator.

Your feedback needed!
This improvement is available today in the Edge release and will roll out to the stable channel later once we’ve had enough positive feedback. Please download it, give it a try and let us know how it goes. If you discover any problems, please report them on GitHub and make sure you fill in descriptions and reproduction steps so that we can quickly investigate.

Big performance improvements
Performance is vital when application source code is being shared between the host and a container. For example when a developer uses the Symfony PHP framework, edits the source code and then reloads the page in the browser, the web-server in the container must re-read many PHP files stored on the host. This must be fast.
The following graph shows the time taken to load a page of a simple Symfony demo app in three configurations:

Previous version: this is the implementation in earlier versions of Docker Desktop
Docker Desktop Edge: this is the new (faster!) implementation
In-container: the files are not shared from the host at all, instead they are stored in the container to show the upper limit on possible future performance.

The two bars on the left hand side show the latency (in seconds) using an older version of Docker Desktop. Note that the second fetch is only slightly better than the first, suggesting that the effect of caching is small.
The two bars on the right hand side show the latency when the files are not shared at all, but are stored entirely inside the VM. This is the upper limit on performance if the volume sharing system were perfect and had zero overheads.
The two bars in the middle show the latency when the files are shared with the new system in Docker Desktop Edge. The initial (uncached) fetch is already better than with the previous Desktop version, but the second (cached) fetch is 60% faster!
Additional enhancements
As well as big performance improvements, the new implementation has the following additional benefits:

The new version can’t conflict with organisation-wide security policies as we don’t need to use Administrator privileges to share the drive and create a firewall exception for port 445.
The new version doesn’t require the user to enter their domain credentials. Not only is this fundamentally more secure, but it avoids the user having to re-enter their credentials every time they change their password. Many organisations require regular password changes, which means the user needed to refresh the credentials frequently.
The new version supports users who authenticate via a smartcard, or AzureAD or any other method. Previously we could only support users who login with a username and password.
The new version is immune to a class of problems caused by enterprise VPN clients and endpoint security software clashing with the Hyper-V network adapter.

Stay tuned for a follow up post that deep dives into the new Docker Desktop filesharing implementation using FUSE.
The post New Filesharing Implementation in Docker Desktop Windows Improves Developer Inner Loop UX appeared first on Docker Blog.

Managing the TICK Stack with Docker App

Docker Application eases the packaging and the distribution of a Docker Compose application. The TICK stack – Telegraf, InfluxDB, Chronograf, and Kapacitor – is a good candidate to illustrate how this actually works. In this blog, I’ll show you how to deploy the TICK stack as a Docker App.
About the TICK Stack
This application stack is mainly used to handle time-series data. That makes it a great choice for IoT projects, where devices send data (temperature, weather indicators, water level, etc.) on a regular basis.
Its name comes from its components:
– Telegraf
– InfluxDB
– Chronograf
– Kapacitor
The schema below illustrates the overall architecture, and outlines the role of each component.

Data is sent to Telegraf and stored in an InfluxDB database. Chronograf can query the database through a web interface. Kapacitor can process, monitor, and raise alerts based on the data.
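To make the data flow concrete, here is a minimal Python sketch (not from the original post) of building an InfluxDB line-protocol point, the format Telegraf’s HTTP listener accepts and forwards to InfluxDB; the measurement name temp, field value and tag room are illustrative:

```python
def line_protocol(measurement, value, ts_ns, tags=None):
    """Build an InfluxDB line-protocol string: measurement[,tags] field timestamp."""
    tag_str = "".join(f",{k}={v}" for k, v in (tags or {}).items())
    return f"{measurement}{tag_str} value={value} {ts_ns}"

# A point Telegraf could ingest over HTTP and store in InfluxDB:
point = line_protocol("temp", 21.5, 1574680000000000000, {"room": "lab"})
print(point)  # temp,room=lab value=21.5 1574680000000000000
```

The same string could be POSTed to the Telegraf endpoint exposed on port 8186, as the shell loop later in this article does.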
Defining the Application in a Compose File
The tick.yml file below defines the four components of the stack and the way they communicate with each other:
version: '3.7'
services:
  telegraf:
    image: telegraf
    configs:
    - source: telegraf-conf
      target: /etc/telegraf/telegraf.conf
    ports:
    - 8186:8186
  influxdb:
    image: influxdb
  chronograf:
    image: chronograf
    ports:
    - 8888:8888
    command: ["chronograf", "--influxdb-url=http://influxdb:8086"]
  kapacitor:
    image: kapacitor
    environment:
    - KAPACITOR_INFLUXDB_0_URLS_0=http://influxdb:8086
configs:
  telegraf-conf:
    file: ./telegraf.conf
Telegraf’s configuration is provided through a Docker Config object, created out of the following telegraf.conf file:
[agent]
  interval = "5s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "5s"
  flush_jitter = "0s"
  precision = ""
  debug = false
  quiet = false
  logfile = ""
  hostname = "$HOSTNAME"
  omit_hostname = false

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "test"
  username = ""
  password = ""
  retention_policy = ""
  write_consistency = "any"
  timeout = "5s"

[[inputs.http_listener]]
  service_address = ":8186"

[[inputs.cpu]]
  # Whether to report per-cpu stats or not
  percpu = true
  # Whether to report total system cpu stats or not
  totalcpu = true
This configuration:

Defines an agent that gathers host CPU metrics on a regular basis.
Defines an additional input method allowing Telegraf to receive data over HTTP.
Specifies the database in which the collected/received data will be stored.

Deploying the application from Docker Desktop
Now we will deploy the application using Swarm first, and then using Kubernetes to illustrate some of the differences.
Using Swarm to Deploy the TICK Stack
First we setup a local Swarm using the following command:
$ docker swarm init
Then we deploy the TICK stack as a Docker Stack:
$ docker stack deploy tick -c tick.yml
Creating network tick_default
Creating config tick_telegraf-conf
Creating service tick_telegraf
Creating service tick_influxdb
Creating service tick_chronograf
Creating service tick_kapacitor
This creates:

A network for communication between the application containers 
A Config object containing the Telegraf configuration we defined in telegraf.conf
The 4 services composing the TICK stack

It only takes a couple of seconds before the application is up and running. Now we can verify the status of each service.
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
74zkf54ruztg        tick_chronograf     replicated          1/1                 chronograf:latest   *:8888->8888/tcp
y97hcx3yyjx6        tick_influxdb       replicated          1/1                 influxdb:latest
fm4uckqlvhvt        tick_kapacitor      replicated          1/1                 kapacitor:latest
12zl0sa678xh        tick_telegraf       replicated          1/1                 telegraf:latest     *:8186->8186/tcp
Only Telegraf and Chronograf are exposed to the outside world:

Telegraf is used to ingest data through port 8186
Chronograf is used to visualize the data and is available through a web interface on local port 8888

To query data from the Chronograf interface, we first need to send some data to Telegraf.
Sending Test data
First we will use the lucj/genx Docker image to generate data following a cosine distribution (a couple of other simple distributions are available).
$ docker run lucj/genx
Usage of /genx:
  -duration string
        duration of the generation (default "1d")
  -first float
        first value for linear type
  -last float
        last value for linear type (default 1)
  -max float
        max value for cos type (default 25)
  -min float
        min value for cos type (default 10)
  -period string
        period for cos type (default "1d")
  -step string
        step / sampling period (default "1h")
  -type string
        type of curve (default "cos")
We will generate three days of data, with a one day period, min/max values of 10/25 and a sampling step of one hour; that will be enough for our tests.
$ docker run lucj/genx:0.1 -type cos -duration 3d -min 10 -max 25 -step 1h > /tmp/data
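For readers who prefer not to pull the image, the same shape of data can be sketched in a few lines of Python; this is a stand-in for lucj/genx, with the same min/max/period/step parameters assumed:

```python
import math
import time

def gen_cos(days=3, period_h=24, lo=10.0, hi=25.0, step_h=1, start=None):
    """Yield (unix_ts, value) samples of a cosine oscillating between lo and hi."""
    if start is None:
        start = int(time.time())
    mid, amp = (hi + lo) / 2, (hi - lo) / 2
    for h in range(0, days * 24 + 1, step_h):
        value = mid + amp * math.cos(2 * math.pi * h / period_h)
        yield start + h * 3600, round(value, 2)

# 73 hourly samples over three days, oscillating between 10 and 25
samples = list(gen_cos())
```

Each (timestamp, value) pair matches the two whitespace-separated columns the shell loop below expects.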
We then send the data to the Telegraf HTTP endpoint with the following commands:
endpoint=http://localhost:8186/write   # Telegraf's HTTP listener (host/port assumed for a local deployment)
cat /tmp/data | while read line; do
  ts="$(echo $line | cut -d' ' -f1)000000000"
  value=$(echo $line | cut -d' ' -f2)
  curl -i -XPOST $endpoint --data-binary "temp value=${value} ${ts}"
done
Next, from the Explore tab in the Chronograf web interface we can visualize the data using the following query:
select "value" from "test"."autogen"."temp"
We will see a neat cosine distribution:

With just a couple of commands, we have deployed the TICK stack on a Swarm cluster, sent time series data and visualized it.
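As an aside, the same InfluxQL query can be issued directly against InfluxDB's HTTP /query endpoint, without going through Chronograf. A minimal Python sketch of building that request URL (the localhost host and port 8086 are assumptions for a local deployment):

```python
from urllib.parse import urlencode

def influx_query_url(host, query, db="test"):
    """Build a URL for InfluxDB's HTTP /query endpoint."""
    return f"http://{host}:8086/query?{urlencode({'db': db, 'q': query})}"

url = influx_query_url("localhost", 'select "value" from "test"."autogen"."temp"')
# Fetching this URL (e.g. with curl or urllib) returns the query result as JSON.
```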
Finally, we remove the stack:
$ docker stack rm tick
Removing service tick_chronograf
Removing service tick_influxdb
Removing service tick_kapacitor
Removing service tick_telegraf
Removing config tick_telegraf-conf
Removing network tick_default
We have shown how to deploy the application stack with Docker Swarm. Now we will deploy it with Kubernetes.
Using Kubernetes to Deploy the TICK Stack
From Docker Desktop, deploying the same application on a Kubernetes cluster is also a simple process.
Activate Kubernetes from Docker Desktop
First, activate Kubernetes from the Docker Desktop settings:

A local Kubernetes cluster starts quickly and is accessible right from our local environment.

When the Kubernetes cluster is created, a configuration file (also known as kubeconfig) is created locally (usually in ~/.kube/config), or used to enrich this file if it already exists. This configuration file contains all the information needed to communicate with the API Server securely:

The cluster’s CA
The API Server endpoint
The default user’s certificate and private key

Creating a new Docker context
Docker 19.03 introduced the context object. It allows you to quickly switch the CLI configuration to connect with different clusters. A single context exists by default as shown below:
$ docker context list
NAME                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT                                 ORCHESTRATOR
default *           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   https://kubernetes.docker.internal:6443 (default)   swarm
Note: as we can see from the ORCHESTRATOR column, this context can only be used to deploy workload on the local Swarm.
We will now create a new Docker context dedicated to run Kubernetes workloads. This can be done with the following command:
$ docker context create k8s-demo \
  --kubernetes config-file=$HOME/.kube/config \
  --description "Local k8s from Docker Desktop" \
  --docker host=unix:///var/run/docker.sock
Next, we verify that both contexts are now available:
$ docker context list
NAME                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT                                 ORCHESTRATOR
default *           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   https://kubernetes.docker.internal:6443 (default)   swarm
k8s-demo            Local k8s from Docker Desktop             unix:///var/run/docker.sock   https://kubernetes.docker.internal:6443 (default)   kubernetes
Note: we could use a single context where both orchestrators are defined. In that case, the deployment would be done on Swarm and Kubernetes at the same time.
Next, we switch to the k8s-demo context:
$ docker context use k8s-demo
Current context is now “k8s-demo”
Then we deploy the application in the same way we did before, but this time it will run on Kubernetes instead of Swarm.
$ docker stack deploy tick -c tick.yml
Waiting for the stack to be stable and running…
chronograf: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
influxdb: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
kapacitor: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
telegraf: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]

Stack tick is stable and running
Using the usual kubectl binary, we can verify all the Kubernetes resources have been created:
$ kubectl get deploy,po,svc
NAME                               READY UP-TO-DATE AVAILABLE AGE
deployment.extensions/chronograf   1/1 1 1 2m50s
deployment.extensions/influxdb     1/1 1 1 2m50s
deployment.extensions/kapacitor    1/1 1 1 2m49s
deployment.extensions/telegraf     1/1 1 1 2m49s

NAME                             READY STATUS RESTARTS AGE
pod/chronograf-c55797884-mp8gc   1/1 Running 0 2m50s
pod/influxdb-67c574845d-z6846    1/1 Running 0 2m50s
pod/kapacitor-57f6787666-t8j6l   1/1 Running 0 2m49s
pod/telegraf-6b8648884c-lq9t5    1/1 Running 0 2m49s

NAME                           TYPE CLUSTER-IP EXTERNAL-IP   PORT(S) AGE
service/chronograf             ClusterIP None <none>        55555/TCP 2m49s
service/chronograf-published   LoadBalancer <pending>     8888:32163/TCP 2m49s
service/influxdb               ClusterIP None <none>        55555/TCP 2m49s
service/kapacitor              ClusterIP None <none>        55555/TCP 2m49s
service/kubernetes             ClusterIP <none>        443/TCP 30h
service/telegraf               ClusterIP None <none>        55555/TCP 2m49s
service/telegraf-published     LoadBalancer <pending>     8186:32460/TCP 2m49s
We can generate some dummy data and visualize them using Chronograf following the same process we did above for Swarm (I only show the result here as the process is the same):

Finally we remove the stack:
$ docker stack rm tick
Removing stack: tick
Note: we used the same command to remove the stack from Kubernetes or Swarm, but notice the output is not the same, as each orchestrator handles different resources/objects.
Defining the TICK stack as a DockerApp
We followed simple steps to deploy the application using both Swarm and Kubernetes. Now we’ll define it as a Docker Application to make it more portable, and see how it eases the deployment process. 
Docker App is shipped with Docker 19.03+ and can be used once the experimental flag is enabled for the CLI. This can be done in several ways:

modifying the config.json file (usually in the $HOME/.docker folder)

{ "experimental": "enabled" }

setting the DOCKER_CLI_EXPERIMENTAL environment variable to enabled

Once this is done, we can check that Docker App is enabled:
$ docker app version
Version: v0.8.0
Git commit: 7eea32b7
Built: Tue Jun 11 20:53:26 2019
OS/Arch: darwin/amd64
Experimental: off
Renderers: none
Invocation Base Image: docker/cnab-app-base:v0.8.0
Note: version 0.8 is the latest version at the time of writing
Note: the Docker App command is experimental, which means that the feature is subject to change before being ready for production. The user experience will be updated in the next release.
Available commands in Docker App
Several commands are available to manage the lifecycle of a Docker Application, as we can see below. We will illustrate some of them later in this article.
$ docker app

Usage: docker app COMMAND

A tool to build and manage Docker Applications.

Commands:
  bundle      Create a CNAB invocation image and `bundle.json` for the application
  completion  Generates completion scripts for the specified shell (bash or zsh)
  init        Initialize Docker Application definition
  inspect     Shows metadata, parameters and a summary of the Compose file for a given application
  install     Install an application
  list        List the installations and their last known installation result
  merge       Merge a directory format Docker Application definition into a single file
  pull        Pull an application package from a registry
  push        Push an application package to a registry
  render      Render the Compose file for an Application Package
  split       Split a single-file Docker Application definition into the directory format
  status      Get the installation status of an application
  uninstall   Uninstall an application
  upgrade     Upgrade an installed application
  validate    Checks the rendered application is syntactically correct
  version     Print version information

Run 'docker app COMMAND --help' for more information on a command.

Creating a Docker Application Package for the TICK stack
We start with the folder which contains the Docker Compose file describing the application (tick.yml) and the Telegraf configuration file (telegraf.conf):
$ tree .
├── telegraf.conf
└── tick.yml
Next we create the Docker Application, named tick:
$ docker app init tick --compose-file tick.yml --description "tick stack"
Created “tick.dockerapp”
This creates the tick.dockerapp folder, which contains three files:
$ tree .
├── telegraf.conf
├── tick.dockerapp
│ ├── docker-compose.yml
│ ├── metadata.yml
│ └── parameters.yml
└── tick.yml

1 directory, 5 files

– docker-compose.yml is a copy of the tick.yml file
– metadata.yml defines metadata and additional parameters
$ cat tick.dockerapp/metadata.yml
# Version of the application
version: 0.1.0
# Name of the application
name: tick
# A short description of the application
description: tick stack
# List of application maintainers with name and email for each
maintainers:
  - name: luc

– parameters.yml defines the default parameters used for the application (more on this in a bit). This file is empty by default.
Note: when initializing the Docker App, it’s possible to use the -s flag. This creates a single file with the content of the three files above instead of a folder / files hierarchy.
As the application uses the telegraf.conf file, we need to copy it into tick.dockerapp folder.
Environment settings
As we mentioned above, the purpose of the parameters.yml file is to provide default values for the application. Those values will replace some placeholders we will define in the application’s compose file. 
To illustrate this, we will consider dev and prod environments and assume the two differ only in the ports the application exposes to the outside world:
– Telegraf listens on port 8000 in dev and 9000 in prod
– Chronograf listens on port 8001 in dev and 9001 in prod
Note: In a real world application, differences between dev and prod would not be limited to a port number. The current example is over-simplified to make it easier to grasp the main concepts.
First, we create a parameter file for each environment:
– parameters.yml defines the default ports for the Telegraf and Chronograf services
// parameters.yml
ports:
  telegraf: 8186
  chronograf: 8888

– dev.yml specifies values for the development environment
// dev.yml
ports:
  telegraf: 8000
  chronograf: 8001

– prod.yml specifies values for the production environment
// prod.yml
ports:
  telegraf: 9000
  chronograf: 9001

Next we modify the docker-compose.yml file to add some placeholders:
$ cat tick.dockerapp/docker-compose.yml
version: '3.7'
services:
  telegraf:
    image: telegraf
    configs:
    - source: telegraf-conf
      target: /etc/telegraf/telegraf.conf
    ports:
    - ${ports.telegraf}:8186
  influxdb:
    image: influxdb
  chronograf:
    image: chronograf
    ports:
    - ${ports.chronograf}:8888
    command: ["chronograf", "--influxdb-url=http://influxdb:8086"]
  kapacitor:
    image: kapacitor
    environment:
    - KAPACITOR_INFLUXDB_0_URLS_0=http://influxdb:8086
configs:
  telegraf-conf:
    file: ./telegraf.conf

As we can see in the changes above, the way to access the port for Telegraf is to use the ports.telegraf notation. The same approach is used for the Chronograf port.
The Docker App render command generates the final Docker Compose file, substituting each ${ports.XXX} placeholder with the value from the specified parameters file. The default parameters.yml is used if none is specified. As we can see below, the Telegraf port is now 8186, and the Chronograf one is 8888.
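Under the hood, this is plain placeholder substitution. Here is a rough Python sketch (not the actual docker app implementation) of how a ${ports.telegraf} placeholder gets resolved against a nested parameters mapping:

```python
import re

def render(template, params):
    """Replace ${dotted.path} placeholders with values from a nested dict."""
    def resolve(match):
        value = params
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", resolve, template)

compose = "- ${ports.telegraf}:8186\n- ${ports.chronograf}:8888"
print(render(compose, {"ports": {"telegraf": 8000, "chronograf": 8001}}))
# - 8000:8186
# - 8001:8888
```

Swapping in a different parameters dict (dev vs prod) is all it takes to retarget the ports, which is exactly what --parameters-file does below.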
$ docker app render tick.dockerapp/
version: "3.7"
services:
  chronograf:
    command:
    - chronograf
    - --influxdb-url=http://influxdb:8086
    image: chronograf
    ports:
    - mode: ingress
      target: 8888
      published: 8888
      protocol: tcp
  influxdb:
    image: influxdb
  kapacitor:
    environment:
      KAPACITOR_INFLUXDB_0_URLS_0: http://influxdb:8086
    image: kapacitor
  telegraf:
    configs:
    - source: telegraf-conf
      target: /etc/telegraf/telegraf.conf
    image: telegraf
    ports:
    - mode: ingress
      target: 8186
      published: 8186
      protocol: tcp
configs:
  telegraf-conf:
    file: telegraf.conf

If we specify a parameters file in the render command, the values within that file are used. As we can see in the following example which uses dev.yml during the rendering, Telegraf is published on port 8000 and Chronograf on port 8001 (values specified in dev.yml):
$ docker app render tick.dockerapp --parameters-file tick.dockerapp/dev.yml
version: "3.7"
services:
  chronograf:
    command:
    - chronograf
    - --influxdb-url=http://influxdb:8086
    image: chronograf
    ports:
    - mode: ingress
      target: 8888
      published: 8001
      protocol: tcp
  influxdb:
    image: influxdb
  kapacitor:
    environment:
      KAPACITOR_INFLUXDB_0_URLS_0: http://influxdb:8086
    image: kapacitor
  telegraf:
    configs:
    - source: telegraf-conf
      target: /etc/telegraf/telegraf.conf
    image: telegraf
    ports:
    - mode: ingress
      target: 8186
      published: 8000
      protocol: tcp
configs:
  telegraf-conf:
    file: telegraf.conf
Inspecting the application
The inspect command provides all the information related to the application:
– Its metadata
– The services involved
– The default parameter values
– The files the application depends on (telegraf.conf in this example)
$ docker app inspect tick
tick 0.1.0

Maintained by: luc

tick stack

Services (4)      Replicas  Ports  Image
------------      --------  -----  -----
chronograf        1         8888   chronograf
influxdb          1                influxdb
kapacitor         1                kapacitor
telegraf          1         8186   telegraf

Parameters (2)    Value
--------------    -----
ports.chronograf  8888
ports.telegraf    8186

Attachments (3)   Size
---------------   ----
dev.yml           43B
prod.yml          43B
telegraf.conf     657B

Deploying the Docker App on a Swarm Cluster
First, we go back to the default context which references a Swarm cluster.
$ docker context use default
Next, we deploy the application as a Docker App:
$ docker app install tick.dockerapp --name tick --parameters-file tick.dockerapp/prod.yml
Creating network tick_default
Creating config tick_telegraf-conf
Creating service tick_telegraf
Creating service tick_influxdb
Creating service tick_chronograf
Creating service tick_kapacitor
Application “tick” installed on context “default”

Then we list the deployed application to make sure the one created above is there:
$ docker app list
tick tick (0.1.0) install success 4 minutes 3 minutes
Next we list the services running on the Swarm cluster. We can see the values from the prod.yml parameters file have been taken into account (as the exposed ports are 9000 and 9001 for Telegraf and Chronograf respectively).
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
75onunrvoxgt        tick_chronograf     replicated          1/1                 chronograf:latest   *:9001->8888/tcp
vj1ttws2mw1u        tick_influxdb       replicated          1/1                 influxdb:latest
q4brz1i45cai        tick_kapacitor      replicated          1/1                 kapacitor:latest
i6kvr37ycnn5        tick_telegraf       replicated          1/1                 telegraf:latest     *:9000->8186/tcp

Pushing the application to Docker Hub
A Docker application can be distributed through the Docker Hub via a simple push:
$ docker app push tick --tag lucj/tick:0.1.0

Successfully pushed bundle to docker.io/lucj/tick:0.1.0. Digest is sha256:7a71d2bfb5588be0cb74cd76cc46575b58c433da1fa05b4eeccd5288b4b75bac.

It then appears next to the Docker images on the account it was pushed to:

The application is now ready to be used by anyone; it just needs to be pulled from the Docker Hub:
$ docker app pull lucj/tick:0.1.0
Before we move to the next part, we will remove the application we deployed on the Swarm cluster:
$ docker app uninstall tick
Removing service tick_chronograf
Removing service tick_influxdb
Removing service tick_kapacitor
Removing service tick_telegraf
Removing config tick_telegraf-conf
Removing network tick_default
Application “tick” uninstalled on context “default”

Deploying the Docker App on Kubernetes
We saw how easy it is to deploy a Docker App on Swarm. We will now deploy it on Kubernetes, and we’ll see it’s just as easy.
First, we set the Docker context to use Kubernetes as the orchestrator.
$ docker context use k8s-demo
Current context is now “k8s-demo”

Next, we install the application with the exact same command we used to deploy it on Swarm:
$ docker app install tick.dockerapp --name tick --parameters-file tick.dockerapp/prod.yml
Waiting for the stack to be stable and running…
influxdb: Pending
chronograf: Pending
kapacitor: Pending
telegraf: Pending
telegraf: Ready
kapacitor: Ready
chronograf: Ready
influxdb: Ready

Stack tick is stable and running

Application “tick” installed on context “k8s-demo”

Using kubectl, we list the resources to make sure everything was created correctly:
$ kubectl get deploy,po,svc
deployment.extensions/chronograf 1/1 1 1 26s
deployment.extensions/influxdb 1/1 1 1 26s
deployment.extensions/kapacitor 1/1 1 1 26s
deployment.extensions/telegraf 1/1 1 1 26s

pod/chronograf-c55797884-b7rcd 1/1 Running 0 26s
pod/influxdb-67c574845d-bcr8m 1/1 Running 0 26s
pod/kapacitor-57f6787666-82b7l 1/1 Running 0 26s
pod/telegraf-6b8648884c-xcmmx 1/1 Running 0 26s

service/chronograf ClusterIP None <none> 55555/TCP 25s
service/chronograf-published LoadBalancer <pending> 9001:31319/TCP 25s
service/influxdb ClusterIP None <none> 55555/TCP 26s
service/kapacitor ClusterIP None <none> 55555/TCP 26s
service/kubernetes ClusterIP <none> 443/TCP 2d4h
service/telegraf ClusterIP None <none> 55555/TCP 26s
service/telegraf-published LoadBalancer <pending> 9000:30684/TCP 25s

Note: The deployment on Kubernetes only works on Docker Desktop or Docker Enterprise, which run the server side controller needed to handle the stack resource.
I hope this article provides some insight into Docker Application. The project is still quite young, so breaking changes may occur before it reaches 1.0.0, but one thing is promising: it lets us deploy to Kubernetes without knowing much of anything about Kubernetes!
To learn more about Docker App:

Read our introductory post on Docker App and CNAB 

Find out how to access Docker App

The post Managing the TICK Stack with Docker App appeared first on Docker Blog.

AWS IoT Greengrass 1.10 Now Supports Docker Containers

On November 25, 2019, AWS announced the release of AWS IoT Greengrass 1.10 allowing developers to package applications into Docker container images and deploy these to edge devices. Deploying and running Docker containers on AWS IoT Greengrass devices enables application portability across development environments, edge locations, and the cloud. Docker images can easily be stored in Docker Hub, private container registries, or with Amazon Elastic Container Registry (Amazon ECR).

Docker is committed to working with cloud service provider partners such as AWS who offer Docker-compatible on-demand container infrastructure services for both individual containers as well as multi-container apps. To make it even easier for developers to benefit from the speed of these services but without giving up app portability and infrastructure choice, Docker Hub will seamlessly integrate developers’ “build” and “share” workflows with the cloud “run” services of their choosing.
“Docker and AWS are collaborating on our shared vision of how workloads can be more easily deployed to edge devices. Docker’s industry-leading container technology including Docker Desktop and Docker Hub are integral to advancing developer workflows for modern apps and IoT solutions. Our customers can now deploy and run Docker containers seamlessly on AWS IoT Greengrass devices, enabling development teams to ship apps faster and accelerate the migration of apps from the data center to the cloud, and now to edge devices,” according to David Messina, EVP Strategic Alliances for Docker.
If you are interested in how to actually deploy a Docker container-based application to an AWS IoT Greengrass core device, AWS’ Danilo Poccia has a great blog post that walks developers through the process step by step. Developers who are interested in learning more about Docker technologies can expand their understanding of Docker and Kubernetes with these additional free and paid resources here.
The post AWS IoT Greengrass 1.10 Now Supports Docker Containers appeared first on Docker Blog.

Black Friday Deals on Docker + Kubernetes Courses

In honor of Black Friday, America’s favorite shopping holiday, we’ve rounded up the best deals on Docker + Kubernetes learning materials from Docker Captains. Docker Captain is a distinction that Docker awards to select members of the community that are both experts in their field and are committed to sharing their Docker knowledge with others. 

Learn Docker in a Month of Lunches, Elton Stoneman (Save 40% with the code webdoc40).

Docker in Action, Second Edition (2019), Jeff Nickoloff (Save 50% with the code tsdocker).

Manning Publications is also offering half off when you spend $50 this week.

Nigel Poulton’s The Kubernetes Book and Docker Deep Dive ebook bundle is $7 (for both!) through December 1st with this link.

Self-Paced Online Courses:

All of Bret Fisher’s courses are $9.99 through Friday, November 29th. Choose from Docker Mastery, Kubernetes Mastery, Swarm Mastery, and Docker for Node.js.

Elton Stoneman has a wealth of courses, from Handling Data and Stateful Applications in Docker to Modernizing .Net Framework Apps with Docker on Pluralsight. Get 40% off an annual or premium subscription through Friday, November 29th.

Nick Janetakis’s Dive into Docker and Build Web Applications with Flask and Docker courses will be 50% off (no code needed) through December 2nd.

Nigel Poulton’s Kubernetes 101 course is $9.99 with the code K8S101 through Dec 1st.
And his 20 Pluralsight courses – including Docker and Kubernetes: The Big Picture; Docker Deep Dive; Getting Started with Kubernetes and more are also 40% off with an annual or premium subscription through the 29th.

Docker + Kubernetes in French:

Luc Juggery creates Docker + Kubernetes courses in French. Both his Introduction to Kubernetes and The Docker Platform courses are $9.99 through Friday November 29th.

This Thanksgiving, level up your skills at a great price or check out all of the educational resources here.
The post Black Friday Deals on Docker + Kubernetes Courses appeared first on Docker Blog.

Docker’s Next Chapter: Advancing Developer Workflows for Modern Apps

Today we start the next chapter in the Docker story, one that’s focused on developers. That we have the opportunity to write this next chapter is thanks to you, our community, for without you we wouldn’t be here. And while our focus on developers builds on recent history, it’s a focus also grounded in Docker’s beginning.
In The Beginning
When Solomon Hykes, Docker’s founder, unveiled the Docker project in 2013, he succinctly stated the problem Docker aimed to solve: “for a developer, shipping code to the server is hard.” To address this, Docker abstracted away the OS kernel’s complex container primitives, provided a developer-friendly, CLI-based workflow and defined an immutable, portable image format. The result transformed how developers work, making it much easier to build, ship and run their apps on any server. So while container primitives had existed for decades, Docker democratized them and made them as easy to use as
docker run hello-world
The rest is history. Over the last six years, Docker containerization catalyzed the growth of microservices-based applications, enabled development teams to ship apps many times faster and accelerated the migration of apps from the data center to the cloud. Far from a Docker-only effort, a vibrant community ecosystem of open source and commercial technologies arose which streamlined adoption. Not the least of these is Kubernetes, orchestration technology originating from Google, which enabled highly reliable container deployments at new levels of scale. Also during this time, Docker containers expanded from their Linux x86 beginnings to run on other OSes and architectures, including Microsoft Windows and Arm. To guide the community’s efforts, new governance organizations appeared including CNCF, OCI, and CNAB. As this market grew, our developer community rapidly embraced Docker Desktop and Docker Hub, growing to millions of active users who shared millions of containerized apps and pulled billions of app images.
All this and more in only six years – and the best is yet to come.
The Road Ahead
Going forward, Docker’s focus is to build on these foundations to advance developer workflows for modern apps. Along with real benefits, the last six years also resulted in additional complexities, an explosion of choices and new potential threats of lock-in. In light of these challenges, Docker and our community ecosystem have the opportunity to extend the open standards, functionality, automation tooling and cloud services of Docker Desktop and Docker Hub to better help developers build, share and run modern apps.
Build. Six years ago, most apps could be encapsulated using one or two containers; today, a cloud-native microservices-based app may be a composition of many containers as well as serverless functions and hosted cloud services. To address this growing complexity and help developers simplify defining, building and packaging these apps, Docker will continue to expand the functionality of our open source frameworks and developer productivity tools like Docker Compose, Docker Apps and Docker App Templates.
Share. In 2013, friction-free, language-independent shareable application content was extremely limited. Now, thanks to the modularization of functionality enabled by Docker, the meteoric rise of the Docker container as the de facto industry standard and community distribution venues like Docker Hub, developers can augment the code they write themselves with shared open source and commercial containers. However, with more publishers pushing more apps daily – over 5 million on Docker Hub alone – and new packaging formats emerging, developers risk being overwhelmed and slowed. Docker Desktop and Docker Hub can help them quickly find new technologies relevant to their applications.
Run. Early on, container infrastructure was not widely available to developers. To run Docker containerized apps on servers, developers had to ask their IT teams to install Docker Engines and, for scale, orchestrators like Kubernetes or Docker Swarm. And this container infrastructure had to be monitored, patched and managed.
Fast-forward to today, where cloud service providers offer Docker-compatible on-demand container infrastructure services for both individual containers, like AWS Fargate, as well as multi-container apps, like Microsoft AKS. These give developers speed and agility, but at the potential risk of lock-in. Thus, to make it even easier for developers to benefit from the speed of these services but without giving up app portability and infrastructure choice, Docker Hub will seamlessly integrate developers’ “build” and “share” workflows with the cloud “run” services of their choosing.
Stay Tuned
In Docker’s early days we shared a vision of making things easier for developers. The growth in the open source and commercial community around Docker and Kubernetes these last six years suggests that we’re onto something ;-). But we’re just getting started, and there’s plenty more to be done. With this future ahead of us, it’s a privilege to lead the outstanding Docker team for this next chapter of the Docker story. And it’s one we look forward to writing together with you, our developer community.
The post Docker’s Next Chapter: Advancing Developer Workflows for Modern Apps appeared first on Docker Blog.

Celebrating Veterans Day: Docker Employee Profiles

On Veterans Day, and every day, we give thanks to our veterans. We are fortunate to have Brent Salisbury, Siobhan Casey, and Johnny Gonzalez as Docker colleagues; they served in the United States Marine Corps Reserve, the United States Army Reserve, and the United States Marine Corps, respectively. Thank you all for your service, hard work, and dedication. As a thank-you for their service, we’re profiling them on our blog.
Brent Salisbury, Software Alliance Engineer

Brent Salisbury was in the United States Marine Corps Reserve from 1996-2002. Now, he is a Software Alliance Engineer at Docker. You can follow him on Twitter @networkstatic. 

What is your job? 
Software Alliance Engineer.
How long have you worked at Docker?
4.5 years.
Is your current role one that you always intended on your career path? 
Data Networking has been my passion since college. Working at Docker has afforded me the opportunity to help usher in a new software paradigm in what can be achieved in host networking and security versus the traditional proprietary hardware models of the past.

What is your advice for someone entering the field?
It may sound cliche, but find your passion. Everyone in technology is smart. What separates you from the pack is the passion you have for the technology you are working on. Once you have that figured out, network with other people in your field who are just as passionate about the subject.
What are you working on right now that you are excited about?
Helping develop partner solutions that best help the customers deploy and run Docker.
What do you do to get “unstuck” on a really difficult problem/design/bug? 
I tend to just grind a problem until either I win or the problem does. In retrospect, most of the time I should have simply stepped away from the problem for a little while to look at it from another perspective, and I would have saved loads of time in the process.
What is your definition of success?
Making a living from your passion and being part of meaningful technology projects. I am constantly in awe of what we are creating at Docker, and how pervasive the technology is across all conceivable sectors.
What are you passionate about? 
I am currently interested in emerging models of network security that are taking a distributed approach to policy disaggregation. 

Who do you look up to? 
As this is a post celebrating veterans, it’s easy to say that I hold veterans of any branch, active or reserves, that commit to serving our nation in the highest of regard.
Share a story about something or someone who has been very impactful on your life or career? 
I was in the U.S. Marine Corps Reserves from 1996-2002. While there was no shortage of things to complain about (like any good enlisted Marine does), being exposed to senior Marines that had innate leadership qualities at a young age had a lasting impression on me that has helped me throughout my career in technology.

Siobhan Casey, Sr. Manager Global Support

Siobhan Casey was in the United States Army Reserve from 1998-2006. Currently, she is a Sr. Manager, Global Support at Docker.

What is your job?
Sr. Manager, Global Support.
How long have you worked at Docker? 
3 years this Veterans Day.
Is your current role one that you always intended on your career path? 
No, I spent the majority of my career as an engineer or leading engineers in the government space. Leading a team at Docker, and watching these talented engineers help our customers succeed every day is a privilege to be a part of. I couldn’t ask for a more dedicated and competent team.

What is your advice for someone entering the field? 
Be open to change and keep growing. Technology moves fast; you need to prepare for future opportunities. Find a mentor you trust and follow their advice. For women specifically, know your worth, believe in yourself, find your voice, and bring a positive attitude.
Tell us about a favorite moment or memory at Docker or from your career?
My favorite memory involved successful Missile Defense testing of the operational BMDS.  Knowing the work we did every day ensured the successful defensive abilities of our country was the most rewarding work I’ve ever done.
What is your superpower?
Calm composure.
What is your definition of success?
A happy healthy family.
What are you passionate about?
Helping young kids to develop a love for sports, and giving back to organizations that help veterans, specifically those wounded or fallen on hard times.
What is something you love to do? 
Trail running. There is nothing but the sound of my breathing, the panting of my German Shepherd, and the silence of the woods.
And something you dislike?

Johnny Gonzalez, Technical Support Engineer

Johnny Gonzalez was in the United States Marine Corps from 2003-2009. Now, he is a Technical Support Engineer at Docker.

What is your job?
Technical Support Engineer.
How long have you worked at Docker?
1 year and 2 months.
Is your current role one that you always intended on your career path? 
Yes, because I am able to work with both systems and networks all in one.
What is your advice for someone entering the field?
Study and learn technology that is growing fast, which will make it easier to understand and work with.
Tell us about a favorite moment or memory at Docker or from your career? 
Every day is a favorite moment at Docker because there is never a dull moment. There is ALWAYS something new to learn and that in itself is exciting. Docker even has a day for pumpkin carving.
What are you working on right now that you are excited about?
Container technology and how simple it is to run applications in cloud environments as opposed to just one computer.
What do you do to get “unstuck” on a really difficult problem/design/bug?
Ask colleagues for help. Docker has a great support group that is willing to help me grow and teach me new things that I may not know or understand.
What is your superpower?
Agility to be able to tackle new tasks and responsibilities in a way to help any of my colleagues to grow in our industry.

What is your definition of success?
Success is feeling accomplished that a task was completed. It’s feeling productive even when the task is not completed at that very moment, but taking the steps that get me closer to completing it.
What are you passionate about?
Raising my daughters and helping them to grow and be successful young ladies gives me great joy. Seeing their faces first thing in the morning gives me the strength to tackle anything thrown at me and do it with a smile.
Who do you look up to?
Myself. Growing up without parents as they passed away in my teenage years and telling myself to go to school, get an education to better my life, and join the best gang ever (Marine Corps) — I couldn’t ask for more.
What is something you love to do? And something you dislike?
I love to DJ. I am teaching my kids about music and how it can help with expressing your inner feelings through words with a beat. I dislike tardiness.
Share a story about something or someone who has been very impactful on your life or career?
My daughters inspire me to grow as I have taught them that you are never too old to learn something new. From the youngest person to the oldest, there is always someone with a little more knowledge that you can learn from.



A Roadmap for Building Modern Applications

Photo by Alvaro Reyes on Unsplash
No matter what industry you’re in, your application modernization strategy matters. Overlooking or downplaying its importance is a quick way for customers to sour and competitors to gain an edge. It’s why 91% of executives believe their revenues will take a hit without successful digital transformation.
The good news is modern applications offer a clear path forward. Creating a roadmap for your modern application strategy is a critical step toward a more agile and continuous model of software development and delivery – one that’s centered on delivering perpetually expanding value and new experiences to customers. 
This is the first of a series of blogs where we will look at industry viewpoints, different approaches, underlying platforms and real-world stories that are foundational to successful modern application development in order to provide a roadmap for application modernization.
What’s in Your Environment? 
The technology inventory at companies today is as diverse, distributed and complex as ever. It includes a variety of technology stacks, application frameworks, services and languages. During a modernization process, new Open Source technologies are often integrated with legacy solutions. Existing applications need to be maintained and enhanced, modern applications need to be developed and on-ramped, and some applications need to take the gentle off-ramp to retirement. To top that off, applications today can span on-premises, public and hybrid cloud and the edge. 
This is why we view applications as a spectrum. They are built on a variety of services – from multiple cloud resources, managed services and SaaS offerings to containers, configuration formats (Helm charts, Kubernetes YAML and Docker Compose files) and functions. Sure, they can be born in the cloud, but they don’t have to be. No matter the configuration or environment, it’s important to be able to build and manage these applications in a consistent and unified manner. 
Can You Support App Modernization?
While you need to continue supporting existing applications that rely on legacy processes, you also need to ramp up on new application platforms, languages and processes. The more different technologies there are in your environment, the harder it gets to support and maintain everything — particularly without consistent processes and a common underlying platform.
You have a modernization and digital transformation strategy, but are there sufficient resources to support it? Do you have the right talent and skill sets already in place? Is platform and process inconsistency harming your modernization efforts? 
What’s Driving Your Modernization?
Companies of all shapes and sizes have their eyes set on the cloud. It’s no longer an if, but when. In a recent report, Forrester states that nearly half of survey respondents are migrating existing workloads into cloud environments and then improving these applications as part of their current cloud strategy. This helps them prove business value quickly to then expand upon without taking an overwhelming first step. Forrester recommends identifying the compelling events to modernize now (i.e. deliver new customer experiences faster or a directive to move 50% of apps to the cloud) and then setting priorities for the modernization, tackling the highest priority “core” applications first. 
Small steps in the right direction can lead to large scale innovation within your organization. We see this across our customer base at companies including Nationwide, Carnival Corporation and Liberty Mutual.
In future posts of this blog series, we’ll take a closer look at the elements that go into spurring this innovation and bringing about positive results right away and well into the future. In the coming weeks, keep an eye out for new posts on why modern applications are at the heart of digital transformation and industry trends on the state of application development, as well as best practices and real-world customer stories that help to guide modernization strategies. We hope you will follow along!
In the meantime, to learn more about modern application development:

Check out our new eBook on real-world customer stories 
Read the full report from Forrester, “Modernize Core Applications With Cloud” 



Depend on Docker for Kubeflow

Run Kubeflow natively on Docker Desktop for Mac or Windows
This is a guest post by Alex Iankoulski, Docker Captain and full stack software and infrastructure architect at Shell New Energies. The views expressed here are his own and are neither opposed nor endorsed by Shell or Docker.
In this blog, I will show you how to use Docker Desktop for Mac or Windows to run Kubeflow. To make this easier, I used my Depend on Docker project, which you can find on Github.
Even though we are experiencing a tectonic shift of development workflows in the cloud era towards hosted and remote environments, a substantial amount of work and experimentation still happens on developers’ local machines. The ability to scale down allows us to mimic a cloud deployment locally and enables us to play, learn quickly, and make changes in a safe, isolated environment. A good example of this rationale is provided by Kubeflow and MiniKF.
Since Kubeflow was first released by Google in 2018, adoption has increased significantly, particularly in the data science world for orchestration of machine learning pipelines. There are various ways to deploy Kubeflow both on desktops and servers as described in its Getting Started guide. However, the desktop deployments for Mac and Windows rely on running virtual machines using Vagrant and VirtualBox. If you do not wish to install Vagrant and VirtualBox on your Mac or PC but would still like to run Kubeflow, then you can simply depend on Docker! This article will show you how to deploy Kubeflow natively on Docker Desktop. 
Kubeflow has a hard dependency on Kubernetes and the Docker runtime. The easiest way to satisfy both of these requirements on Mac or Windows is to install Docker Desktop (version 2.1.x.x or higher). In the settings of Docker Desktop, navigate to the Kubernetes tab and check “Enable Kubernetes”:
Fig. 1 – Kubernetes Settings in Docker Desktop
Enabling the Kubernetes feature in Docker Desktop creates a single node Kubernetes cluster on your local machine.
This article offers a detailed walkthrough of setting up Kubeflow on Docker Desktop for Mac. Deploying Kubeflow on Docker Desktop for Windows using Linux containers requires two additional prerequisites: 

Linux shell – to run the bash commands from the Kubeflow installation instructions 
kfctl and kubectl CLIs – to initialize, generate, and apply the Kubeflow deployment

The easiest way to satisfy both of these dependencies is to run a Linux container that has the kfctl and kubectl utilities. A Depend on Docker project was created for this purpose. To start a bash shell with the two CLIs available, just execute:
docker run -it --rm -v <kube_config_folder_path>:/root/.kube iankoulski/kfctl bash
The remaining setup steps for both Mac and Windows are the same.
Resource Requirements
The instructions for deployment of Kubeflow on a pre-existing Kubernetes cluster specify the following resource requirements:

4 vCPUs
50 GB storage
12 GB memory

The settings in Docker Desktop need to be adjusted to accommodate these requirements as shown below.
Fig. 2 – CPU and Memory settings in Docker Desktop
Fig. 3 – Disk image size setting in Docker Desktop
Note that the settings are adjusted to more than the minimum required resources to accommodate system containers and other applications that may be running on the local machine.
We will follow instructions for the kfctl_k8s_istio configuration.

Download your preferred version from the release archive:
curl -L -o kfctl_v0.6.2_darwin.tar.gz

Extract the archive:
tar -xvf kfctl_v0.6.2_darwin.tar.gz

Set environment variables:
export PATH=$PATH:$(pwd)
export KFAPP=localkf
export CONFIG=

Initialize the deployment:
kfctl init ${KFAPP} --config=${CONFIG}
cd ${KFAPP}
kfctl generate all -V

Note: The above instructions are for Kubeflow release 0.6.2 and are meant as an example. Other releases may have a slightly different archive filename, environment variable names and values, and kfctl commands. Those are available in the release-specific deployment instructions.

Pre-pull container images (optional)

To facilitate the deployment of Kubeflow locally, we can pre-pull all required Docker images. When the container images are already present on the machine, the memory usage of Docker Desktop stays low. Pulling all images at the time of deployment may cause large spikes in memory utilization and can cause the Docker daemon to run out of resources. Pre-pulling images is especially helpful when running Kubeflow on a 16GB laptop.
To pre-pull all container images, execute the following one-line script in your $KFAPP/kustomize folder:
for i in $(grep -R image: . | cut -d ':' -f 3,4 | uniq | sed -e 's/ //' -e 's/^"//' -e 's/"$//'); do echo "Pulling $i"; docker pull $i; done;
Fig. 4 – Pre-pulling Kubeflow container images
Depending on your Internet connection, this could take several minutes to complete. Even if Docker Desktop runs out of resources, restarting it and running the script again will resume pulling the remaining images from where you left off. 
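Before kicking off the real pulls, you can sanity-check the extraction pipeline itself against a throwaway manifest. This is just a sketch; the folder and image names below are illustrative and not part of an actual Kubeflow deployment:

```shell
# Create a scratch folder with a kustomize-style manifest (illustrative values).
mkdir -p /tmp/kfapp-check && cd /tmp/kfapp-check
cat > deployment.yaml <<'EOF'
spec:
  containers:
  - name: web
    image: "nginx:1.17"
  - name: sidecar
    image: busybox:1.31
EOF
# Same extraction as the pre-pull one-liner, with the "docker pull" step left
# out so you can eyeball the image list before pulling anything.
grep -R image: . | cut -d ':' -f 3,4 | uniq | sed -e 's/ //' -e 's/^"//' -e 's/"$//'
```

Here the pipeline prints nginx:1.17 and busybox:1.31, confirming that the sed expressions strip the leading whitespace and surrounding quotes as intended.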
If you are using the kfctl container on Windows, you may wish to modify the one-line script above so it saves the docker pull commands to a file and then execute them from your preferred Docker shell.
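One way to do that, sketched below with illustrative folder, file and image names (a real run would operate on your actual $KFAPP/kustomize folder), is to redirect the generated docker pull commands into a script and execute it later from your preferred Docker shell:

```shell
# Illustrative scratch folder standing in for $KFAPP/kustomize.
mkdir -p /tmp/kustomize-demo && cd /tmp/kustomize-demo
printf '    image: redis:5.0\n' > app.yaml
# Emit one "docker pull" line per image into pull-images.sh rather than
# pulling immediately.
for i in $(grep -R image: . | cut -d ':' -f 3,4 | uniq | sed -e 's/ //' -e 's/^"//' -e 's/"$//'); do
  echo "docker pull $i"
done > pull-images.sh
chmod +x pull-images.sh
cat pull-images.sh
```

The resulting pull-images.sh contains one docker pull line per image and can be copied to, and run from, any shell where the docker CLI is available.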

Apply Kubeflow deployment to Kubernetes:

cd ${KFAPP}
kfctl apply all -V
Fig. 5 – Deployment output and Kubeflow pods – found by executing 'kubectl get pods --all-namespaces' – running in Docker Desktop.
Note: An existing deployment can be removed by executing "kfctl delete all -V".

Determine the Kubeflow entrypoint

To determine the endpoint, list all services in the istio-system namespace:
kubectl get svc -n istio-system
Fig. 6 – Istio Ingress Gateway service.
Kubeflow is exposed through the istio ingress-gateway service via a NodePort mapped to the default HTTP port (80); here the NodePort is 31380. To access Kubeflow, browse to that port on your local machine (http://localhost:31380).
Using Kubeflow
The Kubeflow central dashboard is now accessible:
Fig. 7 – Kubeflow dashboard
We can run one of the sample pipelines that is included in Kubeflow. Select Pipelines, then Experiments, and choose Conditional expression (or just click the [Sample] Basic – Conditional expression link on the dashboard screen). 
Fig. 8 – Conditional execution pipeline
Next, click the +Create run button, enter a name (e.g. conditional-execution-test), choose an experiment, and then click Start to initiate the run. Navigate to your pipeline by selecting it from the list of runs.
 Fig. 9 – Conditional execution pipeline run
The completed pipeline run looks similar to Fig. 9 above. Due to the random nature of the coin flip in this pipeline, your actual output is likely to be different. Select a node in the graph to review various assets associated with that node, including its logs.
Docker Desktop enables you to easily run container applications on your local machine, including ones that require a Kubernetes cluster. Kubeflow is a deployment that typically targets larger clusters either in cloud or on-prem environments. In this article we’ve demonstrated how to deploy and use Kubeflow locally on your Docker Desktop. 

Docker Desktop
About Kubeflow
MiniKF Rationale
Kubeflow Getting Started
Virtual Box
Kubeflow deployment instructions
Depend on Docker project
Kfctl container image

I’d like to thank the following people for their help with this post and related topics:

Yannis Zarkadas, Arrikto 
Constantinos Venetsanopoulos, Arrikto
Josh Bottum, Arrikto
Fabio Nonato de Paula, Shell
Jenny Burcio, Docker
David Aronchick, Microsoft
Stephen Turner, Docker
David Friedlander, Docker

To learn more about Docker Desktop and running Kubernetes with Docker:

Learn about designing your first application in Kubernetes.
Try Play with Kubernetes, powered by Docker.
Learn more about Docker Desktop and the new Docker Desktop Enterprise



For Liberty Mutual, the Openness and Flexibility of the Cloud Means Better Business Outcomes

We had the chance recently to sit down with the Liberty Mutual Insurance team at their Portsmouth, New Hampshire offices and talk about how they deliver better business outcomes with the cloud and containerization.
At this point, Liberty Mutual has moved about 30 percent of their applications to the cloud. One of the big improvements the team has seen with the cloud and Docker is the speed at which developers can develop and deploy their applications. That means better business outcomes for Liberty Mutual and its customers.
Here’s what they told us. You can also catch the highlights in this two-minute video:

On how tech is central to Liberty Mutual’s business
Mark Cressey, SVP and GM, IT Hosting Services: Tech and the digitization it’s allowed has really enabled Liberty Mutual to get deeply ingrained in our customers’ lives and support them through their major life journeys. We’re able to be more predictive of our customers’ needs and get in front of them proactively. How can we help? How can we assist you? Is this the right coverage? And even to the point where using real time information, we can warn them about approaching windstorms or warn our business customers to get their fleet of vehicles out of the way of a flooding event.
On why moving to the cloud matters
Mark: We’re moving to a multi-cloud or hybrid-cloud environment to get the best set of capabilities for our developers, and in turn our customers. Our goal is to take advantage of the latest innovations in all the major cloud environments, so we need to look at how we can write and deploy our applications in the most portable way possible.
Honey Williams, Director of Engineering: Moving to the cloud has empowered our developers to make decisions about when they’re going to deploy their code, or when they’re going to take this image upgrade that fixes a problem. The fact that they have that control and they’re empowered to do it themselves means fewer handoffs. And it also means fewer points of failure.
On balancing technical debt and innovation…
Mark: One of our key challenges is balancing investment between our journey to the cloud and what we need to do to keep our on-premise environments modern. We have many applications that the business relies on that can’t be migrated to cloud or aren’t scheduled to go through a modernization effort anytime soon. We still need to achieve the same goals around digitization, agility and speed to market for our existing infrastructure that we have for our cloud environments.
Eric Drobisewski, Senior Architect: We’ve got this mixed mode in terms of dealing with the technical debt of keeping our existing systems stable and secure, but also innovating and moving things to the cloud in a more digital format so that we can succeed in the future. Balancing both of those worlds and building the bridges between them is a big challenge.
On the journey with Docker…
Eric: For us, Docker first came into our picture back in 2014, so we’ve been at it for roughly five years. In hindsight, we were early adopters of a growing and maturing technology. What we saw was an opportunity to improve application development operations and security, particularly as we looked at the cloud. And then over the last four years, we’ve really seen the transformational value of that.
Mark: At Liberty Mutual, Docker is a key part of our journey to the cloud and application modernization efforts. We’ve deployed over 6,000 business services in Docker to let us drive horizontal scale, allow portability to the cloud, and simplify our environment for our developers to get them out of configuring infrastructure and get them into the job of building and deploying business functionality.
Eric: One of the things that stands out to me that containerization and Docker provided for Liberty is the openness and flexibility it’s provided around operating in this cloud native ecosystem. It has allowed us to tap into new technologies and move those quickly and securely into the hands of our dev teams to deliver better business outcomes.
On making it easier for developers…
Mallory Quaintaince, Senior Infrastructure Engineer: Some of our application images can be 5 or 6 GB, and you might say, “Why containerize it then?” But we’re really finding that where we have the biggest gains are with downtime and deploys. Instead of having long outage windows for deploys, we’re able to deploy much faster—on the order of minutes versus hours.
The application density and container density that we can get provide a lot of performance value. And the barrier to entry for developers is very low, because writing a Dockerfile, building a Docker image, or using a Docker Compose file is something that developers can easily learn in a few hours or less.
Honey: Docker has benefited developers at our company because we’re able to provide the package they need in order to deploy their code easily, really making it seem like magic. That’s really what we want for our development teams. We want them to not have to worry about the extra things associated to where they’re going to put their code and how it’s going to run, and Docker brings that for us.

To learn more about how Docker can help you move your applications to the cloud:

Read the Forrester Research report on Modernizing the Core
Download the Customer Innovation eBook



Docker’s Recommended Sessions for KubeCon 2019

The Docker team is gearing up for another great KubeCon this year in San Diego, November 17-21. As a Platinum sponsor of this year’s event, we are excited to bring Docker employees, community members and Docker captains together to demonstrate and celebrate the combined impact of Docker and Kubernetes.
Stop by Booth P37 to learn how to leverage the Docker platform to securely build, share and run modern applications for any Kubernetes environment. We will demonstrate Docker Desktop Enterprise and how it accelerates container application development while supporting developer choice. Experts will be on hand to answer questions about Docker Kubernetes Service (DKS), a secure and production-ready Kubernetes environment. Or come to learn more about Docker’s contributions to Kubernetes while picking up some great Docker swag.
Learn More from Docker Experts
KubeCon will also provide a great opportunity to learn from industry experts and hear from people who run production applications on Kubernetes. Here’s a helpful guide from the Docker team of our recommended talks:
Monday, Nov 18

Kubernetes 101 Workshop – Docker Captain Nigel Poulton

Tuesday, Nov 19

Securing the Software Supply Chain with in-toto – Justin Cappos & Marina Moore, NYU  
Introduction to Windows Containers in Kubernetes – Michael Michael, VMware & Deep Debroy, Docker 
Using TUF to Mitigate Repository Compromises – Justin Cappos, NYU 
Superpowers for Windows Containers – Deep Debroy & Jean Rouge, Docker 
Extending containerd – Samuel Karp & Maksym Pavlenko, Amazon 
Sharing is Caring: How to Begin Speaking at Conferences – Jenny Burcio & Ashlynn Polini, Docker

Wednesday, Nov 20

Application Observability for DevSecOps – Sabree Blackmon, Docker
Redesigning Notary in a Multi-registry World – Justin Cormack, Docker 

Thursday, Nov 21

Security Beyond Buzzwords: How to Secure Kubernetes with Empathy? – Pushkar Joglekar, Visa
containerd Mini Summit – Derek McGowan, Docker & Phil Estes, IBM & Lantao Liu, Google & Yu-Ju Hong, Google
Introduction to Notary – Justin Cormack, Docker

We hope that helps you navigate through the hundreds of sessions at KubeCon this year. See you there!
To learn more about Kubernetes and Docker:

Get your free Docker Kubernetes Service Cheatsheet
Download the eBook: Kubernetes Made Easy with Docker Enterprise

