Docker Turns 4: Mentorship, Pi, Moby Mingle and Moar

In case you missed it, this week we’re celebrating Docker’s 4th Birthday with meetup celebrations all over the world (check out #dockerbday on Twitter). This feels like the right time to look back at the past 4 years and reflect on what makes the Docker Community so unique and vibrant: people, values, mentorship and learning opportunities. You can read our own Jérôme Petazzoni’s blog post for a more technical retrospective.
Managing an open source project at that scale and preserving a healthy community doesn’t come without challenges. Last year, Arnaud Porterie wrote a very interesting three-part blog series on open source at Docker covering the different challenges associated with the People, the Process, and the Tooling and Automation. The most important aspect of all is the people.
Respect, fairness and openness are essential values required to create a welcoming environment for professionals and hobbyists alike. In that spirit, we’ve launched a scholarship program and partnerships in an attempt to improve opportunities for underrepresented groups in the tech industry while helping the Docker Community become more diverse. If you’re interested in this topic, we’re fortunate enough to have Austin area high school student Kate Hirschfeld presenting at DockerCon on Diversity in the face of adversity.
But what really makes the Docker community so special is all of the passionate contributors who work tremendously hard to submit pull requests, file GitHub issues, organize meetups, give talks at conferences, write blog posts or record Docker tips videos.
Leadership, mentorship, contribution and collaboration play a massive role in the development of the Docker Community and container ecosystem. Through events like Docker Mentor Week last year and the Docker Mentor Summit at DockerCon 2017, we’re always trying to energize the community and encourage more advanced users to share their knowledge with newcomers.
A great example of leadership and mentorship in the Docker Community is Docker Captain Alex Ellis. We could not write a blog post on Pi Day without mentioning Alex and the awesome work he does around Docker and Raspberry Pi. In addition to sharing his knowledge through blog posts and videos, Alex is actively inspiring and mentoring younger folks such as Finnian Anderson. Alex’s support and advocacy got Finnian invited to DockerCon 2017 to give a demo of a Raspberry Pi-driven hardware gauge to monitor a Docker Swarm in real time.

If you’re pumped about all the things you learn and all the people you meet at Docker events, you’re going to love what we have planned for you at this year’s DockerCon! We’re giving everyone at DockerCon access to a tool called Moby Mingle to connect with people who share the same Docker use cases, topics of interest, hack ideas, or even favorite TV shows. So no matter where you’re traveling from or how many people you know before the conference, we will make sure you end up feeling at home!

Register for DockerCon 2017

The post Docker Turns 4: Mentorship, Pi, Moby Mingle and Moar appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

The First DockerCon with Windows Containers

DockerCon 2017 is only a few weeks away, and the schedule is available now on the DockerCon Agenda Builder. This will be the first DockerCon since Windows Server 2016 was released, bringing native support for containers to Windows. There will be plenty of content for Windows developers and admins; here are some of the standouts.

Windows and .NET Sessions
On the main stages, there will be hours of content dedicated to Windows and .NET.
Docker for .NET Developers
Michele Bustamante, CIO of Solliance, looks at what Docker can do for .NET applications. Michele will start with a full .NET Framework application and show how to run it in a Windows container. Then Michele will move on to .NET Core and show how the new cross-platform framework can build apps which run in Windows or Linux containers, making for true portability throughout the data center and the cloud.
Escape From Your VMs with Image2Docker
I’ll be presenting with Docker Captain Jeff Nickoloff, covering the Image2Docker tool, which automates app migration from virtual machines to Docker images. There’s Image2Docker for Linux, and Image2Docker for Windows. We’ll demonstrate both, porting an app with a Linux front end and a Windows back end from VMs to Docker images. Then we’ll run the whole application in containers on one Docker swarm, a cluster with Linux and Windows nodes.
Beyond “” – the Path to Windows and Linux Parity in Docker
Taylor Brown and Dinesh Govindasamy from Microsoft will talk about how Docker support was built for Windows Server 2016. Their session will cover the technical implementation in Windows, the current gaps between Docker on Linux and Docker on Windows, and the plans to bring parity to the Windows experience. This session is from the team at Microsoft who actually delivered the kernel changes to support Windows containers running in Docker.
Creating Effective Images
Abby Fuller from AWS will talk about making efficient Docker images. Optimized Docker images build quickly, are as small as possible, and include only the components needed to run the app. Abby will talk about image layers, caching, Dockerfile best practices, and Docker Security Scanning, in a cross-platform session which looks at Linux and Windows Docker images.
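The layer and caching principles Abby will cover can be explored with two standard Docker CLI commands; a quick sketch (the image name is just an example):

```shell
# List every layer of an image with the instruction that created it
# and the size it added -- large layers are optimization candidates.
docker history nginx:latest

# Compare total image footprints side by side.
docker images nginx
```

Because the build cache is invalidated at the first changed instruction, ordering rarely-changing steps (base image, dependency installs) before frequently-changing ones (copying app source) keeps rebuilds fast and images lean.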
Other Sessions
Check out the topics in the Agenda Builder for sessions from speakers who have been using Docker in production, and seen a huge change in their ability to deliver quality software, quickly. These are Linux case studies, but the principles equally apply to Windows projects.

In Architecture, Cornell University uses Docker Datacenter to run monolithic legacy apps alongside greenfield microservice apps, with consistent monitoring and management
In Production, PayPal is on a journey migrating all their legacy apps to Docker, and using Docker as their production application platform
In Enterprise, MetLife delivered a new microservice application running on Docker in 5 months, embracing new approaches to design, testing and engineering.

 
Workshops
Workshops are instructor-led sessions, which run on the Monday of DockerCon. There are a lot of great sessions to choose from, but for Windows folks these two are particularly well-suited:
Learn Docker. Get to grips with the fundamentals of Docker, learning about images and containers, then moving on to networking, orchestration, security and volumes. This session will focus on Linux containers, which you can run with Docker for Windows, but the principles you’ll learn apply equally to Windows containers.
Modernizing Monolithic ASP.NET Applications with Docker. A workshop focused on Windows and ASP.NET. You’ll learn how to run a monolithic ASP.NET app in Docker without changing code, and then see how to break features out from the main app and run them in separate Docker containers, giving you a path to modernize your app without rebuilding it.
Hands-On Labs
As well as the main sessions and guided workshops, there will be hands-on labs for you to experience Docker on Windows. We’ll provision a Docker environment for you in Azure, and provide self-paced learning guides. The hands-on labs will cover:
Docker on Windows 101. Get started with Docker on Windows, and learn why the world is moving to containers. You’ll start by exploring the Windows Docker images from Microsoft, then you’ll run some simple applications and learn how to scale apps across multiple servers running Docker in swarm mode.
Modernize .NET Apps – for Ops. An admin guide to migrating .NET apps to Docker images, showing how the build, ship, run workflow makes application maintenance fast and risk-free. You’ll start by migrating a sample app to Docker, and then learn how to upgrade the application, patch the Windows version the app uses, and patch the Windows version on the host, all with zero downtime.
Modernize .NET Apps – for Devs. A developer guide to app migration, showing how the Docker platform lets you update a monolithic application without doing a full rebuild. You’ll start with a sample app and see how to break components out into separate units, plumbing the units together with the Docker platform and the tried-and-trusted applications available on Docker Hub.
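The swarm-mode workflow these labs build toward can be sketched with a few standard CLI commands (the service name and image are illustrative):

```shell
# Create a replicated service on the swarm...
docker service create --name web --replicas 2 -p 80:80 nginx

# ...scale it out across the nodes...
docker service scale web=5

# ...and watch the scheduler spread tasks over managers and workers.
docker service ps web
```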
Book Your Ticket Now!
DockerCon is always a sell-out conference, so book your DockerCon tickets while there are still spaces left. If you follow the Docker Captains on Twitter, you may find they have discount codes to share.



Docker and Cisco Launch Cisco Validated Designs for Cisco UCS and Flexpod Infrastructures on Docker Enterprise Edition

Last week, Docker and Cisco jointly announced a strategic alliance between our organizations. Based on customer feedback, one of the initial joint initiatives is the validation of Docker Enterprise Edition (which includes Docker Datacenter) against Cisco UCS and Nexus infrastructures. We are excited to announce that Cisco Validated Designs (CVDs) for Cisco UCS and FlexPod on Docker Enterprise Edition (EE) are immediately available.
CVDs represent the gold-standard reference architecture methodology for enterprise customers looking to deploy an end-to-end solution. The CVDs follow defined processes and cover not only provisioning and configuration of the solution, but also testing and documenting the solution against performance, scale and availability/failure scenarios, something that requires a lab setup with a significant amount of hardware reflecting actual production deployments. This enables our customers to achieve faster, more reliable and more predictable implementations.
The two new CVDs published for container management offer enterprises a well-designed, end-to-end lab-tested configuration for Docker EE on Cisco UCS and FlexPod Datacenter. The collaborative engineering effort between Cisco, NetApp and Docker provides enterprises with best-of-breed solutions for Docker Datacenter on Cisco infrastructure and NetApp enterprise storage to run stateless or stateful containers.
The first CVD includes 2 configurations:

A 4-node rack-server bare-metal deployment, co-locating the Docker UCP controller and DTR on 3 manager nodes in a highly available configuration, plus 1 UCP worker node.

A 10-node blade-server bare-metal deployment, with 3 nodes for UCP controllers, 3 nodes for DTR, and the remaining 4 nodes as UCP worker nodes.

The second CVD was based on FlexPod Datacenter in collaboration with NetApp using Cisco UCS Blades and NetApp FAS and E-Series storage.
These CVDs leverage the Docker native user experience of Docker EE, along with Cisco’s UCS converged infrastructure capabilities to provide simple management control planes to orchestrate compute, network and storage provisioning for the application containers to run in a secure and scalable environment. It also uses built in security features of the UCS such as I/O isolation through VLANs, secure bootup of bare metal hosts, and physical storage access path isolation through Cisco VIC’s virtual network interfaces. The combination of UCS and Docker EE’s built-in security such as Secrets Management, Docker Content Trust, and Docker Security Scanning provides a secure end-to-end Container-as-a-Service (CaaS) solution.

Both these solutions use Cisco UCS Service Profiles to provision and configure the UCS servers and their I/O properties to automate the complete installation process. Docker commands and Ansible were used for the Docker EE installation. After configuring proper certificates across the DTR and UCP nodes, we were able to push and pull images successfully. Container images such as busybox and nginx, and applications such as WordPress and the Voting application, were pulled from Docker Hub (a central repository where Docker developers store container images) to test and validate the configuration.
The scaling tests included the deployment of containers and applications. We were able to deploy 700+ containers on a single node, and more than 7,000 containers across 10 nodes, without performance degradation. The scaling tests also covered dynamically adding and deleting nodes to ensure the cluster remains responsive during the change. These scaling and resiliency results come from swarm mode (container orchestration tightly integrated into Docker EE with Docker Datacenter) and Cisco’s Nexus switches, which provide high-performance, low-latency networking.
The failover tests covered node shutdown and reboot, as well as faults induced at the Cisco Fabric Interconnects and the adapters on the Cisco UCS blade servers. When a UCP manager node was shut down or rebooted, we validated that users could still access containers through the Docker UCP UI or CLI. The system started up quickly after a reboot, and the UCP cluster and services were restored. Hardware failure left the cluster operating at reduced capacity, but there was no single point of failure.
As part of the FlexPod CVD, NFS was configured for the Docker Trusted Registry (DTR) nodes for shared access. FlexPod is configured with NetApp enterprise-class storage, and the NetApp Docker Volume Plugin (nDVP) provides direct integration with the Docker ecosystem for NetApp’s ONTAP, E-Series and SolidFire storage. FlexPod uses the NetApp ONTAP storage backend for DTR as well as container storage management, and deployed container volumes can be verified using NetApp OnCommand System Manager.
Please refer to the CVDs for detailed configuration information.

FlexPod Datacenter with Docker Datacenter for Container Management
Cisco UCS Infrastructure with Docker Datacenter for Container Management

 



InfraKit and Docker Swarm Mode: A Fault-Tolerant and Self-Healing Cluster

Back in October 2016, Docker released InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure. This is the second in a two-part series that dives more deeply into the internals of InfraKit.
Introduction
In the first installment of this two part series about the internals of InfraKit, we presented InfraKit’s design, architecture, and approach to high availability.  We also discussed how it can be combined with other systems to give distributed computing clusters self-healing and self-managing properties. In this installment, we present an example of leveraging Docker Engine in Swarm Mode to achieve high availability for InfraKit, which in turn enhances the Docker Swarm cluster by making it self-healing.  
Docker Swarm Mode and InfraKit
One of the key architectural features of Docker in Swarm Mode is the manager quorum powered by SwarmKit.  The manager quorum stores information about the cluster, and the consistency of information is achieved through consensus via the Raft consensus algorithm, which is also at the heart of other systems like Etcd. This guide gives an overview of the architecture of Docker Swarm Mode and how the manager quorum maintains the state of the cluster.
One aspect of the cluster state maintained by the quorum is node membership: what nodes are in the cluster, which are managers and which are workers, and their statuses. The Raft consensus algorithm gives us guarantees about our cluster’s behavior in the face of failure, and fault tolerance of the cluster is related to the number of manager nodes in the quorum. For example, a Docker Swarm with three managers can tolerate one node outage, planned or unplanned, while a quorum of five managers can tolerate outages of up to two members, possibly one planned and one unplanned.
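The fault-tolerance numbers above are simple majority arithmetic; a minimal shell sketch:

```shell
# A Raft quorum of N managers needs a majority (N/2 + 1) to stay
# writable, so it tolerates (N-1)/2 failed managers.
for managers in 1 3 5 7; do
  majority=$(( managers / 2 + 1 ))
  tolerated=$(( (managers - 1) / 2 ))
  echo "managers=$managers majority=$majority tolerated=$tolerated"
done
```

This is also why manager counts are usually odd: going from 3 to 4 managers raises the majority requirement without tolerating any additional failures.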
The Raft quorum makes the Docker Swarm cluster fault tolerant; however, it cannot fix itself.  When the quorum experiences outage of manager nodes, manual steps are needed to troubleshoot and restore the cluster.  These procedures require the operator to update or restore the quorum’s topology by demoting and removing old nodes from the quorum and joining new manager nodes when replacements are brought online.  
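The manual procedure described here maps onto standard Docker CLI commands; a sketch (node names, the token placeholder, and the manager address are illustrative):

```shell
# On a surviving manager: demote and remove the failed member
# so the quorum topology reflects reality.
docker node demote node-101
docker node rm node-101

# Print the manager join token for the replacement machine.
docker swarm join-token manager

# On the new machine: join as a manager using that token.
docker swarm join --token <manager-token> 172.31.16.102:2377
```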
While these administration tasks are easy via the Docker command line interface, InfraKit can automate this and make the cluster self-healing.  As described in our last post, InfraKit can be deployed in a highly available manner, with multiple replicas running and only one active master.  In this configuration, the InfraKit replicas can accept external input to determine which replica is the active master.  This makes it easy to integrate InfraKit with Docker in Swarm Mode: by running InfraKit on each manager node of the Swarm and by detecting the leadership changes in the Raft quorum via standard Docker API, InfraKit achieves the same fault-tolerance as the Swarm cluster. In turn, InfraKit’s monitoring and infrastructure orchestration capabilities, when there’s an outage, can automatically restore the quorum, making the cluster self-healing.
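Detecting leadership changes requires nothing beyond the standard Docker API; each InfraKit replica can poll something equivalent to this CLI call on its local engine:

```shell
# Prints "true" only on the node whose engine currently
# leads the Raft quorum.
docker node inspect self --format '{{ .ManagerStatus.Leader }}'
```

When the value flips to true on a node, the InfraKit replica co-located there can assume the active-master role.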
Example: A Docker Swarm with InfraKit on AWS
To illustrate this idea, we created a Cloudformation template that will bootstrap and create a cluster of Docker in Swarm Mode managed by InfraKit on AWS.  There are couple of ways to run this: you can clone the InfraKit examples repo and upload the template, or you can use this URL to launch the stack in the Cloudformation console.
Please note that this Cloudformation script is for demonstrations only and may not represent best practices.  However, technical users should experiment and customize it to suit their purposes.  A few things about this Cloudformation template:

As a demo, only a few regions are supported: us-west-1 (Northern California), us-west-2 (Oregon), us-east-1 (Northern Virginia), and eu-central-1 (Frankfurt).
It takes the cluster size (number of nodes), SSH key, and instance sizes as the primary user input when launching the stack.
There are options for installing the latest Docker Engine on a base Ubuntu 16.04 AMI or using images with Docker pre-installed that we have published for this demonstration.
It bootstraps the networking environment by creating a VPC, a gateway and routes, a subnet, and a security group.
It creates an IAM role for InfraKit’s AWS instance plugin to describe and create EC2 instances.
It creates a single bootstrap EC2 instance and three EBS volumes (more on this later).  The bootstrap instance is attached to one of the volumes and will be the first leader of the Swarm.  The entire Swarm cluster will grow from this seed, as driven by InfraKit.

With the elements above, this Cloudformation script has everything needed to boot up an InfraKit-managed Docker in Swarm Mode cluster of N nodes (with 3 managers and N-3 workers).
About EBS Volumes and Auto-Scaling Groups
The use of EBS volumes in our example demonstrates an alternative approach to managing Docker Swarm Mode managers.  Instead of relying on manually updating the quorum topology by removing and then adding new manager nodes to replace crashed instances, we use EBS volumes attached to the manager instances and mounted at /var/lib/docker for durable state that survive past the life of an instance.  As soon as the volume of a terminated manager node is attached to a new replacement EC2 instance, we can carry the cluster state forward quickly because there’s much less state changes to catch up to.  This approach is attractive for large clusters running many nodes and services, where the entirety of cluster state may take a long time to be replicated to a brand new manager that just joined the Swarm.  
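What InfraKit automates here can be sketched with the plain AWS CLI (all IDs below are placeholders): move the state volume from the dead manager to its replacement before Docker starts.

```shell
# Detach the /var/lib/docker volume from the terminated manager...
aws ec2 detach-volume --volume-id vol-0abc123def4567890

# ...and attach it to the replacement instance at the same device,
# so the new manager resumes with the old Raft state on disk.
aws ec2 attach-volume --volume-id vol-0abc123def4567890 \
    --instance-id i-0abc123def4567890 --device /dev/xvdf
```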
The use of persistent volumes in this example highlights InfraKit’s philosophy of running stateful services on immutable infrastructure:

Use compute instances for just the processing cores;  they can come and go.
Keep state on persistent volumes that can survive when compute instances don’t.
The orchestrator has the responsibility to maintain members in a group identified by fixed logical ID’s.  In this case these are the private IP addresses for the Swarm managers.
The pairing of logical ID (IP address) and state (on volume) need to be maintained.

This brings up a related implementation detail: why not use the auto-scaling group implementations that are already there?  First, auto-scaling group implementations vary from one cloud provider to the next, if available at all.  Second, most auto-scalers are designed to manage cattle, where individual instances in a group are identical to one another.  This is clearly not the case for the Swarm managers:

The managers have some kind of identity as resources (via IP addresses)
As infrastructure resources, members of a group know about each other via membership in this stable set of IDs.
The managers identified by these IP addresses have state that need to be detached and reattached across instance lifetimes.  The pairing must be maintained.

Current auto-scaling group implementations focus on managing identical instances in a group.  New instances are launched with assigned IP addresses that don’t match the expectations of the group, and volumes from failed instances in an auto-scaling group don’t carry over to the new instance.  It is possible to work around these limitations with sweat and conviction; InfraKit, through its support of allocation, logical IDs and attachments, supports this use case natively.
Bootstrapping InfraKit and the Swarm
So far, the Cloudformation template implements what we called ‘bootstrapping’, or the process of creating the minimal set of resources to jumpstart an InfraKit managed cluster.  With the creation of the networking environment and the first “seed” EC2 instance, InfraKit has the requisite resources to take over and complete provisioning of the cluster to match the user’s specification of N nodes (with 3 managers and N-3 workers).   Here is an outline of the process:
When the single “seed” EC2 instance boots up, a single line of code is executed in the UserData (aka cloudinit), in Cloudformation JSON:
"docker run --rm ",{"Ref":"InfrakitCore"}," infrakit template --url ",
{"Ref":"InfrakitConfigRoot"}, "/boot.sh",
" --global /cluster/name=", {"Ref":"AWS::StackName"},
" --global /cluster/swarm/size=", {"Ref":"ClusterSize"},
" --global /provider/image/hasDocker=yes",
" --global /infrakit/config/root=", {"Ref":"InfrakitConfigRoot"},
" --global /infrakit/docker/image=", {"Ref":"InfrakitCore"},
" --global /infrakit/instance/docker/image=", {"Ref":"InfrakitInstancePlugin"},
" --global /infrakit/metadata/docker/image=", {"Ref":"InfrakitMetadataPlugin"},
" --global /infrakit/metadata/configURL=", {"Ref":"MetadataExportTemplate"},
" | tee /var/lib/infrakit.boot | sh \n"
Here, we are running InfraKit packaged in a Docker image, and most of this Cloudformation statement references the Parameters (e.g. “InfrakitCore” and “ClusterSize”) defined at the beginning of the template.  Using the parameter values in the stack template, this translates to a single statement like the following that executes during bootup of the instance:
docker run --rm infrakit/devbundle:0.4.1 infrakit template
--url https://infrakit.github.io/examples/swarm/boot.sh
--global /cluster/name=mystack
--global /cluster/swarm/size=4 # many more ...
| tee /var/lib/infrakit.boot | sh # tee just makes a copy on disk

This single statement marks the hand-off from Cloudformation to InfraKit.  When the seed instance starts up (and installs Docker, if not already part of the AMI), the InfraKit container is run to execute the infrakit template command.  The template command takes a URL as the source of the template (e.g. https://infrakit.github.io/examples/swarm/boot.sh, or a local file with a URL like file://) and a set of pre-conditions (as the --global variables), and renders it.  Through the --global flags, we are able to pass a set of parameters entered by the user when launching the Cloudformation stack. This allows InfraKit to use Cloudformation as authentication and user interface for configuring the cluster.
InfraKit uses templates to simplify complex scripting and configuration tasks.  The templates can be any text that uses {{ }} tags, aka “handlebars” syntax.  Here InfraKit is given a set of input parameters from the Cloudformation template and a URL referencing the boot script.  It then fetches the template and renders a script that is executed to perform the following during boot-up of the instance:
 

Formatting the EBS volume if it’s not already formatted
Stopping Docker if it is currently running, and mounting the volume at /var/lib/docker
Configuring the Docker engine with proper labels and restarting it
Starting up an InfraKit metadata plugin that can introspect its environment.  The AWS instance plugin, in v0.4.1, can introspect an environment formed by Cloudformation, as well as use the instance metadata service available on AWS.  InfraKit metadata plugins can export important parameters in a read-only namespace that can be referenced in templates as file-system paths.
Starting the InfraKit containers such as the manager, group, instance, and Swarm flavor plugins
Initializing the Swarm via docker swarm init
Generating a config JSON for InfraKit itself.  This JSON is also rendered by a template (https://github.com/infrakit/examples/blob/v0.4.1/swarm/groups.json) that references environmental parameters like region, availability zone, subnet IDs and security group IDs that are exported by the metadata plugins.
Performing an infrakit manager commit to tell InfraKit to begin managing the cluster

See https://github.com/infrakit/examples/blob/v0.4.1/swarm/boot.sh for details.
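The storage and Swarm steps in that boot script boil down to a handful of shell commands; a simplified sketch (the device path and advertise address are illustrative):

```shell
# 1. Format the EBS volume only if it has no filesystem yet.
blkid /dev/xvdf || mkfs -t ext4 /dev/xvdf

# 2. Stop Docker and mount the volume as its state directory.
systemctl stop docker
mount /dev/xvdf /var/lib/docker
systemctl start docker

# 3. Initialize the Swarm, advertising this manager's private IP.
docker swarm init --advertise-addr 172.31.16.101
```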
When the InfraKit replica begins running, it notices that the current infrastructure state (of only one node) does not match the user’s specification of 3 managers and N-3 worker nodes.  InfraKit will then drive the infrastructure state toward user’s specification by creating the rest of the managers and workers to complete the Swarm.
The topic of metadata and templating in InfraKit will be the subjects of future blog posts.  In a nutshell, metadata is information exposed by compatible plugins organized and accessible in a cluster-wide namespace.  Metadata can be accessed in the InfraKit CLI or in templates with file-like path names.  You can think of this as a cluster-wide read-only sysfs.  InfraKit template engine, on the other hand, can make use of this data to render complex configuration script files or JSON documents. The template engine supports fetching a collection of templates from local directory or from a remote site, like the example Github repo that has been configured to serve up the templates like a static website or S3 bucket.
 
Running the Example
You can either fork the examples repo or use this URL to launch the stack on AWS console.   Here we first bootstrap the Swarm with the Cloudformation template, then InfraKit takes over and provisions the rest of the cluster.  Then, we will demonstrate fault tolerance and self-healing by terminating the leader manager node in the Swarm to induce fault and force failover and recovery.
When you launch the stack, you have to answer a few questions:

The size of the cluster.  This script always starts a Swarm with 3 managers, so use a value greater than 3.

The SSH key.

There’s an option to install Docker or use an AMI with Docker pre-installed.  An AMI with Docker pre-installed gives shorter startup time when InfraKit needs to spin up a replacement instance.

Once you agree and launch the stack, it takes a few minutes for the cluster to come up.  In this case, we start a 4-node cluster.  In the AWS console we can verify that the cluster is fully provisioned by InfraKit:

Note the private IP addresses 172.31.16.101, 172.31.16.102, and 172.31.16.103 are assigned to the Swarm managers, and they are the values in our configuration. In this example the public IP addresses are dynamically assigned: 35.156.207.156 is bound to the manager instance at 172.31.16.101.  
Also, we see that InfraKit has attached the 3 EBS volumes to the manager nodes:

Because InfraKit is configured with the Swarm Flavor plugin, it also made sure that the manager and worker instances successfully joined the Swarm.  To illustrate this, we can log into the manager instances and run docker node ls. As a means to visualize the Swarm membership in real-time, we log into all three manager instances and run
watch -d docker node ls  
The watch command will by default refresh docker node ls every 2 seconds.  This allows us to not only watch the Swarm membership changes in real-time but also check the availability of the Swarm as a whole.
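As a sketch of what we look for in that output, the following parses docker node ls-style text to report the leader and any Down nodes. Sample text stands in for real output so the logic can be shown without a live Swarm, and the column layout is simplified from the real docker node ls format.

```shell
# Report the Swarm leader and any Down nodes from node-list text.
check_swarm() {
  # $1 = node list text (simplified docker node ls layout)
  echo "$1" | awk '$NF == "Leader" { print "leader: " $2 }
                   $3 == "Down"    { print "down: "   $2 }'
}

sample='ID HOSTNAME STATUS AVAILABILITY MANAGER-STATUS
qpglaj6egxvl20vuisdbq8klr ip-172-31-16-101 Ready Active Leader
abc123 ip-172-31-16-102 Ready Active Reachable
def456 ip-172-31-16-103 Ready Active Reachable'

check_swarm "$sample"
```

Run against live output, the same check would flag the moment a manager transitions to Down or the Leader marker moves to another node.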

Note that at this time, the leader of the Swarm is, as we expected, the bootstrap instance, 172.31.16.101.  
Let’s make a note of this instance’s public IP address (35.156.207.156), private IP address (172.31.16.101), and its Swarm Node cryptographic identity (qpglaj6egxvl20vuisdbq8klr).  Now, to test fault tolerance and self-healing, let’s terminate this very leader instance.  As soon as this instance is terminated, we would expect the quorum leadership to go to a new node, and consequently, the InfraKit replica running on that node will become the new master.
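Assuming AWS credentials are configured, inducing the failure might look like the following dry-run sketch. The instance ID below is hypothetical; with real credentials you would drop the echo to actually terminate the leader via the AWS CLI.

```shell
# Dry-run sketch of terminating the Swarm leader instance.
LEADER_INSTANCE_ID="i-0123456789abcdef0"  # hypothetical ID for 172.31.16.101
echo aws ec2 terminate-instances --instance-ids "$LEADER_INSTANCE_ID"
```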

Immediately the screen shows there is an outage:  In the top terminal, the connection to the remote host (172.31.16.101) is lost.  In the second and third terminals below, the Swarm node lists are being updated in real time:

When the 172.31.16.101 instance is terminated, leadership of the quorum is transferred to another node, at IP address 172.31.16.102.  Docker Swarm Mode is able to tolerate this failure and continue to function (as seen by the continued functioning of docker node ls on the remaining managers).  However, the Swarm has noticed that the 172.31.16.101 instance is now Down and Unreachable.

As configured, a quorum of 3 managers can tolerate a one-instance outage.  At this point, the cluster continues operation without interruption.  All your apps running on the Swarm continue to work and you can deploy services as usual.  However, without any automation, the operator needs to intervene at some point and perform some tasks to restore the cluster before another outage to the remaining nodes occurs.  
Because this cluster is managed by InfraKit, the replica running on 172.31.16.102 becomes the master as soon as that instance assumes leadership of the quorum.  Because InfraKit is tasked to maintain the specification of 3 manager instances with IP addresses 172.31.16.101, 172.31.16.102, and 172.31.16.103, it will take action when it notices 172.31.16.101 is missing.  To correct the situation, it will:

Create a new instance with the private IP address 172.31.16.101
Attach the EBS volume that was previously associated with the downed instance
Restore the volume, so that Docker Engine and InfraKit start running on the new instance.
Join the new instance to the Swarm.
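The reconciliation idea behind those steps can be sketched as follows. This is illustrative shell, not InfraKit's actual (Go) control loop: compare the desired set of manager IPs against the ones currently observed, and emit a corrective action for any that are missing.

```shell
# Sketch of desired-state reconciliation for the manager group.
reconcile() {
  desired=$1; observed=$2
  for ip in $desired; do
    case " $observed " in
      *" $ip "*) ;;  # present, nothing to do
      *) echo "recreate $ip: new instance, re-attach EBS volume, rejoin Swarm" ;;
    esac
  done
}

reconcile "172.31.16.101 172.31.16.102 172.31.16.103" "172.31.16.102 172.31.16.103"
```

The real system runs such a loop continuously, which is what makes the recovery below happen without operator intervention.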

As seen above, the new instance at private IP 172.31.16.101 now has an ephemeral public IP address 35.157.163.34, when it was previously 35.156.207.156.  We also see that the EBS volume has been re-attached:

Because the EBS volume is re-attached as /var/lib/docker on the new instance, and the same IP address is reused, the new instance appears exactly as though the downed instance had been resurrected and rejoined the cluster.  So as far as the Swarm is concerned, 172.31.16.101 may as well have been subjected to a temporary network partition and has since recovered and rejoined the cluster:

At this point, the cluster has recovered without any manual intervention.  The managers are now showing as healthy, and the quorum lives on!
Conclusion
While this example is only a proof-of-concept, we hope it demonstrates the potential of InfraKit as an active infrastructure orchestrator which can make a distributed computing cluster both fault-tolerant and self-healing.  As these features and capabilities mature and harden, we will incorporate them into Docker products such as Docker Editions for AWS and Azure.
InfraKit is a young project and rapidly evolving, and we are actively testing and building ways to safeguard and automate the operations of large distributed computing clusters.   While this project is being developed in the open, your ideas and feedback can help guide us down the path toward making distributed computing resilient and easy to operate.
Check out the InfraKit repository README for more info, a quick tutorial, and to start experimenting, from plain files to Terraform integration to building a ZooKeeper ensemble. Have a look, explore, and join us on GitHub or online at the Docker Community Slack channel (infrakit).  Send us a PR, open an issue, or just say hello.  We look forward to hearing from you!
More Resources:

Check out all the Infrastructure Plumbing projects
The InfraKit examples GitHub repo
Sign up for Docker for AWS or Docker for Azure
Try Docker today 

Part 2: InfraKit and Docker Swarm Mode: A Fault-Tolerant and Self-Healing Cluster by @dchungsf

The post InfraKit and Docker Swarm Mode: A Fault-Tolerant and Self-Healing Cluster appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Swarm Mode with Fleet Management and Collaboration now in public beta, powered by Docker Cloud

With the introduction of swarm mode in Docker 1.12, we showed the world how simple it can be to provision a secure and fully-distributed Docker cluster on which to deploy highly available and scalable applications. The latest Docker 1.13 builds on and improves these capabilities with new features, such as secrets management.
Continuing with the trend that simplicity is paramount to empowering individuals and teams to achieve their goals, today we are bringing swarm mode support to Docker Cloud, with a number of new cloud-enabled capabilities. All of this is in addition to the continuous integration (CI) features of Docker Cloud, including automatic builds, tests, security scans and the world’s largest hosted registry of public and private Docker image repositories.

Fleet Management using Docker ID
Keeping track of many swarms sprawling across multiple regions or cloud providers can be a challenge. And securely connecting to remote swarms with TLS means teams must also spend time configuring and maintaining a Public Key Infrastructure. By registering your new or existing swarms with Docker Cloud, teams can now easily manage a large number of swarms running anywhere, and only need their Docker ID to authenticate and securely access any of them.
Docker for AWS and Docker for Azure Integration
Individuals and teams can now also provision new swarms on their IaaS provider of choice using Docker Cloud. Swarms are created using Docker CE for AWS and Docker CE for Azure, which allows these swarms to take advantage of the native capabilities of their respective cloud platforms. Swarms provisioned this way are automatically registered with Docker Cloud and can be accessed remotely and securely using your Docker ID.

Swarm Collaboration
Using the team capabilities in Docker Cloud, organizations have full control over who has access to which swarms, allowing you, for example, to grant your development team access to your staging swarms and your operations team access to your production swarms.
Docker for Mac and Docker for Windows Integration
We’re bringing fleet management to the developer desktop too! When using Docker for Mac or Docker for Windows, simply login with your Docker ID to see a list of all your accessible swarms registered with Docker Cloud. From there, it’s a single click to securely connect to any swarm and begin managing it. You and your team can easily check the status of an existing application or deploy new applications right from within your local shell.
But wait, there’s more! Docker for Mac and Docker for Windows users that login using their Docker ID can now also create and manage public and private repositories directly through their desktop application.
Please note: fleet management and other integrations with Docker Cloud are currently only available in the Docker for Mac and Docker for Windows edge channel.

Under the hood of Swarm Mode with Fleet Management and Collaboration
Swarm mode with Fleet Management and Collaboration, powered by Docker Cloud, is only possible thanks to the many and diverse open source projects and tools created by Docker and its open source contributors. This announcement is the culmination of work that spans our open source SwarmKit project, our best-in-class IaaS integrations with the industry’s top cloud providers, the native Docker for Mac and Docker for Windows applications, and Docker’s own hosted cloud services. This is an experience that only Docker can deliver.
It is our mission to build tools of mass innovation. At Docker we build powerful technology that is simple to use, providing individuals and teams with the tools they need to accomplish their goals. We hope you enjoy this newest public beta release and look forward to your feedback.
Check out these additional resources to learn more:

Docs for Swarm Mode in Docker Cloud
Get Docker for Mac (edge channel)
Get Docker for Windows (edge channel)
Watch the Fleet Management demo
E-mail us your feedback


The post Swarm Mode with Fleet Management and Collaboration now in public beta, powered by Docker Cloud appeared first on Docker Blog.

Announcing Docker Enterprise Edition

Today we are announcing Docker Enterprise Edition (EE), a new version of the Docker platform optimized for business-critical deployments. Docker EE is supported by Docker Inc., is available on certified operating systems and cloud providers and runs certified Containers and Plugins from Docker Store. Docker EE is available in three tiers: Basic comes with the Docker platform, support and certification, and Standard and Advanced tiers add advanced container management (Docker Datacenter) and Docker Security Scanning.

For consistency, we are also renaming the free Docker products to Docker Community Edition (CE) and adopting a new lifecycle and time-based versioning scheme for both Docker EE and CE. Today’s Docker CE and EE 17.03 release is the first to use the new scheme.
Docker CE and EE are released quarterly, and CE also has a monthly “Edge” option. Each Docker EE release is supported and maintained for one year and receives security and critical bugfixes during that period. We are also improving Docker CE maintainability by maintaining each quarterly CE release for 4 months. That gives Docker CE users a 1-month window to update from one version to the next.
Both Docker CE and EE are available on a wide range of popular operating systems and cloud infrastructure. This gives developers, devops teams and enterprises the freedom to run Docker and Docker apps on their favorite infrastructure without risk of lock-in.
To download free Docker CE and to try or buy Docker EE, head over to Docker Store. Also check out the companion blog post on the Docker Certified Program. Or read on for details on Docker CE and EE and the new versioning and lifecycle improvements.
Docker Enterprise Edition
Docker Enterprise Edition (EE) is an integrated, supported and certified container platform for CentOS, Red Hat Enterprise Linux (RHEL), Ubuntu, SUSE Linux Enterprise Server (SLES), Oracle Linux, and Windows Server 2016, as well as for cloud providers AWS and Azure. In addition to certifying Docker EE on the underlying infrastructure, we are introducing the Docker Certification Program, which includes technology from our ecosystem partners: ISV containers that run on top of Docker, and networking and storage plugins that extend the Docker platform.
Docker and Docker partners provide cooperative support for Certified Containers and Plugins so customers can confidently use these products in production. Check out the companion blog post for more details and browse and install certified content from Docker Store. Sign up here if you’re interested in partnering to certify software for the Docker platform.
Docker EE is available in three tiers: Basic, Standard and Advanced.

Basic: The Docker platform for certified infrastructure, with support from Docker Inc. and certified Containers and Plugins from Docker Store
Standard: Adds advanced image and container management, LDAP/AD user integration, and role-based access control (Docker Datacenter)
Advanced: Adds Docker Security Scanning and continuous vulnerability monitoring

Docker EE is available as a free trial and for purchase from Docker Sales, online via Docker Store, and is supported by Alibaba, Canonical, HPE, IBM, Microsoft and by a network of regional partners.
Docker Community Edition and Lifecycle Improvements
Docker Community Edition (CE) is the new name for the free Docker products. Docker CE runs on Mac and Windows 10, on AWS and Azure, and on CentOS, Debian, Fedora, and Ubuntu and is available from Docker Store. Docker CE includes the full Docker platform and is great for developers and DIY ops teams starting to build container apps.
The launch of Docker CE and EE brings big enhancements to the lifecycle, maintainability and upgradability of Docker. Starting with today’s release, version 17.03, Docker is moving to time-based releases and a YY.MM versioning scheme, similar to the scheme used by Canonical for Ubuntu.
The Docker CE experience can be enhanced with free and paid add-ons from Docker Cloud, a set of cloud-based managed services that include automated builds, continuous integration, public and private Docker image repos, and security scanning.
Docker CE comes in two variants:

Edge is for users wanting a drop of the latest and greatest features every month
Stable is released quarterly and is for users that want an easier-to-maintain release pace

Edge releases only get security and bug-fixes during the month they are current. Quarterly stable releases receive patches for critical bug fixes and security issues for 4 months after initial release. This gives users of the quarterly releases a 1-month upgrade window between each release where it’s possible to stay on an old version while still getting fixes. This is an improvement over the previous lifecycle, which dropped maintenance for a release as soon as a new one became available.
Docker EE is released quarterly and each release is supported and maintained for a full year. Security patches and bugfixes are backported to all supported versions. This extended support window, together with certification and support, gives Docker EE subscribers the confidence they need to run business critical apps on Docker.

The Docker API version continues to be independent of the Docker platform version, and the API version does not change from Docker 1.13.1 to Docker 17.03. Even with the faster release pace, Docker will continue to maintain careful API backwards compatibility and deprecate APIs and features only slowly and conservatively. And Docker 1.13 introduced improved interoperability between clients and servers using different API versions, including dynamic feature negotiation.
In addition to clarifying and improving the Docker release life-cycle for users, the new deterministic release train also benefits the Docker project. Maintainers and partners who want to ship new features in Docker are now guaranteed that new features will be in the hands of Edge users within a month of being merged.


 
Get Started Today
Docker CE and EE are an evolution of the Docker Platform designed to meet the needs of developers, ops and enterprise IT teams. No matter the operating system or cloud infrastructure, Docker CE and EE lets you install, upgrade, and maintain Docker with the support and assurances required for your particular workload.
Here are additional resources:

Register for the Webinar: Docker EE
Download Docker CE from Docker Store
Try Docker EE for free and view pricing plans
Learn More about Docker Certified program
Read the docs 

FAQ
Is this a breaking change to Docker?
No. Docker carefully maintains backwards API compatibility, and only removes features after deprecating them for a period of 3 stable releases. Docker 17.03 uses the same API version as Docker 1.13.1.

What do I need to do to upgrade?
Docker CE for Mac and Windows users will get an automatic upgrade notification. Docker for AWS and Azure users can refer to the release notes for upgrade instructions. Legacy docker-engine package users can upgrade using their distro package manager or upgrade to the new docker-ce package.
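The legacy-package migration mentioned above can be sketched as a dry run. An apt-based distro (Ubuntu) is assumed, the repository setup steps are omitted, and the commands are echoed rather than executed; drop the echoes to run for real.

```shell
# Dry-run sketch: migrate from the legacy docker-engine package to docker-ce.
echo sudo apt-get remove -y docker-engine
echo sudo apt-get update
echo sudo apt-get install -y docker-ce
```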

Why is Docker adopting a new versioning scheme?
To improve the predictability and cadence of Docker releases, we’re adopting a monthly and quarterly release pattern. This will benefit the project overall: Instead of waiting an indeterminate period of time after a PR is merged for a feature to be released, contributors will see improvements in the hands of users within a month.
A time-based version is a good way to underscore the change, and to signify the time-based release cadence.

I’m a Docker DDC or CS Engine customer. Do I have to upgrade to Docker EE to continue to get support?
No. Docker will continue to support customers with valid subscriptions whether the subscription covers Docker EE or Commercially Supported Docker. Customers can choose to stay with their current deployed version or upgrade to the latest Docker EE 17.03. For more details, see the Scope of Coverage and Maintenance Lifecycle at https://success.docker.com/Policies/Scope_of_Support
The post Announcing Docker Enterprise Edition appeared first on Docker Blog.

Introducing the Docker Certification Program for Infrastructure, Plugins and Containers

In conjunction with the introduction of Docker Enterprise Edition (EE), we are excited to announce the Docker Certification Program and the availability of partner technologies through Docker Store. A vibrant ecosystem is a sign of a healthy platform, and by providing a program that aligns Docker’s commercial platform with the innovation coming from our partners, we are collectively expanding choice for customers investing in the Docker platform.
The Docker Certification Program is designed for both technology partners and enterprise customers to recognize Containers and Plugins that excel in quality, collaborative support and compliance. Docker Certification is aligned to the available Docker EE infrastructure and gives enterprises a trusted way to run more technology in containers with support from both Docker and the publisher. Customers can quickly identify the Certified Containers and Plugins with visible badges and be confident that they were built with best practices, tested to operate smoothly on Docker EE.
Save Your Seat: Webinar on Docker Certified and Store on March 21st.
There are three categories of Docker Certified technology available:

Certified Infrastructure: Includes operating systems and cloud providers for which the Docker platform is integrated, optimized, and tested for certification. Through this, Docker provides a great user experience and preserves application portability.
Certified Container: Independent Software Vendors (ISV) are able to package and distribute their software as containers directly to the end user. These containers are tested, built with Docker recommended best practices, are scanned for vulnerabilities, and are reviewed before posting on Docker Store.
Certified Plugin: Networking and volume plugins for Docker EE are now available to be packaged and distributed to end users as containers.  These plugin containers are built with Docker recommended best practices, are scanned for vulnerabilities, and must pass an additional suite of API compliance tests before they are reviewed and posted on Docker Store. Apps are portable across different network and storage infrastructure and work with new plugins without recoding.

Docker Certification presents an evolution of the Docker platform from Linux hackers to a broader community of developers and IT ops teams at businesses of all sizes looking to build and deploy apps on Docker for Linux and Windows on any infrastructure. Many components of their enterprise environment will come from third parties and Docker Certified accelerates the adoption of those technologies into Docker environments with assurances and support.
From the Ecosystem to Docker Certified Publisher
The Docker Certified badge is a great way for technology partners to differentiate their solutions to the millions of Docker users out there today. Upon completion of testing, review and posting into Docker Store, these certified listings will display the badge for customers to quickly understand which containers and plugins meet this extra criteria. Docker Store provides a marketplace for publishers to distribute, sell and manage their listings, and for customers to easily browse, evaluate and purchase 3rd party technology as containers.  Customers will be able to manage all subscriptions (Docker products and 3rd party Store content) from a single place.
 


Docker Store is the launch pad for all Docker container-based software, plugins and more. To kick off the program, we have the following Docker Certified technologies available starting today.

AVI Networks AviVantage
Cisco Contiv Network Plugin
Bleemeo Smart Agent
BlobCity DB
Blockbridge Volume Plugin
CodeCov Enterprise
Datadog
Gitlab Enterprise
Hedvig Docker Volume Plugin
HPE Sitescope
Hypergrid HyperCloud Block Storage Volume Plugin 
Kaazing Enterprise Gateway
Koekiebox Fluid
Microsoft winservercore, nanoserver, mssql-server-linux, mssql-server-windows-express, aspnet, dotnet core, iis
NetApp NDVP Volume Plugin
Nexenta Volume Plugin
Nimble Storage Volume Plugin
Nutanix Volume Plugin
Polyverse Microservice Firewall
Portworx PX-Developer
Sysdig Cloud Monitoring Agent
Weaveworks Weave Cloud Agent


Get started today with the latest Docker Community and Enterprise Edition platforms and browse the Docker Store for Certified Containers and Plugins for a great new Docker experience.

Register for the Webinar featuring Docker Certified and Store
Search and Browse Certified Content and Plugins Docker Store.  
Interested in publishing? Apply to be a Docker Publisher Partner.
Learn more about Docker CE and Docker EE

The post Introducing the Docker Certification Program for Infrastructure, Plugins and Containers appeared first on Docker Blog.

Dockercast Interview: Docker Captain Stefan Scherer on Microsoft and Docker

In this podcast we chat with Docker Captain and newly minted Microsoft MVP Stefan Scherer. Stefan has done some fantastic work with Docker for Windows and microservices. We also talk about how lift-and-shift models work really well for Docker and Windows, and Stefan walks us through some of the basics of running Docker on Windows. In addition to the podcast, below is his interview on why being a Captain allows him to give back to the awesome Docker community.
Dockercast with Stefan Scherer

Interview with Stefan Scherer
How has Docker impacted what you do on a daily basis?

Docker helps me to keep my machines clean. I realize more and more that you only need a few tools on your laptop, keeping it clean and lean. And instead of writing documentation on how to build a piece of software, you can describe all the steps in a Dockerfile. So the multi-gigabyte developer VMs we maintained some years ago shrink down to a few kilobytes of Dockerfiles, one for each project. No time-consuming backups needed; just keep the Dockerfile with your sources and have a backup of your Git repos.
Having practiced that on Mac and Linux now for a while, I’m happy to see that this will work on Windows as well. I see the same patterns there to get rid of an exploding PATH variable, keeping all the dependencies out of your machine and inside a container.
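A minimal sketch of that idea: the build steps live in a Dockerfile of a few hundred bytes kept with the sources, instead of a multi-gigabyte developer VM. The base image and build command here are placeholders for a real project.

```shell
# Write a minimal placeholder Dockerfile and show how small the whole
# "build documentation" is.
workdir=$(mktemp -d)
cat > "$workdir/Dockerfile" <<'EOF'
FROM alpine:3.4
COPY . /src
WORKDIR /src
RUN ./build.sh
EOF
wc -c < "$workdir/Dockerfile"   # the whole "build doc" is well under a kilobyte
```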
As a Docker Captain, how do you share that learning with the community?
When I’ve found something or solved a problem that could be useful for others, I like to write a blog post about my experience. I’m trying to show it in a simple way. If it’s just a cool hack that fits into a Tweet, then you can find it on Twitter.
I’m also watching some GitHub repos and helping people there by answering their questions or giving them some useful links to find the relevant documentation.
More and more people ask me questions directly through Twitter or email, but I gently ask them to ask the question in a public forum like GitHub, Gitter or Slack. Not that I don’t want to answer them, but instead others can profit from the discussion and the given solution.
I also speak at local Meetups. Our Hypriot team has been organizing Docker Meetups for about a year to bring together students and those interested in Docker that are working in various companies.
Why do you like Docker?
What I really like about Docker is that, although many new features came in the last year, it is still small and simple to use, at least from a developer’s point of view.
What’s so cool about Docker is that, with the availability of Windows containers earlier this year, you now have the same tools and mindset on a formerly very different platform. I believe that this lowers the barrier between Linux and Windows.  Once you know the basic Docker commands, you are able to do things on both platforms. Before that, you were probably unsure how to run a given piece of software as a service on a previously unknown platform.
What’s your favorite thing about the Docker community?
I remember when I started to test the Windows Docker engine and found the first bugs. So I wrote an issue on GitHub and you know what? I immediately got an answer from employees at Microsoft. I had previously pressed the “Send feedback report to Microsoft” button when Word crashed, and nothing happened. But with the Docker project, I learned that there is a much better feedback loop. I think it’s important for both sides to give feedback to the developers about the software they are writing.
Are you working on any fun projects on the side?
After some first baby steps with Docker, I joined four other friends at the end of 2014 to really learn Docker together during the holidays. And we wanted to try it out on a Raspberry Pi, with only a single-core CPU and half a gig of memory. We didn’t have the slightest idea what this fun idea would lead us to. This is probably not the most straightforward way to learn Docker, but we learned a lot of the basics and what’s needed, such as a suitable Linux kernel. In less than two months, we released our version of what was later called HypriotOS. You can’t imagine what hard work is hidden behind an easy-to-use SD image that you just plug into your Raspberry Pi and boot to Docker.
And we’re happy to see that this project, our work and the efforts of others led to the official ARM support of Docker in the upstream GitHub repo.
How did you first learn about Docker?
We were in the middle of a new software project where we automated a lot of our development and testing environments with Vagrant. We heard about this Docker thing and that it would be much faster and smaller. It took a few weeks to find the time to play with Docker, but it felt right to learn more about it.
Docker Captains
Captains are Docker ambassadors (not Docker employees) and their genuine love of all things Docker has a huge impact on the Docker community. Whether they are blogging, writing books, speaking, running workshops, creating tutorials and classes, offering support in forums, or organizing and contributing to local events – they make Docker’s mission of democratizing technology possible. Whether you are new to Docker or have been a part of the community for a while, please don’t hesitate to reach out to Docker Captains with your challenges, questions, speaking requests and more.
While Docker does not accept applications for the Captains program, we are always on the lookout to add additional leaders that inspire and educate the Docker community. If you are interested in becoming a Docker Captain, we need to know how you are giving back. Sign up for community.docker.com, share your activities on social media with the Docker, get involved in a local meetup as a speaker or organizer and continue to share your knowledge of Docker in your community.
Follow the Docker Captains
You can now follow all of the Docker Captains on Twitter using Docker with Alex Ellis’ tutorial.
 

DockerCast: @botchagalupe interviews @Microsoft MVP @stefscherer on Windows & microservices

The post Dockercast Interview: Docker Captain Stefan Scherer on Microsoft and Docker appeared first on Docker Blog.

Announcing Docker Birthday #4: Spreading the Docker Love!

Community is at the heart of Docker, and thanks to the hard work of thousands of maintainers, contributors, Captains, mentors, organizers, and the entire Docker community, the Docker platform is now used in production by companies of all sizes and industries.
To show our love and gratitude, it has become a tradition for Docker and our awesome network of meetup organizers to host Docker Birthday meetup celebrations all over the world. This year the celebrations will take place during the week of March 13-19, 2017. Come learn, mentor, celebrate, and eat cake!
Docker Love
We wanted to hear from the community about why they love Docker!
Wellington Silva, Docker São Paulo meetup organizer said “Docker changed my life, I used to spend days compiling and configuring environments. Then I used to spend hours setting up using VM. Nowadays I setup an environment in minutes, sometimes in seconds.”

Love the new organization of commands in Docker 1.13!
— Kaslin Fields (@kaslinfields) January 25, 2017

Docker Santo Domingo organizer, Victor Recio said, “Docker has increased my effectiveness at work, currently I can deploy software to production environment without worrying that it will not work when the delivery takes place. I love docker and I’m very grateful with it and whenever I can share my knowledge about docker with the young people of the communities of my country I do it and I am proud that there are already startups that have reach a Silicon Valley level.”

We love docker here at @Harvard for our screening platform. https://t.co/zpp8Wpqvk5
— Alan Aspuru-Guzik (@A_Aspuru_Guzik) January 12, 2017

Docker Birthday Labs
At the local birthday 4 meetups, there will be Docker labs and challenges to help attendees at all levels and welcome new members into the community. We’re partnering with CS schools, non-profit organizations, and local meetup groups to throw a series of events around the world. While the courses and labs are geared towards newcomers and intermediate level users, advanced and expert community members are invited to join as mentors to help attendees work through the materials.
Find a Birthday meetup near you!
There are already 46 Docker Birthday 4 celebrations scheduled around the world with more on the way! Check back as more events are announced.

Thursday, March 9th

Fulda, Germany

Saturday, March 11th

Madurai, India

Sunday, March 12th

Mumbai, India

Monday, March 13th

Atlanta, GA
Dallas, TX
Grenoble, France
Liège, Belgium
Luxembourg, Luxembourg

Tuesday, March 14th

Austin, TX
Berlin, Germany
Las Vegas, NV
Malmö, Sweden
Miami, FL
Saint Louis, MO

Wednesday, March 15th

Blacksburg, VA
Columbus, OH
Istanbul, Turkey
Nantes, France
Phoenix, AZ
Prague, Czech Republic
San Francisco, CA
Santa Barbara, CA
Singapore, Singapore

Thursday, March 16th

Brussels, Belgium
Budapest, Hungary
Dhahran, Saudi Arabia
Dortmund, Germany
Iráklion, Greece
Montreal, Canada
Nice, France
Stuttgart, Germany
Tokyo, Japan
Washington, DC

Saturday, March 18th

Delhi, India
Hermosillo, Mexico
Kanpur, India
Kisumu, Kenya
Novosibirsk, Russia
Porto, Portugal
Rio de Janeiro, Brazil
Thanh Pho Ho Chi Minh, Vietnam

Monday, March 20th

London, United Kingdom
Milan, Italy

Thursday, March 23rd

Dublin, Ireland

Wednesday, March 29th

Colorado Springs, CO
Ottawa, Canada

Want to help us organize a Docker Birthday celebration in your city? Email us at meetups@docker.com for more information!
Are you an advanced Docker user? Join us as a mentor!
We are recruiting a network of mentors to attend the local events and help guide attendees through the Docker Birthday labs. Mentors should have experience working with Docker Engine, Docker Networking, Docker Hub, Docker Machine, Docker Orchestration and Docker Compose. Click here to sign up as a mentor.


The post Announcing Docker Birthday 4: Spreading the Docker Love! appeared first on Docker Blog.