Online Meetup Recap: Docker Community Edition (CE) and Enterprise Edition (EE)

Last week, we announced Docker Enterprise Edition (EE) and Docker Community Edition (CE), new and renamed versions of the Docker platform. Docker EE, supported by Docker Inc., is available on certified operating systems and cloud providers and runs certified Containers and Plugins from Docker Store. For consistency, we renamed the free Docker products to Docker CE and adopted a new lifecycle and time-based versioning scheme for both Docker EE and Docker CE.
We asked product manager and release captain Michael Friis to introduce Docker CE and EE to our online community. The meetup took place on Wednesday, March 8th, and over 600 people RSVPed to hear Michael's presentation live. He gave an overview of both editions and highlighted the big enhancements to the lifecycle, maintainability and upgradability of Docker.
In case you missed it, you can watch the recording and access Michael's slides below.

 

 
Here are additional resources:

Register for the Webinar: Docker EE
Download Docker CE from Docker Store
Try Docker EE for free and view pricing plans
Learn More about Docker Certified program
Read the docs


The post Online Meetup Recap: Docker Community Edition (CE) and Enterprise Edition (EE) appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

The First DockerCon with Windows Containers

DockerCon 2017 is only a few weeks away, and the schedule is available now in the DockerCon Agenda Builder. This will be the first DockerCon since Windows Server 2016 was released, bringing native support for containers to Windows. There will be plenty of content for Windows developers and admins; here are some of the standouts.

Windows and .NET Sessions
On the main stages, there will be hours of content dedicated to Windows and .NET.
Docker for .NET Developers
Michele Bustamante, CIO of Solliance, looks at what Docker can do for .NET applications. Michele will start with a full .NET Framework application and show how to run it in a Windows container. Then Michele will move on to .NET Core and show how the new cross-platform framework can build apps which run in Windows or Linux containers, making for true portability throughout the data center and the cloud.
Escape From Your VMs with Image2Docker
I’ll be presenting with Docker Captain Jeff Nickoloff, covering the Image2Docker tool, which automates app migration from virtual machines to Docker images. There’s Image2Docker for Linux, and Image2Docker for Windows. We’ll demonstrate both, porting an app with a Linux front end and a Windows back end from VMs to Docker images. Then we’ll run the whole application in containers on one Docker swarm, a cluster with Linux and Windows nodes.
Beyond “” – the Path to Windows and Linux Parity in Docker
Taylor Brown and Dinesh Govindasamy from Microsoft will talk about how Docker support was built for Windows Server 2016. Their session will cover the technical implementation in Windows, the current gaps between Docker on Linux and Docker on Windows, and the plans to bring parity to the Windows experience. This session is from the team at Microsoft who actually delivered the kernel changes to support Windows containers running in Docker.
Creating Effective Images
Abby Fuller from AWS will talk about making efficient Docker images. Optimized Docker images build quickly, are as small as possible, and include only the components needed to run the app. Abby will talk about image layers, caching, Dockerfile best practices, and Docker Security Scanning, in a cross-platform session which looks at Linux and Windows Docker images.
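The themes of that session can be sketched with a multi-stage build, which keeps build tooling out of the final image. Everything below is illustrative (a hypothetical Go app and arbitrary base images), a sketch rather than material from the talk:

```shell
# Write a sample multi-stage Dockerfile (hypothetical app; image tags are illustrative).
# The first stage carries the full build toolchain; the final image keeps only the binary.
cat > Dockerfile.sample <<'EOF'
FROM golang:1.8 AS build
WORKDIR /src
COPY . .
RUN go build -o /app .

FROM alpine:3.5
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF
```

Each instruction adds a layer, so combining related RUN steps and putting rarely-changing instructions first keeps the build cache effective and the image small.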
Other Sessions
Check out the topics in the Agenda Builder for sessions from speakers who have been using Docker in production and have seen a huge change in their ability to deliver quality software, quickly. These are Linux case studies, but the principles apply equally to Windows projects.

In Architecture, Cornell University uses Docker Datacenter to run monolithic legacy apps alongside greenfield microservice apps, with consistent monitoring and management
In Production, PayPal is on a journey to migrate all their legacy apps to Docker, using Docker as their production application platform
In Enterprise, MetLife delivered a new microservice application running on Docker in five months, embracing new approaches to design, testing and engineering.

 
Workshops
Workshops are instructor-led sessions, which run on the Monday of DockerCon. There are a lot of great sessions to choose from, but for Windows folks these two are particularly well-suited:
Learn Docker. Get to grips with the basics of Docker, learning about images and containers, then moving on to networking, orchestration, security and volumes. This session will focus on Linux containers, which you can run with Docker for Windows, but the principles you'll learn apply equally to Windows containers.
Modernizing Monolithic ASP.NET Applications with Docker. A workshop focused on Windows and ASP.NET. You’ll learn how to run a monolithic ASP.NET app in Docker without changing code, and then see how to break features out from the main app and run them in separate Docker containers, giving you a path to modernize your app without rebuilding it.
Hands-On Labs
As well as the main sessions and guided workshops, there will be hands-on labs for you to experience Docker on Windows. We’ll provision a Docker environment for you in Azure, and provide self-paced learning guides. The hands-on labs will cover:
Docker on Windows 101. Get started with Docker on Windows, and learn why the world is moving to containers. You’ll start by exploring the Windows Docker images from Microsoft, then you’ll run some simple applications, and learn how to scale apps across multiple servers running Docker in swarm mode
Modernize .NET Apps – for Ops. An admin guide to migrating .NET apps to Docker images, showing how the build, ship, run workflow makes application maintenance fast and risk-free. You'll start by migrating a sample app to Docker, and then learn how to upgrade the application, patch the Windows version the app uses, and patch the Windows version on the host – all with zero downtime.
Modernize .NET Apps – for Devs. A developer guide to app migration, showing how the Docker platform lets you update a monolithic application without doing a full rebuild. You'll start with a sample app and see how to break components out into separate units, plumbing the units together with the Docker platform and the tried-and-trusted applications available on Docker Hub.
Book Your Ticket Now!
DockerCon is always a sell-out conference, so book your DockerCon tickets while there are still spaces left. If you follow the Docker Captains on Twitter, you may find they have discount codes to share.


The post The First DockerCon with Windows Containers appeared first on Docker Blog.

Docker Partners with Girl Develop It and Launches Pilot Class

Yesterday marked International Women's Day, a global day celebrating the social, cultural, economic and political achievements of women. In that spirit, we're thrilled to announce that we're partnering with Girl Develop It, a national 501(c)(3) nonprofit that provides affordable and judgment-free opportunities for adult women interested in learning web and software development through accessible in-person programs. Through welcoming, low-cost classes, GDI helps women of diverse backgrounds achieve their technology goals and build confidence in their careers and their everyday lives.

Girl Develop It deeply values community and supportive learning for women regardless of race, education level, income and upbringing, and those are values we share. The Docker team is committed to ensuring that we create welcoming spaces for all members of the tech community. To proactively work towards this goal, we have launched several initiatives to strengthen the Docker community and promote diversity in the larger tech community, including our DockerCon Diversity Scholarship Program, which provides mentorship and a financial scholarship to attend DockerCon. PS: Are you a woman in tech who wants to attend DockerCon in Austin, April 17th-20th? Use code for 50% off your ticket!


Launching Pilot Class
In collaboration with the GDI curriculum team, we are developing an intro to Docker class that will introduce students to the Docker platform and take them through installing, integrating, and running it in their working environment. The pilot class will take place this spring in San Francisco and Austin.

“The Intro to Docker class is fully aligned with Girl Develop It's mission to unlock the potential of women returning to the workforce, looking for a career change, or leveling up their skills,” said Executive Director Corinne Warnshuis. “A course on Docker has been requested by students and leaders in the community for some time. We're thrilled to be working with Docker to provide a valuable introduction to their platform through our in-person, affordable, judgment-free program.”
Want to help Docker with these initiatives?
We’re always happy to connect with others who work towards improving opportunities for women and underrepresented groups throughout the global Docker ecosystem and promote inclusion in the larger tech community.
If you or your organization are interested in getting more involved, please contact us at community@docker.com. Let’s join forces and take our impact to the next level!
 


The post Docker Partners with Girl Develop It and Launches Pilot Class appeared first on Docker Blog.

Docker and Cisco Launch Cisco Validated Designs for Cisco UCS and Flexpod Infrastructures on Docker Enterprise Edition

Last week, Docker and Cisco jointly announced a strategic alliance between our organizations. Based on customer feedback, one of the initial joint initiatives is the validation of Docker Enterprise Edition (which includes Docker Datacenter) against Cisco UCS and Nexus infrastructures. We are excited to announce that Cisco Validated Designs (CVDs) for Cisco UCS and FlexPod on Docker Enterprise Edition (EE) are immediately available.
CVDs represent the gold-standard reference architecture methodology for enterprise customers looking to deploy an end-to-end solution. The CVDs follow defined processes and cover not only provisioning and configuration of the solution, but also testing and documenting the solution against performance, scale and availability/failure scenarios, something that requires a lab setup with a significant amount of hardware reflecting actual production deployments. This enables our customers to achieve faster, more reliable and predictable implementations.
The two new CVDs published for container management offer enterprises a well-designed, end-to-end lab-tested configuration for Docker EE on Cisco UCS and FlexPod Datacenter. The collaborative engineering effort between Cisco, NetApp and Docker provides enterprises best-of-breed solutions for Docker Datacenter on Cisco infrastructure and NetApp enterprise storage to run stateless or stateful containers.
The first CVD includes 2 configurations:

4-node rack server bare-metal deployment, co-locating the Docker UCP controller and DTR on 3 manager nodes in a highly available configuration, with 1 UCP worker node.

10-node blade server bare-metal deployment, with 3 nodes for UCP controllers, 3 nodes for DTR, and the remaining 4 nodes as UCP worker nodes.

The second CVD was based on FlexPod Datacenter in collaboration with NetApp using Cisco UCS Blades and NetApp FAS and E-Series storage.
These CVDs leverage the native user experience of Docker EE, along with Cisco's UCS converged infrastructure capabilities, to provide simple management control planes that orchestrate compute, network and storage provisioning so application containers run in a secure and scalable environment. They also use built-in security features of UCS such as I/O isolation through VLANs, secure boot-up of bare-metal hosts, and physical storage access path isolation through the Cisco VIC's virtual network interfaces. The combination of UCS and Docker EE's built-in security features, such as Secrets Management, Docker Content Trust, and Docker Security Scanning, provides a secure end-to-end Container-as-a-Service (CaaS) solution.
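As a concrete illustration of one of those built-in features, Docker secrets in swarm mode deliver credentials to containers at runtime instead of baking them into images. A minimal sketch with placeholder names (this is standard Docker CLI usage, not configuration taken from the CVDs):

```shell
# Create a secret from stdin (the name and value here are placeholders)
echo "s3cret-value" | docker secret create db_password -

# Grant a service access to it; the value appears inside the container
# at /run/secrets/db_password, never in the image itself
docker service create --name api --secret db_password my-api-image
```

Here `my-api-image` is a hypothetical application image; any service granted the secret reads it as a file under /run/secrets/.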

Both solutions use Cisco UCS Service Profiles to provision and configure the UCS servers and their I/O properties, automating the complete installation process. Docker commands and Ansible were used for the Docker EE installation. After configuring proper certificates across the DTR and UCP nodes, we were able to push and pull images successfully. Container images such as busybox and nginx, and applications such as WordPress and the example voting application, were pulled from Docker Hub, a central repository for Docker developers to store container images, to test and validate the configuration.
The scaling tests included the deployment of containers and applications. We were able to deploy 700+ containers on a single node and more than 7,000 containers across 10 nodes without performance degradation. The scaling tests also covered dynamically adding and deleting nodes to ensure the cluster remained responsive during the change. These scaling and resiliency results come from swarm mode container orchestration, tightly integrated into Docker EE with Docker Datacenter, and from Cisco's Nexus switches, which provide high performance and low network latency.
The failover tests covered node shutdown and reboot, and induced faults from the Cisco Fabric Interconnects down to the adapters on the Cisco UCS blade servers. When a UCP manager node was shut down or rebooted, we were able to validate that users could still access containers through the Docker UCP UI or CLI. The system started up quickly after a reboot, and the UCP cluster and services were restored. Hardware failure resulted in the cluster operating at reduced capacity, but there was no single point of failure.
As part of the FlexPod CVD, NFS was configured on the Docker Trusted Registry (DTR) nodes for shared access. FlexPod is configured with NetApp enterprise-class storage, and the NetApp Docker Volume Plugin (nDVP) provides direct integration with the Docker ecosystem for NetApp's ONTAP, E-Series and SolidFire storage. FlexPod uses a NetApp ONTAP storage backend for DTR as well as container storage management, and container volumes deployed this way can be verified with NetApp OnCommand System Manager.
Please refer to the CVDs for detailed configuration information.

FlexPod Datacenter with Docker Datacenter for Container Management
Cisco UCS Infrastructure with Docker Datacenter for Container Management

 


The post Docker and Cisco Launch Cisco Validated Designs for Cisco UCS and Flexpod Infrastructures on Docker Enterprise Edition appeared first on Docker Blog.

Beta Docker Community Edition for Google Cloud Platform

Today we're excited to announce the beta of Docker Community Edition (CE) for Google Cloud Platform (GCP). Users interested in helping test and improve Docker CE for GCP should sign up at beta.docker.com. We'll let users into the beta as the product matures and stabilizes, and we're looking forward to your input and suggestions.
Docker CE for GCP is built on the same principles as Docker CE for AWS and Docker CE for Azure and provides a Docker setup on GCP that is:

Quick and easy to install in a few minutes
Released in sync with other Docker releases and always available with the latest Docker version
Simple to upgrade from one Docker CE version to the next
Configured securely and deployed on minimal, locked-down Linux maintained by Docker
Self-healing and capable of automatically recovering from infrastructure failures

Docker CE for GCP is the first Docker edition to launch using the InfraKit project. InfraKit helps us configure cloud infrastructure quickly, design upgrade processes and self-healing tailored to Docker's built-in orchestration, and smooth out infrastructure differences between cloud providers to give Docker users a consistent container platform that maximizes portability.
Installing Docker CE for GCP
Once you have access to the beta, the simplest way to set up Docker CE is using the Cloud Shell feature of the Google Cloud Console:

gcloud deployment-manager deployments create docker \
  --config https://docker-for-gcp-templates.storage.googleapis.com/v8/Docker.jinja \
  --properties managerCount:3,workerCount:1,zone:us-central1-f

Setup takes a few minutes and the install output includes instructions on how to connect to the fully operational swarm.
michael_friis@docker:~$ gcloud compute ssh --project my-project --zone us-central1-f friism-test-manager-1

Welcome to Docker!
friism-test-manager-1:~$
You can now start deploying apps and services on Docker. Docker CE for GCP has the same load-balancer integration as Docker CE for AWS and Azure, so any service that publishes ports is immediately available. For example, if you start an nginx service exposed on port 80, it will be immediately available on port 80 at the IP address of the load balancer displayed in the deployment output:
docker service create -p 80:80 nginx
You can use Docker CE for GCP directly from the Cloud Shell or use the `gcloud` command-line tools to set up an SSH tunnel to more easily deploy projects from your local machine.
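One way to set up that tunnel (the instance and zone names are placeholders, and the socket-forwarding flag mirrors the pattern Docker documents for its AWS edition, so treat this as a sketch rather than the official procedure):

```shell
# Forward the manager's Docker socket to a local TCP port over SSH
gcloud compute ssh --zone us-central1-f docker-manager-1 \
  --ssh-flag="-NL localhost:2374:/var/run/docker.sock" &

# Point the local Docker CLI at the tunnel
export DOCKER_HOST=localhost:2374
docker node ls
```

With DOCKER_HOST set, ordinary docker commands on your laptop target the remote swarm.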
An even simpler way to access your GCP Docker install is by using the new beta Docker Cloud fleet management feature. Simply register the swarm with Docker Cloud from the Cloud Shell:

Now the swarm is available for use on Docker for Mac and Windows, and you can easily share access with team members by adding their Docker Ids.

To try out Docker CE for GCP, sign up at https://beta.docker.com. We're busy improving the beta based on user input and we're looking forward to your feedback. Later in the year, we'll also add Docker Enterprise Edition (EE) support, so stay tuned for more!
Learn More about Docker Community Edition (CE) for Google Cloud Platform (GCP):

Sign up here at beta.docker.com
Check out the docs for Docker for GCP
Check out Docker CE for AWS and Azure
Learn More about Docker Community and Enterprise Edition

The post Beta Docker Community Edition for Google Cloud Platform appeared first on Docker Blog.

InfraKit and Docker Swarm Mode: A Fault-Tolerant and Self-Healing Cluster

Back in October 2016, Docker released InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure. This is the second in a two-part series that dives more deeply into the internals of InfraKit.
Introduction
In the first installment of this two-part series about the internals of InfraKit, we presented InfraKit's design, architecture, and approach to high availability. We also discussed how it can be combined with other systems to give distributed computing clusters self-healing and self-managing properties. In this installment, we present an example of leveraging Docker Engine in Swarm Mode to achieve high availability for InfraKit, which in turn enhances the Docker Swarm cluster by making it self-healing.
Docker Swarm Mode and InfraKit
One of the key architectural features of Docker in Swarm Mode is the manager quorum powered by SwarmKit. The manager quorum stores information about the cluster, and the consistency of that information is achieved through consensus via the Raft consensus algorithm, which is also at the heart of other systems like etcd. This guide gives an overview of the architecture of Docker Swarm Mode and how the manager quorum maintains the state of the cluster.
One aspect of the cluster state maintained by the quorum is node membership: which nodes are in the cluster, which are managers and workers, and their statuses. The Raft consensus algorithm gives us guarantees about the cluster's behavior in the face of failure, and the fault tolerance of the cluster is related to the number of manager nodes in the quorum. For example, a Docker Swarm with three managers can tolerate one node outage, planned or unplanned, while a quorum of five managers can tolerate outages of up to two members, possibly one planned and one unplanned.
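The rule behind these numbers is simply floor((N-1)/2) tolerated failures for an N-manager quorum; a throwaway shell function makes it concrete:

```shell
# Maximum manager failures a Raft quorum of N managers can tolerate: floor((N-1)/2)
tolerated_failures() {
  echo $(( ($1 - 1) / 2 ))
}

tolerated_failures 3   # prints 1
tolerated_failures 5   # prints 2
```

This is also why even manager counts buy nothing: a quorum of 4 tolerates the same single failure as a quorum of 3, while adding one more node that can fail.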
The Raft quorum makes the Docker Swarm cluster fault tolerant; however, it cannot fix itself. When the quorum experiences an outage of manager nodes, manual steps are needed to troubleshoot and restore the cluster. These procedures require the operator to update or restore the quorum's topology by demoting and removing old nodes from the quorum and joining new manager nodes when replacements are brought online.
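Those manual steps are ordinary Docker CLI commands; a hedged sketch of the procedure (node names, the token, and the manager address are placeholders):

```shell
# On a surviving manager: demote the failed manager, then remove it from the swarm
# (for an unreachable node, `docker node rm --force` may be needed instead)
docker node demote failed-manager
docker node rm failed-manager

# Print the join token for a replacement manager
docker swarm join-token manager

# On the replacement node: join as a manager using the printed token
docker swarm join --token <manager-token> 172.31.16.102:2377
```
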
While these administration tasks are easy via the Docker command line interface, InfraKit can automate this and make the cluster self-healing.  As described in our last post, InfraKit can be deployed in a highly available manner, with multiple replicas running and only one active master.  In this configuration, the InfraKit replicas can accept external input to determine which replica is the active master.  This makes it easy to integrate InfraKit with Docker in Swarm Mode: by running InfraKit on each manager node of the Swarm and by detecting the leadership changes in the Raft quorum via standard Docker API, InfraKit achieves the same fault-tolerance as the Swarm cluster. In turn, InfraKit’s monitoring and infrastructure orchestration capabilities, when there’s an outage, can automatically restore the quorum, making the cluster self-healing.
Example: A Docker Swarm with InfraKit on AWS
To illustrate this idea, we created a Cloudformation template that will bootstrap and create a cluster of Docker in Swarm Mode managed by InfraKit on AWS. There are a couple of ways to run it: you can clone the InfraKit examples repo and upload the template, or you can use this URL to launch the stack in the Cloudformation console.
Please note that this Cloudformation script is for demonstration purposes only and may not represent best practices. However, technical users should experiment and customize it to suit their purposes. A few things to note about this Cloudformation template:

As a demo, only a few regions are supported: us-west-1 (Northern California), us-west-2 (Oregon), us-east-1 (Northern Virginia), and eu-central-1 (Frankfurt).
It takes the cluster size (number of nodes), SSH key, and instance sizes as the primary user input when launching the stack.
There are options for installing the latest Docker Engine on a base Ubuntu 16.04 AMI or using images with Docker pre-installed, which we published for this demonstration.
It bootstraps the networking environment by creating a VPC, a gateway and routes, a subnet, and a security group.
It creates an IAM role for InfraKit’s AWS instance plugin to describe and create EC2 instances.
It creates a single bootstrap EC2 instance and three EBS volumes (more on this later).  The bootstrap instance is attached to one of the volumes and will be the first leader of the Swarm.  The entire Swarm cluster will grow from this seed, as driven by InfraKit.

With the elements above, this Cloudformation script has everything needed to boot up an InfraKit-managed Docker in Swarm Mode cluster of N nodes (with 3 managers and N-3 workers).
About EBS Volumes and Auto-Scaling Groups
The use of EBS volumes in our example demonstrates an alternative approach to managing Docker Swarm Mode managers. Instead of relying on manually updating the quorum topology by removing and then adding new manager nodes to replace crashed instances, we use EBS volumes attached to the manager instances and mounted at /var/lib/docker for durable state that survives past the life of an instance. As soon as the volume of a terminated manager node is attached to a new replacement EC2 instance, we can carry the cluster state forward quickly because there are far fewer state changes to catch up on. This approach is attractive for large clusters running many nodes and services, where the entirety of the cluster state may take a long time to replicate to a brand-new manager that just joined the Swarm.
The use of persistent volumes in this example highlights InfraKit’s philosophy of running stateful services on immutable infrastructure:

Use compute instances for just the processing cores;  they can come and go.
Keep state on persistent volumes that can survive when compute instances don’t.
The orchestrator has the responsibility to maintain members in a group identified by fixed logical IDs. In this case these are the private IP addresses of the Swarm managers.
The pairing of logical ID (IP address) and state (on volume) needs to be maintained.

This brings up a related implementation detail: why not use the auto-scaling group implementations that are already there? First, auto-scaling group implementations vary from one cloud provider to the next, if they are available at all. Second, most auto-scalers are designed to manage cattle, where individual instances in a group are identical to one another. This is clearly not the case for the Swarm managers:

The managers have some kind of identity as resources (via IP addresses)
As infrastructure resources, members of a group know about each other via membership in this stable set of IDs.
The managers identified by these IP addresses have state that needs to be detached and reattached across instance lifetimes. The pairing must be maintained.

Current auto-scaling group implementations focus on managing identical instances in a group. New instances are launched with assigned IP addresses that don't match the expectations of the group, and volumes from failed instances in an auto-scaling group don't carry over to the new instance. It is possible to work around these limitations with sweat and conviction; InfraKit, through its support for allocation, logical IDs and attachments, supports this use case natively.
Bootstrapping InfraKit and the Swarm
So far, the Cloudformation template implements what we call “bootstrapping”: the process of creating the minimal set of resources to jumpstart an InfraKit-managed cluster. With the creation of the networking environment and the first “seed” EC2 instance, InfraKit has the requisite resources to take over and complete provisioning of the cluster to match the user's specification of N nodes (with 3 managers and N-3 workers). Here is an outline of the process:
When the single “seed” EC2 instance boots up, a single line of code is executed in the UserData (aka cloud-init), written in Cloudformation JSON:
"docker run --rm ", {"Ref":"InfrakitCore"}, " infrakit template --url ",
{"Ref":"InfrakitConfigRoot"}, "/boot.sh",
" --global /cluster/name=", {"Ref":"AWS::StackName"},
" --global /cluster/swarm/size=", {"Ref":"ClusterSize"},
" --global /provider/image/hasDocker=yes",
" --global /infrakit/config/root=", {"Ref":"InfrakitConfigRoot"},
" --global /infrakit/docker/image=", {"Ref":"InfrakitCore"},
" --global /infrakit/instance/docker/image=", {"Ref":"InfrakitInstancePlugin"},
" --global /infrakit/metadata/docker/image=", {"Ref":"InfrakitMetadataPlugin"},
" --global /infrakit/metadata/configURL=", {"Ref":"MetadataExportTemplate"},
" | tee /var/lib/infrakit.boot | sh\n"
Here, we are running InfraKit packaged in a Docker image, and most of this Cloudformation statement references the Parameters (e.g. "InfrakitCore" and "ClusterSize") defined at the beginning of the template. Using the parameter values in the stack template, this translates to a single statement like the following that executes during boot-up of the instance:
docker run --rm infrakit/devbundle:0.4.1 infrakit template
--url https://infrakit.github.io/examples/swarm/boot.sh
--global /cluster/name=mystack
--global /cluster/swarm/size=4 # many more ...
| tee /var/lib/infrakit.boot | sh # tee just makes a copy on disk

This single statement marks the hand-off from Cloudformation to InfraKit. When the seed instance starts up (and installs Docker, if it's not already part of the AMI), the InfraKit container runs the infrakit template command. The template command takes a URL as the source of the template (e.g. https://infrakit.github.io/examples/swarm/boot.sh, or a local file with a file:// URL) and a set of pre-conditions (the --global variables) and renders it. Through the --global flags, we are able to pass in the set of parameters entered by the user when launching the Cloudformation stack. This allows InfraKit to use Cloudformation as the authentication and user interface for configuring the cluster.
InfraKit uses templates to simplify complex scripting and configuration tasks. The templates can be any text that uses {{ }} tags, aka “handle bar” syntax. Here InfraKit is given a set of input parameters from the Cloudformation template and a URL referencing the boot script. It then fetches the template and renders a script that is executed during boot-up of the instance to perform the following:
 

Formats the EBS volume if it's not already formatted
Stops Docker if it's currently running and mounts the volume at /var/lib/docker
Configures the Docker engine with proper labels and restarts it
Starts up an InfraKit metadata plugin that can introspect its environment. The AWS instance plugin, in v0.4.1, can introspect an environment formed by Cloudformation, as well as use the instance metadata service available on AWS. InfraKit metadata plugins can export important parameters in a read-only namespace that can be referenced in templates as file-system paths.
Starts the InfraKit containers such as the manager, group, instance, and Swarm flavor plugins
Initializes the Swarm via docker swarm init
Generates a config JSON for InfraKit itself. This JSON is also rendered from a template (https://github.com/infrakit/examples/blob/v0.4.1/swarm/groups.json) that references environmental parameters like region, availability zone, subnet IDs and security group IDs that are exported by the metadata plugins.
Performs an infrakit manager commit to tell InfraKit to begin managing the cluster

See https://github.com/infrakit/examples/blob/v0.4.1/swarm/boot.sh for details.
When the InfraKit replica begins running, it notices that the current infrastructure state (of only one node) does not match the user's specification of 3 managers and N-3 worker nodes. InfraKit then drives the infrastructure state toward the user's specification by creating the rest of the managers and workers to complete the Swarm.
The topics of metadata and templating in InfraKit will be the subjects of future blog posts. In a nutshell, metadata is information exposed by compatible plugins, organized and accessible in a cluster-wide namespace. Metadata can be accessed in the InfraKit CLI or in templates with file-like path names. You can think of this as a cluster-wide read-only sysfs. The InfraKit template engine, on the other hand, can make use of this data to render complex configuration scripts or JSON documents. The template engine supports fetching a collection of templates from a local directory or from a remote site, like the example GitHub repo, which has been configured to serve up the templates like a static website or S3 bucket.
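To make the “handle bar” syntax concrete, here is a toy template. The variable paths and the `var` function usage are illustrative assumptions, not taken from the example repo:

```shell
# Write a toy InfraKit-style template; the {{ var "..." }} tags would be
# resolved against the --global / metadata namespace at render time
# (paths here are made up for illustration)
cat > cluster.tpl <<'EOF'
{
  "ClusterName": "{{ var "/cluster/name" }}",
  "SwarmSize": {{ var "/cluster/swarm/size" }}
}
EOF
```

Such a file would then be rendered with the same kind of command used during boot, e.g. infrakit template --url file://$PWD/cluster.tpl --global /cluster/name=mystack, producing plain JSON.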
 
Running the Example
You can either fork the examples repo or use this URL to launch the stack in the AWS console. Here we first bootstrap the Swarm with the Cloudformation template; then InfraKit takes over and provisions the rest of the cluster. Finally, we will demonstrate fault tolerance and self-healing by terminating the leader manager node in the Swarm to induce a fault and force failover and recovery.
When you launch the stack, you have to answer a few questions:

The size of the cluster.  This script always starts a Swarm with 3 managers, so use a value greater than 3.

The SSH key.

There’s an option to install Docker or use an AMI with Docker pre-installed.  An AMI with Docker pre-installed gives shorter startup time when InfraKit needs to spin up a replacement instance.

Once you agree and launch the stack, it takes a few minutes for the cluster to come up. In this case, we start a 4-node cluster. In the AWS console we can verify that the cluster is fully provisioned by InfraKit:

Note the private IP addresses 172.31.16.101, 172.31.16.102, and 172.31.16.103 are assigned to the Swarm managers, and they are the values in our configuration. In this example the public IP addresses are dynamically assigned: 35.156.207.156 is bound to the manager instance at 172.31.16.101.  
Also, we see that InfraKit has attached the 3 EBS volumes to the manager nodes:

Because InfraKit is configured with the Swarm Flavor plugin, it also made sure that the manager and worker instances successfully joined the Swarm.  To illustrate this, we can log into the manager instances and run docker node ls. As a means to visualize the Swarm membership in real-time, we log into all three manager instances and run
watch -d docker node ls  
The watch command will by default refresh docker node ls every 2 seconds.  This allows us to not only watch the Swarm membership changes in real-time but also check the availability of the Swarm as a whole.

Note that at this time, the leader of the Swarm is just as we expected, the bootstrap instance, 172.31.16.101.  
Let’s make a note of this instance’s public IP address (35.156.207.156), private IP address (172.31.16.101), and its Swarm node cryptographic identity (qpglaj6egxvl20vuisdbq8klr). Now, to test fault tolerance and self-healing, let’s terminate this leader instance. As soon as the instance is terminated, we expect quorum leadership to pass to a new node and, consequently, the InfraKit replica running on that node to become the new master.

Immediately the screen shows there is an outage:  In the top terminal, the connection to the remote host (172.31.16.101) is lost.  In the second and third terminals below, the Swarm node lists are being updated in real time:

When the 172.31.16.101 instance is terminated, leadership of the quorum is transferred to another node, at IP address 172.31.16.102. Docker Swarm Mode is able to tolerate this failure and continue to function (as seen by the continued functioning of docker node ls on the remaining managers). However, the Swarm has noticed that the 172.31.16.101 instance is now Down and Unreachable.

As configured, a quorum of 3 managers can tolerate one instance outage. At this point, the cluster continues operation without interruption. All your apps running on the Swarm continue to work and you can deploy services as usual. However, without any automation, the operator needs to intervene at some point to restore the cluster before another outage hits the remaining nodes.
Because this cluster is managed by InfraKit, the replica running on 172.31.16.102 becomes the new master when that instance assumes leadership of the quorum. Because InfraKit is tasked with maintaining the specification of 3 manager instances with IP addresses 172.31.16.101, 172.31.16.102, and 172.31.16.103, it will take action when it notices that 172.31.16.101 is missing. To correct the situation, it will

Create a new instance with the private IP address 172.31.16.101
Attach the EBS volume that was previously associated with the downed instance
Restore the volume, so that Docker Engine and InfraKit start running on the new instance.
Join the new instance to the Swarm.

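The effect of these recovery steps can be modeled in a few lines. This is a toy model for illustration only (the real work is done by InfraKit’s AWS instance plugin): because the replacement reuses the downed node’s private IP and its data volume, its identity is indistinguishable from the node that was lost.

```python
# Toy model of InfraKit's self-healing: a replacement instance reuses
# the downed node's private IP and EBS volume, so the Swarm sees the
# same member again. Illustration only -- not InfraKit code.

def heal(spec, observed_instances, volumes):
    """Recreate any missing instance, re-attaching its data volume."""
    for private_ip in spec:
        if private_ip not in observed_instances:
            observed_instances[private_ip] = {
                "private_ip": private_ip,
                "volume": volumes[private_ip],  # same /var/lib/docker data
            }

spec = ["172.31.16.101", "172.31.16.102", "172.31.16.103"]
volumes = {ip: f"vol-for-{ip}" for ip in spec}

# 172.31.16.101 has just been terminated:
observed = {ip: {"private_ip": ip, "volume": volumes[ip]}
            for ip in spec if ip != "172.31.16.101"}

heal(spec, observed, volumes)
print(sorted(observed))  # all three managers are present again
```

The key point the model captures is that only the public IP is ephemeral; everything that defines the node’s identity to the Swarm is restored.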
As seen above, the new instance at private IP 172.31.16.101 now has an ephemeral public IP address of 35.157.163.34, whereas it was previously 35.156.207.156. We also see that the EBS volume has been re-attached:

Because the EBS volume is re-attached as /var/lib/docker on the new instance and the same IP address is reused, the new instance appears exactly as though the downed instance had been resurrected and rejoined the cluster. So as far as the Swarm is concerned, 172.31.16.101 may as well have been subjected to a temporary network partition and has since recovered and rejoined the cluster:

At this point, the cluster has recovered without any manual intervention.  The managers are now showing as healthy, and the quorum lives on!
Conclusion
While this example is only a proof-of-concept, we hope it demonstrates the potential of InfraKit as an active infrastructure orchestrator which can make a distributed computing cluster both fault-tolerant and self-healing.  As these features and capabilities mature and harden, we will incorporate them into Docker products such as Docker Editions for AWS and Azure.
InfraKit is a young and rapidly evolving project, and we are actively testing and building ways to safeguard and automate the operation of large distributed computing clusters. This project is being developed in the open, and your ideas and feedback can help guide us down the path toward making distributed computing resilient and easy to operate.
Check out the InfraKit repository README for more info, a quick tutorial, and examples to start experimenting with: from plain files to Terraform integration to building a ZooKeeper ensemble. Have a look, explore, and join us on GitHub or online at the Docker Community Slack Channel (#infrakit). Send us a PR, open an issue, or just say hello. We look forward to hearing from you!
More Resources:

Check out all the Infrastructure Plumbing projects
The InfraKit examples GitHub repo
Sign up for Docker for AWS or Docker for Azure
Try Docker today 

Part 2: InfraKit and Docker Swarm Mode: A Fault-Tolerant and Self-Healing Cluster by @dchungsfClick To Tweet

The post InfraKit and Docker Swarm Mode: A Fault-Tolerant and Self-Healing Cluster appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Swarm Mode with Fleet Management and Collaboration now in public beta, powered by Docker Cloud

With the introduction of swarm mode in 1.12, we showed the world how simple it can be to provision a secure and fully-distributed Docker cluster on which to deploy highly available and scalable applications. The latest Docker 1.13 builds on and improves these capabilities with new features, such as secrets management.
Continuing with the trend that simplicity is paramount to empowering individuals and teams to achieve their goals, today we are bringing swarm mode support to Docker Cloud, with a number of new cloud-enabled capabilities. All of this is in addition to the continuous integration (CI) features of Docker Cloud, including automatic builds, tests, security scans and the world’s largest hosted registry of public and private Docker image repositories.

Fleet Management using Docker ID
Keeping track of many swarms sprawling across multiple regions or cloud providers can be a challenge. And securely connecting to remote swarms with TLS means teams must also spend time configuring and maintaining a public key infrastructure. By registering new or existing swarms with Docker Cloud, teams can now easily manage a large number of swarms running anywhere, and need only their Docker ID to authenticate and securely access any of them.
Docker for AWS and Docker for Azure Integration
Individuals and teams can now also provision new swarms on their IaaS provider of choice using Docker Cloud. Swarms are created using Docker CE for AWS and Docker CE for Azure, which allows these swarms to take advantage of the native capabilities of their respective cloud platforms. Swarms provisioned this way are automatically registered with Docker Cloud and can be accessed remotely and securely using your Docker ID.

Swarm Collaboration
Using the team capabilities in Docker Cloud, organizations have full control over who has access to which swarms, allowing you, for example, to grant your development team access to your staging swarms and your operations team access to your production swarms.
Docker for Mac and Docker for Windows Integration
We’re bringing fleet management to the developer desktop too! When using Docker for Mac or Docker for Windows, simply login with your Docker ID to see a list of all your accessible swarms registered with Docker Cloud. From there, it’s a single click to securely connect to any swarm and begin managing it. You and your team can easily check the status of an existing application or deploy new applications right from within your local shell.
But wait, there’s more! Docker for Mac and Docker for Windows users that login using their Docker ID can now also create and manage public and private repositories directly through their desktop application.
Please note: fleet management and other integrations with Docker Cloud are currently only available in the Docker for Mac and Docker for Windows edge channel.

Under the hood of Swarm Mode with Fleet Management and Collaboration
Swarm mode with Fleet Management and Collaboration, powered by Docker Cloud, is only possible thanks to the many and diverse open source projects and tools created by Docker and its open source contributors. This announcement is the culmination of work that spans our open source SwarmKit project, our best-in-class IaaS integrations with the industry’s top cloud providers, the native Docker for Mac and Docker for Windows applications, and Docker’s own hosted cloud services. This is an experience that only Docker can deliver.
It is our mission to build tools of mass innovation. At Docker we build powerful technology that is simple to use, providing individuals and teams with the tools they need to accomplish their goals. We hope you enjoy this newest public beta release and look forward to your feedback.
Check out these additional resources to learn more:

Docs for Swarm Mode in Docker Cloud
Get Docker for Mac (edge channel)
Get Docker for Windows (edge channel)
Watch the Fleet Management demo
E-mail us your feedback

 Fleet management and collaboration for Docker now availableClick To Tweet

The post Swarm Mode with Fleet Management and Collaboration now in public beta, powered by Docker Cloud appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Announcing Docker Enterprise Edition

Today we are announcing Enterprise Edition (EE), a new version of the Docker platform optimized for business-critical deployments. Docker EE is supported by Docker Inc., is available on certified operating systems and cloud providers and runs certified Containers and Plugins from Docker Store. Docker EE is available in three tiers: Basic comes with the Docker platform, support and certification, and Standard and Advanced tiers add advanced container management (Docker Datacenter) and Docker Security Scanning.

For consistency, we are also renaming the free Docker products to Docker Community Edition (CE) and adopting a new lifecycle and time-based versioning scheme for both Docker EE and CE. Today’s Docker CE and EE 17.03 release is the first to use the new scheme.
Docker CE and EE are released quarterly, and CE also has a monthly “Edge” option. Each Docker EE release is supported and maintained for one year and receives security and critical bugfixes during that period. We are also improving Docker CE maintainability by maintaining each quarterly CE release for 4 months. That gives Docker CE users a 1-month window to update from one version to the next.
Both Docker CE and EE are available on a wide range of popular operating systems and cloud infrastructure. This gives developers, devops teams and enterprises the freedom to run Docker and Docker apps on their favorite infrastructure without risk of lock-in.
To download free Docker CE and to try or buy Docker EE, head over to Docker Store. Also check out the companion blog post on the Docker Certified Program. Or read on for details on Docker CE and EE and the new versioning and lifecycle improvements.
Docker Enterprise Edition
Docker Enterprise Edition (EE) is an integrated, supported and certified container platform for CentOS, Red Hat Enterprise Linux (RHEL), Ubuntu, SUSE Linux Enterprise Server (SLES), Oracle Linux, and Windows Server 2016, as well as for the cloud providers AWS and Azure. In addition to certifying Docker EE on the underlying infrastructure, we are introducing the Docker Certification Program, which includes technology from our ecosystem partners: ISV containers that run on top of Docker, and networking and storage plugins that extend the Docker platform.
Docker and Docker partners provide cooperative support for Certified Containers and Plugins so customers can confidently use these products in production. Check out the companion blog post for more details and browse and install certified content from Docker Store. Sign up here if you’re interested in partnering to certify software for the Docker platform.
Docker EE is available in three tiers: Basic, Standard and Advanced.

Basic: The Docker platform for certified infrastructure, with support from Docker Inc. and certified Containers and Plugins from Docker Store
Standard: Adds advanced image and container management, LDAP/AD user integration, and role-based access control (Docker Datacenter)
Advanced: Adds Docker Security Scanning and continuous vulnerability monitoring

Docker EE is available as a free trial and for purchase from Docker Sales, online via Docker Store, and is supported by Alibaba, Canonical, HPE, IBM, Microsoft and by a network of regional partners.
Docker Community Edition and Lifecycle Improvements
Docker Community Edition (CE) is the new name for the free Docker products. Docker CE runs on Mac and Windows 10, on AWS and Azure, and on CentOS, Debian, Fedora, and Ubuntu and is available from Docker Store. Docker CE includes the full Docker platform and is great for developers and DIY ops teams starting to build container apps.
The launch of Docker CE and EE brings big enhancements to the lifecycle, maintainability and upgradability of Docker. Starting with today’s release, version 17.03, Docker is moving to time-based releases and a YY.MM versioning scheme, similar to the scheme used by Canonical for Ubuntu.
The Docker CE experience can be enhanced with free and paid add-ons from Docker Cloud, a set of cloud-based managed services that include automated builds, continuous integration, public and private Docker image repos, and security scanning.
Docker CE comes in two variants:

Edge is for users wanting a drop of the latest and greatest features every month
Stable is released quarterly and is for users that want an easier-to-maintain release pace

Edge releases only get security and bug-fixes during the month they are current. Quarterly stable releases receive patches for critical bug fixes and security issues for 4 months after initial release. This gives users of the quarterly releases a 1-month upgrade window between each release where it’s possible to stay on an old version while still getting fixes. This is an improvement over the previous lifecycle, which dropped maintenance for a release as soon as a new one became available.
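The overlap arithmetic behind that upgrade window is simple: with a stable release every 3 months and each release maintained for 4 months, consecutive maintenance windows overlap by exactly 1 month. A small sketch, using illustrative YY.MM labels:

```python
# Sketch of the CE stable lifecycle: quarterly releases, each
# maintained for 4 months, giving a 1-month upgrade overlap.
# The version labels below are illustrative YY.MM values.

MAINTENANCE_MONTHS = 4

def months_since_epoch(version):
    """Convert a YY.MM label like '17.03' to an absolute month count."""
    yy, mm = version.split(".")
    return int(yy) * 12 + int(mm)

def upgrade_window(current, following):
    """Months during which both releases still receive fixes."""
    end_of_current = months_since_epoch(current) + MAINTENANCE_MONTHS
    start_of_next = months_since_epoch(following)
    return end_of_current - start_of_next

print(upgrade_window("17.03", "17.06"))  # 1 month to upgrade
```

Under the previous lifecycle the maintenance period effectively equaled the release interval, so this window was zero: a release lost maintenance the moment its successor shipped.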
Docker EE is released quarterly and each release is supported and maintained for a full year. Security patches and bugfixes are backported to all supported versions. This extended support window, together with certification and support, gives Docker EE subscribers the confidence they need to run business critical apps on Docker.

The Docker API version continues to be independent of the Docker platform version, and the API version does not change from Docker 1.13.1 to Docker 17.03. Even with the faster release pace, Docker will continue to maintain careful API backwards compatibility and will deprecate APIs and features only slowly and conservatively. Docker 1.13 also introduced improved interoperability between clients and servers using different API versions, including dynamic feature negotiation.
In addition to clarifying and improving the Docker release life-cycle for users, the new deterministic release train also benefits the Docker project. Maintainers and partners who want to ship new features in Docker are now guaranteed that new features will be in the hands of Edge users within a month of being merged.

New Docker Enterprise Edition, an integrated, supported and certified container platformClick To Tweet

 
Get Started Today
Docker CE and EE are an evolution of the Docker Platform designed to meet the needs of developers, ops and enterprise IT teams. No matter the operating system or cloud infrastructure, Docker CE and EE let you install, upgrade, and maintain Docker with the support and assurances required for your particular workload.
Here are additional resources:

Register for the Webinar: Docker EE
Download Docker CE from Docker Store
Try Docker EE for free and view pricing plans
Learn More about Docker Certified program
Read the docs 

FAQ
Is this a breaking change to Docker?
No. Docker carefully maintains backwards API compatibility, and only removes features after deprecating them for a period of 3 stable releases. Docker 17.03 uses the same API version as Docker 1.13.1.

What do I need to do to upgrade?
Docker CE for Mac and Windows users will get an automatic upgrade notification. Docker for AWS and Azure users can refer to the release notes for upgrade instructions. Legacy docker-engine package users can upgrade using their distro package manager or upgrade to the new docker-ce package.

Why is Docker adopting a new versioning scheme?
To improve the predictability and cadence of Docker releases, we’re adopting a monthly and quarterly release pattern. This will benefit the project overall: instead of waiting an indeterminate period of time after a PR is merged for a feature to be released, contributors will see improvements in the hands of users within a month.
A time-based version is a good way to underscore the change, and to signify the time-based release cadence.

I’m a Docker DDC or CS Engine customer. Do I have to upgrade to Docker EE to continue to get support?
No. Docker will continue to support customers with valid subscriptions whether the subscription covers Docker EE or Commercially Supported Docker. Customers can choose to stay with their current deployed version or upgrade to the latest Docker EE 17.03. For more details, see the Scope of Coverage and Maintenance Lifecycle at https://success.docker.com/Policies/Scope_of_Support
The post Announcing Docker Enterprise Edition appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Introducing the Docker Certification Program for Infrastructure, Plugins and Containers

In conjunction with the introduction of Enterprise Edition (EE), we are excited to announce the Docker Certification Program and the availability of partner technologies through Docker Store. A vibrant ecosystem is a sign of a healthy platform, and by providing a program that aligns Docker’s commercial platform with the innovation coming from our partners, we are collectively expanding choice for customers investing in the Docker platform.
The Docker Certification Program is designed for both technology partners and enterprise customers to recognize Containers and Plugins that excel in quality, collaborative support and compliance. Docker Certification is aligned to the available Docker EE infrastructure and gives enterprises a trusted way to run more technology in containers with support from both Docker and the publisher. Customers can quickly identify the Certified Containers and Plugins with visible badges and be confident that they were built with best practices and tested to operate smoothly on Docker EE.
Save Your Seat for the Webinar: Docker Certified and Store on March 21st.
There are three categories of Docker Certified technology available:

Certified Infrastructure: Includes operating systems and cloud providers that the Docker platform is integrated with, optimized for, and tested against for certification. Through this, Docker provides a great user experience and preserves application portability.
Certified Container: Independent Software Vendors (ISVs) are able to package and distribute their software as containers directly to the end user. These containers are tested, built with Docker recommended best practices, scanned for vulnerabilities, and reviewed before posting on Docker Store.
Certified Plugin: Networking and volume plugins for Docker EE are now available to be packaged and distributed to end users as containers. These plugin containers are built with Docker recommended best practices, are scanned for vulnerabilities, and must pass an additional suite of API compliance tests before they are reviewed and posted on Docker Store. Apps are portable across different network and storage infrastructure and work with new plugins without recoding.

Docker Certification presents an evolution of the Docker platform from Linux hackers to a broader community of developers and IT ops teams at businesses of all sizes looking to build and deploy apps on Docker for Linux and Windows on any infrastructure. Many components of their enterprise environment will come from third parties and Docker Certified accelerates the adoption of those technologies into Docker environments with assurances and support.
From the Ecosystem to Docker Certified Publisher
The Docker Certified badge is a great way for technology partners to differentiate their solutions to the millions of Docker users out there today. Upon completion of testing, review and posting to Docker Store, these certified listings will display the Docker Certified badge so customers can quickly understand which containers and plugins meet this extra criteria. Docker Store provides a marketplace for publishers to distribute, sell and manage their listings, and for customers to easily browse, evaluate and purchase third-party technology as containers. Customers will be able to manage all subscriptions (Docker products and third-party Store content) from a single place.
 

New Docker Certification Program, designed for both technology partners & enterprise customersClick To Tweet

Docker Store is the launch pad for all Docker container-based software, plugins and more. To kick off the program, we have the following Docker Certified technologies available starting today.

AVI Networks AviVantage
Cisco Contiv Network Plugin
Bleemeo Smart Agent
BlobCity DB
Blockbridge Volume Plugin
CodeCov Enterprise
Datadog
Gitlab Enterprise
Hedvig Docker Volume Plugin
HPE Sitescope
Hypergrid HyperCloud Block Storage Volume Plugin 
Kaazing Enterprise Gateway
Koekiebox Fluid
Microsoft winservercore, nanoserver, mssql-server-linux, mssql-server-windows-express, aspnet, dotnet core, iis
NetApp NDVP Volume Plugin
Nexenta Volume Plugin
Nimble Storage Volume Plugin
Nutanix Volume Plugin
Polyverse Microservice Firewall
Portworx PX-Developer
Sysdig Cloud Monitoring Agent
Weaveworks Weave Cloud Agent


Get started today with the latest Docker Community and Enterprise Edition platforms and browse the Docker Store for Certified Containers and Plugins for a great new Docker experience.

Register for the Webinar featuring Docker Certified and Store
Search and Browse Certified Content and Plugins on Docker Store.
Interested in publishing? Apply to be a Docker Publisher Partner.
Learn more about Docker CE and Docker EE

The post Introducing the Docker Certification Program for Infrastructure, Plugins and Containers appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Build Your DockerCon Agenda!

It’s that time of the year again…the DockerCon Agenda Builder is live!
Whether you are a Docker beginner or have been dabbling in containers for a while now, we’re confident that DockerCon 2017 will have the right content for you. With 7 tracks and more than 60 sessions presented by Docker Engineering, Docker Captains, community members and corporate heavyweights such as Intuit, MetLife, PayPal, Activision and Netflix, DockerCon 2017 will cover a wide range of container tech use cases and topics.
Build your agenda
We encourage you to review the catalogue of DockerCon sessions and build your agenda for the week. You’ll find a new agenda builder that allows you to apply filters based on your areas of interest, experience, job role and more!
Check Out All The Sessions
 

One of our favorite features of the Agenda Builder is the recommendations generated based on your profile and the sessions you mark as interesting. To unlock the recommendations feature, you’ll need to sign up for a DockerCon account.

Within this tool you’ll be able to adjust your agenda, rate sessions and add notes to reference after the conference. All of your selections will be available in the DockerCon mobile app once it’s launched.
So without further ado, happy DockerCon agenda building!
DockerCon All the Things
More info about DockerCon:

What’s new at DockerCon?
5 reasons to attend DockerCon
Convince your manager to send you to DockerCon

 

It’s time to build your @DockerCon Agenda!Click To Tweet

The post Build Your DockerCon Agenda! appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/