New Year’s Resolution: Learn Docker

Remember last year when I said the market for Docker jobs was blowing up? Well, it’s more than doubled in the last year. Swarm is also rising quickly, growing 12,829%, almost all of that in the last year. With our Microsoft partnership and Windows Docker containers, we expect this to grow even faster in the next year as .NET developers start to containerize their applications and Windows IT professionals start porting their infrastructure to Docker. Take a look at this trendline from indeed.com.

So what are you doing to increase your Docker skills? Want a few suggestions?
Whether you’re a developer or more of an ops person, a great place to start is the Docker Labs repository, which currently has 28 labs to choose from. They range from beginner tutorials to orchestration workshops, security and networking tutorials, and guides for using different programming languages and developer tools.
Of course there’s also the Docker Documentation, which has a rich set of resources.
At DockerCon 2017 in April, there will be a rich set of material for beginners and experts alike, and you will get to meet people from all over the world who use Docker in their daily lives. Here are just a few things attendees can do at DockerCon:

Learn about Docker, from getting started to deep dives into Docker internals, from Docker Captains
Take hands-on, self-paced labs that give you practical skills
Learn about the ecosystem of companies that build on Docker in our Expo Hall.
And if you are really passionate about Docker, our recruiting team will have a booth there too, so check out our careers page

You can also take a training course. We offer instructor-led training all over the world, or you can take a self-paced course.
Or connect with the Docker Community by attending a Docker Event including meetups and webinars. There’s also a Docker Community list you can join that will give you access to a Docker Slack Channel, where you can go for support and discussion.


The post New Year’s Resolution: Learn Docker appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Top Docker content of 2016

2016 has been an amazing year for Docker and the container industry. We had three major releases of Docker Engine this year, and a tremendous increase in usage. The community has been following along and contributing amazing Docker resources to help you learn and get hands-on experience. Here is some of the most read and viewed content of the year:
Releases
Of course releases are always really popular, particularly when they address requests from the community. In particular, we had:

Docker for Mac & Docker for Windows Beta and GA release blog posts, and the video

Docker 1.12 Built-in Orchestration release, and the DockerCon keynote where we announced it

And the release of the Docker for AWS and Azure beta

Windows Containers
When Microsoft made Windows Server 2016 generally available, people rushed to

Our release blog to read the news
Tutorials to find out how to use Windows containers powered by Docker
The commercial relationship blog post to understand how it all fits together

About Docker
We also provide a lot of information about how to use Docker. In particular, these posts and articles that we shared on social media were the most read:

Containers are Not VMs by Mike Coleman
9 Critical Decisions for Running Docker in Production by James Higginbotham
A Comparative Study of Docker Engine on Windows Server vs. Linux Platform by Docker Captain Ajeet Singh Raina
Our white paper, The Definitive Guide To Docker

How to Use Docker
Docker has a wide variety of use cases, so articles and videos about how to use it are really popular. In particular, when we share content from our users and Docker Captains, it gets a lot of views:

Getting started with Docker 1.12 and Raspberry Pi by Docker Captain Alex Ellis
Docker: Making our bioinformatics easier and more reproducible by Jeremy Yoder
NGINX as a Reverse Proxy for Docker by Lorenzo Fontana
5 minute guide for getting Docker 1.12.1 running on your Raspberry Pi 3 by Docker Captain Ajeet Singh Raina
The Docker Cheat Sheet
Docker for Developers

Cgroups, namespaces, and beyond

Still hungry for more info? Here are some more Docker resources:

Follow all the Captains in one shot with Docker by Docker Captain Alex Ellis
Docker labs and tutorials on GitHub
Follow us on Twitter and Facebook, or join our LinkedIn group
Join the Docker Community Directory and Slack
And of course, keep following this blog for more exciting info



Kubernetes supports OpenAPI

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.5.

OpenAPI allows API providers to define their operations and models, and enables developers to automate their tools and generate clients in their favorite language to talk to that API server. Kubernetes has supported swagger 1.2 (an older version of the OpenAPI spec) for a while, but the spec was incomplete and invalid, making it hard to generate tools and clients based on it.

In Kubernetes 1.4, we introduced alpha support for the OpenAPI spec (formerly known as swagger 2.0 before it was donated to the OpenAPI Initiative) by upgrading the current models and operations. Beginning in Kubernetes 1.5, support for the OpenAPI spec is completed by auto-generating the spec directly from Kubernetes source, which will keep the spec, and the documentation, completely in sync with future changes in operations and models.

The new spec enables us to have better API documentation, and we have even introduced a supported Python client.

The spec is modular, divided by GroupVersion: this is future-proof, since we intend to allow separate GroupVersions to be served out of separate API servers. We used the operations’ tags to separate each GroupVersion and filled in as much information as we could about paths, operations, and models. The structure of the spec is explained in detail in the OpenAPI spec definition. For a specific operation, all parameters, the method of call, and the responses are documented.
For example, the OpenAPI spec for reading a pod’s information is:

{
  ...
  "paths": {
    "/api/v1/namespaces/{namespace}/pods/{name}": {
      "get": {
        "description": "read the specified Pod",
        "consumes": ["*/*"],
        "produces": [
          "application/json",
          "application/yaml",
          "application/vnd.kubernetes.protobuf"
        ],
        "schemes": ["https"],
        "tags": ["core_v1"],
        "operationId": "readCoreV1NamespacedPod",
        "parameters": [
          {
            "uniqueItems": true,
            "type": "boolean",
            "description": "Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'.",
            "name": "exact",
            "in": "query"
          },
          {
            "uniqueItems": true,
            "type": "boolean",
            "description": "Should this value be exported. Export strips fields that a user can not specify.",
            "name": "export",
            "in": "query"
          }
        ],
        "responses": {
          "200": {
            "description": "OK",
            "schema": {"$ref": "#/definitions/v1.Pod"}
          },
          "401": {"description": "Unauthorized"}
        }
      },
      ...
    }
  }
  ...
}

Using this information and the URL of `kube-apiserver`, one should be able to call the given URL (/api/v1/namespaces/{namespace}/pods/{name}) with parameters such as `name`, `exact`, and `export` to get the pod’s information. Client library generators would also use this information to create an API function call for reading a pod’s information.
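To make the spec’s structure concrete, here is a short, hypothetical sketch (plain Python, standard library only, not part of any Kubernetes tooling) that pulls the operation id and query parameters out of a trimmed fragment like the one above:

```python
import json

# A trimmed version of the spec fragment shown above (hypothetical subset).
SPEC = json.loads("""
{
  "paths": {
    "/api/v1/namespaces/{namespace}/pods/{name}": {
      "get": {
        "description": "read the specified Pod",
        "tags": ["core_v1"],
        "operationId": "readCoreV1NamespacedPod",
        "parameters": [
          {"type": "boolean", "name": "exact", "in": "query"},
          {"type": "boolean", "name": "export", "in": "query"}
        ],
        "responses": {"200": {"schema": {"$ref": "#/definitions/v1.Pod"}}}
      }
    }
  }
}
""")

def describe_operations(spec):
    """Yield (method, path, operationId, query_param_names) for each operation."""
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            query = [p["name"] for p in op.get("parameters", [])
                     if p.get("in") == "query"]
            yield method.upper(), path, op["operationId"], query

for method, path, op_id, query in describe_operations(SPEC):
    print(method, path, op_id, query)
```

Client generators do essentially this, walking every path and operation to emit one function per operationId.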
For example, the Python client makes it easy to call this operation:

from kubernetes import client
ret = client.CoreV1Api().read_namespaced_pod(name="pods_name", namespace="default")

A simplified version of the generated read_namespaced_pod can be found here. The swagger-codegen document generator can also create documentation using the same information:

GET /api/v1/namespaces/{namespace}/pods/{name} (readCoreV1NamespacedPod)
read the specified Pod

Path parameters:
  name (required): name of the Pod
  namespace (required): object name and auth scope, such as for teams and projects

Consumes:
  This API call consumes the following media types via the Content-Type request header: */*

Query parameters:
  pretty (optional): If 'true', then the output is pretty printed.
  exact (optional): Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'.
  export (optional): Should this value be exported. Export strips fields that a user can not specify.

Return type:
  v1.Pod

Produces:
  This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header:
  application/json
  application/yaml
  application/vnd.kubernetes.protobuf

Responses:
  200: OK (v1.Pod)
  401: Unauthorized

There are two ways to access the OpenAPI spec:

From `kube-apiserver`/swagger.json. This file will have all enabled GroupVersions’ routes and models, and will be the most up-to-date file for a specific `kube-apiserver`.
From the Kubernetes GitHub repository, with all core GroupVersions enabled. You can access it on master or on a specific release (for example, the 1.5 release).

There are numerous tools that work with this spec. For example, you can use the swagger editor to open the spec file and render documentation, as well as generate clients; or you can directly use swagger-codegen to generate documentation and clients.
The clients this generates will mostly work out of the box, but you will need some support for authorization and some Kubernetes-specific utilities. Use the Python client as a template to create your own client. If you want to get involved in the development of OpenAPI support or client libraries, or to report a bug, you can get in touch with the developers at SIG-API-Machinery.

–Mehdy Bohlool, Software Engineer, Google

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
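As a closing illustration of the spec’s modularity described above, here is a hypothetical sketch (standard library only; the second path and operationId are made-up examples) that groups operationIds by their GroupVersion tag, mirroring how the spec separates GroupVersions:

```python
from collections import defaultdict

# Hypothetical, trimmed spec: two operations tagged with their GroupVersions.
SPEC = {
    "paths": {
        "/api/v1/namespaces/{namespace}/pods/{name}": {
            "get": {"tags": ["core_v1"],
                    "operationId": "readCoreV1NamespacedPod"},
        },
        "/apis/extensions/v1beta1/namespaces/{namespace}/ingresses": {
            "get": {"tags": ["extensions_v1beta1"],
                    "operationId": "listNamespacedIngress"},
        },
    }
}

def operations_by_group(spec):
    """Group operationIds by their GroupVersion tag (core_v1, extensions_v1beta1, ...)."""
    groups = defaultdict(list)
    for path_item in spec["paths"].values():
        for op in path_item.values():
            for tag in op.get("tags", []):
                groups[tag].append(op["operationId"])
    return dict(groups)

print(operations_by_group(SPEC))
```

A tool generating one client module per GroupVersion could start from exactly this kind of grouping.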
Quelle: kubernetes

Cluster Federation in Kubernetes 1.5

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.5.

In the latest Kubernetes 1.5 release, you’ll notice that support for Cluster Federation is maturing. That functionality was introduced in Kubernetes 1.3, and the 1.5 release includes a number of new features, including an easier setup experience and a step closer to supporting all Kubernetes API objects.

A new command line tool called ‘kubefed’ was introduced to make getting started with Cluster Federation much simpler. Also, alpha level support was added for Federated DaemonSets, Deployments and ConfigMaps. In summary:

DaemonSets are Kubernetes deployment rules that guarantee that a given pod is always present at every node, as new nodes are added to the cluster (more info).
Deployments describe the desired state of Replica Sets (more info).
ConfigMaps are variables applied to Replica Sets (which greatly improves image reusability as their parameters can be externalized – more info).

Federated DaemonSets, Federated Deployments, and Federated ConfigMaps take the qualities of the base concepts to the next level. For instance, Federated DaemonSets guarantee that a pod is deployed on every node of a newly added cluster.

But what actually is “federation”? Let’s explain it by the needs it satisfies. Imagine a service that operates globally. Naturally, all its users expect to get the same quality of service, whether they are located in Asia, Europe, or the US. What this means is that the service must respond equally fast to requests at each location. This sounds simple, but there’s lots of logic involved behind the scenes. This is what Kubernetes Cluster Federation aims to do.

How does it work? One of the Kubernetes clusters must become a master by running a Federation Control Plane. In practice, this is a controller that monitors the health of other clusters, and provides a single entry point for administration. The entry point behaves like a typical Kubernetes cluster.
It allows creating Replica Sets, Deployments, and Services, but the federated control plane passes the resources to the underlying clusters. This means that if we request the federation control plane to create a Replica Set with 1,000 replicas, it will spread the request across all underlying clusters. If we have 5 clusters, then by default each will get its share of 200 replicas.

This on its own is a powerful mechanism. But there’s more. It’s also possible to create a Federated Ingress. Effectively, this is a global application-layer load balancer. Thanks to an understanding of the application layer, it allows load balancing to be “smarter” – for instance, by taking into account the geographical location of clients and servers, and routing the traffic between them in an optimal way.

In summary, with Kubernetes Cluster Federation, we can facilitate administration of all the clusters (single access point), but also optimize global content delivery. In the following sections, we will show how it works.

Creating a Federation Plane

In this exercise, we will federate a few clusters. For convenience, all commands have been grouped into 6 scripts available here:

0-settings.sh
1-create.sh
2-getcredentials.sh
3-initfed.sh
4-joinfed.sh
5-destroy.sh

First we need to define several variables (0-settings.sh):

$ cat 0-settings.sh && . 0-settings.sh
# this project creates 3 clusters in 3 zones. FED_HOST_CLUSTER points to the one which will be used to deploy the federation control plane
export FED_HOST_CLUSTER=us-east1-b
# Google Cloud project name
export FED_PROJECT=<YOUR PROJECT e.g. company-project>
# DNS suffix for this federation. Federated Service DNS names are published with this suffix. This must be a real domain name that you control and is programmable by one of the DNS providers (Google Cloud DNS or AWS Route53)
export FED_DNS_ZONE=<YOUR DNS SUFFIX e.g. example.com>

And get the kubectl and kubefed binaries.
(For installation instructions, refer to the guides here and here.)

Now the setup is ready to create a few Google Container Engine (GKE) clusters with gcloud container clusters create (1-create.sh). In this case one is in the US, one in Europe and one in Asia.

$ cat 1-create.sh && . 1-create.sh
gcloud container clusters create gce-us-east1-b --project=${FED_PROJECT} --zone=us-east1-b --scopes cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite
gcloud container clusters create gce-europe-west1-b --project=${FED_PROJECT} --zone=europe-west1-b --scopes cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite
gcloud container clusters create gce-asia-east1-a --project=${FED_PROJECT} --zone=asia-east1-a --scopes cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite

The next step is fetching the kubectl configuration with gcloud -q container clusters get-credentials (2-getcredentials.sh). The configurations will be used to indicate the current context for kubectl commands.

$ cat 2-getcredentials.sh && . 2-getcredentials.sh
gcloud -q container clusters get-credentials gce-us-east1-b --zone=us-east1-b --project=${FED_PROJECT}
gcloud -q container clusters get-credentials gce-europe-west1-b --zone=europe-west1-b --project=${FED_PROJECT}
gcloud -q container clusters get-credentials gce-asia-east1-a --zone=asia-east1-a --project=${FED_PROJECT}

Let’s verify the setup:

$ kubectl config get-contexts
CURRENT   NAME                                                        CLUSTER                                                     AUTHINFO                                                    NAMESPACE
*         gke_container-solutions_europe-west1-b_gce-europe-west1-b   gke_container-solutions_europe-west1-b_gce-europe-west1-b   gke_container-solutions_europe-west1-b_gce-europe-west1-b
          gke_container-solutions_us-east1-b_gce-us-east1-b           gke_container-solutions_us-east1-b_gce-us-east1-b           gke_container-solutions_us-east1-b_gce-us-east1-b
          gke_container-solutions_asia-east1-a_gce-asia-east1-a       gke_container-solutions_asia-east1-a_gce-asia-east1-a       gke_container-solutions_asia-east1-a_gce-asia-east1-a

We have 3 clusters. One, indicated by the FED_HOST_CLUSTER environment variable, will be used to run the federation plane. For this, we will use the kubefed init federation command (3-initfed.sh).

$ cat 3-initfed.sh && . 3-initfed.sh
kubefed init federation --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER} --dns-zone-name=${FED_DNS_ZONE}

You will notice that after executing the above command, a new kubectl context has appeared:

$ kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO     NAMESPACE
...
          federation   federation

The federation context will become our administration entry point. Now it’s time to join clusters (4-joinfed.sh):

$ cat 4-joinfed.sh && . 4-joinfed.sh
kubefed --context=federation join cluster-europe-west1-b --cluster-context=gke_${FED_PROJECT}_europe-west1-b_gce-europe-west1-b --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER}
kubefed --context=federation join cluster-asia-east1-a --cluster-context=gke_${FED_PROJECT}_asia-east1-a_gce-asia-east1-a --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER}
kubefed --context=federation join cluster-us-east1-b --cluster-context=gke_${FED_PROJECT}_us-east1-b_gce-us-east1-b --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER}

Note that cluster gce-us-east1-b is used here to run the federation control plane and also to work as a worker cluster. This circular dependency helps to use resources more efficiently, and it can be verified by using the kubectl --context=federation get clusters command:

$ kubectl --context=federation get clusters
NAME                     STATUS    AGE
cluster-asia-east1-a     Ready     7s
cluster-europe-west1-b   Ready     10s
cluster-us-east1-b       Ready     10s

We are good to go.

Using Federation To Run An Application

In our repository you will find instructions on how to build a Docker image with a web service that displays the container’s hostname and the Google Cloud Platform (GCP) zone. An example output might look like this:

{"hostname":"k8shserver-6we2u","zone":"europe-west1-b"}

Now we will deploy the Replica Set (k8shserver.yaml):

$ kubectl --context=federation create -f rs/k8shserver

And a Federated Service (k8shserver.yaml):

$ kubectl --context=federation create -f service/k8shserver

As you can see, the two commands refer to the “federation” context, i.e. to the federation control plane. After a few minutes, you will see that the underlying clusters run the Replica Set and the Service.

Creating The Ingress

After the Service is ready, we can create the Ingress – the global load balancer.
The command is like this:

$ kubectl --context=federation create -f ingress/k8shserver.yaml

The contents of the file point to the service we created in the previous step:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8shserver
spec:
  backend:
    serviceName: k8shserver
    servicePort: 80

After a few minutes, we should get a global IP address:

$ kubectl --context=federation get ingress
NAME         HOSTS     ADDRESS          PORTS     AGE
k8shserver   *         130.211.40.125   80        20m

Effectively, the response of:

$ curl 130.211.40.125

depends on the location of the client. Something like this would be expected in the US:

{"hostname":"k8shserver-w56n4","zone":"us-east1-b"}

Whereas in Europe, we might have:

{"hostname":"k8shserver-z31p1","zone":"eu-west1-b"}

Please refer to this issue for additional details on how everything we’ve described works.

Demo

Summary

Cluster Federation is being actively worked on and is not yet fully generally available. Some APIs are in beta and others are in alpha. Some features are missing: for instance, cross-cloud load balancing is not supported (Federated Ingress currently only works on Google Cloud Platform, as it depends on GCP HTTP(S) Load Balancing).

Nevertheless, as the functionality matures, it will become an enabler for all companies that aim at global markets, but currently cannot afford sophisticated administration techniques as used by the likes of Netflix or Amazon. That’s why we closely watch the technology, hoping that it soon fulfills its promise.

PS. When done, remember to destroy your clusters:

$ . 5-destroy.sh

–Lukasz Guminski, Software Engineer at Container Solutions; Allan Naim, Product Manager; and Madhu C.S., Software Engineer, Google
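As described at the start of this post, a Replica Set created through the federation control plane is spread across the underlying clusters (1,000 replicas over 5 clusters yields 200 each). A minimal sketch of that even-spread logic, assuming a plain unweighted distribution (the real controller also supports per-cluster weights and preferences):

```python
def spread_replicas(total, clusters):
    """Distribute `total` replicas evenly across `clusters`,
    handing out any remainder one-by-one to the first clusters."""
    base, remainder = divmod(total, len(clusters))
    return {name: base + (1 if i < remainder else 0)
            for i, name in enumerate(clusters)}

# The three clusters from this tutorial: 1,000 replicas do not divide
# evenly by 3, so one cluster receives the single leftover replica.
clusters = ["gce-us-east1-b", "gce-europe-west1-b", "gce-asia-east1-a"]
print(spread_replicas(1000, clusters))
```

With 5 clusters the same function returns 200 replicas per cluster, matching the example in the text.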

Windows Server Support Comes to Kubernetes

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.5.

Extending on the theme of giving users choice, the Kubernetes 1.5 release includes support for Windows Server. With more than 80% of enterprise apps running Java on Linux or .NET on Windows, Kubernetes is previewing capabilities that extend its reach to the vast majority of enterprise workloads. The new Kubernetes Windows Server 2016 and Windows Container support is a public preview with the following features:

Containerized Multiplatform Applications – Applications developed in operating system neutral languages like Go and .NET Core were previously impossible to orchestrate between Linux and Windows. Now, with support for Windows Server 2016 in Kubernetes, such applications can be deployed on both Windows Server as well as Linux, giving the developer choice of the operating system runtime. This capability has been desired by customers for almost two decades.

Support for Both Windows Server Containers and Hyper-V Containers – There are two types of containers in Windows Server 2016. Windows Server Containers are similar to Docker containers on Linux, and use kernel sharing. The other type, called Hyper-V Containers, is more lightweight than a virtual machine while at the same time offering greater isolation, its own copy of the kernel, and direct memory assignment. Kubernetes can orchestrate both of these types of containers.

Expanded Ecosystem of Applications – One of the key drivers of introducing Windows Server support in Kubernetes is to expand the ecosystem of applications supported by Kubernetes. IIS, .NET, Windows Services, ASP.NET, and .NET Core are some of the application types that can now be orchestrated by Kubernetes, running inside a container on Windows Server.

Coverage for Heterogeneous Data Centers – Organizations already use Kubernetes to host tens of thousands of application instances across the Global 2000 and Fortune 500.
This will allow them to expand Kubernetes to the large footprint of Windows Server.

The process to bring Windows Server to Kubernetes has been a truly multi-vendor effort, championed by the Windows Special Interest Group (SIG) – Apprenda, Google, Red Hat and Microsoft were all involved in bringing Kubernetes to Windows Server. On the community effort to bring Kubernetes to Windows Server, Taylor Brown, Principal Program Manager at Microsoft, stated that “This new Kubernetes community work furthers Windows Server container support options for popular orchestrators, reinforcing Microsoft’s commitment to choice and flexibility for both Windows and Linux ecosystems.”

Guidance for Current Usage

Where to use Windows Server support?
Right now organizations should start testing Kubernetes on Windows Server and provide feedback. Most organizations take months to set up hardened production environments, and general availability should arrive in the next few releases of Kubernetes.

What works?
Most of the Kubernetes constructs, such as Pods, Services, and Labels, work with Windows Containers.

What doesn’t work yet?
The Pod abstraction is not the same, due to networking namespaces. The net result is that Windows containers in a single Pod cannot communicate over localhost, whereas Linux containers can share a networking stack by being placed in the same network namespace.
DNS capabilities are not fully implemented.
UDP is not supported inside a container.

When will it be ready for all production workloads (general availability)?
The goal is to refine the networking and other areas that need work to give Kubernetes users a production version of Windows Server 2016 support – including the Windows Nano Server and Windows Server Core installation options – in the next couple of releases.

Technical Demo

Kubernetes on Windows Server 2016 Architecture

Roadmap

Support for Windows Server-based containers is in alpha release mode for Kubernetes 1.5, but the community is not stopping there.
Customers want enterprise-hardened container scheduling and management for their entire tech portfolio. That has to include full parity of features between Linux and Windows Server in production. The Windows Server SIG will deliver that parity within the next one or two releases of Kubernetes through a few key areas of investment:

Networking – the SIG will continue working side by side with Microsoft to enhance the networking backbone of Windows Server Containers, specifically around lighting up container mode networking and native network overlay support for container endpoints.
OOBE – improving the setup, deployment, and diagnostics for a Windows Server node, including the ability to deploy to any cloud (Azure, AWS, GCP).
Runtime Operations – the SIG will play a key part in defining the monitoring interface of the Container Runtime Interface (CRI), leveraging it to provide deep insight and monitoring for Windows Server-based containers.

Get Started

To get started with Kubernetes on Windows Server 2016, please visit the GitHub guide for more details. If you want to help with Windows Server support, then please connect with the Windows Server SIG or connect directly with Michael Michael, the SIG lead, on GitHub.

–Michael Michael, Senior Director of Product Management, Apprenda

StatefulSet: Run and Scale Stateful Applications Easily in Kubernetes

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.5.

In the latest release, Kubernetes 1.5, we’ve moved the feature formerly known as PetSet into beta as StatefulSet. There were no major changes to the API Object, other than the community-selected name, but we added the semantics of “at most one pod per index” for deployment of the Pods in the set. Along with ordered deployment, ordered termination, unique network names, and persistent stable storage, we think we have the right primitives to support many containerized stateful workloads. We don’t claim that the feature is 100% complete (it is software after all), but we believe that it is useful in its current form, and that we can extend the API in a backwards-compatible way as we progress toward an eventual GA release.

When is StatefulSet the Right Choice for my Storage Application?

Deployments and ReplicaSets are a great way to run stateless replicas of an application on Kubernetes, but their semantics aren’t really right for deploying stateful applications. The purpose of StatefulSet is to provide a controller with the correct semantics for deploying a wide range of stateful workloads. However, moving your storage application onto Kubernetes isn’t always the correct choice. Before you go all in on converging your storage tier and your orchestration framework, you should ask yourself a few questions.

Can your application run using remote storage or does it require local storage media?

Currently, we recommend using StatefulSets with remote storage. Therefore, you must be ready to tolerate the performance implications of network attached storage. Even with storage optimized instances, you won’t likely realize the same performance as locally attached, solid state storage media. Does the performance of network attached storage, on your cloud, allow your storage application to meet its SLAs?
If so, running your application in a StatefulSet provides compelling benefits from the perspective of automation. If the node on which your storage application is running fails, the Pod containing the application can be rescheduled onto another node, and, as it’s using network attached storage media, its data are still available after it’s rescheduled.

Do you need to scale your storage application?

What is the benefit you hope to gain by running your application in a StatefulSet? Do you have a single instance of your storage application for your entire organization? Is scaling your storage application a problem that you actually have? If you have a few instances of your storage application, and they are successfully meeting the demands of your organization, and those demands are not rapidly increasing, you’re already at a local optimum.

If, however, you have an ecosystem of microservices, or if you frequently stamp out new service footprints that include storage applications, then you might benefit from automation and consolidation. If you’re already using Kubernetes to manage the stateless tiers of your ecosystem, you should consider using the same infrastructure to manage your storage applications.

How important is predictable performance?

Kubernetes doesn’t yet support isolation for network or storage I/O across containers. Colocating your storage application with a noisy neighbor can reduce the QPS that your application can handle.
You can mitigate this by scheduling the Pod containing your storage application as the only tenant on a node (thus providing it a dedicated machine) or by using Pod anti-affinity rules to segregate Pods that contend for network or disk, but this means that you have to actively identify and mitigate hot spots.

If squeezing the absolute maximum QPS out of your storage application isn’t your primary concern, if you’re willing and able to mitigate hotspots to ensure your storage applications meet their SLAs, and if the ease of turning up new “footprints” (services or collections of services), scaling them, and flexibly re-allocating resources is your primary concern, Kubernetes and StatefulSet might be the right solution to address it.

Does your application require specialized hardware or instance types?

If you run your storage application on high-end hardware or extra-large instance sizes, and your other workloads on commodity hardware or smaller, less expensive images, you may not want to deploy a heterogeneous cluster. If you can standardize on a single instance size for all types of apps, then you may benefit from the flexible resource reallocation and consolidation that you get from Kubernetes.

A Practical Example – ZooKeeper

ZooKeeper is an interesting use case for StatefulSet for two reasons. First, it demonstrates that StatefulSet can be used to run a distributed, strongly consistent storage application on Kubernetes. Second, it’s a prerequisite for running workloads like Apache Hadoop and Apache Kafka on Kubernetes.
An in-depth tutorial on deploying a ZooKeeper ensemble on Kubernetes is available in the Kubernetes documentation, and we’ll outline a few of the key features below.

Creating a ZooKeeper Ensemble

Creating an ensemble is as simple as using kubectl create to generate the objects stored in the manifest.

$ kubectl create -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
service "zk-headless" created
configmap "zk-config" created
poddisruptionbudget "zk-budget" created
statefulset "zk" created

When you create the manifest, the StatefulSet controller creates each Pod, with respect to its ordinal, and waits for each to be Running and Ready prior to creating its successor.

$ kubectl get -w -l app=zk
NAME      READY     STATUS              RESTARTS   AGE
zk-0      0/1       Pending             0          0s
zk-0      0/1       Pending             0          0s
zk-0      0/1       Pending             0          7s
zk-0      0/1       ContainerCreating   0          7s
zk-0      0/1       Running             0          38s
zk-0      1/1       Running             0          58s
zk-1      0/1       Pending             0          1s
zk-1      0/1       Pending             0          1s
zk-1      0/1       ContainerCreating   0          1s
zk-1      0/1       Running             0          33s
zk-1      1/1       Running             0          51s
zk-2      0/1       Pending             0          0s
zk-2      0/1       Pending             0          0s
zk-2      0/1       ContainerCreating   0          0s
zk-2      0/1       Running             0          25s
zk-2      1/1       Running             0          40s

Examining the hostnames of each Pod in the StatefulSet, you can see that the Pods’ hostnames also contain the Pods’ ordinals.

$ for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
zk-0
zk-1
zk-2

ZooKeeper stores the unique identifier of each server in a file called “myid”. The identifiers used for ZooKeeper servers are just natural numbers.
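Since each Pod’s hostname ends in its ordinal, a small helper can recover the ordinal and derive the ZooKeeper server id from it (a hypothetical sketch; the actual tutorial does this with shell inside the container image):

```python
def ordinal(hostname):
    """Extract the StatefulSet ordinal from a Pod hostname like 'zk-2'."""
    return int(hostname.rsplit("-", 1)[1])

def myid(hostname):
    """ZooKeeper server ids are natural numbers: the Pod ordinal plus one."""
    return ordinal(hostname) + 1

for host in ["zk-0", "zk-1", "zk-2"]:
    print(host, "->", myid(host))
```

This is exactly the stable, index-derived identity that “at most one pod per index” guarantees: because ordinals never collide, neither do the server ids computed from them.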
For the servers in the ensemble, the “myid” files are populated by adding one to the ordinal extracted from the Pods’ hostnames.

$ for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3

Each Pod has a unique network address based on its hostname and the network domain controlled by the zk-headless Headless Service.

$ for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
zk-0.zk-headless.default.svc.cluster.local
zk-1.zk-headless.default.svc.cluster.local
zk-2.zk-headless.default.svc.cluster.local

The combination of a unique Pod ordinal and a unique network address allows you to populate the ZooKeeper servers’ configuration files with a consistent ensemble membership.

$ kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/log
tickTime=2000
initLimit=10
syncLimit=2000
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=1
server.1=zk-0.zk-headless.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-headless.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888

StatefulSet lets you deploy ZooKeeper in a consistent and reproducible way. You won’t create more than one server with the same id, the servers can find each other via stable network addresses, and they can perform leader election and replicate writes because the ensemble has consistent membership.

The simplest way to verify that the ensemble works is to write a value to one server and to read it from another.
You can use the “zkCli.sh” script that ships with the ZooKeeper distribution to create a ZNode containing some data.

$ kubectl exec zk-0 zkCli.sh create /hello world
…
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
Created /hello

You can use the same script to read the data from another server in the ensemble.

$ kubectl exec zk-1 zkCli.sh get /hello
…
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world
…

You can take the ensemble down by deleting the zk StatefulSet.

$ kubectl delete statefulset zk
statefulset "zk" deleted

The cascading delete destroys each Pod in the StatefulSet, in reverse order of the Pods’ ordinals, and it waits for each to terminate completely before terminating its predecessor.

$ kubectl get pods -w -l app=zk
NAME      READY     STATUS    RESTARTS   AGE
zk-0      1/1       Running   0          14m
zk-1      1/1       Running   0          13m
zk-2      1/1       Running   0          12m
NAME      READY     STATUS        RESTARTS   AGE
zk-2      1/1       Terminating   0          12m
zk-1      1/1       Terminating   0          13m
zk-0      1/1       Terminating   0          14m
zk-2      0/1       Terminating   0          13m
zk-2      0/1       Terminating   0          13m
zk-2      0/1       Terminating   0          13m
zk-1      0/1       Terminating   0          14m
zk-1      0/1       Terminating   0          14m
zk-1      0/1       Terminating   0          14m
zk-0      0/1       Terminating   0          15m
zk-0      0/1       Terminating   0          15m
zk-0      0/1       Terminating   0          15m

You can use kubectl apply to recreate the zk StatefulSet and redeploy the ensemble.

$ kubectl apply -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
service "zk-headless" configured
configmap "zk-config" configured
statefulset "zk" created

If you use the “zkCli.sh” script to get the value entered prior to deleting the StatefulSet, you will find that the ensemble still serves the data.

$ kubectl exec zk-2 zkCli.sh get /hello
…
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world
…

StatefulSet ensures that, even if all Pods in the StatefulSet are destroyed, when they are rescheduled, the ZooKeeper ensemble can elect a new leader and continue to serve requests.

Tolerating Node Failures

ZooKeeper replicates its state machine to different servers in the ensemble for the explicit purpose of tolerating node failure. By default, the Kubernetes Scheduler could deploy more than one Pod in the zk StatefulSet to the same node. If the zk-0 and zk-1 Pods were deployed on the same node, and that node failed, the ZooKeeper ensemble couldn’t form a quorum to commit writes, and the ZooKeeper service would experience an outage until one of the Pods could be rescheduled.

You should always provision headroom capacity for critical processes in your cluster, and if you do, in this instance, the Kubernetes Scheduler will reschedule the Pods on another node and the outage will be brief.

If the SLAs for your service preclude even brief outages due to a single node failure, you should use a PodAntiAffinity annotation. The manifest used to create the ensemble contains such an annotation, and it tells the Kubernetes Scheduler to not place more than one Pod from the zk StatefulSet on the same node.

{
  "podAntiAffinity": {
    "requiredDuringSchedulingRequiredDuringExecution": [{
      "labelSelector": {
        "matchExpressions": [{
          "key": "app",
          "operator": "In",
          "values": ["zk-headless"]
        }]
      },
      "topologyKey": "kubernetes.io/hostname"
    }]
  }
}

Tolerating Planned Maintenance

The manifest used to create the ZooKeeper ensemble also creates a PodDisruptionBudget, zk-budget. The zk-budget informs Kubernetes about the upper limit of disruptions (unhealthy Pods) that the service can tolerate.

$ kubectl get poddisruptionbudget zk-budget
NAME        MIN-AVAILABLE   ALLOWED-DISRUPTIONS   AGE
zk-budget   2               1                     2h

zk-budget indicates that at least two members of the ensemble must be available at all times for the ensemble to be healthy. If you attempt to drain a node prior to taking it offline, and if draining it would terminate a Pod that violates the budget, the drain operation will fail. If you use kubectl drain, in conjunction with PodDisruptionBudgets, to cordon your nodes and to evict all Pods prior to maintenance or decommissioning, you can ensure that the procedure won’t be disruptive to your stateful applications.

Looking Forward

As Kubernetes development looks towards GA, we are looking at a long list of suggestions from users. If you want to dive into our backlog, check out the GitHub issues with the stateful label. However, as the resulting API would be hard to comprehend, we don’t expect to implement all of these feature requests. Some feature requests, like support for rolling updates, better integration with node upgrades, and using fast local storage, would benefit most types of stateful applications, and we expect to prioritize these. The intention of StatefulSet is to be able to run a large number of applications well, and not to be able to run all applications perfectly. With this in mind, we avoided implementing StatefulSets in a way that relied on hidden mechanisms or inaccessible features. Anyone can write a controller that works similarly to StatefulSets.
We call this “making it forkable.” Over the next year, we expect many popular storage applications to each have their own community-supported, dedicated controllers or “operators”. We’ve already heard of work on custom controllers for etcd, Redis, and ZooKeeper. We expect to write some more ourselves and to support the community in developing others.

The Operators for etcd and Prometheus from CoreOS demonstrate an approach to running stateful applications on Kubernetes that provides a level of automation and integration beyond that which is possible with StatefulSet alone. On the other hand, using a generic controller like StatefulSet or Deployment means that a wide range of applications can be managed by understanding a single config object. We think Kubernetes users will appreciate having the choice of these two approaches.

– Kenneth Owens & Eric Tune, Software Engineers, Google

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
Source: kubernetes

Five Days of Kubernetes 1.5

With the help of our growing community of 1,000 contributors, we pushed some 5,000 commits to extend support for production workloads and deliver Kubernetes 1.5. While many improvements and new features have been added, we selected a few to highlight in a series of in-depth posts listed below. This progress reflects our commitment to continuing to make Kubernetes the best way to manage your production workloads at scale.

Day 1: Introducing Container Runtime Interface (CRI) in Kubernetes
Day 2: …
Day 3: …
Day 4: …
Day 5: …

Connect

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
Source: kubernetes

Introducing Container Runtime Interface (CRI) in Kubernetes

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.5.

At the lowest layers of a Kubernetes node is the software that, among other things, starts and stops containers. We call this the “Container Runtime”. The most widely known container runtime is Docker, but it is not alone in this space. In fact, the container runtime space has been rapidly evolving. As part of the effort to make Kubernetes more extensible, we’ve been working on a new plugin API for container runtimes in Kubernetes, called “CRI”.

What is the CRI and why does Kubernetes need it?

Each container runtime has its own strengths, and many users have asked for Kubernetes to support more runtimes. In the Kubernetes 1.5 release, we are proud to introduce the Container Runtime Interface (CRI) — a plugin interface which enables kubelet to use a wide variety of container runtimes, without the need to recompile. CRI consists of a protocol buffers API, a gRPC API, and libraries, with additional specifications and tools under active development. CRI is being released as Alpha in Kubernetes 1.5.

Supporting interchangeable container runtimes is not a new concept in Kubernetes. In the 1.3 release, we announced the rktnetes project to enable the rkt container engine as an alternative to the Docker container runtime. However, both Docker and rkt were integrated directly and deeply into the kubelet source code through an internal and volatile interface. Such an integration process requires a deep understanding of kubelet internals and incurs significant maintenance overhead for the Kubernetes community. These factors form high barriers to entry for nascent container runtimes. By providing a clearly-defined abstraction layer, we eliminate the barriers and allow developers to focus on building their container runtimes.
This is a small, yet important step towards truly enabling pluggable container runtimes and building a healthier ecosystem.

Overview of CRI

Kubelet communicates with the container runtime (or a CRI shim for the runtime) over Unix sockets using the gRPC framework, where kubelet acts as a client and the CRI shim as the server.

The protocol buffers API includes two gRPC services: ImageService and RuntimeService. The ImageService provides RPCs to pull an image from a repository, inspect an image, and remove an image. The RuntimeService contains RPCs to manage the lifecycle of the pods and containers, as well as calls to interact with containers (exec/attach/port-forward). A monolithic container runtime that manages both images and containers (e.g., Docker and rkt) can provide both services simultaneously with a single socket. The sockets can be set in kubelet with the --container-runtime-endpoint and --image-service-endpoint flags.

Pod and container lifecycle management

service RuntimeService {
    // Sandbox operations.
    rpc RunPodSandbox(RunPodSandboxRequest) returns (RunPodSandboxResponse) {}
    rpc StopPodSandbox(StopPodSandboxRequest) returns (StopPodSandboxResponse) {}
    rpc RemovePodSandbox(RemovePodSandboxRequest) returns (RemovePodSandboxResponse) {}
    rpc PodSandboxStatus(PodSandboxStatusRequest) returns (PodSandboxStatusResponse) {}
    rpc ListPodSandbox(ListPodSandboxRequest) returns (ListPodSandboxResponse) {}

    // Container operations.
    rpc CreateContainer(CreateContainerRequest) returns (CreateContainerResponse) {}
    rpc StartContainer(StartContainerRequest) returns (StartContainerResponse) {}
    rpc StopContainer(StopContainerRequest) returns (StopContainerResponse) {}
    rpc RemoveContainer(RemoveContainerRequest) returns (RemoveContainerResponse) {}
    rpc ListContainers(ListContainersRequest) returns (ListContainersResponse) {}
    rpc ContainerStatus(ContainerStatusRequest) returns (ContainerStatusResponse) {}
    …
}

A Pod is composed of a group of application containers in an isolated environment with resource constraints. In CRI, this environment is called the PodSandbox. We intentionally leave some room for the container runtimes to interpret the PodSandbox differently based on how they operate internally. For hypervisor-based runtimes, a PodSandbox might represent a virtual machine. For others, such as Docker, it might be Linux namespaces. The PodSandbox must respect the pod resource specifications. In the v1alpha1 API, this is achieved by launching all the processes within the pod-level cgroup that kubelet creates and passes to the runtime.

Before starting a pod, kubelet calls RuntimeService.RunPodSandbox to create the environment. This includes setting up networking for a pod (e.g., allocating an IP). Once the PodSandbox is active, individual containers can be created/started/stopped/removed independently. To delete the pod, kubelet will stop and remove containers before stopping and removing the PodSandbox.

Kubelet is responsible for managing the lifecycles of the containers through the RPCs, exercising the container lifecycle hooks and liveness/readiness checks, while adhering to the restart policy of the pod.

Why an imperative container-centric interface?

Kubernetes has a declarative API with a Pod resource.
One possible design we considered was for CRI to reuse the declarative Pod object in its abstraction, giving the container runtime freedom to implement and exercise its own control logic to achieve the desired state. This would have greatly simplified the API and allowed CRI to work with a wider spectrum of runtimes. We discussed this approach early in the design phase and decided against it for several reasons. First, there are many Pod-level features and specific mechanisms (e.g., the crash-loop backoff logic) in kubelet that would be a significant burden for all runtimes to reimplement. Second, and more importantly, the Pod specification was (and is) still evolving rapidly. Many of the new features (e.g., init containers) would not require any changes to the underlying container runtimes, as long as the kubelet manages containers directly. CRI adopts an imperative container-level interface so that runtimes can share these common features for better development velocity. This doesn’t mean we’re deviating from the “level triggered” philosophy – kubelet is responsible for ensuring that the actual state is driven towards the declared state.

Exec/attach/port-forward requests

service RuntimeService {
    …
    // ExecSync runs a command in a container synchronously.
    rpc ExecSync(ExecSyncRequest) returns (ExecSyncResponse) {}
    // Exec prepares a streaming endpoint to execute a command in the container.
    rpc Exec(ExecRequest) returns (ExecResponse) {}
    // Attach prepares a streaming endpoint to attach to a running container.
    rpc Attach(AttachRequest) returns (AttachResponse) {}
    // PortForward prepares a streaming endpoint to forward ports from a PodSandbox.
    rpc PortForward(PortForwardRequest) returns (PortForwardResponse) {}
    …
}

Kubernetes provides features (e.g. kubectl exec/attach/port-forward) for users to interact with a pod and the containers in it.
Kubelet today supports these features either by invoking the container runtime’s native method calls or by using the tools available on the node (e.g., nsenter and socat). Using tools on the node is not a portable solution because most tools assume the pod is isolated using Linux namespaces. In CRI, we explicitly define these calls in the API to allow runtime-specific implementations.

Another potential issue with the kubelet implementation today is that kubelet handles the connection of all streaming requests, so it can become a bottleneck for the network traffic on the node. When designing CRI, we incorporated this feedback to allow runtimes to eliminate the middleman. The container runtime can start a separate streaming server upon request (and can potentially account the resource usage to the pod!), and return the location of the server to kubelet. Kubelet then returns this information to the Kubernetes API server, which opens a streaming connection directly to the runtime-provided server and connects it to the client.

There are many other aspects of CRI that are not covered in this blog post. Please see the list of design docs and proposals for all the details.

Current status

Although CRI is still in its early stages, there are already several projects under development to integrate container runtimes using CRI. Below are a few examples:

cri-o: OCI conformant runtimes
rktlet: the rkt container runtime
frakti: hypervisor-based container runtimes
docker CRI shim

If you are interested in trying these alternative runtimes, you can follow the individual repositories for the latest progress and instructions. For developers interested in integrating a new container runtime, please see the developer guide for the known limitations and issues of the API. We are actively incorporating feedback from early developers to improve the API.
Developers should expect occasional API breaking changes (it is Alpha, after all).

Try the new CRI-Docker integration

Kubelet does not yet use CRI by default, but we are actively working on making this happen. The first step is to re-integrate Docker with kubelet using CRI. In the 1.5 release, we extended kubelet to support CRI, and also added a built-in CRI shim for Docker. This allows kubelet to start the gRPC server on Docker’s behalf. To try out the new kubelet-CRI-Docker integration, you simply have to start the Kubernetes API server with --feature-gates=StreamingProxyRedirects=true to enable the new streaming redirect feature, and then start the kubelet with --experimental-cri=true.

Besides a few missing features, the new integration has consistently passed the main end-to-end tests. We plan to expand the test coverage soon and would like to encourage the community to report any issues to help with the transition.

CRI with Minikube

If you want to try out the new integration, but don’t have the time to spin up a new test cluster in the cloud yet, minikube is a great tool to quickly spin up a local cluster. Before you start, follow the instructions to download and install minikube.

1. Check the available Kubernetes versions and pick the latest 1.5.x version available. We will use v1.5.0-beta.1 as an example.

$ minikube get-k8s-versions

2. Start a minikube cluster with the built-in docker CRI integration.

$ minikube start --kubernetes-version=v1.5.0-beta.1 --extra-config=kubelet.EnableCRI=true --network-plugin=kubenet --extra-config=kubelet.PodCIDR=10.180.1.0/24 --iso-url=http://storage.googleapis.com/minikube/iso/buildroot/minikube-v0.0.6.iso

--extra-config=kubelet.EnableCRI=true turns on the CRI implementation in kubelet. --network-plugin=kubenet and --extra-config=kubelet.PodCIDR=10.180.1.0/24 set the network plugin to kubenet and ensure a PodCIDR is assigned to the node. Alternatively, you can use the cni plugin, which does not rely on the PodCIDR. --iso-url sets an iso image for minikube to launch the node with.

3. Check the minikube log to verify that CRI is enabled.

$ minikube logs | grep EnableCRI
I1209 01:48:51.150789    3226 localkube.go:116] Setting EnableCRI to true on kubelet.

4. Create a pod and check its status. You should see a “SandboxReceived” event as proof that Kubelet is using CRI!

$ kubectl run foo --image=gcr.io/google_containers/pause-amd64:3.0
deployment "foo" created
$ kubectl describe pod foo
…
… From                 Type   Reason          Message
… {default-scheduler } Normal Scheduled       Successfully assigned foo-141968229-v1op9 to minikube
… {kubelet minikube}   Normal SandboxReceived Pod sandbox received, it will be created.
…

Note that kubectl attach/exec/port-forward does not work with CRI enabled in minikube yet, but this will be addressed in a newer version of minikube.

Community

CRI is being actively developed and maintained by the Kubernetes SIG-Node community. We’d love to hear feedback from you. To join the community:

Post issues or feature requests on GitHub
Join the sig-node channel on Slack
Subscribe to the SIG-Node mailing list
Follow us on Twitter @Kubernetesio for latest updates

– Yu-Ju Hong, Software Engineer, Google
Source: kubernetes

Understanding Docker Networking Drivers and their use cases

Application requirements and networking environments are diverse and sometimes opposing forces. In between applications and the network sits Docker networking, affectionately called the Container Network Model, or CNM. It’s CNM that brokers connectivity for your Docker containers and also what abstracts away the diversity and complexity so common in networking. The result is portability, and it comes from CNM’s powerful network drivers. These are pluggable interfaces for the Docker Engine, Swarm, and UCP that provide special capabilities like multi-host networking, network layer encryption, and service discovery.
Naturally, the next question is which network driver should I use? Each driver offers tradeoffs and has different advantages depending on the use case. There are built-in network drivers that come included with Docker Engine and there are also plug-in network drivers offered by networking vendors and the community. The most commonly used built-in network drivers are bridge, overlay and macvlan. Together they cover a very broad list of networking use cases and environments. For a more in depth comparison and discussion of even more network drivers, check out the Docker Network Reference Architecture.
Bridge Network Driver
The bridge networking driver is the first driver on our list. It’s simple to understand, simple to use, and simple to troubleshoot, which makes it a good networking choice for developers and those new to Docker. The bridge driver creates a private network internal to the host so containers on this network can communicate. External access is granted by exposing ports to containers. Docker secures the network by managing rules that block connectivity between different Docker networks.
Behind the scenes, the Docker Engine creates the necessary Linux bridges, internal interfaces, iptables rules, and host routes to make this connectivity possible. In the example highlighted below, a Docker bridge network is created and two containers are attached to it. With no extra configuration the Docker Engine does the necessary wiring, provides service discovery for the containers, and configures security rules to prevent communication to other networks. A built-in IPAM driver provides the container interfaces with private IP addresses from the subnet of the bridge network.
In the following examples, we use a fictitious app called pets comprised of a web and db container. Feel free to try it out on your own UCP or Swarm cluster. Your app will be accessible on `<host-ip>:8000`.
docker network create -d bridge mybridge
docker run -d --net mybridge --name db redis
docker run -d --net mybridge -e DB=db -p 8000:5000 --name web chrch/web
 
 
Our application is now being served on our host at port 8000. The Docker bridge is allowing web to communicate with db by its container name. The bridge driver does the service discovery for us automatically because they are on the same network. All of the port mappings, security rules, and pipework between Linux bridges is handled for us by the networking driver as containers are scheduled and rescheduled across a cluster.
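The same pets example can also be expressed declaratively in a Compose file. This is a hedged sketch: the chrch/web image, the DB environment variable, and the port mapping come from the CLI example above, while the file layout itself is illustrative:

```yaml
# docker-compose.yml sketch mirroring the docker run commands above.
version: "2"
services:
  web:
    image: chrch/web
    ports:
      - "8000:5000"      # app served on <host-ip>:8000, as in the example
    environment:
      DB: db             # web finds db by service name via built-in DNS
    networks:
      - mybridge
  db:
    image: redis
    networks:
      - mybridge
networks:
  mybridge:
    driver: bridge       # same driver as `docker network create -d bridge mybridge`
```

Running docker-compose up -d against a file like this would create the bridge network and both containers in one step, with the same service discovery and port mapping behavior described above.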
The bridge driver is a local scope driver, which means it only provides service discovery, IPAM, and connectivity on a single host. Multi-host service discovery requires an external solution that can map containers to their host location. This is exactly what makes the overlay driver so great.
Overlay Network Driver
The built-in Docker overlay network driver radically simplifies many of the complexities in multi-host networking. It is a swarm scope driver, which means that it operates across an entire Swarm or UCP cluster rather than individual hosts. With the overlay driver, multi-host networks are first-class citizens inside Docker without external provisioning or components. IPAM, service discovery, multi-host connectivity, encryption, and load balancing are built right in. For control, the overlay driver uses the encrypted Swarm control plane to manage large scale clusters at low convergence times.
The overlay driver utilizes an industry-standard VXLAN data plane that decouples the container network from the underlying physical network (the underlay). This has the advantage of providing maximum portability across various cloud and on-premises networks. Network policy, visibility, and security is controlled centrally through the Docker Universal Control Plane (UCP).

In this example we create an overlay network in UCP so we can connect our web and db containers when they are living on different hosts. Native DNS-based service discovery for services & containers within an overlay network will ensure that web can resolve to db and vice-versa. We turned on encryption so that communication between our containers is secure by default.  Furthermore, visibility and use of the network in UCP is restricted by the permissions label we use.
UCP will schedule services across the cluster and UCP will dynamically program the overlay network to provide connectivity to the containers wherever they are. When services are backed by multiple containers, VIP-based load balancing will distribute traffic across all of the containers.
Feel free to run this example against your UCP cluster with the following CLI commands:
docker network create -d overlay --opt encrypted pets-overlay
docker service create --network pets-overlay --name db redis
docker service create --network pets-overlay -p 8000:5000 -e DB=db --name web chrch/web
 
In this example we are still serving our web app on port 8000 but now we have deployed our application across different hosts. If we wanted to scale our web containers, Swarm & UCP networking would load balance the traffic for us automatically.
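With Swarm mode, the same overlay deployment can also be written as a stack file and launched with docker stack deploy. This is a sketch under the same assumptions as the CLI example (images, ports, and network name from above; exact stack-file support depends on your Docker version):

```yaml
# pets-stack.yml sketch for `docker stack deploy -c pets-stack.yml pets`
version: "3"
services:
  web:
    image: chrch/web
    ports:
      - "8000:5000"
    environment:
      DB: db
    networks:
      - pets-overlay
  db:
    image: redis
    networks:
      - pets-overlay
networks:
  pets-overlay:
    driver: overlay
    driver_opts:
      encrypted: ""    # same effect as `--opt encrypted` in the CLI example
```

The stack file captures the services, the overlay network, and the encryption option in one versionable artifact instead of a sequence of imperative commands.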
The overlay driver is a feature-rich driver that handles much of the complexity and integration that organizations struggle with when crafting piecemeal solutions. It provides an out-of-the-box solution for many networking challenges and does so at scale.
MACVLAN Driver
The macvlan driver is the newest built-in network driver and offers several unique characteristics. It’s a very lightweight driver, because rather than using any Linux bridging or port mapping, it connects container interfaces directly to host interfaces. Containers are addressed with routable IP addresses that are on the subnet of the external network.
As a result of routable IP addresses, containers communicate directly with resources that exist outside a Swarm cluster without the use of NAT and port mapping. This can aid in network visibility and troubleshooting. Additionally, the direct traffic path between containers and the host interface helps reduce latency. macvlan is a local scope network driver which is configured per-host. As a result, there are stricter dependencies between MACVLAN and external networks, which is both a constraint and an advantage that is different from overlay or bridge.
The macvlan driver uses the concept of a parent interface. This interface can be a host interface such as eth0, a sub-interface, or even a bonded host adaptor which bundles Ethernet interfaces into a single logical interface. A gateway address from the external network is required during MACVLAN network configuration, as a MACVLAN network is a L2 segment from the container to the network gateway. Like all Docker networks, MACVLAN networks are segmented from each other – providing access within a network, but not between networks.
The macvlan driver can be configured in different ways to achieve different results. In the below example we create two MACVLAN networks joined to different subinterfaces. This type of configuration can be used to extend multiple L2 VLANs through the host interface directly to containers. The VLAN default gateway exists in the external network.
 
The db and web containers are connected to different MACVLAN networks in this example. Each container resides on its respective external network with an external IP provided from that network. Using this design an operator can control network policy outside of the host and segment containers at L2. The containers could have also been placed in the same VLAN by configuring them on the same MACVLAN network. This just shows the amount of flexibility offered by each network driver.
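A macvlan setup like the one in this example can also be declared in a Compose file. The sketch below is illustrative: the parent sub-interfaces (eth0.10, eth0.20), subnets, and gateways are hypothetical values you would replace with your own VLANs:

```yaml
# Illustrative sketch: two macvlan networks on different VLAN sub-interfaces.
version: "2"
networks:
  macvlan10:
    driver: macvlan
    driver_opts:
      parent: eth0.10               # hypothetical VLAN 10 sub-interface
    ipam:
      config:
        - subnet: 192.168.10.0/24   # hypothetical external subnet
          gateway: 192.168.10.1     # gateway lives in the external network
  macvlan20:
    driver: macvlan
    driver_opts:
      parent: eth0.20               # hypothetical VLAN 20 sub-interface
    ipam:
      config:
        - subnet: 192.168.20.0/24
          gateway: 192.168.20.1
```

Containers attached to macvlan10 and macvlan20 would receive addresses from their respective external subnets and remain segmented from each other at L2, as described above.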
Portability and choice are important tenets of the Docker philosophy. The Docker Container Network Model provides an open interface for vendors and the community to build network drivers. The complementary evolution of Docker and SDN technologies is providing more options and capabilities every day.


Happy Networking!
More Resources:

Check out the latest Docker Datacenter networking updates
Read the latest RA: Docker UCP Service Discovery and Load Balancing
See What’s New in Docker Datacenter
Sign up for a free 30 day trial

The post Understanding Docker Networking Drivers and their use cases appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Docker & Prometheus Joint Holiday Meetup Recap

Last Wednesday we had our 52nd meetup at Docker HQ, but this time we joined forces with the Prometheus user group to host a mega-meetup! There was a great turnout, and members were excited to see the talks on using Docker with Prometheus, OpenTracing, and the new Docker playground, play-with-docker.
First up was Stephen Day, a Senior Software Engineer at Docker, who presented a talk entitled ‘The History of Metrics According to Me’. Stephen believes that metrics and monitoring should be built into every piece of software we create, from the ground up. By solving the hard parts of application metrics in Docker, he thinks it becomes more likely that metrics are a part of your services from the start. See the video of his intriguing talk and slides below.

‘The History of Metrics According to Me’ by Stephen Day from Docker, Inc.

‘The History of Metrics According to Me’ @stevvooe talking metrics and monitoring at the Docker SF meetup! @prometheusIO @CloudNativeFdn pic.twitter.com/6hk0yAtats
— Docker (@docker) December 15, 2016

Next up was Ben Sigelman, an expert in distributed tracing, whose talk ‘OpenTracing Isn’t Just Tracing: Measure Twice, Instrument Once’ was both informative and humorous. He began by describing OpenTracing and explaining why anyone who monitors microservices should care about it. He then stepped back to examine the historical role of operational logging and metrics in distributed system monitoring and illustrated how the OpenTracing API maps to these tried-and-true abstractions. To find out more and see his demo involving donuts watch the video below and slides.

Last but certainly not least were two of our amazing Docker Captains all the way from Buenos Aires, Marcos Nils and Jonathan Leibiusky! During the Docker Distributed Systems Summit in Berlin last October, they built ‘play-with-docker’. It is a Docker playground which gives you the experience of having a free Alpine Linux Virtual Machine in the cloud, where you can build and run Docker containers and even create clusters with Docker features like Swarm Mode. Under the hood, DIND or Docker-in-Docker is used to give the effect of multiple VMs/PCs. Watch the video below to see how they built it and hear all about the new features.

@marcosnils & @xetorthio sharing at the Docker HQ meetup all the way from Buenos Aires! pic.twitter.com/kXqOZgClMz
— Docker (@docker) December 15, 2016

play-with-docker was a hit with the audience and there was a line of attendees hoping to speak to Marcos and Jonathan after their talk! All in all, it was a great night thanks to our amazing speakers, Docker meetup members, the Prometheus user group and the CNCF who sponsored drinks and snacks.


The post Docker & Prometheus Joint Holiday Meetup Recap appeared first on Docker Blog.
Source: https://blog.docker.com/feed/