Kubernetes 1.8 release integrates with containerd 1.0 Beta

Intent of containerd effort
When containerd was first developed, it had two goals: to solve the upgrade problem with running containers, and to provide a codebase where OCI runtimes, like runc, could be integrated into Docker. However, as needs changed in the container space, and after speaking with various members of the community at the beginning of this year, we decided to expand the scope of containerd and make it a fully functional container daemon with storage, image distribution and runtime.
containerd fully supports the OCI Runtime and Image specifications that were part of the recently released 1.0 specifications. Additionally, it was important to build a stable runtime for users and platform builders. We wanted containerd to be fully functional, but it also needed to retain a small core codebase so that it is easy to maintain and support in the long run, with an LTS release receiving backported patches on a stable API.
To demonstrate the progress made on the project, Stephen Day presented the current status of containerd 1.0 alpha at the Moby Summit in LA two weeks ago:

Check out the getting started with containerd guide to get your feet wet with containerd if you want to integrate it into your own container-based system.

Introduction of the cri-containerd effort
Docker and Kubernetes both have similar requirements when it comes to a container runtime. They need something small, stable and easy to maintain. They also need an API that abstracts away platform- and system-specific details so that they can build a feature set for users without being slowed down by the messy syscalls and varied driver support required to execute containers on a variety of operating systems.
In order to have Kubernetes consume containerd for its container runtime, we needed to implement the CRI interface. CRI stands for "Container Runtime Interface"; it covers image distribution and the lifecycle of pods and containers running on a cluster.
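To make the shape of that interface concrete, here is a rough sketch. The real CRI is a gRPC API defined in protobuf; the method names below mirror its RuntimeService calls, but this in-memory fake, including all its identifiers and states, is purely illustrative and not taken from cri-containerd's code.

```python
from abc import ABC, abstractmethod

class RuntimeService(ABC):
    """Toy model of the pod/container half of CRI."""
    @abstractmethod
    def run_pod_sandbox(self, name: str) -> str: ...
    @abstractmethod
    def create_container(self, pod_id: str, image: str) -> str: ...
    @abstractmethod
    def start_container(self, container_id: str) -> None: ...
    @abstractmethod
    def stop_pod_sandbox(self, pod_id: str) -> None: ...

class FakeRuntime(RuntimeService):
    """In-memory fake: tracks pod and container states, nothing more."""
    def __init__(self):
        self.pods, self.containers, self._n = {}, {}, 0
    def _id(self, prefix):
        self._n += 1
        return f"{prefix}-{self._n}"
    def run_pod_sandbox(self, name):
        pod_id = self._id("pod")
        self.pods[pod_id] = {"name": name, "state": "READY"}
        return pod_id
    def create_container(self, pod_id, image):
        cid = self._id("ctr")
        self.containers[cid] = {"pod": pod_id, "image": image, "state": "CREATED"}
        return cid
    def start_container(self, cid):
        self.containers[cid]["state"] = "RUNNING"
    def stop_pod_sandbox(self, pod_id):
        # stopping the sandbox stops every container in the pod
        self.pods[pod_id]["state"] = "NOTREADY"
        for c in self.containers.values():
            if c["pod"] == pod_id:
                c["state"] = "EXITED"

rt = FakeRuntime()
pod = rt.run_pod_sandbox("nginx-pod")
ctr = rt.create_container(pod, "docker.io/library/nginx:1.13")
rt.start_container(ctr)
rt.stop_pod_sandbox(pod)
```

The key design point this illustrates is that Kubernetes manages pods (sandboxes) as the unit of scheduling, and the runtime behind the interface is swappable: kubelet only ever talks to these lifecycle calls, never to containerd or Docker directly.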
At Docker, we have a full-time engineer working on the cri-containerd project, along with the other maintainers, to finish the cri-containerd integration and get Kubernetes running on containerd. Here is a presentation Lantao Liu from Google gave two weeks ago at Moby Summit LA about the status of cri-containerd:

Slides: Kubernetes CRI containerd integration, by Lantao Liu (Google)
Moby Summit LA allowed the various teams from different companies involved in these projects to meet and demo the latest about containerd, cri-containerd, bucketbench, and libnetwork CNI implementation. You can find a recap of the summit on the Moby blog, and get the latest updates from the teams at Moby Summit Copenhagen in a few weeks.

#MobySummit @APrativadi doing the first public demo of cri-containerd, @kubernetesio + @containerd + libnetwork drivers  used as CNI plugins pic.twitter.com/sMSWlS9ANM
— chanezon (@chanezon) September 14, 2017


Learn more:

Getting started with containerd
Getting started guide for CRI-containerd
Kubernetes OS images with LinuxKit

The post Kubernetes 1.8 release integrates with containerd 1.0 Beta appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Kubernetes 1.8: Security, Workloads and Feature Depth

Editor’s note: today’s post is by Aparna Sinha, Group Product Manager, Kubernetes, Google; Ihor Dvoretskyi, Developer Advocate, CNCF; Jaice Singer DuMars, Kubernetes Ambassador, Microsoft; and Caleb Miles, Technical Program Manager, CoreOS, on the latest release of Kubernetes 1.8.

We’re pleased to announce the delivery of Kubernetes 1.8, our third release this year. Kubernetes 1.8 represents a snapshot of many exciting enhancements and refinements underway. In addition to functional improvements, we’re increasing project-wide focus on maturing process, formalizing architecture, and strengthening Kubernetes’ governance model. The evolution of mature processes clearly signals that sustainability is a driving concern, and helps to ensure that Kubernetes is a viable and thriving project far into the future.

Spotlight on security

Kubernetes 1.8 graduates support for role based access control (RBAC) to stable. RBAC allows cluster administrators to dynamically define roles to enforce access policies through the Kubernetes API. Beta support for filtering outbound traffic through network policies augments existing support for filtering inbound traffic to a pod. RBAC and Network Policies are two powerful tools for enforcing organizational and regulatory security requirements within Kubernetes. Transport Layer Security (TLS) certificate rotation for the Kubelet graduates to beta. Automatic certificate rotation eases secure cluster operation.

Spotlight on workload support

Kubernetes 1.8 promotes the core Workload APIs to beta with the apps/v1beta2 group and version. The beta contains the current version of Deployment, DaemonSet, ReplicaSet, and StatefulSet. The Workloads APIs provide a stable foundation for migrating existing workloads to Kubernetes as well as developing cloud native applications that target Kubernetes natively. For those considering running Big Data workloads on Kubernetes, the Workloads API now enables native Kubernetes support in Apache Spark.

Batch workloads, such as nightly ETL jobs, will benefit from the graduation of CronJobs to beta. Custom Resource Definitions (CRDs) remain in beta for Kubernetes 1.8. A CRD provides a powerful mechanism to extend Kubernetes with user-defined API objects. One use case for CRDs is the automation of complex stateful applications, such as key-value stores, databases and storage engines, through the Operator pattern. Expect continued enhancements to CRDs, such as validation, as stabilization continues.

Spoilers ahead

Volume snapshots, PV resizing, automatic taints, priority pods, kubectl plugins, oh my! In addition to stabilizing existing functionality, Kubernetes 1.8 offers a number of alpha features that preview new functionality. Each Special Interest Group (SIG) in the community continues to deliver the most requested user features for their area. For a complete list, please visit the release notes.

Availability

Kubernetes 1.8 is available for download on GitHub. To get started with Kubernetes, check out these interactive tutorials.

Release team

The release team for 1.8 was led by Jaice Singer DuMars, Kubernetes Ambassador at Microsoft, and was comprised of 14 individuals responsible for managing all aspects of the release, from documentation to testing, validation, and feature completeness. As the Kubernetes community has grown, our release process has become an amazing demonstration of collaboration in open source software development. Kubernetes continues to gain new users at a rapid clip. This growth creates a positive feedback cycle where more contributors commit code, creating a more vibrant ecosystem.

User highlights

According to Redmonk, 54 percent of Fortune 100 companies are running Kubernetes in some form, with adoption coming from every sector across the world. Recent user stories from the community include:

Ancestry.com currently holds 20 billion historical records and 90 million family trees, making it the largest consumer genomics DNA network in the world. With the move to Kubernetes, deployment time for its Shaky Leaf icon service was cut down from 50 minutes to 2 to 5 minutes.

Wink, provider of smart home devices and apps, runs 80 percent of its workloads on a unified stack of Kubernetes-Docker-CoreOS, allowing it to continually innovate and improve its products and services.

Pear Deck, a teacher communication app for students, ported its Heroku apps into Kubernetes, allowing it to deploy the exact same configuration in lots of different clusters in 30 seconds.

Buffer, social media management for agencies and marketers, has a remote team of 80 spread across a dozen different time zones. Kubernetes has provided the kind of liquid infrastructure where a developer can create an app, deploy it, and scale it horizontally as necessary.

Is Kubernetes helping your team? Share your story with the community.

Ecosystem updates

Announced on September 11, Kubernetes Certified Service Providers (KCSPs) are pre-qualified organizations with deep experience helping enterprises successfully adopt Kubernetes. Individual professionals can now register for the new Certified Kubernetes Administrator (CKA) program and exam, which requires passing an online, proctored, performance-based exam that tests one’s ability to solve multiple issues in a hands-on, command-line environment. CNCF also offers online training that teaches the skills needed to create and configure a real-world Kubernetes cluster.

KubeCon

Join the community at KubeCon + CloudNativeCon in Austin, December 6-8, for the largest Kubernetes gathering ever. The premiere Kubernetes event will feature technical sessions, case studies, developer deep dives, salons and more! A full schedule of events and speakers will be available here on September 28. Discounted registration ends October 6.

Open Source Summit EU

Ihor Dvoretskyi, Kubernetes 1.8 features release lead, will present new features and enhancements at Open Source Summit EU in Prague, October 23. Registration is still open.

Get involved

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below. Thank you for your continued feedback and support.

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Follow us on Twitter @Kubernetesio for latest updates
Chat with the community on Slack
Share your Kubernetes story.
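To make the RBAC graduation described above concrete, here is a minimal Role and RoleBinding against the now-stable rbac.authorization.k8s.io/v1 API. The specific names (pod-reader, read-pods, the user jane) are illustrative placeholders, not taken from the release:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]          # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: jane               # hypothetical user for illustration
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The Role defines what may be done (read pods in the default namespace); the RoleBinding grants that capability to a subject. Keeping the two separate is what lets administrators redefine access policies dynamically through the API.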
Source: kubernetes

The Docker Modernize Traditional Apps (MTA) Program Adds Microsoft Azure Stack

In April of this year, Docker announced the Modernize Traditional Apps (MTA) POC program with partners Avanade, Booz Allen, Cisco, HPE and Microsoft. The MTA program is designed to help IT teams flip the 80% maintenance to 20% innovation ratio on its head. The combination of Docker Enterprise Edition (EE), services and infrastructure into a turnkey program delivers portability, security and efficiency for the existing app portfolio, driving down total costs and making room for innovation like cloud strategies and new app development. The program starts by packaging existing apps into isolated containers, providing the opportunity to migrate them to new on-prem or cloud environments, without any recoding.
 
Docker customers have already been taking advantage of the program to jumpstart their migration to Azure and are experiencing dramatically reduced deployment and scaling times — from weeks to minutes —  and cutting their total costs by 50% or more.
 
The general availability of Microsoft Azure Stack provides IT with the ability to manage their datacenters in the same way they manage Azure. This consistency in hybrid cloud infrastructure deployment, combined with consistency in application packaging, deployment and management, further enhances operational efficiency. Docker is pleased to announce the addition of Azure Stack to the MTA Program for hybrid cloud environments. Docker will provide partners with a Technical Preview of the Docker EE template for Azure Stack as an easy and quick way to deploy and manage containers for the MTA project.

The Docker MTA Program is available from the following partners:
 

Microsoft and Avanade have been actively delivering MTA PoCs to containerize and deploy legacy workloads to the Azure cloud, and will be applying those same skills to Azure Stack to help further modernize apps to microservices.
“With expertise in Microsoft Azure Stack and great success leading the Modernize Traditional Application program [MTA] with Docker, Avanade is excited to bring these technologies together for clients via a single provider,” said Pat Cimprich, Executive, Cloud and Application Transformation, Avanade. “The Avanade solution provides a turnkey, fully managed Azure-consistent experience so enterprises can quickly move applications to Microsoft Azure or Microsoft Azure Stack via Docker container technology. This combination enables clients to migrate applications to the cloud today to realize significant cost savings immediately and then modernize them over time at their own pace.”
 

Booz Allen Hamilton specializes in modernizing apps to any infrastructure, specifically for federal agency IT (civilian and Department of Defense), with a deep understanding of unique compliance requirements, and continues the transformation to DevOps and microservices.
 

Cisco Data Center solutions help organizations develop, deploy, and run their business-essential applications and workloads quickly, securely, and reliably across the multi-cloud domain. The Cisco MTA program is an end-to-end “Proof of Value” offer that demonstrates the ease and savings of containerizing traditional applications on Cisco UCS and Docker Enterprise Edition.
“Cisco and Microsoft offer a turnkey hybrid cloud solution built on the power of Cisco UCS and Microsoft Azure Stack, and it is orderable starting today. With the expansion of the MTA program, our customers can now confidently deploy containerized applications on a validated solution jointly developed by Cisco and Microsoft, taking advantage of the agile and scalable cloud infrastructure provided by the Cisco Integrated System for Microsoft Azure Stack,” said Satinder Sethi, VP, Data Center Solutions Engineering & UCS Product Management.
 

The HPE MTA Program is driving engagements to help customers modernize traditional applications on the HPE ProLiant for Microsoft Azure Stack integrated solution and a wide range of other available datacenter systems.
“Customers have been asking for more choice in the way they manage and deliver applications. Our goal is to help customers move to modern application architectures that best suit their needs,” said McLeod Glass, Vice President, Product Management, Software-Defined and Cloud Group, HPE. “With the integration of HPE ProLiant for Microsoft Azure Stack and Windows Server 2016 validated solutions into Docker’s Modernize Traditional Apps Program, customers now have more options to deploy and manage containerized legacy apps to help them simplify hybrid IT.”
Whether on-premises or in the cloud, the Docker MTA program delivers the immediate benefits of portability, security and efficiency for existing applications without recoding them. Now, with Azure Stack, the consistency and simplicity of hybrid cloud infrastructure services go hand in hand with the consistency and agility of application container deployment.
Visit www.docker.com/MTA or contact Docker sales.


To learn more about Docker solutions for IT:

Visit IT Starts with Docker and learn more about MTA
Learn more about Docker Enterprise Edition
Start a hosted trial
Sign up for upcoming webinars

The post The Docker Modernize Traditional Apps (MTA) Program Adds Microsoft Azure Stack appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Kubernetes StatefulSets & DaemonSets Updates

Editor’s note: today’s post is by Janet Kuo and Kenneth Owens, Software Engineers at Google.

This post talks about recent updates to the DaemonSet and StatefulSet API objects for Kubernetes. We explore these features using Apache ZooKeeper and Apache Kafka StatefulSets and a Prometheus node exporter DaemonSet.

In Kubernetes 1.6, we added the RollingUpdate update strategy to the DaemonSet API object. Configuring your DaemonSets with the RollingUpdate strategy causes the DaemonSet controller to perform automated rolling updates to the Pods in your DaemonSets when their spec.template is updated. In Kubernetes 1.7, we enhanced the DaemonSet controller to track a history of revisions to the PodTemplateSpecs of DaemonSets. This allows the DaemonSet controller to roll back an update. We also added the RollingUpdate strategy to the StatefulSet API object, and implemented revision history tracking for the StatefulSet controller. Additionally, we added the Parallel pod management policy to support stateful applications that require Pods with unique identities but not ordered Pod creation and termination.

StatefulSet rolling update and Pod management policy

First, we’re going to demonstrate how to use StatefulSet rolling updates and Pod management policies by deploying a ZooKeeper ensemble and a Kafka cluster.

Prerequisites

To follow along, you’ll need to set up a Kubernetes 1.7 cluster with at least 3 schedulable nodes. Each node needs 1 CPU and 2 GiB of memory available. You will also need either a dynamic provisioner to allow the StatefulSet controller to provision 6 persistent volumes (PVs) with 10 GiB each, or you will need to manually provision the PVs before deploying the ZooKeeper ensemble or the Kafka cluster.

Deploying a ZooKeeper ensemble

Apache ZooKeeper is a strongly consistent, distributed system used by other distributed systems for cluster coordination and configuration management.
Note: You can create a ZooKeeper ensemble using this zookeeper_mini.yaml manifest. You can learn more about running a ZooKeeper ensemble on Kubernetes here, as well as find a more in-depth explanation of the manifest and its contents.

When you apply the manifest, you will see output like the following:

$ kubectl apply -f zookeeper_mini.yaml
service "zk-hs" created
service "zk-cs" created
poddisruptionbudget "zk-pdb" created
statefulset "zk" created

The manifest creates an ensemble of three ZooKeeper servers using a StatefulSet, zk; a Headless Service, zk-hs, to control the domain of the ensemble; a Service, zk-cs, that clients can use to connect to the ready ZooKeeper instances; and a PodDisruptionBudget, zk-pdb, that allows for one planned disruption. (Note that while this ensemble is suitable for demonstration purposes, it isn’t sized correctly for production use.)

If you use kubectl get to watch Pod creation in another terminal, you will see that, in contrast to the OrderedReady strategy (the default policy that implements the full version of the StatefulSet guarantees), all of the Pods in the zk StatefulSet are created in parallel.

$ kubectl get po -lapp=zk -w
NAME      READY     STATUS               RESTARTS   AGE
zk-0      0/1       Pending              0          0s
zk-0      0/1       Pending              0          0s
zk-1      0/1       Pending              0          0s
zk-1      0/1       Pending              0          0s
zk-0      0/1       ContainerCreating    0          0s
zk-2      0/1       Pending              0          0s
zk-1      0/1       ContainerCreating    0          0s
zk-2      0/1       Pending              0          0s
zk-2      0/1       ContainerCreating    0          0s
zk-0      0/1       Running              0          10s
zk-2      0/1       Running              0          11s
zk-1      0/1       Running              0          19s
zk-0      1/1       Running              0          20s
zk-1      1/1       Running              0          30s
zk-2      1/1       Running              0          30s

This is because the zookeeper_mini.yaml manifest sets the podManagementPolicy of the StatefulSet to Parallel:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  ...

Many distributed systems, like ZooKeeper, do not require ordered creation and termination for their processes. You can use the Parallel pod management policy to accelerate the creation and deletion of StatefulSets that manage these systems. Note that, when Parallel pod management is used, the StatefulSet controller will not block when it fails to create a Pod. Ordered, sequential Pod creation and termination is performed when a StatefulSet’s podManagementPolicy is set to OrderedReady.

Deploying a Kafka cluster

Apache Kafka is a popular distributed streaming platform. Kafka producers write data to partitioned topics which are stored, with a configurable replication factor, on a cluster of brokers. Consumers consume the produced data from the partitions stored on the brokers.

Note: Details of the manifest’s contents can be found here. You can learn more about running a Kafka cluster on Kubernetes here.
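The difference between the Parallel and OrderedReady pod management policies described above can be sketched with a toy scheduler: Parallel issues all creates immediately, while OrderedReady waits for each Pod to become ready before creating the next. This is a simplified model for intuition, not the controller’s actual code:

```python
def create_pods(replicas, policy, ready_delay):
    """Simulate StatefulSet pod creation.

    ready_delay[i] is how long (in simulated seconds) pod i takes to
    become ready. Returns the time at which each pod starts creating.
    """
    start_times = {}
    if policy == "Parallel":
        # All pods are created at once; the controller does not block.
        for i in range(replicas):
            start_times[f"zk-{i}"] = 0
    elif policy == "OrderedReady":
        # Pod i is created only after pod i-1 is running and ready.
        t = 0
        for i in range(replicas):
            start_times[f"zk-{i}"] = t
            t += ready_delay[i]
    else:
        raise ValueError(f"unknown policy: {policy}")
    return start_times

print(create_pods(3, "Parallel", [20, 30, 30]))
print(create_pods(3, "OrderedReady", [20, 30, 30]))
```

With Parallel every pod starts at t=0, matching the interleaved Pending/ContainerCreating lines in the kubectl watch output above; with OrderedReady each start is delayed by the readiness of its predecessor, which is why ensembles that don’t need ordering start faster under Parallel.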
To create a cluster, you only need to download and apply the kafka_mini.yaml manifest. When you apply the manifest, you will see output like the following:

$ kubectl apply -f kafka_mini.yaml
service "kafka-hs" created
poddisruptionbudget "kafka-pdb" created
statefulset "kafka" created

The manifest creates a three-broker cluster using the kafka StatefulSet; a Headless Service, kafka-hs, to control the domain of the brokers; and a PodDisruptionBudget, kafka-pdb, that allows for one planned disruption. The brokers are configured to use the ZooKeeper ensemble we created above by connecting through the zk-cs Service. As with the ZooKeeper ensemble deployed above, this Kafka cluster is fine for demonstration purposes, but it’s probably not sized correctly for production use.

If you watch Pod creation, you will notice that, like the ZooKeeper ensemble created above, the Kafka cluster uses the Parallel podManagementPolicy:

$ kubectl get po -lapp=kafka -w
NAME      READY     STATUS              RESTARTS   AGE
kafka-0   0/1       Pending             0          0s
kafka-0   0/1       Pending             0          0s
kafka-1   0/1       Pending             0          0s
kafka-1   0/1       Pending             0          0s
kafka-2   0/1       Pending             0          0s
kafka-0   0/1       ContainerCreating   0          0s
kafka-2   0/1       Pending             0          0s
kafka-1   0/1       ContainerCreating   0          0s
kafka-1   0/1       Running             0          11s
kafka-0   0/1       Running             0          19s
kafka-1   1/1       Running             0          23s
kafka-0   1/1       Running             0          32s

Producing and consuming data

You can use kubectl run to execute the kafka-topics.sh script to create a topic named test:

$ kubectl run -ti --image=gcr.io/google_containers/kubernetes-kafka:1.0-10.2.1 createtopic --restart=Never --rm -- kafka-topics.sh --create \
> --topic test \
> --zookeeper zk-cs.default.svc.cluster.local:2181 \
> --partitions 1 \
> --replication-factor 3

Now you can use kubectl run to execute the kafka-console-consumer.sh command to listen for messages:

$ kubectl run -ti --image=gcr.io/google_containers/kubernetes-kafka:1.0-10.2.1 consume --restart=Never --rm -- kafka-console-consumer.sh --topic test --bootstrap-server kafka-0.kafka-hs.default.svc.cluster.local:9093

In another terminal, you can run the kafka-console-producer.sh command:

$ kubectl run -ti --image=gcr.io/google_containers/kubernetes-kafka:1.0-10.2.1 produce --restart=Never --rm \
> -- kafka-console-producer.sh --topic test --broker-list kafka-0.kafka-hs.default.svc.cluster.local:9093,kafka-1.kafka-hs.default.svc.cluster.local:9093,kafka-2.kafka-hs.default.svc.cluster.local:9093

Output from the second terminal appears in the first terminal. If you continue to produce and consume messages while updating the cluster, you will notice that no messages are lost. You may see error messages as the leader for the partition changes when individual brokers are updated, but the client retries until the message is committed. This is due to the ordered, sequential nature of StatefulSet rolling updates, which we will explore further in the next section.

Updating the Kafka cluster

StatefulSet updates are like DaemonSet updates in that they are both configured by setting the spec.updateStrategy of the corresponding API object. When the update strategy is set to OnDelete, the respective controllers will only create new Pods when a Pod in the StatefulSet or DaemonSet has been deleted. When the update strategy is set to RollingUpdate, the controllers will delete and recreate Pods when a modification is made to the spec.template field of a DaemonSet or StatefulSet. You can use rolling updates to change the configuration (via environment variables or command line parameters), resource requests, resource limits, container images, labels, and/or annotations of the Pods in a StatefulSet or DaemonSet. Note that all updates are destructive, always requiring that each Pod in the DaemonSet or StatefulSet be destroyed and recreated.
StatefulSet rolling updates differ from DaemonSet rolling updates in that Pod termination and creation is ordered and sequential.

You can patch the kafka StatefulSet to reduce the CPU resource request to 250m:

$ kubectl patch sts kafka --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"250m"}]'
statefulset "kafka" patched

If you watch the status of the Pods in the StatefulSet, you will see that each Pod is deleted and recreated in reverse ordinal order (starting with the Pod with the largest ordinal and progressing to the smallest). The controller waits for each updated Pod to be running and ready before updating the subsequent Pod:

$ kubectl get po -lapp=kafka -w
NAME      READY     STATUS              RESTARTS   AGE
kafka-0   1/1       Running             0          13m
kafka-1   1/1       Running             0          13m
kafka-2   1/1       Running             0          13m
kafka-2   1/1       Terminating         0          14m
kafka-2   0/1       Terminating         0          14m
kafka-2   0/1       Terminating         0          14m
kafka-2   0/1       Terminating         0          14m
kafka-2   0/1       Pending             0          0s
kafka-2   0/1       Pending             0          0s
kafka-2   0/1       ContainerCreating   0          0s
kafka-2   0/1       Running             0          10s
kafka-2   1/1       Running             0          21s
kafka-1   1/1       Terminating         0          14m
kafka-1   0/1       Terminating         0          14m
kafka-1   0/1       Terminating         0          14m
kafka-1   0/1       Terminating         0          14m
kafka-1   0/1       Pending             0          0s
kafka-1   0/1       Pending             0          0s
kafka-1   0/1       ContainerCreating   0          0s
kafka-1   0/1       Running             0          11s
kafka-1   1/1       Running             0          21s
kafka-0   1/1       Terminating         0          14m
kafka-0   0/1       Terminating         0          14m
kafka-0   0/1       Terminating         0          14m
kafka-0   0/1       Terminating         0          14m
kafka-0   0/1       Pending             0          0s
kafka-0   0/1       Pending             0          0s
kafka-0   0/1       ContainerCreating   0          0s
kafka-0   0/1       Running             0          10s
kafka-0   1/1       Running             0          22s

Note that unplanned disruptions will not lead to unintentional updates during the update process. That is, the StatefulSet controller will always recreate the Pod at the correct version to ensure the ordering of the update is preserved. If a Pod is deleted and it has already been updated, it will be created from the updated version of the StatefulSet’s spec.template. If the Pod has not already been updated, it will be created from the previous version of the StatefulSet’s spec.template. We will explore this further in the following sections.

Staging an update

Depending on how your organization handles deployments and configuration modifications, you may want or need to stage updates to a StatefulSet before allowing the roll out to progress. You can accomplish this by setting a partition for the RollingUpdate. When the StatefulSet controller detects a partition in the updateStrategy of a StatefulSet, it will only apply the updated version of the StatefulSet’s spec.template to Pods whose ordinal is greater than or equal to the value of the partition.

You can patch the kafka StatefulSet to add a partition to the RollingUpdate update strategy.
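The partition rule is easy to state as a predicate: a Pod receives the new spec.template only if its ordinal is greater than or equal to the partition. The helper below is a sketch of that rule for intuition, not the controller’s actual implementation:

```python
def target_revision(ordinal, partition):
    """Which revision a StatefulSet pod should run, given the
    rollingUpdate partition: ordinals >= partition get the new
    (updated) template; ordinals below it keep the current one."""
    return "updated" if ordinal >= partition else "current"

# With replicas=3 and partition=3, no ordinal satisfies the rule,
# so an edited spec.template is staged but nothing rolls:
print([target_revision(i, 3) for i in range(3)])

# With partition=2, only the highest ordinal (a canary) is updated:
print([target_revision(i, 2) for i in range(3)])
```

Combined with the reverse ordinal update order, this means lowering the partition progressively "releases" updates to more of the set, one contiguous block of high ordinals at a time.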
If you set the partition to a number greater than or equal to the StatefulSet’s spec.replicas (as below), any subsequent updates you perform to the StatefulSet’s spec.template will be staged for roll out, but the StatefulSet controller will not start a rolling update:

$ kubectl patch sts kafka -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
statefulset "kafka" patched

If you patch the StatefulSet to set the requested CPU to 0.3, you will notice that none of the Pods are updated:

$ kubectl patch sts kafka --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]'
statefulset "kafka" patched

Even if you delete a Pod and wait for the StatefulSet controller to recreate it, you will notice that the Pod is recreated with the current CPU request:

$ kubectl delete po kafka-1
pod "kafka-1" deleted

$ kubectl get po kafka-1 -w
NAME      READY     STATUS              RESTARTS   AGE
kafka-1   0/1       ContainerCreating   0          10s
kafka-1   0/1       Running             0          19s
kafka-1   1/1       Running             0          21s

$ kubectl get po kafka-1 -o yaml
apiVersion: v1
kind: Pod
metadata:
  ...
    resources:
      requests:
        cpu: 250m
        memory: 1Gi

Rolling out a canary

Often, we want to verify an image update or configuration change on a single instance of an application before rolling it out globally. If you modify the partition created above to be 2, the StatefulSet controller will roll out a canary that can be used to verify that the update is working as intended:

$ kubectl patch sts kafka -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
statefulset "kafka" patched

You can watch the StatefulSet controller update the kafka-2 Pod and pause after the update is complete:

$ kubectl get po -lapp=kafka -w
NAME      READY     STATUS              RESTARTS   AGE
kafka-0   1/1       Running             0          50m
kafka-1   1/1       Running             0          10m
kafka-2   1/1       Running             0          29s
kafka-2   1/1       Terminating         0          34s
kafka-2   0/1       Terminating         0          38s
kafka-2   0/1       Terminating         0          39s
kafka-2   0/1       Terminating         0          39s
kafka-2   0/1       Pending             0          0s
kafka-2   0/1       Pending             0          0s
kafka-2   0/1       Terminating         0          20s
kafka-2   0/1       Terminating         0          20s
kafka-2   0/1       Pending             0          0s
kafka-2   0/1       Pending             0          0s
kafka-2   0/1       ContainerCreating   0          0s
kafka-2   0/1       Running             0          19s
kafka-2   1/1       Running             0          22s

Phased roll outs

Similar to rolling out a canary, you can roll out updates based on a phased progression (e.g. linear, geometric, or exponential roll outs). If you patch the kafka StatefulSet to set the partition to 1, the StatefulSet controller updates one more broker:

$ kubectl patch sts kafka -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}'
statefulset "kafka" patched

If you set it to 0, the StatefulSet controller updates the final broker and completes the update:

$ kubectl patch sts kafka -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'
statefulset "kafka" patched

Note that you don’t have to decrement the partition by one.
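A phased roll out is then just a decreasing sequence of partitions. The sketch below (a model for intuition, not controller code) walks the three-broker example through stage, canary, one-more, done, and the same function works for arbitrarily large progressions:

```python
def pods_updated(replicas, partition):
    """Ordinals at the new revision under a given rollingUpdate partition:
    exactly those greater than or equal to the partition value."""
    return [i for i in range(replicas) if i >= partition]

# The 3-broker walkthrough: stage (3), canary (2), one more (1), done (0).
for p in [3, 2, 1, 0]:
    print(f"partition={p}: updated ordinals {pods_updated(3, p)}")
```

Each decrement releases the update to exactly the additional high ordinals it uncovers, so any decreasing sequence of partitions (linear, geometric, or exponential) yields a valid phased progression.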
For a larger StatefulSet (for example, one with 100 replicas) you might use a progression more like 100, 99, 90, 50, 0. In this case, you would stage your update, deploy a canary, roll out to 10 instances, update fifty percent of the Pods, and then complete the update.

Cleaning up

To delete the API objects created above, you can use kubectl delete on the two manifests you used to create the ZooKeeper ensemble and the Kafka cluster:

$ kubectl delete -f kafka_mini.yaml
service "kafka-hs" deleted
poddisruptionbudget "kafka-pdb" deleted
statefulset "kafka" deleted

$ kubectl delete -f zookeeper_mini.yaml
service "zk-hs" deleted
service "zk-cs" deleted
poddisruptionbudget "zk-pdb" deleted
statefulset "zk" deleted

By design, the StatefulSet controller does not delete any persistent volume claims (PVCs): the PVCs created for the ZooKeeper ensemble and the Kafka cluster must be manually deleted. Depending on the storage reclamation policy of your cluster, you may also need to manually delete the backing PVs.

DaemonSet rolling update, history, and rollback

In this section, we’re going to show you how to perform a rolling update on a DaemonSet, look at its history, and then perform a rollback after a bad rollout. We will use a DaemonSet to deploy a Prometheus node exporter on each Kubernetes node in the cluster. These node exporters export node metrics to the Prometheus monitoring system. For the sake of simplicity, we’ve omitted the installation of the Prometheus server and the service for communication with DaemonSet pods from this blog post.

Prerequisites

To follow along with this section of the blog, you need a working Kubernetes 1.7 cluster and kubectl version 1.7 or later.
If you followed along with the first section, you can use the same cluster.

DaemonSet rolling update: Prometheus node exporters

First, prepare the node exporter DaemonSet manifest to run a v0.13 Prometheus node exporter on every node in the cluster:

$ cat >> node-exporter-v0.13.yaml <<EOF
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: node-exporter
      name: node-exporter
    spec:
      containers:
      - image: prom/node-exporter:v0.13.0
        name: node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: scrape
      hostNetwork: true
      hostPID: true
EOF

Note that you need to enable the DaemonSet rolling update feature by explicitly setting DaemonSet .spec.updateStrategy.type to RollingUpdate.

Apply the manifest to create the node exporter DaemonSet:

$ kubectl apply -f node-exporter-v0.13.yaml --record
daemonset "node-exporter" created

Wait for the first DaemonSet rollout to complete:

$ kubectl rollout status ds node-exporter
daemon set "node-exporter" successfully rolled out

You should see that each of your nodes runs one copy of the node exporter pod:

$ kubectl get pods -l app=node-exporter -o wide

To perform a rolling update on the node exporter DaemonSet, prepare a manifest that includes the v0.14 Prometheus node exporter:

$ cat node-exporter-v0.13.yaml | sed "s/v0.13.0/v0.14.0/g" > node-exporter-v0.14.yaml

Then apply the v0.14 node exporter DaemonSet:

$ kubectl apply -f node-exporter-v0.14.yaml --record
daemonset "node-exporter" configured

Wait for the DaemonSet rolling update to complete:

$ kubectl rollout status ds node-exporter
...
Waiting for rollout to finish: 3 out of 4 new pods have been updated...
Waiting for rollout to finish: 3 of 4 updated pods are available...
daemon set "node-exporter" successfully rolled out

We just triggered a DaemonSet rolling update by updating the DaemonSet template.
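The sed transform above is the whole release mechanic, and it generalizes to any manifest. A small sketch (the file names and tags follow this post's example, and the `bump_image` helper itself is an assumption for illustration, not a kubectl feature):

```shell
# Rewrite an image tag in a manifest, producing a new manifest for kubectl apply.
bump_image() {  # usage: bump_image <old-tag> <new-tag> <in-manifest> <out-manifest>
  sed "s/$1/$2/g" "$3" > "$4"
}

# Demonstrate on a one-line manifest snippet.
printf 'image: prom/node-exporter:v0.13.0\n' > node-exporter-v0.13-snippet.yaml
bump_image v0.13.0 v0.14.0 node-exporter-v0.13-snippet.yaml node-exporter-v0.14-snippet.yaml
cat node-exporter-v0.14-snippet.yaml
# -> image: prom/node-exporter:v0.14.0
```

In a pipeline, the output file would then be handed to kubectl apply --record, exactly as in the walkthrough above.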
By default, one old DaemonSet pod will be killed and one new DaemonSet pod will be created at a time. Now we'll cause a rollout to fail by updating the image to an invalid value:

$ cat node-exporter-v0.13.yaml | sed "s/v0.13.0/bad/g" > node-exporter-bad.yaml
$ kubectl apply -f node-exporter-bad.yaml --record
daemonset "node-exporter" configured

Notice that the rollout never finishes:

$ kubectl rollout status ds node-exporter
Waiting for rollout to finish: 0 out of 4 new pods have been updated...
Waiting for rollout to finish: 1 out of 4 new pods have been updated...
# Use ^C to exit

This behavior is expected. We mentioned earlier that a DaemonSet rolling update kills and creates one pod at a time. Because the new pod never becomes available, the rollout is halted, preventing the invalid specification from propagating to more than one node. StatefulSet rolling updates implement the same behavior with respect to failed deployments: an unsuccessful update is blocked until it is corrected by rolling back or by rolling forward with a fixed specification.

$ kubectl get pods -l app=node-exporter
NAME                  READY     STATUS         RESTARTS   AGE
node-exporter-f2n14   0/1       ErrImagePull   0          3m
...

# N = number of nodes
$ kubectl get ds node-exporter
NAME            DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-exporter   N         N         N-1       1            N           <none>          46m

DaemonSet history, rollbacks, and rolling forward

Next, perform a rollback.
Take a look at the node exporter DaemonSet rollout history:

$ kubectl rollout history ds node-exporter
daemonsets "node-exporter"
REVISION        CHANGE-CAUSE
1               kubectl apply --filename=node-exporter-v0.13.yaml --record=true
2               kubectl apply --filename=node-exporter-v0.14.yaml --record=true
3               kubectl apply --filename=node-exporter-bad.yaml --record=true

Check the details of the revision you want to roll back to:

$ kubectl rollout history ds node-exporter --revision=2
daemonsets "node-exporter" with revision #2
Pod Template:
  Labels:       app=node-exporter
  Containers:
   node-exporter:
    Image:      prom/node-exporter:v0.14.0
    Port:       9100/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>

You can quickly roll back to any DaemonSet revision you found through kubectl rollout history:

# Roll back to the last revision
$ kubectl rollout undo ds node-exporter
daemonset "node-exporter" rolled back

# Or use --to-revision to roll back to a specific revision
$ kubectl rollout undo ds node-exporter --to-revision=2
daemonset "node-exporter" rolled back

A DaemonSet rollback is done by rolling forward.
Therefore, after the rollback, DaemonSet revision 2 becomes revision 4 (the current revision):

$ kubectl rollout history ds node-exporter
daemonsets "node-exporter"
REVISION        CHANGE-CAUSE
1               kubectl apply --filename=node-exporter-v0.13.yaml --record=true
3               kubectl apply --filename=node-exporter-bad.yaml --record=true
4               kubectl apply --filename=node-exporter-v0.14.yaml --record=true

The node exporter DaemonSet is now healthy again:

$ kubectl rollout status ds node-exporter
daemon set "node-exporter" successfully rolled out

# N = number of nodes
$ kubectl get ds node-exporter
NAME            DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-exporter   N         N         N         N            N           <none>          46m

If the current DaemonSet revision is specified while performing a rollback, the rollback is skipped:

$ kubectl rollout undo ds node-exporter --to-revision=4
daemonset "node-exporter" skipped rollback (current template already matches revision 4)

You will see this complaint from kubectl if the DaemonSet revision is not found:

$ kubectl rollout undo ds node-exporter --to-revision=10
error: unable to find specified revision 10 in history

Note that kubectl rollout history and kubectl rollout status support StatefulSets, too!

Cleaning up

$ kubectl delete ds node-exporter

What's next for DaemonSet and StatefulSet

Rolling updates and roll backs close an important feature gap for DaemonSets and StatefulSets. As we plan for Kubernetes 1.8, we want to continue to focus on advancing the core controllers to GA. This likely means that some advanced feature requests (e.g. automatic roll back, infant mortality detection) will be deferred in favor of ensuring the consistency, usability, and stability of the core controllers.
We welcome feedback and contributions, so please feel free to reach out on Slack, to ask questions on Stack Overflow, or open issues or pull requests on GitHub.

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Follow us on Twitter @Kubernetesio for latest updates
Connect with the community on Slack
Get involved with the Kubernetes project on GitHub
Quelle: kubernetes

Yes to databases in containers – Microsoft SQL Server available on Docker Store

Microsoft SQL Server 2017 is now available for the first time on multiple platforms: Windows, Linux and Docker. Your databases can run in containers with no lengthy setup and no prerequisites, and you can use Docker Enterprise Edition (EE) to modernize your database delivery. The speed and efficiency benefits of Docker and containerizing apps that IT Pros and developers have been enjoying for years are now available to DBAs.
 
Try the Docker SQL Server lab now and see how database containers start in seconds, and how you can package your own schemas as Docker images.
 
If you’ve ever sat through a SQL Server install, you know why this is a big deal: SQL Server takes a while to set up, and running multiple independent SQL Server instances on the same host is not simple. This complicates maintaining dev, test and CI/CD systems where tests and experiments might break the SQL Server instance.
With SQL Server in Docker containers, all that changes. Getting SQL Server is as simple as running `docker image pull`, and you can start as many instances on a host as you want, each of them fresh and clean, and tear them back down when you’re done.
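As a sketch of what "as many instances as you want" looks like in practice: the loop below only prints the docker run commands rather than executing them; the image name and the ACCEPT_EULA / SA_PASSWORD environment variables follow Microsoft's published mssql-server-linux image, and the password is a placeholder.

```shell
# Generate docker run commands for three independent SQL Server instances
# on one host, each mapped to its own host port.
sql_run_cmds() {
  for i in 1 2 3; do
    echo "docker run -d --name sql$i -p $((1432 + i)):1433" \
         "-e ACCEPT_EULA=Y -e SA_PASSWORD=Example!Passw0rd" \
         "microsoft/mssql-server-linux"
  done
}
sql_run_cmds
```

Each instance is fresh and isolated; tearing one down is just docker rm -f sql1.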
Database engines are just like any other server-side application: they run in a process that uses CPU and memory, they store state to disk, and they make services available to clients over the network. That all works the same in containers, with the added benefit that you can limit resources, manage state with volume plugins and restrict network access.
Many Docker customers are already running highly-available production databases in containers, using technologies like Postgres. Now the portability, security and efficiency you get with Docker EE is available to SQL Server DBAs.
 
Modernize your database delivery with Docker

Traditional database delivery is difficult to fit into a modern CI/CD pipeline, but Docker makes it easy. You use Microsoft’s SQL Server Docker image and package your own schema on top, using an automated process. Anyone can run any version of the database schema, just by starting a container – they don’t even need to have SQL Server installed on their machine.
This is the database delivery workflow with Docker:

DBA pushes schema changes to source control
CI process packages the schema into a Docker image based on Microsoft-published SQL Server base images
CI process runs test suites using disposable database containers created from the new image
CD process upgrades the persistent database container in the test environment to the new image
CD process runs a database container to upgrade the production database, applying diff scripts to align the schema to the new image

The whole process of packaging, testing, distributing and upgrading databases can be automated with Docker. You run database containers in development and test environments which are fast, isolated, and have identical schema versions. You can continue using your existing production database, but use the tested Docker image to deploy updates to production.
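A sketch of the packaging step in that pipeline: a Dockerfile that layers a schema from source control onto Microsoft's SQL Server base image. The base image name is the published mssql-server-linux image; the schema file name and target path are assumptions for illustration. It is written here as a heredoc so the whole CI step fits in one script:

```shell
# Write the Dockerfile the CI process would build the schema image from.
cat > Dockerfile.schema <<'EOF'
FROM microsoft/mssql-server-linux
ENV ACCEPT_EULA=Y
# Copy the schema from source control into the image; an init script
# (not shown) would apply it when a container starts.
COPY schema.sql /opt/schema/schema.sql
EOF
cat Dockerfile.schema
```

The CI process would then run docker build -f Dockerfile.schema and push the tagged result, so anyone can start a container with exactly that schema version.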
Support and availability
Docker Enterprise Edition is a supported platform for running SQL Server in Linux in containers in production. SQL Server for Linux is a certified container image which means you have support from Microsoft and Docker to resolve any issues.
On Windows Server and Windows 10 you can run SQL Server Express in containers with Docker, to modernize your database delivery process for existing SQL Server deployments, without changing your production infrastructure.
The new SQL containers will be available for download in Docker Store in October – but you can start testing with the pre-GA containers in Store today. Already there have been over 1 million downloads from Docker Hub of the SQL Server preview for Linux containers.


To learn more about Docker solutions for IT:

Try out the SQL Server Docker lab and run your own database containers
Visit IT Starts with Docker and sign up for ongoing alerts
Learn more about Docker Enterprise Edition and start a hosted trial
Sign up for upcoming webinars

The post Yes to databases in containers – Microsoft SQL Server available on Docker Store appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Exciting new things for Docker with Windows Server 17.09

What a difference a year makes… last September, Microsoft and Docker launched Docker Enterprise Edition (EE), a Containers-as-a-Service platform for IT that manages and secures diverse applications across disparate infrastructures, for Windows Server 2016. Since then we’ve continued to work together and Windows Server 1709 contains several enhancements for Docker customers.
Docker Enterprise Edition Preview
To experiment with the new Docker and Windows features, a preview build of Docker is required. Here’s how to install it on Windows Server 1709 (this will also work on Insider builds):
Install-Module DockerProvider
Install-Package Docker -ProviderName DockerProvider -RequiredVersion preview
To run Docker Windows containers in production on any Windows Server version, please stick to Docker EE 17.06.
Docker Linux Containers on Windows
A key focus of Windows Server version 1709 is support for Linux containers on Windows. We’ve already blogged about how we’re supporting Linux containers on Windows with the LinuxKit project.
To try Linux Containers on Windows Server 1709, install the preview Docker package and enable the feature. The preview Docker EE package includes a full LinuxKit system (all 13MB of it) for use when running Docker Linux containers.
[Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", "1", "Machine")
Restart-Service Docker
To disable, just remove the environment variable:
[Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", $null, "Machine")
Restart-Service Docker
Docker Linux containers on Windows is in preview, with ongoing joint development by Microsoft and Docker. Linux Containers is also available on Windows 10 version 1709 (“Creators Update 2”). To try it out, install the special Docker for Windows preview available here.
Docker ingress mode service publishing on Windows
Parity with Linux service publishing options has been highly requested by Windows customers. Adding support for service publishing using ingress mode in Windows Server 1709 enables use of Docker’s routing mesh, allowing external endpoints to access a service via any node in the swarm regardless of which nodes are running tasks for the service.
These networking improvements also unlock VIP-based service discovery when using overlay networks so that Windows users are not limited to DNS Round Robin.
Named pipes in Windows containers
A common and powerful Docker pattern is to run Docker containers that use the Docker API of the host that the container is running on, for example to start more Docker containers or to visualize the containers, networks and volumes on the Docker host. This pattern lets you ship, in a container, software that manages or visualizes what’s going on with Docker. This is great for building software like Docker Universal Control Plane.
Running Docker on Linux, the Docker API is usually hosted on a Unix domain socket, and since these are in the filesystem namespace, sockets can be bind-mounted easily into containers. On Windows, the Docker API is available on a named pipe. Previously, named pipes were not bind-mountable into Docker Windows containers, but starting with Windows 10 and Windows Server 1709, named pipes can now be bind-mounted.
Jenkins CI is a neat way to demonstrate this. With Docker and Windows Server 1709, you can now:

Run Jenkins in a Docker Windows containers (no more hand-installing and maintaining Java, Git and Jenkins on CI machines)
Have that Jenkins container build Docker images and run Docker CI/CD jobs on the same host

I've built a Jenkins sample image (Windows Server 1709 required) that uses the new named-pipe mounting feature. To run it, simply start a container, grab the initial password and visit port 8080. You don't have to set up any Jenkins plugins or extra users:
> docker run -d -p 8080:8080 -v \\.\pipe\docker_engine:\\.\pipe\docker_engine friism/jenkins
3c90fdf4ff3f5b371de451862e02f2b7e16be4311903649b3fc8ec9e566774ed
> docker exec 3c cmd /c type c:\.jenkins\secrets\initialAdminPassword
<password>
Now create a simple freestyle project and use the “Windows Batch Command” build step. We’ll build my fork of the Jenkins Docker project itself:
git clone --depth 1 --single-branch --branch add-windows-dockerfile https://github.com/friism/docker-3 %BUILD_NUMBER%
cd %BUILD_NUMBER%
docker build -f Dockerfile-windows -t jenkins-%BUILD_NUMBER% .
cd ..
rd /s /q %BUILD_NUMBER%
Hit “Build Now” and see Jenkins (running in a container) start to build a CI job to build a container image on the very host it’s running on!
Smaller Windows base images
When Docker and Microsoft launched Windows containers last year, some people noticed that Windows container base images are not as small as typical Linux ones. Microsoft has worked very hard to winnow down the base images, and with 1709, the Nanoserver download is now about 70MB (200MB expanded on the filesystem).
One of the things that’s gone from the Nanoserver Docker image is PowerShell. This can present some challenges when authoring Dockerfiles, but multi-stage builds make it fairly easy to do all the build and component assembly in a Windows Server Core image, and then move just the results into a nanoserver image. Here’s an example showing how to build a minimal Docker image containing just the Docker CLI:
# escape=`
FROM microsoft/windowsservercore as builder
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN Invoke-WebRequest -Uri https://download.docker.com/win/static/test/x86_64/docker-17.09.0-ce-rc1.zip -OutFile 'docker.zip'
RUN Expand-Archive -Path docker.zip -DestinationPath .

FROM microsoft/nanoserver
COPY --from=builder ["docker\\docker.exe", "C:\\Program Files\\docker\\docker.exe"]
RUN setx PATH "%PATH%;C:\Program Files\docker"
ENTRYPOINT ["docker"]
You now get the best of both worlds: Easy-to-use, full-featured build environment and ultra-small and minimal runtime images that deploy and start quickly, and have minimal exploit surface area. Another good example of this pattern in action are the .NET Core base images maintained by the Microsoft .NET team.
Summary
It’s hard to believe that Docker Windows containers GA’d on Windows Server 2016 and Windows 10 just one year ago. In those 12 months, we’ve seen lots of adoption by the Docker community and lots of uptake with customers and partners. The latest release only adds more functionality to smooth the user experience and brings Windows overlay networking up to par with Linux, with smaller container images and with support for bind-mounting named pipes into containers.
To learn more about Docker solutions for IT:

Learn more about Docker for Windows
Visit IT Starts with Docker and sign up for ongoing alerts
Learn more about Docker Enterprise Edition
Sign up for upcoming webinars


The post Exciting new things for Docker with Windows Server 17.09 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

A Day in the Life of a Docker Admin

About two months ago, we celebrated SysAdmin Day and kicked off our learning series for IT professionals. So far we’ve gone through the basics of containers and how containers are delivering value back to the company through cost savings. Now we begin the next stage of the journey by introducing how to deploy and operate containerized applications.
For the next few weeks, we are going to relate typical IT administrative tasks that many of you are familiar with to the tasks of a Docker admin. In the end, containerized applications are still applications and it is still primarily the responsibility of IT to secure and manage them. That is the same regardless of if the application runs in a container or not.
In this "A Day in the Life of a Docker Admin" series, we will discuss how common IT tasks translate to the world of Docker, such as:

Managing .NET apps and migrating them off Windows Server 2008
How networking with containers work and how to build an agile and secure network for containers
How to achieve a secure and compliant application environment for any industry
Integrating Docker with monitoring and logging tools

As a first step, let’s make sure we know how to deliver and deploy your first container.
Hello World!
Just like the first time you installed ESXi and built a virtual machine, or when you opened an account on AWS or Azure and spun up your first cloud instance, one of the first things anyone wants to do is deploy their first working Docker container.

With Docker, there are great hands-on systems you can use to get started right away, with nothing to download or install. The best place to start is the Play With Docker online classroom (PWD). PWD was started by a couple of our Docker Captains to do exactly what the name implies: get hands-on experience and learn. We have gathered together several labs geared towards IT pros into 3 stages, with a handful of short tutorials in each stage.
Stage 1: Hello World!
Create and run your first Docker containers, learn about images and layers, and then turn on Swarm Mode to run a multi-service / multi-container application in a cluster.
Stage 2: Dig Deeper
Learn about Docker platform security, securing containers, and Docker networking, and then combine all of your knowledge of Swarm Mode, Services, and Security in an Orchestration Workshop. Plus, there’s a link to our hosted Docker Enterprise Edition trial so you can try it all in the full system on our hosted site (still, free, of course).
Stage 3: Moving to Production
After you have seen and played with all the pieces, it is time to learn how to bring Docker in to your own environment. You can also download Docker for your own system on both Windows and Mac.
To learn more about Docker for IT Pros, be sure to check out these resources:

Watch this webinar on Docker + vSphere: Two Great Tools That Work Great Together
Sign up for our Docker for IT Pros newsletter for occasional updates on new assets and resources
Register for an upcoming Docker webinar


The post A Day in the Life of a Docker Admin appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Introducing the Resource Management Working Group

Editor's note: today's post is by Jeremy Eder, Senior Principal Software Engineer at Red Hat, on the formation of the Resource Management Working Group.

Why are we here?

Kubernetes has evolved to support diverse and increasingly complex classes of applications. We can onboard and scale out modern, cloud-native web applications based on microservices, batch jobs, and stateful applications with persistent storage requirements. However, there are still opportunities to improve Kubernetes; for example, the ability to run workloads that require specialized hardware or those that perform measurably better when hardware topology is taken into account. These conflicts can make it difficult for application classes (particularly in established verticals) to adopt Kubernetes.

We see an unprecedented opportunity here, with a high cost if it's missed. The Kubernetes ecosystem must create a consumable path forward to the next generation of system architectures by catering to the needs of as-yet unserviced workloads in meaningful ways. The Resource Management Working Group, along with other SIGs, must demonstrate the vision customers want to see, while enabling solutions to run well in a fully integrated, thoughtfully planned end-to-end stack.

Kubernetes Working Groups are created when a particular challenge requires cross-SIG collaboration. The Resource Management Working Group, for example, works primarily with sig-node and sig-scheduling to drive support for additional resource management capabilities in Kubernetes. We make sure that key contributors from across SIGs are frequently consulted because working groups are not meant to make system-level decisions on behalf of any SIG.

An example and key benefit of this is the working group's relationship with sig-node. We were able to ensure completion of several releases of node reliability work (complete in 1.6) before contemplating feature design on top.
Those designs are use-case driven: research into technical requirements for a variety of workloads, then sorting based on measurable impact to the largest cross-section.

Target Workloads and Use-cases

One of the working group's key design tenets is that user experience must remain clean and portable, while still surfacing infrastructure capabilities that are required by businesses and applications. While not representing any commitment, we hope in the fullness of time that Kubernetes can optimally run financial services workloads, machine learning/training, grid schedulers, map-reduce, animation workloads, and more. As a use-case driven group, we account for potential application integration that can also facilitate an ecosystem of complementary independent software vendors to flourish on top of Kubernetes.

Why do this?

Kubernetes covers generic web hosting capabilities very well, so why go through the effort of expanding workload coverage for Kubernetes at all? The fact is that workloads elegantly covered by Kubernetes today only represent a fraction of the world's compute usage. We have a tremendous opportunity to safely and methodically expand upon the set of workloads that can run optimally on Kubernetes.

To date, there's demonstrable progress in the areas of expanded workload coverage:

Stateful applications such as Zookeeper, etcd, MySQL, Cassandra, ElasticSearch
Jobs, such as timed events to process the day's logs or any other batch processing
Machine Learning and compute-bound workload acceleration through Alpha GPU support

Collectively, the folks working on Kubernetes are hearing from their customers that we need to go further. Following the tremendous popularity of containers in 2014, industry rhetoric circled around a more modern, container-based, datacenter-level workload orchestrator as folks looked to plan their next architectures.
As a consequence, we began advocating for increasing the scope of workloads covered by Kubernetes, from overall concepts to specific features. Our aim is to put control and choice in users' hands, helping them move with confidence towards whatever infrastructure strategy they choose. In this advocacy, we quickly found a large group of like-minded companies interested in broadening the types of workloads that Kubernetes can orchestrate. And thus the working group was born.

Genesis of the Resource Management Working Group

After extensive development/feature discussions during the Kubernetes Developer Summit 2016 after CloudNativeCon | KubeCon Seattle, we decided to formalize our loosely organized group. In January 2017, the Kubernetes Resource Management Working Group was formed. This group (led by Derek Carr from Red Hat and Vishnu Kannan from Google) was originally cast as a temporary initiative to provide guidance back to sig-node and sig-scheduling (primarily). However, due to the cross-cutting nature of the goals within the working group, and the depth of roadmap quickly uncovered, the Resource Management Working Group became its own entity within the first few months.

Recently, Brian Grant from Google (@bgrant0607) posted the following image on his Twitter feed. This image helps to explain the role of each SIG, and shows where the Resource Management Working Group fits into the overall project organization.

To help bootstrap this effort, the Resource Management Working Group had its first face-to-face kickoff meeting in May 2017. Thanks to Google for hosting! Folks from Intel, NVIDIA, Google, IBM, Red Hat, and Microsoft (among others) participated. You can read the outcomes of that 3-day meeting here.
The group's prioritized list of features for increasing workload coverage on Kubernetes, enumerated in the charter of the Resource Management Working Group, includes:

Support for performance sensitive workloads (exclusive cores, cpu pinning strategies, NUMA)
Integrating new hardware devices (GPUs, FPGAs, Infiniband, etc.)
Improving resource isolation (local storage, hugepages, caches, etc.)
Improving Quality of Service (performance SLOs)
Performance benchmarking
APIs and extensions related to the features mentioned above

The discussions made it clear that there was tremendous overlap between needs for various workloads, and that we ought to de-duplicate requirements, and plumb generically.

Workload Characteristics

The set of initially targeted use-cases share one or more of the following characteristics:

Deterministic performance (address long tail latencies)
Isolation within a single node, as well as within groups of nodes sharing a control plane
Requirements on advanced hardware and/or software capabilities
Predictable, reproducible placement: applications need granular guarantees around placement

The Resource Management Working Group is spearheading the feature design and development in support of these workload requirements. Our goal is to provide best practices and patterns for these scenarios.

Initial Scope

In the months leading up to our recent face-to-face, we had discussed how to safely abstract resources in a way that retains portability and clean user experience, while still meeting application requirements. The working group came away with a multi-release roadmap that included 4 short- to mid-term targets with great overlap between target workloads:

Device Manager (Plugin) Proposal: Kubernetes should provide access to hardware devices such as NICs, GPUs, FPGA, Infiniband and so on.
CPU Manager: Kubernetes should provide a way for users to request static CPU assignment via the Guaranteed QoS tier.
No support for NUMA in this phase.
HugePages support in Kubernetes: Kubernetes should provide a way for users to consume huge pages of any size.
Resource Class proposal: Kubernetes should implement an abstraction layer (analogous to StorageClasses) for devices other than CPU and memory that allows a user to consume a resource in a portable way. For example, how can a pod request a GPU that has a minimum amount of memory?

Getting Involved & Summary

Our charter document includes a Contact Us section with links to our mailing list, Slack channel, and Zoom meetings. Recordings of previous meetings are uploaded to YouTube. We plan to discuss these topics and more at the 2017 Kubernetes Developer Summit at CloudNativeCon | KubeCon in Austin. Please come and join one of our meetings (users, customers, software and hardware vendors are all welcome) and contribute to the working group!
Quelle: kubernetes

Get Familiar with Docker Enterprise Edition Client Bundles

Docker Enterprise Edition (EE) is the only Containers as a Service (CaaS) Platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud.
There’s a little mentioned big feature in Docker Enterprise Edition (EE) that seems to always bring smiles to the room once it’s displayed. Before I tell you about it, let me first describe the use case. You’re a sysadmin managing a Docker cluster and you have the following requirements:

Different individuals in your LDAP/AD need various levels of access to the containers/services in your cluster
Some users need to be able to go inside the running containers.
Some users just need to be able to see the logs
You do NOT want to give SSH access to each host in your cluster.

Now, how do you achieve this? The answer, or feature rather, is a client bundle. When you run a docker version command you will see two entries. The client portion of the engine is able to connect to a local server AND a remote one once a client bundle is invoked.

What is a client bundle?
A client bundle is a group of certificates downloadable directly from the Docker Universal Control Plane (UCP) user interface within the admin section for “My Profile”. This allows you to authorize a remote Docker engine to a specific user account managed in Docker EE, absorbing all associated RBAC controls in the process. You can now execute docker swarm commands from your remote machine that take effect on the remote cluster.
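Under the hood, the bundle's env.sh simply sets the standard Docker client environment variables so the local CLI talks to the UCP manager over TLS using the bundle's certificates. A sketch of what it does, with a placeholder endpoint rather than a real cluster (a downloaded bundle ships its own env.sh with the correct values):

```shell
# A stand-in for the env.sh inside a client bundle.
cat > env.sh <<'EOF'
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$PWD"
export DOCKER_HOST=tcp://ucp.example.com:443
EOF

# Source it to point the local docker CLI at the remote cluster.
. ./env.sh
echo "$DOCKER_HOST"
```

After sourcing the real env.sh, every docker command on your machine is executed against the remote cluster under your UCP account's RBAC rules.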
Example:
I have a user named ‘bkauf’ in my UCP. I download and extract a client bundle for this user.

I open a terminal session with my docker for mac and issue a docker version command. You will see the server version matches the client. I can do a docker ps and verify nothing is running.

Now, I navigate to the extracted bundle directory and run the env.sh script (env.ps1 for windows)

Notice the server now lists my version as ucp/2.2.2. This is the version of my UCP manager; I'm remotely connected from my laptop to my remote cluster, assuming the bkauf user's access levels. I can now do various things such as create a service, view its tasks (containers) and even log into this REMOTE container from my laptop, all through the API, with no SSH access needed. I need not worry about which host the container is on! This is made possible by the roles and permissions set up for the user with the granular Role Based Access Control available in Docker EE.

What about a Windows container on a Windows node in a UCP cluster, you ask? Linux OR Windows nodes, remote access through your client bundle all works the same!

Docker Enterprise Edition (EE) is the only Containers as a Service (CaaS) platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud. Docker EE embraces both traditional applications and microservices, built on Linux and Windows, and intended for x86 servers, mainframes, and public clouds. Docker EE unites all of these applications into a single platform, complete with customizable and flexible access control, support for a broad range of applications and infrastructure, and a highly automated software supply chain.
Learn More

Visit IT Starts with Docker and learn more about MTA
Learn more about Docker Enterprise Edition
Start a hosted trial
Sign up for upcoming webinars


The post Get Familiar with Docker Enterprise Edition Client Bundles appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Docker Official Images are now Multi-platform

This past week, Docker rolled out a big update to our Official Images to make them multi-platform aware. Now, when you run `docker run hello-world`, Docker CE and EE will pull and run the correct hello-world image whether that’s for x86-64 Linux, Windows, ARM, IBM Z mainframes or any other system where Docker runs. With Docker rapidly adding support for additional operating systems (like Windows) and CPU architectures (like IBM Z) this is an important UX improvement.
Docker Official Images are a curated set of container images that include:

Base operating system images like Ubuntu, BusyBox and Debian
Ready-to-use build and runtime images for popular programming languages like Go, Python and Java
Easy-to-use images for data stores such as PostgreSQL, Neo4j and Redis
Pre-packaged software images to run WordPress, Ghost and Redmine and many other popular open source projects

The official images have always been available for x86-64 Linux. Images for non-x86 Linux architectures have also been available, but had to be fetched either from a different namespace (`docker pull s390x/golang` on an IBM Z mainframe) or using a different tag (`docker pull golang:nanoserver` on Windows). This was not the seamless and portable experience that we wanted for users of Docker’s new multi-arch and multi-OS orchestration features.
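Concretely, the change removes the need for per-platform image names; the same pull now works on every supported platform (a sketch, assuming a working Docker installation):

```shell
# Before multi-platform Official Images:
docker pull s390x/golang          # IBM Z: per-architecture namespace
docker pull golang:nanoserver     # Windows: per-OS tag

# Now the same command works everywhere; the daemon resolves the
# manifest list entry matching its own OS and CPU architecture:
docker pull golang
```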
Luckily, the Docker registry and distribution protocol have supported multi-platform images since Docker 1.10, using a technology called manifest lists. A manifest list can take the place of a single-architecture image manifest in a registry (for example, for `golang`) and contains a list of (“platform”, “manifest-reference”) tuples. If a registry responds to a `docker pull` with a manifest list instead of an image manifest, Docker examines the manifest list and then pulls the correct list entry for the platform that it happens to be running on.
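The shape of a manifest list, as defined by the image manifest v2 schema 2 specification, looks roughly like this (digests and sizes elided):

```
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:…",
      "size": 527,
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:…",
      "size": 527,
      "platform": { "architecture": "s390x", "os": "linux" }
    }
  ]
}
```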
The distribution protocol is backwards compatible, and manifest lists are only served to clients that indicate support in the `Accept` header. For clients that don’t support manifest lists, registries will fall back to the x86-64 Linux image manifest. Manifest lists are fully supported by Docker Content Trust to ensure that multi-platform image content is cryptographically signed and verified.
Manifest lists have been rolled out for Linux images for most CPU architectures, and Windows support is on its way. If your favorite CPU architecture or OS isn’t covered yet, you can always continue to use a CPU- or OS-specific tag or image when pulling. Fetching images by digest is also unaffected by this update.
If you’re interested in building multi-arch images, check out Phil Estes’ manifest-list tool and keep track of the PR to add a manifest command to the Docker CLI.
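As a quick, hedged example of using the manifest-list tool mentioned above (the repository names are placeholders; check the tool’s README for the current flags):

```shell
# Inspect the manifest list behind an Official Image:
manifest-tool inspect golang

# Push a manifest list tying per-architecture images together;
# ARCH in the template is substituted per platform:
manifest-tool push from-args \
    --platforms linux/amd64,linux/arm64,linux/s390x \
    --template example/app-ARCH \
    --target example/app:latest
```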
Manifest lists and multi-arch Docker images have been in the works for a long time. We’re excited that these features are now making it simpler to pull and use Docker Official Repo images seamlessly on the many platforms where Docker is available.
Resources:

Phil Estes’ and Utz Bacher’s posts on Official Images going multi-arch
Official Repo documentation
Details on multi-arch official images
Official Repo GitHub org
Manifest-list specification

 


The post Docker Official Images are now Multi-platform appeared first on Docker Blog.