RBAC Support in Kubernetes

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.6.

One of the highlights of the Kubernetes 1.6 release is the RBAC authorizer feature moving to beta. RBAC, Role-Based Access Control, is an authorization mechanism for managing permissions around Kubernetes resources. RBAC allows configuration of flexible authorization policies that can be updated without cluster restarts. The focus of this post is to highlight some of the interesting new capabilities and best practices.

RBAC vs ABAC

Currently there are several authorization mechanisms available for use with Kubernetes. Authorizers are the mechanisms that decide who is permitted to make what changes to the cluster using the Kubernetes API. This affects things like kubectl, system components, and also certain applications that run in the cluster and manipulate the state of the cluster, such as Jenkins with the Kubernetes plugin, or Helm, which runs in the cluster and uses the Kubernetes API to install applications. Of the available authorization mechanisms, ABAC and RBAC are the ones local to a Kubernetes cluster that allow configurable permissions policies.

ABAC, Attribute-Based Access Control, is a powerful concept. However, as implemented in Kubernetes, ABAC is difficult to manage and understand. It requires ssh and root filesystem access on the master VM of the cluster to make authorization policy changes. For permission changes to take effect, the cluster API server must be restarted.

RBAC permission policies are configured using kubectl or the Kubernetes API directly. Users can be authorized to make authorization policy changes using RBAC itself, making it possible to delegate resource management without giving away ssh access to the cluster master.
RBAC policies map easily to the resources and operations used in the Kubernetes API. Based on where the Kubernetes community is focusing its development efforts, going forward RBAC should be preferred over ABAC.

Basic Concepts

There are a few basic ideas behind RBAC that are foundational to understanding it. At its core, RBAC is a way of granting users granular access to Kubernetes API resources. The connection between users and resources is defined in RBAC using two kinds of objects.

Roles

A Role is a collection of permissions. For example, a role could be defined to include read permission on pods and list permission for pods. A ClusterRole is just like a Role, but can be used anywhere in the cluster.

Role Bindings

A RoleBinding maps a Role to a user or set of users, granting that Role’s permissions to those users for resources in that namespace. A ClusterRoleBinding allows users to be granted a ClusterRole for authorization across the entire cluster. Cluster roles and cluster role bindings function like roles and role bindings except that they have wider scope; the exact differences, and how cluster roles and cluster role bindings interact with roles and role bindings, are covered in the Kubernetes documentation.

RBAC in Kubernetes

RBAC is now deeply integrated into Kubernetes and used by the system components to grant the permissions necessary for them to function.
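To make the Role and RoleBinding concepts above concrete, here is an illustrative sketch. The role name, binding name, and user are hypothetical placeholders, not names from this post; the API version matches the v1beta1 RBAC API that shipped in Kubernetes 1.6.

```yaml
# A Role granting read access to pods in the "default" namespace.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader          # hypothetical role name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
# A RoleBinding granting that Role to a single user.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: read-pods           # hypothetical binding name
subjects:
- kind: User
  name: jane                # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole and ClusterRoleBinding use the same shape, minus the namespace field in metadata.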
System roles are typically prefixed with system: so they can be easily recognized.

    ➜  kubectl get clusterroles --namespace=kube-system
    NAME                                         KIND
    admin                                        ClusterRole.v1beta1.rbac.authorization.k8s.io
    cluster-admin                                ClusterRole.v1beta1.rbac.authorization.k8s.io
    edit                                         ClusterRole.v1beta1.rbac.authorization.k8s.io
    kubelet-api-admin                            ClusterRole.v1beta1.rbac.authorization.k8s.io
    system:auth-delegator                        ClusterRole.v1beta1.rbac.authorization.k8s.io
    system:basic-user                            ClusterRole.v1beta1.rbac.authorization.k8s.io
    system:controller:attachdetach-controller    ClusterRole.v1beta1.rbac.authorization.k8s.io
    system:controller:certificate-controller     ClusterRole.v1beta1.rbac.authorization.k8s.io
    ...

The RBAC system roles have been expanded to cover the necessary permissions for running a Kubernetes cluster with RBAC only.

During the permission translation from ABAC to RBAC, some of the permissions that were enabled by default in many deployments of ABAC-authorized clusters were identified as unnecessarily broad and were scoped down in RBAC. The area most likely to impact workloads on a cluster is the permissions available to service accounts. With the permissive ABAC configuration, requests from a pod using the pod-mounted token to authenticate to the API server have broad authorization.
As a concrete example, the curl command at the end of this sequence will return a JSON-formatted result when ABAC is enabled and an error when only RBAC is enabled.

    ➜  kubectl run nginx --image=nginx:latest
    ➜  kubectl exec -it $(kubectl get pods -o jsonpath='{.items[0].metadata.name}') bash
    ➜  apt-get update && apt-get install -y curl
    ➜  curl -ik -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes/api/v1/namespaces/default/pods

Any applications you run in your Kubernetes cluster that interact with the Kubernetes API have the potential to be affected by the permissions changes when transitioning from ABAC to RBAC.

To smooth the transition from ABAC to RBAC, you can create Kubernetes 1.6 clusters with both the ABAC and RBAC authorizers enabled. When both are enabled, authorization for a resource is granted if either authorization policy grants access. However, under that configuration the most permissive authorizer wins, and it will not be possible to use RBAC to fully control permissions.

At this point, RBAC is complete enough that ABAC support should be considered deprecated going forward. It will still remain in Kubernetes for the foreseeable future, but development attention is focused on RBAC. Two different talks at the Google Cloud Next conference touched on RBAC-related changes in Kubernetes 1.6; jump to the relevant parts here and here. For more detailed information about using RBAC in Kubernetes 1.6, read the full RBAC documentation.

Get Involved

If you’d like to contribute or simply help provide feedback and drive the roadmap, join our community. If you are specifically interested in security and RBAC-related conversation, participate through one of these channels:

Chat with us on the Kubernetes Slack sig-auth channel
Join the biweekly SIG-Auth meetings on Wednesdays at 11:00 AM PT

Thanks for your support and contributions.
Read more in-depth posts on what’s new in Kubernetes 1.6 here.

– Jacob Simpson, Greg Castle & CJ Cullen, Software Engineers at Google

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Get involved with the Kubernetes project on GitHub
Follow us on Twitter @Kubernetesio for latest updates
Connect with the community on Slack
Download Kubernetes
Source: kubernetes

Enterprise Ready Software from Docker Store

Docker Store is the place to discover and procure trusted, enterprise-ready containerized software: free, open source and commercial.
Docker Store is the evolution of Docker Hub, the world’s largest container registry, catering to millions of users. As of March 1, 2017, we crossed 11 billion pulls from the public registry! Docker Store leverages the public registry’s massive user base and ensures that our customers (developers, operators and enterprise Docker users) get what they ask for. The Official Images program was developed to create a set of curated and trusted content that developers could use as a foundation for building containerized software. Building on the lessons learned and best practices from that program, Docker recently launched a certification program that enables ISVs around the world to take advantage of Store by offering great software, packaged to operate optimally on the Docker platform.

The Docker Store is designed to bring Docker users and ecosystem partners together with:

Certified ISV apps that have been validated against Docker Enterprise Edition and come with cooperative support from Docker and the ISV
Enhanced search and discovery capabilities for containers, including filtering by platform, category and OS
A self-service publisher workflow and interface to facilitate a scalable marketplace
Support for a range of licensing models for published content

Publishers with certified content on Docker Store include: AVI Networks, Cisco, Bleemeo, BlobCity DB, Blockbridge, CodeCov, CoScale, Datadog, Dynatrace, GitLab, Hedvig, HPE, Hypergrid, Kaazing, Koekiebox, Microsoft, NetApp, Nexenta, Nimble, Nutanix, Polyverse, Portworx, Sysdig, and Weaveworks.
The simplest way to get started is to go check out Docker Store!

Using Docker Store
For developers and IT teams building Docker apps, the Docker Store is the best place to get the components they need, available as containers. Containerization technology has emerged as a strong solution for developers, devops and IT; enterprises especially need assurances that software packages are trusted and “just work” when deployed. The Docker Certification program takes containers through an end-to-end testing process and provides collaborative support for any potential issues. Read more about the certification program here!

Enhanced Discovery: Easily search for a wide range of solutions from Docker, ISV containers or plugins. Use filters and categories to search for specific characteristics
Software Trials: Where available, free trials of commercial software (including Docker) are available from the Docker Store.
Community Content: Developers can continue to browse and download from Docker Hub public repos from the Docker Store. The Docker Community is very vibrant and active, and community images will be accessible from the Docker Store.
Notifications: Alerts and updates are available to manage subscriptions of Docker Store listings including patches, fixes or new versions.

Publish Content to Docker Store
From large ISVs with hundreds of products to small startups building new tools, Docker Store provides a marketplace to package and distribute software and plugins in containers ready for use on the Docker platform, making these tools more accessible to the community of millions of Docker users and accelerating time to value with these partner solutions.
In addition, Publishers gain the following benefits from the Docker Store:

Access to a globally scalable container distribution service.
Path to certification for software and plugin content to differentiate the solution from the rest of the ecosystem and to signal additional value to end users.
Visibility and analytics including managing subscribers and sales reports.
Flexible fulfillment and billing support with “Paid via Docker” and BYOL (Bring your own License) models. You focus on creating great software and we take care of the rest.
Reputation management via Ratings and Reviews.

Getting started as a publisher on Docker Store is as simple as 1-2-3!

Tips for becoming a publisher:

Create Great Containerized Content (you have probably already done this!)
Follow best practices

https://success.docker.com/store

https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/

Use an official image as your base image.
github.com/docker/docker-bench-security

We will keep adding more best practices and tools to make your content robust.

Go to https://store.docker.com and click on “Publish”.
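As a purely hypothetical illustration of the best practices listed above (build on an official base image, keep the image small, avoid running as root), a minimal Dockerfile might look like the following; the application binary name is a placeholder:

```dockerfile
# Build on an official image from the Docker Hub library.
FROM alpine:3.5

# Copy in the (statically linked) application binary.
COPY myapp /usr/local/bin/myapp

# Create and switch to an unprivileged user.
RUN addgroup -S app && adduser -S -G app app
USER app

ENTRYPOINT ["/usr/local/bin/myapp"]
```

Running docker-bench-security (linked above) against the resulting image is one way to check it against the security checklist before publishing.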

More Resources

Learn more about certification. 
Sign up for a Docker Store Workshop at DockerCon
Learn More about Docker Enterprise Edition 

Docker Store is the place to get your certified containers, plugins and Editions!

The post Enterprise Ready Software from Docker Store appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Configuring Private DNS Zones and Upstream Nameservers in Kubernetes

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.6.

Many users have existing domain name zones that they would like to integrate into their Kubernetes DNS namespace. For example, hybrid-cloud users may want to resolve their internal “.corp” domain addresses within the cluster. Other users may have a zone populated by a non-Kubernetes service discovery system (like Consul). We’re pleased to announce that, in Kubernetes 1.6, kube-dns adds support for configurable private DNS zones (often called “stub domains”) and external upstream DNS nameservers. In this blog post, we describe how to configure and use this feature.

Default lookup flow

Kubernetes currently supports two DNS policies, specified on a per-pod basis using the dnsPolicy flag: “Default” and “ClusterFirst”. If dnsPolicy is not explicitly specified, then “ClusterFirst” is used:

If dnsPolicy is set to “Default”, then the name resolution configuration is inherited from the node the pods run on. Note: this feature cannot be used in conjunction with dnsPolicy: “Default”.
If dnsPolicy is set to “ClusterFirst”, then DNS queries will be sent to the kube-dns service. Queries for domains rooted in the configured cluster domain suffix (any address ending in “.cluster.local” in the example above) will be answered by the kube-dns service. All other queries (for example, www.kubernetes.io) will be forwarded to the upstream nameserver inherited from the node.

Before this feature, it was common to introduce stub domains by replacing the upstream DNS with a custom resolver. However, this caused the custom resolver itself to become a critical path for DNS resolution, where issues with scalability and availability could cause the cluster to lose DNS functionality.
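The suffix-based routing described above, extended by the stub domains this post introduces, can be sketched as a simple suffix match over the configured domains. This is an illustrative Python sketch, not kube-dns’s actual implementation:

```python
def route_query(name, cluster_suffix, stub_domains, upstream):
    """Pick the nameserver for a query under the "ClusterFirst" policy:
    cluster-suffix names go to kube-dns, stub-domain names go to the
    configured custom resolver, and everything else goes upstream."""
    def in_domain(n, suffix):
        return n == suffix or n.endswith("." + suffix)

    if in_domain(name, cluster_suffix):
        return "kube-dns"
    for suffix, servers in stub_domains.items():
        if in_domain(name, suffix):
            return servers[0]   # custom resolver for the stub domain
    return upstream[0]          # fall through to the upstream nameserver

stubs = {"acme.local": ["1.2.3.4"]}
upstream = ["8.8.8.8", "8.8.4.4"]
print(route_query("kubernetes.default.svc.cluster.local", "cluster.local", stubs, upstream))  # kube-dns
print(route_query("foo.acme.local", "cluster.local", stubs, upstream))                        # 1.2.3.4
print(route_query("widget.com", "cluster.local", stubs, upstream))                            # 8.8.8.8
```

The three example names here correspond to the table of example queries later in this post; in real kube-dns the stub-domain and upstream lists come from the ConfigMap described below.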
This feature allows the user to introduce custom resolution without taking over the entire resolution path.

Customizing the DNS Flow

Beginning in Kubernetes 1.6, cluster administrators can specify custom stub domains and upstream nameservers by providing a ConfigMap for kube-dns. For example, the configuration below inserts a single stub domain and two upstream nameservers. As specified, DNS requests with the “.acme.local” suffix will be forwarded to a DNS server listening at 1.2.3.4. Additionally, Google Public DNS will serve upstream queries. See ConfigMap Configuration Notes at the end of this section for a few notes about the data format.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-dns
      namespace: kube-system
    data:
      stubDomains: |
        {"acme.local": ["1.2.3.4"]}
      upstreamNameservers: |
        ["8.8.8.8", "8.8.4.4"]

The diagram below shows the flow of DNS queries specified in the configuration above. With the dnsPolicy set to “ClusterFirst”, a DNS query is first sent to the DNS caching layer in kube-dns. From here, the suffix of the request is examined and then forwarded to the appropriate DNS. In this case, names with the cluster suffix (e.g., “.cluster.local”) are sent to kube-dns. Names with the stub domain suffix (e.g., “.acme.local”) will be sent to the configured custom resolver. Finally, requests that do not match any of those suffixes will be forwarded to the upstream DNS.

Below is a table of example domain names and the destination of the queries for those domain names:

    Domain name                              Server answering the query
    kubernetes.default.svc.cluster.local     kube-dns
    foo.acme.local                           custom DNS (1.2.3.4)
    widget.com                               upstream DNS (one of 8.8.8.8, 8.8.4.4)

ConfigMap Configuration Notes

stubDomains (optional)
Format: a JSON map using a DNS suffix key (e.g., “acme.local”) and a value consisting of a JSON array of DNS IPs.
Note: The target nameserver may itself be a Kubernetes service.
For instance, you can run your own copy of dnsmasq to export custom DNS names into the cluster DNS namespace.

upstreamNameservers (optional)
Format: a JSON array of DNS IPs.
Note: If specified, the values replace the nameservers taken by default from the node’s /etc/resolv.conf.
Limits: a maximum of three upstream nameservers can be specified.

Example 1: Adding a Consul DNS Stub Domain

In this example, the user has a Consul DNS service discovery system they wish to integrate with kube-dns. The Consul domain server is located at 10.150.0.1, and all Consul names have the suffix “.consul.local”. To configure Kubernetes, the cluster administrator simply creates a ConfigMap object as shown below. Note: in this example, the cluster administrator did not wish to override the node’s upstream nameservers, so they didn’t need to specify the optional upstreamNameservers field.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-dns
      namespace: kube-system
    data:
      stubDomains: |
        {"consul.local": ["10.150.0.1"]}

Example 2: Replacing the Upstream Nameservers

In this example the cluster administrator wants to explicitly force all non-cluster DNS lookups to go through their own nameserver at 172.16.0.1. Again, this is easy to accomplish; they just need to create a ConfigMap with the upstreamNameservers field specifying the desired nameserver.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-dns
      namespace: kube-system
    data:
      upstreamNameservers: |
        ["172.16.0.1"]

Get involved

If you’d like to contribute or simply help provide feedback and drive the roadmap, join our community. For network-related conversations, participate through one of these channels:

Chat with us on the Kubernetes Slack network channel
Join our Special Interest Group, SIG-Network, which meets on Tuesdays at 14:00 PT

Thanks for your support and contributions.
Read more in-depth posts on what’s new in Kubernetes 1.6 here.

–Bowei Du, Software Engineer, and Matthew DeLio, Product Manager, Google
Source: kubernetes

Docker Gives Back at DockerCon

Docker is actively working to improve opportunities for women and underrepresented minorities throughout the global ecosystem and to promote diversity and inclusion in the larger tech community.
For instance, at DockerCon 2016, attendees contributed to a scholarship program through the Bump Up Challenge unlocking funds towards full-tuition scholarships for three applicants to attend Hack Reactor. We selected two recipients in 2016 and are excited to announce our third recipient, Tabitha Hsia, who is already in her first week of the program.
In her own words:

“My name is Tabitha Hsia. I grew up in the East Bay. I come from an art-focused family with my sister being a professional cellist, my mother being a professional pianist, and my great grandfather being a famous Taiwanese painter. I chose Hack Reactor because of their impressive student outcomes and their weekly schedule. Already in my first week, I have learned a ton of information from lectures and their wealth of resources. I have enjoyed pair programming the most so far. While the lectures expose me to new topics, applying the topics to actual problems has deepened my understanding the most. After graduation, my long-term goal is to become a virtual reality developer. Seeing the integration of the solutions and tools into society excites me.”

DockerCon Gives Back  
Following the success of previous DockerCon initiatives promoting diversity in the tech industry, we’re proud to continue our efforts at the upcoming DockerCon 2017 in Austin.
With this year’s program, called DockerCon Gives Back, we’re recognizing four organizations that are doing outstanding work locally in Austin and globally. Attendees at the show will have the chance to connect with and support these great organizations by dropping their token in that organization’s box; each token represents a dollar that Docker will donate at the end of the conference.


Meet the DockerCon 2017 Diversity Scholarship winners
The DockerCon team is excited to announce the recipients of this year’s DockerCon Diversity Scholarship Program! The DockerCon Diversity Scholarship aims to provide support and guidance to members of the Docker Community who are traditionally underrepresented in tech through mentorship and a scholarship to attend DockerCon. Meet the recipients of this year’s scholarship here.

Congrats to our Austin scholarship winners! Learn more about how Docker Gives Back at DockerCon.

The post Docker Gives Back at DockerCon appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Advanced Scheduling in Kubernetes

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.6.

The Kubernetes scheduler’s default behavior works well for most cases; for example, it ensures that pods are only placed on nodes that have sufficient free resources, it tries to spread pods from the same set (ReplicaSet, StatefulSet, etc.) across nodes, it tries to balance out the resource utilization of nodes, etc.

But sometimes you want to control how your pods are scheduled. For example, perhaps you want to ensure that certain pods only schedule on nodes with specialized hardware, or you want to co-locate services that communicate frequently, or you want to dedicate a set of nodes to a particular set of users. Ultimately, you know much more about how your applications should be scheduled and deployed than Kubernetes ever will. So Kubernetes 1.6 offers four advanced scheduling features: node affinity/anti-affinity, taints and tolerations, pod affinity/anti-affinity, and custom schedulers. Each of these features is now in beta in Kubernetes 1.6.

Node Affinity/Anti-Affinity

Node affinity/anti-affinity is one way to set rules on which nodes are selected by the scheduler. This feature is a generalization of the nodeSelector feature, which has been in Kubernetes since version 1.0. The rules are defined using the familiar concepts of custom labels on nodes and selectors specified in pods, and they can be either required or preferred, depending on how strictly you want the scheduler to enforce them.

Required rules must be met for a pod to be scheduled on a particular node. If no node matches the criteria (plus all of the other normal criteria, such as having enough free resources for the pod’s resource request), then the pod won’t be scheduled. Required rules are specified in the requiredDuringSchedulingIgnoredDuringExecution field of nodeAffinity.
For example, if we want to require scheduling on a node that is in the us-central1-a GCE zone of a multi-zone Kubernetes cluster, we can specify the following affinity rule as part of the Pod spec:

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: "failure-domain.beta.kubernetes.io/zone"
              operator: In
              values: ["us-central1-a"]

“IgnoredDuringExecution” means that the pod will still run if labels on a node change and affinity rules are no longer met. There are future plans to offer requiredDuringSchedulingRequiredDuringExecution, which will evict pods from nodes as soon as they don’t satisfy the node affinity rule(s).

Preferred rules mean that if nodes match the rules, they will be chosen first, and only if no preferred nodes are available will non-preferred nodes be chosen. You can prefer instead of require that pods are deployed to us-central1-a by slightly changing the pod spec to use preferredDuringSchedulingIgnoredDuringExecution (note that preferred rules take a weight, used to rank matching nodes):

    affinity:
      nodeAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
            - key: "failure-domain.beta.kubernetes.io/zone"
              operator: In
              values: ["us-central1-a"]

Node anti-affinity can be achieved by using negative operators. So for instance, if we want our pods to avoid us-central1-a we can do this:

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: "failure-domain.beta.kubernetes.io/zone"
              operator: NotIn
              values: ["us-central1-a"]

Valid operators you can use are In, NotIn, Exists, DoesNotExist, Gt, and Lt.

Additional use cases for this feature are to restrict scheduling based on nodes’ hardware architecture, operating system version, or specialized hardware. Node affinity/anti-affinity is beta in Kubernetes 1.6.
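The semantics of those operators can be illustrated with a small sketch that evaluates a single matchExpression against a node’s label map. This is a simplified illustration, not the scheduler’s actual matching code; the Gt/Lt branch assumes integer-valued labels:

```python
def match_expression(labels, key, operator, values=()):
    """Evaluate one nodeAffinity matchExpression against a node's labels.
    Simplified sketch of the In/NotIn/Exists/DoesNotExist/Gt/Lt operators."""
    if operator == "In":
        return labels.get(key) in values
    if operator == "NotIn":
        return labels.get(key) not in values
    if operator == "Exists":
        return key in labels
    if operator == "DoesNotExist":
        return key not in labels
    if operator == "Gt":
        return key in labels and int(labels[key]) > int(values[0])
    if operator == "Lt":
        return key in labels and int(labels[key]) < int(values[0])
    raise ValueError("unknown operator: %s" % operator)

zone_key = "failure-domain.beta.kubernetes.io/zone"
node = {zone_key: "us-central1-a"}
print(match_expression(node, zone_key, "In", ["us-central1-a"]))     # True
print(match_expression(node, zone_key, "NotIn", ["us-central1-a"]))  # False
print(match_expression(node, "gpu", "DoesNotExist"))                 # True
```

A required rule schedules the pod only onto nodes for which every matchExpression in some nodeSelectorTerm evaluates to True.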
Taints and Tolerations

A related feature is “taints and tolerations,” which allows you to mark (“taint”) a node so that no pods can schedule onto it unless a pod explicitly “tolerates” the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful for situations where most pods in the cluster should avoid scheduling onto the node. For example, you might want to mark your master node as schedulable only by Kubernetes system components, or dedicate a set of nodes to a particular group of users, or keep regular pods away from nodes that have special hardware so as to leave room for pods that need the special hardware.

The kubectl command allows you to set taints on nodes, for example:

    kubectl taint nodes node1 key=value:NoSchedule

This creates a taint that marks the node as unschedulable by any pods that do not have a toleration for the taint with key key, value value, and effect NoSchedule. (The other taint effects are PreferNoSchedule, which is the preferred version of NoSchedule, and NoExecute, which means any pods that are running on the node when the taint is applied will be evicted unless they tolerate the taint.) The toleration you would add to a PodSpec to have the corresponding pod tolerate this taint would look like this:

    tolerations:
    - key: "key"
      operator: "Equal"
      value: "value"
      effect: "NoSchedule"

In addition to moving taints and tolerations to beta in Kubernetes 1.6, we have introduced an alpha feature that uses taints and tolerations to allow you to customize how long a pod stays bound to a node when the node experiences a problem like a network partition, instead of using the default five minutes. See this section of the documentation for more details.

Pod Affinity/Anti-Affinity

Node affinity/anti-affinity allows you to constrain which nodes a pod can run on based on the nodes’ labels.
But what if you want to specify rules about how pods should be placed relative to one another, for example to spread or pack pods within a service or relative to pods in other services? For that you can use pod affinity/anti-affinity, which is also beta in Kubernetes 1.6.

Let’s look at an example. Say you have front-ends in service S1, and they communicate frequently with back-ends that are in service S2 (a “north-south” communication pattern). So you want these two services to be co-located in the same cloud provider zone, but you don’t want to have to choose the zone manually; if the zone fails, you want the pods to be rescheduled to another (single) zone. You can specify this with a pod affinity rule that looks like this (assuming you give the pods of this service a label “service=S2” and the pods of the other service a label “service=S1”):

    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: service
              operator: In
              values: ["S1"]
          topologyKey: failure-domain.beta.kubernetes.io/zone

As with node affinity/anti-affinity, there is also a preferredDuringSchedulingIgnoredDuringExecution variant.

Pod affinity/anti-affinity is very flexible. Imagine you have profiled the performance of your services and found that containers from service S1 interfere with containers from service S2 when they share the same node, perhaps due to cache interference effects or saturating the network link. Or maybe due to security concerns you never want containers of S1 and S2 to share a node.
To implement these rules, just make two changes to the snippet above: change podAffinity to podAntiAffinity and change topologyKey to kubernetes.io/hostname.

Custom Schedulers

If the Kubernetes scheduler’s various features don’t give you enough control over the scheduling of your workloads, you can delegate responsibility for scheduling arbitrary subsets of pods to your own custom scheduler(s) that run(s) alongside, or instead of, the default Kubernetes scheduler. Multiple schedulers is beta in Kubernetes 1.6.

Each new pod is normally scheduled by the default scheduler. But if you provide the name of your own custom scheduler, the default scheduler will ignore that Pod and allow your scheduler to schedule the Pod to a node. Let’s look at an example.

Here we have a Pod where we specify the schedulerName field:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      schedulerName: my-scheduler
      containers:
      - name: nginx
        image: nginx:1.10

If we create this Pod without deploying a custom scheduler, the default scheduler will ignore it and it will remain in a Pending state. So we need a custom scheduler that looks for, and schedules, pods whose schedulerName field is my-scheduler.

A custom scheduler can be written in any language and can be as simple or complex as you need. Here is a very simple example of a custom scheduler written in Bash that assigns a node randomly.
Note that you need to run this along with kubectl proxy for it to work.

    #!/bin/bash
    SERVER='localhost:8001'
    while true;
    do
        for PODNAME in $(kubectl --server $SERVER get pods -o json | jq '.items[] | select(.spec.schedulerName == "my-scheduler") | select(.spec.nodeName == null) | .metadata.name' | tr -d '"');
        do
            NODES=($(kubectl --server $SERVER get nodes -o json | jq '.items[].metadata.name' | tr -d '"'))
            NUMNODES=${#NODES[@]}
            CHOSEN=${NODES[$((RANDOM % NUMNODES))]}
            curl --header "Content-Type:application/json" --request POST --data '{"apiVersion":"v1", "kind": "Binding", "metadata": {"name": "'$PODNAME'"}, "target": {"apiVersion": "v1", "kind": "Node", "name": "'$CHOSEN'"}}' http://$SERVER/api/v1/namespaces/default/pods/$PODNAME/binding/
            echo "Assigned $PODNAME to $CHOSEN"
        done
        sleep 1
    done

Learn more

The Kubernetes 1.6 release notes have more information about these features, including details about how to change your configurations if you are already using the alpha version of one or more of these features (this is required, as the move from alpha to beta is a breaking change for these features).

Acknowledgements

The features described here, both in their alpha and beta forms, were a true community effort, involving engineers from Google, Huawei, IBM, Red Hat and more.

Get Involved

Share your voice at our weekly community meeting:

Post questions (or answer questions) on Stack Overflow
Follow us on Twitter @Kubernetesio for latest updates
Connect with the community on Slack (room #sig-scheduling)

Many thanks for your contributions.

–Ian Lewis, Developer Advocate, and David Oppenheimer, Software Engineer, Google
Source: kubernetes

Scalability updates in Kubernetes 1.6: 5,000 node and 150,000 pod clusters

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.6.

Last summer we shared updates on Kubernetes scalability; since then we’ve been working hard, and are proud to announce that Kubernetes 1.6 can handle 5,000-node clusters with up to 150,000 pods. Moreover, those clusters have even better end-to-end pod startup time than the previous 2,000-node clusters in the 1.3 release, and the latency of API calls is within the one-second SLO.

In this blog post we review what metrics we monitor in our tests and describe our performance results from Kubernetes 1.6. We also discuss what changes we made to achieve the improvements, and our plans for upcoming releases in the area of system scalability.

X-node clusters – what does it mean?

Now that Kubernetes 1.6 is released, it is a good time to review what it means when we say we “support” X-node clusters. As described in detail in a previous blog post, we currently have two performance-related Service Level Objectives (SLOs):

API-responsiveness: 99% of all API calls return in less than 1s.
Pod startup time: 99% of pods and their containers (with pre-pulled images) start within 5s.

As before, it is possible to run larger deployments than the stated supported 5,000-node cluster (and users have), but performance may be degraded and it may not meet our strict SLOs defined above.

We are aware of the limited scope of these SLOs. There are many aspects of the system that they do not exercise. For example, we do not measure how soon a new pod that is part of a service will be reachable through the service IP address after the pod is started.
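The API-responsiveness SLO above (“99% of all API calls return in less than 1s”) can be checked against a set of measured latencies with a simple nearest-rank percentile computation. A sketch, with hypothetical latency numbers:

```python
def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value such that at
    least p percent of the samples are less than or equal to it."""
    ordered = sorted(samples)
    rank = max(1, (p * len(ordered) + 99) // 100)  # ceil of p% of n, in integer math
    return ordered[rank - 1]

def meets_slo(latencies, p=99, threshold=1.0):
    """Check an SLO of the form: p% of calls return in under `threshold` seconds."""
    return percentile(latencies, p) < threshold

# 100 hypothetical API call latencies (seconds): 99 fast calls, one slow outlier.
latencies = [0.05] * 99 + [2.0]
print(percentile(latencies, 99))  # 0.05
print(meets_slo(latencies))       # True
```

Note that a single slow outlier does not break a 99th-percentile SLO, which is exactly why the SLOs are stated in percentile terms rather than as worst-case latencies.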
If you are considering using large Kubernetes clusters and have performance requirements not covered by our SLOs, please contact the Kubernetes Scalability SIG so we can help you understand whether Kubernetes is ready to handle your workload now.

The top scalability-related priority for upcoming Kubernetes releases is to enhance our definition of what it means to support X-node clusters by:

refining currently existing SLOs
adding more SLOs (that will cover various areas of Kubernetes, including networking)

Kubernetes 1.6 performance metrics at scale

So how does performance in large clusters look in Kubernetes 1.6? The following graph shows the end-to-end pod startup latency with 2000- and 5000-node clusters. For comparison, we also show the same metric from Kubernetes 1.3, which we published in our previous scalability blog post that described support for 2000-node clusters. As you can see, Kubernetes 1.6 has better pod startup latency with both 2000 and 5000 nodes compared to Kubernetes 1.3 with 2000 nodes [1].

The next graph shows API response latency for a 5000-node Kubernetes 1.6 cluster. The latencies at all percentiles are less than 500ms, and even the 90th percentile is less than about 100ms.

How did we get here?

Over the past nine months (since the last scalability blog post), there have been a huge number of performance- and scalability-related changes in Kubernetes. In this post we will focus on the two biggest ones and briefly enumerate a few others.

etcd v3

In Kubernetes 1.6 we switched the default storage backend (the key-value store where the whole cluster state is kept) from etcd v2 to etcd v3. The initial work towards this transition started during the 1.3 release cycle.
You might wonder why it took us so long, given that:

the first stable version of etcd supporting the v3 API was announced on June 30, 2016
the new API was designed together with the Kubernetes team to support our needs (from both a feature and scalability perspective)
the integration of etcd v3 with Kubernetes had already mostly been finished when etcd v3 was announced (indeed CoreOS used Kubernetes as a proof-of-concept for the new etcd v3 API)

As it turns out, there were a lot of reasons. We will describe the most important ones below.

Changing storage in a backward-incompatible way, as is the case for the etcd v2 to v3 migration, is a big change, and thus one for which we needed a strong justification. We found this justification in September when we determined that we would not be able to scale to 5000-node clusters if we continued to use etcd v2 (kubernetes/32361 contains some discussion about it). In particular, what didn’t scale was the watch implementation in etcd v2. In a 5000-node cluster, we need to be able to send at least 500 watch events per second to a single watcher, which wasn’t possible in etcd v2.

Once we had the strong incentive to actually update to etcd v3, we started thoroughly testing it. As you might expect, we found some issues. There were some minor bugs in Kubernetes, and in addition we requested a performance improvement in etcd v3’s watch implementation (watch was the main bottleneck in etcd v2 for us). This led to the 3.0.10 etcd patch release.

Once those changes had been made, we were convinced that new Kubernetes clusters would work with etcd v3. But the large challenge of migrating existing clusters remained.
For this we needed to automate the migration process, thoroughly test the underlying CoreOS etcd upgrade tool, and figure out a contingency plan for rolling back from v3 to v2. But finally, we are confident that it should work.

Switching storage data format to protobuf

In the Kubernetes 1.3 release, we enabled protobufs as the data format for Kubernetes components to communicate with the API server (in addition to maintaining support for JSON). This gave us a huge performance improvement.

However, we were still using JSON as the format in which data was stored in etcd, even though technically we were ready to change that. The reason for delaying this migration was related to our plans to migrate to etcd v3. Now you are probably wondering how this change depended on the migration to etcd v3. The reason is that with etcd v2 we couldn’t really store data in binary format (to work around this we were additionally base64-encoding the data), whereas with etcd v3 it just worked. So to simplify the transition to etcd v3, and to avoid some non-trivial transformation of data stored in etcd during it, we decided to delay switching the storage data format to protobufs until the migration to the etcd v3 storage backend was done.

Other optimizations

We made tens of optimizations throughout the Kubernetes codebase during the last three releases, including:

optimizing the scheduler (which resulted in 5-10x higher scheduling throughput)
switching all controllers to a new recommended design using shared informers, which reduced resource consumption of controller-manager – for reference see this document
optimizing individual operations in the API server (conversions, deep-copies, patch)
reducing memory allocation in the API server (which significantly impacts the latency of API calls)

We want to emphasize that the optimization work we have done during the last few releases, and indeed throughout the history of the project, is a joint effort by many different companies and individuals from the whole Kubernetes community.

What’s next?

People frequently ask how far we are going to go in improving Kubernetes scalability. Currently we do not have plans to increase scalability beyond 5000-node clusters (within our SLOs) in the next few releases. If you need clusters larger than 5000 nodes, we recommend using federation to aggregate multiple Kubernetes clusters.

However, that doesn’t mean we are going to stop working on scalability and performance. As we mentioned at the beginning of this post, our top priority is to refine our two existing SLOs and introduce new ones that will cover more parts of the system, e.g. networking. This effort has already started within the Scalability SIG. We have made significant progress on how we would like to define performance SLOs, and this work should be finished in the coming months.

Join the effort

If you are interested in scalability and performance, please join our community and help us shape Kubernetes. There are many ways to participate, including:

Chat with us in the Kubernetes Slack scalability channel
Join our Special Interest Group, SIG-Scalability, which meets every Thursday at 9:00 AM PST

Thanks for the support and contributions! Read more in-depth posts on what’s new in Kubernetes 1.6 here.

– Wojciech Tyczynski, Software Engineer, Google

[1] We are investigating why 5000-node clusters have better startup time than 2000-node clusters. The current theory is that it is related to running 5000-node experiments using a 64-core master and 2000-node experiments using a 32-core master.

Five Days of Kubernetes 1.6

With the help of our growing community of 1,110-plus contributors, we pushed around 5,000 commits to deliver Kubernetes 1.6, bringing focus on multi-user, multi-workloads at scale. While many improvements have been contributed, we selected a few features to highlight in a series of in-depth posts listed below. Follow along and read what’s new:

Day 1: ???
Day 2: ???
Day 3: ???
Day 4: ???
Day 5: ??

Connect

Follow us on Twitter @Kubernetesio for latest updates
Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Get involved with the Kubernetes project on GitHub
Connect with the community on Slack
Download Kubernetes

Dynamic Provisioning and Storage Classes in Kubernetes

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.6

Storage is a critical part of running stateful containers, and Kubernetes offers powerful primitives for managing it. Dynamic volume provisioning, a feature unique to Kubernetes, allows storage volumes to be created on-demand. Before dynamic provisioning, cluster administrators had to manually make calls to their cloud or storage provider to provision new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. With dynamic provisioning, these two steps are automated, eliminating the need for cluster administrators to pre-provision storage. Instead, the storage resources can be dynamically provisioned using the provisioner specified by the StorageClass object (see user-guide). StorageClasses are essentially blueprints that abstract away the underlying storage provider, as well as other parameters, like disk type (e.g., solid-state vs. standard disks).

StorageClasses use provisioners that are specific to the storage platform or cloud provider to give Kubernetes access to the physical media being used. Several storage provisioners are provided in-tree (see user-guide), and out-of-tree provisioners are now also supported (see kubernetes-incubator).

In the Kubernetes 1.6 release, dynamic provisioning has been promoted to stable (having entered beta in 1.4). This is a big step forward in completing the Kubernetes storage automation vision, allowing cluster administrators to control how resources are provisioned and giving users the ability to focus more on their application. With all of these benefits, there are a few important user-facing changes (discussed below) that are important to understand before using Kubernetes 1.6.

Storage Classes and How to Use them

StorageClasses are the foundation of dynamic provisioning, allowing cluster administrators to define abstractions for the underlying storage platform.
Users simply refer to a StorageClass by name in the PersistentVolumeClaim (PVC) using the “storageClassName” parameter. In the following example, a PVC refers to a specific storage class named “gold”:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: testns
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: gold
```

In order to promote the usage of dynamic provisioning, this feature permits the cluster administrator to specify a default StorageClass. When present, the user can create a PVC without specifying a storageClassName, further reducing the user’s responsibility to be aware of the underlying storage provider. When using default StorageClasses, there are some operational subtleties to be aware of when creating PersistentVolumeClaims (PVCs). This is particularly important if you already have existing PersistentVolumes (PVs) that you want to re-use:

PVs that are already “Bound” to PVCs will remain bound with the move to 1.6. They will not have a StorageClass associated with them unless the user manually adds it.
If PVs become “Available” (i.e., if you delete a PVC and the corresponding PV is recycled), then they are subject to the following:
- If storageClassName is not specified in the PVC, the default storage class will be used for provisioning. Existing “Available” PVs that do not have the default storage class label will not be considered for binding to the PVC.
- If storageClassName is set to an empty string (‘’) in the PVC, no storage class will be used (i.e., dynamic provisioning is disabled for this PVC). Existing “Available” PVs (that do not have a specified storageClassName) will be considered for binding to the PVC.
- If storageClassName is set to a specific value, then the matching storage class will be used. Existing “Available” PVs that have a matching storageClassName will be considered for binding to the PVC. If no corresponding storage class exists, the PVC will fail.

To reduce the burden of setting up default StorageClasses in a cluster, beginning with 1.6, Kubernetes installs (via the add-on manager) default storage classes for several cloud providers. To use these default StorageClasses, users do not need to refer to them by name – that is, storageClassName need not be specified in the PVC.

The following table provides more detail on the default storage classes pre-installed by cloud provider, as well as the specific parameters used by these defaults.

Cloud Provider          Default StorageClass Name   Default Provisioner
Amazon Web Services     gp2                         aws-ebs
Microsoft Azure         standard                    azure-disk
Google Cloud Platform   standard                    gce-pd
OpenStack               standard                    cinder
VMware vSphere          thin                        vsphere-volume

While these pre-installed default storage classes are chosen to be “reasonable” for most storage users, this guide provides instructions on how to specify your own default.

Dynamically Provisioned Volumes and the Reclaim Policy

All PVs have a reclaim policy associated with them that dictates what happens to a PV once it becomes released from a claim (see user-guide). Since the goal of dynamic provisioning is to completely automate the lifecycle of storage resources, the default reclaim policy for dynamically provisioned volumes is “delete”. This means that when a PersistentVolumeClaim (PVC) is released, the dynamically provisioned volume is de-provisioned (deleted) on the storage provider and the data is likely irretrievable. If this is not the desired behavior, the user must change the reclaim policy on the corresponding PersistentVolume (PV) object after the volume is provisioned.

How do I change the reclaim policy on a dynamically provisioned volume?

You can change the reclaim policy by editing the PV object and changing the “persistentVolumeReclaimPolicy” field to the desired value.
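For example, after such an edit, the relevant portion of the PV might look like the following sketch. Only the fields relevant here are shown; the volume name is illustrative (in practice it is assigned by the provisioner), and note that the API spells the field values with a capital letter (“Retain”, “Delete”):

```yaml
# Illustrative PV excerpt after editing; unrelated fields omitted.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-abc123                        # illustrative provisioner-assigned name
spec:
  persistentVolumeReclaimPolicy: Retain   # changed from the default "Delete"
```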
For more information on various reclaim policies see user-guide.

FAQs

How do I use a default StorageClass?

If your cluster has a default StorageClass that meets your needs, then all you need to do is create a PersistentVolumeClaim (PVC) and the default provisioner will take care of the rest – there is no need to specify the storageClassName:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: testns
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```

Can I add my own storage classes?

Yes. To add your own storage class, first determine which provisioners will work in your cluster. Then, create a StorageClass object with parameters customized to meet your needs (see user-guide for more detail). For many users, the easiest way to create the object is to write a yaml file and apply it with “kubectl create -f”. The following is an example of a StorageClass for Google Cloud Platform named “gold” that creates a “pd-ssd”. Since multiple classes can exist within a cluster, the administrator may leave the default enabled for most workloads (since it uses a “pd-standard”), with the “gold” class reserved for workloads that need extra performance.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```

How do I check if I have a default StorageClass installed?

You can use kubectl to check for StorageClass objects. In the example below there are two storage classes: “gold” and “standard”.
The “gold” class is user-defined, and the “standard” class is installed by Kubernetes and is the default.

```shell
$ kubectl get sc
NAME                 TYPE
gold                 kubernetes.io/gce-pd
standard (default)   kubernetes.io/gce-pd

$ kubectl describe storageclass standard
Name:            standard
IsDefaultClass:  Yes
Annotations:     storageclass.beta.kubernetes.io/is-default-class=true
Provisioner:     kubernetes.io/gce-pd
Parameters:      type=pd-standard
Events:          <none>
```

Can I delete/turn off the default StorageClasses?

You cannot delete the default storage class objects provided. Since they are installed as cluster addons, they will be recreated if they are deleted.

You can, however, disable the defaulting behavior by removing (or setting to false) the following annotation: storageclass.beta.kubernetes.io/is-default-class.

If there are no StorageClass objects marked with the default annotation, then PersistentVolumeClaim objects (without a StorageClass specified) will not trigger dynamic provisioning. They will, instead, fall back to the legacy behavior of binding to an available PersistentVolume object.

Can I assign my existing PVs to a particular StorageClass?

Yes, you can assign a StorageClass to an existing PV by editing the appropriate PV object and adding (or setting) the desired storageClassName field to it.

What happens if I delete a PersistentVolumeClaim (PVC)?

If the volume was dynamically provisioned, then the default reclaim policy is set to “delete”. This means that, by default, when the PVC is deleted, the underlying PV and storage asset will also be deleted.
If you want to retain the data stored on the volume, then you must change the reclaim policy from “delete” to “retain” after the PV is provisioned.

–Saad Ali & Michelle Au, Software Engineers, and Matthew De Lio, Product Manager, Google

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Get involved with the Kubernetes project on GitHub
Follow us on Twitter @Kubernetesio for latest updates
Connect with the community on Slack
Download Kubernetes

containerd joins the Cloud Native Computing Foundation

Today, we’re excited to announce that containerd – Docker’s core container runtime – has been accepted by the Technical Oversight Committee (TOC) as an incubating project in the Cloud Native Computing Foundation (CNCF). containerd’s acceptance into the CNCF alongside projects such as Kubernetes, gRPC and Prometheus comes three months after Docker, with support from the five largest cloud providers, announced its intent to contribute the project to a neutral foundation in the first quarter of this year.
In the process of spinning containerd out of Docker and contributing it to the CNCF, there are a few changes that come along with it. For starters, containerd now has a logo; see below. In addition, we have a new @containerd Twitter handle. In the next few days, we’ll be moving the containerd GitHub repository to a separate GitHub organization. Similarly, the containerd Slack channel will be moved to a separate Slack team, which will soon be available at containerd.slack.com.

containerd has been extracted from Docker’s container platform and includes methods for transferring container images, container execution and supervision, and low-level local storage, across both Linux and Windows. containerd is an essential upstream component of the Docker platform used by millions of end users, and it also provides the industry with an open, stable and extensible base for building non-Docker products and container solutions.

“Our decision to contribute containerd to the CNCF closely follows months of collaboration and input from thought leaders in the Docker community,” said Solomon Hykes, founder, CTO and Chief Product Officer at Docker. “Since our announcement in December, we have been progressing the design of the project with the goal of making it easily embedded into higher level systems to provide core container capabilities. Our focus has always been on solving users’ problems. By donating containerd to an open foundation, we can accelerate the rate of innovation through cross-project collaboration – making the end user the ultimate benefactor of our joint efforts.”

The donation of containerd aligns with Docker’s history of making key open source plumbing projects available to the community. This effort began in 2014 when the company open sourced libcontainer. Over the past two years, Docker has continued along this path by making libnetwork, notary, runC (contributed to the Open Container Initiative, which like CNCF, is part of The Linux Foundation), HyperKit, VPNKit, DataKit, SwarmKit and InfraKit available as open source projects as well.
containerd is already a key foundation for Kubernetes, as Kubernetes 1.5 runs with Docker 1.10.3 to 1.12.3. There is also strong alignment with other CNCF projects: containerd exposes an API using gRPC and exposes metrics in the Prometheus format. containerd also fully leverages the Open Container Initiative’s (OCI) runtime and image format specifications and the OCI reference implementation (runC), and will pursue OCI certification when it is available. A proof of concept for integrating containerd directly into the Kubernetes CRI is currently being worked on. Check out the pull request on GitHub for more technical details.

Figure 1: containerd’s role in the Container Ecosystem
Community consensus leads to technical progress
In the past few months, the containerd team has been active implementing Phase 1 and Phase 2 of the containerd roadmap. Details about the project can be charted in the containerd weekly development reports posted in the Github project.
At the end of February, Docker hosted the containerd Summit with more than 50 members of the community from companies including Alibaba, AWS, Google, IBM, Microsoft, Red Hat and VMware. The group gathered to learn more about containerd, get more information on containerd’s progress and discuss its design. To view the presentations, check out the containerd summit recap blog post.
The target date to finish implementing the containerd 1.0 roadmap is June 2017. To contribute to containerd, or embed it into a container system, check out the project on GitHub. If you want to learn more about containerd’s progress, or discuss its design, join the team in Berlin tomorrow at KubeCon 2017 for the containerd Salon, or in Austin at DockerCon on Day 4, Thursday April 20th, when the Docker Internals Summit morning session will be a containerd summit.
Additional containerd Resources:

Roadmap
Scope table
Architecture document
Draft APIs


Source: https://blog.docker.com/feed/

Kubernetes 1.6: Multi-user, Multi-workloads at Scale

Today we’re announcing the release of Kubernetes 1.6.

In this release the community’s focus is on scale and automation, to help you deploy multiple workloads to multiple users on a cluster. We are announcing that 5,000-node clusters are supported. We moved dynamic storage provisioning to stable. Role-based access control (RBAC), kubefed, kubeadm, and several scheduling features are moving to beta. We have also added intelligent defaults throughout to enable greater automation out of the box.

What’s New

Scale and Federation: Large enterprise users looking for proof of at-scale performance will be pleased to know that Kubernetes’ stringent scalability SLO now supports 5,000-node (150,000-pod) clusters. This 150% increase in total cluster size, powered by a new version of etcd v3 by CoreOS, is great news if you are deploying applications such as search or games which can grow to consume larger clusters.

For users who want to scale beyond 5,000 nodes or spread across multiple regions or clouds, federation lets you combine multiple Kubernetes clusters and address them through a single API endpoint. In this release, the kubefed command line utility graduated to beta – with improved support for on-premise clusters. kubefed now automatically configures kube-dns on joining clusters and can pass arguments to federated components.

Security and Setup: Users concerned with security will find that RBAC, now in beta, adds a significant security benefit through more tightly scoped default roles for system components. The default RBAC policies in 1.6 grant scoped permissions to control-plane components, nodes, and controllers. RBAC allows cluster administrators to selectively grant particular users or service accounts fine-grained access to specific resources on a per-namespace basis. RBAC users upgrading from 1.5 to 1.6 should view the guidance here.

Users looking for an easy way to provision a secure cluster on physical or cloud servers can use kubeadm, which is now beta.
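The per-namespace grants described above are expressed as Role and RoleBinding objects. A minimal sketch follows; the API group and version match the 1.6 beta API, while the role name, namespace, and user are illustrative:

```yaml
# Minimal illustrative RBAC sketch; names, namespace, and user are made up.
apiVersion: rbac.authorization.k8s.io/v1beta1   # RBAC API is v1beta1 in 1.6
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]                 # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: jane                      # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```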
kubeadm has been enhanced with a set of command line flags and a base feature set that includes RBAC setup, use of the Bootstrap Token system and an enhanced Certificates API.

Advanced Scheduling: This release adds a set of powerful and versatile scheduling constructs to give you greater control over how pods are scheduled, including rules to restrict pods to particular nodes in heterogeneous clusters, and rules to spread or pack pods across failure domains such as nodes, racks, and zones.

Node affinity/anti-affinity, now in beta, allows you to restrict pods to schedule only on certain nodes based on node labels. Use built-in or custom node labels to select specific zones, hostnames, hardware architecture, operating system version, specialized hardware, etc. The scheduling rules can be required or preferred, depending on how strictly you want the scheduler to enforce them.

A related feature, called taints and tolerations, makes it possible to compactly represent rules for excluding pods from particular nodes. The feature, also now in beta, makes it easy, for example, to dedicate sets of nodes to particular sets of users, or to keep nodes that have special hardware available for pods that need the special hardware by excluding pods that don’t need it.

Sometimes you want to co-schedule services, or pods within a service, near each other topologically, for example to optimize North-South or East-West communication. Or you want to spread pods of a service for failure tolerance, or keep antagonistic pods separated, or ensure sole tenancy of nodes. Pod affinity and anti-affinity, now in beta, enables such use cases by letting you set hard or soft requirements for spreading and packing pods relative to one another within arbitrary topologies (node, zone, etc.).

Lastly, for the ultimate in scheduling flexibility, you can run your own custom scheduler(s) alongside, or instead of, the default Kubernetes scheduler. Each scheduler is responsible for different sets of pods.
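As a sketch of the affinity and toleration constructs described above, a pod spec combining a required (“hard”) node affinity rule with a toleration might look like the following. The field names are from the 1.6 API; the label key/values, taint key, and image are illustrative:

```yaml
# Illustrative pod spec; label keys/values and the taint are made up.
apiVersion: v1
kind: Pod
metadata:
  name: with-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # "hard" requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype             # illustrative node label
            operator: In
            values:
            - ssd
  tolerations:                        # allow scheduling onto tainted nodes
  - key: dedicated                    # illustrative taint key
    operator: Equal
    value: gpu
    effect: NoSchedule
  containers:
  - name: app
    image: nginx                      # illustrative
```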
Support for multiple schedulers is beta in this release.

Dynamic Storage Provisioning: Users deploying stateful applications will benefit from the extensive storage automation capabilities in this release of Kubernetes.

Since its early days, Kubernetes has been able to automatically attach and detach storage, format disks, mount and unmount volumes per the pod spec, and do so seamlessly as pods move between nodes. In addition, the PersistentVolumeClaim (PVC) and PersistentVolume (PV) objects decouple the request for storage from the specific storage implementation, making the pod spec portable across a range of cloud and on-premise environments. In this release StorageClass and dynamic volume provisioning are promoted to stable, completing the automation story by creating and deleting storage on demand, eliminating the need to pre-provision.

The design allows cluster administrators to define and expose multiple flavors of storage within a cluster, each with a custom set of parameters. End users can stop worrying about the complexity and nuances of how storage is provisioned, while still selecting from multiple storage options.

In 1.6 Kubernetes comes with a set of built-in defaults to completely automate the storage provisioning lifecycle, freeing you to work on your applications. Specifically, Kubernetes now pre-installs system-defined StorageClass objects for AWS, Azure, GCP, OpenStack and VMware vSphere by default. This gives Kubernetes users on these providers the benefits of dynamic storage provisioning without having to manually set up StorageClass objects. This is a change in the default behavior of PVC objects on these clouds. Note that the default behavior is that dynamically provisioned volumes are created with the “delete” reclaim policy.
That means once the PVC is deleted, the dynamically provisioned volume is automatically deleted, so users do not have the extra step of ‘cleaning up’.

In addition, we have expanded the range of storage supported overall, including:

ScaleIO Kubernetes Volume Plugin, enabling pods to seamlessly access and use data stored on ScaleIO volumes
Portworx Kubernetes Volume Plugin, adding the capability to use Portworx as a storage provider for Kubernetes clusters. Portworx pools your server capacity and turns your servers or cloud instances into converged, highly available compute and storage nodes.
Support for NFSv3, NFSv4, and GlusterFS on clusters using the COS node image
Support for user-written/run dynamic PV provisioners. A golang library and examples can be found here.
Beta support for mount options in persistent volumes

Container Runtime Interface, etcd v3 and Daemon set updates: While users may not directly interact with the container runtime or the API server datastore, they are foundational components for user-facing functionality in Kubernetes. As such, the community invests in expanding the capabilities of these and other system components.

The Docker-CRI implementation is beta and is enabled by default in kubelet. Alpha support for other runtimes (cri-o, frakti, rkt) has also been implemented.
The default backend storage for the API server has been upgraded to use etcd v3 by default for new clusters. If you are upgrading from a 1.5 cluster, care should be taken to ensure continuity by planning a data migration window.
Node reliability is improved, as the kubelet exposes an admin-configurable Node Allocatable feature to reserve compute resources for system daemons.
Daemon set updates let you perform rolling updates on a daemon set.

Alpha features: This release was mostly focused on maturing functionality; however, a few alpha features were added to support the roadmap:

Out-of-tree cloud provider support adds a new cloud-controller-manager binary that may be used for testing the new out-of-core cloud provider flow
Per-pod eviction in case of node problems, combined with tolerationSeconds, lets users tune the duration a pod stays bound to a node that is experiencing problems
Pod Injection Policy adds a new API resource, PodPreset, to inject information such as secrets, volumes, volume mounts, and environment variables into pods at creation time
Custom metrics support in the Horizontal Pod Autoscaler changed to use
Multiple Nvidia GPU support is introduced with the Docker runtime only

These are just some of the highlights in our first release for the year. For a complete list please visit the release notes.

Community

This release is possible thanks to our vast and open community. Together, we’ve pushed nearly 5,000 commits by some 275 authors. To bring our many advocates together, the community has launched a new program called K8sPort, an online hub where the community can participate in gamified challenges and get credit for their contributions. Read more about the program here.

Release Process

A big thanks goes out to the release team for 1.6 (led by Dan Gillespie of CoreOS) for their work bringing the 1.6 release to light. This release team is an exemplar of the Kubernetes community’s commitment to community governance.
Dan is the first non-Google release manager, and he, along with the rest of the team, worked throughout the release (building on the great work of the 1.5 release manager, Saad Ali) to uncover and document tribal knowledge, shine light on tools and processes that still require special permissions, and prioritize work to improve the Kubernetes release process. Many thanks to the team.

User Adoption

We’re continuing to see rapid adoption of Kubernetes in all sectors and sizes of businesses. Furthermore, adoption is coming from across the globe, from a startup in Tennessee, USA to a Fortune 500 company in China.

JD.com, one of China’s largest internet companies, uses Kubernetes in conjunction with their OpenStack deployment. They’ve moved 20% of their applications thus far to Kubernetes and are already running 20,000 pods daily. Read more about their setup here.

Spire, a startup based in Tennessee, witnessed their public cloud provider experience an outage, but suffered zero downtime because Kubernetes was able to move their workloads to different zones. Read their full experience here.

“With Kubernetes, there was never a moment of panic, just a sense of awe watching the automatic mitigation as it happened.”

Share your Kubernetes use case story with the community here.

Availability

Kubernetes 1.6 is available for download here on GitHub and via get.k8s.io. To get started with Kubernetes, try one of these interactive tutorials.

Get Involved

CloudNativeCon + KubeCon in Berlin is this week, March 29-30, 2017. We hope to get together with much of the community and share more there!

Share your voice at our weekly community meeting:
Post questions (or answer questions) on Stack Overflow
Follow us on Twitter @Kubernetesio for latest updates
Connect with the community on Slack

Many thanks for your contributions and advocacy!

– Aparna Sinha, Senior Product Manager, Kubernetes, Google