Scaling with Kubernetes DaemonSets

We're used to thinking about scaling from the point of view of a deployment; we want it to scale up under different conditions, so it looks for appropriate nodes, and puts pods on them. DaemonSets, on the other hand, take a different tack: any time you have a node that belongs to the set, it runs the pods you specify.  For example, you might create a DaemonSet to tell Kubernetes that any time you create a node with the label app=webserver you want it to run Nginx.  Let's take a look at how that works.
Creating a DaemonSet
Let's start by looking at a sample YAML file to define a DaemonSet:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
 name: frontend
spec:
 template:
   metadata:
     labels:
       app: frontend-webserver
   spec:
     nodeSelector:
       app: frontend-node
     containers:
        - name: webserver
         image: nginx
         ports:
          - containerPort: 80
Here we're creating a DaemonSet called frontend. As with a ReplicationController, pods launched by the DaemonSet are given the label specified in the spec.template.metadata.labels property; in this case, app=frontend-webserver.
The template.spec itself has two important parts: the nodeSelector and the containers.  The containers are fairly self-evident (see our discussion of ReplicationControllers if you need a refresher) but the interesting part here is the nodeSelector.
The nodeSelector tells Kubernetes which nodes are part of the set and should run the specified containers.  In other words, these pods are deployed automatically; there's no input at all from the scheduler, so schedulability of a node isn't taken into account.  This also makes DaemonSets a great way to deploy pods that need to be running before other objects.
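The labels that nodeSelector matches against are ordinary node labels, so you can inspect what a node currently carries with:
$ kubectl get nodes --show-labels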
Let's go ahead and create the DaemonSet.  Create a file called ds.yaml with the definition in it and run the command:
$ kubectl create -f ds.yaml
daemonset "frontend" created
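Even before any nodes match, we can confirm that the DaemonSet exists; the exact columns vary by version, but the output looks something like:
$ kubectl get ds
NAME       DESIRED   CURRENT   READY     NODE-SELECTOR       AGE
frontend   0         0         0         app=frontend-node   10s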
Now let's see the DaemonSet in action.
Scaling capacity using a DaemonSet
If we check to see if the pods have been deployed, we'll see that they haven't:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
That's because we don't yet have any nodes that are part of our DaemonSet.  If we look at the nodes we do have:
$ kubectl get nodes
NAME        STATUS    AGE
10.0.10.5   Ready     75d
10.0.10.7   Ready     75d
We can go ahead and add at least one of them by adding the app=frontend-node label:
$ kubectl label node 10.0.10.5 app=frontend-node
node "10.0.10.5" labeled
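To double-check which nodes now belong to the set, we can filter by the label we just applied:
$ kubectl get nodes -l app=frontend-node
NAME        STATUS    AGE
10.0.10.5   Ready     75d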
Now if we get a list of pods again…
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          19s
We can see that the pod was started without us taking any additional action.  
Now we have a single webserver running.  If we wanted to scale up, we could simply add our second node to the DaemonSet:
$ kubectl label node 10.0.10.7 app=frontend-node
node "10.0.10.7" labeled
If we check the list of pods again, we can see that a new one was automatically started:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          1m
frontend-rp9bu              1/1       Running   0          35s
If we remove a node from the DaemonSet, any related pods are automatically terminated:
$ kubectl label node 10.0.10.5 --overwrite app=backend
node "10.0.10.5" labeled

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-rp9bu              1/1       Running   0          1m
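As before, we can cross-check with the DaemonSet itself; the desired pod count should be back down to one:
$ kubectl get ds frontend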
Updating DaemonSets, and improvements in Kubernetes 1.6
OK, so how do we update a running DaemonSet?  Well, as of Kubernetes 1.5, the answer is "you don't." Currently, it's possible to change the template of a DaemonSet, but it won't affect the pods that are already running.  
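A common workaround in 1.5 is to change the template and then delete the existing pods by label; the DaemonSet controller recreates them from the current template. A sketch, using the app=frontend-webserver label from our example:
$ kubectl delete pods -l app=frontend-webserver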
Starting in Kubernetes 1.6, however, you will be able to do rolling updates with Kubernetes DaemonSets. You'll have to set the updateStrategy, as in:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
 name: frontend
spec:
 updateStrategy:
   type: RollingUpdate
   rollingUpdate:
     maxUnavailable: 1
 minReadySeconds: 0
 template:
   metadata:
     labels:
       app: frontend-webserver
   spec:
     nodeSelector:
       app: frontend-node
     containers:
        - name: webserver
         image: nginx
         ports:
          - containerPort: 80
Once you've done that, you can make changes and they'll propagate to the running pods. For example, you can change the image on which the containers are based:
$ kubectl set image ds/frontend webserver=httpd
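You can then watch the pods being replaced one at a time; for example:
$ kubectl get pods -l app=frontend-webserver -w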
If you want to make more substantive changes, you can edit or patch the DaemonSet:
kubectl edit ds/frontend
or
kubectl patch ds/frontend -p "$(cat ds-changes.yaml)"
(Obviously you would use your own DaemonSet names and files!)
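For reference, kubectl patch also accepts an inline strategic merge patch; a sketch that swaps the image (the httpd tag is just an example):
$ kubectl patch ds/frontend -p '{"spec":{"template":{"spec":{"containers":[{"name":"webserver","image":"httpd:2.4"}]}}}}'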
So that's the basics of working with DaemonSets.  What else would you like to learn about them? Let us know in the comments below.

What’s new in Kubernetes 1.6 — a focus on stability

Kubernetes 1.6 is forecast to be released this week. Major themes include new capabilities for DaemonSets, the beta release of Kubernetes federation, new scheduling features, and new networking capabilities. You can get an in-depth look at all of the new features in the Kubernetes 1.6 release notes, but let's get a quick overview here.
DaemonSet rolling updates
You're probably used to dealing with Kubernetes in terms of creating a Deployment or a ReplicationController and having it manage your pods, making certain that you always have a particular number of instances spread among the nodes that are available.  DaemonSets, on the other hand, look at things from the opposite perspective.
With DaemonSets, you specify the nodes to run a particular set of containers, and Kubernetes will make certain that any nodes that satisfy those requirements will run those pods. With Kubernetes 1.6, you now have the option to update those DaemonSets with a new image or other information.  (For more information on DaemonSets, you can see this article, which explains how and why to use them.)
Kubernetes Federation
As Kubernetes takes hold, the likelihood of running into situations in which users have multiple large clusters to deal with increases. Federation enables you to create an infrastructure in which users can use, say, the closest cluster to them, or the one that has the most spare capacity.
Now in beta, kubefed "supports hosting federation on on-prem clusters, [and] automatically configures kube-dns in joining clusters and allows passing arguments to federation components."
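To give a flavor of the workflow (all names here are hypothetical), initializing a federation control plane and joining a cluster looks roughly like:
$ kubefed init myfed --host-cluster-context=host-cluster --dns-zone-name="example.com."
$ kubectl config use-context myfed
$ kubefed join cluster-west --host-cluster-context=host-cluster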
Authentication and access control improvements
Role-Based Access Control (RBAC), which makes it possible to define roles for control plane, node, and controller components, is now in the beta phase.  (It also defines default roles for these components.) There are numerous changes from the alpha version (such as a change from using * for all users to using system:authenticated or system:unauthenticated) so make sure to check out the release notes for all the details.
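As a sketch of the beta API (all names here are illustrative), a Role granting read-only access to pods, plus a RoleBinding assigning it to a user:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io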
Attribute-Based Access Control (ABAC) has also been tweaked, with wildcards defaulting to authenticated users. The kube-apiserver and the authentication API have also seen a number of improvements.
Scheduling changes
Now in beta is the ability to have multiple schedulers, with each controlling a different set of pods. You can also set the scheduler you want for a particular pod in the pod spec, rather than as an annotation, as in the alpha version.
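For instance (the scheduler name here is hypothetical), a pod requesting a custom scheduler would include:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  schedulerName: my-custom-scheduler
  containers:
  - name: app
    image: nginx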
Also in beta are node and pod affinity/anti-affinity. This capability enables you to intelligently schedule pods that should, or shouldn't, be on the same piece of hardware.  For example, if you have a web application that talks to a database, you might want them on the same node.  If, on the other hand, you have a pod that needs to be highly available, you might want to spread different instances over different nodes as a safeguard against failure. You can specify the affinity field on the PodSpec.
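As a sketch (the labels are illustrative), a pod that must be co-located on the same node as pods labeled app=database would include:
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: database
        topologyKey: kubernetes.io/hostname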
Kubernetes 1.6 also includes the beta release of taints and tolerations, and some improvements to that functionality from the alpha version.  Taints enable you to dedicate a node to a particular kind of pod, similar to the way in which you might use flavors in OpenStack. Unlike OpenStack, however, you can tell Kubernetes to try to avoid scheduling pods that aren't explicitly allowed (read: tolerated) on that node, but if it has no choice, it can go ahead. This functionality also enables you to specify how long a pod may run on such a node before being "evicted."
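A minimal sketch, using illustrative names (the node and the taint key are hypothetical). First, taint the node:
$ kubectl taint nodes 10.0.10.5 dedicated=frontend:NoSchedule
Then pods that are allowed on that node carry a matching toleration in their spec:
tolerations:
- key: dedicated
  operator: Equal
  value: frontend
  effect: NoSchedule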
And speaking of being evicted, Kubernetes 1.6 now enables you to override the default five-minute period during which a pod remains bound to a node if there are problems, so you can specify that a pod either finds another node more quickly, or is more patient and waits even longer.
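Under the hood this is expressed with taint-based evictions (alpha in 1.6) and the tolerationSeconds field; a sketch, assuming the node.alpha.kubernetes.io/notReady taint key used by this feature:
tolerations:
- key: node.alpha.kubernetes.io/notReady
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 30
Here the pod would be evicted 30 seconds after its node goes not-ready, rather than after the default five minutes.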
The Container Runtime Interface is now the default
While it's natural to assume that containers running on Kubernetes are Docker containers, that's not always true.  Kubernetes also supports rkt containers, and in fact the goal is to enable Kubernetes to orchestrate any container runtime. Up until now, that's been difficult, because the container runtimes were coded into the kubelet component that runs the actual containers.
Now, with Kubernetes 1.6, the beta version of the Docker Container Runtime Interface is enabled by default (you can turn it off with --enable-cri=false), which will make it easier to add new runtimes.  The old non-CRI architecture is deprecated in 1.6 and is scheduled for removal in Kubernetes 1.7.
Storage improvements
Kubernetes 1.6 includes the general availability release of StorageClasses, which enable you to specify a particular type of storage resource for users without exposing them to the details.  (This is also similar to flavors in OpenStack.)
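A minimal sketch of a StorageClass, assuming an AWS environment as the example backend (the class name is arbitrary):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2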
Also now in GA are the ability to populate environment variables from a configmap or a secret, as well as support for writing and running your own dynamic PersistentVolume provisioners.
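For instance, pulling every key of a ConfigMap into a container's environment (the ConfigMap name is illustrative) looks like this in a pod spec:
containers:
- name: app
  image: nginx
  envFrom:
  - configMapRef:
      name: app-config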
Note that StorageClasses will change the behaviors of PersistentVolumeClaim objects on existing clouds, so be sure to read the Release Notes.
Networking improvements
You now have added control over DNS; Kubernetes 1.6 enables you to set stubDomains, which define the nameservers used for specific domains (such as *.mycompany.local), and to specify what upstreamNameservers you want to use, overriding resolv.conf.
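These settings live in the kube-dns ConfigMap; a sketch, with example domain and nameserver addresses:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"mycompany.local": ["10.0.0.10"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]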
Digging deeper, the Container Network Interface (CNI) is now integrated with the Container Runtime Interface (CRI) by default, and the standard bridge plugin has been validated with the combination.
Other changes
Kubernetes 1.6 includes a huge number of changes and improvements, some of which will only be of interest to operators, as opposed to end users, but all of which are important. Some of these changes include:

etcd v3 is now enabled by default, supporting clusters of up to 5,000 nodes
The ability to know via the API whether a Deployment is blocked
Easier logging access
Improvements to the Horizontal Pod Autoscaler
The ability to add third party resources and extension API servers with the edit command
New commands for creating roles, as well as determining whether you can perform an action
New fields added to describe output
Improvements to kubeadm

Definitely take a look at the full release notes to get the details.