Docker Partners with Girl Develop It and Launches Pilot Class

Yesterday marked International Women’s Day, a global day celebrating the social, cultural, economic and political achievements of women. In that spirit, we’re thrilled to announce that we’re partnering with Girl Develop It, a national 501(c)3 nonprofit that provides affordable and judgment-free opportunities for adult women interested in learning web and software development through accessible in-person programs. Through welcoming, low-cost classes, GDI helps women of diverse backgrounds achieve their technology goals and build confidence in their careers and their everyday lives.

Girl Develop It deeply values community and supportive learning for women regardless of race, education level, income and upbringing, and those are values we share. The Docker team is committed to ensuring that we create welcoming spaces for all members of the tech community. To work proactively toward this goal, we have launched several initiatives to strengthen the Docker community and promote diversity in the larger tech community, including our DockerCon Diversity Scholarship Program, which provides mentorship and a financial scholarship to attend DockerCon. P.S. Are you a woman in tech who wants to attend DockerCon in Austin, April 17th-20th? Use code  for 50% off your ticket!


Launching Pilot Class
In collaboration with the GDI curriculum team, we are developing an intro to Docker class that will introduce students to the Docker platform and take them through installing, integrating, and running it in their working environment. The pilot class will take place this spring in San Francisco and Austin.

“The Intro to Docker class is fully aligned with Girl Develop It’s mission to unlock the potential of women returning to the workforce, looking for a career change, or leveling up their skills,” said Executive Director Corinne Warnshuis. “A course on Docker has been requested by students and leaders in the community for some time. We’re thrilled to be working with Docker to provide a valuable introduction to their platform through our affordable, judgment-free, in-person program.”
Want to help Docker with these initiatives?
We’re always happy to connect with others who work towards improving opportunities for women and underrepresented groups throughout the global Docker ecosystem and promote inclusion in the larger tech community.
If you or your organization are interested in getting more involved, please contact us at community@docker.com. Let’s join forces and take our impact to the next level!
 


Docker and Cisco Launch Cisco Validated Designs for Cisco UCS and Flexpod Infrastructures on Docker Enterprise Edition

Last week, Docker and Cisco jointly announced a strategic alliance between our organizations. Based on customer feedback, one of the initial joint initiatives is the validation of Docker Enterprise Edition (which includes Docker Datacenter) against Cisco UCS and Nexus infrastructures. We are excited to announce that Cisco Validated Designs (CVDs) for Cisco UCS and FlexPod on Docker Enterprise Edition (EE) are immediately available.
CVDs represent the gold standard reference architecture methodology for enterprise customers looking to deploy an end-to-end solution. The CVDs follow defined processes and cover not only provisioning and configuration of the solution, but also testing and documenting it against performance, scale, and availability/failure scenarios, something that requires a lab setup with a significant amount of hardware that reflects actual production deployments. This enables our customers to achieve faster, more reliable and predictable implementations.
The two new CVDs published for container management offer enterprises a well-designed, end-to-end lab-tested configuration for Docker EE on Cisco UCS and FlexPod Datacenter. The collaborative engineering effort between Cisco, NetApp and Docker provides enterprises with best-of-breed solutions for Docker Datacenter on Cisco infrastructure and NetApp enterprise storage to run stateless or stateful containers.
The first CVD includes 2 configurations:

A 4-node bare-metal deployment on rack servers, co-locating the Docker UCP controller and DTR on 3 manager nodes in a highly available configuration, plus 1 UCP worker node.

A 10-node bare-metal deployment on blade servers, with 3 nodes for UCP controllers, 3 nodes for DTR, and the remaining 4 nodes as UCP worker nodes.

The second CVD, based on FlexPod Datacenter, was developed in collaboration with NetApp using Cisco UCS blade servers and NetApp FAS and E-Series storage.
These CVDs leverage the native user experience of Docker EE, along with Cisco’s UCS converged infrastructure capabilities, to provide simple management control planes that orchestrate compute, network and storage provisioning so application containers run in a secure and scalable environment. They also use built-in security features of UCS such as I/O isolation through VLANs, secure boot of bare-metal hosts, and physical storage access path isolation through the Cisco VIC’s virtual network interfaces. The combination of UCS and Docker EE’s built-in security features, such as Secrets Management, Docker Content Trust, and Docker Security Scanning, provides a secure end-to-end Container-as-a-Service (CaaS) solution.

Both solutions use Cisco UCS Service Profiles to provision and configure the UCS servers and their I/O properties, automating the complete installation process. Docker commands and Ansible were used for the Docker EE installation. After configuring proper certificates across the DTR and UCP nodes, we were able to push and pull images successfully. To test and validate the configuration, container images such as busybox and nginx, as well as applications such as WordPress and a sample voting application, were pulled from Docker Hub, a central repository for Docker developers to store container images.
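For readers who want to reproduce that push/pull validation, it follows the standard Docker CLI workflow. The sketch below is illustrative only; the registry hostname and repository path are hypothetical and are not taken from the CVDs.
# docker login dtr.example.com
# docker pull nginx:latest
# docker tag nginx:latest dtr.example.com/validation/nginx:latest
# docker push dtr.example.com/validation/nginx:latest
# docker pull dtr.example.com/validation/nginx:latest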
The scaling tests included the deployment of containers and applications. We were able to deploy 700+ containers on a single node and more than 7,000 containers across 10 nodes without performance degradation. The scaling tests also covered dynamically adding and deleting nodes to ensure the cluster remains responsive during such changes. These scaling and resiliency results come from swarm mode, the container orchestration tightly integrated into Docker EE with Docker Datacenter, and from Cisco’s Nexus switches, which provide high-performance, low-latency networking.
The failover tests covered node shutdown and reboot, as well as inducing faults from the Cisco Fabric Interconnects down to the adapters on the Cisco UCS blade servers. When a UCP manager node was shut down or rebooted, we validated that users were still able to access containers through the Docker UCP UI or CLI. The system started up quickly after a reboot, and the UCP cluster and services were restored. Hardware failure resulted in the cluster operating at reduced capacity, but there was no single point of failure.
As part of the FlexPod CVD, NFS was configured on the Docker Trusted Registry (DTR) nodes for shared access. FlexPod is configured with NetApp enterprise-class storage, and the NetApp Docker Volume Plugin (nDVP) provides direct integration with the Docker ecosystem for NetApp’s ONTAP, E-Series and SolidFire storage. FlexPod uses the NetApp ONTAP storage backend for DTR as well as for container storage management, and container volumes can be verified using NetApp OnCommand System Manager.
Please refer to the CVDs for detailed configuration information:

FlexPod Datacenter with Docker Datacenter for Container Management
Cisco UCS Infrastructure with Docker Datacenter for Container Management

 


Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options

As a container management tool, Kubernetes was designed to orchestrate multiple containers and replication, and in fact there are currently several ways to do it. In this article, we’ll look at three options: Replication Controllers, Replica Sets, and Deployments.
What is Kubernetes replication for?
Before we go into how you would do replication, let’s talk about why.  Typically you would want to replicate your containers (and thereby your applications) for several reasons, including:

Reliability: By having multiple versions of an application, you prevent problems if one or more fails.  This is particularly true if the system replaces any containers that fail.
Load balancing: Having multiple versions of a container enables you to easily send traffic to different instances to prevent overloading of a single instance or node. This is something that Kubernetes does out of the box, making it extremely convenient.
Scaling: When load does become too much for the number of existing instances, Kubernetes enables you to easily scale up your application, adding additional instances as needed.

Replication is appropriate for numerous use cases, including:

Microservices-based applications: In these cases, multiple small applications provide very specific functionality.
Cloud native applications: Because cloud-native applications are based on the theory that any component can fail at any time, replication is a perfect environment for implementing them, as multiple instances are baked into the architecture.
Mobile applications: Mobile applications can often be architected so that the mobile client interacts with an isolated version of the server application.

Kubernetes has multiple ways in which you can implement replication.
Types of Kubernetes replication
In this article, we’ll discuss three different forms of replication: the Replication Controller, Replica Sets, and Deployments.
Replication Controller
The Replication Controller is the original form of replication in Kubernetes.  It’s being replaced by Replica Sets, but it’s still in wide use, so it’s worth understanding what it is and how it works.

A Replication Controller is a structure that enables you to easily create multiple pods, then make sure that that number of pods always exists. If a pod does crash, the Replication Controller replaces it.

Replication Controllers also provide other benefits, such as the ability to scale the number of pods, and to update or delete multiple pods with a single command.
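For example, once a Replication Controller exists, scaling it or deleting it is a single command. A minimal sketch (my-rc is a hypothetical name, not one used later in this article):
# kubectl scale rc my-rc --replicas=5
# kubectl delete rc my-rc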

You can create a Replication Controller with an imperative command, or declaratively, from a file.  For example, create a new file called rc.yaml and add the following text:
apiVersion: v1
kind: ReplicationController
metadata:
 name: soaktestrc
spec:
 replicas: 3
 selector:
   app: soaktestrc
 template:
   metadata:
     name: soaktestrc
     labels:
       app: soaktestrc
   spec:
     containers:
      - name: soaktestrc
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Most of this structure should look familiar from our discussion of Deployments; we’ve got the name of the actual Replication Controller (soaktestrc) and we’re designating that we should have 3 replicas, each of which is defined by the template.  The selector defines how we know which pods belong to this Replication Controller.

Now tell Kubernetes to create the Replication Controller based on that file:
# kubectl create -f rc.yaml
replicationcontroller “soaktestrc” created
Let’s take a look at what we have using the describe command:
# kubectl describe rc soaktestrc
Name:           soaktestrc
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktestrc
Labels:         app=soaktestrc
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type   Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                  ——-
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-g5snq
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-cws05
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-ro2bl
As you can see, we’ve got the Replication Controller, and there are 3 replicas of the 3 that we wanted.  All 3 of them are currently running.  You can also see the individual pods listed underneath, along with their names.  If you ask Kubernetes to show you the pods, you can see those same names show up:
# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrc-cws05   1/1       Running   0          3m
soaktestrc-g5snq   1/1       Running   0          3m
soaktestrc-ro2bl   1/1       Running   0          3m
Next we’ll look at Replica Sets, but first let’s clean up:
# kubectl delete rc soaktestrc
replicationcontroller “soaktestrc” deleted

# kubectl get pods
As you can see, when you delete the Replication Controller, you also delete all of the pods that it created.
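If you ever want to remove the controller but keep its pods running, kubectl can turn off cascading deletion. This is only a sketch, and the exact flag behavior depends on your Kubernetes version, so treat it as something to verify against your own cluster:
# kubectl delete rc soaktestrc --cascade=false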
Replica Sets
Replica Sets are a sort of hybrid, in that they are in some ways more powerful than Replication Controllers, and in others they are less powerful.

Replica Sets are declared in essentially the same way as Replication Controllers, except that they have more options for the selector.  For example, we could create a Replica Set like this:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
 name: soaktestrs
spec:
 replicas: 3
 selector:
   matchLabels:
     app: soaktestrs
 template:
   metadata:
      labels:
        app: soaktestrs
        environment: dev
    spec:
      containers:
      - name: soaktestrs
        image: nickchase/soaktest
        ports:
        - containerPort: 80
In this case, it’s more or less the same as when we were creating the Replication Controller, except we’re using matchLabels instead of specifying the labels directly.  But we could just as easily have said:

spec:
 replicas: 3
 selector:
     matchExpressions:
      - {key: app, operator: In, values: [soaktestrs, soaktest]}
      - {key: teir, operator: NotIn, values: [production]}
 template:
   metadata:

In this case, we’re looking at two different conditions:

The app label must be soaktestrs or soaktest
The tier label (if it exists) must not be production

Let’s go ahead and create the Replica Set and get a look at it:
# kubectl create -f replicaset.yaml
replicaset “soaktestrs” created

# kubectl describe rs soaktestrs
Name:           soaktestrs
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app in (soaktest,soaktestrs),teir notin (production)
Labels:         app=soaktestrs
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                   ——-
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-it2hf
 1m            1m              1       {replicaset-controller }                       Normal  SuccessfulCreate Created pod: soaktestrs-kimmm
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-8i4ra

# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrs-8i4ra   1/1       Running   0          1m
soaktestrs-it2hf   1/1       Running   0          1m
soaktestrs-kimmm   1/1       Running   0          1m
As you can see, the output is pretty much the same as for a Replication Controller (except for the selector), and for most intents and purposes, they are similar.  The major difference is that the rolling-update command works with Replication Controllers, but won’t work with a Replica Set.  This is because Replica Sets are meant to be used as the backend for Deployments.
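For reference, a rolling update against a Replication Controller looks roughly like this. This is just a sketch; the :v2 tag is hypothetical and isn’t part of this walkthrough:
# kubectl rolling-update soaktestrc --image=nickchase/soaktest:v2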

Let’s clean up before we move on.
# kubectl delete rs soaktestrs
replicaset “soaktestrs” deleted

# kubectl get pods
Again, the pods that were created are deleted when we delete the Replica Set.
Deployments
Deployments are intended to replace Replication Controllers.  They provide the same replication functions (through Replica Sets) and also the ability to roll out changes and roll them back if necessary.
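Once a Deployment exists (we’ll create one below), rollouts and rollbacks are driven from kubectl. A minimal sketch, using a hypothetical :v2 image tag:
# kubectl set image deployment/soaktest soaktest=nickchase/soaktest:v2
# kubectl rollout status deployment/soaktest
# kubectl rollout undo deployment/soaktest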

Let’s create a simple Deployment using the same image we’ve been using.  First create a new file, deployment.yaml, and add the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 5
 template:
   metadata:
     labels:
       app: soaktest
   spec:
      containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
Now go ahead and create the Deployment:
# kubectl create -f deployment.yaml
deployment “soaktest” created
Now let’s go ahead and describe the Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 16:21:19 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3914185155 (5/5 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                   ——-
 38s           38s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 3
 36s           36s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 5
As you can see, rather than listing the individual pods, Kubernetes shows us the Replica Set.  Notice that the name of the Replica Set is the Deployment name and a hash value.

A complete discussion of updates is out of scope for this article; we’ll cover it in the future. But there are a couple of interesting things to note here:

The StrategyType is RollingUpdate. This value can also be set to Recreate.
By default we have a minReadySeconds value of 0; we can change that value if we want pods to be up and running for a certain amount of time (say, to load resources) before they’re truly considered “ready”.
The RollingUpdateStrategy shows that we have a limit of 1 maxUnavailable, meaning that when we’re updating the Deployment, we can have up to 1 missing pod before it’s replaced, and 1 maxSurge, meaning we can have one extra pod as we scale the new pods back up. These defaults can be overridden in the spec, as shown in the sketch below.
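A minimal sketch of how those fields are set explicitly in the Deployment spec (the values here are illustrative, not the ones used above):
spec:
  replicas: 5
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 2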

As you can see, the Deployment is backed, in this case, by Replica Set soaktest-3914185155. If we go ahead and look at the list of actual pods…
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3914185155-7gyja   1/1       Running   0          2m
soaktest-3914185155-lrm20   1/1       Running   0          2m
soaktest-3914185155-o28px   1/1       Running   0          2m
soaktest-3914185155-ojzn8   1/1       Running   0          2m
soaktest-3914185155-r2pt7   1/1       Running   0          2m
… you can see that their names consist of the Replica Set name and an additional identifier.
Passing environment information: identifying a specific pod
Before we look at the different ways that we can affect replicas, let’s set up our deployment so that we can see which pod we’re actually hitting with a particular request.  To do that, the image we’ve been using runs a PHP script that displays the pod name in its output:
<?php
$limit = $_GET['limit'];
if (!isset($limit)) $limit = 250;
for ($i = 0; $i < $limit; $i++){
    $d = tan(atan(tan(atan(tan(atan(tan(atan(tan(atan(123456789.123456789))))))))));
}
echo "Pod ".$_SERVER['POD_NAME']." has finished!\n";
?>
As you can see, we’re displaying an environment variable, POD_NAME.  Since each container is essentially its own server, this will display the name of the pod when we execute the PHP.

Now we just have to pass that information to the pod.

We do that through the use of the Kubernetes Downward API, which lets us pass environment variables into the containers:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 3
 template:
   metadata:
     labels:
       app: soaktest
   spec:
      containers:
      - name: soaktest
        image: nickchase/soaktest
        ports:
        - containerPort: 80
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
As you can see, we’re passing an environment variable and assigning it a value from the pod’s own metadata.  (You can find more information on metadata here.)
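Once the Deployment has been recreated with this manifest (below), you can spot-check that the variable actually lands inside a running container. A sketch, assuming the image ships a printenv binary and substituting one of your own pod names:
# kubectl exec <pod-name> -- printenv POD_NAME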

So let’s go ahead and clean up the Deployment we created earlier…
# kubectl delete deployment soaktest
deployment “soaktest” deleted

# kubectl get pods
… and recreate it with the new definition:
# kubectl create -f deployment.yaml
deployment “soaktest” created
Next let’s go ahead and expose the pods to outside network requests so we can call the nginx server that is inside the containers:
# kubectl expose deployment soaktest --port=80 --target-port=80 --type=NodePort
service “soaktest” exposed
Now let’s describe the service we just created so we can find out what port the Deployment is listening on:
# kubectl describe services soaktest
Name:                   soaktest
Namespace:              default
Labels:                 app=soaktest
Selector:               app=soaktest
Type:                   NodePort
IP:                     11.1.32.105
Port:                   <unset> 80/TCP
NodePort:               <unset> 30800/TCP
Endpoints:              10.200.18.2:80,10.200.18.3:80,10.200.18.4:80 + 2 more…
Session Affinity:       None
No events.
As you can see, the NodePort is 30800 in this case; in your case it will be different, so make sure to check.  That means that each of the servers involved is listening on port 30800, and requests are being forwarded to port 80 of the containers.  That means we can call the PHP script with:
http://[HOST_NAME OR HOST_IP]:[PROVIDED PORT]
In my case, I’ve mapped the IPs of my Kubernetes hosts to hostnames to make my life easier, and the PHP file is the default for nginx, so I can simply call:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
So as you can see, this time the request was served by pod soaktest-3869910569-xnfme.
Recovering from crashes: Creating a fixed number of replicas
Now that we know everything is running, let’s take a look at some replication use cases.

The first thing we think of when it comes to replication is recovering from crashes. If there are 5 (or 50, or 500) copies of an application running, and one or more crashes, it’s not a catastrophe.  Kubernetes improves the situation further by ensuring that if a pod goes down, it’s replaced.

Let’s see this in action.  Start by refreshing our memory about the pods we’ve got running:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-qqwqc   1/1       Running   0          11m
soaktest-3869910569-qu8k7   1/1       Running   0          11m
soaktest-3869910569-uzjxu   1/1       Running   0          11m
soaktest-3869910569-x6vmp   1/1       Running   0          11m
soaktest-3869910569-xnfme   1/1       Running   0          11m
If we repeatedly call the Deployment, we can see that we get different pods on a random basis:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-qu8k7 has finished!
To simulate a pod crashing, let’s go ahead and delete one:
# kubectl delete pod soaktest-3869910569-x6vmp
pod “soaktest-3869910569-x6vmp” deleted

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-516kx   1/1       Running   0          18s
soaktest-3869910569-qqwqc   1/1       Running   0          27m
soaktest-3869910569-qu8k7   1/1       Running   0          27m
soaktest-3869910569-uzjxu   1/1       Running   0          27m
soaktest-3869910569-xnfme   1/1       Running   0          27m
As you can see, pod *x6vmp is gone, and it’s been replaced by *516kx.  (You can easily find the new pod by looking at the AGE column.)

If we once again call the Deployment, we can (eventually) see the new pod:
# curl http://kube-2:30800
Pod soaktest-3869910569-516kx has finished!
Now let’s look at changing the number of pods.
Scaling up or down: Manually changing the number of replicas
One common task is to scale up a Deployment in response to additional load. Kubernetes has autoscaling, but we’ll talk about that in another article.  For now, let’s look at how to do this task manually.

The most straightforward way is to simply use the scale command:
# kubectl scale --replicas=7 deployment/soaktest
deployment “soaktest” scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-2w8i6   1/1       Running   0          6s
soaktest-3869910569-516kx   1/1       Running   0          11m
soaktest-3869910569-qqwqc   1/1       Running   0          39m
soaktest-3869910569-qu8k7   1/1       Running   0          39m
soaktest-3869910569-uzjxu   1/1       Running   0          39m
soaktest-3869910569-xnfme   1/1       Running   0          39m
soaktest-3869910569-z4rx9   1/1       Running   0          6s
In this case, we specify a new number of replicas, and Kubernetes adds enough to bring it to the desired level, as you can see.

One thing to keep in mind is that Kubernetes isn’t going to scale the Deployment down to be below the level at which you first started it up.  For example, if we try to scale back down to 4…
# kubectl scale --replicas=4 -f deployment.yaml
deployment “soaktest” scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-l5wx8   1/1       Running   0          11s
soaktest-3869910569-qqwqc   1/1       Running   0          40m
soaktest-3869910569-qu8k7   1/1       Running   0          40m
soaktest-3869910569-uzjxu   1/1       Running   0          40m
soaktest-3869910569-xnfme   1/1       Running   0          40m
… Kubernetes only brings us back down to 5, because that’s what was specified by the original Deployment.
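If you would rather manage the replica count declaratively, another option (a sketch, not something we ran above) is to change the replicas: value in deployment.yaml and re-apply it, or to edit the live object directly:
# kubectl apply -f deployment.yaml
# kubectl edit deployment soaktest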
Deploying a new version: Replacing replicas by changing their label
Another way you can use Deployments is to make use of the selector.  In other words, if a Deployment controls all the pods with a tier label of dev, changing a pod’s tier label to prod will remove it from the Deployment’s sphere of influence.

This mechanism enables you to selectively replace individual pods. For example, you might move pods from a dev environment to a production environment, or you might do a manual rolling update, updating the image and then removing some fraction of pods from the Deployment; when they’re replaced, it will be with the new image. If you’re happy with the changes, you can then replace the rest of the pods.

Let’s see this in action.  As you recall, this is our Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 19:31:04 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3869910569 (3/3 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ———     ——–        —–   —-                            ————-   ——–  ——                  ——-
 50s           50s             1       {deployment-controller }                        Normal            ScalingReplicaSet       Scaled up replica set soaktest-3869910569 to 3
And these are our pods:
# kubectl describe replicaset soaktest-3869910569
Name:           soaktest-3869910569
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktest,pod-template-hash=3869910569
Labels:         app=soaktest
               pod-template-hash=3869910569
Replicas:       5 current / 5 desired
Pods Status:    5 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ———     ——–        —–   —-                            ————-   ——–  ——                  ——-
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-0577c
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-wje85
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-xuhwl
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-8cbo2
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-pwlm4
We can also get a list of pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          7m
soaktest-3869910569-8cbo2   1/1       Running   0          6m
soaktest-3869910569-pwlm4   1/1       Running   0          6m
soaktest-3869910569-wje85   1/1       Running   0          7m
soaktest-3869910569-xuhwl   1/1       Running   0          7m
So those are our original soaktest pods; what if we wanted to add a new label?  We can do that on the command line:
# kubectl label pods soaktest-3869910569-xuhwl experimental=true
pod “soaktest-3869910569-xuhwl” labeled

# kubectl get pods -l experimental=true
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-xuhwl   1/1       Running   0          14m
So now we have one experimental pod.  But since the experimental label has nothing to do with the selector for the Deployment, it doesn’t affect anything.

So what if we change the value of the app label, which the Deployment is looking at?
# kubectl label pods soaktest-3869910569-wje85 app=notsoaktest --overwrite
pod “soaktest-3869910569-wje85” labeled
In this case, we need to use the --overwrite flag because the app label already exists. Now let’s look at the existing pods.
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          4s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-wje85   1/1       Running   0          17m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
As you can see, we now have six pods instead of five, with a new pod having been created to replace *wje85, which was removed from the deployment. We can see the changes by requesting pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          20s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
Now, there is one wrinkle that you have to take into account: because we’ve removed this pod from the Deployment, the Deployment no longer manages it.  So if we were to delete the Deployment…
# kubectl delete deployment soaktest
deployment “soaktest” deleted
The pod remains:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-wje85   1/1       Running   0          19m
You can also easily replace all of the pods in a Deployment using the --all flag, as in:
# kubectl label pods --all app=notsoaktesteither --overwrite
But remember that you’ll have to delete them all manually!
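A sketch of that manual cleanup, deleting every pod that now carries the replacement label:
# kubectl delete pods -l app=notsoaktesteither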
Conclusion
Replication is a large part of Kubernetes’ purpose in life, so it’s no surprise that we’ve just scratched the surface of what it can do, and how to use it. It is useful for reliability purposes, for scalability, and even as a basis for your architecture.

What do you anticipate using replication for, and what would you like to know more about? Let us know in the comments!
Quelle: Mirantis

Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options

The post Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options appeared first on Mirantis | Pure Play Open Cloud.
As a container management tool, Kubernetes was designed to orchestrate multiple containers and replication, and in fact there are currently several ways to do it. In this article, we&;ll look at three options: Replication Controllers, Replica Sets, and Deployments.
What is Kubernetes replication for?
Before we go into how you would do replication, let&8217;s talk about why.  Typically you would want to replicate your containers (and thereby your applications) for several reasons, including:

Reliability: By having multiple versions of an application, you prevent problems if one or more fails.  This is particularly true if the system replaces any containers that fail.
Load balancing: Having multiple versions of a container enables you to easily send traffic to different instances to prevent overloading of a single instance or node. This is something that Kubernetes does out of the box, making it extremely convenient.
Scaling: When load does become too much for the number of existing instances, Kubernetes enables you to easily scale up your application, adding additional instances as needed.

Replication is appropriate for numerous use cases, including:

Microservices-based applications: In these cases, multiple small applications provide very specific functionality.
Cloud native applications: Because cloud-native applications are based on the theory that any component can fail at any time, replication is a perfect environment for implementing them, as multiple instances are baked into the architecture.
Mobile applications: Mobile applications can often be architected so that the mobile client interacts with an isolated version of the server application.

Kubernetes has multiple ways in which you can implement replication.
Types of Kubernetes replication
In this article, we&8217;ll discuss three different forms of replication: the Replication Controller, Replica Sets, and Deployments.
Replication Controller
The Replication Controller is the original form of replication in Kubernetes.  It&8217;s being replaced by Replica Sets, but it&8217;s still in wide use, so it&8217;s worth understanding what it is and how it works.

A Replication Controller is a structure that enables you to easily create multiple pods, then make sure that that number of pods always exists. If a pod does crash, the Replication Controller replaces it.

Replication Controllers also provide other benefits, such as the ability to scale the number of pods, and to update or delete multiple pods with a single command.

You can create a Replication Controller with an imperative command, or declaratively, from a file.  For example, create a new file called rc.yaml and add the following text:
apiVersion: v1
kind: ReplicationController
metadata:
 name: soaktestrc
spec:
 replicas: 3
 selector:
   app: soaktestrc
 template:
   metadata:
     name: soaktestrc
     labels:
       app: soaktestrc
   spec:
     containers:
     – name: soaktestrc
       image: nickchase/soaktest
       ports:
       – containerPort: 80
Most of this structure should look familiar from our discussion of Deployments; we&8217;ve got the name of the actual Replication Controller (soaktestrc) and we&8217;re designating that we should have 3 replicas, each of which are defined by the template.  The selector defines how we know which pods belong to this Replication Controller.

Now tell Kubernetes to create the Replication Controller based on that file:
# kubectl create -f rc.yaml
replicationcontroller “soaktestrc” created
Let&8217;s take a look at what we have using the describe command:
# kubectl describe rc soaktestrc
Name:           soaktestrc
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktestrc
Labels:         app=soaktestrc
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type   Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                  ——-
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-g5snq
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-cws05
 1m            1m              1       {replication-controller }                       Normal SuccessfulCreate Created pod: soaktestrc-ro2bl
As you can see, we&8217;ve got the Replication Controller, and there are 3 replicas, of the 3 that we wanted.  All 3 of them are currently running.  You can also see the individual pods listed underneath, along with their names.  If you ask Kubernetes to show you the pods, you can see those same names show up:
# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrc-cws05   1/1       Running   0          3m
soaktestrc-g5snq   1/1       Running   0          3m
soaktestrc-ro2bl   1/1       Running   0          3m
Next we&8217;ll look at Replica Sets, but first let&8217;s clean up:
# kubectl delete rc soaktestrc
replicationcontroller “soaktestrc” deleted

# kubectl get pods
As you can see, when you delete the Replication Controller, you also delete all of the pods that it created.
Replica Sets
Replica Sets are a sort of hybrid, in that they are in some ways more powerful than Replication Controllers, and in others they are less powerful.

Replica Sets are declared in essentially the same way as Replication Controllers, except that they have more options for the selector.  For example, we could create a Replica Set like this:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
 name: soaktestrs
spec:
 replicas: 3
 selector:
   matchLabels:
     app: soaktestrs
 template:
   metadata:
     labels:
       app: soaktestrs
  environment: dev
   spec:
     containers:
     – name: soaktestrs
       image: nickchase/soaktest
       ports:
       – containerPort: 80
In this case, it&8217;s more or less the same as when we were creating the Replication Controller, except we&8217;re using matchLabels instead of label.  But we could just as easily have said:

spec:
 replicas: 3
 selector:
    matchExpressions:
     – {key: app, operator: In, values: [soaktestrs, soaktestrs, soaktest]}
     – {key: teir, operator: NotIn, values: [production]}
 template:
   metadata:

In this case, we&8217;re looking at two different conditions:

The app label must be soaktestrc, soaktestrs, or soaktest
The tier label (if it exists) must not be production

Let&8217;s go ahead and create the Replica Set and get a look at it:
# kubectl create -f replicaset.yaml
replicaset “soaktestrs” created

# kubectl describe rs soaktestrs
Name:           soaktestrs
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app in (soaktest,soaktestrs),teir notin (production)
Labels:         app=soaktestrs
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                   ——-
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-it2hf
 1m            1m              1       {replicaset-controller }                       Normal  SuccessfulCreate Created pod: soaktestrs-kimmm
 1m            1m              1       {replicaset-controller }                        Normal  SuccessfulCreate Created pod: soaktestrs-8i4ra

# kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
soaktestrs-8i4ra   1/1       Running   0          1m
soaktestrs-it2hf   1/1       Running   0          1m
soaktestrs-kimmm   1/1       Running   0          1m
As you can see, the output is pretty much the same as for a Replication Controller (except for the selector), and for most intents and purposes, they are similar.  The major difference is that the rolling-update command works with Replication Controllers, but won&8217;t work with a Replica Set.  This is because Replica Sets are meant to be used as the backend for Deployments.

Let&8217;s clean up before we move on.
# kubectl delete rs soaktestrs
replicaset “soaktestrs” deleted

# kubectl get pods
Again, the pods that were created are deleted when we delete the Replica Set.
Deployments
Deployments are intended to replace Replication Controllers.  They provide the same replication functions (through Replica Sets) and also the ability to rollout changes and roll them back if necessary.

Let&8217;s create a simple Deployment using the same image we&8217;ve been using.  First create a new file, deployment.yaml, and add the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 5
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
     – name: soaktest
       image: nickchase/soaktest
       ports:
       – containerPort: 80
Now go ahead and create the Deployment:
# kubectl create -f deployment.yaml
deployment “soaktest” created
Now let&8217;s go ahead and describe the Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 16:21:19 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3914185155 (5/5 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type    Reason                   Message
 ———     ——–        —–   —-                            ————-   ————–                   ——-
 38s           38s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 3
 36s           36s             1       {deployment-controller }                        Normal  ScalingReplicaSet        Scaled up replica set soaktest-3914185155 to 5
As you can see, rather than listing the individual pods, Kubernetes shows us the Replica Set.  Notice that the name of the Replica Set is the Deployment name and a hash value.

A complete discussion of updates is out of scope for this article &; we&8217;ll cover it in the future &8212; but couple of interesting things here:

The StrategyType is RollingUpdate. This value can also be set to Recreate.
By default we have a minReadySeconds value of 0; we can change that value if we want pods to be up and running for a certain amount of time &8212; say, to load resources &8212; before they&8217;re truly considered &;ready&;.
The RollingUpdateStrategy shows that we have a limit of 1 maxUnavailable &8212; meaning that when we&8217;re updating the Deployment, we can have up to 1 missing pod before it&8217;s replaced, and 1 maxSurge, meaning we can have one extra pod as we scale the new pods back up.

As you can see, the Deployment is backed, in this case, by Replica Set soaktest-3914185155. If we go ahead and look at the list of actual pods&;
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3914185155-7gyja   1/1       Running   0          2m
soaktest-3914185155-lrm20   1/1       Running   0          2m
soaktest-3914185155-o28px   1/1       Running   0          2m
soaktest-3914185155-ojzn8   1/1       Running   0          2m
soaktest-3914185155-r2pt7   1/1       Running   0          2m
&8230; you can see that their names consist of the Replica Set name and an additional identifier.
Passing environment information: identifying a specific pod
Before we look at the different ways that we can affect replicas, let&8217;s set up our deployment so that we can see what pod we&8217;re actually hitting with a particular request.  To do that, the image we&8217;ve been using displays the pod name when it outputs:
<?php
$limit = $_GET[‘limit’];
if (!isset($limit)) $limit = 250;
for ($i; $i < $limit; $i++){
    $d = tan(atan(tan(atan(tan(atan(tan(atan(tan(atan(123456789.123456789))))))))));
}
echo “Pod “.$_SERVER[‘POD_NAME’].” has finished!n”;
?>
As you can see, we&8217;re displaying an environment variable, POD_NAME.  Since each container is essentially it&8217;s own server, this will display the name of the pod when we execute the PHP.

Now we just have to pass that information to the pod.

We do that through the use of the Kubernetes Downward API, which lets us pass environment variables into the containers:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: soaktest
spec:
 replicas: 3
 template:
   metadata:
     labels:
       app: soaktest
   spec:
     containers:
     – name: soaktest
       image: nickchase/soaktest
       ports:
       – containerPort: 80
       env:
       – name: POD_NAME
         valueFrom:
           fieldRef:
             fieldPath: metadata.name
As you can see, we&8217;re passing an environment variable and assigning it a value from the Deployment&8217;s metadata.  (You can find more information on metadata here.)

So let&8217;s go ahead and clean up the Deployment we created earlier&8230;
# kubectl delete deployment soaktest
deployment “soaktest” deleted

# kubectl get pods
&8230; and recreate it with the new definition:
# kubectl create -f deployment.yaml
deployment “soaktest” created
Next let&8217;s go ahead and expose the pods to outside network requests so we can call the nginx server that is inside the containers:
# kubectl expose deployment soaktest –port=80 –target-port=80 –type=NodePort
service “soaktest” exposed
Now let&8217;s describe the services we just created so we can find out what port the Deployment is listening on:
# kubectl describe services soaktest
Name:                   soaktest
Namespace:              default
Labels:                 app=soaktest
Selector:               app=soaktest
Type:                   NodePort
IP:                     11.1.32.105
Port:                   <unset> 80/TCP
NodePort:               <unset> 30800/TCP
Endpoints:              10.200.18.2:80,10.200.18.3:80,10.200.18.4:80 + 2 more…
Session Affinity:       None
No events.
As you can see, the NodePort is 30800 in this case; in your case it will be different, so make sure to check.  That means that each of the servers involved is listening on port 30800, and requests are being forwarded to port 80 of the containers.  That means we can call the PHP script with:
http://[HOST_NAME OR HOST_IP]:[PROVIDED PORT]
In my case, I&8217;ve set the IP for my Kubernetes hosts to hostnames to make my life easier, and the PHP file is the default for nginx, so I can simply call:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
So as you can see, this time the request was served by pod soaktest-3869910569-xnfme.
Recovering from crashes: Creating a fixed number of replicas
Now that we know everything is running, let&8217;s take a look at some replication use cases.

The first thing we think of when it comes to replication is recovering from crashes. If there are 5 (or 50, or 500) copies of an application running, and one or more crashes, it&8217;s not a catastrophe.  Kubernetes improves the situation further by ensuring that if a pod goes down, it&8217;s replaced.

Let&8217;s see this in action.  Start by refreshing our memory about the pods we&8217;ve got running:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-qqwqc   1/1       Running   0          11m
soaktest-3869910569-qu8k7   1/1       Running   0          11m
soaktest-3869910569-uzjxu   1/1       Running   0          11m
soaktest-3869910569-x6vmp   1/1       Running   0          11m
soaktest-3869910569-xnfme   1/1       Running   0          11m
If we repeatedly call the Deployment, we can see that we get different pods on a random basis:
# curl http://kube-2:30800
Pod soaktest-3869910569-xnfme has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-x6vmp has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-uzjxu has finished!
# curl http://kube-2:30800
Pod soaktest-3869910569-qu8k7 has finished!
To simulate a pod crashing, let&8217;s go ahead and delete one:
# kubectl delete pod soaktest-3869910569-x6vmp
pod “soaktest-3869910569-x6vmp” deleted

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-516kx   1/1       Running   0          18s
soaktest-3869910569-qqwqc   1/1       Running   0          27m
soaktest-3869910569-qu8k7   1/1       Running   0          27m
soaktest-3869910569-uzjxu   1/1       Running   0          27m
soaktest-3869910569-xnfme   1/1       Running   0          27m
As you can see, pod *x6vmp is gone, and it&8217;s been replaced by *516kx.  (You can easily find the new pod by looking at the AGE column.)

If we once again call the Deployment, we can (eventually) see the new pod:
# curl http://kube-2:30800
Pod soaktest-3869910569-516kx has finished!
Now let&8217;s look at changing the number of pods.
Scaling up or down: Manually changing the number of replicas
One common task is to scale up a Deployment in response to additional load. Kubernetes has autoscaling, but we&8217;ll talk about that in another article.  For now, let&8217;s look at how to do this task manually.

The most straightforward way is to simply use the scale command:
# kubectl scale –replicas=7 deployment/soaktest
deployment “soaktest” scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-2w8i6   1/1       Running   0          6s
soaktest-3869910569-516kx   1/1       Running   0          11m
soaktest-3869910569-qqwqc   1/1       Running   0          39m
soaktest-3869910569-qu8k7   1/1       Running   0          39m
soaktest-3869910569-uzjxu   1/1       Running   0          39m
soaktest-3869910569-xnfme   1/1       Running   0          39m
soaktest-3869910569-z4rx9   1/1       Running   0          6s
In this case, we specify a new number of replicas, and Kubernetes adds enough to bring it to the desired level, as you can see.

One thing to keep in mind is that Kubernetes isn&8217;t going to scale the Deployment down to be below the level at which you first started it up.  For example, if we try to scale back down to 4&8230;
# kubectl scale –replicas=4 -f deployment.yaml
deployment “soaktest” scaled

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-l5wx8   1/1       Running   0          11s
soaktest-3869910569-qqwqc   1/1       Running   0          40m
soaktest-3869910569-qu8k7   1/1       Running   0          40m
soaktest-3869910569-uzjxu   1/1       Running   0          40m
soaktest-3869910569-xnfme   1/1       Running   0          40m
&8230; Kubernetes only brings us back down to 5, because that&8217;s what was specified by the original deployment.
Deploying a new version: Replacing replicas by changing their label
Another way you can use deployments is to make use of the selector.  In other words, if a Deployment controls all the pods with a tier value of dev, changing a pod&8217;s teir label to prod will remove it from the Deployment&8217;s sphere of influence.

This mechanism enables you to selectively replace individual pods. For example, you might move pods from a dev environment to a production environment, or you might do a manual rolling update, updating the image, then removing some fraction of pods from the Deployment; when they&8217;re replaced, it will be with the new image. If you&8217;re happy with the changes, you can then replace the rest of the pods.

Let&8217;s see this in action.  As you recall, this is our Deployment:
# kubectl describe deployment soaktest
Name:                   soaktest
Namespace:              default
CreationTimestamp:      Sun, 05 Mar 2017 19:31:04 +0000
Labels:                 app=soaktest
Selector:               app=soaktest
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          soaktest-3869910569 (3/3 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ———     ——–        —–   —-                            ————-   ——–  ——                  ——-
 50s           50s             1       {deployment-controller }                        Normal            ScalingReplicaSet       Scaled up replica set soaktest-3869910569 to 3
And these are our pods:
# kubectl describe replicaset soaktest-3869910569
Name:           soaktest-3869910569
Namespace:      default
Image(s):       nickchase/soaktest
Selector:       app=soaktest,pod-template-hash=3869910569
Labels:         app=soaktest
               pod-template-hash=3869910569
Replicas:       5 current / 5 desired
Pods Status:    5 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type              Reason                  Message
 ———     ——–        —–   —-                            ————-   ——–  ——                  ——-
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-0577c
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-wje85
 2m            2m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-xuhwl
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-8cbo2
 1m            1m              1       {replicaset-controller }                        Normal            SuccessfulCreate        Created pod: soaktest-3869910569-pwlm4
We can also get a list of pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          7m
soaktest-3869910569-8cbo2   1/1       Running   0          6m
soaktest-3869910569-pwlm4   1/1       Running   0          6m
soaktest-3869910569-wje85   1/1       Running   0          7m
soaktest-3869910569-xuhwl   1/1       Running   0          7m
So those are our original soaktest pods; what if we wanted to add a new label?  We can do that on the command line:
# kubectl label pods soaktest-3869910569-xuhwl experimental=true
pod “soaktest-3869910569-xuhwl” labeled

# kubectl get pods -l experimental=true
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-xuhwl   1/1       Running   0          14m
So now we have one experimental pod.  But since the experimental label has nothing to do with the selector for the Deployment, it doesn't affect anything.

So what if we change the value of the app label, which the Deployment is looking at?
# kubectl label pods soaktest-3869910569-wje85 app=notsoaktest --overwrite
pod “soaktest-3869910569-wje85” labeled
In this case, we need to use the --overwrite flag because the app label already exists. Now let's look at the existing pods.
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          4s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-wje85   1/1       Running   0          17m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
As you can see, we now have six pods instead of five, with a new pod having been created to replace wje85, which was removed from the deployment. We can see the changes by requesting pods by label:
# kubectl get pods -l app=soaktest
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-0577c   1/1       Running   0          17m
soaktest-3869910569-4cedq   1/1       Running   0          20s
soaktest-3869910569-8cbo2   1/1       Running   0          16m
soaktest-3869910569-pwlm4   1/1       Running   0          16m
soaktest-3869910569-xuhwl   1/1       Running   0          17m
Now, there is one wrinkle that you have to take into account; because we've removed this pod from the Deployment, the Deployment no longer manages it.  So if we were to delete the Deployment…
# kubectl delete deployment soaktest
deployment “soaktest” deleted
The pod remains:
# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
soaktest-3869910569-wje85   1/1       Running   0          19m
You can also easily replace all of the pods in a Deployment using the --all flag, as in:
# kubectl label pods --all app=notsoaktesteither --overwrite
But remember that you'll have to delete them all manually!
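If you do go this route, you can at least clean up the now-orphaned pods in one shot by selecting them by their new label. (A minimal sketch, assuming the label value used above.)
# kubectl delete pods -l app=notsoaktesteither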
Conclusion
Replication is a large part of Kubernetes' purpose in life, so it's no surprise that we've just scratched the surface of what it can do, and how to use it. It is useful for reliability purposes, for scalability, and even as a basis for your architecture.

What do you anticipate using replication for, and what would you like to know more about? Let us know in the comments!
The post Kubernetes Replication Controller, Replica Set and Deployments: Understanding replication options appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

People Are Pissed That Snapchat's Marie Curie Filter Adds A Full Face Of Makeup

"Didn't realize eye makeup and false lashes were essential to being a baller female physicist."

But some users have pointed out an odd detail in the Marie Curie filter: a smokey eye, false eyelashes, and complexion-smoothing makeup.

Twitter: @Selfies_AndCats

For anyone who isn’t familiar with Marie Curie, she was a Polish-born French physicist and chemist who won the Nobel Prize twice for research on radioactivity.

Curie won the 1903 Nobel Prize in Physics and the 1911 Nobel Prize in Chemistry. She ultimately died due to radiation exposure from her research, and many regard her as a hero in the field of science.

Given her work, many felt the makeup was unnecessary.

Twitter: @MarrowNator


Quelle: BuzzFeed

Announcing the DockerCon speakers and sessions

Today we're excited to announce the launch of the DockerCon 2017 agenda. With 100+ DockerCon speakers, 60+ breakout sessions, 11 workshops, and hands-on labs, we're confident that you'll find the right content for your role (Developer, IT Ops, Enterprise) or your level of Docker expertise (Beginner, Intermediate, Advanced).

View the announced schedule and speakers lineup  

Announced sessions include:
Use Case

0 to 60 with Docker in 5 Months: How a Traditional Fortune 40 Company Turns on a Dime by Tim Tyler (MetLife)
Activision's Skypilot: Delivering Amazing Game Experiences through Containerized Pipelines by Tom Shaw (Activision)
Cool Genes: The Search for a Cure Using Genomics, Big Data, and Docker by James Lowey (TGEN)
The Tale of Two Deployments: Greenfield and Monolith Docker at Cornell by Shawn Bower and Brett Haranin (Cornell University)
Taking Docker From Local to Production at Intuit by JanJaap Lahpor (Intuit)

Using Docker

Docker for Devs by John Zaccone (Ippon Technologies)
Docker for Ops by Scott Coulton (Puppet)
Docker for Java Developers by Arun Gupta (Couchbase) and Fabiane Nardon (TailTarget)
Docker for .NET Developers by Michele Bustamante (Solliance)
Creating Effective Images by Abby Fuller (AWS)
Troubleshooting Tips from a Docker Support Engineer by Jeff Anderson (Docker)
Journey to Docker Production: Evolving Your Infrastructure and Processes by Bret Fisher (Independent DevOps Consultant)
Escape From Your VMs with Image2Docker by Elton Stoneman (Docker) and Jeff Nickoloff (All in Geek Consulting)

Docker Deep Dive - Presented by Docker Engineering

What's New in Docker by Victor Vieux
Under the Hood with Docker Swarm Mode by Drew Erny and Nishant Totla
Modern Storage Platform for Container Environments by Julien Quintard
Secure Substrate: Least Privilege Container Deployment by Diogo Monica and Riyaz Faizullabhoy
Docker Networking: From Application-Plane to Data-Plane by Madhu Venugopal
Plug-ins: Building, Shipping, Storing, and Running by Anusha Ragunathan and Nandhini Santhanam
Container Publishing through Docker Store by Chinmayee Nirmal and Alfred Landrum
Automation and Collaboration Across Multiple Swarms Using Docker Cloud by Fernando Mayo and Marcus Martins
Making Docker Datacenter (DDC) Work for You by Vivek Saraswat

Black Belt

Monitoring, the Prometheus Way by Julius Volz (Prometheus)
Everything You Thought You Already Knew About Orchestration by Laura Frank (Codeship)
Cilium - Network and Application Security with BPF and XDP (Noiro Networks)
What Have Namespaces Done For You Lately? By Liz Rice (Microscaling Systems)
Securing the Software Supply Chain with TUF and Docker by Justin Cappos (NYU)
Container Performance Analysis by Brendan Gregg (Netflix)
Securing Containers, One Patch at a Time by Michael Crosby (Docker)

Workshops - Presented by Docker Engineering and Docker Captains

Docker Security
Hands-on Docker for Raspberry Pi
Modernizing Monolithic ASP.NET Applications with Docker
Introduction to Enterprise Docker Operations
Docker Store for Publishers

Convince your manager
Do you really want to go to DockerCon, but are having a hard time convincing your manager to send you? Have you already explained that sessions, training and hands-on exercises are definitely worth the financial investment and time away from your desk?
Well, fear not! We’ve put together a few more resources and reasons to help convince your manager that DockerCon 2017 on April 17-20, is an invaluable experience you need to attend.

The post Announcing the DockerCon speakers and sessions appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

The first and final words on OpenStack availability zones

Availability zones are one of the most frequently misunderstood and misused constructs in OpenStack. Each cloud operator has a different idea about what they are and how to use them. What's more, each OpenStack service implements availability zones differently, if it even implements them at all.
Often, there isn’t even agreement over the basic meaning or purpose of availability zones.
On the one hand, you have the literal interpretation of the words “availability zone”, which would lead us to think of some logical subdivision of resources into failure domains, allowing cloud applications to intelligently deploy in ways to maximize their availability. (We’ll be running with this definition for the purposes of this article.)
On the other hand, the different ways that projects implement availability zones lend themselves to certain ways of using the feature as a result. In other words, because this feature has been implemented in a flexible manner that does not tie us down to one specific concept of an availability zone, there's a lot of confusion over how to use them.
In this article, we'll look at the traditional definition of availability zones, insights into and best practices for planning and using them, and even a little bit about non-traditional uses. Finally, we hope to address the question: Are availability zones right for you?
OpenStack availability zone Implementations
One of the things that complicates use of availability zones is that each OpenStack project implements them in their own way (if at all). If you do plan to use availability zones, you should evaluate whether the OpenStack projects you're going to use support them, and how that affects your design and deployment of those services.
For the purposes of this article, we will look at three core services with respect to availability zones: Nova, Cinder, and Neutron. We won't go into the steps to set up availability zones, but instead we'll focus on a few of the key decision points, limitations, and trouble areas with them.
Nova availability zones
Since host aggregates were first introduced in OpenStack Grizzly, I have seen a lot of confusion about availability zones in Nova. Nova tied their availability zone implementation to host aggregates, and because the latter is a feature unique to the Nova project, its implementation of availability zones is also unique.
I have had many people tell me they use availability zones in Nova, convinced they are not using host aggregates. Well, I have news for these people: all* availability zones in Nova are host aggregates (though not all host aggregates are availability zones):
* Exceptions being the default_availability_zone that compute nodes are placed into when not in another user-defined availability zone, and the internal_service_availability_zone where other nova services live
Some of this confusion may come from the nova CLI. People may do a quick search online, see they can create an availability zone with one command, and may not realize that they’re actually creating a host aggregate. Ex:
$ nova aggregate-create <aggregate name> <AZ name>
$ nova aggregate-create HA1 AZ1
+—-+———+——————-+——-+————————+
| Id | Name    | Availability Zone | Hosts | Metadata               |
+—-+———+——————-+——-+————————+
| 4  |   HA1   | AZ1               |       | ‘availability_zone=AZ1’|
+—-+———+——————-+——-+————————+
I have seen people get confused with the second argument (the AZ name). This is just a shortcut for setting the availability_zone metadata for a new host aggregate you want to create.
This command is equivalent to creating a host aggregate, and then setting the metadata:
$ nova aggregate-create HA1
+—-+———+——————-+——-+———-+
| Id | Name    | Availability Zone | Hosts | Metadata |
+—-+———+——————-+——-+———-+
| 7  |   HA1   | –                 |       |          |
+—-+———+——————-+——-+———-+
$ nova aggregate-set-metadata HA1 availability_zone=AZ1
Metadata has been successfully updated for aggregate 7.
+—-+———+——————-+——-+————————+
| Id | Name    | Availability Zone | Hosts | Metadata               |
+—-+———+——————-+——-+————————+
| 7  |   HA1   | AZ1               |       | ‘availability_zone=AZ1’|
+—-+———+——————-+——-+————————+
Doing it this way, it’s more apparent that the workflow is the same as any other host aggregate, the only difference is the “magic” metadata key availability_zone which we set to AZ1 (notice we also see AZ1 show up under the Availability Zone column). And now when we add compute nodes to this aggregate, they will be automatically transferred out of the default_availability_zone and into the one we have defined. For example:
Before:
$ nova availability-zone-list
| nova              | available                              |
| |- node-27        |                                        |
| | |- nova-compute | enabled :-) 2016-11-06T05:13:48.000000 |
+——————-+—————————————-+
After:
$ nova availability-zone-list
| AZ1               | available                              |
| |- node-27        |                                        |
| | |- nova-compute | enabled :-) 2016-11-06T05:13:48.000000 |
+——————-+—————————————-+
Note that there is one behavior that sets the availability zone host aggregates apart from others. Namely, nova does not allow you to assign the same compute host to multiple aggregates with conflicting availability zone assignments. For example, we can first add a compute node to the previously created host aggregate with availability zone AZ1:
$ nova aggregate-add-host HA1 node-27
Host node-27 has been successfully added for aggregate 7
+—-+——+——————-+———-+————————+
| Id | Name | Availability Zone | Hosts    | Metadata               |
+—-+——+——————-+———-+————————+
| 7  | HA1  | AZ1               | ‘node-27’| ‘availability_zone=AZ1’|
+—-+——+——————-+———-+————————+
Next, we create a new host aggregate for availability zone AZ2:
$ nova aggregate-create HA2

+—-+———+——————-+——-+———-+
| Id | Name    | Availability Zone | Hosts | Metadata |
+—-+———+——————-+——-+———-+
| 13 |   HA2   | –                 |       |          |
+—-+———+——————-+——-+———-+

$ nova aggregate-set-metadata HA2 availability_zone=AZ2
Metadata has been successfully updated for aggregate 13.
+—-+———+——————-+——-+————————+
| Id | Name    | Availability Zone | Hosts | Metadata               |
+—-+———+——————-+——-+————————+
| 13 |   HA2   | AZ2               |       | ‘availability_zone=AZ2’|
+—-+———+——————-+——-+————————+
Now if we try to add the original compute node to this aggregate, we get an error because this aggregate has a conflicting availability zone:
$ nova aggregate-add-host HA2 node-27
ERROR (Conflict): Cannot add host node-27 in aggregate 13: host exists (HTTP 409)
+—-+——+——————-+———-+————————+
| Id | Name | Availability Zone | Hosts    | Metadata               |
+—-+——+——————-+———-+————————+
| 13 | HA2  | AZ2               |          | ‘availability_zone=AZ2’|
+—-+——+——————-+———-+————————+
(Incidentally, it is possible to have multiple host aggregates with the same availability_zone metadata, and add the same compute host to both. However, there are few, if any, good reasons for doing this.)
In contrast, Nova allows you to assign this compute node to another host aggregate with other metadata fields, as long as the availability_zone doesn't conflict:
You can see this if you first create a third host aggregate:
$ nova aggregate-create HA3
+—-+———+——————-+——-+———-+
| Id | Name    | Availability Zone | Hosts | Metadata |
+—-+———+——————-+——-+———-+
| 16 |   HA3   | –                 |       |          |
+—-+———+——————-+——-+———-+
Next, tag  the host aggregate for some purpose not related to availability zones (for example, an aggregate to track compute nodes with SSDs):
$ nova aggregate-set-metadata HA3 ssd=True
Metadata has been successfully updated for aggregate 16.
+—-+———+——————-+——-+———–+
| Id | Name    | Availability Zone | Hosts |  Metadata |
+—-+———+——————-+——-+———–+
| 16 |   HA3   | –                 |       | ‘ssd=True’|
+—-+———+——————-+——-+———–+
Adding original node to another aggregate without conflicting availability zone metadata works:
$ nova aggregate-add-host HA3 node-27
Host node-27 has been successfully added for aggregate 16
+—-+——-+——————-+———–+————+
| Id | Name  | Availability Zone | Hosts     |  Metadata  |
+—-+——-+——————-+———–+————+
| 16 | HA3   | –                 | ‘node-27′ | ‘ssd=True’ |
+—-+——-+——————-+———–+————+
(Incidentally, Nova will also happily let you assign the same compute node to another aggregate with ssd=False for metadata, even though that clearly doesn't make sense. Conflicts are only checked/enforced in the case of the availability_zone metadata.)
Nova configuration also holds parameters relevant to availability zone behavior. In the nova.conf read by your nova-api service, you can set a default availability zone for scheduling, which is used if users do not specify an availability zone in the API call:
[DEFAULT]
default_schedule_zone=AZ1
However, most operators leave this at its default setting (None), because it allows users who don’t care about availability zones to omit it from their API call, and the workload will be scheduled to any availability zone where there is available capacity.
If a user requests an invalid or undefined availability zone, the Nova API will report back with an HTTP 400 error. There is no availability zone fallback option.
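For illustration, a boot request that names an undefined zone is rejected up front (a minimal sketch; the image, flavor, zone, and instance names here are placeholders, and the exact error text varies by release):
$ nova boot --flavor m1.tiny --image cirros --availability-zone no-such-az testvm
The nova CLI surfaces this as an ERROR (BadRequest) response with HTTP 400, rather than falling back to another zone.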
Cinder
Creating availability zones in Cinder is accomplished by setting the following configuration parameter in cinder.conf, on the nodes where your cinder-volume service runs:
[DEFAULT]
storage_availability_zone=AZ1
Note that you can only set the availability zone to one value. This is consistent with availability zones in other OpenStack projects that do not allow for the notion of overlapping failure domains or multiple failure domain levels or tiers.
The change takes effect when you restart your cinder-volume services. You can confirm your availability zone assignments as follows:
cinder service-list
+—————+——————-+——+———+——-+
|     Binary    |        Host       | Zone | Status  | State |
+—————+——————-+——+———+——-+
| cinder-volume | hostname1@LVM     |  AZ1 | enabled |   up  |
| cinder-volume | hostname2@LVM     |  AZ2 | enabled |   up  |
If you would like to establish a default availability zone, you can set this parameter in cinder.conf on the cinder-api nodes:
[DEFAULT]
default_availability_zone=AZ1
This instructs Cinder which availability zone to use if the API call did not specify one. If you don't set it, it will use a hardcoded default, nova. In the case of our example, where we've set the default availability zone in Nova to AZ1, this would result in a failure. This also means that unlike Nova, users do not have the flexibility of omitting availability zone information and expecting that Cinder will select any available backend with spare capacity in any availability zone.
Therefore, you have a choice with this parameter. You can set it to one of your availability zones so that API calls without availability zone information don't fail, at the cost of potentially uneven storage allocation across your availability zones. Or you can leave it unset, and accept that user API calls that forget or omit availability zone info will fail.
Another option is to set the default to a non-existent availability zone such as you-must-specify-an-AZ, so that when the call fails due to the non-existent availability zone, this information will be included in the error message sent back to the client.
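For example, in cinder.conf on the cinder-api nodes (a minimal sketch; the zone name is just an arbitrary placeholder that should not match any real availability zone):
[DEFAULT]
default_availability_zone=you-must-specify-an-AZ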
Your storage backends, storage drivers, and storage architecture may also affect how you set up your availability zones. If we are using the reference Cinder LVM ISCSI Driver deployed on commodity hardware, and that hardware fits the same availability zone criteria of our computes, then we could setup availability zones to match what we have defined in Nova. We could also do the same if we had a third party storage appliance in each availability zone, e.g.:
|     Binary    |           Host          | Zone | Status  | State |
| cinder-volume | hostname1@StorageArray1 |  AZ1 | enabled |   up  |
| cinder-volume | hostname2@StorageArray2 |  AZ2 | enabled |   up  |
(Note: Notice that the hostnames (hostname1 and hostname2) are still different in this example. The cinder multi-backend feature allows us to configure multiple storage backends in the same cinder.conf (for the same cinder-volume service), but Cinder availability zones can only be defined per cinder-volume service, and not per-backend per-cinder-volume service. In other words, if you define multiple backends in one cinder.conf, they will all inherit the same availability zone.)
However, in many cases if you’re using a third party storage appliance, then these systems usually have their own built-in redundancy that exist outside of OpenStack notions of availability zones. Similarly if you use a distributed storage solution like Ceph, then availability zones have little or no meaning in this context. In this case, you can forgo Cinder availability zones.
The one issue in doing this, however, is that any availability zones you defined for Nova won't match. This can cause problems when Nova makes API calls to Cinder, for example when performing a Boot from Volume API call through Nova. If Nova decided to provision your VM in AZ1, it will tell Cinder to provision a boot volume in AZ1, but Cinder doesn't know anything about AZ1, so this API call will fail. To prevent this from happening, we need to set the following parameter in cinder.conf on your nodes running cinder-api:
[DEFAULT]
allow_availability_zone_fallback=True
This parameter prevents the API call from failing, because if the requested availability zone does not exist, Cinder will fallback to another availability zone (whichever you defined in default_availability_zone parameter, or in storage_availability_zone if the default is not set). The hardcoded default storage_availability_zone is nova, so the fallback availability zone should match the default availability zone for your cinder-volume services, and everything should work.
The easiest way to solve the problem, however, is to remove the AvailabilityZoneFilter from your filter list in cinder.conf on nodes running cinder-scheduler. This makes the scheduler ignore any availability zone information passed to it altogether, which may also be helpful in case of any availability zone configuration mismatch.
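For example, in cinder.conf on the cinder-scheduler nodes (a minimal sketch, assuming the stock filter set of AvailabilityZoneFilter, CapacityFilter, and CapabilitiesFilter; check your deployment's actual filter list before changing it):
[DEFAULT]
scheduler_default_filters = CapacityFilter,CapabilitiesFilter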
Neutron
Availability zone support was added to Neutron in the Mitaka release. Availability zones can be set for DHCP and L3 agents in their respective configuration files:
[AGENT]
availability_zone = AZ1
Restart the agents, and confirm availability zone settings as follows:
neutron agent-show <agent-id>
+———————+————+
| Field               | Value      |
+———————+————+
| availability_zone   | AZ1        |

If you would like to establish a default availability zone, you can set this parameter in neutron.conf on neutron-server nodes:
[DEFAULT]
default_availability_zones=AZ1,AZ2
This parameter tells Neutron which availability zones to use if the API call did not specify any. Unlike Cinder, you can specify multiple availability zones, and leaving it undefined places no constraints on scheduling, as there are no hard-coded defaults. If you have users making API calls that do not care about the availability zone, then you can enumerate all your availability zones for this parameter, or simply leave it undefined; both would yield the same result.
Additionally, when users do specify an availability zone, such requests are fulfilled as a “best effort” in Neutron. In other words, there is no need for an availability zone fallback parameter, because your API call will still execute even if your availability zone hint can't be satisfied.
Another important distinction that sets Neutron aside from Nova and Cinder is that it implements availability zones as scheduler hints, meaning that on the client side you can repeat this option to chain together multiple availability zone specifications in the event that more than one availability zone would satisfy your availability criteria. For example:
$ neutron net-create --availability-zone-hint AZ1 --availability-zone-hint AZ2 new_network
As with Cinder, the Neutron plugins and backends you’re using deserve attention, as the support or need for availability zones may be different depending on their implementation. For example, if you’re using a reference Neutron deployment with the ML2 plugin and with DHCP and L3 agents deployed to commodity hardware, you can likely place these agents consistently according to the same availability zone criteria used for your computes.
Whereas in contrast, other alternatives such as the Contrail plugin for Neutron do not support availability zones. Or if you are using Neutron DVR for example, then availability zones have limited significance for Layer 3 Neutron.
OpenStack Project availability zone Comparison Summary
Before we move on, it's helpful to review how each project handles availability zones.

Default availability zone scheduling
Nova: Can set to one availability zone or None
Cinder: Can set one availability zone; cannot set None
Neutron: Can set to any list of availability zones, or none

Availability zone fallback
Nova: Not supported
Cinder: Supported through configuration
Neutron: N/A; scheduling to availability zones is done on a best-effort basis

Availability zone definition restrictions
Nova: No more than 1 availability zone per nova-compute
Cinder: No more than 1 availability zone per cinder-volume
Neutron: No more than 1 availability zone per neutron agent

Availability zone client restrictions
Nova: Can specify one availability zone or none
Cinder: Can specify one availability zone or none
Neutron: Can specify an arbitrary number of availability zones

Availability zones typically used when you have…
Nova: Commodity HW for computes, libvirt driver
Cinder: Commodity HW for storage, LVM iSCSI driver
Neutron: Commodity HW for neutron agents, ML2 plugin

Availability zones not typically used when you have…
Nova: Third party hypervisor drivers that manage their own HA for VMs (e.g., DRS for vCenter)
Cinder: Third party drivers, backends, etc. that manage their own HA
Neutron: Third party plugins, backends, etc. that manage their own HA

Best Practices for availability zones
Now let&8217;s talk about how to best make use of availability zones.
What should my availability zones represent?
The first thing you should do as a cloud operator is to nail down your own accepted definition of an availability zone and how you will use them, and remain consistent. You don’t want to end up in a situation where availability zones are taking on more than one meaning in the same cloud. For example:
Fred’s AZ            | Example of AZ used to perform tenant workload isolation
VMWare cluster 1 AZ | Example of AZ used to select a specific hypervisor type
Power source 1 AZ   | Example of AZ used to select a specific failure domain
Rack 1 AZ           | Example of AZ used to select a specific failure domain
Such a set of definitions would be a source of inconsistency and confusion in your cloud. It’s usually better to keep things simple with one availability zone definition, and use OpenStack features such as Nova Flavors or Nova/Cinder boot hints to achieve other requirements for multi-tenancy isolation, ability to select between different hypervisor options and other features, and so on.
Note that OpenStack currently does not support the concept of multiple failure domain levels/tiers. Even though we may have multiple ways to define failure domains (e.g., by power circuit, rack, room, etc), we must pick a single convention.
For the purposes of this article, we will discuss availability zones in the context of failure domains. However, we will cover one other use for availability zones in the third section.
How many availability zones do I need?
One question that customers frequently get hung up on is how many availability zones they should create. This can be tricky because the setup and management of availability zones involves stakeholders at every layer of the solution stack, from tenant applications to cloud operators, down to data center design and planning.

A good place to start is your cloud application requirements: How many failure domains are they designed to work with (i.e. redundancy factor)? The likely answer is two (primary + backup), three (for example, for a database or other quorum-based system), or one (for example, a legacy app with single points of failure). Therefore, the vast majority of clouds will have either 2, 3, or 1 availability zone.
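As a simple illustration of the two-zone (primary + backup) pattern, a tenant could pin each half of a workload to a different zone when booting (a minimal sketch; the flavor, image, and instance names here are placeholders):
$ nova boot --flavor m1.small --image myapp-image --availability-zone AZ1 myapp-primary
$ nova boot --flavor m1.small --image myapp-image --availability-zone AZ2 myapp-backup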
Also keep in mind that as a general design principle, you want to minimize the number of availability zones in your environment, because the side effect of availability zone proliferation is that you are dividing your capacity into more resource islands. The resource utilization in each island may not be equal, and now you have an operational burden to track and maintain capacity in each island/availability zone. Also, if you have a lot of availability zones (more than the redundancy factor of tenant applications), tenants are left to guess which availability zones to use and which have available capacity.
How do I organize my availability zones?
The value proposition of availability zones is that tenants are able to achieve a higher level of availability in their applications. In order to make good on that proposition, we need to design our availability zones in ways that mitigate single points of failure.             
For example, if our resources are split between two power sources in the data center, then we may decide to define two resource pools (availability zones) according to their connected power source:
Or, if we only have one TOR switch in our racks, then we may decide to define availability zones by rack. However, here we can run into problems if we make each rack its own availability zone, as this will not scale from a capacity management perspective for more than 2-3 racks/availability zones (because of the “resource island” problem). In this case, you might consider dividing/arranging your total rack count into 2 or 3 logical groupings that correlate to your defined availability zones:
We may also find situations where we have redundant TOR switch pairs in our racks, power source diversity to each rack, and lack a single point of failure. You could still place racks into availability zones as in the previous example, but the value of availability zones is marginalized, since you need to have a double failure (e.g., both TORs in the rack failing) to take down a rack.
Ultimately with any of the above scenarios, the value added by your availability zones will depend on the likelihood and expression of failure modes, both the ones you have designed for and the ones you have not. But getting accurate and complete information on such failure modes may not be easy, so the value of availability zones from this kind of unplanned failure can be difficult to pin down.
There is, however, another area where availability zones have the potential to provide value: planned maintenance. Have you ever needed to move, recable, upgrade, or rebuild a rack? Ever needed to shut off power to part of the data center to do electrical work? Ever needed to apply disruptive updates to your hypervisors, like kernel or QEMU security updates? How about upgrades of OpenStack or the hypervisor operating system?
Chances are good that these kinds of planned maintenance are a higher source of downtime than unplanned hardware failures that happen out of the blue. Therefore, this type of planned maintenance can also impact how you define your availability zones. In general the grouping of racks into availability zones (described previously) still works well, and is the most common availability zone paradigm we see for our customers.
However, it could affect the way in which you group your racks into availability zones. For example, you may choose to create availability zones based on physical parameters like which floor, room or building the equipment is located in, or other practical considerations that would help in the event you need to vacate or rebuild certain space in your DC(s). Ex:
One of the limitations of the OpenStack implementation of availability zones made apparent in this example is that you are forced to choose one definition. Applications can request a specific availability zone, but are not offered the flexibility of requesting building level isolation, vs floor, room, or rack level isolation. This will be a fixed, inherited property of the availability zones you create. If you need more, then you start to enter the realm of other OpenStack abstractions like Regions and Cells.
Other uses for availability zones?
Another way in which people have found to use availability zones is in multi-hypervisor environments. In the ideal world of the “implementation-agnostic” cloud, we would abstract such underlying details from users of our platform. However there are some key differences between hypervisors that make this aspiration difficult. Take the example of KVM & VMWare.
When an iSCSI target is provisioned through Cinder with the LVM iSCSI driver, it cannot be attached to or consumed by ESXi nodes. The provisioning request must go through VMWare's VMDK Cinder driver, which proxies the creation and attachment requests to vCenter. This incompatibility can cause errors, and thus a negative user experience, if the user tries to attach a volume provisioned from the wrong backend to their hypervisor.
To solve this problem, some operators use availability zones as a way for users to select hypervisor types (for example, AZ_KVM1, AZ_VMWARE1), and set the following configuration in their nova.conf:
[cinder]
cross_az_attach = False
This presents users with an error if they attempt to attach a volume from one availability zone (e.g., AZ_VMWARE1) to an instance in another availability zone (e.g., AZ_KVM1). The call would certainly have failed regardless, but with a different error from farther downstream, from one of the nova-compute agents, instead of from nova-api. This way, it's easier for the user to see where they went wrong and correct the problem.
In this case, the availability zone also acts as a tag to remind users which hypervisor their VM resides on.
In my opinion, this is stretching the definition of availability zones as failure domains. VMWare may be considered its own failure domain separate from KVM, but that’s not the primary purpose of creating availability zones this way. The primary purpose is to differentiate hypervisor types. Different hypervisor types have a different set of features and capabilities. If we think about the problem in these terms, there are a number of other solutions that come to mind:

Nova Flavors: Define a “VMWare” set of flavors that map to your VCenter cluster(s). If your tenants that use VMWare are different than your tenants who use KVM, you can make them private flavors, ensuring that tenants only ever see or interact with the hypervisor type they expect.
Cinder: Similarly for Cinder, you can make the VMWare backend private to specific tenants, and/or set quotas differently for that backend, to ensure that tenants will only allocate from the correct storage pools for their hypervisor type.
Image metadata: You can tag your images according to the hypervisor they run on. Set the image property hypervisor_type to vmware for VMDK images, and to qemu for other images. The ImagePropertiesFilter in Nova will then honor the hypervisor placement request; a short example follows this list.
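A minimal sketch of that tagging step using the unified openstack CLI (the image names here are placeholders):
$ openstack image set --property hypervisor_type=vmware my-vmdk-image
$ openstack image set --property hypervisor_type=qemu my-qcow2-image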

Soo… Are availability zones right for me?
So wrapping up, think of availability zones in terms of:

Unplanned failures: If you have a good history of failure data, well understood failure modes, or some known single point of failure that availability zones can help mitigate, then availability zones may be a good fit for your environment.
Planned maintenance: If you have well understood maintenance processes that have awareness of your availability zone definitions and can take advantage of them, then availability zones may be a good fit for your environment. This is where availability zones can provide some of the greatest value, but it is also the most difficult to achieve, as it requires intelligent availability zone-aware rolling updates and upgrades, and affects how data center personnel perform maintenance activities.
Tenant application design/support: If your tenants are running legacy apps, apps with single points of failure, or do not support use of availability zones in their provisioning process, then availability zones will be of no use for these workloads.
Other alternatives for achieving app availability: Workloads built for geo-redundancy can achieve the same level of HA (or better) in a multi-region cloud. If this were the case for your cloud and your cloud workloads, availability zones would be unnecessary.
OpenStack projects: You should factor into your decision the limitations and differences in availability zone implementations between different OpenStack projects, and perform this analysis for all OpenStack projects in your scope of deployment.

The post The first and final words on OpenStack availability zones appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

The first and final words on OpenStack availability zones

The post The first and final words on OpenStack availability zones appeared first on Mirantis | Pure Play Open Cloud.
Availability zones are one of the most frequently misunderstood and misused constructs in OpenStack. Each cloud operator has a different idea about what they are and how to use them. What&;s more, each OpenStack service implements availability zones differently &; if it even implements them at all.
Often, there isn’t even agreement over the basic meaning or purpose of availability zones.
On the one hand, you have the literal interpretation of the words “availability zone”, which would lead us to think of some logical subdivision of resources into failure domains, allowing cloud applications to intelligently deploy in ways to maximize their availability. (We’ll be running with this definition for the purposes of this article.)
On the other hand, the different ways that projects implement availability zones lend themselves to certain ways of using the feature as a result. In other words, because this feature has been implemented in a flexible manner that does not tie us down to one specific concept of an availability zone, there&8217;s a lot of confusion over how to use them.
In this article, we&8217;ll look at the traditional definition of availability zones, insights into and best practices for planning and using them, and even a little bit about non-traditional uses. Finally, we hope to address the question: Are availability zones right for you?
OpenStack availability zone Implementations
One of the things that complicates use of availability zones is that each OpenStack project implements them in their own way (if at all). If you do plan to use availability zones, you should evaluate which OpenStack projects you&8217;re going to use support them, and how that affects your design and deployment of those services.
For the purposes of this article, we will look at three core services with respect to availability zones: , Cinder, and Neutron. We won&8217;t go into the steps to set up availability zones, but but instead we&8217;ll focus on a few of the key decision points, limitations, and trouble areas with them.
Nova availability zones
Since host aggregates were first introduced in OpenStack Grizzly, I have seen a lot of confusion about availability zones in Nova. Nova tied their availability zone implementation to host aggregates, and because the latter is a feature unique to the Nova project, its implementation of availability zones is also unique.
I have had many people tell me they use availability zones in Nova, convinced they are not using host aggregates. Well, I have news for these people &8212; all* availability zones in Nova are host aggregates (though not all host aggregates are availability zones):
* Exceptions being the default_availability_zone that compute nodes are placed into when not in another user-defined availability zone, and the internal_service_availability_zone where other nova services live
Some of this confusion may come from the nova CLI. People may do a quick search online, see they can create an availability zone with one command, and may not realize that they’re actually creating a host aggregate. Ex:
$ nova aggregate-create <aggregate name> <AZ name>
$ nova aggregate-create HA1 AZ1
+—-+———+——————-+——-+————————+
| Id | Name    | Availability Zone | Hosts | Metadata               |
+—-+———+——————-+——-+————————+
| 4  |   HA1   | AZ1               |       | ‘availability_zone=AZ1’|
+—-+———+——————-+——-+————————+
I have seen people get confused with the second argument (the AZ name). This is just a shortcut for setting the availability_zone metadata for a new host aggregate you want to create.
This command is equivalent to creating a host aggregate, and then setting the metadata:
$ nova aggregate-create HA1
+—-+———+——————-+——-+———-+
| Id | Name    | Availability Zone | Hosts | Metadata |
+—-+———+——————-+——-+———-+
| 7  |   HA1   | –                 |       |          |
+—-+———+——————-+——-+———-+
$ nova aggregate-set-metadata HA1 availability_zone=AZ1
Metadata has been successfully updated for aggregate 7.
+—-+———+——————-+——-+————————+
| Id | Name    | Availability Zone | Hosts | Metadata               |
+—-+———+——————-+——-+————————+
| 7  |   HA1   | AZ1               |       | ‘availability_zone=AZ1’|
+—-+———+——————-+——-+————————+
Doing it this way, it’s more apparent that the workflow is the same as any other host aggregate, the only difference is the “magic” metadata key availability_zone which we set to AZ1 (notice we also see AZ1 show up under the Availability Zone column). And now when we add compute nodes to this aggregate, they will be automatically transferred out of the default_availability_zone and into the one we have defined. For example:
Before:
$ nova availability-zone-list
| nova              | available                              |
| |- node-27        |                                        |
| | |- nova-compute | enabled :-) 2016-11-06T05:13:48.000000 |
+——————-+—————————————-+
After:
$ nova availability-zone-list
| AZ1               | available                              |
| |- node-27        |                                        |
| | |- nova-compute | enabled :-) 2016-11-06T05:13:48.000000 |
+——————-+—————————————-+
Note that there is one behavior that sets apart the availability zone host aggregates apart from others. Namely, nova does not allow you to assign the same compute host to multiple aggregates with conflicting availability zone assignments. For example, we can first add compute a node to the previously created host aggregate with availability zone AZ1:
$ nova aggregate-add-host HA1 node-27
Host node-27 has been successfully added for aggregate 7
+—-+——+——————-+———-+————————+
| Id | Name | Availability Zone | Hosts    | Metadata               |
+—-+——+——————-+———-+————————+
| 7  | HA1  | AZ1               | ‘node-27’| ‘availability_zone=AZ1’|
+—-+——+——————-+———-+————————+
Next, we create a new host aggregate for availability zone AZ2:
$ nova aggregate-create HA2

+—-+———+——————-+——-+———-+
| Id | Name    | Availability Zone | Hosts | Metadata |
+—-+———+——————-+——-+———-+
| 13 |   HA2   | –                 |       |          |
+—-+———+——————-+——-+———-+

$ nova aggregate-set-metadata HA2 availability_zone=AZ2
Metadata has been successfully updated for aggregate 13.
+—-+———+——————-+——-+————————+
| Id | Name    | Availability Zone | Hosts | Metadata               |
+—-+———+——————-+——-+————————+
| 13 |   HA2   | AZ2               |       | ‘availability_zone=AZ2’|
+—-+———+——————-+——-+————————+
Now if we try to add the original compute node to this aggregate, we get an error because this aggregate has a conflicting availability zone:
$ nova aggregate-add-host HA2 node-27
ERROR (Conflict): Cannot add host node-27 in aggregate 13: host exists (HTTP 409)
+—-+——+——————-+———-+————————+
| Id | Name | Availability Zone | Hosts    | Metadata               |
+—-+——+——————-+———-+————————+
| 13 | HA2  | AZ2               |          | ‘availability_zone=AZ2’|
+—-+——+——————-+———-+————————+
(Incidentally, it is possible to have multiple host aggregates with the same availability_zone metadata, and add the same compute host to both. However, there are few, if any, good reasons for doing this.)
In contrast, Nova allows you to assign this compute node to another host aggregate with other metadata fields, as long as the availability_zone doesn&8217;t conflict:
You can see this if you first create a third host aggregate:
$ nova aggregate-create HA3
+—-+———+——————-+——-+———-+
| Id | Name    | Availability Zone | Hosts | Metadata |
+—-+———+——————-+——-+———-+
| 16 |   HA3   | –                 |       |          |
+—-+———+——————-+——-+———-+
Next, tag  the host aggregate for some purpose not related to availability zones (for example, an aggregate to track compute nodes with SSDs):
$ nova aggregate-set-metadata HA3 ssd=True
Metadata has been successfully updated for aggregate 16.
+—-+———+——————-+——-+———–+
| Id | Name    | Availability Zone | Hosts |  Metadata |
+—-+———+——————-+——-+———–+
| 16 |   HA3   | –                 |       | ‘ssd=True’|
+—-+———+——————-+——-+———–+
Adding original node to another aggregate without conflicting availability zone metadata works:
$ nova aggregate-add-host HA3 node-27
Host node-27 has been successfully added for aggregate 16
+—-+——-+——————-+———–+————+
| Id | Name  | Availability Zone | Hosts     |  Metadata  |
+—-+——-+——————-+———–+————+
| 16 | HA3   | –                 | ‘node-27′ | ‘ssd=True’ |
+—-+——-+——————-+———–+————+
(Incidentally, Nova will also happily let you assign the same compute node to another aggregate with ssd=False for metadata, even though that clearly doesn&8217;t make sense. Conflicts are only checked/enforced in the case of the availability_zone metadata.)
Nova configuration also holds parameters relevant to availability zone behavior. In the nova.conf read by your nova-api service, you can set a default availability zone for scheduling, which is used if users do not specify an availability zone in the API call:
[DEFAULT]
default_schedule_zone=AZ1
However, most operators leave this at its default setting (None), because it allows users who don’t care about availability zones to omit it from their API call, and the workload will be scheduled to any availability zone where there is available capacity.
If a user requests an invalid or undefined availability zone, the Nova API will report back with an HTTP 400 error. There is no availability zone fallback option.
Cinder
Creating availability zones in Cinder is accomplished by setting the following configuration parameter in cinder.conf, on the nodes where your cinder-volume service runs:
[DEFAULT]
storage_availability_zone=AZ1
Note that you can only set the availability zone to one value. This is consistent with availability zones in other OpenStack projects that do not allow for the notion of overlapping failure domains or multiple failure domain levels or tiers.
The change takes effect when you restart your cinder-volume services. You can confirm your availability zone assignments as follows:
cinder service-list
+—————+——————-+——+———+——-+
|     Binary    |        Host       | Zone | Status  | State |
+—————+——————-+——+———+——-+
| cinder-volume | hostname1@LVM     |  AZ1 | enabled |   up  |
| cinder-volume | hostname2@LVM     |  AZ2 | enabled |   up  |
If you would like to establish a default availability zone, you can set the this parameter in cinder.conf on the cinder-api nodes:
[DEFAULT]
default_availability_zone=AZ1
This instructs Cinder which availability zone to use if the API call did not specify one. If you don’t, it will use a hardcoded default, nova. In the case of our example, where we&8217;ve set the default availability zone in Nova to AZ1, this would result in a failure. This also means that unlike Nova, users do not have the flexibility of omitting availability zone information and expecting that Cinder will select any available backend with spare capacity in any availability zone.
Therefore, you have a choice with this parameter. You can set it to one of your availability zones so API calls without availability zone information don’t fail, but causing a potential situation of uneven storage allocation across your availability zones. Or, you can not set this parameter, and accept that user API calls that forget or omit availability zone info will fail.
Another option is to set the default to a non-existent availability zone you-must-specify-an-AZ or something similar, so when the call fails due to the non-existant availability zone, this information will be included in the error message sent back to the client.
Your storage backends, storage drivers, and storage architecture may also affect how you set up your availability zones. If we are using the reference Cinder LVM ISCSI Driver deployed on commodity hardware, and that hardware fits the same availability zone criteria of our computes, then we could setup availability zones to match what we have defined in Nova. We could also do the same if we had a third party storage appliance in each availability zone, e.g.:
|     Binary    |           Host          | Zone | Status  | State |
| cinder-volume | hostname1@StorageArray1 |  AZ1 | enabled |   up  |
| cinder-volume | hostname2@StorageArray2 |  AZ2 | enabled |   up  |
(Note: Notice that the hostnames (hostname1 and hostname2) are still different in this example. The cinder multi-backend feature allows us to configure multiple storage backends in the same cinder.conf (for the same cinder-volume service), but Cinder availability zones can only be defined per cinder-volume service, and not per-backend per-cinder-volume service. In other words, if you define multiple backends in one cinder.conf, they will all inherit the same availability zone.)
However, in many cases if you’re using a third party storage appliance, then these systems usually have their own built-in redundancy that exist outside of OpenStack notions of availability zones. Similarly if you use a distributed storage solution like Ceph, then availability zones have little or no meaning in this context. In this case, you can forgo Cinder availability zones.
The one issue in doing this, however, is that any availability zones you defined for Nova won’t match. This can cause problems when Nova makes API calls to Cinder &; for example, when performing a Boot from Volume API call through Nova. If Nova decided to provision your VM in AZ1, it will tell Cinder to provision a boot volume in AZ1, but Cinder doesn’t know anything about AZ1, so this API call will fail. To prevent this from happening, we need to set the following parameter in cinder.conf on your nodes running cinder-api:
[DEFAULT]
allow_availability_zone_fallback=True
This parameter prevents the API call from failing, because if the requested availability zone does not exist, Cinder will fallback to another availability zone (whichever you defined in default_availability_zone parameter, or in storage_availability_zone if the default is not set). The hardcoded default storage_availability_zone is nova, so the fallback availability zone should match the default availability zone for your cinder-volume services, and everything should work.
The easiest way to solve the problem, however is to remove the AvailabilityZoneFilter from your filter list in cinder.conf on nodes running cinder-scheduler. This makes the scheduler ignore any availability zone information passed to it altogether, which may also be helpful in case of any availability zone configuration mismatch.
Neutron
Availability zone support was added to Neutron in the Mitaka release. Availability zones can be set for DHCP and L3 agents in their respective configuration files:
[AGENT]
Availability_zone = AZ1
Restart the agents, and confirm availability zone settings as follows:
neutron agent-show <agent-id>
+———————+————+
| Field               | Value      |
+———————+————+
| availability_zone   | AZ1        |

If you would like to establish a default availability zone, you can set the this parameter in neutron.conf on neutron-server nodes:
[DEFAULT]
default_availability_zones=AZ1,AZ2
This parameters tells Neutron which availability zones to use if the API call did not specify any. Unlike Cinder, you can specify multiple availability zones, and leaving it undefined places no constraints in scheduling, as there are no hard coded defaults. If you have users making API calls that do not care about the availability zone, then you can enumerate all your availability zones for this parameter, or simply leave it undefined &8211; both would yield the same result.
Additionally, when users do specify an availability zone, such requests are fulfilled as a “best effort” in Neutron. In other words, there is no need for an availability zone fallback parameter, because your API call still execute even if your availability zone hint can’t be satisfied.
Another important distinction that sets Neutron aside from Nova and Cinder is that it implements availability zones as scheduler hints, meaning that on the client side you can repeat this option to chain together multiple availability zone specifications in the event that more than one availability zone would satisfy your availability criteria. For example:
$ neutron net-create –availability-zone-hint AZ1 –availability-zone-hint AZ2 new_network
As with Cinder, the Neutron plugins and backends you’re using deserve attention, as the support or need for availability zones may be different depending on their implementation. For example, if you’re using a reference Neutron deployment with the ML2 plugin and with DHCP and L3 agents deployed to commodity hardware, you can likely place these agents consistently according to the same availability zone criteria used for your computes.
Whereas in contrast, other alternatives such as the Contrail plugin for Neutron do not support availability zones. Or if you are using Neutron DVR for example, then availability zones have limited significance for Layer 3 Neutron.
OpenStack Project availability zone Comparison Summary
Before we move on, it&8217;s helpful to review how each project handles availability zones.

Default availability zone scheduling
Nova: Can set to one availability zone or None
Cinder: Can set one availability zone; cannot set None
Neutron: Can set to any list of availability zones or none

Availability zone fallback
Nova: None supported
Cinder: Supported through configuration
Neutron: N/A; scheduling to availability zones is done on a best-effort basis

Availability zone definition restrictions
Nova: No more than 1 availability zone per nova-compute
Cinder: No more than 1 availability zone per cinder-volume
Neutron: No more than 1 availability zone per neutron agent

Availability zone client restrictions
Nova: Can specify one availability zone or none
Cinder: Can specify one availability zone or none
Neutron: Can specify an arbitrary number of availability zones

Availability zones typically used when you have…
Nova: Commodity HW for computes, libvirt driver
Cinder: Commodity HW for storage, LVM iSCSI driver
Neutron: Commodity HW for neutron agents, ML2 plugin

Availability zones not typically used when you have…
Nova: Third-party hypervisor drivers that manage their own HA for VMs (e.g., DRS for vCenter)
Cinder: Third-party drivers, backends, etc. that manage their own HA
Neutron: Third-party plugins, backends, etc. that manage their own HA

Best Practices for availability zones
Now let’s talk about how to best make use of availability zones.
What should my availability zones represent?
The first thing you should do as a cloud operator is to nail down your own accepted definition of an availability zone and how you will use them, and remain consistent. You don’t want to end up in a situation where availability zones are taking on more than one meaning in the same cloud. For example:
Fred’s AZ            | Example of AZ used to perform tenant workload isolation
VMWare cluster 1 AZ  | Example of AZ used to select a specific hypervisor type
Power source 1 AZ    | Example of AZ used to select a specific failure domain
Rack 1 AZ            | Example of AZ used to select a specific failure domain
Such a set of definitions would be a source of inconsistency and confusion in your cloud. It’s usually better to keep things simple with a single availability zone definition, and use OpenStack features such as Nova flavors or Nova/Cinder boot hints to meet other requirements, such as multi-tenancy isolation or the ability to select between different hypervisor options (a boot-hint example follows).
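As a minimal illustration of the boot-hint approach (the flavor, image, zone, and instance names here are placeholders), a tenant can target a specific availability zone at boot time, and Nova passes that zone through to Cinder on a boot-from-volume request:
# Boot a VM into a specific availability zone
nova boot --flavor m1.small --image cirros --availability-zone AZ1 web01

# Boot from volume; the AZ1 hint is forwarded to Cinder when the boot volume is created
nova boot --flavor m1.small --availability-zone AZ1 \
  --block-device source=image,id=<image-id>,dest=volume,size=10,bootindex=0 db01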
Note that OpenStack currently does not support the concept of multiple failure domain levels/tiers. Even though we may have multiple ways to define failure domains (e.g., by power circuit, rack, room, etc), we must pick a single convention.
For the purposes of this article, we will discuss availability zones in the context of failure domains. However, we will cover one other use for availability zones in the third section.
How many availability zones do I need?
One question that customers frequently get hung up on is how many availability zones they should create. This can be tricky because the setup and management of availability zones involves stakeholders at every layer of the solution stack, from tenant applications to cloud operators, down to data center design and planning.

A good place to start is your cloud application requirements: How many failure domains are they designed to work with (i.e. redundancy factor)? The likely answer is two (primary + backup), three (for example, for a database or other quorum-based system), or one (for example, a legacy app with single points of failure). Therefore, the vast majority of clouds will have either 2, 3, or 1 availability zone.
Also keep in mind that as a general design principle, you want to minimize the number of availability zones in your environment, because the side effect of availability zone proliferation is that you are dividing your capacity into more resource islands. The resource utilization in each island may not be equal, and now you have an operational burden to track and maintain capacity in each island/availability zone. Also, if you have a lot of availability zones (more than the redundancy factor of tenant applications), tenants are left to guess which availability zones to use and which have available capacity.
How do I organize my availability zones?
The value proposition of availability zones is that tenants are able to achieve a higher level of availability in their applications. In order to make good on that proposition, we need to design our availability zones in ways that mitigate single points of failure.             
For example, if our resources are split between two power sources in the data center, then we may decide to define two resource pools (availability zones) according to their connected power source:
Or, if we only have one TOR switch in our racks, then we may decide to define availability zones by rack. However, here we can run into problems if we make each rack its own availability zone, as this will not scale from a capacity management perspective beyond 2-3 racks/availability zones (because of the “resource island” problem). In this case, you might consider dividing your total rack count into 2 or 3 logical groupings that correlate to your defined availability zones:
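In Nova terms, a hedged sketch of this rack-grouping approach (aggregate, zone, and host names are illustrative): each logical rack group becomes a host aggregate exposed as an availability zone.
# Rack group 1 becomes availability zone AZ1
nova aggregate-create rack-group-1 AZ1
nova aggregate-add-host rack-group-1 compute-rack1-01
nova aggregate-add-host rack-group-1 compute-rack2-01

# Rack group 2 becomes availability zone AZ2
nova aggregate-create rack-group-2 AZ2
nova aggregate-add-host rack-group-2 compute-rack3-01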
We may also find situations where we have redundant TOR switch pairs in our racks and power source diversity to each rack, leaving no single point of failure. You could still place racks into availability zones as in the previous example, but the value of availability zones is marginalized, since a double failure (e.g., both TORs in the rack failing) is needed to take down a rack.
Ultimately, with any of the above scenarios, the value added by your availability zones will depend on the likelihood and expression of failure modes - both the ones you have designed for and the ones you have not. But getting accurate and complete information on such failure modes may not be easy, so the value of availability zones against this kind of unplanned failure can be difficult to pin down.
There is, however, another area where availability zones have the potential to provide value: planned maintenance. Have you ever needed to move, recable, upgrade, or rebuild a rack? Ever needed to shut off power to part of the data center to do electrical work? Ever needed to apply disruptive updates to your hypervisors, like kernel or QEMU security updates? How about upgrades of OpenStack or the hypervisor operating system?
Chances are good that these kinds of planned maintenance are a higher source of downtime than unplanned hardware failures that happen out of the blue. Therefore, this type of planned maintenance can also impact how you define your availability zones. In general the grouping of racks into availability zones (described previously) still works well, and is the most common availability zone paradigm we see for our customers.
However, it could affect the way in which you group your racks into availability zones. For example, you may choose to create availability zones based on physical parameters like which floor, room or building the equipment is located in, or other practical considerations that would help in the event you need to vacate or rebuild certain space in your DC(s). Ex:
One of the limitations of the OpenStack implementation of availability zones made apparent in this example is that you are forced to choose one definition. Applications can request a specific availability zone, but are not offered the flexibility of requesting building level isolation, vs floor, room, or rack level isolation. This will be a fixed, inherited property of the availability zones you create. If you need more, then you start to enter the realm of other OpenStack abstractions like Regions and Cells.
Other uses for availability zones?
Another way people have found to use availability zones is in multi-hypervisor environments. In the ideal world of the “implementation-agnostic” cloud, we would abstract such underlying details from users of our platform. However, there are some key differences between hypervisors that make this aspiration difficult. Take the example of KVM & VMWare.
When an iSCSI target is provisioned with the LVM iSCSI Cinder driver, it cannot be attached to or consumed by ESXi nodes. The provisioning request must instead go through VMWare’s VMDK Cinder driver, which proxies the creation and attachment requests to vCenter. This incompatibility can cause errors, and thus a negative user experience, if the user tries to attach a volume provisioned from the wrong backend to their hypervisor.
To solve this problem, some operators use availability zones as a way for users to select hypervisor types (for example, AZ_KVM1, AZ_VMWARE1), and set the following configuration in their nova.conf:
[cinder]
cross_az_attach = False
This presents users with an error if they attempt to attach a volume from one availability zone (e.g., AZ_VMWARE1) to an instance in another availability zone (e.g., AZ_KVM1). The call would have failed regardless, but with a different error from farther downstream, from one of the nova-compute agents instead of from nova-api. This way, it’s easier for the user to see where they went wrong and correct the problem.
In this case, the availability zone also acts as a tag to remind users which hypervisor their VM resides on.
In my opinion, this is stretching the definition of availability zones as failure domains. VMWare may be considered its own failure domain separate from KVM, but that’s not the primary purpose of creating availability zones this way. The primary purpose is to differentiate hypervisor types. Different hypervisor types have a different set of features and capabilities. If we think about the problem in these terms, there are a number of other solutions that come to mind:

Nova Flavors: Define a “VMWare” set of flavors that map to your vCenter cluster(s). If the tenants that use VMWare are different from the tenants who use KVM, you can make them private flavors, ensuring that tenants only ever see or interact with the hypervisor type they expect (see the sketch after this list).
Cinder: Similarly for Cinder, you can make the VMWare backend private to specific tenants, and/or set quotas differently for that backend, to ensure that tenants will only allocate from the correct storage pools for their hypervisor type.
Image metadata: You can tag your images according to the hypervisor they run on. Set the image property hypervisor_type to vmware for VMDK images and to qemu for other images. The ImagePropertiesFilter in Nova will then honor the hypervisor placement request.

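A minimal sketch of the private-flavor and image-tagging options (the flavor, project, and image names are placeholders, and exact client syntax may vary slightly by release):
# Create a private flavor for the VMWare pool and grant it to one project
nova flavor-create --is-public false vmware.small auto 4096 40 2
nova flavor-access-add vmware.small <acme-vmware-project-id>

# Tag a VMDK image with its hypervisor type so the scheduler places it on ESXi hosts
glance image-update --property hypervisor_type=vmware <win2012-vmdk-image-id>
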
So… Are availability zones right for me?
So wrapping up, think of availability zones in terms of:

Unplanned failures: If you have a good history of failure data, well understood failure modes, or some known single point of failure that availability zones can help mitigate, then availability zones may be a good fit for your environment.
Planned maintenance: If you have well-understood maintenance processes that are aware of your availability zone definitions and can take advantage of them, then availability zones may be a good fit for your environment. This is where availability zones can provide some of the greatest value, but it is also the most difficult to achieve, as it requires intelligent availability zone-aware rolling updates and upgrades, and affects how data center personnel perform maintenance activities (a minimal sketch follows this list).
Tenant application design/support: If your tenants are running legacy apps, apps with single points of failure, or do not support use of availability zones in their provisioning process, then availability zones will be of no use for these workloads.
Other alternatives for achieving app availability: Workloads built for geo-redundancy can achieve the same level of HA (or better) in a multi-region cloud. If this were the case for your cloud and your cloud workloads, availability zones would be unnecessary.
OpenStack projects: You should factor into your decision the limitations and differences in availability zone implementations between different OpenStack projects, and perform this analysis for all OpenStack projects in your scope of deployment.

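For illustration, one hedged sketch of an availability zone-aware maintenance step (host names are placeholders): list which computes belong to the zone being serviced, then disable them so no new workloads land there while the racks are worked on.
# Show which hosts belong to which availability zone
nova availability-zone-list

# Disable scheduling to the computes in the zone under maintenance
nova service-disable --reason "AZ1 rack maintenance" compute-rack1-01 nova-compute
nova service-disable --reason "AZ1 rack maintenance" compute-rack2-01 nova-compute
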
The post The first and final words on OpenStack availability zones appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

Why Boycott Containers? Remain Calm!

There are a lot of important things happening all around the world these days. Some of them have people upset or angry, to the point where they feel the desire to march or boycott to raise awareness and demonstrate their frustrations. As a company that believes in open communities and open discussions, we appreciate the need […]
Quelle: OpenShift

Docker Meetup Community reaches 150K members

We are thrilled to announce that the community has reached over 150,000 members! We’d like to take a moment to acknowledge all the amazing contributors and Docker enthusiasts who are working hard to organize frequent and interesting Docker-centric meetups. Thanks to you, there are 275 Docker meetup groups, in 75 countries, across 6 continents.
There were over 1000 Docker meetups held all over the world last year. Big shout out to Ben Griffin, organizer of Docker Melbourne, who organized 18 meetups in 2016,  Karthik Gaekwad, Lee Calcote, Vikram Sabnis and Everett Toews, organizers of Docker Austin who organized 16 meetups, Gerhard Schweinitz and Stephen J Wallace, organizers of Docker Sydney who organized 13, and Jesse White, Luisa Morales and Doug Masiero from Docker NYC who organized 12. 

We also wanted to thank and give a massive shout out to organizers Adrien Blind and Patrick Aljord, who have grown the Docker Paris Meetup group to nearly 4,000 members and have hosted 46 events since they launched the group almost 4 years ago!
 

Reached 3925 @DockerParis meetup members ! We may be able to celebrate 4000 members during feb docker event @vcoisne @jpetazzo @docker pic.twitter.com/CGmvShIj0L
— Adrien Blind (@AdrienBlind) January 17, 2017

One of our newest groups, Docker Havana, started last November and already has more than 200 members! The founding organizers, Enrique Carbonell and Manuel Morejón, are doing a fantastic job recruiting new members and have even started planning awesome meetups in other Cuban cities too!

 
Interested in getting involved with the Docker Community? The best way to participate is through your local meetup group. Check out this map to see if a Docker user group exists in your city, or take a look at the list of upcoming Docker events.

Can’t find a group near you? Learn more here about how to start a group and the process of becoming an organizer. Our community team would be happy to work with you on solving some of the challenges associated with organizing meetups in your area.
Not interested in starting a group? You can always join the Docker Online Meetup Group!
In case you missed it, we’ve recently introduced a Docker Community Directory and Slack to further enable community building and collaboration. Our goal is to give everyone the opportunity to become a more informed and engaged member of the community by creating sub groups and channels based on location, language, use cases, interest in specific Docker-centric projects or initiatives.
Sign up for the Docker Community Directory and Slack  
 

Docker community reaches 150k meetup members! Join a local group and meet other…Click To Tweet

The post Docker Meetup Community reaches 150K members appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/