Multi-node Kubernetes with KDC: A Quick and Dirty Guide

The post Multi-node Kubernetes with KDC: A Quick and Dirty Guide appeared first on Mirantis | Pure Play Open Cloud.
Kubeadm-dind-cluster, or KDC, is a configurable script that enables you to easily create a multi-node cluster on a single machine by deploying Kubernetes nodes as Docker containers (hence the Docker-in-Docker (dind) part of the name) rather than VMs or separate bare metal machines.  It even enables you to easily create multiple clusters on the same machine.
In this article we’ll look at how to use KDC and at some of the simple ways to configure it for more complicated use cases.
Deploying a multi-node Kubernetes cluster with KDC
At its core, deploying Kubernetes with KDC is a simple matter of downloading the script and executing it:
$ wget https://github.com/kubernetes-sigs/kubeadm-dind-cluster/releases/download/v0.1.0/dind-cluster-v1.13.sh
(You’ll notice that the script name includes a version number that happens to match the latest version of Kubernetes. As you might have guessed, that’s no coincidence. KDC supports Kubernetes versions 1.10 through 1.13, and to change versions you simply change the script version. So to deploy Kubernetes 1.12 you would use dind-cluster-v1.12.sh instead of dind-cluster-v1.13.sh.)
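Since the script name encodes the Kubernetes version, the mapping can be made explicit with a small helper. This is just a sketch; the kdc_script_for function is our own, not part of KDC:

```shell
# Map a supported Kubernetes minor version to the matching KDC script name.
# (Hypothetical helper; KDC itself simply ships one script per version.)
kdc_script_for() {
  case "$1" in
    1.10|1.11|1.12|1.13) echo "dind-cluster-v$1.sh" ;;
    *) echo "unsupported Kubernetes version: $1" >&2; return 1 ;;
  esac
}

kdc_script_for 1.12   # prints dind-cluster-v1.12.sh
```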
Once you’ve got the script, make sure it’s executable, then run it:
$ chmod +x dind-cluster-v1.13.sh
$ sudo ./dind-cluster-v1.13.sh up
The script can take a few minutes to run. During that time, it’s performing several steps, including:

Pulling in the most recent DIND images
Running kubeadm init to create the cluster
Creating additional containers to act as Kubernetes nodes
Joining those nodes to the original cluster
Setting up CNI
Creating the management, service, and pod networks
Bringing up the Kubernetes dashboard for the new cluster

When it’s finished running, you will see the URL for the Dashboard, as in:

* Bringing up coredns and kubernetes-dashboard
deployment.extensions/coredns scaled
deployment.extensions/kubernetes-dashboard scaled
………………………..[done]
NAME          STATUS   ROLES    AGE     VERSION
kube-master   Ready    master   3m49s   v1.13.0
kube-node-1   Ready    <none>   2m32s   v1.13.0
kube-node-2   Ready    <none>   2m33s   v1.13.0
* Access dashboard at: http://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy
You can then pull that up in your browser and see the brand new empty cluster.

You can also go ahead and work with the cluster from the command line.  First make sure to fix your $PATH; KDC downloads an appropriate version of kubectl for you and places it in the ~/.kubeadm-dind-cluster directory:
$ export PATH="$HOME/.kubeadm-dind-cluster:$PATH"
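That export only affects the current shell. To make it stick across sessions, one option (assuming bash; adapt the startup file for other shells) is to append the same line to ~/.bashrc:

```shell
# Put KDC's kubectl directory on PATH for this shell...
export PATH="$HOME/.kubeadm-dind-cluster:$PATH"

# ...and persist the change for future shells (bash assumed; adapt for zsh etc.).
echo 'export PATH="$HOME/.kubeadm-dind-cluster:$PATH"' >> ~/.bashrc
```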
Then you can see the nodes in the cluster:
$ kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
kube-master   Ready    master   8m40s   v1.13.0
kube-node-1   Ready    <none>   7m23s   v1.13.0
kube-node-2   Ready    <none>   7m24s   v1.13.0
You can also see the actual Docker containers corresponding to the nodes:
$ sudo docker ps --format '{{ .ID }} - {{ .Names }} -- {{ .Labels }}'
c4d28e8b86d8 - kube-node-2 -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_final=1,mirantis.kubeadm_dind_cluster_runtime=
8009079bde24 - kube-node-1 -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_final=1,mirantis.kubeadm_dind_cluster_runtime=
39563d1fb241 - kube-master -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_final=1,mirantis.kubeadm_dind_cluster_runtime=
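Those labels make it easy to single out the KDC containers among everything else Docker is running. A sketch, based on the mirantis.kubeadm_dind_cluster label shown in the output above; the command is printed rather than executed so it reads as a dry run, and you can drop the echo to run it for real:

```shell
# Build the docker command that lists only KDC node containers, filtering on
# the label KDC applies. Remove the echo to execute it on the Docker host.
kdc_ps() {
  echo sudo docker ps \
    --filter "label=mirantis.kubeadm_dind_cluster=1" \
    --format "'{{ .Names }}'"
}

kdc_ps
```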

As you can see, with a single step you have created a three-node Kubernetes cluster.  But what if you want a second cluster? Fortunately, since the nodes are just Docker containers, you can create additional clusters on the same machine without them interfering with each other.
Creating multiple clusters with KDC
Creating an additional cluster is as straightforward as setting a new CLUSTER_ID and re-running the script.  For example:
$ sudo CLUSTER_ID="2" ./dind-cluster-v1.13.sh up

…………………[done]
NAME                    STATUS   ROLES    AGE     VERSION
kube-master-cluster-2   Ready    master   3m58s   v1.13.0
kube-node-1-cluster-2   Ready    <none>   2m43s   v1.13.0
kube-node-2-cluster-2   Ready    <none>   2m41s   v1.13.0
* Access dashboard at: http://127.0.0.1:32770/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy
As you can see, you wind up with a completely separate cluster, with a completely separate dashboard.
You can also set the DIND_LABEL, as in:
$ sudo DIND_LABEL="edge_test" ./dind-cluster-v1.13.sh up
The advantage here is that KDC assigns a random CLUSTER_ID for you, so you don’t have to worry about collisions.  Also, while CLUSTER_ID must be an integer, DIND_LABEL can be a human-readable string.
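Because CLUSTER_ID must be an integer, it’s easy to guard against typos before invoking the script. A minimal sketch; the valid_cluster_id helper is our own, not part of KDC:

```shell
# Return success only if the argument is a non-empty string of digits,
# which is what CLUSTER_ID requires.
valid_cluster_id() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # empty, or contains a non-digit
    *) return 0 ;;
  esac
}

valid_cluster_id 2 && echo "ok"                               # prints ok
valid_cluster_id edge_test || echo "use DIND_LABEL for names"
```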
Customizing a KDC Kubernetes deployment
To change the behavior of the KDC script, you set environment variables. To see which variables are available, check the config.sh file, which you can find here: https://github.com/kubernetes-sigs/kubeadm-dind-cluster/blob/master/config.sh
We’ve already used this mechanism when we created a second cluster by setting CLUSTER_ID and DIND_LABEL.
For example, to create a cluster with 5 nodes, you would use the NUM_NODES variable:
$ sudo NUM_NODES=5 ./dind-cluster-v1.13.sh up
Another variable you might want to change is the networking framework. By default, KDC uses a simple bridge network between the containers, but you also have the option to use flannel, calico, calico-kdd, or weave.  For example, to use calico, you would start your cluster with:
$ sudo CNI_PLUGIN="calico" ./dind-cluster-v1.13.sh up
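An unsupported CNI_PLUGIN value won’t fail until the script is well underway, so a quick membership check can save a cycle. A sketch; the list mirrors the plugins named above, and the valid_cni helper is our own:

```shell
# The CNI plugins KDC understands, per the options described above.
SUPPORTED_CNI="bridge flannel calico calico-kdd weave"

# Succeed only if the argument is one of the supported plugin names.
valid_cni() {
  for p in $SUPPORTED_CNI; do
    [ "$1" = "$p" ] && return 0
  done
  return 1
}

valid_cni calico && echo "calico is supported"   # prints calico is supported
```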
Of course, there’s one more thing we need to take care of: cleaning up.
Starting, stopping, and cleaning up
KDC also gives you the ability to stop, restart, and delete a deployment.  For example, to restart the cluster, you would execute:
$ sudo ./dind-cluster-v1.13.sh up
just as before, but the process is much faster the second time because the images have already been downloaded.
To shut down and remove a cluster, use the down command:
$ sudo ./dind-cluster-v1.13.sh down
This command removes the containers, but the volumes that back them remain, so you can start the cluster back up quickly. If, on the other hand, you want to completely remove the cluster, volumes included, you need to clean:
$ sudo ./dind-cluster-v1.13.sh clean
If you’re going to change Kubernetes versions, you’ll want to run the clean command first.
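Putting it together, moving a host from Kubernetes 1.13 to 1.12 looks roughly like this. The commands are printed rather than executed so they read as a sketch; the download URL follows the v0.1.0 release pattern used earlier, and you would run the printed lines directly on a host with Docker:

```shell
# Sketch of switching KDC Kubernetes versions: clean wipes all cluster state
# first, then we fetch and run the script for the target version.
switch_kdc() {
  from="$1"; to="$2"
  echo "sudo ./dind-cluster-v${from}.sh clean"
  echo "wget https://github.com/kubernetes-sigs/kubeadm-dind-cluster/releases/download/v0.1.0/dind-cluster-v${to}.sh"
  echo "chmod +x dind-cluster-v${to}.sh"
  echo "sudo ./dind-cluster-v${to}.sh up"
}

switch_kdc 1.13 1.12
```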
So that’s what you need to know to get started. What do you plan to build?