Building Your First Certified Kubernetes Cluster On-Premises, Part 1

This is the first in a series of guest blog posts by Docker Captain Ajeet Raina diving into how to run Kubernetes on Docker Enterprise. You can follow Ajeet on Twitter @ajeetsraina and read his blog at http://www.collabnix.com.

There are now a number of options for running certified Kubernetes in the cloud. But let’s say you’re looking to adopt and operationalize Kubernetes for production workloads on-premises. What then? For an on-premises certified Kubernetes distribution, you need an enterprise container platform that allows you to leverage your existing team and processes. 
Enter Docker Kubernetes Service
At DockerCon 2019, Docker announced the Docker Kubernetes Service (DKS). It is a certified Kubernetes distribution that is included with Docker Enterprise 3.0 and is designed to solve this fundamental challenge.
In this blog series, I’ll explain Kubernetes support and capabilities under Docker Enterprise 3.0, covering these topics:

Deploying a certified Kubernetes cluster using Docker Enterprise 3.0 running on a bare metal system
Support for Kubernetes on Windows Server 2019 with Docker Enterprise 3.0
Implementing persistent storage for Kubernetes workloads using iSCSI
Implementing Cluster Ingress for Kubernetes
Deploying the Istio service mesh under Docker Enterprise 3.0

So About DKS…
DKS is the only offering that integrates Kubernetes from the developer desktop to production servers, with ‘sensible secure defaults’ out-of-the-box. Simply put, DKS makes Kubernetes easy to use and more secure for the entire organization. Here are three things that DKS does to simplify and accelerate Kubernetes adoption for the enterprise:

Consistent, seamless Kubernetes experience for developers and operators. With the use of Version Packs, developers’ Kubernetes environments running in Docker Desktop Enterprise stay in sync with production environments for a complete, seamless Kubernetes experience. 
Streamlined Kubernetes lifecycle management (Day 1 and Day 2 operations). A new Cluster Management CLI Plugin allows operations teams to easily deploy, scale, backup and restore and upgrade a certified Kubernetes environment using a set of simple CLI commands.
Enhanced security with ‘sensible defaults.’ Teams get out-of-the-box configurations for security, encryption, access control, and lifecycle management — all without having to become Kubernetes experts.

DKS is compatible with Kubernetes YAML, Helm charts, and the Docker Compose tool for creating multi-container applications. It also provides an automated way to install and configure Kubernetes applications across hybrid and multi-cloud deployments. Capabilities include security, access control, and lifecycle management. Additionally, it uses Docker Swarm Mode to orchestrate Docker containers.
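To make the Compose compatibility concrete, here is a minimal sketch: a Compose file that DKS can translate into Kubernetes objects. The service name, image, and ports below are illustrative, not from this tutorial.

```shell
# Write a minimal Compose file that DKS can schedule onto Kubernetes
# (service name "web" and the nginx image are examples).
cat > docker-compose.yml <<'EOF'
version: "3.7"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 2
    ports:
      - "8080:80"
EOF
```

With a UCP client bundle loaded, a command along the lines of `docker stack deploy --orchestrator kubernetes -c docker-compose.yml demo` would then deploy this as Kubernetes Deployments and Services.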
Kubernetes 1.14+ in Docker Enterprise
Docker Enterprise 3.0 comes with the following components:

Containerd 1.2.6
Docker Engine 19.03.1
Runc 1.0.0-rc8
docker-init 0.18.0
Universal Control Plane 3.2.0
Docker Trusted Registry 2.7
Kubernetes 1.14+
Calico v3.5.7

Docker UCP manager and worker nodes.
In this first post of the series, I will show you how to deploy a Certified Kubernetes cluster using Docker Enterprise 3.0 on bare metal (meaning you can deploy on-premises).
Pre-Requisites:

Ubuntu 18.04 (at least 2 nodes to set up a multi-node cluster)
A minimum of 4 GB of RAM for UCP 3.2.0
Go to https://hub.docker.com/my-content.
Click the Setup button for Docker Enterprise Edition for Ubuntu.
Copy the URL from the field labeled "Copy and paste this URL to download your Edition."

Now you’re ready to start installing Docker Enterprise and Kubernetes. Let’s get started.
Step 1: Install packages to allow apt to use a repository over HTTPS
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
Step 2: Add the $DOCKER_EE_URL variable into your environment
Replace the storebits URL below with the one you noted down in the prerequisites; the sub-XXX… segments are unique to your subscription.
$ curl -fsSL https://storebits.docker.com/ee/m/sub-XXX-44fb-XXX-b6bf-XXXXXX/ubuntu/gpg | \
    sudo apt-key add -
Step 3: Add the stable Repository
$ sudo add-apt-repository \
   "deb [arch=amd64] https://storebits.docker.com/ee/m/sub-XXX-44fb-XXX-b6bf-XXXXXX/ubuntu \
   $(lsb_release -cs) \
   stable-19.03"
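Before moving on, refresh the package index so apt can see the packages from the newly added repository (without this, the next step may not find docker-ee):

```shell
# Refresh apt's package index to pick up the Docker Enterprise repository
sudo apt-get update
```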
Step 4: Install Docker Enterprise
$ sudo apt-get install docker-ee docker-ee-cli containerd.io
Step 5: Verifying Docker Enterprise Version
$ sudo docker version
Client: Docker Engine - Enterprise
Version: 19.03.1
API version: 1.40
Go version: go1.12.5
Git commit: f660560
Built: Thu Jul 25 20:59:23 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Enterprise
Engine:
Version: 19.03.1
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: f660560
Built: Thu Jul 25 20:57:45 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683
cse@ubuntu1804-1:~$
Step 6: Test the Hello World Example
$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:6540fc08ee6e6b7b63468dc3317e3303aae178cb8a45ed3123180328bcc1d20f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!

This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the executable
that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your
terminal.
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID by signing up or logging in to Docker Hub.
For more examples and ideas, visit the Docker Docs getting started page.
Step 7: Install Universal Control Plane v3.2.0
$ sudo docker container run --rm -it --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp:3.2.0 install \
    --host-address 10.94.214.115 \
    --interactive
Step 8: Accessing the UCP
Now you should be able to access the Docker Universal Control Plane at https://<node-ip>.

Click "Sign In" and upload your license file to access the Docker Enterprise UCP 3.2.0 web UI:

Step 9: Adding Worker Nodes to the Cluster
Let's add worker nodes to the cluster. Click "Shared Resources" in the left pane, then click "Nodes". Select "Add Nodes" and choose an orchestrator. You can add either Linux or Windows nodes to the cluster here:

I assume you have a worker node running Ubuntu 18.04 with the latest Docker binaries (either the free Docker Engine or Docker Enterprise).
@ubuntu1804-1:~$ sudo curl -sSL https://get.docker.com/ | sh

$ sudo usermod -aG docker cs
$ sudo docker swarm join --token SWMTKN-1-3n4mwkzhXXXXXXt2hip0wonqagmjtos-bch9ezkt5kiroz6jncidrz13x <managernodeip>:2377
This node joined a swarm as a worker.
By now, you should be able to see both the manager node and one worker node under UCP.

If you see a warning on the UCP dashboard stating that the manager and worker node share the same hostname, change the hostname on the worker node; the change is picked up automatically on the UCP dashboard.
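On the worker node, a unique name can be set with hostnamectl. The name "worker-1" below is just an example:

```shell
# Give the worker a unique hostname so UCP can distinguish the nodes
sudo hostnamectl set-hostname worker-1
# Keep the local hosts file in sync so sudo and other tools still
# resolve the machine's own name
echo "127.0.1.1 worker-1" | sudo tee -a /etc/hosts
```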
Step 10: Install the Docker Client Bundle
Click Dashboard and scroll down to the Docker CLI option. This option lets you download a client bundle for creating and managing services using the Docker CLI client. Clicking it opens a new window:

Click "user profile page", which redirects you to the https://<manager-ip>/manage/profile/clientbundle page:

Click "Generate Client Bundle" to download ucp-bundle-<username>.zip:
$ unzip ucp-bundle-ajeetraina.zip
Archive:  ucp-bundle-ajeetraina.zip
 extracting: ca.pem
 extracting: cert.pem
 extracting: key.pem
 extracting: cert.pub
 extracting: kube.yml
 extracting: env.sh
 extracting: env.ps1
 extracting: env.cmd
 extracting: meta.json
 extracting: tls/docker/key.pem
 extracting: tls/kubernetes/ca.pem
 extracting: tls/kubernetes/cert.pem
 extracting: tls/kubernetes/key.pem
 extracting: tls/docker/ca.pem
 extracting: tls/docker/cert.pem
@ubuntu1804-1:~$ eval "$(<env.sh)"
The env script sets the DOCKER_HOST and DOCKER_CERT_PATH environment variables so the Docker CLI client talks to UCP using the client certificates you downloaded. From now on, every Docker CLI call includes your user-specific client certificates as part of the request to UCP.
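In essence, env.sh does something like the following. The endpoint below is a placeholder, not a real UCP address:

```shell
# Roughly what the client bundle's env.sh sets:
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$PWD"                 # folder containing ca.pem, cert.pem, key.pem
export DOCKER_HOST=tcp://ucp.example.com:443   # your UCP manager endpoint
echo "Docker CLI now targets: $DOCKER_HOST"
```

After this, a command such as `docker node ls` is executed against UCP rather than against the local engine.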
Step 11: Install Kubectl on Docker Enterprise 3.0
First, check which Kubernetes version your UCP cluster is running, then install the matching kubectl client for your operating system. Here, we need kubectl 1.14.3:

Step 12: Set the Kubectl version
@ubuntu1804-1:~$ k8sversion=v1.14.3
@ubuntu1804-1:~$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$k8sversion/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 41.1M  100 41.1M    0     0  7494k      0  0:00:05  0:00:05 --:--:-- 9070k
@ubuntu1804-1:~$ chmod +x ./kubectl
@ubuntu1804-1:~$ sudo mv ./kubectl /usr/local/bin/kubectl
@ubuntu1804-1:~$
Step 13: Verify the Kubectl Installation
~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.3-docker-2", GitCommit:"7cfcb52617bf94c36953159ee9a2bf14c7fcc7ba", GitTreeState:"clean", BuildDate:"2019-06-06T16:18:13Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Step 14: List out the Kubernetes Nodes
cse@ubuntu1804-1:~$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
node2          Ready    <none>   23h   v1.14.3-docker-2
ubuntu1804-1   Ready    master   23h   v1.14.3-docker-2
Step 15: Enabling Helm and Tiller with UCP
$ kubectl create rolebinding default-view --clusterrole=view --serviceaccount=kube-system:default --namespace=kube-system
rolebinding.rbac.authorization.k8s.io/default-view created

$ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
cse@ubuntu1804-1:~$
Step 16: Install Helm
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7001  100  7001    0     0   6341      0  0:00:01  0:00:01 --:--:--  6347
$ chmod u+x install-helm.sh

$ ./install-helm.sh
Downloading https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.

cse@ubuntu1804-1:~$ helm init
Creating /home/cse/.helm
Creating /home/cse/.helm/repository
Creating /home/cse/.helm/repository/cache
Creating /home/cse/.helm/repository/local
Creating /home/cse/.helm/plugins
Creating /home/cse/.helm/starters
Creating /home/cse/.helm/cache/archive
Creating /home/cse/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/cse/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
cse@ubuntu1804-1:~$
Step 17: Verify the Helm Installation
$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Step 18: Deploying MySQL using Helm on Docker Enterprise 3.0
Let's try deploying MySQL using a Helm chart.
$ helm install --name mysql stable/mysql
NAME:   mysql
LAST DEPLOYED: Wed Aug  7 11:43:01 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME        DATA  AGE
mysql-test  1     0s

==> v1/PersistentVolumeClaim
NAME   STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mysql  Pending  0s

==> v1/Secret
NAME   TYPE    DATA  AGE
mysql  Opaque  2     0s

==> v1/Service
NAME   TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)   AGE
mysql  ClusterIP  10.96.77.83  <none>       3306/TCP  0s

==> v1beta1/Deployment
NAME   READY  UP-TO-DATE  AVAILABLE  AGE
mysql  0/1    0           0          0s

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

cse@ubuntu1804-1:~$
Step 19: Listing out the Releases
The helm list command lists your releases. By default, it shows only releases that are deployed or failed; flags like '--deleted' and '--all' alter this behavior, and such flags can be combined (for example, '--deleted --failed'). Items are sorted alphabetically by default; use the '-d' flag to sort by release date.
$ helm list
NAME    REVISION      UPDATED                      STATUS         CHART          APP VERSION     NAMESPACE
mysql   1             Wed Aug  7 11:43:01 2019     DEPLOYED       mysql-1.3.0    5.7.14          default

$ kubectl get po,deploy,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/mysql-6f6bff58d8-t2kwm   1/1     Running   0          5m35s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/mysql   1/1     1            0           5m35s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP    28h
service/mysql        ClusterIP   10.96.77.83   <none>        3306/TCP   5m35s
cse@ubuntu1804-1:~$
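The listing flags described above can be used like this (Helm v2 syntax, against the same cluster as above):

```shell
helm list --all                # include deleted and failed releases as well
helm list --deleted --failed   # combine status filters
helm list -d                   # sort by release date instead of by name
```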
With DKS, Helm works seamlessly with UCP under Docker Enterprise 3.0.
Kubernetes, On-Premises
Now you have Kubernetes running on-premises. You can do a lot from here, and I'll cover some possibilities in the rest of this series. In Part 2, I'll take a deep dive into support for Kubernetes on Windows Server with Docker Enterprise 3.0.
You may also want to experiment with designing your first application in Kubernetes. Bill Mills from the Docker training team recently wrote a great blog series covering just that. I highly recommend checking it out, starting with part 1 here.


Have a look at these resources if you’re looking to learn more about Docker Enterprise 3.0 and Kubernetes:

Try out Play with Kubernetes.
Try the Docker Kubernetes Service.
Learn about Kubernetes Lifecycle Management with DKS.

The post Building Your First Certified Kubernetes Cluster On-Premises, Part 1 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Building Your First Certified Kubernetes Cluster On-Premises, Part 1

This is the first in a series of guest blog posts by Docker Captain Ajeet Raina diving in to how to run Kubernetes on Docker Enterprise. You can follow Ajeet on Twitter @ajeetsraina and read his blog at http://www.collabnix.com. 

There are now a number of options for running certified Kubernetes in the cloud. But let’s say you’re looking to adopt and operationalize Kubernetes for production workloads on-premises. What then? For an on-premises certified Kubernetes distribution, you need an enterprise container platform that allows you to leverage your existing team and processes. 
Enter Docker Kubernetes Service
At DockerCon 2019, Docker announced the Docker Kubernetes Service (DKS). It is a certified Kubernetes distribution that is included with Docker Enterprise 3.0 and is designed to solve this fundamental challenge.
In this blog series, I’ll explain Kubernetes support and capabilities under Docker Enterprise 3.0, covering these topics:

Deploying certified Kubernetes Cluster using Docker Enterprise 3.0 running on a Bare Metal System
Support of Kubernetes on Windows Server 2019 with Docker Enterprise 3.0
Implementing Persistent storage for Kubernetes workload using iSCSI
Implementing Cluster Ingress for Kubernetes
Deploying Istio Service Mesh under Docker Enterprise 3.0

So About DKS…
DKS is the only offering that integrates Kubernetes from the developer desktop to production servers, with ‘sensible secure defaults’ out-of-the-box. Simply put, DKS makes Kubernetes easy to use and more secure for the entire organization. Here are three things that DKS does to simplify and accelerate Kubernetes adoption for the enterprise:

Consistent, seamless Kubernetes experience for developers and operators. With the use of Version Packs, developers’ Kubernetes environments running in Docker Desktop Enterprise stay in sync with production environments for a complete, seamless Kubernetes experience. 
Streamlined Kubernetes lifecycle management (Day 1 and Day 2 operations). A new Cluster Management CLI Plugin allows operations teams to easily deploy, scale, backup and restore and upgrade a certified Kubernetes environment using a set of simple CLI commands.
Enhanced security with ‘sensible defaults.’ Teams get out-of-the-box configurations for security, encryption, access control, and lifecycle management — all without having to become Kubernetes experts.

DKS is compatible with Kubernetes YAML, Helm charts, and the Docker Compose tool for creating multi-container applications. It also provides an automated way to install and configure Kubernetes applications across hybrid and multi-cloud deployments. Capabilities include security, access control, and lifecycle management. Additionally, it uses Docker Swarm Mode to orchestrate Docker containers.
Kubernetes 1.14+ in Docker Enterprise
Docker Enterprise 3.0 comes with the following components:

Containerd 1.2.6
Docker Engine 19.03.1
Runc 1.0.0-rc8
docker-init 0.18.0
Universal Control Plane 3.2.0
Docker Trusted Registry 2.7
Kubernetes 1.14+
Calico v3.5.7

Docker UCP manager and worker nodes.
In this first post of the series, I will show you how to deploy a Certified Kubernetes cluster using Docker Enterprise 3.0 on bare metal (meaning you can deploy on-premises).
Pre-Requisites:

Ubuntu 18.04 (at least 2 Node to setup Multi-Node Cluster)
Minimal 4GB RAM is required for UCP 3.2.0
Go to https://hub.docker.com/my-content.
Click the Setup button for Docker Enterprise Edition for Ubuntu.
Copy the URL from the field labeled Copy and paste this URL to download your Edition.

Now you’re ready to start installing Docker Enterprise and Kubernetes. Let’s get started.
Step 1: Install packages to allow apt to use a repository over HTTPS
$sudo apt-get install
apt-transport-https
ca-certificates
curl
software-properties-common
Step 2: Add the $DOCKER_EE_URL variable into your environment
Replace with the URL you noted down in the prerequisites. Replace sub-xxx too.
$curl -fsSL https://storebits.docker.com/ee/m/sub-XXX-44fb-XXX-b6bf-XXXXXX/ubuntu/gpg |
sudo apt-key add –
Step 3: Add the stable Repository
$sudo add-apt-repository
“deb [arch=amd64] https://storebits.docker.com/ee/m/sub-XXX-44fb-XXX-b6bf-XXXXXX/ubuntu
$(lsb_release -cs)
stable-19.03″
Step 4: Install Docker Enterprise
$sudo apt-get install docker-ee docker-ee-cli containerd.io
Step 5: Verifying Docker Enterprise Version
$ sudo docker version
Client: Docker Engine – Enterprise
Version: 19.03.1
API version: 1.40
Go version: go1.12.5
Git commit: f660560
Built: Thu Jul 25 20:59:23 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine – Enterprise
Engine:
Version: 19.03.1
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: f660560
Built: Thu Jul 25 20:57:45 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683
cse@ubuntu1804-1:~$
Step 6: Test the Hello World Example
$ sudo docker run hello-world
Unable to find image ‘hello-world:latest’ locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:6540fc08ee6e6b7b63468dc3317e3303aae178cb8a45ed3123180328bcc1d20f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!

This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the “hello-world” image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the executable
that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your
terminal.
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID by signing up or logging in to Docker Hub.
For more examples and ideas, visit the Docker Docs getting started page.
Step 7: Install Universal Control Plane v3.2.0
$ sudo docker container run –rm -it –name ucp
> -v /var/run/docker.sock:/var/run/docker.sock
> docker/ucp:3.2.0 install
> –host-address 10.94.214.115
> –interactive
Step 8: Accessing the UCP
Now you should be able to access Docker Universal Control Plane via https://<node-ip>

Click on “Sign In” and upload the license file to access Docker Enterprise UCP 3.2.0 WebUI as shown below:

Step 9: Adding Worker Nodes to the Cluster
Let’s add worker nodes to the cluster. Click on “Shared Resources” on the left pane and Click on “Nodes”. Select “Add Nodes” and choose an orchestrator. You can also add either Linux or Windows nodes to the cluster here as shown below:

I assume that you have a worker node installed with Ubuntu 18.04 and the latest Docker binaries (it can be either the free version of Docker Engine or Docker Enterprise).
@ubuntu1804-1:~$ sudo curl -sSL https://get.docker.com/ | sh

$ sudo usermod -aG docker cs
$ sudo docker swarm join –token SWMTKN-1-3n4mwkzhXXXXXXt2hip0wonqagmjtos-bch9ezkt5kiroz6jncid
rz13x <managernodeip>:2377
This node joined a swarm as a worker.
By now, you should be able to see both manager node and 1 worker node added under UCP.

If you see a warning on the UCP dashboard stating that you have a similar hostname on both the manager and worker node, change the hostname on the worker node and it will automatically get updated on UCP dashboard.
Step 10: Install the Docker Client Bundle
Click on Dashboard and scroll down to see the Docker CLI option. This option allows you to download a client bundle to create and manage services using the Docker CLI client. Once you click, you will be able to find a new window as shown below:

Click on “user profile page” and it should redirect you to https://<manager-ip-node/manage/profile/clientbundle page as seen in the below screenshot:

Click on “Generate Client Bundle” and it will download ucp-bundle-<username>.zip
$ unzip ucp-bundle-ajeetraina.zip
Archive:  ucp-bundle-ajeetraina.zip
 extracting: ca.pem
 extracting: cert.pem
 extracting: key.pem
 extracting: cert.pub
 extracting: kube.yml
 extracting: env.sh
 extracting: env.ps1
 extracting: env.cmd
 extracting: meta.json
 extracting: tls/docker/key.pem
 extracting: tls/kubernetes/ca.pem
 extracting: tls/kubernetes/cert.pem
 extracting: tls/kubernetes/key.pem
 extracting: tls/docker/ca.pem
 extracting: tls/docker/cert.pem
@ubuntu1804-1:~$ eval “$(<env.sh)”
The env script updates the DOCKER_HOST and DOCKER_CERT_PATH environment variables to make the Docker CLI client interact with UCP and use the client certificates you downloaded. From now on, when you use the Docker CLI client, it includes your user specific client certificates as part of the request to UCP.
Step 11: Install Kubectl on Docker Enterprise 3.0
Once you have the Kubernetes version, install the kubectl client for the relevant operating system. As shown below, we need to install Kubectl version 1.14.3:

Step 12: Set the Kubectl version
@ubuntu1804-1:~$ k8sversion=v1.14.3
@ubuntu1804-1:~$ curl -LO https://storage.googleapis.com/kubernetes-release/release/
$k8sversion/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 41.1M  100 41.1M    0     0  7494k      0  0:00:05  0:00:05 –:–:– 9070k
@ubuntu1804-1:~$ chmod +x ./kubectl
@ubuntu1804-1:~$ sudo mv ./kubectl /usr/local/bin/kubectl
@ubuntu1804-1:~$
Step 13: Verify the Kubectl Installation
~$ kubectl version
Client Version: version.Info{Major:”1″, Minor:”14″, GitVersion:”v1.14.3″, GitCommit:
“5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0″, GitTreeState:”clean”, BuildDate:
“2019-06-06T01:44:30Z”, GoVersion:”go1.12.5″, Compiler:”gc”, Platform:”linux/amd64″}
Server Version: version.Info{Major:”1″, Minor:”14+”, GitVersion:”v1.14.3-docker-2″,
GitCommit:”7cfcb52617bf94c36953159ee9a2bf14c7fcc7ba”, GitTreeState:”clean”,
BuildDate:”2019-06-06T16:18:13Z”, GoVersion:”go1.12.5″, Compiler:”gc”, Platform:”linux/amd64″
Step 14: List out the Kubernetes Nodes
cse@ubuntu1804-1:~$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
node2          Ready    <none>   23h   v1.14.3-docker-2
ubuntu1804-1   Ready    master   23h   v1.14.3-docker-2
Step 15: Enabling Helm and Tiller with UCP
$ kubectl create rolebinding default-view –clusterrole=view –serviceaccount=kube-system
:default –namespace=kube-system
rolebinding.rbac.authorization.k8s.io/default-view created

$ kubectl create clusterrolebinding add-on-cluster-admin –clusterrole=cluster-admin
–serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
cse@ubuntu1804-1:~$
Step 16: Install Helm
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7001  100  7001    0     0   6341      0  0:00:01  0:00:01 –:–:–  6347
$ chmod u+x install-helm.sh

$ ./install-helm.sh
Downloading https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run ‘helm init’ to configure helm.

cse@ubuntu1804-1:~$ helm init
Creating /home/cse/.helm
Creating /home/cse/.helm/repository
Creating /home/cse/.helm/repository/cache
Creating /home/cse/.helm/repository/local
Creating /home/cse/.helm/plugins
Creating /home/cse/.helm/starters
Creating /home/cse/.helm/cache/archive
Creating /home/cse/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/cse/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure ‘allow unauthenticated users’
policy.
To prevent this, run `helm init` with the –tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/
#securing-your-helm-installation
cse@ubuntu1804-1:~$
Step 17: Verify the Helm Installation
$ helm version
Client: &version.Version{SemVer:”v2.14.3″, GitCommit:”0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085″
, GitTreeState:”clean”}
Server: &version.Version{SemVer:”v2.14.3″, GitCommit:”0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085″
, GitTreeState:”clean”}
Step 18: Deploying MySQL using Helm on Docker Enterprise 3.0
Let’s try out deploying MySQL using HelmPack.
$ helm install –name mysql stable/mysql
NAME:   mysql
LAST DEPLOYED: Wed Aug  7 11:43:01 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME        DATA  AGE
mysql-test  1     0s

==> v1/PersistentVolumeClaim
NAME   STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mysql  Pending  0s

==> v1/Secret
NAME   TYPE    DATA  AGE
mysql  Opaque  2     0s

==> v1/Service
NAME   TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)   AGE
mysql  ClusterIP  10.96.77.83  <none>       3306/TCP  0s

==> v1beta1/Deployment
NAME   READY  UP-TO-DATE  AVAILABLE  AGE
mysql  0/1    0           0          0s

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret –namespace default mysql -o jsonpath=
“{.data.mysql-root-password}” | base64 –decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i –tty ubuntu –image=ubuntu:16.04 –restart=Never — bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

cse@ubuntu1804-1:~$
Step 19: Listing out the Releases
The helm list command lists all of the releases. By default, it lists only releases that are deployed or failed. Flags like ‘–deleted’ and ‘–all’ will alter this behavior. Such flags can be combined: ‘–deleted –failed’. By default, items are sorted alphabetically. Use the ‘-d’ flag to sort by release date.
$ helm list
NAME    REVISION      UPDATED                      STATUS         CHART          APP VERSION     NAMESPACE
mysql   1             Wed Aug  7 11:43:01 2019     DEPLOYED       mysql-1.3.0    5.7.14          default

$ kubectl get po,deploy,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/mysql-6f6bff58d8-t2kwm   1/1     Running   0          5m35s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/mysql   1/1     1            0           5m35s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP    28h
service/mysql        ClusterIP   10.96.77.83   <none>        3306/TCP   5m35s
cse@ubuntu1804-1:~$
With DKS, Helm works seamlessly with UCP under Docker Enterprise 3.0.
Kubernetes, On-Premises
Now you have Kubernetes running on-premises. You can do a lot from here, and I’ll cover some possibilities in the rest of this series.
You may also want to experiment with designing your first application in Kubernetes. Bill Mills from the Docker training team wrote a great blog series recently covering just that. I highly recommend checking it out starting with part 1 here.
In Part 2 of this blog series, I will take a deep dive into support of Kubernetes on Windows Server with Docker Enterprise 3.0.


Have a look at these resources if you’re looking to learn more about Docker Enterprise 3.0 and Kubernetes:

Try out Play with Kubernetes.
Try the Docker Kubernetes Service.
Learn about Kubernetes Lifecycle Management with DKS.

The post Building Your First Certified Kubernetes Cluster On-Premises, Part 1 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Designing Your First Application in Kubernetes, Part 5: Provisioning Storage

In this blog series on Kubernetes, we’ve already covered:

The basic setup for building applications in Kubernetes
How to set up processes using pods and controllers
Configuring Kubernetes networking services to allow pods to communicate reliably
How to identify and manage the environment-specific configurations to make applications portable between environments

In this series’ final installment, I’ll explain how to provision storage to a Kubernetes application. 

Step 4: Provisioning Storage
The final component we want to think about when we build applications for Kubernetes is storage. Remember, a container’s filesystem is transient, and any data kept there is at risk of being deleted along with your container if that container ever exits or is rescheduled. If we want to guarantee that data lives beyond the short lifecycle of a container, we must write it out to external storage.
Any container that generates or collects valuable data should be pushing that data out to stable external storage. In our web app example, the database tier should be pushing its on-disk contents out to external storage so they can survive a catastrophic failure of our database pods.
Similarly, any container that requires the provisioning of a lot of data should be getting that data from an external storage location. We can even leverage external storage to push stateful information out of our containers, making them stateless and therefore easier to schedule and route to.
Decision #5: What data does your application gather or use that should live longer than the lifecycle of a pod?
The full Kubernetes storage model has a number of moving parts:
The Kubernetes storage model.

Container Storage Interface (CSI) Plugins can be thought of as the driver for your external storage.
StorageClass objects take a CSI driver and add some metadata that typically configures how storage on that backend will be treated.
PersistentVolume (PV) objects represent an actual bucket of storage, as parameterized by a StorageClass.
PersistentVolumeClaim (PVC) objects allow a pod to ask for a PersistentVolume to be provisioned to it.
Finally, we met Volumes earlier in this series. In the case of storage, we can populate a volume with the contents of the external storage captured by a PV and requested by a PVC, provision that volume to a pod and finally mount its contents into a container in that pod.
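The end of that chain can be sketched in YAML. Below is a minimal, illustrative example (all names, sizes and image tags are assumptions, not part of the original post): a PersistentVolumeClaim asks for storage, and a pod mounts the resulting volume into a container.

```yaml
# A PVC requesting 1Gi of storage from whatever default StorageClass
# the cluster provides.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# A pod that mounts the claimed storage into its database container.
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      volumeMounts:
        - name: data                # the volume declared below
          mountPath: /var/lib/mysql # survives container restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mysql-data       # binds to the PVC above
```

Note that the pod only names a claim; the PV, StorageClass and CSI driver behind it can be swapped per environment without touching this definition.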

Managing all these components can be cumbersome during development, but as in our discussion of configuration, Kubernetes volumes provide a convenient abstraction by defining how and where to mount external storage into your containers. They form the start of what I like to think of as the “storage frontend” in Kubernetes—these are the components most closely integrated with your pods and which won’t change from environment to environment.
All those other components, from the CSI driver all the way through the PVC, which I like to think of as the “storage backend”, can be torn out and replaced as you move between environments without affecting your code, containers, or the controller definitions that deploy them.
Note that on a single-node cluster (like the one created for you by Docker Desktop on your development machine), you can create hostPath-backed persistentVolumes, which will provision persistent storage from your local disk without setting up any CSI plugins or special storage classes. This is an easy way to get started developing your application without getting bogged down in the diagram above, effectively deferring the decision and setup of CSI plugins and storageClasses until you're ready to move off of your dev machine and into a larger cluster.
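A hostPath-backed PV of the kind just described can be declared directly, with no StorageClass or CSI plugin at all. A minimal sketch (the name, size and path are illustrative):

```yaml
# Development-only: provisions storage from the node's local disk.
# Not suitable for multi-node clusters, since the path exists on
# exactly one node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/dev-pv   # a directory on the local machine
```

A PVC requesting 1Gi or less with a matching access mode can then bind to this volume.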
Advanced Topics
The simple hostpath PVs mentioned above are appropriate for early development and proof-of-principle work, but they will need to be replaced with more powerful storage solutions before you get to production. This will require you to look into the ‘backend’ components of Kubernetes’ storage solution, namely StorageClasses and CSI plugins:

 StorageClasses
 Container Storage Interface plugins

The Future
In this series, I’ve walked you through the basic Kubernetes tooling you’ll need to containerize a wide variety of applications, and provided you with next-step pointers on where to look for more advanced information. Try working through the stages of containerizing workloads, networking them together, modularizing their config, and provisioning them with storage to get fluent with the ideas above.
Kubernetes provides powerful solutions for all four of these areas, and a well-built app will leverage all four of them. If you’d like more guidance and technical details on how to operationalize these ideas, you can explore the Docker Training team’s workshop offerings, and check back for new Training content landing regularly.
After mastering the basics of building a Kubernetes application, ask yourself, “How well does this application fit the values of portability, scalability and shareability we started with?” Containers themselves are engineered to easily move between clusters and users, but what about the entire application you just built? How can we move that around while still preserving its integrity and not invalidating any unit and integration testing you’ll perform on it?
Docker App sets out to solve that problem by packaging applications in an integrated bundle that can be moved around as easily as a single image. Stay tuned to this blog and Docker Training for more guidance on how to use this emerging format to share your Kubernetes applications seamlessly.
To learn more about Kubernetes storage and Kubernetes in general:

Read the Kubernetes documentation on PersistentVolumes and PersistentVolumeClaims.
Find out more about running Kubernetes on Docker Enterprise and Docker Desktop.
Check out Play with Kubernetes, powered by Docker.

We will also be offering training on Kubernetes starting in early 2020. In the training, we'll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here:
Get Notified About Training


The post Designing Your First Application in Kubernetes, Part 5: Provisioning Storage appeared first on Docker Blog.

At the Grace Hopper Celebration, Learn Why Developers Love Docker

Lisa Dethmers-Pope and Amn Rahman at Docker also contributed to this blog post.
Docker hosted a Women’s Summit at DockerCon 2019.
As a Technical Recruiter at Docker, I am excited to be a part of Grace Hopper Celebration. It is a marvelous opportunity to speak with many talented women in tech and to continue pursuing one of Docker’s most valued ambitions: further diversifying our team. The Docker team will be on the show floor at the Grace Hopper Celebration, the world’s largest gathering of women technologists the week of October 1st in Orlando, Florida.
Our Vice President of Human Resources and our Senior Director of Product Management, along with representatives from our Talent Acquisition and Engineering teams, will be there to connect with attendees. We will be showing how to easily build, run, and share an application using the Docker platform, and talking about what it's like to work in tech today. 
Supporting Women in Tech
While we’ve made strides in diversity within tech, the 2019 Stack Overflow Developer Survey shows we have work to do. According to the survey, only 7.5 percent of professional developers are women worldwide (it’s 11 percent of all developers in the U.S.).
That’s why Docker hosts Women in Tech events at our own conferences, and we’re pleased to participate in the  Grace Hopper Celebration this year. It’s a place for women technologists to learn, network, and connect with a like-minded community. The conference offers attendees several opportunities to advance their professional development, find and provide mentorship, and further develop their leadership skills.
Last year’s celebration hosted over 20,000 attendees from 78 countries as well as thousands of listeners over livestream. We are thrilled to be involved with the conference and show our support for an organization making such a powerful impact.
Creating and Fostering Connections
Two million developers already use Docker regularly today. We have over 240 regional user groups, and a presence in 80 countries. Diversity and inclusion are a key part of our community, and we'll continue building on that as we grow.
We are seeking forward-thinking individuals to join our team who have diverse experiences and are passionate about bringing technology that transforms lives, industries, and the world to life.
Whether you’re a curious explorer, a Docker newbie, or a super-powered Docker ninja, you should come join us at the Docker booth to learn more about how you can get the most benefit out of the platform!
If you’re a data scientist, a developer, or just bouncing from one coding assignment to the next, come and learn how you can start using Docker almost immediately! Apart from being introduced to cool Docker lingo, you’ll learn how to quickly launch a Docker environment, spin up an app on your machine, and share it with the rest of the world via Docker Hub.
We look forward to collaborating and connecting with you. Come visit us at the technology showcase booth 359, 3648!
In the meantime, if you’d like to dive into diversity in tech, these three DockerCon sessions are a great starting point:

A Transformation of Attitude: Why Mentors Matter
Diversity is not Only about Ethnicity and Gender
How Intentional Diversity Creates Thought Leadership


The post At the Grace Hopper Celebration, Learn Why Developers Love Docker appeared first on Docker Blog.

Designing Your First Application in Kubernetes, Part 4: Configuration

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In part 3, I explained how to configure networking services in Kubernetes to allow pods to communicate reliably with each other. In this installment, I’ll explain how to identify and manage the environment-specific configurations expected by your application to ensure its portability between environments.

Factoring out Configuration
One of the core design principles of any containerized app must be portability. We absolutely do not want to reengineer our containers or even the controllers that manage them for every environment. One very common reason why an application may work in one place but not another is problems with the environment-specific configuration expected by that app.
A well-designed application should treat configuration like an independent object, separate from the containers themselves, that’s provisioned to them at runtime. That way, when you move your app from one environment to another, you don’t need to rewrite any of your containers or controllers; you simply provide a configuration object appropriate to this new environment, leaving everything else untouched.
When we design applications, we need to identify what configurations we want to make pluggable in this way. Typically, these will be environment variables or config files that change from environment to environment, such as access tokens for different services used in staging versus production or different port configurations.
Decision #4: What application configurations will need to change from environment to environment?
From our web app example, a typical set of configs would include the access credentials for our database and API (of course, you’d never use the same ones for development and production environments), or a proxy config file if we chose to include a containerized proxy in front of our web frontend.
Once we’ve identified the configs in our application that should be pluggable, we can enable the behavior we want by using Kubernetes’ system of volumes and configMaps.
In Kubernetes, a volume can be thought of as a filesystem fragment. Volumes are provisioned to a pod and owned by that pod. The file contents of a volume can be mounted into any filesystem path we like in the pod’s containers.
I like to think of the volume declaration as the interface between the environment-specific config object and the portable, universal application definition. Your volume declaration will contain the instructions to map a set of external configs onto the appropriate places in your containers.
ConfigMaps contain the actual contents you're going to use to populate a pod's volumes or environment variables. They contain key-value pairs describing either files and file contents, or environment variables and their values. ConfigMaps typically differ from environment to environment. For example, you will probably have one configMap for your development environment and another for production, with the correct variables and config files for each environment. 

The configMap and Volume interact to provide configuration for containers.

Checkpoint #4: Create a configMap appropriate to each environment. 
Your development environment’s configMap objects should capture the environment-specific configuration you identified above, with values appropriate for your development environment. Be sure to include a volume in your pod definitions that uses that configMap to populate the appropriate config files in your containers as necessary. Once you have the above set up for your development environment, it’s simple to create a new configMap object for each downstream environment and swap it in, leaving the rest of your application unchanged.
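As an illustration of that checkpoint, here is a minimal sketch of a per-environment configMap and a pod that mounts it (all names, keys and the image are hypothetical, not taken from the example app):

```yaml
# One configMap per environment; only this object changes when the
# application moves from dev to production.
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  app.properties: |
    db.host=mysql.default.svc.cluster.local
    log.level=debug
---
# The pod definition stays identical across environments; the volume
# declaration maps whatever configMap is present onto the filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
    - name: web
      image: nginx:1.17
      volumeMounts:
        - name: config
          mountPath: /etc/webapp   # app.properties appears here as a file
  volumes:
    - name: config
      configMap:
        name: webapp-config
```

Swapping environments then means applying a different configMap with the same name, leaving the pod and controller definitions untouched.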
Advanced Topics
Basic configMaps are a powerful tool for modularizing configuration, but some situations require a slightly different approach.

Secrets in Kubernetes are like configMaps in that they package up a bunch of files or key/value pairs to be provisioned to a pod. However, secrets offer added security guarantees around encryption and data management. They are the more appropriate choice for any sensitive information, like passwords, access tokens or other key-like objects.

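For comparison with a configMap, a secret declaration might look like this (the name, key and value are illustrative placeholders only):

```yaml
# Secrets look structurally similar to configMaps; stringData lets you
# write values as plain text, and Kubernetes stores them base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  mysql-root-password: example-password   # placeholder, never commit real credentials
```

A secret is consumed exactly like a configMap, via a volume mount or environment variables in the pod spec.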
To learn more about configuring Kubernetes and related topics: 

Check out Play with Kubernetes, powered by Docker.
Read the Kubernetes documentation on Volumes. 
Read the Kubernetes documentation on ConfigMaps.

We will also be offering training on Kubernetes starting in early 2020. In the training, we'll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here:
Get Notified About Training


The post Designing Your First Application in Kubernetes, Part 4: Configuration appeared first on Docker Blog.

Designing Your First Application in Kubernetes, Part 3: Communicating via Services

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In this post, I’ll explain how to configure networking services in Kubernetes to allow pods to communicate reliably with each other.
Setting up Communication via Services 
At this point, we’ve deployed our workloads as pods managed by controllers, but there’s no reliable, practical way for pods to communicate with each other, nor is there any way for us to visit any network-facing pod from outside the cluster. The Kubernetes networking model says that any pod can reach any other pod at the target pod’s IP by default, but discovering those IPs by hand and maintaining that list while pods are potentially being rescheduled (and receiving entirely new IPs) would be a lot of tedious, fragile work.
Instead, we need to think about Kubernetes services when we’re ready to start building the networking part of our application. Kubernetes services provide reliable, simple networking endpoints for routing traffic to pods via the fixed metadata defined in the controller that created them, rather than via unreliable pod IPs. For simple applications, two services cover most use cases: clusterIP and nodePort services. This brings us to another decision point:
Decision #3: What kind of services should route to each controller? 
For simple use cases, you’ll choose either clusterIP or nodePort services. The simplest way to decide between them is to determine whether the target pods are meant to be reachable from outside the cluster or not. In our example application, our web frontend should be reachable externally so users can access our web app.
In this case, we’d create a nodePort service, which would route traffic sent to a particular port on any host in your Kubernetes cluster onto our frontend pods (Swarm fans: this is functionally identical to the L4 mesh net).
A Kubernetes nodePort service allows external traffic to be routed to the pods.
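A nodePort service for the frontend could be sketched like this (the labels, ports and nodePort value are assumptions for illustration):

```yaml
# Exposes the frontend pods on port 30080 of every node in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: frontend        # must match the labels on the frontend pods
  ports:
    - port: 80           # the service's port inside the cluster
      targetPort: 8080   # the container port on the pods
      nodePort: 30080    # must fall in the 30000-32767 range by default
```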
For our private API + database pods, we may only want them to be reachable from inside our cluster for security and traffic control purposes. In this case, a clusterIP service is most appropriate. The clusterIP service will provide an IP and port which only other containers in the cluster may send traffic to, and have it forwarded onto the backend pods.
A Kubernetes clusterIP service only accepts traffic from within the cluster.
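The corresponding clusterIP service for the API is even simpler (again, names and ports are illustrative):

```yaml
# Reachable only from inside the cluster; other pods can use the DNS
# name 'api' to reach the API pods.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP        # the default service type
  selector:
    app: api             # must match the labels on the API pods
  ports:
    - port: 8080
      targetPort: 8080
```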
Checkpoint #3: Write some yaml and verify routing
Write some Kubernetes yaml to describe the services you choose for your application and make sure traffic gets routed as you expect.
Advanced Topics
The simple routing and service discovery above will get pods talking to other pods and allow some simple ingress traffic, but there are many more advanced patterns you’ll want to learn for future applications:

Headless Services can be used to discover and route to specific pods; you’ll use them for stateful pods declared by a statefulSet controller.
Kube Ingress and IngressController objects provide managed proxies for doing routing at layer 7 and implementing patterns like sticky sessions and path-based routing.
ReadinessProbes work exactly like the healthchecks mentioned above, but instead of managing the health of containers and pods, they monitor and respond to their readiness to accept network traffic.
NetworkPolicies allow for the segmentation of the normally flat and open Kubernetes network, allowing you to define what ingress and egress communication is allowed for a pod, preventing access from or to an unauthorized endpoint.

For additional information on these topics, have a look at the Kubernetes documentation:

Kubernetes Services
Kubernetes Cluster Networking

You can also check out Play with Kubernetes, powered by Docker.
We will also be offering training on Kubernetes starting in early 2020. In the training, we'll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here:
Get Notified About Training


The post Designing Your First Application in Kubernetes, Part 3: Communicating via Services appeared first on Docker Blog.

Designing Your First App in Kubernetes, Part 2: Setting up Processes

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series. In this post, I’ll explain how to use pods and controllers to create scalable processes for managing your applications.
Processes as Pods & Controllers in Kubernetes
The heart of any application is its running processes, and in Kubernetes we fundamentally create processes as pods. Pods are a bit fancier than individual containers, in that they can schedule whole groups of containers, co-located on a single host, which brings us to our first decision point:
Decision #1: How should our processes be arranged into pods?
The original idea behind a pod was to emulate a logical host – not unlike a VM. The containers in a pod will always be scheduled on the same Kubernetes node, and they’ll be able to communicate with each other via localhost, making pods a good representation of clusters of processes that need to work together closely. 
A pod can contain one or more containers, but containers in the pod must scale together.
But there’s an important consideration: it’s not possible to scale individual containers in a pod separately from each other. If you need to scale your application up, you have to add more pods, which come with copies of every container they include. Factors such as which application components will scale at similar rates, which ones will not, and which ones should reside on the same host will factor into how you arrange processes in pods.
Thinking about our web app, we might start by making a pod containing only the frontend container; we want to be able to scale this frontend independently from the rest of our application, so it should live in its own pod.
On the other hand, we might design another pod that has one container each for our database and API; this way, our API is guaranteed to be able to talk to our database on the same physical host, eliminating network latency between the API and database and maximizing performance. As noted, this comes at the expense of independent scaling; if we schedule our API and database containers in the same pod, every time we want a new instance of our database container, it’s going to come with a new instance of our API.
Case-specific arguments can be made for or against this choice: Is API-to-database latency really expected to be a major bottleneck? Could it be more important to scale your API and database separately? Final decisions may vary, but the same decision points can be applied generically to many applications.
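If we did choose to co-locate the API and database, the pod could be sketched like this (the image names are hypothetical placeholders for the example app):

```yaml
# Two containers in one pod: always scheduled on the same node,
# able to reach each other via localhost, and scaled as a unit.
apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
    - name: api
      image: my-api:1.0        # hypothetical image
      ports:
        - containerPort: 8080
    - name: db
      image: mysql:5.7         # reachable from the api container on localhost:3306
```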
Now that we have our pods planned out (one for the frontend and one for the API-plus-database combo), we need to decide how to manage these pods. We virtually never want to schedule pods directly (called ‘bare pods’); we want to take advantage of Kubernetes controllers, which will automatically reschedule failed pods, give us some simple influence on how and where our pods are scheduled, and give us some functionality on how to update and maintain those pods. There are at least two main types of controllers we need to decide between:
Decision #2: What kind of controller should we use for each pod: a deployment or a daemonset?

Deployments are the most common kind of controller, typically the best choice for stateless pods which can be scheduled anywhere resources are available.
DaemonSets are appropriate for pods meant to run one per host; these are typically used for daemon-like processes, like log aggregators, filesystem managers, system monitors or other utilities that make sense to have exactly one of on every host in your Kubernetes cluster.

Most, but not all, pods are best scheduled by one of these two controllers, and of the two, deployments make up the large majority. Since neither of our web app components makes sense as a cluster-wide daemon, we would schedule both of them as deployments. If later we wanted to deploy a logging or monitoring appliance, a daemonSet would be a common pattern to ensure it runs on every node in the cluster.
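For instance, the frontend deployment might look like this (replicas, labels and the image name are illustrative assumptions):

```yaml
# A deployment managing three identical frontend pods; failed pods are
# rescheduled automatically, and scaling means changing 'replicas'.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend        # the deployment manages pods with this label
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: web
          image: my-frontend:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```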
Now that we’ve decided how to arrange our containers into pods and how to manage our pods using controllers, it’s time to write some Kubernetes yaml to capture these objects; many examples of how to do this are available in the Kubernetes documentation and Docker’s Training material.
I strongly encourage you to define your applications using Kubernetes yaml definitions, and not imperative kubectl commands. As I mentioned in the first post, one of the most important aspects of orchestrating a containerized application is shareability, and it is much easier to share a yaml file you can check into version control and distribute, rather than a series of CLI commands that can quickly become hard to read and hard to keep track of.
Checkpoint #2: write Kubernetes yaml to describe your controllers and pods.
Once you have that yaml in hand, now’s a good time to create your deployments and make sure they all work as expected: individual containers in pods should run without crashing, and containers inside the same pod should be able to reach each other on localhost.
Advanced Topics
Once you’ve mastered the pods, deployments, and daemonSets mentioned above, there are a few deeper topics you can approach to enhance your Kube applications even further:

StatefulSets are another kind of controller appropriate for managing stateful pods; note these require an understanding of Kube services and storage (discussed below).
Scheduling affinity rules allow you to influence and control where your pods are scheduled in a cluster, useful for sophisticated operations in larger clusters.
Healthchecks in the form of livenessProbes are an important maintenance tool for your pods and containers, telling Kube how to automatically monitor the health of your containers and take action when they become unhealthy.
PodSecurityPolicy definitions allow an added layer of security for cluster administrators to control exactly who and how pods are scheduled, commonly used to prevent the creation of pods with elevated or root privileges.

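As an illustration of the livenessProbe mentioned above, a container definition inside a pod spec might carry a probe like this (the endpoint, port and timings are assumptions, not prescriptions):

```yaml
# Fragment of a pod spec: Kubernetes polls the endpoint and restarts
# the container after repeated failures.
containers:
  - name: web
    image: my-frontend:1.0       # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz           # a health endpoint the app must expose
        port: 8080
      initialDelaySeconds: 10    # give the app time to start first
      periodSeconds: 15          # check every 15 seconds thereafter
```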
To learn more about Kubernetes pods and controllers, read the documentation:

Kubernetes pods
Kubernetes controllers

You can also check out Play with Kubernetes, powered by Docker.
We will also be offering training on Kubernetes starting in early 2020. In the training, we'll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here:
Get Notified About Training


The post Designing Your First App in Kubernetes, Part 2: Setting up Processes appeared first on Docker Blog.

Designing Your First App in Kubernetes, Part 1: Getting Started

Image credit: Evan Lovely

Kubernetes: Always Powerful, Occasionally Unwieldy
Kubernetes’ gravity as the container orchestrator of choice continues to grow, and for good reason: It has the broadest capabilities of any container orchestrator available today. But all that power comes with a price; jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but how to actually fly the thing is not obvious. 
Kubernetes’ complexity is overwhelming for a lot of people jumping in for the first time. In this blog series, I’m going to walk you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need. I’m not, however, going to spend much time reviewing 12-factor design principles and microservice architecture; there are some excellent ideas in those sort of strategic discussions with which anyone designing an application should be familiar, but here on the Docker Training Team I like to keep the focus on concrete, hands-on-keyboard implementation as much as possible.
Furthermore, while my focus is on application architecture, I would strongly encourage devops engineers and developers building to Kubernetes to follow along, in addition to readers in application architecture roles. As container orchestration becomes mainstream, devops teams will need to anticipate the architectural patterns that application architects will need them to support, while developers need to be aware of the orchestrator features that directly affect application logic, especially around networking and configuration consumption. 
Just Enough Kube
When starting out with a machine as rich as Kubernetes, I like to identify the absolute minimum set of things we’ll need to understand in order to be successful; there’ll be time to learn about all the other bells and whistles another day, after we master the core ideas. No matter where your application runs, in Kubernetes or anywhere else, there are four concerns we are going to have to address:

 Processes: Your actual running code, compiled or interpreted, is the core of your application. We’re going to need a set of tools not only to schedule these processes, but to maintain and scale those processes over time. For this, we’re going to use pods and controllers.
 Networking: The processes that make up your application will likely need to talk to each other, external resources, and the outside world. We’re going to need tooling to allow us to do service discovery, load balancing and routing between all the components of our application. For this, we’re going to use Kubernetes services.
 Configuration: A well-written application factors out its configuration, rather than hard-coding it. This is a direct consequence of applying the paradigm of Don’t Repeat Yourself when coding; things that may change based on context, like access tokens, external resource locations, and environment variables should be defined in exactly one place which can be both read from and updated as needed. An orchestrator should be able to provision configuration in modular fashion, and for this we’re going to use volumes and configMaps.
 Storage: Well-built applications always assume their containers will be short lived, and that their filesystems will be destroyed with no warning. Any data collected or generated by a container, as well as any data that needs to be provisioned to a container, should be offloaded to some sort of external storage. For this, we’ll look at Container Storage Interface plugins and persistentVolumes.

And that’s it. I’ll provide some ‘advanced topics’ pointers throughout the blog series to give you some ideas on what to study after you’ve mastered the basics in order to take your Kubernetes apps even further. When you’re starting out, focus on the components mentioned above and detailed in this series.
Just Enough High-Level Design
I promised above to keep this series more tactical than strategic, but there are some high-level design points we absolutely need in order to understand the engineering decisions that follow, and to make sure we’re getting the maximum benefit out of our containerization platform. Regardless of what orchestrator we’re using, there are three key principles that set a standard for what we’re trying to achieve when containerizing applications: portability, scalability, and shareability.

 Portability: Whatever we build, we should be able to deploy it on any Kubernetes cluster; this means not having hard dependencies on any feature or configuration of the underlying host or its filesystem. If the idea of moving your app from your dev machine to a testing server sounds stressful, something probably needs to be rethought.
 Scalability: Containerized applications scale best when they scale horizontally: by adding more containers, not just containers with more compute resources. No matter how many resources are allocated to a container, it is still a mortal and often short-lived object, managed by your orchestrator as it adapts to changing cluster conditions and load. Therefore, we’re going to need to arrange our applications so they can easily leverage more copies of their containers, typically by using the routing and load balancing features of our orchestrator, and by making our containers stateless whenever possible.
 Shareability: We don’t want to be trapped maintaining and consulting on every app we build forever. It’s crucial that we’re able to share our apps with other developers we may hand them off to in the future, with operators who have to manage them in production, and with third parties who may be able to leverage them in an open-source context. Portability gets us halfway there by ensuring it’s possible to move our app from cluster to cluster; shareability emphasizes that, beyond being technically possible, the handoff should also be easy and reliable. Standing up our app on a new cluster should be as foolproof as possible, at least for a first pass.

Thinking Through Your First Application on Kubernetes
For the rest of this series, let’s think through containerizing a simple three-tier web app for Kubernetes, with the following typical components:

 A database for holding all the data required by the application
 An API which is allowed to access the database
 A frontend which is reachable by users on the web, and which uses the API to interact with the database

Even if applications like these are the furthest thing from what you work with, the example is instructive: the generic application above is just a vehicle for touring decision points that apply to many different kinds of applications, so you can see examples of how to make these decisions yourself.
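As a preview of how those tiers map onto Kubernetes objects, here is a hedged sketch of one of them; the image name and labels are placeholders, and we’ll build these objects up properly over the coming posts. Each tier typically gets its own controller like the Deployment below, plus a Service for any tier that needs to be reachable by others:

```yaml
# Hypothetical sketch: the API tier as a Deployment. The database and
# frontend tiers would each get an analogous object of their own.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2                       # scale horizontally by raising this number
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myregistry/api:1.0   # placeholder image name
          ports:
            - containerPort: 8080
```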
Let’s begin by imagining that you’ve already created Docker images for each component of your application, whether they resemble the components listed above or are completely different. If you’d like a primer on designing and building Docker images, see my colleague Tibor Vass’s excellent blog post on Dockerfile best practices.
Checkpoint #1: Make your images.
Before you’ll be able to orchestrate anything, you’ll need images built for every type of container you want to run in your application.
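If you still need those images, a minimal multi-stage Dockerfile for a hypothetical Go-based API component might look like the following. The paths, binary name, and base-image tags are illustrative assumptions, and the best-practices post mentioned above goes much deeper:

```dockerfile
# Build stage: compile the (hypothetical) API binary with a full toolchain.
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /api ./cmd/api

# Final stage: a small runtime image containing only the compiled binary.
FROM alpine:3.10
COPY --from=build /api /usr/local/bin/api
ENTRYPOINT ["api"]
```

The multi-stage pattern keeps build tooling out of the image you actually ship, which helps with both image size and attack surface.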
Also note that we’re going to consider some of the simplest cases for each concern; start with these, and once you’ve mastered them, see the Advanced Topics subsection in each step for pointers on what to explore next.
You’ve made it this far! In the next post, I’ll explore setting up processes as pods and controllers.
We will also be offering training on Kubernetes starting in early 2020. Sign up to get notified when the training is available.
To learn more about running Kubernetes with Docker:

Try the Docker Kubernetes Service, the easiest way to securely run and manage Kubernetes in the enterprise.
Try out Play with Kubernetes.
Find out how to simplify Kubernetes with Docker Compose.


The post Designing Your First App in Kubernetes, Part 1: Getting Started appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Docker + Arm Virtual Meetup Recap: Building Multi-arch Apps with Buildx

Docker support for cross-platform applications is better than ever. At this month’s Docker Virtual Meetup, we featured Docker Architect Elton Stoneman showing how to build and run truly cross-platform apps using Docker’s buildx functionality. 
With Docker Desktop, you can now describe all the compilation and packaging steps for your app in a single Dockerfile, and use it to build an image that will run on Linux, Windows, Intel and Arm – 32-bit and 64-bit. In the video, Elton covers the Docker runtime and its understanding of OS and CPU architecture, together with the concept of multi-architecture images and manifests.
The key takeaways from the meetup on using buildx:

Everything should be multi-platform
Always use multi-stage Dockerfiles 
buildx is experimental but solid (based on BuildKit)
Alternatively, use docker manifest (also experimental)

Not a Docker Desktop user? Jason Andrews, a Solutions Director at Arm, posted this great article on how to set up buildx using Docker Community Engine on Linux. 
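The workflow Elton demonstrates boils down to a few CLI commands. Here is a sketch, assuming you have a registry you can push to; the builder name and image name are placeholders, and since buildx is experimental you may need to enable experimental CLI features first:

```shell
# buildx is experimental, so the CLI feature may need enabling first.
export DOCKER_CLI_EXPERIMENTAL=enabled

# Create and select a builder instance capable of multi-platform builds.
docker buildx create --name multibuilder --use

# Build one Dockerfile for several OS/CPU targets and push the resulting
# manifest list to a registry (the image name here is a placeholder).
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t myregistry/myapp:1.0 \
  --push .
```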
Check out the full meetup on Docker’s YouTube Channel:

You can also access the demo repo here. The sample code for this meetup is from Elton’s latest book, Learn Docker in a Month of Lunches, an accessible, task-focused guide to Docker on Linux, Windows, or Mac systems. In it, you’ll learn practical Docker skills to help you tackle the challenges of modern IT, from cloud migration and microservices to handling legacy systems. There’s no excessive theory and there are no niche use cases: just a quick-and-easy guide to the essentials of Docker you’ll use every day (use the code webdoc40 for 40% off).
To get started building multi-arch apps today:

Download Docker Desktop 
Read about Building Multi-Arch Images for Arm and x86 with Docker Desktop
Watch the DockerCon session on Developing Containers for Arm


The post Docker + Arm Virtual Meetup Recap: Building Multi-arch Apps with Buildx appeared first on Docker Blog.

New in Docker Hub: Personal Access Tokens

The Hub token list view.
On the heels of our recent update on image tag details, the Docker Hub team is excited to share the availability of personal access tokens (PATs) as an alternative way to authenticate into Docker Hub.
Already available as part of Docker Trusted Registry, personal access tokens can now be used as a substitute for your password in Docker Hub, especially for integrating your Hub account with other tools. You’ll be able to leverage these tokens for authenticating your Hub account from the Docker CLI – either from Docker Desktop or Docker Engine: 
docker login --username <username>
When you’re prompted for a password, enter your token instead.
The advantage of using tokens is the ability to create and manage multiple tokens at once so you can generate different tokens for each integration – and revoke them independently at any time.
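For scripted integrations, it is safer to pipe the token in on standard input than to type it at a prompt; `--password-stdin` is the Docker CLI flag for this. A sketch, with the token stored in a hypothetical environment variable:

```shell
# Log in non-interactively: the token is read from stdin rather than passed
# as an argument, so it stays out of your shell history and process list.
echo "$DOCKER_HUB_TOKEN" | docker login --username <username> --password-stdin
```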
Create and Manage Personal Access Tokens in Docker Hub 
Personal access tokens are created and managed in your Account Settings.
From here, you can:

Create new access tokens
Modify existing tokens
Delete access tokens

Creating an access token in Docker Hub.
Note that the actual token is shown only once, at the time of creation. You will need to copy the token and either save it in a credential manager or use it immediately. If you lose a token, you will need to delete it and create a new one. 
The Next Step for Tokens
Personal access tokens open a new set of ways to authenticate into your Docker Hub account. Their introduction also serves as a foundational building block for more advanced access control capabilities, including multi-factor authentication and team-based access controls – both areas that we’re working on at the moment. We’re excited to share this and many other updates that are coming to Docker Hub over the next few months. Give access tokens a try and let us know what you think!
To learn more about personal access tokens for Docker Hub:

Read more about Docker Hub
Explore the Docker Hub documentation 
Get started with Docker by creating your Hub account


The post New in Docker Hub: Personal Access Tokens appeared first on Docker Blog.