NVIDIA GPU Nodes for Docker Enterprise Kubernetes

Accelerate Machine Learning, Data Analysis, Media Processing, and other Challenging Workloads
Docker Enterprise 3.1 with Kubernetes 1.17 makes it simple to add standard GPU (Graphics Processing Unit, but you knew that) worker node capacity to Kubernetes clusters. A few easily-automated steps configure the (Linux) node, either before or after joining it to Docker Enterprise. Thereafter, Docker Kubernetes Service automatically recognizes the node as GPU-enabled, and deployments that require (or can take advantage of) this specialized capacity can be tagged and configured to seek it out.
This capability complements the ever-wider availability of NVIDIA GPU boards for datacenter (and desktop) computing, as well as the rapid expansion of GPU-equipped virtual machine options from public cloud providers. Easy availability of GPU compute capacity, plus strong support for standard GPUs at the container level (for example, in containerized TensorFlow), is enabling an explosion of new applications and business models, from AI to bioinformatics to gaming. Meanwhile, Kubernetes and containers are making it easier to share still-relatively-expensive GPU capacity. You can also configure and deploy to cloud-based GPU nodes on an as-needed basis, a potential source of savings, since billing for GPU nodes tends to be high.
The GPU Recipe
Currently, GPU capability is supported on Linux only, but that includes a wide range of compatible NVIDIA hardware. The main thing to consider is whether the apps you intend to run can exploit the hardware at hand. Most application substrates (for example, the TensorFlow Docker container) can access a range of NVIDIA datacenter GPU architectures.
However, distributed computing projects aimed at consumer-side deployment (such as Folding@home, which is deployable on Kubernetes thanks to community contributions such as k8s-fah) may not work well with, for example, the Tesla M60 to V100 GPUs found in Amazon G2, P3, and other GPU-equipped instance types. The problem isn't technical compatibility so much as the fact that FAH workloads are tuned for at-home graphics cards, so the system preferentially distributes work units to those devices, which may leave volunteered cloud GPUs idle.
All of this means it's important to prep workloads carefully, with an eye to the GPU hardware on which they'll run. You must select the correct parent and base images, and you'll typically need to add configuration steps to Dockerfiles to include the NVIDIA Container Toolkit and other components before workloads installed in these containers can access the underlying physical GPUs. The TensorFlow Docker workflow is also kind of neat in that it lets you build and debug apps in CPU-only containers, then swap in GPU support to deploy on production platforms. In general, containerizing GPU-reliant apps is a big win, since it makes these applications and deployments highly portable between environments, testable in diverse environments, and easy to share, accelerating research. In fact, because containerization minimizes external dependencies, it can even help make certain kinds of experiments more repeatable.
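As a quick sketch of that workflow (assuming the standard tensorflow/tensorflow images on Docker Hub and Docker Engine 19.03 or later for the --gpus flag), the same code can be exercised first against the CPU-only image, then against the GPU-enabled image simply by swapping tags:
# CPU-only image: fine for building and debugging on a machine with no GPU
docker run -it --rm tensorflow/tensorflow:latest \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices())"
# GPU image: run on a node with NVIDIA drivers and the NVIDIA Container Toolkit installed
docker run -it --rm --gpus all tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"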
Configuring a GPU Node for Docker Kubernetes
Preparing a GPU node for use in a Docker Enterprise Kubernetes cluster begins with configuring the node itself. You can join the node to the cluster as a regular worker right away, or wait until after you've installed and persisted the GPU drivers.
We'll show the procedure for configuring an Ubuntu Linux 18.04 node to be added to a Docker Enterprise cluster as a GPU node. Please refer to the documentation for recipes for other operating systems and versions.
We'll assume you updated the host before installing Docker Enterprise Engine and UCP (a recent kernel is essential). The next step is to ensure that certain dependencies are in place. For Ubuntu 18.04, you'll need to install pkg-config (a helper tool that the NVIDIA installer and other software use to locate libraries and build components):
sudo apt install pkg-config
Next, you’ll make sure build tools and headers are onboard:
sudo apt-get install -y gcc make curl linux-headers-$(uname -r)
Then confirm that the i2c_core and ipmi_msghandler kernel modules, which the NVIDIA driver depends on, are loaded:
sudo modprobe -a i2c_core ipmi_msghandler
Then configure the system to load these modules again on restarts:
echo -e "i2c_core\nipmi_msghandler" | sudo tee /etc/modules-load.d/nvidia.conf
Then we set a prefix variable used in subsequent commands, make a directory for the NVIDIA libraries in the appropriate place, register that directory with the dynamic linker, and refresh the linker cache:
NVIDIA_OPENGL_PREFIX=/opt/kubernetes/nvidia
sudo mkdir -p $NVIDIA_OPENGL_PREFIX/lib
echo "${NVIDIA_OPENGL_PREFIX}/lib" | sudo tee /etc/ld.so.conf.d/nvidia.conf
sudo ldconfig
Next, we set another variable to the driver version we want to install:
NVIDIA_DRIVER_VERSION=440.59
We curl down the driver executable and save it as nvidia.run:
curl -LSf https://us.download.nvidia.com/XFree86/Linux-x86_64/${NVIDIA_DRIVER_VERSION}/NVIDIA-Linux-x86_64-${NVIDIA_DRIVER_VERSION}.run -o nvidia.run
Then we install the driver itself:
sudo sh nvidia.run --opengl-prefix="${NVIDIA_OPENGL_PREFIX}"
The installer will ask you a few questions before completing the installation. Do you want to install NVIDIA 32-bit drivers? (Answer is probably no.) It may also warn that it can't find the X.Org tools it would use to resolve changed library paths, and that you may need to install the X.Org development packages if installation fails. (Acknowledge the warning; you don't need to worry about it.) Update the X configuration so that the NVIDIA X driver loads on restarts? (Answer is probably yes.)
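If you're scripting node preparation, the installer can also run non-interactively; a minimal sketch, assuming the --silent flag behaves the same in your driver version (run sh nvidia.run -A to list the installer's options):
# --silent suppresses the prompts and accepts the default answers
sudo sh nvidia.run --silent --opengl-prefix="${NVIDIA_OPENGL_PREFIX}"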
Once the driver is installed, we create a systemd service that loads the NVIDIA kernel modules and creates device files for them:
sudo tee /etc/systemd/system/nvidia-modprobe.service << END
[Unit]
Description=NVIDIA modprobe

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/nvidia-modprobe -c0 -u

[Install]
WantedBy=multi-user.target
END
Then enable and start the service, which loads the modules:
sudo systemctl enable nvidia-modprobe
sudo systemctl start nvidia-modprobe
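If the service started correctly, the NVIDIA device files should now be present. A quick spot-check (the exact entries, such as /dev/nvidia0, /dev/nvidiactl, and /dev/nvidia-uvm, depend on your hardware):
ls -l /dev/nvidia*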
Finally, we configure the NVIDIA persistence daemon service:
sudo tee /etc/systemd/system/nvidia-persistenced.service << END
[Unit]
Description=NVIDIA Persistence Daemon
Wants=syslog.target

[Service]
Type=forking
PIDFile=/var/run/nvidia-persistenced/nvidia-persistenced.pid
Restart=always
ExecStart=/usr/bin/nvidia-persistenced --verbose
ExecStopPost=/bin/rm -rf /var/run/nvidia-persistenced

[Install]
WantedBy=multi-user.target
END
Then enable and start it up.
sudo systemctl enable nvidia-persistenced
sudo systemctl start nvidia-persistenced
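This is also a good moment to sanity-check the driver installation. The nvidia-smi utility, installed alongside the driver, should list your GPU, the driver version, and current utilization:
nvidia-smi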
At this point, you can (if you haven't done so already) obtain from Docker Enterprise/UCP the join command required to add nodes to your cluster, paste it into the command line of your GPU node, and join it up. The generalized or node-specific orchestration settings in Docker Enterprise will determine whether the node joins as a Kubernetes worker. The Kubernetes GPU device plugin is built in, and the node will come up (or change state) on the Dashboard to indicate that it's a GPU node.
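For reference, the join command UCP generates looks roughly like the following; the token and manager address shown here are placeholders for the values your own cluster produces:
docker swarm join --token SWMTKN-1-<generated-token> <ucp-manager-address>:2377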
Test Deployment
You can now easily run a test deployment with kubectl, pulling the image down from Docker Hub. The following can be pasted directly into your command line, or the YAML portion can be saved as a file, edited, and applied. The image contains a program (deviceQuery) that inspects the platform and logs a dossier on your available GPU hardware:
kubectl apply -f- <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
 creationTimestamp: null
 labels:
   run: gpu-test
 name: gpu-test
spec:
 replicas: 1
 selector:
   matchLabels:
     run: gpu-test
 template:
   metadata:
     labels:
       run: gpu-test
   spec:
     containers:
      - command:
        - sh
        - -c
        - "deviceQuery && sleep infinity"
       image: mirantis/gpu-example:cuda-10.2
       name: gpu-test
       resources:
         limits:
            nvidia.com/gpu: "1"
EOF
You can change the number of replicas and increase the limit if more than one GPU or GPU-equipped node is available to your cluster. Pods will be scheduled only onto GPU nodes per the stated limit. Attempts to schedule more pods than you have GPU capacity to support will result in a FailedScheduling error with the annotation “Insufficient nvidia.com/gpu.”
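For example, if your cluster has two GPUs available, you could scale the test deployment to two replicas (the replica count here is just for illustration):
kubectl scale deployment gpu-test --replicas=2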
List the pods and their states with:
kubectl get pods
(Example output)
NAME                        READY   STATUS    RESTARTS   AGE
gpu-test-747d746885-hpv74   1/1     Running   0          14m
Then view the log with:
kubectl logs <name of pod>
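If you'd rather not look up the generated pod name, you can also select the pod by the run=gpu-test label from the manifest above:
kubectl logs -l run=gpu-test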
Finally, you can delete the deployment by entering:
kubectl delete deployment gpu-test
More to Come
Now that we have GPU capacity, we’re playing with some machine learning tools. Soon we expect to post some benchmark results comparing classifier training exercises on conventional (e.g., Xeon) (v)CPUs vs GPUs.
If you’d like to see this in action, download the Mirantis Launchpad CLI Tool, or join us for a demonstration webinar!