Designing Your First App in Kubernetes: An Overview

Kubernetes is a powerful container orchestrator and has established itself as many IT architects’ orchestrator of choice. But that power comes at a price: jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but knowing how to actually fly it is another matter. That complexity can overwhelm people approaching the system for the first time.
I wrote a blog series recently where I walk you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need. The posts go into quite a bit of detail, so I’ve provided an abbreviated version here, with links to the original posts.

Part 1: Getting Started 

Just Enough Kube
With a machine as powerful as Kubernetes, I like to identify the absolute minimum set of things we’ll need to understand in order to be successful; there’ll be time to learn about all the other bells and whistles another day, after we master the core ideas. No matter where your application runs, in Kubernetes or anywhere else, there are four concerns we are going to have to address:

Processes: In Kubernetes, that means using pods and controllers to schedule, maintain and scale processes.
Networking: Kubernetes services allow application components to talk to each other.
Configuration: A well-written application factors out configuration, rather than hard-coding it. In Kubernetes, volumes and configMaps are our tools for this.
Storage: Containers are short-lived, so data you want to keep should be stored elsewhere. For this, we’ll look at Container Storage Interface plugins and persistentVolumes.

Just Enough Design
There are a few high-level design points we need to understand before making the engineering decisions that follow, so that we get the maximum benefit out of our containerization platform. Regardless of which orchestrator we’re using, three key principles set the standard for what we’re trying to achieve when containerizing applications: portability, scalability, and shareability. Containerization is fundamentally meant to confer these benefits on applications; if at any point while containerizing an app you aren’t seeing returns in these three areas, something probably needs to be rethought.
For more information on Kubernetes and where to start when using it to develop an application, check out Part 1 of our series.
Part 2: Setting up Processes
The heart of any application is its running processes, and in Kubernetes we create processes as pods. Pods are a bit fancier than individual containers, in that they can schedule whole groups of containers, co-located on a single host, which brings us to our first decision point: How should our processes be arranged into pods?

A pod can contain one or more containers, but containers in the pod must scale together.
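To make that decision concrete, here is a minimal sketch of a pod that co-locates two containers; the names, images, and shared volume are illustrative placeholders rather than anything prescribed by the original series.

```yaml
# A hypothetical pod grouping two containers that must live and scale
# together: a web server and a helper sidecar. They share the pod's
# network namespace and an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar            # placeholder name
spec:
  volumes:
    - name: shared-content
      emptyDir: {}                  # scratch space shared by both containers
  containers:
    - name: web
      image: nginx:1.25             # example image
      ports:
        - containerPort: 80
      volumeMounts:
        - name: shared-content
          mountPath: /usr/share/nginx/html
    - name: content-refresher       # hypothetical sidecar
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date > /content/index.html; sleep 30; done"]
      volumeMounts:
        - name: shared-content
          mountPath: /content
```

Scaling this pod means scaling both containers at once, which is exactly the trade-off described above.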

There are two important considerations for how we set up pods:
Pods and containers must scale together. If you need to scale your application up, you have to add more pods; these pods will come with copies of every container they include. 
Kubernetes controllers are the best way to schedule pods, since controllers like deployments or daemonSets provide numerous operational tools for scaling and maintaining workloads beyond what bare pods offer (see the sketch below).
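Here is that sketch: a minimal deployment that keeps three replicas of a single-container pod running. The names, labels, and image are assumptions for illustration.

```yaml
# A minimal deployment: the controller maintains three replicas of the
# pod template, replaces pods that die, and supports rolling updates,
# none of which a bare pod gives you.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                         # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web                      # must match the pod template's labels
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25         # example image
          ports:
            - containerPort: 80
```

Applying it with kubectl apply -f and then deleting one of the resulting pods shows the controller’s value: a replacement is scheduled automatically.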
To learn more about setting up processes for managing your applications, check out Part 2 of our series. 
Part 3: Communicating via Services
After deploying workloads as pods managed by controllers, you have to establish a way for those pods to reliably communicate with each other without incurring a lot of complexity for developers.
That’s where Kubernetes services come in. They provide reliable, simple networking endpoints that route traffic to pods by matching the labels defined in the controller that created those pods. For basic applications, two service types cover most use cases: clusterIP and nodePort. That brings us to another decision point: What kind of service should route to each controller?
The simplest way to decide between them is to determine whether the target pods are meant to be reachable from outside the cluster or not.

A Kubernetes nodePort service allows external traffic to be routed to the pods.
A Kubernetes clusterIP service only accepts traffic from within the cluster.
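As a hedged sketch of that choice, the two services below route to the same hypothetical pods; the app: web selector assumes pods labeled that way by a controller such as the deployment sketched in Part 2.

```yaml
# A clusterIP service: a stable virtual IP reachable only from inside
# the cluster, load-balancing across the pods that match the selector.
apiVersion: v1
kind: Service
metadata:
  name: web-internal                # placeholder name
spec:
  type: ClusterIP
  selector:
    app: web                        # routes to pods carrying this label
  ports:
    - port: 80
      targetPort: 80
---
# A nodePort service: exposes the same pods on a port of every node,
# so traffic from outside the cluster can reach them.
apiVersion: v1
kind: Service
metadata:
  name: web-external                # placeholder name
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080               # must fall within the cluster's nodePort range (30000-32767 by default)
```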

You can learn more about communication via Kubernetes services and how to decide between clusterIP and nodePort services in Part 3 of our series. 
Part 4: Configuration
One of the core design principles of any containerized app is portability. When you build an application with Kubernetes, you’ll need a plan for handling the environment-specific configuration that app expects.
A well-designed application should treat configuration as an independent object, separate from the containers themselves and provisioned to them at runtime. When we design applications, we need to identify which configurations we want to make pluggable in this way, which brings us to another decision point:
What application configurations will need to change from environment to environment?
Typically, these are environment variables or config files that differ between environments, such as access tokens for the services used in staging versus production, or different port configurations.
Once we’ve identified the configs in our application that should be pluggable, we can enable the behavior we want by using Kubernetes’ system of volumes and configMaps.

The configMap and Volume interact to provide configuration for containers.
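A minimal sketch of that interaction, assuming a hypothetical settings file and mount path: the configMap carries the environment-specific values, and the pod mounts it through a volume instead of baking the values into the image.

```yaml
# A configMap holding an environment-specific settings file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config                  # placeholder name
data:
  app.properties: |
    listen_port=8080
    api_endpoint=https://staging.example.com
---
# A pod that mounts the configMap as a read-only file at runtime.
apiVersion: v1
kind: Pod
metadata:
  name: web-configured              # placeholder name
spec:
  volumes:
    - name: config
      configMap:
        name: web-config            # the configMap defined above
  containers:
    - name: web
      image: nginx:1.25             # example image
      volumeMounts:
        - name: config
          mountPath: /etc/web       # file appears as /etc/web/app.properties
          readOnly: true
```

Swapping environments then means swapping the configMap, not rebuilding the image.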

You can read more about configuration in Part 4 of the series.
Part 5: Storage
The final component we want to think about when we build applications for Kubernetes is storage. Remember, a container’s filesystem is transient, and any data kept there is at risk of being deleted along with your container if that container ever exits or is rescheduled. 
Any container that generates or collects valuable data should be pushing that data out to stable external storage; conversely, any container that requires the provisioning of a lot of data should be receiving that data from an external storage location. 
That brings us to our last decision point: What data does your application gather or use that should outlive the lifecycle of a pod?
Tackling that requires working with the Kubernetes storage model. The full model has a number of moving parts: Container Storage Interface (CSI) plugins, storageClasses, persistentVolumes (PVs), persistentVolumeClaims (PVCs), and volumes.
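As a hedged sketch of how those parts fit together, the manifests below request storage through a persistentVolumeClaim and mount it into a pod; the storageClass name, size, and paths are assumptions that depend on what your cluster’s CSI plugin provides.

```yaml
# A persistentVolumeClaim requesting 1Gi of storage. A CSI plugin behind
# the named storageClass provisions a persistentVolume to satisfy it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                  # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard        # assumption: depends on your cluster
  resources:
    requests:
      storage: 1Gi
---
# A pod mounting the claimed storage; data written under /data outlives
# the pod itself.
apiVersion: v1
kind: Pod
metadata:
  name: data-writer                 # placeholder name
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
  containers:
    - name: app
      image: busybox:1.36           # example image
      command: ["sh", "-c", "date >> /data/log.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
```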

To learn more about how to leverage the Kubernetes storage model for your applications, be sure to check out Part 5 of the series. 
The Future
I’ve walked you through the basic Kubernetes tooling you’ll need to containerize a wide variety of applications, and provided you with next-step pointers on where to look for more advanced information. Try working through the stages of containerizing workloads, networking them together, modularizing their config, and provisioning them with storage to get fluent with the ideas above.
After mastering the basics of building a Kubernetes application, ask yourself, “How well does this application fit the values of portability, scalability and shareability we started with?” Containers themselves are engineered to move easily between clusters and users, but what about the entire application you just built? How can you move it around while preserving its integrity and without invalidating the unit and integration testing you’ll perform on it?
Docker App sets out to solve that problem by packaging applications in an integrated bundle that can be moved around as easily as a single image. Stay tuned to Docker’s blog for more guidance on how to use this emerging format with your Kubernetes applications.
To learn more about Kubernetes and Docker:

Find out more about running Kubernetes on Docker Enterprise and Docker Desktop.
Check out Play with Kubernetes, powered by Docker.

We will also be offering training on Kubernetes starting in early 2020. In the training, we’ll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here:
Get Notified About Training
