Managing the TICK Stack with Docker App

Docker Application eases the packaging and distribution of a Docker Compose application. The TICK stack (Telegraf, InfluxDB, Chronograf, and Kapacitor) is a good candidate to illustrate how this actually works. In this post, I'll show you how to deploy the TICK stack as a Docker App.
About the TICK Stack
This application stack is mainly used to handle time-series data. That makes it a great choice for IoT projects, where devices send data (temperature, weather indicators, water level, etc.) on a regular basis.
Its name comes from its components:
- Telegraf
- InfluxDB
- Chronograf
- Kapacitor
The schema below illustrates the overall architecture, and outlines the role of each component.

Data is sent to Telegraf and stored in an InfluxDB database. Chronograf can query the database through a web interface. Kapacitor can process, monitor, and raise alerts based on the data.
Defining the Application in a Compose File
The tick.yml file below defines the four components of the stack and the way they communicate with each other:
version: '3.7'
services:
  telegraf:
    image: telegraf
    configs:
    - source: telegraf-conf
      target: /etc/telegraf/telegraf.conf
    ports:
    - 8186:8186
  influxdb:
    image: influxdb
  chronograf:
    image: chronograf
    ports:
    - 8888:8888
    command: ["chronograf", "--influxdb-url=http://influxdb:8086"]
  kapacitor:
    image: kapacitor
    environment:
    - KAPACITOR_INFLUXDB_0_URLS_0=http://influxdb:8086
configs:
  telegraf-conf:
    file: ./telegraf.conf
Telegraf’s configuration is provided through a Docker Config object, created out of the following telegraf.conf file:
[agent]
  interval = "5s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "5s"
  flush_jitter = "0s"
  precision = ""
  debug = false
  quiet = false
  logfile = ""
  hostname = "$HOSTNAME"
  omit_hostname = false
[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "test"
  username = ""
  password = ""
  retention_policy = ""
  write_consistency = "any"
  timeout = "5s"
[[inputs.http_listener]]
  service_address = ":8186"
[[inputs.cpu]]
  # Whether to report per-cpu stats or not
  percpu = true
  # Whether to report total system cpu stats or not
  totalcpu = true
This configuration:

Defines an agent that gathers host CPU metrics on a regular basis.
Defines an additional input method allowing Telegraf to receive data over HTTP.
Specifies the name of the database in which the collected/received data will be stored.

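Before deploying anything, we can sanity-check telegraf.conf by running it once through the official telegraf image in test mode. This is an optional step, not part of the stack definition itself; assuming the file sits in the current directory, the following should gather the CPU metrics once, print them to stdout, and exit:
$ docker run --rm -v $(pwd)/telegraf.conf:/etc/telegraf/telegraf.conf:ro telegraf telegraf --test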
Deploying the application from Docker Desktop
Now we will deploy the application using Swarm first, and then using Kubernetes to illustrate some of the differences.
Using Swarm to Deploy the TICK Stack
First, we set up a local Swarm using the following command:
$ docker swarm init
Then we deploy the TICK stack as a Docker Stack:
$ docker stack deploy tick -c tick.yml
Creating network tick_default
Creating config tick_telegraf-conf
Creating service tick_telegraf
Creating service tick_influxdb
Creating service tick_chronograf
Creating service tick_kapacitor
This creates:

A network for communication between the application containers 
A Config object containing the Telegraf configuration we defined in telegraf.conf
The 4 services composing the TICK stack

It only takes a couple of seconds before the application is up and running. Now we can verify the status of each service.
$ docker service ls
ID             NAME              MODE         REPLICAS   IMAGE               PORTS
74zkf54ruztg   tick_chronograf   replicated   1/1        chronograf:latest   *:8888->8888/tcp
y97hcx3yyjx6   tick_influxdb     replicated   1/1        influxdb:latest
fm4uckqlvhvt   tick_kapacitor    replicated   1/1        kapacitor:latest
12zl0sa678xh   tick_telegraf     replicated   1/1        telegraf:latest     *:8186->8186/tcp
Only Telegraf and Chronograf are exposed to the outside world:

Telegraf is used to ingest data through port 8186
Chronograf is used to visualize the data and is available through a web interface on local port 8888

To query data from the Chronograf interface, we first need to send some data to Telegraf.
Sending Test data
First we will use the lucj/genx Docker image to generate data following a cosine distribution (a couple of other simple distributions are available).
$ docker run lucj/genx
Usage of /genx:
  -duration string
        duration of the generation (default "1d")
  -first float
        first value for linear type
  -last float
        last value for linear type (default 1)
  -max float
        max value for cos type (default 25)
  -min float
        min value for cos type (default 10)
  -period string
        period for cos type (default "1d")
  -step string
        step / sampling period (default "1h")
  -type string
        type of curve (default "cos")
We will generate three days of data, with a one day period, min/max values of 10/25 and a sampling step of one hour; that will be enough for our tests.
$ docker run lucj/genx:0.1 -type cos -duration 3d -min 10 -max 25 -step 1h > /tmp/data
We then send the data to the Telegraf HTTP endpoint with the following commands:
PORT=8186
endpoint="http://localhost:$PORT/write"
cat /tmp/data | while read line; do
  ts="$(echo $line | cut -d' ' -f1)000000000"
  value=$(echo $line | cut -d' ' -f2)
  curl -i -XPOST $endpoint --data-binary "temp value=${value} ${ts}"
done
Next, from the Explore tab in the Chronograf web interface we can visualize the data using the following query:
select "value" from "test"."autogen"."temp"
We will see a neat cosine distribution:

With just a couple of commands, we have deployed the TICK stack on a Swarm cluster, sent time series data and visualized it.
Finally, we remove the stack:
$ docker stack rm tick
Removing service tick_chronograf
Removing service tick_influxdb
Removing service tick_kapacitor
Removing service tick_telegraf
Removing config tick_telegraf-conf
Removing network tick_default
We have shown how to deploy the application stack with Docker Swarm. Now we will deploy it with Kubernetes.
Using Kubernetes to Deploy the TICK Stack
From Docker Desktop, deploying the same application on a Kubernetes cluster is also a simple process.
Activate Kubernetes from Docker Desktop
First, activate Kubernetes from the Docker Desktop settings:

A local Kubernetes cluster starts quickly and is accessible right from our local environment.

When the Kubernetes cluster is created, a configuration file (also known as kubeconfig) is created locally (usually in ~/.kube/config); if that file already exists, it is updated with the new cluster's information. This configuration file contains all the information needed to communicate with the API Server securely:

The cluster’s CA
The API Server endpoint
The default user’s certificate and private key

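If kubectl is installed, we can check the entry Docker Desktop added to the kubeconfig (the context is typically named docker-desktop, but the exact name may vary with the Docker Desktop version):
$ kubectl config current-context
$ kubectl config get-contexts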
Creating a new Docker context
Docker 19.03 introduced the context object. It allows you to quickly switch the CLI configuration to connect with different clusters. A single context exists by default as shown below:
$ docker context list
NAME        DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT                                 ORCHESTRATOR
default *   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   https://kubernetes.docker.internal:6443 (default)   swarm
Note: as we can see from the ORCHESTRATOR column, this context can only be used to deploy workloads on the local Swarm.
We will now create a new Docker context dedicated to running Kubernetes workloads. This can be done with the following command:
$ docker context create k8s-demo \
  --default-stack-orchestrator=kubernetes \
  --kubernetes config-file=$HOME/.kube/config \
  --description "Local k8s from Docker Desktop" \
  --docker host=unix:///var/run/docker.sock
Next, we verify that both contexts are now available:
$ docker context list
NAME        DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT                                 ORCHESTRATOR
default *   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   https://kubernetes.docker.internal:6443 (default)   swarm
k8s-demo    Local k8s from Docker Desktop             unix:///var/run/docker.sock   https://kubernetes.docker.internal:6443 (default)   kubernetes
Note: we could use a single context where both orchestrators are defined. In that case, the deployment would be done on Swarm and Kubernetes at the same time.
Next, we switch to the k8s-demo context:
$ docker context use k8s-demo
k8s-demo
Current context is now "k8s-demo"
Then we deploy the application in the same way we did before, but this time it will run on Kubernetes instead of Swarm.
$ docker stack deploy tick -c tick.yml
Waiting for the stack to be stable and running…
chronograf: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
influxdb: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
kapacitor: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
telegraf: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]

Stack tick is stable and running
Using the usual kubectl binary, we can verify all the Kubernetes resources have been created:
$ kubectl get deploy,po,svc
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/chronograf   1/1     1            1           2m50s
deployment.extensions/influxdb     1/1     1            1           2m50s
deployment.extensions/kapacitor    1/1     1            1           2m49s
deployment.extensions/telegraf     1/1     1            1           2m49s

NAME                             READY   STATUS    RESTARTS   AGE
pod/chronograf-c55797884-mp8gc   1/1     Running   0          2m50s
pod/influxdb-67c574845d-z6846    1/1     Running   0          2m50s
pod/kapacitor-57f6787666-t8j6l   1/1     Running   0          2m49s
pod/telegraf-6b8648884c-lq9t5    1/1     Running   0          2m49s

NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/chronograf             ClusterIP      None            <none>        55555/TCP        2m49s
service/chronograf-published   LoadBalancer   10.105.63.34    <pending>     8888:32163/TCP   2m49s
service/influxdb               ClusterIP      None            <none>        55555/TCP        2m49s
service/kapacitor              ClusterIP      None            <none>        55555/TCP        2m49s
service/kubernetes             ClusterIP      10.96.0.1       <none>        443/TCP          30h
service/telegraf               ClusterIP      None            <none>        55555/TCP        2m49s
service/telegraf-published     LoadBalancer   10.107.223.80   <pending>     8186:32460/TCP   2m49s
We can generate some dummy data and visualize it in Chronograf, following the same process we used above for Swarm (I only show the result here as the process is the same):

Finally we remove the stack:
$ docker stack rm tick
Removing stack: tick
Note: we used the same command to remove the stack from Kubernetes as from Swarm, but notice the output is not the same, as each orchestrator handles different resources/objects.
Defining the TICK stack as a Docker App
We followed simple steps to deploy the application using both Swarm and Kubernetes. Now we'll define it as a Docker Application to make it more portable, and see how this eases the deployment process.
Docker App is shipped with Docker 19.03+ and can be used once the experimental flag is enabled for the CLI. This can be done in several ways:

modifying the config.json file (usually in the $HOME/.docker folder)

{
  "experimental": "enabled"
}

setting the DOCKER_CLI_EXPERIMENTAL environment variable

export DOCKER_CLI_EXPERIMENTAL=enabled
Once this is done, we can check that Docker App is enabled:
$ docker app version
Version: v0.8.0
Git commit: 7eea32b7
Built: Tue Jun 11 20:53:26 2019
OS/Arch: darwin/amd64
Experimental: off
Renderers: none
Invocation Base Image: docker/cnab-app-base:v0.8.0
Note: version 0.8 is currently the latest version.
Note: the Docker App command is experimental, which means that the feature is subject to change before being ready for production. The user experience will be updated in the next release.
Available commands in Docker App
Several commands are available to manage the lifecycle of a Docker Application, as we can see below. We will illustrate some of them later in this article.
$ docker app

Usage: docker app COMMAND

A tool to build and manage Docker Applications.

Commands:
  bundle      Create a CNAB invocation image and `bundle.json` for the application
  completion  Generates completion scripts for the specified shell (bash or zsh)
  init        Initialize Docker Application definition
  inspect     Shows metadata, parameters and a summary of the Compose file for a given application
  install     Install an application
  list        List the installations and their last known installation result
  merge       Merge a directory format Docker Application definition into a single file
  pull        Pull an application package from a registry
  push        Push an application package to a registry
  render      Render the Compose file for an Application Package
  split       Split a single-file Docker Application definition into the directory format
  status      Get the installation status of an application
  uninstall   Uninstall an application
  upgrade     Upgrade an installed application
  validate    Checks the rendered application is syntactically correct
  version     Print version information

Run 'docker app COMMAND --help' for more information on a command.

Creating a Docker Application Package for the TICK stack
We start with the folder which contains the Docker Compose file describing the application (tick.yml) and the Telegraf configuration file (telegraf.conf):
$ tree .
.
├── telegraf.conf
└── tick.yml
Next we create the Docker Application, named tick:
$ docker app init tick --compose-file tick.yml --description "tick stack"
Created "tick.dockerapp"
This creates the tick.dockerapp folder, which contains three additional files:
$ tree .
.
├── telegraf.conf
├── tick.dockerapp
│   ├── docker-compose.yml
│   ├── metadata.yml
│   └── parameters.yml
└── tick.yml

1 directory, 5 files

- docker-compose.yml is a copy of the tick.yml file
- metadata.yml defines the application's metadata:
$ cat tick.dockerapp/metadata.yml
# Version of the application
version: 0.1.0
# Name of the application
name: tick
# A short description of the application
description: tick stack
# List of application maintainers with name and email for each
maintainers:
  - name: luc
    email:

- parameters.yml defines the default parameters used for the application (more on this in a bit). This file is empty by default.
Note: when initializing the Docker App, it’s possible to use the -s flag. This creates a single file with the content of the three files above instead of a folder / files hierarchy.
As the application uses the telegraf.conf file, we need to copy it into the tick.dockerapp folder.
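Something as simple as the following does the job:
$ cp telegraf.conf tick.dockerapp/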
Environment settings
As we mentioned above, the purpose of the parameters.yml file is to provide default values for the application. Those values will replace some placeholders we will define in the application’s compose file. 
To illustrate this, we will consider dev and prod environments and assume the two only differ in the ports the application exposes to the outside world:
- Telegraf listens on port 8000 in dev and 9000 in prod
- Chronograf listens on port 8001 in dev and 9001 in prod
Note: In a real world application, differences between dev and prod would not be limited to a port number. The current example is over-simplified to make it easier to grasp the main concepts.
First, we create a parameter file for each environment:
- parameters.yml defines the default ports for both the Telegraf and Chronograf services

# parameters.yml
ports:
  telegraf: 8186
  chronograf: 8888

- dev.yml specifies values for the development environment

# dev.yml
ports:
  telegraf: 8000
  chronograf: 8001

- prod.yml specifies values for the production environment

# prod.yml
ports:
  telegraf: 9000
  chronograf: 9001

Next we modify the docker-compose.yml file to add some placeholders:
$ cat tick.dockerapp/docker-compose.yml
version: '3.7'
services:
  telegraf:
    image: telegraf
    configs:
    - source: telegraf-conf
      target: /etc/telegraf/telegraf.conf
    ports:
    - ${ports.telegraf}:8186
  influxdb:
    image: influxdb
  chronograf:
    image: chronograf
    ports:
    - ${ports.chronograf}:8888
    command: ["chronograf", "--influxdb-url=http://influxdb:8086"]
  kapacitor:
    image: kapacitor
    environment:
    - KAPACITOR_INFLUXDB_0_URLS_0=http://influxdb:8086
configs:
  telegraf-conf:
    file: ./telegraf.conf

As we can see in the changes above, the way to access the port for Telegraf is to use the ports.telegraf notation. The same approach is used for the Chronograf port.
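At this point we can also make sure the application definition is still syntactically correct with the validate command we saw in the command list earlier; if the placeholders and the parameters files are consistent, it should return without error:
$ docker app validate tick.dockerapp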
The Docker App's render command generates the Docker Compose file, substituting the ${ports.XXX} placeholders with the values from the specified parameters file. The default parameters.yml is used if none is specified. As we can see below, the Telegraf port is now 8186, and the Chronograf one is 8888.
$ docker app render tick.dockerapp/
version: "3.7"
services:
  chronograf:
    command:
    - chronograf
    - --influxdb-url=http://influxdb:8086
    image: chronograf
    ports:
    - mode: ingress
      target: 8888
      published: 8888
      protocol: tcp
  influxdb:
    image: influxdb
  kapacitor:
    environment:
      KAPACITOR_INFLUXDB_0_URLS_0: http://influxdb:8086
    image: kapacitor
  telegraf:
    configs:
    - source: telegraf-conf
      target: /etc/telegraf/telegraf.conf
    image: telegraf
    ports:
    - mode: ingress
      target: 8186
      published: 8186
      protocol: tcp
configs:
  telegraf-conf:
    file: telegraf.conf

If we specify a parameters file in the render command, the values within that file are used. As we can see in the following example which uses dev.yml during the rendering, Telegraf is published on port 8000 and Chronograf on port 8001 (values specified in dev.yml):
$ docker app render tick.dockerapp --parameters-file tick.dockerapp/dev.yml
version: "3.7"
services:
  chronograf:
    command:
    - chronograf
    - --influxdb-url=http://influxdb:8086
    image: chronograf
    ports:
    - mode: ingress
      target: 8888
      published: 8001
      protocol: tcp
  influxdb:
    image: influxdb
  kapacitor:
    environment:
      KAPACITOR_INFLUXDB_0_URLS_0: http://influxdb:8086
    image: kapacitor
  telegraf:
    configs:
    - source: telegraf-conf
      target: /etc/telegraf/telegraf.conf
    image: telegraf
    ports:
    - mode: ingress
      target: 8186
      published: 8000
      protocol: tcp
configs:
  telegraf-conf:
    file: telegraf.conf
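As a side note, render can also feed docker stack deploy directly, since docker stack deploy accepts a Compose file on stdin with -c -. This shortcut is not required for what follows, and it has to be run from a folder containing telegraf.conf because the rendered file references it:
$ docker app render tick.dockerapp --parameters-file tick.dockerapp/dev.yml | docker stack deploy tick -c -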
Inspecting the application
The inspect command provides all the information related to the application:
- Its metadata
- The services involved
- The default parameter values
- The files the application depends on (telegraf.conf in this example)
$ docker app inspect tick
tick 0.1.0

Maintained by: luc

tick stack

Services (4)   Replicas   Ports   Image
------------   --------   -----   -----
chronograf     1          8888    chronograf
influxdb       1                  influxdb
kapacitor      1                  kapacitor
telegraf       1          8186    telegraf

Parameters (2)     Value
--------------     -----
ports.chronograf   8888
ports.telegraf     8186

Attachments (3)   Size
---------------   ----
dev.yml           43B
prod.yml          43B
telegraf.conf     657B

Deploying the Docker App on a Swarm Cluster
First, we go back to the default context which references a Swarm cluster.
$ docker context use default
Next, we deploy the application as a Docker App:
$ docker app install tick.dockerapp --name tick --parameters-file tick.dockerapp/prod.yml
Creating network tick_default
Creating config tick_telegraf-conf
Creating service tick_telegraf
Creating service tick_influxdb
Creating service tick_chronograf
Creating service tick_kapacitor
Application "tick" installed on context "default"

Then we list the deployed application to make sure the one created above is there:
$ docker app list
INSTALLATION   APPLICATION    LAST ACTION   RESULT    CREATED     MODIFIED    REFERENCE
tick           tick (0.1.0)   install       success   4 minutes   3 minutes
Next we list the services running on the Swarm cluster. We can see the values from the prod.yml parameters file have been taken into account (as the exposed ports are 9000 and 9001 for Telegraf and Chronograf respectively).
$ docker service ls
ID             NAME              MODE         REPLICAS   IMAGE               PORTS
75onunrvoxgt   tick_chronograf   replicated   1/1        chronograf:latest   *:9001->8888/tcp
vj1ttws2mw1u   tick_influxdb     replicated   1/1        influxdb:latest
q4brz1i45cai   tick_kapacitor    replicated   1/1        kapacitor:latest
i6kvr37ycnn5   tick_telegraf     replicated   1/1        telegraf:latest     *:9000->8186/tcp

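Docker App also provides its own view of the installation. The status command listed earlier should report the last action, its result, and the state of the underlying services:
$ docker app status tick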
Pushing the application to Docker Hub
A Docker application can be distributed through the Docker Hub via a simple push:
$ docker app push tick --tag lucj/tick:0.1.0

Successfully pushed bundle to docker.io/lucj/tick:0.1.0. Digest is sha256:7a71d2bfb5588be0cb74cd76cc46575b58c433da1fa05b4eeccd5288b4b75bac.

It then appears next to the Docker images on the account it was pushed to:

The application is now ready to be used by anyone; it just needs to be pulled from the Docker Hub:
$ docker app pull lucj/tick:0.1.0
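Installing straight from the registry reference should also work with recent versions of Docker App, without pulling first:
$ docker app install lucj/tick:0.1.0 --name tick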
Before we move to the next part, we will remove the application we deployed on the Swarm cluster:
$ docker app uninstall tick
Removing service tick_chronograf
Removing service tick_influxdb
Removing service tick_kapacitor
Removing service tick_telegraf
Removing config tick_telegraf-conf
Removing network tick_default
Application "tick" uninstalled on context "default"

Deploying the Docker App on Kubernetes
We saw how easy it is to deploy a Docker App on Swarm. We will now deploy it on Kubernetes, and we’ll see it’s just as easy.
First, we set the Docker context to use Kubernetes as the orchestrator.
$ docker context use k8s-demo
k8s-demo
Current context is now "k8s-demo"

Next, we install the application with the exact same command we used to deploy it on Swarm:
$ docker app install tick.dockerapp --name tick --parameters-file tick.dockerapp/prod.yml
Waiting for the stack to be stable and running…
influxdb: Pending
chronograf: Pending
kapacitor: Pending
telegraf: Pending
telegraf: Ready
kapacitor: Ready
chronograf: Ready
influxdb: Ready

Stack tick is stable and running

Application "tick" installed on context "k8s-demo"

Using kubectl, we list the resources to make sure everything was created correctly:
$ kubectl get deploy,po,svc
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/chronograf   1/1     1            1           26s
deployment.extensions/influxdb     1/1     1            1           26s
deployment.extensions/kapacitor    1/1     1            1           26s
deployment.extensions/telegraf     1/1     1            1           26s

NAME                             READY   STATUS    RESTARTS   AGE
pod/chronograf-c55797884-b7rcd   1/1     Running   0          26s
pod/influxdb-67c574845d-bcr8m    1/1     Running   0          26s
pod/kapacitor-57f6787666-82b7l   1/1     Running   0          26s
pod/telegraf-6b8648884c-xcmmx    1/1     Running   0          26s

NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/chronograf             ClusterIP      None            <none>        55555/TCP        25s
service/chronograf-published   LoadBalancer   10.104.26.162   <pending>     9001:31319/TCP   25s
service/influxdb               ClusterIP      None            <none>        55555/TCP        26s
service/kapacitor              ClusterIP      None            <none>        55555/TCP        26s
service/kubernetes             ClusterIP      10.96.0.1       <none>        443/TCP          2d4h
service/telegraf               ClusterIP      None            <none>        55555/TCP        26s
service/telegraf-published     LoadBalancer   10.108.195.24   <pending>     9000:30684/TCP   25s

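Since Docker Desktop publishes LoadBalancer services on localhost, the ingestion loop used in the Swarm section works here as well, this time targeting port 9000. For a quick check, a single point can be sent like this:
$ curl -i -XPOST http://localhost:9000/write --data-binary "temp value=20.5 $(date +%s)000000000"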
Note: The deployment on Kubernetes only works on Docker Desktop or Docker Enterprise, which run the server-side controller needed to handle the stack resource.
Summary
I hope this article provides some insight into Docker App. The project is still quite young, so breaking changes may occur before it reaches 1.0.0, but one thing is already promising: it lets us deploy to Kubernetes without knowing much of anything about Kubernetes!
To learn more about Docker App:

Read our introductory post on Docker App and CNAB 

Find out how to access Docker App
