Community All-Hands Q3: What We’ll Cover

Join us for our next Community All-Hands event on September 1, 2022 at 8am PST/5pm CET. We have an exciting program in store this quarter for you, our Docker community. Make sure to grab a seat, settle in, and join us for this event by registering now!

What we’ll cover

Within the first hour, you can look forward to a recap of recent Docker updates (and a sneak peek at what to expect in the coming months). Then, we’ll present some demos and updates about you: the Docker community.

We’ll also give prizes out to some lucky community members. Stay tuned for more!


Here’s our Main Stage line-up:

A message from our CEO, Scott Johnston
A recap from our CPO, Jake Levrine, on Docker’s efforts to boost developer innovation and productivity in Docker Desktop and Docker Engine
An update from Jim Clark on viewing images through layered SBOMs
A word from Djordje Lukic on multi-platform image support in Docker Desktop

Featuring unique, community tracks


At this virtual event, we want to show you a world’s worth of knowledge that the Docker community has to offer. To do this, we’ll be showcasing thought leadership content from community members across the globe in eight different tracks:

Best Practices. If you’re looking to optimize your image build time, mitigate runtime errors, and learn how to debug your application, join us in the best practices track. We’ll be looking at some real-world example applications in .NET and Golang, and you’ll learn how to interact with the community to solve problems.

Demos. If you learn best by example, this is the track for you. Join us in the demos track to learn about building an integration test suite for legacy code, creating a CV in LaTeX, setting up Kubernetes on Docker Desktop, and more.

Security. No matter how great your app is, if it’s not secure, it’s not going to make it far. Learn about pentesting, compliance, and robustness!

Extensions. Discover helpful, community Docker Extensions. By attending this track, you’ll even learn how to create your own extensions and share them with the world!

Cutting Edge. Deploy your next AI application or Blockchain extension. You’ll also learn about the latest advancements in the tech space.

Open Source. Take your projects to the next level with the Docker-Sponsored Open Source program. We’ll also feature several panels hosted by the open source community.

International Waters. Learn about the work being done in Docker’s international community and how to get involved. We’ll have sessions in French, Spanish, and Portuguese.

Unconference. You’re the most important voice in our Community All-Hands. Join the conversation by engaging in the unconference track!

Reserve your seat now

Our Community All-Hands is specially designed for our Docker community, so it wouldn’t be the same without you! Sign up today for this much-anticipated event, packed with innovation and collaboration. We’ll save you a seat. 
Source: https://blog.docker.com/feed/

How to Develop and Deploy a Customer Churn Prediction Model Using Python, Streamlit, and Docker

Customer churn is a million-dollar problem for businesses today. The SaaS market is becoming increasingly saturated, and customers can choose from plenty of providers. Retention and nurturing are challenging. Online businesses consider customers churned when they stop purchasing goods and services. Customer churn can depend on industry-specific factors, yet some common drivers include lack of product usage, contract tenure, and cheaper prices elsewhere.

Limiting churn strengthens your revenue streams. Businesses and marketers must predict and prevent customer churn to remain sustainable. The best way to do so is by knowing your customers. And spotting behavioral patterns in historical data can help immensely with this. So, how do we uncover them? 

Applying machine learning (ML) to customer data helps companies develop focused customer-retention programs. For example, a marketing department could use an ML churn model to identify high-risk customers and send promotional content to entice them. 

To enable these models to make predictions with new data, knowing how to package a model as a user-facing, interactive application is essential. In this blog, we’ll take an ML model from a Jupyter Notebook environment to a containerized application. We’ll use Streamlit as our application framework to build UI components and package our model. Next, we’ll use Docker to publish our model as an endpoint. 

Docker containerization helps make this application hardware-and-OS agnostic. Users can access the app from their browser through the endpoint, input customer details, and receive a churn probability in a fraction of a second. If a customer’s churn score exceeds a certain threshold, that customer may receive targeted push notifications and special offers. The diagram below puts this into perspective: 

Why choose Streamlit?

Streamlit is an open source, Python-based framework for building UIs and powerful ML apps from a trained model. It’s popular among machine learning engineers and data scientists, as it enables quick web-app development with minimal Python code and a simple API. This API lets users create widgets using pure Python without worrying about backend code, routes, or requests. It provides several components that let you build charts, tables, and different figures to meet your application’s needs. Streamlit can also load models you’ve saved or pickled into the app to make predictions.

Conversely, alternative frameworks like FastAPI, Flask, and Shiny require a strong grasp of HTML/CSS to build interactive, frontend apps. Streamlit is the fastest way to build and share data apps. The Streamlit API is minimal and extremely easy to understand. Minimal changes to your underlying Python script are needed to create an interactive dashboard.
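To give a feel for how little code is involved, here’s a minimal, hypothetical Streamlit app (the file name and widget labels are our own, not part of this project):

import streamlit as st

st.title("Hello, Streamlit")        # page title
name = st.text_input("Your name:")  # renders a text input widget
if st.button("Greet"):              # backend logic runs when the button is clicked
    st.success(f"Hello, {name}!")

Save it as hello.py and launch it with streamlit run hello.py to see the rendered page in your browser.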

Getting Started

git clone https://github.com/dockersamples/customer-churnapp-streamlit

Key Components

An IDE or text editor
Python 3.6+
PIP (or Anaconda)
Not required but recommended: an environment-management tool such as pipenv, venv, virtualenv, or conda
Docker Desktop

Before starting, install Python 3.6+. Afterwards, follow these steps to install all libraries required to run the model on your system. 

Our project directory structure should look like this:

$ tree
.
├── Churn_EDA_model_development.ipynb
├── Churn_model_metrics.ipynb
├── Dockerfile
├── Pipfile
├── Pipfile.lock
├── WA_Fn-UseC_-Telco-Customer-Churn.csv
├── train.py
├── requirements.txt
├── README.md
├── images
│   ├── churndemo.gif
│   ├── icone.png
│   └── image.png
├── model_C=1.0.bin
└── stream_app.py

Install project dependencies in a virtual environment 

We’ll use the Pipenv library to create a virtual Python environment and install the dependencies required to run Streamlit. The Pipenv tool automatically manages project packages through the Pipfile as you install or uninstall them. It also generates a Pipfile.lock file, which helps produce deterministic builds and creates a snapshot of your working environment. Follow these steps to get started.
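For reference, a Pipfile for a project like this might look something like the following sketch (the package list is an assumption; check the repository’s actual Pipfile for the real dependencies):

[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
streamlit = "*"
scikit-learn = "*"
pandas = "*"

[requires]
python_version = "3.8"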

1) Enter your project directory

cd customer-churnapp-streamlit

2) Install Pipenv

pip install pipenv

3) Install the dependencies

pipenv install

4) Enter the pipenv virtual environment

pipenv shell

After completing these steps, you can run scripts from your virtual environment! 

Building a simple machine-learning model

Machine learning uses algorithms and statistical models. These analyze historical data and make inferences from patterns without any explicit programming. Ultimately, the goal is to predict outcomes based on incoming data. 

In our case, we’re creating a model from historical customer data to predict which customers are likely to leave. Since we need to classify customers as either churn or no-churn, we’ll train a simple-yet-powerful classification model. Our model uses logistic regression on a telecom company’s historical customer dataset. This set tracks customer demographics, tenure, monthly charges, and more. However, one key question is also answered: did the customer churn? 

Logistic regression estimates an event’s probability based on a given dataset of independent variables. Since the outcome is a probability, the dependent variable is bounded between 0 and 1. The model will undergo multiple iterations and calculate best-fit coefficients for each variable. This quantifies just how much each impacts churn. With these coefficients, the model can assign churn likelihood scores between 0 and 1 to new customers. Someone who scores a 1 is extremely likely to churn. Someone with a 0 is incredibly unlikely to churn. 

Python has great libraries like Pandas, NumPy, and Matplotlib that support data analytics. Open-source frameworks like Scikit-Learn have pre-built wrappers for various ML models. We’ll use its API to train a logistic-regression model. To understand how this basic churn prediction model was born, refer to Churn_EDA_model_development.ipynb. ML models require many attempts to get right, so we recommend using a Jupyter notebook or an IDE.
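As a rough sketch of what happens inside that notebook, training the logistic-regression churn model with a DictVectorizer might look like this (the cleaning steps and train/validation split are simplified assumptions; see the notebook for the real pipeline):

import pandas as pd
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv('WA_Fn-UseC_-Telco-Customer-Churn.csv')
# ... data cleaning and feature engineering happen here ...

y = (df['Churn'] == 'Yes').astype(int)  # target: 1 = churned
X_dicts = df.drop(columns=['Churn']).to_dict(orient='records')

dv = DictVectorizer(sparse=False)  # one-hot encodes the categorical features
X = dv.fit_transform(X_dicts)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=1)
model = LogisticRegression(C=1.0, max_iter=1000)  # C=1.0 matches the saved model_C=1.0.bin
model.fit(X_train, y_train)
print(model.score(X_val, y_val))  # quick accuracy check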

In a nutshell, we performed the following steps to create our churn prediction model:

1) Initial data preparation
   - Perform sanity checks on data types and column names
   - Make data type corrections if needed
2) Data and feature understanding
   - Check the distribution of numerical features
   - Check the distinct values of categorical features
   - Check the target feature distribution
3) Exploratory data analysis
   - Handle missing values
   - Handle outliers
   - Understand correlations and identify spurious ones
4) Feature engineering and importance
   - Analyze churn rate and risk scores across different cohorts and feature groups
   - Calculate mutual information
   - Check feature correlations
5) Encoding categorical features and scaling numerical features
   - Convert categorical features into numerical values using Scikit-Learn’s helper function: Dictionary Vectorizer
   - Scale numerical features to standardize them into a fixed range
6) Model training
   - Select an appropriate ML algorithm
   - Train the model with custom parameters
7) Model evaluation
   - Refer to Churn_model_metrics.ipynb
   - Use different metrics to evaluate the model, like accuracy, confusion table, precision, recall, ROC curves, AUROC, and cross-validation
8) Repeat steps 6 and 7 for different algorithms and model hyperparameters, then select the best-fit model

It’s best practice to automate the training process using a Python script. Each time we choose to retrain the model with a new parameter or a new dataset, we can execute this script and save the resulting model. 

Check out train.py to explore how to package a model into a script that automates model training! 

Once we uncover the best-fit model, we must save it to reuse it later without running any of the above training code scripts. Let’s get started.

Save the model

In machine learning, we save trained models in a file and restore them to compare each with other models. We can also test them using new data. The save process is called Serialization, while restoration is called Deserialization.

We use a helper Python library called Pickle to save the model. The Pickle module implements a fundamental, yet powerful, algorithm for serializing and de-serializing a Python object structure. 

You can also use the following functions: 

pickle.dump serializes an object hierarchy using dump().
pickle.load deserializes a data stream via the load() function.

We’ve chosen Pickle since it supports models created using the Scikit-Learn framework and offers great loading performance. Similar training frameworks like Tensorflow and Keras have their own built-in libraries for saving models, which are designed to perform well with their architectures. 

Dump the Model and Dictionary Vectorizer

import pickle

with open('model_C=1.0.bin', 'wb') as f_out:  # the with statement closes the file automatically
    pickle.dump((dict_vectorizer, model), f_out)

We just saved a binary file named model_C=1.0.bin, writing the dict_vectorizer (used for one-hot encoding) and the logistic regression model into it as a tuple.

Create a new Python file

Now, we’ll create a stream_app.py script that defines our app layout and the backend logic that’s triggered when users interact with different UI components. Crucially, this file is reusable with any model.

This is just an example. We strongly recommend exploring more components and design options from the Streamlit library. If you’re skilled in HTML and JavaScript, you can create your own Streamlit components that grant you more control over your app’s layout. 

First, import the required libraries:

import pickle
import streamlit as st
import pandas as pd
from PIL import Image

Next, you’ll need to load the same binary file we saved earlier to deserialize the model and dictionary vectorizer.

model_file = 'model_C=1.0.bin'

with open(model_file, 'rb') as f_in:
    dv, model = pickle.load(f_in)

The following code snippet loads the images and displays them on your screen. The st.image portion helps display an image on the frontend:

image = Image.open('images/icone.png')
image2 = Image.open('images/image.png')

st.image(image, use_column_width=False)

To display items in the sidebar, you’ll need the following code snippet:

add_selectbox = st.sidebar.selectbox("How would you like to predict?",
                                     ("Online", "Batch"))

st.sidebar.info('This app is created to predict Customer Churn')
st.sidebar.image(image2)

Streamlit’s sidebar renders a vertical, collapsible bar where users can select the type of model scoring they want to perform — like batch scoring (predictions for multiple customers) or online scoring (for single customers). We also add text and images to decorate the sidebar. 

The following code helps you display the main title:

st.title("Predicting Customer Churn")

When the user selects the ‘Online’ option, you can display input widgets to collect customer details and generate predictions:

if add_selectbox == 'Online':
    gender = st.selectbox('Gender:', ['male', 'female'])
    seniorcitizen = st.selectbox('Customer is a senior citizen:', [0, 1])
    partner = st.selectbox('Customer has a partner:', ['yes', 'no'])
    dependents = st.selectbox('Customer has dependents:', ['yes', 'no'])
    phoneservice = st.selectbox('Customer has phoneservice:', ['yes', 'no'])
    multiplelines = st.selectbox('Customer has multiplelines:', ['yes', 'no', 'no_phone_service'])
    internetservice = st.selectbox('Customer has internetservice:', ['dsl', 'no', 'fiber_optic'])
    onlinesecurity = st.selectbox('Customer has onlinesecurity:', ['yes', 'no', 'no_internet_service'])
    onlinebackup = st.selectbox('Customer has onlinebackup:', ['yes', 'no', 'no_internet_service'])
    deviceprotection = st.selectbox('Customer has deviceprotection:', ['yes', 'no', 'no_internet_service'])
    techsupport = st.selectbox('Customer has techsupport:', ['yes', 'no', 'no_internet_service'])
    streamingtv = st.selectbox('Customer has streamingtv:', ['yes', 'no', 'no_internet_service'])
    streamingmovies = st.selectbox('Customer has streamingmovies:', ['yes', 'no', 'no_internet_service'])
    contract = st.selectbox('Customer has a contract:', ['month-to-month', 'one_year', 'two_year'])
    paperlessbilling = st.selectbox('Customer has paperlessbilling:', ['yes', 'no'])
    paymentmethod = st.selectbox('Payment Option:', ['bank_transfer_(automatic)', 'credit_card_(automatic)', 'electronic_check', 'mailed_check'])
    tenure = st.number_input('Number of months the customer has been with the current telco provider:', min_value=0, max_value=240, value=0)
    monthlycharges = st.number_input('Monthly charges:', min_value=0, max_value=240, value=0)
    totalcharges = tenure * monthlycharges
    output = ""
    output_prob = ""
    input_dict = {
        "gender": gender,
        "seniorcitizen": seniorcitizen,
        "partner": partner,
        "dependents": dependents,
        "phoneservice": phoneservice,
        "multiplelines": multiplelines,
        "internetservice": internetservice,
        "onlinesecurity": onlinesecurity,
        "onlinebackup": onlinebackup,
        "deviceprotection": deviceprotection,
        "techsupport": techsupport,
        "streamingtv": streamingtv,
        "streamingmovies": streamingmovies,
        "contract": contract,
        "paperlessbilling": paperlessbilling,
        "paymentmethod": paymentmethod,
        "tenure": tenure,
        "monthlycharges": monthlycharges,
        "totalcharges": totalcharges
    }

    if st.button("Predict"):
        X = dv.transform([input_dict])
        y_pred = model.predict_proba(X)[0, 1]
        churn = y_pred >= 0.5
        output_prob = float(y_pred)
        output = bool(churn)
        st.success('Churn: {0}, Risk Score: {1}'.format(output, output_prob))

Your app’s frontend leverages Streamlit’s input widgets like select box, slider, and number input. Users interact with these widgets by entering values. Input data is then packaged into a Python dictionary. The backend — which handles the prediction score computation logic — is defined inside the st.button layer and awaits the user trigger. When this happens, the dictionary is passed to the dictionary vectorizer which performs encoding for categorical features and makes it consumable for the model. 

Streamlit passes any transformed inputs to the model and calculates the churn prediction score. Using the threshold of 0.5, the churn score is converted into a binary class. The risk score and churn class are returned to the frontend via Streamlit’s success component. This displays a success message. 

To display the file upload button when the user selects “Batch” from the sidebar, use the following code snippet:

if add_selectbox == 'Batch':
    file_upload = st.file_uploader("Upload csv file for predictions", type=["csv"])
    if file_upload is not None:
        data = pd.read_csv(file_upload)
        # Vectorize every row rather than a single record wrapped in a list
        X = dv.transform(data.to_dict(orient='records'))
        y_pred = model.predict_proba(X)[:, 1]  # churn probability per customer
        churn = y_pred >= 0.5                  # boolean churn class per customer
        st.write(churn)

When the user wants to batch score customers, the page layout will dynamically change to match this selection. Streamlit’s file uploader component will display a related widget. This prompts the user to upload a CSV file, which is then read using the pandas library and processed by the dictionary vectorizer and model. Prediction scores are displayed on the frontend using st.write.

The above application skeleton is wrapped within a main function in the below script. Running the script invokes the main function. Here’s how that final script looks:

import pickle
import streamlit as st
import pandas as pd
from PIL import Image

model_file = 'model_C=1.0.bin'

with open(model_file, 'rb') as f_in:
    dv, model = pickle.load(f_in)

def main():
    image = Image.open('images/icone.png')
    image2 = Image.open('images/image.png')
    st.image(image, use_column_width=False)
    add_selectbox = st.sidebar.selectbox(
        "How would you like to predict?",
        ("Online", "Batch"))
    st.sidebar.info('This app is created to predict Customer Churn')
    st.sidebar.image(image2)
    st.title("Predicting Customer Churn")
    if add_selectbox == 'Online':
        gender = st.selectbox('Gender:', ['male', 'female'])
        seniorcitizen = st.selectbox('Customer is a senior citizen:', [0, 1])
        partner = st.selectbox('Customer has a partner:', ['yes', 'no'])
        dependents = st.selectbox('Customer has dependents:', ['yes', 'no'])
        phoneservice = st.selectbox('Customer has phoneservice:', ['yes', 'no'])
        multiplelines = st.selectbox('Customer has multiplelines:', ['yes', 'no', 'no_phone_service'])
        internetservice = st.selectbox('Customer has internetservice:', ['dsl', 'no', 'fiber_optic'])
        onlinesecurity = st.selectbox('Customer has onlinesecurity:', ['yes', 'no', 'no_internet_service'])
        onlinebackup = st.selectbox('Customer has onlinebackup:', ['yes', 'no', 'no_internet_service'])
        deviceprotection = st.selectbox('Customer has deviceprotection:', ['yes', 'no', 'no_internet_service'])
        techsupport = st.selectbox('Customer has techsupport:', ['yes', 'no', 'no_internet_service'])
        streamingtv = st.selectbox('Customer has streamingtv:', ['yes', 'no', 'no_internet_service'])
        streamingmovies = st.selectbox('Customer has streamingmovies:', ['yes', 'no', 'no_internet_service'])
        contract = st.selectbox('Customer has a contract:', ['month-to-month', 'one_year', 'two_year'])
        paperlessbilling = st.selectbox('Customer has paperlessbilling:', ['yes', 'no'])
        paymentmethod = st.selectbox('Payment Option:', ['bank_transfer_(automatic)', 'credit_card_(automatic)', 'electronic_check', 'mailed_check'])
        tenure = st.number_input('Number of months the customer has been with the current telco provider:', min_value=0, max_value=240, value=0)
        monthlycharges = st.number_input('Monthly charges:', min_value=0, max_value=240, value=0)
        totalcharges = tenure * monthlycharges
        output = ""
        output_prob = ""
        input_dict = {
            "gender": gender,
            "seniorcitizen": seniorcitizen,
            "partner": partner,
            "dependents": dependents,
            "phoneservice": phoneservice,
            "multiplelines": multiplelines,
            "internetservice": internetservice,
            "onlinesecurity": onlinesecurity,
            "onlinebackup": onlinebackup,
            "deviceprotection": deviceprotection,
            "techsupport": techsupport,
            "streamingtv": streamingtv,
            "streamingmovies": streamingmovies,
            "contract": contract,
            "paperlessbilling": paperlessbilling,
            "paymentmethod": paymentmethod,
            "tenure": tenure,
            "monthlycharges": monthlycharges,
            "totalcharges": totalcharges
        }
        if st.button("Predict"):
            X = dv.transform([input_dict])
            y_pred = model.predict_proba(X)[0, 1]
            churn = y_pred >= 0.5
            output_prob = float(y_pred)
            output = bool(churn)
            st.success('Churn: {0}, Risk Score: {1}'.format(output, output_prob))

    if add_selectbox == 'Batch':
        file_upload = st.file_uploader("Upload csv file for predictions", type=["csv"])
        if file_upload is not None:
            data = pd.read_csv(file_upload)
            # Vectorize every row rather than a single record wrapped in a list
            X = dv.transform(data.to_dict(orient='records'))
            y_pred = model.predict_proba(X)[:, 1]  # churn probability per customer
            churn = y_pred >= 0.5                  # boolean churn class per customer
            st.write(churn)

if __name__ == '__main__':
    main()

You can download the complete script from our Dockersamples GitHub page.

Execute the script

streamlit run stream_app.py

View your Streamlit app

You can now view your Streamlit app in your browser. Navigate to the following:

Local URL: http://localhost:8501
Network URL: http://192.168.1.23:8501

Containerizing the Streamlit app with Docker

Let’s explore how to easily run this app within a Docker container, using a Docker Official image. First, you’ll need to download Docker Desktop. Docker Desktop accelerates the image-building process while making useful images more discoverable. Complete this installation process once your download is finished.

Docker uses a Dockerfile to specify each image’s “layers.” Each layer stores important changes stemming from the base image’s standard configuration. Create an empty Dockerfile in your Streamlit project:

touch Dockerfile

Next, use your favorite text editor to open this Dockerfile. We’re going to build out this new file piece by piece. To start, let’s define a base image:

FROM python:3.8.12-slim

It’s now time to ensure that the latest pip modules are installed:

RUN /usr/local/bin/python -m pip install --upgrade pip

Next, let’s quickly create a directory to house our image’s application code. This is the working directory for your application:

WORKDIR /app

The following COPY instruction copies the requirements file from the host machine to the container image:

COPY requirements.txt ./requirements.txt
RUN pip install -r requirements.txt
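You’ll also need to copy the rest of your application code into the working directory, since the container needs stream_app.py and the model binary at runtime (this matches the COPY . . line in the completed Dockerfile below):

COPY . .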

The EXPOSE instruction tells Docker that your container is listening on the specified network ports at runtime:

EXPOSE 8501

Finally, create an ENTRYPOINT to make your image executable:

ENTRYPOINT ["streamlit", "run"]
CMD ["stream_app.py"]

After assembling each piece, here’s your complete Dockerfile:

FROM python:3.8.12-slim
RUN /usr/local/bin/python -m pip install --upgrade pip
WORKDIR /app
COPY requirements.txt ./requirements.txt
RUN pip install -r requirements.txt
EXPOSE 8501
COPY . .
ENTRYPOINT ["streamlit", "run"]
CMD ["stream_app.py"]

Build your image

docker build -t customer_churn .

Run the app

docker run -d -p 8501:8501 customer_churn

View the app within Docker Desktop

You can do this by navigating to the Containers interface, which lists your running application as a named container:

Access the app

First, select your app container in the list. This opens the Logs view. Click the button with a square icon (with a slanted arrow) located next to the Stats pane. This opens your app in your browser:

Alternatively, you can hover over your container in the list and click that icon once the righthand toolbar appears.

Develop and deploy your next machine learning model, today

Congratulations! You’ve successfully explored how to build and deploy customer churn prediction models using Streamlit and Docker. With a single Dockerfile, we’ve demonstrated how easily you can build an interactive frontend and deploy this application in seconds. 

With just a few extra steps, you can use this tutorial to build applications with much greater complexity. You can make your app more useful by implementing push-notification logic in the app — letting the marketing team send promotional emails to high-churn customers on the fly. Happy coding.
Source: https://blog.docker.com/feed/

KubeVirt on Killercoda on KubeVirt

itnext.io – This article provides a high-level overview of how Killercoda uses KubeVirt to schedule disposable learning environments. We also talk about how KubeVirt can run on Killercoda in some kind of…
Source: news.kubernauts.io

How to Use the Redis Docker Official Image

Maintained in partnership with Redis, the Redis Docker Official Image (DOI) lets developers quickly and easily containerize a Redis instance. It streamlines the cross-platform deployment process — even letting you use Redis with edge devices if they support your workflows. 

Developers have pulled the Redis DOI over one billion times from Docker Hub. As the world’s most popular key-value store, Redis helps apps concurrently access critical bits of data while remaining resource friendly. It’s highly performant, in-memory, networked, and durable. It also stands apart from relational databases like MySQL and PostgreSQL that use tabular data structures. From day one, Redis has also been open source. 

Finally, Redis cluster nodes are horizontally scalable — making it a natural fit for containerization and multi-container operation. Read on as we explore how to use the Redis Docker Official Image to containerize and accelerate your Redis database deployment.

In this tutorial:

What is the Redis Docker Official Image?
How to run Redis in Docker
Use a quick pull command
Start your Redis instance
Set up Redis persistent storage
Connect with the Redis CLI
Configurations and modules
Notes on using Redis modules
Get up and running with Redis today

What is the Redis Docker Official Image?

The Redis DOI is a building block for Redis Docker containers. It’s an executable software package that tells Docker and your application how to behave. It bundles together source code, dependencies, libraries, tools, and other core components that support your application. In this case, these components determine how your app and Redis database interact.

Our Redis Docker Official Image supports multiple CPU architectures. An assortment of over 50 supported tags lets you choose the best Redis image for your project. They’re also multi-layered and run using a default configuration (if you’re simply using docker pull). Complexity and base images also vary between tags. 

That said, you can also configure your Redis Official Image’s Dockerfile as needed. We’ll touch on this while outlining how to use the Redis DOI. Let’s get started.

How to run Redis in Docker

Before proceeding, we recommend installing Docker Desktop. Desktop is built upon Docker Engine and packages together the Docker CLI, Docker Compose, and more. Running Docker Desktop lets you use Docker commands. It also helps you manage images and containers using the Docker Dashboard UI. 

Use a quick pull command

Next, you’ll need to pull the Redis DOI to use it with your project. The quickest method involves visiting the image page on Docker Hub, copying the docker pull command, and running it in your terminal:
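For the default :latest tag, that command is simply:

docker pull redis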

Your output confirms that Docker has successfully pulled the :latest Redis image. You can also verify this by hopping into Docker Desktop and opening the Images interface from the left sidebar. Your redis image automatically appears in the list:

We can also see that our new Redis image is 111.14 MB in size. This is pretty lightweight compared to many images. However, using an alpine variant like redis:alpine3.16 further slims your image.

Now that you’re acquainted with Docker Desktop, let’s jump into our CLI workflow to get Redis up and running. 

Start your Redis instance

Redis acts as a server, and related server processes power its functionality. We need to start a Redis instance, or software server process, before linking it with our application. Luckily, you can create a running instance with just one command: 

docker run --name some-redis -d redis

We recommend naming your container. This makes it easier to reference it later on and to run additional commands that involve it. Your container will run until you stop it.

The -d flag in this command tells Docker to run your Redis service in “detached” mode, so Redis runs in the background. Your container will also automatically exit when its root process exits. You’ll see that we’re not explicitly telling the service to “start” within this command; by leaving this verbiage out, our Redis service will start and continue running, remaining usable to our application.
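A few standard Docker commands are handy for checking on (or stopping) that instance:

docker ps                # list running containers; some-redis should appear
docker logs some-redis   # inspect the Redis server's log output
docker stop some-redis   # stop the container when you're done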

Set up Redis persistent storage

Persistent storage is crucial when you want your application to save data between runs. You can have Redis write its data to a destination like an SSD. Persistence is also useful for keeping log files across restarts. 

You can capture every Redis operation using the Redis Database (RDB) method. This lets you designate snapshot intervals and record data at certain points in time. However, that running container from our initial docker run command is using port 6379. You should remove (or stop) this container before moving on, since it’s not critical for this example. 

Once that’s done, this command triggers persistent storage snapshots every 60 seconds: 

docker run --name some-redis -d redis redis-server --save 60 1 --loglevel warning

The RDB approach is valuable as it enables “set-and-forget” persistence. It also generates more logs. Logging can be useful for troubleshooting, yet it also requires you to monitor accumulation over time. 

However, you can also forego persistence entirely or choose another option. To learn more, check out Redis’ documentation. 

Redis stores your persisted data in the VOLUME /data location. These connected volumes are shareable between containers. This shareability becomes useful when Redis lives within one container and your application occupies another. 
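For example, here’s a sketch that pairs the snapshot command above with a named volume mounted at /data (the volume name redis-data is our own choice, not prescribed by the image):

docker volume create redis-data
docker run --name some-redis -v redis-data:/data -d redis redis-server --save 60 1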

Connect with the Redis CLI

The Redis CLI lets you run commands directly within your running Redis container. However, this isn’t automatically possible via Docker. Enter the following commands to enable this functionality: 

docker network create some-network

docker run -it --network some-network --rm redis redis-cli -h some-redis

Your Redis service understands Redis CLI commands. Numerous commands are supported, as are different CLI modes. Read through the Redis CLI documentation to learn more. 
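Once connected, a quick smoke test at the prompt might look like this:

some-redis:6379> PING
PONG
some-redis:6379> SET greeting "hello"
OK
some-redis:6379> GET greeting
"hello"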

Once you have CLI functionality up and running, you’re free to leverage Redis more directly!

Configurations and modules

Finally, we’ve arrived at customization. While you can run a Redis-powered app using defaults, you can tweak your Dockerfile to grab your pre-existing redis.conf file. This better supports production applications. While Redis can successfully start without these files, they’re central to configuring your services. 

You can see what a redis.conf file looks like on GitHub. Otherwise, here’s a sample Dockerfile: 

FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]

You can also use docker run to achieve this. However, you should first do two things for this method to work correctly. First, create the /myredis/conf directory on your host machine. This is where your configuration files will live.

Second, open Docker Desktop and click the Settings gear in the upper right. Choose Resources > File Sharing to view your list of directories. You’ll see a grayed-out directory entry at the bottom, which is an input field for a named directory. Type in /myredis/conf there and hit the “+” button to locally verify your file path:

You’re now ready to run your command! 

docker run -v /myredis/conf:/usr/local/etc/redis --name myredis redis redis-server /usr/local/etc/redis/redis.conf

The Dockerfile gives you more granular control over your image’s construction. Alternatively, the CLI option lets you run your Redis container without a Dockerfile. This may be more approachable if your needs are more basic. Just ensure that your mapped directory is writable and exists locally. 

Also, consider the following: 

If you edit your Redis configurations on the fly, you’ll have to use CONFIG REWRITE to automatically identify and apply any field changes on the next run.
You can also apply configuration changes manually.

Remember how we connected the Redis CLI earlier? You can now pass arguments directly through the Redis CLI (ideal for testing) and edit configs while your database server is running. 
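For example, changing a setting at runtime and persisting it back to your configuration file looks like this (CONFIG REWRITE only succeeds when the server was started with a config file):

127.0.0.1:6379> CONFIG SET maxmemory 100mb
OK
127.0.0.1:6379> CONFIG REWRITE
OK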

Notes on using Redis modules

Redis modules let you extend your Redis service, build new services, and adapt your database without taking a performance hit. Redis also processes them in memory. These standard modules support querying, search, JSON processing, filtering, and more. As a result, Docker Hub’s redislabs/redismod image bundles seven of these official modules together:

RedisBloom
RedisTimeSeries
RedisJSON
RedisAI
RedisGraph
RedisGears
RediSearch

If you’d like to spin up this container and experiment, simply enter docker run -d -p 6379:6379 redislabs/redismod in your terminal. You can open Docker Desktop to view this container like we did earlier on. 

You can view Redis’ curated modules or visit the Redis Modules Hub to explore further.

Get up and running with Redis today

We’ve explored how to successfully Dockerize Redis. Going further, it’s easy to grab external configurations and change how Redis operates on the fly. This makes it much easier to control how Redis interacts with your application. Head on over to Docker Hub and pull your first Redis Docker Official Image to start experimenting. 

The Redis Stack also helps extend Redis within Docker. It adds modern, developer-friendly data models and processing engines. The Stack also grants easy access to full-text search, document store, graphs, time series, and probabilistic data structures. Redis has published related container images through the Docker Verified Publisher (DVP) program. Check them out!
Source: https://blog.docker.com/feed/

Docker Captain Take 5 — James Spurin

Docker Captains are select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions, ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing James Spurin, who recently joined the Captains program. He is a DevOps consultant and course/content creator at DiveInto, based in Hertfordshire, United Kingdom. Check out James’ socials on LinkedIn and Twitter!

How/when did you first discover Docker?

I’m part of the earlier ISP generation, so my early career involved working at Demon Internet, one of the first internet providers in the UK back in 1998-2000.

Back then, it was cool to host and serve your personal ISP services on your own managed system (generally hidden in a cupboard at home and served via a cable modem to the world) for the likes of Web/DNS/Email and other services.

Whilst times have changed, and I’ve moved to more appropriate cloud-based solutions for essential services and hosting, I’ve always been passionate about cosplaying with systems administration. A friend with the same passion recommended linuxserver.io to me. It’s a great resource that manages and maintains a fleet of common Docker images.

I transitioned many of the services I was manually running to Docker, either using their images or their Dockerfiles as a reference for learning how to create my own Docker images.

If you’re looking for a great way of starting with Docker, I highly recommend looking at the resources available on linuxserver.io.

The advice we would share with new starters back in my early ISP career days was to create and self-host an ISP in a box.

In essence, we’d combine a Web Server (using Apache at the time), Email (using Exim), and a DNS server (using Bind), alongside a custom domain name, to make it available on the internet. It provided a great learning opportunity for understanding how these protocols work.

Today my advice would be to try this out, but also with Docker in the mix!

What is your favorite Docker command?

My favorite Docker command would be docker buildx. With the growth of arm architecture, docker buildx is an excellent resource that I rely on tremendously. Being a content creator, I leverage Docker extensively for creating lab environments that anyone can utilize with their own resources. See my “Dive Into Ansible” repository for an example that utilizes docker-compose and has had over 250k pulls.

Just a few years ago, building images for arm alongside AMD64 could have been considered a niche in my area. Only a tiny percentage of my students were using a Raspberry Pi for personal computing.

These days, however, especially with the growth of Apple Silicon, cross-built images are much more of a necessity when providing community container images. As a result, Buildx is one of my favorite CLI Plugins and is a step I consider essential as a milestone in a successful Docker project.

What is your top tip for working with Docker that others may not know?

Consider Dockerfiles (or automated image builds) and guided instructions as a standard part of your projects from Day 1. Your users will thank you and your likelihood of open source contributors will grow.

Take, for example, the Python programming language. When browsing GitHub/Gitlab for Python projects, it’s common to see a requirements.txt file with dependencies related to the project.

The expectation is then for the consumer to install dependencies via pip. An experienced developer may utilize virtual environments, whereas a less experienced developer may install this straight into their running system (thus, potential cross-contamination).

Whilst Python 3+ is the standard for most common Python projects, there may be nuances between a version of Python locally installed and that used within a codebase. We should also consider that some dependencies require compilation, which presents another obstacle for general usage, especially if the likes of Developer Compilation Tools aren’t available.

By providing a Dockerfile that utilizes a trusted Python image and offering automated prebuilt images using the likes of DockerHub in conjunction with GitHub/Gitlab (to trigger automated builds), individuals can get involved and run projects as a single command in a matter of minutes. Such efforts also provide great reuse opportunities with Kubernetes, CI/CD pipelines, and automated testing.
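As a minimal sketch, such a Dockerfile might look like the following (the base image tag and entry-point file name are illustrative assumptions, not a prescription):

FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]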

What’s the coolest Docker demo you have done/seen?

The Flappy Moby efforts that took place at KubeCon Valencia. I liked this so much that I captured this at the time and created a video!

The project was novel; after all, who doesn’t love these types of games? It was a fantastic showpiece at the event. As a content creator and someone who has worked on creating games to demonstrate and teach technical concepts, I was also very appreciative of the efforts involved around the graphical elements to bring this to life.

Seeing Docker Desktop extensions in action inspired my own Docker Desktop extension journey and follow-ups. When I returned from Kubecon, I created a Docker Desktop extension that instantly provides an Ansible-based lab with six nodes and web terminals. Check out the related video of how this extension was made!

What have you worked on in the past six months that you’re particularly proud of?

I created a free Kubernetes Introduction course available on YouTube and Udemy which is receiving an incredible amount of views and positive feedback. This was a very personal project for me that focused on community giveback.

When I first started learning Kubernetes there were areas that I found frustrating. Learning resources in this space often show theoretical overviews of core Kubernetes architecture but lack hands-on demonstrations. I made this course to ensure that anyone could get a good understanding of Kubernetes alongside hands-on use of the essential components in just one hour.

The course also provided me with a unique opportunity to share perspectives on overlooked areas relating to Docker Inc. For example, I cover Docker’s positive contributions to Cloud Native: donating containerd and runC to the Cloud Native Computing Foundation and the Open Container Initiative, respectively.

It was a pleasure to work on a project that covered many of my favorite passions in one go, including, Kubernetes, Docker, Cloud Native, content, and community.

What do you anticipate will be Docker’s biggest announcement this year?

I’ve already mentioned this above, but it’s Docker Desktop extensions for me. When considered alongside Docker Desktop (now native for Windows, Mac, and Linux), we have a consistent Docker Desktop environment and Extension platform that can provide a consistent development resource on all major OS platforms.

What are some personal goals for the next year with respect to the Docker community?

My aims are focused on community, and I’m already working on content that will heavily emphasize Docker in conjunction with Kubernetes (there’s so much opportunity to do more with the Docker Desktop Kubernetes installation). As the tagline in the Docker Slack announcement channel says… Docker, Docker, Docker!!!

What was your favorite thing about DockerCon 2022?

Community. While watching the various talks and discussions, I was active in the chat rooms.

The participants were highly engaged, and I made some great connections with individuals who were mutually chatting at the time.

There were also some very unexpected moments. For example, Justin Cormack and Ajeet Singh Raina were using some interesting vintage microphones that kicked off some good chat room and post-event discussions.

Looking to the distant future, what is the technology that you’re most excited about and that you think holds a lot of promise?

A technology that has blown my mind is Dall-E 2, an AI solution that can automatically create images based on textual information. If you haven’t heard of this, you must check this video out.

It’s possible at the moment to try out Dall-E Mini. Whilst this isn’t as powerful as Dall-E 2, it can be fun to use.

For example, this is a unique image created by AI using the input of “Docker”. Considering that this technology is not re-using existing images and has learnt the concept of “Docker” to make this, it is truly remarkable.

Rapid fire questions…

What new skill have you mastered during the pandemic?

Coffee is a personal passion and a fuel that I both depend upon and enjoy! The Aeropress is a cheap, simple, and effective device with many opportunities. I’ve explored how to make a fantastic Aeropress coffee, and I think I’ve nailed it! For those interested, check out some feeds from the Aeropress Barista Championships.

Cats or Dogs?

Cats. I have two, one named Whisper Benedict and the other named Florence Rhosyn. Whisper is a British Blue, and Flo is a British Blue and White. At the time, we only intended to get one cat, but the lady at the cattery offered us Flo at a discount, and we couldn’t resist.

The lady at the cattery was a breeder of British Blues and British Whites, and the Dad from the Blues had snuck in with the Mum of the Whites; alas, you can guess what happened. This gives Flo her very unique mottled colors.

The two of them are extraordinary characters. Although Whisper is the brawn of the two and would be assumed to be the Alpha cat, he’s an absolute softie and doesn’t mind anybody picking him up.

On the other hand, what Flo lacks in physique, she makes up with brains and agility.

Both my children Lily (11) and Anwen (4) can hold Flo, and nothing will happen. They’ve all grown up together, and it’s as if she knows that they are children. However, should you try to pick her up as an adult, you’re not getting away unscathed. Flo also seems to have this uncanny ability to know when we’re intending on taking her to the vets, even without a carry basket in sight!

Despite their characteristics, we wouldn’t have our furry family members any other way.

Salty, sour, or sweet?

Sweet!

Beach or mountains?

Beaches (with some favouritism towards Skiathos) please!

Your most often used emoji?

🚀
Source: https://blog.docker.com/feed/

Build and Deploy a Retail Store Items Detection System using No-Code AI Vision at the Edge

Low-code and no-code platforms have risen sharply in popularity over the past few years. These platforms let users with little or no coding knowledge build apps up to 20x faster. They’ve even evolved to a point where they’ve become indispensable tools for expert developers. Such platforms are highly visual and follow a user-friendly, modular approach: you drag and drop visually represented software components into place to create an app.
Node-RED is a low-code programming environment for event-driven applications. It’s a tool for wiring together hardware devices, APIs, and online services in new and interesting ways. Node-RED provides a browser-based flow editor that makes it easy to wire together flows using the wide range of nodes within the palette, and you can deploy a flow to its runtime with a single click. You can also create JavaScript functions within the editor using the rich-text editor. Finally, Node-RED ships with a built-in library that lets you save useful and reusable functions, templates, or flows.
Node-RED’s lightweight runtime is built upon Node.js, taking full advantage of Node’s event-driven, non-blocking model. This helps it run at the edge of the network on low-cost hardware like the Raspberry Pi, as well as in the cloud. With over 225,000 modules in Node’s package repository, it’s easy to extend the range of palette nodes and add new capabilities. The flows created in Node-RED are stored using JSON, which is easily importable and exportable for sharing purposes. An online flow library lets you publish your best flows publicly.
Users have downloaded our Node-RED Docker Official Image over 100 million times from Docker Hub. What’s driving this significant download rate? There’s an ever-increasing demand for Docker containers to streamline development workflows, while giving Node-RED developers the freedom to innovate with their choice of project-tailored tools, application stacks, and deployment environments. Our Node-RED Official Image also supports multiple architectures like amd64, arm32v6, arm32v7, arm64v8, and s390x.
Why is containerizing Node-RED important?
The Node-RED project has a huge community of third-party nodes available for installation. Note, too, that the community generally recommends against using odd-numbered Node.js versions. That detail is easy for new users to miss, and they may end up debugging Node compatibility issues as a result.
Running your Node-RED app in a Docker container lets users get started quickly with sensible defaults and customization via environmental variables. Users no longer need to worry about compatibility issues. Next, Docker enables users to build, share, and run containerized Node-RED applications — made accessible for developers of all skill levels.
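For example, the Node-RED image documentation describes a TZ environment variable for setting the container’s time zone; a run command using it might look like this (mirroring the port mapping and container name used later in this article):

docker run -it -p 1880:1880 -e TZ=Europe/London --name mynodered nodered/node-red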
Building your application
In this tutorial, you’ll learn how to build a retail store items detection system using Node-RED. First, you’ll set up Node-RED manually on an IoT Edge device without Docker. Second, you’ll learn how to run it within a Docker container via a one-line command. Finally, you’ll see how Docker containers help you build and deploy this detection system using Node-RED. Let’s jump in.
Hardware components

Seeed Studio reComputer J1010 with Jetson Nano
USB/IP camera module
Ethernet cable/USB WiFi adapter
Keyboard and mouse

Software Components

NVIDIA JetPack v4.6.1 with SDK components
Node 16.x
NPM
Docker
Docker Compose

Preparing your Seeed Studio reComputer and development environment
For this demonstration, we’re using a Seeed Studio reComputer. The Seeed Studio reComputer J1010 is powered by the Jetson Nano development kit. It’s a small, powerful, palm-sized computer that makes modern AI accessible to embedded developers. It’s built around the NVIDIA Jetson Nano system-on-module (SoM) and designed for edge AI applications.
Wire it up
Plug your WiFi adapter/Ethernet cable, Keyboard/Mouse, and USB camera into the reComputer system and turn it on using the power cable. Follow the steps to perform initial system startup.

Before starting, make sure you have Node installed in your system. Then, follow these steps to set up Node-RED on your Edge device.
Installing Node.js
Ensure that you have the latest stable version of Node.js installed in your system.
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt-get install -y nodejs
Verify Node.js and npm versions
The above installer will install both Node.js and npm. Let’s verify that they’re installed properly:
# check Node.js version
nodejs -v
v16.16.0
# check npm version
npm -v
8.11.0
Installing Node-RED
To install Node-RED, you can use the npm command that comes with Node.js:
sudo npm install -g --unsafe-perm node-red

changed 294 packages, and audited 295 packages in 17s

38 packages are looking for funding
run `npm fund` for details

found 0 vulnerabilities
Running Node-RED
Use the node-red command to start Node-RED in your terminal:
node-red
27 Jul 15:08:36 – [info]

Welcome to Node-RED
===================

27 Jul 15:08:36 – [info] Node-RED version: v3.0.1
27 Jul 15:08:36 – [info] Node.js version: v16.16.0
27 Jul 15:08:36 – [info] Linux 4.9.253-tegra arm64 LE
27 Jul 15:08:37 – [info] Loading palette nodes
27 Jul 15:08:38 – [info] Settings file : /home/ajetraina/.node-red/settings.js
27 Jul 15:08:38 – [info] Context store : 'default' [module=memory]
27 Jul 15:08:38 – [info] User directory : /home/ajetraina/.node-red
27 Jul 15:08:38 – [warn] Projects disabled : editorTheme.projects.enabled=false
27 Jul 15:08:38 – [info] Flows file : /home/ajetraina/.node-red/flows.json
27 Jul 15:08:38 – [info] Creating new flow file
27 Jul 15:08:38 – [warn]

———————————————————————
Your flow credentials file is encrypted using a system-generated key.

If the system-generated key is lost for any reason, your credentials
file will not be recoverable, you will have to delete it and re-enter
your credentials.

You should set your own key using the ‘credentialSecret’ option in
your settings file. Node-RED will then re-encrypt your credentials
file using your chosen key the next time you deploy a change.
———————————————————————

27 Jul 15:08:38 – [info] Server now running at http://127.0.0.1:1880/
27 Jul 15:08:38 – [warn] Encrypted credentials not found
27 Jul 15:08:38 – [info] Starting flows
27 Jul 15:08:38 – [info] Started flows
 
You can then access the Node-RED editor by navigating to http://localhost:1880 in your browser.
The log output shares some important pieces of information:

Installed versions of Node-RED and Node.js
Any errors encountered while trying to load the palette nodes
The location of your Settings file and User Directory
The name of the flow file currently being used

 

Node-RED consists of a Node.js-based runtime that provides a web address for accessing the flow editor. You create your application in the browser by dragging nodes from your palette into a workspace. From there, you start wiring them together. With one click, Node-RED deploys your application back to the runtime, where it’s run.
Running Node-RED in a Docker container
The Node-RED Official Image is based on our Node.js Alpine Linux images, in order to keep them as slim as possible. Run the following command to create and mount a named volume called node_red_data to the container’s /data directory. This will allow us to persist any flow changes.
docker run -it -p 1880:1880 -v node_red_data:/data --name mynodered nodered/node-red
 
You can now access the Node-RED editor via http://localhost:1880 or http://<ip_address_Jetson>:1880.
Building and running your retail store items detection system
To build a fully functional retail store items detection system, follow these next steps.
Write the configuration files
We must define a couple of files that add Node-RED configurations — such as custom themes and custom npm packages.
First, create an empty folder called “node-red-config”:
mkdir node-red-config
Change your directory to node-red-config and run the following command to set up a new NPM package.
npm init
This utility will walk you through the package.json file creation process. It only covers the most common items, and tries to guess sensible defaults.
{
"name": "node-red-project",
"description": "A Node-RED Project",
"version": "0.0.1",
"private": true,
"dependencies": {
"@node-red-contrib-themes/theme-collection": "^2.2.3",
"node-red-seeed-recomputer": "git+https://github.com/Seeed-Studio/node-red-seeed-recomputer.git"
}
}
Create a file called settings.js inside the node-red-config folder and enter the following content. This file defines Node-RED server, runtime, and editor settings. We’ll mainly change the editor settings. For more information about individual settings, refer to the documentation.

module.exports = {

    flowFile: 'flows.json',

    flowFilePretty: true,

    uiPort: process.env.PORT || 1880,

    logging: {
        console: {
            level: "info",
            metrics: false,
            audit: false
        }
    },

    exportGlobalContextKeys: false,

    externalModules: {
    },

    editorTheme: {
        theme: "midnight-red",

        page: {
            title: "reComputer Flow Editor"
        },
        header: {
            title: " Flow Editor<br/>",
            image: "/data/seeed.webp", // or null to remove image
        },

        palette: {
        },

        projects: {
            enabled: false,
            workflow: {
                mode: "manual"
            }
        },

        codeEditor: {
            lib: "ace",
            options: {
                theme: "vs",
            }
        }
    },

    functionExternalModules: true,

    functionGlobalContext: {
    },

    debugMaxLength: 1000,

    mqttReconnectTime: 15000,

    serialReconnectTime: 15000,

}

 
You can download this image and put it under the node-red-config folder. This image file’s location is defined inside the settings.js file we just created.
Write the script
Create an empty file by running the following command:
touch docker-ubuntu.sh
To print colored output, let’s first define a few colors in the shell script. These will be reflected in the output when you execute the script later:
IBlack='\033[0;90m'  # Black
IRed='\033[0;91m'    # Red
IGreen='\033[0;92m'  # Green
IYellow='\033[0;93m' # Yellow
IBlue='\033[0;94m'   # Blue
IPurple='\033[0;95m' # Purple
ICyan='\033[0;96m'   # Cyan
IWhite='\033[0;97m'  # White
 
The sudo command allows a normal user to run a command with elevated privileges so they can perform certain administrative tasks. Since this script runs multiple tasks that require such privileges, it’s recommended to check that the script is running as root (via sudo):
if ! [ $(id -u) = 0 ] ; then
    echo "$0 must be run as sudo user or root"
    exit 1
fi
 
The reComputer for Jetson is sold with 16 GB of eMMC. This ready-to-use hardware has Ubuntu 18.04 LTS and NVIDIA JetPack 4.6 installed, so the remaining user space available is about 2 GB. This could be a significant obstacle to using the reComputer for training and deployment in some projects. Hence, it’s sometimes important to remove unnecessary packages and libraries. This code snippet confirms that you have enough storage to install all included packages and Docker images.
If you have the required storage space, it’ll continue to the next section. Otherwise, the installer will ask if you want to free up some device space. Typing “y” for “yes” will delete unnecessary files and packages to clear some space.

storage=$(df | awk '{ print $4 }' | awk 'NR==2{print}')
# If more than ~3.8GB is free, proceed; otherwise offer to clean up
if [ $storage -gt 3800000 ] ; then
  echo -e "${IGreen}Your storage space left is $(($storage / 1000000))GB, you can install this application."
else
  echo -e "${IRed}Sorry, you don't have enough storage space to install this application. You need about 3.8GB of storage space."
  echo -e "${IYellow}However, you can regain about 3.8GB of storage space by performing the following:"
  echo -e "${IYellow}- Remove unnecessary packages (~100MB)"
  echo -e "${IYellow}- Clean up apt cache (~1.6GB)"
  echo -e "${IYellow}- Remove thunderbird, libreoffice and related packages (~400MB)"
  echo -e "${IYellow}- Remove cuda, cudnn, tensorrt, visionworks and deepstream samples (~800MB)"
  echo -e "${IYellow}- Remove local repos for cuda, visionworks, linux-headers (~100MB)"
  echo -e "${IYellow}- Remove GUI (~400MB)"
  echo -e "${IYellow}- Remove static libraries (~400MB)"
  echo -e "${IRed}So, please agree to uninstall the above. Press [y/n]"
  read yn
  if [ "$yn" = "y" ] ; then
    echo -e "${IGreen}Starting to remove the above-mentioned packages"
    # Remove unnecessary packages, clean the apt cache, and remove thunderbird and libreoffice
    apt update
    apt autoremove -y
    apt clean
    apt remove thunderbird libreoffice-* -y

    # Remove samples
    rm -rf /usr/local/cuda/samples \
      /usr/src/cudnn_samples_* \
      /usr/src/tensorrt/data \
      /usr/src/tensorrt/samples \
      /usr/share/visionworks* ~/VisionWorks-SFM*Samples \
      /opt/nvidia/deepstream/deepstream*/samples

    # Remove local repos
    apt purge cuda-repo-l4t-*local* libvisionworks-*repo -y
    rm /etc/apt/sources.list.d/cuda*local* /etc/apt/sources.list.d/visionworks*repo*
    rm -rf /usr/src/linux-headers-*

    # Remove the GUI
    apt-get purge gnome-shell ubuntu-wallpapers-bionic light-themes chromium-browser* libvisionworks libvisionworks-sfm-dev -y
    apt-get autoremove -y
    apt clean -y

    # Remove static libraries
    rm -rf /usr/local/cuda/targets/aarch64-linux/lib/*.a \
      /usr/lib/aarch64-linux-gnu/libcudnn*.a \
      /usr/lib/aarch64-linux-gnu/libnvcaffe_parser*.a \
      /usr/lib/aarch64-linux-gnu/libnvinfer*.a \
      /usr/lib/aarch64-linux-gnu/libnvonnxparser*.a \
      /usr/lib/aarch64-linux-gnu/libnvparsers*.a

    # Remove an additional ~100MB
    apt autoremove -y
    apt clean
  else
    exit 1
  fi
fi
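After the cleanup finishes, you can confirm how much space was reclaimed with an optional check from the terminal:
df -h /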

This code snippet checks whether the required software (curl, Docker, nvidia-docker2, and Docker Compose) is installed, and installs whatever is missing:
apt update

if ! [ -x "$(command -v curl)" ]; then
  apt install -y curl
fi

if ! [ -x "$(command -v docker)" ]; then
  apt install -y docker
fi

if ! [ -x "$(command -v nvidia-docker)" ]; then
  apt install -y nvidia-docker2
fi

if ! [ -x "$(command -v docker-compose)" ]; then
  # Fetch a prebuilt docker-compose binary and install it to /usr/local/bin
  curl -SL https://files.seeedstudio.com/wiki/reComputer/compose.tar.bz2 -o /tmp/compose.tar.bz2
  tar xvf /tmp/compose.tar.bz2 -C /usr/local/bin
  chmod +x /usr/local/bin/docker-compose
fi
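Once these checks have run, you can confirm that each tool is available with a few optional version checks (run these outside the script):
curl --version
docker --version
docker-compose --version
command -v nvidia-docker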
Next, the script creates a node-red directory under $HOME and copies all of the Node-RED configuration files into it, as shown in the snippet below:
mkdir -p $HOME/node-red
cp node-red-config/* $HOME/node-red
 
The snippet below lets the script bring up the container services using the Docker Compose CLI:
docker compose --file docker-compose.yaml up -d
 
Note: You’ll see how to create a Docker Compose file in the next section.
Within the script, let's specify the command that installs the npm packages we listed earlier in package.json: the custom Node-RED theme collection and the Seeed package that provides three Node-RED nodes corresponding to video input, detection, and video view. We'll circle back to these nodes later.
docker exec node-red-contrib-ml-node-red-1 bash -c "cd /data && npm install"
 
Finally, the following command embedded in the script restarts the node-red-contrib-ml-node-red-1 container so your theme changes take effect:
docker restart node-red-contrib-ml-node-red-1
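Docker Compose names containers using the pattern <project>-<service>-1, where the project defaults to the enclosing directory's name. If the container name above doesn't match your setup, you can list the running container names first:
docker ps --format '{{.Names}}'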
Lastly, save the script as docker-ubuntu.sh.
Define your services within a Compose file
Create an empty file by running the following command inside the same directory as docker-ubuntu.sh:
touch docker-compose.yaml
Add the following lines to your docker-compose.yaml file. These specify which services Docker should start together at application launch:

services:
  node-red:
    image: nodered/node-red:3.0.1
    restart: always
    network_mode: "host"
    volumes:
      - "$HOME/node-red:/data"
    user: "0"
  dataloader:
    image: baozhu/node-red-dataloader:v1.2
    restart: always
    runtime: nvidia
    network_mode: "host"
    privileged: true
    devices:
      - "/dev:/dev"
      - "/var/run/udev:/var/run/udev"
  detection:
    image: baozhu/node-red-detection:v1.2
    restart: always
    runtime: nvidia
    network_mode: "host"
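Because YAML is indentation-sensitive, it's worth validating the file before the installer uses it. This optional check parses the file and prints the resolved configuration:
docker compose --file docker-compose.yaml config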

 
Your application has the following parts:

Three services backed by Docker images: the Node-RED editor (node-red), dataloader, and detection
The dataloader service container broadcasts an OpenCV video stream (either from a USB webcam or an IP camera with RTSP) on port 5550 using the Pub/Sub messaging pattern. Note that privileged: true is required so the service container can access USB camera devices. You can confirm the stream is being published with the quick check after this list.
The detection service container grabs that video stream and performs inference using a TensorRT implementation of YOLOv5, an object-detection algorithm that can identify objects in real time.
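As a quick sanity check once the services are running, you can confirm that the dataloader has bound port 5550 (ss ships with iproute2 on Ubuntu; this assumes the publisher uses a TCP socket):
ss -tlnp | grep 5550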

Execute the script
Open your terminal and run the following command:
sudo ./docker-ubuntu.sh
The script takes approximately two to three minutes to run to completion.
View your services
Once the script has finished, you can verify that your container services are up and running:
docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e487c20eb87b baozhu/node-red-dataloader:v1.2 "python3 python/pub_…" 48 minutes ago Up About a minute retail-store-items-detection-nodered-dataloader-1
4441bc3c2a2c baozhu/node-red-detection:v1.2 "python3 python/yolo…" 48 minutes ago Up About a minute retail-store-items-detection-nodered-detection-1
dd5c5e37d60d nodered/node-red:3.0.1 "./entrypoint.sh" 48 minutes ago Up About a minute (healthy) retail-store-items-detection-nodered-node-red-1
 
Visit http://127.0.0.1:1880/ to access the app.
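If the app doesn't come up, a service's logs are the first place to look. For example, to follow the detection service's output (run from the directory containing docker-compose.yaml):
docker compose --file docker-compose.yaml logs -f detection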

You’ll find built-in nodes (video input, detection, and video view) available in the palette:

Let's place the nodes by dragging them one by one from the palette into the workspace. First, drag video input from the palette to the workspace. Double-click “Video Input” to view its properties, and select “Local Camera”.
Note: We choose a local camera here to grab the video stream from the connected USB webcam. However, you can also grab the video stream from an IP camera via RTSP.
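If you go the RTSP route instead, the stream address typically follows this pattern; the host, credentials, and path below are placeholders, not values from this project:
rtsp://admin:password@192.168.0.100:554/stream1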

You'll see that Node-RED selects the “COCO dataset” model by default:

Next, drag Video View from the palette to the workspace. If you double-click on Video View, you’ll discover that msg.payload is already chosen for you under the Property section.

Wire up the nodes
Once you have all the nodes placed in the workspace, it's time to wire them together. To wire two nodes, press the left mouse button on a node's port, drag to the destination node, and release the mouse button (as shown in the following screenshot).

Click Deploy in the top-right corner to start the deployment process. You should now see the detection process working as Node-RED detects items.

Conclusion
The ultimate goal of modernizing software development is to deliver high-value software to end users even faster. Low-code technologies like Node-RED, combined with Docker, help us achieve this by shortening the time from ideation to software delivery. Docker accelerates the process of building, running, and sharing modern AI applications.
Docker Official Images help you develop your own unique applications, no matter what tech stack you're accustomed to. With one YAML file, we've demonstrated how Docker Compose helps you easily build Node-RED apps. Docker Compose scales to real-world microservices applications as well; with just a few extra steps, you can apply this tutorial while building applications of much greater complexity. Happy coding!
References:

Project Source Code
Getting Started with Docker in Seeed Studio
Node-RED
Node-RED Library
Node-RED Docker Hub Repository

Quelle: https://blog.docker.com/feed/