Integrated Terminal for Running Containers, Extended Integration with Containerd, and More in Docker Desktop 4.12

Docker Desktop 4.12 is now live! This release brings some key quality-of-life improvements to the Docker Dashboard. We’ve also introduced changes to container image management, available as an experimental feature. Finally, we’ve made it easier to find useful extensions. Let’s dive in.

Execute commands in a running container straight from the Docker Dashboard

Developers often need to explore a running container’s contents to understand its current state or debug it when issues arise. With Docker Desktop 4.12, you can quickly start an interactive session in a running container directly through a Docker Dashboard terminal. This easy access lets you run commands without needing an external CLI. 

Opening this integrated terminal is equivalent to running docker exec -it <container-id> /bin/sh (or docker exec -it <container-id> cmd.exe if you’re using Windows containers) in your system terminal. Docker detects a running container’s default user from the image’s Dockerfile. If none is specified, it defaults to root. Placing this in the Docker Dashboard gives you real-time access to logs and other information about your running containers.
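For example, from a system terminal the equivalent session looks like this (the container ID is a placeholder):

```shell
docker ps                                # find the running container's ID
docker exec -it <container-id> /bin/sh   # open an interactive shell inside it
# Inside the shell, `whoami` prints the user Docker detected from the
# image's Dockerfile (root if none was specified).
```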

Your session is persisted if you navigate throughout the Dashboard and return — letting you easily pick up where you left off. The integrated terminal also supports copy, paste, search, and session clearing.

Still want to use your external terminal? No problem. We’ve added two easy ways to launch a session externally.

Option 1: Use the “Open in External Terminal” button straight from this tab. Even if you prefer an integrated terminal, this might help you run commands and watch logs simultaneously, for example.

Option 2: Change your default settings to always open your system default terminal. We’ve added the option to choose what fits your workflow. After applying this setting, the “Open in terminal” button from the Containers tab will always open your system terminal.

Extending Docker Desktop’s integration with containerd

We’re extending Docker Desktop’s integration with containerd to include image management. This integration is available as an opt-in, experimental feature within this latest release.

Docker’s involvement in the containerd project extends all the way back to 2016. Docker has used containerd within the Docker Engine to manage the container lifecycle (creating, starting, and stopping) for a while now! 

This new feature is a step towards deeper containerd integration with Docker Engine. It lets you use containerd to store images and then push and pull them. When enabled in the latest Docker Desktop version, this experimental feature lets you use the following Docker commands with containerd under the hood: run, commit, build, push, load, and save. 
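As a rough illustration, these are ordinary Docker CLI calls; with the experimental setting enabled in Docker Desktop, image storage for them is handled by containerd under the hood (the image name is a placeholder):

```shell
docker build -t myapp .         # build an image into the containerd store
docker run --rm myapp           # run it
docker save myapp -o myapp.tar  # export it
docker load -i myapp.tar        # import it back
```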

This integration has the following benefits:

Containerd’s snapshotter implementation helps you quickly plug in new features. One example is using stargz to lazily pull images on startup.
The containerd content store can natively store multi-platform images and other OCI-compatible objects. This lets you build and manipulate multi-platform images, for example, or leverage other related features.

You can learn more in our recent announcement, which fully explains containerd’s integration with Docker.

Easily discover extensions

We’ve added two new ways to interact with extensions in Docker Desktop 4.12.

Docker Extensions are now available directly within the Docker menu. From there, you can browse the Marketplace for new extensions, manage your installed extensions, or change extension settings. 

You can also search for extensions in the Extensions Marketplace! Narrow things down by name or keyword to find the tool you need.

Two new extensions have also joined the Extensions Marketplace:

Docker Volumes Backup & Share

Docker Volumes Backup & Share lets you effortlessly back up, clone, restore, and share Docker volumes. You can now easily create copies of your volumes and share them through SSH or by pushing them to a registry. Learn more about Volumes Backup & Share on Docker Hub. 
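For context, the extension automates a workflow you could approximate by hand. A commonly used manual sketch (volume and file names are placeholders) archives a volume’s contents from a throwaway container:

```shell
# Back up a named volume to a tarball in the current directory,
# producing a file you could then copy or share by hand.
docker run --rm \
  -v myvolume:/data \
  -v "$PWD":/backup \
  alpine tar czf /backup/myvolume.tar.gz -C /data .
```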

Mini Cluster

Mini Cluster enables developers who work with Apache Mesos to deploy and test their Mesos applications with ease. Learn more about Mini Cluster on Docker Hub.

Try out Dev Environments with Awesome Compose samples

We’ve updated our GitHub Awesome Compose samples to highlight projects that you can easily launch as Dev Environments in Docker Desktop. This helps you quickly understand how to add multi-service applications as Dev Environment projects. Look for the following green icon in the list of Docker Compose application samples:

Here’s our new Awesome Compose/Dev Environments feature in action:

Get started with Docker Desktop 4.12 today

While we’ve explored some headlining features in this release, Docker Desktop 4.12 also adds important security enhancements under the hood. To learn about these fixes and more, browse our full release notes. 

Have any feedback for us? Upvote, comment, or submit new ideas via our in-product links or our public roadmap. 

Looking to become a new Docker Desktop user? Visit our Get Started page to jumpstart your development journey.
Source: https://blog.docker.com/feed/

How to Build and Run Next.js Applications with Docker, Compose, & NGINX

At DockerCon 2022, Kathleen Juell, a Full Stack Engineer at Sourcegraph, shared some tips for combining Next.js, Docker, and NGINX to serve static content. With nearly 400 million active websites today, efficient content delivery is key to attracting new web application users.

In some cases, using Next.js can boost deployment efficiency, accelerate time to market, and help attract web users. Follow along as we tackle building and running Next.js applications with Docker. We’ll also cover key processes and helpful practices for serving that static content. 

Why serve static content with a web application?

According to Kathleen, the following are the benefits of serving static content: 

Fewer moving parts, like databases or other microservices, directly impact page rendering. This backend simplicity minimizes attack surfaces.
Static content stands up better (with fewer uncertainties) to higher traffic loads.
Static websites are fast since they don’t require repeated rendering.
Static website code is stable and relatively unchanging, improving scalability.
Simpler content means more deployment options.

Since we know why building a static web app is beneficial, let’s explore how.

Building our services stack

To serve static content efficiently, a three-pronged services approach composed of Next.js, NGINX, and Docker is useful. While it’s possible to run a Next.js server, offloading those tasks to an NGINX server is preferable. NGINX is event-driven and excels at rapidly serving content thanks to its single-threaded architecture. This enables performance optimization even during periods of higher traffic.  

Luckily, containerizing a cross-platform NGINX server instance is pretty straightforward. This setup is also resource friendly. Below are some of the reasons why Kathleen leveraged these three technologies.

Docker Desktop also gives us the tools needed to build and deploy our application. It’s important to install Docker Desktop before recreating Kathleen’s development process. 

The following trio of services will serve our static content:

First, our auth-backend has a build context rooted in a directory and a port mapping. It’s based on a slimmer alpine flavor of the Node.js Docker Official Image and uses named Dockerfile build stages to prevent reordered COPY instructions from breaking. 

Second, our client service has its own build context and a named volume mapped to the staticbuild:/app/out directory. This lets us mount our volume within our NGINX container. We’re not mapping any ports since NGINX will serve our content.

Third, we’ll containerize an NGINX server that’s based on the NGINX Docker Official Image.

As Kathleen mentions, ending this client service’s Dockerfile with a RUN command is key. We want the container to exit after completing the yarn build process. This process generates our static content and should only happen once for a static web application.
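A minimal sketch of what such a client Dockerfile could look like (the image tag, file names, and build steps here are illustrative assumptions, not Kathleen’s exact file). Because the final instruction is a RUN, the container exits once the build completes:

```dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
# Generates the static site into /app/out, then the container exits.
RUN yarn build
```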

Each component is accounted for within its own container. Now, how do we seamlessly spin up this multi-container deployment and start serving content? Let’s dive in!

Using Docker Compose and Docker volumes

The simplest way to orchestrate multi-container deployments is with Docker Compose. This lets us define multiple services within a unified configuration, without having to juggle multiple files or write complex code. 

We use a compose.yml file to describe our services, their contexts, networks, ports, volumes, and more. These configurations influence app behavior. 

Here’s what our complete Docker Compose file looks like: 

services:
  auth-backend:
    build:
      context: ./auth-backend
    ports:
      - "3001:3001"
    networks:
      - dev

  client:
    build:
      context: ./client
    volumes:
      - staticbuild:/app/out
    networks:
      - dev

  nginx:
    build:
      context: ./nginx
    volumes:
      - staticbuild:/app/public
    ports:
      - "8080:80"
    networks:
      - dev

networks:
  dev:
    driver: bridge

volumes:
  staticbuild:

You’ll also see that we’ve defined our networks and volumes in this file. These services all share the dev network, which lets them communicate with each other while remaining discoverable. You’ll also see a common volume between these services. We’ll now explain why that’s significant.

Using mounted volumes to share files

Specifically, this example leverages named volumes to share files between containers. By mapping the staticbuild volume to Next.js’ default out directory location, you can export your build and serve content with your NGINX server. This typically exists as one or more HTML files. Note that NGINX uses the app/public directory by comparison. 

While Next.js helps present your content on the frontend, NGINX delivers those important resources from the backend. 

Leveraging A/B testing to create tailored user experiences

You can customize your client-side code to change your app’s appearance, and ultimately the end-user experience. This code impacts how page content is displayed while something like an NGINX server is running. It may also determine which users see which content — something that’s common based on sign-in status, for example. 

Testing helps us understand how application changes can impact these user experiences, both positively and negatively. A/B testing helps us uncover the “best” version of our application by comparing features and page designs. How does this look in practice? 

Specifically, you can use cookies and hooks to track user login activity. When a user logs in, they’ll see something like user stories (from Kathleen’s example). Logged-out users won’t see this content. Alternatively, a web user might only have access to certain pages once they’re authenticated. It’s your job to monitor user activity, review any feedback, and determine if those changes bring clear value. 

These are just two use cases for A/B testing, and the possibilities are nearly endless when it comes to conditionally rendering static content with Next.js. 

Containerize your Next.js static web app

There are many different ways to serve static content. However, Kathleen’s three-service method remains an excellent example. It’s useful both during exploratory testing and in production. To learn more, check out Kathleen’s complete talk. 

By containerizing each service, your application remains flexible and deployable across any platform. Docker can help developers craft accessible, customizable user experiences within their web applications. Get started with Next.js and Docker today to begin serving your static web content! 

Additional Resources

Check out the NGINX Docker Official Image
Read about the Node Docker Official Image
Learn about getting started with Docker Compose
View our awesome-compose sample GitHub projects

August 2022 Newsletter

Community All-Hands: September 1st
Join us tomorrow at our Community All-Hands on September 1st! This virtual event is an opportunity for the community to come together with Docker staff to learn, share, and collaborate. Don’t miss your opportunity to win Docker swag!

Register Now

News you can use and monthly highlights:
6 Docker Compose Best Practices for Dev and Prod – Give your development team a quick knowledge boost with these tips and best practices for using Docker Compose in development and production environments.
Docker Multistage Builds for Hugo – Learn how to keep your Docker container images nice and slim with the use of multistage builds for a Hugo documentation project.
How to create a dockerized Nuxt 3 development environment – Learn how Docker simplifies and accelerates the Nuxt.js development workflow. It can also help you build a bleeding-edge, local web app while ensuring consistency between development and production.
Optimize Dockerfile images for NextJS – Is the size of your Next.js Docker image impacting your overall CI/CD pipeline? Here’s an article that’ll help you to improve the development and production lifecycle by optimizing your Docker images efficiently.
Building and Testing Multi-Arch Images with Docker Buildx and QEMU – Building Docker images for multiple architectures has become increasingly popular. This guide walks you through how to build a Docker image for linux/amd64 and linux/arm64 using Docker Buildx. It’ll also walk you through using QEMU to emulate an ARM environment for multiple platform builds.
Implementation And Containerization Of Microservices Using .NET Core 6 And Docker – This article helps you create microservices using the .NET Core 6 Web API and Docker. It’ll also walk you through how to connect them using Ocelot API Gateway.

Testing with Telepresence
Wishing for a way to synchronize local changes with a remote Kubernetes environment? There is a Docker Extension for that! Learn how Telepresence partners with Docker Desktop to help you run integration tests quickly and where to get started.

Learn More

The latest tips and tricks from the community:

How to Build and Deploy a Task Management Application Using Go
Containerizing a Legendary PetClinic App Built with Spring Boot
Build and Deploy a Retail Store Items Detection System Using No-Code AI Vision at the Edge
Slim.AI Docker Extension for Docker Desktop

Dear Moby: Advice for developers
Looking for developer-specific advice and insights? Introducing our Dear Moby collection — a web series and advice column inspired by and made for YOU, our Docker community. Check out the show, read the column, and submit your app dev questions here.

Watch Dear Moby

Educational content created by the experts at Docker:

How I Built My First Containerized Java Web Application
How to Use the Apache httpd Docker Official Image
How to Use the Redis Docker Official Image
How to Develop and Deploy a Customer Churn Prediction Model Using Python, Streamlit, and Docker

Docker Captain: Julien Maitrehenry
Julien has been working with Docker since 2014 and is now joining as a Docker Captain! His friends call him “Mister Docker” because he’s always sharing his knowledge with others. Julien’s top tip for working with Docker is to build cross-platform images.

Meet the Captain

See what the Docker team has been up to:

Bulk User Add for Docker Business and Teams
Virtual Desktop Support, Mac Permission Changes, & New Extensions in Docker Desktop 4.11
Demo: Managing and Securing Your Docker Workflows
Conversation with RedMonk: Developer Engagement in the Remote Work Era

Docker Hub v1 API deprecation
On September 5th, 2022, Docker plans to deprecate the Docker Hub v1 API endpoints that access information related to Docker Hub repositories. Please update to the v2 endpoints to continue using the Docker Hub API.

Learn More

Subscribe to our newsletter to get the latest news, blogs, tips, how-to guides, best practices, and more from Docker experts sent directly to your inbox once a month.


Community All-Hands Q3: What We’ll Cover

Join us for our next Community All-Hands event on September 1, 2022 at 8am PST/5pm CET. We have an exciting program in store this quarter for you, our Docker community. Make sure to grab a seat, settle in, and join us for this event by registering now!

What we’ll cover

Within the first hour, you can look forward to a recap of recent Docker updates (and a sneak peek at what to expect in the coming months). Then, we’ll present some demos and updates about you: the Docker community.

We’ll also give prizes out to some lucky community members. Stay tuned for more!


Here’s our Main Stage line-up:

A message from our CEO, Scott Johnston
A recap from our CPO, Jake Levrine, on Docker’s efforts to boost developer innovation and productivity in Docker Desktop and Docker Engine
An update from Jim Clark on viewing images through layered SBOMs
A word from Djordje Lukic on multi-platform image support in Docker Desktop

Featuring unique, community tracks


At this virtual event, we want to show you the world’s worth of knowledge that the Docker community has to offer. To do this, we’ll be showcasing thought leadership content from community members across the globe with eight different tracks:

Best Practices. If you’re looking to optimize your image build time, mitigate runtime errors, and learn how to debug your application, join us in the best practices track. We’ll be looking at some real-world example applications in .Net and Golang, and you’ll learn how to interact with the community to solve problems.

Demos. If you learn best by example, this is the track for you. Join us in the demos track to learn about building an integration test suite for legacy code, creating a CV in LaTeX, setting up Kubernetes on Docker Desktop, and more.

Security. No matter how great your app is, if it’s not secure, it’s not going to make it far. Learn about pentesting, compliance, and robustness!

Extensions. Discover helpful, community Docker Extensions. By attending this track, you’ll even learn how to create your own extensions and share them with the world!

Cutting Edge. Deploy your next AI application or Blockchain extension. You’ll also learn about the latest advancements in the tech space.

Open Source. Take your projects to the next level with the Docker-Sponsored Open Source program. We’ll also feature several panels hosted by the open source community.

International Waters. Learn about the work being done in Docker’s international community and how to get involved. We’ll have sessions in French, Spanish, and Portuguese.

Unconference. You’re the most important voice in our Community All-Hands. Join the conversation by engaging in the unconference track!

Reserve your seat now

Our Community All-Hands is specially designed for our Docker community, so it wouldn’t be the same without you! Sign up today for this much-anticipated event, packed with innovation and collaboration. We’ll save you a seat. 

How to Develop and Deploy a Customer Churn Prediction Model Using Python, Streamlit, and Docker

Customer churn is a million-dollar problem for businesses today. The SaaS market is becoming increasingly saturated, and customers can choose from plenty of providers. Retention and nurturing are challenging. Online businesses consider a customer churned when they stop purchasing goods and services. Customer churn can depend on industry-specific factors, yet some common drivers include lack of product usage, contract tenure, and cheaper prices elsewhere.

Limiting churn strengthens your revenue streams. Businesses and marketers must predict and prevent customer churn to remain sustainable. The best way to do so is by knowing your customers. And spotting behavioral patterns in historical data can help immensely with this. So, how do we uncover them? 

Applying machine learning (ML) to customer data helps companies develop focused customer-retention programs. For example, a marketing department could use an ML churn model to identify high-risk customers and send promotional content to entice them. 

To enable these models to make predictions with new data, knowing how to package a model as a user-facing, interactive application is essential. In this blog, we’ll take an ML model from a Jupyter Notebook environment to a containerized application. We’ll use Streamlit as our application framework to build UI components and package our model. Next, we’ll use Docker to publish our model as an endpoint. 

Docker containerization helps make this application hardware-and-OS agnostic. Users can access the app from their browser through the endpoint, input customer details, and receive a churn probability in a fraction of a second. If a customer’s churn score exceeds a certain threshold, that customer may receive targeted push notifications and special offers. The diagram below puts this into perspective: 

Why choose Streamlit?

Streamlit is an open source, Python-based framework for building UIs and powerful ML apps from a trained model. It’s popular among machine learning engineers and data scientists as it enables quick web-app development — requiring minimal Python code and a simple API. This API lets users create widgets using pure Python without worrying about backend code, routes, or requests. It provides several components that let you build charts, tables, and different figures to meet your application’s needs. Streamlit also utilizes models that you’ve saved or pickled into the app to make predictions.

Conversely, alternative frameworks like FastAPI, Flask, and Shiny require a strong grasp of HTML/CSS to build interactive, frontend apps. Streamlit is the fastest way to build and share data apps. The Streamlit API is minimal and extremely easy to understand. Minimal changes to your underlying Python script are needed to create an interactive dashboard.

Getting Started

git clone https://github.com/dockersamples/customer-churnapp-streamlit

Key Components

An IDE or text editor
Python 3.6+
PIP (or Anaconda)
Not required but recommended: an environment-management tool such as pipenv, venv, virtualenv, or conda
Docker Desktop

Before starting, install Python 3.6+. Afterwards, follow these steps to install all libraries required to run the model on your system. 

Our project directory structure should look like this:

$ tree
.
├── Churn_EDA_model_development.ipynb
├── Churn_model_metrics.ipynb
├── Dockerfile
├── Pipfile
├── Pipfile.lock
├── WA_Fn-UseC_-Telco-Customer-Churn.csv
├── train.py
├── requirements.txt
├── README.md
├── images
│ ├── churndemo.gif
│ ├── icone.png
│ └── image.png
├── model_C=1.0.bin
└── stream_app.py

Install project dependencies in a virtual environment 

We’ll use the Pipenv library to create a virtual Python environment and install the dependencies required to run Streamlit. The Pipenv tool automatically manages project packages through the Pipfile as you install or uninstall them. It also generates a Pipfile.lock file, which helps produce deterministic builds and creates a snapshot of your working environment. Follow these steps to get started.

1) Enter your project directory

cd customer-churnapp-streamlit

2) Install Pipenv

pip install pipenv

3) Install the dependencies

pipenv install

4) Enter the pipenv virtual environment

pipenv shell

After completing these steps, you can run scripts from your virtual environment! 

Building a simple machine-learning model

Machine learning uses algorithms and statistical models. These analyze historical data and make inferences from patterns without any explicit programming. Ultimately, the goal is to predict outcomes based on incoming data. 

In our case, we’re creating a model from historical customer data to predict which customers are likely to leave. Since we need to classify customers as either churn or no-churn, we’ll train a simple-yet-powerful classification model. Our model uses logistic regression on a telecom company’s historical customer dataset. This set tracks customer demographics, tenure, monthly charges, and more. However, one key question is also answered: did the customer churn? 

Logistic regression estimates an event’s probability based on a given dataset of independent variables. Since the outcome is a probability, the dependent variable is bounded between 0 and 1. The model will undergo multiple iterations and calculate best-fit coefficients for each variable. This quantifies just how much each impacts churn. With these coefficients, the model can assign churn likelihood scores between 0 and 1 to new customers. Someone who scores a 1 is extremely likely to churn. Someone with a 0 is incredibly unlikely to churn. 
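To make the bounded probability concrete, here is a tiny, self-contained sketch of the logistic function with made-up coefficients (these are illustrative values, not the trained model’s actual weights):

```python
import math

def churn_probability(features, weights, intercept):
    # Logistic regression: a weighted sum passed through the sigmoid,
    # which bounds the output between 0 and 1.
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Toy example: two standardized features with hypothetical coefficients.
score = churn_probability([0.5, -1.2], [0.8, -0.4], -0.1)
```

A score near 1 flags a customer as very likely to churn; a score near 0 flags a very safe customer.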

Python has great libraries like Pandas, NumPy, and Matplotlib that support data analytics. Open source frameworks like Scikit-Learn have pre-built wrappers for various ML models. We’ll use their API to train a logistic regression model. To understand how this basic churn prediction model was born, refer to Churn_EDA_model_development.ipynb. ML models require many attempts to get right. Therefore, we recommend using a Jupyter notebook or an IDE.

In a nutshell we performed the below steps to create our churn prediction model:

1) Initial data preparation
   Perform sanity checks on data types and column names
   Make data type corrections if needed
2) Data and feature understanding
   Check the distribution of numerical features
   Check the distinct values of categorical features
   Check the target feature distribution
3) Exploratory data analysis
   Handle missing values
   Handle outliers
   Understand correlations and identify spurious ones
4) Feature engineering and importance
   Analyze churn rate and risk scores across different cohorts and feature groups
   Calculate mutual information
   Check feature correlations
5) Encoding categorical features and scaling numerical features
   Convert categorical features into numerical values using Scikit-Learn’s helper function: Dictionary Vectoriser
   Scale numerical features to standardize them into a fixed range
6) Model training
   Select an appropriate ML algorithm
   Train the model with custom parameters
7) Model evaluation
   Refer to Churn_model_metrics.ipynb
   Use different metrics to evaluate the model, like accuracy, confusion table, precision, recall, ROC curves, AUROC, and cross-validation
8) Repeat steps 6 and 7 for different algorithms and model hyperparameters, then select the best-fit model
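The encoding step above can be illustrated with a minimal, pure-Python stand-in for Scikit-Learn’s DictVectorizer (a simplified sketch, not the library’s actual implementation):

```python
def dict_vectorize(records):
    # Simplified stand-in for Scikit-Learn's DictVectorizer:
    # string-valued features become one-hot "key=value" columns,
    # numeric features pass through unchanged.
    feature_names = []
    for record in records:
        for key, value in record.items():
            name = f"{key}={value}" if isinstance(value, str) else key
            if name not in feature_names:
                feature_names.append(name)
    rows = []
    for record in records:
        row = []
        for name in feature_names:
            if "=" in name:
                key, expected = name.split("=", 1)
                row.append(1.0 if record.get(key) == expected else 0.0)
            else:
                row.append(float(record.get(name, 0.0)))
        rows.append(row)
    return feature_names, rows

names, matrix = dict_vectorize([{"contract": "month-to-month", "tenure": 12}])
```

Each categorical value becomes its own 0/1 column, which is exactly the shape a linear model like logistic regression can consume.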

It’s best practice to automate the training process using a Python script. Each time we choose to retrain the model with a new parameter or a new dataset, we can execute this script and save the resulting model. 

Check out train.py to explore how to package a model into a script that automates model training! 

Once we uncover the best-fit model, we must save it to reuse it later without running any of the above training code scripts. Let’s get started.

Save the model

In machine learning, we save trained models in a file and restore them to compare each with other models. We can also test them using new data. The save process is called Serialization, while restoration is called Deserialization.

We use a helper Python library called Pickle to save the model. The Pickle module implements a fundamental, yet powerful, algorithm for serializing and de-serializing a Python object structure. 

You can also use the following functions: 

pickle.dump serializes an object hierarchy using dump().
pickle.load deserializes a data stream via the load() function.

We’ve chosen Pickle since it supports models created using the Scikit-Learn framework and offers great loading performance. Similar training frameworks like TensorFlow and Keras have their own built-in libraries for saving models, which are designed to perform well with their architectures.
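As a runnable sketch of the save-and-restore round trip, the snippet below uses stand-in objects in place of the real trained model and vectorizer (the values are purely illustrative):

```python
import os
import pickle
import tempfile

# Hypothetical stand-ins for the trained artifacts:
dict_vectorizer = {"feature_names": ["tenure", "contract=month-to-month"]}
model = {"coef": [0.8, -0.4], "intercept": -0.1}

path = os.path.join(tempfile.mkdtemp(), "model_C=1.0.bin")

# Serialization: dump both objects as one tuple into a single binary file.
with open(path, "wb") as f_out:
    pickle.dump((dict_vectorizer, model), f_out)

# Deserialization: restore them together with a single load() call.
with open(path, "rb") as f_in:
    dv_loaded, model_loaded = pickle.load(f_in)
```

Storing the vectorizer and model together guarantees that predictions always use the same encoding the model was trained with.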

Dump the Model and Dictionary Vectorizer

import pickle

with open('model_C=1.0.bin', 'wb') as f_out:
    pickle.dump((dict_vectorizer, model), f_out)
# The with statement closes the file automatically when the block exits

We just saved a binary file named model_C=1.0.bin, which contains both the dict_vectorizer (used for one-hot encoding) and the logistic regression model as a single tuple.

Create a new Python file

Now, we’ll create a stream_app.py script that both defines our app layout and trigger-able backend logic. This logic activates when users interact with different UI components. Crucially, this file is reusable with any model. 

This is just an example. We strongly recommend exploring more components and design options from the Streamlit library. If you’re skilled in HTML and JavaScript, you can create your own Streamlit components that grant you more control over your app’s layout. 

First, import the required libraries:

import pickle
import streamlit as st
import pandas as pd
from PIL import Image

Next, you’ll need to load the same binary file we saved earlier to deserialize the model and dictionary vectorizer.

model_file = 'model_C=1.0.bin'

with open(model_file, 'rb') as f_in:
    dv, model = pickle.load(f_in)

The following code snippet loads the images and displays them on your screen. The st.image portion helps display an image on the frontend:

image = Image.open('images/icone.png')
image2 = Image.open('images/image.png')

st.image(image, use_column_width=False)

To display items in the sidebar, you’ll need the following code snippet:

add_selectbox = st.sidebar.selectbox(
    "How would you like to predict?",
    ("Online", "Batch"))

st.sidebar.info('This app is created to predict Customer Churn')
st.sidebar.image(image2)

Streamlit’s sidebar renders a vertical, collapsible bar where users can select the type of model scoring they want to perform — like batch scoring (predictions for multiple customers) or online scoring (for single customers). We also add text and images to decorate the sidebar. 

The following code helps you display the main title:

st.title("Predicting Customer Churn")

You can display input widgets to collect customer details and generate predictions, when the user selects the ‘Online’ option:

if add_selectbox == 'Online':
    gender = st.selectbox('Gender:', ['male', 'female'])
    seniorcitizen = st.selectbox('Customer is a senior citizen:', [0, 1])
    partner = st.selectbox('Customer has a partner:', ['yes', 'no'])
    dependents = st.selectbox('Customer has dependents:', ['yes', 'no'])
    phoneservice = st.selectbox('Customer has phoneservice:', ['yes', 'no'])
    multiplelines = st.selectbox('Customer has multiplelines:', ['yes', 'no', 'no_phone_service'])
    internetservice = st.selectbox('Customer has internetservice:', ['dsl', 'no', 'fiber_optic'])
    onlinesecurity = st.selectbox('Customer has onlinesecurity:', ['yes', 'no', 'no_internet_service'])
    onlinebackup = st.selectbox('Customer has onlinebackup:', ['yes', 'no', 'no_internet_service'])
    deviceprotection = st.selectbox('Customer has deviceprotection:', ['yes', 'no', 'no_internet_service'])
    techsupport = st.selectbox('Customer has techsupport:', ['yes', 'no', 'no_internet_service'])
    streamingtv = st.selectbox('Customer has streamingtv:', ['yes', 'no', 'no_internet_service'])
    streamingmovies = st.selectbox('Customer has streamingmovies:', ['yes', 'no', 'no_internet_service'])
    contract = st.selectbox('Customer has a contract:', ['month-to-month', 'one_year', 'two_year'])
    paperlessbilling = st.selectbox('Customer has a paperlessbilling:', ['yes', 'no'])
    paymentmethod = st.selectbox('Payment Option:', ['bank_transfer_(automatic)', 'credit_card_(automatic)', 'electronic_check', 'mailed_check'])
    tenure = st.number_input('Number of months the customer has been with the current telco provider:', min_value=0, max_value=240, value=0)
    monthlycharges = st.number_input('Monthly charges:', min_value=0, max_value=240, value=0)
    totalcharges = tenure * monthlycharges
    output = ""
    output_prob = ""
    input_dict = {
        "gender": gender,
        "seniorcitizen": seniorcitizen,
        "partner": partner,
        "dependents": dependents,
        "phoneservice": phoneservice,
        "multiplelines": multiplelines,
        "internetservice": internetservice,
        "onlinesecurity": onlinesecurity,
        "onlinebackup": onlinebackup,
        "deviceprotection": deviceprotection,
        "techsupport": techsupport,
        "streamingtv": streamingtv,
        "streamingmovies": streamingmovies,
        "contract": contract,
        "paperlessbilling": paperlessbilling,
        "paymentmethod": paymentmethod,
        "tenure": tenure,
        "monthlycharges": monthlycharges,
        "totalcharges": totalcharges
    }

    if st.button("Predict"):
        X = dv.transform([input_dict])
        y_pred = model.predict_proba(X)[0, 1]
        churn = y_pred >= 0.5
        output_prob = float(y_pred)
        output = bool(churn)
        st.success('Churn: {0}, Risk Score: {1}'.format(output, output_prob))

Your app’s frontend leverages Streamlit’s input widgets like select box, slider, and number input. Users interact with these widgets by entering values. Input data is then packaged into a Python dictionary. The backend — which handles the prediction score computation logic — is defined inside the st.button layer and awaits the user trigger. When this happens, the dictionary is passed to the dictionary vectorizer which performs encoding for categorical features and makes it consumable for the model. 

Streamlit passes any transformed inputs to the model and calculates the churn prediction score. Using the threshold of 0.5, the churn score is converted into a binary class. The risk score and churn class are returned to the frontend via Streamlit’s success component. This displays a success message. 
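The mechanics above can be sketched end to end in plain Python. Everything in this sketch is illustrative: the vectorize helper mimics how a fitted DictVectorizer lays out one-hot and numeric columns, and the column names, weights, and bias are invented stand-ins for the real dv and model loaded from model_C=1.0.bin.

```python
# Sketch of the online-scoring path: encode the input dict, compute a churn
# probability with a toy logistic model, then apply the 0.5 threshold.
import math

def vectorize(d, columns):
    """Mimic DictVectorizer.transform for a single record."""
    row = []
    for col in columns:
        if "=" in col:                      # one-hot column, e.g. "contract=two_year"
            name, value = col.split("=", 1)
            row.append(1.0 if d.get(name) == value else 0.0)
        else:                               # numeric column passes through
            row.append(float(d.get(col, 0.0)))
    return row

columns = ["contract=month-to-month", "contract=two_year", "tenure", "monthlycharges"]
weights = [1.2, -1.5, -0.05, 0.02]          # invented coefficients
bias = -0.3

input_dict = {"contract": "month-to-month", "tenure": 3, "monthlycharges": 85.0}
x = vectorize(input_dict, columns)
y_pred = 1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(weights, x)) + bias)))
churn = bool(y_pred >= 0.5)                 # same 0.5 threshold as the app
print(f"Churn: {churn}, Risk Score: {y_pred:.3f}")
```

In the real app, st.success then renders that final string in the frontend.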

To display the file upload button when the user selects “Batch” from the sidebar, the following code snippet might be useful:

if add_selectbox == 'Batch':
    file_upload = st.file_uploader("Upload csv file for predictions", type=["csv"])
    if file_upload is not None:
        data = pd.read_csv(file_upload)
        X = dv.transform(data.to_dict(orient='records'))
        y_pred = model.predict_proba(X)[:, 1]
        churn = y_pred >= 0.5
        st.write(churn)

When the user wants to batch score customers, the page layout will dynamically change to match this selection. Streamlit’s file uploader component will display a related widget. This prompts the user to upload a CSV file, which is then read using the pandas library, converted into a list of records, and processed by the dictionary vectorizer and model. Prediction results are displayed on the frontend using st.write.
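The batch path can be illustrated without Streamlit or a trained model. In this sketch, the standard csv module stands in for pandas.read_csv, and the score function is an invented stand-in for the vectorizer-plus-model pipeline; only the row-by-row flow mirrors the app.

```python
# Each CSV row becomes a dict of features, is scored, and is thresholded,
# producing one churn flag per customer.
import csv
import io
import math

csv_file = io.StringIO(
    "contract,tenure,monthlycharges\n"
    "month-to-month,2,95\n"
    "two_year,48,30\n"
)

def score(record):
    """Toy churn probability; stand-in for dv.transform + model.predict_proba."""
    z = 1.5 if record["contract"] == "month-to-month" else -1.5
    z += -0.05 * float(record["tenure"]) + 0.01 * float(record["monthlycharges"])
    return 1.0 / (1.0 + math.exp(-z))

rows = list(csv.DictReader(csv_file))       # like pd.read_csv -> to_dict("records")
churn_flags = [score(r) >= 0.5 for r in rows]
print(churn_flags)  # → [True, False]
```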

The application skeleton above is wrapped within a main function, which runs when the script is executed. Here’s how that final script looks:

import pickle

import pandas as pd
import streamlit as st
from PIL import Image

model_file = 'model_C=1.0.bin'

with open(model_file, 'rb') as f_in:
    dv, model = pickle.load(f_in)

def main():
    image = Image.open('images/icone.png')
    image2 = Image.open('images/image.png')
    st.image(image, use_column_width=False)
    add_selectbox = st.sidebar.selectbox(
        "How would you like to predict?",
        ("Online", "Batch"))
    st.sidebar.info('This app is created to predict Customer Churn')
    st.sidebar.image(image2)
    st.title("Predicting Customer Churn")

    if add_selectbox == 'Online':
        gender = st.selectbox('Gender:', ['male', 'female'])
        seniorcitizen = st.selectbox('Customer is a senior citizen:', [0, 1])
        partner = st.selectbox('Customer has a partner:', ['yes', 'no'])
        dependents = st.selectbox('Customer has dependents:', ['yes', 'no'])
        phoneservice = st.selectbox('Customer has phoneservice:', ['yes', 'no'])
        multiplelines = st.selectbox('Customer has multiplelines:', ['yes', 'no', 'no_phone_service'])
        internetservice = st.selectbox('Customer has internetservice:', ['dsl', 'no', 'fiber_optic'])
        onlinesecurity = st.selectbox('Customer has onlinesecurity:', ['yes', 'no', 'no_internet_service'])
        onlinebackup = st.selectbox('Customer has onlinebackup:', ['yes', 'no', 'no_internet_service'])
        deviceprotection = st.selectbox('Customer has deviceprotection:', ['yes', 'no', 'no_internet_service'])
        techsupport = st.selectbox('Customer has techsupport:', ['yes', 'no', 'no_internet_service'])
        streamingtv = st.selectbox('Customer has streamingtv:', ['yes', 'no', 'no_internet_service'])
        streamingmovies = st.selectbox('Customer has streamingmovies:', ['yes', 'no', 'no_internet_service'])
        contract = st.selectbox('Customer has a contract:', ['month-to-month', 'one_year', 'two_year'])
        paperlessbilling = st.selectbox('Customer has a paperlessbilling:', ['yes', 'no'])
        paymentmethod = st.selectbox('Payment Option:', ['bank_transfer_(automatic)', 'credit_card_(automatic)', 'electronic_check', 'mailed_check'])
        tenure = st.number_input('Number of months the customer has been with the current telco provider:', min_value=0, max_value=240, value=0)
        monthlycharges = st.number_input('Monthly charges:', min_value=0, max_value=240, value=0)
        totalcharges = tenure * monthlycharges
        output = ""
        output_prob = ""
        input_dict = {
            "gender": gender,
            "seniorcitizen": seniorcitizen,
            "partner": partner,
            "dependents": dependents,
            "phoneservice": phoneservice,
            "multiplelines": multiplelines,
            "internetservice": internetservice,
            "onlinesecurity": onlinesecurity,
            "onlinebackup": onlinebackup,
            "deviceprotection": deviceprotection,
            "techsupport": techsupport,
            "streamingtv": streamingtv,
            "streamingmovies": streamingmovies,
            "contract": contract,
            "paperlessbilling": paperlessbilling,
            "paymentmethod": paymentmethod,
            "tenure": tenure,
            "monthlycharges": monthlycharges,
            "totalcharges": totalcharges
        }

        if st.button("Predict"):
            X = dv.transform([input_dict])
            y_pred = model.predict_proba(X)[0, 1]
            churn = y_pred >= 0.5
            output_prob = float(y_pred)
            output = bool(churn)
            st.success('Churn: {0}, Risk Score: {1}'.format(output, output_prob))

    if add_selectbox == 'Batch':
        file_upload = st.file_uploader("Upload csv file for predictions", type=["csv"])
        if file_upload is not None:
            data = pd.read_csv(file_upload)
            X = dv.transform(data.to_dict(orient='records'))
            y_pred = model.predict_proba(X)[:, 1]
            churn = y_pred >= 0.5
            st.write(churn)

if __name__ == '__main__':
    main()

You can download the complete script from our Dockersamples GitHub page.

Execute the script

streamlit run stream_app.py

View your Streamlit app

You can now view your Streamlit app in your browser. Navigate to the following:

Local URL: http://localhost:8501
Network URL: http://192.168.1.23:8501

Containerizing the Streamlit app with Docker

Let’s explore how to easily run this app within a Docker container, using a Docker Official image. First, you’ll need to download Docker Desktop. Docker Desktop accelerates the image-building process while making useful images more discoverable. Complete this installation process once your download is finished.

Docker uses a Dockerfile to specify each image’s “layers.” Each layer stores important changes stemming from the base image’s standard configuration. Create an empty Dockerfile in your Streamlit project:

touch Dockerfile

Next, use your favorite text editor to open this Dockerfile. We’re going to build out this new file piece by piece. To start, let’s define a base image:

FROM python:3.8.12-slim

It’s now time to ensure that the latest version of pip is installed:

RUN /usr/local/bin/python -m pip install --upgrade pip

Next, let’s quickly create a directory to house our image’s application code. This is the working directory for your application:

WORKDIR /app

The following COPY instruction copies the requirements file from the host machine to the container image, and the RUN instruction installs the listed dependencies:

COPY requirements.txt ./requirements.txt
RUN pip install -r requirements.txt

The EXPOSE instruction tells Docker that your container is listening on the specified network ports at runtime:

EXPOSE 8501

Next, copy your application code into the image, then create an ENTRYPOINT to make your image executable:

COPY . .
ENTRYPOINT ["streamlit", "run"]
CMD ["stream_app.py"]

After assembling each piece, here’s your complete Dockerfile:

FROM python:3.8.12-slim
RUN /usr/local/bin/python -m pip install --upgrade pip
WORKDIR /app
COPY requirements.txt ./requirements.txt
RUN pip install -r requirements.txt
EXPOSE 8501
COPY . .
ENTRYPOINT ["streamlit", "run"]
CMD ["stream_app.py"]
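It’s worth noting how the last two instructions interact. In exec form, Docker appends CMD to ENTRYPOINT to form the container’s command, and any arguments given after the image name in docker run replace CMD only, leaving ENTRYPOINT intact. This small Python sketch models that composition rule (the helper is illustrative, not a Docker API):

```python
# Model of how exec-form ENTRYPOINT and CMD combine into the command a
# container actually runs.
entrypoint = ["streamlit", "run"]
cmd = ["stream_app.py"]

def container_command(runtime_args=None):
    """Return entrypoint + (runtime args if provided, else the default CMD)."""
    return entrypoint + (runtime_args if runtime_args else cmd)

print(container_command())                  # default: streamlit run stream_app.py
print(container_command(["other_app.py"]))  # docker run customer_churn other_app.py
```

This is why the same image can launch a different Streamlit script without rebuilding.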

Build your image

docker build -t customer_churn .

Run the app

docker run -d -p 8501:8501 customer_churn

View the app within Docker Desktop

You can do this by navigating to the Containers interface, which lists your running application as a named container:

Access the app

First, select your app container in the list. This opens the Logs view. Click the button with a square icon (with a slanted arrow) located next to the Stats pane. This opens your app in your browser:

Alternatively, you can hover over your container in the list and click that icon once the righthand toolbar appears.

Develop and deploy your next machine learning model, today

Congratulations! You’ve successfully explored how to build and deploy customer churn prediction models using Streamlit and Docker. With a single Dockerfile, we’ve demonstrated how easily you can build an interactive frontend and deploy this application in seconds. 

With just a few extra steps, you can use this tutorial to build applications with much greater complexity. You can make your app more useful by implementing push-notification logic in the app — letting the marketing team send promotional emails to high-churn customers on the fly. Happy coding.
Source: https://blog.docker.com/feed/

KubeVirt on Killercoda on KubeVirt

itnext.io – This article provides a high-level overview of how Killercoda uses KubeVirt to schedule disposable learning environments. We also talk about how KubeVirt can run on Killercoda in some kind of…
Source: news.kubernauts.io

How to Use the Redis Docker Official Image

Maintained in partnership with Redis, the Redis Docker Official Image (DOI) lets developers quickly and easily containerize a Redis instance. It streamlines the cross-platform deployment process — even letting you use Redis with edge devices if they support your workflows. 

Developers have pulled the Redis DOI over one billion times from Docker Hub. As the world’s most popular key-value store, Redis helps apps concurrently access critical bits of data while remaining resource friendly. It’s highly performant, in-memory, networked, and durable. It also stands apart from relational databases like MySQL and PostgreSQL that use tabular data structures. From day one, Redis has also been open source. 

Finally, Redis cluster nodes are horizontally scalable — making it a natural fit for containerization and multi-container operation. Read on as we explore how to use the Redis Docker Official Image to containerize and accelerate your Redis database deployment.

In this tutorial:

What is the Redis Docker Official Image?
How to run Redis in Docker
Use a quick pull command
Start your Redis instance
Set up Redis persistent storage
Connect with the Redis CLI
Configurations and modules
Notes on using Redis modules
Get up and running with Redis today

What is the Redis Docker Official Image?

The Redis DOI is a building block for Redis Docker containers. It’s an executable software package that tells Docker and your application how to behave. It bundles together source code, dependencies, libraries, tools, and other core components that support your application. In this case, these components determine how your app and Redis database interact.

Our Redis Docker Official Image supports multiple CPU architectures. An assortment of over 50 supported tags lets you choose the best Redis image for your project. They’re also multi-layered and run using a default configuration (if you’re simply using docker pull). Complexity and base images also vary between tags. 

That said, you can also configure your Redis Official Image’s Dockerfile as needed. We’ll touch on this while outlining how to use the Redis DOI. Let’s get started.

How to run Redis in Docker

Before proceeding, we recommend installing Docker Desktop. Desktop is built upon Docker Engine and packages together the Docker CLI, Docker Compose, and more. Running Docker Desktop lets you use Docker commands. It also helps you manage images and containers using the Docker Dashboard UI. 

Use a quick pull command

Next, you’ll need to pull the Redis DOI to use it with your project. The quickest method involves visiting the image page on Docker Hub, copying the docker pull command, and running it in your terminal:

docker pull redis

Your output confirms that Docker has successfully pulled the :latest Redis image. You can also verify this by hopping into Docker Desktop and opening the Images interface from the left sidebar. Your redis image automatically appears in the list:

We can also see that our new Redis image is 111.14 MB in size. This is pretty lightweight compared to many images. However, using an alpine variant like redis:alpine3.16 further slims your image.

Now that you’re acquainted with Docker Desktop, let’s jump into our CLI workflow to get Redis up and running. 

Start your Redis instance

Redis acts as a server, and related server processes power its functionality. We need to start a Redis instance, or software server process, before linking it with our application. Luckily, you can create a running instance with just one command: 

docker run --name some-redis -d redis

We recommend naming your container. This helps you reference it later on. It also makes it easier to run additional commands that involve it. Your container will run until you stop it.

By adding -d redis in this command, Docker will run your Redis service in “detached” mode. Redis, therefore, runs in the background. Your container will also automatically exit when its root process exits. You’ll see that we’re not explicitly telling the service to “start” within this command. By leaving this verbiage out, our Redis service will start and continue running — remaining usable to our application.

Set up Redis persistent storage

Persistent storage is crucial when you want your application to save data between runs. You can have Redis write its data to a destination like an SSD. Persistence is also useful for keeping log files across restarts. 

You can capture every Redis operation using the Redis Database (RDB) method. This lets you designate snapshot intervals and record data at certain points in time. However, the running container from our initial docker run command is already using the some-redis name. You should remove (or stop) that container before moving on, since the next command reuses the name and it’s not critical for this example.

Once that’s done, this command triggers persistent storage snapshots every 60 seconds: 

docker run --name some-redis -d redis redis-server --save 60 1 --loglevel warning

The RDB approach is valuable as it enables “set-and-forget” persistence. It also generates more logs. Logging can be useful for troubleshooting, yet it also requires you to monitor accumulation over time. 
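To make the save 60 1 rule concrete: Redis takes an RDB snapshot once at least the configured number of writes has occurred and the configured interval has elapsed since the last snapshot. The following is a simplified, illustrative model of that trigger condition, not Redis internals:

```python
# Toy model of Redis's RDB save rule "save <interval> <min_changes>":
# snapshot only when both the time window and the write count are satisfied.
def should_snapshot(seconds_since_last, changes, interval=60, min_changes=1):
    return seconds_since_last >= interval and changes >= min_changes

print(should_snapshot(61, 5))    # enough time elapsed, writes happened
print(should_snapshot(30, 5))    # too soon since the last snapshot
print(should_snapshot(120, 0))   # nothing new to persist
```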

However, you can also forego persistence entirely or choose another option. To learn more, check out Redis’ documentation. 

Redis stores your persisted data in the VOLUME /data location. These connected volumes are shareable between containers. This shareability becomes useful when Redis lives within one container and your application occupies another. 

Connect with the Redis CLI

The Redis CLI lets you run commands directly within your running Redis container. However, this isn’t automatically possible via Docker. Enter the following commands to enable this functionality: 

docker network create some-network

docker run -it --network some-network --rm redis redis-cli -h some-redis

Your Redis service understands Redis CLI commands. Numerous commands are supported, as are different CLI modes. Read through the Redis CLI documentation to learn more. 
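For a peek at what those commands look like on the wire: redis-cli speaks the RESP protocol, sending each command as an array of bulk strings. This short sketch encodes a command the same way. It’s for illustration only, since real clients handle this framing for you:

```python
# Encode a Redis command as RESP: "*<count>" for the array, then
# "$<len>" plus the payload for each bulk string, all CRLF-terminated.
def resp_encode(*parts):
    out = f"*{len(parts)}\r\n"
    for p in parts:
        out += f"${len(p)}\r\n{p}\r\n"
    return out.encode()

print(resp_encode("SET", "greeting", "hello"))
# → b'*3\r\n$3\r\nSET\r\n$8\r\ngreeting\r\n$5\r\nhello\r\n'
```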

Once you have CLI functionality up and running, you’re free to leverage Redis more directly!

Configurations and modules

Finally, we’ve arrived at customization. While you can run a Redis-powered app using defaults, you can tweak your Dockerfile to grab your pre-existing redis.conf file. This better supports production applications. While Redis can successfully start without these files, they’re central to configuring your services. 

You can see what a redis.conf file looks like on GitHub. Otherwise, here’s a sample Dockerfile: 

FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]

You can also use docker run to achieve this. However, you should first do two things for this method to work correctly. First, create the /myredis/conf directory on your host machine. This is where your configuration files will live.

Second, open Docker Desktop and click the Settings gear in the upper right. Choose Resources > File Sharing to view your list of directories. You’ll see a grayed-out directory entry at the bottom, which is an input field for a named directory. Type in /myredis/conf there and hit the “+” button to locally verify your file path:

You’re now ready to run your command! 

docker run -v /myredis/conf:/usr/local/etc/redis --name myredis redis redis-server /usr/local/etc/redis/redis.conf

The Dockerfile gives you more granular control over your image’s construction. Alternatively, the CLI option lets you run your Redis container without a Dockerfile. This may be more approachable if your needs are more basic. Just ensure that your mapped directory is writable and exists locally. 

Also, consider the following: 

If you edit your Redis configurations on the fly, you’ll have to use CONFIG REWRITE to automatically identify and apply any field changes on the next run.
You can also apply configuration changes manually.

Remember how we connected the Redis CLI earlier? You can now pass arguments directly through the Redis CLI (ideal for testing) and edit configs while your database server is running. 

Notes on using Redis modules

Redis modules let you extend your Redis service, build new services, and adapt your database without taking a performance hit. Redis also processes them in memory. These standard modules support querying, search, JSON processing, filtering, and more. Docker Hub’s redislabs/redismod image bundles seven of these official modules together: 

RedisBloom
RedisTimeSeries
RedisJSON
RedisAI
RedisGraph
RedisGears
RediSearch

If you’d like to spin up this container and experiment, simply enter docker run -d -p 6379:6379 redislabs/redismod in your terminal. You can open Docker Desktop to view this container like we did earlier on. 

You can view Redis’ curated modules or visit the Redis Modules Hub to explore further.

Get up and running with Redis today

We’ve explored how to successfully Dockerize Redis. Going further, it’s easy to grab external configurations and change how Redis operates on the fly. This makes it much easier to control how Redis interacts with your application. Head on over to Docker Hub and pull your first Redis Docker Official Image to start experimenting. 

The Redis Stack also helps extend Redis within Docker. It adds modern, developer-friendly data models and processing engines. The Stack also grants easy access to full-text search, document store, graphs, time series, and probabilistic data structures. Redis has published related container images through the Docker Verified Publisher (DVP) program. Check them out!
Source: https://blog.docker.com/feed/