learncloudnative.com – A collection of more than 20 Kubernetes CLI tips from the community.
Source: news.kubernauts.io
iximiuz.com – How to use Ephemeral Containers to debug Kubernetes workloads with and without the kubectl debug command.
Source: news.kubernauts.io
Docker Captains are select members of the community that are both experts in their field and are passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing James Spurin who recently joined the Captain’s Program. He is a DevOps Consultant and Course/Content Creator at DiveInto and is based in Hertfordshire, United Kingdom. Check out James’ socials on LinkedIn and Twitter!
How/when did you first discover Docker?
I’m part of the earlier ISP generation, so my early career involved working at Demon Internet, one of the first internet providers in the UK back in 1998-2000.
Back then, it was cool to host and serve your personal ISP services on your own managed system (generally hidden in a cupboard at home and served via a cable modem to the world) for the likes of Web/DNS/Email and other services.
Whilst times have changed, and I’ve moved to more appropriate cloud-based solutions for essential services and hosting, I’ve always been passionate about cosplaying with systems administration. A friend with the same passion recommended linuxserver.io to me. It’s a great resource that manages and maintains a fleet of common Docker images.
I transitioned many of the services I was manually running to Docker, either using their images or their Dockerfiles as a reference for learning how to create my own Docker images.
If you’re looking for a great way of starting with Docker, I highly recommend looking at the resources available on linuxserver.io.
The advice we would share with new starters back in my early ISP career days was to create and self-host an ISP in a box.
In essence, we’d combine a Web Server (using Apache at the time), Email (using Exim), and a DNS server (using Bind), alongside a custom domain name, to make it available on the internet. It provided a great learning opportunity for understanding how these protocols work.
Today my advice would be to try this out, but also with Docker in the mix!
What is your favorite Docker command?
My favorite Docker command would be docker buildx. With the growth of the Arm architecture, docker buildx is an excellent resource that I rely on tremendously. Being a content creator, I leverage Docker extensively for creating lab environments that anyone can utilize with their own resources. See my “Dive Into Ansible” repository for an example that utilizes docker-compose and has had over 250k pulls.
Just a few years ago, building images for arm alongside AMD64 could have been considered a niche in my area. Only a tiny percentage of my students were using a Raspberry Pi for personal computing.
These days, however, especially with the growth of Apple Silicon, cross-built images are much more of a necessity when providing community container images. As a result, Buildx is one of my favorite CLI Plugins and is a step I consider essential as a milestone in a successful Docker project.
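For readers who haven't tried it, a minimal multi-architecture build along the lines James describes might look like this (the image name and tag are placeholders):

# Create and select a builder instance that supports multi-platform builds
docker buildx create --name multiarch --use

# Build for amd64 and arm64 in one pass and push the result to a registry
docker buildx build --platform linux/amd64,linux/arm64 \
  -t yourname/yourimage:latest --push .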
What is your top tip for working with Docker that others may not know?
Consider Dockerfiles (or automated image builds) and guided instructions as a standard part of your projects from Day 1. Your users will thank you, and your likelihood of attracting open source contributors will grow.
Take, for example, the Python programming language. When browsing GitHub/Gitlab for Python projects, it’s common to see a requirements.txt file with dependencies related to the project.
The expectation is then for the consumer to install dependencies via pip. An experienced developer may utilize virtual environments, whereas a less experienced developer may install this straight into their running system (thus, potential cross-contamination).
Whilst Python 3+ is the standard for most common Python projects, there may be nuances between a version of Python locally installed and that used within a codebase. We should also consider that some dependencies require compilation, which presents another obstacle for general usage, especially if developer compilation tools aren't available.
By providing a Dockerfile that utilizes a trusted Python image and offering automated prebuilt images using the likes of DockerHub in conjunction with GitHub/Gitlab (to trigger automated builds), individuals can get involved and run projects as a single command in a matter of minutes. Such efforts also provide great reuse opportunities with Kubernetes, CI/CD pipelines, and automated testing.
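As a rough sketch of this idea (the project layout, entrypoint, and image name here are hypothetical), a minimal Dockerfile for such a Python project could be written and built like so:

# Write a minimal Dockerfile based on a trusted Python image
cat > Dockerfile <<'EOF'
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# main.py is a placeholder for the project's real entrypoint
CMD ["python", "main.py"]
EOF

# Anyone can now get involved with a single build and a single run
docker build -t my-python-project .
docker run --rm my-python-project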
What’s the coolest Docker demo you have done/seen?
The Flappy Moby efforts that took place at KubeCon Valencia. I liked this so much that I captured this at the time and created a video!
The project was novel; after all, who doesn’t love these types of games? It was a fantastic showpiece at the event. As a content creator and someone who has worked on creating games to demonstrate and teach technical concepts, I was also very appreciative of the efforts involved around the graphical elements to bring this to life.
Seeing Docker Desktop extensions in action inspired my own Docker Desktop extension journey and follow-ups. When I returned from KubeCon, I created a Docker Desktop extension that instantly provides an Ansible-based lab with six nodes and web terminals. Check out the related video of how this extension was made!
What have you worked on in the past six months that you’re particularly proud of?
I created a free Kubernetes Introduction course available on YouTube and Udemy which is receiving an incredible amount of views and positive feedback. This was a very personal project for me that focused on community giveback.
When I first started learning Kubernetes there were areas that I found frustrating. Learning resources in this space often show theoretical overviews of core Kubernetes architecture but lack hands-on demonstrations. I made this course to ensure that anyone could get a good understanding of Kubernetes alongside hands-on use of the essential components in just one hour.
The course also provided me with a unique opportunity to share perspectives on overlooked areas relating to Docker Inc. For example, I cover the positive contributions Docker has made to Cloud Native by donating containerd and runC to the Cloud Native Computing Foundation and the Open Container Initiative, respectively.
It was a pleasure to work on a project that covered many of my favorite passions in one go, including Kubernetes, Docker, Cloud Native, content, and community.
What do you anticipate will be Docker’s biggest announcement this year?
I’ve already mentioned this above, but it’s Docker Desktop extensions for me. When considered alongside Docker Desktop (now native for Windows, Mac, and Linux), we have a consistent Docker Desktop environment and Extension platform that can provide a consistent development resource on all major OS platforms.
What are some personal goals for the next year with respect to the Docker community?
My aims are focused on community, and I'm already working on content that will heavily emphasize Docker in conjunction with Kubernetes (there's so much opportunity to do more with the Docker Desktop Kubernetes installation). As the tagline in the Docker Slack announcement channel says… Docker, Docker, Docker!!!
What was your favorite thing about DockerCon 2022?
Community. While watching the various talks and discussions, I was active in the chat rooms.
The participants were highly engaged, and I made some great connections with individuals who were in the chat at the same time.
There were also some very unexpected moments. For example, Justin Cormack and Ajeet Singh Raina were using some interesting vintage microphones that kicked off some good chat room and post-event discussions.
Looking to the distant future, what is the technology that you’re most excited about and that you think holds a lot of promise?
A technology that has blown my mind is Dall-E 2, an AI solution that can automatically create images based on textual information. If you haven’t heard of this, you must check this video out.
It’s possible at the moment to try out Dall-E Mini. Whilst this isn’t as powerful as Dall-E 2, it can be fun to use.
For example, this is a unique image created by AI using the input of “Docker”. Considering that this technology is not re-using existing images and has learnt the concept of “Docker” to make this, it is truly remarkable.
Rapid fire questions…
What new skill have you mastered during the pandemic?
Coffee is a personal passion and a fuel that I both depend upon and enjoy! The Aeropress is a cheap, simple, and effective device with many opportunities. I’ve explored how to make a fantastic Aeropress coffee, and I think I’ve nailed it! For those interested, check out some feeds from the Aeropress Barista Championships.
Cats or Dogs?
Cats. I have two, one named Whisper Benedict and the other named Florence Rhosyn. Whisper is a British Blue, and Flo is a British Blue and White. At the time, we only intended to get one cat, but the lady at the cattery offered us Flo at a discount, and we couldn’t resist.
The lady at the cattery was a breeder of British Blues and British Whites, and the Dad from the Blues had snuck in with the Mum of the Whites; alas, you can guess what happened. This gives Flo her very unique mottled colors.
The two of them are extraordinary characters. Although Whisper is the brawn of the two and would be assumed to be the Alpha cat, he’s an absolute softie and doesn’t mind anybody picking him up.
On the other hand, what Flo lacks in physique, she makes up with brains and agility.
Both my children Lily (11) and Anwen (4) can hold Flo, and nothing will happen. They’ve all grown up together, and it’s as if she knows that they are children. However, should you try to pick her up as an adult, you’re not getting away unscathed. Flo also seems to have this uncanny ability to know when we’re intending on taking her to the vets, even without a carry basket in sight!
Despite their characteristics, we wouldn’t have our furry family members any other way.
Salty, sour, or sweet?
Sweet!
Beach or mountains?
Beaches (with some favouritism towards Skiathos) please!
Your most often used emoji?
🚀
Source: https://blog.docker.com/feed/
Low-code and no-code platforms have risen sharply in popularity over the past few years. These platforms let users with little or no coding knowledge build apps up to 20x faster. They've even evolved to a point where they've become indispensable tools for expert developers. Such platforms are highly visual and follow a user-friendly, modular approach: you drag and drop software components into place — all of which are visually represented — to create an app.
Node-RED is a low-code programming tool for event-driven applications, built for wiring together hardware devices, APIs, and online services in new and interesting ways. It provides a browser-based flow editor that makes it easy to wire together flows using the wide range of nodes within the palette, and you can deploy a flow to the runtime with a single click. You can also create JavaScript functions within the editor using the rich-text editor. Finally, Node-RED ships with a built-in library that lets you save useful and reusable functions, templates, or flows.
Node-RED's lightweight runtime is built upon Node.js, taking full advantage of Node's event-driven, non-blocking model. This helps it run at the edge of the network on low-cost hardware like the Raspberry Pi, as well as in the cloud. With over 225,000 modules in Node's package repository, it's easy to extend the range of palette nodes and add new capabilities. The flows created in Node-RED are stored using JSON, which is easily importable and exportable for sharing purposes. An online flow library lets you publish your best flows publicly.
Users have downloaded our Node-RED Docker Official Image over 100 million times from Docker Hub. What’s driving this significant download rate? There’s an ever-increasing demand for Docker containers to streamline development workflows, while giving Node-RED developers the freedom to innovate with their choice of project-tailored tools, application stacks, and deployment environments. Our Node-RED Official Image also supports multiple architectures like amd64, arm32v6, arm32v7, arm64v8, and s390x.
Why is containerizing Node-RED important?
The Node-RED Project has a huge community of third-party nodes available for installation. Also, note that the community doesn't generally recommend using an odd-numbered Node version. This advice is tricky for new users, since they might otherwise run into Node compatibility issues.
Running your Node-RED app in a Docker container lets users get started quickly with sensible defaults and customization via environmental variables. Users no longer need to worry about compatibility issues. Next, Docker enables users to build, share, and run containerized Node-RED applications — made accessible for developers of all skill levels.
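For instance, a single command pulls a pinned Node-RED release with a known-good Node version baked in, and behavior can be customized through environment variables (the TZ value here is illustrative; the full tutorial command appears later in this post):

# Run a pinned Node-RED release; customize behavior via environment variables
docker run -d -p 1880:1880 -e TZ=Europe/London --name nodered-demo nodered/node-red:3.0.1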
Building your application
In this tutorial, you’ll learn how to build a retail store items detection system using Node-RED. First, you’ll set up Node-RED manually on an IoT Edge device without Docker. Second, you’ll learn how to run it within a Docker container via a one-line command. Finally, you’ll see how Docker containers help you build and deploy this detection system using Node-RED. Let’s jump in.
Hardware components
Seeed Studio reComputer J1010 with Jetson Nano
USB/IP camera module
Ethernet cable/USB WiFi adapter
Keyboard and mouse
Software Components
NVIDIA JetPack v4.6.1 with SDK components
Node 16.x
NPM
Docker
Docker Compose
Preparing your Seeed Studio reComputer and development environment
For this demonstration, we’re using a Seeed Studio reComputer. The Seeed Studio reComputer J1010 is powered by the Jetson Nano development kit. It’s a small, powerful, palm-sized computer that makes modern AI accessible to embedded developers. It’s built around the NVIDIA Jetson Nano system-on-module (SoM) and designed for edge AI applications.
Wire it up
Plug your WiFi adapter/Ethernet cable, Keyboard/Mouse, and USB camera into the reComputer system and turn it on using the power cable. Follow the steps to perform initial system startup.
Before starting, make sure you have Node installed in your system. Then, follow these steps to set up Node-RED on your Edge device.
Installing Node.js
Ensure that you have the latest stable version of Node.js installed in your system.
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt-get install -y nodejs
Verify Node.js and npm versions
The above installer will install both Node.js and npm. Let’s verify that they’re installed properly:
# check Node.js version
nodejs -v
v16.16.0
# check npm version
npm -v
8.11.0
Installing Node-RED
To install Node-RED, you can use the npm command that comes with Node.js:
sudo npm install -g --unsafe-perm node-red
changed 294 packages, and audited 295 packages in 17s
38 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
Running Node-RED
Use the node-red command to start Node-RED in your terminal:
node-red
27 Jul 15:08:36 - [info]
Welcome to Node-RED
===================
27 Jul 15:08:36 - [info] Node-RED version: v3.0.1
27 Jul 15:08:36 - [info] Node.js version: v16.16.0
27 Jul 15:08:36 - [info] Linux 4.9.253-tegra arm64 LE
27 Jul 15:08:37 - [info] Loading palette nodes
27 Jul 15:08:38 - [info] Settings file : /home/ajetraina/.node-red/settings.js
27 Jul 15:08:38 - [info] Context store : 'default' [module=memory]
27 Jul 15:08:38 - [info] User directory : /home/ajetraina/.node-red
27 Jul 15:08:38 - [warn] Projects disabled : editorTheme.projects.enabled=false
27 Jul 15:08:38 - [info] Flows file : /home/ajetraina/.node-red/flows.json
27 Jul 15:08:38 - [info] Creating new flow file
27 Jul 15:08:38 - [warn]
---------------------------------------------------------------------
Your flow credentials file is encrypted using a system-generated key.
If the system-generated key is lost for any reason, your credentials
file will not be recoverable, you will have to delete it and re-enter
your credentials.
You should set your own key using the 'credentialSecret' option in
your settings file. Node-RED will then re-encrypt your credentials
file using your chosen key the next time you deploy a change.
---------------------------------------------------------------------
27 Jul 15:08:38 - [info] Server now running at http://127.0.0.1:1880/
27 Jul 15:08:38 - [warn] Encrypted credentials not found
27 Jul 15:08:38 - [info] Starting flows
27 Jul 15:08:38 - [info] Started flows
You can then access the Node-RED editor by navigating to http://localhost:1880 in your browser.
The log output shares some important pieces of information:
Installed versions of Node-RED and Node.js
Any errors encountered while trying to load the palette nodes
The location of your Settings file and User Directory
The name of the flow file currently being used
Node-RED consists of a Node.js-based runtime that provides a web address to access the flow editor. You create your application in the browser by dragging nodes from your palette into a workspace. From there, you start wiring them together. With one click, Node-RED deploys your application back to the runtime where it's run.
Running Node-RED in a Docker container
The Node-RED Official Image is based on our Node.js Alpine Linux images, in order to keep them as slim as possible. Run the following command to create and mount a named volume called node_red_data to the container’s /data directory. This will allow us to persist any flow changes.
docker run -it -p 1880:1880 -v node_red_data:/data --name mynodered nodered/node-red
You can now access the Node-RED editor via http://localhost:1880 or http://<ip_address_Jetson>:1880.
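If the editor doesn't come up, a quick way to check on the server is to tail the container logs (using the container name from the command above):

# Follow the Node-RED container logs until the "Server now running" line appears
docker logs -f mynodered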
Building and running your retail store items detection system
To build a fully functional retail store items detection system, follow these next steps.
Write the configuration files
We must define a couple of files that add Node-RED configurations — such as custom themes and custom npm packages.
First, create an empty folder called “node-red-config”:
mkdir node-red-config
Change your directory to node-red-config and run the following command to set up a new npm package:
npm init
This utility will walk you through the package.json file creation process. It only covers the most common items, and tries to guess sensible defaults.
{
  "name": "node-red-project",
  "description": "A Node-RED Project",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "@node-red-contrib-themes/theme-collection": "^2.2.3",
    "node-red-seeed-recomputer": "git+https://github.com/Seeed-Studio/node-red-seeed-recomputer.git"
  }
}
Create a file called settings.js inside the node-red-config folder and enter the following content. This file defines Node-RED server, runtime, and editor settings. We’ll mainly change the editor settings. For more information about individual settings, refer to the documentation.
module.exports = {
    flowFile: 'flows.json',
    flowFilePretty: true,
    uiPort: process.env.PORT || 1880,
    logging: {
        console: {
            level: "info",
            metrics: false,
            audit: false
        }
    },
    exportGlobalContextKeys: false,
    externalModules: {
    },
    editorTheme: {
        theme: "midnight-red",
        page: {
            title: "reComputer Flow Editor"
        },
        header: {
            title: " Flow Editor<br/>",
            image: "/data/seeed.webp", // or null to remove image
        },
        palette: {
        },
        projects: {
            enabled: false,
            workflow: {
                mode: "manual"
            }
        },
        codeEditor: {
            lib: "ace",
            options: {
                theme: "vs",
            }
        }
    },
    functionExternalModules: true,
    functionGlobalContext: {
    },
    debugMaxLength: 1000,
    mqttReconnectTime: 15000,
    serialReconnectTime: 15000,
}
You can download this image and put it under the node-red-config folder. This image file’s location is defined inside the settings.js file we just created.
Write the script
Create an empty file by running the following command:
touch docker-ubuntu.sh
In order to print colored output, let's first define a few colors in the shell script. These will color the script's output when you execute it later:
IBlack='\033[0;90m' # Black
IRed='\033[0;91m' # Red
IGreen='\033[0;92m' # Green
IYellow='\033[0;93m' # Yellow
IBlue='\033[0;94m' # Blue
IPurple='\033[0;95m' # Purple
ICyan='\033[0;96m' # Cyan
IWhite='\033[0;97m' # White
The sudo command allows a normal user to run a command with elevated privileges so that they can perform certain administrative tasks. As this script involves running multiple tasks that involve administrative privileges, it’s always recommended to check if you’re running the script as a “sudo” user.
if ! [ $(id -u) = 0 ] ; then
echo "$0 must be run as sudo user or root"
exit 1
fi
The reComputer for Jetson is sold with 16 GB of eMMC. This ready-to-use hardware has Ubuntu 18.04 LTS and NVIDIA JetPack 4.6 installed, so the remaining user space available is about 2 GB. This could be a significant obstacle to using the reComputer for training and deployment in some projects. Hence, it’s sometimes important to remove unnecessary packages and libraries. This code snippet confirms that you have enough storage to install all included packages and Docker images.
If you have the required storage space, it’ll continue to the next section. Otherwise, the installer will ask if you want to free up some device space. Typing “y” for “yes” will delete unnecessary files and packages to clear some space.
storage=$(df | awk '{ print $4 }' | awk 'NR==2{print}')
#if storage > 3.8G
if [ $storage -gt 3800000 ] ; then
echo -e "${IGreen}Your storage space left is $(($storage /1000000))GB, you can install this application."
else
echo -e "${IRed}Sorry, you don’t have enough storage space to install this application. You need about 3.8GB of storage space."
echo -e "${IYellow}However, you can regain about 3.8GB of storage space by performing the following:"
echo -e "${IYellow}-Remove unnecessary packages (~100MB)"
echo -e "${IYellow}-Clean up apt cache (~1.6GB)"
echo -e "${IYellow}-Remove thunderbird, libreoffice and related packages (~400MB)"
echo -e "${IYellow}-Remove cuda, cudnn, tensorrt, visionworks and deepstream samples (~800MB)"
echo -e "${IYellow}-Remove local repos for cuda, visionworks, linux-headers (~100MB)"
echo -e "${IYellow}-Remove GUI (~400MB)"
echo -e "${IYellow}-Remove Static libraries (~400MB)"
echo -e "${IRed}So, please agree to uninstall the above. Press [y/n]"
read yn
if [ $yn = "y" ] ; then
echo "${IGreen}starting to remove the above-mentioned"
# Remove unnecessary packages, clean apt cache and remove thunderbird, libreoffice
apt update
apt autoremove -y
apt clean
apt remove thunderbird libreoffice-* -y
# Remove samples
rm -rf /usr/local/cuda/samples \
    /usr/src/cudnn_samples_* \
    /usr/src/tensorrt/data \
    /usr/src/tensorrt/samples \
    /usr/share/visionworks* ~/VisionWorks-SFM*Samples \
    /opt/nvidia/deepstream/deepstream*/samples
# Remove local repos
apt purge cuda-repo-l4t-*local* libvisionworks-*repo -y
rm /etc/apt/sources.list.d/cuda*local* /etc/apt/sources.list.d/visionworks*repo*
rm -rf /usr/src/linux-headers-*
# Remove GUI
apt-get purge gnome-shell ubuntu-wallpapers-bionic light-themes chromium-browser* libvisionworks libvisionworks-sfm-dev -y
apt-get autoremove -y
apt clean -y
# Remove Static libraries
rm -rf /usr/local/cuda/targets/aarch64-linux/lib/*.a \
    /usr/lib/aarch64-linux-gnu/libcudnn*.a \
    /usr/lib/aarch64-linux-gnu/libnvcaffe_parser*.a \
    /usr/lib/aarch64-linux-gnu/libnvinfer*.a \
    /usr/lib/aarch64-linux-gnu/libnvonnxparser*.a \
    /usr/lib/aarch64-linux-gnu/libnvparsers*.a
# Remove additional 100MB
apt autoremove -y
apt clean
else
exit 1
fi
fi
This code snippet checks if the required software (curl, docker, nvidia-docker2, and Docker Compose) is installed:
apt update
if ! [ -x "$(command -v curl)" ]; then
apt install curl
fi
if ! [ -x "$(command -v docker)" ]; then
apt install docker
fi
if ! [ -x "$(command -v nvidia-docker)" ]; then
apt install nvidia-docker2
fi
if ! [ -x "$(command -v docker-compose)" ]; then
curl -SL https://files.seeedstudio.com/wiki/reComputer/compose.tar.bz2 -o /tmp/compose.tar.bz2
tar xvf /tmp/compose.tar.bz2 -C /usr/local/bin
chmod +x /usr/local/bin/docker-compose
fi
Next, you need to create a node-red directory under $HOME and then copy all your Node-RED configuration files to your device’s home directory as shown in the snippet below:
mkdir -p $HOME/node-red
cp node-red-config/* $HOME/node-red
The below snippet allows the script to bring up container services using Docker Compose CLI:
docker compose --file docker-compose.yaml up -d
Note: You’ll see how to create a Docker Compose file in the next section.
Within the script, let’s specify the command to install a custom Node-RED theme package with three Node-RED blocks corresponding to video input, detection, and video view. We’ll circle back to these nodes later.
docker exec node-red-contrib-ml-node-red-1 bash -c "cd /data && npm install"
Finally, the below command embedded in the script allows you to restart the node-red-contrib-ml-node-red-1 container to implement your theme changes:
docker restart node-red-contrib-ml-node-red-1
Lastly, save the script as docker-ubuntu.sh.
Define your services within a Compose file
Create an empty file by running the following command inside the same directory as docker-ubuntu.sh:
touch docker-compose.yaml
Add the following lines to your docker-compose.yaml file. These specify which services Docker should initiate concurrently at application launch:
services:
  node-red:
    image: nodered/node-red:3.0.1
    restart: always
    network_mode: "host"
    volumes:
      - "$HOME/node-red:/data"
    user: "0"
  dataloader:
    image: baozhu/node-red-dataloader:v1.2
    restart: always
    runtime: nvidia
    network_mode: "host"
    privileged: true
    devices:
      - "/dev:/dev"
      - "/var/run/udev:/var/run/udev"
  detection:
    image: baozhu/node-red-detection:v1.2
    restart: always
    runtime: nvidia
    network_mode: "host"
Your application has the following parts:
Three services backed by Docker images: your Node-RED app (node-red), dataloader, and detection
The dataloader service container will broadcast an OpenCV video stream (either from a USB webcam or an IP camera with RTSP) using the Pub/Sub messaging pattern to port 5550. It's important to note that you need to pass privileged: true to allow your service containers to access USB camera devices.
The detection service container will grab the above video stream and perform inference using TensorRT implementation of YOLOv5. This is an object-detection algorithm that can identify objects in real-time.
Execute the script
Open your terminal and run the following command:
sudo ./docker-ubuntu.sh
It’ll take approximately 2-3 minutes for these scripts to execute completely.
View your services
Once your script is executed, you can verify that your container services are up and running:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e487c20eb87b baozhu/node-red-dataloader:v1.2 "python3 python/pub_…" 48 minutes ago Up About a minute retail-store-items-detection-nodered-dataloader-1
4441bc3c2a2c baozhu/node-red-detection:v1.2 "python3 python/yolo…" 48 minutes ago Up About a minute retail-store-items-detection-nodered-detection-1
dd5c5e37d60d nodered/node-red:3.0.1 "./entrypoint.sh" 48 minutes ago Up About a minute (healthy) retail-store-items-detection-nodered-node-red-1
Visit http://127.0.0.1:1880/ to access the app.
You’ll find built-in nodes (video input, detection, and video view) available in the palette:
Let’s try to wire nodes by dragging them one-by-one from your palette into a workspace. First, let’s drag video input from the palette to the workspace. Double-click “Video Input” to view the following properties, and select “Local Camera”.
Note: We choose a local camera here to grab the video stream from the connected USB webcam. However, you can also grab the video stream from an IP camera via RTSP.
You'll see that Node-RED chooses the "COCO dataset" model by default:
Next, drag Video View from the palette to the workspace. If you double-click on Video View, you’ll discover that msg.payload is already chosen for you under the Property section.
Wire up the nodes
Once you have all the nodes placed in the workspace, it’s time to wire nodes together. Nodes are wired together by pressing the left-mouse button on a node’s port, dragging to the destination node and releasing the mouse button (as shown in the following screenshot).
Trigger Deploy at the top right corner to start the deployment process. By now, you should be able to see the detection process working as Node-RED detects items.
Conclusion
The ultimate goal of modernizing software development is to deliver high-value software to end users even faster. Low-code technology like Node-RED and Docker help us achieve this by accelerating the time from ideation to software delivery. Docker helps accelerate the process of building, running, and sharing modern AI applications.
Docker Official Images help you develop your own unique applications — no matter what tech stack you’re accustomed to. With one YAML file, we’ve demonstrated how Docker Compose helps you easily build Node-RED apps. We can even take Docker Compose and develop real-world microservices applications. With just a few extra steps, you can apply this tutorial while building applications with much greater complexity. Happy coding!
References:
Project Source Code
Getting Started with Docker in Seeed Studio
Node-RED
Node-RED Library
Node-RED Docker Hub Repository
Source: https://blog.docker.com/feed/
Ever found yourself wishing for a way to synchronize local changes with a remote Kubernetes environment? There’s a Docker extension for that! Read on to learn more about how Telepresence partners with Docker Desktop to help you run integration tests quickly and where to get started.
A version of this article was first published on Ambassador’s blog.
Run integration tests locally with the Telepresence Extension on Docker Desktop
Testing your microservices-based application becomes difficult when it can no longer be run locally due to resource requirements. Moving to the cloud for testing is a no-brainer, but how do you synchronize your local changes against your remote Kubernetes environment?
Run integration tests locally instead of waiting on remote deployments with Telepresence, now available as an Extension on Docker Desktop. By using Telepresence with Docker, you get flexible remote development environments that work with your Docker toolchain so you can code and test faster for Kubernetes-based apps.
Install Telepresence for Docker through these quick steps:
Download Docker Desktop.
Open Docker Desktop.
In the Docker Dashboard, click “Add Extensions” in the left navigation bar.
In the Extensions Marketplace, search for the Ambassador Telepresence extension.
Click install.
Connect to Ambassador Cloud through the Telepresence extension:
After you install the Telepresence extension in Docker Desktop, you need to generate an API key to connect the Telepresence extension to Ambassador Cloud.
Click the Telepresence extension in Docker Desktop, then click Get Started.
Click the Get API Key button to open Ambassador Cloud in a browser window.
Sign in with your Google, GitHub, or Gitlab account. Ambassador Cloud opens to your profile and displays the API key.
Copy the API key and paste it into the API key field in the Docker Dashboard.
Connect to your cluster in Docker Desktop:
Select the desired cluster from the dropdown menu and click Next. This cluster is now set to kubectl’s current context.
Click Connect to [your cluster]. Your cluster is connected, and you can now create intercepts.
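If you also have the Telepresence CLI installed, the same connection can be established from a terminal; this is optional and assumes your kubeconfig already points at the desired cluster:

# Connect your workstation to the cluster's network
telepresence connect

# Confirm the connection is up
telepresence status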
To get hands-on with the example shown in the above recording, please follow these instructions:
1. Enable and start your Docker Desktop Kubernetes cluster locally.
Install the Telepresence extension to Docker Desktop if you haven’t already.
2. Install the emojivoto application in your local Docker Desktop cluster (we will use this to simulate a remote K8s cluster).
Use the following command to apply the Emojivoto application to your cluster.
kubectl apply -k github.com/BuoyantIO/emojivoto/kustomize/deployment
3. Start the web service in a single container with Docker.
Create a file docker-compose.yml and paste the following into that file:
version: '3'
services:
  web:
    image: buoyantio/emojivoto-web:v11
    environment:
      - WEB_PORT=8080
      - EMOJISVC_HOST=emoji-svc.emojivoto:8080
      - VOTINGSVC_HOST=voting-svc.emojivoto:8080
      - INDEX_BUNDLE=dist/index_bundle.js
    ports:
      - "8080:8080"
    network_mode: host
In your terminal run docker compose up to start running the web service locally.
4. Using a test container, curl the “list” API endpoint in Emojivoto and watch it fail (because it can’t access the backend cluster).
In a new terminal, we will test the Emojivoto app with another container. Run the following command: docker run -it --rm --network=host alpine. Then we'll install curl: apk --no-cache add curl.
Finally, curl localhost:8080/api/list, and you should get an rpc error message because we are not connected to the backend cluster and cannot resolve the emoji or voting services:
> docker run -it --rm --network=host alpine
apk --no-cache add curl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
(1/5) Installing ca-certificates (20211220-r0)
(2/5) Installing brotli-libs (1.0.9-r5)
(3/5) Installing nghttp2-libs (1.46.0-r0)
(4/5) Installing libcurl (7.80.0-r0)
(5/5) Installing curl (7.80.0-r0)
Executing busybox-1.34.1-r3.trigger
Executing ca-certificates-20211220-r0.trigger
OK: 8 MiB in 19 packages
curl localhost:8080/api/list
{"error":"rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup emoji-svc on 192.168.65.5:53: no such host""}
5. Run Telepresence connect via Docker Desktop.
Open Docker Desktop and click the Telepresence extension on the left-hand side. Click the blue "Connect" button. Copy and paste an API key from Ambassador Cloud if you haven't done so already (https://app.getambassador.io/cloud/settings/licenses-api-keys). Select the cluster you deployed the Emojivoto application to by choosing the appropriate Kubernetes context from the menu. Click Next, and the extension will connect Telepresence to your local cluster.
6. Re-run the curl and watch this succeed.
Now let’s re-run the curl command. Instead of an error, the list of emojis should be returned indicating that we are connected to the remote cluster:
curl localhost:8080/api/list
[{"shortcode":":joy:","unicode":"😂"},{"shortcode":":sunglasses:","unicode":"😎"},{"shortcode":":doughnut:","unicode":"🍩"},{"shortcode":":stuck_out_tongue_winking_eye:","unicode":"😜"},{"shortcode":":money_mouth_face:","unicode":"🤑"},{"shortcode":":flushed:","unicode":"😳"},{"shortcode":":mask:","unicode":"😷"},{"shortcode":":nerd_face:","unicode":"🤓"},{"shortcode":":gh
7. Now, let’s “intercept” traffic being sent to the web service running in our K8s cluster and reroute it to the “local” Docker Compose instance by creating a Telepresence intercept.
Select the emojivoto namespace and click the intercept button next to “web”. Set the target port to 8080 and service port to 80, then create the intercept.
After the intercept is created you will see it listed in the UI. Click the nearby blue button with three dots to access your preview URL. Open this URL in your browser, and you can interact with your local instance of the web service with its dependencies running in your Kubernetes cluster.
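For reference, a roughly equivalent intercept can be created from the Telepresence CLI (exact flags vary by Telepresence version; the port mapping mirrors the UI settings above):

# Intercept the "web" workload in the emojivoto namespace,
# routing service port 80 to port 8080 on this machine
telepresence intercept web --namespace emojivoto --port 8080:80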
Congratulations, you’ve successfully created a Telepresence intercept and a sharable preview URL! Send this to your teammates and they will be able to see the results of your local service interacting with the Docker Desktop cluster.
—
Want to learn more about Telepresence or find other life-improving Docker Extensions? Check out the following related resources:
Install the official Telepresence Docker Desktop Extension.
Learn more about Ambassador and their solutions for Kubernetes developers.
Read similar articles covering new Docker Extensions.
Find more helpful Docker Extensions on Docker Hub.
Learn how to create your own extensions for Docker Desktop.
Get started and download Docker Desktop for Windows, Mac, or Linux.
Source: https://blog.docker.com/feed/
It gives me immense pleasure to present this blog as I intern with Docker as a Product Marketer. This incredible experience has given me the chance to brainstorm many new ideas. As a prior Java developer, I’ve always been amazed by how Java and Spring Boot work wonders together! Shoutout to everyone who helped drive the completion of this blog!
For the past 30 years, web development has become vital across multiple industries. Developers have long used Java and the Spring Framework for web development, particularly on the server side.
Java follows the "old is gold" philosophy. And after evolving over 25 years, it's still one of today's most popular programming languages. Fifty million websites, including Google, LinkedIn, eBay, Amazon, and Stack Overflow, use Java extensively.
In this blog, we’ll create a simple Java Spring Boot web application and containerize it using Docker, which works by running our application as a software “image.” This image packages together the operating system, code, and any supporting libraries or dependencies. This makes it much easier to develop and deploy cross-platform applications. Let’s jump into the process.
Here’s what you’ll be doing
Building your first Java Spring Boot web app
Running and building your application without Docker, first
Containerizing the Spring Boot web application
What you’ll need
1. JDK 17 or above
2. Spring Tool Suite for Eclipse
3. Docker Desktop
Building your first Java Spring Boot web app
We’re using Spring Tool Suite (STS) for our application. STS is Eclipse-based and tailored for creating Spring applications. It includes a built-in and customizable Tomcat server that offers Spring configuration file validation, coding assistance, and more.
Another advantage is that Spring Tool Suite 4 doesn’t need an explicit Maven plugin. It ships with its own Maven plugin, which is easy to enable by navigating to Windows > Preferences > Maven. This IDE also offers an approachable UI and tools to simplify Spring app development.
That said, let’s now create a new base project for our app. Create a new Spring starter project from the Package Explorer:
Since we’re building a Spring web app, we need to add our Spring web and Thymeleaf dependencies. Thymeleaf is a Java template engine for implementing frontend functions like HTML, XML, JavaScript, CSS, or even plain text files with Spring Boot.
You’ll notice that configuring your starter project takes some time, since we’re pulling from the official website. Once finished, you can further configure your project base. The project structure looks like this:
By default, Maven compiles sources from src/main/java, while src/test/java is where your test cases reside. Meanwhile, src/main/resources is the standard Maven location for application resources like templates, images, and other configurations.
Maven’s fundamental unit of work is the pom.xml (Project Object Model). It contains information about the project, its dependencies, and configuration details that Maven uses while building.
Here’s our project’s POM. You’ll also notice the Spring Web and Thymeleaf dependencies that we added initially:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.2</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>webapp</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>webapp</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>17</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-thymeleaf</artifactId>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
Next, if you inspect the source code inside src/main/java, you’ll see a generated class WebappApplication.java file:
package com.example.mypkg;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class WebappApplication {

    public static void main(String[] args) {
        SpringApplication.run(WebappApplication.class, args);
    }
}
This is the main class that your Spring Boot app executes from. The @SpringBootApplication annotation denotes a variety of features — including the ability to enable Spring Boot auto-configuration, Java-based Spring configuration, and component scanning.
Therefore, @SpringBootApplication is akin to using @Configuration, @EnableAutoConfiguration, and @ComponentScan with their default attributes. Here’s more information about each annotation:
@Configuration denotes that the particular class has @Bean definition methods. The Spring container may process it to provide bean definitions.
@EnableAutoConfiguration helps you auto-configure beans present in the classpath.
@ComponentScan lets Spring scan for configurations, controllers, services, and other predefined components.
A Spring application is bootstrapped as a standalone from the main method using SpringApplication.run(<Classname>.class, args).
As mentioned, you can embed both static and dynamic web pages in src/main/resources. Here, we’ve designated Products.html as the home page of our application:
We’ll use a simple RESTful web service to grab our application’s home page. First, create a Controller class in the same location as your main class. This’ll be responsible for processing incoming REST API calls, preparing a model, and returning the rendered view as a response.
package com.example.mypkg;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class HomeController {

    @GetMapping(value = "/DockerProducts")
    public String index() {
        return "Products";
    }
}
The @Controller annotation assigns the controller role to a particular class. You’d use this annotation to mark a class as a web request handler. @GetMapping commonly maps HTTP GET requests onto specific handler methods. Here, it’s the method “index” that returns our app’s homepage, called “Products.”
Building and running the application without Docker
It’s time to test our application by running it as a Spring Boot application.
Your application is now available at port 8080, which you can access by opening http://localhost:8080/DockerProducts in your browser.
We’ve tested our project by running it. It’s now time to build our application by creating a JAR file. Choose the “Maven clean” install option within Spring Tool Suite:
Here’s the console for the ongoing build. You’ll see that STS has successfully built our JAR:
You can access this JAR file in the Target folder shown below:
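If you'd rather build outside the IDE, the same JAR can be produced from the project root with the Maven CLI (assuming mvn is on your PATH):

# Compile, test, and package the application into target/
mvn clean install

# Verify the packaged JAR exists
ls target/webapp-0.0.1-SNAPSHOT.jar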
Containerizing our Spring Boot web application with Docker
Next, we’re using Docker to containerize our application. Before starting, download and install Docker Desktop. Docker Desktop includes multiple developer-focused tools like the Docker CLI, Docker Compose, and Docker Engine. It also features a user-friendly UI (the Docker Dashboard) that streamlines common container, image, and volume management tasks.
Once that’s installed, we’ll tackle containerization with the following steps:
Creating a Dockerfile
Building the Docker image
Running Docker container to access the application
Creating a Dockerfile
A Dockerfile is a plain-text file that specifies instructions for building a Docker image. You can create this in your project’s root directory:
FROM eclipse-temurin:17-jdk-focal
ADD target/webapp-0.0.1-SNAPSHOT.jar webapp-0.0.1-SNAPSHOT.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "webapp-0.0.1-SNAPSHOT.jar"]
What does each instruction do?
FROM – Specifies the base image that your Dockerfile uses to build a new image. We’re using eclipse-temurin:17-jdk-focal as our base image. The Eclipse Temurin Project shares code and procedures that enable you to easily create Java SE runtime binaries. It also helps you leverage common, related technologies that appear throughout Java’s ecosystem.
ADD – Copies the new files and JAR into your Docker container’s filesystem at a specific destination
EXPOSE – Reveals specific ports to the host machine. We’re exposing port 8080 since embedded Tomcat servers automatically use it.
ENTRYPOINT – Sets executables that’ll run once the container spins up
Building the Docker image
Docker images are instructive templates for creating Docker containers. You’ll build your Docker image by opening the STS terminal at your project’s root directory, and entering the following command:
docker build -t docker_desktop_page .
Our image name is docker_desktop_page. Here’s how your images will appear if you request a list:
Run your application as a Docker container
A Docker container is a running instance of a Docker image. It’s a lightweight, standalone, executable software package that includes everything needed to run an application. Enter this command to start your container:
docker run -p 8080:8080 docker_desktop_page
Access your application at http://localhost:8080/DockerProducts. Here’s a quick glimpse of our webpage!
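As a quick sanity check from the terminal, you can confirm the container is serving the page before opening a browser:

# Fetch only the response headers from the running container
curl -I http://localhost:8080/DockerProducts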
You can also view your image and running container via the Docker Dashboard:
You can also manage these containers within the Container interface.
Containerization enables easier build and deploy
Congratulations! You’ve successfully built your first Java website. You can access the full project source code here.
You’ve now learned how easy containerizing an application is — even without prior Docker experience. To learn more about developing your next Java Spring Boot application, check out our Getting started with Java Overview.
Source: https://blog.docker.com/feed/
Deploying and spinning up a functional server is key to distributing web-based applications to users. The Apache HTTP Server Project has long made this possible. However, despite Apache Server’s popularity, users can face some hurdles with configuration and deployment.
Thankfully, Apache and Docker containers can work together to streamline this process — saving you time while reducing complexity. You can package your application code and configurations together into one cross-platform unit. The Apache httpd Docker Official Image helps you containerize a web-server application that works across browsers, OSes, and CPU architectures.
In this guide, we’ll cover Apache HTTP Server (httpd), the httpd Docker Official Image, and how to use each. You’ll also learn some quick tips and best practices. Feel free to skip our Apache intro if you’re familiar, but we hope you’ll learn something new by following along. Let’s dive in.
In this tutorial:
What is Apache Server?
How to use the httpd Docker Official Image
How to use a Dockerfile with your image
How to use your image without a Dockerfile
Configuration and useful tips
How to unlock data encryption through SSL
Pull your first httpd Docker Official Image
What is Apache Server?
The Apache HTTP Server was created as a “commercial-grade, featureful, and freely available source code implementation of an HTTP (Web) server.” It’s equally suitable for basic applications and robust enterprise alternatives.
Like any server, Apache lets developers store and access backend resources — to ultimately serve user-facing content. HTTP web requests are central to this two-way communication. The “d” portion of the “httpd” acronym stands for “daemon.” This daemon handles and routes any incoming connection requests to the server.
Developers also leverage Apache’s modularity, which lets them add authentication, caching, SSL, and much more. This early extensibility update to Apache HTTP Server sparked its continued growth. Since Apache HTTP Server began as a series of NCSA patches, its name playfully embraces its early existence as “a patchy web server.”
Some Apache HTTP Server fun facts:
Apache debuted in 1995 and is still widely used.
It’s modeled after NCSA httpd v1.3.
Apache currently serves roughly 47% of all sites with a known web server.
Httpd vs. Other Server Technologies
If you’re experienced with Apache HTTP Server and looking to containerize your application, the Apache httpd Docker Official Image is a good starting point. You may also want to look at NGINX Server, PHP, or Apache Tomcat depending on your use case.
As a note, HTTP Server differs from Apache Tomcat — another Apache server technology. Apache HTTP Server is written in C, while Tomcat is Java based. Tomcat is a Java servlet container dedicated to running Java code. It also helps developers create application pages via JavaServer Pages.
What is the httpd Docker Official Image?
We maintain the httpd Docker Official Image in tandem with the Docker community. Developers can use httpd to quickly and easily spin up a containerized Apache web server application. Out of the box, httpd contains Apache HTTP Server’s default configuration.
Why use the Apache httpd Docker Official Image? Here are some core use cases:
Creating an HTML server, as mentioned, to serve static web pages to users
Forming secure server HTTPS connections, via SSL, using Apache’s modules
Using an existing complex configuration file
Leveraging advanced modules like mod_perl, which this GitHub project outlines
While these use cases aren’t specific to our httpd Official Image itself, it’s easy to include these external configurations within your image itself. We’ll explore this process and outline how to use your first Apache container image now.
For use cases such as mod_php, a dedicated image such as the PHP Docker Official Image is probably a better fit.
How to use the httpd Docker Official Image
Before proceeding, you’ll want to download and install Docker Desktop. While we’ll still use the CLI during this tutorial, the built-in Docker Dashboard gives you an easy-to-use UI for managing your images and containers. It’s easy to start, pause, remove, and inspect running containers with the click of a button. Have Desktop running and open before moving on.
The quickest way to leverage the httpd Official Image is to visit Docker Hub, copy the docker pull httpd command into your terminal, and run it. This downloads each package and dependency within your image before automatically adding it into Docker Desktop:
Some key things happened while we verified that httpd is working correctly in this video:
We pulled our httpd image using the docker pull httpd command.
We found our image in Docker Desktop in the Images pane, chose “Run,” and expanded the Optional settings pane. We named our image so it’s easy to find, and entered 8080 as the host port before clicking “Run” again.
Desktop took us directly into the Containers pane, where our named container, TestApache, was running as expected.
We visited `http://localhost:8080` in our browser to test our basic setup.
This example automatically grabs the :latest version of httpd. We recommend specifying a numbered version or a tag with greater specificity, since these :latest versions can introduce breaking changes. It can be challenging to monitor these changes and test them effectively before moving into production.
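For example, pinning to a specific release line is a one-line change (pick whichever tag suits your project):

# Pull a pinned httpd release instead of the floating :latest tag
docker pull httpd:2.4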
That’s a great test case, but what if you want to build something a little more customized? This is where a Dockerfile comes in handy.
How to use a Dockerfile with your image
Though less common than other workflows, using a Dockerfile with the httpd Docker Official Image is helpful for defining custom configurations.
Your Dockerfile is a plain text file that instructs Docker on how to build your image. While building your image manually, this file lets you create configurations and useful image layers — beyond what the default httpd image includes.
Running an HTML server is a common workflow with the httpd Docker Official Image. You’ll want to add your Dockerfile in a directory which contains your project’s complete HTML. We’ll call it public-html in this example:
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
The FROM instruction tells our builder to use httpd:2.4 as our base image. The COPY instruction copies new files or directories from our specified source, and adds them to the filesystem at a certain location. This setup is pretty bare bones, yet still lets you create a functional Apache HTTP Server image!
Next, you’ll need to both build and run this new image to see it in action. Run the following two commands in sequence:
$ docker build -t my-apache2 .
$ docker run -d --name my-running-app -p 8080:80 my-apache2
First, docker build will create your image from your earlier Dockerfile. The docker run command takes this image and starts a container from it. This container is running in detached mode, or in the background. If you wanted to take a step further and open a shell within that running container, you’d enter a third command: docker exec -ti my-running-app sh. However, that’s not necessary for this example.
Finally, visit http://localhost:8080 in your browser to confirm that everything is running properly.
How to use your image without a Dockerfile
Sometimes, you don't need or want a Dockerfile for your image builds. In fact, this is the more common approach for most developers, and it requires just a couple of commands.
That said, enter the following commands to run your Apache HTTP Server container:
Mac:
$ docker run -d --name my-apache-app -p 8080:80 -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
Linux:
$ docker run -d --name my-apache-app -p 8080:80 -v "$(pwd)":/usr/local/apache2/htdocs/ httpd:2.4
Windows:
$ docker run -d --name my-apache-app -p 8080:80 -v "$pwd":/usr/local/apache2/htdocs/ httpd:2.4
Note: For most Linux users, the Mac version of this command works — but the Linux version is safest for those running compatible shells. While Windows users running Docker Desktop will have bash available, "$pwd" is needed for PowerShell.
Using -v bind mounts your project directory into the container, and $PWD (or its OS-specific variation) expands to your current working directory if you're running macOS or Linux. This lets your container access your filesystem and grab what it needs to run. You're still connecting host port 8080 to container port 80/tcp — just like we did earlier within Docker Desktop — and running your Apache container in the background.
Configuration and useful tips
Customizing your Apache HTTP Server configuration is possible with two quick steps. First, enter the following command to grab the default configuration upstream:
docker run --rm httpd:2.4 cat /usr/local/apache2/conf/httpd.conf > my-httpd.conf
Second, return to your Dockerfile and COPY in your custom configuration from the required directory:
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
That’s it! You’ve now dropped your Apache HTTP Server configurations into place. This might include changes to any modules and any functional additions to help your server run.
How to unlock data encryption through SSL
Apache forms connections over HTTP by default. This is fine for smaller projects, test cases, and server setups where security isn’t important. However, larger applications and especially those that transport sensitive data — like enterprise apps — may require encryption. HTTPS is the standard that all web traffic should use given its default encryption.
This is possible natively through Apache using the mod_ssl encryption module. In a Docker context, running web traffic over SSL means using the COPY instruction to add your server.crt and server.key into your /usr/local/apache2/conf/ directory.
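As a minimal sketch (the certificate and key file names are placeholders, and your httpd.conf must also load mod_ssl and listen on port 443), the Dockerfile for an SSL-enabled image might be written like so:

# Write a Dockerfile that layers the config, certificate, and key onto httpd
cat > Dockerfile <<'EOF'
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
COPY ./server.crt /usr/local/apache2/conf/server.crt
COPY ./server.key /usr/local/apache2/conf/server.key
EOF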
This is a condensed version of the process, and more steps are needed to get SSL up and running. Check out our Docker Hub documentation under the "SSL/HTTPS" section for a complete list of approachable steps. Crucially, SSL uses port 443 instead of port 80 (the latter is normally reserved for unencrypted traffic).
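As a rough sketch of the Dockerfile side, assuming you've generated server.crt and server.key in your build context (the Docker Hub page remains the authoritative reference), the common pattern is to uncomment the SSL module, its session cache, and the SSL config include, then copy the certificate files in:
FROM httpd:2.4
RUN sed -i \
  -e 's/^#\(Include .*httpd-ssl.conf\)/\1/' \
  -e 's/^#\(LoadModule .*mod_ssl.so\)/\1/' \
  -e 's/^#\(LoadModule .*mod_socache_shmcb.so\)/\1/' \
  conf/httpd.conf
COPY server.crt /usr/local/apache2/conf/server.crt
COPY server.key /usr/local/apache2/conf/server.key
Build it as before, then publish the SSL port when you run it, for example with -p 8443:443.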
Pull your first httpd Docker Official Image
We've demoed how to successfully use the httpd Docker Official Image to containerize and run Apache HTTP Server. This is great for serving web pages and powering various web applications, both secure and otherwise. Using this image lets you deploy cross-platform and cross-browser without hiccups.
Combining Apache with Docker also preserves much of the customizability and functionality developers expect from Apache HTTP Server. To quickly start experimenting, head over to Docker Hub and pull your first httpd container image.
—
Further reading:
The httpd GitHub Repository
Awesome Compose: A sample PHP application using Apache2
Quelle: https://blog.docker.com/feed/
Whalecome, dear reader, to the first issue of Dear Moby — my new advice column where I, Moby Dock, will be answering real developer questions from you, the Docker community. Ever hear of the Dear Abby column? Well, this one is better, because it’s just for developers.
Since we announced this column (and its video counterpart with my friends, Kat and Shy), we’ve received a tidal wave of questions. (You can submit your own questions here!)
Despite my whaleth of knowledge and passion for all things app development, I’m only one whale — and I don’t have all the answers! Many, but not all. So, I’ve commissioned a crew of fellow Docker experts to voyage alongside me in pursuit of the answers you seek.
So without further ado, let’s dive into the fray…
Our first question comes from shaileshb who asks:
“Hey! I’m creating a CronJob for my kubernetes cluster. Currently, I am confused as to whether I should put database connection strings and the main logic inside the CronJob itself, or whether those should exist in an API that the CronJob calls.”
Today’s commissioned experts: Director of Engineering Shawn Axsom and Principal Software Engineer Josh Newman
Dear shaileshb,
The best approach depends on your specific circumstances, but there are important performance and security considerations to take into account with every deployment to your cluster.
Whether you use an additional API or not, you should secure connection strings and other secrets.
You want to keep it secret and keep it safe, so try these best practices:
Don't put connection strings in environment variables, where someone could access them after breaching the container or by inspecting container or pod metadata. (See the sketch after this list.)
Set identity access management policies based on the Principle of Least Privilege. (More about PoLP here.)
Consolidate database access to a single service or a limited subset of services.
Consider a secrets manager, regardless of what deployment approach you take. (Take a deep dive into Kubernetes secret storage with this post from Conjur!)
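As a minimal sketch of the first two points, using hypothetical names throughout, you could create a Kubernetes Secret for the connection string and mount it into the CronJob's pod as a read-only file instead of exposing it as an environment variable:
$ kubectl create secret generic db-conn --from-literal=url='postgres://report-user:example-pass@db.internal:5432/reports'
$ kubectl create cronjob nightly-report --image=example/report:1.0 --schedule="0 2 * * *" --dry-run=client -o yaml > cronjob.yaml
# edit cronjob.yaml to mount the db-conn Secret as a read-only volume, then:
$ kubectl apply -f cronjob.yaml
Mounting the Secret as a file keeps the value out of the pod spec and out of the container's process environment, and a dedicated least-privilege database role (like report-user here) limits the blast radius if it ever leaks.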
Next, when it comes to performance, it really depends on your circumstance. Either approach can give you good performance, but your choice needs to take into account things like how often the CronJob runs, whether it runs in parallel, whether caching is involved, etc. And if designing for scale, you’ll want to consider connection limits and connection pooling.
When going the CronJob route, we suggest considering an external connection pool, but be sure it's set up properly to avoid exhausting it. Pooling can also be an advantage of the API route: depending on your tech stack and usage, you can get better connection pooling within the API service itself. If you want to brush up on your connection pooling considerations, check out this Stack Overflow article.
In the long run, simplicity is a virtue and will always reign supreme.
An understandable, secure, and scalable system is one with a cohesive design that isn't over-engineered. It's often best to start small (and to consider existing services that might benefit from the functionality), keeping the code contained in one location where secrets are stored securely, especially if there aren't performance or scalability concerns or a need to reuse it. And if these concerns do rear their ugly heads, that simplicity makes it easier to refactor. Truth be told, after observing the code in production (or getting feedback), you might even opt for a different approach, so the simpler you start, the easier you make things for yourself later.
Above all else, it's always good to design for resiliency and extensibility. If best practices aren't in place from the start, design the CronJob or API in a modular, composable way so those practices can be adopted later without a rewrite.
Well, that does it for our first issue of Dear Moby! Many thanks to Josh and Shawn.
Have another question for me and the Docker dev team to tackle? Submit it here!
Until next time,
Moby Dock
Quelle: https://blog.docker.com/feed/
Extensions are great for expanding the capability of Docker Desktop. We’re proud to feature this extension from Slim.AI which promises deep container insight and optimization features. Follow along as Slim.AI walks through how to install, use, and connect with these handy new features!
A version of this article was first published on Slim.AI’s blog.
We're excited to announce that we've been working closely with the team at Docker to develop our own Slim.AI Docker Extension, which helps developers build secure, optimized containers faster. You can find the Slim Extension in the Docker Marketplace or on Docker Hub.
Docker Extensions give developers a simple way to install and run helpful container development tools directly within Docker Desktop. For more information about Docker Extensions, check out https://docs.docker.com/desktop/extensions/.
The Slim.AI Docker Extension brings some of the Slim Platform’s capabilities directly to your local environment. Our initial release, available to everyone, is focused on being the easiest way for developers to get visibility into the composition and construction of their images and help reduce friction when selecting, troubleshooting, optimizing, and securing images.
Why should I install the Slim.AI Extension?
At Slim, we believe that knowing your software is a key building block to creating secure, small, production-ready containers and reducing software supply chain risk. One big challenge many of us face when attempting to optimize and secure container images is that images often lack important documentation. This leaves us in a pinch when trying to figure out even basic details about whether or not an image is usable, well constructed, and secure.
This is where, we believe, the Slim Docker Extension can help.
Currently, the Slim Docker extension is free to developers and includes the following capabilities:
Available free to all developers, without authentication to the Slim Platform:
Easy-to-access deep analyses of your local container images by tag, with quick access to information like the local arch, exposed ports, shells, volumes, and certs
Security insights, including whether the container runs as the root user and a complete list of files that have special permissions
Optimization opportunities, including counts of deleted and duplicate files
A fully searchable File Explorer, filterable by layer, instruction, and file type, with the ability to view the contents of any text-based file
The ability to compare any two local images or image tags with deep analyses and File Explorer capabilities
Reverse-engineered Dockerfiles for each image when the originals are not available
Features available to developers with Slim.AI accounts (https://portal.slim.dev):
Search across multiple public and authenticated registries for quality images, including support for Docker Hub, GitHub, DigitalOcean, ECR, MCR, and GCR, with more coming soon.
View deep analysis, insights, and File Explorer for images across available registries prior to pulling them down to your local machine.
How do I install the extension?
Make sure you’re running Docker Desktop version 4.10 or greater. You can get the latest version of Docker Desktop at docker.com/desktop.
Go to Slim.AI Extension on Docker Hub. (https://hub.docker.com/extensions/slimdotai/dd-ext)
Click “Open in Docker Desktop”.
This will open the Slim.AI extension in the Extensions Marketplace in Docker Desktop. Click “Install.” The installation should take just a few seconds.
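If you'd rather skip the UI, recent Docker Desktop releases also ship an Extensions CLI; assuming it's enabled in your version, a single command should do the same job:
$ docker extension install slimdotai/dd-ext
Either route ends with the extension appearing in Docker Desktop.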
How do I use the Slim.AI Docker Desktop Extension?
Once installed, click on the Slim.AI extension in the left nav of Docker Desktop.
You should see a “Welcome” screen. Go ahead and click to view our Terms and Privacy Policy. Then, click “Start analyzing local containers”.
The Slim.AI Docker Desktop Extension will list the images on your local machine.
You can use the search bar to find specific images by name.
Click the “Explore” button or caret icon to view details about your images:
The File Explorer is a complete searchable view into your image’s file system. You can filter by layer, file type, and whether or not it is an Addition, Modification, or Deletion in a given layer. You can also view the content of non-binary files by clicking on the file name then clicking File Contents in the new window.
The Overview displays important metadata and insights about your image including the user, certs, exposed ports, volumes, environment variables, and more.
The Docker File view shows a reverse-engineered Dockerfile that we generate when the original Dockerfile isn't available.
Click the “Compare” button to compare two images or image tags.
Select the tag via the dropdown under the image name. Then, click the “Compare” button in its card.
Select a second image or tag, and click the “Compare” button in its card.
You will be taken to a comparison view where you can explore the differences in the files, metadata, and reverse engineered dockerfiles.
How do I connect the Slim.AI Docker Desktop Extension to my Slim.AI Account?
Once installed, click on the “Login” button at the top of the extension.
Sign in using your GitHub, GitLab, or Bitbucket account. (Accounts are free for individual developers.)
Navigate back to the Slim Docker Desktop Extension.
Once successfully connected, you can use the search bar to search over all of your connected registries and explore remote images before pulling them down to your local machine.
What if I don’t have a Slim.AI account?
The Slim platform is currently free to use. You can create an account from the Docker Desktop Extension by clicking the Login button in the top right of the extension. You'll be taken to a sign-in page where you can authenticate using GitHub, GitLab, or Bitbucket.
What’s on the roadmap?
We have a number of features and refinements planned for the extension, but we need your feedback to help us improve. Please provide your feedback here.
Planned capabilities include:
Improvements to the Overview to provide more useful insights
Design and UX updates to make the experience even easier to use
More capabilities that connect your local experience to the Slim Portal
—
Interested in learning more about how extensions can expand your experience in Docker Desktop? Check out our Docker Hub extensions library or see below for further reading:
Install the Slim.AI Docker Desktop Extension.
Read similar articles covering new Docker Extensions.
Learn how to create your own extensions for Docker Desktop.
Get started and download Docker Desktop for Windows, Mac, or Linux.
Quelle: https://blog.docker.com/feed/
Mark your calendars! Join us for our next Community All-Hands event on September 1st. This quarterly event is a unique opportunity for the Docker community and staff to come together and present topics we’re passionate about.
Don’t miss out on this special event with community news, project demos, programming workshops, company and product updates, and more! In this edition, we will be introducing an unconference track, in which you will be able to propose any topic and drive an interactive discussion with the community (the world is your oyster!).
We are looking for speakers who want to share their experience with the community in a 30 to 45-minute presentation. We welcome all topics related to software development and the tech ecosystem. Additionally, we’re looking for unconference hosts who are passionate about a particular topic and want to lead a discussion on it.
Have a topic you’re especially excited about and would like to host? Follow these top five tips to get your topic suggestion accepted, and submit your proposal before August 14th!
Top five tips to get your proposal accepted
To increase the chances of your topic being selected, there’s a few qualities our community especially looks forward to. When brainstorming your proposal, we recommend keeping the following in mind.
1. Keep it short and sweet
Brevity is key when it comes to proposals. The review committee is looking for clear and concise proposals that get to the point.
2. Make it relevant
Your proposal should be relevant to the Docker community. Think about what would be of interest to other members of the community and what would contribute to the overall goal of the All-Hands event, which is to bring the community together.
During this event, developers can look forward to a noncommercial, community-driven platform for learning about new technologies and driving group conversation. For this reason, please don't submit commercial or promotional content.
3. Don’t be afraid to repeat topics
Everyone has a different perspective, so don’t be afraid to repeat a topic that has been presented before. Your unique perspective is what makes your proposal valuable!
4. Know your audience
Keep your audience in mind when crafting your proposal. The All-Hands event is open to developers of all levels, so make sure your proposal is accessible to a broad range of people.
5. Follow the submission guidelines
Be sure to follow the submission guidelines when submitting your proposal. This includes providing all the required information and using the correct format.
Still not sure what to submit? Here’s a list of ideas
Main track:
Converting your application from a monolith into a collection of microservices
Getting started with Carbon, WASM, or any other technology. Provide a template on Docker Hub for people to use!
A workshop on how to build a modern web development architecture based on Jamstack
A workshop on how to use some exciting new technologies (Web3, AI, Blockchain)
Case study: How you increased the productivity of your team with modern software tooling
Best practices for developing and deploying cloud-native applications
Showcase your new IoT project and information on how to contribute
How you made your tech meetup more inclusive
Your new Docker extension and how others can benefit from installing it
Unconference track:
Discussion on security best practices
Lightning talk about your open source project
Birds of a Feather session – Reactive Programming
Discussion panel on Artificial Intelligence trends
Guided meditation session
Docker sessions in French, Spanish or Klingon
An interpretative dance session of marine creatures
We hope these tips help you in submitting a successful proposal for our next Community All-Hands event. Make sure to submit your ideas before the August 14th deadline, and we’ll see you there!
Quelle: https://blog.docker.com/feed/