Announcing Docker Hub OCI Artifacts Support

We’re excited to announce that Docker Hub can now help you distribute any type of application artifact! You can now keep everything in one place without having to leverage multiple registries.

Before today, you could only use Docker Hub to store and distribute container images — or artifacts usable by container runtimes. This became a limitation of our platform, since container image distribution is just the tip of the application delivery iceberg. Nowadays, modern application delivery requires numerous types of artifacts:

- Helm charts
- WebAssembly modules
- Docker volumes
- SBOMs
- OPA bundles
- …and many other custom artifacts

Developers often share these with clients that need them since they add immense value to each project. And while the OCI working groups are busy releasing the latest OCI Artifact Specification, we still have to package application artifacts as OCI images in the meantime. 

Docker Hub acts as an image registry and is perfectly suited for distributing application artifacts. That’s why we’ve added support for any software artifact — packaged as an OCI image — to Docker Hub.

What’s the Open Container Initiative (OCI)?

Back in 2015, we helped establish the Open Container Initiative as an open governance structure to standardize container image formats, container runtimes, and image distribution.

The OCI maintains a few core specifications. These govern the following:

- How to package filesystem bundles
- How to launch containerized, cross-platform apps
- How to make packaged content accessible to remote clients

The Runtime Specification determines how OCI images and runtimes interact. Next, the Image Specification outlines how to create OCI images. Finally, the Distribution Specification defines how to make content distribution interoperable.

The OCI’s overall aim is to boost transparency, runtime predictability, software compatibility, and distribution. We’ve since donated our own container format and runC OCI-compliant runtime to the OCI, plus given the OCI-compliant distribution project to the CNCF.

Why are we adding OCI support? 

Container images are integral to supporting your containerized application builds. We know that images accumulate between projects, making centralized cloud storage essential to efficiently manage resources. Developers shouldn’t have to rely on local storage or wonder if these resources are readily accessible. However, we also know that developers want to store a variety of artifacts within Docker Hub. 

Storing your artifacts in Docker Hub unlocks “anywhere access” while also enabling improved collaboration through Docker Hub’s standard sharing capabilities. This aligns us more closely with the OCI’s content distribution mission by giving users greater control over key pieces of application delivery.

How do I manage different OCI artifacts?

We recommend using dedicated tools to help manage non-container OCI artifacts, like the Helm CLI for Helm charts or the OCI Registry-as-Storage (ORAS) CLI for arbitrary content types.

Let’s walk through a few use cases to showcase OCI support in Docker Hub.

Working with Helm charts

Helm chart support was your most-requested feature, and we’ve officially added it to Docker Hub! So, how do you take advantage? We’ll create a simple Helm chart and push it to Docker Hub. This process will follow Helm’s official guide for storing Helm charts as OCI images in registries.

First, we’ll create a demo Helm chart:

$ helm create demo

This’ll generate a familiar Helm chart boilerplate of files that you can edit:

demo
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 10 files

Once we’re done editing, we’ll need to package the Helm chart as an OCI image:

$ helm package demo

Successfully packaged chart and saved it to: /Users/martine/tmp/demo-0.1.0.tgz

Don’t forget to log into Docker Hub before pushing your Helm chart. We recommend creating a Personal Access Token (PAT) for this. You can export your PAT via an environment variable and log in as follows:

$ echo $REG_PAT | helm registry login registry-1.docker.io -u martine --password-stdin

Pushing your Helm chart

You’re now ready to push your first Helm chart to Docker Hub! But first, make sure you have write access to your Helm chart’s destination namespace. In this example, let’s push to the docker namespace:

$ helm push demo-0.1.0.tgz oci://registry-1.docker.io/docker

Pushed: registry-1.docker.io/docker/demo:0.1.0
Digest: sha256:1e960ad1693c234b66ec1f9ddce80986cbf7159d2bb1e9a6d2c2cd6e89925e54

Viewing your Helm chart and using filters

Now, if you log in to Docker Hub and navigate to the demo repository detail, you’ll find your Helm chart in the list of repository tags:

You can navigate to the Helm chart page by clicking on the tag. The page displays useful Helm CLI commands:
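The commands shown there let anyone consume the chart. As a sketch (assuming the demo chart and version pushed above; the network call is skipped if the Helm CLI isn’t installed locally, and a failed pull is reported rather than fatal):

```shell
# Hypothetical chart reference and version from the push above
CHART_REF="oci://registry-1.docker.io/docker/demo"
CHART_VERSION="0.1.0"

if command -v helm >/dev/null 2>&1; then
  # Download the packaged chart into the current directory...
  helm pull "$CHART_REF" --version "$CHART_VERSION" || echo "pull failed (check credentials and namespace)"
  # ...or install it straight into a Kubernetes cluster:
  # helm install demo "$CHART_REF" --version "$CHART_VERSION"
fi
```

Both helm pull and helm install accept oci:// references since Helm 3.8, where OCI registry support became generally available.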

Repository content management is now easier. We’ve improved content discoverability by adding a drop-down button to quickly filter the repository list by content type. Simply click the Content drop-down and select Helm from the list:

Working with volumes

Developers use volumes throughout the Docker ecosystem to share arbitrary application data like database files. You can already back up your volumes using the Volume Backup & Share extension that we recently launched. You can now also filter repositories to find those containing volumes using the same drop-down menu.

What if you want to push a volume to Docker Hub without using the Volume Backup & Share extension — say from the command line — and still have Docker Hub recognize it as a volume? The easiest method leverages the ORAS project. Let’s walk through a simple use case that mirrors the examples documented by the ORAS CLI.

First, we’ll create a simple file we want to package as a volume:

$ echo "bar" > foo.txt

For Docker Hub to recognize this volume, we must attach a config file to the OCI image upon creation and mark it with a specific media type. The file can contain arbitrary content, so let’s create one:

$ echo '{"name":"foo","value":"bar"}' > config.json

With this step completed, you’re now ready to push your volume.

Pushing your volume

Here’s where the magic happens. The media type Docker Hub needs to successfully recognize the OCI image as a volume is application/vnd.docker.volume.v1+tar.gz. You can attach the media type to the config file and push it to Docker Hub with the following command (plus its resulting output):

$ oras push registry-1.docker.io/docker/demo:0.0.1 --config config.json:application/vnd.docker.volume.v1+tar.gz foo.txt:text/plain

Uploading b5bb9d8014a0 foo.txt
Uploaded b5bb9d8014a0 foo.txt
Pushed registry-1.docker.io/docker/demo:0.0.1
Digest: sha256:f36eddbab8459d0ad1436b7ca8af6bfc512ec74f45d8136b53c16db87562016e

We now have two types of content in the demo repository as shown in the following breakdown:

If you navigate to the content page, you’ll see some basic information that we’ll expand upon in future iterations. This will boost visibility into a volume’s contents.
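Pulling the artifact back out works the same way in reverse. Here’s a sketch using the hypothetical demo tag from above; the network call is skipped if the ORAS CLI isn’t installed, and a failed pull is reported rather than fatal:

```shell
# Hypothetical artifact reference from the push above
ARTIFACT="registry-1.docker.io/docker/demo:0.0.1"

if command -v oras >/dev/null 2>&1; then
  # ORAS restores the packaged files (here, foo.txt) into the working directory
  oras pull "$ARTIFACT" || echo "pull failed (check credentials)"
fi
```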

Handling generic content types

If you don’t use the application/vnd.docker.volume.v1+tar.gz media type when pushing the volume with the ORAS CLI, Docker Hub will mark the artifact as generic to distinguish it from recognized content.

Let’s push the same volume but use application/vnd.random.volume.v1+tar.gz media type instead of the one known to Docker Hub:

$ oras push registry-1.docker.io/docker/demo:0.1.1 --config config.json:application/vnd.random.volume.v1+tar.gz foo.txt:text/plain

Exists 7d865e959b24 foo.txt
Pushed registry-1.docker.io/docker/demo:0.1.1
Digest: sha256:d2fb2b176ee4e326f1f34ecdaede8db742f2c444cb2c9ceff0f5c8b743281c95

You can see the new content is assigned a generic Other type. We can still view the tagged content’s media type by hovering over the type label. In this case, that’s application/vnd.random.volume.v1+tar.gz:

If you’d like to filter the repositories that contain both Helm charts and volumes, use the same drop-down menu in the top-right corner:

Working with container images

Finally, you can continue pushing your regular container images to the exact same repository as your other artifacts. Say we re-tag the Redis Docker Official Image and push it to Docker Hub:

$ docker tag redis:3.2-alpine docker/demo:v1.2.2

$ docker push docker/demo:v1.2.2

The push refers to repository [docker.io/docker/demo]
a1892d5d1a6d: Mounted from library/redis
e41876edb6d0: Mounted from library/redis
7119119b7542: Mounted from library/redis
169a281fff0f: Mounted from library/redis
04c8ef03e935: Mounted from library/redis
df64d3292fd6: Mounted from library/redis
v1.2.2: digest: sha256:359cfebb00bef01cda3bc1ca453e6455c770a246a06ad8df499a28118c144eda size: 1570

Viewing your container images

If you now visit the demo repository page on Docker Hub, you’ll see every artifact listed under Tags and scans:

We’ll also introduce more features soon to help you better organize your application content, so stay tuned for more announcements!

Follow along for more updates

All developers can now access and choose from more robust sets of artifacts while building and distributing applications with Docker Hub. Not only does this remove existing roadblocks, but it’ll hopefully encourage you to create and distribute even more exciting applications.

But, our mission doesn’t end here! We’re continually working to bolster our OCI support. While the OCI Artifact Specification is considered a release candidate, full Docker Hub support for OCI Reference Types and the accompanying Referrers API is on the horizon. Stay tuned for upcoming enhancements, improved repo organization, and more.
Source: https://blog.docker.com/feed/

Docker Captain Take 5 — Nelson

Docker Captains are select members of the community that are both experts in their field and passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Nelson, one of our newest Captains. He’s the founder of Amigoscode and based in London, United Kingdom. You can follow Nelson on Twitter and Instagram for updates on his programming courses, YouTube videos, and more. 

How/when did you first discover Docker?

I discovered Docker back in 2015 while building a PoC that needed a local Postgres database. I was so impressed that with one single command I had a database up and running and was able to connect the backend to it. Ever since then, I’ve learned even more about Docker and use it in all of my projects. It has been a game-changer.

What’s your favorite Docker command?

docker exec -it container-name/id /bin/(bash/sh)

What’s your top tip for working with Docker that others may not know?

Whenever building a PoC, you should always use Docker Compose. With this Docker tool, you can spin up an entire set of applications that can talk to each other with a single command. In most cases, you can spin up your entire dev environment using Docker Compose. 

What’s the coolest Docker demo you have done/seen?

I’ve built a Golang CLI tool that transcribes all of my videos and translates captions to any language. 

What have you worked on in the past six months that you’re particularly proud of?

A serverless email marketing tool to send emails to my students and subscribers. This tool is written in Golang and deployed on AWS Lambda, running as a Docker container.

What do you anticipate will be Docker’s biggest announcement this year?

I have no idea, but I know it’ll be cool.

What are some personal goals for the next year with respect to the Docker community?

Provide end-to-end programming content that teaches real-world applications by incorporating tools such as Docker. 

What was your favorite thing about DockerCon 2022?

The diversity of software engineers. There’s always something new to take from the talks/presentations.

Looking to the distant future, what’s the technology that you’re most excited about and you think holds a lot of promise?

I think serverless technology will enable engineering teams to think less about servers and focus more on the business side. 

Rapid fire questions…

What new skill have you mastered during the pandemic?

Cooking

Cats or Dogs?

Cats

Salty, sour, or sweet?

Sweet

Beach or mountains?

Mountains

Your most often-used emoji?

😁
Source: https://blog.docker.com/feed/

Security Advisory: Critical OpenSSL Vulnerability

What is it?

The OpenSSL Project will release a security fix (OpenSSL version 3.0.7) for a new, yet-to-be-disclosed CVE on Tuesday, November 1, 2022. This CVE is categorized as “CRITICAL” and affects OpenSSL versions 3.0 and above.

Docker estimates about 1,000 image repositories could be impacted across various Docker Official Images and Docker Verified Publisher images. This includes images based on Debian 12, Ubuntu 22.04, and Red Hat Enterprise Linux 9+ which install 3.x versions of OpenSSL.

We’re updating our users now so you can prepare to remediate any impacted images. We’ll also update this advisory as the OpenSSL Project releases more details next week.

Am I vulnerable?

While we’re waiting on the Project to release specific vulnerability details, you can still see if your public and private repositories are impacted. Docker created a placeholder for the OpenSSL CVE, which we’ll soon replace with the official CVE once it’s disclosed. 

Like with Heartbleed, OpenSSL’s maintainers are being careful about what information they publicize until a fix arrives. However, you can act before this announcement. We’ve created a way to quickly and transparently analyze any image’s security flaws.

Visit Docker’s Image Vulnerability Database, navigate to the “Vulnerability search” tab, and search for the placeholder security advisory dubbed “DSA-2022-0001.” You can also use this tool to see other vulnerabilities as they’re discovered, receive updates to refresh outdated base images, and more:

Once we learn more about this vulnerability, you can take targeted steps to determine how vulnerable you are. We suggest using the docker scan CLI command and Snyk’s Docker Hub Vulnerability Scanning tool. These will help detect the presence of vulnerable library versions and flag your image as vulnerable.
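As a sketch of what that check might look like once the CVE details are published (the image name is a placeholder, and the scan only runs if the docker CLI is available; docker scan is backed by Snyk):

```shell
IMAGE="myorg/myapp:1.0"   # placeholder image name

if command -v docker >/dev/null 2>&1; then
  # Report only high-severity findings to cut through the noise
  docker scan --severity high "$IMAGE" || echo "scan flagged issues or is unavailable"
fi
```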

Alternatively, Docker is providing an experimental local tool to detect OpenSSL 3.x in Docker images. You can install this tool from its GitHub repository. Then, you can search your image for OpenSSL 3.x version with the following command:

$ docker-index cve --image gradle@sha256:1a6b42a0a86c9b62ee584f209a17d55a2c0c1eea14664829b2630f28d57f430d DSA-2022-0001

If the image contains a potentially vulnerable OpenSSL version, your terminal output will resemble the following:

WARNING Detected DSA-2022-0001 at
WARNING
WARNING pkg:deb/ubuntu/openssl@3.0.2-0ubuntu1.6?os_distro=jammy&os_name=ubuntu&os_version=22.04
WARNING
WARNING Instruction: /bin/sh -c #(nop) ADD file:ba96f963bbfd429a0839c40603fdd7829eaca58f20adfa0d15e6beae8244bc08 in /
WARNING Layer 0: sha256:301a8b74f71f85f3a31e9c7e7fedd5b001ead5bcf895bc2911c1d260e06bd987

And if Docker doesn’t detect a vulnerable version of OpenSSL in your image, you’ll see the following:

INFO DSA-2022-0001 not detected

Check back soon for more

As mentioned earlier, we’ll update this blog once the OpenSSL Project provides more vulnerability details. We also encourage you to sign up for our Early Access Program to access the tools discussed in this blog — plus share invaluable product feedback to help us improve!
Source: https://blog.docker.com/feed/

How to Implement Decentralized Storage Using Docker Extensions

This is a guest post written by Marton Elek, Principal Software Engineer at Storj.

In part one of this two-part series, we discussed the intersection of Web3 and Docker at a conceptual level. In this post, it’s time to get our hands dirty and review practical examples involving decentralized storage.

We’d like to see how we can integrate Web3 projects with Docker. At the beginning we have to choose from two options:

- We can use Docker to containerize any Web3 application. We can also start an IPFS daemon or an Ethereum node inside a container. Docker resembles an infrastructure layer since we can run almost anything within containers.
- What’s most interesting is integrating Docker itself with Web3 projects. That includes using Web3 to help us when we start containers or run something inside containers. In this post, we’ll focus on this portion.

The two most obvious integration points for a container engine are execution and storage. We choose storage here since more mature decentralized storage options are currently available. There are a few interesting approaches for decentralized versions of cloud container runtimes (like ankr), but they’re more likely replacements for container orchestrators like Kubernetes — not the container engine itself.

Let’s use Docker with decentralized storage. Our example uses Storj, but all of our examples apply to almost any decentralized cloud storage solution.

Storj is decentralized cloud storage where node providers are compensated to host the data, but metadata servers (which manage the location of the encrypted pieces) are federated (many interoperable central servers can work together with storage providers).

It’s important to mention that decentralized storage almost always requires you to use a custom protocol. A traditional HTTP upload is a connection between one client and one server. Decentralization requires uploading data to multiple servers. 

Our goal is simple: we’d like to use docker push and docker pull commands with decentralized storage instead of a central Docker registry. In our latest DockerCon presentation, we identified multiple approaches:

- We can change Docker and containerd to natively support different storage options
- We can provide tools that magically download images from decentralized storage and persist them in the container engine’s storage location (in the right format, of course)
- We can run a service which translates familiar Docker registry HTTP requests to a protocol specific to the decentralized cloud
  - Users can manage this themselves
  - This can also be a managed service

Leveraging native support

I believe the ideal solution would be to extend Docker (and/or the underlying containerd runtime) to support different storage options. But this is definitely a bigger challenge. Technically, it’s possible to modify every service, but massive adoption and a big user base mean that large changes require careful planning.

Currently, it’s not readily possible to extend the Docker daemon to use special push or pull targets. Check out our presentation on extending Docker if you’re interested in technical deep dives and integration challenges. The best solution might be a new container plugin type, which is being considered.

One benefit of this approach would be good usability. Users can leverage common push or pull commands. But depending on the host configuration, the container layers can be sent to decentralized storage.

Using tool-based push and pull

Another option is to upload or download images with an external tool — which can directly use remote decentralized storage and save it to the container engine’s storage directory.

One example of this approach (but with centralized storage) is the AWS ECR container resolver project. It provides a CLI tool which can pull and push images using a custom source. It also saves them as container images of the containerd daemon.

Unfortunately, this approach also has some strong limitations:

- It couldn’t work with a container orchestrator like Kubernetes, since they aren’t prepared to run custom CLI commands outside of pulling or pushing images.
- It’s containerd specific. The Docker daemon, with its different storage, couldn’t use it directly.
- Usability is reduced since users need different CLI tools.

Using a user-manager gateway

If we can’t push or pull directly to decentralized storage, we can create a service which resembles a Docker registry and meshes with any client. But under the hood, it uploads the data using the decentralized storage’s native protocol.

This thankfully works well, and the standard Docker registry implementation is already compatible with different storage options. 
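To illustrate, the standard registry image can already be pointed at any S3-compatible backend (including Storj’s S3 gateway) purely through environment variables. This is a sketch: bucket, endpoint, and credentials are placeholders, and the container only starts if Docker is available:

```shell
GATEWAY_IMAGE="registry:2"

if command -v docker >/dev/null 2>&1; then
  # Run a local registry whose layer storage is an S3-compatible bucket
  docker run -d -p 5000:5000 --name oci-gateway \
    -e REGISTRY_STORAGE=s3 \
    -e REGISTRY_STORAGE_S3_REGION=us-east-1 \
    -e REGISTRY_STORAGE_S3_REGIONENDPOINT=https://gateway.example.com \
    -e REGISTRY_STORAGE_S3_BUCKET=my-bucket \
    -e REGISTRY_STORAGE_S3_ACCESSKEY=placeholder \
    -e REGISTRY_STORAGE_S3_SECRETKEY=placeholder \
    "$GATEWAY_IMAGE" || echo "could not start registry container"
fi
```

Once running, a plain docker push localhost:5000/myimage stores its layers in the configured bucket, with no client-side changes.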

At Storj, we already have an implementation that we use internally for test images. However, the nerdctl ipfs subcommand is another good example for this approach (it starts a local registry to access containers from IPFS).

We have problems here as well:

- Users should run the gateway on each host. This can be painful alongside Kubernetes or other orchestrators.
- Implementation can be more complex and challenging compared to a native upload or download.

Using a hosted gateway

To make things slightly easier, one can provide a hosted version of the gateway. For example, Storj is fully S3 compatible via a hosted (or self-hosted) S3-compatible HTTP gateway. With this approach, users have three options:

- Use the native protocol of the decentralized storage, with full end-to-end encryption and every feature
- Use the convenient gateway services and trust the operator of the hosted gateways
- Run the gateway themselves

While each option is acceptable, a perfect solution still doesn’t exist.

Using Docker Extensions

One of the biggest concerns with using local gateways was usability. Our local registry can help push images to decentralized storage, but it requires additional technical work (configuring and running containers, etc.)

This is where Docker Extensions can help us. Extensions are a new feature of Docker Desktop. You can install them via the Docker Dashboard, and they can provide additional functionality — including new screens, menu items, and options within Docker Desktop. These are discoverable within the Extensions Marketplace:

And this is exactly what we need! A good UI can make Web3 integration more accessible for all users.

Docker Extensions are easily discoverable within the Marketplace, and you can also add them manually (usually during development).

At Storj, we started experimenting with better user experiences by developing an extension for Docker Desktop. It’s still under development and not currently in the Marketplace, but feedback so far has convinced us that it can massively improve usability, which was our biggest concern with almost every available integration option.

Extensions themselves are Docker containers, which make the development experience very smooth and easy. Extensions can be as simple as a metadata file in a container and static HTML/JS files. There are special JavaScript APIs that manipulate the Docker daemon state without a backend.

You can also use a specialized backend. The JavaScript part of the extension can communicate with any containerized backend via a mounted socket.
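To give a feel for how simple an extension’s packaging can be, here’s a minimal metadata.json of the sort a UI-only extension might ship (the field names follow the extension metadata format as we understand it; treat the values as illustrative):

```shell
# Write a minimal, hypothetical extension descriptor
cat > metadata.json <<'EOF'
{
  "icon": "icon.svg",
  "ui": {
    "dashboard-tab": {
      "title": "Storj Demo",
      "root": "/ui",
      "src": "index.html"
    }
  }
}
EOF

# Sanity-check that the descriptor is valid JSON
python3 -m json.tool metadata.json >/dev/null && echo "metadata.json OK"
```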

The new docker extension command can help you quickly manage extensions (as an example: there’s a special docker extension dev debug subcommand that shows the Web Developer Toolbar for Docker Desktop itself.)

Thanks to the provided developer tools, the challenge is not creating the Docker Desktop extension, but balancing the UI and UX.

Summary

As we discussed in our previous post, Web3 should be defined by user requirements, not by technologies (like blockchain or NFT). Web3 projects should address user concerns around privacy, data control, security, and so on. They should also be approachable and easy to use.

Usability is a core principle of containers, and one reason why Docker became so popular. We need more integration and extension points to make it easier for Web3 project users to provide what they need. Docker Extensions also provide a very powerful way to pair good integration with excellent usability.

We welcome you to try our Storj Extension for Docker (still under development). Please leave any comments and feedback via GitHub.
Source: https://blog.docker.com/feed/

How to Use the Node Docker Official Image

Topping Stack Overflow’s 2022 list of most popular web frameworks and technologies, Node.js continues to grow as a critical MERN stack component. And since Node applications are written in JavaScript — the world’s leading programming language — many developers will feel right at home using it. We introduced the Node Docker Official Image (DOI) due to Node.js’ popularity and to solve some common development challenges. 

The Node.js Foundation describes Node as “an open-source, cross-platform JavaScript runtime environment.” Developers use it to create performant, scalable server and networking applications. Despite Node’s advantages, building and deploying cross-platform services can be challenging with traditional workflows.

Conversely, the Node Docker Official Image accelerates and simplifies your development processes while allowing additional configuration. You can deploy containerized Node applications in minutes. Throughout this guide, we’ll discuss the Node Official Image, how to use it, and some valuable best practices. 

In this tutorial:

- What is the Node Docker Official Image?
- Node.js use cases
- About Docker Official Images
- How to run Node in Docker
- Enter a quick pull command
- Confirm that Node is functional
- Create your Node image from a Dockerfile
- Optimize your Node image
- Using Docker Compose
- Running a simple Node script
- Docker Node best practices
- Get started with Node today

What is the Node Docker Official Image?

The Node Docker Official Image contains all source code, core dependencies, tools, and libraries your application needs to work correctly. 

This image supports multiple CPU architectures like amd64, arm32v6, arm32v7, arm64v8, ppc64le, and s390x. You can also choose between multiple tags (or image versions) for any project. Choosing a pinned version like node:19.0.0-slim locks you into a stable, streamlined version of Node.js. 

Node.js use cases

Node.js lets developers write server-side code in JavaScript. The runtime environment then transforms this JavaScript into hardware-friendly machine code. As a result, the CPU can process these low-level instructions. 

Node is event-driven (through user actions), non-blocking, and known for being lightweight while simultaneously handling numerous operations. As a result, you can use the Node DOI to create the following: 

- Web server applications
- Networking applications

Node works well here because it supports HTTP requests and socket connections. An asynchronous I/O library lets Node containers read and write various system files that support applications. 

You could use the Node DOI to build streaming apps, single-page applications, chat apps, to-do list apps, and microservices. Or — if you’re like Community All-Hands’ Kathleen Juell — you could use Node.js to help serve static content. Containerized Node will shine in any scenario dictated by numerous client-server requests. 
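To make the client-server scenario concrete, here’s a minimal sketch: a one-file HTTP server written to disk, then run inside the official Node container (the file name and port are arbitrary choices, and the container step is skipped if Docker isn’t installed):

```shell
# Write a tiny Node HTTP server to the current directory
cat > server.js <<'EOF'
// Respond to every request with a short greeting
const http = require("http");
http
  .createServer((req, res) => res.end("Hello from containerized Node\n"))
  .listen(8080);
EOF

if command -v docker >/dev/null 2>&1; then
  # Mount the working directory and run the script with the Node DOI
  docker run --rm -d -p 8080:8080 \
    -v "$PWD":/usr/src/app -w /usr/src/app \
    node:19-bullseye node server.js || echo "could not start container"
fi
```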

Docker Captain Bret Fisher also offered his thoughts on Dockerized Node.js during DockerCon 2022. He discussed best practices for managing Node.js projects while diving into optimization. 

Lastly, we also maintain some Node sample applications within our GitHub Awesome Compose library. You can learn to use Node with different databases or even incorporate an NGINX proxy. 

About Docker Official Images

We’ve curated the Node Docker Official Image as one of many core container images on Docker Hub. The Node.js community maintains this image alongside members of the Docker community. 

Like other Docker Official Images, the Node DOI offers a common starting point for Node and JavaScript developers. We also maintain an evolving list of Node best practices while regularly pushing critical security updates. This distinguishes Docker Official Images from alternatives on Docker Hub. 

How to run Node in Docker

Before getting started, download the latest Docker Desktop release and install it. Docker Desktop includes the Docker CLI, Docker Compose, and additional core development tools. The Docker Dashboard (Docker Desktop’s UI component) will help you manage images and containers. 

You’re then ready to Dockerize Node!

Enter a quick pull command

Pulling the Node DOI is the quickest way to begin. Enter docker pull node in your terminal to grab the default latest Node version from Docker Hub. You can readily use this tag for testing or local development. But, a pinned version might be safer for production use. Here’s how the pull process works: 

Your CLI will display a status message once it’s done. You can also double-check this within Docker Desktop! Click the Images tab on the left sidebar and scan through your listed images. Docker Desktop will display your node image:

Your node:latest image is a hefty 942.33 MB. If you inspect your Node image’s contents using docker sbom node, you’ll see that it currently includes 623 packages. The Node image contains numerous dependencies and modules that support Node and various applications. 

However, your final Node image can be much slimmer! We’ll tackle optimization while discussing Dockerfiles. After all, the Node DOI has 24 supported tags spread amongst four major Node versions. Each has its own impact on image size.  

Confirm that Node is functional

Want to run your new image as a container? Hover over your listed node image and click the blue “Run” button. In this state, your Node container will produce some minimal log entries and run continuously in case requests come through. 

Exit this container before moving on by clicking the square “stop” button in Docker Desktop or by entering docker stop YourContainerName in the CLI. 

Create your Node image from a Dockerfile

Building from a Dockerfile gives you ultimate control over image composition, configuration, and your overall application. However, Node requires very little to function properly. Here’s a barebones Dockerfile to get you up and running (using a pinned, Debian-based image version): 

FROM node:19-bullseye

Docker will build your image from your chosen Node version. 

It’s safest to use node:19-bullseye because this image supports numerous use cases. This version is also stable and prevents you from pulling in new breaking changes, which sometimes happens with latest tags. 

To build your image from a Dockerfile, run the docker build -t my-nodejs-app . command. You can then run your new image by entering docker run -it --rm --name my-running-app my-nodejs-app.

Optimize your Node image

The complete version of Node often includes extra packages that weigh your application down. This leaves plenty of room for optimization. 

For example, removing unneeded development dependencies reduces image bloat. You can do this by adding a RUN instruction to our previous file: 

FROM node:19-bullseye

RUN npm prune --production

This approach is pretty granular. It also relies on you knowing exactly what you do and don’t need for your project. Alternatively, switching to a slim image build offers the quickest results. You’ll encounter similar caveats but spend less time writing individual Dockerfile instructions. The easiest approach is to replace node:19-bullseye with its node:19-bullseye-slim counterpart. This alone shrinks image size by 75%. 

You can even pull node:19-alpine to save more disk space. However, this tag contains even fewer dependencies and isn’t officially supported by the Node.js Foundation. Keep this in mind while developing. 

Finally, multi-stage builds lead to smaller image sizes. These let you copy only what you need between build stages to combat bloat. 
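A multi-stage Dockerfile for a typical Node app might look like the sketch below: dependencies are installed with the full toolchain available, then only the app and its production node_modules are copied into a slim runtime image (the file paths and start command are assumptions about your project layout):

```shell
# Write a sketch of a two-stage Dockerfile
cat > Dockerfile.multistage <<'EOF'
# Stage 1: install production dependencies with the full image
FROM node:19-bullseye AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --omit=dev

# Stage 2: copy only the app and its dependencies into a slim image
FROM node:19-bullseye-slim
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY . .
CMD ["node", "server.js"]
EOF

echo "wrote Dockerfile.multistage"
```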

Using Docker Compose

Say you have a start script, an existing package.json file, and (possibly) want to operate Node alongside other services. Spinning up Node containers with Docker Compose can be pretty handy in these situations.

Here’s a sample docker-compose.yml file: 

services:
  node:
    image: "node:19-bullseye"
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=production
    volumes:
      - ./:/home/node/app
    ports:
      - "8888:8888"
    command: "npm start"

You’ll see some parameters that we didn’t specify earlier in our Dockerfile. For example, the user parameter lets you run your container as an unprivileged user. This follows the principle of least privilege. 

To jumpstart your Node container, simply enter the docker compose up -d command. Like before, you can verify that Node is running within Docker Desktop. The docker container ls --all command also displays all existing containers within the CLI.

Running a simple Node script

Your project doesn’t always need a Dockerfile. In these cases, you can directly leverage the Node DOI with the following command:

docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node:19-bullseye node your-daemon-or-script.js

This simplistic approach is ideal for single-file projects.
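For instance, a hypothetical your-daemon-or-script.js could be as small as this (the filename, function, and output are purely illustrative):

```javascript
// your-daemon-or-script.js — a stand-in for whatever single-file
// script you mount into the container with the command above.
function greeting() {
  return `Hello from Node ${process.version} on ${process.platform}/${process.arch}`;
}

console.log(greeting());
```

Because the command mounts $PWD into the container, edits to the script on the host are picked up on the next docker run.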

Docker Node best practices

It’s important to get the most out of Docker and the Node Official Image. We’ve briefly outlined the benefits of running as a non-root node user, but here are some useful tips for developing with Node: 

- Easily pass secrets and other runtime configurations to your application by setting NODE_ENV to production, as seen here: -e "NODE_ENV=production".
- Place any installed, global Node dependencies into a non-root user directory.
- Remember to manually install curl if using an alpine image tag, since it’s not included by default.
- Wrap your Node process in an init system with the --init flag, so it can successfully run as PID 1.
- Set memory limitations for your containers that run on the same host.
- Include the package.json start command directly within your Dockerfile, to reduce active container processes and let Node properly receive exit signals.

This isn’t an exhaustive list. To view more details, check out our best practices documentation.

Get started with Node today

As you’ve seen, spinning up a Node container from the Node Docker Official Image is quick and requires just a few steps depending on your workflow. You’ll no longer need to worry about platform-specific builds or get bogged down with complex development processes. 

We’ve also covered many ways to help your Node builds perform better. Check out our top containerization tips article to learn even more about optimization and security. 

Ready to get started? Swing by Docker Hub and pull our Node image to start experimenting. In no time, you’ll have your server and networking applications up and running. You can also learn more on our GitHub README page.
Quelle: https://blog.docker.com/feed/

October 2022 Newsletter

Going “Remocal” with Docker, Telepresence, & Kubernetes
Gone are the days of locally running and testing entire applications on your laptop before pushing to production. Join us with Ambassador on a tour of coding, testing, and shipping microservices using remote-to-local tools and techniques.

Register Now

News you can use and monthly highlights:
How did I shrink my NextJS Docker image by 90% – Learn how to improve the development and production lifecycle by optimizing your NextJS Docker images.
How To Create A Production Image For A Node.js + TypeScript App Using Docker Multi-Stage Builds – Keep your NodeJS Docker container images slim by using multistage builds to create TypeScript-based apps.
Oracle SQLDeveloper Docker Extension – Discover the Extension that lets you run the Oracle SQLDeveloper Web tool and connect with Oracle XE 21c or other RDBMS instances.
React and .NET Core 6.0 Sample Project with Docker – Learn how to use CRUD operations in ASP.NET Core 6.0 Web API with the Entity Framework Core Code First approach.
Deploying FusionAuth + Docker on Fly.io – Find the perfect guide to self-hosting FusionAuth for timesaving authentication and access management using Docker.
How to containerize your ASP.NET Core application and SQL Server with Docker – Learn how to deploy a Dockerized .NET Web API application and connect it to a SQL Server container.

Introducing Hardened Docker Desktop
Looking for a better, more secure way to manage your dev environments? Our new security model, Hardened Docker Desktop, helps you cover all the bases!

Learn More

State of Application Development Survey
We’re looking for feedback from developers like you. Take our survey for a chance to win prizes!

Take the Survey

Docker+Wasm Tech Preview
At KubeCon North America, we announced the Docker+Wasm Technical Preview. This lighter, faster alternative to Linux containers lets developers build Wasm apps with the same ease as container apps.

Learn More

The latest tips and tricks from the community:

Creating Kubernetes Extensions in Docker Desktop
Simplified Deployment of Local Container Images to OpenShift
9 Tips for Containerizing Your Node.js Application
Adding Docker Compose Logs to Your CI Pipeline Is Worth It
Live Reload in Rust with Cargo Watch and Docker
Enabling Microservices using Docker and Docker-Compose

October Extensions Roundup: CI on Your Laptop and Hacktoberfest!
Find out what’s new in the Docker Extension Marketplace! Get CI on your laptop, find new tools from the open source community, and use categories to find the perfect Extension.

Learn More

Educational content created by the experts at Docker:

Security Advisory: CVE-2022-42889 “Text4Shell”
How to Use the Postgres Docker Official Image
How to Fix and Debug Docker Containers Like a Superhero
Developer Engagement in the Remote Work Era with RedMonk and Miva

Docker Captain: Sebastien Flochlay
Sebastien discovered Docker back in 2016 and has been a huge fan ever since. Find out why his favorite command is docker run and the importance of writing Dockerfiles — the right way.

Meet the Captain

Subscribe to our newsletter to get the latest news, blogs, tips, how-to guides, best practices, and more from Docker experts sent directly to your inbox once a month.


Resolve Vulnerabilities Sooner With Contextual Data

OpenSSL 3.0.7 and “Text4Shell” might be the most recent critical vulnerabilities to plague your development team, but they won’t be the last. In 2021, critical vulnerabilities reached a record high. Attackers are even reusing their work, with over 50% of zero-day attacks this year being variants of previously-patched vulnerabilities. 

With each new security vulnerability, we’re forced to re-examine our current systems and processes. If you’re impacted by OpenSSL or Text4Shell (aka CVE-2022-42889), you’ve probably asked yourself, “Are we using Apache Commons Text (and where)?” or “Is it a vulnerable version?” — and similar questions. And if you’re packaging applications into container images and running those on cloud infrastructure, then a breakdown by image, deployment environment, and impacted Commons-Text version would be extremely useful. 

Developers need contextual data to help cut through the noise and answer these questions, but gathering information takes time and significantly impacts productivity. An entire day is derailed if developers must context switch and spend countless hours researching, triaging, and fixing these issues. So, how do we stop these disruptions and surface crucial data in a more accessible way for developers?

Start with continuously examining images

Bugs, misconfigurations, and vulnerabilities don’t stop once an image is pushed to production, and neither should development. Improving images is a continuous effort that requires a constant flow of information before, during, and after development.

Before images are used, teams spend a significant amount of time vetting and selecting them. That same amount of effort needs to be put into continuously inspecting those same images. Otherwise, you’ll find yourself in a reactive cycle of unnecessary rework, wasted time, and overall developer frustration.

That’s where contextual data comes in. Contextual data ties directly to the situation around it to give developers a broader understanding. As an example, contextual data for vulnerabilities gives you clear and precise insights to understand what the vulnerability is, how urgent it is, and its specific impact on the developer and the application architecture — whether local, staging, or production.

Contextual data reduces noise and helps the developer know the what and the where so they can prioritize making the correct changes in the most efficient way. What does contextual data look like? It can be…

- A comparison of detected vulnerabilities between images built from a PR branch and the image version currently running in production
- A comparison between images that use the same custom base image
- An alert sent into a Slack channel that’s connected to a GitHub repository when a new critical or high CVE is detected in an image currently running in production
- An alert or pull request to update to a newer version of your base image to remediate a certain CVE

Contextual data makes it faster for developers to locate and remediate the vulnerabilities in their application.

Use Docker to surface contextual data

Contextual data is about providing more information that’s relevant to developers in their daily tasks. How does it work?

Docker can index and analyze public and private images within your registries to provide insights about the quality of your images. For example, you can get open source package updates, receive alerts about new vulnerabilities as security researchers discover them, send updates to refresh outdated base images, and be informed about accidentally embedded secrets like access tokens. 

The screenshot below shows what appears to be a very common list of vulnerabilities on a select Docker image. But there’s a lot more data on this page that correlates to the image:

- The page breaks the vulnerabilities up by layers and base images, making it easy to assess where to apply a fix for a detected vulnerability.
- Image refs in the right column highlight that this version of the image is currently running in production.
- We also see that this image represents the current head commit in the corresponding Git repository, and we can see which Dockerfile it was built from.
- The current and potential other base images are listed for comparison.

An image report with a list of common CVEs — including Text4Shell

Using Slack, notifications are sent to the channels your team already uses. The screenshot below shows an alert sent into a Slack channel that’s configured to show activity for a selected set of Git repositories. Besides activity like commits, CI builds, and deployments, you can see the Text4Shell alert providing very concise and actionable information to developers collaborating in this channel:

Slack update on the critical Text4Shell vulnerability

You can also get suggestions to remediate certain categories of vulnerabilities and raise pull requests to update vulnerable packages like those in the following screenshot:

Remediating the Text4Shell CVE via a PR and comparing to main branch

Find out more about this type of information for public images like Docker Official Images or Docker Verified Publisher images using our Image Vulnerability Database.

Vulnerability remediation is just the beginning

Contextual data is essential for faster resolution of vulnerabilities, but it’s more than that. With the right data at the right time, developers are able to work faster and spend their time innovating instead of drowning in security tickets.

Imagine you could assess your production images today to find out where you’re potentially going to be vulnerable. Your teams could have days or weeks to prepare to remediate the next critical vulnerability, like OpenSSL’s forthcoming disclosure of a new critical CVE on Tuesday, November 1st, 2022.

Searching for Debian OpenSSL on dso.docker.com

Interested in getting these types of insights and learning more about providing contextual data for happier, more productive devs? Sign up for our Early Access Program to harness these tools and provide invaluable feedback to help us improve our product!

October Extensions Roundup: CI on Your Laptop and Hacktoberfest!

This month, we’ve got some new extensions so good, they’re scary! Docker Extensions build new functionality into Docker Desktop, extend its existing capabilities, and allow you to discover and integrate additional tools that you’re already using with Docker. Let’s take a look at some of the recent ones. And if you’d like to see everything available, check out our full Extensions Marketplace!

Drone CI

Do you need to build and test a container-friendly pipeline before sharing with your team? Or do you need the ability to perform continuous integration (CI) or debug failing tests on your laptop? If the answer is yes, the Drone CI extension can help you! With the extension, you can:

- Import Drone CI pipelines and run them locally
- Run specific steps of a pipeline
- Monitor execution results
- Inspect logs

See it in action in the gif below!

Open Source Docker Extensions

This month, Docker celebrated Hacktoberfest, a month-long celebration of open-source projects, their maintainers, and the entire community of contributors. During this event, Docker worked with the community to contribute to our open source Docker Extensions — and encourage developers to create their own open source extensions.

In fact, here’s a list of open source extensions available in our Marketplace:

- DDosify – High performance, open-source, and simple load testing tool written in Golang.
- Drone CI – Run Continuous Integration & Delivery Pipeline (CI/CD) from within Docker Desktop.
- GOSH – Build your decentralized and secure software supply chain with Docker and Git Open Source Hodler.
- JFrog – Scan your Docker images for vulnerabilities with JFrog Xray.
- Lacework Scanner – Enable developers with the insights to securely build their containers and minimize the vulnerabilities before the images go into production.
- Meshery – Design and operate your cloud native deployments with the Meshery extensible management plane.
- Mini Cluster – Run a local Apache Mesos cluster.
- Okteto – Remote development for Docker Compose.
- Open Source management tool for PostgreSQL – Use an embedded PGAdmin4 Open Source management tool for PostgreSQL.
- Oracle SQLcl client tool – Use an embedded version of Oracle SQLcl client tool.
- RedHat OpenShift – Easily deploy and test applications on to OpenShift.
- Volumes Backup & Share – Backup, clone, restore and share Docker volumes effortlessly.

Hacktoberfest generated a lot of great extensions from our community that aren’t yet available in the Marketplace. To check out these extensions, visit our Hacktoberfest GitHub repo. All of these extensions can be installed via the CLI, and you can visit our docs to learn how.

Check out the latest Docker Extensions with Docker Desktop

Docker is always looking for ways to improve the developer experience. We hope that these new extensions will make your life easier and that you’ll give them a try! Check out these resources for more info on extensions:

- Try October’s newest extensions by installing Docker Desktop for Mac, Windows, or Linux.
- Visit our Extensions Marketplace to see all of our extensions.
- Build your own extension with our Extensions SDK.

Introducing the Docker+Wasm Technical Preview

The Technical Preview of Docker+Wasm is now available! Wasm has been producing a lot of buzz recently, and this feature will make it easier for you to quickly build applications targeting Wasm runtimes.

As part of this release, we’re also happy to announce that Docker will be joining the Bytecode Alliance as a voting member. The Bytecode Alliance is a nonprofit organization dedicated to creating secure new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI).

What is Wasm?

WebAssembly, often shortened to Wasm, is a relatively new technology that allows you to compile application code written in over 40 languages (including Rust, C, C++, JavaScript, and Golang) and run it inside sandboxed environments.

The original use cases were focused on running native code in web browsers, such as Figma, AutoCAD, and Photoshop. In fact, fastq.bio saw a 20x speed improvement when converting their web-based DNA sequence quality analyzer to Wasm. And Disney built their Disney+ Application Development Kit on top of Wasm! The benefits in the browser are easy to see.

But Wasm is quickly spreading beyond the browser thanks to the WebAssembly System Interface (WASI). Companies like Vercel, Fastly, Shopify, and Cloudflare support using Wasm for running code at the edge, and Fermyon is building a platform to run Wasm microservices in the cloud.

Why Docker?

At Docker, our goal is to help developers bring their ideas to life by conquering the complexity of app development. We strive to make it easy to build, share, and run your application, regardless of the underlying technologies. By making containers accessible to all, we proved our ability to make the lives of developers easier and were recognized as the #1 most-loved developer tool.

We see Wasm as a complementary technology to Linux containers where developers can choose which technology they use (or both!) depending on the use case. And as the community explores what’s possible with Wasm, we want to help make Wasm applications easier to develop, build, and run using the experience and tools you know and love.

How do I get the technical preview?

Ready to dive in and try it for yourself? Great! But before you do, a couple quick notes to keep in mind as you start exploring:

- Important note #1: This is a technical preview build of Docker Desktop, and things might not work as expected. Be sure to back up your containers and images before proceeding.
- Important note #2: This preview has the containerd image store enabled and cannot be disabled. If you’re not currently using the containerd image store, then pre-existing images and containers will be inaccessible.

You can download the technical preview build of Docker Desktop here:

- macOS Apple Silicon
- macOS Intel
- Windows AMD64
- Linux Arm64 (deb)
- Linux AMD64 (deb, rpm, tar)

Are there any known limitations?

Yes! This is an early technical preview and we’re still working on making the experience as smooth as possible. But here are a few things you should be aware of:

- Docker Compose may not exit cleanly when interrupted. Workaround: clean up docker-compose processes by sending them a SIGKILL (killall -9 docker-compose).
- Pushes to Hub might give an error stating server message: insufficient_scope: authorization failed, even after logging in using Docker Desktop. Workaround: run docker login in the CLI.

Okay, so how does the Wasm integration actually work?

We’re glad you asked! First off, we need to remind you that since this is a technical preview, things may change quite rapidly. But here’s how it currently works.

- We’re leveraging our recent work to migrate image management to containerd, as it provides the ability to use both OCI-compatible artifacts and containerd shims.
- We collaborated with WasmEdge to create a containerd shim. This shim extracts the Wasm module from the OCI artifact and runs it using the WasmEdge runtime.
- We added support to declare the Wasm runtime, which will enable the use of this new shim.

Let’s look at an example!

After installing the preview, we can run the following command to start an example Wasm application:

docker run -dp 8080:8080 --name=wasm-example --runtime=io.containerd.wasmedge.v1 --platform=wasi/wasm32 michaelirwin244/wasm-example

Since a few of the flags might be unfamiliar, let’s explain what they’re doing:

- --runtime=io.containerd.wasmedge.v1 – This informs the Docker engine that we want to use the Wasm containerd shim instead of the standard Linux container runtime.
- --platform=wasi/wasm32 – This specifies the architecture of the image we want to use. By leveraging a Wasm architecture, we don’t need to build separate images for the different architectures. The Wasm runtime will do the final step of converting the Wasm binary to machine instructions.

After the image is pulled, the runtime reads the ENTRYPOINT of the image to locate and extract the Wasm module. The module is then loaded into the Wasm runtime, started, and networking is configured. We now have a Wasm app running on our machine!

This particular application is a simple web server that says “Hello world!” and echoes data back to us. To verify it’s working, let’s first view the logs.

docker logs wasm-example
Server is now running

We can get the “Hello world” message by either opening http://localhost:8080 in a browser or using curl.

curl localhost:8080

And our response will give us a Hello world message:

Hello world from Rust running with Wasm! Send POST data to /echo to have it echoed back to you

To send data to the echo endpoint, we can use curl:

curl localhost:8080/echo -d '{"message":"Hi there"}' -H "Content-type: application/json"

And we’ll see the data sent back to us in the response:

{"message":"Hi there"}

To remove the application, you can remove it as you would any other Docker container:

docker rm -f wasm-example

The new integration means you can run a Wasm application alongside your Linux containers (even with Compose). To learn more, check out the docs!

What’s next for Wasm and Docker?

Another great question! Wasm is rapidly growing and evolving, including exploration on how to support multi-threading, garbage collection, and more. There are also many still-to-tackle challenges, including shortening the developer feedback loop and possible paths to production.

So try it out yourself and then let us know your thoughts or feedback on the public roadmap. We’d love to hear from you!

Developer Engagement in the Remote Work Era with RedMonk and Miva

With the rise of remote-first and hybrid work models in the tech world, promoting developer engagement has become more important than ever. Maintaining a culture of engagement, productivity, and collaboration can be a hurdle for businesses making this new shift to remote work. But it’s far from impossible.

As a fully-remote, developer-focused company, Docker was thrilled to join in a like-minded conversation with RedMonk and Miva. Jake Levirne (Head of Product at Docker) was joined by Jon Burchmore (CTO at Miva) for a talk led by RedMonk’s Sr. Analyst Rachel Stephens. In this webinar on developer engagement in the remote work era, these industry specialists discuss navigating developer engagement with a focus on productivity, collaboration, and much more.

Navigating developer engagement

Companies with newly-distributed work environments often struggle to create an engaging culture for their employees. This remains especially true for the developer workforce. Because of this, developer engagement has become a priority for more organizations than ever, including Miva.

“We actually brought [developer engagement] up as a part of our developer roadmap. As we were talking about ‘this is our product roadmap for 2022 — what’s the biggest challenge?’, my answer was ‘keeping people engaged so that we can keep productivity high.’” Jon Burchmore

Like Miva, other organizations are starting to incorporate developer engagement into their broader business decisions. Teams are intentionally choosing tools and processes that support not only development requirements but also involvement and preference. By taking a look at productivity and collaboration, we can see the impact of these decisions. 

Measuring developer productivity and collaboration

As both an art and a science, measuring developer productivity and collaboration can be difficult. While metrics can be informative, Jon is most interested in seeing the qualitative impact.

“How much is the team engaging with itself […] and is that engagement bottom up […] or from peer-to-peer? And a healthy team to me feels like a team where the peers are engaging as well. It’s not everyone just going upstream to get their problems solved.” Jon Burchmore

As Jake adds, it’s more than just tracking lines of code. It’s about focusing on the outcomes. While developer engagement can be difficult to measure, the message is clear. Engaged developers are non-negotiable for high-performing teams.

Approaching developer collaboration

Developer collaboration is another linchpin for building developer engagement. Teams are now challenging themselves to find more opportunities for pair programming or similar types of coworking. Healthy collaboration should also not be limited to single teams.

“When you unlock collaboration both within teams and across teams, I think that’s what allows you to build what effectively are the increasingly complex real-world applications that are needed to keep creating business value.” Jake Levirne

Organizations are taking a more holistic, inter-team perspective to avoid the dreaded, siloed productivity approach.

Watch the full, on-demand webinar

These points are just a snapshot of our talk with RedMonk and Miva on the challenges of developer engagement in the remote work era. Hear the rest of the discussion and more detail by watching the full conversation on demand.