Docker Security Roundup: News, Articles, Sessions

With the eyes of the security world converging on Black Hat USA next week, now is a good time to remember that building secure applications is paramount.

In the latest chapter in Docker’s security story, Docker CTO Justin Cormack last month provided an important update on software supply chain security. He blogged about the publication of a white paper, “Software Supply Chain Best Practices,” by the security technical advisory group of the Cloud Native Computing Foundation (CNCF).

The long-awaited document is important because the software supply chain — that stage of the software development journey in which software is written, assembled, built or tested before production — has become a favored target of cyber criminals. Justin was one of the prime movers of the project and one of the CNCF reviewers who helped steer the 50-page document through multiple iterations to completion.

The paper aims to make secure supply chains easier and more widely adopted through four key principles, which Justin summarizes:

“In simpler language, this means that you need to be able to securely trace all the code you are using, which exact versions you are using, where they came from, and in an automated way so that there are no errors. Your build environments should be minimal, secure and well defined, i.e. containerized. And you should be making sure everything is authenticated securely.”

Contributing writer Robert Lemos quoted Justin’s blog in a Dark Reading article last week. The article, titled “What Does It Take to Secure Containers,” quotes Justin on why creating a trusted pipeline is so critical:

“Every time you use software that you didn’t write yourself, often open source software that you use in your applications, you are trusting both that the software you added is what you thought it is, and that it is trustworthy not hostile. Usually both these things are true, but when they go wrong, like when hundreds of people installed updates from SolarWinds that turned out to contain code to attack their infrastructure, the consequences are serious.”

Security at DockerCon

Several other facets of our security story were on the menu at DockerCon in May.

Alvaro Muro, an integrations engineer at Sysdig, led a webinar on Top Dockerfile Security Best Practices, showing how best practices for image builds help you prevent security issues and optimize containerized applications. He also shared ways to avoid unnecessary privileges, reduce the attack surface with multistage builds, prevent confidential data leaks, detect bad practices and more.

In their talk, An Ounce of Prevention: Curing Insecure Container Images, Red Ventures’ Seyfat Khamidov and Eric Smalling of Snyk shared keys to catching vulnerabilities in your Docker containers before they go to production, such as scanning individual containers and incorporating container security scanning into your continuous integration build jobs. They also covered what Red Ventures is doing to scan container images at scale, and the new integration between Docker and Snyk for scanning container images for security vulnerabilities.

You know that feeling of panic when you scan a container and find a long list of vulnerabilities? Yeah, that one. In his DockerCon presentation, My Container Image Has 500 Vulnerabilities, Now What?, Snyk’s Matt Jarvis talks you off the ledge. How do you assess and prioritize security risk? How do you start to remediate? He lays out what you need to consider and how to get started.

Speaking of the SolarWinds breach, GitLab’s Brendan O’Leary dissected that and a number of other supply chain attacks in his talk, As Strong as the Weakest Link: Securing the Software Supply Chain. He delved into the simple, practical security measures that were missed, allowing the attacks to get a foothold in the first place.

Finally, in a session titled Secure Container Workloads for K8s in Containerd, Om Moolchandani, CISO and CTO at Accurics, spells out how security can be easily embedded into Docker development workflows and Kubernetes deployments to increase resiliency while practically eliminating the effort required to “be secure.” He also highlights open source tools that enable you to establish security guardrails, ensuring you build in security from the start, with programmatic enforcement in development pipelines, and stay secure with automated enforcement in the K8s runtime.

At Docker, security is more than a watchword — it’s an obsession. To learn more, read Justin’s blog and watch the recorded sessions listed above. They’re still available and still free.

Join the Next Community All Hands on September 16th!

We’re excited to announce that our next Community All Hands will be in exactly two months, on Thursday, September 16th, 2021 at 8am PST/5pm CET. The event is a unique opportunity for Docker staff, Captains, Community Leaders and the broader Docker community to come together for live company updates, product updates, demos, community updates, contributor shout-outs and of course, a live Q&A. Make sure to register for the event here!

Introduction to heredocs in Dockerfiles

Guest post by Docker Community Member Justin Chadell. This post originally appeared here.

As of a couple weeks ago, Docker’s BuildKit tool for building Dockerfiles now supports heredoc syntax! With these new improvements, we can do all sorts of things that were difficult before, like multiline RUNs without needing all those pesky backslashes at the end of each line, or the creation of small inline configuration files.

In this post, I’ll cover the basics of what these heredocs are, and more importantly what you can use them for, and how to get started with them!

BuildKit (a quick refresher)

From BuildKit’s own GitHub:

BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner.

Essentially, it’s the next-generation builder for Docker images, neatly separated from the rest of the main Docker runtime; you can use it for building Docker images, or images for other OCI runtimes.

It comes with a lot of useful (and pretty) features beyond what the basic builder supports, including neater build log output, faster and more cache-efficient builds, concurrent builds, as well as a very flexible architecture to allow easy extensibility (I’m definitely not doing it justice).

You’re most likely either using it already, or you want to be! You can enable it locally by setting the environment variable DOCKER_BUILDKIT=1 when performing your docker build, or switch to using the new(ish) docker buildx command.
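For example, either of the following will route a build through BuildKit (the myapp tag and the . build context are just placeholders):

$ DOCKER_BUILDKIT=1 docker build -t myapp .
$ docker buildx build -t myapp .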

At a slightly more technical level, buildkit allows easy switching between multiple different “builders”, which can be local or remote, in the docker daemon itself, in docker containers or even in a Kubernetes pod. The builder itself is split up into two main pieces, a frontend and a backend: the frontend produces intermediate Low Level Builder (LLB) code, which is then constructed into an image by the backend.

You can think of LLB as being to BuildKit what the LLVM IR is to Clang.

Part of what makes buildkit so fantastic is its flexibility – these components are completely detached from each other, so you can use any frontend in any image. For example, you could use the default Dockerfile frontend, or compile your own self-contained buildpacks, or even develop your own alternative file format like Mockerfile.

Getting set up

To get started with using heredocs, first make sure you’re set up with BuildKit. Switching to BuildKit gives you a ton of out-of-the-box improvements to your build setup, and should have complete compatibility with the old builder (and you can always switch back if you don’t like it).

With buildkit properly setup, you can create a new Dockerfile: at the top of this file, we need to include a #syntax= directive. This directive informs the parser to use a specific frontend – in this case, the one located at docker/dockerfile:1.3-labs on Docker Hub.

# syntax=docker/dockerfile:1.3-labs

With this line (which has to be the very first line), buildkit will find and download the right image, and then use it to build the image.

We then specify the base image to build from (just like we normally would):

FROM ubuntu:20.04

With all that out of the way, we can use a heredoc, executing two commands in the same RUN!

RUN <<EOF
echo "Hello" >> /hello
echo "World!" >> /hello
EOF

Why?

Now that heredocs are working, you might be wondering – why all the fuss? Well, until now, this feature has simply been missing from Dockerfiles.

See moby/moby#34423 for the original issue that proposed heredocs in 2017.

Let’s suppose you want to build an image that requires a lot of commands to setup. For example, a fairly common pattern in Dockerfiles involves wanting to update the system, and then to install some additional dependencies, i.e. apt update, upgrade and install all at once.

Naively, we might put all of these as separate RUNs:

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y …

But, sadly like too many intuitive solutions, this doesn’t quite do what we want. It certainly works – but we create a new layer for each RUN, making our image much larger than it needs to be (and making builds take much longer).

So, we can squish this into a single RUN command:

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y …

And that’s what most Dockerfiles do today, from the official docker images down to the messy ones I’ve written for myself. It works fine, images are small and fast to build… but it does look a bit ugly. And if you accidentally forget the line continuation symbol \, well, you’ll get a syntax error!

Heredocs are the next step to improve this! Now, we can just write:

RUN <<EOF
apt-get update
apt-get upgrade -y
apt-get install -y …
EOF

We use the <<EOF to introduce the heredoc (just like in sh/bash/zsh/your shell of choice), and EOF at the end to close it. In between those, we put all our commands as the content of our script to be run by the shell!
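One habit worth carrying over from regular shell scripting: because the heredoc body is run by the default shell, a failed command in the middle won’t necessarily abort the build on its own. Here is a small sketch (the package name is just an example) that opts in to fail-fast behavior:

RUN <<EOF
set -eux
apt-get update
apt-get install -y curl
rm -rf /var/lib/apt/lists/*
EOF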

More ways to run…

So far, we’ve seen some basic syntax. However, the new heredoc support doesn’t just allow simple examples, there’s lots of other fun things you can do.

For completeness, the hello world example using the same syntax we’ve already seen:

RUN <<EOF
echo "Hello" >> /hello
echo "World!" >> /hello
EOF

But let’s say your setup scripts are getting more complicated, and you want to use another language – say, like Python. Well, no problem, you can connect heredocs to other programs!

RUN python3 <<EOF
with open("/hello", "w") as f:
    print("Hello", file=f)
    print("World", file=f)
EOF

In fact, you can use commands as complex as you like with heredocs, simplifying the above to:

RUN python3 <<EOF > /hello
print("Hello")
print("World")
EOF

If that feels like it’s getting a bit fiddly or complicated, you can also always just use a shebang:

RUN <<EOF
#!/usr/bin/env python3
with open("/hello", "w") as f:
    print("Hello", file=f)
    print("World", file=f)
EOF

There’s lots of different ways to connect heredocs to RUN, and hopefully some more ways and improvements to come in the future!

…and some file fun!

Heredocs in Dockerfiles also let us mess around with inline files! Let’s suppose you’re building an nginx site, and want to create a custom index page:

FROM nginx
COPY index.html /usr/share/nginx/html

And then in a separate file index.html, you put your content. But if your index page is just really simple, it feels frustrating to have to separate everything out: heredocs let you keep everything in the same place if you want!

FROM nginx
COPY <<EOF /usr/share/nginx/html/index.html
(your index page goes here)
EOF

You can even copy multiple files at once, in a single layer:

COPY <<robots.txt <<humans.txt /usr/share/nginx/html/
(robots content)
robots.txt
(humans content)
humans.txt

Finishing up

Hopefully, I’ve managed to convince you to give heredocs a try when you can! For now, they’re still only available in the staging frontend, but they should be making their way into a release very soon – so make sure to take a look and give your feedback! If you’re interested, you can find out more from the official buildkit Dockerfile syntax guide.

Engineering Update: BuildKit 0.9 and Docker Buildx 0.6 Releases

On July 16th we released BuildKit 0.9.0, Docker Buildx 0.6.0, Dockerfile 1.3.0 and Dockerfile 1.3.0-labs. These releases come with bug fixes, feature-parity improvements, refactoring and also new features.

Dockerfile new features

Installation

There is no installation needed: BuildKit supports loading frontends dynamically from container images. Images for the Dockerfile frontend are available in the docker/dockerfile repository on Docker Hub.

To use the external frontend, the first line of your Dockerfile needs to point to the specific image you want to use:

# syntax=docker/dockerfile:1.3

More info: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md

RUN command now supports --network flag

The RUN command now supports a --network flag for requesting a specific kind of network (host, none, default). This allows the user to disable all networking during a docker build to ensure no network is used, or to explicitly opt in to using the host’s network, which requires a specific entitlement to be set before it works.

--network=host requires allowing the network.host entitlement. This feature was previously only available on the labs channel:

# syntax=docker/dockerfile:1.3
FROM python:3.6
ADD mypackage.tgz wheels/
RUN --network=none pip install --find-links wheels mypackage

RUN --mount flag variable expansion

Values for the RUN --mount flag now support variable expansion, except for the from field:

# syntax=docker/dockerfile:1.3
FROM golang

ARG GO_CACHE_DIR=/root/.cache/go-build
RUN --mount=type=cache,target=$GO_CACHE_DIR go build …

Here-document syntax

RUN and COPY commands now support here-document syntax, allowing you to write multiline inline scripts and files (labs channel) without lots of && and \ characters:

Before:

# syntax=docker/dockerfile:1.3
FROM debian
RUN apt-get update && \
    apt-get install -y vim

With new Here-document syntax:

# syntax=docker/dockerfile:1.3-labs
FROM debian
RUN <<eot bash
apt-get update
apt-get install -y vim
eot

In COPY commands source parameters can be replaced with here-doc indicators. Regular here-doc variable expansion and tab stripping rules apply:

# syntax=docker/dockerfile:1.3-labs
FROM alpine
ARG FOO=bar
COPY <<-eot /app/foo
hello ${FOO}
eot

More info is available in the BuildKit repository.

OpenTracing providers replaced with support for OpenTelemetry

OpenTelemetry has superseded OpenTracing. The API is quite similar but additional collector endpoints should allow forwarding traces from client or container in the future.

The JAEGER_TRACE env variable is supported as before. The OpenTelemetry collector is also supported, via both the gRPC and HTTP protocols.

This is also the first time cross-process tracing from Buildx is available:

# create jaeger container
$ docker run -d --name jaeger \
    -p 6831:6831/udp -p 16686:16686 \
    jaegertracing/all-in-one

# create builder with JAEGER_TRACE env
$ docker buildx create \
    --name builder \
    --driver docker-container \
    --driver-opt network=host \
    --driver-opt env.JAEGER_TRACE=localhost:6831 \
    --use

# open Jaeger UI at http://localhost:16686/ to see the results

Resource limiting

Users can now limit the parallelism of the BuildKit solver: the buildkitd config now accepts a max-parallelism setting, which is particularly useful for low-powered machines.

If you want to do this in CI, such as GitHub Actions, you can point your builder at a BuildKit configuration file.
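Here is a minimal sketch of that wiring, assuming the buildx create --config flag and the [worker.oci] section of buildkitd.toml; in GitHub Actions, the same file can be passed to the builder via the setup-buildx-action config input:

# write a BuildKit config that caps the solver's parallelism
$ cat > buildkitd.toml <<'EOT'
[worker.oci]
  max-parallelism = 4
EOT

# create and use a builder that picks up that config
$ docker buildx create --use --name limited --config ./buildkitd.toml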

We are also now limiting TCP connections to 4 per registry, with an additional connection that is not used for layer pulls and pushes. This per-host limit avoids your build getting stuck while pulling images. The additional connection is used for metadata requests (image config retrieval) to improve the overall build time.

GitHub Actions cache backend (experimental)

We have released a new experimental GitHub cache backend to speed up Docker builds in GitHub actions.

This replaces a previously inefficient approach that reused the build cache by combining the official actions/cache action with the local cache exporter/importer. That method was inefficient because the cache had to be saved and loaded on every run, and tracking did not happen per blob.

To start using this experimental cache, see the example in the Build Push Action repository.

Git improvements

Default branch name is now detected correctly from remote when using the Git context.

Support for subdirectories when building from Git source is now released:

$ docker buildx build git://github.com/repo:path/to/subapp

The SSH agent is automatically forwarded when building from an SSH Git URL:

$ docker buildx build git@github.com:myrepo.git

New platforms and QEMU update

Risc-V (buildkitd, buildctl, buildx, frontends)
Windows ARM64 (buildctl, buildx)
MacOS ARM64 (buildctl)

The embedded QEMU emulators have also been updated to v6.0.0, and provide emulation for additional platforms.

--metadata-file for buildctl and buildx

A new --metadata-file flag has been added to the buildx build and bake commands, allowing you to save build result metadata in JSON format:

$ docker buildx build --output type=docker --metadata-file ./metadata.json .

{
  "containerimage.config.digest": "sha256:d8b8b4f781520aeafedb5a88ff50fbb625cfebad87e392794f1e26a724a2f22a",
  "containerimage.digest": "sha256:868f04872380274dcf8528222e84dc66702394de80889e51c87a14126ea9ff6a"
}

buildctl also accepts the --metadata-file flag to output build metadata.
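For example, when driving a standalone buildkitd directly (a sketch; the context and output paths are placeholders):

$ buildctl build --frontend dockerfile.v0 \
    --local context=. --local dockerfile=. \
    --metadata-file metadata.json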

Docker buildx image

Buildx binaries can now be accessed through the buildx-bin Docker image, so that you can use buildx inside a Docker image, for example in your CI. Here is how to use buildx inside a Dockerfile:

FROM docker
COPY --from=docker/buildx-bin:latest /buildx /usr/libexec/docker/cli-plugins/docker-buildx
RUN docker buildx version

Other changes

For the complete list of changes, see the official release notes:

https://github.com/moby/buildkit/releases/tag/v0.9.0
https://github.com/moby/buildkit/releases/tag/dockerfile%2F1.3.0
https://github.com/moby/buildkit/releases/tag/dockerfile%2F1.3.0-labs
https://github.com/docker/buildx/releases/tag/v0.6.0


Invoking the AWS CLI with Terraform

faun.pub – As awesome and powerful as Terraform is, there are times when you find yourself unable to execute certain actions for your automation. This could be due to many reasons including: no Terraform resource…

Level Up Security with Scoped Access Tokens

Scoped tokens are here!

Scopes give you finer-grained control over what access your tokens have to your content and other public content on Docker Hub!

It’s been a while since we first introduced tokens into Docker Hub (back in 2019!) and we are now excited to say that we have added the ability for accounts on a Pro or Team plan to apply scopes to their Personal Access Tokens (PATs) as a way to authenticate with Docker Hub. 

Access tokens can be used as a substitute for your password on Docker Hub, and adding scopes to these tokens gives you finer-grained control over what access the logged-in machine has. This is great for setting up things like service accounts in CI systems or registry mirrors, or even on your local machine, to make sure you are not giving too much access away.

PATs are an alternative to using passwords for authentication to Docker Hub (https://hub.docker.com/) when using the Docker command line:

docker login --username <username>

When prompted for your password, you can simply provide a token. The other advantages of tokens are that you can create and manage multiple tokens at once, see when they were last used and, if things look wrong, revoke a token’s access. This, along with our API support, makes it easy to manage the rotation of your tokens to help improve the security of your supply chain.
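For non-interactive use, such as a CI job, you can also pipe the token in instead of typing it at the prompt. A small sketch, where DOCKER_PAT is a placeholder environment variable holding your token:

$ echo "$DOCKER_PAT" | docker login --username <username> --password-stdin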

Create and Manage Personal Access Tokens in Docker Hub 

Personal access tokens are created and managed in your Account Settings.

Then head to the Security section.

From here, you can:

Create new access tokens
Modify existing tokens
Delete access tokens

The other way you can manage your tokens is through the Hub APIs. We have Swagger docs for our APIs and the new docs for scoped tokens can be found here:

http://docs.docker.com/docker-hub/api/latest/#tag/access-tokens
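As a rough sketch of what API-driven token management can look like, using the documented Hub v2 endpoints (the credentials, label and scope value below are placeholders):

# exchange your username/password for a JWT
$ TOKEN=$(curl -s -H "Content-Type: application/json" \
    -d '{"username": "<username>", "password": "<password>"}' \
    https://hub.docker.com/v2/users/login/ | jq -r .token)

# create a new PAT limited to read-only access
$ curl -s -X POST https://hub.docker.com/v2/access-tokens \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"token_label": "ci-read-only", "scopes": ["repo:read"]}'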

Scopes available 

When you are creating a token, Pro and Team plan members will now have access to four scopes:

Read, write, delete: This token allows you to read, write and delete all of the repos that you have access to. (It does not allow you to modify account settings, as password authentication would.)

Read, write: This scope is for read/write within repos you have access to (all the public content on Hub and your private content). This is the sort of scope to use within a CI that is also pushing to a repo.

Read only: This scope is read-only for all repos you have access to. This is great in production, where the system only needs to pull content from your repos to run it.

Public repo read only: This scope is for reading only public content, so nothing from your or your team’s repos. This is great when you want to set up a system which is just pulling, say, Docker Official Images or verified content from Docker Hub.

These scopes are for Pro accounts (which get 5 tokens) and Team accounts (which give each team member unlimited tokens). Free users can continue to use their single read, write, delete token and revoke/reissue this as they need. 

Scoped access tokens level up the security of Docker users’ supply chains by controlling how you authenticate to Docker Hub. Available for Pro and Team plans, scoped tokens are ready for you to try out, and we are excited to hear your feedback.

Want to learn more about Docker Scoped Tokens? Make sure to follow us on Twitter: @Docker. We’ll be hosting a live Twitter Spaces event on Thursday, Jul 22, 2021 from 8:30 – 9:00 am PST, where you’ll hear from Docker engineers, product managers and a Docker Captain!

If you have feedback or other ideas, remember to add them to our public roadmap. We are always interested in what you would like us to build next!

SAVE THE DATE: Next Community All Hands on September 16th!

We’re excited to announce that our next Community All Hands will be in exactly two months, on Thursday, September 16th, 2021 at 8am PST/5pm CET. The event is a unique opportunity for Docker staff and the broader Docker community to come together for live company updates, product updates, demos, community updates, contributor shout-outs and of course, a live Q&A.

Based on the great feedback we received from our last all-hands, we’ll stick to a similar format as last time: 

The first hour will focus on company and product updates, with presentations from Docker executive staff including Scott Johnston (CEO @ Docker), Justin Cormack (CTO @ Docker) and Jean-Laurent de Morlhon (VP of Engineering @ Docker). The following two hours will focus on community-led breakout sessions with live demos and workshops in different languages and around specific thematic areas.

As for the virtual event platform we’ll be using, we’re thrilled to be collaborating with Tulu.la again to leverage their unique interface, their integrated chat features and rock solid multi-casting capability.  They made incredible improvements to their platform since we last collaborated and we’re really keen to try out a bunch of new features they’ve shipped that will provide an even more interactive virtual experience for attendees. 

Docker Community All-Hands events are all about bringing the community together for three hours of sharing, learning and networking in a very informal, welcoming environment, and this one is going to be our best one yet! Stay tuned for more updates on speakers, talks, demos and more as we get closer to the date. In the meantime, make sure to register for the event here!


Docker Captain Take 5 – Lucas Santos

Docker Captains are select members of the community that are both experts in their field and are passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Lucas Santos who has been a Docker Captain since 2021. He is a Cloud Advocate at Microsoft and is based in São Paulo, Brazil.

How/when did you first discover Docker?

My first contact with Docker was in 2015, when I worked at a logistics company. Docker made it very easy to deploy applications in customers’ infrastructure. Every customer had a different need and a different architecture that needed to be taken into account; unfortunately, I had to leave the company before I could make this real. So I took that knowledge to my next company, where we deployed over 100 microservices using Docker images and Docker infrastructure.

What is your favorite Docker command?

I would say it’s “pull”. Because it makes it look as if images are incredibly simple things. However, there’s a whole system behind image pulling and, despite being simple to understand, a simple image pull contains a lot of steps and a lot of aggregated knowledge about containers, filesystems and so much more. And this is all transparent to the user as if it’s magic.

What is your top tip you think other people don’t know for working with Docker?

Some people, especially those who are not familiar with containers, think Docker is just a fancy word for a VM. My top tip for everyone who is working with Docker as a fancy VM: don’t. Docker containers can do so much more than just run simple processes and act as a simple VM. There’s so much we can do using containers and Docker images; it’s an endless world.

What’s the coolest Docker demo you have done/seen ?

I don’t think I have a favorite, but one that really stuck with me all these years was one of the first demos I saw with Docker, back in 2016 or 2017. I don’t remember who the speaker was or where I was, but it stuck with me because it was the first time I saw someone using CI with Docker. In this demo, the speaker not only created images on demand using a Docker container, but also spun up several other containers, one for each part of the pipeline. I had never seen anything like that before at that time.

What have you worked on in the past 6 months that you’re particularly proud of?

I’m proud of my work with my blog, and even prouder of my work on the KEDA HTTP Add-On (https://github.com/kedacore/http-add-on) team. We’ve developed a way to scale applications in a Kubernetes cluster using KEDA native scalers. One of the things I’m proudest of is the DockerCon community room for the Brazilian community; we had amazing engagement, and it was one of the most amazing events I’ve ever helped to organize.

What do you anticipate will be Docker’s biggest announcement this year?

This is a tricky question. I really don’t know what to hope for; technology moves so fast that I literally hope for anything.

What do you think is going to be Docker’s biggest challenge this year?

I think one of the biggest challenges Docker is going to face this year is to reinvent itself and reposition the company in the eyes of the developers.

What are some personal goals for the next year with respect to the Docker community ?

One of my main goals is to close the gap between the people who are still beginners in containers and those who are experts, because there is too little documentation about it. Along with that, I plan to make the Brazilian community more aware of container technologies. I can say that my main goal this year is to make everyone understand what a Docker container is, deep down.

What talk would you most love to see at DockerCon 2021?

I’d love to see more Docker integration with the cloud and new ways to use containers in the cloud.

Looking to the distant future, what is the technology that you’re most excited about and that you think holds a lot of promise?

I think one of the technologies that I’m most excited about is the future of containers. They evolve so fast that I’m anxious to see what it’ll hold next. Especially in the security field, where I feel there are a lot of things we are yet to see.

Rapid fire questions…

What new skill have you mastered during the pandemic?

Patience, probably. I started to learn IoT, Electronics, Photography, Music and a lot of other things.

Cats or Dogs?

Cats

Salty, sour or sweet?

Salty

Beach or mountains?

Mountains

Your most often used emoji?

 

Docker for Node.js Developers: 5 Things You Need to Know Not to Fail Your Security

Guest post by Liran Tal, Snyk Director of Developer Advocacy 

Docker has now totaled more than 318 billion downloads of container images. With millions of applications available on Docker Hub, container-based applications are popular and make for an easy way to consume and publish applications.

That being said, the naive way of building your own Docker Node.js web applications may come with many security risks. So, how do we make security an essential part of Docker for Node.js developers?

Many articles on this topic have been written, yet sadly, without thoughtful consideration of security and production best practices for building Node.js Docker images. This is the focus of my article here and the demos I shared on this recent Docker Build show with Peter McKee. 

Before we jump into the gist of Docker for Node.js and building Docker images, let’s have a look at some frequently asked questions on the topic.

How do I dockerize Node.js applications?

Running your Node.js application in a Docker container can be as simple as copying over the project’s directory and installing all the required npm packages, but there are many security- and production-related concerns that you might miss. These production-grade tips are laid out in the following guide on containerizing Node.js web applications with Docker, which covers everything from choosing the right Docker base image and using multi-stage builds, to managing secrets safely and properly enabling production-related framework configuration.

This article focuses on the information you need to better understand the impact of choosing the right Node.js Docker base image for your web application and will help you find the most secure Docker image available for your application.  

How is Docker helpful for Node.js developers?

Packaging your Node.js application in a container allows you to bundle your complete application, including the runtime, configuration and OS-level dependencies, and everything required for your web application to run across different platforms and CPU architectures. These images are bundled as deployable artifacts called container images. These Docker images are software-based bundles enabling easily reproducible builds, and give Node.js developers a way to run the same project or product in all environments. 

Finally, Docker containers allow you to experiment more easily with new platform releases or other changes without requiring special permissions, or setting up a dedicated environment to run a project.

1. Choose the right Node.js Docker base image for your application

When creating a Docker image for a Node.js project, we build our own application image based on another Docker image, which we pull from Docker Hub. This is what we refer to as the base image. The base image is the building block of the new Docker image you are about to build for your Node.js application.

The selection of a base image is critical because it significantly impacts everything from the Docker image build speed to the security and performance of your web application. This is so critical that Docker and Snyk co-wrote a practical guide focused on container image security for developer teams.

It’s quite possible that you are choosing a full-fledged operating system image based on Debian or Ubuntu, because it enables you to utilize all the tooling and libraries available in these images. However, this comes at a price. When a base image has a security vulnerability, you will inherit it in your newly created image. Why would you want to start off on bad terms by defaulting to a big base image that contains many vulnerabilities?

When we look at base images, many of the security vulnerabilities belong to the Operating System (OS) layer the base image uses. Snyk’s 2019 research, Shifting Docker Security Left, showed that the vulnerabilities brought in by the OS layer can vary largely depending on the flavor you choose.
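As a sketch of what that choice looks like in practice (assuming a Node.js app with a server.js entry point), preferring a slim, pinned tag over the full default image already removes a large share of those OS-layer vulnerabilities:

# a slimmer Debian-based tag instead of the full node:15 default
FROM node:15-slim
WORKDIR /usr/src/app
# install only production dependencies, reproducibly
COPY package*.json ./
RUN npm ci --only=production
COPY . .
# don't run the app as root
USER node
CMD ["node", "server.js"]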

2. Scan your Node.js Docker image during development

Creating a Docker image based on other images, as well as rebuilding them can potentially introduce new vulnerabilities, but there’s a way for you to be on top of it.

Treat the Docker image build process just like any other development-related activity. Just as you test the code you write, you should test the Docker images you build.

These tests include static file checks—also known as linters—to ensure you’re avoiding security pitfalls and other bad patterns in your Dockerfile. We’ve outlined some of these in our Docker image security best practices. If you’re a Node.js application developer you’ll also want to read through this step-by-step 10 best practices to containerize Node.js web applications with Docker.

Connecting your git repositories to Snyk is also an excellent choice. Snyk supports native integrations with GitHub, GitLab, Bitbucket and Azure Repos. Having a git integration means that we can scan your pull requests and annotate them with security information if we find vulnerabilities. This allows you to put gates in place and deny merging a pull request if it introduces new security vulnerabilities.

If you need more flexibility for your Continuous Integration (CI), or a closely integrated developer experience, meet the Snyk CLI.

The CLI allows you to easily test your Docker container image. Let’s say you’re building a Docker image locally and tagged it as nodejs:notification-v99.9—we test it as follows:

1. Install the Snyk CLI:

$ npm install -g snyk

2. Then let the Snyk CLI automatically grab an API token for you with:

$ snyk auth

3. Scan the local base image:

$ snyk container test nodejs:notification-v99.9

Test results are then printed to the screen, along with information about each CVE and the path that introduces the vulnerability, so you know which OS dependency is responsible for it.

Following is an example output for testing Docker base image node:15:

✗ High severity vulnerability found in binutils
Description: Out-of-Bounds
Info: https://snyk.io/vuln/SNYK-DEBIAN9-BINUTILS-404153
Introduced through: dpkg/dpkg-dev@1.18.25, libtool@2.4.6-2
From: dpkg/dpkg-dev@1.18.25 > binutils@2.28-5
From: libtool@2.4.6-2 > gcc-defaults/gcc@4:6.3.0-4 > gcc-6@6.3.0-18+deb9u1 > binutils@2.28-5
Introduced by your base image (node:15)

✗ High severity vulnerability found in binutils
Description: Integer Overflow or Wraparound
Info: https://snyk.io/vuln/SNYK-DEBIAN9-BINUTILS-404253
Introduced through: dpkg/dpkg-dev@1.18.25, libtool@2.4.6-2
From: dpkg/dpkg-dev@1.18.25 > binutils@2.28-5
From: libtool@2.4.6-2 > gcc-defaults/gcc@4:6.3.0-4 > gcc-6@6.3.0-18+deb9u1 > binutils@2.28-5
Introduced by your base image (node:15)

Organization: snyk-demo-567
Package manager: deb
Target file: Dockerfile
Project name: docker-image|node
Docker image: node:15
Platform: linux/amd64
Base image: node:15
Licenses: enabled

Tested 412 dependencies for known issues, found 554 issues.

Base Image Vulnerabilities Severity
node:15 554 56 high, 63 medium, 435 low

Recommendations for base image upgrade:

Alternative image types
Base Image Vulnerabilities Severity
node:current-buster-slim 53 10 high, 4 medium, 39 low
node:15.5-slim 72 18 high, 7 medium, 47 low
node:current-buster 304 33 high, 43 medium, 228 low

3. Fix your Node.js runtime vulnerabilities in your Docker images

An often-overlooked detail when managing the risk of Docker container images is the application runtime itself. Whether you’re practicing Docker for Java or running Docker for Node.js web applications, the Node.js application runtime itself may be vulnerable.

You should be aware and follow Node.js security releases and the Node.js security policy. Instead of manually keeping up with these, take advantage of Snyk to also find Node.js security vulnerabilities.

To give you more context on security vulnerabilities across the different Node.js base image tags, I scanned some of them with the Snyk CLI and plotted the results in a logarithmic-scale chart.

You can see that:

The default node base image tag, also tagged as node:latest, bundles more than 500 security vulnerabilities, but also introduces 2 security vulnerabilities in the Node.js runtime itself. That should worry you if you’re currently running a Node.js 15 version in production and you didn’t patch or fix it.

The node:alpine base image tag might not be bundling vulnerable OS dependencies in the base image—this is why it’s missing a blue bar—but it still has a vulnerable version of the latest Node.js runtime (version 15).

If you’re running an unsupported version of Node.js—for example, Node.js 10—it is vulnerable and you can see that it is not receiving any security updates.

If you were to choose Node.js version 15, the latest version released at the time of writing this article, you would actually be exposing yourself not only to 561 security vulnerabilities within this container, but also to two security vulnerabilities in the Node.js runtime itself.

We can see the Docker scan test results found in this public image testing URL: https://snyk.io/test/docker/node:15.5.0. You’re welcome to test other Node.js base image tags that you’re using with this public and free Docker scanning service: https://snyk.io/test.

Security is now an integral part of the Docker workflow, with Snyk powering container scanning in Docker Hub and Docker Desktop. In fact, if you’re using Docker as a development platform, you should review our Snyk and Docker Vulnerability Cheatsheet.

If you already have a Docker user account, you can use it to connect to Snyk and quickly import your Docker Hub repositories with up to 200 free scans per month. 

4. Monitor your deployed Docker images for your Node.js applications

Once you have Docker images built, you’re probably pushing them to a Docker registry that keeps track of the images, so that these can be deployed and spun up as a functional container application.

Why should we monitor Docker base images?

If you’re practicing all of the security guidelines we covered so far with scanning and fixing base images, that’s great. However, keep in mind that new security vulnerabilities get discovered all the time. If you have 78 security vulnerabilities in your image now, that doesn’t mean you won’t have 100 tomorrow morning when new CVEs are reported and impact your running containers in production. That’s why monitoring your registry of container images—those that you’re using to deploy containers—is crucial to ensure you will find out about security issues soon and can remediate them.
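The Snyk CLI covers this case too: snyk container monitor takes a snapshot of an image’s dependencies so Snyk can re-check them as new CVEs are published. A sketch, reusing the image tag from the earlier example:

$ snyk container monitor nodejs:notification-v99.9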

If you’re using a paid Docker Hub registry for your images, you might have already seen the integrated Docker security scanning by Snyk in Docker Hub. 

You can also integrate with many Docker image registries from the Snyk app directly. For example, you can import images from Docker Hub, ACR, ECR, GCR, or Artifactory, and then Snyk will scan these regularly for you and alert you via Slack or email about any security issues found.

5. Follow security guidelines and production-grade recommendations for a secure and optimal Node.js Docker image

Congratulations on keeping up with all the security guidelines so far!

To wrap up, if you want to dive deep into security best practices for building optimal Docker images for Node.js and Java applications, check out these resources:

10 Docker Security Best Practices – detailed security practices that you should follow when building Docker base images and when pulling them too, as it also introduces the reader to Docker content trust.

Are you a Java developer? You’ll find this resource valuable: Docker for Java developers: 5 things you need to know not to fail your security.

10 best practices to containerize Node.js web applications with Docker – if you’re a Node.js developer you are going to love this step-by-step walkthrough, showing you how to build secure and performant Docker base images for your Node.js applications.

Start testing and fixing your container images with Snyk and your Docker ID.

Video: Docker Build: Simplify Cloud-native Development with Docker & DAPR

Docker’s Peter McKee hosts serverless wizard and open sorcerer Yaron Schneider for a high-octane tour of DAPR (as in Distributed Application Runtime) and how it leverages Docker. A principal software engineer at Microsoft in Redmond, Washington, Schneider co-founded the DAPR project to make developers’ lives easier when writing distributed applications.

DAPR, which Schneider defines as “essentially a sidecar with APIs,” helps developers build event-driven, resilient distributed applications on-premises, in the cloud, or on an edge device. Lightweight and adaptable, it tries to target any type of language or framework, and can help you tackle a host of challenges that come with building microservices while keeping your code platform agnostic.
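To make the “sidecar with APIs” idea concrete, here is a rough sketch of the Dapr CLI model (the app id, ports and statestore component name are placeholders): the sidecar runs next to your app, and your code talks to it over plain HTTP.

# run a Node.js app with a Dapr sidecar attached
$ dapr run --app-id orders --app-port 3000 --dapr-http-port 3500 -- node app.js

# the app can then persist state through the sidecar's HTTP API
$ curl -X POST http://localhost:3500/v1.0/state/statestore \
    -H "Content-Type: application/json" \
    -d '[{"key": "order-1", "value": {"quantity": 2}}]'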

How do I reliably handle state in a distributed environment? How do I deal with network failures when I have a lot of distributed components in my ecosystem? How do I fetch secrets securely? How do I scope down my applications? Tag along as Schneider, who also co-founded KEDA (Kubernetes-based event-driven autoscaling), demos how to “dapr-ize” applications while he and McKee discuss their favorite languages and Schneider’s collection of rare Funko Pops!

Watch the video (Duration 0:50:36):
