A Kubernetes engineer's guide to mTLS

buoyant.io – mTLS is a hot topic in the Kubernetes world, especially for anyone tasked with getting “encryption in transit” for their applications. But what is mTLS, what kind of security does it provide, and why…
Source: news.kubernauts.io

Start Dev Environments locally, Compose V2 RC 1, and more in Docker Desktop 3.6

Docker Desktop 3.6 has just been released and we’re looking forward to you trying it.

Start Dev Environments from your Local Machine

You can now share dev environments with your colleagues and get started with code already on your local machine, in addition to the existing remote repository support.

It’s easy to use your local code! Just click Create in the top right corner of the dev environments page. 

Next select the Local tab and click Select directory to open the root of the code that you would like to work on.

Finally, click Create. This creates a Dev Environment using your local folder, and bind-mounts your local code in the Dev Environment. It opens VS Code inside the Dev Environment container.

We are excited that you are trying out our Dev Environments Preview and would love to hear from you! Let us know your feedback by creating an issue in the Dev Environments GitHub repository. Alternatively, get in touch with us on the #docker-dev-environments channel in the Docker Community Slack.

Enhanced Usability on Volume Management

We know that volumes can take up a lot of disk space, but when you’re dealing with a lot of volumes, it can be hard to find the ones you want to clean up. In 3.6 we’ve made it easier to find and sort your volumes: you can now sort volumes by name, date created, and size, and you can search for specific volumes using the search field.
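If you prefer the command line, a similar overview is available there too (a quick sketch; docker system df reports sizes approximately):

$ docker volume ls
$ docker system df -v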

We’re continuing to enhance volume management and would love your input. Have ideas on how we might make managing volumes easier? Interested in sharing your volumes with colleagues? Let us know here.

Docker Compose V2 Release Candidate 1 

A first Release Candidate for Compose V2 is now available! We’ve been working hard to address all your feedback so that you can seamlessly run the compose command in the Docker CLI. Let us know your feedback on the new ‘compose’ command by creating an issue in the Compose-CLI GitHub repository.

We have also introduced the following new features:

Docker compose command line completion (less typing is always better).

docker-compose logs --follow, which makes it easier to follow the logs of new containers: it reacts to containers added by scale and reports additional logs when more containers are added to a service (see the sketch below).
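For example (a sketch; the web service name is illustrative):

$ docker compose up -d
$ docker compose logs --follow
# in another terminal: scale the service up; logs from the new
# containers appear in the stream above without restarting it
$ docker compose up -d --scale web=3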

You can test this new functionality by running the docker compose command, dropping the hyphen in docker-compose. We are continuing to roll this out gradually; 54% of compose users are already using Compose V2, and you’ll be notified if you are using the new docker compose. You can opt in to running Compose V2 with docker-compose by running the docker-compose enable-v2 command or by updating your Docker Desktop Experimental Features settings.

If you run into any issues using Compose V2, simply run the docker-compose disable-v2 command, or turn it off using Docker Desktop’s Experimental Features settings.
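A minimal sketch of switching between the two versions from a shell:

# try the new command directly (a space instead of a hyphen)
$ docker compose version

# make the legacy docker-compose entrypoint run V2 as well
$ docker-compose enable-v2

# revert to V1 if something breaks
$ docker-compose disable-v2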

To get started simply download or update to Docker Desktop 3.6. If you’d like to dig deeper into your volumes or take your collaboration to the next level with dev environments, upgrade to a Pro or Team subscription today!
The post Start Dev Environments locally, Compose V2 RC 1, and more in Docker Desktop 3.6 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Building a healthy and secure software supply chain

Securing the software supply chain is now an everyday concern for developers. As attackers increasingly target open-source components as a way to compromise the software supply chain, developers hold the keys to making their projects as secure as they can be. That’s why Docker continues to invest heavily in our developer tools like Docker Desktop and secure supply chain offerings such as Docker Official Images and Docker Verified Publisher content.

This Tuesday, August 17, Docker CTO Justin Cormack and Head of Developer Relations Peter McKee will cover what it takes to develop securely from code to cloud. The webinar will provide a comprehensive overview of software security, including what a software supply chain attack is, key principles for identifying the weakest link, and the stages of effectively securing the software supply chain.

As Justin told Dark Reading last month:  

“Every time you use software that you didn’t write yourself, often open source software that you use in your applications, you are trusting both that the software you added is what you thought it is, and that it is trustworthy not hostile. Usually both these things are true, but when they go wrong, like when hundreds of people installed updates from SolarWinds that turned out to contain code to attack their infrastructure, the consequences are serious.”

This is a webinar you don’t want to miss. Register today.

The post Building a healthy and secure software supply chain appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Docker Security Roundup: News, Articles, Sessions

With the eyes of the security world converging on Black Hat USA next week, now is a good time to remember that building secure applications is paramount.

In the latest chapter in Docker’s security story, Docker CTO Justin Cormack last month provided an important update on software supply chain security. He blogged about the publication of a white paper, “Software Supply Chain Best Practices,” by the security technical advisory group of the Cloud Native Computing Foundation (CNCF).

The long-awaited document is important because the software supply chain — that stage of the software development journey in which software is written, assembled, built or tested before production — has become a favored target of cyber criminals. Justin was one of the prime movers of the project and one of the CNCF reviewers who helped steer the 50-page document through multiple iterations to completion.

The paper aims to make secure supply chains easier and more widely adopted through four key principles, which Justin summarizes:

“In simpler language, this means that you need to be able to securely trace all the code you are using, which exact versions you are using, where they came from, and in an automated way so that there are no errors. Your build environments should be minimal, secure and well defined, i.e. containerized. And you should be making sure everything is authenticated securely.”

Contributing writer Robert Lemos quoted Justin’s blog in a Dark Reading article last week. The article, titled “What Does It Take to Secure Containers,” quotes Justin on why creating a trusted pipeline is so critical:

“Every time you use software that you didn’t write yourself, often open source software that you use in your applications, you are trusting both that the software you added is what you thought it is, and that it is trustworthy not hostile. Usually both these things are true, but when they go wrong, like when hundreds of people installed updates from SolarWinds that turned out to contain code to attack their infrastructure, the consequences are serious.”

Security at DockerCon

Several other facets of our security story were on the menu at DockerCon in May.

Alvaro Muro, an integrations engineer at Sysdig, led a webinar on Top Dockerfile Security Best Practices showing how these practices for image builds help you prevent security issues and optimize containerized applications. And he shared ways to avoid unnecessary privileges, reduce the attack surface with multistage builds, prevent confidential data leaks, detect bad practices and more.

In their talk, An Ounce of Prevention: Curing Insecure Container Images, Red Ventures’ Seyfat Khamidov and Eric Smalling of Snyk shared keys to catching vulnerabilities in your Docker containers before they go to production, such as scanning individual containers and incorporating container security scanning into your continuous integration build jobs. They also covered what Red Ventures is doing to scan container images at scale, and the new integration between Docker and Snyk for scanning container images for security vulnerabilities.

You know that feeling of panic when you scan a container and find a long list of vulnerabilities? Yeah, that one. In his DockerCon presentation, My Container Image Has 500 Vulnerabilities, Now What?, Snyk’s Matt Jarvis talks you off the ledge. How do you assess and prioritize security risk? How do you start to remediate? He lays out what you need to consider and how to get started.

Speaking of the SolarWinds breach, GitLab’s Brendan O’Leary dissected that and a number of other supply chain attacks in his talk, As Strong as the Weakest Link: Securing the Software Supply Chain. He delved into the simple, practical security measures that were missed, allowing the attacks to get a foothold in the first place.

Finally, in a session titled Secure Container Workloads for K8s in Containerd, Om Moolchandani, CISO and CTO at Accurics, spells out how security can be easily embedded into Docker development workflows and Kubernetes deployments to increase resiliency while practically eliminating the effort required to “be secure.” He also highlights open source tools that enable you to establish security guardrails, ensuring you build in security from the start, with programmatic enforcement in development pipelines, and stay secure with automated enforcement in the K8s runtime.

At Docker, security is more than a watchword — it’s an obsession. To learn more, read Justin’s blog and watch the recorded sessions listed above. They’re still available and still free.

Join the Next Community All Hands on September 16th!

We’re excited to announce that our next Community All Hands will be in exactly two months, on Thursday, September 16th, 2021 at 8am PST / 5pm CET. The event is a unique opportunity for Docker staff, Captains, Community Leaders and the broader Docker community to come together for live company updates, product updates, demos, community updates, contributor shout-outs and of course, a live Q&A. Make sure to register for the event here!
The post Docker Security Roundup: News, Articles, Sessions appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Introduction to heredocs in Dockerfiles

Guest post by Docker Community Member Justin Chadell. This post originally appeared here.

As of a couple weeks ago, Docker’s BuildKit tool for building Dockerfiles now supports heredoc syntax! With these new improvements, we can do all sorts of things that were difficult before, like multiline RUNs without needing all those pesky backslashes at the end of each line, or the creation of small inline configuration files.

In this post, I’ll cover the basics of what these heredocs are, and more importantly what you can use them for, and how to get started with them!

BuildKit (a quick refresher)

From BuildKit’s own GitHub:

BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner.

Essentially, it’s the next-generation builder for Docker images, neatly separated from the rest of the main Docker runtime; you can use it for building Docker images, or images for other OCI runtimes.

It comes with a lot of useful (and pretty) features beyond what the basic builder supports, including neater build log output, faster and more cache-efficient builds, concurrent builds, as well as a very flexible architecture to allow easy extensibility (I’m definitely not doing it justice).

You’re either most likely using it already, or you probably want to be! You can enable it locally by setting the environment variable DOCKER_BUILDKIT=1 when performing your docker build, or switch to using the new(ish) docker buildx command.
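A minimal sketch of both options (the myapp tag is illustrative):

# opt in for a single build with the classic CLI
$ DOCKER_BUILDKIT=1 docker build -t myapp .

# or use buildx, which always builds with BuildKit
$ docker buildx build -t myapp .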

At a slightly more technical level, buildkit allows easy switching between multiple different “builders”, which can be local or remote, in the docker daemon itself, in docker containers or even in a Kubernetes pod. The builder itself is split up into two main pieces, a frontend and a backend: the frontend produces intermediate Low Level Builder (LLB) code, which is then constructed into an image by the backend.

You can think of LLB as being to BuildKit what LLVM IR is to Clang.

Part of what makes buildkit so fantastic is its flexibility – these components are completely detached from each other, so you can use any frontend in any image. For example, you could use the default Dockerfile frontend, or compile your own self-contained buildpacks, or even develop your own alternative file format like Mockerfile.

Getting set up

To get started with using heredocs, first make sure you’re setup with buildkit. Switching to buildkit gives you a ton of out-of-the-box improvements to your build setup, and should have complete compatibility with the old builder (and you can always switch back if you don’t like it).

With buildkit properly setup, you can create a new Dockerfile: at the top of this file, we need to include a #syntax= directive. This directive informs the parser to use a specific frontend – in this case, the one located at docker/dockerfile:1.3-labs on Docker Hub.

# syntax=docker/dockerfile:1.3-labs

With this line (which has to be the very first line), buildkit will find and download the right image, and then use it to build the image.

We then specify the base image to build from (just like we normally would):

FROM ubuntu:20.04

With all that out the way, we can use a heredoc, executing two commands in the same RUN!

RUN <<EOF
echo "Hello" >> /hello
echo "World!" >> /hello
EOF
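Building this and reading the file back should confirm that both commands ran in a single layer (the heredoc-demo tag is illustrative):

$ DOCKER_BUILDKIT=1 docker build -t heredoc-demo .
$ docker run --rm heredoc-demo cat /hello
Hello
World!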

Why?

Now that heredocs are working, you might be wondering – why all the fuss? Well, this feature has kind of, until now, been missing from Dockerfiles.

See moby/moby#34423 for the original issue that proposed heredocs in 2017.

Let’s suppose you want to build an image that requires a lot of commands to setup. For example, a fairly common pattern in Dockerfiles involves wanting to update the system, and then to install some additional dependencies, i.e. apt update, upgrade and install all at once.

Naively, we might put all of these as separate RUNs:

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y …

But, sadly, like too many intuitive solutions, this doesn’t quite do what we want. It certainly works – but we create a new layer for each RUN, making our image much larger than it needs to be (and making builds take much longer).

So, we can squish this into a single RUN command:

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y …

And that’s what most Dockerfiles do today, from the official docker images down to the messy ones I’ve written for myself. It works fine, images are small and fast to build… but it does look a bit ugly. And if you accidentally forget the line continuation symbol \, well, you’ll get a syntax error!

Heredocs are the next step to improve this! Now, we can just write:

RUN <<EOF
apt-get update
apt-get upgrade -y
apt-get install -y …
EOF

We use the <<EOF to introduce the heredoc (just like in sh/bash/zsh/your shell of choice), and EOF at the end to close it. In between those, we put all our commands as the content of our script to be run by the shell!

More ways to run…

So far, we’ve seen some basic syntax. However, the new heredoc support doesn’t just allow simple examples, there’s lots of other fun things you can do.

For completeness, the hello world example using the same syntax we’ve already seen:

RUN <<EOF
echo "Hello" >> /hello
echo "World!" >> /hello
EOF

But let’s say your setup scripts are getting more complicated, and you want to use another language – say, like Python. Well, no problem, you can connect heredocs to other programs!

RUN python3 <<EOF
with open("/hello", "w") as f:
    print("Hello", file=f)
    print("World", file=f)
EOF

In fact, you can use commands as complex as you like with heredocs, simplifying the above to:

RUN python3 <<EOF > /hello
print("Hello")
print("World")
EOF

If that feels like it’s getting a bit fiddly or complicated, you can also always just use a shebang:

RUN <<EOF
#!/usr/bin/env python3
with open("/hello", "w") as f:
    print("Hello", file=f)
    print("World", file=f)
EOF

There’s lots of different ways to connect heredocs to RUN, and hopefully some more ways and improvements to come in the future!

…and some file fun!

Heredocs in Dockerfiles also let us mess around with inline files! Let’s suppose you’re building an nginx site, and want to create a custom index page:

FROM nginx
COPY index.html /usr/share/nginx/html

And then in a separate file index.html, you put your content. But if your index page is just really simple, it feels frustrating to have to separate everything out: heredocs let you keep everything in the same place if you want!

FROM nginx
COPY <<EOF /usr/share/nginx/html/index.html
(your index page goes here)
EOF

You can even copy multiple files at once, in a single layer:

COPY <<robots.txt <<humans.txt /usr/share/nginx/html/
(robots content)
robots.txt
(humans content)
humans.txt

Finishing up

Hopefully, I’ve managed to convince you to give heredocs a try when you can! For now, they’re still only available in the staging frontend, but they should be making their way into a release very soon – so make sure to take a look and give your feedback!

If you’re interested, you can find out more from the official buildkit Dockerfile syntax guide.
The post Introduction to heredocs in Dockerfiles appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Engineering Update: BuildKit 0.9 and Docker Buildx 0.6 Releases

On July 16th we released BuildKit 0.9.0, Docker Buildx 0.6.0, Dockerfile 1.3.0 and Dockerfile 1.3.0-labs. These releases come with bug fixes, feature-parity improvements, refactoring and also new features.

Dockerfile new features

Installation

There is no installation needed: BuildKit supports loading frontends dynamically from container images. Images for the Dockerfile frontend are available in the docker/dockerfile repository.

To use the external frontend, the first line of your Dockerfile needs to point to the specific image you want to use:

# syntax=docker/dockerfile:1.3

More info: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md

RUN command now supports --network flag

The RUN command now supports a --network flag for requesting a specific type of network for the step (host, none, default). This lets you disable all networking during a docker build step to ensure no network is used, or opt in to explicitly using the host’s network, which requires a specific flag to be set before it works.

--network=host requires allowing the network.host entitlement. This feature was previously only available on the labs channel:

# syntax=docker/dockerfile:1.3
FROM python:3.6
ADD mypackage.tgz wheels/
RUN --network=none pip install --find-links wheels mypackage
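Conversely, a sketch of opting in to host networking from the CLI (assuming a buildx builder configured to allow the entitlement):

$ docker buildx build --allow network.host .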

RUN --mount flag variable expansion

Values for the RUN --mount flag now support variable expansion, except for the from field:

# syntax=docker/dockerfile:1.3
FROM golang

ARG GO_CACHE_DIR=/root/.cache/go-build
RUN --mount=type=cache,target=$GO_CACHE_DIR go build …
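Because the target comes from an ARG, you can redirect the cache at build time without editing the Dockerfile (a sketch; the path is illustrative):

$ DOCKER_BUILDKIT=1 docker build --build-arg GO_CACHE_DIR=/tmp/go-cache .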

Here-document syntax

RUN and COPY commands now support Here-document syntax, allowing you to write multiline inline scripts and files (labs channel) without lots of && and \ characters:

Before:

# syntax=docker/dockerfile:1.3
FROM debian
RUN apt-get update \
 && apt-get install -y vim

With new Here-document syntax:

# syntax=docker/dockerfile:1.3-labs
FROM debian
RUN <<eot bash
apt-get update
apt-get install -y vim
eot

In COPY commands, source parameters can be replaced with here-doc indicators. Regular here-doc variable expansion and tab-stripping rules apply:

# syntax=docker/dockerfile:1.3-labs
FROM alpine
ARG FOO=bar
COPY <<-eot /app/foo
hello ${FOO}
eot
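Building this and reading the file back should show the expanded default value (the heredoc-copy tag is illustrative):

$ docker build -t heredoc-copy .
$ docker run --rm heredoc-copy cat /app/foo
hello bar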

More info on BuildKit repository.

OpenTracing providers replaced with support for OpenTelemetry

OpenTelemetry has superseded OpenTracing. The API is quite similar, but the additional collector endpoints should allow forwarding traces from the client or a container in the future.

The JAEGER_TRACE env variable is supported as before. The OTEL collector is also supported, via both gRPC and HTTP protocols.

This is also the first time cross-process tracing from Buildx is available:

# create jaeger container
$ docker run -d --name jaeger \
    -p 6831:6831/udp -p 16686:16686 \
    jaegertracing/all-in-one

# create builder with JAEGER_TRACE env
$ docker buildx create \
    --name builder \
    --driver docker-container \
    --driver-opt network=host \
    --driver-opt env.JAEGER_TRACE=localhost:6831 \
    --use

# open Jaeger UI at http://localhost:16686/ to see the traces

Resource limiting

Users can now limit the parallelism of the BuildKit solver: the buildkitd config now accepts a max-parallelism setting, which is particularly useful for low-powered machines.

Here is an example of how to do this with GitHub Actions:
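A minimal sketch (the value 4 and the config input of docker/setup-buildx-action are illustrative assumptions, not from the release notes):

# buildkitd.toml
[worker.oci]
  max-parallelism = 4

# workflow fragment: point the builder at that config
- uses: docker/setup-buildx-action@v1
  with:
    config: .github/buildkitd.toml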

We are also now limiting TCP connections to 4 per registry, plus one additional connection that is not used for layer pulls and pushes. Limiting TCP connections per host keeps your build from getting stuck while pulling images; the additional connection is used for metadata requests (image config retrieval) to improve overall build time.

GitHub Actions cache backend (experimental)

We have released a new experimental GitHub cache backend to speed up Docker builds in GitHub Actions.

This replaces a previously inefficient approach that reused the build cache via the official actions/cache action together with the local cache exporter/importer, where the whole cache had to be saved and loaded on every run and tracking did not happen per blob.

To start using this experimental cache, use our example on Build Push Action repository.

Git improvements

The default branch name is now detected correctly from the remote when using the Git context.

Support for subdirectories when building from Git source is now released:

$ docker buildx build git://github.com/repo:path/to/subapp

SSH agent is automatically forwarded when building from SSH Git URL:

$ docker buildx build git@github.com:myrepo.git

New platforms and QEMU update

RISC-V (buildkitd, buildctl, buildx, frontends)
Windows ARM64 (buildctl, buildx)
macOS ARM64 (buildctl)

The embedded QEMU emulators have also been updated to v6.0.0 and provide emulation for additional platforms.

--metadata-file for buildctl and buildx

A new --metadata-file flag has been added to the buildx build and bake commands, allowing you to save build result metadata in JSON format:

$ docker buildx build --output type=docker --metadata-file ./metadata.json .

{
  "containerimage.config.digest": "sha256:d8b8b4f781520aeafedb5a88ff50fbb625cfebad87e392794f1e26a724a2f22a",
  "containerimage.digest": "sha256:868f04872380274dcf8528222e84dc66702394de80889e51c87a14126ea9ff6a"
}

Buildctl also accepts the --metadata-file flag to output build metadata.
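A sketch of the equivalent buildctl invocation (the frontend and local source flags shown are the usual ones for a Dockerfile build, not specific to this feature):

$ buildctl build --frontend dockerfile.v0 \
    --local context=. --local dockerfile=. \
    --metadata-file ./metadata.json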

Docker buildx image

Buildx binaries can now be accessed through the buildx-bin Docker image, so that you can, for example, use buildx inside a Docker image in your CI. Here is how to use buildx inside a Dockerfile:

FROM docker
COPY --from=docker/buildx-bin:latest /buildx /usr/libexec/docker/cli-plugins/docker-buildx
RUN docker buildx version

Other changes

For the complete list of changes, see the official release notes:

https://github.com/moby/buildkit/releases/tag/v0.9.0
https://github.com/moby/buildkit/releases/tag/dockerfile%2F1.3.0
https://github.com/moby/buildkit/releases/tag/dockerfile%2F1.3.0-labs
https://github.com/docker/buildx/releases/tag/v0.6.0

The post Engineering Update: BuildKit 0.9 and Docker Buildx 0.6 Releases appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Invoking the AWS CLI with Terraform

faun.pub – As awesome and powerful as Terraform is, there are times when you find yourself unable to execute certain actions for your automation. This could be due to many reasons including: no Terraform resource…
Source: news.kubernauts.io