Enable No-Code Kubernetes with the harpoon Docker Extension

(This post is co-written by Dominic Holt, Founder & CEO of harpoon.)

Kubernetes has been a game-changer for ensuring scalable, high availability container orchestration in the Software, DevOps, and Cloud Native ecosystems. While the value is great, it doesn’t come for free. Significant effort goes into learning Kubernetes and all the underlying infrastructure and configuration necessary to power it. Still more effort goes into getting a cluster up and running that’s configured for production with automated scalability, security, and cluster maintenance.

All told, Kubernetes can take an incredible amount of effort, and you may end up wondering if there’s an easier way to get all the value without all the work.


Meet harpoon

With harpoon, anyone can provision a Kubernetes cluster and deploy their software to the cloud without writing code or configuration. Get your software up and running in seconds with a drag and drop interface. When it comes to monitoring and updating your software, harpoon handles that in real-time to make sure everything runs flawlessly. You’ll be notified if there’s a problem, and harpoon can re-deploy or roll back your software to ensure a seamless experience for your end users. harpoon does this dynamically for any software — not just a small, curated list.

To run your software on Kubernetes in the cloud, just enter your credentials and click the start button. In a few minutes, your production environment will be fully running with security baked in. Adding any software is as simple as searching for it and dragging it onto the screen. Want to add your own software? Connect your GitHub account with only a couple clicks and choose which repository to build and deploy in seconds with no code or complicated configurations.

harpoon enables you to do everything you need, like logging and monitoring, scaling clusters, creating services and ingress, and caching data in seconds with no code. harpoon makes DevOps attainable for anyone, leveling the playing field by delivering your software to your customers at the same speed as the largest and most technologically advanced companies at a fraction of the cost.

The architecture of harpoon

harpoon works in a hybrid SaaS model and runs on top of Kubernetes itself, which hosts the various microservices and components that form the harpoon enterprise platform. This is what you interface with when you’re dragging and dropping your way to nirvana. By providing cloud service provider credentials to an account owned by you or your organization, harpoon uses Terraform to provision all of the underlying virtual infrastructure in your account, including your own Kubernetes cluster. In this way, you have complete control over all of your infrastructure and clusters.

Once fully provisioned, harpoon’s UI can send commands to various harpoon microservices in order to communicate with your cluster and create Kubernetes deployments, services, configmaps, ingress, and other key constructs.
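As a rough sketch, a drag-and-drop deployment boils down to standard Kubernetes objects created on your cluster. The names and image below are illustrative, not what harpoon actually generates:

```yaml
# Illustrative only: the kind of Deployment + Service pair that a
# no-code tool like harpoon creates behind the scenes for a dropped container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
```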

If the cloud’s not for you, we also offer a fully on-prem, air-gapped version of harpoon that can be deployed essentially anywhere.

Why harpoon?

Building production software environments is hard, time-consuming, and costly. Maintenance costs often start around $200K per year for an experienced DevOps engineer and climb into the tens of millions for larger clusters and teams. Using harpoon instead of writing custom scripts can save hundreds of thousands of dollars per year in labor costs for small companies, and millions per year for mid- to large-size businesses.

Using harpoon will enable your team to have one of the highest quality production environments available in mere minutes. Without writing any code, harpoon automatically sets up your production environment in a secure environment and enables you to dynamically maintain your cluster without any YAML or Kubernetes expertise. Better yet, harpoon is fun to use. You shouldn’t have to worry about what underlying technologies are deploying your software to the cloud. It should just work. And making it work should be simple. 

Why run harpoon as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With the harpoon Docker Extension, you can simplify the deployment process with drag and drop, visually deploying and configuring your applications directly into your Kubernetes environment. Currently, the harpoon extension for Docker Desktop supports the following features:

Link harpoon to a cloud service provider like AWS and deploy a Kubernetes cluster and the underlying virtual infrastructure.

Easily accomplish simple or complex enterprise-grade cloud deployments without writing any code or configuration scripts.

Connect your source code repository and set up an automated deployment pipeline without any code in seconds.

Supercharge your DevOps team with real-time visual cues to check the health and status of your software as it runs in the cloud.

Drag and drop container images from Docker Hub, source code repositories, or private container registries.

Manage your K8s cluster with visual pods, ingress, volumes, configmaps, secrets, and nodes.

Dynamically manipulate routing in a service mesh with only simple clicks and port numbers.

How to use the harpoon Docker Extension

Prerequisites: Docker Desktop 4.8 or later

Step 1: Enable Docker Extensions

You’ll need to enable Docker Extensions under the Settings tab in Docker Desktop. Hop into Docker Desktop and confirm that the feature is enabled: go to Settings > Extensions and check the “Enable Docker Extensions” box.

Step 2: Install the harpoon Docker Extension

The harpoon extension is available on the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for harpoon in the Extensions Marketplace, then select Install.

This will download and install the latest version of the harpoon Docker Extension from Docker Hub.

Step 3: Register with harpoon

If you’re new to harpoon, then you might need to register by clicking the Register button. Otherwise, you can use your credentials to log in.

Step 4: Link your AWS Account

While you can drag out any software or Kubernetes components you like, if you want to do actual deployments, you will first need to link your cloud service provider account. At the moment, harpoon supports Amazon Web Services (AWS). Over time, we’ll be supporting all of the major cloud service providers.

If you want to deploy software on top of AWS, you will need to provide harpoon with an access key ID and a secret access key. Since harpoon is deploying all of the necessary infrastructure in AWS in addition to the Kubernetes cluster, we require fairly extensive access to the account in order to successfully provision the environment. Your keys are only used for provisioning the necessary infrastructure to stand up Kubernetes in your account and to scale up/down your cluster as you designate. We take security very seriously at harpoon, and aside from using an extensive and layered security approach for harpoon itself, we use both disk and field level encryption for any sensitive data.

The following are the specific permissions harpoon needs to successfully deploy a cluster:

AmazonRDSFullAccess

IAMFullAccess

AmazonEC2FullAccess

AmazonVPCFullAccess

AmazonS3FullAccess

AWSKeyManagementServicePowerUser
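If you prefer to script the account setup, a dedicated IAM user with the managed policies above can be created with the AWS CLI. This is a sketch; the user name "harpoon-deployer" is an example, not something harpoon requires:

```shell
# Sketch: create a dedicated IAM user and attach the managed policies
# listed above. "harpoon-deployer" is an illustrative name.
aws iam create-user --user-name harpoon-deployer

for policy in AmazonRDSFullAccess IAMFullAccess AmazonEC2FullAccess \
              AmazonVPCFullAccess AmazonS3FullAccess AWSKeyManagementServicePowerUser; do
  aws iam attach-user-policy \
    --user-name harpoon-deployer \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done

# Generate the access key ID / secret access key pair to paste into harpoon
aws iam create-access-key --user-name harpoon-deployer
```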

Step 5: Start the cluster

Once you’ve linked your cloud service provider account, you just click the “Start” button on the cloud/node element in the workspace. That’s it. No, really! The cloud/node element will turn yellow and provide a countdown. While your experience may vary a bit, we tend to find that you can get a cluster up in under six minutes. When the cluster is running, the cloud icon will return and the element will glow a happy blue.

Step 6: Deployment

You can search for any container image you’d like from Docker Hub, or link your GitHub account to search any GitHub repository (public or private) to deploy with harpoon. You can drag any search result over to the workspace for a visual representation of the software.

Deploying containers is as easy as hitting the “Deploy” button. GitHub repositories require a build first. For harpoon to build a GitHub repository successfully, we currently require the repository to have a top-level Dockerfile, which is industry best practice. If the Dockerfile is there, once you click the “Build” button, harpoon will automatically find it and build a container image. After a successful build, the “Deploy” button becomes enabled and you can deploy the software directly.
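For reference, the kind of minimal top-level Dockerfile a build service can auto-detect might look like this sketch for a Node.js app; the file names and entry point are assumptions, not harpoon requirements:

```dockerfile
# Sketch of a minimal top-level Dockerfile for a Node.js service.
# server.js and package.json are illustrative file names.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```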

Once you have a deployment, you can attach any Kubernetes element to it, including ingress, configmaps, secrets, and persistent volume claims.

You can find more info here if you need help: https://docs.harpoon.io/en/latest/usage.html 

Next steps

The harpoon Docker Extension makes it easy to provision and manage your Kubernetes clusters. You can visually deploy your software to Kubernetes and configure it without writing code or configuration. By integrating directly with Docker Desktop, we hope to make it easy for DevOps teams to dynamically start and maintain their cluster without any YAML, helm chart, or Kubernetes expertise.

Check out the harpoon Docker Extension for yourself!
Source: https://blog.docker.com/feed/

Docker Compose: What’s New, What’s Changing, What’s Next

We’ll walk through new Docker Compose features the team has built, share what we plan to work on next, and remind you to switch to Compose V2 as soon as possible.

Compose V1 support will no longer be provided after June 2023 and will be removed from all future Docker Desktop versions. If you’re still on Compose V1, we recommend you switch as soon as possible to leave time to address any issues with running your Compose applications. (Until the end of June 2023, we’ll monitor Compose issues to address challenges related to V1 migration to V2.)


Compose V1: So long and farewell, old friend!

In the Compose V2 GA announcement, we proposed a timeline for ending Compose V1 support. We’ve since extended that timeline, so support now ends after June 2023.

Switching is easy. Type docker compose instead of docker-compose in your favorite terminal.

An even easier way is to choose Compose V2 by default inside Docker Desktop settings. Activating this option creates a symlink for you so you can continue using docker-compose to preserve your potential existing scripts, but start using the newest version of Compose.
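You can confirm which version each command resolves to from your terminal (output shown is the general shape, not an exact version):

```shell
# Check that both entry points now run Compose V2
docker compose version      # Compose V2, the docker CLI plugin
docker-compose version      # resolves to the same V2 binary once the symlink option is enabled
```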

For more on the differences between V1 and V2, see the Evolution of Compose in docs.

What’s new?

Build improvements

During the past few months, the team’s main focus has been improving the build experience within Compose. After collecting the proposals opened in the Compose specification, we started to ship the following new features incrementally:

cache_to support to allow sharing layers from intermediary images in a multi-stage build. One of the best ways to use this option is sharing cache in your CI between your workflow steps.

no-cache to force a full rebuild of your service.

pull to trigger a registry sync for force-pulling your base images.

secrets to use at build time.

tags to define a list associated with your final build image.

ssh to use your local ssh configuration or pass keys to your build process. This allows you to clone a private repo or interact with protected resources; the ssh info won’t be stored in the final image.

platforms to define multiple platforms and let Compose produce multi-arch images of your services.
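Several of these options can be combined in a single build section. Here is a sketch that shares build cache through a registry and adds extra tags; the registry references and tag names are illustrative:

```yaml
# Sketch: combining the new build attributes in one service definition.
services:
  myservice:
    image: myorg/myservice:latest
    build:
      context: .
      cache_from:
        - type=registry,ref=myorg/myservice:buildcache
      cache_to:
        - type=registry,ref=myorg/myservice:buildcache,mode=max
      tags:
        - myorg/myservice:v1.0.0
        - registry.example.com/myorg/myservice:v1.0.0
```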

Let’s dive deeper into those last two improvements.

Using ssh resources

ssh was introduced in Compose V2.4.0 GA and lets you use ssh resources at build time. Now you’re able to use your local ssh configuration or public/private keys when you build your service image. For example, you can clone a private Git repository inside your container or connect to a remote server to use critical resources during the build process of your services.

The ssh resources are only used during the build process and won’t be available in your final image.

There are different possibilities for using ssh with Compose. The first one is the new ssh attribute of the build section in your Compose file:

services:
  myservice:
    image: build-test-ssh
    build:
      context: .
      ssh:
        - fake-ssh=./fixtures/build-test/ssh/fake_rsa

And you need to reference the ID of your ssh resource inside your Dockerfile:

FROM alpine
RUN apk add --no-cache openssh-client

WORKDIR /compose
COPY fake_rsa.pub /compose/

RUN –mount=type=ssh,id=fake-ssh,required=true diff <(ssh-add -L) <(cat /compose/fake_rsa.pub)

This example is a simple demonstration of using keys at build time. It copies a public ssh key, mounts the private key inside the container, and checks if it matches the public key previously added.

It’s also possible to directly use the CLI with the new --ssh flag. Let’s try to use it to clone a private Git repository.

The following Dockerfile adds GitHub as a known host in the ssh configuration of the image and then mounts the ssh local agent to clone the private repository:

# syntax=docker/dockerfile:1
FROM alpine:3.15

RUN apk add --no-cache openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@github.com:glours/secret-repo.git

CMD ls -lah secret-repo

And using the docker compose build --no-cache --progress=plain --ssh default command will pass your local ssh agent to Compose.

Build multi-arch images with Compose

In Compose version V2.11.0, we introduced the ability to add platforms in the build section and let Compose do a cross-platform build for you.

The following Dockerfile logs the name of the service, the targeted platform to build, and the platform used for doing this build:

FROM --platform=$BUILDPLATFORM golang:alpine AS build

ARG TARGETPLATFORM
ARG BUILDPLATFORM
ARG SERVICENAME
RUN echo "I am $SERVICENAME building for $TARGETPLATFORM, running on $BUILDPLATFORM" > /log

FROM alpine
COPY –from=build /log /log

This Compose file defines an application stack with two services (A and B) which are targeting different build platforms:

services:
  serviceA:
    image: build-test-platform-a:test
    build:
      context: .
      args:
        - SERVICENAME=serviceA
      platforms:
        - linux/amd64
        - linux/arm64
  serviceB:
    image: build-test-platform-b:test
    build:
      context: .
      args:
        - SERVICENAME=serviceB
      platforms:
        - linux/386
        - linux/arm64

Be sure to create and use a docker-container build driver that allows you to build multi-arch images: 

docker buildx create --driver docker-container --use

To use the multi-arch build feature:

> docker compose build --no-cache

Additional updates

We also fixed issues, managed corner cases, and added features. For example, you can define a secret from the environment variable value:

services:
  myservice:
    image: build-test-secret
    build:
      context: .
      secrets:
        - envsecret

secrets:
  envsecret:
    environment: SOME_SECRET

We’re now providing Compose binaries for windows/arm64 and linux/riscv64.

We overhauled the way Compose manages .env files, environment variables, and precedence interpolation. Read the environment variables precedence documentation to learn more. 
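As a quick sketch of that precedence, a variable set in your shell wins over the same variable declared in a .env file; the file contents here are illustrative:

```shell
# .env declares TAG=v1.0, but the shell value takes precedence
echo "TAG=v1.0" > .env
TAG=v2.0 docker compose config
# any ${TAG} reference in the Compose file is interpolated as v2.0
```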

To see all the changes we’ve made since April 2022, check out the Compose release page or the comparing changes page.

What’s next?

The Compose team is focused on improving the developer inner loop using Compose. Ideas we’re working on include:

A development section in the Compose specification, including a watch mode, so you can either reuse the watch implementation from your programming tooling or let Compose manage it for you

Capabilities to add specific debugging ports, or use profiling tooling inside your service containers

Lifecycle hooks to interact with services at different moments of the container lifecycle (for example, letting you execute a command when a container is created but not started, or when it’s up and healthy)

A --dry-run flag to test a Compose command before executing it

If you’d like to see something in Compose to improve your development workflow, we invite your feedback in our Public Roadmap.

To take advantage of ongoing improvements to Compose and surface any issues before support ends June 2023, make sure you’re on Compose V2. Use the docker compose CLI or activate the option in Docker Desktop settings.

To learn more about the differences between V1 and V2, check out the Evolution of Compose in our documentation.

January Extensions: Deploy Kubernetes and Develop Cloud Apps Locally

It’s a new year, and we’ve got some new Docker Extensions to share with you! Docker extensions build new functionality into Docker Desktop, extend its existing capabilities, and allow you to discover and integrate additional tools that you’re already using with Docker. Let’s take a look at our exciting new extensions from January.

And if you’d like to see everything available, check out our full Extensions Marketplace!

Deploy production-grade Kubernetes with no code

Are you new to Kubernetes, but need to deploy something to production? With the harpoon extension for Docker Desktop, you can use a no-code solution to deploy any software in seconds. It’s even great for a seasoned pro, as it has all the features you need to be successful in deploying and configuring your software.

With the harpoon extension, you can:

Deploy your software to the cloud

Monitor your software in real time

Be notified if there is a problem

Manage and run cloud applications locally

Do you need configuration profiles, container logs, and more locally? With LocalStack for Docker Desktop, you can now easily manage your LocalStack instance. Using the LocalStack extension enables a highly efficient and fully local development and testing loop for you and your team.

With the LocalStack extension, you can:

Control LocalStack: Start, stop, and restart LocalStack from Docker Desktop. You can also see the current status of your LocalStack instance and navigate to the LocalStack Web Application.

Get LocalStack insights: You can see the log information of the LocalStack instance and all the available services and their status on the service page.

Manage LocalStack configurations: You can manage and use your profiles via configurations and create new configurations for your LocalStack instance.

Check out the latest Docker Extensions with Docker Desktop

Docker is always looking for ways to improve the developer experience. Check out these resources for more info on extensions:

Install Docker Desktop for Mac, Windows, or Linux to try extensions yourself.

Visit our Extensions Marketplace to see all of our extensions.

Learn about building your own extension with our Quick Start page.


Generating SBOMs for Your Image with BuildKit

The latest release series of BuildKit, v0.11, introduces support for build-time attestations and SBOMs, allowing publishers to create images with records of how the image was built. This makes it easier for you to answer common questions, like which packages are in the image, where the image was built from, and whether you can reproduce the same results locally.

This new data helps you make informed decisions about the security of the images you consume — without needing to do all the manual work yourself.

In this blog post, we’ll discuss what attestations and SBOMs are, how to build images that contain SBOMs, and how to start analyzing the resulting data!


What are attestations?

An attestation is a declaration that a statement is true. With software, an attestation is a record that specifies a statement about a software artifact. For example, it could include who built it and when, what inputs it was built with, what outputs it produced, etc.

By writing these attestations, and distributing them alongside the artifacts themselves, you can see these details that might otherwise be tricky to find. To get this kind of information without attestations, you’d have to try and reverse-engineer how the image was built by trying to locate the source code and even attempting to reproduce the build yourself.

To provide this valuable information to the end-users of your images, BuildKit v0.11 lets you build these attestations as part of your normal build process. All it takes is adding a few options to your build step.

BuildKit supports attestations in the in-toto format (from the in-toto framework). Currently, the Dockerfile frontend produces two types of attestations that answer two different questions:

SBOM (Software Bill of Materials) – An SBOM contains a list of software components inside an image. This will include the names of various packages installed, their version numbers, and any other associated metadata. You can use this to see, at a glance, if an image contains a specific package or determine if an image is vulnerable to specific CVEs.

SLSA Provenance – The provenance of the image describes details of the build process, such as what materials (images, URLs, files, etc.) were consumed, what build parameters were set, as well as source maps that allow mapping the resulting image back to the Dockerfile that created it. You can use this to analyze how an image was built, determine whether the sources consumed appear legitimate, and even attempt to rebuild the image yourself.

Users can also define their own custom attestation types via a custom BuildKit frontend. In this post, we’ll focus on SBOMs and how to use them with Dockerfiles.

Getting the latest release

Building attestations into your images requires the latest releases of both Buildx and BuildKit – you can get the latest versions by updating Docker Desktop to the most recent version.

You can check your version number, and ensure it matches the buildx v0.10 release series:

$ docker buildx version
github.com/docker/buildx 0.10.0 …

To use the latest release of BuildKit, create a docker-container builder using buildx:

$ docker buildx create --use --name=buildkit-container --driver=docker-container

You can check that the new builder is configured correctly, and ensure it matches the buildkit v0.11 release series:

$ docker buildx inspect | grep -i buildkit
Buildkit: v0.11.1

If you’re using the docker/setup-buildx-action in GitHub Actions, then you’ll get this all automatically without needing to update.
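In GitHub Actions, the wiring might look like the following sketch; the repository name, tags, and secret names are illustrative, and the `sbom` input assumes a recent version of `docker/build-push-action`:

```yaml
# Sketch of a CI job that builds and pushes an image with an SBOM attached.
name: build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-buildx-action@v2
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v4
        with:
          push: true
          tags: myorg/myimage:latest
          sbom: true
```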

With that out of the way, you can move on to building an image containing SBOMs!

Adding SBOMs to your images

You’re now ready to generate an SBOM for your image!

Let’s start with the following Dockerfile to create an nginx web server:

# syntax=docker/dockerfile:1.5

FROM nginx:latest
COPY ./static /usr/share/nginx/html

You can build and push this image, along with its SBOM, in one step:

$ docker buildx build --sbom=true -t <myorg>/<myimage> --push .

That’s all you need! In your build output, you should spot a message about generating the SBOM:


=> [linux/amd64] generating sbom using docker.io/docker/buildkit-syft-scanner:stable-1 0.2s

BuildKit generates SBOMs using scanner plugins. By default, it uses buildkit-syft-scanner, a scanner built on top of Anchore’s Syft open-source project, to do the heavy lifting. If you like, you can use another scanner by specifying the generator= option. 
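Selecting a custom generator is done through the `--attest` flag; as a sketch, pinning the default scanner image explicitly looks like this (swap in your own generator image as needed):

```shell
# Select an SBOM generator image explicitly instead of relying on the default
docker buildx build \
  --attest type=sbom,generator=docker/buildkit-syft-scanner:stable-1 \
  -t <myorg>/<myimage> --push .
```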

Here’s how you view the generated SBOM using buildx imagetools:

$ docker buildx imagetools inspect <myorg>/<myimage> --format "{{ json .SBOM.SPDX }}"
{
"spdxVersion": "SPDX-2.3",
"dataLicense": "CC0-1.0",
"SPDXID": "SPDXRef-DOCUMENT",
"name": "/run/src/core/sbom",
"documentNamespace": "https://anchore.com/syft/dir/run/src/core/sbom-a589a536-b5fb-49e8-9120-6a12ce988b67",
"creationInfo": {
"licenseListVersion": "3.18",
"creators": [
"Organization: Anchore, Inc",
"Tool: syft-v0.65.0",
"Tool: buildkit-v0.11.0"
],
"created": "2023-01-05T16:13:17.47415867Z"
},

SBOMs also work with the local and tar exporters. When you export with these exporters, instead of attaching the attestations directly to the output image, the attestations are exported as separate files into the output filesystem:

$ docker buildx build --sbom=true -o ./image .
$ ls -lh ./image
-rw------- 1 user user 6.5M Jan 17 14:36 sbom.spdx.json

Viewing the SBOM in this case is as simple as cat-ing the result:

$ cat ./image/sbom.spdx.json | jq .predicate
{
"spdxVersion": "SPDX-2.3",
"dataLicense": "CC0-1.0",
"SPDXID": "SPDXRef-DOCUMENT",

Supplementing SBOMs

Generating SBOMs using a scanner is a good first start! But some packages won’t be correctly detected because they’ve been installed in a slightly unconventional way.

If that’s the case, you can still get this information into your SBOMs with a bit of manual interaction.

Let’s suppose you’ve installed foo v1.2.3 into your image by downloading it using curl:

RUN curl https://example.com/releases/foo-v1.2.3-amd64.tar.gz | tar xzf - && \
    mv foo /usr/local/bin/

Software installed this way likely won’t appear in your SBOM unless the SBOM generator you’re using has special support for this binary (for example, Syft has support for detecting certain known binaries).

You can manually generate an SBOM for this piece of software by writing an SPDX snippet to a location of your choice on the image filesystem using a Dockerfile heredoc:

COPY <<"EOT" /usr/local/share/sbom/foo.spdx.json
{
"spdxVersion": "SPDX-2.3",
"SPDXID": "SPDXRef-DOCUMENT",
"name": "foo-v1.2.3",

}
EOT

This SBOM should then be picked up by your SBOM generator and included in the final SBOM for the whole image. This behavior is included out-of-the-box in buildkit-syft-scanner, but may not be part of every generator’s toolkit.

Even more SBOMs!

While the above section is good for scanning a basic image, it might struggle to provide more detailed package and file information. BuildKit can help you scan additional components of your build, including intermediate stages and your build context using the BUILDKIT_SBOM_SCAN_STAGE and BUILDKIT_SBOM_SCAN_CONTEXT arguments respectively.

In the case of multi-stage builds, this allows you to track dependencies from previous stages, even though that software might not appear in your final image.

For example, for a demo C/C++ program, you might have the following Dockerfile:

# syntax=docker/dockerfile:1.5

FROM ubuntu:22.04 AS build
ARG BUILDKIT_SBOM_SCAN_STAGE=true
RUN apt-get update && apt-get install -y git build-essential
WORKDIR /src
RUN git clone https://example.com/myorg/myrepo.git .
RUN make build

FROM scratch
COPY –from=build /src/build/ /

If you just scanned the resulting image, it wouldn’t reveal that the build tools, like Git or GCC (included in the build-essential package), were ever used in the build process! By integrating SBOM scanning into your build using the BUILDKIT_SBOM_SCAN_STAGE build argument, you can get much richer information that would otherwise have been completely lost.
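Scanning the build context works the same way; since the context isn’t a build stage, you enable it from the command line rather than in the Dockerfile:

```shell
# Scan the build context in addition to the final image
docker buildx build --sbom=true \
  --build-arg BUILDKIT_SBOM_SCAN_CONTEXT=true .
```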

You can access these additionally generated SBOM documents in imagetools as well:

$ docker buildx imagetools inspect <myorg>/<myimage> --format "{{ range .SBOM.AdditionalSPDXs }}{{ json . }}{{ end }}"
{
"spdxVersion": "SPDX-2.3",
"SPDXID": "SPDXRef-DOCUMENT",

}
{
"spdxVersion": "SPDX-2.3",
"SPDXID": "SPDXRef-DOCUMENT",

}

For the local and tar exporters, these will appear as separate files in your output directory:

$ docker buildx build --sbom=true -o ./image .
$ ls -lh ./image
-rw------- 1 user user 4.3M Jan 17 14:40 sbom-build.spdx.json
-rw------- 1 user user  877 Jan 17 14:40 sbom.spdx.json

Analyzing images

Now that you’re publishing images containing SBOMs, it’s important to find a way to analyze them to take advantage of this additional data.

As mentioned above, you can extract the attached SBOM attestation using the imagetools subcommand:

$ docker buildx imagetools inspect <myorg>/<myimage> --format "{{ json .SBOM.SPDX }}"
{
"spdxVersion": "SPDX-2.3",
"dataLicense": "CC0-1.0",
"SPDXID": "SPDXRef-DOCUMENT",

If your target image is built for multiple architectures using the --platform flag, then you’ll need a slightly different syntax to extract the SBOM attestation:

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ json (index .SBOM "linux/amd64").SPDX }}'
{
"spdxVersion": "SPDX-2.3",
"dataLicense": "CC0-1.0",
"SPDXID": "SPDXRef-DOCUMENT",

Now suppose you want to list all of the packages, and their versions, inside an image. You can modify the value passed to the --format flag to be a Go template that lists the packages:

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ range .SBOM.SPDX.packages }}{{ println .name .versionInfo }}{{ end }}' | sort
adduser 3.118
apt 2.2.4
base-files 11.1+deb11u6
base-passwd 3.5.51
bash 5.1-2+deb11u1
bsdutils 1:2.36.1-8+deb11u1
ca-certificates 20210119
coreutils 8.32-4+b1
curl 7.74.0-1.3+deb11u3

Alternatively, you might want to get the version information for a piece of software that you know is installed:

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ range .SBOM.SPDX.packages }}{{ if eq .name "nginx" }}{{ println .versionInfo }}{{ end }}{{ end }}'
1.23.3-1~bullseye

You can even take the whole SBOM and use it to scan for CVEs using a tool that can use SBOMs to search for CVEs (like Anchore’s Grype):

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ json .SBOM.SPDX }}' | grype
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
apt 2.2.4 deb CVE-2011-3374 Negligible
bash 5.1-2+deb11u1 (won’t fix) deb CVE-2022-3715

These operations should complete super quickly! Because the SBOM was already generated at build, you’re just querying already-existing data from Docker Hub instead of needing to generate it from scratch every time.
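Because an exported SBOM is plain SPDX JSON, you can also process it offline with jq, without touching the registry at all. Here is a sketch against a stand-in file; in practice the file would come from `docker buildx build --sbom=true -o ./image .`:

```shell
# sample.spdx.json stands in for an exported ./image/sbom.spdx.json
cat > sample.spdx.json <<'EOF'
{
  "spdxVersion": "SPDX-2.3",
  "packages": [
    { "name": "nginx", "versionInfo": "1.23.3-1~bullseye" },
    { "name": "bash", "versionInfo": "5.1-2+deb11u1" }
  ]
}
EOF

# List package names and versions, mirroring the imagetools query above
jq -r '.packages[] | "\(.name) \(.versionInfo)"' sample.spdx.json | sort
```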

Going further

In this post, we’ve only covered the absolute basics to getting started with BuildKit and SBOMs — you can find out more about the things we’ve talked about on docs.docker.com:

Read more about build-time attestations

Learn about how to use buildx to create SBOMs

Implement your own SBOM scanner using the BuildKit SBOM protocol

Dive into how attestations are stored in the registry

And you can find out more about other features released in the latest BuildKit v0.11 release here.

5 Docker Desktop Alternatives

hackernoon.com – For Windows and macOS users, Docker Desktop has been the main way to use Docker containers for many years. But how about now?
Source: news.kubernauts.io