January Extensions: Deploy Kubernetes and Develop Cloud Apps Locally

It’s a new year, and we’ve got some new Docker Extensions to share with you! Docker Extensions add new functionality to Docker Desktop, extend its existing capabilities, and let you discover and integrate additional tools that you’re already using with Docker. Let’s take a look at our exciting new extensions from January.

And if you’d like to see everything available, check out our full Extensions Marketplace!

Deploy production-grade Kubernetes with no code

Are you new to Kubernetes, but need to deploy something to production? With the harpoon extension for Docker Desktop, you can use a no-code solution to deploy any software in seconds. It’s even great for seasoned pros, as it has all the features you need to deploy and configure your software successfully.

With the harpoon extension, you can:

Deploy your software to the cloud

Monitor your software in real time

Be notified if there is a problem

Manage and run cloud applications locally

Do you need configuration profiles, container logs, and more locally? With LocalStack for Docker Desktop, you can now easily manage your LocalStack instance. Using the LocalStack extension enables a highly efficient and fully local development and testing loop for you and your team.

With the LocalStack extension, you can:

Control LocalStack: Start, stop, and restart LocalStack from Docker Desktop. You can also see the current status of your LocalStack instance and navigate to the LocalStack Web Application.

Get LocalStack insights: You can see the log information of the LocalStack instance and all the available services and their status on the service page.

Manage LocalStack configurations: You can manage and use your profiles via configurations and create new configurations for your LocalStack instance.

Check out the latest Docker Extensions with Docker Desktop

Docker is always looking for ways to improve the developer experience. Check out these resources for more info on extensions:

Install Docker Desktop for Mac, Windows, or Linux to try extensions yourself.

Visit our Extensions Marketplace to see all of our extensions.

Learn about building your own extension with our Quick Start page.

Source: https://blog.docker.com/feed/

Generating SBOMs for Your Image with BuildKit

The latest release series of BuildKit, v0.11, introduces support for build-time attestations and SBOMs, allowing publishers to create images with records of how the image was built. This makes it easier for you to answer common questions, like which packages are in the image, where the image was built from, and whether you can reproduce the same results locally.

This new data helps you make informed decisions about the security of the images you consume — without needing to do all the manual work yourself.

In this blog post, we’ll discuss what attestations and SBOMs are, how to build images that contain SBOMs, and how to start analyzing the resulting data!

In this post: What are attestations? · Getting the latest release · Adding SBOMs to your images · Supplementing SBOMs · Even more SBOMs! · Analyzing images · Going further

What are attestations?

An attestation is a declaration that a statement is true. With software, an attestation is a record that specifies a statement about a software artifact. For example, it could include who built it and when, what inputs it was built with, what outputs it produced, etc.

By writing these attestations, and distributing them alongside the artifacts themselves, you can see details that might otherwise be tricky to find. Without attestations, you’d have to reverse-engineer how the image was built: locate the source code yourself and perhaps even attempt to reproduce the build.

To provide this valuable information to the end-users of your images, BuildKit v0.11 lets you build these attestations as part of your normal build process. All it takes is adding a few options to your build step.

BuildKit supports attestations in the in-toto format (from the in-toto framework). Currently, the Dockerfile frontend produces two types of attestations that answer two different questions:

SBOM (Software Bill of Materials) – An SBOM contains a list of software components inside an image. This will include the names of various packages installed, their version numbers, and any other associated metadata. You can use this to see, at a glance, if an image contains a specific package or determine if an image is vulnerable to specific CVEs.

SLSA Provenance – The provenance of the image describes details of the build process, such as what materials (like images, URLs, and files) were consumed, what build parameters were set, as well as source maps that allow mapping the resulting image back to the Dockerfile that created it. You can use this to analyze how an image was built, determine whether the sources consumed appear legitimate, and even attempt to rebuild the image yourself.
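Concretely, each attestation travels as an in-toto statement: a small JSON envelope whose predicateType field says what kind of claim the predicate carries (for SBOMs, an SPDX document). Here’s a pared-down sketch; the subject name and digest are placeholder values, not output from a real build:

```shell
# Sketch of the in-toto statement shape that wraps an SBOM attestation.
# The subject digest below is a placeholder, not a real image digest.
cat > /tmp/statement.json <<'EOF'
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "predicateType": "https://spdx.dev/Document",
  "subject": [
    {
      "name": "pkg:docker/myorg/myimage",
      "digest": { "sha256": "<image-digest>" }
    }
  ],
  "predicate": { "spdxVersion": "SPDX-2.3" }
}
EOF

# predicateType tells consumers how to interpret the predicate payload:
jq -r .predicateType /tmp/statement.json
```

Tools consuming attestations key off predicateType to decide whether the predicate is an SBOM, provenance, or something custom.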

Users can also define their own custom attestation types via a custom BuildKit frontend. In this post, we’ll focus on SBOMs and how to use them with Dockerfiles.

Getting the latest release

Building attestations into your images requires the latest releases of both Buildx and BuildKit – you can get the latest versions by updating Docker Desktop to the most recent version.

You can check your version number, and ensure it matches the buildx v0.10 release series:

$ docker buildx version
github.com/docker/buildx 0.10.0 …

To use the latest release of BuildKit, create a docker-container builder using buildx:

$ docker buildx create --use --name=buildkit-container --driver=docker-container

You can check that the new builder is configured correctly, and ensure it matches the buildkit v0.11 release series:

$ docker buildx inspect | grep -i buildkit
Buildkit: v0.11.1

If you’re using the docker/setup-buildx-action in GitHub Actions, then you’ll get this all automatically without needing to update.

With that out of the way, you can move on to building an image containing SBOMs!

Adding SBOMs to your images

You’re now ready to generate an SBOM for your image!

Let’s start with the following Dockerfile to create an nginx web server:

# syntax=docker/dockerfile:1.5

FROM nginx:latest
COPY ./static /usr/share/nginx/html

You can build and push this image, along with its SBOM, in one step:

$ docker buildx build --sbom=true -t <myorg>/<myimage> --push .

That’s all you need! In your build output, you should spot a message about generating the SBOM:

=> [linux/amd64] generating sbom using docker.io/docker/buildkit-syft-scanner:stable-1 0.2s

BuildKit generates SBOMs using scanner plugins. By default, it uses buildkit-syft-scanner, a scanner built on top of Anchore’s Syft open-source project, to do the heavy lifting. If you like, you can use another scanner by specifying the generator= option. 

Here’s how you view the generated SBOM using buildx imagetools:

$ docker buildx imagetools inspect <myorg>/<myimage> --format "{{ json .SBOM.SPDX }}"
{
  "spdxVersion": "SPDX-2.3",
  "dataLicense": "CC0-1.0",
  "name": "/run/src/core/sbom",
  "documentNamespace": "https://anchore.com/syft/dir/run/src/core/sbom-a589a536-b5fb-49e8-9120-6a12ce988b67",
  "creationInfo": {
    "licenseListVersion": "3.18",
    "creators": [
      "Organization: Anchore, Inc",
      "Tool: syft-v0.65.0",
      "Tool: buildkit-v0.11.0"
    ],
    "created": "2023-01-05T16:13:17.47415867Z"
  },
  ...
}

SBOMs also work with the local and tar exporters. When you export with these exporters, instead of attaching the attestations directly to the output image, the attestations are exported as separate files into the output filesystem:

$ docker buildx build --sbom=true -o ./image .
$ ls -lh ./image
-rw------- 1 user user 6.5M Jan 17 14:36 sbom.spdx.json

Viewing the SBOM in this case is as simple as cat-ing the result:

$ cat ./image/sbom.spdx.json | jq .predicate
{
  "spdxVersion": "SPDX-2.3",
  "dataLicense": "CC0-1.0",
  ...
}

Supplementing SBOMs

Generating SBOMs using a scanner is a good start! But some packages won’t be correctly detected because they’ve been installed in a slightly unconventional way.

If that’s the case, you can still get this information into your SBOMs with a bit of manual interaction.

Let’s suppose you’ve installed foo v1.2.3 into your image by downloading it using curl:

RUN curl https://example.com/releases/foo-v1.2.3-amd64.tar.gz | tar xzf - && \
    mv foo /usr/local/bin/

Software installed this way likely won’t appear in your SBOM unless the SBOM generator you’re using has special support for this binary (for example, Syft has support for detecting certain known binaries).

You can manually generate an SBOM for this piece of software by writing an SPDX snippet to a location of your choice on the image filesystem using a Dockerfile heredoc:

COPY <<"EOT" /usr/local/share/sbom/foo.spdx.json
{
  "spdxVersion": "SPDX-2.3",
  "name": "foo-v1.2.3",
  ...
}
EOT


This SBOM should then be picked up by your SBOM generator and included in the final SBOM for the whole image. This behavior is included out-of-the-box in buildkit-syft-scanner, but may not be part of every generator’s toolkit.

Even more SBOMs!

While the above section is good for scanning a basic image, it might struggle to provide more detailed package and file information. BuildKit can help you scan additional components of your build, including intermediate stages and your build context using the BUILDKIT_SBOM_SCAN_STAGE and BUILDKIT_SBOM_SCAN_CONTEXT arguments respectively.

In the case of multi-stage builds, this allows you to track dependencies from previous stages, even though that software might not appear in your final image.

For example, for a demo C/C++ program, you might have the following Dockerfile:

# syntax=docker/dockerfile:1.5

FROM ubuntu:22.04 AS build
RUN apt-get update && apt-get install -y git build-essential
RUN git clone https://example.com/myorg/myrepo.git .
RUN make build

FROM scratch
COPY –from=build /src/build/ /

If you just scanned the resulting image, it wouldn’t reveal that the build tools, like Git or GCC (included in the build-essential package), were ever used in the build process! By integrating SBOM scanning into your build using the BUILDKIT_SBOM_SCAN_STAGE build argument, you can get much richer information that would otherwise have been completely lost.

You can access these additionally generated SBOM documents in imagetools as well:

$ docker buildx imagetools inspect <myorg>/<myimage> --format "{{ range .SBOM.AdditionalSPDXs }}{{ json . }}{{ end }}"
{
  "spdxVersion": "SPDX-2.3",
  ...
}
{
  "spdxVersion": "SPDX-2.3",
  ...
}

For the local and tar exporters, these will appear as separate files in your output directory:

$ docker buildx build --sbom=true -o ./image .
$ ls -lh ./image
-rw------- 1 user user 4.3M Jan 17 14:40 sbom-build.spdx.json
-rw------- 1 user user  877 Jan 17 14:40 sbom.spdx.json

Analyzing images

Now that you’re publishing images containing SBOMs, it’s important to find a way to analyze them to take advantage of this additional data.

As mentioned above, you can extract the attached SBOM attestation using the imagetools subcommand:

$ docker buildx imagetools inspect <myorg>/<myimage> --format "{{ json .SBOM.SPDX }}"
{
  "spdxVersion": "SPDX-2.3",
  "dataLicense": "CC0-1.0",
  ...
}

If your target image is built for multiple architectures using the --platform flag, then you’ll need a slightly different syntax to extract the SBOM attestation:

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ json (index .SBOM "linux/amd64").SPDX }}'
{
  "spdxVersion": "SPDX-2.3",
  "dataLicense": "CC0-1.0",
  ...
}

Now suppose you want to list all of the packages, and their versions, inside an image. You can modify the value passed to the --format flag to be a Go template that lists the packages:

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ range .SBOM.SPDX.packages }}{{ println .name .versionInfo }}{{ end }}' | sort
adduser 3.118
apt 2.2.4
base-files 11.1+deb11u6
base-passwd 3.5.51
bash 5.1-2+deb11u1
bsdutils 1:2.36.1-8+deb11u1
ca-certificates 20210119
coreutils 8.32-4+b1
curl 7.74.0-1.3+deb11u3
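The same listing works offline against a file written by the local exporter. This sketch runs the equivalent jq query against a stub sbom.spdx.json with the same overall shape as the exported attestation; the package entries here are invented for illustration:

```shell
# Stand-in for ./image/sbom.spdx.json from the local exporter, trimmed to just
# the fields the query below relies on (package entries are invented).
cat > /tmp/sbom.spdx.json <<'EOF'
{
  "predicate": {
    "spdxVersion": "SPDX-2.3",
    "packages": [
      { "name": "curl", "versionInfo": "7.74.0-1.3+deb11u3" },
      { "name": "bash", "versionInfo": "5.1-2+deb11u1" }
    ]
  }
}
EOF

# Print "name version" pairs, mirroring the Go template used with imagetools:
jq -r '.predicate.packages[] | "\(.name) \(.versionInfo)"' /tmp/sbom.spdx.json | sort
```

Because the exported file is an in-toto statement, the SPDX document sits under .predicate, which is why the query starts there.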

Alternatively, you might want to get the version information for a piece of software that you know is installed:

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ range .SBOM.SPDX.packages }}{{ if eq .name "nginx" }}{{ println .versionInfo }}{{ end }}{{ end }}'

You can even take the whole SBOM and scan it for CVEs with a tool that accepts SBOMs as input (like Anchore’s Grype):

$ docker buildx imagetools inspect <myorg>/<myimage> --format '{{ json .SBOM.SPDX }}' | grype
apt 2.2.4 deb CVE-2011-3374 Negligible
bash 5.1-2+deb11u1 (won't fix) deb CVE-2022-3715

These operations should complete quickly! Because the SBOM was already generated at build time, you’re just querying existing data from Docker Hub instead of generating it from scratch every time.

Going further

In this post, we’ve only covered the absolute basics of getting started with BuildKit and SBOMs — you can find out more about everything we’ve talked about on docs.docker.com:

Read more about build-time attestations

Learn about how to use buildx to create SBOMs

Implement your own SBOM scanner using the BuildKit SBOM protocol

Dive into how attestations are stored in the registry

And you can find out more about other features released in the latest BuildKit v0.11 release here.
Source: https://blog.docker.com/feed/

5 Docker Desktop Alternatives

hackernoon.com – For Windows and macOS users, Docker Desktop has been the main way to use Docker containers for many years. But how about now? (Tweeted by @Vivek_H_Vadgama: https://twitter.com/Vivek_H_Vadgama/status/1608163852138455042)
Source: news.kubernauts.io

Highlights from the BuildKit v0.11 Release

BuildKit v0.11 is now available, along with Buildx v0.10 and v1.5 of the Dockerfile syntax. We’ve released new features, bug fixes, performance improvements, and improved documentation for all of the Docker Build tools. Let’s dive into what’s new! We’ll cover the highlights, but you can get the whole story in the full changelogs.

In this post: 1. SLSA Provenance · 2. Software Bill of Materials · 3. SOURCE_DATE_EPOCH · 4. OCI image layouts as named contexts · 5. Cloud cache backends · 6. OCI image annotations · 7. Build inspection · 8. Bake features

1. SLSA Provenance

BuildKit can now create SLSA Provenance attestations to trace a build back to its source and make it easier to understand how it was created. Images built with new versions of Buildx and BuildKit include metadata like links to source code, build timestamps, and the materials used during the build. To attach the new provenance, BuildKit now defaults to creating OCI-compliant images.

Although docker buildx will add a provenance attestation to all new images by default, you can also opt into more detail. These additional details include your Dockerfile source, source maps, and the intermediate representations used by BuildKit. You can enable all of these new provenance records using the new --provenance flag in Buildx:

$ docker buildx build --provenance=true -t <myorg>/<myimage> --push .

Or manually set the provenance generation mode to either min or max (read more about the different modes):

$ docker buildx build --provenance=mode=max -t <myorg>/<myimage> --push .

You can inspect the provenance of an image using the imagetools subcommand. For example, here’s what it looks like on the moby/buildkit image itself:

$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ json .Provenance }}'
{
  "linux/amd64": {
    "SLSA": {
      "buildConfig": {
        ...

You can use this provenance to find key information about the build environment, such as the git repository it was built from:

$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ json (index .Provenance "linux/amd64").SLSA.invocation.configSource }}'
{
  "digest": {
    "sha1": "830288a71f447b46ad44ad5f7bd45148ec450d44"
  },
  "entryPoint": "Dockerfile",
  "uri": "https://github.com/moby/buildkit.git#refs/tags/v0.11.0"
}

Or even the CI job that built it in GitHub actions:

$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ (index .Provenance "linux/amd64").SLSA.builder.id }}'


Read the documentation to learn more about SLSA Provenance attestations or to explore BuildKit’s SLSA fields.

2. Software Bill of Materials

While provenance attestations record how a build was completed, Software Bills of Materials (SBOMs) record what components were used. This is similar to tools like docker sbom, but instead of requiring you to perform your own scans, the author of the image can build the results into the image.

You can enable built-in SBOMs with the new --sbom flag in Buildx:

$ docker buildx build --sbom=true -t <myorg>/<myimage> --push .

By default, BuildKit uses docker/buildkit-syft-scanner (powered by Anchore’s Syft project) to build an SBOM from the resulting image. But any scanner that follows the BuildKit SBOM scanning protocol can be used here:

$ docker buildx build --sbom=generator=<custom-scanner> -t <myorg>/<myimage> --push .

Similar to SLSA provenance, you can use imagetools to query SBOMs attached to images. For example, if you list all of the discovered dependencies used in moby/buildkit, you get this:

$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ range (index .SBOM "linux/amd64").SPDX.packages }}{{ println .name }}{{ end }}'

Read the SBOM attestations documentation to learn more.


3. SOURCE_DATE_EPOCH

Getting reproducible builds out of Dockerfiles has historically been quite tricky — a fully reproducible build requires bit-for-bit accuracy, producing the exact same result each time. Even builds that are otherwise deterministic get different timestamps between runs.

The new SOURCE_DATE_EPOCH build argument helps resolve this, following the standardized environment variable from the Reproducible Builds project. If the build argument is set or detected in the environment by Buildx, then BuildKit will set timestamps in the image config and layers to be the specified Unix timestamp. This helps you get perfect bit-for-bit reproducibility in your builds.

SOURCE_DATE_EPOCH is automatically detected by Buildx from the environment. To force all the timestamps in the image to the Unix epoch:

$ SOURCE_DATE_EPOCH=0 docker buildx build -t <myorg>/<myimage> .

Alternatively, to set it to the timestamp of the most recent commit:

$ SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct) docker buildx build -t <myorg>/<myimage> .
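Since SOURCE_DATE_EPOCH is just a Unix timestamp, you can sanity-check what a given value means with GNU date (on macOS, use date -u -r "$SOURCE_DATE_EPOCH" instead of the -d form):

```shell
# 1672531200 is midnight UTC on 2023-01-01; any timestamp works the same way.
SOURCE_DATE_EPOCH=1672531200
date -u -d "@${SOURCE_DATE_EPOCH}" +%Y-%m-%dT%H:%M:%SZ
# prints 2023-01-01T00:00:00Z
```

This is the same value BuildKit stamps into the image config and layer metadata when the variable is set.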

Read the documentation to find out more about how BuildKit handles SOURCE_DATE_EPOCH. 

4. OCI image layouts as named contexts

BuildKit has been able to export OCI image layouts for a while now. As of v0.11, BuildKit can import those results again using named contexts. This makes it easier to build contexts entirely locally — without needing to push intermediate results to a registry.

For example, suppose you want to build your own custom intermediate image based on Alpine that contains some development tools:

$ docker buildx build . -f intermediate.Dockerfile --output type=oci,dest=./intermediate,tar=false

This builds the contents of intermediate.Dockerfile and exports it into an OCI image layout into the intermediate/ directory (using the new tar=false option for OCI exports). To use this intermediate result in a Dockerfile, refer to it using any name you like in the FROM statement in your main Dockerfile:

FROM base
RUN … # use the development tools in the intermediate image

You can then connect this Dockerfile to your OCI layout using the new oci-layout:// URI scheme for the --build-context flag:

$ docker buildx build . -t <myorg>/<myimage> --build-context base=oci-layout://intermediate

Instead of resolving the base image from Docker Hub, BuildKit reads it from oci-layout://intermediate in the current directory, so you don’t need to push the intermediate image to a remote registry to be able to use it.

Refer to the documentation to find out more about using oci-layout:// with the --build-context flag.

5. Cloud cache backends

To get good build performance when building in ephemeral environments, such as CI pipelines, you need to store the cache in a remote backend. The newest release of BuildKit supports two new storage backends: Amazon S3 and Azure Blob Storage.

When you build images, you can provide the details of your S3 bucket or Azure Blob store to automatically store your build cache to pull into future builds. This build cache means that even though your CI or local runners might be destroyed and recreated, you can still access your remote cache to get quick builds when nothing has changed.

To use the new backends, you can specify them using the --cache-to and --cache-from flags:

$ docker buildx build --push -t <user>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>[,parameters...] \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image> .

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=azblob,name=<cache-image>[,parameters...] \
  --cache-from type=azblob,name=<cache-image>[,parameters...] .

You also don’t have to choose between one cache backend or the other. BuildKit v0.11 supports multiple cache exports at a time so you can use as many as you’d like.

Find more information in the Amazon S3 cache and Azure Blob Storage cache backend documentation.

6. OCI Image annotations

OCI image annotations allow attaching metadata to container images at the manifest level. They’re an alternative to labels that are more generic, and they can be more easily attached to multi-platform images.

All BuildKit image exporters now support setting annotations. To set the annotations of your choice, use the --output flag:

$ docker buildx build … \
  --output "type=image,name=foo,annotation.org.opencontainers.image.title=Foo"

You can set annotations at any level of the output, for example, on the image index:

$ docker buildx build … \
  --output "type=image,name=foo,annotation-index.org.opencontainers.image.title=Foo"

Or even set different annotations for each platform:

$ docker buildx build … \
  --output "type=image,name=foo,annotation[linux/amd64].org.opencontainers.image.title=Foo,annotation[linux/arm64].org.opencontainers.image.title=Bar"

You can find out more about creating OCI annotations on BuildKit images in the documentation.

7. Build inspection with --print

If you’re starting in a codebase with existing Dockerfiles, understanding how to use them can be tricky. Buildx supports the new --print flag to print details about a build. You can use this flag to get quick, easy information about required build arguments and secrets, and the targets you can build.

For example, here’s how you get an outline of BuildKit’s Dockerfile:

$ BUILDX_EXPERIMENTAL=1 docker buildx build --print=outline https://github.com/moby/buildkit.git
TARGET:      buildkit
DESCRIPTION: builds the buildkit container image
...
BUILDKITD_TAGS  defines additional Go build tags for compiling buildkitd
...

We can also list all the different targets to build:

$ BUILDX_EXPERIMENTAL=1 docker buildx build --print=targets https://github.com/moby/buildkit.git

Any frontend that implements the BuildKit subrequests interface can be used with the buildx --print flag. Frontends can even define their own print functions, and aren’t limited to outline or targets.

The --print feature is still experimental, so the interface may change, and we may add new functionality over time. If you have feedback, please open an issue or discussion on the docker/buildx GitHub repository; we’d love to hear your thoughts!

8. Bake features

The Bake file format for orchestrating builds has also been improved.

Bake now supports more powerful variable interpolation, allowing you to use fields from the same or other blocks. This can reduce duplication and make your bake files easier to read:

target "foo" {
dockerfile = target.foo.name + ".Dockerfile"
tags = [target.foo.name]

Bake also supports null values for build arguments and allows labels to use the defaults set in your Dockerfile so your bake definition doesn’t override those:

variable "GO_VERSION" {
default = null
target "default" {
args = {

Read the Bake documentation to learn more. 

More improvements and bug fixes 

In this post, we’ve only scratched the surface of the new features in the latest release. Along with all the above features, the latest releases include quality-of-life improvements and bug fixes. Read the full changelogs to learn more:

BuildKit v0.11 changelog

Buildx v0.10 changelog

Dockerfile v1.5 changelog

We welcome bug reports and contributions, so if you find an issue in the releases, let us know by opening a GitHub issue or pull request, or get in contact in the #buildkit channel in the Docker Community Slack.
Source: https://blog.docker.com/feed/

Develop Your Cloud App Locally with the LocalStack Extension

Local deployment is a great way to improve your development speed, lower your cloud costs, and develop for the cloud when access is restricted due to regulations. But it can also mean one more tool to manage when you’re developing an application.

With the LocalStack Docker Extension, you get a fully functional local cloud stack integrated directly into Docker Desktop, so it’s easy to develop and test cloud-native applications in one place.

Let’s take a look at local deployment and how to use the LocalStack Docker Extension.

Why run cloud applications locally?

By running your cloud app locally, you have complete control over your environment. That control makes it easier to reproduce results consistently and test new features. This gives you faster deploy-test-redeploy cycles and makes it easier to debug and replicate bugs. And since you’re not using cloud resources, you can create and tear down resources at will without incurring cloud costs.

Local cloud development also allows you to work in regulated environments where access to the cloud is restricted. By running the app on your own machine, you can still work on projects without being constrained by external restrictions.

How LocalStack works

LocalStack is a cloud service emulator that provides a fully functional local cloud stack for developing and testing AWS cloud and serverless applications. With 45K+ GitHub stars and 450+ contributors, LocalStack is backed by a large, active open-source community with 100,000+ active users worldwide.

LocalStack acts as a local “mini-cloud” operating system with multiple components, such as process management, file system abstraction, event processing, schedulers, and more. These LocalStack components run in a Docker container and expose a set of external network ports for integrations, SDKs, or CLI interfaces to connect to LocalStack APIs.

The LocalStack architecture is designed to be lightweight and cross-platform compatible to make it easy to use a local cloud stack.

With LocalStack, you can simulate the functionality of many AWS cloud services, like Lambda and S3, without having to connect to the actual cloud environment. You can even apply your complex CDK applications or Terraform configurations and emulate everything locally.

The official LocalStack Docker image has been downloaded 100+ million times and provides a multi-arch build that’s compatible with AMD/x86 and ARM-based CPU architectures. LocalStack supports over 80 AWS APIs, including compute (Lambda, ECS), databases (RDS, DynamoDB), messaging (SQS, MSK), and other sophisticated services (Glue, Athena). It offers advanced collaboration features and integrations with Infrastructure-as-Code tooling, continuous integration (CI) systems, and much more, enabling an efficient development and testing loop for developers.

Why run LocalStack as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With LocalStack as a Docker Extension, you now have an easier, faster way to run LocalStack.

The extension creates a running LocalStack instance. This allows you to easily configure LocalStack to fit the needs of a local cloud sandbox for development, testing and experimentation. Currently, the LocalStack extension for Docker Desktop supports the following features:

Control LocalStack: Start, stop, and restart LocalStack from Docker Desktop. You can also see the current status of your LocalStack instance and navigate to the LocalStack Web Application.

LocalStack insights: You can see the log information of the LocalStack instance and all the available services and their status on the service page.

LocalStack configurations: You can manage and use your profiles via configurations and create new configurations for your LocalStack instance.

How to use the LocalStack Docker Extension 

In this section, we’ll emulate some simple AWS commands by running LocalStack through Docker Desktop. For this tutorial, you’ll need Docker Desktop (v4.8+) and the AWS CLI installed.

Step 1: Enable Docker Extensions

You’ll need to enable Docker Extensions under the Preferences tab in Docker Desktop.

Step 2: Install the LocalStack extension

The LocalStack extension is available on the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for LocalStack in the Extensions Marketplace, then select Install.

Alternatively, you can install the LocalStack Extension for Docker Desktop by pulling our public Docker image from Docker Hub:

docker extension install localstack/localstack-docker-desktop:0.3.1

Step 3: Initialize LocalStack

Once the extension is installed, you’re ready to use LocalStack! When you open the extension for the first time, you’ll be prompted to select where LocalStack will be mounted. Open the drop-down and choose the username. You can also change this setting later by navigating to the Configurations tab and selecting the mount point.

Use the Start button to get started using LocalStack. If LocalStack’s Docker image isn’t present, the extension will pull it automatically (which may take some time).

Step 4: Run basic AWS commands

To demonstrate the functionality of LocalStack, you can mock AWS commands against the local infrastructure using awslocal, our wrapper around the AWS CLI. You can install it using pip:

pip install awscli-local

After it’s installed, you’ll see all the available services displayed when the LocalStack Docker image starts up.

You can now run some basic AWS commands to check if the extension is working correctly. Try these commands to create a hello-world file on LocalStack’s S3, fully emulated locally:

awslocal s3 mb s3://test
echo "hello world" > /tmp/hello-world
awslocal s3 cp /tmp/hello-world s3://test/hello-world
awslocal s3 ls s3://test/
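If you’d rather use the plain AWS CLI than awslocal, recent AWS CLI versions let a profile pin an endpoint URL. This fragment assumes LocalStack’s default edge port (4566) and a profile name of our choosing:

```ini
# ~/.aws/config — hypothetical profile pointing the AWS CLI at LocalStack
[profile localstack]
region = us-east-1
output = json
endpoint_url = http://localhost:4566
```

With this in place, `aws --profile localstack s3 ls` talks to the local instance instead of AWS.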

You should see a hello-world file in your local S3 bucket. You can now navigate to the Docker Desktop to see that S3 is running while the rest of the services are still marked available.

Navigate to the logs and you’ll see the API requests being made with 200 status codes. If you’re running LocalStack to emulate a local AWS infrastructure, you can check the logs to see if a particular API request has gone wrong and further debug it through the logs.

Since the resources are ephemeral, you can stop LocalStack anytime to start fresh. And unlike on AWS, you can spin resources up or down at will without worrying about lingering resources incurring costs.

Step 5: Use configuration profiles to quickly spin up different environments

Using LocalStack’s Docker Extension, you can create a variety of pre-made configuration profiles with specific LocalStack configuration variables or API keys. When you select a configuration profile before starting the container, these variables are passed directly to the running LocalStack container.

This makes it easy for you to change the behavior of LocalStack so you can quickly spin up local cloud environments already configured to your needs.

What will you build with LocalStack?

The LocalStack Docker Extension makes it easy to control the LocalStack container via a user interface. By integrating directly with Docker Desktop, we hope to make your development process easier and faster.

And even more is on the way! In upcoming iterations, we’ll further develop the extension to support more AWS APIs, integrate with the LocalStack Web Application, and add tooling like Cloud Pods, LocalStack’s state management and team collaboration feature.

Please let us know what you think! LocalStack is an open-source, community-focused project. If you’d like to contribute, you can follow our contributing documentation to set up the project on your local machine and use the developer tools to build new features or fix bugs. You can also create new issues on the LocalStack Docker Extension issue tracker, or propose new features to LocalStack through our LocalStack Discuss forum.
Source: https://blog.docker.com/feed/

Docker Desktop 4.16: Better Performance and Docker Extensions GA

Docker Desktop 4.16 is our first release of the new year and we’re excited to kick off 2023 with a bang. In this release, Docker Extensions moved from beta to GA. The Docker Extensions feature connects the Docker toolchain to your application development and deployment workflows. To make using Docker Extensions easier, we added the ability to search, install, and uninstall extensions right from the search bar. Also, we’re excited to announce that we’ve improved the performance of our image analysis — it’s 4x faster and requires 76% less memory (YAY!).

Let’s check out some highlights from Docker Desktop 4.16.

Docker Extensions now GA

In May 2022, Docker introduced Docker Extensions as a way to connect the Docker toolchain to application development and deployment workflows. And with Docker Desktop 4.16, Docker Extensions and the Docker Extensions SDK are now generally available on all platforms.

The Docker Extensions feature allows you to augment Docker Desktop with debugging, testing, security, networking, and many more functionalities. You can even build custom add-ons using the Extensions SDK.
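Extensions can also be managed from the command line with Docker Desktop’s `docker extension` commands. A quick sketch (the extension image name below is an assumption for illustration; copy the exact name from the Marketplace listing):

```shell
# List the extensions currently installed in Docker Desktop
docker extension ls

# Install an extension by its image name (example name is an assumption)
docker extension install localstack/localstack-docker-desktop:latest

# Remove it again
docker extension rm localstack/localstack-docker-desktop
```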

Since its launch, the Extensions Marketplace has expanded from 15 initial extensions to over 30 — with more on the way. 

To help you get the most out of extensions, we’ve also:

Improved discoverability with search, categories, listing the number of installs, and more.

Added the Extensions Marketplace on Docker Hub so you can search extensions and images.

Created a Build section in Docker Desktop to make it simpler for you to create your own.

Made sharing new extensions easier with a Share button, so you can share your extension with a URL.

And more!

Over the coming months, you can expect new functionality for Docker Extensions and the Docker Extensions SDK.

Performance improvements to image analysis

Now you can analyze an image for vulnerabilities up to 4x faster — and use up to 76% less memory while you do it. Rolling out as part of Docker Desktop 4.16, these performance improvements can make a big difference when you’re dealing with larger (5GB+) images.

To perform an analysis, select any image in the Images tab. You’ll automatically kick off an analysis so you can learn about vulnerabilities in your base images and dependencies.

Thanks to everyone who reached out and worked with us to make these performance improvements. Please keep the feedback coming!

Quick search now GA

In Docker Desktop 4.15, we launched an experimental quick search that brings the Hub experience into your local workflow so you can find, pull, and run any public or private remote images on Hub. We had a great response from the community, and quick search is now GA.

Launch Docker Desktop 4.16, and you’ll find lots of great features to jump right into your work without leaving Docker Desktop.

Search for local containers or Compose apps

Now you can quickly find containers or Compose apps on your local system. And once you find them, you can perform quick actions right from the search, including starting, stopping, deleting, viewing logs, or opening an interactive terminal session with a running container. You can even get a quick overview of the environment variables.

Search for public Docker Hub images, local images, and images from remote repositories

Search through all images at your disposal, including Docker Hub and remote repositories, then filter by type to quickly narrow down the results. Depending on the category, you’ll have a variety of different quick actions to choose from:

Docker Hub public images: Pull the image by tag, run it (which also pulls the image as the first step), view documentation, or go to Docker Hub for more details. 

Local images: Run a new container using the image and get an overview of which containers use it.

Remote images: Pull by tag or get quick info, like the last updated date, size, or vulnerabilities. 
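The same quick actions map onto familiar CLI commands if you prefer the terminal. For example, for a public Docker Hub image:

```shell
# Pull a public Docker Hub image by tag, then run it in the background
docker pull nginx:alpine
docker run -d --name web -p 8080:80 nginx:alpine

# See which local containers were created from that image
docker ps -a --filter ancestor=nginx:alpine
```

Quick search simply bundles these steps into one-click actions from the search bar.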

Find new extensions easier with quick search

Now when you search in Docker Desktop, Docker Extensions will be included in the search results. From here, you can learn more about the extension and install it with a single click.

Check out the tips in the footer of the search module for more shortcuts and ways to use it. As always, we’d appreciate your feedback on the experience and suggestions for improvement.

Let us know what you think

Thanks for using Docker Desktop! Learn more about what’s in store with our public roadmap on GitHub, and let us know what other features you’d like to see.

Check out the release notes for a full breakdown of what’s new in Docker Desktop 4.16.