What Rust Brings to Frontend and Web Development
thenewstack.io – Rust offers a safer, more secure language that’s robust enough for application development. Here’s what it can do for the frontend and web.
Source: news.kubernauts.io
BuildKit v0.11 is now available, along with Buildx v0.10 and v1.5 of the Dockerfile syntax. We’ve released new features, bug fixes, performance improvements, and improved documentation for all of the Docker Build tools. Let’s dive into what’s new! We’ll cover the highlights, but you can get the whole story in the full changelogs.
In this post:
1. SLSA Provenance
2. Software Bill of Materials
3. SOURCE_DATE_EPOCH
4. OCI image layouts as named contexts
5. Cloud cache backends
6. OCI image annotations
7. Build inspection
8. Bake features
1. SLSA Provenance
BuildKit can now create SLSA Provenance attestations to trace a build back to its source and make it easier to understand how it was created. Images built with new versions of Buildx and BuildKit include metadata like links to source code, build timestamps, and the materials used during the build. To attach the new provenance, BuildKit now defaults to creating OCI-compliant images.
Although docker buildx will add a provenance attestation to all new images by default, you can also opt into more detail. These additional details include your Dockerfile source, source maps, and the intermediate representations used by BuildKit. You can enable all of these new provenance records using the new --provenance flag in Buildx:
$ docker buildx build --provenance=true -t <myorg>/<myimage> --push .
Or manually set the provenance generation mode to either min or max (read more about the different modes):
$ docker buildx build --provenance=mode=max -t <myorg>/<myimage> --push .
You can inspect the provenance of an image using the imagetools subcommand. For example, here’s what it looks like on the moby/buildkit image itself:
$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ json .Provenance }}'
{
"linux/amd64": {
"SLSA": {
"buildConfig": {
You can use this provenance to find key information about the build environment, such as the git repository it was built from:
$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ json (index .Provenance "linux/amd64").SLSA.invocation.configSource }}'
{
"digest": {
"sha1": "830288a71f447b46ad44ad5f7bd45148ec450d44"
},
"entryPoint": "Dockerfile",
"uri": "https://github.com/moby/buildkit.git#refs/tags/v0.11.0"
}
Or even the CI job that built it in GitHub Actions:
$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ (index .Provenance "linux/amd64").SLSA.builder.id }}'
https://github.com/moby/buildkit/actions/runs/3878249653
Read the documentation to learn more about SLSA Provenance attestations or to explore BuildKit’s SLSA fields.
2. Software Bill of Materials
While provenance attestations help to record how a build was completed, Software Bills of Materials (SBOMs) record which components were used. This is similar to tools like docker sbom but, instead of requiring you to perform your own scans, the author of the image can build the results into the image.
You can enable built-in SBOMs with the new --sbom flag in Buildx:
$ docker buildx build --sbom=true -t <myorg>/<myimage> --push .
By default, BuildKit uses docker/buildkit-syft-scanner (powered by Anchore’s Syft project) to build an SBOM from the resulting image. But any scanner that follows the BuildKit SBOM scanning protocol can be used here:
$ docker buildx build --sbom=generator=<custom-scanner> -t <myorg>/<myimage> --push .
Similar to SLSA provenance, you can use imagetools to query SBOMs attached to images. For example, if you list all of the discovered dependencies used in moby/buildkit, you get this:
$ docker buildx imagetools inspect moby/buildkit:latest --format '{{ range (index .SBOM "linux/amd64").SPDX.packages }}{{ println .name }}{{ end }}'
github.com/Azure/azure-sdk-for-go/sdk/azcore
github.com/Azure/azure-sdk-for-go/sdk/azidentity
github.com/Azure/azure-sdk-for-go/sdk/internal
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
…
Read the SBOM attestations documentation to learn more.
3. SOURCE_DATE_EPOCH
Getting reproducible builds out of Dockerfiles has historically been quite tricky — a full reproducible build requires bit-for-bit accuracy that produces the exact same result each time. Even builds that are fully deterministic would get different timestamps between runs.
The new SOURCE_DATE_EPOCH build argument helps resolve this, following the standardized environment variable from the Reproducible Builds project. If the build argument is set or detected in the environment by Buildx, then BuildKit will set timestamps in the image config and layers to be the specified Unix timestamp. This helps you get perfect bit-for-bit reproducibility in your builds.
SOURCE_DATE_EPOCH is automatically detected by Buildx from the environment. To force all the timestamps in the image to the Unix epoch:
$ SOURCE_DATE_EPOCH=0 docker buildx build -t <myorg>/<myimage> .
Alternatively, to set it to the timestamp of the most recent commit:
$ SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct) docker buildx build -t <myorg>/<myimage> .
Read the documentation to find out more about how BuildKit handles SOURCE_DATE_EPOCH.
4. OCI image layouts as named contexts
BuildKit has been able to export OCI image layouts for a while now. As of v0.11, BuildKit can import those results again using named contexts. This makes it easier to build contexts entirely locally — without needing to push intermediate results to a registry.
For example, suppose you want to build your own custom intermediate image based on Alpine that contains some development tools:
$ docker buildx build . -f intermediate.Dockerfile --output type=oci,dest=./intermediate,tar=false
This builds the contents of intermediate.Dockerfile and exports it as an OCI image layout into the intermediate/ directory (using the new tar=false option for OCI exports). To use this intermediate result in a Dockerfile, refer to it using any name you like in the FROM statement of your main Dockerfile:
FROM base
RUN … # use the development tools in the intermediate image
You can then connect this Dockerfile to your OCI layout using the new oci-layout:// URI scheme for the --build-context flag:
$ docker buildx build . -t <myorg>/<myimage> --build-context base=oci-layout://intermediate
Instead of resolving the base image on Docker Hub, BuildKit will read it from oci-layout://intermediate in the current directory, so you don’t need to push the intermediate image to a remote registry to be able to use it.
Refer to the documentation to find out more about using oci-layout:// with the --build-context flag.
5. Cloud cache backends
To get good build performance when building in ephemeral environments, such as CI pipelines, you need to store the cache in a remote backend. The newest release of BuildKit supports two new storage backends: Amazon S3 and Azure Blob Storage.
When you build images, you can provide the details of your S3 bucket or Azure Blob store to automatically store your build cache to pull into future builds. This build cache means that even though your CI or local runners might be destroyed and recreated, you can still access your remote cache to get quick builds when nothing has changed.
To use the new backends, you can specify them using the --cache-to and --cache-from flags:
$ docker buildx build --push -t <user>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>[,parameters…] \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image> .
$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=azblob,name=<cache-image>[,parameters…] \
  --cache-from type=azblob,name=<cache-image>[,parameters…] .
You also don’t have to choose between one cache backend or the other. BuildKit v0.11 supports multiple cache exports at a time so you can use as many as you’d like.
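As a sketch of that, you could export the same build’s cache to both backends at once by repeating the --cache-to flag (the registry name, region, bucket, and cache-image names below are placeholders):

```shell
# Hypothetical sketch: push the image and export its build cache to both
# Amazon S3 and Azure Blob Storage in a single build. All bracketed names
# are placeholders to substitute with your own values.
docker buildx build --push -t <user>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image> \
  --cache-to type=azblob,name=<cache-image> \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image> .
```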
Find more information in the Amazon S3 cache backend and Azure Blob Storage cache backend documentation.
6. OCI Image annotations
OCI image annotations allow attaching metadata to container images at the manifest level. They’re an alternative to labels that are more generic, and they can be more easily attached to multi-platform images.
All BuildKit image exporters now support setting annotations. To set the annotations of your choice, use the --output flag:
$ docker buildx build …
--output "type=image,name=foo,annotation.org.opencontainers.image.title=Foo"
You can set annotations at any level of the output, for example, on the image index:
$ docker buildx build …
--output "type=image,name=foo,annotation-index.org.opencontainers.image.title=Foo"
Or even set different annotations for each platform:
$ docker buildx build …
--output "type=image,name=foo,annotation[linux/amd64].org.opencontainers.image.title=Foo,annotation[linux/arm64].org.opencontainers.image.title=Bar"
You can find out more about creating OCI annotations on BuildKit images in the documentation.
7. Build inspection with --print
If you’re starting out in a codebase full of Dockerfiles, understanding how to use them can be tricky. Buildx supports the new --print flag to print details about a build. This flag gives you quick and easy information about the required build arguments and secrets, and the targets you can build.
For example, here’s how you get an outline of BuildKit’s Dockerfile:
$ BUILDX_EXPERIMENTAL=1 docker buildx build --print=outline https://github.com/moby/buildkit.git
TARGET: buildkit
DESCRIPTION: builds the buildkit container image
BUILD ARG                  VALUE    DESCRIPTION
RUNC_VERSION               v1.1.4
ALPINE_VERSION             3.17
BUILDKITD_TAGS                      defines additional Go build tags for compiling buildkitd
BUILDKIT_SBOM_SCAN_STAGE   true
We can also list all the different targets to build:
$ BUILDX_EXPERIMENTAL=1 docker buildx build --print=targets https://github.com/moby/buildkit.git
TARGET DESCRIPTION
alpine-amd64
alpine-arm
alpine-arm64
alpine-s390x
Any frontend that implements the BuildKit subrequests interface can be used with the buildx --print flag. Frontends can even define their own print functions, and aren’t limited to outline or targets.
The --print feature is still experimental, so the interface may change, and we may add new functionality over time. If you have feedback, please open an issue or discussion on the docker/buildx GitHub repository; we’d love to hear your thoughts!
8. Bake features
The Bake file format for orchestrating builds has also been improved.
Bake now supports more powerful variable interpolation, allowing you to use fields from the same or other blocks. This can reduce duplication and make your bake files easier to read:
target "foo" {
dockerfile = target.foo.name + ".Dockerfile"
tags = [target.foo.name]
}
Bake also supports null values for build arguments and allows labels to use the defaults set in your Dockerfile so your bake definition doesn’t override those:
variable "GO_VERSION" {
default = null
}
target "default" {
args = {
GO_VERSION = GO_VERSION
}
}
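A definition like the one above would typically be run with the bake subcommand; this sketch assumes the file is saved under Bake’s default name, docker-bake.hcl:

```shell
# Build the "default" target; Bake picks up GO_VERSION from the environment,
# overriding the null default declared in docker-bake.hcl.
GO_VERSION=1.19 docker buildx bake default
```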
Read the Bake documentation to learn more.
More improvements and bug fixes
In this post, we’ve only scratched the surface of the new features in the latest release. Along with all the above features, the latest releases include quality-of-life improvements and bug fixes. Read the full changelogs to learn more:
BuildKit v0.11 changelog
Buildx v0.10 changelog
Dockerfile v1.5 changelog
We welcome bug reports and contributions, so if you find an issue in the releases, let us know by opening a GitHub issue or pull request, or get in contact in the #buildkit channel in the Docker Community Slack.
Source: https://blog.docker.com/feed/
Local deployment is a great way to improve your development speed, lower your cloud costs, and develop for the cloud when access is restricted due to regulations. But it can also mean one more tool to manage when you’re developing an application.
With the LocalStack Docker Extension, you get a fully functional local cloud stack integrated directly into Docker Desktop, so it’s easy to develop and test cloud-native applications in one place.
Let’s take a look at local deployment and how to use the LocalStack Docker Extension.
Why run cloud applications locally?
By running your cloud app locally, you have complete control over your environment. That control makes it easier to reproduce results consistently and test new features. This gives you faster deploy-test-redeploy cycles and makes it easier to debug and replicate bugs. And since you’re not using cloud resources, you can create and tear down resources at will without incurring cloud costs.
Local cloud development also allows you to work in regulated environments where access to the cloud is restricted. By running the app on your own machine, you can still work on projects without being constrained by external restrictions.
How LocalStack works
LocalStack is a cloud service emulator that provides a fully functional local cloud stack for developing and testing AWS cloud and serverless applications. With 45K+ GitHub stars and 450+ contributors, LocalStack is backed by a large, active open-source community with 100,000+ active users worldwide.
LocalStack acts as a local “mini-cloud” operating system with multiple components, such as process management, file system abstraction, event processing, schedulers, and more. These LocalStack components run in a Docker container and expose a set of external network ports for integrations, SDKs, or CLI interfaces to connect to LocalStack APIs.
The LocalStack architecture is designed to be lightweight and cross-platform compatible to make it easy to use a local cloud stack.
With LocalStack, you can simulate the functionality of many AWS cloud services, like Lambda and S3, without having to connect to the actual cloud environment. You can even apply your complex CDK applications or Terraform configurations and emulate everything locally.
The official LocalStack Docker image has been downloaded 100+ million times and provides a multi-arch build that’s compatible with AMD/x86 and ARM-based CPU architectures. LocalStack supports over 80 AWS APIs, including compute (Lambda, ECS), databases (RDS, DynamoDB), messaging (SQS, MSK), and other sophisticated services (Glue, Athena). It offers advanced collaboration features and integrations with Infrastructure-as-Code tooling, continuous integration (CI) systems, and much more, enabling an efficient development and testing loop for developers.
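Because everything runs in a single container exposing external ports, you can also start LocalStack standalone with plain Docker; a minimal sketch (the edge port 4566 and port range are the documented defaults, but check the LocalStack docs for your version):

```shell
# Run LocalStack standalone, exposing the main edge port and the
# range used for exposed service endpoints.
docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack
```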
Why run LocalStack as a Docker Extension?
Docker Extensions help you build and integrate software applications into your daily workflows. With LocalStack as a Docker Extension, you now have an easier, faster way to run LocalStack.
The extension creates a running LocalStack instance. This allows you to easily configure LocalStack to fit the needs of a local cloud sandbox for development, testing and experimentation. Currently, the LocalStack extension for Docker Desktop supports the following features:
Control LocalStack: Start, stop, and restart LocalStack from Docker Desktop. You can also see the current status of your LocalStack instance and navigate to the LocalStack Web Application.
LocalStack insights: You can see the log information of the LocalStack instance and all the available services and their status on the service page.
LocalStack configurations: You can manage and use your profiles via configurations and create new configurations for your LocalStack instance.
How to use the LocalStack Docker Extension
In this section, we’ll emulate some simple AWS commands by running LocalStack through Docker Desktop. For this tutorial, you’ll need to have Docker Desktop (v4.8+) and the AWS CLI installed.
Step 1: Enable Docker Extensions
You’ll need to enable Docker Extensions under the Preferences tab in Docker Desktop.
Step 2: Install the LocalStack extension
The LocalStack extension is available on the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for LocalStack in the Extensions Marketplace, then select Install.
Alternatively, you can install the LocalStack Extension for Docker Desktop by pulling our public Docker image from Docker Hub:
docker extension install localstack/localstack-docker-desktop:0.3.1
Step 3: Initialize LocalStack
Once the extension is installed, you’re ready to use LocalStack! When you open the extension for the first time, you’ll be prompted to select where LocalStack will be mounted. Open the drop-down and choose the username. You can also change this setting by navigating to the Configurations tab and selecting the mount point.
Use the Start button to get started using LocalStack. If LocalStack’s Docker image isn’t present, the extension will pull it automatically (which may take some time).
Step 4: Run basic AWS commands
To demonstrate the functionality of LocalStack, you can run AWS commands against the local infrastructure using awslocal, our wrapper around the AWS CLI. You can install it using pip:
pip install awscli-local
After it’s installed, all the available services will be displayed by the LocalStack extension on startup.
You can now run some basic AWS commands to check if the extension is working correctly. Try these commands to create a hello-world file on LocalStack’s S3, fully emulated locally:
awslocal s3 mb s3://test
echo "hello world" > /tmp/hello-world
awslocal s3 cp /tmp/hello-world s3://test/hello-world
awslocal s3 ls s3://test/
You should see a hello-world file in your local S3 bucket. You can now navigate to the Docker Desktop to see that S3 is running while the rest of the services are still marked available.
Navigate to the logs and you’ll see the API requests being made with 200 status codes. If you’re running LocalStack to emulate a local AWS infrastructure, you can check the logs to see if a particular API request has gone wrong and further debug it through the logs.
Since the resources are ephemeral, you can stop LocalStack anytime to start fresh. And unlike on AWS, you can spin any resources up or down without worrying about lingering resources incurring costs.
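To see that ephemerality in action, here’s a hypothetical sketch against a running LocalStack instance (the queue name is illustrative, and the queue URL assumes LocalStack’s default edge port and mock account ID):

```shell
# Create a queue, confirm it exists, then tear it down, with no AWS costs involved.
awslocal sqs create-queue --queue-name demo-queue
awslocal sqs list-queues
awslocal sqs delete-queue --queue-url http://localhost:4566/000000000000/demo-queue
```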
Step 5: Use configuration profiles to quickly spin up different environments
Using LocalStack’s Docker Extension, you can create pre-made configuration profiles with specific LocalStack configuration variables or API keys. When you select a configuration profile before starting the container, these variables are passed directly to the running LocalStack container.
This makes it easy for you to change the behavior of LocalStack so you can quickly spin up local cloud environments already configured to your needs.
What will you build with LocalStack?
The LocalStack Docker Extension makes it easy to control the LocalStack container via a user interface. By integrating directly with Docker Desktop, we hope to make your development process easier and faster.
And even more is on the way! In upcoming iterations, the extension will be further developed to expand the supported AWS APIs, add integrations with the LocalStack Web Application, and support tooling like Cloud Pods, LocalStack’s state management and team collaboration feature.
Please let us know what you think! LocalStack is an open source, community-focused project. If you’d like to contribute, you can follow our contributing documentation to set up the project on your local machine and use developer tools to develop new features or fix bugs. You can also use the LocalStack Docker Extension issue tracker to create new issues, or propose new features through our LocalStack Discuss forum.
Source: https://blog.docker.com/feed/
Docker Desktop 4.16 is our first release of the new year and we’re excited to kick off 2023 with a bang. In this release, Docker Extensions moved from beta to GA. The Docker Extensions feature connects the Docker toolchain to your application development and deployment workflows. To make using Docker Extensions easier, we added the ability to search, install, and uninstall extensions right from the search bar. Also, we’re excited to announce that we’ve improved the performance of our image analysis — it’s 4x faster and requires 76% less memory (YAY!).
Let’s check out some highlights from Docker Desktop 4.16.
Docker Extensions now GA
In May 2022, Docker introduced Docker Extensions as a way to connect the Docker toolchain to application development and deployment workflows. And with Docker Desktop 4.16, Docker Extensions and the Docker Extensions SDK are now generally available on all platforms.
The Docker Extensions feature allows you to augment Docker Desktop with debugging, testing, security, networking, and many more functionalities. You can even build custom add-ons using the Extensions SDK.
Since its launch, the Extensions Marketplace has expanded from 15 initial extensions to over 30 — with more on the way.
To help you get the most out of extensions, we’ve also:
Improved discoverability with search, categories, listing the number of installs, and more.
Added the Extensions Marketplace on Docker Hub so you can search extensions and images.
Created a Build section in Docker Desktop to make it simpler for you to create your own.
Made sharing new extensions easier with a Share button so you can share your extension with a URL.
And more!
Over the coming months, you can expect new functionality for Docker Extensions and the Docker Extensions SDK.
Performance improvements to image analysis
Now you can analyze an image for vulnerabilities up to 4x faster — and use up to 76% less memory while you do it. Rolling out as part of Docker Desktop 4.16, these performance improvements can make a big difference when you’re dealing with larger (5GB+) images.
To perform an analysis, select any image in the Images tab. You’ll automatically kick off an analysis so you can learn about vulnerabilities in your base images and dependencies.
Thanks to everyone who reached out and worked with us to make these performance improvements. Please keep the feedback coming!
Quick search now GA
In Docker Desktop 4.15, we launched an experimental quick search that brings the Hub experience into your local workflow so you can find, pull, and run any public or private remote images on Hub. We had a great response from the community, and quick search is now GA.
Launch Docker Desktop 4.16, and you’ll find lots of great features to jump right into your work without leaving Docker Desktop.
Search for local containers or compose apps
Now you can quickly find containers or compose apps on your local system. And once you find them, you can perform quick actions right from the search, including start, stop, delete, view logs, or start an interactive terminal session with a running container. You can even get a quick overview of the environment variables.
Search for public Docker Hub images, local images, and images from remote repositories
Search through all images at your disposal, including Docker Hub and remote repositories, then filter by type to quickly narrow down the results. Depending on the category, you’ll have a variety of different quick actions to choose from:
Docker Hub public images: Pull the image by tag, run it (which also pulls the image as the first step), view documentation, or go to Docker Hub for more details.
Local images: Run a new container using the image and get an overview of which containers use it.
Remote images: Pull by tag or get quick info, like the last updated date, size, or vulnerabilities.
Find new extensions easier with quick search
Now when you search in Docker Desktop, Docker Extensions will be included in the search results. From here, you can learn more about the extension and install it with a single click.
Check out the tips in the footer of the search module for more shortcuts and ways to use it. As always, we’d appreciate your feedback on the experience and suggestions for improvement.
Let us know what you think
Thanks for using Docker Desktop! Learn more about what’s in store with our public roadmap on GitHub, and let us know what other features you’d like to see.
Check out the release notes for a full breakdown of what’s new in Docker Desktop 4.16.
Source: https://blog.docker.com/feed/
blog.devops.dev – Kubernetes is a powerful orchestrator that eases deployment and automatically manages your applications on a set of machines, called a cluster. The aim of this article is to explain the most used…
Source: news.kubernauts.io
New in Docker Desktop 4.15: Improving Usability and Performance for Easier Builds
Docker Desktop 4.15 is here! And it’s packed with usability upgrades to help you find the images you want, manage your containers, discover vulnerabilities, and more.
Learn More
News you can use and monthly highlights:
Find and Fix Vulnerabilities Faster Now that Docker’s a CNA — by Kat Yi, Docker Sr. Security Engineer
How to Monitor Container Memory and CPU Usage in Docker Desktop — by Ivan Curkovic, Docker Engineering Manager
December Extensions Roundup: Improving Visibility for Your APIs and Images — by Amy Bass, Docker Group Product Manager
Configure, Manage, and Simplify Your Observability Data Pipelines with the Calyptia Core Docker Extension — by Ajeet Raina, Docker DevRel, & Eduardo Silva, Founder and CEO of Calyptia
Container Tools, Tips, and Tricks – Issue #2
Debugging is a fact of developer life, and Ivan Velichko is here to help make it go a little smoother. Check out his advice for debugging containers faster.
Learn More
The latest tips and tricks from the community:
Ruby on Rails Docker for local development environment — by Snyk
Docker Made Easy Part #0 — Build your first Node JS Docker App — by Abdurrachman — mpj
How to set up a Rails development environment with Docker — by Simon Chiu, Code with Rails
Traefik, Docker and dnsmasq to simplify container networking — by David Worms, Adaltas
Using a Random Forest Model for Fraud Detection in Confidential Computing — by Ellie Kloberdanz, Senior Data Scientist at Cape Privacy
See more great content from Docker and the community
Read Now
Subscribe to our newsletter to get the latest news, blogs, tips, how-to guides, best practices, and more from Docker experts, sent directly to your inbox once a month.
Source: https://blog.docker.com/feed/
cncf.io – End user post by Sean Isom and Colin Murphy, Adobe. Born out of pizza-fueled build nights, Adobe’s Ethos project emerged from a desire to find better ways to…
Source: news.kubernauts.io
It’s time for the holidays, and we’ve got some exciting new Docker Extensions to share with you! Docker extensions build new functionality into Docker Desktop, extend its existing capabilities, and allow you to discover and integrate additional tools that you’re already using with Docker. Let’s take a look at two exciting new extensions from December.
And if you’d like to see everything available, check out our full Extensions Marketplace!
Move faster with API endpoints with Akita
Are you working on a new service or shipping lots of changes? Do you have a handle on which of your API endpoints might be slow or which ones are throwing errors? With the Akita API extension for Docker Desktop, you can find this out in a few minutes.
The Akita API Docker extension makes it easy to try out Akita without additional work. With Akita, you can:
See your API endpoints.
See slow endpoints and endpoints with errors.
Automatically monitor all of your endpoints.
The Akita API extension is in beta. To join Akita’s beta, sign up here.
Get more visibility with Dive-In
There are many advantages to keeping your container sizes small. Often, that starts with keeping your Docker image small as well. But it can sometimes be hard to understand where to start or which layers can be reduced. With the Dive-In Docker extension, you can explore your Docker image and its layer contents, then discover ways to shrink the size of your Docker/OCI image.
With the Dive-In extension, you can:
View the total size of your image.
Identify the inefficient bytes.
See an efficiency score.
Identify the largest files in your image.
View the size of each layer in your image.
Dive-In is an open source extension. Feel free to contribute or raise issues on https://github.com/prakhar1989/dive-in.
Building extensions? We’d love to hear from you!
Adding new extensions to the Extensions Marketplace is really exciting and we’d love to see more from our partners and the community. If you’re currently working on an extension or have built one in the past, we’d love to hear from you! And you can help us improve the developer experience for our Extensions SDK by taking this short survey.
Check out the latest Docker Extensions with Docker Desktop
Docker is always looking for ways to improve the developer experience. Check out these resources for more info on extensions:
Install Docker Desktop for Mac, Windows, or Linux to try extensions yourself.
Visit our Extensions Marketplace to see all of our extensions.
Build your own extension with our Extensions SDK.
Source: https://blog.docker.com/feed/
blog.devops.dev – Helm makes it easy to define, install, and upgrade even the most complicated Kubernetes applications. Helm helps you manage Kubernetes applications. Chart creation, versioning, sharing, and…
Source: news.kubernauts.io
This guest post is written by Prakhar Srivastav, Senior Software Engineer at Google.
Anyone who’s built their own containers, either for local development or for cloud deployment, knows the advantages of keeping container sizes small. In most cases, keeping the container image size small translates to real dollars saved by reducing bandwidth and storage costs on the cloud. In addition, smaller images ensure faster transfer and deployments when using them in a CI/CD server.
However, even for experienced Docker users, it can be hard to understand how to reduce the sizes of their containers. The Docker CLI can be very helpful for this, but it can be intimidating to figure out where to start. That’s where Dive comes in.
What is Dive?
Dive is an open-source tool for exploring a Docker image and its layer contents, then discovering ways to shrink the size of your Docker/OCI image.
At a high level, it works by analyzing the layers of a Docker image. Each instruction in the Dockerfile (like a separate RUN instruction) adds a new layer to your image, and every layer you add takes up more space in the image.
Dive takes this information and does the following:
Breaks down the image contents in the Docker image layer by layer.
Shows the contents of each layer in detail.
Shows the total size of the image.
Shows how much space was potentially wasted.
Shows the efficiency score of the image.
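To make the layer mechanics concrete, here’s a hypothetical Dockerfile sketch: each RUN instruction produces its own layer, so files deleted in a later layer still occupy space in an earlier one, and merging instructions avoids that waste.

```dockerfile
FROM ubuntu:22.04

# Two separate RUN instructions create two layers; the apt lists written in
# the first layer still take up space even though the second layer deletes them.
RUN apt-get update && apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*

# A single combined RUN would keep the deleted files out of the final image:
# RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
```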
While Dive is awesome and extremely helpful, it’s a command-line tool and uses a TUI (terminal UI) to display all the analysis. This can seem limiting and hard to use for some users.
Wouldn’t it be cool to show all this useful data from Dive in an easy-to-use UI? Enter Dive-In, a new Docker Extension that integrates Dive into Docker Desktop!
Prerequisites
You’ll need to download Docker Desktop 4.8 or later before getting started. Make sure to choose the correct version for your OS and then install it.
Next, hop into Docker Desktop and confirm that the Docker Extensions feature is enabled. Click the Settings gear > Extensions tab > check the “Enable Docker Extensions” box.
Dive-In: A Docker Extension for Dive
Dive-In is a Docker extension that’s built on top of Dive so Docker users can explore their containers directly from Docker Desktop.
To get started, search for Dive-In in the Extensions Marketplace, then install it.
Alternatively, you can also run:
docker extension install prakhar1989/dive-in
When you first access Dive-In, it’ll take a few seconds to pull the Dive image from Docker Hub. Once it does, it should show a grid of all the images that you can analyze.
Note: Currently, Dive-In does not show dangling images (images that have the repo tag of “none”). This keeps the grid uncluttered and as actionable as possible.
To analyze an image, click the Analyze button, which calls Dive behind the scenes to gather the data. Depending on the size of the image, this can take some time. When it’s done, it’ll present the results.
At the top, Dive-In shows three key metrics for the image, which are useful for getting a high-level view of how inefficient the image is. The lower the efficiency score, the more room for improvement.
Below the key metrics, it shows a table of the largest files in the image, which can be a good starting point for reducing the size.
Finally, as you scroll down, it shows the information of all the layers along with the size of each of them, which is extremely helpful in seeing which layer is contributing the most to the final size.
And that’s it!
Conclusion
The Dive-In Docker Extension helps you explore a Docker image and discover ways to shrink the size. It’s built on top of Dive, a popular open-source tool. Use Dive-In to gain insights into your container right from Docker Desktop!
Try it out for yourself and let me know what you think. Pull requests are also welcome!
About the Author
Prakhar Srivastav is a senior software engineer at Google where he works on Firebase to make app development easier for developers. When he’s not staring at Vim, he can be found playing guitar or exploring the outdoors.
Source: https://blog.docker.com/feed/