Adding the ESLint Tool to an AI Assistant: Improving Recommendations for JS/TS Projects

This ongoing Docker Labs GenAI series will explore the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing things as open source so you can play, explore, and hack with us, too.

Can an AI assistant help you write better JavaScript or TypeScript?

Background and introduction

Projects that rely heavily on JavaScript (JS) or TypeScript (TS) are synonymous with the web, so there is high demand for tools that can improve the consistency and quality of projects using these languages. In previous Docker Labs GenAI posts, we’ve introduced the idea that tools both help AI assistants understand our code more deeply and enable them to take action based on that understanding.

In this article, we’ll be trying to enable our AI assistant to provide advice that is both helpful and actionable for linting JS/TS projects and to finally delve into the NPM ecosystem.

Another simple prompt

As we learned in this previous Docker Labs GenAI article, you won’t get much help asking an LLM to tell you how to lint your project without any details. So, like before, we’re using our “linguist” tool to learn about the languages used in the project and augment the prompt (Figure 1):

How do I lint my project?
{{# linguist }}
This project contains code from the language {{ language }} so if you have any
recommendations pertaining to {{ language }}, please include them.
{{/linguist}}

What LLMs provide out of the box

Figure 1: AI assistant responds with information about ESLint.

In Figure 2, we see that GPT-4 recognizes that ESLint is highly configurable and actually doesn’t work without a config, and so it is trying to provide that for us by either helping us run ESLint’s init tool or by writing a config to use.

Figure 2: AI assistant provides information for setting up and running ESLint.

However, this response gives us either a config that does not work for many projects, or a boilerplate setup task for the user to do manually. This is in contrast with other linters, like Pylint or golangci-lint, where linguist was actually enough for the LLM to find a clear path to linting. So, with ESLint, we need to add more knowledge to help the LLM figure this out.

Configuring ESLint

Using StandardJS

StandardJS is a community-led effort to simplify ESLint configurations. Let’s start by nudging the assistant toward using this as a starting point. The ESLint config is published as its own package, StandardJS, so we can add the following prompt:

If there are no ESLint configuration files found, use StandardJS to lint the project with a consistent config.

We will also add a function definition so that our assistant knows how to run StandardJS. Note the container image defined at the bottom of the following definition:

- name: run-standardjs
  description: Lints the current project with StandardJS
  parameters:
    type: object
    properties:
      typescript:
        type: boolean
        description: Whether to lint TypeScript files
      fix:
        type: boolean
        description: Whether to fix the files
      files:
        type: array
        items:
          type: string
        description: The filepaths to pass to the linter. Defaults to '.'
    required:
      - typescript
      - fix
  container:
    image: vonwig/standardjs:latest

This definition works for both TypeScript and JavaScript projects via the typescript argument. The assistant uses the project content to determine how to set this property optimally.
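
For illustration, a tool call that the assistant might emit against this definition could look like the following (the exact call envelope depends on the runner, and the argument values here are hypothetical):

{
  "name": "run-standardjs",
  "arguments": {
    "typescript": true,
    "fix": false,
    "files": ["."]
  }
}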

When using StandardJS with TypeScript, two things happen in the container:

It lints with ts-standard instead of standard.

It runs ts-standard from the working directory containing tsconfig.json.

But, with the right tools, this behavior is enabled with a single prompt:

When using StandardJS, use TypeScript only if there are tsconfigs in the project.

Docker environments

Both ESLint and StandardJS run in Node.js environments. In our current prototype, our assistant uses three different Docker images.

ESLint (includes versions 7-9 of ESLint)

StandardJS (includes standard, ts-standard)

Extractor-ESLint (for extracting the knowledge)

Docker is significant because of the previously mentioned requirement of using ts-standard in a directory with tsconfig.json. When we baked this logic into the Docker image, we effectively introduced a contract bridging the AI Assistant, the linter tool, and the overall structure of the repository.
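
Given that contract, invoking the image by hand might look something like the following (the JSON argument mirrors the function definition above; the /project mount point is an assumption about the image, not something verified here):

docker run --rm -v "$PWD:/project" -w /project vonwig/standardjs:latest '{"typescript": true, "fix": false}'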

After determining that a project uses JavaScript or TypeScript, our assistant also adds Git Hooks. (See this GenAI article for details.) Docker gives us a way to reliably distribute these tools.

If we detect `TypeScript` in the project, we can add the following hook to the hooks entry in the `local` repo entry.

```yaml
- id: standardjs-docker-ts
  name: standardjs linter (TS and JS)
  entry: vonwig/standardjs:latest '{"typescript": true, "fix": false}'
  language: docker_image
  files: \.(ts|tsx)$
```
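
For context, this hook sits under a local repo entry in the project’s .pre-commit-config.yaml; a minimal sketch of the full file, following the standard pre-commit schema:

```yaml
repos:
  - repo: local
    hooks:
      - id: standardjs-docker-ts
        name: standardjs linter (TS and JS)
        entry: vonwig/standardjs:latest '{"typescript": true, "fix": false}'
        language: docker_image
        files: \.(ts|tsx)$
```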

Fixing violations

Linting output comes in the form of violations. A violation is attached to a range in the code file with the offending code and the violation reason. As mentioned previously, 75% of StandardJS violations are automatically fixable. Can we use the AI assistant to automatically fix the remaining violations?

Respond with an edit to resolve the violation using the following JSON format:

{
  "start": [1, 4],
  "end": [1, 6],
  "edit": "lorem ipsum"
}

If you take, for example, the lint rule for type casting, all of the models we tested will replace == with ===. Here’s the response when we ask for fixes to lines with the no-this-before-super violation:

Here's the proposed edit to resolve the violation:

```json
{
  "start": [1, 4],
  "end": [2, 6],
  "edit": "class myclass extends base {\n\tval: number;\n\tconstructor() {\n\t\tsuper();\n\t\tthis.val = 10;\n\t}\n}"
}
```

If these models are able to fix these violations, why doesn’t ESLint just make them automatically fixable? In many of the cases, they represent riskier changes that still require some developer supervision. Perhaps the best thing an assistant can do is present these auto-fixes to the user directly in their editors. For example, a fix that has been generated by our assistant can be presented in VSCode (Figure 3).

Figure 3: A fix is presented to the user.

Editor complaints

With the rise of tools like GitHub Copilot, developers are now becoming accustomed to assistants being present in their editors (Figure 4).

Figure 4: AI assistant is present in the editor.

Our work is showing that linting tools can improve the quality of these fixes.

For example, when asking Copilot to fix the line from earlier, it lacks the additional context from ESLint (Figure 5).

Figure 5: Additional context is needed.

The assistant is unable to infer that there is a violation there. In this instance, Copilot is hallucinating because it was triggered by the developer’s editor action without any of the context coming in from the linter. As far as Copilot knows, I just asked it to fix perfectly good code.

To improve this, we can use the output of a linter to “complain” about a violation. The editor allows us to surface a quick action to fix the code. Figure 6 shows the same “fix using Copilot” from the “problems” window, triggered by another violation:

Figure 6: “Fix using Copilot” is shown in the problems window.

This is shown in VSCode’s “problems” window, which helps developers locate problems in the codebase. An assistant can use the editor to put the ESLint tool in a more effective relationship with the developer (Figure 7).

Figure 7: A more complete fix.

Most importantly, we get an immediate resolution rather than a hallucination. We’re also hosting these tools in Docker, so these improvements do not require installs of Node.js, NPM, or ESLint.

Summary

We continue to investigate the use of tools for gathering context and improving suggestions. In this article, we have looked at how AI assistants can provide significant value to developers by:

Cutting out busy work setting up Node/NPM/ESLint

Leveraging expert knowledge about ESLint to “level up” developers

Generating and surfacing actionable fixes directly to developers where they’re already working (in the editor)

Generating simple workflows as outcomes from natural language prompts and tools

As always, feel free to follow along in our new public repo and please reach out. Everything we’ve discussed in this blog post is available for you to try out on your own projects.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

Subscribe to the Docker Newsletter.

Read the Docker Labs GenAI series.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Docker Best Practices: Understanding the Differences Between ADD and COPY Instructions in Dockerfiles


When you search for “Dockerfile best practices,” one of the suggestions you will find is that you always use the COPY instruction instead of the ADD instruction when adding files into your Docker image.

This blog post will explore why this suggestion exists by providing additional detail on the functionality of these two instructions. Once you understand these concepts, you may find scenarios where you can benefit from ignoring the suggestion and using the ADD command instead of COPY.

Understanding file system build context

Before diving into the differences between ADD and COPY, it’s important to understand the concept of build context. The build context is the set of files and directories that are accessible to the Docker engine when building an image. When you run a docker build command, Docker sends the content of the specified context directory (and its subdirectories) to the Docker daemon. This context forms the scope within which the COPY and ADD instructions operate.
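
For example, in the most common invocation the trailing path argument names the context directory (the tag is arbitrary):

# Everything under ".", minus .dockerignore exclusions, is sent to the builder
docker build -t myapp:latest .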

COPY instruction

The COPY instruction is straightforward and does exactly what its name implies: It copies files and directories from a source within the build context to a destination layer in the Docker image. This instruction can be used to copy both files and directories, and all paths on the host are relative to the root of the build context.

Syntax:

COPY <src>... <dest>

<src>: The source files or directories on the host.

<dest>: The destination path inside the Docker image.

Key points

Basic functionality: COPY only supports copying files and directories from the host file system. It does not support URLs or automatic unpacking of compressed files.

Security: Because COPY only handles local files, it tends to be more predictable and secure than ADD, reducing the risk of unintentionally introducing files from external sources.

Use case: Best used when you need to include files from your local build context into the Docker image without any additional processing.

Example:

COPY ./app /usr/src/app
COPY requirements.txt /usr/src/app/

In this example, the contents of the local app directory are copied into the /usr/src/app directory inside the Docker image being built. The second command copies the requirements.txt file into the /usr/src/app directory as well.

ADD instruction

The ADD instruction provides the same functionality that the COPY instruction does, but it also has additional functionality that, if misunderstood, can introduce complexity and potential security risks.

Syntax:

ADD <src>... <dest>

<src>: The source files (directories or URLs).

<dest>: The destination path inside the Docker image.

Key points

Extended functionality: In addition to copying local files and directories from the build context, ADD provides the following advanced functionality:

Handle URLs: When supplied as a source, the file referenced by a URL will be downloaded to the current Docker image layer at the supplied destination path.

Extract archives: When supplied as a source, ADD will automatically unpack and expand archives to the current Docker image layer at the supplied destination path.

Flexibility vs. security: Although ADD is more flexible, it does introduce risk. Downloading external URLs into the build process may allow malicious code or contents to be brought into the process. Using ADD with archives may result in unintended consequences if you do not understand how it handles archives.

Use case: ADD should only be used when you need specific functionality that it provides and are willing to manage the potential security issues arising from this usage.

Example:

ADD https://example.com/file.tar.gz /usr/src/app/
ADD my-archive.tar.gz /usr/src/app/

In this example, the build process first downloads https://example.com/file.tar.gz and extracts the contents into /usr/src/app in the Docker image layer. In the next step, it takes the local file my-archive.tar.gz and extracts it into the Docker image layer under /usr/src/app.
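
If you do need ADD for a remote file, newer Dockerfile syntax also supports pinning the expected digest so the build fails if the downloaded content changes; a sketch, with a placeholder digest:

ADD --checksum=sha256:<expected-digest> https://example.com/file.tar.gz /usr/src/app/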

When to use COPY vs. ADD

For most use cases, COPY is the better choice due to its simplicity and security. This instruction allows you to transfer files and directories from your local context into the Docker image you are building.

Use ADD only when you need the additional capabilities it offers, but be mindful of potential security implications.
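
When the only capability you need is a remote download, an explicit RUN step is a common alternative because it makes fetching and verification visible in the Dockerfile; a minimal sketch (placeholder URL and digest; assumes curl and sha256sum exist in the base image):

# Fetch, verify, unpack, and clean up in a single layer
RUN curl -fsSL https://example.com/file.tar.gz -o /tmp/file.tar.gz \
 && echo "<expected-sha256>  /tmp/file.tar.gz" | sha256sum -c - \
 && tar -xzf /tmp/file.tar.gz -C /usr/src/app \
 && rm /tmp/file.tar.gz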

Remote contexts

In addition to traditional file system contexts, Docker also supports remote contexts, which can be particularly useful in cloud environments or for building images from code repositories directly. These include:

Git repositories: You can specify a Git repository URL as the build context, allowing Docker to clone the repository and use its content as the context.

docker build https://github.com/username/repository.git#branch

Remote URLs: Docker can use remote URLs for the build context. This is useful for building images directly from archives available online.

docker build http://example.com/context.tar.gz

OCI images: You can use an OCI image as the build context, which is useful for using pre-built images as the starting point for new builds.

docker build oci://registry.example.com/image:tag

How ADD and COPY behave in remote contexts

Note that both ADD and COPY behave slightly differently when used in a remote context.

Using COPY with remote contexts

COPY still operates within the scope of the build context. For example, when using a Git repository as the build context, COPY can copy files and directories from the cloned repository into the Docker image. It does not support copying files from URLs or other remote sources directly.

Example with Git repository as build context:

# Using a Git repository as build context
COPY ./src /app/src

In this case, COPY will copy the src directory from the Git repository (the build context) to /app/src in the Docker image.

Example with URL build context:

# Using an archive from a URL
COPY ./src /app/src

In this case, COPY will copy the src directory from the extracted archive (the build context) to /app/src in the Docker image.

Example with OCI image as build context:

# Using an OCI image as build context
COPY /path/in/oci/image /app/path

In this case, COPY will copy the contents from the specified path within the OCI image to the specified destination path in the Docker image.

Using ADD with remote contexts

The ADD instruction can still be used to download files and extract archives as well as copy files from the build context. Note that all the caveats provided about the ADD instruction above apply here as well.

Example with Git repository as build context:

# Using a Git repository as build context
ADD https://example.com/data.tar.gz /data
ADD ./src /app/src

In this example, ADD will download and extract data.tar.gz from the URL into the /data directory in the Docker image. It will also copy the src directory from the Git repository (the build context) to /app/src in the Docker image.

Example with URL build context:

# Using an archive from a URL
ADD https://example.com/data.tar.gz /data
ADD ./src /app/src

In this example, ADD will download and extract data.tar.gz from the URL into the /data directory in the Docker image. It will also copy the src directory from the downloaded and unpacked archive (the build context) to /app/src in the Docker image.

Example with OCI image as build context:

# Using an OCI image as build context
ADD https://example.com/data.tar.gz /data
ADD /path/in/oci/image /app/path

In this scenario, ADD will download and extract data.tar.gz from the URL into the /data directory in the Docker image. It will also copy the contents from the specified path within the OCI image to the specified destination path in the Docker image.

COPY vs. ADD tl;dr:

Prefer COPY: For most use cases, COPY is the better choice due to its simplicity and security. Use it to transfer files and directories from your local context or a remote context like a Git repository to the Docker image.

Use ADD with caution: Opt for ADD only when you need its additional functionalities, like downloading files from URLs or automatically extracting archives. Always be mindful of the potential security implications when using ADD.

Conclusion

Understanding the differences between ADD and COPY instructions in Dockerfiles and how they are affected by the build context can help you build more efficient and secure Docker images. Although COPY offers a straightforward way to include local files, ADD provides additional flexibility at the cost of increased complexity and potential security risks.

Learn more

Read the Dockerfile reference documentation.

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Thank You to the Stack Overflow Community for Ranking Docker the Most Used, Desired, and Admired Developer Tool 

As you might have seen, Stack Overflow recently unveiled the 2024 Developer Survey results. This always serves as a time for me to reflect on what Docker has accomplished each year in between. Since our inclusion in the survey five years ago, the Stack Overflow community has consistently ranked Docker highly. We were humbled to see Docker recognized as the most-used and most-desired developer tool for the second consecutive year. In addition, this year the community has elevated Docker to be the most-admired (78%). Moreover, Docker is the most-used tool (in the “other tools” category) by professional developers, with 59% using it in their work. This is the direct result of the value developers get by using Docker: a great developer experience, a step-function improvement in productivity, the industry’s largest repository of trusted content, and a community to support getting things done.

Your votes and support mean the world to us, and we couldn’t have achieved this without the Docker and Stack Overflow developer communities! Your feedback and enthusiasm drive us to keep improving and innovating.

When Stack Overflow released the results of last year’s 2023 Developer Survey and we learned that Stack Overflow’s community ranked Docker as the #1 most-desired and #1 most-used developer tool, I said that it means we can’t slow down and need to go even faster in our effort to serve developers. Since the 2023 survey, we have continued listening to your needs and have delivered many improvements in speed, security, collaboration, content, and functionality.  

The 2024 survey results highlight a few key themes that resonate deeply with Docker’s mission and feedback we’re hearing directly from our community: Developers want tools that enhance productivity, simplify workflows, and help them with the latest technological advancements — and yes, that includes AI. 

Let’s look at a few key innovations and updates from the past year that reflect how we’re addressing your feedback and the evolving landscape. 

What’s new

We released Docker Scout for actionable insights in the software supply chain, helping developers address security and policy issues at the time of writing code rather than wait for CI results or, much worse, discover issues when an app is in production. We also provide a free Docker Scout Team subscription to all Docker-Sponsored Open Source (DSOS) participants to help ensure more of the content on Docker Hub is secure from the start. Then, we added Docker Scout Health Scores for security grading containers in your Docker Hub repos. We announced Docker Build Cloud to speed up build times. We also welcomed AtomicJar, maker of Testcontainers, to the Docker family.

Docker continues to innovate in bringing the power of the cloud to local development. Specifically, through Docker Desktop developers easily benefit from Docker’s cloud services in their inner loops, including Docker Build Cloud, Docker Scout, Testcontainers Cloud, and Docker Hub. The result? More frequent releases of higher quality, more secure applications. 

Speaking of Docker Desktop, we’ve delivered more than a dozen Docker Desktop releases in the past year, each one providing more capabilities to boost developer productivity, including Docker Debug, Docker Build checks, Docker Init, Builds view, private marketplace for Docker Extensions, Compose Watch, Resource Saver mode, and much more.

And we’ve also shipped Betas of many new capabilities, including GitHub Actions builds, Compose File Viewer, a new terminal feature in Docker Desktop, enterprise-grade Volume Backup to cloud providers, Docker Desktop for Windows on Arm, Docker Desktop support for Red Hat Enterprise Linux, and others.

While rapidly rolling out new features and products, we remain focused on security. In addition to unveiling Docker Scout, our tool designed to enhance the security of the software supply chain, we were happy to announce that we have received our SOC 2 Type 2 attestation and ISO 27001 certification with no exceptions or major non-conformities. 

This past year has been a busy one for open source, and Docker remains committed to actively maintaining projects that are core to the container ecosystem, including Compose, BuildKit, runc, containerd, Moby (Docker Engine), Distribution, and more. As but one example, BuildKit now includes experimental support for Windows containers, expanding its versatility and reach. By investing in these open source projects, Docker and our community together ensure the container ecosystem continues to evolve to better serve developers.

AI/ML advancements 

We know from our community and customers that Docker is already a pivotal part of the AI/ML development ecosystem, and its use in AI/ML is only growing. For example, a year ago there were more than 100 million pulls of AI/ML images in Docker Hub. Since then that number has grown to more than 500 million!

In the past year we’ve also leaned into leveraging AI to help developers innovate faster and smarter. For example, our integration with tools like GitHub Copilot supports rapid onboarding and continuous learning for developers. Additionally, we’ve added an AI-powered assistant to Docker documentation. By leveraging AI-driven assistance, developers can enhance their coding skills, stay updated with the latest trends, and contribute more effectively to their organizations. 

Looking further out, we see AI/ML fundamentally changing how developers work and how applications are built. To explore these quickly evolving spaces together with our community, we are experimenting in public with new techniques and tools in our Docker Labs GenAI series. For example, a recent post explores how to create Dockerfiles with GenAI. 

Stay tuned — we have even more AI ideas percolating!

Guides and manuals

Speaking of documentation, our Docs and DevRel teams, with help from our Docker Captains, have been up-leveling our guides and manuals. Whether you’re brand new to the Docker community or have been with us from the beginning, you’ll find guides that can take you from starting with Docker foundational concepts to language-specific, use-case, and deep-dive tutorials. Do you have ideas to contribute? We want to hear from you!  

Thank you, and stay in touch

Stack Overflow’s 2024 Developer Survey highlights the critical role Docker plays in the developer ecosystem. By continually innovating and addressing the needs of our community and customers, we help developers and businesses achieve their goals. As we look to the future, Docker remains dedicated to empowering every developer and team with the best solutions to navigate and thrive in the ever-evolving software development landscape.

On behalf of everyone here at Team Docker: Thank you for your ongoing support and trust in Docker!

Learn more

Subscribe to Docker Navigator to stay updated with what’s happening at Docker.

Read our 2024 State of Application Development Report to see insights from our own survey, which digs into key trends such as the expanding roles of cloud and AI/ML in software development and attitudes toward the shift-left approach to security.

See our case study collection to learn how our customers are using Docker as a foundational tool in their tech stack.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.


Docker Scout Health Scores: Security Grading for Container Images in Your Docker Hub Repo

We are thrilled to introduce Docker Scout health scores, our latest feature designed to make software security simpler and more effective for developers. 

Developer-friendly software security

Docker Scout health scores rate the security and compliance status of container images within Docker Hub, providing a single, quantifiable metric to represent the “health” of an image. This feature addresses one of the key friction points in developer-led software security — the lack of security expertise — and makes it easier for developers to turn critical insights from tools into actionable steps.

How Docker Scout health scores work

Docker Scout health scores utilize an alphabetical grading system to rate images stored in Hub repositories. The scores range from A to F, with A representing the highest overall standing and F the lowest. These health scores are calculated by evaluating images against a set of security and compliance checks based on widely accepted secure supply chain best practices. Factors considered include known vulnerabilities, risky licenses, Software Bill of Materials (SBOM) availability, provenance attestations, freshness of base image, and more. To learn more about these checks and the scoring process, visit our documentation.

Note: To maintain the privacy of these assessments, health scores can only be viewed by users who are members of the Docker Hub organization that owns an image repository and have at least “read” access to the repository.

The power of Docker Scout within Docker Hub

Health scores are powered by Docker Scout, our secure software supply chain tool that empowers organizations to strengthen their containerized application security posture via detailed analysis and insights across the software supply chain. Additionally, Docker Scout evaluates container images against detailed policies to ensure compliance with security and licensing standards.

By embedding Docker Scout’s powerful analysis capabilities into Docker Hub, health scores seamlessly fit into developers’ image lifecycle management workflows. Developers visiting hub.docker.com can leverage up-to-date and dependable assessments of their latest and historical images and take proactive measures to prioritize and improve images with lower scores. This capability is crucial for protecting containerized applications from potential security threats.
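
For developers who prefer the command line, the Docker Scout CLI offers a related at-a-glance assessment of an image’s vulnerabilities and base-image status; for example (image name hypothetical):

docker scout quickview myorg/myimage:latest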

Figure 1 shows an example of an image with a low health score. The image was awarded a D score because it contains at least one known, high-profile CVE (think Log4Shell), is missing supply chain attestations (like SBOM and provenance), is using an out-of-date base image, and has specified a default root user.

Figure 1: Sample image with a low health score.

Health scores in Docker Hub 

We’ve made it straightforward for developers to leverage health scores. Users can view them directly within the Docker Hub interface by navigating to their organization’s Repositories tab (Figure 2) or from the detailed view for any given repository (Figure 3). 

Figure 2: Repositories tab — health scores per repository.

Figure 3: Repositories details — health scores per tag.

For those seeking more in-depth analysis, enabling Docker Scout for a specific image repository offers easy access to detailed secure software supply chain insights and recommendations for how to address identified issues (Figure 4).

Figure 4: Image details from Docker Scout.

Proactive security through gamification

In addition to making convoluted secure supply chain insights easier to digest, health scores also introduce an element of gamification. Within our own teams at Docker, we are seeing them motivate developers to improve the container images for which they’re responsible. With the clear, quantifiable A to F metric, developers are taking the initiative to pursue higher scores through proactive steps. This process has fostered a culture of continuous improvement, where our developers are self-motivated to prioritize corrective actions and updates to achieve better scores, thus bolstering the security and compliance of our own portfolio.

Conclusion

By leveraging Docker Scout health scores, we aim to encourage organizations to take proactive steps towards better security and compliance management in their containerized environments and increase the overall resilience of their software supply chain. 

The feature is currently available as a beta and has been rolled out to a limited number of organizations selected to participate in the early access program. To try out health scores or to give feedback, reach out to our product team on social channels, such as X and Slack.

Learn more

Subscribe to the Docker Newsletter. 

Visit the Docker Scout product page.

Looking to get up and running? Use our Quickstart guide.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.


How to Create Dockerfiles with GenAI 

This ongoing Docker Labs GenAI series will explore the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing things as open source so you can play, explore, and hack with us, too.

As we learn how to work more effectively with generative AI, one of the tasks we return to is Dockerfile generation. Because we can equip our AI assistants with tools, can we reframe the problem of content generation as an interaction between LLMs and tools?

If you ask ChatGPT to write a Dockerfile for your project, it will initiate a conversation with you and ask you to extract some details from your project (Figure 1).

Figure 1: Asking ChatGPT for help writing a Dockerfile.

However, we can also equip our assistant with a tool to extract this information from the project directly. Instead of asking the user, the assistant can ask a tool.

Using tools

Here’s an alternative framing. Let’s prompt the assistant with two high-level instructions whenever it is asked to create a Dockerfile.

* First, analyze the project to determine how it should be built.
* Once the analysis is complete, create a Dockerfile to build that project.

In addition to these new prompts, we will also supply the LLM with two function definitions.

[{"name": "analyze_project",
"description": "Analyze a project to determine how it should be built"},
{"name": "write_files",
"description": "write a set of files to my project",
"parameters": {…}}]

With these two functions, the assistant will have enough agency that it can write a Dockerfile directly to a user’s project. Unfortunately, without best practices, the actual Dockerfile content is never very good. 

Here’s an example of the kind of Dockerfile that gets generated when the assistant knows the kind of project but not much else.

# Use the official Node.js image from the Docker Hub
FROM node:14

# Create and change to the app directory
WORKDIR /usr/src/app

# Copy the package.json and package-lock.json files
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose the port your app runs on
EXPOSE 3000

# Define the command to run your app
CMD ["npm", "start"]

This example shows several standard Dockerfile authoring problems.

Uses out-of-date base images.

Does not employ multi-stage builds.

Skips newer Docker features such as cache mounts.

So, what can we do about this? Let’s extract some best practices for authoring NPM project Dockerfiles.

Write Dockerfiles for NPM projects using three stages. Do these three steps sequentially.
* the first node dependencies stage should be called "deps"
and it should fetch the runtime dependencies using npm ci
with the --omit=dev flag.
* The second Dockerfile stage should be called build and it should be based on the deps stage.
It should run npm ci and then npm build
* The third Dockerfile stage should select a base image
that is recommended by Scout and use that in the FROM line.
This stage should then do three things.
1. it copies the node_modules directory from the deps stage.
2. it copies the dist directory from the build stage.
3. it then runs npm start

If you need to use a RUN statement containing npm ci always
add the argument --mount=type=cache,target=/root/.npm to the RUN instruction.
The --mount argument should be placed between the word RUN and the npm command.
This will cache the npm packages in the docker build cache and speed up the build process.
After selecting a base image, get a recommended tag. This recommendation will change over time but start with what is recommended now.

We’ll include this content in any conversation occurring in the context of an NPM project. We’ll also add a third function.

[{"name": "docker_scout_tag_recommendation",
"description": "get a recommended tag",
"parameters": {…}}]

At the time of writing, here’s the output from our assistant. In this example, the assistant is using the GPT-4 model, but we are seeing the same results with other local models.

# Stage 1 – Downloading dependencies
FROM node:22-slim AS deps
WORKDIR /usr/src/app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci --omit=dev

# Stage 2 – Building application
FROM deps AS build
RUN --mount=type=cache,target=/root/.npm npm ci && npm build

# Stage 3 – Using a recommended base image from Scout
FROM node:22-slim
WORKDIR /usr/src/app
COPY --from=deps /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
CMD [ "npm", "start" ]

This is a big improvement. The three inputs that underlie this improvement are listed below.

The two top-level instructions that anchor the process our assistant should use:

* First, analyze the project to determine how it should be built.
* Once the analysis is complete, create a Dockerfile to build that project.

The Markdown content that summarizes our Docker-specific best practices.

The functions (analyze_project, write_files, and docker_scout_tag_recommendation).

Distilling best practices is certainly not easy, but a little can go a long way. Still, it is hard to know whether this can be considered an improvement over just using code templates. Our continuing investigation centers around whether generative AI will help us to more easily adapt these models to new knowledge, new features, and new project types.

Conversation loop

It’s instructive to go through the actual messages passed between agents here. It’s particularly interesting to look at the interplay between the Assistant and the Tool agents. Near the end of the exchange, there is an unexpected twist where the write_files tool complains that it was passed an invalid JSON file. In this case, the assistant actually did generate invalid JSON and the tool was right to complain. However, the assistant also responded to this error by fixing the JSON and calling the tool a second time! That was unscripted.

Another interesting example of an unscripted moment is the last message from the assistant. The assistant correctly determines that the task is now complete and generates a summary of the work that was done and why.

Transcript

System:

You are an assistant who specializes in authoring Dockerfiles for projects. Since you are an expert and know about their project, be definitive about recommendations.

User:

First, analyze the project to determine how it should be built.

Once the analysis is complete, create a Dockerfile to build that project.

Assistant:

[{:function {:arguments "{}" :name "analyze_project"}}]

Tool (analyze_project):

This is an NPM project.

System:

Write Dockerfiles for NPM projects using three stages. Do these three steps sequentially …

… (system prompts excluded for brevity) ..

Assistant:

[{:function {:arguments "{
"repository": "node"
}"
:name "docker_scout_tag_recommendation"}}]

Tool (docker_scout_tag_recommendation):

22-slim

Assistant:

[{:function {:arguments "{
"files": [
{
"path": "Dockerfile",
"content": `
# Stage 1 – the build process
FROM node:22-slim AS deps
WORKDIR /usr/src/app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci --omit=dev
# Stage 2 – the production environment
FROM deps AS build
RUN npm ci && npm run build
# Stage 3 – the production environment
FROM node:22-slim
WORKDIR /usr/src/app
COPY --from=deps /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
CMD [ "npm", "start" ]`,
"executable": false
}
]
}"
:name "write_files"}
:id "call_6VVCLiXcIBNyEGPQwftR7xEV"}]

Tool:

function call write_files failed: call exited with non-zero code (1): #error {\r\n :cause Unexpected character ('`')

Assistant:

I apologize for the error. Let me try that again.

[{:function {:arguments "{
"files": [
{
"path": "Dockerfile",
"content": "FROM node:22-slim AS deps\nWORKDIR /usr/src/app\nCOPY package*.json ./\nRUN --mount=type=cache,target=/root/.npm npm ci --omit=dev\n\nFROM deps AS build\nRUN npm ci && npm run build\n\nFROM node:22-slim\nWORKDIR /usr/src/app\nCOPY --from=deps /usr/src/app/node_modules ./node_modules\nCOPY --from=build /usr/src/app/dist ./dist\nCMD [ \"npm\", \"start\" ]",
"executable": false
}
]
}"
:name "write_files"}}]

Tool (write_files):

wrote 1 files: Dockerfile

Assistant:

I have written the Dockerfile for your NPM project. You can find it in your project folder. This Dockerfile uses a multi-stage build process to efficiently build and run your application within a Docker container. The first stage fetches the runtime dependencies, the second stage builds the application, and the final stage runs the application. Please let me know if you need further assistance.

Summary

For a generative AI assistant to improve its ability to generate Dockerfile content, we have identified both tools and stack-specific best practice content as key enablers. Incorporating these elements into a generative AI based assistant can serve as an alternative to code templates. We continue to evaluate whether this approach allows us to more easily adapt to new project types.

As always, feel free to follow along in our public repo and reach out. Everything we’ve discussed in this blog post is available for you to try on your own projects.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

Subscribe to the Docker Newsletter.

Read the Docker Labs GenAI series.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Introducing Docker Build Checks: Optimize Dockerfiles with Best Practices

Today, we’re excited to announce the release of Docker Build checks with Docker Desktop 4.33. Docker Build checks help your team learn and follow best practices for building container images. When you run a Docker Build, you will get a list of warnings for any check violations detected in your build. Taking a proactive approach and resolving Build warnings and issues early will save you time and headaches downstream. 

Why did we create Docker Build checks?

During conversations with developers, we found that many struggle to learn and follow the best practices for building container images. According to our 2024 State of Application Development Report, 35% of Docker users reported creating and editing Dockerfiles as one of the top three tasks performed. However, 55% of respondents reported that creating Dockerfiles is the task for which they most often seek support.

Developers often don’t have the luxury of reading through the Docker Build docs, making the necessary changes to get things working, and then moving on. A Docker Build might “work” when you run docker build, but a poorly written Dockerfile may introduce quality issues. For example, it may:

Be hard to maintain or update

Contain hidden and unexpected bugs

Have sub-optimal performance

In our conversations with Docker users, we heard that they want to optimize their Dockerfiles to improve build performance, aren’t aware of current best practices, and would like to be guided as they build. 

Investigating and fixing build issues wastes time. We created Docker Build checks to empower developers to write well-structured Dockerfiles from the get-go and learn from existing best practices. With Build checks, your team spends less time on build issues and more on innovation and coding.   

Why should you use Docker Build checks? 

You want to write better Dockerfiles and save time! 

We have collected a set of best practices from the community of build experts and codified them into Docker Build tooling. You can use Docker Build checks to evaluate all stages of your local and CI workflows, including multi-stage builds and Bake, and deep dive in the Docker Desktop Builds view. You can also choose which rules to skip. 

You can access Docker Build checks in the CLI and in the Docker Desktop Builds view. 

More than just linting: Docker Build checks are powerful and fast 

Linting tools typically just evaluate the text files against a set of rules. As a native part of Docker Build, the rules in Docker Build checks are more powerful and accurate than just linting. Docker Build checks evaluate the entire build, including the arguments passed in and the base images used. These checks are quick enough to be run in real-time as you edit your Dockerfile. You can quickly evaluate a build without waiting for a full build execution. 

Check your local builds

A good practice is to evaluate a new or updated Dockerfile before committing or sharing your changes. Running docker build will now give you an overview of issues and warnings in your Dockerfile.

Figure 1: A Docker Build with four check warnings displayed.

To get more information about these specific issues, you can specify the debug flag to the Docker CLI with docker --debug build. This information includes the type of warning, where it occurs, and a link to more information on how to resolve it.

Figure 2: Build debug output for the check warnings.

Quickly check your build

Running these checks during a build is great, but it can be time-consuming to wait for the complete build to run each time when you’re making changes or fixing issues. For this reason, we added the --check flag as part of the build command.

# The check flag can be added anywhere as part of your build command
docker build . --check
docker build --check .
docker build --build-arg VERSION=latest --platform linux/arm64 . --check

As illustrated in the following figure, appending the flag to your existing build command will do the same full evaluation of the build configuration without executing the entire build. This faster feedback typically completes in less than a second, making for a smoother development process. 

Figure 3: Running check of build.

Check your CI builds

By default, running a Docker build with warnings will not cause the build to fail (return a non-zero exit code). However, to catch any regressions in your CI builds, add the following declarations to instruct the checks to generate errors. 

# syntax=docker/dockerfile:1
# check=error=true

FROM alpine
CMD echo "Hello, world!"
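
If you would rather not hard-code the error behavior in the Dockerfile, the same setting can also be supplied per build as a build argument; a sketch (verify the exact variable name against the Build checks documentation for your version):

docker build --check --build-arg "BUILDKIT_DOCKERFILE_CHECK=error=true" .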

Checking multi-stage builds in CI

During a build, only the specified stage/target, including the stages it depends on, is executed. We recommend adding a stage check step in your workflow to do a complete evaluation of your Dockerfile. This is similar to how you would run automated tests before executing the full build.

If any warnings are detected, it will return a non-zero exit code, which will cause the workflow to fail, therefore catching any issues.

docker build --check .
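
As a sketch, the corresponding step in a GitHub Actions workflow might look like the following (job scaffolding and checkout omitted; the step name is arbitrary):

- name: Evaluate Dockerfile with Build checks
  run: docker build --check .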

Checking builds in Docker Build Cloud

Of course, this also works seamlessly with Docker Build Cloud, both locally and through CI. Use your existing cloud builders to evaluate your builds. Your team now has the combined benefit of Docker Build Cloud performance with the reassurance that the build will align with best practices. In fact, as we expand our checks, you should see even better performance from your Docker Build Cloud builds.

Figure 4: Running checks in Docker Build Cloud.

Configure rules

You have the flexibility to configure rules in Build checks with a skip argument. You can also specify skip=all or skip=none to toggle the rules on and off. Here’s an example of skipping the JSONArgsRecommended and StageNameCasing rules:

# syntax=docker/dockerfile:1
# check=skip=JSONArgsRecommended,StageNameCasing

FROM alpine AS BASE_STAGE
CMD echo "Hello, world!"

Dive deep into Docker Desktop Builds view

In Docker Desktop Builds view, you can see the output of the build warnings. Locating the cause of warnings in Dockerfiles and understanding how to resolve them quickly is now easy.

As with build errors, warnings are shown inline with your Dockerfile when inspecting a build in Docker Desktop:

Figure 5: Build checks warnings in Docker Desktop Builds view.

What’s next? 

More checks

We are excited about the new Build checks to help you apply best practices to your Dockerfiles, but this is just the start. In addition to the current set of checks, we plan on adding even more to provide a more comprehensive evaluation of your builds. Further, we look forward to including custom checks and policies for your Docker builds.

IDE integration

The earlier you identify issues in your builds, the easier and less costly it is to resolve them. We plan to integrate Build checks with your favorite IDEs so you can get real-time feedback as you type.

Figure 6: Check violations displaying in VS Code.

GitHub Actions and Docker Desktop

You can already see Build checks warnings in Docker Desktop, and more detailed insights are coming soon. As you may have heard, we recently announced the beta release of Inspecting Docker Builds in GitHub Actions, and we plan to build on this new functionality to include support for investigating check warnings.

Get started now

To get started with Docker Build checks, upgrade to Docker Desktop 4.33 today and try them out with your existing Dockerfiles. Head over to our documentation for a more detailed breakdown of Build checks. 

Learn more

Authenticate and update to receive your subscription level’s newest Docker Desktop features.

What else is new Docker Desktop 4.33? GA Releases of Docker Debug and Docker Build Checks Plus Enhanced Configuration Integrity Checks.

New to Docker? Create an account. 

Subscribe to the Docker Newsletter.


Docker Desktop 4.33: GA Releases of Docker Debug and Docker Build Checks Plus Enhanced Configuration Integrity Checks   

Key features of the Docker Desktop 4.33 release include: 

Docker Debug GA release

Docker Build checks GA release

Configuration integrity check UX improvements

Docker Desktop 4.33 is packed with powerful new features designed to elevate your container development experience. Docker Desktop 4.33 includes GA releases of Docker Debug and Docker Build checks, and significant UX improvements to the Configuration integrity check. These features provide robust tools for debugging, building, and maintaining containerized applications, ensuring a more efficient and user-friendly development environment.

Docker Debug GA released

Developers face significant hurdles when debugging their applications within containers. Traditionally, debugging slim containers or those without a shell has been a challenging task. Developers have had to rebuild images or use workarounds to gain visibility into container internals, which is time-consuming and error-prone. The most common existing method, docker exec -it, comes with significant limitations, particularly for modern slim containers that remove basic tools like ping or Vim, hindering efficient debugging processes.

We are pleased to announce that Docker Debug, initially introduced as a beta feature in Docker Desktop 4.27, is now generally available (GA) in Docker Desktop 4.33. This powerful feature is accessible to all Docker users with a Pro, Teams, or Business license.

Docker Debug revolutionizes the debugging process by providing a seamless and efficient way to get a shell into any container (regardless of state) or image, even those that are slimmed down and lack a shell. This capability is crucial for maintaining the security and performance of containerized applications without compromising the ability to troubleshoot issues effectively. Now, you can access a dedicated shell for quick issue resolution directly from the Docker Desktop CLI or GUI for efficient debugging, targeting both local and remote applications.
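
For example, attaching a debug shell to a running or stopped container is a single command (the container name here is hypothetical):

docker debug my-app-container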

Figure 1: Docker Debug is now generally available.

Key benefits of Docker Debug

Improved productivity

Simplified commands: The docker debug command simplifies attaching a shell to any container or image. This reduces the cognitive load on developers, allowing them to focus on solving problems rather than configuring their environment.

Dynamic port forwarding: Break the isolation between the container and the host OS, making it easier to troubleshoot network-related issues.

Enhanced debugging capabilities

Shell access: Docker Debug provides a robust debug shell equipped with essential tools by default, such as Vim, Nano, htop, curl, and more. This makes it easy to inspect and modify container contents.

Support for slim containers: Even if a container does not include a shell, Docker Debug allows you to attach a debug shell, facilitating troubleshooting without needing to modify the container image.

Persistent customizations: Customize your debugging environment with additional tools using the built-in NixOS package manager. These customizations persist across debug sessions, enhancing your workflow efficiency.

Seamless integration

Integration with existing workflows: Docker Debug integrates seamlessly with your existing Docker workflows. Whether you are working with running containers, stopped containers, or just images, Docker Debug provides a consistent and intuitive interface for debugging.

Remote debugging: Use Docker Debug to troubleshoot containers running on remote Docker instances, enhancing your ability to manage and debug applications deployed across different environments.

Docker Debug is a game-changer for developers working with containerized applications. By simplifying and enhancing the debugging process, Docker Debug empowers developers to maintain secure, performant, and reliable applications. Upgrade to Docker Desktop 4.33 today and experience the power of Docker Debug.

For more details, refer to the Docker Debug docs.

Docker Build checks GA released 

Docker Desktop is more than a simple containerization tool — it’s a comprehensive solution designed to support developers at every stage of the development process. With Docker Build checks in Docker Desktop 4.33, we’ve enhanced BuildKit — the powerful engine behind Docker Build and Docker Build Cloud — to ensure Dockerfiles meet the highest standards. 

Ensuring that Dockerfiles adhere to best practices can be challenging for development teams. Evaluating and maintaining the quality of Dockerfiles across local and CI workflows, including complex multi-stage builds, is difficult, and lapses lead to inefficient builds, increased build times, and potential security vulnerabilities.

Docker Build checks guide developers in writing optimized Dockerfiles, improving efficiency and reducing build times. Driving your team to follow best practices also enhances security and reduces your build times in Docker Build Cloud.

By using Docker Build checks to evaluate all stages of your local and CI workflows, including multi-stage builds and Bake, and by deep-diving in the Docker Desktop Builds view, you ensure that your Dockerfiles are set up to leverage the full potential of Docker Build Cloud. This new feature not only saves you time in maintaining your Dockerfiles but also enhances your overall build performance in Docker Build Cloud.

To fully benefit from Docker Build Cloud, we encourage you to authenticate and explore how Docker Build checks can elevate your development experience and drive you toward more efficient and high-quality container builds. 

Figure 2: Build checks warnings in Docker Desktop Build view.

Configuration integrity check 2.0 

In the past, Docker Desktop users have encountered disruptive warning messages about configuration issues, which can interrupt their workflow and lead to repeated and ineffective repair attempts. To improve the overall user experience and ensure smoother operation of Docker Desktop, a more user-friendly and efficient way to alert users about configuration problems, provide actionable guidance, and handle errors has been needed. Our latest release includes a new and improved UX for the Configuration integrity check for macOS, with:

Banner migrated to Notification Center: The warning is now less obtrusive and appears as an alert in the Notification Center.

Actionable alert: The Notification Center will read Find out more instead of Repair. Clicking it will open a pop-up with details on configuration changes (such as incorrect or missing symlinks) and the option to repair.

Error-handling improvements: Introducing an error dialogue with a diagnostic upload option if the check fails, operating system checks, and preventing the Integrity Repair from running multiple times if the user clicks Repair in quick succession.

If you ignore the notification, it will be shown again only at the next Docker Desktop startup. If you choose to repair your configuration, you won’t be prompted again.

If you want to switch off Configuration integrity check notifications, navigate to Docker Desktop’s Settings and in the General tab, and clear the Automatically check configuration setting.

Fill out the feedback form if you have feedback on how to improve the Configuration integrity check feature further.

Figure 3: Clicking the notification will open a pop-up with details on configuration changes and the option to repair.

Conclusion 

Docker Desktop 4.33 is another milestone in our commitment to providing an industry-leading container development suite. By introducing features like Docker Debug, Docker Build checks, and improved Configuration integrity check, we empower development teams to write better Dockerfiles, troubleshoot applications with ease, and maintain optimal configurations. These enhancements drive productivity and streamline workflows, solidifying Docker Desktop as your launchpad for engineering excellence. Upgrade to Docker Desktop 4.33 today and experience the future of container development. 

Learn more

Authenticate and update to receive your subscription level’s newest Docker Desktop features.

New to Docker? Create an account. 

Subscribe to the Docker Newsletter.

Dive deep into Docker Debug with additional Docker Debug documentation.

Introducing Docker Build Checks: Optimize Dockerfiles with Best Practices.

Leave feedback on the Configuration integrity check feature.

Source: https://blog.docker.com/feed/

3 Ways CARIAD Configures Docker Business for Security and Compliance

CARIAD, an automotive software and technology company, unites more than 6,000 global experts and aligns major brands in the Volkswagen Group under one software strategy. Founded in 2020, CARIAD provides solutions to securely and compliantly evolve the fleet from mere transportation into fully integrated digital experiences. CARIAD’s use of Docker provides a framework for embedding advanced software into existing systems.

As a subsidiary of Volkswagen Group, CARIAD has expertise in complex identity access requirements, including integrating Docker with multiple Active Directory instances. Security and compliance requirements are critical, with added layers of complexity due to environment requirements introduced when developing embedded systems.

Docker Business is a specialized containerization platform for large enterprises, providing features that enhance security, compliance, and scalability. CARIAD leverages Docker Business to integrate Single Sign-On (SSO) and Image Access Management (IAM), which are crucial for meeting its stringent security requirements. These features allow CARIAD to control access to Docker resources effectively, supporting both security and compliance goals.

Integration with WSL 2 

Docker Desktop makes it simple for CARIAD developers to run Linux containers natively on their Windows machines without the need for a dual-boot setup or a dedicated Linux machine.

Windows Subsystem for Linux 2 (WSL 2) provides a hybrid development environment, with a Linux kernel running in a lightweight virtual machine, fully managed by Windows, yet offering near-native performance. 

Before WSL 2, the original WSL used a translation layer between Windows and the Linux file system, which introduced potential performance bottlenecks, especially when running build scripts or version control operations. WSL 2 introduces a full Linux kernel with a real Linux file system, stored in a virtual disk image. This greatly improves file I/O performance and supports a broader range of tools and applications with better Linux system call support.

WSL 2 also improves resource management by dynamically managing memory and CPU resources allocated to the Linux subsystem. This functionality is crucial for CARIAD because it allows efficient scaling of resources based on workload demands, which is particularly important when developing and testing resource-intensive applications.

Docker Desktop integrates well with WSL 2 and provides the capability to execute Docker commands with any Linux distribution installed within WSL 2. This approach enables CARIAD to execute Docker commands within a custom WSL distribution that adheres to their organizational policy requirements.
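As a day-to-day sketch (assuming the custom distribution, hypothetically named "cariad-dev", is installed and WSL integration is enabled for it under Docker Desktop’s Resources > WSL integration settings):

# List installed WSL distributions and confirm they run under WSL 2
wsl --list --verbose

# Open a shell in the organization's custom distribution
wsl -d cariad-dev

# Inside the distribution, docker commands are proxied to the
# Docker Desktop engine
docker run --rm hello-world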

Single Sign-On and User Access Management

CARIAD integrates Docker SSO, available in Docker Business, with its existing Azure Active Directory instances to ensure that only authenticated and authorized users access Docker resources, aligning with required policies. Beyond the benefits of enterprise SSO itself, this integration is a prerequisite for properly configuring and enforcing other security measures, such as Image Access Management (IAM).

Image Access Management 

CARIAD ensures it uses only authorized images from Docker Hub, enforced through tailored administrative configurations with IAM. This approach manages access levels by group and is a key component in enforcing security protocols, particularly in safeguarding container environments. Properly configured and enforced IAM, which is automatically enabled by enforcing sign-in, reduces the risk associated with unauthorized or unsecured images.

This process involves activating IAM, setting permissions that align with user roles and project requirements, and testing to ensure the permissions are working as intended (Figure 1).

The CARIAD team explains the importance of Registry Access Management (RAM) and IAM when using WSL 2 this way: “While WSL 2 seamlessly grants elevated root capabilities within its environment, it is fortunate that these permissions do not extend to SYSTEM rights on the Windows host. However, if both registry and image access management are absent from the Docker Desktop setup, the lack of firewall and anti-malware protection could open the door to a malicious container attack and local privilege escalation.”

Figure 1: Potential introduction of a malicious container.

Conclusion

CARIAD’s strategies for deploying Docker Business into a secure enterprise environment represent strong choices for any organization managing similar security, compliance, or identity access management requirements. For organizations looking to enhance their development operations, CARIAD’s model offers a blueprint for deploying Docker Desktop to large enterprises.

Using Docker Business features and WSL 2, CARIAD ensures compliance and supports a developer-friendly workflow. Within the stringent requirements necessary for automotive systems, developers at Volkswagen Group work with best-in-class tools and processes to build securely and quickly. CARIAD’s approach provides valuable lessons for enterprises looking to improve their development operations with Docker.

Read more from CARIAD in their case study — Building a Secure and Compliant Framework with Docker at CARIAD — and white paper — Using Docker Desktop in Large-Scale Enterprises — and get inspiration for secure, compliant Docker implementations in the automotive industry.

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Upgrade to Docker Business.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Empowering Developers with Docker: Simplifying Compliance and Enhancing Security for SOC 2, ISO 27001, FedRAMP, and More

The compliance and regulatory landscape is evolving and complicated, and the burden on developers to maintain compliance is not often acknowledged in articles about maintaining SOC 2, ISO 27001, FedRAMP, NIS 2, EO 14028, and similar standards.

Docker’s products aim to put power into the developer’s hands to maintain compliance with these requirements and eliminate what can often be a bottleneck between engineering and security teams. 

With a Docker Business subscription, Docker customers have access to granular controls and a full product suite that can help them maintain compliance and strengthen their security controls.

Access controls

Docker’s solutions offer Single Sign-On (SSO), allowing customers to integrate the Docker product suite with their existing access controls and identity provider (IdP).

Docker customers can also enforce login to Docker Desktop. Using the registry.json file, you can require that all users sign in to Docker Desktop, providing granular access to Docker’s local desktop application.
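A minimal sketch of such a registry.json, assuming “exampleorg” is your Docker organization (the file’s location varies by OS, for example /Library/Application Support/com.docker.docker/registry.json on macOS):

{
  "allowedOrgs": ["exampleorg"]
}

With this file in place, Docker Desktop requires users to sign in with an account belonging to the listed organization before they can use the application.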

Within Docker Hub, Organization Owners can control access to registries as well as public content, and can create granular teams to ensure that teams have access only to approved images.

Hardened Docker Desktop

By using the security configurations available in Docker Desktop, customers can add security features that meet the needs of their environment. These features help companies satisfy compliance and regulatory requirements for supply chain security, network security, and network access restriction and monitoring. They include:

Settings Management

Docker Desktop’s Settings Management provides granular access controls so that customers can directly control how their users interact with their environments. This includes, but is not limited to, the following (see the sketch after this list):

Configure HTTP proxies, network settings, and Kubernetes settings.

Configure Docker Engine.

Turn off Docker Desktop’s ability to check for updates, turn off Docker Extensions, turn off beta and experimental features, etc. 

Specify which paths developer file shares can use.
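As a sketch, a minimal admin-settings.json might look like the following. The value/locked structure follows Docker’s Settings Management documentation, but treat the exact key names as assumptions and confirm them against the docs for your Docker Desktop version:

{
  "configurationFileVersion": 2,
  "disableUpdate": { "value": true, "locked": true },
  "extensionsEnabled": { "value": false, "locked": true },
  "enhancedContainerIsolation": { "value": true, "locked": true }
}

Settings marked locked cannot be changed by individual developers, which is how administrators enforce a consistent baseline across the organization.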

Enhanced Container Isolation

Enhanced Container Isolation lets customers apply additional security settings that help prevent container escapes.

Registry Access Management

Using Registry Access Management, customers can granularly control which registries their users have access to, narrowing it down to just the registries they approve.

Image Access Management

Within Docker Hub, customers can also control what images their users have access to, allowing customers to create an inventory of approved and trusted content. With Image Access Management, customers can implement a secure software development life cycle (SDLC). 

Air-Gapped Containers

With Docker Desktop’s Air-Gapped Containers, customers may also restrict containers from accessing network resources, limiting where data can be uploaded to or downloaded from. This feature allows customers more granular control over their development environment. 

Vulnerability monitoring and continuous assessment with Docker Scout

All compliance and regulatory standards require vulnerability scanning to occur at the application level, but most solutions do not scan at the container level nor do they help prevent vulnerabilities from ever reaching production. 

Docker Scout provides a GitHub application that can be embedded in CI/CD to identify vulnerabilities in images and prevent them from reaching production. By using this as part of the development process, developers can patch issues early, reducing the number of vulnerabilities later identified by SAST, penetration testing, bug bounty programs, and so on.

Companies can also use Docker Scout to monitor their images for vulnerabilities, identify whether fixes are available, and provide the most up-to-date information to create more secure products. When a zero-day vulnerability is released, you can easily search your images for every instance and remediate them as soon as possible. 
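A sketch of the corresponding Docker Scout CLI commands (assuming the Scout CLI is available via Docker Desktop or the docker-scout plugin; “myapp:latest” is a hypothetical image):

# Quick overview of vulnerabilities and base-image status
docker scout quickview myapp:latest

# Detailed CVE listing, including whether fixes are available
docker scout cves myapp:latest

# Suggested base-image updates and remediation options
docker scout recommendations myapp:latest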

Policy management

Customers can utilize Docker Scout to monitor compliance for the following:

Monitor packages using AGPLv3 and GPLv3 licenses.

Ensure images specify a non-root username.

Monitor for all fixable critical and high vulnerabilities.

Monitor for outdated base images.

Verify supply chain attestations.

Customers can also create custom policies within Docker Scout to monitor their own compliance requirements. Do you have vulnerability SLAs? Monitor your environment to ensure you are meeting SLA requirements for vulnerability remediation. 
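To evaluate an image against your organization’s policies from the command line, a sketch (assuming “exampleorg” is your Docker organization and that your Scout CLI version supports the policy subcommand):

# Show policy compliance status for an image
docker scout policy myapp:latest --org exampleorg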

Software Bill of Materials (SBOM)

Customers may also use Docker Scout to help compile full SBOMs. Many SBOM solutions do not break images down into their individual components and packages. Docker Scout also supports multi-stage builds, a capability that is rare among SBOM tools.
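A sketch of generating an SBOM with the Scout CLI (the format values are assumptions; check docker scout sbom --help for the formats your version supports):

# Human-readable package listing
docker scout sbom --format list myapp:latest

# SPDX JSON output for downstream tooling
docker scout sbom --format spdx myapp:latest > myapp.spdx.json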

Reduced security risk with Docker Build Cloud and Testcontainers Cloud

Docker Build Cloud

With Docker Build Cloud, organizations gain more autonomy throughout the build process through the following features (see the sketch after this list):

By using remote build infrastructure, Docker Build Cloud ensures that build processes are isolated from local environments, reducing the risk of local vulnerabilities affecting the build process.

Customers do not need to manage individual build infrastructures. Centralized management allows for consistent security policies and updates across all builds.

The shared cache helps avoid redundant builds and reduces the attack surface by minimizing the number of times an image needs to be built from scratch.

Docker Build Cloud supports native multi-platform builds, ensuring that security configurations are consistent across different environments and platforms. 
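A minimal sketch of pointing builds at Docker Build Cloud (assuming you have created a cloud builder named “default” under the hypothetical “exampleorg” organization):

# Register the cloud builder locally
docker buildx create --driver cloud exampleorg/default

# Build on the remote builder, including a multi-platform build
docker build --builder cloud-exampleorg-default \
  --platform linux/amd64,linux/arm64 -t myapp:latest .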

Testcontainers Cloud 

Testcontainers Cloud removes the need to run a Docker runtime in your CI pipeline just to support your tests. Its lightweight agent stands in for a local daemon and presents a much smaller attack surface for your infrastructure.

Unlike CI setups based on Docker-in-Docker, developers no longer need to run a root-privileged Docker daemon next to the source code, thereby reducing supply chain risk.

Conclusion

Docker’s comprehensive approach to security and compliance empowers developers to efficiently manage these aspects throughout the development lifecycle. By integrating granular access controls, enhanced isolation, and continuous vulnerability monitoring, Docker ensures that security is a seamless part of the development process. 

The Docker product suite equips developers with the tools they need to maintain compliance and manage security risks without security team intervention.

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Get started with Testcontainers Cloud by creating a free account.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Docker Security Advisory: AuthZ Plugin Bypass Regression in Docker Engine

Certain versions of Docker Engine have a security vulnerability that could allow an attacker to bypass authorization plugins (AuthZ) under specific circumstances. The base likelihood of this being exploited is low. This advisory outlines the issue, identifies the affected versions, and provides remediation steps for impacted users.

Problem

Docker’s default authorization model is all-or-nothing. Users with access to the Docker daemon can execute any Docker command. For greater access control, authorization plugins (AuthZ) can be used. These plugins approve or deny requests to the Docker daemon based on authentication and command context.

In 2018, a security issue was discovered where an attacker could bypass AuthZ plugins using a specially crafted API request. This could lead to unauthorized actions, including privilege escalation. Although this issue was fixed in Docker Engine v18.09.1 in January 2019, the fix was not carried forward to later versions, resulting in a regression.

Vulnerability details

AuthZ bypass and privilege escalation: An attacker could exploit a bypass using an API request with Content-Length set to 0, causing the Docker daemon to forward the request without the body to the AuthZ plugin, which might approve the request incorrectly.

Initial fix: The issue was fixed in Docker Engine v18.09.1, released in January 2019.

Regression: The fix was not carried forward to Docker Engine v19.03 or later versions. The regression was identified in April 2024, and patches were released for the affected versions on July 23, 2024. The issue was assigned CVE-2024-41110.

Affected versions

Affected versions: <= v19.03.15, <= v20.10.27, <= v23.0.14, <= v24.0.9, <= v25.0.5, <= v26.0.2, <= v26.1.4, <= v27.0.3, <= v27.1.0

Patched versions: > v23.0.14, > v27.1.0
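To confirm which Engine version you are running, one quick check from the CLI:

# Print the Docker Engine (server) version
docker version --format '{{.Server.Version}}'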

Who is impacted?

Users of Docker Engine v19.03.x and later versions who rely on authorization plugins to make access control decisions.

Who is not impacted?

Users of Docker Engine v19.03.x and later versions who do not rely on authorization plugins to make access control decisions and users of all versions of Mirantis Container Runtime are not vulnerable.

Users of Docker commercial products and internal infrastructure who do not rely on AuthZ plugins are unaffected.

Impact on Docker Desktop

Docker Desktop up to v4.32.0 includes affected versions of Docker Engine.

The impact for Docker Desktop is limited compared to production environments.

Exploitation requires access to the Docker API, which usually means the attacker needs to already have local access to the host machine, unless the Docker daemon is insecurely exposed over TCP.

Default Docker Desktop configuration does not include AuthZ plugins.

Privilege escalation is limited to the Docker Desktop VM, not the underlying host.

A patched version of Docker Engine will be included in Docker Desktop v4.33.

Remediation steps

Update Docker Engine:

If you are running an affected version, update to the most recent patched version (see the sketch after these steps).

Mitigation if unable to update immediately:

Avoid using AuthZ plugins.

Restrict access to the Docker API to trusted parties, following the principle of least privilege.

Update Docker Desktop:

If using an affected version, update to Docker Desktop 4.33 after it is released.

Ensure AuthZ plugins are not used and do not expose the Docker API over TCP without protection.

Docker Business subscribers can use Settings Management to enforce secure settings.
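As a sketch, upgrading Docker Engine on a Debian or Ubuntu host that installed the standard docker-ce packages from Docker’s apt repository:

# Fetch the latest package lists and upgrade the Engine packages
sudo apt-get update
sudo apt-get install --only-upgrade docker-ce docker-ce-cli containerd.io

On other platforms, follow the corresponding update instructions in the Docker Engine documentation.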

Learn more

See the GitHub security advisory.

Authenticate and update to receive your subscription level’s newest Docker Desktop features.

New to Docker? Create an account.

Subscribe to the Docker Newsletter.

Source: https://blog.docker.com/feed/