Electric minibus: Opel makes the Zafira 15,000 euros cheaper
The revised Opel Zafira costs significantly less than its predecessor. The drivetrain, however, has not changed yet. (Electric car, Technology)
Source: Golem
Facebook's former facial recognition feature is costing the Meta Group a large sum following a lawsuit from Texas. (Facial recognition, Social network)
Source: Golem
Fiber-optic subsidies from the federal government are mostly no longer wanted by network operators. The tone between supporters and opponents is getting harsher. (Breko, Fiber optics)
Source: Golem
The info panel informs customers about data backups via Onedrive. Such advertising can, however, be switched off completely. (Windows 11, Microsoft)
Source: Golem
Apple has built Apple Intelligence into the iOS 18.1 developer beta: call transcription, writing aids, mail prioritization, and photo search are AI-assisted. (Apple Intelligence, Apple)
Source: Golem
The community can now also read Wikipedia with a dark background. The feature is currently still somewhat hidden, though. (Wikipedia, Software)
Source: Golem
Tourism is booming in Japan, and with it a traffic obstacle: rideable suitcases that zip through airports and streets much like e-scooters. Many people do not know that rules apply to them. (Mobility, Travel)
Source: Golem
The Federal Constitutional Court can be reached entirely electronically: several encrypted channels are open to citizens. Faxing remains permitted. (Encryption, Data protection)
Source: Golem
This ongoing Docker Labs GenAI series will explore the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing things as open source so you can play, explore, and hack with us, too.
As we learn how to work more effectively with generative AI, one of the tasks we return to is Dockerfile generation. Because we can equip our AI assistants with tools, does this allow us to reframe the problem of content generation as an interaction between LLMs and tools?
If you ask ChatGPT to write a Dockerfile for your project, it will initiate a conversation with you and ask you to extract some details from your project (Figure 1).
Figure 1: Asking ChatGPT for help writing a Dockerfile.
However, we can also equip our assistant with a tool to extract this information from the project directly. Rather than asking the user, the assistant can ask a tool instead.
Using tools
Here’s an alternative framing. Let’s prompt the assistant with two high-level instructions whenever it is asked to create a Dockerfile.
* First, analyze the project to determine how it should be built.
* Once the analysis is complete, create a Dockerfile to build that project.
In addition to these new prompts, we will also supply the LLM with two function definitions.
[{"name": "analyze_project",
"description": "Analyze a project to determine how it should be built"},
{"name": "write_files",
"description": "write a set of files to my project",
"parameters": {…}}]
With these two functions, the assistant will have enough agency that it can write a Dockerfile directly to a user’s project. Unfortunately, without best practices, the actual Dockerfile content is never very good.
Here’s an example of the kind of Dockerfile that gets generated when the assistant knows the kind of project but not much else.
# Use the official Node.js image from the Docker Hub
FROM node:14
# Create and change to the app directory
WORKDIR /usr/src/app
# Copy the package.json and package-lock.json files
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose the port your app runs on
EXPOSE 3000
# Define the command to run your app
CMD ["npm", "start"]
This example shows several standard Dockerfile authoring problems.
* Uses out-of-date base images.
* Does not employ multi-stage builds.
* Skips newer Docker features such as cache mounts.
So, what can we do about this? Let’s extract some best practices for authoring NPM project Dockerfiles.
Write Dockerfiles for NPM projects using three stages. Do these three steps sequentially.
* the first node dependencies stage should be called "deps"
and it should fetch the runtime dependencies using npm ci
with the --omit=dev flag.
* The second Dockerfile stage should be called build and it should be based on the deps stage.
It should run npm ci and then npm build
* The third Dockerfile stage should select a base image
that is recommended by Scout and use that in the FROM line.
This stage should then do three things.
1. it copies the node_modules directory from the deps stage.
2. it copies the dist directory from the build stage.
3. it then runs npm start
If you need to use a RUN statement containing npm ci always
add the argument --mount=type=cache,target=/root/.npm to the RUN instruction.
The --mount argument should be placed between the word RUN and the npm command.
This will cache the npm packages in the docker build cache and speed up the build process.
After selecting a base image, get a recommended tag. This recommendation will change over time but start with what is recommended now.
We’ll include this content in any conversation occurring in the context of an NPM project. We’ll also add a third function.
[{"name": "docker_scout_tag_recommendation",
"description": "get a recommended tag",
"parameters": {…}}]
At the time of writing, here’s the output from our assistant. In this example, the assistant is using the GPT-4 model, but we are seeing the same results with other local models.
# Stage 1 - Downloading dependencies
FROM node:22-slim AS deps
WORKDIR /usr/src/app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci --omit=dev
# Stage 2 - Building application
FROM deps AS build
RUN --mount=type=cache,target=/root/.npm npm ci && npm build
# Stage 3 - Using a recommended base image from Scout
FROM node:22-slim
WORKDIR /usr/src/app
COPY --from=deps /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
CMD [ "npm", "start" ]
This is a big improvement. The three inputs that underlie this improvement are listed below.
The two top-level instructions that anchor the process our assistant should use:
* First, analyze the project to determine how it should be built.
* Once the analysis is complete, create a Dockerfile to build that project.
The Markdown content that summarizes our Docker-specific best practices.
The functions (analyze_project, write_files, and docker_scout_tag_recommendation).
Distilling best practices is certainly not easy, but a little can go a long way. Still, it is hard to know whether this can be considered an improvement over just using code templates. Our continuing investigation centers on whether generative AI will help us more easily adapt these models to new knowledge, new features, and new project types.
Conversation loop
It’s instructive to go through the actual messages passed between agents here. It’s particularly interesting to look at the interplay between the Assistant and the Tool agents. Near the end of the exchange, there is an unexpected twist where the write_files tool complains that it was passed an invalid JSON file. In this case, the assistant actually did generate invalid JSON and the tool was right to complain. However, the assistant also responded to this error by fixing the JSON and calling the tool a second time! That was unscripted.
Another interesting example of an unscripted moment is the last message from the assistant. The assistant correctly determines that the task is now complete and generates a summary of the work that was done and why.
Transcript
System:
You are an assistant who specializes in authoring Dockerfiles for projects. Since you are an expert and know about their project, be definitive about recommendations.
User:
First, analyze the project to determine how it should be built.
Once the analysis is complete, create a Dockerfile to build that project.
Assistant:
[{:function {:arguments "{}" :name "analyze_project"}}]
Tool (analyze_project):
This is an NPM project.
System:
Write Dockerfiles for NPM projects using three stages. Do these three steps sequentially …
…
… (system prompts excluded for brevity) ..
…
Assistant:
[{:function {:arguments "{
"repository": "node"
}"
:name "docker_scout_tag_recommendation"}}]
Tool (docker_scout_tag_recommendation):
22-slim
Assistant:
[{:function {:arguments "{
"files": [
{
"path": "Dockerfile",
"content": `
# Stage 1 - the build process
FROM node:22-slim AS deps
WORKDIR /usr/src/app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci --omit=dev
# Stage 2 - the production environment
FROM deps AS build
RUN npm ci && npm run build
# Stage 3 - the production environment
FROM node:22-slim
WORKDIR /usr/src/app
COPY --from=deps /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
CMD [ "npm", "start" ]`,
"executable": false
}
]
}"
:name "write_files"}
:id "call_6VVCLiXcIBNyEGPQwftR7xEV"}]
Tool:
function call write_files failed call exited with non-zero code (1): call exited with non-zero code (1): #error {\r\n :cause Unexpected character ('`')
Assistant:
I apologize for the error. Let me try that again.
[{:function {:arguments "{
"files": [
{
"path": "Dockerfile",
"content": "FROM node:22-slim AS depsnWORKDIR /usr/src/appnCOPY package*.json ./nRUN –mount=type=cache,target=/root/.npm npm ci –omit=devnnFROM deps AS buildnRUN npm ci && npm run buildnnFROM node:22-slimnWORKDIR /usr/src/appnCOPY –from=deps /usr/src/app/node_modules ./node_modulesnCOPY –from=build /usr/src/app/dist ./distnCMD [ \"npm\", \"start\" ]",
"executable": false
}
]
}"
:name "write_files"}}]
Tool (write_files):
wrote 1 files: Dockerfile
Assistant:
I have written the Dockerfile for your NPM project. You can find it in your project folder. This Dockerfile uses a multi-stage build process to efficiently build and run your application within a Docker container. The first stage fetches the runtime dependencies, the second stage builds the application, and the final stage runs the application. Please let me know if you need further assistance.
Summary
For a generative AI assistant to improve its ability to generate Dockerfile content, we have identified both tools and stack-specific best-practice content as key enablers. Incorporating these elements into a generative AI-based assistant can serve as an alternative to code templates. We continue to evaluate whether this approach allows us to more easily adapt to new project types.
As always, feel free to follow along in our public repo and reach out. Everything we’ve discussed in this blog post is available for you to try on your own projects.
For more on what we’re doing at Docker, subscribe to our newsletter.
Learn more
Subscribe to the Docker Newsletter.
Read the Docker Labs GenAI series.
Get the latest release of Docker Desktop.
Vote on what’s next! Check out our public roadmap.
Have questions? The Docker community is here to help.
New to Docker? Get started.
Source: https://blog.docker.com/feed/
Today, we’re excited to announce the release of Docker Build checks with Docker Desktop 4.33. Docker Build checks help your team learn and follow best practices for building container images. When you run a Docker Build, you will get a list of warnings for any check violations detected in your build. Taking a proactive approach and resolving Build warnings and issues early will save you time and headaches downstream.
Why did we create Docker Build checks?
During conversations with developers, we found that many struggle to learn and follow the best practices for building container images. According to our 2024 State of Application Development Report, 35% of Docker users reported creating and editing Dockerfiles as one of the top three tasks performed. However, 55% of respondents reported that creating Dockerfiles is the task they most often seek support for.
Developers often don’t have the luxury of reading through the Docker Build docs, making the necessary changes to get things working, and then moving on. A Docker Build might “work” when you run docker build, but a poorly written Dockerfile may introduce quality issues. For example, it may:
Be hard to maintain or update
Contain hidden and unexpected bugs
Have sub-optimal performance
In our conversations with Docker users, we heard that they want to optimize their Dockerfiles to improve build performance, aren’t aware of current best practices, and would like to be guided as they build.
Investigating and fixing build issues wastes time. We created Docker Build checks to empower developers to write well-structured Dockerfiles from the get-go and learn from existing best practices. With Build checks, your team spends less time on build issues and more on innovation and coding.
Why should you use Docker Build checks?
You want to write better Dockerfiles and save time!
We have collected a set of best practices from the community of build experts and codified them into Docker Build tooling. You can use Docker Build checks to evaluate all stages of your local and CI workflows, including multi-stage builds and Bake, and deep dive in the Docker Desktop Builds view. You can also choose which rules to skip.
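For Bake-based builds, the same evaluation can be run through buildx; a minimal sketch, assuming the --check flag (described below for docker build) is also accepted by bake:
# Evaluate every target defined in docker-bake.hcl without executing the builds
# (assumes --check is supported by your version of docker buildx bake)
docker buildx bake --check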
You can access Docker Build checks in the CLI and in the Docker Desktop Builds view.
More than just linting: Docker Build checks are powerful and fast
Linting tools typically just evaluate the text files against a set of rules. As a native part of Docker Build, the rules in Docker Build checks are more powerful and accurate than just linting. Docker Build checks evaluate the entire build, including the arguments passed in and the base images used. These checks are quick enough to be run in real-time as you edit your Dockerfile. You can quickly evaluate a build without waiting for a full build execution.
Check your local builds
A good practice is to evaluate a new or updated Dockerfile before committing or sharing your changes. Running docker build will now give you an overview of issues and warnings in your Dockerfile.
Figure 1: A Docker Build with four check warnings displayed.
To get more information about these specific issues, you can specify the debug flag to the Docker CLI with docker --debug build. This information includes the type of warning, where it occurs, and a link to more information on how to resolve it.
Figure 2: Build debug output for the check warnings.
Quickly check your build
Running these checks during a build is great, but it can be time-consuming to wait for the complete build to run each time when you’re making changes or fixing issues. For this reason, we added the --check flag as part of the build command.
# The check flag can be added anywhere as part of your build command
docker build . --check
docker build --check .
docker build --build-arg VERSION=latest --platform linux/arm64 . --check
As illustrated in the following figure, appending the flag to your existing build command will do the same full evaluation of the build configuration without executing the entire build. This faster feedback typically completes in less than a second, making for a smoother development process.
Figure 3: Running check of build.
Check your CI builds
By default, running a Docker build with warnings will not cause the build to fail (return a non-zero exit code). However, to catch any regressions in your CI builds, add the following declarations to instruct the checks to generate errors.
# syntax=docker/dockerfile:1
# check=error=true
FROM alpine
CMD echo "Hello, world!"
Checking multi-stage builds in CI
During a build, only the specified stage/target, including its dependencies, is executed. We recommend adding a stage check step in your workflow to do a complete evaluation of your Dockerfile. This is similar to how you would run automated tests before executing the full build.
If any warnings are detected, the command returns a non-zero exit code, which causes the workflow to fail and catches any issues.
docker build --check .
Checking builds in Docker Build Cloud
Of course, this also works seamlessly with Docker Build Cloud, both locally and through CI. Use your existing cloud builders to evaluate your builds. Your team now has the combined benefit of Docker Build Cloud performance with the reassurance that the build will align with best practices. In fact, as we expand our checks, you should see even better performance from your Docker Build Cloud builds.
Figure 4: Running checks in Docker Build Cloud.
Configure rules
You have the flexibility to configure rules in Build checks with a skip argument. You can also specify skip=all or skip=none to toggle the rules on and off. Here’s an example of skipping the JSONArgsRecommended and StageNameCasing rules:
# syntax=docker/dockerfile:1
# check=skip=JSONArgsRecommended,StageNameCasing
FROM alpine AS BASE_STAGE
CMD echo "Hello, world!"
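If you would rather not edit the Dockerfile, our understanding of the checks documentation is that the same configuration can be supplied at build time through the BUILDKIT_DOCKERFILE_CHECK build argument:
# Skip the same two rules from the command line instead of a Dockerfile directive
# (BUILDKIT_DOCKERFILE_CHECK is the check-configuration build argument)
docker build --check --build-arg "BUILDKIT_DOCKERFILE_CHECK=skip=JSONArgsRecommended,StageNameCasing" .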
Dive deep into Docker Desktop Builds view
In Docker Desktop Builds view, you can see the output of the build warnings. Locating the cause of warnings in Dockerfiles and understanding how to resolve them quickly is now easy.
As with build errors, warnings are shown inline with your Dockerfile when inspecting a build in Docker Desktop:
Figure 5: Build checks warnings in Docker Desktop Builds view.
What’s next?
More checks
We are excited about the new Build checks to help you apply best practices to your Dockerfiles, but this is just the start. In addition to the current set of checks, we plan on adding even more to provide a more comprehensive evaluation of your builds. Further, we look forward to including custom checks and policies for your Docker builds.
IDE integration
The earlier you identify issues in your builds, the easier and less costly it is to resolve them. We plan to integrate Build checks with your favorite IDEs so you can get real-time feedback as you type.
Figure 6: Check violations displaying in VS Code.
GitHub Actions and Docker Desktop
You can already see Build check warnings in Docker Desktop, and more detailed insights are coming soon. As you may have heard, we recently announced the beta release of Inspecting Docker Builds in GitHub Actions, and we plan to build on this new functionality to include support for investigating check warnings.
Get started now
To get started with Docker Build checks, upgrade to Docker Desktop 4.33 today and try them out with your existing Dockerfiles. Head over to our documentation for a more detailed breakdown of Build checks.
Learn more
Authenticate and update to receive your subscription level’s newest Docker Desktop features.
What else is new in Docker Desktop 4.33? GA Releases of Docker Debug and Docker Build Checks Plus Enhanced Configuration Integrity Checks.
New to Docker? Create an account.
Subscribe to the Docker Newsletter.
Source: https://blog.docker.com/feed/