Better Debugging: How the Signal0ne Docker Extension Uses AI to Simplify Container Troubleshooting

Consider this scenario: You fire up your Docker containers, hit an API endpoint, and … bam! It fails. Now what? The usual drill involves diving into container logs, scrolling through them to understand the error messages, and spending time looking for clues that will help you understand what’s wrong. But what if you could get a summary of what’s happening in your containers and potential issues with the proposed solutions already provided?

In this article, we’ll dive into a solution that solves this issue using AI. AI can already help developers write code, so why not help developers understand their system, too? 

Signal0ne is a Docker Desktop extension that scans Docker containers’ state and logs in search of problems, analyzes the discovered issues, and outputs insights to help developers debug. We first learned about Signal0ne as the winning submission in the 2023 Docker AI/ML Hackathon, and we’re excited to show you how to use it to debug more efficiently. 

Introducing Signal0ne Docker extension: Streamlined debugging for Docker

The magic of the Signal0ne Docker extension is its ability to shorten feedback loops for working with and developing containerized applications. Forget endless log diving — the extension offers a clear and concise summary of what’s happening inside your containers after logs and states are analyzed by an AI agent, pinpointing potential issues and even suggesting solutions. 

Developing applications these days involves more than a block of code executed in a vacuum. It is a complex system of dependencies and different user flows that need debugging from time to time. AI can help filter out the system noise and focus on providing data about specific issues in the system so that developers can debug faster and better.

Docker Desktop is one of the most popular tools used for local development with a huge community, and Docker features like Docker Debug enhance the community’s ability to quickly debug and resolve issues with their containerized apps.

Signal0ne Docker extension’s suggested solutions and summaries can help you while debugging your container or editing your code so that you can focus on bringing value as a software engineer. The term “developer experience” is often used, but this extension focuses on one crucial aspect: shortening development time. This translates directly to increased productivity, letting you build containerized applications faster and more efficiently.

How does the Docker Desktop extension work?

Between AI co-pilots tightly integrated into IDEs that help write code and browser-based AI chats that help explain software development concepts in a Q&A format, one piece is still missing: logs and runtime system data.

The Signal0ne Docker Desktop extension consists of three components: two hosted on the user's local system (UI and agent) and one in the Signal0ne cloud backend service. The agent scans the user's local environment for containers with invalid states, runtime issues, or warnings and errors in the logs. After discovering an issue, it collects additional data from the container definition for enhanced analysis.

After the Signal0ne agent's discovery step, the data is sent to the backend service, where a combination of a pre-trained LLM and a solution search retrieval service performs the analysis. The results of the analysis can be viewed in the Signal0ne extension UI and include:

Short log summary — Outlines what is happening within a particular container; the logs on which the analysis was based can be accessed from the sources dropdown if you wish.

Solutions summary — One paragraph about possible solutions to the analyzed issue with some explanation of why this particular solution was proposed.

Solution sources — Links to websites where the issue has already been solved by the community (for example, a GitHub issue or StackOverflow post).

Watch this short video demonstration for more details: 

Technical details

The issue analysis pipeline is powered by the LangChain tooling framework, which uses the open source Mistral 7B LLM together with an LLM-optimized web search that browses sources of existing solutions on the open web (Figure 1).

Figure 1: Signal0ne architecture overview.

The following section shows sample issue analysis pipeline output:

Logs from Docker container:

$ irb
>> "12341:asdf"[/\d+/]
# => "12341"
>> "12341:asdf"[/\d*/]
# => "12341"
>> "12341:asdf"[0..5]
# => "12341:"
>> "12341:asdf"[0...5]
# => "12341"
>> "12341:asdf"[0, ':']
TypeError: can't convert String into Integer
        from (irb):5:in

Summary: 

The user is trying to extract a substring from the string 12341:asdf using different Ruby methods, such as [/\d+/], [/\d*/], [0..5], [0...5], and [0, ':']. However, when attempting to use the indexing method [0, ':'], an error occurs because the method expects an integer argument but receives a string instead.

Predicted solutions: 

“To extract a substring from the given string 12341:asdf without encountering an error when using indexing methods, consider the following options:

1. Using regular expressions:

To extract a substring using regular expressions, you can use the scan method with a regular expression pattern. For instance, to extract the digits before the colon, you can use the following code snippet:

str = "12341:asdf"
substring = str.scan(/[\d]+/)
puts substring.first
# Output: ["12341"]
puts substring.first[0]
# Output: 12341

2. Slicing:

Another alternative to indexing is slicing, which allows you to extract a specified range of characters from a string. To extract the digits before the colon using slicing, you can use the following code snippet:

str = "12341:asdf"
substring = str[0...str.index(":")]
puts substring
# Output: "12341"
puts substring.to_i
# Output: 12341

These two methods should help you extract the desired substring from the given string without encountering an error.

Getting started

Getting started with the Signal0ne Docker Desktop extension is a straightforward process that lets developers start benefiting from AI-assisted issue analysis in just a few steps.

Here are the steps for installing Signal0ne Docker extension:

1. Install Docker Desktop.

2. Choose Add Extensions in the left sidebar. The Browse tab will appear by default (Figure 2).

Figure 2: Signal0ne extension installation from the marketplace.

3. In the Filters drop-down, select the Utility tools category.

4. Find Signal0ne and then select Install (Figure 3).

Figure 3: Extension installation process.

5. Log in after the extension is installed (Figure 4).

Figure 4: Signal0ne extension login screen.

6. Start developing your apps, and, if you face some issues while debugging, have a look at the Signal0ne extension UI. The issue analysis will be there to help you with debugging.

Make sure the Signal0ne agent is enabled by toggling on (Figure 5):

Figure 5: Agent settings tab.

Figure 6 shows the summary and sources:

Figure 6: Overview of the inspected issue.

Proposed solutions and sources are shown in Figures 7 and 8. Solution sources will redirect you to a webpage with the predicted solution:

Figure 7: Overview of proposed solutions to the encountered issue.

Figure 8: Overview of the list of helpful links.

If you want to contribute to the project, you can leave feedback via the Like or Dislike button in the issue analysis output (Figure 9).

Figure 9: You can leave feedback about analysis output for further improvements.

To explore the Signal0ne Docker Desktop extension without using your own containers, consider experimenting with dummy containers using the following Docker Compose file to observe how the logs are analyzed and how helpful the output and insights are:

services:
  broken_bulb: # C# application that cannot start properly
    image: 'signal0neai/broken_bulb:dev'
  faulty_roger: # Python API server that cannot reach its database
    image: 'signal0neai/faulty_roger:dev'
  smoked_server: # nginx server hosting the website with a misconfiguration
    image: 'signal0neai/smoked_server:dev'
    ports:
      - '8082:8082'
  invalid_api_call: # Python web server with a bug
    image: 'signal0neai/invalid_api_call:dev'
    ports:
      - '5000:5000'

broken_bulb: This service uses the image signal0neai/broken_bulb:dev. It's a C# application that throws a System.NullReferenceException during startup. With this application, you can observe how Signal0ne discovers the failed container, extracts the error logs, and analyzes them.

faulty_roger: This service uses the image signal0neai/faulty_roger:dev. It is a Python API server that is trying to connect to an unreachable database on localhost.

smoked_server: This service uses the image signal0neai/smoked_server:dev. The smoked_server service is an Nginx instance that returns 403 Forbidden when the user tries to access the root path (http://127.0.0.1:8082/). Signal0ne can help you debug that.

invalid_api_call: An API service with a bug in one of the endpoints. To generate an error, call http://127.0.0.1:5000/create-table after running the container. Then follow Signal0ne's analysis and try to debug the issue.
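To try the stack yourself, save the Compose file above and trigger one of the issues, for example the buggy endpoint of invalid_api_call. The file name compose.yaml below is an assumption, not part of the original instructions:

docker compose -f compose.yaml up -d
# Generate an error for Signal0ne to analyze
curl http://127.0.0.1:5000/create-table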

Conclusion

Debugging containerized applications can be time-consuming and tedious, often involving endless scrolling through logs and searching for clues to understand the issue. However, with the introduction of the Signal0ne Docker extension, developers can now streamline this process and boost their productivity significantly.

By leveraging the power of AI and language models, the extension provides clear and concise summaries of what’s happening inside your containers, pinpoints potential issues, and even suggests solutions. With its user-friendly interface and seamless integration with Docker Desktop, the Signal0ne Docker extension is set to transform how developers debug and develop containerized applications.

Whether you’re a seasoned Docker user or just starting your journey with containerized development, this extension offers a valuable tool that can save you countless hours of debugging and help you focus on what matters most — building high-quality applications efficiently. Try the extension in Docker Desktop today, and check out the documentation on GitHub.

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

A Promising Methodology for Testing GenAI Applications in Java

In the vast universe of programming, the era of generative artificial intelligence (GenAI) has marked a turning point, opening up a plethora of possibilities for developers.

Tools such as LangChain4j and Spring AI have democratized access to the creation of GenAI applications in Java, allowing Java developers to dive into this fascinating world. With LangChain4j, for instance, setting up and interacting with large language models (LLMs) has become exceptionally straightforward. Consider the following Java code snippet:

public static void main(String[] args) {
    var llm = OpenAiChatModel.builder()
            .apiKey("demo")
            .modelName("gpt-3.5-turbo")
            .build();

    System.out.println(llm.generate("Hello, how are you?"));
}

This example illustrates how a developer can quickly instantiate an LLM within a Java application. By simply configuring the model with an API key and specifying the model name, developers can begin generating text responses immediately. This accessibility is pivotal for fostering innovation and exploration within the Java community. More than that, we have a wide range of models that can be run locally, and various vector databases for storing embeddings and performing semantic searches, among other technological marvels.

Despite this progress, however, we are faced with a persistent challenge: the difficulty of testing applications that incorporate artificial intelligence. This aspect seems to be a field where there is still much to explore and develop.

In this article, I will share a methodology that I find promising for testing GenAI applications.

Project overview

The example project focuses on an application that provides an API for interacting with two AI agents capable of answering questions. 

An AI agent is a software entity designed to perform tasks autonomously, using artificial intelligence to simulate human-like interactions and responses. 

In this project, one agent uses direct knowledge already contained within the LLM, while the other leverages internal documentation to enrich the LLM through retrieval-augmented generation (RAG). This approach allows the agents to provide precise and contextually relevant answers based on the input they receive.

I prefer to omit the technical details about RAG, as ample information is available elsewhere. I’ll simply note that this example employs a particular variant of RAG, which simplifies the traditional process of generating and storing embeddings for information retrieval.

Instead of dividing documents into chunks and making embeddings of those chunks, in this project, we will use an LLM to generate a summary of the documents. The embedding is generated based on that summary.

When the user writes a question, an embedding of the question will be generated and a semantic search will be performed against the embeddings of the summaries. If a match is found, the user’s message will be augmented with the original document.

This way, there’s no need to deal with the configuration of document chunks, worry about setting the number of chunks to retrieve, or worry about whether the way of augmenting the user’s message makes sense. If there is a document that talks about what the user is asking, it will be included in the message sent to the LLM.
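To make the flow concrete, here is a minimal sketch of this summary-based retrieval in plain Java. The summarize(), embed(), and cosineSimilarity() methods are hypothetical stand-ins for calls to your LLM, your embedding model, and a vector-math helper; they are not the project's actual API.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class SummaryBasedRetrieval {

    // A document indexed by the embedding of its LLM-generated summary.
    record IndexedDocument(float[] summaryEmbedding, String originalText) {}

    private final List<IndexedDocument> index = new ArrayList<>();

    void ingest(String document) {
        String summary = summarize(document); // hypothetical LLM call that summarizes the document
        index.add(new IndexedDocument(embed(summary), document));
    }

    String augment(String question) {
        float[] questionEmbedding = embed(question); // hypothetical embedding-model call
        // Semantic search runs against the summary embeddings, not the raw documents.
        IndexedDocument best = index.stream()
                .max(Comparator.comparingDouble(
                        d -> cosineSimilarity(questionEmbedding, d.summaryEmbedding())))
                .orElse(null);
        if (best == null
                || cosineSimilarity(questionEmbedding, best.summaryEmbedding()) < 0.7) { // 0.7 is an arbitrary relevance threshold
            return question; // no matching document; send the question as-is
        }
        // Augment the user's message with the original (full) document, not the summary.
        return question + "\n\nUse the following documentation to answer:\n" + best.originalText();
    }
}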

Technical stack

The project is developed in Java and utilizes a Spring Boot application with Testcontainers and LangChain4j.

For setting up the project, I followed the steps outlined in Local Development Environment with Testcontainers and Spring Boot Application Testing and Development with Testcontainers.

I also use Testcontainers Desktop to facilitate database access, to verify the generated embeddings, and to review the container logs.

The challenge of testing

The real challenge arises when trying to test the responses generated by language models. Traditionally, we could settle for verifying that the response includes certain keywords, which is insufficient and prone to errors.

static String question = "How I can install Testcontainers Desktop?";

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();

    assertThat(answer).contains("https://testcontainers.com/desktop/");
}

This approach is not only fragile but also lacks the ability to assess the relevance or coherence of the response.

An alternative is to employ cosine similarity to compare the embeddings of a “reference” response and the actual response, providing a more semantic form of evaluation. 

This method measures the similarity between two vectors/embeddings by calculating the cosine of the angle between them. If both vectors point in the same direction, it means the “reference” response is semantically the same as the actual response.
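The getCosineSimilarity() helper used in the following test is not shown in the article; here is a minimal sketch of what it implies, assuming an embedding model that turns each text into a float vector (the embed() call is a hypothetical stand-in):

static double getCosineSimilarity(String reference, String answer) {
    float[] a = embed(reference); // hypothetical call to your embedding model
    float[] b = embed(answer);

    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    // Cosine of the angle between the two embedding vectors:
    // values close to 1.0 mean the texts are semantically similar.
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}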

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
        - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
        - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
        - Answer must be less than 5 sentences
        """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();

    double cosineSimilarity = getCosineSimilarity(reference, answer);

    assertThat(cosineSimilarity).isGreaterThan(0.8);
}

However, this method introduces the problem of selecting an appropriate threshold to determine the acceptability of the response, in addition to the opacity of the evaluation process.

Toward a more effective method

The real problem here arises from the fact that answers provided by the LLM are in natural language and non-deterministic. Because of this, using current testing methods to verify them is difficult, as these methods are better suited to testing predictable values. 

However, we already have a great tool for understanding non-deterministic answers in natural language: LLMs themselves. Thus, the key may lie in using one LLM to evaluate the adequacy of responses generated by another LLM. 

This proposal involves defining detailed validation criteria and using an LLM as a "Validator Agent" to determine whether the responses meet the specified requirements. This approach can be applied to validate answers to specific questions, drawing on both general knowledge and specialized information.

By incorporating detailed instructions and examples, the Validator Agent can provide accurate and justified evaluations, offering clarity on why a response is considered correct or incorrect.

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
        - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
        - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
        - Answer must be less than 5 sentences
        """;

@Test
void verifyStraightAgentFailsToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/straight?question={question}", ChatController.ChatResponse.class, question).message();

    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);

    assertThat(validate.response()).isEqualTo("no");
}

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();

    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);

    assertThat(validate.response()).isEqualTo("yes");
}

We can even test more complex responses where the LLM should suggest a better alternative to the user’s question.

static String question = "How I can find the random port of a Testcontainer to connect to it?";
static String reference = """
        - Answer must not mention using getMappedPort() method to find the random port of a Testcontainer
        - Answer must mention that you don't need to find the random port of a Testcontainer to connect to it
        - Answer must indicate that you can use the Testcontainers Desktop app to configure fixed port
        - Answer must be less than 5 sentences
        """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToDebugWithTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();

    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);

    assertThat(validate.response()).isEqualTo("yes");
}

Validator Agent

The configuration for the Validator Agent doesn’t differ from that of other agents. It is built using the LangChain4j AI Service and a list of specific instructions:

public interface ValidatorAgent {

    @SystemMessage("""
            ### Instructions
            You are a strict validator.
            You will be provided with a question, an answer, and a reference.
            Your task is to validate whether the answer is correct for the given question, based on the reference.

            Follow these instructions:
            - Respond only 'yes', 'no' or 'unsure' and always include the reason for your response
            - Respond with 'yes' if the answer is correct
            - Respond with 'no' if the answer is incorrect
            - If you are unsure, simply respond with 'unsure'
            - Respond with 'no' if the answer is not clear or concise
            - Respond with 'no' if the answer is not based on the reference

            Your response must be a json object with the following structure:
            {
                "response": "yes",
                "reason": "The answer is correct because it is based on the reference provided."
            }

            ### Example
            Question: Is Madrid the capital of Spain?
            Answer: No, it's Barcelona.
            Reference: The capital of Spain is Madrid
            ###
            Response: {
                "response": "no",
                "reason": "The answer is incorrect because the reference states that the capital of Spain is Madrid."
            }
            """)
    @UserMessage("""
            ###
            Question: {{question}}
            ###
            Answer: {{answer}}
            ###
            Reference: {{reference}}
            ###
            """)
    ValidatorResponse validate(@V("question") String question, @V("answer") String answer, @V("reference") String reference);

    record ValidatorResponse(String response, String reason) {}
}

As you can see, I’m using Few-Shot Prompting to guide the LLM on the expected responses. I also request a JSON format for responses to facilitate parsing them into objects, and I specify that the reason for the answer must be included, to better understand the basis of its verdict.

Conclusion

The evolution of GenAI applications brings with it the challenge of developing testing methods that can effectively evaluate the complexity and subtlety of responses generated by advanced artificial intelligences. 

The proposal to use an LLM as a Validator Agent represents a promising approach, paving the way towards a new era of software development and evaluation in the field of artificial intelligence. Over time, we hope to see more innovations that allow us to overcome the current challenges and maximize the potential of these transformative technologies.

Learn more

Check out the GenAI Stack to get started with adding AI to your apps. 

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

AI Trends Report 2024: AI’s Growing Role in Software Development

The landscape of application development is rapidly evolving, propelled by the integration of artificial intelligence (AI) into the development process. The Docker AI Trends Report 2024, a precursor to the upcoming State of Application Development Report, reveals interesting AI trends among developers, which we highlight below.

The most recent Docker State of Application Development Survey results offer insights into how developers are adopting and utilizing AI, reflecting a shift toward more intelligent, efficient, and adaptable development methodologies. This transformation is part of a larger trend observed across the tech industry as AI becomes increasingly central to software development.

The annual Docker State of Application Development survey, conducted by our User Research Team, is one way Docker product managers, engineers, and designers gather insights from Docker users to continuously develop and improve the suite of tools the company offers. For example, in Docker’s 2022 State of Application Development Survey, we found that the task for which Docker users most often refer to support/documentation was creating a Dockerfile (reported by 60% of respondents). This finding helped spur the innovation of Docker AI.

More than 1,300 developers participated in the latest Docker State of Application Development survey, conducted in late 2023. The online survey asked respondents about what tools they use, their application development processes and frustrations, feelings about industry trends, Docker usage, and participation in developer communities. We wanted to know where developers are focused, what they’re working on, and what is most important to them.

Of the approximately 1,300 respondents to the survey, 885 completed it. The findings in this report are based on the 885 completed responses. 

Who responded to the Docker survey?

Respondents who took our survey ranged from home hobbyists to professionals at companies with more than 5,000 employees. Forty-two percent of respondents are working for a small company (up to 100 employees), 28% of participants say they work for mid-sized companies (between 100 and 1,000 employees), and 25% work for large companies (more than 1,000 employees). 

Well over half of the respondents were in engineering roles — for example, 36% of respondents identified as back-end or full-stack developers; 21% were DevOps, infrastructure managers, or platform engineers; and 4% were front-end developers. Other roles of respondents included dev/engineering managers, company leadership, product managers, security roles, and AI/ML roles. There was nearly an even split between respondents with more experience (6+ years, 54%) and those with less experience (0-5 years, 46%).

Our survey underscored a marked growth in roles focused on machine learning (ML) engineering and data science within the Docker ecosystem. In our 2022 survey, approximately 1% of respondents represented this demographic, whereas they made up 8% in the most recent survey. ML engineers and data scientists represent a rapidly expanding user base. This signals the growing relevance of AI to the software development field, and the blurring of the lines between tools used by developers and tools used by AI/ML scientists. 

More than 34% of respondents said they work in the computing or IT/SaaS industry, but we also saw responses from individuals working in accounting, banking, or finance (8%); business, consultancy, or management (7%); engineering or manufacturing (6%), and education (5%). Other responses came in from professionals in a wide range of fields, including media; academic research; transport or logistics; retail; marketing, advertising, or PR; charity or volunteer work; healthcare; construction; creative arts or design; and environment or agriculture.

Docker users made up 87% of our respondents, whereas 13% reported that they do not use Docker.  

AI as an up-and-coming trend

We asked participants what they felt were the most important trends currently in the industry. GenAI (40% of respondents) and AI assistants for software engineering (38% of respondents) were the top-selected options identified as important industry trends in software development. More senior developers (back-end, front-end, and full-stack developers with over 5 years of experience) tended to view GenAI as most important, whereas more junior developers (less than 5 years of experience) view AI assistants for software engineering as most important. This difference may signal varied and unique uses of AI throughout a career in software development.

It's clearly trendy, but how do developers really feel about AI? The majority (65%) agree that AI is a positive option, that it makes their jobs easier (61%), and that it allows them to focus on more important tasks (55%). A much smaller number of respondents see AI as a threat to their jobs (23%) or say it makes their jobs more difficult (19%).

Interestingly, despite high usage and generally positive feelings toward AI, 45% of respondents also reported that they feel AI is over-hyped. Why might this be? It's not fully clear, but when this finding is considered alongside responses about perceived job threat, one possible interpretation emerges: respondents may view AI as a critical and useful tool for their work, but they aren't too worried about it replacing them anytime soon.

How AI is used in the developer’s world

We asked users what they use AI for, how dependent they feel on AI, and what AI tools they use most often. A majority of developers (64%) already report using AI for work, underscoring AI’s penetration into the software development field. Developers leverage AI at work mainly for coding (33% of respondents), writing documentation (29%), research (28%), writing tests (23%), troubleshooting/debugging (21%), and CLI commands (20%). 

For the 568 respondents who indicated they use AI for work, we also asked how dependent they felt on AI to get their job done on a scale of 0 (not at all dependent) to 10 (completely dependent). Responses ranged substantially and varied by role and years of experience, but the overall average reported dependence was about 4 out of 10, indicating relatively low dependence.

In the developer toolkit, respondents indicate that AI tools like ChatGPT (46% of respondents), GitHub Copilot (30%), and Bard (19%) stand out as most frequently used. 

Conclusion

To conclude our 2024 Docker AI Trends Report: artificial intelligence is already shifting the way software development is approached. The insights from more than 800 respondents in our latest survey illuminate a path toward a future where AI is seamlessly integrated into every aspect of application development. From coding and documentation to debugging and writing tests, AI tools are becoming indispensable in enhancing efficiency and problem-solving capabilities, allowing developers to focus on more creative and important work.

The uptake of AI tools such as ChatGPT, GitHub Copilot, and Bard among developers is a testament to AI’s value in the development process. Moreover, the growing interest in machine learning engineering and data science within the Docker community signals a broader acceptance and integration of AI technologies.

As Docker continues to innovate and support developers in navigating these changes, the evolving landscape of AI in software development presents both opportunities and challenges. Embracing AI as a positive force that can augment human capabilities rather than replace them is crucial. Docker is committed to facilitating this transition by providing tools and resources that empower developers to leverage AI effectively, ensuring they can remain at the forefront of technological innovation.

Looking ahead, Docker will continue to monitor these trends, adapt our offerings accordingly, and support our user community in harnessing the full potential of AI in software development. As the industry evolves, so too will Docker’s role in shaping the future of application development, ensuring our users are equipped to meet the challenges and seize the opportunities that lie ahead in this exciting era of AI-driven development.

Learn more

Introducing a New GenAI Stack: Streamlined AI/ML Integration Made Easy

Get started with the GenAI Stack: Langchain + Docker + Neo4j + Ollama

Building a Video Analysis and Transcription Chatbot with the GenAI Stack

Docker Partners with NVIDIA to Support Building and Running AI/ML Applications

Build Multimodal GenAI Apps with OctoAI and Docker

How IKEA Retail Standardizes Docker Images for Efficient Machine Learning Model Deployment

Case Study: How Docker Accelerates ZEISS Microscopy’s AI Journey

Docker AI: From Prototype to CI/CD Pipeline solutions brief

Containerize a GenAI app (use case guide)

Docker’s User Research Team — Olga Diachkova, Julia Wilson, and Rebecca Floyd — conducted this survey, analyzed the results, and provided insights.

For a complete methodology, contact uxresearch@docker.com.
Source: https://blog.docker.com/feed/

Docker Desktop 4.29: Docker Socket Mount Permissions in ECI, Advanced Error Management, Moby 26, and New Beta Features 

The release of Docker Desktop 4.29 introduces enhancements to secure and streamline the development process and to improve error management and workflow efficiency. With the integration of Enhanced Container Isolation (ECI) with Docker socket mount permissions, the debut of Moby 26 within Docker Desktop, and exciting features such as Docker Compose enhancements via synchronized file shares reaching beta release, we’re equipping developers with the essential resources to tackle the complexities of modern development head-on.

Dive into the details to discover these new enhancements and get a sneak peek at exciting advancements currently in beta release.

Enhanced Container Isolation with Docker socket mount permissions 

We’re pleased to unveil a new feature in the latest Docker Desktop release, now in General Availability to Business subscribers, that further improves Desktop’s Enhanced Container Isolation (ECI) mode: Docker socket mount permissions. This update blends robust security with the flexibility you love, allowing you to enjoy key development tools like Testcontainers with the peace of mind provided by ECI’s unprivileged containers. Initially launched in beta with Docker Desktop 4.27, this update moves the ECI Docker socket mount permissions feature to General Availability (GA), demonstrating our commitment to making Docker Desktop the best modern application development platform.

The Docker Engine socket, a crucial component for container management, has historically been a vector for potential security risks. Unauthorized access could enable malicious activities, such as supply chain attacks. However, legitimate use cases, like the Testcontainers framework, require socket access for operational tasks.

With ECI, Docker Desktop enhances security by default, blocking unapproved bind-mounting of the Docker Engine socket into containers. Yet, recognizing the need for flexibility, we introduce controlled access through admin-settings.json configuration. This allows specified images to bind-mount the Docker socket, combining security with functionality. 

Key features include:

Selective permissions: Admins can now specify which container images can access the Docker socket through a curated imageList, ensuring that only trusted containers have the necessary permissions.

Command restrictions: The commandList feature further tightens security by limiting the Docker commands approved containers can execute, acting as a secondary defense layer (see the configuration sketch below).
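For illustration, here is a sketch of what such an admin-settings.json entry might look like. The image patterns and command restrictions shown are placeholders rather than recommendations, and the exact schema and file location are described in the Docker Desktop documentation, so treat this only as an orientation:

{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "locked": true,
    "value": true,
    "dockerSocketMount": {
      "imageList": {
        "images": [
          "docker.io/localstack/localstack:*",
          "docker.io/testcontainers/ryuk:*"
        ]
      },
      "commandList": {
        "type": "deny",
        "commands": ["push"]
      }
    }
  }
}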

While we celebrate this release, our journey doesn’t stop here. We’re continuously exploring ways to expand Docker Desktop’s capabilities, ensuring our users can access the most secure, efficient, and user-friendly containerization tools.

Stay tuned for further security enhancements, including our beta release of air-gapped containers. Update to Docker Desktop 4.29 to start leveraging the full potential of Enhanced Container Isolation with Docker socket mount permissions today.

Advanced error management in Docker Desktop 

We’re redefining error management to significantly improve the developer experience. This update isn’t just about fixing bugs; it’s a comprehensive overhaul aimed at making the development process more efficient, reliable, and user-friendly.

Central to this update is our shift toward self-service troubleshooting and resilience, transforming errors from roadblocks into opportunities for growth and learning. The new system presents actionable insights for errors, ensuring developers can swiftly move toward a resolution.

Key enhancements include:

An enhanced error interface: Combining error codes with explanatory text and support links, making troubleshooting straightforward.

Direct diagnostic uploads: Allowing users to share diagnostics from the error screen, streamlining support. 

Reset and exit options: Offering quick fixes directly from the error interface.

Self-service remediation: Providing clear, actionable steps for users to resolve issues independently (Figure 1).

Figure 1: Error message displaying self-service remediation options.

This update marks a significant leap in our commitment to enhancing the Docker Desktop user experience, empowering developers, and reducing the need for support tickets. Read Next-Level Error Handling: How Docker Desktop 4.29 Aims to Simplify Developer Challenges on our blog to dive deeper into these enhancements and discover how Docker Desktop 4.29 is setting a new standard for error management and developer support.

New in Docker Engine: Volume subpath mounts, networking enhancements, BuildKit 0.13, and more 

In the latest Docker Engine update, Moby 26, packaged in Docker Desktop 4.29, introduces several enhancements aimed at enriching the developer experience. Here’s the breakdown of what’s new: 

Volume subpath mounts: Responding to widespread user requests, we’ve made it possible to mount a subdirectory as a named volume. This addition enhances flexibility and control over data management within containers. Detailed guidance on specifying these mounts is available in the docs. 

Networking enhancements: Significant improvements have been made to bolster the stability of networking capabilities within the engine, along with preliminary efforts to support future IPv6 enhancements.

Integration of BuildKit 0.13: Among other updates, this BuildKit version includes experimental support for Windows Containers, ensuring builds remain dependable and efficient.

Streamlined API: Deprecated API versions have been removed, concentrating on quality enhancements and promoting a more secure, reliable environment.

Multi-platform image enhancements: In this release, you’ll see an improved docker images UX as we’ve combined image entries for multi-platform images.

Beta release highlights

Docker Debug in Docker Desktop GUI and CLI 

Docker Debug (Beta), a recent addition to Docker Desktop, streamlines the debugging process for developers. This feature, accessible in Docker Pro, Teams, and Business subscriptions, offers a shell for efficiently debugging both local and remote containerized applications — even those that fail to run. With Docker Debug, developers can swiftly pinpoint and address issues, freeing up more time for innovation.

Now, in beta release, Docker Debug introduces comprehensive debugging directly from the Docker Desktop CLI for active and inactive containers alike. Moreover, the Docker Desktop GUI has been enhanced with an intuitive option: Click the toggle in the Exec tab within a container to switch on Debug mode to start debugging with the necessary tools at your fingertips.

Figure 2: Docker Desktop containers view showcasing debugging a running container with Docker Debug.

To dive into Docker Debug, ensure you're logged in with your subscription account, then initiate debugging by executing docker debug <Container or Image name> in the CLI or by selecting a container from the GUI container list for immediate debugging, whether the container is local or in the cloud.
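For example, from a terminal (my-api is a hypothetical container name used here only for illustration):

docker debug my-api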

Improved volume backup capabilities 

With our latest release, we're elevating volume backup capabilities in Docker Desktop, introducing an upgraded feature set in beta release. This enhancement integrates the Volumes Backup & Share extension directly into Docker Desktop, streamlining your backup processes.

Figure 3: Docker Desktop Volumes view showcasing new backup functionality.

This release marks a significant step forward, but it’s just the beginning. We’re committed to expanding these capabilities, adding even more value in future updates. Start exploring the new feature today and prepare for an enhanced backup experience soon.

Support for host network mode on Docker Desktop for Mac and Windows 

Support for host network mode (docker run --net=host), previously limited to Linux users, is now available for Mac and Windows Docker Desktop users, offering enhanced networking capabilities and flexibility.

With host network mode support, Docker Desktop becomes a more versatile tool for advanced networking tasks, such as dynamic network penetration testing, without predefined port mappings. This feature is especially useful for applications requiring the ability to dynamically accept connections on various ports, just as if they were running directly on the host. Features include:

Simplified networking: Eases the setup for complex networking tasks, facilitating security testing and the development of network-centric applications.

Greater flexibility: Allows containers to use the host’s network stack, avoiding the complexities of port forwarding.

Figure 4: The host network mode enhancement in Preview Beta reflects our commitment to improving Docker Desktop and is available with all Docker subscriptions after authenticating.

Enhancing security with Docker Desktop’s new air-gapped containers

Docker Desktop’s latest beta feature, air-gapped containers, is now available in version 4.29, reflecting our deep investment in security enhancements. This Business subscription feature empowers administrators to limit container access to network resources, tightening security across containerized applications by: 

Restricting network access: Ensuring containers communicate only with approved sources.

Customizing proxy rules: Allowing detailed control over container traffic.

Enhancing data protection: Preventing unauthorized data transfer in or out of containers.

The introduction of air-gapped containers is part of our broader effort to make Docker Desktop not just a development tool, but an even more secure development environment. We’re excited about the potential this feature holds for enhancing security protocols and simplifying the management of sensitive data.

Compose bind mount support with synchronized file shares 

We're elevating the Docker Compose experience for our subscribers by integrating synchronized file shares (SFS) directly into Compose. This feature eradicates the sluggishness typically associated with managing large codebases in containers. Formerly known as Mutagen, synchronized file shares enhance bind mounts with native filesystem performance, accelerating file operations by an impressive 2-10x. This leap forward is especially impactful for developers handling extensive codebases, effortlessly streamlining their workflow.

With a Docker subscription, you’ll find that Docker Compose and SFS work together seamlessly, automatically optimizing bind mounts to significantly boost synchronization speeds. This integration requires no additional configuration; Compose intelligently activates SFS whenever a bind mount is used, instantly enhancing your development process.
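For example, a Compose service with an ordinary bind mount needs no changes to benefit; once the feature is enabled, Compose activates a synchronized file share for it automatically. The service name and paths below are illustrative only:

services:
  web:
    build: .
    volumes:
      - ./src:/app/src # a regular bind mount; with SFS enabled, Compose synchronizes it automatically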

Enabling synchronized file shares in Compose is simple:

Log into Docker Desktop.

Under Settings, navigate to Features in development and choose the Experimental features tab.

Enable Access experimental features and Manage Synchronized file shares with Compose.

Once set up via Docker Desktop settings, these folders act as standard bind mounts with the added benefit of SFS speed enhancements. 

Figure 5: Docker Desktop settings displaying the option to turn on synchronized file shares with Docker Compose.

Figure 6: Demonstration of compose up creating and synching shares in the terminal.

If your Compose project relies on a bind mount that could benefit from synchronized file shares, the initial share creation must be done through the Docker Desktop GUI.

Embrace the future of Docker Compose with Docker Desktop’s synchronized file shares and transform your development workflow with unparalleled speed and efficiency.

Try Docker Desktop 4.29 now

Docker Desktop 4.29 introduces updates focused on innovation, security, and enhancing the developer experience. This release integrates community feedback and advances Docker’s capabilities, providing solutions that meet developers’ and businesses’ immediate needs while setting the stage for future features. We advise all Docker users to upgrade to version 4.29. Please note that access to certain features in this release requires authentication and may be contingent upon your subscription tier. We encourage you to evaluate your feature needs and select the subscription level that best suits your requirements.

Join the conversation

Dive into the discussion and contribute to the evolution of Docker Desktop. Use our feedback form to share your thoughts and let us know how to improve the Hardened Desktop features. Your input directly influences the development roadmap, ensuring Docker Desktop meets and exceeds our community and customers’ needs.

Learn more

Authenticate and update to receive the newest Docker Desktop features per your subscription level.

New to Docker? Create an account.

Read the Docker Desktop Release Notes.

Read Next-Level Error Handling: How Docker Desktop 4.29 Aims to Simplify Developer Challenges.

Learn about Docker Build Cloud and how you can leverage cloud resources directly from Docker Desktop.

Subscribe to the Docker Newsletter.

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

Next-Level Error Handling: How Docker Desktop 4.29 Aims to Simplify Developer Challenges

Imagine you’re deep in the zone, coding away on a groundbreaking project. The ideas are flowing, the coffee’s still warm, and then — bam! An error message pops up, halting your progress like a red light at a busy intersection. We’ve all been there, staring at cryptic codes or vague advice, feeling more like ancient mariners navigating by the stars than modern developers armed with cutting-edge technology.

This scenario is all too familiar in the world of software development. With an arsenal of tools, languages, platforms, and security protocols at our disposal, the complexity of our work environment has skyrocketed. For developers charting the unexplored territories of innovation, encountering errors can feel like facing a tempest with a leaky boat. But fear not, for Docker Desktop sails to the rescue with a lighthouse’s guidance: a new, intuitive prompt that sheds light on the mysterious seas of error messages.

Enhancing Docker Desktop with advanced error management

In our Docker Desktop 4.29 release, we’ve embarked on an ambitious journey to elevate the user experience by fundamentally redefining error management. This initiative goes far beyond simple bug fixes; it aims to create a development environment that is not only more efficient and reliable but also more satisfying for developers. At the heart of these enhancements is our unwavering commitment to empowering users and providing them with the tools they need to recover swiftly from any setbacks they may encounter.

This strategic update is built around a core objective: pivoting Docker Desktop toward a model that supports self-service troubleshooting and fosters user resilience. By reimagining errors as opportunities for learning and growth, we’re doing more than just solving technical problems. We’re transforming how developers interact with Docker Desktop, enabling them to overcome challenges confidently and enhance their skills in the process. The changes introduced in Docker Desktop 4.29 signify a significant leap forward in our mission to address user issues and enhance their ability to navigate the complexities of software development with ease and efficiency.

Bridging the gap: From frustration to resolution

Previously, encountering an error in Docker Desktop could feel like reaching a dead end. Users were often greeted with cryptic error codes or minimal guidance, lacking the necessary context for a swift resolution. This outdated approach impeded user experience, efficiency, and overall satisfaction. The contrast with our new system couldn’t be more stark: Users now receive actionable insights when an error arises, ensuring every issue is a step toward a solution (Figure 1).

Figure 1: Previous Docker Desktop error message: 4.28 and earlier did not provide intuitive instructions on how to remediate.

Empowering users, reducing support tickets

This latest update introduces an intuitive error management interface, direct diagnostic uploads, and self-service remediation options. These enhancements make troubleshooting more accessible and reduce the need for support inquiries, improving Docker Desktop’s usability and reliability specifically in the following ways: 

1. Enhanced error interface: Introducing an updated error interface that combines raw error codes with helpful explanatory text, including links for streamlined support. This not only makes troubleshooting more accessible but also significantly enhances the support process.

2. Direct diagnostic uploads: Users can now easily collect and upload diagnostics directly from the error screen. This feature enhances our support and troubleshooting capabilities, making it easier for users to get the help they need without navigating away from the error context.

3. Reset and exit options: Recognizing that some situations may require more drastic measures, the updated error interface also allows users to reset the application to factory settings or exit the application directly from the error screen (Figure 2).

Figure 2: New Docker Desktop error message: 4.29 release providing remediation information and diagnostic sharing options.

4. Self-service options: For errors within the user's ability to remedy, the error message now provides a user-friendly technical error description accompanied by clear, actionable buttons for immediate remediation. This reduces the need for support tickets and fosters a sense of user empowerment.

Figure 3: Error message displaying self-service remediation options.

Conclusion

This update is evidence of our continuous focus on refining and enhancing our Docker Desktop users' experiences — and there are more updates to come. We're committed to making every aspect of application development as intuitive and empowering as possible. Look for further improvements as we continue to advance user support and error remediation, helping to accelerate your innovation and productivity.

Learn more

Read the Docker Desktop 4.29 announcement to learn more about the advancements in the latest release.

Authenticate and update to receive your subscription level‘s newest Docker Desktop features.

New to Docker? Create an account.

Read the Docker Desktop Release Notes.

Learn about Docker Build Cloud and how you can leverage cloud resources directly from Docker Desktop.

Subscribe to the Docker Newsletter.

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

Get started with the latest updates for Dockerfile syntax (v1.7.0)

Dockerfiles are fundamental tools for developers working with Docker, serving as a blueprint for creating Docker images. These text documents contain all the commands a user could call on the command line to assemble an image. Understanding and effectively utilizing Dockerfiles can significantly streamline the development process, allowing for the automation of image creation and ensuring consistent environments across different stages of development. Dockerfiles are pivotal in defining project environments, dependencies, and the configuration of applications within Docker containers.

With new versions of the BuildKit builder toolkit, Docker Buildx CLI, and Dockerfile frontend for BuildKit (v1.7.0), developers now have access to enhanced Dockerfile capabilities. This blog post delves into these new Dockerfile capabilities and explains how you can leverage them in your projects to further optimize your Docker workflows.

Versioning

Before we get started, here’s a quick reminder of how Dockerfile is versioned and what you should do to update it. 

Although most projects use Dockerfiles to build images, BuildKit is not limited only to that format. BuildKit supports multiple different frontends for defining the build steps for BuildKit to process. Anyone can create these frontends, package them as regular container images, and load them from a registry when you invoke the build.

With the new release, we have published two such images to Docker Hub: docker/dockerfile:1.7.0 and docker/dockerfile:1.7.0-labs.

To use these frontends, you need to specify a #syntax directive at the beginning of the file to tell BuildKit which frontend image to use for the build. Here we have set it to use the latest of the 1.x.x major version. For example:

#syntax=docker/dockerfile:1

FROM alpine

This means that BuildKit is decoupled from the Dockerfile frontend syntax. You can start using new Dockerfile features right away without worrying about which BuildKit version you’re using. All the examples described in this article will work with any version of Docker that supports BuildKit (the default builder as of Docker 23), as long as you define the correct #syntax directive on the top of your Dockerfile.

You can learn more about Dockerfile frontend versions in the documentation. 

Variable expansions

When you write Dockerfiles, build steps can contain variables that are defined using the build arguments (ARG) and environment variables (ENV) instructions. The difference between build arguments and environment variables is that environment variables are kept in the resulting image and persist when a container is created from it.

When you use such variables, you most likely use ${NAME} or, more simply, $NAME in COPY, RUN, and other commands.

You might not know that Dockerfile supports two forms of Bash-like variable expansion:

${variable:-word}: Sets a value to word if the variable is unset

${variable:+word}: Sets a value to word if the variable is set

Up to this point, these special forms were not that useful in Dockerfiles because the default value of ARG instructions can be set directly:

FROM alpine
ARG foo="default value"

If you are an expert in various shell applications, you know that Bash and other tools usually have many additional forms of variable expansion to ease the development of your scripts.

In Dockerfile v1.7, we have added:

${variable#pattern} and ${variable##pattern} to remove the shortest or longest prefix from the variable’s value.

${variable%pattern} and ${variable%%pattern} to remove the shortest or longest suffix from the variable’s value.

${variable/pattern/replacement} to replace the first occurrence of a pattern

${variable//pattern/replacement} to replace all occurrences of a pattern

How these rules are used might not be completely obvious at first. So, let’s look at a few examples seen in actual Dockerfiles.

For example, projects often can’t agree on whether versions for downloading your dependencies should have a “v” prefix or not. The following allows you to get the format you need:

# example VERSION=v1.2.3
ARG VERSION=${VERSION#v}
# VERSION is now '1.2.3'

In the next example, multiple variants are used by the same project:

ARG VERSION=v1.7.13
ADD https://github.com/containerd/containerd/releases/download/${VERSION}/containerd-${VERSION#v}-linux-amd64.tar.gz /

To configure different command behaviors for multi-platform builds, BuildKit provides useful built-in variables like TARGETOS and TARGETARCH. Unfortunately, not all projects use the same values. For example, in the container and Go ecosystems, we refer to the 64-bit ARM architecture as arm64, but sometimes you need aarch64 instead.

ADD https://github.com/oven-sh/bun/releases/download/bun-v1.0.30/bun-linux-${TARGETARCH/arm64/aarch64}.zip /

In this case, the URL also uses a custom name for the AMD64 architecture. To pass a variable through multiple expansions, use another ARG definition with an expansion from the previous value. You could also write all the definitions on a single line, as ARG allows multiple parameters, though that may hurt readability.

ARG ARCH=${TARGETARCH/arm64/aarch64}
ARG ARCH=${ARCH/amd64/x64}
ADD https://github.com/oven-sh/bun/releases/download/bun-v1.0.30/bun-linux-${ARCH}.zip /

Note that the example above is written in a way that if a user passes their own --build-arg ARCH=value, then that value is used as-is.

Now, let’s look at how new expansions can be useful in multi-stage builds.

One of the techniques described in “Advanced multi-stage build patterns” shows how build arguments can be used so that different Dockerfile commands run depending on the build-arg value. For example, you can use that pattern if you build a multi-platform image and want to run additional COPY or RUN commands only for specific platforms. If this method is new to you, you can learn more about it from that post.

In summarized form, the idea is to define a global build argument and then define build stages that use the build argument value in the stage name while pointing to the base of your target stage via the build-arg name.

Old example:

ARG BUILD_VERSION=1

FROM alpine AS base
RUN …

FROM base AS branch-version-1
RUN touch version1

FROM base AS branch-version-2
RUN touch version2

FROM branch-version-${BUILD_VERSION} AS after-condition

FROM after-condition
RUN …

When using this pattern for multi-platform builds, one of the limitations is that all the possible values for the build-arg need to be defined by your Dockerfile. This is problematic, as we want the Dockerfile to be written so that it can be built for any platform, not just a specific set.

You can see other examples here and here of Dockerfiles where dummy stage aliases must be defined for all architectures, and no other architecture can be built. Instead, the pattern we would like to use is that there is one architecture that has a special behavior, and everything else shares another common behavior.

With the new expansions, we can write the following to run special commands only on RISC-V, which is still somewhat new and may need custom behavior:

#syntax=docker/dockerfile:1.7

ARG ARCH=${TARGETARCH#riscv64}
ARG ARCH=${ARCH:+"common"}
ARG ARCH=${ARCH:-$TARGETARCH}

FROM --platform=$BUILDPLATFORM alpine AS base-common
ARG TARGETARCH
RUN echo "Common build, I am $TARGETARCH" > /out

FROM --platform=$BUILDPLATFORM alpine AS base-riscv64
ARG TARGETARCH
RUN echo "Riscv only special build, I am $TARGETARCH" > /out

FROM base-${ARCH} AS base

Let’s look at these ARCH definitions more closely.

The first sets ARCH to TARGETARCH but removes riscv64 from the value.

Next, as we described previously, we don’t actually want the other architectures to use their own values; we want them all to share a common value. So, we set ARCH to common unless it was already cleared by the previous riscv64 rule.

Now, if we still have an empty value, we default it back to $TARGETARCH.

The last definition is optional, as we would already have a unique value for both cases, but it makes the final stage name base-riscv64 nicer to read.
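To make the evaluation concrete, here is how the three expansions resolve for two example target architectures (a minimal trace written as comments; it is not part of the original example):

# For TARGETARCH=amd64:
#   ARCH=${TARGETARCH#riscv64}  ->  amd64    (prefix does not match, value kept)
#   ARCH=${ARCH:+"common"}      ->  common   (value was non-empty, so it is replaced)
#   ARCH=${ARCH:-$TARGETARCH}   ->  common   (value is non-empty, default not applied)
#   FROM base-common
#
# For TARGETARCH=riscv64:
#   ARCH=${TARGETARCH#riscv64}  ->  ""       (the whole value matches the prefix and is removed)
#   ARCH=${ARCH:+"common"}      ->  ""       (value was empty, so it stays empty)
#   ARCH=${ARCH:-$TARGETARCH}   ->  riscv64  (default applied)
#   FROM base-riscv64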

Additional examples, including multiple conditions that share a stage or conditions based on architecture variants, can be found in this GitHub Gist.

Comparing this example to the initial example of conditions between stages, the new pattern isn’t limited to just controlling the platform differences of your builds but can be used with any build-arg. If you have used this pattern before, then you can effectively now define an “else” clause, whereas previously, you were limited to only “if” clauses.
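To illustrate the “else” idea with an ordinary build-arg rather than a platform, here is a sketch using a hypothetical FLAVOR argument (the argument name and stages are illustrative, not from the original post): any value other than debug falls through to the common release stage.

#syntax=docker/dockerfile:1.7

ARG FLAVOR="release"
# Remove the prefix "debug"; the value becomes empty only when FLAVOR is exactly "debug".
ARG BRANCH=${FLAVOR#debug}
# Any value that was not cleared maps to the shared "common" branch.
ARG BRANCH=${BRANCH:+"common"}
# If the value was cleared, default it back to "debug".
ARG BRANCH=${BRANCH:-debug}

FROM alpine AS build-debug
RUN echo "debug build" > /out

FROM alpine AS build-common
RUN echo "release build" > /out

FROM build-${BRANCH} AS final

Building with --build-arg FLAVOR=debug selects build-debug; any other value (or the default) selects build-common.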

Copy with keeping parent directories

The following feature has been released in the “labs” channel. Define the following at the top of your Dockerfile to use this feature.

#syntax=docker/dockerfile:1.7-labs

When copying files in your Dockerfile, you might, for example, write this:

COPY app/file /to/dest/dir/

This example means the source file is copied directly to the destination directory. If your source path was a directory, all the files inside that directory would be copied directly to the destination path.

What if you have a file structure like the following:

.
├── app1
│   ├── docs
│   │   └── manual.md
│   └── src
│       └── server.go
└── app2
    └── src
        └── client.go

You want to copy only the files in app1/src, so that the final file at the destination is /to/dest/dir/app1/src/server.go and not just /to/dest/dir/server.go.

With the new COPY --parents flag, you can write:

COPY --parents /app1/src/ /to/dest/dir/

This will copy the files inside the src directory and recreate the app1/src directory structure for these files.

Things get more powerful when you start to use wildcard paths. To copy the src directories for both apps into their respective locations, you can write:

COPY --parents */src/ /to/dest/dir/

This will create both /to/dest/dir/app1 and /to/dest/dir/app2, but it will not copy the docs directory. Previously, this kind of copy was not possible with a single command. You would have needed multiple COPY commands for individual files (as shown in this example) or some workaround with the RUN --mount instruction instead.

You can also use the double-star wildcard (**) to match files under any directory structure. For example, to copy only the Go source code files anywhere in your build context, you can write:

COPY --parents **/*.go /to/dest/dir/

If you are thinking about why you would need to copy specific files instead of just using COPY ./ to copy all files, remember that your build cache gets invalidated when you include new files in your build. If you copy all files, the cache gets invalidated when any file is added or changed, whereas if you copy only Go files, only changes in these files influence the cache.

The new --parents flag is not only for COPY instructions from your build context; you can also use it in multi-stage builds when copying files between stages with COPY --from.

Note that with COPY --from syntax, all source paths are expected to be absolute, meaning that if the --parents flag is used with such paths, they will be fully replicated as they were in the source stage. That may not always be desirable, and instead, you may want to keep some parents but discard and replace others. In that case, you can use a special /./ relative pivot point in your source path to mark which parents you wish to copy and which should be ignored. This special path component resembles how rsync works with the --relative flag.

#syntax=docker/dockerfile:1.7-labs
FROM … AS base
RUN ./generate-lot-of-files -o /out/
# /out/usr/bin/foo
# /out/usr/lib/bar.so
# /out/usr/local/bin/baz

FROM scratch
COPY --from=base --parents /out/./**/bin/ /
# /usr/bin/foo
# /usr/local/bin/baz

The example above shows how only the bin directories are copied from the collection of files that the intermediate stage generated, while all the directories keep their paths relative to the out directory.

Exclusion filters

The following feature has been released in the “labs” channel. Define the following at the top of your Dockerfile to use this feature:

#syntax=docker/dockerfile:1.7-labs

Another related case arises when moving files in your Dockerfile with COPY and ADD instructions: you want to copy a group of files but exclude a specific subset. Previously, your only options were to use RUN --mount or to try to define the excluded files inside a .dockerignore file.

.dockerignore files, however, are not a good solution for this problem: they only exclude files from the client-side build context, they do not apply to builds from remote Git/HTTP URLs, and they are limited to one per Dockerfile. You should use them similarly to .gitignore, to mark files that are never part of your project, not as a way to define your application-specific build logic.

With the new --exclude=[pattern] flag, you can now define such exclusion filters for your COPY and ADD commands directly in the Dockerfile. The pattern uses the same format as .dockerignore.

The following example copies all the files in a directory except Markdown files:

COPY --exclude=*.md app /dest/

You can use the flag multiple times to add multiple filters. The next example excludes Markdown files and also a file called README:

COPY --exclude=*.md --exclude=README app /dest/

A double-star wildcard excludes Markdown files not only in the copied directory but also in any subdirectory:

COPY --exclude=**/*.md app /dest/

As in .dockerignore files, you can also define exceptions to the exclusions with a ! prefix. The following example excludes all Markdown files in any copied directory, except files named important.md, which are still copied.

COPY --exclude=**/*.md --exclude=!**/important.md app /dest/

This double negative may be confusing initially, but note that this is a reversal of the previous exclude rule, and “include patterns” are defined by the source parameter of the COPY instruction.

When using --exclude together with the previously described --parents copy mode, note that the exclude patterns are relative to the copied parent directories, or to the pivot point /./ if one is defined. Consider the following directory structure, for example:

assets
├── app1
│   ├── icons32x32
│   ├── icons64x64
│   ├── notes
│   └── backup
├── app2
│   └── icons32x32
└── testapp
    └── icons32x32

COPY --parents --exclude=testapp assets/./**/icons* /dest/

This command would create the directory structure below. Note that only directories with the icons prefix were copied, the root parent directory assets was skipped because it comes before the relative pivot point, and testapp was not copied because it matched an exclusion filter.

dest
├── app1
│   ├── icons32x32
│   └── icons64x64
└── app2
    └── icons32x32

Conclusion

We hope this post gave you ideas for improving your Dockerfiles and that the patterns shown here will help you describe your build more efficiently. Remember that your Dockerfile can start using all these features today by defining the #syntax line on top, even if you haven’t updated to the latest Docker yet.

For a full list of other features in the new BuildKit, Buildx, and Dockerfile releases, check out the changelogs:

BuildKit v0.13.0

Buildx v0.13.0

Dockerfile v1.7.0 v1.7.0-labs

Thanks to community members @tstenner, @DYefimov, and @leandrosansilva for helping to implement these features!

If you have issues or suggestions you want to share, let us know in the issue tracker.

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


From Misconceptions to Mastery: Enhancing Security and Transparency with Docker Official Images

Docker Official Images are a curated set of Docker repositories hosted on Docker Hub that provide a wide range of pre-configured images for popular language runtimes and frameworks, cloud-first utilities, data stores, and Linux distributions. These images are maintained and vetted, ensuring they meet best practices for security, usability, and versioning, making it easier for developers to deploy and run applications consistently across different environments.

Docker Official Images are an important component of Docker’s commitment to the security of both the software supply chain and open source software. Docker Official Images provide thousands of images you can use directly or as a base image when building your own images. For example, there are Docker Official Images for Alpine Linux, NGINX, Ubuntu, PostgreSQL, Python, and Node.js. Visit Docker Hub to search through the currently available Docker Official Images.

In this blog post, we address three common misconceptions about Docker Official Images and outline seven ways they help secure the software supply chain.

3 common misconceptions about Docker Official Images

Even though Docker Official Images have been around for more than a decade and have been used billions of times, they are somewhat misunderstood. Who “owns” Docker Official Images? What is with all those tags? How should you use Docker Official Images? Let’s address some of the more common misconceptions.

Misconception 1: Docker Official Images are controlled by Docker

Docker Official Images are maintained through a partnership between upstream maintainers, community volunteers, and Docker engineers. External developers maintain the majority of Docker Official Images Dockerfiles, with Docker engineers providing insight and review to ensure best practices and uniformity across the Docker Official Images catalog. Additionally, Docker provides and maintains the Docker Official Images build infrastructure and logic, ensuring consistent and secure build environments that allow Docker Official Images to support more than 10 architecture/operating system combinations.

Misconception 2: Docker Official Images are designed for a single use case

Most Docker Official Images repositories offer several image variants and maintain multiple supported versions. In other words, the latest tag of a Docker Official Image might not be the right choice for your use case. 

Docker Official Images tags

The documentation for each Docker Official Images repository contains a “Supported tags and respective Dockerfile links” section that lists all the current tags with links to the Dockerfiles that created the image with those tags (Figure 1). This section can be a little intimidating for first-time users, but keeping in mind a few conventions will allow even novices to understand what image variants are available and, more importantly, which variant best fits their use case.

Figure 1: Documentation showing the current tags with links to the Dockerfiles that created the image with those tags.

Tags listed on the same line all refer to the same underlying image. (Multiple tags can point to the same image.) For example, Figure 1 shows the ubuntu Docker Official Images repository, where the 20.04, focal-20240216, and focal tags all refer to the same image.

Often, the latest tag for a Docker Official Images repository is optimized for ease of use and includes a wide variety of software that is helpful, but not strictly necessary, when using the main software packaged in the Docker Official Image. For example, latest images often include tools like Git and build tools. Because of their ease of use and wide applicability, latest images are often used in getting-started guides.

Some operating system and language runtime repositories offer “slim” variants that have fewer packages installed and are therefore smaller. For example, the python:3.12.2-bookworm image contains not only the Python runtime, but also any tool you might need to build and package your Python application — more than 570 packages! Compare this to the python:3.12.2-slim-bookworm image, which has about 150 packages.

Many Docker Official Images repositories offer “alpine” variants built on top of the Alpine Linux distribution rather than Debian or Ubuntu. Alpine Linux is focused on providing a small, simple, and secure base for container images, and Docker Official Images alpine variants typically aim to install only necessary packages. As a result, Docker Official Images alpine variants are typically even smaller than “slim” variants. For example, the linux/amd64 node:latest image is 382 MB, the node:slim image is 70 MB, and the node:alpine image is 47 MB.
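A common way to get the convenience of a full-featured variant without shipping its size is a multi-stage build: install and build with the larger image, then run on a slim or alpine variant. The following is only a rough sketch assuming a typical Node.js project (the paths, npm scripts, and pure-JavaScript dependencies are placeholder assumptions; native addons would need to be rebuilt for the Alpine base):

# Build with the full-featured variant, which includes Git and build tooling.
FROM node:latest AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Run on the much smaller alpine variant, copying only what is needed at runtime.
FROM node:alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]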

If you see tags with words that look like Toy Story characters (for example, bookworm, bullseye, and trixie) or adjectives (such as jammy, focal, and bionic), those indicate the codename of the Linux distribution they use as a base image. Debian-release codenames are based on Toy Story characters, and Ubuntu releases use alliterative adjective-animal appellations. Linux distribution indicators are helpful because many Docker Official Images provide variants built upon multiple underlying distribution versions (for example, postgres:bookworm and postgres:bullseye).

Tags may contain other hints to the purpose of their image variant. Often these are explained later in the Docker Official Images repository documentation. Check the “How to use this image” and/or “Image Variants” sections.

Misconception 3: Docker Official Images do not follow software development best practices

Some critics argue that Docker Official Images go against the grain of best practices, such as not running container processes as root. While it’s true that we encourage users to embrace a few opinionated standards, we also recognize that different use cases require different approaches. For example, some use cases may require elevated privileges for their workloads, and we provide options for them to do so securely.
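For example, many Docker Official Images ship with an unprivileged user you can opt into. A minimal sketch (the node image provides a node user; other images name their non-root user differently, so check each image’s documentation):

FROM node:alpine
WORKDIR /home/node/app
# Give the unprivileged user ownership of the application files.
COPY --chown=node:node . .
# Run the container process as the non-root user provided by the image.
USER node
CMD ["node", "index.js"]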

7 ways Docker Official Images help secure the software supply chain

We recognize that security is a continuous process, and we’re committed to providing the best possible experience for our users. Since the company’s inception in 2013, Docker has been a leader in the software supply chain, and our commitment to security — including open source security — has helped to protect developers from emerging threats all along the way.

With the availability of open source software, efficiently building powerful applications and services is easier than ever. The transparency of open source allows unprecedented insight into the security posture of the software you create. But to take advantage of the power and transparency of open source software, fully embracing software supply chain security is imperative. A few ways Docker Official Images help developers build a more secure software supply chain include:

Open build process 

Because visibility is an important aspect of the software supply chain, Docker Official Images are created from a transparent and open build process. The Dockerfile inputs and build scripts are all open source, all Docker Official Images updates go through a public pull request process, and the logs from all Docker Official Images builds are available to inspect (Jenkins / GitHub Actions).

Principle of least privilege

The Docker Official Images build system adheres strictly to the principle of least privilege (POLP), for example, by restricting writes for each architecture to architecture-specific build agents. 

Updated build system 

Ensuring the security of Docker Official Images builds and images is paramount. The Docker Official Images build system is kept up to date through automated builds, regular security audits, collaboration with upstream projects, ongoing testing, and security patches. 

Vulnerability reports and continuous monitoring

Courtesy of Docker Scout, vulnerability insights are available for all Docker Official Images and are continuously updated as new vulnerabilities are discovered. We are committed to continuously monitoring our images for security issues and addressing them promptly. For example, we were among the first to provide reasoned guidance and remediation for the recent xz supply chain attack. We also use insights and remediation guidance from Docker Scout, which surfaces actionable insights in near-real-time by updating CVE results from 20+ CVE databases every 20-60 minutes.

Software Bill of Materials (SBOM) and provenance attestations 

We are committed to providing a complete and accurate SBOM and detailed build provenance as signed attestations for all Docker Official Images. This allows our users to have confidence in the origin of Docker Official Images and easily identify and mitigate any potential vulnerabilities.

Signature validation 

We are working on integrating signature validation into our image pull and build processes. This will ensure that all Docker Official Images are verified before use, providing an additional layer of security for our users.

Increased update frequency 

Docker Official Images provide the best of both worlds: the latest version of the software you want, built upon stable versions of Linux distributions. This allows you to use the latest features and fixes of the software you are running without having to wait for a new package from your Linux distribution or being forced to use an unstable version of your Linux distribution. Further, we are working to increase the throughput of the Docker Official Images build infrastructure to allow us to support more frequent updates for larger swaths of Docker Official Images. As part of this effort, we are piloting builds on GitHub Actions and Docker Build Cloud.

Conclusion

Docker’s leadership in security and protecting open source software has been established through Docker Official Images and other trusted content we provide our customers. We take a comprehensive approach to security, focusing on best practices, tooling, and community engagement, and we work closely with upstream projects and SIGs to address security issues promptly and proactively.

Docker Official Images provide a flexible and secure way for developers to build, ship, test, and run their applications. Docker Official Images are maintained through a partnership between the Docker Official Images community, upstream maintainers/volunteers, and Docker engineers, ensuring best practices and uniformity across the Docker Official Images catalog. Each Docker Official Image offers numerous image variants that cater to different use cases, with tags indicating the purpose of each variant. 

Developers can build using Docker tools and products with confidence, knowing that their applications are built on a secure, transparent foundation. 

Looking to dive in? Get started building with Docker Official Images today.

Learn more

Browse the available Docker Official Images.

Visit hub.docker.com and start developing.

Find Docker Official Images on GitHub.

Learn more about Docker Hub.

Read:

Debian’s Dedication to Security: A Robust Foundation for Docker Developers

How to Use the Postgres Docker Official Image

How to Use the NGINX Docker Official Image

How to Use the Alpine Docker Official Image

How to Use the Node Docker Official Image


Debian’s Dedication to Security: A Robust Foundation for Docker Developers

As security threats become more and more prevalent, building software with security top of mind is essential. Security has become an increasing concern for container workloads specifically and, commensurately, for container base-image choice. Many conversations around choosing a secure base image focus on CVE counts, but security involves a lot more than that. 

One organization that has been leading the way in secure software development is the Debian Project. In this post, I will outline how and why Debian operates as a secure basis for development.

For more than 30 years, Debian’s diverse group of volunteers has provided a free, open, stable, and secure GNU/Linux distribution. Debian’s emphasis on engineering excellence and clean design, as well as its wide variety of packages and supported architectures, have made it not only a widely used distribution in its own right but also a meta-distribution. Many other Linux distributions, such as Ubuntu, Linux Mint, and Kali Linux, are built on top of Debian, as are many Docker Official Images (DOI). In fact, more than 1,000 Docker Official Images variants use the debian DOI or the Debian-derived ubuntu DOI as their base image. 

Why Debian?

As a bit of a disclaimer, I have been using Debian GNU/Linux for a long time. I remember installing Debian from floppy disks in the 1990s on a PC that I cobbled together, and later reinstalling so I could test prerelease versions of the netinst network installer. Installing over the network took a while using a 56-kbps modem. At those network speeds, you had to be very particular about which packages you chose in dselect. 

Having used a few other distributions before trying Debian, I still remember being amazed by how well-organized and architected the system was. No dangling or broken dependencies. No download failures. No incompatible shared libraries. No package conflicts, but rather a thoughtful handling of packages providing similar functionality. 

Much has changed over the years: no more floppies, dselect has been retired, my network connection speed has increased by a few orders of magnitude, and now I “install” Debian via docker pull debian. What has not changed is the feeling of amazement I have toward Debian and its community.

Open source software and security

Despite the achievements of the Debian project and the many other projects it has spawned, it is not without detractors. Like many other open source projects, Debian has received its share of criticism in the past few years from opportunists lamenting the state of open source security. Writing about the software supply chain while bemoaning high-profile CVEs and pointing to malware that has been uploaded to an open source package ecosystem, such as PyPI or NPM, has become all too common.

The pernicious assumption in such articles is that open source software is the problem. We know this is not the case. We’ve been through this before. Back when I was installing Debian over a 56-kbps modem, all sorts of fear, uncertainty, and doubt (FUD) was being spread by various proprietary software vendors. We learned then that open source is not a security problem — it is a security solution. 

Being open source does not automatically convey an improved security status compared to closed-source software, but it does provide significant advantages. In his Secure Programming HOWTO, David Wheeler provides a balanced summary of the relationship between open source software and security. A purported advantage conveyed by closed-source software is the nondisclosure of its source code, but we know that security through obscurity is no security at all. 

The transparency of open source software and open ecosystems allows us to better know our security posture. Openness allows for the rapid identification and remediation of vulnerabilities. Openness enables the vast majority of the security and supply chain tooling that developers regularly use. How many closed-source tools regularly publish CVEs? With proprietary software, you often only find out about a vulnerability after it is too late.

Debian’s rapid response strategy

Debian has been criticized for moving too slowly on the security front. But this narrative, like the open vs. closed-source narrative, captures neither the nuance nor reality. Although several distributions wait to publish CVEs until a fixed version is available, Debian opts for complete transparency and urgency when communicating security information to its users.

Furthermore, Debian maintainers are not a mindless fleet of automatons hastily applying patches and releasing new package versions. As a rule, Debian maintainers are experts among experts, deeply steeped in software and delivery engineering, open source culture, and the software they package.

zlib vulnerability example

A recent zlib vulnerability, CVE-2023-45853, provides an insightful example of the Debian project’s diligent, thorough approach to security. Several distributions grabbed a patch for the vulnerability, applied it, rebuilt, packaged, and released a new zlib package. The Debian security community took a closer look.

As mentioned in the CVE summary, the vulnerability was in minizip, which is a utility under the contrib directory of the zlib source code. No minizip source files are compiled into the zlib library, libz. As such, this vulnerability did not actually affect any zlib packages.

If that were where the story had ended, the only harm would be in updating a package unnecessarily. But the story did not end there. As detailed in the Debian bug thread, the offending minizip code was copied (i.e., vendored) and used in a lot of other widely used software. In fact, the vendored minizip code in both Chromium and Node.js was patched about a month before the zlib CVE was even published. 

Unfortunately, other commonly used software packages also had vendored copies of minizip that were still vulnerable. Thanks to the diligence of the Debian project, either the patch was applied to those projects as well, or they were compiled against the patched system minizip (not zlib!) dev package rather than the vendored version. In other distributions, those buggy vendored copies are in some cases still being compiled into software packages, with nary a mention in any CVE.

Thinking beyond CVEs

In the past 30 years, we have seen an astronomical increase in the role open source software plays in the tech industry. Despite the productivity gains that software engineers get by leveraging the massive amount of high-quality open source software available, we are once again hearing the same FUD we heard in the early days of open source. 

The next time you see an article about the dangers lurking in your open source dependencies, don’t be afraid to look past the headlines and question the assumptions. Open ecosystems lead to secure software, and the Debian project provides a model we would all do well to emulate. Debian’s goal is security, which encompasses a lot more than a report showing zero CVEs. Consumers of operating systems and container images would be wise to understand the difference. 

So go ahead and build on top of the debian DOI. FROM debian is never a bad way to start a Dockerfile!

Learn more

Find the Debian Official Image on Docker Hub.

Read From Misconceptions to Mastery: Enhancing Security and Transparency with Docker Official Images.

Visit Docker Hub to find more Docker Official Images.

Find the Debian DOI on GitHub.

Learn more about Docker Hub.

Secure your images with Docker Scout.

Subscribe to the Docker Newsletter.


KubeCon EU 2024: Highlights from Paris

Are in-person tech conferences back in fashion? Or are engineers just willing to travel for fresh baguettes? In this post, I round up a few highlights from KubeCon Europe 2024, held March 19-24 in Paris.

My last KubeCon was in Detroit in 2022, when tech events were still slowly recovering from COVID. But KubeCon EU in Paris was buzzing, with more than 12,000 attendees! I couldn’t even get into a few of the most popular talks because the lines to get in wrapped around the exhibition hall even after the rooms were full. Fortunately, the CNCF has already posted all the talk recordings so we can catch up on what we missed in person.

Now that I’ve been back home for a bit, here are a few highlights I rounded up from KubeCon EU 2024.

Docker at KubeCon

If you stopped by the Docker booth, you may have seen our Megennis Motorsport Racing experience.

The KubeCon EU 2024 Docker booth featured a Megennis Motorsport Racing experience.

Or you may have talked to one of our engineers about our new fast Docker Build Cloud experience. Everyone I talked to about Build Cloud got it immediately. I’m proud of all the work we did to make fast, hosted image builds work seamlessly with the existing docker build. 

KubeCon booth visitors ran 173 identical builds using both Docker Build Cloud and GitHub Actions (GHA). Build Cloud builds took an average of 32 seconds each, compared to GHA’s 159 seconds per build.

Docker Build Cloud wasn’t the only new product we highlighted at KubeCon this year. I also got a lot of questions about Docker Scout and how to track image dependencies. Our Head of Security, Rachel Taylor, was available to demo Docker Scout for curious customers.

Chloe Cucinotta, a Director in Docker’s marketing org, hands out eco-friendly swag for booth visitors. Photo by Docker Captain Mohammad-Ali A’râbi.

Docker Scout and Sysdig Security Day

In addition to live Docker Scout demos at the booth, Docker Scout was represented at Kubecon through a co-sponsored AMA panel and party with Sysdig Security Day. The event aimed to raise awareness around Docker’s impact on securing the software supply chain and how to solve concrete security issues with Docker Scout. It was an opportunity to explore topics in the cloud-native and open source security space alongside industry leaders Snyk and Sysdig.

The AMA panel featured Rachel Taylor, Director of Information Security, Risk, & Trust at Docker, who discussed approaches to securing the software supply chain. The post-content party served as an opportunity for Docker to hear more about our shared customers’ unique challenges one-on-one. Through participation in the event, Docker customers were able to learn more about how the Sysdig runtime monitoring integration within Docker Scout results in even more actionable insights and remediation recommendations.

Live from the show floor

Docker CEO Scott Johnston spoke with theCUBE hosts Savannah Peterson and Rob Strechay to discuss Docker Build Cloud. “What used to take an hour is now a minute and a half,” he explained.

Testcontainers and OpenShift

During KubeCon, we announced that Red Hat and Testcontainers have partnered to provide Testcontainers in OpenShift. This collaboration simplifies the testing process, allowing developers to efficiently manage their workflows without compromising on security or flexibility. By streamlining development tasks, this solution promises a significant boost in productivity for developers working within containerized environments. Read Improving the Developer Experience with Testcontainers and OpenShift to learn more.

Eli Aleyner (Head of Technical Alliances at Docker) and Daniel Oh (Senior Principal Technical Marketing Manager at Red Hat) take a selfie at the Red Hat booth.

Eli Aleyner (Head of Technical Alliances at Docker) and Daniel Oh (Senior Principal Technical Marketing Manager at Red Hat) provided a demo and an AMA at the Red Hat booth. 

Must-watch talks

During the Friday keynote, Bob Wise, CEO of Heroku, describes a lightbulb moment when he first heard about Docker as part of his discussion about the beginnings of cloud-native.

For a long time, I’ve felt that the Kubernetes API model has been its superpower. The investment in easy ways to extend Kubernetes with CRDs and the controller-runtime project are unlocking a bunch of exciting platform engineering projects.

Here are a few of the many talks that I and other people on my team really liked, and that are on YouTube now.

Platform 

In his talk Building a Large Scale Multi-Cloud Multi-Region SaaS Platform with Kubernetes Controllers, I loved how Sébastien Guilloux (Elastic) explains how to put all the pieces together to help build a multi-region platform. It takes advantage of the nice bits of Kubernetes controllers, while also questioning the assumptions about how global state should work.

Stefan Prodan (ControlPlane) gave a talk on GitOps Continuous Delivery at Scale with Flux. Flux has a strong, opinionated point of view on how CI/CD tools should interact with CRDs and the events API. There were a few different talks on Crossplane that I’d like to go back and watch. We’ve been experimenting a lot with Crossplane at Docker, and we like how it works with Helm and image registries in a way that fits our existing image and registry tools.

AI 

Of course, people at KubeCon are infra nerds, so when we think about AI, we first think about all those GPUs the AIs are going to need.

There was an armful of GPU provisioning talks. I attended How KubeVirt Improves Performance with CI-Driven Benchmarking, and You Can Too, where speakers Ryan Hallisey and Alay Patel from Nvidia talked about driving down the time to allocate VMs with GPUs. But how is AI going to fit into how we run and operate servers on Kubernetes? There was less consensus on this point, but it was fun to make random guesses about what it might look like. When I was hanging out at the AuthZed booth, I made a joke about asking an AI to write my infra access control rules, and they mostly laughed and rolled their eyes.

Slimming and debugging 

Here’s a container journey I see a lot these days:

I have a fat container image.

I get a security alert about a vulnerability in one of that image’s dependencies that I don’t even use.

I switch to a slimmer base image, like a distroless image.

Oops! Now the image doesn’t work and is annoying to debug because there’s no shell.

But we’re making progress on making this easier!

In his KubeCon talk Is Your Image Really Distroless?, Docker’s Laurent Goderre walked through how to use multi-stage builds and init containers to separate out the build + init dependencies from the steady-state runtime dependencies. 
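As one generic illustration of that separation (a sketch, not taken from the talk): build with a full toolchain stage, then copy only the runtime artifact into a distroless base.

# Build stage carries the Go toolchain and any build-time dependencies.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server .

# Runtime stage contains only the compiled binary: no shell, no package manager.
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/server /server
ENTRYPOINT ["/server"]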

Ephemeral containers in Kubernetes graduated to stable in 2022. In their talk, Building a Tool to Debug Minimal Container Images in Kubernetes, Docker, and ContainerD, Kyle Quest (AutonomousPlane) and Saiyam Pathak (Civo) showed how you can use the ephemeral containers API to build tooling for creating a shell in a distroless container without a shell.

Talk slide showing the comparison of different approaches for running commands in a minimal container image with no shell.

One thing that Kyle and Saiyam mentioned was how useful Nix and Nixery.dev are for building these kinds of debugging tools. We’re also using Nix in docker debug. Docker engineer Johannes Grossman says that Nix solves some problems around dynamic linking that he calls “the clash-free composability property of Nix.”

See you in Salt Lake City!

Now that we’ve recovered from the action-packed KubeCon in Paris, we can start planning for KubeCon + CloudNativeCon North America 2024. We’ll see you in beautiful Salt Lake City!

Learn more

Read Red Hat, Docker Ease Developer Experience with Testcontainers Cloud.

Read Docker CEO leans in on Build Cloud and testing to give time back to developers on SiliconAngle.

Get started with Docker Build Cloud for the fastest build times.

To learn more about Docker Build Cloud, reach out to a Docker sales expert. 


Empower Your Development: Dive into Docker’s Comprehensive Learning Ecosystem

Continuous learning is a necessity for developers in today’s fast-paced development landscape. Docker recognizes the importance of keeping developers at the forefront of innovation, and to do so, we aim to empower the developer community with comprehensive learning resources.

Docker has taken a multifaceted approach to developer education by forging partnerships with renowned platforms like Udemy and LinkedIn Learning, investing in our own documentation and guides, and highlighting the incredible learning content created by the developer community, including Docker Captains.

Commitment to developer learning

At Docker, our goal is to simplify the lives of developers, which begins with helping developers understand how to get the most out of Docker tools throughout their projects. We also recognize that developers have different learning styles, so we are taking a diversified approach, delivering this material across an array of platforms and formats so developers can learn in the way that best suits them.

Strategic partnerships for developer learning

Recognizing the diverse learning needs of developers, Docker has partnered with leading online learning platforms — Udemy and LinkedIn Learning. These partnerships offer developers access to a wide range of courses tailored to different expertise levels, from beginners looking to get started with Docker to advanced users aiming to deepen their knowledge. 

For teams already utilizing these platforms for other learning needs, this collaboration places Docker learning in a familiar platform next to other coursework.

Udemy: Docker’s collaboration with Udemy highlights an array of Endorsed Docker courses, designed by industry experts. Whether getting a handle on containerization or mastering Docker with Kubernetes, Udemy’s platform offers the flexibility and depth developers need to upskill at their own pace. Today, demand remains high for Docker content across the Udemy platform, with more than 350 courses offered and nearly three million enrollments to date.

LinkedIn Learning: Through LinkedIn Learning, developers can dive into curated Docker courses to earn a Docker Foundations Professional Certificate once they complete the program. These resources are not just about technical skills; they also cover best practices and practical applications, ensuring learners are job-ready.

Leveraging Docker’s documentation and guides

Although third-party platforms provide comprehensive learning paths, Docker’s own documentation and guides are indispensable tools for developers. Our documentation is continuously updated to serve as both a learning resource and a reference. From installation and configuration to advanced container orchestration and networking, Docker’s guides are designed to help you find your solution with step-by-step walk-throughs.

If it’s been a while since you’ve checked out Docker Docs, visit docs.docker.com to find manuals, a getting started guide, and many new use-case guides covering advanced applications, including generative AI and security.

Learners interested in live sessions can register for upcoming live webinars and training on the Docker Training site. There, you will find sessions where you can interact with the Docker support team and discuss best practices for using Docker Scout and Docker Admin.

The role of community in learning

Docker’s community is a vibrant ecosystem of learners, contributors, and innovators. We are thrilled to see the community creating content, hosting workshops, providing mentorship, and enriching the vast array of Docker learning resources. In particular, Docker Captains stand out for their expertise and dedication to sharing knowledge. From James Spurin’s Dive Into Docker course, to Nana Janashia’s Docker Crash Course, to Vladimir Mikhalev’s blog with guided IT solutions using Docker (just to name a few), it’s clear there’s much to learn from within the community.

We encourage developers to join the community and participate in conversations to seek advice, share knowledge, and collaborate on projects. You can also check out the Docker Community forums and join the Slack community to connect with other members of the community.

Conclusion

Docker’s holistic approach to developer learning underscores our commitment to empowering developers with knowledge and skills. By combining our comprehensive documentation and guides with top learning platform partnerships and an active community, we offer developers a robust framework for learning and growth. We encourage you to use all of these resources together to build a solid foundation of knowledge that is enhanced with new perspectives and additional insights as new learning offerings continue to be added.

Whether you’re a novice eager to explore the world of containers or a seasoned pro looking to refine your expertise, Docker’s learning ecosystem is designed to support your journey every step of the way.

Join us in this continuous learning journey, and come learn with Docker.

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.
