Getting Started with the Labs AI Tools for Devs Docker Desktop Extension

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real-time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

We’ve released a simple way to run AI tools in Docker Desktop. With the Labs AI Tools for Devs Docker Desktop extension, anyone who wants to run prompts can get started quickly.

If you’re a prompt author, this approach also allows you to build, run, and share your prompts more easily. Here’s how you can get started.

Get the extension

You can download the extension from Docker Hub. Once it’s installed, enter an OpenAI key.

Import a project

With our approach, the information a prompt needs should be extractable from a project. Add projects here that you want to run SDLC tools inside (Figure 1).

Figure 1: Add projects.

Inputting prompts

A prompt can be a git ref or a git URL, which will convert to a ref. You can also import your own local prompt files, which allows you to quickly iterate on building custom prompts.

Sample prompts

Copy and paste the ref for the prompt you want to run:

Docker
Ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/docker
Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/docker
Description: Generates a runbook for any Docker project.

Dockerfiles
Ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/dockerfiles
Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/dockerfiles
Description: Generates multi-stage Dockerfiles for NPM projects.

Lazy Docker
Ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/lazy_docker
Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/lazy_docker
Description: Generates a runbook for Lazy Docker.

NPM
Ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/npm
Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/npm
Description: Responds with helpful information about NPM projects.

ESLint
Ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/eslint
Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/eslint
Description: Runs ESLint in your project.

ESLint Fix
Ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/eslint_fix
Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/eslint_fix
Description: Runs ESLint in your project and responds with a fix for the first violation it finds.

Pylint
Ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/pylint
Link: https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/pylint
Description: Runs Pylint in your project and responds with a fix for the first violation it finds.

Figure 2: Enter a GitHub ref or URL.

Writing and testing your own prompt

Create a prompt file

A prompt file is a markdown file. Here’s an example: prompt.md

# prompt system
You are an assistant who can write comedic monologues in the style of Stephen Colbert.

# prompt user
Tell me about my project.

Now, we need to add information about the project. This is done with mustache templates:

# prompt system
You are an assistant who can write comedic monologues in the style of Stephen Colbert.

# prompt user
Tell me about my project.

My project uses the following languages:
{{project.languages}}

My project has the following files:
{{project.files}}
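You can sketch the substitution by hand to see what the rendered prompt looks like. This is illustration only — the extension renders the template for you from extractor output, and the language list below is made up:

```shell
# Stand-in value; in the extension this comes from a project extractor.
languages="JavaScript, TypeScript"

# Substitute the mustache placeholder the way the prompt renderer would.
template="My project uses the following languages: {{project.languages}}"
echo "$template" | sed "s/{{project.languages}}/$languages/"
```

The real renderer handles full mustache syntax (sections, lists, escaping); sed is only standing in for the idea of placeholder substitution.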

Leverage tools in your project

Just as extractors are used to render prompts, tools are defined in the form of Docker images. A function image follows the same spec as an extractor, but in reverse.

The Docker image is automatically bind-mounted to the project.

The Docker image entry point is automatically run within the project using --workdir.

The first argument will be a JSON payload. This payload is generated when the LLM tries to call our function.

- name: write_files
  description: Write a set of files to my project
  parameters:
    type: object
    properties:
      files:
        type: array
        items:
          type: object
          properties:
            path:
              type: string
              description: the relative path to the file that should be written
            content:
              type: string
              description: the content that should be written to a file
            executable:
              type: boolean
              description: whether to make the file executable
  container:
    image: vonwig/function_write_files:latest
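When the LLM decides to call write_files, the container entry point receives a single JSON argument matching the parameters schema above. Here is a sketch of what that payload and invocation might look like — the file name and content are invented for illustration:

```shell
# Example payload matching the write_files parameters schema above.
payload='{"files":[{"path":"hello.txt","content":"Hello from a tool call","executable":false}]}'

# A real invocation would look roughly like this; it needs Docker, so it is
# shown commented out. The project is bind-mounted and used as the working
# directory, as described above.
# docker run --rm -v "$PWD:/project" --workdir /project \
#   vonwig/function_write_files:latest "$payload"

echo "$payload"
```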

Test your prompt

1. Add the file to a Git repository and push to a public remote.

2. Paste the URL to reference the file on GitHub. Alternatively, import a local prompt and select the file on your computer.

Figure 3: Add the URL for the folder.

3. Run.

## ROLE assistant

Don’t even get me started on the files. I mean, have you ever sat down and really looked at a list of files? This project has got more layers than that seven-layer bean dip I had at last weekend’s potluck. This project isn’t just files on files, it’s files within files, it’s dot something after dot something else – and before you ask: Yes, all of these are REQUIRED!

Coming down to Dockerfile. Now, I’ve seen some Dockerfiles but our Dockerfile, folks, it’s something else. It lifts, it grinds, it effectively orchestrates our code like a veteran conductor at the symphony. We also have multiple templates because who doesn’t love a good template, right?

Oh, and did I mention the walkthroughs and the resources? Let’s just say this isn’t a “teach a man to fish” situation. This is more of a “teach a man to create an entire fishing corporation” scenario. Now THAT’S dedication.

Finally we’ve got the main.js, and let’s be real, is it even a project without a main.js anymore?

As always, feel free to follow along in our new public repo. Everything we’ve discussed in this blog post is available for you to try out on your own projects.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Join Docker CEO Scott Johnston at SwampUP 2024 in Austin

We are excited to announce Docker’s participation in JFrog’s flagship event, SwampUP 2024, which will take place September 9 – 11, in Austin, Texas. In his SwampUP keynote talk, Docker CEO Scott Johnston will discuss how the Docker and JFrog collaboration boosts secure software and AI application development.

Keynote highlights

Johnston will discuss Docker’s approach to managing secure software supply chains by providing developer teams with trusted content, reducing and limiting exposure to malicious content in the early development stages. He will explore how Docker Desktop, Docker Hub, and Docker Scout play critical roles in ensuring that the building blocks developers rely on are deployed securely. By bringing security to the root of the software development lifecycle, highlighting vulnerabilities, and bringing trusted container images to the inner loop, Docker empowers development teams to safeguard their process, ensuring the delivery of higher quality, more secure applications, faster. 

Attendees will get insights into how Docker innovations, including Docker Business capabilities and Docker Hub benefits, are transforming software development. Johnston will walk through the practical benefits of integrating Docker’s products within JFrog’s ecosystem, showcasing real-world examples of how companies use these combined tools to streamline their development pipelines and accelerate delivering applications, many of which are powered by ML and AI. This combination enables a more comprehensive approach to managing software supply chains, ensuring that security is embedded throughout the development lifecycle.

Better together 

Docker and JFrog’s partnership is more than just a collaboration: It’s a commitment to providing developers with the tools and resources they need to build secure, efficient, and scalable applications. This connection between Docker’s expertise in container-first software development and JFrog’s comprehensive DevOps platform empowers development teams to manage their software supply chains with precision. By bringing together Docker’s trusted content and JFrog’s robust artifact management, developers can ensure their applications are built on a foundation of security and reliability.

Our mutual customers with Docker Business subscriptions can leverage features like Registry Access Management and Image Access Management to ensure developers only access verified registries and image repositories, such as specific instances of JFrog Artifactory or JFrog Container Registry.

Looking ahead, Docker and JFrog are committed to continuing their joint efforts in advancing secure software supply chain practices. Upcoming initiatives include expanding the availability of trusted content, enabling deeper integrations between Docker Scout and JFrog’s products, and introducing new features that will further enhance developer productivity and security. These developments will help organizations navigate the complexities of modern software development with greater confidence and control.

See you in Austin

As we prepare for SwampUP, we invite you to explore the integrations between Docker and JFrog that are already transforming development workflows. Whether you’re looking to manage your on-premise images with JFrog Artifactory or leverage Docker’s advanced security analytics and automated image management capabilities, this partnership offers resources to help developers successfully deploy cloud-native and hybrid applications with containerization best practices at their core.

Catch Scott Johnston’s keynote at SwampUP and learn more about how our partnership with JFrog can elevate your development processes. We’re excited to work together to build a more secure, efficient, and innovative software development ecosystem. See you in Austin!

Learn more

Find us at SwampUP 2024.

Explore the integration between Docker Scout and JFrog Artifactory.

Subscribe to the Docker Newsletter.


Why We Need More Gender Diversity in the Cybersecurity Space

What does it mean to be diverse? At the root of diversity is the ability to bring people together with different perspectives, experiences, and ideas. It’s about enriching the work environment to lead to more innovative solutions, better decision-making, and a more inclusive environment.

For me, it’s about ensuring that my daughter one day knows that it really is okay for her to be whatever she wants to be in life. That she isn’t bound by a gender stereotype or what is deemed appropriate based on her sex.  

This is why building a more diverse workforce in technology is so critical. I want the children of the world, my children, to be able to see themselves in the people they admire, in the fields they are interested in, and to know that the world is accepting of the path that they choose.

Monday, August 26th, was Women’s Equality Day, and while I recognize that women have come a long way, there is still work to be done. Diversity is not just a buzzword — it’s a necessity. When diverse perspectives converge, they create a rich ground for innovation. 

Women in cybersecurity

Despite progress in many areas, women are still underrepresented in cybersecurity. Let’s look at key statistics. According to data published in the ISC2 Cybersecurity Workforce Study published in 2023:

Women make up 26% of the cybersecurity workforce globally. 

The average global salary of women who participated in the ISC2 survey was US$109,609 compared to $115,003 for men. For US women, the average salary was $141,066 compared to $148,035 for men. 

Making progress

We should recognize where we have had wins in cybersecurity diversity, too.

The 2024 Cybersecurity Skills Gap global research report highlights significant progress in improving diversity within the cybersecurity industry. According to the report, 83% of companies have set diversity hiring goals for the next few years, with a particular focus on increasing the representation of women and minority groups. Additionally, structured programs targeting women have remained a priority, with 73% of IT decision-makers implementing initiatives specifically aimed at recruiting more women into cybersecurity roles. These efforts suggest a growing commitment to enhancing diversity and inclusion within the field, which is essential for addressing the global cybersecurity skills shortage.

Women hold approximately 25% of the cybersecurity jobs globally, and that number is growing. This representation has seen a steady increase from about 10% in 2013 to 20% in 2019, and it’s projected to reach 30% by 2025, reflecting ongoing efforts to enhance gender diversity in this field. 

Big tech companies are playing a pivotal role in increasing the number of women in cybersecurity by launching large-scale initiatives aimed at closing the gender gap. Microsoft, for instance, has committed to placing 250,000 people into cybersecurity roles by 2025, with a specific focus on underrepresented groups, including women. Similarly, Google and IBM are investing billions into cybersecurity training programs that target women and other underrepresented groups, aiming to equip them with the necessary skills to succeed in the industry.

This progress is crucial as diverse teams are often better equipped to tackle complex cybersecurity challenges, bringing a broader range of perspectives and innovative solutions to the table. As organizations continue to emphasize diversity in hiring, the cybersecurity industry is likely to see improvements not only in workforce composition but also in the overall effectiveness of cybersecurity strategies.

Good for business

This imbalance is not just a social issue — it’s a business one. There are not enough cybersecurity professionals to fill open roles, resulting in a worldwide shortage. According to ISC2’s 2022 report, the gap stands at 3.4 million cybersecurity professionals. In fact, most organizations feel at risk because they do not have enough cybersecurity staff.

Cybersecurity roles are also among the fastest-growing roles in the United States. The Diverse Cybersecurity Workforce Act of 2024 would direct the Cybersecurity and Infrastructure Security Agency (CISA) to promote the cybersecurity field to underrepresented and disadvantaged communities.

Here are a few ideas for how we can help accelerate gender diversity in cybersecurity:

Mentorship and sponsorship: Experienced professionals should actively mentor and sponsor women in these fields, helping them navigate the challenges and seize opportunities.

Unfortunately, this year the cybersecurity industry has seen major losses among organizations that support women. Women Who Code (WWC) and Girls in Tech shut their doors due to funding shortfalls. Other organizations are still available, including:

Girls Who Code

Women in CyberSecurity (WiCyS)

Black Girls in Cyber

Women in Security and Privacy (WISP)

Women in Technology International (WITI)

Women4Cyber (Europe)

Companies may also consider internal mentorship programs or working with partners to allow cross-company mentorship opportunities.

Women within the cybersecurity field should also consider guest lecture positions or even teaching. Young girls who do not get to see women in the field are statistically less likely to choose that as a profession.

Inclusive work environments: Companies must create cultures where diversity is celebrated, not just tolerated or a means to an end. This means fostering environments where women feel empowered to share their ideas and take risks. This could include:

Provide training to employees at all levels. At Docker, every employee receives an annual training budget. Additionally, our Employee Resource Groups (ERGs) are provided with budgets to facilitate educational initiatives to support under-represented groups. Teams can also request additional training as part of the annual budgeting process.

Ensure there is an established career ladder for cybersecurity roles within the organization. Work with team members to understand their wishes for career advancement and create internal development plans to support those achievements. Make sure results are measurable. 

Provide transparency around promotions and pay, reducing the gender gaps in these areas. 

Ensure recruiters and managers are trained on diversity and identifying diverse candidate pools. At Docker, we invest in sourcing diverse candidates and ensuring our interview panels have a diverse team so candidates can learn about different perspectives regarding life at Docker.

Ensure diverse recruitment panels. This is important for recruiting new diverse talent and allows people to understand the culture from multiple perspectives.

Policy changes: Companies should implement policies that support work-life balance, such as flexible working hours and parental leave, making it easier for women to thrive in these demanding fields. Companies could consider the following programs:

Generous paid parental leave.

Ramp-back programs for parents returning from parental leave.

Flexible working hours, remote working options, condensed workdays, etc. 

Manager training to ensure managers are being inclusive and can navigate diverse direct report needs.

Employee Resource Groups (ERGs): Establishing allyship groups and/or employee resource groups (ERGs) helps ensure that employees feel supported and have a mechanism to report needs to the organization. For example, a Caregivers ERG can help advocate for women who need flexibility in their schedules to allow for caregiving responsibilities. 

Better together

As we reflect on the progress made in gender diversity, especially in the cybersecurity industry, it’s clear that while we’ve come a long way, there is still much more to achieve. The underrepresentation of women in cybersecurity is not just a diversity issue — it’s a business imperative. Diverse teams bring unique perspectives that drive innovation, foster creativity, and enhance problem-solving capabilities. The ongoing efforts by companies, coupled with supportive policies and inclusive cultures, are critical steps toward closing the gender gap.

The cybersecurity landscape is evolving, and so must our approach to diversity. It’s encouraging to see big tech companies and organizations making strides in this direction, but the journey is far from over. As we commemorate Women’s Equality Day, let’s commit to not just acknowledging the need for diversity but actively working toward it. The future of cybersecurity — and the future of technology — depends on our ability to embrace and empower diverse voices.

Let’s make this a reality, not just for the sake of our daughters but for our entire industry.

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Docker Desktop 4.34: MSI Installer GA, Upgraded Host Networking, and Powerful Enhancements for Boosted Productivity & Administration

Key GA features of the Docker Desktop 4.34 release include: 

MSI installer GA and Windows registry key 

Host networking support in Docker Desktop 

Automatic reclamation of disk space in Docker Desktop for WSL2

Authentication enhancements 

NVIDIA AI Workbench Docker Desktop Integration 

Multi-platform UX improvements and the containerd image store

Docker Desktop 4.34 introduces key features to enhance security, scalability, and productivity for all development team sizes, making deploying and managing environments more straightforward. With the general availability (GA) of the MSI installer for bulk deployment, managing installations across Windows environments becomes even simpler. Enhanced authentication features offer an improved administration experience while reinforcing security. Automatically reclaim valuable disk space with Docker Desktop’s new smart compaction feature, streamlining storage management for WSL2 users. Additionally, the integration with NVIDIA AI Workbench provides developers with a seamless connection between model training and local development. Explore how these innovations simplify your workflows and foster a culture of innovation and reliability in your development practices.

Deploy Docker Desktop in bulk with the MSI installer

We’re excited to announce that the MSI installer for Docker Desktop is now generally available to all our Docker Business customers. This powerful tool allows you to customize and deploy Docker Desktop across multiple users or machines in an enterprise environment, making it easier to manage Docker at scale. 

Features include:

Interactive and silent installations: Choose between an interactive setup process or deploy silently across your organization without interrupting your users.

Customizable installation paths: Tailor the installation location to fit your organization’s needs.

Desktop shortcuts and automatic startup: Simplify access for users with automatic creation of desktop shortcuts and Docker Desktop starting automatically after installation.

Set usage to specific Docker Hub organizations: Control which Docker Hub organizations your users are tied to during installation.

Docker administrators can download the MSI installer directly from the Docker Admin Console.

One of the standout features of this installer is the --allowed-org flag. This option enables the creation of a Windows registry key during installation, enforcing sign-in to a specified organization. By requiring sign-in, you ensure that your developers are using Docker Desktop with their corporate credentials, fully leveraging your Docker Business subscription. This also adds an extra layer of security, protecting your software supply chain.

Additionally, this feature paves the way for Docker to provide you with valuable usage insights across your organization and enable cloud-based control over application settings for every user in your organization in the future.

Figure 1: Docker admins can download the MSI installer directly from the Docker Admin Console.

What’s next

We’re also working on releasing a PKG enterprise installer for macOS, config profiles for macOS, and supporting multiple organizations in all supported sign-in enforcement mechanisms. 

Refer to our docs to learn about MSI configuration and discover more about sign-in enforcement via Windows registry key.

Host networking support in Docker Desktop 

Previously, Docker Desktop lacked seamless host networking capability, complicating the integration between host and container network services. Developers had to take time to set up and enable communication between the host and containers. Now, host networking capability is built directly into Docker Desktop. 

Host networking allows containers that are started with --net=host to use localhost to connect to TCP and UDP services on the host. It will automatically allow software on the host to use localhost to connect to TCP and UDP services in the container. This simplifies the setup for scenarios in which close integration between host and container network services is required. Additionally, we’re driving cross-platform consistency and simplifying configuration by reducing the need for additional steps, such as setting up port forwarding or bridge networks. 
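For example, once the feature is enabled, a web server in a container becomes reachable on localhost with no -p port mappings. A minimal sketch, assuming Docker Desktop 4.34+ with host networking turned on (nginx is just an illustrative image), that skips itself where no Docker daemon is available:

```shell
# Guard so the sketch degrades gracefully where Docker isn't running.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  # --net=host: the container shares the host's network stack, so port 80
  # inside the container is port 80 on the host; no -p mapping is needed.
  docker run --rm -d --net=host --name hostnet-demo nginx
  sleep 2
  curl -s -o /dev/null -w "HTTP %{http_code}\n" http://localhost:80
  docker rm -f hostnet-demo >/dev/null
else
  echo "Docker daemon not available; skipping demo"
fi
```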

While this has previously been available in the Docker Engine, we’re now extending this capability to Docker Desktop for Windows, macOS, and Linux. We’re dedicated to improving developer productivity, and this is another way we help developers spend less time configuring network settings and more time building and testing applications, accelerating development cycles. 

This new capability is available for all users logged into Docker Desktop. To enable this feature, navigate to Settings > Resources > Network. Learn more about this feature on Docker Docs. 

Figure 2: Enable the host networking support feature in the Settings menu.

Automatic reclamation of disk space in Docker Desktop for WSL2 

Previously, when customers using Docker Desktop for WSL2 deleted Docker objects such as containers, images, or builds (for example via a docker system prune), the freed storage space was not automatically reclaimed on their host. Instead, they had to use external tools to “compact” the virtual disk/distribution backing Docker Desktop.

Starting with Docker 4.34, we are rolling out automatic reclamation of disk space. When you quit the app, Docker Desktop will automatically check whether there is storage space that can be returned to the host. It will then scan the virtual disk used for Docker storage, and compact it by returning all zeroed blocks to the operating system. Currently Docker Desktop will only start the scan when it estimates that at least 16GB of space can be returned. In the future, we plan to make this threshold adaptive and configurable by the user.

The feature is now enabled for all customers running the Mono distribution architecture for Docker Desktop on WSL2. This new architecture, which was rolled out starting with Docker Desktop 4.30 for all fresh installations of Docker Desktop, removed the need for a dedicated docker-desktop-data WSL2 distribution to store docker data. We will be rolling out the new architecture to all customers in the upcoming Docker Desktop releases.

Customers with installations still using the docker-desktop-data WSL2 distribution can compact storage manually via VHDX compaction tools, or change the WSL2 configuration to enable the experimental WSL2 feature for disk cleanup.

(Pro tip: Did you know you can use the Disk Usage extension to see how Docker Desktop is using your storage and use it to prune dangling objects with a single click?)

Authentication enhancements 

Previously, authenticating via the CLI required developers to either type their password into the command-line interface — which should generally be avoided by the security-minded — or manually create a personal access token (PAT) by navigating to their Docker account settings, generating the token, and then copying it into the CLI for authentication. This process was time-consuming and forced developers to switch contexts between the CLI and the web portal.

In this latest Docker Desktop release, we’re streamlining the CLI authentication flow. Now, users can authenticate through a seamless browser-based process, similar to the experience in CLIs like GitHub’s gh or Amazon’s AWS CLI. With this improved flow, typing docker login in the CLI will print a confirmation code and open your browser for authentication, automating PAT creation behind the scenes and eliminating the need for manual PAT provisioning. This enhancement saves time, reduces complexity, and delivers a smoother and more secure user experience. Additionally, when you authenticate using this workflow, you’ll be logged in across both Docker CLI and Docker Desktop. 

This new flow also supports developers in organizations that require single sign-on (SSO), ensuring a consistent and secure authentication process.

Figure 3: When you log in via the new workflow, you’ll be logged in across both Docker CLI and Docker Desktop.

Enterprise-grade AI application development with Docker Desktop and NVIDIA AI Workbench  

AI development is a complex journey, often hindered by the challenge of connecting the dots between model training, local development, and deployment. Developers frequently encounter a fragmented and inconsistent development environment and toolchain, making it difficult to move seamlessly from training models in the cloud to running them locally. This fragmentation slows down innovation, introduces errors, and complicates the end-to-end development process.

To solve this, we’re proud to announce the integration of Docker Desktop with NVIDIA AI Workbench, a collaboration designed to streamline every stage of AI development. This solution brings together the power of Docker’s containerization with NVIDIA’s leading AI tools, providing a unified environment that bridges the gap between model training and local development.

With this integration, you can now train models in the cloud using NVIDIA’s robust toolkit and effortlessly transition to local development on Docker Desktop. This eliminates the friction of managing different environments and configurations, enabling a smoother, more efficient workflow from start to finish. 

To learn more about this collaboration and how Docker Business supports enterprise-grade AI application development, read our blog post. 

Multi-platform UX improvements and the containerd image store  

In February 2024, we announced the general availability of the containerd image store in Docker Desktop. Since then, we’ve been working on improving the output of our commands to make multi-platform images easier to view and manage. 

We are happy to announce that the docker image list CLI command now supports an experimental --tree flag. This offers a completely new tree view of the image list, which is more suitable for describing multi-platform images.

Figure 4: New CLI tree view of the image list.

If you’re looking for multi-platform support, you need to ensure that you have the containerd image store enabled in Docker Desktop (see General settings in Docker Desktop, select Use containerd for pulling and storing images). As of the Docker Desktop 4.34 release, fresh installs or factory resets of Docker Desktop will now default to using the containerd image store, meaning that you get multi-platform building capability out of the box. 

Figure 5: You can enable the containerd image store in the Docker Desktop general settings.

To learn more about the containerd image store, check out our containerd documentation. 

Wrapping up 

Docker Desktop 4.34 marks a significant milestone in our commitment to providing an industry-leading container development suite. With key features such as the MSI installer for bulk deployment, enhanced authentication mechanisms, and the integration with NVIDIA AI Workbench, Docker Desktop is transforming how teams manage deployments, protect their environments, and accelerate their development workflows. 

These advancements simplify your development processes and help drive a culture of innovation and reliability. Stay tuned for more exciting updates and enhancements as we continue to deliver solutions designed to empower your development teams and secure your operations at scale. 

Upgrade to Docker Desktop 4.34 today and experience the future of container development. 

Learn more

Authenticate and update to receive your subscription level’s newest Docker Desktop features.

New to Docker? Create an account. 

Subscribe to the Docker Newsletter.

Read New Docker Desktop Enterprise Admin Features: MSI Installer and Login Enforcement Alternative. 

Read Optimizing AI Application Development with Docker Desktop and NVIDIA AI Workbench. 

Learn more about MSI configuration.

Discover more about sign-in enforcement via Windows registry key.

Learn more about host networking support.


New Docker Desktop Enterprise Admin Features: MSI Installer and Login Enforcement Alternative

At Docker, we continuously strive to enhance the ease and security of our platform for all users. We’re excited to launch the general availability for two significant updates: the Docker Desktop MSI installer and a new sign-in enforcement alternative. These updates aim to streamline administration, improve security, and ensure users can take full advantage of Docker Business subscription features.

Docker Desktop MSI installer

Replacing an EXE installer with an MSI installer for desktop applications offers numerous advantages, particularly for enterprise customers:

Enhanced deployment features: MSI installers provide the ability to manage installations through Group Policy and offer more comprehensive installation and uninstallation control.

Easier and more secure mass deployment: Facilitates secure, efficient deployment across multiple devices, enhancing IT administration efficiency.

Widely accepted: MSI installers are recognized in both home and enterprise environments.

Supports standardized silent install parameters: Aligns with industry standards for silent installations.

Ideal for large-scale deployment: MSI files can be customized to include specific options, such as silent installs or custom installation paths, making them perfect for corporate environments.

For customers using Intune MDM, we have detailed documentation to assist with integration: Intune MDM Documentation.

To access the installer, navigate to the Docker Admin Console > Security and Access > Deploy Docker Desktop.

Sign-in enforcement: Streamlined alternative for organizations

Figure 1: Sign-in enforcement.

Recognizing the need for more streamlined and familiar ways to enforce sign-in protocols, Docker is introducing a new sign-in enforcement mechanism for Windows OS (Figure 1). This update brings several business benefits, including increased user logins and better seat allocation awareness, ultimately helping customers maximize their business subscription features and manage license costs more effectively.

We now offer integration with the Windows Registry, allowing admins to add approved organizations directly within familiar Windows system settings. Find out more.

By moving away from the traditional registry.json method and adopting universally recognized settings, Docker simplifies the process for IT admins already familiar with these systems. This change means:

Easier integration: Organizations can seamlessly integrate Docker sign-in enforcement into their existing configuration management workflows.

Simplified administration: Reduces the learning curve and eliminates the need for additional internal approvals for new file types.

These changes are designed to offer quick deployment and familiar processes to IT administrators. We are committed to refining these mechanisms based on user feedback and evolving requirements. 

Note that the legacy registry.json method will continue to work, ensuring support for existing customers, but it should now be considered a legacy method. If you roll out a registry key, it will take precedence over any pre-existing registry.json.
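For admins who prefer to set the key directly rather than at install time, the idea looks roughly like the following (run from an elevated prompt; the exact key path and value name are defined in Docker's sign-in enforcement documentation, so treat this as an illustrative sketch rather than a definitive reference):

reg add "HKLM\Software\Policies\Docker\Docker Desktop" /v allowedOrgs /t REG_MULTI_SZ /d "docker.com"

This would register docker.com as an approved organization; Docker Desktop then requires users to sign in with an account belonging to that organization.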

Roll out the registry key sign-in enforcement at Docker install time via the --allowed-org flag.

For example, to deploy the MSI installer with sign-in enforcement, run the following: 

msiexec /i "DockerDesktop.msi" /L*V ".msi.log" /quiet /norestart ALLOWEDORG="docker.com"

The above command installs Docker Desktop silently with verbose logging, without restarting the system, and it allows only the specified organization (in this case, “docker.com”) to use Docker Desktop by enforcing sign-in.

Check our full step-by-step installation documentation.

Roadmap

We’re also working on several related administrative improvements, such as:

PKG enterprise installer for macOS.

macOS configuration profiles for enforcing sign-in.

Supporting multiple organizations in all available sign-in enforcement mechanisms.

Stay tuned for these exciting updates!

Wrapping up

These updates reflect our ongoing commitment to improving the Docker platform for our users. By introducing the Docker Desktop MSI installer and new sign-in enforcement alternatives, we aim to simplify deployment, enhance security, and streamline administration for organizations of all sizes. We encourage IT teams and administrators to start planning for these changes to enhance their Docker experience.

Learn more

See what else is new in Docker Desktop.

Install Docker Desktop with MSI.

Enforce sign-in with Windows Registry Key.

Subscribe to the Docker Newsletter.

Authenticate and update to receive your subscription level’s newest Docker Desktop features.

New to Docker? Create an account. 


Streamlining Local Development with Dev Containers and Testcontainers Cloud

In today’s world of fast-paced development, setting up a consistent and efficient development environment for all team members is crucial for overall productivity. Although Docker itself is a powerful tool that enhances developer efficiency, configuring a local development environment still can be complex and time-consuming. This is where development containers (or dev containers) come into play.

Dev containers provide an all-encompassing solution, offering everything needed to start working on a feature and running the application seamlessly. In specific terms, dev containers are Docker containers (running locally or remotely) that encapsulate everything necessary for the software development of a given project, including integrated development environments (IDEs), specific software, tools, libraries, and preconfigured services. 

This description of an isolated environment can be easily transferred and launched on any computer or cloud infrastructure, allowing developers and teams to abstract away the specifics of their operating systems. The dev container settings are defined in a devcontainer.json file, which is located within a given project, ensuring consistency across different environments.

However, development is only one part of a developer’s workflow. Another critical aspect is testing to ensure that code changes work as expected and do not introduce new issues. If you use Testcontainers for integration testing or rely on Testcontainers-based services to run your application locally, you must have Docker available from within your dev container. 

In this post, we will show how you can run Testcontainers-based tests or services from within the dev container and how to leverage Testcontainers Cloud within a dev container securely and efficiently to make interacting with Docker even easier. 

Getting started with dev containers

To get started with dev containers on your computer using this tutorial, you will need:

Git 2.25+

Docker 

IntelliJ IDE 

There’s no need to preconfigure your project to support development containers; the IDE will do it for you. But, we will need some Testcontainers usage examples to run in the dev container, so let’s use the existing Java Local Development workshop repository. It contains the implementation of a Spring Boot-based microservice application for managing a catalog of products. The demo-state branch contains the implementation of Testcontainers-based integration tests and services for local development. 

Although this project typically requires Java 21 and Maven installed locally, we will instead use dev containers to preconfigure all necessary tools and dependencies within the development container. 

Setting up your first dev container

To begin, clone the project:

git clone https://github.com/testcontainers/java-local-development-workshop.git

Next, open the project in your local IntelliJ IDE and install the Dev Containers plugin (Figure 1).

Figure 1: Install the Dev Containers plugin.

Next, we will add a .devcontainer/devcontainer.json  file with the requirements to the project. In the context menu of the project root, select New > .devcontainer (Figure 2).

Figure 2: Choosing new .devcontainer.

We’ll need Java 21, so let’s use the Java Dev Container Template. Then, select Java version 21 and enable Install Maven (Figure 3).

Figure 3: Dev Container Template options.

Select OK, and you’ll see a newly generated devcontainer.json file. Let’s now tweak that a bit more. 

Because Testcontainers requires access to Docker, we need to provide some access to Docker inside of the dev container. Let’s use an existing Development Container Feature to do this. Features enhance development capabilities within your dev container by providing self-contained units of specific container configuration including installation steps, environment variables, and other settings. 

You can add the Docker-in-Docker feature to your devcontainer.json to install Docker into the dev container itself and thus have a Docker environment available for the Testcontainers tests.

Your devcontainer.json file should now look like the following:

{
  "name": "Java Dev Container TCC Demo",
  "image": "mcr.microsoft.com/devcontainers/java:1-21-bullseye",
  "features": {
    "ghcr.io/devcontainers/features/java:1": {
      "version": "none",
      "installMaven": "true",
      "installGradle": "false"
    },
    "ghcr.io/devcontainers/features/docker-in-docker:2": {
      "version": "latest",
      "moby": true,
      "dockerDashComposeVersion": "v1"
    }
  },
  "customizations": {
    "jetbrains": {
      "backend": "IntelliJ"
    }
  }
}

Now you can run the container. Navigate to devcontainer.json and click on the Dev Containers plugin and select Create Dev Container and Clone Sources. The New Dev Container window will open (Figure 4).

Figure 4: New Dev Container window.

In the New Dev Container window, you can select the Git branch and specify where to create your dev container. By default, it uses the local Docker instance, but you can select the ellipses (…) to add additional Docker servers from the cloud or WSL and configure the connection via SSH.

If the build process is successful, you will be able to select the desired IDE backend, which will be installed and launched within the container (Figure 5).

Figure 5: Dev container build process.

After you select Continue, a new IDE window will open, allowing you to code as usual. To view the details of the running dev container, execute docker ps in the terminal of your host (Figure 6).

Figure 6: Viewing dev container details.

If you run the TestApplication class, your application will start with all required dependencies managed by Testcontainers. (For more implementation details, refer to the “Local development environment with Testcontainers” step on GitHub.) You can see the services running in containers by executing docker ps in your IDE terminal (within the container). See Figure 7.

Figure 7: Viewing services running in containers.

Setting up Testcontainers Cloud in your dev container

To reduce the load on local resources and enhance the observability of Testcontainers-based containers, let’s switch from the Docker-in-Docker feature to the Testcontainers Cloud (TCC) feature:  ghcr.io/eddumelendez/test-devcontainer/tcc:0.0.2.

This feature will install and run the Testcontainers Cloud agent within the dev container, providing a remote Docker environment for your Testcontainers tests.

To enable this functionality, you’ll need to obtain a valid TC_CLOUD_TOKEN, which the Testcontainers Cloud agent will use to establish the connection. If you don’t already have a Testcontainers Cloud account, you can sign up for a free account. Once logged in, you can create a Service Account to generate the necessary token (Figure 8).

Figure 8: Generating Testcontainers Cloud access token.

To use the token value, we’ll utilize an .env file. Create an environment file under .devcontainer/devcontainer.env and add your newly generated token value (Figure 9). Be sure to add devcontainer.env to .gitignore to prevent it from being pushed to the remote repository.

Figure 9: Add your token value to devcontainer.env.
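The environment file itself is a plain KEY=VALUE list; for example (with a placeholder instead of a real token value):

TC_CLOUD_TOKEN=<your-generated-token>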

In your devcontainer.json file, include the following options:

The runArgs to specify that the container should use the .env file located at .devcontainer/devcontainer.env. 

The containerEnv to set the environment variables TC_CLOUD_TOKEN and TCC_PROJECT_KEY within the container. The TC_CLOUD_TOKEN variable is dynamically set from the local environment variable.

The resulting devcontainer.json file will look like this:

{
  "name": "Java Dev Container TCC Demo",
  "image": "mcr.microsoft.com/devcontainers/java:21",
  "runArgs": [
    "--env-file",
    ".devcontainer/devcontainer.env"
  ],
  "containerEnv": {
    "TC_CLOUD_TOKEN": "${localEnv:TC_CLOUD_TOKEN}",
    "TCC_PROJECT_KEY": "java-local-development-workshop"
  },
  "features": {
    "ghcr.io/devcontainers/features/java:1": {
      "version": "none",
      "installMaven": "true",
      "installGradle": "false"
    },
    "ghcr.io/eddumelendez/test-devcontainer/tcc:0.0.2": {}
  },
  "customizations": {
    "jetbrains": {
      "backend": "IntelliJ"
    }
  }
}

Let’s rebuild and start the dev container again. Navigate to devcontainer.json, select the Dev Containers plugin, then select Create Dev Container and Clone Sources, and follow the steps as in the previous example. Once the build process is finished, choose the necessary IDE backend, which will be installed and launched within the container.

To verify that the Testcontainers Cloud agent was successfully installed in your dev container, run the following in your dev container IDE terminal: 

cat /usr/local/share/tcc-agent.log

You should see a log line similar to Listening address= if the agent started successfully (Figure 10).

Figure 10: Verifying successful installation of Testcontainers Cloud agent.

Now you can run your tests. The ProductControllerTest class contains Testcontainers-based integration tests for our application. (For more implementation details, refer to the “Let’s write tests” step on GitHub.)

To view the containers running during the test cycle, navigate to the Testcontainers Cloud dashboard and check the latest session (Figure 11). You will see the name of the Service Account you created earlier in the Account line, and the Project name will correspond to the TCC_PROJECT_KEY defined in the containerEnv section. You can learn more about how to tag your session by project or workflow in the documentation.

Figure 11: Testcontainers Cloud dashboard.

If you want to run the application and debug containers, you can Connect to the cloud VM terminal and access the containers via the CLI (Figure 12).

Figure 12: Accessing containers via the CLI.

Wrapping up

In this article, we’ve explored the benefits of using dev containers to streamline your Testcontainers-based local development environment. Using Testcontainers Cloud enhances this setup further by providing a secure, scalable solution for running Testcontainers-based containers, addressing the potential security concerns and resource limitations of the Docker-in-Docker approach. This powerful combination simplifies your workflow and boosts productivity and consistency across your projects.

Running your dev containers in the cloud can further reduce the load on local resources and improve performance. Stay tuned for upcoming innovations from Docker that will enhance this capability even further.

Learn more

Subscribe to the Docker Newsletter.

Sign up for a Testcontainers Cloud account.

Learn about Testcontainers best practices.

Learn about Spring Boot Application Testing and Development with Testcontainers.

Get started with the Testcontainers guide.

Connect on the Testcontainers Slack.

Have questions? The Docker community is here to help.

New to Docker? Get started.


How to Fix ESLint Violations with AI Assistance

This ongoing Docker Labs GenAI series will explore the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing things as open source so you can play, explore, and hack with us, too.

In a previous article in this series, we discussed linters, and we talked about running ESLint with an AI assistant. In this post, we will cover AI-assisted violation resolutions. 

Before an AI assistant can take action on violations via the editor or Git (Figure 1), we first need to answer the question: Can an LLM generate resolutions for ESLint violations with the right knowledge and tools?

Figure 1: Fixing ESLint violations.

Fix any lint violations you can in my project.

ESLint violations

To resolve a violation, the LLM needs information about the violation from the linter and some context about the source code causing it. A level of human supervision may also be necessary. We want to shrink context wherever possible, because models both limit context window sizes and yield better results with less context. This makes us wonder: “What is the minimum amount of context and supervision necessary to resolve a lint violation?”

For example, without human instruction, our assistant can’t know how to balance our parentheses:

// export const dockerfileRegex = new RegExp('(.*.?Dockerfile', 'i');
// Goal: new RegExp('(.*).?Dockerfile', 'i');

Your `dockerfileRegex` regular expression is currently unterminated. This means the group you started at `(.*.?Dockerfile` doesn't have an ending bracket. Here is a fixed version of it:

```typescript
export const dockerfileRegex = new RegExp('(.*.?Dockerfile)', 'i');
```

I added a ")" to close the group.

Focusing on certain violations

We want to narrow the list of ESLint rules to violations the LLM can help with.

In Figure 2, we have compiled every ESLint rule that:

Is recommended

Is not auto-fixable by ESLint itself

And we want rules that:

Do not require supervision to make reasonable fixes.

Figure 2: Violations that the LLM can help with.

This context yields four total groups of violations:

Group 1 (No context needed from the surrounding source code or supervision needed.) Scripts can be applied to fix these violations.

Group 2 (No context needed from the surrounding source code, but these might be mistakes that necessitate the LLM to evaluate some of the code.) LLM can generate scripts to resolve these violations, but we would be assuming violations are mistakes. There are some violations that the LLM can evaluate on its own.

Group 3 (Context is needed from surrounding source code, but no supervision is necessary.) This is the best opportunity for our LLM to provide help. We can use tools to pull the right context in.

Group 4 (Context is needed from surrounding source code and supervision might be needed.) This group includes invalid regex, unsafe optional chaining, and constant conditions. Whether the LLM can help here depends heavily on the exact situation.

Thankfully, nearly all the violations could have reasonable fixes applied without supervision. These are the violations that we will focus on.

Initial prompts

First, we create the prompts to attempt to fix ESLint violations.

You are an AI assistant who specializes in resolving lint violations in projects. Use the tools available to quickly take action and be very brief.
1. Run linter.
2. Evaluate total violations.
// What to do?

Unfortunately, we run into a roadblock when it comes to learning about lint violations from our ESLint tool. When using summary output, we don’t have enough information to know what we’re fixing. However, when using JSON output, we found that as few as 100 violations in a project caused ESLint to send over 10,000 characters over standard out. That is a problem, since many current models limit us to 4-8k tokens. We need a way to persist this large output without consuming tokens.

Artifacts

While we want to use ESLint, it can easily balloon an AI conversation past the model’s context size. To address this, we modified the ESLint tool to write eslint.json to a new ephemeral Docker volume. Reading and writing to this volume serves as another conversation between tools, but sandboxed away from the LLM’s context-sensitive function call output. 

We added tools to consume the ESLint artifact in different ways depending on the number of violations summarized.

- name: parse_lint_results
  description: Loads lint violations grouped by type.
  parameters:
    type: object
    properties:
      outputLevel:
        type: string
        description: Supports condensed or complaints
  container:
    image: vonwig/read_eslint
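For illustration, a condensed view can be as simple as grouping violations by rule. Here is a minimal sketch of that idea (our own example, not the actual vonwig/read_eslint implementation):

```typescript
// Sketch: condense ESLint's verbose JSON output into a small
// rule -> affected-files map, so the LLM sees a short summary
// instead of a 10,000+ character report.
type EslintMessage = { ruleId: string | null };
type EslintResult = { filePath: string; messages: EslintMessage[] };

function condenseEslint(results: EslintResult[]): Record<string, string[]> {
  const byRule: Record<string, string[]> = {};
  for (const result of results) {
    for (const msg of result.messages) {
      const rule = msg.ruleId ?? "unknown";
      if (!byRule[rule]) byRule[rule] = [];
      // Record each file once per rule, no matter how many violations it has.
      if (!byRule[rule].includes(result.filePath)) {
        byRule[rule].push(result.filePath);
      }
    }
  }
  return byRule;
}
```

A summary like this is what the condensed output level hands back to the model when there are many violations.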

Now we can use this tool to finish our prompt.

You are an AI assistant who specializes in resolving lint violations in projects. Use the tools available to quickly take action and be very brief.
1. Run lint. // Will write /thread/eslint.json
2. Evaluate total violations. // Will read /thread/eslint.json
<10 violations: Parse output with complaints output.
10+ violations: Parse output with condensed output.
3. Fix the violations using the following steps:
## Condensed:
{>fixing}

A user prompt of:

1. Get the line: Use read_eslint with the `path` arg to get all of the violations for a file.
2. Make the correction: Respond in the following format:

```json
{
  "start": [1, 4],
  "end": [2, 4],
  "edit": "Lorem ipsum"
}
```

Once you have fixed one file, move on to the next.
You are able to fix the following violations:
no-async-promise-executor

yields:

The violation is located in the file src/utils/promiseUtils.ts at lines 30-31. The current code is:

```javascript
return new Promise(async (resolve, reject) => {

});
```

To resolve the no-async-promise-executor violation, remove the async keyword from the `Promise` executor function. The corrected code would look like:

```javascript
return new Promise((resolve, reject) => {

});
```

Unfortunately, we’re still not seeing a good result. Without the violating context from the code, again, our assistant can’t provide an actionable fix. So we need to make a tool to get that.

Tree Sitter

Without the right tools, the LLM is at best vaguely helpful, and at worst quite liable to hallucinate. We need tools to let the LLM be actionable here. Tree-sitter is a popular open source tool we can use to parse a variety of languages into abstract syntax trees (ASTs).

The powerful part of Tree-sitter that we want to take advantage of is the community of grammar authors providing support for almost any language our assistant could encounter. We have Dockerized this tool to get context given a file path and line number.

Results

We have trimmed debug finish responses from the payload but kept function calls.

## ROLE assistant
…({:function {:name run_lint, :arguments {}}, :id call_MaK7CGpOISktz1fZP3GS7m4H})

## ROLE assistant
…………………….({:function {:name run-eslint, :arguments {
"args": ".",
"version": 8,
"outputLevel": "summary"
}}, :id call_RHX2hztWvghxbyZx5cvSXgMU})

## ROLE tool (run-eslint)
Running npx with args: eslint@8 --format json .
Preparing output
Found 41 violations in 271 files.

## ROLE tool (run_lint)
I ran ESLint (version 8) on your entire project. Here are the results:

– 271 files were checked for violations
– Out of these, 41 files had one or more violations

## ROLE assistant
………….({:function {:name parse_lint_results, :arguments {
"outputLevel": "condensed"
}}, :id call_sYHP8tXuKbXGaXdNCdO3W5kU})

## ROLE tool (parse_lint_results)
{
  "@typescript-eslint/no-floating-promises": [
    "src/commands/help.ts",
    "src/commands/registerWorkspaceCommand.ts"
  ],
  "@typescript-eslint/no-explicit-any": [
    "src/commands/registerLocalCommand.ts"
  ],
  "@typescript-eslint/naming-convention": [
    "src/constants.ts"
  ],
  "eqeqeq": [
    "src/debugging/DockerDebugConfigurationProvider.ts",
    "src/debugging/DockerServerReadyAction.ts",
    "src/runtimes/files/ContainerFilesProvider.ts",
    "src/tree/LocalRootTreeItemBase.ts"
  ],
  "@typescript-eslint/no-unused-vars": [
    "src/extension.ts",
    "src/scaffolding/wizard/netCore/NetCoreGatherInformationStep.ts"
  ],
  "@typescript-eslint/semi": [
    "src/utils/registryExperience.ts"
  ]
}

## ROLE assistant
……………({:function {:name violations_for_file, :arguments {
"path": "src/commands/help.ts"
}}, :id call_fAN5kTrXndaNs7Ix4iH51E4T})

## ROLE tool (violations_for_file)
Getting eslint.json for path: src/commands/help.ts
ESLint violations for src/commands/help.ts:
{
  "filePath": "src/commands/help.ts",
  "messages": [
    {
      "ruleId": "@typescript-eslint/no-floating-promises",
      "severity": 1,
      "message": "Promises must be awaited, end with a call to .catch, end with a call to .then with a rejection handler or be explicitly marked as ignored with the void operator.",
      "line": 35,
      "column": 5,
      "nodeType": "ExpressionStatement",
      "messageId": "floatingVoid",
      "endLine": 35,
      "endColumn": 100
    }
  ]
}

## ROLE assistant
………………….({:function {:name run_tree_sitter, :arguments {
"path": "src/commands/help.ts",
"line": 35
}}, :id call_cKnNRQs5hmMsTszTfDnwfK12})

## ROLE tool (run_tree_sitter)
{
offending_line: '}',
line_node: SyntaxNode {
type: },
startPosition: {row: 35, column: 0},
endPosition: {row: 35, column: 1},
childCount: 0,
},
parent: SetNode {
type: set,
startPosition: {row: 33, column: 45},
endPosition: {row: 35, column: 1},
childCount: 4,
},
parent_text: '{\n' +
" vscode.env.openExternal(vscode.Uri.parse('https://github.com/microsoft/vscode-docker/issues'));\n" +
'}'
}

## ROLE assistant
The violation on line 35 of the `src/commands/help.ts` file is due to a floating promise. To resolve this, add the void operator to the beginning of the line as below:

```js
void vscode.env.openExternal(vscode.Uri.parse('https://github.com/microsoft/vscode-docker/issues'));
```

This result demonstrates that the LLM can orchestrate a small code improvement using the right tools.
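For illustration, the { start, end, edit } responses requested in the user prompt above could be applied with a small helper like the following (a sketch assuming 1-indexed, end-exclusive [line, column] positions; the actual format the assistant tooling uses may differ):

```typescript
// Sketch: apply an LLM-suggested edit of the form
// { start: [line, col], end: [line, col], edit } to source text.
// Assumes 1-indexed lines/columns with an end-exclusive column.
type Edit = { start: [number, number]; end: [number, number]; edit: string };

function applyEdit(source: string, e: Edit): string {
  const lines = source.split("\n");
  const [sl, sc] = e.start;
  const [el, ec] = e.end;
  // Everything before the edit: whole lines above, plus the prefix of the start line.
  const before =
    lines.slice(0, sl - 1).join("\n") +
    (sl > 1 ? "\n" : "") +
    lines[sl - 1].slice(0, sc - 1);
  // Everything after the edit: the suffix of the end line, plus the lines below.
  const after =
    lines[el - 1].slice(ec - 1) +
    (el < lines.length ? "\n" + lines.slice(el).join("\n") : "");
  return before + e.edit + after;
}
```

A harness like this is what would turn the assistant's structured responses into actual file changes.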

Failures

Unfortunately, it terminates after resolving the first violation rather than continuing on to the next. This proved difficult to change; even two other combinations of prompts failed to get a loop going.

Fix any lint violations you can using the tools provided.
After resolving one violation, move onto the next.
Once you have fixed one file, move on to the next.

Successful prompt

However, what we did find was that specifying the lint violation to focus on did work:

Please only fix no-unused-vars

yields:

## ROLE assistant
The violation in the file "src/extension.ts" is a declared variable 'unuseVar' at line 41, which is not used anywhere. The fix would be the removal of that line.

Figure 3 lists a few interesting violations we tested.

Figure 3: Violations tested.

Overall, we can see that our assistant, when given the right tools to find and fix violations, can make good code change suggestions for even difficult lint violations. We welcome you to check out the code and test with different violations that we didn’t cover.

Summary

We continue to investigate the use of tools for gathering context and improving suggestions. In this installment, we have looked at how AI assistants can provide significant value to developers by:

Using a linter to learn about violations in a TS project without Node, NPM, or ESLint.

Leveraging an ephemeral volume to store large context without using tokens.

Using Tree-sitter to load precise, relevant code context from a project.

Generating fixes for violations in a TS project.

As always, feel free to follow along in our new public repo and please reach out. Everything we’ve discussed in this blog post is available for you to try out on your own projects.

Learn more

Subscribe to the Docker Newsletter.

Read the Docker Labs GenAI series.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Optimizing AI Application Development with Docker Desktop and NVIDIA AI Workbench

Are you looking to streamline how to incorporate LLMs into your applications? Would you prefer to do this using the products and services you’re already familiar with? This is where Docker Desktop, especially when paired with the advanced capabilities offered by Docker’s Business subscription tier, comes into play — particularly when combined with NVIDIA’s cutting-edge technology.

Imagine a development environment where setting up and managing AI workloads is as intuitive as the everyday tools you’re already using. With our deepening partnership with NVIDIA, we are committed to making this a reality. This collaboration not only enhances your ability to leverage Docker containers but also significantly improves your overall experience of building and developing AI applications.

What’s more, this partnership is designed to support your long-term growth and innovation goals. Docker Desktop with Docker Business, combined with NVIDIA software, provides the perfect launchpad for developers who want to accelerate their AI development journey — whether it’s building prototypes or deploying enterprise-grade AI applications. This isn’t just about providing tools; it’s about investing in your abilities, your career, and the innovation capabilities of your organization.

With Docker Business, you gain access to advanced capabilities that enhance security, streamline management, and offer unparalleled support. Meanwhile, NVIDIA AI Workbench provides a robust, containerized environment tailored for AI and machine learning projects. Together, these solutions empower you to push the boundaries of what’s possible, bringing AI into your applications more effortlessly and effectively.

What is NVIDIA AI Workbench?

NVIDIA AI Workbench is a free developer toolkit powered by containers that enables data scientists and developers to create, collaborate, and migrate AI workloads and development environments across GPU systems. It targets scenarios like model fine-tuning, data science workflows, retrieval-augmented generation, and more. Users can install it on multiple systems but drive everything from a client application that runs locally on Windows, Ubuntu, and macOS. NVIDIA AI Workbench helps enable collaboration and distribution through Git-based platforms, like GitHub and GitLab. 

How does Docker Desktop relate to NVIDIA AI Workbench?

NVIDIA AI Workbench requires a container runtime. Docker’s container runtime (Docker Engine), delivered through Docker Desktop, is the recommended AI Workbench runtime for developers using AI Workbench on Windows and macOS. Previously, AI Workbench users had to install Docker Desktop manually. With this newest release of AI Workbench, developers who select Docker as their container runtime will have Docker Desktop installed on their machine automatically, with no manual steps required.

 You can learn about this integration in NVIDIA’s technical blog.

Moving beyond the AI application prototype

Docker Desktop is more than just a tool for application development; it’s a launchpad that provides an integrated, easy-to-use environment for developing a wide range of applications, including AI. What makes Docker Desktop particularly powerful is its ability to seamlessly create and manage containerized environments, ensuring that developers can focus on innovation without worrying about the underlying infrastructure.

For developers who have already invested in Docker, this means that the skills, automation, infrastructure, and tooling they’ve built up over the years for other workloads are directly applicable to AI workloads as well. This cross-compatibility offers a huge return on investment, as it allows teams to extend their existing Docker-based workflows to include AI applications and services without needing to overhaul their processes or learn new tools.

Docker Desktop’s compatibility with Windows, macOS, and Linux makes it an ideal choice for diverse development teams. Its robust features support a wide range of development workflows, from initial prototyping to large-scale deployment, ensuring that as AI applications move from concept to production, developers can leverage their existing Docker infrastructure and expertise to accelerate and scale their work.

For those looking to create high-quality, enterprise-grade AI applications, Docker Desktop with Docker Business offers advanced capabilities. These include enhanced security, management, and support features that are crucial for enterprise and advanced development environments. With Docker Business, development teams can build securely, collaborate efficiently, and maintain compliance, all while continuing to utilize their existing Docker ecosystem. By leveraging Docker Business, developers can confidently accelerate their workflows and deliver innovative AI solutions with the same reliability and efficiency they’ve come to expect from Docker.

Accelerating developer innovation with NVIDIA GPUs

In the rapidly evolving landscape of AI development, the ability to leverage GPU capabilities is crucial for handling the intensive computations required for tasks like model training and inference. Docker is working to offer flexible solutions to cater to different developers, whether you have your own GPUs or need to leverage cloud-based compute. 

Running containers with NVIDIA GPUs through Docker Desktop 

GPUs are at the heart of AI development, and Docker Desktop is optimized to leverage NVIDIA GPUs effectively. With Docker Desktop 4.29 or later, developers can enable CDI support in the daemon and easily make all NVIDIA GPUs available in a running container by using the --device option with CDI device names.

For instance, the following command can be used to make all NVIDIA GPUs available in a container:

docker run --device nvidia.com/gpu=all <image> <command>

For more information on how Docker Desktop supports NVIDIA GPUs, refer to our GPU documentation.
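GPU access can also be requested declaratively in a Compose file. The following sketch assumes an NVIDIA-enabled Docker setup; the service and image names are placeholders:

```yaml
# Hypothetical Compose fragment: reserve all NVIDIA GPUs for a service.
# Assumes Docker Desktop/Engine configured with NVIDIA GPU support.
services:
  app:
    image: my-gpu-app:latest   # placeholder image name
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```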

No GPUs? No problem with Testcontainers Cloud

Not all developers have local access to powerful GPU hardware. To bridge this gap, we’re exploring GPU support in Testcontainers Cloud. This will allow developers to access GPU resources in a cloud environment, enabling them to run their tests and validate AI models without needing physical GPUs. With Testcontainers Cloud, you will be able to harness the power of GPUs from anywhere, democratizing high-performance AI development.

Trusted AI/ML content on Docker Hub

Docker Desktop provides a reliable and efficient platform for developers to discover and experiment with new ideas and approaches in AI development. Through its trusted content program, Docker works with open source and commercial communities to select and curate high-quality images, distributing them on Docker Hub under Docker Official Images, Docker-Sponsored Open Source, and Docker Verified Publishers. With a wealth of AI/ML content, Docker makes it easy for users to discover and pull images for quick experimentation. This includes NVIDIA software offerings and many more, allowing developers to get started quickly and efficiently.

Accelerated builds with Docker Build Cloud

Docker Build Cloud is a fully managed service designed to streamline and accelerate the building, testing, and deployment of any application. By leveraging Docker Build Cloud, AI application developers can shift builds from local machines to remote BuildKit instances — resulting in up to 39x faster builds. By offloading the complex build process to Docker Build Cloud, AI development teams can focus on refining their models and algorithms while Docker handles the rest.

Docker Business users get Docker Build Cloud minutes as part of their subscription, enabling faster, more efficient builds and reproducible AI deployments.

Ensuring quality with Testcontainers

As AI applications evolve from prototypes to production-ready solutions, ensuring their reliability and performance becomes critical. This is where testing frameworks like Testcontainers come into play. Testcontainers allows developers to test their applications using real containerized dependencies, making it easier to validate application logic that uses AI models in self-contained, idempotent, reproducible ways.

For instance, developers working with LLMs can create Testcontainers-based tests that exercise their application against any model available on Hugging Face, using the recently released Ollama container.

Wrap up

The collaboration between Docker and NVIDIA marks a significant step forward in the AI development landscape. By integrating Docker Desktop into NVIDIA AI Workbench, we are making it easier than ever for developers to build, ship, and run AI applications. Docker Desktop provides a robust, streamlined environment that supports a wide range of development workflows, from initial prototyping to large-scale deployment. 

With advanced capabilities from Docker Business, AI developers can focus on innovation and efficiency. As we deepen our partnership with NVIDIA, we look forward to bringing even more enhancements to the AI development community, empowering developers to push the boundaries of what’s possible in AI and machine learning. 

Stay tuned for more exciting updates as we work to revolutionize AI application development.

Learn more

Get started with Docker Desktop on NVIDIA AI Workbench by installing today.

Authenticate and update to receive your subscription level’s newest Docker Desktop features.

New to Docker? Create an account. 

Subscribe to the Docker Newsletter.

Check out NVIDIA’s Docker Hub Library.

Get started with RAG application development with Docker’s GenAI Stack.

Source: https://blog.docker.com/feed/

Deprecation of Password Logins on CLI with Docker SSO Enforcement

At Docker, security is a core priority in everything we build. To align with our commitment to provide a secure experience for our developers and organizations, we are announcing an update to Single Sign-On (SSO) enforcement. Starting September 16, 2024, we will deprecate password logins on the Docker CLI when SSO is enforced.

SSO enforcement

SSO enforcement was introduced in 2022 for Docker Business subscriptions and allows organizations to mandate that users belonging to customer domains authenticate through the organization’s Identity Provider (IdP). In particular, SSO enforcement disables the ability to use existing username/password combinations to sign in to Docker Desktop, Docker Hub, Docker Scout, and Docker Build Cloud.

When SSO enforcement was initially rolled out, a grace period allowed the Docker CLI to continue accessing the Docker Hub registry with username/password credentials. This grace period was provided to facilitate easier adoption of SSO enforcement across organizations.

Today, we announce the end of this grace period.

What is changing?

Effective September 16, 2024, we will discontinue the ability for users to authenticate to the Docker Hub registry via the CLI using passwords when SSO is enforced. Users will need to use Personal Access Tokens (PATs) for CLI authentication instead. This change ensures a unified and secure authentication process across all Docker services.

Key changes:

Discontinued password logins: Users will no longer be able to use passwords on CLI authentication when SSO is enforced.

Mandatory use of PATs: Personal Access Tokens will be required for CLI login, enhancing security and control over access to Docker resources.
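As a sketch of the new flow, a PAT generated in your Docker Hub account settings can be supplied to docker login in place of a password. The username is a placeholder, and DOCKER_PAT is assumed to hold your token:

```shell
# Hypothetical sketch: authenticate the Docker CLI with a Personal Access
# Token (PAT) instead of a password. Reading the token from stdin keeps it
# out of shell history and process listings.
printf '%s' "$DOCKER_PAT" | docker login --username your-username --password-stdin
```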

Why this change matters

At Docker, we continuously strive to improve the security of our platform. This deprecation is an essential step toward providing a secure environment for our users. By ensuring consistent enforcement of SSO across all services, we enhance your organization’s overall security posture. Adopting PATs helps you maintain the highest security standards and control over access to Docker resources.

Support and assistance

We understand that changes can be challenging, and we are here to support you through this transition. For any questions or assistance, reach out to us via:

Docker Support

Docker Community forums

Direct responses to our communication emails

We appreciate your cooperation and understanding as we work together to enhance the security of the Docker platform. Thank you for being a valued Docker user, and we look forward to continuing to support your development needs with increased security and reliability.

Learn more

Subscribe to the Docker Newsletter. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Zero Trust and Docker Desktop: An Introduction

Today’s digital landscape is characterized by frequent security breaches resulting in lost revenue, potential legal liability, and loss of customer trust. The Zero Trust model was devised to improve an organization’s security posture and minimize the risk and scope of security breaches.

In this post, we explore Zero Trust security and walk through several strategies for implementing Zero Trust within a Docker Desktop-based development environment. Although this overview is not exhaustive, it offers a foundational perspective that organizations can build upon as they refine and implement their own security strategies.

What is Zero Trust security?

The Zero Trust security model assumes that no entity — inside or outside the network boundary — should be automatically trusted. This approach eliminates automatic trust and mandates rigorous verification of all requests and operations before granting access to resources. Zero Trust significantly enhances security measures by consistently requiring that trust be earned.

The principles and practices of Zero Trust are detailed by the National Institute of Standards and Technology (NIST) in Special Publication 800-207 — Zero Trust Architecture. This document serves as an authoritative guide, outlining the core tenets of Zero Trust, which include strict access control, minimal privileges, and continuous verification of all operational and environmental attributes. For example, Section 2.1 of this publication elaborates on the foundational principles that organizations can adopt to implement a robust Zero Trust environment tailored to their unique security needs. By referencing these guidelines, practitioners can gain a comprehensive understanding of Zero Trust, which aids in strategically implementing its principles across network architectures and strengthening an organization’s security posture.

As organizations transition toward containerized applications and cloud-based architectures, adopting Zero Trust is essential. These environments are marked by their dynamism, with container fleets scaling and evolving rapidly to meet business demands. Unlike traditional security models that rely on perimeter defenses, these modern infrastructures require a security strategy that supports continuous change while ensuring system stability. 

Integrating Zero Trust into the software development life cycle (SDLC) from the outset is crucial. Early adoption ensures that Zero Trust principles are not merely tacked on post-deployment but are embedded within the development process, providing a foundational security framework from the beginning.

Containers and Zero Trust

The isolation of applications and environments from each other via containerization helps with the implementation of Zero Trust by making it easier to apply access controls, apply more granular monitoring and detection rules, and audit the results.

As noted previously, these examples are specific to Docker Desktop, but you can apply the concepts to any container-based environment, including orchestration systems such as Kubernetes.

A solid foundation: Host and network

When applying Zero Trust principles to Docker Desktop, starting with the host system is important. This system should also meet Zero Trust requirements, such as using encrypted storage, limiting user privileges within the operating system, and enabling endpoint monitoring and logging. The host system’s attachment to network resources should require authentication, and all communications should be secured and encrypted.

Principle of least privilege

The principle of least privilege is a fundamental security approach stating that a user, program, or process should have only the minimum permissions necessary to perform its intended function and no more. When working with containers, effectively implementing this principle means using AppArmor/SELinux, applying seccomp (secure computing mode) profiles, and ensuring that containers do not run as root and do not request or receive heightened privileges.
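These controls can be approximated with standard runtime flags. A minimal sketch, in which the image name, UID/GID, and seccomp profile path are placeholders:

```shell
# Hypothetical sketch: run a container under least privilege.
# --user avoids running as root; --cap-drop removes all Linux capabilities;
# no-new-privileges blocks escalation via setuid binaries; the seccomp
# profile restricts available system calls.
docker run \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --security-opt seccomp=/path/to/profile.json \
  my-app:latest
```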

Hardened Docker Desktop (available with a Docker Business or Docker Government subscription), however, can satisfy this requirement through the Enhanced Container Isolation (ECI) setting. When active, ECI provides the following protections:

Unprivileged containers: ECI ensures that even if a container is started with the --privileged flag, the actual processes inside the container do not have elevated privileges within the host or the Docker Desktop VM. This step is crucial for preventing privilege escalation attacks.

User namespace remapping: ECI uses a technique where the root user inside a container is mapped to a non-root user outside the container, in the Docker Desktop VM. This approach limits the potential damage and access scope even if a container is compromised.

Restricted file system access: Containers run under ECI have limited access to the file system of the host machine. This restriction prevents a compromised container from altering system files or accessing sensitive areas of the host file system.

Blocking sensitive system calls: ECI can block or filter system calls from containers that are typically used in attacks, such as certain types of mount operations, further reducing the risk of a breakout.

Isolation from the Docker Engine: ECI prevents containers from interacting directly with the Docker Engine’s API unless explicitly allowed, protecting against attacks that target the Docker infrastructure itself.

Network microsegmentation

Microsegmentation offers a way to enhance security further by controlling traffic flow among containers. Through the implementation of stringent network policies, only authorized containers are allowed to interact, which significantly reduces the risk of lateral movement in case of a security breach. For example, a payment processing container may only accept connections from specific parts of the application, isolating it from other, less secure network segments.

The concept of microsegmentation also plays a role for AI systems and workloads. By segmenting networks and data, organizations can apply controls to different parts of their AI infrastructure, effectively isolating the environments used for training, testing, and production. This isolation helps reduce the risk of data leakage between environments and can help reduce the blast radius of a security breach.

Docker Desktop’s robust networking provides several ways to address microsegmentation. By leveraging the bridge network for creating isolated networks within the same host or using the Macvlan network driver that allows containers to be treated as physical devices with distinct MAC addresses, administrators can define precise communication paths that align with the least privileged access principles of Zero Trust. Additionally, Docker Compose can easily manage and configure these networks, specifying which containers can communicate on predefined networks. 

This setup facilitates fine-grained network policies at the infrastructure level. It also simplifies the management of container access, ensuring that strict network segmentation policies are enforced to minimize the attack surface and reduce the risk of unauthorized access in containerized environments. Additionally, Docker Desktop supports third-party network drivers, which can also be used to address this concern.
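As an illustrative sketch, the payment-processing example above might look like this in Compose. The service and network names are hypothetical:

```yaml
# Hypothetical Compose fragment: the payments service is reachable only
# from the API over an internal network, isolating it from the frontend.
services:
  frontend:
    image: my-frontend:latest
    networks: [edge]
  api:
    image: my-api:latest
    networks: [edge, payments-net]
  payments:
    image: my-payments:latest
    networks: [payments-net]

networks:
  edge:
  payments-net:
    internal: true   # containers on this network have no external egress
```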

For use cases where Docker Desktop requires containers to have different egress rules than the host, “air-gapped containers” allow for the configuration of granular rules applied to containers. For example, containers can be completely restricted from the internet but allowed on the local network, or they could be proxied/firewalled to a small set of approved hosts.

Note that in Kubernetes, this type of microsegmentation and network traffic management is usually managed by a service mesh.

Authentication and authorization

Implementing strong authentication and role-based access control (RBAC) is crucial in a Docker-based Zero Trust environment. These principles need to be addressed in several different areas, starting with the host and network as noted above.

Single Sign-On (SSO) and System for Cross-Domain Identity Management (SCIM) should be enabled and used to manage user authentication to the Docker SaaS. These tools allow for better management of users, including the use of groups to enforce role and team membership at the account level. Additionally, Docker Desktop should be configured to require and enforce login to the Docker organization in use, which prevents users from logging into any other organizations or personal accounts.

When designing, deploying, building, and testing containers locally under Docker Desktop, implementing robust authentication and authorization mechanisms is crucial to align with security best practices and principles. It’s essential to enforce strict access controls at each stage of the container lifecycle.

This approach starts with managing registry and image access, to ensure only approved images are brought into the development process. This can be accomplished by using an internal registry and enforcing firewall rules that block access to other registries. However, an easier approach is to use Registry Access Management (RAM) and Image Access Management (IAM) — features provided by Hardened Docker Desktop — to control images and registries.

The implementation of policies and procedures around secrets management — such as using a purpose-designed secrets store — should be part of the development process. Finally, using Enhanced Container Isolation (as described above) will help ensure that container privileges are managed consistently with best practices.

This comprehensive approach not only strengthens security but also helps maintain the integrity and confidentiality of the development environment, especially when dealing with sensitive or proprietary application data.

Monitoring and auditing

Continuous monitoring and auditing of activities within the Docker environment are vital for early detection of potential security issues. These controls build on the areas identified above by allowing for the auditing and monitoring of the impact of these controls.

Docker Desktop produces a number of logs that provide insight into the operations of the entire application platform. This includes information about the local environment, the internal VM, the image store, container runtime, and more. This data can be redirected and parsed/analyzed by industry standard tooling.

Container logging is important and should be sent to a remote log aggregator for processing. Because the best development approaches require that log formats and log levels from development mirror those used in production, this data can be used not only to look for anomalies in the development process but also to provide operations teams with an idea of what production will look like.
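As a sketch, a logging driver that forwards container logs to a remote aggregator can be configured per service in Compose. The syslog address and service name below are placeholders:

```yaml
# Hypothetical Compose fragment: ship container logs to a remote syslog
# aggregator instead of keeping them only on the local host.
services:
  app:
    image: my-app:latest
    logging:
      driver: syslog
      options:
        syslog-address: "tcp://logs.example.internal:514"
        tag: "dev-{{.Name}}"   # tag log lines with the container name
```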

Docker Scout

Ensuring containerized applications comply with security and privacy policies is another key part of continuous monitoring. Docker Scout is designed from the ground up to support this effort. 

Docker Scout starts with the image software bill of materials (SBOM) and continually checks against known and emerging CVEs and security policies. These policies can include detecting high-profile CVEs to be mitigated, validating that approved base images are used, verifying that only valid licenses are being used, and ensuring that a non-root user is defined in the image. Beyond that, the Docker Scout policy engine can be used to write custom policies using the wide array of data points available.  
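A few Docker Scout CLI invocations illustrate this workflow. The image name is a placeholder, and exact output depends on your organization's policy configuration:

```shell
# Illustrative Docker Scout commands in a local development workflow.
docker scout quickview my-app:latest   # at-a-glance vulnerability summary
docker scout cves my-app:latest        # detailed CVE report based on the SBOM
docker scout policy my-app:latest      # evaluate the image against configured policies
```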

Immutable containers

The concept of immutable containers, which are not altered after they are deployed, plays a significant role in securing environments. By ensuring that containers are replaced rather than changed, the security of the environment is enhanced, preventing unauthorized or malicious alterations during runtime.

Docker images — more broadly, OCI-compliant images — are immutable by default. When they are deployed as containers, they become writable while they are running via the addition of a “scratch layer” on top of the immutable image. Note that this layer does not persist beyond the life of the container. When the container is removed, the scratch layer is removed as well.

When the immutable flag is added — either by adding the --read-only flag to the docker run command or by adding the read_only: true key-value pair in Docker Compose — Docker will mount the root file system read-only, which prevents writes to the container file system.

In addition to making a container immutable, it is possible to mount Docker volumes as read/write or read-only. Note that you can make the container’s root file system read-only and then mount a read/write volume to better manage write access for your container.
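Putting these together, a container can run with an immutable root file system while still persisting data through a single writable volume. A minimal sketch, with placeholder image and volume names:

```shell
# Hypothetical sketch: immutable root file system with one writable volume.
# --tmpfs provides an ephemeral scratch area for applications that expect
# a writable /tmp; the named volume is the only persistent writable path.
docker run \
  --read-only \
  --tmpfs /tmp \
  -v app-data:/var/lib/app \
  my-app:latest
```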

Encryption

Ensuring that data is securely encrypted, both in transit and at rest, is non-negotiable in a secure Docker environment. Docker containers should be configured to use TLS for communications both between containers and outside the container environment. Docker images and volumes are stored locally and can benefit from the host system’s disk encryption when they are at rest.

Tool chain updates

Finally, it is important to make sure that Docker Desktop is updated to the most current version, as the Docker team is continually making improvements and mitigating CVEs as they are discovered. For more information, refer to Docker security documentation and Docker security announcements.

Overcoming challenges in Zero Trust adoption

Implementing a Zero Trust architecture with Docker Desktop is not without its challenges. Such challenges include the complexity of managing such environments, potential performance overhead, and the need for a cultural shift within organizations towards enhanced security awareness. However, the benefits of a secure, resilient infrastructure far outweigh these challenges, making the effort and investment in Zero Trust worthwhile.

Conclusion

Incorporating Zero Trust principles into Docker Desktop environments is essential for protecting modern infrastructures against sophisticated cyber threats. By understanding and implementing these principles, organizations can safeguard their applications and data more effectively, ensuring a secure and resilient digital presence.

Learn more

Subscribe to the Docker Newsletter. 

Read: Empowering Developers with Docker: Simplifying Compliance and Enhancing Security for SOC 2, ISO 27001, FedRAMP, and More.

Find out how Docker Scout can help address security issues before they hit production.

Learn about Docker Scout health scores, security grading for container images in your Docker Hub repo.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.
