Security Advisory: High Severity Curl Vulnerability

The maintainers of curl, the popular command-line tool and library for transferring data with URLs, will release curl 8.4.0 on October 11, 2023. This version will include fixes for two Common Vulnerabilities and Exposures (CVEs), one of which the curl maintainers rate as “HIGH” severity and describe as “probably the worst curl security flaw in a long time.”

The CVE IDs are: 

CVE-2023-38545: severity HIGH (affects both libcurl and the curl tool)

CVE-2023-38546: severity LOW (affects libcurl only, not the tool)

Specific details of the exploit have yet to be published. We expect these details to be published when the version update becomes available.

We will continue to update this blog post as more information becomes available.

In the meantime, you can prepare before the exploit details are released on October 11 by using Docker Scout to check whether any of the container images in your organization use the curl library as a dependency.

Am I vulnerable?

We anticipate that any version of curl prior to 8.4.0 will be affected by these CVEs, so now is a good time to build up a list of what you’ll need to update on October 11, 2023.

Having a dependency on curl won’t necessarily mean the exploit will be possible for your application. When more details are published, Docker Scout will surface specifics about the exploitability of this vulnerability. The first step is to understand whether your images have a dependency on curl.  
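Before the full details land, a quick way to triage is simply to compare curl versions against the fixed release. Here is a minimal shell sketch (the helper and the version numbers are illustrative, not an official Docker or curl tool):

```shell
# Hypothetical helper (not an official tool): decide whether a given
# curl version predates the fixed 8.4.0 release.
is_affected() {
  fixed="8.4.0"
  # sort -V sorts version strings; the smaller of the two comes first.
  lowest=$(printf '%s\n%s\n' "$1" "$fixed" | sort -V | head -n1)
  [ "$lowest" = "$1" ] && [ "$1" != "$fixed" ]
}

# Illustrative versions only:
for v in 7.88.1 8.3.0 8.4.0; do
  if is_affected "$v"; then
    echo "curl $v: likely affected, plan to update"
  else
    echo "curl $v: at or beyond the fixed release"
  fi
done
```

Any version that sorts below 8.4.0 goes on your update list for October 11.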

Quickest way to assess all images 

The quickest way to assess all images is to enable Docker Scout for your container registry. 

Step 1: Enable Docker Scout

Docker Scout currently supports Docker Hub, JFrog Artifactory, and AWS Elastic Container Registry. Instructions for integrating Docker Scout with these container registries:

Integrating Docker Scout with Docker Hub

Integrating Docker Scout with JFrog Artifactory

Integrating Docker Scout with AWS Elastic Container Registry

Note: If your container registry isn’t supported right now, you’ll need to use the local evaluation method via the CLI, described later.

Step 2: Select the repositories you want to analyze and kick off an analysis

Docker Scout analyzes all local images by default, but to analyze images in remote repositories, you need to enable Docker Scout image analysis. You can do this from Docker Hub, the Docker Scout Dashboard, or the CLI. Find out how in the overview guide.

Sign in to your Docker account with the docker login command or use the Sign in button in Docker Desktop.

Use the Docker CLI docker scout repo enable command to enable analysis on an existing repository:

$ docker scout repo enable --org <org-name> <org-name>/scout-demo

Step 3: Visit scout.docker.com 

On the scout.docker.com homepage, find the policy card called No vulnerable version of curl and select View details (Figure 1). 

Figure 1: Docker Scout dashboard with the policy card that will help identify if and where the vulnerable version of curl exists.

The resulting list contains all the images that violate this policy — that is, they contain a version of curl that is likely to be susceptible to the HIGH severity CVE (CVE-2023-38545) listed above.

Figure 2: Docker Scout showing list of images that violate the policy by containing affected versions of the curl library.

Alternative CLI method

An alternative method is to use the Docker Scout CLI to analyze and evaluate local container images.

You can use the docker scout policy command to evaluate images against Docker Scout’s built-in policies on the command line, including the No vulnerable version of curl policy.

docker scout policy [IMAGE] --org [ORG]

Figure 3: Docker Scout showing the results of running the Docker Scout command to evaluate a container image against the ‘No vulnerable version of curl’ policy.

If you’d rather understand all the CVEs identified in an individual container image, you can run the following command. This method doesn’t require you to enable Docker Scout in your container registry but will take a little longer if you have a large number of images to analyze. 

docker scout cves [OPTIONS] [IMAGE|DIRECTORY|ARCHIVE]
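To sweep a list of images from the command line, you can wrap `docker scout cves` in a small loop. A hedged sketch: the image names are placeholders, and `SCOUT` is an overridable variable introduced here (not part of the Docker CLI) so the loop can be dry-run without a Docker daemon.

```shell
# Batch-check a list of images for curl findings.
# SCOUT defaults to the real CLI; override it (e.g. SCOUT=echo) for a dry run.
SCOUT="${SCOUT:-docker scout}"

check_images() {
  for img in "$@"; do
    echo "== $img =="
    # Keep only output lines mentioning curl; note when there are none.
    $SCOUT cves "$img" | grep -i curl || echo "no curl findings for $img"
  done
}

# Example invocation (placeholder image names):
# check_images myorg/api:latest myorg/worker:1.2
```

This trades the registry-wide dashboard view for a quick local pass over the images you care about most.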

Learn more

Follow direct updates from the maintainers of the curl project via the GitHub issue.

Learn more about Docker Scout at docs.docker.com/scout.

Read Announcing Docker Scout GA: Actionable Insights for the Software Supply Chain.

Source: https://blog.docker.com/feed/

Introducing a New GenAI Stack: Streamlined AI/ML Integration Made Easy

At DockerCon 2023, with partners Neo4j, LangChain, and Ollama, we announced a new GenAI Stack. We have brought together the top technologies in the generative artificial intelligence (GenAI) space to build a solution that allows developers to deploy a full GenAI stack with only a few clicks.

Here’s what’s included in the new GenAI Stack:

1. Pre-configured LLMs: We provide pre-configured Large Language Models (LLMs), such as Llama2, GPT-3.5, and GPT-4, to jumpstart your AI projects.

2. Ollama management: Ollama simplifies the local management of open source LLMs, making your AI development process smoother.

3. Neo4j as the default database: Neo4j serves as the default database, offering graph and native vector search capabilities. This helps uncover data patterns and relationships, ultimately enhancing the speed and accuracy of AI/ML models. Neo4j also serves as a long-term memory for these models.

4. Neo4j knowledge graphs: Neo4j knowledge graphs to ground LLMs for more precise GenAI predictions and outcomes.

5. LangChain orchestration: LangChain facilitates communication between the LLM, your application, and the database, along with a robust vector index. LangChain serves as a framework for developing applications powered by LLMs. This includes LangSmith, an exciting new way to debug, test, evaluate, and monitor your LLM applications.

6. Comprehensive support: To support your GenAI journey, we provide a range of helpful tools, code templates, how-to guides, and GenAI best practices. These resources ensure you have the guidance you need.

Figure 1: The GenAI Stack guide and access to the GenAI Stack components.

Conclusion

The GenAI Stack simplifies AI/ML integration, making it accessible to developers. Docker’s commitment to fostering innovation and collaboration means we’re excited to see the practical applications and solutions that will emerge from this ecosystem. Join us as we make AI/ML more accessible and straightforward for developers everywhere.

The GenAI Stack is available in Early Access now and is accessible from the Docker Desktop Learning Center or on GitHub. 

Participate in our Docker AI/ML Hackathon to show off your most creative AI/ML solutions built on Docker. Read our blog post “Announcing Docker AI/ML Hackathon” to learn more.

At DockerCon 2023, Docker also announced its first AI-powered product, Docker AI. Sign up now for early access to Docker AI. 

Learn more

Find the GenAI Stack on GitHub.

Join the Docker AI/ML Hackathon.

Sign up now for early access to Docker AI. 

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Announcing Docker Scout GA: Actionable Insights for the Software Supply Chain

We are excited to announce that Docker Scout General Availability (GA) now allows developers to continuously evaluate container images against a set of out-of-the-box policies, aligned with software supply chain best practices. These new capabilities also include a full suite of integrations enabling you to attain visibility from development into production. These updates strengthen Docker Scout’s position as integral to the software supply chain. 

For developers building modern cloud-native applications, Docker Scout provides actionable insights in real-time to make it simple to secure and manage their software supply chain end-to-end. Docker Scout meets developers in the Docker tools they use daily, and provides the richest library of open source trusted content, policy evaluation, and automated remediation in real-time across the entire software supply chain. From the first base image pulled all the way to git commit, CI pipeline, and deployed workloads in production, Docker Scout leverages its large open ecosystem and the container runtime as vehicles for insights developers can easily act upon.

With Docker Scout operating as the system of record for the software supply chain, developers have access to real-time insights, identification of anomalies in their applications, and automated recommendations to improve application integrity, in tandem with Docker Desktop, Docker CLI, Docker Hub, and the full suite of Docker products. Docker Scout is designed to see the bigger picture and address challenges that are layered into the software supply chain.

Software supply chain

Through many in-depth conversations with our customers, we uncovered a clear trend: Developers are increasingly careful not to consume content they don’t trust or that falls outside their organization’s accepted policies. To solve for this, Docker provides Docker Official Images, a curated set of Docker repositories hosted on Docker Hub. These images provide essential base repositories that serve as the starting point for most users. Docker and Docker Scout will continue to provide additional forms of software supply chain metadata for the Docker Official Image catalog in the coming months.

Trusted content is the foundation of secure software applications. A key aspect of this foundation is Docker Hub, the largest and most-used source of secure software artifacts, which includes Docker Official Images, Docker Verified Publishers, and Docker-Sponsored Open Source trusted content. Docker Scout policies leverage this metadata to track the life cycle of images, generate unique insights for developers, and help customers automate the enhancement of their software supply chain objectives — from inner loop to production.

Ensuring the reliability of applications requires constant vigilance and in-depth software design considerations, from selecting dependencies with a JSON file, running compilations and associated tests, to ensuring the safety of every image built. While some monitoring platforms focus solely on policies for container images currently running in production, the new Docker Scout GA release recognizes this is not sufficient to oversee the entire software supply chain, since that approach occurs too late in the development process.

Docker Scout offers a seamless set of actionable insights and suggested workflows that meet developers where they build and monitor today. These insights and workflows are particularly helpful for developers building Docker container images based on Docker Trusted Content (Docker Official Images, Docker-Sponsored Open Source, Docker Verified Publishers), and for many data sources beyond that, including data sourced from integrations with JFrog Artifactory, Amazon ECR, Sysdig Runtime Monitoring, GitHub Actions, GitLab, and CircleCI (Figure 1).

Figure 1: Integrate Docker Scout with tools you already use.

Policy evaluation

Current policy solutions support only a binary outcome, where artifacts are either allowed or denied based on the oversimplified results from policy analysis. This pass/fail approach and the corresponding gatekeeping is often too rigid and does not consider nuanced situations or intermediate states, leading to day-to-day friction around implementing traditional policy enforcement. Docker Scout now goes deeper, not only by indicating more subtle deviations from policy, but also by actively suggesting upgrade and remediation paths to bring you back within policy guidelines, reducing MTTR in the process.

Additionally, Docker Scout Policy delivers a productivity boost: you no longer have to wait for CI/CD to finish to know whether you need to upgrade dependencies. Evaluating policies with Docker Scout prevents last-minute security blockers in CI/CD that can impact release dates, which is one of the primary reasons teams tend not to adopt policy evaluation solutions.

Policy can take many forms: Specifying which repos or sources it is safe to pull in components or libraries from, requiring secure and verified build processes, and continuously monitoring for issues after deployment. Docker Scout is now equipped to expand the types of policies that can be implemented within these wider sets of definitions (Figures 2 & 3).

Figure 2: Docker Scout dashboard showing policy violations with fixes.

Figure 3: Overview of the security status of images across all Docker-enabled repos over time.

Policy for maximal coverage

Our vision for policy is to allow developers to define what’s most developer-friendly and secure for their needs and their environments. While there’s a common thread across some industries, every enterprise has nuances. Whether setting security policies or aligning to software development life cycle tooling best practices, Docker Scout’s goal is to continuously evaluate container images, assisting teams to incrementally improve security posture within their cloud infrastructure.

These new capabilities include built-in out-of-the-box policies to maintain up-to-date base images, track associated vulnerabilities, and monitor for relevant in-scope licenses. Through evaluation results, users can see the status of policies for each image. Users will have an aggregated view of their policy results so they know what to focus on, as well as the ability to evaluate those results in more detail to understand what has changed within a given set of policies in a repo list view (Figure 4).

Figure 4: View from the CLI.

What’s next?

Docker Scout is designed to be with you in every step of improving developer workflows — from helping developers understand which actions to take to improve code reliability and bring it back in line with policy, to ensuring optimal code performance.

With the next phase on the horizon, we’re gearing up to deliver more value to our customers and always welcome feedback through our Docker Scout Design Partner Program. For now, the Docker Scout team is excited to get the latest solutions into our customers’ hands, ensuring safety, efficiency, and quality in a rapidly evolving ecosystem within the software supply chain.

Developers in our Docker-Sponsored Open Source (DSOS) Program will soon be able to access our Docker Scout Team plan, which includes unlimited local image analysis, as well as up to 100 repos for remote images, SDLC integrations, security posture reporting, and policy evaluation. Once Docker Scout is enabled for the DSOS program in late 2023, DSOS members can enable it on up to 100 repositories within their DSOS-approved namespace.

To learn more about Docker Scout, visit the Docker Scout product page.

Learn more

Try Docker Scout.

Looking to get up and running? Use our Quickstart guide.

Vote on what’s next! Check out the Docker Scout public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Announcing Udemy + Docker Partnership

Docker and Udemy announced a new partnership at DockerCon to give developers a clear, defined, accessible path for learning how to use Docker, best practices, advanced concepts, and everything in between. As the #1 rated online course platform (as ranked by Stack Overflow), Udemy will be the first to house Docker-accredited content and customized learning paths to provide developers with the latest training materials on how to best use Docker tools. Launching in early 2024, the platform will provide learning paths that award Docker badges to those who successfully complete the path, enabling them to showcase their proficiency with the material.

Udemy instructors are vetted, experienced, and have in-depth knowledge of Docker and the Docker suite of development tools, services, trusted content, and automations. As a fully online education portal, Udemy offers classes that are accessible, inclusive, and attainable for a broad range of developers. This Docker + Udemy partnership will establish a key destination for developers and hobbyists who want to further their Docker education. Together, Docker and Udemy will enhance their communities with shared standards, education paths, and credibility.

This partnership will bring Docker educational content together into easy-to-navigate course paths and reward badges for completion of verified courses. This platform aims to bring verified and best-in-class materials together in one place. The Docker + Udemy partnership will create a streamlined source for developers to gain knowledge, receive badges, and stay current on the latest content.

These courses and their curricula will be vetted by Udemy instructional designers, Docker experts from the Docker community, and Docker employees. In the true spirit of open source, the curricula by which we vet and certify courses will be made publicly available for all content creators to use.  

In the coming months, we will invite members of the Docker community who are experienced instructors and content creators to apply to be a Udemy instructor and create Docker courses on Udemy, or bring their existing content into our learning paths. We are thrilled to be able to bring our community into this endeavor and to amplify visibility for the community.

Stay tuned for more details on this partnership soon. To get started today and gain access to Udemy’s collection of more than 350 Docker courses, developers can visit: https://www.udemy.com/topic/docker/.

Learn more

Check out Udemy’s collection of Docker courses.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Docker Desktop 4.24: Compose Watch, Resource Saver, and Docker Engine

We’re excited to share this month’s highlights that will further improve your Docker experience. Our commitment to supporting your development journey has led to enhancements across our tools, and today, we’re pleased to announce the official General Availability of Docker Compose Watch and Resource Saver. Combined with our new enhancements to managing Docker Engine in Docker Desktop, these updates will help you be more efficient and make your software development experience more enjoyable.

Docker Compose Watch is now Generally Available

The Docker Compose Watch GA release marks a significant milestone in our journey. Once labeled alpha, docker compose watch is now faster, more resilient, and ready to support your development needs effectively.

We’ve been listening to your feedback since its initial alpha launch (introduced in Compose v2.17 and bundled with Docker Desktop 4.18). Our goal was to make it faster and more robust, ensuring a smoother development experience.

We created Docker Compose Watch to enhance your workflow by providing native support for common development tasks, such as hot reloading for front-end development.

Figure 1: Docker Compose Watch configuration.

Figure 2: Docker Compose Watch gives developers more control over how local file changes sync into the container.

These improvements mean fewer hiccups during everyday tasks, such as merging branches or switching codebases. Docker Compose Watch now intelligently manages changes, allowing you to focus on what matters most — building great software.

As Docker Compose Watch transitions to General Availability, we thank you for your support and feedback. Your insights have been invaluable in shaping this tool.

Resource Saver is now Generally Available

The performance enhancement feature, Resource Saver, is now Generally Available, supporting automatic low-memory mode for Mac, Windows, and Linux. 

This new feature automatically detects when Docker Desktop is not running containers and dramatically reduces its memory footprint by 10x, freeing up valuable resources on developers’ machines for other tasks and minimizing the risk of lag when navigating across different applications. Memory allocation can now be quick and efficient, resulting in a seamless and performant development experience.

Figure 3: Docker Desktop resource saver settings tab.

Resource Saver is available to all Docker Desktop users by default and can be configured from the Resources tab in Settings. For more information, refer to the Docker Desktop Resource Saver mode documentation.

Docker Desktop streamlines Docker Engine control: A user-centric upgrade

At Docker, we value your feedback, and one of the most frequently requested features has been an enhancement to Docker Engine’s status and associated actions in Docker Desktop. Listening to your input, we’ve made some straightforward yet impactful UX improvements:

Constant engine status: You’ll now see the engine status at all times, eliminating the need to hover for tooltips.

One-click actions: Common engine and desktop actions like start, pause, and quit are now easily accessible from the dashboard, reducing clicks for everyday tasks.

Enhanced menu visibility: We’ve revamped the menu for greater prominence, making it easier to find essential features, such as Troubleshoot.

What’s in it for you? A more user-friendly Docker experience that minimizes clicks, reduces cognitive load, and provides quicker access to essential actions. We want to hear your thoughts on these improvements, so don’t hesitate to share your feedback via the Give Feedback option in the whale menu!

Figure 4: Docker Engine status interactive interface supporting stop, start, and pause.

Conclusion

Upgrade now to explore what’s new in the 4.24 release of Docker Desktop. Do you have feedback? Leave feedback on our public GitHub roadmap, and let us know what else you’d like to see in upcoming releases.

Learn more

Read the Docker Desktop Release Notes.

Get the latest release of Docker Desktop.

Learn more about Resource Saver Mode in Docker Desktop. 

Learn more about Docker Compose Watch.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Announcing Docker Compose Watch GA Release

Docker Compose Watch, a tool to improve the inner loop of application development, is now generally available. Hot reload is one of those engineering workflow features that’s seemingly minor and simple but has cumulative benefits. If you can trust your app will update seamlessly as you code, without losing state, it’s one less thing pulling your focus from the work at hand. You can see your frontend components come to life while you stay in your IDE. 

With containerized application development, there are more steps than Alt+Tab and hitting reload in your browser. Even with caching, rebuilding the image and re-creating the container — especially after waiting on stop and start time — can disrupt focus.

We built Docker Compose Watch to smooth away these workflow papercuts. We have learned from many people using our open source Docker Compose project for local development. Now we are natively addressing common workflow friction we observe, like the use case of hot reload for frontend development. 

Bind mount vs. Watch

A common workaround to get hot reload to work is to set up a bind mount to mirror file changes between the local system and a container. This method uses operating system and hypervisor APIs to expose a local directory to the otherwise isolated file system in the container.

The workaround is not trivial engineering since how bind mounts function in Docker Desktop differs from Docker Engine on Linux. For parity, Docker Desktop must provide seamless and efficient file sharing between your machine and its virtual machine (VM), ensuring permissions, replicating file notifications, and maintaining low-level filesystem consistency to prevent corruption. 

In contrast, Docker Compose Watch specifically targets development use cases. You may want to ensure your code changes sync into the container, allowing React or NextJS to kick off its own live reload. However, you don’t want the changes you’ve made in the container for ad-hoc testing to be reflected in your local directory. For this reason, the tradeoffs we make for Docker Compose Watch favor fine-grained control for common development workflows with Docker Compose (Figures 1 and 2).
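For comparison, the bind-mount workaround expressed in a Compose file looks like this (service name and paths are illustrative):

```yaml
services:
  web:
    build: .
    volumes:
      # Two-way mirror: container-side writes show up on the host too,
      # which is exactly the behavior Watch lets you avoid.
      - ./src:/app/src
```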

Figure 1: Docker Compose Watch configuration.

Figure 2: Docker Compose Watch gives developers more control over how local file changes sync into the container.

Improving Watch for development

Since the alpha launch (in Compose v2.17, bundled with Docker Desktop 4.18), we’ve responded to early feedback by making Docker Compose Watch faster and more robust. This improvement avoids hiccups on common development tasks that kick off many changes, such as merging in the latest main or switching branches. 

Your code sync operation now batches, debounces, and ignores unimportant changes:

What previously would be many API calls are now batched as a single API call to the Docker Engine.

We’ve fine-tuned the streaming of changes to your containers for improved transfer performance. A new built-in debounce mechanism prevents unnecessary transfers in case of back-to-back writes to the same file. This optimizes CPU usage by preventing unnecessary incremental compiles.

The built-in filters have been refined to ignore standard temporary files generated by common code editors and integrated development environments (IDEs).

Previously, Docker Compose Watch required attaching to an already running Compose project. Docker Compose Watch now automatically builds and starts all required services at launch. One command is all you need: docker compose watch.
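Putting it together, a Compose file wired for watch mode might look like the following sketch (service name and paths are illustrative): a `sync` action mirrors source edits into the running container so the framework's own hot reload can pick them up, while a `rebuild` action re-creates the image when the dependency manifest changes.

```yaml
services:
  web:
    build: .
    develop:
      watch:
        # One-way sync: host edits flow into the container; edits made
        # inside the container stay there.
        - action: sync
          path: ./src
          target: /app/src
          ignore:
            - node_modules/
        # A dependency change warrants a full image rebuild.
        - action: rebuild
          path: package.json
```

With this in place, `docker compose watch` builds, starts, and watches the service in one step.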

Try Docker Compose Watch in Docker Desktop 4.24

As of Compose 2.22.0, bundled with Docker Desktop 4.24, Docker Compose Watch is now Generally Available. Make sure to upgrade to the latest version of Docker Desktop and develop more efficiently with docker compose watch.

Let us know how Docker Compose Watch supports your use case and where it can improve. Or you can contribute directly to the open source Docker Compose project.

Learn more

Read the Docker Desktop Release Notes.

Learn about Docker Desktop 4.24.

Get the latest release of Docker Desktop.

Contribute to the open source Docker Compose project.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Announcing Docker AI/ML Hackathon 

With the return of DockerCon, held October 4-5 in Los Angeles, we’re excited to announce the kick-off of a Docker AI/ML Hackathon. Join us at DockerCon — in-person or virtually — to learn about the latest Docker product announcements. Then, bring your innovative artificial intelligence (AI) and machine learning (ML) solutions to life in the hackathon for a chance to win cool prizes.

The Docker AI/ML Hackathon is open from October 3 – November 7, 2023. DockerCon in-person attendees are invited to the dedicated hackspace, where you can chat with fellow developers, Dockhands, and our partners Datastax, Navan.ai, Neo4j, OctoML, and Ollama.

We’ll also host virtual webinars, Q&A, and engaging chats throughout the next five weeks to keep the ideas flowing. Register for the Docker AI/ML Hackathon to participate and to be notified of event activities.

Hackathon tips

Docker AI/ML Hackathon participants are encouraged to build solutions that are innovative, applicable in real life, use Docker technology, and have an impact on developer productivity. Submissions can also be non-code proof-of-concepts, extensions that improve Docker workflows, or integrations to improve existing AI/ML solutions.

Solutions should be AI/ML projects or models built using Docker technology and distributed through DockerHub, AI/ML integrations into Docker products that improve the developer experience, or extensions of Docker products that make working with AI/ML more productive.

Submissions should be a working application or a non-code proof of concept. We would like to see submissions as close to a real-world implementation as possible, but we will accept submissions that are not fully functional with a strong proof of concept. Additionally, all submissions should include a 3-5 minute video that showcases the hack along with background and context (we will not judge the submission on the quality or editing of the video itself). 

After submitting your solution, you’ll be in the running for $20,000 in cash prizes and exclusive Docker swag. Judging will be based on criteria such as the applicability of the solution, innovativeness of the solution, incorporation of Docker tooling, and impact on the developer experience and productivity.

Get started 

Follow the #DockerHackathon hashtag on social media platforms and join the Docker AI/ML Hackathon Slack channel to connect with other participants.

Check out the site for full details about the Docker AI/ML Hackathon and register to start hacking today! 

Submissions close on November 7, 2023, at 5 PM Pacific Time (November 8 at 1 AM UTC).

Learn more

Register for the Docker AI/ML Hackathon.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Get Started with the Microcks Docker Extension for API Mocking and Testing

In the dynamic landscape of software development, collaborations often lead to innovative solutions that simplify complex challenges. The Docker and Microcks partnership is a prime example, demonstrating how the relationship between two industry leaders can reshape local application development.

This article delves into the collaborative efforts of Docker and Microcks, spotlighting the emergence of the Microcks Docker Desktop Extension and its transformative impact on the development ecosystem.

What is Microcks?

Microcks is an open source Kubernetes and cloud-native tool for API mocking and testing. It has been a Cloud Native Computing Foundation Sandbox project since summer 2023.  

Microcks addresses two primary use cases: 

Simulating (or mocking) an API or a microservice from a set of descriptive assets (specifications or contracts) 

Validating (or testing) the conformance of your application against your API specification by conducting contract tests

The unique thing about Microcks is that it offers a uniform and consistent approach for all kinds of request/response APIs (REST, GraphQL, gRPC, SOAP) and event-driven APIs (currently supporting eight different protocols) as shown in Figure 1.

Figure 1: Microcks covers all kinds of APIs.

Microcks speeds up the API development life cycle by shortening the feedback loop from the design phase and easing the pain of provisioning environments with many dependencies. All these features make Microcks a great help in enforcing backward compatibility of your API and microservice interfaces.

So, for developers, Microcks brings consistency, convenience, and speed to your API lifecycle.
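As an illustration of the descriptive assets Microcks consumes, here is a minimal, hypothetical OpenAPI fragment with a named example (the API title, path, and payload are invented for this sketch) — the kind of artifact Microcks can turn into a live mock:

```yaml
openapi: 3.0.3
info:
  title: Pastry API        # illustrative service name
  version: "1.0"
paths:
  /pastries:
    get:
      operationId: listPastries
      responses:
        "200":
          description: A list of pastries
          content:
            application/json:
              examples:
                sample:           # named example Microcks can serve as a mock
                  value:
                    - name: Eclair
                      price: 2.5
```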

Why run Microcks as a Docker Desktop Extension?

Although Microcks is a powerhouse, running it as a Docker Desktop Extension takes the developer experience, ease of use, and rapid iteration in the inner loop to new levels. With Docker’s containerization capabilities seamlessly integrated, developers no longer need to navigate complex setups or wrestle with compatibility issues. It’s a plug-and-play solution that transforms the development environment into a playground for innovation.

The simplicity of running Microcks as a Docker extension is a game-changer. Developers can effortlessly set up and deploy Microcks in their existing Docker environment, eliminating the need for extensive configurations. This ease of use empowers developers to focus on what they do best — building and testing APIs rather than grappling with deployment intricacies.

In agile development, rapid iterations in the inner loop are paramount. Microcks, as a Docker extension, accelerates this process. Developers can swiftly create, test, and iterate on APIs without leaving the Docker environment. This tight feedback loop ensures developers identify and address issues early, resulting in faster development cycles and higher-quality software.

The combination of two best-of-breed projects, Docker and Microcks, provides: 

Streamlined developer experience

Easiness at its core

Rapid iterations in the inner loop

Extension architecture

The Microcks Docker Desktop Extension has an evolving architecture depending on which features you enable. The UI that runs in Docker Desktop manages your preferences in a ~/.microcks-docker-desktop-extension folder and starts, stops, and cleans up the needed containers.

At its core, the architecture (Figure 2) embeds two minimal elements: the Microcks main container and a MongoDB database. The different containers of the extension run in an isolated Docker network where only the HTTP port of the main container is bound to your local host.

Figure 2: Microcks extension default architecture.
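You can observe this isolation from the command line. The snippet below is a sketch: the "microcks" name filter is an assumption (container and network names may vary by release), and the commands are guarded so they are a no-op on machines without Docker:

```shell
# Peek at the containers and isolated network the extension creates.
# The "microcks" filter is an assumption; adjust if your release names
# resources differently.
FILTER="name=microcks"
if command -v docker >/dev/null 2>&1; then
  # Only the main container should publish a port on the host.
  docker ps --filter "$FILTER" --format '{{.Names}}\t{{.Ports}}' || true
  docker network ls --filter "$FILTER" || true
fi
```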

Through the Settings panel offered by the extension (Figure 3), you can tune the port binding and enable more features, such as:

Support for mocking and testing asynchronous APIs, described with AsyncAPI, over Kafka and WebSocket

The ability to run Postman collection tests directly from Microcks

Figure 3: Microcks extension Settings panel.

When applied, your settings persist in your ~/.microcks-docker-desktop-extension folder, and the extension augments the initial architecture with the required services. Even though the extension may start additional containers, they are carefully chosen to be lightweight and consume as few resources as possible. For example, we selected the Redpanda Kafka-compatible broker for its super-light experience. 

The schema shown in Figure 4 illustrates such a “maximal architecture” for the extension:

Figure 4: Microcks extension maximal architecture.

The Docker Desktop Extension architecture encapsulates the convergence of Docker’s containerization capabilities and Microcks’ API testing prowess. This collaborative endeavor presents developers with a unified interface to toggle between these functionalities seamlessly. The architecture ensures a cohesive experience, enabling developers to harness the power of both Docker and Microcks without the need for constant tool switching.

Getting started

Getting started with the Docker Desktop Extension is a straightforward process that empowers developers to leverage the benefits of unified development. The extension can be easily integrated into existing workflows, offering a familiar interface within Docker. This seamless integration streamlines the setup process, allowing developers to dive into their projects without extensive configuration.

Here are the steps for installing Microcks as a Docker Desktop Extension:

1. Choose Add Extensions in the left sidebar (Figure 5).

Figure 5: Add extensions in the Docker Desktop.

2. Switch to the Browse tab.

3. In the Filters drop-down, select the Testing Tools category.

4. Find Microcks and then select Install (Figure 6).
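If you prefer the terminal, recent Docker Desktop releases also expose the Marketplace through the `docker extension` CLI. A sketch of the same install — the image name is an assumption, so verify it against the extension's Marketplace listing — looks like this:

```shell
# CLI alternative to the Marketplace steps above. The image reference is
# an assumption; confirm it in the extension's Marketplace listing.
EXTENSION_IMAGE="microcks/microcks-docker-desktop-extension:latest"

# Guarded so the snippet is a no-op on machines without Docker Desktop.
if command -v docker >/dev/null 2>&1; then
  docker extension install "$EXTENSION_IMAGE" || true
  docker extension ls || true   # confirm the extension appears
fi
```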

Figure 6: Find and open Microcks.

Launching Microcks

The next step is to launch Microcks (Figure 7).

Figure 7: Launch Microcks.

The Settings panel allows you to configure options, such as whether to enable the asynchronous APIs features (disabled by default) and whether to set an offset for the ports used to access the services (Figures 8 and 9).

Figure 8: Microcks is up and running.

Figure 9: Access asynchronous APIs and services.

Sample app deployment

To illustrate the real-world implications of the Docker Desktop Extension, consider a sample application deployment. As developers embark on local application development, the Docker Desktop Extension enables them to create, test, and iterate on their containers while leveraging Microcks’ API mocking and testing capabilities.

This combined approach ensures that the application’s containerization and API aspects are thoroughly validated, resulting in a higher quality end product. Check out the three-minute “Getting Started with Microcks Docker Desktop Extension” video for more information.

Conclusion

The Docker and Microcks partnership, exemplified by the Docker Desktop Extension, signifies a milestone in collaborative software development. By harmonizing containerization and API testing, this collaboration addresses the challenges of fragmented workflows, accelerating development cycles and elevating the quality of applications.

By embracing the capabilities of Docker and Microcks, developers are poised to embark on a journey characterized by efficiency, reliability, and collaborative synergy.

Remember that Microcks is a Cloud Native Computing Sandbox project supported by an open community, which means you, too, can help make Microcks even greater. Come and say hi on our GitHub discussion or Zulip chat 🐙, send some love through GitHub stars ⭐️, or follow us on Twitter, Mastodon, LinkedIn, and our YouTube channel.

Learn more

Try the Microcks Docker Extension.

Learn about Docker Extensions.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Let’s DockerCon!

For the last three years, DockerCon, our annual global developer event, was 100% virtual. Still, we were humbled by the interest and response — tens of thousands of developer participants from around the world each year. Wow! (If you missed any of ’em, they’re available on YouTube: 2020, 2021, 2022!)

With our collective global return to the “new normal,” DockerCon 2023 will be hybrid — both live (in Los Angeles, California) and virtual. Our desire is to once again experience the live magic of the hallway track, the serendipitous developer-to-developer sharing of tips and tricks, and the celebration of our community’s accomplishments … all while looking forward together toward a really exciting future. And for members of our community who can’t attend in person, we hope you’ll join us virtually!

In the spirit of keeping this post brief, I’ll share a few community highlights here, but expect much more news and updates next week at DockerCon! 

Our open source projects — containerd, Compose, BuildKit, moby/moby, and others — continue to scale in terms of contributions, contributors, and stars. Thank you! 

And overall, our developer community is now at 20M monthly active IPs, 16M registered developers, 15M Docker Hub repos, and 16B image pulls per month from Docker Hub. Again, we’re humbled by this continued growth, engagement, and enthusiasm of our developer community.

And in terms of looking forward to what’s next … well, you gotta tune in to DockerCon to find out! 😀 But, seriously, there’s never been a better time to be a developer. To wit, with the digitization of All The Things, there’s a need for more than 750 million apps in the next couple of years. That means there’s a need for more developers and more creativity and innovation. And at DockerCon you’ll hear how our community plans to help developers capitalize on this opportunity.

Specifically, and without revealing too much here: We see a chance to bring the power of the cloud to accelerate the developer’s “inner loop,” before the git commit and CI. Furthermore, we see an untapped opportunity to apply GenAI to optimize the non-code gen aspects of the application. By some accounts, this encompasses 85% or more of the overall app.

Piqued your interest? Hope so! 😀 Looking forward to seeing you at DockerCon!

sj

Learn more

Register for DockerCon.

Register for DockerCon workshops.

Watch past DockerCon videos: 2020, 2021, 2022.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


Docker’s Journey Toward Enabling Lightning-Fast Developer Innovation: Unveiling Performance Milestones

Our journey has been remarkable. Recently, Docker shifted focus from Docker Swarm, which specializes in container orchestration, to the “inner loop” — the foundation — of the Software Development Life Cycle (SDLC). Today, in the early planning, coding, and building stages, we are setting the stage for container development success, ensuring development teams can rapidly and consistently build innovative containerized applications with confidence.

At Docker, we’re dedicated to optimizing the “inner loop” to ensure your journey from development to deployment and in-production software management is flawless. Whether you’re developing locally or in the cloud, our commitment to delivering all this with top-tier performance and security remains unwavering.

In this post, I’ll highlight our focus on performance and walk you through the milestones of the past year. We’re thrilled about the momentum we’re building in providing you with a robust, performant, agile, and secure container application development platform.

These achievements are more than just numbers; they demonstrate the positive impact and return on investment we deliver to you. Our work continues, but let’s explore these improvements and what they mean for you — the driving force behind innovation. 

Improving startup performance by up to 75% 

In 2022, we embarked on a journey that transformed how macOS users experience Docker. At that time, experimental virtualization was the norm, resulting in startup times that tested your patience, often exceeding 30 seconds. We knew we had to improve this, so we made several adjustments to reduce startup time significantly, such as adding support for the Mac virtualization framework and optimizing the Docker Desktop Linux VM boot sequence.

Now, when you fire up Docker Desktop 4.23, brace yourself for a lightning-fast launch, taking a mere 3.481 seconds. That’s right, a startup time that’s not just improved but slashed by 75% (Figure 1).

Figure 1: Startup time improvements across dev environments from Docker Desktop 4.12 to 4.23.

Mac users aren’t the only ones celebrating. Windows Hyper-V and Windows WSL2 users have their reason to cheer. Startup times have decreased from 20.257 seconds (with 4.12) to just 10.799 seconds (with 4.23). That 47% performance boost provides a smoother and more efficient development experience.

And the startup performance journey continues. We want to achieve sub-three-second startup times for all supported development environments. We’re looking forward to delivering this additional advancement soon, and we anticipate our startup performance to continue improving with each release.

Accelerating network performance by 85x

Downloading and uploading container images can be time-consuming. On Mac, with Docker Desktop 4.23, we’ve accelerated the process, delivering speeds over 30GB/s (bytes/sec), ensuring swift development workflows. We did this by entirely replacing the Docker Desktop network stack with a newer, modernized version that is much more efficient. This change resulted in an 85x improvement in upload speed compared to previous versions (4.12) (Figure 2). Think of it as upgrading from a horse-drawn carriage to a bullet train. Your data can now move seamlessly without delays.

Figure 2: Host-to-container use case occurs when a service hosted inside the container is accessed from outside the VM (for example, when a web developer accesses a website they are working on using a browser). Container-to-host use case occurs when the container accesses a service provided from the host (for example, when a package is installed as part of a build using internet access).

On Windows, downloading an image has never been faster. Docker Desktop 4.23 now achieves a speed of 1.1Gbits/s, enhancing developer efficiency. This achievement represents a 650% improvement compared to the previous version (4.12).

For real-time downloading speed, such as you would expect for video games and movies, Docker Desktop 4.23 on macOS offers UDP streaming improvements, soaring to 4.75GB/s (bytes/sec), a 5,800% increase in streaming speed compared to the previous version (4.12). These numbers translate to a faster, smoother digital experience, helping to keep your digital world at the speed of your ideas.

Optimizing host file sharing performance by more than 2x 

File sharing may not always be in the spotlight. Still, it’s an unsung hero of modern development that can make or break your development experience, and we’ve even made improvements here.

Imagine this scenario: Not too long ago, working with Docker Desktop 4.11 on your trusty Mac host, building Redis from within a container (where your Redis source code resided on your local host) was a patience-testing ordeal. It demanded 7 minutes and 25 seconds of your valuable time, primarily because the container’s access to the host files introduced frustrating delays. 

Today, with Docker Desktop 4.23, we’ve revolutionized the game. Thanks to groundbreaking improvements in virtiofs, that same Redis build now takes only 2 minutes and 6 seconds. That’s an impressive 71% reduction in build time.

Since macOS 12.5+, virtiofs is now the default in Docker Desktop as the standard to deliver substantial performance gains when sharing files with containers (Figure 3). You can read more about this in “Docker Desktop 4.23: Updates to Docker Init, New Configuration Integrity Check, Quick Search Improvements, Performance Enhancements, and More.”

Figure 3: Docker Desktop 4.11 compared to 4.22 with virtiofs enabled.
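The scenario above — compiling inside a container while the source lives on the host over a bind mount — is easy to reproduce with your own project. A minimal sketch, assuming a make-based source tree on the host (the `gcc` image tag, path, and Redis checkout are illustrative):

```shell
# Time a containerized build whose source sits on the host -- the exact
# workload virtiofs accelerates. SRC_DIR and the image tag are
# illustrative; any make-based project works.
SRC_DIR="$PWD/redis"

BUILD_CMD="docker run --rm -v ${SRC_DIR}:/src -w /src gcc:13 make -j4"
echo "$BUILD_CMD"

# With Docker Desktop running and the source checked out, execute it:
# git clone --depth 1 https://github.com/redis/redis.git "$SRC_DIR"
# time $BUILD_CMD
```

Running the same timed build before and after upgrading Docker Desktop is a simple way to measure the virtiofs gains on your own hardware.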

But wait, there’s more to come. Expect even more progress in the file-sharing arena soon. We continue working toward seamless collaboration and faster development cycles because we know that every minute saved is a minute gained for innovation.

Increasing efficiency and reducing idle memory usage by 10x

Let’s talk about efficiency and a little touch of green innovation.

In Docker Desktop 4.22, we introduced Resource Saver mode, which is like having your development environment on standby, ready to jump into action when needed and conserving resources when it’s not. Resource Saver mode works on Mac, Windows, and Linux, massively reducing Docker Desktop’s memory and CPU footprint when Docker Desktop is idle (i.e., not running containers for a configurable period of time). It reduces memory utilization on host machines by about 2 GB, allowing developers to multitask uninterrupted (Figure 4).

Figure 4: Idle memory usage improvements since Docker Desktop 4.20.

Besides improving developer multitasking, what else is so remarkable about this feature? Well, let me paint the picture. We’re saving 38,500 CPU hours daily across all our Docker Desktop users. To put that in perspective, that’s enough to power 1,000 American homes for an entire month.

We have also made significant improvements while Docker Desktop is active (i.e., running containers), resulting in a 52.85% reduction in footprint. These improvements make Docker Desktop lighter and free up resources on your machine to leverage other tools and applications efficiently (Figure 5).

Figure 5: Docker Desktop active memory usage improvements since 4.20.

This means we’re not just optimizing your development workflow but doing so efficiently, reducing energy costs, and positively impacting the environment — an area we will continue to invest in. The reduced footprint is one small way of giving back while helping you build the future — a win-win.

Streamlining the build process, delivering up to a 40% compression improvement

Imagine your containers are digital backpacks; the heavier the bag, the harder it is to carry around while you work. We’ve introduced support for Zstandard (zstd) compression of Docker container images in Docker Desktop 4.19 to lighten the load, reducing container image sizes with remarkable results. 

Look at the data for a debian:12.1 container image in Figure 6. Zstandard delivers a ~40% improvement in compression compared to the traditional gzip method. And for the Docker Engine:24.0 image, we are achieving a ~20% enhancement.

Figure 6: Data for a debian:12.1 container image and Docker Engine 24.0 with improved compression.

In practical terms, your container images become leaner and faster to transfer, allowing you to work more swiftly and effectively. With Docker Desktop, it’s like fitting your backpack with a magical compression spell, making every byte count. Your containers are lighter, your image pulls and pushes are faster, and your development is smoother — optimizing your journey, one compression at a time.
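If you build with BuildKit, you can opt into zstd today via the image exporter's `compression` option. A sketch of the invocation — the registry and image name are placeholders for your own:

```shell
# Push an image with zstd-compressed layers using BuildKit's image
# exporter. The registry/image name is a hypothetical destination.
IMAGE="registry.example.com/myapp:latest"
BUILD_ARGS="--output type=image,name=${IMAGE},compression=zstd,push=true"
echo "docker buildx build $BUILD_ARGS ."

# With a Dockerfile in the current directory and registry credentials:
# docker buildx build $BUILD_ARGS .
```

Note that whatever pulls the image — runtimes, registries, scanners — needs to understand zstd-compressed OCI layers, so check your consumers before switching a shared repository over.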

Enterprise-level security (and peace of mind)

When we talk about speed and performance, there’s a crucial aspect we mustn’t overlook: security. At Docker, we understand that speed without security is like a ship without a compass — it may move fast but won’t stay on course.

While we’ve been investing heavily in accelerating your development journey, we haven’t lost sight of our commitment to enterprise-level security and governance. In fact, it’s quite the opposite. Our goal is to create a seamless union between velocity and vigilance.

Here’s how we do it:

Unprivileged users: Unlike the native Docker Engine on Linux, Docker Desktop can be run by unprivileged users. This is because Docker Desktop runs Docker Engine inside a Linux VM, isolated from the underlying host machine.

Enhanced Container Isolation: ECI runs containers in rootless mode by default, vets sensitive system calls in containers, and restricts sensitive mounts, thereby adding an extra layer of isolation between containers and the host. It does this without changing developer workflows, so you can continue using Docker as usual with an extra layer of peace of mind.

Settings management: With settings management, IT admins can manipulate security settings in Docker Desktop per organization security policies to better secure developer environments.

Robust security model: Our security model is designed for safety and optimal performance. The two should go hand in hand. So, while protecting your environment, we ensure it runs efficiently.

Continuous security audits: Our commitment to security goes beyond features and tools. We are dedicated to safeguarding the platform, user community, and customers from various modern-day threats. We invest in regular security audits to scrutinize every nook and cranny of our applications and services. Vulnerabilities are swiftly identified and mitigated.
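As an example of the Settings Management bullet above: admins distribute an `admin-settings.json` file to developer machines to lock down Docker Desktop behavior. A minimal sketch that enforces Enhanced Container Isolation might look like the following — the key names follow the documented schema, but verify them against your Docker Desktop version:

```json
{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "value": true,
    "locked": true
  }
}
```

With `locked` set to `true`, developers see the setting enabled and grayed out in the Docker Desktop UI.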

We aim to provide you with a holistic platform, an enterprise-grade offering that seamlessly integrates performance and security. In this fast-paced world, the perfect blend of speed and security truly empowers innovation. At Docker, we’re here to ensure you have both every step of the way.

Continuing our journey

At Docker, our unwavering commitment to performance and innovation is crystal clear. The achievements showcased here are just the beginning. So, as you embark on your development endeavors, know that we’re right there with you, making the seconds count and ensuring your confidence and ability to focus energy on what truly matters — creating and innovating. Together, we’re rewriting the story of development across the SDLC, one build, container, and application at a time.

I hope you will join us at DockerCon 2023, in person or virtually, to explore what we have planned for Docker Desktop, Docker Hub, and Docker Scout. Upgrade to the latest Docker Desktop version and check out our Docker 101 webinar: What Docker can do for your business.

Learn more

Register for DockerCon.

Register for DockerCon workshops.

See the DockerCon program.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.
