Debian’s Dedication to Security: A Robust Foundation for Docker Developers

As security threats become increasingly prevalent, building software with security top of mind is essential. Security has become an increasing concern for container workloads specifically and, commensurately, for container base-image choice. Many conversations around choosing a secure base image focus on CVE counts, but security involves a lot more than that.

One organization that has been leading the way in secure software development is the Debian Project. In this post, I will outline how and why Debian operates as a secure basis for development.

For more than 30 years, Debian’s diverse group of volunteers has provided a free, open, stable, and secure GNU/Linux distribution. Debian’s emphasis on engineering excellence and clean design, along with its wide variety of packages and supported architectures, has made it not only a widely used distribution in its own right but also a meta-distribution. Many other Linux distributions, such as Ubuntu, Linux Mint, and Kali Linux, are built on top of Debian, as are many Docker Official Images (DOI). In fact, more than 1,000 Docker Official Image variants use the debian DOI or the Debian-derived ubuntu DOI as their base image.

Why Debian?

As a bit of a disclaimer, I have been using Debian GNU/Linux for a long time. I remember installing Debian from floppy disks in the 1990s on a PC that I cobbled together, and later reinstalling so I could test prerelease versions of the netinst network installer. Installing over the network took a while using a 56-kbps modem. At those network speeds, you had to be very particular about which packages you chose in dselect. 

Having used a few other distributions before trying Debian, I still remember being amazed by how well-organized and architected the system was. No dangling or broken dependencies. No download failures. No incompatible shared libraries. No package conflicts, but rather a thoughtful handling of packages providing similar functionality. 

Much has changed over the years: the floppies are gone, dselect has been retired, my network connection speed has increased by a few orders of magnitude, and now I “install” Debian via docker pull debian. What has not changed is the feeling of amazement I have toward Debian and its community.

Open source software and security

Despite the achievements of the Debian project and the many other projects it has spawned, it is not without detractors. Like many other open source projects, Debian has received its share of criticism in the past few years from opportunists lamenting the state of open source security. Writing about the software supply chain while bemoaning high-profile CVEs and pointing to malware that has been uploaded to an open source package ecosystem, such as PyPI or NPM, has become all too common.

The pernicious assumption in such articles is that open source software is the problem. We know this is not the case. We’ve been through this before. Back when I was installing Debian over a 56-kbps modem, all sorts of fear, uncertainty, and doubt (FUD) was being spread by various proprietary software vendors. We learned then that open source is not a security problem — it is a security solution. 

Being open source does not automatically convey an improved security status compared to closed-source software, but it does provide significant advantages. In his Secure Programming HOWTO, David Wheeler provides a balanced summary of the relationship between open source software and security. A purported advantage conveyed by closed-source software is the nondisclosure of its source code, but we know that security through obscurity is no security at all. 

The transparency of open source software and open ecosystems allows us to better know our security posture. Openness allows for the rapid identification and remediation of vulnerabilities. Openness enables the vast majority of the security and supply chain tooling that developers regularly use. How many closed-source tools regularly publish CVEs? With proprietary software, you often only find out about a vulnerability after it is too late.

Debian’s rapid response strategy

Debian has been criticized for moving too slowly on the security front. But this narrative, like the open vs. closed-source narrative, captures neither the nuance nor the reality. Although several distributions wait to publish CVEs until a fixed version is available, Debian opts for complete transparency and urgency when communicating security information to its users.

Furthermore, Debian maintainers are not a mindless fleet of automatons hastily applying patches and releasing new package versions. As a rule, Debian maintainers are experts among experts, deeply steeped in software and delivery engineering, open source culture, and the software they package.

zlib vulnerability example

A recent zlib vulnerability, CVE-2023-45853, provides an insightful example of the Debian project’s diligent, thorough approach to security. Several distributions grabbed a patch for the vulnerability, applied it, rebuilt, packaged, and released a new zlib package. The Debian security community took a closer look.

As mentioned in the CVE summary, the vulnerability was in minizip, which is a utility under the contrib directory of the zlib source code. No minizip source files are compiled into the zlib library, libz. As such, this vulnerability did not actually affect any zlib packages.
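You can confirm this for yourself on a Debian-based system by asking the dynamic loader. This is a quick sketch (it assumes the system zlib is discoverable under its usual soname, libz.so.1): the core zlib API resolves, but the minizip entry points do not.

```python
import ctypes
import ctypes.util

# Locate and load the system zlib shared library (libz.so.1 on Debian).
libz = ctypes.CDLL(ctypes.util.find_library("z"))

# The core zlib API is present in the library...
print(hasattr(libz, "inflate"))    # True

# ...but the minizip entry points from the contrib directory are not
# compiled in, which is why CVE-2023-45853 did not affect libz itself.
print(hasattr(libz, "unzOpen"))    # False
print(hasattr(libz, "zipOpen"))    # False
```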

If that were where the story had ended, the only harm would be in updating a package unnecessarily. But the story did not end there. As detailed in the Debian bug thread, the offending minizip code was copied (i.e., vendored) and used in a lot of other widely used software. In fact, the vendored minizip code in both Chromium and Node.js was patched about a month before the zlib CVE was even published. 

Unfortunately, other commonly used software packages also had vendored copies of minizip that were still vulnerable. Thanks to the diligence of the Debian project, either the patch was applied to those projects as well, or they were compiled against the patched system minizip (not zlib!) dev package rather than the vendored version. In other distributions, those buggy vendored copies are in some cases still being compiled into software packages, with nary a mention in any CVE.

Thinking beyond CVEs

In the past 30 years, we have seen an astronomical increase in the role open source software plays in the tech industry. Despite the productivity gains that software engineers get by leveraging the massive amount of high-quality open source software available, we are once again hearing the same FUD we heard in the early days of open source. 

The next time you see an article about the dangers lurking in your open source dependencies, don’t be afraid to look past the headlines and question the assumptions. Open ecosystems lead to secure software, and the Debian project provides a model we would all do well to emulate. Debian’s goal is security, which encompasses a lot more than a report showing zero CVEs. Consumers of operating systems and container images would be wise to understand the difference. 

So go ahead and build on top of the debian DOI. FROM debian is never a bad way to start a Dockerfile!
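If you want a concrete starting point, a minimal Dockerfile might look like the following; the pinned tag and the installed package are illustrative choices, not requirements:

```dockerfile
# Pin a specific Debian release (and the slim variant) for smaller,
# more reproducible builds than FROM debian:latest.
FROM debian:bookworm-slim

# Install only what you need, and clean up apt metadata in the same layer.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*

CMD ["bash"]
```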

Learn more

Find the Debian Official Image on Docker Hub.

Read From Misconceptions to Mastery: Enhancing Security and Transparency with Docker Official Images.

Visit Docker Hub to find more Docker Official Images.

Find the Debian DOI on GitHub.

Learn more about Docker Hub.

Secure your images with Docker Scout.

Subscribe to the Docker Newsletter.

Quelle: https://blog.docker.com/feed/

KubeCon EU 2024: Highlights from Paris

Are in-person tech conferences back in fashion? Or are engineers just willing to travel for fresh baguettes? In this post, I round up a few highlights from KubeCon Europe 2024, held March 19-24 in Paris.

My last KubeCon was in Detroit in 2022, when tech events were still slowly recovering from COVID. But KubeCon EU in Paris was buzzing, with more than 12,000 attendees! I couldn’t even get into a few of the most popular talks because the lines to get in wrapped around the exhibition hall even after the rooms were full. Fortunately, the CNCF has already posted all the talk recordings so we can catch up on what we missed in person.

Now that I’ve been back home for a bit, here are a few highlights I rounded up from KubeCon EU 2024.

Docker at KubeCon

If you stopped by the Docker booth, you may have seen our Megennis Motorsport Racing experience.

The KubeCon EU 2024 Docker booth featured a Megennis Motorsport Racing experience.

Or you may have talked to one of our engineers about our new fast Docker Build Cloud experience. Everyone I talked to about Build Cloud got it immediately. I’m proud of all the work we did to make fast, hosted image builds work seamlessly with the existing docker build. 

KubeCon booth visitors ran 173 identical builds using both Docker Build Cloud and GitHub Actions (GHA). Build Cloud builds took an average of 32 seconds each, compared to GHA’s 159 seconds per build.

Docker Build Cloud wasn’t the only new product we highlighted at KubeCon this year. I also got a lot of questions about Docker Scout and how to track image dependencies. Our Head of Security, Rachel Taylor, was available to demo Docker Scout for curious customers.

Chloe Cucinotta, a Director in Docker’s marketing org, hands out eco-friendly swag for booth visitors. Photo by Docker Captain Mohammad-Ali A’râbi.

Docker Scout and Sysdig Security Day

In addition to live Docker Scout demos at the booth, Docker Scout was represented at KubeCon through a co-sponsored AMA panel and party with Sysdig Security Day. The event aimed to raise awareness around Docker’s impact on securing the software supply chain and how to solve concrete security issues with Docker Scout. It was an opportunity to explore topics in the cloud-native and open source security space alongside industry leaders Snyk and Sysdig.

The AMA panel featured Rachel Taylor, Director of Information Security, Risk, & Trust at Docker, who discussed approaches to securing the software supply chain. The post-content party served as an opportunity for Docker to hear more about our shared customers’ unique challenges one-on-one. Through participation in the event, Docker customers were able to learn more about how the Sysdig runtime monitoring integration within Docker Scout results in even more actionable insights and remediation recommendations.

Live from the show floor

Docker CEO Scott Johnston spoke with theCUBE hosts Savannah Peterson and Rob Strechay to discuss Docker Build Cloud. “What used to take an hour is now a minute and a half,” he explained.

Testcontainers and OpenShift

During KubeCon, we announced that Red Hat and Testcontainers have partnered to provide Testcontainers in OpenShift. This collaboration simplifies the testing process, allowing developers to efficiently manage their workflows without compromising on security or flexibility. By streamlining development tasks, this solution promises a significant boost in productivity for developers working within containerized environments. Read Improving the Developer Experience with Testcontainers and OpenShift to learn more.

Eli Aleyner (Head of Technical Alliances at Docker) and Daniel Oh (Senior Principal Technical Marketing Manager at Red Hat) take a selfie at the Red Hat booth.

Eli Aleyner (Head of Technical Alliances at Docker) and Daniel Oh (Senior Principal Technical Marketing Manager at Red Hat) provided a demo and an AMA at the Red Hat booth. 

Must-watch talks

During the Friday keynote, Bob Wise, CEO of Heroku, described a lightbulb moment when he first heard about Docker as part of his discussion of the beginnings of cloud-native.

For a long time, I’ve felt that the Kubernetes API model has been its superpower. The investment in easy ways to extend Kubernetes with CRDs and the controller-runtime project are unlocking a bunch of exciting platform engineering projects.

Here are a few of the many talks that my team and I really liked and that are on YouTube now.

Platform 

I loved how Sébastien Guilloux (Elastic), in his talk Building a Large Scale Multi-Cloud Multi-Region SaaS Platform with Kubernetes Controllers, explains how to put all the pieces together to build a multi-region platform. It takes advantage of the nice bits of Kubernetes controllers while also questioning assumptions about how global state should work.

Stefan Prodan (ControlPlane) gave a talk on GitOps Continuous Delivery at Scale with Flux. Flux has a strong, opinionated point of view on how CI/CD tools should interact with CRDs and the events API. There were also a few talks on Crossplane that I’d like to go back and watch. We’ve been experimenting a lot with Crossplane at Docker, and we like how it works with Helm and image registries in a way that fits in with our existing image and registry tools.

AI 

Of course, people at KubeCon are infra nerds, so when we think about AI, we first think about all those GPUs the AIs are going to need.

There was an armful of GPU provisioning talks. I attended How KubeVirt Improves Performance with CI-Driven Benchmarking, and You Can Too, in which speakers Ryan Hallisey and Alay Patel from Nvidia talked about driving down the time to allocate VMs with GPUs. But how is AI going to fit into how we run and operate servers on Kubernetes? There was less consensus on this point, but it was fun to make random guesses about what it might look like. When I was hanging out at the AuthZed booth, I made a joke about asking an AI to write my infra access control rules, and they mostly laughed and rolled their eyes.

Slimming and debugging 

Here’s a container journey I see a lot these days:

I have a fat container image.

I get a security alert about a vulnerability in one of that image’s dependencies that I don’t even use.

I switch to a slimmer base image, like a distroless image.

Oops! Now the image doesn’t work and is annoying to debug because there’s no shell.

But we’re making progress on making this easier!

In his KubeCon talk Is Your Image Really Distroless?, Docker’s Laurent Goderre walked through how to use multi-stage builds and init containers to separate out the build + init dependencies from the steady-state runtime dependencies. 
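That pattern can be sketched with a multi-stage build. The following is an illustrative example, not taken from Laurent’s talk: a Go service and these particular image tags are my assumptions.

```dockerfile
# Build stage: compilers and build dependencies live only here.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: a distroless image containing just the static binary,
# with no shell or package manager to show up in vulnerability scans.
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```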

Ephemeral containers in Kubernetes graduated to stable in 2022. In their talk, Building a Tool to Debug Minimal Container Images in Kubernetes, Docker, and ContainerD, Kyle Quest (AutonomousPlane) and Saiyam Pathak (Civo) showed how you can use the ephemeral containers API to build tooling for creating a shell in a distroless container without a shell.
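kubectl debug is built on the same API; for example, the following attaches a throwaway busybox shell alongside a running distroless pod (the pod name web-0 and target container app here are hypothetical):

```shell
# Add an ephemeral debug container that shares the process namespace
# of the pod's "app" container, giving you a shell next to a
# shell-less distroless image.
kubectl debug -it web-0 --image=busybox:1.36 --target=app -- sh
```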

Talk slide showing the comparison of different approaches for running commands in a minimal container image with no shell.

One thing that Kyle and Saiyam mentioned was how useful Nix and Nixery.dev are for building these kinds of debugging tools. We’re also using Nix in docker debug. Docker engineer Johannes Grossman says that Nix solves some problems around dynamic linking thanks to what he calls “the clash-free composability property of Nix.”

See you in Salt Lake City!

Now that we’ve recovered from the action-packed KubeCon in Paris, we can start planning for KubeCon + CloudNativeCon North America 2024. We’ll see you in beautiful Salt Lake City!

Learn more

Read Red Hat, Docker Ease Developer Experience with Testcontainers Cloud.

Read Docker CEO leans in on Build Cloud and testing to give time back to developers on SiliconAngle.

Get started with Docker Build Cloud for the fastest build times.

To learn more about Docker Build Cloud, reach out to a Docker sales expert. 


Empower Your Development: Dive into Docker’s Comprehensive Learning Ecosystem

Continuous learning is a necessity for developers in today’s fast-paced development landscape. Docker recognizes the importance of keeping developers at the forefront of innovation, and to do so, we aim to empower the developer community with comprehensive learning resources.

Docker has taken a multifaceted approach to developer education by forging partnerships with renowned platforms like Udemy and LinkedIn Learning, investing in our own documentation and guides, and highlighting the incredible learning content created by the developer community, including Docker Captains.

Commitment to developer learning

At Docker, our goal is to simplify the lives of developers, which begins with empowering devs with understanding how to maximize the power of Docker tools throughout their projects. We also recognize that developers have different learning styles, so we are taking a diversified approach to delivering this material across an array of platforms and formats, which means developers can learn in the format that best suits them. 

Strategic partnerships for developer learning

Recognizing the diverse learning needs of developers, Docker has partnered with leading online learning platforms — Udemy and LinkedIn Learning. These partnerships offer developers access to a wide range of courses tailored to different expertise levels, from beginners looking to get started with Docker to advanced users aiming to deepen their knowledge. 

For teams already utilizing these platforms for other learning needs, this collaboration places Docker learning in a familiar platform next to other coursework.

Udemy: Docker’s collaboration with Udemy highlights an array of Endorsed Docker courses, designed by industry experts. Whether getting a handle on containerization or mastering Docker with Kubernetes, Udemy’s platform offers the flexibility and depth developers need to upskill at their own pace. Today, demand remains high for Docker content across the Udemy platform, with more than 350 courses offered and nearly three million enrollments to date.

LinkedIn Learning: Through LinkedIn Learning, developers can dive into curated Docker courses to earn a Docker Foundations Professional Certificate once they complete the program. These resources are not just about technical skills; they also cover best practices and practical applications, ensuring learners are job-ready.

Leveraging Docker’s documentation and guides

Although third-party platforms provide comprehensive learning paths, Docker’s own documentation and guides are indispensable tools for developers. Our documentation is continuously updated to serve as both a learning resource and a reference. From installation and configuration to advanced container orchestration and networking, Docker’s guides are designed to help you find your solution with step-by-step walk-throughs.

If it’s been a while since you’ve checked out Docker Docs, you can visit docs.docker.com to find manuals, a getting started guide, and many new use-case guides to help you with advanced applications, including generative AI and security.

Learners interested in live sessions can register for upcoming live webinars and training on the Docker Training site. There, you will find sessions where you can interact with the Docker support team and discuss best practices for using Docker Scout and Docker Admin.

The role of community in learning

Docker’s community is a vibrant ecosystem of learners, contributors, and innovators. We are thrilled to see the community creating content, hosting workshops, providing mentorship, and enriching the vast array of Docker learning resources. In particular, Docker Captains stand out for their expertise and dedication to sharing knowledge. From James Spurin’s Dive Into Docker course, to Nana Janashia’s Docker Crash Course, to Vladimir Mikhalev’s blog with guided IT solutions using Docker (just to name a few), it’s clear there’s much to learn from within the community.

We encourage developers to join the community and participate in conversations to seek advice, share knowledge, and collaborate on projects. You can also check out the Docker Community forums and join the Slack community to connect with other members of the community.

Conclusion

Docker’s holistic approach to developer learning underscores our commitment to empowering developers with knowledge and skills. By combining our comprehensive documentation and guides with top learning platform partnerships and an active community, we offer developers a robust framework for learning and growth. We encourage you to use all of these resources together to build a solid foundation of knowledge that is enhanced with new perspectives and additional insights as new learning offerings continue to be added.

Whether you’re a novice eager to explore the world of containers or a seasoned pro looking to refine your expertise, Docker’s learning ecosystem is designed to support your journey every step of the way.

Join us in this continuous learning journey, and come learn with Docker.

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


OpenSSH and XZ/liblzma: A nation-state attack was thwarted, what did we learn?

I have recently been watching The Americans, a decade-old TV series about undercover KGB agents living disguised as a normal American family in Reagan’s America during a paranoid period of the Cold War. I was not expecting to spend this weekend reading mailing list posts describing the same type of operation being performed on open source maintainers by agents with equally shadowy identities (CVE-2024-3094).

As The Grugq explains, “The JK-persona hounds Lasse (the maintainer) over multiple threads for many months. Fortunately for Lasse, his new friend and star developer is there, and even more fortunately, Jia Tan has the time available to help out with maintenance tasks. What luck! This is exactly the style of operation a HUMINT organization will run to get an agent in place. They will position someone and then create a crisis for the target, one which the agent is able to solve.”

The operation played out over two years, getting the agent in place, setting up the infrastructure for the attack, hiding it from various tools, and then rushing to get it into Linux distributions before some recent changes in systemd were shipped that would have stopped this attack from working.

An equally unlikely accident resulted when Andres Freund, a Postgres maintainer, discovered the attack before it had reached the vast majority of systems, from a probably accidental performance slowdown. Andres says, “I didn’t even notice it while logging in with SSH or such. I was doing some micro-benchmarking at the time and was looking to quiesce the system to reduce noise. Saw sshd processes were using a surprising amount of CPU, despite immediately failing because of wrong usernames etc. Profiled sshd. Which showed lots of cpu time in code with perf unable to attribute it to a symbol, with the dso showing as liblzma. Got suspicious. Then I recalled that I had seen an odd valgrind complaint in my automated testing of Postgres, a few weeks earlier, after some package updates were installed. Really required a lot of coincidences.” 

It is hard to overstate how lucky we were here, as there are no tools that will detect this vulnerability. Even after the fact, it is not possible to detect externally, as we do not have the private key needed to trigger the vulnerability, and the code is very well hidden. While Linus’s law has been stated as “given enough eyeballs, all bugs are shallow,” we have seen in the past that this is not always true, or there are just not enough eyeballs looking at all the code we consume, even if this time it worked.

In terms of immediate actions, the attack appears to have been targeted at a subset of OpenSSH servers patched to integrate with systemd. Running SSH servers in containers is rare, so the initial priority should be container hosts, although as the issue was caught early, it is likely that few people updated. There is a stream of fixes to liblzma, the xz compression library where the exploit was placed, as the commits from the last two years are examined, although at present there is no evidence of exploits in any software other than OpenSSH. In the Docker Scout web interface, you can search for “lzma” in package names, and issues will be flagged in the “high profile vulnerabilities” policy.
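A similar check can be run from the Docker Scout CLI. As a sketch (liblzma5 is the Debian package name; adjust the package and image for your setup):

```shell
# List CVEs for an image, filtered to the xz/lzma packages.
docker scout cves --only-package liblzma5 debian:bookworm
```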

So many commentators have simple technical solutions, and so many vendors are using this to push their tools. As a technical community, we want there to be technical solutions to problems like this. Vendors want to sell their products after events like this, even though none even detected it. Rewrite it in Rust, shoot autotools, stop using GitHub tarballs and checked-in artifacts; the list goes on. These are not bad things to do, and there is no doubt that understandability and clarity are valuable for security, although we often trade them off for performance. It is the case that m4 and autotools are pretty hard to read and understand, while tools like ifunc allow dynamic dispatch even in a mostly static ecosystem. Large investments in the ecosystem to fix these issues would be worthwhile, but we know that attackers would simply find new vectors and weird machines. Equally, there are many naive suggestions about the people side, as if having a verified identity for open source developers would solve the problem, when there are very genuine people who wish to stay private while state actors can easily create fake identities, or as if we could “just say no” to untrusted people. Beware of people bringing easy solutions; there are so many in this hot-take world.

Where can we go from here? Awareness and observability first. Hyper awareness even, as we see in this case small clues matter. Don’t focus on the exact details of this attack, which will be different next time, but think more generally. Start by understanding your organization’s software consumption, supply chain, and critical points. Ask what you should be funding to make it different. Then build in resilience. Defense in depth, and diversity — not a monoculture. OpenSSH will always be a target because it is so widespread, and the OpenBSD developers are doing great work and the target was upstream of them because of this. But we need a diverse ecosystem with multiple strong solutions, and as an organization you need second suppliers for critical software. The third critical piece of security in this era is recoverability. Planning for the scenario in which the worst case has happened and understanding the outcomes and recovery process is everyone’s homework now, and making sure you are prepared with tabletop exercises around zero days. 

This is an opportunity for all of us to continue working together to strengthen the open source supply chain, and to work on resilience for when this happens next. We encourage dialogue and discussion on this within Docker communities.

Learn more

Docker Scout dashboard: https://scout.docker.com/vulnerabilities/id/CVE-2024-3094

NIST CVE: https://nvd.nist.gov/vuln/detail/CVE-2024-3094


Building a Video Analysis and Transcription Chatbot with the GenAI Stack

Videos are full of valuable information, but tools are often needed to help find it. From educational institutions seeking to analyze lectures and tutorials to businesses aiming to understand customer sentiment in video reviews, transcribing and understanding video content is crucial for informed decision-making and innovation. Recently, advancements in AI/ML technologies have made this task more accessible than ever. 

Developing GenAI technologies with Docker opens up endless possibilities for unlocking insights from video content. By leveraging transcription, embeddings, and large language models (LLMs), organizations can gain deeper understanding and make informed decisions using diverse and raw data such as videos. 

In this article, we’ll dive into a video transcription and chat project that leverages the GenAI Stack, along with seamless integration provided by Docker, to streamline video content processing and understanding. 

High-level architecture 

The application’s architecture is designed to facilitate efficient processing and analysis of video content, leveraging cutting-edge AI technologies and containerization for scalability and flexibility. Figure 1 shows an overview of the architecture, which uses Pinecone to store and retrieve the embeddings of video transcriptions. 

Figure 1: Schematic diagram outlining a two-component system for processing and interacting with video data.

The application’s high-level service architecture includes the following:

yt-whisper: A local service, run by Docker Compose, that interacts with the remote OpenAI and Pinecone services. Whisper is an automatic speech recognition (ASR) system developed by OpenAI, representing a significant milestone in AI-driven speech processing. Trained on an extensive dataset of 680,000 hours of multilingual and multitask supervised data sourced from the web, Whisper demonstrates remarkable robustness and accuracy in English speech recognition. 

Dockerbot: A local service, run by Docker Compose, that interacts with the remote OpenAI and Pinecone services. The service takes the question of a user, computes a corresponding embedding, and then finds the most relevant transcriptions in the video knowledge database. The transcriptions are then presented to an LLM, which takes the transcriptions and the question and tries to provide an answer based on this information.

OpenAI: The OpenAI API provides an LLM service, which is known for its cutting-edge AI and machine learning technologies. In this application, OpenAI’s technology is used to generate transcriptions from audio (using the Whisper model) and to create embeddings for text data, as well as to generate responses to user queries (using GPT and chat completions).

Pinecone: A vector database service optimized for similarity search, used for building and deploying large-scale vector search applications. In this application, Pinecone is employed to store and retrieve the embeddings of video transcriptions, enabling efficient and relevant search functionality within the application based on user queries.
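For orientation, the two local services can be wired up roughly like this in docker-compose.yaml. The build paths and ports come from this article; the remaining details are assumptions rather than the repository’s actual file:

```yaml
services:
  yt-whisper:
    build: ./yt-whisper
    env_file: .env        # OPENAI_TOKEN and PINECONE_TOKEN
    ports:
      - "8503:8503"
  dockerbot:
    build: ./docker-bot
    env_file: .env
    ports:
      - "8504:8504"
```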

Getting started

To get started, complete the following steps:

Create an OpenAI API Key.

Ensure that you have a Pinecone API Key.

Ensure that you have installed the latest version of Docker Desktop. 

The application is a chatbot that can answer questions from a video. Additionally, it provides timestamps from the video that can help you find the sources used to answer your question.

Clone the repository 

The next step is to clone the repository:

git clone https://github.com/dockersamples/docker-genai.git

The project contains the following directories and files:

├── docker-genai/
│   ├── docker-bot/
│   ├── yt-whisper/
│   ├── .env.example
│   ├── .gitignore
│   ├── LICENSE
│   ├── README.md
│   └── docker-compose.yaml

Specify your API keys

In the /docker-genai directory, create a text file called .env, and specify your API keys inside. The following snippet shows the contents of the .env.example file that you can refer to as an example.

#--------------------------------------------------------------
# OpenAI
#--------------------------------------------------------------
OPENAI_TOKEN=your-api-key # Replace your-api-key with your personal API key

#--------------------------------------------------------------
# Pinecone
#--------------------------------------------------------------
PINECONE_TOKEN=your-api-key # Replace your-api-key with your personal API key

Build and run the application

In a terminal, change directory to your docker-genai directory and run the following command:

docker compose up --build

Next, Docker Compose builds and runs the application based on the services defined in the docker-compose.yaml file. When the application is running, you’ll see the logs of two services in the terminal.

In the logs, you’ll see the services are exposed on ports 8503 and 8504. The two services complement each other.

The yt-whisper service is running on port 8503. This service feeds the Pinecone database with videos that you want to archive in your knowledge database. The next section explores the yt-whisper service.

Using yt-whisper

The yt-whisper service is a YouTube video processing service that uses the OpenAI Whisper model to generate transcriptions of videos and stores them in a Pinecone database. The following steps outline how to use the service.

Open a browser and access the yt-whisper service at http://localhost:8503. Once the application appears, specify a YouTube video URL in the URL field and select Submit. The example shown in Figure 2 uses a video from David Cardozo.

Figure 2: A web interface showcasing processed video content with a feature to download transcriptions.

Submitting a video

The yt-whisper service downloads the audio of the video, then uses Whisper to transcribe it into a WebVTT (*.vtt) format (which you can download). Next, it uses the “text-embedding-3-small” model to create embeddings and finally uploads those embeddings into the Pinecone database.

After the video is processed, a video list appears in the web app that informs you which videos have been indexed in Pinecone. It also provides a button to download the transcript.
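The WebVTT transcript is what makes timestamped answers possible: each cue pairs a time range with a piece of text. Here is a minimal, hand-rolled sketch of pulling cues out of a VTT string (illustrative only; the sample cue text is invented, and real parsers handle multi-line cues, settings, and notes):

```python
def parse_vtt(text):
    # Extract (start, end, text) cues from a WebVTT document.
    # Cue timing lines look like: 00:00:01.000 --> 00:00:04.000
    cues = []
    lines = iter(text.splitlines())
    for line in lines:
        if "-->" in line:
            start, _, end = line.partition("-->")
            caption = next(lines, "").strip()
            cues.append((start.strip(), end.strip(), caption))
    return cues

sample = """WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome to this video about containers.

00:00:04.500 --> 00:00:08.000
Today we look at GPU workloads.
"""
for cue in parse_vtt(sample):
    print(cue)
```

With cues in this shape, a chatbot answer can cite the start time of whichever cue supplied the relevant text.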

Accessing Dockerbot chat service

You can now access the Dockerbot chat service on port 8504 and ask questions about the videos as shown in Figure 3.

Figure 3: Example of a user asking Dockerbot about NVIDIA containers and the application giving a response with links to specific timestamps in the video.

Conclusion

In this article, we explored the exciting potential of GenAI technologies combined with Docker for unlocking valuable insights from video content. We showed how the integration of cutting-edge AI models like Whisper, coupled with efficient database solutions like Pinecone, empowers organizations to transform raw video data into actionable knowledge. 

Whether you’re an experienced developer or just starting to explore the world of AI, the provided resources and code make it simple to embark on your own video-understanding projects. 

Learn more

Accelerated AI/ML with Docker

Build and run natural language processing (NLP) applications with Docker

Video transcription and chat using GenAI Stack

PDF analysis and chat using GenAI Stack

Subscribe to the Docker Newsletter.

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

Is Your Container Image Really Distroless?

Containerization helped drastically improve the security of applications by providing engineers with greater control over the runtime environment of their applications. However, a significant time investment is required to maintain the security posture of those applications, given the daily discovery of new vulnerabilities as well as regular releases of languages and frameworks. 

The concept of “distroless” images offers the promise of greatly reducing the time needed to keep applications secure by eliminating most of the software contained in typical container images. This approach also reduces the amount of time teams spend remediating vulnerabilities, allowing them to focus only on the software they are using. 

In this article, we explain what makes an image distroless, describe tools that make the creation of distroless images practical, and discuss whether distroless images live up to their potential.

What’s a distro?

A Linux distribution is a complete operating system built around the Linux kernel, comprising a package management system, GNU tools and libraries, additional software, and often a graphical user interface.

Common Linux distributions include Debian, Ubuntu, Arch Linux, Fedora, Red Hat Enterprise Linux, CentOS, and Alpine Linux (which is more common in the world of containers). These distributions, like most Linux distros, take security seriously, with teams working diligently to release frequent patches and updates for known vulnerabilities. A key challenge that all Linux distributions must face involves the usability/security dilemma. 

On its own, the Linux kernel is not very usable, so many utility commands are included in distributions to cover a large array of use cases. Having the right utilities included in the distribution without having to install additional packages greatly improves a distro’s usability. The downside of this increase in usability, however, is an increased attack surface area to keep up to date. 

A Linux distro must strike a balance between these two elements, and different distros have different approaches to doing so. A key aspect to keep in mind is that a distro that emphasizes usability is not “less secure” than one that does not emphasize usability. What it means is that the distro with more utility packages requires more effort from its users to keep it secure.

Multi-stage builds

Multi-stage builds allow developers to separate build-time dependencies from runtime ones. Developers can now start from a full-featured build image with all the necessary components installed, perform the necessary build step, and then copy only the result of those steps to a more minimal or even an empty image, called “scratch”. With this approach, there’s no need to clean up dependencies and, as an added bonus, the build stages are also cacheable, which can considerably reduce build time. 

The following example shows a Go program taking advantage of multi-stage builds. Because the Golang runtime is compiled into the binary, only the binary and root certificates need to be copied to the blank slate image.

FROM golang:1.21.5-alpine AS build
WORKDIR /
COPY go.* .
RUN go mod download
COPY . .
RUN go build -o my-app

FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=build /my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/local/bin/my-app"]

BuildKit

BuildKit, the current engine used by docker build, helps developers create minimal images thanks to its extensible, pluggable architecture. It provides the ability to specify alternative frontends (with the default being the familiar Dockerfile) to abstract and hide the complexity of creating distroless images. These frontends can accept more streamlined and declarative inputs for builds and can produce images that contain only the software needed for the application to run. 

The following example shows the input for a frontend for creating Python applications called mopy by Julian Goede.

#syntax=cmdjulian/mopy
apiVersion: v1
python: 3.9.2
build-deps:
  - libopenblas-dev
  - gfortran
  - build-essential
envs:
  MYENV: envVar1
pip:
  - numpy==1.22
  - slycot
  - ./my_local_pip/
  - ./requirements.txt
labels:
  foo: bar
  fizz: ${mopy.sbom}
project: my-python-app/

So, is your image really distroless?

Thanks to new tools for creating container images like multi-stage builds and BuildKit, it is now a lot more practical to create images that only contain the required software and its runtime dependencies. 

However, many images claiming to be distroless still include a shell (usually Bash) and/or BusyBox, which provides many of the commands a Linux distribution does — including wget — that can leave containers vulnerable to living-off-the-land (LOTL) attacks. This raises the question, “Why would an image trying to be distroless still include key parts of a Linux distribution?” The answer typically involves container initialization. 

Developers often have to make their applications configurable to meet the needs of their users. Most of the time, those configurations are not known at build time so they need to be configured at run time. Often, these configurations are applied using shell initialization scripts, which in turn depend on common Linux utilities such as sed, grep, cp, etc. When this is the case, the shell and utilities are only needed for the first few seconds of the container’s lifetime. Luckily, there is a way to create true distroless images while still allowing initialization using tools available from most container orchestrators: init containers.
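To see why a shell is usually wanted at startup, consider what those init scripts actually do: substitute runtime values into configuration templates. The same step can be expressed in a few lines of Python (an illustrative sketch; the ${VAR} placeholder style and the config keys shown are invented):

```python
def render_template(template, env):
    # Replace ${VAR} placeholders with values from the environment --
    # the job usually done by sed or envsubst in shell init scripts.
    out = template
    for key, value in env.items():
        out = out.replace("${%s}" % key, value)
    return out

template = "listen_port = ${PORT}\nlog_level = ${LOG_LEVEL}\n"
config = render_template(template, {"PORT": "8080", "LOG_LEVEL": "info"})
print(config)
```

Because this work happens only once, at startup, it can be moved out of the application image entirely, which is exactly what init containers enable.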

Init containers

In Kubernetes, an init container is a container that starts and must complete successfully before the primary container can start. By using a non-distroless container as an init container that shares a volume with the primary container, the runtime environment and application can be configured before the application starts. 

The lifetime of that init container is short (often just a couple seconds), and it typically doesn’t need to be exposed to the internet. Much like multi-stage builds allow developers to separate the build-time dependencies from the runtime dependencies, init containers allow developers to separate initialization dependencies from the execution dependencies. 

The concept of an init container may be familiar if you use relational databases, where an init container is often used to perform schema migration before a new version of an application is started.

Kubernetes example

Here are two examples of using init containers. First, using Kubernetes:

apiVersion: v1
kind: Pod
metadata:
  name: kubecon-postgress-pod
  labels:
    app.kubernetes.io/name: KubeConPostgress
spec:
  containers:
    - name: postgress
      image: laurentgoderre689/postgres-distroless
      securityContext:
        runAsUser: 70
        runAsGroup: 70
      volumeMounts:
        - name: db
          mountPath: /var/lib/postgresql/data/
  initContainers:
    - name: init-postgress
      image: postgres:alpine3.18
      env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kubecon-postgress-admin-pwd
              key: password
      command: ['docker-ensure-initdb.sh']
      volumeMounts:
        - name: db
          mountPath: /var/lib/postgresql/data/
  volumes:
    - name: db
      emptyDir: {}

---

> kubectl apply -f pod.yml && kubectl get pods
pod/kubecon-postgress-pod created
NAME                    READY   STATUS     RESTARTS   AGE
kubecon-postgress-pod   0/1     Init:0/1   0          0s
> kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
kubecon-postgress-pod   1/1     Running   0         10s

Docker Compose example

The init container concept can also be emulated in Docker Compose for local development using service dependencies and conditions.

services:
  db:
    image: laurentgoderre689/postgres-distroless
    user: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data/
    depends_on:
      db-init:
        condition: service_completed_successfully

  db-init:
    image: postgres:alpine3.18
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data/
    user: postgres
    command: docker-ensure-initdb.sh

volumes:
  pgdata:

---
> docker-compose up
[+] Running 4/0
✔ Network compose_default Created
✔ Volume "compose_pgdata" Created
✔ Container compose-db-init-1 Created
✔ Container compose-db-1 Created
Attaching to db-1, db-init-1
db-init-1 | The files belonging to this database system will be owned by user "postgres".
db-init-1 | This user must also own the server process.
db-init-1 |
db-init-1 | The database cluster will be initialized with locale "en_US.utf8".
db-init-1 | The default database encoding has accordingly been set to "UTF8".
db-init-1 | The default text search configuration will be set to "english".
db-init-1 | […]
db-init-1 exited with code 0
db-1 | 2024-02-23 14:59:33.191 UTC [1] LOG: starting PostgreSQL 16.1 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
db-1 | 2024-02-23 14:59:33.191 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db-1 | 2024-02-23 14:59:33.191 UTC [1] LOG: listening on IPv6 address "::", port 5432
db-1 | 2024-02-23 14:59:33.194 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1 | 2024-02-23 14:59:33.196 UTC [9] LOG: database system was shut down at 2024-02-23 14:59:32 UTC
db-1 | 2024-02-23 14:59:33.198 UTC [1] LOG: database system is ready to accept connections

As demonstrated by the previous example, an init container can be used alongside a container to remove the need for general-purpose software and allow the creation of true distroless images. 

Conclusion

This article explained how Docker build tools allow for the separation of build-time dependencies from run-time dependencies to create “distroless” images. For example, using init containers allows developers to separate the logic needed to configure a runtime environment from the environment itself and provide a more secure container. This approach also helps teams focus their efforts on the software they use and find a better balance between security and usability.

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


containerd vs. Docker: Understanding Their Relationship and How They Work Together

During the past decade, containers have revolutionized software development by introducing higher levels of consistency and scalability. Now, developers can work without the challenges of dependency management, environment consistency, and collaborative workflows.

When developers explore containerization, they might learn about container internals, architecture, and how everything fits together. And, eventually, they may find themselves wondering about the differences between containerd and Docker and how they relate to one another.

In this blog post, we’ll explain what containerd is, how Docker and containerd work together, and how their combined strengths can improve developer experience.

What’s a container?

Before diving into what containerd is, I should briefly review what containers are. Simply put, containers are processes with added isolation and resource management. Containers get their own virtualized view of the operating system while sharing the host system’s kernel and resources. 

Containers also use operating system kernel features. They use namespaces to provide isolation and cgroups to limit and monitor resources like CPU, memory, and network bandwidth. As you can imagine, container internals are complex, and not everyone has the time or energy to become an expert in the low-level bits. This is where container runtimes, like containerd, can help.
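Those cgroup limits are not abstract: inside a container they appear as plain files under /sys/fs/cgroup. Here is a small sketch that reads the cgroup v2 memory limit if one is present (the default path is an assumption about the cgroup v2 layout; the function returns None when no limit applies or the file is absent):

```python
def read_memory_limit(path="/sys/fs/cgroup/memory.max"):
    # cgroup v2 exposes the memory limit as a single value:
    # a byte count, or the literal string "max" when unlimited.
    try:
        with open(path) as f:
            raw = f.read().strip()
    except OSError:
        return None
    return None if raw == "max" else int(raw)

limit = read_memory_limit()
print("memory limit:", "unlimited/unknown" if limit is None else f"{limit} bytes")
```

Running this inside a container started with, say, `docker run --memory 1g` would show the byte count the runtime wrote into that file.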

What’s containerd?

In short, containerd is a runtime built to run containers. This open source tool builds on top of operating system kernel features and improves container management with an abstraction layer, which manages namespaces, cgroups, union file systems, networking capabilities, and more. This way, developers don’t have to handle the complexities directly. 

In March 2017, Docker pulled its core container runtime into a standalone project called containerd and donated it to the Cloud Native Computing Foundation (CNCF).  By February 2019, containerd had reached the Graduated maturity level within the CNCF, representing its significant development, adoption, and community support. Today, developers recognize containerd as an industry-standard container runtime known for its scalability, performance, and stability.

Containerd is a high-level container runtime with many use cases. It’s perfect for handling container workloads across small-scale deployments, but it’s also well-suited for large, enterprise-level environments (including Kubernetes). 

A key component of containerd’s robustness is its default use of Open Container Initiative (OCI)-compliant runtimes. By using runtimes such as runc (a lower-level container runtime), containerd ensures standardization and interoperability in containerized environments. It also efficiently deals with core operations in the container life cycle, including creating, starting, and stopping containers.

How is containerd related to Docker?

But how is containerd related to Docker? To answer this, let’s take a high-level look at Docker’s architecture (Figure 1). 

Containerd facilitates operations on containers by directly interfacing with your operating system. The Docker Engine sits on top of containerd and provides additional functionality and developer experience enhancements.

How Docker interacts with containerd

To better understand this interaction, let’s talk about what happens when you run the docker run command:

When you press Enter, the Docker CLI sends the run command and any command-line arguments to the Docker daemon (dockerd) via a REST API call.

dockerd will parse and validate the request, and then it will check that things like container images are available locally. If they’re not, it will pull the image from the specified registry.

Once the image is ready, dockerd will shift control to containerd to create the container from the image.

Next, containerd will set up the container environment. This process includes tasks such as setting up the container file system, networking interfaces, and other isolation features.

containerd will then delegate running the container to runc using a shim process. This will create and start the container.

Finally, once the container is running, containerd will monitor the container status and manage the lifecycle accordingly.
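The hand-off chain above (CLI → dockerd → containerd → runc) can be caricatured in a few lines of Python. This is purely an illustration of the delegation order, not any real API; all names and the log format are invented:

```python
def docker_run(image, local_images, registry, log):
    # 1-2. dockerd validates the request and pulls the image if it is
    #      not available locally.
    if image not in local_images:
        local_images[image] = registry[image]
        log.append(f"pulled {image}")
    # 3-4. containerd creates the container and sets up its environment.
    log.append(f"containerd: created container from {image}")
    # 5. a shim process hands the container spec to runc, which starts it.
    log.append(f"runc: started {image}")
    # 6. containerd keeps monitoring the running container.
    log.append(f"containerd: monitoring {image}")
    return log

log = docker_run("hello-world", {}, {"hello-world": "layers..."}, [])
print("\n".join(log))
```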

Docker and containerd: Better together 

Docker has played a key role in the creation and adoption of containerd, from its inception to its donation to the CNCF and beyond. This involvement helped standardize container runtimes and bolster the open source community’s involvement in containerd’s development. Docker continues to support the evolution of the open source container ecosystem by continuously maintaining and evolving containerd.

Containerd specializes in the core functionality of running containers. It’s a great choice for developers needing access to lower-level container internals and other advanced features. Docker builds on containerd to create a cohesive developer experience and comprehensive toolchain for building, running, testing, verifying, and sharing containers.

Build + Run

In development environments, tools like Docker Desktop, Docker CLI, and Docker Compose allow developers to easily define, build, and run single or multi-container environments and seamlessly integrate with your favorite editors or IDEs or even in your CI/CD pipeline. 

Test

One of the largest developer experience pain points involves testing and environment consistency. With Testcontainers, developers don’t have to worry about reproducibility across environments (for example, dev, staging, testing, and production). Testcontainers also allows developers to use containers for isolated dependency management, parallel testing, and simplified CI/CD integration.

Verify

By analyzing your container images and creating a software bill of materials (SBOM), Docker Scout works with Docker Desktop, Docker Hub, or Docker CLI to help organizations shift left. It also empowers developers to find and fix software vulnerabilities in container images, ensuring a secure software supply chain.

Share

Docker Registry serves as a store for developers to push container images to a shared repository securely. This functionality streamlines image sharing, making maintaining consistency and efficiency in development and deployment workflows easier. 

With Docker building on top of containerd, the software development lifecycle benefits end to end, from the inner loop and testing through to secure deployment in production.

Wrapping up

In this article, we discussed the relationship between Docker and containerd. We showed how containers, as isolated processes, leverage operating system features to provide efficient and scalable development and deployment solutions. We also described what containerd is and explained how Docker leverages containerd in its stack. 

Docker builds upon containerd to enhance the developer experience, offering a comprehensive suite of tools for the entire development lifecycle across building, running, verifying, sharing, and testing containers. 

Start your next projects with containerd and other container components by checking out Docker’s open source projects and most popular open source tools. 

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.


11 Years of Docker: Shaping the Next Decade of Development

Eleven years ago, Solomon Hykes walked onto the stage at PyCon 2013 and revealed Docker to the world for the first time. The problem Docker was looking to solve? “Shipping code to the server is hard.”

And the world of application software development changed forever.

Docker was built on the shoulders of giants of the Linux kernel, copy-on-write file systems, and developer-friendly git semantics. The result? Docker has fundamentally transformed how developers build, share, and run applications. By “dockerizing” an app and its dependencies into a standardized, open format, Docker dramatically lowered the friction between devs and ops, enabling devs to focus on their apps — what’s inside the container — and ops to focus on deploying any app, anywhere — what’s outside the container, in a standardized format. Furthermore, this standardized “unit of work” that abstracts the app from the underlying infrastructure enables an “inner loop” for developers of code, build, test, verify, and debug, which results in 13X more frequent releases of higher-quality, more secure updates.

The subsequent energy over the past 11 years from the ecosystem of developers, community, open source maintainers, partners, and customers cannot be overstated, and we are so thankful and appreciative of your support. This has shown up in many ways, including the following:

Ranked #1 “most-wanted” tool/platform by Stack Overflow’s developer community for the past four years

26 million monthly active IPs accessing 15 million repos on Docker Hub, pulling them 25 billion times per month

17 million registered developers

Moby project has 67.5k stars, 18.5k forks, and more than 2,200 contributors; Docker Compose has 32.1k stars and 5k forks

A vibrant network of 70 Docker Captains across 25 countries serving 167 community meetup groups with more than 200k members and 4800 total meetups

79,000+ customers

The next decade

In our first decade, we changed how developers build, share, and run any app, anywhere — and we’re only accelerating in our second!

Specifically, you’ll see us double down on meeting development teams where they are to enable them to rapidly ship high-quality, secure apps via the following focus areas:

Dev Team Productivity. First, we’ll continue to help teams take advantage of the right tech for the right job — whether that’s Linux containers, Windows containers, serverless functions, and/or Wasm (Web Assembly) — with the tools they love and the skills they already possess. Second, by bringing together the best of local and the best of cloud, we’ll enable teams to discover and address issues even faster in the “inner loop,” as you’re already seeing today with our early efforts with Docker Scout, Docker Build Cloud, and Testcontainers Cloud.

GenAI. This tech is ushering in a “golden age” for development teams, and we’re investing to help in two areas: First, our GenAI Stack — built through collaboration with partners Ollama, LangChain, and Neo4j — enables dev teams to quickly stand up secure, local GenAI-powered apps. Second, our Docker AI is uniquely informed by anonymized data from dev teams using Docker, which enables us to deliver automations that eliminate toil and reduce security risks.

Software Supply Chain. The software supply chain is heterogeneous, extensive, and complex for dev teams to navigate and secure, and Docker will continue to help simplify, make more visible, and manage it end-to-end. Whether it’s the trusted content “building blocks” of Docker Official Images (DOI) in Docker Hub, the transformation of ingredients into runnable images via BuildKit, verifying and securing the dev environment with digital signatures and enhanced container isolation, consuming metadata feedback from running containers in production, or making the entire end-to-end supply chain visible and issues actionable in Docker Scout, Docker has it covered and helps make a more secure internet!

Dialing it past 11

While our first decade was fantastic, there’s so much more we can do together as a community to serve app development teams, and we couldn’t be more excited as our second decade together gets underway and we dial it past 11! If you haven’t already, won’t you join us today?!

How has Docker influenced your approach to software development? Share your experiences with the community and join the conversation on LinkedIn.

Let’s build, share, and run — together!


Docker Partners with NVIDIA to Support Building and Running AI/ML Applications

GenAI and LLMs have been democratized: tasks that were once purely the domain of AI/ML developers must now be reasoned about by regular application developers and woven into everyday products and business logic. This is leading to new products and services across banking, security, healthcare, and more with generative text, images, and videos. Moreover, GenAI’s potential economic impact is substantial, with estimates that it could add trillions of dollars annually to the global economy. 

Docker offers an ideal way for developers to build, test, run, and deploy the NVIDIA AI Enterprise software platform — an end-to-end, cloud-native software platform that brings generative AI within reach for every business. The platform is available to use in Docker containers, deployable as microservices. This enables teams to focus on cutting-edge AI applications where performance isn’t just a goal — it’s a necessity.

This week, at the NVIDIA GTC global AI conference, the latest release of NVIDIA AI Enterprise was announced, providing businesses with the tools and frameworks necessary to build and deploy custom generative AI models with NVIDIA AI foundation models, the NVIDIA NeMo framework, and the just-announced NVIDIA NIM inference microservices, which deliver enhanced performance and efficient runtime. 

This blog post summarizes some of the Docker resources available to customers today.

Docker Hub

Docker Hub is the world’s largest repository for container images with an extensive collection of AI/ML development-focused container images, including leading frameworks and tools such as PyTorch, TensorFlow, Langchain, Hugging Face, and Ollama. With more than 100 million pulls of AI/ML-related images, Docker Hub’s significance to the developer community is self-evident. It not only simplifies the development of AI/ML applications but also democratizes innovation, making AI technologies accessible to developers across the globe.

NVIDIA’s Docker Hub library offers a suite of container images that harness the power of accelerated computing, supplementing NVIDIA’s API catalog. Docker Hub’s vast audience — which includes approximately 27 million monthly active IPs, showcasing an impressive 47% year-over-year growth — can use these container images to enhance AI performance. 

Docker Hub’s extensive reach, underscored by an astounding 26 billion monthly image pulls, suggests immense potential for continued growth and innovation.

Docker Desktop with NVIDIA AI Workbench

Docker Desktop on Windows and Mac helps deliver NVIDIA AI Workbench developers a smooth experience on local and remote machines. 

NVIDIA AI Workbench is an easy-to-use toolkit that allows developers to create, test, and customize AI and machine learning models on their PC or workstation and scale them to the data center or public cloud. It simplifies interactive development workflows while automating technical tasks that halt beginners and derail experts. AI Workbench makes workstation setup and configuration fast and easy. Example projects are also included to help developers get started even faster with their own data and use cases.   

Docker engineering teams are collaborating with NVIDIA to improve the user experience with NVIDIA GPU-accelerated platforms through recent improvements to the AI Workbench installation on WSL2. Check out how NVIDIA AI Workbench can be used locally to tune a generative image model to produce more accurate prompted results.

In a near-term update, AI Workbench will use the Container Device Interface (CDI) to govern local and remote GPU-enabled environments. CDI is a CNCF-sponsored project led by NVIDIA and Intel, which exposes NVIDIA GPUs inside of containers to support complex device configurations and CUDA compatibility checks. This simplifies how research, simulation, GenAI, and ML applications utilize local and cloud-native GPU resources.  

With Docker Desktop 4.29 (which includes Moby 25), developers can configure CDI support in the daemon and then easily make all NVIDIA GPUs available in a running container by using the --device option via support for CDI devices.

docker run --device nvidia.com/gpu=all <image> <command>

LLM-powered apps with Docker GenAI Stack

The Docker GenAI Stack lets teams easily integrate NVIDIA accelerated computing into their AI workflows. This stack, designed for seamless component integration, can be set up on a developer’s laptop using Docker Desktop for Windows. It helps deliver the power of NVIDIA GPUs and NVIDIA NIM to accelerate LLM inference, providing tangible improvements in application performance. Developers can experiment and modify five pre-packaged applications to leverage the stack’s capabilities.

Accelerate AI/ML development with Docker Desktop

Docker Desktop facilitates an accelerated machine learning development environment on a developer’s laptop. By tapping NVIDIA GPU support for containers, developers can leverage tools distributed via Docker Hub, such as PyTorch and TensorFlow, to see significant speed improvements in their projects, underscoring the efficiency gains possible with NVIDIA technology on Docker.

Securing the software supply chain

Docker Hub’s registry and tools, including capabilities for build, digital signing, Software Bill of Materials (SBOM), and vulnerability assessment via Docker Scout, allow customers to ensure the quality and integrity of container images from end to end. This comprehensive approach not only accelerates the development of machine learning applications but also secures the GenAI and LLM software supply chain, providing developers with the confidence that their applications are built on a secure and efficient foundation.

“With exploding interest in AI from a huge range of developers, we are excited to work with NVIDIA to build tooling that helps accelerate building AI applications. The ecosystem around Docker and NVIDIA has been building strong foundations for many years and this is enabling a new community of enterprise AI/ML developers to explore and build GPU accelerated applications.”
Justin Cormack, Chief Technology Officer, Docker

“Enterprise applications like NVIDIA AI Workbench can benefit enormously from the streamlining that Docker Desktop provides on local systems. Our work with the Docker team will help improve the AI Workbench user experience for managing GPUs on Windows.”
Tyler Whitehouse, Principal Product Manager, NVIDIA

Learn more 

By leveraging Docker Desktop and Docker Hub with NVIDIA technologies, developers are equipped to harness the revolutionary power of AI, grow their skills, and seize opportunities to deliver innovative applications that push the boundaries of what’s possible. Check out NVIDIA’s Docker Hub library  and NVIDIA AI Enterprise to get started with your own AI solutions.

Rev Up Your Cloud-Native Journey: Join Docker at KubeCon + CloudNativeCon EU 2024 for Innovation, Expert Insight, and Motorsports

We are racing toward the finish line at KubeCon + CloudNativeCon Europe, March 19 – 22, 2024 in Paris, France. Join the Docker “pit crew” at Booth #J3 for an incredible racing experience, new product demos, and limited-edition SWAG. 

Meet us at our KubeCon booth, sessions, and events to learn about the latest trends in AI productivity and best practices in cloud-native development with Docker. At our KubeCon booth (#J3), we’ll show you how building in the cloud accelerates development and simplifies multi-platform builds with a side-by-side demo of Docker Build Cloud. Learn how Docker and Testcontainers Cloud integrate seamlessly with your testing framework to improve the quality and speed of application delivery.

It’s not all work, though — join us at the booth for our Megennis Motorsport Racing experience and try to beat the best!

Take advantage of this opportunity to connect with the Docker team, learn from the experts, and contribute to the ever-evolving cloud-native landscape. Let’s shape the future of cloud-native technologies together at KubeCon!

Deep dive sessions from Docker experts

Is Your Image Really Distroless? — Docker software engineer Laurent Goderre will dive into the world of “distroless” Docker images on Wednesday, March 20. In this session, Goderre will explain the significance of separating build-time and run-time dependencies to enhance container security and reduce vulnerabilities. He’ll also explore strategies for configuring runtime environments without compromising security or functionality. This is a must-attend session for KubeCon attendees keen on fortifying their Docker containers.

Simplified Inner and Outer Cloud Native Developer Loops — Docker Staff Community Relations Manager Oleg Šelajev and Diagrid Customer Success Engineer Alice Gibbons tackle the challenges of developer productivity in cloud-native development. On Wednesday, March 20, they will present tools and practices to bridge the gap between development and production environments, demonstrating how a unified approach can streamline workflows and boost efficiency across the board.

Engage, learn, and network at these events

Security Soiree: Hands-on cloud-native security workshop and party — Join Sysdig, Snyk, and Docker on March 19 for cocktails, team photos, music, prizes, and more at the Security Soiree. Listen to a compelling panel discussion led by industry experts, including Docker’s Director of Security, Risk & Trust, Rachel Taylor, followed by an evening of networking and festivities. Get tickets to secure your invitation.

Docker Meetup at KubeCon: Development & data productivity in the age of AI — Join us at our meetup during KubeCon on March 21 and hear insights from Docker, Pulumi, Tailscale, and New Relic. This networking mixer at Tonton Becton Restaurant promises candid discussions on enhancing developer productivity with the latest AI and data technologies. Reserve your spot now for an evening of casual conversation, drinks, and delicious appetizers.

See you March 19 – 22 at KubeCon + CloudNativeCon Europe

We look forward to seeing you in Paris — safe travels and prepare for an unforgettable experience!

Learn more

New to Docker? Create an account. 

Learn about Docker Build Cloud.

Subscribe to the Docker Newsletter.

Read about what rolled out in Docker Desktop 4.27, including synchronized file shares, Docker Init GA, a private marketplace for extensions, Moby 25, support for Testcontainers with ECI, Docker Build Cloud, and Docker Debug Beta.

Source: https://blog.docker.com/feed/