Rev Up Your Cloud-Native Journey: Join Docker at KubeCon + CloudNativeCon EU 2024 for Innovation, Expert Insight, and Motorsports

We are racing toward the finish line at KubeCon + CloudNativeCon Europe, March 19 – 22, 2024 in Paris, France. Join the Docker “pit crew” at Booth #J3 for an incredible racing experience, new product demos, and limited-edition SWAG. 

Meet us at our KubeCon booth, sessions, and events to learn about the latest trends in AI productivity and best practices in cloud-native development with Docker. At our KubeCon booth (#J3), we’ll show you how building in the cloud accelerates development and simplifies multi-platform builds with a side-by-side demo of Docker Build Cloud. Learn how Docker and Testcontainers Cloud integrate seamlessly into your testing workflow to improve the quality and speed of application delivery.

It’s not all work, though — join us at the booth for our Megennis Motorsport Racing experience and try to beat the best!

Take advantage of this opportunity to connect with the Docker team, learn from the experts, and contribute to the ever-evolving cloud-native landscape. Let’s shape the future of cloud-native technologies together at KubeCon!

Deep dive sessions from Docker experts

Is Your Image Really Distroless? — Docker software engineer Laurent Goderre will dive into the world of “distroless” Docker images on Wednesday, March 20. In this session, Goderre will explain the significance of separating build-time and run-time dependencies to enhance container security and reduce vulnerabilities. He’ll also explore strategies for configuring runtime environments without compromising security or functionality. Don’t miss this must-attend session for KubeCon attendees keen on fortifying their Docker containers.

Simplified Inner and Outer Cloud Native Developer Loops — Docker Staff Community Relations Manager Oleg Šelajev and Diagrid Customer Success Engineer Alice Gibbons tackle the challenges of developer productivity in cloud-native development. On Wednesday, March 20, they will present tools and practices to bridge the gap between development and production environments, demonstrating how a unified approach can streamline workflows and boost efficiency across the board.

Engage, learn, and network at these events

Security Soiree: Hands-on cloud-native security workshop and party — Join Sysdig, Snyk, and Docker on March 19 for cocktails, team photos, music, prizes, and more at the Security Soiree. Listen to a compelling panel discussion led by industry experts, including Docker’s Director of Security, Risk & Trust, Rachel Taylor, followed by an evening of networking and festivities. Get tickets to secure your invitation.

Docker Meetup at KubeCon: Development & data productivity in the age of AI — Join us at our meetup during KubeCon on March 21 and hear insights from Docker, Pulumi, Tailscale, and New Relic. This networking mixer at Tonton Becton Restaurant promises candid discussions on enhancing developer productivity with the latest AI and data technologies. Reserve your spot now for an evening of casual conversation, drinks, and delicious appetizers.

See you March 19 – 22 at KubeCon + CloudNativeCon Europe

We look forward to seeing you in Paris — safe travels and prepare for an unforgettable experience!

Learn more

New to Docker? Create an account. 

Learn about Docker Build Cloud.

Subscribe to the Docker Newsletter.

Read about what rolled out in Docker Desktop 4.27, including synchronized file shares, Docker Init GA, a private marketplace for extensions, Moby 25, support for Testcontainers with ECI, Docker Build Cloud, and Docker Debug Beta.

Source: https://blog.docker.com/feed/

Are Containers Only for Microservices? Myth Debunked

In the ever-evolving software delivery landscape, containerization has emerged as a transformative force, reshaping how organizations build, test, deploy, and manage their applications. 

Whether you are maintaining a monolithic legacy system, navigating the complexities of Service-Oriented Architecture (SOA), or orchestrating your digital strategy around application programming interfaces (APIs), containerization offers a pathway to increased efficiency, resilience, and agility. 

In this post, we’ll debunk the myth that containerization is solely the domain of microservices by exploring its applicability and advantages across different architectural paradigms. 

Containerization across architectures

Although containerization is commonly associated with microservices architecture because of its agility and scalability, the potential extends far beyond, offering compelling benefits to a variety of architectural styles. From the tightly integrated components of monolithic applications to the distributed nature of SOA and the strategic approach of API-led connectivity, containerization stands as a universal tool, adaptable and beneficial across the board.

Beyond the immediate benefits of improved resource utilization, faster deployment cycles, and streamlined maintenance, the true value of containerization lies in its ability to ensure consistent application performance across varied environments. This consistency is a cornerstone for reliability and efficiency, pivotal in today’s fast-paced software delivery demands.

Here, we will provide examples of how this technology can be a game-changer for your digital strategy, regardless of your adopted style. Through this exploration, we invite technology leaders and executives to broaden their perspective on containerization, seeing it not just as a tool for one architectural approach but as a versatile ally in the quest for digital excellence.

1. Event-driven architecture

Event-driven architecture (EDA) represents a paradigm shift in how software components interact, pivoting around the concept of events — such as state changes or specific action occurrences — as the primary conduit for communication. This architectural style fosters loose coupling, enabling components to operate independently and react asynchronously to events, thereby augmenting system flexibility and agility. EDA’s intrinsic support for scalability, by allowing components to address fluctuating workloads independently, positions it as an ideal candidate for handling dynamic system demands.

Within the context of EDA, containerization emerges as a critical enabler, offering a streamlined approach to encapsulate applications alongside their dependencies. This encapsulation guarantees that each component of an event-driven system functions within a consistent, isolated environment — a crucial factor when managing components with diverse dependency requirements. Containers’ scalability becomes particularly advantageous in EDA, where fluctuating event volumes necessitate dynamic resource allocation. By deploying additional container instances in response to increased event loads, systems maintain high responsiveness levels.

Moreover, containerization amplifies the deployment flexibility of event-driven components, ensuring consistent event generation and processing across varied infrastructures (Figure 1). This adaptability facilitates the creation of agile, scalable, and portable architectures, underpinning the deployment and management of event-driven components with a robust, flexible infrastructure. Through containerization, EDA systems achieve enhanced operational efficiency, scalability, and resilience, embodying the principles of modern, agile application delivery.

Figure 1: Event-driven architecture.

2. API-led architecture

API-led connectivity represents a strategic architectural approach focused on the design, development, and management of APIs to foster seamless connectivity and data exchange across various systems, applications, and services within an organization (Figure 2). This methodology champions a modular and scalable framework ideal for the modern digital enterprise.

The principles of API-led connectivity — centering on system, process, and experience APIs — naturally harmonize with the benefits of containerization. By encapsulating each API within its container, organizations can achieve unparalleled modularity and scalability. Containers offer an isolated runtime environment for each API, ensuring operational independence and eliminating the risk of cross-API interference. This isolation is critical, as it guarantees that modifications or updates to one API can proceed without adversely affecting others, which is a cornerstone of maintaining a robust API-led ecosystem.

Moreover, the dual advantages of containerization — ensuring consistent execution environments and enabling easy scalability — align perfectly with the goals of API-led connectivity. This combination not only simplifies the deployment and management of APIs across diverse environments but also enhances the resilience and flexibility of the API infrastructure. Together, API-led connectivity and containerization empower organizations to develop, scale, and manage their API ecosystems more effectively, driving efficiency and innovation in application delivery.

Figure 2: API-led architecture.

3. Service-oriented architecture

Service-oriented architecture (SOA) is a design philosophy that emphasizes the use of discrete services within an architecture to provide business functionalities. These services communicate through well-defined interfaces and protocols, enabling interoperability and facilitating the composition of complex applications from independently developed services. SOA’s focus on modularity and reusability makes it particularly amenable to the benefits offered by containerization.

Containerization brings a new dimension of flexibility and efficiency to SOA by encapsulating these services into containers. This encapsulation provides an isolated environment for each service, ensuring consistent execution regardless of the deployment environment. Such isolation is crucial for maintaining the integrity and availability of services, particularly in complex, distributed architectures where services must communicate across different platforms and networks.

Moreover, containerization enhances the scalability and manageability of SOA-based systems. Containers can be dynamically scaled to accommodate varying loads, enabling organizations to respond swiftly to changes in demand. This scalability, combined with the ease of deployment and rollback provided by container orchestration platforms, supports the agile delivery and continuous improvement of services.

The integration of containerization with SOA essentially results in a more resilient, scalable, and manageable architecture. It enables organizations to leverage the full potential of SOA by facilitating faster deployment, enhancing performance, and simplifying the lifecycle management of services. Together, SOA and containerization create a powerful framework for building flexible, future-proof applications that can adapt to the evolving needs of the business.

4. Monolithic applications

Contrary to common perceptions, monolithic applications stand to gain significantly from containerization. This technology can encapsulate the full application stack — including the core application, its dependencies, libraries, and runtime environment within a container. This encapsulation ensures uniformity across various stages of the development lifecycle, from development and testing to production, effectively addressing the infamous ‘it works on my machine’ challenge. Such consistency streamlines the deployment process and simplifies scaling efforts, which is particularly beneficial for applications that need to adapt quickly to changing demands.
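To make this concrete, here is a minimal sketch of what containerizing such a monolith can look like, assuming a self-contained Java application; the base image, paths, and port are illustrative assumptions, not from the original post:

# Illustrative Dockerfile for a monolithic Java application
# (base image, jar path, and port are assumptions)
FROM eclipse-temurin:17-jre-jammy
WORKDIR /app
# The whole application stack ships as a single artifact
COPY target/monolith.jar ./monolith.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "monolith.jar"]

The same image then runs unchanged on a developer laptop, in CI, and in production.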

Moreover, containerization fosters enhanced collaboration among development teams by standardizing the operational environment, thereby minimizing discrepancies that typically arise from working in divergent development environments. This uniformity is invaluable in accelerating development cycles and improving product reliability.

Perhaps one of the most strategic benefits of containerization for monolithic architectures is the facilitation of a smoother transition to microservices. By containerizing specific components of the monolith, organizations can incrementally decompose their application into more manageable, loosely coupled microservices. This approach not only mitigates the risks associated with a full-scale migration but also allows teams to gradually adapt to microservices’ architectural patterns and principles.

Containerization presents a compelling proposition for monolithic applications, offering a pathway to modernization that enhances deployment efficiency, operational consistency, and the flexibility to evolve toward a microservices-oriented architecture. Through this lens, containerization is not just a tool for new applications but a bridge that allows legacy applications to step into the future of software development.

Conclusion

The journey of modern software development, with its myriad architectural paths, is markedly enhanced by the adoption of containerization. This technology transcends architectural boundaries, bringing critical advantages such as isolation, scalability, and portability to the forefront of application delivery. Whether your environment is monolithic, service-oriented, event-driven, or API-led, containerization aligns perfectly with the ethos of modern, distributed, and cloud-native applications. 

By embracing the adaptability and transformative potential of containerization, you can open your architectures to a future where agility, efficiency, and resilience are not just aspirations but achievable realities. Begin your transformative journey with Docker Desktop today and redefine what’s possible within the bounds of your existing architectural framework.

Learn more

How to Stuff Monolithic Applications Into a Container (DockerCon 2023) 

Subscribe to the Docker Newsletter.

Explore Docker Desktop.

Join the Docker community.

Skill up with Docker training.

Source: https://blog.docker.com/feed/

Filter Out Security Vulnerability False Positives with VEX

Development and security teams are becoming overwhelmed by an ever-growing backlog of security vulnerabilities requiring their attention. Although these vulnerability insights are essential to safeguard organizations and their customers from potential threats, the findings are often bloated with a high volume of noise, especially from false positives. 

The 2022 Cloud Security Alert Fatigue Report states that more than 40% of alerts from security tools are false positives, which means that teams can be inundated with vulnerabilities that pose no actual risk. The impact of these false positives includes delayed releases, wasted productivity, internal friction, burnout, and eroding customer trust, all of which accumulate significant financial loss for organizations.

How can developers and security professionals cut through the noise so that they can more effectively manage vulnerabilities and focus on what truly matters? That is where the Vulnerability Exploitability eXchange (VEX) comes in.

In this article, we’ll explain how VEX works with Docker Scout and walk through how you can get started. 

What is VEX?

VEX, developed by the National Telecommunications and Information Administration (NTIA), is a specification aimed at capturing and conveying information about exploitable vulnerabilities within a product. Among other details, the framework classifies vulnerability status into four key categories, forming the core of a VEX document:

Not affected — No remediation is required regarding this vulnerability.

Affected — Actions are recommended to remediate or address this vulnerability.

Fixed — These product versions contain a fix for the vulnerability.

Under Investigation — Whether these product versions are affected by the vulnerability is still unknown. An update will be provided in a later release.

By ingesting the context from VEX, organizations can distinguish the noise from the confirmed exploitable vulnerabilities to get a more accurate picture of their attack surface and bring focus to their remediation activities. For example, vulnerabilities assigned a “not affected” status in the VEX document may potentially be ruled out as false positives and hidden from tool outputs to minimize distraction.
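For illustration, a minimal OpenVEX-style document marking a CVE as “not affected” might look like the following sketch (the CVE ID, product identifier, author, and timestamps are hypothetical):

{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/2024-0001",
  "author": "Example Maintainer <maintainer@example.com>",
  "timestamp": "2024-03-01T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2024-12345" },
      "products": [
        { "@id": "pkg:docker/example/app@1.0.0" }
      ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}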

Although the practice of documenting software vulnerability context is not novel per se, VEX itself represents an advancement over solutions that have traditionally ruled over the vulnerability management processes, such as emails, spreadsheets, Confluence pages, and Jira tickets. 

What sets VEX apart are its standardized and machine-readable features, which make it much better suited for integration and automation within an organization’s vulnerability ecosystem, resulting in a more streamlined and effective approach to vulnerability management without unnecessary resource drain. However, to yield these results — repeatedly and at scale — the technology landscape surrounding VEX must first evolve to deliver tools and experiences that can successfully put VEX data into action in verifiable, automatable, and meaningful ways. 

For more information on VEX, refer to the one-page summary (PDF) by NTIA.

Want to get started with VEX? Docker can help

The implementation of VEX is still nascent in the industry and widespread utilization and adoption will be key in unleashing its full potential. Docker, too, is early in its VEX journey, but read on for how we’re helping our users get started.

Use Docker Scout with local VEX documents

If you want to try how VEX works with Docker Scout, the quickest way to get up and running is to create a local VEX document with the tool of your choice, such as vexctl, and incorporate it into your image analysis with the --vex-location flag for the docker scout cves command.

$ mkdir -p /usr/local/share/vex
$ vexctl create [options] --file /usr/local/share/vex/example.vex.json
$ docker scout cves --vex-location /usr/local/share/vex <image-reference>
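In the create step above, [options] stands in for the actual statement details. A hedged example of filling them in (the product identifier and CVE are hypothetical; run vexctl create --help for the exact flags):

$ vexctl create \
    --product="pkg:docker/example/app@1.0.0" \
    --vuln="CVE-2024-12345" \
    --status="not_affected" \
    --justification="vulnerable_code_not_in_execute_path" \
    --file /usr/local/share/vex/example.vex.json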

Embed VEX documents as attestations

The new docker scout attestation add command lets you attach VEX documents to images as in-toto attestations, which means VEX statements are available on and distributed together with the image.

$ docker scout attestation add \
    --file /usr/local/share/vex/example.vex.json \
    --predicate-type https://openvex.dev/ns/v0.2.0 \
    <image>

Docker Scout automatically incorporates any VEX attestations into the results when you analyze images on the CLI. It also works with attestations signed with Sigstore and attached using vexctl attest --attest --sign.

Automatically create VEX documents with Sysdig

The Sysdig integration for Docker Scout detects what packages are being loaded into memory in your runtime environment and automatically creates VEX statements for filtering out non-applicable CVEs.

Try it out

We are working on embedding the above capability and more into Docker Scout so that users can effortlessly generate and apply VEX to vanquish their false positives for good. Simultaneously, we are exploring VEX for Docker Official Images to allow upstream maintainers to indicate non-applicable CVEs in their images, which can improve tooling (e.g., scanner) accuracy if VEX is taken into account. 

In the meantime, if you are curious about how this all works in practice, we’ve created a guide that walks you through the steps of creating a VEX document, applying it to image analysis, and creating VEX attestations. 

Learn more

Read the guide: Suppress image vulnerabilities with VEX.

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Azure Container Registry and Docker Hub: Connecting the Dots with Seamless Authentication and Artifact Cache

By leveraging the wide array of public images available on Docker Hub, developers can accelerate development workflows, enhance productivity, and, ultimately, ship scalable applications that run like clockwork. When building with public content, acknowledging the potential operational risks associated with using that content without proper authentication is crucial. 

In this post, we will describe best practices for mitigating these risks and ensuring the security and reliability of your containers.

Import public content locally

There are several advantages to importing public content locally. Doing so improves the availability and reliability of your public content pipeline and protects you from failed CI builds. By importing your public content, you can easily validate, verify, and deploy images to help run your business more reliably.

For more information on this best practice, check out the Open Container Initiative’s guide on Consuming Public Content.
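As a sketch, assuming a registry named myregistry, importing a Docker Hub image into ACR with the Azure CLI could look like this (authenticating the import with Docker Hub credentials avoids anonymous rate limits):

$ az acr import \
    --name myregistry \
    --source docker.io/library/nginx:latest \
    --image nginx:latest \
    --username <docker-hub-username> \
    --password <docker-hub-token>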

Configure Artifact Cache to consume public content

Another best practice is to configure Artifact Cache to consume public content. Azure Container Registry’s (ACR) Artifact Cache feature allows you to cache your container artifacts in your own Azure Container Registry, even for private networks. This approach limits the impact of rate limits and dramatically increases pull reliability when combined with geo-replicated ACR, allowing you to pull artifacts from the region closest to your Azure resource. 

Additionally, ACR offers various security features, such as private networks, firewall configuration, service principals, and more, which can help you secure your container workloads. For complete information on using public content with ACR Artifact Cache, refer to the Artifact Cache technical documentation.
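As an illustrative sketch only (flag names evolve, so consult the Artifact Cache documentation for the current syntax), creating a cache rule for a Docker Hub repository with the Azure CLI might look like:

$ az acr cache create \
    --registry myregistry \
    --name nginx-cache-rule \
    --source-repo docker.io/library/nginx \
    --target-repo cached/nginx \
    --cred-set dockerhub-creds

Here, dockerhub-creds is a hypothetical credential set created beforehand to authenticate pulls against Docker Hub.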

Authenticate pulls with public registries

We recommend authenticating your pulls from Docker Hub using your subscription credentials. Docker Hub offers developers the ability to authenticate when building with public library content. Authenticated users also have access to pull content directly from private repositories. For more information, visit the Docker subscriptions page. Microsoft Artifact Cache also supports authenticating with other public registries, providing an additional layer of security for your container workloads.

Following these best practices when using public content from Docker Hub can help mitigate security and reliability risks in your development and operational cycles. By importing public content locally, configuring Artifact Cache, and setting up preferred authentication methods, you can ensure your container workloads are secure and reliable.

Learn more about securing containers

Try Docker Scout to assess your images for security risks.

Looking to get up and running? Use our Quickstart guide.

Have questions? The Docker community is here to help.

Subscribe to the Docker Newsletter to stay updated with Docker news and announcements.

Additional resources for improving container security for Microsoft and Docker customers

Visit Microsoft Learn.

Read the introduction to Microsoft’s framework for securing containers.

Learn how to manage public content with Azure Container Registry.

Source: https://blog.docker.com/feed/

Revolutionize Your CI/CD Pipeline: Integrating Testcontainers and Bazel

One of the challenges in modern software development is being able to release software often and with confidence. This can only be achieved when you have a good CI/CD setup in place that can test your software and release it with minimal or even no human intervention. But modern software applications also use a wide range of third-party dependencies and often need to run on multiple operating systems and architectures. 

In this post, I will explain how the combination of Bazel and Testcontainers helps developers build and release software by providing a hermetic build system.

Using Bazel and Testcontainers together

Bazel is an open source build tool developed by Google to build and test multi-language, multi-platform projects. Several big IT companies have adopted monorepos for various reasons, such as:

Code sharing and reusability 

Cross-project refactoring 

Consistent builds and dependency management 

Versioning and release management

With its multi-language support and focus on reproducible builds, Bazel shines in building such monorepos.

A key concept of Bazel is hermeticity, which means that when all inputs are declared, the build system can know when an output needs to be rebuilt. This approach brings determinism: given the same input source code and product configuration, the build system will always return the same output, because the build is isolated from changes to the host system.

Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers make it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.

Using Bazel and Testcontainers together offers the following features:

Bazel can build projects using different programming languages like C, C++, Java, Go, Python, Node.js, etc.

Bazel can dynamically provision the isolated build/test environment with desired language versions.

Testcontainers can provision the required dependencies as Docker containers so that your test suite is self-contained. You don’t have to manually pre-provision the necessary services, such as databases, message brokers, and so on. 

All the test dependencies can be expressed through code using Testcontainers APIs, and you avoid the risk of breaking hermeticity by sharing such resources between tests.

Let’s see how we can use Bazel and Testcontainers to build and test a monorepo with modules using different languages. We are going to explore a monorepo with a customers module, which uses Java, and a products module, which uses Go. Both modules interact with relational databases (PostgreSQL) and use Testcontainers for testing.

Getting started with Bazel

To begin, let’s get familiar with Bazel’s basic concepts. The best way to install Bazel is by using Bazelisk. Follow the official installation instructions to install Bazelisk. Once it’s installed, you should be able to run the bazel version command:

$ brew install bazelisk
$ bazel version

Bazelisk version: v1.12.0
Build label: 7.0.0

Before you can build a project using Bazel, you need to set up its workspace. 

A workspace is a directory that holds your project’s source files and contains the following files:

The WORKSPACE.bazel file, which identifies the directory and its contents as a Bazel workspace and lives at the root of the project’s directory structure.

A MODULE.bazel file, which declares dependencies on Bazel plugins (called “rulesets”).

One or more BUILD (or BUILD.bazel) files, which describe the sources and dependencies for different parts of the project. A directory within the workspace that contains a BUILD file is a package.

In the simplest case, a MODULE.bazel file can be an empty file, and a BUILD file can contain one or more generic targets as follows:

genrule(
    name = "foo",
    outs = ["foo.txt"],
    cmd_bash = "sleep 2 && echo 'Hello World' >$@",
)

genrule(
    name = "bar",
    outs = ["bar.txt"],
    cmd_bash = "sleep 2 && echo 'Bye bye' >$@",
)

Here, we have two targets: foo and bar. Now we can build those targets using Bazel as follows:

$ bazel build //:foo   <- builds only the foo target, // indicates root workspace
$ bazel build //:bar   <- builds only the bar target
$ bazel build //...    <- builds all targets

Configuring the Bazel build in a monorepo

We are going to explore using Bazel in the testcontainers-bazel-demo repository. This repository is a monorepo with a customers module using Java and a products module using Go. Its structure looks like the following:

testcontainers-bazel-demo
|____customers
| |____BUILD.bazel
| |____src
|____products
| |____go.mod
| |____go.sum
| |____repo.go
| |____repo_test.go
| |____BUILD.bazel
|____MODULE.bazel

Bazel uses different rules for building different types of projects: rules_java for Java packages, rules_go for Go packages, rules_python for Python packages, and so on.

We may also need to load additional rules providing additional features. For building Java packages, we may want to use external Maven dependencies and use JUnit 5 for running tests. In that case, we should load rules_jvm_external to be able to use Maven dependencies. 

We are going to use Bzlmod, the new external dependency subsystem, to load the external dependencies. In the MODULE.bazel file, we can load the additional rules_jvm_external and contrib_rules_jvm as follows:

bazel_dep(name = "contrib_rules_jvm", version = "0.21.4")
bazel_dep(name = "rules_jvm_external", version = "5.3")

maven = use_extension("@rules_jvm_external//:extensions.bzl", "maven")
maven.install(
    name = "maven",
    artifacts = [
        "org.postgresql:postgresql:42.6.0",
        "ch.qos.logback:logback-classic:1.4.6",
        "org.testcontainers:postgresql:1.19.3",
        "org.junit.platform:junit-platform-launcher:1.10.1",
        "org.junit.platform:junit-platform-reporting:1.10.1",
        "org.junit.jupiter:junit-jupiter-api:5.10.1",
        "org.junit.jupiter:junit-jupiter-params:5.10.1",
        "org.junit.jupiter:junit-jupiter-engine:5.10.1",
    ],
)
use_repo(maven, "maven")

Let’s understand the above configuration in the MODULE.bazel file:

We have loaded the rules_jvm_external rules from Bazel Central Registry and loaded extensions to use third-party Maven dependencies.

We have configured all our Java application dependencies using Maven coordinates in the maven.install artifacts configuration.

We are loading the contrib_rules_jvm rules, which support running JUnit 5 tests as a suite.

Now, we can run the @maven//:pin program to create a JSON lockfile of the transitive dependencies, in a format that rules_jvm_external can use later:

bazel run @maven//:pin

Rename the generated file (e.g., rules_jvm_external~5.3~maven~maven_install.json, where the version segment matches the rules_jvm_external version in use) to maven_install.json. Now update the MODULE.bazel to reflect that we pinned the dependencies.

Add a lock_file attribute to the maven.install() and update the use_repo call to also expose the unpinned_maven repository used to update the dependencies:

maven.install(
    # ... artifacts as declared earlier ...
    lock_file = "//:maven_install.json",
)

use_repo(maven, "maven", "unpinned_maven")

Now, when you update any dependencies, you can run the following command to update the lock file:

bazel run @unpinned_maven//:pin

Let’s configure our build targets in the customers/BUILD.bazel file, as follows:

load(
    "@bazel_tools//tools/jdk:default_java_toolchain.bzl",
    "default_java_toolchain",
    "DEFAULT_TOOLCHAIN_CONFIGURATION",
    "BASE_JDK9_JVM_OPTS",
    "DEFAULT_JAVACOPTS",
)

default_java_toolchain(
    name = "repository_default_toolchain",
    configuration = DEFAULT_TOOLCHAIN_CONFIGURATION,
    java_runtime = "@bazel_tools//tools/jdk:remotejdk_17",
    jvm_opts = BASE_JDK9_JVM_OPTS + ["--enable-preview"],
    javacopts = DEFAULT_JAVACOPTS + ["--enable-preview"],
    source_version = "17",
    target_version = "17",
)

load("@rules_jvm_external//:defs.bzl", "artifact")
load("@contrib_rules_jvm//java:defs.bzl", "JUNIT5_DEPS", "java_test_suite")

java_library(
name = "customers-lib",
srcs = glob(["src/main/java/**/*.java"]),
deps = [
artifact("org.postgresql:postgresql"),
artifact("ch.qos.logback:logback-classic"),
],
)

java_library(
name = "customers-test-resources",
resources = glob(["src/test/resources/**/*"]),
)

java_test_suite(
name = "customers-lib-tests",
srcs = glob(["src/test/java/**/*.java"]),
runner = "junit5",
test_suffixes = [
"Test.java",
"Tests.java",
],
runtime_deps = JUNIT5_DEPS,
deps = [
":customers-lib",
":customers-test-resources",
artifact("org.junit.jupiter:junit-jupiter-api"),
artifact("org.junit.jupiter:junit-jupiter-params"),
artifact("org.testcontainers:postgresql"),
],
)

Let’s understand this BUILD configuration:

We have loaded default_java_toolchain and then configured the Java version to 17.

We have configured a java_library target with the name customers-lib that will build the production jar file.

We have defined a java_test_suite target with the name customers-lib-tests to define our test suite, which will execute all the tests. We also configured the dependencies on the other target customers-lib and external dependencies.

We also defined another target with the name customers-test-resources to add non-Java sources (e.g., logging config files) to our test suite target as a dependency.

In the customers package, we have a CustomerService class that stores and retrieves customer details in a PostgreSQL database. And we have CustomerServiceTest that tests CustomerService methods using Testcontainers. Take a look at the GitHub repository for the complete code.

Note: You can use Gazelle, which is a Bazel build file generator, to generate the BUILD.bazel files instead of manually writing them.

Running Testcontainers tests

For running Testcontainers tests, we need a Testcontainers-supported container runtime. Let’s assume you have Docker installed locally via Docker Desktop.

Now, with our Bazel build configuration, we are ready to build and test the customers package:

# to run all build targets of customers package
$ bazel build //customers/...

# to run a specific build target of customers package
$ bazel build //customers:customers-lib

# to run all test targets of customers package
$ bazel test //customers/...

# to run a specific test target of customers package
$ bazel test //customers:customers-lib-tests

When you run the build for the first time, it will take time to download the required dependencies and then execute the targets. But, if you try to build or test again without any code or configuration changes, Bazel will not re-run the build/test again and will show the cached result. Bazel has a powerful caching mechanism that will detect code changes and run only the targets that are necessary to run.

While using Testcontainers, you define the required dependencies as part of code using Docker image names along with tags, such as postgres:16. So, unless you change the code (e.g., the Docker image name or tag), Bazel will cache the test results.

Similarly, we can use rules_go and Gazelle for configuring Bazel build for Go packages. Take a look at the MODULE.bazel and products/BUILD.bazel files to learn more about configuring Bazel in a Go package.

As mentioned earlier, we need a Testcontainers-supported container runtime for running Testcontainers tests. Installing Docker on complex CI platforms might be challenging, and you might need to use a complex Docker-in-Docker setup. Additionally, some Docker images might not be compatible with the operating system architecture (e.g., Apple M1). 

Testcontainers Cloud solves these problems by eliminating the need to have Docker on the localhost or CI runners, transparently running the containers on cloud VMs instead.

Here is an example of running the Testcontainers tests using Bazel on Testcontainers Cloud using GitHub Actions:

name: CI

on:
  push:
    branches:
      - '**'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure Testcontainers Cloud
        uses: atomicjar/testcontainers-cloud-setup-action@main
        with:
          wait: true
          token: ${{ secrets.TC_CLOUD_TOKEN }}

      - name: Cache Bazel
        uses: actions/cache@v3
        with:
          path: |
            ~/.cache/bazel
          key: ${{ runner.os }}-bazel-${{ hashFiles('.bazelversion', '.bazelrc', 'WORKSPACE', 'WORKSPACE.bazel', 'MODULE.bazel') }}
          restore-keys: |
            ${{ runner.os }}-bazel-

      - name: Build and Test
        run: bazel test --test_output=all //...

GitHub Actions runners already come with Bazelisk installed, so we can use Bazel out of the box. We have configured the TC_CLOUD_TOKEN environment variable through Secrets and started the Testcontainers Cloud agent. If you check the build logs, you can see that the tests are executed using Testcontainers Cloud.

Summary

We have shown how to use the Bazel build system to build and test monorepos with multiple modules using different programming languages. Combined with Testcontainers, you can make the builds self-contained and hermetic.

Although Bazel and Testcontainers help us have a self-contained build, we need to take extra measures to make it a hermetic build: 

Bazel can be configured to use a specific version of SDK, such as JDK 17, Go 1.20, etc., so that builds always use the same version instead of what is installed on the host machine. 

For Testcontainers tests, using the Docker tag latest for container dependencies may result in non-deterministic behavior. Also, some Docker image publishers overwrite existing images using the same tag. To make the build/test deterministic, always reference Docker images by digest so that builds and tests use the exact same image every time, which gives reproducible and hermetic builds (see the example after this list).

Using Testcontainers Cloud for running Testcontainers tests reduces the complexity of Docker setup and gives a deterministic container runtime environment.
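As an example of the digest pinning mentioned above, instead of a floating tag like postgres:16, reference the image by digest; the digest below is a placeholder, not a real value:

postgres:16@sha256:<digest-of-the-exact-image-version>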

Visit the Testcontainers website to learn more, and get started with Testcontainers Cloud by creating a free account.

Learn more

Visit the Testcontainers website.

Get started with Testcontainers Cloud by creating a free account.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Docker Desktop 4.28: Enhanced File Sharing and Security Plus Refined Builds View in Docker Build Cloud

Docker Desktop 4.28 introduces updates to file-sharing controls, focusing on security and administrative ease. Responding to feedback from our business users, this update brings refined file-sharing capabilities and path allow-listing, aiming to simplify management and enhance security for IT administrators and users alike.

Along with our investments in bringing access to cloud resources within the local Docker Desktop experience with Docker Build Cloud Builds view, this release provides a more efficient and flexible platform for development teams.

Introducing enhanced file-sharing controls in Docker Desktop Business 

As we continue to innovate and elevate the Docker experience for our business customers, we’re thrilled to unveil significant upgrades to Docker Desktop’s Hardened Desktop feature. Recognizing the importance of administrative control over Docker Desktop settings, we’ve listened to your feedback and are introducing enhancements prioritizing security and ease of use.

For IT administrators and non-admin users, Docker now offers the much-requested capability to specify and manage file-sharing options directly via Settings Management (Figure 1). This includes:

Selective file sharing: Choose your preferred file-sharing implementation directly from Settings > General, where you can select between VirtioFS, gRPC FUSE, or osxfs. VirtioFS is only available for macOS versions 12.5 and above and is turned on by default.

Path allow-listing: Precisely control which paths users can share files from, enhancing security and compliance across your organization.

Figure 1: Display of Docker Desktop settings enhanced file-sharing settings.
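As an illustrative sketch (the key names and structure are an assumption based on Docker’s Settings Management documentation, and the paths are examples; consult the current docs before use), an admin-settings.json enforcing a file-sharing allow-list might look like:

{
  "configurationFileVersion": 2,
  "filesharingAllowedDirectories": {
    "value": ["/Users", "/Volumes/projects"],
    "locked": true
  }
}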

We’ve also reimagined the Settings > Resources > File Sharing interface to enhance your interaction with Docker Desktop (Figure 2). You’ll notice:

Clearer error messaging: Quickly understand and rectify issues with enhanced error messages.

Intuitive action buttons: Experience a smoother workflow with redesigned action buttons, making your Docker Desktop interactions as straightforward as possible.

Figure 2: Displaying settings management in Docker Desktop to notify business subscribers of their access rights.

These enhancements are not just about improving current functionalities; they’re about unlocking new possibilities for your Docker experience. From increased security controls to a more navigable interface, every update is designed with your efficiency in mind.

Refining development with Docker Desktop’s Builds view update 

Docker Desktop’s previous update introduced Docker Build Cloud integration, aimed at reducing build times and improving build management. In this release, we’re landing incremental updates that refine the Builds view, making it easier and faster to manage your builds.

New in Docker Desktop 4.28:

Dedicated tabs: Separates active from completed builds for better organization (Figure 3).

Build insights: Displays build duration and cache steps, offering more clarity on the build process.

Reliability fixes: Resolves issues with updates for a more consistent experience.

UI improvements: Updates the empty state view for a clearer dashboard experience (Figure 4).

These updates are designed to streamline the build management process within Docker Desktop, leveraging Docker Build Cloud for more efficient builds.

Figure 3: Dedicated tabs for Build history vs. Active builds to allow more space for inspecting your builds.

Figure 4: Updated view supporting empty state — no Active builds.

To explore how Docker Desktop and Docker Build Cloud can optimize your development workflow, read our Docker Build Cloud blog post. Experience the latest Builds view update to further enrich your local, hybrid, and cloud-native development journey.

These Docker Desktop updates support improved platform security and a better user experience. By introducing more detailed file-sharing controls, we aim to provide developers with a more straightforward administration experience and secure environment. As we move forward, we remain dedicated to refining Docker Desktop to meet the evolving needs of our users and organizations, enhancing their development workflows and agility to innovate.

Join the conversation and make your mark

Dive into the dialogue and contribute to the evolution of Docker Desktop. Use our feedback form to share your thoughts and let us know how to improve the Hardened Desktop features. Your input directly influences the development roadmap, ensuring Docker Desktop meets and exceeds our community and customers’ needs.

Learn more

Authenticate and update to receive the newest Docker Desktop features per your subscription level.

New to Docker? Create an account. 

Read our latest blog on synchronized file shares.

Read about what rolled out in Docker Desktop 4.27, including synchronized file shares, Docker Init GA, a private marketplace for extensions, Moby 25, support for Testcontainers with ECI, Docker Build Cloud, and Docker Debug Beta.

Learn about Docker Build Cloud.

Subscribe to the Docker Newsletter.

Source: https://blog.docker.com/feed/

How to Use Testcontainers on Jenkins CI

Releasing software often and with confidence relies on a strong continuous integration and continuous delivery (CI/CD) process that includes the ability to automate tests. Jenkins offers an open source automation server that facilitates such release of software projects.

In this article, we will explore how you can run tests based on the open source Testcontainers framework in a Jenkins pipeline using Docker and Testcontainers Cloud. 

Jenkins, which streamlines the development process by automating the building, testing, and deployment of code changes, is widely adopted in the DevOps ecosystem. It supports a vast array of plugins, enabling integration with various tools and technologies, making it highly customizable to meet specific project requirements.

Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers makes it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.

Testcontainers also provides support for many popular programming languages, including Java, Go, .NET, Node.js, Python, and more. This article will show how to test a Java Spring Boot application (testcontainers-showcase) using Testcontainers in a Jenkins pipeline. Please fork the repository into your GitHub account. To run Testcontainers-based tests, a Testcontainers-supported container runtime, like Docker, needs to be available to agents.

Note: As Jenkins CI servers are mostly run on Linux machines, the following configurations are tested on a Linux machine only.

Docker containers as Jenkins agents

Let’s see how to use dynamic Docker container-based agents. To be able to use Docker containers as agents, install the Docker Pipeline plugin. 

Now, let’s create a file with name Jenkinsfile in the root of the project with the following content:

pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
            args '--network host -u root -v /var/run/docker.sock:/var/run/docker.sock'
        }
    }

    triggers { pollSCM 'H/2 * * * *' } // poll every 2 mins

    stages {
        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}

We are using the eclipse-temurin:17.0.9_9-jdk-jammy Docker container as an agent to run the builds for this pipeline. Note that we are mapping the host’s Unix Docker socket as a volume with root user permissions to make it accessible to the agent, but this can potentially be a security risk.

Add the Jenkinsfile and push the changes to the Git repository.

Now, go to the Jenkins Dashboard and select New Item to create the pipeline. Follow these steps:

Enter testcontainers-showcase as pipeline name.

Select Pipeline as job type.

Select OK.

Under Pipeline section:

Select Definition: Pipeline script from SCM.

SCM: Git.

Repository URL: https://github.com/YOUR_GITHUB_USERNAME/testcontainers-showcase.git. Replace YOUR_GITHUB_USERNAME with your actual GitHub username.

Branches to build: Branch Specifier (blank for ‘any’): */main.

Script Path: Jenkinsfile.

Select Save.

Choose Build Now to trigger the pipeline for the first time.

The pipeline should run the Testcontainers-based tests successfully in a container-based agent using the Docker-out-of-Docker configuration (the host’s Docker socket mounted into the agent).

Kubernetes pods as Jenkins agents

While running Testcontainers-based tests on Kubernetes pods, you can run a Docker-in-Docker (DinD) container as a sidecar. To use Kubernetes pods as Jenkins agents, install the Kubernetes plugin.

Now you can create the Jenkins pipeline using Kubernetes pods as agents as follows:

def pod =
"""
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: worker
spec:
  serviceAccountName: jenkins
  containers:
    - name: java17
      image: eclipse-temurin:17.0.9_9-jdk-jammy
      resources:
        requests:
          cpu: "1000m"
          memory: "2048Mi"
      imagePullPolicy: Always
      tty: true
      command: ["cat"]
    - name: dind
      image: docker:dind
      imagePullPolicy: Always
      tty: true
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""
      securityContext:
        privileged: true
"""

pipeline {
    agent {
        kubernetes {
            yaml pod
        }
    }

    environment {
        DOCKER_HOST = 'tcp://localhost:2375'
        DOCKER_TLS_VERIFY = 0
    }

    stages {
        stage('Build and Test') {
            steps {
                container('java17') {
                    script {
                        sh "./mvnw verify"
                    }
                }
            }
        }
    }
}

Although we can use a Docker-in-Docker based configuration to make the Docker environment available to the agent, this setup also brings configuration complexities and security risks.

By volume mounting the host’s Docker Unix socket (Docker-out-of-Docker) with the agents, the agents have direct access to the host Docker engine.

When using the DooD approach, file sharing with bind mounts doesn’t work as expected, because the containerized app and the Docker engine operate in different contexts.

The Docker-in-Docker (DinD) approach requires the use of insecure privileged containers.

You can watch the Docker-in-Docker: Containerized CI Workflows presentation to learn more about the challenges of a Docker-in-Docker based CI setup.

This is where Testcontainers Cloud comes into the picture to make it easy to run Testcontainers-based tests more simply and reliably. 

By using Testcontainers Cloud, you don’t even need a Docker daemon running on the agent. Containers will be run in on-demand cloud environments so that you don’t need to use powerful CI agents with high CPU/memory for your builds.

Let’s see how to use Testcontainers Cloud with minimal setup and run Testcontainers-based tests.

Testcontainers Cloud-based setup

Testcontainers Cloud helps you run Testcontainers-based tests at scale by spinning up the dependent services as Docker containers on the cloud and having your tests connect to those services.

If you don’t have a Testcontainers Cloud account already, you can create an account and get a Service Account Token as follows:

Sign up for a Testcontainers Cloud account.

Once logged in, create an organization.

Navigate to the Testcontainers Cloud dashboard and generate a Service account (Figure 1).

Figure 1: Create a new Testcontainers Cloud service account.

To use Testcontainers Cloud, we need to start a lightweight testcontainers-cloud agent by passing TC_CLOUD_TOKEN as an environment variable.

You can store the TC_CLOUD_TOKEN value as a secret in Jenkins as follows:

From the Dashboard, select Manage Jenkins.

Under Security, choose Credentials.

You can create a new domain or use System domain.

Under Global credentials, select Add credentials.

Select Kind as Secret text.

Enter TC_CLOUD_TOKEN value in Secret.

Enter tc-cloud-token-secret-id as ID.

Select Create.

Next, you can update the Jenkinsfile as follows:

pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
        }
    }

    triggers { pollSCM 'H/2 * * * *' }

    stages {
        stage('TCC SetUp') {
            environment {
                TC_CLOUD_TOKEN = credentials('tc-cloud-token-secret-id')
            }
            steps {
                sh "curl -fsSL https://get.testcontainers.cloud/bash | sh"
            }
        }

        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}

We have set the TC_CLOUD_TOKEN environment variable using the value from the tc-cloud-token-secret-id credential we created, and we start a Testcontainers Cloud agent before running our tests.

Now if you commit and push the updated Jenkinsfile, then the pipeline will run the tests using Testcontainers Cloud. You should see log statements similar to the following indicating that the Testcontainers-based tests are using Testcontainers Cloud instead of the default Docker daemon.

14:45:25.748 [testcontainers-lifecycle-0] INFO org.testcontainers.DockerClientFactory – Connected to docker:
Server Version: 78+testcontainerscloud (via Testcontainers Desktop 1.5.5)
API Version: 1.43
Operating System: Ubuntu 20.04 LTS
Total Memory: 7407 MB

You can also leverage Testcontainers Cloud’s Turbo mode in conjunction with build tools that feature parallel run capabilities to run tests even faster.

In the case of Maven, you can use the -DforkCount=N system property to specify the degree of parallelization. For Gradle, you can specify the degree of parallelization using the maxParallelForks property.
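For instance, a minimal sketch of the Gradle (Groovy DSL) setting in build.gradle:

test {
    maxParallelForks = 4
}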

We can enable parallel execution of our tests using four forks in Jenkinsfile as follows:

stage('Build and Test') {
    steps {
        sh './mvnw verify -DforkCount=4'
    }
}

For more information, check out the article on parallelizing your tests with Turbo mode.

Conclusion

In this article, we have explored how to run Testcontainers-based tests on Jenkins CI using dynamic containers and Kubernetes pods as agents with Docker-out-of-Docker and Docker-in-Docker based configuration. 

Then we learned how to create a Testcontainers Cloud account and configure the pipeline to run tests using Testcontainers Cloud. We also explored leveraging Testcontainers Cloud Turbo mode combined with your build tool’s parallel execution capabilities. 

Although we have demonstrated this setup using a Java project as an example, Testcontainers libraries exist for other popular languages, too, and you can follow the same pattern of configuration to run your Testcontainers-based tests on Jenkins CI in Golang, .NET, Python, Node.js, etc.

Get started with Testcontainers Cloud by creating a free account at the website.

Learn more

Sign up for a Testcontainers Cloud account.

Watch the Docker-in-Docker: Containerized CI Workflows session from DockerCon 2023.

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

How to Use OpenPubkey to Solve Key Management via SSO

This post was contributed by BastionZero.

Giving people the ability to sign messages under their identity is extremely powerful. For instance, this functionality lets you SSH into servers, sign software artifacts, and create end-to-end encrypted communications under your single sign-on (SSO) identity.

The OpenPubkey protocol and open source project brings the power of digital signatures to both people and workloads without adding trusted parties. OpenPubkey is built on the OpenID Connect (OIDC) SSO protocol, which is supported by major identity providers, including Google, Microsoft, Okta, and Facebook. 

This article will explore how OpenPubkey works and look at three use cases in detail.

What can OpenPubkey do?

Public key cryptography was invented in the 1970s and has become the most powerful tool in the security engineering toolbox. It allows anything holding a public key, and its associated signing key, to create a cryptographic identity. This identity is extremely secure because the party can not only use their signing key to prove they are who they say they are but also sign messages under this identity.

Servers often authenticate themselves to people using public keys associated with the server’s identity, yet the process rarely works the other way. People rarely authenticate to servers using public keys associated with a person’s identity. Instead, less secure authentication methods are employed, such as authentication secrets stored in cookies, which must be transmitted on every request.

Let’s say that Alice wanted to sign the message “Flee at once — all is discovered” under her email address alice@example.com. How would she do it? One approach would be for Alice to create a public key (PK) and signing key (SK) and then publish the mapping between her email and the PK.

This approach has two problems. First, you and everyone verifying this message must trust that the webpage has honestly mapped Alice’s email to her public key and has not maliciously replaced her public key with another key that could be used to impersonate Alice. Second, Alice must now protect and manage the signing key associated with this public key. History has shown that users can be terrible at protecting signing keys. Probably the most famous example is of the man who lost a signing key controlling half a billion dollars worth of Bitcoin.

Human authentication on the web was originally supposed to work the same way as server authentication. Much like a certificate authority (CA) issues a certificate to a server, which associates a public key with the server’s identity (example.com), the plan was to have a CA issue a client certificate to a person that associates a public key with that person’s identity. These client certificates are still around and are well-used for certain applications, but they never caught on for widespread personal use, likely because of the terrible user experience (UX) of asking people to secure and manage secret signing keys.

OpenPubkey addresses both of these problems. It uses your identity provider to perform the mapping between identity and your public key. Because you already trust your identity provider, having your identity provider perform this mapping does not add any new trusted parties. For instance, Alice must already trust her identity provider, Example.com, to manage her identity (alice@example.com), so it is natural to use Example.com to perform the mapping between Alice’s public key and her Example.com identity (alice@example.com). Example.com already knows how to authenticate @example.com users, so Alice doesn’t need to set up a new account or create new authentication factors.

Second, to solve the problem of lost or stolen signing keys, OpenPubkey public keys and signing keys are ephemeral. That means the signing keys can be deleted and recreated at will. OpenPubkey generates a fresh public key and signing key for a user every time that user authenticates to their identity provider. This approach to making public keys ephemeral removes one of the most significant UX barriers to authenticating people with public keys. It also provides a security win; it creates a much smaller window of exposure if a signing key is stolen, as signing keys can be deleted when the user idles or logs out.

How does OpenPubkey work?

Let’s return to our situation: Alice wants to sign the message “Flee at once — all is discovered” under her identity (alice@example.com). First, Alice’s computer generates a fresh public key and signing key. Next, she needs her identity provider, Example.com, to associate her identity with this public key. How does OpenPubkey do this? To understand the process, we first need to provide details about how SSO/OpenID Connect works.

Example.com, which is the identity provider for @example.com, knows how to check that Alice is really alice@example.com. Example.com does this every time Alice signs into Example.com. In OIDC, the identity provider signs a statement, called an ID Token, which roughly says “this is alice@example.com”. Part of the authentication process in OIDC allows the user (or their software) to submit a random value that will be included in the issued ID Token.

Alice’s OpenPubkey client puts the cryptographic hash of Alice’s public key into this value in her ID Token. Alice’s OpenPubkey client modifies the ID Token into an object called a PK Token, which essentially says: “this is alice@example.com and her public key is 0xABCE…”. We’re skipping a few details of OpenPubkey, but this is the basic idea.

Now that Alice has a PK Token signed by Example.com, which binds her public key to her identity, she can sign the statement “Flee at once — all is discovered” and broadcast the message, the signature, and her PK Token. Bob, or anyone else for that matter, can check whether this message is really from alice@example.com by checking that the PK Token is signed by Example.com and then checking that Alice’s signature matches the public key in the PK Token.
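To make the flow concrete, here is a self-contained toy sketch of the idea in Python. This is not the OpenPubkey library or its API: real OpenPubkey uses OIDC ID Tokens (signed JWTs) verified against the provider’s published keys, while this stand-in uses Ed25519 signatures for both parties and a plain dict as the “PK Token.” The three checks in verify() mirror what Bob does above.

import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def raw_bytes(public_key):
    return public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)

# Alice generates a fresh, ephemeral key pair.
alice_sk = Ed25519PrivateKey.generate()
alice_pk = raw_bytes(alice_sk.public_key())

# The "identity provider" signs a token whose nonce commits to a hash of
# Alice's public key (this models the random-value trick in the OIDC flow).
idp_sk = Ed25519PrivateKey.generate()
payload = json.dumps({"email": "alice@example.com",
                      "nonce": hashlib.sha256(alice_pk).hexdigest()},
                     sort_keys=True).encode()
pk_token = {"payload": payload, "idp_sig": idp_sk.sign(payload)}

# Alice signs her message with the ephemeral signing key.
message = "Flee at once — all is discovered".encode()
signature = alice_sk.sign(message)

def verify(message, signature, alice_pk, pk_token, idp_public_key):
    """Bob's three checks: provider signature, key binding, message signature."""
    try:
        idp_public_key.verify(pk_token["idp_sig"], pk_token["payload"])
        claims = json.loads(pk_token["payload"])
        if claims["nonce"] != hashlib.sha256(alice_pk).hexdigest():
            return False
        Ed25519PublicKey.from_public_bytes(alice_pk).verify(signature, message)
        return True
    except InvalidSignature:
        return False

print(verify(message, signature, alice_pk, pk_token, idp_sk.public_key()))  # True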

OpenPubkey use cases

Now let’s look at OpenPubkey use cases.

SSH

OpenPubkey is useful for more than just telling your friends that “Flee at once — all is discovered.” Because most security protocols are built on public key cryptography, OpenPubkey can easily plug human identities into these protocols.

SSH supports the authentication of both machines and users with public keys (also known as SSH keys). However, these SSH keys are not associated with identities. With an SSH key, you can say “allow root access for key 0xABCD…”, but not “allow root access for alice@example.com.” This presents several UX and security problems. As mentioned previously, people struggle with managing their secret signing keys, and SSH is no exception.

Even more problematic, because public keys are not associated with identities, it is difficult to tell if an SSH key represents a person or machine that should no longer have access. As Tatu Ylonen, the inventor of SSH, writes in his recent paper Challenges in Managing SSH Keys — and a Call for Solutions:

“In analyzing SSH keys for dozens of large enterprises, it has turned out that in many environments 90% of all authorized keys are no longer used. They represent access that was provisioned, but never terminated when the person left or the need for access ceased to exist. Some of the authorized keys are 10-20 years old, and typically about 10% of them grant root access or other privileged access. The vast majority of private user keys found in most environments do not have passphrases.”

OpenPubkey can be used to solve this problem by binding SSH keys to user identities. That way, the server can check whether the identity (alice@example.com) is allowed to connect to the server or not. This means that Alice can access her SSH server using SSO; she can log in to Example.com as alice@example.com and then gain access to the server as long as her SSO is valid.

OpenPubkey authentication can be added to SSH with a small change to the SSH config. No code changes to SSH are required. To try it out, or learn more about how OpenPubkey’s SSH works, see our recent post: How to Use OpenPubkey to SSH Without SSH Keys.
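To give a flavor of how small that config change can be, the sketch below leans on OpenSSH’s standard AuthorizedKeysCommand mechanism, which delegates authorized-key lookup to an external program (the %u, %t, and %k tokens expand to the username, key type, and base64-encoded key). The verifier path and argument order here are placeholders rather than the actual OpenPubkey tooling; the linked post shows the real setup.

# Hypothetical sshd_config snippet: delegate key verification to an
# OpenPubkey-aware helper (binary name and arguments are illustrative).
AuthorizedKeysCommand /usr/local/bin/openpubkey-verify %u %t %k
AuthorizedKeysCommandUser nobody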

Secure messaging

OpenPubkey can also be used to solve one of the major issues with end-to-end encrypted messaging. Suppose someone sends you a message on a secure messaging app: How do you know they are actually that person? Some secure messaging apps let you look up the public key that is securing your communication, but how do you know that that public key is actually the public key of the person you want to privately communicate with?

This connection between public key and identity is the core problem that OpenPubkey solves. With OpenPubkey, Bob can learn the public key for alice@example.com by checking a PK Token signed by Example.com, which includes Alice’s public key and her email address. This does involve trusting Example.com, but you generally already have to trust Example.com to SSO @example.com users.

While it’s not discussed here, OpenPubkey does support an optional protocol — the MFA cosigner — which removes the requirement of trusting the identity provider. But even without the MFA cosigner protocol, OpenPubkey provides stronger security for end-to-end encrypted messaging because it allows Bob to learn Alice’s public key directly from Alice’s identity provider.

Signing container images

OpenPubkey is not limited to human use cases. OpenPubkey developers are working on a solution to allow workflows (rather than people) to sign images using GitHub’s identity provider and GitHub Actions. You can learn more about this use case by reading How to Use OpenPubkey with GitHub Actions Workloads.

Help us expand the utility of OpenPubkey

These three use cases should not be seen as the limits of what OpenPubkey can do. This approach is highly flexible and can be used for VPNs, cosigning, container service meshes, cryptocurrencies, web applications, and even physical access. 

We invite anyone who wants to contribute to OpenPubkey to visit and star our GitHub repo. We are building an open and friendly community and welcome pull requests from anyone — see the contribution guidelines to learn more.    

Learn more

Read How to Use OpenPubkey with GitHub Actions Workloads.

Read How to Use OpenPubkey to SSH Without SSH Keys.

Fireside chat — Redefining Security Standards: Identity Providers as Certificate Authorities.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Docker Desktop 4.27: Synchronized File Shares, Docker Init GA, Private Extensions Marketplace, Moby 25, Support for Testcontainers with ECI, Docker Build Cloud, and Docker Debug Beta

We’re pleased to announce Docker Desktop 4.27, packed with exciting new features and updates. The new release includes key advancements such as synchronized file shares, collaboration enhancements in Docker Build Cloud, the introduction of the private marketplace for extensions (available for Docker Business customers), and the much-anticipated release of Moby 25. 

Additionally, we explore the support for Testcontainers with Enhanced Container Isolation, the general availability of docker init with expanded language support, and the beta release of Docker Debug. These updates represent significant strides in improving development workflows, enhancing security, and offering advanced customization for Docker users.

Docker Desktop synchronized file shares GA

We’re diving into some fantastic updates for Docker Desktop, and we’re especially thrilled to introduce our latest feature, synchronized file shares, which is available now in version 4.27 (Figure 1). Following our acquisition announcement in June 2023, we have integrated the technology behind Mutagen into the core of Docker Desktop.

You can now say goodbye to the challenges of using large codebases in containers with virtual filesystems. Synchronized file shares unlock native filesystem performance for bind mounts and provide a remarkable 2-10x boost in file operation speeds. For developers managing extensive codebases, this is a game-changer.

Figure 1: Shares have been created and are available for use in containers.

To get started, log in to Docker Desktop with your subscription account (Pro, Teams, or Business) to harness the power of Docker Desktop synchronized file shares. You can read more about this feature in the Docker documentation.

Collaborate on shared Docker Build Cloud builds in Docker Desktop

With the recent GA of Docker Build Cloud, your team can now leverage Docker Desktop to use powerful cloud-based build machines and shared caching to reduce unnecessary rebuilds and get your build done in a fraction of the time, regardless of your local physical hardware.

New builds can make instant use of the shared cache: even if this is your first time building the project, you can immediately benefit from faster build times.

We know that team members have varying levels of Docker expertise. When a new developer has issues with their build failing, the Builds view makes it effortless for anyone on the team to locate the troublesome build using search and filtering. They can then collaborate on a fix and get unblocked in no time.

When your whole team is building on the same cloud builder, the build list can get noisy, so we added filtering by specific build types, helping you focus on the builds that are important to you.

Link to builder settings for a build

Previously, to access builder settings, you had to jump back to the build list or the settings page, but now you can access them directly from a build (Figure 2).

Figure 2: Access builder settings directly from a build.

Delete build history for a builder

Until now, you could only delete builds in batches, which meant that clearing the build history required a lot of clicks. This update enables you to clear all builds easily (Figure 3).

Figure 3: Painlessly clear the build history for an individual builder.

Refresh storage data for your builder at any point in time

Refreshing the storage data is an intensive operation, so it only happens periodically. Previously, when you were clearing data, you would have to wait a while to see the update. Now it’s just a one-click process (Figure 4).

Figure 4: Quickly refresh storage data for a builder to get an up-to-date view of your usage.

New feature: Private marketplace for extensions available for Docker Business subscribers

Docker Business customers now have exclusive access to a new feature: the private marketplace for extensions. This enhancement focuses on security, compliance, and customization, empowering developers by providing:

Controlled access: Manage which extensions developers can use through allow-listing.

Private distribution: Easily distribute company-specific extensions from a private registry.

Customized development: Deploy customized team processes and tools as unpublished/private Docker extensions tailored to a specific organization.

The private marketplace for extensions enables a secure, efficient, and tailored development environment, aligning with your enterprise’s specific needs. Get started today by learning how to configure a private marketplace for extensions.

Moby 25 release — containerd image store 

We are happy to announce the release of Moby 25.0 with Docker Desktop 4.27. In case you’re unfamiliar, Moby is the open source project for Docker Engine, which ships in Docker Desktop. We have dedicated significant effort to this release, which marks a major release milestone for the open source Moby project. You can read a comprehensive list of enhancements in the v25.0.0 release notes.

With the release of Docker Desktop 4.27, support for the containerd image store has graduated from beta to general availability. This work began in September 2022 when we started extending the Docker Engine integration with containerd, so we are excited to have this functionality reach general availability.

This support provides a more robust user experience by natively storing and building multi-platform images and by using snapshotters for lazily pulling images (e.g., stargz) and peer-to-peer image distribution (e.g., dragonfly, nydus). It also provides a foundation for you to run Wasm containers (currently in beta).

Using the containerd image store is not currently enabled by default for all users but can be enabled in the general settings in Docker Desktop under Use containerd for pulling and storing images (Figure 5).

Figure 5: Enable use of the containerd image store in the general settings in Docker Desktop.

Going forward, we will continue improving the user experience of pushing, pulling, and storing images with the containerd image store, help migrate user images to use containerd, and work toward enabling it by default for all users. 

As always, you can try any of the features landing in Moby 25 in Docker Desktop.

Support for Testcontainers with Enhanced Container Isolation

Docker Desktop 4.27 introduces the ability to use the popular Testcontainers framework with Enhanced Container Isolation (ECI). 

ECI, which is available to Docker Business customers, provides an additional layer of security to prevent malicious workloads running in containers from compromising Docker Desktop or the host. It does this by running containers without root access to the Docker Desktop VM, by vetting sensitive system calls inside containers, and through other advanced techniques. It’s meant to better secure local development environments.

Before Docker Desktop 4.27, ECI blocked mounting the Docker Engine socket into containers to increase security and prevent malicious containers from gaining access to Docker Engine. However, this also prevented legitimate scenarios (such as Testcontainers) from working with ECI.   

Starting with Docker Desktop 4.27, admins can now configure ECI to allow Docker socket mounts, but in a controlled way (e.g., on trusted images of their choice) and even restrict the commands that may be sent on that socket. This functionality, in turn, enables users to enjoy the combined benefits of frameworks such as Testcontainers (or any others that require containers to access the Docker engine socket) with the extra security and peace of mind provided by ECI.
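For a sense of what such a policy can look like, here is a rough sketch of the relevant section of Docker Desktop’s admin-settings.json. Treat the key names and values as assumptions recalled from the documentation rather than a verified schema, and consult the Docker docs for the authoritative format; the idea is to allow the socket mount only for trusted images (such as the Testcontainers cleanup container) and to restrict which commands the socket accepts.

{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "locked": true,
    "value": true,
    "dockerSocketMount": {
      "imageList": {
        "images": ["docker.io/testcontainers/ryuk:*"]
      },
      "commandList": {
        "type": "deny",
        "commands": ["push", "build"]
      }
    }
  }
}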

Docker init GA with Java support 

Initially released in beta form in Docker Desktop 4.18, docker init has undergone several enhancements. The docker init command-line utility aids in the initialization of Docker resources within a project. It automatically generates Dockerfiles, Compose files, and .dockerignore files based on the nature of the project, significantly reducing the setup time and complexity associated with Docker configurations.

The initial beta release of docker init only supported Go and generic projects. The latest version, available in Docker Desktop 4.27, supports Go, Python, Node.js, Rust, ASP.NET, PHP, and Java (Figure 6).

Figure 6. Docker init will suggest the best template for the application.

The general availability of docker init offers an efficient and user-friendly way to integrate Docker into your projects. Whether you’re a seasoned Docker user or new to containerization, docker init is ready to enhance your development workflow. 

Beta release of Docker Debug 

As previously announced at DockerCon 2023, Docker Debug is now available as a beta offering in Docker Desktop 4.27.

Figure 7: Docker Debug.

Developers can spend as much as 60% of their time debugging their applications, with much of that time taken up by sorting and configuring tools and setup instead of debugging. Docker Debug (available in Pro, Teams, or Business subscriptions) provides a language-independent, integrated toolbox for debugging local and remote containerized apps — even when the container fails to launch — enabling developers to find and solve problems faster.

To get started, run docker debug <Container or Image name> in the Docker Desktop CLI while logged in with your subscription account.

Conclusion

Docker Desktop’s latest updates and features, from synchronized file shares to the first beta release of Docker Debug, reflect our ongoing commitment to enhancing developer productivity and operational efficiency. Integrating these capabilities into Docker Desktop streamlines development processes and empowers teams to collaborate more effectively and securely. As Docker continues to evolve, we remain dedicated to providing our community and customers with innovative solutions that address the dynamic needs of modern software development.

Stay tuned for further updates and enhancements, and as always, we encourage you to explore these new features to see how they can benefit your development workflow.

Upgrade to Docker Desktop 4.27 to explore these updates and experiment with Docker’s latest features.

Learn more

Read the Docker Desktop Release Notes.

Install and authenticate against the latest release of Docker Desktop.

Learn more about synchronized file shares.

Check out Docker Build Cloud.

Read Streamline Dockerization with Docker Init GA.

Read Docker Init: Initialize Dockerfiles and Compose files with a single CLI command.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Build Multimodal GenAI Apps with OctoAI and Docker

This post was contributed by Thierry Moreau, co-founder and head of DevRel at OctoAI.

Generative AI models have shown immense potential over the past year with breakthrough models like GPT-3.5, DALL-E, and more. In particular, open source foundational models have gained traction among developers and enterprise users who appreciate how customizable, cost-effective, and transparent these models are compared to closed-source alternatives.

In this article, we’ll explore how you can compose an open source foundational model into a streamlined image transformation pipeline that lets you manipulate images with nothing but text to achieve surprisingly good results.

With this approach, you can create fun versions of corporate logos, bring your kids’ drawings to life, enrich your product photography, or even remodel your living room (Figure 1).

Figure 1: Examples of image transformation including, from left to right: generating a creative corporate logo, bringing children’s drawings to life, enriching commercial photography, and remodeling a living room.

Pretty cool, right? Behind the scenes, a lot needs to happen, and we’ll walk step by step through how to reproduce these results yourself. We call the multimodal GenAI pipeline OctoShop as a nod to the popular image editing software.

Feeling inspired to string together some foundational GenAI models? Let’s dive into the technology that makes this possible.

Architecture overview

Let’s look more closely at the open source foundational GenAI models that compose the multimodal pipeline we’re about to build.

Going forward, we’ll use the term “model cocktail” instead of “multimodal GenAI model pipeline,” as it flows a bit better (and sounds tastier, too). A model cocktail is a mix of GenAI models that can process and generate data across multiple modalities: text and images are examples of data modalities across which GenAI models consume and produce data, but the concept can also extend to audio and video (Figure 2).

To build on the analogy of crafting a cocktail (or mocktail, if you prefer), you’ll need to mix ingredients, which, when assembled, are greater than the sum of their individual parts.

Figure 2: The multimodal GenAI workflow — by taking an image and text, this pipeline transforms the input image according to the text prompt.

Let’s use a Negroni, for example — my favorite cocktail. It’s easy to prepare; you need equal parts of gin, vermouth, and Campari. Similarly, our OctoShop model cocktail will use three ingredients in equal measure: an image generation model (SDXL), a text generation model (Mistral-7B), and a custom image-to-text generation model (CLIP Interrogator).

The process is as follows: 

CLIP Interrogator takes in an image and generates a textual description (e.g., “a whale with a container on its back”).

An LLM, Mistral-7B, then generates a richer textual description based on a user prompt (e.g., “set the image into space”), transforming the original description into one that meets the user prompt (e.g., “in the vast expanse of space, a majestic whale carries a container on its back”).

Finally, an SDXL model generates the output image from the textual description produced by the LLM. We also take advantage of SDXL styles and a ControlNet to better control the output image in terms of style and framing/perspective.

Prerequisites

Let’s go over the prerequisites for crafting our cocktail.

Here’s what you’ll need:

Sign up for an OctoAI account to use OctoAI’s image generation (SDXL), text generation (Mistral-7B), and compute solutions (CLIP Interrogator) — OctoAI serves as the bar from which to get all of the ingredients you’ll need to craft your model cocktail. If you’re already using a different compute service, feel free to bring that instead.

Run a Jupyter notebook to craft the right mix of GenAI models. This is your place for experimenting and mixing, so this will be your cocktail shaker. To make it easy to run and distribute the notebook, we’ll use Google Colab.

Finally, we’ll deploy our model cocktail as a Streamlit app. Think of building your app and embellishing the frontend as the presentation of your cocktail (e.g., glass, ice, and choice of garnish) to enhance your senses.

Getting started with OctoAI

Head to octoai.cloud and create an account if you haven’t done so already. You’ll receive $10 in credits upon signing up for the first time, which should be sufficient for you to experiment with your own workflow here.

Follow the instructions on the Getting Started page to obtain an OctoAI API token — this will help you get authenticated whenever you use the OctoAI APIs. 

Notebook walkthrough

We’ve built a Jupyter notebook in Colab to help you learn how to use the different models that will constitute your model cocktail. Here are the steps to follow: 

1. Launch the notebook

Get started by launching the following Colab notebook. 

There’s no need to change the runtime type or rely on a GPU or TPU accelerator — all we need is a CPU here, given that all of the AI heavy-lifting is done on OctoAI endpoints.

2. OctoAI SDK setup

Let’s get started by installing the OctoAI SDK. You’ll use the SDK to invoke the different open source foundational models we’re using, like SDXL and Mistral-7B. You can install through pip:

# Install the OctoAI SDK
!pip install octoai-sdk

In some cases, you may get a message about pip packages being previously imported in the runtime, causing an error. If that’s the case, selecting the Restart Session button at the bottom should take care of the package versioning issues. After this, you should be able to re-run the cell that pip-installs the OctoAI SDK without any issues.

3. Generate images with SDXL

You’ll first learn to generate an image with SDXL using the Image Generation solution API. To learn more about what each parameter does in the code below, check out OctoAI’s ImageGenerator client.

In particular, the ImageGenerator API takes several arguments to generate an image:

Engine: Lets you choose between versions of Stable Diffusion models, such as SDXL, SD1.5, and SSD.

Prompt: Describes the image you want to generate.

Negative prompt: Describes the traits you want to avoid in the final image.

Width, height: The resolution of the output image.

Num images: The number of images to generate at once.

Sampler: Determines the sampling method used to denoise your image. If you’re not familiar with this process, this article provides a comprehensive overview.

Number of steps: Number of denoising steps — the more steps, the higher the quality, but generally going past 30 will lead to diminishing returns.

Cfg scale: How closely to adhere to the image description — generally stays around 7-12.

Use refiner: Whether to apply the SDXL refiner model, which improves the output quality of the image.

Seed: A parameter that lets you control the reproducibility of image generation (set to a positive value to always get the same image given stable input parameters).

Note that tweaking the image generation parameters — like number of steps, number of images, sampler used, etc. — affects the amount of GPU compute needed to generate an image. Increasing GPU cycles will affect the pricing of generating the image. 

Here’s an example using simple parameters:

# Set up the OctoAI image generation client
from octoai.clients.image_gen import Engine, ImageGenerator

# Now let's use the OctoAI Image Generation API to generate
# an image of a whale with a container on its back to recreate
# the moby logo
image_gen = ImageGenerator(token=OCTOAI_API_TOKEN)
image_gen_response = image_gen.generate(
    engine=Engine.SDXL,
    prompt="a whale with a container on its back",
    negative_prompt="blurry photo, distortion, low-res, poor quality",
    width=1024,
    height=1024,
    num_images=1,
    sampler="DPM_PLUS_PLUS_2M_KARRAS",
    steps=20,
    cfg_scale=7.5,
    use_refiner=True,
    seed=1
)
images = image_gen_response.images

# Display generated image from OctoAI
for i, image in enumerate(images):
    pil_image = image.to_pil()
    display(pil_image)

Feel free to experiment with the parameters to see what happens to the resulting image. In this case, I’ve put in a simple prompt meant to describe the Docker logo: “a whale with a container on its back.” I also added standard negative prompts to help generate the style of image I’m looking for. Figure 3 shows the output:

Figure 3: An SDXL-generated image of a whale with a container on its back.

4. Control your image output with ControlNet

One thing you may want to do with SDXL is control the composition of your AI-generated image. For example, you can specify a specific human pose or control the composition and perspective of a given photograph, etc. 

For our experiment using Moby (the Docker mascot), we’d like to get an AI-generated image that can be easily superimposed onto the original logo — same shape of whale and container, orientation of the subject, size, and so forth. 

This is where a ControlNet can come in handy: it lets you constrain the generation of images by feeding a control image as input. In our example, we’ll feed the image of the Moby logo as our control input.

By tweaking the following parameters used by the ImageGenerator API, we are constraining the SDXL image generation with a control image of Moby. That control image will be converted into a depth map using a depth estimation model, then fed into the ControlNet, which will constrain SDXL image generation.

# Set the engine to controlnet SDXL
engine="controlnet-sdxl",
# Select depth controlnet which uses a depth map to apply
# constraints to SDXL
controlnet="depth_sdxl",
# Set the conditioning scale anywhere between 0 and 1, try different
# values to see what they do!
controlnet_conditioning_scale=0.3,
# Pass in the base64 encoded string of the moby logo image
controlnet_image=image_to_base64(moby_image),

Now the result looks like it matches the Moby outline a lot more closely (Figure 4). This is the power of ControlNet. You can adjust the strength by varying the controlnet_conditioning_scale parameter. This way, you can make the output image more or less faithfully match the control image of Moby.

Figure 4: Left: The Moby logo is used as a control image to a ControlNet. Right: the SDXL-generated image resembles the control image more closely than in the previous example.

5. Control your image output with SDXL style presets

Let’s add a layer of customization with SDXL styles. We’ll use the 3D Model style preset (Figure 5). Behind the scenes, these style presets are adding additional keywords to the positive and negative prompts that the SDXL model ingests.

Figure 5: You can try various styles on the OctoAI Image Generation solution UI — there are more than 100 to choose from, each delivering a unique feel and aesthetic.

Figure 6 shows how setting this one parameter in the ImageGenerator API transforms our AI-generated image of Moby. Go ahead and try out more styles; we’ve generated a gallery for you to get inspiration from.

Figure 6: SDXL-generated image of Moby with the “3D Model” style preset applied.

6. Manipulate images with Mistral-7B LLM

So far we’ve relied on SDXL, which does text-to-image generation. We’ve added ControlNet in the mix to apply a control image as a compositional constraint. 

Next, we’re going to layer an LLM into the mix to transform our original image prompt into a creative and rich textual description based on a “transformation prompt.” 

Basically, we’re going to use an LLM to make our prompt better automatically. This will allow us to perform image manipulation using text in our OctoShop model cocktail pipeline:

Take a logo of Moby: Set it into an ultra-realistic photo in space.

Take a child’s drawing: Bring it to life in a fantasy world.

Take a photo of a cocktail: Set it on a beach in Italy.

Take a photo of a living room: Transform it into a staged living room in a designer house.

To achieve this text-to-text transformation, we will use the LLM user prompt as follows. This sets the original textual description of Moby into a new setting: the vast expanse of space.

'''
Human: set the image description into space: “a whale with a container on its back”
AI: '''

We’ve configured the LLM system prompt so that LLM responses are concise and at most one sentence long. We could make them longer, but be aware that the prompt consumed by SDXL has a 77-token context limit.
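If you want a belt-and-suspenders guard on top of the system prompt, a small helper (our own addition, not part of the original notebook) can crudely cap the description before handing it to SDXL. Note that it counts words rather than CLIP tokens, so it is only a rough approximation of the 77-token limit.

def cap_prompt(prompt: str, max_words: int = 50) -> str:
    # Words are not CLIP tokens, so this is only a rough approximation.
    return " ".join(prompt.split()[:max_words])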

You can read more on the text generation Python SDK and its Chat Completions API used to generate text:

Model: Lets you choose from a selection of foundational open source models like Mixtral, Mistral, Llama2, and Code Llama (the selection will grow as more open source models are released).

Messages: Contains a list of messages (system and user) to use as context for the completion.

Max tokens: Enforces a hard limit on output tokens (this could cut a completion response in the middle of a sentence).

Temperature: Lets you control the creativity of your answer: with a higher temperature, less likely tokens can be selected.

The choice of model, input, and output tokens will influence pricing on OctoAI. In this example, we’re using the Mistral-7B LLM, a great open source model that really packs a punch given its small parameter size.

Let’s look at the code used to invoke our Mistral-7B LLM:

# Let's go ahead and start with the original prompt that we used in our
# image generation examples.
image_desc = "a whale with a container on its back"

# Let's then prepare our LLM prompt to manipulate our image
llm_prompt = '''
Human: set the image description into space: {}
AI: '''.format(image_desc)

# Now let's use an LLM to transform this craft clay rendition
# of Moby into a fun sci-fi universe
from octoai.client import Client

client = Client(OCTOAI_API_TOKEN)
completion = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant. Keep your responses short and limited to one sentence."
        },
        {
            "role": "user",
            "content": llm_prompt
        }
    ],
    model="mistral-7b-instruct-fp16",
    max_tokens=128,
    temperature=0.01
)

# Print the message we get back from the LLM
llm_image_desc = completion.choices[0].message.content
print(llm_image_desc)

Our LLM has created a short yet imaginative description of Moby traveling through space. Figure 7 shows the result when we feed this LLM-generated textual description into SDXL.

Figure 7: SDXL-generated image of Moby where we used an LLM to set the scene in space and enrich the text prompt.

This image is great. We can feel the immensity of space. With the power of LLMs and the flexibility of SDXL, we can take image creation and manipulation to new heights. And the great thing is, all we need to manipulate those images is text; the GenAI models do the rest of the work.

7. Automate the workflow with AI-based image labeling

So far in our image transformation pipeline, we’ve had to manually label the input image to our OctoShop model cocktail. Instead of just passing in the image of Moby, we had to provide a textual description of that image.

Thankfully, we can rely on a GenAI model to perform this labeling task: CLIP Interrogator. Think of this task as the reverse of what SDXL does: It takes in an image and produces text as the output.

To get started, we’ll need a CLIP Interrogator model running behind an endpoint somewhere. There are two ways to get a CLIP Interrogator model endpoint on OctoAI. If you’re just getting started, we recommend the simple approach, and if you feel inspired to customize your model endpoint, you can use the more advanced approach. For instance, you may be interested in trying out the more recent version of CLIP Interrogator.

You can now invoke the CLIP Interrogator model in a few lines of code. We’ll use the fast interrogator mode here to get a label generated as quickly as possible.

# Let's go ahead and invoke the CLIP interrogator model

# Note that under a cold start scenario, you may need to wait a minute or two
# to get the result of this inference… Be patient!
output = client.infer(
    endpoint_url=CLIP_ENDPOINT_URL + '/predict',
    inputs={
        "image": image_to_base64(moby_image),
        "mode": "fast"
    }
)

# All labels
clip_labels = output["completion"]["labels"]
print(clip_labels)

# Let's get just the top label
top_label = clip_labels.split(',')[0]
print(top_label)

The top label that came back described our Moby logo pretty much on point. Now that we’ve tested all ingredients individually, let’s assemble our model cocktail and test it on interesting use cases.

8. Assembling the model cocktail

Now that we have tested our three models (CLIP interrogator, Mistral-7B, SDXL), we can package them into one convenient function, which takes the following inputs:

An input image that will be used to control the output image and also be automatically labeled by our CLIP interrogator model.

A transformation string that describes the transformation we want to apply to the input image (e.g., “set the image description in space”).

A style string which lets us better control the artistic output of the image independently of the transformation we apply to it (e.g., painterly style vs. cinematic).

The function below is a rehash of all of the code we’ve introduced above, packed into one function.

def genai_transform(image: Image, transformation: str, style: str):
    # Step 1: CLIP captioning
    output = client.infer(
        endpoint_url=CLIP_ENDPOINT_URL + '/predict',
        inputs={
            "image": image_to_base64(image),
            "mode": "fast"
        }
    )
    clip_labels = output["completion"]["labels"]
    top_label = clip_labels.split(',')[0]

    # Step 2: LLM transformation
    llm_prompt = '''
Human: {}: {}
AI: '''.format(transformation, top_label)
    completion = client.chat.completions.create(
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant. Keep your responses short and limited to one sentence."
            },
            {
                "role": "user",
                "content": llm_prompt
            }
        ],
        model="mistral-7b-instruct-fp16",
        max_tokens=128,
        presence_penalty=0,
        temperature=0.1,
        top_p=0.9,
    )
    llm_image_desc = completion.choices[0].message.content

    # Step 3: SDXL + ControlNet transformation
    image_gen_response = image_gen.generate(
        engine="controlnet-sdxl",
        controlnet="depth_sdxl",
        controlnet_conditioning_scale=0.4,
        controlnet_image=image_to_base64(image),
        prompt=llm_image_desc,
        negative_prompt="blurry photo, distortion, low-res, poor quality",
        width=1024,
        height=1024,
        num_images=1,
        sampler="DPM_PLUS_PLUS_2M_KARRAS",
        steps=20,
        cfg_scale=7.5,
        use_refiner=True,
        seed=1,
        style_preset=style
    )
    images = image_gen_response.images

    # Return the CLIP label, the LLM description, and the generated image
    pil_image = images[0].to_pil()
    return top_label, llm_image_desc, pil_image

Now you can try this out on several images, prompts, and styles. 

Package your model cocktail into a web app

Now that you’ve mixed your unique GenAI cocktail, it’s time to pour it into a glass and garnish it, figuratively speaking. We built a simple Streamlit frontend that lets you deploy your unique OctoShop GenAI model cocktail and share the results with your friends and colleagues (Figure 8). You can check it out on GitHub.

Follow the README instructions to deploy your app locally or get it hosted on Streamlit’s web hosting services.
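For a local deployment, the flow typically boils down to a couple of shell commands. The app filename below is a placeholder, not necessarily what the repository uses; the README remains the authoritative reference.

# Install the dependencies, then launch the Streamlit frontend locally.
# "octoshop_app.py" is an assumed filename; use the script from the repo.
pip install streamlit octoai-sdk
streamlit run octoshop_app.py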

Figure 8: The Streamlit app transforms images into realistic renderings in space — all thanks to the magic of GenAI.

We look forward to seeing what great image-processing apps you come up with. Go ahead and share your creations on OctoAI’s Discord server in the #built_with_octo channel! 

If you want to learn how you can put OctoShop behind a Discord Bot or build your own model containers with Docker, we also have instructions on how to do that from an AI/ML workshop organized by OctoAI at DockerCon 2023.

About OctoAI

OctoAI provides infrastructure to run GenAI at scale, efficiently, and robustly. The model endpoints that OctoAI delivers to serve models like Mixtral, Stable Diffusion XL, etc. all rely on Docker to containerize models and make them easier to serve at scale. 

If you go to octoai.cloud, you’ll find three complementary solutions that developers can build on to bring their GenAI-powered apps and pipelines into production. 

Image Generation solution exposes endpoints and APIs to perform text-to-image and image-to-image tasks built around open source foundational models such as Stable Diffusion XL or SSD.

Text Generation solution exposes endpoints and APIs to perform text generation tasks built around open source foundational models, such as Mixtral/Mistral, Llama2, or CodeLlama.

Compute solution lets you deploy and manage any dockerized model container on capable OctoAI cloud endpoints to power your demanding GenAI needs. This compute service complements the image generation and text generation solutions by exposing infinite programmability and customizability for AI tasks that are not currently readily available on either the image generation or text generation solutions.

Disclaimer

OctoShop is built on the foundation of CLIP Interrogator, SDXL, and Mistral-7B, and is therefore likely to carry forward the potential dangers inherent in these base models. It’s capable of generating unintended, unsuitable, offensive, and/or incorrect outputs. We therefore strongly recommend exercising caution and conducting comprehensive assessments before deploying this model into any practical applications.

This GenAI model workflow doesn’t work on people as it won’t preserve their likeness; the pipeline works best on scenes, objects, or animals. Solutions are available to address this problem, such as face mapping techniques (also known as face swapping), which we can containerize with Docker and deploy on OctoAI Compute solution, but that’s something to cover in another blog post.

Conclusion

This article covered the fundamentals of building a GenAI model cocktail by relying on a combination of text generation, image generation, and compute solutions powered by the portability and scalability enabled by Docker containerization. 

If you’re interested in learning more about building these kinds of GenAI model cocktails, check out the OctoAI demo page or join OctoAI on Discord to see what people have been building.

Acknowledgements

The authors acknowledge Justin Gage for his thorough review, as well as Luis Vega, Sameer Farooqui, and Pedro Toruella for their contributions to the DockerCon AI/ML Workshop 2023, which inspired this article. The authors also thank Cia Bodin and her daughter Ada for the drawing used in this blog post.

Learn more

Watch the DockerCon 2023 Docker for ML, AI, and Data Science workshop.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.
