Compose Editing Evolved: Schema-Driven and Context-Aware

Every day, thousands of developers are creating and editing Compose files. At Docker, we are regularly adding more features to Docker Compose such as the new provider services capability that lets you run AI models as part of your multi-container applications with Docker Model Runner. We know that providing a first-class editing experience for Compose files is key to empowering our users to ship amazing products that will delight their customers. We are pleased to announce today some new additions to the Docker Language Server that will make authoring Compose files easier than ever before.

Schema-Driven Features

To help you stay on the right track as you edit your Compose file, the Docker Language Server brings the Compose specification into the editor, minimizing window switching and keeping you in the editor where you are most productive.

Figure 1: Leverage hover tooltips to quickly understand what a specific Compose attribute is for.

Context-Aware Intelligence

Although attribute names and types can be inferred from the Compose specification, certain attributes carry contextual meaning and reference values of other attributes or content from other files. The Docker Language Server understands these relationships and suggests the available values so that there is no guesswork on your part.

Figure 2: Code completion understands how your files are connected and will only give you suggestions that are relevant in your current context.

Freedom of Choice

The Docker Language Server is built on the Language Server Protocol (LSP) which means you can connect it with any LSP-compatible editor of your choosing. Whatever editor you like using, we will be right there with you to guide you along your software development journey.

Figure 3: The Docker Language Server can run in any LSP-compliant editor such as the JetBrains IDE with the LSP4IJ plugin.

Conclusion

Docker Compose is a core part of hundreds of companies’ development cycles. By offering a feature-rich editing experience with the Docker Language Server, developers everywhere can test and ship their products faster than ever before. Install the Docker DX extension for Visual Studio Code today or download the Docker Language Server to integrate it with your favorite editor.

What’s Next

Your feedback is critical in helping us improve and shape the Docker DX extension and the Docker Language Server.

If you encounter any issues or have ideas for enhancements that you would like to see, please let us know:

Open an issue on the Docker DX VS Code extension GitHub repository or the Docker Language Server GitHub repository 

Or submit feedback through the Docker feedback page

We’re listening and excited to keep making things better for you!

Learn More

Set up the Docker Language Server after installing LSP4IJ in your favorite JetBrains IDE.

Source: https://blog.docker.com/feed/

Docker Unveils the Future of Agentic Apps at WeAreDevelopers

Agentic applications – what actually are they and how do we make them easier to build, test, and deploy? At WeAreDevelopers, we defined agentic apps as those that use LLMs to define execution workflows based on desired goals with access to your tools, data, and systems. 

While there are new elements to this application stack, there are many aspects that feel very similar. In fact, many of the same problems experienced with microservices now exist with the evolution of agentic applications.

Therefore, we feel strongly that teams should be able to use the same processes and tools, but with the new agentic stack. Over the past few months, we’ve been working to evolve the Docker tooling to make this a reality and we were excited to share it with the world at WeAreDevelopers.

Let’s unpack those announcements, as well as dive into a few other things we’ve been working on!

Docker Captain Alan Torrance from JPMC with Docker COO Mark Cavage and WeAreDevelopers organizers

WeAreDevelopers keynote announcements

Mark Cavage, Docker’s President and COO, and Tushar Jain, Docker’s EVP of Product and Engineering, took the stage for a keynote at WeAreDevelopers and shared several exciting new announcements – Compose for agentic applications, native Google Cloud support for Compose, and Docker Offload. Watch the keynote in its entirety here.

Docker EVP, Product & Engineering Tushar Jain delivering the keynote at WeAreDevelopers

Compose has evolved to support agentic applications

Agentic applications need three things – models, tools, and your custom code that glues it all together. 

The Docker Model Runner provides the ability to download and run models.

The newly open-sourced MCP Gateway provides the ability to run containerized MCP servers, giving your application access to the tools it needs in a safe and secure manner.

With Compose, you can now define and connect all three in a single compose.yaml file! 

Here’s an example of a Compose file bringing it all together:

# Define the models
models:
  gemma3:
    model: ai/gemma3

services:
  # Define an MCP Gateway that will provide the tools needed for the app
  mcp-gateway:
    image: docker/mcp-gateway
    command: --transport=sse --servers=duckduckgo
    use_api_socket: true

  # Connect the models and tools with the app
  app:
    build: .
    models:
      gemma3:
        endpoint_var: OPENAI_BASE_URL
        model_var: OPENAI_MODEL
    environment:
      MCP_GATEWAY_URL: http://mcp-gateway:8811/sse

The application can leverage a variety of agentic frameworks – ADK, Agno, CrewAI, Vercel AI SDK, Spring AI, and more. 

Check out the newly released compose-for-agents sample repo for examples using a variety of frameworks and ideas.

Taking Compose to production with native cloud-provider support

During the keynote, we shared the stage with Google Cloud’s Engineering Director Yunong Xiao to demo how you can now easily deploy your Compose-based applications with Google Cloud Run. With this capability, the same Compose specification works from dev to prod – no rewrites and no reconfig. 

Google Cloud’s Engineering Director Yunong Xiao announcing the native Compose support in Cloud Run

With Google Cloud Run (via gcloud run compose up) and soon Microsoft Azure Container Apps, you can deploy apps to serverless platforms with ease. Cloud Run already supports the newly released models capability too!

Compose makes the entire journey from dev to production consistent, portable, and effortless – just the way applications should be.

Learn more about the Google Cloud Run support with their announcement post here.

Announcing Docker Offload – access to cloud-based compute resources and GPUs during development and testing

Running LLMs requires a significant amount of compute resources and large GPUs. Not every developer has access to those resources on their local machines. Docker Offload allows you to run your containers and models using cloud resources, yet still feel local. Port publishing and bind mounts? It all just works.

No complex setup, no GPU shortages, no configuration headaches. It’s a simple toggle switch in Docker Desktop. Sign up for our beta program and get 300 free GPU minutes!

Getting hands-on with our new agentic Compose workshop

Selfie with attendees at the end of the workshop

At WeAreDevelopers, we released and ran a workshop to enable everyone to get hands-on with the new Compose capabilities, and the response blew us away!

In the room, every seat was filled, and a line remained outside well into the workshop, hoping someone would leave early and open up a spot. But not a single person left early! It was thrilling to see attendees stay fully engaged for the entire workshop.

During the workshop, participants were able to learn about the agentic application stack, digging deep into models, tools, and agentic frameworks. They used Docker Model Runner, the Docker MCP Gateway, and the Compose integrations to package it all together. 

Want to try the workshop yourself? Check it out on GitHub at dockersamples/workshop-agentic-compose.

Lightning talks that sparked ideas

Lightning talk on testing with LLMs in the Docker booth

In addition to the workshop, we hosted a rapid-fire series of lightning talks in our booth on a range of topics. These talks were intended to inspire additional use cases and ideas for agentic applications:

Digging deep into the fundamentals of GenAI applications

Going beyond the chatbot with event-driven agentic applications

Using LLMs to perform semantic testing of applications and websites

Using Gordon to build safer and more secure images with Docker Hardened Images

These talks made it clear: agentic apps aren’t just a theory—they’re here, and Docker is at the center of how they get built.

Stay tuned for future blog posts that dig deeper into each of these topics. 

Sharing our industry insights and learnings

At WeAreDevelopers, our UX Research team presented their findings and insights from analyzing the past three years of Docker-sponsored industry research. Interestingly, the AI landscape is already starting to have an impact on language selection, attitudes toward trends like shift-left security, and more!

Julia Wilson on stage sharing insights from the Docker UX research team

To learn more about the insights, view the talk here.

Bringing a European Powerhouse to North America

In addition to the product announcements, we announced a major co‑host partnership between Docker and WeAreDevelopers, launching WeAreDevelopers World Congress North America, set for September 2026. Personally, I’m super excited for this because WeAreDevelopers is a genuine developer-first conference – it covers topics at all levels, has an incredibly fun and exciting atmosphere, live coding hackathons, and helps developers find jobs and further their career!

The 2026 WeAreDevelopers World Congress North America will mark the event’s first major expansion outside Europe. This creates a new, developer-first alternative to traditional enterprise-style conferences, with high-energy talks, live coding, and practical takeaways tailored to real builders.

Docker Captains Mohammad-Ali A’râbi (left) and Francesco Ciulla (right) in attendance with Docker Principal Product Manager Francesco Corti (center)

Try it, build it, contribute

We’re excited to support this next wave of AI-native applications. If you’re building with agentic AI, try out these tools in your workflow today. Agentic apps are complex, but with Docker, they don’t have to be hard. Let’s build cool stuff together.

Sign up for our beta program and get 300 free GPU minutes! 

Use Docker Compose to build and run your AI agents

Watch the keynote, a panel on securing the agentic workflow, and dive into insights from our annual developer survey here. 

Try Docker Model Runner and MCP Gateway

Source: https://blog.docker.com/feed/

GoFiber v3 + Testcontainers: Production-like Local Dev with Air

Intro

Local development can be challenging when apps rely on external services like databases or queues, leading to brittle scripts and inconsistent environments. Fiber v3 and Testcontainers solve this by making real service dependencies part of your app’s lifecycle, fully managed, reproducible, and developer-friendly.

With the upcoming v3 release, Fiber is introducing a powerful new abstraction: Services. These provide a standardized way to start and manage backing services such as databases, queues, and cloud emulators directly as part of your app’s lifecycle, with no extra orchestration required. Even more exciting is the new contrib module that connects Services with Testcontainers, allowing you to spin up real service dependencies in a clean and testable way.

In this post, I’ll walk through how to use these new features by building a small Fiber app that uses a PostgreSQL container for persistence, all managed via the new Service interface.

TL;DR

Use Fiber v3’s new Services API to manage backing containers.

Integrate with testcontainers-go to start a PostgreSQL container automatically.

Add hot-reloading with air for a fast local dev loop.

Reuse containers during dev by disabling Ryuk and naming them consistently.

Full example here: GitHub Repo

Local Development: State of the Art

This is a blog post about developing in Go, but let’s look at how other major frameworks approach local development, even across different programming languages.

In the Java ecosystem, the most important frameworks, such as Spring Boot, Micronaut, and Quarkus, all have a concept of development-time services. Let’s look at how each of them describes it.

From Spring Boot docs:

Development-time services provide external dependencies needed to run the application while developing it. They are only supposed to be used while developing and are disabled when the application is deployed.

Micronaut uses the concept of Test Resources:

Micronaut Test Resources adds support for managing external resources which are required during development or testing.

For example, an application may need a database to run (say MySQL), but such a database may not be installed on the development machine or you may not want to handle the setup and tear down of the database manually.

And finally, in Quarkus, the concept of Dev Services is also present.

Quarkus supports the automatic provisioning of unconfigured services in development and test mode. We refer to this capability as Dev Services.

Back to Go, one of the most popular frameworks, Fiber, has added the concept of Services, including a new contrib module to add support for Testcontainers-backed services.

What’s New in Fiber v3?

Among all the new features in Fiber v3, we have two main ones that are relevant to this post:

Services: Define and attach external resources (like databases) to your app in a composable way. This new approach ensures external services are automatically started and stopped with your Fiber app.

Contrib module for Testcontainers: Start real backing services using Docker containers, managed directly from your app’s lifecycle in a programmable way.

A Simple Fiber App using Testcontainers

The application we are going to build is a simple Fiber app that uses a PostgreSQL container for persistence. It’s based on the todo-app-with-auth-form Fiber recipe, but uses the new Services API to start a PostgreSQL container instead of an in-memory SQLite database.

Project Structure

.
├── app
| ├── dal
| | ├── todo.dal.go
| | ├── todo.dal_test.go
| | ├── user.dal.go
| | └── user.dal_test.go
| ├── routes
| | ├── auth.routes.go
| | └── todo.routes.go
| ├── services
| | ├── auth.service.go
| | └── todo.service.go
| └── types
| ├── auth.types.go
| ├── todo.types.go
| └── types.go
├── config
| ├── database
| | └── database.go
| ├── config.go
| ├── config_dev.go
| ├── env.go
| └── types.go
├── utils
| ├── jwt
| | └── jwt.go
| ├── middleware
| | └── authentication.go
| └── password
| └── password.go
├── .air.conf
├── .env
├── main.go
├── go.mod
└── go.sum

This app exposes several endpoints, for /users and /todos, and stores data in a PostgreSQL instance started using Testcontainers. Here’s how it’s put together.

Since the application is based on a recipe, we’ll skip the details of creating the routes, the services and the data access layer. You can find the complete code in the GitHub repository.

I’ll instead cover the details about how to use Testcontainers to start the PostgreSQL container, and how to use the Services API to manage the lifecycle of the container, so that the data access layer can use it without having to worry about the lifecycle of the container. Furthermore, I’ll cover how to use air to have a fast local development experience, and how to handle the graceful shutdown of the application, separating the configuration for production and local development.

In the config package, we have defined three files that will be used to configure the application, depending on a Go build tag. The first one, the config/types.go file, defines a struct to hold the application configuration and the cleanup functions for the services startup and shutdown.

package config

import (
    "context"

    "github.com/gofiber/fiber/v3"
)

// AppConfig holds the application configuration and cleanup functions
type AppConfig struct {
    // App is the Fiber app instance.
    App *fiber.App
    // StartupCancel is the context cancel function for the services startup.
    StartupCancel context.CancelFunc
    // ShutdownCancel is the context cancel function for the services shutdown.
    ShutdownCancel context.CancelFunc
}

The config.go file has the configuration for production environments:

//go:build !dev

package config

import (
    "github.com/gofiber/fiber/v3"
)

// ConfigureApp configures the fiber app, including the database connection string.
// The connection string is retrieved from the environment variable DB, or
// falls back to a default connection string targeting localhost if DB is not set.
func ConfigureApp(cfg fiber.Config) (*AppConfig, error) {
    app := fiber.New(cfg)

    db := getEnv("DB", "postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable")
    DB = db

    return &AppConfig{
        App:            app,
        StartupCancel:  func() {}, // No-op for production
        ShutdownCancel: func() {}, // No-op for production
    }, nil
}

The ConfigureApp function is responsible for creating the Fiber app, and it’s used in the main.go file to initialize the application. By default, it reads the database connection string from the DB environment variable, falling back to a local PostgreSQL instance if the variable is not set. It also uses empty functions for the StartupCancel and ShutdownCancel fields, as we don’t need to cancel anything in production.

When running the app with go run main.go, the !dev tag applies by default, and the ConfigureApp function will be used to initialize the application. But the application will not start, as the connection to the PostgreSQL instance will fail.

go run main.go

2025/05/29 11:55:36 gofiber-services/config/database/database.go:18
[error] failed to initialize database, got error failed to connect to `user=postgres database=postgres`:
[::1]:5432 (localhost): dial error: dial tcp [::1]:5432: connect: connection refused
127.0.0.1:5432 (localhost): dial error: dial tcp 127.0.0.1:5432: connect: connection refused
panic: gorm open: failed to connect to `user=postgres database=postgres`:
[::1]:5432 (localhost): dial error: dial tcp [::1]:5432: connect: connection refused
127.0.0.1:5432 (localhost): dial error: dial tcp 127.0.0.1:5432: connect: connection refused

goroutine 1 [running]:
gofiber-services/config/database.Connect({0x105164a30?, 0x0?})
gofiber-services/config/database/database.go:33 +0x9c
main.main()
gofiber-services/main.go:34 +0xbc
exit status 2

Let’s fix that!

Step 1: Add the dependencies

First, we need to make sure we have the dependencies added to the go.mod file:

Note: Fiber v3 is still in development. To use Services, you’ll need to pull the main branch from GitHub:

go get github.com/gofiber/fiber/v3@main
go get github.com/gofiber/contrib/testcontainers
go get github.com/testcontainers/testcontainers-go
go get github.com/testcontainers/testcontainers-go/modules/postgres
go get gorm.io/driver/postgres

Step 2: Define a PostgreSQL Service using Testcontainers

To leverage the new Services API, we need to define a new service. We can either implement the service interface exposed by Fiber ourselves, as shown in the Services API docs, or simply use the Testcontainers contrib module to create one, as we are going to do next.

In the config/config_dev.go file, we define a new function that adds a PostgreSQL container as a service to the Fiber application, using the Testcontainers contrib module. This file uses the dev build tag, so it will only be used when we start the application with air.

//go:build dev

package config

import (
    "fmt"

    "github.com/gofiber/contrib/testcontainers"
    "github.com/gofiber/fiber/v3"
    tc "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/modules/postgres"
)

// setupPostgres adds a Postgres service to the app, including custom configuration to allow
// reusing the same container while developing locally.
func setupPostgres(cfg *fiber.Config) (*testcontainers.ContainerService[*postgres.PostgresContainer], error) {
    // Add the Postgres service to the app, including custom configuration.
    srv, err := testcontainers.AddService(cfg, testcontainers.NewModuleConfig(
        "postgres-db",
        "postgres:16",
        postgres.Run,
        postgres.BasicWaitStrategies(),
        postgres.WithDatabase("todos"),
        postgres.WithUsername("postgres"),
        postgres.WithPassword("postgres"),
        tc.WithReuseByName("postgres-db-todos"),
    ))
    if err != nil {
        return nil, fmt.Errorf("add postgres service: %w", err)
    }

    return srv, nil
}

This creates a reusable Service that Fiber will automatically start and stop along with the app, and it’s registered as part of the fiber.Config struct that our application uses. This new service uses the postgres module from the testcontainers package to create the container. To learn more about the PostgreSQL module, please refer to the Testcontainers PostgreSQL module documentation.

Step 3: Initialize the Fiber App with the PostgreSQL Service

For production environments, our fiber.App is initialized in the config/config.go file through the ConfigureApp function. For local development, we instead initialize the fiber.App in the config/config_dev.go file, using a function with the same signature that relies on the contrib module to add the PostgreSQL service to the app config.

We need to define context providers for the services’ startup and shutdown, and add the PostgreSQL service to the app config, including custom configuration. A context provider lets us define a cancellation policy for service startup and shutdown, so either can be aborted if its context is canceled. If no context provider is defined, context.Background() is used by default.

// ConfigureApp configures the fiber app, including the database connection string.
// The connection string is retrieved from the PostgreSQL service.
func ConfigureApp(cfg fiber.Config) (*AppConfig, error) {
    // Define a context provider for the services startup.
    // The timeout is applied when the context is actually used during startup.
    startupCtx, startupCancel := context.WithCancel(context.Background())
    var startupTimeoutCancel context.CancelFunc
    cfg.ServicesStartupContextProvider = func() context.Context {
        // Cancel any previous timeout context
        if startupTimeoutCancel != nil {
            startupTimeoutCancel()
        }
        // Create a new timeout context
        ctx, cancel := context.WithTimeout(startupCtx, 10*time.Second)
        startupTimeoutCancel = cancel
        return ctx
    }

    // Define a context provider for the services shutdown.
    // The timeout is applied when the context is actually used during shutdown.
    shutdownCtx, shutdownCancel := context.WithCancel(context.Background())
    var shutdownTimeoutCancel context.CancelFunc
    cfg.ServicesShutdownContextProvider = func() context.Context {
        // Cancel any previous timeout context
        if shutdownTimeoutCancel != nil {
            shutdownTimeoutCancel()
        }
        // Create a new timeout context
        ctx, cancel := context.WithTimeout(shutdownCtx, 10*time.Second)
        shutdownTimeoutCancel = cancel
        return ctx
    }

    // Add the Postgres service to the app, including custom configuration.
    srv, err := setupPostgres(&cfg)
    if err != nil {
        if startupTimeoutCancel != nil {
            startupTimeoutCancel()
        }
        if shutdownTimeoutCancel != nil {
            shutdownTimeoutCancel()
        }
        startupCancel()
        shutdownCancel()
        return nil, fmt.Errorf("add postgres service: %w", err)
    }

    app := fiber.New(cfg)

    // Retrieve the Postgres service from the app, using the service key.
    postgresSrv := fiber.MustGetService[*testcontainers.ContainerService[*postgres.PostgresContainer]](app.State(), srv.Key())

    connString, err := postgresSrv.Container().ConnectionString(context.Background())
    if err != nil {
        if startupTimeoutCancel != nil {
            startupTimeoutCancel()
        }
        if shutdownTimeoutCancel != nil {
            shutdownTimeoutCancel()
        }
        startupCancel()
        shutdownCancel()
        return nil, fmt.Errorf("get postgres connection string: %w", err)
    }

    // Override the default database connection string with the one from the Testcontainers service.
    DB = connString

    return &AppConfig{
        App: app,
        StartupCancel: func() {
            if startupTimeoutCancel != nil {
                startupTimeoutCancel()
            }
            startupCancel()
        },
        ShutdownCancel: func() {
            if shutdownTimeoutCancel != nil {
                shutdownTimeoutCancel()
            }
            shutdownCancel()
        },
    }, nil
}

This function:

Defines context providers for the services’ startup and shutdown, applying a timeout when each context is actually used.

Adds the PostgreSQL service to the app config.

Retrieves the PostgreSQL service from the app’s state cache.

Uses the PostgreSQL service to obtain the connection string.

Overrides the default database connection string with the one from the Testcontainers service.

Returns the app config.

As a result, the fiber.App will be initialized with the PostgreSQL service, and it will be automatically started and stopped along with the app. The service representing the PostgreSQL container will be available as part of the application State, which we can easily retrieve from the app’s state cache. Please refer to the State Management docs for more details about how to use the State cache.
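To see how the pieces fit together, here is a rough sketch of what the main.go wiring could look like. This is an illustration only: the sample repository’s actual main.go differs in its details and also adds the graceful-shutdown handling shown in Step 7, and the route-registration call below is just a placeholder.

package main

import (
    "fmt"
    "log"

    "gofiber-services/config"
    "gofiber-services/config/database"

    "github.com/gofiber/fiber/v3"
)

func main() {
    appCfg, err := config.ConfigureApp(fiber.Config{})
    if err != nil {
        log.Panic(err)
    }
    // Release the startup/shutdown contexts when main exits.
    defer appCfg.StartupCancel()
    defer appCfg.ShutdownCancel()

    // config.DB now points either at the env-provided database (production build)
    // or at the Testcontainers-managed PostgreSQL service (dev build).
    database.Connect(config.DB)

    // ... register the routes from the recipe here ...

    if err := appCfg.App.Listen(fmt.Sprintf(":%v", config.PORT)); err != nil {
        log.Panic(err)
    }
}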

Step 4: Optimizing Local Dev with Container Reuse

Please note that, in the config/config_dev.go file, the tc.WithReuseByName option is used to reuse the same container while developing locally. This avoids waiting for a fresh database container to start every time the application is restarted.

Also, set TESTCONTAINERS_RYUK_DISABLED=true to prevent container cleanup between hot reloads. In the .env file, add the following:

TESTCONTAINERS_RYUK_DISABLED=true

Ryuk is the Testcontainers companion container that removes the Docker resources created by Testcontainers. For our use case, where we want to develop locally using air, we don’t want to remove the container when the application is hot-reloaded, so we disable Ryuk and give the container a name that will be reused across multiple runs of the application.

Step 5: Retrieve and Inject the PostgreSQL Connection

Now that the PostgreSQL service is part of the application, we can use it in our data access layer. The application has a global configuration variable, defined in the config/env.go file, that holds the database connection string:

// DB returns the connection string of the database.
DB = getEnv("DB", "postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable")

Retrieve the service from the app’s state and use it to connect:

// Add the PostgreSQL service to the app, including custom configuration.
srv, err := setupPostgres(&cfg)
if err != nil {
    panic(err)
}

app := fiber.New(cfg)

// Retrieve the PostgreSQL service from the app, using the service key.
postgresSrv := fiber.MustGetService[*testcontainers.ContainerService[*postgres.PostgresContainer]](app.State(), srv.Key())

Here, the fiber.MustGetService function is used to retrieve a generic service from the State cache, and we need to cast it to the specific service type, in this case *testcontainers.ContainerService[*postgres.PostgresContainer].

testcontainers.ContainerService[T] is a generic service that wraps a testcontainers.Container instance. It’s provided by the github.com/gofiber/contrib/testcontainers module.

*postgres.PostgresContainer is the specific type of the container, in this case a PostgreSQL container. It’s provided by the github.com/testcontainers/testcontainers-go/modules/postgres module.

Once we have the postgresSrv service, we can use it to connect to the database. The ContainerService type provides a Container() method that unwraps the container from the service, so we are able to use the APIs provided by the testcontainers package to interact with the container. Finally, we pass the connection string to the global DB variable, so the data access layer can use it to connect to the database.

// Retrieve the PostgreSQL service from the app, using the service key.
postgresSrv := fiber.MustGetService[*testcontainers.ContainerService[*postgres.PostgresContainer]](app.State(), srv.Key())

connString, err := postgresSrv.Container().ConnectionString(context.Background())
if err != nil {
    panic(err)
}

// Override the default database connection string with the one from the Testcontainers service.
config.DB = connString

database.Connect(config.DB)

Step 6: Live reload with air

Let’s add the dev build tag to the air build command so our local development experience is complete. In .air.conf, add the -tags dev flag to the build command to ensure the development configuration is used:

cmd = "go build -tags dev -o ./todo-api ./main.go"

Step 7: Graceful Shutdown

Fiber automatically shuts down the application and all its services when the application is stopped. But air is not passing the right signal to the application to trigger the shutdown, so we need to do it manually.

In main.go, we need to listen from a different goroutine, and we need to notify the main thread when an interrupt or termination signal is sent. Let’s add this to the end of the main function:

// Listen from a different goroutine
go func() {
    if err := app.Listen(fmt.Sprintf(":%v", config.PORT)); err != nil {
        log.Panic(err)
    }
}()

quit := make(chan os.Signal, 1)                    // Create channel to signify a signal being sent
signal.Notify(quit, os.Interrupt, syscall.SIGTERM) // When an interrupt or termination signal is sent, notify the channel

<-quit // This blocks the main thread until an interrupt is received
fmt.Println("Gracefully shutting down…")
err = app.Shutdown()
if err != nil {
    log.Panic(err)
}

And we need to make sure air is passing the right signal to the application to trigger the shutdown. Add this to .air.conf to make it work:

# Send Interrupt signal before killing process (windows does not support this feature)
send_interrupt = true

With this, air will send an interrupt signal to the application when the application is stopped, so we can trigger the graceful shutdown when we stop the application with air.

Seeing it in action

Now we can start the application with air: it will start the PostgreSQL container automatically and handle the graceful shutdown when we stop it. Let’s see it in action!

Start the application with air. You should see output like this in the logs:

air

`.air.conf` will be deprecated soon, recommend using `.air.toml`.

__ _ ___
/ / | | | |_)
/_/– |_| |_| _ v1.61.7, built with Go go1.24.1

mkdir gofiber-services/tmp
watching .
watching app
watching app/dal
watching app/routes
watching app/services
watching app/types
watching config
watching config/database
!exclude tmp
watching utils
watching utils/jwt
watching utils/middleware
watching utils/password
building…
running…
[DATABASE]::CONNECTED

2025/05/29 07:33:19 gofiber-services/config/database/database.go:44
[89.614ms] [rows:1] SELECT count(*) FROM information_schema.tables WHERE table_schema = CURRENT_SCHEMA() AND table_name = 'users' AND table_type = 'BASE TABLE'

2025/05/29 07:33:19 gofiber-services/config/database/database.go:44

[31.446ms] [rows:0] CREATE TABLE "users" ("id" bigserial,"created_at" timestamptz,"updated_at" timestamptz,"deleted_at" timestamptz,"name" text,"email" text NOT NULL,"password" text NOT NULL,PRIMARY KEY ("id"))

2025/05/29 07:33:19 gofiber-services/config/database/database.go:44
[28.312ms] [rows:0] CREATE UNIQUE INDEX IF NOT EXISTS "idx_users_email" ON "users" ("email")

2025/05/29 07:33:19 gofiber-services/config/database/database.go:44
[28.391ms] [rows:0] CREATE INDEX IF NOT EXISTS "idx_users_deleted_at" ON "users" ("deleted_at")

2025/05/29 07:33:19 gofiber-services/config/database/database.go:44
[28.920ms] [rows:1] SELECT count(*) FROM information_schema.tables WHERE table_schema = CURRENT_SCHEMA() AND table_name = 'todos' AND table_type = 'BASE TABLE'

2025/05/29 07:33:19 gofiber-services/config/database/database.go:44
[29.659ms] [rows:0] CREATE TABLE "todos" ("id" bigserial,"created_at" timestamptz,"updated_at" timestamptz,"deleted_at" timestamptz,"task" text NOT NULL,"completed" boolean DEFAULT false,"user" bigint,PRIMARY KEY ("id"),CONSTRAINT "fk_users_todos" FOREIGN KEY ("user") REFERENCES "users"("id"))

2025/05/29 07:33:19 gofiber-services/config/database/database.go:44
[27.900ms] [rows:0] CREATE INDEX IF NOT EXISTS "idx_todos_deleted_at" ON "todos" ("deleted_at")

_______ __
/ ____(_) /_ ___ _____
/ /_ / / __ / _ / ___/
/ __/ / / /_/ / __/ /
/_/ /_/_.___/___/_/ v3.0.0-beta.4
————————————————–
INFO Server started on: http://127.0.0.1:8000 (bound on host 0.0.0.0 and port 8000)
INFO Services: 1
INFO [ RUNNING ] postgres-db (using testcontainers-go)
INFO Total handlers count: 10
INFO Prefork: Disabled
INFO PID: 36210
INFO Total process count: 1

If we open a terminal and check the running containers, we see the PostgreSQL container is running:

docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8dc70e1124da postgres:16 "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 127.0.0.1:32911->5432/tcp postgres-db-todos

Notice two important things:

The container name is postgres-db-todos, the name we gave it in the setupPostgres function via tc.WithReuseByName.

The container maps the standard PostgreSQL port 5432 to a dynamically assigned host port, 32911 in this run. This is a Testcontainers feature to avoid port conflicts when running multiple containers of the same type, making execution reliable and repeatable. To learn more about this, please refer to the Testcontainers documentation. If you ever need that mapped address yourself, see the short sketch below.
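The Fiber service already resolves that address for us through ConnectionString(), but if you want the raw host and port, a small helper along these lines works with the core testcontainers-go API. This is a sketch, not part of the sample repository, and assumes the same imports as config_dev.go plus context and log:

// logPostgresAddress prints where the reused PostgreSQL container is reachable
// from the host. Hypothetical helper, not part of the sample repository.
func logPostgresAddress(ctx context.Context, srv *testcontainers.ContainerService[*postgres.PostgresContainer]) error {
    host, err := srv.Container().Host(ctx)
    if err != nil {
        return fmt.Errorf("get container host: %w", err)
    }
    // "5432/tcp" is the container port; MappedPort returns the random host port.
    port, err := srv.Container().MappedPort(ctx, "5432/tcp")
    if err != nil {
        return fmt.Errorf("get mapped port: %w", err)
    }
    log.Printf("postgres available at %s:%s", host, port.Port())
    return nil
}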

Fast Dev Loop

If we now stop the application with air, we see the container is stopped, thanks to the graceful shutdown implemented in the application.

Best of all, if you let air handle reloads and you update the application, air will hot-reload it and the PostgreSQL container will be reused, so we don’t need to wait for it to start again. Sweet!

Check out the full example in the GitHub repository.

Integration Tests

The application includes integration tests for the data access layer, in the app/dal folder. They use Testcontainers to create the database and test it in isolation! Run the tests with:

go test -v ./app/dal

In less than 10 seconds, we have a clean database and our persistence layer is verified to behave as expected!

Thanks to Testcontainers, tests can run alongside the application, each using its own isolated container with random ports.
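The tests in app/dal aren’t reproduced in this post, but the general pattern looks roughly like the sketch below: each test starts its own throwaway PostgreSQL container with the testcontainers-go postgres module and points the data access layer at it. The trailing comment names are placeholders, not the repository’s real helpers.

package dal_test

import (
    "context"
    "testing"

    "github.com/testcontainers/testcontainers-go/modules/postgres"
)

func TestTodoDAL(t *testing.T) {
    ctx := context.Background()

    // Start an isolated PostgreSQL container for this test.
    pgContainer, err := postgres.Run(ctx, "postgres:16",
        postgres.WithDatabase("todos"),
        postgres.WithUsername("postgres"),
        postgres.WithPassword("postgres"),
        postgres.BasicWaitStrategies(),
    )
    if err != nil {
        t.Fatalf("start postgres: %v", err)
    }
    t.Cleanup(func() {
        if err := pgContainer.Terminate(ctx); err != nil {
            t.Logf("terminate postgres: %v", err)
        }
    })

    connString, err := pgContainer.ConnectionString(ctx, "sslmode=disable")
    if err != nil {
        t.Fatalf("connection string: %v", err)
    }

    // Point the data access layer at the container and exercise it, e.g.
    // database.Connect(connString) followed by the dal functions under test
    // (placeholder names).
    _ = connString
}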

Conclusion

Fiber v3’s Services abstraction combined with Testcontainers unlocks a simple, production-like local dev experience. No more hand-crafted scripts, no more out-of-sync environments — just Go code that runs clean everywhere, providing a “Clone & Run” experience. Besides that, using Testcontainers offers a unified developer experience for both integration testing and local development, a great way to test your application cleanly and deterministically—with real dependencies.

Because we’ve separated configuration for production and local development, the same codebase can cleanly support both environments—without polluting production with development-only tools or dependencies.

What’s next?

Check the different testcontainers modules in the Testcontainers Modules Catalog.

Check the Testcontainers Go repository for more information about the Testcontainers Go library.

Try Testcontainers Cloud to run the Service containers in a reliable manner, locally and in your CI.

Have feedback or want to share how you’re using Fiber v3? Drop a comment or open an issue in the GitHub repo!

Source: https://blog.docker.com/feed/

Powering Local AI Together: Docker Model Runner on Hugging Face

At Docker, we always believe in the power of community and collaboration. It reminds me of what Robert Axelrod said in The Evolution of Cooperation: “The key to doing well lies not in overcoming others, but in eliciting their cooperation.” And what better place for Docker Model Runner to foster this cooperation than at Hugging Face, the well-known gathering place for the AI, ML, and data science community? We’re excited to share that developers can now use Docker Model Runner as the local inference engine for running models on Hugging Face, and filter for the models that Model Runner supports!

Of course, Docker Model Runner has supported pulling models directly from Hugging Face repositories for some time now:

docker model pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF

Local Inference with Model Runner on Hugging Face

Until now, though, it has been cumbersome to rummage through the vast collection of models available on Hugging Face and find repositories that work with Docker Model Runner. Not anymore: Hugging Face now supports Docker as a Local Apps provider, so you can select it as the local inference engine to run models. And you don’t even have to configure it in your account; it is already selected as a default Local Apps provider for all users.

Figure 1: Docker Model Runner is a new inference engine available in Hugging Face for running local models.

This makes running a model directly from Hugging Face as easy as visiting a repository page, selecting Docker Model Runner as the Local Apps provider, and executing the provided snippet:

Figure 2: Running models from Hugging Face using Docker Model Runner is now a breeze!

You can even get the list of all models supported by Docker Model Runner (meaning repositories containing models in GGUF format) through a search filter!

Figure 3: Easily discover models supported by Docker Model Runner with a search filter in Hugging Face

We are very happy that Hugging Face is now a first-class source for Docker Model Runner models, making model discovery as routine as pulling a container image. It’s a small change, but one that quietly shortens the distance between research and runnable code.

Conclusion

With Docker Model Runner now directly integrated on Hugging Face, running local inference just got a whole lot more convenient. Developers can filter for compatible models, pull them with a single command, and get the run command directly from the Hugging Face UI using Docker Model Runner as the Local Apps engine. This tighter integration makes model discovery and execution feel as seamless as pulling a container image. 

And coming back to Robert Axelrod and The Evolution of Cooperation, Docker Model Runner has been an open-source project from the very beginning, and we are interested in building it together with the community. So head over to GitHub, check out our repositories, log issues and suggestions, and let’s keep on building the future together.

Learn more

Sign up for the Docker Offload Beta and get 300 free minutes to run resource-intensive models in the cloud, right from your local workflow.

Get an inside look at the design architecture of the Docker Model Runner. 

Explore the story behind our model distribution specification

Read our quickstart guide to Docker Model Runner.

Find documentation for Model Runner.

Visit our new AI solution page

New to Docker? Create an account. 

Source: https://blog.docker.com/feed/

AI-Powered Testing: Using Docker Model Runner with Microcks for Dynamic Mock APIs

The non-deterministic nature of LLMs makes them ideal for generating dynamic, rich test data, perfect for validating app behavior and ensuring consistent, high-quality user experiences. Today, we’ll walk you through how to use Docker’s Model Runner with Microcks to generate dynamic mock APIs for testing your applications.

Microcks is a powerful CNCF tool that allows developers to quickly spin up mock services for development and testing. By providing predefined mock responses or generating them directly from an OpenAPI schema, you can point your applications to consume these mocks instead of hitting real APIs, enabling efficient and safe testing environments.

Docker Model Runner is a convenient way to run LLMs locally within your Docker Desktop. It provides an OpenAI-compatible API, allowing you to integrate sophisticated AI capabilities into your projects seamlessly, using local hardware resources.

By integrating Microcks with Docker Model Runner, you can enrich your mock APIs with AI-generated responses, creating realistic and varied data that is less rigid than static examples.

In this guide, we’ll explore how to set up these two tools together, giving you the benefits of dynamic mock generation powered by local AI.

Setting up Docker Model Runner

To start, ensure you’ve enabled Docker Model Runner as described in our previous blog on configuring Goose for a local AI assistant setup. Next, select and pull your desired LLM model from Docker Hub. For example:

docker model pull ai/qwen3:8B-Q4_0

Configuring Microcks with Docker Model Runner

First, clone the Microcks repository:

git clone https://github.com/microcks/microcks --depth 1

Navigate to the Docker Compose setup directory:

cd microcks/install/docker-compose

You’ll need to adjust some configurations to enable the AI Copilot feature within Microcks. In the /config/application.properties file, configure the AI Copilot to use Docker Model Runner:

ai-copilot.enabled=true
ai-copilot.implementation=openai
ai-copilot.openai.api-key=irrelevant
ai-copilot.openai.api-url=http://model-runner.docker.internal:80/engines/llama.cpp/
ai-copilot.openai.timeout=600
ai-copilot.openai.maxTokens=10000
ai-copilot.openai.model=ai/qwen3:8B-Q4_0

We’re using model-runner.docker.internal:80 as the base URL for the OpenAI-compatible API. Docker Model Runner is available at that address from containers running in Docker Desktop. Using it ensures direct communication between the containers and Model Runner and avoids unnecessary networking through the host machine’s ports.

Next, enable the copilot feature itself by adding this line to the Microcks config/features.properties file:

features.feature.ai-copilot.enabled=true

Running Microcks

Start Microcks with Docker Compose in development mode:

docker-compose -f docker-compose-devmode.yml up

Once up, access the Microcks UI at http://localhost:8080.

Install the example API for testing. Click through these buttons on the Microcks page: Microcks Hub → MicrocksIO Samples APIs → pastry-api-openapi v.2.0.0 → Install → Direct import → Go.

Figure 1: Screenshot of the Pastry API 2.0 page on Microcks Hub with option to install.

Using AI Copilot samples

Within the Microcks UI, navigate to the service page of the imported API and select an operation you’d like to enhance. Open the “AI Copilot Samples” dialog, prompting Microcks to query the configured LLM via Docker Model Runner.

Figure 2: A display of the “AI Copilot Samples” dialog inside Microcks.

You may notice increased GPU activity as the model processes your request.

After processing, the AI-generated mock responses are displayed, ready to be reviewed or added directly to your mocked operations.

Figure 3: Mocked data generated within the AI Copilot Suggested Samples on Microcks.

You can easily test the generated mocks with a simple curl command. For example:

curl -X PATCH 'http://localhost:8080/rest/API+Pastry+-+2.0/2.0.0/pastry/Chocolate+Cake' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"status":"out_of_stock"}'

{
  "name" : "Chocolate Cake",
  "description" : "Rich chocolate cake with vanilla frosting",
  "size" : "L",
  "price" : 12.99,
  "status" : "out_of_stock"
}

This returns a realistic, AI-generated response that enhances the quality and reliability of your test data. 

Now you could use this approach in your tests; for example, in a shopping cart application that depends on an inventory service. With realistic yet randomized mocked data, you can cover more application behaviors with the same set of tests. For better reproducibility, you can also specify the Docker Model Runner dependency and the chosen model explicitly in your compose.yml:

models:
  qwen3:
    model: ai/qwen3:8B-Q4_0
    context_size: 8096

Then starting the compose setup will pull the model too and wait for it to be available, the same way it does for containers.

Conclusion

Docker Model Runner is an excellent local resource for running LLMs and provides compatibility with OpenAI APIs, allowing for seamless integration into existing workflows. Tools like Microcks can leverage Model Runner to generate dynamic sample responses for mocked APIs, giving you richer, more realistic synthetic data for integration testing.

If you have local AI workflows or just run LLMs locally, please discuss with us in the Docker Forum! We’d love to explore more local AI integrations with Docker.

Learn more

Get an inside look at the design architecture of the Docker Model Runner. 

Explore the story behind our model distribution specification

Read our quickstart guide to Docker Model Runner.

Find documentation for Model Runner.

Visit our new AI solution page

Subscribe to the Docker Navigator Newsletter.

New to Docker? Create an account. 

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

Build a GenAI App With Java Using Spring AI and Docker Model Runner

When thinking about starting a Generative AI (GenAI) project, you might assume that Python is required to get started in this new space. However, if you’re already a Java developer, there’s no need to learn a new language. The Java ecosystem offers robust tools and libraries that make building GenAI applications both accessible and productive.

In this blog, you’ll learn how to build a GenAI app using Java. We’ll do a step-by-step demo to show you how RAG enhances the model response, using Spring AI and Docker tools. Spring AI integrates with many model providers (for both chat and embeddings), vector databases, and more. In our example, we’ll use the OpenAI and Qdrant modules provided by the Spring AI project to take advantage of built-in support for these integrations. Additionally, we’ll use Docker Model Runner (instead of a cloud-hosted OpenAI model), which offers an OpenAI-compatible API, making it easy to run AI models locally. We’ll automate the testing process using Testcontainers and Spring AI’s tools to ensure the LLM’s answers are contextually grounded in the documents we’ve provided. Last, we’ll show you how to use Grafana for observability and ensure our app behaves as designed. 

Getting started 

Let’s start building a sample application by going to Spring Initializr and choosing the following dependencies: Web, OpenAI, Qdrant Vector Database, and Testcontainers.

It’ll have two endpoints: a "/chat" endpoint that interacts directly with the model and a "/rag" endpoint that provides the model with additional context from documents stored in the vector database.

Configuring Docker Model Runner

Enable Docker Model Runner in your Docker Desktop or Docker Engine as described in the official documentation.

Then pull the following two models:

docker model pull ai/llama3.1
docker model pull ai/mxbai-embed-large

ai/llama3.1 – chat model

ai/mxbai-embed-large – embedding model

Both models are hosted at Docker Hub under the ai namespace. You can also pick a specific tag for the model; different tags usually provide different quantizations of the model. If you don’t know which tag to pick, the default one is a good starting point.

Building the GenAI app

Let’s create a ChatController under /src/main/java/com/example, which will be our entry point to interact with the chat model:

@RestController
public class ChatController {

    private final ChatClient chatClient;

    public ChatController(ChatModel chatModel) {
        this.chatClient = ChatClient.builder(chatModel).build();
    }

    @GetMapping("/chat")
    public String generate(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        return this.chatClient.prompt().user(message).call().content();
    }

}

ChatClient is the interface that provides the available operations to interact with the model. We’ll be injecting the actual model value (which model to use) via configuration properties.

If no message query param is provided, then we’ll ask the model to tell a joke (as seen in the defaultValue).

Let’s configure our application to point to Docker Model Runner and use the “ai/llama3.1” model by adding the following properties to /src/test/resources/application.properties

spring.ai.openai.base-url=http://localhost:12434/engines
spring.ai.openai.api-key=test
spring.ai.openai.chat.options.model=ai/llama3.1

spring.ai.openai.api-key is required by the framework, but we can use any value here since it is not needed for Docker Model Runner.

Let’s start our application by running ./mvnw spring-boot:test-run or ./gradlew bootTestRun and ask it about Testcontainers:

http :8080/chat message=="What’s testcontainers?"

Below, we can find the answer provided by the LLM (ai/llama3.1)

Testcontainers is a fantastic and increasingly popular library for **local testing with containers**. Let's break down what it is, why it's useful, and how it works:

**What is Testcontainers?**

Testcontainers provides a way to run real, fully functional containerized services (like databases, message queues, web servers, etc.) directly within your tests. Instead of relying on mocked or stubbed versions of these services, you're using the *actual* services, which leads to much more realistic and reliable test results.

**Why Use Testcontainers?**

* **Realistic Testing:** This is the biggest benefit. Mocking databases or message queues can be brittle and difficult to maintain. Testcontainers provides a service that behaves exactly like the real thing, leading to tests that more accurately reflect how your application will perform in production.
* **Simplified Test Setup:** Forget about manually setting up and configuring databases or other services on your test machine. Testcontainers automatically handles the container creation, configuration, and cleanup for you.
* **Faster Tests:** Because the services are running locally, there’s no network latency involved, resulting in significantly faster test execution times.
* **Consistent Environments:** You eliminate the "it works on my machine" problem. Everyone running the tests will be using the same, pre-configured environment.
* **Supports Many Services:** Testcontainers supports a huge range of services, including:
* **Databases:** PostgreSQL, MySQL, MongoDB, Redis, Cassandra, MariaDB
* **Message Queues:** RabbitMQ, Kafka, ActiveMQ
* **Web Servers:** Tomcat, Jetty, H2 (for in-memory databases)
* **And many more!** The list is constantly growing.

**How Does It Work?**

1. **Client Library:** Testcontainers provides client libraries for various programming languages (Java, Python, JavaScript, Ruby, Go, .NET, and more).
2. **Container Run:** When you use the Testcontainers client library in your test, it automatically starts the specified container (e.g., a PostgreSQL database) in the background.
3. **Connection:** Your test code then connects to the running container using standard protocols (e.g., JDBC for PostgreSQL, HTTP for a web server).
4. **Test Execution:** You execute your tests as usual.
5. **Cleanup:** When the tests are finished, Testcontainers automatically shuts down the container, ensuring a clean state for the next test run.

**Example (Conceptual – Python):**

```python
from testcontainers.postgresql import PostgreSQLEnvironment

# Create a PostgreSQL environment
env = PostgreSQLEnvironment()

# Start the container
env.start()

# Connect to the database
db = env.db() # This creates a connection object to the running PostgreSQL container

# Perform database operations in your test
# …

# Stop the container (cleanup)
env.shutdown()
```

**Key Concepts:**

* **Environment:** A Testcontainers environment is a configuration that defines which containers to run and how they should be configured.
* **Container:** A running containerized service (e.g., a database instance).
* **Connection:** An object that represents a connection to a specific container.

**Resources to Learn More:**

* **Official Website:** [https://testcontainers.io/](https://testcontainers.io/) – This is the best place to start.
* **GitHub Repository:** [https://github.com/testcontainers/testcontainers](https://github.com/testcontainers/testcontainers) – See the source code and contribute.
* **Documentation:** [https://testcontainers.io/docs/](https://testcontainers.io/docs/) – Comprehensive documentation with examples for various languages.

**In short, Testcontainers is a powerful tool that dramatically improves the quality and reliability of your local tests by allowing you to test against real, running containerized services.**

Do you want me to delve deeper into a specific aspect of Testcontainers, such as:

* A specific language implementation (e.g., Python)?
* A particular service it supports (e.g., PostgreSQL)?
* How to integrate it with a specific testing framework (e.g., JUnit, pytest)?

We can see that the answer provided by the LLM has some mistakes. For example, PostgreSQLEnvironment doesn’t exist in testcontainers-python, and the documentation links are wrong: testcontainers.io doesn’t exist. So we can see some hallucinations in the answer.

Of course, LLM responses are non-deterministic, and since each model is trained until a certain cutoff date, the information may be outdated, and the answers might not be accurate.

To improve this situation, let’s provide the model with some curated context about Testcontainers!

We’ll create another controller, RagController, which will retrieve documents from a vector search database.

@RestController
public class RagController {

    private final ChatClient chatClient;

    private final VectorStore vectorStore;

    public RagController(ChatModel chatModel, VectorStore vectorStore) {
        this.chatClient = ChatClient.builder(chatModel).build();
        this.vectorStore = vectorStore;
    }

    @GetMapping("/rag")
    public String generate(@RequestParam(value = "message", defaultValue = "What's Testcontainers?") String message) {
        return callResponseSpec(this.chatClient, this.vectorStore, message).content();
    }

    static ChatClient.CallResponseSpec callResponseSpec(ChatClient chatClient, VectorStore vectorStore,
            String question) {
        QuestionAnswerAdvisor questionAnswerAdvisor = QuestionAnswerAdvisor.builder(vectorStore)
                .searchRequest(SearchRequest.builder().topK(1).build())
                .build();
        return chatClient.prompt().advisors(questionAnswerAdvisor).user(question).call();
    }

}

Spring AI provides many advisors. In this example, we are going to use the QuestionAnswerAdvisor to perform the query against the vector search database. It takes care of all the individual integrations with the vector database.

Ingesting documents into the vector database

First, we need to load the relevant documents into the vector database. Under src/test/java/com/example, let’s create an IngestionConfiguration class:

@TestConfiguration(proxyBeanMethods = false)
public class IngestionConfiguration {

    @Value("classpath:/docs/testcontainers.txt")
    private Resource testcontainersDoc;

    @Bean
    ApplicationRunner init(VectorStore vectorStore) {
        return args -> {
            var javaTextReader = new TextReader(this.testcontainersDoc);
            javaTextReader.getCustomMetadata().put("language", "java");

            var tokenTextSplitter = new TokenTextSplitter();
            var testcontainersDocuments = tokenTextSplitter.apply(javaTextReader.get());

            vectorStore.add(testcontainersDocuments);
        };
    }

}

The testcontainers.txt file, under the /src/test/resources/docs directory, will have the following Testcontainers-specific content. For a real-world use case, you would probably have a more extensive collection of documents.

Testcontainers is a library that provides easy and lightweight APIs for bootstrapping local development and test dependencies with real services wrapped in Docker containers. Using Testcontainers, you can write tests that depend on the same services you use in production without mocks or in-memory services.

Testcontainers provides modules for a wide range of commonly used infrastructure dependencies including relational databases, NoSQL datastores, search engines, message brokers, etc. See https://testcontainers.com/modules/ for a complete list.

Technology-specific modules are a higher-level abstraction on top of GenericContainer which help configure and run these technologies without any boilerplate, and make it easy to access their relevant parameters.

Official website: https://testcontainers.com/
Getting Started: https://testcontainers.com/getting-started/
Module Catalog: https://testcontainers.com/modules/

Now, let’s add the following additional properties to the src/test/resources/application.properties file:

spring.ai.openai.embedding.options.model=ai/mxbai-embed-large
spring.ai.vectorstore.qdrant.initialize-schema=true
spring.ai.vectorstore.qdrant.collection-name=test

ai/mxbai-embed-large is an embedding model that will be used to create the embeddings of the documents, which will be stored in the vector search database, in our case Qdrant. Spring AI will initialize the Qdrant schema and use the collection named test.

Let’s update our TestDemoApplication Java class and add IngestionConfiguration.class:

public class TestDemoApplication {

    public static void main(String[] args) {
        SpringApplication.from(DemoApplication::main)
            .with(TestcontainersConfiguration.class, IngestionConfiguration.class)
            .run(args);
    }

}

Now we start our application by running ./mvnw spring-boot:test-run or ./gradlew bootTestRun and ask it again about Testcontainers:

http :8080/rag message=="What's testcontainers?"

This time, the answer contains references from the docs we have provided and is more accurate.

Testcontainers is a library that helps you write tests for your applications by bootstrapping real services in Docker containers, rather than using mocks or in-memory services. This allows you to test your applications as they would run in production, but in a controlled and isolated environment.

It provides modules for commonly used infrastructure dependencies such as relational databases, NoSQL datastores, search engines, and message brokers.

If you have any specific questions about how to use Testcontainers or its features, I'd be happy to help.

Integration testing

Testing is a key part of software development. Fortunately, Testcontainers and Spring AI’s utilities support testing of GenAI applications. So far, we’ve been testing the application manually: starting it, sending requests to the endpoints, and verifying the correctness of the responses ourselves. Now we’re going to automate this by writing an integration test that checks whether the answer provided by the LLM is grounded in the information from the documents we supplied.

@SpringBootTest(classes = { TestcontainersConfiguration.class, IngestionConfiguration.class },
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class RagControllerTest {

    @LocalServerPort
    private int port;

    @Autowired
    private VectorStore vectorStore;

    @Autowired
    private ChatClient.Builder chatClientBuilder;

    @Test
    void verifyTestcontainersAnswer() {
        var question = "Tell me about Testcontainers";
        var answer = retrieveAnswer(question);

        assertFactCheck(question, answer);
    }

    private String retrieveAnswer(String question) {
        RestClient restClient = RestClient.builder().baseUrl("http://localhost:%d".formatted(this.port)).build();
        return restClient.get().uri("/rag?message={question}", question).retrieve().body(String.class);
    }

    private void assertFactCheck(String question, String answer) {
        FactCheckingEvaluator factCheckingEvaluator = new FactCheckingEvaluator(this.chatClientBuilder);
        EvaluationResponse evaluate = factCheckingEvaluator.evaluate(new EvaluationRequest(docs(question), answer));
        assertThat(evaluate.isPass()).isTrue();
    }

    private List<Document> docs(String question) {
        var response = RagController
            .callResponseSpec(this.chatClientBuilder.build(), this.vectorStore, question)
            .chatResponse();
        return response.getMetadata().get(QuestionAnswerAdvisor.RETRIEVED_DOCUMENTS);
    }

}

Importing TestcontainersConfiguration provides the Qdrant container.

Importing IngestionConfiguration loads the documents into the vector database.

We’re going to use the FactCheckingEvaluator to have the chat model (ai/llama3.1) verify the answer provided by the LLM against the documents stored in the vector database.

Note: The integration test uses the same model we declared in the previous steps, but you can certainly use a different model for the evaluation.

Automating your tests ensures consistency and reduces the risk of errors that often come with manual execution. 

Observability with the Grafana LGTM Stack

Finally, let’s introduce some observability into our application. By introducing metrics and tracing, we can understand if our application is behaving as designed during development and in production.

Add the following dependencies to the pom.xml

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-otlp</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-otlp</artifactId>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>grafana</artifactId>
    <scope>test</scope>
</dependency>

Now, let’s create GrafanaContainerConfiguration under src/test/java/com/example.

@TestConfiguration(proxyBeanMethods = false)
public class GrafanaContainerConfiguration {

    @Bean
    @ServiceConnection
    LgtmStackContainer lgtmContainer() {
        return new LgtmStackContainer("grafana/otel-lgtm:0.11.4");
    }

}

Grafana provides the grafana/otel-lgtm image, which starts Prometheus, Tempo, the OpenTelemetry Collector, and other related services, all combined into a single convenient Docker image.

For the sake of our demo, let’s add a couple of properties to the src/test/resources/application.properties file so that 100% of requests are sampled.

spring.application.name=demo
management.tracing.sampling.probability=1

Update the TestDemoApplication class to include GrafanaContainerConfiguration.class:

public class TestDemoApplication {

    public static void main(String[] args) {
        SpringApplication.from(DemoApplication::main)
            .with(TestcontainersConfiguration.class, IngestionConfiguration.class, GrafanaContainerConfiguration.class)
            .run(args);
    }

}

Now, run ./mvnw spring-boot:test-run or ./gradlew bootTestRun one more time, and perform a request.

http :8080/rag message=="What's testcontainers?"

Then, look for the following text in the logs.

o.t.grafana.LgtmStackContainer : Access to the Grafana dashboard:

http://localhost:64908

The port may be different for you, but clicking the link should open the Grafana dashboard. This is where you can query metrics related to the model or vector search, and also inspect the traces.

Figure 1: Grafana dashboard showing model metrics, vector search performance, and traces

We can also display the token usage metrics for the chat endpoint.

Figure 2: Grafana dashboard panel displaying token usage metrics for the chat endpoint

List the traces for the service named “demo” to see the operations executed as part of each trace. Open the trace with the name http get /rag to see the full control flow within a single HTTP request.

Figure 3:  Grafana dashboard showing trace details for a /rag endpoint in a Java GenAI application

Conclusion

Docker offers powerful capabilities that complement the Spring AI project, allowing developers to build GenAI applications efficiently with Docker tools they know and trust. It simplifies the startup of service dependencies, including Docker Model Runner, which exposes an OpenAI-compatible API for running local models. Testcontainers helps you quickly spin up integration tests to evaluate your app by providing lightweight containers for your services and dependencies. From development to testing, Docker and Spring AI have proven to be a reliable and productive combination for building modern AI-driven applications.

Learn more

Get an inside look at the design architecture of the Docker Model Runner. 

Explore the story behind our model distribution specification

Read our quickstart guide to Docker Model Runner.

Find documentation for Model Runner.

Visit our new AI solution page

Quelle: https://blog.docker.com/feed/

Docker Brings Compose to the Agent Era: Building AI Agents is Now Easy

Agents are the future, and if you haven’t already started building agents, you probably will soon. Across industries and use cases, agents can offload repetitive work because they can act on our behalf with judgment and context.

But while agentic development is moving fast, today it’s tedious, hard, and not fun: you need to quickly iterate with different prompts and models (both frontier models and local/open models), you need to find and connect MCP tools to internal data securely, and you need to declaratively package everything so that others can run your agent. And you need this to be built once, and run anywhere: on your laptop, in CI, or in production.

These problems are not new: they are what Docker was originally conceived for. It’s not an overstatement to say that once upon a time, Docker made microservices possible, and today we’re excited to share how we’re evolving Docker for the era of agents.

Launching today: Compose enters the agent era

Starting today, Docker makes it easy to build, ship, and run agents and agentic applications. Docker Compose launched a decade ago and solved the problem of how to build and describe multi-container applications. It’s used and loved by millions of developers every day, which is why we’re excited to announce that we’ve brought agent building blocks to Compose.

Now, with just a compose.yaml, you can define your open models, agents, and MCP-compatible tools, then spin up your full agentic stack with a simple docker compose up. From dev to production (more on this later), your agents are wired, connected, and ready to run. 
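To make that concrete, here is a minimal, illustrative compose.yaml sketch. The service layout, model tag, gateway image, and enabled MCP server are assumptions chosen for the example rather than a prescribed setup; check the Compose documentation for the exact attribute names supported by your version.

models:
  llm:
    model: ai/llama3.1   # open-weight model pulled and served by Docker Model Runner

services:
  agent:
    build: .             # your agent code, talking to the model via an OpenAI-compatible endpoint
    models:
      - llm
    depends_on:
      - mcp-gateway

  mcp-gateway:
    image: docker/mcp-gateway               # assumption: gateway image exposing MCP tools to the agent
    command: --transport=sse --servers=wikipedia-mcp

Running docker compose up on a file like this brings up the model, the MCP gateway, and the agent service together.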

Not just that. Compose is also seamlessly integrated with today’s most popular agentic frameworks:

LangGraph – Define your LangGraph workflow, wrap it as a service, plug it into compose.yaml, and run the full graph with docker compose up. Try the LangGraph tutorial.

Embabel – Use Compose to connect models, embed tools, and get a complete Embabel environment running. Explore the quickstart guide.

Vercel AI SDK – Compose makes it easy to stand up supporting agents and services locally. Check out the Vercel AI examples.

Spring AI – Use Compose to spin up vector stores, model endpoints, and agents alongside your Spring AI backend. View the Spring AI samples.

CrewAI – Compose lets you containerize CrewAI agents. Try the CrewAI Getting Started guide.

Google’s ADK – Easily deploy your ADK-based agent stack with Docker Compose: agents, tools, and routing layers all defined in a single file. Try our example.

Agno – Use Compose to run your Agno-based agents and tools effortlessly. Explore the Agno example.

But the power of the new Docker Compose goes beyond SDKs: it’s deeply integrated with Docker’s broader suite of AI features.

Docker’s MCP Catalog gives you instant access to a growing library of trusted, plug-and-play tools for your agents. No need to dig through repos, worry about compatibility, or wire things up manually. Just drop what you need into your Compose file and you’re up and running.

Docker Model Runner lets you pull open-weight LLMs directly from Docker Hub, run them locally, and interact with them via built-in OpenAI-compatible endpoints, so your existing SDKs and libraries just work, no rewrites, no retooling. And they run with full GPU acceleration. But what if you don’t have enough local resources?

Introducing Docker Offload: Cloud power, local simplicity

When building agents, local resource limits shouldn’t slow you down. That’s why we’re introducing Docker Offload, a truly seamless way to run your models and containers on a cloud GPU.

Docker Offload frees you from infrastructure constraints by offloading compute-intensive workloads, like large language models and multi-agent orchestration, to high-performance cloud environments. No complex setup, no GPU shortages, no configuration headaches.

With native integration into Docker Desktop and Docker Engine, Docker Offload gives you a one-click path from Compose to cloud. Build, test, and scale your agentic applications just like you always have locally, while Docker handles the heavy lifting behind the scenes. It’s the same simple docker compose up experience, now supercharged with the power of the cloud.

And to get you started, we’re offering 300 minutes of free Offload usage. Try it out, build your agents, and scale effortlessly from your laptop to the cloud.

Compose is now production-ready with Google Cloud and Microsoft Azure

Last, but certainly not least, we’ve worked hard to make sure that the exact same Compose file you used during development works in production, with no rewrites and no reconfiguration.

We’re proud to announce new integrations with Google Cloud Run and Microsoft Azure Container Apps that allow Docker Compose to specify a serverless architecture. For example, with Google Cloud, you can deploy your agentic app directly to a serverless environment using the new gcloud run compose up command. And we’re working closely with Microsoft to bring this seamless experience to Azure as well.

From the first line of YAML to production deployment, Compose makes the entire journey consistent, portable, and effortless, just the way agentic development should be.

Let’s Compose the future. Together.

The future of software is agentic, where every developer builds goal-driven, multi-LLM agents that reason, plan, and act across a rich ecosystem of tools and services. 

With Docker Compose, Docker Offload, Docker’s broader AI capabilities, and our partnerships with Google, Microsoft, and Agent SDKs, we’re making that future accessible to, and easy for, everyone. 

In short: Docker is the easiest way to build, run, and scale intelligent agents, from development to production.

We can’t wait to see what you create.

Resources

Docker is simplifying Agent Development 

Explore the capabilities of Docker Offload

Learn more about our AI Agent: Ask Gordon 

Build Agentic Apps with Docker Compose 

Learn more about Docker Model Runner

Quelle: https://blog.docker.com/feed/

The 2025 Docker State of Application Development Report

Executive summary

The 2025 Docker State of Application Development Report offers an ultra high-resolution view of today’s fast-evolving dev landscape. Drawing insights from over 4,500 developers, engineers, and tech leaders — three times more respondents than last year — the survey explores tools, workflows, pain points, and industry trends. In this, our third report, key themes emerge: AI is gaining ground but adoption remains uneven; security is now a shared responsibility across teams; and developers still face friction in the inner loop despite better tools and culture. With a broader, more diverse respondent base than our previous, more IT-focused surveys, this year’s report delivers a richer, more nuanced view of how modern software is built and how organizations operate.

2025 report key findings

IT is among the leaders in AI — with 76% of IT/SaaS pros using AI tools at work, compared with just 22% across industries. Overall, there’s a huge spread across industries — from 1% to 84%. 

Security is no longer a siloed specialty — especially when vulnerabilities strike. Just 1 in 5 organizations outsource security, and it’s top of mind at most others: only 1% of respondents say security is not a concern at their organization.

Container usage soared to 92% in the IT industry — up from 80% in our 2024 survey. But adoption is lower across other industries, at 30%. IT’s greater reliance on microservice-based architectures — and the modularity and scalability that containers provide — could explain the disparity.

Non-local dev environments are now the norm — not the exception. In a major shift from last year, 64% of developers say they use non-local environments as their primary development setup, with local environments now accounting for only 36% of dev workflows. 

Data quality is the bottleneck when it comes to building AI/ML-powered apps — and it affects everything downstream. 26% of AI builders say they’re not confident in how to prep the right datasets — or don’t trust the data they have. 

Developer productivity, AI, and security are key themes

Like last year’s survey, our 2025 report drills down into three main areas:

What’s helping devs thrive — and what’s still holding them back?

AI is changing software development — but not how you think

Security — it’s a team sport

1. What’s helping devs thrive — and what’s still holding them back?

Great culture, better tools — but developers often still hit sticking points. From pull requests held up in review to tasks without clear estimates, the inner loop remains cluttered with surprisingly persistent friction points.

How devs learn — and what’s changing

Self-guided learning is on the upswing. Across all industries, fully 85% of respondents turn to online courses or certifications, far outpacing traditional sources like school (33%), books (25%), or on-the-job training (25%). 

Among IT folks, the picture is more nuanced. School is still the top venue for learning to code (65%, up from 57% in our 2024 survey), but online resources are also trending upward. Some 63% of IT pros learned coding skills via online resources (up from 54% in our 2024 survey) and 57% favored online courses or certifications (up from 45% in 2024).

How devs like to learn

As for how devs prefer to learn, reading documentation tops the list, as in last year’s report — despite the rise of new and interactive forms of learning. Some 29% say they lean on documentation, edging out videos and side projects (28% each) and slightly ahead of structured online training (26%).

AI tools play a relatively minor role in how respondents learn, with GitHub Copilot cited by just 13% overall — and only 9% among IT pros. It’s also cited by 13% as a preferred learning method.

Online resources

When learning to code via online resources, respondents overwhelmingly favored technical documentation (82%) ahead of written tutorials and how-to videos (66% each), and blogs (63%).

Favorite online courses or certifications included Coursera (28%), LinkedIn Learning (24%), and Pluralsight (23%).

Discovering new tools

When it comes to finding out about new tools, developers tend to trust the opinions and insights of other developers — as evidenced by the top four selected options. Across industries, the main ways are developer communities, social media, and blogs (tied at 23%), followed closely by friends/colleagues (22%).

Within the IT industry only, the findings mirror last year’s, though blogs have moved up from third place to first. The primary ways devs learn about new tools are blogs (54%), developer communities (52%), and social media (50%), followed by searching online (48%) and friends/colleagues (46%). Conferences still play a significant role, with 34% of IT folks selecting this response, versus 17% across industries.

Open source contributions

Unsurprisingly, IT industry workers are more active in the open source space: 

48% contributed to open source in the past year, while 52% did not. 

That’s a slight drop from 2024, when 59% reported contributing and 41% had not.

Falling open source contributions could be worth keeping an eye on — especially with growing developer reliance on AI coding copilots. Could AI be chipping away at the need for open source code itself? Future studies could reveal whether this is a blip or a trend.

Across industries, just 13% made open source contributions, while 87% did not. But the desire to contribute is widespread — spanning industries as diverse as energy and utilities (91%), media or digital and education (90% each), and IT and engineering or manufacturing (82% each).

Mirroring our 2024 study, the biggest barrier to contributing to open source is time — 24%, compared with 40% in last year’s IT-focused study. Other barriers include not knowing where to start (18%) and needing guidance from others on how to contribute (15%).

Employers are often supportive: 37% allow employees to contribute to open source, while just 29% do not.

Tech stack

This year, we dove deeper into the tech stack landscape to understand more about the application structures, languages, and frameworks devs are using — and how they have evolved since the previous survey.

Application structure

Asked about the structure of the applications they work on, respondents’ answers underscored the continued rise of microservices we saw in our 2024 report.

Thirty-five percent said they work on microservices-based applications — far more than those who work on monolithic or hybrid monolith/microservices (24% each), but still behind the 38% who work on client-server apps.

Non-local dev environments are now the norm — not the exception

The tides have officially turned. In 2025, 64% of developers say they use non-local environments as their primary development setup, up from just 36% last year. Local environments — laptops or desktops — now account for only 36% of dev workflows.

What’s driving the shift? A mix of flexible, cloud-based tooling:

Ephemeral or preview environments: 10% (↓ from 12% in 2024)

Personal remote dev environments or clusters: 22% (↑ from 11%)

Other remote dev tools (e.g., Codespaces, Gitpod, JetBrains Space): 12% (↑ from 8%)

Compared to 2024, adoption of persistent, personal cloud environments has doubled, while broader usage of remote dev tools is also on the rise.

Bottom line: As we’ve tracked since our first app dev report in 2022, the future of software development is remote, flexible, and increasingly cloud-native.

IDP adoption remains low — except at larger companies

Internal Developer Portals (IDPs) may be buzzy, but adoption is still in early days. Only 7% of developers say their team currently uses an IDP, while 93% do not. That said, usage climbs with company size. Among orgs with 5,000+ employees, IDP adoption jumps to 36%. IDPs aren’t mainstream yet — but at scale, they’re proving their value.

OS preferences hold steady — with Linux still on top

When it comes to operating systems for app development, Linux continues to lead the pack, used by 53% of developers — the same share as last year. macOS usage has ticked up slightly to 51% (from 50%), while Windows trails just behind at 47% (up from 46%).

The gap among platforms remains narrow, suggesting that today’s development workflows are increasingly OS-flexible — and often cross-platform.

Python surges past JavaScript in language popularity

Python is now the top language among developers, used by 64%, surpassing JavaScript at 57% and Java at 40%. That’s a big jump from 2024, when JavaScript held the lead.

Framework usage is more evenly spread: Spring Boot leads at 19%, with Angular, Express.js, and Flask close behind at 18% each.

Data store preferences are shifting

In 2025, MongoDB leads the pack at 21%, followed closely by MySQL/MariaDB and Amazon RDS (both at 20%). That’s a notable shift from 2024, when PostgreSQL (45%) topped the list.

Tool favorites hold

GitHub, VS Code, and JetBrains editors remain top development tools, as they did in our previous survey. And there’s little change across CI/CD, provisioning, and monitoring tools:

CI/CD: GitHub Actions (40%), GitLab (39%), Jenkins (36%)

Provisioning: Terraform (39%), Ansible (35%), GCP (32%)

Monitoring: Grafana (40%), Prometheus (38%), Elastic (34%)

Containers: the great divide?

Among IT pros, container usage soared to 92% — up from 80% in our 2024 survey. Zoom out to a broader view across industries, however, and adoption appears considerably lower. Just 30% of developers say they use containers in any part of their workflow. 

Why the gap? Differences in app structure may offer an explanation: IT industry respondents work with microservice-based architectures more often than those in other industries (68% versus 31%). So the higher container adoption may stem from IT pros’ need for modularity and scalability — which containers provide in spades.

And among container users, needs are evolving. They want better tools for time estimation (31%, compared to 23% of all respondents) and task planning (18% for both container users and all respondents); in third place they pick monitoring/logging (16%), whereas all respondents pick designing from scratch (18%) — stubborn pain points across the software lifecycle.

An equal-opportunity headache: estimating time

No matter the role, estimating how long a task will take is the most consistent pain point across the board. Whether you’re a front-end developer (28%), data scientist (31%), or a software decision-maker (49%), precision in time planning remains elusive.

Other top roadblocks? Task planning (26%) and pull-request review (25%) are slowing teams down. Interestingly, where people say they need better tools doesn’t always match where they’re getting stuck. Case in point, testing solutions and Continuous Delivery (CD) come up often when devs talk about tooling gaps — even though they’re not always flagged as blockers.

Productivity by role: different hats, same struggles

When you break it down by role, some unique themes emerge:

Experienced developers struggle most with time estimation (42%).

Engineering managers face a three-way tie: planning, time estimation, and designing from scratch (28% each).

Data scientists are especially challenged by CD (21%) — a task not traditionally in their wheelhouse.

Front-end devs, surprisingly, list writing code (28%) as a challenge, closely followed by CI (26%).

Across roles, a common thread stands out: even seasoned professionals are grappling with foundational coordination tasks — not the “hard” tech itself, but the orchestration around it.

Tools vs. culture: two sides of the experience equation

On the tooling side, the biggest callouts for improvement across industries include:

Time estimation (23%)

Task planning (18%)

Designing solutions from scratch (18%)

Within the IT industry specifically, the top priority is the same — but even more prevalent:

Time estimation (31%)

Task planning (18%)

PR review (18%)

But productivity isn’t just about tools — it’s deeply cultural. When asked what’s working well, developers pointed to work-life balance (39%), location flexibility such as work from home policies (38%), and flexible hours (37%) as top cultural strengths.

The weak spots? Career development (38%), recognition (36%), and meaningful work (33%). In other words: developers like where, when, and how they work, but not always why.

What’s easy? What’s not?

While the dev world is full of moving parts, a few areas are surprisingly not challenging:

Editing config files (8%)

Debugging in dev (8%)

Writing config files (7%)

Contrast that with the most taxing areas:

Troubleshooting in production (9%)

Debugging in production (9%)

Security-related tasks (8%)

It’s a reminder that production is still where the stress — and the stakes — are highest.

2. AI is changing software development — but not how you think

Rumors of AI’s pervasiveness in software development have been greatly exaggerated. A look under the hood shows adoption is far from uniform. 

AI in dev workflows: two very different camps

One of the clearest splits we saw in the data is how people use AI at work. There are two groups:

Developers using AI tools like ChatGPT, Copilot, and Gemini to help with everyday tasks — writing, documentation, and research.

Teams building AI/ML applications from the ground up.

IT is among the leaders in AI 

Overall, only 22% of respondents said they use AI tools at work. But that number masks a huge spread across industries — from 1% to 84%. IT/SaaS folks sit near the top of the range at 76%. 

IT’s leadership in AI is even more marked when you look at how many are building AI/ML into apps: 

34% of IT/SaaS respondents say they do.

Only 8% of non-tech industries can say the same.

And the strategy gap is just as big. While 73% of tech companies say they have a clear AI strategy, only 16% of non-tech companies do. Translation: AI is gaining traction, but it’s still living mostly inside tech bubbles.

AI tools: overhyped and indispensable

Here’s the paradox: while 59% of respondents say AI tools are overhyped, 64% say they make their work easier.

Even more telling, 65% of users say they’re using AI more now than they did a year ago — and that they use it daily.

The hype is real. But for many devs, the utility is even more real.

These numbers track with our 2024 findings, too — where nearly two-thirds of devs said AI made their job easier, even as 45% weren’t fully sold on the buzz.

AI tool usage is up — and ChatGPT leads the pack

No big surprise here. ChatGPT is still the most-used AI tool by far. But the gap is growing.

Compared to last year’s survey:

ChatGPT usage jumped from 46% → 82%

GitHub Copilot: 30% → 57%

Google Gemini: 19% → 22%

Expect that trend to continue as more teams test drive (and trust) these tools in production workflows, moving from experimentation to greater integration.

Not all devs use AI the same way

While coding is the top AI use case overall, how devs actually lean on it varies by role:

Seasoned devs use AI for documentation and writing tests — but only lightly.

DevOps engineers use it to write docs and navigate CLI tools.

Software developers often turn to it for research and test automation.

And how dependent they feel on AI also shifts:

Seasoned devs: Often rate their dependence as 0/10.

DevOps engineers: Closer to 7/10.

Software devs: Usually around 5/10.

For comparison, the overall average dependence on AI in our 2024 survey was 4/10 (all users). Looking ahead, it will be interesting to see how dependence on AI shifts and becomes further integrated by role. 

The hidden bottleneck: data prep

When it comes to building AI/ML-powered apps, data is the choke point. A full 26% of AI builders say they’re not confident in how to prep the right datasets — or don’t trust the data they have.

This issue lives upstream but affects everything downstream — time to delivery, model performance, user experience. And it’s often overlooked.

Feelings around AI

How do people really feel about AI tools? Mostly positive — but it’s a mixed bag.

Compared to last year’s survey:

AI tools are a positive option: 65% → 66%

They allow me to focus on more important tasks: 55% → 63%

They make my job more difficult: 19% → 40%

They are a threat to my job: 23% → 44%

The predominant emotions around building AI/ML apps are distinctly positive — enthusiasm, curiosity, and happiness or interest.

3. Security — it’s a team sport

Why everyone owns security now

In the evolving world of software development, one thing is clear — security is no longer a siloed specialty. It’s a team sport, especially when vulnerabilities strike. Forget the myth that only “security people” handle security. Across orgs big and small, roles are blending. If you’re writing code, you’re in the security game. As one respondent put it, “We don’t have dedicated teams — we all do it.” 

Just 1 in 5 organizations outsource security.

Security is top of mind at most others: only 1% of respondents say security is not a concern at their organization.

One exception to this trend: In larger IT organizations (50+ employees), software security is more likely to be the exclusive domain of security engineers, with other types of engineers playing less of a role.

Devs, leads, and ops all claim the security mantle

It’s not just security engineers who are on alert. Team leads, DevOps pros, and senior developers all see themselves as responsible for security. And they’re all right. Security has become woven into every function.

When vulnerabilities hit, it’s all hands on deck

No turf wars here. When scan alerts go off, everyone pitches in — whether it’s security engineers helping experienced devs to decode scan results, engineering managers overseeing the incident, or DevOps engineers filling in where needed.

As in our 2024 survey, fixing vulnerabilities is the most common security task (30%) — followed by logging data analysis, running security scans, monitoring security incidents, and dealing with scan results (all tied at 24% each).

Fixing vulnerabilities is also a major time suck. Last year, respondents pointed to better vulnerability remediation tools as a key gap in the developer experience.

Security tools

For the second year in a row, SonarQube is the most widely used security tool, cited by 11% of respondents. But that’s a noticeable drop from last year’s 24%, likely due to the 2024 survey’s heavier focus on IT professionals.

Dependabot follows at 8%, with Snyk and AWS Security Hub close behind at 7% each — all showing lower adoption compared to last year’s more tech-centric sample.

Security isn’t the bottleneck — planning and execution are

Surprisingly, security doesn’t crack the top 10 issues holding teams back. Planning and execution-type activities are bigger sticking points.

Overall, across all industries and development-focused roles, security issues are the 11th and 14th most selected, way behind planning and execution type activities.

Translation? Security is better integrated into the workflow than ever before. 

Shift-left is yesterday’s news

The once-pervasive mantra of “shift security left” is now only the ninth most important trend (14%) — behind Generative AI (27%), AI assistants for software engineering (23%), and Infrastructure as Code (19%). Has the shift left already happened? Is AI and cloud complexity drowning it out? Or is this further evidence that security is, by necessity, shifting everywhere?

Our 2024 survey identified the shift-left approach as a possible source of frustration for developers and an area where more effective tools could make a difference. Perhaps security tools have gotten better, making it easier to shift left. Or perhaps there’s simply broader acceptance of the shift-left trend.

Shift-left may not be buzzy — but it still matters

It’s no longer a headline-grabber, but security-minded dev leads still value the shift-left mindset. They’re the ones embedding security into design, coding, CI/CD, and deployment.

Even if the buzz has faded, the impact hasn’t.

Conclusion

The 2025 Docker State of Application Development Report captures a fast-changing software landscape defined by AI adoption, evolving security roles, and persistent friction in developer workflows. While AI continues to gain ground, adoption remains uneven across industries. Security has become a shared responsibility, and the shift to non-local dev environments signals a more cloud-native future. Through it all, developers are learning, building, and adapting quickly.

In spotlighting these trends, the report doesn’t just document the now — it charts a path forward. Docker will continue evolving to meet the needs of modern teams, helping developers navigate change, streamline workflows, and build what’s next.

Methodology

The 2025 Docker State of Application Development Report was an online, 25-minute survey conducted by Docker’s User Research Team in the fall of 2024. The distribution was much wider than in previous years due to advertising the survey on a larger range of platforms.

Credits

This research was designed, conducted, and analyzed by the Docker UX Research Team: Rebecca Floyd, Ph.D.; Julia Wilson, Ph.D.; and Olga Diachkova.  

Quelle: https://blog.docker.com/feed/

Docker MCP Gateway: Open Source, Secure Infrastructure for Agentic AI

Since releasing the Docker MCP Toolkit, we’ve seen strong community adoption, including steady growth in MCP server usage and over 1 million pulls from the Docker MCP Catalog. With the community, we’re laying the groundwork by standardizing how developers define, run, and share agent-based workloads with Docker Compose. 

Now, we’re expanding on that foundation with the MCP Gateway, a new open-source project designed to help you move beyond local development and into production environments. The MCP Gateway acts as a secure enforcement point between agents and external tools. It integrates seamlessly with Docker Compose while enhancing the security posture of the broader MCP ecosystem.

We believe that infrastructure of this kind should be transparent, secure, and community-driven, which is why we’re open-sourcing all of this work. We’re excited to announce that the MCP Gateway project is available now in this public GitHub repository!

When we started building the MCP Gateway project, our vision was to enable a wide range of agents to access trusted catalogs of MCP servers. The goal was simple: make it easy and safe to run MCP servers. 

Figure 1: Architecture diagram of the MCP Gateway, securely orchestrating and managing MCP servers

This project’s tools are designed to help users discover, configure, and run MCP workloads. In the sections below, we’ll walk through these tools.

Discovery

To view entries in the current default catalog, use the following CLI command.

docker mcp catalog show

This is the set of servers that are available on your host.

As the Official MCP Registry continues to progress, the details for how MCP server authors publish will change. 

For now, we’ve created a PR-based process for contributing content to the Docker MCP Catalog.

Configure

To safely store secrets on an MCP host or to configure an MCP host to support OAuth-enabled MCP servers, we need to prepare the host. For example, servers like the Brave MCP server require an API key. To prepare your MCP host to inject this secret into the Brave MCP server runtime, we provide a CLI interface.

docker mcp secret set 'brave.api_key=XXXXX'

Some servers will also have host-specific configuration that needs to be made available to the server runtimes, usually in the form of environment variables. For example, both the filesystem and resend servers support host-specific configuration.

cat << 'EOF' | docker mcp config write
filesystem:
  paths:
    - /Users/slim
resend:
  reply_to: slim@gmail.com
  sender: slim@slimslenderslacks.com
EOF

MCP servers have different requirements for host configuration and secret management, so we will need tools to manage this.

Run

An MCP Gateway exposes a set of MCP server runtimes. For example, if clients should be able to connect to the google-maps and brave servers, then those two servers can be enabled by default.

docker mcp server enable google-maps brave
docker mcp gateway run

However, each gateway can also expose custom views. For example, here is a gateway configuration that exposes only the Brave and Wikipedia servers, over SSE, and then only a subset of the tools from each.

docker mcp gateway run \
  --transport=sse \
  --servers=brave,wikipedia-mcp \
  --tools=brave_web_search,get_article,get_summary,get_related_topics

Secure

One of the advantages of a gateway process is that users can plug in generic interceptors to help secure any MCP server. By securing the MCP host, we can ease the adoption burden for any MCP client.

Expect this list to grow quickly, but we have an initial set of features available in the repository to begin demonstrating what’ll be possible.

Verify signatures – ensure that the gateway can verify the provenance of an MCP container image before using it.

Block-secrets – scan inbound and outbound payloads for content that looks like secrets.

Log-calls – log the tool calls that pass through the gateway.

These can be enabled when starting the gateway.

docker mcp gateway run \
  --verify-signatures \
  --log-calls \
  --block-secrets

Summary

The MCP Gateway is Docker’s answer to the growing complexity and security risks of connecting AI agents to MCP servers. By aggregating multiple MCP servers behind a single, secure interface, it gives developers and teams a consistent way to build, scale, and govern agent-based workloads from local development to production environments.

The Gateway is available out of the box in the latest release of Docker Desktop. Now open source, it’s also ready for you to use with any community edition of Docker. Whether you’re building AI agents or supporting others who do, the MCP Gateway is a great foundational tool for developing secure, scalable agentic applications with MCP. Visit the Gateway GitHub repository to get started!
Quelle: https://blog.docker.com/feed/

Compose your way with Provider services!

With the release of Docker Compose v2.36.0, we’re excited to introduce a powerful new feature: provider services. This extension point opens up Docker Compose to interact not only with containers but also with any kind of external system, all while keeping the familiar Compose file at the center of the workflow.

In this blog post, we’ll walk through what provider services are, how developers can use them to streamline their workflows, how the provider system works behind the scenes, and how you can build your own provider to extend Compose for your platform needs.

Why Provider Services Are a Game-Changer

Docker Compose has long been a favorite among developers for orchestrating multi-container applications in a simple and declarative way. But as development environments have become more complex, the need to integrate non-container dependencies has become a common challenge. Applications often rely on managed databases, SaaS APIs, cloud-hosted message queues, VPN tunnels, or LLM inference engines — all of which traditionally sit outside the scope of Compose.

Developers have had to resort to shell scripts, Makefiles, or wrapper CLIs to manage these external components, fragmenting the developer experience and making it harder to onboard new contributors or maintain consistent workflows across teams.

Provider services change that. By introducing a native extension point into Compose, developers can now define and manage external resources directly in their compose.yaml. Compose delegates their lifecycle to the provider binary, coordinating with it as part of its own service lifecycle.

This makes Docker Compose a more complete solution for full-stack, platform-aware development — from local environments to hybrid or remote setups.

Using a Provider Service in Your Compose File

Provider services are declared like any other Compose service, but instead of specifying an image, you specify a provider with a type, and optionally some options. The type must correspond to the name of a binary available in your $PATH that implements the Compose provider specification.

As an example we will use the Telepresence provider plugin, which routes Kubernetes traffic to a local service for live cloud debugging. This is especially useful for testing how a local service behaves when integrated into a real cluster:
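The compose.yaml for this example looks roughly like the sketch below. The option names and values are reconstructed from the plugin invocation shown later in this post, so treat them as illustrative rather than authoritative:

services:
  dev-api:
    provider:
      type: compose-telepresence   # binary resolved from your $PATH
      options:
        name: api                  # intercept name
        port: 5732:api-80          # local port mapped to the remote service port
        namespace: avatars         # Kubernetes namespace of the target service
        service: api               # Kubernetes service to intercept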

In this setup, when you run docker compose up, Compose will call the compose-telepresence plugin binary. The plugin performs the following actions:

Up Action:

Check if the Telepresence traffic manager is installed in the Kubernetes cluster, and install it if needed.

Establish an intercept to re-route traffic from the specified Kubernetes service to the local service.

Down Action:

Remove the previously established intercept.

Uninstall the Telepresence traffic manager from the cluster.

Quit the active Telepresence session.

The structure and content of the options field are specific to each provider. It is up to the plugin author to define and document the expected keys and values. If you’re unsure how to properly configure your provider service in your Compose file, the Compose Language Server (LSP) can guide you step by step with inline suggestions and validation.

You can find more usage examples and supported workflows in the official documentation: https://docs.docker.com/compose/how-tos/provider-services/

How Provider Services Work Behind the Scenes

Under the hood, when Compose encounters a service using the provider key, it looks for an executable in the user’s $PATH matching the provider type name (e.g., the docker-model CLI plugin or compose-telepresence). Compose then spawns the binary and passes the service options as flags, allowing the provider to receive all required configuration via command-line arguments.

The binary must respond to JSON-formatted requests on stdin and return structured JSON responses on stdout.

Figure 1: Interaction between Docker Compose and a provider binary

Communication with Compose

Compose sends all the necessary information to the provider binary by transforming all the options attributes into flags. It also passes the project and the service name. If we look at the compose-telepresence provider example, on the up command Compose will execute the following command:

$ compose-telepresence compose --project-name my-project up --name api --port 5732:api-80 --namespace avatars --service api dev-api

On the other side, providers can also send runtime messages to Compose:

info: Reports status updates. Displayed in Compose’s logs.

error: Reports an error. Displayed as the failure reason.

setenv: Exposes environment variables to dependent services.

debug: Debug messages displayed only when running Compose with --verbose.

This flexible protocol makes it easy to add new types and build rich provider integrations.
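For illustration, each message is a single JSON object written to the provider’s stdout. The payload values below are made up for the example; the authoritative field names and structure are defined in the protocol spec referenced next:

{"type": "info", "message": "Telepresence intercept established"}
{"type": "setenv", "message": "API_URL=http://localhost:5732"}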

Refer to the official protocol spec for detailed structure and examples.

Building Your Own Provider Plugin

The real power of provider services lies in their extensibility. You can write your own plugin, in any language, as long as it adheres to the protocol.

A typical provider binary implements logic to handle a compose command with up and down subcommands.

The source code of compose-telepresence-plugin is a good starting point. This plugin is implemented in Go and wraps the Telepresence CLI to bridge a local dev container with a remote Kubernetes service.

Its up implementation is triggered when docker compose up is run; it starts the service by calling the Telepresence CLI with the options it received.

To build your own provider:

Read the full extension protocol spec

Parse all the options as flags to collect the whole configuration needed by the provider

Implement the expected JSON response handling over stdout

Don’t forget to add debug messages to have as many details as possible during your implementation phase.

Compile your binary and place it in your $PATH

Reference it in your Compose file using provider.type

You can build anything from service emulators to remote cloud service starters. Compose will automatically invoke your binary as needed.

What’s Next?

Provider services will continue to evolve, and future enhancements will be guided by real-world feedback from users to ensure they grow in the most useful and impactful directions.

Looking forward, we envision a future where Compose can serve as a declarative hub for full-stack dev environments, including containers, local tooling, remote services, and AI runtimes.

Whether you’re connecting to a cloud-hosted database, launching a tunnel, or orchestrating machine learning inference, Compose provider services give you a native way to extend your dev environment: no wrappers, no hacks.

Let us know what kind of providers you’d like to build or see added. We can’t wait to see how the community takes this further.

Stay tuned and happy coding!

Quelle: https://blog.docker.com/feed/