Survey: AI saves a few hours at the workplace
Those who have to fear a heavier workload or job cuts are not open to AI. The time savings are greater when companies actively encourage its use. (Work, Studies)
Source: Golem
The Razer DeathAdder Essential gaming mouse has dropped to its current best comparison price on Amazon. (Mouse, Razer)
Source: Golem
Anyone who had disabled automatic updates in the Microsoft Store may soon be in for a surprise: the option is being reactivated without asking. (Updates & Patches, Microsoft)
Source: Golem
Organized crime increasingly uses cryptocurrencies instead of cash, but the criminals often lack the necessary expertise. A report by Ulrich Hottelet (Blockchain, Economy)
Source: Golem
The idea of an America Party did not even last two months. Instead, Elon Musk apparently wants to concentrate on his companies. (Elon Musk, Politics)
Source: Golem
The Opel Corsa GSE boasts strong performance figures and an aggressive exterior. The 800 hp concept car will be drivable in Gran Turismo 7. (Opel, Electric car)
Source: Golem
A new Nvidia GPU for China is reportedly more powerful than the US government allows. US President Trump has already signaled a willingness to talk. (Nvidia, Graphics cards)
Source: Golem
Building AI agents in the real world often involves more than just making model calls — it requires integrating with external tools, handling complex workflows, and ensuring the solution can scale in production.
In this post, we’ll walk through a real-world developer setup for creating an agent using the Docker MCP Toolkit.
To make things concrete, I’ve built an agent that takes a Git repository as input and can answer questions about its contents — whether it’s explaining the purpose of a function, summarizing a module, or finding where a specific API call is made. This simple but practical use case serves as a foundation for exploring how agents can interact with real-world data sources and respond intelligently.
I built and ran it using the Docker MCP Toolkit, which made setup and integration fast, portable, and repeatable. This blog walks you through that developer setup and explains why Docker MCP is a game changer for building and running agents.
Use Case: GitHub Repo Question-Answering Agent
The goal: Build an AI agent that can connect to a GitHub repository, retrieve relevant code or metadata, and answer developer questions in plain language.
Example queries:
“Summarize this repo: https://github.com/owner/repo”
“Where is the authentication logic implemented?”
“List main modules and their purpose.”
“Explain the function parse_config and show where it’s used.”
This goes beyond a simple code demo; it reflects how developers work in real-world environments:
The agent acts like a code-aware teammate you can query anytime.
The MCP Gateway handles tooling integration (GitHub API) without bloating the agent code.
Docker Compose ties the environment together so it runs the same in dev, staging, or production.
Role of Docker MCP Toolkit
Without MCP Toolkit, you’d spend hours wiring up API SDKs, managing auth tokens, and troubleshooting environment differences.
With MCP Toolkit:
Containerized connectors – Run the GitHub MCP Gateway as a ready-made service (docker/mcp-gateway:latest), no SDK setup required.
Consistent environments – The container image has fixed dependencies, so the setup works identically for every team member.
Rapid integration – The agent connects to the gateway over HTTP; adding a new tool is as simple as adding a new container.
Iterate faster – Restart or swap services in seconds using docker compose.
Focus on logic, not plumbing – The gateway handles the GitHub-specific heavy lifting while you focus on prompt design, reasoning, and multi-agent orchestration.
Role of Docker Compose
Running everything via Docker Compose means you treat the entire agent environment as a single deployable unit:
One-command startup – docker compose up brings up the MCP Gateway (and your agent, if containerized) together.
Service orchestration – Compose ensures dependencies start in the right order.
Internal networking – Services talk to each other by name (http://mcp-gateway-github:8080) without manual port wrangling.
Scaling – Run multiple agent instances for concurrent requests.
Unified logging – View all logs in one place for easier debugging.
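On the scaling point: if the agent itself were containerized as a Compose service (hypothetically named agent), extra replicas could be started with a single flag:
docker compose up -d --scale agent=2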
Architecture Overview
This setup connects a developer’s local agent to GitHub through a Dockerized MCP Gateway, with Docker Compose orchestrating the environment. Here’s how it works step-by-step:
User Interaction
The developer runs the agent from a CLI or terminal.
They type a question about a GitHub repository — e.g., “Where is the authentication logic implemented?”
Agent Processing
The Agent (LLM + MCPTools) receives the question.
The agent determines that it needs repository data and issues a tool call via MCPTools.
MCPTools → MCP Gateway
MCPTools sends the request using streamable-http to the MCP Gateway running in Docker.
This gateway is defined in docker-compose.yml and configured for the GitHub server (--servers=github --port=8080).
GitHub Integration
The MCP Gateway handles all GitHub API interactions — listing files, retrieving content, searching code — and returns structured results to the agent.
LLM Reasoning
The agent sends the retrieved GitHub context to OpenAI GPT-4o as part of a prompt.
The LLM reasons over the data and generates a clear, context-rich answer.
Response to User
The agent prints the final answer back to the CLI, often with file names and line references.
Code Reference & File Roles
The detailed source code for this setup is available at https://github.com/rajeshsgr/mcp-demo-agents.
Rather than walk through it line-by-line, here’s what each file does in the real-world developer setup:
docker-compose.yml
Defines the MCP Gateway service for GitHub.
Runs the docker/mcp-gateway:latest container with GitHub as the configured server.
Exposes the gateway on port 8080.
Can be extended to run the agent and additional connectors as separate services in the same network.
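Based on that description, a minimal docker-compose.yml for the gateway could look roughly like the sketch below; the file in the repository is authoritative and may differ in detail:

services:
  mcp-gateway-github:
    image: docker/mcp-gateway:latest        # ready-made MCP Gateway image
    command: --servers=github --port=8080   # expose only the GitHub MCP server
    ports:
      - "8080:8080"                         # reachable from the host-side agent
    # A containerized agent or additional connectors can be added as further
    # services on the same network and addressed by service name.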
app.py
Implements the GitHub Repo Summarizer Agent.
Uses MCPTools to connect to the MCP Gateway over streamable-http.
Sends queries to GitHub via the gateway, retrieves results, and passes them to GPT-4o for reasoning.
Handles the interactive CLI loop so you can type questions and get real-time responses.
In short: the Compose file manages infrastructure and orchestration, while the Python script handles intelligence and conversation.
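The repository has the authoritative code; purely as an illustration of the flow (and assuming an agno-style Agent/OpenAIChat/MCPTools API, a /mcp endpoint path on the gateway, and python-dotenv for loading the key), the heart of app.py might look roughly like this:

import asyncio
import os

from dotenv import load_dotenv             # assumes python-dotenv is installed
from agno.agent import Agent                # assumes the agno agent framework
from agno.models.openai import OpenAIChat
from agno.tools.mcp import MCPTools

load_dotenv()  # picks up OPEN_AI_KEY from the .env file described below

async def main() -> None:
    # Connect to the MCP Gateway started by docker-compose.yml; the /mcp path
    # is a common default for streamable-http endpoints and may differ here.
    async with MCPTools(transport="streamable-http", url="http://localhost:8080/mcp") as github_tools:
        agent = Agent(
            model=OpenAIChat(id="gpt-4o", api_key=os.getenv("OPEN_AI_KEY")),
            tools=[github_tools],
            instructions="Answer questions about GitHub repositories using the available tools.",
            markdown=True,
        )
        # Interactive CLI loop: type a question, get a streamed answer.
        while True:
            query = input("Enter your query: ")
            if not query.strip():
                break
            await agent.aprint_response(query, stream=True)

if __name__ == "__main__":
    asyncio.run(main())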
Setup and Execution
Clone the repository
git clone https://github.com/rajeshsgr/mcp-demo-agents.git
cd mcp-demo-agents
Configure environment
Create a .env file in the root directory and add your OpenAI API key:
OPEN_AI_KEY=<your OpenAI API key>
Configure GitHub Access
To allow the MCP Gateway to access GitHub repositories, set your GitHub personal access token:
docker mcp secret set github.personal_access_token=<YOUR_GITHUB_TOKEN>
Start MCP Gateway
Bring up the GitHub MCP Gateway container using Docker Compose:
docker compose up -d
Install Dependencies & Run Agent
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python app.py
Ask Queries
Enter your query: Summarize https://github.com/owner/repo
Real-World Agent Development with Docker, MCP, and Compose
This setup is built with production realities in mind:
Docker ensures each integration (GitHub, databases, APIs) runs in its own isolated container with all dependencies preconfigured.
MCP acts as the bridge between your agent and real-world tools, abstracting away API complexity so your agent code stays clean and focused on reasoning.
Docker Compose orchestrates all these moving parts, managing startup order, networking, scaling, and environment parity between development, staging, and production.
From here, it’s easy to add:
More MCP connectors (Jira, Slack, internal APIs).
Multiple agents specializing in different tasks.
CI/CD pipelines that spin up this environment for automated testing.
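For instance, a second connector could be declared as one more entry under services: in docker-compose.yml without touching the agent; the service and server names below are hypothetical and depend on which MCP servers your catalog provides:

  mcp-gateway-slack:
    image: docker/mcp-gateway:latest
    command: --servers=slack --port=8080   # hypothetical server name
    ports:
      - "8081:8080"                        # different host port than the GitHub gateway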
Final Thoughts
By combining Docker for isolation, MCP for seamless tool integration, and Docker Compose for orchestration, we’ve built more than just a working AI agent — we’ve created a repeatable, production-ready development pattern. This approach removes environment drift, accelerates iteration, and makes it simple to add new capabilities without disrupting existing workflows. Whether you’re experimenting locally or deploying at scale, this setup ensures your agents are reliable, maintainable, and ready to handle real-world demands from day one.
Before vs. After: The Developer Experience
Environment Setup
Without Docker + MCP + Compose: Manual SDK installs, dependency conflicts, “works on my machine” issues.
With Docker + MCP + Compose: Prebuilt container images with fixed dependencies ensure identical environments everywhere.
Integration with Tools (GitHub, Jira, etc.)
Without: Custom API wiring in the agent code; high maintenance overhead.
With: MCP handles integrations in separate containers; agent code stays clean and focused.
Startup Process
Without: Multiple scripts/terminals; manual service ordering.
With: docker compose up launches and orchestrates all services in the right order.
Networking
Without: Manually configuring ports and URLs; prone to errors.
With: Internal Docker network with service name resolution (e.g., http://mcp-gateway-github:8080).
Scalability
Without: Scaling services requires custom scripts and reconfigurations.
With: Scale any service instantly with docker compose up --scale.
Extensibility
Without: Adding a new integration means changing the agent’s code and redeploying.
With: Add new MCP containers to docker-compose.yml without modifying the agent.
CI/CD Integration
Without: Hard to replicate environments in pipelines; brittle builds.
With: Same Compose file works locally, in staging, and in CI/CD pipelines.
Iteration Speed
Without: Restarting services or switching configs is slow and error-prone.
With: Containers can be stopped, replaced, and restarted in seconds.
Source: https://blog.docker.com/feed/
Docker periodically highlights blog posts featuring use cases and success stories from Docker partners and practitioners. This story was contributed by Dylen Turnbull and Timo Stark. With over 29 years in enterprise and open-source software development, Dylen Turnbull has held roles at Symantec, Veritas, F5 Networks, and most recently as a Developer Advocate for NGINX. Timo is a Docker Captain, Head of IT at DoHo Engineering, and was formerly a Principal Technical Product Manager at NGINX.
Modern application developers face challenges in managing dependencies, ensuring consistent environments, and scaling applications. Docker Desktop simplifies these tasks with intuitive containerization, delivering reliable environments, easy deployments, and scalable architectures. NGINX server management in containers still offers opportunities for enhancement, which the NGINX Development Center addresses with user-friendly tools for optimizing configuration, performance, and web server management.
Opportunities for Increased Workflow Efficiency
Docker Desktop streamlines container workflows, but NGINX configuration can be further improved with the NGINX Development Center:
Easier Configuration: NGINX setup often requires command-line expertise. The NGINX Development Center offers intuitive interfaces to simplify the process.
Simplified Multi-Server Management: Managing multiple configurations involves complex volume mounting. The NGINX Development Center centralizes and streamlines configuration handling.
Improved Debugging: Debugging requires manual log access and container inspection. The NGINX Development Center provides clear diagnostic tools for faster resolution.
Faster Iteration: Reverse proxy updates need frequent restarts. The NGINX Development Center enables quick configuration changes with minimal downtime.
By integrating Docker Desktop’s seamless containerization with the NGINX Development Center’s tools, developers can achieve a more efficient workflow for modern applications.
The NGINX Development Center, available in the Docker Extensions Marketplace with over 51,000 downloads, addresses these frictions, streamlining NGINX configuration management for developers.
The Advantage for App/Web Server Development
The NGINX Development Center enhances app and web server development by offering an intuitive GUI-based interface integrated into Docker Desktop, simplifying server configuration file management without requiring command-line expertise. It provides streamlined access to runtime configuration previews, minimizing manual container inspection, and enables rapid iteration without container restarts for faster development and testing cycles.
Centralized configuration management ensures consistency across development, testing, and production environments. Seamlessly integrated with Docker Desktop, the extension reduces the complexity of traditional NGINX workflows, allowing developers to focus on application development rather than infrastructure management.
Overview of the NGINX Development Center
The NGINX Development Center, developed by Timo Stark, is designed to enhance the developer experience for NGINX server configuration in containerized environments. Available in the Docker Extensions Marketplace, the extension leverages Docker Desktop’s extensibility to provide a dedicated workspace for NGINX development. Key features include:
Graphical Configuration Interface
A user-friendly UI for creating and editing NGINX server blocks, routing rules, and SSL configurations.
Run-Time Configuration Updates
Apply changes to NGINX instances without container restarts, supporting rapid iteration.
Integrated Debugging Tools
Validate configurations and troubleshoot issues directly within Docker Desktop.
How Does the NGINX Development Center Work?
The NGINX Development Center Docker extension, based on the NGINX Docker Desktop Extension public repository, simplifies NGINX configuration and management within Docker Desktop. It operates as a containerized application with a React-based user interface and a Node.js backend, integrated into Docker Desktop via the Extensions Marketplace and Docker API.
Here’s how it works in simplified terms:
Installation and Setup: The extension is installed from the Docker Extensions Marketplace or built locally using a Dockerfile that compiles the UI and backend components. It runs as a container within Docker Desktop, pulling the image nginx/nginx-docker-extension:latest.
User Interface: The React-based UI, accessible through the NGINX Development Center tab in Docker Desktop, allows developers to create and edit NGINX configurations, such as server blocks, routing rules, and SSL settings.
Configuration Management: The Node.js backend processes user inputs from the UI, generates NGINX configuration files, and applies them to a managed NGINX container. Changes are deployed dynamically using NGINX’s reload mechanism, avoiding container restarts.
Integration with Docker: The extension communicates with Docker Desktop’s API to manage NGINX containers and uses Docker volumes to store configuration files and logs, ensuring seamless interaction with the Docker ecosystem.
Debugging Support: While it doesn’t provide direct log access, the extension supports debugging by validating configurations in real-time and leveraging Docker Desktop’s native tools for indirect log viewing.
The extension’s backend, built with Node.js, handles configuration generation and NGINX instance management, while the React-based frontend provides an intuitive user experience. For development, the extension supports hot reloading, allowing developers to test changes without rebuilding the image.
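Conceptually, what the backend automates for the managed NGINX container is the standard test-and-reload cycle, roughly equivalent to running:

docker exec <nginx-container> nginx -t          # validate the generated configuration
docker exec <nginx-container> nginx -s reload   # apply it without restarting the container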
Architecture Diagram
Below is a simplified architecture diagram illustrating how the NGINX Development Center integrates with Docker Desktop:
NGINX Development Center architecture showing integration with Docker Desktop, featuring a Node.js backend and React UI, managing NGINX containers and configuration files.
Docker Desktop: Hosts the extension and provides access to the Docker API and Extensions Marketplace.
NGINX Development Center: Runs as a container, with a Node.js backend for configuration management and a React UI for user interaction.
NGINX Container: The managed NGINX instance, configured dynamically by the extension.
Configuration Files: Generated and monitored by the extension, stored in Docker volumes for persistence.
Why run NGINX configuration management as a Docker Desktop Extension?
Running NGINX configuration management as a Docker Desktop Extension provides a unified, streamlined experience for developers already working within the Docker ecosystem. By integrating directly into Docker Desktop’s interface, the extension eliminates the friction of switching between multiple tools and command-line interfaces, allowing developers to manage NGINX configurations alongside their containerized applications in a single, familiar environment.
The extension approach leverages Docker’s inherent benefits of isolation and consistency, ensuring that NGINX configuration management operates reliably across different development machines and operating systems. This containerized approach prevents conflicts with local system configurations and removes the complexity of installing and maintaining separate NGINX management tools.
Furthermore, Docker Desktop serves as the only prerequisite for the NGINX Development Center. Once Docker Desktop is installed, developers can immediately access sophisticated NGINX configuration capabilities without additional software installations, complex environment setup, or specialized NGINX expertise. The extension transforms what traditionally requires command-line proficiency into an intuitive, graphical workflow that integrates seamlessly with existing Docker-based development practices.
Getting Started
Follow these steps to set up and use the NGINX Development Center Docker extension. Prerequisites: Docker Desktop and one running NGINX container.
NGINX Development Center Setup in Docker Desktop:
Ensure Docker Desktop is installed and running on your machine (Windows, macOS, or Linux).
Installing the NGINX Development Center:
Open Docker Desktop and navigate to the Extensions Marketplace (left-hand menu).
Search for “NGINX” or “NGINX Development Center”.
Click “Install” to pull and install the NGINX Development Center image.
Accessing the NGINX Development Center:
After installation, a new “NGINX” tab appears in Docker Desktop’s left-hand menu.
Click the tab to open the NGINX Development Center, where you can manage configurations and monitor NGINX instances.
Configuration Management with the NGINX Development Center:
Use the GUI configuration editor to create new NGINX config files.
Configure existing NGINX configuration files.
Preview and validate configurations before applying them.
Save changes, which are applied dynamically via hot reloading without restarting the NGINX container.
Real-world use case example: Development Proxy for Local Services
In modern application development, NGINX serves as a reverse proxy that’s useful for developers on full-stack or microservices projects. It manages traffic routing between components, mitigates CORS issues in browser-based testing, enables secure local HTTPS setups, and supports efficient workflows by letting multiple services share a single entry point without direct port exposure. This aids local environments for simulating production setups, testing API integrations, or handling real-time features like WebSockets, while avoiding manual restarts and complex configurations. NGINX can proxy diverse local services, including frontend frameworks (e.g., React or Angular apps), backend APIs (e.g., Node.js/Express servers), databases with web interfaces (e.g., phpMyAdmin), static file servers, or third-party tools like mock services and caching layers.
Developers often require a local proxy to route traffic between services (e.g., a frontend dev server and a backend API) and to avoid CORS issues, but manual NGINX setup demands file edits and restarts.
With the Docker Extension: NGINX Development Center
Setup: Install the NGINX Development Center via Docker Extensions Marketplace in Docker Desktop. Ensure local services (e.g., Node.js backend on port 3000) run in separate containers. Open the NGINX Development Center tab.
Containers run separately.
Configuration: In the UI, create a new server. Set the upstream to serve the frontend at localhost. Add a proxy for /api/* to http://backend:3000. Publish via the graphical options (a possible resulting configuration is sketched after these steps).
Server config editing via the Docker Desktop UI
App server configuration
Validation and Testing: Preview the config in the NGINX Development Center UI to check for errors. Test by accessing http://localhost/ and http://localhost/api in a browser; confirm routing to backend.
Deployment: Save and apply changes dynamically (no restart needed). Export config for reuse in a Docker Compose file to orchestrate services.
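To make the result concrete, the generated server block for this proxy might look roughly like the sketch below; the frontend upstream (frontend:5173) is a hypothetical example, while the /api/ route to http://backend:3000 matches the steps above:

server {
    listen 80;
    server_name localhost;

    # Frontend dev server; service name and port are hypothetical examples
    location / {
        proxy_pass http://frontend:5173;
        proxy_set_header Host $host;
    }

    # Backend API behind the same origin, which avoids CORS issues in the browser
    location /api/ {
        proxy_pass http://backend:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}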
This use case utilizes the NGINX Development Center’s React UI for proxy configuration, Node.js backend for config generation, and Docker API for seamless networking. Try setting up your own local proxy today by installing the extension and exploring the NGINX Development Center.
Try it out and come visit us
This post has examined how the NGINX Development Center, integrated into Docker Desktop via the Extensions Marketplace, tackles developer challenges in managing NGINX configurations for containerized web applications. It provides a UI and backend to simplify dependency management, ensure consistent environments, and support scalable setups. The graphical interface reduces the need for command-line expertise, managing server blocks, routing, and SSL settings, while dynamic updates and real-time previews aid iteration and debugging. Docker volumes help maintain consistency across development, testing, and production.
We’ve highlighted a practical use case, a development proxy for local services, that is feasible within Docker Desktop using the extension. The architecture leverages Docker Desktop’s API and a containerized design to support the workflow. If you’re a developer interested in improving NGINX management, try installing the NGINX Development Center from the Docker Extensions Marketplace and explore its features. For deeper engagement, visit the GitHub repository to review the codebase, suggest features, or contribute to its development, and consider joining discussions to connect with others.
Source: https://blog.docker.com/feed/
Amazon is offering a foldable Bluetooth keyboard with a touchpad that is compatible with smartphones, tablets, and computers. (Keyboard, Input device)
Source: Golem