Using MCP Servers: From Quick Tools to Multi-Agent Systems

Model Context Protocol (MCP) servers are a spec for exposing tools, models, or services to language models through a common interface. Think of them as smart adapters: they sit between a tool and the LLM, speaking a predictable protocol that lets the model interact with things like APIs, databases, and agents without needing to know implementation details.
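Under the hood, MCP frames these interactions as JSON-RPC 2.0 messages: a client asks a server what it offers with a `tools/list` request and invokes a tool with `tools/call`. A minimal sketch of the message shapes (field names follow the MCP specification, but verify against the spec version your server implements; the tool name and arguments here are made up):

```python
import json

# A minimal MCP-style JSON-RPC request asking a server which tools it exposes.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Calling a tool: a name plus a free-form arguments object.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "web_search", "arguments": {"query": "MCP servers"}},
}

wire = json.dumps(call_tool)  # what actually travels over stdio or HTTP
print(wire)
```

Because every server speaks this same framing, a client that understands these two methods can drive any MCP server without knowing what's behind it.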

But like most good ideas, the devil’s in the details.

The Promise—and the Problems of Running MCP Servers

Running an MCP server sounds simple: spin up a Python or Node server that exposes your tool. Done, right? Not quite.

You run into problems fast:

Runtime friction: If an MCP is written in Python, your environment needs Python (plus dependencies, plus maybe a virtualenv strategy, plus maybe GPU drivers). Same goes for Node. This multiplies fast when you’re managing many MCPs or deploying them across teams.

Secrets management: MCPs often need credentials (API keys, tokens, etc.). You need a secure way to store and inject those secrets into your MCP runtime. That gets tricky when different teams, tools, or clouds are involved.

N×M integration pain: Let’s say you’ve got three clients that want to consume MCPs, and five MCP servers to serve up. Now you’re looking at 15 individual integrations. No thanks.
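That combinatorial growth is the whole argument for an intermediary: direct wiring scales as clients × servers, while a single gateway in the middle scales as clients + servers. A toy illustration:

```python
clients, servers = 3, 5

# Every client wired directly to every server: one integration per pair.
direct = clients * servers       # 15 integrations

# Each side integrates once with a shared gateway instead.
via_gateway = clients + servers  # 8 integrations

print(direct, via_gateway)  # prints "15 8"
```

The gap only widens as either side grows: ten clients and ten servers means 100 direct integrations versus 20 through a gateway.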

To make MCPs practical, you need to solve these three core problems: runtime complexity, secret injection, and client-to-server wiring. 

If you’re wondering where I’m going with all this, take a look at those problems. Developers already have a technology, proven over more than a decade, that helps solve them: Docker containers.

In the rest of this blog I’ll walk through three different approaches, going from least complex to most complex, for integrating MCP servers into your developer experience. 

Option 1 — Docker MCP Toolkit & Catalog

For the developer who already uses containers and wants a low-friction way to start with MCP.

If you’re already comfortable with Docker but just getting your feet wet with MCP, this is the sweet spot. In the raw MCP world, you’d clone Python/Node servers, manage runtimes, inject secrets yourself, and hand-wire connections to every client. That’s exactly the pain Docker’s MCP ecosystem set out to solve.

Docker’s MCP Catalog is a curated, containerized registry of MCP servers. Each entry is a prebuilt container with everything you need to run the MCP server. 

The MCP Toolkit (available via Docker Desktop) is your control panel: search the catalog, launch servers with secure defaults, and connect them to clients.

How it helps:

No language runtimes to install

Built-in secrets management

One-click enablement via Docker Desktop

Easily wire the MCPs to your existing agents (Claude Desktop, Copilot in VS Code, etc.)

Centralized access via the MCP Gateway

Figure 1: Docker MCP Catalog: Browse hundreds of MCP servers with filters for local or remote and clear distinctions between official and community servers

A Note on the MCP Gateway

One important piece working behind the scenes in both the MCP Toolkit and cagent (a framework for easily building multi-agent applications that we cover below) is the MCP Gateway, an open-source project from Docker that acts as a centralized frontend for all your MCP servers. Whether you’re using a GUI to start containers or defining agents in YAML, the Gateway handles all the routing, authentication, and translation between clients and tools. It also exposes a single endpoint that custom apps or agent frameworks can call directly, making it a clean bridge between GUI-based workflows and programmatic agent development.

Moving on: Using MCP servers alongside existing AI agents is often the first step for many developers. You wire up a couple of tools, maybe connect to a calendar or a search API, and use them in something like Claude, ChatGPT, or a small custom agent. For step-by-step tutorials on how to automate dev workflows with Docker’s MCP Catalog and Toolkit with popular clients, check out these guides on ChatGPT, Claude Desktop, Codex, Gemini CLI, and Claude Code. Once that pattern clicks, the next logical step is to use those same MCP servers as tools inside a multi-agent system.

Option 2 — cagent: Declarative Multi-Agent Apps

For the developer who wants to build custom multi-agent applications but isn’t steeped in traditional agentic frameworks.

If you’re past simple MCP servers and want agents that can delegate, coordinate, and reason together, cagent is your next step. It’s Docker’s open-source, YAML-first framework for defining and running multi-agent systems—without needing to dive into complex agent SDKs or LLM loop logic.

Cagent lets you describe:

The agents themselves (model, role, instructions)

Who delegates to whom

What tools each agent can access (via MCP or local capabilities)

Below is an example of a pirate-flavored chatbot:

agents:
  root:
    description: An agent that talks like a pirate
    instruction: Always answer by talking like a pirate.
    welcome_message: |
      Ahoy! I be yer pirate guide, ready to set sail on the seas o' knowledge! What be yer quest?
    model: auto

Run it with:

cagent run agents.yaml

You don’t write orchestration code. You describe what you want, and cagent runs the system.

Why it works:

Tools are scoped per agent

Delegation is explicit

Uses the MCP Gateway behind the scenes

Ideal for building agent systems without writing Python

If you’d like to give cagent a try, we have a ton of examples in the project’s GitHub repository. Check out this guide on building multi-agent systems in 5 minutes. 

Option 3 — Traditional Agent Frameworks (LangGraph, CrewAI, ADK)

For developers building complex, custom, fully programmatic agent systems.

Traditional agent frameworks like LangGraph, CrewAI, or Google’s Agent Development Kit (ADK) let you define, control, and orchestrate agent behavior directly in code. You get full control over logic, state, memory, tools, and workflows.

They shine when you need:

Complex branching logic

Error recovery, retries, and persistence

Custom memory or storage layers

Tight integration with existing backend code
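As a flavor of what "error recovery and retries" looks like when you own the orchestration code, here is a framework-agnostic sketch (plain Python, not any particular SDK's API) of a backoff wrapper you might put around a flaky MCP tool call; all names here are hypothetical:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn(); on failure, retry with exponential backoff."""
    def wrapped(*args, **kwargs):
        for attempt in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of attempts: surface the error
                time.sleep(base_delay * (2 ** attempt))
    return wrapped

# Usage: wrap a tool call that fails transiently on its first two calls.
calls = {"n": 0}

def flaky_search(query):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return f"results for {query!r}"

safe_search = with_retries(flaky_search)
print(safe_search("EU AI regulation"))  # succeeds on the third attempt
```

In a real system you'd scope this to network errors only and add logging, but the point stands: with code-first frameworks, this layer is yours to shape.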

Example: LangGraph + MCP via Gateway

import requests
from langgraph.graph import StateGraph, MessagesState
from langchain_core.tools import Tool
from langchain_openai import ChatOpenAI

# Discover the MCP endpoint from the Gateway
resp = requests.get("http://localhost:6600/v1/servers")
servers = resp.json()["servers"]
duck_url = next(s["url"] for s in servers if s["name"] == "duckduckgo")

# Define a callable tool
def mcp_search(query: str) -> str:
    return requests.post(duck_url, json={"input": query}).json()["output"]

search_tool = Tool(name="web_search", func=mcp_search, description="Search via MCP")

# Wire it into a LangGraph loop
llm = ChatOpenAI(model="gpt-4").bind_tools([search_tool])

def agent(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(MessagesState)  # StateGraph requires a state schema
graph.add_node("agent", agent)
graph.set_entry_point("agent")

app = graph.compile()
app.invoke({"messages": [("user", "What's the latest in EU AI regulation?")]})

In this setup, you decide which tools are available. The agent chooses when to use them based on context, but you’ve defined the menu. And yes, this is still true in the Docker MCP Toolkit: you decide what to enable. The LLM can’t call what you haven’t made visible.
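A toy sketch of that "menu" idea (hypothetical names, no real framework involved): whatever dispatch layer sits between the model and your tools only exposes the registry you built, so an unlisted tool simply cannot be called:

```python
# Only tools you register are callable; anything else fails fast.
def web_search(query: str) -> str:
    return f"searching for {query!r}"

enabled_tools = {"web_search": web_search}  # the "menu" you control

def dispatch(tool_name: str, **kwargs):
    if tool_name not in enabled_tools:
        raise PermissionError(f"tool {tool_name!r} is not enabled")
    return enabled_tools[tool_name](**kwargs)

print(dispatch("web_search", query="MCP"))
# dispatch("delete_files") would raise PermissionError.
```

Real frameworks add schemas and argument validation on top, but the security property is the same: the model proposes, your dispatch layer disposes.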

Choosing the Right Approach

| Approach | Best For | You Manage | You Get |
| --- | --- | --- | --- |
| Docker MCP Toolkit + Catalog | Devs new to MCP, already using containers | Tool selection | One-click setup, built-in secrets, Gateway integration |
| cagent | YAML-based multi-agent apps without custom code | Roles & tool access | Declarative orchestration, multi-agent workflows |
| LangGraph / CrewAI / ADK | Complex, production-grade agent systems | Full orchestration | Max control over logic, memory, tools, and flow |

Wrapping Up

Whether you’re just connecting a tool to Claude, designing a custom multi-agent system, or building production workflows by hand, Docker’s MCP tooling helps you get started easily and securely.

Check out the Docker MCP Toolkit, cagent, and MCP Gateway for example code, docs, and more ways to get started.

Source: https://blog.docker.com/feed/

Amazon MQ now supports Java Messaging Service (JMS) specification for RabbitMQ brokers

Amazon MQ now supports the ability for RabbitMQ 4 brokers to connect to JMS applications through the RabbitMQ JMS Topic Exchange plugin and JMS client. The JMS topic exchange plugin is enabled by default on all RabbitMQ 4 brokers, allowing you to use the JMS client to run your JMS 1.1, JMS 2.0, and JMS 3.1 applications on RabbitMQ. You can also use the RabbitMQ JMS client to send JMS messages to an AMQP exchange and consume messages from an AMQP queue to interoperate or migrate JMS workloads to AMQP workloads. To start using your JMS applications on RabbitMQ, simply select RabbitMQ 4.2 when creating a new broker using the M7g instance type through the AWS Management Console, AWS CLI, or AWS SDKs, and then use the RabbitMQ JMS client to connect your applications. To learn more about the plugin, see the Amazon MQ release notes and the Amazon MQ developer guide. This plugin is available in all regions where Amazon MQ RabbitMQ 4 instances are available today.
Source: aws.amazon.com

AWS expands Resource Control Policies support for Cognito and CloudWatch Logs

AWS Resource Control Policies (RCPs) now provide support for Amazon Cognito and Amazon CloudWatch Logs. Resource control policies (RCPs) are a type of organization policy that you can use to manage permissions in your organization. RCPs offer central control over the maximum available permissions for resources in your organization. With this expansion, you can now use RCPs to manage permissions for Amazon Cognito and Amazon CloudWatch Logs resources. For example, you can create policies that prevent identities outside your organization from accessing these resources, helping you build a data perimeter and enforce baseline security standards across your AWS environment. RCPs are available in all AWS commercial Regions and AWS GovCloud (US) Regions. To learn more about RCPs and view the full list of supported AWS services, visit the Resource control policies (RCPs) documentation in the AWS Organizations User Guide.
Source: aws.amazon.com

Amazon Bedrock AgentCore Browser now supports custom browser extensions

Amazon Bedrock AgentCore Browser now supports custom Chrome browser extensions, enabling automation for complex workflows that standard browser automation cannot handle alone. This enhancement builds upon AgentCore’s existing secure browser features, allowing users to upload Chrome-compatible extensions to S3 and automatically install them during browser sessions. The feature serves enterprise developers, automation engineers, and organizations across industries requiring specialized browser functionality within a secure environment.
This new feature enables powerful use cases including custom authentication flows, automated testing, and improved web navigation with performance optimization through ad blocking. Organizations gain the ability to integrate third-party tools that operate as browser extensions, eliminating manual processes while maintaining security within the AgentCore Browser environment. This feature is available in all nine AWS Regions where Amazon Bedrock AgentCore Browser is available: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). To learn more about implementing custom browser extensions in Amazon Bedrock AgentCore, visit the Browser documentation.
Source: aws.amazon.com

Amazon Neptune Analytics is now available in 7 additional regions

Amazon Neptune Analytics is now available in US West (N. California), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Hong Kong), Europe (Stockholm), Europe (Paris), and South America (São Paulo) regions. You can now create and manage Neptune Analytics graphs in these new regions and run advanced graph analytics. Amazon Neptune is a serverless graph database for connected data that improves the accuracy of AI applications and lowers operational burden and costs. Neptune instantly scales graph workloads, removing the need to manage capacity. By modeling data as a graph, Neptune captures context that improves the accuracy and explainability of generative AI applications. To make AI application development easier, Neptune offers fully managed GraphRAG with Amazon Bedrock Knowledge Bases, and integrations with the Strands AI Agents SDK and popular agentic memory tools. It also analyzes tens of billions of relationships across structured and unstructured data within seconds, delivering strategic insights. Neptune is the only database and analytics engine that gives you the power of connected data with the enterprise capabilities and value of AWS. To get started, you can create a new Neptune Analytics graph using the AWS Management Console or AWS CLI. For more information on pricing and region availability, refer to the Neptune pricing page and AWS Region Table.
Source: aws.amazon.com

Amazon EC2 C8i instances are now available in AWS Europe (London) region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8i instances are available in the Europe (London) region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. These C8i instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% higher performance than C7i instances, with even higher gains for specific workloads. C8i instances are up to 60% faster for NGINX web applications, up to 40% faster for AI deep learning recommendation models, and 35% faster for Memcached stores compared to C7i. C8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. C8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information about the new C8i instances, visit the AWS News blog.
Source: aws.amazon.com