Ad: Snack advent calendar with over 10,000 units sold, now at its best price
The Intersnack advent calendar 2025 from Funny-Frisch, with chips and nuts, is on sale at Amazon at its best price. (Entertainment & Hobby)
Source: Golem
The MCP protocol is almost one year old, and in that time developers have built thousands of new MCP servers. Thinking back to MCP demos from six months ago, most developers were using one or two local MCP servers, each contributing just a handful of tools. Six months later, we have access to thousands of tools, and a new set of issues.
Which MCP servers do we trust?
How do we avoid filling our context with tool definitions that we won’t end up needing?
How do agents discover, configure, and use tools efficiently and autonomously?
With the latest features in Docker MCP Gateway, including Smart Search and Tool Composition, we’re shifting from “What do I need to configure?” to “What can I empower agents to do?”
This week, Anthropic also released a post about building more efficient agents, calling out many of the same issues that we'll discuss in this post. Now that we've made progress on having tools, we can start thinking about using them effectively.
With dynamic MCPs, agents don’t just search for or add tools, but write code to compose new ones within a secure sandbox, improving both tool efficiency and token usage.
Enabling Agents to Find, Add, and Configure MCPs Dynamically with Smart Search
If you think about how we configure MCPs today, the process is not particularly agentic. Typically, we leave the agent interface entirely, do some old-school configuration hacking (usually editing a JSON file of some kind), and then restart our agent session to check if the MCPs have become available. As the number of MCP servers grows, is this going to work?
So what prevents our agents from doing more to help us discover useful MCP servers?
We think that Docker’s OSS gateway can help here. As the gateway manages the interface between an agent and any of the MCP servers in the gateway’s catalog, there is an opportunity to mediate that relationship in new ways.
Out of the box, the gateway ships with a default catalog, the Docker MCP Catalog, including over 270 curated servers and of course the ability to curate your own private catalogs (e.g. using servers from the community registry). And because it runs on Docker, you can pull and run any of them with minimal setup. That directly tackles the first friction point: discovery of trusted MCP servers.
Figure 1: The Docker MCP Gateway now includes mcp-find and mcp-add, new Smart Search features that let agents discover and connect to trusted MCP servers in the Docker MCP Catalog, enabling secure, dynamic tool usage.
However, the real key to dynamic MCPs is a small but crucial adjustment to the agent’s MCP session. The gateway provides a small set of primordial tools that the agent uses to search the catalog and to either add or remove servers from the current session. Just as in the post from Anthropic, which suggests a search_tools tool, we have added new tools to help the agent manage their MCP servers.
mcp-find: Find MCP servers in the current catalog by name or description. Return matching servers with their details.
mcp-add: Add a new MCP server to the session. The server must exist in the catalog.
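To make the shape of these primordial tools concrete, here is a minimal, hypothetical Python sketch of catalog search and session membership. The catalog entries, function names, and return shapes are illustrative assumptions, not the gateway's actual implementation:

```python
# Hypothetical model of mcp-find / mcp-add behavior. The real gateway
# searches the Docker MCP Catalog and runs servers in containers; this
# sketch only illustrates the search-then-add contract described above.

CATALOG = [
    {"name": "duckduckgo", "description": "Web search via DuckDuckGo"},
    {"name": "notion", "description": "Read and write Notion pages (OAuth)"},
    {"name": "github-official", "description": "GitHub issues, PRs, and repos"},
]

def mcp_find(query: str) -> list[dict]:
    """Return catalog entries whose name or description matches the query."""
    q = query.lower()
    return [
        s for s in CATALOG
        if q in s["name"].lower() or q in s["description"].lower()
    ]

def mcp_add(session: set[str], name: str) -> None:
    """Add a server to the session; it must already exist in the catalog."""
    if not any(s["name"] == name for s in CATALOG):
        raise ValueError(f"{name!r} is not in the catalog")
    session.add(name)
```

An agent would typically chain the two: search for a capability, then add the matching server to its current session.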
With this small tweak, the agent can now help us negotiate a new MCP session. To make this a little more concrete, we’ll show an agent connected to the gateway asking for the DuckDuckGo MCP and then performing a search.
Figure 2: A demo of using mcp-find and mcp-add to connect to the DuckDuckGo MCP server and run a search
Configuring MCP Servers with Agent-Led Workflows
In the example above, we started by connecting our agent to the catalog of MCPs (see docker mcp client connect --help for options). The agent then adds a new MCP server to the current session. To be clear, the duckduckgo MCP server is quite simple. Since it does not require any configuration, all we needed to do was search the catalog, pull the image from a trusted registry, and spin up the MCP server in the local Docker Engine.
However, some MCP servers will require inputs before they can start up. For example, remote MCP servers might require that the user go through an OAuth flow. In the next example, the gateway responds by requesting that we authorize this new MCP server. Now that MCP supports elicitations, and frameworks like mcp-ui allow MCPs to render UI elements into the chat, we have begun to optimize these flows based on client-side capabilities.
Figure 3: Using mcp-find and mcp-add to connect to the Notion MCP server, including an OAuth flow
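The control flow for servers that need configuration can be sketched as follows. This is a hypothetical Python model of the decision the gateway makes, not its real OAuth handling; a real remote server would hand back an authorization URL to visit:

```python
# Hypothetical sketch: adding a server either succeeds immediately or
# pauses and asks the user to authorize first. All names are illustrative.

def add_server(name: str, requires_oauth: bool, authorized: set[str]) -> dict:
    """Start the server if it is ready; otherwise ask the user to authorize it."""
    if requires_oauth and name not in authorized:
        return {
            "status": "authorization_required",
            "prompt": f"Please complete the OAuth flow for {name}",
        }
    return {"status": "ready", "server": name}
```

Clients that support elicitations or UI frameworks like mcp-ui can surface the authorization prompt directly in the chat; others fall back to plain text.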
Avoid an Avalanche of Tools: Dynamic Tool Selection
In the building more efficient agents post, the authors highlight two ways that tools currently make token consumption less efficient.
Tool definitions in the context window
Intermediate tool results
The result is the same in both cases: too many tokens are sent to the model. It takes surprisingly few tools for the context window to accumulate hundreds of thousands of tokens of nothing but tool definitions.
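A rough back-of-envelope calculation makes the problem concrete. All numbers below are assumed for illustration, not measurements:

```python
# Back-of-envelope estimate (assumed numbers): a few heavyweight servers
# are enough for tool definitions alone to dominate every request.
servers = {"github-official": 90, "notion": 20, "duckduckgo": 2}  # assumed tool counts
TOKENS_PER_DEFINITION = 700  # rough average for a JSON schema plus description

total_tools = sum(servers.values())
definition_tokens = total_tools * TOKENS_PER_DEFINITION
print(total_tools, definition_tokens)
```

With these assumptions, three servers already cost tens of thousands of tokens per request, before the agent has done any work; add a few more servers and the definitions alone reach into the hundreds of thousands.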
Again, this is something we can improve. In the MCP Gateway project, we've started distinguishing between tools that are available to a find tool and tools that are loaded into the context window. Just as we're giving agents tools for server selection, we can give them new ways to select tools.
Figure 4: Dynamic Tools in action: Tools can now be actively selected, avoiding the need to load all available tools into every LLM request.
The idea is conceptually simple. We are providing an option to allow agents to add servers that do not automatically put their tools into the context window. With today’s agents, this means adding MCP servers that don’t return tool definitions in tools/list requests, but still make them available to find tool calls. This is easy to do because we have an MCP gateway to mediate tools/list requests and to inject new task-oriented find tools. New primordial tools like mcp-exec and mcp-find provide agents with new ways to discover and use MCP server tools.
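Conceptually, the mediation can be modeled like this. The sketch below is a simplified Python model, not the gateway's code: dynamically added servers keep their tool definitions out of tools/list responses, while a find-style tool can still search everything that was registered:

```python
# Conceptual model of gateway mediation (illustrative, not the real code):
# "dynamic" tools are registered but withheld from tools/list, yet remain
# discoverable through a find tool and callable through an exec-style tool.

class Gateway:
    def __init__(self) -> None:
        self.tools: dict[str, str] = {}  # every registered tool: name -> description
        self.in_context: set[str] = set()  # tools whose definitions reach the model

    def register(self, name: str, description: str, dynamic: bool = True) -> None:
        """Register a tool; dynamic tools stay out of the context window."""
        self.tools[name] = description
        if not dynamic:
            self.in_context.add(name)

    def tools_list(self) -> list[str]:
        """What the agent's tools/list request (and context window) sees."""
        return sorted(self.in_context)

    def find(self, query: str) -> list[str]:
        """Search all registered tools, whether or not they are in context."""
        q = query.lower()
        return [n for n, d in self.tools.items()
                if q in n.lower() or q in d.lower()]
```

The agent's context carries only the small set of primordial tools, while the full tool surface stays one find call away.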
Once we start to think about tool selection differently, it opens up a range of possibilities.
Using Tools in a New Way: From Tool Calls to Tool Composition with code-mode
The idea of “code mode” has been getting a lot of attention since Cloudflare posted about a better way to use tools several weeks ago. The idea actually dates back to the paper “CodeAct: Your LLM Agent Acts Better when Generating Code”, which proposed that LLMs could improve agent-oriented tasks by first consolidating agent actions into code. The recent post from Anthropic also frames code mode as a way to improve agent efficiency by reducing the number of tool definitions and tool outputs in the context window.
We’re really excited by this idea. By making it possible for agents to “code” directly against MCP tool interfaces, we can provide agents with “code-mode” tools that use the tools in our current MCP catalog in new ways. By combining mcp-find with code-mode, the agent can still access a large and dynamic set of available tools while putting just one or two new tools into the context window. Our current code-mode tool writes JavaScript and takes available MCP servers as parameters.
code-mode: Create a JavaScript-enabled tool that can call tools from any of the servers listed in the servers parameter.
However, this is still code written by an agent. If we’re going to run this code, we’re going to want it to run in a sandbox. Our MCP servers are already running in Docker containers, and the code mode sandbox is no different. In fact, it’s an ideal case because this container only needs access to other MCP servers! The permissions for accessing external systems are already managed at the MCP layer.
This approach offers three key benefits:
Secure by Design: The agent-written code stays fully contained within a sandbox. We do not give up any of the benefits of sandboxing. The code-mode tool uses only containerized MCP servers selected from the catalog.
Token and Tool Efficiency: The tools it uses do not have to be sent to the model on every request. On subsequent turns, the model just needs to know about one new code-mode tool. In practice, this can mean hundreds of thousands fewer tokens sent to the model on each turn.
State persistence: Volumes manage state across tool calls and track intermediate results that need not, or even should not, be sent to the model.
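The composition pattern itself can be illustrated with a small Python sketch. The two stand-in tools and the run_code_mode helper are hypothetical, and plain exec() offers no real isolation; in the gateway, isolation comes from running the code in a container that can only reach other MCP servers:

```python
# Illustrative sketch of tool composition: agent-written code calls several
# tools and returns only the final result, keeping intermediate payloads
# (like long issue bodies) out of the model's context. The stand-in tools
# and exec()-based runner are for illustration only; real isolation comes
# from the container sandbox, not from exec().

def fetch_issue(n: int) -> dict:      # stand-in for a GitHub MCP tool
    return {"number": n, "title": f"Issue {n}", "body": "long body " * 50}

def markdownify(issue: dict) -> str:  # stand-in for a markdownify MCP tool
    return f"## {issue['title']} (#{issue['number']})"

def run_code_mode(source: str, tools: dict) -> object:
    """Execute agent-written code with access to the given tools only."""
    env = {"__builtins__": {}, **tools}
    exec(source, env)
    return env["result"]

# Code the agent might write: fetch three issues, summarize, join.
AGENT_CODE = """
summaries = [markdownify(fetch_issue(n)) for n in (1, 2, 3)]
result = "\\n".join(summaries)
"""

report = run_code_mode(
    AGENT_CODE, {"fetch_issue": fetch_issue, "markdownify": markdownify}
)
```

Only the final report crosses back into the context window; the three verbose issue bodies never reach the model.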
A popular illustration of this pattern is building a code-mode tool using the GitHub-official MCP server. The GitHub server happens to ship with a large number of tools, so code-mode will have a dramatic impact. In the example below, we're prompting an agent to create a new code-mode tool out of the GitHub-official and markdownify MCP servers.
Figure 5: Using the MCP code-mode to write code to call tools from the GitHub Official and Markdownify MCP servers
The combination of Smart Search and Tool Composition unlocks dynamic, secure use of MCPs. Agents can now go beyond simply finding or adding tools; they can write code to compose new tools, and run them safely in a secure sandbox.
The result: faster tool discovery, lower token usage, fewer manual steps, and more focused time for developers.
| Workflow | Before: Static MCP setup | After: Dynamic MCPs via Docker MCP Gateway | Impact |
| --- | --- | --- | --- |
| Tool discovery | Manually browse the MCP servers | mcp-find searches the Docker MCP Catalog (270+ servers) by name/description | Faster discovery |
| Adding tools | Enable the MCP servers manually | mcp-add pulls only the servers an agent needs into the current session | Zero manual config; just-in-time tooling |
| Authentication | Configure the MCP servers ahead of time | Prompt the user to complete OAuth when a remote server requires it | Clients are starting to support MCP elicitations and UX like mcp-ui for smoother onboarding flows |
| Tool composition | Agent-generated tool calls; tool definitions are sent to the model | With code-mode, agents write code that uses multiple MCP tools | Multi-tool workflows and unified outputs |
| Context size | Load lots of unused tool definitions | Keep only the tools actually required for the task | Lower token usage and latency |
| Future-proofing | Static integrations | Dynamic, composable tools with sandboxed scripting | Ready for evolving agent behaviors and catalogs |
| Developer involvement | Constant context switching and config hacking | Agents self-serve: discover, authorize, and orchestrate tools | Fewer manual steps; better focus time |
Table 1: Summary of Benefits from Docker’s Smart Search and Tool Composition for Dynamic MCPs
From Docker to Your Editor: Running dynamic MCP tools with cagent and ACP
Another new component of the Docker platform is cagent, our open source agent builder and runtime, which provides a simple way to build and distribute new agents. The latest version of cagent now supports the Agent Client Protocol (ACP), which allows developers to add custom agents to ACP-enabled editors like Neovim or Zed, and then to share these agents by pushing them to or pulling them from Docker Hub.
This means we can now build agents that know how to use features like Smart Search tools or code mode, and then embed these agents in ACP-powered editors using cagent. Here's an example agent, running in Neovim, that helps us discover new tools relevant to whatever project we are currently editing.
Figure 6: Running Dynamic MCPs in Neovim via Agent Client Protocol and a custom agent built with cagent, preconfigured with MCP server knowledge
In their section on state persistence and skills, the folks at Anthropic also hint that dynamic tools and code-mode execution bring us closer to a world where, over time, agents accumulate code and tools that work well together. Our current code-mode tool does not yet save the code it writes back to the project, but this is an area we're actively working on.
For the Neovim example above, we used the ACP support in the CodeCompanion plugin. Also, please check out the cagent adapter in this repo. For Zed, see their doc on adding custom agents and, of course, try out cagent acp agent.yaml with your own custom agent.yaml file.
Getting Started with Dynamic MCPs Using Smart Search and Tool Composition
Dynamic tools are now available in the MCP Gateway project. Unless you are running the gateway with an explicit set of servers (using the existing --servers flag), these tools are available to your agent by default. The dynamic tools feature can also be disabled using docker mcp feature disable dynamic-tools. This is a feature we're actively developing, so please try it out and let us know what you think by opening an issue or starting a discussion in our repo.
Get started by connecting your favorite client to the MCP gateway using docker mcp client connect, or by adding a connection using the “Clients” tab in the Docker Desktop MCP Toolkit panel.
Summary
The Docker MCP Toolkit combines a trusted runtime (the Docker Engine) with catalogs of MCP servers. Beginning with Docker Desktop 4.50, we are extending the MCP gateway interface with new tools like mcp-find, mcp-add, and code-mode to enable agents to discover MCP servers more effectively, and even to use these servers in new ways.
Whether it’s searching or pulling from a trusted catalog, initiating an OAuth flow, or scripting multi-tool workflows in a sandboxed runtime, agents can now do more on their own. And that takes us a big step closer to the agentic future we’ve been promised!
Got feedback? Open an issue or start a discussion in our repo.
Learn more
Explore the MCP Gateway Project: Visit the GitHub repository for code, examples, and contribution guidelines.
Dive into Smart Search and Tool Composition: Read the full documentation to understand how these features enable dynamic, efficient agent workflows.
Learn more about Docker’s MCP Solutions.
Source: https://blog.docker.com/feed/
AWS announces expanded instance family support in Deadline Cloud, adding new 6th, 7th, and 8th generation EC2 instances to enhance visual effects and animation rendering workloads. This release includes support for C7i, C7a, M7i, M7a, R7a, R7i, M8a, M8i, and R8i instance families, along with additional 6th generation instance types that were previously unavailable. Deadline Cloud is a fully managed service that helps customers run visual compute workloads in the cloud without having to manage infrastructure. With this enhancement, studios can utilize a broader range of AWS compute technology to optimize their rendering workflows. The compute-optimized (C-series), general-purpose (M-series), and memory-optimized (R-series) instances provide tailored options for different rendering workloads – from compute-intensive simulations to memory-heavy scene processing. The inclusion of latest-generation instances like M8a and R8i enables customers to access improved performance and efficiency for their most demanding rendering tasks. These instance families are available in all 10 AWS Regions where Deadline Cloud is offered. The specific instance types available in each Region depend on the regional availability of the EC2 instance types themselves. To learn more about the new instance types supported in Deadline Cloud and their regional availability, see the AWS Deadline Cloud pricing page.
Source: aws.amazon.com
Amazon CloudWatch Application Signals expands its availability to AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions, enabling government customers and regulated industries to automatically monitor and improve application performance in these regions. CloudWatch Application Signals provides comprehensive application monitoring capabilities by automatically collecting telemetry data from applications running on Amazon EC2, Amazon ECS, Amazon EKS and AWS Lambda, helping customers meet their compliance and monitoring requirements while maintaining workload visibility. With CloudWatch Application Signals, customers in AWS GovCloud (US) regions can now monitor application health in real time, track performance against business goals, visualize service relationships and dependencies, and quickly identify and resolve performance issues. This automated observability solution eliminates the need for manual instrumentation while providing detailed insights into application behavior and performance patterns. The service automatically detects anomalies and helps correlate issues across different AWS services, enabling faster problem resolution and improved application reliability. CloudWatch Application Signals will be available in AWS GovCloud (US-East) and AWS GovCloud (US-West). For pricing information, visit the Amazon CloudWatch pricing page. To get started, visit the Amazon CloudWatch Application Signals documentation.
Source: aws.amazon.com
Today, Amazon SageMaker Unified Studio announced new capabilities allowing SageMaker projects to add custom tags to resources created through the project. This helps customers enforce tagging standards that conform to Service Control Policies (SCP) and helps enable cost tracking reporting practices on resources created across the organization. As an Amazon SageMaker Unified Studio administrator, you can configure a project profile with tag configurations that will be pushed down to all projects using the project profile. Project profiles can be setup to pass Key and Value tag pairings or pass the Key of the tag with a default Value that can be modified during project creation. All tag values passed to the project will result in the resources created by that project being tagged. This provides administrators a governance mechanism that enforces project resources have the expected tags. This first release of custom tags for project resources is supported only through application programming interface (API). Custom tags for project resources capability is available in all AWS Regions where Amazon SageMaker Unified Studio is supported, including: Asia Pacific (Tokyo), Europe (Ireland), US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), South America (São Paulo), Asia Pacific (Seoul), Europe (London), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Asia Pacific (Mumbai), Europe (Paris), Europe (Stockholm) To learn more, visit Amazon SageMaker then get started with the custom tag API documentation.
Source: aws.amazon.com
Lego has announced its first Star Trek building set: the legendary USS Enterprise NCC-1701-D with 3,600 pieces. (Lego, Star Wars)
Source: Golem
For the first time, the PayPal alternative Wero can be used for online shopping. Initially only concert tickets are available, and only for certain customers. (Wero, Deutsche Bahn)
Source: Golem
The UK's environment and agriculture ministry is renewing its IT for 312 million pounds, but is sticking with Windows 10. (Microsoft, Politics)
Source: Golem
Stalkerware makes it easy to spy on the people around you. A new test shows which Android anti-virus tools offer the best protection. (Anti-Virus, Virus)
Source: Golem
Apple reportedly plans to drop Wi-Fi network syncing between Apple Watch and iPhone rather than comply with the DMA. (Apple Watch, Apple)
Source: Golem