Accelerating AI and databases with Azure Container Storage, now 7 times faster and open source

More companies than ever before are choosing to run stateful workloads—such as relational databases, AI inferencing, and messaging queues—on Kubernetes. For developers building on Kubernetes, storage performance has never been more important.

Today, we’re announcing the next major release of Azure Container Storage – v2.0.0. Compared to prior versions, it delivers up to 7 times higher IOPS, 4 times less latency, and improved resource efficiency. With built-in support for local NVMe drives, Azure Container Storage now delivers our fastest, most powerful Kubernetes storage platform on Azure. It’s now also completely free to use, and available as an open-source version for installation on non-AKS clusters. Whether you’re running stateful applications in production, scaling AI workloads, or streamlining dev/test environments, this major release’s performance will give your workloads a considerable boost.

Get started with Azure Container Storage documentation

What’s Azure Container Storage?

Before we dive into the latest enhancements, let’s take a moment to revisit what Azure Container Storage is and how it helps developers run stateful workloads on Kubernetes with speed, simplicity, and reliability.

Azure Container Storage is a cloud-native volume management and orchestration service specifically designed for Kubernetes. It integrates seamlessly with AKS (Azure Kubernetes Service) to enable provisioning of persistent volumes for production-scale, stateful workloads.

Azure Container Storage’s vision is to serve as the unified block storage orchestrator for Kubernetes workloads on Azure, providing a consistent experience across multiple storage backends for simplified volume orchestration via Kubernetes APIs. This v2.0.0 release focuses specifically on breakthrough performance with local NVMe storage, bringing enterprise-grade performance with cloud-native simplicity. Later this year, we’ll debut Azure Container Storage integration with Elastic SAN.

Azure Container Storage delivers optimized performance and efficiency with low-latency storage for high throughput stateful applications, along with built-in orchestration and automation that allows Kubernetes to manage storage pools, persistent volume lifecycles, snapshots, and scaling—all without switching contexts or managing individual CSI (container storage interface) drivers.
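To make that orchestration concrete, here is a sketch of what requesting a volume could look like from the workload side: a Kubernetes StorageClass backed by the service and a PersistentVolumeClaim against it. The class and provisioner names below are illustrative placeholders, not the exact identifiers the service uses; see the Azure Container Storage documentation for the real values.

```yaml
# Illustrative only: class and provisioner names are placeholders,
# not the actual strings used by Azure Container Storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: acstor-local-nvme            # hypothetical class name
provisioner: example.csi.azure.com   # substitute the real provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: acstor-local-nvme
  resources:
    requests:
      storage: 100Gi
```

Once the claim is bound, a pod mounts it like any other persistent volume; the service handles pool management and volume placement behind the Kubernetes API.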

What’s new?

There’s quite a bit to unpack here, so let’s take a deeper dive into some of the key benefits that Azure Container Storage v2.0.0 delivers:

Pricing changes

As before, you’ll continue to pay for the underlying storage backend you use. But starting with version 2.0.0, Azure Container Storage no longer charges the per-GB monthly fee that previously applied to storage pools larger than 5 TiB, for both our first-party managed and open-source versions, making the service completely free to use. Provision as much storage as you need without worrying about additional management fees. You get enterprise-grade storage orchestration and breakthrough performance without any additional service costs—just pure value for your Kubernetes workloads.

Enhanced performance with reduced resource consumption

This release of Azure Container Storage is optimized specifically for local NVMe drives provided with a variety of VM families. This focus unlocks the fastest possible I/O performance for your most demanding workloads while reducing infrastructure costs.

Perhaps most exciting, this latest version of Azure Container Storage on local NVMe is now faster than ever before. We’ve rebuilt our architecture from the ground up—from the kernel level to the control plane—to push the limits of our storage orchestrator. This dramatic speed improvement comes with an equally impressive reduction in cluster resource consumption. Previously, Azure Container Storage on local NVMe had three performance modes that could consume 12.5%, 25%, or 50% of your node pool’s CPU cores. Azure Container Storage v2.0.0 no longer has performance tiers. Instead, it delivers superior performance while using fewer resources than even our previous lowest-impact setting. This translates directly to cost savings: you get better performance while freeing up CPU capacity for your applications.

Let’s look at the benchmarks. On fio (Flexible I/O Tester), the open-source industry standard for storage testing, Azure Container Storage on NVMe delivers approximately 7 times higher IOPS and 4 times less latency compared to the previous version.

But how does this translate to real workloads? We tested our own PostgreSQL for AKS deployment guide and found that PostgreSQL’s transactions per second improved by 60% while cutting latency by over 30%. For database-driven applications, this means faster query responses, higher throughput, and better user experiences.

All in all, Azure Container Storage delivers a significant performance boost for I/O-demanding workloads out of the box without additional configuration needed, offering developers a simple yet powerful tool in their cloud-native arsenal.

Accelerated AI model loading and KAITO Integration

For AI and machine learning workloads, model loading time can be a significant bottleneck. Azure VMs equipped with GPUs have local NVMe drives available. With the latest NVMe enhancements in the new v2.0.0 version, Azure Container Storage takes advantage of this hardware by dramatically accelerating model file loading for AI inferencing workloads. With our recent integration with KAITO, the first Kubernetes-native controller for automating AI model deployment, you can now deploy and scale AI models faster than ever, reducing time-to-inference and improving overall AI application responsiveness.

Above: Azure Container Storage providing fast NVMe-backed storage for model files

We loaded the Llama-3.1-8B-Instruct model and found a 5 times improvement in model file loading speed with Azure Container Storage v2.0.0 compared to using an ephemeral OS disk.

More flexible scaling options

Azure Container Storage previously required a minimum of three nodes when using ephemeral drives. It now works with clusters of any size, including single-node deployments. This flexibility is particularly valuable for applications with robust built-in replication or backup capabilities, development environments, and edge deployments where you need high-performance storage without the overhead of larger clusters. The elimination of minimum node requirements also reduces costs for smaller deployments while maintaining the same high-performance capabilities.

Open source and community support

We recognize how important the open-source community is to the health and spirit of the Kubernetes ecosystem. Azure Container Storage version 2.0.0 is now built on our newly created open-source repositories, making it accessible to the broader Kubernetes community.

Whether you need the Azure-managed version for seamless AKS integration or prefer the community open-source version for self-hosted Kubernetes clusters, you get the same great product and features. The open-source approach also means easier installation, greater transparency, and the ability to contribute to the project’s evolution.

Explore our open-source repository (local-csi-driver), and learn more about our related block storage products:

Azure Container Storage enabled by Azure Arc

Use Container Storage Interface (CSI) driver for Azure Disk on Azure Kubernetes Service (AKS)

In summary

This major update to Azure Container Storage delivers a faster and leaner high-performance Kubernetes storage platform. Here’s what you get:

Included out of the box: This release focuses on ephemeral drives (local NVMe and temporary SSD) provided with select VM families, including storage-optimized L-series, GPU-enabled ND-series, and general-purpose Da-series.

Enhanced workload support: Optimized for demanding applications like PostgreSQL databases and KAITO-managed AI model serving.

Superior performance: 7 times improvement in read/write IOPS and 4 times reduction in latency, with 60% better PostgreSQL transaction throughput.

Open source: Built on open-source foundations with community repositories for easier installation on any Kubernetes cluster.

Flexible scaling: Deploy on clusters with as few as one node—no minimum cluster size requirements.

Zero service fees: Completely free to use for all storage pool sizes—you only pay for underlying storage.

Getting started

Ready to experience the performance boost? Here are your next steps:

New to Azure Container Storage? Start with our comprehensive documentation.

Deploying specific workloads? Check out our updated deployment guide for PostgreSQL.

Want the open-source version? Visit our GitHub repository for installation instructions.

Have questions or feedback? Reach out to our team at AskContainerStorage@microsoft.com.

Regardless of your workload, Azure Container Storage provides the performance and ease you expect from modern cloud-native storage. We’re excited to see what you build—and we’d love to hear your feedback. Happy hacking!
The post Accelerating AI and databases with Azure Container Storage, now 7 times faster and open source appeared first on Microsoft Azure Blog.
Source: Azure

MCP Security: A Developer’s Guide

Since its release by Anthropic in November 2024, Model Context Protocol (MCP) has gained massive adoption and is quickly becoming the connective tissue between AI agents and the tools, APIs, and data they act on. 

With just a few lines of configuration, an agent can search code, open tickets, query SaaS systems, or even deploy infrastructure. That kind of flexibility is powerful, but it also introduces new security challenges. In fact, security researchers analyzing the MCP ecosystem found command injection flaws affecting 43% of analyzed servers. A single misconfigured or malicious server can exfiltrate secrets, trigger unsafe actions, or quietly change how an agent behaves.

This guide is for developers and platform teams building with agents. We’ll unpack what makes MCP workflows uniquely risky for AI infrastructure, highlight common missteps like prompt injection or shadow tooling, and show how secure defaults, like containerized MCP servers and policy-based gateways, can help you govern every tool call without slowing your AI roadmap.

What is MCP security?

Model Context Protocol is a standardized interface that enables AI agents to interact with external tools, databases, and services. MCP security refers to the controls and risks that govern how agents discover, connect to, and execute MCP servers. These security risks span across the entire development lifecycle and involve:

Supply chain: how servers are packaged, signed, versioned, and approved.

Runtime isolation: how they’re executed on the host vs. in containers, with CPU/memory/network limits.

Brokered access: how calls are mediated, logged, blocked, or transformed in real time.

Client trust: which tools a given IDE/agent is allowed to see and use.

Why does MCP security matter?

Securing MCP workflows has become more important than ever because AI agents blur the line between “code” and “runtime.” A prompt or tool description can change what your system is capable of without a code release. 

This means that security practices have to move up a layer, from static analysis to policy over agent‑tool interactions. Docker codifies that policy in a gateway and makes secure defaults practical for everyday developers.

Docker’s approach is to make MCP both easy and safe through containerized execution, a policy‑enforcing MCP Gateway, and a curated MCP Catalog & Toolkit that helps teams standardize what agents can do. If you’re building with agents, this guide will help you understand the risks, why traditional tools fall short, and how Docker reduces blast radius without slowing your AI roadmap.

Understanding MCP security risks

While MCP risks can show up in various ways across the dev lifecycle, there are specific categories they typically fall into. The section below highlights how these risks surface in real workflows, their impact, and practical guardrails that mitigate without slowing teams down. 

Misconfigurations & weak defaults

Running servers directly on the host with broad privileges or a persistent state.

Unrestricted network egress from tools to the public internet.

Unvetted catalogs/registries in client configs, exposing agents to unknown tools.

No audit trail for tool calls, making incidents hard to investigate or respond to.

Impact: Lateral movement, data exfiltration, and irreproducible behavior.

Mitigation: Always follow MCP server best practices such as leveraging containerization, applying resource and network limits, maintaining an allowlist of approved tools, and capturing call logs centrally.

Malicious or compromised servers (supply chain)

Typosquatting/poisoned images or unsigned builds.

Hidden side effects or altered tool metadata that nudges agents into risky actions.

Impact: Covert behavior change, credential theft, persistent access.

Mitigation: Require signature verification, pin versions/digests, and pull from curated sources such as the MCP Catalog & Toolkit.
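Pinning by digest is simple to reason about: an artifact is acceptable only if its content hash matches the value recorded at approval time. A minimal sketch of that check, with a made-up server name and payload (real deployments pin container image digests and verify signatures, not raw bytes):

```python
import hashlib

# Hypothetical pin list: server name -> approved sha256 digest.
# In practice you would pin container image digests from a curated catalog.
PINNED = {
    "example/mcp-search": "sha256:" + hashlib.sha256(b"approved build v1.2.3").hexdigest(),
}

def verify(name: str, artifact: bytes) -> bool:
    """Accept an artifact only if its digest matches the pinned value."""
    digest = "sha256:" + hashlib.sha256(artifact).hexdigest()
    return PINNED.get(name) == digest
```

Any drift in the artifact, a single byte, produces a different digest and the server is refused.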

Secret management failures

Plaintext credentials in environment variables, prompts, or tool arguments.

Leakage via tool outputs or model completions.

Impact: Account takeover, data loss.

Mitigation: Use managed secrets, minimize prompt exposure, and redact or block sensitive values at the broker.
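As an illustration of broker-side redaction, a minimal sketch can match credential-shaped strings before they reach the model or its logs. The patterns below are deliberately simplified examples; a production broker ships far more thorough detection.

```python
import re

# Simplified example patterns for credential-shaped strings.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),               # AWS access key ID shape
    re.compile(r"ghp_[A-Za-z0-9]{36}"),            # GitHub token shape
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._\-]+"),  # bearer tokens
]

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace anything that looks like a credential with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Applied to both tool arguments and tool outputs, this keeps plaintext secrets out of prompts and completions.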

Prompt injection & tool poisoning

Prompt injection: hostile content instructs the model to exfiltrate data or call dangerous tools.

Tool poisoning/shadowing: misleading tool descriptions or unexpected defaults that steer the agent.

Impact: Agents do the wrong thing, confidently.

Mitigation: Strict tool allowlists, pre/post‑call interceptors, and output filtering at the gateway. Docker’s MCP Gateway provides active security capabilities (signature checks, call logging, secret and network controls, interceptors).

What makes MCP security challenging?

Dynamic & non‑deterministic behavior: the same prompt may lead to different tool calls.

Instruction vs. data ambiguity: LLMs can treat content (including tool docs) as instructions.

Growing, shifting attack surface: every new tool expands what the agent can do instantly.

Traditional AppSec gaps: Static analysis tools don’t see agentic tool calls or MCP semantics; you need mediation between agents and tools, not just better prompts.

Implication for developers: You need a guardrail that lives at the agent–tool boundary, verifying what runs, brokering what’s allowed, and recording what happened.

How to prevent and mitigate MCP server security concerns

Use this practitioner checklist to raise the floor:

Containerize every MCP server: Run servers in containers (not on the host) with CPU/memory caps and a read-only filesystem where possible. Treat each server as untrusted code with the least privilege necessary. Why it helps: limits blast radius and makes behavior reproducible.

Centralize enforcement at a gateway (broker): Place a policy-enforcing gateway between clients (IDE/agent) and servers. Use it to:

Verify signatures before running servers.

Maintain a tool allowlist (only approved servers are discoverable).

Apply network egress controls and secret redaction.

Log requests/responses for audit and incident response.

Govern secrets end-to-end: Store secrets in a managed system; avoid .env files. Prefer short-lived tokens. Sanitize prompts and tool outputs to reduce exposure.

Defend the prompt layer: Use pre-call interceptors (argument/type checks, safety classifiers) and post-call interceptors (redaction, PII scrub). Combine with strict tool scoping to reduce prompt-injection blast radius.

Harden the supply chain: Pull servers from curated sources (e.g., MCP Catalog & Toolkit), require signatures, and pin to immutable versions.

Monitor and rehearse: Alert on anomalous tool sequences (e.g., sudden credential access), and run tabletop exercises that practice rotating tokens and revoking access.
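The gateway role in this checklist can be sketched as a small broker that exposes only allowlisted tools and records an audit trail for every call. This is an illustrative toy, not Docker's MCP Gateway; the tool names and digest-based audit record are assumptions made for the example.

```python
import hashlib
import json
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Broker:
    """Toy policy broker between an agent and its tools (illustrative only)."""
    allowlist: dict = field(default_factory=dict)   # name -> callable
    audit: list = field(default_factory=list)       # call records

    def register(self, name: str, tool: Callable[..., Any]) -> None:
        # Only explicitly registered tools are discoverable by the agent.
        self.allowlist[name] = tool

    def call(self, name: str, **kwargs: Any) -> Any:
        if name not in self.allowlist:
            raise PermissionError(f"tool '{name}' is not on the allowlist")
        # Log a digest of the arguments rather than raw values,
        # so secrets in arguments never land in the audit trail.
        record = {
            "tool": name,
            "args_digest": hashlib.sha256(
                json.dumps(kwargs, sort_keys=True).encode()
            ).hexdigest(),
        }
        result = self.allowlist[name](**kwargs)
        record["ok"] = True
        self.audit.append(record)
        return result
```

An agent wired through such a broker simply cannot reach a tool that was never registered, and every successful call leaves an auditable trace.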

How Docker makes MCP security practical

Turning MCP security from theory into practice means putting guardrails where agents meet tools and making trusted servers easy to adopt for agentic workflows. Docker’s MCP stack does both: Docker Gateway enforces policy and observability on every call, while the Docker MCP Catalog & Toolkit curates, verifies, and versions the servers your team can safely use.

Docker MCP Gateway: Your enforcement point

The gateway sits between clients and servers to provide verification, policy, and observability for every tool call. It supports active security measures like signature verification, call logging, secret and network controls, and pre/post-interceptors so you can block or transform risky actions before they reach your systems. 

Learn more in Docker MCP Gateway: Unified, Secure Infrastructure for Agentic AI and the Gateway Active Security documentation.

Docker MCP Catalog & Toolkit: Curation and convenience

Use the MCP Catalog & Toolkit to standardize the servers your organization trusts. The catalog helps reduce supply‑chain risk (publisher verification, versioning, provenance) and makes it straightforward for developers to pull approved tools into their workflow. With a growing selection of 150+ curated MCP servers, MCP Catalog is a safe and easy way to get started with MCP.

Looking for a broader view of how Docker helps with AI development? Check out Docker for AI.

Putting it all together: A practical flow

Choose servers from the Catalog and pin them by digest.

Register servers with the Gateway so clients only see approved tooling.

Enable active security: verify signatures, log all calls, redact/deny secrets, and restrict egress.

Add pre/post interceptors: validate arguments (before), redact/normalize outputs (after).

Monitor and tune: review call logs, alert on anomalies, rotate secrets, and update allowlists as new tools are introduced.
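Steps like "validate arguments before" and "redact/normalize outputs after" can be sketched as a pair of interceptors wrapped around a tool call. The injection and PII patterns below are deliberately simplistic placeholders for what a real gateway would apply:

```python
import re
from typing import Any, Callable

def validate_args(kwargs: dict) -> None:
    # Pre-call interceptor: reject obviously unsafe arguments before the tool runs.
    for value in kwargs.values():
        if isinstance(value, str) and re.search(
            r"(?i)ignore (all )?previous instructions", value
        ):
            raise ValueError("possible prompt injection in tool arguments")

def scrub_output(text: str) -> str:
    # Post-call interceptor: scrub email-shaped PII from tool output.
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email removed]", text)

def intercepted(tool: Callable[..., str]) -> Callable[..., str]:
    """Wrap a tool with pre- and post-call interceptors."""
    def wrapper(**kwargs: Any) -> str:
        validate_args(kwargs)
        return scrub_output(tool(**kwargs))
    return wrapper
```

The same wrapper shape extends naturally to type checks, safety classifiers, or output normalization, whatever policy the flow above calls for at that boundary.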

Conclusion

MCP unlocks powerful agentic workflows but also introduces new classes of risk, from prompt injection to tool poisoning and supply‑chain tampering. MCP security isn’t just better prompts; it’s secure packaging, verified distribution, and a brokered runtime with policy.

Key takeaways

Treat MCP as a governed toolchain, not just an SDK.

Put a policy gateway between agents and tools to verify, mediate, and observe.

Pull servers from the MCP Catalog & Toolkit and pin versions/digests.

Use active security features such as signature checks, interceptors, logging, and secret/network controls to reduce blast radius.

Learn more

Browse the MCP Catalog: Discover 200+ containerized, security-hardened MCP servers

Download the MCP Toolkit in Docker Desktop: Get immediate access to secure credential management and container isolation

Submit Your Server: Help build the secure, containerized MCP ecosystem. Check our submission guidelines for more.

Follow Our Progress: Star our repository for the latest security updates and threat intelligence
Source: https://blog.docker.com/feed/