A practitioner’s view of how Docker enables security by default and helps developers work better

This blog post was written by Docker Captains, experienced professionals recognized for their expertise with Docker. It shares their firsthand, real-world experiences using Docker in their own work or within the organizations they lead. Docker Captains are technical experts and passionate community builders who drive Docker’s ecosystem forward. As active contributors and advocates, they share Docker knowledge and help shape Docker products. To learn more about becoming a Docker Captain, or to contact one, visit the Docker Captains’ website.

Security has long been a primary concern for organizations of every kind around the world, through every era of technology: first mainframes, then servers, then the cloud, each with its public and private variations. With each evolution, security requirements grew and became harder to satisfy.

Once we advanced into the world of distributed systems, security teams had to keep pace with a faster-moving environment: new programming languages, new libraries, new packages, new images, new everything.

For security to be handled correctly, security engineers needed a strong, well-designed security architecture that guaranteed developer experience wouldn’t be impacted. And that’s where Docker comes in!

Container Security Basics

Container security covers a wide range of different topics. The field is so broad that there are entire books written exclusively about this subject. But when entering an enterprise environment, we can narrow it down to a few specific topics that need to be prioritized:

Artifacts

Code

Build file (e.g. Dockerfile) creation

Vulnerability management

Culture/Processes

Let’s go a little deeper into each of these topics.

Artifacts

Trustworthy artifacts are the first step toward a secure environment: your engineers need reliable resources to build on.

To reduce friction between security teams and developers, security engineers have to make secure resources available for developers, so they can simply pull their images, libraries, and dependencies in general, and start using them in their systems.

Docker Hardened Images (which we’ll cover a couple of sections into this article) can help you with that.

In enterprise environments, we usually see a centralized repository for approved artifacts. This helps teams manage resources and the components used in their environments, while also helping developers know where to look when they want something.

Code

Everything really starts with the code that’s written. Pushing problematic code into production might not seem bad at first, but in the long run it will cause you a lot of trouble.

In security, every surface has to be considered. We can create the most secure build file in the world, have the most robust process for managing assets, and have great IAM (Identity and Access Management) workflows, but we are still exposed if our code isn’t well written.

Beyond relying only on the developer’s expertise, we need to create guardrails that identify and mitigate problems as they appear. This enforces a second layer of protection over all the work that’s done. Having tools in place can catch mistakes developers might not see at first.

Having well trained developers and the right controls in the CI/CD pipelines our code goes through allows us to rest easy at night knowing we’re not sending bad code into production.

A few controls that can be applied to these pipelines (a sketch of a CI stage wiring some of them together follows the list):

SCA (Software Composition Analysis)

SAST/DAST/IAST

Secret Scanning

Dependency Scanning
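To make this more concrete, here is a minimal sketch of a CI security stage combining a few of these controls. The tool choices (gitleaks, Trivy, Semgrep) are illustrative examples of widely used open-source scanners, not an endorsement of a particular stack, and the flags should be checked against the versions you actually run:

```sh
#!/usr/bin/env sh
# Illustrative CI security stage; tool choices and flags are examples only.
set -e

# Secret scanning: fail the build if credentials were committed
gitleaks detect --source . --redact

# SCA / dependency scanning: flag known CVEs in project dependencies
trivy fs --scanners vuln --exit-code 1 .

# SAST: static analysis over the source tree
semgrep scan --config auto --error
```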

Build file

At the beginning of the SDLC (Software Development Life Cycle), our engineers have to create a build file (usually a Dockerfile) to download their application’s dependencies and turn it into a container.

Creating a build file is easy, as it’s just a sequence of steps. You download something (e.g., a package or a library), install it, create a folder or a file, then download the next component, install it, and so on until all the steps have been completed. But even though the default values and settings usually get the job done, they don’t come with all the security guardrails and best practices applied. Because of that, you need to be careful with what’s being pushed into production.

While coding a build file, it’s crucial to ensure:

That there aren’t any secrets hard coded in it;

That the container is not configured to run as root – which could allow an attacker to escalate their privileges and gain access to the host;

That there aren’t any sensitive files copied to your container (like certificates and credentials).

Taking these steps at the beginning and starting strong keeps the rest of the SDLC minimally exposed.
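As a concrete illustration, here is a minimal sketch of a Dockerfile that follows these rules. The base image, paths, and user name are placeholders for your own application:

```dockerfile
# Illustrative sketch only: base image, paths, and names are placeholders.
FROM python:3.12-slim

# Run as an unprivileged user instead of root.
RUN useradd --create-home --shell /usr/sbin/nologin appuser

WORKDIR /app

# Copy only what the application needs; keep certificates and credentials
# out of the build context with a .dockerignore file.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ ./src/

# No secrets baked in: configuration arrives at runtime (environment
# variables or mounted secrets), never via ENV or COPY at build time.
USER appuser
CMD ["python", "-m", "src.app"]
```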

Vulnerability management

Now we’re moving away from the code and artifacts themselves and toward how we manage what our engineers deliver.

Vulnerabilities can be found in everything: in technologies, in processes, everywhere. We need good vulnerability management to keep the engine running.

Companies need well-established processes to identify vulnerabilities as they surface, fix them, and, when warranted, accept them. Internally developed frameworks usually help determine whether a risk is worth taking or should be fixed before moving on.

These vulnerabilities can be new or already known. They can live in libraries used in the code, in container images used in our systems, and in the versions of the solutions running in our environment.

They are everywhere! Be sure to identify them, keep a record of them, and fix them when needed.

Culture/Processes

Technology isn’t the only risk to enterprise security. Poorly trained engineers and bad processes also pose a real threat to a company’s security structure.

A flaw in a process might result in the wrong code being pushed into production, or the wrong version of a container image being used in a system.

If we consider how people, processes, and technology are related, we can see why a problem in the vulnerability assessment of a library might cause an entire cluster to be compromised, or why a role wrongly assigned to a user poses a serious risk to the integrity of an entire cloud environment.

These are exaggerated examples, but they show that in tech everything is connected, even when we don’t see it.

That’s why processes are so important. Solid processes mean we are focused on set outcomes instead of pleasing stakeholders. It’s important to take feedback into consideration and to make adjustments as we move forward, but we need to ensure these processes are followed, even when there isn’t unanimous agreement.

To have successful processes established, we have to:

Design guardrails

Implement steps

Train teams

Repeat

That’s the only way to enable teams effectively!

How Docker protects engineers and companies

Docker has been an ally of software engineers and security teams for a while now, not only by enabling the success of distributed systems, but also by improving how developers write and containerize their applications.

As the Docker platform evolved, security was treated as the number one priority, just as it is for Docker’s customers.

Today, developers have access to Docker security solutions across different parts of the platform.

Docker Scout

Docker Scout is a service created by Docker to analyze container images and their layers for known vulnerabilities. It checks against publicly known CVEs and provides the user with information about the vulnerabilities in their images. To help with mitigation, Docker Scout also provides a “fixable” value, indicating whether a given vulnerability can be fixed.

This is very useful in a corporate environment because it lets security teams recognize the risks an image brings to the organization and decide whether that level of risk is acceptable.

We all love the CLI, but sometimes having a GUI (Graphical User Interface) helps. Docker knows what developers like, and for that reason, Scout is available in both. Your developers can use it to scan their images and see a quick summary in their terminal, or they can use Docker Desktop to see a complete report, with links and explanations, on the vulnerabilities found in their images.
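For instance, a terminal session might look like the following. The image name is a placeholder; run `docker scout --help` on your version to confirm the exact subcommands and flags:

```sh
# Quick risk summary of an image and its base image
docker scout quickview myorg/myapp:1.4.2

# Full CVE listing; --only-fixed narrows the report to vulnerabilities
# that already have a fix available
docker scout cves --only-fixed myorg/myapp:1.4.2

# Base-image update recommendations to help with remediation
docker scout recommendations myorg/myapp:1.4.2
```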

Docker Scout terminal report

Docker Scout Desktop report

With those reports, users can make smarter choices when adopting libraries and packages into their applications, and can work closely with security teams to get faster feedback on whether a technology is safe to use.

Docker Hardened Images

Focusing on providing engineers and companies with safe, recommended resources, Docker recently announced Docker Hardened Images (DHI): a catalog of near-zero-CVE images and optimized resources for you to start building your applications.

Even though it’s common for large organizations to maintain private container registries of approved images and dependencies, DHI provides a safer starting point for security teams, since the available resources have been through extensive examination and auditing.

Docker Hardened Images report

DHI is a very helpful resource not only for enterprises but also for independent and open source software developers. Docker-backed images make the internet and the cloud safer, allowing businesses to build trustworthy and reliable platforms for their customers!

From an engineer’s perspective, the true value of Docker Hardened Images is the trust we have in Docker and the value this security-ready solution brings. Managing image security end to end yourself is hard: keeping images ready to use is difficult, and the difficulty only grows when developers request newer versions every day. By using Hardened Images, we can give our end users (developers and engineers) the latest versions of the most popular solutions while offloading work from the security team.
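In practice, adopting DHI can be as small a change as swapping the base image in your build file. The repository path below is a hypothetical placeholder; the actual path depends on how your organization mirrors DHI into its registry namespace:

```dockerfile
# Hypothetical example: replace <your-org> and the repository name with
# the path your organization actually mirrors from the DHI catalog.
FROM <your-org>/dhi-python:3.13

WORKDIR /app
COPY . .
CMD ["python", "app.py"]
```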

Final Thoughts

We can approach security in a lot of different ways, but the main thing is this: security CANNOT slow down engineers. We need to design our controls so that we cover everything, closing every identified gap, while still allowing developers to deliver code fast.

Guarantee your engineers have the best of both worlds with Docker.

Security DevEx

Get in touch with the authors:

Pedro Ignácio:

LinkedIn

Blog

Denis Rodrigues:

LinkedIn

Blog

Learn more about Docker’s security solutions:

Docker Desktop

Docker Scout

Docker Hardened Images

Source: https://blog.docker.com/feed/

Docker @ Black Hat 2025: CVEs have everyone’s attention, here’s the path forward

CVEs dominated the conversation at Black Hat 2025. Across sessions, booth discussions, and hallway chatter, it was clear that teams are feeling the pressure to manage vulnerabilities at scale. While scanning remains an important tool, the focus is shifting toward removing security debt before it enters the software supply chain. Hardened images, compliance-ready tooling, and strong ecosystem partnerships are emerging as the path forward.

Community Highlights

The Docker community was out in full force; thank you all! Our booth at Black Hat was busy all week with nonstop conversations, hands-on demos, and a steady stream of limited-edition hoodies and Docker socks spotted around Las Vegas.

The Docker + Wiz evening party brought together the DevSecOps community to swap stories, compare challenges, and celebrate progress toward a more secure software supply chain. It was a great way to hear firsthand what’s top of mind for teams right now.

Across sessions, booth conversations, and the Wiz + Docker party, six key security themes stood out.

A busy Docker booth @ Black Hat 2025

What We Learned: Six Key Themes

Scanning isn’t enough. Teams are looking for secure, zero-CVE starting points that eliminate security debt from the outset.

Security works best when it meets teams where they are. The right hardened distro makes all the difference. For example, Debian for compatibility and Alpine for a minimal footprint.

Flexibility is essential. Customizations to minimal images are a crucial business requirement for enterprises running custom, mission-critical apps.

Hardening is expanding quickly to regulated industries, with FedRAMP-ready variants in high demand.

AI security doesn’t require reinvention; proven container patterns still protect emerging workloads.

Better-together ecosystems and partnerships still matter. We’re cooking up some great things with Wiz to cut through alert fatigue, focus on exploitable risks, and speed up hardened-image adoption.

Technical Sessions Highlights

In our Lunch and Learn event, Docker’s Mike Donovan, Brian Pratt, and Britney Blodget shared how Docker Hardened Images provide a zero-CVE starting point backed by SLAs, SBOMs, and signed provenance. This approach removes the need to choose between usability and security. Debian and Alpine variants meet teams where they are, while customization capabilities allow organizations to add certificates, packages, or configurations and still inherit updates from the base image. Interest in FedRAMP-ready images reinforced that secure-by-default solutions are in demand across highly regulated industries, and can accelerate an organization’s FedRAMP process.

Docker Hardened Images Customization

On the AI Stage, Per Krogslund explored how emerging AI agents raise new questions around trust and governance, but do not require reinventing security from scratch. Proven container security patterns—including isolation, gateway controls, and pre-runtime validation—apply directly to these workloads. Hardened images provide a crucial, trusted launchpad for AI systems too, ensuring a secure and compliant foundation before a single agent is deployed.

Black Hat 2025 is in the books, but the conversation about building secure foundations is just getting started. In response to the fantastic customer feedback, Docker Hardened Images’ roadmap now features more workflow integrations, many more verified images in the catalog, and a lot more. Watch this space!

Ready to eliminate security debt from day one? Docker Hardened Images provide zero-CVE base images, built-in compliance tooling, and the flexibility to fit your workflows. 

Learn more and request access to Docker Hardened Images!

Source: https://blog.docker.com/feed/

Agent Factory: The new era of agentic AI—common use cases and design patterns

This blog post is the first in a six-part series called Agent Factory, which will share best practices, design patterns, and tools to help guide you through adopting and building agentic AI.

Beyond knowledge: Why enterprises need agentic AI

Retrieval-augmented generation (RAG) marked a breakthrough for enterprise AI—helping teams surface insights and answer questions at unprecedented speed. For many, it was a launchpad: copilots and chatbots that streamlined support and reduced the time spent searching for information.

However, answers alone rarely drive real business impact. Most enterprise workflows demand action: submitting forms, updating records, or orchestrating multi-step processes across diverse systems. Traditional automation tools—scripts, Robotic Process Automation (RPA) bots, manual handoffs—often struggle with change and scale, leaving teams frustrated by gaps and inefficiencies.

This is where agentic AI emerges as a game-changer. Instead of simply delivering information, agents reason, act, and collaborate—bridging the gap between knowledge and outcomes and enabling a new era of enterprise automation.

Create with Azure AI Foundry

Patterns of agentic AI: Building blocks for enterprise automation

While the shift from retrieval to real-world action often begins with agents that can use tools, enterprise needs don’t stop there. Reliable automation requires agents that reflect on their work, plan multi-step processes, collaborate across specialties, and adapt in real time—not just execute single calls.

The five patterns below are foundational building blocks seen in production today. They’re designed to be combined, and together they unlock transformative automation.

1. Tool use pattern—from advisor to operator

Modern agents stand out by driving real outcomes. Today’s agents interact directly with enterprise systems—retrieving data, calling application programming interfaces (APIs), triggering workflows, and executing transactions. Agents now not only surface answers but also complete tasks, update records, and orchestrate workflows end to end.

Fujitsu transformed its sales proposal process using specialized agents for data analysis, market research, and document creation—each invoking specific APIs and tools. Instead of simply answering “what should we pitch,” agents built and assembled entire proposal packages, reducing production time by 67%.

2. Reflection pattern—self-improvement for reliability

Once agents can act, the next step is reflection—the ability to assess and improve their own outputs. Reflection lets agents catch errors and iterate for quality without always depending on humans.

In high-stakes fields like compliance and finance, a single error can be costly. With self-checks and review loops, agents can auto-correct missing details, double-check calculations, or ensure messages meet standards. Even code assistants, like GitHub Copilot, rely on internal testing and refinement before sharing outputs. This self-improving loop reduces errors and gives enterprises confidence that AI-driven processes are safe, consistent, and auditable.
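A minimal sketch of such a reflection loop looks like this; `call_llm` is a hypothetical stand-in for whatever model client you use, and the loop structure, not the specific prompts, is the point:

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")  # placeholder

def generate_with_reflection(task: str, max_rounds: int = 3) -> str:
    draft = call_llm(f"Complete this task:\n{task}")
    for _ in range(max_rounds):
        critique = call_llm(
            f"Review this answer for errors or missing details.\n"
            f"Task: {task}\nAnswer: {draft}\n"
            "Reply APPROVED if it is correct, otherwise list the problems."
        )
        if critique.strip().startswith("APPROVED"):
            break  # the self-check passed; stop iterating
        draft = call_llm(
            f"Task: {task}\nPrevious answer: {draft}\n"
            f"Problems found: {critique}\nProduce a corrected answer."
        )
    return draft
```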

3. Planning pattern—decomposing complexity for robustness

Most real business processes aren’t single steps—they’re complex journeys with dependencies and branching paths. Planning agents address this by breaking high-level goals into actionable tasks, tracking progress, and adapting as requirements shift.

ContraForce’s Agentic Security Delivery Platform (ASDP) automated its partners’ security service delivery using planning agents that break incidents down into intake, impact assessment, playbook execution, and escalation. As each phase completes, the agent checks for next steps, ensuring nothing gets missed. The result: 80% of incident investigation and response is now automated, and a full incident investigation can be processed for less than $1 per incident.

Planning often combines tool use and reflection, showing how these patterns reinforce each other. A key strength is flexibility: plans can be generated dynamically by an LLM or follow a predefined sequence, whichever fits the need.
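Here is a minimal sketch of the planning loop, under the same hypothetical `call_llm` stand-in, with task execution simulated; a real agent would invoke tools where `execute` prints:

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")  # placeholder

def execute(task: str) -> bool:
    print(f"executing: {task}")
    return True  # simulated success; a real agent would call tools here

def run_with_plan(goal: str, max_replans: int = 2) -> None:
    # Decompose the goal into tasks (here dynamically, via the LLM).
    plan = [t for t in call_llm(f"List numbered steps to: {goal}").splitlines() if t]
    done: list[str] = []
    while plan:
        task = plan.pop(0)
        if execute(task):
            done.append(task)  # track progress
        elif max_replans > 0:
            # A step failed: ask for a revised plan for the remaining work.
            revised = call_llm(
                f"Goal: {goal}\nCompleted: {done}\nFailed step: {task}\n"
                "Produce a revised numbered plan for the remaining work."
            )
            plan = [t for t in revised.splitlines() if t]
            max_replans -= 1
        else:
            break  # out of replans; escalate or surface the failure
```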

4. Multi-agent pattern—collaboration at machine speed

No single agent can do it all. Enterprises create value through teams of specialists, and the multi-agent pattern mirrors this by connecting networks of specialized agents—each focused on different workflow stages—under an orchestrator. This modular design enables agility, scalability, and easy evolution, while keeping responsibilities and governance clear.

Modern multi-agent solutions use several orchestration patterns—often in combination—to address real enterprise needs. These can be LLM-driven or deterministic: sequential orchestration (agents refine a document step by step), concurrent orchestration (agents run in parallel and merge results), group chat/maker-checker (agents debate and validate outputs together), dynamic handoff (real-time triage or routing), and magentic orchestration (a manager agent coordinates all subtasks until completion).
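To see how simple the deterministic variants can be, here is a minimal sketch of sequential orchestration; the agents are plain functions, standing in for whatever agent abstraction your platform provides:

```python
from typing import Callable

Agent = Callable[[str], str]  # each agent transforms the shared artifact

def sequential_orchestration(agents: list[Agent], artifact: str) -> str:
    for agent in agents:
        artifact = agent(artifact)  # each specialist refines the last output
    return artifact

# Usage: a draft -> edit -> review chain over a document.
# result = sequential_orchestration([drafter, editor, reviewer], "requirements...")
```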

JM Family adopted this approach with business analyst/quality assurance (BAQA) Genie, deploying agents for requirements, story writing, coding, documentation, and Quality Assurance (QA). Coordinated by an orchestrator, their development cycles became standardized and automated—cutting requirements and test design from weeks to days and saving up to 60% of QA time.

5. ReAct (Reason + Act) pattern—adaptive problem solving in real time

The ReAct pattern enables agents to solve problems in real time, especially when static plans fall short. Instead of a fixed script, ReAct agents alternate between reasoning and action—taking a step, observing results, and deciding what to do next. This allows agents to adapt to ambiguity, evolving requirements, and situations where the best path forward isn’t clear.

For example, in enterprise IT support, a virtual agent powered by the ReAct pattern can diagnose issues in real time: it asks clarifying questions, checks system logs, tests possible solutions, and adjusts its strategy as new information becomes available. If the issue grows more complex or falls outside its scope, the agent can escalate the case to a human specialist with a detailed summary of what’s been attempted.
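A minimal sketch of the ReAct loop shows the reason/act/observe rhythm; `call_llm` is again a hypothetical stand-in, and the two tools are toys standing in for real integrations:

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")  # placeholder

TOOLS = {  # toy tools standing in for real integrations
    "check_logs": lambda arg: f"(recent log lines for {arg})",
    "restart_service": lambda arg: f"{arg} restarted",
}

def react_loop(problem: str, max_steps: int = 5) -> str:
    transcript = f"Problem: {problem}"
    for _ in range(max_steps):
        step = call_llm(
            transcript + "\nThink step by step, then reply with either "
            "'ACTION <tool> <arg>' or 'FINISH <answer>'."
        )
        if step.startswith("FINISH"):
            return step.removeprefix("FINISH").strip()
        _, tool, arg = step.split(maxsplit=2)   # parse 'ACTION tool arg'
        observation = TOOLS[tool](arg)          # act...
        transcript += f"\n{step}\nObservation: {observation}"  # ...and observe
    return "Escalating to a human specialist with the transcript so far."
```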

These patterns are meant to be combined. The most effective agentic solutions weave together tool use, reflection, planning, multi-agent collaboration, and adaptive reasoning—enabling automation that is faster, smarter, safer, and ready for the real world.

Why a unified agent platform is essential

Building intelligent agents goes far beyond prompting a language model. When moving from demo to real-world use, teams quickly encounter challenges:

How do I chain multiple steps together reliably?

How do I give agents access to business data—securely and responsibly?

How do I monitor, evaluate, and improve agent behavior?

How do I ensure security and identity across different agent components?

How do I scale from a single agent to a team of agents—or connect to others?

Many teams end up building custom scaffolding—DIY orchestrators, logging, tool managers, and access controls. This slows time-to-value, creates risks, and leads to fragile solutions.

This is where Azure AI Foundry comes in—not just as a set of tools, but as a cohesive platform designed to take agents from idea to enterprise-grade implementation.

Azure AI Foundry: Unified, scalable, and built for the real world

Azure AI Foundry is designed from the ground up for this new era of agentic automation, delivering a single, end-to-end platform that meets the needs of both developers and enterprises and combines rapid innovation with robust, enterprise-grade controls.

With Azure AI Foundry, teams can:

Prototype locally, deploy at scale: Develop and test agents locally, then seamlessly move to cloud runtime—no rewrites needed. Check out how to get started with Azure AI Foundry SDK.

Flexible model choice: Choose from Azure OpenAI, xAI Grok, Mistral, Meta, and over 10,000 open-source models—all via a unified API. A Model Router and Leaderboard help select the optimal model, balancing performance, cost, and specialization. Check out the Azure AI Foundry Models catalog.

Compose modular multi-agent architectures: Connect specialized agents and workflows, reusing patterns across teams. Check out how to use connected agents in Azure AI Foundry Agent Service.

Integrate instantly with enterprise systems: Leverage more than 1,400 built-in connectors for SharePoint, Bing, SaaS, and business apps, with native security and policy support. Check out what are tools in Azure AI Foundry Agent Service.

Enable openness and interoperability: Built-in support for open protocols like Agent-to-Agent (A2A) and Model Context Protocol (MCP) lets your agents work across clouds, platforms, and partner ecosystems. Check out how to connect to a Model Context Protocol Server Endpoint in Azure AI Foundry Agent Service.

Enterprise-grade security: Every agent gets a managed Entra Agent ID, robust Role-based Access Control (RBAC), On Behalf Of authentication, and policy enforcement—ensuring only the right agents access the right resources. Check out how to use a virtual network with the Azure AI Foundry Agent Service.

Comprehensive observability: Gain deep visibility with step-level tracing, automated evaluation, and Azure Monitor integration—supporting compliance and continuous improvement at scale. Check out how to monitor Azure AI Foundry Agent Service.

Azure AI Foundry isn’t just a toolkit—it’s the foundation for orchestrating secure, scalable, and intelligent agents across the modern enterprise. It’s how organizations move from siloed automation to true, end-to-end business transformation.

Stay tuned: In upcoming posts in our Agent Factory blog series, we’ll show you how to bring these pillars to life—demonstrating how to build secure, orchestrated, and interoperable agents with Azure AI Foundry, from local development to enterprise deployment.

Azure AI Foundry
Design, customize, and manage AI apps and agents at scale.

Learn more >

The post Agent Factory: The new era of agentic AI—common use cases and design patterns appeared first on Microsoft Azure Blog.
Source: Azure