Why I joined Docker: security at the center of the software supply chain

Mark Lechner, Docker’s CISO, shares his vision for a future where Docker not only powers the software supply chain, but actively safeguards it.

Cybersecurity has reached a turning point. The most significant threats no longer exploit isolated systems; they move through the connections between them. The modern attack surface includes every dependency, every container, and every human interaction that connects them. 

This interconnected reality is what drew me to Docker.

Over the past decade, I’ve defended banks, fintechs, crypto exchanges, and AI startups against increasingly sophisticated adversaries. Each showed how fragile trust becomes when a software supply chain spans thousands of components.

A significant portion of the world’s software now runs through Docker Hub. Containers have become the default unit of compute. And AI workloads are multiplying both innovation and risk at unprecedented speed.

This is a rare moment, one where getting security right at the foundation can change how the entire industry builds and deploys software.

Lessons from a decade on the supply chain frontline

The environments I worked in may seem unrelated (finance, fintech, crypto, AI) but together they trace how the software supply chain evolved and how security evolved with it.

During my time at neobanks and fintechs, control defined security. We protected finite, closed systems where every dependency was known and internally managed. It was a world built on ownership and predictability. But a transition was underway, and the internal walls between teams were being pulled down. Banking-as-a-Service meant inviting outside developers into what had always been a sealed environment. Suddenly, trust was not inherited; it had to be proven. That experience crystallized the idea that transparency and verifiability must replace assumptions.

Crypto transformed that lesson into urgency. In that world, the perimeter disappeared entirely. Dependencies, registries, and APIs became active battlefields, often targeted by nation-state actors. The pace of attack compressed from months to minutes.

The Shai Hulud worm that hit npm in September 2025 captures this new reality. It began with a single phishing email spoofing an npm alert. One compromised developer credential became a self-replicating worm spreading across 600+ package versions. The malware didn’t just steal tokens; it automated its own propagation, creating malicious GitHub Actions workflows, publishing private repositories, and moving laterally through the entire ecosystem at CI/CD speed.

Social engineering provided the entry point, and crucially, supply chain automation did the rest.

It was no longer enough to be secure; you had to be provably secure and capable of near-instant remediation.

AI has amplified that acceleration even further. Model supply chains, LLM agents, and the Model Context Protocol (MCP) have introduced entire new layers of exposure: model provenance, data lineage, and automated code generation at massive scale. Security practices are still catching up to the rate of change.

Across all these environments, one constant remained: everything ran in containers. Whether it was a financial risk engine, a crypto trading service, or an AI inference model, it was containerized.

That’s when it became clear to me that Docker isn’t simply part of the supply chain. Docker is the connective layer of modern software itself.

Why Docker is the right platform for this moment

There are three reasons why this moment matters for Docker and for security as a discipline:

Ubiquity with accountability

Every developer interacts with Docker. That ubiquity brings responsibility on a global scale. If Docker strengthens its security foundation, every connected system benefits. If we fall short, the consequences ripple worldwide. That scale is what makes this mission meaningful.

Our role extends beyond individual products. As steward of the container ecosystem, we have a responsibility to make it secure by default. That means setting clear expectations for how software is published, shared, and verified across Docker Hub and the Engine. Imagine a world where every image carries an SBOM and signed provenance by default, where digital signatures are standard, and where organizations can see and control the open source in their supply chain. The container ecosystem has matured, and Docker’s job now is to secure it for the next decade.

Security as a primitive

Virtualization, isolation, and portability are not just features; they are the security primitives of modern computing. Docker is embedding those primitives directly into the developer workflow.

This is reflected in Docker Hardened Images: secure, minimal containers with verifiable provenance and complete SBOMs that help organizations control supply chain risk. Through continuous review we scan, rebuild, and remediate these images at scale, raising the security baseline for the entire open-source ecosystem. Docker Scout complements that process by turning transparency into action, helping teams understand risk context and prioritize what matters most.

Christian Dupuis, lead engineer for Docker Hardened Images, defines the foundation for how Docker builds trust in his recent blog: minimal attack surface, verifiable SBOMs, secure build provenance, exploitability context, and cryptographic verification. Docker Hardened Images bring those pillars to life at scale.

Security is not confined to containers alone. The MCP Gateway enables containerized AI-tool orchestration with isolation, unified control, and observability, extending the same secure container foundation into the AI era. By embedding policy as code into development, CI/CD, and runtime pipelines, governance becomes inherent; the same containers you trust also enforce the rules you need.

Together, these secure-by-default investments make security self-reinforcing, automated, and aligned with developer speed.

AI as the next frontier in the supply chain

AI workloads are being containerized by default. As teams adopt MCP-based architectures and integrate AI agents into workflows, Docker’s role expands from developer enablement to securing AI infrastructure itself.

Everything we have built through Docker Hardened Images and Scout in the container domain now becomes foundational for this next chapter. The same principles of transparency, provenance, and continuous review will unlock a secure supply chain for AI workloads. Our goal is to provide a platform that scales with this new velocity, enabling innovation while keeping the risks contained.

My vision: From trust to proof

In thinking about the Docker opportunity, I kept returning to one phrase: Trust is not a control.

That is the essence of our approach here. In a modern software supply chain, you cannot simply trust components; you must prove their integrity. The future of security is built on proof: transparent, cryptographically verifiable, and automated.

Docker’s mission is to make that proof accessible to every developer and every organization, without slowing them down.

Here’s what that means in practice:

Every component should carry its own origin story. Provenance must be verifiable, traceable, and inseparable from the artifact itself. When the history of a component is transparent, trust becomes evidence, not assumption.

Transparency must be complete, not performative. An SBOM is more than a compliance record; it is a living map of dependencies that reveals how trust flows through a system.
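To make that concrete, here is a minimal sketch, in Python, of treating an SBOM as a dependency map rather than a static record. It assumes a CycloneDX-style JSON document (field names follow the CycloneDX spec); the file path and the presence of the metadata section are illustrative assumptions.

```python
import json
from collections import defaultdict

def load_dependency_graph(sbom_path: str):
    """Build a ref -> direct-dependencies map from a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)

    graph = defaultdict(list)
    for entry in sbom.get("dependencies", []):
        graph[entry["ref"]].extend(entry.get("dependsOn", []))
    return sbom, graph

def transitive_closure(graph, root):
    """Every component reachable from `root` -- the map of how trust flows."""
    seen, stack = set(), [root]
    while stack:
        ref = stack.pop()
        for dep in graph.get(ref, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

if __name__ == "__main__":
    # "sbom.cdx.json" is an illustrative path; assumes the SBOM records
    # its root component under metadata.component per the CycloneDX spec.
    sbom, graph = load_dependency_graph("sbom.cdx.json")
    root = sbom["metadata"]["component"]["bom-ref"]
    reachable = transitive_closure(graph, root)
    print(f"{root} pulls in {len(reachable)} transitive components")
```

Walking the graph this way answers the question an audit actually asks: what does this artifact ultimately pull in?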

Policy belongs in the pipeline. When governance is expressed as code, it becomes repeatable and portable, scaling from local development to production without friction. This approach lets each organization apply controls where they fit best, from pre-commit hooks and CI templates to runtime admission checks, so developers can move quickly within guardrails that stay with their work.
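As a sketch of what policy as code can look like in practice, the following hypothetical CI gate evaluates a CycloneDX SBOM against a versioned policy and fails the build on violations. The denied-license list and supplier requirement are illustrative assumptions, not Docker features.

```python
import json
import sys

# Illustrative policy: in practice these values live in versioned config
# that travels with the repository, not in the script itself.
DENIED_LICENSES = {"AGPL-3.0-only", "SSPL-1.0"}
REQUIRE_SUPPLIER = True

def evaluate(sbom_path: str) -> list[str]:
    """Return a list of policy violations for a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)

    violations = []
    for comp in sbom.get("components", []):
        name = f"{comp.get('name')}@{comp.get('version', '?')}"
        for lic in comp.get("licenses", []):
            lic_id = lic.get("license", {}).get("id")
            if lic_id in DENIED_LICENSES:
                violations.append(f"{name}: denied license {lic_id}")
        if REQUIRE_SUPPLIER and "supplier" not in comp:
            violations.append(f"{name}: missing supplier metadata")
    return violations

if __name__ == "__main__":
    problems = evaluate(sys.argv[1])  # e.g. the SBOM emitted by the build
    for p in problems:
        print(f"POLICY VIOLATION: {p}", file=sys.stderr)
    sys.exit(1 if problems else 0)    # nonzero exit fails the CI job
```

Because the check is just code reading an artifact the build already produces, the same gate can run in a pre-commit hook, a CI template, or an admission controller.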

As AI reshapes development, isolation becomes the new perimeter. The ability to experiment safely, within bounded and observable environments, will define whether innovation can remain secure at scale.

These are the building blocks of a provable, scalable security model, one that developers can trust and auditors can verify.

Security should not slow development down. It should enable velocity by removing uncertainty. When the system itself provides proof, developers can build with confidence and organizations can deploy with clarity.

Building the standard for software trust

Eighteen months from now, I want “secure by Docker” to be a recognized assurance. When enterprises evaluate where to build their most sensitive workloads, Docker’s supply chain posture should be a differentiator, not a checkbox.

Docker Hardened Images will continue to evolve as the industry’s most transparent, source-built container foundation. Docker Scout will deepen visibility and context across dependencies. And our work on policy automation and AI sandboxing will extend those same assurances into new domains.

These aren’t incremental improvements. They are a shift toward verifiable, systemic security; security that is built in, measurable, and accessible to every developer.

If you are navigating supply chain risk, start with Docker Scout. If you want a trusted foundation, use Docker Hardened Images. And if you want to work on the problems that will define the next decade of software integrity, join us.

The world’s software supply chain runs through Docker.

Our mission is to ensure it is secured by Docker too.
Source: https://blog.docker.com/feed/

Amazon ECR introduces archive storage class for rarely accessed container images

Amazon ECR now offers a new archive storage class to reduce storage costs for large volumes of rarely accessed container images, helping you meet compliance and retention requirements while optimizing storage cost. As part of this launch, ECR lifecycle policies now support archiving images based on last pull time, so lifecycle rules can automatically archive images based on usage patterns.
To get started, configure lifecycle rules that automatically archive images based on criteria such as image age, count, or last pull time, or archive images individually using the ECR Console or API. You can archive an unlimited number of images, and archived images do not count against your images-per-repository limit. Once archived, images are no longer accessible for pulls, but they can be restored via the ECR Console, CLI, or API within 20 minutes; once restored, images can be pulled normally. All archival and restore operations are logged through CloudTrail for auditability.
The new ECR archive storage class is available in all AWS Commercial and AWS GovCloud (US) Regions. For pricing, visit the pricing page. To learn more, visit the documentation.
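For illustration, a lifecycle rule that archives stale images might be applied with boto3 as below. The put_lifecycle_policy call is the existing ECR API; the "archive" action type and "sinceImagePulled" criterion are assumptions inferred from the announcement's wording, so check the ECR lifecycle policy documentation for the exact field names.

```python
import json

import boto3

ecr = boto3.client("ecr")

# Assumption: the announcement says lifecycle rules can now archive images
# based on last pull time. The "sinceImagePulled" countType and the
# "archive" action type below are inferred from that wording -- verify the
# exact field names in the ECR lifecycle policy documentation.
policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Archive images not pulled for 180 days",
            "selection": {
                "tagStatus": "any",
                "countType": "sinceImagePulled",  # assumption, see above
                "countUnit": "days",
                "countNumber": 180,
            },
            "action": {"type": "archive"},  # assumption, see above
        }
    ]
}

ecr.put_lifecycle_policy(
    repositoryName="my-repo",  # illustrative repository name
    lifecyclePolicyText=json.dumps(policy),
)
```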
Source: aws.amazon.com

Amazon OpenSearch Service launches Cluster Insights for improved operational visibility

Amazon OpenSearch Service now includes Cluster Insights, a monitoring solution that provides comprehensive operational visibility into your clusters through a single dashboard. This eliminates the complexity of analyzing and correlating various logs and metrics to identify potential risks to cluster availability or performance.
The solution automates the consolidation of critical operational data across nodes, indices, and shards, transforming complex troubleshooting into a streamlined process. When investigating performance issues like slow search queries, Cluster Insights displays relevant performance metrics, affected cluster resources, top-N query analysis, and specific remediation steps in one comprehensive view. The solution operates through OpenSearch UI’s resilient architecture, maintaining monitoring capabilities even during cluster unavailability, and gives users immediate access to account-level cluster summaries for efficient management of multiple deployments.
Cluster Insights is available at no additional cost for OpenSearch version 2.17 or later in all Regions where OpenSearch UI is available. View the complete list of supported Regions here. To learn more about Cluster Insights, refer to our technical documentation.
Source: aws.amazon.com

Amazon CloudWatch now supports scheduled queries in Logs Insights

Amazon CloudWatch Logs now supports automatically running Logs Insights queries on a recurring schedule for your log analysis needs. With scheduled queries, you can now automate log analysis tasks and deliver query results to Amazon S3 and Amazon EventBridge.
With today’s launch, you can track trends, monitor key operational metrics, and detect anomalies without needing to manually re-run queries or maintain custom automation. This feature makes it easier to maintain continuous visibility into your applications and infrastructure, streamline operational workflows, and ensure consistent insight generation at scale. For example, you can set up scheduled queries for your weekly audit reporting. The query results can also be stored in Amazon S3 for analysis, or trigger incident response workflows through Amazon EventBridge. The feature supports all CloudWatch Logs Insights query languages and helps teams improve operational efficiency by eliminating manual query executions.
Scheduled queries are available in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), and South America (São Paulo).
You can configure a scheduled query using the Amazon CloudWatch console, AWS Command Line Interface (AWS CLI), AWS Cloud Development Kit (AWS CDK), and AWS SDKs. For more information, visit the Amazon CloudWatch documentation.
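For context, the sketch below shows the manual run-and-poll pattern, using the existing start_query and get_query_results operations, that a scheduled query replaces; the log group name and query are illustrative. The scheduled-query configuration itself is new, so refer to the CloudWatch documentation for the exact console, CLI, or SDK options.

```python
import time

import boto3

logs = boto3.client("logs")

# The Logs Insights query a team might previously have re-run by hand on a
# schedule -- exactly the loop this launch automates.
QUERY = """
fields @timestamp, @message
| filter @message like /ERROR/
| stats count() as errors by bin(1h)
"""

def run_insights_query(log_group: str, hours: int = 24):
    """Run one Logs Insights query over the last `hours` and return results."""
    now = int(time.time())
    resp = logs.start_query(
        logGroupName=log_group,          # illustrative log group
        startTime=now - hours * 3600,
        endTime=now,
        queryString=QUERY,
    )
    query_id = resp["queryId"]

    while True:                          # poll until the query finishes
        result = logs.get_query_results(queryId=query_id)
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            return result
        time.sleep(1)

if __name__ == "__main__":
    print(run_insights_query("/aws/lambda/my-function"))
```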
Source: aws.amazon.com

Get Invoice PDF API is now generally available

Today, AWS announces the general availability of the Get Invoice PDF API, enabling customers to programmatically download AWS invoices via SDK calls. Customers can retrieve individual invoice PDF artifacts by calling the API with an AWS Invoice ID as input and receiving a pre-signed Amazon S3 URL for immediate download of the AWS invoice and supplemental documents in PDF format. For bulk invoice retrieval, customers can first call the List Invoice Summaries API to get the Invoice IDs for a specific billing period, then pass each Invoice ID to the Get Invoice PDF API to download the corresponding PDF artifact.
The Get Invoice PDF API is available in the US East (N. Virginia) Region. Customers from any commercial Region (except China Regions) can use the service. To get started with the Get Invoice PDF API, please visit the API Documentation.
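A rough sketch of the bulk-retrieval flow described above might look like the following in Python with boto3. The operation, parameter, and response field names here are assumptions inferred from the announcement's API names; verify the exact signatures in the SDK documentation before use.

```python
import urllib.request

import boto3

# The Invoicing API is hosted in US East (N. Virginia) per the announcement.
invoicing = boto3.client("invoicing", region_name="us-east-1")

# Assumption: the operation and field names below are inferred from the
# announcement's API names ("List Invoice Summaries", "Get Invoice PDF") --
# confirm the exact SDK signatures in the documentation.
summaries = invoicing.list_invoice_summaries(
    selector={"resourceType": "ACCOUNT_ID", "value": "123456789012"},
    filter={"billingPeriod": {"month": 11, "year": 2025}},  # illustrative
)

for summary in summaries.get("invoiceSummaries", []):
    invoice_id = summary["invoiceId"]
    pdf = invoicing.get_invoice_pdf(invoiceId=invoice_id)  # assumption
    url = pdf["invoicePdfUrl"]  # pre-signed S3 URL for immediate download
    urllib.request.urlretrieve(url, f"{invoice_id}.pdf")
```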
Source: aws.amazon.com