Why We Chose the Harder Path: Docker Hardened Images, One Year Later

We’re coming up on a year since launching Docker Hardened Images (DHI) last May, and crossing a milestone earlier this month made me stop and reflect on what we’ve actually been building.

Earlier this month, we crossed 500k daily pulls of DHIs and 25k continuously patched OS-level artifacts in our SLSA Level 3 pipeline. Since we launched the free DHI Community tier at the end of last year, the catalog has grown to 2,000+ hardened images, MCP servers, Helm charts, and ELS images. We continuously patch every artifact (across CVEs, distros, and versions), so we’re now running over a million builds regularly, and we’re just getting started. Catalog coverage will jump again soon as more Debian packages, ELS images, and newer artifact types are added.

But the numbers aren’t the interesting part. What matters is how we got here.

We chose the harder path, on purpose. At every product and engineering decision, we consistently chose the option that was harder to build and operate but better for developers and for the security of the ecosystem. We made hardened images free and open source. We built a multi-distro product, so adoption doesn’t mean migrating to a vendor’s proprietary OS. We build every system package from source for distros you already run. We ship a huge range of signed attestations with every image because that’s what independent verifiability actually requires.

Along the way, we also looked closely at how the rest of the industry approaches the same problems, and found patterns in patching timelines, SBOM completeness, and advisory coverage that are worth understanding before you evaluate any hardened image provider.

We made hardened images widely accessible so every team could raise their security baseline

We wanted to make a real dent in the security posture of the internet, and that meant making hardened images widely accessible. That is why we did not put our catalog behind a gated paywall, as was the industry norm, but instead made it freely available to every developer.

Building and sustaining a hardened image pipeline at this scale isn’t trivial. We know because we’ve been doing this for over a decade with Docker Official Images, freely for the community.

With the release of DHI Community under a permissive Apache 2.0 license, we raised the baseline for security across the ecosystem. Security should not be a premium feature. That kind of impact, at scale, is only possible because the foundation is open.

We built multi-distro so adoption is drop-in and does not impose a migration tax on you

Some vendors in this space created an entirely new Linux distribution and called it “distroless,” which is a remarkable piece of branding for what is, in practice, a proprietary OS that your teams have never run, tested, or audited. Established Linux distributions like Debian and Alpine have a name for a package repository that only tracks the latest upstream version: they call it “unstable” or “edge,” not stable.

Docker doesn’t ship its own distribution; we harden the ones you already trust. That decision optimizes for your engineering reality, not ours. A hardened image that never gets adopted provides zero security value, full stop.

With the Docker “multi-distro” approach, we support both Debian and Alpine today, with support for more distros to come. This is actually hard to do: the Debian and Alpine ecosystems don’t just differ in packaging; they diverge in libc, dependency trees, CVE streams, patch timing, and tooling. You are effectively maintaining parallel supply chains, each with its own nuances and security posture. Every hardened image in the DHI catalog is available in both Alpine and Debian, across both amd64 and arm64 architectures, which means we build, patch, and attest each combination independently, taking on that operational burden so you don’t have to.

We regularly speak with engineering teams who evaluate proprietary distributions from other vendors and run into the same wall: your existing internal expertise, tools, tests, and pipelines are built around Alpine or Debian.

Migrating to an unfamiliar, vendor-owned OS isn’t a security upgrade, it’s an adoption project and a material line item of cost, alongside the sticker price of the hardened images subscription itself. The vendor lock-in aspect goes without saying.

The migration effort means revalidating CI pipelines, retraining platform teams, auditing an entirely new package ecosystem, and working through compatibility gaps that surface weeks into a rollout. Several teams tell us they bought the migration story, spent months on it, and are still paying for images their engineers haven’t adopted. With Docker, your teams stay on the distros they already run, which means the adoption cost is measured in hours, not quarters.

One of our customers at Attentive (Stephen Commisso, Principal Engineer) captured their DHI rollout experience in the phrase “200 services – zero drama”:

“The rollout was completely transparent to product teams. We had zero issues across over 200 services, which was particularly impressive since we were simultaneously switching Linux distributions from Ubuntu to Debian. All the heavy lifting happened during POC.”

We build every system package from source, for the distros you already use

With the launch of Hardened System Packages, Docker builds tens of thousands of Alpine and Debian system packages from source in a SLSA Build Level 3 pipeline with cryptographically signed, full provenance. This fundamentally changes the CVE equation.

Other vendors also claim to build system packages from source. The difference is that they build them for proprietary Linux distributions that have not had the benefit of independent community scrutiny and that customers have never run in production.

Docker builds packages for Alpine and Debian, the distributions your teams already operate, already test against, and already trust. Alpine and Debian are vast ecosystems that have independent maintainers, public mailing lists, coordinated disclosure with upstream projects, and volunteer security teams that operate independently of any commercial interest. You get the security benefit of from-source patching without the compatibility cost of adopting an unfamiliar OS.

We didn’t stop at near-zero CVEs, we made every image independently verifiable

Docker’s approach to container security is built on five pillars: minimal attack surface, verifiable SBOMs, secure build provenance, exploitability context, and cryptographic verification. We distilled our product development philosophy to these ideas, because we think your security posture depends on it. Not every vendor in the hardened image market shares this philosophy.

Most vendors in this space optimize for one metric: a clean CVE scan result.

Docker obsesses over near-zero CVEs too, but we went further: we built an attestation infrastructure that gives your security team, auditors, SOC, and change advisory boards machine-readable, cryptographically signed evidence for every question they will ask about an image.

We add 17 signed attestations to every single one of the 2,000+ images in the DHI catalog, because that is what it takes to give you independent verifiability:

What’s in this image?
  Attestations: CycloneDX SBOM, SPDX SBOM
  What it is: Machine-readable inventory of every package, version, and transitive dependency.
  Why it matters: First thing auditors request during compliance reviews. Both formats are included so you don’t have to convert for different toolchains.

How was it built, and can I prove it?
  Attestations: SLSA provenance v1, SLSA verification summary, Scout provenance, DHI Image Sources
  What it is: Cryptographic proof linking every image to its exact source definition.
  Why it matters: Required by supply chain security policies. Used by incident responders during forensics to verify whether an image was legitimately built or injected.

What vulnerabilities exist, and what’s been assessed?
  Attestations: CVEs v0.1, CVEs v0.2, VEX, Scout health score
  What it is: CVE scan results and per-CVE exploitability justifications attached to the image itself.
  Why it matters: When your GRC team prepares a FedRAMP POA&M or your security team triages a new advisory, the evidence is already on the artifact.

Is it compliant?
  Attestations: FIPS compliance, STIG scan
  What it is: FIPS evidence and OpenSCAP-generated STIG results.
  Why it matters: Ready artifacts for FedRAMP, PCI DSS, and HIPAA audits. Typically the most expensive artifacts to produce manually; Docker generates them automatically.

Has it been checked for non-CVE risks?
  Attestations: Secrets scan, Virus scan, Tests
  What it is: Confirms no leaked credentials, no known malware, and that the image functions as expected.
  Why it matters: These are the checks SOC teams and security review boards require before approving production deployment. Docker runs them on every build.

What changed?
  Attestations: Changelog
  What it is: Signed record of what was added, removed, or patched between versions.
  Why it matters: Change advisory boards need this to approve updates. Without it, your team is diffing images manually.
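As a concrete illustration of what the SBOM attestations enable, here is a minimal Python sketch that inventories a CycloneDX SBOM document. The inline SBOM is a made-up example, but the `components`, `name`, `version`, and `purl` fields follow the standard CycloneDX JSON shape.

```python
import json

def inventory(sbom: dict) -> list[tuple[str, str]]:
    """Return (name, version) for every component in a CycloneDX SBOM."""
    return [
        (c.get("name", "?"), c.get("version", "?"))
        for c in sbom.get("components", [])
    ]

# Inline CycloneDX-style document for illustration; in practice you would
# load the SBOM attestation you pulled from the registry, e.g.:
#   sbom = json.load(open("sbom.cdx.json"))
sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "openssl", "version": "3.0.17", "purl": "pkg:apk/alpine/openssl@3.0.17"},
        {"name": "zlib", "version": "1.3.1", "purl": "pkg:apk/alpine/zlib@1.3.1"},
    ],
}
for name, version in inventory(sbom):
    print(f"{name} {version}")
```

Because both CycloneDX and SPDX are attached to the image itself, this kind of inventory check needs no access to the build environment.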

Attestations answer questions about the image; the next set of questions is about the vendor.

What to ask your vendor, and what we found when we asked ourselves the same questions

In a fast-moving ecosystem, CVEs will occasionally get missed, advisories will have gaps, and no vendor operating at scale will have a flawless record. What matters is whether the gaps reveal isolated incidents or a pattern. The following questions are worth asking every vendor, including Docker.

What is the extent of your vendor’s commitment to patching?

Ask your vendor how far they go to resolve vulnerabilities. Docker continuously patches CVEs across both Debian and Alpine, as well as several major OSS projects, rebuilding tens of thousands of system packages and several thousand images from source. That is a significant engineering and operational investment that most vendors avoid, because it is easier to build images for a single proprietary OS.

Docker’s commitment doesn’t end at images in our catalog. When a fix doesn’t exist upstream, there are many examples of Docker’s security team creating one. For CVE-2025-12735, a 9.8 CRITICAL RCE in Kibana’s dependency chain, the affected library was unmaintained and had no patch. Docker created the fix, shipped it to customers, and contributed it to LangChain.js. The fix was released as a public npm package on November 17, 2025.

One vendor we looked at has a published CVE policy of 7-day remediation for critical CVEs, once a qualifying patch is publicly available. In this instance, their fix appeared several weeks after that qualifying patch was created by Docker and shipped by the upstream project.

This level of upstream commitment is built into how our security team operates. Docker has been a MITRE CVE Numbering Authority since 2022, part of a sustained investment in our teams’ ability to identify, disclose, and fix vulnerabilities at the source.

What assurances do you have about the completeness of your SBOMs?

Ask whether your vendor’s SBOM includes compiled dependencies (Rust crates, Go modules, JavaScript packages) or just system-level packages. Ask whether you can independently verify SBOM completeness against the project’s actual dependency manifest. Docker’s SBOMs include every compiled dependency. We’ve examined images from other vendors; in one example, for Vector (an observability pipeline compiled from hundreds of Rust crate dependencies), a vendor’s SBOM did not appear to include those dependencies.

If a dependency isn’t in the SBOM, vulnerabilities in that dependency are invisible to the customer’s scanner and unverifiable by the customer’s security team. When Docker’s security team identified a High-severity CVE in Vector’s Rust dependencies, it was patched and shipped the same evening.
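The completeness check described above can be run yourself. The sketch below assumes you have the upstream project’s Cargo.lock and the set of component names from the vendor’s SBOM; any crate in the lockfile that the SBOM omits is invisible to your scanner. The lockfile parsing is deliberately naive and for illustration only.

```python
import re

def lockfile_crates(cargo_lock: str) -> set[str]:
    """Extract crate names from Cargo.lock text (naive regex parse)."""
    return set(re.findall(r'^name = "([^"]+)"', cargo_lock, flags=re.MULTILINE))

def missing_from_sbom(cargo_lock: str, sbom_names: set[str]) -> set[str]:
    """Crates the project depends on that the SBOM does not list."""
    return lockfile_crates(cargo_lock) - sbom_names

lock = '''\
[[package]]
name = "tokio"
version = "1.38.0"

[[package]]
name = "serde"
version = "1.0.203"
'''
sbom_components = {"tokio"}  # hypothetical vendor SBOM component list
print(sorted(missing_from_sbom(lock, sbom_components)))  # serde is missing
```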

Does your vendor’s advisory feed surface every known CVE for the packages it ships?

Ask whether you can scan the vendor’s images with a third-party scanner against public advisory data, without relying on the vendor’s own advisory feed, and still get consistent results.

Docker recommends validating with Grype, Trivy, Wiz, or Mend. When we examined one vendor’s node image, CVE-2025-9308 and CVE-2025-8262 (both affecting yarn 1.22.22) were present in the shipped image, but neither appeared on the vendor’s vulnerability page or in their security advisory feed. Docker’s hardened system package for yarn 1.22.22 is built from source with patches applied for both CVEs.

If your vendor’s advisory feed has gaps, your scanner inherits those gaps, and your security team is making decisions based on incomplete data.
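One way to run that cross-check is to diff per-package CVE sets between a scan driven by public advisory data and one driven by the vendor’s own feed. The sketch below is generic; the package and CVE identifiers are taken from the yarn example above purely for illustration.

```python
def advisory_gaps(scan_a: dict[str, set[str]],
                  scan_b: dict[str, set[str]]) -> dict[str, set[str]]:
    """CVEs that scan A reports per package but scan B omits."""
    gaps = {}
    for pkg, cves in scan_a.items():
        missing = cves - scan_b.get(pkg, set())
        if missing:
            gaps[pkg] = missing
    return gaps

# Hypothetical results: public advisory data vs. the vendor's own feed.
public_scan = {"yarn@1.22.22": {"CVE-2025-9308", "CVE-2025-8262"}}
vendor_feed = {"yarn@1.22.22": set()}
print(advisory_gaps(public_scan, vendor_feed))
```

Any non-empty result means your scanner, when pointed at the vendor’s feed, would miss vulnerabilities that public data already knows about.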

When a CVE is assessed as unexploitable, does your vendor provide an auditable justification?

Not every CVE warrants a patch, and every vendor makes that judgment call. The question is whether your team can see the reasoning. Docker’s security team evaluates exploitability in the context of each minimal container image and publishes every assessment transparently.

Some vendors may set advisory version ranges to values real packages never match, making CVEs invisible to scanners without providing a justification or an audit trail.

Docker uses VEX, the CISA-backed standard for communicating exploitability, which provides a per-CVE, machine-readable justification that every customer can read and audit.
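In practice, a scanner or a small script consumes those VEX statements and suppresses findings the vendor has justified. The sketch below follows the OpenVEX statement shape (`vulnerability.name`, `status`, `justification`); the CVE IDs are placeholders.

```python
NOT_EXPLOITABLE = {"not_affected", "fixed"}

def filter_with_vex(findings: set[str], vex: dict) -> set[str]:
    """Drop scanner findings that a VEX statement marks not_affected or fixed."""
    resolved = {
        s["vulnerability"]["name"]
        for s in vex.get("statements", [])
        if s.get("status") in NOT_EXPLOITABLE
    }
    return findings - resolved

vex_doc = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "statements": [
        {
            "vulnerability": {"name": "CVE-2024-0001"},
            "status": "not_affected",
            "justification": "vulnerable_code_not_present",
        }
    ],
}
findings = {"CVE-2024-0001", "CVE-2024-0002"}
print(filter_with_vex(findings, vex_doc))  # only CVE-2024-0002 remains
```

The key property is that the suppression carries a machine-readable justification your auditors can read, rather than a CVE silently vanishing from the feed.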

We took on the parts of supply chain security others leave behind

Beyond multi-distro support, from-source patching, and transparency, we made a set of choices that compound into a distinctive, secure, simple experience for you.

Most vendor guarantees stop at the edge of the base image. Docker takes full ownership of your customized images: you add what your environment needs, and when a CVE is patched upstream, Docker automatically rebuilds your customized image, and our SLA propagates to every artifact we produce. Your customizations don’t void the security guarantee. We’ve also opened up our hardened system packages repo so you can use those hardened packages in your own bespoke containers.

We will be extending this same rigor to language libraries next. The dependencies your application pulls in through npm, pip, or Maven will carry the same provenance and patching guarantees as the OS layer beneath them.

And for organizations running software that upstream has stopped supporting, Extended Lifecycle Support continues delivering security patches for up to five years past end-of-life, so teams can maintain their security posture while upgrading on their own timeline.

Come join the movement

A year ago, 500k daily pulls of the DHI catalog and a million builds running regularly would have felt like a milestone. Today, this is the baseline.

None of this would have happened without the teams who trusted us early and pushed us hard, including Adobe, Crypto.com, Attentive, and many others. Projects like n8n.io helped us understand what it takes to operate at scale. Partners like Socket.dev, Snyk, and Mend.io are building security workflows on top of this foundation.

We are continuing to listen, iterate, and do the hard things that are better for you, because that matters. If you are thinking about supply chain security, especially given the quantity and intensity of supply chain risks AI agents bring to the mix, now is the time to raise your baseline with Docker.

Explore the Docker Hardened Images catalog and secure your supply chain here: https://www.docker.com/products/hardened-images/

For every team and developer, the open source DHI Community tier provides an immediately upgraded security posture. For businesses, we have a wide range of options that will work for your specific needs.

More resources:

DHI documentation: https://docs.docker.com/dhi/

Watch: Why n8n.io moved to DHI

Read: Medplum’s step-by-step DHI adoption playbook

Source: https://blog.docker.com/feed/

AWS Secrets Manager now supports hybrid post-quantum TLS to protect secrets from quantum threats

AWS Secrets Manager now supports hybrid post-quantum key exchange using ML-KEM (Module-Lattice-based Key-Encapsulation Mechanism) to secure TLS connections for retrieving and managing secrets. This protection is automatically enabled in Secrets Manager Agent (version 2.0.0+), AWS Lambda Extension (version 19+), and Secrets Manager CSI Driver (version 2.0.0+). For SDK-based clients, hybrid post-quantum key exchange is available in supported AWS SDKs including Rust, Go, Node.js, Kotlin, Python (with OpenSSL 3.5+), and Java v2 (v2.35.11+).
With this launch, your applications retrieve secrets over TLS connections that combine classical key exchange with post-quantum cryptography, helping protect against both traditional cryptographic attacks and future quantum computing threats known as “harvest now, decrypt later” (HNDL). No code changes, configuration updates, or migration effort are required for customers using the latest client versions, except for Java v2. For example, a microservice requiring multiple secrets at startup can now retrieve them over quantum-resistant TLS connections by simply upgrading to the latest Secrets Manager Agent version. You can verify hybrid post-quantum key exchange is active by checking CloudTrail logs for the “X25519MLKEM768” key exchange algorithm in the tlsDetails field of GetSecretValue API calls.
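The CloudTrail verification described above can be scripted. The sketch below assumes CloudTrail events are already parsed to JSON; since the exact subfield carrying the key-exchange string is not specified here, the check searches the whole tlsDetails object, and the `keyExchangeAlgorithm` field name in the sample data is an assumption.

```python
import json

def used_pq_tls(event: dict) -> bool:
    """True if a GetSecretValue CloudTrail event shows hybrid PQ key exchange."""
    if event.get("eventName") != "GetSecretValue":
        return False
    tls = event.get("tlsDetails", {})
    # Search the serialized tlsDetails rather than assuming a field name.
    return "X25519MLKEM768" in json.dumps(tls)

events = [
    {"eventName": "GetSecretValue",
     "tlsDetails": {"tlsVersion": "TLSv1.3",
                    "cipherSuite": "TLS_AES_256_GCM_SHA384",
                    "keyExchangeAlgorithm": "X25519MLKEM768"}},  # field name assumed
    {"eventName": "GetSecretValue",
     "tlsDetails": {"tlsVersion": "TLSv1.2",
                    "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256"}},
]
print([used_pq_tls(e) for e in events])
```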
Hybrid post-quantum key exchange using ML-KEM for AWS Secrets Manager is available in all AWS Regions where AWS Secrets Manager is supported. To learn more, visit the AWS Secrets Manager documentation and the AWS Post-Quantum Cryptography migration page.
Source: aws.amazon.com

AWS Transform is now available in Kiro and VS Code

AWS Transform is now available through two additional developer tools: Kiro and VS Code. AWS Transform is an agentic migration and modernization factory designed to compress enterprise transformation timelines from years to months, handling everything from large-scale infrastructure migrations to continuous tech debt reduction, without the manual handoffs and lost context that commonly stall these programs.
With today’s launch, you can get started with AWS Transform custom transformations from wherever you already work: install the AWS Transform Power in Kiro, or install the AWS Transform extension in VS Code. AWS Transform custom transformations help you crush tech debt at scale: choose from AWS-managed transformations for common patterns like Java, Python, and Node.js version upgrades and AWS SDK migrations (boto2 to boto3, Java SDK v1 to v2, JS SDK v2 to v3), or define your own. These new surfaces make it easier to discover additional capabilities as they become available, build and iterate on your own custom transformations, and run any agent repeatedly or across thousands of repositories at once. The custom transformations are the first in a growing library of playbooks coming to developer tools, complementing the existing AWS Transform web console and CLI, so you can start a job in your IDE, track progress in the web console, and finish transformations wherever it makes sense, with job state and context shared across every surface.
AWS Transform supports deploying to all AWS commercial Regions, and AWS Transform custom transformations are available in US East (N. Virginia) and Europe (Frankfurt). To learn more, visit the AWS Transform product page and user guide.
Source: aws.amazon.com

Amazon EC2 P6-B300 instances are now available in the AWS GovCloud (US-East) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) P6-B300 instances are available in the AWS GovCloud (US-East) Region. P6-B300 instances provide 8x NVIDIA Blackwell Ultra GPUs with 2.1 TB high bandwidth GPU memory, 6.4 Tbps EFA networking, 300 Gbps dedicated ENA throughput, and 4 TB of system memory. P6-B300 instances deliver 2x networking bandwidth, 1.5x GPU memory size, and 1.5x GPU TFLOPS (at FP4, without sparsity) compared to P6-B200 instances, making them well suited to train and deploy large trillion-parameter foundation models (FMs) and large language models (LLMs) with sophisticated techniques. The higher networking and larger memory deliver faster training times and more token throughput for AI workloads. P6-B300 instances are now available in the p6-b300.48xlarge size in the following AWS Regions: US West (Oregon) and AWS GovCloud (US-East). To learn more about P6-B300 instances, visit Amazon EC2 P6 instances.
Source: aws.amazon.com

Amazon Quick now supports document-level access controls for Google Drive knowledge bases

Amazon Quick now supports document-level access controls (ACLs) for Google Drive knowledge bases, enabling organizations to maintain native Google Drive permissions when indexing content. Quick combines ACL replication for efficient pre-retrieval filtering with an additional layer of real-time permission checks directly with Google Drive at query time. This dual approach means you get the performance benefits of indexed ACLs while also guarding against stale or incorrectly mapped permission data. When a user submits a query, Quick verifies their current permissions with Google Drive before generating a response—ensuring answers are based on live access rights. With document-level access controls, Amazon Quick now respects individual file and folder permissions from Google Drive. This feature is available in all AWS Regions where Amazon Quick is available.
To get started, create or update a Google Drive knowledge base in the Amazon Quick console and configure document-level access controls in your integration settings. For more information, see Google Drive integration in the Amazon Quick User Guide.
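The dual-layer model described above, an indexed-ACL pre-filter followed by a live permission check at query time, can be sketched generically. Everything below is illustrative pseudodata, not Amazon Quick’s or Google Drive’s actual API.

```python
from typing import Callable

def retrieve(user: str, query_hits: list[dict],
             live_check: Callable[[str, str], bool]) -> list[str]:
    """Pre-filter hits by indexed ACLs, then confirm each with a live check."""
    prefiltered = [d for d in query_hits if user in d["indexed_acl"]]
    return [d["id"] for d in prefiltered if live_check(user, d["id"])]

docs = [
    {"id": "roadmap.doc", "indexed_acl": {"alice", "bob"}},
    {"id": "salaries.sheet", "indexed_acl": {"alice"}},  # stale: access revoked
]
# Live source of truth (stands in for a real-time permission call to Drive).
live_permissions = {("alice", "roadmap.doc")}
result = retrieve("alice", docs, lambda u, d: (u, d) in live_permissions)
print(result)  # the stale indexed ACL for salaries.sheet is caught live
```

The indexed pre-filter keeps retrieval fast; the live check guards against the stale or mis-mapped permissions the announcement calls out.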
Source: aws.amazon.com