How Docker Hardened Images Patches Vulnerabilities in 24 Hours

On November 19, 2025, the Golang project published two Common Vulnerabilities and Exposures (CVEs) affecting the widely used golang.org/x/crypto/ssh package. While neither vulnerability received a critical CVSS score, both presented real risks to applications using SSH functionality in Go-based containers.

CVE-2025-58181 affects SSH servers parsing GSSAPI authentication requests. The vulnerability allows attackers to trigger unbounded memory consumption by exploiting the server’s failure to validate the number of mechanisms specified in authentication requests. CVE-2025-47914 impacts SSH agent servers that fail to validate message sizes when processing identity requests, potentially causing system panics when malformed messages arrive. (These two vulnerabilities came just days after CVE-2025-47913, a high-severity vulnerability in the same Golang component that Docker also patched quickly.)
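To see the shape of the first bug, here is a minimal Go sketch, not the actual x/crypto/ssh code, of the underlying pattern: a length field read from untrusted input must be bounded before it drives an allocation. The maxMechanisms limit is an illustrative assumption; the real patch enforces its own limit inside the GSSAPI request parser.

// Minimal Go sketch of the bug class behind CVE-2025-58181 (illustrative
// only; NOT the actual x/crypto/ssh code).
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// maxMechanisms is an illustrative bound chosen for this sketch.
const maxMechanisms = 16

func parseMechanismCount(payload []byte) (uint32, error) {
	if len(payload) < 4 {
		return 0, errors.New("short payload")
	}
	n := binary.BigEndian.Uint32(payload[:4])
	// Vulnerable pattern: doing `make([]mechanism, n)` straight from the wire
	// lets one request demand gigabytes of memory. Bounding n first is the fix.
	if n > maxMechanisms {
		return 0, errors.New("too many GSSAPI mechanisms")
	}
	return n, nil
}

func main() {
	// A request claiming 2^31 mechanisms is rejected instead of allocated.
	payload := make([]byte, 4)
	binary.BigEndian.PutUint32(payload, 1<<31)
	if _, err := parseMechanismCount(payload); err != nil {
		fmt.Println("rejected:", err)
	}
}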

For teams running Go applications with SSH functionality in their containers, leaving these vulnerabilities unpatched creates exposure to denial-of-service attacks and potential system instability.

How Docker achieves lightning-fast vulnerability response

When these CVEs hit the Golang project’s security feed, Docker Hardened Images customers had patched versions available in less than 24 hours. This rapid response stems from Docker Scout’s continuous monitoring architecture and DHI’s automated remediation pipeline.

Here’s how it works:

Continuous CVE ingestion: Unlike vulnerability scanning that runs on batch schedules, Docker Scout continuously ingests CVE information from upstream sources, including GitHub security advisories, the National Vulnerability Database, and project-specific feeds. The moment CVE data becomes available, Scout begins analysis.
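As an illustration of what such ingestion involves, the Go sketch below polls the public OSV.dev advisory API for a single package. Docker Scout’s actual pipeline is proprietary and event-driven; the polling loop and single-package query are simplifying assumptions.

// Minimal sketch of continuous advisory ingestion via OSV.dev (illustrative).
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type osvQuery struct {
	Package struct {
		Name      string `json:"name"`
		Ecosystem string `json:"ecosystem"`
	} `json:"package"`
}

type osvResponse struct {
	Vulns []struct {
		ID      string `json:"id"`
		Summary string `json:"summary"`
	} `json:"vulns"`
}

func pollOnce(pkg string) error {
	var q osvQuery
	q.Package.Name = pkg
	q.Package.Ecosystem = "Go"
	body, _ := json.Marshal(q) // cannot fail for this fixed struct

	resp, err := http.Post("https://api.osv.dev/v1/query", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	var out osvResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return err
	}
	for _, v := range out.Vulns {
		fmt.Printf("%s: %s\n", v.ID, v.Summary)
	}
	return nil
}

func main() {
	// Poll every minute; a production ingester would consume push feeds and
	// de-duplicate advisories it has already processed.
	for {
		if err := pollOnce("golang.org/x/crypto"); err != nil {
			fmt.Println("poll failed:", err)
		}
		time.Sleep(time.Minute)
	}
}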

Instant impact assessment: Within seconds of CVE ingestion, Scout identifies which Docker Hardened Images are affected based on Scout’s comprehensive SBOM database. This immediate notification allows the remediation process to start without delay.
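A toy version of that matching step might look like the following: an in-memory package inventory checked against an advisory’s first-fixed version. The image names and the fixed version shown are hypothetical, and the real Scout database goes well beyond exact-name, semver-ordered matching. Requires the golang.org/x/mod module.

// Hypothetical sketch of SBOM-based impact assessment.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

type sbomEntry struct {
	Image   string // image whose SBOM lists the package
	Package string
	Version string // Go-style version, e.g. "v0.42.0"
}

type advisory struct {
	ID      string
	Package string
	Fixed   string // first fixed version (hypothetical value below)
}

// affectedImages returns every image whose SBOM records the advisory's
// package at a version older than the first fixed release.
func affectedImages(db []sbomEntry, adv advisory) []string {
	var hits []string
	for _, e := range db {
		if e.Package == adv.Package && semver.Compare(e.Version, adv.Fixed) < 0 {
			hits = append(hits, fmt.Sprintf("%s (%s %s)", e.Image, e.Package, e.Version))
		}
	}
	return hits
}

func main() {
	// Both the image names and the fixed version are made up for illustration.
	db := []sbomEntry{
		{"dhi/golang:1.23", "golang.org/x/crypto", "v0.42.0"},
		{"dhi/node:22", "golang.org/x/crypto", "v0.45.0"},
	}
	adv := advisory{ID: "CVE-2025-58181", Package: "golang.org/x/crypto", Fixed: "v0.45.0"}
	for _, img := range affectedImages(db, adv) {
		fmt.Println("rebuild needed:", img)
	}
}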

Automated patching workflow: Depending on the vulnerability and package, Docker either patches automatically or triggers a manual review process for complex changes. For these Golang SSH vulnerabilities, the team initiated builds immediately after upstream patches became available.

Cascading builds: Once the patched Golang package builds successfully, the system automatically triggers rebuilds of all dependent packages and images. Every Docker Hardened Image containing the affected golang.org/x/crypto/ssh package gets rebuilt with the security fix.
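Conceptually, this is a walk over a reverse-dependency graph in topological order, so nothing is rebuilt before the artifacts it consumes. The sketch below uses Kahn’s algorithm over a made-up graph; DHI’s real build system is of course far more involved.

// Sketch of the cascading-rebuild step over a hypothetical dependency graph.
package main

import "fmt"

// dependents maps each artifact to the artifacts that depend on it.
var dependents = map[string][]string{
	"golang.org/x/crypto": {"module/app-a", "module/app-b"},
	"module/app-a":        {"image/dhi-service"},
	"module/app-b":        {"image/dhi-service"},
}

func rebuildOrder(root string) []string {
	// Collect everything reachable from the patched root.
	reach := map[string]bool{}
	var collect func(string)
	collect = func(n string) {
		if reach[n] {
			return
		}
		reach[n] = true
		for _, d := range dependents[n] {
			collect(d)
		}
	}
	collect(root)

	// In-degrees within the reachable subgraph, then a standard Kahn walk:
	// an artifact is emitted only after everything it consumes has been.
	indeg := map[string]int{}
	for n := range reach {
		for _, d := range dependents[n] {
			if reach[d] {
				indeg[d]++
			}
		}
	}
	queue := []string{root}
	var order []string
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		order = append(order, n)
		for _, d := range dependents[n] {
			if indeg[d]--; indeg[d] == 0 {
				queue = append(queue, d)
			}
		}
	}
	return order
}

func main() {
	for _, artifact := range rebuildOrder("golang.org/x/crypto") {
		fmt.Println("rebuild:", artifact)
	}
}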

The entire process, from CVE disclosure to patched images available to customers, was completed in under 24 hours. Customers using Docker Scout received immediate notifications about the vulnerabilities and the availability of patched versions.

Why Docker’s Security Response Is Different

One of Docker’s key differentiators is its continuous, real-time monitoring, rather than periodic batch scanning. Traditional vulnerability management relies on daily or weekly scans, leaving containers exposed to known vulnerabilities for hours or even days.

With Docker Scout’s real-time CVE ingestion, detection starts the moment a vulnerability is published, enabling remediation within seconds and minimizing exposure.

This foundation powers Docker Hardened Images (DHI), where packages and dependencies are continuously tracked and automatically updated when issues arise. For example, when vulnerabilities were found in the golang.org/x/crypto library, all affected images were rebuilt and released within a day. Customers simply pull the latest tags to stay secure; no manual patching, emergency maintenance, or impact triage is required.

But continuous monitoring is just the foundation. What truly sets Docker apart is how that real-time intelligence flows into an automated, transparent, and trusted remediation pipeline, built on over a decade of experience securing and maintaining the Docker Official Images program. These are the same images trusted and used by millions of developers and organizations worldwide, forming the foundation of countless production environments. That long-standing operational experience in continuously maintaining, rebuilding, and distributing secure images at global scale gives Docker a proven track record of reliability, consistency, and trust that few others can match.

Beyond automation, Docker’s AI guardrails add yet another layer of protection. Purpose-built for the Hardened Images pipeline, these AI systems continuously analyze upstream code changes, flag risky patterns, and prevent flawed dependencies from entering the supply chain. Unlike standard coding assistants, Docker’s AI guardrails are informed by manual, project-specific reviews, blending human expertise with adaptive intelligence. When the system detects a high-confidence issue such as an inverted error check, ignored failure, or resource mismanagement, it halts the release until a Docker engineer verifies and applies the fix. This human-in-the-loop model ensures vulnerabilities are caught long before they can reach customers, turning AI into a force multiplier for safety, not a replacement for human judgment.
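For a concrete sense of the bug classes named above, the hypothetical Go snippets below show an inverted error check and an ignored failure; this is illustrative code, not anything from a real upstream project.

// Hypothetical Go snippets showing two of the bug classes named above.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Inverted error check: the failure branch runs on success.
	f, err := os.Open("config.yaml")
	if err == nil { // BUG: should be `err != nil`
		fmt.Println("open failed")
		return
	}
	_ = f // with the inverted check, f is only used when it is nil

	// Ignored failure: the error is silently discarded, so an empty result
	// quietly stands in for a read that never happened.
	data, _ := os.ReadFile("secrets.yaml") // BUG: error dropped
	fmt.Println("read", len(data), "bytes")
}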

Another critical differentiator is complete transparency. Consider what happens when a security scanner still flags a vulnerability even after you’ve pulled a patched image. With DHI, every image includes a comprehensive and accurate Software Bill of Materials (SBOM) that provides definitive visibility into what’s actually inside your container. When a scanner reports a supposedly remediated image as vulnerable, teams can verify the exact package versions and patch status directly from the SBOM instead of relying on scanner heuristics.
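As an example of that workflow, a team could export an image’s SBOM and check the recorded package version directly. The Go sketch below assumes a CycloneDX-style JSON document saved as sbom.json; the file name, format, and package of interest are assumptions for illustration.

// Sketch of verifying patch status straight from an SBOM instead of trusting
// scanner heuristics (CycloneDX-style JSON assumed).
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type sbom struct {
	Components []struct {
		Name    string `json:"name"`
		Version string `json:"version"`
	} `json:"components"`
}

func main() {
	raw, err := os.ReadFile("sbom.json") // e.g. exported with `docker scout sbom`
	if err != nil {
		fmt.Println("read sbom:", err)
		return
	}
	var doc sbom
	if err := json.Unmarshal(raw, &doc); err != nil {
		fmt.Println("parse sbom:", err)
		return
	}
	for _, c := range doc.Components {
		if c.Name == "golang.org/x/crypto" {
			fmt.Printf("SBOM records %s at %s\n", c.Name, c.Version)
		}
	}
}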

This transparency also extends to how Docker Scout handles CVE data. Docker relies entirely on independent, third-party sources for vulnerability decisions and prioritization, including the National Vulnerability Database (NVD), GitHub Security Advisories, and upstream project maintainers. This approach is essential because traditional scanners often depend on pattern matching and heuristics that can produce false positives. They may miss vendor-specific patches, overlook backported fixes, or flag vulnerabilities that have already been remediated due to database lag. In some cases, even vendor-recommended scanners fail to detect unpatched vulnerabilities, creating a false sense of security.

Without an accurate SBOM and objective CVE data, teams waste valuable time chasing phantom vulnerabilities or debating false positives with compliance auditors. Docker’s approach eliminates that uncertainty. Because the SBOM is generated directly from the build process, not inferred after the fact, it provides definitive evidence of what’s inside each image and why certain CVEs do or don’t apply. This transforms vulnerability management from guesswork and debate into objective, verifiable security assurance, backed by transparent, third-party data.

CVEs don’t have to disrupt your week

Managing vulnerabilities consumes significant engineering time. When critical CVEs drop, teams rush to assess impact, test patches, and coordinate deployments. Docker Hardened Images eliminate this overhead by continuously updating base images, providing complete transparency into their contents, and delivering rapid turnarounds that shrink your exposure window.

If you’re tired of vulnerability whack-a-mole disrupting your team’s roadmap, Docker Hardened Images offers a better path forward. Learn more about how Docker Scout and Hardened Images can reduce your vulnerability management burden, or contact our team to discuss your specific security requirements.

Source: https://blog.docker.com/feed/

AWS announces Flexible Cost Allocation on AWS Transit Gateway

AWS announces general availability of Flexible Cost Allocation on AWS Transit Gateway, enhancing how you can distribute Transit Gateway costs across your organization.
Previously, Transit Gateway used only a sender-pay model, where the source attachment account owner was responsible for all costs related to data usage. The new Flexible Cost Allocation (FCA) feature provides more versatile cost allocation options through a central metering policy. Using an FCA metering policy, you can choose to allocate all of your Transit Gateway data processing and data transfer usage to the source attachment account, the destination attachment account, or the central Transit Gateway account. FCA metering policies can be configured at attachment-level or individual flow-level granularity. FCA also supports middle-box deployment models, enabling you to allocate data processing usage on middle-box appliances such as AWS Network Firewall to the original source or destination attachment owners. This flexibility allows you to implement multiple cost allocation models on a single Transit Gateway, accommodating various chargeback scenarios within your AWS network infrastructure.

Flexible Cost Allocation is available in all commercial AWS Regions where Transit Gateway is available. You can enable these features using the AWS Management Console, the AWS Command Line Interface (CLI), and the AWS Software Development Kit (SDK). There is no additional charge for using FCA on Transit Gateway. For more information, see the Transit Gateway documentation pages.
Source: aws.amazon.com

Amazon Connect launches monitoring of contacts queued for callback

Amazon Connect now provides you with the ability to monitor which contacts are queued for callback. This feature enables you to search for contacts queued for callback and view additional details, such as the customer’s phone number and how long they have been queued, within the Connect UI and APIs. You can now proactively route contacts that are at risk of exceeding the callback timelines communicated to customers to available agents. Businesses can also identify customers who have already successfully connected with agents and clear them from the callback queue to remove duplicative work. This feature is available in all regions where Amazon Connect is offered. To learn more, please visit our documentation and our webpage.
Source: aws.amazon.com

Second-generation AWS Outposts racks now supported in the AWS Asia Pacific (Tokyo) Region

Second-generation AWS Outposts racks are now supported in the AWS Asia Pacific (Tokyo) Region. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience. Organizations from startups to enterprises and the public sector, in and outside of Japan, can now order Outposts racks connected to this newly supported Region, optimizing for their latency and data residency needs.

Outposts allows customers to run workloads that need low-latency access to on-premises systems locally while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that needs to remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to.

To learn more about second-generation Outposts racks, read this blog post and user guide. For the most updated list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the Outposts rack FAQs page.
Source: aws.amazon.com

AWS IoT Core enhances IoT rules-SQL with variable setting and error handling capabilities

AWS IoT Core now supports a SET clause in IoT rules-SQL, which lets you set and reuse variables across SQL statements. This new feature provides a simpler SQL experience and ensures consistent content when variables are used multiple times. Additionally, a new get_or_default() function improves failure handling by returning default values when encountering data encoding or external dependency issues, ensuring IoT rules continue to execute successfully. AWS IoT Core is a fully managed service that securely connects millions of IoT devices to the AWS cloud. Rules for AWS IoT is a component of AWS IoT Core that enables you to filter, process, and decode IoT device data using SQL-like statements and route the data to more than 20 AWS and third-party services. As you define an IoT rule, these new capabilities help you avoid complicated SQL statements and make it easier to manage IoT rules-SQL failures.
These new features are available in all AWS Regions where AWS IoT Core is available, including the AWS GovCloud (US) and Amazon China Regions. For more information and a getting-started experience, visit the developer guides on the SET clause and the get_or_default() function.
Source: aws.amazon.com

Automated Reasoning checks now include natural language test Q&A generation

AWS announces the launch of natural language test Q&A generation for Automated Reasoning checks in Amazon Bedrock Guardrails. Automated Reasoning checks uses formal verification techniques to validate the accuracy and policy compliance of outputs from generative AI models, delivering up to 99% accuracy at detecting correct responses from LLMs. This gives you provable assurance in detecting AI hallucinations while also assisting with ambiguity detection in model responses.

To get started with Automated Reasoning checks, customers create and test Automated Reasoning policies using natural language documents and sample Q&As. Automated Reasoning checks generates up to N test Q&As for each policy using content from the input document, reducing the work required to go from initial policy generation to a production-ready, refined policy.

Test generation for Automated Reasoning checks is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), and Europe (Paris) Regions. Customers can access the service through the Amazon Bedrock console, as well as the Amazon Bedrock Python SDK. To learn more about Automated Reasoning checks and how you can integrate it into your generative AI workflows, please read the Amazon Bedrock documentation, review the tutorials on the AWS AI blog, and visit the Bedrock Guardrails webpage.
Source: aws.amazon.com