Securing the software supply chain shouldn’t be hard. According to theCUBE Research, Docker makes it simple

In today’s software-driven economy, securing the software supply chain is no longer optional; it’s mission-critical. Yet enterprises often struggle to balance developer speed and security. According to theCUBE Research, 95% of organizations say Docker improved their ability to identify and remediate vulnerabilities, while 79% rate it highly effective at maintaining compliance with security standards. Docker embeds security directly into the developer workflow so that protection happens by default, not as an afterthought.

At the foundation are Docker Hardened Images: ultra-minimal, continuously patched container images that cut the attack surface by up to 95% and achieve near-zero CVEs. Combined with Docker Scout’s real-time vulnerability analysis, these images allow teams to prevent, detect, and resolve issues early, keeping innovation and security in sync. The result: 92% of enterprises report fewer application vulnerabilities, and 60% see reductions of 25% or more.

Docker also secures agentic AI development through the MCP Catalog, Toolkit, and Gateway. These tools provide a trusted, containerized way to run Model Context Protocol (MCP) servers that power AI agents, ensuring communication happens in a secure, auditable, and isolated environment. According to theCUBE Research, 87% of organizations reduced AI setup time by over 25%, and 95% improved AI testing and validation, demonstrating that Docker makes AI development both faster and safer.

With built-in Zero Trust principles, role-based access controls, and compliance support for SOC 2, ISO 27001, and FedRAMP, Docker simplifies adherence to enterprise-grade standards without slowing developers down. The payoff is clear: 69% of enterprises report ROI above 101%, driven in part by fewer security incidents, faster delivery, and improved productivity. In short, Docker’s modern approach to DevSecOps enables enterprises to build, ship, and scale software that’s not only fast, but fundamentally secure.

Docker’s impact on software supply chain security

Docker has evolved into a complete development platform that helps enterprises build, secure, and deploy modern applications, including agentic AI, with trusted DevSecOps and containerization practices. From Docker Hardened Images, which are secure, minimal, and production-ready container images with near-zero CVEs, to Docker Scout’s real-time vulnerability insights and the MCP Toolkit for trusted AI agents, teams gain a unified foundation for software supply chain security.

Every part of the Docker ecosystem is designed to blend in with existing developer workflows while making security affordable, transparent, and universal. Whether you want to explore the breadth of the Docker Hardened Images catalog, analyze your own image data with Docker Scout, or test secure AI integration through the MCP Gateway, it is easy to see how Docker embeds security by default, not as an afterthought.

Review additional resources

Read more in our latest blog about ROI of working with Docker

theCUBE Research Report and eBook – economic validation of Docker

Explore Docker Hardened Images and start a 30-day free trial 

View Hardened Images and Helm Charts on Docker Hub

Explore Docker Scout

Source: https://blog.docker.com/feed/

A New Approach for Coding Agent Safety

Coding agents like Claude Code, Gemini CLI, Codex, Kiro, and OpenCode are changing how developers work. But as these agents become more autonomous with capabilities like deleting repos, modifying files, and accessing secrets, developers face a real problem: how do you give agents enough access to be useful without adding unnecessary risk to your local environment?

A More Effective Way to Run Local Coding Agents Safely

We’re working on an approach that lets you run coding agents in purpose-built, isolated local environments: local sandboxes from Docker that wrap agents in containers mirroring your local workspace and enforcing strict boundaries across all the coding agents you use. The idea is to give agents the access they need while maintaining isolation from your local system.

Today’s experimental release runs agents as containers inside Docker Desktop’s VM, but we plan to move them into dedicated microVMs for greater defense in depth and a better experience when agents themselves execute Docker containers securely.

What’s Available Now (Experimental Preview)

This is an experimental preview. Commands may change and you shouldn’t rely on this for production workflows yet.

Here’s what you get today:

Container-based isolation: Agents can run code, install packages, and modify files within a bind-mounted workspace directory.

Filesystem isolation: Process containment, resource limits, and filesystem scoping protect your local system.

Broad agent support: Native support for Claude Code and Gemini CLI, with support for more coding agents coming soon.

Why We Are Taking This Approach

We don’t think operating-system-level approaches have the right long-term shape:

They sandbox only the agent process itself, not the full environment the agent needs. This means the agent constantly needs to access the host system for basic tasks (installing packages, running code, managing dependencies), leading to constant permission prompts that interrupt workflows.

They aren’t consistent across platforms.

Container-based isolation is designed for exactly the kind of dynamic, iterative workflows that coding agents need. You get flexibility without brittleness.

Although this structure is meant to be general-purpose, we’re starting with specific, pre-configured coding agents. Rather than trying to be a solution for all kinds of agents out of the box, this approach lets us solve real developer problems and deliver a great experience. We’ll support other use cases in the future, but for now, coding agents are where we can make the biggest impact.

Here’s How You Can Try It

Today’s experimental preview works natively with Claude Code and Gemini CLI. We’re building support for the other agents developers use.

With Docker Desktop 4.50 and later installed, run: docker sandbox run <agent>

This creates a new isolated environment with your current working directory bind-mounted.
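If you script these invocations, the command line can be built programmatically. A minimal Python sketch, assuming the agent names "claude" and "gemini" based on the natively supported agents above (the names your Docker Desktop version accepts may differ):

```python
import shlex

def sandbox_command(agent: str, extra_args=None) -> str:
    """Build a `docker sandbox run` invocation for a coding agent.

    The `docker sandbox run <agent>` form comes from the experimental
    preview described above; the agent names passed in are assumptions.
    """
    parts = ["docker", "sandbox", "run", agent, *(extra_args or [])]
    return shlex.join(parts)  # quote safely for a shell

print(sandbox_command("claude"))  # docker sandbox run claude
```

Using `shlex.join` keeps any extra arguments shell-safe if you later pass paths or flags through to the agent.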

What’s Next

Better support and UX for running multiple agents in parallel

Granular network access controls

Granular token and secret management for multi-agent workflows

Centralized policy management and auditability

MicroVM-based isolation architecture

Support for additional coding agents

Try It and Share Your Feedback

We’re building this alongside developers. As you experiment with Docker Sandboxes, we want to hear about your use cases and what matters most to your workflow.

Send your feedback to: coding-sandboxes-feedback@docker.com

We believe sandboxing should be how every coding agent runs, everywhere. This is an early step, and we need your input to get there. We’re building toward a future where there’s no compromise: where you can let your agents run free while protecting everything that matters. 
Source: https://blog.docker.com/feed/

Amazon OpenSearch Service introduces Agentic Search

Amazon OpenSearch Service launches Agentic Search, transforming how users interact with their data through intelligent, agent-driven search. Agentic Search introduces an agent-driven system that understands user intent, orchestrates the right set of tools, generates OpenSearch DSL (domain-specific language) queries, and provides transparent summaries of its decision-making process through a simple ‘agentic’ query clause and natural-language search terms.

Agentic Search automates OpenSearch query planning and execution, eliminating the need for complex search syntax. Users can ask questions in natural language like “Find red cars under $30,000” or “Show last quarter’s sales trends.” The agent interprets intent, applies optimal search strategies, and delivers results while explaining its reasoning. The feature provides two agent types: conversational agents, which handle complex interactions and can store conversations in memory, and flow agents for efficient query processing. The built-in QueryPlanningTool uses large language models (LLMs) to create DSL queries, making search accessible regardless of technical expertise.

Users can manage Agentic Search through APIs or OpenSearch Dashboards to configure and modify agents, and its advanced settings let you connect with external MCP servers and use custom search templates. Agentic Search is available for OpenSearch Service version 3.3 and later in all AWS Commercial and AWS GovCloud (US) Regions where OpenSearch Service is available; see the AWS Region Table for a full listing. Build agents and run agentic searches using the new Agentic Search use case available in the AI Search Flows plugin. To learn more about Agentic Search, visit the OpenSearch technical documentation.
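To make the ‘agentic’ query clause concrete, here is a hedged sketch of what such a request body might look like. The field names (`query_text`, `agent_id`) are assumptions inferred from the announcement's description and should be checked against the OpenSearch documentation for your service version:

```python
import json

def agentic_search_body(question: str, agent_id: str) -> dict:
    """Sketch of an Agentic Search request body.

    The 'agentic' clause and natural-language search terms are described
    in the announcement; the exact field names used here are assumptions.
    """
    return {
        "query": {
            "agentic": {
                "query_text": question,  # natural-language intent
                "agent_id": agent_id,    # a conversational or flow agent
            }
        }
    }

print(json.dumps(agentic_search_body("Find red cars under $30,000", "my-agent-id")))
```

The agent referenced by the id would then plan the DSL query (via the QueryPlanningTool), run it, and return results with a summary of its reasoning.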
Source: aws.amazon.com

AWS Glue Data Quality now supports pre-processing queries

Today, AWS announces the general availability of preprocessing queries for AWS Glue Data Quality, enabling you to transform your data before running data quality checks through AWS Glue Data Catalog APIs. This feature allows you to create derived columns, filter data based on specific conditions, perform calculations, and validate relationships between columns directly within your data quality evaluation process.
Preprocessing queries provide enhanced flexibility for complex data quality scenarios that require data transformation before validation. You can create derived metrics, such as total fees calculated from tax and shipping columns; limit the number of columns considered for data quality recommendations; or filter datasets to focus quality checks on specific subsets. This capability eliminates the need for separate preprocessing steps, streamlining your data quality workflows.
AWS Glue Data Quality preprocessing queries are available through the AWS Glue Data Catalog APIs start-data-quality-rule-recommendation-run and start-data-quality-ruleset-evaluation-run, in all commercial AWS Regions where AWS Glue Data Quality is available. To learn more about preprocessing queries, see the Glue Data Quality documentation.
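Conceptually, a preprocessing query derives columns and filters rows before any rules evaluate. The plain-Python sketch below illustrates only the semantics (it does not call the Glue APIs), using hypothetical column names and mirroring the "total fees from tax and shipping" example above:

```python
# Hypothetical order records; tax, shipping, and region are made-up columns.
rows = [
    {"order_id": 1, "tax": 2.0, "shipping": 5.0, "region": "us-east-1"},
    {"order_id": 2, "tax": 1.5, "shipping": 0.0, "region": "eu-west-1"},
    {"order_id": 3, "tax": 0.0, "shipping": 0.0, "region": "us-east-1"},
]

# 1) Preprocess: filter to one region and derive total_fee = tax + shipping.
subset = [dict(r, total_fee=r["tax"] + r["shipping"])
          for r in rows if r["region"] == "us-east-1"]

# 2) Quality check on the derived column (the role a DQDL rule would play).
violations = [r for r in subset if not r["total_fee"] > 0]
print(len(subset), len(violations))  # 2 1
```

In Glue itself, step 1 would be expressed as the preprocessing query attached to the recommendation or evaluation run, and step 2 as an ordinary DQDL rule over the derived column.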
Source: aws.amazon.com

Amazon Quick Suite introduces scheduling for Quick Flows

Amazon Quick Flows now supports scheduling, enabling you to automate repetitive workflows without manual intervention. You can now configure Quick Flows to run automatically at specified times or intervals, improving operational efficiency and ensuring critical tasks execute consistently. You can schedule Quick Flows to run daily, weekly, monthly, or on custom intervals.

This capability is well suited to automating routine and administrative tasks such as generating recurring reports from dashboards, summarizing open items assigned to you in external services, or generating daily meeting briefings before you head out to work. You can schedule any flow you have access to, whether you created it or it was shared with you. To schedule a flow, click the scheduling icon and configure your desired date, time, and frequency.

Scheduling in Quick Flows is available now in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions. There are no additional charges for using scheduled execution beyond standard Quick Flows usage. To learn more about configuring scheduled Quick Flows, please visit our documentation.
Source: aws.amazon.com

AWS Glue Data Quality now supports rule labeling for enhanced reporting

Today, AWS announces the general availability of rule labels, a feature of AWS Glue Data Quality that lets you apply custom key-value pair labels to your data quality rules for improved organization, filtering, and targeted reporting. This enhancement allows you to categorize data quality rules by business context, team ownership, compliance requirements, or any custom taxonomy that fits your data quality and governance needs.

Rule labels provide an effective way to organize and analyze data quality results. You can query results by specific labels to identify failing rules within particular categories, count rule outcomes by team or domain, and create focused reports for different stakeholders. For example, you can tag all rules that pertain to the finance team with the label “team=finance” and generate a customized report showing quality metrics specific to that team, or label high-priority rules with “criticality=high” to prioritize remediation efforts. Labels can be authored as part of the DQDL, and you can query them in rule outcomes, row-level results, and API responses, making it easy to integrate with your existing monitoring and reporting workflows.

AWS Glue Data Quality rule labeling is available in all commercial AWS Regions where AWS Glue Data Quality is available. See the AWS Region Table for more details. To learn more about rule labeling, see the AWS Glue Data Quality documentation.
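The label-based slicing described above can be sketched in a few lines. The outcome records below are hypothetical stand-ins for what the Glue Data Quality APIs return with labels attached; the filtering pattern is the point:

```python
# Hypothetical rule outcomes; the rule expressions use real DQDL rule types
# (IsComplete, RowCount, ColumnValues), but the records themselves are made up.
outcomes = [
    {"rule": 'IsComplete "invoice_id"',
     "labels": {"team": "finance", "criticality": "high"}, "passed": False},
    {"rule": "RowCount > 0",
     "labels": {"team": "data-eng", "criticality": "low"}, "passed": True},
    {"rule": 'ColumnValues "amount" > 0',
     "labels": {"team": "finance", "criticality": "high"}, "passed": True},
]

# Focused report: failing high-criticality rules owned by the finance team.
finance_failures = [
    o["rule"] for o in outcomes
    if o["labels"].get("team") == "finance"
    and o["labels"].get("criticality") == "high"
    and not o["passed"]
]
print(finance_failures)
```

The same filter expressed against the real API responses would drive a per-team dashboard or a remediation queue ordered by criticality.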
Source: aws.amazon.com