The 3Cs: A Framework for AI Agent Security

Every time execution models change, security frameworks need to change with them. Agents force the next shift.

The Unattended Laptop Problem

No developer would leave their laptop unattended and unlocked. The risk is obvious. A developer laptop has root-level access to production systems, repositories, databases, credentials, and APIs. If someone sat down and started using it, they could review pull requests, modify files, commit code, and access anything the developer can access.

Yet this is how many teams are deploying agents today. Autonomous systems are given credentials, tools, and live access to sensitive environments with minimal structure. Work executes in parallel and continuously, at a pace no human could follow. Code is generated faster than developers can realistically review, and they cannot monitor everything operating on their behalf.

Once execution is parallel and continuous, the potential for mistakes or cascading failures scales quickly. Teams will continue to adopt agents because the gains are real. What remains unresolved is how to make this model safe enough to operate without requiring manual approval for every action. Manual approval slows execution back down to human speed and eliminates the value of agents entirely. And consent fatigue is real.

Why AI Agents Break Existing Governance

Traditional security controls were designed around a human operator. A person sits at the keyboard, initiates actions deliberately, and operates within organizational and social constraints. Reviews worked because there was time between intent and execution. Perimeter security protected the network boundary, while automated systems operated within narrow execution limits.

But traditional security assumes something deeper: that a human is operating the machine. Firewalls trust the laptop because an employee is using it. VPNs trust the connection because an engineer authenticated. Secrets managers grant access because a person requested it. The model depends on someone who can be held accountable and who operates at human speed.

Agents break this assumption. They act directly, reading repositories, calling APIs, modifying files, using credentials. They have root-level privileges and execute actions at machine speed.  

Legacy controls were never intended for this. The default response has been more visibility and approvals, adding alerts, prompts, and confirmations for every action. This does not scale and generates “consent fatigue”, annoying developers and undermining the very security it seeks to enforce. When agents execute hundreds of actions in parallel, humans cannot review them meaningfully. Warnings become noise.

AI Governance and the Execution Layer: The Three Cs Framework

Each major shift in computing has moved security closer to execution. Agents follow the same trajectory. If agents execute, security must operate at the agentic execution layer.

That shift maps governance to three structural requirements: the 3Cs.

Contain: Bound the Blast Radius

Every execution model relies on isolation. Processes required memory protection. Virtual machines required hypervisors. Containers required namespaces. Agents require an equivalent boundary. Containment limits failure so mistakes made by an agent don’t have permanent consequences for your data, workflows, and business. Unlocking full agent autonomy requires the confidence that experimentation won’t be reckless. Without it, autonomous execution fails.
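As a concrete anchor, here is a minimal containment sketch in Python: launching an agent inside a locked-down container so a bad run cannot reach the network, mutate the host filesystem, or exhaust resources. The image name, entrypoint, and specific limits are placeholders, not a prescribed configuration.

```python
import subprocess

# A minimal containment sketch: the agent runs inside a locked-down
# container. Image name and entrypoint are placeholders; the limits
# are illustrative.
def run_agent_contained(task: str) -> subprocess.CompletedProcess:
    return subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",      # no network access by default
            "--read-only",            # immutable root filesystem
            "--tmpfs", "/tmp",        # scratch space only
            "--memory", "512m",       # bound memory use
            "--cpus", "1",            # bound CPU use
            "--pids-limit", "128",    # cap process sprawl
            "--cap-drop", "ALL",      # drop all Linux capabilities
            "my-agent-image",         # placeholder image
            "agent", "--task", task,  # placeholder entrypoint
        ],
        capture_output=True,
        text=True,
        timeout=300,                  # hard wall-clock limit per run
    )
```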

Curate: Define the Agent’s Environment

What an agent can do is determined by what exists in its environment. The tools it can invoke, the code it can see, the credentials it can use, the context it operates within. All of this shapes execution before the agent acts.

Curation isn’t approval. It is construction. You are not reviewing what the agent wants to do. You are defining the world it operates in. Agents do not reason about your entire system. They act within the environment they are given. If that environment is deliberate, execution becomes predictable. If it is not, you have autonomy without structure, which is just risk.
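A minimal sketch of what construction looks like in code, with all names illustrative: the agent never discovers tools on its own; it receives exactly the surface you build for it.

```python
from typing import Callable, Dict

# Curation as construction: the agent's world contains only what is
# deliberately placed into it.
class CuratedEnvironment:
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., str]] = {}

    def expose(self, name: str, tool: Callable[..., str]) -> None:
        """Deliberately place one tool into the agent's world."""
        self._tools[name] = tool

    def invoke(self, name: str, **kwargs) -> str:
        # Anything not curated in simply does not exist for the agent.
        if name not in self._tools:
            raise PermissionError(f"tool {name!r} is not part of this environment")
        return self._tools[name](**kwargs)

def read_file(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

# A code-review agent gets read access and nothing else: no shell, no
# network, no credentials, because none were constructed into its world.
env = CuratedEnvironment()
env.expose("read_file", read_file)
print(env.invoke("read_file", path="README.md"))  # assumes a local README.md
```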

Control: Enforce Boundaries in Real Time

Governance that exists only on paper has no effect on autonomous systems. Rules must apply as actions occur. File access, network calls, tool invocation, and credential use require runtime enforcement. This is where alert-based security breaks down. Logging and warnings explain what happened or ask permission after execution is already underway. 

Control determines what can happen, when, where, and who has the privilege to make it happen. Properly executed control does not remove autonomy. It defines its limits and removes the need for humans to approve every action under pressure. If this sounds like a policy engine, you aren’t wrong. But this must be dynamic and adaptable, able to keep pace with an agentic workforce.
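As a sketch of runtime enforcement, assuming an agent runtime that routes every action through a single checkpoint before execution; the action kinds and deny rules below are illustrative, not a real policy language.

```python
import fnmatch
from dataclasses import dataclass

# Every action the agent attempts passes through one checkpoint at the
# moment of execution, not in a review queue afterward.
@dataclass
class Action:
    kind: str    # e.g. "file_write", "network_call", "credential_use"
    target: str  # path, host, or secret name

# Illustrative boundaries; a real deployment would load these from policy.
DENY_PATTERNS = {
    "file_write": ["/etc/*", "/home/*/.ssh/*"],
    "network_call": ["*.billing.internal.example.com"],
    "credential_use": ["prod/*"],
}

def enforce(action: Action) -> None:
    """Fail closed before the action runs if it crosses a boundary."""
    for pattern in DENY_PATTERNS.get(action.kind, []):
        if fnmatch.fnmatch(action.target, pattern):
            raise PermissionError(
                f"blocked {action.kind} on {action.target}: matches {pattern!r}"
            )

enforce(Action("file_write", "/workspace/report.md"))   # allowed, proceeds
# enforce(Action("credential_use", "prod/db-password")) # would raise
```

Allowed actions proceed at machine speed; denied ones fail closed with no human in the loop, which is what lets autonomy scale without consent fatigue.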

Putting the 3Cs Into Practice

The three Cs reinforce one another. Containment limits the cost of failure. Curation narrows what agents can attempt and makes them more useful to developers by applying semantic knowledge to craft tools and context to suit the specific environment and task. Control at the runtime layer replaces reactive approval with structural enforcement.

In practice, this work falls to platform teams. It means standardized execution environments with isolation by default, curated tool and credential surfaces aligned to specific use cases, and policy enforcement that operates before actions complete rather than notifying humans afterward. Teams that build with these principles can use agents effectively without burning out developers or drowning them in alerts. Teams that do not will discover that human attention is not a scalable control plane.
Source: https://blog.docker.com/feed/

Amazon RDS now provides an enhanced console experience to connect to a database

Amazon RDS now provides an enhanced console experience that consolidates all relevant information needed to connect to a database in one place, making it easier to connect to your RDS databases. The new console experience provides ready-made code snippets for Java, Python, Node.js, and other programming languages, as well as tools like the psql command line utility. These code snippets are automatically adjusted based on your database’s authentication settings. For example, if your cluster uses IAM authentication, the generated code snippets will use token-based authentication to connect to the database. The console experience also includes integrated CloudShell access, offering the ability to connect to your databases directly from within the RDS console. This feature is available for the Amazon Aurora PostgreSQL, Amazon Aurora MySQL, Amazon RDS for PostgreSQL, Amazon RDS for MySQL, and Amazon RDS for MariaDB database engines across all commercial AWS Regions. Get started with the new console experience for database connectivity through the Amazon RDS Console. To learn more, see the Amazon RDS and Aurora user guide.
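For context, an IAM token-based connection looks roughly like the Python sketch below; the endpoint, user, and database name are placeholders, and the console generates snippets tailored to your actual instance.

```python
import boto3
import psycopg2  # pip install psycopg2-binary

# Sketch of IAM token-based authentication to an RDS PostgreSQL database,
# in the spirit of the console-generated snippets. Values are placeholders.
HOST = "mydb.cluster-abc123.us-east-1.rds.amazonaws.com"
PORT = 5432
USER = "db_iam_user"
DBNAME = "postgres"

# A short-lived IAM auth token replaces a static password.
token = boto3.client("rds", region_name="us-east-1").generate_db_auth_token(
    DBHostname=HOST, Port=PORT, DBUsername=USER
)

# IAM authentication requires an SSL connection.
conn = psycopg2.connect(
    host=HOST, port=PORT, user=USER, password=token,
    dbname=DBNAME, sslmode="require",
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
```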
Source: aws.amazon.com

Amazon DynamoDB global tables now support replication across multiple AWS accounts

Amazon DynamoDB global tables now support replication across multiple AWS accounts. DynamoDB global tables is a fully managed, serverless, multi-Region, and multi-active database used by tens of thousands of customers to power business-critical applications. With this new capability, you can replicate tables across AWS accounts and Regions to improve resiliency, isolate workloads at the account level, and apply distinct security and governance controls. For multi-account global tables, DynamoDB automatically replicates tables across AWS accounts and Regions. This capability allows you to strengthen fault tolerance and helps ensure applications remain highly available even during account-level disruptions, while allowing customers to align data placement with organizational and security requirements. Multi-account global tables are ideal for customers that adopt multi-account strategies or use AWS Organizations to improve security isolation, enforce data perimeter guardrails, implement disaster recovery (DR), or separate workloads by business unit. Multi-account global tables is available in all AWS Regions and is billed according to existing global tables pricing. To get started, see the DynamoDB global tables documentation, and visit the AWS developer guide to learn more about the benefits of using a multi-account strategy for your AWS environment.
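For orientation, the existing replica API looks like the sketch below; the multi-account capability builds on the same global tables model, and the cross-account specifics are covered in the DynamoDB documentation. Table and Region names are placeholders.

```python
import boto3

# Sketch using the existing global tables API: add a replica of a table
# in another Region via UpdateTable/ReplicaUpdates. The source table must
# already meet global tables requirements (e.g. streams enabled).
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.update_table(
    TableName="orders",
    ReplicaUpdates=[
        {"Create": {"RegionName": "eu-west-1"}},  # add a replica Region
    ],
)

# Replication status is visible on the table description.
table = dynamodb.describe_table(TableName="orders")["Table"]
print(table.get("Replicas", []))
```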
Source: aws.amazon.com

AWS Marketplace introduces localized billing for Professional Services from AWS EMEA

AWS Marketplace now offers a more localized experience for Europe, Middle East, and Africa (EMEA) customers purchasing Professional Services solutions via the AWS EMEA Marketplace Operator. Customers can now procure Professional Services using localized payment methods and receive invoices from AWS EMEA. This removes previous procurement barriers caused by complex payment remittance processes between different AWS entities, which made it difficult for EMEA customers to purchase Professional Services through AWS Marketplace. Key benefits include support for SEPA (Single Euro Payments Area) payment methods and invoicing consistency from the same AWS entity covering all AWS Marketplace purchases via the AWS EMEA Marketplace Operator. This capability is ideal for EMEA customers purchasing consulting, implementation, or managed services through AWS Marketplace. It also benefits organizations that prefer local payment methods such as SEPA direct debit, want to consolidate AWS and Marketplace billing, or are seeking a simpler procurement experience for Professional Services. This capability is available for EMEA customers who purchase Professional Services solutions in AWS Marketplace, with AWS EMEA as the Marketplace Operator. To learn more about purchasing Professional Services products in AWS Marketplace and receiving invoices issued by AWS EMEA, visit the AWS Marketplace Buyer Guide and AWS EMEA Marketplace FAQs. For more information on how to add a bank account for SEPA, see Managing Your SEPA Direct Debit Payment Method in the AWS Billing and Cost Management user guide.
Source: aws.amazon.com

AWS IAM Identity Center enables account access and application use in multiple AWS Regions

IAM Identity Center helps you configure the single sign-on experience of your workforce to AWS accounts and applications. You can now replicate IAM Identity Center from the primary AWS Region where you first enabled it to additional Regions of your choice. This feature enhances resilience of user access to AWS accounts and helps you deploy AWS applications in the AWS Regions that best align with your business needs such as application data residency and proximity to users.
When you enable this feature, IAM Identity Center automatically replicates your identities, entitlements, and other information from the primary Region to additional Regions. If IAM Identity Center is affected by a disruption in the primary Region, IAM Identity Center users continue to have access to their AWS accounts using the already provisioned entitlements in the additional Regions. 
AWS application administrators can use the standard application deployment workflow to deploy their application in an additional Region. They can assign users to the application in that Region, while you continue to administer IAM Identity Center in the primary Region. IAM Identity Center multi-Region support is currently available in the 17 enabled-by-default commercial AWS Regions for organization instances of IAM Identity Center connected to an external identity provider, such as Okta. The IAM Identity Center organization instance must be configured with a multi-Region customer managed KMS key (CMK). To find out which AWS applications support deployment in additional Regions, visit AWS applications that you can use with IAM Identity Center. Standard AWS KMS charges apply for storing and using CMKs. IAM Identity Center is provided at no additional cost. To learn more about IAM Identity Center, visit the product detail page. To get started, see the IAM Identity Center User Guide. 
Source: aws.amazon.com