The economics of enterprise AI: What the Forrester TEI study reveals about Microsoft Foundry

Leaders are chasing the AI frontier, reimagining business systems as human-led and agent-operated. To do this, customers are on the hunt for smarter models, more capable agents, and market-ready solutions to operationalize AI workflows.

When Forrester modeled the economics of enterprise AI with Microsoft Foundry, the biggest driver behind the 327% ROI over three years1 was surprising: developer productivity, worth $15.7 million over the same period.

The study showed that the bottleneck to ROI can be removed by enabling developers to focus on what matters.

Read the full Forrester study

The hidden tax on your AI investment

In most organizations, senior engineers spend a third of their time on undifferentiated work: stitching together fragmented tools, recreating context pipelines, and navigating bespoke governance processes. None of that is competitive advantage for firms—it’s a tax on every AI initiative.

According to Forrester, organizations using Foundry avoided much of this work, improving technical team productivity up to 35%. Teams using Foundry to develop AI apps and agents saw payback in as few as six months and with benefits accelerating year over year1.

Learn more about what you can do with Microsoft Foundry

The details: What the Forrester study found

Forrester interviewed 10 decision-makers at five organizations and surveyed 154 other decision-makers and AI leaders across the U.S. and Europe with experience using Microsoft Foundry. They modeled a composite enterprise with $10 billion revenue, 25,000 employees, and 100 technical staff using Foundry. To model conservative estimates, benefits were adjusted downward and costs upward; the results reflect the composite enterprise.

Read the full Forrester study

Figure 1: Survey results and reported benefits

When asked “What benefits has your organization experienced with Microsoft Foundry?”, respondents cited operational outcomes:

Note: These reflect reported experiences, not the financial model. Composite ROI is calculated separately using Forrester’s risk-adjusted methodology. Source: Survey of 154 AI decision-makers, Forrester TEI study, February 2026

Forrester found that platform investments compound in value. For a team that invests $11.6M in resources, the three-year present value of quantified benefits for the composite organization totaled $49.5M: Year one delivered $10.0M, year two $21.1M, year three $30.5M.

Figure 2: Benefits breakdown

Source: The Total Economic Impact™ Of Microsoft Foundry, a commissioned study conducted by Forrester Consulting, February 2026

When every project starts from scratch

AI initiatives will require models, enterprise knowledge, tools, and governance. Without a shared platform, teams will encounter toil. With enterprise knowledge as the example, for every AI project, teams need to create vector databases, RAG pipelines, integrations, and access-control rules, creating internal infrastructure that does not directly influence business outcomes.

75% of teams reported easier model grounding or knowledge source integration

Read the study

With Foundry, teams develop AI applications and agents on a unified, interoperable AI platform designed to enable agents to be intelligent and trustworthy: with reusable knowledge bases on data anywhere in the enterprise, protected by built-in evaluations, and agent controls. In Forrester’s TEI study, 75% of teams cited easier model grounding or knowledge source integration with Foundry IQ.

Over three years, the productivity gain alone was worth up to $15.7 million1. One Foundry customer said,

Our developers can go super fast because they can get what they need in Microsoft Foundry … We estimate that we reduce overall development time by 30%–40%.
—Global head of technology platforms, professional services

Organizations saw compounding returns when they built once and reuse everywhere with shared templates, knowledge bases, standardized evaluations, and consistent governance. This helps to explain a counterintuitive finding: organizations that focused energy consolidating on a unified platform outperformed those which did not. Their execution is simpler and therefore stronger.

The need for platform thinking

Point solutions develop in enterprises over time. Each solves a narrow problem, but each also introduces its own governance layer, context pipeline, and integration surface. The hidden cost here builds up in the stitching between these solutions.

32% were able to decrease costs by decommissioning legacy AI tools

Read the study

In the Forrester study, 32% of surveyed organizations that adopted Foundry were able to decrease costs by decommissioning legacy AI tools, and the composite organization avoided up to $4.3M in infrastructure costs over three years by eliminating duplicative workflows, integrations, and operational overhead. For example, one customer shared they were able to decommission their container-based infrastructure and eliminate spending on previous AI model development tools since the functionality was included in the Foundry platform:

One of the benefits of using Foundry versus taking those models and running them in containers in the cloud is that then you don’t have to manage the container infrastructure.
—Managing director and global head of co-innovation, professional services

Department-level budgets favor point solutions, but enterprise-level outcomes require platform thinking. That mismatch is why AI spend often fails to translate into sustained value as organizations shift from isolated pilots to scaled deployments.

Microsoft Agent Factory
Scale AI and move from ideas to outcomes with one pre-paid plan, expert-led AI skilling, and engineering expertise.

Learn more

Trust unlocks higher-impact work

Most enterprises start with internal-facing AI use cases before they shift to customer-facing solutions. Two-thirds of AI agents today focus on process automation, while one-third support direct human assistance1. The ratio matters. Most enterprises need to trust AI with bounded, auditable tasks before they can trust it to enhance human judgment.

Foundry Control Plane enables organizations to govern the AI lifecycle with organization-wide observability and controls. This includes centrally managed policies for model deployment, configurable guardrails, and continuous evaluations to see what’s running, fix what’s failing, and prove compliance across any environment.

Model scanning done by Microsoft on the models … is a key requirement for us. … we want to make sure we understand what the model contains and whether it contains anything that is not in line with policy.
—Principal product manager, professional services

67% adopted Foundry to reduce concerns with AI security, privacy, and governance

Read the study

It’s no surprise that 67% of surveyed organizations cited concerns with AI security, privacy, or governance as a top reason for adopting Microsoft Foundry, ranking it higher than model access, capabilities, and cost inefficiencies. In essence, trust is a permission slip that enables organizations to expand from isolated process automation projects into higher-impact work at scale.

What leaders should do about AI now

The Forrester TEI study makes one thing unmistakable: enterprise AI ROI compounds when AI is treated as a platform, not a series of one-off projects.

The biggest gains come from giving technical teams a reusable foundation, including models, agents, and tools that scale across use cases and eliminate repetitive work. When AI development becomes repeatable, value accelerates and confidence follows.

Three questions for your next leadership meeting
– How much of your engineering capacity goes toward rebuilding the same foundations vs. building differentiated AI capabilities? If it’s over 20%, you’re paying a hidden tax.– Do your AI initiatives share a common platform for data, evaluation, and governance, or are you scaling fragmentation?– What would it take for your organization to move from isolated automation projects to higher‑impact use cases?

Learn more about the benefits of AI workflows

Read the full Forrester TEI Study.

Build with Microsoft Foundry.

Shift from ideas to outcomes faster with Microsoft Agent Factory.

Read the full Forrester study

The Forrester Total Economic Impact™ study on Microsoft Foundry was commissioned by Microsoft and conducted by Forrester Consulting.

1The Total Economic Impact™ Of Microsoft Foundry, a commissioned study conducted by Forrester Consulting, February 2026

2Represents results for the composite organization
The post The economics of enterprise AI: What the Forrester TEI study reveals about Microsoft Foundry appeared first on Microsoft Azure Blog.
Quelle: Azure

Unpacking your top questions on agentic AI: The Shift podcast

Every day in the hallways at Microsoft, I hear product teams discussing where agents are headed and how software is forever changed. Many of us come into the office more now, and I didn’t realize how much I missed the in-between moments where natural chat gives us energy—coffee and hot takes on the way to meetings and debating at a lunch no one scheduled, but somehow nobody wants to leave. The people who work on Microsoft Azure, Microsoft Foundry, and Microsoft Fabric care deeply about what they’re building—about how cloud and AI platforms can be better for those with hands on keyboards—it’s when we’re unscripted that some of our best insights surface. How could we bottle up this passion?

Subscribe to The Shift podcast

Today we’re introducing “The Shift” podcast, an evolution of “Leading the Shift,” to share more dialogue. Grounded in questions we heard from you after announcements at Ignite, we’re releasing eight episodes this spring—one each week—that bring engineering, product, and strategy perspectives together. Across levels and backgrounds, this season’s agentic theme explores agents up and down the stack. Knowing change is the only constant, “The Shift” creates space for us all to think out loud.

Here’s a sneak peek of the new season

const currentTheme =
localStorage.getItem(‘msxcmCurrentTheme’) ||
(window.matchMedia(‘(prefers-color-scheme: dark)’).matches ? ‘dark’ : ‘light’);

// Modify player theme based on localStorage value.
let options = {“autoplay”:false,”hideControls”:null,”language”:”en-us”,”loop”:false,”partnerName”:”cloud-blogs”,”poster”:”https://azure.microsoft.com/en-us/blog/wp-content/uploads/2026/03/the-shift-1.jpg”,”title”:””,”sources”:[{“src”:”https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/3233302-Shift-Season2-Trailer-0x1080-6439k”,”type”:”video/mp4″,”quality”:”HQ”},{“src”:”https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/3233302-Shift-Season2-Trailer-0x720-3266k”,”type”:”video/mp4″,”quality”:”HD”},{“src”:”https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/3233302-Shift-Season2-Trailer-0x540-2160k”,”type”:”video/mp4″,”quality”:”SD”},{“src”:”https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/3233302-Shift-Season2-Trailer-0x360-958k”,”type”:”video/mp4″,”quality”:”LO”}],”ccFiles”:[{“url”:”https://azure.microsoft.com/en-us/blog/wp-json/msxcm/v1/get-captions?url=https%3A%2F%2Fwww.microsoft.com%2Fcontent%2Fdam%2Fmicrosoft%2Fbade%2Fvideos%2Fproducts-and-services%2Fen-us%2Fazure%2F3233302-shift-season-2-trailer%2F3233302-Shift-Season-2-Trailer_cc_en-us.ttml”,”locale”:”en-us”,”ccType”:”TTML”}],”downloadableFiles”:[{“url”:”https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/3233302-Shift-Season-2-Trailer_transcript_en-us”,”locale”:”en-us”,”mediaType”:”transcript”},{“url”:”https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/3233302-Shift-Season-2-Trailer_audio_en-us”,”locale”:”en-us”,”mediaType”:”audio”}]};

if (currentTheme) {
options.playButtonTheme = currentTheme;
}

document.addEventListener(‘DOMContentLoaded’, () => {
ump(“ump-69b081f9c8544″, options);
});

Topics we’ll explore weekly

Are my agents hunting for data?

How do agents work together?

Wait, my agent needs a database?

Is context engineering the new RAG?

What senses do my agents need to act?

Is Postgres the wave of the future?

Should my IT team hire agents?

How do we draw agentic borders?

Agents don’t succeed in isolation. They depend on how your data is unified, how your cloud handles scale, how your applications orchestrate across systems, and ultimately, how this serves people. At Microsoft, we see agents as catalysts for innovation across your entire environment, performing best when layers of the stack work together. That’s where the toughest challenges for technical teams emerge: observability, governance, security, optimization, and quality. It’s a team sport.

Your data strategy determines what your agents can reason over. Your cloud foundation determines what you can do reliably. Your agents and AI app experiences deliver business outcomes. Our colleagues and friends featured on The Shift are solving for these interdependencies. And what they all have in common is conviction that none of this works in pieces.

Our first episode, “Are my agents hunting for data?” drops tomorrow. We’ll sit with Ronald Chang, Dipti Borkar, Josh Caplan, and Cillian Mitchell from the Microsoft Fabric and Microsoft OneLake teams to cover why data preparation is essential to fueling agents with knowledge. And it’s perfect timing with Microsoft Fabric Community Conference next week in Atlanta. I hope you’ll join us to keep this conversation going.

Subscribe today on YouTube, Spotify, Apple Podcasts, Amazon Music, RSS.com, or wherever you listen and learn.

The agentic shift starts here
Follow us on YouTube to get the latest episodes.

Subscribe to the podcast

The post Unpacking your top questions on agentic AI: The Shift podcast appeared first on Microsoft Azure Blog.
Quelle: Azure

What’s Holding Back AI Agents? It’s Still Security

It’s hard to find a team today that isn’t talking about agents. For most organizations, this isn’t a “someday” project anymore. Building agents is a strategic priority for 95% of respondents that we surveyed across the globe with 800+ developers and decision makers in our latest State of Agentic AI research. The shift is happening fast: agent adoption has moved beyond experiments and demos into something closer to early operational maturity. 60% of organizations already report having AI agents in production, though a third of those remain in early stages. 

Agent adoption today is driven by a pragmatic focus on productivity, efficiency, and operational transformation, not revenue growth or cost reduction. Early adoption is concentrated in internal, productivity-focused use cases, especially across software, infrastructure, and operations. The feedback loops are fast, and the risks are easier to control. 

So what’s holding back agent scaling? Friction shows up and nearly all roads lead to the same place: AI agent security. 

AI agent security isn’t one issue it’s the constraint

When teams talk about what’s holding them back, AI agent security rises to the top. In the same survey, 40% of respondents cite security as their top blocker when building agents. The reason it hits so hard is that it’s not confined to a single layer of the stack. It shows up everywhere, and it compounds as deployments grow.

For starters, when it comes to infrastructure, as organizations expand agent deployments, teams emphasize the need for secure sandboxing and runtime isolation, even for internal agents.

At the operations layer, complexity becomes a security problem. Once you have more tools, more integrations, and more orchestration logic, it gets harder to see what’s happening end-to-end and harder to control it. Our latest research data reflects that sprawl: over a third of respondents report challenges coordinating multiple tools, and a comparable share say integrations introduce security or compliance risk. That’s a classic pattern: operational complexity creates blind spots, and blind spots become exposure.

45% of organizations say the biggest challenge is ensuring tools are secure, trusted, and enterprise-ready.

And at the governance layer, enterprises want something simple: consistency. They want guardrails, policy enforcement, and auditability that work across teams and workflows. But current tooling isn’t meeting that bar yet. In fact, 45% of organizations say the biggest challenge is ensuring tools are secure, trusted, and enterprise-ready. That’s not a minor complaint: it’s the difference between “we can try this” and “we can scale this.”

MCP is popular but not ready for enterprise

Many teams are adopting Model Context Protocol (MCP) because it gives agents a standardized way to connect to tools, data, and external systems, making agents more useful and customized.  Among respondents further along in their agent journey,  85% say they’re familiar with MCP and two-thirds say they actively use it across personal and professional projects. 

Research data suggests that most teams are operating in what could be described as “leap-of-faith mode” when it comes to MCP, adopting the protocol without security guarantees and operational controls they would demand from mature enterprise infrastructure.

But the security story hasn’t caught up yet. Teams adopt MCP because it works, but they do so without the security guarantees and operational controls they would expect from mature enterprise infrastructure. For teams earlier in their agentic journey: 46% of them identify  security and compliance as the top challenge with MCP.

Organizations are increasingly watching for threats like prompt injection and tool poisoning, along with the more foundational issues of access control, credentials, and authentication. The immaturity and security challenges of current MCP tooling make for a fragile foundation at this stage of agentic adoption.

Conclusion and recommendations

Ai agent security is what sets the speed limit for agentic AI in the enterprise. Organizations aren’t lacking interest, they’re lacking confidence that today’s tooling is enterprise-ready, that access controls can be enforced reliably, and that agents can be kept safely isolated from sensitive systems.  

The path forward is clear. Unlocking agents’ full potential will require new platforms built for enterprise scale, with secure-by-default foundations, strong governance, and policy enforcement that’s integrated, not bolted on.

Download the full Agentic AI report for more insights and recommendations on how to scale agents for enterprise. 

Join us on March 25, 2026, for a webinar where we’ll walk through the key findings and the strategies that can help you prioritize what comes next.

Learn more:

Get your copy of the latest State of Agentic AI report! 

Learn more about Docker’s AI solutions

Read more about why AI agents challenge existing governance approaches and explore a new framework designed for agentic AI.

Quelle: https://blog.docker.com/feed/

Amazon Route 53 Global Resolver is now generally available

Today, AWS announced the general availability of Amazon Route 53 Global Resolver, an internet-reachable anycast DNS resolver that delivers easy, secure, and reliable DNS resolution for authorized clients from anywhere. Global Resolver is now available across 30 AWS Regions, with support for both IPv4 and IPv6 DNS query traffic. Previewed at re:Invent 2025 in 11 AWS Regions, Global Resolver gives authorized clients in your organization anycast DNS resolution of public internet domains and private domains associated with Route 53 private hosted zones — from any location. It also provides DNS query filtering to block potentially malicious domains, not-safe-for-work domains, and domains associated with advanced DNS threats such as DNS tunneling and Domain Generation Algorithms (DGA), along with centralized query logging. With general availability, Global Resolver adds protection against Dictionary DGA threats. New customers can explore Global Resolver with a 30-day free trial. For pricing and feature details, visit the service page. To see supported AWS Regions, see the region table. To get started, see the documentation.
Quelle: aws.amazon.com

Amazon OpenSearch Service now supports in-place volume increases for all volume sizes

Amazon OpenSearch Service now extends in-place cluster volume size increases to volumes exceeding 3 TiB. With this enhancement, you can scale storage capacity across all volume sizes without requiring a blue/green deployment. Previously, you could perform volume increases up to 3 TiB on your clusters without a blue/green deployment. This release removes that limitation, making it easier for you to scale up quickly even beyond 3 TiB when required. Domains that already have a volume size above 3 TiB will require a blue/green deployment the first time a volume increase is made; subsequent volume increases will not require a blue/green deployment. Decreasing storage volume size, or making volume increases within short intervals, will still require a blue/green deployment. You can use the dry-run option to check whether your change requires a blue/green deployment. This feature is available in all AWS Commercial and AWS GovCloud (US) Regions where Amazon OpenSearch Service is available. See here for a full list of our Regions. To learn more about Amazon OpenSearch Service configurations, visit the documentation page.
Quelle: aws.amazon.com

Amazon Connect introduces AI-powered manager assistance (Preview)

Today, Amazon Connect announces the preview of an AI-powered assistant that enables contact center managers to get instant answers to operational questions using natural language. You can query across 150+ Amazon Connect metrics, including agent scheduling, self-service experience, and performance evaluations, with historical data for all of these, and receive results in seconds—eliminating hours of manual data gathering. The assistant can also diagnose underlying issues, such as identifying which queues are at risk of missing service level targets and recommending specific recovery actions.
This feature is available as a preview. To request access, contact your AWS account team or an AWS Representative. To learn more about Amazon Connect, the AWS cloud-based contact center, visit the Amazon Connect website.
Quelle: aws.amazon.com

Amazon Connect now supports conversational analytics for email

Amazon Connect now supports conversational analytics for email contacts, enabling contact center managers to automatically categorize emails, redact personally identifiable information (PII), and generate contact summaries. This allows you to quickly identify emerging trends, better maintain compliance by protecting sensitive information, and reduce the time spent reviewing agent performance. For example, when customers email about account issues, Amazon Connect automatically categorizes the email, redacts sensitive information, and generates a summary for supervisor review. To enable this feature, add the Set recording, analytics and processing behavior block to your flows before an email contact is assigned to your agent or sent to your end customer. You can customize which PII types to redact, choose whether redacted content shows specific PII type indicators e.g., [SSN] or generic markings ([PII]), opt to store both original and redacted versions in separate storage, as well as enable contact summaries. Using these analytics, you can quickly create rules to automatically trigger actions such as assigning categories, creating tasks, or updating cases. Amazon Connect conversational analytics is available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) regions. To learn more and get started, please refer to the help documentation or visit the Amazon Connect website.
Quelle: aws.amazon.com

Amazon Connect enhances AI-powered predictive insights

Today, Amazon Connect is announcing enhancements to AI-powered predictive insights that make it easier for businesses to deliver proactive, personalized customer experiences at scale. Building on the five recommendation algorithms launched at re:Invent 2025, AI-powered predictive insights now support up to 40 million product catalog items (8X increase), are available in message templates for trigger-based campaigns, and deliver up to 14% improved model accuracy. These enhancements enable businesses to automatically engage customers with the right message at the right time, while reducing the time required to deploy AI-powered personalization.
Businesses can now deliver trigger-based campaigns to initiate personalized outreach based on customer behavior and predictive signals – such as sending product recommendations when a customer abandons their cart or offering complementary services after a purchase. Businesses can now deliver targeted campaigns for specific customer cohorts based on predicted preferences and behaviors. Improved model accuracy and reduced training time mean businesses can deploy personalized experiences faster with greater confidence in the recommendations provided to customers.
With Amazon Connect Customer Profiles, you only pay-as-you-go for utilized profiles. Public preview for AI-powered predictive insights enhancements is available in Europe (Frankfurt), US East (N. Virginia), Asia Pacific (Seoul), Asia Pacific (Tokyo), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central).
To learn more, visit our webpages for Customer Profiles and explore the AI-powered predictive insights documentation.
Quelle: aws.amazon.com