Introducing Anthropic’s Claude models in Microsoft Foundry: Bringing Frontier intelligence to Azure

Innovation in AI is about empowering every developer and organization with the freedom to choose the right intelligence for every challenge. In today’s landscape, where business needs span from real-time chatbots to deep research agents, model choice is an essential engine of progress.

Microsoft Foundry already offers the widest selection of models of any cloud, and with today’s partnership announcement with Anthropic, we’re proud that Azure is now the only cloud providing customers access to both Claude and GPT frontier models on one platform. This milestone expands Foundry further into what it was built to be: a single place to use any model, any framework, and every enterprise control you need to build and run AI apps and agents at scale.

“We’re excited to use Anthropic Claude models from Microsoft Foundry. Having Claude’s advanced reasoning alongside GPT models in one platform gives us flexibility to build scalable, enterprise-grade workflows that move far beyond prototypes.” — Michele Catasta, President, Replit

Start building with Claude in Microsoft Foundry today

Meet the Claude models: AI that delivers real results

According to Anthropic, Claude models are engineered for the realities of enterprise development, from tight integration with productivity tools to deep, multi-document research and agentic software development across large repositories.

| Model | Strengths | Ideal use cases |
| --- | --- | --- |
| Claude Haiku 4.5 | Fastest, most cost-efficient | Powering free-tier user experiences, real-time experiences, coding sub-agents, financial sub-agents, research sub-agents, business tasks |
| Claude Sonnet 4.5 | Smartest model for complex agents and coding | Long-running agents, coding, cybersecurity, financial analysis, computer use, research |
| Claude Opus 4.1 | Exceptional model for specialized reasoning tasks | Advanced coding, long-horizon tasks and complex problem solving, AI agents, agentic search and research, content creation |

All Claude models are built on Constitutional AI for safety and can now be deployed through Foundry with governance, observability, and rapid integration. This enables secure use cases like customer support agents, coding agents, and research copilots, making Claude an ideal choice for scalable, trustworthy AI.

Evolving from monolithic apps to intelligent agents

Across the tech landscape, organizations are embracing agentic AI systems. Early studies show AI agents can help boost efficiency by up to 30% for teams and stakeholders. But the challenge for most enterprises isn’t building powerful apps; it’s operationalizing them and weaving them into real workflows. Industry surveys point to a clear pattern: 78% of executives say the primary barrier to scaling AI impact is integrating it into core business processes.

Microsoft is uniquely positioned to address this integration gap. With Foundry, we’re bringing together leading-edge reasoning models, an open platform for innovation, and Responsible AI all within a unified environment. This empowers organizations to experiment, iterate, deploy, and scale AI with confidence, all backed by robust governance and security. This means building AI solutions that are not only powerful, but practical and ready to deliver impact at scale.

“Manus deeply utilizes Anthropic’s Claude models because of their strong capabilities in coding and long-horizon task planning, together with their prowess to handle agentic tasks. We are very excited to be using them now on Azure AI Foundry!” — Tao Zhang, Co-founder & Chief Product Officer, Manus AI.

Claude in Foundry Agent Service: From reasoning to results

Inside Foundry Agent Service, Claude models serve as the reasoning core behind intelligent, goal-driven agents. Developers can:

Plan multi-step workflows: Leverage Claude in Foundry Agent Service to orchestrate complex, multi-stage tasks with structured reasoning and long-context understanding.

Streamline AI integration with your everyday productivity tools: Use the Model Context Protocol (MCP) to seamlessly connect Claude to data fetchers, pipelines, and external APIs, enabling dynamic actions across your stack.

Automate data operations: Upload files for Claude to summarize, classify, or extract insights to accelerate document-driven processes with robust AI.

Real-time model selection: Using the model router, customers can soon automatically route requests to Claude Opus 4.1, Sonnet 4.5, and Haiku 4.5, lowering latency and delivering cost savings in production.

Govern and operate your fleet: Foundry offers unified controls and oversight, allowing developers to operate their entire agent fleet with clear insight into cost, performance, and behavior in one connected view.

Developers can also use Claude models in Microsoft Foundry with Claude Code, Anthropic’s AI coding agent.

These capabilities create a framework for AI agents to safely execute complex workflows with minimal human involvement. For example, if a deployment fails, Claude can query Azure DevOps logs, diagnose the root cause, recommend a fix, and trigger a patch deployment, all automatically, using registered tools and operating within governed Azure workflows.
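The remediation scenario above can be sketched as a small tool-use loop: the agent calls only tools registered for it, and each step is recorded for governance. This is an illustrative, self-contained sketch with stub implementations and hypothetical tool names, not the actual Foundry Agent Service or Azure DevOps APIs:

```python
# Illustrative tool-use loop for a failed-deployment remediation.
# All tool names and behaviors are hypothetical stubs.
from typing import Callable

def fetch_devops_logs(deployment_id: str) -> str:
    """Stub for a registered log-query tool."""
    return f"deployment {deployment_id}: ERROR missing env var DB_URL"

def trigger_patch(deployment_id: str, fix: str) -> str:
    """Stub for a registered, governed patch-deployment tool."""
    return f"patch for {deployment_id} queued: {fix}"

# The registry of tools this agent is allowed to call.
TOOLS: dict[str, Callable[..., str]] = {
    "fetch_devops_logs": fetch_devops_logs,
    "trigger_patch": trigger_patch,
}

def run_remediation(deployment_id: str) -> list[str]:
    """Simulate the plan a model might produce: query logs, diagnose, patch."""
    transcript = [TOOLS["fetch_devops_logs"](deployment_id)]
    if "missing env var" in transcript[0]:
        transcript.append(TOOLS["trigger_patch"](deployment_id, "set DB_URL"))
    return transcript

print(run_remediation("web-42")[-1])  # → patch for web-42 queued: set DB_URL
```

In a real deployment, the model would choose which registered tool to invoke at each step; the registry pattern is what keeps those choices inside governed boundaries.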

Claude Skills: Modular intelligence you can compose

With the Claude API, developers can define Skills, modular building blocks that combine:

Natural-language instructions,

Optional Python or Bash code, and

Linked data files (templates, visual assets, tabular data, etc.), or APIs

Each Skill is discovered dynamically, maximizing your agent’s context. A Skill automates a workflow, like generating reports, cleaning datasets, or assembling PowerPoint summaries, and can be reused or chained with others to form larger automations. Within Microsoft Foundry, every Skill is governed, traceable, and version-controlled, ensuring reliability across teams and projects.

These capabilities allow developers to create Skills that become reusable building blocks for intelligent automation. For example, instead of embedding complex logic in prompts, a Skill can teach Claude how to interact with a system, execute code, analyze data, or transform content. Through the Model Context Protocol (MCP), those Skills can be invoked by any agent as part of a larger workflow. This makes it easier to standardize expertise, ensure consistency, and scale automation across teams and applications.
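Concretely, a Skill is packaged as a small folder: a SKILL.md with frontmatter and natural-language instructions, plus optional code and data files. The sketch below writes such a folder to disk; the skill name ("report-builder") and its contents are hypothetical examples, not a Skill shipped by Anthropic or Microsoft:

```python
# Sketch of packaging a Skill as a folder: SKILL.md (instructions) plus an
# optional script. The name and contents are illustrative.
import pathlib
import tempfile

def write_skill(root: pathlib.Path) -> pathlib.Path:
    """Create a minimal, hypothetical Skill folder under `root`."""
    skill_dir = root / "report-builder"
    skill_dir.mkdir(parents=True)
    (skill_dir / "SKILL.md").write_text(
        "---\n"
        "name: report-builder\n"
        "description: Assemble a weekly status report from CSV exports.\n"
        "---\n\n"
        "1. Load the CSV files in ./data.\n"
        "2. Run build_report.py to render the summary.\n"
    )
    # Optional code the Skill can execute.
    (skill_dir / "build_report.py").write_text("print('report built')\n")
    return skill_dir

with tempfile.TemporaryDirectory() as tmp:
    skill = write_skill(pathlib.Path(tmp))
    print(sorted(p.name for p in skill.iterdir()))
```

Because the logic lives in files rather than in a prompt, the same Skill can be versioned, reviewed, and reused across agents.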

Custom Deep Research: Context that connects beyond a single prompt

Claude’s Deep Research capability extends model reasoning beyond static queries. It allows agents to gather information from live sources, compare it with internal enterprise data, and produce well-reasoned, source-grounded insights. This transforms agents from simple responders into analytical systems capable of synthesizing trends, evidence, and context at scale.

Pricing

| Model (Marketplace) | Deployment type | Azure resource endpoints | Input / 1M tokens | Output / 1M tokens |
| --- | --- | --- | --- | --- |
| Claude Haiku 4.5 | Global Standard | East US 2, West US | $1.00 | $5.00 |
| Claude Sonnet 4.5 | Global Standard | East US 2, West US | $3.00 | $15.00 |
| Claude Opus 4.1 | Global Standard | East US 2, West US | $15.00 | $75.00 |
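The pricing above translates directly into a per-call cost estimate. A minimal sketch for budgeting (the model identifiers here are illustrative labels, not official deployment names):

```python
# Back-of-envelope cost estimator using the per-million-token prices above
# (Global Standard deployment; prices in USD; keys are illustrative labels).
PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "claude-haiku-4.5": (1.00, 5.00),
    "claude-sonnet-4.5": (3.00, 15.00),
    "claude-opus-4.1": (15.00, 75.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request, given token counts."""
    inp, out = PRICES[model]
    return inp * input_tokens / 1e6 + out * output_tokens / 1e6

# e.g. a Sonnet call with 500k input and 100k output tokens:
print(f"${estimate_cost('claude-sonnet-4.5', 500_000, 100_000):.2f}")  # → $3.00
```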

Looking ahead

Our partnership with Anthropic is about more than just bringing new models to Foundry. It’s about empowering every person and organization to achieve more with AI. We look forward to seeing how developers and enterprises leverage these new capabilities to build the next generation of intelligent systems.

Ready to explore Claude in Foundry? Start building today and join us in shaping the next generation of intelligent agents. Tune in to Ignite for more exciting Microsoft Foundry announcements: register today.
The post Introducing Anthropic’s Claude models in Microsoft Foundry: Bringing Frontier intelligence to Azure appeared first on Microsoft Azure Blog.
Source: Azure

Microsoft Foundry: Scale innovation on a modular, interoperable, and secure agent stack

One year ago, at Microsoft Ignite, we set out to redefine enterprise intelligence with Foundry. Our conviction was clear: software would evolve beyond rigid workflows, becoming systems that reason, adapt, and act with purpose. We envisioned developers moving from prescriptive logic to shaping intelligent behavior.

Today, that transformation is accelerating across industries and organizations of every size. The shift is tangible: agents are no longer just assistants, they are dynamic collaborators, seamlessly integrated into the tools we use every day. For builders, agents are reshaping software, and we are delivering a platform that empowers every developer and every business to embrace this moment with confidence and control.

Microsoft Foundry helps builders everywhere turn vision into reality with a modular, interoperable, and secure agent stack. From code to cloud, today’s announcements demonstrate our focus on empowering developers with a powerful, simple—and trusted—path to production AI apps and agents. Here is the TL;DR:

Foundry Models added new models from Anthropic, Cohere, NVIDIA, and more. Model router is now generally available in Microsoft Foundry and in public preview in Foundry Agent Service. 

Foundry IQ, now in public preview, reimagines retrieval-augmented generation (RAG) as a dynamic reasoning process, simplifying orchestration and improving response quality. 

Foundry Agent Service now offers Hosted Agents, multi-agent workflows, built-in memory, and the ability to deploy agents directly to Microsoft 365 and Agent 365 in public preview. 

Foundry Tools empowers developers to create agents with secure, real-time access to business systems, business logic, and multimodal capabilities.

Foundry Control Plane, now in public preview, centralizes identity, policy, observability, and security signals and capabilities for AI developers in one portal. GitHub Advanced Security and Microsoft Defender integration, now in public preview, helps improve collaboration between security and development teams across the full app lifecycle.

Foundry Local is now in private preview on Android, the world’s most widely used mobile platform.

Managed Instance on Azure App Service, now in public preview, helps organizations move their web applications to the cloud with just a few configuration changes. 

Next-level productivity: AI-powered tools for builders

It all starts with developers, and GitHub is the world’s largest developer community, now serving over 180 million developers. AI-powered tools and agents in GitHub are helping developers move faster, build increasingly innovative apps, and modernize legacy systems more efficiently. More than 500 million pull requests were merged using AI coding agents this year, and with AgentHQ, coding agents like Codex, Claude Code, and Jules will be available soon directly in GitHub and Visual Studio Code so developers can go from idea to implementation faster. GitHub Copilot, the world’s most popular AI pair programmer, now serves over 26 million users, helping organizations like Pantone, Ahold Delhaize USA, and Commerzbank streamline processes and save time.

Over the last year, developers have moved from experimentation to production. They need tools that let them design, test, monitor, and improve intelligent systems with the same confidence they have in traditional software. That’s why we built a new generation of AI-powered tools: GitHub Agent HQ for unified agent management, Custom Agents to encode domain expertise, and “bring your own models” to empower teams to adapt and innovate. With Copilot Metrics, teams evolve with data, not guesswork.

We’re committed to giving every developer the tools to design, test, and improve intelligent systems, so they can turn ideas into impact, faster than ever. Managed Instance on Azure App Service, now in public preview, lets organizations move existing .NET applications to the cloud with only a few configuration changes.

Enter Microsoft Foundry: The AI app and agent factory

Enterprises need a consistent foundation to build intelligence at scale. With Microsoft Foundry, we’re unifying models, tools, and knowledge into one open system, empowering organizations to run high-performing agent fleets and intelligent workflows across their business.

Today, teams can choose from over 11,000 frontier models in Foundry, including optimized solutions for scale and specialized models for scientific and industrial breakthroughs. I’m proud to announce Rosetta Fold 3, a next-generation biomolecular structure prediction model developed with the Institute for Protein Design and Microsoft’s AI for Good Lab. Models like these enable researchers and enterprises to tackle the world’s hardest problems with state-of-the-art technology.

Build AI agents with Microsoft Foundry

Here is our top Ignite news for Foundry:

1. Use the right model for every task with Foundry Models

Innovation thrives on adaptability and choice. With more than 11,000 models, Microsoft Foundry offers the broadest model selection on any cloud. Foundry Models empowers developers to benchmark, compare, and dynamically route models to optimize performance for every task.

Today’s announcements include:

Starting today, Anthropic’s Claude Sonnet 4.5, Opus 4.1, and Haiku 4.5 models are available in Foundry, advancing our mission to give customers choice across the industry’s leading frontier models, and making Azure the only cloud offering both OpenAI and Anthropic models. Also this week, Cohere’s leading models join Foundry’s first-party model lineup, providing ultimate model choice and flexibility.

Model router (generally available) enables AI apps and agents to dynamically select the best-fit model for each prompt—balancing cost, performance, and quality. Plus, model router in Foundry Agent Service (public preview) enables developers to build more adaptable and efficient agents, which is particularly helpful for multi-agent systems.

A new Developer Tier (public preview) makes model fine-tuning more accessible by leveraging idle GPU capacity.
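Conceptually, routing is a policy over each prompt: send simple, short requests to a cheap model and escalate harder ones. The real model router learns this trade-off from the prompt itself, but a hand-rolled heuristic conveys the idea. In this sketch the thresholds and model labels are made up:

```python
# Illustrative routing heuristic in the spirit of model router.
# Thresholds and model labels are invented for demonstration.
def route(prompt: str) -> str:
    """Pick a model tier for a prompt based on crude complexity signals."""
    words = len(prompt.split())
    if words < 20 and "code" not in prompt.lower():
        return "claude-haiku-4.5"    # fast, cost-efficient
    if words < 200:
        return "claude-sonnet-4.5"   # balanced default
    return "claude-opus-4.1"         # long, complex tasks

print(route("Summarize this sentence."))  # → claude-haiku-4.5
```

A production router replaces these hand-written rules with a learned classifier, which is what makes the cost/quality balancing automatic.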

Optimize AI performance with Foundry Models

2. Empower agents with knowledge using Foundry IQ

The more context an agent has, the more grounded, productive, and reliable it’s likely to be. Foundry IQ, now available in public preview, reimagines retrieval-augmented generation (RAG) as a dynamic reasoning process rather than a one-time lookup. Powered by Azure AI Search, it centralizes RAG workflows into a single grounding API, simplifying orchestration and improving response quality while respecting user permissions and data classifications.

Key features include:

Simplified cross-source grounding with no upfront indexing.

Multi-source selection, iterative retrieval, and reflection to dynamically improve the quality of agent interactions.

Foundry Agent Service integration to enrich agent context in a single, observable runtime.

Foundry already powers more than 3 billion search queries per day. By combining Foundry IQ with Microsoft Fabric IQ and Work IQ from Microsoft 365 Copilot, Microsoft provides an unparalleled context layer for agents, helping them connect users with the right information at the right time to make informed decisions.
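The "retrieval as reasoning" idea above can be sketched as a loop: retrieve, reflect on whether the results ground an answer, and re-query if not. The corpus, scoring, and query rewrite below are trivial stand-ins, not the Foundry IQ or Azure AI Search APIs:

```python
# Illustrative agentic-retrieval loop: retrieve, reflect, re-query.
# The corpus and matching logic are stand-ins for a real grounding API.
CORPUS = {
    "hr-policy": "Employees accrue 20 vacation days per year.",
    "it-faq": "Password resets are handled through the self-service portal.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval over the stub corpus."""
    return [text for text in CORPUS.values()
            if any(word in text.lower() for word in query.lower().split())]

def grounded_answer(question: str, max_rounds: int = 2) -> str:
    """Iterate until the answer is supported by a retrieved source."""
    query = question
    for _ in range(max_rounds):
        hits = retrieve(query)
        if hits:                               # reflection: do we have support?
            return hits[0]
        query = question.replace("?", "")      # trivially rewrite and retry
    return "No grounded answer found."

print(grounded_answer("How many vacation days?"))
```

The value of the single grounding API is that this loop, plus permission checks and source selection, happens behind one call instead of a custom RAG pipeline.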

Start building reliable agents with Foundry IQ

3. Build context-aware, action-oriented agents with Foundry Agent Service

To be force multipliers, agents need access to the same tools and knowledge as the people they support. Foundry Agent Service empowers developers to create sophisticated single and multi-agent systems, connecting models, knowledge, and tools into a single, observable runtime.

Today’s announcements include:

Hosted Agents (public preview) enable developers to run agents built with Microsoft frameworks or third-party frameworks in a fully managed environment, so they can focus on agent logic rather than operational overhead.

Multi-agent workflows (public preview) coordinate specialized agents to execute multi-step business processes using either a visual designer or a code-first API. Workflows enable long-running, stateful collaboration with recovery and debugging built-in.

Memory (public preview) enables agents to securely retain context across sessions, reducing external data-store complexity and enabling more personalized interactions out-of-the-box.

Microsoft 365 and Agent 365 integration (public preview) enables developers to instantly deploy agents from Foundry to Microsoft productivity apps, making it easier to reach users directly within the M365 ecosystem while leveraging Agent 365 for secure orchestration, governance, and enterprise-grade deployment.

Create multi-agent systems with Foundry Agent Service

4. Enable agents to take action using Foundry Tools

The right tools can transform agents from simple responders into intelligent problem-solvers. With Foundry Tools, now in public preview, developers can provide agents with secure, real-time access to business systems, business logic, and multimodal capabilities to deliver business value.

Now, developers can:

Find, connect, and manage public or private MCP tools for agents from a single, secure interface.

Enable agents to act on real-time business data and events using more than 1,400 connectors with business systems such as SAP, Salesforce, and UiPath.

Enrich workflows with out-of-the-box tools such as transcription, translation, and document processing.

Expose any API or function as an MCP tool via API Management, reusing existing business logic to accelerate time-to-value.
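The pattern behind exposing a function as a tool is a registry: each function is published with metadata (name, description, parameters) so an agent runtime can discover and invoke it. The sketch below is a simplified stand-in for that pattern, not the MCP SDK or API Management; the invoice function is hypothetical business logic:

```python
# Simplified tool-registry pattern (the idea MCP formalizes).
# Not the MCP SDK; the business logic is a hypothetical stub.
import inspect

REGISTRY: dict[str, dict] = {}

def tool(fn):
    """Register a function with metadata derived from its docstring/signature."""
    REGISTRY[fn.__name__] = {
        "fn": fn,
        "description": inspect.getdoc(fn) or "",
        "params": list(inspect.signature(fn).parameters),
    }
    return fn

@tool
def get_invoice_total(invoice_id: str) -> float:
    """Return the total amount for an invoice (stub business logic)."""
    return {"INV-1": 120.50}.get(invoice_id, 0.0)

def invoke(name: str, **kwargs):
    """What an agent runtime does when the model requests a tool call."""
    return REGISTRY[name]["fn"](**kwargs)

print(invoke("get_invoice_total", invoice_id="INV-1"))
```

Because existing functions are wrapped rather than rewritten, the same approach lets current APIs and business logic be reused as agent tools.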

Enable AI agents with MCP tools with Foundry Tools

5. Advancing security and trust with Foundry Control Plane 

Scaling intelligence requires trust. As organizations rely on agents and AI-powered systems for more of their workflows, teams need clearer visibility, stronger guardrails, and faster ways to identify and address risk. This year we’re expanding security and governance with two key announcements: Foundry Control Plane, now in public preview in Microsoft Foundry, and a new integration between Microsoft Defender for Cloud and GitHub Advanced Security, also in public preview. Together they give developers and security teams a more connected way to monitor behavior, guide access, and keep AI systems safe across the full lifecycle.

Foundry Control Plane brings identity, controls, observability, and security together in one place so teams can build, operate, and govern agents with confidence. Key capabilities include: 

Controls that apply unified guardrails across inputs, outputs, and tool interactions to keep agents focused, accurate, and within defined boundaries. 

Observability with built-in evaluations, OTel-based tracing, continuous red teaming, and dashboards that surface insights on quality, performance, safety, and cost.

Security anchored in Entra Agent ID, Defender, and Purview to provide durable identity, policy-driven access, integrated data protection, and real-time risk detection across the agent lifecycle.

Fleet-wide operations that unify health, cost, performance, risk, and policy coverage for every agent, no matter where it was built or runs, with alerts that surface issues the moment they appear, empowering developers to take action.

Defender for Cloud + GitHub Advanced Security integration 

Developers and security teams often work in separate tools and lack shared signals to prioritize risks. The new Defender for Cloud and GitHub Advanced Security integration closes this gap. Developers receive AI suggested fixes directly inside GitHub, while security teams track progress in Defender for Cloud in real time. This gives both sides a faster, more connected way to identify issues, remediate them, and keep AI systems secure throughout the app lifecycle. 

Secure your code with GitHub and Microsoft Defender

6. Foundry Local Comes to Android: Powering Cloud to Edge  

Six months ago, we launched Foundry Local on Windows and Mac. In that short time, it’s reached 560 million devices, making it one of the fastest-growing runtimes in enterprise history. Leading organizations like NimbleEdge, Morgan Stanley, Dell, and Pieces are already using Local to bring intelligence directly into the environments where work happens, from financial services to healthcare and edge computing.

Today, we’re taking the next step. Foundry Local is now in private preview on Android, the world’s most widely used mobile platform. This means agents can run natively on billions of phones, unlocking real-time inference, privacy-aware computation, and resilience, even where connectivity is unpredictable. 

We’re also announcing a new partnership with PhonePe, one of India’s fastest-growing platforms. Together, we’ll bring agentic experiences into everyday consumer applications, showing how Local can transform not just enterprise workflows, but daily life at massive scale. 

7. Modernize your web apps for the era of AI in weeks, not months 

We see customers building net new AI applications and integrating AI into existing applications. Both require a modern foundation. Managed Instance on Azure App Service, available in public preview, lets organizations move their .NET web applications to the cloud with just a few configuration changes, saving the time and effort of rewriting code. The result is faster migrations with lower overhead, and access to cloud-native scalability, built-in security and AI capabilities in Microsoft Foundry.

Migrate your web apps with Managed Instance on Azure App Service

Learn more and get started with Foundry

We hope you join us at Microsoft Ignite 2025, in-person or virtually, to see these new capabilities in action and learn how they can support your biggest ambitions for your business.

Explore Microsoft Foundry.

Watch our Innovation Session: Your AI Apps and Agent Factory.

Watch all recorded sessions at Ignite.

Chat with us on Discord.

Provide feedback on GitHub.

The post Microsoft Foundry: Scale innovation on a modular, interoperable, and secure agent stack appeared first on Microsoft Azure Blog.

Azure at Microsoft Ignite 2025: All the intelligent cloud news explained

Before joining Microsoft, I spent years helping organizations build and transform. I’ve seen how technology decisions can shape a business’s future. Whether it’s integrating platforms or ensuring your technology strategy stands the test of time, these choices define how a business operates, innovates, and stays ahead of the competition.

Today, business leaders everywhere are asking:

How do we use AI and agents to drive real outcomes?

Is our data ready for this shift?

What risks or opportunities come with AI and agents?

Are we moving fast enough, or will we fall behind?

This week at Microsoft Ignite 2025, Azure introduces solutions that address those questions with innovations designed for this very inflection point.

It’s not just about adopting the right tools. It’s about having a platform that gives every organization the confidence to embrace an AI-first approach. Azure is built for this moment, with bold ambitions to enable businesses of every size. By unifying AI, data, apps, and infrastructure, we’re delivering intelligence at scale.

If you’re still wondering if AI can really deliver ROI, don’t take my word for it; see how Kraft Heinz, The Premier League, and Levi Strauss & Co. are finding success by pairing their unique data with an AI-first approach.

[Video: Your Intelligent Cloud]

With these updates, we’re making it easier to build, run, and scale AI agents that deliver real business outcomes.

TLDR—the Ignite announcement rundown

On the go and want to get right to what’s new and how to learn more? We have you covered. Otherwise, keep reading for a summary of top innovations from the week.

| If you want to… | Announcement | Availability |
| --- | --- | --- |
| Make AI agents smarter with enterprise context | Microsoft Fabric IQ | Preview |
| | Microsoft Foundry IQ | Preview |
| | Microsoft Foundry new tool catalog | Preview |
| Have a simple, all-in-one agent experience | Microsoft Agent Factory | Available now |
| Modernize and extend your data for AI, wherever it lives | SAP BDC Connect for Microsoft Fabric | Coming soon |
| | Azure HorizonDB | Preview |
| | Azure DocumentDB | Available now |
| | SQL Server 2025 | Available now |
| Operate smarter and securely with AI-powered control | Foundry Control Plane | Preview |
| | Azure Copilot with built-in agents | Preview |
| | Native integration for Microsoft Defender for Cloud and GitHub Advanced Security | Preview |
| Build on an AI-ready foundation | Azure Boost | Available now |
| | Azure Cobalt 200 | Coming soon |

Your AI and agent factory, expanded: Microsoft Foundry adds Anthropic Claude and Cohere models for ultimate model choice and flexibility 

Earlier this year, we brought Anthropic models to Microsoft 365 Copilot, GitHub Copilot, and Copilot Studio. Today, we’re taking the next natural step: Claude Sonnet 4.5, Opus 4.1, and Haiku 4.5 are now part of Microsoft Foundry, advancing our mission to give customers choice across the industry’s leading frontier models—and making Azure the only cloud offering both OpenAI and Anthropic models. 

This expansion underscores our commitment to an open, interoperable Microsoft AI ecosystem—bringing Anthropic’s reasoning-first intelligence into the tools, platforms, and workflows organizations depend on every day.

Read more about this announcement

This week, Cohere’s leading models join Foundry’s first-party model lineup, enabling organizations to build high-performance retrieval, classification, and generation workflows at enterprise scale.

With these additions to Foundry’s 11,000+-model ecosystem—alongside innovations from OpenAI, xAI, Meta, Mistral AI, Black Forest Labs, and Microsoft Research—developers can build smarter agents that reason, adapt, and integrate seamlessly with their data and applications. 

Make AI agents smarter with enterprise context

In the agentic era, context is everything because the most useful agents don’t just reason, they’re capable of understanding your unique business. Microsoft Azure brings enterprise context to the forefront, so you can connect agents to the right data and systems—securely, consistently, and at scale. This set of announcements makes that real.

Read more about GPT‑5.1 in Foundry.

Microsoft Fabric IQ turns your data into unified intelligence

Fabric IQ organizes enterprise data around business concepts—not tables—so decision-makers and AI agents can act in real time. Now in preview, Fabric IQ unifies analytics, time-series, and operational data under a semantic framework.

Because all data resides in OneLake, either natively or via shortcuts and mirroring, organizations can realize these benefits across on-premises, hybrid, and multicloud environments. This speeds up answering new questions and building processes, making Fabric the unified intelligence system for how enterprises see, decide, and operate.

Discover how Fabric IQ can support your business

Introducing Foundry IQ, which enables agents to understand more from your data

Now in preview, Foundry IQ makes it easier for businesses to connect AI agents to the right data, without the usual complexity. Powered by Azure AI Search, it streamlines how agents access and reason over both public and private sources, like SharePoint, Fabric IQ, and the web.

Instead of building custom RAG pipelines, developers get pre-configured knowledge bases and agentic retrieval in a single API that just works—all while also respecting user permissions. The outcome is agents that understand more, respond better, and help your apps perform with greater precision and context.

Build and control task forces of agents at cloud scale

Read Asha's blog


Agents, simplified: Microsoft Agent Factory, powered by Azure

This week, we’re introducing Microsoft Agent Factory—a program that brings Work IQ, Fabric IQ, and Foundry IQ together to help organizations build agents with confidence.

With a single metered plan, organizations can use Microsoft Foundry and Copilot Studio to build with IQ. This means you can deploy agents anywhere, including Microsoft 365 Copilot, without upfront licensing or provisioning.

Read more about Azure delivering large-scale clusters.

Eligible organizations can also tap into hands-on support from top AI Forward Deployed Engineers and access tailored, role-based training to boost AI fluency across teams.

Confidently build agents with Microsoft Agent Factory

Modernize and extend your data for AI—wherever it lives

Great AI starts with great data. To succeed, organizations need a foundation that’s fast, flexible, and intelligent. This week, we introduced new capabilities to help make that possible.

Introducing Azure HorizonDB, a new fully managed PostgreSQL database service for faster, smarter apps

Now in preview, HorizonDB is a cloud database service built for speed, scale, and resilience. It runs up to three times faster than open-source PostgreSQL and grows to handle demanding storage requirements with up to 15 replicas running on auto-scaling shared storage.

Whether building new AI apps or modernizing core systems, HorizonDB delivers enterprise-grade security and natively integrated AI models to help you scale confidently and create smarter experiences.

Azure DocumentDB offers AI-ready data, open standards, and multi-cloud deployments

Now generally available, Azure DocumentDB is a fully managed NoSQL service built on open-source tech and designed for hybrid and multicloud flexibility. It supports advanced search and vector embeddings for more accurate results and is compatible with popular open-source MongoDB drivers and tools.

Read more about sovereign cloud capabilities.

SQL Server 2025 delivers AI innovation to one of the world’s most widely used databases

The decades-long foundation of innovation continues with the availability of SQL Server 2025. This release helps developers build modern, AI-powered apps using familiar T-SQL—securely and at scale.

With built-in tools for advanced search, near real-time insights via OneLake, and simplified data handling, businesses can finally unlock more value from the data they already have. SQL Server 2025 is a future-ready platform that combines performance, security, and AI to help teams move faster and work smarter.

Start exploring SQL Server 2025

Fabric goes further

SQL database and Cosmos DB in Fabric are also available this week. These databases are natively integrated into Fabric, so you can run transactional and NoSQL workloads side-by-side, all in one environment.

Get instant access to trusted data with bi-directional, zero-copy sharing through SAP BDC Connect for Fabric

Fabric now enables zero-copy data sharing with SAP Business Data Cloud, letting customers combine trusted business data with Fabric’s advanced analytics and AI—without duplication or added complexity.

We offer these world-class database options so you can build once and deploy at the edge, as platform as a service (PaaS), or even as software as a service (SaaS). And because our entire portfolio is either Fabric-connected or Fabric-native, Fabric serves as a unified hub for your entire data estate.

Strengthen the databases at the heart of your data estate

Read Arun's blog

Back to the top

Operate smarter and more securely with AI-powered control

We believe trust is the foundation of transformation. In an AI-powered world, businesses need confidence, control, and clarity. Azure provides that with built-in security, governance, and observability, so you can innovate boldly without compromise.

With capabilities that protect your data, keep your operations transparent, and make environments resilient, we announced updates this week to strengthen trust at every layer.

Unified observability helps keep agents secure, compliant, and under your control

One highlight from today’s announcements is the new Foundry Control Plane. It gives teams real-time security, lifecycle management, and visibility across agent platforms. Foundry Control Plane integrates signals from the entire Microsoft Cloud, including Agent 365 and the Microsoft security suite, so builders can optimize performance, apply agent controls, and maintain compliance.

New hosted agents and multi-agent workflows let agents collaborate across frameworks or clouds without sacrificing enterprise-grade visibility, governance, and identity controls. With Entra Agent ID, Defender runtime protection, and Purview data governance, you can scale AI responsibly with guardrails in place.

Azure Copilot: Turning cloud operations into intelligent collaboration

Azure Copilot is a new agentic interface that orchestrates specialized agents across the cloud management lifecycle. It embeds agents directly where you work—chat, console, or command line—for a personalized experience that connects action, context, and governance.

We are introducing new agents that simplify how you run on the cloud—from migration and deployment to operations and optimization—so each action aligns with enterprise policy. 

Migration and modernization agents deliver smarter, automated workflows, using AI-powered discovery to handle most of the heavy lifting. This shift moves IT teams and developers beyond repetitive classification work so they can focus on building new apps and agents that drive innovation.

Similarly, the deployment agent streamlines infrastructure planning with guidance rooted in Azure Well-Architected Framework best practices, while the operations and optimization agents accelerate issue resolution, improve resiliency, and uncover cost savings opportunities.

Learn more about these agents in the Azure Copilot blog

Secure code to runtime with AI-infused DevSecOps

Microsoft and GitHub are transforming app security with native integration for Microsoft Defender for Cloud and GitHub Advanced Security. Now in preview, this integration helps protect cloud-native applications across the full app lifecycle, from code to cloud.

This enables developers and security teams to collaborate seamlessly, allowing organizations to stay within the tools they use every day.

Streamline cloud operations and reimagine the datacenter

Read Jeremy's blog

Build on an AI-ready foundation

Azure infrastructure is transforming how we deliver intelligence at scale—both for our own services and for customers building the next generation of applications.

At the center of this evolution are new AI datacenters, designed as “AI superfactories,” and silicon innovations that enable Azure to provide unmatched flexibility and performance across every AI scenario.

Azure Boost delivers speed and security for your most demanding workloads

We’re announcing our latest generation of Azure Boost with remote storage throughput of up to 20 GBps, up to 1 million remote storage IOPS, and network bandwidth of up to 400 Gbps. These advancements significantly improve performance for future Azure VM series. Azure Boost is a purpose-built subsystem that offloads virtualization processes from the hypervisor and host operating system, accelerating storage and network-intensive workloads.

Azure Cobalt 200: Redefining performance for the agentic era

Azure Cobalt 200 is our next-generation ARM-based server, designed to deliver efficiency, performance, and security for modern workloads. It’s built to handle AI and data-intensive applications while maintaining strong confidentiality and reliability standards.

By optimizing compute and networking at scale, Cobalt 200 helps you run your most critical workloads more cost-effectively and with greater resilience. It’s infrastructure designed for today’s demands—and ready for what’s next.

See what Azure Cobalt 200 has to offer

Keeping you at the frontier with continuous innovation

We’re delivering continuous innovation in AI, apps, data, security, and cloud. When you choose Azure, you get an intelligent cloud built on decades of experience and partnerships that push boundaries. And as we’ve just shown this week, the pace of innovation isn’t slowing down anytime soon.

Back to the top

Agentic enterprise, unlocked: Start now on Microsoft Azure

I hope Ignite—and our broader wave of innovation—sparked new ideas for you. The era of the agentic cloud isn’t on the horizon; it’s here right now. Azure brings together AI, data, and cloud capabilities to help you move faster, adapt smarter, and innovate confidently.

I invite you to imagine what’s possible—and consider these questions:

What challenges could you solve with a more connected, intelligent cloud foundation?

What could you build if your data, AI, and cloud worked seamlessly together?

How could your teams work differently with more time to innovate and less to maintain?

How can you stay ahead in a world where change is the only constant?

Want to go deeper into the news? Check out these blogs:

Microsoft Foundry: Scale innovation on a modular, interoperable, secure agent stack by Asha Sharma.

Azure Databases + Microsoft Fabric: Your unified and AI-powered data estate by Arun Ulagaratchagan.

Announcing Azure Copilot agents and AI infrastructure innovations by Jeremy Winter.

Ready to take the next step?

Explore technology methodologies and tools from real-world customer experiences with Azure Essentials.

Check out the latest announcements for software companies.

Visit the Microsoft Marketplace, the trusted source for cloud solutions, AI apps, and agents.

The post Azure at Microsoft Ignite 2025: All the intelligent cloud news explained appeared first on Microsoft Azure Blog.
Source: Azure

Microsoft Databases and Microsoft Fabric: Your unified and AI-powered data estate

In this article

Another leap forward across Microsoft Databases and Microsoft Fabric

Deploy the next generation of Microsoft Databases

Getting your data estate ready for AI with Microsoft Fabric

Mark your calendar for FabCon and SQLCon

Watch these announcements in action at Microsoft Ignite

As AI reshapes every industry, one truth remains constant: data is no longer just an asset—it’s your competitive edge. The pace of AI demands easy data access, faster insights, and the ability to iterate without friction. Yet many organizations are held back by fragmented data estates and legacy systems. Microsoft Fabric was designed to meet this moment—to unify your data, simplify your architecture, and accelerate your path to becoming an AI-led organization.

That mission is gaining traction at remarkable speed. Since Fabric launched two years ago, it has grown faster than any other data and analytics platform in the industry. More than 28,000 customers—including 80% of the Fortune 500—now rely on Fabric, and its ecosystem continues to expand as partners build solutions to solve the most complex data challenges.

Explore Azure announcements at Microsoft Ignite 2025

Another leap forward across Microsoft Databases and Microsoft Fabric

As Fabric becomes the central connection point for data, we’re strengthening the database layer at the heart of your data estate—ensuring you have the scale and performance required for AI.  

Microsoft already offers one of the industry’s most comprehensive database portfolios, and we’re expanding it even further—while deeply integrating these capabilities into Fabric. I’m excited to announce the general availability of SQL Server 2025, Azure DocumentDB, and SQL database and Cosmos DB in Fabric, along with the preview of our newest addition, Azure HorizonDB. With these new offerings, you have a world-class database option to build once and deploy at the edge, as platform as a service (PaaS), or even as software as a service (SaaS). And because our entire portfolio is either Fabric-connected or Fabric native, Fabric serves as a unified hub for your entire data estate. Below I’ll cover how these new databases are purpose-built to support your AI projects.  

Deploy the next generation of Microsoft Databases

Modernize your SQL estate with SQL Server 2025, now generally available

Microsoft has been shaping the SQL landscape for more than 35 years. Now, with the release of SQL Server 2025 into general availability, we’re introducing the next evolution—one that brings developer‑first AI capabilities at the edge, within the familiar T‑SQL experience. Smarter search combines advanced semantic intelligence with full‑text filtering to uncover richer insights from complex data. AI model management using model definitions in T-SQL allows seamless integration with popular AI services such as Microsoft Foundry.

Enterprise reliability and security remain best-in-class. Enhanced query performance, optimized locking, and improved failover help ensure higher concurrency and uptime for mission‑critical workloads. With strengthened credential management through Microsoft Entra ID via Azure Arc, SQL Server 2025 is secure by design. Your data is also instantly accessible for your AI and analytics in Microsoft OneLake with mirroring for SQL Server 2025 in Fabric, now also generally available.

SQL Server 2025 is the most significant release for SQL developers in a decade. And the response to our preview has been overwhelming, with 10,000 organizations participating, 100,000 databases already deployed, and a download rate two times higher than SQL Server 2022’s. If you want to join all those who’ve already adopted SQL Server 2025, download it today.

Azure DocumentDB: MongoDB-compatible, AI-ready, and built for hybrid and multi-cloud

We’re excited to announce Azure DocumentDB, a new service built on the open-source, MongoDB-compatible DocumentDB standard governed by the Linux Foundation. The first Azure managed service to support multi-cloud and hybrid NoSQL, Azure DocumentDB can run consistently across Azure, on-premises, and other clouds.

Azure DocumentDB gives you the freedom to embrace open source while achieving scale, security, and simplicity. It’s AI-ready, with capabilities like vector and hybrid search to deliver more relevant results. Instant autoscale meets demand, and independent compute and storage scaling keeps workloads efficient. Security and availability are standard, with Microsoft Entra ID integration, customer-managed encryption keys, 35-day backups included, and a 99.995% availability service-level agreement (SLA). And soon, enhanced full-text search will add features like fuzzy matching, proximity queries, and expanded language support, making it even easier to build intelligent, search-driven apps.
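At its core, the vector search mentioned above ranks stored documents by how similar their embeddings are to a query embedding. Here is a minimal, service-agnostic sketch in plain Python with toy three-dimensional embeddings—not the DocumentDB API, just the underlying idea:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vector_search(query_vec, documents, top_k=2):
    """Rank documents by similarity of their embedding to the query vector."""
    scored = [(cosine_similarity(query_vec, d["embedding"]), d["text"])
              for d in documents]
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

# Toy corpus; real embeddings come from a model and have hundreds of dimensions.
docs = [
    {"text": "pasta recipe", "embedding": [0.9, 0.1, 0.0]},
    {"text": "tax guidance", "embedding": [0.0, 0.2, 0.9]},
    {"text": "pizza dough tips", "embedding": [0.8, 0.3, 0.1]},
]
print(vector_search([1.0, 0.0, 0.0], docs))  # → ['pasta recipe', 'pizza dough tips']
```

In a managed service, an index replaces this linear scan and hybrid search combines these scores with full-text relevance.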

Azure DocumentDB is now generally available, so you can try it today. You can also learn more about Azure DocumentDB and all the Azure Database news by reading the announcement blog from Shireesh Thota, Corporate Vice President of Azure Databases.

Azure HorizonDB: PostgreSQL designed for your mission-critical workloads

PostgreSQL has become the backbone of modern data solutions thanks to its rich ecosystem, extensibility, and open source foundation. Microsoft is proud to be the #1 PostgreSQL committer among hyperscalers, and we’re building on that leadership with Azure HorizonDB.

Now in early preview, Azure HorizonDB is a fully managed, PostgreSQL-compatible database service, built to handle the scale and performance required by the modern enterprise. It goes far beyond open source Postgres, with auto-scaling storage up to 128 TB, scale-out compute up to 3,072 vCores, <1 millisecond multi-zone commit latency, and enterprise security and compliance. Vector search is built-in, along with integrated AI model management and seamless connectivity to Microsoft Foundry so you can build modern AI apps. Combined with GitHub Copilot, Fabric, and Visual Studio Code integrations, it provides an intelligent and secure foundation for building and modernizing applications at any scale. To learn more about Azure HorizonDB, read our announcement blog.

Accelerate app development with Fabric SaaS Databases, now generally available

We are also releasing a new class of SaaS databases, both SQL database and Cosmos DB in Fabric, into general availability. Data developers now have access to world-class database engines within the same unified platform that powers analytics, AI, and business intelligence.

Fabric Databases are designed to streamline your application development. You can provision them in seconds, and they don’t require the usual granular configuration or deep database expertise. They provide enterprise-grade performance, are secure by default with features like cloud authentication, customer-managed keys, and database encryption, and come natively integrated into the Fabric platform, even using the same Fabric capacity units for billing.

With Fabric databases, developers now have the flexibility to build applications grounded in operational, transactional, and analytical data. Together, these offerings make Fabric a developer-first data platform that is streamlined, scalable, and ready for modern data applications.

Learn more by reading the announcement blog from Shireesh Thota, Corporate Vice President of Azure Databases.

All your databases connected into Fabric

We’re making it easier than ever to work with your entire Microsoft database portfolio in Fabric, giving you a single, unified place to manage and use all your data. Building on our existing mirroring support for Azure SQL Database and Azure SQL MI, we’re now announcing the general availability of mirroring for Azure Database for PostgreSQL, Azure Cosmos DB, and SQL Server versions 2016–2022 and 2025. With these databases mirrored directly into Fabric, you can eliminate traditional extract, transform, and load (ETL) pipelines and make your data instantly ready for analytics and AI.

Getting your data estate ready for AI with Microsoft Fabric

Choosing the right database is essential, but it’s just the beginning. The major opportunity lies in driving frontier transformation, where data becomes the foundation for an AI-native enterprise. We recommend focusing on three core steps:

Unifying your data estate to eliminate silos and complexity.

Creating semantic meaning so your data is ready for AI.

Empowering agents to act on insights and transform operations.

In this section, I’ll dive into the latest enhancements to Microsoft Fabric that help you achieve every step of your data journey. This includes expanded interoperability in OneLake with SAP, Salesforce, Azure Databricks, and Snowflake, the introduction of Fabric IQ—a new workload that adds semantic understanding—and enhanced agentic capabilities across Fabric to help you build richer, AI-powered data experiences.

This is the future of data, and it’s already within reach. With Fabric and our database innovations, Microsoft is helping organizations move seamlessly from insight to action—unlocking the full potential of your data and the AI built on top of it.

Unify your data estate with Microsoft OneLake

Microsoft OneLake unifies all your data—across clouds, on-premises, and beyond Microsoft—into a single data lake with zero-ETL capabilities like shortcuts and mirroring. Alongside the additional mirroring sources for Microsoft Databases, we’re also introducing the preview of shortcuts to SharePoint and OneDrive. This allows you to bring unstructured productivity data into OneLake without copying files or building ETL pipelines, making it easier to train agents and enrich your structured data.

Once connected to OneLake, your data becomes easily discoverable in the apps your teams use every day like Power BI, Teams, Excel, Copilot Studio, and Microsoft Foundry. Today, we are taking that a step further with native integration with Foundry IQ—the next generation of retrieval-augmented generation (RAG). Agents rely on context—Foundry IQ’s knowledge bases deliver high-value context to agents by simplifying access to multiple data sources and making connections across information. You can use the OneLake knowledge source in Foundry IQ to connect agents to multi-cloud sources like AWS S3, on-premises sources, and structured and unstructured data.

See how shortcuts and mirroring unify your data in OneLake and fuel the next generation of intelligent agents in Microsoft Foundry:

Expanding OneLake interoperability with leading data platforms

We are also seeing great momentum with dozens of partners outside of Microsoft deeply integrating with OneLake, including ClickHouse, Dremio, Confluent, EON, and many more. And now, we are thrilled to add new, deeper interoperability with SAP, Salesforce, Azure Databricks, and Snowflake.

First, we’re deepening interoperability with the systems organizations rely on most: SAP and Salesforce. With the launch of SAP Business Data Cloud Connect for Microsoft Fabric, customers can enable bidirectional, zero-copy data sharing between SAP Business Data Cloud (BDC) and Fabric. At the same time, we are working with Salesforce to integrate their data into Fabric using the same zero-copy approach, unlocking advanced analytics and AI capabilities without the overhead of traditional ETL.

We’re also strengthening interoperability with Azure Databricks and Snowflake so you can use a single copy of data across platforms. By the end of 2025, Azure Databricks will release, in preview, the ability to natively read data from OneLake through Unity Catalog, enabling seamless access without duplication or complex data movement. Looking ahead, Databricks will also add support for writing to and storing data directly in OneLake, allowing full two-way interoperability. Read more about this interoperability.

Our collaboration with Snowflake on bidirectional data access continues as well. We are introducing a new item in OneLake called a Snowflake Database and a new UI in Snowflake—both designed to allow OneLake to be the native storage solution for your Snowflake data. We’re also bringing Snowflake mirroring to general availability, allowing you to virtualize your external Snowflake-managed Iceberg tables in OneLake with shortcuts created and handled automatically. Together, these innovations let you run any Fabric workload—whether analytics, AI, or visualization—directly on your Snowflake-managed Iceberg tables.

Learn more about our Snowflake collaboration by reading our latest joint blog or by watching the following demo:

Finally, in close collaboration with dbt Labs, we are also excited to announce built-in support for their industry-leading data transformation capability. Now in preview, dbt jobs in Microsoft Fabric let you build, test, and orchestrate dbt workflows in your Fabric workspaces. Learn more in this blog.

Create semantic knowledge to fuel AI with Fabric IQ

As Frontier Firms train agents on their enterprise data, it’s become clear that quality and context matter more than data volume. Agents need business context across relationships, hierarchies, and meaning to turn raw data into actionable insight. That’s why we’re introducing Fabric IQ—a new workload designed to map your datasets to the real-world entities they represent, creating a shared semantic structure on top of your data.

The power of IQ lies in how it unifies disparate data types under a single, coherent framework. Built upon Power BI’s industry-leading, rich semantic model technology, IQ brings together analytical data, time-series telemetry, and geospatial information, all organized under a semantic framework of business entities and their relationships, properties, rules, and actions. You can then create operations agents, a new type of agent in Fabric, which can use this model to act as virtual team members, monitoring real-time data sources, identifying patterns, and taking proactive action. Instead of forcing your teams and even agents to think in terms of tables and schemas, IQ allows you to align data with how your organization operates.
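Conceptually, a semantic layer of entities with rules that trigger actions—the pattern operations agents build on—can be sketched in a few lines. This is an illustration only, not the Fabric IQ API; the entity, property, threshold, and action names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A business entity with named properties, in the spirit of a semantic model."""
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class Rule:
    """Binds a condition on an entity's properties to an action label."""
    entity: str
    condition: callable
    action: str

def evaluate(entities, rules):
    """Return the actions whose conditions currently hold for their entity."""
    by_name = {e.name: e for e in entities}
    triggered = []
    for rule in rules:
        entity = by_name.get(rule.entity)
        if entity and rule.condition(entity.properties):
            triggered.append(rule.action)
    return triggered

# A freezer whose telemetry has drifted above its safe threshold.
freezer = Entity("freezer-42", {"temp_c": -12.0})
rules = [Rule("freezer-42", lambda p: p["temp_c"] > -15.0, "alert-maintenance")]
print(evaluate([freezer], rules))  # temperature above threshold → action fires
```

The point is that agents reason over named entities, properties, and rules rather than raw tables and schemas.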

In short, Fabric IQ is designed to model reality with data, so that every insight, prediction, and action is grounded in how your organization actually operates. You can learn more about IQ in the announcement blog from Yitzhak Kesselman, Corporate Vice President of Messaging and Real-Time Intelligence.

Empower data-rich agents with Copilot, Fabric data agents, and operations agents

As organizations scale their AI initiatives, the ability to connect intelligent agents with enterprise-grade data is becoming a critical differentiator. Fabric is making this possible with a set of integrated AI experiences: Copilot in Power BI helps you ask questions of your data, Fabric data agents allow deeper analysis, and the new Fabric operations agents let you monitor your data estate and take action in real time. These experiences can be used across Fabric or as foundational knowledge sources in industry-leading AI tools like Microsoft Foundry, Copilot Studio or even Microsoft 365 Copilot to power smarter, more data-rich AI experiences.

Beyond introducing operations agents as part of Fabric IQ, we’re also expanding what data agents and Copilot can do. Along with existing integration with Microsoft Foundry and Copilot Studio, Fabric data agents can now be embedded directly in Microsoft 365 Copilot. This lets business users (with the right permissions) access trusted knowledge from OneLake and transforms Microsoft 365 from a productivity suite into an intelligent insights platform.

They can also act as hosted Model Context Protocol (MCP) servers, making it easy to integrate with other applications and agents across the AI ecosystem. Finally, data agents can now reason across both structured and unstructured data. Thanks to an integration with Azure AI Search, data teams can add their existing unstructured data search endpoints as a source in data agents. Learn more about the Fabric data agent enhancements by reading the Fabric AI blog.
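The idea behind exposing an agent as a tool server is that callers invoke named tools through a structured message boundary. The toy dispatcher below illustrates that shape in plain Python—it is not the MCP protocol or SDK, and the tool name, message format, and data are invented:

```python
import json

# Registry mapping tool names to Python functions.
TOOLS = {}

def tool(name):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("row_count")
def row_count(table):
    # Stand-in for a real data lookup.
    fake_tables = {"sales": 1250, "customers": 87}
    return fake_tables.get(table, 0)

def handle(request_json):
    """Dispatch a request like {"tool": ..., "args": {...}} to a registered tool."""
    request = json.loads(request_json)
    fn = TOOLS[request["tool"]]
    return json.dumps({"result": fn(**request["args"])})

print(handle('{"tool": "row_count", "args": {"table": "sales"}}'))  # → {"result": 1250}
```

A real MCP server adds capability negotiation, typed tool schemas, and a transport, but the call-a-named-tool pattern is the same.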

We’re also enhancing the standalone experience for Copilot in Power BI with a new search experience. Simply describe what you need, and Copilot will locate the relevant report, semantic model, or data agent and surface the right answers. This standalone experience is also coming to Power BI mobile so you can use it on the go.

Take a look at how you can apply all of these AI experiences together seamlessly:

In short, we’re redefining what it means to have an AI-powered data estate. With data agents, Copilot in Power BI, and operations agents in Fabric IQ, AI is now woven across Fabric. And with native integration to Microsoft Foundry and Copilot Studio, you can easily add Fabric agents as building blocks to create more intelligent, informed custom agents.

You can also see more innovation coming to the Fabric platform by reading the Fabric blog from Kim Manis, Corporate Vice President of the Fabric Platform, or by checking out the more technical Fabric November 2025 feature summary blog.

Mark your calendar for FabCon and SQLCon

We are excited to announce SQLCon 2026, co-located with the Microsoft Fabric Community Conference (FabCon) on March 16–20, 2026, in Atlanta, Georgia. By uniting the powerhouse SQL and Fabric communities, we’re giving data professionals everywhere a unique opportunity to master the latest innovations, share practical knowledge, and accelerate what’s possible with data and AI, all in one week. Register for either conference and enjoy full access to both, with the flexibility to mix and match sessions, keynotes, and community events to fit your interests.

Register for FabCon and SQLCon now

Watch these announcements in action at Microsoft Ignite

If you’re interested in seeing these announcements live, I encourage you to join my Ignite session, “Innovation Session: Microsoft Fabric and Azure Databases – the data estate for AI” either in person or online at no cost. I’ll not only cover these major announcements but show you how they come together to help you create a unified, intelligent data foundation for AI.

You can also dive deeper into these announcements and so much more by watching the rest of the breakout sessions across Azure Data:

Tuesday, November 18

Modern data, modern apps: Innovation with Microsoft Databases

Microsoft Fabric: The data platform for the next AI frontier

Unifying your data journey: Migrating to Microsoft Fabric

Wednesday, November 19

Premier League’s data-driven fan engagement at scale

Create a semantic foundation for your AI agents in Microsoft Fabric

Move fast, save more with MongoDB-compatible workloads on DocumentDB

SQL database in Fabric: The unified database for AI apps and analytics

The blueprint for intelligent AI agents backed by PostgreSQL

Connect to and manage any data, anywhere in Microsoft OneLake

Unlock the power of Real-Time Intelligence in the era of AI

Empower Business Users with AI driven insights in Microsoft Fabric

Thursday, November 20

Real-time analytics and AI apps with Cosmos DB in Fabric

From interoperability to agents: Powering financial workflows with AI

How Fabric Data Agents Are Powering the Next Wave of AI

Explore Azure announcements at Microsoft Ignite 2025

The post Microsoft Databases and Microsoft Fabric: Your unified and AI-powered data estate appeared first on Microsoft Azure Blog.
Source: Azure

Announcing Azure Copilot agents and AI infrastructure innovations

In this article

Agentic cloud operations: Introducing Azure Copilot

Azure’s AI infrastructure: The backbone of modernization

Building for trust: Resiliency, operational excellence, and security

What does modernizing workloads look like today?

Looking ahead

The cloud is more than just a platform—it’s the engine of transformation for every organization. This year at Microsoft Ignite 2025, we’re showing how Microsoft Azure modernizes cloud infrastructure at global scale—built for reliability, security, and performance in the AI era.

Streamline cloud operations with Azure Copilot

From scalable compute to resilient networks and AI-powered operations, Azure provides the foundation that helps customers innovate faster and operate with confidence. Our strategy is anchored in three key areas, each designed to help customers thrive in a rapidly changing landscape:

We’re strengthening Azure’s global foundation. We’re expanding capacity and resilience across regions while optimizing datacenter design, power efficiency, and network topology for AI-scale workloads. Our services are zone-redundant by default, our edge footprint is growing to meet low-latency needs, and security and compliance controls are embedded at every layer. From confidential computing and sovereign cloud architectures to our security capabilities, Azure is engineered for trust by design.

We’re modernizing every workload. We’re advancing compute, network, storage, application, and data services with Microsoft Azure Cobalt and Azure Boost systems, Azure Kubernetes Service (AKS) Automatic, and Azure HorizonDB for PostgreSQL. We embrace and integrate with Linux, Kubernetes, and open-source ecosystems customers rely on.

We’re transforming how teams work. We’re embedding AI agents directly into the platform through Azure Copilot and GitHub Copilot, bringing agent-based capabilities for migration, app modernization, troubleshooting, and optimization. These features remove repetitive tasks so teams can focus on architecture instead of administration, making AI an integral part of how Azure runs end-to-end.

Agentic cloud operations: Introducing Azure Copilot

Azure is entering a new era where AI becomes the foundation for running your cloud. As environments grow more complex, traditional tools and manual workflows can’t keep up. This brings us to a frontier moment, where AI and cloud converge to redefine operations. We call this agentic cloud ops: a new model for the AI era.  

What is Azure Copilot?

Azure Copilot is a new agentic interface that orchestrates specialized agents across the cloud management lifecycle, automating migration, optimization, troubleshooting, and more, freeing up teams to focus on innovation.

Azure Copilot aligns actions—human or agent—with your policies and standards, offering a unified framework for compliance, auditing, and enforcement that respects role-based access control (RBAC) and Azure Policy. It provides strong governance and data residency controls, full visibility across agents and workloads, and lets you bring your own storage for complete control of chat and artifact data. To make the operating model truly agentic, we’re introducing six Azure Copilot agents—migration, deployment, optimization, observability, resiliency, and troubleshooting—in gated preview.
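The role-based access model those controls respect reduces, at its simplest, to checking whether any role assigned to a principal grants the requested action. A minimal sketch—the role names and action strings below are invented for illustration, not actual Azure built-in roles:

```python
# Invented roles and actions, illustrating the RBAC check pattern only.
ROLE_PERMISSIONS = {
    "reader": {"vm.read"},
    "contributor": {"vm.read", "vm.start", "vm.stop"},
}

def is_allowed(assigned_roles, action):
    """Allow an action if any assigned role grants it."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in assigned_roles)

print(is_allowed(["reader"], "vm.stop"))       # False: readers cannot stop VMs
print(is_allowed(["contributor"], "vm.stop"))  # True
```

Real RBAC adds scopes, deny assignments, and wildcard actions, but every agent or human operation passes through a check of this form before it runs.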

Learn more about Azure Copilot in our detailed blog, and learn how to sign up for the preview.

Sign up for the Azure Copilot preview

Azure’s AI infrastructure: The backbone of modernization

Azure is built for reliable, world-class performance, delivering at global scale and speed.

With more than 70 regions and hundreds of datacenters, Azure provides the largest cloud footprint in the industry. This unified infrastructure supports consistent performance, capacity, and compliance for customers everywhere.

AI infrastructure that delivers performance and scale

We’ve reimagined how our datacenters are built and operated to support the critical needs of the largest AI challenges. In September 2025, we launched Fairwater, our largest and most sophisticated AI datacenter to date, and our newest site in Atlanta now joins Wisconsin to form a planet-scale “AI superfactory.” By using high-density liquid cooling, a flat network architecture linking hundreds of thousands of GPUs, and a dedicated AI WAN backbone, we’re giving customers unmatched capacity, flexibility, and utilization across every AI workload.

Azure is the first cloud provider to deploy NVIDIA’s GB300 GPUs at scale, extending our leadership from GB200 and continuing to define the infrastructure foundation for the AI era. Each Fairwater site connects hundreds of thousands of these best-in-class GPUs, millions of CPU cores, and massive storage—enough to hold 80 billion 4K movies.

A key part of this evolution is our AI WAN—a high-speed network linking Fairwater and other Azure datacenters to move data quickly and coordinate massive AI jobs across sites. It’s engineered to keep GPUs busy, reduce bottlenecks, and scale workloads beyond the limits of a single location, so customers can tackle bigger projects and get results faster. Driving down costs through innovation, we’ve set a new benchmark for secure, high-performance AI: Azure processed more than 1.1 million tokens per second for language models—the equivalent of writing seven books per second from a single rack.
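The books-per-second comparison holds up as a back-of-envelope calculation if you assume rough figures for tokens per word and words per book—both are common approximations, not numbers from the announcement:

```python
# Back-of-envelope check of the "seven books per second" comparison.
# words_per_token and words_per_book are rough assumptions.
tokens_per_second = 1_100_000
words_per_token = 0.75      # rough average for English text
words_per_book = 110_000    # a long novel

words_per_second = tokens_per_second * words_per_token
books_per_second = words_per_second / words_per_book
print(round(books_per_second, 1))  # → 7.5
```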

Azure’s AI infrastructure puts supercomputing-level power in every customer’s hands—enabling larger model training, faster deployment, and broader user reach within a trusted, compliant environment.

Extending AI infrastructure innovation to your workloads

An exciting part of our work on AI datacenters is that the same architectural breakthroughs that allow us to train frontier models also strengthen Azure’s core services, directly benefiting all workloads.

One of these examples is Azure Boost, which offloads virtualization processes traditionally performed by the hypervisor and host operating system onto purpose-built hardware and software. Combined with our new AMD “Turin” and Intel “Granite Rapids” virtual machines—plus the latest network-optimized and storage-optimized families—customers are seeing more than 20 GB per second of managed-disk throughput and more than a million input/output operations per second (IOPS). More than a quarter of our global fleet is now Boost-enabled, and network throughput has doubled to 400 gigabits per second for our general-purpose and AI SKUs. The infrastructure investments we’ve talked about are already being used by leading-edge companies to bring services to billions of users.

Cloud-native apps and data

Azure Kubernetes Service (AKS) delivers secure, managed Kubernetes with automated upgrades and scaling. Paired with cloud-native databases like PostgreSQL and Cosmos DB, teams build faster and recover instantly.

We’re bridging the power of this AI infrastructure into AKS by enabling cutting-edge GPUs to function as AKS nodes out of the box and by actively monitoring their health.

We haven’t stopped at the infrastructure layer. We’re also reinventing how easy it is to take advantage of Kubernetes itself. That’s why we introduced AKS Automatic. It embeds best practices, automates infrastructure provisioning, and operates critical Kubernetes components to reduce complexity and improve reliability. It handles the hard parts—patching, upgrades, observability, and security—so teams can focus on innovation instead of infrastructure.

With managed system node pools, AKS Automatic becomes even lower-touch: you no longer run critical Kubernetes components yourself, because they are entirely managed by the service. Key services like CoreDNS and the metrics server move to Microsoft-managed infrastructure, making it even easier to focus entirely on your apps.

You not only need easy-to-use application infrastructure; you also need easy-to-use data tools for your applications. We’re introducing Azure HorizonDB for PostgreSQL, which brings breakthrough scalability and AI integration for next-generation applications. Azure DocumentDB is now generally available—the first managed database built on the open-source engine we contributed to the Linux Foundation.

We are also excited to expand our longstanding partnership with SAP and announce the launch of SAP Business Data Cloud Connect for Microsoft Fabric, simplifying access to data sharing across both platforms. Read the announcement blog to learn more.

Azure Databases and Microsoft Fabric
Learn more about the next generation of Microsoft’s databases, announced at Microsoft Ignite 2025.

Read the blog

Building for trust: Resiliency, operational excellence, and security

The world runs on Azure’s cloud infrastructure. Every business, government, and developer depends on it to be reliable, secure, and always available. That responsibility drives everything we do in Azure. Our mission is to build the most efficient, reliable, and cost-effective infrastructure platform of the AI era, one that customers can depend on every day.

Resiliency is not just a feature. It is a design principle, a culture, and a shared commitment between Microsoft and our customers. Every region, service, and operation is built with that responsibility in mind.

At Microsoft Ignite, we are taking this commitment further with new capabilities that strengthen reliability, simplify operations, and help customers build with greater confidence.

Raising the bar on operational excellence: Operational excellence means reliability is designed from the start. Every Azure region is built with availability zones, redundant networking, and automated fault detection. We are extending that foundation with services like NAT Gateway, now zone-redundant by default for improved network reliability without any configuration required.

Empowering customers: With Azure Resiliency (public preview), we are co-engineering resiliency with customers. This new experience helps teams set recovery objectives, test failover drills, and validate application health, strengthening readiness together before issues arise.

Evolving security for modern threats

We continue to expand Azure’s security foundation with new capabilities that make protection simpler, smarter, and more integrated across the platform. These updates strengthen boundaries, automate defense, and bring AI-powered insight directly into how customers protect and operate their environments.

Simplifying protection: Azure Bastion Secure by Default is built into the platform. It automatically hardens remote access to virtual machines through RDP and SSH, reducing setup time and risk.

Strengthening boundaries: Network Security Perimeter, now generally available, provides a secure, centralized firewall to control access to PaaS resources.

Better defense: We’re advancing the Web Application Firewall with CAPTCHA for human verification.

All of this builds on Azure’s broader stack of confidential virtual machines, containers, hardware-based attestation, and encryption, supporting protection from hardware through to application.

What does modernizing workloads look like today?

Organizations are on a journey to modernize. Most run a mix of systems that span decades—from mission-critical databases to new cloud-native services. Azure meets customers where they are, helping modernize applications and data with flexibility, openness, and built-in intelligence.

Rather than a one-off effort, modernization is really about re-architecting agility, scaling efficiently, and using AI and open technologies without sacrificing reliability or control.

To help simplify and accelerate the modernization journey, we’re investing to help you find and get to the best destination for your workloads, whether it’s infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS).

Azure’s agentic migration and modernization tools make it easier than ever to modernize your apps, data, and infrastructure with speed and precision. For example, you can move existing .NET applications—no refactoring or containers required—directly into the new App Service Managed Instance (now in preview).

On the data side, the next-generation Azure SQL Managed Instance (now generally available) delivers up to five times faster performance and double the storage capacity. Azure Copilot and GitHub Copilot simplify SQL Server, Oracle, and PostgreSQL modernization.

Plus, across infrastructure—from VMware to Linux and IT operations—AI agents streamline migrations, reduce licensing overhead, and automate patching, governance, and compliance, so modernization becomes a repeatable, intelligent motion.

Customers migrating and modernizing to Azure using our agentic tools have shared the impressive results they have experienced. Here are just a few examples:

.NET: More than 500,000 lines of code upgraded and migrated in weeks.

Java: Applications modernized four times faster than before using the agents.

Read more about GitHub Copilot app modernization.

Looking ahead

The promise of the cloud was always about scale, flexibility, and innovation. With our innovations across our infrastructure, datacenters, Azure Copilot, services, and open-source contributions, that promise expands to drive your business forward every day.

The next era of the cloud is inevitable. It’s agentic, intelligent, and human-centered—and Microsoft is helping lead the way.

Join us at Microsoft Ignite where you can tune in to our sessions to learn more:

Innovation Session

Innovation Session: Scale Smarter: Infrastructure for the Agentic Era

Breakouts

End-to-End migration of applications with AI Agents to IaaS and PaaS

Unlock agentic intelligence in the cloud with Copilot in Azure

What’s new and what’s next in Azure IaaS

SQL Server 2025: The AI-ready enterprise database

Scaling Kubernetes securely and reliably with AKS

Inside Azure Innovations with Mark Russinovich

The post Announcing Azure Copilot agents and AI infrastructure innovations appeared first on Microsoft Azure Blog.
Source: Azure

Powering Distributed AI/ML at Scale with Azure and Anyscale

The path from prototype to production for AI/ML workloads is rarely straightforward. As data pipelines expand and model complexity grows, teams can find themselves spending more time orchestrating distributed compute than building the intelligence that powers their products. Scaling from a laptop experiment to a production-grade workload still feels like reinventing the wheel. What if scaling AI workloads felt as natural as writing in Python itself? That’s the idea behind Ray, the open-source distributed computing framework born at UC Berkeley’s RISELab, and now, it’s coming to Azure in a whole new way.

Today, at Ray Summit, we announced a new partnership between Microsoft and Anyscale, the company founded by Ray’s creators, to bring Anyscale’s managed Ray service to Azure as a first-party offering in private preview. This new managed service will deliver the simplicity of Anyscale’s developer experience on top of Azure’s enterprise-grade Kubernetes infrastructure, making it possible to run distributed Python workloads with native integrations, unified governance, and streamlined operations, all inside your Azure subscription.

Ray: Open-Source Distributed Computing for Python

Ray reimagines distributed systems for the Python ecosystem, making it simple for developers to scale code from a single laptop to a large cluster with minimal changes. Instead of rewriting applications for distributed execution, Ray offers Pythonic APIs that allow functions and classes to be transformed into distributed tasks and actors without altering core logic. Its smart scheduling seamlessly orchestrates workloads across CPUs, GPUs, and heterogeneous environments, ensuring efficient resource utilization.

Developers can also build complete AI systems using Ray’s native libraries—Ray Train for distributed training, Ray Data for data processing, Ray Serve for model serving, and Ray Tune for hyperparameter optimization—all fully compatible with frameworks like PyTorch and TensorFlow. By abstracting away infrastructure complexity, Ray lets teams focus on model performance and innovation.

Anyscale: Enterprise Ray on Azure

Ray makes distributed computing accessible; Anyscale running on Azure takes it to the next level for enterprise readiness. At the heart of this offering is RayTurbo, Anyscale’s high-performance runtime for Ray. RayTurbo is designed to maximize cluster efficiency and accelerate Python workloads, enabling teams on Azure to:

Spin up Ray clusters in minutes, without Kubernetes expertise, directly from the Azure portal or CLI.

Dynamically allocate tasks across CPUs, GPUs, and heterogeneous nodes, ensuring efficient resource utilization and minimizing idle time.

Easily run large experiments quickly and cost-effectively with elastic scaling, GPU packing, and native support for Azure spot VMs.

Run reliably at production scale with automatic fault recovery, zero-downtime upgrades, and integrated observability.

Maintain control and governance; clusters run inside your Azure subscription, so data, models, and compute stay secure, with unified billing and compliance under Azure standards.

By combining Ray’s flexible APIs with Anyscale’s managed platform and RayTurbo’s performance, Python developers can move from prototype to production faster, with less operational overhead, and at cloud scale on Azure.

Kubernetes for Distributed Computing

Under the hood, Azure Kubernetes Service (AKS) powers this new managed offering, providing the infrastructure foundation for running Ray at production scale. AKS handles the complexity of orchestrating distributed workloads while delivering the scalability, resilience, and governance that enterprise AI applications require.

AKS delivers:

Dynamic resource orchestration: Automatically provision and scale clusters across CPUs, GPUs, and mixed configurations as demand shifts.

High availability: Self-healing nodes and failover keep workloads running without interruption.

Elastic scaling: Scale from development clusters to production deployments spanning hundreds of nodes.

Integrated Azure services: Native connections to Azure Monitor, Microsoft Entra ID, Blob Storage, and policy tools streamline governance across IT and data science teams.

AKS gives Ray and Anyscale a strong foundation—one that’s already trusted for enterprise workloads and ready to scale from small experiments to global deployments.
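Running Ray on Kubernetes typically goes through the open-source KubeRay operator, which defines a `RayCluster` custom resource. A minimal sketch of such a manifest is shown below; the names, image tags, and replica counts are illustrative, and the managed Anyscale service abstracts this layer away:

```yaml
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: demo-raycluster        # illustrative name
spec:
  rayVersion: "2.9.0"          # illustrative version
  headGroupSpec:
    rayStartParams: {}
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:2.9.0
  workerGroupSpecs:
    - groupName: workers
      replicas: 2              # scaled up or down as demand shifts
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray:2.9.0
```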

Enabling teams with Anyscale running on Azure

With this partnership, Microsoft and Anyscale are bringing together the best of open-source Ray, managed cloud infrastructure, and Kubernetes orchestration. By pairing Ray’s distributed computing platform for Python with Anyscale’s management capabilities and AKS’s robust orchestration, Azure customers gain flexibility in how they can scale AI workloads. Whether you want to start small with rapid experimentation or run mission-critical systems at global scale, this offering gives you the choice to adopt distributed computing without the complexity of building and managing infrastructure yourself.

You can leverage Ray’s open-source ecosystem, integrate with Anyscale’s managed experience, or combine both with Azure-native services, all within your subscription and governance model. This optionality means teams can choose the path that best fits their needs: prototype quickly, optimize for cost and performance, or standardize for enterprise compliance.

Together, Microsoft and Anyscale are removing operational barriers and giving developers more ways to innovate with Python on Azure, so they can move faster, scale smarter, and focus on delivering breakthroughs. Read the full release here.

Get started

Learn more about the private preview and how to request access at https://aka.ms/anyscale or subscribe to Anyscale in the Azure Marketplace.
The post Powering Distributed AI/ML at Scale with Azure and Anyscale appeared first on Microsoft Azure Blog.

Microsoft strengthens sovereign cloud capabilities with new services

Across Europe and around the world, organizations today face a complex mix of regulatory mandates, heightened expectations for resilience, and relentless technological advancement. Sovereignty has become a core requirement for governments, public institutions, and enterprises seeking to harness the full power of the cloud while retaining control over their data and operations.

In June 2025, Microsoft CEO Satya Nadella announced a broad range of solutions to help meet these needs with the Microsoft Sovereign Cloud. We continue to adapt our sovereignty approach—innovating to meet customer needs and regulatory requirements within our Sovereign Public Cloud and Sovereign Private Cloud. Today, we are announcing a new wave of capabilities, building upon our digital sovereignty controls, to deliver advanced AI and scale, strengthened by our ecosystem of specialized in-country partner experts. With this announcement, expanded features and services include:

End-to-end AI data processing in Europe as part of the EU (European Union) Data Boundary.

Microsoft 365 Copilot expands in-country processing for Copilot Interactions to 15 countries. Learn more about this announcement in the Microsoft 365 blog.

Sovereign Landing Zones service expansion and disconnected operations for Microsoft Azure Local.

Microsoft 365 Local general availability.

Increased maximum scale of Azure Local, support for external SAN storage, and support for the latest NVIDIA GPUs.

Availability of our partner Digital Sovereignty specialization.

Discover Microsoft Sovereign Cloud

Microsoft Sovereign Cloud continuous innovation

Our latest offerings include new digital sovereignty capabilities across AI, security, and productivity, as well as a suite of upcoming features that will further address our customers’ sovereign cloud needs.

We recognize the need for continuous innovation and have already begun implementing many commitments. As of this month, we have already:

Established a European board of directors, composed of European nationals, exclusively overseeing all datacenter operations in compliance with European law, thereby putting Europe’s cloud infrastructure into the hands of Europeans.

Increased European datacenter capacity with recent launches in Austria and an upcoming launch in Belgium this month.

Embedded our digital resiliency commitments into all relevant government contracts.

Expanded open‑source investment through funding secure open-source software (OSS) projects and collaborations as well as publishing AI Access Principles that widen safe, responsible access to advanced AI, helping European developers, startups, and enterprises compete more effectively across the region.

Advanced our European Security Program by providing AI-powered intelligence and cybersecurity capacity building initiatives to strengthen Europe’s digital resilience against threat actors.

New Sovereign Public Cloud and AI capabilities

From the moment organizations begin designing their environments for sovereignty, they need end-to-end capabilities that help them embed compliance and control from the start.

EU Data Boundary includes AI data processing residency

We are delivering on our end-to-end AI data processing commitments, where data processed by AI services for EU customers remains within the European Union Data Boundary, except as otherwise directed by the customer.

This means all customer data, whether at rest or in transit, will be stored and processed exclusively in the EU. Our approach includes implementing rigorous controls and transparency measures that comply with EU customer requirements.

Expanding Microsoft 365 Copilot in-country data processing to 15 countries

Building upon decades of investment in global infrastructure and industry-leading data residency capabilities, Microsoft will now offer in-country data processing for customers’ Microsoft 365 Copilot interactions in 15 countries around the world.

By the end of 2025, Microsoft will offer customers in four countries—Australia, India, Japan and the United Kingdom—the option to have Microsoft 365 Copilot interactions processed in-country. In 2026, we’ll expand the availability of in-country data processing for Microsoft 365 Copilot to customers in eleven more countries including Canada, Germany, Italy, Malaysia, Poland, South Africa, Spain, Sweden, Switzerland, the United Arab Emirates, and the United States.

Read the full announcement in the Microsoft 365 blog

New Sovereign Landing Zone (SLZ) foundation

We are also introducing our refreshed Sovereign Landing Zone (SLZ), built on the market-proven landing zone foundation of Azure Landing Zone (ALZ).

The Sovereign Landing Zone is the recommended platform landing zone for customers wanting to implement sovereign controls in the Azure public cloud as part of the Sovereign Public Cloud.

The refresh of the Sovereign Landing Zone includes:

Updated Management Group hierarchy and supporting Azure Policy definitions, initiatives, and assignments to help implement the Sovereign Public Cloud controls (Level 1, 2, and 3).

Guidance on deployment placement of Azure Key Vault Managed HSM, if required as part of Level 2 Sovereign controls.

Deployment simplified via the Azure landing zone accelerator and the Azure landing zone library. See Sovereign Landing Zone (SLZ) implementation options for further details.

Over the next few months, the Azure Policy definitions, initiatives, and assignments built into the Sovereign Landing Zone will continue to expand, helping our customers achieve sovereign controls in the Sovereign Public Cloud out of the box, faster.
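Sovereign controls in a landing zone are expressed as Azure Policy definitions and assignments. A simplified, illustrative policy rule restricting resource deployment to EU regions (the region list and display name are examples, not the shipped SLZ policies) might look like:

```json
{
  "properties": {
    "displayName": "Allowed locations (illustrative sovereign control)",
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": ["westeurope", "northeurope", "germanywestcentral"]
        }
      },
      "then": { "effect": "deny" }
    }
  }
}
```

Assigned at the appropriate management group, a rule like this denies any deployment outside the listed regions, which is the mechanism the SLZ uses to enforce residency controls at scale.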

By adopting Sovereign Landing Zones, customers can gain a prescriptive architecture that accelerates compliance with regional sovereignty requirements while reducing complexity in policy management. This approach also helps organizations confidently scale workloads across Azure regions without compromising on regulatory alignment or operational consistency.

Check out the new Sovereign Landing Zone (SLZ)

New Sovereign Private Cloud and AI capabilities

As organizations deepen their commitment to sovereignty, the ability to combine regulatory compliance with innovation becomes especially important. This next wave of enhancements helps bring together advanced AI capabilities and scalable infrastructure designed for both public and private environments.

Supporting thousands of AI models on Azure Local with NVIDIA RTX GPUs

As we advance our Sovereign Private Cloud capabilities with Azure Local, we are introducing a new Azure offering with the latest NVIDIA RTX Pro 6000 Blackwell Server Edition GPU, purpose-built for high-performance AI workloads in sovereign environments.

Designed to run over 1,000 models such as GPT OSS, DeepSeek-V3, Mistral NeMo, and Llama 4 Maverick, this GPU enables organizations to accelerate their AI initiatives directly within a sovereign private cloud environment. Customers gain the flexibility to experiment, innovate, and deploy advanced AI solutions with enhanced performance. This means organizations can pursue new AI-powered opportunities while helping ensure data protection and compliance.

In addition, customers can gain access to thousands of prebuilt and open-source AI models, ready to deploy for a wide range of scenarios—from generative AI and advanced analytics to real-time decision making. This combination empowers customers to experiment, innovate, and operationalize cutting edge AI solutions, while keeping governance front and center.

Increasing Azure Local scale to hundreds of servers

Azure Local has supported single clusters of up to 16 physical servers. With our latest updates, Azure Local can support hundreds of servers, opening new possibilities for organizations with large-scale or growing sovereign private cloud demands. This enhancement means customers can support bigger, more complex workloads, scale their infrastructure with ease, and respond to evolving business needs, all while aligning with the security and sovereignty required by European and global regulations.

SAN support on Azure Local

A key highlight of expanding the scale of our Sovereign Private Cloud is the introduction of Storage Area Network (SAN) support on Azure Local. With this update, customers can now securely connect their existing on-premises storage solutions from industry leaders to Azure Local. This integration empowers organizations to leverage their trusted storage investments while benefiting from cloud-native services, helping ensure data remains within their desired jurisdiction. European enterprises, in particular, gain flexibility in meeting local data residency requirements without compromising on performance or control.

Microsoft 365 Local: General availability of key workloads

Another milestone is the general availability of Microsoft 365 Local, helping bring core productivity workloads—Exchange Server, SharePoint Server, and Skype for Business Server—natively to Azure Local. Starting in December, customers can deploy these productivity workloads on Azure Local in a connected mode, with a disconnected option for complete isolation coming in early 2026. This approach combines familiar collaboration tools with Azure Local’s unified management and consistent Azure services and APIs, enabling organizations to maintain full operational control while aligning with stringent compliance and data residency requirements.

Disconnected operations: General availability

Microsoft’s Sovereign Private Cloud extends sovereignty principles into fully dedicated environments for organizations with strict compliance and control requirements, enabled by Azure Local. Azure Local enables government agencies, multinational enterprises, and regulated entities to maintain local control while still benefiting from the scale and innovation of Microsoft’s global cloud platform.

As part of Azure Local, we are announcing the upcoming general availability of disconnected operations, including the ability to manage multiple Azure Local clusters from the same local control plane. Available in early 2026, this capability allows customers to run private cloud environments with a completely on-premises control plane, so organizations can operate securely and independently within their own dedicated environments. With disconnected operations, customers retain business continuity and operational resilience, even in highly regulated or edge scenarios.

Learn more about Azure Local

New partner Digital Sovereignty specialization now available

We’re excited to officially launch the Digital Sovereignty specialization as part of the Microsoft AI Cloud Partner Program. This new specialization empowers partners to demonstrate deep expertise in delivering secure, compliant, and sovereign cloud solutions across Azure and Microsoft 365 platforms. By earning this designation, partners signal their ability to meet stringent data residency, privacy, and regulatory requirements—helping customers maintain control over their applications and data while driving innovation. The specialization includes rigorous audit criteria and provides benefits such as enhanced discoverability, specialized badging, and priority access to sovereign cloud opportunities.

Looking ahead: Advancing sovereignty through greater controls

The Microsoft Sovereign Cloud roadmap will provide additional capabilities designed to address evolving customer needs including:

Sovereign Public Cloud

Data Guardian: This upcoming capability helps provide transparency into operational sovereignty controls in our European public cloud environments. All remote access by Microsoft engineers to the systems that store and process your data in Europe will be routed to the EU, where an EU-based operator can monitor and, if necessary, halt these activities. All remote access by Microsoft engineers will be recorded in a tamper-evident log.

Sovereign Private Cloud

Enhanced change controls: We will introduce a set of configurable policies and approval workflows that will empower organizations with explicit oversight of any changes propagating from the cloud to the edge, strengthening governance and compliance.

Site-to-site disaster recovery: Azure Site Recovery in Azure Local will help with business continuity by keeping business apps and workloads running during outages.

Move from hybrid to fully disconnected: Azure Local will enable customers to transition workloads from hybrid to fully disconnected operations, providing them with flexibility for business continuity.

National Partner Clouds

National Partner Clouds are a core part of the Microsoft Sovereign Cloud strategy. They provide independently operated cloud environments that deliver Microsoft Azure and Microsoft 365 capabilities under local ownership and control.

Delos Cloud is designed to meet the German government’s BSI cloud platform requirements.

Bleu is designed to meet the French government’s (ANSSI) SecNumCloud requirements.

For many public sector organizations, ERP is a critical workload that requires modernization to cloud environments. SAP is planning to deploy its RISE with SAP offering on Microsoft Azure for both Bleu and Delos Cloud customers, in addition to support of RISE with SAP for customers using Microsoft Azure public cloud deployments.

Learn more about Microsoft’s sovereign solutions

Microsoft delivers unmatched sovereign solutions, offering a flexible public cloud environment, a private cloud that scales to your business needs, and national partner clouds designed to meet specific compliance requirements. Our commitment to continuous investment and innovation helps our customers meet sovereignty without compromise.

Discover what’s next in cloud innovation this November at Microsoft Ignite. Learn more and register today.
The post Microsoft strengthens sovereign cloud capabilities with new services appeared first on Microsoft Azure Blog.

Driving ROI with Azure AI Foundry and UiPath: Intelligent agents in real-world healthcare workflows

Across industries, organizations are moving from experimentation with AI to operationalizing it within business-critical workflows. At Microsoft, we are partnering with UiPath—a preferred enterprise agentic automation platform on Azure—to empower customers with integrated solutions that combine automation and AI at scale.

One example pairs Azure AI Foundry agents with UiPath agents (built on Azure AI Foundry), orchestrated by UiPath Maestro™ within business processes, ensuring AI insights flow seamlessly into automated business processes that deliver measurable value.

Get started with agents built on Azure AI Foundry

From insight to action: Managing incidental findings in healthcare

In healthcare, where every insight can influence a life, the ability of AI to connect information and trigger timely action is especially transformative. Incidental findings in radiology reports—unexpected abnormalities uncovered during imaging studies like CT or MRI scans—represent one of the most challenging and overlooked gaps in patient care.

As the volume of patient data grows, overlooked incidental findings outside the original imaging scope can delay care, raise costs, and increase liability risks.

This is where AI steps in. In this workflow, Azure AI Foundry agents and UiPath agents—orchestrated by UiPath Maestro™—work together to operationalize this process in healthcare:

Radiology reports are generated and finalized in existing systems.

UiPath medical record summarization (MRS) agents review reports, flagging incidental findings.

Azure AI Foundry imaging agents analyze historical PACS images and radiology data, comparing past results with the current incidental findings.

UiPath agents aggregate all results—including pertinent EMR history, prior imaging, and AI-generated imaging insights—into a comprehensive follow-up report.

The aggregated information is forwarded to the original ordering care provider in addition to the primary radiology report, eliminating the need to manually comb through the chart and prior exams for pertinent information. This creates both a secondary notification of the incidental finding and puts the summarized, relevant patient information in the clinicians’ hands, efficiently supporting the provision of safe, timely care.

UiPath Maestro™ orchestrates the business process, routing the consolidated packet to the ordering physician or specialist for next steps.

The combination of UiPath and Azure AI Foundry agents turns siloed data into precise documentation that can be used to create actionable care pathways—accelerating clinical decision making, reducing physician workload, and improving patient outcomes.

This scenario is enabled by:

UiPath Maestro™: Orchestrates complex workflows that span multiple agents, systems, and data sources; and integrates natively with Azure AI Foundry and UiPath Agents, providing tracing capabilities that create business trust in underlying AI agents.

UiPath agents: Extract and summarize structured and unstructured data from EMRs, reports, and historical records.

Azure AI Foundry agents: Analyze medical images and generate AI-powered diagnostic insights with healthcare-specific models on Azure AI Foundry that provide secure data access through DICOMweb APIs and FHIR standards, ensuring compliance and scalability.

Together, this creates an agentic ecosystem on Azure where AI insights are not isolated but operationalized directly within end-to-end business processes.

Delivering customer value

By embedding AI into automated workflows, customers see tangible ROI:

Improved outcomes: Faster detection and follow-up on incidental findings.

Efficiency gains: Automated data collection, summarization, and reporting reduce manual physician workload.

Cost savings: Early detection helps prevent expensive downstream interventions.

Trust and compliance: Built on Azure & UiPath’s security, privacy, and healthcare data standards.

This is the promise of combining enterprise-grade automation with enterprise-ready AI.

What customers are saying about AI automation in healthcare

AI-powered automation is redefining how healthcare operates. At Mercy, we are beginning to partner with Microsoft and UiPath, which will allow us to move beyond data silos and create intelligent workflows that truly serve patients. This is the future of care, where insights instantly translate into action.
Robin Spraul, Automation Manager-Automation Opt & Process Engineering, Mercy

Partnership perspectives

With UiPath Maestro and Azure AI Foundry working together, we’re helping enterprises operationalize AI across workflows that matter most. This is how we turn intelligence into impact.
Asha Sharma, Corporate Vice President, Azure AI Platform

Healthcare is just the beginning. UiPath and Microsoft are empowering organizations everywhere to unlock ROI by bringing automation and AI together in real-world business processes.
Graham Sheldon, Chief Product Officer, UiPath

Looking ahead

This healthcare scenario is one of many where UiPath and Azure AI Foundry are transforming operations. From finance to supply chain to customer service, organizations can now confidently scale AI-powered automation with UiPath Maestro™ on Azure.

At Microsoft, we believe AI is only as valuable as the outcomes it delivers. Together with UiPath, we are enabling enterprises to achieve those outcomes today.


The post Driving ROI with Azure AI Foundry and UiPath: Intelligent agents in real-world healthcare workflows appeared first on Microsoft Azure Blog.
Source: Azure

The new era of Azure Ultra Disk: Experience the next generation of mission-critical block storage

Since its launch at Microsoft Ignite 2019, Azure Ultra Disk has powered some of the world’s most demanding applications and workloads, from real-time financial trading and electronic health records to high-performance gaming and AI/ML services. Ultra Disk was a breakthrough in cloud block storage innovation from the start, introducing independent configuration of capacity, IOPS, and throughput to deliver precise performance at scale. And we’ve continued to push boundaries ever since, committing to purposeful evolution: not just enhancing performance and resilience for mission-critical workloads, but working to ensure every advancement addresses the real-world needs of our customers.

How to deploy and use an Ultra Disk
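
As a minimal sketch of what a deployment involves: an Ultra Disk is an ARM resource whose capacity, IOPS, and throughput are configured independently. The request body below is illustrative only (the location, zone, and sizes are example values; you would submit it via the Azure SDK, CLI, or an ARM template), but the property names follow the ARM disk schema:

```python
# Illustrative request body for a Microsoft.Compute/disks resource.
# Property names follow the ARM disk schema; all values are examples only.
ultra_disk_body = {
    "location": "eastus",
    "zones": ["1"],                      # Ultra Disk is a zonal resource
    "sku": {"name": "UltraSSD_LRS"},     # selects the Ultra Disk offering
    "properties": {
        "creationData": {"createOption": "Empty"},
        "diskSizeGB": 1024,              # capacity...
        "diskIOPSReadWrite": 5000,       # ...IOPS...
        "diskMBpsReadWrite": 200,        # ...and throughput, each set independently
    },
}
```

Because the three dimensions are independent properties, each can later be adjusted on its own without redeploying the disk.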

These advancements are not just theoretical; they’re driving real impact for customers operating on a global scale. One example is BlackRock, a global asset manager and technology provider that leverages Azure Ultra Disk in conjunction with M-series Virtual Machines to power its mission-critical investment platform, Aladdin. For BlackRock, delivering ultra-low latency and exceptional reliability is paramount to swiftly adapting to dynamic market conditions and managing portfolios with agility and confidence.

Now that we’re on Azure, we have a springboard to unlock adoption of cloud-managed services to be able to engineer and operate at greater scale and adopt innovative technologies.
Randall Fradin, Head of Cloud Managed Services and Platform Engineering, BlackRock

Read the full customer story here.

Stories like BlackRock’s illustrate the power of Ultra Disk in action, and they inspire us to keep evolving. That’s why today, we are excited to unveil a transformative update to Ultra Disk, designed to deliver superior speed, resilience, and cost efficiency for your most sensitive workloads. This major refresh introduces higher performance, greater flexibility to optimize cost, and instant access snapshots to support business continuity. With these advancements, Ultra Disk empowers organizations to accelerate operations, restore data rapidly, and scale with confidence, no matter the level of demand or criticality.

What’s new with Ultra Disk?

Ultra Disk delivers reliable performance with improved average, P99.9, and outlier latency

For mission-critical workloads, even brief disruptions can have significant impacts. That is why we have prioritized reducing tail latency at P99.9 and above. Our platform enhancements have delivered an 80% reduction in both P99.9 and outlier latency, along with a 30% improvement in average latency. These advancements make Ultra Disk the best choice for highly I/O-intensive and latency-sensitive workloads, such as transaction logs for mission-critical applications.

If you are using local SSD or Write Accelerator to achieve lower latencies, we recommend exploring Ultra Disk as an alternative option for enhanced data persistence and greater flexibility in capacity and performance.

Optimize application cost without sacrificing performance

Our goal is to support workloads in maximizing both efficiency and performance. Ultra Disk’s latest provisioning model now offers more granular control over capacity and performance, enabling better cost management. Workloads on small disks can save up to 50%, while large disks can save up to 25%. These updated features are now available for both new and existing Ultra Disks:

| | Greater control | Previous |
|---|---|---|
| GiB capacity billing | Billed at 1 GiB granularity | Billed at tiers |
| Maximum IOPS per GiB | 1,000 IOPS per GiB | 300 IOPS per GiB |
| Minimum IOPS per disk | 100 IOPS | Higher of 100 or 1 IOPS per GiB |
| Minimum MB/s per disk | 1 MB/s | Higher of 1 MB/s or 4 KB/s per IOPS |

A financial application operates its core database on Ultra Disk to serve market trend insights. This database stores a large amount of data but requires only moderate IOPS and throughput at low latency (no more than 12,500 GiB, 5,000 IOPS, and 200 MB/s). With more flexible control over capacity and performance, this deployment now saves 22% on its Ultra Disk spending, as illustrated below using East US prices.

| Cost per month | Previous | Improved flexibility | Savings |
|---|---|---|---|
| 12,500 GiB | $1,594 for 13,312 GiB (rounded to next tier) | $1,497 for 12,500 GiB | -6% |
| 5,000 IOPS | $661 for 13,312 IOPS | $248 for 5,000 IOPS | -62% |
| 200 MB/s | $70 for 200 MB/s | No change | No change |
| Ultra Disk total | $2,324 | $1,815 | -22% |
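
The table’s arithmetic can be reproduced from the unit prices it implies. The sketch below is illustrative only: the per-unit rates are back-calculated from the table’s “Improved flexibility” figures, and the 1,024 GiB tier size is an assumption used solely to reproduce the 13,312 GiB rounding shown above.

```python
import math

# Unit prices implied by the table's East US figures (illustrative assumptions,
# derived by dividing each "Improved flexibility" cost by its quantity).
GIB_PRICE = 1497 / 12500   # $ per GiB-month
IOPS_PRICE = 248 / 5000    # $ per provisioned-IOPS-month
MBPS_PRICE = 70 / 200      # $ per MBps-month

def previous_bill(gib: int, iops: int, mbps: int) -> float:
    """Previous model: capacity billed at tiers, minimum of 1 IOPS per billed GiB."""
    billed_gib = math.ceil(gib / 1024) * 1024    # assumed 1,024 GiB tier size
    billed_iops = max(100, billed_gib, iops)     # 1 IOPS-per-GiB floor dominates here
    return billed_gib * GIB_PRICE + billed_iops * IOPS_PRICE + mbps * MBPS_PRICE

def new_bill(gib: int, iops: int, mbps: int) -> float:
    """New model: 1 GiB billing granularity, flat 100 IOPS and 1 MB/s minimums."""
    return gib * GIB_PRICE + max(100, iops) * IOPS_PRICE + max(1, mbps) * MBPS_PRICE

# The blog's example: 12,500 GiB, 5,000 IOPS, 200 MB/s -> roughly 22% savings.
savings = 1 - new_bill(12500, 5000, 200) / previous_bill(12500, 5000, 200)
```

Note how most of the savings come from no longer paying for the 13,312-IOPS minimum that the old 1-IOPS-per-GiB floor forced onto a large, low-IOPS disk.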

Unlock high-performance workloads on Azure Boost and Ultra Disk

Ultra Disk and Azure Boost now enable a new class of high-performance workloads: 

Memory Optimized Mbv3 VM (Standard_M416bs_v3), generally available: up to 550,000 IOPS and 10 GB/s

Azure Boost Ebdsv5 VM, generally available: up to 400,000 IOPS and 10 GB/s

Stay tuned for the newest Azure Boost VM announcement at Ignite 2025 for unprecedented remote block storage performance.

These innovations empower customers to confidently operate high-demand applications such as large-scale SQL databases, electronic health record systems, and mission-critical enterprise platforms. Ultra Disk is equipped to address rigorous performance requirements by leveraging the latest advancements in Virtual Machine technology.

Instant Access Snapshot enables you to restore and run your business application immediately

We are thrilled to announce an exciting new experience: Instant Access Snapshot for Ultra and Premium SSD v2 disks, now available in public preview. With Instant Access, you can immediately use snapshots after creation to generate new disks, eliminating the wait time (often spanning numerous hours) traditionally required for background data copy before the snapshot is in a ready and usable state. Disks generated from these Instant Access Snapshots now hydrate up to 10x faster and experience minimal read latency impact during the hydration process. This advanced capability marks a significant leap forward in the public cloud market, enabling rapid recovery and replication scale-out for your organization in real time. No more lengthy restoration processes or costly downtime! Instant Access Snapshot empowers you to get back to business within moments, not hours.
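
To show what “use the snapshot immediately” means mechanically, here is an illustrative request body for a new disk created from a snapshot. The resource IDs are placeholders and the sizes are examples; the property names follow the ARM disk schema. With Instant Access, the disk produced by this request is usable while hydration continues in the background.

```python
# Placeholder resource ID; a real ID carries your subscription and resource group.
snapshot_id = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
               "/providers/Microsoft.Compute/snapshots/<snapshot-name>")

restored_disk_body = {
    "location": "eastus",
    "sku": {"name": "UltraSSD_LRS"},
    "properties": {
        # "Copy" creates the new disk from the snapshot's data.
        "creationData": {
            "createOption": "Copy",
            "sourceResourceId": snapshot_id,
        },
        "diskIOPSReadWrite": 5000,
        "diskMBpsReadWrite": 200,
    },
}
```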

Building on the foundation of security, flexibility, and efficiency for Ultra Disk

Let’s walk through a few other features recently released that will greatly enhance your high-performance workload experience on Ultra Disk.

Operate cost-efficiently by expanding your Ultra Disk capacity live with live resize and by dynamically adjusting Ultra Disk performance to avoid overprovisioning.

Run your business application securely with encryption at host on Ultra Disk. Encryption at host encrypts your data starting at the VM host and then stores the encrypted data on the Ultra Disk.

Azure Site Recovery – Recover your VM applications with Ultra Disk seamlessly in another Azure region when your primary region is down.

Azure VM Backup – Back up your VM applications equipped with Ultra Disk easily and securely.

Azure Disk Backup – Back up a specific Ultra Disk that is critical to your business operations to lower your backup cost and enable more customized backup operations.

Third-party backup and disaster recovery support: We understand that you may have a preferred third-party service for your backup and disaster recovery procedures. Check out the third-party services here that now support Ultra Disk.

Migrate your clustered applications that use SCSI Persistent Reservations to Azure as-is with Ultra Disk’s shared disk capability. Shared disks unlock easy migration and further cost optimization for your mission-critical clustered applications.

Getting started: Unlock new possibilities for your business

Join us on this journey to redefine what’s possible for your mission critical business applications. With Azure Ultra Disk, you can experience the future of high-performance storage today, empowering your organization to move faster, recover instantly, and scale with confidence.

New to Ultra Disk? Start with our comprehensive documentation and learn how to deploy an Ultra Disk.

Have questions or feedback? Reach out to our team at AzureDisks@microsoft.com.

Start using Azure Ultra Disk today

The post The new era of Azure Ultra Disk: Experience the next generation of mission-critical block storage appeared first on Microsoft Azure Blog.

Securing our future: November 2025 progress report on Microsoft’s Secure Future Initiative

When we launched the Secure Future Initiative (SFI), our mission was clear: accelerate innovation, strengthen resilience, and lead the industry toward a safer digital future. Today, we’re sharing our latest progress report that reflects steady progress in every area and engineering pillar, underscoring our commitment to security above all else. We also highlight new innovations delivered to better protect customers, and share how we use some of those same capabilities to protect Microsoft. Through SFI, we have improved the security of our platforms and services and our ability to detect and respond to cyberthreats.

Read the latest Secure Future Initiative report

Fostering a security-first mindset

Engineering sentiment around security has improved by nine points since early 2024. To increase security awareness, 95% of employees have completed the latest training on guarding against AI-powered cyberattacks, which remains one of our highest-rated courses. Finally, we developed resources for employees and made them available to customers for the first time to improve security awareness.

Governance that scales globally

The Cybersecurity Governance Council now includes three additional Deputy Chief Information Security Officer (CISO) functions covering European regulations, internal operations, and engagement with our ecosystem of partners and suppliers. We launched the Microsoft European Security Program to deepen partnerships and better inform European governments about the cyberthreat landscape, and we are collaborating with industry partners to better align cybersecurity regulations, advance responsible state behavior in cyberspace, and build cybersecurity capacity through the Advancing Regional Cybersecurity Initiative in the Global South. You can read more on our cybersecurity policy and diplomacy work.

Secure by Design, Secure by Default, Secure Operations

Microsoft Azure, Microsoft 365, Windows, Microsoft Surface, and Microsoft Security engineering teams continue to deliver innovations to better protect customers. Azure enforced secure defaults, expanded hardware-based trust, and updated security benchmarks to improve cloud security. Microsoft 365 introduced a dedicated AI Administrator role and enhanced agent lifecycle governance and data security transparency to give organizations more control and visibility. Windows and Surface advanced Zero Trust principles with expanded passkeys, automatic recovery capabilities, and memory-safe improvements to firmware and drivers. Microsoft Security introduced data security posture management for AI and evolved Microsoft Sentinel into an AI-first platform with data lake, graph, and Model Context Protocol capabilities.

Engineering progress that sets the benchmark

We’re making steady progress across all engineering pillars. Key achievements include enforcing phishing-resistant multifactor authentication (MFA) for 99.6% of Microsoft employees and devices, migrating higher-risk users to locked-down Azure Virtual Desktop environments, completing network device inventory and lifecycle management, and achieving 99.5% detection and remediation of live secrets in code. We’ve also deployed more than 50 new detections across Microsoft infrastructure, with applicable detections to be added to Microsoft Defender, and awarded $17 million to promote responsible vulnerability disclosure.

Actionable guidance

To help customers improve their security, we highlight 10 SFI patterns and practices customers can follow to reduce their risk. We also share additional best practices and guidance throughout the report. Customers can do a deeper assessment of their security posture by using our Zero Trust Workshops, which incorporate SFI-based assessments and actionable learnings to help customers on their own security journeys.

Security as the foundation of trust

Cybersecurity is no longer a feature; it is the foundation of trust in a connected world.

With the equivalent of 35,000 engineers working full time on security, SFI remains the largest cybersecurity effort in digital history. Looking ahead, we will continue to prioritize the highest risks, accelerate delivery of security innovations, and harness AI to increase engineering efficiency and enable rapid anomaly detection and automated remediation.

The cyberthreat landscape will continue to evolve. Technology will continue to advance. And Microsoft will continue to prioritize security above all else. Our progress reflects a simple truth: trust is earned through action and accountability.

We are grateful for the partnership of our customers, industry peers, and security researchers. Together, we will innovate for a safer future.

Read our November 2025 progress report

Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
The post Securing our future: November 2025 progress report on Microsoft’s Secure Future Initiative appeared first on Microsoft Azure Blog.