Beyond boundaries: The future of Azure Storage in 2026

2025 was a pivotal year in Azure Storage, and we’re heading into 2026 with a clear focus on helping customers turn AI into real impact. As outlined in last December’s Azure Storage innovations: Unlocking the future of data, Azure Storage is evolving as a unified intelligent platform that supports the full AI lifecycle at enterprise scale with the performance modern workloads demand.

Looking ahead to 2026, our investments span the full breadth of that lifecycle as AI becomes foundational across every industry. We are advancing storage performance for frontier model training, delivering purpose‑built solutions for large‑scale AI inferencing and emerging agentic applications, and empowering cloud‑native applications to operate at agentic scale. In parallel, we are simplifying adoption for mission‑critical workloads, lowering TCO, and deepening partnerships to co‑engineer AI‑optimized solutions with our customers.

We’re grateful to our customers and partners for their trust and collaboration, and excited to shape the next chapter of Azure Storage together in the year ahead.

Extending from training to inference

AI workloads extend from large, centralized model training to inference at scale, where models are applied continuously across products, workflows, and real-world decision making. LLM training continues to run on Azure, and we’re investing to stay ahead by expanding scale, improving throughput, and optimizing how model files, checkpoints, and training datasets flow through storage.

Innovations that helped OpenAI operate at unprecedented scale are now available to all enterprises. Blob scaled accounts allow storage to scale across hundreds of scale units within a region, handling the millions of objects required to turn enterprise data into training and tuning datasets for applied AI. Our partnership with NVIDIA DGX on Azure shows how that scale translates into real-world inference. DGX Cloud was co-engineered to run on Azure, pairing accelerated compute with high-performance storage, Azure Managed Lustre (AMLFS), to support LLM research, automotive, and robotics applications. AMLFS provides the best price-performance for keeping GPU fleets continuously fed. We recently released preview support for 25 PiB namespaces and up to 512 GBps of throughput, making AMLFS the best-in-class managed Lustre deployment in the cloud.

As we look ahead, we’re deepening integration across popular first- and third-party AI frameworks such as Microsoft Foundry, Ray, Anyscale, and LangChain, enabling seamless connections to Azure Storage out of the box. Our native Azure Blob Storage integration within Foundry enables enterprise data consolidation into Foundry IQ, making blob storage the foundational layer for grounding enterprise knowledge, fine-tuning models, and serving low-latency context to inference, all under the tenant’s security and governance controls.

From training through full-scale inferencing, Azure Storage supports the entire agent lifecycle: from distributing large model files efficiently, to storing and retrieving long-lived context, to serving data from RAG vector stores. By optimizing for each pattern end-to-end, Azure Storage has performant solutions for every stage of AI inference.
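The RAG retrieval pattern mentioned above can be sketched with a toy in-memory vector store. The document names and three-dimensional embeddings below are hypothetical stand-ins for what a production system would load from a storage service; real embeddings have hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, store, top_k=2):
    # Rank stored documents by similarity to the query embedding.
    scored = [(cosine_similarity(query_vec, vec), doc) for doc, vec in store.items()]
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:top_k]]

# Hypothetical corpus: document name -> embedding vector.
store = {
    "checkpoint-format.md": [0.9, 0.1, 0.0],
    "blob-scaled-accounts.md": [0.2, 0.8, 0.1],
    "lustre-throughput.md": [0.1, 0.2, 0.9],
}

print(retrieve([0.85, 0.15, 0.05], store))
# → ['checkpoint-format.md', 'blob-scaled-accounts.md']
```

The same ranking step is what a vector store performs at scale; the storage layer's job is to keep the embeddings and the documents they point to cheaply and quickly accessible.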

Evolving cloud native applications for agentic scale

As inference becomes the dominant AI workload, autonomous agents are reshaping how cloud native applications interact with data. Unlike human-driven systems with predictable query patterns, agents operate continuously, issuing an order of magnitude more queries than traditional users ever did. This surge in concurrency stresses databases and storage layers, pushing enterprises to rethink how they architect new cloud native applications.

Azure Storage is building with SaaS leaders like ServiceNow, Databricks, and Elastic to optimize for agentic scale, leveraging our block storage portfolio. Looking forward, Elastic SAN becomes a core building block for these cloud native workloads, starting with transforming Microsoft’s own database solutions. It offers fully managed block storage pools that let different workloads share provisioned resources, with guardrails for hosting multi-tenant data. We’re pushing the boundaries on maximum scale units to enable denser packing and give SaaS providers capabilities to manage agentic traffic patterns.

As cloud native workloads adopt Kubernetes to scale rapidly, we are simplifying the development of stateful applications through our Kubernetes native storage orchestrator, Azure Container Storage (ACStor) alongside CSI drivers. Our recent ACStor release signals two directional changes that will guide upcoming investments: adopting the Kubernetes operator model to perform more complex orchestration and open sourcing the code base to collaborate and innovate with the broader Kubernetes community.

Together, these investments establish a strong foundation for the next generation of cloud native applications where storage must scale seamlessly and deliver high efficiency to serve as the data platform for agentic scale systems.

Breaking price performance barriers for mission critical workloads

In addition to evolving AI workloads, enterprises continue to grow their mission critical workloads on Azure.

SAP and Microsoft are partnering to expand core SAP performance while introducing AI-driven agents like Joule that enrich Microsoft 365 Copilot with enterprise context. Azure’s latest M-series advancements add substantial scale-up headroom for SAP HANA, pushing disk storage performance to ~780k IOPS and 16 GB/s throughput. For shared storage, Azure NetApp Files (ANF) and Azure Premium Files deliver the high-throughput NFS/SMB foundations SAP landscapes rely on, while optimizing TCO with ANF Flexible Service Level and Azure Files Provisioned v2. Coming soon, we will introduce an Elastic ZRS storage service level in ANF, bringing zone‑redundant high availability and consistent performance through synchronous replication across availability zones leveraging Azure’s ZRS architecture, without added operational complexity.

Similarly, Ultra Disks have become foundational to platforms like BlackRock’s Aladdin, which must react instantly to market shifts and sustain high performance under heavy load. With average latency well under 500 microseconds, support for 400K IOPS, and 10 GB/s throughput, Ultra Disks enable faster risk calculation, more agile portfolio management, and resilient performance on BlackRock’s highest-volume trading days. When paired with Ebsv6 VMs, Ultra Disks can reach 800K IOPS and 14 GB/s for the most demanding mission critical workloads. And with flexible provisioning, customers can tune performance precisely to their needs while optimizing TCO.

These combined investments give enterprises a more resilient, scalable, and cost-efficient platform for their most critical workloads.

Designing for new realities of power and supply

The global AI surge is straining power grids and hardware supply chains. Rising energy costs, tight datacenter budgets, and industry-wide HDD/SSD shortages mean organizations can’t scale infrastructure simply by adding more hardware. Storage must become more efficient and intelligent by design.

We’re streamlining the entire stack to maximize hardware performance with minimal overhead. Combined with intelligent load balancing and cost-effective tiering, we are uniquely positioned to help customers scale storage sustainably even as power and hardware availability become strategic constraints. With continued innovations in Azure Boost Data Processing Units (DPUs), we expect step-function gains in storage speeds and feeds at even lower per-unit energy consumption.

AI pipelines can span on-premises estates, neocloud GPU clusters, and the cloud, yet many of these environments are limited by power capacity or storage supply. When these limits become a bottleneck, we make it easy to shift workloads to Azure. We’re investing in integrations that make external datasets first-class citizens in Azure, enabling seamless access to training, fine-tuning, and inference data wherever it lives. As cloud storage evolves into AI-ready datasets, Azure Storage is introducing curated, pipeline-optimized experiences to simplify how customers feed data into downstream AI services.

Accelerating innovations through the storage partner ecosystem

We can’t do this alone. Azure Storage works closely with strategic partners to push inference performance to the next level. In addition to the self-publishing capabilities available in Azure Marketplace, we go a step further by devoting expert resources to co-engineer highly optimized, deeply integrated solutions with partners.

In 2026, you will see more co-engineered solutions like Commvault Cloud for Azure, Dell PowerScale, Azure Native Qumulo, Pure Storage Cloud, Rubrik Cloud Vault, and Veeam Data Cloud. We will focus on hybrid solutions with partners like VAST Data and Komprise to enable data movement that unlocks the power of Azure AI services and infrastructure—fueling impactful customer AI Agent and Application initiatives.

To an exciting new year with Azure Storage

As we move into 2026, our vision remains simple: help every customer unlock more value from their data with storage that is faster, smarter, and built for the future. Whether powering AI, scaling cloud native applications, or supporting mission critical workloads, Azure Storage is here to help you innovate with confidence in the year ahead.

What are the benefits of using Azure Storage?
Azure Storage services are durable, secure, and scalable. Review your options and check out our sample of scenarios.

Explore Azure Storage

The post Beyond boundaries: The future of Azure Storage in 2026 appeared first on Microsoft Azure Blog.
Source: Azure

Microsoft named a Leader in IDC MarketScape for Unified AI Governance Platforms

As organizations rapidly embrace generative and agentic AI, ensuring robust, unified governance has never been more critical. That’s why Microsoft is honored to be named a Leader in the 2025-2026 IDC MarketScape for Worldwide Unified AI Governance Platforms Vendor Assessment (#US53514825, December 2025). We believe this recognition highlights our commitment to making AI innovation safe, responsible, and enterprise-ready—so you can move fast without compromising trust or compliance.

Read the IDC MarketScape for Unified AI Governance Platforms report

[Figure 1: A graphic showing Microsoft’s position in the Leaders section of the IDC report.] The IDC MarketScape vendor analysis model is designed to provide an overview of the competitive fitness of technology suppliers in a given market. The research methodology utilizes a rigorous scoring methodology based on both qualitative and quantitative criteria that results in a single graphical illustration of each supplier’s position within a given market. The Capabilities score measures supplier product, go-to-market, and business execution in the short term. The Strategy score measures alignment of supplier strategies with customer requirements in a three- to five-year timeframe. Supplier market share is represented by the size of the icons.

The urgency for a unified AI governance strategy is being driven by stricter regulatory demands, the sheer complexity of managing AI systems across multiple AI platforms and multicloud and hybrid environments, and leadership concerns about risk related to negative brand impact. Centralized, end-to-end governance platforms help organizations reduce compliance bottlenecks, lower operational risks, and turn governance into a strategic driver for responsible AI innovation. In today’s landscape, unified AI governance is not just a compliance obligation—it is critical infrastructure for trust, transparency, and sustainable business transformation.

Our own approach to AI is anchored to Microsoft’s Responsible AI standard, backed by a dedicated Office of Responsible AI. Drawing from our internal experience in building, securing, and governing AI systems, we translate these learnings directly into our AI management tools and security platform. As a result, customers benefit from features such as transparency notes, fairness analysis, explainability tools, safety guardrails, regulatory compliance assessments, agent identity, data security, vulnerability identification, and protection against cyberthreats like prompt-injection attacks. These tools enable them to develop, secure, and govern AI that aligns with ethical principles and is built to help support compliance with regulatory requirements. By integrating these capabilities, we empower organizations to make ethical decisions and safeguard their business processes throughout the entire AI lifecycle.

Microsoft’s AI governance capabilities aim to provide integrated and centralized control for observability, management, and security across IT, developer, and security teams, ensuring integrated governance within their existing tools. Microsoft Foundry acts as our main control point for model development, evaluation, deployment, and monitoring, featuring a curated model catalog, machine learning operations, robust evaluation, and embedded content safety guardrails. Microsoft Agent 365, which was not yet available at the time of the IDC publication, provides a centralized control plane for IT, helping teams confidently deploy, manage, and secure their agentic AI published through Microsoft 365 Copilot, Microsoft Copilot Studio, and Microsoft Foundry.

Deeply embedded security systems are integral to Microsoft’s AI governance solution. Integrations with Microsoft Purview provide real-time data security, compliance, and governance tools, while Microsoft Entra provides agent identity and controls to manage agent sprawl and prevent unauthorized access to confidential resources. Microsoft Defender offers AI-specific posture management, threat detection, and runtime protection. Microsoft Purview Compliance Manager automates adherence to more than 100 regulatory frameworks. Granular audit logging and automated documentation bolster regulatory and forensic capabilities, enabling organizations in regulated industries to innovate with AI while maintaining oversight, secure collaboration, and consistent policy enforcement.

Guidance for security and governance leaders and CISOs

To empower organizations in advancing their AI transformation initiatives, it is crucial to focus on the following priorities for establishing a secure, well-governed, and scalable AI framework. The guidance below provides Microsoft’s recommendations for fulfilling these best practices:

Adopt a unified, end-to-end governance platform
What it means: Establish a comprehensive, integrated governance system covering traditional machine learning, generative AI, and agentic AI. Ensure unified oversight from development through deployment and monitoring.
How Microsoft delivers: Microsoft enables observability and governance at every layer across IT, developer, and security teams to provide an integrated and cohesive governance platform that enables teams to play their part from within the tools they use. Microsoft Foundry acts as the developer control plane, connecting model development, evaluation, security controls, and continuous monitoring. Microsoft Agent 365 is the control plane for IT, enabling discovery, security, deployment, and observability for agentic AI in the enterprise. Microsoft Purview, Entra, and Defender integrate to deliver consistent full-stack governance across data, identity, threat protection, and compliance.

Industry-leading responsible AI infrastructure
What it means: Implement responsible AI practices as a foundational part of engineering and operations, with transparency and fairness built in.
How Microsoft delivers: Microsoft embeds its Responsible AI Standard into our engineering processes, supported by the Office of Responsible AI. Automatic generation of model cards and built-in fairness mechanisms set Microsoft apart as a strategic differentiator, pairing technical controls with mature governance processes. Microsoft’s Responsible AI Transparency Report provides visibility into how we develop and deploy AI models and systems responsibly and provides a model for customers to emulate our best practices.

Advanced security and real-time protection
What it means: Provide robust, real-time defense against emerging AI security threats, especially for regulated industries.
How Microsoft delivers: Microsoft’s platform features real-time jailbreak detection, encrypted agent-to-agent communication, tamper-evident audit logs for model and agent actions, and deep integration with Defender to provide AI-specific threat detection, security posture management, and automated incident response capabilities. These capabilities are especially critical for regulated sectors.

Automated compliance at scale
What it means: Automate compliance processes, enable policy enforcement throughout the AI lifecycle, and support audit readiness across hybrid and multicloud environments.
How Microsoft delivers: Microsoft Purview streamlines compliance adherence for regulatory requirements and provides comprehensive support for hybrid and multicloud deployments—giving customers repeatable and auditable governance processes.

We believe we are differentiated in the AI governance space by delivering a unified, end-to-end platform that embeds responsible AI principles and robust security at every layer—from agents and applications to underlying infrastructure. Through native integration of Microsoft Foundry, Microsoft Agent 365, Purview, Entra, and Defender, organizations benefit from centralized oversight and observability across the layers of the organization with consistent protection and operationalized compliance across the AI lifecycle. Our comprehensive approach removes disparate and disconnected tooling, enabling organizations to build trustworthy, transparent, and secure AI solutions that can start secure and stay secure. We believe this approach uniquely differentiates Microsoft as a leader in operationalizing responsible, secure, and auditable AI at scale.

Strengthen your security strategy with Microsoft AI governance solutions

Agentic and generative AI are reshaping business processes, creating a new frontier for security and governance. Organizations that act early and prioritize governance best practices—unified governance platforms, built-in responsible AI tooling, and integrated security—will be best positioned to innovate confidently and maintain trust.

Microsoft approaches AI governance with a commitment to embedding responsible practices and robust security at every layer of the AI ecosystem. Our AI governance and security solutions empower customers with built-in transparency, fairness, and compliance tools throughout engineering and operations. We believe this approach allows organizations to benefit from centralized oversight, enforce policies consistently across the entire AI lifecycle, and achieve audit readiness—even in the rapidly changing landscape of generative and agentic AI.

Explore more

Read the IDC MarketScape excerpt.
Learn more about AI security, governance, and compliance.
Read our latest Security for AI blog to learn more about our latest capabilities.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

Chart your AI and agent strategy with Microsoft Marketplace

A new category of organization is emerging that embeds AI across every layer of their operations—accelerating delivery, scaling efficiently, and unlocking new business potential. These companies are leading Frontier Firm transformation, not simply adopting AI but rebuilding around it to set the pace for the next decade of innovation. Successfully adopting AI requires choosing the right strategy with tradeoffs between time-to-market and time-to-value. There is no one-size-fits-all approach—some organizations will build from scratch, buy off-the-shelf, or choose a hybrid option of custom components with ready-made tools.

Find cloud solutions with Microsoft Marketplace

Regardless of approach, Microsoft Marketplace—with the largest catalog of AI apps and agents in the industry—is the primary destination for organizations adopting AI quickly and responsibly. Thousands of pre-vetted solutions are available from Microsoft partners that seamlessly integrate with your existing Microsoft stack for faster time-to-value.

Marketplace has a single catalog that meets you where you are. Solutions can be contextually surfaced within the products employees use every day—like agents in Microsoft 365 Copilot and models in Microsoft Foundry. Additionally, with capabilities that help you balance agility and oversight, Marketplace is accelerating how organizations move from concept to production while optimizing cloud cost and performance.

Build custom AI applications with models and apps from Marketplace

Microsoft Marketplace provides access to more than 11,000 prepackaged models, as well as over 4,000 AI apps and agents, to help you build a custom AI solution. Whether you’re doing pro-code work with programming languages, frameworks, and APIs or a low-code method with pre-built components and minimal coding, Marketplace and Microsoft products support your development.

Pro-code builds give you complete control with custom logic, custom data handling, and governance by design. You can also own your IP, which can be essential in industries like financial services or advanced manufacturing.

Marketplace provides access to thousands of models, including leading models from Anthropic, Cohere, Meta, OpenAI, and NVIDIA, that can ground custom agents with high-quality building blocks, dramatically reducing development time while preserving full ownership of logic and data. Prepackaged models available through Marketplace accelerate your build because teams don’t have to stand up the stack themselves: setting up a specialized graphics processing unit (GPU) server, installing drivers and AI runtimes, finding and downloading the right models, and fine-tuning for compatibility and performance. Models are accessible in the Marketplace storefront as well as in the Azure portal and Microsoft Foundry, so teams can do what they need in the flow of work and deploy models securely in their Azure environment.
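As a rough sketch of what calling a deployed model looks like in a pro-code build, the snippet below assembles an OpenAI-style chat-completions request body. The endpoint URL, deployment name, and messages are hypothetical; a real call would POST this body to your deployment's endpoint with an Azure credential attached.

```python
import json

# Hypothetical endpoint; substitute the URL shown for your own deployment.
ENDPOINT = "https://example-resource.services.ai.azure.com/models/chat/completions"

def build_chat_request(deployment, user_message, max_tokens=256):
    # Assemble an OpenAI-style chat-completions body for a deployed model.
    return {
        "model": deployment,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
    }

body = build_chat_request("my-llama-deployment", "Summarize our Q3 fraud metrics.")
print(json.dumps(body, indent=2))
```

Keeping payload construction in one small function like this makes it easy to swap deployments as you compare models from the catalog.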

[Video: How to deploy a model through Microsoft Foundry]

Alternatively, low-code builds can be done quickly and benefit from tight platform integration and standardization. As your organization seeks to maximize impact with Microsoft Copilot, team members can use Microsoft Copilot Studio to design, extend, and govern custom AI copilots with responses securely grounded in your company’s data. With Copilot Studio, you can build with a low-code platform using models from Anthropic and OpenAI to create agents that support orchestration, chat, and deep reasoning.

Buy ready-made solutions through Marketplace 

Buying your AI application or agent becomes the pragmatic choice when the solution you need already exists with proven value and capabilities. For many organizations, buying is the fastest path to production—especially if resourcing constraints make custom builds unrealistic.

Microsoft is helping organizations in the shift to agentic AI whether you’re looking for singular agents that integrate into Microsoft 365 Copilot or fully autonomous multi-agent systems. Marketplace, as an extension of the Microsoft Cloud, gives you confidence in selecting AI apps and agents from discovery through deployment.

Filter by product, category, or industry in the storefront to find the right solution for your specific needs. Then, Marketplace supports try-before-you-buy with trials or proof-of-concepts within your Microsoft environment so you can ensure the solution is right for your business.

Once you’ve made your decision, Marketplace offerings align to your existing Microsoft investments, so there is seamless provisioning for administrators in a familiar and trusted experience, whether it’s a SaaS application in Azure or an agent in Microsoft 365 Copilot. In addition, if your organization has an Azure consumption commitment, eligible solutions count toward your contract—dollar-for-dollar, no limit.

Customize your AI strategy: a blended approach

Many organizations will land somewhere in the middle of building from scratch and buying a finished AI application. A blended strategy allows you to extend partner solutions with your own IP, customize layers that drive differentiation, and leverage pre-built components to reduce engineering effort. 

For example, a common scenario in the financial services industry is modernizing fraud and anti-money laundering detection systems that identify suspicious transactions or spot unusual customer behavior. Rebuilding these systems requires large rules engines and manual effort, which can generate high false-positive rates and compliance fatigue.

With Marketplace, firms can deploy pre-built fraud detection machine learning (ML) models and risk-scoring engines with compliant APIs in minutes—all running inside their Azure tenant using Managed Identity, so sensitive data stays secure. Once deployed, teams can immediately begin blending the models with their existing workflows, data pipelines, and case management systems. Instead of recertifying every new model or scenario, organizations can test, compare, and iterate rapidly without reopening full compliance reviews each time. This allows them to improve fraud detection at a fraction of the cost and time required to rebuild systems internally, accelerating their journey to becoming a Frontier Firm.
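One way the blended approach can be pictured is a simple ensemble: a partner-supplied model score combined with an in-house rules engine. Every rule, weight, and threshold below is illustrative, not drawn from any real product; a production system would calibrate these against labeled transactions.

```python
# Illustrative rules and weights; a real system would calibrate these.
RULES = [
    ("amount_over_10k", lambda txn: txn["amount"] > 10_000, 0.4),
    ("new_payee", lambda txn: txn["payee_age_days"] < 7, 0.3),
    ("foreign_ip", lambda txn: txn["ip_country"] != txn["home_country"], 0.3),
]

def rules_score(txn):
    # Sum the weights of every rule the transaction triggers.
    return sum(weight for _, check, weight in RULES if check(txn))

def blended_score(txn, model_score, model_weight=0.6):
    # Weighted blend of a (hypothetical) partner model score and the rules engine.
    return model_weight * model_score + (1 - model_weight) * rules_score(txn)

txn = {"amount": 12_500, "payee_age_days": 2, "ip_country": "FR", "home_country": "US"}
score = blended_score(txn, model_score=0.8)
print(round(score, 2))  # → 0.88; flag for review above a tuned threshold, e.g. 0.7
```

Because the partner model and the rules engine are separate terms in the blend, either can be swapped or re-weighted without recertifying the other, which is the property that lets teams iterate without reopening full compliance reviews.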

Start discovering with Microsoft Marketplace

As your organization moves through Frontier Firm transformation, Microsoft Marketplace provides a unified, governed, and trusted ecosystem to innovate, while streamlining discovery, purchase, and deployment. A growing catalog of AI apps, agents, and models is available in the storefront and contextually surfaced in the Microsoft products you use every day. Whether you are building bespoke agents, deploying proven partner solutions, or blending both approaches, Marketplace helps AI practitioners and technology leaders focus on delivering measurable business impact at scale.

Start searching the Marketplace catalog

Explore Marketplace benefits with access to demos, case studies, and guides

Watch the webinar on “Charting your agent strategy with Marketplace”

Discover cloud-based solutions with Microsoft Marketplace
Your trusted source for cloud solutions, AI apps, and agents. Check out our featured solutions, featured industries, and customer stories.

See all products here


Bridging the gap between AI and medicine: Claude in Microsoft Foundry advances capabilities for healthcare and life sciences customers

Healthcare and life sciences organizations are navigating an era of unprecedented complexity. Administrative burden continues to rise, clinical workflows remain fragmented, and scientific discovery is advancing faster than traditional systems can support. At the same time, trust, safety, and regulatory compliance remain non-negotiable.

From clearing prior authorization backlogs to accelerating clinical research and regulatory submissions, organizations need AI that does more than generate text. They need AI that understands medical and scientific complexity, reasons across multi-step workflows, and can be deployed responsibly at enterprise scale.

Today, we’re excited to announce Claude for Healthcare and Life Sciences, now available in Microsoft Foundry, bringing advanced reasoning, agentic workflows, and life sciences–tuned capabilities to some of the industry’s most demanding real-world use cases. Built on Azure’s secure, enterprise-grade foundation, Foundry ensures these capabilities scale responsibly while integrating with familiar Azure services for data, compliance, and workflow automation.

See what Claude Sonnet on Microsoft Foundry can do for your business

From general intelligence to domain expertise

Claude for Healthcare: A complementary set of tools and resources that enable healthcare providers, payers, and organizations to use Claude for medical and operational workflows, while meeting the highest standards for trust, privacy, and compliance.

Claude for Life Sciences: New components that accelerate every stage of the research and development (R&D) lifecycle, connecting Claude to more scientific platforms and enabling it to generate more consistent, high-quality experimental and clinical protocols.

Together, these capabilities build on major recent advances in Claude’s general intelligence, bringing domain-aware AI into the workflows that matter most.

Built for regulated, real-world workflows

Claude for Healthcare and Life Sciences enables organizations to deploy vertical-specific AI agents tailored to healthcare and life sciences use cases. These agents combine:

Advanced model capabilities optimized for healthcare and scientific reasoning.

Enterprise-grade deployment paths aligned to industry requirements.

Domain-specific connectors, model context protocols (MCPs), and skills to complete specialized tasks.

All within the trusted, unified Microsoft Foundry platform.

Transforming healthcare from insight to action

Healthcare teams are often constrained by administrative burden, fragmented systems, and time-intensive workflows. Claude on Microsoft Foundry helps address these challenges by supporting use cases such as:

Prior authorization: Streamlining documentation review and decision support.

Insurance claims appeal processing: Accelerating appeals with structured reasoning and evidence synthesis.

Care coordination and patient message triage: Helping clinicians prioritize and respond more effectively.

Why it matters

Trusted: Deployed on HIPAA-ready infrastructure through Claude for Enterprise.

Powerful: Frontier-level reasoning across clinical, operational, and coding-related tasks.

Tailored: Purpose-built for healthcare workflows with healthcare-specific MCPs.

Committed: Designed for long-term evolution alongside healthcare organizations.

Accelerating life sciences from discovery to translation

In life sciences, speed and scientific rigor are critical, whether in early discovery or regulatory submission. Claude for Life Sciences supports end-to-end workflows across research, development, and operations.

Key life sciences use cases

Preclinical R&D acceleration

Bioinformatics analysis

Protocol and experimental design

Literature synthesis and hypothesis generation

Clinical trial operations and data management

Regulatory affairs and submission preparation

Life sciences–tuned capabilities

Advanced research and protocol design agents

Code interpreter workflows for bioinformatics

Models trained to support:

Experimental protocol design

Next-hypothesis generation

Plasmid and molecular design tasks

Why it matters

Trusted: Life sciences–specific capabilities built with biosafety guardrails.

Powerful: Frontier AI for bioinformatics, experimental design, and synthesis.

Tailored: Deep integrations with scientific databases and lab platforms.

Committed: Co-developed alongside pharma and research leaders.

Powered by the latest advances in Claude intelligence

These domain-specific capabilities build on major improvements in Claude’s underlying models. According to Anthropic, when assessed on detailed simulations of real-world medical and scientific tasks, Claude Opus 4.5 substantially outperforms earlier releases across benchmarks such as:

PubMed-based medical question answering

Clinical reasoning simulations

Agent-based medical task benchmarks

Combined with ongoing investments in safety, low hallucination rates, and responsible AI, these advances make Claude dramatically more useful for real-world healthcare and life sciences workflows, including prior authorization, care coordination, and regulatory submissions.

One platform. Many models. Built for trust.

With Microsoft Foundry, customers can choose from a growing catalog of industry-leading models—including Claude—while benefiting from a unified platform for governance, observability, deployment, and compliance.

Claude for Healthcare and Life Sciences adds another powerful option for organizations that need:

Domain-aware reasoning

Enterprise-grade controls

Flexible deployment across regulated environments

Get started

Claude for Healthcare and Life Sciences is available today in Microsoft Foundry. To learn more, explore the model catalog or connect with your Microsoft account team to understand how Claude can support your healthcare or life sciences workloads.
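
As a sketch of what a first call might look like, the snippet below assembles an OpenAI-style chat-completions request body for a healthcare task. The deployment name is a placeholder, not a confirmed Foundry identifier, and the exact endpoint and request schema should be taken from the Foundry model catalog for your chosen Claude deployment.

```python
import json

# Hypothetical deployment name -- check the Foundry model catalog for real IDs.
DEPLOYMENT = "claude-example-deployment"

def build_chat_request(system_prompt: str, user_prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble a messages-style request body of the general shape
    chat-completion APIs accept (exact schema varies by provider)."""
    return {
        "model": DEPLOYMENT,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request(
    "You are an assistant that summarizes prior-authorization documentation.",
    "Summarize the attached coverage criteria for knee arthroscopy.",
)
print(json.dumps(request, indent=2))
```

Note that some providers (including Anthropic's native API) take the system prompt as a top-level parameter rather than a message; verify against the deployment's documentation before sending real traffic.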

Explore Claude Sonnet on Microsoft Foundry

The post “Bridging the gap between AI and medicine: Claude in Microsoft Foundry advances capabilities for healthcare and life sciences customers” appeared first on the Microsoft Azure Blog.
Source: Azure

Microsoft’s strategic AI datacenter planning enables seamless, large-scale NVIDIA Rubin deployments

CES 2026 showcases the arrival of the NVIDIA Rubin platform, along with Azure’s proven readiness for deployment. Microsoft’s long-range datacenter strategy was engineered for moments exactly like this, where NVIDIA’s next-generation systems slot directly into infrastructure that has anticipated their power, thermal, memory, and networking requirements years ahead of the industry. Our long-term collaboration with NVIDIA ensures Rubin fits directly into Azure’s forward platform design.

Learn more about Azure AI infrastructure

Building with purpose for the future

Azure’s AI datacenters are engineered for the future of accelerated computing. That enables seamless integration of NVIDIA Vera Rubin NVL72 racks across Azure’s largest next-gen AI superfactories, from the current Fairwater sites in Wisconsin and Atlanta to future locations.

The newest NVIDIA AI infrastructure requires significant upgrades in power, cooling, and performance optimization; however, Azure’s experience with our Fairwater sites and multiple upgrade cycles over the years demonstrates an ability to flexibly enhance and expand AI infrastructure in step with advancements in technology.

Azure’s proven experience delivering scale and performance

Microsoft has years of market-proven experience in designing and deploying scalable AI infrastructure that evolves with every major advancement of AI technology. In lockstep with each successive generation of NVIDIA’s accelerated compute infrastructure, Microsoft rapidly integrates NVIDIA’s innovations and delivers them at scale. Our early, large-scale deployments of NVIDIA Ampere and Hopper GPUs, connected via NVIDIA Quantum-2 InfiniBand networking, were instrumental in bringing models like GPT-3.5 to life, while other clusters set supercomputing performance records, demonstrating we can bring next-generation systems online faster and with higher real-world performance than the rest of the industry.

We unveiled the first and largest implementations of both the NVIDIA GB200 NVL72 and NVIDIA GB300 NVL72 platforms, architecting racks into single supercomputers that train AI models dramatically faster and helping Azure remain a top choice for customers seeking advanced AI capabilities.

Azure’s systems approach

Azure is engineered so that compute, networking, storage, software, and infrastructure all work together as one integrated platform. This is how Microsoft builds a durable advantage into Azure and delivers cost and performance breakthroughs that compound over time.

Maximizing GPU utilization requires optimization across every layer. In addition to adopting NVIDIA’s new accelerated compute platforms early, Azure’s advantages come from the surrounding platform as well: high-throughput Blob storage, proximity placement and region-scale design shaped by real production patterns, and orchestration layers like CycleCloud and AKS tuned for low-overhead scheduling at massive cluster scale.

Azure Boost and other offload engines clear IO, network, and storage bottlenecks so models scale smoothly. Faster storage feeds larger clusters, stronger networking sustains them, and optimized orchestration keeps end-to-end performance steady. First-party innovations reinforce the loop: liquid-cooling Heat Exchanger Units maintain tight thermals, Azure hardware security module (HSM) silicon offloads security work, and Azure Cobalt delivers exceptional performance and efficiency for general-purpose compute and AI-adjacent tasks. Together, these integrations ensure the entire system scales efficiently, so GPU investments deliver maximum value.

This systems approach is what makes Azure ready for the Rubin platform. We are delivering new systems and establishing an end-to-end platform already shaped by the requirements Rubin brings.

Operating the NVIDIA Rubin platform

NVIDIA Vera Rubin Superchips will deliver 50 PF of NVFP4 inference performance per chip and 3.6 EF NVFP4 per rack, a five-times jump over NVIDIA GB200 NVL72 rack systems. Azure has already incorporated the core architectural assumptions Rubin requires:

NVIDIA NVLink evolution: The sixth-generation NVIDIA NVLink fabric expected in Vera Rubin NVL72 systems reaches ~260 TB/s of scale-up bandwidth, and Azure’s rack architecture has already been redesigned to operate with those bandwidth and topology advantages.

High-performance scale-out networking: The Rubin AI infrastructure relies on ultra-fast NVIDIA ConnectX-9 1,600 Gb/s networking, delivered by Azure’s network infrastructure, which has been purpose-built to support large-scale AI workloads.

HBM4/HBM4e thermal and density planning: The Rubin memory stack demands tighter thermal windows and higher rack densities; Azure’s cooling, power envelopes, and rack geometries have already been upgraded to handle the same constraints.

SOCAMM2-driven memory expansion: Rubin Superchips use a new memory expansion architecture; Azure’s platform has already integrated and validated similar memory extension behaviors to keep models fed at scale.

Reticle-sized GPU scaling and multi-die packaging: Rubin moves to massively larger GPU footprints and multi-die layouts. Azure’s supply chain, mechanical design, and orchestration layers have been pre-tuned for these physical and logical scaling characteristics.
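
The headline performance figures above are internally consistent; a quick back-of-the-envelope check, assuming the "NVL72" designation means 72 Rubin GPU packages per rack:

```python
# Per-chip NVFP4 inference performance, in petaflops (from NVIDIA's stated figures).
pf_per_chip = 50
chips_per_rack = 72            # assumption implied by the "NVL72" designation

rack_pf = pf_per_chip * chips_per_rack   # 3,600 PF per rack
rack_ef = rack_pf / 1000                 # convert petaflops to exaflops

print(f"{rack_ef} EF NVFP4 per rack")    # matches the 3.6 EF figure above
```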

Azure’s approach in designing for next generation accelerated compute platforms like Rubin has been proven over several years, including significant milestones:

Operated the world’s largest commercial InfiniBand deployments across multiple GPU generations.

Built reliability layers and congestion management techniques that unlock higher cluster utilization and larger job sizes than competitors, reflected in our ability to publish industry-leading large-scale benchmarks, for example multi-rack MLPerf runs that competitors have not replicated.

AI datacenters co-designed with Grace Blackwell and Vera Rubin from the ground up to maximize performance and performance per dollar at the cluster level.

Design principles that differentiate Azure

Pod exchange architecture: To enable fast servicing, Azure’s GPU server trays are designed to be quickly swappable without requiring extensive rewiring, improving uptime.

Cooling abstraction layer: Rubin’s multi-die, high bandwidth components require sophisticated thermal headroom that Fairwater already accommodates, avoiding expensive retrofit cycles.

Next-gen power design: Vera Rubin NVL72 racks demand increasing watt density; Azure’s multi-year power redesign (liquid-cooling loop revisions, CDU scaling, and high-amp busways) ensures immediate deployability.

AI superfactory modularity: Microsoft, unlike other hyperscalers, builds regional supercomputers rather than singular megasites, enabling more predictable global rollout of new SKUs.

How co-design leads to user benefits

The NVIDIA Rubin platform marks a major step forward in accelerated computing, and Azure’s AI datacenters and superfactories are already engineered to take full advantage. Years of co-design with NVIDIA across interconnects, memory systems, thermals, packaging, and rack scale architecture means Rubin integrates directly into Azure’s platform without rework. Rubin’s core assumptions are already reflected in our networking, power, cooling, orchestration, and pod exchange design principles. This alignment gives customers immediate benefits with faster deployment, faster scaling, and faster impact as they build the next era of large-scale AI.
Source: Azure

Azure updates for partners: December 2025

At Microsoft Ignite 2025, we explored what it means for organizations to move into the era of Frontier transformation. This shift is focused on embedding AI across every part of the business to improve decision-making, increase speed, and create new value. Organizations leading in AI make it foundational. They rethink processes and integrate new technologies from the start to improve efficiency.

For partners, this move toward Frontier represents a significant opportunity to lead customers into this new era. By building AI-powered solutions, connecting data for intelligent insights, and deploying Microsoft Azure’s cloud-ready platforms, partners can deliver value faster and scale confidently through the Microsoft ecosystem.

Microsoft Ignite came with a significant number of announcements, so I’ve gathered the Azure updates that matter most for partners. These are the capabilities that can strengthen your ability to deliver intelligent solutions, drive operational efficiency, and differentiate your product or service in the market. You can also explore how partners are turning momentum into action, access highlights, and grab practical guidance from my Microsoft Ignite session.

Azure Copilot: Now in private preview

Azure Copilot introduces specialized agents to the Azure portal, PowerShell, and CLI. Powered by Azure Resource Manager (ARM)-driven scenarios and advanced AI models from Microsoft and partners, Azure Copilot streamlines migration, assessment, and modernization activities with data-driven insights, guided workflows, and improved governance across customer environments. For partners, this creates a unified way to deliver intelligent automation for cloud workloads, accelerate modernization projects, reduce operational overhead, and strengthen governance through integrated agentic workflows across Azure and GitHub Copilot.

For more information, check out these additional resources:

Blog: Ushering in the Era of Agentic Cloud Operations with Azure Copilot

Microsoft Ignite session: Agentic AI Tools for Partner-Led Migration and Modernization Success

Microsoft Ignite session: Partners: Accelerate Secure Migrations and Innovate in the Era of AI

Foundry Control Plane: Now in public preview

Microsoft Foundry Control Plane extends Agent 365 by bringing unified visibility, security, and control to AI agents operating across the Microsoft Cloud. It centralizes policy management, lifecycle governance, and observability, offering a consistent way to manage agent behavior and performance. By providing enterprise-grade governance and security capabilities that support safe, scalable, and efficient agent management for customers across varied environments, Control Plane empowers confident deployment and operation of AI-powered solutions.

For more information, review these additional resources:

Microsoft Learn: What is the Microsoft Foundry Control Plane?

Microsoft Ignite session: Build Partner Advantage: Drive Key AI Use-Cases with Azure Tech Stack

Foundry IQ: Now in public preview

Foundry IQ provides a unified endpoint for agent knowledge, automating source routing and retrieval workflows through Azure AI Search. It equips agents to work with enterprise content securely and with greater contextual grounding by connecting a unified knowledge base to multiple data sources. For partners, this creates a streamlined way to build retrieval augmented generation (RAG) solutions, link agents to customer-specific knowledge sources, and deliver consistent, context-rich capabilities that empower organizations to unlock more value from their data.

Read our blog to learn more: Foundry IQ: Unlocking ubiquitous knowledge for agents
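
To illustrate the retrieval-augmented generation pattern that Foundry IQ automates, here is a deliberately simplified, self-contained sketch: score documents against a query, then ground the prompt with the top hit. The real service routes across connected sources with enterprise security; none of the function names below are Foundry APIs.

```python
def score(query: str, doc: str) -> int:
    """Naive relevance score: count of query terms that appear in the document."""
    text = doc.lower()
    return sum(term in text for term in query.lower().split())

def retrieve(query: str, corpus: dict, k: int = 1) -> list:
    """Return the ids of the k most relevant documents."""
    ranked = sorted(corpus, key=lambda doc_id: score(query, corpus[doc_id]), reverse=True)
    return ranked[:k]

def grounded_prompt(query: str, corpus: dict) -> str:
    """Build a prompt that grounds the model's answer in retrieved content."""
    top = retrieve(query, corpus)[0]
    return f"Context:\n{corpus[top]}\n\nQuestion: {query}"

corpus = {
    "returns": "Our return policy allows refunds within 30 days of purchase.",
    "shipping": "Standard shipping takes 5 business days within the US.",
}
print(retrieve("how long does shipping take", corpus))  # -> ['shipping']
```

Production systems replace the keyword scorer with vector and hybrid search (as Azure AI Search does), but the grounding step, injecting retrieved content into the prompt, has the same shape.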

Fabric IQ: Now in public preview

Microsoft Fabric IQ offers a live, unified view of enterprise data and AI agents, organizing information by business concepts and using OneLake to support real-time analytics across hybrid and multicloud environments. For partners, Fabric IQ creates a foundation for building intelligent, context-aware solutions that align to business processes, accelerate analytics performance, and strengthen governance to improve reliability and efficiency across customer deployments.

For more information, check out these additional resources:

Blog: From Data Platform to Intelligence Platform: Introducing Microsoft Fabric IQ

Microsoft Ignite session: Microsoft Fabric IQ: Turning unified data into unified intelligence

Microsoft Ignite session: How Microsoft’s data platform is creating value for partners

Microsoft Agent Factory: Now available

Microsoft Agent Factory is a new program designed for organizations that want to move from experimentation to execution faster. At the heart of this program is the Microsoft Agent Pre-Purchase Plan (P3), which streamlines procurement and reduces complexity. With P3, partners can offer their customers access to 32 Microsoft services through one flexible pool of funds, eliminating the need to manage multiple contracts or choose between platforms. This single metered plan not only reduces upfront licensing and provisioning but also supports greater predictability for organizations investing in AI innovation. Eligible organizations can also tap into hands-on support from top AI Forward Deployed Engineers (FDEs) and access tailored, role-based training to boost AI fluency across teams. Together, they unlock new opportunities for growth and innovation while encouraging customers to confidently embrace the future of AI.

Read our blog to learn more: Accelerate innovation with Microsoft Agent Factory

Microsoft Foundry: Anthropic Claude models are now available

Microsoft Foundry now offers Anthropic Claude models that support advanced reasoning for research, coding, and agentic workflows, all within the Microsoft unified governance and observability framework. For partners, this expands choice across model capabilities to develop multistep agents using the right model per task while maintaining governance and deployment consistency across Azure, Foundry, and Microsoft 365 Copilot environments.

Read our blog to learn more: Introducing Anthropic’s Claude models in Microsoft Foundry: Bringing Frontier intelligence to Azure

Resale enabled offers: Now available through Microsoft Marketplace

Resale enabled offers are now available in nearly all Marketplace-supported regions, allowing software companies to work with channel partners to manage listings and expand reach. For partners, this creates new channel-led sales opportunities by making it easier to promote and manage listings on behalf of publishers and reach more customers globally without adding operational complexity.

For more information, check out these resources:

Marketplace: Cloud solutions, AI apps, and agents

Blog: The Microsoft Marketplace opportunity for channel ecosystem

Microsoft Ignite session: Executing on the channel-led marketplace opportunity for partners

Microsoft Ignite session: Marketplace success for partners—from SMB to enterprise

Microsoft Ignite session: Partner: Benefits for Accelerating Software Company Success

Azure HorizonDB for PostgreSQL: Now in private preview

Azure HorizonDB is a new PostgreSQL cloud database for mission-critical applications and modern AI workloads, offering auto-scaling storage, rapid compute scale out, advanced vector indexing, and integration with the Microsoft AI and analytics ecosystem. For partners, HorizonDB supports the development of intelligent and resilient applications, modernization of legacy systems, and creation of high-performance data platforms designed for security, scale, and future AI workloads.

Check out these additional resources:

Blog: Announcing Azure HorizonDB

Preview sign-up: Apply for the preview here

Microsoft Ignite session: Azure HorizonDB: Deep Dive into a New Enterprise-Scale PostgreSQL

Microsoft Agent 365: The control plane for AI agents

Agent 365 extends the Microsoft user management infrastructure to AI agents, empowering organizations to govern agents across Microsoft 365, Azure, and Foundry. Available in the Microsoft 365 admin center with the Frontier program, it combines capabilities from Microsoft 365 Defender, Entra, Purview, and Microsoft 365 for unified security, productivity, and management. For partners, this creates a consistent approach to deploying, securing, and managing fleets of AI agents across customer environments with streamlined governance and operational clarity.

Read our blog to learn more: Microsoft Agent 365: The control plane for AI agents

Looking forward

Microsoft Ignite is about more than product updates; it’s a time to celebrate what we can achieve together as partners. Continue your journey and explore the Cloud & AI Platforms partner sessions at Microsoft Ignite and read the Azure at Microsoft Ignite 2025: All the intelligent cloud news explained blog post for more product updates.

Stay connected with us. Follow Microsoft Partner on LinkedIn, join the conversation in our Partner News Community, and explore the Microsoft partner site to keep your momentum going.

For details on recent announcements, please read the “What’s new in Azure for Partners” newsletter on the Microsoft Community Hub and follow the tag “Azure News” to stay updated.

November update: What’s new in Azure for Partners | Microsoft Community Hub

October update: What’s new in Azure for Partners | Microsoft Community Hub
Source: Azure

Microsoft named a Leader in Gartner® Magic Quadrant™ for AI Application Development Platforms

A recognition for AI innovation

Microsoft is recognized as a Leader in the 2025 Gartner® Magic Quadrant™ for AI Application Development Platforms and is positioned furthest for Completeness of Vision. This leadership reflects a long‑term conviction: the next wave of applications is agentic, and real customer impact requires far more than great demos. Organizations need agents grounded in real data and tools, capable of driving business workflows, and governed with end‑to‑end observability at scale. Our investment in agent frameworks, orchestration, and enterprise‑grade governance is how we make that full journey real and practical for every customer.

Read the full Gartner report

Why we believe this matters

Gartner evaluates vendors on two dimensions: Completeness of Vision (where the platform is headed) and Ability to Execute (whether it can deliver today). Being positioned furthest on vision isn’t about having the boldest roadmap: it’s about whether that vision translates into the real capabilities customers need for the future of AI.

Microsoft Foundry is our unified platform for building, deploying, and governing AI applications—and over the past year, we’ve focused it on four areas that customers tell us separate production AI from proof-of-concept:

Real data, real tools. Agents are only as useful as what they can access. Foundry IQ provides a single secure grounding API that connects agents to enterprise data, while Foundry Tools offers over 1,400 pre-built connectors for document processing, translation, speech, and business systems.

Workflow integration, not just conversation. The shift from chatbot to agent means moving from Q&A to action. Foundry Agent Service supports multi-agent orchestration in which agents can hand off tasks, coordinate decisions, and drive end-to-end business processes, deployable directly into Copilot or your applications.

Observability and governance at scale. When agents act autonomously, you need to see what they’re doing and why. Foundry Control Plane provides organization-wide visibility, audit trails, and policy enforcement. “Trust but verify” doesn’t scale without tooling.

Models from cloud to edge. Build and run AI models wherever your workloads live—from cloud to edge. Fine-tune and deploy models from Foundry Models using enterprise-grade GenAI Ops, then run them on-device with Foundry Local for low-latency, offline, or regulated scenarios.
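
The "hand off tasks, coordinate decisions" pattern behind the second pillar can be sketched in a few lines of plain Python. This is a conceptual illustration of an agent handoff chain, not the Foundry Agent Service API; the agent names and routing rules are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A minimal agent: claims tasks it recognizes; otherwise the task moves on."""
    name: str
    can_handle: Callable[[str], bool]
    handle: Callable[[str], str]

def run(task: str, agents: list) -> str:
    """Route a task through a handoff chain: the first agent that claims it, handles it."""
    for agent in agents:
        if agent.can_handle(task):
            return f"{agent.name}: {agent.handle(task)}"
    return "escalated to human review"  # no agent claimed the task

# Two toy agents with keyword-based routing rules.
billing = Agent("billing", lambda t: "invoice" in t, lambda t: "invoice reissued")
triage = Agent("triage", lambda t: "message" in t, lambda t: "routed to clinician")

print(run("patient message about refill", [billing, triage]))
```

Real orchestration frameworks add shared state, tool calls, and observability around this core routing loop, which is exactly where the governance tooling described above comes in.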

With these pillars in place, Foundry delivers everything organizations need to build AI applications and multi-agent systems at scale. That’s why we’ve ensured it works seamlessly with the tools developers and businesses use most. Foundry integrates deeply with development tools including Visual Studio Code, GitHub, Azure, and productivity tools such as Microsoft 365, Microsoft Teams, and the broader enterprise stack.

Explore Microsoft Foundry

Walking the talk: Our agent-driven approach

This year, Microsoft adopted a fundamentally new approach for preparing our submission for AI Application Development Platforms. Instead of relying on manual data gathering and coordination, our team developed custom agents designed to collect, organize, and validate all the information required for the evaluation.

How the agent was created:

The agent’s development is detailed in a recent blog post, which outlines the technical architecture and methodology behind its creation. Built using Microsoft Agent Framework, our open-source offering, the agent leverages advanced orchestration capabilities and multimodal content processing. It was designed to automate the complex process of assembling submission data, ensuring accuracy and completeness while reducing manual effort.

Technical highlights:

The agent uses a structured prompt and workflow, as specified here. It integrates with the Microsoft Foundry platform-as-a-service (PaaS) model, supporting both pay-as-you-go and provisioned throughput options.

Benefits of the agent-driven process:

By automating the submission workflow, the agent improved data accuracy and transparency, allowing our experts to focus on strategic insights rather than manual compilation. The process was more efficient, reduced the risk of errors, and ensured that our submission was both comprehensive and up to date.

This innovation reflects Microsoft’s commitment to technical excellence and continuous improvement, providing customers with greater confidence in the quality and reliability of its AI solutions. By streamlining critical processes, Microsoft delivers more accurate, transparent, and timely updates, enabling organizations to make informed decisions and accelerate innovation with enterprise-grade AI platforms that maintain compliance and security standards.

Empowering organizations with Microsoft Foundry

We believe our recognition in the Gartner Magic Quadrant™ for AI Application Development Platforms is a testament to Microsoft’s commitment to empowering organizations to develop robust, scalable, and intelligent AI solutions. The agent-driven submission process exemplifies our drive to innovate, operate transparently, and share our process with the community.

More than 80,000 enterprises and software development companies across healthcare, manufacturing, and retail industries are leveraging Foundry to deliver transformative solutions—from predictive supply chain insights to personalized customer experiences. These success stories highlight how Foundry accelerates innovation while maintaining trust and compliance.

Genie is offering provider practices a way to use AI to converse with patients through their preferred channel. This will reduce the amount of administrative work and cost for practices to simply give patients the answers to their questions.
Sidd Shah, Vice President of Strategy & Business Growth, healow

With Genix Copilot, we have unlocked the power of generative and agentic AI from shop floor to top floor, cutting troubleshooting time by 60-80%. Genix Copilot on Azure OpenAI is reshaping industrial performance and advancing environmental goals, turning data into real outcomes for customers across very different sectors.
Rajesh Ramachandran, Global Chief Digital Officer, Process Automation, ABB

Foundry Agent Service and Microsoft Agent Framework connect our agents to data and each other, and the governance and observability in Microsoft Foundry provide what KPMG firms need to be successful in a regulated industry.
Sebastian Stöckle, Global Head of Audit Innovation and AI, KPMG International

Microsoft is at the cutting edge of AI-based shopping, and with Ask Ralph, we’re blending the world of fashion and the world of technology to reimagine how consumers shop online.
Naveen Seshadri, Chief Digital Officer, Ralph Lauren

Thank you to our customers and partners for making this recognition possible. We look forward to helping you grow more with Microsoft Foundry.

Discover resources for your AI journey

Read the Gartner report

Discover more at Microsoft Customer Stories

Learn more about Microsoft Foundry

*Gartner, Magic Quadrant for AI Application Development Platforms, 17 November 2025

Gartner and Magic Quadrant are registered trademarks of Gartner, Inc. and/or its affiliates and are used herein with permission. All rights reserved.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s business and technology insights organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Source: Azure

Microsoft’s commitment to supporting cloud infrastructure demand in the United States

Today, we are sharing progress on our infrastructure expansions across the United States that are supporting the tremendous growth in customer demand for cloud and AI services. Recently we announced Fairwater sites in Wisconsin and Atlanta, and now we are expanding with the launch of our East US 3 region in the Greater Atlanta Metro area in early 2027, and the expansion of five existing datacenter regions across the United States.

Learn about Microsoft’s investment in datacenter infrastructure

New Cloud Region to open in Atlanta in early 2027

Microsoft’s global cloud network serves as the foundation that underpins daily life, innovation, and economic growth. With more regions than any other cloud provider, Microsoft’s global cloud infrastructure includes more than 70 regions, over 400 datacenters worldwide, over 370,000 miles of terrestrial and subsea fiber, and over 190 edge sites, making it one of the largest, most trusted, and most secure in the world.

Our datacenter footprint in the Greater Atlanta Metro area is already running the most advanced AI supercomputers on the planet, and in early 2027, this footprint in Atlanta will expand to support all our customer workloads out of the East US 3 datacenter region. This region will be designed to support the most advanced Azure workloads, on a foundation of trust for all organizations.

Get started with Azure today

The East US 3 region will offer additional resiliency capabilities through Availability Zones which are unique physical datacenter locations equipped with independent power, networking, and cooling. Availability Zones provide organizations with peace of mind knowing their applications can be designed with increased tolerance to failures by incorporating functionality such as zone-redundant storage.
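
Conceptually, zone-redundant storage keeps synchronous copies of each write in multiple Availability Zones, so losing one zone loses neither data nor availability. A toy replication model, not the actual ZRS implementation, makes the idea concrete:

```python
def write(zones: dict, key: str, value: str) -> None:
    """Synchronously replicate a write to every availability zone."""
    for replica in zones.values():
        replica[key] = value

def read(zones: dict, key: str) -> str:
    """Serve the read from any zone that is still healthy."""
    for replica in zones.values():
        if replica is not None and key in replica:
            return replica[key]
    raise KeyError(key)

zones = {"zone1": {}, "zone2": {}, "zone3": {}}
write(zones, "doc", "v1")
zones["zone1"] = None          # simulate the loss of one zone
print(read(zones, "doc"))      # -> v1, data survives the zone failure
```

The real service layers quorum protocols, failover, and consistency guarantees on top of this basic idea; applications opt in simply by selecting a zone-redundant SKU when provisioning storage.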

Microsoft’s datacenter community pledge is to build and operate digital infrastructure that addresses societal challenges and creates benefits for the communities in which we operate and where our employees live and work. The East US 3 region is being designed to meet Microsoft’s carbon, water, waste, and sustainability commitments. In developing the East US 3 region, we have water conservation and replenishment top of mind. The region in Georgia is designed to achieve LEED Gold Certification: a framework for healthy, highly efficient, and cost-saving buildings, offering environmental, social, and governance benefits.

Delivering a resilient cloud infrastructure

We’re empowering all organizations to adopt a resilient cloud strategy that enables them to take advantage of the full capabilities of the cloud. The cloud is not a single region or location but a network of regions across the United States and the world that enables access to Azure services, resources, and capacity across a broader set of geographic areas.

Our infrastructure projects in the United States are driven by the need for greater resiliency, agility, and flexibility in today’s dynamic cloud environment. With six datacenter regions with Availability Zones (AZs) already in operation, we will be adding AZs in the United States to the following existing regions:

North Central US by the end of 2026.

West Central US in early 2027.

US Gov Arizona in early 2026.

In 2026, we will add Availability Zones in regions where we already have three, including East US 2 in Virginia and South Central US in Texas. The expansion of our Availability Zone footprint will provide an additional supply of Azure infrastructure capacity, giving customers in these regions the confidence to grow and more options when considering a multi-region cloud architecture. Leveraging a multi-region cloud architecture with any of our United States regions further strengthens application performance, latency, and the overall resilience and availability of cloud applications.

Organizations are already using Azure to transform their applications in the era of AI, with a resilient cloud foundation:

The University of Miami: The University of Miami is a leading-edge teaching and research institution located on Florida’s southern tip, part of a region known as Hurricane Alley. With a steady threat of extreme weather–related outages, the University looked to Microsoft Azure to improve its disaster recovery capabilities and shift key on-premises assets to the cloud. Pursuing a well-architected strategy, the University now takes advantage of Azure availability zones to safeguard against outages, stay operational during maintenance and improvements, and help ensure resilience and reliability. Additionally, the University is realizing greater agility, faster response time to business needs, and reduced costs by continuing to pursue Azure-backed solutions.

The State of Alaska: The State of Alaska is reducing costs by consolidating infrastructure and decommissioning legacy systems. It is improving resiliency and reliability while strengthening security by migrating systems to Azure, where geography is no longer a challenge. 

Supporting our government customers

We remain committed to enabling resilient, compliant cloud strategies for our government customers. In early 2026, we will expand our Azure Government footprint with the addition of three Availability Zones in the US Government Arizona region, giving agencies and partners more options for zone-redundant architectures to improve recovery time objectives (RTO), recovery point objectives (RPO), and mission continuity aligned with CMMC and NIST guidance.

This expansion supports growing demand for segmented, resilient architectures that isolate sensitive workloads while meeting regulatory requirements for availability and security. The US Government datacenter region in Arizona gives customers in the Defense Industrial Base (DIB) additional options, with benefits in proximity, latency, and mission alignment, offering an alternative to US Government Virginia for new deployments.

These investments complement the Azure for US Government Secret cloud region launched earlier this year, reinforcing our commitment to secure, compliant, and mission-ready cloud solutions. Discover how Microsoft is advancing AI and infrastructure innovation in our H200: Accelerating AI at Scale blog.

Discover what Azure can do for you

Boost your cloud strategy

Use the Cloud Adoption Framework to achieve your cloud goals with best practices, documentation, and tools for business and technology strategies.

Use the Well-Architected Framework to optimize workloads with guidance for building reliable, secure, and performant solutions on Azure.

By choosing to deploy services through any of our Azure regions, customers can leverage the diverse and robust infrastructure that Microsoft is developing across the United States. This approach not only offers resilience and flexibility but also paves the way for innovative solutions that drive economic growth and a more connected future.

Where to find more resources:

Take a virtual tour of Microsoft datacenters

Learn more about Microsoft’s global infrastructure

Microsoft Datacenters: Illuminating the unseen power of the cloud

Learn about Georgia—Microsoft Local

Learn how Microsoft is driving next-generation AI and infrastructure innovation

The post Microsoft’s commitment to supporting cloud infrastructure demand in the United States appeared first on Microsoft Azure Blog.
Source: Azure

Actioning agentic AI: 5 ways to build with news from Microsoft Ignite 2025

Energy at Microsoft Ignite was electric. Over 20,000 attendees gathered in San Francisco, with 200,000 joining us digitally to explore the future of cloud and AI. What continues to inspire me most are the responses online and the conversations happening in our technical communities: how quickly you’re turning these announcements into action and building solutions for billions of people, which will ultimately shape our future.

Join us at Microsoft AI Dev Days (Dec 10-11, 2025) and start building

As someone who lives and breathes technical audience marketing across Microsoft Azure, Foundry, Fabric, databases, and developer tools, I can say our Azure platform announcements resonated because they help solve real problems. Now, the work begins.

So, what is everyone saying about the top news? And where do we go from here? Let’s reflect on the top five cloud and AI stories from Microsoft Ignite across the web right now and then unpack how these innovations can be put into practice across Microsoft AI Dev Days, Microsoft AI Tour and more.

1. Claude comes to Microsoft Foundry: Choice for builders

The technical community lit up about what Claude models in Microsoft Foundry unlock. I really like how this eWeek article describes the significance as a “partnership [that] removes one of the biggest historical blockers to adopting new AI tools: vendor complexity.”

Developers told us they wanted access to Claude Sonnet and Claude Opus alongside OpenAI’s GPT models. They wanted the ability to select the right models for their use cases, and the tools to evaluate for tone, safety, performance, and more. Now Azure is the only cloud supporting access to both Claude and GPT frontier models for its customers.

Response from the community is clear: model diversity matters. When you’re building AI apps and agents, having options means you can optimize for what matters most to your users. Microsoft Foundry gives you flexibility while maintaining enterprise-grade security, compliance, and governance.

My favorite watches and reads:

Microsoft Exec talks OpenAI, AI Bubble, Data Centers, AI Safety, and more

Everything you need to build AI apps & agents

Foundry: The Top AI Announcement from Microsoft Ignite 2025

Microsoft Brings Anthropic’s Claude Opus 4.5 to Foundry Preview

Deploy and compare models in Microsoft Foundry

2. IQ Revolution: Semantic understanding that works

The new portfolio of Microsoft IQ offerings has data engineers and architects buzzing. One blogger captured it perfectly: “This is Microsoft rewiring the connective tissue between productivity apps, analytics platforms, and AI development environments to create something that’s been missing from the enterprise AI conversation.” Knowledge is how the shift to agentic AI becomes practical rather than theoretical.

Foundry IQ streamlines knowledge retrieval from multiple sources including SharePoint, Fabric, and the web. Powered by Azure AI Search, it delivers policy-aware retrieval without requiring you to build complex custom RAG pipelines. Developers get pre-configured knowledge bases and agentic retrieval in a single API that “just works,” while also respecting user permissions, which is what I heard resonating on the ground.

Designed with Foundry IQ integration, Fabric IQ creates a semantic intelligence layer that unifies analytics, time-series, and operational data around shared business concepts. It lets you build and deploy agents that reason consistently across domains while cutting down the schema mapping, data wrangling, and prompt engineering that normally eat the most time.

More must-reads:

CIO Talk: Microsoft Gets IQ

Microsoft’s Fabric IQ teaches AI agents to understand business operations, not just data patterns

Microsoft Ignite 2025: How Data-Driven Intelligence Powers the Age of AI Agents

3. Azure HorizonDB: PostgreSQL power

PostgreSQL developers are celebrating the preview of Azure HorizonDB, which you can sign up for here. This fully managed, Postgres-compatible database service is designed from the ground up for modern cloud-native and AI workloads.

The technical community embraced it wholeheartedly, seeing their priorities reflected. Azure HorizonDB delivers up to 3x more throughput than open-source Postgres for transactional workloads, with auto-scaling storage up to 128 TB and scale-out compute supporting up to 3,072 vCores. Sub-millisecond multi-zone commit latencies support apps that are both fast and resilient.

What really got developers excited was built-in vector indexing with advanced filtering using DiskANN, which brings AI intelligence directly to where your data lives. This helps developers build semantic search and RAG patterns without the complexity and latency of managing separate vector stores or moving data across systems. Integration with Microsoft Foundry also streamlines setup and AI app development.
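For intuition, the retrieval pattern that a vector index accelerates can be shown with a tiny in-memory nearest-neighbor search. This is a conceptual sketch only, with made-up three-dimensional vectors standing in for real embeddings; HorizonDB’s DiskANN index performs this kind of ranking at scale inside the database:

```python
import math

# Conceptual sketch of vector retrieval for RAG: embed documents, then rank
# them by cosine similarity to a query embedding. The 3-dimensional vectors
# below are toy stand-ins for real embeddings, which typically have hundreds
# or thousands of dimensions.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical document embeddings, keyed by document name.
documents = {
    "invoice policy": [0.9, 0.1, 0.0],
    "vacation policy": [0.1, 0.9, 0.1],
    "security policy": [0.0, 0.2, 0.9],
}

def top_k(query_vec, k=2):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(documents, key=lambda d: cosine(documents[d], query_vec), reverse=True)
    return ranked[:k]

# A query embedding close to "security policy" retrieves that document first.
print(top_k([0.1, 0.1, 0.95], k=1))
```

A brute-force scan like this is linear in the number of documents; an approximate nearest-neighbor index such as DiskANN exists precisely to avoid that scan over millions of rows.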

And for those migrating from Oracle, GitHub Copilot-powered migration tools in the PostgreSQL Extension for VS Code make the transition smoother than ever. The community has spoken: they want PostgreSQL flexibility combined with Azure enterprise capabilities, and Azure HorizonDB delivers.

Be sure to check these out:

Announcing Azure HorizonDB

Water, Water Everywhere: How Microsoft Ignite 2025 Turned Data Into Intelligence

Microsoft Ignite 2025: AI + Databases = The Next Big Shift with Microsoft’s Shireesh Thota

Azure HorizonDB: Microsoft goes big with PostgreSQL

4. Azure Copilot: Agents change the game

The announcement of the new Azure Copilot gives IT professionals a new reason to come to the cloud. Now supporting the full cloud operations lifecycle, Azure Copilot features a collection of specialized agents for migration, deployment, observability, optimization, resiliency, and troubleshooting. A star within this new experience for IT pros is the migration agent, which turns weeks of manual discovery into rapid progress by scanning environments, identifying legacy workloads, and auto-generating infrastructure-as-code templates so migrations are fast and clean.

Azure Copilot makes migrating and modernizing far more manageable: it surfaces cost improvements, right-sizes environments, and diagnoses issues across containers, virtual machines, and databases, while honoring role-based access control (RBAC) policies and compliance guardrails. Available at no extra cost in the Azure Portal, CLI, and the new Operations Center, this new agentic interface transforms modernization and gives IT teams the ability to be more proactive as they build on the Azure foundation.

Smart takes on the news:

Making Sense of Microsoft Ignite 2025 for Azure and AI Architects

How Azure Copilot’s New Agents Automate DevOps and SecOps

Azure Update – IGNITE SPECIAL – 21st November 2025

Microsoft’s Azure Copilot to support agentic cloud operations at scale with new AI agents

Azure Copilot Agents Launch in Private Preview

5. Azure hardware: Limitless power and security

Performance is everything when you’re training large models or running inference at scale, and that’s why the hardware museum behind our ‘Frontier Street’ activation at Ignite captured the community’s imagination.

When you stood in front of a blade from our Azure AI infrastructure server, with NVIDIA Blackwell Ultra GPUs presented like a museum piece, your excitement made it feel even more like stepping into an art gallery—with a spotlight on Cobalt, Maia, and Microsoft’s unique NVIDIA partnership.

And it didn’t stop there. Microsoft’s custom Azure silicon now includes the Azure Boost DPU, the first in-house data processing unit, and Azure Integrated HSM for top-notch security. We can’t wait to keep bringing these innovations directly to you.

See what others are saying:

Announcing Cobalt 200: Azure’s next cloud-native CPU

Microsoft has Designed its Own 132 Core Processor: Azure Cobalt 200

Microsoft’s Azure Cobalt 200 ARM Chip Delivers 50% Performance Boost

Powering Modern Cloud Workloads with Azure Boost: Ignite 2025

Your keyboard, your impact

Here’s what makes this moment special: announcements at Ignite aren’t endpoints; they’re starting points. You’re the next gen creators who will take these tools and build new agentic experiences we can’t yet imagine. Your implementations surface insights that shape how Azure evolves. Your real-world patterns are what drive product decisions. The relationship between announcement and innovation is a partnership, and the technical community drives that process forward.

Create the future with us:

Tune into Microsoft AI Dev Days: December 10-11. Starting today, we’re hosting two days of live-streamed technical content on the Reactor, broadcast across all dev channels. These sessions are designed for developers who want to go deep on building with the technologies announced at Ignite. Mark your calendars and join the community for hands-on workshops and technical deep dives that will bring you to the cutting edge of AI innovation.

Join us at a Microsoft AI Tour location near you. We’re coming to your city with hands-on technical workshops. These one-day, free events focus on getting you keyboard time with the technologies announced at Ignite. We’re continuing to hit the road in 2026 to 30 more locations.

Catch up with your tech community. Ignite delivered incredible technical content across hundreds of sessions. What makes the following three sessions special is how deep they go into the tech with information about how to start implementing in your own environments.

Community Theater: Ask Me Anything with Scott Hanselman

Community Theater: Learn Infrastructure-as-Code through Minecraft

Community Theater: Cloud Perspectives: Cloud Management & Ops Platform Team Insights

Take your learning to the next level with curated skilling plans. Whether you’re new to these technologies or looking to deepen your expertise, Microsoft skilling plans can accelerate your career from fundamentals to advanced implementation.

What are you building with the latest technologies announced at Ignite? Join the conversation in Azure’s technical community.

Join Microsoft Ignite 2026 early!
Sign up now to join the Microsoft Ignite early-access list and be eligible to receive limited‑edition swag at the event.

Save the date

The post Actioning agentic AI: 5 ways to build with news from Microsoft Ignite 2025 appeared first on Microsoft Azure Blog.

Azure Storage innovations: Unlocking the future of data

Microsoft is redefining what’s possible in the public cloud and driving the next wave of AI-powered transformation for organizations. Whether you’re pushing the boundaries with AI, improving the resilience of mission-critical workloads, or modernizing legacy systems with cloud-native solutions, Azure Storage has a solution for you.

Learn more about Azure Storage tools and products

At Microsoft Ignite 2025 and KubeCon North America last month, we showcased the latest innovations in Azure Storage, powering your workloads. Here is a recap of those releases and advancements.

Innovating for the future with AI

Azure Blob Storage provides a unified storage foundation for the entire AI lifecycle, powering everything from ingestion and preparation to checkpoint management and model deployment.

To enable customers to rapidly train, fine-tune, and deploy AI models, we evolved the Azure Blob Storage architecture to scale and deliver exabytes of capacity, tens of Tbps of throughput, and millions of IOPS to GPUs. In this video, you can see a single storage account scaling to over 50 Tbps on read throughput. Azure Blob Storage is also the foundation that enables OpenAI to train and serve models at unprecedented speed and scale.

Fig 1. Storage-centric view of AI training and fine-tuning

For customers handling terabyte- or petabyte-scale AI training data, Azure Managed Lustre (AMLFS) is a high-performance parallel file system delivering massive throughput and parallel I/O to keep GPUs continuously fed with data. AMLFS 20 (preview) supports 25 PiB namespaces and up to 512 GBps throughput. Hierarchical Storage Management (HSM) integration enhances AMLFS scalability by enabling seamless data movement between AMLFS and your exabyte-scale datasets in Azure Blob Storage. Auto-import (preview) allows you to pull only required datasets into AMLFS, and auto-export moves trained models back out for long-term storage or inferencing.

Rakuten is accelerating the training of Japanese large language models on Microsoft Azure, leveraging Azure Managed Lustre, Azure Blob Storage, and Azure Kubernetes Service to maximize GPU utilization and simplify scaling.
Natalie Mao, VP, AI & Data Division, Rakuten Group

Once models are trained and fine-tuned, inferencing takes center stage, delivering real-time predictions and insights. Azure Blob Storage provides best-in-class storage for Microsoft AI services, including Microsoft Foundry Agent Knowledge (preview) and AI Search retrieval agents (preview), enabling customers to bring their own storage accounts for full flexibility and control, ensuring that enterprise data remains secure and ready for retrieval-augmented generation (RAG).

Additionally, Premium Blob Storage delivers consistently low latency and up to 3x faster retrieval performance, critical for RAG agents. For customers who prefer open-source AI frameworks, Azure Storage built the LangChain Azure Blob Loader, which delivers granular security, memory-efficient loading of millions of objects, and up to 5x faster performance compared to prior community implementations.

Fig 2. Storage-centric view of AI inference with enterprise data

Azure Storage is evolving into an integrated, intelligent AI-driven platform that simplifies management of exabyte-scale AI data. Storage Discovery and Copilot work together to help you analyze and understand how your data estate is evolving over time, using dashboards and questions in natural language. With Storage Discovery and Storage Actions, you can optimize costs, protect your data, and govern large datasets with hundreds of billions of objects used for training and fine-tuning.

Optimizing modern applications with Cloud Native

Modern cloud-native applications demand agility. Two principles consistently stand out: elasticity and flexibility. Your storage should scale seamlessly with dynamic workloads—without operational overhead. The innovations below are designed for the cloud, enabling you to auto-scale, optimize costs intelligently, and deliver the performance needed by modern applications.

Azure Elastic SAN provides cloud-native block storage with scale, tight Kubernetes integration for fast scaling, and multi-tenancy that optimizes cost. With new auto-scaling support, Elastic SAN automatically expands resources as needed, making it easier to manage storage footprints across workloads. Early next year, we’ll extend Kubernetes integration via Azure Container Storage for Azure Kubernetes Service (AKS) to general availability (GA). These enhancements let you maintain familiar hosting environments while layering in cloud-native capabilities.

Cloud-native agility is also critical for modern applications built on object storage, with the need to optimize costs and performance for dynamic and unpredictable traffic patterns. Smart Tier (preview) on Azure Blob Storage continuously analyzes access patterns, moving data between tiers automatically.

New data starts in the hot tier. After 30 days of inactivity, it moves to cool, and after 90 days, to cold. If an object is accessed again, it’s promoted back to hot which keeps data in the most cost-effective tier automatically. You can optimize costs without sacrificing performance, simplifying data management at scale and keeping your focus on building.
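The lifecycle described above is essentially a small decision function. The sketch below is a conceptual illustration of that policy, not the Smart Tier implementation; the service applies these thresholds automatically on your behalf:

```python
from datetime import datetime, timedelta

# Conceptual sketch of the Smart Tier policy described above: objects idle
# for 30 days move to cool, objects idle for 90 days move to cold, and any
# access promotes an object back to hot. The thresholds mirror the blog
# post; the real service tracks access patterns and applies them for you.

HOT, COOL, COLD = "hot", "cool", "cold"

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Return the cost-optimal tier for an object given its last access time."""
    idle = now - last_access
    if idle >= timedelta(days=90):
        return COLD
    if idle >= timedelta(days=30):
        return COOL
    return HOT

now = datetime(2026, 1, 1)
print(choose_tier(now - timedelta(days=5), now))    # recently accessed -> hot
print(choose_tier(now - timedelta(days=45), now))   # idle 45 days -> cool
print(choose_tier(now - timedelta(days=120), now))  # idle 120 days -> cold
```

Promotion back to hot falls out of the same function: once an object is read, its last-access time resets and the next evaluation returns hot.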

Hosting mission-critical workloads

Enterprises today run mission-critical workloads that require block storage with predictable performance and uncompromising business continuity. Azure Ultra Disk is our highest-performance block storage offering, purpose-built for workloads like high-frequency trading, ecommerce platforms, transactional databases, and electronic health record systems that demand exceptional speed, reliability, and scalability.

With Azure Ultra Disk, we can confidently scale our platform globally, knowing that performance and resilience will meet enterprise expectations, that consistency allows our teams to focus on AI innovation and workflow automation rather than infrastructure.
Charles McDaniels, Director of Systems Engineering Management for Global Cloud Services, ServiceNow

We know performance, cost, and business continuity remain the top priorities for our customers, and we are raising the bar in every category:

Performance: We have further improved the average latency for Azure Ultra Disk by 30%, with average latency well under 0.5 ms for small IOs on virtual machines (VMs) with Azure Boost. A single Azure Ultra Disk can deliver industry-leading performance of 400K IOPS and 10 GBps throughput. In addition, with Ebsv6 VMs, both Premium SSD v2 and Azure Ultra Disk can deliver industry-leading VM performance scale of 800K IOPS and 14 GBps throughput for the most demanding applications.

Cost: Flexible provisioning for Azure Ultra Disk reduces total cost of ownership by up to 50%, letting you scale capacity, IOPS, and MBps independently at finer granularity.

Business continuity: Instant Access Snapshots (preview) lets you back up and restore your workloads instantly with exceptional rehydration performance. This differentiated experience for Azure Premium SSD v2 and Ultra Disk helps eliminate the operational overhead of monitoring snapshot readiness or pre-warming resources, while reducing recovery, refresh, and scale-out times from hours to seconds.
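To see why the flexible provisioning described above lowers TCO, consider an IOPS-heavy workload under a toy cost model. All unit prices and capacity-to-performance ratios below are hypothetical, not Azure pricing; the point is only the shape of the comparison between bundled and independent provisioning:

```python
# Hypothetical unit prices (NOT real Azure pricing) to illustrate why
# provisioning capacity, IOPS, and throughput independently can cut cost:
# with bundled provisioning, hitting an IOPS target forces you to buy the
# capacity that comes with it; with flexible provisioning you buy each
# dimension separately.

PRICE_PER_GIB = 0.10      # hypothetical $/GiB-month
PRICE_PER_IOPS = 0.005    # hypothetical $/provisioned IOPS-month
PRICE_PER_MBPS = 0.04     # hypothetical $/provisioned MBps-month

def flexible_cost(gib, iops, mbps):
    """Monthly cost when each dimension is provisioned independently."""
    return gib * PRICE_PER_GIB + iops * PRICE_PER_IOPS + mbps * PRICE_PER_MBPS

def bundled_cost(gib, iops, mbps, iops_per_gib=2, mbps_per_gib=0.25):
    """Monthly cost when IOPS and throughput scale only with purchased capacity.

    The per-GiB ratios are hypothetical; to reach the IOPS or MBps target,
    you must overprovision capacity to whichever dimension is the binding one.
    """
    needed_gib = max(gib, iops / iops_per_gib, mbps / mbps_per_gib)
    return flexible_cost(needed_gib, needed_gib * iops_per_gib, needed_gib * mbps_per_gib)

# An IOPS-heavy workload: modest capacity (1 TiB) but 50K IOPS.
print(round(flexible_cost(1024, 50_000, 500), 2))
print(round(bundled_cost(1024, 50_000, 500), 2))
```

Under these made-up numbers, bundled provisioning forces roughly 24x the needed capacity just to reach the IOPS target, which is the overprovisioning that independent scaling of capacity, IOPS, and MBps avoids.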

Azure NetApp Files (ANF) is designed to deliver low latency, high performance, and data management at scale. Its large volume capabilities have been significantly expanded, providing more than a 3x increase in single-volume capacity to 7.2 PiB and a 4x increase in throughput to 50 GiBps. Cache volumes bring data and files closer to where users need rapid access, in a space-efficient footprint. These capabilities make ANF suitable for several high-performance computing workloads such as Electronic Design Automation (EDA), Seismic Interpretation and Visualization, Reservoir Simulations, and Risk Modeling. Microsoft is not only positioning ANF for mission-critical applications but also using ANF for in-house silicon design.

Breaking barriers—migrating your storage infrastructure

Every organization’s cloud journey is unique. Whether you need to move existing environments to the cloud with minimal disruption or plan a full modernization, Azure Storage offers solutions for you. Storage Migration Solution Advisor in Copilot can provide recommendations to help streamline the decision-making process for these migrations. 

Azure Data Box and Storage Mover simplify the migration journey from on-premises and other clouds to Azure. The next-generation Azure Data Box is now generally available. Storage Mover is our fully managed data migration service, secure, efficient, and scalable, with new capabilities: on-premises NFS shares to Azure Files NFS 4.1, on-premises SMB shares to Azure Blob Storage, and cloud-to-cloud transfers.

For users ready to migrate their NAS data estates, Azure Files now makes this easier than ever. We have introduced a new management model, making it easier and more cost-effective to use file shares. Additionally, Azure Files now enables you to eliminate complex on-premises Active Directory or domain controller infrastructure, with Entra-only identities for SMB shares. With cloud-native identity support, you can now manage your user permissions directly in Azure, including external identities for applications like Azure Virtual Desktop (AVD).

Entra-only identities support with Azure Files transforms SLB’s Petrel workflows by removing dependencies on on-premises domain controllers, simplifying identity management and storage infrastructure for globally distributed teams working on complex exploration and reservoir characterization. This cloud-native architecture allows customers to access SMB shares in an easy and secure manner without complex VPN or hybrid infrastructure setups.
Swapnil Daga, Storage Architect for Tenant Infrastructure, SLB

ANF Migration Assistant simplifies moving ONTAP workloads from on-premises or other clouds to Azure. Behind the scenes, the Migration Assistant uses NetApp’s SnapMirror replication technology, providing efficient, full-fidelity, block-level incremental transfers. You can now migrate large datasets without impacting production workloads.

For customers running on-premises partner solutions who want to migrate to Azure using the same partner-provided technology, Azure has recently introduced Azure Native offers with Pure Storage and Dell PowerScale.

To make migrations easier, Azure Storage’s Migration Program connects you with a robust ecosystem of experts and tools. Trusted partners like Atempo, Cirata, Cirrus Data, and Komprise can accelerate migration of SAN and NAS workloads. This program offers secure, low-risk transfers of files, objects, and block storage to help enterprises unlock the full potential of Azure.

Start your next chapter with Azure Storage

The era of AI-powered transformation is here. Begin your journey by exploring Azure’s advanced storage offerings and migration tools, designed to accelerate AI adoption, cloud migration, and modernization. Take the next step today and unlock new possibilities with Azure Storage as the foundation for your AI initiatives.

For any questions, reach out at azurestoragefeedback@microsoft.com.

Get started with Azure Storage
Secure, high-performance, reliable, and scalable cloud storage.

Start exploring

The post Azure Storage innovations: Unlocking the future of data appeared first on Microsoft Azure Blog.