Microsoft’s strategic AI datacenter planning enables seamless, large-scale NVIDIA Rubin deployments

CES 2026 showcases the arrival of the NVIDIA Rubin platform, along with Azure’s proven readiness for deployment. Microsoft’s long-range datacenter strategy was engineered for moments exactly like this, where NVIDIA’s next-generation systems slot directly into infrastructure that has anticipated their power, thermal, memory, and networking requirements years ahead of the industry. Our long-term collaboration with NVIDIA ensures Rubin fits directly into Azure’s forward platform design.

Learn more about Azure AI infrastructure

Building with purpose for the future

Azure’s AI datacenters are engineered for the future of accelerated computing. That foundation enables seamless integration of NVIDIA Vera Rubin NVL72 racks across Azure’s largest next-generation AI superfactories, from the current Fairwater sites in Wisconsin and Atlanta to future locations.

The newest NVIDIA AI infrastructure requires significant upgrades in power, cooling, and performance optimization; however, Azure’s experience with our Fairwater sites and multiple upgrade cycles over the years demonstrates an ability to flexibly enhance and expand AI infrastructure in step with advancements in technology.

Azure’s proven experience delivering scale and performance

Microsoft has years of market-proven experience in designing and deploying scalable AI infrastructure that evolves with every major advancement of AI technology. In lockstep with each successive generation of NVIDIA’s accelerated compute infrastructure, Microsoft rapidly integrates NVIDIA’s innovations and delivers them at scale. Our early, large-scale deployments of NVIDIA Ampere and Hopper GPUs, connected via NVIDIA Quantum-2 InfiniBand networking, were instrumental in bringing models like GPT-3.5 to life, while other clusters set supercomputing performance records, demonstrating we can bring next-generation systems online faster and with higher real-world performance than the rest of the industry.

We unveiled the first and largest implementations of both the NVIDIA GB200 NVL72 and NVIDIA GB300 NVL72 platforms, architecting racks into single supercomputers that train AI models dramatically faster and helping Azure remain a top choice for customers seeking advanced AI capabilities.

Azure’s systems approach

Azure is engineered for compute, networking, storage, software, and infrastructure all working together as one integrated platform. This is how Microsoft builds a durable advantage into Azure and delivers cost and performance breakthroughs that compound over time.

Maximizing GPU utilization requires optimization across every layer. In addition to Azure being able to adopt NVIDIA’s new accelerated compute platforms early, Azure advantages come from the surrounding platform as well: high-throughput Blob storage, proximity placement and region-scale design shaped by real production patterns, and orchestration layers like CycleCloud and AKS tuned for low-overhead scheduling at massive cluster scale.

Azure Boost and other offload engines clear IO, network, and storage bottlenecks so models scale smoothly. Faster storage feeds larger clusters, stronger networking sustains them, and optimized orchestration keeps end-to-end performance steady. First-party innovations reinforce the loop: liquid-cooling heat exchanger units maintain tight thermals, Azure hardware security module (HSM) silicon offloads security work, and Azure Cobalt delivers exceptional performance and efficiency for general-purpose compute and AI-adjacent tasks. Together, these integrations ensure the entire system scales efficiently, so GPU investments deliver maximum value.

This systems approach is what makes Azure ready for the Rubin platform. We are delivering new systems and establishing an end-to-end platform already shaped by the requirements Rubin brings.

Operating the NVIDIA Rubin platform

NVIDIA Vera Rubin Superchips will deliver 50 PF of NVFP4 inference performance per chip and 3.6 EF of NVFP4 per rack, a fivefold jump over NVIDIA GB200 NVL72 rack systems. Azure has already incorporated the core architectural assumptions Rubin requires:

NVIDIA NVLink evolution: The sixth-generation NVIDIA NVLink fabric expected in Vera Rubin NVL72 systems reaches ~260 TB/s of scale-up bandwidth, and Azure’s rack architecture has already been redesigned to operate with those bandwidth and topology advantages.

High-performance scale-out networking: The Rubin AI infrastructure relies on ultra-fast NVIDIA ConnectX-9 1,600 Gb/s networking, delivered by Azure’s network infrastructure, which has been purpose-built to support large-scale AI workloads.

HBM4/HBM4e thermal and density planning: The Rubin memory stack demands tighter thermal windows and higher rack densities; Azure’s cooling, power envelopes, and rack geometries have already been upgraded to handle the same constraints.

SOCAMM2-driven memory expansion: Rubin Superchips use a new memory expansion architecture; Azure’s platform has already integrated and validated similar memory extension behaviors to keep models fed at scale.

Reticle-sized GPU scaling and multi-die packaging: Rubin moves to massively larger GPU footprints and multi-die layouts. Azure’s supply chain, mechanical design, and orchestration layers have been pre-tuned for these physical and logical scaling characteristics.
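The per-rack figure quoted earlier follows directly from the per-chip figure — a quick sanity check, assuming the NVL72 rack aggregates 72 chips at the quoted 50 PF each:

```python
# Sanity-check the quoted Rubin rack throughput from the per-chip figure.
# Assumption: an NVL72 rack aggregates 72 chips at the quoted 50 PF each.
CHIPS_PER_RACK = 72
NVFP4_PF_PER_CHIP = 50                          # petaFLOPS of NVFP4 inference per chip

rack_pf = CHIPS_PER_RACK * NVFP4_PF_PER_CHIP    # 3,600 PF
rack_ef = rack_pf / 1000                        # 3.6 EF, matching the quoted rack figure

print(f"{rack_pf} PF = {rack_ef} EF per rack")
```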

Azure’s approach in designing for next generation accelerated compute platforms like Rubin has been proven over several years, including significant milestones:

Operated the world’s largest commercial InfiniBand deployments across multiple GPU generations.

Built reliability layers and congestion management techniques that unlock higher cluster utilization and larger job sizes than competitors, reflected in our ability to publish industry-leading large-scale benchmarks, such as multi-rack MLPerf runs that competitors have not replicated.

AI datacenters co-designed with Grace Blackwell and Vera Rubin from the ground up to maximize performance and performance per dollar at the cluster level.

Design principles that differentiate Azure

Pod exchange architecture: To enable fast servicing, Azure’s GPU server trays are designed to be quickly swappable without requiring extensive rewiring, improving uptime.

Cooling abstraction layer: Rubin’s multi-die, high-bandwidth components require substantial thermal headroom that Fairwater already accommodates, avoiding expensive retrofit cycles.

Next-gen power design: Vera Rubin NVL72 racks demand increasing watt density; Azure’s multi-year power redesign (liquid cooling loop revisions, CDU scaling, and high-amp busways) ensures immediate deployability.

AI superfactory modularity: Microsoft, unlike other hyperscalers, builds regional supercomputers rather than singular megasites, enabling more predictable global rollout of new SKUs.

How co-design leads to user benefits

The NVIDIA Rubin platform marks a major step forward in accelerated computing, and Azure’s AI datacenters and superfactories are already engineered to take full advantage. Years of co-design with NVIDIA across interconnects, memory systems, thermals, packaging, and rack-scale architecture mean Rubin integrates directly into Azure’s platform without rework. Rubin’s core assumptions are already reflected in our networking, power, cooling, orchestration, and pod exchange design principles. This alignment gives customers immediate benefits with faster deployment, faster scaling, and faster impact as they build the next era of large-scale AI.
The post Microsoft’s strategic AI datacenter planning enables seamless, large-scale NVIDIA Rubin deployments appeared first on Microsoft Azure Blog.
Source: Azure

Azure updates for partners: December 2025

At Microsoft Ignite 2025, we explored what it means for organizations to move into the era of Frontier transformation. This shift is focused on embedding AI across every part of the business to improve decision-making, increase speed, and create new value. Organizations leading in AI make it foundational. They rethink processes and integrate new technologies from the start to improve efficiency.

For partners, this move toward Frontier represents a significant opportunity to lead customers into this new era. By building AI-powered solutions, connecting data for intelligent insights, and deploying Microsoft Azure’s cloud-ready platforms, partners can deliver value faster and scale confidently through the Microsoft ecosystem.

Microsoft Ignite came with a significant number of announcements, so I’ve gathered the Azure updates that matter most for partners. These are the capabilities that can strengthen your ability to deliver intelligent solutions, drive operational efficiency, and differentiate your product or service in the market. You can also explore how partners are turning momentum into action, access highlights, and grab practical guidance from my Microsoft Ignite session.

Azure Copilot: Now in private preview

Azure Copilot introduces specialized agents to the Azure portal, PowerShell, and CLI. Powered by Azure Resource Manager (ARM)-driven scenarios and advanced AI models from Microsoft and partners, Azure Copilot streamlines migration, assessment, and modernization activities with data-driven insights, guided workflows, and improved governance across customer environments. For partners, this creates a unified way to deliver intelligent automation for cloud workloads, accelerate modernization projects, reduce operational overhead, and strengthen governance through integrated agentic workflows across Azure and GitHub Copilot.

For more information, check out these additional resources:

Blog: Ushering in the Era of Agentic Cloud Operations with Azure Copilot

Microsoft Ignite session: Agentic AI Tools for Partner-Led Migration and Modernization Success

Microsoft Ignite session: Partners: Accelerate Secure Migrations and Innovate in the Era of AI

Foundry Control Plane: Now in public preview

Microsoft Foundry Control Plane extends Agent 365 by bringing unified visibility, security, and control to AI agents operating across the Microsoft Cloud. It centralizes policy management, lifecycle governance, and observability, offering a consistent way to manage agent behavior and performance. By providing enterprise-grade governance and security capabilities that support safe, scalable, and efficient agent management for customers across varied environments, Control Plane empowers confident deployment and operation of AI-powered solutions.

For more information, review these additional resources:

Microsoft Learn: What is the Microsoft Foundry Control Plane?

Microsoft Ignite session: Build Partner Advantage: Drive Key AI Use-Cases with Azure Tech Stack

Foundry IQ: Now in public preview

Foundry IQ provides a unified endpoint for agent knowledge, automating source routing and retrieval workflows through Azure AI Search. It equips agents to work with enterprise content securely and with greater contextual grounding by connecting a unified knowledge base to multiple data sources. For partners, this creates a streamlined way to build retrieval augmented generation (RAG) solutions, link agents to customer-specific knowledge sources, and deliver consistent, context-rich capabilities that empower organizations to unlock more value from their data.

Read our blog to learn more: Foundry IQ: Unlocking ubiquitous knowledge for agents

Fabric IQ: Now in public preview

Microsoft Fabric IQ offers a live, unified view of enterprise data and AI agents, organizing information by business concepts and using OneLake to support real-time analytics across hybrid and multicloud environments. For partners, Fabric IQ creates a foundation for building intelligent, context-aware solutions that align to business processes, accelerate analytics performance, and strengthen governance to improve reliability and efficiency across customer deployments.

For more information, check out these additional resources:

Blog: From Data Platform to Intelligence Platform: Introducing Microsoft Fabric IQ

Microsoft Ignite session: Microsoft Fabric IQ: Turning unified data into unified intelligence

Microsoft Ignite session: How Microsoft’s data platform is creating value for partners

Microsoft Agent Factory: Now available

Microsoft Agent Factory is a new program designed for organizations that want to move from experimentation to execution faster. At the heart of this program is the Microsoft Agent Pre-Purchase Plan (P3), which streamlines procurement and reduces complexity. With P3, partners can offer their customers access to 32 Microsoft services through one flexible pool of funds, eliminating the need to manage multiple contracts or choose between platforms. This single metered plan not only reduces upfront licensing and provisioning but also supports greater predictability for organizations investing in AI innovation. Eligible organizations can also tap into hands-on support from top AI Forward Deployed Engineers (FDEs) and access tailored, role-based training to boost AI fluency across teams. Together, they unlock new opportunities for growth and innovation while encouraging customers to confidently embrace the future of AI.

Read our blog to learn more: Accelerate innovation with Microsoft Agent Factory

Microsoft Foundry: Anthropic Claude models are now available

Microsoft Foundry now offers Anthropic Claude models that support advanced reasoning for research, coding, and agentic workflows, all within Microsoft’s unified governance and observability framework. For partners, this expands choice across model capabilities to develop multistep agents using the right model per task while maintaining governance and deployment consistency across Azure, Foundry, and Microsoft 365 Copilot environments.

Read our blog to learn more: Introducing Anthropic’s Claude models in Microsoft Foundry: Bringing Frontier intelligence to Azure

Resale enabled offers: Now available through Microsoft Marketplace

Resale enabled offers are now available in nearly all Marketplace-supported regions, allowing software companies to work with channel partners to manage listings and expand reach. For partners, this creates new channel-led sales opportunities by making it easier to promote and manage listings on behalf of publishers and reach more customers globally without adding operational complexity.

For more information, check out these resources:

Marketplace: Cloud solutions, AI apps, and agents

Blog: The Microsoft Marketplace opportunity for channel ecosystem

Microsoft Ignite session: Executing on the channel-led marketplace opportunity for partners

Microsoft Ignite session: Marketplace success for partners—from SMB to enterprise

Microsoft Ignite session: Partner: Benefits for Accelerating Software Company Success

Azure HorizonDB for PostgreSQL: Now in private preview

Azure HorizonDB is a new PostgreSQL cloud database for mission-critical applications and modern AI workloads, offering auto-scaling storage, rapid compute scale out, advanced vector indexing, and integration with the Microsoft AI and analytics ecosystem. For partners, HorizonDB supports the development of intelligent and resilient applications, modernization of legacy systems, and creation of high-performance data platforms designed for security, scale, and future AI workloads.

Check out these additional resources:

Blog: Announcing Azure HorizonDB

Preview sign-up: Apply for the preview here

Microsoft Ignite session: Azure HorizonDB: Deep Dive into a New Enterprise-Scale PostgreSQL

Microsoft Agent 365: The control plane for AI agents

Agent 365 extends the Microsoft user management infrastructure to AI agents, empowering organizations to govern agents across Microsoft 365, Azure, and Foundry. Available in the Microsoft 365 admin center with the Frontier program, it combines capabilities from Microsoft 365 Defender, Entra, Purview, and Microsoft 365 for unified security, productivity, and management. For partners, this creates a consistent approach to deploying, securing, and managing fleets of AI agents across customer environments with streamlined governance and operational clarity.

Read our blog to learn more: Microsoft Agent 365: The control plane for AI agents

Looking forward

Microsoft Ignite is about more than product updates; it’s a time to celebrate what we can achieve together as partners. Continue your journey and explore the Cloud & AI Platforms partner sessions at Microsoft Ignite and read the Azure at Microsoft Ignite 2025: All the intelligent cloud news explained blog post for more product updates.

Stay connected with us. Follow Microsoft Partner on LinkedIn, join the conversation in our Partner News Community, and explore the Microsoft partner site to keep your momentum going.

For details on recent announcements, please read the “What’s new in Azure for Partners” newsletter on the Microsoft Community Hub and follow the tag “Azure News” to stay updated.

November update: What’s new in Azure for Partners | Microsoft Community Hub

October update: What’s new in Azure for Partners | Microsoft Community Hub

Microsoft named a Leader in Gartner® Magic Quadrant™ for AI Application Development Platforms

A recognition for AI innovation

Microsoft is recognized as a Leader in the 2025 Gartner® Magic Quadrant™ for AI Application Development Platforms and is positioned furthest for Completeness of Vision. This leadership reflects a long‑term conviction: the next wave of applications is agentic, and real customer impact requires far more than great demos. Organizations need agents grounded in real data and tools, capable of driving business workflows, and governed with end‑to‑end observability at scale. Our investment in agent frameworks, orchestration, and enterprise‑grade governance is how we make that full journey real and practical for every customer.

Read the full Gartner report

Why we believe this matters

Gartner evaluates vendors on two dimensions: Completeness of Vision (where the platform is headed) and Ability to Execute (whether it can deliver today). Being positioned furthest on vision isn’t about having the boldest roadmap: it’s about whether that vision translates into the real capabilities customers need for the future of AI.

Microsoft Foundry is our unified platform for building, deploying and governing AI applications—and over the past year, we’ve focused it on four areas that customers tell us separate production AI from proof-of-concept:

Real data, real tools. Agents are only as useful as what they can access. Foundry IQ provides a single secure grounding API that connects agents to enterprise data, while Foundry Tools offers over 1,400 pre-built connectors for document processing, translation, speech, and business systems.

Workflow integration, not just conversation. The shift from chatbot to agent means moving from Q&A to action. Foundry Agent Service supports multi-agent orchestration where agents can hand off tasks, coordinate decisions, and drive end-to-end business processes, deployable directly into Copilot or your applications.

Observability and governance at scale. When agents act autonomously, you need to see what they’re doing and why. Foundry Control Plane provides organization-wide visibility, audit trails, and policy enforcement. “Trust but verify” doesn’t scale without tooling.

Models from cloud to edge. Build and run AI models wherever your workloads live—from cloud to edge. Fine-tune and deploy models from Foundry Models using enterprise-grade GenAI Ops, then run them on-device with Foundry Local for low-latency, offline, or regulated scenarios.

With these pillars in place, Foundry delivers everything organizations need to build AI applications and multi-agent systems at scale. That’s why we’ve ensured it works seamlessly with the tools developers and businesses use most. Foundry integrates deeply with development tools including Visual Studio Code, GitHub, Azure, and productivity tools such as Microsoft 365, Microsoft Teams, and the broader enterprise stack.

Explore Microsoft Foundry

Walking the talk: Our agent-driven approach

This year, Microsoft adopted a fundamentally new approach for preparing our submission for AI Application Development Platforms. Instead of relying on manual data gathering and coordination, our team developed custom agents designed to collect, organize, and validate all the information required for the evaluation.

How the agent was created:

The agent’s development is detailed in a recent blog post, which outlines the technical architecture and methodology behind its creation. Built using Microsoft Agent Framework, our open-source offering, the agent leverages advanced orchestration capabilities and multimodal content processing. It was designed to automate the complex process of assembling submission data, ensuring accuracy and completeness while reducing manual effort.

Technical highlights:

The agent uses a structured prompt and workflow, as specified here. It integrates with the Microsoft Foundry platform-as-a-service (PaaS) model, supporting both pay-as-you-go and provisioned throughput options.

Benefits of the agent-driven process:

By automating the submission workflow, the agent improved data accuracy and transparency, allowing our experts to focus on strategic insights rather than manual compilation. The process was more efficient, reduced the risk of errors, and ensured that our submission was both comprehensive and up to date.

This innovation reflects Microsoft’s commitment to technical excellence and continuous improvement, providing customers with greater confidence in the quality and reliability of its AI solutions. By streamlining critical processes, Microsoft delivers more accurate, transparent, and timely updates, enabling organizations to make informed decisions and accelerate innovation with enterprise-grade AI platforms that maintain compliance and security standards.

Empowering organizations with Microsoft Foundry

We believe our recognition in the Gartner Magic Quadrant™ for AI Application Development Platforms is a testament to Microsoft’s commitment to empowering organizations to develop robust, scalable, and intelligent AI solutions. The agent-driven submission process exemplifies our drive to innovate, operate transparently, and share our process with the community.

More than 80,000 enterprises and software development companies across healthcare, manufacturing, and retail industries are leveraging Foundry to deliver transformative solutions—from predictive supply chain insights to personalized customer experiences. These success stories highlight how Foundry accelerates innovation while maintaining trust and compliance.

Genie is offering provider practices a way to use AI to converse with patients through their preferred channel. This will reduce the amount of administrative work and cost for practices to simply give patients the answers to their questions.
Sidd Shah, Vice President of Strategy & Business Growth, healow

With Genix Copilot, we have unlocked the power of generative and agentic AI from shop floor to top floor, cutting troubleshooting time by 60-80%. Genix Copilot on Azure OpenAI is reshaping industrial performance and advancing environmental goals, turning data into real outcomes for customers across very different sectors.
Rajesh Ramachandran, Global Chief Digital Officer, Process Automation, ABB

Foundry Agent Service and Microsoft Agent Framework connect our agents to data and each other, and the governance and observability in Microsoft Foundry provide what KPMG firms need to be successful in a regulated industry.
Sebastian Stöckle, Global Head of Audit Innovation and AI, KPMG International

Microsoft is at the cutting edge of AI-based shopping, and with Ask Ralph, we’re blending the world of fashion and the world of technology to reimagine how consumers shop online.
Naveen Seshadri, Chief Digital Officer, Ralph Lauren

Thank you to our customers and partners for making this recognition possible. We look forward to helping you grow more with Microsoft Foundry.

Discover resources for your AI journey

Read the Gartner report

Discover more at Microsoft Customer Stories

Learn more about Microsoft Foundry

*Gartner, Magic Quadrant for AI Application Development Platforms, 17 November 2025

Gartner and Magic Quadrant are registered trademarks of Gartner, Inc. and/or its affiliates and are used herein with permission. All rights reserved.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s business and technology insights organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Microsoft’s commitment to supporting cloud infrastructure demand in the United States

Today, we are sharing progress on our infrastructure expansions across the United States that are supporting the tremendous growth in customer demand for cloud and AI services. Recently we announced Fairwater sites in Wisconsin and Atlanta, and now we are expanding with the launch of our East US 3 region in the Greater Atlanta Metro area in early 2027, and the expansion of five existing datacenter regions across the United States.

Learn about Microsoft’s investment in datacenter infrastructure

New Cloud Region to open in Atlanta in early 2027

Microsoft’s global cloud network serves as the foundation that underpins daily life, innovation, and economic growth. With more regions than any other cloud provider, Microsoft’s global cloud infrastructure includes more than 70 regions, over 400 datacenters worldwide, over 370,000 miles of terrestrial and subsea fiber, and over 190 edge sites, making it one of the largest, most trusted and secure in the world.

Our datacenter footprint in Greater Atlanta Metro area is already running the most advanced AI supercomputers on the planet, and in early 2027, this footprint in Atlanta will expand to support all our customer workloads out of the East US 3 datacenter region. This region will be designed to support the most advanced Azure workloads, on a foundation of trust for all organizations.

Get started with Azure today

The East US 3 region will offer additional resiliency capabilities through Availability Zones which are unique physical datacenter locations equipped with independent power, networking, and cooling. Availability Zones provide organizations with peace of mind knowing their applications can be designed with increased tolerance to failures by incorporating functionality such as zone-redundant storage.
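As an illustration of the zone-redundant storage functionality mentioned above, a storage account can be created with the `Standard_ZRS` SKU so data is replicated synchronously across Availability Zones. This is a minimal sketch using the Azure CLI; the resource group name, account name, and region are hypothetical placeholders:

```shell
# Create a resource group in a region that offers Availability Zones
# (names and region here are placeholders).
az group create --name my-resilient-rg --location eastus

# Zone-redundant storage (ZRS) replicates data synchronously
# across three Availability Zones within the region.
az storage account create \
  --name myzrsaccount \
  --resource-group my-resilient-rg \
  --location eastus \
  --sku Standard_ZRS
```

Applications built on such an account can continue reading and writing data even if one zone in the region becomes unavailable.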

Microsoft’s datacenter community pledge is to build and operate digital infrastructure that addresses societal challenges and creates benefits for communities in which we operate and where our employees live and work. The East US 3 region is being designed to meet Microsoft’s carbon, water, waste, and sustainability commitments. In developing the East US 3 region, we have water conservation and replenishment top of mind. The region in Georgia is designed to achieve LEED Gold Certification: a framework for healthy, highly efficient, and cost-saving buildings, offering environmental, social, and governance benefits.

Delivering a resilient cloud infrastructure

We’re empowering all organizations to adopt a resilient cloud strategy that enables them to take advantage of the full capabilities of the cloud. The cloud is not a single region or location but a network of regions across the United States and the world that enables access to Azure services, resources, and capacity across a broader set of geographic areas.

Our infrastructure projects in the United States are driven by the need for greater resiliency, agility and flexibility in today’s dynamic cloud environment. With six datacenter regions with Availability Zones (AZ) already in operation, we will be adding AZs in the United States to the following existing regions:

North Central US by the end of 2026.

West Central US in early 2027.

US Gov Arizona in early 2026.

In 2026, we will add Availability Zones in regions where we already have three, including East US 2 in Virginia and South Central US in Texas. The expansion of our Availability Zone footprint will provide additional supply of Azure infrastructure capacity to meet the need for customers in these regions to grow with confidence and with more options when considering a multi-region cloud architecture. Leveraging a multi-region cloud architecture with any of our United States regions further improves application performance and latency and strengthens the overall resilience and availability of cloud applications.

Organizations are already using Azure to transform their applications in the era of AI, with a resilient cloud foundation:

The University of Miami: The University of Miami is a leading-edge teaching and research institution located on Florida’s southern tip, part of a region known as Hurricane Alley. With a steady threat of extreme weather–related outages, the University looked to Microsoft Azure to improve its disaster recovery capabilities and shift key on-premises assets to the cloud. Pursuing a well-architected strategy, the University now takes advantage of Azure availability zones to safeguard against outages, stay operational during maintenance and improvements, and help ensure resilience and reliability. Additionally, the University is realizing greater agility, faster response time to business needs, and reduced costs by continuing to pursue Azure-backed solutions.

The State of Alaska: The State of Alaska is reducing costs by consolidating infrastructure and decommissioning legacy systems. It is improving resiliency and reliability while strengthening security by migrating systems to Azure, where geography is no longer a challenge. 

Supporting our government customers

We remain committed to enabling resilient, compliant cloud strategies for our government customers. In early 2026, we will expand our Azure Government footprint with the addition of three Availability Zones in the US Government Arizona region, giving agencies and partners more options for zone-redundant architectures to improve recovery time objectives (RTO), recovery point objectives (RPO), and mission continuity aligned with CMMC and NIST guidance.

This expansion supports growing demand for segmented, resilient architectures that isolate sensitive workloads while meeting regulatory requirements for availability and security. The US Government datacenter region in Arizona gives additional options for customers in the Defense Industrial Base (DIB) for its benefits of proximity, latency, and mission alignment, offering an alternative to US Government Virginia for new deployments.

These investments complement the Azure for US Government Secret cloud region launched earlier this year, reinforcing our commitment to secure, compliant, and mission-ready cloud solutions. Discover how Microsoft is advancing AI and infrastructure innovation in our H200: Accelerating AI at Scale blog.

Discover what Azure can do for you

Boost your cloud strategy

Use the Cloud Adoption Framework to achieve your cloud goals with best practices, documentation, and tools for business and technology strategies.

Use the Well-Architected Framework to optimize workloads with guidance for building reliable, secure, and performant solutions on Azure.

By choosing to deploy services through any of our Azure regions, customers can leverage the diverse and robust infrastructure that Microsoft is developing across the United States. This approach not only offers resilience and flexibility but also paves the way for innovative solutions that drive economic growth and a more connected future.

Where to find more resources:

Take a virtual tour of Microsoft datacenters

Learn more about Microsoft’s global infrastructure

Microsoft Datacenters: Illuminating the unseen power of the cloud—Microsoft Datacenters

Learn about Georgia—Microsoft Local

Learn how Microsoft is driving next-generation AI and infrastructure innovation

The post Microsoft’s commitment to supporting cloud infrastructure demand in the United States appeared first on Microsoft Azure Blog.
Quelle: Azure

Actioning agentic AI: 5 ways to build with news from Microsoft Ignite 2025

Energy at Microsoft Ignite was electric. Over 20,000 attendees gathered in San Francisco, with 200,000 digitally joining us to explore the future of cloud and AI. What continues to inspire me most are the responses online and the conversations happening in our technical communities. It’s how quickly you’re turning these announcements into action and building solutions for billions of people, which will ultimately shape our future.

Join us at Microsoft AI Dev Days (Dec 10-11, 2025) and start building

As someone who lives and breathes technical audience marketing across Microsoft Azure, Foundry, Fabric, databases, and developer tools, I can say our Azure platform announcements resonated because they help solve real problems, and now the work begins.

So, what is everyone saying about the top news? And where do we go from here? Let’s reflect on the top five cloud and AI stories from Microsoft Ignite across the web right now and then unpack how these innovations can be put into practice across Microsoft AI Dev Days, Microsoft AI Tour and more.

1. Claude comes to Microsoft Foundry: Choice for builders

The technical community lit up about what Claude models in Microsoft Foundry unlock. I really like how this eWeek article describes the significance as a “partnership [that] removes one of the biggest historical blockers to adopting new AI tools: vendor complexity.”

Developers told us they wanted access to Claude Sonnet and Claude Opus alongside OpenAI’s GPT models. They wanted the ability to select the right models for their use cases, and the tools to evaluate for tone, safety, performance, and more. Now Azure is the only cloud supporting access to both Claude and GPT frontier models for its customers.

Response from the community is clear: model diversity matters. When you’re building AI apps and agents, having options means you can optimize for what matters most to your users. Microsoft Foundry gives you flexibility while maintaining enterprise-grade security, compliance, and governance.

My favorite watches and reads:

Microsoft Exec talks OpenAI, AI Bubble, Data Centers, AI Safety, and more

Everything you need to build AI apps & agents

Foundry: The Top AI Announcement from Microsoft Ignite 2025

Microsoft Brings Anthropic’s Claude Opus 4.5 to Foundry Preview

Deploy and compare models in Microsoft Foundry

2. IQ Revolution: Semantic understanding that works

The new portfolio of Microsoft IQ offerings has data engineers and architects buzzing. One blogger captured it perfectly: “This is Microsoft rewiring the connective tissue between productivity apps, analytics platforms, and AI development environments to create something that’s been missing from the enterprise AI conversation.” Knowledge is how the shift to agentic AI becomes practical rather than theoretical.

Foundry IQ streamlines knowledge retrieval from multiple sources including SharePoint, Fabric, and the web. Powered by Azure AI Search, it delivers policy-aware retrieval without having to build complex custom RAG pipelines. Developers get pre-configured knowledge bases and agentic retrieval in a single API that “just works,” while also respecting user permissions, which is what I heard resonating on the ground.
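The core idea behind policy-aware retrieval is that security trimming happens before ranking, not after, so results never leak documents the caller cannot read. This toy sketch is not the Foundry IQ API; the document ACLs and the term-overlap scorer are invented purely for illustration:

```python
def policy_aware_search(query_terms, docs, user_groups, k=3):
    """Return top-k doc ids the user may read, ranked by naive term overlap.
    Filtering by ACL happens *before* scoring, so ranking never observes
    documents outside the caller's permissions."""
    visible = [d for d in docs if d["acl"] & user_groups]
    scored = [(len(set(query_terms) & set(d["text"].split())), d) for d in visible]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d["id"] for score, d in scored[:k] if score > 0]

docs = [
    {"id": "hr-policy", "acl": {"hr"},        "text": "leave policy and benefits"},
    {"id": "eng-wiki",  "acl": {"engineers"}, "text": "deployment policy runbook"},
]
print(policy_aware_search(["policy"], docs, {"engineers"}))  # ['eng-wiki']
```

A production system swaps the scorer for semantic ranking and sources the ACLs from the underlying stores (SharePoint, Fabric), but the filter-then-rank ordering is the part that makes retrieval permission-safe.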

Designed with Foundry IQ integration, Fabric IQ creates a semantic intelligence layer that unifies analytics, time-series, and operational data around shared business concepts, letting you build and deploy agents that reason consistently across domains while cutting down the schema-mapping, data wrangling, and prompt engineering that normally eat the most time.

More must-reads:

CIO Talk: Microsoft Gets IQ

Microsoft’s Fabric IQ teaches AI agents to understand business operations, not just data patterns

Microsoft Ignite 2025: How Data-Driven Intelligence Powers the Age of AI Agents

3. Azure HorizonDB: PostgreSQL power

PostgreSQL developers are celebrating the preview of Azure HorizonDB, which you can sign up for here. This fully managed, Postgres-compatible database service is designed from the ground up for modern cloud-native and AI workloads.

The technical community embraced it wholeheartedly, seeing their priorities reflected. Azure HorizonDB delivers up to 3x more throughput than open-source Postgres for transactional workloads, with auto-scaling storage up to 128TB and scale-out compute supporting up to 3,072 vCores. Sub-millisecond multi-zone commit latencies support apps that are both fast and resilient.

What really got developers excited was built-in vector indexing with advanced filtering using DiskANN, which brings AI intelligence directly to where your data lives. This helps developers build semantic search and RAG patterns without the complexity and latency of managing separate vector stores or moving data across systems. Integration with Microsoft Foundry also streamlines setup and AI app development.
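Under the hood, vector search is top-k retrieval by similarity over embeddings stored alongside the rows; DiskANN replaces the linear scan below with a graph-based approximate index over the same similarity measure. A brute-force sketch with made-up three-dimensional "embeddings":

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, rows, k=2):
    """Brute-force nearest neighbors; an ANN index like DiskANN
    approximates this ranking without scanning every row."""
    return sorted(rows, key=lambda r: cosine(query_vec, r["embedding"]),
                  reverse=True)[:k]

rows = [
    {"id": 1, "embedding": [0.9, 0.1, 0.0]},
    {"id": 2, "embedding": [0.0, 1.0, 0.0]},
    {"id": 3, "embedding": [0.8, 0.2, 0.1]},
]
print([r["id"] for r in top_k([1.0, 0.0, 0.0], rows)])  # [1, 3]
```

Keeping this operation inside the database is what removes the separate vector store: the same transaction that writes a row can write its embedding, and a single SQL query can combine relational filters with the similarity ranking.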

And for those migrating from Oracle, GitHub Copilot-powered migration tools in the PostgreSQL Extension for VS Code make the transition smoother than ever. The community has spoken: they want PostgreSQL flexibility combined with Azure enterprise capabilities, and Azure HorizonDB delivers.

Be sure to check these out:

Announcing Azure HorizonDB

Water, Water Everywhere: How Microsoft Ignite 2025 Turned Data Into Intelligence

Microsoft Ignite 2025: AI + Databases = The Next Big Shift with Microsoft’s Shireesh Thota

Azure HorizonDB: Microsoft goes big with PostgreSQL

4. Azure Copilot: Agents change the game

The announcement of the new Azure Copilot gives IT professionals a new reason to come to the cloud. Now supporting the full cloud operations lifecycle, Azure Copilot features a collection of specialized agents for migration, deployment, observability, optimization, resiliency, and troubleshooting. A star within this new experience for IT pros is the migration agent, which helps compress weeks of manual discovery by scanning environments, identifying legacy workloads, and auto-generating infrastructure-as-code templates so migrations are fast and clean.

With Azure Copilot, migrating and modernizing become far more manageable: it surfaces cost improvements, right-sizes environments, and diagnoses issues across containers, virtual machines, and databases, while honoring role-based access control (RBAC) policies and compliance guardrails. Available at no extra cost in the Azure portal, CLI, and the new Operations Center, this agentic interface transforms modernization and helps IT teams be more proactive as they build on the Azure foundation.

Smart takes on the news:

Making Sense of Microsoft Ignite 2025 for Azure and AI Architects

How Azure Copilot’s New Agents Automate DevOps and SecOps

Azure Update – IGNITE SPECIAL – 21st November 2025

Microsoft’s Azure Copilot to support agentic cloud operations at scale with new AI agents

Azure Copilot Agents Launch in Private Preview

5. Azure hardware: Limitless power and security

Performance is everything when you’re training large models or running inference at scale, and that’s why the hardware museum behind our ‘Frontier Street’ activation at Ignite captured the community’s imagination.

When you stood in front of a blade from our Azure AI infrastructure servers, with NVIDIA Blackwell Ultra GPUs presented like a museum piece, your excitement made it feel like stepping into an art gallery, with a spotlight on Cobalt, Maia, and Microsoft’s unique NVIDIA partnership.

And it didn’t stop there. Microsoft’s custom Azure silicon now includes the Azure Boost DPU, the first in-house data processing unit, and Azure Integrated HSM for top-notch security. We can’t wait to keep bringing these innovations directly to you.

See what others are saying:

Announcing Cobalt 200: Azure’s next cloud-native CPU

Microsoft has Designed its Own 132 Core Processor: Azure Cobalt 200

Microsoft’s Azure Cobalt 200 ARM Chip Delivers 50% Performance Boost

Powering Modern Cloud Workloads with Azure Boost: Ignite 2025

Your keyboard, your impact

Here’s what makes this moment special: announcements at Ignite aren’t endpoints; they’re starting points. You’re the next gen creators who will take these tools and build new agentic experiences we can’t yet imagine. Your implementations surface insights that shape how Azure evolves. Your real-world patterns are what drive product decisions. The relationship between announcement and innovation is a partnership, and the technical community drives that process forward.

Create the future with us:

Tune into Microsoft AI Dev Days: December 10-11. Starting today, we’re hosting two days of live-streamed technical content on the Reactor, broadcast across all dev channels. These sessions are designed for developers who want to go deep on building with the technologies announced at Ignite. Mark your calendars and join the community for hands-on workshops and technical deep dives that will bring you to the cutting edge of AI innovation.

Join us at a Microsoft AI Tour location near you. We’re coming to your city with hands-on technical workshops. These one-day, free events focus on getting you keyboard time with the technologies announced at Ignite. We’re continuing to hit the road in 2026 to 30 more locations.

Catch up with your tech community. Ignite delivered incredible technical content across hundreds of sessions. What makes the following three sessions special is how deep they go into the tech with information about how to start implementing in your own environments.

Community Theater: Ask Me Anything with Scott Hanselman

Community Theater: Learn Infrastructure-as-Code through Minecraft

Community Theater: Cloud Perspectives: Cloud Management & Ops Platform Team Insights

Take your learning to the next level with curated skilling plans. Whether you’re new to these technologies or looking to deepen your expertise, Microsoft skilling plans can accelerate your career from fundamentals to advanced implementation.

What are you building with the latest technologies announced at Ignite? Join the conversation in Azure’s technical community.

Join Microsoft Ignite 2026 early!
Sign up now to join the Microsoft Ignite early-access list and be eligible to receive limited‑edition swag at the event.

Save the date

The post Actioning agentic AI: 5 ways to build with news from Microsoft Ignite 2025 appeared first on Microsoft Azure Blog.
Quelle: Azure

Azure Storage innovations: Unlocking the future of data

Microsoft is redefining what’s possible in the public cloud and driving the next wave of AI-powered transformation for organizations. Whether you’re pushing the boundaries with AI, improving the resilience of mission-critical workloads, or modernizing legacy systems with cloud-native solutions, Azure Storage has a solution for you.

Learn more about Azure Storage tools and products

At Microsoft Ignite 2025 and KubeCon North America last month, we showcased the latest innovations in Azure Storage, powering your workloads. Here is a recap of those releases and advancements.

Innovating for the future with AI

Azure Blob Storage provides a unified storage foundation for the entire AI lifecycle, powering everything from ingestion and preparation to checkpoint management and model deployment.

To enable customers to rapidly train, fine-tune, and deploy AI models, we evolved the Azure Blob Storage architecture to scale and deliver exabytes of capacity, tens of Tbps of throughput, and millions of IOPS to GPUs. In this video, you can see a single storage account scaling to over 50 Tbps on read throughput. Azure Blob Storage is also the foundation that enables OpenAI to train and serve models at unprecedented speed and scale.

Fig 1. Storage-centric view of AI training and fine-tuning

For customers handling terabyte or petabyte scale AI training data, Azure Managed Lustre (AMLFS) is a high-performance parallel file system delivering massive throughput and parallel I/O to keep GPUs continuously fed with data. AMLFS 20 (preview) supports 25 PiB namespaces and up to 512 GBps throughput. Hierarchical Storage Management (HSM) integration enhances AMLFS scalability by enabling seamless data movement between AMLFS and your exabyte-scale datasets in Azure Blob Storage. Auto-import (preview) allows you to pull only required datasets into AMLFS, and auto-export sends trained models to long-term storage or inferencing.
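Whether data arrives from Blob Storage directly or through AMLFS, the client-side pattern for keeping GPUs fed is the same: issue many concurrent range reads rather than one sequential stream. A minimal sketch of that pattern using only Python's standard library, with an in-memory buffer standing in for a remote blob:

```python
from concurrent.futures import ThreadPoolExecutor

blob = bytes(range(256)) * 4096      # 1 MiB stand-in for a remote blob
CHUNK = 256 * 1024                   # 256 KiB range reads

def read_range(offset):
    # In a real client this would be a ranged GET against object storage.
    return blob[offset:offset + CHUNK]

offsets = range(0, len(blob), CHUNK)
with ThreadPoolExecutor(max_workers=8) as pool:
    parts = list(pool.map(read_range, offsets))  # order is preserved

assert b"".join(parts) == blob       # ranges reassemble to the original
print(len(parts), "chunks")          # 4 chunks
```

Aggregate throughput scales with the number of in-flight requests, which is why data-loading libraries for training pipelines default to heavily parallel, prefetching readers.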

Rakuten is accelerating the training of Japanese large language models on Microsoft Azure, leveraging Azure Managed Lustre, Azure Blob Storage, and Azure Kubernetes Service to maximize GPU utilization and simplify scaling.
Natalie Mao, VP, AI & Data Division, Rakuten Group

Once models are trained and fine-tuned, inferencing takes center stage delivering real-time predictions and insights. Azure Blob Storage provides best-in-class storage for Microsoft AI services, including Microsoft Foundry Agent Knowledge (preview) and AI Search retrieval agents (preview), enabling customers to bring their own storage accounts for full flexibility and control, ensuring that enterprise data remains secure and ready for retrieval-augmented generation (RAG).

Additionally, Premium Blob Storage delivers consistent low-latency and up to 3X faster retrieval performance, critical for RAG agents. For customers that prefer open-source AI frameworks, Azure Storage built LangChain Azure Blob Loader which delivers granular security, memory-efficient loading of millions of objects and up to 5x faster performance compared to prior community implementations.

Fig 2. Storage-centric view of AI inference with enterprise data

Azure Storage is evolving to be an integrated, intelligent AI-driven platform simplifying management of exabyte-scale AI data. Storage Discovery and Copilot work together to help you analyze and understand how your data estate is evolving over time using dashboards and questions in natural language. With Storage Discovery and Storage Actions, you can optimize costs, protect your data, and govern large datasets with hundreds of billions of objects used for training and fine-tuning.

Optimizing modern applications with Cloud Native

Modern cloud-native applications demand agility. Two principles consistently stand out: elasticity and flexibility. Your storage should scale seamlessly with dynamic workloads—without operational overhead. The innovations below are designed for the cloud, enabling you to auto-scale, optimize costs intelligently, and deliver the performance needed by modern applications.

Azure Elastic SAN provides cloud-native block storage at scale, tight Kubernetes integration for fast scaling, and multi‑tenancy that optimizes cost. With new auto-scaling support, Elastic SAN automatically expands resources as needed, making it easier to manage storage footprints across workloads. Early next year, we’ll extend Kubernetes integration via Azure Container Storage for Azure Kubernetes Service (AKS) to general availability (GA). These enhancements let you maintain familiar hosting environments while layering in cloud-native capabilities.

Cloud-native agility is also critical for modern applications built on object storage, with the need to optimize costs and performance for dynamic and unpredictable traffic patterns. Smart Tier (preview) on Azure Blob Storage continuously analyzes access patterns, moving data between tiers automatically.

New data starts in the hot tier. After 30 days of inactivity, it moves to cool, and after 90 days, to cold. If an object is accessed again, it’s promoted back to hot which keeps data in the most cost-effective tier automatically. You can optimize costs without sacrificing performance, simplifying data management at scale and keeping your focus on building.
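The tiering policy just described is effectively a small state machine. The sketch below models the rules in the paragraph above; it is an illustration of the policy, not the service's implementation:

```python
def smart_tier(tier, idle_days, accessed=False):
    """Next tier for an object: any access promotes back to hot;
    otherwise inactivity demotes hot -> cool (30 days) -> cold (90 days)."""
    if accessed:
        return "hot"
    if tier == "hot" and idle_days >= 30:
        return "cool"
    if tier == "cool" and idle_days >= 90:
        return "cold"
    return tier

print(smart_tier("hot", 31))                  # cool
print(smart_tier("cool", 120))                # cold
print(smart_tier("cold", 0, accessed=True))   # hot
```

The practical upshot: you no longer encode these thresholds yourself in lifecycle management rules; Smart Tier applies them per object from observed access patterns.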

Hosting mission-critical workloads

Enterprises today run mission-critical workloads that require block storage delivering predictable performance and uncompromising business continuity. Azure Ultra Disk is our highest-performance block storage offering, purpose-built for workloads like high-frequency trading, ecommerce platforms, transactional databases, and electronic health record systems that demand exceptional speed, reliability, and scalability.

With Azure Ultra Disk, we can confidently scale our platform globally, knowing that performance and resilience will meet enterprise expectations, that consistency allows our teams to focus on AI innovation and workflow automation rather than infrastructure.
Charles McDaniels, Director of Systems Engineering Management for Global Cloud Services, ServiceNow

We know performance, cost, and business continuity remain the top priorities for our customers and we are raising the bar in every category:

Performance: We have further improved the average latency for Azure Ultra Disk by 30%, with average latency well under 0.5 ms for small IOs on virtual machines (VMs) with Azure Boost. A single Azure Ultra Disk can deliver industry-leading performance of 400K IOPS and 10 GBps throughput. In addition, with Ebsv6 VMs, both Premium SSD v2 and Azure Ultra Disk can deliver industry-leading VM performance scale of 800K IOPS and 14 GBps throughput for the most demanding applications.

Cost: Flexible provisioning for Azure Ultra Disk reduces total cost of ownership by up to 50%, letting you scale capacity, IOPS, and MBps independently at finer granularity.

Business continuity: Instant Access Snapshots (preview) lets you back up and restore your workloads instantly with exceptional performance on rehydration. This differentiated experience for Azure Premium SSD v2 and Ultra Disk helps eliminate the operational overhead of monitoring snapshot readiness or pre‑warming resources, while reducing recovery, refresh, and scale‑out times from hours to seconds.
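To see why provisioning capacity, IOPS, and MBps independently can cut costs, consider a small but IOPS-hungry disk: you pay only for the dimensions you actually need. The sketch below uses placeholder unit prices, not Azure's published rates, purely to illustrate the billing model:

```python
def disk_cost(gib, iops, mbps,
              price_gib=0.10, price_iops=0.00005, price_mbps=0.04):
    """Monthly cost when capacity, IOPS, and MBps are billed independently.
    All unit prices are hypothetical placeholders for illustration."""
    return gib * price_gib + iops * price_iops + mbps * price_mbps

# A 128 GiB disk provisioned for heavy transactional IO:
print(round(disk_cost(gib=128, iops=80_000, mbps=1_200), 2))  # 64.8
```

Under a bundled model, reaching 80K IOPS might force you to provision (and pay for) terabytes of capacity you never use; independent dimensions remove that coupling.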

Azure NetApp Files (ANF) is designed to deliver low latency, high performance, and data management at scale. Its large volumes capabilities have been significantly expanded providing an over 3x increase in single volume capacity scale to 7.2 PiB and a 4x increase in throughput to 50 GiBps. Cache volumes bring data and files closer to where users need rapid access in a space efficient footprint. These make ANF suitable for several high-performance computing workloads such as Electronic Design Automation (EDA), Seismic Interpretation and Visualization, Reservoir Simulations, and Risk Modeling. Microsoft is not only positioning ANF for mission critical applications but also using ANF for in-house silicon design.

Breaking barriers—migrating your storage infrastructure

Every organization’s cloud journey is unique. Whether you need to move existing environments to the cloud with minimal disruption or plan a full modernization, Azure Storage offers solutions for you. Storage Migration Solution Advisor in Copilot can provide recommendations to help streamline the decision-making process for these migrations. 

Azure Data Box and Storage Mover simplify the migration journey from on-premises and other clouds to Azure. The next-generation Azure Data Box is now GA. Storage Mover is our fully managed data migration service that is secure, efficient, and scalable, with new capabilities: on-premises NFS shares to Azure Files NFS 4.1, on-premises SMB shares to Azure Blob Storage, and cloud-to-cloud transfers.

For users ready to migrate their NAS data estates, Azure Files now makes this easier than ever. We have introduced a new management model making it easier and more cost effective to use file shares. Additionally, Azure Files now enables you to eliminate complex on-premises Active Directory or domain controller infrastructure, with Entra-only identities for SMB shares. With cloud native identity support, you can now manage your user permissions directly in Azure, including external identities for applications like Azure Virtual Desktop (AVD).

Entra-only identities support with Azure Files transforms SLB’s Petrel workflows by removing dependencies on on-premises domain controllers, simplifying identity management and storage infrastructure for globally distributed teams working on complex exploration and reservoir characterization. This cloud-native architecture allows customers to access SMB shares in an easy and secure manner without complex VPN or hybrid infrastructure setups.
Swapnil Daga, Storage Architect for Tenant Infrastructure, SLB

ANF Migration Assistant simplifies moving ONTAP workloads from on-premises or other clouds to Azure. Behind the scenes, the Migration Assistant uses NetApp’s SnapMirror replication technology, providing efficient, full fidelity, block-level incremental transfers. You can now leverage large datasets without impacting production workloads.

For customers running on-premises partner solutions who want to migrate to Azure using the same partner-provided technology, Azure has recently introduced Azure Native offers with Pure Storage and Dell PowerScale.

To make migrations easier, Azure Storage’s Migration Program connects you with a robust ecosystem of experts and tools. Trusted partners like Atempo, Cirata, Cirrus Data, and Komprise can accelerate migration of SAN and NAS workloads. This program offers secure, low-risk transfers of files, objects, and block storage to help enterprises unlock the full potential of Azure.

Start your next chapter with Azure Storage

The era of AI-powered transformation is here. Begin your journey by exploring Azure’s advanced storage offerings and migration tools, designed to accelerate AI adoption, cloud migration, and modernization. Take the next step today and unlock new possibilities with Azure Storage as the foundation for your AI initiatives.

For any questions, reach out at azurestoragefeedback@microsoft.com.

Get started with Azure Storage
Secure, high-performance, reliable, and scalable cloud storage.

Start exploring

The post Azure Storage innovations: Unlocking the future of data appeared first on Microsoft Azure Blog.
Quelle: Azure

Introducing GPT-5.2 in Microsoft Foundry: The new standard for enterprise AI

The age of AI small talk is over. Enterprise applications demand more than clever chat. They require a reliable, reasoning partner capable of solving the most ambiguous, high-stakes problems, including planning multi-agent workflows and delivering auditable code.

Azure is the foundation for solving these challenges. Today, we’re announcing the general availability of OpenAI’s GPT-5.2 in Microsoft Foundry, introducing a new frontier model series purpose-built to meet the needs of enterprise developers and technical leaders and setting a new standard for a new era.

Explore GPT-5.2 in Foundry today

GPT-5.1 vs. GPT-5.2: Key upgrades for developers to know

The GPT-5.2 series introduces deeper logical chains, richer context handling, and agentic execution that produces shippable artifacts. For example, design docs, runnable code, unit tests, and deployment scripts can be generated with fewer iterations. The series is built on a new architecture, delivering superior performance, efficiency, and reasoning depth compared to prior generations. It’s also trained on the proven GPT-5.1 dataset and further enhanced with improved safety and integrations. GPT-5.2 leaps beyond previous models with substantial performance improvements across core metrics.

Today, we’re shipping GPT-5.2 and GPT-5.2-Chat. Each is greatly improved from its predecessor, and together they excel at everyday professional work.

GPT-5.2: The most advanced reasoning model that solves harder problems more effectively and with more polish. An example of this is information work, where great thinking is now complemented with better communication skills and improved formatting in spreadsheets and slideshow creation.

GPT-5.2-Chat: A powerful yet efficient workhorse for everyday work and learning, with clear improvements in info-seeking questions, how-to’s and walk-throughs, technical writing, and translation. It’s also more effective at supporting studying and skill-building, as well as offering clearer job and career guidance.

Why GPT-5.2 sets a new standard for enterprise AI

For long term success in complex professional tasks, teams need structured outputs, reliable tool use, and enterprise guardrails. GPT‑5.2 is optimized for these agent scenarios within Foundry’s enterprise-grade platform, offering consistent developer experience across reasoning, chat, and coding.

Multi-Step Logical Chains: Decomposes complex tasks, justifies decisions, and produces explainable plans.

Context-Aware Planning: Ingests large inputs (project briefs, codebases, meeting notes) for holistic, actionable output.

Agentic Execution: Coordinates tasks end-to-end, across design, implementation, testing, and deployment, reducing iteration cycles and manual oversight.

Safety and Governance: Enterprise-grade controls, managed identities, and policy enforcement for secure, compliant AI adoption.

GPT-5.2’s deep reasoning capabilities, expanded context handling, and agentic patterns make it the smart choice for building AI agents that can tackle long-running, complex tasks across industries, including financial services, healthcare, manufacturing, and customer support.

Analytics and Decision Support: Useful for wind-tunneling scenarios (stress-testing plans before committing to them), explaining trade-offs, and producing defensible plans for stakeholders.

Application Modernization: Make rapid progress in refactoring services, generating tests, and producing migration plans with risk and rollback criteria.

Data and Pipelines: Audit ETL, recommend monitors/SLAs, and generate validation SQL for data integrity.

Customer Experiences: Build context-aware assistants and agentic workflows that integrate into existing apps.

The results? Agents that maintain reliability through complex, long-running workflows while producing structured, auditable outputs that scale confidently in Microsoft Foundry.
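The agentic-execution pattern described above (decompose a task, call tools, verify each result, stop on failure) reduces to a simple loop. This toy harness uses hard-coded tools and a pre-computed plan; in a real system the model would generate the plan and judge the results:

```python
def run_agent(goal, tools, plan):
    """Execute a plan step by step, recording an auditable transcript.
    Each step names a tool, its arguments, and a check on the result;
    a failed check halts execution rather than guessing onward."""
    transcript = []
    for step in plan:
        result = tools[step["tool"]](**step["args"])
        ok = step["check"](result)
        transcript.append({"step": step["tool"], "result": result, "ok": ok})
        if not ok:
            break
    return transcript

tools = {"add": lambda a, b: a + b, "fmt": lambda x: f"total={x}"}
plan = [
    {"tool": "add", "args": {"a": 2, "b": 3}, "check": lambda r: r == 5},
    {"tool": "fmt", "args": {"x": 5},         "check": lambda r: "total" in r},
]
print(run_agent("sum and report", tools, plan))
```

The transcript is the "structured, auditable output": every tool call, result, and verification is recorded, which is what makes long-running agent workflows reviewable after the fact.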

GPT-5.2 deployment and pricing

Model deployment pricing (USD per 1 million tokens):

Model        | Deployment               | Input  | Cached Input | Output
GPT-5.2      | Standard Global          | $1.75  | $0.175       | $14.00
GPT-5.2      | Standard Data Zones (US) | $1.925 | $0.193       | $15.40
GPT-5.2-Chat | Standard Global          | $1.75  | $0.175       | $14.00
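Using the Standard Global rates listed above, per-request cost is a straight per-million-token calculation (this sketch assumes cached tokens are a subset of the input tokens and are billed at the cached rate):

```python
RATES = {  # USD per 1M tokens, Standard Global, from the pricing above
    "gpt-5.2": {"input": 1.75, "cached": 0.175, "output": 14.00},
}

def request_cost(model, input_toks, cached_toks, output_toks):
    """Cost of one request; cached tokens are billed at the cached-input
    rate, the remaining input tokens at the full input rate."""
    r = RATES[model]
    fresh = input_toks - cached_toks
    return (fresh * r["input"]
            + cached_toks * r["cached"]
            + output_toks * r["output"]) / 1_000_000

# 40K input tokens (10K cache hits) and 2K output tokens:
print(f"${request_cost('gpt-5.2', 40_000, 10_000, 2_000):.4f}")
```

Note how output tokens dominate: at $14.00 per million they cost 8x the input rate, so verbose completions drive spend far more than long prompts.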

Start building with GPT-5.2 today
Build in Microsoft Foundry, where enterprise agents go from vision to production.

Start your next project

The post Introducing GPT-5.2 in Microsoft Foundry: The new standard for enterprise AI appeared first on Microsoft Azure Blog.
Quelle: Azure

Azure networking updates on security, reliability, and high availability

Enabling the next wave of cloud transformation with Azure Networking

The cloud landscape is evolving at an unprecedented pace, driven by the exponential growth of AI workloads and the need for seamless, secure, and high-performance connectivity. Azure Network services stand at the forefront of this transformation, delivering the hyperscale infrastructure, intelligent services, and resilient architecture that empower organizations to innovate and scale with confidence.

Get the latest Azure Network services updates here

Azure’s global network is purpose-built to meet the demands of modern AI and cloud applications. With over 60 AI regions, 500,000+ miles of fiber, and more than 4 petabits per second (Pbps) of WAN capacity, Azure’s backbone is engineered for massive scale and reliability. The network has tripled its overall capacity since the end of FY24, now reaching 18 Pbps, ensuring that customers can run the most demanding AI and data workloads with uncompromising performance.

In this blog, I am excited to share our advancements in datacenter networking that provide the core infrastructure to run AI training at massive scale, as well as our latest product announcements that strengthen the resilience, security, scale, and capabilities needed to run cloud-native workloads with optimized performance and cost.

AI at the heart of the cloud

AI is not just a workload—it’s the engine driving the next generation of cloud systems. Azure’s network fabric is optimized for AI at every layer, supporting long-lasting, high-bandwidth flows for model training, low-latency intra-datacenter fabrics for GPU clusters, and secure, lossless traffic management. Azure’s architecture integrates InfiniBand and high-speed Ethernet to deliver ultra-fast, lossless data transfer between compute and storage, minimizing training times and maximizing efficiency. Azure’s network is built to support workloads with distributed GPU pools across datacenters and regions using a dedicated AI WAN. Distributed GPU clusters are connected to the services running in Azure regions via a dedicated, private connection that uses Azure Private Link and hardware-based VNet appliances running high-performance DPUs.

Azure Network services are designed to support users at every stage—from migrating on-premises workloads to the cloud, to modernizing applications with advanced services, to building cloud-native and AI-powered solutions. Whether it’s seamless VNet integration, ExpressRoute for private connectivity, or advanced container networking for Kubernetes, Azure provides the tools and services to connect, build, and secure the cloud of tomorrow.

Resilient by default

Resiliency is foundational to Azure Networking’s mission, and we continue to execute on the goal of providing resiliency by default. Continuing the trend of offering zone-resilient SKUs of our gateways (ExpressRoute, VPN, and Application Gateway), the latest to join the list is Azure NAT Gateway. At Ignite 2025, we announced the public preview of Standard NAT Gateway V2, which offers a zone-redundant architecture for outbound connectivity at no additional cost. Zone-redundant NAT gateways automatically distribute traffic to available zones during an outage of a single zone. NAT Gateway V2 also supports 100 Gbps of total throughput, can handle 10 million packets per second, is IPv6-ready out of the gate, and provides traffic insights with flow logs. Read the NAT Gateway blog for more information.

Pushing the boundaries on security

We continue to advance our platform with security as the top mission, adhering to the principles of Secure Future Initiatives. Along these lines, we are happy to announce the following capabilities in preview or GA:

DNS Security Policy with Threat Intel: Now generally available, this feature provides smart protection with continuous updates, monitoring, and blocking of known malicious domains.

Private Link Direct Connect: Now in public preview, this extends Private Link connectivity to any routable private IP address, supporting disconnected VNets and external SaaS providers, with enhanced auditing and compliance support.

JWT Validation in Application Gateway: Application Gateway now supports JSON Web Token (JWT) validation in public preview, delivering native JWT validation at Layer 7 for web applications, APIs, and service-to-service (S2S) or machine-to-machine (M2M) communication. This shifts token validation from backend servers to the Application Gateway, improving performance and reducing complexity, and gives organizations consistent, centralized, secure-by-default Layer 7 controls so teams can build and innovate faster while maintaining a trustworthy security posture.

Forced tunneling for Virtual WAN Secure Hubs: Forced tunneling lets you configure Azure Virtual WAN to inspect Internet-bound traffic with a security solution deployed in the Virtual WAN hub and route the inspected traffic to a designated next hop instead of directly to the Internet. You can route Internet traffic to an edge firewall connected to Virtual WAN via the default route learned from ExpressRoute, VPN, or SD-WAN, or to a Network Virtual Appliance or SASE solution deployed in a spoke virtual network connected to Virtual WAN.

Providing ubiquitous scale

Scale is of utmost importance to customers looking to fine-tune their AI models or run low-latency inferencing for their AI/ML workloads. Enhanced VPN and ExpressRoute connectivity and scalable private endpoints further strengthen the platform’s reliability and future-readiness.

ExpressRoute 400G: Azure will support 400G ExpressRoute Direct ports in select locations starting in 2026. Customers can combine multiple of these ports for multi-terabit throughput over a dedicated private connection to on-premises or remote GPU sites.

High throughput VPN Gateway: We are announcing the GA of 3x faster VPN gateway connectivity, with support for a single TCP flow of 5 Gbps and total throughput of 20 Gbps across four tunnels.

High scale Private Link: We are also increasing the number of private endpoints allowed in a virtual network to 5,000, with support for up to 20,000 cross-peered VNets.

Advanced traffic filtering for storage optimization in Azure Network Watcher: Targeted traffic logs help optimize storage costs, accelerate analysis, and simplify configuration and management.

Enhancing the experience of cloud native applications

Elasticity and the ability to scale seamlessly are essential capabilities that Azure customers who deploy containerized apps expect and rely on. AKS is an ideal platform for deploying and managing containerized applications that require high availability, scalability, and portability. Azure’s Advanced Container Networking Services is natively integrated with AKS and offered as a managed networking add-on for workloads that require high-performance networking, essential security, and pod-level observability.

We are happy to announce the product updates below in this space:

eBPF Host Routing in Advanced Container Networking Services for AKS: By embedding routing logic directly into the Linux kernel, this feature reduces latency and increases throughput for containerized applications.

Pod CIDR Expansion in Azure CNI Overlay for AKS: This new capability allows users to expand existing pod CIDR ranges, enhancing scalability and adaptability for large Kubernetes workloads without redeploying clusters.
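The capacity math behind CIDR expansion is straightforward. The sketch below uses hypothetical ranges and Python's `ipaddress` module to show how adding a second range grows the pod address space; it illustrates the arithmetic, not the AKS API itself:

```python
import ipaddress

def pod_capacity(cidrs: list[str]) -> int:
    """Total pod IP addresses available across a set of CIDR blocks."""
    return sum(ipaddress.ip_network(c).num_addresses for c in cidrs)

# Hypothetical cluster whose original pod CIDR is nearing exhaustion:
original = ["10.244.0.0/16"]             # 2**16 = 65,536 addresses
expanded = original + ["10.245.0.0/16"]  # expansion adds a second /16 range

print(pod_capacity(original))   # 65536
print(pod_capacity(expanded))   # 131072
```

Being able to append a range like this, without redeploying the cluster, is what the feature delivers.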

WAF for Azure Application Gateway for Containers: Now generally available, this brings secure-by-design web application firewall capabilities to AKS, ensuring operational consistency and seamless policy management for containerized workloads.

Azure Bastion now enables secure, simplified access to private AKS clusters, reducing setup effort, maintaining isolation, and delivering cost savings to users.

These innovations reflect Azure Networking’s commitment to delivering secure, scalable, and future-ready solutions for every stage of your cloud journey. For a full list of updates, visit the official Azure updates page.

Get started with Azure Networking

Azure Networking is more than infrastructure—it’s the catalyst for foundational digital transformation, empowering enterprises to harness the full potential of the cloud and AI. As organizations navigate their cloud journeys, Azure stands ready to connect, secure, and accelerate innovation at every step.

All updates in one spot
From Azure DNS to Virtual Network, stay informed on what's new with Azure Networking.

Get more information here

The post Azure networking updates on security, reliability, and high availability appeared first on Microsoft Azure Blog.
Source: Azure

A decade of open innovation: Celebrating 10 years of Microsoft and Red Hat partnership

Ten years ago, Microsoft and Red Hat began a partnership grounded in open source and enterprise cloud innovation. This year, we celebrate a decade of collaboration. Our journey together has helped customers accelerate hybrid cloud transformation, empower developers to innovate, and strengthen the open source community to drive modern application innovation.

Accelerate modernization with Azure Red Hat OpenShift

The partnership that redefined enterprise cloud

In 2015, running mission-critical Linux workloads on Microsoft Azure was considered bold and visionary. Ten years later, our partnership with Red Hat has helped thousands of organizations worldwide accelerate digital transformation, set new benchmarks in open innovation, and advance the cloud-native movement for enterprises everywhere.

Together, we introduced Red Hat Enterprise Linux (RHEL) on Azure, setting a new precedent for innovation in the cloud. This collaboration deepened with the addition of Red Hat offerings, including Azure Red Hat OpenShift (ARO)—a fully managed, jointly engineered, and supported application platform that combines cloud scale with open source flexibility.

Red Hat and Microsoft’s global footprint and expanding customer base underline how an open approach and commitment to solving customer challenges drives adoption and innovation at scale.

Accomplishments and impact

Azure Red Hat OpenShift and Red Hat’s automation platforms are powering digital transformation for global leaders across industries:

Leaders like Teranet have saved CA$5.6 million in capital expenditures and increased customer confidence by migrating mission-critical systems and OpenShift containers to Azure, unlocking unmatched scalability and automation.

For Bradesco, Azure Red Hat OpenShift is the secure, scalable backbone of its future-ready AI platform—unifying governance, powering more than 200 enterprise AI initiatives, and accelerating transformation across every business unit. By integrating Azure OpenAI and Power Platform, Bradesco delivers scalable, compliant innovation in banking services. 

Western Sydney University improved reliability and accelerated digital research for thousands of students and faculty with the security and flexibility of Red Hat Enterprise Linux on Azure. 

Symend launched new regions in weeks and powered personalized customer engagement by adopting Azure Red Hat OpenShift and Microsoft Azure AI, driving agility at enterprise scale.

Microsoft itself leverages Red Hat’s Ansible Automation Platform to streamline thousands of endpoints and modernize global network operations for business-critical infrastructure.

Together, Microsoft and Red Hat have advanced the industry with major accomplishments:

Deep integration for real-world flexibility: Red Hat solutions—like Azure Red Hat OpenShift, Red Hat Enterprise Linux, and Red Hat Ansible Automation Platform—are available across Azure, including in the Azure Marketplace, Azure Government, and expanding regions. Customers benefit from streamlined migrations, enhanced security features, and integrated support that simplifies modernization.

Modernization and operational agility: OpenShift Virtualization and Confidential Containers on Azure Red Hat OpenShift enable customers to migrate and modernize legacy applications, run confidential workloads, and automate operations. These capabilities deliver scalability and secure management across hybrid environments.

Accelerating open source innovation: Together, the companies have made contributions to Kubernetes, containers, cloud monitoring, secure computing standards, and advancing open hybrid architectures for everyone.

Expanding developer and IT choice: By making RHEL available for Windows Subsystem for Linux and supporting hybrid container and virtual machine (VM) environments, Microsoft and Red Hat have given developers flexible, secure, and consistent tools for building anywhere.

Enabling transformative AI adoption at scale: By leveraging Azure Red Hat OpenShift as a secure, governable foundation for managing multicloud OpenShift clusters, Bradesco streamlined operations across on-premises and cloud environments. This foundation, combined with Microsoft Foundry and Azure OpenAI Service, empowers Bradesco to deliver AI-powered banking solutions that scale securely and responsibly across millions of customers and business units. Symend also adopts Azure Red Hat OpenShift and Azure AI to power personalized customer engagement.

Flexible pricing: Azure Hybrid Benefit for RHEL is a key cost optimization feature that allows organizations to apply existing Red Hat subscriptions to workloads running on Azure. By leveraging this benefit, customers can reduce licensing costs and improve ROI while maintaining enterprise-grade support and security—technical flexibility paired with financial efficiency for hybrid environments.

Additionally, customers can optimize costs with pay-as-you-go pricing, draw down Microsoft Azure Consumption Commitment (MACC), and receive a single bill for both OpenShift and Azure consumption with Azure Red Hat OpenShift.

Discover what these solutions can offer your business

Ten years of innovation: Microsoft and Red Hat partnership highlights

The partnership’s journey is marked by major shared milestones:

November 2015: Partnership announcement launched a decade of innovation.

February 2016: Red Hat Enterprise Linux available in the Azure Marketplace with integrated support.

May 2019: Azure Red Hat OpenShift reached general availability (GA).

January 2020: Red Hat Enterprise Linux BYOS Gold images available in Azure.

May 2021: JBoss EAP offered as an Azure App Service.

January 2022: Ansible released as a managed app for automation.

February 2023: Azure Red Hat OpenShift for Azure Government reached GA.

May 2025: OpenShift Virtualization on Azure Red Hat OpenShift entered public preview, reaching GA at Ignite 2025.


Ignite 2025: GA of OpenShift Virtualization and more on Azure Red Hat OpenShift

A defining moment of our tenth anniversary was the GA of OpenShift Virtualization on Azure Red Hat OpenShift, announced at Microsoft Ignite 2025. Organizations can now run VMs alongside containers on a single, secure platform, seamlessly bridging traditional virtualization with cloud-native innovation. Enterprises can modernize their VM workloads into Kubernetes-based environments, leveraging Azure’s performance and security with familiar OpenShift tools.

In addition, Microsoft Ignite 2025 marked the GA of confidential containers on Azure Red Hat OpenShift, delivering enhanced hardware-enforced security and isolation for container workloads. The event also showcased the GA of Red Hat Enterprise Linux (RHEL) for HPC on Azure, offering a secure, high-performance platform tailored for scientific and parallel computing workloads in Azure.

Together, these announcements underscore our ongoing commitment to hybrid innovation, security, and helping customers deploy a wide spectrum of enterprise workloads with agility and confidence.

Open at the core: What’s next for open source and enterprise cloud collaboration

Ten years of partnership have proven openness is more than a technological strategy—it is a culture of progress, trust, and shared innovation. Microsoft and Red Hat remain committed to pioneering the future of hybrid cloud and AI-powered applications, always keeping customer choice and reliability at the center.

We’re proud to partner with Red Hat not just to support our customers, but also in our shared interest in projects like the Linux Kernel, Kubernetes, and most recently llm-d. Together, we are committed to continuing contributions to the health and success of open source technologies and communities.

To our customers, partners, and open source communities: thank you for partnering with us on this journey. Together, we will continue to build the future of enterprise technology—openly, boldly, and collaboratively.
—Brendan Burns, Corporate Vice President, Microsoft Cloud Native

Explore OpenShift Virtualization on Azure

Explore more stories on hybrid cloud and open innovation

Unlock what is next: Microsoft at Red Hat Summit 2025

Red Hat Powers Modern Virtualization on Microsoft Azure

Red Hat Success Stories: Helping Microsoft with IT automation

The best of both worlds: How Microsoft and Red Hat are revolutionizing enterprise IT

Red Hat CEO and Microsoft EVP on The Evolution of Open Source

GA of OpenShift Virtualization on Azure Red Hat OpenShift at Microsoft Ignite 2025

For Bradesco, Azure Red Hat OpenShift is the secure, scalable backbone of its future-ready AI platform

Ortec Finance launched a cloud-native risk management platform, accelerating service delivery for over 600 financial institutions

Rossmann transformed its retail operations and scaled hybrid cloud deployments to millions of customers

City of Vienna modernized citizen services with AI, improving availability and efficiency for thousands of residents

Porsche Informatik accelerated digital transformation across automotive logistics, optimizing mission-critical IT services

The post A decade of open innovation: Celebrating 10 years of Microsoft and Red Hat partnership appeared first on Microsoft Azure Blog.

Introducing Mistral Large 3 in Microsoft Foundry: Open, capable, and ready for production workloads

Enterprises today are embracing open-weight models for their transparency, flexibility, and ability to run across a broad range of deployment architectures. As the number of open models grows, the bar for reliability, instruction-following quality, multimodal reasoning, and long-context performance continues to rise. 

Today, we’re excited to announce that Mistral Large 3 is now available in Azure, bringing one of the strongest open-weight, Apache-licensed frontier models to the Microsoft Cloud. 

Mistral Large 3 delivers frontier-class capabilities with open-source flexibility, making it a powerful option for organizations building production assistants, retrieval-augmented applications, agentic systems, and multimodal workflows. 

See Mistral Large 3 in action

Enterprise-ready open models 

Mistral Large 3 sits in the leading tier of globally available open models alongside DeepSeek and the GPT OSS family. It is optimized not only for benchmark-chasing on abstract mathematical puzzles, but also for what customers need most in real enterprise applications: 

Highly reliable instruction following 

Long-context comprehension and retention 

Strong multimodal reasoning 

Stable, predictable performance across dialogue and applied reasoning 

According to Mistral, Mistral Large 3 shows fewer breakdowns and more consistent behavior than most peers, especially in multi-turn conversations and complex, extended inputs. It is designed for production, not just experimentation. 

Mistral Large 3 is optimized for real-world scenarios 

Instruction reliability you can depend on 

Many open models excel on benchmarks but struggle to follow instructions faithfully when deployed in real workflows. Mistral Large 3 reverses that trend by demonstrating:

Precise adherence to task instructions 

Strong grounding in domain knowledge 

Low hallucination rates

Consistent formatting in structured outputs 

This makes it particularly effective for agents, automation flows, and business logic integration where reliability is non-negotiable. 

Exceptional long-context handling 

With extended context support, Mistral Large 3 processes, retains, and reasons over long documents, multi-step sequences, and sustained dialogues with notable stability. 

Enterprises can use it for: 

Retrieval-augmented generation 

Document understanding 

Multi-turn conversational systems 

Long-form summarization and synthesis 

Its ability to maintain coherence over long sessions reduces error cascades and produces more predictable outcomes. 
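As a toy illustration of the retrieval-augmented pattern named above: retrieve the most relevant passages, assemble them into the prompt, and let the long-context model answer from that context. Real pipelines use embedding search rather than the keyword overlap sketched here, and all names and documents below are made up:

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> int:
    """Naive relevance score: number of shared word tokens."""
    return len(tokens(query) & tokens(doc))

def build_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Select the top-k passages and assemble a grounded prompt."""
    top = sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Refund requests require an order number.",
]
print(build_prompt("What do refund requests require?", docs))
```

A long-context model's contribution to this pattern is at the last step: the larger and more stable the context window, the more retrieved material can be grounded in a single call.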

Multimodal and applied reasoning 

As organizations build increasingly multimodal workflows that interpret text, images, diagrams, and structured data, Mistral Large 3 provides strong cross-modal understanding with balanced behavior. 

It excels in: 

Visual question answering 

Diagram or chart interpretation 

Multimodal retrieval and grounding 

Combined reasoning over text and image inputs 

Its stability makes it ideal for use cases where multimodal reasoning must be accurate, not approximate. 

Fully open and Apache 2.0 licensed

Mistral Large 3 stands out as the strongest fully open model developed outside of China and offers something rare in the global ecosystem: frontier-level capability, Apache 2.0 licensing, reproducible results, and worldwide availability without regional restrictions. 

Organizations can: 

Integrate the model in Microsoft Foundry 

Export weights for hybrid or on-premises deployment (subject to Mistral licensing) 

Run it in their own VPC, edge, or sovereign cloud environments 

Fine-tune or customize freely 

Use it for commercial applications without attribution requirements 

This combination of capability and openness is uniquely compelling for global enterprises requiring flexibility, transparency, and long-term vendor independence. 

Why Mistral Large 3 in Azure? 

Foundry provides an end-to-end workspace for model development, evaluation, and deployment, including unified governance, observability, and agent-ready tooling. 

With Mistral Large 3 in Foundry, customers gain: 

1. Unified access to top-performing models

Simplified and secure access to Mistral Large 3 and Mistral Document AI as first-party models available on Foundry alongside other open and commercial frontier models.

2. End-to-end evaluation and observability

Foundry delivers end-to-end evaluations, routing, and observability, enabling organizations to benchmark Mistral Large 3 across cost, latency, throughput, and quality, while monitoring performance and spending through a single set of dashboards and SDKs. Workloads can be intelligently routed to the most efficient model with no added integration effort. 

3. Enterprise-grade safety and governance 

Foundry applies Responsible AI safeguards, content filters, and auditability across all model interactions, ensuring safe, compliant deployments. 

4. Agent-first capabilities 

Mistral Large 3 supports tool calling, enabling agentic systems that can take action, automate workflows, and connect to enterprise data and APIs. This foundation supports customer service bots, research agents, automation flows, and enterprise copilots. 
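As a sketch of what a tool-calling request can look like, here is the widely adopted OpenAI-style function schema; the deployment name and the tool itself are hypothetical, and the exact wire format depends on the API surface your deployment exposes:

```python
import json

# Hypothetical tool definition: lets the model request an order-status lookup
# instead of guessing an answer.
get_order_status = {
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "string", "description": "Order identifier."},
            },
            "required": ["order_id"],
        },
    },
}

# Request body an agent framework might send to the deployment:
request_body = {
    "model": "mistral-large-3",  # hypothetical deployment name
    "messages": [{"role": "user", "content": "Where is order A-1042?"}],
    "tools": [get_order_status],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
print(json.dumps(request_body, indent=2))
```

When the model decides the tool is needed, it returns a structured tool call (the function name plus JSON arguments) instead of free text, which the calling application executes and feeds back as a follow-up message.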

Unlocking new use cases across industries 

Enterprise knowledge assistants: Long-context comprehension enables rich, grounded conversations across corporate knowledge bases. 

Document intelligence and retrieval-augmented pipelines: Stable reasoning and consistent formatting make it ideal for summarization, extraction, and multi-document synthesis. 

Developer agents and automation: Reliable instruction supports code refactoring, test generation, and workflow automation. 

Multimodal customer experiences: Combining image and text understanding enables richer digital assistant and customer support experiences. 

Pricing

Model: Mistral Large 3
Deployment type: Global Standard
Azure resource regions: West US 3
Price per 1M tokens: Input $0.5; Output $1.5
Availability: December 2, 2025 (public preview)
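At the listed preview rates, per-request cost is simple arithmetic; the workload figures below are illustrative:

```python
def cost_usd(input_tokens: int, output_tokens: int,
             in_rate: float = 0.5, out_rate: float = 1.5) -> float:
    """Cost at per-1M-token rates (listed preview prices: $0.5 in / $1.5 out)."""
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

# e.g. a long-context call with a large retrieved context and a short answer:
print(round(cost_usd(input_tokens=40_000, output_tokens=1_000), 4))  # 0.0215
```

Note the asymmetry: output tokens cost three times as much as input tokens, so long-context workloads that read a lot and write a little stay comparatively cheap.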

The future of open models on Azure 

With the addition of Mistral Large 3, Foundry continues to expand its position as the cloud platform with the widest selection of open and frontier models, unified under a single, enterprise-ready ecosystem. 

As organizations increasingly demand transparent, flexible, and globally accessible intelligence, Mistral Large 3 sets a new benchmark for what a production-ready open model can deliver. 

Try Mistral Large 3 today
Open, capable, multimodal, and long-context, Mistral Large 3 is now available in Microsoft Foundry.

Explore on Foundry

The post Introducing Mistral Large 3 in Microsoft Foundry: Open, capable, and ready for production workloads appeared first on Microsoft Azure Blog.