Announcing Azure Copilot agents and AI infrastructure innovations

In this article

Agentic cloud operations: Introducing Azure Copilot
Azure’s AI infrastructure: The backbone of modernization
Building for trust: Resiliency, operational excellence, and security
What does modernizing workloads look like today?
Looking ahead

The cloud is more than just a platform—it’s the engine of transformation for every organization. This year at Microsoft Ignite 2025, we’re showing how Microsoft Azure modernizes cloud infrastructure at global scale—built for reliability, security, and performance in the AI era.

Streamline cloud operations with Azure Copilot

From scalable compute to resilient networks and AI-powered operations, Azure provides the foundation that helps customers innovate faster and operate with confidence. Our strategy is anchored in three key areas, each designed to help customers thrive in a rapidly changing landscape:

We’re strengthening Azure’s global foundation. We’re expanding capacity and resilience across regions while optimizing datacenter design, power efficiency, and network topology for AI-scale workloads. Our services are zone-redundant by default, our edge footprint is growing to meet low-latency needs, and security and compliance controls are embedded at every layer. From confidential computing and sovereign cloud architectures to our security capabilities, Azure is engineered for trust by design.

We’re modernizing every workload. We’re advancing compute, network, storage, application, and data services with Microsoft Azure Cobalt and Azure Boost systems, Azure Kubernetes Service (AKS) Automatic, and Azure HorizonDB for PostgreSQL. We embrace and integrate with Linux, Kubernetes, and open-source ecosystems customers rely on.

We’re transforming how teams work. We’re embedding AI agents directly into the platform through Azure Copilot and GitHub Copilot, bringing agent-based capabilities for migration, app modernization, troubleshooting, and optimization. These features remove repetitive tasks so teams can focus on architecture instead of administration, making AI an integral part of how Azure runs end to end.

Agentic cloud operations: Introducing Azure Copilot

Azure is entering a new era where AI becomes the foundation for running your cloud. As environments grow more complex, traditional tools and manual workflows can’t keep up. This brings us to a frontier moment, where AI and cloud converge to redefine operations. We call this agentic cloud ops: a new model for the AI era.  

What is Azure Copilot?

Azure Copilot is a new agentic interface that orchestrates specialized agents across the cloud management lifecycle, automating migration, optimization, troubleshooting, and more, freeing up teams to focus on innovation.

Azure Copilot aligns actions—human or agent—with your policies and standards, offering a unified framework for compliance, auditing, and enforcement that respects role-based access control (RBAC) and Azure Policy. It provides strong governance and data residency controls, full visibility across agents and workloads, and lets you bring your own storage for complete control of chat and artifact data. To make the operating model truly agentic, we’re introducing six Azure Copilot agents—migration, deployment, optimization, observability, resiliency, and troubleshooting—in gated preview.

Learn more about Azure Copilot in our detailed blog, and find out how to sign up for the preview.

Sign up for the Azure Copilot preview

Azure’s AI infrastructure: The backbone of modernization

Azure is built for reliable, world-class performance, delivering at global scale and speed.

With more than 70 regions and hundreds of datacenters, Azure provides the largest cloud footprint in the industry. This unified infrastructure supports consistent performance, capacity, and compliance for customers everywhere.

AI infrastructure that delivers performance and scale

We’ve reimagined how our datacenters are built and operated to support the critical needs of the largest AI challenges. In September 2025, we launched Fairwater, our largest and most sophisticated AI datacenter to date, and our newest site in Atlanta now joins Wisconsin to form a planet-scale “AI superfactory.” By using high-density liquid cooling, a flat network architecture linking hundreds of thousands of GPUs, and a dedicated AI WAN backbone, we’re giving customers unmatched capacity, flexibility, and utilization across every AI workload.

Azure is the first cloud provider to deploy NVIDIA’s GB300 GPUs at scale, extending our leadership from GB200 and continuing to define the infrastructure foundation for the AI era. Each Fairwater site connects hundreds of thousands of these best-in-class GPUs, millions of CPU cores, and massive storage—enough to hold 80 billion 4K movies.

A key part of this evolution is our AI WAN—a high-speed network linking Fairwater and other Azure datacenters to move data quickly and coordinate massive AI jobs across sites. It’s engineered to keep GPUs busy, reduce bottlenecks, and scale workloads beyond the limits of a single location, so customers can tackle bigger projects and get results faster. Driving down costs through innovation, we’ve set a new benchmark for secure, high-performance AI: Azure processed more than 1.1 million tokens per second for language models—the equivalent of writing seven books per second from a single rack.

Azure’s AI infrastructure puts supercomputing-level power in every customer’s hands—enabling larger model training, faster deployment, and broader user reach within a trusted, compliant environment.

Extending AI infrastructure innovation to your workloads

An exciting part of our work on AI datacenters is that the same architectural breakthroughs that allow us to train frontier models also strengthen Azure’s core services, directly benefiting all workloads.

One of these examples is Azure Boost, which offloads virtualization processes traditionally performed by the hypervisor and host operating system onto purpose-built hardware and software. Combined with our new AMD “Turin” and Intel “Granite Rapids” virtual machines—plus the latest network-optimized and storage-optimized families—customers are seeing more than 20 GB per second of managed-disk throughput and more than a million input/output operations per second (IOPS). More than a quarter of our global fleet is now Boost-enabled, and network throughput has doubled to 400 gigabits per second for our general-purpose and AI SKUs. The infrastructure investments we’ve talked about are already being used by leading-edge companies to bring services to billions of users.

Cloud-native apps and data

Azure Kubernetes Service (AKS) delivers secure, managed Kubernetes with automated upgrades and scaling. Paired with cloud-native databases like PostgreSQL and Cosmos DB, teams build faster and recover instantly.

We’re doing the work to bridge the power of the AI infrastructure into AKS by enabling cutting-edge GPUs to function as AKS nodes out of the box and by actively monitoring their health.

We haven’t stopped at the infrastructure layer. We’re also reinventing how easy it is to take advantage of Kubernetes itself. That’s why we introduced AKS Automatic. It embeds best practices, automates infrastructure provisioning, and operates critical Kubernetes components to reduce complexity and improve reliability. It handles the hard parts—patching, upgrades, observability, and security—so teams can focus on innovation instead of infrastructure.

With AKS Automatic-managed system node pools, we’re making AKS Automatic even lower-touch: you no longer have to run critical Kubernetes components yourself; they’re entirely managed by the service. Key services like CoreDNS and the metrics server move to Microsoft-managed infrastructure, making it even easier to focus entirely on your apps.
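As a rough sketch of what this looks like in practice (resource names are placeholders, and flags should be checked against the current AKS documentation for your subscription), creating an AKS Automatic cluster takes only a few commands:

```shell
# Sketch: create an AKS Automatic cluster. Requires an Azure subscription
# and a logged-in Azure CLI; names below are illustrative placeholders.
az group create --name my-rg --location eastus

# The "automatic" SKU turns on managed node pools, patching, upgrades,
# and scaling defaults, so there is no node infrastructure to configure.
az aks create \
  --resource-group my-rg \
  --name my-automatic-cluster \
  --sku automatic

# Fetch credentials and deploy workloads with standard Kubernetes tooling.
az aks get-credentials --resource-group my-rg --name my-automatic-cluster
kubectl create deployment hello --image=mcr.microsoft.com/azuredocs/aks-helloworld:v1
```

From there, day-two operations such as node patching and component upgrades are handled by the service rather than by your team.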

You not only need easy-to-use application infrastructure; you also need easy-to-use data tools for your applications. We’re introducing Azure HorizonDB for PostgreSQL, which brings breakthrough scalability and AI integration for next-generation applications. Azure DocumentDB is now generally available—the first managed database built on the open-source engine we contributed to the Linux Foundation.

We are also excited to expand our longstanding partnership with SAP and announce the launch of SAP Business Data Cloud Connect for Microsoft Fabric, simplifying access to data sharing across both platforms. Read the announcement blog to learn more.

Azure Databases and Microsoft Fabric
Learn more about the next generation of Microsoft’s databases, announced at Microsoft Ignite 2025.

Read the blog

Building for trust: Resiliency, operational excellence, and security

The world runs on Azure’s cloud infrastructure. Every business, government, and developer depends on it to be reliable, secure, and always available. That responsibility drives everything we do in Azure. Our mission is to build the most efficient, reliable, and cost-effective infrastructure platform of the AI era, one that customers can depend on every day.

Resiliency is not just a feature. It is a design principle, a culture, and a shared commitment between Microsoft and our customers. Every region, service, and operation is built with that responsibility in mind.

At Microsoft Ignite, we are taking this commitment further with new capabilities that strengthen reliability, simplify operations, and help customers build with greater confidence.

Raising the bar on operational excellence: Operational excellence means reliability is designed from the start. Every Azure region is built with availability zones, redundant networking, and automated fault detection. We are extending that foundation with services like NAT Gateway, now zone-redundant by default for improved network reliability without any configuration required.

Empowering customers: With Azure Resiliency (public preview), we are co-engineering resiliency with customers. This new experience helps teams set recovery objectives, test failover drills, and validate application health, strengthening readiness together before issues arise.

Evolving security for modern threats

We continue to expand Azure’s security foundation with new capabilities that make protection simpler, smarter, and more integrated across the platform. These updates strengthen boundaries, automate defense, and bring AI-powered insight directly into how customers protect and operate their environments.

Simplifying protection: Azure Bastion Secure by Default is built into the platform. It automatically hardens remote access to virtual machines through RDP and SSH, reducing setup time and risk.

Strengthening boundaries: Network Security Perimeter, now generally available, provides a secure, centralized firewall for controlling access to PaaS resources.

Better defense: We’re advancing the Web Application Firewall with CAPTCHA support for human verification.

All of this builds on Azure’s broader stack of confidential virtual machines, containers, hardware-based attestation, and encryption, supporting protection from hardware through to application.

What does modernizing workloads look like today?

Organizations are on a journey to modernize. Most run a mix of systems that span decades—from mission-critical databases to new cloud-native services. Azure meets customers where they are, helping modernize applications and data with flexibility, openness, and built-in intelligence.

Rather than a one-off effort, modernization is really about re-architecting for agility, scaling efficiently, and using AI and open technologies without sacrificing reliability or control.

To help simplify and accelerate the modernization journey, we’re investing to help you find and get to the best destination for your workloads, whether it’s infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS).

Azure’s agentic migration and modernization tools make it easier than ever to modernize your apps, data, and infrastructure with speed and precision. For example, you can move existing .NET applications directly into the new App Service Managed Instance (now in preview)—a fully managed environment with no refactoring or containers required.

On the data side, the next-generation Azure SQL Managed Instance (now generally available) delivers up to five times faster performance and double the storage capacity. And Azure Copilot and GitHub Copilot simplify SQL Server, Oracle, and PostgreSQL modernization.

Plus, across infrastructure—from VMware to Linux and IT operations—AI agents streamline migrations, reduce licensing overhead, and automate patching, governance, and compliance, so modernization becomes a repeatable, intelligent motion.

Customers migrating and modernizing to Azure using our agentic tools have shared the impressive results they have experienced. Here are just a few examples:

.NET: More than 500,000 lines of code upgraded and migrated in weeks.

Java: Applications modernized four times faster than before using the agents.

Read more about GitHub Copilot app modernization.

Looking ahead

The promise of the cloud was always about scale, flexibility, and innovation. With innovations across our infrastructure, datacenters, Azure Copilot, services, and open-source contributions, that promise expands to drive your business forward every day.

The next era of the cloud is inevitable. It’s agentic, intelligent, and human-centered—and Microsoft is helping lead the way.

Join us at Microsoft Ignite where you can tune in to our sessions to learn more:

Innovation Session

Innovation Session: Scale Smarter: Infrastructure for the Agentic Era

Breakouts

End-to-End migration of applications with AI Agents to IaaS and PaaS

Unlock agentic intelligence in the cloud with Copilot in Azure

What’s new and what’s next in Azure IaaS

SQL Server 2025: The AI-ready enterprise database

Scaling Kubernetes securely and reliably with AKS

Inside Azure Innovations with Mark Russinovich

The post Announcing Azure Copilot agents and AI infrastructure innovations appeared first on Microsoft Azure Blog.
Source: Azure

Powering Distributed AI/ML at Scale with Azure and Anyscale

The path from prototype to production for AI/ML workloads is rarely straightforward. As data pipelines expand and model complexity grows, teams can find themselves spending more time orchestrating distributed compute than building the intelligence that powers their products. Scaling from a laptop experiment to a production-grade workload still feels like reinventing the wheel. What if scaling AI workloads felt as natural as writing in Python itself? That’s the idea behind Ray, the open-source distributed computing framework born at UC Berkeley’s RISELab, and now, it’s coming to Azure in a whole new way.

Today, at Ray Summit, we announced a new partnership between Microsoft and Anyscale, the company founded by Ray’s creators, to bring Anyscale’s managed Ray service to Azure as a first-party offering in private preview. This new managed service will deliver the simplicity of Anyscale’s developer experience on top of Azure’s enterprise-grade Kubernetes infrastructure, making it possible to run distributed Python workloads with native integrations, unified governance, and streamlined operations, all inside your Azure subscription.

Ray: Open-Source Distributed Computing for Python

Ray reimagines distributed systems for the Python ecosystem, making it simple for developers to scale code from a single laptop to a large cluster with minimal changes. Instead of rewriting applications for distributed execution, Ray offers Pythonic APIs that allow functions and classes to be transformed into distributed tasks and actors without altering core logic. Its smart scheduling seamlessly orchestrates workloads across CPUs, GPUs, and heterogeneous environments, ensuring efficient resource utilization.

Developers can also build complete AI systems using Ray’s native libraries—Ray Train for distributed training, Ray Data for data processing, Ray Serve for model serving, and Ray Tune for hyperparameter optimization—all fully compatible with frameworks like PyTorch and TensorFlow. By abstracting away infrastructure complexity, Ray lets teams focus on model performance and innovation.

Anyscale: Enterprise Ray on Azure

Ray makes distributed computing accessible; Anyscale running on Azure takes it to the next level for enterprise readiness. At the heart of this offering is RayTurbo, Anyscale’s high-performance runtime for Ray. RayTurbo is designed to maximize cluster efficiency and accelerate Python workloads, enabling teams on Azure to:

Spin up Ray clusters in minutes, without Kubernetes expertise, directly from the Azure portal or CLI.

Dynamically allocate tasks across CPUs, GPUs, and heterogeneous nodes, ensuring efficient resource utilization and minimizing idle time.

Run large experiments quickly and cost-effectively with elastic scaling, GPU packing, and native support for Azure spot VMs.

Run reliably at production scale with automatic fault recovery, zero-downtime upgrades, and integrated observability.

Maintain control and governance; clusters run inside your Azure subscription, so data, models, and compute stay secure, with unified billing and compliance under Azure standards.

By combining Ray’s flexible APIs with Anyscale’s managed platform and RayTurbo’s performance, Python developers can move from prototype to production faster, with less operational overhead, and at cloud scale on Azure.

Kubernetes for Distributed Computing

Under the hood, Azure Kubernetes Service (AKS) powers this new managed offering, providing the infrastructure foundation for running Ray at production scale. AKS handles the complexity of orchestrating distributed workloads while delivering the scalability, resilience, and governance that enterprise AI applications require.

AKS delivers:

Dynamic resource orchestration: Automatically provision and scale clusters across CPUs, GPUs, and mixed configurations as demand shifts.

High availability: Self-healing nodes and failover keep workloads running without interruption.

Elastic scaling: Scale from development clusters to production deployments spanning hundreds of nodes.

Integrated Azure services: Native connections to Azure Monitor, Microsoft Entra ID, Blob Storage, and policy tools streamline governance across IT and data science teams.

AKS gives Ray and Anyscale a strong foundation—one that’s already trusted for enterprise workloads and ready to scale from small experiments to global deployments.

Enabling teams with Anyscale running on Azure

With this partnership, Microsoft and Anyscale are bringing together the best of open-source Ray, managed cloud infrastructure, and Kubernetes orchestration. By pairing Ray’s distributed computing platform for Python with Anyscale’s management capabilities and AKS’s robust orchestration, Azure customers gain flexibility in how they can scale AI workloads. Whether you want to start small with rapid experimentation or run mission-critical systems at global scale, this offering gives you the choice to adopt distributed computing without the complexity of building and managing infrastructure yourself.

You can leverage Ray’s open-source ecosystem, integrate with Anyscale’s managed experience, or combine both with Azure-native services, all within your subscription and governance model. This optionality means teams can choose the path that best fits their needs: prototype quickly, optimize for cost and performance, or standardize for enterprise compliance.

Together, Microsoft and Anyscale are removing operational barriers and giving developers more ways to innovate with Python on Azure, so they can move faster, scale smarter, and focus on delivering breakthroughs. Read the full release here.

Get started

Learn more about the private preview and how to request access at https://aka.ms/anyscale or subscribe to Anyscale in the Azure Marketplace.
The post Powering Distributed AI/ML at Scale with Azure and Anyscale appeared first on Microsoft Azure Blog.

Microsoft strengthens sovereign cloud capabilities with new services

Across Europe and around the world, organizations today face a complex mix of regulatory mandates, heightened expectations for resilience, and relentless technological advancement. Sovereignty has become a core requirement for governments, public institutions, and enterprises seeking to harness the full power of the cloud while retaining control over their data and operations.

In June 2025, Microsoft CEO Satya Nadella announced a broad range of solutions to help meet these needs with the Microsoft Sovereign Cloud. We continue to adapt our sovereignty approach—innovating to meet customer needs and regulatory requirements within our Sovereign Public Cloud and Sovereign Private Cloud. Today, we are announcing a new wave of capabilities, building upon our digital sovereignty controls, to deliver advanced AI and scale, strengthened by our ecosystem of specialized in-country partner experts. With this announcement, expanded features and services include:

End-to-end AI data processing in Europe as part of the EU (European Union) Data Boundary.

Microsoft 365 Copilot expands in-country processing for Copilot Interactions to 15 countries. Learn more about this announcement in the Microsoft 365 blog.

Sovereign Landing Zones service expansion and disconnected operations for Microsoft Azure Local.

Microsoft 365 Local general availability.

Increased maximum scale of Azure Local, support for external SAN storage, and support for the latest NVIDIA GPUs.

Availability of our partner Digital Sovereignty specialization.

Discover Microsoft Sovereign Cloud

Microsoft Sovereign Cloud continuous innovation

Our latest offerings include new digital sovereignty capabilities across AI, security, and productivity, as well as a suite of upcoming features that will further address our customers’ sovereign cloud needs.

We recognize the need for continuous innovation and have already begun implementing many commitments. As of this month, we have already:

Established a European board of directors, composed of European nationals, exclusively overseeing all datacenter operations in compliance with European law, thereby putting Europe’s cloud infrastructure into the hands of Europeans.

Increased European datacenter capacity with recent launches in Austria and an upcoming launch in Belgium this month.

Embedded our digital resiliency commitments into all relevant government contracts.

Expanded open‑source investment through funding secure open-source software (OSS) projects and collaborations as well as publishing AI Access Principles that widen safe, responsible access to advanced AI, helping European developers, startups, and enterprises compete more effectively across the region.

Advanced our European Security Program by providing AI-powered intelligence and cybersecurity capacity building initiatives to strengthen Europe’s digital resilience against threat actors.

New Sovereign Public Cloud and AI capabilities

From the moment organizations begin designing their environments for sovereignty, they need end-to-end capabilities that help them embed compliance and control from the start.

EU Data Boundary includes AI data processing residency

We are delivering on our end-to-end AI data processing commitments, where data processed by AI services for EU customers remains within the European Union Data Boundary, except as otherwise directed by the customer.

This means all customer data, whether at rest or in transit, will be stored and processed exclusively in the EU. Our approach includes implementing rigorous controls and transparency measures that comply with EU customer requirements.

Expanding Microsoft 365 Copilot in-country data processing to 15 countries

Building upon decades of investment in global infrastructure and industry-leading data residency capabilities, Microsoft will now offer in-country data processing for customers’ Microsoft 365 Copilot interactions in 15 countries around the world.

By the end of 2025, Microsoft will offer customers in four countries—Australia, India, Japan, and the United Kingdom—the option to have Microsoft 365 Copilot interactions processed in-country. In 2026, we’ll expand the availability of in-country data processing for Microsoft 365 Copilot to customers in eleven more countries: Canada, Germany, Italy, Malaysia, Poland, South Africa, Spain, Sweden, Switzerland, the United Arab Emirates, and the United States.

Read the full announcement in the Microsoft 365 blog

New Sovereign Landing Zone (SLZ) foundation

We are also introducing our refreshed Sovereign Landing Zone (SLZ), built on the market-proven landing zone foundation of Azure Landing Zone (ALZ).

The Sovereign Landing Zone is the recommended platform landing zone for customers wanting to implement sovereign controls in the Azure public cloud as part of the Sovereign Public Cloud.

The refresh of the Sovereign Landing Zone includes:

Updated Management Group hierarchy and supporting Azure Policy definitions, initiatives, and assignments to help implement the Sovereign Public Cloud controls (Level 1, 2, and 3).

Guidance on deployment placement of Azure Key Vault Managed HSM, if required as part of Level 2 Sovereign controls.

Deployment simplified via the Azure landing zone accelerator and the Azure landing zone library. See Sovereign Landing Zone (SLZ) implementation options for further details.

Over the next few months, the Azure Policy definitions, initiatives, and assignments built into the Sovereign Landing Zone will continue to expand, helping customers achieve sovereign controls in the Sovereign Public Cloud out of the box, faster.

By adopting Sovereign Landing Zones, customers can gain a prescriptive architecture that accelerates compliance with regional sovereignty requirements while reducing complexity in policy management. This approach also helps organizations confidently scale workloads across Azure regions without compromising on regulatory alignment or operational consistency.
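To make the policy-driven approach concrete, here is an illustrative sketch of one such guardrail: assigning the built-in "Allowed locations" Azure Policy at a management group to keep deployments in EU regions. The definition ID, management group name, and region list below are illustrative assumptions; the actual built-ins shipped with the Sovereign Landing Zone should be taken from the SLZ library.

```shell
# Sketch: restrict resource deployments to EU regions with the built-in
# "Allowed locations" policy. IDs, names, and scope are illustrative;
# running this requires an Azure subscription and appropriate permissions.
az policy assignment create \
  --name allowed-locations-eu \
  --display-name "Restrict deployments to EU regions" \
  --policy "e56962a6-4747-49cd-b67b-bf8b01975c4c" \
  --params '{ "listOfAllowedLocations": { "value": ["westeurope", "northeurope"] } }' \
  --scope "/providers/Microsoft.Management/managementGroups/my-sovereign-mg"
```

Assigned at the management-group level, a guardrail like this applies to every subscription beneath it, which is how the SLZ hierarchy turns sovereignty requirements into enforced defaults rather than per-team checklists.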

Check out the new Sovereign Landing Zone (SLZ)

New Sovereign Private Cloud and AI capabilities

As organizations deepen their commitment to sovereignty, the ability to combine regulatory compliance with innovation becomes especially important. This next wave of enhancements helps bring together advanced AI capabilities and scalable infrastructure designed for both public and private environments.

Supporting thousands of AI models on Azure Local with NVIDIA RTX GPUs

As we advance our Sovereign Private Cloud capabilities with Azure Local, we are introducing a new Azure offering with the latest NVIDIA RTX Pro 6000 Blackwell Server Edition GPU, purpose-built for high-performance AI workloads in sovereign environments.

Designed to run over 1,000 models such as GPT OSS, DeepSeek-V3, Mistral NeMo, and Llama 4 Maverick, this GPU enables organizations to accelerate their AI initiatives directly within a sovereign private cloud environment. Customers gain the flexibility to experiment, innovate, and deploy advanced AI solutions with enhanced performance. This means organizations can pursue new AI-powered opportunities while helping ensure data protection and compliance.

In addition, customers can gain access to thousands of prebuilt and open-source AI models, ready to deploy for a wide range of scenarios—from generative AI and advanced analytics to real-time decision making. This combination empowers customers to experiment, innovate, and operationalize cutting edge AI solutions, while keeping governance front and center.

Increasing Azure Local scale to hundreds of servers

Azure Local has supported single clusters of up to 16 physical servers. With our latest updates, Azure Local can support hundreds of servers, opening new possibilities for organizations with large-scale or growing sovereign private cloud demands. This enhancement means customers can support bigger, more complex workloads, scale their infrastructure with ease, and respond to evolving business needs all while aligning with the security and sovereignty required by European and global regulations.

SAN support on Azure Local

A key highlight of expanding the scale of our Sovereign Private Cloud is the introduction of Storage Area Network (SAN) support on Azure Local. With this update, customers can now securely connect their existing on-premises storage solutions from industry leaders to Azure Local. This integration empowers organizations to leverage their trusted storage investments while benefiting from cloud-native services, helping ensure data remains within their desired jurisdiction. European enterprises, in particular, gain flexibility in meeting local data residency requirements without compromising on performance or control.

Microsoft 365 Local: General availability of key workloads

Another milestone is the general availability of Microsoft 365 Local, helping bring core productivity workloads—Exchange Server, SharePoint Server, and Skype for Business Server—natively to Azure Local. Starting in December, customers can deploy these productivity workloads on Azure Local in a connected mode, with a disconnected option for complete isolation coming early 2026. This approach combines familiar collaboration tools with Azure Local’s unified management and consistent Azure services and APIs, enabling organizations to maintain full operational control while aligning with stringent compliance and data residency requirements.

Disconnected operations: General availability

Microsoft’s Sovereign Private Cloud extends sovereignty principles into fully dedicated environments for organizations with strict compliance and control requirements, enabled by Azure Local. Azure Local enables government agencies, multinational enterprises, and regulated entities to maintain local control while still benefiting from the scale and innovation of Microsoft’s global cloud platform.

As part of Azure Local, we are introducing the upcoming general availability of disconnected operations, including the ability to manage multiple Azure Local clusters from the same local control plane. Available in early 2026, this capability allows customers to operate private cloud environments with a completely on-premises control plane, enabling organizations to operate securely and independently within their own dedicated environments. With disconnected operations, customers can retain business continuity and operational resilience, even in highly regulated or edge scenarios.

Learn more about Azure Local

New partner Digital Sovereignty specialization now available

We’re excited to officially launch the Digital Sovereignty specialization as part of the Microsoft AI Cloud Partner Program. This new specialization empowers partners to demonstrate deep expertise in delivering secure, compliant, and sovereign cloud solutions across Azure and Microsoft 365 platforms. By earning this designation, partners signal their ability to meet stringent data residency, privacy, and regulatory requirements—helping customers maintain control over their applications and data while driving innovation. The specialization includes rigorous audit criteria and provides benefits such as enhanced discoverability, specialized badging, and priority access to sovereign cloud opportunities.

Looking ahead: Advancing sovereignty through greater controls

The Microsoft Sovereign Cloud roadmap will provide additional capabilities designed to address evolving customer needs including:

Sovereign Public Cloud

Data Guardian: This upcoming capability helps provide transparency into operational sovereignty controls in our European public cloud environments. All remote access by Microsoft engineers to the systems that store and process your data in Europe will be routed to the EU, where an EU-based operator can monitor and, if necessary, halt these activities. All remote access by Microsoft engineers will be recorded in a tamper-evident log.

Sovereign Private Cloud

Enhanced change controls: We will introduce a set of configurable policies and approval workflows that will empower organizations with explicit oversight of any changes propagating from the cloud to the edge, strengthening governance and compliance.

Site-to-site disaster recovery: Azure Site Recovery in Azure Local will help with business continuity by keeping business apps and workloads running during outages.

Move from hybrid to fully disconnected: Azure Local will enable customers to transition workloads from hybrid to fully disconnected operations, providing them with flexibility for business continuity.

National Partner Clouds

National Partner Clouds are a core part of the Microsoft Sovereign Cloud strategy. They provide independently operated cloud environments that deliver Microsoft Azure and Microsoft 365 capabilities under local ownership and control.

Delos Cloud is designed to meet the German government’s BSI cloud platform requirements.

Bleu is designed to meet the French government’s (ANSSI) SecNumCloud requirements.

For many public sector organizations, ERP is a critical workload that requires modernization to cloud environments. SAP is planning to deploy its RISE with SAP offering on Microsoft Azure for both Bleu and Delos Cloud customers, in addition to supporting RISE with SAP for customers using Microsoft Azure public cloud deployments.

Learn more about Microsoft’s sovereign solutions

Microsoft delivers unmatched sovereign solutions, offering a flexible public cloud environment, a private cloud that scales to your business needs, and national partner clouds designed to meet specific compliance requirements. Our commitment to continuous investment and innovation helps our customers achieve sovereignty without compromise.

Discover what’s next in cloud innovation this November at Microsoft Ignite. Learn more and register today.
The post Microsoft strengthens sovereign cloud capabilities with new services appeared first on Microsoft Azure Blog.
Source: Azure

Driving ROI with Azure AI Foundry and UiPath: Intelligent agents in real-world healthcare workflows

Across industries, organizations are moving from experimentation with AI to operationalizing it within business-critical workflows. At Microsoft, we are partnering with UiPath—a preferred enterprise agentic automation platform on Azure—to empower customers with integrated solutions that combine automation and AI at scale.

One example is Azure AI Foundry agents and UiPath agents (built on Azure AI Foundry), orchestrated by UiPath Maestro™, ensuring AI insights flow seamlessly into automated business processes that deliver measurable value.

Get started with agents built on Azure AI Foundry

From insight to action: Managing incidental findings in healthcare

In healthcare, where every insight can influence a life, the ability of AI to connect information and trigger timely action is especially transformative. Incidental findings in radiology reports—unexpected abnormalities uncovered during imaging studies like CT or MRI scans—represent one of the most challenging and overlooked gaps in patient care.

As the volume of patient data grows, overlooked incidental findings outside the original imaging scope can delay care, raise costs, and increase liability risks.

This is where AI steps in. In this workflow, Azure AI Foundry agents and UiPath agents—orchestrated by UiPath Maestro™—work together to operationalize this process in healthcare:

Radiology reports are generated and finalized in existing systems.

UiPath medical record summarization (MRS) agents review reports, flagging incidental findings.

Azure AI Foundry imaging agents analyze historical PACS images and radiology data, comparing past results with the current incidental findings.

UiPath agents aggregate all results—including pertinent EMR history, prior imaging, and AI-generated imaging insights—into a comprehensive follow-up report.

The aggregated information is forwarded to the original ordering care provider in addition to the primary radiology report, eliminating the need to manually comb through the chart and prior exams for pertinent information. This creates both a secondary notification of the incidental finding and puts the summarized, relevant patient information in the clinicians’ hands, efficiently supporting the provision of safe, timely care.

UiPath Maestro™ orchestrates the business process, routing the consolidated packet to the ordering physician or specialist for next steps.
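The steps above can be sketched as a simple pipeline. Every function below is a hypothetical stand-in for the corresponding agent, not a real UiPath or Azure AI Foundry API:

```python
# Illustrative sketch of the incidental-findings flow. Each function is a
# hypothetical stand-in for the agent named in its docstring.

def summarize_report(report: str) -> list[str]:
    """Stand-in for the UiPath MRS agent: flag incidental findings."""
    return [line for line in report.splitlines() if "incidental" in line.lower()]

def compare_prior_imaging(findings: list[str]) -> dict:
    """Stand-in for the Azure AI Foundry imaging agent comparing prior studies."""
    return {finding: "no prior comparison available" for finding in findings}

def build_followup_packet(findings: list[str], imaging_context: dict) -> dict:
    """Stand-in for the UiPath aggregation agent building the follow-up report."""
    return {"findings": findings, "imaging_context": imaging_context}

def route_to_provider(packet: dict) -> str:
    """Stand-in for UiPath Maestro routing the packet to the ordering physician."""
    return f"routed packet with {len(packet['findings'])} finding(s)"

report = "Lung fields clear.\nIncidental 4 mm thyroid nodule noted."
findings = summarize_report(report)
packet = build_followup_packet(findings, compare_prior_imaging(findings))
print(route_to_provider(packet))  # → routed packet with 1 finding(s)
```

The value of the orchestration layer is that each stage can fail, retry, or be traced independently while the end-to-end flow remains a single auditable process.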

The combination of UiPath and Azure AI Foundry agents turns siloed data into precise documentation that can be used to create actionable care pathways—accelerating clinical decision making, reducing physician workload, and improving patient outcomes.

This scenario is enabled by:

UiPath Maestro™: Orchestrates complex workflows that span multiple agents, systems, and data sources; and integrates natively with Azure AI Foundry and UiPath Agents, providing tracing capabilities that create business trust in underlying AI agents.

UiPath agents: Extract and summarize structured and unstructured data from EMRs, reports, and historical records.

Azure AI Foundry agents: Analyze medical images and generate AI-powered diagnostic insights with healthcare-specific models on Azure AI Foundry that provide secure data access through DICOMweb APIs and FHIR standards, ensuring compliance and scalability.
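The DICOMweb and FHIR standards mentioned above are plain HTTP APIs. As an illustration of the queries an imaging agent might issue, the URLs can be constructed like this (the workspace hostnames and patient ID are hypothetical placeholders):

```python
from urllib.parse import urlencode

# Hypothetical endpoints for illustration; a real deployment would use its own
# Azure Health Data Services workspace URLs.
DICOMWEB_BASE = "https://example-workspace.dicom.azurehealthcareapis.com/v1"
FHIR_BASE = "https://example-workspace.fhir.azurehealthcareapis.com"

# QIDO-RS study search (DICOMweb standard): find prior CT studies for a patient.
dicom_query = f"{DICOMWEB_BASE}/studies?" + urlencode(
    {"PatientID": "12345", "ModalitiesInStudy": "CT"}
)

# FHIR R4 search: pull the patient's imaging diagnostic reports.
fhir_query = f"{FHIR_BASE}/DiagnosticReport?" + urlencode(
    {"subject": "Patient/12345", "category": "imaging"}
)

print(dicom_query)
print(fhir_query)
```

Because both queries follow open standards rather than proprietary interfaces, the same agent logic can run against any conformant DICOMweb or FHIR server.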

Together, this creates an agentic ecosystem on Azure where AI insights are not isolated but operationalized directly within end-to-end business processes.

Delivering customer value

By embedding AI into automated workflows, customers see tangible ROI:

Improved outcomes: Faster detection and follow-up on incidental findings.

Efficiency gains: Automated data collection, summarization, and reporting reduce manual physician workload.

Cost savings: Early detection helps prevent expensive downstream interventions.

Trust and compliance: Built on Azure and UiPath’s security, privacy, and healthcare data standards.

This is the promise of combining enterprise-grade automation with enterprise-ready AI.

What customers are saying about AI automation in healthcare

AI-powered automation is redefining how healthcare operates. At Mercy, we are beginning to partner with Microsoft and UiPath which will allow us to move beyond data silos and create intelligent workflows that truly serve patients. This is the future of care—where insights instantly translate into action.
Robin Spraul, Automation Manager-Automation Opt & Process Engineering, Mercy

Partnership perspectives

With UiPath Maestro and Azure AI Foundry working together, we’re helping enterprises operationalize AI across workflows that matter most. This is how we turn intelligence into impact.
Asha Sharma, Corporate Vice President, Azure AI Platform

Healthcare is just the beginning. UiPath and Microsoft are empowering organizations everywhere to unlock ROI by bringing automation and AI together in real-world business processes.
Graham Sheldon, Chief Product Officer, UiPath

Looking ahead

This healthcare scenario is one of many where UiPath and Azure AI Foundry are transforming operations. From finance to supply chain to customer service, organizations can now confidently scale AI-powered automation with UiPath Maestro™ on Azure.

At Microsoft, we believe AI is only as valuable as the outcomes it delivers. Together with UiPath, we are enabling enterprises to achieve those outcomes today.


The post Driving ROI with Azure AI Foundry and UiPath: Intelligent agents in real-world healthcare workflows appeared first on Microsoft Azure Blog.

The new era of Azure Ultra Disk: Experience the next generation of mission-critical block storage

Since its launch at Microsoft Ignite 2019, Azure Ultra Disk has powered some of the world’s most demanding applications and workloads: from real-time financial trading and electronic health records to high-performance gaming and AI/ML services. Ultra Disk was a breakthrough in cloud block storage innovation from the start, introducing independent configuration of capacity, IOPS, and throughput to deliver precise performance at scale. And we’ve continued to push boundaries ever since, committing to a purposeful evolution: not just enhancing performance and resilience for mission-critical workloads, but working to ensure every advancement addresses the real-world needs of our customers.

How to deploy and use an Ultra Disk

These advancements are not just theoretical; they’re driving real impact for customers operating on a global scale. One example is BlackRock, a global asset manager and technology provider, which leverages Azure Ultra Disk in conjunction with M-series virtual machines to power its mission-critical investment platform, Aladdin. For BlackRock, delivering ultra-low latency and exceptional reliability is paramount to swiftly adapting to dynamic market conditions and managing portfolios with agility and confidence.

Now that we’re on Azure, we have a springboard to unlock adoption of cloud-managed services to be able to engineer and operate at greater scale and adopt innovative technologies.
Randall Fradin, Head of Cloud Managed Services and Platform Engineering, BlackRock

Read the full customer story here.

Stories like BlackRock’s illustrate the power of Ultra Disk in action and they inspire us to keep evolving. That’s why today, we are excited to unveil a transformative update to Ultra Disk, designed to deliver superior speed, resilience, and cost efficiency for your most sensitive workloads. This major refresh introduces higher performance, greater flexibility to optimize cost, and instant access snapshots to support business continuity. With these advancements, Ultra Disk empowers organizations to accelerate operations, restore data rapidly, and scale with confidence, no matter the level of demand or criticality.

What’s new with Ultra Disk?

Ultra Disk delivers reliable performance with improved average, P99.9, and outlier latency

For mission-critical workloads, even brief disruptions can have significant impacts. That is why we have prioritized reducing tail latency at P99.9 and above. Our platform enhancements have resulted in an 80% reduction in both P99.9 and outlier latency, along with a 30% improvement in average latency. These advancements make Ultra Disk the best choice for highly I/O-intensive and latency-sensitive workloads, such as transaction logs for mission-critical applications.

If you are using local SSD or Write Accelerator to achieve lower latencies, we recommend exploring Ultra Disk as an alternative for enhanced data persistence and greater flexibility in capacity and performance.
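To make the tail-latency framing concrete, here is a small illustrative sketch using standard nearest-rank percentile math on synthetic latency samples (not Azure telemetry): an average can look healthy while P99.9 exposes the outliers that actually disrupt a transaction log.

```python
import random

def percentile(samples, p):
    """Nearest-rank percentile: the value below which roughly p% of samples fall."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

random.seed(42)
# Synthetic I/O latencies (ms): mostly fast, with a 0.5% slow tail.
latencies = [random.uniform(0.4, 0.8) for _ in range(10_000)]
latencies += [random.uniform(5, 20) for _ in range(50)]  # outliers

avg = sum(latencies) / len(latencies)
print(f"average: {avg:.2f} ms")                      # stays well under 1 ms
print(f"P99.9:   {percentile(latencies, 99.9):.2f} ms")  # lands in the slow tail
```

The average barely moves when 1 request in 200 is slow, which is why P99.9 and outlier latency are the metrics that matter for latency-sensitive workloads.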

Optimize application cost without sacrificing performance

Our goal is to support workloads in maximizing both efficiency and performance. Ultra Disk’s latest provisioning model now offers more granular control over capacity and performance, enabling better cost management. Workloads on small disks can save up to 50%, while large disks can save up to 25%. These updated features are now available for both new and existing Ultra Disks:

                        Greater control                  Previous
GiB capacity billing    Billed at 1 GiB granularity      Billed at tiers
Maximum IOPS per GiB    1,000 IOPS per GiB               300 IOPS per GiB
Minimum IOPS per disk   100 IOPS                         Higher of 100 or 1 IOPS per GiB
Minimum MB/s per disk   1 MB/s                           Higher of 1 MB/s or 4 KB/s per IOPS

A financial application operates its core database on Ultra Disk to serve market trend insights. This database stores a large amount of data but requires only moderate IOPS and throughput at low latency (no more than 12,500 GiB, 5,000 IOPS, and 200 MB/s). With more flexible control over capacity and performance, this deployment now saves 22% on its Ultra Disk spending, illustrated below using East US prices.

Cost per month      Previous                                       Improved flexibility     Savings
12,500 GiB          $1,594 for 13,312 GiB (rounded to next tier)   $1,497 for 12,500 GiB    -6%
5,000 IOPS          $661 for 13,312 IOPS                           $248 for 5,000 IOPS      -62%
200 MB/s            $70 for 200 MB/s                               No change                No change
Ultra Disk total    $2,324                                         $1,815                   -22%
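As a rough sketch of the math behind this example, the per-unit monthly rates below are approximations inferred from the figures above; they are illustrative only, so confirm current rates on the Azure pricing page before planning spend.

```python
# Approximate East US monthly unit rates, inferred from the worked example
# above. Illustrative only -- confirm current rates on the Azure pricing page.
RATE_GIB = 0.1198   # $ per provisioned GiB-month
RATE_IOPS = 0.0496  # $ per provisioned IOPS-month
RATE_MBPS = 0.349   # $ per provisioned MB/s-month

def ultra_disk_monthly_cost(gib, iops, mbps):
    # The new model bills at 1 GiB granularity, with floors of 100 IOPS and
    # 1 MB/s per disk, and a ceiling of 1,000 IOPS per GiB.
    iops = max(iops, 100)
    mbps = max(mbps, 1)
    assert iops <= 1000 * gib, "exceeds the 1,000 IOPS per GiB cap"
    return gib * RATE_GIB + iops * RATE_IOPS + mbps * RATE_MBPS

total = ultra_disk_monthly_cost(12_500, 5_000, 200)
print(f"${total:,.0f} per month")  # roughly the $1,815 shown above
```

Because capacity, IOPS, and throughput are billed independently, right-sizing any one dimension no longer forces overprovisioning of the others.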

Unlock high-performance workloads on Azure Boost and Ultra Disk

Ultra Disk and Azure Boost now enable a new class of high-performance workloads: 

Memory-optimized Mbv3 VM (Standard_M416bs_v3), now generally available: up to 550,000 IOPS and 10 GB/s.

Azure Boost Ebdsv5 VM, now generally available: up to 400,000 IOPS and 10 GB/s.

Stay tuned for the newest Azure Boost VM announcement at Ignite 2025, delivering unprecedented remote block storage performance.

These innovations empower customers to confidently operate high-demand applications such as large-scale SQL databases, electronic health record systems, and mission-critical enterprise platforms. Ultra Disk is equipped to address rigorous performance requirements leveraging the latest advancements in Virtual Machine technology.

Instant Access Snapshot enables you to restore and run your business application immediately

We are thrilled to announce an exciting new experience: Instant Access Snapshot for Ultra and Premium SSD v2 disks, now available in public preview. With Instant Access, you can immediately use snapshots after creation to generate new disks, eliminating the wait time (often spanning numerous hours) traditionally required for background data copy before the snapshot is in a ready and usable state. Disks generated from these Instant Access Snapshots now hydrate up to 10x faster and experience minimal read latency impact during the hydration process. This advanced capability marks a significant leap forward in the public cloud market, enabling rapid recovery and replication scale-out for your organization in real time. No more lengthy restoration processes or costly downtime! Instant Access Snapshot empowers you to get back to business within moments, not hours.

Building on the foundation of security, flexibility, and efficiency for Ultra Disk

Let’s walk through a few other features recently released that will greatly enhance your high-performance workload experience on Ultra Disk.

Operate cost-efficiently by expanding your Ultra Disk capacity live with live resize and by dynamically adjusting Ultra Disk performance to avoid overprovisioning.

Run your business application securely with encryption at host on Ultra Disk. Encryption at host will encrypt your data starting from the VM host and then store the encrypted data in Ultra Disk.

Azure Site Recovery – Recover your VM applications with Ultra Disk seamlessly in another Azure region when your primary region is down.

Azure VM Backup – Back up your VM applications equipped with Ultra Disk easily and securely.

Azure Disk Backup – Back up a specific Ultra Disk that is critical to your business operations to lower your backup cost and enable more customized backup operations.

Third-party backup and disaster recovery support: We understand that you may have a preferred third-party service for your backup and disaster recovery procedures. Check out the third-party services here that now support Ultra Disk.

Migrate your clustered applications that use SCSI Persistent Reservations to Azure as-is with Ultra Disk’s shared disk capability. Shared disks unlock easy migration and further cost optimization for your mission-critical clustered applications.

Getting started: Unlock new possibilities for your business

Join us on this journey to redefine what’s possible for your mission critical business applications. With Azure Ultra Disk, you can experience the future of high-performance storage today, empowering your organization to move faster, recover instantly, and scale with confidence.

New to Ultra Disk? Start with our comprehensive documentation and how to deploy an Ultra Disk.

Have questions or feedback? Reach out to our team at AzureDisks@microsoft.com.

Start using Azure Ultra Disk today

The post The new era of Azure Ultra Disk: Experience the next generation of mission-critical block storage appeared first on Microsoft Azure Blog.

Securing our future: November 2025 progress report on Microsoft’s Secure Future Initiative

When we launched the Secure Future Initiative (SFI), our mission was clear: accelerate innovation, strengthen resilience, and lead the industry toward a safer digital future. Today, we’re sharing our latest progress report that reflects steady progress in every area and engineering pillar, underscoring our commitment to security above all else. We also highlight new innovations delivered to better protect customers, and share how we use some of those same capabilities to protect Microsoft. Through SFI, we have improved the security of our platforms and services and our ability to detect and respond to cyberthreats.

Read the latest Secure Future Initiative report

Fostering a security-first mindset

Engineering sentiment around security has improved by nine points since early 2024. To increase security awareness, 95% of employees have completed the latest training on guarding against AI-powered cyberattacks, which remains one of our highest-rated courses. Finally, we developed resources for employees and made them available to customers for the first time to improve security awareness.

Governance that scales globally

The Cybersecurity Governance Council now includes three additional Deputy Chief Information Security Officer (CISO) functions covering European regulations, internal operations, and engagement with our ecosystem of partners and suppliers. We launched the Microsoft European Security Program to deepen partnerships and better inform European governments about the cyberthreat landscape, and we are collaborating with industry partners to better align cybersecurity regulations, advance responsible state behavior in cyberspace, and build cybersecurity capacity through the Advancing Regional Cybersecurity Initiative in the Global South. You can read more on our cybersecurity policy and diplomacy work.

Secure by Design, Secure by Default, Secure Operations

Microsoft Azure, Microsoft 365, Windows, Microsoft Surface, and Microsoft Security engineering teams continue to deliver innovations to better protect customers. Azure enforced secure defaults, expanded hardware-based trust, and updated security benchmarks to improve cloud security. Microsoft 365 introduced a dedicated AI Administrator role and enhanced agent lifecycle governance and data security transparency to give organizations more control and visibility. Windows and Surface advanced Zero Trust principles with expanded passkeys, automatic recovery capabilities, and memory-safe improvements to firmware and drivers. Microsoft Security introduced data security posture management for AI and evolved Microsoft Sentinel into an AI-first platform with data lake, graph, and Model Context Protocol capabilities.

Engineering progress that sets the benchmark

We’re making steady progress across all engineering pillars. Key achievements include enforcing phishing-resistant multifactor authentication (MFA) for 99.6% of Microsoft employees and devices, migrating higher-risk users to locked-down Azure Virtual Desktop environments, completing network device inventory and lifecycle management, and achieving 99.5% detection and remediation of live secrets in code. We’ve also deployed more than 50 new detections across Microsoft infrastructure with applicable detections to be added to Microsoft Defender and awarded $17 million to promote responsible vulnerability disclosure.

Actionable guidance

To help customers improve their security, we highlight 10 SFI patterns and practices customers can follow to reduce their risk. We also share additional best practices and guidance throughout the report. Customers can do a deeper assessment of their security posture by using our Zero Trust Workshops, which incorporate SFI-based assessments and actionable learnings to help customers on their own security journeys.

Security as the foundation of trust

Cybersecurity is no longer a feature—it’s the foundation of trust in a connected world.

With the equivalent of 35,000 engineers working full time on security, SFI remains the largest cybersecurity effort in digital history. Looking ahead, we will continue to prioritize the highest risks, accelerate delivery of security innovations, and harness AI to increase engineering efficiency and enable rapid anomaly detection and automated remediation.

The cyberthreat landscape will continue to evolve. Technology will continue to advance. And Microsoft will continue to prioritize security above all else. Our progress reflects a simple truth: trust is earned through action and accountability.

We are grateful for the partnership of our customers, industry peers, and security researchers. Together, we will innovate for a safer future.

Read our November 2025 progress report

Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
The post Securing our future: November 2025 progress report on Microsoft’s Secure Future Initiative appeared first on Microsoft Azure Blog.

GitHub Universe 2025: Where developer innovation took center stage

At GitHub Universe 2025, the theme was clear: the ability to see, steer, and build across agents will bring the greatest impact, and GitHub is the platform transforming and empowering how developers and agents work together. As we heard from Kyle Daigle, Chief Operating Officer, GitHub, on stage this week, it is “the new era of collaboration.” Agents have already become an integral part of software development, taking on manual, repetitive coding tasks so developers can focus on complex problem-solving and creative, higher-impact work. Developers have been at the forefront of AI from the beginning. They are showing us how agents can break down traditional roles, redefine processes, and introduce a scaling effect so we can build AI-powered solutions as quickly as we can dream them up.

GitHub is supporting every developer, on every language, and running locally or in any cloud, as they set an example for every other business function on the role of AI. This year’s GitHub Universe event was a celebration of code and community, and a showcase of developer tools, developer control, and developer choice. Our friends at GitHub announced key innovations to build agentic apps with enterprise-grade security, scalability, and trust. Now with Agent HQ—the open ecosystem that unites every agent on a single platform—there is one mission control to assign, govern, and track multiple agents in one place. Microsoft is helping turn this vision into velocity, delivering the infrastructure and products developers need to build what’s next. The rise of agents is a transformation we first introduced at Microsoft Build, and it’s accelerating fast. By harnessing the full power of the Azure portfolio, agentic AI becomes more than a productivity boost. It’s a strategic advantage.

Record-breaking growth and momentum

GitHub’s Octoverse 2025 report, released this week, offers an annual snapshot of global software development trends and highlights record-breaking growth and momentum across the developer ecosystem:

Find the full Octoverse 2025 summary here

180M+ developers work and build on GitHub.

Nearly 80% of developers new to GitHub use Copilot in their first week.

630M total projects on GitHub, +121M in 2025, the biggest year yet.

1.12B contributions to public and open source repositories, +13% year-over-year.

4.3M+ AI-related repositories, nearly double since 2023.

TypeScript and Python are the two most used languages in 2025, signaling a shift driven by AI preferences.

These numbers reflect a global movement—developers embracing AI, open source, and cloud-native tools to build faster and smarter. At Microsoft, we are customer zero when it comes to leveraging AI every day across various business processes and workflows. Nearly all of Microsoft’s engineers use GitHub Copilot as part of their development processes. Agents will not only make us more efficient and faster but fundamentally reinvent how we work.

This transformation isn’t just enterprise-led. As AI and agentic workflows redefine how software gets built, it was clear in San Francisco this week that startups are at the forefront.

I work with startups all over the world building cutting-edge data and AI solutions, and I see every day how GitHub Copilot makes it possible for startups to run leaner, make the most of their dev resources and ship faster. It makes a huge difference in terms of their quality of output and time to market. GitHub Copilot is now the standard that’s powering the next generation of AI solutions.
Heena Purohit, Global Director, Data and AI, Microsoft for Startups

This next era is about giving developers even more autonomy, intelligence, and infrastructure to help organizations unlock new value. When developers thrive, the business thrives.

Empowering every organization with AI tools built for what’s next

Microsoft Azure offers a full-stack platform that brings together AI-powered developer tools, agents, and enterprise-grade security from cloud to edge. It’s a human-centered approach to software delivery, designed to help every organization move faster, build smarter, and innovate with confidence. Agents are changing the game by assisting developers (and each other) across the entire lifecycle. Agents can tackle everything from bug fixes and documentation to code reviews and deployment to Azure, so developers can focus on what they do best: create.

You can now build and deploy AI agents end-to-end in Visual Studio Code with help from GitHub Copilot. AI Toolkit for VS Code lets developers explore models and build agents where they code—with evaluation and tracing in one place. Now, with prompt-first agent development powered by GitHub Copilot and built on the Microsoft Agent Framework, developers can create, refine, and launch production-ready agents faster and more intuitively than ever—all from within their favorite editor.

Azure MCP Server is now generally available, giving your agents the power of the cloud and redefining how developers interact with Azure. Built on Model Context Protocol (MCP), it can create a secure, standards-based bridge between Azure services—like AKS, ACA, App Service, Cosmos DB, SQL, AI Foundry, and Fabric—and AI-powered tools such as GitHub Copilot. Imagine managing cloud resources, generating infrastructure-as-code, and troubleshooting deployments—all through natural language, right from your favorite IDE or MCP-compatible client, and all aligned with Azure best practices. Azure MCP Server accelerates innovation, eliminates context switching, and delivers enterprise-grade security and scalability. It’s not just a tool—it’s the future of intelligent cloud development.

Modern app development is in a new era—where developers are moving from writing code to orchestrating autonomous systems that understand and act on intent. On the main stage this week, Amanda Silver, Corporate Vice President and Head of Product for Apps and Agents at Microsoft, showcased how Microsoft and GitHub Copilot work together to empower spec-driven development, code generation with Azure context, prompt first agent creation, workflow orchestration, and operational excellence. Together, redefining how human creativity becomes production-ready innovation.

Becoming frontier starts here

At Microsoft, we’re proud to be part of this journey—providing the infrastructure and integration that make agentic development real for enterprises around the world.

GitHub Universe 2025 wasn’t just a showcase; it was a signal. AI is now the default expectation in software development, and agentic workflows are becoming the new standard. And with GitHub and Azure working together, we uniquely deliver the tools, platforms, and vision to help every developer thrive.

We’re building that future alongside GitHub, our customers, our startups and partner ecosystem, and the global developer community. Because when the right tools are in the hands of developers, transformation isn’t a question of if—it’s a matter of how fast. When developers lead, innovation follows.

Build in the cloud with Azure
Microsoft Azure can help you get started with AI-powered developer tools, agents, and enterprise-grade security.

Start your AI journey

The post GitHub Universe 2025: Where developer innovation took center stage appeared first on Microsoft Azure Blog.

Resiliency in the cloud—empowered by shared responsibility and Azure Essentials

Empowering organizations to shape the future of cloud with resilient, always-on solutions.

Overview

In today’s digital-first era, downtime is not an option—businesses must be resilient to thrive.

Imagine it’s 2:00 AM and an outage occurs. Whether your team responds with panic or calm depends on how well you’ve prepared. Fast recovery isn’t luck—it’s the result of intentional planning for resiliency by design.

Reliability means your cloud service works as expected, delivering consistent uptime and performance. Resiliency is your ability to quickly recover when things go wrong—like outages or disasters.

Reliability is the promise; building resiliency is how we keep that promise. Leading organizations build resiliency into their cloud solutions from the start, using zone-redundant architectures as a baseline and expanding to multi-region deployments for their most critical workloads.

Microsoft’s Azure Essentials is designed to make these practices accessible and actionable for every organization.

What is Azure Essentials?

Shared responsibility

Reliability and resiliency in the cloud are achieved through a partnership between Microsoft and our customers. The shared responsibility model clarifies accountability by role:

Area                                         Microsoft (Platform reliability)            Customer/Partner (Solution resiliency)
Global platform availability                 Delivers global infrastructure and uptime   N/A
Foundational SLAs                            Guarantees service levels                   N/A
Solution architecture and SLOs               N/A                                         Design and maintain solution-level objectives
Configuration, deployments, and operations   N/A                                         Implement and manage deployments and operations
Backup and disaster recovery                 Provides secure backup capabilities         Develop and test recovery plans
Validation                                   Offers platform validation tools            Test solution’s ability to withstand failures
Governance and compliance                    Sets shared guardrails                      Enforce policies and compliance within environment

Note: N/A indicates that the responsibility does not apply to that party.
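The split between platform SLAs (Microsoft’s rows) and solution SLOs (yours) has a simple arithmetic consequence: serial dependencies multiply availability down, while redundant deployments multiply unavailability down. The figures below are illustrative only, not actual Azure SLA numbers.

```python
def serial(*availabilities: float) -> float:
    """Composite availability when every service must be up (a chain)."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(availability: float, copies: int) -> float:
    """Composite availability of independent redundant deployments."""
    return 1.0 - (1.0 - availability) ** copies

# Illustrative figures only, not actual Azure SLA numbers.
chain = serial(0.999, 0.9995, 0.9999)  # app tier, database, load balancer
redundant = parallel(0.999, 2)         # the same app in two regions
# The chain dips below its weakest link; redundancy pushes toward 0.999999.
```

This is why solution-level objectives are the customer’s row in the table: the platform can guarantee each service’s SLA, but only the solution architecture decides whether those SLAs compose in series or in parallel.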

Real stories, real impact:

Publix Employees Federal Credit Union leveraged Azure’s disaster recovery capabilities to minimize downtime during severe weather. The University of Miami adopted availability zones and robust recovery strategies to ensure continuity for students and faculty. These stories show how platform reliability and customer resiliency combine for real-world results.

How Microsoft helps: Azure Essentials as the anchor

At the heart of Microsoft’s approach is Azure Essentials—the unified methodology that brings together all the tools, guidance, and best practices our customers need. Azure Essentials enables organizations to build resilient, reliable, and secure cloud solutions at every stage of their journey.

Azure Essentials brings together:

Foundational blueprints: Azure Well-Architected Framework and Cloud Adoption Framework guidance to establish secure, cost-effective environments from day one.

Actionable assessments: Optimization tools and gap analyses for continuous improvement.

Integrated tools: Validation with Azure Chaos Studio, monitoring with Azure Monitor, security with Microsoft Defender for Cloud, and automation with Azure DevOps.

Resilient design patterns: Support for migration, modernization, AI innovation, and unified data platforms with zone-redundant architectures and disaster recovery solutions.

Continuous improvement: Ongoing validation, monitoring, and remediation to maintain a strong resiliency posture.
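The validation idea behind tools like Azure Chaos Studio can be sketched locally: inject faults into a dependency and check that your resiliency mechanism (here, a simple retry wrapper) absorbs them. This is a toy Python experiment, not the Chaos Studio API.

```python
import random

def flaky_service(failure_rate: float, rng: random.Random) -> str:
    """Simulated dependency: fails a given fraction of calls."""
    if rng.random() < failure_rate:
        raise TimeoutError("injected fault")
    return "ok"

def with_retries(fn, attempts: int = 3):
    """The resiliency mechanism under test: retry transient faults."""
    for i in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if i == attempts - 1:
                raise

def run_experiment(failure_rate: float, trials: int = 1000, seed: int = 42) -> float:
    """Chaos-style experiment: inject faults, measure the survival rate."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        try:
            with_retries(lambda: flaky_service(failure_rate, rng))
            ok += 1
        except TimeoutError:
            pass
    return ok / trials
```

With a 30% injected failure rate, three attempts survive well over 90% of trials, which is the kind of evidence continuous validation is meant to produce before a real outage produces it for you.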

Azure Essentials in action: Practical stages

Start resilient: Apply zone-redundant patterns, align security and governance, and embed resiliency early using Azure Blueprints and reference architectures.

Get resilient: Address gaps in existing deployments through assessments and targeted remediation plans and recommend high-availability strategies such as multi-region deployments.

Stay resilient: Implement continuous validation and improvement cycles, using telemetry, policy, and partner services to enforce resiliency posture.

With Azure Essentials, you’re not just preparing for the future—you’re helping to shape it, setting a new standard for resilient, always-on cloud innovation.

Resiliency across Azure solutions

Azure Essentials empowers organizations to build resiliency into every Azure solution—whether you’re migrating workloads, innovating with AI, or unifying your data platform. Here’s how each solution area supports resilient cloud operations, with direct links to practical guidance:

Migration and modernization: Architect for zone-redundancy, implement backup and disaster recovery, and validate resiliency after migration. Learn more.

This resource provides actionable strategies for designing cloud solutions that minimize downtime and ensure business continuity through redundancy and robust architecture.

AI apps and agents: Deploy models across multiple zones or regions, build resilient APIs and data pipelines, and continuously monitor and retrain models. Learn more.

This link offers practical guidance on building and deploying AI-powered applications that are resilient, scalable, and secure, with real-world examples and best practices.

Unified data platform: Design for durability and rapid recovery with geo-redundancy, regular backups, and automated recovery processes. Learn more.

This article explains how to architect data platforms for resilience, covering strategies for backup, recovery, and high availability using Microsoft Fabric.

Organizations can architect for fault tolerance, eliminate single points of failure, back up and test recovery regularly, and enforce governance at scale. Tools like Azure Advisor, Azure Monitor, and Azure DevOps help automate and monitor operations, while Azure Chaos Studio enables validation and testing.
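“Back up and test recovery regularly” is worth making concrete. The sketch below is a minimal, file-based recovery drill with integrity checking; it illustrates the principle only and does not use the Azure Backup APIs.

```python
import hashlib
import json
import pathlib
import tempfile

def back_up(data: dict, dest: pathlib.Path) -> str:
    """Write a backup and return its SHA-256 digest for later verification."""
    payload = json.dumps(data, sort_keys=True).encode()
    dest.write_bytes(payload)
    return hashlib.sha256(payload).hexdigest()

def restore_and_verify(src: pathlib.Path, expected_digest: str) -> dict:
    """Recovery drill: restore only if the backup's integrity checks out."""
    payload = src.read_bytes()
    if hashlib.sha256(payload).hexdigest() != expected_digest:
        raise ValueError("backup corrupted; fail the recovery drill")
    return json.loads(payload)

with tempfile.TemporaryDirectory() as tmp:
    target = pathlib.Path(tmp) / "orders.bak"
    digest = back_up({"order": 42, "status": "paid"}, target)
    restored = restore_and_verify(target, digest)
```

The point of the drill is the failure path: a backup you have never restored, or never verified, is a hope rather than a recovery plan.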

Where to find more information

Ready to make your organization’s cloud environment resilient? Start with these resources:

Explore: Backup and disaster recovery.

Use: Reliability guides by service.

Access technical methodology: Azure Essentials.

Start your project with experts and investments: Azure Accelerate.

Register for: Microsoft Ignite sessions on resiliency best practices.

Take the next step to make resiliency and reliability your default by leveraging Azure Essentials and the rest of these resources today.

Get started with Azure Accelerate
Fuel transformation with experts and investments across the cloud and AI journey.


The post Resiliency in the cloud—empowered by shared responsibility and Azure Essentials appeared first on Microsoft Azure Blog.

Introducing Agent HQ: Any agent, any way you work

The current AI landscape presents a challenge we’re all too familiar with: incredible power fragmented across different tools and interfaces. At GitHub, we’ve always worked to solve these kinds of systemic challenges—by making Git accessible, code review systematic with pull requests, and automating deployment with Actions.

With 180 million developers, GitHub is growing at its fastest rate ever—a new developer joining every second. What’s more, 80% of new developers are using Copilot in their first week. AI isn’t just a tool anymore; it’s an integral part of the development experience. Our responsibility is to ensure this new era of collaboration is powerful, secure, and seamlessly integrated into the workflow you already trust.

At GitHub Universe, we’re announcing Agent HQ, GitHub’s vision for the next evolution of our platform. Agents shouldn’t be bolted on. They should work the way you already work. That’s why we’re making agents native to the GitHub flow.

Agent HQ transforms GitHub into an open ecosystem that unites every agent on a single platform. Over the coming months, coding agents from Anthropic, OpenAI, Google, Cognition, xAI, and more will become available directly within GitHub as part of your paid GitHub Copilot subscription.

To bring this vision to life, we’re shipping a suite of new capabilities built on the primitives you trust. This starts with a mission control, a single command center to assign, steer, and track the work of multiple agents from anywhere. It extends to VS Code with new ways to plan and customize agent behavior. And it is backed by enterprise-grade functionality: a new generation of agentic code review, a dedicated control plane to govern AI access and agent behavior, and a metrics dashboard to understand the impact of AI on your work.

We are also deeply committed to investing in our platform and strengthening the primitives you rely on every day. This new world of development is powered by that foundational work, and we look forward to sharing more updates.

Let’s dive in.

In this post

GitHub is your Agent HQ: An open ecosystem for all agents
Mission control: Your command center, wherever you build
New in VS Code: Plan, customize, and connect
Increased confidence and control for your team

GitHub is your Agent HQ: An open ecosystem for all agents

The future is about giving you the power to orchestrate a fleet of specialized agents to perform complex tasks in parallel, not juggling a patchwork of disconnected tools or relying on a single agent. As the pioneer of asynchronous collaboration, we believe it’s our responsibility to make sure these next-generation async tools just work.

With Agent HQ, what’s not changing is just as important as what is. You’re still working with the primitives you know—Git, pull requests, issues—and using your preferred compute, whether that’s GitHub Actions or self-hosted runners. You’re accessing agents through your existing paid Copilot subscription.

On top of that foundation, we’re opening the doors to a new world of capability. Over the coming months, coding agents from Anthropic, OpenAI, Google, Cognition, and xAI will be available on GitHub as part of your paid GitHub Copilot subscription.

Don’t want to wait? Starting this week, Copilot Pro+ users can begin working with OpenAI Codex in VS Code Insiders, the first of our partner agents to extend beyond its native surfaces and directly into the editor.

‘Our collaboration with GitHub has always pushed the frontier of how developers build software. The first Codex model helped power Copilot and inspired a new generation of AI-assisted coding. We share GitHub’s vision of meeting developers wherever they work, and we’re excited to bring Codex to millions more developers who use GitHub and VS Code, extending the power of Codex everywhere code gets written.’

Alexander Embiricos, Codex Product Lead, OpenAI

‘We’re partnering with GitHub to bring Claude even closer to how teams build software. With Agent HQ, Claude can pick up issues, create branches, commit code, and respond to pull requests, working alongside your team like any other collaborator. This is how we think the future of development works: agents and developers building together, on the infrastructure you already trust.’

Mike Krieger, Chief Product Officer, Anthropic

‘The best developer tools fit seamlessly into your workflow, helping you stay focused and move faster. With Agent HQ, Jules becomes a native assignee, streamlining manual steps and reducing friction in everyday development. This deeper integration with GitHub brings agents closer to where developers already work, making collaboration more natural and efficient.’

Kathy Korevec, Director of Product at Google Labs

Mission control: Your command center, wherever you build

The power of Agent HQ comes from mission control, a unified command center that follows you wherever you work. It’s not a single destination; it’s a consistent interface across GitHub, VS Code, mobile, and the CLI that lets you direct, monitor, and manage every AI-driven task. With mission control, you can choose from a fleet of agents, assign them work in parallel, and track their progress from any device.

We’re also providing:

New branch controls that give you granular oversight over when to run CI and other checks for agent-created code.

Identity features to control which agent is building the task, managing access and policies just like you would with any other developer on your team.

One-click merge conflict resolution, improved file navigation, and better code commenting capabilities.

New integrations for Slack and Linear, on top of our recently announced connections for Atlassian Jira, Microsoft Teams, Azure Boards, and Raycast.

Try mission control today.

New in VS Code: Plan, customize, and connect

Mission control is in VS Code, too, so you’ve got a single view of all your agents running in VS Code, in the Copilot CLI, or on GitHub.

Today’s brand new release in VS Code is all about working alongside agents on projects, and it’s not surprising that great results start with a great plan. Getting the context right before a project is critical, but that same context needs to carry through into the work. Copilot already adapts to the way your team works by learning from your files and your project’s culture, but sometimes you need more pointed context.

So today, we’re introducing Plan Mode, which works with Copilot and asks you clarifying questions along the way to help you build a step-by-step approach for your task. Providing the context upfront improves what Copilot can do and helps you find gaps, missing decisions, or project deficiencies early in the process—before any code is written. Once you approve, your plan goes to Copilot to start implementing, whether that’s locally in VS Code or using an agent in the cloud.

For even finer control, you can now create custom agents in VS Code with AGENTS.md files, source-controlled documents that let you set clear rules and guardrails such as “prefer this logger” or “use table-driven tests for all handlers.” This shapes Copilot’s behavior without you re-prompting it every time.
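As a sketch of what such a file might contain (the rules below are illustrative, built from the examples in the text rather than a prescribed schema), an AGENTS.md could look like:

```markdown
# AGENTS.md — guardrails for agents working in this repo (illustrative)

## Conventions
- Prefer this repo's logger over `print` for all diagnostics.
- Use table-driven tests for all handlers.

## Boundaries
- Never modify files under `infra/` without opening an issue first.
- Run the full test suite before committing.
```

Because the file is source-controlled, the guardrails version, review, and roll back exactly like the code they govern.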

Now you can rely on the new GitHub MCP Registry, available directly in VS Code. VS Code is the only editor that supports the full MCP specification. Discover, install, and enable MCP servers like Stripe, Figma, Sentry, and others, with a single click. When your task calls for a specialist, create custom agents in GitHub Copilot with their own system prompt and tools to help you define the ways you want Copilot to work.

Increased confidence and control for your team

Agent HQ doesn’t just give you more power—it gives you confidence. Ensuring code quality, understanding AI’s influence on your workflow, and maintaining control over how AI interacts with your codebase and organization are essential for your team’s success, and we’re tackling these challenges head-on.

When it comes to code quality, the core problem is that “LGTM” doesn’t always mean “the code is healthy.” A review can pass, but can still degrade the codebase and quickly become long-term technical debt. With GitHub Code Quality, in public preview today, you’ve got org-wide visibility, governance, and reporting to systematically improve code maintainability, reliability, and test coverage across every repository. Enabling it extends Copilot’s security checks to look at the maintainability and reliability impact of the code that’s been changed.

And we’ve added a code review step into the Copilot coding agent’s workflow, too, so Copilot gets an initial first-line review and addresses problems (before you even see the code).

Screenshot of GitHub Code Quality, showing the results of Copilot’s review.

As an organization, you need to know how Copilot is being used. So today, we’re announcing the public preview of the Copilot metrics dashboard, showing Copilot’s impact and critical usage metrics across your entire organization.

For enterprise administrators who are managing AI access, including AI agents and MCP, we’re focused on providing consistent AI controls for teams with the control plane—your agent governance layer. Set security policies, audit logging, and manage access all in one place. Enterprise admins can also control which agents are allowed, define access to models, and obtain metrics about the Copilot usage in your organization.

For developers, by developers

We built Agent HQ because we’re developers, too. We know what it’s like when it feels like your tools are fighting you instead of helping you. When “AI-powered” ends up meaning more context-switching, more babysitting, more subscriptions, and more time explaining what you need to get the value you were promised.

That ends today.

Agent HQ isn’t about the hype of AI. It’s about the reality of shipping code. It’s about bringing order and governance to this new era without compromising choice. It’s about giving you the power to build faster, with more confidence, and on your terms.

Welcome home. Let’s build.
The post Introducing Agent HQ: Any agent, any way you work appeared first on Microsoft Azure Blog.

Building the future together: Microsoft and NVIDIA announce AI advancements at GTC DC

Microsoft and NVIDIA are deepening our partnership to power the next wave of AI industrial innovation. For years, our companies have helped fuel the AI revolution, bringing the world’s most advanced supercomputing to the cloud, enabling breakthrough frontier models, and making AI more accessible to organizations everywhere. Today, we’re building on that foundation with new advancements that deliver greater performance, capability, and flexibility.

With added support for NVIDIA RTX PRO 6000 Blackwell Server Edition on Azure Local, customers can deploy AI and visual computing workloads in distributed and edge environments with the same seamless orchestration and management they use in the cloud. New NVIDIA Nemotron and NVIDIA Cosmos models in Azure AI Foundry give businesses an enterprise-grade platform to build, deploy, and scale AI applications and agents. With NVIDIA Run:ai on Azure, enterprises can get more from every GPU to streamline operations and accelerate AI. Finally, Microsoft is redefining AI infrastructure with the world’s first deployment of NVIDIA GB300 NVL72.

Explore our partnership on Azure Local

Today’s announcements mark the next chapter in our full-stack AI collaboration with NVIDIA, empowering customers to build the future faster.

Expanding GPU support to Azure Local

Microsoft and NVIDIA continue to drive advancements in artificial intelligence, offering innovative solutions that span the public and private cloud, the edge, and sovereign environments.

As highlighted in the March blog post for NVIDIA GTC, Microsoft will offer NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on Azure. Now, with expanded availability of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on Azure Local, organizations can optimize their AI workloads, regardless of location, to provide customers with greater flexibility and more options than ever. Azure Local leverages Azure Arc to empower organizations to run advanced AI workloads on-premises while retaining the management simplicity of the cloud or operating in fully disconnected environments. 

NVIDIA RTX PRO 6000 Blackwell GPUs provide the performance and flexibility needed to accelerate a broad range of use cases, from agentic AI, physical AI, and scientific computing to rendering, 3D graphics, digital twins, simulation, and visual computing. This expanded GPU support unlocks a range of edge use cases that fulfill the stringent requirements of critical infrastructure for our healthcare, retail, manufacturing, government, defense, and intelligence customers. This may include real-time video analytics for public safety, predictive maintenance in industrial settings, rapid medical diagnostics, and secure, low-latency inferencing for essential services such as energy production and critical infrastructure. The NVIDIA RTX PRO 6000 Blackwell enables improved virtual desktop support by leveraging NVIDIA vGPU technology and Multi-Instance GPU (MIG) capabilities. This can not only accommodate a higher user density, but also power AI-enhanced graphics and visual compute capabilities, offering an efficient solution for demanding virtual environments.

Earlier this year, Microsoft announced a multitude of AI capabilities at the edge, all enriched with NVIDIA accelerated computing:

Edge Retrieval Augmented Generation (RAG): Empower sovereign AI deployments with fast, secure, and scalable inferencing on local data—supporting mission-critical use cases across government, healthcare, and industrial automation.

Azure AI Video Indexer enabled by Azure Arc: Enables real-time and recorded video analytics in disconnected environments—ideal for public safety and critical infrastructure monitoring or post-event analysis.
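The retrieval step at the heart of the Edge RAG pattern above can be sketched with a toy scorer. A real deployment would use vector embeddings and a local inference endpoint; this Python sketch substitutes simple word overlap to show the shape of the pipeline, and the corpus lines are invented examples.

```python
def score(query: str, doc: str) -> float:
    """Jaccard word overlap between query and document (toy embedding)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Ground the model in locally retrieved context only."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, docs, k))
    return f"Answer using only this local context:\n{context}\n\nQ: {query}"

# Invented local documents standing in for on-premises data.
corpus = [
    "Turbine 7 vibration exceeded threshold at 02:14.",
    "Cafeteria menu for Friday: soup and salad.",
    "Maintenance log: turbine 7 bearing replaced last quarter.",
]
prompt = build_prompt("why is turbine 7 vibrating", corpus)
```

Because retrieval, prompt assembly, and inference all run locally, the data never leaves the sovereign or disconnected environment; only the retrieval and generation quality depend on the models deployed at the edge.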

With Azure Local, customers can meet strict regulatory, data residency, and privacy requirements while harnessing the latest AI innovations powered by NVIDIA.

Whether you need ultra-low latency for business continuity, robust local inferencing, or compliance with industry regulations, we’re dedicated to delivering cutting-edge AI performance wherever your data resides. Customers now access the breakthrough performance of the NVIDIA RTX PRO 6000 Blackwell GPUs in new Azure Local solutions—including Dell AX-770, HPE ProLiant DL380 Gen12, and Lenovo ThinkAgile MX650a V4.

To find out more about upcoming availability and sign up for early ordering, visit: 

Dell for Azure Local

HPE for Azure Local

Lenovo for Azure Local

Powering the future of AI with new models on Azure AI Foundry

At Microsoft, we’re committed to bringing the most advanced AI capabilities to our customers, wherever they need them. Through our partnership with NVIDIA, Azure AI Foundry now brings world-class multimodal reasoning models directly to enterprises, deployable anywhere as secure, scalable NVIDIA NIM™ microservices. The portfolio spans a range of different use cases:

NVIDIA Nemotron Family: High accuracy open models and datasets for agentic AI

Llama Nemotron Nano VL 8B is available now and is tailored for multimodal vision-language tasks, document intelligence and understanding, and mobile and edge AI agents. 

NVIDIA Nemotron Nano 9B is available now and supports enterprise agents, scientific reasoning, advanced math, and coding for software engineering and tool calling. 

NVIDIA Llama 3.3 Nemotron Super 49B 1.5 is coming soon and is designed for enterprise agents, scientific reasoning, advanced math, and coding for software engineering and tool calling.

NVIDIA Cosmos Family: Open world foundation models for physical AI

Cosmos Reason-1 7B is available now and supports robotics planning and decision making, training data curation and annotation for autonomous vehicles, and video analytics AI agents extracting insights and performing root-cause analysis from video data.

NVIDIA Cosmos Predict 2.5 is coming soon and is a generalist model for world state generation and prediction. 

NVIDIA Cosmos Transfer 2.5 is coming soon and is designed for structural conditioning and physical AI.

Microsoft TRELLIS by Microsoft Research: High-quality 3D asset generation 

Microsoft TRELLIS by Microsoft Research is available now and enables digital twins by generating accurate 3D assets from simple prompts, immersive retail experiences with photorealistic product models for AR and virtual try-ons, and game and simulation development by turning creative ideas into production-ready 3D content.

Together, these open models reflect the depth of the Azure and NVIDIA partnership: combining Microsoft’s adaptive cloud with NVIDIA’s leadership in accelerated computing to power the next generation of agentic AI for every industry. Learn more about the models here.

Maximizing GPU utilization for enterprise AI with NVIDIA Run:ai on Azure

As an AI workload and GPU orchestration platform, NVIDIA Run:ai helps organizations make the most of their compute investments, accelerating AI development cycles and driving faster time-to-market for new insights and capabilities. By bringing NVIDIA Run:ai to Azure, we’re giving enterprises the ability to dynamically allocate, share, and manage GPU resources across teams and workloads, helping them get more from every GPU.

NVIDIA Run:ai on Azure integrates seamlessly with core Azure services, including Azure NC and ND series instances, Azure Kubernetes Service (AKS), and Azure Identity Management, and offers compatibility with Azure Machine Learning and Azure AI Foundry for unified, enterprise-ready AI orchestration. We’re bringing hybrid scale to life to help customers transform static infrastructure into a flexible, shared resource for AI innovation.

With smarter orchestration and cloud-ready GPU pooling, teams can drive faster innovation, reduce costs, and unleash the power of AI across their organizations with confidence. NVIDIA Run:ai on Azure enhances AKS with GPU-aware scheduling, helping teams allocate, share, and prioritize GPU resources more efficiently. Operations are streamlined with one-click job submission, automated queueing, and built in governance. This ensures teams spend less time managing infrastructure and more time focused on building what’s next. 
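The pooling idea behind GPU-aware scheduling can be illustrated with a toy allocator: jobs request a fraction of a GPU and are packed first-fit, and requests that cannot be satisfied wait in a queue. This is a conceptual Python sketch of the general technique, not NVIDIA Run:ai’s API or scheduling algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class GpuPool:
    """Toy fractional-GPU pool: jobs request a fraction of one GPU."""
    gpus: list = field(default_factory=lambda: [1.0, 1.0])  # free capacity
    placements: dict = field(default_factory=dict)          # job -> (gpu, share)

    def allocate(self, job: str, fraction: float):
        """First-fit: place the job on the first GPU with enough headroom."""
        for i, free in enumerate(self.gpus):
            if free >= fraction:
                self.gpus[i] = round(free - fraction, 3)
                self.placements[job] = (i, fraction)
                return i
        return None  # no capacity: the job queues until something frees up

    def release(self, job: str) -> None:
        """Return a finished job's share to the pool."""
        i, fraction = self.placements.pop(job)
        self.gpus[i] = round(self.gpus[i] + fraction, 3)

pool = GpuPool()
pool.allocate("train-a", 0.5)   # lands on GPU 0
pool.allocate("infer-b", 0.5)   # shares GPU 0
pool.allocate("train-c", 0.75)  # needs a fresh GPU, lands on GPU 1
```

The design point is that fractional sharing raises utilization: two half-GPU jobs occupy one device, leaving the second free for the larger job instead of forcing it to queue.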

This impact spans industries, supporting the infrastructure and orchestration behind transformative AI workloads at every stage of enterprise growth: 

Healthcare organizations can use NVIDIA Run:ai on Azure to advance medical imaging analysis and drug discovery workloads across hybrid environments. 

Financial services organizations can orchestrate and scale GPU clusters for complex risk simulations and fraud detection models. 

Manufacturers can accelerate computer vision training models for improved quality control and predictive maintenance in their factories. 

Retail companies can power real-time recommendation systems for more personalized experiences through efficient GPU allocation and scaling, ultimately better serving their customers.

Powered by Microsoft Azure and NVIDIA, Run:ai is purpose-built for scale, helping enterprises move from isolated AI experimentation to production-grade innovation.


Reimagining AI at scale: First to deploy NVIDIA GB300 NVL72 supercomputing cluster

Microsoft is redefining AI infrastructure with the new NDv6 GB300 VM series, delivering the first at-scale production cluster of NVIDIA GB300 NVL72 systems, featuring over 4600 NVIDIA Blackwell Ultra GPUs connected via NVIDIA Quantum-X800 InfiniBand networking. Each NVIDIA GB300 NVL72 rack integrates 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace™ CPUs, delivering over 130 TB/s of NVLink bandwidth and up to 136 kW of compute power in a single cabinet. Designed for the most demanding workloads—reasoning models, agentic systems, and multimodal AI—GB300 NVL72 combines ultra-dense compute, direct liquid cooling, and smart rack-scale management to deliver breakthrough efficiency and performance within a standard datacenter footprint. 

Azure’s co-engineered infrastructure enhances GB300 NVL72 with technologies like Azure Boost for accelerated I/O and integrated hardware security modules (HSM) for enterprise-grade protection. Each rack arrives pre-integrated and self-managed, enabling rapid, repeatable deployment across Azure’s global fleet. As the first cloud provider to deploy NVIDIA GB300 NVL72 at scale, Microsoft is setting a new standard for AI supercomputing—empowering organizations to train and deploy frontier models faster, more efficiently, and more securely than ever before. Together, Azure and NVIDIA are powering the future of AI. 

Learn more about Microsoft’s systems approach in delivering GB300 NVL72 on Azure.

Unleashing the performance of ND GB200-v6 VMs with NVIDIA Dynamo 

Our collaboration with NVIDIA focuses on optimizing every layer of the computing stack to help customers maximize the value of their existing AI infrastructure investments. 

To deliver high-performance inference for compute-intensive reasoning models at scale, we’re bringing together a solution that combines the open-source NVIDIA Dynamo framework, our ND GB200-v6 VMs with NVIDIA GB200 NVL72, and Azure Kubernetes Service (AKS). We’ve demonstrated the performance this combined solution delivers at scale, with the gpt-oss 120b model processing 1.2 million tokens per second in a production-ready, managed AKS cluster, and have published a deployment guide so developers can get started today.

Dynamo is an open-source, distributed inference framework designed for multi-node environments and rack-scale accelerated compute architectures. By enabling disaggregated serving, LLM-aware routing and KV caching, Dynamo significantly boosts performance for reasoning models on Blackwell, unlocking up to 15x more throughput compared to the prior Hopper generation, opening new revenue opportunities for AI service providers. 
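The LLM-aware routing piece can be pictured as prefix-affinity routing: send a request to the worker whose cached token sequence shares the longest prefix with it, so less prefill has to be recomputed on arrival. The sketch below is a toy Python illustration of that idea, with invented worker names and token lists, not Dynamo’s actual implementation.

```python
def shared_prefix_len(a: list, b: list) -> int:
    """Number of leading tokens two sequences share."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route(request_tokens: list, workers: dict) -> str:
    """Pick the worker whose cached sequence shares the longest prefix
    with the request, minimizing the prefill work it must redo."""
    return max(workers, key=lambda w: shared_prefix_len(request_tokens, workers[w]))

# Hypothetical workers and their currently cached token sequences.
workers = {
    "gpu-0": ["<sys>", "you", "are", "a", "helpful", "assistant"],
    "gpu-1": ["<sys>", "translate", "the", "following"],
}
best = route(["<sys>", "you", "are", "a", "pirate"], workers)
```

Here the request shares four leading tokens with gpu-0’s cache but only one with gpu-1’s, so routing to gpu-0 lets most of the KV cache be reused.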

These efforts enable AKS production customers to take full advantage of NVIDIA Dynamo’s inference optimizations when deploying frontier reasoning models at scale. We’re dedicated to bringing the latest open-source software innovations to our customers, helping them fully realize the potential of the NVIDIA Blackwell platform on Azure.

Learn more about Dynamo on AKS.

Get more AI resources

Join us in San Francisco at Microsoft Ignite in November to hear about the latest in enterprise solutions and innovation.

Explore Azure AI Foundry and Azure Local.

The post Building the future together: Microsoft and NVIDIA announce AI advancements at GTC DC appeared first on Microsoft Azure Blog.