Azure networking updates on security, reliability, and high availability

Enabling the next wave of cloud transformation with Azure Networking

The cloud landscape is evolving at an unprecedented pace, driven by the exponential growth of AI workloads and the need for seamless, secure, and high-performance connectivity. Azure Network services stand at the forefront of this transformation, delivering the hyperscale infrastructure, intelligent services, and resilient architecture that empower organizations to innovate and scale with confidence.

Get the latest Azure Network services updates here

Azure’s global network is purpose-built to meet the demands of modern AI and cloud applications. With over 60 AI regions, 500,000+ miles of fiber, and more than 4 petabits per second (Pbps) of WAN capacity, Azure’s backbone is engineered for massive scale and reliability. The network has tripled its overall capacity since the end of FY24, now reaching 18 Pbps, ensuring that customers can run the most demanding AI and data workloads with uncompromising performance.

In this blog, I am excited to share our advancements in datacenter networking, which provide the core infrastructure to train AI models at massive scale, as well as our latest product announcements that strengthen the resilience, security, and scale of the platform and add the capabilities needed to run cloud-native workloads with optimized performance and cost.

AI at the heart of the cloud

AI is not just a workload—it’s the engine driving the next generation of cloud systems. Azure’s network fabric is optimized for AI at every layer, supporting long-lasting, high-bandwidth flows for model training, low-latency intra-datacenter fabrics for GPU clusters, and secure, lossless traffic management. Azure’s architecture integrates InfiniBand and high-speed Ethernet to deliver ultra-fast, lossless data transfer between compute and storage, minimizing training times and maximizing efficiency. Azure’s network is built to support workloads with distributed GPU pools across datacenters and regions using a dedicated AI WAN. Distributed GPU clusters connect to services running in Azure regions over dedicated, private connections that use Azure Private Link and hardware-based VNet appliances running high-performance DPUs.

Azure Network services are designed to support users at every stage—from migrating on-premises workloads to the cloud, to modernizing applications with advanced services, to building cloud-native and AI-powered solutions. Whether it’s seamless VNet integration, ExpressRoute for private connectivity, or advanced container networking for Kubernetes, Azure provides the tools and services to connect, build, and secure the cloud of tomorrow.

Resilient by default

Resiliency is foundational to Azure Networking’s mission, and we continue to execute on the goal of providing resiliency by default. Continuing the trend of offering zone-resilient SKUs of our gateways (ExpressRoute, VPN, and Application Gateway), the latest to join the list is Azure NAT Gateway. At Ignite 2025, we announced the public preview of Standard NAT Gateway V2, which offers a zone-redundant architecture for outbound connectivity at no additional cost. Zone-redundant NAT gateways automatically distribute traffic to the available zones during a single-zone outage. NAT Gateway V2 also supports 100 Gbps of total throughput, handles 10 million packets per second, is IPv6-ready out of the gate, and provides traffic insights with flow logs. Read the NAT Gateway blog for more information.

Pushing the boundaries on security

We continue to advance our platform with security as the top mission, adhering to the principles of the Secure Future Initiative. Along these lines, we are happy to announce the following capabilities in preview or general availability (GA):

DNS Security Policy with Threat Intel: Now generally available, this feature provides smart protection with continuous updates, monitoring, and blocking of known malicious domains.

Private Link Direct Connect: Now in public preview, this extends Private Link connectivity to any routable private IP address, supporting disconnected VNets and external SaaS providers, with enhanced auditing and compliance support.

JWT Validation in Application Gateway: Application Gateway now supports JSON Web Token (JWT) validation in public preview, delivering native JWT validation at Layer 7 for web applications, APIs, and service-to-service (S2S) or machine-to-machine (M2M) communication. This shifts token validation from backend servers to the Application Gateway, improving performance and reducing complexity, and gives teams consistent, centralized, secure-by-default Layer 7 controls so they can build and innovate faster while maintaining a trustworthy security posture (see the sketch after this list for the kind of validation logic that moves out of each backend).

Forced tunneling for Virtual WAN secure hubs: Forced tunneling lets you configure Azure Virtual WAN to inspect Internet-bound traffic with a security solution deployed in the Virtual WAN hub and route the inspected traffic to a designated next hop instead of directly to the Internet. You can route Internet traffic to an edge firewall connected to Virtual WAN via the default route learned from ExpressRoute, VPN, or SD-WAN, or to your preferred Network Virtual Appliance or SASE solution deployed in a spoke virtual network connected to Virtual WAN.
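To illustrate what gateway-side validation takes off the backend, here is a minimal sketch of the token check a service would otherwise perform itself. It assumes the PyJWT package; the issuer, audience, and JWKS URL are placeholders, and this is illustrative application code, not Application Gateway configuration.

```python
import jwt
from jwt import PyJWKClient

# Placeholders: substitute your identity provider's issuer, audience, and JWKS URL.
ISSUER = "https://login.microsoftonline.com/<tenant-id>/v2.0"
AUDIENCE = "api://my-api"
JWKS_URL = "https://login.microsoftonline.com/<tenant-id>/discovery/v2.0/keys"

def validate_bearer_token(token: str) -> dict:
    """Verify signature, issuer, audience, and expiry; raise if the token is invalid."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
```

With JWT validation enabled on the gateway, each backend no longer has to repeat this check on every request.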

Providing ubiquitous scale

Scale is of utmost importance to customers looking to fine-tune their AI models or run low-latency inferencing for their AI/ML workloads. Enhanced VPN and ExpressRoute connectivity and scalable private endpoints further strengthen the platform’s reliability and future-readiness.

ExpressRoute 400G: Azure will support 400G ExpressRoute Direct ports in select locations starting in 2026. Customers can combine multiple of these ports to achieve multi-terabit throughput over dedicated private connections to on-premises or remote GPU sites.

High throughput VPN Gateway: We are announcing the general availability of 3x faster VPN gateway connectivity, with support for a single TCP flow of 5 Gbps and a total throughput of 20 Gbps across four tunnels.

High scale Private Link: We are also increasing the total number of private endpoints allowed in a virtual network to 5,000, with support for up to 20,000 cross-peered VNets.

Advanced traffic filtering for storage optimization in Azure Network Watcher: Targeted traffic logs help optimize storage costs, accelerate analysis, and simplify configuration and management.

Enhancing the experience of cloud native applications

Elasticity and the ability to scale seamlessly are essential capabilities that Azure customers deploying containerized apps expect and rely on. AKS is an ideal platform for deploying and managing containerized applications that require high availability, scalability, and portability. Azure’s Advanced Container Networking Services is natively integrated with AKS and offered as a managed networking add-on for workloads that require high-performance networking, essential security, and pod-level observability.

We are happy to announce the product updates below in this space:

eBPF Host Routing in Advanced Container Networking Services for AKS: By embedding routing logic directly into the Linux kernel, this feature reduces latency and increases throughput for containerized applications.

Pod CIDR Expansion in Azure CNI Overlay for AKS: This new capability allows users to expand existing pod CIDR ranges, enhancing scalability and adaptability for large Kubernetes workloads without redeploying clusters.

WAF for Azure Application Gateway for Containers: Now generally available, this brings secure-by-design web application firewall capabilities to AKS, ensuring operational consistency and seamless policy management for containerized workloads.

Azure Bastion now enables secure, simplified access to private AKS clusters, reducing setup effort while maintaining isolation and delivering cost savings to users.

These innovations reflect Azure Networking’s commitment to delivering secure, scalable, and future-ready solutions for every stage of your cloud journey. For a full list of updates, visit the official Azure updates page.

Get started with Azure Networking

Azure Networking is more than infrastructure—it’s the catalyst for foundational digital transformation, empowering enterprises to harness the full potential of the cloud and AI. As organizations navigate their cloud journeys, Azure stands ready to connect, secure, and accelerate innovation at every step.

All updates in one spot
From Azure DNS to Virtual Network, stay informed on what's new with Azure Networking.

Get more information here

The post Azure networking updates on security, reliability, and high availability appeared first on Microsoft Azure Blog.
Quelle: Azure

A decade of open innovation: Celebrating 10 years of Microsoft and Red Hat partnership

Ten years ago, Microsoft and Red Hat began a partnership grounded in open source and enterprise cloud innovation. This year, we celebrate a decade of collaboration. Our journey together has helped customers accelerate hybrid cloud transformation, empower developers to innovate, and strengthen the open source community to drive modern application innovation​.

Accelerate modernization with Azure Red Hat OpenShift

The partnership that redefined enterprise cloud

In 2015, running mission-critical Linux workloads on Microsoft Azure was considered bold and visionary. Ten years later, our partnership with Red Hat has helped thousands of organizations worldwide accelerate digital transformation, set new benchmarks in open innovation, and advance the cloud-native movement for enterprises everywhere.

Together, we introduced Red Hat Enterprise Linux (RHEL) on Azure, setting a new precedent for innovation in the cloud. This collaboration deepened with the addition of Red Hat offerings, including Azure Red Hat OpenShift (ARO)—a fully managed, jointly engineered, and supported application platform that combines cloud scale with open source flexibility.

Red Hat and Microsoft’s global footprint and expanding customer base underline how an open approach and commitment to solving customer challenges drives adoption and innovation at scale.

Accomplishments and impact​

Azure Red Hat OpenShift and Red Hat’s automation platforms are powering digital transformation for global leaders across industries:

Leaders like Teranet have saved CA$5.6 million in capital expenditures and increased customer confidence by migrating mission-critical systems and OpenShift containers to Azure, unlocking unmatched scalability and automation.

For Bradesco, Azure Red Hat OpenShift is the secure, scalable backbone of its future-ready AI platform—unifying governance, powering more than 200 enterprise AI initiatives, and accelerating transformation across every business unit. By integrating Azure OpenAI and Power Platform, Bradesco delivers scalable, compliant innovation in banking services. 

Western Sydney University improved reliability and accelerated digital research for thousands of students and faculty with the security and flexibility of Red Hat Enterprise Linux on Azure. 

Symend launched new regions in weeks and powered personalized customer engagement by adopting Azure Red Hat OpenShift and Microsoft Azure AI, driving agility at enterprise scale. Microsoft itself leverages Red Hat’s Ansible Automation Platform to streamline thousands of endpoints and modernize global network operations for business-critical infrastructure.

Together, Microsoft and Red Hat have advanced the industry with major accomplishments:

Deep integration for real-world flexibility: Red Hat solutions—like Azure Red Hat OpenShift, Red Hat Enterprise Linux, and Red Hat Ansible Automation Platform—are available across Azure, including in the Azure Marketplace, Azure Government, and expanding regions. Customers benefit from streamlined migrations, enhanced security features, and integrated support that simplifies modernization.​

Modernization and operational agility: OpenShift Virtualization and Confidential Containers on Azure Red Hat OpenShift enable customers to migrate and modernize legacy applications, run confidential workloads, and automate operations. These capabilities deliver scalability and secure management across hybrid environments.

Accelerating open source innovation: Together, the companies have contributed to Kubernetes, containers, cloud monitoring, and secure computing standards, advancing open hybrid architectures for everyone.

Expanding developer and IT choice: By making RHEL available for Windows Subsystem for Linux and supporting hybrid container and virtual machine (VM) environments, Microsoft and Red Hat have given developers flexible, secure, and consistent tools for building anywhere.​

Enabling transformative AI adoption at scale: By leveraging Azure Red Hat OpenShift as a secure, governable foundation for managing multicloud OpenShift clusters, Bradesco streamlined operations across on-premises and cloud environments. This foundation, combined with Microsoft Foundry and Azure OpenAI Service, empowers Bradesco to deliver AI-powered banking solutions that scale securely and responsibly across millions of customers and business units. Symend also adopts Azure Red Hat OpenShift and Azure AI to power personalized customer engagement.

Flexible pricing: Azure Hybrid Benefit for RHEL is a key cost optimization feature that allows organizations to maximize existing Red Hat subscriptions when running workloads on Azure. By leveraging this benefit, customers can reduce licensing costs and improve ROI while maintaining enterprise-grade support and security. It is one example of how Azure delivers both technical flexibility and financial efficiency for hybrid environments.

Additionally, customers can optimize costs with pay-as-you-go pricing, draw down Microsoft Azure Consumption Commitment (MACC), and receive a single bill for both OpenShift and Azure consumption with Azure Red Hat OpenShift.

Discover what these solutions can offer your business

Ten years of innovation: Microsoft and Red Hat partnership highlights

The partnership’s journey is marked by major shared milestones, summarized in the timeline graphic below:

November 2015: Partnership announcement launched a decade of innovation.

February 2016: Red Hat Enterprise Linux available in the Azure Marketplace with integrated support.

May 2019: Azure Red Hat OpenShift reached general availability (GA).

January 2020: Red Hat Enterprise Linux BYOS Gold images available in Azure.

May 2021: JBoss EAP offered as an Azure App Service.

January 2022: Ansible released as a managed app for automation.

February 2023: Azure Red Hat OpenShift for Azure Government reached GA.

May 2025: OpenShift Virtualization on Azure Red Hat OpenShift entered public preview, culminating at Ignite 2025 with GA.

See the attached timeline for more details about key moments and innovations.​

Ignite 2025: GA of OpenShift Virtualization and more on Azure Red Hat OpenShift

A defining moment of our tenth anniversary was the GA of OpenShift Virtualization on Azure Red Hat OpenShift, announced at Microsoft Ignite 2025. Organizations can now run VMs alongside containers on a single, secure platform, seamlessly bridging traditional virtualization with cloud-native innovation. Enterprises can modernize their VM workloads into Kubernetes-based environments, leveraging Azure’s performance and security with familiar OpenShift tools.

In addition, Microsoft Ignite 2025 marked the GA of confidential containers on Azure Red Hat OpenShift, delivering enhanced hardware-enforced security and isolation for container workloads. The event also showcased the GA of Red Hat Enterprise Linux (RHEL) for HPC on Azure, offering a secure, high-performance platform tailored for scientific and parallel computing workloads on Azure.

Together, these announcements underscore our ongoing commitment to hybrid innovation, security, and helping customers to deploy a wide spectrum of enterprise workloads with agility and confidence.

Open at the core: What’s next for open source and enterprise cloud collaboration

Ten years of partnership have proven openness is more than a technological strategy—it is a culture of progress, trust, and shared innovation. Microsoft and Red Hat remain committed to pioneering the future of hybrid cloud and AI-powered applications, always keeping customer choice and reliability at the center.

We’re proud to partner with Red Hat not just to support our customers, but also in our shared interest in projects like the Linux Kernel, Kubernetes, and most recently llm-d. Together, we are committed to continuing contributions to the health and success of open source technologies and communities.

To our customers, partners, and open source communities: thank you for partnering with us on this journey. Together, we will continue to build the future of enterprise technology—openly, boldly, and collaboratively.
—Brendan Burns, Corporate Vice President, Microsoft Cloud Native

Explore OpenShift Virtualization on Azure

Explore more stories on hybrid cloud and open innovation

Unlock what is next: Microsoft at Red Hat Summit 2025​

Red Hat Powers Modern Virtualization on Microsoft Azure​

Red Hat Success Stories: Helping Microsoft with IT automation​

The best of both worlds: How Microsoft and Red Hat are revolutionizing enterprise IT​

Red Hat CEO and Microsoft EVP on The Evolution of Open Source​

GA of OpenShift Virtualization on Azure Red Hat OpenShift at Microsoft Ignite 2025

Bradesco, Azure Red Hat OpenShift is the secure, scalable backbone of its future-ready AI platform

Ortec Finance launched a cloud-native risk management platform, accelerating service delivery for over 600 financial institutions

Rossmann transformed its retail operations and scaled hybrid cloud deployments to millions of customers

City of Vienna modernized citizen services with AI, improving availability and efficiency for thousands of residents​​ 

Porsche Informatik accelerated digital transformation across automotive logistics, optimizing mission-critical IT service 

The post A decade of open innovation: Celebrating 10 years of Microsoft and Red Hat partnership appeared first on Microsoft Azure Blog.
Quelle: Azure

Introducing Mistral Large 3 in Microsoft Foundry: Open, capable, and ready for production workloads

Enterprises today are embracing open-weight models for their transparency, flexibility, and ability to run across a broad range of deployment architectures. As the number of open models grows, the bar for reliability, instruction-following quality, multimodal reasoning, and long-context performance continues to rise. 

Today, we’re excited to announce that Mistral Large 3 is now available in Azure, bringing one of the strongest open-weight, Apache-licensed frontier models to the Microsoft Cloud. 

Mistral Large 3 delivers frontier-class capabilities with open-source flexibility, making it a powerful option for organizations building production assistants, retrieval-augmented applications, agentic systems, and multimodal workflows. 

See Mistral Large 3 in action

Enterprise-ready open models 

Mistral Large 3 sits in the leading tier of globally available open models alongside DeepSeek and the GPT OSS family. It is optimized not only for benchmark-chasing on abstract mathematical puzzles, but also for what customers need most in real enterprise applications: 

Highly reliable instruction following 

Long-context comprehension and retention 

Strong multimodal reasoning 

Stable, predictable performance across dialogue and applied reasoning 

According to Mistral, Mistral Large 3 shows fewer breakdowns and more consistent behavior than most peers, especially in multi-turn conversations and complex, extended inputs. It is designed for production, not just experimentation. 

Mistral 3 is optimized for real-world scenarios 

Instruction reliability you can depend on 

Many open models excel on benchmarks but struggle to follow instructions reliably when deployed in real workflows. Mistral Large 3 bucks that trend by demonstrating:

Precise adherence to task instructions 

Strong grounding in domain knowledge 

Low hallucination rates

Consistent formatting in structured outputs 

This makes it particularly effective for agents, automation flows, and business logic integration where reliability is non-negotiable. 

Exceptional long-context handling 

With extended context support, Mistral Large 3 processes, retains, and reasons over long documents, multi-step sequences, and sustained dialogues with notable stability. 

Enterprises can use it for: 

Retrieval-augmented generation 

Document understanding 

Multi-turn conversational systems 

Long-form summarization and synthesis 

Its ability to maintain coherence over long sessions reduces error cascades and produces more predictable outcomes. 

Multimodal and applied reasoning 

As organizations build increasingly multimodal workflows that interpret text, images, diagrams, and structured data, Mistral Large 3 provides strong cross-modal understanding with balanced behavior.

It excels in: 

Visual question answering 

Diagram or chart interpretation 

Multimodal retrieval and grounding 

Combined reasoning over text and image inputs 

Its stability makes it ideal for use cases where multimodal reasoning must be accurate, not approximate. 

Fully Open and Apache 2.0 licensed

Mistral Large 3 stands out as the strongest fully open model developed outside of China and offers something rare in the global ecosystem: 

Frontier-level capability, Apache 2.0 licensing, reproducible results, and worldwide availability without regional restrictions. 

Organizations can: 

Integrate the model in Microsoft Foundry 

Export weights for hybrid or on-premises deployment (subject to Mistral licensing) 

Run it in their own VPC, edge, or sovereign cloud environments 

Fine-tune or customize freely 

Use it for commercial applications without attribution requirements 

This combination of capability and openness is uniquely compelling for global enterprises requiring flexibility, transparency, and long-term vendor independence. 

Why Mistral Large 3 in Azure? 

Foundry provides an end-to-end workspace for model development, evaluation, and deployment, including unified governance, observability, and agent-ready tooling. 

With Mistral Large 3 in Foundry, customers gain: 

1. Unified access to top-performing models

Simplified and secure access to Mistral Large 3 and Mistral Document AI as first-party models available on Foundry alongside other open and commercial frontier models.

2. End-to-end evaluation and observability

Foundry delivers end-to-end evaluations, routing, and observability, enabling organizations to benchmark Mistral Large 3 across cost, latency, throughput, and quality, while monitoring performance and spending through a single set of dashboards and SDKs. Workloads can be intelligently routed to the most efficient model with no added integration effort. 

3. Enterprise-grade safety and governance 

Foundry applies Responsible AI safeguards, content filters, and auditability across all model interactions, ensuring safe, compliant deployments. 

4. Agent-first capabilities 

Mistral Large 3 supports tool calling, enabling agentic systems that can take action, automate workflows, and connect to enterprise data and APIs. This foundation supports customer service bots, research agents, automation flows, and enterprise copilots. 
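As a starting point, here is a minimal sketch of calling Mistral Large 3 through the Azure AI Inference SDK from a Foundry project; the endpoint, API key, and deployment name are placeholders you would replace with your own values.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholders: point these at your Foundry resource and Mistral Large 3 deployment.
client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    model="<your-mistral-large-3-deployment>",  # deployment name is an assumption
    messages=[
        SystemMessage("Answer using only the supplied contract excerpts."),
        UserMessage("Summarize the termination clauses in the attached agreement."),
    ],
    temperature=0.2,
    max_tokens=512,
)

print(response.choices[0].message.content)
```

The same client can be pointed at other Foundry model deployments, which is what makes side-by-side evaluation and routing straightforward.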

Unlocking new use cases across industries 

Enterprise knowledge assistants: Long-context comprehension enables rich, grounded conversations across corporate knowledge bases. 

Document intelligence and retrieval-augmented pipelines: Stable reasoning and consistent formatting make it ideal for summarization, extraction, and multi-document synthesis. 

Developer agents and automation: Reliable instruction supports code refactoring, test generation, and workflow automation. 

Multimodal customer experiences: Combining image and text understanding enables richer digital assistant and customer support experiences. 

Pricing

Model: Mistral Large 3
Deployment type: Global Standard
Azure resource region: West US 3
Price per 1M tokens: Input $0.50; Output $1.50
Availability: December 2, 2025 (public preview)

The future of open models on Azure 

With the addition of Mistral Large 3, Foundry continues to expand its position as the cloud platform with the widest selection of open and frontier models, unified under a single, enterprise-ready ecosystem. 

As organizations increasingly demand transparent, flexible, and globally accessible intelligence, Mistral Large 3 sets a new benchmark for what a production-ready open model can deliver. 

Try Mistral Large 3 today
Open, capable, multimodal, and built for long context, Mistral Large 3 is now available in Microsoft Foundry.

Explore on Foundry

The post Introducing Mistral Large 3 in Microsoft Foundry: Open, capable, and ready for production workloads appeared first on Microsoft Azure Blog.
Quelle: Azure

New options for AI-powered innovation, resiliency, and control with Microsoft Azure

Organizations running mission‑critical workloads operate under stricter standards because system failures can often affect people and business operations at scale. They must ensure control, resilience, and operational autonomy such that innovation does not compromise governance. They need agility that also maintains continuity and preserves standards compliance, so they can get the most out of AI, scalable compute, and advanced analytics on their terms.

For example, manufacturing plants need assembly lines to continue to operate during network outages, and healthcare providers need the ability to access patient data during natural disasters. Similarly, government agencies and critical infrastructure operators must comply with regulations to keep systems autonomous and data within national borders. Additionally, regulations sometimes mandate that sensitive data remains stored and processed locally under local jurisdiction and personnel control.

These are exactly the challenges Azure’s adaptive cloud approach is designed to solve. We are extending Azure public regions with options that adapt to our customers’ evolving business requirements without forcing trade-offs. Microsoft’s strategy spans our public cloud, private cloud, and edge technology, giving customers a unified platform for operations, applications, and data with the right balance of flexibility and control. This approach empowers customers to use Azure services to innovate in environments under their full control, rather than maintaining separate, siloed, or legacy IT systems.

Meeting unique operational and data sovereignty needs

To address unique operational and data sovereignty needs, Microsoft introduced Azure Local—Azure infrastructure delivered in customers’ own datacenters or distributed locations. Azure Local comes with integrated compute, storage, and networking services and leverages Azure Arc to extend cloud services across the management, data, application, and security layers into hybrid and disconnected environments.

Learn more about what’s new in Azure Local

Over the past six months, our team has significantly expanded Azure Local’s capabilities to meet requirements across industries. We are seeing tremendous momentum from customers like GSK, a global biopharma leader extending cloud innovation and AI to the edge using Azure Local. GSK is enabling real-time data processing and AI inferencing across vaccine and medicine manufacturing and R&D labs worldwide. GSK joined our What’s new in Azure Local session at Ignite last month, offering insight into how they are using Azure Local.

We are also engaging deeply with public sector organizations to ensure essential services can run independent of internet connectivity when needed, from city administrations to defense and emergency response agencies.

To support these customers, we are enabling a growing set of Azure Local features and functionalities across Microsoft and partners, many of which have reached General Availability (GA) and preview.

Microsoft 365 Local (GA) delivers full productivity—email, collaboration, and communications—within a private cloud, ensuring sovereignty and security for sovereign scenarios.

Next-gen NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs (GA) bring high-performance AI workloads on premises, enabling advanced analytics and generative AI without sacrificing compliance.

Azure Migrate support for Azure Local (GA) streamlines lift-and-shift migrations, reducing costs and accelerating time-to-value.

AD-less deployments, Rack-Aware Clustering, and external SAN storage integration (Preview) offer more options for identity, fault tolerance, and flexible storage strategies.

Rack aware clustering is now available in preview for Azure Local

Multi-rack deployments (Preview) dramatically increase options for high scale, supporting larger IT estates in a single integrated Azure Local instance.

Disconnected operations (Preview) delivers a fully disconnected Azure Local experience for mission-critical environments where internet connectivity is infeasible or unwanted.

In short, Azure Local has rapidly evolved into a robust platform for operational sovereignty. It delivers Azure consistency for all workloads from core business apps to AI, in customers’ locations—from a few nodes on a factory floor up to thousands of nodes. These advancements reflect our commitment to meet customers where they are. 

Intelligent and connected physical operations

Azure’s adaptive cloud approach helps bring AI to physical operations. Our Azure IoT platform enables asset-intensive organizations to harness data from devices and sensors in a secure, scalable, and resilient fashion. When combined with Microsoft Fabric, customers get real-time insights from their operational data. This integration allows industries such as manufacturing, energy, and industrial operations to bridge digital and physical systems and adopt AI and automation in ways that align with their specific needs.

Demonstrating how the cloud, edge AI, and simulation can help orchestrate human-robotic collaboration on manufacturing product lines at Microsoft Ignite

Our approach to enable AI in physical operations environments follows two basic patterns. Azure IoT Operations enables device and sensor data from larger sites to be aggregated and processed close to its source for near real-time decision-making and reduced latency, streaming only relevant data to Fabric for more advanced analytics. Azure IoT Hub, on the other hand, enables device data to securely flow directly to Fabric with cloud-based identity and security. The integration across Microsoft Fabric and Azure IoT helps bridge Operational Technology (OT) and Information Technology (IT), delivering cost-effective, secure, and repeatable outcomes.

In the last six months, we introduced several enhancements to Azure IoT tailored for connected operations use cases:

In Azure IoT Hub, a new Microsoft-backed X.509 certificate management capability provides enhanced secure identity lifecycle control. Integration with Azure Device Registry streamlines identity, security, and policy management across fleets.

Enhanced Azure Device Registry capabilities improve asset registration, classification, and monitoring for operational insight while allowing Azure connected assets and devices to be used with any Azure service.

Azure Device Registry (ADR) acts as the unified control plane for managing both physical assets from Azure IoT Operations and devices from Azure IoT Hub

Azure IoT Operations’ latest release includes a number of new features. WebAssembly-powered data graphs enable fast, modular analytics for near-instant decision-making. Expanded connectors for OPC UA, ONVIF, REST/HTTP, SSE, and MQTT simplify interoperability. OpenTelemetry endpoint support enables smooth telemetry pipelines and monitoring. Advanced health monitoring provides deep visibility and control over operational assets.

In Microsoft Fabric, Fabric IQ and Digital Twin Builder turn raw telemetry into actionable context for simulation and intelligent feedback loops thanks to the use of models and knowledge graphs that bring clarity to streaming data.
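To make the edge data flow concrete, here is a minimal sketch of a device publishing telemetry to an MQTT broker such as the one Azure IoT Operations runs at the edge. It assumes the paho-mqtt package (2.x); the broker address, topic, and payload are placeholders, and a production deployment would use TLS and authenticated identities rather than an anonymous local connection.

```python
import json
import time

import paho.mqtt.client as mqtt

# Placeholders for the edge broker and telemetry topic.
BROKER_HOST = "aio-broker.local"
BROKER_PORT = 1883
TOPIC = "factory/line1/telemetry"

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x constructor
client.connect(BROKER_HOST, BROKER_PORT)
client.loop_start()

for _ in range(5):
    sample = {"sensor": "spindle-temp", "celsius": 71.4, "ts": time.time()}
    client.publish(TOPIC, json.dumps(sample), qos=1)  # at-least-once delivery
    time.sleep(1)

client.loop_stop()
client.disconnect()
```

From there, data flows in Azure IoT Operations can filter and enrich the stream and forward only the relevant records to Microsoft Fabric for analytics.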

Customers like Chevron and Husqvarna are scaling Azure IoT Operations from single-site pilots to multi-site rollouts, unlocking new use cases such as predictive maintenance and worker safety. These deployments demonstrate measurable impact and adaptive cloud architectures delivering business value. Our partner ecosystem is also growing with Siemens, Litmus, Rockwell Automation, and Sight Machine building on the platform.

Managing a distributed estate with unified Azure management and security

Organizations often grapple with the complexity of highly distributed IT estates—spanning on-premises datacenters, hundreds or sometimes thousands of edge sites, multiple public clouds, and countless devices. Managing and securing this sprawling ecosystem is challenging with traditional tools. A core promise of Azure’s adaptive cloud approach is helping to simplify centralized operations through a single, unified control plane via Azure Arc.

Over the last six months, we have delivered a wave of improvements to help customers manage distributed resources at scale, across heterogenous environments, in a frictionless way. Key enhancements in our Azure Arc platform include:

Azure Arc site manager (Preview) organizes resources by physical site for easier monitoring and management of distributed operations.

New GCP connector (Preview) projects Google Cloud resources into Azure for a single pane of glass across Azure, AWS, and GCP.

The Multicloud connector enabled by Azure Arc is now in preview for GCP environments

Azure Machine Configuration (GA) enforces OS-level settings across Azure Arc-managed servers for compliance and security.

New Azure policies audit and configure the Windows Recovery Environment so machines are ready for critical patches and can recover from unbootable states caused by issues such as faulty drivers.

New subscription-level enrollment of essential machine management services, with a simplified pricing model and a unified Azure user experience for hybrid environments, lowers the adoption barrier for legacy environments.

Workload Identity (GA) lets Azure Arc-enabled Kubernetes clusters use Entra ID for secure resource access, eliminating local storage of secrets.

AKS Fleet Manager (Preview) integrates Azure Arc-connected clusters for centralized policy sync and deployments across hybrid environments.

Azure Key Vault Secret Store Extension (GA) allows Azure Arc-enabled Kubernetes clusters to cache secrets from Azure Key Vault, improving security and workload resiliency to intermittent network connectivity for hybrid workloads.

These enhancements underscore our belief that cloud management and cloud-native application development should not stop at the cloud. Whether an IT team is responsible for five datacenters or 5000 retail sites, Azure provides the tooling to manage that distributed environment and develop applications as one cohesive and adaptive cloud.

Azure’s adaptive cloud approach gives organizations the freedom to innovate on their terms while maintaining control. In an era defined by uncertainty, whether from cyber threats or geopolitical shifts, Azure empowers customers to modernize confidently without sacrificing resiliency or control.

Innovate on an adaptive cloud

The post New options for AI-powered innovation, resiliency, and control with Microsoft Azure appeared first on Microsoft Azure Blog.
Quelle: Azure

Introducing Claude Opus 4.5 in Microsoft Foundry

We’re at a real inflection point in the AI landscape, a threshold where models move from useful assistants to genuine collaborators. Models that understand the objective, factor in constraints, and execute complex multi-tool workflows. Models that not only support processes, but help restructure them for reliability, scale, and operational efficiency.

Anthropic’s newest model, Claude Opus 4.5, embodies that shift. Today, we are excited to share that Opus 4.5 is now available in public preview in Microsoft Foundry, GitHub Copilot paid plans, and Microsoft Copilot Studio.

Start building with Claude in Microsoft Foundry

Building on the Microsoft Ignite announcement of our expanded partnership with Anthropic, Microsoft Foundry delivers on its commitment to give Azure customers immediate access to the widest selection of advanced and frontier AI models of any cloud. Foundry empowers developers to accelerate innovation with an integrated, interoperable, and secure AI platform that enables seamless deployment, integration, and scaling for AI apps and agents.

We’re excited to use Anthropic Claude models from Microsoft Foundry. Having Claude’s advanced reasoning alongside GPT models in one platform gives us flexibility to build scalable, enterprise-grade workflows that move far beyond prototypes.
—Michele Catasta, President, Replit

Opus 4.5 for real work

Opus 4.5 sets a new bar for coding, agentic workflows, and enterprise productivity: outperforming Sonnet 4.5 and Opus 4.1, at a more accessible price point. Its versatility across software engineering, complex reasoning, tool use, and vision unlocks new opportunities for organizations to modernize systems, automate critical workstreams, and deliver ROI faster.

By prioritizing rapid integration of the latest models, Foundry allows Azure customers to stay ahead of the curve and maximize the impact of their agentic AI systems; all while maintaining centralized governance, security, and observability at scale.

1. Built for production engineering and agentic capabilities

According to Anthropic, Opus 4.5 delivers state-of-the-art performance on industry standard software engineering benchmarks, including new highs on SWE-bench (80.9%). Early testers consistently describe the model as being able to interpret ambiguous requirements, reason over architectural tradeoffs, and identify fixes for issues that span multiple systems.

Opus 4.5 accelerates engineering velocity by completing multi-day development work in hours with:

Improved multilingual coding performance

More efficient code generation

Stronger test coverage

Cleaner architectural and refactoring choices

Capability / Benchmark | Claude Opus 4.5 | Claude Sonnet 4.5 | Claude Opus 4.1 | Gemini 3 Pro
Agentic coding (SWE-bench Verified) | 80.90% | 77.20% | 74.50% | 76.20%
Agentic terminal coding (Terminal-bench 2.0) | 59.30% | 50.00% | 46.50% | 54.20%
Agentic tool use — Retail (t2-bench) | 88.90% | 86.20% | 86.80% | 85.30%
Agentic tool use — Telecom (t2-bench) | 98.20% | 98.00% | 71.50% | 98.00%
Scaled tool use (MCP Atlas) | 62.30% | 43.80% | 40.90% | n/a
Computer use (OSWorld) | 66.30% | 61.40% | 44.40% | n/a
Novel problem solving (ARC-AGI-2 Verified) | 37.60% | 13.60% | n/a | 31.10%
Graduate-level reasoning (GPQA Diamond) | 87.00% | 83.40% | 81.00% | 91.90%
Visual reasoning (MMMU validation) | 80.70% | 77.80% | 77.10% | n/a
Multilingual Q&A (MMLU) | 90.80% | 89.10% | 89.50% | 91.80%

Claude Opus 4.5 benchmark results from Anthropic

Opus 4.5 is also one of the strongest tool-using models available today, capable of powering agents that work seamlessly across hundreds of tools. Developers gain access to several important upgrades:

Programmatic Tool Calling: Execute tools directly in Python for more efficient, deterministic workflows.

Tool Search: Dynamically discover tools from large libraries without using up space in the context window.

Tool Use Examples: More accurate tool calling for complex tool schemas.

Together, these capabilities enable sophisticated agents across cybersecurity, full-stack software engineering, financial modeling, and other workflows requiring multiple tool interactions. Opus 4.5 shows strong, real-world intelligence applying these tools creatively within constraints. In testing, the model successfully navigated complex policy environments, such as airline change rules, chaining upgrades, downgrades, cancellations, and rebookings to optimize outcomes. This kind of adaptive, constraint-aware problem-solving reflects a meaningful step forward in what agentic AI systems can accomplish.
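For orientation, here is a minimal sketch of defining a tool and letting the model decide when to call it, following Anthropic's documented tool-use request format; the model identifier and client configuration (including any Foundry-specific endpoint settings) are assumptions, and the airline-fee tool is a hypothetical example.

```python
import anthropic

client = anthropic.Anthropic()  # endpoint/key configuration for Foundry is an assumption

tools = [
    {
        "name": "get_change_fee",
        "description": "Look up the change fee for a ticket under the airline's fare rules.",
        "input_schema": {
            "type": "object",
            "properties": {
                "fare_class": {"type": "string"},
                "days_before_departure": {"type": "integer"},
            },
            "required": ["fare_class", "days_before_departure"],
        },
    }
]

response = client.messages.create(
    model="claude-opus-4-5",  # model identifier is an assumption
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Rebook me to Friday at the lowest total cost."}],
)

# When the model chooses to call the tool, the response contains a tool_use block
# with the arguments it selected.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```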

Manus deeply utilizes Anthropic’s Claude models because of their strong capabilities in coding and long-horizon task planning, together with their prowess to handle agentic tasks. We are very excited to be using them now on Microsoft Foundry!
—Tao Zhang, Co-founder & Chief Product Officer, Manus AI

2. Improved developer experience on Foundry

Opus 4.5 paired with new developer capabilities offered on Foundry is designed to help teams build more effective and efficient agentic systems:

Effort Parameter (Beta): Control how much computational effort Claude allocates across thinking, tool calls, and responses to balance performance with latency and cost for your specific use cases.

Compaction Control: Handle long-running agentic tasks more effectively with new SDK helpers that manage context efficiently over extended interactions.

These enhancements provide greater predictability and operational control for enterprise workloads.

3. Enhanced office productivity and computer use

Opus 4.5 is also Anthropic’s strongest vision model to date, unlocking workflows that depend on complex visual interpretation and multi-step navigation. Computer use performance has improved significantly, enabling more reliable automation of desktop tasks.

For knowledge workers, the model delivers a step-change improvement in powering agents that create spreadsheets, presentations, and documents. It produces work with consistency, professional polish, and genuine domain awareness, making it a fit for finance, legal, and other precision-critical verticals. The model also makes better use of memory to maintain context and consistency across files throughout sprawling professional projects.

4. Safety and security

According to Anthropic, Opus 4.5 also delivers meaningful improvements in safety and security. The model shows a reduced rate of misaligned responses, stronger robustness against prompt-injection attacks, and more reliable behavior across complex tasks.

These improvements align with Microsoft’s commitment to providing enterprise customers with models that meet high bars for safety, governance, and operational integrity.

Use cases

Opus 4.5 serves the following use cases:

Software development: Deploy agents that handle complex, multi-system development tasks with minimal supervision.

Financial analysis: Connect insights across regulatory filings, market reports, and internal data for sophisticated predictive modeling and proactive compliance monitoring.

Cybersecurity: Correlate logs, vulnerability databases, and threat intelligence for professional-grade threat detection and automated incident response.

Enterprise operations: Manage sophisticated workflows requiring coordination across multiple tools, systems, and information sources.

Pricing and availability

Opus 4.5 delivers frontier performance and sets a new standard for a variety of use cases at one third the price of previous Opus-class models.

Model: Claude Opus 4.5
Offer type: Serverless pay-as-you-go
Deployment type: Global Standard
Regions: East US2, Sweden Central
Price per 1M tokens: Input $5; Output $25
Availability: November 24, 2025 (public preview)

Get started today

Claude Opus 4.5 is available now in Microsoft Foundry and coming soon in Visual Studio Code via the Foundry extension. Visit the Foundry portal to begin building with Opus 4.5.

The post Introducing Claude Opus 4.5 in Microsoft Foundry appeared first on Microsoft Azure Blog.
Quelle: Azure

Introducing Anthropic’s Claude models in Microsoft Foundry: Bringing Frontier intelligence to Azure

Innovation in AI is about empowering every developer and organization with the freedom to choose the right intelligence for every challenge. In today’s landscape, where business needs span from real-time chatbots to deep research agents, model choice is an essential engine of progress.

Microsoft Foundry already offers the widest selection of models of any cloud and with today’s partnership announcement with Anthropic, we’re proud that Azure is now the only cloud providing access to both Claude and GPT frontier models to customers on one platform. This milestone expands Foundry further into what it was built to be: a single place to use any model, any framework, and every enterprise control you need to build and run AI apps and agents at scale.

“We’re excited to use Anthropic Claude models from Microsoft Foundry. Having Claude’s advanced reasoning alongside GPT models in one platform gives us flexibility to build scalable, enterprise-grade workflows that move far beyond prototypes.” — Michele Catasta, President, Replit

Start building with Claude in Microsoft Foundry today

Meet the Claude models: AI that delivers real results

According to Anthropic, Claude models are engineered for the realities of enterprise development, from tight integration with productivity tools to deep, multi-document research and agentic software development across large repositories.

Claude Haiku 4.5. Strengths: fastest, most cost-efficient. Ideal use cases: powering free-tier user experiences, real-time experiences, coding sub-agents, financial sub-agents, research sub-agents, business tasks.

Claude Sonnet 4.5. Strengths: smartest model for complex agents and coding. Ideal use cases: long-running agents, coding, cybersecurity, financial analysis, computer use, research.

Claude Opus 4.1. Strengths: exceptional model for specialized reasoning tasks. Ideal use cases: advanced coding, long-horizon tasks and complex problem solving, AI agents, agentic search and research, content creation.

All Claude models are built on Constitutional AI for safety and can now be deployed through Foundry with governance, observability, and rapid integration. This enables secure use cases like customer support agents, coding agents, and research copilots, making Claude an ideal choice for scalable, trustworthy AI.

Evolving from monolithic apps to intelligent agents

Across the tech landscape, organizations are embracing agentic AI systems. Early studies show AI agents can help boost efficiency by up to 30% for teams and stakeholders. But the challenge for most enterprises isn’t building powerful apps; it’s operationalizing them and weaving them into real workflows. Industry surveys point to a clear pattern: 78% of executives say the primary barrier to scaling AI impact is integrating it into core business processes.

Microsoft is uniquely positioned to address this integration gap. With Foundry, we’re bringing together leading-edge reasoning models, an open platform for innovation, and Responsible AI all within a unified environment. This empowers organizations to experiment, iterate, deploy, and scale AI with confidence, all backed by robust governance and security. This means building AI solutions that are not only powerful, but practical and ready to deliver impact at scale.

“Manus deeply utilizes Anthropic’s Claude models because of their strong capabilities in coding and long-horizon task planning, together with their prowess to handle agentic tasks. We are very excited to be using them now on Azure AI Foundry!” — Tao Zhang, Co-founder & Chief Product Officer, Manus AI.

Claude in Foundry Agent Service: From reasoning to results

Inside Foundry Agent Service, Claude models serve as the reasoning core behind intelligent, goal-driven agents. Developers can:

Plan multi-step workflows: Leverage Claude in Foundry Agent Service to orchestrate complex, multi-stage tasks with structured reasoning and long-context understanding

Streamline AI integration with your everyday productivity tools: Use the Model Context Protocol (MCP) to seamlessly connect Claude to data fetchers, pipelines, and external APIs, enabling dynamic actions across your stack.

Automate data operations: Upload files for Claude to summarize, classify, or extract insights to accelerate document-driven processes with robust AI.

Real-time model selection: Using the model router, customers will soon be able to automatically route requests to Claude Opus 4.1, Sonnet 4.5, and Haiku 4.5, lowering latency and delivering cost savings in production.

Govern and operate your fleet: Foundry offers unified controls and oversight, allowing developers to operate their entire agent fleet with clear insight into cost, performance, and behavior in one connected view.

Developers can also use Claude models in Microsoft Foundry with Claude Code, Anthropic’s AI coding agent.

These capabilities create a framework for AI agents to safely execute complex workflows with minimal human involvement. For example, if a deployment fails, Claude can query Azure DevOps logs, diagnose the root cause, recommend a fix, and trigger a patch deployment all automatically, using registered tools and operating within governed Azure workflows.
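To make that flow concrete, here is a minimal, hypothetical sketch of such a remediation loop. The helper functions (fetch_devops_logs, claude_complete, trigger_patch_deployment) are illustrative placeholders rather than Foundry or Anthropic SDK calls; in a real agent they would be registered tools operating within governed Azure workflows.

```python
# Hypothetical sketch: an agent loop that diagnoses a failed deployment.
# fetch_devops_logs, claude_complete, and trigger_patch_deployment are
# illustrative placeholders, not real Foundry or Anthropic SDK calls.

def fetch_devops_logs(pipeline_id: str) -> str:
    """Placeholder: would call Azure DevOps to retrieve the failing run's logs."""
    return "ERROR: connection string 'SqlDb' not found in app settings"

def claude_complete(prompt: str) -> str:
    """Placeholder: would send the prompt to a Claude model deployed in Foundry."""
    return "Root cause: missing 'SqlDb' connection string. Fix: add it to app settings."

def trigger_patch_deployment(fix_description: str) -> None:
    """Placeholder: would kick off a governed redeployment with the proposed fix."""
    print(f"Deploying patch: {fix_description}")

def remediate_failed_deployment(pipeline_id: str) -> None:
    logs = fetch_devops_logs(pipeline_id)                           # 1. gather evidence
    diagnosis = claude_complete(f"Diagnose this failure:\n{logs}")  # 2. reason over it
    trigger_patch_deployment(diagnosis)                             # 3. act within guardrails

remediate_failed_deployment("release-42")
```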

Claude Skills: Modular intelligence you can compose

With the Claude API, developers can define Skills, modular building blocks that combine:

Natural-language instructions,

Optional Python or Bash code, and

Linked data files (templates, visual assets, tabular data, etc.), or APIs

Each Skill is dynamically discovered, maximizing your agent’s context. Skills automate workflows like generating reports, cleaning datasets, or assembling PowerPoint summaries, and can be reused or chained with others to form larger automations. Within Microsoft Foundry, every Skill is governed, traceable, and version-controlled, ensuring reliability across teams and projects.

These capabilities allow developers to create Skills that become reusable building blocks for intelligent automation. For example, instead of embedding complex logic in prompts, a Skill can teach Claude how to interact with a system, execute code, analyze data, or transform content and through the Model Context Protocol (MCP), those Skills can be invoked by any agent as part of a larger workflow. This makes it easier to standardize expertise, ensure consistency, and scale automation across teams and applications.
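As an illustration of the idea, the snippet below models a Skill as natural-language instructions plus optional code and linked data. The structure and the quarterly-report example are purely hypothetical; the actual Skills format is defined by Anthropic’s documentation.

```python
# Conceptual sketch of a "Skill" as described above: natural-language
# instructions plus optional code and linked data files. The structure below
# is illustrative only; the real Skills format is defined by Anthropic.

from pathlib import Path

quarterly_report_skill = {
    "name": "quarterly-report",
    "instructions": (
        "When asked for a quarterly summary, load the linked CSV, "
        "compute revenue totals per region, and return a short narrative."
    ),
    "code": "summarize.py",              # optional Python helper shipped with the skill
    "data": [Path("data/revenue.csv")],  # linked tabular data the skill can read
}

def run_skill(skill: dict, request: str) -> str:
    """Placeholder runner: a real agent would discover the skill dynamically
    and execute its code in a sandbox; here we just echo the plan."""
    return f"[{skill['name']}] {skill['instructions']} (request: {request})"

print(run_skill(quarterly_report_skill, "Summarize Q3 revenue"))
```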

Custom Deep Research: Context that connects beyond a single prompt

Claude’s Deep Research capability extends model reasoning beyond static queries. It allows agents to gather information from live sources, compare it with internal enterprise data, and produce well-reasoned, source-grounded insights. This transforms agents from simple responders into analytical systems capable of synthesizing trends, evidence, and context at scale.

Pricing

Marketplace models (Global Standard deployment; Azure resource endpoints in East US 2 and West US):

Model               Input / 1M tokens   Output / 1M tokens
Claude Haiku 4.5    $1.00               $5.00
Claude Sonnet 4.5   $3.00               $15.00
Claude Opus 4.1     $15.00              $75.00
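For a rough sense of what these rates mean in practice, the short calculation below estimates the cost of a hypothetical workload of 2 million input tokens and 500,000 output tokens at the listed prices.

```python
# Estimate cost for a hypothetical workload using the per-1M-token prices above.
PRICES = {  # (input $/1M tokens, output $/1M tokens)
    "Claude Haiku 4.5":  (1.00, 5.00),
    "Claude Sonnet 4.5": (3.00, 15.00),
    "Claude Opus 4.1":   (15.00, 75.00),
}

input_tokens, output_tokens = 2_000_000, 500_000  # example workload

for model, (in_price, out_price) in PRICES.items():
    cost = input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price
    print(f"{model}: ${cost:.2f}")
# Haiku 4.5: $4.50, Sonnet 4.5: $13.50, Opus 4.1: $67.50
```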

Looking ahead

Our partnership with Anthropic is about more than just bringing new models to Foundry. It’s about empowering every person and organization to achieve more with AI. We look forward to seeing how developers and enterprises leverage these new capabilities to build the next generation of intelligent systems.

Ready to explore Claude in Foundry? Start building today and join us in shaping the next generation of intelligent agents. Tune in to Ignite for more exciting Microsoft Foundry announcements: register today.
The post Introducing Anthropic’s Claude models in Microsoft Foundry: Bringing Frontier intelligence to Azure appeared first on Microsoft Azure Blog.
Source: Azure

Microsoft Foundry: Scale innovation on a modular, interoperable, and secure agent stack

One year ago, at Microsoft Ignite, we set out to redefine enterprise intelligence with Foundry. Our conviction was clear: software would evolve beyond rigid workflows, becoming systems that reason, adapt, and act with purpose. We envisioned developers moving from prescriptive logic to shaping intelligent behavior.

Today, that transformation is accelerating across industries and organizations of every size. The shift is tangible: agents are no longer just assistants, they are dynamic collaborators, seamlessly integrated into the tools we use every day. For builders, agents are reshaping software, and we are delivering a platform that empowers every developer and every business to embrace this moment with confidence and control.

Microsoft Foundry helps builders everywhere turn vision into reality with a modular, interoperable, and secure agent stack. From code to cloud, today demonstrates our focus on empowering developers with a powerful, simple—and trusted—path to production AI apps and agents. Here is the TL;DR:

Foundry Models added new models from Anthropic, Cohere, NVIDIA, and more. Model router is now generally available in Microsoft Foundry and in public preview in Foundry Agent Service. 

Foundry IQ, now in public preview, reimagines retrieval-augmented generation (RAG) as a dynamic reasoning process, simplifying orchestration and improving response quality. 

Foundry Agent Service now offers Hosted Agents, multi-agent workflows, built-in memory, and the ability to deploy agents directly to Microsoft 365 and Agent 365 in public preview. 

Foundry Tools empowers developers to create agents with secure, real-time access to business systems, business logic, and multimodal capabilities.

Foundry Control Plane, now in public preview, centralizes identity, policy, observability, and security signals and capabilities for AI developers in one portal. GitHub Advanced Security and Microsoft Defender integration, now in public preview, helps improve collaboration between security and development teams across the full app lifecycle.

Foundry Local is now in private preview on Android, the world’s most widely used mobile platform.

Managed Instance on Azure App Service, now in public preview, helps organizations move their web applications to the cloud with just a few configuration changes. 

Next-level productivity: AI-powered tools for builders

It all starts with developers, and GitHub is the world’s largest developer community, now serving over 180 million developers. AI-powered tools and agents in GitHub are helping developers move faster, build increasingly innovative apps, and modernize legacy systems more efficiently. More than 500 million pull requests were merged using AI coding agents this year, and with AgentHQ, coding agents like Codex, Claude Code, and Jules will be available soon directly in GitHub and Visual Studio Code so developers can go from idea to implementation faster. GitHub Copilot, the world’s most popular AI pair programmer, now serves over 26 million users, helping organizations like Pantone, Ahold Delhaize USA, and Commerzbank streamline processes and save time.

Over the last year, developers have moved from experimentation to production. They need tools that let them design, test, monitor, and improve intelligent systems with the same confidence they have in traditional software. That’s why we built a new generation of AI-powered tools: GitHub Agent HQ for unified agent management, Custom Agents to encode domain expertise, and “bring your own models” to empower teams to adapt and innovate. With Copilot Metrics, teams evolve with data, not guesswork.

We’re committed to giving every developer the tools to design, test, and improve intelligent systems, so they can turn ideas into impact, faster than ever. Managed Instance on Azure App Service, now in public preview, lets organizations move existing .NET applications to the cloud with only a few configuration changes.

Enter Microsoft Foundry: The AI app and agent factory

Enterprises need a consistent foundation to build intelligence at scale. With Microsoft Foundry, we’re unifying models, tools, and knowledge into one open system, empowering organizations to run high-performing agent fleets and intelligent workflows across their business.

Today, teams can choose from over 11,000 frontier models in Foundry, including optimized solutions for scale and specialized models for scientific and industrial breakthroughs. I’m proud to announce Rosetta Fold 3, a next-generation biomolecular structure prediction model developed with the Institute for Protein Design and Microsoft’s AI for Good Lab. Models like these enable researchers and enterprises to tackle the world’s hardest problems with state-of-the-art technology.

Build AI agents with Microsoft Foundry

Here is our top Ignite news for Foundry:

1. Use the right model for every task with Foundry Models

Innovation thrives on adaptability and choice. With more than 11,000 models, Microsoft Foundry offers the broadest model selection on any cloud. Foundry Models empowers developers to benchmark, compare, and dynamically route models to optimize performance for every task.

Today’s announcements include:

Starting today, Anthropic’s Claude Sonnet 4.5, Opus 4.1, and Haiku 4.5 models are available in Foundry, advancing our mission to give customers choice across the industry’s leading frontier models, and making Azure the only cloud offering both OpenAI and Anthropic models. Also this week, Cohere’s leading models join Foundry’s first-party model lineup, providing ultimate model choice and flexibility.

Model router (generally available) enables AI apps and agents to dynamically select the best-fit model for each prompt—balancing cost, performance, and quality. Plus, model router in Foundry Agent Service (public preview) enables developers to build more adaptable and efficient agents, which is particularly helpful for multi-agent systems (see the sketch after this list).

A new Developer Tier (public preview) makes model fine-tuning more accessible by leveraging idle GPU capacity.
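As a rough illustration of how an application might call a routed deployment, here is a hedged sketch using the OpenAI Python client against an Azure endpoint. The endpoint and key environment variables, the API version, and the deployment name "model-router" are assumptions for illustration; substitute the values from your own Foundry project.

```python
# Hedged sketch: calling a model router deployment through an OpenAI-compatible
# Azure endpoint. The environment variables, API version, and the deployment
# name "model-router" are assumptions for illustration.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",
)

response = client.chat.completions.create(
    model="model-router",  # the router picks the best-fit underlying model
    messages=[{"role": "user", "content": "Summarize our Q3 incident postmortems."}],
)

print(response.choices[0].message.content)
print("Handled by:", response.model)  # the response reports which model served it
```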

Optimize AI performance with Foundry Models

2. Empower agents with knowledge using Foundry IQ

The more context an agent has, the more grounded, productive, and reliable it’s likely to be. Foundry IQ, now available in public preview, reimagines retrieval-augmented generation (RAG) as a dynamic reasoning process rather than a one-time lookup. Powered by Azure AI Search, it centralizes RAG workflows into a single grounding API, simplifying orchestration and improving response quality while respecting user permissions and data classifications.

Key features include:

Simplified cross-source grounding with no upfront indexing.

Multi-source selection, iterative retrieval, and reflection to dynamically improve the quality of agent interactions.

Foundry Agent Service integration to enrich agent context in a single, observable runtime.
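The pseudocode-style sketch below illustrates the iterative retrieval-and-reflection pattern described above. All of the helper functions (ground, is_sufficient, refine_query, synthesize) are hypothetical placeholders, not the Foundry IQ API.

```python
# Conceptual sketch of agentic retrieval with reflection. ground() and the
# knowledge sources listed are hypothetical placeholders, not Foundry IQ calls.

def ground(query: str, sources: list[str]) -> list[str]:
    """Placeholder for a single grounding call that fans out across sources."""
    return [f"passage about '{query}' from {s}" for s in sources]

def answer_with_reflection(question: str, max_rounds: int = 3) -> str:
    sources = ["sharepoint", "onelake", "web"]
    context: list[str] = []
    query = question
    for _ in range(max_rounds):
        context += ground(query, sources)         # retrieve from multiple sources
        if is_sufficient(context, question):      # reflect: do we have enough?
            break
        query = refine_query(question, context)   # iterate with a sharper query
    return synthesize(question, context)

# In practice these three steps would themselves be model calls.
def is_sufficient(ctx, q): return len(ctx) >= 6
def refine_query(q, ctx): return q + " (more detail)"
def synthesize(q, ctx): return f"Answer to '{q}' grounded in {len(ctx)} passages."

print(answer_with_reflection("What changed in our return policy this quarter?"))
```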

Foundry already powers more than 3 billion search queries per day. By combining Foundry IQ with Microsoft Fabric IQ and Work IQ from Microsoft 365 Copilot, Microsoft provides an unparalleled context layer for agents, helping them connect users with the right information at the right time to make informed decisions.

Start building reliable agents with Foundry IQ

3. Build context-aware, action-oriented agents with Foundry Agent Service

To be force multipliers, agents need access to the same tools and knowledge as the people they support. Foundry Agent Service empowers developers to create sophisticated single and multi-agent systems, connecting models, knowledge, and tools into a single, observable runtime.

Today’s announcements include:

Hosted Agents (public preview) enable developers to run agents built with Microsoft frameworks or third-party frameworks in a fully managed environment, so they can focus on agent logic rather than operational overhead.

Multi-agent workflows (public preview) coordinate specialized agents to execute multi-step business processes using either a visual designer or a code-first API. Workflows enable long-running, stateful collaboration with recovery and debugging built-in.

Memory (public preview) enables agents to securely retain context across sessions, reducing external data-store complexity and enabling more personalized interactions out-of-the-box.

Microsoft 365 and Agent 365 integration (public preview) enables developers to instantly deploy agents from Foundry to Microsoft productivity apps, making it easier to reach users directly within the M365 ecosystem while leveraging Agent 365 for secure orchestration, governance, and enterprise-grade deployment.

Create multi-agent systems with Foundry Agent Service

4. Enable agents to take action using Foundry Tools

The right tools can transform agents from simple responders into intelligent problem-solvers. With Foundry Tools, now in public preview, developers can provide agents with secure, real-time access to business systems, business logic, and multimodal capabilities to deliver business value.

Now, developers can:

Find, connect, and manage public or private MCP tools for agents from a single, secure interface.

Enable agents to act on real-time business data and events using more than 1,400 connectors with business systems such as SAP, Salesforce, and UiPath.

Enrich workflows with out-of-the-box tools such as transcription, translation, and document processing.

Expose any API or function as an MCP tool via API Management, reusing existing business logic to accelerate time-to-value.
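To show what exposing a function as an MCP tool can look like, here is a minimal sketch using the open-source MCP Python SDK. The order-status lookup is a hypothetical stand-in for the business systems mentioned above; an agent would discover the tool through its MCP configuration.

```python
# Minimal sketch: exposing a business function as an MCP tool using the
# open-source MCP Python SDK (pip install mcp). The order-status lookup is a
# hypothetical stand-in for a real business system such as SAP or Salesforce.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-tools")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Return the current status of an order (placeholder implementation)."""
    # In practice this would call the backing business system or API.
    return f"Order {order_id}: shipped, expected delivery in 2 days"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an agent can discover and call it
```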

Enable AI agents with MCP tools with Foundry Tools

5. Advancing security and trust with Foundry Control Plane 

Scaling intelligence requires trust. As organizations rely on agents and AI-powered systems for more of their workflows, teams need clearer visibility, stronger guardrails, and faster ways to identify and address risk. This year we’re expanding security and governance with two key announcements: Foundry Control Plane, now in public preview in Microsoft Foundry, and a new integration between Microsoft Defender for Cloud and GitHub Advanced Security, also in public preview. Together they give developers and security teams a more connected way to monitor behavior, guide access, and keep AI systems safe across the full lifecycle.

Foundry Control Plane brings identity, controls, observability, and security together in one place so teams can build, operate, and govern agents with confidence. Key capabilities include: 

Controls that apply unified guardrails across inputs, outputs, and tool interactions to keep agents focused, accurate, and within defined boundaries. 

Observability with built-in evaluations, OTel-based tracing, continuous red teaming, and dashboards that surface insights on quality, performance, safety, and cost (see the tracing sketch after this list).

Security anchored in Entra Agent ID, Defender, and Purview to provide durable identity, policy-driven access, integrated data protection, and real-time risk detection across the agent lifecycle.

Fleet-wide operations that unify health, cost, performance, risk, and policy coverage for every agent, no matter where it was built or runs, with alerts that surface issues the moment they appear, empowering developers to take action.
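As a small illustration of the OTel-based tracing mentioned above, the sketch below wraps an agent step in an OpenTelemetry span using the standard Python API and SDK. The span and attribute names are illustrative; a production setup would export to your chosen backend instead of the console.

```python
# Hedged sketch: emitting OpenTelemetry spans around an agent step so any
# OTel-compatible backend can trace latency, quality, and cost signals.
# Uses the standard opentelemetry-api/sdk packages; attribute names are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("agent-demo")

def run_agent_step(prompt: str) -> str:
    with tracer.start_as_current_span("agent.step") as span:
        span.set_attribute("agent.prompt_chars", len(prompt))
        result = prompt.upper()          # placeholder for the real model call
        span.set_attribute("agent.output_chars", len(result))
        return result

run_agent_step("summarize today's incidents")
```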

Defender for Cloud + GitHub Advanced Security integration 

Developers and security teams often work in separate tools and lack shared signals to prioritize risks. The new Defender for Cloud and GitHub Advanced Security integration closes this gap. Developers receive AI suggested fixes directly inside GitHub, while security teams track progress in Defender for Cloud in real time. This gives both sides a faster, more connected way to identify issues, remediate them, and keep AI systems secure throughout the app lifecycle. 

Secure your code with GitHub and Microsoft Defender

6. Foundry Local Comes to Android: Powering Cloud to Edge  

Six months ago, we launched Foundry Local on Windows and Mac. In that short time, it’s reached 560 million devices, making it one of the fastest-growing runtimes in enterprise history. Leading organizations like NimbleEdge, Morgan Stanley, Dell, and Pieces are already using Local to bring intelligence directly into the environments where work happens, from financial services to healthcare and edge computing.

Today, we’re taking the next step. Foundry Local is now in private preview on Android, the world’s most widely used mobile platform. This means agents can run natively on billions of phones, unlocking real-time inference, privacy-aware computation, and resilience, even where connectivity is unpredictable. 

We’re also announcing a new partnership with PhonePe, one of India’s fastest-growing platforms. Together, we’ll bring agentic experiences into everyday consumer applications, showing how Local can transform not just enterprise workflows, but daily life at massive scale. 

7. Modernize your web apps for the era of AI in weeks, not months 

We see customers building net new AI applications and integrating AI into existing applications. Both require a modern foundation. Managed Instance on Azure App Service, available in public preview, lets organizations move their .NET web applications to the cloud with just a few configuration changes, saving the time and effort of rewriting code. The result is faster migrations with lower overhead, and access to cloud-native scalability, built-in security and AI capabilities in Microsoft Foundry.

Migrate your web apps with Managed Instance on Azure App Service

Learn more and get started with Foundry

We hope you join us at Microsoft Ignite 2025, in-person or virtually, to see these new capabilities in action and learn how they can support your biggest ambitions for your business.

Explore Microsoft Foundry.

Watch our Innovation Session: Your AI Apps and Agent Factory.

Watch all recorded sessions at Ignite.

Chat with us on Discord.

Provide feedback on GitHub.

The post Microsoft Foundry: Scale innovation on a modular, interoperable, and secure agent stack appeared first on Microsoft Azure Blog.
Source: Azure

Azure at Microsoft Ignite 2025: All the intelligent cloud news explained

Before joining Microsoft, I spent years helping organizations build and transform. I’ve seen how technology decisions can shape a business’s future. Whether it’s integrating platforms or ensuring your technology strategy stands the test of time, these choices define how a business operates, innovates, and stays ahead of the competition.

Today, business leaders everywhere are asking:

How do we use AI and agents to drive real outcomes?

Is our data ready for this shift?

What risks or opportunities come with AI and agents?

Are we moving fast enough, or will we fall behind?

This week at Microsoft Ignite 2025, Azure introduces solutions that address those questions with innovations designed for this very inflection point.

It’s not just about adopting the right tools. It’s about having a platform that gives every organization the confidence to embrace an AI-first approach. Azure is built for this moment, with bold ambitions to enable businesses of every size. By unifying AI, data, apps, and infrastructure, we’re delivering intelligence at scale.

If you’re still wondering if AI can really deliver ROI, don’t take my word for it; see how Kraft Heinz, The Premier League, and Levi Strauss & Co. are finding success by pairing their unique data with an AI-first approach.

[Embedded video: Your Intelligent Cloud]

With these updates, we’re making it easier to build, run, and scale AI agents that deliver real business outcomes.

TLDR—the Ignite announcement rundown

On the go and want to get right to what’s new and how to learn more? We have you covered. Otherwise, keep reading for a summary of top innovations from the week.

If you want to make AI agents smarter with enterprise context:
Microsoft Fabric IQ (preview), Microsoft Foundry IQ (preview), Microsoft Foundry new tool catalog (preview).

If you want a simple, all-in-one agent experience:
Microsoft Agent Factory (available now).

If you want to modernize and extend your data for AI, wherever it lives:
SAP BDC Connect for Microsoft Fabric (coming soon), Azure HorizonDB (preview), Azure DocumentDB (available now), SQL Server 2025 (available now).

If you want to operate smarter and securely with AI-powered control:
Foundry Control Plane (preview), Azure Copilot with built-in agents (preview), native integration for Microsoft Defender for Cloud and GitHub Advanced Security (preview).

If you want to build on an AI-ready foundation:
Azure Boost (available now), Azure Cobalt 200 (coming soon).

Your AI and agent factory, expanded: Microsoft Foundry adds Anthropic Claude and Cohere models for ultimate model choice and flexibility 

Earlier this year, we brought Anthropic models to Microsoft 365 Copilot, GitHub Copilot, and Copilot Studio. Today, we’re taking the next natural step: Claude Sonnet 4.5, Opus 4.1, and Haiku 4.5 are now part of Microsoft Foundry, advancing our mission to give customers choice across the industry’s leading frontier models—and making Azure the only cloud offering both OpenAI and Anthropic models. 

This expansion underscores our commitment to an open, interoperable Microsoft AI ecosystem—bringing Anthropic’s reasoning-first intelligence into the tools, platforms, and workflows organizations depend on every day.

Read more about this announcement

This week, Cohere’s leading models join Foundry’s first-party model lineup, enabling organizations to build high-performance retrieval, classification, and generation workflows at enterprise scale.

With these additions to Foundry’s 11,000+-model ecosystem—alongside innovations from OpenAI, xAI, Meta, Mistral AI, Black Forest Labs, and Microsoft Research—developers can build smarter agents that reason, adapt, and integrate seamlessly with their data and applications. 

Make AI agents smarter with enterprise context

In the agentic era, context is everything because the most useful agents don’t just reason, they’re capable of understanding your unique business. Microsoft Azure brings enterprise context to the forefront, so you can connect agents to the right data and systems—securely, consistently, and at scale. This set of announcements makes that real.

Related: GPT‑5.1 in Foundry (read more).

Microsoft Fabric IQ turns your data into unified intelligence

Fabric IQ organizes enterprise data around business concepts—not tables—so decision-makers and AI agents can act in real time. Now in preview, Fabric IQ unifies analytics, time-series, and operational data under a semantic framework.

Because all data resides in OneLake, either natively or via shortcuts and mirroring, organizations can realize these benefits across on-premises, hybrid, and multicloud environments. This speeds up answering new questions and building processes, making Fabric the unified intelligence system for how enterprises see, decide, and operate.

Discover how Fabric IQ can support your business

Introducing Foundry IQ, which enables agents to understand more from your data

Now in preview, Foundry IQ makes it easier for businesses to connect AI agents to the right data, without the usual complexity. Powered by Azure AI Search, it streamlines how agents access and reason over both public and private sources, like SharePoint, Fabric IQ, and the web.

Instead of building custom RAG pipelines, developers get pre-configured knowledge bases and agentic retrieval in a single API that just works—all while also respecting user permissions. The outcome is agents that understand more, respond better, and help your apps perform with greater precision and context.

Related: Build and control task forces of agents at cloud scale (read Asha’s blog).


Agents, simplified: Microsoft Agent Factory, powered by Azure

This week, we’re introducing Microsoft Agent Factory—a program that brings Work IQ, Fabric IQ, and Foundry IQ together to help organizations build agents with confidence.

With a single metered plan, organizations can use Microsoft Foundry and Copilot Studio to build with IQ. This means you can deploy agents anywhere, including Microsoft 365 Copilot, without upfront licensing or provisioning.

Related: Azure delivers large-scale cluster (read more).

Eligible organizations can also tap into hands-on support from top AI Forward Deployed Engineers and access tailored, role-based training to boost AI fluency across teams.

Confidently build agents with Microsoft Agent Factory

Modernize and extend your data for AI—wherever it lives

Great AI starts with great data. To succeed, organizations need a foundation that’s fast, flexible, and intelligent. This week, we introduced new capabilities to help make that possible.

Introducing Azure HorizonDB, a new fully managed PostgreSQL database service for faster, smarter apps

Now in preview, HorizonDB is a cloud database service built for speed, scale, and resilience. It runs up to three times faster than open-source PostgreSQL and grows to handle demanding storage requirements with up to 15 replicas running on auto-scaling shared storage.

Whether building new AI apps or modernizing core systems, HorizonDB delivers enterprise-grade security and natively integrated AI models to help you scale confidently and create smarter experiences.

Azure DocumentDB offers AI-ready data, open standards, and multi-cloud deployments

Now generally available, Azure DocumentDB is a fully managed NoSQL service built on open-source tech and designed for hybrid and multicloud flexibility. It supports advanced search and vector embeddings for more accurate results and is compatible with popular open-source MongoDB drivers and tools.

Related: sovereign cloud capabilities (read more).

SQL Server 2025 delivers AI innovation to one of the world’s most widely used databases

The decades-long foundation of innovation continues with the availability of SQL Server 2025. This release helps developers build modern, AI-powered apps using familiar T-SQL—securely and at scale.

With built-in tools for advanced search, near real-time insights via OneLake, and simplified data handling, businesses can finally unlock more value from the data they already have. SQL Server 2025 is a future-ready platform that combines performance, security, and AI to help teams move faster and work smarter.

Start exploring SQL Server 2025

Fabric goes further

SQL database and Cosmos DB in Fabric are also available this week. These databases are natively integrated into Fabric, so you can run transactional and NoSQL workloads side-by-side, all in one environment.

Get instant access to trusted data with bi-directional, zero-copy sharing through SAP BDC Connect for Fabric

Fabric now enables zero-copy data sharing with SAP Business Data Cloud, enabling customers to combine trusted business data with Fabric’s advanced analytics and AI—without duplication or added complexity. This breakthrough gives you instant access to trusted, business-ready insights for advanced analytics and AI.

We offer these world-class database options so you can build once and deploy at the edge, as platform as a service (PaaS), or even as software as a service (SaaS). And because our entire portfolio is either Fabric-connected or Fabric-native, Fabric serves as a unified hub for your entire data estate.

Related: Strengthen the databases at the heart of your data estate (read Arun’s blog).


Operate smarter and more securely with AI-powered control

We believe trust is the foundation of transformation. In an AI-powered world, businesses need confidence, control, and clarity. Azure provides that with built-in security, governance, and observability, so you can innovate boldly without compromise.

With capabilities that protect your data, keep your operations transparent, and make environments resilient, we announced updates this week to strengthen trust at every layer.

Unified observability helps keep agents secure, compliant, and under your control

One highlight from today’s announcements is the new Foundry Control Plane. It gives teams real-time security, lifecycle management, and visibility across agent platforms. Foundry Control Plane integrates signals from the entire Microsoft Cloud, including Agent 365 and the Microsoft security suite, so builders can optimize performance, apply agent controls, and maintain compliance.

New hosted agents and multi-agent workflows let agents collaborate across frameworks or clouds without sacrificing enterprise-grade visibility, governance, and identity controls. With Entra Agent ID, Defender runtime protection, and Purview data governance, you can scale AI responsibly with guardrails in place.

Azure Copilot: Turning cloud operations into intelligent collaboration

Azure Copilot is a new agentic interface that orchestrates specialized agents across the cloud management lifecycle. It embeds agents directly where you work—chat, console, or command line—for a personalized experience that connects action, context, and governance.

We are introducing new agents that simplify how you run on the cloud—from migration and deployment to operations and optimization—so each action aligns with enterprise policy. 

Migration and modernization agents deliver smarter, automated workflows, using AI-powered discovery to handle most of the heavy lifting. This shift moves IT teams and developers beyond repetitive classification work so they can focus on building new apps and agents that drive innovation.

Similarly, the deployment agent streamlines infrastructure planning with guidance rooted in Azure Well-Architected Framework best practices, while the operations and optimization agents accelerate issue resolution, improve resiliency, and uncover cost savings opportunities.

Learn more about these agents in the Azure Copilot blog

Secure code to runtime with AI-infused DevSecOps

Microsoft and GitHub are transforming app security with native integration for Microsoft Defender for Cloud and GitHub Advanced Security. Now in preview, this integration helps protect cloud-native applications across the full app lifecycle, from code to cloud.

Related: GitHub Universe 2025 (read more).

This enables developers and security teams to collaborate seamlessly, allowing organizations to stay within the tools they use every day.

Related: Streamline cloud operations and reimagine the datacenter (read Jeremy’s blog).

Build on an AI-ready foundation

Azure infrastructure is transforming how we deliver intelligence at scale—both for our own services and for customers building the next generation of applications.

At the center of this evolution are new AI datacenters, designed as “AI superfactories,” and silicon innovations that enable Azure to provide unmatched flexibility and performance across every AI scenario.

Related: The first AI superfactory (read more).

Azure Boost delivers speed and security for your most demanding workloads

We’re announcing our latest generation of Azure Boost with remote storage throughput of up to 20 GBps, up to 1 million remote storage IOPS, and network bandwidth of up to 400 Gbps. These advancements significantly improve performance for future Azure VM series. Azure Boost is a purpose-built subsystem that offloads virtualization processes from the hypervisor and host operating system, accelerating storage and network-intensive workloads.

Azure Cobalt 200: Redefining performance for the agentic era

Azure Cobalt 200 is our next-generation ARM-based server, designed to deliver efficiency, performance, and security for modern workloads. It’s built to handle AI and data-intensive applications while maintaining strong confidentiality and reliability standards.

By optimizing compute and networking at scale, Cobalt 200 helps you run your most critical workloads more cost-effectively and with greater resilience. It’s infrastructure designed for today’s demands—and ready for what’s next.

See what Azure Cobalt 200 has to offer

Keeping you at the frontier with continuous innovation

We’re delivering continuous innovation in AI, apps, data, security, and cloud. When you choose Azure, you get an intelligent cloud built on decades of experience and partnerships that push boundaries. And as we’ve just shown this week, the pace of innovation isn’t slowing down anytime soon.


Agentic enterprise, unlocked: Start now on Microsoft Azure

I hope Ignite—and our broader wave of innovation—sparked new ideas for you. The era of the agentic cloud isn’t on the horizon; it’s here right now. Azure brings together AI, data, and cloud capabilities to help you move faster, adapt smarter, and innovate confidently.

I invite you to imagine what’s possible—and consider these questions:

What challenges could you solve with a more connected, intelligent cloud foundation?

What could you build if your data, AI, and cloud worked seamlessly together?

How could your teams work differently with more time to innovate and less to maintain?

How can you stay ahead in a world where change is the only constant?

Want to go deeper into the news? Check out these blogs:

Microsoft Foundry: Scale innovation on a modular, interoperable, secure agent stack by Asha Sharma.

Azure Databases + Microsoft Fabric: Your unified and AI-powered data estate by Arun Ulagaratchagan.

Announcing Azure Copilot agents and AI infrastructure innovations by Jeremy Winter.

Ready to take the next step?

Explore technology methodologies and tools from real-world customer experiences with Azure Essentials.

Check out the latest announcements for software companies.

Visit the Microsoft Marketplace, the trusted source for cloud solutions, AI apps, and agents.

The post Azure at Microsoft Ignite 2025: All the intelligent cloud news explained appeared first on Microsoft Azure Blog.
Source: Azure

Microsoft Databases and Microsoft Fabric: Your unified and AI-powered data estate

In this article

Another leap forward across Microsoft Databases and Microsoft Fabric
Deploy the next generation of Microsoft Databases
Getting your data estate ready for AI with Microsoft Fabric
Mark your calendar for FabCon and SQLCon
Watch these announcements in action at Microsoft Ignite

As AI reshapes every industry, one truth remains constant: data is no longer just an asset—it’s your competitive edge. The pace of AI demands easy data access, faster insights, and the ability to iterate without friction. Yet many organizations are held back by fragmented data estates and legacy systems. Microsoft Fabric was designed to meet this moment—to unify your data, simplify your architecture, and accelerate your path to becoming an AI-led organization.

That mission is gaining traction at remarkable speed. Since Fabric launched two years ago, it has grown faster than any other data and analytics platform in the industry. More than 28,000 customers—including 80% of the Fortune 500—now rely on Fabric, and its ecosystem continues to expand as partners build solutions to solve the most complex data challenges.

Explore Azure announcements at Microsoft Ignite 2025

Another leap forward across Microsoft Databases and Microsoft Fabric

As Fabric becomes the central connection point for data, we’re strengthening the database layer at the heart of your data estate—ensuring you have the scale and performance required for AI.  

Microsoft already offers one of the industry’s most comprehensive database portfolios, and we’re expanding it even further—while deeply integrating these capabilities into Fabric. I’m excited to announce the general availability of SQL Server 2025, Azure DocumentDB, and SQL database and Cosmos DB in Fabric, along with the preview of our newest addition, Azure HorizonDB. With these new offerings, you have a world-class database option to build once and deploy at the edge, as platform as a service (PaaS), or even as software as a service (SaaS). And because our entire portfolio is either Fabric-connected or Fabric native, Fabric serves as a unified hub for your entire data estate. Below I’ll cover how these new databases are purpose-built to support your AI projects.  

Deploy the next generation of Microsoft Databases

Modernize your SQL estate with SQL Server 2025, now generally available

Microsoft has been shaping the SQL landscape for more than 35 years. Now, with the release of SQL Server 2025 into general availability, we’re introducing the next evolution—one that brings developer‑first AI capabilities at the edge, within the familiar T‑SQL experience. Smarter search combines advanced semantic intelligence with full‑text filtering to uncover richer insights from complex data. AI model management using model definitions in T-SQL allows seamless integration with popular AI services such as Microsoft Foundry.

Enterprise reliability and security remain best-in-class. Enhanced query performance, optimized locking, and improved failover help ensure higher concurrency and uptime for mission‑critical workloads. With strengthened credential management through Microsoft Entra ID via Azure Arc, SQL Server 2025 is secure by design. Your data is also instantly accessible for your AI and analytics in Microsoft OneLake with mirroring for SQL Server 2025 in Fabric, now also generally available.

SQL Server 2025 is the most significant release for SQL developers in a decade. And the response to our preview has been overwhelming, with 10,000 organizations participating, 100,000 databases already deployed, and a download rate two times higher than SQL Server 2022. If you want to join all those who’ve already adopted SQL Server 2025, download it today.

Azure DocumentDB: MongoDB-compatible, AI-ready, and built for hybrid and multi-cloud

We’re excited to announce Azure DocumentDB, a new service built on the open-source, MongoDB-compatible DocumentDB standard governed by the Linux Foundation. The first Azure managed service to support multi-cloud and hybrid NoSQL, Azure DocumentDB can run consistently across Azure, on-premises, and other clouds.

Azure DocumentDB gives you the freedom to embrace open source while achieving scale, security, and simplicity. It’s AI-ready, with capabilities like vector and hybrid search to deliver more relevant results. Instant autoscale meets demand, and independent compute and storage scaling keeps workloads efficient. Security and availability are standard, with Microsoft Entra ID integration, customer-managed encryption keys, 35-day backups included, and a 99.995% availability service-level agreement (SLA). And soon, enhanced full-text search will add features like fuzzy matching, proximity queries, and expanded language support, making it even easier to build intelligent, search-driven apps.
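Because the service is MongoDB-compatible, existing drivers should work largely unchanged. The sketch below uses the standard pymongo driver; the connection string and the embedding field are placeholders, and DocumentDB’s exact vector search syntax is covered in its documentation.

```python
# Hedged sketch: connecting to a MongoDB-compatible endpoint such as Azure
# DocumentDB with the standard pymongo driver. The connection string and the
# embedding field are placeholders; vector index and search details live in
# the service documentation.
from pymongo import MongoClient

client = MongoClient("mongodb://myaccount:password@myserver:27017/?tls=true")
products = client["retail"]["products"]

products.insert_one({
    "name": "Trail running shoe",
    "category": "footwear",
    "embedding": [0.11, 0.42, -0.07],  # produced by an embedding model
})

# Ordinary MongoDB-style queries work unchanged against a compatible service.
for doc in products.find({"category": "footwear"}).limit(5):
    print(doc["name"])
```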

Azure DocumentDB is now generally available, so you can try it today. You can also learn more about Azure DocumentDB and all the Azure Database news by reading the announcement blog from Shireesh Thota, Corporate Vice President of Azure Databases.

Azure HorizonDB: PostgreSQL designed for your mission-critical workloads

PostgreSQL has become the backbone of modern data solutions thanks to its rich ecosystem, extensibility, and open source foundation. Microsoft is proud to be the #1 PostgreSQL committer among hyperscalers, and we’re building on that leadership with Azure HorizonDB.

Now in early preview, Azure HorizonDB is a fully managed, PostgreSQL-compatible database service, built to handle the scale and performance required by the modern enterprise. It goes far beyond open source Postgres, with auto-scaling storage up to 128 TB, scale-out compute up to 3,072 vCores, <1 millisecond multi-zone commit latency, and enterprise security and compliance. Vector search is built-in, along with integrated AI model management and seamless connectivity to Microsoft Foundry so you can build modern AI apps. Combined with GitHub Copilot, Fabric, and Visual Studio Code integrations, it provides an intelligent and secure foundation for building and modernizing applications at any scale. To learn more about Azure HorizonDB, read our announcement blog.
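Since HorizonDB is PostgreSQL-compatible, a pgvector-style similarity query gives a feel for what built-in vector search enables. The sketch below is a rough illustration using psycopg2; the connection string, the table, and the assumption that pgvector’s distance operator applies are placeholders, not confirmed HorizonDB syntax.

```python
# Hedged sketch: a pgvector-style similarity search against a PostgreSQL-
# compatible database such as Azure HorizonDB. Connection details, the table,
# and the "<->" distance operator are assumptions for illustration; consult
# the HorizonDB documentation for its native vector search APIs.
import psycopg2

conn = psycopg2.connect("postgresql://user:password@myserver:5432/appdb")
cur = conn.cursor()

query_embedding = [0.12, -0.03, 0.88]  # produced by an embedding model

cur.execute(
    """
    SELECT id, title
    FROM documents
    ORDER BY embedding <-> %s::vector   -- nearest neighbours by vector distance
    LIMIT 5;
    """,
    (str(query_embedding),),
)
for doc_id, title in cur.fetchall():
    print(doc_id, title)

cur.close()
conn.close()
```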

Accelerate app development with Fabric SaaS Databases, now generally available

We are also releasing a new class of SaaS databases, both SQL database and Cosmos DB in Fabric, into general availability. Data developers now have access to world-class database engines within the same unified platform that powers analytics, AI, and business intelligence.

Fabric Databases are designed to streamline your application development. You can provision them in seconds, and they don’t require the usual granular configuration or deep database expertise. They provide enterprise-grade performance, are secure by default with features like cloud authentication, customer-managed keys, and database encryption, and come natively integrated into the Fabric platform, even using the same Fabric capacity units for billing.

With Fabric databases, developers now have the flexibility to build applications grounded in operational, transactional, and analytical data. Together, these offerings make Fabric a developer-first data platform that is streamlined, scalable, and ready for modern data applications.

Learn more by reading the announcement blog from Shireesh Thota, Corporate Vice President of Azure Databases.

All your databases connected into Fabric

We’re making it easier than ever to work with your entire Microsoft database portfolio in Fabric, giving you a single, unified place to manage and use all your data. Building on our existing mirroring support for Azure SQL Database and Azure SQL MI, we’re now announcing the general availability of mirroring for Azure Database for PostgreSQL, Azure Cosmos DB, and SQL Server versions 2016–2022 and 2025. With these databases mirrored directly into Fabric, you can eliminate traditional extract, transform, and load (ETL) pipelines and make your data instantly ready for analytics and AI.

Getting your data estate ready for AI with Microsoft Fabric

Choosing the right database is essential, but it’s just the beginning. The major opportunity lies in driving frontier transformation, where data becomes the foundation for an AI-native enterprise. We recommend focusing on three core steps:

Unifying your data estate to eliminate silos and complexity.

Creating semantic meaning so your data is ready for AI.

Empowering agents to act on insights and transform operations.

In this section, I’ll dive into the latest enhancements to Microsoft Fabric that help you achieve every step of your data journey. This includes expanded interoperability in OneLake with SAP, Salesforce, Azure Databricks, and Snowflake, the introduction of Fabric IQ—a new workload that adds semantic understanding—and enhanced agentic capabilities across Fabric to help you build richer, AI-powered data experiences.

This is the future of data, and it’s already within reach. With Fabric and our database innovations, Microsoft is helping organizations move seamlessly from insight to action—unlocking the full potential of your data and the AI built on top of it.

Unify your data estate with Microsoft OneLake

Microsoft OneLake unifies all your data—across clouds, on-premises, and beyond Microsoft—into a single data lake with zero-ETL capabilities like shortcuts and mirroring. Alongside the additional mirroring sources for Microsoft Databases, we’re also introducing the preview of shortcuts to SharePoint and OneDrive. This allows you to bring unstructured productivity data into OneLake without copying files or building ETL pipelines, making it easier to train agents and enrich your structured data.

Once connected to OneLake, your data becomes easily discoverable in the apps your teams use every day like Power BI, Teams, Excel, Copilot Studio, and Microsoft Foundry. Today, we are taking that a step further with native integration with Foundry IQ—the next generation of retrieval-augmented generation (RAG). Agents rely on context—Foundry IQ’s knowledge bases deliver high-value context to agents by simplifying access to multiple data sources and making connections across information. You can use the OneLake knowledge source in Foundry IQ to connect agents to multi-cloud sources like AWS S3, on-premises sources, and structured and unstructured data.

See how shortcuts and mirroring unify your data in OneLake and fuel the next generation of intelligent agents in Microsoft Foundry:

Expanding OneLake interoperability with leading data platforms

We are also seeing great momentum with dozens of partners outside of Microsoft deeply integrating with OneLake, including ClickHouse, Dremio, Confluent, EON, and many more. And now, we are thrilled to add new, deeper interoperability with SAP, Salesforce, Azure Databricks, and Snowflake.

First, we’re deepening interoperability with the systems organizations rely on most, SAP and Salesforce. With the launch of SAP Business Data Cloud Connect for Microsoft Fabric, customers can allow bidirectional, zero-copy data sharing between SAP Business Data Cloud (BDC) and Fabric. At the same time, we are working with Salesforce to integrate their data into Fabric using the same zero-copy approach, unlocking advanced analytics and AI capabilities without the overhead of traditional ETL.

We’re also strengthening interoperability with Azure Databricks and Snowflake so you can use a single copy of data across platforms. By the end of 2025, Azure Databricks will release, in preview, the ability to natively read data from OneLake through Unity Catalog, enabling seamless access without duplication or complex data movement. Looking ahead, Databricks will also add support for writing to and storing data directly in OneLake, allowing full two-way interoperability. Read more about this interoperability.

Our collaboration with Snowflake on bidirectional data access continues as well. We are introducing a new item in OneLake called a Snowflake Database and a new UI in Snowflake—both designed to allow OneLake to be the native storage solution for your Snowflake data. We’re also bringing Snowflake mirroring to general availability, allowing you to virtualize your external Snowflake-managed Iceberg tables in OneLake with shortcuts created and handled automatically. Together, these innovations let you run any Fabric workload—whether analytics, AI, or visualization—directly on your Snowflake-managed Iceberg tables.

Learn more about our Snowflake collaboration by reading our latest joint blog or by watching the following demo:

Finally, in close collaboration with dbt Labs, we are also excited to announce built-in support for their industry-leading data transformation capability. Now in preview, dbt jobs in Microsoft Fabric let you build, test, and orchestrate dbt workflows in your Fabric workspaces. Learn more in this blog.

Create semantic knowledge to fuel AI with Fabric IQ

As Frontier Firms train agents on their enterprise data, it’s become clear that quality and context matter more than data volume. Agents need business context across relationships, hierarchies, and meaning to turn raw data into actionable insight. That’s why we’re introducing Fabric IQ—a new workload designed to map your datasets to the real-world entities they represent, creating a shared semantic structure on top of your data.

The power of IQ lies in how it unifies disparate data types under a single, coherent framework. Built upon Power BI’s industry-leading, rich semantic model technology, IQ brings together analytical data, time-series telemetry, and geospatial information, all organized under a semantic framework of business entities and their relationships, properties, rules, and actions. You can then create operations agents, a new type of agent in Fabric, which can use this model to act as virtual team members, monitoring real-time data sources, identifying patterns, and taking proactive action. Instead of forcing your teams and even agents to think in terms of tables and schemas, IQ allows you to align data with how your organization operates.

In short, Fabric IQ is designed to model reality with data, so that every insight, prediction, and action is grounded in how your organization actually operates. You can learn more about IQ in the announcement blog from Yitzhak Kesselman, Corporate Vice President of Messaging and Real-Time Intelligence.

Empower data-rich agents with Copilot, Fabric data agents, and operations agents

As organizations scale their AI initiatives, the ability to connect intelligent agents with enterprise-grade data is becoming a critical differentiator. Fabric is making this possible with a set of integrated AI experiences: Copilot in Power BI helps you ask questions of your data, Fabric data agents allow deeper analysis, and the new Fabric operations agents let you monitor your data estate and take action in real time. These experiences can be used across Fabric or as foundational knowledge sources in industry-leading AI tools like Microsoft Foundry, Copilot Studio or even Microsoft 365 Copilot to power smarter, more data-rich AI experiences.

Beyond introducing operations agents as part of Fabric IQ, we’re also expanding what data agents and Copilot can do. Along with existing integration with Microsoft Foundry and Copilot Studio, Fabric data agents can now be embedded directly in Microsoft 365 Copilot. This lets business users (with the right permissions) access trusted knowledge from OneLake and transforms Microsoft 365 from a productivity suite into an intelligent insights platform.

They can also act as hosted Model Context Protocol (MCP) servers, making it easy to integrate with other applications and agents across the AI ecosystem. Finally, data agents can now reason across both structured and unstructured data. Thanks to an integration with Azure AI Search, data teams can add their existing unstructured data search endpoints as a source in data agents. Learn more about the Fabric data agent enhancements by reading the Fabric AI blog.

We’re also enhancing the standalone experience for Copilot in Power BI with a new search experience. Simply describe what you need, and Copilot will locate the relevant report, semantic model, or data agent and surface the right answers. This standalone experience is also coming to Power BI mobile so you can use it on the go.

Take a look at how you can apply all of these AI experiences together seamlessly:

In short, we’re redefining what it means to have an AI-powered data estate. With data agents, Copilot in Power BI, and operations agents in Fabric IQ, AI is now woven across Fabric. And with native integration to Microsoft Foundry and Copilot Studio, you can easily add Fabric agents as building blocks to create more intelligent, informed custom agents.

You can also see more innovation coming to the Fabric platform by reading the Fabric blog from Kim Manis, Corporate Vice President of the Fabric Platform, or by checking out the more technical Fabric November 2025 feature summary blog.

Mark your calendar for FabCon and SQLCon

We are excited to announce SQLCon 2026, which will happen at the same time and the same location as the Microsoft Fabric Community Conference (FabCon), happening March 16–20, 2026 in Atlanta, Georgia. By uniting the powerhouse SQL and Fabric communities, we’re giving data professionals everywhere a unique opportunity to master the latest innovations, share practical knowledge, and accelerate what’s possible with data and AI, all in one powerful week. Register for either conference and enjoy full access to both, with the flexibility to mix and match sessions, keynotes, and community events to fit your interests.

Register for FabCon and SQLCon now

Watch these announcements in action at Microsoft Ignite

If you’re interested in seeing these announcements live, I encourage you to join my Ignite session, “Innovation Session: Microsoft Fabric and Azure Databases – the data estate for AI” either in person or online at no cost. I’ll not only cover these major announcements but show you how they come together to help you create a unified, intelligent data foundation for AI.

You can also dive deeper into these announcements and so much more by watching the rest of the breakout sessions across Azure Data:

Tuesday, November 18

Modern data, modern apps: Innovation with Microsoft Databases

Microsoft Fabric: The data platform for the next AI frontier

Unifying your data journey: Migrating to Microsoft Fabric

Wednesday, November 19

Premier League’s data-driven fan engagement at scale

Create a semantic foundation for your AI agents in Microsoft Fabric

Move fast, save more with MongoDB-compatible workloads on DocumentDB

SQL database in Fabric: The unified database for AI apps and analytics

The blueprint for intelligent AI agents backed by PostgreSQL

Connect to and manage any data, anywhere in Microsoft OneLake

Unlock the power of Real-Time Intelligence in the era of AI

Empower Business Users with AI driven insights in Microsoft Fabric

Thursday, November 20

Real-time analytics and AI apps with Cosmos DB in Fabric

From interoperability to agents: Powering financial workflows with AI

How Fabric Data Agents Are Powering the Next Wave of AI

Explore Azure announcements at Microsoft Ignite 2025

The post Microsoft Databases and Microsoft Fabric: Your unified and AI-powered data estate appeared first on Microsoft Azure Blog.
Source: Azure