Azure networking updates on security, reliability, and high availability

Enabling the next wave of cloud transformation with Azure Networking

The cloud landscape is evolving at an unprecedented pace, driven by the exponential growth of AI workloads and the need for seamless, secure, and high-performance connectivity. Azure Network services stand at the forefront of this transformation, delivering the hyperscale infrastructure, intelligent services, and resilient architecture that empower organizations to innovate and scale with confidence.

Get the latest Azure Network services updates here

Azure’s global network is purpose-built to meet the demands of modern AI and cloud applications. With over 60 AI regions, 500,000+ miles of fiber, and more than 4 petabits per second (Pbps) of WAN capacity, Azure’s backbone is engineered for massive scale and reliability. The network has tripled its overall capacity since the end of FY24, now reaching 18 Pbps, ensuring that customers can run the most demanding AI and data workloads with uncompromising performance.

In this blog, I am excited to share our advancements in data center networking, which provide the core infrastructure to run AI training models at massive scale, as well as our latest product announcements that strengthen the resilience, security, scale, and capabilities needed to run cloud-native workloads with optimized performance and cost.

AI at the heart of the cloud

AI is not just a workload—it’s the engine driving the next generation of cloud systems. Azure’s network fabric is optimized for AI at every layer, supporting long-lasting, high-bandwidth flows for model training, low-latency intra-datacenter fabrics for GPU clusters, and secure, lossless traffic management. Azure’s architecture integrates InfiniBand and high-speed Ethernet to deliver ultra-fast, lossless data transfer between compute and storage, minimizing training times and maximizing efficiency. Azure’s network is built to support workloads with distributed GPU pools across datacenters and regions using a dedicated AI WAN. Distributed GPU clusters are connected to the services running in Azure regions via a dedicated and private connection that uses Azure Private Link and hardware-based VNet appliances running high-performance DPUs.

Azure Network services are designed to support users at every stage—from migrating on-premises workloads to the cloud, to modernizing applications with advanced services, to building cloud-native and AI-powered solutions. Whether it’s seamless VNet integration, ExpressRoute for private connectivity, or advanced container networking for Kubernetes, Azure provides the tools and services to connect, build, and secure the cloud of tomorrow.

Resilient by default

Resiliency is foundational to Azure Networking’s mission. We continue to execute on our goal of providing resiliency by default. Continuing the trend of offering zone-resilient SKUs of our gateways (ExpressRoute, VPN, and Application Gateway), the latest to join the list is Azure NAT Gateway. At Ignite 2025, we announced the public preview of Standard NAT Gateway V2, which offers a zone-redundant architecture for outbound connectivity at no additional cost. Zone-redundant NAT gateways automatically distribute traffic to available zones during a single-zone outage. Standard NAT Gateway V2 also supports 100 Gbps of total throughput, can handle 10 million packets per second, is IPv6-ready out of the gate, and provides traffic insights with flow logs. Read the NAT Gateway blog for more information.

Pushing the boundaries on security

We continue to advance our platform with security as the top mission, adhering to the principles of the Secure Future Initiative. Along these lines, we are happy to announce the following capabilities in preview or GA:

DNS Security Policy with Threat Intel: Now generally available, this feature provides smart protection with continuous updates, monitoring, and blocking of known malicious domains.

Private Link Direct Connect: Now in public preview, this extends Private Link connectivity to any routable private IP address, supporting disconnected VNets and external SaaS providers, with enhanced auditing and compliance support.

JWT Validation in Application Gateway: Application Gateway now supports JSON Web Token (JWT) validation in public preview, delivering native JWT validation at Layer 7 for web applications, APIs, and service-to-service (S2S) or machine-to-machine (M2M) communication. This feature shifts the token validation process from backend servers to the Application Gateway, improving performance and reducing complexity. This capability enables organizations to strengthen security without adding complexity, offering consistent, centralized, secure-by-default Layer 7 controls that allow teams to build and innovate faster while maintaining a trustworthy security posture.​
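Conceptually, the gateway takes over the per-request check that each backend would otherwise implement. The sketch below shows, with the Python standard library only, what such a validation involves—HS256 signing with an illustrative shared secret. This is not Application Gateway’s actual implementation (which typically validates tokens against an identity provider’s published keys); it simply illustrates the work being shifted off the backend.

```python
import base64
import hashlib
import hmac
import json

def _b64url_encode(raw: bytes) -> str:
    # JWT segments use base64url without padding.
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def _b64url_decode(part: str) -> bytes:
    # Restore the stripped padding before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def sign_hs256(claims: dict, secret: bytes) -> str:
    """Build a compact JWT: header.payload.signature."""
    header_b64 = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload_b64 = _b64url_encode(json.dumps(claims).encode())
    signing_input = f"{header_b64}.{payload_b64}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header_b64}.{payload_b64}.{_b64url_encode(sig)}"

def verify_hs256(token: str, secret: bytes):
    """Return the claims if the signature checks out, else None."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # not a three-part compact JWT
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        return None  # signature mismatch: reject before reaching the backend
    return json.loads(_b64url_decode(payload_b64))

# Demo: a valid token verifies; the wrong key is rejected.
token = sign_hs256({"sub": "svc-a", "scope": "orders.read"}, b"demo-secret")
claims = verify_hs256(token, b"demo-secret")
```

Doing this once at Layer 7, before traffic reaches the backend pool, is what removes the duplicated validation logic from every service behind the gateway.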

Forced tunneling for Virtual WAN Secure Hubs: Forced tunneling allows you to configure Azure Virtual WAN to inspect Internet-bound traffic with a security solution deployed in the Virtual WAN hub and route the inspected traffic to a designated next hop instead of directly to the Internet. You can route Internet traffic to an edge firewall connected to Virtual WAN via the default route learned from ExpressRoute, VPN, or SD-WAN, or to your preferred Network Virtual Appliance or SASE solution deployed in a spoke virtual network connected to Virtual WAN.

Providing ubiquitous scale

Scale is of utmost importance to customers looking to fine-tune their AI models or run low-latency inferencing for their AI/ML workloads. Enhanced VPN and ExpressRoute connectivity, and scalable private endpoints, further strengthen the platform’s reliability and future-readiness.

ExpressRoute 400G: Azure will support 400G ExpressRoute Direct ports in select locations starting in 2026. Customers can combine multiple of these ports to achieve multi-terabit throughput over a dedicated private connection to on-premises or remote GPU sites.

High throughput VPN Gateway: We are announcing GA of 3x faster VPN gateway connectivity with support for single TCP flow of 5Gbps and a total throughput of 20 Gbps with four tunnels.

High scale Private Link: We are also increasing the total number of private endpoints allowed in a virtual network to 5,000, with support for up to 20,000 cross-peered VNets.

Advanced traffic filtering for storage optimization in Azure Network Watcher: Targeted traffic logs help optimize storage costs, accelerate analysis, and simplify configuration and management.

Enhancing the experience of cloud native applications

Elasticity and the ability to scale seamlessly are essential capabilities that Azure customers who deploy containerized apps expect and rely on. AKS is an ideal platform for deploying and managing containerized applications that require high availability, scalability, and portability. Azure’s Advanced Container Networking Services is natively integrated with AKS and offered as a managed networking add-on for workloads that require high-performance networking, essential security, and pod-level observability.

We are happy to announce the product updates below in this space:

eBPF Host Routing in Advanced Container Networking Services for AKS: By embedding routing logic directly into the Linux kernel, this feature reduces latency and increases throughput for containerized applications.

Pod CIDR Expansion in Azure CNI Overlay for AKS: This new capability allows users to expand existing pod CIDR ranges, enhancing scalability and adaptability for large Kubernetes workloads without redeploying clusters.
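The value of in-place CIDR expansion comes down to address math: the pod CIDR caps how many pod IPs (and therefore nodes) an overlay cluster can hold. A small sketch using Python’s ipaddress module—the ranges and the per-node pod count are illustrative defaults for the arithmetic, not AKS’s allocator logic:

```python
import ipaddress

def pod_capacity(cidrs, pods_per_node=250):
    """Total pod addresses across overlay CIDRs and rough node headroom.

    Illustrative math only: real clusters carve per-node subnets and
    reserve some addresses, so usable capacity is somewhat lower.
    """
    total = sum(ipaddress.ip_network(c).num_addresses for c in cidrs)
    return total, total // pods_per_node

# A cluster that started with a /16, then added a second /16 in place.
before = pod_capacity(["10.244.0.0/16"])                     # 65,536 addresses
after = pod_capacity(["10.244.0.0/16", "10.245.0.0/16"])     # 131,072 addresses
```

Adding the second range doubles the address space without the alternative of tearing down and redeploying the cluster with a larger initial CIDR.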

WAF for Azure Application Gateway for Containers: Now generally available, this brings secure-by-design web application firewall capabilities to AKS, ensuring operational consistency and seamless policy management for containerized workloads.

Azure Bastion now enables secure, simplified access to private AKS clusters, reducing setup effort while maintaining isolation and lowering costs for users.

These innovations reflect Azure Networking’s commitment to delivering secure, scalable, and future-ready solutions for every stage of your cloud journey. For a full list of updates, visit the official Azure updates page.

Get started with Azure Networking

Azure Networking is more than infrastructure—it’s the catalyst for foundational digital transformation, empowering enterprises to harness the full potential of the cloud and AI. As organizations navigate their cloud journeys, Azure stands ready to connect, secure, and accelerate innovation at every step.

All updates in one spot
From Azure DNS to Virtual Network, stay informed on what's new with Azure Networking.

Get more information here

The post Azure networking updates on security, reliability, and high availability appeared first on Microsoft Azure Blog.
Source: Azure

A decade of open innovation: Celebrating 10 years of Microsoft and Red Hat partnership

Ten years ago, Microsoft and Red Hat began a partnership grounded in open source and enterprise cloud innovation. This year, we celebrate a decade of collaboration. Our journey together has helped customers accelerate hybrid cloud transformation, empower developers to innovate, and strengthen the open source community to drive modern application innovation​.

Accelerate modernization with Azure Red Hat OpenShift

The partnership that redefined enterprise cloud

In 2015, running mission-critical Linux workloads on Microsoft Azure was considered bold and visionary. Ten years later, our partnership with Red Hat has helped thousands of organizations worldwide accelerate digital transformation, set new benchmarks in open innovation, and advance the cloud-native movement for enterprises everywhere.

Together, we introduced Red Hat Enterprise Linux (RHEL) on Azure, setting a new precedent for innovation in the cloud. This collaboration deepened with the addition of Red Hat offerings, including Azure Red Hat OpenShift (ARO)—a fully managed, jointly engineered, and supported application platform that combines cloud scale with open source flexibility.

Red Hat and Microsoft’s global footprint and expanding customer base underline how an open approach and commitment to solving customer challenges drives adoption and innovation at scale.

Accomplishments and impact​

Azure Red Hat OpenShift and Red Hat’s automation platforms are powering digital transformation for global leaders across industries:

Leaders like Teranet have saved CA$5.6 million in capital expenditures and increased customer confidence by migrating mission-critical systems and OpenShift containers to Azure, unlocking unmatched scalability and automation.

For Bradesco, Azure Red Hat OpenShift is the secure, scalable backbone of its future-ready AI platform—unifying governance, powering more than 200 enterprise AI initiatives, and accelerating transformation across every business unit. By integrating Azure OpenAI and Power Platform, Bradesco delivers scalable, compliant innovation in banking services. 

Western Sydney University improved reliability and accelerated digital research for thousands of students and faculty with the security and flexibility of Red Hat Enterprise Linux on Azure. 

Symend launched new regions in weeks and powered personalized customer engagement by adopting Azure Red Hat OpenShift and Microsoft Azure AI, driving agility at enterprise scale.

Microsoft itself leverages Red Hat’s Ansible Automation Platform to streamline thousands of endpoints and modernize global network operations for business-critical infrastructure.

Together, Microsoft and Red Hat have advanced the industry with major accomplishments:

Deep integration for real-world flexibility: Red Hat solutions—like Azure Red Hat OpenShift, Red Hat Enterprise Linux, and Red Hat Ansible Automation Platform—are available across Azure, including in the Azure Marketplace, Azure Government, and expanding regions. Customers benefit from streamlined migrations, enhanced security features, and integrated support that simplifies modernization.​

Modernization and operational agility: OpenShift Virtualization and Confidential Containers on Azure Red Hat OpenShift enable customers to migrate and modernize legacy applications, run confidential workloads, and automate operations. These capabilities deliver scalability and secure management across hybrid environments.

Accelerating open source innovation: Together, the companies have made contributions to Kubernetes, containers, cloud monitoring, secure computing standards, and advancing open hybrid architectures for everyone.

Expanding developer and IT choice: By making RHEL available for Windows Subsystem for Linux and supporting hybrid container and virtual machine (VM) environments, Microsoft and Red Hat have given developers flexible, secure, and consistent tools for building anywhere.​

Enabling transformative AI adoption at scale: By leveraging Azure Red Hat OpenShift as a secure, governable foundation for managing multicloud OpenShift clusters, Bradesco streamlined operations across on-premises and cloud environments. This foundation, combined with Microsoft Foundry and Azure OpenAI Service, empowers Bradesco to deliver AI-powered banking solutions that scale securely and responsibly across millions of customers and business units. Symend also adopts Azure Red Hat OpenShift and Azure AI to power personalized customer engagement.

Flexible pricing: Azure Hybrid Benefit for RHEL is a key cost optimization feature that allows organizations to maximize existing Red Hat subscriptions when running workloads on Azure. By leveraging this benefit, customers can reduce licensing costs and improve ROI while maintaining enterprise-grade support and security. It highlights how Azure delivers both technical flexibility and financial efficiency for hybrid environments.

Additionally, customers can optimize costs with pay-as-you-go pricing, draw down Microsoft Azure Consumption Commitment (MACC), and receive a single bill for both OpenShift and Azure consumption with Azure Red Hat OpenShift.

Discover what these solutions can offer your business

Ten years of innovation: Microsoft and Red Hat partnership highlights

The partnership’s journey is marked by major shared milestones, summarized in the timeline graphic below:

November 2015: Partnership announcement launched a decade of innovation.

February 2016: Red Hat Enterprise Linux available in the Azure Marketplace with integrated support.

May 2019: Azure Red Hat OpenShift reached general availability (GA).

January 2020: Red Hat Enterprise Linux BYOS Gold images available in Azure.

May 2021: JBoss EAP offered as an Azure App Service.

January 2022: Ansible released as a managed app for automation.

February 2023: Azure Red Hat OpenShift for Azure Government reached GA.

May 2025: OpenShift Virtualization on Azure Red Hat OpenShift entered public preview, culminating at Ignite 2025 with GA.

See the attached timeline for more details about key moments and innovations.​

Ignite 2025: GA of OpenShift Virtualization and more on Azure Red Hat OpenShift

A defining moment of our tenth anniversary was the GA of OpenShift Virtualization on Azure Red Hat OpenShift, announced at Microsoft Ignite 2025. Organizations can now run VMs alongside containers on a single, secure platform, seamlessly bridging traditional virtualization with cloud-native innovation. Enterprises can modernize their VM workloads into Kubernetes-based environments, leveraging Azure’s performance and security with familiar OpenShift tools.

In addition, Microsoft Ignite 2025 marked the GA of confidential containers on Azure Red Hat OpenShift, delivering enhanced hardware-enforced security and isolation for container workloads. The event also marked the GA of Red Hat Enterprise Linux (RHEL) for HPC on Azure, offering a secure, high-performance platform tailored for scientific and parallel computing workloads.

Together, these announcements underscore our ongoing commitment to hybrid innovation, security, and helping customers to deploy a wide spectrum of enterprise workloads with agility and confidence.

Open at the core: What’s next for open source and enterprise cloud collaboration

Ten years of partnership have proven openness is more than a technological strategy—it is a culture of progress, trust, and shared innovation. Microsoft and Red Hat remain committed to pioneering the future of hybrid cloud and AI-powered applications, always keeping customer choice and reliability at the center.

We’re proud to partner with Red Hat not just to support our customers, but also in our shared interest in projects like the Linux Kernel, Kubernetes, and most recently llm-d. Together, we are committed to continuing contributions to the health and success of open source technologies and communities.

To our customers, partners, and open source communities: thank you for partnering with us on this journey. Together, we will continue to build the future of enterprise technology—openly, boldly, and collaboratively.
—Brendan Burns, Corporate Vice President, Microsoft Cloud Native

Explore OpenShift Virtualization on Azure

Explore more stories on hybrid cloud and open innovation

Unlock what is next: Microsoft at Red Hat Summit 2025​

Red Hat Powers Modern Virtualization on Microsoft Azure​

Red Hat Success Stories: Helping Microsoft with IT automation​

The best of both worlds: How Microsoft and Red Hat are revolutionizing enterprise IT​

Red Hat CEO and Microsoft EVP on The Evolution of Open Source​

GA of OpenShift Virtualization on Azure Red Hat OpenShift at Microsoft Ignite 2025

For Bradesco, Azure Red Hat OpenShift is the secure, scalable backbone of its future-ready AI platform

Ortec Finance launched a cloud-native risk management platform, accelerating service delivery for over 600 financial institutions

Rossmann transformed its retail operations and scaled hybrid cloud deployments to millions of customers

City of Vienna modernized citizen services with AI, improving availability and efficiency for thousands of residents​​ 

Porsche Informatik accelerated digital transformation across automotive logistics, optimizing mission-critical IT service 

The post A decade of open innovation: Celebrating 10 years of Microsoft and Red Hat partnership appeared first on Microsoft Azure Blog.
Source: Azure

Introducing Mistral Large 3 in Microsoft Foundry: Open, capable, and ready for production workloads

Enterprises today are embracing open-weight models for their transparency, flexibility, and ability to run across a broad range of deployment architectures. As the number of open models grows, the bar for reliability, instruction-following quality, multimodal reasoning, and long-context performance continues to rise. 

Today, we’re excited to announce that Mistral Large 3 is now available in Azure, bringing one of the strongest open-weight, Apache-licensed frontier models to the Microsoft Cloud. 

Mistral Large 3 delivers frontier-class capabilities with open-source flexibility, making it a powerful option for organizations building production assistants, retrieval-augmented applications, agentic systems, and multimodal workflows. 

See Mistral Large 3 in action

Enterprise-ready open models 

Mistral Large 3 sits in the leading tier of globally available open models alongside DeepSeek and the GPT OSS family. It is optimized not only for benchmark-chasing on abstract mathematical puzzles, but also for what customers need most in real enterprise applications: 

Highly reliable instruction following 

Long-context comprehension and retention 

Strong multimodal reasoning 

Stable, predictable performance across dialogue and applied reasoning 

According to Mistral, Mistral Large 3 shows fewer breakdowns and more consistent behavior than most peers, especially in multi-turn conversations and complex, extended inputs. It is designed for production, not just experimentation. 

Mistral 3 is optimized for real-world scenarios 

Instruction reliability you can depend on 

Many open models excel on benchmarks but struggle to follow instructions reliably when deployed in real workflows. Mistral Large 3 reverses that trend by demonstrating:

Precise adherence to task instructions 

Strong grounding in domain knowledge 

Low hallucination rates

Consistent formatting in structured outputs 

This makes it particularly effective for agents, automation flows, and business logic integration where reliability is non-negotiable. 

Exceptional long-context handling 

With extended context support, Mistral Large 3 processes, retains, and reasons over long documents, multi-step sequences, and sustained dialogues with notable stability. 

Enterprises can use it for: 

Retrieval-augmented generation 

Document understanding 

Multi-turn conversational systems 

Long-form summarization and synthesis 

Its ability to maintain coherence over long sessions reduces error cascades and produces more predictable outcomes. 

Multimodal and applied reasoning 

As organizations build increasingly multimodal workflows, interpreting text, images, diagrams, and structured data, Mistral Large 3 provides strong cross-modal understanding with balanced behavior. 

It excels in: 

Visual question answering 

Diagram or chart interpretation 

Multimodal retrieval and grounding 

Combined reasoning over text and image inputs 

Its stability makes it ideal for use cases where multimodal reasoning must be accurate, not approximate. 

Fully Open and Apache 2.0 licensed

Mistral Large 3 stands out as the strongest fully open model developed outside of China and offers something rare in the global ecosystem: 

Frontier-level capability, Apache 2.0 licensing, reproducible results, and worldwide availability without regional restrictions. 

Organizations can: 

Integrate the model in Microsoft Foundry 

Export weights for hybrid or on-premises deployment (subject to Mistral licensing) 

Run it in their own VPC, edge, or sovereign cloud environments 

Fine-tune or customize freely 

Use it for commercial applications without attribution requirements 

This combination of capability and openness is uniquely compelling for global enterprises requiring flexibility, transparency, and long-term vendor independence. 

Why Mistral Large 3 in Azure? 

Foundry provides an end-to-end workspace for model development, evaluation, and deployment, including unified governance, observability, and agent-ready tooling. 

With Mistral Large 3 in Foundry, customers gain: 

1. Unified access to top-performing models

Simplified and secure access to Mistral Large 3 and Mistral Document AI as first-party models available on Foundry alongside other open and commercial frontier models.

2. End-to-end evaluation and observability

Foundry delivers end-to-end evaluations, routing, and observability, enabling organizations to benchmark Mistral Large 3 across cost, latency, throughput, and quality, while monitoring performance and spending through a single set of dashboards and SDKs. Workloads can be intelligently routed to the most efficient model with no added integration effort. 

3. Enterprise-grade safety and governance 

Foundry applies Responsible AI safeguards, content filters, and auditability across all model interactions, ensuring safe, compliant deployments. 

4. Agent-first capabilities 

Mistral Large 3 supports tool calling, enabling agentic systems that can take action, automate workflows, and connect to enterprise data and APIs. This foundation supports customer service bots, research agents, automation flows, and enterprise copilots. 
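Tool calling generally follows the chat-completions pattern: the request declares the functions the model may invoke, and the model responds with a structured call for the client to execute. The sketch below builds such a request body; the deployment name and the get_order_status tool are hypothetical, and the exact Foundry endpoint and schema should be taken from the service documentation rather than this example.

```python
import json

def build_tool_call_request(user_message):
    """Assemble an illustrative chat-completions request with one tool."""
    return {
        "model": "mistral-large-3",  # hypothetical deployment name
        "messages": [{"role": "user", "content": user_message}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_order_status",  # hypothetical enterprise API
                "description": "Look up an order's shipping status by ID.",
                "parameters": {
                    "type": "object",
                    "properties": {"order_id": {"type": "string"}},
                    "required": ["order_id"],
                },
            },
        }],
        "tool_choice": "auto",  # let the model decide when to call the tool
    }

body = json.dumps(build_tool_call_request("Where is order 4711?"))
```

When the model elects to call the tool, the client runs the real lookup, appends the result as a tool message, and asks the model to continue—the loop that underpins agents and automation flows.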

Unlocking new use cases across industries 

Enterprise knowledge assistants: Long-context comprehension enables rich, grounded conversations across corporate knowledge bases. 

Document intelligence and retrieval-augmented pipelines: Stable reasoning and consistent formatting make it ideal for summarization, extraction, and multi-document synthesis. 

Developer agents and automation: Reliable instruction supports code refactoring, test generation, and workflow automation. 

Multimodal customer experiences: Combining image and text understanding enables richer digital assistant and customer support experiences. 

Pricing

Model: Mistral Large 3
Deployment type: Global Standard
Azure resource regions: West US 3
Price per 1M tokens: Input $0.50; Output $1.50
Availability: December 2, 2025 (public preview)

The future of open models on Azure 

With the addition of Mistral Large 3, Foundry continues to expand its position as the cloud platform with the widest selection of open and frontier models, unified under a single, enterprise-ready ecosystem. 

As organizations increasingly demand transparent, flexible, and globally accessible intelligence, Mistral Large 3 sets a new benchmark for what a production-ready open model can deliver. 

Try Mistral Large 3 today
Open, capable, multimodal, and long-context, Mistral Large 3 is now available in Microsoft Foundry.

Explore on Foundry

The post Introducing Mistral Large 3 in Microsoft Foundry: Open, capable, and ready for production workloads appeared first on Microsoft Azure Blog.
Source: Azure

New options for AI-powered innovation, resiliency, and control with Microsoft Azure

Organizations running mission‑critical workloads operate under stricter standards because system failures can often affect people and business operations at scale. They must ensure control, resilience, and operational autonomy such that innovation does not compromise governance. They need agility that also maintains continuity and preserves standards compliance, so they can get the most out of AI, scalable compute, and advanced analytics on their terms.

For example, manufacturing plants need assembly lines to continue to operate during network outages, and healthcare providers need the ability to access patient data during natural disasters. Similarly, government agencies and critical infrastructure operators must comply with regulations to keep systems autonomous and data within national borders. Additionally, regulations sometimes mandate that sensitive data remains stored and processed locally under local jurisdiction and personnel control.

These are exactly the challenges Azure’s adaptive cloud approach is designed to solve. We are extending Azure public regions with options that adapt to our customers’ evolving business requirements without forcing trade-offs. Microsoft’s strategy spans our public cloud, private cloud, and edge technology, giving customers a unified platform for operations, applications, and data with the right balance of flexibility and control. This approach empowers customers to use Azure services to innovate in environments under their full control, rather than maintaining separate, siloed, or legacy IT systems.

Meeting unique operational and data sovereignty needs

To address unique operational and data sovereignty needs, Microsoft introduced Azure Local—Azure infrastructure delivered in customers’ own datacenters or distributed locations. Azure Local comes with integrated compute, storage, and networking services and leverages Azure Arc to extend cloud services across the management, data, application, and security layers into hybrid and disconnected environments.

Learn more about what’s new in Azure Local

Over the past six months, our team has significantly expanded Azure Local’s capabilities to meet requirements across industries. We are seeing tremendous momentum from customers like GSK, a global biopharma leader extending cloud innovation and AI to the edge using Azure Local. GSK is enabling real-time data processing and AI inferencing across vaccine and medicine manufacturing and R&D labs worldwide. GSK joined our What’s new in Azure Local session at Ignite last month, offering insight into how they are using Azure Local.

We are also engaging deeply with public sector organizations to ensure essential services can run independent of internet connectivity when needed, from city administrations to defense and emergency response agencies.

To support these customers, we are enabling a growing set of Azure Local features and functionalities across Microsoft and partners, many of which have reached General Availability (GA) and preview.

Microsoft 365 Local (GA) delivers full productivity—email, collaboration, and communications—within a private cloud, ensuring data sovereignty and security for sovereign scenarios.

Next-gen NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs (GA) bring high-performance AI workloads on premises, enabling advanced analytics and generative AI without sacrificing compliance.

Azure Migrate support for Azure Local (GA) streamlines lift-and-shift migrations, reducing costs and accelerating time-to-value.

AD-less deployments, Rack-Aware Clustering, and external SAN storage integration (Preview) offer more options for identity, fault tolerance, and flexible storage strategies.


Multi-rack deployments (Preview) dramatically increase options for high scale, supporting larger IT estates in a single integrated Azure Local instance.

Disconnected operations (Preview) delivers a fully disconnected Azure Local experience for mission-critical environments where internet connectivity is infeasible or unwanted.

In short, Azure Local has rapidly evolved into a robust platform for operational sovereignty. It delivers Azure consistency for all workloads from core business apps to AI, in customers’ locations—from a few nodes on a factory floor up to thousands of nodes. These advancements reflect our commitment to meet customers where they are. 

Intelligent and connected physical operations

Azure’s adaptive cloud approach helps bring AI to physical operations. Our Azure IoT platform enables asset-intensive organizations to harness data from devices and sensors in a secure, scalable, and resilient fashion. When combined with Microsoft Fabric, customers get real-time insights from their operational data. This integration allows industries such as manufacturing, energy, and industrial operations to bridge digital and physical systems and adopt AI and automation in ways that align with their specific needs.

Demonstrating how the cloud, edge AI, and simulation can help orchestrate human-robotic collaboration on manufacturing product lines at Microsoft Ignite

Our approach to enable AI in physical operations environments follows two basic patterns. Azure IoT Operations enables device and sensor data from larger sites to be aggregated and processed close to its source for near real-time decision-making and reduced latency, streaming only relevant data to Fabric for more advanced analytics. Azure IoT Hub, on the other hand, enables device data to securely flow directly to Fabric with cloud-based identity and security. The integration across Microsoft Fabric and Azure IoT helps bridge Operational Technology (OT) and Information Technology (IT), delivering cost-effective, secure, and repeatable outcomes.

In the last six months, we introduced several enhancements to Azure IoT tailored for connected operations use cases:

In Azure IoT Hub, a new Microsoft-backed X.509 certificate management capability provides enhanced secure identity lifecycle control. Integration with Azure Device Registry streamlines identity, security, and policy management across fleets.

Enhanced Azure Device Registry capabilities improve asset registration, classification, and monitoring for operational insight while allowing Azure connected assets and devices to be used with any Azure service.

Azure Device Registry (ADR) acts as the unified control plane for managing both physical assets from Azure IoT Operations and devices from Azure IoT Hub

Azure IoT Operations’ latest release includes a number of new features. WebAssembly-powered data graphs enable fast, modular analytics for near-instant decision-making. Expanded connectors for OPC UA, ONVIF, REST/HTTP, SSE, and MQTT simplify interoperability. OpenTelemetry endpoint support enables smooth telemetry pipelines and monitoring. Advanced health monitoring provides deep visibility and control over operational assets.
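The "process close to the source" pattern described above can be illustrated with a few lines of edge-side logic: fold a raw sensor reading into a normalized telemetry envelope, decide locally whether it is alert-worthy, and forward only the relevant messages upstream. The field names and threshold here are assumptions for the sketch, not Azure IoT Operations' schema.

```python
import json
from datetime import datetime, timezone

def normalize(asset_id, tag, value, unit, threshold):
    """Fold a raw reading into one envelope with a locally computed alert flag.

    Illustrative shape only: field names and the threshold rule are
    assumptions, not the Azure IoT Operations message format.
    """
    return {
        "assetId": asset_id,
        "tag": tag,
        "value": value,
        "unit": unit,
        "alert": value > threshold,  # decided at the edge, near the source
        "ts": datetime.now(timezone.utc).isoformat(),
    }

msg = normalize("press-07", "spindle.temp", 82.5, "C", threshold=80.0)

# Only alert-worthy messages need to leave the site for cloud analytics.
payload = json.dumps(msg) if msg["alert"] else None
```

Filtering at the edge this way is what keeps latency low and limits the stream to Fabric to the data that actually drives a decision.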

In Microsoft Fabric, Fabric IQ and Digital Twin Builder turn raw telemetry into actionable context for simulation and intelligent feedback loops thanks to the use of models and knowledge graphs that bring clarity to streaming data.

Customers like Chevron and Husqvarna are scaling Azure IoT Operations from single-site pilots to multi-site rollouts, unlocking new use cases such as predictive maintenance and worker safety. These deployments demonstrate measurable impact and adaptive cloud architectures delivering business value. Our partner ecosystem is also growing with Siemens, Litmus, Rockwell Automation, and Sight Machine building on the platform.

Managing a distributed estate with unified Azure management and security

Organizations often grapple with the complexity of highly distributed IT estates—spanning on-premises datacenters, hundreds or sometimes thousands of edge sites, multiple public clouds, and countless devices. Managing and securing this sprawling ecosystem is challenging with traditional tools. A core promise of Azure’s adaptive cloud approach is helping to simplify centralized operations through a single, unified control plane via Azure Arc.

Over the last six months, we have delivered a wave of improvements to help customers manage distributed resources at scale, across heterogeneous environments, in a frictionless way. Key enhancements in our Azure Arc platform include:

Azure Arc site manager (Preview) organizes resources by physical site for easier monitoring and management of distributed operations.

New GCP connector (Preview) projects Google Cloud resources into Azure for a single pane of glass across Azure, AWS, and GCP.

The Multicloud connector enabled by Azure Arc is now in preview for GCP environments

Azure Machine Configuration (GA) enforces OS-level settings across Azure Arc-managed servers for compliance and security.

New Azure policies audit and configure the Windows Recovery Environment so that machines are ready for critical patches and can recover from unbootable states, such as those caused by faulty drivers.

New subscription-level enrollment of essential machine management services offers a simplified pricing model and a unified Azure user experience for hybrid environments, lowering the adoption barrier for legacy deployments.

Workload Identity (GA) lets Azure Arc-enabled Kubernetes clusters use Entra ID for secure resource access, eliminating local storage of secrets.

AKS Fleet Manager (Preview) integrates Azure Arc-connected clusters for centralized policy sync and deployments across hybrid environments.

Azure Key Vault Secret Store Extension (GA) allows Azure Arc-enabled Kubernetes clusters to cache secrets from Azure Key Vault, improving security and workload resiliency to intermittent network connectivity for hybrid workloads.

These enhancements underscore our belief that cloud management and cloud-native application development should not stop at the cloud. Whether an IT team is responsible for five datacenters or 5000 retail sites, Azure provides the tooling to manage that distributed environment and develop applications as one cohesive and adaptive cloud.

Azure’s adaptive cloud approach gives organizations the freedom to innovate on their terms while maintaining control. In an era defined by uncertainty, whether from cyber threats or geopolitical shifts, Azure empowers customers to modernize confidently without sacrificing resiliency or control.

Innovate on an adaptive cloud

The post New options for AI-powered innovation, resiliency, and control with Microsoft Azure appeared first on Microsoft Azure Blog.
Source: Azure

Announcing vLLM v0.12.0, Ministral 3 and DeepSeek-V3.2 for Docker Model Runner

At Docker, we are committed to making the AI development experience as seamless as possible. Today, we are thrilled to announce two major updates that bring state-of-the-art performance and frontier-class models directly to your fingertips: the immediate availability of Mistral AI’s Ministral 3 and DeepSeek-V3.2, alongside the release of vLLM v0.12.0 on Docker Model Runner.

Whether you are building high-throughput serving pipelines or experimenting with edge-optimized agents on your laptop, today’s updates are designed to accelerate your workflow.

Meet Ministral 3: Frontier Intelligence, Edge Optimized

While vLLM powers your production infrastructure, we know that development needs speed and efficiency right now. That’s why we are proud to add Mistral AI’s newest marvel, Ministral 3, to the Docker Model Runner library on Docker Hub.

Ministral 3 is Mistral AI’s premier edge model. It packs frontier-level reasoning and capabilities into a dense, efficient architecture designed specifically for local inference. It is perfect for:

Local RAG applications: Chat with your docs without data leaving your machine.

Agentic Workflows: Fast reasoning steps for complex function-calling agents.

Low-latency prototyping: Test ideas instantly without waiting for API calls.

DeepSeek-V3.2: The Open Reasoning Powerhouse

We are equally excited to introduce support for DeepSeek-V3.2. Known for pushing the boundaries of what open-weights models can achieve, the DeepSeek-V3 series has quickly become a favorite for developers requiring high-level reasoning and coding proficiency.

DeepSeek-V3.2 brings Mixture-of-Experts (MoE) architecture efficiency to your local environment, delivering performance that rivals top-tier closed models. It is the ideal choice for:

Complex Code Generation: Build and debug software with a model specialized in programming tasks.

Advanced Reasoning: Tackle complex logic puzzles, math problems, and multi-step instructions.

Data Analysis: Process and interpret structured data with high precision.

Run Them with One Command

With Docker Model Runner, you don’t need to worry about complex environment setups, Python dependencies, or weight downloads. We’ve packaged both models so you can get started immediately.

To run Ministral 3:

docker model run ai/ministral3

To run DeepSeek-V3.2:

docker model run ai/deepseek-v3.2-vllm

These commands automatically pull the model, set up the runtime, and drop you into an interactive chat session. You can also point your applications to them using our OpenAI-compatible local endpoint, making them drop-in replacements for your cloud API calls during development.
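
As a minimal sketch of that drop-in pattern, a standard OpenAI-style chat-completions request can be built with nothing but the Python standard library. Note that the port and path below are assumptions for illustration; check your Docker Model Runner configuration for the actual host-side endpoint address:

```python
import json
import urllib.request

# Assumed local endpoint; verify the host-side address in your
# Docker Model Runner settings before use.
BASE_URL = "http://localhost:12434/engines/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request for a local model."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("ai/ministral3", "Summarize this changelog in one sentence.")
# urllib.request.urlopen(req) would send it once the model is running locally.
```

Pointing BASE_URL back at a cloud provider’s endpoint is then the only change needed to move the same code between local development and production.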

vLLM v0.12.0: Faster, Leaner, and Ready for What’s Next

We are excited to highlight the release of vLLM v0.12.0. vLLM has quickly become the gold standard for high-throughput and memory-efficient LLM serving, and this latest version raises the bar again.

Version 0.12.0 brings critical enhancements to the engine, including:

Expanded Model Support: Day-0 support for the latest architecture innovations, ensuring you can run the newest open-weights models (like DeepSeek V3.2 and Ministral 3) the moment they drop.

Optimized Kernels: Significant latency reductions for inference on NVIDIA GPUs, making your containerized AI applications snappier than ever.

Enhanced PagedAttention: Further optimizations to memory management, allowing you to batch more requests and utilize your hardware to its full potential.
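
The paged memory-management idea behind PagedAttention is easiest to see in a toy allocator. The sketch below is a simplified illustration of the concept, not vLLM’s actual implementation: instead of reserving one contiguous maximum-length buffer per sequence, the KV cache is carved into fixed-size blocks that are handed out only as sequences grow.

```python
BLOCK_SIZE = 16  # tokens per cache block (illustrative value)

class PagedKVCache:
    """Toy paged KV cache: physical blocks are allocated lazily per sequence."""

    def __init__(self, num_blocks: int) -> None:
        self.free: list[int] = list(range(num_blocks))  # free physical block ids
        self.tables: dict[int, list[int]] = {}          # seq_id -> block table
        self.lengths: dict[int, int] = {}               # seq_id -> tokens written

    def append_token(self, seq_id: int) -> None:
        n = self.lengths.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:  # current block is full (or this is the first token)
            if not self.free:
                raise MemoryError("KV cache exhausted")
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        self.lengths[seq_id] = n + 1

cache = PagedKVCache(num_blocks=8)
for _ in range(20):  # a 20-token sequence needs ceil(20/16) = 2 blocks
    cache.append_token(seq_id=0)
for _ in range(5):   # a 5-token sequence needs just 1 block
    cache.append_token(seq_id=1)
```

Because short sequences hold only the blocks they actually use, far more concurrent requests fit in the same GPU memory, which is what lets the engine batch so aggressively.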

Why This Matters

The combination of Ministral 3, DeepSeek-V3.2, and vLLM v0.12.0 represents the maturity of the open AI ecosystem.

You now have access to a serving engine that maximizes data center performance, alongside a choice of models to fit your specific needs—whether you prioritize the edge-optimized speed of Ministral 3 or the deep reasoning power of DeepSeek-V3.2. All of this is easily accessible via Docker Model Runner.

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there’s always room to grow. We need your help to make this project the best it can be. To get involved, you can:

Star the repository: Show your support and help us gain visibility by starring the Docker Model Runner repo.

Contribute your ideas: Have an idea for a new feature or a bug fix? Create an issue to discuss it. Or fork the repository, make your changes, and submit a pull request. We’re excited to see what ideas you have!

Spread the word: Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We’re incredibly excited about this new chapter for Docker Model Runner, and we can’t wait to see what we can build together. Let’s get to work!
Source: https://blog.docker.com/feed/

Docker, JetBrains, and Zed: Building a Common Language for Agents and IDEs

As agents become capable enough to write and refactor code, they should work natively inside the environments developers work in: editors. 

That’s why JetBrains and Zed are co-developing ACP, the Agent Client Protocol. ACP gives agents and editors a shared language, so any agent can read context, take actions, and respond intelligently without bespoke wiring for every tool.

Why it matters

Every protocol that’s reshaped development (LSP for language tools, MCP for AI context) works the same way: define the standard once, unlock the ecosystem. ACP does this for the editor itself. Write an agent that speaks ACP, and it works in JetBrains, Zed, or anywhere else that adopts the protocol. 

Docker’s contribution

Docker’s cagent, an open-source multi-agent runtime, already supports ACP, alongside Claude Code, Codex CLI, and Gemini CLI. Agents built with cagent can run in any ACP-compatible IDE, like JetBrains, immediately.

We’ve also shipped Dynamic MCPs, letting agents discover and compose tools at runtime, surfaced directly in the editor where developers work.

What’s next

ACP is early, but the direction is clear. As agents embed deeper into workflows, the winners will be tools that interoperate. Open standards let everyone build on shared foundations instead of custom glue.

Docker will continue investing in ACP and standards that make development faster, more open, and more secure. When code, context, and automation converge, shared protocols ensure we move forward together.
Source: https://blog.docker.com/feed/

SES Mail Manager is now available in 10 additional AWS Regions, 27 total

Amazon SES announces that SES Mail Manager is now available in 10 additional commercial AWS Regions, expanding coverage from the previous 17. Mail Manager is now offered in every commercial Region where SES offers its core Outbound service.

SES Mail Manager allows customers to configure email routing and delivery mechanisms for their domains and provides a single view of email governance, risk, and compliance for all email workloads. Organizations commonly deploy Mail Manager to replace legacy hosted mail relays or to simplify integration with third-party mailbox providers and email security solutions. Mail Manager also supports onward delivery to WorkMail mailboxes, built-in archiving with search and export capabilities, and integration with third-party security add-ons directly within the console.

The 10 new Regions are Middle East (Bahrain), Asia Pacific (Jakarta), Africa (Cape Town), Middle East (UAE), Asia Pacific (Hyderabad), Asia Pacific (Malaysia), Europe (Milan), Israel (Tel Aviv), Canada West (Calgary), and Europe (Zurich). The full list of Mail Manager Region availability is here. To learn more, see the Amazon SES Mail Manager product page and the SES Mail Manager documentation. You can start using Mail Manager in these new Regions through the Amazon SES console.
Source: aws.amazon.com