Quarterly figures: Nvidia keeps growing despite fears of an AI bubble
Nvidia continues to grow well above market expectations. Despite debate about a possible AI bubble, the chipmaker remains firmly on a growth track. (Nvidia, AI)
Source: Golem
Older versions of the 7-Zip archiver contain a dangerous code-execution vulnerability that is already being exploited. Users should act. (Security vulnerability, Archiver)
Source: Golem
Black Friday week kicks off with smart Govee discounts. Five lighting systems are on sale now – ideal as decoration and as a gift. (Smart Home, Lighting)
Source: Golem
AI models need lengthy training and absurdly large amounts of data? Only as long as you don't apply a few tricks. A guide by Tim Elsner (AI, DIY – Do it yourself)
Source: Golem
The path from prototype to production for AI/ML workloads is rarely straightforward. As data pipelines expand and model complexity grows, teams can find themselves spending more time orchestrating distributed compute than building the intelligence that powers their products. Scaling from a laptop experiment to a production-grade workload still feels like reinventing the wheel. What if scaling AI workloads felt as natural as writing in Python itself? That’s the idea behind Ray, the open-source distributed computing framework born at UC Berkeley’s RISELab, and now, it’s coming to Azure in a whole new way.
Today, at Ray Summit, we announced a new partnership between Microsoft and Anyscale, the company founded by Ray’s creators, to bring Anyscale’s managed Ray service to Azure as a first-party offering in private preview. This new managed service will deliver the simplicity of Anyscale’s developer experience on top of Azure’s enterprise-grade Kubernetes infrastructure, making it possible to run distributed Python workloads with native integrations, unified governance, and streamlined operations, all inside your Azure subscription.
Ray: Open-Source Distributed Computing for Python
Ray reimagines distributed systems for the Python ecosystem, making it simple for developers to scale code from a single laptop to a large cluster with minimal changes. Instead of rewriting applications for distributed execution, Ray offers Pythonic APIs that allow functions and classes to be transformed into distributed tasks and actors without altering core logic. Its smart scheduling seamlessly orchestrates workloads across CPUs, GPUs, and heterogeneous environments, ensuring efficient resource utilization.
Developers can also build complete AI systems using Ray’s native libraries—Ray Train for distributed training, Ray Data for data processing, Ray Serve for model serving, and Ray Tune for hyperparameter optimization—all fully compatible with frameworks like PyTorch and TensorFlow. By abstracting away infrastructure complexity, Ray lets teams focus on model performance and innovation.
Anyscale: Enterprise Ray on Azure
Ray makes distributed computing accessible; Anyscale running on Azure takes it to the next level of enterprise readiness. At the heart of this offering is RayTurbo, Anyscale’s high-performance runtime for Ray. RayTurbo is designed to maximize cluster efficiency and accelerate Python workloads, enabling teams on Azure to:
Spin up Ray clusters in minutes, without Kubernetes expertise, directly from the Azure portal or CLI.
Dynamically allocate tasks across CPUs, GPUs, and heterogeneous nodes, ensuring efficient resource utilization and minimizing idle time.
Easily run large experiments quickly and cost-effectively with elastic scaling, GPU packing, and native support for Azure spot VMs.
Run reliably at production scale with automatic fault recovery, zero-downtime upgrades, and integrated observability.
Maintain control and governance; clusters run inside your Azure subscription, so data, models, and compute stay secure, with unified billing and compliance under Azure standards.
By combining Ray’s flexible APIs with Anyscale’s managed platform and RayTurbo’s performance, Python developers can move from prototype to production faster, with less operational overhead, and at cloud scale on Azure.
Kubernetes for Distributed Computing
Under the hood, Azure Kubernetes Service (AKS) powers this new managed offering, providing the infrastructure foundation for running Ray at production scale. AKS handles the complexity of orchestrating distributed workloads while delivering the scalability, resilience, and governance that enterprise AI applications require.
AKS delivers:
Dynamic resource orchestration: Automatically provision and scale clusters across CPUs, GPUs, and mixed configurations as demand shifts.
High availability: Self-healing nodes and failover keep workloads running without interruption.
Elastic scaling: Scale from development clusters to production deployments spanning hundreds of nodes.
Integrated Azure services: Native connections to Azure Monitor, Microsoft Entra ID, Blob Storage, and policy tools streamline governance across IT and data science teams.
AKS gives Ray and Anyscale a strong foundation—one that’s already trusted for enterprise workloads and ready to scale from small experiments to global deployments.
Enabling teams with Anyscale running on Azure
With this partnership, Microsoft and Anyscale are bringing together the best of open-source Ray, managed cloud infrastructure, and Kubernetes orchestration. By pairing Ray’s distributed computing platform for Python with Anyscale’s management capabilities and AKS’s robust orchestration, Azure customers gain flexibility in how they can scale AI workloads. Whether you want to start small with rapid experimentation or run mission-critical systems at global scale, this offering gives you the choice to adopt distributed computing without the complexity of building and managing infrastructure yourself.
You can leverage Ray’s open-source ecosystem, integrate with Anyscale’s managed experience, or combine both with Azure-native services, all within your subscription and governance model. This optionality means teams can choose the path that best fits their needs: prototype quickly, optimize for cost and performance, or standardize for enterprise compliance.
Together, Microsoft and Anyscale are removing operational barriers and giving developers more ways to innovate with Python on Azure, so they can move faster, scale smarter, and focus on delivering breakthroughs. Read the full release here.
Get started
Learn more about the private preview and how to request access at https://aka.ms/anyscale or subscribe to Anyscale in the Azure Marketplace.
The post Powering Distributed AI/ML at Scale with Azure and Anyscale appeared first on Microsoft Azure Blog.
Source: Azure
Across Europe and around the world, organizations today face a complex mix of regulatory mandates, heightened expectations for resilience, and relentless technological advancement. Sovereignty has become a core requirement for governments, public institutions, and enterprises seeking to harness the full power of the cloud while retaining control over their data and operations.
In June 2025, Microsoft CEO Satya Nadella announced a broad range of solutions to help meet these needs with the Microsoft Sovereign Cloud. We continue to adapt our sovereignty approach—innovating to meet customer needs and regulatory requirements within our Sovereign Public Cloud and Sovereign Private Cloud. Today, we are announcing a new wave of capabilities, building upon our digital sovereignty controls, to deliver advanced AI and scale, strengthened by our ecosystem of specialized in-country partner experts. With this announcement, expanded features and services include:
End-to-end AI data processing in Europe as part of the EU (European Union) Data Boundary.
Microsoft 365 Copilot expands in-country processing for Copilot Interactions to 15 countries. Learn more about this announcement in the Microsoft 365 blog.
Sovereign Landing Zones service expansion and disconnected operations for Microsoft Azure Local.
Microsoft 365 Local general availability.
Increased maximum scale of Azure Local, support for external SAN storage, and support for the latest NVIDIA GPUs.
Availability of our partner Digital Sovereignty specialization.
Discover Microsoft Sovereign Cloud
Microsoft Sovereign Cloud continuous innovation
Our latest offerings include new digital sovereignty capabilities across AI, security, and productivity, as well as a suite of upcoming features that will further address our customers’ sovereign cloud needs.
We recognize the need for continuous innovation and have already begun implementing many commitments. As of this month, we have already:
Established a European board of directors, composed of European nationals, exclusively overseeing all datacenter operations in compliance with European law, thereby putting Europe’s cloud infrastructure into the hands of Europeans.
Increased European datacenter capacity with recent launches in Austria and an upcoming launch in Belgium this month.
Embedded our digital resiliency commitments into all relevant government contracts.
Expanded open‑source investment through funding secure open-source software (OSS) projects and collaborations as well as publishing AI Access Principles that widen safe, responsible access to advanced AI, helping European developers, startups, and enterprises compete more effectively across the region.
Advanced our European Security Program by providing AI-powered intelligence and cybersecurity capacity building initiatives to strengthen Europe’s digital resilience against threat actors.
New Sovereign Public Cloud and AI capabilities
From the moment organizations begin designing their environments for sovereignty, they need end-to-end capabilities that help them embed compliance and control from the start.
EU Data Boundary includes AI data processing residency
We are delivering on our end-to-end AI data processing commitments, where data processed by AI services for EU customers remains within the European Union Data Boundary, except as otherwise directed by the customer.
This means all customer data, whether at rest or in transit, will be stored and processed exclusively in the EU. Our approach includes implementing rigorous controls and transparency measures that comply with EU customer requirements.
Expanding Microsoft 365 Copilot in-country data processing to 15 countries
Building upon decades of investment in global infrastructure and industry-leading data residency capabilities, Microsoft will now offer in-country data processing for customers’ Microsoft 365 Copilot interactions in 15 countries around the world.
By the end of 2025, Microsoft will offer customers in four countries—Australia, India, Japan, and the United Kingdom—the option to have Microsoft 365 Copilot interactions processed in-country. In 2026, we’ll expand the availability of in-country data processing for Microsoft 365 Copilot to customers in eleven more countries: Canada, Germany, Italy, Malaysia, Poland, South Africa, Spain, Sweden, Switzerland, the United Arab Emirates, and the United States.
Read the full announcement in the Microsoft 365 blog.
New Sovereign Landing Zone (SLZ) foundation
We are also introducing our refreshed Sovereign Landing Zone (SLZ), built on the market-proven landing zone foundation of Azure Landing Zone (ALZ).
The Sovereign Landing Zone is the recommended platform landing zone for customers wanting to implement sovereign controls in the Azure public cloud as part of the Sovereign Public Cloud.
The refresh of the Sovereign Landing Zone includes:
Updated Management Group hierarchy and supporting Azure Policy definitions, initiatives, and assignments to help implement the Sovereign Public Cloud controls (Level 1, 2, and 3).
Guidance on deployment placement of Azure Key Vault Managed HSM, if required as part of Level 2 Sovereign controls.
Deployment is simplified via the Azure landing zone accelerator and the Azure landing zone library. See Sovereign Landing Zone (SLZ) implementation options for further details.
Over the next few months, the Azure Policy definitions, initiatives, and assignments built into the Sovereign Landing Zone will continue to expand, helping our customers achieve sovereign controls in the Sovereign Public Cloud out of the box, faster.
By adopting Sovereign Landing Zones, customers can gain a prescriptive architecture that accelerates compliance with regional sovereignty requirements while reducing complexity in policy management. This approach also helps organizations confidently scale workloads across Azure regions without compromising on regulatory alignment or operational consistency.
Check out the new Sovereign Landing Zone (SLZ)
New Sovereign Private Cloud and AI capabilities
As organizations deepen their commitment to sovereignty, the ability to combine regulatory compliance with innovation becomes especially important. This next wave of enhancements helps bring together advanced AI capabilities and scalable infrastructure designed for both public and private environments.
Supporting thousands of AI models on Azure Local with NVIDIA RTX GPUs
As we advance our Sovereign Private Cloud capabilities with Azure Local, we are introducing a new Azure offering with the latest NVIDIA RTX Pro 6000 Blackwell Server Edition GPU, purpose-built for high-performance AI workloads in sovereign environments.
Designed to run over 1,000 models such as GPT OSS, DeepSeek-V3, Mistral NeMo, and Llama 4 Maverick, this GPU enables organizations to accelerate their AI initiatives directly within a sovereign private cloud environment. Customers gain the flexibility to experiment, innovate, and deploy advanced AI solutions with enhanced performance. This means organizations can pursue new AI-powered opportunities while helping ensure data protection and compliance.
In addition, customers can gain access to thousands of prebuilt and open-source AI models, ready to deploy for a wide range of scenarios—from generative AI and advanced analytics to real-time decision making. This combination empowers customers to experiment, innovate, and operationalize cutting edge AI solutions, while keeping governance front and center.
Increasing Azure Local scale to hundreds of servers
Until now, Azure Local has supported single clusters of up to 16 physical servers. With our latest updates, Azure Local can support hundreds of servers, opening new possibilities for organizations with large-scale or growing sovereign private cloud demands. This enhancement means customers can support bigger, more complex workloads, scale their infrastructure with ease, and respond to evolving business needs, all while aligning with the security and sovereignty required by European and global regulations.
SAN support on Azure Local
A key highlight of expanding the scale of our Sovereign Private Cloud is the introduction of Storage Area Network (SAN) support on Azure Local. With this update, customers can now securely connect their existing on-premises storage solutions from industry leaders to Azure Local. This integration empowers organizations to leverage their trusted storage investments while benefiting from cloud-native services, helping ensure data remains within their desired jurisdiction. European enterprises, in particular, gain flexibility in meeting local data residency requirements without compromising on performance or control.
Microsoft 365 Local: General availability of key workloads
Another milestone is the general availability of Microsoft 365 Local, helping bring core productivity workloads—Exchange Server, SharePoint Server, and Skype for Business Server—natively to Azure Local. Starting in December, customers can deploy these productivity workloads on Azure Local in a connected mode, with a disconnected option for complete isolation coming in early 2026. This approach combines familiar collaboration tools with Azure Local’s unified management and consistent Azure services and APIs, enabling organizations to maintain full operational control while aligning with stringent compliance and data residency requirements.
Disconnected operations: General availability
Microsoft’s Sovereign Private Cloud extends sovereignty principles into fully dedicated environments for organizations with strict compliance and control requirements, enabled by Azure Local. Azure Local enables government agencies, multinational enterprises, and regulated entities to maintain local control while still benefiting from the scale and innovation of Microsoft’s global cloud platform.
As part of Azure Local, we are introducing the upcoming general availability of disconnected operations, including the ability to manage multiple Azure Local clusters from the same local control plane. Available in early 2026, this capability allows customers to operate private cloud environments with a completely on-premises control plane, enabling organizations to operate securely and independently within their own dedicated environments. With disconnected operations, customers can retain business continuity and operational resilience, even in highly regulated or edge scenarios.
Learn more about Azure Local
New partner Digital Sovereignty specialization now available
We’re excited to officially launch the Digital Sovereignty specialization as part of the Microsoft AI Cloud Partner Program. This new specialization empowers partners to demonstrate deep expertise in delivering secure, compliant, and sovereign cloud solutions across Azure and Microsoft 365 platforms. By earning this designation, partners signal their ability to meet stringent data residency, privacy, and regulatory requirements—helping customers maintain control over their applications and data while driving innovation. The specialization includes rigorous audit criteria and provides benefits such as enhanced discoverability, specialized badging, and priority access to sovereign cloud opportunities.
Looking ahead: Advancing sovereignty through greater controls
The Microsoft Sovereign Cloud roadmap will provide additional capabilities designed to address evolving customer needs including:
Sovereign Public Cloud
Data Guardian: This upcoming capability helps provide transparency into operational sovereignty controls in our European public cloud environments. All remote access by Microsoft engineers to the systems that store and process your data in Europe will be routed to the EU, where an EU-based operator can monitor and, if necessary, halt these activities. All remote access by Microsoft engineers will be recorded in a tamper-evident log.
Sovereign Private Cloud
Enhanced change controls: We will introduce a set of configurable policies and approval workflows that will empower organizations with explicit oversight of any changes propagating from the cloud to the edge, strengthening governance and compliance.
Site-to-site disaster recovery: Azure Site Recovery in Azure Local will help with business continuity by keeping business apps and workloads running during outages.
Move from hybrid to fully disconnected: Azure Local will enable customers to transition workloads from hybrid to fully disconnected operations, providing them with flexibility for business continuity.
National Partner Clouds
National Partner Clouds are a core part of the Microsoft Sovereign Cloud strategy. They provide independently operated cloud environments that deliver Microsoft Azure and Microsoft 365 capabilities under local ownership and control.
Delos Cloud is designed to meet the German government’s BSI cloud platform requirements.
Bleu is designed to meet the French government’s (ANSSI) SecNumCloud requirements.
For many public sector organizations, ERP is a critical workload that requires modernization to cloud environments. SAP is planning to deploy its RISE with SAP offering on Microsoft Azure for both Bleu and Delos Cloud customers, in addition to support of RISE with SAP for customers using Microsoft Azure public cloud deployments.
Learn more about Microsoft’s sovereign solutions
Microsoft delivers unmatched sovereign solutions, offering a flexible public cloud environment, a private cloud that scales to your business needs, and national partner clouds designed to meet specific compliance requirements. Our commitment to continuous investment and innovation helps our customers meet sovereignty without compromise.
Discover what’s next in cloud innovation this November at Microsoft Ignite. Learn more and register today.
The post Microsoft strengthens sovereign cloud capabilities with new services appeared first on Microsoft Azure Blog.
Source: Azure
Across industries, organizations are moving from experimentation with AI to operationalizing it within business-critical workflows. At Microsoft, we are partnering with UiPath—a preferred enterprise agentic automation platform on Azure—to empower customers with integrated solutions that combine automation and AI at scale.
One example is Azure AI Foundry agents and UiPath agents (built on Azure AI Foundry) orchestrated by UiPath Maestro™ in business processes, ensuring AI insights seamlessly flow into automated business processes that deliver measurable value.
Get started with agents built on Azure AI Foundry
From insight to action: Managing incidental findings in healthcare
In healthcare, where every insight can influence a life, the ability of AI to connect information and trigger timely action is especially transformative. Incidental findings in radiology reports—unexpected abnormalities uncovered during imaging studies like CT or MRI scans—represent one of the most challenging and overlooked gaps in patient care.
As the volume of patient data grows, overlooked incidental findings outside the original imaging scope can delay care, raise costs, and increase liability risks.
This is where AI steps in. In this workflow, Azure AI Foundry agents and UiPath agents—orchestrated by UiPath Maestro™—work together to operationalize this process in healthcare:
Radiology reports are generated and finalized in existing systems.
UiPath medical record summarization (MRS) agents review reports, flagging incidental findings.
Azure AI Foundry imaging agents analyze historical PACS images and radiology data, comparing past results against the current incidental findings.
UiPath agents aggregate all results—including pertinent EMR history, prior imaging, and AI-generated imaging insights—into a comprehensive follow-up report.
The aggregated information is forwarded to the original ordering care provider in addition to the primary radiology report, eliminating the need to manually comb through the chart and prior exams for pertinent information. This creates both a secondary notification of the incidental finding and puts the summarized, relevant patient information in the clinicians’ hands, efficiently supporting the provision of safe, timely care.
UiPath Maestro™ orchestrates the business process, routing the consolidated packet to the ordering physician or specialist for next steps.
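The steps above can be sketched as a plain Python pipeline. This is a purely illustrative sketch: every function here is a hypothetical stand-in, not a real UiPath or Azure AI Foundry API, and the report text is fabricated sample data.

```python
# Hypothetical sketch of the incidental-findings workflow; none of these
# functions correspond to real UiPath or Azure AI Foundry APIs.

def summarize_report(report: str) -> list:
    # Stand-in for the UiPath medical record summarization (MRS) agent:
    # flag report lines that mention an incidental finding.
    return [line for line in report.splitlines() if "incidental" in line.lower()]

def analyze_prior_imaging(findings: list) -> dict:
    # Stand-in for the Azure AI Foundry imaging agent comparing prior
    # PACS studies against the current findings.
    return {f: "no prior comparison available" for f in findings}

def build_followup_packet(findings, imaging, history):
    # Stand-in for the UiPath aggregation agent producing the follow-up report.
    return {"findings": findings, "imaging": imaging, "history": history}

def route_to_provider(packet) -> str:
    # Stand-in for UiPath Maestro routing the packet to the ordering physician.
    return f"routed packet with {len(packet['findings'])} finding(s)"

report = "Lung fields clear.\nIncidental 8 mm thyroid nodule noted."
findings = summarize_report(report)
packet = build_followup_packet(findings, analyze_prior_imaging(findings), history=[])
print(route_to_provider(packet))  # routed packet with 1 finding(s)
```

The point of the sketch is the shape of the orchestration, not the implementations: each agent consumes the previous agent's output, and the orchestrator owns the routing at the end.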
The combination of UiPath and Azure AI Foundry agents turns siloed data into precise documentation that can be used to create actionable care pathways—accelerating clinical decision making, reducing physician workload, and improving patient outcomes.
This scenario is enabled by:
UiPath Maestro™: Orchestrates complex workflows that span multiple agents, systems, and data sources; and integrates natively with Azure AI Foundry and UiPath Agents, providing tracing capabilities that create business trust in underlying AI agents.
UiPath agents: Extract and summarize structured and unstructured data from EMRs, reports, and historical records.
Azure AI Foundry agents: Analyze medical images and generate AI-powered diagnostic insights with healthcare-specific models on Azure AI Foundry that provide secure data access through DICOMweb APIs and FHIR standards, ensuring compliance and scalability.
Together, this creates an agentic ecosystem on Azure where AI insights are not isolated but operationalized directly within end-to-end business processes.
Delivering customer value
By embedding AI into automated workflows, customers see tangible ROI:
Improved outcomes: Faster detection and follow-up on incidental findings.
Efficiency gains: Automated data collection, summarization, and reporting reduce manual physician workload.
Cost savings: Early detection helps prevent expensive downstream interventions.
Trust and compliance: Built on Azure and UiPath’s security, privacy, and healthcare data standards.
This is the promise of combining enterprise-grade automation with enterprise-ready AI.
What customers are saying about AI automation in healthcare
AI-powered automation is redefining how healthcare operates. At Mercy, we are beginning to partner with Microsoft and UiPath which will allow us to move beyond data silos and create intelligent workflows that truly serve patients. This is the future of care—where insights instantly translate into action.
Robin Spraul, Automation Manager-Automation Opt & Process Engineering, Mercy
Partnership perspectives
With UiPath Maestro and Azure AI Foundry working together, we’re helping enterprises operationalize AI across workflows that matter most. This is how we turn intelligence into impact.
Asha Sharma, Corporate Vice President, Azure AI Platform
Healthcare is just the beginning. UiPath and Microsoft are empowering organizations everywhere to unlock ROI by bringing automation and AI together in real-world business processes.
Graham Sheldon, Chief Product Officer, UiPath
Looking ahead
This healthcare scenario is one of many where UiPath and Azure AI Foundry are transforming operations. From finance to supply chain to customer service, organizations can now confidently scale AI-powered automation with UiPath Maestro™ on Azure.
At Microsoft, we believe AI is only as valuable as the outcomes it delivers. Together with UiPath, we are enabling enterprises to achieve those outcomes today.
The post Driving ROI with Azure AI Foundry and UiPath: Intelligent agents in real-world healthcare workflows appeared first on Microsoft Azure Blog.
Source: Azure
Since its launch at Microsoft Ignite 2019, Azure Ultra Disk has powered some of the world’s most demanding applications and workloads: from real-time financial trading and electronic health records to high-performance gaming and AI/ML services. Ultra Disk was a breakthrough in cloud block storage from the start, introducing independent configuration of capacity, IOPS, and throughput to deliver precise performance at scale. And we’ve continued to push boundaries ever since, committing to purposeful evolution: not just enhancing performance and resilience for mission-critical workloads, but ensuring every advancement addresses the real-world needs of our customers.
How to deploy and use an Ultra Disk
These advancements are not just theoretical; they’re driving real impact for customers operating on a global scale. One example is BlackRock, a global asset manager and technology provider, which leverages Azure Ultra Disk in conjunction with M-series virtual machines to power its mission-critical investment platform, Aladdin. For BlackRock, delivering ultra-low latency and exceptional reliability is paramount to swiftly adapting to dynamic market conditions and managing portfolios with agility and confidence.
Now that we’re on Azure, we have a springboard to unlock adoption of cloud-managed services to be able to engineer and operate at greater scale and adopt innovative technologies.
Randall Fradin, Head of Cloud Managed Services and Platform Engineering, BlackRock
Read the full customer story here.
Stories like BlackRock’s illustrate the power of Ultra Disk in action, and they inspire us to keep evolving. That’s why today we are excited to unveil a transformative update to Ultra Disk, designed to deliver superior speed, resilience, and cost efficiency for your most sensitive workloads. This major refresh introduces higher performance, greater flexibility to optimize cost, and instant access snapshots to support business continuity. With these advancements, Ultra Disk empowers organizations to accelerate operations, restore data rapidly, and scale with confidence—no matter the level of demand or criticality.
What’s new with Ultra Disk?
Ultra Disk delivers reliable performance with improved average, P99.9, and outlier latency
For mission-critical workloads, even brief disruptions can have significant impacts. That is why we have prioritized reducing tail latency at P99.9 and above. Our platform enhancements have resulted in an 80% reduction in both P99.9 and outlier latency, along with a 30% improvement in average latency. These advancements make Ultra Disk the best choice for highly I/O-intensive and latency-sensitive workloads, such as transaction logs for mission-critical applications.
If you are using local SSD or Write Accelerator to achieve lower latencies, we recommend exploring Ultra Disk as an alternative option for enhanced data persistency and greater flexibility for capacity and performance.
Optimize application cost without sacrificing performance
Our goal is to support workloads in maximizing both efficiency and performance. Ultra Disk’s latest provisioning model now offers more granular control over capacity and performance, enabling better cost management. Workloads on small disks can save up to 50%, while large disks can save up to 25%. These updated features are now available for both new and existing Ultra Disks:
| | Greater control | Previous |
|---|---|---|
| GiB capacity billing | Billed at 1 GiB granularity | Billed at tiers |
| Maximum IOPS per GiB | 1,000 IOPS per GiB | 300 IOPS per GiB |
| Minimum IOPS per disk | 100 IOPS | Higher of 100 or 1 IOPS per GiB |
| Minimum MB/s per disk | 1 MB/s | Higher of 1 MB/s or 4 KB/s per IOPS |
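Read as formulas, the IOPS limits above can be checked with a short sketch. Only the bounds stated in the table are modeled; real disks have further per-disk and per-VM caps not shown here.

```python
# Sketch of the Ultra Disk IOPS provisioning bounds, using only the
# limits stated in the table above (other platform caps are not modeled).

def iops_bounds(size_gib: int):
    """(min, max) provisionable IOPS under the new model:
    flat 100 IOPS minimum, up to 1,000 IOPS per GiB."""
    return 100, 1_000 * size_gib

def iops_bounds_previous(size_gib: int):
    """(min, max) under the previous model:
    minimum is the higher of 100 or 1 IOPS per GiB, up to 300 IOPS per GiB."""
    return max(100, size_gib), 300 * size_gib

# A 128 GiB disk can now be provisioned anywhere from 100 to 128,000 IOPS,
# versus 128 to 38,400 IOPS previously.
print(iops_bounds(128))           # (100, 128000)
print(iops_bounds_previous(128))  # (128, 38400)
```

The lower minimum is what enables the savings for capacity-heavy, IOPS-light disks described in the example that follows.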
A financial application operates its core database on Ultra Disk to serve market trend insights. This database stores a large amount of data but requires only moderate IOPS and throughput at low latency (no more than 12,500 GiB, 5,000 IOPS, and 200 MB/s). With more flexible control over capacity and performance, this deployment now saves 22% on its Ultra Disk spending, illustrated below using East US prices.
| Cost per month | Previous | Improved flexibility | Savings |
|---|---|---|---|
| 12,500 GiB | $1,594 for 13,312 GiB (rounded to next tier) | $1,497 for 12,500 GiB | -6% |
| 5,000 IOPS | $661 for 13,312 IOPS | $248 for 5,000 IOPS | -62% |
| 200 MB/s | $70 for 200 MB/s | No change | No change |
| Ultra Disk total | $2,324 | $1,815 | -22% |
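The savings figure can be reproduced from the table's line items. The dollar amounts below are copied from the table, not live Azure prices, and the summed previous total differs from the table's by about a dollar due to rounding of the individual line items.

```python
# Recompute the savings from the table's line items (illustrative East US
# figures copied from the table above; not live Azure prices).

previous = {"capacity": 1594, "iops": 661, "throughput": 70}   # billed at 13,312 GiB tier
improved = {"capacity": 1497, "iops": 248, "throughput": 70}   # billed at exactly 12,500 GiB

prev_total = sum(previous.values())   # 2,325 (the table shows $2,324 after rounding)
new_total = sum(improved.values())    # 1,815

savings = 1 - new_total / prev_total
print(f"monthly savings: {savings:.0%}")  # monthly savings: 22%
```

The IOPS line dominates: decoupling minimum IOPS from capacity lets this disk drop from 13,312 provisioned IOPS to the 5,000 it actually needs.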
Unlock high-performance workloads on Azure Boost and Ultra Disk
Ultra Disk and Azure Boost now enable a new class of high-performance workloads:
Memory Optimized Mbv3 VM – Standard_M416bs_v3 – GA, up to 550,000 IOPS and 10 GB/s
Azure Boost Ebdsv5 VM – GA, up to 400,000 IOPS and 10 GB/s
Stay tuned for the newest Azure Boost VM announcement at Ignite 2025 for unprecedented remote block storage performance.
These innovations empower customers to confidently operate high-demand applications such as large-scale SQL databases, electronic health record systems, and mission-critical enterprise platforms. Ultra Disk is equipped to address rigorous performance requirements, leveraging the latest advancements in virtual machine technology.
Instant Access Snapshot enables you to restore and run your business application immediately
We are thrilled to announce an exciting new experience: Instant Access Snapshot for Ultra and Premium SSD v2 disks, now available in public preview. With Instant Access, you can immediately use snapshots after creation to generate new disks, eliminating the wait time (often spanning numerous hours) traditionally required for background data copy before the snapshot is in a ready and usable state. Disks generated from these Instant Access Snapshots now hydrate up to 10x faster and experience minimal read latency impact during the hydration process. This advanced capability marks a significant leap forward in the public cloud market, enabling rapid recovery and replication scale-out for your organization in real time. No more lengthy restoration processes or costly downtime! Instant Access Snapshot empowers you to get back to business within moments, not hours.
Building on the foundation of security, flexibility, and efficiency for Ultra Disk
Let’s walk through a few other features recently released that will greatly enhance your high-performance workload experience on Ultra Disk.
Operate cost-efficiently by expanding your Ultra Disk capacity on the fly with live resize and by dynamically adjusting Ultra Disk performance to avoid overprovisioning.
Run your business application securely with encryption at host on Ultra Disk. Encryption at host encrypts your data starting at the VM host and then stores the encrypted data on the Ultra Disk.
Azure Site Recovery – Recover your VM applications with Ultra Disk seamlessly in another Azure region when your primary region is down.
Azure VM Backup – Back up your VM applications equipped with Ultra Disk easily and securely.
Azure Disk Backup – Back up a specific Ultra Disk that is critical to your business operations to lower your backup cost and enable more customized backup operations.
Third-party backup and disaster recovery support: We understand that you may have a preferred third-party service for your backup and disaster recovery procedures. Check out the third-party services here that now support Ultra Disk.
Migrate clustered applications that use SCSI Persistent Reservations to Azure as-is with Ultra Disk’s shared disk capability. Shared disks unlock easy migration and further cost optimization for your mission-critical clustered applications.
Getting started: Unlock new possibilities for your business
Join us on this journey to redefine what’s possible for your mission-critical business applications. With Azure Ultra Disk, you can experience the future of high-performance storage today, empowering your organization to move faster, recover instantly, and scale with confidence.
New to Ultra Disk? Start with our comprehensive documentation and learn how to deploy an Ultra Disk.
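For orientation, deploying an Ultra Disk from the command line looks roughly like the sketch below. This is a hedged example rather than the official quickstart: the resource group, disk, and VM names are placeholders, and you should confirm the flags, VM size, and regional/zone availability of Ultra Disk against the Azure CLI documentation before relying on it.

```shell
# Create an Ultra Disk with explicitly provisioned capacity and performance
# (all names and values below are illustrative placeholders).
az disk create \
  --resource-group myResourceGroup \
  --name myUltraDisk \
  --size-gb 1024 \
  --sku UltraSSD_LRS \
  --disk-iops-read-write 5000 \
  --disk-mbps-read-write 200 \
  --zone 1

# Create a VM with Ultra Disk compatibility enabled in the same zone,
# then attach the disk (flag spellings may vary by CLI version).
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --size Standard_E16bds_v5 \
  --ultra-ssd-enabled true \
  --zone 1
az vm disk attach \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name myUltraDisk
```

Because Ultra Disk decouples capacity, IOPS, and throughput, the `--disk-iops-read-write` and `--disk-mbps-read-write` values can later be adjusted independently of disk size to match the workload.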
Have questions or feedback? Reach out to our team at AzureDisks@microsoft.com.
Start using Azure Ultra Disk today
The post The new era of Azure Ultra Disk: Experience the next generation of mission-critical block storage appeared first on Microsoft Azure Blog.
Quelle: Azure
When we launched the Secure Future Initiative (SFI), our mission was clear: accelerate innovation, strengthen resilience, and lead the industry toward a safer digital future. Today, we’re sharing our latest progress report that reflects steady progress in every area and engineering pillar, underscoring our commitment to security above all else. We also highlight new innovations delivered to better protect customers, and share how we use some of those same capabilities to protect Microsoft. Through SFI, we have improved the security of our platforms and services and our ability to detect and respond to cyberthreats.
Read the latest Secure Future Initiative report
Fostering a security-first mindset
Engineering sentiment around security has improved by nine points since early 2024. To increase security awareness, 95% of employees have completed the latest training on guarding against AI-powered cyberattacks, which remains one of our highest-rated courses. Finally, we developed resources for employees and made them available to customers for the first time to improve security awareness.
Governance that scales globally
The Cybersecurity Governance Council now includes three additional Deputy Chief Information Security Officer (CISO) functions covering European regulations, internal operations, and engagement with our ecosystem of partners and suppliers. We launched the Microsoft European Security Program to deepen partnerships and better inform European governments about the cyberthreat landscape, and we are collaborating with industry partners to better align cybersecurity regulations, advance responsible state behavior in cyberspace, and build cybersecurity capacity through the Advancing Regional Cybersecurity Initiative in the global south. You can read more on our cybersecurity policy and diplomacy work.
Secure by Design, Secure by Default, Secure Operations
Microsoft Azure, Microsoft 365, Windows, Microsoft Surface, and Microsoft Security engineering teams continue to deliver innovations to better protect customers. Azure enforced secure defaults, expanded hardware-based trust, and updated security benchmarks to improve cloud security. Microsoft 365 introduced a dedicated AI Administrator role and enhanced agent lifecycle governance and data security transparency to give organizations more control and visibility. Windows and Surface advanced Zero Trust principles with expanded passkeys, automatic recovery capabilities, and memory-safe improvements to firmware and drivers. Microsoft Security introduced data security posture management for AI and evolved Microsoft Sentinel into an AI-first platform with data lake, graph, and Model Context Protocol capabilities.
Engineering progress that sets the benchmark
We’re making steady progress across all engineering pillars. Key achievements include enforcing phishing-resistant multifactor authentication (MFA) for 99.6% of Microsoft employees and devices, migrating higher-risk users to locked-down Azure Virtual Desktop environments, completing network device inventory and lifecycle management, and achieving 99.5% detection and remediation of live secrets in code. We’ve also deployed more than 50 new detections across Microsoft infrastructure with applicable detections to be added to Microsoft Defender and awarded $17 million to promote responsible vulnerability disclosure.
Actionable guidance
To help customers improve their security, we highlight 10 SFI patterns and practices customers can follow to reduce their risk. We also share additional best practices and guidance throughout the report. Customers can do a deeper assessment of their security posture by using our Zero Trust Workshops, which incorporate SFI-based assessments and actionable learnings to help customers on their own security journeys.
Security as the foundation of trust
Cybersecurity is no longer a feature—it’s the foundation of trust in a connected world.
With the equivalent of 35,000 engineers working full time on security, SFI remains the largest cybersecurity effort in digital history. Looking ahead, we will continue to prioritize the highest risks, accelerate delivery of security innovations, and harness AI to increase engineering efficiency and enable rapid anomaly detection and automated remediation.
The cyberthreat landscape will continue to evolve. Technology will continue to advance. And Microsoft will continue to prioritize security above all else. Our progress reflects a simple truth: trust is earned through action and accountability.
We are grateful for the partnership of our customers, industry peers, and security researchers. Together, we will innovate for a safer future.
Read our November 2025 progress report
Learn more with Microsoft Security
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
The post Securing our future: November 2025 progress report on Microsoft’s Secure Future Initiative appeared first on Microsoft Azure Blog.
Quelle: Azure
The era of AI agents has arrived, and with it, a new standard for how they connect to tools: the Model Context Protocol (MCP). MCP unlocks powerful, flexible workflows by letting agents tap into external tools and systems. But with thousands of MCP servers (including remote ones) now available, it’s easy to ask: Where do I even start?
I’m Oleg Šelajev, and I lead Developer Relations for AI products at Docker. I’ve been hands-on with MCP servers since the very beginning. In this post, we’ll cover what I consider to be the best MCP servers for boosting developer productivity, along with a simple, secure way to discover and run them using the Docker MCP Catalog and Toolkit.
Let’s get started.
Top MCP servers for developer productivity
Before we dive into specific servers, let’s first cover what developers should consider before incorporating these tools into their workflows. What makes an MCP server worth using?
From our perspective, the best MCP servers (regardless of your use case) should:
Come from verified, trusted sources to reduce MCP security risk.
Easily connect to existing tools and fit into your workflow.
Have real productivity payoff (whether it’s note-taking, fetching web content, or keeping your AI agents honest with additional context from trusted libraries).
With that in mind, here are six MCP servers we’d consider must-haves for developers looking to boost their everyday productivity.
1. Context7 – Enhancing AI coding accuracy
What it is: Context7 is a powerful MCP tool specifically designed to make AI agents better at coding.
How it’s used with Docker: Add the Context7 MCP server by clicking on the tile in Docker Toolkit or use the CLI command docker mcp server enable context7.
Why we use it: It solves the “AI hallucination” problem. When an agent is working on code, Context7 injects up-to-date, version-specific documentation and code examples directly into the prompt. This means the agent gets accurate information from the actual libraries we’re using, not from stale training data.
2. Obsidian – Smarter note-taking and project management
What it is: Obsidian is a powerful, local-first knowledge base and note-taking app.
How it’s used with Docker: While Obsidian itself is a desktop app, install the community plugin that enables the local REST API, then configure the MCP server to talk to that localhost endpoint.
Why we use it: It brings all the power of Obsidian to our AI assistants. Note-taking and accessing your prior memories has never been easier.
Here’s a video on how you can use it.
3. DuckDuckGo – Bringing search capabilities to coding agents
What it is: This is an MCP server for the DuckDuckGo search engine.
How it’s used with Docker: Simply enable the DuckDuckGo server in the MCP Toolkit or CLI.
Why we use it: It provides a secure and straightforward way for our AI agents to perform web searches and fetch content from URLs. If you’re using coding assistants like Claude Code or Gemini CLI, they know how to do this with built-in functionality, but if your entry point is something more custom, like an application with an AI component, giving it access to a reliable search engine is fantastic.
4. Docker Hub – Exploring the world’s largest artifact repository
What it is: An MCP server from Docker that allows your AI to fetch info from the largest artifact repository in the world!
How it’s used with Docker: You need to provide the personal access token and username that you use to connect to Docker Hub, but enabling this server in the MCP Toolkit is as easy as clicking a few buttons.
Why we use it: From working with Docker Hardened Images to checking the repositories and which versions of Docker images you can use, accessing Docker Hub gives AI the power to tap into the largest artifact repository with ease.
Here’s a video of updating Docker Hub repository info automatically from the GitHub repo.
The powerful duo: GitHub + Notion MCP servers – turning customer feedback into actionable dev tasks
Some tools are just better together. When it comes to empowering AI coding agents, GitHub and Notion make a particularly powerful pair. These two MCP servers unlock seamless access to your codebase and knowledge base, giving agents the ability to reason across both technical and product contexts.
Whether it’s triaging issues, scanning PRs, or turning customer feedback into dev tasks, this combo lets developer agents move fluidly between source code and team documentation, all with just a few simple setup steps in Docker’s MCP Toolkit.
Let’s break down how these two servers work, why we love them, and how you can start using them today.
5. GitHub-official
What it is: This refers to the official GitHub server, which allows AI agents to interact with GitHub repositories.
How it’s used with Docker: Enabled via the MCP Toolkit, this server connects your agent to GitHub for tasks like reading issues, checking PRs, or even writing code. Either use a personal access token or log in via OAuth.
Why we use it: GitHub is an essential tool in almost any developer’s toolbelt. From browsing issues in the repositories you work on to checking whether the errors you see are already documented in the repo, the GitHub MCP server gives AI coding agents incredible power!
6. Notion
What it is: Notion actually has two MCP servers in the catalog: a remote MCP server hosted by Notion itself, and a containerized version. Either way, if you’re using Notion, enabling AI to access your knowledge base has never been easier.
How it’s used with Docker: Enable the MCP server, provide an integration token, or log in via OAuth if you choose to use the remote server.
Why we use it: It provides an easy way to, for example, plow through customer feedback and create issues for developers. However you use it, plugging your knowledge base into AI leads to almost unlimited power.
Here’s a video where you can see how Notion and GitHub MCP servers work perfectly together.
Getting started with MCP servers made easy
While MCP unlocks powerful new workflows, it also introduces new complexities and security risks. How do developers manage all these new MCP servers? How do they ensure they’re configured correctly and, most importantly, securely?
This focus on a trusted, secure foundation is precisely why partners like E2B chose the Docker MCP Catalog as the provider for their secure AI agent sandboxes. The MCP Catalog now hosts more than 270 MCP servers, including popular remote servers.
The security risks aren’t theoretical; our own “MCP Horror Stories” blog series documents attacks that are already happening. The latest episode in the series, the “Local Host Breach” (CVE-2025-49596), details how vulnerabilities in this new ecosystem can lead to full system compromise. The MCP Toolkit directly combats these threats with features like container isolation, signed image verification from the catalog, and an intelligent gateway that can intercept and block malicious requests before they ever reach your tools.
This is where the Docker MCP Toolkit comes in. It provides a comprehensive solution that gives you:
Server Isolation: Each MCP server runs in its own sandboxed container, preventing a breach in one tool from compromising your host machine or other services.
Convenient Configuration: The Toolkit offers a central place to configure all your servers, manage tokens, and handle OAuth flows, dramatically simplifying setup and maintenance.
Advanced Security: It’s designed to overcome the most common and dangerous attacks against MCP.
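In practice, wiring up the servers discussed in this post is a handful of CLI calls. A minimal sketch, assuming Docker Desktop with the MCP Toolkit: only the `context7` identifier is shown verbatim in this post; the other server names below are assumptions, so check the MCP Catalog (or the Toolkit UI) for the exact identifiers.

```shell
# Enable the servers covered above (identifiers other than context7 are
# assumptions -- verify them against the Docker MCP Catalog).
docker mcp server enable context7
docker mcp server enable duckduckgo
docker mcp server enable github-official
docker mcp server enable notion
```

Servers that need credentials, such as the Docker Hub personal access token, the GitHub token, or the Notion integration token, are configured in the Toolkit or via OAuth as described in the sections above; each server then runs in its own isolated container.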
Figure 1: Docker Desktop UI showing MCP Toolkit with enabled servers (Context7, DuckDuckGo, GitHub, Notion, Docker Hub).
Find MCP servers that work best for you
This list, from private knowledge bases like Obsidian to global repositories like Docker Hub and essential tools like GitHub, is just a glimpse of what’s possible when you securely and reliably connect your AI agents to the tools you use every day.
The Docker MCP Toolkit is your central hub for this new ecosystem. It provides the essential isolation, configuration, and security to experiment and build with confidence, knowing you’re protected from real threats.
This is just our list of favorites, but the ecosystem is growing every day.
We invite you to explore the full Docker MCP Catalog to discover all the available servers that can supercharge your AI workflows. Get started with the Docker MCP Toolkit today and take control of your AI tool interactions.
We also want to hear from you. Explore the Docker MCP Catalog and tell us: what are your must-have MCP servers? What amazing tool combinations have you built? Let us know in our community channel!
Learn more
Try the MCP Toolkit by launching Docker Desktop (requires version 4.48 or newer to launch the MCP Toolkit automatically).
Join our community Slack channel to let us know your must-have MCP servers.
Discover how Docker is powering agentic development.
Quelle: https://blog.docker.com/feed/