Microsoft named a Leader in the 2025 Gartner® Magic Quadrant™ for Distributed Hybrid Infrastructure

We’re proud that Microsoft has once again been named a Leader in the 2025 Gartner® Magic Quadrant™ for Distributed Hybrid Infrastructure. This is the third year in a row we’ve been recognized by Gartner, and we feel it reflects the impact we’re delivering by helping organizations run workloads seamlessly across hybrid, edge, multicloud, and sovereign environments with Azure. 

See the full 2025 Gartner Magic Quadrant for Distributed Hybrid Infrastructure report here

Azure’s adaptive cloud: built on Azure Arc and Azure Local

The reality for many organizations today is that not every workload belongs in a hyperscale datacenter. Azure’s adaptive cloud approach recognizes that reality and brings the cloud model to every environment through two core technologies.

Azure Arc brings Azure management to any environment, including datacenters, edge, and multicloud. It unifies on-premises and non-Azure resources through Azure Resource Manager and enables services such as Azure Kubernetes Service (AKS), Microsoft Defender for Cloud, Azure IoT Operations, and Azure AI Video Indexer.

Azure Local leverages Azure Arc to bring Azure services and management to customer-owned environments, enabling customers to run cloud-native workloads such as virtual machines and Arc-enabled AKS locally. Azure Local also supports Microsoft’s Sovereign Private Cloud strategy, enabling isolated, compliance-driven operations with Azure consistency.

Together, Azure Arc and Azure Local give customers unified governance, security, and management across distributed environments. They help organizations innovate faster, stay secure, and scale with confidence.

Real-world impact across industries

Azure’s adaptive cloud approach is especially valuable for organizations that need to balance local operations with the flexibility of the cloud. Manufacturers and industrial companies are deploying AI models at the edge to improve safety, quality, and automation while managing them centrally in Azure. Regulated sectors such as financial services, healthcare, and government maintain compliance and data residency as they modernize with Azure. Enterprises with distributed sites are using a single management plane to connect operations, improve efficiency, and gain real-time insight across locations.

Across these industries, we’re seeing customers embrace Azure’s adaptive cloud approach enabled by Azure Arc:

Publix Employees Federal Credit Union (PEFCU) consolidated operations using Azure Arc and Azure Local, reducing disaster recovery time to under 10 minutes per VM and freeing engineers to focus on innovation.

Delta Dental of California modernized its core payor system with containerized workloads, improving performance, uptime, and compliance.

CDW migrated 800 virtual machines to Azure Local, doubling SQL performance and strengthening governance and security with Azure Policy and Microsoft Defender for Cloud.

Coles and Emirates Global Aluminium (EGA) continue to scale operations with Azure Local, running GPU-enabled AI workloads that enhance analytics, customer experiences, and manufacturing insights.

These organizations are adopting Azure’s adaptive cloud approach to run workloads where they are needed most while unifying operations, security, and innovation across every environment.

Building the future together 

Recognition from Gartner is an important milestone to us, but what matters most is what our customers are achieving with Azure every day. We’re continuing to invest in making Azure more adaptive, more secure, and more capable across every environment so organizations can innovate on their own terms. We’re grateful for the trust our customers place in us and inspired by how they are shaping the future of cloud infrastructure right alongside us.

Learn more 

Learn how your organization can accelerate innovation with Azure Local and Azure Arc.

Explore Sovereign Private Cloud with Azure Local and Microsoft 365 Local.

Join us at Microsoft Ignite in San Francisco this November to explore the latest in enterprise innovation.

Gartner, Magic Quadrant for Distributed Hybrid Infrastructure, Julia Palmer, Jeffrey Hewitt, Dennis Smith, Tony Harvey, Elaine Zhang, 8 September 2025.

GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 
The post Microsoft named a Leader in the 2025 Gartner® Magic Quadrant™ for Distributed Hybrid Infrastructure appeared first on Microsoft Azure Blog.
Source: Azure

Fully managed cloud-to-cloud transfers with Azure Storage Mover

Cloud strategies are evolving rapidly. Many organizations are embracing multicloud environments, while others are looking to consolidate and migrate workloads to a single trusted platform. At Microsoft, we recognize that secure, reliable, and efficient data movement across platforms is critical. Today, we are announcing the General Availability of cloud-to-cloud migration from AWS S3 to Azure Blob Storage using Azure Storage Mover.

Since its launch in 2023, Azure Storage Mover has been simplifying on-premises data migrations for organizations of all sizes, making large-scale data transfers to Azure faster, more secure, and less complex. For those who are unfamiliar, Azure Storage Mover is a free, fully managed migration service designed to move data from file shares and NAS storage into Azure object and file storage with minimal disruption. It enables efficient, scalable, and reliable data transfers via Azure’s centralized orchestration, while maintaining file metadata and supporting both one-time migrations and sync tasks without requiring custom scripts or third-party tools.

Ready to simplify migration?

Azure Storage Mover’s new cloud-to-cloud migration capability enables direct transfers from AWS S3 to Azure Blob Storage. Unlike on-premises migrations, cloud-to-cloud transfers do not require a self-hosted agent, simplifying setup and eliminating additional compute requirements. This approach reduces infrastructure costs and migration overhead: no agents, no scripts, fully managed. Key capabilities include:

Direct parallel transfers: Storage Mover supports high-speed, server-to-server parallel file transfers, optimizing migration performance for large datasets.

Integrated automation: Users can leverage the Azure portal/CLI for automated workflows and repeatable job tracking while ensuring metadata preservation—removing reliance on scripts or third-party tools. 

Secure transfers: All data transfers are encrypted in transit, and the service integrates with Azure’s security and compliance frameworks, including Microsoft Entra ID (formerly Azure Active Directory), the multicloud Arc connector, and role-based access control (RBAC).

Incremental sync: After initial migration, Storage Mover can perform incremental syncs, transferring only changed files to minimize downtime and ensure data consistency. 

Monitoring and observability: Migration progress can be tracked via the Azure portal, CLI, or REST API, and is integrated with Azure Monitor and Log Analytics for detailed telemetry and error reporting. 
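The incremental sync capability above boils down to a simple idea: compare each source file against its copy at the destination and transfer only what is new or changed. Here is a minimal, self-contained Python sketch of that logic; it is illustrative only, not how Storage Mover is implemented, and the function names are invented for this example:

```python
import os
import shutil

def changed_files(src_dir, dst_dir):
    """Yield relative paths under src_dir that are new or modified compared
    to dst_dir, using size + modification time as a cheap change signal."""
    for root, _, files in os.walk(src_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, src_dir)
            dst = os.path.join(dst_dir, rel)
            if not os.path.exists(dst):
                yield rel  # new file, never copied before
            else:
                s, d = os.stat(src), os.stat(dst)
                if s.st_size != d.st_size or s.st_mtime > d.st_mtime:
                    yield rel  # file changed since the last sync

def incremental_sync(src_dir, dst_dir):
    """Copy only new or changed files; copy2 preserves file metadata
    (timestamps), so unchanged files are skipped on the next pass."""
    copied = []
    for rel in changed_files(src_dir, dst_dir):
        dst = os.path.join(dst_dir, rel)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(os.path.join(src_dir, rel), dst)
        copied.append(rel)
    return copied
```

Because metadata is preserved on copy, a second pass over an unchanged tree transfers nothing, which is the property that minimizes downtime during cutover.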

Real-world impact: How customers are accelerating migration 

During public preview, customers already realized the benefits of cloud-to-cloud migration, transferring petabytes (PBs) of data from AWS to Azure. For example, one of our customers, Syncro, in partnership with SOUTHWORKS, migrated hundreds of terabytes from AWS S3 to Azure Blob with minimal downtime. The phased migration approach enabled by Storage Mover allowed for complex orchestration, preserved data integrity, and provided immediate access to Azure’s analytics and AI capabilities.

Syncro, a leading provider of IT management solutions, faced the challenge of migrating hundreds of terabytes of data from AWS S3 to Azure Blob. Using Azure Storage Mover, SOUTHWORKS completed the pilot migration, transferring 60 TB in the first phase and planning ongoing migrations for approximately 120 TB. The solution enabled complex, phased migrations, maintained data integrity, and leveraged Azure’s advanced analytics and AI capabilities immediately upon arrival.
— Johnny Halife, CTO, SOUTHWORKS

Beyond simplifying migration, Azure Storage Mover opens the door to a broader ecosystem of innovation, helping organizations maximize the value of their data once it’s in Azure. Migrating AI training data into Azure Blob Storage allows organizations to quickly leverage advanced artificial intelligence and machine learning capabilities. With immediate access to powerful Azure tools, teams can develop, train, and deploy models at scale to unlock innovation and accelerate insights across the Azure ecosystem.

Additional product updates 

We’re also excited to announce several new service capabilities for Azure Storage Mover, including support for additional source and target pairs:

Migration from on-premises SMB shares to Azure Object storage.

Support for migration from on-premises NFS shares to Azure Files NFS 4.1 share.

Planning a data migration to Azure Storage? Discover recommended storage migration solutions using Microsoft Copilot in Azure.

Coming soon: Storage Mover availability in Azure US Government regions. 

Ready to modernize your data estate? 

Azure Storage Mover’s cloud-to-cloud migration capabilities are now available to simplify your multicloud journey and accelerate digital transformation. Get started today! 

Innovation spotlight: How 3 customers are driving change with migration to Azure SQL

Organizations are under constant pressure to modernize their estates. Legacy infrastructure, manual processes, and growing volumes of siloed data make it harder to deliver the performance, security, and agility that today’s competitive landscape demands.

Continue reading to learn how three organizations—Thomson Reuters, Hexure, and CallRevu—each jumpstarted their transformation by migrating on-premises workloads to Microsoft Azure. As a result, all three improved operational efficiency and accelerated AI-powered innovation. Their stories reveal how fully managed platform-as-a-service solutions like Microsoft Azure SQL Managed Instance help organizations move from legacy constraints to a scalable, secure, AI-ready foundation that can power future possibilities.

Try Azure SQL Managed Instance today

Modernization at scale: Thomson Reuters  

For Thomson Reuters, one of the world’s most trusted providers of tax and accounting solutions, modernization was less of an option and more of a necessity. Supporting over 7,000 firms and 70,000 users during the peak of tax season required an infrastructure that was both robust and scalable. The company previously hosted more than 18,000 databases and over 500 terabytes of data on third-party servers, an approach that came with high costs, operational complexity, and challenges scaling to meet seasonal demand.  

By migrating this massive estate into Azure SQL Managed Instance from another cloud hosting environment, Thomson Reuters achieved modernization at scale. With programs like Microsoft Azure Migrate to support every step of the migration journey, and automation tools like PowerShell and Azure Resource Manager templates, they were able to streamline deployments and maintain performance while minimizing disruptions. Azure’s fully managed platform allowed Thomson Reuters to streamline database administration and automated key tasks like backups and updates. As a result, their teams could focus on delivering value to customers rather than managing infrastructure. Azure Virtual Desktop together with Windows 11 facilitated access to tax preparation applications, reducing complexity and costs.  

The benefits were immediate and significant. Thomson Reuters gained: 

Consistent performance during seasonal peaks.

Improved resiliency.

Reduced support overhead.

Optimized costs across licensing and infrastructure.  

Thomson Reuters now has a foundation for continued growth and the flexibility to scale its services as demand requires.

Thomson Reuters transforms tax prep for 7K businesses with Azure SQL Managed Instance

Operational efficiency and performance: Hexure  

While Thomson Reuters’ story highlights scale, Hexure’s migration shows the operational efficiency gains that come from moving to a fully managed platform with Azure SQL Managed Instance and Microsoft Azure App Service. Hexure provides digital solutions for insurance and financial services companies—managing sensitive customer information across many databases and applications.  

The company faced challenges with aging infrastructure that slowed critical processes and demanded heavy manual intervention. Provisioning new customer instances, managing backups, and handling failovers were all time-intensive. Processing delays made it harder to serve clients with the speed and reliability customers expect.

Migrating to Azure SQL Managed Instance changed that equation. Hexure cut processing times by up to 97%, transforming overnight batch jobs into near-instant operations. Migration times were reduced by more than 80% thanks to built-in compatibility and automation. With Microsoft Azure Key Vault, Hexure strengthened the security and protection of its data. Features like point-in-time restore, automated backups, and geo-replication not only boosted resilience but also ensured compliance with industry regulations.

Equally important, the move allowed Hexure to:  

Onboard new customers in minutes versus hours.

Deliver faster shipping cycles for features and platform improvements.

Reduce management of infrastructure—including servers.

With migration, Hexure could now focus on innovation and customer service. For an industry where trust and responsiveness are critical, this operational leap forward directly translates into stronger client relationships.

Hexure cuts processing time by up to 97.2% with Azure SQL Managed Instance

Innovation with AI and insights: CallRevu  

CallRevu’s story illustrates the next frontier: innovation. CallRevu helps automotive dealerships improve lead conversion, follow-up, and customer experience by analyzing phone calls across more than 5,000 locations. Handling this volume of conversational data requires not only advanced analytics but also a scalable platform.

With a fully managed solution built on Azure SQL Managed Instance and Microsoft Azure Kubernetes Service, together with Microsoft Azure AI services, CallRevu created a platform that goes beyond storing and managing data. It ensures reliable, scalable performance for call data and transcriptions, while services like Microsoft Azure OpenAI deliver real-time summaries and insights. This integration allows CallRevu to surface actionable insights in real time—helping dealerships connect marketing to results, improve agent performance, and ultimately drive more sales.

The company also benefits from the operational simplicity that Azure SQL Managed Instance delivers. By migrating from their on-premises SQL Server environment, they gained automated backups, scaling, and monitoring that reduce administrative overhead, while built-in security helps protect sensitive customer interactions. Data is mirrored in Microsoft Fabric, allowing Power BI dashboards to generate real-time insights. With a strong and agile data foundation in place, CallRevu can focus on innovating faster—bringing AI-powered capabilities to an industry where customer engagement is a critical differentiator while also:

Increasing customer satisfaction by 10%.

Saving USD 500,000 annually in labor costs.

Increasing lead conversion by 15%.

CallRevu delivers real-time insights for auto dealerships with Azure AI Foundry

Take the next step in your transformation journey  

Modernization is not a one-time project—it’s a journey that is different for every organization. For some organizations, the first step is simply migrating off legacy servers. For others, it’s about rethinking how operations can run more efficiently. And for many, it’s about leveraging cloud and AI to create entirely new opportunities.  

The experiences of Thomson Reuters, Hexure, and CallRevu highlight how migration to a platform-as-a-service anchored on database solutions like Azure SQL Managed Instance supports every stage of that journey. With a managed, secure, and scalable cloud platform, backed by supporting tools and programs, organizations can migrate with confidence, operate more efficiently, and innovate faster.

Ready to get started? Here are some free tools you can start trying today: 

Try Azure SQL Managed Instance free today. 

Learn how Azure Migrate can help you get started.

Join Microsoft at PASS Data Community Summit 2025 to continue your learning journey and see how Azure is making it easier than ever to start your transformation. Learn more about our sponsorship and presence.

The Signals Loop: Fine-tuning for world-class AI apps and agents 

In the early days of the AI shift, AI applications were largely built as thin layers on top of off-the-shelf foundation models. But as developers began tackling more complex use cases, they quickly encountered the limitations of simply using retrieval-augmented generation (RAG) on top of off-the-shelf models. While this approach offered a fast path to production, it often fell short in delivering the accuracy, reliability, efficiency, and engagement needed for more sophisticated use cases.

However, this dynamic is shifting. As AI shifts from assistive copilots to autonomous co-workers, the architecture behind these systems must evolve. Autonomous workflows, powered by real-time feedback and continuous learning, are becoming essential for productivity and decision-making. AI applications that incorporate continuous learning through real-time feedback loops—what we refer to as the ‘signals loop’—are emerging as the key to building more adaptive and resilient differentiation over time.

Learn how you can start fine-tuning models with Azure AI Foundry

Building truly effective AI apps and agents requires more than just access to powerful LLMs. It demands a rethinking of AI architecture—one that places continuous learning and adaptation at its core. The ‘signals loop’ centers on capturing user interactions and product usage data in real time, then systematically integrating this feedback to refine model behavior and evolve product features, creating applications that get better over time.

As the rise of open-source frontier models democratizes access to model weights, fine-tuning (including reinforcement learning) is becoming more accessible and building these loops becomes more feasible. Capabilities like memory are also increasing the value of signals loops. These technologies enable AI systems to retain context and learn from user feedback—driving greater personalization and improving customer retention. And as the use of agents continues to grow, ensuring accuracy becomes even more critical, underscoring the growing importance of fine-tuning and implementing a robust signals loop. 
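Conceptually, a signals loop is simple plumbing: record whether users accept or reject model outputs, watch an aggregate quality metric, and trigger a fine-tuning run when quality slips. A toy Python sketch of that control loop follows; the class, thresholds, and signal shape are illustrative assumptions for this post, not a Microsoft API:

```python
from dataclasses import dataclass, field

@dataclass
class SignalsLoop:
    """Toy signals loop: accumulate user feedback on model outputs and flag
    when enough negative signal has built up to justify a fine-tuning run."""
    min_samples: int = 100          # don't retrain on tiny samples
    acceptance_threshold: float = 0.8  # quality bar before retraining
    signals: list = field(default_factory=list)  # (output_id, accepted)

    def record(self, output_id, accepted):
        """Log one user interaction: did the user keep this output?"""
        self.signals.append((output_id, bool(accepted)))

    def acceptance_rate(self):
        if not self.signals:
            return 1.0
        return sum(1 for _, ok in self.signals if ok) / len(self.signals)

    def should_fine_tune(self):
        # Retrain only when we have enough data AND quality has slipped.
        return (len(self.signals) >= self.min_samples
                and self.acceptance_rate() < self.acceptance_threshold)

    def training_examples(self):
        # Rejected outputs become candidates for corrective training data.
        return [oid for oid, ok in self.signals if not ok]
```

In a production loop the "accepted" signal would come from telemetry (retained code, edited drafts, thumbs-up/down), and the rejected examples would be curated before feeding a fine-tuning job.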

At Microsoft, we’ve seen the power of the signals loop approach firsthand. First-party products like Dragon Copilot and GitHub Copilot exemplify how signals loops can drive rapid product improvement, increased relevance, and long-term user engagement.

Implementing signals loop for continuous AI improvement: Insights from Dragon Copilot and GitHub Copilot

Dragon Copilot is a healthcare Copilot that helps doctors become more productive and deliver better patient care. The Dragon Copilot team has built a signals loop to drive continuous product improvement. The team built a fine-tuned model using a repository of clinical data, which resulted in much better performance than the base foundational model with prompting only. As the product has gained usage, the team used customer feedback telemetry to continuously refine the model. When new foundational models are released, they are evaluated with automated metrics to benchmark performance and updated if there are significant gains. This loop creates compounding improvements with every model generation, which is especially important in a field where the demand for precision is extremely high. The latest models now outperform base foundational models by ~50%. This high performance helps clinicians focus on patients, capture the full patient story, and improve care quality by producing accurate, comprehensive documentation efficiently and consistently.
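The benchmarking step described above, evaluating each new foundation model with automated metrics and adopting it only on significant gains, can be sketched in a few lines of Python. The functions and the exact-match metric are illustrative assumptions; real evaluations use richer, domain-specific quality metrics:

```python
def evaluate(model, eval_set):
    """Score a model (a callable: prompt -> answer) against labeled examples
    using a simple exact-match metric."""
    correct = sum(1 for prompt, expected in eval_set if model(prompt) == expected)
    return correct / len(eval_set)

def maybe_promote(current, candidate, eval_set, min_gain=0.02):
    """Swap in a new foundation model only if it beats the current one on the
    automated benchmark by a meaningful margin (min_gain)."""
    base_score = evaluate(current, eval_set)
    new_score = evaluate(candidate, eval_set)
    return candidate if new_score - base_score >= min_gain else current
```

Running this gate automatically whenever a new base model ships is what makes the improvements compound with every model generation.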

GitHub Copilot was the first Microsoft Copilot, capturing widespread attention and setting the standard of what AI-powered assistance could look like. In its first year, it rapidly grew to over a million users, and has now reached more than 20 million users. As expectations for code suggestion quality and relevance continue to rise, the GitHub Copilot team has shifted its focus to building a robust mid-training and post-training environment, enabling a signals loop to deliver Copilot innovations through continuous fine-tuning. The latest code completions model was trained on over 400 thousand real-world samples from public repositories and further tuned via reinforcement learning using hand-crafted, synthetic training data. Alongside this new model, the team introduced several client-side and UX changes, achieving an over 30% improvement in retained code for completions and a 35% improvement in speed. These enhancements allow GitHub Copilot to anticipate developer needs and act as a proactive coding partner.

Key implications for the future of AI: Fine-tuning, feedback loops, and speed matter 

The experiences of Dragon Copilot and GitHub Copilot underscore a fundamental shift in how differentiated AI products will be built and scaled moving forward. A few key implications emerge:

Fine-tuning is not optional—it’s strategically important: Fine-tuning is no longer niche, but a core capability that unlocks significant performance improvements. Across our products, fine-tuning has led to dramatic gains in accuracy and feature quality. As open-source models democratize access to foundational capabilities, the ability to fine-tune for specific use cases will increasingly define product excellence.

Feedback loops can generate continuous improvement: As foundational models become increasingly commoditized, the long-term defensibility of AI products will not come from the model alone, but from how effectively those models learn from usage. The signals loop—powered by real-world user interactions and fine-tuning—enables teams to deliver high-performing experiences that continuously improve over time.

Companies must evolve to support iteration at scale, and speed will be key: Building a system that supports frequent model updates requires adjusting data pipelines, fine-tuning, evaluation loops, and team workflows. Companies’ engineering and product orgs must align around fast iteration and fine-tuning, telemetry analysis, synthetic data generation, and automated evaluation frameworks to keep up with user needs and model capabilities. Organizations that evolve their systems and tools to rapidly incorporate signals—from telemetry to human feedback—will be best positioned to lead. Azure AI Foundry provides the essential components needed to facilitate this continuous model and product improvement.

Agents require intentional design and continuous adaptation: Building agents goes beyond model selection. It demands thoughtful orchestration of memory, reasoning, and feedback mechanisms. Signals loops enable agents to evolve from reactive assistants into proactive co-workers that learn from interactions and improve over time. Azure AI Foundry provides the infrastructure to support this evolution, helping teams design agents that act, adapt dynamically, and deliver sustained value.

In the early days of AI, fine-tuning was not economical and required significant time and effort; the rise of open-source frontier models and methods like LoRA and distillation has made tuning more cost-effective, and the tools have become easier to use. As a result, fine-tuning is more accessible to more organizations than ever before. While out-of-the-box models have a role to play for horizontal workloads like knowledge search or customer service, organizations are increasingly experimenting with fine-tuning for industry- and domain-specific scenarios, adding their domain-specific data to their products and models.

The signals loop ‘future-proofs’ AI investments by enabling models to continuously improve over time as usage data is fed back into the fine-tuned model, preventing performance from stagnating.

Build adaptive AI experiences with Azure AI Foundry

To simplify the implementation of fine-tuning feedback loops, Azure AI Foundry offers industry-leading fine-tuning capabilities through a unified platform that streamlines the entire AI lifecycle—from model selection to deployment—while embedding enterprise-grade compliance and governance. This empowers teams to build, adapt, and scale AI solutions with confidence and control. 

Here are four key reasons why fine-tuning on Azure AI Foundry stands out: 

Model choice: Access a broad portfolio of open and proprietary models from leading providers, with the flexibility to choose between serverless or managed compute options. 

Reliability: Rely on 99.9% availability for Azure OpenAI models and benefit from latency guarantees with provisioned throughput units (PTUs). 

Unified platform: Leverage an end-to-end environment that brings together models, training, evaluation, deployment, and performance metrics—all in one place. 

Scalability: Start small with a cost-effective Developer Tier for experimentation and seamlessly scale to production workloads using PTUs. 

Join us in building the future of AI, where copilots become co-workers, and workflows become self-improving engines of productivity.

Learn more

Register for the Microsoft Ignite session AI fine-tuning in Azure AI Foundry to make your agents unstoppable.

Download the white paper to learn how to unlock business value with fine-tuning.

Explore fine-tuning with Azure AI Foundry documentation.


FYAI: Why developers will lead AI transformation across the enterprise

Developers are leading AI adoption—and driving transformation across every industry. From writing code to managing applications, they’re using copilots and agents to accelerate delivery, reduce manual effort, and build with greater confidence. Just as they led automation, developers are now reshaping customer experiences and streamlining operations to unlock AI’s full potential.

Transform what’s possible for your business with Microsoft AI

In this edition of FYAI, a series where we spotlight AI trends with Microsoft leaders, we hear from Amanda Silver, Corporate Vice President and Head of Product, Apps, and Agents. Amanda’s leadership has shaped Microsoft’s evolution toward open-source collaboration, and she’s advancing a future where AI transforms how developers build, deploy, and iterate at scale to drive continuous innovation.

In this Q&A, Amanda shares why developer-led AI adoption matters, how agentic DevOps is redefining workflows, and what leaders can do today to maximize impact. 

How is the AI landscape changing how developer teams deliver the apps businesses run on?

AI is collapsing handoffs across the software lifecycle. DevOps successfully united build, test, deploy, and operate, but the earlier phases—discovery, requirements, shared vision, and initial scaffolding—mostly sat outside that loop. Now copilots can turn natural language ideas into specs and scaffolds, and agents take on tests, upgrades, and runtime operations. The result is a single, faster cycle from idea to impact: lower cost to iterate, quicker transitions, and more freedom to refine until the product fits the business. Think of it like the shift to public cloud: before the public cloud, teams waited weeks to procure hardware and commit capital up front; with the cloud, environments spin up in seconds and you pay only for what you use. AI brings that same elasticity to product definition and delivery—removing friction at the front of the lifecycle and letting teams iterate based on real feedback. Put simply: cloud removed friction from infrastructure; AI removes friction from intent to implementation.

What are some examples of how AI is helping developers re-imagine their daily work?

AI is turning software delivery into a true idea-to-operate system. For developers, that means less time spent on manual cleanup and more time focused on creative, high-impact work. Copilots and agents now handle the repetitive, often invisible tasks that used to pile up—like debugging, dependency upgrades, and security patches. Instead of waiting for a quarterly “tech debt sprint,” agentic DevOps lets teams pay down debt continuously, in the background.

A great example is how agentic AI is accelerating migration and modernization. In the past, updating frameworks or moving to new platforms meant months of planning and manual fixes. Now, agents can automate .NET and Java upgrades, resolve breaking changes, and even orchestrate large-scale migrations—compressing timelines from months to hours. This isn’t just about speed; it’s about keeping codebases healthy and modern by default, so developers can focus on building new features and improving user experiences. 

The net effect: developers spend less time firefighting and more time innovating. Technical debt becomes a manageable, ongoing process—not a looming crisis. And as AI agents take on more of the routine work, teams can operate in a steadier flow state, with healthier code and faster delivery. 

What does that mean for apps? Are they getting better? And how does this impact the role a developer plays?

Apps will get better because they become learning systems. With AI in the loop, teams shift from ship-and-hope to a continuous observe → hypothesize → change → validate cycle centered on continuously refining product–market fit. AI can help synthesize telemetry (such as funnels, drop-offs, session replays, and qualitative feedback), surface where users struggle, propose changes (like copy, flow, component layout, and recommendations), and can even wire up feature flags or experiments to prove whether a change works. The effect is a dramatic reduction in time-to-learning—and faster convergence on what users value.

Pre-AI versus post-AI user interaction

Pre-AI (hunt-and-peck): Users navigate dense menus and deep information architectures, scanning screens to find the one control that does what they need. Every step risks a dead end, and context is easy to lose when switching pages or tools.

Post-AI (intent-first): Users express intent in natural language (text, speech, or multimodal). The app interprets that intent, keeps context, and routes the user to the right data, action, or workflow—often composing the UI on the fly (for example, drafting a form, filtering to the relevant records, and suggesting the next best action). Think of this as moving from “Where do I click?” to “Here’s what I need—do it with me.”

What changes for developers

From page builders to experience composers. Devs design intent routers and orchestrations that connect models, agents, data, and services—so the app can respond intelligently to varied user goals without forcing rigid click paths.

From manual analysis to AI-assisted product loops. Instead of hand-rolled dashboards and ad hoc investigations, AI highlights opportunity areas, drafts experiment plans, and opens pull requests with proposed code and config changes. Developers review, constrain, and ship—with guardrails.

From “debt sprints” to continuous modernization. Agents can keep the app current—upgrading frameworks (for example, .NET and Java), repairing dependency drift, patching vulnerabilities, and standardizing pipelines—while feature work continues. That turns tech debt into a managed, always-on workload rather than a periodic fire drill.

Bottom line: AI tightens the loop between what users want and what the app becomes. Developers spend less time on menu wiring and manual forensics, and more time defining intent, composing agentic flows, setting success metrics, and supervising safe, measurable change. Apps improve faster—not just because they’re smarter, but because teams can experiment, learn, and adapt as usage grows.
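The "intent router" role described for developers can be made concrete with a toy example: instead of wiring menus, the developer registers handlers for intents and lets a dispatcher map free-form requests onto them. A minimal Python sketch follows; a real system would use an LLM or classifier rather than regular expressions, and all names here are invented for illustration:

```python
import re

class IntentRouter:
    """Toy intent router: match a natural-language utterance against
    registered patterns and dispatch to the corresponding handler."""
    def __init__(self):
        self._routes = []  # list of (compiled pattern, handler)

    def route(self, pattern):
        """Decorator that registers a handler for an intent pattern."""
        def register(handler):
            self._routes.append((re.compile(pattern, re.IGNORECASE), handler))
            return handler
        return register

    def dispatch(self, utterance):
        """Find the first matching intent and call its handler with the
        named groups extracted from the utterance."""
        for pattern, handler in self._routes:
            match = pattern.search(utterance)
            if match:
                return handler(**match.groupdict())
        return "Sorry, I can't help with that yet."

router = IntentRouter()

@router.route(r"refund order (?P<order_id>\d+)")
def refund(order_id):
    return f"Refund started for order {order_id}"

@router.route(r"where is my order (?P<order_id>\d+)")
def track(order_id):
    return f"Order {order_id} is out for delivery"
```

For example, `router.dispatch("Please refund order 42")` routes to the refund handler with `order_id="42"`; the user states intent rather than hunting for the right screen.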

Where do you see Microsoft standing out in a sea of AI competition? Microsoft’s biggest differentiator is our ability to connect AI agents to the systems, data, and workflows that power real business. We serve organizations with massive, complex codebases and deep operational requirements—and our tools are designed to meet them where they are. With GitHub, Visual Studio, and Azure AI Foundry, millions of developers can access the latest models and agentic capabilities directly in their daily workflow, backed by enterprise-grade security, governance, and responsible AI benchmarks. 

Software development with GitHub Copilot and Microsoft Azure: Read the blog ↗

But what truly sets Microsoft apart is the breadth of integration. AI agents built on our platform can tap into a huge ecosystem of business apps, data sources, and operational systems—whether it’s enterprise resource planning (ERP), customer relationship management (CRM), human resources (HR), finance, or custom line-of-business solutions. Through open standards like the Model Context Protocol (MCP) and Agent-to-Agent (A2A), agents can securely connect, orchestrate, and automate across these environments, making it possible to deliver outcomes that matter: automating workflows, modernizing legacy systems, and driving continuous improvement.

Yina Arenas’s Agent Factory series shows how Microsoft is building the blueprint for safe, secure, and reliable AI agents—from rapid prototyping to production, observability, and real-world use cases. Our platform isn’t just about building agents; it’s about enabling them to work with the systems and data that organizations already rely on, so teams can move from experiments to enterprise-scale impact. 

At the end of the day, Microsoft’s advantage is not just scale—it’s the ability to make AI agents truly useful by connecting them to the heart of the business, with the tools and standards to do it safely and securely. 

When should developers decide which tasks to delegate to agents versus tackle themselves for maximum impact? As my colleague, David Fowler, put it: “Humans are the UI thread; agents are the background thread. Don’t block the UI!” Developers should focus on the creative, judgment-driven work—setting intent, making architectural decisions, and shaping the product experience. Agents excel at handling the repetitive, long-running, or cross-cutting tasks that can quietly run in the background: code health, dependency upgrades, telemetry triage, and even scaffolding out solutions to unblock the blank page. 

The key is to delegate anything that slows down your flow or distracts from high-impact work. If a task is routine, latency-tolerant, or easily reversible, let an agent handle it. If it requires deep context, product judgment, or could fundamentally change the direction of your app, keep it on the human “UI thread.” This way, developers stay responsive and focused, while agents continuously improve the codebase and operations in parallel. 
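The delegation rule of thumb above can be expressed as a toy decision function. This is purely illustrative; the task fields are invented labels, not a real taxonomy or API:

```python
# Toy heuristic: delegate tasks that are routine, latency-tolerant, or
# easily reversible; keep judgment-heavy or direction-changing work on
# the human "UI thread".
def assign(task):
    if task.get("needs_product_judgment") or task.get("changes_direction"):
        return "human"
    if task.get("routine") or task.get("latency_tolerant") or task.get("reversible"):
        return "agent"
    return "human"  # default to human when a task fits neither bucket

tasks = [
    {"name": "dependency upgrade", "routine": True, "reversible": True},
    {"name": "choose new architecture", "needs_product_judgment": True},
]
assignments = {t["name"]: assign(t) for t in tasks}
```

Note the ordering: judgment-heavy signals win even when a task also looks routine, which mirrors keeping anything that could change the product's direction on the human thread.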

By striking the right balance, developers can minimize time spent on routine tasks and stay focused on the work that moves products and teams forward.

Why are AI coding tools attracting so much investment and interest? Why reimagine the developer experience now? Because software development already generates the kind of rich, structured signals that AI thrives on. Code and diffs, pull request reviews, test results, build logs, runtime and performance telemetry, issue trackers, and deployment outcomes are all labeled, timestamped, and traceable. That makes the dev environment a natural proving ground for applied machine learning: models can learn from real work, be evaluated against objective checks (like tests, linters, and policies), and improve inside an existing feedback loop (such as Continuous Integration and Continuous Delivery (CI/CD), feature flags, and canaries). In short, we have the data, the instrumentation, and the validation built in. 
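As a minimal sketch of that kind of objective gate, assuming hypothetical check names, an AI-proposed change merges only when every required check passes:

```python
# Illustrative merge gate for an AI-proposed change: the patch merges only
# if all required objective checks (tests, linters, policies) pass.
# Check names are invented for the sketch, not a real CI system's API.
def gate(check_results, required=("tests", "linter", "policy")):
    failures = [name for name in required if not check_results.get(name, False)]
    return {"merge": not failures, "failures": failures}

decision = gate({"tests": True, "linter": True, "policy": False})
```

This is what makes the dev environment a good proving ground: the validation already exists, so a model's output can be accepted or rejected by checks that ran before AI ever entered the loop.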

There’s also a cultural reason: developers automate away friction—from compilers and build systems to version control, CI/CD, containers, and infrastructure as code. Generative AI is the next step in that lineage. It shifts more work from hand authoring to specifying intent and supervising outcomes: copilots help with exploration and acceleration; agents handle continuous code health, upgrades, and safe, reversible changes. Investment flows here because better developer experience maps directly to throughput, quality, and time to value. 

And yes—the future starts with developers. As dev teams discover where AI delivers real support in their own workflow, those patterns spread to the rest of the business, accelerating how every function experiments, learns, and ships. 

Empowering developers with AI to deliver lasting impact

We’re entering a new era of software delivery—and it’s agentic, adaptive, and deeply human-centered. With copilots and agents in the loop, developers are building systems that continually adapt to business needs. At Microsoft, we’re empowering developers to move from idea to impact faster by focusing on creativity, product vision, and building with trustworthy AI.

In fact, Frontier Firms are already showing us what’s possible. They treat software as a dynamic system—refined through telemetry, experimentation, and AI-powered insight. And across all types of organizations, compelling AI use cases are emerging—from customer service to software engineering—setting the pace for what’s possible with the latest AI tooling.

Empower your people and drive real results with Microsoft AI

Ready to learn more? Discover resources and tools to accelerate your AI journey:

Learn three skill-building insights that help Frontier Firms drive innovation.

Get started with GitHub Copilot.

Build your first production-grade agent in under an hour with Azure AI Foundry.

Simplify development and meet evolving business needs with Microsoft Cloud solutions.
The post FYAI: Why developers will lead AI transformation across the enterprise appeared first on Microsoft Azure Blog.
Source: Azure

Accelerating open-source infrastructure development for frontier AI at scale

In the transition from building computing infrastructure for cloud scale to building cloud and AI infrastructure for frontier scale, the world of computing has experienced tectonic shifts in innovation. Throughout this journey, Microsoft has shared its learnings and best practices, optimizing our cloud infrastructure stack in cross-industry forums such as the Open Compute Project (OCP) Global Foundation.

Today, we see that the next phase of cloud infrastructure innovation is poised to be the most consequential period of transformation yet. In just the last year, Microsoft has added more than 2 gigawatts of new capacity and launched the world’s most powerful AI datacenter, which delivers 10x the performance of the world’s fastest supercomputer today. Yet, this is just the beginning.

Learn more about Microsoft’s global infrastructure

Delivering AI infrastructure at the highest performance and lowest cost requires a systems approach, with optimizations across the stack to drive quality, speed, and resiliency at a level that can provide a consistent experience to our customers. In the quest to supply resilient, sustainable, secure, and widely scalable technology to handle the breadth of AI workloads, we’re embarking on an ambitious new journey: one not just of redefining infrastructure innovation at every layer of execution from silicon to systems, but one of tightly integrated industry alignment on standards that offer a model for global interoperability and standardization.

At this year’s OCP Global Summit, Microsoft is contributing new standards across power, cooling, sustainability, security, networking, and fleet resiliency to further advance innovation in the industry.

Redefining power distribution for the AI era

As AI workloads scale globally, hyperscale datacenters are experiencing unprecedented power density and distribution challenges.

Last year, at the OCP Global Summit, we partnered with Meta and Google in the development of Mt. Diablo, a disaggregated power architecture. This year, we’re building on this innovation with the next step of our full-stack transformation of datacenter power systems: solid-state transformers. Solid-state transformers simplify the power chain with new conversion technologies and protection schemes that can accommodate future rack voltage requirements.

Training large models across thousands of GPUs also introduces variable and intense power draw patterns that can strain the grid, the utility, and traditional power delivery systems. These fluctuations not only risk hardware reliability and operational efficiency but also create challenges across capacity planning and sustainability goals.

Together with key industry partners, Microsoft is leading a power stabilization initiative to address this challenge. In a recently published paper with OpenAI and NVIDIA—Power Stabilization for AI Training Datacenters—we address how full-stack innovations spanning rack-level hardware, firmware orchestration, predictive telemetry, and facility integration can smooth power spikes, reduce power overshoot by 40%, and mitigate operational risk and costs to enable predictable and scalable power delivery for AI training clusters.
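To make the idea of smoothing power spikes concrete, here is a generic slew-rate limiter in Python. This is an illustrative simplification, not the mechanism described in the paper: it simply caps how quickly drawn power may change between intervals, which is one generic way a rack-level controller could reduce overshoot and grid strain:

```python
def slew_limit(demand, max_step):
    """Cap the per-interval change in power draw (illustrative smoothing)."""
    out = [demand[0]]
    for d in demand[1:]:
        prev = out[-1]
        # Clamp the requested change to [-max_step, +max_step]
        step = max(-max_step, min(max_step, d - prev))
        out.append(prev + step)
    return out

# Invented bursty training power profile (kW), limited to 50 kW per interval
raw = [100, 400, 100, 400, 100]
smoothed = slew_limit(raw, 50)
```

The trade-off is latency: the smoothed profile lags sharp demand changes, which is why the paper pairs facility-level smoothing with rack-level hardware and predictive telemetry rather than relying on any single layer.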

This year, at the OCP Global Summit, Microsoft is joining forces with industry partners to launch a dedicated power stabilization workgroup. Our goal is to foster open collaboration across hyperscalers and hardware partners, sharing our learnings from full-stack innovation and inviting the community to co-develop new methodologies that address the unique power challenges of AI training datacenters. By building on the insights from our recently published white paper, we aim to accelerate industry-wide adoption of resilient, scalable power delivery solutions for the next generation of AI infrastructure. Read more about our power stabilization efforts.

Cooling innovations for resiliency

As the power profile for AI infrastructure changes, we are also continuing to rearchitect our cooling infrastructure to support evolving needs around energy consumption, space optimization, and overall datacenter sustainability. A variety of cooling solutions is needed to support the scale of our expansion: as we build new AI-scale datacenters, we are also using Heat Exchanger Unit (HXU)-based liquid cooling to rapidly deploy new AI capacity within our existing air-cooled datacenter footprint.

Microsoft’s next generation HXU is an upcoming OCP contribution that enables liquid cooling for high-performance AI systems in air-cooled datacenters, supporting global scalability and rapid deployment. The modular HXU design delivers 2X the performance of current models and maintains >99.9% cooling service availability for AI workloads. No datacenter modifications are required, allowing seamless integration and expansion. Learn more about the next generation HXU here. 

Meanwhile, we’re continuing to innovate across multiple layers of the stack to address changes in power and heat dissipation—utilizing facility water cooling at datacenter-scale, circulating liquid in closed-loops from server to chiller; and exploring on-chip cooling innovations like microfluidics to efficiently remove heat directly from the silicon.

Unified networking solutions for growing infrastructure demands 

Scaling hundreds of thousands of GPUs to operate as a single, coherent system poses a significant challenge: creating rack-scale interconnects that deliver low-latency, high-bandwidth fabrics that are both efficient and interoperable. As AI workloads grow exponentially and infrastructure demands intensify, we are exploring networking optimizations that can support these needs. To that end, we have developed scale-up, scale-out, and Wide Area Network (WAN) solutions to enable large-scale distributed training.

We partner closely with standards bodies, like the Ultra Ethernet Consortium (UEC) and UALink, focused on innovation in networking technologies for this critical element of AI systems. We are also driving forward adoption of Ethernet for scale-up networking across the ecosystem and are excited to see the Ethernet for Scale-up Networking (ESUN) workstream launch under the OCP Networking Project. We look forward to promoting adoption of cutting-edge networking solutions and enabling a multi-vendor ecosystem based on open standards.

Security, sustainability, and quality: Fundamental pillars for resilient AI operations

Defense in depth: Trust at every layer

Our comprehensive approach to scaling AI systems responsibly includes embedding trust and security into every layer of our platform. This year, we are introducing new security contributions that build on our existing body of work in hardware security and introduce new protocols that are uniquely fit to support new scientific breakthroughs that have been accelerated with the introduction of AI:

Building on past years’ contributions and Microsoft’s collaboration with AMD, Google, and NVIDIA, we have further enhanced Caliptra, our open-source silicon root of trust. The introduction of Caliptra 2.1 extends the hardware root of trust to a full security subsystem. Learn more about Caliptra 2.1 here.

We have also added Adams Bridge 2.0 to Caliptra to extend support for quantum-resilient cryptographic algorithms to the root-of-trust.

Finally, we are contributing OCP Layered Open-source Cryptographic Key Management (L.O.C.K)—a key management block for storage devices that secures media encryption keys in hardware. L.O.C.K was developed through collaboration between Google, Kioxia, Microsoft, Samsung, and Solidigm.

Advancing datacenter-scale sustainability 

Sustainability continues to be a major area of opportunity for industry collaboration and standardization through communities such as the Open Compute Project. Working collaboratively as an ecosystem of hyperscalers and hardware partners is one catalyst to address the need for sustainable datacenter infrastructure that can effectively scale as compute demands continue to evolve. This year, we are pleased to continue our collaborations as part of OCP’s Sustainability workgroup across areas such as carbon reporting, accounting, and circularity:

Announced at this year’s Global Summit, we are partnering with AWS, Google, and Meta to fund the Product Category Rule initiative under the OCP Sustainability workgroup, with the goal of standardizing carbon measurement methodology for devices and datacenter equipment.

Together with Google, Meta, OCP, Schneider Electric, and the iMasons Climate Accord, we are establishing the Embodied Carbon Disclosure Base Specification to establish a common framework for reporting the carbon impact of datacenter equipment.

Microsoft is advancing the adoption of waste heat reuse (WHR). In partnership with the NetZero Innovation Hub, NREL, and EU and US collaborators, Microsoft has published heat reuse reference designs and is developing an economic modeling tool that gives datacenter operators and waste heat offtakers the cost of developing waste heat reuse infrastructure based on conditions such as the size and capacity of the WHR system, season, location, and the WHR mandates and subsidies in place. These region-specific solutions help operators convert excess heat into usable energy—meeting regulatory requirements and unlocking new capacity, especially in regions like Europe where heat reuse is becoming mandatory.

We have developed an open methodology for Life Cycle Assessment (LCA) at scale across large-scale IT hardware fleets to drive towards a “gold standard” in sustainable cloud infrastructure.

Rethinking node management: Fleet operational resiliency for the frontier era

As AI infrastructure scales at an unprecedented pace, Microsoft is investing in standardizing how diverse compute nodes are deployed, updated, monitored, and serviced across hyperscale datacenters. In collaboration with AMD, Arm, Google, Intel, Meta, and NVIDIA, we are driving a series of Open Compute Project (OCP) contributions focused on streamlining fleet operations, unifying firmware management and manageability interfaces, and enhancing diagnostics, debug, and RAS (Reliability, Availability, and Serviceability) capabilities. This standardized approach to lifecycle management lays the foundation for consistent, scalable node operations during this period of rapid expansion. Read more about our approach to resilient fleet operations.

Paving the way for frontier-scale AI computing 

As we enter a new era of frontier-scale AI development, Microsoft takes pride in leading the advancement of standards that will drive the future of globally deployable AI supercomputing. Our commitment is reflected in our active role in shaping the ecosystem that enables scalable, secure, and reliable AI infrastructure across the globe. We invite attendees of this year’s OCP Global Summit to connect with Microsoft at booth #B53 to discover our latest cloud hardware demonstrations. These demonstrations showcase our ongoing collaborations with partners throughout the OCP community, highlighting innovations that support the evolution of AI and cloud technologies.

Connect with Microsoft at the OCP Global Summit 2025 and beyond

Visit Microsoft at the OCP Global Summit at booth #B53.

Check out sessions delivered by Microsoft and partners from OCP Summit 2025.

Take a virtual tour of Microsoft datacenters.

Learn more about Microsoft’s global infrastructure.

The post Accelerating open-source infrastructure development for frontier AI at scale appeared first on Microsoft Azure Blog.

Oracle Database@Azure offers new features, regions, and programs to unlock data and AI innovation

Together, Microsoft and Oracle are delivering the most comprehensive, enterprise‑ready platform for organizations migrating their Oracle solutions to the public cloud—especially those aiming to empower IT professionals and developers to streamline AI adoption and enhance employee productivity.

Oracle Database@Azure was the first offering of its kind in the market and today has the broadest regional availability, new ways to unify your data in Microsoft Fabric, deeper security integrations with Microsoft Defender, and can run most Oracle Database services—Base Database Service, Exadata Database Service on Dedicated Infrastructure, Exadata Database Service on Exascale Infrastructure, and Autonomous Database, as well as Oracle Database 19c or 23ai—on Azure.

Get started with Oracle Database@Azure

The result? A truly enterprise‑ready platform that offers more choice, increased control, and expanded opportunity to innovate with confidence–and customers are excited about the impact it’s driving in action.

Oracle Database@Azure delivered Exadata-grade performance natively within Azure, enabling us to host our Oracle EBS in the cloud without compromise. We gained native, real-time access to EBS data from Azure and seamless integration with both Oracle and non-Oracle data sources. Paired now with Microsoft Fabric, Power BI, and Copilot studio, our team will be able to accelerate insight delivery to business stakeholders and build agentic workflows faster. It’s a practical path to iterate on new features while keeping governance and security at the forefront.
Mahesh Tyagi, Vice President Finance Engineering, Activision Blizzard 

New Oracle Database@Azure features strengthen its enterprise leadership

Enterprise-grade capabilities are essential for organizations that depend on Oracle databases for mission-critical workloads. That’s why Microsoft continues to advance Oracle Database@Azure, bringing together the scale and resilience of Azure with industry-leading security and AI innovation.

Take a look at our latest announcements. For more details on each of these capabilities, check out our technical blog.

Announcing two new capabilities for real-time data integration and replication with Microsoft Fabric for an AI-ready data estate

Oracle Database mirroring in OneLake, now in public preview, enables continuous zero-ETL synchronization of Oracle data into OneLake, enabling a unified real-time data estate in Microsoft Fabric. Also available today, native Oracle GoldenGate integration offers managed, high-performance, low-latency replication and can be purchased using Microsoft Azure Consumption Commitment (MACC). Once your Oracle data is connected through Oracle Database@Azure, you can use powerful AI innovation tools like Microsoft Copilot Studio, Azure AI Foundry, and Power BI.

Oracle Base Database is generally available

Oracle Database@Azure offers customers the flexibility to run any Oracle Database service on Azure. Oracle Database@Azure now supports all popular Oracle database services—Base Database Service, Exadata Database Service on Dedicated Infrastructure, Exadata Database Service on Exascale Infrastructure, and Autonomous Database—along with the choice of using either Oracle Database 19c or 23ai. This provides customers with a comprehensive set of flexible, simple, and cost-effective migration options when moving their Oracle databases to Azure.

Support for Oracle workloads goes beyond Oracle Database services. We’re excited to share that Oracle has introduced support policies for running Oracle E-Business Suite, PeopleSoft, JD Edwards EnterpriseOne, Enterprise Performance Management, and Oracle Retail Applications in Microsoft Azure using Oracle Database@Azure. This enables businesses to harness the power of Microsoft Azure while leveraging Oracle’s industry-leading database technology to achieve greater scalability, performance, and security. We continue to offer full Oracle Maximum Availability (MAA) support—up to platinum tier—available exclusively on Azure, giving customers the highest levels of availability, disaster recovery, and zero-data-loss protection for mission critical workloads.

Microsoft Defender now brings industry-leading threat detection and response to Oracle Database@Azure

Microsoft Defender is a cloud-native security platform that provides unified threat protection, vulnerability management, and automated compliance to safeguard Oracle Database@Azure workloads. Complemented by Microsoft Sentinel’s AI-powered security information and event management (SIEM) for real-time monitoring, and Microsoft Entra ID’s unified identity and access controls, customers get comprehensive enterprise-grade protection designed for today’s complex threat landscape.  

Azure Arc for Oracle Database@Azure

Extend Azure’s management, governance, and security capabilities across environments—whether on-premises, multicloud or edge. From a single control plane, Azure Arc enables you to enforce policies, manage identities, and automate lifecycle operations for all your Azure resources—and now, for your Oracle databases running natively on Azure. 

Azure IoT Operations and Microsoft Fabric now power an integration blueprint with Oracle Fusion Cloud Supply Chain and Manufacturing (SCM)

This integration enables manufacturers to capture live insights from factory equipment and sensors, automate key processes, and drive data‑driven decisions for greater efficiency and responsiveness.

Available in over 28 regions globally

With plans to reach 33 live regions by the end of the year, Oracle Database@Azure empowers organizations to deploy closer to their applications and users across North America, EMEA, and APAC. Stay up to date on the latest regions to go live here.

Introducing Azure Accelerate for Oracle

To help every organization start quickly and confidently—regardless of their size—Microsoft is excited to offer Azure Accelerate benefits to Oracle customers. Azure Accelerate is a program designed to support customers across their cloud and AI journey with expert guidance and investments. Customers can cut through the complexity of their Oracle migrations—and related application migration, modernization, and AI innovation projects—while also minimizing project costs. Azure Accelerate makes it easier than ever to bring your Oracle workloads to Azure by offering:

Access to trusted experts: Tap into the deep expertise of Azure’s specialized partner ecosystem. Additionally, you can take advantage of the Cloud Accelerate Factory benefit, which provides Microsoft experts at no additional cost.

Microsoft investments: Access Partner funding and Azure credits designed to make your migration to Azure more cost effective and minimize project risk.

Comprehensive coverage: Get help at every stage of the project, starting with an initial assessment through pilots or proof-of-value to full-scale implementation.

With Azure Accelerate, Oracle customers can now migrate more efficiently while integrating AI into their strategy, alongside Azure experts from day one.

Channel partners can now resell Oracle Database@Azure

Microsoft AI Cloud Partners and Oracle Partner Network (OPN) members can now purchase and resell Oracle Database@Azure—right from the Microsoft Marketplace. This new model underscores Microsoft and Oracle’s joint commitment to the partner community while streamlining migration and modernization for customers who prefer to purchase through their trusted partners.

Microsoft’s partner reseller programme helped CGI select Oracle Database@Azure to consolidate cloud services under a single cloud provider, ensuring cost efficiency, elasticity and redundancy required to meet CGI’s client key requirements. For Smart DCC, CGI is working with Oracle and Microsoft to implement the solution through the Microsoft marketplace reseller model, providing a streamlined procurement route on a secure, enterprise-ready platform for mission-critical workloads.
Ro Crawford, VP Consulting Services, CGI

We are also excited to share that Oracle Database@Azure is now included in the Microsoft Most Valuable Professionals (MVP) program under the new technology area, Azure Solutions and Ecosystem. This new technology area spans mission-critical workloads and modernization efforts, including Oracle Database@Azure, Azure VMware Solution (AVS), Nutanix on Azure, and mainframe modernization strategies. The Microsoft Most Valuable Professionals program recognizes exceptional community leaders for their technical expertise, leadership, speaking experience, online influence, and commitment to solving real-world problems. To learn more about the program, visit this FAQ.

Oracle Database@Azure customer momentum

Customers like Conduent, BV Liantis, SEFE, Astellas Pharmacy, Craneware and Medline have moved their Oracle databases to Oracle Database@Azure to optimize performance and reduce latency while unlocking a future-ready foundation for AI. 

We’re excited to spotlight our customer innovation in our sessions at Oracle AI World. Don’t miss Activision Blizzard on stage for Microsoft’s Spotlight Session on Wednesday, October 15 at 4:45 PM PT. You can find our full session list and featured customers here. 

Get started
Oracle Database@Azure is an Oracle database service running on Oracle Cloud Infrastructure (OCI), colocated in Microsoft datacenters.

Learn more here

Looking ahead

We’re excited to continue this journey—bringing together the best of Oracle and Microsoft to help customers innovate faster, operate smarter, and lead in the era of intelligent applications.

If you’re attending Oracle AI World 2025, come talk to our experts at the Microsoft booth (#3005) and be sure to check out our sessions.

Learn more about Oracle on Azure | Microsoft Azure.

To get started, contact our sales team.
The post Oracle Database@Azure offers new features, regions, and programs to unlock data and AI innovation appeared first on Microsoft Azure Blog.

Sora 2 now available in Azure AI Foundry

Turning imagination into reality has never been more instantaneous—and powerful—than it is today, with the launch of OpenAI’s Sora 2, now in public preview in Azure AI Foundry.

Azure AI Foundry is the developer destination built for creators, from startups to global businesses. The platform now offers a curated catalog of generative media models, including OpenAI’s Sora, GPT-image-1 and GPT-image-1-mini, Black Forest Labs’ Flux 1.1 and Kontext Pro, and more. These models empower software development companies and builders to serve creatives with new and unique capabilities—to accelerate storyboarding, drive engagement, and transform the creative process, all without sacrificing the safety, reliability, and integration businesses expect.

Start creating with Sora 2 in Azure AI Foundry

What can you create with Sora 2 in Azure AI Foundry?

Sora 2 in Azure AI Foundry isn’t just another video generation tool; it’s a creative powerhouse, seamlessly integrated into a platform built for innovation, trust, and scale. Unlike standalone solutions, Azure AI Foundry offers a unified platform where developers can access Sora 2 alongside other leading generative models in a secure, scalable, and structured environment to achieve more:

Marketers can rapidly produce stunning, branded campaign assets, including animated visuals for product launches and personalized content to capture attention and drive engagement.

Retailers can engage customers with interactive and localized campaigns to accelerate time-to-market and transform their customers’ online shopping experience.

Creative directors can transform imaginative ideas into dynamic movie trailers and cinematic experiences to test concepts, while Sora 2’s realistic world simulation, synchronized audio, and creative controls help bring visions to life.

Educators can create immersive lesson plans and interactive media that spark curiosity and deepen understanding.

With Sora 2 in Azure AI Foundry, developers across industries can innovate boldly and confidently. Azure AI Foundry’s unified environment, advanced capabilities, and enterprise-grade security provide the foundation for creativity to flourish and ideas to become reality.

What features and controls are available?

Sora 2 in Azure AI Foundry stands out by combining OpenAI’s most advanced video generation capabilities with the trusted infrastructure and security controls of Microsoft Azure, unlocking new possibilities for every developer with a set of core features:

Realistic video generation powered by advanced world simulation and physics.

Generation based on input text, images, and video.

Synchronized audio and dialogue for immersive storytelling.

Audio available in multiple languages.

Enhanced creative control, including detailed prompt understanding for studio shots, scene details, and camera angles.

Seamless integration into business workflows, backed by Microsoft’s enterprise-grade safety and security.

Microsoft is committed to delivering secure and safe AI solutions for organizations of all sizes. Through Azure AI Foundry and our responsible AI principles, we empower customers with embedded security, safety, and privacy controls.

This foundation extends to Sora 2, where our advanced safety systems and robust controls work together to help developers innovate more confidently in Azure AI Foundry:

Content filters for inputs: Screens text, image, and video prompt inputs.

Content filters for outputs: Analyzes video frames and audio; can block content to help comply with organizational policies.

Enterprise-grade security: Azure’s compliance and governance frameworks protect customer data and creative assets. 

Sora 2 Azure AI Foundry pricing and availability

Starting today, Sora 2 is available via API through Standard Global in Azure AI Foundry.

Model: Sora 2
Size: Portrait 720×1280 or Landscape 1280×720
Price per second (in USD): $0.10

Please refer to the Azure AI Foundry Models page for future updates in deployment types and availability.

Get started with AI as your creative partner

Sora 2 is designed to empower and inspire developers. By accelerating early production and enabling rapid prototyping, Sora 2 frees up time for more ideation and storytelling. The goal is to bring human creativity to the next level, making it easier for anyone to turn ideas into compelling visual stories.

Ready to create with Sora 2?
Explore the full catalog of generative media models in Azure AI Foundry.

Get started

The post Sora 2 now available in Azure AI Foundry appeared first on Microsoft Azure Blog.
Source: Azure

From queries to conversations: Unlock insights about your data using Azure Storage Discovery—now generally available

We are excited to announce the general availability of Azure Storage Discovery, a fully managed service that delivers enterprise-wide visibility into your data estate in Microsoft Azure Blob Storage and Azure Data Lake Storage. Azure Storage Discovery helps you optimize storage costs, comply with security best practices, and drive operational efficiency. With the included Microsoft Copilot in Azure integration, all decision makers and data users can access and uncover valuable data management insights using simple, everyday language—no specialized programming or query skills required. The intuitive experience provides advanced data visualizations and actionable intelligence that are most important to you.

What is Azure Storage Discovery?

Businesses are speeding up digital transformation by storing large amounts of data in Azure Storage for AI, analytics, cloud native apps, HPC, backup, and archive. This data spans multiple subscriptions, regions, and accounts to meet workload needs and compliance rules. The data sprawl makes it challenging to track data growth, spot unexpected data reduction, or optimize costs without clear visibility into data trends and access patterns. Organizations struggle to identify which datasets and business units drive growth and usage the most. Without a global view and streamlined insights across all storage accounts, it’s challenging to ensure data availability, residency, security, and redundancy are consistently aligned with best practices and regulatory compliance requirements.

Azure Storage Discovery makes it simple to gain and analyze insights to manage such large data estates.

Analyze your data estate with Azure Storage Discovery

Azure Storage Discovery lets you easily set up a workspace with storage accounts from any region or subscription you can access. The first insights are available in less than 24 hours, so you can begin analyzing your data estate right away.

Unlock intelligent insights using natural language with Copilot in Azure

Use natural language to ask for the storage insights you need to accomplish your storage management goals. Copilot in Azure answers with rich data visualizations, like tables and charts.

Interactive reports built into the Azure portal

Azure Storage Discovery generates out-of-box dashboards you can access from the Azure portal, with insights that help you visualize and analyze your data estate. The reports include filters for your storage data estate by region, redundancy, and performance, allowing you to quickly drill down and uncover the insights important to you.

Advanced storage insights

The reports deliver insights, at a glance, across multiple dimensions, helping you manage your data effectively:

Capacity: Insights about resource and object sizes and counts, aggregated by subscriptions, resource groups, and storage accounts, with growth trends.

Activity: Visualize transactions, ingress, and egress for insights on how your storage is accessed and utilized.

Security: Highlights critical security configurations of your storage resources, flagging outliers in settings such as public network access, shared access keys, anonymous access to blobs, and encryption.

Configurations: Surfaces configuration patterns across your storage accounts like redundancy, lifecycle management, inventory, and others.

Errors: Highlights failed operations and error codes to help identify patterns of issues that might be impacting workloads.

Kickstart your insights for free, including 15 days of historical data

Getting started is easy with access to 15 days of historical insights within hours of deploying your Azure Storage Discovery workspace. The standard pricing plan offers the most comprehensive set of insights, while the free pricing plan gets you going with the basics.

Analyze long term trends with 18 months of insights

The Azure Storage Discovery workspace with the standard pricing plan will retain insights for up to 18 months, so you can analyze long-term trends and any business- or season-specific workload patterns.

Azure Storage Discovery is available to you today! You can learn more about Azure Storage Discovery here and even get started in the Azure portal here.

Use Copilot to solve the most important business problems

During the design of Azure Storage Discovery, we spoke with many customers across various business-critical roles, such as IT managers, data engineers, and CIOs. We realized AI could simplify onboarding by removing the need for infrastructure deployment or coding knowledge. As a result, we included Copilot in Azure Storage Discovery from the start. It offers insights beyond standard reports and dashboards using natural language queries to deliver actionable information through visualizations like trend charts and tables.

To get started, simply navigate to your Azure Storage Discovery workspace resource in the Azure portal, and activate Copilot.

Identify opportunities to optimize costs

Understanding storage size trends is crucial for cost optimization, and analyzing these trends by region and performance type can reveal important patterns about how the data is evolving over time. With Azure Storage Discovery’s 18 months of data retention, you can uncover long-term trends and unexpected changes across your data estate, while Copilot quickly visualizes storage size trends broken down by region.

“How is the storage size trending over the past month by region?”

Finding cost-saving opportunities across many storage accounts can be difficult, but Copilot simplifies this by highlighting accounts with the highest savings potential based on capacity and transactions as shown below.

“Provide a list of storage accounts with default access tier as Hot, that are above 1TiB in size and have the least transactions”
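To make the logic behind that query concrete, here is a hedged Python sketch that applies the same filter-and-sort to a hypothetical inventory of account metadata. The account names, sizes, and transaction counts are made up for illustration; in practice Copilot evaluates the query against your workspace data.

```python
# Illustrative re-creation of the Copilot query above: Hot-tier accounts
# larger than 1 TiB, ordered by fewest transactions. All data is invented.
TIB = 1024 ** 4  # bytes in one TiB

accounts = [
    {"name": "logsarchive", "tier": "Hot",  "size_bytes": 5 * TIB, "transactions": 120},
    {"name": "mediahot",    "tier": "Hot",  "size_bytes": 2 * TIB, "transactions": 9_500_000},
    {"name": "backupcool",  "tier": "Cool", "size_bytes": 8 * TIB, "transactions": 40},
]

candidates = sorted(
    (a for a in accounts if a["tier"] == "Hot" and a["size_bytes"] > TIB),
    key=lambda a: a["transactions"],
)
print([a["name"] for a in candidates])  # least-accessed Hot accounts first
```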

Before taking any action, you can dive even deeper into the insights by evaluating distributions. For example, a distribution of access tiers across blobs.

“Show me a chart of blob count by blob access tier”

Knowing that the majority of objects are still in the Hot tier provides immediate opportunities to reduce costs by enabling Azure Storage Actions to tier down or even delete data that is not accessed frequently. Azure Storage Actions is a fully managed, serverless platform that automates data management tasks—like tiering, retention, and metadata updates—across millions of blobs in Azure Blob Storage and Data Lake Storage.
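A rough way to size the opportunity described above is to multiply the rarely accessed capacity by the per-GB price gap between tiers. The prices in this sketch are illustrative placeholders, not actual Azure rates—substitute the published pricing for your region and redundancy option.

```python
# Back-of-the-envelope tiering savings estimate. The per-GB prices below
# are made-up placeholders, NOT real Azure rates.
HOT_PER_GB_MONTH = 0.018   # assumed USD per GB per month (placeholder)
COOL_PER_GB_MONTH = 0.010  # assumed USD per GB per month (placeholder)

def monthly_savings_usd(cold_data_gb: float) -> float:
    """Savings from moving rarely accessed data from Hot to Cool."""
    return round(cold_data_gb * (HOT_PER_GB_MONTH - COOL_PER_GB_MONTH), 2)

# e.g. 50 TiB of rarely accessed blobs (1 TiB = 1024 GiB)
print(monthly_savings_usd(50 * 1024))
```

Note that a full comparison would also account for transaction charges and early-deletion periods on cooler tiers, which can offset capacity savings for frequently rewritten data.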

Assess whether storage configurations align with security best practices

For better storage security, Microsoft recommends using Microsoft Entra ID with managed identities instead of Shared Key authentication. Azure Storage Discovery enables you to quickly see which storage accounts still have shared access keys enabled and drill down into a list of storage accounts that need remediation.

“Show me a pie chart of my storage accounts with shared access key enabled by region”

Manage your data redundancy requirements

Azure provides several redundancy options to meet data availability, disaster recovery, performance, and cost needs. These choices should be regularly reviewed against risks and benefits for an effective storage strategy. Azure Storage Discovery quickly shows the redundancy configuration for all storage accounts and allows you to analyze the most suitable option for each workload and critical business data.

“Show me a chart of my storage account count by redundancy”

Pricing and availability

A single Azure Storage Discovery workspace can analyze the subscriptions and storage accounts from all supported regions. Learn more about the regions supported by Azure Storage Discovery here. The service offers a free pricing plan with insights related to capacity and configurations retained for up to 15 days and a standard pricing plan that also includes advanced insights related to activity, errors, and security configurations. Insights are retained for up to 18 months, allowing you to analyze trends and business cycles.

Learn more about the pricing plans in the Azure Storage Discovery documentation and access the prices for your region here.

Get started with Azure Storage Discovery

Getting started with Azure Storage Discovery is easy. Simply follow these two steps:

Configure an Azure Storage Discovery workspace and select the set of subscriptions and resource groups containing your storage accounts.

Define the “Scopes” that represent your business groups or workloads.

That’s it! Give it a moment. Once a workspace is configured, Azure Storage Discovery starts aggregating the relevant insights and makes them available to you via intuitive charts. You’ll find them in the Azure portal, on different report pages of your workspace. We’ll even look back in time and provide 15 days of historical data. Your insights are typically available within a few hours.

To get started, visit Azure Storage Discovery in the Azure Marketplace.

You can also deploy via the brand new Storage Center in the Azure portal. Find Azure Storage Discovery in the “Data management” section.

Want to read more before deploying? The planning guide walks you through all the important considerations for a successful Azure Storage Discovery deployment.

We’d love to hear your feedback. What insights are most valuable to you? What would make Azure Storage Discovery more valuable for your business? Let us know at: StorageDiscoveryFeedback@service.microsoft.com.

Get started with Azure Storage Discovery
Azure Storage Discovery integrates with Copilot in Azure, enabling you to unlock insights and accelerate decision-making without utilizing any query language.

Find the overview here

The post From queries to conversations: Unlock insights about your data using Azure Storage Discovery—now generally available appeared first on Microsoft Azure Blog.

Microsoft’s commitment to supporting cloud infrastructure demand in Asia

As Asia surges ahead in digital transformation, Microsoft is committed to expanding its cloud infrastructure to match the continent’s demand. In 2025, Microsoft launched new Azure datacenter regions in Malaysia and Indonesia, and is set to expand further with new datacenter regions launching in India and Taiwan in 2026. Microsoft is also announcing our intent to deliver a second datacenter region in Malaysia, called Southeast Asia 3. Across Asian markets, the company is investing billions to expand its AI infrastructure footprint—bringing cutting-edge AI, next-generation networking, and scalable storage to the world’s most populous region. These investments will empower enterprises across Asia to scale seamlessly, unlock the full value of their data, and capture new opportunities for growth.

Learn more about Microsoft Cloud Adoption Framework for Azure

Microsoft’s global infrastructure spans over 70 datacenter regions across 33 countries—more than any other cloud provider—designed to meet data residency, compliance, and performance requirements. In Asia, where businesses across financial services, public sector, manufacturing, retail, and start-ups are deeply integrated into the global economy, Microsoft’s strategically distributed datacenters deliver seamless scalability, low-latency connectivity, and regulatory assurance. By keeping critical data and applications close on fault-tolerant, high-capacity networking infrastructure, organizations can operate confidently across local and international markets—delivering fast, reliable services that meet customer expectations and comply with legal requirements.

With a dozen datacenter regions already live across Asia, we are making significant investments to expand further across the continent. These new regions will be among our most integral datacenters in the area:

East Asia

East Asia, a historically established market spanning our Japan and Korea geographies, will see continued growth and expansion. In April 2025, Microsoft launched Azure Availability Zones in the Japan West region—enhancing resilience and efficiency as part of a two-year plan to invest in Japan’s AI and cloud infrastructure.

Additionally, Microsoft announced the launch of Microsoft 365 and associated data residency offerings for commercial customers in the Taiwan North cloud region. Azure services are also accessible to select customers in this region, with general availability for all customers expected in 2026.

Southeast Asian nations

Microsoft is also deepening its commitment to Southeast Asian countries through substantial investments, marked by the launch of new cloud regions in Indonesia and Malaysia in May 2025. The recently launched regions are designed with AI-ready hyperscale cloud infrastructure and three availability zones, providing organizations across Southeast Asia with secure, low-latency access to cloud services.

The recently launched Indonesia Central region is a welcome addition to this area of the world. It offers comprehensive Azure services and local Microsoft 365 availability, unlocking new capabilities that allow customers to innovate. Our continued investments in Indonesia are expected to drive significant expansion, positioning this datacenter region to become one of the largest in Asia over the coming years. Today, more than 100 organizations are already using the Microsoft Cloud from Indonesia to accelerate their transformation, including:

Binus University is leveraging Azure Machine Learning and Azure OpenAI Service to enhance both campus operations and student learning. AI enables accurate student intake forecasting and automates diploma supplement summaries for over 10,000 graduates annually, improving operational efficiency. On the academic side, BINUS is developing AI-powered tools like personalized AI Tutors, generative AI in libraries for tailored book recommendations, and the Beelingua platform for interactive language learning, all aimed at creating a more adaptive, inclusive, and future-ready educational experience.

GoTo Group integrates GitHub Copilot into its engineering workflow, aiming to boost productivity and innovation. Nearly a thousand engineers have adopted the AI-powered coding assistant, which offers real-time suggestions, chat-based help, and simplified explanations of complex code, significantly speeding up the time to innovate.

Customers such as Adaro, BCA, Binus University, Pertamina, Telkom Indonesia, and Manulife have joined the Indonesia Central cloud region, gaining in-country access to Microsoft’s hyperscale infrastructure.

The Malaysia West datacenter region, our first cloud region in the country, helps empower Malaysia’s digital and AI transformation with access to Azure and Microsoft 365. A diverse group of organizations, enterprises, and startups are already leveraging the Malaysia West region including:

PETRONAS, Malaysia’s global energy and solutions provider, is partnering with Microsoft to leverage hyperscale cloud infrastructure to continue advancing its digital and AI transformation, as well as clean energy transition efforts in Asia.

Other customers using Microsoft’s new cloud region include FinHero, SCICOM Berhad, Senang, SIRIM Berhad, TNG Digital (the operator of TNG eWallet), and Veeam, along with more organizations expected to come onboard as demand for secure, scalable, and locally-hosted cloud services continues to grow across industries.

In Malaysia, Microsoft is expanding its digital infrastructure footprint further with a new datacenter region, Southeast Asia 3, planned in Johor Bahru. When this next-generation region comes online, it will feature Microsoft’s most comprehensive and strategic cloud services, designed to support advanced workloads and evolving customer needs from across the area.  

In addition to Indonesia and Malaysia, Microsoft also announced in 2024 a significant commitment to enable a cloud- and AI-powered future for Thailand.

Indian subcontinent

The India geography already has several live datacenter regions, and this footprint will expand further with the launch of the Hyderabad-based India South Central datacenter region in 2026. This is part of a US $3 billion investment over two years in India’s cloud and AI infrastructure.

Consider a multi-region approach

Microsoft’s goal is to empower you to build and grow your business with unparalleled performance and availability. One of the best ways to position your organization for growth is to consider how you choose the right Azure regions.

Our infrastructure investments in Asia are driven by the need for greater agility and flexibility in today’s dynamic cloud environment. Organizations can build a more resilient foundation by not locking themselves into a single region, all while optimizing performance. This enables access to Azure services, resources, and capacity across a broader set of geographic areas. A multi-region approach allows businesses to rapidly adapt to changing demands while maintaining high service levels. Our cloud infrastructure supports this agility by distributing services across regions, helping ensure responsiveness and scalability during peak usage. Leveraging a multi-region cloud architecture with any of our Asia-based regions further strengthens application performance, latency, and overall resilience and availability of cloud applications—empowering organizations to stay ahead in a fast-evolving digital landscape.

Opportunities for cost optimization

Pricing is a critical factor when selecting the right Azure regions for your organization. Through our significant investments in Asia, Microsoft is now able to offer newer and more cost-effective Azure regions, catering to both small and large organizations. Our newest regions, like Indonesia Central, are designed to provide greater choice and flexibility, enabling businesses to optimize their cloud expenditures while maintaining high performance and availability.

Boost your cloud strategy

Use the Cloud Adoption Framework to achieve your cloud goals with best practices, documentation, and tools for business and technology strategies.

Use the Well-Architected Framework to optimize workloads with guidance for building reliable, secure, and performant solutions on Azure.

By choosing to deploy services through any of our Azure regions, customers can leverage the diverse and robust infrastructure that Microsoft is developing across Asia. This approach not only offers resilience and flexibility but also paves the way for innovative solutions that drive economic growth and a more connected future.

Learn more about Cloud Adoption Framework

The post Microsoft’s commitment to supporting cloud infrastructure demand in Asia appeared first on Microsoft Azure Blog.