FYAI: Why developers will lead AI transformation across the enterprise

Developers are leading AI adoption—and driving transformation across every industry. From writing code to managing applications, they’re using copilots and agents to accelerate delivery, reduce manual effort, and build with greater confidence. Just as they led automation, developers are now reshaping customer experiences and streamlining operations to unlock AI’s full potential.

Transform what’s possible for your business with Microsoft AI

In this edition of FYAI, a series where we spotlight AI trends with Microsoft leaders, we hear from Amanda Silver, Corporate Vice President and Head of Product, Apps, and Agents. Amanda’s leadership has shaped Microsoft’s evolution toward open-source collaboration, and she’s advancing a future where AI transforms how developers build, deploy, and iterate at scale to drive continuous innovation.

In this Q&A, Amanda shares why developer-led AI adoption matters, how agentic DevOps is redefining workflows, and what leaders can do today to maximize impact. 

How is the AI landscape changing how developer teams deliver the apps businesses run on? AI is collapsing handoffs across the software lifecycle. DevOps successfully united build, test, deploy, and operate, but the earlier phases—discovery, requirements, shared vision, and initial scaffolding—mostly sat outside that loop. Now copilots can turn natural language ideas into specs and scaffolds, and agents take on tests, upgrades, and runtime operations. The result is a single, faster cycle from idea to impact: lower cost to iterate, quicker transitions, and more freedom to refine until the product fits the business. Think of it like the shift to public cloud: before the public cloud, teams waited weeks to procure hardware and commit capital up front; with the cloud, environments spin up in seconds and you pay only for what you use. AI brings that same elasticity to product definition and delivery—removing friction at the front of the lifecycle and letting teams iterate based on real feedback. Put simply: cloud removed friction from infrastructure; AI removes friction from intent to implementation.

What are some examples of how AI is helping developers re-imagine their daily work?  AI is turning software delivery into a true idea-to-operate system. For developers, that means less time spent on manual cleanup and more time focused on creative, high-impact work. Copilots and agents now handle the repetitive, often invisible tasks that used to pile up—like debugging, dependency upgrades, and security patches. Instead of waiting for a quarterly “tech debt sprint,” agentic DevOps lets teams pay down debt continuously, in the background. 

A great example is how agentic AI is accelerating migration and modernization. In the past, updating frameworks or moving to new platforms meant months of planning and manual fixes. Now, agents can automate .NET and Java upgrades, resolve breaking changes, and even orchestrate large-scale migrations—compressing timelines from months to hours. This isn’t just about speed; it’s about keeping codebases healthy and modern by default, so developers can focus on building new features and improving user experiences. 

The net effect: developers spend less time firefighting and more time innovating. Technical debt becomes a manageable, ongoing process—not a looming crisis. And as AI agents take on more of the routine work, teams can operate in a steadier flow state, with healthier code and faster delivery. 

What does that mean for apps? Are they getting better? And how does this impact the role a developer plays? Apps will get better because they become learning systems. With AI in the loop, teams shift from ship-and-hope to a continuous observe → hypothesize → change → validate cycle centered on continuously refining product–market fit. AI can help synthesize telemetry (such as funnels, drop-offs, session replays, and qualitative feedback), surface where users struggle, propose changes (like copy, flow, component layout, and recommendations), and can even wire up feature flags or experiments to prove whether a change works. The effect is a dramatic reduction in time to learning—and faster convergence on what users value. 
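As a deliberately simplified illustration, one pass of that observe → hypothesize → change → validate cycle can be sketched as a single function. The telemetry keys and the proposal/validation callbacks below are hypothetical stand-ins, not a real Microsoft API:

```python
def product_loop(drop_off_rates, propose_change, validate):
    """One pass of the observe -> hypothesize -> change -> validate cycle.

    drop_off_rates: map of funnel step -> observed drop-off rate (telemetry)
    propose_change: drafts a change for the worst-performing step
    validate:       e.g., runs the change behind a feature flag or experiment
    """
    worst_step = max(drop_off_rates, key=drop_off_rates.get)  # observe
    change = propose_change(worst_step)                       # hypothesize + change
    return change if validate(change) else None               # validate, else discard

# Hypothetical telemetry: checkout loses the most users.
telemetry = {"signup": 0.10, "search": 0.15, "checkout": 0.40}
result = product_loop(
    telemetry,
    propose_change=lambda step: f"simplify {step} flow",
    validate=lambda change: True,
)
# result == "simplify checkout flow"
```

In a real system, `validate` would be the expensive part—an experiment or flag rollout measured against a success metric—while the AI assists with both the observation and the proposal.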

Pre-AI versus post-AI user interaction

Pre-AI (hunt-and-peck): Users navigate dense menus and deep information architectures, scanning screens to find the one control that does what they need. Every step risks a dead end, and context is easy to lose when switching pages or tools.

Post-AI (intent-first): Users express intent in natural language (like text, speech, or multimodal input). The app interprets that intent, keeps context, and routes the user to the right data, action, or workflow—often composing the UI on the fly (for example, drafting a form, filtering to the relevant records, and suggesting the next best action). Think of this as moving from “Where do I click?” to “Here’s what I need—do it with me.”

What changes for developers

From page builders to experience composers. Devs design intent routers and orchestrations that connect models, agents, data, and services—so the app can respond intelligently to varied user goals without forcing rigid click paths.

From manual analysis to AI-assisted product loops. Instead of hand-rolled dashboards and ad hoc investigations, AI highlights opportunity areas, drafts experiment plans, and opens pull requests with proposed code and config changes. Developers review, constrain, and ship—with guardrails.

From “debt sprints” to continuous modernization. Agents can keep the app current—upgrading frameworks (for example, .NET and Java), repairing dependency drift, patching vulnerabilities, and standardizing pipelines—while feature work continues. That turns tech debt into a managed, always-on workload rather than a periodic fire drill.

Bottom line: AI tightens the loop between what users want and what the app becomes. Developers spend less time on menu wiring and manual forensics, and more time defining intent, composing agentic flows, setting success metrics, and supervising safe, measurable change. Apps improve faster—not just because they’re smarter, but because teams can experiment, learn, and adapt as usage grows. 
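A minimal sketch of the “intent router” idea: map a free-form request to the best-matching action, with a clarifying question as the fallback. A production system would use a model for intent classification; this keyword version, with hypothetical handler names, just shows the shape of the pattern:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# Hypothetical handlers -- illustrative names, not a real API.
def filter_records(utterance: str) -> str:
    return f"filtered records for {utterance!r}"

def draft_form(utterance: str) -> str:
    return f"drafted form for {utterance!r}"

def ask_clarification(utterance: str) -> str:
    return f"asked a clarifying question about {utterance!r}"

@dataclass
class Route:
    keywords: Tuple[str, ...]
    handler: Callable[[str], str]

ROUTES = [
    Route(("find", "show", "filter"), filter_records),
    Route(("create", "draft", "new"), draft_form),
]

def route_intent(utterance: str) -> str:
    """Route a natural-language request to the best-matching action,
    falling back to a clarifying question when no route matches."""
    words = set(utterance.lower().split())
    for route in ROUTES:
        if words & set(route.keywords):
            return route.handler(utterance)
    return ask_clarification(utterance)
```

Swapping the keyword match for model-based intent classification changes the matching logic, not the architecture: the router/handler split—and the fallback to clarification—stays the same.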

Where do you see Microsoft standing out in a sea of AI competition? Microsoft’s biggest differentiator is our ability to connect AI agents to the systems, data, and workflows that power real business. We serve organizations with massive, complex codebases and deep operational requirements—and our tools are designed to meet them where they are. With GitHub, Visual Studio, and Azure AI Foundry, millions of developers can access the latest models and agentic capabilities directly in their daily workflow, backed by enterprise-grade security, governance, and responsible AI benchmarks. 

Software development with GitHub Copilot and Microsoft Azure. Read the blog ↗

But what truly sets Microsoft apart is the breadth of integration. AI agents built on our platform can tap into a huge ecosystem of business apps, data sources, and operational systems—whether it’s enterprise resource planning (ERP), customer relationship management (CRM), human resources (HR), finance, or custom line-of-business solutions. Through open standards like Model Context Protocol (MCP) and Agent-to-Agent (A2A), agents can securely connect, orchestrate, and automate across these environments, making it possible to deliver outcomes that matter: automating workflows, modernizing legacy systems, and driving continuous improvement. 

Yina Arenas’s Agent Factory series shows how Microsoft is building the blueprint for safe, secure, and reliable AI agents—from rapid prototyping to production, observability, and real-world use cases. Our platform isn’t just about building agents; it’s about enabling them to work with the systems and data that organizations already rely on, so teams can move from experiments to enterprise-scale impact. 

At the end of the day, Microsoft’s advantage is not just scale—it’s the ability to make AI agents truly useful by connecting them to the heart of the business, with the tools and standards to do it safely and securely. 

When should developers decide which tasks to delegate to agents versus tackle themselves for maximum impact? As my colleague, David Fowler, put it: “Humans are the UI thread; agents are the background thread. Don’t block the UI!” Developers should focus on the creative, judgment-driven work—setting intent, making architectural decisions, and shaping the product experience. Agents excel at handling the repetitive, long-running, or cross-cutting tasks that can quietly run in the background: code health, dependency upgrades, telemetry triage, and even scaffolding out solutions to unblock the blank page. 

The key is to delegate anything that slows down your flow or distracts from high-impact work. If a task is routine, latency-tolerant, or easily reversible, let an agent handle it. If it requires deep context, product judgment, or could fundamentally change the direction of your app, keep it on the human “UI thread.” This way, developers stay responsive and focused, while agents continuously improve the codebase and operations in parallel. 
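That delegation heuristic can be written down almost literally. The `Task` attributes below are hypothetical labels for the qualities described above, not a real triage API—just a sketch of the decision rule:

```python
from dataclasses import dataclass

@dataclass
class Task:
    routine: bool            # repetitive, well-understood work
    latency_tolerant: bool   # no one is blocked waiting on the result
    reversible: bool         # easy to roll back (e.g., lands behind a PR or flag)
    needs_judgment: bool     # requires deep context or product/architectural judgment

def delegate_to_agent(task: Task) -> bool:
    """Keep judgment-heavy work on the human 'UI thread';
    hand routine, low-risk work to a background agent."""
    if task.needs_judgment:
        return False
    return task.routine or task.latency_tolerant or task.reversible

# Dependency upgrades: routine, can run overnight, easy to revert -> delegate.
upgrade = Task(routine=True, latency_tolerant=True, reversible=True, needs_judgment=False)
# A change to the app's core architecture stays with the developer.
redesign = Task(routine=False, latency_tolerant=False, reversible=False, needs_judgment=True)
```

The ordering matters: judgment acts as a veto before any of the “delegate” signals are consulted, which mirrors the advice to keep direction-setting work human even when it is slow or repetitive.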

By striking the right balance, developers can minimize time spent on routine tasks and stay focused on the work that moves products and teams forward.

Why are AI coding tools attracting so much investment and interest? Why reimagine the developer experience now? Because software development already generates the kind of rich, structured signals that AI thrives on. Code and diffs, pull request reviews, test results, build logs, runtime and performance telemetry, issue trackers, and deployment outcomes are all labeled, timestamped, and traceable. That makes the dev environment a natural proving ground for applied machine learning: models can learn from real work, be evaluated against objective checks (like tests, linters, and policies), and improve inside an existing feedback loop (such as Continuous Integration and Continuous Delivery (CI/CD), feature flags, and canaries). In short, we have the data, the instrumentation, and the validation built in. 

There’s also a cultural reason: developers automate away friction—from compilers and build systems to version control, CI/CD, containers, and infrastructure as code. Generative AI is the next step in that lineage. It shifts more work from hand authoring to specifying intent and supervising outcomes: copilots help with exploration and acceleration; agents handle continuous code health, upgrades, and safe, reversible changes. Investment flows here because better developer experience maps directly to throughput, quality, and time to value. 

And yes—the future starts with developers. As dev teams discover where AI delivers real support in their own workflow, those patterns spread to the rest of the business, accelerating how every function experiments, learns, and ships. 

Empowering developers with AI to deliver lasting impact

We’re entering a new era of software delivery—and it’s agentic, adaptive, and deeply human-centered. With copilots and agents in the loop, developers are building systems that continually adapt to business needs. At Microsoft, we’re empowering developers to move from idea to impact faster by focusing on creativity, product vision, and building with trustworthy AI.

In fact, Frontier Firms are already showing us what’s possible. They treat software as a dynamic system—refined through telemetry, experimentation, and AI-powered insight. And across all types of organizations, compelling AI use cases are emerging—from customer service to software engineering—setting the pace for what’s possible with the latest AI tooling.

Empower your people and drive real results with Microsoft AI

Ready to learn more? Discover resources and tools to accelerate your AI journey:

Learn three skill-building insights that help Frontier Firms drive innovation.
Get started with GitHub Copilot.
Build your first production-grade agent in under an hour with Azure AI Foundry.
Simplify development and meet evolving business needs with Microsoft Cloud solutions.
The post FYAI: Why developers will lead AI transformation across the enterprise appeared first on Microsoft Azure Blog.
Source: Azure

Accelerating open-source infrastructure development for frontier AI at scale

In the transition from building computing infrastructure for cloud scale to building cloud and AI infrastructure for frontier scale, the world of computing has experienced tectonic shifts in innovation. Throughout this journey, Microsoft has shared its learnings and best practices, optimizing our cloud infrastructure stack in cross-industry forums such as the Open Compute Project (OCP) Global Foundation.

Today, we see that the next phase of cloud infrastructure innovation is poised to be the most consequential period of transformation yet. In just the last year, Microsoft has added more than 2 gigawatts of new capacity and launched the world’s most powerful AI datacenter, which delivers 10x the performance of the world’s fastest supercomputer today. Yet, this is just the beginning.

Learn more about Microsoft’s global infrastructure

Delivering AI infrastructure at the highest performance and lowest cost requires a systems approach, with optimizations across the stack to drive quality, speed, and resiliency at a level that can provide a consistent experience to our customers. In the quest to supply resilient, sustainable, secure, and widely scalable technology to handle the breadth of AI workloads, we’re embarking on an ambitious new journey: one not just of redefining infrastructure innovation at every layer of execution from silicon to systems, but one of tightly integrated industry alignment on standards that offer a model for global interoperability and standardization.

At this year’s OCP Global Summit, Microsoft is contributing new standards across power, cooling, sustainability, security, networking, and fleet resiliency to further advance innovation in the industry.

Redefining power distribution for the AI era

As AI workloads scale globally, hyperscale datacenters are experiencing unprecedented power density and distribution challenges.

Last year, at the OCP Global Summit, we partnered with Meta and Google in the development of Mt. Diablo, a disaggregated power architecture. This year, we’re building on this innovation with the next step of our full-stack transformation of datacenter power systems: solid-state transformers. Solid-state transformers simplify the power chain with new conversion technologies and protection schemes that can accommodate future rack voltage requirements.

Training large models across thousands of GPUs also introduces variable and intense power draw patterns that can strain the grid, the utility, and traditional power delivery systems. These fluctuations not only risk hardware reliability and operational efficiency but also create challenges across capacity planning and sustainability goals.

Together with key industry partners, Microsoft is leading a power stabilization initiative to address this challenge. In a recently published paper with OpenAI and NVIDIA—Power Stabilization for AI Training Datacenters—we address how full-stack innovations spanning rack-level hardware, firmware orchestration, predictive telemetry, and facility integration can smooth power spikes, reduce power overshoot by 40%, and mitigate operational risk and costs to enable predictable and scalable power delivery for AI training clusters.
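One intuitive way to picture rack-level power smoothing is a ramp-rate limiter: clamp how fast demand may change between control intervals so upstream equipment never sees the raw spike. This is an illustrative sketch of that general idea, not the specific mechanism from the paper:

```python
def limit_ramp(power_draw, max_step):
    """Clamp step-to-step changes in a power-draw series (e.g., watts per
    control interval) so upstream equipment sees a bounded ramp rate
    instead of raw GPU power spikes."""
    smoothed = [power_draw[0]]
    for target in power_draw[1:]:
        prev = smoothed[-1]
        step = max(-max_step, min(max_step, target - prev))
        smoothed.append(prev + step)
    return smoothed

# A synthetic spike: idle -> burst -> idle, limited to 100 W per interval.
print(limit_ramp([100, 500, 100], max_step=100))  # [100, 200, 100]
```

The trade-off is that clamped demand means the workload briefly gets less power than it asked for, which is why the paper’s approach spans the full stack—firmware power caps, telemetry-driven prediction, and facility integration—rather than relying on a single filter.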

This year, at the OCP Global Summit, Microsoft is joining forces with industry partners to launch a dedicated power stabilization workgroup. Our goal is to foster open collaboration across hyperscalers and hardware partners, sharing our learnings from full-stack innovation and inviting the community to co-develop new methodologies that address the unique power challenges of AI training datacenters. By building on the insights from our recently published white paper, we aim to accelerate industry-wide adoption of resilient, scalable power delivery solutions for the next generation of AI infrastructure. Read more about our power stabilization efforts.

Cooling innovations for resiliency

As the power profile for AI infrastructure changes, we are also continuing to rearchitect our cooling infrastructure to support evolving needs around energy consumption, space optimization, and overall datacenter sustainability. Various cooling solutions must be implemented to support the scale of our expansion—as we seek to build new AI-scale datacenters, we are also utilizing Heat Exchanger Unit (HXU)-based liquid cooling to rapidly deploy new AI capacity within our existing air-cooled datacenter footprint.

Microsoft’s next generation HXU is an upcoming OCP contribution that enables liquid cooling for high-performance AI systems in air-cooled datacenters, supporting global scalability and rapid deployment. The modular HXU design delivers 2X the performance of current models and maintains >99.9% cooling service availability for AI workloads. No datacenter modifications are required, allowing seamless integration and expansion. Learn more about the next generation HXU here. 

Meanwhile, we’re continuing to innovate across multiple layers of the stack to address changes in power and heat dissipation—utilizing facility water cooling at datacenter-scale, circulating liquid in closed-loops from server to chiller; and exploring on-chip cooling innovations like microfluidics to efficiently remove heat directly from the silicon.

Unified networking solutions for growing infrastructure demands 

Scaling hundreds of thousands of GPUs to operate as a single, coherent system comes with significant challenges in creating rack-scale interconnects that deliver low-latency, high-bandwidth fabrics that are both efficient and interoperable. As AI workloads grow exponentially and infrastructure demands intensify, we are exploring networking optimizations that can support these needs. To that end, we have developed scale-up, scale-out, and Wide Area Network (WAN) solutions to enable large-scale distributed training.

We partner closely with standards bodies, like UEC (Ultra Ethernet Consortium) and UALink, focused on innovation in networking technologies for this critical element of AI systems. We are also driving forward adoption of Ethernet for scale-up networking across the ecosystem and are excited to see the Ethernet for Scale-up Networking (ESUN) workstream launch under the OCP Networking Project. We look forward to promoting adoption of cutting-edge networking solutions and enabling a multi-vendor ecosystem based on open standards.

Security, sustainability, and quality: Fundamental pillars for resilient AI operations

Defense in depth: Trust at every layer

Our comprehensive approach to scaling AI systems responsibly includes embedding trust and security into every layer of our platform. This year, we are introducing new security contributions that build on our existing body of work in hardware security and introduce new protocols that are uniquely fit to support new scientific breakthroughs that have been accelerated with the introduction of AI:

Building on past years’ contributions and Microsoft’s collaboration with AMD, Google, and NVIDIA, we have further enhanced Caliptra, our open-source silicon root of trust. The introduction of Caliptra 2.1 extends the hardware root-of-trust to a full security subsystem. Learn more about Caliptra 2.1 here.

We have also added Adams Bridge 2.0 to Caliptra to extend support for quantum-resilient cryptographic algorithms to the root-of-trust.

Finally, we are contributing OCP Layered Open-source Cryptographic Key Management (L.O.C.K)—a key management block for storage devices that secures media encryption keys in hardware. L.O.C.K was developed through collaboration between Google, Kioxia, Microsoft, Samsung, and Solidigm.

Advancing datacenter-scale sustainability 

Sustainability continues to be a major area of opportunity for industry collaboration and standardization through communities such as the Open Compute Project. Working collaboratively as an ecosystem of hyperscalers and hardware partners is one catalyst to address the need for sustainable datacenter infrastructure that can effectively scale as compute demands continue to evolve. This year, we are pleased to continue our collaborations as part of OCP’s Sustainability workgroup across areas such as carbon reporting, accounting, and circularity:

Announced at this year’s Global Summit, we are partnering with AWS, Google, and Meta to fund the Product Category Rule initiative under the OCP Sustainability workgroup, with the goal of standardizing carbon measurement methodology for devices and datacenter equipment.

Together with Google, Meta, OCP, Schneider Electric, and the iMasons Climate Accord, we are establishing the Embodied Carbon Disclosure Base Specification to establish a common framework for reporting the carbon impact of datacenter equipment.

Microsoft is advancing the adoption of waste heat reuse (WHR). In partnership with the NetZero Innovation Hub, NREL, and EU and US collaborators, Microsoft has published heat reuse reference designs and is developing an economic modeling tool that gives datacenter operators and waste heat off-takers the cost of building WHR infrastructure under conditions such as the size and capacity of the WHR system, season, location, and the WHR mandates and subsidies in place. These region-specific solutions help operators convert excess heat into usable energy—meeting regulatory requirements and unlocking new capacity, especially in regions like Europe where heat reuse is becoming mandatory.

We have developed an open methodology for Life Cycle Assessment (LCA) at scale across large-scale IT hardware fleets to drive towards a “gold standard” in sustainable cloud infrastructure.

Rethinking node management: Fleet operational resiliency for the frontier era

As AI infrastructure scales at an unprecedented pace, Microsoft is investing in standardizing how diverse compute nodes are deployed, updated, monitored, and serviced across hyperscale datacenters. In collaboration with AMD, Arm, Google, Intel, Meta, and NVIDIA, we are driving a series of Open Compute Project (OCP) contributions focused on streamlining fleet operations, unifying firmware management and manageability interfaces, and enhancing diagnostics, debug, and RAS (Reliability, Availability, and Serviceability) capabilities. This standardized approach to lifecycle management lays the foundation for consistent, scalable node operations during this period of rapid expansion. Read more about our approach to resilient fleet operations. 

Paving the way for frontier-scale AI computing 

As we enter a new era of frontier-scale AI development, Microsoft takes pride in leading the advancement of standards that will drive the future of globally deployable AI supercomputing. Our commitment is reflected in our active role in shaping the ecosystem that enables scalable, secure, and reliable AI infrastructure across the globe. We invite attendees of this year’s OCP Global Summit to connect with Microsoft at booth #B53 to discover our latest cloud hardware demonstrations. These demonstrations showcase our ongoing collaborations with partners throughout the OCP community, highlighting innovations that support the evolution of AI and cloud technologies.

Connect with Microsoft at the OCP Global Summit 2025 and beyond

Visit Microsoft at the OCP Global Summit at booth #B53.

Check out sessions delivered by Microsoft and partners from OCP Summit 2025.

Take a virtual tour of Microsoft datacenters.

Learn more about Microsoft’s global infrastructure.

The post Accelerating open-source infrastructure development for frontier AI at scale appeared first on Microsoft Azure Blog.
Source: Azure

Oracle Database@Azure offers new features, regions, and programs to unlock data and AI innovation

Together, Microsoft and Oracle are delivering the most comprehensive, enterprise‑ready platform for organizations migrating their Oracle solutions to the public cloud—especially those aiming to empower IT professionals and developers to streamline AI adoption and enhance employee productivity.

Oracle Database@Azure was the first offering of its kind in the market and today has the broadest regional availability, new ways to unify your data in Microsoft Fabric, deeper security integrations with Microsoft Defender, and can run most Oracle Database services—Base Database Service, Exadata Database Service on Dedicated Infrastructure, Exadata Database Service on Exascale Infrastructure, and Autonomous Database, with either Oracle Database 19c or 23ai—on Azure.

Get started with Oracle Database@Azure

The result? A truly enterprise‑ready platform that offers more choice, increased control, and expanded opportunity to innovate with confidence–and customers are excited about the impact it’s driving in action.

Oracle Database@Azure delivered Exadata-grade performance natively within Azure, enabling us to host our Oracle EBS in the cloud without compromise. We gained native, real-time access to EBS data from Azure and seamless integration with both Oracle and non-Oracle data sources. Paired now with Microsoft Fabric, Power BI, and Copilot studio, our team will be able to accelerate insight delivery to business stakeholders and build agentic workflows faster. It’s a practical path to iterate on new features while keeping governance and security at the forefront.
Mahesh Tyagi, Vice President Finance Engineering, Activision Blizzard 

New Oracle Database@Azure features strengthen its enterprise leadership

Enterprise-grade capabilities are essential for organizations that depend on Oracle databases for mission-critical workloads. That’s why Microsoft continues to advance Oracle Database@Azure, bringing together the scale and resilience of Azure with industry-leading security and AI innovation.

Take a look at our latest announcements. For more details on each of these capabilities, check out our technical blog.

Announcing two new capabilities for real-time data integration and replication with Microsoft Fabric for an AI-ready data estate

Oracle Database mirroring in OneLake, now in public preview, enables continuous zero-ETL synchronization of Oracle data into OneLake, creating a unified real-time data estate in Microsoft Fabric. Also available today, native Oracle GoldenGate integration offers managed, high-performance, low-latency replication and can be purchased using Microsoft Azure Consumption Commitment (MACC). Once your Oracle data is connected through Oracle Database@Azure, you can use powerful AI innovation tools like Microsoft Copilot Studio, Azure AI Foundry, and Power BI.

Oracle Base Database is generally available

Oracle Database@Azure offers customers the flexibility to run any Oracle Database service on Azure. Oracle Database@Azure now supports all popular Oracle database services—Base Database Service, Exadata Database Service on Dedicated Infrastructure, Exadata Database Service on Exascale Infrastructure and Autonomous Database—and also the choice of using either Oracle Database 19c or 23ai. This provides customers with a comprehensive set of flexible, simple, and cost-effective migration options when moving their Oracle databases to Azure.

Support for Oracle workloads goes beyond Oracle Database services. We’re excited to share that Oracle has introduced support policies for running Oracle E-Business Suite, PeopleSoft, JD Edwards EnterpriseOne, Enterprise Performance Management, and Oracle Retail Applications in Microsoft Azure using Oracle Database@Azure. This enables businesses to harness the power of Microsoft Azure while leveraging Oracle’s industry-leading database technology to achieve greater scalability, performance, and security. We continue to offer full Oracle Maximum Availability Architecture (MAA) support—up to platinum tier—available exclusively on Azure, giving customers the highest levels of availability, disaster recovery, and zero-data-loss protection for mission-critical workloads.

Microsoft Defender now brings industry-leading threat detection and response to Oracle Database@Azure

Microsoft Defender is a cloud-native security platform that provides unified threat protection, vulnerability management, and automated compliance to safeguard Oracle Database@Azure workloads. Complemented by Microsoft Sentinel’s AI-powered security information and event management (SIEM) for real-time monitoring, and Microsoft Entra ID’s unified identity and access controls, customers get comprehensive enterprise-grade protection designed for today’s complex threat landscape.  

Azure Arc for Oracle Database@Azure

Extend Azure’s management, governance, and security capabilities across environments—whether on-premises, multicloud or edge. From a single control plane, Azure Arc enables you to enforce policies, manage identities, and automate lifecycle operations for all your Azure resources—and now, for your Oracle databases running natively on Azure. 

Azure IoT Operations and Microsoft Fabric now power an integration blueprint with Oracle Fusion Cloud Supply Chain and Manufacturing (SCM)

This integration enables manufacturers to capture live insights from factory equipment and sensors, automate key processes, and drive data‑driven decisions for greater efficiency and responsiveness.

Available in over 28 regions globally

With plans to reach 33 live regions by the end of the year, Oracle Database@Azure empowers organizations to deploy closer to their applications and users across North America, EMEA, and APAC. Stay up to date on the latest regions to go live here.

Introducing Azure Accelerate for Oracle

To help every organization start quickly and confidently—regardless of their size—Microsoft is excited to offer Azure Accelerate benefits to Oracle customers. Azure Accelerate is a program designed to support customers across their cloud and AI journey with expert guidance and investments. Customers can cut through the complexity of their Oracle migrations—and related application migration, modernization, and AI innovation projects—while also minimizing project costs. Azure Accelerate makes it easier than ever to bring your Oracle workloads to Azure by offering:

Access to trusted experts: Tap into the deep expertise of Azure’s specialized partner ecosystem. Additionally, you can take advantage of the Cloud Accelerate Factory benefit, which provides Microsoft experts at no additional cost.

Microsoft investments: Access Partner funding and Azure credits designed to make your migration to Azure more cost effective and minimize project risk.

Comprehensive coverage: Get help at every stage of the project, starting with an initial assessment through pilots or proof-of-value to full-scale implementation.

With Azure Accelerate, Oracle customers can now migrate more efficiently while integrating AI into their strategy, alongside Azure experts from day one.

Channel partners can now resell Oracle Database@Azure

Microsoft AI Cloud Partners and Oracle Partner Network (OPN) members can now purchase and resell Oracle Database@Azure—right from the Microsoft Marketplace. This new model underscores Microsoft and Oracle’s joint commitment to the partner community while streamlining migration and modernization for customers who prefer to purchase through their trusted partners.

Microsoft’s partner reseller programme helped CGI select Oracle Database@Azure to consolidate cloud services under a single cloud provider, ensuring cost efficiency, elasticity and redundancy required to meet CGI’s client key requirements. For Smart DCC, CGI is working with Oracle and Microsoft to implement the solution through the Microsoft marketplace reseller model, providing a streamlined procurement route on a secure, enterprise-ready platform for mission-critical workloads.
Ro Crawford, VP Consulting Services, CGI

We are also excited to share that Oracle Database@Azure is now included in the Microsoft Most Valuable Professional (MVP) program under the new technology area, Azure Solutions and Ecosystem. This new technology area spans mission-critical workloads and modernization efforts, including Oracle Database@Azure, Azure VMware Solution (AVS), Nutanix on Azure, and mainframe modernization strategies. The Microsoft MVP program recognizes exceptional community leaders for their technical expertise, leadership, speaking experience, online influence, and commitment to solving real-world problems. To learn more about the program, visit this FAQ.

Oracle Database@Azure customer momentum

Customers like Conduent, BV Liantis, SEFE, Astellas Pharmacy, Craneware, and Medline have moved their Oracle databases to Oracle Database@Azure to optimize performance and reduce latency while unlocking a future-ready foundation for AI.

We’re excited to spotlight our customer innovation in our sessions at Oracle AI World. Don’t miss Activision Blizzard on stage for Microsoft’s Spotlight Session on Wednesday, October 15 at 4:45 PM PT. You can find our full session list and featured customers here. 

Get started
Oracle Database@Azure is an Oracle database service running on Oracle Cloud Infrastructure (OCI), colocated in Microsoft data centers.

Learn more here

Looking ahead

We’re excited to continue this journey—bringing together the best of Oracle and Microsoft to help customers innovate faster, operate smarter, and lead in the era of intelligent applications.

If you’re attending Oracle AI World 2025, come talk to our experts at the Microsoft booth (#3005) and be sure to check out our sessions.

Learn more about Oracle on Azure | Microsoft Azure.

To get started, contact our sales team.
The post Oracle Database@Azure offers new features, regions, and programs to unlock data and AI innovation appeared first on Microsoft Azure Blog.
Source: Azure

Sora 2 now available in Azure AI Foundry

Turning imagination into reality has never been more instantaneous, or more powerful, than it is today with the launch of OpenAI’s Sora 2, now in public preview in Azure AI Foundry.

Azure AI Foundry is the developer destination built for creators, from startups to global businesses. The platform now offers a curated catalog of generative media models, including OpenAI’s Sora, GPT-image-1 and GPT-image-1-mini, Black Forest Labs’ Flux 1.1 and Kontext Pro, and more. These models empower software development companies and builders to serve creatives with new and unique capabilities—to accelerate storyboarding, drive engagement, and transform the creative process, all without sacrificing the safety, reliability, and integration businesses expect.

Start creating with Sora 2 in Azure AI Foundry

What can you create with Sora 2 in Azure AI Foundry?

Sora 2 in Azure AI Foundry isn’t just another video generation tool; it’s a creative powerhouse, seamlessly integrated into a platform built for innovation, trust, and scale. Unlike standalone solutions, Azure AI Foundry offers a unified platform where developers can access Sora 2 alongside other leading generative models in a secure, scalable, and structured environment to achieve more:

Marketers can rapidly produce stunning, branded campaign assets including animated assets for product launches and personalized content to capture attention and drive engagement.

Retailers can engage customers with interactive and localized campaigns to accelerate time-to-market and transform their customers’ online shopping experience.

Creative directors can transform imaginative ideas into dynamic movie trailers and cinematic experiences to test concepts, while Sora 2’s realistic world simulation, synchronized audio, and creative controls help bring visions to life.

Educators can create immersive lesson plans and interactive media that spark curiosity and deepen understanding.

With Sora 2 in Azure AI Foundry, developers across industries can innovate boldly and confidently. Azure AI Foundry’s unified environment, advanced capabilities, and enterprise-grade security provide the foundation for creativity to flourish and ideas to become reality.

What features and controls are available?

Sora 2 in Azure AI Foundry stands out by combining OpenAI’s most advanced video generation capabilities with the trusted infrastructure and security controls of Microsoft Azure, unlocking new possibilities for every developer with a set of core features:

Realistic video generation powered by advanced world simulation and physics.

Generation based on input text, images, and video.

Synchronized audio and dialogue for immersive storytelling.

Audio available in multiple languages.

Enhanced creative control, including detailed prompt understanding for studio shots, scene details, and camera angles.

Seamless integration into business workflows, backed by Microsoft’s enterprise-grade safety and security.

Microsoft is committed to delivering secure and safe AI solutions for organizations of all sizes. Through Azure AI Foundry and our responsible AI principles, we empower customers with embedded security, safety, and privacy controls.

This foundation extends to Sora 2, where our advanced safety systems and robust controls work together to help developers innovate more confidently in Azure AI Foundry:

Content filters for inputs: Screens text, image, and video inputs used as prompts.

Content filters for outputs: Analyzes video frames and audio; can block content to help comply with organizational policies.

Enterprise-grade security: Azure’s compliance and governance frameworks protect customer data and creative assets. 

Sora 2 Azure AI Foundry pricing and availability

Starting today, Sora 2 is available via API through Standard Global in Azure AI Foundry.
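As a rough illustration, a generation job request body might be assembled as follows. This is a sketch only: the field names (`model`, `prompt`, `width`, `height`, `n_seconds`) are assumptions for illustration, so check the Azure AI Foundry API reference for the exact contract; the two supported sizes come from the pricing table below.

```python
# Sketch: building the JSON body for a Sora 2 video generation job.
# Field names here are illustrative assumptions, not the confirmed API
# contract -- verify against the Azure AI Foundry documentation.

def build_generation_request(prompt: str, width: int = 1280,
                             height: int = 720, seconds: int = 8) -> dict:
    """Return a request body for a Sora 2 generation job."""
    # The two sizes listed on the pricing table (portrait and landscape).
    supported = {(720, 1280), (1280, 720)}
    if (width, height) not in supported:
        raise ValueError(f"unsupported size {width}x{height}")
    return {
        "model": "sora-2",
        "prompt": prompt,
        "width": width,
        "height": height,
        "n_seconds": seconds,
    }

body = build_generation_request("A drone shot over a neon-lit harbor at night")
```

In practice this body would be POSTed to your Azure AI Foundry deployment endpoint with your credentials attached.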

Model     Size                                        Price per second (in USD)
Sora 2    Portrait: 720×1280 or Landscape: 1280×720   $0.10

Please refer to the Azure AI Foundry Models page for future updates in deployment types and availability.
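At $0.10 per second of generated video, cost scales linearly with clip length. A quick back-of-the-envelope helper (the assumption that each additional variant of a clip is billed per second at the same rate is ours, not taken from the pricing page):

```python
# Estimate Sora 2 generation cost from the published $0.10/second rate.
PRICE_PER_SECOND_USD = 0.10

def estimate_cost(seconds: int, variants: int = 1) -> float:
    """Cost in USD for `variants` clips, each `seconds` long.
    Assumes every variant is billed per second at the same rate."""
    return round(seconds * variants * PRICE_PER_SECOND_USD, 2)

# e.g. four 10-second draft variants of a campaign asset:
cost = estimate_cost(10, variants=4)  # -> 4.0 USD
```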

Get started with AI as your creative partner

Sora 2 is designed to empower and inspire developers. By accelerating early production and enabling rapid prototyping, Sora 2 frees up time for more ideation and storytelling. The goal is to bring human creativity to the next level, making it easier for anyone to turn ideas into compelling visual stories.

Ready to create with Sora 2?
Explore the full catalog of generative media models in Azure AI Foundry.

Get started

The post Sora 2 now available in Azure AI Foundry appeared first on Microsoft Azure Blog.
Source: Azure

From queries to conversations: Unlock insights about your data using Azure Storage Discovery—now generally available

We are excited to announce the general availability of Azure Storage Discovery, a fully managed service that delivers enterprise-wide visibility into your data estate in Microsoft Azure Blob Storage and Azure Data Lake Storage. Azure Storage Discovery helps you optimize storage costs, comply with security best practices, and drive operational efficiency. With the included Microsoft Copilot in Azure integration, all decision makers and data users can access and uncover valuable data management insights using simple, everyday language—no specialized programming or query skills required. The intuitive experience provides advanced data visualizations and actionable intelligence focused on what matters most to you.

What is Azure Storage Discovery?

Businesses are speeding up digital transformation by storing large amounts of data in Azure Storage for AI, analytics, cloud native apps, HPC, backup, and archive. This data spans multiple subscriptions, regions, and accounts to meet workload needs and compliance rules. The data sprawl makes it challenging to track data growth, spot unexpected data reduction, or optimize costs without clear visibility into data trends and access patterns. Organizations struggle to identify which datasets and business units drive growth and usage the most. Without a global view and streamlined insights across all storage accounts, it’s challenging to ensure data availability, residency, security, and redundancy are consistently aligned with best practices and regulatory compliance requirements.

Azure Storage Discovery makes it simple to gain and analyze insights to manage such large data estates.

Analyze your data estate with Azure Storage Discovery

Azure Storage Discovery lets you easily set up a workspace with storage accounts from any region or subscription you can access. The first insights are available in less than 24 hours, and you can get started by analyzing your data estate.

Unlock intelligent insights using natural language with Copilot in Azure

Use natural language to ask for the storage insights you need to accomplish your storage management goals. Copilot in Azure expresses them using rich data visualizations, like tables and charts.

Interactive reports built into the Azure portal

Azure Storage Discovery generates out-of-box dashboards you can access from the Azure portal, with insights that help you visualize and analyze your data estate. The reports include filters for your storage data estate by region, redundancy, and performance, allowing you to quickly drill down and uncover the insights important to you.

Advanced storage insights

The reports deliver insights, at a glance, across multiple dimensions, helping you manage your data effectively:

Capacity: Insights about resource and object sizes and counts, aggregated by subscriptions, resource groups, and storage accounts, with growth trends.

Activity: Visualize transactions, ingress, and egress for insights on how your storage is accessed and utilized.

Security: Highlights critical security configurations of your storage resources and flags outliers, including public network access, shared access keys, anonymous access to blobs, and encryption settings.

Configurations: Surfaces configuration patterns across your storage accounts like redundancy, lifecycle management, inventory, and others.

Errors: Highlights failed operations and error codes to help identify patterns of issues that might be impacting workloads.

Kickstart your insights for free, including 15 days of historical data

Getting started is easy with access to 15 days of historical insights within hours of deploying your Azure Storage Discovery workspace. The standard pricing plan offers the most comprehensive set of insights, while the free pricing plan gets you going with the basics.

Analyze long-term trends with 18 months of insights

An Azure Storage Discovery workspace on the standard pricing plan retains insights for up to 18 months, so you can analyze long-term trends and any business- or season-specific workload patterns.

Azure Storage Discovery is available to you today! You can learn more about Azure Storage Discovery here and even get started in the Azure portal here.

Use Copilot to solve the most important business problems

During the design of Azure Storage Discovery, we spoke with many customers across various business-critical roles, such as IT managers, data engineers, and CIOs. We realized AI could simplify onboarding by removing the need for infrastructure deployment or coding knowledge. As a result, we included Copilot in Azure Storage Discovery from the start. It offers insights beyond standard reports and dashboards using natural language queries to deliver actionable information through visualizations like trend charts and tables.

To get started, simply navigate to your Azure Storage Discovery workspace resource in the Azure portal, and activate Copilot.

Identify opportunities to optimize costs

Understanding storage size trends is crucial for cost optimization, and analyzing these trends by region and performance type can reveal important patterns about how the data is evolving over time. With Azure Storage Discovery’s 18 months of data retention, you can uncover long-term trends and unexpected changes across your data estate, while Copilot quickly visualizes storage size trends broken down by region.

“How is the storage size trending over the past month by region?”

Finding cost-saving opportunities across many storage accounts can be difficult, but Copilot simplifies this by highlighting accounts with the highest savings potential based on capacity and transactions as shown below.

“Provide a list of storage accounts with default access tier as Hot, that are above 1TiB in size and have the least transactions”

Before taking any action, you can dive even deeper into the insights by evaluating distributions. For example, a distribution of access tiers across blobs.

“Show me a chart of blob count by blob access tier”

Knowing that the majority of objects are still in the Hot tier provides immediate opportunities to reduce costs by enabling Azure Storage Actions to tier down or even delete data that is not accessed frequently. Azure Storage Actions is a fully managed, serverless platform that automates data management tasks—like tiering, retention, and metadata updates—across millions of blobs in Azure Blob Storage and Data Lake Storage.
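The savings at stake can be sketched with per-GB rates. The prices below are hypothetical placeholders, not Azure’s actual list prices (see the Azure Blob Storage pricing page for your region), and the sketch ignores transaction and retrieval charges, which matter for actively accessed data:

```python
# Rough monthly storage savings from moving rarely accessed blobs Hot -> Cool.
# The per-GB prices are hypothetical placeholders for illustration only.
HOT_USD_PER_GB = 0.018
COOL_USD_PER_GB = 0.010

def monthly_savings_usd(cold_data_gb: float) -> float:
    """Storage-cost delta for data that can be tiered down.
    Ignores transaction and retrieval charges."""
    return round(cold_data_gb * (HOT_USD_PER_GB - COOL_USD_PER_GB), 2)

savings = monthly_savings_usd(50_000)  # 50 TB of cold data
```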

Assess whether storage configurations align with security best practices

For better storage security, Microsoft recommends using Microsoft Entra ID with managed identities instead of Shared Key authentication. Azure Storage Discovery enables you to quickly see that there are still many storage accounts with shared access keys enabled and drill down into a list of storage accounts that need optimization.

“Show me a pie chart of my storage accounts with shared access key enabled by region”
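A minimal sketch of the kind of check this query surfaces, run over an inventory of account configurations. The account list and the `allow_shared_key_access` field below are hypothetical illustration data, not a real inventory or API:

```python
# Flag storage accounts that still allow Shared Key auth, grouped by region.
# The inventory below is hypothetical illustration data.
from collections import Counter

accounts = [
    {"name": "appdata01", "region": "westeurope", "allow_shared_key_access": True},
    {"name": "logs02", "region": "westeurope", "allow_shared_key_access": False},
    {"name": "backups03", "region": "eastus", "allow_shared_key_access": True},
]

# Accounts that should move to Microsoft Entra ID with managed identities.
flagged = [a for a in accounts if a["allow_shared_key_access"]]
by_region = Counter(a["region"] for a in flagged)
```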

Manage your data redundancy requirements

Azure provides several redundancy options to meet data availability, disaster recovery, performance, and cost needs. These choices should be regularly reviewed against risks and benefits for an effective storage strategy. Azure Storage Discovery quickly shows the redundancy configuration for all storage accounts and allows you to analyze the most suitable option for each workload and critical business data.

“Show me a chart of my storage account count by redundancy”

Pricing and availability

A single Azure Storage Discovery workspace can analyze the subscriptions and storage accounts from all supported regions. Learn more about the regions supported by Azure Storage Discovery here. The service offers a free pricing plan with insights related to capacity and configurations retained for up to 15 days and a standard pricing plan that also includes advanced insights related to activity, errors, and security configurations. Insights are retained for up to 18 months, allowing you to analyze trends and business cycles.

Learn more about the pricing plans in the Azure Storage Discovery documentation and access the prices for your region here.

Get started with Azure Storage Discovery

Getting started with Azure Storage Discovery is easy. Simply follow these two steps:

Configure an Azure Storage Discovery workspace and select the set of subscriptions and resource groups containing your storage accounts.

Define the “Scopes” that represent your business groups or workloads.

That’s it! Give it a moment. Once a workspace is configured, Azure Storage Discovery starts aggregating the relevant insights and makes them available to you via intuitive charts. You’ll find them in the Azure portal, on different report pages of your workspace. We’ll even look back in time and provide 15 days of historical data. Your insights are typically available within a few hours.
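Conceptually, a workspace pairs the subscriptions and resource groups to analyze with named Scopes that map storage to business groups. As a sketch (the schema below is illustrative, not the actual resource schema, and the IDs are placeholders):

```python
# Illustrative shape of a Discovery workspace configuration: sources to
# analyze plus named "Scopes" for business groups. Hypothetical schema.
workspace = {
    "name": "contoso-discovery",
    "sources": [
        "/subscriptions/<sub-id>/resourceGroups/rg-analytics",
        "/subscriptions/<sub-id>/resourceGroups/rg-backups",
    ],
    "scopes": [
        {"name": "Analytics", "tags": {"business-unit": "data-platform"}},
        {"name": "Backups", "tags": {"business-unit": "it-ops"}},
    ],
}

scope_names = [s["name"] for s in workspace["scopes"]]
```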

To get started, visit Azure Storage Discovery in the Azure Marketplace.

You can also deploy via the brand new Storage Center in the Azure portal. Find Azure Storage Discovery in the “Data management” section.

Want to read more before deploying? The planning guide walks you through all the important considerations for a successful Azure Storage Discovery deployment.

We’d love to hear your feedback. What insights are most valuable to you? What would make Azure Storage Discovery more valuable for your business? Let us know at: StorageDiscoveryFeedback@service.microsoft.com.

Get started with Azure Storage Discovery
Azure Storage Discovery integrates with Copilot in Azure, enabling you to unlock insights and accelerate decision-making without using any query language.

Find the overview here

The post From queries to conversations: Unlock insights about your data using Azure Storage Discovery—now generally available appeared first on Microsoft Azure Blog.
Source: Azure