Microsoft Planetary Computer Pro: Unlocking AI-powered geospatial insights for enterprises across industries

A proliferation of satellite constellations and connectivity to hyperscale clouds has made geospatial data available for a wide variety of sectors and use cases: from coordinating supply chains to managing climate risk to planning urban infrastructure, to name just a few. Yet despite its growing importance, geospatial data remains notoriously complex and siloed across a variety of sources, including satellites, drones, and other sensors—often accessible only to experts.

To help solve this challenge, Microsoft has invested in simplifying the complex geospatial landscape—and we are excited to introduce the Public Preview of Microsoft Planetary Computer Pro, a comprehensive platform that makes it dramatically easier for organizations to harness geospatial data for real-world impact. Microsoft Planetary Computer Pro is a next-generation platform designed to bring geospatial insights into the mainstream analytics workflow. It empowers organizations to ingest, catalog, store, process, and disseminate large volumes of private geospatial data in Microsoft Azure, using familiar tools and AI-driven insights. The result? Easier access; optimized datasets; unified security, identity, and governance; and faster time to insight.

Geospatial insights at your fingertips with Microsoft Planetary Computer Pro

Industries are already realizing the benefits. For example, energy companies are using earth observation data to help monitor infrastructure health and anticipate maintenance needs. In agriculture, organizations are optimizing crop yields by analyzing soil conditions, weather trends, and land use patterns. Retailers are refining site selection strategies by combining demographic data with mobility and footfall analytics. 

These are not isolated cases; they reflect a broader shift. As enterprises face rising pressure to become more efficient, resilient, and sustainable, the ability to operationalize geospatial data is becoming a defining competitive advantage. 

Partner momentum: A thriving ecosystem 

Microsoft’s commitment to working with partners is foundational to our mission.  

Microsoft has been collaborating closely with Esri to integrate ArcGIS Pro and Enterprise into the platform. Esri users will be able to directly access managed content for use in imagery analysis workflows at any scale. This partnership enables geographic information system (GIS) professionals to continue using their preferred tools while benefiting from the scalability and AI capabilities of the Microsoft cloud. 

Microsoft partner Xoople is a start-up launching an end-to-end Earth Intelligence system powered by a new Xoople satellite constellation and Microsoft Planetary Computer Pro. With the help of Planetary Computer Pro’s efficient data ingestion, indexing, management, and processing, Xoople plans to transform the datasets and deliver the latest industry insights to end customers via the Azure Marketplace and specialized ISVs.

Microsoft’s partnerships are also helping provide value to organizations working around the world to enable a more sustainable future.  

Space Intelligence provides customers with audit-grade data on forest coverage and carbon storage for nature-based projects. Space Intelligence uses geospatial data analysis and machine learning through Microsoft Planetary Computer Pro to support zero deforestation and mass restoration. Space Intelligence required easy access in their AI/ML pipelines to a large-scale catalog of input data, both public and private, to process petabytes of data annually. Microsoft Planetary Computer Pro enabled them to scale their AI data storage layer with high-speed access, integrate through APIs, visualize data efficiently with an on-demand tiling stack, and maintain alignment between their open and closed data sources. 

Impact Observatory uses Planetary Computer Pro, Azure Batch, and proprietary models to optimize the production of their land-use land cover map product. By moving their inference pipeline onto Azure and using Azure Batch, Impact Observatory was able to run their model in parallel on 1,000 VMs, utilizing a total of 1 million core hours. In less than a week, they produced their global land-use land cover map.

EY Consulting has emerged as a pivotal force in revolutionizing geospatial capabilities across diverse industries. Through its strategic collaboration with Microsoft, EY Consulting has supported customers by integrating cutting-edge geospatial capabilities into Azure. Drawing on its expertise in geospatial data analytics, EY Consulting has made significant strides in embedding these insights into business operations, effectively redefining the geospatial landscape.

Looking forward: Mainstreaming geospatial insights with AI-ready infrastructure

Microsoft Planetary Computer Pro helps break down the barriers of complexity by integrating directly with tools like Microsoft Fabric, Azure AI Foundry, and Power BI—along with third-party platforms. This interoperability means data analysts, developers, and business users can access and act on geospatial data from their mainstream analytics workflows. More than just access, Planetary Computer Pro sets the stage for applied AI—standardizing diverse datasets in a secure, cloud-native environment to enable advanced modeling, forecasting, and decision support. This is the foundation for a future where geospatial insights can help power everyday decisions across nearly every industry.
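To make that workflow concrete, here is a minimal, illustrative catalog search using the open-source pystac-client library. It points at the public Planetary Computer STAC endpoint; a Planetary Computer Pro GeoCatalog exposes its own STAC-style search API for private collections, so the same pattern should apply with your catalog’s URL. The collection, bounding box, and dates below are placeholders.

```python
from pystac_client import Client

# Open the catalog's STAC API endpoint (public Planetary Computer shown here;
# substitute your own GeoCatalog's URL for private collections).
catalog = Client.open("https://planetarycomputer.microsoft.com/api/stac/v1")

# Search for Landsat scenes over western Washington in June 2024 (placeholders).
search = catalog.search(
    collections=["landsat-c2-l2"],
    bbox=[-124.5, 46.5, -120.5, 49.0],  # lon/lat bounding box
    datetime="2024-06-01/2024-06-30",
)

for item in search.items():
    print(item.id, item.properties.get("eo:cloud_cover"))
```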

Satellite image of Western Washington captured by Landsat 8.

Conclusion: Geospatial insights at your fingertips 

By helping make geospatial insights more accessible, actionable, and AI-ready, Microsoft Planetary Computer Pro empowers organizations to make better decisions for their business and the planet. 

The public preview of Microsoft Planetary Computer Pro is available now in select Azure regions. 

Microsoft Planetary Computer Pro
Unify geospatial data with enterprise AI and analytics to enhance business decisions.

Discover more >

To get started: 

Visit Microsoft Planetary Computer Pro. 

Review our documentation on Microsoft Planetary Computer Pro.

Contact us at MPCPro@microsoft.com. 

As the world grapples with complex challenges, Microsoft Planetary Computer Pro helps ensure that geospatial insights are no longer a luxury for specialists, but accessible to all.

Maximize your ROI for Azure OpenAI

When you’re building with AI, every decision counts—especially when it comes to cost. Whether you’re just getting started or scaling enterprise-grade applications, the last thing you want is unpredictable pricing or rigid infrastructure slowing you down. Azure OpenAI is designed with that in mind: flexible enough for early experiments, powerful enough for global deployments, and priced to match how you actually use it.

From startups to the Fortune 500, more than 60,000 customers are choosing Azure AI Foundry, not just for access to foundational and reasoning models—but because it meets them where they are, with deployment options and pricing models that align to real business needs. This is about more than just AI—it’s about making innovation sustainable, scalable, and accessible.

Azure OpenAI deployment types and pricing options

This blog breaks down the available pricing and deployment options, and tools that support scalable, cost-conscious AI deployments.

Flexible pricing models that match your needs

Azure OpenAI supports three distinct pricing models designed to meet different workload profiles and business requirements:

Standard—For bursty or variable workloads where you want to pay only for what you use.

Provisioned—For high-throughput, performance-sensitive applications that require consistent throughput.

Batch—For large-scale jobs that can be processed asynchronously at a discounted rate.

Each approach is designed to scale with you—whether you’re validating a use case or deploying across business units.

Standard

The Standard deployment model is ideal for teams that want flexibility. You’re charged per API call based on tokens consumed, which helps optimize budgets during periods of lower usage.

Best for: Development, prototyping, or production workloads with variable demand.

You can choose between:

Global deployments: To ensure optimal latency across geographies.

OpenAI Data Zones: For more flexibility and control over data privacy and residency.

With all deployment selections, data is stored at rest within the Azure region you choose for your resource.

Batch

The Batch model is designed for high-efficiency, large-scale inference. Jobs are submitted and processed asynchronously, with responses returned within a 24-hour target window—at up to 50% less cost than Global Standard pricing. Combined with large-scale workload support for bulk requests, this lets you process massive jobs with minimal friction; a minimal submission sketch follows the use cases below.

Best for: Large-volume tasks with flexible latency needs.

Typical use cases include:

Large-scale data processing and content generation.

Data transformation pipelines.

Model evaluation across extensive datasets.
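As a concrete starting point, the sketch below submits a batch job with the openai Python SDK’s Azure client. Treat it as a hedged example rather than a definitive recipe: the file name, deployment name, and api_version are placeholders, and the exact endpoint string in the JSONL requests can vary by API version.

```python
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",  # placeholder; use your service's API version
)

# requests.jsonl holds one chat-completions request per line, e.g.:
# {"custom_id": "task-1", "method": "POST", "url": "/chat/completions",
#  "body": {"model": "<your-deployment-name>", "messages": [...]}}
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

# Submit the job; results are returned asynchronously within the 24-hour window.
job = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/chat/completions",
    completion_window="24h",
)
print(job.id, job.status)  # poll client.batches.retrieve(job.id) for completion
```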

Customer in action: Ontada

Ontada, a McKesson company, used the Batch API to transform over 150 million oncology documents into structured insights. Applying LLMs across 39 cancer types, they unlocked 70% of previously inaccessible data and cut document processing time by 75%. Learn more in the Ontada case study.

Provisioned

The Provisioned model provides dedicated throughput via Provisioned Throughput Units (PTUs). This enables stable latency and high throughput—ideal for production use cases requiring real-time performance or processing at scale. Commitments can be hourly, monthly, or yearly with corresponding discounts.

Best for: Enterprise workloads with predictable demand and the need for consistent performance.

Common use cases:

High-volume retrieval and document processing scenarios.

Call center operations with predictable traffic hours.

Retail assistants with consistently high throughput.

Customers in action: Visier and UBS

Visier built “Vee,” a generative AI assistant that serves up to 150,000 users per hour. By using PTUs, Visier improved response times threefold compared to pay-as-you-go models and reduced compute costs at scale. Read the case study.

UBS created “UBS Red,” a secure AI platform supporting 30,000 employees across regions. PTUs allowed the bank to deliver reliable performance with region-specific deployments across Switzerland, Hong Kong, and Singapore. Read the case study.

Deployment types for standard and provisioned

To meet growing requirements for control, compliance, and cost optimization, Azure OpenAI supports multiple deployment types:

Global: Most cost-effective, routes requests through the global Azure infrastructure, with data residency at rest.

Regional: Keeps data processing in a specific Azure region (28 available today), with data residency both at rest and processing in the selected region.

Data Zones: Offers a middle ground—processing remains within geographic zones (E.U. or U.S.) for added compliance without full regional cost overhead.

Global and Data Zone deployments are available across Standard, Provisioned, and Batch models.
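For illustration, the sketch below creates a Global Standard deployment with the azure-mgmt-cognitiveservices management SDK. Resource names, the model version, and capacity are placeholders; the SKU name is what selects the deployment type (for example, Standard, GlobalStandard, or DataZoneStandard).

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

client = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)

# Create (or update) a model deployment on an existing Azure OpenAI resource.
poller = client.deployments.begin_create_or_update(
    "my-rg",             # placeholder resource group
    "my-aoai-resource",  # placeholder Azure OpenAI resource name
    "gpt-4o-global",     # placeholder deployment name
    {
        "sku": {"name": "GlobalStandard", "capacity": 1},
        "properties": {
            "model": {"format": "OpenAI", "name": "gpt-4o", "version": "2024-08-06"},
        },
    },
)
print(poller.result().name)
```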

Dynamic features help you cut costs while optimizing performance

Several new features designed to help you get the best results at lower cost are now available.

Model router for Azure AI Foundry: A deployable AI chat model that automatically selects the best underlying chat model to respond to a given prompt. Perfect for diverse use cases, model router delivers high performance while saving on compute costs where possible, all packaged as a single model deployment.

Batch large-scale workload support: Processes bulk requests at lower cost, with a 24-hour target turnaround at up to 50% less cost than Global Standard.

Provisioned throughput dynamic spillover: Provides seamless overflowing for your high-performing applications on provisioned deployments. Manage traffic bursts without service disruption.

Prompt caching: Built-in optimization for repeatable prompt patterns. It accelerates response times, scales throughput, and helps cut token costs significantly (see the sketch after this list).

Azure OpenAI monitoring dashboard: Continuously track performance, usage, and reliability across your deployments.
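As noted in the prompt caching item above, the optimization matches on an identical leading portion of the prompt, so the practical pattern is simply to keep large, stable content (system instructions, reference material) first and per-request content last. A minimal sketch, with placeholder file and deployment names (caching generally requires the shared prefix to exceed a minimum token length):

```python
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",  # placeholder
)

# Large, unchanging reference text; sending it identically on every call lets
# the service recognize and cache the prompt prefix.
STATIC_CONTEXT = open("policy_handbook.txt").read()

def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # your deployment name
        messages=[
            {"role": "system", "content": STATIC_CONTEXT},  # stable, cacheable prefix
            {"role": "user", "content": question},          # varies per request
        ],
    )
    return response.choices[0].message.content
```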

To learn more about these features and how to leverage the latest innovations in Azure AI Foundry models, watch this session from Build 2025 on optimizing Gen AI applications at scale.

Integrated Cost Management tools

Beyond pricing and deployment flexibility, Azure OpenAI integrates with Microsoft Cost Management tools to give teams visibility and control over their AI spend.

Capabilities include:

Real-time cost analysis.

Budget creation and alerts.

Support for multi-cloud environments.

Cost allocation and chargeback by team, project, or department.

These tools help finance and engineering teams stay aligned—making it easier to understand usage trends, track optimizations, and avoid surprises.

Built-in integration with the Azure ecosystem

Azure OpenAI is part of a larger ecosystem that includes:

Azure AI Foundry—Everything you need to design, customize, and manage AI applications and agents.

Azure Machine Learning—For model training, deployment, and MLOps.

Azure Data Factory—For orchestrating data pipelines.

Azure AI services—For document processing, search, and more.

This integration simplifies the end-to-end lifecycle of building, customizing, and managing AI solutions. You don’t have to stitch together separate platforms—and that means faster time-to-value and fewer operational headaches.

A trusted foundation for enterprise AI

Microsoft is committed to enabling AI that is secure, private, and safe. That commitment shows up not just in policy, but in product:

Secure future initiative: A comprehensive security-by-design approach.

Responsible AI principles: Applied across tools, documentation, and deployment workflows.

Enterprise-grade compliance: Covering data residency, access controls, and auditing.

Get started with Azure AI Foundry

Build custom generative AI models with Azure OpenAI in Foundry Models.

Documentation for Deployment types.

Learn more about Azure OpenAI pricing.

Design, customize, and manage AI applications with Azure AI Foundry.

Azure OpenAI
Deploy the latest reasoning series and foundational models.

Learn more >


IDC Business Value Study: A 306% ROI within 3 years using Ubuntu Linux on Azure

Businesses today are under pressure to innovate faster, reduce costs, and stay secure—all while preparing for an AI-driven future. As part of this shift, many organizations are turning to Microsoft Azure to modernize their infrastructure. In doing so, they find that migrating to Azure helps meet these evolving demands by improving agility, strengthening security, and laying the foundation for AI readiness.

Microsoft Azure supports your migration and modernization journey with services built for Linux and Open Source. Central to this transformation is Ubuntu, Canonical’s enterprise-grade Linux distribution, which integrates seamlessly with Azure’s IaaS and PaaS. Together, they deliver high performance, reliability, and enterprise support—plus a broad set of tools to make migration smooth and efficient.

Optimize your Ubuntu experience in Azure

To bring a data-driven perspective to these benefits, Microsoft commissioned International Data Corporation (IDC) to conduct a business value study* based on interviews with organizations that moved their Ubuntu workloads from on-premises to Azure. Study participants shared that Azure provides a more efficient and effective platform for their Ubuntu workloads, maximizing their value in core business functions and supporting new technology adoption. Using the data derived from these interviews, IDC analysts created a typical customer profile to represent common experiences and business outcomes. The consolidated data from study participants shows that running Canonical Ubuntu workloads on Azure delivers the following benefits:

306% three-year return on investment with an 11-month payback on investment.

35% lower three-year cost of operations.

63% faster to deploy new compute resources and 52% faster to scale to new business opportunities.

85% less unplanned downtime affecting users.

$30.63M higher revenue per organization per year.

Quantified benefits of Ubuntu on Microsoft Azure

IDC interviewed stakeholders involved with Ubuntu workloads on Azure, uncovering significant benefits cited by participants, including:

Run mission-critical workloads with robust performance and flexibility

Organizations running workloads such as data analytics, engineering simulations, and machine learning experience increased agility and operational efficiency with Ubuntu on Azure. By leveraging Ubuntu on Azure, businesses can scale seamlessly and respond swiftly to changing market conditions, ensuring optimal application performance while accelerating innovation and maintaining a competitive edge.

“With Ubuntu on Azure, we’ve unlocked AI adoption. We can scale innovations and experiment with technologies like GenAI, ML, and big data analytics without infrastructure constraints.”

The study participants also highlighted the ease of migrating Ubuntu workloads to Azure and the ability to add or remove capacity as needed. Gains in agility and development were notable, with users able to adjust and scale their Ubuntu environments more rapidly and flexibly in Azure, reducing deployment-related friction on development and business activities.

“Scalability is one of the reasons we moved to Ubuntu on Azure. We now have rapid scaling and flexible deployment, which enhance our responsiveness to business needs by almost 40%.”

Strengthen security and empower your IT teams

Security was another standout benefit for organizations adopting Ubuntu on Azure. They experienced enhanced operational resilience and reduced exposure to security and performance risks. Azure’s built-in security tools, including Microsoft Defender for Cloud, offer continuous security assessment, threat detection, and actionable recommendations. This enables IT teams to proactively identify vulnerabilities, respond swiftly to potential threats, and maintain robust protection, ultimately supporting business continuity and fostering trust with customers and stakeholders.

“Ubuntu on Azure provides built-in security features such as Microsoft Defender for Cloud, which is a continuous security assessment and actionable recommendations. This proactive approach helps us identify vulnerabilities before they can be exploited, which is what we all are looking out for.”

In addition, IT teams have been able to shift their focus from maintenance-heavy tasks to more strategic, innovation-driven efforts, including AI initiatives. The transition to Azure simplified operations, streamlined development cycles, and enabled teams to make faster progress on business-critical projects by leveraging built-in AI tools and infrastructure that support rapid experimentation and deployment.

“With Ubuntu on Azure, we leverage AI and refocus our IT team. Managing on-premises infrastructure was difficult, but Azure AI services enhanced our applications and drove innovation. We’ve shifted IT resources from maintenance to strategic projects, improving productivity by 25%.”

Reduce operational costs while scaling efficiently

Organizations also realized significant cost efficiencies with Ubuntu on Azure. By taking advantage of Azure’s pay-as-you-go pricing and removing hardware maintenance burdens, businesses achieved notable infrastructure and licensing savings.

IDC found that customers reduced the cost of running Ubuntu workloads by an average of 35% over three years, saving $6,500 per Azure VM. Many also saw a 29% reduction in annual infrastructure costs, equating to approximately $581,100 per year.

“Ubuntu on Azure has reduced our direct IT costs by 40%, and it also optimizes our resource allocation, so we have better operational efficiency and staff time savings.”

“Ubuntu on Azure offers significant cost savings and scalability compared to on-premises solutions. It also provides excellent integration and interoperability and helps address data challenges, enhancing completeness, accuracy, and availability to support business decisions.”

Learn more from the IDC study

Download the full study: The Business Value of Ubuntu on Microsoft Azure.

Register to attend the webinar and listen to our guests from IDC, Microsoft, and Canonical discuss the benefits of running Ubuntu Linux on Azure.

To learn more about Ubuntu on Azure, visit our website. 

The Business Value of Ubuntu on Microsoft Azure
Read the full International Data Corporation business value study.

Learn more >

*IDC White Paper, sponsored by Microsoft, The Business Value of Ubuntu on Microsoft Azure, doc # US52857024, January 2025.


Celebrating innovation, scale, and real-world impact with Serverless Compute on Azure

Microsoft is named a Leader in The Forrester Wave™: Serverless Development Platforms, Q2 2025

We are thrilled to announce that Microsoft has been recognized as a leader in The Forrester Wave™: Serverless Development Platforms, Q2 2025. We believe this recognition is a testament to our relentless focus on empowering developers, driving innovation, and delivering real value at scale for organizations across industries with Azure Functions and Azure Container Apps. Download the full report here (Forrester subscription required).

Focus on code, not infrastructure with serverless

Build smarter, scale faster with serverless compute in the era of AI applications and agents

Microsoft’s vision for serverless has always been clear: enable every developer to build, deploy, and manage modern applications with unmatched productivity, security, and agility—no matter the architecture, language, or workload. With Azure’s end-to-end serverless platform, we have moved beyond function-as-a-service to a comprehensive environment where containers, event-driven architectures, AI, and cloud-native patterns come together seamlessly.

Build and deploy serverless apps at scale

Our serverless offerings are designed to do more than abstract infrastructure—they are the foundation for building next-generation intelligent apps. With deep integrations into AI services, robust event handling, and developer-centric tooling, Azure Functions and Azure Container Apps make it easy for teams to transform ideas into impactful solutions.

What sets Microsoft’s serverless compute platform apart?

Unified event-driven and container-based models: Azure Functions and Azure Container Apps let you run any code, anywhere, scaling instantly from zero to hyper-scale—supporting both serverless functions and fully managed serverless containers without worrying about underlying infrastructure.

AI integration at every layer: With native support for Azure OpenAI, serverless GPUs and AI toolchains, you can embed generative AI, retrieval-augmented generation (RAG) patterns, and agentic workflows directly into serverless workflows, accelerating innovation in every app.

Best-in-class developer experience: From Visual Studio and VS Code to GitHub Actions, GitHub Copilot for Azure and familiar open-source frameworks, Microsoft’s stack puts developer productivity first—backed by extensive documentation, templates, and integrated DevOps capabilities.

Enterprise-grade security and compliance: Azure offers comprehensive identity and access management, role-based controls, and regulatory compliance, ensuring your applications and data are always protected.

Flexible pricing and hosting: Choose between consumption-based serverless, dedicated compute, or adaptive models. Features like Flex Consumption Plan and serverless GPU let you optimize for cost, performance, and specific workload needs.

Seamless and instant scaling: Instantly scale from zero to global with negligible cold start delays—ensuring always-on performance and real-time responsiveness for AI-powered and event-driven workloads, without manual intervention or infrastructure management.

Industry impact: With over a decade of operating a reliable cloud platform, we support mission-critical workloads across financial services, manufacturing, media, retail, and beyond.

Fully managed serverless container platform

Real-world impact: Customer success stories

Our customers continue to inspire us, showing what’s possible with Azure Functions and Azure Container Apps:

Hera Space Mission: Hera Space Companion, in collaboration with Terra Mater Studios, European Space Agency and Impact AI, is using Azure Container Apps and Azure AI Foundry to power the Hera AI Companion—an interactive, multilingual experience that lets users converse with a spacecraft in deep space—while also enabling rapid satellite image analysis and streamlined AI model deployment to accelerate innovation in space-based environmental insights.

Coca-Cola: By adopting Azure Container Apps and Azure Functions to orchestrate real-time interactions in its global “Create Real Magic” holiday campaign, Coca-Cola created a serverless, AI-powered Santa to engage over a million consumers across 43 countries in 26 languages with personalized experiences.

NFL: The National Football League integrates Azure Container Apps into its scouting platform, NFL Combine, to deliver real-time, sideline-ready AI insights, transforming hours of manual analysis into seconds of actionable data for coaches and scouts—without managing infrastructure. The league also uses Azure to power advanced fan engagement platforms, delivering real-time updates, personalized content, and data analytics during live events—all at massive scale.

Indiana Pacers: The Pacers built a real-time, in-arena captioning system that delivers instant, accurate captions to fans, enhancing accessibility and redefining the live sports experience through serverless compute and AI.

Coldplay: The iconic band partners with Pixel Artworks to deliver immersive, AI-driven visual experiences at live shows, blending creativity and technology in real time using Azure Functions.

Heineken: Heineken is leveraging Azure Functions to build secure, scalable AI agents that automate workflows and power real-time RAG experiences—enabling intelligent, cost-optimized innovation across its global operations.

These stories are just a glimpse into the transformative potential of serverless at Microsoft. Visit the Microsoft Customer Stories for deeper dives into how organizations are succeeding with Azure Functions and Azure Container Apps, and check out the latest Build updates for even more innovation highlights.

Innovation continues: Build what’s next with Microsoft serverless

This recognition as a leader isn’t just a milestone—it’s a launchpad for what’s next. We’re continuously investing in AI-powered development, seamless hybrid cloud, and flexible deployment models. Our recent updates at Microsoft Build highlight advanced AI apps and agents, new serverless GPU capabilities, and an ever-growing ecosystem of tools, templates, and partner solutions to help you modernize, build, and scale.

Whether you’re building intelligent agents, orchestrating real-time data, or delivering engaging digital experiences, Microsoft’s serverless platform provides the power, flexibility, and trust you need.

Join us on this journey. Explore the latest on Azure Functions and Azure Container Apps, and let’s build the future—together.

Forrester does not endorse any company, product, brand, or service included in its research publications and does not advise any person to select the products or services of any company or brand based on the ratings included in such publications. Information is based on the best available resources. Opinions reflect judgment at the time and are subject to change. For more information, read about Forrester’s objectivity here.

FYAI: How to leverage AI to reimagine cross-functional collaboration with Yina Arenas

Microsoft Build 2025 showcased how Microsoft is reimagining the software development lifecycle with powerful new capabilities that redefine what’s possible with AI.

From streamlining enterprise workflows to accelerating scientific discovery, AI agents are transforming how developers build and how businesses operate.

15 million developers are using GitHub Copilot, using features like agent mode and code review to handle repetitive tasks, allowing them to focus on the fun, creative parts of software development.

Hundreds of thousands of customers are using Microsoft 365 Copilot to assist with research, brainstorming, and solution development, allowing for increased efficiency.

More than 230,000 organizations—including 90% of the Fortune 500—have used Microsoft Copilot Studio to build AI agents and automations to improve productivity and scale business quickly.

More than 11,000 AI models are now available through Azure AI Foundry, including Microsoft-hosted and partner-hosted models. This extensive library of AI models provides unparalleled resources for organizations to innovate and scale their AI-powered solutions.

In this edition of FYAI, a series where we dive deep on AI trends with Microsoft leaders, we hear from Yina Arenas, Vice President of Product, Azure AI Foundry, who is leading the work at Microsoft to empower every developer to shape the future with generative AI using breakthrough models and enterprise AI agents.

In this Q&A, Yina shares her insights on the shifting AI landscape, including why businesses are getting stuck in the “proof of concept” phase and how Azure AI Foundry can meet organizations where they are and take their AI projects to the next level.

What shifts in the AI landscape are you seeing that are fundamentally changing how people—and organizations—build and scale AI?

We’re seeing a profound shift from AI as a research experiment to AI as a core business capability. What’s exciting—and challenging—is that organizations are no longer just asking, “Can we build this?” but “How do we build this responsibly, at scale, and with real impact?” That shift requires new tools, new mindsets, and new ways of working across teams. At Microsoft, we’re focused on making AI more accessible and inclusive—so that everyone, from developers to domain experts, can contribute to building solutions that matter. It’s not just about the tech—it’s about empowering people to solve real problems with AI.

Why is it still so hard for businesses to move from experimentation to production with AI—and what needs to change to unlock that next wave of value?

Many organizations get stuck in the “proof of concept” phase because the leap to production is complex. It’s not just about selecting the right model—it’s about integrating it into systems, ensuring it’s secure and responsible, and aligning it with business goals. What’s missing is a cohesive, end-to-end approach that brings together the right tools, governance, and collaboration in a developer-friendly environment. That’s where Azure AI Foundry comes in—it’s designed to help teams not only move faster but do so thoughtfully by providing a cohesive end-to-end platform and offering traceability across prompts, models, and runtime behavior. We’re making it easier and less complex for developers to build apps while also giving business decision makers the ability to see how these apps perform, measure their ROI, and meet compliance requirements. To unlock the next wave of value, we need to make AI development more collaborative, transparent, and outcome-driven.

How does Azure AI Foundry help bridge that gap—and how is it different from other approaches out there?

Azure AI Foundry is built to meet organizations where they are—whether they’re just starting or scaling AI across the enterprise. It brings together the best of Microsoft’s AI capabilities, from foundational models to orchestration and monitoring, in a unified platform. What sets Azure AI Foundry apart is not only that it’s built on decades of world-class research but that it’s built with humans at the center, so whether you’re a data scientist, product manager, engineer, or business leader, our AI solutions work for you. It also bakes in responsible AI from the start by integrating tools, from testing to monitoring to governance, that support the entire life cycle.

Who is Azure AI Foundry built for, and how does it support cross-functional teams—from data scientists to decision-makers—to build together?

Azure AI Foundry is designed for anyone looking to take their AI projects to the next level—whether you’re part of a big enterprise, a startup, or a software development company. It offers access to the leading frontier models, integrates orchestration frameworks, supports open protocols for multi-agent collaboration, and provides native observability tooling—all within a secure, governed environment. Whether it’s optimizing call centers, analyzing data, improving product searches, or automating workflows, Azure AI Foundry pulls everything—models, tools, and agents—into one user-friendly platform. With tools like GitHub, Visual Studio, and Copilot Studio, Azure AI Foundry makes it easy for developers, data scientists, IT pros, and decision-makers to shorten the journey from idea to production.

Azure AI Foundry
Design, customize, and manage AI apps and agents at scale.

Get started today >

Where are you seeing Azure AI Foundry already making an impact—and what kinds of transformation are customers unlocking?

As the central hub for building, orchestrating, and managing AI solutions, Azure AI Foundry remains the centerpiece of our AI platform strategy. It is now used by developers at more than 70,000 enterprises and software development companies—including Atomicwork, Epic, Fujitsu, Gainsight, H&R Block, and LG Electronics—to design, customize, and manage their AI apps and agents. And just six months in, more than 10,000 organizations have used Azure AI Foundry Agent Service to build, deploy, and scale their agents. Developers are designing agents that act, reason, take initiative, and deliver measurable business outcomes.

Heineken, for example, used Azure AI Foundry to build a multi-agent platform called “Hoppy” that helps employees access data and tools across the company in their native language. Their implementation has already saved thousands of hours, reducing tasks that once took 20 minutes to just 20 seconds.

Fujitsu evaluated Azure AI Foundry Agent Service to automate sales proposal creation. This boosted productivity by 67%, letting their teams focus on customer engagement. The AI agent integrates with existing Microsoft tools familiar to around 38,000 employees, retrieves dispersed knowledge, and lays the foundation for broader AI-powered innovation.

Draftwise, a digital native offering an AI-powered contract drafting and review platform, is using cutting-edge models in Azure AI Foundry (Cohere multimodal and AOAI reasoning) to help streamline the contract drafting process by integrating with a lawyer’s document storage system.

What excites you most about what’s next—for Azure AI Foundry, and for how people can reimagine the way they work and create with AI?

What excites me most about what’s next for Azure AI Foundry is how it’s unlocking a new era of creativity and empowerment—not just for developers, but for everyone. We’re moving beyond the idea of AI as a tool you use to AI as a copilot you build with. Azure AI Foundry is helping people imagine and create agents that understand their goals, adapt to their workflows, and evolve with their needs.

That shift—from writing code to orchestrating intelligence—is profound. It means that a product manager, a marketer, or a frontline worker can shape how AI works for them, without needing to be a machine learning expert. It’s about putting the power of AI into the hands of the many, not the few.

And what’s most inspiring is that we’re just getting started. The agents people are building today are solving real problems—automating complex processes, accelerating insights, and freeing up time for more meaningful work. But the agents of tomorrow? They’ll be collaborators in creativity, partners in problem-solving, and catalysts for innovation we haven’t even dreamed of yet.

That’s the future I see—and it’s being built right now, by people who are reimagining what’s possible with AI.

Through leaders like Yina Arenas, Microsoft’s vision for the future of AI is both inspiring and deeply human-centered. With platforms like Azure AI Foundry, we’re entering a new era where AI becomes not just a tool, but a true collaborator—empowering everyone, regardless of technical expertise, to innovate and solve real-world problems. With Azure AI Foundry, the potential of AI is being unlocked by developers everywhere, sparking a wave of transformation and boundless possibilities.

Interested in learning more? Here are a few resources:

Build your first production-grade AI agent in under an hour: Azure AI Foundry

Learn how Azure AI Foundry is supporting open Agent2Agent (A2A) protocol

Read Azure AI Foundry Agent Service documentation

Empower your team to grow their AI skills

FYAI: How agents will transform business and daily work

GitHub scales on demand with Azure Functions

GitHub is the home of the world’s software developers, with more than 100 million developers and 420 million total repositories across the platform. To keep everything running smoothly and securely, GitHub collects a tremendous amount of data through an in-house pipeline made up of several components. But even though it was built for fault tolerance and scalability, the ongoing growth of GitHub led the company to reevaluate the pipeline to ensure it meets both current and future demands. 

“We had a scalability problem. Currently, we collect about 700 terabytes a day of data, which is heavily used for detecting malicious behavior against our infrastructure and for troubleshooting. This internal system was limiting our growth.”

—Stephan Miehe, GitHub Senior Director of Platform Security

GitHub worked with its parent company, Microsoft, to find a solution. To process the event stream at scale, the GitHub team built a function app that runs in Azure Functions Flex Consumption, a plan recently released for public preview. Flex Consumption delivers fast and large scale-out features on a serverless model and supports long function execution times, private networking, instance size selection, and concurrency control.

Azure Functions Flex Consumption
Find out how you can scale fast with the Azure Functions Flex Consumption plan

Learn more

In a recent test, GitHub sustained 1.6 million events per second using one Flex Consumption app triggered from a network-restricted event hub.

“What really matters to us is that the app scales up and down based on demand. Azure Functions Flex Consumption is very appealing to us because of how it dynamically scales based on the number of messages that are queued up in Azure Event Hubs.”

—Stephan Miehe, GitHub Senior Director of Platform Security

A look back

GitHub’s problem lay in an internal messaging app orchestrating the flow between the telemetry producers and consumers. The app was originally deployed using Java-based binaries and Azure Event Hubs. But as it began handling up to 460 gigabytes (GB) of events per day, the app was reaching its design limits, and its availability began to degrade.

For best performance, each consumer of the old platform required its own environment and time-consuming manual tuning. In addition, the Java codebase was prone to breakage and hard to troubleshoot, and those environments were getting expensive to maintain as the compute overhead grew.

“We couldn’t accept the risk and scalability challenges of the current solution,” Miehe says. He and his team began to weigh the alternatives. “We were already using Azure Event Hubs, so it made sense to explore other Azure services. Given the simple nature of our need—HTTP POST request—we wanted something serverless that carries minimal overhead.”

Familiar with serverless code development, the team focused on similar Azure-native solutions and arrived at Azure Functions.

“Both platforms are well known for being good for simple data crunching at large scale, but we don’t want to migrate to another product in six months because we’ve reached a ceiling.”

—Stephan Miehe, GitHub Senior Director of Platform Security

A function app can automatically scale the queue based on the amount of logging traffic. The question was how much it could scale. At the time GitHub began working with the Azure Functions team, the Flex Consumption plan had just entered private preview. Based on a new underlying architecture, Flex Consumption supports up to 1,000 partitions and provides a faster target-based scaling experience. The product team built a proof of concept that scaled to more than double the legacy platform’s largest topic at the time, showing that Flex Consumption could handle the pipeline.

“Azure Functions Flex Consumption gives us a serverless solution with 100% of the capacity we need now, plus all the headroom we need as we grow.”

—Stephan Miehe, GitHub Senior Director of Platform Security

Making a good solution great

GitHub joined the private preview and worked closely with the Azure Functions product team to see what else Flex Consumption could do. The new function app is written in Python to consume events from Event Hubs. It consolidates large batches of messages into one large message and sends it on to the consumers for processing.

Finding the right number for each batch took some experimentation, as every function execution has at least a small percentage of overhead. At peak usage times, the platform will process more than 1 million events per second. Knowing this, the GitHub team needed to find the sweet spot in function execution. Too high a number and there’s not enough memory to process the batch. Too small a number and it takes too many executions to process the batch and slows performance.

The right number proved to be 5,000 messages per batch. “Our execution times are already incredibly low—in the 100–200 millisecond range,” Miehe reports.

This solution has built-in flexibility. The team can vary the number of messages per batch for different use cases and can trust that the target-based scaling capabilities will scale out to the ideal number of instances. In this scaling model, Azure Functions determines the number of unprocessed messages on the event hub and then immediately scales to an appropriate instance count based on the batch size and partition count. At the upper bound, the function app scales up to one instance per event hub partition, which can work out to be 1,000 instances for very large event hub deployments.
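As a back-of-the-envelope check on those numbers, assuming each instance runs one batch at a time: at 5,000 messages per batch and roughly 150 ms per execution, a single instance can consume on the order of 30,000 events per second, so sustaining 1.6 million events per second takes roughly 50 concurrent instances, comfortably inside the one-instance-per-partition ceiling. The sketch below shows what a batched Event Hubs trigger looks like in the Python v2 programming model; it is not GitHub’s actual code, the hub name and connection setting are placeholders, and it assumes the decorator’s cardinality option, with the real batch size tuned in host.json via maxEventBatchSize.

```python
import json
from typing import List

import azure.functions as func

app = func.FunctionApp()

# cardinality="many" asks the runtime for a batch of events per execution;
# the effective batch size is governed by host.json ("maxEventBatchSize").
@app.event_hub_message_trigger(
    arg_name="events",
    event_hub_name="telemetry",        # placeholder hub name
    connection="EventHubConnection",   # app setting holding the connection
    cardinality="many",
)
def consolidate(events: List[func.EventHubEvent]) -> None:
    # Consolidate the batch into one payload and forward it downstream.
    payload = [json.loads(e.get_body().decode("utf-8")) for e in events]
    forward_to_consumers(payload)      # placeholder for the HTTP POST step

def forward_to_consumers(payload: list) -> None:
    ...  # e.g., POST to the consumer endpoint using a pooled HTTP session
```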

“If other customers want to do something similar and trigger a function app from Event Hubs, they need to be very deliberate in the number of partitions to use based on the size of their workload, if you don’t have enough, you’ll constrain consumption.”

—Stephan Miehe, GitHub Senior Director of Platform Security

Azure Functions supports several event sources in addition to Event Hubs, including Apache Kafka, Azure Cosmos DB, Azure Service Bus queues and topics, and Azure Queue Storage.

Reaching behind the virtual network

The function as a service model frees developers from the overhead of managing many infrastructure-related tasks. But even serverless code can be constrained by the limitations of the networks where it runs. Flex Consumption addresses the issue with improved virtual network (VNet) support. Function apps can be secured behind a VNet and can reach other services secured behind a VNet—without degrading performance.

As an early adopter of Flex Consumption, GitHub benefited from improvements being made behind the scenes to the Azure Functions platform. Flex Consumption runs on Legion, a newly architected, internal platform as a service (PaaS) backbone that improves network capabilities and performance for high-demand scenarios. For example, Legion is capable of injecting compute into an existing VNet in milliseconds—when a function app scales up, each new compute instance that is allocated starts up and is ready for execution, including outbound VNet connectivity, within 624 milliseconds (ms) at the 50th percentile and 1,022 ms at the 90th percentile. That’s how GitHub’s message-processing app can reach Event Hubs secured behind a virtual network without incurring significant delays. In the past 18 months, the Azure Functions platform has reduced cold start latency by approximately 53% across all regions and for all supported languages and platforms.

Working through challenges

This project pushed the boundaries for both the GitHub and Azure Functions engineering teams. Together, they worked through several challenges to achieve this level of throughput:

In the first test run, GitHub had so many messages pending for processing that it caused an integer overflow in the Azure Functions scaling logic, which was immediately fixed.

In the second run, throughput was severely limited due to a lack of connection pooling. The team rewrote the function code to correctly reuse connections from one execution to the next; a sketch of the pattern follows this list.

At about 800,000 events per second, the system appeared to be throttled at the network level, but the cause was unclear. After weeks of investigation, the Azure Functions team found a bug in the receive buffer configuration in the Azure SDK Advanced Message Queuing Protocol (AMQP) transport implementation. This was promptly fixed by the Azure SDK team and allowed GitHub to push beyond 1 million events per second.
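The connection-pooling fix in that second run comes down to a standard serverless pattern: create clients once at module load so that warm instances reuse them across executions, rather than opening a new connection per invocation. A generic sketch of the idea, not GitHub’s actual code, with a placeholder endpoint:

```python
import requests

# Created once at module import time. Azure Functions keeps the module warm
# between executions on the same instance, so every invocation reuses the
# session's pooled TCP/TLS connections instead of paying setup costs per call.
SESSION = requests.Session()

def forward_to_consumers(payload: list) -> None:
    # Placeholder endpoint; swap in the consumer's URL.
    SESSION.post("https://consumers.example.com/ingest", json=payload, timeout=10)
```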

Best practices in meeting a throughput milestone

With more power comes more responsibility, and Miehe acknowledges that Flex Consumption gave his team “a lot of knobs to turn,” as he put it. “There’s a balance between flexibility and the effort you have to put in to set it up right.”

To that end, he recommends testing early and often, a familiar part of the GitHub pull request culture. The following best practices helped GitHub meet its milestones:

Batch it if you can: Receiving messages in batches boosts performance. Processing thousands of event hub messages in a single function execution significantly improves the system throughput.

Experiment with batch size: Miehe’s team tested batches as large as 100,000 events and as small as 100 before landing on 5,000 as the max batch size for fastest execution.

Automate your pipelines: GitHub uses Terraform to build the function app and the Event Hubs instances. Provisioning both components together reduces the amount of manual intervention needed to manage the ingestion pipeline. Plus, Miehe’s team could iterate incredibly quickly in response to feedback from the product team.

The GitHub team continues to run the new platform in parallel with the legacy solution while it monitors performance and determines a cutover date. 

“We’ve been running them side by side deliberately to find where the ceiling is,” Miehe explains.

The team was delighted. As Miehe says, “We’re pleased with the results and will soon be sunsetting all the operational overhead of the old solution.”

Explore solutions with Azure Functions

Azure Functions Flex Consumption

Azure Functions


Elevate your AI deployments more efficiently with new deployment and cost management solutions for Azure OpenAI Service including self-service Provisioned

We’re excited to announce significant updates for Azure OpenAI Service, designed to help our 60,000-plus customers manage AI deployments more efficiently and cost-effectively beyond current pricing. With the introduction of self-service Provisioned deployments, we aim to help make your quota and deployment processes more agile, faster to market, and more economical. The technical value proposition remains unchanged—Provisioned deployments continue to be the best option for latency-sensitive and high-throughput applications. Today’s announcement includes self-service provisioning, visibility into service capacity and availability, and the introduction of Provisioned (PTU) hourly pricing and reservations to help with cost management and savings.

Azure OpenAI Service deployment and cost management solutions walkthrough

What’s new? 

Self-Service Provisioning and Model Independent Quota Requests 

We are introducing self-service provisioning alongside standard tokens, allowing you to request Provisioned Throughput Units (PTUs) more flexibly and efficiently. This new feature empowers you to manage your Azure OpenAI Service quota and deployments independently, without relying on support from your account team. By decoupling quota requests from specific models, you can now allocate resources based on your immediate needs and adjust as your requirements evolve. This change simplifies the process and accelerates your ability to deploy and scale your applications.

Visibility into service capacity and availability

Gain better visibility into service capacity and availability, helping you make informed decisions about your deployments. With this new feature, you can access real-time information about service capacity in different regions, ensuring that you can plan and manage your deployments more effectively. This transparency allows you to avoid potential capacity issues and optimize the distribution of your workloads across available resources, leading to improved performance and reliability for your applications. 

Provisioned hourly pricing and reservations 

We are excited to introduce two new self-service purchasing options for PTUs: 

Hourly no-commitment purchasing 

You can now create a Provisioned deployment for as little as an hour, at a flat rate of $2 per unit per hour. This model-independent pricing makes it easy to deploy and tear down deployments as needed, offering maximum flexibility. This is ideal for testing scenarios or transitional periods without any long-term commitment.

Monthly and yearly Azure reservations for Provisioned deployments

For production environments with steady request volumes, Azure OpenAI Service Provisioned Reservations offer significant cost savings. By committing to a monthly or yearly reservation, you can save up to 82% or 85%, respectively, over hourly rates. Reservations are now decoupled from specific models and deployments, providing unmatched flexibility. This approach allows enterprises to optimize costs while maintaining the ability to switch models and adjust deployments as needed. Read our technical blog on Reservations here.
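As a rough illustration using the figures above: a 100-unit Provisioned deployment billed hourly at $2 per unit per hour costs about $146,000 for a 730-hour month (100 × $2 × 730). At the quoted savings, a yearly reservation (85%) would bring that same month to roughly $21,900, and a monthly reservation (82%) to about $26,300.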

Azure OpenAI Service
Build your own copilot and generative AI applications

Try today

Benefits for decision makers 

These updates are designed to provide flexibility, cost efficiency, and ease of use, making it simpler for decision-makers to manage AI deployments. 

Flexibility: With self-service provisioning and hourly pricing, you can scale your deployments up or down based on immediate needs without long-term commitments. 

Cost efficiency: Azure Reservations offer substantial savings for long-term use, enabling better budget planning and cost management. 

Ease of use: Enhanced visibility and simplified provisioning processes reduce administrative burdens, allowing your team to focus on strategic initiatives rather than operational details. 

Customer success stories 

Before we made self-service available, select customers had already started realizing the benefits of these options.

Visier Solutions: By leveraging Provisioned Throughput Units (PTUs) with Azure OpenAI Service, Visier Solutions has significantly enhanced their AI-powered people analytics tool, Vee. With PTUs, Visier guarantees rapid, consistent response times, crucial for handling the high volume of queries from their extensive customer base. This powerful synergy between Visier’s innovative solutions and Azure’s robust infrastructure not only boosts customer satisfaction by delivering swift and accurate insights but also underscores Visier’s commitment to using cutting-edge technology to drive transformational change in workforce analytics. Read the case study on Microsoft. 

An analytics and insights company: Switched from Standard Deployments to GPT-4 Turbo PTUs and experienced a significant reduction in response times, from 10–20 seconds to just 2–3 seconds. 

A Chatbot Services company: Reported improved stability and lower latency with Azure PTUs, enhancing the performance of their services. 

A visual entertainment company: Noted a drastic latency improvement, from 12–13 seconds down to 2–3 seconds, enhancing user engagement. 

Empowering all customers to build with Azure OpenAI Service

These new updates do not alter the technical excellence of Provisioned deployments, which continue to deliver low and predictable latency. Instead, they introduce a more flexible and cost-effective procurement model, making Azure OpenAI Service more accessible than ever. With self-service Provisioned, model-independent units, and both hourly and reserved pricing options, the barriers to entry have been drastically lowered. 

To learn more about enhancing the reliability, security, and performance of your cloud and AI investments, explore the additional resources below.

Additional Resources 

Azure Pricing Provisioned Reservations

Azure OpenAI Service Pricing 

More details about Provisioned

Documentation for On-Boarding 

PTU Calculator in Azure AI Studio 

Unveiling Azure OpenAI Service Provisioned reservations blog


Announcing mandatory multi-factor authentication for Azure sign-in

Learn how multifactor authentication (MFA) can protect your data and identity and get ready for Azure’s upcoming MFA requirement. 

As cyberattacks become increasingly frequent, sophisticated, and damaging, safeguarding your digital assets has never been more critical. As part of Microsoft’s $20 billion investment in security over the next five years and our commitment to enhancing security in our services in 2024, we are introducing mandatory multifactor authentication (MFA) for all Azure sign-ins.

The need for enhanced security

One of the pillars of Microsoft’s Secure Future Initiative (SFI) is dedicated to protecting identities and secrets—we want to reduce the risk of unauthorized access by implementing and enforcing best-in-class standards across all identity and secrets infrastructure, and user and application authentication and authorization. As part of this important priority, we are taking the following actions:

Protect identity infrastructure signing and platform keys with rapid and automatic rotation with hardware storage and protection (for example, hardware security module (HSM) and confidential compute).

Strengthen identity standards and drive their adoption through use of standard SDKs across 100% of applications.

Ensure 100% of user accounts are protected with securely managed, phishing-resistant multifactor authentication.

Ensure 100% of applications are protected with system-managed credentials (for example, Managed Identity and Managed Certificates).

Ensure 100% of identity tokens are protected with stateful and durable validation.

Adopt more fine-grained partitioning of identity signing keys and platform keys.

Ensure identity and public key infrastructure (PKI) systems are ready for a post-quantum cryptography world.

Ensuring Azure accounts are protected with securely managed, phishing-resistant multifactor authentication is a key action we are taking. Recent research by Microsoft shows that MFA can block more than 99.2% of account compromise attacks, making it one of the most effective security measures available. Today's announcement brings us all one step closer toward a more secure future.

In May 2024, we talked about implementing automatic enforcement of multifactor authentication by default across more than one million Microsoft Entra ID tenants within Microsoft, including tenants for development, testing, demos, and production. We are extending this best practice of enforcing MFA to our customers by making it required to access Azure. In doing so, we will not only reduce the risk of account compromise and data breach for our customers, but also help organizations comply with several security standards and regulations, such as Payment Card Industry Data Security Standard (PCI DSS), Health Insurance Portability and Accountability Act (HIPAA), General Data Protection Regulation (GDPR), and National Institute of Standards and Technology (NIST).

Preparing for mandatory Azure MFA

Required MFA for all Azure users will be rolled out in phases starting in the second half of calendar year 2024, to provide our customers time to plan their implementation: 

Phase 1: Starting in October, MFA will be required to sign in to the Azure portal, Microsoft Entra admin center, and Intune admin center. The enforcement will gradually roll out to all tenants worldwide. This phase will not impact other Azure clients such as the Azure Command Line Interface (CLI), Azure PowerShell, the Azure mobile app, and Infrastructure as Code (IaC) tools. 

Phase 2: Beginning in early 2025, gradual enforcement of MFA at sign-in will commence for the Azure CLI, Azure PowerShell, the Azure mobile app, and Infrastructure as Code (IaC) tools.

Beginning today, Microsoft will send a 60-day advance notice to all Microsoft Entra global administrators by email and through Azure Service Health notifications, specifying the enforcement start date and the actions required. Additional notifications will be sent through the Azure portal, the Microsoft Entra admin center, and the Microsoft 365 message center.

For customers who need additional time to prepare for mandatory Azure MFA, Microsoft will review extended timeframes for customers with complex environments or technical barriers.

How to use Microsoft Entra for flexible MFA

Organizations have multiple ways to enable their users to utilize MFA through Microsoft Entra:

Microsoft Authenticator allows users to approve sign-ins from a mobile app using push notifications, biometrics, or one-time passcodes. Augment or replace passwords with two-step verification and boost the security of your accounts from your mobile device.

FIDO2 security keys provide access by signing in without a username or password using an external USB, near-field communication (NFC), or other external security key that supports Fast Identity Online (FIDO) standards in place of a password.

Certificate-based authentication enforces phishing-resistant MFA using personal identity verification (PIV) and common access card (CAC). Authenticate using X.509 certificates on smart cards or devices directly against Microsoft Entra ID for browser and application sign-in.

Passkeys allow for phishing-resistant authentication using Microsoft Authenticator.

Finally, you can also use SMS or voice approval, as described in this documentation, though this is the least secure version of MFA.

External multifactor authentication solutions and federated identity providers will continue to be supported and will meet the MFA requirement if they are configured to send an MFA claim.
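For administrators who want to get ahead of enforcement, the sketch below shows one way to require MFA for Azure management today: creating a Microsoft Entra Conditional Access policy through the Microsoft Graph API. This is a hedged example, not the enforcement mechanism Microsoft will use; validate the payload shape against the Graph documentation. Token acquisition is elided, and the policy is created in report-only mode so you can assess impact before enforcing.

```python
# Sketch: require MFA for Azure management via a Conditional Access policy.
# Assumes an access token with the Policy.ReadWrite.ConditionalAccess scope;
# validate the payload against the Microsoft Graph documentation.
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
token = "<graph-access-token>"  # acquisition elided; use MSAL in real code

policy = {
    "displayName": "Require MFA for Azure management",
    "state": "enabledForReportingButNotEnforced",  # report-only while assessing impact
    "conditions": {
        "clientAppTypes": ["all"],
        "users": {"includeUsers": ["All"]},
        "applications": {
            # Well-known app ID of the Windows Azure Service Management API
            "includeApplications": ["797f4846-ba00-4fd7-ba43-dac1f8f63013"]
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

response = requests.post(
    GRAPH_URL, json=policy,
    headers={"Authorization": f"Bearer {token}"}, timeout=30,
)
response.raise_for_status()
print("Created policy:", response.json()["id"])
```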

Moving forward

At Microsoft, your security is our top priority. By enforcing MFA for Azure sign-ins, we aim to provide you with the best protection against cyber threats. We appreciate your cooperation and commitment to enhancing the security of your Azure resources.

Our goal is to deliver a low-friction experience for legitimate customers while ensuring robust security measures are in place. We encourage all customers to begin planning for compliance as soon as possible to avoid any business interruptions. 

Start today! For additional details on implementation, impacted accounts, and next steps for you, please refer to this documentation.
The post Announcing mandatory multi-factor authentication for Azure sign-in appeared first on Azure Blog.
Source: Azure

Microsoft Cost Management updates—July 2024

Whether you’re a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you’re spending, where it’s being spent, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We’re always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Exports enhancements: Parquet format support, file compression, and Fabric ingestion

Pricing updates on Azure.com

Your feedback matters: Take our quick survey! 

New ways to save money in the Microsoft Cloud

Documentation updates

Let’s dig into the details.

Exports enhancements: Parquet format support, file compression, and Fabric ingestion 

In our last blog, I spoke about the support for FOCUS 1.0 (FinOps Open Cost and Usage Specification) datasets in Exports. We continue to enhance the Exports functionality, bringing support for the Parquet format and file compression, which can potentially reduce file sizes by 40 to 70%. These new cost-saving features are initially available for the following datasets: cost and usage details (Actual, Amortized, FOCUS) and Price Sheet. They aim to streamline your cost management processes, improve data-handling efficiency, and reduce storage and network costs, all while providing comprehensive insights into your Azure spending.

Parquet is an open-source, columnar storage file format designed for efficient data processing and analytics. It offers several benefits over traditional formats like Comma-Separated Values (CSV), some of which are included below:

Efficient storage and reduced network cost: Parquet’s columnar format allows for better compression and encoding schemes, resulting in smaller file sizes. Compressed datasets occupy less space, translating to lower storage expenses and file transfer network cost.

Improved data transfer speed: Smaller file sizes mean faster data transfer rates, enhancing the efficiency of data operations.

Faster query performance: By storing data by column, Parquet enables faster data retrieval and query performance, especially for large datasets. 

Optimized analytics: Parquet format is optimized for big data tools and can be easily integrated with various analytics platforms.

To further reduce the size of your datasets, you can now compress your CSV files using GNU ZIP (GZIP) and parquet files using Snappy.

[Screenshot: the new export configuration options, showing format and compression settings]

Please refer to this article to get started.
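To get a feel for what the new options mean in practice, here is a small, self-contained sketch using pandas (with pyarrow installed) that writes the same illustrative cost table as GZIP-compressed CSV and as Snappy-compressed Parquet, then compares the file sizes. The column names echo FOCUS-style fields, but the data is made up; actual savings depend on your datasets.

```python
# Illustrative comparison of export formats (not the export pipeline itself):
# the same table written as GZIP CSV and as Snappy-compressed Parquet.
# Requires: pandas, pyarrow
import os

import pandas as pd

# Made-up stand-in for an exported cost dataset with FOCUS-style columns
df = pd.DataFrame({
    "BilledCost": [0.42] * 100_000,
    "ServiceName": ["Virtual Machines"] * 100_000,
    "ChargePeriodStart": pd.date_range("2024-07-01", periods=100_000, freq="min"),
})

df.to_csv("costs.csv.gz", index=False, compression="gzip")
df.to_parquet("costs.parquet", compression="snappy")

for path in ("costs.csv.gz", "costs.parquet"):
    print(f"{path}: {os.path.getsize(path):,} bytes")

# Reading a Parquet export back for analysis is a one-liner:
costs = pd.read_parquet("costs.parquet")
```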

Microsoft Fabric ingestion 

Microsoft Fabric, as we know, is a great tool for data reporting and analytics where you can reference datasets from multiple sources without copying the data. We have now added new documentation to make it easy for you to ingest your exported costs datasets into new or existing Fabric workspaces. Just follow the steps included in this article. 

Pricing updates on Azure.com

We’ve been working hard to make some changes to our Azure pricing experiences, and we’re excited to share them with you. These changes will help make it easier for you to estimate the costs of your solutions.

We’re thrilled to announce the launch of new pricing pages for Azure AI Health (now generally available) and the innovative Phi-3 service (now in preview), ensuring you have the latest information at your fingertips.

Our Azure AI suite has seen significant enhancements, with updated calculators for Azure AI Vision and Azure AI Language, ensuring you have access to the most current offers and SKUs. The Azure AI Speech service now proudly offers generally available pricing for the cutting-edge Text to Speech add-on feature “Avatar”, and Azure AI Document Intelligence has added pricing for new training and custom generative stock-keeping units (SKUs).

To maintain the accuracy and relevance of our offers, we’ve deprecated the Azure HPC Cache and SQL Server Stretch pricing pages and calculators. This step ensures that you’re only presented with the most up-to-date and valid options.

The pricing calculator has been updated with the latest offers and SKUs for Azure Container Storage, Azure AI Vision, Azure Monitor, and PostgreSQL, reflecting our commitment to providing you with the most accurate cost estimates.

We’ve introduced new prices and SKUs across various services, including pricing for the new Intel Dv6/Ev6 series (preview) and ND MI300X v5 series for Virtual Machines, the auxiliary logs offer for Azure Monitor, and audio streaming and closed caption SKUs for Azure Communication Services. The Azure Databricks service now features pricing for Automated Serverless Compute, and the Azure Container Storage service pricing page now reflects generally available pricing.

Our dedication to enhancing your pricing experience is reflected in the continuous improvements made to several pages, including Azure Synapse Analytics, Azure SQL Database, Azure Migrate, Azure Cosmos DB (autoscale-provisioned), Microsoft Purview, Microsoft Fabric, Linux Virtual Machines, Azure VMware Solution, Azure Web PubSub, Azure Content Delivery Network, and Azure SignalR Service.

We’re constantly working to improve our pricing tools and make them more accessible and user-friendly. We hope you find these changes helpful in estimating the costs for your Azure Solutions. If you have any feedback or suggestions for future improvements, please let us know!

Your feedback matters: Take our quick survey!

If you use Azure in your day-to-day work from deploying resources to managing costs and billing, we would love to hear from you. (All experience levels welcome!) Please take a few moments to complete this short, 5 to 10-minute survey to help us understand your roles, responsibilities, and the challenges you face in managing the cloud. Your feedback will help us improve our services to better meet your personal needs. 

New ways to save money in the Microsoft Cloud

Here are new and updated offerings which can potentially help with your cost optimization needs:

Generally Available: Azure Virtual Network Manager mesh and direct connectivity

Generally Available: Announcing kube-egress-gateway for Kubernetes

Generally Available: Run your Databricks Jobs with Serverless compute for workflows

Generally Available: Azure Elastic SAN Feature Updates

Public Preview: Summary rules in Azure Monitor Log Analytics, for optimal consumption experiences and cost

Public Preview: Continuous Performance Diagnostics for Windows VMs to enhance VM Troubleshooting

Public Preview: Azure cross-subscription Load Balancer

Public Preview: Advanced Network Observability for your Azure Kubernetes Service clusters through Azure Monitor

New Azure Advisor recommendations for Azure Database for PostgreSQL—Flexible Server

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates 

Here are a few costs related documentation updates you might be interested in:

Update: Centrally managed Azure Hybrid Benefit FAQ

Update: Pay for your Azure subscription by wire transfer

Update: Tutorial: Create and manage budgets

Update: Understand cost details fields

Update: Quickstart: Start using Cost analysis

Update: Tutorial: Improved exports experience—Preview

Update: Transfer Azure Enterprise enrollment accounts and subscriptions

Update: Migrate from Consumption Usage Details API

Update: Change contact information for an Azure billing account

New: Avoid unused subscriptions

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions!

What’s next?

These are just a few of the big updates from last month. Don’t forget to check out the previous Microsoft Cost Management updates. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you’d like to see next.
The post Microsoft Cost Management updates—July 2024 appeared first on Azure Blog.
Source: Azure

New Azure Data Box capabilities to accelerate your offline data migration

The Azure Data Box offline data transfer solution allows you to send petabytes of data into Azure Storage in a quick, inexpensive, and reliable manner. The secure data transfer is accelerated by hardware transfer devices that enable offline data ingestion to Azure.

We’re excited to announce several new service capabilities including:

General availability of the self-encrypted drives Azure Data Box Disk SKU, which enables fast transfers on Linux systems.

Support for data ingestion to multiple blob access tiers in a single order.

Preview of cross-region data transfers for seamless data ingest from source country or region to select Azure destinations in a different country or region.

Support in Azure Storage Mover for online catch-up data copy of any changes active workloads may have generated post offline migrations with Azure Data Box.

Additionally, we’re happy to share that the Azure Data Box cloud service is HIPAA/BAA, PCI 3DS, and PCI DSS certified. More details on each of these new capabilities can be found below.

Azure Data Box Disk: self-encrypted drives

Azure Data Box Disk is now generally available in a hardware-encrypted option in the European Union, United States, and Japan. These self-encrypting drives (SEDs) use dedicated hardware on the disk for data encryption, with no software dependencies on the host machine. With this offering, we now support data transfer rates on Linux comparable to those of our BitLocker-encrypted Data Box Disk drives on Windows.

Azure Data Box Disk SED is popular with some of our automotive customers because it connects directly to in-car, Linux-based data loggers through a SATA interface, eliminating the need for a secondary data copy from another in-car storage device and saving time. Here is how Xylon, a manufacturer of automotive data loggers, uses Azure Data Box Disk self-encrypted drives to migrate advanced driver-assistance systems (ADAS) sensor data to Azure: 

“Through the cooperation with the Microsoft Azure team, we have enabled direct data logging to the hardware-encrypted Data Box Disks plugged into our logiRECORDER Automotive HIL Video Logger. It enables our common customers to transfer precious data from the test fleet to the cloud in the simplest and fastest possible way, without wasting time on unnecessary data copying and reformatting along the way.” 
—Jura Ivanovic, Product Director, Automotive HIL Video Logger, Xylon 

Learn more about Data Box Disk self-encrypted drives and get started migrating your on-premises data to Azure. 

Multi-access tier ingestion support

You can now transfer data to different blob access tiers, including the Cold tier, in a single Azure Data Box order. Previously, Azure Data Box only supported transferring data to the default access tier of an Azure Storage account. For example, to move data to the Cool tier in a storage account whose default is Hot, you would have had to first move the data to the Hot tier via Azure Data Box and then use lifecycle management to move it to the Cool tier after it was uploaded to Azure. 

We have now introduced new “access tier” folders in the folder hierarchy on the device. All data that you copy to the “Cool” folder will have its access tier set to Cool, irrespective of the default access tier of the destination storage account, and similarly for data copied to the other folders representing the various access tiers. Learn more about multi-access tier ingestion support. 
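As a rough sketch of what this looks like in practice: once a Data Box share is mounted, landing data in a given access tier is simply a matter of which folder you copy into. The mount point, share name, and container name below are hypothetical; check your device's local web UI for the actual share layout.

```python
# Sketch: copy files into an access-tier folder on a mounted Data Box share.
# The mount point, share name, and container name are hypothetical.
import shutil
from pathlib import Path

share = Path("/mnt/databox/mystorageacct_BlockBlob")  # hypothetical mount
cool_container = share / "Cool" / "archive-logs"      # tier folder, then container
cool_container.mkdir(parents=True, exist_ok=True)

# Everything copied under the "Cool" folder uploads with the Cool access tier,
# regardless of the destination storage account's default tier.
for f in Path("/data/logs").glob("*.log"):
    shutil.copy2(f, cool_container / f.name)
```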

Cross-region data transfer to select Azure regions 

We’re excited to share that Azure Data Box cross-region data transfer, now in preview, supports seamless ingestion of on-premises data from a source country or region to select Azure destinations in a different country or region. For example, with this capability you can now copy on-premises data from Singapore or India to the West United States Azure destination region. Note that the Azure Data Box device isn’t shipped across commerce boundaries. Instead, it’s transported from and to an Azure datacenter within the originating country or region where the on-premises data resides. Data transfer to the destination Azure region takes place across the Azure network without incurring additional fees. 

Learn more about this capability and the supported country or region combinations for Azure Data Box, Azure Data Box Disk, and Azure Data Box Heavy respectively. 

Support for online catch-up copy with Azure Storage Mover Integration 

If your data source hosts active workloads, they will likely make changes while your Azure Data Box is in transit to Azure. Consequently, you’ll need to bring those changes to your cloud storage before a workload can be cut over to it. We’re happy to announce that you can now combine the Azure Storage Mover and Data Box services to form an effective file and folder migration solution that minimizes downtime for your workloads. Storage Mover jobs can detect differences between your on-premises and cloud storage and transfer any updates and new files not captured by your Data Box transfer. For example, if only a file’s metadata (such as permissions) has changed, Azure Storage Mover uploads only the new metadata instead of the entire file content. 

Learn more about how catch-up copies with Azure Storage Mover’s merge and mirror copy mode can help transfer only the delta data to Azure.
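For orientation, a catch-up pass boils down to a Storage Mover job definition whose copy mode mirrors the source to the target, transferring only the delta. The sketch below creates such a job definition through the ARM REST API; the resource names, property names, and API version are assumptions to verify against the Storage Mover documentation, and authentication is elided.

```python
# Sketch: define a Storage Mover catch-up job using "Mirror" copy mode via ARM REST.
# Resource names, property names, and the API version are assumptions; auth is elided.
import requests

SUB, RG, MOVER = "<subscription-id>", "<resource-group>", "<storage-mover>"
url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.StorageMover/storageMovers/{MOVER}"
    "/projects/databox-migration/jobDefinitions/catch-up"
    "?api-version=2023-03-01"
)

body = {
    "properties": {
        "copyMode": "Mirror",          # mirror source to target: only the delta moves
        "sourceName": "onprem-share",  # hypothetical, previously registered source endpoint
        "targetName": "blob-target",   # hypothetical, previously registered target endpoint
    }
}

token = "<arm-access-token>"  # e.g., obtained via azure-identity in real code
response = requests.put(
    url, json=body, headers={"Authorization": f"Bearer {token}"}, timeout=30
)
response.raise_for_status()
```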

Certifications

The Azure Data Box cloud service has achieved HIPAA/BAA, PCI 3DS, and PCI DSS certifications. These certifications have been key requests from many of our customers across the healthcare and financial sectors, and we’re happy to have achieved the compliance status needed to support our customers’ data transfer needs.

Additional product updates

Support for up to 4 TB Azure files across the product family. 

Support for data transfer to “Poland Central” and “Italy North” Azure regions. 

Transfers to Premium Azure Files and Blob Archive tiers now supported with Data Box Disk. 

The data copy service, which significantly improves the ingestion and upload time for small files, is now generally available.

Our goal is to continually enhance the simplicity of your offline data transfers, and your input is invaluable. Should you have any suggestions or feedback regarding Azure Data Box, feel free to reach out via email at DataBox@microsoft.com. We look forward to reviewing your feedback and comments.
The post New Azure Data Box capabilities to accelerate your offline data migration appeared first on Azure Blog.
Source: Azure