OpenAI’s fastest model, GPT-4o mini, is now available on Azure AI

We are also announcing safety features by default for GPT-4o mini, expanded data residency and service availability, plus performance upgrades to Microsoft Azure OpenAI Service.

GPT-4o mini allows customers to deliver stunning applications at a lower cost with blazing speed. GPT-4o mini is significantly smarter than GPT-3.5 Turbo—scoring 82% on the Massive Multitask Language Understanding (MMLU) benchmark compared to 70%—and is more than 60% cheaper.1 The model delivers an expanded 128K context window and integrates the improved multilingual capabilities of GPT-4o, bringing greater quality to languages from around the world.

GPT-4o mini, announced by OpenAI today, is simultaneously available on Azure AI, supporting text processing capabilities with excellent speed; image, audio, and video support will come later. Try it at no cost in the Azure OpenAI Studio Playground.
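As a quick start, a minimal call to a GPT-4o mini deployment might look like the sketch below, using the official `openai` Python package's Azure client. The environment-variable names, API version, and the deployment name `gpt-4o-mini` are illustrative assumptions; substitute the values from your own Azure OpenAI resource.

```python
# A hedged sketch of calling GPT-4o mini through Azure OpenAI. The
# environment-variable names, API version, and deployment name are
# illustrative assumptions; use the values from your own resource.

def build_messages(user_prompt: str) -> list[dict]:
    """Assemble the chat request body (kept separate so it is easy to test)."""
    return [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": user_prompt},
    ]

def ask_gpt4o_mini(user_prompt: str) -> str:
    import os
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-06-01",  # an assumed GA API version
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # your deployment name, which may differ
        messages=build_messages(user_prompt),
    )
    return response.choices[0].message.content
```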

Azure AI
Where innovators are creating the future

Try for free

We’re most excited about the new customer experiences that can be enhanced with GPT-4o mini, particularly streaming scenarios such as assistants, code interpreter, and retrieval, which will benefit from this model’s capabilities. For instance, we observed the incredible speed while testing GPT-4o mini on GitHub Copilot, an AI pair programmer that assists you by delivering code completion suggestions in the tiny pauses between keystrokes, rapidly updating recommendations with each new character typed.

We are also announcing updates to Azure OpenAI Service, including extending safety by default for GPT-4o mini, expanded data residency, and worldwide pay-as-you-go availability, plus performance upgrades. 

Azure AI brings safety by default to GPT-4o mini

Safety continues to be paramount to the productive use and trust that we and our customers expect.

We’re pleased to confirm that our Azure AI Content Safety features—including prompt shields and protected material detection—are now ‘on by default’ for you to use with GPT-4o mini on Azure OpenAI Service.

We have invested in improving the throughput and speed of the Azure AI Content Safety capabilities—including the introduction of an asynchronous filter—so you can maximize the advancements in model speed while not compromising safety. Azure AI Content Safety is already supporting developers across industries to safeguard their generative AI applications, including game development (Unity), tax filing (H&R Block), and education (South Australia Department for Education).

In addition, our Customer Copyright Commitment will apply to GPT-4o mini, giving peace of mind that Microsoft will defend customers against third-party intellectual property claims for output content.

Azure AI now offers data residency for all 27 regions

From day one, Azure OpenAI Service has been covered by Azure’s data residency commitments.

Azure AI gives customers both flexibility and control over where their data is stored and where their data is processed, offering a complete data residency solution that helps customers meet their unique compliance requirements. We also provide choice over the hosting structure that meets business, application, and compliance requirements. Regional pay-as-you-go and Provisioned Throughput Units (PTUs) offer control over both data processing and data storage.

We’re excited to share that Azure OpenAI Service is now available in 27 regions including Spain, which launched earlier this month as our ninth region in Europe.

Azure AI announces global pay-as-you-go with the highest throughput limits for GPT-4o mini

GPT-4o mini is now available using our global pay-as-you-go deployment at 15 cents per million input tokens and 60 cents per million output tokens, which is significantly cheaper than previous frontier models.
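To put those rates in perspective, a back-of-envelope estimate is straightforward (a sketch only; list prices can change, so confirm against the current Azure pricing page):

```python
# Back-of-envelope cost math using the global pay-as-you-go rates quoted
# above: $0.15 per million input tokens, $0.60 per million output tokens.
INPUT_USD_PER_MILLION = 0.15
OUTPUT_USD_PER_MILLION = 0.60

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a GPT-4o mini pay-as-you-go workload."""
    return (
        input_tokens / 1_000_000 * INPUT_USD_PER_MILLION
        + output_tokens / 1_000_000 * OUTPUT_USD_PER_MILLION
    )

# Example: a workload of 10M input and 2M output tokens per day.
daily_cost = estimate_cost_usd(10_000_000, 2_000_000)  # 1.5 + 1.2 = 2.7 USD
```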

We are pleased to announce that the global pay-as-you-go deployment option is generally available this month. Customers pay only for the resources they consume, making it flexible for variable workloads, while traffic is routed globally for higher throughput, all with control over where data resides at rest.

Additionally, we recognize that one of the challenges customers face with new models is not being able to upgrade between model versions in the same region as their existing deployments. Now, with global pay-as-you-go deployments, customers will be able to upgrade from existing models to the latest models.

Global pay-as-you-go offers customers the highest possible scale, with 15M tokens per minute (TPM) of throughput for GPT-4o mini and 30M TPM for GPT-4o. Azure OpenAI Service offers GPT-4o mini with 99.99% availability and the same industry-leading speed as our partner OpenAI.

Azure AI offers leading performance and flexibility for GPT-4o mini

Azure AI is continuing to invest in driving efficiencies for AI workloads across Azure OpenAI Service.

GPT-4o mini comes to Azure AI with availability on our Batch service this month. Batch delivers high throughput jobs with a 24-hour turnaround at a 50% discount rate by using off-peak capacity. This is only possible because Microsoft runs on Azure AI, which allows us to make off-peak capacity available to customers.
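Batch jobs are submitted as a JSONL file of request lines. The sketch below assumes the OpenAI-style batch input format that Azure OpenAI Batch uses and a hypothetical deployment named `gpt-4o-mini`; check the Batch documentation for the exact schema in your API version.

```python
import json

# A hedged sketch of building a Batch input file. The JSONL structure
# follows the OpenAI-style batch format that Azure OpenAI Batch uses;
# the deployment name "gpt-4o-mini" is an assumption.

def batch_request_line(custom_id: str, deployment: str, prompt: str) -> str:
    """One request line of the batch input JSONL."""
    return json.dumps({
        "custom_id": custom_id,               # your correlation id
        "method": "POST",
        "url": "/chat/completions",
        "body": {
            "model": deployment,              # batch deployment name
            "messages": [{"role": "user", "content": prompt}],
        },
    })

prompts = ["Summarize document A", "Summarize document B"]
jsonl_text = "\n".join(
    batch_request_line(f"task-{i}", "gpt-4o-mini", p)
    for i, p in enumerate(prompts)
)
# Write jsonl_text to a file, upload it, then create the batch job for the
# 24-hour, ~50%-discounted turnaround described above.
```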

We are also releasing fine-tuning for GPT-4o mini this month, which allows customers to further customize the model for their specific use cases and scenarios to deliver exceptional value and quality at unprecedented speeds. Following our update last month to switch to token-based billing for training, we’ve reduced hosting charges by up to 43%. Paired with our low price for inferencing, this makes Azure OpenAI Service fine-tuned deployments the most cost-effective offering for customers with production workloads.

With more than 53,000 customers turning to Azure AI to deliver breakthrough experiences at impressive scale, we’re excited to see the innovation from companies like Vodafone (customer agent solution), the University of Sydney (AI assistants), and GigXR (AI virtual patients). More than 50% of the Fortune 500 are building their applications with Azure OpenAI Service.

We can’t wait to see what our customers do with GPT-4o mini on Azure AI!

1. GPT-4o mini: advancing cost-efficient intelligence | OpenAI
The post OpenAI’s fastest model, GPT-4o mini is now available on Azure AI appeared first on Azure Blog.
Source: Azure

Microsoft Cost Management updates—June 2024

Whether you’re a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you’re spending, where it’s being spent, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in. 

We’re always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates. 

FOCUS 1.0 support in Exports

Cost card in Azure portal

Kubernetes cost views (New entry point)

Pricing updates on Azure.com

New ways to save money with Microsoft Cloud

Documentation updates

Before we dig into details, kudos to the FinOps Foundation for successfully hosting FinOps X 2024 in San Diego, California, last month. Microsoft participated as a platinum sponsor for a second consecutive year. Our team members enjoyed connecting with customers and getting insights into their FinOps practice. We also shared our vision of simplifying FinOps through AI, demonstrated in this short video—Bring your FinOps practice into the era of AI.

For all our updates from FinOps X 2024, refer to the blog post by my colleague, Michael Flanakin, who also serves in the FinOps Technical Advisory Council. 

FOCUS 1.0 support in exports 

As you may already know, the FinOps Foundation announced the general availability of the FinOps Cost and Usage Specification (FOCUS) Version 1 in June 2024. We are thrilled to announce that you can get the newly released version through the exports experience in the Microsoft Azure portal or the REST API. You can review the updated schema and the differences from the previous version in this Microsoft Learn article. We will continue to support the ability for you to export the preview version of the FOCUS dataset.

For all the datasets supported through exports and to learn more about the functionality, refer to our documentation.
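Once an export lands in your storage account, the FOCUS columns make cost analysis scriptable. Below is a minimal sketch, assuming standard FOCUS 1.0 column names such as `ServiceName` and `BilledCost` (verify field names against the schema article above):

```python
# A minimal sketch of aggregating an exported FOCUS 1.0 file by service.
# ServiceName and BilledCost are standard FOCUS columns; verify field
# names against the published FOCUS schema.
import csv
from collections import defaultdict

def billed_cost_by_service(rows) -> dict:
    totals = defaultdict(float)
    for row in rows:
        totals[row["ServiceName"]] += float(row["BilledCost"])
    return dict(totals)

def load_focus_csv(path: str):
    # Stream rows from an exported CSV (path is wherever your export lands).
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

# Example with inline rows shaped like a FOCUS export:
sample = [
    {"ServiceName": "Virtual Machines", "BilledCost": "12.50"},
    {"ServiceName": "Storage", "BilledCost": "3.25"},
    {"ServiceName": "Virtual Machines", "BilledCost": "7.50"},
]
totals = billed_cost_by_service(sample)
```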

Cost card in Azure portal 

You have always had the ability to estimate costs for Azure services using the pricing calculator so that you can better plan your expenses. Now, we are excited to announce the estimation capability within the Azure portal itself. Engineers now can quickly get a breakdown of their estimated virtual machine (VM) costs before deploying them and adjust as needed. This new experience is currently available only for VMs running on pay-as-you-go subscriptions and will be expanded in the future. Empowering engineers with cost data without disrupting their workflow enables them to make the right decisions for managing their spending and drives accountability.
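The arithmetic behind such an estimate is simple to reproduce: an hourly rate times Azure's standard 730-hours-per-month assumption. The rate below is illustrative, not a published price:

```python
# Reproducing the cost card's arithmetic: hourly rate times Azure's
# standard 730-hours-per-month assumption. The rate used in the example
# is illustrative, not a published price.
HOURS_PER_MONTH = 730

def monthly_vm_cost(hourly_rate_usd: float, count: int = 1) -> float:
    """Estimated monthly cost for `count` identical pay-as-you-go VMs."""
    return round(hourly_rate_usd * HOURS_PER_MONTH * count, 2)

estimate = monthly_vm_cost(0.096, count=2)  # two VMs at a $0.096/hour rate
```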

Kubernetes cost views (new entry point) 

I spoke about the Azure Kubernetes Service cost views in our November 2023 blog post. We know how important it is for you to get visibility into the granular costs of running your clusters. To make these cost views even easier to access, we have added an entry point on the cluster page itself. Engineers and admins who are already on the cluster page, whether making configuration changes or just monitoring their cluster, can now quickly reference costs as well.

Pricing updates on Azure.com

We’ve been working hard to make some changes to our Azure pricing experiences, and we’re excited to share them with you. These changes will help make it easier for you to estimate the costs of your solutions. 

We’ve expanded our global reach with pricing support for new Azure regions, including Spain Central and Mexico Central. 

We’ve introduced pricing for several new services—enhancing our Azure portfolio—including Trusted Signing, Azure Advanced Container Networking Services, Azure AI Studio, Microsoft Entra External ID, and Azure API Center (now available on the Azure API Management pricing calculator).

The Azure pricing calculator now supports a new example to help you get started with estimating costs for your Azure Arc enabled servers scenarios.  

Azure AI has seen significant updates with pricing support for Basic Video Indexing Analysis for Azure AI Video Indexer, new GPT-4o models and improved Fine Tuning models for Azure OpenAI Service, the deprecation of S2 to S4 volume discount tiers for Azure AI Translator, and the introduction of standard fast transcription and video dubbing, both in preview, for Azure AI Speech.  

We’re thrilled to announce new features in both preview and general availability stages: Azure Flex Consumption (preview) for Azure Functions, Advanced Messaging (generally available) for Azure Communication Services, Azure API Center (generally available) for Azure API Management, and AKS Automatic (preview) for Azure Kubernetes Service.

We’ve made comprehensive updates to our pricing models to reflect the latest offerings and ensure you have the most accurate information, including changes to:

Azure Bastion: Added pricing for premium and developer stock-keeping units (SKUs).

Virtual Machines: Removed CentOS pricing for Linux; added 5-year reserved instance (RI) pricing for the Hx and HBv4 series, as well as pricing for the new NDsr H100 v5 and E20 v4 series.

Databricks: Added pricing for all-purpose serverless compute jobs.

Azure Communication Gateway: Added pricing for the new “Lab” SKU.

Azure Virtual Desktop for Azure Stack HCI: Pricing added to the Azure Virtual Desktop calculator.

Azure Data Factory: Added RI pricing for Dataflow.

Azure Container Apps: Added pricing for dynamic session feature.

Azure Backup: Added pricing for the new comprehensive Blob Storage data protection feature.

Azure SQL Database: Added 3-year RI pricing for the hyperscale series, zone-redundancy pricing for hyperscale elastic pools, and disaster recovery pricing options for single databases.

Azure PostgreSQL: Added pricing for Premium SSD v2.

Defender for Cloud: Added pricing for the “Pre-Purchase Plan”.

Azure Stack Hub: Added pricing for site recovery.

Azure Monitor: Added pricing for workspace replication as well as data restore in the pricing calculator.

We’re constantly working to improve our pricing tools and make them more accessible and user-friendly. We hope you find these changes helpful in estimating the costs for your Azure solutions. If you have any feedback or suggestions for future improvements, please let us know! 

New ways to save money in the Microsoft Cloud 

VM Hibernation is now generally available 

Documentation updates 

Here are a few documentation updates you might be interested in: 

Update: Understand Cost Management data  

Update: Azure Hybrid Benefit documentation 

Update: Automation for partners  

Update: View and download your Microsoft Azure invoice  

Update: Tutorial: Create and manage exported data  

Update: Automatically renew reservations  

Update: Changes to the Azure reservation exchange policy  

Update: Migrate from EA Marketplace Store Charge API

Update: Azure product transfer hub  

Update: Get started with your Microsoft Partner Agreement billing account  

Update: Manage billing across multiple tenants using associated billing tenants

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions! 

What’s next? 

These are just a few of the big updates from last month. Don’t forget to check out the previous Microsoft Cost Management updates. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming.  

Follow @MSCostMgmt on X and subscribe to the Microsoft Cost Management YouTube channel for updates, tips, and tricks.

Harnessing the full power of AI in the cloud: The economic impact of migrating to Azure for AI readiness

As the digital landscape rapidly evolves, AI stands at the forefront, driving significant innovation across industries. However, to fully harness the power of AI, businesses must be AI-ready; this means having defined use-cases for their AI apps, being equipped with modernized databases that seamlessly integrate with AI models, and most importantly, having the right infrastructure in place to power and realize their AI ambitions. When we talk to our customers, many have expressed that traditional on-premises systems often fall short in providing the necessary scalability, stability, and flexibility required for modern AI applications.

A recent Forrester study1, commissioned by Microsoft, surveyed over 300 IT leaders and interviewed representatives from organizations globally to learn about their experience migrating to Azure and if that enhanced their AI impact. The results showed that migrating from on-premises infrastructure to Azure can support AI-readiness in organizations, with lower costs to stand up and consume AI services plus improved flexibility and ability to innovate with AI. Here’s what you should know before you start leveraging AI in the cloud.

Challenges faced by customers with on-premises infrastructure

Many organizations who attempted to implement AI on-premises encountered significant challenges with their existing infrastructure. The top challenges with on-premises infrastructure cited were:

Aging and costly infrastructure: Maintaining or replacing aging on-premises systems is both expensive and complex, diverting resources from strategic initiatives.

Infrastructure instability: Unreliable infrastructure impacts business operations and profitability, creating an urgent need for a more stable solution.

Lack of scalability: Traditional systems often lack the scalability required for AI and machine learning (ML) workloads, necessitating substantial investments for infrequent peak capacity needs.

High capital costs: The substantial upfront costs of on-premises infrastructure limit flexibility and can be a barrier to adopting new technologies.

Forrester’s study highlights that migrating to Azure effectively addresses these issues, enabling organizations to focus on innovation and business growth rather than infrastructure maintenance.

Azure AI
Where innovators are creating the future

Try for free today

Key Benefits

Improved AI-readiness: When asked whether being on Azure helped with AI-readiness, 75% of survey respondents with Azure infrastructure reported that migrating to the cloud was essential or significantly reduced barriers to AI and ML adoption. Interviewees noted that the AI services are readily available in Azure, and colocation of data and infrastructure that is billed only on consumption helps teams test and deploy faster with less upfront costs. This was summarized well by an interviewee who was the head of cloud and DevOps for a banking company:

“We didn’t have to go and build an AI capability. It’s up there, and most of our data is in the cloud as well. And from a hardware-specific standpoint, we don’t have to go procure special hardware to run AI models. Azure provides that hardware today.”
—Head of cloud and DevOps for global banking company

Cost efficiency: Migrating to Azure significantly reduces both the initial cost of deploying AI and the cost of maintaining it, compared to on-premises infrastructure. The study estimates that organizations realize financial benefits of more than USD 500,000 over three years and 15% lower costs to maintain AI/ML in Azure compared to on-premises infrastructure.

Flexibility and scalability to build and maintain AI: As mentioned above, lack of scalability was a common challenge for survey respondents with on-premises infrastructure as well. Respondents with on-premises infrastructure cited lack of scalability with existing systems as a challenge when deploying AI and ML at 1.5 times the rate of those with Azure cloud infrastructure.

Interviewees shared that migrating to Azure gave them easy access to new AI services and the scalability they needed to test and build them out without worrying about infrastructure. 90% of survey respondents with Azure cloud infrastructure agreed or strongly agreed they have the flexibility to build new AI and ML applications. This is compared to 43% of respondents with on-premises infrastructure. A CTO for a healthcare organization said:

“After migrating to Azure, all the infrastructure problems have disappeared, and that’s generally been the problem when you’re looking at new technologies historically.”
—CTO for a healthcare organization

They explained that now, “The scalability [of Azure] is unsurpassed, so it adds to that scale and reactiveness we can provide to the organization.” They also said: “When we were running on-prem, AI was not as easily accessible as it is from a cloud perspective. It’s a lot more available, accessible, and easy to start consuming as well. It allowed the business to start thinking outside of the box because the capabilities were there.”

Holistic organizational improvement: Beyond the cost and performance benefits, the study found that migration to Azure accelerated innovation with AI by having an impact on the people at all levels of an organization:

Bottom-up: skilling and reinvestment in employees. Forrester has found that investing in employees to build understanding, skills, and ethics is critical to successfully using AI. Both interviewees and survey respondents expressed difficulty finding skilled resources to support AI and ML initiatives at their organizations.

Migrating to the cloud freed up resources and changed the types of work needed, allowing organizations to upskill employees and reinvest resources in new initiatives like AI. A VP of AI for a financial services organization shared: “As we have gone along this journey, we have not reduced the number of engineers as we have gotten more efficient, but we’re doing more. You could say we’ve invested in AI, but everything we have invested—my entire team—none of these people were new additions. These are people we could redeploy because we’re doing everything else more efficiently.”

Top-down: created a larger culture of innovation at organizations. As new technologies—like AI—disrupt entire industries, companies need to excel at all levels of innovation to succeed, including embracing platforms and ecosystems that help drive innovation. For interviewees, migrating to the cloud meant that new resources and capabilities were readily available, making it easier for organizations to take advantage of new technologies and opportunities with reduced risk.

Survey data indicates that 77% of respondents with Azure cloud infrastructure find it easier to innovate with AI and ML, compared to only 34% of those with on-premises infrastructure. An executive head of cloud and DevOps for a banking organization said: “Migrating to Azure changes the mindset from an organization perspective when it comes to innovation, because services are easily available in the cloud. You don’t have to go out to the market and look for them. If you look at AI, originally only our data space worked on it, whereas today, it’s being used across the organization because we were already in the cloud and it’s readily available.”

Learn more about migrating to Azure for AI-readiness

Forrester’s study underscores the significant economic and strategic advantages of migrating to Azure for AI-readiness. Lower costs, increased innovation, better resource allocation, and improved scalability make migration to Azure a clear choice for organizations looking to thrive in the AI-driven future.

Ready to get started with your migration journey? Here are some resources to learn more:

Read the full Forrester TEI study on migration to Azure for AI-readiness.

The solutions that can support your organization’s migration and modernization goals.

Our hero offerings that provide funding, unique offers, expert support, and best practices for all use-cases, from migration to innovation with AI.

Learn more in our e-book and video on how to migrate to innovate.

References

Forrester Consulting, The Total Economic Impact™ Of Migrating To Microsoft Azure For AI-Readiness, commissioned by Microsoft, June 2024.


Announcing Phi-3 fine-tuning, new generative AI models, and other Azure AI updates to empower organizations to customize and scale AI applications

AI is transforming every industry and creating new opportunities for innovation and growth. But, developing and deploying AI applications at scale requires a robust and flexible platform that can handle the complex and diverse needs of modern enterprises and allow them to create solutions grounded in their organizational data. That’s why we are excited to announce several updates to help developers quickly create customized AI solutions with greater choice and flexibility leveraging the Azure AI toolchain:

Serverless fine-tuning for Phi-3-mini and Phi-3-medium models enables developers to quickly and easily customize the models for cloud and edge scenarios without having to arrange for compute.

Updates to Phi-3-mini including significant improvement in core quality, instruction-following, and structured output, enabling developers to build with a more performant model without additional cost.

Same-day availability earlier this month of the latest models from OpenAI (GPT-4o mini), Meta (Llama 3.1 405B), and Mistral (Large 2) on Azure AI to provide customers greater choice and flexibility.

Unlocking value through model innovation and customization  

In April, we introduced the Phi-3 family of small, open models developed by Microsoft. Phi-3 models are our most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up. As developers look to tailor AI solutions to meet specific business needs and improve quality of responses, fine-tuning a small model is a great alternative without sacrificing performance. Starting today, developers can fine-tune Phi-3-mini and Phi-3-medium with their data to build AI experiences that are more relevant to their users, safely, and economically.
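Fine-tuning starts with training data. The sketch below writes examples in the chat-completions JSONL style commonly used for fine-tuning in Azure AI; the exact schema accepted for Phi-3 serverless fine-tuning may differ, so verify it against the fine-tuning documentation.

```python
import json

# A sketch of fine-tuning training data in the chat-completions JSONL
# style; verify the exact schema for Phi-3 against the fine-tuning docs.

def training_example(system: str, user: str, assistant: str) -> str:
    """One JSONL training line: a system prompt, a user turn, and the
    target assistant response."""
    return json.dumps({"messages": [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
        {"role": "assistant", "content": assistant},
    ]})

examples = [
    training_example("You are a friendly math tutor.",
                     "What is 3/4 + 1/8?",
                     "3/4 is 6/8, so 6/8 + 1/8 = 7/8."),
]
jsonl_text = "\n".join(examples)  # write this out and upload it with the job
```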

Given their small compute footprint and cloud and edge compatibility, Phi-3 models are well suited for fine-tuning to improve base-model performance across a variety of scenarios, including learning a new skill or task (e.g., tutoring) or enhancing the consistency and quality of responses (e.g., tone or style of responses in chat/Q&A). We’re already seeing adaptations of Phi-3 for new use cases.

Phi-3 models
A family of powerful, small language models (SLMs) with groundbreaking performance at low cost and low latency

Try today

Microsoft and Khan Academy are working together to help improve solutions for teachers and students across the globe. As part of the collaboration, Khan Academy uses Azure OpenAI Service to power Khanmigo for Teachers, a pilot AI-powered teaching assistant for educators across 44 countries, and is experimenting with Phi-3 to improve math tutoring. Khan Academy recently published a research paper highlighting how different AI models perform when evaluating mathematical accuracy in tutoring scenarios, including benchmarks from a fine-tuned version of Phi-3. Initial data shows that when a student makes a mathematical error, Phi-3 outperformed most other leading generative AI models at correcting and identifying student mistakes.

And we’ve fine-tuned Phi-3 for the device too. In June, we introduced Phi Silica to empower developers with a powerful, trustworthy model for building apps with safe, secure AI experiences. Phi Silica builds on the Phi family of models and is designed specifically for the NPUs in Copilot+ PCs. Microsoft Windows is the first platform to have a state-of-the-art small language model (SLM) custom built for the Neural Processing Unit (NPU) and shipping inbox.

You can try fine-tuning for Phi-3 models today in Azure AI.

I am also excited to share that our Models-as-a-Service (serverless endpoint) capability in Azure AI is now generally available. Additionally, Phi-3-small is now available via a serverless endpoint, so developers can quickly and easily get started with AI development without having to manage underlying infrastructure. Phi-3-vision, the multi-modal model in the Phi-3 family, was announced at Microsoft Build and is available through the Azure AI model catalog; it will soon be available via a serverless endpoint as well. Phi-3-small (7B parameters) is available in two context lengths, 128K and 8K, while Phi-3-vision (4.2B parameters) is optimized for chart and diagram understanding and can be used to generate insights and answer questions.
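A serverless endpoint exposes a REST chat-completions route, so no SDK is strictly required. The following is a hedged sketch: the route, auth header, and response shape follow the Models-as-a-Service convention, and the endpoint URL and key are placeholders for values from your own deployment.

```python
import json
import urllib.request

# A hedged sketch of calling a Phi-3 serverless endpoint over REST.
# The route, auth header, and response shape follow the Models-as-a-Service
# convention; the endpoint URL and key are placeholders from your deployment.

def chat_payload(prompt: str, max_tokens: int = 256) -> bytes:
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }).encode("utf-8")

def call_endpoint(endpoint: str, api_key: str, prompt: str) -> str:
    req = urllib.request.Request(
        f"{endpoint}/chat/completions",
        data=chat_payload(prompt),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```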

We are seeing a great response from the community on Phi-3. We released an update for Phi-3-mini last month that brings significant improvements in core quality and instruction following. The model was re-trained, leading to substantial improvement in instruction following and support for structured output. We also improved multi-turn conversation quality, introduced support for <|system|> prompts, and significantly improved reasoning capability.

The table below highlights improvements across instruction following, structured output, and reasoning.

Benchmark                Phi-3-mini-4k              Phi-3-mini-128k
                         Apr ’24     Jun ’24        Apr ’24     Jun ’24
Instruction Extra Hard   5.7         6.0            5.7         5.9
Instruction Hard         4.9         5.1            5.0         5.2
JSON Structure Output    11.5        52.3           1.9         60.1
XML Structure Output     14.4        49.8           47.8        52.9
GPQA                     23.7        30.6           25.9        29.7
MMLU                     68.8        70.9           68.1        69.7
Average                  21.7        35.8           25.7        37.6

We continue to make improvements to Phi-3 safety too. A recent research paper highlighted Microsoft’s iterative “break-fix” approach to improving the safety of the Phi-3 models which involved multiple rounds of testing and refinement, red teaming, and vulnerability identification. This method significantly reduced harmful content by 75% and enhanced the models’ performance on responsible AI benchmarks. 

Expanding model choice, now with over 1600 models available in Azure AI

With Azure AI, we’re committed to bringing the most comprehensive selection of open and frontier models and state-of-the-art tooling to help meet customers’ unique cost, latency, and design needs. Last year we launched the Azure AI model catalog, where we now have the broadest selection of models, with over 1,600 models from providers including AI21, Cohere, Databricks, Hugging Face, Meta, Mistral, Microsoft Research, OpenAI, Snowflake, Stability AI, and others. This month we added OpenAI’s GPT-4o mini (through Azure OpenAI Service), Meta’s Llama 3.1 405B, and Mistral Large 2.

Continuing the momentum today we are excited to share that Cohere Rerank is now available on Azure. Accessing Cohere’s enterprise-ready language models on Azure AI’s robust infrastructure enables businesses to seamlessly, reliably, and safely incorporate cutting-edge semantic search technology into their applications. This integration allows users to leverage the flexibility and scalability of Azure, combined with Cohere’s highly performant and efficient language models, to deliver superior search results in production.

TD Bank Group, one of the largest banks in North America, recently signed an agreement with Cohere to explore its full suite of large language models (LLMs), including Cohere Rerank.

“At TD, we’ve seen the transformative potential of AI to deliver more personalized and intuitive experiences for our customers, colleagues, and communities. We’re excited to be working alongside Cohere to explore how its language models perform on Microsoft Azure to help support our innovation journey at the Bank.”
—Kirsti Racine, VP, AI Technology Lead, TD

Atomicwork, a digital workplace experience platform and longtime Azure customer, has significantly enhanced its IT service management platform with Cohere Rerank. By integrating the model into their AI digital assistant, Atom AI, Atomicwork has improved search accuracy and relevance, providing faster, more precise answers to complex IT support queries. This integration has streamlined IT operations and boosted productivity across the enterprise. 

“The driving force behind Atomicwork’s digital workplace experience solution is Cohere’s Rerank model and Azure AI Studio, which empowers Atom AI, our digital assistant, with the precision and performance required to deliver real-world results. This strategic collaboration underscores our commitment to providing businesses with advanced, secure, and reliable enterprise AI capabilities.”
—Vijay Rayapati, CEO of Atomicwork

Command R+, Cohere’s flagship generative model which is also available on Azure AI, is purpose-built to work well with Cohere Rerank within a Retrieval Augmented Generation (RAG) system. Together they are capable of serving some of the most demanding enterprise workloads in production. 

Earlier this week, we announced that Meta Llama 3.1 405B along with the latest fine-tuned Llama 3.1 models, including 8B and 70B, are now available via a serverless endpoint in Azure AI. Llama 3.1 405B can be used for advanced synthetic data generation and distillation, with 405B-Instruct serving as a teacher model and 8B-Instruct/70B-Instruct models acting as student models. Learn more about this announcement here.

Mistral Large 2 is now available on Azure, making Azure the first leading cloud provider to offer this next-gen model. Mistral Large 2 outperforms previous versions in coding, reasoning, and agentic behavior, standing on par with other leading models. Additionally, Mistral Nemo, developed in collaboration with NVIDIA, brings a powerful 12B model that pushes the boundaries of language understanding and generation. Learn more.

And last week, we brought GPT-4o mini to Azure AI alongside other updates to Azure OpenAI Service, enabling customers to expand their range of AI applications at a lower cost and latency with improved safety and data deployment options. We will announce more capabilities for GPT-4o mini in the coming weeks. We are also happy to introduce a new feature to deploy chatbots built with Azure OpenAI Service into Microsoft Teams.

Enabling AI innovation safely and responsibly  

Building AI solutions responsibly is at the core of AI development at Microsoft. We have a robust set of capabilities to help organizations measure, mitigate, and manage AI risks across the AI development lifecycle for traditional machine learning and generative AI applications. Azure AI evaluations enable developers to iteratively assess the quality and safety of models and applications using built-in and custom metrics to inform mitigations. Additional Azure AI Content Safety features—including prompt shields and protected material detection—are now “on by default” in Azure OpenAI Service. These capabilities can be leveraged as content filters with any foundation model included in our model catalog, including Phi-3, Llama, and Mistral. Developers can also integrate these capabilities into their applications through a single API. Once in production, developers can monitor their application for quality and safety, adversarial prompt attacks, and data integrity, making timely interventions with the help of real-time alerts.
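As a rough illustration of that single-API pattern, here is a stdlib-only sketch that calls the Content Safety text-analysis REST endpoint and applies a simple severity policy. The endpoint, key, and blocking threshold are placeholders to adapt to your own resource and policy; it assumes the 2023-10-01 API version.

```python
# Hedged sketch: calling the Azure AI Content Safety text:analyze REST API
# and applying a simple severity policy. Endpoint/key are placeholders.
import json
import urllib.request

def is_blocked(category_severities: dict, threshold: int = 2) -> bool:
    """Example policy: block when any harm category reaches the threshold."""
    return any(sev >= threshold for sev in category_severities.values())

def analyze_text(endpoint: str, key: str, text: str) -> dict:
    """Return {category: severity} as reported by the Content Safety service."""
    req = urllib.request.Request(
        f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return {c["category"]: c["severity"] for c in body.get("categoriesAnalysis", [])}

# Example (requires a provisioned Content Safety resource):
#   sev = analyze_text("https://<resource>.cognitiveservices.azure.com", "<key>", "...")
#   print(is_blocked(sev))
```

An Azure SDK package is also available for this service; the raw REST form above is just the most compact way to show the one-call shape of the integration.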

Azure AI uses HiddenLayer Model Scanner to scan third-party and open models for emerging threats, such as cybersecurity vulnerabilities, malware, and other signs of tampering, before onboarding them to the Azure AI model catalog. The resulting verifications from Model Scanner, provided within each model card, can give developer teams greater confidence as they select, fine-tune, and deploy open models for their application. 

We continue to invest across the Azure AI stack to bring state-of-the-art innovation to our customers so you can build, deploy, and scale your AI solutions safely and confidently. We cannot wait to see what you build next.

Stay up to date with more Azure AI news

Watch this video to learn more about Azure AI model catalog.

Listen to the podcast on Phi-3 with lead Microsoft researcher Sebastien Bubeck.

The post Announcing Phi-3 fine-tuning, new generative AI models, and other Azure AI updates to empower organizations to customize and scale AI applications appeared first on Azure Blog.
Source: Azure

Latest advancements in Premium SSD v2 and Ultra Azure Managed Disks

We are excited to share the latest advancements in Premium SSD v2 (Pv2) and Ultra disks, the next generation of Azure disk storage. We have enhanced the backup and disaster recovery experience for Pv2 and Ultra disks to help you protect your workloads with ease. In addition, we have improved the security measures for Virtual Machines (VMs) equipped with Pv2 and Ultra. Stay tuned as we delve into these exciting new features! 

Pv2 and Ultra disks offer high throughput, high input/output operations per second (IOPS), and low latency for scenarios ranging from input/output (IO)-intensive workloads like SAP High-performance Analytic Appliance (SAP HANA) to general purpose applications across Relational Database Management System (RDBMS), SQL, and NoSQL databases. Pv2 features an unparalleled balance of price and performance—a 23% price-performance improvement over the competing cloud according to a 2023 GigaOm report—and Ultra provides top performance with low sub-millisecond latency.

Premium SSD v2
Unparalleled balance of price and performance for workloads requiring low latency and high IOPS and throughput.

Learn more

Ultra disk
Top performance with sub-millisecond latency for IO-intensive and transaction-heavy workloads.

Learn more

Incremental snapshots of Pv2 and Ultra Disks—now generally available

In August 2023, we made incremental snapshots of Pv2 and Ultra disks generally available. Incremental snapshots are reliable and cost-effective point-in-time backups of your disks that store only the changes made since the last snapshot.
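The idea behind an incremental snapshot can be sketched in a few lines: a full baseline first, then each subsequent snapshot records only the blocks that changed, and a restore replays the deltas in order. This is a conceptual model of the storage behavior (you pay for the delta), not Azure's actual implementation.

```python
# Conceptual model of incremental snapshots as block-level deltas.
from typing import Optional

def take_snapshot(disk: dict, previous: Optional[dict]) -> dict:
    """Record only blocks that differ from the previous state (full copy if none)."""
    if previous is None:
        return dict(disk)
    return {i: b for i, b in disk.items() if previous.get(i) != b}

def restore(baseline: dict, increments: list) -> dict:
    """Replay deltas over the baseline to reconstruct a point in time."""
    state = dict(baseline)
    for delta in increments:
        state.update(delta)
    return state

disk = {0: b"aa", 1: b"bb", 2: b"cc"}   # block index -> contents
snap0 = take_snapshot(disk, None)       # full baseline
disk[1] = b"BB"                         # one block changes
snap1 = take_snapshot(disk, snap0)      # stores just the changed block
```

Because `snap1` holds a single block rather than the whole disk, the cost of each point-in-time copy tracks the rate of change, not the disk size.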

Switching to Premium SSD v2 disks empowers Teradata with enhanced flexibility to achieve optimal storage performance in our customers’ VantageCloud Lake environments. We are thrilled to witness the general availability of incremental snapshots for Premium SSD v2 disks as it enables swift recreation and recovery to new disks, leading to a faster and more seamless onboarding experience for our customers.
—Neeraj Jain, Director, Cloud Product Management, Teradata

Below are some common uses for incremental snapshots:

Backup: You can use incremental snapshots to create a point-in-time backup of Pv2 and Ultra disks to recover from accidental data loss.

Regional disaster recovery: You can protect against regional disasters by copying incremental snapshots of Pv2 and Ultra disks to any region of your choice.

Refresh development environments from production: You can create incremental snapshots of your Pv2 and Ultra disks in production and use these to update your development or training environment.

Premium SSD v2 disks and the snapshot feature are integral components powering our Unity DevOps Version Control in production. As we look ahead, we envision extending these powerful tools to further enhance Unity DevOps. Premium SSD v2 disks empower us to offer a high-performance cloud-based Version Control system that seamlessly supports real-time 3D projects on a global scale, using snapshots for scalability and recovery. Thanks to Premium SSD v2, we’ve been able to finely balance performance and cost, delivering exceptional value to our customers.
—Shanti Gaudreault, VP of Engineering, Unity Cloud Services

Fully managed backup and disaster recovery solutions

Beyond incremental snapshots, we’re excited to share the new Azure native fully managed backup and recovery solutions for Pv2 and Ultra disks. With these solutions, you can protect your VMs and disks in a single click.

General availability: Azure VM Backup with Pv2 and Ultra Disks

Azure VM Backup is a fully managed, simple, and secure backup solution designed to safeguard VMs against disasters. A vital component of the backup process involves taking application-consistent incremental snapshots of all disks attached to a VM. These snapshots are then securely transferred to the Recovery Services Vault for long-term storage.

To ensure business continuity and recovery in the event of disasters or ransomware attacks, you can enable Azure Backup on VMs equipped with Pv2 and Ultra disks. This enhancement further strengthens the robustness of our backup solution for Pv2 and Ultra disks. To learn more, refer to this documentation.

General availability: Azure Disk Backup with Pv2 and Ultra disks

Azure Disk Backup is a snapshot lifecycle management solution for disks. It automates periodic creation of incremental snapshots and retains them for however long you specify. It is an agentless solution and doesn’t impact production application performance. You can now enable Azure Backup on individual Pv2 and Ultra disks. To get started, refer to this documentation.
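The retention behavior described above can be sketched in a few lines; the dates and the 7-day window are illustrative example values, not Azure Disk Backup defaults.

```python
# Sketch of snapshot lifecycle management: periodic snapshots pruned by a
# retention rule. Dates and the retention window are example values.
from datetime import date, timedelta

def prune(snapshots: list, today: date, retention_days: int) -> list:
    """Keep only snapshots taken within the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return [s for s in snapshots if s >= cutoff]

# Snapshots taken every four days; a 7-day policy keeps only the recent ones.
snaps = [date(2024, 1, d) for d in (1, 5, 9, 13)]
kept = prune(snaps, today=date(2024, 1, 14), retention_days=7)
```

Azure Disk Backup runs this kind of create-and-prune loop for you on a schedule, which is what makes it agentless and hands-off.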

Private preview: Azure Site Recovery for Pv2 Disks

Azure Site Recovery (ASR) helps mitigate data loss through continuous replication of the source VM and disks to a secondary region. It minimizes the impact of outages by reducing application recovery time, and it reduces your infrastructure cost by simplifying deployment and management. ASR for Pv2 disks is now in private preview; you can fill out this form to sign up.

General availability: Application consistent VM restore points for Pv2 and Ultra Disks

Application consistent VM restore points are fundamental to backup and disaster recovery solutions. We achieve application consistency by using an agent running in the VM to capture memory content and pending input or output (I/O) operations while also ensuring the restore point is consistent across multiple disks of a VM. You can now create application consistent VM restore points for your VMs with Pv2 and Ultra disks. To get started, refer to this documentation.

Third-party backup and disaster recovery support

The backup and recovery experience on Pv2 and Ultra is also elevated by our third-party partners. Veeam, Rubrik, and OpenText now offer backup support for Pv2 and Ultra disks, and Veritas now offers backup support for Ultra disk.

Veeam Backup support for Pv2 and Ultra disks

Veeam Backup, a data protection and disaster recovery tool, now supports backups for VMs using Pv2 and Ultra disks. With Veeam Backup, you can create image-level backups and cloud-native snapshots of Azure VMs to restore entire VMs and their attached disks. For more information, check out what’s new in Veeam Backup and Replication and the Veeam Backup for Microsoft Azure User Guide.

Veeam is a strategic Microsoft partner, offering features to help protect against ransomware and provide added resilience for customers on-premises, in Azure and at the edge by allowing users to store their backups in Azure Blob, with support for immutability, ensuring data is safe and compliant with regulatory requirements. We support a wide variety of Azure workloads, and now are providing support for Premium v2 and Ultra Disk with Veeam Backup for Azure.
—Dave Russell, SVP, Head of Strategy at Veeam

Rubrik Security Cloud support for Pv2 and Ultra disks

Rubrik Security Cloud (RSC) now provides backup support for VMs with Pv2 and Ultra disks. RSC is a comprehensive solution that helps you protect your data, monitor data risk, and recover data and applications. To learn more, see Rubrik Security Cloud and the documentation.

Rubrik’s customers understand the importance of having access to high-performance storage, and Microsoft’s incremental snapshots for Premium SSD v2 and Ultra disks will deliver just that. The integration with Rubrik will be incredibly smooth, designed to seamlessly secure customers’ valuable data—keeping it in safe hands. Kudos to Microsoft for their partnership in making data protection a seamless and hassle-free experience!
—Kesavan Palanichamy, Staff Product Manager at Rubrik

Veritas NetBackup Snapshot Manager support for Ultra disks

Veritas has expanded support of its NetBackup Snapshot Manager to include Ultra disks. With Veritas NetBackup, you can streamline the backup and recovery processes of your high-performance workloads using Ultra disk. For more information, check out the NetBackup Snapshot Manager for Cloud Install and Upgrade Guide (page 24).

Carbonite by OpenText Support for Pv2 and Ultra Disks

OpenText announced backup support for Pv2 and Ultra disk through Carbonite Availability, a tool to protect your data from incidents. With Carbonite Availability, you can quickly restore to a known point in time. To learn more, check out OpenText’s blog post on The Cost of Risk.

OpenText is committed to ensuring customers can take advantage of latest advancements on Azure Disk Storage for their enterprise data backup and recovery needs. Now, with the GA of incremental snapshots support for Premium SSD v2 and Ultra Disk and by the power of OpenText Migrate and OpenText Availability, customers can add recovery into a point-in-time snapshot for any supported Ultra Disk or Premium SSD v2. This expanded functionality empowers customers to efficiently restore to a known point, eliminating downtime and reducing costs associated with ransomware attacks!
—Jody Guffey, Vice President, Americas Sales at OpenText Cybersecurity

New security features for Pv2 and Ultra disks

In addition to backup and recovery enhancements, we’re delighted to share new security features available for Pv2 and Ultra disks. These features include encryption for your data at the VM host and enhanced security protections for your VMs using Pv2 and Ultra disks.

General availability: Encryption at host for Pv2 and Ultra disks

Encryption at host now supports Pv2 and Ultra disks. When you enable encryption at host, encryption starts on the VM host itself, the Azure server where your VM is allocated. The data is encrypted at the VM host and flows encrypted to the storage service. In addition, this feature will ensure that local storage on the VM host is encrypted at rest, which includes temporary storage on the VM as well as the host cache used with Premium SSD, Standard SSD, and Standard HDD disks. To learn more, read the documentation.

General availability: Trusted launch VM support for Pv2 and Ultra disks

Azure Trusted Launch now supports VMs with Pv2 and Ultra disks. It enhances the security of Azure Generation 2 VMs by adding a foundational compute security layer that protects VMs against advanced and persistent attack techniques such as bootkits and rootkits. For example, Trusted Launch provides secure boot by ensuring that VM boots use only software trusted by the original equipment manufacturer. To learn more, read the documentation.

Get started with Pv2 and Ultra disks

With added backup support and security protections, our next generation of Azure disk storage is more ready than ever for your most demanding and mission-critical workloads. To get started with Pv2 and Ultra disks, check out how to deploy a Premium SSD v2 and how to deploy an Ultra disk.
The post Latest advancements in Premium SSD v2 and Ultra Azure Managed Disks appeared first on Azure Blog.

Supercharge your AI app development through cloud modernization

The advent of generative AI has ushered in a new era of intelligent applications that can understand natural language, generate human-like content, and augment human capabilities. However, as companies across industries begin to recognize the potential of AI to revolutionize their operations, a crucial first step is often overlooked: modernizing their on-premises application infrastructure.

If your organization aspires to harness the power of AI to enhance customer experiences and drive growth, cloud migration far outpaces on-premises alternatives. Many early-adopter customers, such as H&R Block and TomTom, have reiterated that what set them up for success in the AI era was the fact that they had already begun modernizing their app infrastructure on Azure. A commissioned study by IDC, “Exploring the Benefits of Cloud Migration and Modernization for the Development of Intelligent Applications,” based on interviews with 900 IT leaders worldwide about their experiences migrating apps to the cloud, surfaced additional insights that connect these dots. In this blog, I will walk you through some of the takeaways.

Azure AI
Lead your market with multimodal and generative AI

Learn more

Modernize or fall behind: The AI-driven urgency for cloud migration

Let’s state the obvious: AI is a powerful technology, capable of creating content, code, and even entire applications. The rapid advancements in generative AI technologies, such as OpenAI’s GPT-4, have transformed how businesses operate and interact with customers.

But generative AI models, like those powering ChatGPT or image-generating tools, are data-hungry beasts. They require massive computing resources, flexible scaling, and access to vast datasets to deliver their transformative capabilities. On-premises legacy systems and siloed data stores simply cannot keep pace with the compute and data demands of modern AI workloads.

Cloud platforms provide the robust infrastructure and storage solutions necessary to support AI workloads, all fully managed by the provider. They offer virtually unlimited scalability, ensuring applications can handle fluctuating demand and deliver consistent performance. The IDC study’s key finding revealed that organizations were primarily motivated to migrate applications to the cloud by numerous business advantages, including improved security and data privacy, streamlined integration of cloud-based services, and cost savings. Additionally, the cloud’s inherent agility allows businesses to experiment, iterate, and deploy AI models quickly, accelerating innovation.

The .NET platform, with its latest version, is equipped to leverage AI in cloud environments. Developers can integrate AI functionality into their applications using tools like the Semantic Kernel and libraries such as OpenAI, Qdrant, and Milvus. The integration with .NET Aspire ensures that applications can be deployed to the cloud with optimal performance and scalability. For example, H&R Block’s AI Tax Assistant, developed using .NET and Azure OpenAI, demonstrates how businesses can create scalable, AI-driven solutions to enhance user experiences and operational efficiency. By incorporating .NET into your cloud migration strategy, you can streamline development and accelerate the realization of AI’s potential across your business operations.

Migrating and refactoring legacy on-premises applications to be cloud-optimized unlocks the ability to exploit AI services and vast data repositories, and to scale compute seamlessly. This can enable your company not only to create generative AI apps, but to fully integrate generative AI across all facets of its intelligent systems and data pipelines.

Accelerate your AI ambitions in the cloud

The recent IDC study makes clear a strong correlation between a company’s desire to leverage generative AI and the realization of its full value through cloud migration. Let’s break down some key considerations:

Data accessibility: Cloud environments make it easier to consolidate and access data from various sources, providing AI models with the information they need for training and refinement.

Computational power: The cloud’s elastic computing resources can be dynamically allocated to meet the demands of complex AI algorithms, ensuring optimal performance and cost efficiency.

Collaboration: Cloud-based tools facilitate seamless collaboration among data scientists, developers, and business stakeholders, streamlining the AI development and deployment process.

Beyond just enabling generative AI, cloud migration also accelerates overall innovation velocity. Cloud platforms provide a wealth of ready-to-use services like machine learning, IoT, serverless computing, and more that allow companies to rapidly develop and deploy new intelligent capabilities into their apps.

Embrace AI in the cloud to outpace the competition

The urgency to migrate and modernize applications isn’t just about keeping up with the times—it’s about gaining a competitive edge. Businesses that embrace the cloud and AI are better positioned to:

Attract top talent: The most talented data scientists and developers are drawn to organizations with cutting-edge technology stacks.

Adapt to change: The cloud’s flexibility allows businesses to pivot quickly in response to evolving market conditions or customer needs.

Drive revenue growth: AI-powered applications can open new revenue streams and deliver exceptional customer experiences.

Spark AI-powered innovation by modernizing your cloud

To maintain a competitive edge, cloud migration must go beyond simply lifting and shifting applications. It’s about modernization—rearchitecting and optimizing applications for the cloud to unlock new levels of agility, scalability, and innovation. By modernizing your applications to cloud-native architectures, your business can:

Enhance functionality: Integrate AI-powered features like chatbots, personalized recommendations, and intelligent automation into existing applications.

Improve performance: Leverage cloud-native technologies to optimize application speed, responsiveness, and scalability.

Reduce costs: Pay only for the resources you use, eliminating the need for costly on-premises infrastructure.

The majority of respondents in the IDC survey chose to migrate applications to the cloud because it empowered them to innovate on application development and, as a result, realize a multitude of commercial benefits more rapidly.

Fuel your intelligent app development with a cloud-powered AI transformation

The migration and modernization of applications to the cloud is not just an option but a necessity in the era of generative AI. Companies that act swiftly to embrace this transformation will be well-positioned to harness the full potential of intelligent applications, driving innovation, operational efficiency, and customer engagement. The synergy between cloud computing and generative AI is creating unparalleled opportunities for businesses to redefine their strategies and achieve sustained growth in a competitive landscape.

By recognizing the urgency and quantifying the benefits, companies can make informed decisions about their cloud migration and modernization journeys, ensuring they remain at the forefront of technological advancement and market relevance.

Get the full rundown of the IDC’s survey by downloading the whitepaper
Exploring the benefits of cloud migration and modernization

Discover more

The post Supercharge your AI app development through cloud modernization appeared first on Azure Blog.

AI on the road: Azure OpenAI Service helps drive better decision making for the transportation sector

Since the invention of the wheel in 3500 BCE, the world has become an increasingly fast-moving place. The Industrial Revolution introduced steam-powered trains and ships, revolutionizing land and sea travel. The 20th century saw the advent of automobiles and airplanes, drastically reducing travel time and transforming global connectivity. And today, AI is shaping how quickly and intelligently businesses can move both goods and people, with no slowing down in sight: the global AI in transportation market is expected to reach $23.11 billion by 2032.1

Azure OpenAI Service
Build your own copilot and generative AI applications

Explore our features

Azure OpenAI Service is driving change in transportation

Microsoft Azure OpenAI Service is supporting the transportation industry through innovative business applications. TomTom’s Digital Cockpit, powered by Azure, offers an immersive in-car infotainment system that enhances driver interaction while, the company claims, reducing costs. CarMax uses Azure OpenAI Service to streamline content creation for its car research pages. Fraport integrates AI with Azure OpenAI Service to automate operations and assist employees, preparing for future growth despite workforce reductions, and Alstom leverages AI to enhance operational efficiency across its value chain, supporting its vision of Engineering 4.0 and improving specification quality and project management. All four mark significant strides toward smarter, more efficient transportation solutions. Read on to learn how each Microsoft customer below uses Azure OpenAI Service to improve business operations.

TomTom brings AI-powered, talking cars to life with Azure

TomTom has developed the Digital Cockpit, an immersive in-car infotainment system that automobile manufacturers can customize, potentially reducing costs by up to 80%. This system utilizes Azure OpenAI Service, Azure Cosmos DB, and Azure Kubernetes Service to provide seamless control and interaction for drivers. TomTom’s goal is to surpass the smartphone experience by enabling conversational infotainment. The development group for Digital Cockpit was reduced from 10 to 3 people, and query response times improved significantly from 12 seconds to just 2.5 seconds. In evaluating 300 different scenarios, the AI assistant correctly understood and answered 95% of complex driver requests. TomTom Digital Cockpit is now available to car manufacturers, enabling them to customize the system to their brand—accelerating time-to-market and retaining brand ownership of the driver experience.

CarMax puts customers first with car research tools powered by Azure OpenAI Service

Since its inception in 1993, CarMax has evolved from a groundbreaking startup to the leading used car retailer in the United States, selling over 11 million vehicles. To continue its trajectory of innovation, CarMax has leveraged Azure OpenAI Service to streamline content creation for its car research pages. This collaboration has significantly enhanced CarMax’s digital tools and capabilities, allowing the company to produce AI-generated content that not only aids customers in their car-buying research but also boosts search engine rankings. For example, generating summaries for 5,000 car pages, which would have taken about 11 years manually, was accomplished in just a few months using Azure OpenAI Service. This efficiency also resulted in an 80% editorial review approval rate for the AI-generated content. The integration of Azure OpenAI Service has brought substantial benefits, including cost and time savings, improved content management, and the ability to scale and deploy custom models. The service has enabled CarMax to summarize extensive customer reviews into concise, readable sentences, enhancing user experience and driving more traffic to the website through improved SEO performance. As a result, CarMax’s search engine metrics have seen an upward trend.

FraportGPT leads the way to the airport of the future: Fraport makes employees’ daily tasks easier with the help of Azure OpenAI Service

Fraport AG, which operates Frankfurt Airport in Germany and holds stakes in 30 other airports worldwide, is integrating AI to automate and streamline operations. FraportGPT, a chatbot powered by Azure OpenAI Service, is designed to assist employees across an array of different specialized areas. The chatbot helps programmers create code statements, it summarizes lengthy texts for legal staff, and it assists human resource and administrative personnel with drafting emails and documents. By utilizing AI, Fraport aims to address the challenge of a projected 30% workforce reduction due to aging while simultaneously planning for a 30% growth in operations over the next few years. FraportGPT’s deployment has been met with enthusiasm from employees, who have begun using it in diverse areas such as rental management and administrative functions, enhancing efficiency and productivity.

Alstom integrates generative AI into its business processes with Azure AI technologies

Alstom, a global leader in the railway sector, has integrated Microsoft generative AI into its operations to enhance efficiency, meet its unprecedented backlog, and improve customer satisfaction. This AI-driven approach aims to streamline processes across all stages of Alstom’s value chain—including business opportunities, contract specifications, design, manufacturing, testing, supply chain, installation, and maintenance. Since 2020, Alstom has been leveraging AI to support its vision of Engineering 4.0 and the Augmented Workforce, ensuring that employees can use AI copilots to generate engineering assets more quickly and cost-effectively. For instance, AI helps write specifications for railway systems and can improve specification quality by 25% and reduce costs associated with poor-quality requirements. Alstom’s in-house AI tool, supported by Azure OpenAI Service, facilitates a wide range of business functions—including human resources, finance, and project management—with over a thousand users conducting more than 15,000 operations monthly. The tool’s capabilities in content generation, translation, and document intelligence enable massive data processing, thereby enhancing operational efficiency.

Azure OpenAI Service is proud to support Microsoft customers like TomTom, CarMax, Fraport, and Alstom in enhancing navigation, optimizing logistics, improving operational efficiency, and bolstering customer support, research, and project management.

Our commitment to responsible AI

Organizations across industries are leveraging Azure OpenAI Service and Microsoft Copilot services and capabilities to drive growth, increase productivity, and create value-added experiences. From advancing medical breakthroughs to streamlining manufacturing operations, our customers trust that their data is protected by robust privacy protections and data governance practices. As our customers continue to expand their use of our AI solutions, they can be confident that their valuable data is safeguarded by industry-leading data governance and privacy practices in the most trusted cloud on the market today.  

At Microsoft, we have a long-standing practice of protecting our customers’ information. Our approach to responsible AI is built on a foundation of privacy, and we remain dedicated to upholding core values of privacy, security, and safety in all our generative AI products and solutions.   

Get started with Azure OpenAI Service 

Apply for access to Azure OpenAI Service by completing this form. 

Learn about Azure OpenAI Service and the latest enhancements. 

Get started with GPT-4 in Azure OpenAI Service in Microsoft Learn. 

Read our partner announcement blog, Empowering partners to develop AI-powered apps and experiences with ChatGPT in Azure OpenAI Service. 

Learn how to use the new Chat Completions API (in preview) and model versions for ChatGPT and GPT-4 models in Azure OpenAI Service.

Learn more about Azure AI Content Safety.

Explore capabilities and solutions with Azure OpenAI Service

1 Precedence Research, Artificial Intelligence in Transportation Market Size to Reach USD 23.11 Bn by 2032, September 2023.
The post AI on the road: Azure OpenAI Service helps drive better decision making for the transportation sector appeared first on Azure Blog.

Enable location analytics with Azure Maps

Imagine unlocking a treasure trove of insights from your existing data sets, insights that make you look at the physical world differently. That’s what location analytics enables. Any data that has a geographic aspect to it is often called “location data” and is already present in about 80% of enterprise data. It is generated from customer databases, smartphones, Internet of Things (IoT) devices, connected vehicles, GPS units, credit card transactions, and more—this data is everywhere. Location analytics is the science of adding and analyzing layers of location data alongside your existing enterprise data to derive unique insights.

Organizations use location analytics to create many of the experiences you use every day—like when you are booking a hotel in a different country, often hotel prices are automatically available to you in your currency. Behind the scenes, hotel companies are using location services to convert your IP address to your country and to display hotel locations on a map. This helps them to seamlessly provide the relevant information for you, enhancing your online booking experience.   

Azure Maps
Bring intelligent location-enabled experiences to life for applications and solutions

Explore features

Organizations across industries leveraging Azure Maps APIs  

With Microsoft Azure Maps, organizations worldwide are using location data to create similar applications and experiences for mobile and web to gain unique insights, solve critical challenges, and improve their businesses. Azure Maps provides a suite of location services that enable developers and enterprises to build scalable, location-enabled, and map-based experiences. 

Services available through Azure Maps APIs unlock a wide variety of use cases across different sectors. Here’s a quick highlight of a few of our services and how they are being used:

Did you know Azure Maps is HIPAA compliant?

Read the blog

Data enrichment services add more information to the data you already have. The Geocoding service converts physical addresses into coordinates, and converts coordinates into addresses (known as reverse geocoding). The Azure Maps Geocoding API also enables users to save the geocoded addresses for as long as they have an active Azure account, so they don't have to call the service each time and incur incremental costs. Once converted, addresses can be visualized on a map using the Get Map Tiles API for further analysis. A popular use case for these location services is in the healthcare industry, where organizations use the Geocoding API to convert patients' addresses into coordinates, and then use the Map Tiles service to visualize where patients are located on a map and find the nearest healthcare facilities. Further, certain ambulance operators are leveraging location analytics to pre-emptively place ambulances at predicted 'hot spot' locations to reduce emergency response times. Azure Maps is built on Microsoft Azure and is fully compliant with the Health Insurance Portability and Accountability Act (HIPAA), providing healthcare companies with peace of mind when dealing with highly sensitive and confidential patient information.
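As a rough sketch of what a geocoding call looks like, the snippet below builds request URLs for the Azure Maps Search Address (v1.0) endpoints. The subscription key is a placeholder, and the exact response shape can vary by API version, so treat the commented usage as illustrative rather than definitive.

```python
from urllib.parse import urlencode

ATLAS = "https://atlas.microsoft.com"

def geocode_url(address: str, key: str) -> str:
    """Request URL for forward geocoding (address -> coordinates)."""
    params = {"api-version": "1.0", "subscription-key": key, "query": address}
    return f"{ATLAS}/search/address/json?{urlencode(params)}"

def reverse_geocode_url(lat: float, lon: float, key: str) -> str:
    """Request URL for reverse geocoding (coordinates -> address)."""
    params = {"api-version": "1.0", "subscription-key": key, "query": f"{lat},{lon}"}
    return f"{ATLAS}/search/address/reverse/json?{urlencode(params)}"

# Fetch with any HTTP client; in v1.0 responses the coordinates typically
# sit under results[0]["position"] as {"lat": ..., "lon": ...}:
#   import json, urllib.request
#   body = json.load(urllib.request.urlopen(
#       geocode_url("1 Microsoft Way, Redmond, WA", "<your-key>")))
```

Once you have coordinates, the same pattern applies to the Map Tiles endpoints for rendering them on a map.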

Routing services calculate the distance or time required to get from one point to another. One of the most prominent use cases for routing is in the logistics industry, where organizations use routing APIs to create the most efficient vehicle routes to deliver goods. Optimized routes save businesses time and money, enabling operational efficiencies. Recently, Azure partnered with NVIDIA to use NVIDIA cuOpt for multi-itinerary optimization. Big logistics companies often deal with hundreds of drivers and drop-off locations and need to create a matrix of possible routes to pick the most efficient ones. With NVIDIA cuOpt, a state-of-the-art, graphics processing unit (GPU) accelerated engine, the time taken to create and analyze the matrix of routes is reduced from multiple minutes to under a second.
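A minimal sketch of a single point-to-point routing request, assuming the Route Directions v1.0 endpoint; the parameter values shown (route type, travel mode) are examples, and the key is a placeholder.

```python
from urllib.parse import urlencode

def route_url(origin: tuple, destination: tuple, key: str) -> str:
    """Request URL for the Azure Maps Route Directions API (v1.0).
    Waypoints are 'lat,lon' pairs joined by colons."""
    query = f"{origin[0]},{origin[1]}:{destination[0]},{destination[1]}"
    params = {
        "api-version": "1.0",
        "subscription-key": key,
        "query": query,
        "routeType": "fastest",  # or e.g. "shortest"
        "travelMode": "truck",   # vehicle profile for logistics fleets
    }
    return "https://atlas.microsoft.com/route/directions/json?" + urlencode(params)

# The response's routes[0]["summary"] typically reports lengthInMeters and
# travelTimeInSeconds, which are the distance/time figures described above.
```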

Weather data services provide daily forecasts plus historical, normal, and actual conditions for any latitude and longitude, including temperature, air quality, and storm information. The weather service also provides valuable data for prediction and modeling based on current and forecasted conditions, enabling the development of weather-informed applications. A popular use case is in the retail industry, where organizations use historical and current weather data to forecast weather conditions. This information helps them make informed sales and operational decisions such as inventory planning and pricing. Retailers also use weather data to create more targeted ads and promotions, improving their overall marketing campaign effectiveness.

Get started with Azure Maps

Azure Maps is designed for compatibility, enabling you to connect with a range of Azure services like Azure IoT, Power BI, Microsoft Azure Active Directory, Azure Data Explorer, Power Apps, Synapse ML, and more. With minimal coding required, you can effortlessly enhance your applications with powerful mapping and location analytics capabilities.    

Visit the Azure Maps product page to learn more.

Explore our collection of hundreds of Azure Maps samples that have been made open source on GitHub. Build your location-aware solutions with a seamless development experience.

Leverage the Azure Maps Tech Community blog page to stay abreast of all new tools and technologies being added to Azure Maps.   

If you already have an Azure subscription, you just need to add an Azure Maps resource to your project and use that instance to call Azure Maps APIs. Visit the Azure Maps pricing page to explore pricing options. Pay only for what you use and easily deploy your Azure Maps service into an existing Azure subscription.

If you don’t have an Azure subscription, sign up for it here and follow the steps above. 

Learn more about Azure Maps

The post Enable location analytics with Azure Maps appeared first on Azure Blog.

10 ways to impact business velocity through Azure OpenAI Service

The phrase, “time is money,” is commonly attributed to Benjamin Franklin, who first used it in his essay “Advice to a Young Tradesman,” published in 1748. Franklin was addressing the economic value of time, a concept increasingly relevant when discussing AI’s impact on business today. AI is adept at processing and analyzing troves of data much faster than a human brain—enabling quicker, more informed decision-making. Leaders who embrace AI now and take action to understand it, experiment with it, and envision how it can solve hard problems are going to run companies that thrive in an AI world.1 From automating routine tasks to providing deep insights through data analysis, AI technologies are enabling businesses to make quicker, more informed decisions, driving growth and competitive advantage.

Azure OpenAI Service

Power business efficiency

Learn more

10 ways AI can turbocharge business efficiency

Automating repetitive tasks: AI can handle mundane and repetitive tasks such as data entry, scheduling, and email sorting.

Real-time data analysis: AI algorithms can analyze vast amounts of data in real-time, providing immediate insights and allowing businesses to make faster, data-driven decisions.

Predictive analytics: AI can forecast trends and behaviors based on historical data, enabling companies to anticipate market changes and customer needs more rapidly.

Customer support chatbots: AI-powered chatbots provide instant customer service, addressing inquiries and resolving issues without human intervention.

Supply chain optimization: AI can predict demand, optimize inventory levels, and streamline logistics.

Fraud detection: AI systems can quickly detect and respond to fraudulent activities by analyzing transaction patterns and identifying anomalies in real-time.

Personalized marketing: AI can tailor marketing campaigns to individual preferences and behaviors, increasing engagement and conversion rates more swiftly.

Enhanced recruitment processes: AI can screen resumes, conduct initial interviews, and identify the best candidates faster than traditional methods.

Process automation: Robotic Process Automation (RPA) driven by AI can execute business processes faster and with fewer errors, from financial transactions to regulatory compliance.

Product development: AI accelerates product development cycles by simulating different design scenarios, optimizing prototypes, and predicting performance outcomes.
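As a concrete sketch of the first item, email triage, here is roughly what the Azure OpenAI call could look like. The prompt, deployment name, and API version are illustrative placeholders, and the `openai` package's `AzureOpenAI` client is assumed.

```python
def triage_messages(subject: str, body: str) -> list:
    """Chat messages asking the model to label an incoming email."""
    return [
        {"role": "system",
         "content": "Classify the email as exactly one of: urgent, routine, "
                    "spam. Reply with the label only."},
        {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
    ]

def classify_email(client, deployment: str, subject: str, body: str) -> str:
    """Send the messages to an Azure OpenAI chat deployment; return the label."""
    resp = client.chat.completions.create(
        model=deployment,  # your deployment name, not the base model name
        messages=triage_messages(subject, body),
    )
    return resp.choices[0].message.content.strip().lower()

# Usage (requires `pip install openai` and an Azure OpenAI resource):
#   from openai import AzureOpenAI
#   client = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
#                        api_key="<key>", api_version="2024-02-01")
#   classify_email(client, "my-deployment", "Checkout is down", "500 errors...")
```

The same pattern, with a different system prompt, covers several of the other items above, such as support chatbots and resume screening.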

Below, we look at three Microsoft customers who used Azure OpenAI Service to accelerate the speed at which they do business.  

Akbank empowers staff to search through 10,000 records in seconds with Azure OpenAI Service

Akbank, one of Türkiye's largest banks, has significantly improved its customer support operations by integrating Azure OpenAI Service. Whereas its customer representatives previously had to search through a hefty 10,000-article knowledge base in hopes of finding correct responses, they now interact with an AI chatbot that generates correct answers 90% of the time. This integration saves three minutes per interaction, enhancing both the quality and accuracy of the support provided. Akbank has also incorporated proactive suggestions into the chatbot, enabling staff to get faster responses and continually improve customer support.

Serving customers 78% faster: VOCALLS' voicebots supercharge call handling with Azure OpenAI

VOCALLS, a Prague and London-based telecommunications company, leverages Microsoft Azure AI technologies to support its customer service with AI-powered voicebots. Specializing in conversational AI solutions, VOCALLS automates over 50 million interactions annually, improving customer experiences for companies like Estafeta. Estafeta, a logistics pioneer in Latin America, saw a 78% reduction in average handling time and a 120% increase in answered calls after deploying VOCALLS' voicebot, Beatriz. This voicebot provides immediate support, eliminating wait times and boosting customer satisfaction scores.

RepsMate empowers businesses to create super agents by leveraging Azure's AI superpowers

As a member of the Microsoft for Startups program, RepsMate has leveraged Microsoft's networks, support, and the Azure Marketplace to gain traction in Eastern Europe. RepsMate's solution, driven by AI and data analysis, has led to significant efficiency gains, reducing average handling times by 12%, decreasing chat durations by 20 to 30%, and increasing first-call resolution rates by 5 to 10%. Additionally, RepsMate has automated up to 25% of interactions with predefined answers, enhancing both speed and accuracy. Their strategic use of Microsoft's full suite of technologies has also allowed RepsMate to train on large datasets faster and avoid unnecessary costs, further enhancing efficiency.

Azure Marketplace

Discover, try, and deploy cloud software

A faster, more efficient future

Examples like those of Akbank, VOCALLS, and RepsMate demonstrate the impact of AI on business speed and productivity. By integrating AI solutions like Microsoft Azure OpenAI Service, companies can achieve faster decision-making, optimize their processes, and better support customer experiences. As businesses continue to adopt and innovate with AI, they’re in a better position to meet the demands of a rapidly evolving market.

Our commitment to responsible AI

Organizations across industries are leveraging Microsoft Azure OpenAI Service and Copilot services and capabilities to drive growth, increase productivity, and create value-added experiences. From advancing medical breakthroughs to streamlining manufacturing operations, our customers trust that their data is protected by robust privacy protections and data governance practices. As our customers continue to expand their use of our AI solutions, they can be confident that their valuable data is safeguarded by industry-leading data governance and privacy practices in the most trusted cloud on the market today.  

At Microsoft, we have a long-standing practice of protecting our customers’ information. Our approach to responsible AI is built on a foundation of privacy, and we remain dedicated to upholding core values of privacy, security, and safety in all our generative AI products and solutions.   

Get started with Azure OpenAI Service 

Apply for access to Azure OpenAI Service by completing this form. 

Learn about Azure OpenAI Service and the latest enhancements. 

Get started with GPT-4 in Azure OpenAI Service in Microsoft Learn. 

Read our partner announcement blog, empowering partners to develop AI-powered apps and experiences with ChatGPT in Azure OpenAI Service. 

Learn how to use the new Chat Completions API (in preview) and model versions for ChatGPT and GPT-4 models in Azure OpenAI Service.

Learn more about Azure AI Content Safety.

Explore possibilities with Azure OpenAI Service

1 Harvard Business Review, Build a Winning AI Strategy For Your Business, July 14, 2023.
The post 10 ways to impact business velocity through Azure OpenAI Service appeared first on Azure Blog.

Build your own copilot with Microsoft Azure AI Studio  

In the rapidly evolving world of AI technology, Microsoft Azure AI Studio, now generally available, is empowering organizations to create their own AI copilots. With AI Studio, organizations can customize and build their own copilot to meet their unique needs.  

We’re excited to see organizations across various sectors staying competitive, driving innovation, and transforming their operations.  No matter your use case, AI Studio accelerates the generative AI development lifecycle, empowering organizations to build and shape the future with AI.  

Build a copilot to streamline call center operations 

Azure AI services

Build cutting-edge applications

Vodafone used AI Studio to modernize their existing customer service chatbot TOBi, and to develop a new copilot with a conversational AI search interface, called SuperAgent, to support Vodafone’s human agents when responding to complex customer inquiries.  

TOBi serves customers by addressing common questions like account status and simple technical troubleshooting. SuperAgent involves summarizing call center transcripts, which condenses lengthy calls into brief summaries stored in the customer relationship management system (CRM). This allows agents to quickly understand the reason for a customer’s previous call and detect new issues, improving response times and customer satisfaction. The system is fully automated, with calls transcribed and summarized by Microsoft Azure OpenAI Service in Azure AI Studio, providing actionable insights for agents. 
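A hedged sketch of the summarization step in a pipeline like SuperAgent's: the prompt wording, the CRM field names, and the truncation limit below are all illustrative assumptions, not Vodafone's actual implementation.

```python
def summary_messages(transcript: str, max_chars: int = 8000) -> list:
    """Chat messages asking the model to condense a call transcript into
    short CRM-ready fields. Very long transcripts are truncated here for
    brevity; a production system would chunk and merge instead."""
    return [
        {"role": "system",
         "content": "Summarize this call center transcript in three short "
                    "fields: reason for call, resolution, follow-up needed."},
        {"role": "user", "content": transcript[:max_chars]},
    ]
```

The resulting messages would be sent to an Azure OpenAI chat deployment in the usual way, and the model's reply written into the CRM record for the next agent.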

Vodafone's call center is getting impressive results: TOBi manages nearly 45 million customer calls a month, fully resolving 70% of them. On average, customer call times have been reduced by at least one minute, saving customers and agents valuable time.2  

“Part of using new technologies is experimentation and the ability to easily collaborate. With Azure AI Studio, you can interact with other people and with projects through a code-first approach to seamlessly explore, build, test, and deploy, using cutting-edge AI tools and machine learning models.” 
—Ahmed Elsayed, CIO UK & Europe Digital Engineering Director, Vodafone Group

Build a copilot to improve customer experiences 

H&R Block used AI Studio to build AI Tax Assist, “a generative AI experience that streamlines online tax filing by enabling clients to ask questions during the workflow.”  

AI Tax Assist addresses individuals’ questions as they prepare and file their taxes and can answer tax theory questions or offer navigation instructions when needed. It can provide answers on tax forms, deductions, and credits to help customers maximize potential refunds and minimize tax liability. AI Tax Assist also answers free-form, tax-related questions, providing dynamic responses to customer questions. 

“Since launching the AI Tax Assist experience, we’ve received positive customer feedback and seen increased usage throughout the 2024 tax season,” says Jody Vanarsdale, Director of Product Management at H&R Block. “We’ve been most pleased to see customers complete the entire filing process quickly and without leaving the app.” 

“With Azure AI Studio, our devs can code faster, so they had time to ‘experiment’ to fine-tune features like enabling individuals to ask as many questions as needed conversationally and the ability to revisit previous conversation threads. It’s an approach we’re continuing—to push innovation and deliver the best experiences.” 
—Aditya Thadani, Vice President of Artificial Intelligence Platforms, H&R Block 

Build a copilot to boost employee productivity  

Sweco, a leading European architecture and engineering firm, recognized the need for a custom copilot solution to help employees in the flow of their work. They chose AI Studio to build their own copilot, SwecoGPT, which automates document creation and analysis, delivers advanced search, and provides language translation.  

Azure AI Solutions

Learn more

Shah Muhammad, Head of AI Innovation at Sweco, appreciates the “one-click deployment of the models in Azure AI Studio and that it makes Microsoft Azure AI offerings transparent and available to the user.”1 Since its deployment, nearly half of Sweco’s employees use SwecoGPT and report increased productivity, giving them more time to focus on creativity and helping customers. 

“With Azure AI Studio, [we were] able to rapidly develop a proof of concept (POC) to show how a SwecoGPT could look, operate, and benefit our consultants and our business as a whole. This just showcases the power and scalability of Azure AI.” 
 —David Hunter, Head of AI and Automation, Sweco

Microsoft Azure AI Studio

Develop and deploy generative AI responsibly

Learn more

Build on a foundation of trust  

In the journey of AI integration, trust is table stakes. With Azure AI, you can confidently create a copilot knowing your data is always your data. Your data is never used to train the models.  

“We owe it to our clients to handle all their information responsibly. Microsoft has shown a lot of leadership in establishing those principles of responsible AI, maintaining clients’ trust, maintaining privacy, and ensuring that any capabilities we deliver are consistent with our promise of expertise.” 
—Aditya Thadani, Vice President of Artificial Intelligence Platforms, H&R Block 

Your organizational data is encrypted in your Microsoft Azure subscription and protected by Microsoft’s comprehensive enterprise compliance and security controls.  

“We are a company that uses a lot of Microsoft products, and we trust Microsoft for its security, compliance, and leadership in generative AI.”
—David Hunter, Head of AI and Automation, Sweco 

AI Studio is designed with responsible AI principles and practices in mind. Build your copilot with technologies, templates, and best practices to help manage risk, improve accuracy, protect privacy, reinforce transparency, and simplify compliance. Safeguard your copilot with Azure AI Content Safety’s configurable filters and controls. 

“Beyond the technology, the Microsoft commitment to responsible AI was a differentiator,” says Garcia. “As we work with new technologies to build the inclusive digital communities of tomorrow, this is a critical foundation.”
—Ignacio Garcia, CIO Italy & Global Director Data Analytics and AI, Vodafone Group 

Build for the future 

Azure AI is where innovators are creating the future, and we’re continuing to invest in our AI Studio platform to provide cutting-edge services and tools to our customers. As we look ahead to what’s next for custom copilots, we’re excited about how agents, like the Azure OpenAI Assistants application programming interface (API), can accelerate and improve custom copilot development. Using Azure OpenAI models, developers can provide specific instructions to tune AI capabilities and access multiple tools in parallel, including code interpreter and file search, or custom tools built and accessed through function calling.   
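To make the tool wiring concrete, here is a hedged sketch of an assistant definition using the beta Assistants surface of the `openai` SDK. The instructions text and API version are illustrative assumptions; tool names and preview versions may change.

```python
def assistant_spec(deployment: str) -> dict:
    """Keyword arguments for client.beta.assistants.create(...), wiring up
    the code interpreter and file search tools mentioned above."""
    return {
        "model": deployment,
        "instructions": "You are a data analyst. Run code to answer questions.",
        "tools": [{"type": "code_interpreter"}, {"type": "file_search"}],
    }

# Usage (requires `pip install openai` and an Assistants-enabled resource):
#   from openai import AzureOpenAI
#   client = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
#                        api_key="<key>", api_version="2024-05-01-preview")
#   assistant = client.beta.assistants.create(**assistant_spec("gpt-4o"))
```

Custom tools are added to the same `tools` list as function definitions, which the model can then invoke through function calling.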

We’re excited to see what customers build next.  

“I can’t wait to see what we’re doing in six months or a year from now. The potential of Azure AI Studio for us—and what we can do with it for our customers—is infinite.”
—Shah Muhammad, Head of AI Innovation, Sweco 

Get started with Azure AI Studio  

Explore AI Studio.

Watch the recorded session from Build.

Learn about the Azure OpenAI Assistants API.

Sources

1 Microsoft Customer Story-Sweco Group empowers its architects and engineers with a timesaving AI assistant built in Azure AI Studio 

2 Microsoft Customer Story-Vodafone amplifies call center innovation, customer service, and employee inclusion with Azure AI
The post Build your own copilot with Microsoft Azure AI Studio   appeared first on Azure Blog.