New models added to the Phi-3 family, available on Microsoft Azure

Read more announcements from Azure at Microsoft Build 2024: New ways Azure helps you build transformational AI experiences and The new era of compute powering Azure AI solutions.

At Microsoft Build 2024, we are excited to add new models to the Phi-3 family of small, open models developed by Microsoft. We are introducing Phi-3-vision, a multimodal model that brings together language and vision capabilities. You can try Phi-3-vision today.

Phi-3-small and Phi-3-medium, announced earlier, are now available on Microsoft Azure, giving developers models for generative AI applications that demand strong reasoning under limited compute and in latency-bound scenarios. Lastly, the previously available Phi-3-mini, as well as Phi-3-medium, are now also available through Azure AI’s models-as-a-service offering, allowing users to get started quickly and easily.

The Phi-3 family

Phi-3 models are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up across a variety of language, reasoning, coding, and math benchmarks. They are trained using high quality training data, as explained in Tiny but mighty: The Phi-3 small language models with big potential. The availability of Phi-3 models expands the selection of high-quality models for Azure customers, offering more practical choices as they compose and build generative AI applications.


There are four models in the Phi-3 model family; each model is instruction-tuned and developed in accordance with Microsoft’s responsible AI, safety, and security standards to ensure it’s ready to use off-the-shelf.

Phi-3-vision is a 4.2B parameter multimodal model with language and vision capabilities.

Phi-3-mini is a 3.8B parameter language model, available in two context lengths (128K and 4K).

Phi-3-small is a 7B parameter language model, available in two context lengths (128K and 8K).

Phi-3-medium is a 14B parameter language model, available in two context lengths (128K and 4K).

Find all Phi-3 models on Azure AI and Hugging Face.

Phi-3 models have been optimized to run across a variety of hardware. Optimized variants are available with ONNX Runtime and DirectML providing developers with support across a wide range of devices and platforms including mobile and web deployments. Phi-3 models are also available as NVIDIA NIM inference microservices with a standard API interface that can be deployed anywhere and have been optimized for inference on NVIDIA GPUs and Intel accelerators.
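Since the Phi-3 instruct models ship ready for chat-style use, it can help to see how a prompt is laid out. As a rough sketch (the exact template is defined by each model’s tokenizer configuration, so treat the markers here as an assumption to verify against the model card), Phi-3’s chat format wraps each turn in role markers:

```python
def build_phi3_prompt(messages):
    """Render a list of {"role", "content"} messages into the Phi-3
    instruct chat format (<|role|> ... <|end|> markers, assumed from
    the model card), ending with the assistant tag so the model
    continues from there."""
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>\n{m['content']}<|end|>\n")
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = build_phi3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Phi-3 model family."},
])
print(prompt)
```

In practice, a library’s built-in chat template (when available) should be preferred over hand-building strings like this, since it stays in sync with the tokenizer.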

It’s inspiring to see how developers are using Phi-3 to do incredible things. ITC, an Indian conglomerate, has built a copilot that lets Indian farmers ask questions about their crops in their own vernacular. Khan Academy currently leverages Azure OpenAI Service to power its Khanmigo for teachers pilot and is experimenting with Phi-3 to improve math tutoring in an affordable, scalable, and adaptable manner. Healthcare software company Epic is also looking to use Phi-3 to summarize complex patient histories more efficiently. Seth Hain, senior vice president of R&D at Epic, explains, “AI is embedded directly into Epic workflows to help solve important issues like clinician burnout, staffing shortages, and organizational financial challenges. Small language models, like Phi-3, have robust yet efficient reasoning capabilities that enable us to offer high-quality generative AI at a lower cost across our applications that help with challenges like summarizing complex patient histories and responding faster to patients.”

Digital Green, whose services are used by more than 6 million farmers, is introducing video in its AI assistant, Farmer.Chat, adding to its multimodal conversational interface. “We’re excited about leveraging Phi-3 to increase the efficiency of Farmer.Chat and to enable rural communities to leverage the power of AI to uplift themselves,” said Rikin Gandhi, CEO, Digital Green.

Bringing multimodality to Phi-3

Phi-3-vision is the first multimodal model in the Phi-3 family, bringing together text and images, and the ability to reason over real-world images and extract and reason over text from images. It has also been optimized for chart and diagram understanding and can be used to generate insights and answer questions. Phi-3-vision builds on the language capabilities of the Phi-3-mini, continuing to pack strong language and image reasoning quality in a small model.

Phi-3-vision can generate insights from charts and diagrams.

Groundbreaking performance at a small size

As previously shared, Phi-3-small and Phi-3-medium outperform language models of the same size as well as those that are much larger.

Phi-3-small with only 7B parameters beats GPT-3.5T across a variety of language, reasoning, coding, and math benchmarks.1

Phi-3-medium with 14B parameters continues the trend and outperforms Gemini 1.0 Pro.2

Phi-3-vision with just 4.2B parameters continues that trend and outperforms larger models such as Claude-3 Haiku and Gemini 1.0 Pro V across general visual reasoning tasks, OCR, table, and chart understanding tasks.3

All reported numbers are produced with the same pipeline to ensure that the numbers are comparable. As a result, these numbers may differ from other published numbers due to slight differences in the evaluation methodology. More details on benchmarks are provided in our technical paper.

See detailed benchmarks in the footnotes of this post.

Prioritizing safety

Phi-3 models were developed in accordance with the Microsoft Responsible AI Standard and underwent rigorous safety measurement and evaluation, red-teaming, sensitive use review, and adherence to security guidance to help ensure that these models are responsibly developed, tested, and deployed in alignment with Microsoft’s standards and best practices.

Phi-3 models are also trained using high-quality data and were further improved with safety post-training, including reinforcement learning from human feedback (RLHF), automated testing and evaluations across dozens of harm categories, and manual red-teaming. Our approach to safety training and evaluation is detailed in our technical paper, and we outline recommended uses and limitations in the model cards.

Finally, developers using the Phi-3 model family can also take advantage of a suite of tools available in Azure AI to help them build safer and more trustworthy applications.

Choosing the right model

With the evolving landscape of available models, customers are increasingly looking to leverage multiple models in their applications depending on use case and business needs. Choosing the right model depends on the needs of a specific use case.

Small language models are designed to perform well for simpler tasks, are more accessible and easier to use for organizations with limited resources, and they can be more easily fine-tuned to meet specific needs. They are well suited for applications that need to run locally on a device, where a task doesn’t require extensive reasoning and a quick response is needed.

The choice between using Phi-3-mini, Phi-3-small, and Phi-3-medium depends on the complexity of the task and available computational resources. They can be employed across a variety of language understanding and generation tasks such as content authoring, summarization, question-answering, and sentiment analysis. Beyond traditional language tasks these models have strong reasoning and logic capabilities, making them good candidates for analytical tasks. The longer context window available across all models enables taking in and reasoning over large text content—documents, web pages, code, and more.

Phi-3-vision is great for tasks that require reasoning over image and text together. It is especially good for OCR tasks including reasoning and Q&A over extracted text, as well as chart, diagram, and table understanding tasks.
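To make that concrete, here is a minimal sketch of how a multimodal request is typically assembled for an OpenAI-style chat completions endpoint, which is how vision model deployments are commonly called. The payload schema (content parts with `text` and `image_url` entries, base64 data URLs) is an assumption to check against your endpoint’s reference documentation:

```python
import base64

def build_vision_messages(question, image_bytes, mime="image/png"):
    """Build an OpenAI-style chat payload pairing a text question with
    an inline base64-encoded image (schema assumed; verify against
    your endpoint's docs before use)."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }]

# Illustrative call with placeholder bytes standing in for a real chart image.
msgs = build_vision_messages("What trend does this chart show?", b"\x89PNG...")
```

The same message list can then be posted to the deployment’s chat completions endpoint with your usual HTTP client or SDK.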

Get started today

To experience Phi-3 for yourself, start by playing with the models in the Azure AI Playground. Learn more about building with and customizing Phi-3 for your scenarios using Azure AI Studio.

Footnotes

1. Table 1: Phi-3-small with only 7B parameters

2. Table 2: Phi-3-medium with 14B parameters

3. Table 3: Phi-3-vision with 4.2B parameters

The post New models added to the Phi-3 family, available on Microsoft Azure appeared first on Azure Blog.

From code to production: New ways Azure helps you build transformational AI experiences

We’re witnessing a critical turning point in the market as AI moves from the drawing boards of innovation into the concrete realities of everyday life. The leap from potential to practical application marks a pivotal chapter, and you, as developers, are key to making it real.

The news at Build is focused on the top demands we’ve heard from all of you as we’ve worked together to turn this promise of AI into reality:

Empowering every developer to move with greater speed and efficiency, using the tools you already know and love.

Expanding and simplifying access to the AI, data, and application platform services you need to be successful so you can focus on building transformational AI experiences.

And, helping you focus on what you do best—building incredible applications—with responsibility, safety, security, and reliability features, built right into the platform. 

I’ve been building software products for more than two decades now, and I can honestly say there’s never been a more exciting time to be a developer. What was once a distant promise is now manifesting—and not only through the type of apps that are possible, but how you can build them.

With Microsoft Azure, we’re meeting you where you are today—and paving the way to where you’re going. So let’s jump right into some of what you’ll learn over the next few days. Welcome to Microsoft Build 2024!

Create the future with Azure AI: offering you tools, model choice, and flexibility  

The number of companies turning to Azure AI continues to grow as the list of what’s possible expands. We’re helping more than 50,000 companies around the globe achieve real business impact using it—organizations like Mercedes-Benz, Unity, Vodafone, H&R Block, PwC, SWECO, and so many others.  

To make it even more valuable, we continue to expand the range of models available to you and simplify the process for you to find the right models for the apps you’re building. You can learn more about all Azure AI updates we’re announcing this week over on the Tech Community blog. 

Azure AI Studio, a key component of the copilot stack, is now generally available. The pro-code platform empowers responsible generative AI development, including the development of your own custom copilot applications. Its development approach pairs a friendly user interface (UI) with code-first capabilities, including the Azure Developer CLI (AZD) and the AI Toolkit for VS Code, enabling developers to choose the workflow that best fits their projects.

Developers can use Azure AI Studio to explore AI tools; orchestrate multiple interoperating APIs and models; ground models in their own data using retrieval-augmented generation (RAG) techniques; test and evaluate models for performance and safety; and deploy at scale with continuous monitoring in production.
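The grounding step in that workflow boils down to embedding-based retrieval: rank your documents by similarity to the query embedding and pass the top hits to the model. A toy sketch of the idea, with hand-made two-dimensional “embeddings” standing in for real embedding-model output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    """Rank corpus entries by cosine similarity to the query embedding
    and return the top-k texts, ready to be injected into a prompt."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy corpus: vectors here are placeholders for real embeddings.
corpus = [
    {"vec": [0.9, 0.1], "text": "Phi-3 models are small language models."},
    {"vec": [0.1, 0.9], "text": "AKS Automatic manages Kubernetes clusters."},
]
print(retrieve([1.0, 0.0], corpus, k=1))
# → ['Phi-3 models are small language models.']
```

In a real application the embeddings come from an embedding model and the ranking is done by a vector index rather than a linear scan, but the retrieval contract is the same.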

Empowering you with a broad selection of small and large language models  

Our model catalog is the heart of Azure AI Studio. With more than 1,600 models available, we continue to innovate and partner broadly to bring you the best selection of frontier and open large language models (LLMs) and small language models (SLMs) so you have flexibility to compare benchmarks and select models based on what your business needs. And, we’re making it easier for you to find the best model for your use case by comparing model benchmarks, like accuracy and relevance.

I’m excited to announce that OpenAI’s latest flagship model, GPT-4o, is now generally available in Azure OpenAI Service. This groundbreaking multimodal model integrates text, image, and audio processing in a single model and sets a new standard for generative and conversational AI experiences. Pricing for GPT-4o is $5 per 1M input tokens and $15 per 1M output tokens.
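At those rates, per-request cost is simple arithmetic over token counts; a small helper makes the math explicit (prices copied from the announcement above; check the current price list before relying on them):

```python
# GPT-4o prices as stated in this announcement:
# $5 per 1M input tokens, $15 per 1M output tokens.
INPUT_PER_M = 5.00
OUTPUT_PER_M = 15.00

def gpt4o_cost(input_tokens, output_tokens):
    """Estimated USD cost of one request at the announced rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. a chat turn with 2,000 prompt tokens and 500 completion tokens:
print(round(gpt4o_cost(2_000, 500), 4))
# → 0.0175
```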

Earlier this month, we enabled GPT-4 Turbo with Vision through Azure OpenAI Service. With these new models developers can build apps with inputs and outputs that span across text, images, and more, for a richer user experience. 

We’re announcing new models through Models-as-a-Service (MaaS) in Azure AI Studio: the leading Arabic language model Core42 JAIS and TimeGen-1 from Nixtla are now available in preview. Models from AI21, Bria AI, Gretel Labs, NTT DATA, and Stability AI, as well as Cohere Rerank, are coming soon.

Phi-3: Redefining what’s possible with SLMs

At Build we’re announcing Phi-3-small, Phi-3-medium, and Phi-3-vision, a new multimodal model, in the Phi-3 family of small language models (SLMs) developed by Microsoft. Phi-3 models are powerful, cost-effective, and optimized for resource-constrained environments, including on-device, edge, and offline inference, as well as latency-bound scenarios where fast response times are critical.

Introducing Phi-3: Groundbreaking performance at a small size

Sized at 4.2 billion parameters, Phi-3-vision supports general visual reasoning tasks and chart/graph/table reasoning. The model offers the ability to input images and text, and to output text responses. For example, users can ask questions about a chart or ask an open-ended question about specific images. Phi-3-mini and Phi-3-medium are also now generally available as part of Azure AI’s MaaS offering.

In addition to new models, we are adding new capabilities across APIs to enable multimodal experiences. Azure AI Speech has several new features in preview including Speech analytics and Video translation to help developers build high-quality, voice-enabled apps. Azure AI Search now has dramatically increased storage capacity and up to 12X increase in vector index size at no additional cost to run RAG workloads at scale.


Bring your intelligent apps and ideas to life with Visual Studio, GitHub, and the Azure platform

The tools you choose to build with should make it easy to go from idea to code to production. They should adapt to where and how you work, not the other way around. We’re sharing several updates to our developer and app platforms that do just that, making it easier for all developers to build on Azure. 

Access Azure services within your favorite tools for faster app development

By extending Azure services natively into the tools and environments you’re already familiar with, you can more easily build and be confident in the performance, scale, and security of your apps.  


We’re also making it incredibly easy for you to interact with Azure services from where you’re most comfortable: a favorite dev tool like VS Code, or even directly on GitHub, regardless of previous Azure experience or knowledge. Today, we’re announcing the preview of GitHub Copilot for Azure, extending GitHub Copilot to increase its usefulness for all developers. You’ll see other examples of this from Microsoft and some of the most innovative ISVs at Build, so be sure to explore our sessions.  

Also in preview today is the AI Toolkit for Visual Studio Code, an extension that provides development tools and models to help developers acquire and run models, fine-tune them locally, and deploy to Azure AI Studio, all from VS Code.  

Updates that make cloud native development faster and easier

.NET Aspire has arrived! This new cloud-native stack simplifies development by automating configurations and integrating resilient patterns. With .NET Aspire, you can focus more on coding and less on setup while still using your preferred tools. This stack includes a developer dashboard for enhanced observability and diagnostics right from the start for faster and more reliable app development. Explore more about the general availability of .NET Aspire on the DevBlogs post.   

We’re also raising the bar on ease of use in our application platform services with Azure Kubernetes Service (AKS) Automatic, the easiest managed Kubernetes experience for taking AI apps to production. Now in preview, AKS Automatic builds on our expertise running some of the largest and most advanced Kubernetes applications in the world, from Microsoft Teams to Bing, Xbox online services, Microsoft 365, and GitHub Copilot, to create best practices that automate everything from cluster setup and management to performance and security safeguards and policies.

As a developer, you now have access to a self-service app platform that can move from container image to deployed app in minutes while still giving you the power of the Kubernetes API. With AKS Automatic you can focus on writing great code, knowing that your app will run securely with the scale, performance, and reliability it needs to support your business.

Data solutions built for the era of AI

Developers are at the forefront of a pivotal shift in application strategy, one that necessitates optimizations at every tier of an application, including databases, since AI apps require fast and frequent iterations to keep pace with AI model innovation.

We’re excited to unveil new data and analytics features this week designed to assist you in the critical aspects of crafting intelligent applications and empowering you to create the transformative apps of today and tomorrow.

Enabling developers to build faster with AI built into Azure databases 

Vector search is core to any AI application, so we’re adding native vector search capabilities to Azure Cosmos DB for NoSQL. Powered by DiskANN, a powerful approximate nearest-neighbor algorithm library, this makes Azure Cosmos DB the first cloud database to offer low-latency vector search at cloud scale without the need to manage servers.
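As a sketch of what using this looks like, the container declares where embeddings live and how to index them, and queries rank documents with a vector-distance function. The property and function names here (`vectorEmbeddings`, `diskANN`, `VectorDistance`) are assumptions to verify against the Azure Cosmos DB documentation for your SDK version:

```python
# Assumed shape of a Cosmos DB NoSQL container vector policy:
# one embedding path, its type, dimensionality, and distance metric.
vector_policy = {
    "vectorEmbeddings": [{
        "path": "/embedding",
        "dataType": "float32",
        "dimensions": 1536,
        "distanceFunction": "cosine",
    }]
}

# Assumed indexing-policy fragment selecting the DiskANN index type.
index_policy_extra = {
    "vectorIndexes": [{"path": "/embedding", "type": "diskANN"}]
}

# Query shape: rank items by distance to a parameterized query vector.
query = (
    "SELECT TOP 5 c.id, VectorDistance(c.embedding, @queryVector) AS score "
    "FROM c ORDER BY VectorDistance(c.embedding, @queryVector)"
)
```

The query vector itself would be produced by the same embedding model used to populate `/embedding`, and passed as the `@queryVector` parameter.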


We’re also announcing the availability of the Azure Database for PostgreSQL extension for Azure AI, making it even easier to bring AI capabilities to data in PostgreSQL. Now generally available, this enables developers who prefer PostgreSQL to plug data directly into Azure AI for a simplified path to leverage LLMs and build rich generative AI experiences.

Embeddings enable AI models to better understand relationships and similarities between data, which is key for intelligent apps. Azure Database for PostgreSQL in-database embedding generation is now in preview so embeddings can be generated right within the database—offering single-digit millisecond latency, predictable costs, and the confidence that data will remain compliant for confidential workloads. 

Making developer life easier through in-database Copilot capabilities

These databases are not only helping you build your own AI experiences. We’re also applying AI directly in the user experience, so it’s easier than ever to explore what’s in a database. Now in preview, Microsoft Copilot capabilities in Azure SQL DB convert natural language queries into SQL, so developers can interact with data in plain language. And Copilot capabilities are coming to Azure Database for MySQL to provide summaries of technical documentation in response to user questions, creating an all-around easier and more enjoyable management experience.

Microsoft Copilot capabilities in the database user experience

Microsoft Fabric updates: Build powerful solutions securely and with ease

We have several Fabric updates this week, including the introduction of Real-Time Intelligence. This completely redesigned workload enables you to analyze, explore, and act on your data in real time. Also coming at Build: the Workload Development Kit in preview, making it even easier to design and build apps in Fabric. And our Snowflake partnership expands with support for the Iceberg data format and bi-directional read and write between Snowflake and Fabric’s OneLake. Get the details and more in Arun Ulag’s blog: Fuel your business with continuous insights and generative AI. And for an overview of Fabric data security, download the Microsoft Fabric security whitepaper.

Spend a day in the life of a piece of data and learn exactly how it moves from its database home to do more than ever before with the insights of Microsoft Fabric, real-time assistance by Microsoft Copilot, and the innovative power of Azure AI.  

Build on a foundation of safe and responsible AI

What began with our principles and a firm belief that AI must be used responsibly and safely has become an integral part of the tooling, APIs, and software you use to scale AI responsibly. Within Azure AI, we have 20 Responsible AI tools with more than 90 features. And there’s more to come, starting with updates at Build.

New Azure AI Content Safety capabilities

We’re equipping you with advanced guardrails that help protect AI applications and users from harmful content and security risks, and this week we’re announcing new features for Azure AI Content Safety. Custom Categories are coming soon, so you can create custom filters for specific content filtering needs. This feature also includes a rapid option, enabling you to deploy new custom filters within an hour to protect against emerging threats and incidents.

Prompt Shields and Groundedness Detection, both available in preview now in Azure OpenAI Service and Azure AI Studio, help fortify AI safety. Prompt Shields mitigate both indirect and jailbreak prompt injection attacks on LLMs, while Groundedness Detection enables detection of ungrounded material, or hallucinations, in generated responses.

Features to help secure and govern your apps and data

Microsoft Defender for Cloud now extends its cloud-native application protection to AI applications, from code to cloud. AI security posture management capabilities enable security teams to discover their AI services and tools, identify vulnerabilities, and proactively remediate risks. Threat protection for AI workloads in Defender for Cloud leverages a native integration with Azure AI Content Safety to enable security teams to monitor their Azure OpenAI applications for direct and indirect prompt injection attacks, sensitive data leaks, and other threats so they can quickly investigate and respond.

With easy-to-use APIs, app developers can integrate Microsoft Purview into line-of-business apps to get industry-leading data security and compliance for custom-built AI apps. You can empower your app customers and their end users to discover data risks in AI interactions, protect sensitive data with encryption, and govern AI activities. These capabilities are available for Copilot Studio in public preview, and are coming in July in public preview for Azure AI Studio and via the Purview SDK, so developers can benefit from data security and compliance controls for their AI apps built on Azure AI. Read more here.

Two final security notes. We’re also announcing a partnership with HiddenLayer to scan open models that we onboard to the catalog, so you can verify that the models are free from malicious code and signs of tampering before you deploy them. We are the first major AI development platform to provide this type of verification to help you feel more confident in your model choice. 

Second, Facial Liveness, a feature of the Azure AI Vision Face API that has been used by Windows Hello for Business for nearly a decade, is now available in preview for the browser. Facial Liveness is a key element in multi-factor authentication (MFA) to prevent spoofing attacks, for example, when someone holds a picture up to the camera to thwart facial recognition systems. Developers can now easily add liveness checks and optional verification to web applications using Face Liveness, with the Azure AI Vision SDK, in preview.

Our belief in the safe and responsible use of AI is unwavering. You can read our recently published Responsible AI Transparency Report for a detailed look at Microsoft’s approach to developing AI responsibly. We’ll continue to deliver more innovation here and our approach will remain firmly rooted in principles and put into action with built-in features.

Move your ideas from a spark to production with Azure

Organizations are rapidly moving beyond AI ideation and into production. We see and hear fresh examples every day of how our customers are solving business challenges that have plagued industries for decades, jump-starting the creative process, making it easier to serve their own customers, or even securing a new competitive edge. We’re curating an industry-leading set of developer tools and AI capabilities to help you, as developers, create and deliver the transformational experiences that make this all possible.

Learn more at Microsoft Build 2024

Join us at Microsoft Build 2024 to experience the keynotes and learn more about how AI could shape your future.

Enhance your AI skills.

Discover innovative AI solutions through the Microsoft commercial marketplace.

Read more about Microsoft Fabric updates: Fuel your business with continuous insights and generative AI.

Read more about Azure Infrastructure: Unleashing innovation: How Microsoft Azure powers AI solutions.

Try Microsoft Azure for free

The post From code to production: New ways Azure helps you build transformational AI experiences appeared first on Azure Blog.

Unleashing innovation: The new era of compute powering Azure AI solutions

As AI continues to transform industries, Microsoft is expanding its global cloud infrastructure to meet the needs of developers and customers everywhere. At Microsoft Build 2024, we’re unveiling our latest progress in developing tools and services optimized for powering your AI solutions. Microsoft’s cloud infrastructure is unique in how it provides choice and flexibility in performance and power for customers’ unique AI needs, whether that’s doubling deployment speeds or lowering operating costs.

That’s why we’ve enhanced our adaptive, powerful, and trusted platform with the performance and resilience you’ll need to build intelligent AI applications. We’re delivering on our promise to support our customers by providing them with exceptional cost-performance in compute and advanced generative AI capabilities.


Powerful compute for general purpose and AI workloads

Microsoft has the expertise and scale to run the AI supercomputers that power some of the world’s biggest AI services, such as Microsoft Azure OpenAI Service, ChatGPT, Bing, and more. Our focus as we continue to expand our AI infrastructure is on optimizing performance, scalability, and power efficiency.

Microsoft takes a systems approach to cloud infrastructure, optimizing both hardware and software to efficiently handle workloads at scale. In November 2023, Microsoft introduced its first in-house designed cloud compute processor, Azure Cobalt 100, which enables general-purpose workloads on the Microsoft Cloud. We are announcing the preview of Azure virtual machines built to run on Cobalt 100 processors. Cobalt 100-based virtual machines (VMs) are Azure’s most power-efficient compute offering and deliver up to 40% better performance than our previous generation of Arm-based VMs. We’re delivering that same Arm-based performance and efficiency to customers like Elastic, MongoDB, Siemens, Snowflake, and Teradata. The new Cobalt 100-based VMs are expected to enhance efficiency and performance for both Azure customers and Microsoft products. Additionally, IC3, the platform that powers billions of customer conversations in Microsoft Teams, is adopting Cobalt 100 to serve its growing customer base more efficiently, achieving up to 45% better performance on Cobalt 100 VMs.

We’re combining the best of industry and the best of Microsoft in our AI infrastructure. Alongside our custom Azure Cobalt 100 and Maia silicon and our industry silicon partnerships, we’re also announcing the general availability of the ND MI300X VM series, making Microsoft the first cloud provider to bring AMD’s most powerful Instinct MI300X accelerator to Azure. With the ND MI300X VM combining eight AMD Instinct MI300X accelerators, Azure delivers customers unprecedented cost-performance for inferencing frontier models like GPT-4. Our infrastructure supports different AI supercomputing scenarios, such as building large models from scratch, running inference on pre-trained models, using models-as-a-service providers, and fine-tuning models for specific domains.


One of Microsoft’s advantages in AI is our ability to combine thousands of virtual machines with tens of thousands of GPUs, with the best of InfiniBand- and Ethernet-based networking topologies, into supercomputers in the cloud that can run large-scale AI workloads at lower cost. With a diversity of silicon across AMD, NVIDIA, and Microsoft’s Maia AI accelerators, Azure’s AI infrastructure delivers the most complete compute platform for AI workloads. It is this combination of advanced AI accelerators, datacenter designs, and optimized compute and networking topology that drives cost efficiency per workload. That means whether you use Microsoft Copilot or build your own copilot apps, the Azure platform ensures you get the best AI performance at optimized cost.

Microsoft is further extending our cloud infrastructure with Azure Compute Fleet, a new service that simplifies provisioning of Azure compute capacity across different VM types, availability zones, and pricing models. By enabling users to control VM group behaviors automatically and programmatically, it makes desired scale, performance, and cost easier to achieve. As a result, Compute Fleet can greatly optimize your operational efficiency and increase your core compute flexibility and reliability for both AI and general-purpose workloads at scale.

AI-enhanced central management and security

As businesses continue to expand their computing estate, managing and governing the entire infrastructure can become overwhelming. We keep hearing from developers and customers that they spend more time searching for information and are less productive. Microsoft is focused on simplifying this process through AI-enhanced central management and security. Our adaptive cloud approach takes innovation to the next level with a single, intelligent control plane that spans from cloud to edge, making it easier for customers to manage their entire computing estate in a consistent way. We’re also aiming to improve your experience with managing these distributed environments through Microsoft Copilot in Azure.

We created Microsoft Copilot in Azure to act as an AI companion, helping your teams manage operations seamlessly across both cloud and edge environments. By using natural language, you can ask Copilot questions and receive personalized recommendations related to Azure services. Simply ask, “Why is my app slow?” or “How do I fix this error?” and Copilot will navigate a customer through potential causes and fixes.


Starting today, we will be opening the preview of Copilot in Azure to all customers over the next couple of weeks. With this update, customers can choose to have all their users access Copilot or grant access to specific users or groups within a tenant. With this flexibility to manage Copilot, you can tailor your approach and control which groups of users or departments within your organization have access to it. You can feel secure knowing you can deploy and use the tool in a controlled manner, ensuring it aligns with your organization’s operational standards and security policies.

We’re continually enhancing Copilot and making the product better with every release to help developers be more productive. One of the ways we’ve simplified the developer’s experience is by making databases and analytics services easier to configure, manage, and optimize through AI-enhanced management. Several new skills are available for Azure Kubernetes Service (AKS) in Copilot in Azure that simplify common management tasks, including the ability to configure AKS backups, change tiers, locate YAML files for editing, and construct kubectl commands.

We’ve also added natural language to SQL conversion and self-help for database administration to support your Azure SQL database-driven applications. Developers can ask questions about their data in plain text, and Copilot generates the corresponding T-SQL query. Database administrators can independently manage databases, resolve issues, and learn more about performance and capabilities. Developers benefit from detailed explanations of the generated queries, helping them write code faster.
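Under the hood, this kind of natural-language-to-SQL conversion follows a prompt-engineering pattern you can sketch yourself. The Python snippet below is an illustration only (Copilot performs the conversion for you); the schema hint, prompt wording, and helper name are made up for the example. It builds a Chat Completions-style request payload that asks a model to translate a question into T-SQL:

```python
import json


def build_sql_copilot_request(question: str, schema_hint: str) -> dict:
    """Build a chat payload asking a model to translate a natural-language
    question into T-SQL (illustration only; not Copilot's actual prompt)."""
    return {
        "messages": [
            {
                "role": "system",
                "content": (
                    "You translate natural-language questions into T-SQL. "
                    f"Schema: {schema_hint}. Reply with a single query."
                ),
            },
            {"role": "user", "content": question},
        ],
        "temperature": 0,  # deterministic output suits code generation
    }


payload = build_sql_copilot_request(
    "Which five customers placed the most orders last month?",
    "Customers(Id, Name), Orders(Id, CustomerId, PlacedAt)",
)
print(json.dumps(payload, indent=2))
```

A system message carrying the schema plus a low temperature is the common recipe here: it grounds the model in the actual tables and keeps the generated query stable across runs.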

Lastly, you’ll notice a few new security enhancements to the tool. Copilot now includes Microsoft Defender for Cloud prompting capabilities to streamline risk exploration, remediation, and code fixes. Defender External Attack Surface Management (EASM) leverages Copilot to surface risk-related insights and to convert natural language into the corresponding inventory queries across data discovered by Defender EASM. Together, these features let customers use natural language for security and inventory queries. We’ll continue to expand Copilot capabilities in Azure so you can be more productive and focused on writing code.

Cloud infrastructure built for limitless innovation

Microsoft is committed to helping you stay ahead in this new era by giving you the power, flexibility, and performance you need to achieve your AI ambitions. Our unique approach to cloud and AI infrastructure helps us and developers like you meet the challenges of the ever-changing technological landscape head-on so you can continue working efficiently while innovating at scale.

Discover new ways to transform with AI

Learn how Azure helps build AI experiences

Read more about AI-powered analytics

Key Microsoft Build sessions

BRK126: Adaptive cloud approach: Build and scale apps from cloud to edge

BRK124: Building AI applications that leverage your data in object storage

BRK129: Building applications at hyper scale with the latest Azure innovations

BRK133: Unlock potential on Azure with Microsoft Copilot

BRK127: Azure Monitor: Observability from code to cloud

The post Unleashing innovation: The new era of compute powering Azure AI solutions appeared first on Azure Blog.
Quelle: Azure

Enhance your security capabilities with Azure Bastion Premium

At Microsoft Azure, we are unwavering in our commitment to providing robust and reliable networking solutions for our customers. In today’s dynamic digital landscape, seamless connectivity, uncompromising security, and optimal performance are non-negotiable. As cyber threats have grown more frequent and severe, demand for security in the cloud has increased drastically. In response, we are announcing a new SKU for Microsoft Azure Bastion: Azure Bastion Premium. This service, now in public preview, provides advanced recording, monitoring, and auditing capabilities for customers handling highly sensitive workloads. In this blog post, we’ll explore what Azure Bastion Premium is, the benefits this SKU offers, and why it is a must-use for customers with highly regulated security policies.

Azure Bastion

Protect your virtual machines with more secure remote access

Discover solutions

What is Azure Bastion Premium?

Azure Bastion Premium is a new SKU for customers that handle highly sensitive virtual machine workloads. It offers enhanced security features that connect customer virtual machines securely and monitor them for any anomalies that may arise. Our first set of features focuses on private connectivity and graphical recording of virtual machine sessions connected through Azure Bastion.

Two key security advantages

Enhanced security: With the existing Azure Bastion SKUs, customers protect their virtual machines by using the Azure Bastion public IP address as the point of entry to their target virtual machines. The Azure Bastion Premium SKU takes security further by eliminating the public IP: customers can instead connect to a private endpoint on Azure Bastion. This removes the need to secure a public IP address, eliminating one potential point of attack.

Virtual machine monitoring: The Azure Bastion Premium SKU allows customers to graphically record their virtual machine sessions. Customers can retain virtual machine sessions in alignment with their internal policies and compliance requirements. Additionally, keeping a record of virtual machine sessions helps customers identify anomalies or unexpected behavior. Whether it is unusual activity, a security breach, or data exfiltration, having a visual record opens the door to investigation and mitigation.

Features offered in Azure Bastion Premium

Graphical session recording

Graphical session recording allows Azure Bastion to graphically record all virtual machine sessions that connect through the enabled Azure Bastion. These recordings are stored in a customer-designated storage account and can be viewed directly in the Azure Bastion resource blade. This feature adds a layer of monitoring for virtual machine sessions: if an anomaly occurs within a session, customers can go back and review the recording to see exactly what happened. For customers with data retention policies, session recording keeps a complete record of all recorded sessions, and customers maintain access and control over the recordings within their own storage account to stay compliant with their policies.

Setting up session recording is easy and intuitive. All you need is a designated container within a storage account, a virtual machine, and an Azure Bastion to connect through. For more information about setting up and using session recording, see our documentation.

Private Only Azure Bastion

In Azure Bastion’s current generally available SKUs, inbound connection to the virtual network where Azure Bastion has been provisioned is only available through a public IP address. With Private Only Azure Bastion, customers can connect inbound to their Azure Bastion through a private IP address. This is a must-have feature for customers who want to minimize the use of public endpoints: for organizations with strict policies surrounding public endpoints, Private Only Azure Bastion keeps Azure Bastion compliant under organizational policies. For customers with on-premises machines connecting to Azure, pairing Private Only Azure Bastion with ExpressRoute private peering enables private connectivity from those on-premises machines straight to their Azure virtual machines.

Setting up Private Only Azure Bastion is easy. When you create an Azure Bastion, under Configure IP address, select Private IP address instead of Public IP address, and then click Review + create.

Note: Private Only Azure Bastion can only be enabled on net-new Azure Bastions, not on pre-existing ones.

Feature comparison of Azure Bastion offerings

Feature | Developer | Basic | Standard | Premium
--- | --- | --- | --- | ---
Private connectivity to virtual machines | Yes | Yes | Yes | Yes
Dedicated host agent | No | Yes | Yes | Yes
Support for multiple connections per user | No | Yes | Yes | Yes
Linux Virtual Machine private key in AKV | No | Yes | Yes | Yes
Support for network security groups | No | Yes | Yes | Yes
Audit logging | No | Yes | Yes | Yes
Kerberos support | No | Yes | Yes | Yes
VNET peering support | No | No | Yes | Yes
Host scaling (2 to 50 instances) | No | No | Yes | Yes
Custom port and protocol | No | No | Yes | Yes
Native RDP/SSH client through Azure CLI | No | No | Yes | Yes
AAD login for RDP/SSH through native client | No | No | Yes | Yes
IP-based connection | No | No | Yes | Yes
Shareable links | No | No | Yes | Yes
Graphical session recording | No | No | No | Yes
Private Only Azure Bastion | No | No | No | Yes

How to get started

Navigate to the Azure portal.

Deploy Azure Bastion manually, selecting the Premium SKU.

Under Configure IP Address, there is the option to enable Azure Bastion on a public or private IP address (Private Only Azure Bastion).

In the Advanced tab, there is a checkbox for Session recording (Preview).
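The portal steps above can also be automated. The sketch below uses the Azure SDK for Python; the parameter shape follows the Bastion host resource model, but the `enable_session_recording` property name, region, and resource names are assumptions, so check the current Azure REST API reference before relying on them. Only the parameter-building function runs here; the deployment function requires credentials and is shown but not invoked.

```python
from typing import Optional


def premium_bastion_parameters(subnet_id: str, public_ip_id: Optional[str]) -> dict:
    """Parameters for a Premium-SKU Bastion host. Passing public_ip_id=None
    reflects a Private Only deployment (property names are assumptions)."""
    ip_config = {"name": "bastion-ip-config", "subnet": {"id": subnet_id}}
    if public_ip_id is not None:
        ip_config["public_ip_address"] = {"id": public_ip_id}
    return {
        "location": "eastus",  # assumed region
        "sku": {"name": "Premium"},
        "enable_session_recording": True,  # assumed property name
        "ip_configurations": [ip_config],
    }


def deploy_bastion(params: dict) -> None:
    # Not executed here: requires azure-identity, azure-mgmt-network,
    # and a signed-in identity with permissions on the subscription.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
    client.bastion_hosts.begin_create_or_update(
        "my-resource-group", "my-bastion", params
    ).result()


params = premium_bastion_parameters(
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network"
    "/virtualNetworks/<vnet>/subnets/AzureBastionSubnet",
    public_ip_id=None,
)
print(params["sku"]["name"])
```

Note that Bastion requires a dedicated subnet named AzureBastionSubnet, and that omitting the public IP configuration mirrors the Private IP address choice in the portal.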

Stay updated on the latest

Our commitment extends beyond fulfilling network security requirements: we are collaborating with internal teams to integrate our solution with other products in our security portfolio. As upcoming features and integrations roll out in the coming months, we are confident that Azure Bastion will fit seamlessly into the “better together” story, addressing customer needs for virtual machine workload security.
The post Enhance your security capabilities with Azure Bastion Premium appeared first on Azure Blog.

Cassidy founder Justin Fineberg champions AI accessibility with Azure OpenAI Service

Leveraging Microsoft Azure OpenAI Service to make generative AI more accessible for everyday business

Justin Fineberg will be speaking at Build 2024

Justin Fineberg’s career trajectory took a pivotal turn when he encountered the transformative potential of AI. The year was 2020 and Fineberg had received early access to the beta version of OpenAI’s GPT-3:

“The moment I began working with GPT-3, I realized we were at the cusp of a new era in technology. It was like discovering a new language that could unlock endless possibilities.”
Justin Fineberg, CEO/Founder, Cassidy

Azure OpenAI Service

Build your own copilot and generative AI applications.

Learn more

AI at the intersection of creativity and technology

Originally, Fineberg considered a career in film.

“I kind of saw myself as a filmmaker when I was younger: building a great product is in many ways about telling a great story. And that kind of ties back to my background in film.” 
Justin Fineberg, CEO/Founder, Cassidy

In 2022, Fineberg decided to leave his job as a product manager at Blade to team up with his long-time collaborator and engineer, Ian Woodfill. Woodfill’s understanding of the no-code space and Fineberg’s passion for accessible AI solutions led them to start Cassidy, which provides easy ways to build custom generative AI for business organizations.

Today, Fineberg has more than 400,000 followers across social platforms—and that number is expected to grow. After all, AI is on the rise. A recent article in Forbes reported that AI market size is expected to reach $407 billion by 2027 with an annual growth rate of 37.3% from 2023 to 2030. With a growing contingent of individuals—and businesses—adopting AI, Fineberg is seeking to bridge the gap between complex AI technologies and practical business applications. By focusing on user-friendly interfaces and seamless integration, Cassidy aims to make AI an integral part of business workflows, empowering users to harness its potential without being AI experts themselves.

Leveraging Microsoft Azure OpenAI Service

Justin leveraged Azure OpenAI Service to bridge the gap between advanced AI technologies and practical, everyday applications. His mission with Cassidy is to put powerful AI tools in the hands of those who could benefit from them the most, regardless of their technical expertise. By leveraging Azure OpenAI Service, Cassidy simplifies the integration of advanced AI capabilities for companies, enabling them to automate tasks and enhance productivity without the need for coding or deep tech knowledge.

Azure OpenAI Service stands out for its comprehensive suite of AI models, which Fineberg utilizes to drive Cassidy’s capabilities.

“Azure OpenAI Service democratizes access to these powerful tools, making it easier for innovators across sectors to leverage AI in their projects.”
Justin Fineberg, CEO/Founder, Cassidy

The service’s breadth ensures that whether a user is looking to automate customer service, generate unique marketing content, or develop novel applications, they have the necessary tools at their disposal.

Ease of integration is at the heart of Cassidy—which aims to streamline the development process and allow creators to focus on their vision rather than the complexities of technology. The ability to integrate with Azure’s ecosystem was a game-changer for Cassidy, allowing Fineberg to scale and enhance the company’s offerings with greater ease.

Fineberg sees Azure OpenAI Service playing a pivotal role in shaping the AI landscape. Its continuous evolution, with updates and additions to its AI model offerings, ensures that users have access to the latest advancements in AI technology.

“Azure OpenAI Service is not just a platform for today; it’s a platform that’s evolving with the future of AI. Choosing Azure OpenAI Service wasn’t just about accessing advanced AI models; it was about ensuring reliability, scalability, and security for our users. As businesses grow and their needs evolve, the service’s infrastructure is designed to scale alongside them, ensuring that AI capabilities can expand in tandem with user requirements. The scalability of Azure OpenAI Service has been instrumental in supporting Cassidy’s growth. It ensures that as our user base expands, we can maintain performance and reliability without skipping a beat.”
Justin Fineberg, CEO/Founder, Cassidy

Four bits of AI advice from Justin Fineberg:

Embrace curiosity: Approach AI with a mindset of curiosity. Since it’s still fresh for most, there’s no real “expertise” yet—just a wide-open space for exploration and discovery. Approach AI with an open mind and see where your curiosity leads you. 

Prioritize the low-hanging fruit: Focus on what AI can do easily and effectively right now. Don’t let the current limitations distract you—AI technology is advancing fast. Keep up to date with new developments while continuously prioritizing the most powerful opportunities available today.

Prioritize user-friendly design: AI tools should be accessible and easy to use for everyone, not just experts.

Share use cases: Don’t be shy about how you’re using AI in your work and business. Let’s learn together. 

Learn more about Justin’s use of Azure OpenAI Service when he speaks at Build 2024.

Our commitment to responsible AI

At Microsoft, we’re guided by our AI principles and Responsible AI Standard along with decades of research on AI, grounding, and privacy-preserving machine learning. A multidisciplinary team of researchers, engineers, and policy experts reviews our AI systems for potential harms and mitigations: refining training data, filtering to limit harmful content, blocking sensitive topics in queries and results, and applying Microsoft technologies like Azure AI Content Safety, InterpretML, and Fairlearn. We make it clear how the system makes decisions by noting limitations, linking to sources, and prompting users to review, fact-check, and adjust content based on subject matter expertise.

Get started with Azure OpenAI Service 

Apply for access to Azure OpenAI Service by completing this form. 

Learn about Azure OpenAI Service and the latest enhancements. 

Get started with GPT-4 in Azure OpenAI Service in Microsoft Learn. 

Read our partner announcement blog, empowering partners to develop AI-powered apps and experiences with ChatGPT in Azure OpenAI Service. 

Learn how to use the new Chat Completions API (in preview) and model versions for ChatGPT and GPT-4 models in Azure OpenAI Service.

Learn more about Azure AI Content Safety.

The post Cassidy founder Justin Fineberg champions AI accessibility with Azure OpenAI Service appeared first on Azure Blog.

Introducing GPT-4o: OpenAI’s new flagship multimodal model now in preview on Azure

Microsoft is thrilled to announce the launch of GPT-4o, OpenAI’s new flagship model on Azure AI. This groundbreaking multimodal model integrates text, vision, and audio capabilities, setting a new standard for generative and conversational AI experiences. GPT-4o is available now in Azure OpenAI Service, to try in preview, with support for text and image.

Azure OpenAI Service

Apply for access

A step forward in generative AI for Azure OpenAI Service

GPT-4o offers a shift in how AI models interact with multimodal inputs. By seamlessly combining text, images, and audio, GPT-4o provides a richer, more engaging user experience.

Launch highlights: Immediate access and what you can expect

Azure OpenAI Service customers can explore GPT-4o’s extensive capabilities through a preview playground in Azure OpenAI Studio starting today in two regions in the US. This initial release focuses on text and vision inputs to provide a glimpse into the model’s potential, paving the way for further capabilities like audio and video.
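To exercise those text and vision inputs programmatically, a request combines text and image content parts in a single chat message. The sketch below uses the OpenAI Python SDK's Azure client; the endpoint, deployment name, and API version are placeholders you would replace with your own resource's values. Only the message-building function runs here; the call itself is shown but not invoked, since it needs a live Azure OpenAI deployment.

```python
def build_multimodal_message(question: str, image_url: str) -> list:
    """A chat message combining text and an image, in the content-part
    format used by the Chat Completions API."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]


def ask_gpt4o(messages: list) -> str:
    # Not executed here: requires the openai package, an Azure OpenAI
    # resource, and a GPT-4o deployment (names below are placeholders).
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://my-resource.openai.azure.com",
        api_key="<key>",
        api_version="2024-05-01-preview",
    )
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content


messages = build_multimodal_message(
    "What is shown in this chart?", "https://example.com/chart.png"
)
print(messages[0]["content"][0]["text"])
```

The same message shape works from the Azure OpenAI Studio playground or the REST API; the `model` argument names your deployment rather than the base model.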

Efficiency and cost-effectiveness

GPT-4o is engineered for speed and efficiency. Its advanced ability to handle complex queries with minimal resources can translate into cost savings and performance.

Potential use cases to explore with GPT-4o

The introduction of GPT-4o opens numerous possibilities for businesses in various sectors: 

Enhanced customer service: By integrating diverse data inputs, GPT-4o enables more dynamic and comprehensive customer support interactions.

Advanced analytics: Leverage GPT-4o’s capability to process and analyze different types of data to enhance decision-making and uncover deeper insights.

Content innovation: Use GPT-4o’s generative capabilities to create engaging and diverse content formats, catering to a broad range of consumer preferences.

Exciting future developments: GPT-4o at Microsoft Build 2024 

We are eager to share more about GPT-4o and other Azure AI updates at Microsoft Build 2024, to help developers further unlock the power of generative AI.

Get started with Azure OpenAI Service

Begin your journey with GPT-4o and Azure OpenAI Service by taking the following steps:

Try out GPT-4o in Azure OpenAI Service Chat Playground (in preview).

If you are not a current Azure OpenAI Service customer, apply for access by completing this form.

Learn more about Azure OpenAI Service and the latest enhancements. 

Understand responsible AI tooling available in Azure with Azure AI Content Safety.

Review the OpenAI blog on GPT-4o.

The post Introducing GPT-4o: OpenAI’s new flagship multimodal model now in preview on Azure appeared first on Azure Blog.

Accelerate AI innovation with the Microsoft commercial marketplace

With Microsoft Build 2024 right around the corner, I am excited to share how the Microsoft commercial marketplace is extending innovation. As we enter the era of AI, I’m seeing developers utilize the marketplace to use cutting-edge AI tools that accelerate adoption of next-generation solutions for their organizations. At the same time, more customers than ever are using the marketplace to find, try, and adopt new AI solutions quickly. Ultimately, the marketplace—as an extension of the Microsoft Cloud—is how your AI and Microsoft Copilot applications are discovered and deployed.

Microsoft commercial marketplace

Find, buy, and deliver the right cloud solutions for your organization’s changing needs

Shop now

At the heart of the marketplace is our extensive catalog of solutions from Microsoft’s robust network of partners and software development companies. These solutions are surfaced across our in-product experiences, as well as in our storefronts. Today, the marketplace supports a diverse catalog of AI-powered solutions, including AI-enabled software-as-a-service (SaaS) offerings, Copilot extensions, AI-enabled Microsoft Teams applications, machine learning models from partners such as Mistral AI, and more. While Microsoft supports a number of ways for partners to build AI-based technology, the marketplace is where customers can find all of these solutions from one trusted source.

Partners innovating with AI

We’ve seen a triple-digit percentage increase year-over-year in transactable AI offers published on the Microsoft commercial marketplace. And customers are eager to discover the AI solutions that best fit their unique needs. Visits to AI solution pages on our storefronts have increased more than 700% year-over-year, and AI solutions continue to make up a rapidly growing percentage of sales transacted through the marketplace.*

During one of our Microsoft Build sessions, you’ll hear from two partners who are building exciting AI solutions that leverage the Microsoft Cloud and are available now through the marketplace:

Pinecone helps companies build generative AI applications faster with vector databases. Pinecone can be deployed with Microsoft Azure and across various data sources, models, and frameworks. Pinecone serverless, coming to the marketplace soon, will deliver generative AI applications even faster at up to 50 times lower cost.

UiPath’s Business Automation Platform enables customers to supercharge productivity, transform user experiences, and innovate faster with AI-powered automations. With more than 80 platform integrations, customers can tap into UiPath enterprise-grade automation capabilities directly from Microsoft 365, Azure, Microsoft Dynamics 365, and Copilot.

Smarter purchasing through the marketplace

Microsoft is the only company that can support the entire ecosystem of AI—from the infrastructure and data layers all the way to the front-end user experience with Copilot. This enables developers to build next-generation AI tools quickly and for partners to connect their AI solutions to the Microsoft customer base through the marketplace—making it efficient and scalable for organizations to discover and adopt AI broadly. During this AI transformation, the Microsoft commercial marketplace is how we are enabling businesses of every size to access the solutions they need.

With rapid technological development, it has become even more important to balance the need to innovate with meeting business requirements. By aligning SaaS strategy to the marketplace, organizations can unify their data to get the most out of their AI investments:

Try before you buy. The marketplace allows you to try new solutions before you make a larger commitment. Free trials or direct purchases of a small number of licenses can ensure the technology works for your organization before making a big investment. The marketplace also supports proofs-of-concept with private offers, so you can further vet solutions before widescale adoption.

Innovate faster. Centralizing cloud portfolios helps you decrease time-to-value. AI solutions are part of one comprehensive catalog and pre-certified to run on Azure. Vendors can be onboarded instantly, and billing is simplified through a single invoice.

Maximize investments. Organizations can optimize cloud spend by counting the solutions they need towards their Azure consumption commitment. Microsoft automatically counts 100% of eligible offers towards your commitment, helping unlock discounts on Azure infrastructure.

Create alignment across teams. The marketplace makes it easier to keep teams aligned using approved solutions. With private Azure marketplace, an administrator can pre-select approved solutions so your team can compliantly access what they need. If a needed solution is not yet approved, team members can easily request it be added, empowering innovation with the right guardrails to safeguard investments. 

Govern and control with a private Azure marketplace

All of this translates into huge savings of time and money. In a 2023 Total Economic Impact™ study commissioned by Microsoft, Forrester Consulting found the marketplace delivers customers a three-year 587% return on investment (ROI) with a payback period of less than six months.

Join us at Microsoft Build

We’re excited to be accelerating the era of AI by setting the standard for the creation and commerce of AI solutions. For developers building new solutions, I encourage you to check out tools and benefits from ISV Success that will help you realize these innovations. Partners can also use Marketplace Rewards to accelerate their marketplace growth and generate high impact opportunities.

We’ll share more about the value of the marketplace for your organization in upcoming sessions at Microsoft Build. Whether you’re attending in Seattle or virtually, I hope you’ll join our experts to learn more.

Launch AI applications and get to market faster with marketplace: in-person and online (Session ID: BRK130)

AI-powered commerce with the Microsoft commercial marketplace: on-demand (Session ID: OD527)

Maximize cloud investments with the Microsoft commercial marketplace: on-demand (Session ID: OD528)

Meet marketplace experts in the Microsoft Cloud Platform Community space: in-person

Register for Microsoft Build

Sources

*Internal data from our data analytics team
The post Accelerate AI innovation with the Microsoft commercial marketplace appeared first on Azure Blog.

3 ways Microsoft Azure AI Studio helps accelerate the AI development journey  

The generative AI revolution is here, and businesses across the globe and across industries are adopting the technology into their work. However, the learning curve for your own AI applications can be steep, with 52% of organizations reporting that a lack of skilled workers is their biggest barrier to implement and scale AI.1 To reap the true value of generative AI, organizations need tools to simplify AI development, so they can focus on the big picture of solving business needs. Microsoft Azure AI Studio, Microsoft’s generative AI platform, is designed to democratize the AI development process for developers, bringing together the models, tools, services, and integrations you need to get started developing your own AI application quickly.  

“Azure AI Studio improved the experience for creating AI products. We found it mapped perfectly to our needs for faster development and time to market, and greater throughput, scalability, security, and trust.” 
Denis Yarats, Chief Technology Officer and Cofounder, Perplexity.AI 

Azure AI Studio (preview)

Develop generative AI applications and custom copilots in one platform

Learn more

1. Develop how you want   

The Azure AI Studio comprehensive user interface (UI) and code-first experiences empower developers to choose their preferred method of working, whether it’s through a user-friendly, accessible interface or by diving directly into code. This flexibility is crucial for rapid project initiation, iteration, and collaboration—allowing teams to work in the manner that best suits their skills and project requirements.  

The choice for where to develop was important for IWill Therapy and IWill CARE, a leading online mental health care provider in India, when they started using Azure AI Studio to build a solution to reach more clients. IWill created a Hindi-speaking chatbot named IWill GITA using the cutting-edge products and services included in the Azure AI Studio platform. IWill’s scalable, AI-powered copilot brings mental health access and therapist-like conversations to people throughout India.

The comprehensive UI in Azure AI Studio made it easy for cross-functional teams to get on the same page, allowing workers with less AI development experience to skill up quickly.  

“We found that the Azure user interface removed the communication gap between engineers and businesspeople. It made it easy for us to train subject-matter experts in one day.”
Ashish Dwivedi, Co-founder and COO, iWill Therapy

Azure AI Studio allows developers to move seamlessly between its friendly user interface and code, with software development kits (SDKs) and Microsoft Visual Studio Code extensions for local development. This dual approach caters to diverse development preferences, streamlining the process from exploration to deployment and ultimately enabling developers to bring their AI projects to life more quickly and effectively. 

2. Identify the best model for your needs   

The Azure AI Studio model catalog offers a comprehensive hub for discovering, evaluating, and consuming foundation models, including a wide array of leading models from Meta, Mistral, Hugging Face, OpenAI, Cohere, Nixtla, G42 Jais, and many more. To enable developers to make an informed decision about which model to use, Azure AI Studio offers tools such as model benchmarking. With model benchmarking, developers can quickly compare models by task using open-source datasets and industry-standard metrics, such as accuracy and fluency. Developers can also explore model cards that detail model capabilities and limitations and try sample inferences to ensure the model is a good fit. 
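To make the comparison concrete, the tiny sketch below shows the kind of metric the catalog's benchmarking automates: exact-match accuracy of candidate models over a shared evaluation set. The model names, inputs, and outputs are invented for illustration; real benchmarking uses open-source datasets and richer metrics such as fluency.

```python
# Hypothetical outputs from two candidate models on a small shared eval set.
eval_set = [
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
    {"input": "3*3", "expected": "9"},
]
model_outputs = {
    "model-a": ["4", "Paris", "9"],
    "model-b": ["4", "Lyon", "9"],
}


def accuracy(outputs: list, examples: list) -> float:
    """Fraction of outputs exactly matching the expected answer."""
    hits = sum(out == ex["expected"] for out, ex in zip(outputs, examples))
    return hits / len(examples)


# Score every candidate on the same data so the comparison is apples-to-apples.
scores = {name: accuracy(outs, eval_set) for name, outs in model_outputs.items()}
best = max(scores, key=scores.get)
print(scores, best)
```

The key point the catalog enforces is the same one this sketch relies on: every model is scored on identical inputs and metrics, so differences in the numbers reflect the models rather than the test.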

The Azure AI Studio integration of models from leading partners is already helping customers streamline their development process and accelerating the time to market for their AI solutions. When Perplexity.AI was building their own copilot, a conversational answer engine named Perplexity Ask, Azure AI Studio enabled them to explore various models and to choose the best fit for their solution.  

“Trying out large language models available with Azure OpenAI Service was easy, with just a few clicks to get going. That’s an important differentiator of Azure AI Studio: we had our first prototype in hours. We had more time to try more things, even with our minimal headcount.”  
Denis Yarats, CTO and Cofounder, Perplexity.AI 

Generate solutions faster with Azure OpenAI Service

Learn more

3. Streamline your development cycles

Prompt flow in Azure AI Studio is a powerful feature that streamlines the development cycle of generative AI solutions. Developers can build, test, evaluate, debug, and manage large language model (LLM) flows; monitor their performance in real time, including quality and operational metrics; and optimize flows as needed. Prompt flow is designed to be effortless, with a visual graph for easy orchestration, and integrations with open-source frameworks like LangChain and Semantic Kernel. Prompt flow also facilitates collaboration across teams; multiple users can work together on prompt engineering projects, share LLM assets, evaluate quality and safety of flows, maintain version control, and automate workflows for streamlined large language model operations (LLMOps). 
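Conceptually, a flow is a small graph of nodes: render a prompt, call a model, evaluate the answer. The pure-Python sketch below mimics that structure with a mocked model call so it runs offline; it is an illustration of the idea, not the prompt flow SDK, and the node names and metric are invented for the example.

```python
def render_prompt(template: str, **values) -> str:
    """Prompt node: fill a template with flow inputs."""
    return template.format(**values)


def mock_llm(prompt: str) -> str:
    """Stand-in for a model call so the flow runs offline."""
    return f"Summary of: {prompt.splitlines()[-1]}"


def length_check(answer: str, max_words: int = 50) -> bool:
    """Evaluation node: a simple operational-quality metric."""
    return len(answer.split()) <= max_words


def run_flow(document: str) -> dict:
    """Chain the nodes: template -> model -> evaluation."""
    prompt = render_prompt("Summarize the text below.\n{doc}", doc=document)
    answer = mock_llm(prompt)
    return {"answer": answer, "within_limit": length_check(answer)}


result = run_flow("Azure AI Studio streamlines generative AI development.")
print(result["within_limit"])
```

Prompt flow adds what this sketch leaves out: the visual graph editor, versioning and shared assets for teams, and built-in quality and safety evaluators that replace the toy length check.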

When Siemens Digital Industries Software wanted to build a solution for its customers and frontline work teams to communicate with operations and engineering teams in real-time to better drive innovation and rapidly address problems as they arise, they looked to Azure AI Studio to create their own copilot. Siemens developers combined Microsoft Teams capabilities with Azure AI Studio and its comprehensive suite of tools, including prompt flow, to streamline workflows that included prototyping, deployment, and production. 

“Our developers really like the UI-first approach of prompt flow and the ease of Azure AI Studio. It definitely accelerated our adoption of advanced machine learning technologies, and they have a lot of confidence now for ongoing AI innovation with this solution and others to come.”  
Manal Dave, Advanced Software Engineer, Siemens Digital Industries Software

Get started with Azure AI Studio  

The ability to choose between UI and code, the breadth of model choice, and the developer tools in Azure AI Studio are just some of the ways it can help you accelerate your generative AI development. Helping customers achieve more is at the heart of everything we do, and we’re excited to share new ways Azure AI Studio can help you build your own copilots and other AI apps during Microsoft Build.  

Check out some of the upcoming sessions:  

Learn more about how Azure AI Studio can help you build production-ready AI solutions.  

Get started and build with Azure AI Studio.

Register for Microsoft Build and check out the upcoming Azure AI Studio session on building your own copilot.

1 The Business Opportunity of AI (microsoft.com)
The post 3 ways Microsoft Azure AI Studio helps accelerate the AI development journey appeared first on Azure Blog.
Source: Azure

Bringing generative AI to Azure network security with new Microsoft Copilot integrations

Today we are excited to announce the Azure Web Application Firewall (WAF) and Azure Firewall integrations in the Microsoft Copilot for Security standalone experience. This is the first step we are taking toward bringing interactive, generative AI-powered capabilities to Azure network security.

Copilot empowers teams to protect at the speed and scale of AI by turning global threat intelligence (more than 78 trillion security signals), industry best practices, and organizations’ security data into tailored insights. With the growing cost of security breaches, organizations need every advantage to protect against skilled and coordinated cyber threats. To see more and move faster, they need generative AI technology that complements human ingenuity and refocuses teams on what matters. A recent study shows that:

Experienced security analysts were 22% faster with Copilot.

They were 7% more accurate across all tasks when using Copilot.

And, most notably, 97% said they want to use Copilot the next time they do the same task.

Azure network security

Protect your applications and cloud workloads with network security services

Explore solutions

Generative AI for Azure network security

Azure WAF and Azure Firewall are critical security services that many Microsoft Azure customers use to protect their network and applications from threats and attacks. These services offer advanced threat protection using default rule sets, as well as detection and protection against sophisticated attacks using rich Microsoft threat intelligence and automatic patching against zero-day vulnerabilities. These systems process huge volumes of packets, analyze signals from numerous network resources, and generate vast amounts of logs. To reason over terabytes of data and cut through the noise to detect threats, analysts spend hours, if not days, performing manual tasks. In addition to the scale of data, there is a real shortage of security expertise. It is difficult to find and train cybersecurity talent, and these staff shortages slow down responses to security incidents and limit proactive posture management.

With our announcement of Azure WAF and Azure Firewall integrations in Copilot for Security, organizations can empower their analysts to triage and investigate hyperscale data sets seamlessly, finding detailed, actionable insights and solutions at machine speed through a natural language interface, with no additional training. Copilot automates manual tasks and helps upskill Tier 1 and Tier 2 analysts to perform tasks that would otherwise be reserved for more experienced Tier 3 or Tier 4 professionals, redirecting expert staff to the hardest challenges and elevating the proficiency of the entire team. Copilot can also translate threat insights and investigations into natural language summaries to quickly inform colleagues or leadership. By summarizing vast data signals into key insights about the threat landscape, Copilot enables analysts to outpace adversaries in minutes instead of hours or days.

How Copilot for Security works with the Azure Firewall and Azure WAF plugins.

Azure Web Application Firewall integration in Copilot

Today, Azure WAF generates detections for a variety of web application and API security attacks. These detections generate terabytes of logs that are ingested into Log Analytics. While these logs shed light on Azure WAF's actions, it is a non-trivial, time-consuming activity for an analyst to interpret them and extract actionable insights.

The Azure WAF integration in Copilot for Security helps analysts perform contextual analysis of the data in minutes. Specifically, it synthesizes data from Azure Diagnostics logs to generate summaries of Azure WAF detections tailored to each customer’s environment. The key capabilities include investigation of security threats—including analyzing triggered WAF rules, investigating malicious IP addresses, and analyzing SQL injection (SQLi) and cross-site scripting (XSS) attacks blocked by WAF—and natural language explanations for each detection.
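As an illustration of the kind of manual log summarization this automates, here is a minimal Python sketch that aggregates parsed WAF detection records. The field names (`rule_id`, `client_ip`, `attack_type`) are hypothetical stand-ins, not the actual Azure Diagnostics log schema.

```python
from collections import Counter

def summarize_waf_detections(records):
    """Summarize WAF detections: top triggered rules, top offending IPs,
    and counts per attack type. Each record is a parsed log entry dict."""
    rules = Counter(r["rule_id"] for r in records)
    ips = Counter(r["client_ip"] for r in records)
    by_type = Counter(r["attack_type"] for r in records)
    return {
        "top_rules": rules.most_common(3),
        "top_ips": ips.most_common(3),
        "attack_types": dict(by_type),
    }

# Example usage with made-up records (IPs from the TEST-NET-3 range):
sample = [
    {"rule_id": "942100", "client_ip": "203.0.113.7", "attack_type": "SQLi"},
    {"rule_id": "941100", "client_ip": "198.51.100.2", "attack_type": "XSS"},
    {"rule_id": "942100", "client_ip": "203.0.113.7", "attack_type": "SQLi"},
]
print(summarize_waf_detections(sample))
```

In practice the raw records would come from a Log Analytics query rather than an in-memory list; the point is only to show the shape of the aggregation an analyst would otherwise assemble by hand.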

By asking a natural-language question about these attacks, the analyst receives a summarized response that includes details about why that attack occurred and equips the analyst with enough information to investigate the issue further. In addition, with the assistance of Copilot, analysts can retrieve information on the most frequently offending IP addresses, identify top malicious bot attacks, and pinpoint the managed and custom Azure WAF rules that have been triggered most frequently within their environment.

A sneak peek at the Azure WAF integration in Copilot for Security.

Azure Firewall integration in Copilot

Today, Azure Firewall intercepts and blocks malicious traffic using its intrusion detection and prevention system (IDPS) feature. However, when analysts need a deeper investigation of the threats this feature catches, they must do so manually, which is non-trivial and time-consuming. The Azure Firewall integration in Copilot helps analysts perform these investigations with the speed and scale of AI.

The first step in an investigation is to pick a specific Azure Firewall and see the threats it has intercepted. Analysts today spend hours writing custom queries or navigating through several manual steps to retrieve threat information from Log Analytics workspaces. With Copilot, analysts just need to ask about the threats they’d like to see, and Copilot will present them with the requested information.

The next step is to better understand the nature and impact of these threats. Today, analysts must manually retrieve additional contextual information from various sources, such as the geographical location of IPs, the threat rating of a fully qualified domain name (FQDN), and details of common vulnerabilities and exposures (CVEs) associated with an IDPS signature. This process is slow and labor-intensive. Copilot pulls information from the relevant sources to enrich your threat data in a fraction of the time.

Once a detailed investigation has been performed for a single threat on a single Azure Firewall, analysts want to determine whether the threat was seen elsewhere in their environment. All the manual work performed for a single Azure Firewall would otherwise have to be repeated fleet-wide. Copilot can do this at machine speed and correlate the information with other security products integrated with Copilot to better understand how attackers are targeting the entire infrastructure.
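The fleet-wide correlation step can be sketched in a few lines of Python. The record shape (`signature_id`, `firewall`) is illustrative only, not an actual Azure Firewall log schema.

```python
from collections import defaultdict

def correlate_across_fleet(idps_hits):
    """Given IDPS hits from several firewalls, return the threat signatures
    seen on more than one firewall, with the firewalls that saw each."""
    seen_on = defaultdict(set)
    for hit in idps_hits:
        seen_on[hit["signature_id"]].add(hit["firewall"])
    return {sig: sorted(fws) for sig, fws in seen_on.items() if len(fws) > 1}

# Example usage with made-up hits and firewall names:
hits = [
    {"signature_id": "2024897", "firewall": "fw-eastus"},
    {"signature_id": "2024897", "firewall": "fw-westeu"},
    {"signature_id": "2008983", "firewall": "fw-eastus"},
]
print(correlate_across_fleet(hits))
```

This is exactly the repeat-per-firewall work described above, collapsed into a single pass over the combined logs.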

A sneak peek at the Azure Firewall integration in Copilot for Security.

Looking forward

The future of technology is here, and users will increasingly expect their network security products to be AI-enabled. Copilot positions organizations to fully leverage the opportunities of the emerging era of generative AI. The integrations announced today combine Microsoft’s expertise in security with state-of-the-art generative AI, packaged in a solution built with security, privacy, and compliance at its heart, to help organizations better defend themselves from attackers while keeping their data completely private.

Getting access

We look forward to continuing to integrate Azure network security into Copilot to make it easier for our customers to be more productive and be able to quickly analyze threats and mitigate vulnerabilities ahead of their adversaries. These new capabilities in Copilot for Security are already being used internally by Microsoft and a small group of customers. Today, we’re excited to announce the upcoming public preview. We expect to launch the preview for all customers for Azure WAF and Azure Firewall at Microsoft Build on May 21, 2024. In the coming weeks, we’ll continuously add new capabilities and make improvements based on your feedback.

Please stop by the Copilot for Security booth at RSA 2024 to see a demo of these capabilities today, express interest for early access, and read about additional Microsoft announcements at RSA.
The post Bringing generative AI to Azure network security with new Microsoft Copilot integrations appeared first on Azure Blog.

Microsoft Cost Management updates—April 2024

Whether you’re a new student, a thriving startup, or the largest enterprise—you have financial constraints, and you need to know what you’re spending, where it’s being spent, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in. 

We’re always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback: 

Savings plan role-based access control roles 

Advisor updates 

Pricing updates 

Feedback opportunity for commitment savings design

What’s new in Cost Management Labs

New ways to save money in the Microsoft Cloud

New videos and learning opportunities

Documentation updates

Let’s dig into the details. 

Savings plan role-based access control roles   

Azure savings plan for compute allows organizations to lower eligible compute usage costs by up to 65% (off listed pay-as-you-go rates) by committing to an hourly spend for one or three years. We understand that savings plans are a valuable way for you to optimize your cloud expenses. To give you more flexibility over their management, we are happy to announce the general availability of four new role-based access control roles:

Savings plan administrator 

Savings plan purchaser 

Savings plan contributor  

Savings plan reader

Learn more about the permissions needed to view and manage savings plans. 
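To make the savings arithmetic concrete, here is a simplified Python sketch of how an hourly commitment can reduce cost. The flat 40% discount is a made-up example for illustration; actual savings plan discounts vary by service, term, and region (up to the 65% mentioned above), and real billing applies per-meter rates rather than one flat rate.

```python
def hourly_cost(payg_usage, commitment, discount=0.40):
    """Estimate one hour's bill under a savings plan.

    payg_usage: what the hour's eligible usage would cost at pay-as-you-go rates.
    commitment: the hourly dollar amount committed to the savings plan.
    discount:   illustrative flat discount applied to covered usage.
    """
    discounted_rate = 1 - discount
    # Usage is burned against the commitment at the discounted rate, so one
    # dollar of commitment covers 1/discounted_rate dollars of PAYG value.
    covered_payg_value = min(payg_usage, commitment / discounted_rate)
    overage = payg_usage - covered_payg_value  # billed at regular PAYG rates
    # The commitment is paid every hour, even when usage falls short of it.
    return max(commitment, covered_payg_value * discounted_rate) + overage

# $10/hr of PAYG usage with a $3/hr commitment: $5 of PAYG value is covered
# for $3, the remaining $5 is billed as-is.
print(hourly_cost(10.0, 3.0))
```

Note the `max(...)` term: an oversized commitment is charged in full even when usage is low, which is why right-sizing the hourly commitment matters.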

Advisor updates 

Label removal 

In the Azure portal, Azure Advisor currently shows potential aggregated cost savings under the label “Potential yearly savings based on retail pricing” on pages where cost recommendations are displayed (as shown in the image below). This aggregated savings estimate at the top of the page will be removed from the Azure portal on September 30, 2024. However, you can still evaluate potential yearly savings tailored to your specific needs by following these steps.

Note: All individual recommendations and their associated potential savings will remain available. 

Cost optimization workbook update

With the cost optimization workbook in Advisor, you can find ways to reduce waste and get the most value from your cloud spending. We are pleased to announce the addition of databases and sustainability insights under Usage optimization as shown below.  

For more information, refer to this cost optimization workbook article. 

Pricing updates on Azure.com 

We’ve been working hard to make some changes to our Azure pricing experiences, and we’re excited to share them with you. These changes will help make it easier for you to estimate the costs of your solutions. 

We’re thrilled to announce the launch of both Microsoft Copilot for Security and Azure Modeling and Simulation Workbench, complete with new pricing pages and calculators to streamline your cost estimations. Additionally, Azure API Management is officially generally available. 

Our AI offerings have expanded with Azure AI Document Intelligence now providing pricing details for Disconnected Container’s new prebuilt and customer extraction stock-keeping units (SKUs), and Azure AI Content Safety enhancing its free and standard instances with the new “Prompt Shields” and “Groundedness” features. Azure AI Search is upgrading its storage offerings, so check out the pricing page for more details. 

We’ve retired select offers to refine our portfolio and focus on delivering the most value to our customers. This includes the Azure Data Lake Storage Gen 1 offer for Storage Accounts, the graphics and rendering application licensing offers for Azure Batch, and the Azure real-time operating system offer. 

We’ve also added the pricing and offer information of many new capabilities across various services. Starting with Linux Virtual Machines, we’ve added the new NC H100 v5 SKU to our lineup, as well as updated Red Hat Enterprise Linux software pricing for the Linux OS. Azure SQL Database now includes pricing for next-generation General Compute SKUs for Single Database, and also pricing for its new elastic jobs agent feature. Azure Databricks saw the addition of pricing for two new workloads: “Model Training” and “Serverless Jobs.” We’ve also introduced Azure Virtual Desktop for Azure Stack HCI pricing on Azure Virtual Desktop. Across both the pricing pages and the calculator, Microsoft Fabric now shows pricing for the new “Mirroring” feature, Azure Communication Services now shows pricing and offer info for its “Advanced Messaging” SKU, and Microsoft Defender for Cloud includes pricing for the new “Defender for APIs” capability. Lastly, Application Gateway for Containers pricing has been added to the Application Gateway pricing page and calculator. 

We’re constantly working to improve our pricing tools and make them more accessible and user-friendly. We hope you find these changes helpful in estimating the costs for your Azure solutions. If you have any feedback or suggestions for future improvements, please let us know! 

Feedback opportunity for commitment savings design 

If you have experience managing Reservations or Savings Plans to reduce costs in Azure portal, we would appreciate your feedback on a new design concept for commitment savings. We are looking for participants for a 60-minute 1:1 interview and usability walkthrough. Please complete this survey to help us determine if you are eligible. 

What’s new in Cost Management Labs 

With Cost Management Labs, you get a sneak peek at what’s coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs: 

Currency selection in Cost analysis smart views: View your non-USD charges in USD, or switch between the currencies you have charges in to view the total cost for that currency only. To change currency, select “Customize” at the top of the view and select the currency you would like to apply. Currency selection is not applicable to those with only USD charges. Currency selection is enabled by default in Cost Management Labs.

Recent and pinned views in the cost analysis preview: Show all classic and smart views in cost analysis and streamline navigation by prioritizing recently used and pinned views.

Forecast in Cost analysis smart views: Show your forecast cost for the period at the top of the Cost analysis preview.

Charts in Cost analysis smart views: View your daily or monthly cost over time in Cost analysis smart views.

Change scope from menu: Allow changing scope from the menu for quicker navigation.
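The currency selection feature in this list can be sketched as a simple aggregation: either total the charges of one billing currency at a time, or convert everything to USD. The record shape and exchange rates below are illustrative only.

```python
from collections import defaultdict

def totals_by_currency(charges):
    """Total charges per billing currency, as when viewing one currency only."""
    totals = defaultdict(float)
    for c in charges:
        totals[c["currency"]] += c["amount"]
    return dict(totals)

def total_in_usd(charges, usd_rates):
    """Convert all charges to USD. usd_rates maps currency -> USD per unit."""
    return sum(c["amount"] * usd_rates[c["currency"]] for c in charges)

# Example usage with made-up charges and a made-up EUR/USD rate:
charges = [
    {"currency": "USD", "amount": 100.0},
    {"currency": "EUR", "amount": 50.0},
    {"currency": "EUR", "amount": 10.0},
]
print(totals_by_currency(charges))
print(total_in_usd(charges, {"USD": 1.0, "EUR": 1.1}))
```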

Of course, that’s not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it’s in the full Azure portal or Microsoft 365 admin center. We’re eager to hear your thoughts and understand what you’d like to see next. What are you waiting for? Try Cost Management Labs today. 

New ways to save money in the Microsoft Cloud 

Here are some updates that will likely help you optimize your costs: 

General availability: Index Advisor in Azure Cosmos DB helps optimize your index policy for NoSQL queries 

General availability: Semantic caching with vCore-based Azure Cosmos DB for MongoDB   

General availability: HNSW vector index in vCore-based Azure Cosmos DB for MongoDB  

Azure Red Hat OpenShift April 2024 updates 

Public preview: Filtered vector search in vCore-based Azure Cosmos DB for MongoDB  

New videos and learning opportunities 

We have added several new videos to our Microsoft Cost Management YouTube channel to help you manage your Microsoft Customer Agreement (MCA) account and reduce your costs. We encourage you to watch them and learn more. 

A new video on Intelligent FinOps in Azure for cost control on Microsoft Mechanics YouTube channel.  

FinOps and Azure! Understanding what FinOps is and why we care. 

An article on cost allocation and its importance for optimization: Cost allocation is imperative for cloud resource optimization.

Want a more guided experience? Start with the Control Azure spending and manage bills with Microsoft Cost Management and Billing training path. 

Documentation updates 

Here are a few documentation updates you might be interested in: 

New: Azure billing meter ID updates

Update: Save on select VMs in Poland Central for a limited time 

Update: Create an Enterprise Agreement subscription

Update: EA Billing administration on the Azure portal 

Update: Onboard to the Microsoft Customer Agreement (MCA) 

Update: Azure product transfer hub

Update: Ingest cost details data

Update: Understand cost details fields

Update: Permissions to view and manage Azure savings plans

Update: Azure savings plan recommendations

Update: Get started with Cost Management for partners 

Update: Get started with your Enterprise Agreement billing account 

Update: Programmatically create Azure subscriptions for a Microsoft Customer Agreement with the latest APIs

Update: Pay your Microsoft Customer Agreement Azure or Microsoft Online Subscription Program Azure bill

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions! 

What’s next? 

These are just a few of the big updates from last month. Don’t forget to check out the previous Microsoft Cost Management update blogs. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming. 

Follow @MSCostMgmt on X and subscribe to the Cost Management YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum. 
The post Microsoft Cost Management updates—April 2024 appeared first on Azure Blog.