Advancing memory leak detection with AIOps—introducing RESIN

“Operating a cloud infrastructure at global scale is a large and complex task, particularly when it comes to service standards and quality. In a previous blog, we shared how AIOps was leveraged to improve service quality, engineering efficiency, and customer experience. In this blog, I’ve asked Jian Zhang, Principal Program Manager from the AIOps Platform and Experiences team, to share how AI and machine learning are used to automate memory leak detection, diagnosis, and mitigation for service quality.”—Mark Russinovich, Chief Technology Officer, Azure.

In the ever-evolving landscape of cloud computing, memory leaks represent a persistent challenge—affecting performance, stability, and ultimately, the user experience. Memory leak detection is therefore important to cloud service quality. Memory leaks occur when memory is allocated but unintentionally not released in a timely manner. A leak can degrade the component’s performance and even crash the operating system (OS). Worse, it often affects other processes running on the same machine, slowing them down or causing them to be killed.

Given the impact of memory leak issues, there are many studies and solutions for memory leak detection. Traditional detection solutions fall into two categories: static and dynamic. Static leak detection techniques analyze software source code and deduce potential leaks, whereas dynamic methods detect leaks by instrumenting a program and tracking object references at runtime.

However, these conventional techniques are not adequate for leak detection in a cloud environment. The static approaches have limited accuracy and scalability, especially for leaks that result from cross-component contract violations, which require rich domain knowledge to capture statically. Dynamic approaches are generally more suitable for a cloud environment, but they are intrusive and require extensive instrumentation. Furthermore, they introduce high runtime overhead, which is costly for cloud services.

Introducing RESIN

Today, we are introducing RESIN, an end-to-end memory leak detection service designed to holistically address memory leaks in large cloud infrastructure. RESIN has been used in Microsoft Azure production and demonstrated effective leak detection with high accuracy and low overhead.

RESIN system workflow

A large cloud infrastructure can consist of hundreds of software components owned by different teams. Prior to RESIN, memory leak detection was an individual team’s effort in Microsoft Azure. As shown in Figure 1, RESIN uses a centralized approach that conducts leak detection in multiple stages for low overhead, high accuracy, and scalability. This approach does not require access to components’ source code, extensive instrumentation, or recompilation.

Figure 1: RESIN workflow

RESIN conducts low-overhead monitoring using agents that collect memory telemetry data at the host level. A remote service aggregates and analyzes the data from different hosts using a bucketization-pivot scheme. When a leak is detected in a bucket, RESIN triggers an analysis of the process instances in that bucket. For highly suspicious leaks, RESIN performs live heap snapshotting and compares the result to regular heap snapshots in a reference database. After generating multiple heap snapshots, RESIN runs a diagnosis algorithm to localize the root cause of the leak and generates a diagnosis report, attached to the alert ticket, to assist developers with further analysis. Finally, RESIN automatically mitigates the leaking process.

Detection algorithms

There are unique challenges in memory leak detection in cloud infrastructure:

Noisy memory usage caused by changing workloads and environmental interference makes static, threshold-based detection unreliable.

Memory leaks in production systems are usually fail-slow faults that can last days, weeks, or even months, and their gradual growth can be difficult to capture in a timely manner.

At the scale of the Azure global cloud, it’s not practical to collect fine-grained data over long periods of time.

To address these challenges, RESIN uses a two-level scheme to detect memory leak symptoms: a global bucket-based pivot analysis to identify suspicious components, and a local per-process leak detection to identify leaking processes.

With the bucket-based pivot analysis at the component level, we categorize raw memory usage into a number of buckets and transform the usage data into a summary of the number of hosts in each bucket. In addition, a severity score for each bucket is calculated based on the deviations and host count in the bucket. Anomaly detection is then performed on the time-series data of each bucket of each component. The bucketization approach not only represents the workload trend robustly, with tolerance for noise, but also reduces the computational load of the anomaly detection.
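As a rough illustration of the bucketization idea, the sketch below groups hosts by memory-usage fraction and assigns each bucket a score. The bucket boundaries and severity formula here are invented for illustration; RESIN’s actual parameters are not public.

```python
from collections import Counter

# Illustrative bucket boundaries (fractions of memory in use).
BUCKETS = [0.0, 0.2, 0.4, 0.6, 0.8, 0.9, 1.0]

def bucketize(host_usage):
    """Map each host's memory-usage fraction to a bucket index and
    return the number of hosts in each bucket."""
    counts = Counter()
    for usage in host_usage:
        for i in range(len(BUCKETS) - 1):
            if BUCKETS[i] <= usage < BUCKETS[i + 1]:
                counts[i] += 1
                break
        else:
            counts[len(BUCKETS) - 2] += 1  # usage == 1.0 lands in the top bucket
    return counts

def severity(bucket_index, host_count, baseline_count):
    """Toy severity score: deviation of the host count from a historical
    baseline, weighted so higher-usage buckets score higher."""
    deviation = max(0, host_count - baseline_count)
    return deviation * (bucket_index + 1)
```

Anomaly detection would then run on the per-bucket host-count time series, rather than on each host’s raw usage curve, which is what keeps the approach cheap at fleet scale.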

However, detection at the component level alone is not sufficient for developers to investigate a leak efficiently, because many processes normally run on a component. When a leaking bucket is identified at the component level, RESIN runs a second-level detection scheme at process granularity to narrow down the scope of investigation. It outputs the suspected leaking process, its start and end time, and a severity score.

Diagnosis of detected leaks

Once a memory leak is detected, RESIN takes a snapshot of the live heap, which contains all memory allocations referenced by the running application, and analyzes the snapshots to pinpoint the root cause of the detected leak. This makes memory leak alerts actionable.

RESIN leverages the Windows heap manager’s snapshot capability to perform live profiling. However, heap collection is expensive and can be intrusive to the host’s performance. To minimize this overhead, several factors shape how snapshots are taken:

The heap manager stores only limited information in each snapshot, such as the stack trace and size of each active allocation.

RESIN prioritizes candidate hosts for snapshotting based on leak severity, noise level, and customer impact. By default, the top three hosts in the suspected list are selected to ensure successful collection.

RESIN utilizes a long-term, trigger-based strategy to ensure the snapshots capture the complete leak. To facilitate the decision regarding when to stop the trace collection, RESIN analyzes memory growth patterns (such as steady, spike, or stair) and takes a pattern-based approach to decide the trace completion triggers.

RESIN uses a periodic fingerprinting process to build reference snapshots, which are compared with the snapshot of a suspected leaking process to support diagnosis.

RESIN analyzes the collected snapshots to output the stack traces of the root cause.
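The comparison step above can be sketched as a diff between a reference snapshot and a suspect snapshot, aggregated by allocation stack trace. This is a simplified stand-in for RESIN’s diagnosis algorithm; the snapshot format and growth threshold are assumptions.

```python
from collections import defaultdict

def aggregate(snapshot):
    """Sum outstanding bytes per allocation stack trace.
    `snapshot` is a list of (stack_trace, size) pairs, mirroring the
    limited per-allocation data the heap manager records."""
    totals = defaultdict(int)
    for trace, size in snapshot:
        totals[trace] += size
    return totals

def suspicious_traces(reference, suspect, growth_factor=2.0):
    """Return stack traces whose outstanding bytes grew markedly versus
    the reference snapshot -- candidates for the leak's root cause."""
    ref, cur = aggregate(reference), aggregate(suspect)
    results = []
    for trace, size in cur.items():
        baseline = ref.get(trace, 0)
        if baseline == 0 or size / baseline >= growth_factor:
            results.append((trace, size - baseline))
    # Largest growth first: most likely root cause at the top.
    return sorted(results, key=lambda t: -t[1])
```

A report built from the top entries of this list is what makes the alert actionable for the owning team.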

Mitigation of detected leaks

When a memory leak is detected, RESIN attempts to automatically mitigate the issue to avoid further customer impact. Depending on the nature of the leak, different types of mitigation actions are taken. RESIN uses a rule-based decision tree to choose the mitigation action that minimizes impact.

If the memory leak is localized to a single process or Windows service, RESIN attempts the lightest mitigation: simply restarting the process or the service. An OS reboot can also resolve software memory leaks, but it takes much longer and can cause virtual machine downtime, so it is normally reserved as a last resort. For a host running virtual machines, RESIN utilizes solutions such as Project Tardigrade, which skips hardware initialization and performs only a kernel soft reboot, after live virtual machine migration, to minimize user impact. A full OS reboot is performed only when the soft reboot is ineffective.
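The escalation ladder described above can be encoded as a small rule-based chooser. This is a toy stand-in; RESIN’s actual decision tree and its inputs are not public.

```python
def choose_mitigation(leak_scope, host_has_vms=False, soft_reboot_failed=False):
    """Pick the least disruptive mitigation, escalating only as needed.
    leak_scope: "process", "service", or anything broader (host-wide)."""
    if leak_scope == "process":
        return "restart process"
    if leak_scope == "service":
        return "restart service"
    if soft_reboot_failed:
        return "full OS reboot"          # last resort
    if host_has_vms:
        return "live-migrate VMs, then kernel soft reboot"
    return "kernel soft reboot"
```

Each action would be re-evaluated after it runs, which matches the behavior described next: mitigation stops once the detector no longer flags the target.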

RESIN stops applying mitigation actions to a target once the detection engine no longer considers the target leaking.

Result and impact of memory leak detection

RESIN has been running in production in Azure since late 2018 and to date, it has been used to monitor millions of host nodes and hundreds of host processes daily. Overall, we achieved 85% precision and 91% recall with RESIN memory leak detection,1 despite the rapidly growing scale of the cloud infrastructure monitored.

The end-to-end benefits brought by RESIN are clearly demonstrated by two key metrics:

Virtual machine unexpected reboots: the average number of reboots per one hundred thousand hosts per day due to low memory.

Virtual machine allocation error: the ratio of erroneous virtual machine allocation requests due to low memory.

Between September 2020 and December 2023, the virtual machine reboots were reduced by nearly 100 times, and allocation error rates were reduced by over 30 times. Furthermore, since 2020, no severe outages have been caused by Azure host memory leaks.1

Learn more about RESIN

Through RESIN’s end-to-end memory leak detection capabilities, you can improve the reliability and performance of your cloud infrastructure and prevent issues caused by memory leaks. To learn more, read the publication.

1 RESIN: A Holistic Service for Dealing with Memory Leaks in Production Cloud Infrastructure, Chang Lou, Johns Hopkins University; Cong Chen, Microsoft Azure; Peng Huang, Johns Hopkins University; Yingnong Dang, Microsoft Azure; Si Qin, Microsoft Research; Xinsheng Yang, Meta; Xukun Li, Microsoft Azure; Qingwei Lin, Microsoft Research; Murali Chintalapati, Microsoft Azure, OSDI’22.
The post Advancing memory leak detection with AIOps—introducing RESIN appeared first on Azure Blog.
Source: Azure

Empowering operators through generative AI technologies with Azure for Operators

OpenAI’s offerings—ChatGPT, Codex, Sora, and DALL-E—have caught the public’s imagination and opened doors to many opportunities for infusing AI into networks, systems, services, and applications. These cutting-edge AI technologies are now deeply integrated with Microsoft products including Bing, Windows, Office, and Microsoft Teams. Within Azure for Operators, we are taking advantage of the significant investments Microsoft has made and its expertise in programming foundation models by developing technical solutions that will give our customers a competitive advantage. Our product portfolio, which includes Azure Operator Nexus, Azure Operator Insights, and Azure private multi-access edge compute is being augmented with generative AI technologies, empowering operators to efficiently solve real-world problems. But before we get into the solutions, let’s begin with a brief background on generative AI and recent AI advancements.

Background on generative AI

OpenAI’s generative models have drawn significant attention for their exceptional performance in generating text, images, video, and code. Among these generative models, a notable breakthrough is the generative pre-trained transformer (GPT), a large language model with hundreds of billions of parameters. GPT is pre-trained on a vast corpus of data from the open internet, allowing it to comprehend natural language and generate human-like responses to input prompts from users. ChatGPT, Codex (the model behind GitHub Copilot), Sora, and DALL-E are all derived from the pre-trained GPT (or foundation model). Codex is additionally trained on code from 54 million GitHub repositories—a process known as “fine-tuning.” To enable the customization of GPT for new language tasks, OpenAI offers a paid API service that allows developers to fine-tune GPT on domain-specific data through a command-line interface and query the fine-tuned model without accessing the underlying model. Through a partnership with OpenAI, Microsoft benefits from exclusive access to the underlying model and parameters of GPT, placing us in a strong position to develop inference and fine-tuning infrastructure.

We have divided our AI and machine learning investments into four categories:

Reactive management: Automated incident management.

Proactive management: Automated anomaly detection and fault localization.

AI and machine learning infused into Azure for Operators products.

AI and machine learning engineering platform across Azure for Operators.

I want to talk a little about our investments that fall under the first two categories. These systems showcase the potential of foundation models as they are incorporated into our products, and they can significantly impact the way mobile operator networks are developed, operated, and managed.

Reactive management: Data intelligence copilot for operators

Operators gather vast amounts of data, including node-level, gNodeB-level, user-level, and flow-level data, for purposes like network monitoring, performance tracking, capacity management, and debugging. In commercial operator networks, the number of such counters and metrics that are regularly computed often exceeds several thousand, accounting for tens of Gbps of data transfer. Retrieving relevant metrics and visualizing them is crucial for network operations. However, the complexity of modern wireless systems and the vast number of counters involved make this task challenging, requiring expert knowledge to perform this essential operation.

The process today involves specialists with expert knowledge creating dashboards for a limited number of metrics, which the operators browse through to obtain relevant information. However, if operators require customized data, such as visualizing throughput for a specific user rather than aggregate throughput or if they need access to a different set of metrics for complex debugging purposes, a loop through the specialists is required. The specialists need to identify the relevant variables, write code in database query language to combine them in an appropriate manner, and then create and share a dashboard. 

Can operators interact with their data by asking simple questions in natural language, without having to remember any of the complex counter names or how to combine them in a database query language?

We believe that such a system has the potential to significantly transform the status quo. It would provide a more natural way to interact with operator data without heavy reliance on specialists. This would reduce the time to mitigate network issues, and it would provide more value from the operator data by reducing the barrier to customized insights.

The development of foundation models like GPT-4 has significantly advanced the capabilities of natural language interfaces for data interaction, demonstrating remarkable performance on standard text-to-SQL datasets. Despite these achievements, challenges persist in specialized and niche domains such as operator data. These challenges include the handling of specialized information that is often not publicly available, the overwhelming volume of data counters and metrics that exceeds the prompt size limits of these models, and the need for numerical accuracy that is crucial for decision-making in network operations but which the foundation models are not adept at.

System architecture for data intelligence copilot for operators.

We have developed data intelligence copilot for operators, a natural language interface for retrieval and analytics tasks on operator data, leveraging foundation models. It addresses the challenges posed by operator data through a combination of a domain-specific database with comprehensive metrics descriptions, a semantic search for filtering relevant metrics within the models’ prompt size limits, few-shot learning for enhancing numerical accuracy in code generation, and expert feedback mechanisms that allow for continuous improvement of the database through contributions from domain experts.1 This copilot is being integrated into our Azure Operator Insights product as a knowledge assistant.
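The semantic-search step mentioned above can be sketched as embedding-based retrieval over the metrics catalog, so only the most relevant metric descriptions enter the model prompt. The embedding model itself is external here; the vectors, catalog shape, and top-k cutoff are illustrative assumptions, not the copilot’s actual design.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def select_metrics(question_vec, metric_catalog, top_k=5):
    """metric_catalog: list of (metric_name, description, embedding).
    Return the top_k metrics most relevant to the question, so that only
    their descriptions need to fit within the model's prompt size limit."""
    scored = [(cosine(question_vec, emb), name, desc)
              for name, desc, emb in metric_catalog]
    scored.sort(reverse=True)
    return [(name, desc) for _, name, desc in scored[:top_k]]
```

The selected descriptions, together with few-shot examples, would then be assembled into the prompt used for query-to-code generation.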

Reactive management: Intent-based network management

Operator networks are generally very complex to manage, relying heavily on highly skilled professionals and sophisticated management tools to create, update, and deploy network configurations. Configuration files can run to tens of thousands of lines. This process is not only labor-intensive but also error-prone, underscoring the need for automation to ease the management burden on network operators.

We have been working on a promising paradigm called intent-based networking (IBN), a solution to simplify network management for operators. It allows network operators to specify the desired behavior or “intent” of their network in natural language. They can say, “Allow ICMP traffic in my network,” and then the solution automatically translates the intent into updated network configurations. IBN can present these updated configurations to network administrators for review prior to their deployment, ensuring network safety while keeping minimal human intervention.

Intent-based networking agent powered by GPT simplifies network management.

Although the concept of IBN has been around for some time, its implementation has been hindered by the complexities of natural language understanding and the intricate task of configuration generation. Motivated by recent advances in generative AI (for example GPT), we revisited this problem and developed a tool named “IBN agent” based on GPT. Our IBN agent takes as input the running network configuration and the user’s natural language intent. It then queries GPT to update the network configuration according to the user intent. Utilizing existing configuration syntax checks and network verification tools, the IBN agent also flags errors in the GPT-generated configurations. Moreover, users can intervene at any point and provide feedback on any undesired behavior. Based on these identified errors or user feedback, the IBN agent iteratively refines the configuration with GPT until all automated and human checks are passed. We believe that IBN holds substantial potential to simplify network configuration in the future.
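The iterative refinement loop described above can be sketched as follows. The `model` and `checks` callables are placeholders for the GPT call and the configuration syntax/verification tools; the prompt wording and iteration cap are assumptions for illustration.

```python
def ibn_agent(running_config, intent, model, checks, max_iters=5):
    """Iteratively refine a network configuration with an LLM until all
    automated checks pass. `model(prompt)` returns a candidate config;
    each check in `checks` returns a list of error messages (empty = OK)."""
    prompt = (f"Current configuration:\n{running_config}\n"
              f"Update it to satisfy this intent: {intent}")
    config = model(prompt)
    for _ in range(max_iters):
        errors = [msg for check in checks for msg in check(config)]
        if not errors:
            return config  # ready for human review before deployment
        # Feed the flagged errors back to the model and retry.
        prompt = (f"The configuration below has errors: {errors}\n"
                  f"Configuration:\n{config}\nPlease fix them.")
        config = model(prompt)
    raise RuntimeError("could not produce a valid configuration")
```

User feedback on undesired behavior would enter the same loop as additional error messages, and the returned configuration is still presented to administrators before deployment.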

Proactive management: Next generation communications copilot for operators

Practitioners, engineers, researchers, and students can find themselves grappling with a multitude of acronyms and intricate terminology with information spread across many documents, which makes working with and developing standards-compliant systems an onerous and time-consuming task. For example, an engineering team working on implementing a registration request procedure as a part of building 5G virtual core would need to identify all the relevant technical specifications from among thousands of documents and understand the call flow and message formats as described in those specifications.

The current method of acquiring this information involves sifting through numerous webpages and technical specification documents. While this approach provides extensive comprehension of a topic from various sources, it can also be time-intensive and tedious to identify, gather, and synthesize information from multiple relevant sources.

Foundation models represent a significant advancement in providing synthesized, readily comprehensible answers to user queries related to wireless communication specifications. However, despite the usefulness of state-of-the-art large language models, they also produce irrelevant or inaccurate responses to many queries related to niche and specialized domains.

We have developed a conversational AI tool for information synthesis of wireless communication specifications.

Like ChatGPT, the nextgen communications (NGC) copilot offers a question-and-answer interface, but with an enhanced ability to provide more accurate and relevant answers on topics pertaining to wireless communication technical specifications. NGC copilot builds on foundation models, prompt engineering, and retrieval-augmented generation; it features a domain-specific database, tailored word embeddings, and a user feedback mechanism. For more accurate responses, it incorporates into its database technical specifications and standards that traditional models often overlook due to their niche nature. The system uses a specialized word-embedding model to better understand telecom jargon, improving the relevance of its query responses. Experts can also provide feedback, which helps refine the database and improve answer quality. We have been piloting NGC within our engineering teams and its performance has been excellent.
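At its core, the retrieval-augmented flow described above can be sketched in a few lines. The `embed`, `search_index`, and `model` callables stand in for the telecom-tuned embedding model, the specification vector store, and the LLM; their interfaces here are assumptions, not NGC’s actual APIs.

```python
def ngc_answer(question, embed, search_index, model, top_k=3):
    """Retrieval-augmented generation over a corpus of technical
    specifications: retrieve relevant passages, then ground the model's
    answer in them to reduce irrelevant or inaccurate responses."""
    passages = search_index.lookup(embed(question), top_k)
    context = "\n\n".join(passages)
    prompt = (f"Answer using only the specification excerpts below.\n"
              f"Excerpts:\n{context}\n\nQuestion: {question}")
    return model(prompt)
```

Grounding the prompt in retrieved excerpts is what lets the copilot answer on niche 3GPP-style material that a general-purpose model would otherwise hallucinate about.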

Proactive management: Network configuration anomaly detection

One of the most common causes of network disruptions today is network configuration errors. Configuration governs the protocols and policies that regulate and control network access, performance, security, billing, and more. Misconfigurations, when they occur, can lead to a frustrating user experience with slow performance, lack of connectivity, or even sweeping service outages. Operators who experience such outages often suffer from loss of reputation and revenue.

Despite the importance of correct network configuration, configuration management today remains a challenge for operators. Manual peer review of configuration changes can have limited effectiveness. Device configurations are often low-level, complex, and long—making them notoriously challenging to audit manually and at scale. On the other hand, automation is also not a panacea; it’s prone to errors, bugs, and mistakes.

The configuration anomaly detection analysis pipeline.

Many configuration errors are obvious in hindsight and could be detected by sufficiently intelligent learning models. For this reason, we have invested in developing AI-driven anomaly-detection tools that can proactively identify and block erroneous configuration changes before they are applied to the network—before they can impact real users. Machine learning is adept at identifying common configuration usage patterns and anti-patterns. It can effectively sift through changes to ignore those that are intentional and alert operators about those that are likely unintentional or erroneous.

Given a collection of similar configuration files (such as JSON, XML, or YAML), our system synthesizes a common template that captures the similarities between these configurations, leaving placeholders for differing values. Using the synthesized template, our system employs a state-of-the-art, unsupervised anomaly-detection technique, known as the isolation forest, to pinpoint likely errors in configurations. These potential anomalies are reported with an anomaly-likelihood score for review. In this way, we aim to help operators with safe and reliable management of their 5G networks by leveraging automated validation of configurations. For real-world scenarios and additional technical details, please read our recent paper.2
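The template-synthesis step can be sketched for flat key-value configs as below. Note the hedge: the real system feeds the placeholder values to an isolation forest; to keep this sketch dependency-free, a simple rarity score stands in for that model, and the flat-dict config shape is an illustrative assumption.

```python
from collections import Counter

def synthesize_template(configs):
    """Given similar flat config dicts, return (template, variations):
    keys whose value is identical everywhere become fixed template
    entries; keys with differing values become placeholders, and their
    observed values are kept for anomaly scoring."""
    template, variations = {}, {}
    for key in set().union(*configs):
        values = [cfg.get(key) for cfg in configs]
        if len(set(values)) == 1:
            template[key] = values[0]
        else:
            template[key] = "<PLACEHOLDER>"
            variations[key] = values
    return template, variations

def anomaly_scores(variations):
    """Toy rarity score per (key, config index): how unusual each
    placeholder value is among its peers. The production system uses an
    isolation forest here instead."""
    scores = {}
    for key, values in variations.items():
        counts = Counter(values)
        for i, v in enumerate(values):
            scores[(key, i)] = 1.0 - counts[v] / len(values)
    return scores
```

A config whose placeholder value is rare among its peers gets a high score and is surfaced to the operator for review before the change reaches the network.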

Microsoft responsible AI

We realize that AI and machine learning-based solutions may involve ethical concerns regarding the underlying models, their training data, and associated biases. To address these concerns, the Office of Responsible AI shepherds the AI projects at Microsoft through risk assessment and mitigation. We work hard to understand the aspects that require improvement regarding bias and discrimination, and we strive to receive broad approval on compliance. We pass on all guidelines to the engineers to ensure responsible usage without slowing progress.

Explore the Microsoft portfolio of products

My hope is that these examples show that foundation models significantly enhance the Azure for Operators portfolio of products. There is a lot more to say, and there are many additional examples of systems we have developed, but I will leave that for another time.

1 Microsoft, Adapting Foundation Models for Operator Data Analytics, Manikata Kotaru, HotNets 2023.

2 Microsoft, Diffy: Data-Driven Bug Finding for Configurations, Siva Kesava Reddy Kakarla, Francis Y. Yan, and Ryan Beckett, April 2024.

Cloud Cultures, Part 6: Accelerating collective growth in Malaysia

Innovate, Connect, Cultivate

The Cloud Cultures series is an exploration of the intersection between cloud innovation and culture across the globe. 

Malaysia accelerates growth through digital transformation

Amidst the swiftly changing digital landscape, Malaysia stands out as a dynamic force capturing global attention. This nation—enriched by its diverse population of Malays, Indians, Chinese, and more—is home to people and companies that have adeptly embraced innovative technologies, ensuring the benefits extend to all, not just the tech-savvy elite.

Malaysia has established a culture of digital acceleration through industries like energy, farming, and education by striking a balance between growth and the needs of their people. During my travels, I learned how they’ve embraced cloud innovation in a way that allows them to navigate the modern world with confidence and ensure that everyone is along for the ride.

Before setting out to meet with local companies, I joined General Manager of Energy and Utilities for Microsoft Malaysia, Shereeta (full name: Datin Sharifah Shereeta Syed Sheh), for a traditional Malaysian breakfast at her favorite restaurant. We sat down to talk about our upcoming interviews over nasi lemak—a delicious combination of fried anchovies, fish, hard-boiled egg, cucumber, and sambal on fragrant coconut rice, alongside pancakes, coconut grits, and colorful baked treats. Delighted by the food and excited for the day, we parted ways after breakfast. Shereeta headed out to a local chicken farm while I ventured further into the city.

PETRONAS is building a more sustainable world

I began my visit in the heart of Kuala Lumpur at the Golden Triangle, a hub for shopping, commerce, and entertainment. Standing 88 stories tall with a 17-acre park at its base, the PETRONAS Twin Towers are a wonder to behold. The skyscrapers are complete with malls, museums, a philharmonic orchestra, and a skybridge with views of the vibrant city. This is where I met Phuah Aik-Chong, CEO of Petronas Digital, to learn how PETRONAS utilizes the cloud to accelerate digital transformation.

PETRONAS is a dynamic global energy group with presence in over 100 countries. They produce and deliver energy and solutions that power society’s progress, enriching lives for a sustainable future. PETRONAS’ commitment to sustainability starts at the core of their operations and extends throughout their value chain. People are their strength and partners for growth, driving innovation to deliver a unique spectrum of solutions. PETRONAS’ commitment to Malaysia’s progress doesn’t stop at providing oil and gas—they make a concerted effort to provide development opportunities to underserved populations. One such initiative is the BeDigital Bootcamp, which involves upskilling students from various universities in Malaysia. Partnering with Microsoft, they have collaborated on multiple initiatives that reflect the mutual goal of empowering Malaysians to benefit in tandem with the rapid pace of innovation and digital advancements.

Chop Cheong Bee uses e-farming to feed Malaysia

While I stayed in the city, Shereeta took a break from the bustling metropolis and turned down a quiet dirt road. There, she learned about a local company that helps independent chicken farmers use cloud technology to turn their operations into smart farms—improving food security across Malaysia with affordable, high-quality chicken.

Founded in 1985, Chop Cheong Bee began as a poultry trading company, supplying chicken to local markets and establishments in Malaysia. After a brief period, however, they had to close because of the overwhelming number of manual tasks involved. In the late 2000s, they reopened with a focus on technology and e-farming practices.

Cloud technology enables Chop Cheong Bee to create environments where chickens can thrive, utilizing a closed, climate-controlled farming system. The solution they developed collects data on how much feed is being consumed and on meat conversion ratios, all in real time. Today, Chop Cheong Bee is a crucial poultry supplier that facilitates a sizable portion of the chicken supply in Malaysia.

General Manager of Chop Cheong Bee, Datuk Jeffrey Ng Choon Ngee shared how e-farming is the future:

“With our solution, we can improve the broiler production index by 20 to 30 points. That’s easily a 10 percent improvement. If more farms can achieve this, then the cost of production will drop. And then hopefully, more Malaysians can afford quality poultry.”

Chop Cheong Bee built a system that can produce about 280 to 340 million chickens annually and supply 80 to 100 customers daily. This new way of farming not only provides millions of people with affordable and nutritious meat, but has also attracted a younger, more technology-focused generation of farmers to this vital industry.

Bersama Malaysia ensures citizens are part of the country’s digital acceleration

My final stop in Malaysia was a basketball court to shoot hoops with a recent graduate, Vaashini Palaniappan, who took part in the Bersama Malaysia (Together with Malaysia) program. Alongside sponsors like the Ministry of Education and Microsoft, the initiative teaches students digital skills, inspiring young people and women to dream outside the norm and build careers in tech.

Vaashini Palaniappan, data scientist and recent graduate, shared her future aspirations:

“There are so many women in this data and AI field that want to invent something, that want a brighter future. Because of this, I’m inspired to do something different. I want to be inventive using AI.”

Growing up in a small town, Vaashini didn’t have a lot of exposure to technology. But by participating in university programs, she was able to study sciences, learn technical skills, and understand the impact of advanced technologies on medicine. After seeing a close friend pass from cancer, Vaashini said she was determined to become a doctor and leverage innovative technology for good—specifically, to use AI to detect early signs of cancer and build a hyper-personalized treatment plan for patients.

Bersama Malaysia, along with Microsoft’s Code with Barriers program, were created to ensure citizens of Malaysia are a part of the digital acceleration of the country. These programs are empowering Malaysia’s inclusive digital economy and advancing the nation’s digital transformation across the private and public sectors. Malaysia has consistently been a trailblazer in fostering opportunities for its citizens. Through initiatives like Bersama Malaysia, the nation ensures that no one is left behind in the dynamic landscape of transformation.

Innovating together makes change happen

Later that evening, Shereeta and I discussed our journey over my first experience with a popular local fruit: the durian. After getting used to the infamous smell, I snacked on the custard-like meat and reflected on Malaysia’s inspiring commitment to extending growth far beyond the gleaming skyscrapers and urban epicenters. This version of cloud culture ensures that as the pace of progress quickens, it doesn’t come at the cost of anyone being sidelined. As is often the case, I saw in Malaysia that the best way to accelerate growth isn’t racing ahead; it’s moving forward together.

In this ever-changing world, there is always more to experience. See you on my next adventure!

Learn more:

Watch more Cloud Cultures episodes

Find the Azure geography that meets your needs


The post Cloud Cultures, Part 6: Accelerating collective growth in Malaysia appeared first on Azure Blog.
Source: Azure

What’s new in Azure Data, AI, and Digital Applications: Data operates as the currency of AI

Every day we see new and amazing things unfolding for individuals and businesses because of generative AI. Imagination is unlocked for content creators, time is saved on routine tasks, and the customer service experience is forever changed. The excitement is palpable, but one thing can temper it: a data strategy that doesn’t align with your AI aspirations.

A sound data strategy—how data is collected, stored, managed, analyzed, and used to achieve business goals—is the foundation of any successful AI project. We like to say data operates as the currency of AI, because the real magic and value from any AI technology or intelligent application happens when it’s applied to your own data.  

When it comes to ensuring your data strategy supports your AI objectives, what works for one company may not work for another. Every company has different data sources, needs, and objectives, but there are some common elements to check.  

For instance, is there alignment between your data vision and goals, business strategy, and AI use cases? Are there gaps or challenges across your data landscape, like siloed data, multiple reporting tools, and more? Is governance in place to ensure security, data quality, and compliance? 

Choosing the right data platform, along with the right tools and services, is essential. And of course, we think the best choice is the Microsoft Intelligent Data Platform, powered by Microsoft Azure, where you can seamlessly unify your data estate to efficiently manage your infrastructure, data, analytics, and AI solutions. 

Here are just a few of the latest updates and resources we’re making available in Azure to fuel your innovation, connect your apps and data estate to your AI transformation strategies, and unlock the power of data as the currency of AI. 

What’s new in data, AI, and digital applications 

First on Azure: Mistral’s flagship model Mistral Large

This week we announced Mistral AI will bring its flagship model, Mistral Large, first to Azure and the Mistral AI platform. With Mistral Large on Azure, customers get the benefit of a highly performant and efficient large language model (LLM) that’s cost-effective. Mistral AI is just one of the latest ways we’re providing our customers with flexibility and options so they can decide which model is best for their business. Read more on the partnership.  

Microsoft recognized as a Leader in 2023 Gartner® Magic Quadrant™ for Data Integration Tools and 2023 Gartner® Magic Quadrant™ for Container Management 

For the third year in a row, Gartner has positioned Microsoft as a Leader in the 2023 Gartner® Magic Quadrant™ for Data Integration Tools. Data integration has become a critical operation as organizations seek to draw on vast data stores to empower decision making and fuel innovation. Learn more about our data integration capabilities and read the Gartner® Magic Quadrant™ for Data Integration Tools report. 

Cloud-native technologies like containers and Kubernetes are the future of application development. That’s why we’re honored to announce that Microsoft has been named a Leader in the 2023 Gartner® Magic Quadrant™ for Container Management. Learn more about our approach to integrating Azure Kubernetes Service (AKS) with other Azure services. And read the Gartner® Magic Quadrant™ for Container Management report. 

Azure OpenAI Service: Assistants API, new models for fine-tuning, text-to-speech, and more 

We recently announced several new capabilities, models, and pricing improvements, including the launch of Assistants API (preview), which makes it simple for developers to create high-quality, copilot-like experiences within their own applications. There are also new text-to-speech capabilities, model updates for OpenAI GPT-4 Turbo (preview) and GPT-3.5 Turbo, and new embeddings models. Also available are fine-tuning API updates, including a new model, support for continuous fine-tuning, and lower pricing. Explore what’s new in Azure OpenAI. 

Microsoft Fabric is now HIPAA compliant 

Microsoft Fabric, our comprehensive analytics solution for businesses in the era of AI, has earned new accreditations for the Health Insurance Portability and Accountability Act (HIPAA) and the International Organization for Standardization (ISO) including ISO 27017, ISO 27018, ISO 27001, and ISO 27701. These accreditations show our dedication to ensuring the best level of security, compliance, and privacy for our customers’ data.

Announcing native document support for personally identifiable information redaction and summarization 

Redacting personally identifiable information (PII) from a document is often a manual process, which can be time-consuming, expensive, and inconvenient. To alleviate this challenge, we are excited to announce the availability of native document support in Azure AI Language, which automates PII redaction. It is available in public preview. You can apply for access to this feature today. 

Achieving sustainable growth with Azure and FinOps best practices 

This month we introduced two game-changing Microsoft Azure capabilities addressing the global priority to reduce IT carbon footprints and optimize cloud costs: Azure Carbon Optimization (preview) and Microsoft Azure Emissions Insights (preview). Discover how FinOps best practices can strategically guide your business through the intricacies of carbon emission reduction and learn more about Azure Carbon Optimization. 

SQL Server enabled by Microsoft Azure Arc now offers Azure SQL migration assessment 

As a first step in enabling an end-to-end migration to put your data to work, empower innovation, and be AI ready, SQL Server enabled by Azure Arc now offers Azure SQL migration assessment as a built-in capability. Arc SQL Server continuously assesses the SQL Server estate and provides readiness and optimal Azure SQL size recommendations.

New migration service in Azure Database for PostgreSQL (preview) 

The new migration service in Azure Database for PostgreSQL simplifies the process of moving PostgreSQL databases from anywhere to Azure, offering both offline and online migration options from an on-premises server, Amazon Web Services Relational Database Service (AWS RDS) for PostgreSQL, Azure Virtual Machines (VMs), and Azure Database for PostgreSQL - Single Server. Advantages of Azure Database for PostgreSQL - Flexible Server include cost-effective scaling and performance, improved control and customization, enhanced security features, and more.

Azure SQL Database Hyperscale outpaces Amazon Aurora PostgreSQL by up to 68% in performance and value 

We partnered with Principled Technologies on a commissioned study to compare Microsoft Azure SQL Database Hyperscale performance with other cloud databases in the market. The study clearly shows that Azure SQL Database Hyperscale easily manages the most critical workloads, outpacing Amazon Aurora PostgreSQL by up to 68% in both performance and value. Read the report from Principled Technologies.  

Microsoft Azure Migrate and Modernize program expansion includes Oracle Exadata migration 

Offers for Oracle workloads within Azure Migrate and Modernize have been extended to support customers migrating on-premises Oracle Exadata environments to Oracle Database@Azure. In addition, Exadata support is available in both Assess and Plan (pilot and proof of concept) and Migrate and Modernize phases. Microsoft partners and customers are both invited to learn more about Azure Migrate and Modernize and Azure Innovate.

Customers fueling transformation with data 

NETZSCH Group creates single source of truth with Microsoft Intelligent Data Platform 

Around the world, companies across a range of diverse markets rely on machinery and instrumentation developed by German-based holding company NETZSCH Group. The company recently began exploring how AI and IoT data could provide new avenues for growth. Because the company operated its three main business units independently, its old system was not made for big data, was not designed for anything AI-based, and wasn’t scalable. Needing to unify its siloed data as part of its AI journey, the company adopted the Microsoft Intelligent Data Platform (MIDP). Using several Azure services, including Azure Synapse Analytics, Azure Databricks, and Azure Machine Learning, the company unified its data and reduced the number of reporting tools from 12 to 1: Power BI. The company is now looking to use its modernized data estate and Microsoft Intelligent Data Platform to drive new avenues of growth. Read the full story. 

ASOS uses Azure AI Studio to surprise and delight young fashion lovers 

With the recent prevalence of generative AI platforms like ChatGPT, ASOS recognized a triple opportunity to expand its business model, enrich its technology infrastructure, and meet the modern tech expectations of its young customers. ASOS used Azure OpenAI Service and Azure AI prompt flow with ChatGPT language models to rapidly develop and test an Azure OpenAI-powered customer experience that helps customers discover new looks.

Cognizant makes performance management more effective and meaningful with Microsoft Azure Machine Learning 

Cognizant wanted to take performance management to new heights by making it more intelligent, more meaningful, and even more effective. Already familiar with the power of AI and machine learning, Cognizant set out to enhance its solution by focusing on improving the value and effectiveness of manager feedback to ensure all employees received feedback that was meaningful, easy to comprehend, and actionable.

Astronomer delivers business-critical data with Microsoft Azure 

Organizations depend on data to power their applications, run critical processes, produce analytics, and deliver AI. However, teams often struggle to deliver data reliably, scale their use of data, and implement governance and control. Astronomer, a leading Software-as-a-Service (SaaS) platform, provides a modern, enterprise-ready data orchestration platform powered by Apache Airflow. They are bringing this solution to Azure as a native service available to all Azure customers.

Datex debuts flexible supply chain software based on the Microsoft Azure Stack and Azure Integration Services 

Datex delivers third-party logistics providers a comprehensive range of warehouse management solutions catering to their specific needs. To enhance scalability and dependability for fluctuating demands and expanding client requirements, Datex developed Wavelength, an enterprise low-code application platform (LCAP), utilizing Azure Stack and Azure Integration Services. This approach provided Datex with increased capability and adaptability, along with inherent scaling for various operational volumes.

Credit Europe Bank NV enhances customer experience with Azure, leaving legacy infrastructure behind 

Dealing with its aging infrastructure, Credit Europe Bank NV chose Azure Kubernetes Service and Azure AI services to upgrade its legacy infrastructure and revolutionize its IT operations and customer experience. It introduced features like real-time forex trading, soft tokens, and a swift, mobile-based customer onboarding process. With a modernized banking platform, enhanced security measures, and plans to further expand within the Microsoft ecosystem, the bank is set to grow and scale even further.

Skill up with these learning resources 

Enable chat history with Azure AI Studio and Azure Cosmos DB 

Deliver improved user experiences by providing easier access to past conversations. This feature empowers developers to build advanced chatbots leveraging custom data for informed responses and multimodal integration.

New resources for digital natives on Azure and Microsoft for startups 

Visit our new digital natives on Azure site, a great resource and hub including use cases, customer stories, programs, blogs, event information, and learning paths. Also, check out Microsoft Learn for Startups. It’s packed with tools to help power startups with Microsoft technology, from getting up and running in the Microsoft Cloud to ensuring security and compliance for customers. 

Official Azure Cosmos DB learning resources 

Building intelligent apps with Azure Cosmos DB and want to better understand capabilities like vector search, how to implement the RAG pattern, improved elasticity, and greater scale? Check out the Microsoft Learn resources for Azure Cosmos DB.

Official SQL Server learning resources 

Prepare for the SQLBits conference, March 19 to 23, 2024, by leveraging our free Microsoft Learn resources to better understand how to improve performance with the latest capabilities for Azure SQL Database, Azure Database for PostgreSQL, and SQL Server enabled by Azure Arc. Visit the official SQL Server Microsoft Learn Collection.

A code-first experience for building a copilot with Azure AI Studio 

In addition to a rich UI-driven experience perfect for low-code developers and learners, Azure AI Studio provides a code-first experience for pro developers who want to build custom functions, integrate with other services, and experiment with various features. Using the Azure AI command-line tool (CLI) or language-based libraries (SDK), developers can use a code-first experience to build applications that integrate models with AI services, manage resources involved in their solution(s), and safely and responsibly deploy solutions.

Microsoft Fabric Learn Together weekly series 

Get caught up on this valuable learning series by watching sessions on demand. Sessions range from getting started with end-to-end analytics to using Apache Spark in Fabric, working with Delta Lake tables, building Data Factory pipelines, data ingestion, and more.

Get ready for the Microsoft Fabric Analytics Engineer exam

Thinking about taking the exam for the new Microsoft Fabric Analytics Engineer role? Take our new DP-600 practice exam featuring 50 sample questions to find out if you are ready and to see what areas you need to focus on to ace your exam. Try the practice exam on Microsoft Learn.  

New Virtual Training Day tracks to accelerate your understanding of Azure Data and AI 

Attend the newest virtual training day events for migrating and securing SQL Server, implementing a data lakehouse with Microsoft Fabric, and data fundamentals designed to build skills and get advice from experts.  

Want to learn something new? We have more shows to help! 

Explore Azure AI with The AI Show

Join Seth Juarez and friends to explore cutting-edge Azure AI developments and engage in insightful conversations with AI experts. The new channel hosts live episodes, on-demand segments, and provides seamless access to the latest content.

Watch on-demand: Make Azure AI Real

Microsoft Reactor’s Make Azure AI Real series is a collection of livestreams focused on AI technical labs from Microsoft experts. Learn how to use Microsoft AI tools and services, such as Azure OpenAI Service, responsible AI practices, and more. You will also get insights into the latest AI trends and best practices. Watch on-demand and discover how to make AI real for your projects and goals.

New episode: Azure Innovation Podcast Featuring M12, Microsoft’s venture capital fund

Episode 3 features a chat with Michael Stewart, a Partner at M12, Microsoft’s venture capital fund. Get a unique perspective on how startups are spearheading AI innovation and standing out in a crowded market.

Opportunities to connect 

KubeCon Paris: March 19 to 22, 2024, Paris 

Microsoft Azure is a Diamond sponsor of KubeCon Europe, the Cloud Native Computing Foundation’s (CNCF) flagship conference for Kubernetes and open-source technology. Join us in Paris, France, March 19 to 22, 2024 to learn about the latest developments in cloud-native and AI. Azure will also be hosting a pre-day event focused on Azure Kubernetes Service (AKS) for customers and partners on March 19, 2024.

Microsoft Fabric Community Conference: March 26 to 28, 2024

There’s still time to register for the Microsoft Fabric Community Conference in Las Vegas, March 26 to 28, 2024, and see firsthand how Fabric and the rest of our data and AI products can help your organization prepare for the era of AI. You’ll hear from leading Microsoft and community experts from around the world and get hands-on experience with the latest features from Microsoft Fabric, Power BI, Azure Databases, Azure Databricks, Azure AI, Microsoft Purview, and so much more. Use discount code MSCUST to get $100 off.

Microsoft JDConf 2024: Virtual Java developer conference March 27 to 28, 2024 

This enriching two-day event March 27 to 28, 2024 will feature a selection of live sessions from leading voices in the Java community and Microsoft teams. We’re organizing separate live streams targeting three major regions: Americas; Europe, Middle East, and Africa (EMEA); and Asia-Pacific. This ensures that no matter where you are, you can be a part of Microsoft JDConf at a time that works best for you.

Azure Cosmos DB Conf 2024: April 16, 2024 

This year’s theme is “Building the Next Generation of AI Apps with Azure Cosmos DB.” Tune into our April 16 livestream to watch sessions from Microsoft experts and community members and learn why Azure Cosmos DB is the database for the era of AI. You won’t want to miss it.

POSETTE: An event for Postgres 2024: June 11 to 13, 2024

Formerly called Citus Con, POSETTE is a free, virtual developer event June 11 to 13, 2024. The name POSETTE stands for Postgres Open Source Ecosystem Talks Training and Education. The event features four livestream opportunities to hear from open-source users, Azure database customers, and experts in PostgreSQL and Citus. Come learn what you can do with the world’s most advanced open-source relational database—from the nerdy to the sublime.

What’s new?  

Jessica shares insights on technology transformation along with important updates and resources about the data, AI, and digital application solutions that make Microsoft Azure the platform for the era of AI. Find Jessica’s blog posts here and be sure to follow Jessica on LinkedIn. 
The post What’s new in Azure Data, AI, and Digital Applications: Data operates as the currency of AI appeared first on Azure Blog.

6 ways generative AI helps improve accessibility for all with Azure 

 In the rapidly evolving world of technology, generative AI stands out, especially in its potential to transform the lives of people with disabilities. The past year has witnessed unparalleled advances in this domain, catalyzing significant breakthroughs in accessibility. The excitement surrounding generative AI is not just about the convenience it offers to the general populace, but also its profound impact on enhancing productivity and enabling individuals with disabilities to engage more fully in their preferred activities. This sentiment is at the heart of the advancements made possible by cutting-edge tools like Microsoft Copilot, which exemplify the transformative power of generative AI in making technology truly inclusive. 

The application of Azure AI in enhancing accessibility is both broad and impactful, with Microsoft Copilot leading the charge. Here are six notable examples where Azure AI is making a difference: 

1. Microsoft Copilot—The assistive tool for everyone 

Copilot, powered by Microsoft Azure OpenAI Service, is at the forefront of this revolution, embodying the essence of assistive technology designed for everyone. The philosophy underpinning Copilot and similar generative AI tools is simple yet profound: accessibility is about adaptation to the individual’s needs. Through natural language processing capabilities, Copilot enables users to effortlessly request or generate adaptations specific to their requirements. Whether it’s simplifying complex documents or assisting someone who is colorblind in navigating color-coded charts, Copilot stands as a testament to the inclusive potential of generative AI. To learn more, watch this video on Copilot and Accessibility. 

2. Seeing AI—Vision assistant, powered by Azure 

Made with and for the blind community, this mobile app assists with daily tasks from understanding your surroundings, to reading the mail, to identifying products. Leveraging the power of Microsoft Azure GPT-4 Turbo with Vision, Seeing AI is able to generate highly detailed descriptions of photos. Users can also employ natural language capabilities to chat with Seeing AI and ask questions about a photo or document. Download Seeing AI here.

3. AI-powered audio descriptions 

With the advancements in GPT-4 Turbo with Vision, Azure AI unlocks huge opportunities in video accessibility for blind and low-vision individuals. The enhanced computer vision capabilities now allow for more detailed and accessible video descriptions. If you’re interested in using computer vision to expand accessibility of videos in your organization, please fill in this form to express interest in our upcoming solution accelerator.

4. Alternative and augmentative communication (AAC) 

Azure Neural Voices, utilized by AI for Accessibility grantee Cboard, brings natural voices to their open-source picture board communication app. This development, alongside the use of Azure OpenAI to refine sentence structure, opens up new possibilities for personalized communication. 

5. Mental health support chatbots 

The deployment of Azure OpenAI in creating mental health chatbots, exemplified by iWill in India, showcases the potential of AI in delivering crucial services to underserved populations. iWill uses a combination of AI, human-in-the-loop, and content safety filtering to ensure AI is used responsibly for vulnerable users with mental health concerns. 

6. Accessible AI development in Microsoft Azure AI Studio 

Microsoft is committed to making AI development accessible to all developers, regardless of their abilities. This commitment is reflected in the design and development of Azure AI Studio, which was created with accessibility as a foundational principle. As the disability community often says, “nothing about us without us”. By empowering developers with disabilities to take part in AI development, we hope to enable the next wave of AI-driven accessibility solutions—built by people with lived experience—that can benefit even more individuals. 

Customer inspiration: NaturalReader 

NaturalReader, a Canadian AI text-to-speech service provider, has significantly enhanced educational accessibility for millions of students worldwide by leveraging Azure AI to develop more authentic, natural-sounding voices and a convenient mobile app. This innovation not only doubled its global sales from 2022 to 2023 but also attracted several Ivy League colleges as customers. By addressing the challenges faced by students with learning disabilities, such as dyslexia, NaturalReader has made educational materials more accessible and engaging, helping to break down barriers to learning. The company’s success in improving voice quality and accessibility at scale, with a notable increase in daily users and app downloads, underscores the transformative impact of Azure AI on educational technology and the broader mission of making education accessible to all. 

Human inspiration: Paralympian Lex Gillette 

Team USA Paralympian Lex Gillette sat down with Microsoft to discuss how technology helps him in his daily life. He is the current world record holder in the long jump, a five-time Paralympic medalist, a four-time long jump world champion, and an 18-time national champion. He is the only totally blind athlete to ever eclipse the 22-foot barrier in the long jump. As Lex prepares for the 2024 Olympic Games in Paris, we are excited to follow his journey. Learn more about Lex Gillette’s journey on the Microsoft Cloud Blog. 

Join us at the Microsoft Ability Summit 

To delve deeper into the intersection of Azure AI and accessibility, we invite you to join us at the Microsoft Ability Summit on March 7, 2024. This free event will feature discussions on various aspects of AI and accessibility, including sessions on AI development, co-design projects with EY, and innovative applications of AI in bridging the disability divide. 

Generative AI is not just a technological advancement; it’s a pathway to empowerment and inclusivity. As we continue to explore and expand the capabilities of tools like Microsoft Copilot, the horizon of what’s possible keeps broadening. The transformative impact of Azure AI on accessibility is a powerful reminder of technology’s potential to make a meaningful difference in people’s lives, particularly those who face the greatest challenges in accessing and utilizing digital tools. Join us in this journey towards a more accessible and empowered future, where technology truly makes the impossible possible. 

Ready to help improve the world of accessibility? Check out the Accessibility Innovation Toolkit to help you think about accessibility and create a framework for innovation featuring tips, case studies, datasets, and research.
The post 6 ways generative AI helps improve accessibility for all with Azure  appeared first on Azure Blog.

Azure multicloud networking: Native and partner solutions

Enterprise customers are increasingly adopting multiple cloud providers. Per a recent Gartner survey, by 2027 over 90% of enterprises will adopt multicloud models, up from 80% in 2023, for differentiated capabilities and interoperability and to mitigate vendor lock-in risks.1 The intentional drivers for this trend include data sovereignty, which refers to the legal requirement to store data within a specific geographic location; cost optimization, which allows businesses to select the most cost-effective cloud provider for each workload; product selection; and geographical reach. The unintentional drivers include shadow IT, line of business (LOB) owner-driven cloud selection, and mergers and acquisitions.

This multicloud strategy demands that enterprise cloud architects design and enable hybrid clouds that can connect, operate, and govern multiple cloud environments securely and efficiently.

Microsoft Azure has long anticipated such an evolution and has been building and evolving its networking services, such as Azure ExpressRoute and Azure Virtual WAN, and its management and orchestration solutions, such as Azure Arc, to provide seamless multicloud connectivity as well as centralized management of multicloud resources.

With Azure’s multicloud-enabled networking and management services, Azure enterprise customers can evolve their enterprise cloud network architecture from hybrid cloud to hybrid multicloud, with Azure as their “hub” cloud and the other connected clouds as their “spoke” clouds.

Azure Arc for multicloud orchestration and management

Azure Arc is a hybrid and multicloud management solution, enabling customers to take advantage of Azure management services (Microsoft Defender for Cloud, Update Management, Azure Monitor, and more) no matter where the environment is running. Since its launch in November 2019, Azure Arc is being leveraged by thousands of enterprises to manage their servers, Kubernetes clusters, databases, and applications across on-premises, multicloud, and edge environments, providing customers with a single way to manage their infrastructure. Azure Arc’s most recent advances and developments are described in this latest Azure Arc blog post.

Microsoft is investing more in this space with the goal of making it easy for customers to discover, visualize, and manage their multicloud estate. These additional Azure Arc multicloud capabilities are leveraged by other services such as Azure Virtual WAN and Defender for Cloud, so customers can easily connect and secure their multicloud environments.

Azure networking services for enabling multicloud connectivity

Azure networking services span the full breadth of cloud networking capabilities, features, and functions, covering cloud network virtualization and segmentation, private, high-performance hybrid networking, secure application delivery, and network security, and they serve as the important building block for an enterprise cloud architecture and means for enterprise cloud consumption.

While these services help enterprises optimally leverage Azure with the highest security, performance, and reliability, enterprises can now also leverage Azure’s network services and management tools to access, interconnect, and consume workloads across other clouds.

For connectivity to and from other CSPs (AWS, GCP, OCI, Alibaba), Azure offers three fundamental services, each available with a wide range of speeds and feeds:

Direct internet peering

Azure VPN and Virtual WAN

Azure ExpressRoute

Figure 1: Azure as a hub cloud

Direct internet peering with other CSPs

Many workloads depend on cross-cloud connectivity over public IP. Microsoft operates one of the largest wide area networks in the world. With more than 200 edge points of presence (PoPs) and more than 40,000 peering connections, Microsoft is deeply connected to other clouds and service providers, offering best-in-class public IP-to-public IP connectivity. Microsoft connects to AWS and GCP in 50 different locations across the world, with multiple terabits of capacity in some locations. All traffic between other clouds and Microsoft travels over dedicated private network interconnects (PNIs) and is carried within the Microsoft global backbone until it is handed off to the destination CSP’s network. This private network interconnect is built on a high-availability architecture, providing both low latency and high reliability.

Microsoft is also working with other cloud and service providers to build next-generation solutions, which would increase capacity significantly, reduce the time to provision capacity, and remove single-location dependencies. Recently we announced our partnership with Lumen on the Exa-Switch program.2 This technology is built to deliver high-capacity networks while reducing the time to deliver capacity between clouds and service providers.

Azure VPN and Virtual WAN for multicloud connectivity

One of the most common and prevalent ways to interconnect resources between public clouds is over the internet using a site-to-site VPN. All public cloud providers offer an IPsec VPN gateway as a service, and this service is widely used by Azure customers to set up private cloud-to-cloud connections. As an example, interconnecting resources in Azure Virtual Networks using Azure VPN Gateway and AWS Virtual Private Clouds (VPCs) using the AWS virtual private gateway is described in this how-to guide by Azure.
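As a minimal sketch of what the Azure side of such a cloud-to-cloud connection involves, the Azure CLI commands below create a VPN gateway, represent the AWS virtual private gateway as a local network gateway, and establish the IPsec connection. All names, the resource group, the peer IP address, the address prefix, and the shared key are hypothetical placeholders, and the AWS-side configuration (customer gateway, VPN connection, route tables) must be completed separately.

```shell
# Public IP for the Azure VPN gateway (hypothetical resource group "multicloud-rg").
az network public-ip create -g multicloud-rg -n vpngw-ip --sku Standard

# Route-based VPN gateway in an existing VNet "hub-vnet" that has a GatewaySubnet.
az network vnet-gateway create -g multicloud-rg -n azure-vpngw \
  --vnet hub-vnet --public-ip-address vpngw-ip \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

# Represent the AWS virtual private gateway as a "local" gateway on the Azure side;
# 203.0.113.10 stands in for the AWS tunnel endpoint, 10.1.0.0/16 for the VPC CIDR.
az network local-gateway create -g multicloud-rg -n aws-vgw \
  --gateway-ip-address 203.0.113.10 --local-address-prefixes 10.1.0.0/16

# IPsec tunnel; the pre-shared key must match the one configured on the AWS side.
az network vpn-connection create -g multicloud-rg -n azure-to-aws \
  --vnet-gateway1 azure-vpngw --local-gateway2 aws-vgw \
  --shared-key "ReplaceWithSharedKey"
```

Gateway provisioning can take a substantial amount of time, and the tunnel only comes up once both sides agree on the IPsec parameters and shared key.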

Azure Virtual WAN is an Azure native networking service that brings many networking, security, and routing functionalities together to provide a single operational interface for Azure customers to build a managed global transit cloud network, interconnecting and securing customers’ Azure Virtual Networks and on-premises sites using various network connectivity services such as site-to-site and point-to-site VPN, virtual network (VNet) connections, ExpressRoute, and Azure Firewall.

Using Azure Virtual WAN’s site-to-site VPN, Azure customers can connect VPCs in other CSPs to the Azure Virtual WAN hub. While this type of VPN connection currently needs to be set up manually, Azure Virtual WAN is extending and enhancing its site-to-site VPN service to enable managed multicloud VPN connections for the Virtual WAN hub.

In addition, Azure Virtual WAN integrates with and supports many independent software vendor (ISV) partners’ software-defined wide area network (SD-WAN) and VPN services under the Network Virtual Appliance (NVA) in Virtual WAN hub partner program, and the combined solutions can be used to build multicloud connections between Azure and other CSPs such as AWS and GCP. Some of these partner offers are described in the multicloud partner solutions section below.

Azure ExpressRoute service for multicloud

Azure ExpressRoute lets you extend your on-premises networks into the Microsoft Cloud over a private connection via a connectivity provider (ExpressRoute Provider Model) or directly (ExpressRoute Direct model). ExpressRoute has a constantly growing ecosystem of connectivity providers and systems integrator partners. For the latest information, see ExpressRoute partners and peering locations.

Azure currently offers a native multicloud connectivity service to interconnect Azure and Oracle Cloud. While this native service was built to support Azure customers that want high-speed, secure connections between their Oracle applications on Oracle Cloud and Azure, similar native high-speed interconnection services to other CSPs are currently being planned.

Meanwhile, many ExpressRoute partners offer innovative multicloud interconnect services that let Azure customers cross-connect Azure ExpressRoute with other CSPs’ high-speed private connection services. Some of these partner offers are described below by the partners themselves.

Azure partner solutions for enabling multicloud connectivity

Alongside Azure native network services, a number of Azure networking ISV, Cloud Exchange Platform (CXP), and Marketplace partners offer innovative services that fulfill the diverse multicloud networking needs of our enterprise customers.

While this blog does not cover all of the ISV and CXP partners (see the Azure Marketplace for a full list of multicloud ISV and CXP solutions), here are some partners, in no particular order, that offer multicloud networking solutions many of our customers use to build connectivity between their workloads in Azure and workloads in other CSPs.

Aviatrix

The Aviatrix Secure Cloud Networking Platform enables Azure customers to securely interconnect workloads in Azure with workloads in other CSPs and on-premises workloads. Aviatrix solves common customer challenges around optimizing cloud costs for data transfer, accelerating M&A customer onboarding, and providing distributed security enforcement with consistent policies across multicloud environments. 

Learn more

Aviatrix and Microsoft Azure | Aviatrix.

Alkira

For customers using Azure, Alkira offers an elegant approach to onboarding cloud applications onto their network. Alkira achieves this through its Cloud Exchange Point (CXP) hosted in Azure, which not only onboards VNets in Azure but can also onboard workloads running in other CSPs.

Learn more

Alkira Cloud Network as a Service.

Prosimo

Prosimo’s Full Stack Cloud Transit is built for enterprises to connect networks, applications, platform as a service (PaaS), and users into a unified network fabric across public and private clouds. The solution provides a transformative set of tools to rapidly adopt native services from cloud service providers and elevate them to meet the sophisticated requirements for enterprises with advanced networking features such as overlapping IP addresses, service insertion, and namespace segmentation. The solution is delivered as a service yet under the enterprise’s own control, with an elastic scaling approach that meets their operational flexibility and compliance needs.

Learn more

Simplify your cloud operations in Azure with Prosimo.

Arrcus

Azure cloud customers can use the Arrcus FlexMCN solution to build secure connectivity, with micro-segmentation, between their workloads in Azure VNets and other CSPs such as AWS, and to ensure a consistent network policy across clouds. Arrcus FlexMCN supports segment routing-based traffic engineering (SR-TE) to deliver application-aware performance and route optimization.

Learn more

Arrcus Flexible Multi-Cloud Networking (FlexMCN™).

Cisco Systems

Cisco enables control and security while driving agility and innovation across multicloud and hybrid environments. Catalyst SD-WAN’s Cloud OnRamp simplifies, automates, and optimizes cloud connectivity while ensuring secure connections to Azure. It leverages built-in automation with Azure Virtual WAN for interregional, branch-to-cloud, and hybrid-cloud/multicloud connectivity.

Learn more

Cisco SD-WAN Cloud OnRamp.

Equinix

Equinix Fabric Cloud Router makes it easy to connect applications and data across different clouds—solving the hard problems enterprises face today.

Cloud-to-cloud—gain the performance benefits of a private network without the hassle and costs of a physical router. Spin up routing virtually with reliable, high-bandwidth connections between multiple cloud providers, and avoid backhauling traffic.

Learn more

Equinix Fabric Cloud Router.

Megaport

The Megaport platform enables private access from Azure to hundreds of services across the globe including AWS, Oracle, Google, and IBM Cloud. Common multicloud architectures for Azure include connectivity to your private data center environments, as well as cloud-to-cloud peering with other hyperscalers and cloud service providers. Easily connect at one of more than 850 Megaport-enabled data center locations to ensure your network is no longer a cumbersome necessity, but a simple and flexible way to drive innovation across your business.

Learn more

Common Multicloud Connectivity Scenarios – Megaport Documentation

Learn more about Azure’s multicloud networking services

In conclusion, as enterprises increasingly adopt a multicloud strategy, Azure, along with its ecosystem partners, provides flexible solutions for connecting to and consuming cloud resources from other CSPs. Azure’s multicloud networking services, such as ExpressRoute, Virtual WAN, and Azure Arc, enable seamless, secure, and high-performance connections between Azure and other CSPs. Additionally, Azure’s partner solutions offer innovative services to meet the diverse multicloud networking requirements of enterprise customers. By using Azure as the hub cloud of their enterprise cloud architecture, customers can benefit from Azure’s multicloud-capable networking and management services to transform their enterprise cloud network architecture from hybrid cloud to hybrid multicloud.

1 Forecast Analysis: Enterprise Infrastructure Software, Worldwide. January 12, 2024. Gartner ID: G00797127.

2Lumen, Google and Microsoft create new on-demand, optical interconnection ecosystem, Lumen.
The post Azure multicloud networking: Native and partner solutions appeared first on Azure Blog.
Source: Azure

Paralympian Lex Gillette’s top 7 AI tools for accessibility

Discover the AI innovations empowering lives on and off the athletic track

Human sighted guides are instrumental in supporting blind athletes, offering comprehensive instruction on adaptations, training, and methodology. Today, much like a guide, Azure OpenAI Service helps power Microsoft apps that help people accomplish their goals, no matter how large. Copilot for Microsoft 365 can automate tasks and generate content, Microsoft Translator can bridge linguistic barriers, and tools like Seeing AI can read documents and describe immediate surroundings, making it easier than ever to navigate the world. For Paralympic champion long jumper Lex Gillette, technology with accessible design helps him thrive on and off the field.

Refusing to let blindness limit ambitions

At the age of eight, Gillette faced a life-altering challenge when he was diagnosed with recurrent retinal detachments. Despite undergoing numerous surgeries, Gillette lost his sight entirely. With the support of his mother, educators, and mentors, Gillette adapted to his new way of living, refusing to let his blindness limit his ambitions. He turned to sports, where he found not only an outlet for his energy and competitiveness but also a platform to showcase his extraordinary abilities.

In the Paralympic world, guides use audible cues, enabling athletes to better gauge their run-up and take-off points, and they provide safety protocols and injury prevention strategies to ensure athletes can train effectively while minimizing risks. Under the guidance of his coach, Brian Whitmer, he learned to run and jump without the ability to see, using the sound of his coach’s voice.

“Athletes with sight are able to see other successful sprinters,” explains Gillette. “When you can’t see you’re reliant on your guide to help you understand what you need to do with your body to get better results.”

With a career boasting multiple Paralympic medals in the long jump, including silver medals across five consecutive Paralympic Games (2004, 2008, 2012, 2016, and 2020), Gillette has solidified his status as one of the most accomplished athletes in the field. He holds the world record in the T11 (totally blind classification) long jump category and has earned numerous championships and accolades in both national and international competitions.

In 2016, Lex Gillette was in sixth place when he took his starting position for the long jump. He had already started sprinting when he realized that the sound of the enthusiastic crowd was preventing him from hearing his guide. He could have chosen to start over. But he chose to persevere.

 “I had a decision to make,” says Gillette. “I could have stopped running. Gone back and waited for them to be quiet.”

Instead, Gillette jumped. He went from sixth place to winning the silver.

Empowering through Microsoft AI innovation

Today, Gillette has set a goal for himself of helping other individuals achieve their dreams, whether that’s through athletics or entrepreneurship. And he believes AI can help.

Microsoft and Azure AI offer a range of accessibility solutions to create more inclusive experiences. Solutions include speech transcription and captioning, content readers, translation services, voice assistants, facial recognition, and computer vision. These AI-powered tools can help individuals with disabilities better access and engage with digital content, whether it’s through speech-to-text transcription, image captioning, or text-to-speech translation.

“It’s not necessarily the sight that determines our success. It’s our ability to see the vision.”—Lex Gillette

Sight School Inc. is dedicated to empowering all ages with visual impairments through adaptive sports, community engagement and advocacy for an independent and inclusive life.

One of the primary objectives of Sight School is getting visually impaired individuals involved in physical activity. “In a perfect world, everyone would be able to go into a gym to work out without needing to be tethered to another individual: how many pounds you’ve placed on the squat rack; how many miles you’ve run on the treadmill; how many calories you’ve burned, and how to navigate to the smoothie bar.”

“Technology,” says Gillette, “has been the great equalizer for us.”

Through AI, Gillette can understand his surroundings in ways previously inaccessible to him. Technology can provide verbal descriptions of his environment, such as airport departure screens. Translation apps allow him to communicate with athletes halfway across the globe, identify healthy recipes, manage mobile banking, and hear a visual description of a photo starring him and his young son.

In AI, Gillette has found a powerful tool for living more autonomously and engaging more deeply with the world, showcasing the transformative potential of technology in the lives of individuals with visual impairments.

“I like showing the world that with the right tools and resources literally anything is possible.”

Lex Gillette shares 7 things he wants you to know about AI accessibility tools at Microsoft

Seeing AI is designed to help visually impaired people better understand their surroundings. Utilizing the power of AI, it can narrate the world around the user, reading out text from documents, identifying products by their barcodes, recognizing friends and their emotions, and describing scenes, objects, and colors.

Copilot for Microsoft 365 integrates AI across Microsoft’s productivity suite to automate tasks, generate content, and provide analytical insights, essentially acting as an intelligent assistant that helps users with their work across various applications like Microsoft Word, Excel, and Outlook.

Microsoft Teams supports speech-to-text transcription and captioning for meetings and calls, powered by Azure Cognitive Services, making it easier for people who are Deaf or Hard of Hearing to participate in online communication.

Outlook, Microsoft Edge, and PowerPoint include a read-aloud feature, powered by Azure Cognitive Services, that uses text-to-speech technology to read web pages, documents, and emails aloud.

Microsoft Translator is a translation service that supports speech-to-speech, text-to-speech, and optical character recognition (OCR) to enable communication across different languages and formats in Microsoft Teams or Outlook.

Windows Hello uses facial recognition technology, powered by Azure Face API, to allow users to log in to their devices with their face, making it easier for people with mobility disabilities to access their devices.

Microsoft Windows and Xbox include voice assistants powered by speech-to-text technology, which can be used by users of all abilities to control the device and access information and services through voice commands.

Learn more about AI tools for accessibility

Learn more about AI for accessibility at the Ability Summit and read how Microsoft is advancing accessible technology from Chief Accessibility Officer, Jenny Lay-Flurrie: Advancing accessibility with AI technology and innovation.

Explore the Azure AI portfolio for solutions to help reimagine your business with generative AI.

Read 6 ways generative AI helps improve accessibility for all with Azure.

The post Paralympian Lex Gillette’s top 7 AI tools for accessibility appeared first on Azure Blog.

Modernize and build intelligent apps with support from Microsoft partner solutions

AI transformation drives significant business value, as a recent study of over 2,000 business leaders and decision-makers found: 

For every USD1 a company invests in AI, it realizes an average return of USD3.50.

Organizations realize a return on their AI investments within 14 months.

92% of AI deployments take 12 months or less. 

71% of respondents say their companies are already using AI.  

Clearly, we’re witnessing a rapid expansion of AI in which organizations globally are not only unlocking productivity within their businesses but also bending the curve on innovation by building on an open AI platform and partner ecosystem. These organizations are engaging Microsoft experts to build differentiated, intelligent applications and modernize existing, business-critical applications. These intelligent applications use real-time and historical data to deliver personalized, adaptable digital experiences that close the gap between the user’s current state and the desired outcome. New or modernized, when built on Microsoft Azure, these applications benefit from one of the largest interconnected networks on the planet, high availability, and trusted security and compliance.

Learn More

Forrester Total Economic Impact of Microsoft Azure App Innovation report

Azure brings together capabilities for modern app development, cloud-scale data, and leading generative AI in one place. Customers see great value using these services together. In a recent Forrester Total Economic Impact of Microsoft Azure App Innovation report, customers were able to gain significant time savings of one to one and a half months when delivering new applications to the market, increase developer efficiency up to 25%, and reduce app downtime up to 25%. This leads to compelling business benefits such as beating competitors in the innovation race, capturing incremental revenue, minimizing lost revenue and fines from downtime, and increasing the engagement and retention of key talent.   

ISV solutions help accelerate your AI transformation

While Azure provides the tools to build and modernize intelligent applications, it’s important to consider the broader tech stack. Independent Software Vendor (ISV) solutions complement Azure services by allowing you to meet specific use-case requirements, modernize existing tech stacks onto Azure, and mitigate the need to build new skillsets. If your organization routinely uses ISV solutions as part of the app infrastructure or development process, chances are that you can continue to use them even as you build new or modernize existing apps onto Azure. An example is apps built on Azure Spring Apps or Azure Red Hat OpenShift.  

1. Azure Spring Apps Enterprise

Azure Spring Apps Enterprise is a fully managed service for the Spring Framework, built in collaboration with VMware. Building upon the Spring Framework and incorporating features from VMware Tanzu, Azure Spring Apps Enterprise helps accelerate development with ready-made, enterprise-conformant templates. Azure Spring Apps Enterprise offers full integration into Azure’s ecosystem and services, including fully managed infrastructure, built-in app lifecycle management, and ease of monitoring for app development and modernization. If you have existing apps in the Spring Framework, you can efficiently modernize them onto Azure while managing costs and enhancing the apps with AI. Here’s how to get started: Migrate Spring Boot applications to Azure Spring Apps.  

2. Azure Red Hat OpenShift

Azure Red Hat OpenShift is a turnkey application platform, jointly engineered, operated, and supported by Red Hat and Microsoft. With Azure Red Hat OpenShift, you can deploy fully managed Red Hat OpenShift clusters without worrying about building and managing the infrastructure, and you get ready access to and integration with Azure tools, singular billing, integrated support, and access to committed-spend and discount programs. This improves operational efficiency and time to value, and allows developers to refocus on innovation to quickly build, deploy, and scale applications. Get started through the Azure Red Hat OpenShift documentation.

Microsoft also supports pure third-party solutions as part of its ISV ecosystem to complement native Azure services. While these solutions meet a diverse set of use cases, ranging from analytics to storage, here’s one that’s likely common to many app development or modernization projects—HashiCorp Terraform.

3. HashiCorp Terraform on Azure

An infrastructure as code tool for provisioning and managing cloud infrastructure, HashiCorp Terraform on Azure allows you to define infrastructure as code with declarative configuration files that can be used to create, manage, and update infrastructure. If your organization currently uses Terraform, developers can use their familiarity with the tool to deploy and manage Azure infrastructure using familiar and consistent syntax and tooling. To support this, HashiCorp offers a library of pre-built modules for Azure services, including Azure AI, Azure Kubernetes Service, and Azure Cosmos DB. And as your developers build new modules, perhaps with GitHub Copilot, those modules can be templatized using HashiCorp Terraform for reuse within your organization, setting up your developer teams for greater productivity and velocity. To get you started, read the blog “Empowering AI: Building and Deploying Azure AI Landing Zones with Terraform.”  
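As a hypothetical illustration of what such a declarative configuration file looks like, the fragment below sketches an Azure resource group and an Azure Cosmos DB account using the azurerm provider. The resource names, region, and settings are placeholder assumptions, not values from the text.

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

# A resource group and a Cosmos DB account declared as code;
# running `terraform apply` would create or update them to match.
resource "azurerm_resource_group" "app" {
  name     = "rg-intelligent-app"   # placeholder name
  location = "eastus"               # placeholder region
}

resource "azurerm_cosmosdb_account" "db" {
  name                = "cosmos-intelligent-app"  # must be globally unique
  location            = azurerm_resource_group.app.location
  resource_group_name = azurerm_resource_group.app.name
  offer_type          = "Standard"

  consistency_policy {
    consistency_level = "Session"
  }

  geo_location {
    location          = azurerm_resource_group.app.location
    failover_priority = 0
  }
}
```

Because the file is declarative, re-running it is idempotent, and the same module can be templatized and reused across teams, as described above.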

Build and modernize apps with Azure and our partner ecosystem

So, as you look through your app infrastructure and decide to modernize your existing apps, any Spring apps or Red Hat OpenShift apps can easily be moved to Azure, with HashiCorp Terraform on Azure to assist. While we have only looked at three solutions in this blog, your preferred vendors are likely part of the Azure ISV ecosystem. Microsoft’s ecosystem of partners also includes partners that specialize in offering services to build custom intelligent apps, with industry-specific experience.

Connect with experts from Azure who can guide you toward an app architecture that uses the appropriate technology and services—Microsoft or partner—for your needs.
The post Modernize and build intelligent apps with support from Microsoft partner solutions appeared first on Azure Blog.

The Total Economic Impact™ of the Microsoft commercial marketplace

In today’s fast-paced, AI-fueled business environment, organizations require a growing number of cloud solutions. In trying to meet this demand, software as a service (SaaS) sprawl can lead to wasted investments and cost overruns. Research by Forrester found that cloud marketplaces can minimize this challenge by helping organizations balance speed and agility.1 The Microsoft commercial marketplace allows customers to centralize their cloud portfolio, taking complexity out of the purchasing process. In a 2023 Total Economic Impact™ study commissioned by Microsoft, Forrester Consulting found the Microsoft commercial marketplace delivered customers a 587% return on investment (ROI) with a payback period of less than six months.
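To make the headline figure concrete: ROI is (benefits − costs) ÷ costs, so a 587% ROI implies benefits of roughly 6.87 times costs over the study period. The cost figure in the sketch below is a hypothetical placeholder, used purely to illustrate the arithmetic behind the quoted number.

```python
# Illustrative ROI arithmetic based on the figure quoted above (587% ROI).
# ROI = (benefits - costs) / costs; the cost value is hypothetical.
def roi(benefits: float, costs: float) -> float:
    """Return on investment as a percentage."""
    return (benefits - costs) / costs * 100

costs = 1_000_000          # hypothetical cost over the study period
benefits = costs * 6.87    # benefits implied by a 587% ROI
print(round(roi(benefits, costs)))  # -> 587
```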

By connecting customers to our ecosystem of more than 400,000 partners through one trusted source, the marketplace simplifies and accelerates business-to-business (B2B) tech commerce. Forrester Consulting interviewed 10 customers that actively use the Microsoft commercial marketplace to quantify the opportunity. The research found that, for a composite organization based on the interviewed customers, the marketplace:

Reduced employee time for onboarding new vendors by 75%.

Reduced employee effort required for each procurement engagement by 50%.

Optimized Azure cloud budget by recapturing up to 50% of at-risk spend.

“I can actually do procurement faster. I can make sure that my products get released faster, bringing benefits faster to the company.”
Director of IT, Financial Services

Simplify procurement processes

Enterprise buying is complex, especially in tech and with rapid advancements in AI. Procurement teams must navigate a multitude of choices, manage vendor relationships, and continuously adapt to emerging trends while ensuring investments meet immediate needs, as well as align with long-term business strategy in a cost-efficient manner.

Ultimately, organizations find the most success with the marketplace when the procurement team, or buying office, becomes involved at an organizational level. As a single destination to discover, try, and purchase certified solutions, the marketplace helps customers “make informed procurement decisions without having to devote significant employee time to researching, evaluating, and selecting a vendor,” according to interviewees in the Forrester Consulting study. Using the marketplace reduced employee effort required for each procurement engagement by 50%.

“We have readily available options for our suppliers that are already vetted and available. [Our teams] can adopt without having to go through elongated procurement processes, additional orders, and administrative processes.”
Director of Global Supply Chain, Telecommunications

For finance teams, interviewees reported in the Forrester Consulting study that leveraging features of the marketplace such as automated invoicing and billing systems “reduced manual data entry, eliminated billing errors, and expedited overall invoice processing and billing cycles.” Adopting the marketplace improved time savings for the payments and finance purchasing team by 30%.

“Once I buy a solution from the marketplace, I don’t have the nightmare of managing invoices and checking for accuracy.”
Executive Director, Government

Onboarding new vendors can be slow and cumbersome, at times delaying a customer from deploying new technology solutions. Using the marketplace reduced time required to onboard new vendors by 75%. Vendors can submit required documentation and access guidance and tools directly in the marketplace, reducing the administrative burden of reviewing paperwork and walking vendors through new processes.

“The Microsoft commercial marketplace has been really useful for fulfilling my day-to-day activities and facilitating my vendor relationships. …The contract and red tape is handled by Microsoft. I don’t have to go deep into my NDAs (non-disclosure agreements), legal documents, or contract redlining, and all those kinds of things.”
Director of IT, Financial Services

Maximize cloud investments

As cloud investments grow, customers are looking to maximize value. Many of the customers interviewed by Forrester Consulting for the study had consumption commitments with Microsoft Azure—contracts providing discounts on infrastructure once a customer meets a certain spending threshold.

The Microsoft commercial marketplace is unique in that it automatically counts 100% of eligible purchases towards a customer’s commitment, ensuring more value for the investment. Customers can quickly search Azure benefit-eligible solutions directly in the Azure portal.
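Because 100% of an eligible marketplace purchase counts toward the commitment, marketplace spend decrements the remaining commitment dollar for dollar. The sketch below illustrates that arithmetic; the function and all the dollar figures are hypothetical, not part of Microsoft's billing logic.

```python
# Sketch of how eligible marketplace purchases decrement an Azure
# consumption commitment when they count at 100%. Figures are hypothetical.
def remaining_commitment(commitment: float, azure_spend: float,
                         marketplace_spend: float,
                         eligible_rate: float = 1.0) -> float:
    """Commitment left after Azure spend and eligible marketplace spend."""
    return max(0.0, commitment - azure_spend - marketplace_spend * eligible_rate)

# A customer with a $5M commitment, $3.8M of Azure spend, and $1M of
# eligible marketplace purchases has only $200K left to consume.
print(remaining_commitment(5_000_000, 3_800_000, 1_000_000))  # -> 200000.0
```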

Because of this benefit, Forrester Consulting found the marketplace helped customers optimize their Azure commitments, recapturing up to 50% of at-risk spend. Those interviewed by Forrester Consulting are in good company. According to Microsoft data, more than 85% of customers with consumption commitments are already buying partner solutions via the marketplace to maximize their investments.

“It’s dollar for dollar. Whatever we spend, 100% of it goes to our spend commitment, and that was a huge deal. For some of [Microsoft’s] competitors, [that’s] not the case.”
IT Sourcing Specialist, Manufacturing

Marketplace customers also got more out of their cloud budgets by avoiding costly proof of concept fees. Prior to using the marketplace, customers interviewed by Forrester Consulting said they often had to pay up-front fees for a proof of concept to vet new solutions. The marketplace makes it simple to access trial options, including free trials ranging from 30 days to six months, so customers can feel confident before they buy.

Get started with marketplace

The Forrester Consulting Total Economic Impact™ study found the Microsoft commercial marketplace helps customers simplify procurement and maximize cloud investments, ultimately leading to “cost savings, streamlined operations, and increased agility in software acquisition and integration.” As customers look to cloud solutions to solve business and technology needs, the marketplace makes it easier than ever for organizations large and small to access these tools quickly, securely, and efficiently.

Get the free Forrester Consulting study: The Total Economic Impact™ of the Microsoft Commercial Marketplace.

Read how Microsoft was named a Leader in cloud and applications-centric marketplaces by IDC MarketScape.

Learn more about the Microsoft commercial marketplace.

1The SaaS Marketplaces Landscape, Q2 2023 | Forrester.
The post The Total Economic Impact™ of the Microsoft commercial marketplace appeared first on Azure Blog.

What’s new in Azure Data, AI, and Digital Applications: Are you ready to go from GenAI experimentation to solutions deployed at scale?

For many organizations, implementing AI is not an “if” but a “when.” Determining the “when” depends on many factors, but one that may be tricky to scope is “when are we ready?” Is it when your workloads are in the cloud? When your team has the skills? When it’s a good fit for your culture and customers? All of the above, and …?  

Prior to working at Microsoft, I was a Microsoft systems integrator (SI) partner implementing Microsoft Azure solutions. A big reason we were exclusive with Microsoft was the investment in tooling and expertise Microsoft makes to help customers assess their transformation readiness. Going back to our very beginning, Microsoft’s roots are in helping developers and organizations use technology to address some of the world’s biggest opportunities and challenges. In this era of AI, we once again find ourselves seeking ways to empower our customers to adopt GenAI widely, and at a pace not seen before in the technology industry. In fact, a recent IDC study found that pace is a huge part of the GenAI equation. 

As someone who has been at a keyboard building software solutions, I can tell you—having enterprise-ready tooling and seamless integration into the broader cloud services needed to build applications is at the top of the list for delivering on time, with quality, no doubt about it.

We are now in the second year of the era of AI. The first year was full of excitement and experimentation, giving all of us a glimpse into the powerful potential of AI to revolutionize experiences for customers, improve employee productivity, and ignite a sense of wonder. The focus this year is on implementation, realizing value, and discovering where AI can take your organization. Read on for what’s new across the business, with a special focus on updates that will help ensure you’re ready for what’s next.

What’s new in AI readiness

Evaluate apps in Azure AI Studio before deploying

Ensuring an AI-powered app is ready to deploy means evaluating model accuracy and robustness, response quality, scalability, compliance, and other items critical to launching a successful app. With Azure AI Studio, generative AI app developers can build and evaluate applications for safety, reliability, and performance before deploying. If needed, they can fine-tune models and reorchestrate prompt application components. The platform facilitates scalability, transforming proofs of concept into full production with ease, and continuous monitoring and refinement support long-term success. To see this in action, watch our good friend Seth Juarez in the Microsoft Mechanics episode, “Build your own copilots with Azure AI Studio,” where evaluation is built into the workflow.
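Conceptually, a pre-deployment evaluation pass loops over a test set, scores each generated answer against a reference, and aggregates the results. The sketch below is a framework-free illustration of that idea, not the Azure AI Studio evaluation API; the token-overlap F1 metric and the sample data are assumptions for demonstration only.

```python
# Minimal, framework-free sketch of a pre-deployment evaluation loop.
# This is NOT the Azure AI Studio API; metric and data are hypothetical.
def f1_overlap(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a model answer and a reference answer."""
    p, r = prediction.lower().split(), reference.lower().split()
    common = sum(min(p.count(t), r.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(r)
    return 2 * precision * recall / (precision + recall)

eval_set = [
    {"question": "What is the capital of France?",
     "reference": "The capital of France is Paris",
     "prediction": "Paris is the capital of France"},
]

# Aggregate score over the evaluation set; gate deployment on a threshold.
scores = [f1_overlap(ex["prediction"], ex["reference"]) for ex in eval_set]
print(sum(scores) / len(scores))  # all reference tokens recovered -> 1.0
```

A real evaluation run would also include safety and groundedness checks, larger test sets, and model-based graders.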

Calling all retailers! AI-ready data solutions in Microsoft Fabric now in public preview

In January, we unveiled new generative AI and data solutions across the shopper journey, offering copilot experiences through Microsoft Cloud for Retail. A new retail industry data model can be used for data governance, reporting, business intelligence, and advanced analytics. A data connector brings e-commerce data from Sitecore OrderCloud into Microsoft Fabric in real time. Analytics templates, such as frequently bought together, provide actionable, data-driven recommendations to help retailers improve product upselling and shelf optimization. New copilot templates on Azure OpenAI Service allow retailers to build personalized shopping experiences and support store operations. Add to that new copilot features in Microsoft Dynamics 365 Customer Insights and the launch of Retail Media Creative Studio in the Microsoft Retail Media Platform, and Microsoft Cloud for Retail now offers more options than ever for retailers to infuse copilot experiences throughout the shopper journey. Learn more.
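To illustrate the kind of analysis behind a "frequently bought together" template, the sketch below counts item-pair co-occurrence across a few transactions. The basket data is hypothetical; the actual Fabric template operates on real sales data with richer logic.

```python
# Toy co-occurrence counting behind a "frequently bought together"
# recommendation. The transactions are hypothetical sample data.
from collections import Counter
from itertools import combinations

transactions = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"bread", "milk"},
]

# Count every unordered item pair that appears together in a basket.
pairs = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pairs[pair] += 1

# The most frequently co-purchased pair across the baskets.
print(pairs.most_common(1))  # -> [(('bread', 'butter'), 2)]
```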

Advancing through the LLMOps Maturity Model: A roadmap for generative AI operational excellence 

The latest in our ongoing series on LLMOps for business leaders delves into how to use the LLMOps Maturity Model to advance methodically from theoretical understanding to practical mastery of powerful generative AI models. The LLMOps Maturity Model is not just a roadmap from foundational LLM utilization to mastery in deployment and operational management; it’s a strategic guide for navigating the ever-evolving landscape of AI. This fourth blog in the series offers a practical roadmap for businesses to move from the basics of using large language models (LLMs) to mastering their deployment and management.

New Azure SQL Database Hyperscale pricing and Azure AI Search integration

AI applications need high-performance databases that can handle large volumes of data and complex queries. Azure SQL Database Hyperscale’s unique architecture provides the needed flexibility and scalability for AI-ready cloud applications of any size and I/O requirement. And new, reduced compute pricing gives you the performance and security of Azure SQL at commercial open-source prices. Learn more.

New, integrated vectorization capability in Azure AI Search means customers can now do vector search using data stored in Azure SQL Database. This new capability opens up new application scenarios for integrating vectors into traditional search as well as Retrieval-Augmented Generation (RAG) applications. Learn more.
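The idea behind vector search in a RAG scenario, embedding rows of data and ranking them by similarity to an embedded query, can be illustrated without any Azure SDK. The sketch below is a hypothetical, self-contained toy: the three-dimensional "embeddings" and sample rows are invented stand-ins for real model output, and the managed service handles embedding, indexing, and approximate nearest-neighbor search for you at scale.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vector_search(query_vec, rows, top_k=2):
    """Rank (id, text, embedding) rows by similarity to the query vector."""
    scored = [(cosine_similarity(query_vec, emb), rid, text)
              for rid, text, emb in rows]
    scored.sort(reverse=True)
    return [(rid, text) for _, rid, text in scored[:top_k]]

# Toy rows, as if pulled from a database table with an embedding column.
rows = [
    (1, "Annual report on cloud revenue", [0.9, 0.1, 0.0]),
    (2, "Recipe for sourdough bread",     [0.0, 0.2, 0.9]),
    (3, "Quarterly cloud cost summary",   [0.8, 0.3, 0.1]),
]

# Pretend embedding of the query "cloud finance documents".
query = [1.0, 0.2, 0.0]
print(vector_search(query, rows))  # rows 1 and 3 rank highest
```

In a RAG application, the top-ranked texts returned this way would be stuffed into the prompt as grounding context before the model generates its answer.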

Microsoft is a leader in the 2023 IDC MarketScape for AI Governance Platforms  

We are proud of the work we put into ensuring our AI products and services empower you to deploy solutions that are safe, responsible, and effective. So, we are honored to be recognized as a Leader in the inaugural IDC MarketScape Worldwide AI Governance Platforms 2023 Vendor Assessment (doc #US50056923, November 2023). Learn more about this recognition.

Azure Partners: Faster app delivery, cost savings, and funding with Azure Migrate and Modernize and Azure Innovate

Whether your customers are migrating to gain a secure and AI-ready foundation, modernizing their app portfolio, or ready to build new intelligent apps, it is now easier to bring value to our mutual customers and help them capitalize on the technological innovations transforming every industry.

Achieve faster application delivery times, significant cost savings, and access to funding through Azure Migrate and Modernize and Azure Innovate. Why is this so important to get ready for AI? You must get to the cloud before that innovation can begin!

Formerly the Azure Migration and Modernization Program, Azure Migrate and Modernize is an expanded offering for customer scenarios across apps, data, and infrastructure that includes support for more workloads, streamlined access to specialized partners, incentives to offset migration costs, and security guidance built into every engagement. Azure Innovate is a new offering focused on building new solutions and modernizing existing ones on Azure to meet the demand for AI transformation.

Both offerings provide end-to-end coverage of customer needs, from migration and modernization scenarios to AI innovation, and are built to scale as customer requirements and priorities evolve. Partners providing services, and ISVs building new or modernizing existing applications, have access to assessments, pilot PoCs, tooling, funding, and Microsoft expert guidance when they need it, all designed to help accelerate the cloud journey and drive growth and impact.

Visit Azure Migrate and Modernize and Azure Innovate for more info. 

Ready for the cloud and AI? Find out with a free Azure Expert Assessment

Looking for the best way to leverage the scale and compute horsepower of the cloud to optimize your IT infrastructure, data, and applications? Want a personal recommendation on that best way? For free? Check out Azure Expert Assessment, a one-to-one offer to collaborate with a Certified Azure Expert who will personally guide you through the assessment and make recommendations for your organization’s cloud adoption plan. You’ll get a clear technical roadmap and a comprehensive business case to support your cloud strategy. You will also get access to best practices, tools, and resources to help you implement your cloud solutions. Sound good? Go: Azure Expert Assessment.

Build your AI skills

Coming soon: Industry AI Implementation Workshops for partners

A new series of AI workshops to help partners implement industry-specific AI solutions is launching soon. The goal is to inform, educate, and accelerate our industry-specific partners on generative AI and equip them with what’s needed to go from concept to market. Workshop benefits include:

Architectural guidance supporting customer/partner adoption of our Generative AI stack (Azure OpenAI, Copilot, etc.)

An approach to Code ‘For’ / Code ‘With’ depending on the relationship

Offer feature requests/improvements to our horizontal platforms (Azure OpenAI, Copilot for Microsoft 365, Copilot for Dynamics 365, etc.)

The Retail workshop is currently in pilot, with Manufacturing, Healthcare, and Sustainability following soon. Stay updated on the Industry AI Workshops.

Workshop: Get started with Responsible AI

The Responsible AI framework helps AI developers identify and mitigate risks and harms that could impact people, businesses, and society. This hands-on workshop gives participants experience using the Responsible AI framework to debug their machine learning models and improve their performance to be more fair, inclusive, safe, reliable, and transparent. Learn more on the Responsible AI Framework.

Opportunities to connect

Register for the Microsoft Fabric Community Conference

Unifying data from across a sprawling infrastructure is critical to AI readiness. Microsoft Fabric helps you connect and curate data from anywhere, apply powerful analytics, and share insights across your organization, all while governing and protecting your data. Come see for yourself: join us at the Microsoft Fabric Community Conference in Las Vegas March 26-28 and see firsthand how Fabric and the rest of our data and AI products can help your organization prepare for the era of AI. You'll hear from leading Microsoft and community experts from around the world and get hands-on experience with the latest features from Microsoft Fabric, Power BI, Azure Databases, Azure Databricks, Azure AI, Microsoft Purview, and so much more. Use discount code MSCUST to save $100. Register today.

Azure Cosmos DB Conf 2024—Call for speakers

Azure Cosmos DB is the database for the era of AI and modern app development. From ChatGPT to TomTom Digital Cockpit, an immersive, conversational in-car infotainment system, Azure Cosmos DB powers responsive and intelligent apps with real-time data, ingested and processed at any scale.

There are many, many examples of developers building innovative apps with Azure Cosmos DB and if you’re one of them, we invite you to showcase your work at Azure Cosmos DB Conference 2024 on April 16. This free, virtual developer event is co-organized by Microsoft and the Azure Cosmos DB community. This year’s theme is “Building the next generation of AI Apps with Azure Cosmos DB.” We seek stories about customers using Azure Cosmos DB and AI to power next-gen intelligent apps, focusing on unique customer scenarios/use cases, and on integrating with open-source APIs like MongoDB, PostgreSQL, and Apache Cassandra. Customer demo sessions showcasing innovative AI use cases will get priority. This call for speakers is open until Feb. 15. Get the details and submit a session.

AI era ready customers are making transformative moves

Walmart unveils new generative AI-powered capabilities for shoppers and associates

Microsoft and Walmart established a strategic partnership in 2018 that has accelerated innovation on several fronts. At CES, Walmart unveiled the latest results of our ongoing collaboration: an all-new generative AI-powered search function that works across iOS, Android, and Walmart's own website. The new capability is specifically designed to understand the context of a customer's query and generate personalized responses. Soon, customers will have a more interactive and conversational experience, get answers to specific questions, and receive personalized product suggestions. Read the story.

Windstream uses Azure OpenAI Service to empower employees and transform business outcomes

Windstream, a leading telecommunications services provider in the U.S., was ready for the era of AI and has fully embraced it to revolutionize operations. The company uses Azure OpenAI Service to extract valuable insights from customer interactions and call transcripts to improve customer service, and to analyze technical information and error codes, transforming them into customer-friendly messages that inform users about issues and expected resolution times. Windstream also uses Azure AI, including cognitive search and OpenAI's Davinci model, to index its vast amount of internal social media data and documents, including approximately 100,000 indexed documents on its Confluence wiki. This indexed data is made available to Windstream's custom-built GPT (Generative Pre-trained Transformer) platform, hosted within Azure Kubernetes Service (AKS), allowing employees to access indexed knowledge and answer questions, leading to increased efficiency and more informed decision-making. And that's not all. Read more about Windstream's AI story.

Ally Financial empowers customer service associates to focus on human engagement by using Azure OpenAI Service

Ally, a digital financial services firm, wanted to increase the time its call-center customer service associates can spend with customers. Those associates had to write detailed notes after customer calls, taking time away from customers. To free up that time while maintaining detailed call documentation, Ally used Microsoft Azure and Azure OpenAI Service to automate note-taking. Now associates can quickly review the AI-generated summary after each call and turn their attention back to serving customers. This solution cut associates' post-call effort by 30 percent. As Ally improves call summary accuracy, which is over 85 percent to start, they expect to reduce associates' post-call effort by 50 percent. Read more on Ally's AI integration.

Microsoft and Cognite build industrial data operations platform on Microsoft Fabric and Azure OpenAI Service

Another partnership transforming industry with AI is our partnership with Cognite, which recently expanded to converge enterprise and industrial data operations to create a scalable, AI-driven platform that meets the demands of modern industries from the shop floor to the top floor. The solution integrates flagship Cognite Data Fusion with Microsoft Fabric to deliver a unified enterprise DataOps solution with capabilities for vertical industrial data workloads enabled through copilots. Customers can leverage Cognite Data Fusion to drive decisions in asset-centric scenarios, such as asset performance optimization, and Fabric to generate insights to run their business. Read more on Cognite's story.

How Microsoft’s AI screened over 32 million candidates to find a better battery—in 80 hours

In an awe-inspiring example of how AI can reshape our world, the Microsoft Quantum team used advanced AI to screen over 32 million candidates to discover and synthesize a new material that holds the potential for better batteries—the first real-life example of many that will be achieved in a new era of scientific discovery driven by AI. Joining forces with the Department of Energy's Pacific Northwest National Laboratory (PNNL), the team accomplished in days something that would have taken traditional science and lab experimentation multiple lifespans to achieve. The discovery is important for many reasons. Solid-state batteries are safer than traditional liquid or gel-like lithium batteries and provide more energy density. Lithium is scarce, expensive, and environmentally and geopolitically problematic. Creating a battery that reduces lithium requirements by approximately 70% could have tremendous environmental, safety, and economic benefits. This achievement is a glimpse at how the convergence of high-performance computing (HPC) and AI is accelerating scientific discovery across industries. Microsoft puts the power of these breakthroughs into customers' hands with our Azure Quantum Elements platform. Read the AI breakthrough battery story.

Are you ready for the era of AI?

With an average return of $3.50 for every $1 invested and firms beginning to see returns in just 14 months, as reported by IDC,1 AI is rerouting technology roadmaps everywhere as the focus sharpens around when to deploy AI-powered solutions. The era of AI moves that next horizon much, much closer, accelerating transformation and value-realization timelines.

If you are assessing your organization's readiness milestones on a revised roadmap, include room for experimentation. With AI, there's what you want it to do and, in many cases, what more you realize it can do once you begin implementation. Not quite serendipity, but close.

Microsoft Azure was purpose-built with your limitless innovation in mind. With the most comprehensive and responsible AI toolset on the market, the most advanced supercomputer in the world, and a team of the best and brightest on hand to help you plan and execute, Microsoft is the trusted partner to empower you to make the most of the AI era.

1. IDC Study: The Business Opportunity of AI (microsoft.com)

The post What’s new in Azure Data, AI, and Digital Applications: Are you ready to go from GenAI experimentation to solutions deployed at scale? appeared first on Azure Blog.