A deep dive into what’s new with Azure Cognitive Services

This blog post was co-authored by Tina Coll, Senior Product Marketing Manager, Azure Cognitive Services.

Microsoft Build 2019 marks an important milestone for the evolution of Azure Cognitive Services with the introduction of new services and capabilities for developers. Azure empowers developers to make reinforcement learning real for businesses with the launch of Personalizer. Personalizer, along with Anomaly Detector and Content Moderator, is part of the new Decision category of Cognitive Services that provide recommendations to enable informed and efficient decision-making for users.

Available now in preview and general availability (GA):

Preview

Cognitive service APIs:

Personalizer – creates personalized user experiences
Conversation transcription – transcribes in-person meetings in real-time
Form Recognizer – automates data-entry
Ink Recognizer – unlocks the potential of digital inked content

Container support, letting businesses run AI models at the edge, closer to the data:

Speech Services (Speech to Text & Text to Speech)
Anomaly Detector
Form Recognizer

Generally available

Neural Text-to-Speech
Computer Vision Read
Text Analytics Named Entity Recognition

Cognitive Services span the categories of Vision, Speech, Language, Search, and Decision, offering the most comprehensive portfolio in the market for developers who want to embed the ability to see, hear, translate, decide and more into their apps.  With so much in store, let’s get to it.

Decision: Introducing Personalizer, reinforcement learning for the enterprise

Retail, media, e-commerce, and many other industries have long pursued the holy grail of personalizing the experience. Unfortunately, giving customers more of what they want often requires stringing together various CRM, DMP, name-your-acronym platforms and running A/B tests day and night. Reinforcement learning is the set of techniques that allow AI to achieve a goal by learning from what’s happening in the world in real time. Only Azure delivers this powerful reinforcement-learning-based capability through a simple-to-use API with Personalizer.
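As a sketch of how that API is used: the loop has two calls, Rank (send the current context and candidate actions, get back the chosen action) and Reward (report how well the choice worked, from 0 to 1). The helpers below only build the request bodies; the event ID, feature names, and action IDs are illustrative, following the shape of the preview REST API.

```python
import json

# Sketch of the Personalizer rank/reward loop. Field names follow the
# preview REST API shape; treat all values here as illustrative.

def build_rank_request(event_id, context_features, actions):
    """Build the JSON body for a Rank call: the service picks one action."""
    return {
        "eventId": event_id,
        "contextFeatures": context_features,
        "actions": [{"id": a_id, "features": feats} for a_id, feats in actions],
    }

def build_reward(score):
    """Build the JSON body for the follow-up Reward call (clamped to 0.0-1.0)."""
    return {"value": max(0.0, min(1.0, score))}

if __name__ == "__main__":
    body = build_rank_request(
        "event-001",
        [{"timeOfDay": "evening"}, {"device": "console"}],
        [("racing-game", {"genre": "racing"}), ("puzzle-game", {"genre": "puzzle"})],
    )
    print(json.dumps(body, indent=2))
    # After observing the user's behavior, report how good the choice was:
    print(json.dumps(build_reward(1.0)))
```

The reward you send back is what drives the learning: over many rank/reward pairs, the service adjusts which action it chooses for a given context.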

Within Microsoft, teams are using Personalizer to enhance the user experience. Xbox saw a 40 percent lift in engagement by using Personalizer to surface the content most likely to interest each user.

Speech: In-person meetings just got better with conversation transcription

Conversation transcription, an advanced speech-to-text feature, improves meeting efficiency by transcribing conversations in real time and capturing who said what, and when, so all participants can engage fully and quickly follow up on next steps. Pair conversation transcription with a device integrating the Speech Service Device SDK, now generally available, for higher-quality transcriptions. It also integrates with a variety of meeting conference solutions, including Microsoft Teams and other third-party meeting software. Visit the Speech page for more details.

Vision: Unlocking the value of your content – from forms to digital inked notes

Form Recognizer uses advanced machine learning technology to quickly and accurately extract text and data from businesses' forms and documents. With container support, this service can run on-premises and in the cloud. Automate information extraction quickly and tailor it to your specific content with as few as five samples and no manual labeling.

Ink Recognizer provides applications with the ability to recognize digital handwriting, common shapes, and the layout of inked documents. Through an API call, you can leverage Ink Recognizer to create experiences that combine the benefits of physical pen and paper with the best of the digital.

Integrated in Microsoft Office 365 and Windows, Ink Recognizer gives users the freedom to create content in a natural way. In PowerPoint, Ink Recognizer converts ideas into professional-looking slides in a matter of moments.

Bringing AI to the edge

In November 2018, we announced the Preview of Cognitive Services in containers that run on-premises, in the cloud or at the edge, an industry first.

Container support is now available in preview for:

Speech Services (Speech to Text & Text to Speech)
Anomaly Detector
Form Recognizer

With Cognitive Services in containers, ISVs and enterprises can transform their businesses with edge computing scenarios. Axon, a global leader in connected public safety technologies partnering with more than 17,000 law enforcement agencies in 100+ countries around the world, relies on Cognitive Services in containers for public safety scenarios where the difference of a second in response time matters:

“Microsoft's containers for Cognitive Services allow us to ensure the highest levels of data integrity and compliance for our law enforcement customers while enabling our AI products to perform in situations where network connectivity is limited.”

– Moji Solgi, VP of AI and Machine Learning, Axon

Fortifying the existing Cognitive Services portfolio

In addition to the new Cognitive Services, the following capabilities are generally available:

Neural Text-to-Speech now supports 5 voices and is available in 9 regions to provide customers greater language coverage and support. By changing the styles using Speech Synthesis Markup Language or the voice tuning portal, you can easily refine the voice to express different emotions or speak with different tones for various scenarios. Visit the Text-to-Speech page to “hear” more on the new voices available.
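As a small sketch of the SSML mentioned above, the helper below composes an SSML document that selects a neural voice and a speaking style. The voice and style names are examples only; check the Text-to-Speech documentation for the voices and styles available in your region.

```python
# Illustrative SSML builder for Neural Text-to-Speech. The voice name and
# style value are assumptions; verify against the service documentation.

def build_ssml(text, voice="en-US-JessaNeural", style="cheerful"):
    """Compose an SSML document selecting a neural voice and speaking style."""
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">'
        f'<voice name="{voice}">'
        f'<mstts:express-as style="{style}">{text}</mstts:express-as>'
        "</voice></speak>"
    )

if __name__ == "__main__":
    # POST this body to the service's text-to-speech endpoint with
    # Content-Type: application/ssml+xml to synthesize audio.
    print(build_ssml("Welcome back! Here is your daily briefing."))
```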

Computer Vision Read operation reads multi-page documents and contains improved capabilities for extracting text from the most common file types including PDF and TIFF.

In addition, Computer Vision has an improved image tagging model that now understands 10K+ concepts, scenes, and objects, and has expanded the set of recognized celebrities from 200K to 1M. Video Indexer has several enhancements, including a new AI Editor that won a NAB Show Product of the Year Award in the AI/ML category at this year’s event.

Named entity recognition, a capability of Text Analytics, takes free-form text and identifies occurrences of entities such as people, locations, organizations, and more. Through an API call, named entity recognition uses robust machine learning models to find and categorize more than twenty types of named entities in text documents. Named entity recognition supports 19 languages in preview, with English and Spanish now generally available.
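As a sketch of the call shape, the builder below packages free-form texts into the documents payload the entities endpoint expects. The URL and field names follow the v2.1-era REST API and are placeholders to check against the current documentation.

```python
# Illustrative payload builder for the Text Analytics named entity
# recognition endpoint. URL and document shape follow the v2.1-era REST
# API; treat both as placeholders.

ENTITIES_URL = "https://<your-region>.api.cognitive.microsoft.com/text/analytics/v2.1/entities"

def build_entities_request(texts, language="en"):
    """Package free-form texts into the documents payload the API expects."""
    return {
        "documents": [
            {"id": str(i + 1), "language": language, "text": t}
            for i, t in enumerate(texts)
        ]
    }

if __name__ == "__main__":
    # POST this JSON to ENTITIES_URL with your Ocp-Apim-Subscription-Key
    # header; the response maps each document id to its detected entities.
    print(build_entities_request(["Satya Nadella spoke at Microsoft Build in Seattle."]))
```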

Language Understanding (LUIS) now supports multiple intents to help users better comprehend complex and compound sentences.

QnA Maker supports multi-turn dialogs, enhancing its core capability of extracting dialog from PDFs or websites.

Get started today

Today’s milestones illustrate our commitment to bringing the latest innovations in AI to the intelligent cloud and intelligent edge.

To get started building vision and search intelligent apps, visit the Azure Cognitive Services page.
Source: Azure

AI-first content understanding, now across more types of content for even more use cases

This post is authored by Elad Ziklik, Principal Program Manager, Applied AI.

Today, data isn’t the barrier to innovation; usable data is. Real-world information is messy and carries valuable knowledge in ways that are not readily usable, requiring extensive time, resources, and data science expertise to process. With Knowledge Mining, it’s our mission to close the gap between data and knowledge.

We’re making it easier to uncover latent insights across all your content with:

Azure Search’s cognitive search capability (general availability)
Form Recognizer (preview)

Cognitive search and expansion into new scenarios

Announced at Microsoft Build 2018, Azure Search’s cognitive search capability uniquely helps developers apply a set of composable cognitive skills to extract knowledge from a wide range of content. Deep integration of cognitive skills within Azure Search enables the application of facial recognition, key phrase extraction, sentiment analysis, and other skills to content with a single click. This knowledge is organized and stored in a search index, enabling new experiences for exploring the data.

Cognitive search, now generally available, delivers:

Faster performance – Improved throughput, with processing speeds up to 30 times faster than in preview, completing previously hour-long tasks in only a couple of minutes.
Support for complex data types – Native support for complex types extends the kinds of data that can be stored and searched (this has been the most requested Azure Search feature). Raw datasets can include hierarchical or nested substructures that do not break down neatly into a tabular rowset, for example multiple locations and phone numbers for a single customer.
New skills – Extended library of pre-built skills based on customer feedback. Improved support for processing images, added ability to create conditional skills, and shaper skills that allow for better control and management of multiple skills in a skillset. Plus, entity recognition provides additional information to each entity identified, such as the Wikipedia URL.
Easy implementation – The solution accelerator provides all the resources needed to quickly build a prototype, including templates for deploying Azure resources, a search index, custom skills, a web app, and PowerBI reports. Use the accelerator to jump start development efforts and apply cognitive search to your business needs.
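To make the skills idea above concrete, here is a minimal skillset definition chaining key phrase extraction and entity recognition over document content. The ODATA type names follow the Azure Search REST documentation of the time; verify them against the current API version before use.

```python
import json

# A minimal cognitive search skillset: key phrases plus entity recognition
# applied to each document's content. Skill names and field paths follow
# the documented REST shape; treat this as a sketch, not a drop-in config.

skillset = {
    "name": "demo-skillset",
    "description": "Extract key phrases and named entities from content",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "keyPhrases", "targetName": "keyPhrases"}],
        },
        {
            "@odata.type": "#Microsoft.Skills.Text.EntityRecognitionSkill",
            "categories": ["Person", "Organization", "Location"],
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "entities", "targetName": "entities"}],
        },
    ],
}

if __name__ == "__main__":
    # PUT this JSON to https://<service>.search.windows.net/skillsets/demo-skillset
    # with your api-key header to register the skillset.
    print(json.dumps(skillset, indent=2))
```

An indexer then references the skillset, and the skill outputs are mapped into fields of the search index.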

See what’s possible when you apply cognitive search to unstructured content, like art.

Tens of thousands of customers use Azure Search today, processing over 260 billion files each month. Now with cognitive search, millions of enrichments are performed over data ranging from PDFs to Office documents, from JSON files to JPEGs. This is possible because cognitive search reduces the complexity to orchestrate complex enrichment pipelines containing custom and prebuilt skills, resulting in deeper insight of content. Customers across industries including healthcare, legal, media, and manufacturing use this capability to solve business challenges.

“Complex customer needs and difficult markets are our daily business. Cognitive search enables us to augment expert knowledge and experience for reviewing complex technical requirements into an automated solution that empowers knowledge workers throughout our organization.” – Chris van Ravenswaay, Business Solution Manager, Howden

Extending AI-driven content understanding beyond search

Many scenarios outside of search require extracted insights from messy, complicated information. Expanding cognitive search to support unique scenarios, we are excited to announce the preview of the knowledge store capability within cognitive search – allowing access to AI-generated annotations in table and JSON format for application in non-search use cases like PowerBI dashboards, machine learning models, organized data repositories, bots, and other custom applications.

Form Recognizer, a new Cognitive Service

The Form Recognizer Cognitive Service, available in preview, applies advanced machine learning to accurately extract text, key-value pairs, and tables from documents.

With as few as 5 samples, Form Recognizer tailors its understanding to your documents. You can also use the REST interface of the Form Recognizer API to then integrate into cognitive search indexes, automate business processes, and create custom workflows for your business. You can turn forms into usable data at a fraction of the time and cost, so you can focus more time acting on the information rather than compiling it.
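As a sketch of what that REST integration might look like, the helpers below compose the analyze URL for a custom-trained model and the authentication headers. The URL shape, API version, and header names mirror the preview documentation; the endpoint, model ID, and key are placeholders.

```python
# Hedged sketch of calling the Form Recognizer preview REST API. The URL
# shape and header names follow the preview docs; model_id would come from
# a prior training call on your sample forms. All values are placeholders.

def build_analyze_url(endpoint, model_id):
    """Compose the analyze URL for a custom-trained Form Recognizer model."""
    return f"{endpoint}/formrecognizer/v1.0-preview/custom/models/{model_id}/analyze"

def auth_headers(key, content_type="application/pdf"):
    """Headers for a Cognitive Services request with a subscription key."""
    return {"Ocp-Apim-Subscription-Key": key, "Content-Type": content_type}

if __name__ == "__main__":
    url = build_analyze_url("https://westus2.api.cognitive.microsoft.com", "<model-id>")
    print(url)
    # A real call would then POST the form bytes, e.g. with the requests
    # library:
    #   requests.post(url, headers=auth_headers("<key>"),
    #                 data=open("invoice.pdf", "rb"))
```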

Container support for Form Recognizer supports use on the edge, on-premises, and in the cloud. The portable architecture can be deployed directly to Azure Kubernetes Service or Azure Container Instances or to a Kubernetes cluster deployed to Azure Stack.

Organizations like Chevron and Starbucks are using Form Recognizer to accelerate extraction of knowledge from forms and make faster decisions.

We look forward to seeing how you leverage these products to drive impact for your business.

Getting Started

Read more in docs
Get started with the solution accelerator
Try Azure Search's cognitive search
Explore knowledge store capability of cognitive search
Explore Form Recognizer

Source: Azure

Azure.Source – Volume 81

We’re really looking forward to Microsoft Build 2019, our premier event for developers happening next week, May 6-8 at the Washington State Convention Center in Seattle. It’s a chance for developers to gain access to the latest updates and developments across Microsoft’s products and solutions, understand our strategy and product roadmaps, and learn about new technology and open source software in innovative ways. Even the weather in Seattle looks like it's going to cooperate.

The Azure team is of course a big part of Build. We’ll be giving loads of presentations, and also posting a wide range of blog posts providing more details about what we share throughout the event. This Channel 9 video gives you a look ahead with the Azure IoT team:

As we're gearing up for Microsoft Build 2019, the IoT Show goes into Azure IoT's building on the Microsoft Campus to meet some of the speakers who are preparing awesome IoT content for you.

In the meantime, there’s plenty of other stuff going on around Azure right now. Here are some of the highlights:

News and updates:

Intelligent edge innovation across data, IoT, and mixed reality

We're at an incredibly exciting technology inflection point. The virtually limitless computing power of the cloud, combined with increasingly connected and perceptive devices at the edge of the network, creates possibilities we could only have dreamed of just a few years ago – possibilities made up of millions of connected devices, infinite data, and the ability to create truly immersive multi-sense, multidevice experiences. This post looks at some of the newest advances.

Digitizing trust: Azure Blockchain Service simplifies blockchain development

In a rapidly globalizing digital world, business processes touch multiple organizations and great sums are spent managing workflows that cross trust boundaries. As digital transformation expands beyond the walls of one company and into processes shared with suppliers, partners, and customers, the importance of trust grows with it. Microsoft’s goal is to help companies thrive in this new era of secure multi-party computation by delivering open, scalable platforms, and services that any company from game publishers and grain processors, to payments ISVs and global shippers can use to digitally transform the processes they share with others.

Making AI real for every developer and every organization

AI is fueling the next wave of transformative innovations that will change the world. With Azure AI, our goal is to empower organizations to apply AI across the spectrum of their business to engage customers, empower employees, optimize operations, and transform products. Read this blog to learn about our three guiding investment principles.

Technical content

Azure Stack IaaS – part seven

This blog post covers the automation options in your Cloud IaaS toolkit. We’ve come a long way – in the virtualization days, before cloud and self-service, it took a while to get all the approvals, credentials, virtual LANs (VLANs), logical unit numbers (LUNs), etc. It took so long, that the actual creation part was easy. When cloud came along with self-service, not only was it easier to create a virtual machine (VM) without relying on others, but it changed our thinking about whether VMs were precious or disposable. We’ll discuss!

Migrating big data workloads to Azure HDInsight

Migrating big data workloads to the cloud remains a key priority for our customers, and Azure HDInsight is committed to making that journey simple and cost-effective. HDInsight partners with Unravel, whose mission is to reduce the complexity of delivering reliable application performance when migrating data from on-premises or a different cloud platform onto HDInsight. Unravel’s Application Performance Management (APM) platform brings a host of services towards providing unified visibility and operational intelligence to plan and optimize the migration process onto HDInsight.

Deploy a FHIR sandbox in Azure

In connection with HIMSS 2019, we announced the Azure API for FHIR, which provides our customers with an enterprise grade, managed FHIR® API in Azure. Since then, we've been busy improving the service with new configuration options and features. Some of the features we have been working on include authentication configuration and the SMART on FHIR Azure Active Directory Proxy, which enables the so-called SMART on FHIR EHR launch with the Azure API for FHIR. We've developed a sandbox environment that illustrates how the service and the configuration options are used. In this post, we focus on how to deploy the sandbox in Azure. Later blog posts will dive into some of the technical details of the various configuration options.

5 internal capabilities to help you increase IoT success

This article is the third in a four-part series designed to help companies maximize their ROI on IoT. In the first post, we discussed how IoT can transform businesses. In the second post, we shared insights on how to create a successful strategy that yields desired ROI. In this third post, we discuss how companies can move forward by identifying and filling capability gaps. Let’s dive into some ideas about how to solve some of the challenges that could slow your IoT progress.

Monitoring enhancements for VMware and physical workloads protected with Azure Site Recovery

Azure Site Recovery has enhanced the health monitoring of your workloads by introducing various health signals on the replication component, Process Server. The Process Server (PS) in a hybrid disaster recovery (DR) scenario is a vital component of data replication. It handles replication caching, data compression, and data transfer. Once the workloads are protected, issues can be triggered due to multiple factors including high data change rate (churn) at source, network connectivity, available bandwidth, under provisioning the Process Server, or protecting a large number of workloads with a single Process Server. These may lead to a bad state of the PS and have a cascading effect on replication of VMs. Troubleshooting these issues is now made easier with additional health signals from the Process Server.

Building recommender systems with Azure Machine Learning service

Recommendation systems are used in a variety of industries, from retail to news and media. If you’ve ever used a streaming service or ecommerce site that has surfaced recommendations for you based on what you’ve previously watched or purchased, you’ve interacted with a recommendation system. With the availability of large amounts of data, many businesses are turning to recommendation systems as a critical revenue driver. However, finding the right recommender algorithms can be very time consuming for data scientists. This is why Microsoft has provided a GitHub repository with Python best practice examples to facilitate the building and evaluation of recommendation systems using Azure Machine Learning services.

Six Principles to Build Healthy Data-Driven Organizations

Organizations are increasingly forming teams around the function of Data Science. Data Science is a field that combines mathematics, programming, and visualization techniques and applies scientific methods to specific business domains or problems, like predicting future customer behavior, planning air traffic routes, or recognizing speech patterns. But what does it really mean to be a data-driven organization? InfoQ takes a look.

Azure shows

MSDN Channel 9

DevOps for ASP.NET Developers, Pt. 7

In part 7 of our series, Abel and Jeremy show us two ways to scaffold out an Azure DevOps pipeline. We'll see how to use Azure DevOps projects via the Azure portal, which gives us a UI to configure everything. Also, they'll show us how to use the Yo Team generator that allows us to work from the command line.

Detect Shake (Xamarin.Essentials API of the Week)

Xamarin.Essentials provides developers with cross-platform APIs for their mobile applications. On this week's Xamarin.Essentials API of the Week, we take a look at the Detect Shake API to help you detect when a user shakes a device.

YouTube

Mastering Azure using Cloud Shell, PowerShell and Bash!

Azure can be managed in many different ways. Learn your command line options like Azure PowerShell, Azure CLI, and Cloud Shell to be more efficient in managing your Azure infrastructure. Become a hero on the shell to manage the cloud!

The Azure Podcast

Episode 276 – Cloud simplified

Ryan Berry, an Azure Cloud Solutions Architect at Microsoft, talks about his own YouTube Channel where they distill down complex topics into bite sized chunks to make it easy for you to quickly leverage these features to address similar requirements you may have for moving something into Azure.

Azure DevOps Podcast

Rob Richardson on Containers in Azure

In this episode, Rob explains the critical steps when creating a container, what developers should consider when looking to run and support Containers through Azure, and much, much more.
Source: Azure

LaLiga entertains millions with Azure-based conversational AI

For LaLiga, keeping fans entertained and engaged is a top priority. And when it comes to fans, the Spanish football league has them in droves, with approximately 1.6 billion social media followers around the world. So any time it introduces a new feature, forum, or app for fans, instant global popularity is almost guaranteed. And while this is great news for LaLiga, it also poses technical challenges—nobody wants systems crashing or going unresponsive when millions of people are trying out a fun new app.

When LaLiga chose to develop a personal digital assistant running on Microsoft Azure, its developers took careful steps to ensure optimal performance in the face of huge user volume in multiple languages across a variety of voice platforms. Specifically, the league used Azure to build a conversational AI solution capable of accommodating the quirks of languages and nicknames to deliver a great experience across multiple channels and handle a global volume of millions of users.

Along the way, some valuable lessons emerged for tackling a deployment of this scope and scale.

Accommodating the quirks of languages and nicknames

The LaLiga virtual assistant has launched for Google Assistant and Skype, and it will eventually support 11 platforms. The assistant was created with Azure Bot Service and the Microsoft Bot Framework, and it incorporates Azure Cognitive Services and a variety of other Azure tools. The main engine for the assistant takes advantage of the scalability and flexibility of Azure App Service – a platform as a service (PaaS) offering – to streamline development. LaLiga used it multiple times to accelerate the development of the bot logic, image service, Google Assistant connector, Alexa connector, data loaders, cache management, and two Azure functions for live data and proactive messages.

Figure 1. An overview of the LaLiga virtual assistant architecture

Fans can ask the assistant questions using natural language, and the system parses this input to determine user intent by using Azure Text Analytics and Azure Cognitive Services Language Understanding in either Spanish or English. That may seem straightforward, but developers learned that subtleties of language can complicate the process. For example, the primary word for “goalkeeper” is different in the Spanish dialects used in Spain, Argentina, and Colombia. So the mapping of questions to intents needed to accommodate a many-to-one relationship for these variations.

A similar issue arose with players whose names have complicated spellings that don’t clearly correspond to their pronunciation – for example, "Griesman" instead of "Griezmann" – resulting in a variety of spellings in user queries. The solution here was to use aliases to guide the system to the correct player. Nicknames were another sticking point. Developers used Azure Monitor Application Insights to investigate user queries that weren’t mapping to any existing player and found that many people were asking about a player using his nickname rather than his official name. Once again, aliases came to the rescue.
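The alias approach described above can be sketched as a simple lookup that maps nicknames and common misspellings onto a canonical player record before intent resolution. The specific aliases below are illustrative, not LaLiga's actual data.

```python
# Minimal sketch of alias resolution: map nicknames and misspellings onto
# a canonical player name before the query reaches intent resolution.
# The alias table here is illustrative only.

PLAYER_ALIASES = {
    "griezmann": "Antoine Griezmann",
    "griesman": "Antoine Griezmann",       # common misspelling
    "el principito": "Antoine Griezmann",  # nickname (hypothetical entry)
}

def resolve_player(query_name):
    """Return the canonical player name for a user-supplied alias, if known."""
    return PLAYER_ALIASES.get(query_name.strip().lower())

if __name__ == "__main__":
    print(resolve_player("Griesman"))
```

In a production bot, unresolved names would be logged (as LaLiga did with Application Insights) so new aliases can be added over time.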

Guaranteeing a great experience across multiple channels

One goal of the development team was to support a consistent, high-quality user experience across different mobile platforms and devices, each of which has its own display parameters and may also have different connection protocols. In response to every user query, the LaLiga virtual assistant returns three elements: an image, some text, and a voice reply. The image can be a picture of a player or a “hero card” showing match results or player statistics. For channels with a visual display, the image and text are customized with XAML to make them easily legible for the specific display resolution.

Not all channels are created equal when it comes to user popularity, either. LaLiga expects that some channels will be used much more frequently than others, which requires adjusting resources to manage scalability properly. Developers created an app service for each channel and optimized it for anticipated usage.

Developers also needed to customize the connectors that the assistant uses for different channels depending on the channels’ capabilities and requirements. For example, the Alexa interface is based on Microsoft .NET Framework, which made it straightforward to develop a connection with Microsoft tools, but Google Assistant uses Node.js, requiring more complex development. Developers found it tricky to map messages from the LaLiga virtual assistant to types that Google Assistant understands. Adding a custom connector hosted with App Service resolved the issue. App Service also helps manage the scalability requirements for the channel. Microsoft is using the lessons learned from the LaLiga virtual assistant to help all developers streamline the creation of connectors with Azure-based bots.

Figure 2. An overview of integration with Google Assistant and Alexa

Planning for millions of enthusiastic users

LaLiga anticipates that the assistant will be hugely popular and that most users will ask multiple questions, generating a vast number of hits on the system each day and leading to high consumption of computing resources. Developers adopted multiple strategies to mitigate this high demand.

Incoming queries get divided into two categories—live and non-live. A live query could be one about a match in progress, where the data could be constantly changing, whereas a non-live query might relate to a completed game or player’s basic statistics. Whenever a non-live query arrives, the result is cached, so the answer is readily available if someone else asks the same question. The LaLiga virtual assistant uses a highly optimized Azure SQL database as its main data storage, rather than a non-structured data lake, to expedite results.
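The live/non-live split above can be sketched as a small caching layer: non-live answers are cached with a time-to-live, while live queries always go to the data store. The TTL value and function names are illustrative.

```python
import time

# Sketch of the live/non-live caching strategy: cache answers to non-live
# queries (completed games, player statistics) and always fetch live ones
# (matches in progress). TTL is illustrative.

_cache = {}

def answer(query, is_live, fetch, ttl=3600):
    """Return a cached answer for non-live queries; always fetch live ones."""
    now = time.time()
    if not is_live:
        hit = _cache.get(query)
        if hit and now - hit[1] < ttl:
            return hit[0]           # cache hit: skip the data store entirely
    result = fetch(query)           # cache miss or live query: go to storage
    if not is_live:
        _cache[query] = (result, now)
    return result
```

With this pattern, a popular non-live question ("Who won the 2018 final?") hits the database once and is then served from cache for the rest of the TTL window.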

Because scalability was a big concern, the team decided early to dedicate a developer to scalability testing. The developer created an automated system to simulate queries to the assistant, eventually testing millions of hits a day and setting the stage for a smooth launch. Bombarding the system with so many queries revealed another hitch – all those hits were genuine queries, but some web services might see that huge volume and think that the system is being hit by a distributed denial of service (DDoS) attack. So it’s essential to ensure that all components are configured to account for the popularity of the assistant.

Learn from LaLiga and build your own great bot

While some of these use cases may seem straightforward, the LaLiga virtual assistant development experience showed that sometimes a small tweak to development processes or application configuration can yield substantial rewards in terms of system performance and development time. We hope that the lessons learned during the LaLiga project will help you build your own massively popular digital assistant!

Read the case study for more information about LaLiga’s digital transformation and its initiatives to boost fan engagement.

Get started building your own branded virtual assistant.

Start more simply and build your first Q&A bot with QnA Maker.
Source: Azure

Making AI real for every developer and every organization

AI is fueling the next wave of transformative innovations that will change the world. With Azure AI, our goal is to empower organizations to apply AI across the spectrum of their business to engage customers, empower employees, optimize operations and transform products. To make this a reality, we have three guiding investment principles:

Boost the productivity of developers and data scientists and empower them to build AI solutions faster.
Enable these AI solutions to be deployed at scale alongside existing systems and processes.
Ensure organizations can build with full confidence knowing that they own and control their data on a platform that adheres to some of the industry’s strictest privacy standards and has the most comprehensive compliance portfolio of any cloud provider.

These guiding principles enable us to fulfill our mission of empowering every developer and every organization to harness the potential of AI. With research centers that span the globe, from Redmond to Shanghai, we continue to achieve industry breakthroughs in areas such as vision, speech, language, advanced machine learning techniques, and specialized AI hardware. These innovations are now key components of several of our flagship products, like Office 365, Xbox, Bing and Dynamics 365. This is important because with Azure AI, customers can benefit from the latest innovations that have been thoroughly battle-tested in our own products.

We are honored and humbled by the tremendous adoption of Azure AI by customers. Organizations of all sizes in all industries are using Azure AI to transform their business by:

Using machine learning to build predictive models, optimizing business processes.
Utilizing advanced vision, speech, language, and decision-enabling capabilities to build AI powered apps and agents to deliver personalized and engaging experiences.
Applying knowledge mining to uncover latent insights from vast repositories of data.

Today, we are excited to announce a range of innovations across all of these areas. Let’s walk through them.

Machine learning

Azure Machine Learning service is designed to accelerate the end-to-end machine learning lifecycle. With Azure Machine Learning, developers and organizations can quickly and easily build, train, and deploy models anywhere from the intelligent cloud to the intelligent edge, as well as manage their models with integrated CI/CD tooling.

As we strive to enable developers, data scientists, and DevOps professionals across all skill levels to increase productivity, operationalize models at scale, and innovate faster, we are pleased to announce:

New capabilities to enhance productivity now in preview:

Automated machine learning user interface that enables business domain experts to train machine learning models with just a few clicks.
Zero-code, visual interface that enables users new to machine learning to build, train, and deploy models easily using drag and drop capabilities.
Azure Machine Learning notebooks that provide developers and data scientists a code-first machine learning experience.

New capabilities to enable operationalization of models at scale:

MLOps or DevOps for machine learning capabilities, including Azure DevOps integration that enables Azure DevOps to be used to manage the entire machine learning lifecycle including model reproducibility, validation, deployment, and retraining.
General availability of hardware-accelerated models that run on FPGAs in Azure for extremely low-latency, low-cost inferencing. Available in preview for Azure Data Box Edge.
Model interpretability capabilities that enable customers to understand how a model works and why it makes certain predictions, removing the ‘black box’ aspect of ML models.
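Model interpretability techniques vary, but a common approach is permutation importance: shuffle one feature's values and measure how much model accuracy drops. The self-contained Python sketch below illustrates the idea with a toy model and synthetic data; it is not Azure Machine Learning's implementation, just the underlying concept.

```python
import random

# A tiny "model": predicts 1 when feature 0 exceeds a threshold.
def model(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, n_features, seed=0):
    """Importance of each feature = accuracy drop when that column is shuffled."""
    rng = random.Random(seed)
    baseline = accuracy(rows, labels)
    importances = []
    for j in range(n_features):
        shuffled_col = [r[j] for r in rows]
        rng.shuffle(shuffled_col)
        permuted = [list(r) for r in rows]
        for r, v in zip(permuted, shuffled_col):
            r[j] = v
        importances.append(baseline - accuracy(permuted, labels))
    return importances

# Synthetic data: the label depends only on feature 0; feature 1 is noise.
rng = random.Random(1)
rows = [[rng.random(), rng.random()] for _ in range(200)]
labels = [1 if r[0] > 0.5 else 0 for r in rows]

imp = permutation_importance(rows, labels, n_features=2)
```

Shuffling the informative feature collapses accuracy, while shuffling the noise feature changes nothing; that gap is what an interpretability report surfaces when it explains which inputs drive a prediction.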

Our commitment to an open platform:

Contribution to the open source MLflow project, with native support for MLflow in Azure Machine Learning service.
Support for ONNX Runtime with NVIDIA TensorRT and Intel nGraph for high-speed inferencing on NVIDIA and Intel chipsets.
Preview of a new service, Azure Open Datasets, that helps customers improve machine learning model accuracy using rich, curated open data and reduce time normally spent on both data discovery and preparation.

It’s exciting to see customers such as BP, Walgreens Boots Alliance, and Schneider Electric deploying machine learning solutions at scale using Azure Machine Learning.

“All the data scientists on our team enjoy using Azure Machine Learning, because it’s fully interoperable with all the other tools they use in their day-to-day work—no extra training is needed, and they get more done faster now,” – Matthieu Boujonnier, Analytics Application Architect and Data Scientist, Schneider Electric. Read the Schneider Electric case study.

Visit Azure Machine Learning to discover more.

AI apps and agents

The combination of Azure Cognitive Services and Azure Bot Service enables developers to easily infuse powerful AI capabilities into their apps and agents.

Azure Cognitive Services continues to be the most comprehensive portfolio in the market for developers who want to embed the ability to see, hear, respond, translate, reason and more into their apps. Today we’re making it even easier for developers to embed AI into their applications:

Introduction of a new Decision category.
Services in this category provide recommendations that enable informed and efficient decision-making. The category includes Content Moderator, the recently announced Anomaly Detector, and a new service called Personalizer, available in preview. Personalizer is built on reinforcement learning and prioritizes relevant content and experiences for each user, improving app usability and engagement. Microsoft’s own Xbox drove a 40 percent lift in user engagement on its home screen as a result of using Personalizer.
In Vision, we have two announcements. Ink Recognizer, now in preview, enables developers to combine the benefits of physical pen and paper with the best of digital by embedding digital ink recognition into apps; developers can build on it to make notes searchable and convert hand-written sketches into presentation-ready content in minutes. Additionally, the Computer Vision read capability, which extracts text from common file types including multi-page documents and PDF and TIFF formats, is now generally available.
In Speech, we are announcing the preview of a new advanced speech-to-text capability called conversation transcription, which catalyzes meeting efficiency by transcribing conversations in real-time so participants can fully engage in the discussion, know who said what and when, and quickly follow up on next steps. The neural text-to-speech capability and the Speech Service Device SDK are also now generally available.
In Language, Language Understanding has a new analytics dashboard for evaluating the quality of language models. In addition, QnA Maker now supports multi-turn dialogs, and the Text Analytics named entity recognition capability is now generally available.
We have expanded the portfolio of Cognitive Services that can run locally through a Docker container and we’re pleased to preview container support for Anomaly Detector, Speech-to-Text, and Text-to-Speech.

Only Azure provides developers with the flexibility to embed these powerful AI services where needed. Visit Azure Cognitive Services to find out more.
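Personalizer’s Rank-and-Reward loop, mentioned above, can be made concrete with a short sketch: the app sends the current context and candidate actions, the service picks one, and the app later reports a reward based on what the user did. The Python below only assembles the payloads locally; the endpoint, feature names, and action ids are illustrative assumptions, and no network call is made.

```python
import json

# Hypothetical endpoint for illustration only.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/personalizer/v1.0"

def build_rank_request(event_id, context_features, actions):
    """Assemble a Rank request body: the service chooses the best action
    for this context and returns a rewardActionId."""
    return {
        "eventId": event_id,
        "contextFeatures": context_features,
        "actions": actions,
    }

def build_reward_request(value):
    """After observing the user's behavior, report a reward in [0, 1]
    so the underlying reinforcement-learning model can improve."""
    return {"value": value}

rank_body = build_rank_request(
    event_id="event-001",
    context_features=[{"timeOfDay": "evening"}, {"device": "console"}],
    actions=[
        {"id": "new-releases", "features": [{"genre": "action"}]},
        {"id": "continue-playing", "features": [{"genre": "rpg"}]},
    ],
)

# The JSON body would be POSTed to f"{ENDPOINT}/rank"; after the user
# interacts, POST build_reward_request(1.0) to
# f"{ENDPOINT}/events/event-001/reward".
payload = json.dumps(rank_body)
```

The reward signal closes the loop: over many events the service learns which action to surface for each context, which is what drove the Xbox home-screen engagement lift described above.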

Azure Bot Service, built on Microsoft Bot Framework, makes it easier to develop bots and intelligent agents. New enhancements include:

Adaptive dialogs enable developers to create more sophisticated, dynamic conversations.
Language generation package streamlines the creation of smart and dynamic bot responses.
Emulator now has improved fidelity for debugging channels.

LaLiga, the premier men’s soccer league of Spain, creates solutions using Cognitive Services, Bot Service, and other Azure services. Its intelligent bot connects with 1.6 billion followers across various social networks, and delivering a world-class voice assistant is key to scoring brand love with fans:

“Our digital innovation platform built on Microsoft Azure helps us deliver the best possible fan experiences for the world’s best sports league.” Jose Carlos Franco, Head of Data and Analytics, LaLiga.

Knowledge mining

While organizations have seemingly unlimited access to information that can range from databases to PDFs to media files, there are still significant challenges in making that information usable and meaningful. With knowledge mining, you can leverage industry leading AI capabilities to easily unlock latent insights from all your content at scale.

We have two exciting announcements in this category:

The cognitive search capability of Azure Search is now generally available and up to 30 times faster than before. Azure Search is the only offering in the market with a single mechanism to apply AI enrichments to content. Using cognitive search and its built-in AI capabilities, customers can discover patterns and relationships in their content, understand sentiment, extract key phrases, and more, all without any data science expertise. In addition, a new knowledge store capability, in preview, lets developers further leverage the insights and metadata extracted by the cognitive search pipeline: they can store the enriched metadata and apply it to scenarios such as Power BI visualizations, custom knowledge graphs, triggering actions within an application, or building machine learning models with newly labeled data.
The new Form Recognizer service applies advanced machine learning to accurately extract text, key-value pairs, and tables from documents. With just a few samples, it tailors its understanding to supplied documents, both on-premises and in the cloud. It can be used to build robotic process automation (RPA) solutions.
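The enrichment idea behind cognitive search can be pictured as a pipeline of skills, each reading fields from a document and writing new enriched fields that a knowledge store could then persist. The Python sketch below uses naive stand-in skills purely for illustration; the real service defines skills declaratively and runs far more capable models.

```python
# A toy enrichment pipeline in the spirit of cognitive search skillsets.

def key_phrase_skill(doc):
    # Naive stand-in for key-phrase extraction: longest words win.
    words = [w.strip(".,").lower() for w in doc["content"].split()]
    doc["keyPhrases"] = sorted(set(words), key=len, reverse=True)[:3]
    return doc

def sentiment_skill(doc):
    # Naive stand-in for sentiment scoring.
    positive = {"great", "excellent", "love"}
    words = set(w.strip(".,").lower() for w in doc["content"].split())
    doc["sentiment"] = 1.0 if words & positive else 0.5
    return doc

def run_skillset(doc, skills):
    """Apply each skill in order, accumulating enriched metadata that a
    knowledge store could persist for Power BI or ML scenarios."""
    for skill in skills:
        doc = skill(doc)
    return doc

enriched = run_skillset(
    {"id": "1", "content": "Customers love the excellent search experience."},
    [key_phrase_skill, sentiment_skill],
)
```

The key design point is that each skill only adds metadata to the document, so skills compose freely and the final enriched document can feed an index, a knowledge graph, or a training set.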

The Metropolitan Museum of Art is exploring how cognitive search understands nuances and relationships across its encyclopedic collection.

British Petroleum, Icertis, Howden, Chevron, UiPath, and others benefit from Azure Search and Form Recognizer to extract insights from their content and automate processes.

“We’re excited to leverage Form Recognizer as a key document extraction capability on our Robotic Process Automation (RPA) platform and our open AI ecosystem. UiPath’s and Microsoft’s investments in AI are streamlining the process of unlocking key business data, making possible a new era of AI-driven business insights and knowledge management.” – Mark Benyovszky, Director of Artificial Intelligence, UiPath.

Continuing to innovate

We continue to invest to make Azure the best place for AI and we are most excited to see how customers are applying AI in their businesses. The opportunities are limitless, and we are looking forward to seeing what you create with Azure AI.

Additional resources

Overview of Azure AI
Azure Machine Learning
Azure Cognitive Services
Azure Bot Service
Azure Search

Ready to get started? Check out:

E-book: AI in Action Developer’s Guide

Source: Azure

Intelligent edge innovation across data, IoT, and mixed reality

We are at an incredibly exciting technology inflection point. The virtually limitless computing power of the cloud, combined with increasingly connected and perceptive devices at the edge of the network, create possibilities we could only have dreamed of just a few years ago – possibilities made up of millions of connected devices, infinite data, and the ability to create truly immersive multi-sense, multidevice experiences.

While much attention has been paid to cloud innovations, the advancements at the edge are becoming equally remarkable. And, of course, the experiences built using the cloud and edge together are what become truly transformative. The intelligent cloud and intelligent edge application pattern transforms the way we interact with digital information and further blends the physical and digital worlds for greater societal benefit and customer innovation.

Today’s announcements put us a few steps closer toward these outcomes:

Azure SQL Database Edge – A small footprint database engine optimized for the edge, with AI built-in
IoT Plug and Play – Connect IoT devices to the cloud without writing any code
HoloLens 2 Development Edition – All the tools needed to get started building mixed reality experiences

Azure SQL Database Edge

In our work to address the unique requirements for data security and analytics on the edge, we are excited to announce the preview of Azure SQL Database Edge, which brings to the edge the same performant, secure, and easy to manage SQL engine that our customers love in Azure SQL Database and SQL Server. This new offering also adds capabilities that are optimized for edge scenarios, including:

Support for Arm and x64-based interactive devices and edge gateways.
Low-latency analytics on the edge, combining data streaming and time-series data with in-database machine learning and support for graph data.
Industry-leading security capabilities from Azure SQL Database to protect data at rest and in motion on edge devices and edge gateways, with security policies and updates managed from a central portal such as Azure IoT Central.
Ability to develop applications once and deploy anywhere by having a common programming surface area across Azure SQL Database, SQL Server on-premises, and Azure SQL Database Edge.
Support for both cloud connected and fully disconnected edge scenarios. Gain cloud scale by connecting edge scenarios to Azure. Enable fully disconnected edge scenarios with local compute and storage.
Support for business intelligence with Microsoft Power BI and other BI tools.

Join the Early Adopter Program to access the preview and get started building your next intelligent edge solution.

HoloLens 2 Development Edition

When the intelligent cloud and intelligent edge are imbued with mixed reality and artificial intelligence, we have a framework for achieving amazing things and empowering even more people. Enabling intelligent cloud and intelligent edge solutions requires a new class of distributed, connected applications that will deliver break-through business outcomes. Now all the pieces are coming together.

From construction sites to factory floors, from operating rooms to classrooms, mixed reality powered by AI is changing how we work, learn, communicate, and get things done, moving us beyond the 2D worlds of the PC and smartphone into the third wave of computing.

To help catalyze this third wave of computing, we are expanding our Mixed Reality Developer Program. Over the past year, we have engaged with more than 20,000 mixed reality developers, a number we expect to more than triple in the coming 12 months. To serve the needs of a larger audience, we are investing in more meetups, programs, events, and hacks (including our Mixed Reality Dev Days, happening now) with an increased focus on serving the mixed reality dev community on a global scale.

In addition to serving today’s global mixed reality development community, we are expanding the program to educate and build the mixed reality developers of the future. To support developers in this journey, we’re excited to announce the HoloLens 2 Development Edition.

Available through the Mixed Reality Developer Program, the Development Edition brings together all the tools developers need to get started:

HoloLens 2 mixed reality device
$500 in Azure credits to jump-start your mixed reality development using Azure mixed reality services
3-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data

Bringing together HoloLens 2, Azure Mixed Reality Services, and Unity Pro, the most widely used real-time 3D development platform, makes it easy for developers to create professional mixed reality experiences from their industrial design data. The PiXYZ Plugin provides the tools to create real-time experiences from computer-aided design (CAD) or Building Information Modeling (BIM) data.

“Pairing HoloLens 2 with Unity’s real-time 3D development platform enables businesses to accelerate innovation, create immersive experiences, and engage with industrial customers in more interactive ways. The addition of Unity Pro and PiXYZ Plugin to HoloLens 2 Development Edition gives businesses the immediate ability to create real-time 2D, 3D, VR, and AR interactive experiences while allowing for the importing and preparation of design data to create real-time experiences.”

– Tim McDonough, GM of Industrial – Unity

Ready to jump in and start developing today? Join the Mixed Reality Developer Program.

Visit hololens.com to learn more about the HoloLens 2 Development Edition and sign up to stay updated about latest news, mixed reality toolkits, code samples, and open source projects.  

IoT Plug and Play

One of the biggest challenges in building IoT solutions is connecting millions of IoT devices to the cloud, given the heterogeneous nature of today’s devices: different form factors, processing power, operating systems, and memory.

IoT Plug and Play offers a new, open modeling language for connecting IoT devices to the cloud seamlessly. With IoT Plug and Play, developers can connect IoT devices to the cloud without having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud.

In the past, Microsoft introduced the Plug and Play technology that allowed PC users to quickly connect peripherals without having to perform complex hardware and software configurations. Similarly, with IoT Plug and Play, Microsoft is simplifying IoT to accelerate adoption for enterprises who will be able to prototype and then move to full scale deployments much faster.
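To illustrate the declarative idea, here is a hypothetical sketch of a device model in Python. The field names are not the official IoT Plug and Play schema; the point is that a device describes its capabilities as data, so cloud software can enumerate and use them without device-specific code.

```python
import json

# A hypothetical, simplified device model: the device declares its
# capabilities declaratively, so the cloud can discover and use them
# without custom embedded code. Field names are illustrative only.
device_model = {
    "name": "EnvironmentSensor",
    "capabilities": [
        {"type": "telemetry", "name": "temperature", "unit": "celsius"},
        {"type": "property", "name": "firmwareVersion", "writable": False},
        {"type": "command", "name": "reboot"},
    ],
}

def capability_names(model, capability_type):
    """A cloud app could enumerate capabilities by type to render UI
    or route telemetry, with no device-specific code."""
    return [c["name"] for c in model["capabilities"] if c["type"] == capability_type]

telemetry = capability_names(device_model, "telemetry")
serialized = json.dumps(device_model, indent=2)
```

Because the model is plain data, a catalog of such models is enough for a cloud solution to onboard a new device class without writing or flashing any code.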

Today, we are announcing that cloud developers can find IoT Plug and Play enabled devices in our Azure IoT Device Catalog. The first wave includes dozens of devices from partners such as Compal, Kyocera, and STMicroelectronics.

Here is what a few partners in the IoT Plug and Play ecosystem have to say:

“IoT Plug and Play underlines our commitment to providing a streamlined and optimized deployment path for customers using the Microsoft Azure platform, a major advantage in today’s competitive and quickly evolving market.”

– Richard Brown, VP of International Marketing, VIA Technologies, Inc.

“At Askey, we’re committed to adding leading and innovative technology to the smart mobility industry and ecosystem. Thanks to Microsoft’s IoT Plug and Play, and the Azure Certified for IoT program, IoT device application development management will never be the same.”

– Robert Lin, CEO of Askey

“Thundercomm is excited to be cooperating with Microsoft Azure on end-to-end IoT solutions. As a highlight of our cooperation, Thundercomm TurboX Asset Tag makes it possible for enterprises to decide how to improve the utilization of their valued assets. With IoT Plug and Play, we will partner together with Microsoft to accelerate and simplify the integration and deployment of IoT devices.”

– Larry Geng, Chairman, Thundercomm

Discover even more possibilities with Azure IoT. Check the Building IoT Solutions with Azure: A Developer’s Guide to get started.

These are just a few of the many new innovations we are bringing to the edge. With our $5 billion, 4-year investment into the intelligent edge, you can be confident there is much more to come.
Source: Azure

Digitizing trust: Azure Blockchain Service simplifies blockchain development

In a rapidly globalizing digital world, business processes touch multiple organizations and great sums are spent managing workflows that cross trust boundaries. As digital transformation expands beyond the walls of one company and into processes shared with suppliers, partners, and customers, the importance of trust grows with it. Microsoft’s goal is to help companies thrive in this new era of secure multi-party computation by delivering open, scalable platforms, and services that any company from game publishers and grain processors, to payments ISVs and global shippers can use to digitally transform the processes they share with others.

Azure Blockchain Service: The foundation for blockchain applications in the cloud

Azure Blockchain Service is a fully managed blockchain service that simplifies the formation, management, and governance of consortium blockchain networks so businesses can focus on workflow logic and application development. Today, we’re excited to announce that the public preview is now available.

With a few simple clicks, users can create and deploy a permissioned blockchain network and manage consortium policies using an intuitive interface in the Azure portal. Built-in governance enables developers to add new members, set permissions, monitor network health and activity, and execute governed, private interactions through integrations with Azure Active Directory.

This week, we also announced an exciting partnership with J.P. Morgan to make Quorum the first ledger available in Azure Blockchain Service. Because it’s built on the popular Ethereum protocol, which has the world’s largest blockchain developer community, Quorum is a natural choice. It integrates with a rich set of open-source tools while also supporting confidential transactions, something our enterprise customers require. Quorum customers like Starbucks, Louis Vuitton, and our own Xbox Finance team can now use Azure Blockchain Service to quickly expand their networks with lower costs, shifting their focus from infrastructure management to application development and business logic.

“We are incredibly proud of the success Quorum has had over the last four years as organizations around the world use Quorum to solve complex business and societal problems. We are delighted to partner alongside Microsoft as we continue to strengthen Quorum and expand capabilities and services on the platform.”

— Umar Farooq, Global Head of Blockchain at J.P. Morgan

We’re excited to offer customers an enterprise-grade Ethereum stack with Quorum, and look forward to adding new capabilities to Azure Blockchain Service in the coming months, including digital token management, improved application integration, and support for R3’s Corda Enterprise.

An application-driven approach

The ledger is just the foundation for new applications. After configuring the underlying blockchain network with Azure Blockchain Service, you need to codify your business logic using smart contracts. Until now, this has been cumbersome, requiring multiple command-line tools and offering only limited IDE integration for developers. Today we are releasing an extension for VS Code to address these issues. The extension lets you create and compile Ethereum smart contracts, deploy them to either the public chain or a consortium network in Azure Blockchain Service, and manage their code using Azure DevOps.

Once your network is created and smart contract state machines are deployed, you must build an application in order for consortium participants to share business logic and data represented by the smart contracts. A key challenge has been integrating these applications with smart contracts so they either respond to smart contract updates or execute smart contract transactions. This connects business processes managed in other systems such as databases, CRM, and ERP systems with the ledger. Our new Azure Blockchain Dev Kit makes this easier than ever with connectors and templates for Logic Apps and Flow as well as integrations with serverless tools like Azure Functions.
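The smart-contract-as-state-machine pattern, and the way off-chain applications react to contract updates, can be sketched in a few lines of Python. This is a conceptual illustration only; real smart contracts run on the ledger, and applications would subscribe to their events via the dev kit’s Logic Apps, Flow, or Azure Functions connectors.

```python
# A minimal sketch of a shared workflow as a state machine: parties move
# the contract through allowed transitions, and off-chain applications
# register callbacks to react to state changes. Illustrative only.

class ShipmentContract:
    TRANSITIONS = {
        "Created": {"Shipped"},
        "Shipped": {"Delivered", "Disputed"},
        "Delivered": set(),
        "Disputed": set(),
    }

    def __init__(self):
        self.state = "Created"
        self.subscribers = []

    def on_change(self, callback):
        # An off-chain app registers to react to contract updates.
        self.subscribers.append(callback)

    def transition(self, new_state):
        if new_state not in self.TRANSITIONS[self.state]:
            raise ValueError(f"Illegal transition {self.state} -> {new_state}")
        self.state = new_state
        for cb in self.subscribers:
            cb(new_state)

events = []
contract = ShipmentContract()
contract.on_change(events.append)
contract.transition("Shipped")
contract.transition("Delivered")
```

Encoding the allowed transitions up front is what gives every consortium participant the same guarantees about how the shared process can evolve.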

You can learn more about how to build your first network, code your smart contracts, and interact with the ledger in the latest episodes of the web series Block Talk.

Embracing open communities

Over the past year, we have been preparing our Confidential Consortium Framework (CCF) for public release. CCF uses trusted execution environments (TEEs), such as SGX and VSM, to enable ledgers that integrate with it to execute confidential transactions with the throughput and latency of a centralized database. Confidentiality and high performance are key requirements of our enterprise customers. We’re excited to announce that we have finished the first version of CCF, integrated it with Quorum, and made the source code available on GitHub.

Microsoft believes that the best way to bring blockchain to our customers is by partnering with the diverse and talented open source communities that are driving blockchain innovation today. We began this journey in 2015, partnering with the growing communities around Ethereum, R3 Corda, and Hyperledger to make those technologies available in Azure. Instead of building our own ledger or creating a ledger alternative, we have worked to make the open source technologies developers love work better with Azure. All of the tooling released this week lets developers work against both consortium networks in Azure Blockchain Service and the public Ethereum network.

“Microsoft has embraced the open community of blockchain developers and has brought the best of their cloud development tooling to the developers building the next wave of decentralized applications. With Azure Blockchain Service and Ethereum integrations for tools like VS Code, Microsoft is demonstrating its commitment to open blockchain development.”

— Vitalik Buterin, co-founder of Ethereum

Next steps

Learn more about Azure Blockchain Service and get started today:

Create a free Azure account and get $200 to build your first network.
See how to configure and deploy your network, author smart contracts, and interact with the ledger on the web series Block Talk.
Accelerate development with code samples and tutorials in our Azure Blockchain Dev Kit.
Learn how you can use blockchain and databases like Azure Cosmos DB together.

Source: Azure

Azure Stack IaaS – part seven

If you do it often, automate it

In the virtualization days, before cloud and self-service, it took a while to get all the approvals, credentials, virtual LANs (VLANs), logical unit numbers (LUNs), and so on. It took so long that the actual creation part was easy by comparison. When cloud came along with self-service, not only was it easier to create a virtual machine (VM) without relying on others, but it changed our thinking about whether VMs were precious or disposable. At the same time, developers were moving to a world of continuous delivery to serve customers who expect apps with a constant stream of new features. This is why all real clouds provide automation APIs to quickly create VMs and other resources, including the infrastructure they rely on – an approach often called “infrastructure as code.” Azure’s API is governed by Azure Resource Manager (ARM), and when you set up Azure Stack, you get your own private instance of ARM.

In this blog post I will cover the automation options in your Cloud IaaS toolkit.

Azure portal

The best way to learn this is to first complete a VM deployment through the portal. Azure Stack provides the same portal as Azure. When you get to the last confirmation page, click the Download Template link. This shows you the template that will be used to deploy the VM you specified on the previous screens.

As you look through the template – which is in JSON format – you can see how the virtual machine, network, and storage are defined. You’ll also see an OS profile and a hardware profile for your VM. Using this template, you could deploy the same VM over and over, which is perfect if you’re helping your developers with continuous integration and delivery (CI/CD).

Note that an ARM template is not a procedural script like you might get with standard automation tools. The template creates its resources as a single deployment transaction: ARM will either reach the goal state or the deployment will fail. If it fails, you can fix the failure condition and move forward, or delete the deployment and start over. Either way, you don’t end up with something other than what you specified.
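For reference, the skeleton of such a template looks like this, assembled here as a Python dict so the example is self-contained. The top-level sections match what you see in a downloaded template; the VM resource is pared down for illustration.

```python
import json

# The skeleton of an ARM template. The $schema, contentVersion, parameters,
# and resources sections are the ones in a downloaded template; the VM
# resource is stripped to its outline for illustration.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "vmName": {"type": "string"},
    },
    "resources": [
        {
            "type": "Microsoft.Compute/virtualMachines",
            "name": "[parameters('vmName')]",
            "apiVersion": "2017-03-30",
            "location": "[resourceGroup().location]",
            "properties": {
                # osProfile, hardwareProfile, storageProfile, and
                # networkProfile would be filled in here, as in the
                # downloaded template.
            },
        }
    ],
}

# Because deployment is declarative, submitting this document twice
# describes the same goal state rather than running a script twice.
document = json.dumps(template, indent=2)
```

The bracketed strings such as `[parameters('vmName')]` are ARM template expressions, evaluated by Resource Manager at deployment time rather than by your tooling.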

Once you save your template, you can redeploy it in the portal using the Template deployment item in the marketplace.

Learn more:
Azure Resource Manager overview
Quickstart: Create templates using the Azure Portal
Deploy resources in the portal using a custom template

Visual Studio and Visual Studio Code

Visual Studio can be scary for an infrastructure person, but if you can master a few simple things, you can really help your team down the road of automation. The biggest thing Visual Studio provides is real-time feedback that you’re authoring the ARM JSON template correctly in terms of syntax – something you don’t get in Notepad. Get started by creating a new Azure Resource Group project.

A great way to start authoring a template is to use Quickstart templates. Azure Stack has a number of Quickstart templates you can use. When you start your new project in Select Azure Template, pick Azure Stack Quickstart from the drop-down list.

Visual Studio not only helps you author the template, it also lets you deploy the template directly to Azure Stack. When you sign into Visual Studio, all of your subscriptions in both Azure and Azure Stack show up as deployment targets. Since I use a number of Azure Stack environments, I have lots of deployment options.

Another way to create these templates is in Visual Studio Code. The ARM template is a JSON file, so you need a good lightweight authoring tool to work on the JSON file. Visual Studio Code (VS Code) is a great option.

After installing VS Code, you need to add the Azure Resource Manager Tools extension. This extension adds many features that simplify template authoring, helping you manage variables, parameters, and resource blocks with formatting and color coding.

Learn more:
Install Visual Studio and Connect to Azure Stack
Deploy templates to Azure Stack using Visual Studio
Create a template in Visual Studio
Create a template in Visual Studio Code

PowerShell and Azure command-line interface

The Portal and Visual Studio are both deployment options for your template. PowerShell and the Azure command-line interface (CLI) are two other options. Since Azure Stack is your own private instance of the Azure Resource Manager, you need to connect to your unique instance.

To connect to Azure Stack with PowerShell, use the Add-AzureRMEnvironment cmdlet, specifying the ARM endpoint for your environment. Depending on how your Azure Stack environment has been set up, you will authenticate using either your Azure Active Directory credentials or your organization’s Active Directory Federation Services (referred to in the documentation as AD FS).

Azure CLI is Microsoft's cross-platform command-line experience for managing Azure resources. It works on Mac, Linux, and Windows. In Azure you can run the CLI in Cloud Shell. You can use Azure CLI to connect to Azure Stack and deploy IaaS templates. Just like PowerShell, you need to first connect to your Azure Stack’s unique ARM endpoint. For CLI you use the az cloud register command. To deploy your ARM template you use the az group deployment create command.
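Roughly speaking, `az group deployment create` submits a PUT to the Resource Manager deployments endpoint with the template inline. The Python sketch below assembles that request without sending it; the host, subscription, and resource-group values are placeholders, and on Azure Stack the host would be your stamp’s own ARM endpoint rather than the public Azure one.

```python
import json

# Placeholder values for illustration; on Azure Stack, ARM_HOST would be
# your environment's unique Resource Manager endpoint.
ARM_HOST = "https://management.azure.com"
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "demo-rg"
DEPLOYMENT = "vm-deploy-1"

def deployment_request(template, parameters):
    """Assemble the URL and body that a template deployment submits to
    Azure Resource Manager."""
    url = (
        f"{ARM_HOST}/subscriptions/{SUBSCRIPTION}"
        f"/resourcegroups/{RESOURCE_GROUP}"
        f"/providers/Microsoft.Resources/deployments/{DEPLOYMENT}"
        "?api-version=2019-05-01"
    )
    body = {
        "properties": {
            "template": template,
            "parameters": parameters,
            "mode": "Incremental",  # leave existing resources in place
        }
    }
    return url, json.dumps(body)

url, body = deployment_request(
    template={"contentVersion": "1.0.0.0", "resources": []},
    parameters={},
)
```

Because both PowerShell and the CLI ultimately talk to this same ARM surface, a template authored once deploys unchanged to Azure or to your Azure Stack instance.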

Please note: We have not implemented Cloud Shell on Azure Stack yet. Cloud Shell allows you to run the Azure CLI directly inside your browser. If you would like to see this, make sure you put in a vote on Azure Cloud Shell UserVoice.

Learn more:
Install PowerShell for Azure Stack
Connect to Azure Stack with PowerShell
Deploy in Azure Stack with PowerShell
Use Azure CLI on Azure Stack
Deploy Azure Resource Manager Templates with Azure CLI

The cloud is for automation

The more people use clouds like Azure and Azure Stack, the less they use the portal and the more they use automation. There is an automation option in Azure and Azure Stack for all your needs. Say goodbye to virtualization and say hello to Cloud IaaS with your automation toolkit.

In this blog series

We hope you come back to read future posts in this blog series. Here are some of our past and upcoming topics:

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform
Start with what you already have
Fundamentals of IaaS
Protect your stuff
Do it yourself
Pay for what you use
Build on the success of others
Journey to PaaS

Source: Azure

Migrating big data workloads to Azure HDInsight

Migrating big data workloads to the cloud remains a key priority for our customers, and Azure HDInsight is committed to making that journey simple and cost-effective. HDInsight partners with Unravel, whose mission is to reduce the complexity of delivering reliable application performance when migrating data from on-premises systems or another cloud platform onto HDInsight.
Source: Azure