New Quick Start deploys SAP S/4HANA on AWS

This Quick Start automatically deploys SAP S/4HANA in the Amazon Web Services (AWS) Cloud in 1.5-2.5 hours. This Quick Start is for SAP architects, system administrators, and IT technical professionals who are responsible for architecture design and the deployment of S/4HANA workloads on AWS. 
Source: aws.amazon.com

It’s a Wrap – Highlights from the DockerCon 2019 Keynote Sessions

If you missed DockerCon in San Francisco this year or were unable to watch the livestream, no need to worry – we have you covered. You can catch all the demos, get the latest announcements and find out what is next for the Docker ecosystem by watching the replay sessions on demand.

Day 1: Docker Enterprise 3.0, Customer Innovation Awards, Robots and More
On Tuesday,  we kicked off the first day of DockerCon with product announcements, demos and customer guest speakers. During the session, we presented Docker Enterprise 3.0, the only desktop-to-cloud enterprise container platform enabling organizations to build and share any application and securely run them anywhere – from hybrid cloud to the edge. Additionally, we announced this year’s winners of the Customer Innovation awards, featuring Carnival, Citizens Bank, Liberty Mutual, Lindsay Corporation and Nationwide.

On stage, the Docker team also demonstrated Docker Applications, Docker Kubernetes Service (DKS) and new features and capabilities in Docker Desktop Enterprise – all designed to accelerate the application development and deployment pipeline. The keynote closed with a demonstration from R.O.S.I.E., the robot built by two Liberty Mutual engineers using Docker.

To learn firsthand everything featured on stage, watch the replay here:

The Docker Foundation, Community and Captain Awards and What’s Next for Docker
In Wednesday’s general session, we shared a bit more about our values as a company. First, we announced the creation of the Docker Foundation, a philanthropic organization that will focus on enabling education opportunities. The first organization the Docker Foundation will be working with is CodePath.org, an organization focused on eliminating educational inequity in computer science by providing the tools and connections that empower software engineers of any race, gender, or background to access jobs in the technology industry.

We followed this by announcing this year’s Community Leader Award winners and the Docker Captain Award – recognizing the importance of the extended Docker community. Finally, our CTO, Kal De, took the stage to share more about Docker’s engineering tenets and the technology areas where we are investing. This included demonstrations of forthcoming features for multi-architecture builds and automated cloud deployments, and the addition of commercial support for containerd.

To get a closer look at Day 2, watch the replay here:



For more information:

Learn more about Docker Enterprise
Find out more about the Docker Foundation
Meet our Customer Innovation Award Winners
Check out more DockerCon content 

Source: https://blog.docker.com/feed/

New Azure Machine Learning updates simplify and accelerate the ML lifecycle

With the exponential rise of data, we are undergoing a technology transformation, as organizations realize the need for insight-driven decisions. Artificial intelligence (AI) and machine learning (ML) technologies can help harness this data to drive real business outcomes across industries. Azure AI and Azure Machine Learning service are leading customers to the world of ubiquitous insights and enabling intelligent applications such as product recommendations in retail, load forecasting in energy production, image processing in healthcare, predictive maintenance in manufacturing, and many more.

Microsoft Build 2019 represents a major milestone in the growth and expansion of Azure Machine Learning with new announcements powering the entire machine learning lifecycle.

Boost productivity for developers and data scientists across skill levels with integrated zero-code and code-first authoring experiences as well as automated machine learning advancements for building high-quality models easily.
Enterprise-grade capabilities to deploy, manage, and monitor models with MLOps (DevOps for machine learning), hardware-accelerated models for unparalleled scale and cost performance, and model interpretability for transparency in model predictions.
Open-source capabilities that provide choice and flexibility to customers with MLflow implementation, ONNX runtime support for TensorRT and Intel nGraph, and the new Azure Open Datasets service that delivers curated open data to improve model accuracy.

With these announcements and other improvements being added weekly, Azure Machine Learning continues to help customers easily apply machine learning to grow, compete and meet their objectives.

“By seamlessly integrating Walgreens stores and our other points of care with Microsoft’s Azure AI platform and Azure Machine Learning, the partnership will offer personalized lifestyle, wellness and disease management solutions, available via customers’ delivery method of choice.” 

— Vish Sankaran, Chief Innovation Officer, Walgreens Boots Alliance, Inc.

Boost productivity with simplified machine learning

“Using Azure Machine Learning service, we get peace of mind with automated machine learning, knowing that we are exhausting all the possible scenarios and using the best model for our inputs.”

— Diana Kennedy, Vice President, Strategy, Architecture, and Planning, BP

Automated machine learning advancements

Doubling down on our mission to simplify AI, the new automated machine learning user interface (Preview) enables business domain experts to train machine learning models on data, without writing a single line of code, in just a few clicks. Learn how to run an automated ML experiment in the portal.

Automated machine learning UI

Feature engineering updates include new featurizers that provide tailor-made inputs for any given dataset, delivering optimal models. Improvements in sweeping different combinations of algorithms and hyperparameters, along with the addition of popular learners such as the XGBoost algorithm, enable greater model accuracy. Compute optimization automatically guides which algorithms to try and where to focus, while early termination keeps training runs efficient. Automated machine learning also provides complete transparency into algorithms, so developers and data scientists can manually override and control the process. All these advancements help ensure the best model is delivered.
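The sweep-and-terminate idea described above can be sketched in plain Python: try candidate configurations in turn, track the best score, and stop once several consecutive trials fail to improve. This is a conceptual illustration only, not the Azure automated ML implementation; the candidate scoring function here is a toy stand-in for real model training.

```python
def sweep(candidates, evaluate, patience=3):
    """Try candidate configurations in order; stop early when
    `patience` consecutive trials fail to improve the best score."""
    best_config, best_score = None, float("-inf")
    stale = 0
    for config in candidates:
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                break  # early termination: no recent improvement
    return best_config, best_score

# Toy example: the score peaks at max_depth=4, so the sweep
# terminates early instead of evaluating all ten candidates.
configs = [{"max_depth": d} for d in range(1, 11)]
best, score = sweep(configs, lambda c: -(c["max_depth"] - 4) ** 2, patience=3)
```

A real sweep would evaluate each configuration with a cross-validated training run; the early-termination logic is the same.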

Building forecasts is an integral part of any business, whether it’s revenue, inventory, sales, or customer demand. Forecasting with automated machine learning includes new capabilities that improve the accuracy and performance of recommended models with time-series data, including a new forecast prediction function, rolling cross-validation splits for time-series data, configurable lags, window aggregation, and a holiday featurizer.
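Rolling cross-validation for time series can be sketched as follows: each fold trains on a growing prefix of the history and validates on the window that immediately follows it, so the model never sees the future. This is a generic sketch of the technique, not Azure Machine Learning's internal implementation.

```python
def rolling_splits(n_samples, horizon, n_folds):
    """Yield (train_indices, test_indices) pairs where each test window
    of length `horizon` immediately follows its training prefix."""
    splits = []
    for fold in range(n_folds):
        test_end = n_samples - fold * horizon
        test_start = test_end - horizon
        if test_start <= 0:
            break  # not enough history left for another fold
        splits.append((list(range(0, test_start)),
                       list(range(test_start, test_end))))
    return list(reversed(splits))  # chronological order

# 10 observations, forecast horizon of 2, three validation folds:
# the last fold trains on indices 0..7 and validates on [8, 9].
splits = rolling_splits(10, horizon=2, n_folds=3)
```

Averaging the validation error across folds gives a more honest estimate of forecast accuracy than a single train/test cut.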

These capabilities help deliver high-accuracy forecasting models and support automated machine learning across many scenarios.

Azure Machine Learning visual interface (Preview)

The visual interface is a powerful drag-and-drop workflow capability that simplifies the process of building, training, and deploying machine learning models. Customers who are new to machine learning, or who prefer a zero-code experience, can take advantage of capabilities similar to those found in Azure Machine Learning Studio inside Azure Machine Learning service. Data preparation, feature engineering, training algorithms, and model evaluation are presented in an intuitive web user experience backed by the scale, version control, and enterprise security of Azure Machine Learning service.

Azure Machine Learning visual interface

With this new visual interface, we have started to combine the best of Azure Machine Learning Studio in Azure Machine Learning service. We will continue to share more updates throughout the year as we move from Preview towards General Availability.

Try it out yourself with this tutorial.

Hosted notebooks in Azure Machine Learning (Preview)

The new notebooks VM based authoring is directly integrated into Azure Machine Learning, providing a code-first experience for Python developers to conveniently build and deploy models in the workspace experience. Developers and data scientists can perform every operation supported by the Azure Machine Learning Python SDK using a familiar Jupyter notebook in a secure, enterprise-ready environment.

Hosted notebook VM (Preview) in Azure Machine Learning

Get started quickly and access a notebook directly in Azure Machine Learning, use preconfigured notebooks with no set up required, and fully customize notebook VMs by adding custom packages and drivers.

Enterprise-grade model deployment, management, and monitoring

MLOps – DevOps for machine learning

MLOps (also known as DevOps for Machine Learning) is the practice for collaboration and communication between data scientists and DevOps professionals to help manage the production machine learning lifecycle.

New MLOps capabilities in Azure Machine Learning bring the sophistication of DevOps to data science, with orchestration and management capabilities to enable effective ML Lifecycle management with:

Model reproducibility and versioning control to track and manage assets to create the model and sharing of ML pipelines, using environment, code, and data versioning capabilities.
Audit trail to ensure asset integrity and provides control logs to help meet regulatory requirements.
Packaging and validation for model portability and to certify model performance.
Deployment and monitoring support with a simplified experience for debugging, profiling and deploying models, to enable releasing models with confidence and knowing when to retrain.
Azure DevOps extension for Machine Learning and the Azure ML CLI to submit experiments from a DevOps Pipeline, track code from Azure Repos or GitHub, trigger release pipelines when an ML model is registered, and automate end-to-end ML deployment workflows using Azure DevOps Pipelines.
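The reproducibility, versioning, and audit-trail ideas in the list above can be illustrated with a minimal, content-hash-based model registry in plain Python. This is a conceptual sketch only, not the Azure Machine Learning model registry API; names like `ModelRegistry` and `register` are invented for illustration.

```python
import hashlib
from datetime import datetime, timezone

class ModelRegistry:
    """Toy registry: each registration records a content hash,
    an auto-incremented version, and an audit-log entry."""

    def __init__(self):
        self.models = {}    # model name -> list of version entries
        self.audit_log = []  # append-only trail of registry actions

    def register(self, name, model_bytes, metadata=None):
        digest = hashlib.sha256(model_bytes).hexdigest()
        versions = self.models.setdefault(name, [])
        entry = {
            "version": len(versions) + 1,       # monotonically increasing
            "sha256": digest,                   # ties a version to exact bytes
            "metadata": metadata or {},         # e.g. training metrics
            "registered_at": datetime.now(timezone.utc).isoformat(),
        }
        versions.append(entry)
        self.audit_log.append(("register", name, entry["version"]))
        return entry

registry = ModelRegistry()
v1 = registry.register("churn-model", b"weights-v1", {"auc": 0.91})
v2 = registry.register("churn-model", b"weights-v2", {"auc": 0.93})
```

The content hash is what makes the trail auditable: any later consumer can verify that the artifact it deploys is byte-for-byte the one that was registered.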

Operationalize models efficiently with MLOps

These capabilities enable customers to bring their machine learning scenarios to production by supporting reproducibility, auditability, and automation of the end-to-end lifecycle and leading to improved model quality over time.

Learn more about MLOps with Azure Machine Learning.

Hardware accelerated models and FPGA on Data Box Edge

In addition to acceleration available with GPUs, now scale from cloud to edge with Azure Machine Learning Hardware Accelerated Models, powered by FPGAs. These Hardware Accelerated Models are now generally available in the cloud, along with a preview of models deployed to Data Box Edge.

FPGA technology supports compute intensive scenarios like deep neural networks (DNNs), that have ushered in breakthroughs in computer vision, without forcing tradeoffs between price and performance. With FPGA’s it is possible to achieve ultra-low latency with ResNet 50, ResNet 152, VGG-16, DenseNet 121, and SSD-VGG. FPGAs enable real-time insights for scenarios like manufacturing defect analysis, satellite imagery, or autonomous video footage to drive business critical decisions.

Learn more about FPGAs and Azure Machine Learning.

Model interpretability

Microsoft is committed to supporting transparency, intelligibility, and explanation in machine learning models. Model interpretability brings us one step closer to understanding the predictions a model makes to ensure fairness and avoid model bias. This deeper understanding of models is key when uncovering insights about the model itself both in order to improve model accuracy during training and to uncover model behaviors and explain model prediction outcomes during inferencing.

Model interpretability is available in Preview, integrating cutting-edge open-source technologies (e.g., SHAP, LIME) under a common API and giving data scientists the tools to explain machine learning models globally on all data, or locally on a specific data point, in an easy-to-use and scalable fashion.
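One widely used model-agnostic explanation technique, related in spirit to what libraries like SHAP and LIME provide, is permutation importance: shuffle one feature's values and measure how much the model's error grows. A minimal stdlib sketch of that general technique (not the Azure interpretability API):

```python
import random

def permutation_importance(predict, X, y, feature, n_repeats=10, seed=0):
    """Average increase in mean squared error when column `feature`
    of the row tuples in X is randomly shuffled."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    baseline = mse(X)
    increases = []
    for _ in range(n_repeats):
        column = [row[feature] for row in X]
        rng.shuffle(column)
        shuffled = [row[:feature] + (v,) + row[feature + 1:]
                    for row, v in zip(X, column)]
        increases.append(mse(shuffled) - baseline)
    return sum(increases) / n_repeats

# The model below depends only on feature 0, so shuffling
# feature 1 changes nothing, while shuffling feature 0 hurts.
X = [(1.0, 5.0), (2.0, 3.0), (3.0, 8.0), (4.0, 1.0)]
y = [2.0, 4.0, 6.0, 8.0]
model = lambda row: 2.0 * row[0]
```

A feature whose shuffling leaves the error unchanged contributes nothing to the model's predictions; a large error increase marks an influential feature.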

Learn more about model interpretability.

Open and interoperable platform providing flexibility and choice 

“All the data scientists on our team enjoy using Azure Machine Learning, because it’s fully interoperable with all the other tools they use in their day-to-day work—no extra training is needed, and they get more done faster now.”

— Matthieu Boujonnier, Analytics Application Architect and Data Scientist, Schneider Electric

ONNX Runtime with Azure Machine Learning

Azure Machine Learning service supports ONNX (Open Neural Network Exchange), the open standard for representing machine learning models from TensorFlow, PyTorch, Keras, scikit-learn, and many other frameworks. An updated version of ONNX Runtime is now available, fully supporting the ONNX 1.5 specification, including state-of-the-art object detection models such as YOLOv3 and SSD. With ONNX Runtime, developers now have a consistent scoring API that enables hardware acceleration, thanks to the general availability of NVIDIA TensorRT integration and the public preview of Intel nGraph integration. ONNX Runtime is used on millions of Windows devices as part of Windows ML, and it also handles billions of requests in hyperscale Microsoft services such as Office, Bing, and Cognitive Services, where an average of two times performance gains have been seen.

Learn more about ONNX and Azure Machine Learning.

MLflow integration

Azure Machine Learning supports popular open-source frameworks to build highly accurate machine learning models easily, and enables training to run in a variety of environments, whether on-premises or in the cloud. Now developers can use MLflow with their Azure Machine Learning workspace to log metrics and artifacts from training runs in a centralized, secure, and scalable location.

Azure Open Datasets (Preview)

Azure Open Datasets is a new service providing curated, open datasets hosted on Azure and easily accessible from Azure Machine Learning. Use these datasets for exploration or combine them with other data to improve the accuracy of machine learning models. The datasets currently provided are historical and forecast weather data from NOAA, and many more will be added over time. Developers and data scientists can also nominate datasets to Azure, to support the global machine learning community with relevant and optimized data.
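Combining curated open data with your own data is, at its core, a join on shared keys. A stdlib sketch of enriching sales records with daily weather features (the field names and sample values here are hypothetical, and a real workflow would pull the weather rows from the Open Datasets service rather than a literal list):

```python
def enrich(records, weather, keys=("date", "station")):
    """Left-join weather features onto records by (date, station)."""
    index = {tuple(w[k] for k in keys): w for w in weather}
    enriched = []
    for rec in records:
        features = index.get(tuple(rec[k] for k in keys), {})
        merged = dict(rec)
        # copy over every weather field except the join keys themselves
        merged.update({k: v for k, v in features.items() if k not in keys})
        enriched.append(merged)
    return enriched

sales = [{"date": "2019-05-06", "station": "SEA", "units": 120}]
weather = [{"date": "2019-05-06", "station": "SEA",
            "temp_c": 18.0, "precip_mm": 0.0}]
rows = enrich(sales, weather)
```

The enriched rows now carry weather features alongside the original sales figures, ready to feed into model training.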

Azure Open Datasets

Learn more about Azure Open Datasets.

Start building experiences

Envisioning, building, and delivering these advancements to the Azure Machine Learning service has been made possible by closely working with our customers and partners. We look forward to helping simplify and accelerate machine learning even further by providing the most open, productive, and easy-to-use machine learning platform. Together, we can shape the next phase of innovation, making AI a reality for your business and enabling breakthrough experiences.

Get started with a free trial of Azure Machine Learning service.

Learn more about the Azure Machine Learning service and follow the quickstarts and tutorials. Explore the service using the Jupyter notebook samples. 

Read all the Azure AI news from Microsoft Build 2019.
Source: Azure

A deep dive into what’s new with Azure Cognitive Services

This blog post was co-authored by Tina Coll, Senior Product Marketing Manager, Azure Cognitive Services.

Microsoft Build 2019 marks an important milestone for the evolution of Azure Cognitive Services with the introduction of new services and capabilities for developers. Azure empowers developers to make reinforcement learning real for businesses with the launch of Personalizer. Personalizer, along with Anomaly Detector and Content Moderator, is part of the new Decision category of Cognitive Services that provide recommendations to enable informed and efficient decision-making for users.

Available now in preview and general availability (GA):

Preview

Cognitive service APIs:

Personalizer – creates personalized user experiences
Conversation transcription – transcribes in-person meetings in real-time
Form Recognizer – automates data-entry
Ink Recognizer – unlocks the potential of digital inked content

Container support to run business AI models at the edge, closer to the data:

Speech Services (Speech to Text & Text to Speech)
Anomaly Detector
Form Recognizer

Generally available

Neural Text-to-Speech
Computer Vision Read
Text Analytics Named Entity Recognition

Cognitive Services span the categories of Vision, Speech, Language, Search, and Decision, offering the most comprehensive portfolio in the market for developers who want to embed the ability to see, hear, translate, decide and more into their apps.  With so much in store, let’s get to it.

Decision: Introducing Personalizer, reinforcement learning for the enterprise

Retail, media, e-commerce, and many other industries have long pursued the holy grail of personalizing the experience. Unfortunately, giving customers more of what they want often requires stringing together various CRM, DMP, and name-your-acronym platforms and running A/B tests day and night. Reinforcement learning is the set of techniques that allows AI to achieve a goal by learning from what’s happening in the world in real time. Only Azure delivers this powerful reinforcement learning-based capability through a simple-to-use API with Personalizer.

Within Microsoft, teams are using Personalizer to enhance the user experience. Xbox saw a 40 percent lift in engagement by using Personalizer to display content to users that will most likely interest them.
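The reinforcement-learning loop behind a service like Personalizer, where the system chooses an action, observes a reward, and updates its estimates, can be sketched as a simple epsilon-greedy bandit. This is a conceptual illustration only, not the Personalizer API; the click probabilities below are an invented simulation of user behavior.

```python
import random

class EpsilonGreedy:
    """Pick the best-known action most of the time; explore occasionally."""

    def __init__(self, actions, epsilon=0.1, seed=42):
        self.values = {a: 0.0 for a in actions}  # estimated mean reward
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def rank(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.values))  # explore
        return max(self.values, key=self.values.get)   # exploit

    def reward(self, action, value):
        self.counts[action] += 1
        n = self.counts[action]
        # incremental mean: new_mean = old_mean + (value - old_mean) / n
        self.values[action] += (value - self.values[action]) / n

learner = EpsilonGreedy(["article-a", "article-b"])
for _ in range(500):
    choice = learner.rank()
    # simulated user: clicks article-b 70% of the time, article-a 30%
    clicked = learner.rng.random() < (0.7 if choice == "article-b" else 0.3)
    learner.reward(choice, 1.0 if clicked else 0.0)
```

After a few hundred interactions the learner's reward estimate for the better action dominates, which is the same feedback dynamic that drives engagement lifts like the Xbox example above.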

Speech: In-person meetings just got better with conversation transcription

Conversation transcription, an advanced speech-to-text feature, improves meeting efficiency by transcribing conversations in real-time, enabling all participants to engage fully, capturing who said what when so you can quickly follow up on next steps. Pair conversation transcription with a device integrating the Speech Service Device SDK, now generally available, for higher-quality transcriptions. It also integrates with a variety of meeting conference solutions including Microsoft Teams and other third-party meeting software. Visit the Speech page to see more details.

Vision: Unlocking the value of your content – from forms to digital inked notes

Form Recognizer uses advanced machine learning technology to quickly and accurately extract text and data from business forms and documents. With container support, this service can run on-premises and in the cloud. Automate information extraction quickly and tailor it to specific content, with as few as five samples and no manual labeling.

Ink Recognizer provides applications with the ability to recognize digital handwriting, common shapes, and the layout of inked documents. Through an API call, you can leverage Ink Recognizer to create experiences that combine the benefits of physical pen and paper with the best of the digital.

Integrated into Microsoft Office 365 and Windows, Ink Recognizer gives users the freedom to create content in a natural way. In PowerPoint, for example, Ink Recognizer converts ideas into professional-looking slides in a matter of moments.

Bringing AI to the edge

In November 2018, we announced the Preview of Cognitive Services in containers that run on-premises, in the cloud or at the edge, an industry first.

Container support is now available in preview for:

Speech Services (Speech to Text & Text to Speech)
Anomaly Detector
Form Recognizer

With Cognitive Services in containers, ISVs and enterprises can transform their businesses with edge computing scenarios. Axon, a global leader in connected public safety technologies partnering with more than 17,000 law enforcement agencies in 100+ countries around the world, relies on Cognitive Services in containers for public safety scenarios where the difference of a second in response time matters:

“Microsoft's containers for Cognitive Services allow us to ensure the highest levels of data integrity and compliance for our law enforcement customers while enabling our AI products to perform in situations where network connectivity is limited.”

– Moji Solgi, VP of AI and Machine Learning, Axon

Fortifying the existing Cognitive Services portfolio

In addition to the new Cognitive Services, the following capabilities are generally available:

Neural Text-to-Speech now supports 5 voices and is available in 9 regions to provide customers greater language coverage and support. By changing the styles using Speech Synthesis Markup Language or the voice tuning portal, you can easily refine the voice to express different emotions or speak with different tones for various scenarios. Visit the Text-to-Speech page to “hear” more on the new voices available.

Computer Vision Read operation reads multi-page documents and contains improved capabilities for extracting text from the most common file types including PDF and TIFF.

In addition, Computer Vision has an improved image tagging model that now understands more than 10,000 concepts, scenes, and objects, and has expanded the set of recognized celebrities from 200K to 1M. Video Indexer has several enhancements, including a new AI Editor that won a NAB Show Product of the Year Award in the AI/ML category at this year’s event.

Named entity recognition, a capability of Text Analytics, takes free-form text and identifies the occurrences of entities such as people, locations, organizations, and more. Through an API call, named entity recognition uses robust machine learning models to find and categorize more than twenty types of named entities in any text document. Named entity recognition supports 19 languages in Preview, with English and Spanish now generally available.
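Calling the Text Analytics entities endpoint means POSTing a JSON body of documents. A sketch of building that payload with the stdlib; the resource name and key in the comment are placeholders, and the actual HTTP call is shown only as a comment:

```python
import json

def build_ner_payload(texts, language="en"):
    """Build the documents payload shape the Text Analytics API expects."""
    return {
        "documents": [
            {"id": str(i + 1), "language": language, "text": text}
            for i, text in enumerate(texts)
        ]
    }

payload = build_ner_payload([
    "Satya Nadella spoke at Microsoft Build 2019 in Seattle."
])
body = json.dumps(payload)

# To call the service (sketch; <resource> and <key> are placeholders):
#   POST https://<resource>.cognitiveservices.azure.com/text/analytics/v2.1/entities
#   headers: {"Ocp-Apim-Subscription-Key": "<key>",
#             "Content-Type": "application/json"}
#   body: the JSON string built above
```

The response maps each document id to the entities found in it, with category and offset information, so results can be matched back to the input texts by id.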

Language Understanding (LUIS) now supports multiple intents to help users better comprehend complex and compound sentences.

QnA Maker supports multi-turn dialogs, enhancing its core capability of extracting dialog from PDFs or websites.

Get started today

Today’s milestones illustrate our commitment to bringing the latest innovations in AI to the intelligent cloud and intelligent edge.

To get started building vision and search intelligent apps, visit the Azure Cognitive Services page.
Source: Azure

AI-first content understanding, now across more types of content for even more use cases

This post is authored by Elad Ziklik, Principal Program Manager, Applied AI.

Today, data isn’t the barrier to innovation, usable data is. Real-world information is messy and carries valuable knowledge in ways that are not readily usable and require extensive time, resources, and data science expertise to process. With Knowledge Mining, it’s our mission to close the gap between data and knowledge.

We’re making it easier to uncover latent insights across all your content with:

Azure Search’s cognitive search capability (general availability)
Form Recognizer (preview)

Cognitive search and expansion into new scenarios

Announced at Microsoft Build 2018, Azure Search’s cognitive search capability uniquely helps developers apply a set of composable cognitive skills to extract knowledge from a wide range of content. Deep integration of cognitive skills within Azure Search enables the application of facial recognition, key phrase extraction, sentiment analysis, and other skills to content with a single click. This knowledge is organized and stored in a search index, enabling new experiences for exploring the data.

Cognitive search, now generally available, delivers:

Faster performance – Improved throughput with processing speeds up to 30 times faster than in preview, completing previously hour-long tasks in only a couple of minutes.
Support for complex data types – Native support extends the types of data that can be stored and searched (this has been the most requested Azure Search feature). Raw datasets can include hierarchical or nested substructures that do not break down neatly into a tabular rowset, for example multiple locations and phone numbers for a single customer.
New skills – Extended library of pre-built skills based on customer feedback. Improved support for processing images, added ability to create conditional skills, and shaper skills that allow for better control and management of multiple skills in a skillset. Plus, entity recognition provides additional information to each entity identified, such as the Wikipedia URL.
Easy implementation – The solution accelerator provides all the resources needed to quickly build a prototype, including templates for deploying Azure resources, a search index, custom skills, a web app, and PowerBI reports. Use the accelerator to jump start development efforts and apply cognitive search to your business needs.
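Composable skills are, conceptually, enrichment functions chained over a document: each skill reads fields produced by earlier skills and adds its own. A minimal stdlib sketch of that composition idea (the two toy "skills" below are deliberately naive stand-ins for the real cognitive skills):

```python
def key_phrase_skill(doc):
    # toy skill: treat capitalized words as key phrases
    doc["keyPhrases"] = [w.strip(".,") for w in doc["text"].split()
                         if w[:1].isupper()]
    return doc

def sentiment_skill(doc):
    # toy skill: naive lexicon-based sentiment score in [0, 1]
    positive = {"great", "beautiful", "masterpiece"}
    words = doc["text"].lower().split()
    doc["sentiment"] = sum(w.strip(".,") in positive for w in words) / len(words)
    return doc

def run_skillset(doc, skills):
    """Apply each skill in order; later skills see earlier enrichments."""
    for skill in skills:
        doc = skill(doc)
    return doc

enriched = run_skillset(
    {"text": "The Night Watch is a masterpiece by Rembrandt."},
    [key_phrase_skill, sentiment_skill],
)
```

In cognitive search the enriched fields land in a search index instead of a dict, but the skillset is composed the same way: a declared pipeline of skills, each mapping inputs to new output fields.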

See what’s possible when you apply cognitive search to unstructured content, like art:

Tens of thousands of customers use Azure Search today, processing over 260 billion files each month. Now with cognitive search, millions of enrichments are performed over data ranging from PDFs to Office documents, from JSON files to JPEGs. This is possible because cognitive search reduces the complexity of orchestrating enrichment pipelines containing custom and prebuilt skills, resulting in deeper insight into content. Customers across industries including healthcare, legal, media, and manufacturing use this capability to solve business challenges.

“Complex customer needs and difficult markets are our daily business. Cognitive search enables us to augment expert knowledge and experience for reviewing complex technical requirements into an automated solution that empowers knowledge workers throughout our organization.”

— Chris van Ravenswaay, Business Solution Manager, Howden

Extending AI-driven content understanding beyond search

Many scenarios outside of search require extracted insights from messy, complicated information. Expanding cognitive search to support these scenarios, we are excited to announce the preview of the knowledge store capability within cognitive search. Knowledge store provides access to AI-generated annotations in table and JSON format for use in non-search scenarios like PowerBI dashboards, machine learning models, organized data repositories, bots, and other custom applications.

Form Recognizer, a new Cognitive Service

The Form Recognizer Cognitive Service, available in preview, applies advanced machine learning to accurately extract text, key-value pairs, and tables from documents.

With as few as 5 samples, Form Recognizer tailors its understanding to your documents. You can also use the REST interface of the Form Recognizer API to then integrate into cognitive search indexes, automate business processes, and create custom workflows for your business. You can turn forms into usable data at a fraction of the time and cost, so you can focus more time acting on the information rather than compiling it.
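Integrating the REST interface typically means POSTing a document to an analyze endpoint and reading back the extracted fields. A sketch that only assembles the request pieces; the resource name, model id, API version path, and key below are all placeholders, and the actual `requests.post` call is shown only as a comment:

```python
from urllib.parse import urljoin

def build_analyze_request(resource, model_id, api_key):
    """Assemble URL and headers for a (hypothetical) analyze call."""
    base = f"https://{resource}.cognitiveservices.azure.com/"
    url = urljoin(
        base,
        f"formrecognizer/v1.0-preview/custom/models/{model_id}/analyze",
    )
    headers = {
        # standard subscription-key header used across Cognitive Services
        "Ocp-Apim-Subscription-Key": api_key,
        "Content-Type": "application/pdf",
    }
    return url, headers

url, headers = build_analyze_request("my-resource", "1234", "<key>")
# requests.post(url, headers=headers, data=open("invoice.pdf", "rb"))  # sketch
```

The response contains the extracted text, key-value pairs, and tables as JSON, which can then be pushed into a cognitive search index or a downstream business workflow.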

Container support lets Form Recognizer run at the edge, on-premises, and in the cloud. The portable architecture can be deployed directly to Azure Kubernetes Service or Azure Container Instances, or to a Kubernetes cluster deployed to Azure Stack.

Organizations like Chevron and Starbucks are using Form Recognizer to accelerate extraction of knowledge from forms and make faster decisions.

We look forward to seeing how you leverage these products to drive impact for your business.

Getting Started

Read more in docs
Get started with the solution accelerator
Try Azure Search's cognitive search
Explore knowledge store capability of cognitive search
Explore Form Recognizer

Source: Azure

Azure.Source – Volume 81

We’re really looking forward to Microsoft Build 2019, our premier event for developers happening next week, May 6-8 at the Washington State Convention Center in Seattle. It’s a chance for developers to gain access to the latest updates and developments across Microsoft’s products and solutions, understand our strategy and product roadmaps, and learn about new technology and open source software in innovative ways. Even the weather in Seattle looks like it's going to cooperate.

The Azure team is of course a big part of Build. We’ll be giving loads of presentations, and also posting a wide range of blog posts providing more details about what we share throughout the event. This Channel 9 video gives you a look ahead with the Azure IoT team:

As we're gearing up for Microsoft Build 2019, the IoT Show goes into Azure IoT's building on the Microsoft Campus to meet some of the speakers who are preparing awesome IoT content for you.

In the meantime, there’s plenty of other stuff going on around Azure right now. Here are some of the highlights:

News and updates:

Intelligent edge innovation across data, IoT, and mixed reality

We're at an incredibly exciting technology inflection point. The virtually limitless computing power of the cloud, combined with increasingly connected and perceptive devices at the edge of the network, create possibilities we could only have dreamed of just a few years ago – possibilities made up of millions of connected devices, infinite data, and the ability to create truly immersive multi-sense, multidevice experiences. This post looks at some of the newest advances.

Digitizing trust: Azure Blockchain Service simplifies blockchain development

In a rapidly globalizing digital world, business processes touch multiple organizations and great sums are spent managing workflows that cross trust boundaries. As digital transformation expands beyond the walls of one company and into processes shared with suppliers, partners, and customers, the importance of trust grows with it. Microsoft’s goal is to help companies thrive in this new era of secure multi-party computation by delivering open, scalable platforms, and services that any company from game publishers and grain processors, to payments ISVs and global shippers can use to digitally transform the processes they share with others.

Making AI real for every developer and every organization

AI is fueling the next wave of transformative innovations that will change the world. With Azure AI, our goal is to empower organizations to apply AI across the spectrum of their business to engage customers, empower employees, optimize operations, and transform products. Read this blog to learn about our three guiding investment principles.

Technical content

Azure Stack IaaS – part seven

This blog post covers the automation options in your Cloud IaaS toolkit. We’ve come a long way – in the virtualization days, before cloud and self-service, it took a while to get all the approvals, credentials, virtual LANs (VLANs), logical unit numbers (LUNs), etc. It took so long, that the actual creation part was easy. When cloud came along with self-service, not only was it easier to create a virtual machine (VM) without relying on others, but it changed our thinking about whether VMs were precious or disposable. We’ll discuss!

Migrating big data workloads to Azure HDInsight

Migrating big data workloads to the cloud remains a key priority for our customers, and Azure HDInsight is committed to making that journey simple and cost-effective. HDInsight partners with Unravel, whose mission is to reduce the complexity of delivering reliable application performance when migrating data from on-premises or a different cloud platform onto HDInsight. Unravel’s Application Performance Management (APM) platform brings a host of services towards providing unified visibility and operational intelligence to plan and optimize the migration process onto HDInsight.

Deploy a FHIR sandbox in Azure

In connection with HIMSS 2019, we announced the Azure API for FHIR, which provides our customers with an enterprise grade, managed FHIR® API in Azure. Since then, we've been busy improving the service with new configuration options and features. Some of the features we have been working on include authentication configuration and the SMART on FHIR Azure Active Directory Proxy, which enables the so-called SMART on FHIR EHR launch with the Azure API for FHIR. We've developed a sandbox environment that illustrates how the service and the configuration options are used. In this post, we focus on how to deploy the sandbox in Azure. Later blog posts will dive into some of the technical details of the various configuration options.

5 internal capabilities to help you increase IoT success

This article is the third in a four-part series designed to help companies maximize their ROI on IoT. In the first post, we discussed how IoT can transform businesses. In the second post, we shared insights on how to create a successful strategy that yields desired ROI. In this third post, we discuss how companies can move forward by identifying and filling capability gaps. Let’s dive into some ideas about how to solve some of the challenges that could slow your IoT progress.

Monitoring enhancements for VMware and physical workloads protected with Azure Site Recovery

Azure Site Recovery has enhanced the health monitoring of your workloads by introducing various health signals on the replication component, the Process Server. The Process Server (PS) in a hybrid disaster recovery (DR) scenario is a vital component of data replication. It handles replication caching, data compression, and data transfer. Once workloads are protected, issues can be triggered by multiple factors, including a high data change rate (churn) at the source, network connectivity, available bandwidth, under-provisioning the Process Server, or protecting a large number of workloads with a single Process Server. These issues can put the PS in an unhealthy state and have a cascading effect on VM replication. Troubleshooting them is now easier with the additional health signals from the Process Server.

Building recommender systems with Azure Machine Learning service

Recommendation systems are used in a variety of industries, from retail to news and media. If you’ve ever used a streaming service or ecommerce site that has surfaced recommendations for you based on what you’ve previously watched or purchased, you’ve interacted with a recommendation system. With the availability of large amounts of data, many businesses are turning to recommendation systems as a critical revenue driver. However, finding the right recommender algorithms can be very time consuming for data scientists. This is why Microsoft has provided a GitHub repository with Python best practice examples to facilitate the building and evaluation of recommendation systems using Azure Machine Learning services.
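As a flavor of the kind of algorithm involved, here is a toy user-based collaborative filter in plain Python. It is purely illustrative, with made-up ratings data, and is not taken from the Microsoft GitHub repository mentioned above, which covers production-grade algorithms and evaluation:

```python
from math import sqrt

# Made-up ratings: user -> {item: rating}. Illustrative only.
ratings = {
    "alice": {"matrix": 5.0, "inception": 4.0},
    "bob":   {"matrix": 4.0, "inception": 5.0, "up": 2.0},
    "carol": {"up": 5.0, "amelie": 4.0},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user):
    """Suggest unseen items from the most similar other user."""
    seen = ratings[user]
    best = max((u for u in ratings if u != user),
               key=lambda u: cosine(seen, ratings[u]))
    return sorted(i for i in ratings[best] if i not in seen)
```

Even this toy version hints at the tuning questions (similarity metric, neighborhood size, cold start) that make the curated best-practice examples in the repository so useful.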

Six Principles to Build Healthy Data-Driven Organizations

Organizations are increasingly forming teams around the function of Data Science. Data Science is a field that combines mathematics, programming, and visualization techniques and applies scientific methods to specific business domains or problems, like predicting future customer behavior, planning air traffic routes, or recognizing speech patterns. But what does it really mean to be a data-driven organization? InfoQ takes a look.

Azure shows

MSDN Channel 9

DevOps for ASP.NET Developers, Pt. 7

In part 7 of our series, Abel and Jeremy show us two ways to scaffold out an Azure DevOps pipeline. We see how to use Azure DevOps Projects via the Azure portal, which gives us a UI to configure everything, and how to use the Yo Team generator, which lets us work from the command line.

Detect Shake (Xamarin.Essentials API of the Week)

Xamarin.Essentials provides developers with cross-platform APIs for their mobile applications. On this week's Xamarin.Essentials API of the Week, we take a look at the Detect Shake API, which helps you detect when a user shakes a device.

YouTube

Mastering Azure using Cloud Shell, PowerShell and Bash!

Azure can be managed in many different ways. Learn your command line options like Azure PowerShell, Azure CLI, and Cloud Shell to be more efficient in managing your Azure infrastructure. Become a hero on the shell to manage the cloud!

The Azure Podcast

Episode 276 – Cloud simplified

Ryan Berry, an Azure Cloud Solutions Architect at Microsoft, talks about his YouTube channel, where complex topics are distilled into bite-sized chunks so that you can quickly apply these features to similar requirements you may have for moving something into Azure.

Azure DevOps Podcast

Rob Richardson on Containers in Azure

In this episode, Rob explains the critical steps when creating a container, what developers should consider when looking to run and support Containers through Azure, and much, much more.
Source: Azure

LaLiga entertains millions with Azure-based conversational AI

For LaLiga, keeping fans entertained and engaged is a top priority. And when it comes to fans, the Spanish football league has them in droves, with approximately 1.6 billion social media followers around the world. So any time it introduces a new feature, forum, or app for fans, instant global popularity is almost guaranteed. And while this is great news for LaLiga, it also poses technical challenges—nobody wants systems crashing or going unresponsive when millions of people are trying out a fun new app.

When LaLiga chose to develop a personal digital assistant running on Microsoft Azure, its developers took careful steps to ensure optimal performance in the face of huge user volume in multiple languages across a variety of voice platforms. Specifically, the league used Azure to build a conversational AI solution capable of accommodating the quirks of languages and nicknames to deliver a great experience across multiple channels and handle a global volume of millions of users.

Along the way, some valuable lessons emerged for tackling a deployment of this scope and scale.

Accommodating the quirks of languages and nicknames

The LaLiga virtual assistant has launched for Google Assistant and Skype, and it will eventually support 11 platforms. The assistant was created with Azure Bot Service and the Microsoft Bot Framework, and it incorporates Azure Cognitive Services and a variety of other Azure tools. The main engine for the assistant takes advantage of the scalability and flexibility of Azure App Service—a platform as a service (PaaS) offering—to streamline development. LaLiga used it multiple times to accelerate the development of the bot logic, image service, Google Assistant connector, Alexa connector, data loaders, cache management, and two Azure functions for live data and proactive messages.

Figure 1. An overview of the LaLiga virtual assistant architecture

Fans can ask the assistant questions using natural language, and the system parses this input to determine user intent by using Azure Text Analytics and Azure Cognitive Services Language Understanding in either Spanish or English. That may seem straightforward, but the developers learned that subtleties of language can complicate the process. For example, the primary word for “goalkeeper” is different in the Spanish dialects used in Spain, Argentina, and Colombia. So the mapping of questions to intents needed to accommodate a many-to-one relationship for these variations.
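The many-to-one mapping can be sketched in a few lines of Python. This is only an illustration of the idea: the real assistant trains Azure Language Understanding models, and the intent names and synonym sets below are hypothetical placeholders, not LaLiga's actual data:

```python
from typing import Optional

# Hypothetical synonym sets: several regional phrasings map to one intent.
INTENT_SYNONYMS = {
    "GetGoalkeeper": {"goalkeeper", "keeper", "portero", "arquero"},
    "GetMatchResult": {"result", "score", "resultado"},
}

# Invert into a flat lookup table: every variant points at a single intent.
VARIANT_TO_INTENT = {
    variant: intent
    for intent, variants in INTENT_SYNONYMS.items()
    for variant in variants
}

def resolve_intent(term: str) -> Optional[str]:
    """Map a user's wording onto a canonical intent, if we know it."""
    return VARIANT_TO_INTENT.get(term.lower().strip())
```

A lookup table like this is the degenerate case; a trained language model generalizes to phrasings that were never listed explicitly.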

A similar issue arose with players whose names have complicated spellings that don’t clearly correspond to the pronunciation – for example, "Griesman" instead of "Griezmann" – resulting in a variety of misspellings. The solution here was to use aliases to guide the system to the correct player. Nicknames were another sticking point. Developers used Azure Monitor Application Insights to investigate user queries that weren’t mapping to any existing player and found that a lot of people were asking about a player using his nickname rather than his official name. Once again, aliases came to the rescue.
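The alias approach boils down to a lookup from every known misspelling or nickname to a canonical player name. A minimal sketch, assuming a hand-built alias table (the misspelling is the one from the example above; in production the alias lists were grown from the Application Insights query logs):

```python
from typing import Optional

# Illustrative alias table: canonical name -> known variants.
PLAYER_ALIASES = {
    "Griezmann": {"griezmann", "griesman"},
    "Messi": {"messi", "leo messi", "la pulga"},
}

# Flatten so every alias resolves directly to one canonical name.
ALIAS_INDEX = {
    alias: canonical
    for canonical, aliases in PLAYER_ALIASES.items()
    for alias in aliases
}

def resolve_player(query: str) -> Optional[str]:
    """Return the canonical player name for a query string, or None."""
    return ALIAS_INDEX.get(query.lower().strip())
```

Queries that return None are exactly the ones worth mining from telemetry, since each unmatched query is a candidate for a new alias.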

Guaranteeing a great experience across multiple channels

One goal of the development team was to support a consistent, high-quality user experience across different mobile platforms and devices, each of which has its own display parameters and may also have different connection protocols. In response to every user query, the LaLiga virtual assistant returns three elements: an image, some text, and a voice reply. The image can be a picture of a player or a “hero card” showing match results or player statistics. For channels with a visual display, the image and text are customized with XAML to make them easily legible for the specific display resolution.

All channels aren’t created equal when it comes to user popularity, either. LaLiga expects that some channels will be used much more frequently than others, and this requires adjustments to properly manage scalability resources. Developers created an app service for each channel and optimized it for anticipated usage.

Developers also needed to customize the connectors that the assistant uses for different channels depending on the channels’ capabilities and requirements. For example, the Alexa interface is based on Microsoft .NET Framework, which made it straightforward to develop a connection with Microsoft tools, but Google Assistant uses Node.js, requiring more complex development. Developers found it tricky to map messages from the LaLiga virtual assistant to types that Google Assistant understands. Adding a custom connector hosted with App Service resolved the issue. App Service also helps manage the scalability requirements for the channel. Microsoft is using the lessons learned from the LaLiga virtual assistant to help all developers streamline the creation of connectors with Azure-based bots.

Figure 2. An overview of integration with Google Assistant and Alexa

Planning for millions of enthusiastic users

LaLiga anticipates that the assistant will be hugely popular and that most users will ask multiple questions, generating a vast number of hits on the system each day and leading to high consumption of computing resources. Developers adopted multiple strategies to mitigate this high demand.

Incoming queries get divided into two categories—live and non-live. A live query could be one about a match in progress, where the data could be constantly changing, whereas a non-live query might relate to a completed game or a player’s basic statistics. Whenever a non-live query arrives, the result is cached, so the answer is readily available if someone else asks the same question. The LaLiga virtual assistant uses a highly optimized Azure SQL database as its main data storage, rather than a non-structured data lake, to expedite results.
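The live/non-live split can be sketched as follows. The classification, the TTL, and the in-memory dict are all hypothetical simplifications; the real assistant backs its storage with an optimized Azure SQL database:

```python
import time

CACHE = {}                 # query -> (answer, stored_at); illustrative only
NON_LIVE_TTL = 24 * 3600   # completed matches and player stats change rarely

def answer_query(query, is_live, fetch):
    """Serve live queries fresh; cache and reuse non-live answers."""
    if is_live:
        return fetch(query)                      # data may change mid-match
    hit = CACHE.get(query)
    if hit is not None and time.time() - hit[1] < NON_LIVE_TTL:
        return hit[0]                            # someone already asked this
    answer = fetch(query)
    CACHE[query] = (answer, time.time())
    return answer
```

The payoff is that with millions of fans asking overlapping questions, most non-live traffic never touches the backend at all.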

Because scalability was a big concern, the team decided early to dedicate a developer to scalability testing. The developer created an automated system to simulate queries to the assistant, eventually testing millions of hits a day and setting the stage for a smooth launch. Bombarding the system with so many queries revealed another hitch—all those hits were genuine queries, but some web services might see that huge volume and think that the system is under a distributed denial of service (DDoS) attack. So it’s essential to ensure that all components are configured to account for the popularity of the assistant.

Learn from LaLiga and build your own great bot

While some of these use cases may seem straightforward, the LaLiga virtual assistant development experience showed that sometimes a small tweak to development processes or application configuration can yield substantial rewards in terms of system performance and development time. We hope that the lessons learned during the LaLiga project will help you build your own massively popular digital assistant!

Read the case study for more information about LaLiga’s digital transformation and its initiatives to boost fan engagement.

Get started building your own branded virtual assistant.

Start more simply and build your first Q&A bot with QnA Maker.
Source: Azure

Watch and learn: Identity & access management sessions at Next '19

We have been hard at work building enterprise-ready identity and access management (IAM) services for our customers and partners, and made multiple product announcements at Next ‘19. During the show, we also hosted 10 sessions on IAM, all of which you can now watch on-demand, from any device.

Managing access to your infrastructure

- Best Practices for Identity and Authorization With GCP
- Anthos and Hybrid Identity
- Best Practices for Using Microsoft Active Directory (AD) and Apps on Google Cloud
- Reduce AD Dependency With Cloud Identity and Secure LDAP

Managing the identity of your employees

- Unifying User, Device, and App Management With Cloud Identity
- How Airbnb Secured Access to Their Cloud With Context-Aware Access
- The Future of Security Keys: Using Your Phone in the Fight Against Phishing
- Unify Mobile and Desktop Management From the Cloud
- How Google Securely Enables Modern End-User Computing

Managing the identity of your customers and partners

- Scaling with Google’s New Identity Platform

Watching these sessions will give you a great foundation on Google’s approach to IAM, and how you can incorporate it into your environment. Then, stay tuned for more enhancements to our identity and access management services in the coming months.
Source: Google Cloud Platform

Last month today: April on GCP

There were lots of product announcements along with learning opportunities. Not surprisingly, our top stories from April were all from the big event. Read on to catch up!

Next ‘19 at a glance

Whether you attended Next ‘19 or not, you can catch up on all that happened from our list of all 122 announcements from the show. Read about the news in compute and infrastructure, as well as a ton of launches tied to identity and security on GCP. There are also new features to explore in data analytics and AI/ML, details on running Windows workloads on GCP, and improvements in productivity and collaboration with G Suite. Finally, scroll down the list to learn how customers are using GCP. Our blog now even has its own dedicated Next section where you can find all the posts from the event.

The future of the cloud is open

At Next ‘19, we introduced Anthos, our hybrid cloud platform that will let you write once, run anywhere. It’s designed so you can write an app and run it without modifying the code across platforms: GCP, Google Kubernetes Engine (GKE), GKE On-Prem, and, soon, third-party clouds. Anthos is completely software-based, using open APIs so users can easily build and manage hybrid clouds. In addition, Anthos Migrate (available in beta) can auto-migrate VMs into GKE containers.

Also at Next, we announced new partnerships with seven open source-centric database and analytics providers. This means that you can use their managed services through GCP, with the added benefits of unified management, billing, and support. A ton of new applications being developed today run these partners’ open-source database systems, ranging from general-purpose databases to specific ones for time-series, graph, and search use cases.

No servers, no problem

Cloud Run, our new serverless compute offering, also entered beta last month. Cloud Run lets you run stateless HTTP-driven containers, while handling all infrastructure management, including server provisioning, configuration, scaling, and management behind the scenes. With Cloud Run, you can scale your containers up or down quickly (even to zero), giving you the flexibility of containers and the velocity of serverless. Cloud Run is based on Knative, an open-source API and runtime environment.

Extending the tools developers use

As cloud evolves, so does application development. Cloud Code is a set of plug-ins for IntelliJ and Visual Studio (VS) Code that bring automation and assistance to every phase of the software development lifecycle, using the tools developers already use. Integrated development environments (IDEs) can automate a lot of a developer’s work, but can be challenging for cloud development. Cloud Code uses command-line container tools such as Skaffold, Jib, and kubectl under the hood, so you see continuous feedback as you’re building your project in a Kubernetes environment.

That’s a wrap for April. We’ll see you next month.
Source: Google Cloud Platform

AWS Fargate PV1.3 now supports the Splunk log driver

You can now use the Splunk log driver to ship container logs from AWS Fargate tasks to a Splunk endpoint. By specifying the endpoint and other log configuration while authoring task definitions, you can ingest logs generated by containers running in Fargate to Splunk. Learn more in the API Configuration documentation.
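The log configuration lives in the container definition of the task. A minimal sketch of what it might look like follows; the endpoint, token, and image values are placeholders, and the API Configuration documentation lists the full set of supported `splunk-*` options:

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "logConfiguration": {
        "logDriver": "splunk",
        "options": {
          "splunk-url": "https://splunk.example.com:8088",
          "splunk-token": "00000000-0000-0000-0000-000000000000",
          "splunk-format": "json"
        }
      }
    }
  ]
}
```

In practice you would store the token as a secret rather than inline it in the task definition.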
Source: aws.amazon.com