Update IoT devices connected to Azure with Mender update manager

With many IoT solutions connecting thousands of hardware endpoints, fixing security issues or upgrading functionality becomes a challenging and expensive task. The ability to update devices is critical for any IoT solution, since it ensures that your organization can respond rapidly to security vulnerabilities by deploying fixes. Azure IoT Hub provides many capabilities that enable developers to build device management processes into their solutions, such as device twins for synchronizing device configuration and automatic device management for deploying configuration changes across large device fleets. We have previously blogged about how these features have been used to implement IoT device firmware updates.

Some customers have told us they need a turn-key IoT device update manager, so we are pleased to share a collaboration with Mender to showcase how IoT devices connected to Azure can be remotely updated and monitored using Mender's open source update manager. Mender provides robust over-the-air (OTA) update management via full image updates and dual A/B partitioning with rollback, managed and monitored through a web-based management UI. Customers can use Mender to update Linux images built with Yocto. By integrating with Azure IoT Hub Device Provisioning Service, IoT device identity credentials can be shared between Mender and IoT Hub using a custom allocation policy and an Azure Function. As a result, operators can monitor IoT device states and analytics through their solution built with Azure IoT Hub, and then assign and deploy updates to those devices in Mender, because the two systems share device identities.
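To sketch how the identity sharing can work: a DPS custom allocation policy calls a webhook (here, an Azure Function) that chooses the target IoT hub and can seed the initial device twin. The snippet below is a minimal sketch of that webhook's core logic; the payload field names follow the DPS custom allocation contract, but the routing rule and the twin tag are illustrative assumptions, not the actual Mender integration.

```python
import zlib

def allocate(request_body: dict) -> dict:
    """Choose an IoT hub for a device registering through DPS.

    Sketch only: the request/response shapes follow the DPS custom
    allocation policy contract (deviceRuntimeContext, linkedHubs,
    iotHubHostName, initialTwin); the routing rule and twin tag are
    illustrative.
    """
    registration_id = request_body["deviceRuntimeContext"]["registrationId"]
    linked_hubs = request_body["linkedHubs"]

    # Spread devices deterministically across the hubs linked to DPS.
    chosen_hub = linked_hubs[zlib.crc32(registration_id.encode()) % len(linked_hubs)]

    return {
        "iotHubHostName": chosen_hub,
        # Seed the twin with a tag that both Mender and IoT Hub tooling can query.
        "initialTwin": {
            "tags": {"updateManager": "mender"},
            "properties": {"desired": {}},
        },
    }

body = {
    "deviceRuntimeContext": {"registrationId": "mender-device-001"},
    "linkedHubs": ["hub-eu.azure-devices.net", "hub-us.azure-devices.net"],
}
print(allocate(body)["iotHubHostName"])  # one of the linked hubs
```

In the real integration, the same device identity credentials are registered with both systems, so the hub chosen here is the one whose device records Mender's deployment UI correlates against.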

Recently, Mender’s CTO Eystein Stenberg came on the IoT Show to show how it works:

Keeping devices updated and secure is important for any IoT solution, and Mender now provides a great new option for Azure customers to implement OTA updates.

Additional resources

•    See Mender’s blog post on how to integrate IoT Hub Device Provisioning Service with Mender
•    Learn more about automatic device management in IoT Hub
Source: Azure

Compute and stream IoT insights with data-driven applications

There is a lot more data in the world than can possibly be captured with even the most robust, cutting-edge technology. Edge computing and the Internet of Things (IoT) are just two examples of technologies increasing the volume of useful data. There is so much data being created that the current telecom infrastructure will struggle to transport it and even the cloud may become strained to store it. Despite the advent of 5G in telecom, and the rapid growth of cloud storage, data growth will continue to outpace the capacities of both infrastructures. One solution is to build stateful, data-driven applications with technology from SWIM.AI.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Shared awareness and communications

The increase in volume has other consequences, especially when IoT devices must be aware of each other and communicate shared information. Peer-to-peer (P2P) communications between IoT assets can overwhelm a network and impair performance. Smart grids are an example of how sensors or electric meters are networked across a distribution grid to improve the overall reliability and cost of delivering electricity. Using meters to determine the locality of issues can help improve service to a residence, neighborhood, municipality, sector, or region. The notion of shared awareness extends to vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. As networked AI spreads to more cars and devices, so do the benefits of knowing the performance or status of other assets. Other use cases include:

Traffic lights that react to the flow of vehicles across a neighborhood.
Process manufacturing equipment that can determine the impact from previous process steps.
Upstream oil/gas equipment performance that reacts to downstream oil/gas sensor validation.

Problem: Excess data means data loss

When dealing with large volumes of data, enterprises often struggle to determine which data to retain, how much to retain, and for how long they must retain it. By default, they may not retain any of it. Or, they may sub-sample data and retain an incomplete data set. That lost data may potentially contain high value insights. For example, consider traffic information that could be used for efficient vehicle routing, commuter safety, insurance analysis, and government infrastructure reviews. The city of Las Vegas maintains over 1,100 traffic light intersections that can generate more than 45TB of data every day. As stated before, IoT data will challenge our ability to transport and store data at these volumes.

Data may also become excessive when it’s aggregated. For example, telecom and network equipment typically create snapshots of data and send them every 15 minutes. By normalizing this data into a summary over time, you lose granularity: the nature or pattern of the data over time, along with any unique, telling events, would be missed. The same applies to any equipment capturing fixed-time window summary data. The loss of data is detrimental to networks where devices share data, either for awareness or communication. The problem is compounded when only snapshots are captured and aggregated for an entire network of thousands or millions of devices.
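The granularity loss is easy to see with a toy example: a 15-minute window of one-per-second readings summarized as a single average hides a short spike that per-event processing would catch (the numbers are illustrative):

```python
# One 15-minute window of one-per-second sensor readings, with a brief spike.
readings = [10.0] * 450 + [95.0] * 3 + [10.0] * 447

window_avg = sum(readings) / len(readings)  # the 15-minute summary
peak = max(readings)                        # visible only at full granularity

print(round(window_avg, 2))  # 10.28 -- the spike all but vanishes
print(peak)                  # 95.0
```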

Real-time is the goal

Near real-time is the current standard for stateless application architectures, but “near” real-time is no longer fast enough. Real-time processing, or processing within milliseconds, is the new standard for V2V and V2I communications and requires a much more performant architecture. Swim achieves this by leveraging stateful APIs. With stateful connections, it’s possible to have a rapid response between peers in a network. Speed has enormous effects on efficiency and reliability, and it’s essential for systems where safety is paramount, such as crash prevention. Autonomous systems will rely on real-time performance for safety purposes.

An intelligent edge data strategy

SWIM.AI delivers a solution for building scalable streaming applications. According to their site Meet Swim:

“Instead of configuring a separate message broker, app server and database, Swim provides for its own persistence, messaging, scheduling, clustering, replication, introspection, and security. Because everything is integrated, Swim seamlessly scales across edge, cloud, and client, for a fraction of the infrastructure and development cost of traditional cloud application architectures.”

The figure below shows an abstract view of how Swim can simplify IoT architectures:

Harvest data in mid-stream

SWIM.AI uses the lightweight Swim platform, with only a 2 MB footprint, to compute and stream IoT insights, building what they call “data-driven applications.” These applications sit in the data stream and generate a unique, intelligent web agent for each data source they see. These intelligent web agents process the raw data as it streams, publishing only state changes from the data stream. This streamed data can be used by other web agents or stored in a data lake, such as Azure Data Lake Storage.
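The publish-only-state-changes idea can be sketched in a few lines. This is not Swim's API, just the filtering behavior the post describes: raw samples go in, and only state transitions come out.

```python
def state_changes(stream, tolerance=0.0):
    """Yield a reading only when it moves beyond `tolerance` from the last
    published state: raw samples in, state transitions out."""
    published = None
    first = True
    for value in stream:
        if first or abs(value - published) > tolerance:
            published = value
            first = False
            yield value

# Seven raw samples collapse to three published state transitions.
raw = [20.0, 20.0, 20.1, 25.0, 25.0, 25.0, 20.0]
print(list(state_changes(raw, tolerance=0.5)))  # [20.0, 25.0, 20.0]
```

Downstream consumers, whether other agents or a data lake, then receive only the meaningful transitions rather than the full raw stream.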

Swim uses the “needle in a haystack” metaphor to explain this unique advantage. Swim allows you to apply a metal detector while harvesting the grain to find the needle, without having to bale, transport, or store the harvest before searching for the needle. The advantage is in continuously processing data, where intelligent web agents can learn over time or be influenced by domain experts who set thresholds.

Because of the stateful architecture of Swim, only the minimum data necessary is transmitted over the network. Furthermore, application services need not wait for the cloud to establish application context. This results in extremely low latencies, as the stateful connections don’t incur the latency cost of reading and writing to a database or updating based on poll requests.

On SWIM.AI’s website, a Smart City application shows the real-time status of lights and traffic across a hundred intersections with thousands of sensors. The client using the app could be a connected or an autonomous car approaching the intersection. It could be a handheld device next to the intersection, or a browser a thousand miles away in the contiguous US. The latency to real-time is 75-150ms, less than the blink of an eye across the internet.

Benefits

The immediate benefit is saving costs for transporting and storing data.
Through Swim’s technology, you retain granularity. For example, take the tens of terabytes of data generated per day by 1,000 traffic light intersections. Swim can winnow that down to hundreds of gigabytes per day, while the harvested dataset still fully describes the original raw dataset.
Create efficient networked apps for various data sources. For example, achieve peer-to-peer awareness and communications between assets such as vehicles, devices, sensors, and other data sources across the internet.
Achieve ultra-low latencies in the 75-150 millisecond range. This is the key to creating apps that depend on data for awareness and communications.

Azure services used in the solution

The demonstration of DataFabric from SWIM.AI relies on core Azure services for security, provisioning, management, and storage. DataFabric also uses the Common Data Model to simplify sharing information with other systems, such as Power BI or PowerApps, in Azure. Azure technology enables the customer’s analytics to be integrated with events and native ML and cognitive services.

DataFabric is based on the Microsoft IoT reference architecture and uses the following core components:

IoT Hub: Provides a central point in the cloud to manage devices and their data.
IoT Edge field gateway: An on-premises solution for delivering cloud intelligence.
Azure Event Hubs: Ingests millions of events per second.
Azure Blob storage: Efficient storage with options for hot, cool, and archive tiers.
Azure Data Lake Storage: A highly scalable and cost-effective data lake solution for big data analytics.
Azure Stream Analytics: Transforms data into actionable insights and predictions in near real-time.

Next steps

To learn more about other industry solutions, go to the Azure for Manufacturing page.

To find out more about this solution, go to DataFabric for Azure IoT and select Get it now.
Source: Azure

Azure.Source – Volume 86

News and updates

Microsoft hosts HL7 FHIR DevDays

One of the largest gatherings of healthcare IT developers will come together on the Microsoft campus June 10-12 for HL7 FHIR DevDays, with the goal of advancing the open standard for interoperable health data, called HL7® FHIR® (Fast Healthcare Interoperability Resources, pronounced “fire”). Microsoft is thrilled to host this important conference, and engage with the developer community on everything from identifying immediate use cases to finding ways for all of us to hack together in ways that help advance the FHIR specification.

Announcing self-serve experience for Azure Event Hubs Clusters

For businesses today, data is indispensable. Innovative ideas in manufacturing, health care, transportation, and financial industries are often the result of capturing and correlating data from multiple sources. Now more than ever, the ability to reliably ingest and respond to large volumes of data in real time is the key to gaining competitive advantage for consumer and commercial businesses alike. To meet these big data challenges, Azure Event Hubs offers a fully managed and massively scalable distributed streaming platform designed for a plethora of use cases from telemetry processing to fraud detection.

A look at Azure's automated machine learning capabilities

The automated machine learning capability in Azure Machine Learning service allows data scientists, analysts, and developers to build machine learning models with high scalability, efficiency, and productivity, all while sustaining model quality. With the general availability of automated machine learning in Azure Machine Learning service last December, we started the journey to simplify artificial intelligence (AI). We are furthering our investment in accelerating productivity with a new release that includes exciting capabilities and features in the areas of model quality, improved model transparency, the latest integrations, ONNX support, a code-free user interface, time series forecasting, and product integrations.

Technical content

Securing the hybrid cloud with Azure Security Center and Azure Sentinel

Infrastructure security is top of mind for organizations managing workloads on-premises, in the cloud, or hybrid. Keeping on top of an ever-changing security landscape presents a major challenge. Fortunately, the power and scale of the public cloud has unlocked powerful new capabilities for helping security operations stay ahead of the changing threat landscape. Microsoft has developed a number of popular cloud based security technologies that continue to evolve as we gather input from customers. This post breaks down a few key Azure security capabilities and explains how they work together to provide layers of protection.

Customize your automatic update settings for Azure Virtual Machine disaster recovery

In today’s cloud-driven world, employees are only allowed access to the data that is absolutely necessary for them to perform their jobs effectively. The ability to control access while still carrying out the duties of an infrastructure administrator is therefore becoming more relevant and is frequently requested by customers. When we released the automatic update of agents used in disaster recovery (DR) of Azure Virtual Machines (VMs), the most frequent feedback we received was related to access control: customers asked to be able to provide an existing automation account, approved and created by a person entrusted with the right access in the subscription. You asked, and we listened!

Azure Stack IaaS – part nine

Before we built Azure Stack, our program manager team called a lot of customers who were struggling to create a private cloud out of their virtualization infrastructure. We were surprised to learn that the few that managed to overcome the technical and political challenges of getting one set up had trouble getting their business units and developers to use it. It turns out they created what we now call a snowflake cloud, a cloud unique to just their organization. This is one of the main problems we were looking to solve with Azure Stack: a local cloud that not only has automated deployment and operations, but is also consistent with Azure, so that developers and business units can tap into the ecosystem. In this blog, we cover the different ways you can tap into the Azure ecosystem to get the most value out of IaaS.

What is the difference between Azure Application Gateway, Load Balancer, Front Door and Firewall?

Last week at a conference in Toronto, an attendee came to the Microsoft booth and asked something that has been asked many times in the past. So, this blog post covers all of it here for everyone’s benefit. What are the differences between Azure Firewall, Azure Application Gateway, Azure Load Balancer, Network Security Groups, Azure Traffic Manager, and Azure Front Door? This blog offers a high-level consolidation of what they each do.

Azure shows

Five tools for building APIs with GraphQL | Five Things

Burke and Chris are back, and this week they're bringing you five tools for building APIs with GraphQL. True story: they shot this at the end of about a twelve-hour day, and you can see the pain in Burke's eyes. It's not GraphQL he doesn't like, it's filming for six straight hours. Also, Chris picks whistles over bells (because of course he does) and Burke fights to stay awake for four minutes.

Microservices and more in .NET Core 3.0 | On .NET

Enabling developers to build resilient microservices is an important goal for .NET Core 3.0. In this episode, Shayne Boyer is joined by Glenn Condron and Ryan Nowak from the ASP.NET team who discuss some of the exciting work that's happening in the microservice space for .NET Core 3.0.

Interknowlogy mixes Azure IoT and mixed reality | The Internet of Things Show

When mixed reality meets the Internet of Things through Azure Digital Twins, a new way of accessing data materializes. See how Interknowlogy mixes Azure IoT and mixed reality to deliver not only stunning experiences but also increased efficiency and productivity to the workforce.

Bring DevOps to your open-source projects: Top three tips for maintainers | The Open Source Show

Baruch Sadogursky, Head of Developer Relations at JFrog, and Aaron Schlesinger, Cloud Advocate at Microsoft and Project Athens Maintainer, talk about the art of DevOps for open source: balancing contributor needs with the core DevOps principles of people, process, and tools. You'll learn how to future-proof your projects, avoid the dreaded bus factor, and get Aaron and Baruch's advice on evaluating and selecting tools, soliciting contributor input and voting, documenting processes, and much more.

Episode 282 – Azure Front Door Service | The Azure Podcast

Cynthia talks with Sharad Agrawal on what Azure Front Door Service is, how to choose between Azure Front Door Service, CDN, Azure Traffic Manager and App Gateway, and how to get started.


Atley Hunter on the Business of App Development | Azure DevOps Podcast

In this episode, Jeffrey and Atley are discussing the business of app development. Atley describes some of the first apps he’s ever developed, some of the most successful and popular apps he’s ever created, how he’s gone about creating these apps, and gives his tips for other developers in the space.

Industries and partners

Empowering clinicians with mobile health data: Right information, right place, right time

Improving patient outcomes and reducing healthcare costs depends on the ability of healthcare providers, such as doctors, nurses, and specialized clinicians, to access a wide range of data at the point of patient care, in the form of health records, lab results, and protocols. Tactuum, a Microsoft partner, provides the Quris solution, which empowers clinicians with access to the right information, at the right place, at the right time, enabling them to do their jobs efficiently and with less room for error.

Building a better asset and risk management platform with elastic Azure services

Elasticity means services can expand and contract on demand. This means Azure customers who are on a pay-as-you-go plan will reap the most benefit out of Azure services. Their service is always available, but the cost is kept to a minimum. Together with elasticity, Azure lets modern enterprises migrate and evolve more easily. For financial service providers, the modular approach lets customers benefit from best-of-breed analytics in three key areas. Read the post to learn what they are.

Symantec’s zero-downtime migration to Azure Cosmos DB

How do you migrate live, mission-critical data for a flagship product that must manage billions of requests with low latency and no downtime? The consumer business unit at Symantec faced this exact challenge when deciding to shift from their costly and complex self-managed database infrastructure, to a geographically dispersed and low latency managed database solution on Azure. The Symantec team shared their business requirements and decision to adopt Azure Cosmos DB in a recent case study.
Source: Azure

Microsoft hosts HL7 FHIR DevDays

This blog post was co-authored by Greg Moore, Corporate Vice President, Microsoft Healthcare and Peter Lee, Corporate Vice President, Microsoft Healthcare.

One of the largest gatherings of healthcare IT developers will come together on the Microsoft campus next week for HL7 FHIR DevDays, with the goal of advancing the open standard for interoperable health data, called HL7® FHIR® (Fast Healthcare Interoperability Resources, pronounced “fire”). Microsoft is thrilled to host this important conference on June 10-12, 2019 on our Redmond campus, and engage with the developer community on everything from identifying immediate use cases to finding ways for all of us to hack together in ways that help advance the FHIR specification.

We believe that FHIR will be an incredibly important piece of the healthcare future. Its modern design enables a new generation of AI-powered applications and services, and it provides an extensible, standardized format that makes it possible for all health IT systems to not only share data so that it can get to the right people where and when they need it, but also turn that data into knowledge. While real work has been underway for many years on HL7 FHIR, today it has become one of the most critical technologies in health data management, leading to major shifts in both the technology and policy of healthcare. 

Given the accelerating shift of healthcare to the cloud, FHIR in the cloud presents a potentially historic opportunity to advance health data interoperability. For this reason, last summer in Washington, DC, we stood with leaders from AWS, Google, IBM, Oracle, and Salesforce to make a joint pledge to adopt technologies that promote the interoperability of health data. But we all know that FHIR is not magic. To make the liberation of health data a reality, developers and other stakeholders will need to work together, and so this is why community events like HL7 FHIR DevDays are so important. They allow us to try out new ideas in code and discuss a variety of areas, from the basics of FHIR, to its use with medical devices, imaging, research, security, privacy, and patient empowerment.

The summer of 2019 may indeed be the coming of age for FHIR, with the new version of the standard called “FHIR release 4” (R4) reaching broader adoption, new product updates from Microsoft, and new interop policies from the US government that will encourage the industry to adopt FHIR more broadly.

New FHIR standard progressing quickly

Healthcare developers can start building with greater confidence that FHIR R4 will help connect people, data, and systems. R4 is the first version to be “normative,” which means that it’s an official part of the future specification so that all future versions will be backward compatible.

Microsoft adding more FHIR functionality to Azure

Microsoft is doing its part to realize the benefits of health data interoperability with FHIR, and today we’re announcing that our open source FHIR Server for Azure supports FHIR R4, available today.

We have added a new data persistence provider implementation to the open source FHIR Server for Azure. The new SQL persistence provider enables developers to configure their FHIR server instance to use either an Azure Cosmos DB backed persistence layer, or a persistence layer using a SQL database, such as Azure SQL Database. This will make it easier for customers to manage their healthcare applications by adding more capabilities for their preferred SQL provider. It will extend the capability of a FHIR server in Azure to support key business workloads with new features such as chained queries and transactions.
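Chained queries are part of the standard FHIR search grammar: a single request can follow a reference from one resource to a field on another. For example (generic FHIR resource and parameter names, not specific to any one server deployment):

```http
GET [base]/Observation?subject:Patient.name=smith
```

This returns Observations whose subject is a Patient named "smith", without the client first resolving the Patient IDs. Transactions, similarly, are standard FHIR: a Bundle of type "transaction" POSTed to the server base is applied atomically.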

Growing ecosystem of customers and partners

Our Azure API for FHIR already has a broad partner ecosystem in place and customers using the preview service to centralize disparate data.

Northwell Health, the largest employer in New York state with 23 hospitals and 700 practices, is using the Azure API for FHIR to build interoperability into its data flow solution to reduce excess days for patients. This ensures the patient stays only for the period required for clinical care, and that no non-clinical reasons delay discharging the patient.

Our open source implementation of FHIR Server for Azure is already creating a tighter feedback loop between our products and the developers and partners who have quickly innovated on top of this open source project.

Darena Solutions used the open source FHIR Server for Azure to develop BlueButtonPRO, its Blue Button application for data from the Centers for Medicare &amp; Medicaid Services (CMS). The application allows patients to import their data from CMS (through Blue Button). More importantly, it gives patients a simple and secure way to download, view, manage, and share healthcare data from any FHIR portals they have access to.

US Health IT Policy proposal to adopt FHIR

The DevDays conference also comes on the heels of the US government’s proposed ruling to improve interoperability of health data embodied in the 21st Century Cures Act, which includes the use of FHIR.

Microsoft supports the focus in these proposed rules on reducing barriers to interoperability because we are confident that the result will be good for patients. Interoperability and the seamless flow of health data will enable a more informed and empowered consumer. We expect the health industry will respond with greater efficiency, better care, and cost savings.

We're at a pivotal moment for health interoperability, where all the bottom-up development in the FHIR community is meeting top-down policy decisions at the federal level.

Health data interoperability at Microsoft

Integrating health data into our platforms is a huge commitment for Microsoft, and Azure with FHIR is just the start. Now that FHIR is baked into the core of Azure, the Microsoft cloud will natively speak FHIR as the language for health data as we plan for all our services to inherit that ability.

Healthcare today and into the future will demand a broad perspective and creative, collaborative problem-solving. Looking ahead, Microsoft intends to continue an open, collaborative dialogue with the industry and community, from FHIR DevDays to the hallways of our customers and partners.

FHIR is a part of our healthcare future, and FHIR DevDays is a great place to start designing for that future.
Source: Azure

How to optimize your Azure environment

Without the right tools and approach, cloud optimization can be a time-consuming and difficult process. There is an ever-growing list of best practices to follow, and it’s constantly in flux as your cloud workloads evolve. Add the challenges and emergencies you face on a day-to-day basis, and it’s easy to understand why it’s hard to be proactive about ensuring your cloud resources are running optimally.

Azure offers many ways to help ensure that you’re running your workloads optimally and getting the most out of your investment.

Three kinds of optimization: organizational, architectural, and tactical

One way to think about these is the altitude of advice and optimization offered: organizational, architectural, or tactical.

At the tactical or resource level, you have Azure Advisor, a free Azure service that helps you optimize your Azure resources for high availability, security, performance, and cost. Advisor scans your resource usage and configuration and provides over 100 personalized recommendations. Each recommendation includes inline actions to make remediating your cloud resource optimizations fast and easy.
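Advisor recommendations can also be pulled programmatically. For example, with the Azure CLI (shown here filtered to cost recommendations; other categories include HighAvailability, Security, and Performance):

```shell
# Requires an authenticated Azure CLI session (az login)
az advisor recommendation list --category Cost --output table
```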

At the other end of the spectrum is Azure Architecture Center, a collection of free guides created by Azure experts to help you understand organizational and architectural best practices and optimize your workloads. This guidance is especially useful when you’re designing a new workload for the cloud or migrating an existing workload from on-premises to the cloud.

The guides in the Azure Architecture Center range from the Microsoft Cloud Adoption Framework for Azure, which can help guide your organization’s approach to cloud adoption and strategy, to Azure Reference Architectures, which provides recommended architectures and practices for common scenarios like AI, IoT, microservices, serverless, SAP, web apps, and more.

Start small, gain momentum

There are many ways to get started optimizing your Azure environment. You can align as an organization on your cloud adoption strategy, you can review your workload architecture against the reference architectures we provide, or you can open up Advisor and see which of your resources have best practice recommendations. Those are just a few examples; ultimately, it's a choice only you and your organization can make.

If your organization is like most, it helps to start small and gain momentum. We’ve seen many customers have success kicking off their optimization journey at the tactical or resource level, then the workload level, and ultimately working their way up to the organizational level, where you can consolidate what you’ve learned and implement policy.

Get started with Azure Advisor

When you visit Advisor, you’ll likely find many recommended actions you can take to optimize your environment. Our advice? Don’t get overwhelmed. Just get started. Scan the recommendations for opportunities that are the most meaningful to you and your organization. For some, that might be high availability considerations like VM backup, a common oversight in VM creation, especially when making the transition from dev/test to production. For others, it might be finding cost savings by looking at VMs that are being underutilized.

Once you’ve found a suitable recommendation, go ahead and remediate it as shown in this video. Optimization is an ongoing process and never really finished, but every step you take is a step in the right direction.

Visit Advisor in the Azure portal to get started reviewing and remediating your recommendations. For more in-depth guidance, visit the Azure Advisor documentation. Let us know if you have a suggestion for Advisor by submitting an idea in our feedback tool.
Source: Azure

Build more accurate forecasts with new capabilities in automated machine learning

We are excited to announce new capabilities that are a part of time-series forecasting in Azure Machine Learning service. We launched the preview of forecasting in December 2018, and we have been excited by the strong customer interest. We listened to our customers and appreciate all the feedback. Your responses helped us reach this milestone. Thank you.

Building forecasts is an integral part of any business, whether for revenue, inventory, sales, or customer demand. Building machine learning models is time-consuming and complex, with many factors to consider, such as iterating through algorithms, tuning hyperparameters, and engineering features. These choices multiply with time series data, which adds considerations of trend, seasonality, holidays, and effectively splitting training data.

Forecasting within automated machine learning (ML) now includes new capabilities that improve the accuracy and performance of our recommended models:

New forecast function
Rolling-origin cross validation
Configurable lags
Rolling window aggregate features
Holiday detection and featurization
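Three of these capabilities are easier to picture with a small, dependency-free sketch. This is not the service's implementation, just the shape of what it builds: lag features, rolling-window aggregates of the target, and rolling-origin cross validation splits where validation data always lies strictly in the future of the training data.

```python
def lag_features(series, lags):
    """Row t gets the target values at t-1, t-2, ... per the configured
    lags (None where history is missing)."""
    return [[series[t - l] if t - l >= 0 else None for l in lags]
            for t in range(len(series))]

def rolling_mean(series, window):
    """Trailing-window aggregate of the target, excluding the current point."""
    return [sum(series[max(0, t - window):t]) / min(t, window) if t > 0 else None
            for t in range(len(series))]

def rolling_origin_splits(n, horizon, min_train):
    """Each fold trains on everything up to the origin and validates on the
    next `horizon` points; then the origin rolls forward."""
    splits, origin = [], min_train
    while origin + horizon <= n:
        splits.append((list(range(origin)), list(range(origin, origin + horizon))))
        origin += horizon
    return splits

y = [3, 4, 5, 6, 7, 8, 9, 10]
print(lag_features(y, lags=[1, 2])[3])   # [5, 4]: values at t-1 and t-2
print(rolling_mean(y, window=3)[4])      # 5.0: mean of y[1:4]
print(rolling_origin_splits(len(y), horizon=2, min_train=4))
```

In automated ML these transformations are applied for you; the sketch only shows why they help: lags and rolling windows give the model explicit memory of recent history, and rolling-origin validation scores it the way it will actually be used, forecasting forward in time.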

Expanded forecast function

We are introducing a new way to retrieve prediction values for the forecast task type. When dealing with time series data, several distinct scenarios arise at prediction time that require more careful consideration. For example, are you able to re-train the model for each forecast? Do you have the forecast drivers for the future? How can you forecast when you have a gap in historical data? The new forecast function can handle all these scenarios.

Let’s take a closer look at common configurations of train and prediction data scenarios, when using the new forecasting function. For automated ML the forecast origin is defined as the point when the prediction of forecast values should begin. The forecast horizon is how far out the prediction should go into the future.

In many cases training and prediction do not have any gaps in time. This is the ideal scenario because the model is trained on the freshest available data. We recommend you set up your forecast this way if your prediction interval allows time to retrain, for example in relatively stable data situations such as financial rate forecasts or supply chain applications using historical revenue or known order volumes.

When forecasting you may know future values ahead of time. These values act as contextual information that can greatly improve the accuracy of the forecast. For example, the price of a grocery item is known weeks in advance, which strongly influences the “sales” target variable. Another example is when you are running what-if analyses, experimenting with future values of drivers like foreign exchange rates. In these scenarios the forecast interface lets you specify forecast drivers describing time periods for which you want the forecasts (Xfuture). 

If train and prediction data have a gap in time, the trained model becomes stale. For example, in high-frequency applications like IoT it is impractical to retrain the model constantly, due to high velocity of change from sensors with dependencies on other devices or external factors e.g. weather. You can provide prediction context with recent values of the target (ypast) and the drivers (Xpast) to improve the forecast. The forecast function will gracefully handle the gap, imputing values from training and prediction context where necessary.

In other scenarios, such as sales, revenue, or customer retention, you may not have contextual information available for future time periods. In these cases, the forecast function supports making zero-assumption forecasts out to a “destination” time. The forecast destination is the end point of the forecast horizon. The model maximum horizon is the number of periods the model was trained to forecast and may limit the forecast horizon length.
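To make the recursive idea concrete, here is a minimal, self-contained sketch of a zero-assumption forecast out to a destination. This is not the Azure ML API: the function name is hypothetical, and a trivial moving-average model stands in for the trained forecaster. The key point it illustrates is that, with no future context, each step’s prediction is fed back in as context for the next step.

```python
from statistics import mean

def forecast_to_destination(y_train, horizon):
    """Iterated one-step forecast with no future context.

    Each step predicts the mean of the last three observed or
    predicted values (a naive stand-in for the trained model's
    one-step-ahead prediction), then feeds that prediction back
    into the history so the next step can use it.
    """
    history = list(y_train)
    forecasts = []
    for _ in range(horizon):
        y_hat = mean(history[-3:])
        forecasts.append(y_hat)
        history.append(y_hat)  # prediction becomes context for the next step
    return forecasts

# Forecast two periods past the end of a short training series.
print(forecast_to_destination([10, 12, 14], 2))
```

In automated ML the maximum horizon set at training time caps how far such a forecast can extend, as described above.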

The forecast model enriches the input data (e.g. adds holiday features) and imputes missing values. The enriched and imputed data are returned with the forecast.

Notebook examples for sales forecast, bike demand and energy forecast can be found on GitHub.

Rolling-origin cross validation

Cross-validation (CV) is a vital procedure for estimating and reducing the out-of-sample error of a model. For time series data, we need to ensure that training only uses values from before the test data. Partitioning the data without regard to time does not match how data becomes available in production and can lead to incorrect estimates of the forecaster’s generalization error.

To ensure correct evaluation, we added rolling-origin cross validation (ROCV) as the standard method to evaluate machine learning models on time series data. It divides the series into training and validation data using an origin time point. Sliding the origin in time generates the cross-validation folds.

As an example of what can go wrong without ROCV, consider a hypothetical time series containing 40 observations. Suppose the task is to train a model that forecasts the series up to four time points into the future. A standard 10-fold cross validation (CV) strategy is shown in the image below. The y-axis in the image delineates the CV folds, while the colors distinguish training points (blue) from validation points (orange). In the 10-fold example below, notice how folds one through nine train the model on dates later than those in the validation set, resulting in inaccurate training and validation results.

This scenario should be avoided for time series. When we instead use an ROCV strategy, as shown below, we preserve the integrity of the time series data and eliminate the risk of data leakage.

ROCV is used automatically for forecasting. You simply pass the training and validation data together and set the number of cross validation folds. Automated machine learning (ML) will use the time column and grain columns you have defined in your experiment to split the data in a way that respects time horizons. Automated ML will also retrain the selected model on the combined train and validation set to make use of the most recent and thus most informative data, which under the rolling-origin splitting method ends up in the validation set.
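The splitting idea can be sketched in a few lines of plain Python. This is an illustration of the concept, not automated ML’s internal implementation, and the function name and signature are hypothetical: the forecast origin slides forward by one horizon per fold, so training data always precedes validation data.

```python
def rolling_origin_splits(n_obs, n_folds, horizon):
    """Generate (train_idx, valid_idx) pairs for rolling-origin CV.

    The origin starts far enough in to leave `horizon` points for
    each of the later folds, then slides forward by `horizon` for
    every fold. Training indices always precede validation indices.
    """
    splits = []
    for fold in range(n_folds):
        origin = n_obs - (n_folds - fold) * horizon
        train = list(range(origin))                     # everything before the origin
        valid = list(range(origin, origin + horizon))   # the next `horizon` points
        splits.append((train, valid))
    return splits

# Three folds over a 12-point series with a 2-period horizon.
for train, valid in rolling_origin_splits(n_obs=12, n_folds=3, horizon=2):
    print(len(train), valid)
```

In automated ML you never write this yourself; passing the number of CV folds together with your time and grain columns produces equivalent time-respecting splits.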

Lags and rolling window aggregates

Often the best information a forecaster can have is the recent value of the target. Creating lags and cumulative statistics of the target increases the accuracy of your predictions.

In automated ML, you can now specify target lags as a model feature. The lag length identifies how many rows to lag based on your time interval. For example, if you want to lag by two units of time, you set the lag length parameter to two.

The table below illustrates how a lag length of two would be treated. The green columns are engineered features that lag sales by one day and two days. The blue arrows indicate how each of the lags is generated from the training data. Not-a-number (NaN) values are created when no sample data exists for that lag period.
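The same feature engineering can be sketched with pandas. Column names here are illustrative, and automated ML generates these features for you when you set the lag parameters; the sketch simply shows where the NaN values come from.

```python
import pandas as pd

# A small daily sales series (values are made up for illustration).
df = pd.DataFrame({
    "date": pd.date_range("2017-01-01", periods=5, freq="D"),
    "sales": [100, 110, 105, 120, 130],
})

# Engineer lag features: NaN appears where no earlier sample exists.
df["sales_lag1"] = df["sales"].shift(1)
df["sales_lag2"] = df["sales"].shift(2)
print(df)
```

The first row has no one-day lag and the first two rows have no two-day lag, which is exactly the NaN pattern described above.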

In addition to the lags, there may be situations where you need to add rolling aggregations of data values as features. For example, when predicting energy demand you might add a rolling window feature of three days to account for thermal changes in heated spaces. The table below shows the feature engineering that occurs when window aggregation is applied. Columns for minimum, maximum, and sum are generated on a sliding window of three based on the defined settings. Each row has new calculated features; in the case of January 4, 2017, the maximum, minimum, and sum values are calculated using temp values for January 1, 2017, January 2, 2017, and January 3, 2017. This window of three shifts along to populate data for the remaining rows.
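The window aggregation can likewise be sketched with pandas. This is an illustration of the concept rather than automated ML’s implementation; shifting by one before rolling ensures each row only sees past values, matching the January 4, 2017 example above.

```python
import pandas as pd

# A small daily temperature series (values are made up for illustration).
df = pd.DataFrame({
    "date": pd.date_range("2017-01-01", periods=6, freq="D"),
    "temp": [30, 32, 28, 35, 33, 31],
})

# Window of the 3 prior days: shift(1) excludes the current row's value,
# so each row's aggregates are computed purely from the past.
window = df["temp"].shift(1).rolling(window=3)
df["temp_min_3d"] = window.min()
df["temp_max_3d"] = window.max()
df["temp_sum_3d"] = window.sum()
print(df)
```

For the January 4 row, the aggregates cover January 1–3 (30, 32, 28); earlier rows are NaN because a full window of past data does not yet exist.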

Generating and using these additional features as extra contextual data helps with the accuracy of the trained model. This is all possible by adding a few parameters to your experiment settings.

Holiday features

For many time series scenarios, holidays have a strong influence on how the modeled system behaves. The time before, during, and after a holiday can modify the series’ patterns, especially in scenarios such as sales and product demand. Automated ML will create additional features as input for model training on daily datasets. Each holiday generates a window over your existing dataset that the learner can assign an effect to. With this update, we will support over 2,000 holidays in over 110 countries. To use this feature, simply pass the country code as part of the time series settings. The example below shows input data in the left table and the updated dataset with holiday featurization applied in the right table. Additional features or columns are generated that add more context when models are trained, for improved accuracy.

Get started with time-series forecasting in automated ML

With these new capabilities, automated ML increases support for more complex forecasting scenarios, provides more control for configuring training data using lags and window aggregation, and improves accuracy with new holiday featurization and ROCV. Azure Machine Learning aims to enable data scientists of all skill levels to use powerful machine learning technology that simplifies their processes and reduces the time spent training models. Get started by visiting our documentation and let us know what you think – we are committed to making automated ML better for you!

Learn more about the Azure Machine Learning service and get started with a free trial.

Learn more about automated machine learning
How to Guide: Auto-train a time-series forecast model
Automated ML GitHub Samples

Source: Azure

Using Text Analytics in call centers

Azure Cognitive Services provides Text Analytics APIs that simplify extracting information from text data using natural language processing and machine learning. These APIs wrap pre-built language processing capabilities, for example, sentiment analysis, key phrase extraction, entity recognition, and language detection.

Using Text Analytics, businesses can draw deeper insights from interactions with their customers. These insights can be used to create management reports, automate business processes, support competitive analysis, and more. One area that can provide such insights is recorded customer service calls, which contain the data needed to:

Measure and improve customer satisfaction
Track call center and agent performance
Look into performance of various service areas

In this blog, we will look at how we can gain insights from these recorded customer calls using Azure Cognitive Services.

Using a combination of these services, such as Text Analytics and Speech APIs, we can extract information from the content of customer and agent conversations. We can then visualize the results and look for trends and patterns.

The sequence is as follows:

Using Azure Speech APIs, we can convert the recorded calls to text. With the text transcriptions in hand, we can then run Text Analytics APIs to gain more insight into the content of the conversations.
The sentiment analysis API provides information on the overall sentiment of the text in three categories: positive, neutral, and negative. At each turn of the conversation between the agent and customer, we can:

See how the customer sentiment is improving, staying the same, or declining.
Evaluate the call and the agent for their effectiveness in handling customer complaints at different times.
See when an agent is consistently able to turn negative conversations into positive ones, or vice versa, and identify opportunities for training.
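As a minimal illustration of the first point, suppose we already have per-turn sentiment scores in [0, 1] from the sentiment analysis API; a simple helper (the function name and 0.1 threshold are hypothetical) can classify the call’s trajectory:

```python
def sentiment_trend(turn_scores):
    """Classify the direction of customer sentiment across call turns.

    `turn_scores` are per-turn sentiment scores in [0, 1], with higher
    meaning more positive. We compare the average of the second half
    of the call against the first half.
    """
    half = len(turn_scores) // 2
    first, second = turn_scores[:half], turn_scores[half:]
    delta = sum(second) / len(second) - sum(first) / len(first)
    if delta > 0.1:
        return "improving"
    if delta < -0.1:
        return "declining"
    return "steady"

# A call that starts negative and ends positive.
print(sentiment_trend([0.2, 0.3, 0.6, 0.8]))
```

Aggregating this label per agent over many calls is one way to surface the coaching opportunities described above.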

Using the key phrase extraction API, we can extract the key phrases in the conversation. This data, in combination with the detected sentiment, can be used to assign categories to the key phrases that appear during the call. With this data in hand, we can:

See which phrases carry negative or positive sentiment.
Evaluate shifts in sentiment over time or during product and service announcements.

Using the entity recognition API, we can extract entities such as person, organization, location, date time, and more. We can use this data, for example, to:

Tie the call sentiment to specific events such as product launches or store openings in an area.
Use customer mentions of competitors for competitive intelligence and analysis.

Lastly, Power BI can help visualize the insights and communicate the patterns and trends to drive action.

Using Azure Cognitive Services Text Analytics, we can gain deeper insights into customer interactions, going beyond simple customer surveys into the content of their conversations.

A sample code implementation of the above workflow can be found on GitHub.
Source: Azure

Ask Me Anything – “Network” with teams from Azure Networking!

Which third-party devices are supported for connecting to Azure VPN Gateway? Can I connect to multiple sites from the same virtual network? Ask these questions and more during the next Ask Me Anything (AMA) session via Twitter on Tuesday, June 11, 2019 from 10:00 AM to 11:30 AM Pacific Time.

This is your opportunity to ask questions about our products, services, or even the team, directly to members of these teams:

Azure Application Gateway
Azure DNS
Azure ExpressRoute
Azure Traffic Manager
Azure VPN Gateway

Tell us about your experiences; we want your valuable insights into how we can improve the service.

To get involved, follow @AzureSupport on Twitter and send a tweet with the hashtag "#AzNetworkingAMA". Then, during the event, members from the product teams will start answering your questions.

How it works

AMA stands for Ask Me Anything, which is a less formal way to get answers to your questions directly from the engineers and product managers. It’s an opportunity for a live conversation with the experts who are responsible for building and maintaining Azure services.

During the live session, you can ask questions by tweeting at @AzureSupport with the hashtag #AzNetworkingAMA. Your question can span multiple tweets by replying to the first tweet you post with this hashtag.

If you’re in a different time zone, no problem. Start tweeting your questions in advance and we’ll answer during the event.

You really can ask anything you’d like, but here’s a list of question ideas to get you started:

What’s the difference between App Gateway and VPN Gateway?
Can I delegate an Azure DNS subdomain?
What features are currently planned or in development?
What is the difference between App Gateway and Azure Load Balancer?
How much do I get charged for App Gateway?
Why should I use the V2 SKU of App Gateway vs the V1?
How does App Gateway compare with Azure Front Door?
Can I use App Gateway for purely “private” (not internet facing) applications?
Which protocols are supported on Azure VPN Gateway?

The Azure Networking AMA is a great way for you to get inside the minds that build the products you love, and continues our series of AMAs that connect customers directly with developers. To learn more about some of our previous AMAs, you can read about the Azure Backup AMA and the Azure Integration Services AMA.

Get out and tweet @AzureSupport.
Source: Azure

Empowering clinicians with mobile health data: Right information, right place, right time

Improving patient outcomes and reducing healthcare costs depends on the ability of healthcare providers, such as doctors, nurses, and specialized clinicians, to access a wide range of data at the point of patient care in the form of health records, lab results, and protocols. Tactuum, a Microsoft partner, provides the Quris solution, which gives clinicians access to the right information, in the right place, at the right time, enabling them to do their jobs efficiently and with less room for error.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Information fragmentation results in poor quality of care

A patient is brought into the emergency department with a deep cut to the leg. The wound is several days old and the patient is exhibiting symptoms of illness, perhaps infection. As a clinician, you know the hospital has a clear protocol for wound management and possible infections. Do you know where to find this information quickly? Is it on a wiki, internal website, or on paper in a binder? Lastly, is it current? Finding the right information in these conditions can be time-consuming and stressful. Or worse, it could be inaccurate and out of date.

In many healthcare provider organizations today, information is fragmented between electronic health records (EHR), online third-party sites, intranet sites, and paper. Additionally, some information may be on secured sites, not visible to everyone, and data disappears if it’s unavailable offline. This situation can be detrimental to the quality of patient care because critical data is available too late or not at all. Even with internet access, a search engine may surface the wrong information. So aside from the logistical challenges of making data available, it’s important to ensure that only the right information is found. The enduring challenge is getting the right information to the right person, in the right place, and at the right time.

The searchability cost of file systems

Even a facility with modern IT resources such as computers, tablets, or specialized instruments presents obstacles in the search for information. Users must navigate through the network and tunnel into folders, backtracking if they are wrong. Some folders may not be available to everyone or require asking for permission when time is of the essence. Websites and apps may also require authorization. So what happens if a device is offline? Computer systems present their own hurdles to quick access.

Solution

This challenge became a problem to solve for one Microsoft partner, Tactuum, which created the Quris Clinical Companion. Working with leading hospitals, including the University of Washington and the University of Michigan, Tactuum is solving the problem for healthcare. From the Tactuum website comes this description:

“Our flagship product allows organizations to push out to staff, in real-time, the latest guidelines, protocols, algorithms, calculators and clinical handbooks. Put your existing clinical resources into clinicians’ hands right now and know that they’re using the latest and most up-to-date information.”

Tactuum has a few notable goals:

Right information: The content is vetted, with security safeguards. The content is easy to use, and data consumption insights are provided.
Right place: Available where you need it through mobile devices, workstations, and EHR systems.
Right time: Available online and offline. When online, real-time updates become possible.
Right cost: Minimal IT involvement, low maintenance, and no paper or printing required.

The graphic below illustrates the components and workflow of the system.

Benefits

Improved quality of care through more effective decision-making (quicker and more reliable).
Savings on printing, easier maintenance, and streamlined distribution.
Innovation through powerful data and analytics.

The solution supports improving patient outcomes with critical information at the point of patient care, saving both time and money. Here’s one example, according to a registered nurse and Quris user at Airlift Northwest in Seattle:

“Time savings has been immeasurable. In the past it was required to have a workgroup of staff, educators, and medical directors to review and update the hardcopy “Bluebook.” This was very expensive and required significant time. Now, a smaller group reviews policies and resources, does updates, and uploads it directly to the organization’s server for immediate use.”

Azure services

The Microsoft Azure worldwide presence and extensive compliance portfolio provide the backbone of the Quris solution, including the following key services:

Web Apps: Supports Windows and Linux
Blob Storage: Multiple blob types, hot, cool, and archive tiers
Azure Active Directory: Identity services that work with your on-premises, cloud, or hybrid environment
Azure SQL Database: Unmatched scale and high availability for compute and storage
Xamarin: Connects apps to enterprise systems, in the cloud or on premises

Next steps

To learn more about Azure in the healthcare industry, see Azure for health.

Go to the Azure Marketplace listing for Quris and select Contact me.
Source: Azure