Azure Digital Twins: Powering the next generation of IoT connected solutions

Last month at Microsoft Build 2020, we announced new features for Azure Digital Twins, the IoT platform that enables the creation of next-generation IoT connected solutions that model the real world. Today, we are announcing that these updated capabilities are now available in preview. Using the power of IoT, businesses have gained unprecedented insights into their assets. But as connected solutions continue to evolve, companies are looking for ways to create richer models of entire business environments, a task that is challenging even for sophisticated businesses.

Our goal with Azure Digital Twins is to make the creation of sophisticated digital twin solutions easy. With today’s announcement, you can apply your domain expertise on top of Azure Digital Twins to design and build comprehensive digital models of entire environments.

Using Azure Digital Twins, you can gain insights that drive better products, optimization of operations, cost reduction, and breakthrough customer experiences. And you can now do so across environments of all types, including buildings, factories, farms, energy networks, railways, stadiums—even entire cities.

What’s new?

We received a lot of valuable feedback from the Azure Digital Twins preview, and we are excited to share the expanded capabilities of Azure Digital Twins that will simplify and accelerate your creation of IoT connected solutions.

Open modeling language

The new preview of Azure Digital Twins lets you create custom models of any connected environment. Using the rich and flexible Digital Twins Definition Language (DTDL), based on the JSON-LD standard, you can configure your Azure Digital Twins service to tailor it to the specific needs of your use case.

Real-world environments are created from connected twins. Each twin is modeled using properties, telemetry events, components, and relationships that define how twins can be connected into rich knowledge graphs. DTDL is also used for models throughout other Azure IoT services, including IoT Plug and Play and Azure Time Series Insights.
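
To make this concrete, here is a minimal sketch of a DTDL v2 interface and how it might be uploaded with the Azure Digital Twins SDK for Python (azure-digitaltwins-core). The model ID, property, telemetry, and relationship names are hypothetical, and the endpoint URL is a placeholder; check the current SDK reference before relying on the method names.

```python
# A minimal, hypothetical DTDL v2 interface for a "Room" twin.
# DTDL models are JSON-LD documents; here the model is expressed
# as a Python dict so it can be uploaded with the Python SDK.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

room_model = {
    "@id": "dtmi:example:Room;1",        # hypothetical model ID
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Room",
    "contents": [
        {"@type": "Property", "name": "temperatureSetpoint", "schema": "double"},
        {"@type": "Telemetry", "name": "temperature", "schema": "double"},
        # Relationships define how twins connect into a knowledge graph;
        # omitting "target" lets this relationship point at any twin.
        {"@type": "Relationship", "name": "contains"},
        # A Component entry would reference another interface's "@id" here.
    ],
}

# Upload the model, then create a twin that implements it.
client = DigitalTwinsClient(
    "https://<your-instance>.api.wus2.digitaltwins.azure.net",  # placeholder
    DefaultAzureCredential(),
)
client.create_models([room_model])
client.upsert_digital_twin("room-01", {
    "$metadata": {"$model": "dtmi:example:Room;1"},
    "temperatureSetpoint": 21.0,
})
```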

DTDL is the glue that helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.

As part of our commitment to openness and interoperability, we will continue to promote best practices and shared digital twin models for a wide range of businesses and industry domains through the Digital Twin Consortium and other channels, accelerating your time to value when building IoT connected solutions that span many industry verticals and use cases.

Live execution environment

Azure Digital Twins lets you bring your digital twins to life using data from IoT and other data sources, creating an always-up-to-date digital representation of your environment that is scalable and secure.

Using a robust event system, you can build dynamic business logic and data processing as data flows through the execution environment, and now, you can harness the power of external compute resources, such as Azure Functions. This makes it easy to use pre-existing code with Azure Digital Twins, which provides freedom of choice in terms of programming languages and compute models.
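
As an illustration, the sketch below shows an Azure Function (Python programming model v1) with an Event Grid trigger reacting to telemetry routed out of Azure Digital Twins. The payload fields and threshold are assumptions for the example, not a prescribed schema, and the function.json binding configuration is omitted.

```python
import logging
import azure.functions as func

# Illustrative Event Grid-triggered Azure Function. Assumes an Event Grid
# subscription routes Azure Digital Twins telemetry events to this function;
# the "temperature" field and the threshold below are hypothetical.
def main(event: func.EventGridEvent):
    payload = event.get_json()
    temperature = payload.get("temperature")
    if temperature is not None and temperature > 30.0:
        logging.warning("Twin %s reported high temperature: %s",
                        event.subject, temperature)
```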

To extract insights from the live execution environment, Azure Digital Twins provides a powerful query system that allows you to search for twins based on a wide range of conditions and relationships.
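
For example, with the Python SDK you might search for all twins of a given model whose property exceeds a value. The endpoint, model ID, and property names below are placeholders carried over from the hypothetical Room model above.

```python
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient(
    "https://<your-instance>.api.wus2.digitaltwins.azure.net",  # placeholder
    DefaultAzureCredential(),
)

# Find all Room twins (hypothetical model) whose setpoint exceeds 21 degrees.
query = ("SELECT T FROM DIGITALTWINS T "
         "WHERE IS_OF_MODEL(T, 'dtmi:example:Room;1') "
         "AND T.temperatureSetpoint > 21")
for twin in client.query_twins(query):
    print(twin["$dtId"], twin.get("temperatureSetpoint"))
```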

Input from IoT and business systems

You can easily connect assets such as IoT and IoT Edge devices, as well as existing business systems such as ERP and CRM, to Azure Digital Twins to drive the live execution environment.

You can now use a new or existing Azure IoT Hub to connect, monitor, and manage all of your assets at scale, taking advantage of the full device management capabilities that IoT Hub provides. The ability to use any existing Hub makes it easier to add Azure Digital Twins to existing IoT solutions incrementally.
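
A common pattern is to have compute such as an Azure Function, triggered by device telemetry flowing through IoT Hub, apply a JSON Patch to the corresponding twin. A minimal sketch, assuming the hypothetical "room-01" twin and Room model from earlier:

```python
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient(
    "https://<your-instance>.api.wus2.digitaltwins.azure.net",  # placeholder
    DefaultAzureCredential(),
)

# Apply a JSON Patch document to update a twin property. In a full solution
# this would typically run inside an Azure Function triggered by IoT Hub
# telemetry; the twin ID and property here are placeholders.
patch = [{"op": "replace", "path": "/temperatureSetpoint", "value": 22.5}]
client.update_digital_twin("room-01", patch)
```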

Using the Azure Digital Twins REST APIs, you can also use data sources other than IoT, unlocking even more actionable insights with Azure Digital Twins.

Output to Azure Time Series Insights, storage, and analytics

You can integrate Azure Digital Twins with other Azure services to build complete end-to-end solutions. You can define event routes that send selected data to downstream services through endpoints that support Event Hubs, Event Grid, or Service Bus. Use event routes to send data to Azure Data Lake for long-term storage; to data analytics services such as Azure Synapse Analytics to apply machine learning; to Logic Apps for workflow integration; or to Power BI to extract insights. Another important use case is time series data integration and historian analytics with Azure Time Series Insights.
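
For instance, the sketch below registers an event route that forwards twin update events to a pre-created Event Hubs endpoint. The endpoint name and filter are illustrative, and the model type shown follows recent versions of the Python SDK (older previews used a differently named type), so verify against the current reference.

```python
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient, DigitalTwinsEventRoute

client = DigitalTwinsClient(
    "https://<your-instance>.api.wus2.digitaltwins.azure.net",  # placeholder
    DefaultAzureCredential(),
)

# Assumes an endpoint named "my-eventhub-endpoint" has already been created
# on the Azure Digital Twins instance (for example, via the Azure CLI).
route = DigitalTwinsEventRoute(
    endpoint_name="my-eventhub-endpoint",
    filter="type = 'Microsoft.DigitalTwins.Twin.Update'",
)
client.upsert_event_route("route-to-eventhub", route)
```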

Combined, these capabilities greatly simplify today’s difficult tasks of modeling and creating a digital representation of an environment, helping you focus on what differentiates your business rather than building and operating complex, distributed systems architecture securely and at scale.

Innovating with customers and partners

Azure Digital Twins is already being used by a broad set of customers and partners. Below are a few examples showcasing the applicability of Azure Digital Twins across a wide range of industries:

Ansys Twin Builder: physics-based digital twins

Physics-based simulation has long been an essential part of the product design process, helping engineers optimize and validate design choices. With the broad deployment of IoT sensors in products and their environments, it is now possible to apply the same simulation technology after a product has been built, shipped, and deployed in the field. Simulation technology can be used to optimize performance and energy usage, or to predict failures, in a highly accurate and immediate way, without the complexities associated with alternative techniques.

“Ansys Twin Builder lets engineers quickly deliver real-time simulation models for operational use. With Microsoft’s Azure Digital Twins platform, it is now possible to efficiently integrate the simulation-based twins into a broader IoT solution.” —Sameer Kher, Senior Director, Twin Builder Product Line, Ansys

Bentley iTwin: infrastructure digital twins

In the world of infrastructure development, complex CAD data is the backbone of the planning, execution, and operation of major infrastructure, such as road and rail networks, public works and utilities, industrial plants, and commercial and institutional facilities. Bentley’s iTwin platform captures the geometry and metadata of a project and its environment as the source of truth that drives daily decisions throughout the entire lifecycle of the project. As a developer, you can think of it as GitHub for CAD.

“Using Azure Digital Twins, we can bring this backbone to life using raw and processed information from IoT sensors distributed throughout the infrastructure. By bringing a wide range of information sources together into a comprehensive Digital Twin, including CAD data, real-world scans and photometry, IoT sensor data, weather feeds and many more, we can revolutionize the way infrastructure projects are planned, built and operated.” —Pavan Emani, Vice President, iTwin Software Development, Bentley

To learn more about the customers and partners using Azure Digital Twins in exciting ways, we encourage you to visit the customer stories covering a spectrum of industry use cases.

Get started

We look forward to continuing to deliver on our commitment of simplifying and accelerating your time to value building next-generation IoT connected solutions. We are excited about the role Azure Digital Twins will play helping you gain valuable insights across your environments.


Get started with Azure Digital Twins today.

Visit the Azure Digital Twins product page.

See the Azure Digital Twins documentation and quick start guides.

Watch the Deep Dive: Azure Digital Twins webinar for a technical walkthrough, and join the event at 9 AM PT on June 29, 2020, for a live Q&A.

Watch the Deep Dive: Bentley and Azure Digital Twins webinar for an architectural overview, and join the event at 9 AM PT on August 3, 2020, for a live Q&A.

Watch how Bentley uses Azure Digital Twins to build the Bentley iTwin solution.

Watch the Azure Digital Twins Microsoft Build event session.

Read our customer stories from Ansys and Bentley.

Read Announcing Azure Digital Twins: Create digital replicas of spaces and infrastructure using cloud, AI and IoT to get familiar with the previous release.

Source: Azure

Advancing Azure service quality with artificial intelligence: AIOps

“In the era of big data, insights collected from cloud services running at the scale of Azure quickly exceed the attention span of humans. It’s critical to identify the right steps to maintain the highest possible quality of service based on the large volume of data collected. In applying this to Azure, we envision infusing AI into our cloud platform and DevOps process, becoming AIOps, to enable the Azure platform to become more self-adaptive, resilient, and efficient. AIOps will also support our engineers to take the right actions more effectively and in a timely manner to continue improving service quality and delighting our customers and partners. This post continues our Advancing Reliability series highlighting initiatives underway to keep improving the reliability of the Azure platform. The post that follows was written by Jian Zhang, our Program Manager overseeing these efforts, as she shares our vision for AIOps, and highlights areas of this AI infusion that are already a reality as part of our end-to-end cloud service management.”—Mark Russinovich, CTO, Azure

This post includes contributions from Principal Data Scientist Manager Yingnong Dang and Partner Group Software Engineering Manager Murali Chintalapati.

As Mark mentioned when he launched this Advancing Reliability blog series, building and operating a global cloud infrastructure at the scale of Azure is a complex task with hundreds of ever-evolving service components, spanning more than 160 datacenters across more than 60 regions. To rise to this challenge, we have created an AIOps team to collaborate broadly across Azure engineering teams and partnered with Microsoft Research to develop AI solutions to make cloud service management more efficient and more reliable than ever before. We are going to share our vision of the importance of infusing AI into our cloud platform and DevOps process. Gartner referred to something similar as AIOps (pronounced “AI Ops”), and this has become the common term that we use internally, albeit with a larger scope. Today’s post is just the start, as we intend to provide regular updates to share our adoption stories of using AI technologies to support how we build and operate Azure at scale.

Why AIOps?

There are two unique characteristics of cloud services:

The ever-increasing scale and complexity of the cloud platform and systems
The ever-changing needs of customers, partners, and their workloads

To build and operate reliable cloud services during this constant state of flux, and to do so as efficiently and effectively as possible, our cloud engineers (including thousands of Azure developers, operations engineers, customer support engineers, and program managers) heavily rely on data to make decisions and take actions. Furthermore, many of these decisions and actions need to be executed automatically as an integral part of our cloud services or our DevOps processes. Streamlining the path from data to decisions to actions involves identifying patterns in the data, reasoning, and making predictions based on historical data, then recommending or even taking actions based on the insights derived from all that underlying data.

Figure 1. Infusing AI into cloud platform and DevOps.

The AIOps vision

AIOps has started to transform the cloud business by improving service quality and customer experience at scale while boosting engineers’ productivity with intelligent tools, driving continuous cost optimization, and ultimately improving the reliability, performance, and efficiency of the platform itself. When we invest in advancing AIOps and related technologies, we see this ultimately provides value in several ways:

Higher service quality and efficiency: Cloud services will have built-in capabilities of self-monitoring, self-adapting, and self-healing, all with minimal human intervention. Platform-level automation powered by such intelligence will improve service quality (including reliability, availability, and performance) and service efficiency to deliver the best possible customer experience.
Higher DevOps productivity: With the automation power of AI and ML, engineers are freed from the toil of investigating repeated issues and manually operating and supporting their services, and can instead focus on solving new problems, building new functionality, and doing work that more directly impacts the customer and partner experience. In practice, AIOps empowers developers and engineers with insights so they don’t have to comb through raw data, thereby improving engineer productivity.
Higher customer satisfaction: AIOps solutions play a critical role in enabling customers to use, maintain, and troubleshoot their workloads on top of our cloud services as easily as possible. We endeavor to use AIOps to understand customer needs better, in some cases to identify potential pain points and proactively reach out as needed. Data-driven insights into customer workload behavior could flag when Microsoft or the customer needs to take action to prevent issues or apply workarounds. Ultimately, the goal is to improve satisfaction by quickly identifying, mitigating, and fixing issues.

My colleagues Marcus Fontoura, Murali Chintalapati, and Yingnong Dang shared Microsoft’s vision, investments, and sample achievements in this space during the keynote AI for Cloud–Toward Intelligent Cloud Platforms and AIOps at the AAAI-20 Workshop on Cloud Intelligence in conjunction with the 34th AAAI Conference on Artificial Intelligence. The vision was created by a Microsoft AIOps committee across cloud service product groups including Azure, Microsoft 365, Bing, and LinkedIn, as well as Microsoft Research (MSR). In the keynote, we shared a few key areas in which AIOps can be transformative for building and operating cloud systems, as shown in the chart below.

Figure 2. AI for Cloud: AIOps and AI-Serving Platform.

AIOps

Moving beyond our vision, we wanted to start by briefly summarizing our general methodology for building AIOps solutions. A solution in this space always starts with data—measurements of systems, customers, and processes—as the key to any AIOps solution is distilling insights about system behavior, customer behaviors, and DevOps artifacts and processes. The insights could include identifying a problem that is happening now (detect), why it is happening (diagnose), what will happen in the future (predict), and how to improve (optimize, adjust, and mitigate). Such insights should always be associated with business metrics—customer satisfaction, system quality, and DevOps productivity—and drive actions in line with prioritization determined by the business impact. The actions will also be fed back into the system and process. This feedback could be fully automated (infused into the system) or involve humans in the loop (infused into the DevOps process). This overall methodology guided us to build AIOps solutions in three pillars.

Figure 3. AIOps methodologies: Data, insights, and actions.

AI for systems

Today, we're introducing several AIOps solutions that are already in use and supporting Azure behind the scenes. The goal is to automate system management to reduce human intervention. As a result, this helps to reduce operational costs, improve system efficiency, and increase customer satisfaction. These solutions have already contributed significantly to Azure platform availability improvements, especially for Azure IaaS virtual machines (VMs). AIOps solutions have contributed in several ways, including protecting customers’ workloads from host failures through hardware failure prediction and proactive actions such as live migration and Project Tardigrade, and shortening VM creation time by pre-provisioning VMs.

Of course, engineering improvements and ongoing system innovation also play important roles in the continuous improvement of platform reliability.

Hardware Failure Prediction protects cloud customers from interruptions caused by hardware failures (an illustrative sketch follows this list). We shared our story of Improving Azure Virtual Machine resiliency with predictive ML and live migration back in 2018. Microsoft Research and Azure have built a disk failure prediction solution for Azure Compute, triggering the live migration of customer VMs from predicted-to-fail nodes to healthy nodes. We also expanded the prediction to other types of hardware issues, including memory and networking router failures. This enables us to perform predictive maintenance for better availability.
Pre-Provisioning Service in Azure brings VM deployment reliability and latency benefits by creating pre-provisioned VMs. Pre-provisioned VMs are pre-created and partially configured ahead of customer requests. As we described in the AAAI-20 keynote mentioned above, the Pre-Provisioning Service leverages a prediction engine to predict VM configurations and the number of VMs per configuration to pre-create. This prediction engine applies dynamic models that are trained on historical and current deployment behaviors to predict future deployments. The Pre-Provisioning Service uses this prediction to create and manage VM pools per VM configuration, resizing each pool by destroying or adding VMs as prescribed by the latest predictions. Once a VM matching the customer's request is identified, it is assigned from the pre-created pool to the customer’s subscription.
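
For readers curious what such a failure predictor might look like in miniature, here is a deliberately simplified sketch: a gradient-boosted classifier trained on synthetic stand-ins for disk telemetry features, flagging nodes likely to fail so they could be queued for live migration. This illustrates the general technique only; it is not Microsoft's production model, feature set, or threshold.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical features per node: SMART-style disk counters such as
# reallocated sectors, seek errors, and command timeouts over a window.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))            # stand-in telemetry features
y = (X[:, 0] + 0.5 * X[:, 1]                # synthetic failure label
     + rng.normal(scale=0.5, size=10_000) > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Nodes whose predicted failure probability crosses a threshold would be
# candidates for proactive live migration of customer VMs.
failure_prob = model.predict_proba(X_test)[:, 1]
at_risk_nodes = np.flatnonzero(failure_prob > 0.8)  # hypothetical cutoff
print(f"{at_risk_nodes.size} of {len(X_test)} nodes flagged for migration")
```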

AI for DevOps

AI can boost engineering productivity and help in shipping high-quality services with speed. Below are a few examples of AI for DevOps solutions.

Incident management is an important aspect of cloud service management—identifying and mitigating rare but inevitable platform outages. A typical incident management procedure consists of multiple stages, including detection, engagement, and mitigation. Time spent in each stage is used as a Key Performance Indicator (KPI) to measure and drive rapid issue resolution. KPIs include time to detect (TTD), time to engage (TTE), and time to mitigate (TTM).

Figure 4. Incident management procedures.

As shared in AIOps Innovations in Incident Management for Cloud Services at the AAAI-20 conference, we have developed AI-based solutions that enable engineers not only to detect issues early but also to identify the right team(s) to engage and therefore mitigate as quickly as possible. Tight integration into the platform enables end-to-end touchless mitigation for some scenarios, which considerably reduces customer impact and therefore improves the overall customer experience.

Anomaly Detection provides an end-to-end monitoring and anomaly detection solution for Azure IaaS. The detection solution targets a broad spectrum of anomaly patterns, including not only generic patterns defined by thresholds but also patterns that are typically more difficult to detect, such as leaking patterns (for example, memory leaks) and emerging patterns (not a spike, but increasing with fluctuations over a longer term). Insights generated by the anomaly detection solutions are injected into the existing Azure DevOps platform and processes, for example, alerting through the telemetry platform and incident management platform and, in some cases, triggering automated communications to impacted customers. This helps us detect issues as early as possible.
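
As a toy illustration of detecting a "leaking" pattern, one can fit a linear trend to a sliding window of a counter such as memory usage and flag sustained positive slopes that a fixed threshold would miss. This sketch is illustrative only and is not the production detector; the window size and sensitivity are assumptions.

```python
import numpy as np

def leak_score(series: np.ndarray, window: int = 288) -> float:
    """Slope of a linear fit over the most recent window of samples.

    A persistently positive slope on, say, memory usage suggests a leak
    even when individual values never cross a static threshold.
    """
    y = series[-window:]
    x = np.arange(len(y))
    slope, _intercept = np.polyfit(x, y, 1)
    return slope

# Simulated memory usage: slow upward drift plus noise (a leak-like pattern).
rng = np.random.default_rng(1)
usage = 60 + 0.01 * np.arange(1000) + rng.normal(scale=1.0, size=1000)
if leak_score(usage) > 0.005:  # hypothetical sensitivity
    print("Possible leak: sustained upward trend detected")
```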

For an example that has already made its way into a customer-facing feature, Dynamic Threshold is an ML-based anomaly detection model. It is a feature of Azure Monitor used through the Azure portal or through the ARM API. Dynamic Threshold allows users to tune their detection sensitivity, including specifying how many violation points will trigger a monitoring alert.

Safe Deployment serves as an intelligent global “watchdog” for the safe rollout of Azure infrastructure components. We built a system, code-named Gandalf, that analyzes temporal and spatial correlation to capture latent issues that surface hours or even days after a rollout. This helps identify suspicious rollouts among a sea of ongoing rollouts, which is common for Azure, and helps prevent issues from propagating, thereby preventing impact to additional customers. We provided details on our safe deployment practices in this earlier blog post and went into more detail about how Gandalf works in our USENIX NSDI 2020 paper and slide deck.

AI for customers

To improve the Azure customer experience, we have been developing AI solutions to power the full lifecycle of customer management. For example, a decision support system has been developed to guide customers towards the best selection of support resources by leveraging the customer’s service selection and verbatim summary of the problem experienced. This helps shorten the time it takes to get customers and partners the right guidance and support that they need.

AI-serving platform

To achieve greater efficiencies in managing a global-scale cloud, we have been investing in building systems that support using AI to optimize cloud resource usage and therefore the customer experience. One example is Resource Central (RC), an AI-serving platform for Azure that we described in Communications of the ACM. It collects telemetry from Azure containers and servers, learns from their prior behaviors, and, when requested, produces predictions of their future behaviors. We are already using RC to predict many characteristics of Azure Compute workloads accurately, including resource procurement and allocation, all of which helps to improve system performance and efficiency.

Looking towards the future

We have shared our vision of AI infusion into the Azure platform and our DevOps processes and highlighted several solutions that are already in use to improve service quality across a range of areas. Look to us to share more details of our internal AI and ML solutions for even more intelligent cloud management in the future. We’re confident that these are the right investment solutions to improve our effectiveness and efficiency as a cloud provider, including improving the reliability and performance of the Azure platform itself.
Source: Azure

Five reasons to view this Azure Synapse Analytics virtual event

The virtual event Azure Synapse Analytics: How It Works is now available on demand. In demos and technical discussions, Microsoft customers explain how they’re using the newest Azure Synapse Analytics capabilities to deliver insights faster, bring together an entire analytics ecosystem in a central location, reduce costs, and transform decision-making.

This post outlines five key reasons to view the one-hour event.

Learn how to deliver powerful insights with speed and ease

Today, it’s critical to have a data-driven culture in your organization. Analytics play a pivotal role in helping many organizations make insights-driven decisions—decisions to transform supply chains, develop new ways to interact with customers, and evaluate new offerings.

At Azure Synapse Analytics: How It Works, customers showed how they combine data ingestion, data warehousing, and big data analytics in a single cloud-native service using Azure Synapse. Whether you’re a data engineer trying to wrangle multiple data types from multiple sources into pipelines, or a database administrator responsible for a data lake and data warehouse, you’ll see how all of this can be simplified in a code-free environment.

Customers also demonstrated how they give their employees access to unprecedented, real-time insights from enterprise data using Azure Synapse with built-in Power BI authoring.

Achieve unprecedented ROI

Companies featured at the event have demonstrated significant cost reductions with cloud analytics. Compared to on-premises solutions, cloud analytics solutions:

Require lower implementation and maintenance costs.
Reduce analytics project development time.
Provide access to more frequent innovation.
Deliver higher levels of security and business continuity.
Help ensure a better competitive advantage and higher customer satisfaction.

With cloud analytics, organizations pay for data and analytics tools only when needed, pausing consumption when not in use. They can reallocate budget previously spent on hardware and infrastructure management to optimizing processes and launching new projects. In fact, customers average a 271 percent ROI with Azure Synapse—savings that come from lower operating costs, increased productivity, reallocating staff to higher-value activities, and increasing operating income due to improved analytics. Analytics in Azure is up to 14 times faster and costs 94 percent less than other cloud providers.

Deliver a unified analytics experience to everyone in your organization

BI specialists, data engineers, and other IT and data professionals are using Azure Synapse to build, manage, and optimize analytics pipelines, using a variety of skillsets.

Data engineers can use a code-free visual environment for managing data pipelines.
Database administrators can automate query optimization and easily explore data lakes.
Data scientists can build proofs of concept in minutes.
Business analysts can securely access datasets and use Power BI to build dashboards in minutes—all while using the same analytics service.

Analyze data at limitless scale

By viewing the event, you’ll learn how to access and analyze all your data, from your enterprise data lake to multiple data warehouses and big data analytics systems, with blazing speed. You’ll also see how data professionals can query both relational and non-relational data using the familiar SQL language, with either serverless or provisioned resources, in Azure Synapse.
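
As a rough sketch of what that looks like in practice, the snippet below queries Parquet files in a data lake through a Synapse serverless SQL endpoint using pyodbc. The server name, storage account, container, path, and credentials are placeholders, and authentication details (SQL login versus Azure AD) will vary by environment.

```python
import pyodbc

# Placeholders: substitute your workspace's serverless ("on-demand") SQL
# endpoint and credentials; Azure AD authentication is also an option.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<workspace>-ondemand.sql.azuresynapse.net;"
    "DATABASE=master;UID=<user>;PWD=<password>"
)

# Query raw Parquet files in the data lake directly with familiar T-SQL,
# without provisioning any dedicated compute.
sql = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage>.dfs.core.windows.net/<container>/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS rows;
"""
cursor = conn.cursor()
for row in cursor.execute(sql):
    print(row)
```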

Attain unmatched security

Of course, trust is critical for any cloud solution. Customers share how they take advantage of advanced Azure Synapse security and privacy features, such as automated threat detection, always-on data encryption, column-level security, and native row-level security, to help ensure that data stays safe and private. You’ll also learn about dynamic data masking, which automatically protects sensitive data in real time.

In summary, by viewing the Azure Synapse Analytics: How It Works virtual event, you’ll learn how to deliver:

Powerful insights.
Unprecedented ROI.
Unified experience.
Limitless scale.
Unmatched security.

Source: Azure