Microsoft Azure portal April 2019 update

This month’s updates include improvements to IaaS, Azure Data Explorer, Security Center, Azure Site Recovery, Role-Based Access Control, Support, and Intune.

Sign in to the Azure portal now and see for yourself everything that’s new. Download the Azure mobile app to stay connected to your Azure resources anytime, anywhere.

Here’s the list of April updates to the Azure portal:

IaaS

Improved create experience for Managed Disks
Use non-ASCII characters for virtual machine names

Azure Data Explorer

New full-screen Create Cluster experience

Security Center

Public preview: Adaptive network hardening in Azure Security Center
Azure Security Center adaptive application control updates
Support for virtual network peering in Azure Security Center
Azure Security Center: Secure score impact changes

Azure Site Recovery

Replication to managed disks

Role-Based Access Control

New Classic administrators tab

Support

Updated support request experience

Other

Updates to Microsoft Intune

IaaS

Improved create experience for Managed Disks

Managed disks now use the latest UI pattern for creating resources in Azure. This updated flow eliminates horizontal scrolling during the creation workflow and follows the same UI patterns used in other popular services such as Virtual Machines, Storage, Cosmos DB, and AKS, resulting in an experience that is easier to learn and more consistent.

Use non-ASCII characters for virtual machine names

We loosened the restrictions on the characters you can use to name a virtual machine in the portal to include non-ASCII characters. Azure virtual machine naming in the portal is constrained by two sets of rules: Azure resource naming rules and guest operating system hostname naming rules, which can be more restrictive. With this release, we allow more Unicode characters in the virtual machine name, which is used as both the Azure resource name and the guest hostname. While the Azure resource name is immutable, you can update the in-guest hostname after the VM is created.
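To illustrate how the two sets of rules interact, here is a hedged sketch of a client-side check. The specific limits are assumptions for the example (15 characters is the classic Windows computer-name limit, 64 a common Linux hostname limit); the portal enforces the authoritative rules.

```python
# Illustrative only: the numeric limits below are assumptions, not the
# portal's exact validation. Consult the Azure naming-rules docs for the
# authoritative constraints.

def check_vm_name(name: str, os_type: str) -> list:
    """Return a list of (assumed) naming problems for a proposed VM name."""
    problems = []
    # Assumed Azure resource-name length constraint for the example.
    if not (1 <= len(name) <= 64):
        problems.append("Azure resource name must be 1-64 characters")
    # Guest hostname rules are stricter on Windows than on Linux.
    max_hostname = 15 if os_type == "windows" else 64
    if len(name) > max_hostname:
        problems.append(f"guest hostname limited to {max_hostname} characters")
    return problems

# Non-ASCII characters are now accepted, so this name passes both checks.
print(check_vm_name("wébserveur-01", "windows"))  # -> []
```

Because the Azure resource name is immutable while the in-guest hostname is not, a check like this is most useful before the VM is created.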

Azure Data Explorer

New full-screen Create Cluster experience

We've changed the way users create clusters. The new experience follows the "review + create" UX pattern that appears in several other Azure products.

Security Center

Public preview: Adaptive network hardening

Azure Security Center can now learn the network traffic and connectivity patterns of your Azure workload and provide you with network security group (NSG) rule recommendations for your internet-facing virtual machines. This is called adaptive network hardening, and it's now in public preview. It helps you secure connections to and from the public internet (made by workloads running in the public cloud), which are one of the most common attack surfaces.

It can be hard to know which NSG rules should be in place to make sure that Azure workloads are available only to required source ranges. These new recommendations in Security Center help you configure your network access policies and limit your exposure to attacks. Security Center uses machine learning to fully automate this process, including an automated enforcement mechanism. These recommendations also use Microsoft’s extensive threat intelligence reports to make sure that known malicious actors are blocked.

To view these recommendations, in the Security Center portal, select Networking and then Adaptive network hardening.

Adaptive application control updates

In Azure Security Center, adaptive application control in audit mode is now available for Azure Linux VMs. This whitelisting solution is also available for non-Azure Windows and Linux VMs and servers that are connected to Security Center.

In addition, you can now rename groups of virtual machines and servers in Security Center. Groups are still automatically named group1, group2, and so on, but you can then edit those names to something more meaningful that better represents each application control policy group. Learn more about automated end-to-end application control in Security Center by visiting our documentation, “Adaptive application controls in Azure Security Center.”

Support for virtual network peering

The network map in Azure Security Center now supports virtual network peering. Directly from the network map, you can view the allowed traffic flows between peered virtual networks and drill down into the connections and entities.

Secure score impact changes

In Azure Security Center, the number for secure score impact represents how much your overall secure score will improve if you follow recommendations.

Security Center fine tunes the score of the recommendations, continuously adjusting them to make sure they reflect the necessary prioritization. As part of this effort, the secure score has changed for several recommendations. The change might affect your overall secure score. You can learn more about secure score by visiting our documentation, “Improve your secure score in Azure Security Center.”

Azure Site Recovery

Replication to managed disks

Azure Site Recovery (ASR) now supports disaster recovery of VMware virtual machines and physical servers by directly replicating to Managed Disks. All new protections now have this capability available on the Azure portal. In order to enable replication for a machine, you no longer need to create storage accounts. For more details, refer to the announcement blog post, “Simplify disaster recovery with Managed Disks for VMware and physical servers.”

Role-based access control

New Classic administrators tab

If you are still using the classic deployment model, we've consolidated the management of Co-administrators on a new tab named Classic administrators. If you need to add or remove Co-administrators, you can use this new tab. To learn more about this tab, see Azure classic subscription administrators.

To see the new Classic administrators tab:

In the Azure portal, select All services and then Subscriptions.
Select your subscription.
Select Access control (IAM) and then the Classic administrators tab.

Support

Updated support request experience

We have updated the support request creation experience, improving screen real estate usage and creating better interaction patterns.

During support case creation, customers can take advantage of our rich self-help content and diagnostics to troubleshoot their issues and get immediate solutions to their problems. The self-help and troubleshooting steps are available to all customers, including those who have not purchased a technical support plan with Microsoft.

Other

Updates to Microsoft Intune

The Microsoft Intune team has been hard at work on updates as well. You can find the full list of updates to Intune on the “What's new in Microsoft Intune” page, including changes that affect your experience using Intune.

Azure portal “how to” video series

Have you checked out our Azure portal “how to” video series yet? The videos highlight specific aspects of the portal so you can be more efficient and productive while deploying your cloud workloads from the portal. Recent videos include a demonstration of how to create a storage account and upload a blob and how to create an Azure Kubernetes Service cluster in the portal. Keep checking our playlist on YouTube for a new video each week.

Next steps

The Azure portal’s large team of engineers always wants to hear from you, so please keep providing us with your feedback in the comments section below or on Twitter @AzurePortal.

Don’t forget to sign in to the Azure portal and download the Azure mobile app today to see everything that’s new. See you next month!
Source: Azure

Azure Sphere Retail and Retail Evaluation feeds

Azure Sphere developers might have noticed that we now have two Azure Sphere OS feeds where once there was only one. The Azure Sphere Preview feed that delivered over-the-air OS updates has been replaced by feeds named Retail Azure Sphere OS and Retail Evaluation Azure Sphere OS. What’s the difference and what does it mean for you?

The Retail feed provides a production-ready OS and is intended for broad deployment to end-user installations. The Retail Evaluation feed provides each new OS for 14 days before we release it to the Retail feed. It is intended for backwards compatibility testing.

At the 19.02 release, both feeds delivered the same OS. The 19.03 quality update was released to the Retail Evaluation feed on March 14, 2019 and was promoted to the Retail feed on March 28, 2019. Future releases will similarly be made available on the Retail Evaluation feed for 14 days before they are promoted to the Retail feed.

What’s the value to you?

We’ve designed Azure Sphere for easy updates so that new versions of the OS can be deployed to customer sites without manual intervention. However, we recognize that you want an opportunity to verify your existing applications before your customers receive the new OS. The 14-day evaluation period lets you check that everything works as you expect.

Application binaries that are built only with production APIs from a given OS release will be compatible with all subsequent OS releases. To evaluate the new OS, we recommend that you assign one or more devices to a separate Retail Evaluation device group that is configured to receive the Retail Evaluation feed. Using the devices in this group as “canaries,” you can run your applications and OTA application deployments against the new OS version.

If you encounter problems, please notify us immediately through your Microsoft technical account manager (TAM) so that we can address any issues.

Get started with Azure Sphere

The best way to learn more about the Azure Sphere Retail and Retail Evaluation feeds is by connecting an Azure Sphere devkit or module to the network. If you haven’t already started building with Azure Sphere, you can get started quickly with modules that meet your needs from our ecosystem of Azure Sphere partners. To learn more, view the on-demand Azure Sphere Ecosystem Expansion webinar.
Source: Azure

IoT in Action: Enabling cloud transformation across industries

The intelligent cloud and intelligent edge go hand-in-hand, and together they are sparking massive transformation across industries. As computing gets more deeply embedded in the real world, powerful new opportunities arise to transform revenue, productivity, safety, customer experiences, and more. According to a white paper by Keystone Strategy, digital transformation leaders generate 8 percent more per year in operating income than other enterprises.

But what does cloud transformation look like within the context of the Internet of Things (IoT)?

Below I’ve laid out a typical cloud transformation journey and provided examples of how the cloud is transforming city government, industrial IoT, and oil and gas innovators. For a deep dive on this very topic, I hope you’ll join me and a whole host of cloud and IoT experts, and Microsoft partners and customers at the upcoming IoT in Action event in Houston.

The typical cloud transformation journey

As mentioned, the cloud is a vital piece of IoT. Below I’ve outlined a typical cloud journey.

Embrace an innovation mindset: The first part of the cloud transformation journey—and this applies to digital transformation in general—is building a culture and mindset that is willing to innovate, and welcomes change and the potential it brings. This must start with leadership. If leadership doesn’t set the example of an innovation mindset, it will be difficult to achieve buy-in internally.
Clarify rationale for a cloud move: Typically, these reasons are plentiful such as cost savings, greater availability, and better performance. Understanding rationale from a strategic standpoint and aligning with your overall business goals can help you focus your efforts and find the right cloud fit.
Determine which applications to modernize and migrate: Prioritizing applications and determining which ones need to be migrated is also key. Migration is an opportunity for modernization of the IT ecosystem, which can ultimately save time and money. Making a prioritized plan and budgeting for modernization needs is critical.
Expect cloud usage (and costs) to rise: After the initial migration, cloud consumption typically increases. Due to easy access and relatively low-cost, developers and administrators will consume more resources, developing new applications and solutions.
But then it levels out: As an organization gains a clear understanding of its actual cloud consumption, it will be able to prioritize its workloads, bring some workloads back on-premises, and negotiate pricing models. Implementing governance processes will help control costs and ensure optimal performance.

Below I’ve included a few snapshots that show how the cloud transformation journey is paying off for city government, manufacturers, and the oil and gas industry.

Smart cities and the cloud journey

What do flood detection sensors, firefighting drones, transit wi-fi, and smart water meters have in common? They’re cloud connected.

Houston is on a mission to connect its citizens to the city and the city to its citizens. In the wake of massive Hurricane Harvey destruction, the city is doing more than just rebuilding: it is working to become safer, more resilient, and more connected.

To that end, the City of Houston is working with Microsoft and Microsoft partners to leverage cloud transformation and build repeatable IoT solutions that span transportation, public safety, disaster recovery and response, connected neighborhoods, smart buildings, and more. A shared vision and strong collaboration from city leaders have been crucial to the success of this massive undertaking.

Learn more about the Microsoft and Houston initiative for details around how Houston is embracing cloud transformation to take care of its citizens.

Industrial IoT and the cloud journey

Industrial organizations are also leveraging digital and cloud transformation. By combining cloud with IoT, manufacturers are able to streamline, increase productivity, and predict issues before they happen. They’re even able to offer new service lines.

Rolls-Royce is a fantastic example of a manufacturer that has embraced cloud transformation to create a valuable service that helps its customers minimize costly delays and maximize fuel efficiency. With more than 13,000 commercial aircraft engines in service worldwide, Rolls-Royce uses data from equipment sensors to help airlines predict and plan for maintenance needs and increase fuel economy.

The solution relies on the Microsoft Azure platform and Azure IoT solution accelerators to help filter, synthesize, and analyze massive volumes of data, delivering actionable insights to the right stakeholders at the right time. According to Michael Chester, Product Manager Data Services, Rolls-Royce, “By looking at wider sets of operating data and using machine learning and analytics to spot subtle correlations, we can optimize our models and provide insight that might improve a flight schedule or a maintenance plan and help reduce disruption for our customers.”

Oil and gas IoT and the cloud journey

A shifting competitive landscape, price volatility, technology, and other factors are reshaping the oil and gas industry. Areas of transformation include field empowerment, operations, and industry innovation. Foundational to success is digital transformation.

XTO Energy, a subsidiary of ExxonMobil, knows firsthand the importance of digital and cloud transformation. One of the challenges they faced was that the existing infrastructure where they have major holdings didn’t lend itself to collecting data.

Recognizing the need to modernize and use data to drive better decisions, they deployed a series of intelligent cloud and intelligent edge solutions that have helped them keep tabs on well heads. Using the Microsoft Azure platform and Azure IoT technologies, they collect, store, and analyze data, giving XTO Energy new insights into well operations and future drilling possibilities.

According to Brian Khoury, IoT and Data Architecture Supervisor at XTO Energy, “We recognize the need to further digitize and to use data as an asset that drives insights and solves problems that we couldn’t solve when information is confined to physical paper or siloed across departments. Oil and gas tends to be behind in the use of digital tools compared to other industries, so we’re working hard to be more digitally enabled and connected. Embracing the cloud is an important part of that effort because it frees us up from having to manage hardware, storage, servers—all things that aren’t our core business—and we can scale and spin up resources as needed.”

IoT in Action comes to Houston April 16, 2019

The intelligent cloud and intelligent edge present powerful opportunities across industries. Please join us for a one-day IoT in Action event in Houston. This event is a unique opportunity to explore innovative, scalable IoT solutions that enable cloud transformation across industries – from city government to industrial IoT solution providers and oil and gas innovators. It’s also a great way to connect with experts and network with other Microsoft partners and customers to explore opportunities around the intelligent edge and intelligent cloud.
Source: Azure

Alerts in Azure are now all the more consistent!

Azure Monitor alerts provide rich alerting capabilities on a variety of telemetry, such as metrics, logs, and activity logs. Over the past year, we have unified the alerting experience by providing a common consumption experience, including the UX and API for alerts. However, the payload format remained different for each alert type, which put the burden of building and maintaining multiple integrations, one for each alert type, on the user. Today, we are releasing a new common alert schema that provides a single extensible format for all alert types.

What’s the common alert schema?

With the common alert schema, all alert payloads generated by Azure Monitor will have a consistent structure. Any alert instance describes the resource that was affected and the cause of the alert, and these are described in the common schema in the following sections:

Essentials: A set of standardized fields that are common across all alert types. They describe which resource the alert fired on, along with additional common alert metadata such as severity or description.
Alert context: A set of fields that describe the cause of the alert; their contents vary based on the alert type. For example, a metric alert would have fields like the metric name and metric value in the alert context, whereas an activity log alert would have information about the event that generated the alert.
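As a rough illustration of how this separation helps routing, the sketch below dispatches an incoming payload using only the essentials section, leaving the alert context for the responding team. The field names follow the common schema described above, but the routing rules, resource IDs, and team names are invented for the example; consult the schema documentation for the authoritative field list.

```python
# A minimal sketch, assuming a common-alert-schema-shaped payload.
# Routing decisions use only data["essentials"]; the alert-type-specific
# details stay in data["alertContext"] for the responder to use.

def route_alert(payload: dict) -> str:
    """Pick an on-call team based only on routing metadata in essentials."""
    essentials = payload["data"]["essentials"]
    target = essentials.get("alertTargetIDs", [""])[0].lower()
    # Hypothetical routing rules: resource group first, then severity.
    if "/resourcegroups/prod-db/" in target:
        return "database-oncall"
    if essentials.get("severity") in ("Sev0", "Sev1"):
        return "sre-oncall"
    return "triage-queue"

sample = {
    "data": {
        "essentials": {
            "severity": "Sev1",
            "alertTargetIDs": [
                "/subscriptions/xxx/resourceGroups/web/providers"
                "/Microsoft.Compute/virtualMachines/web-vm"
            ],
        },
        "alertContext": {},  # varies by alert type (metric, log, activity log)
    }
}
print(route_alert(sample))  # -> "sre-oncall"
```

Because the essentials fields are the same for every alert type, one routing function like this can replace several per-type integrations.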

How does it help me?

The typical workflow we hear from customers – both ITOps and DevOps teams – is that alerts go to the appropriate team (on-call individual) based on some metadata such as subscription ID, resource groups, and more. The common alert schema makes this workflow more streamlined by providing a clear separation between the essential meta-data that is needed to route the alert, and the additional context that the responsible team (or individual) needs to debug and fix the issue.

Find more information about the exact fields, versioning, and other schema related details.

How is this going to impact me?

If you consume alerts from Azure in any manner, whether through email, webhooks, external tools, or otherwise, you might want to continue reading.

Email: A consistent and detailed email template that lets you not only diagnose issues at a glance, but also jump straight to working on the incident through deep links to the alert details in the portal and to the affected resource.
SMS: A consistent SMS template.
Webhook, Logic Apps, Azure Functions: A consistent JSON structure, allowing you to easily build integrations across different alert types.

The new schema will also enable a richer consumption experience across both the Azure portal and the Azure mobile app in the immediate future. You can learn more about the changes coming as part of this feature by visiting our documentation.

Why should I switch over from my existing integrations?

If you already have integrations with the existing schemas, the reasons to switch over are many:

Consistent alert structure means that you could potentially have fewer integrations, making the process of managing and maintaining these connectors a much simpler task.
Payload enrichments such as richer diagnostic information, the ability to customize, and more will surface only in the new schema.

How do I get this new schema?

To avoid breaking your existing integrations, the common alert schema is something you can opt in to, and opt out of, at any time.

To opt in or out from the Azure portal:

Open any existing or a new action in an action group.
Select Yes for the toggle that enables the common alert schema.

If you wish to opt in at scale, you can also use the action groups API to automate this process. Learn more about how to write integrations for the common alert schema and the alert context schemas for the different alert types.

As always, we would love to hear your feedback. Please continue to share your thoughts at azurealertsfeedback@microsoft.com.
Source: Azure

Monitoring on Azure HDInsight Part 2: Cluster health and availability

This is the second blog post in a four-part series on Monitoring on Azure HDInsight. "Monitoring on Azure HDInsight Part 1: An Overview" discusses the three main monitoring categories: cluster health and availability, resource utilization and performance, and job status and logs. This blog covers the first of those topics, cluster health and availability, in more depth.

As a high-availability service, Azure HDInsight ensures that you can spend time focused on your workloads, not worrying about the availability of your cluster. To accomplish this, HDInsight clusters are equipped with two head nodes, two gateway nodes, and three ZooKeeper nodes, making sure there is no single point of failure for your cluster. Nevertheless, Azure HDInsight offers multiple ways to comprehensively monitor the status of your clusters’ nodes and the components that run on them. HDInsight clusters include both Apache Ambari, which provides health information at a glance and predefined alerts, as well as Azure Monitor logs integration, which allows the querying of metrics and logs as well as configurable alerts.

Apache Ambari

Apache Ambari, included on all HDInsight clusters, simplifies cluster management and monitoring via an easy-to-use web UI and REST API. Today, Ambari is the best way to monitor the health and availability of a single HDInsight cluster in depth.

Dashboard

The Ambari dashboard contains widgets that show a handful of metrics to give you a quick overview of your HDInsight cluster’s health. These widgets show metrics such as the number of live DataNodes (worker nodes) and JournalNodes (ZooKeeper nodes), NameNode (head node) uptime, and metrics specific to certain cluster types, such as YARN ResourceManager uptime for Spark and Hadoop clusters.

The Ambari Dashboard, included on all Azure HDInsight clusters.

Hosts – View individual node status

The hosts tab allows you to drill down further and view status information for individual nodes in the cluster. This view shows whether there are any active alerts for the selected node, as well as the status and availability of each component running on it.

The Ambari Hosts view shows detailed status information for individual nodes in your cluster.

Ambari alerts

Ambari also provides several configurable alerts out of the box that can provide notification of specific events. The number of currently active alerts is shown in the upper-left corner of Ambari in a red badge containing the number of alerts.

Ambari offers many predefined alerts related to availability, including:

DataNode Health Summary: This service-level alert is triggered if there are unhealthy DataNodes.

NameNode High Availability Health: This service-level alert is triggered if either the Active NameNode or the Standby NameNode is not running.

Percent JournalNodes Available: This alert is triggered if the number of down JournalNodes in the cluster is greater than the configured critical threshold. It aggregates the results of JournalNode process checks.

Percent DataNodes Available: This alert is triggered if the number of down DataNodes in the cluster is greater than the configured critical threshold. It aggregates the results of DataNode process checks.

A full list of Ambari alerts that help monitor the availability of a cluster can be found in our documentation, “Availability and reliability of Apache Hadoop cluster in HDInsight.”

The detailed view for each alert shows a description of the alert, the specific criteria or thresholds that will trigger a warning or critical alert, and the check interval for the criteria. The thresholds and check interval can be configured for individual alerts.

The Ambari detailed alert view shows the description of the alert and the check interval and threshold for the alert to fire.

Email Notifications

Ambari also offers support for configuring email notifications. Ambari email notifications can be a good way to monitor alerts when managing many HDInsight clusters.

Configuring Ambari email notifications can be a useful way to be notified of alerts for your clusters.

Azure Monitor logs integration

Azure Monitor logs enables data generated by multiple resources, such as HDInsight clusters, to be collected and aggregated in one place to achieve a unified monitoring experience.

As a prerequisite, you will need a Log Analytics Workspace to store the collected data. If you have not already created one, you can follow the instructions for creating a Log Analytics Workspace.

You can then easily configure an HDInsight cluster to send many workload-specific metrics to Log Analytics, such as YARN ResourceManager information for Spark/Hadoop clusters, broker topics, and controller metrics for Kafka clusters. You can even configure multiple HDInsight clusters to send metrics to the same Log Analytics Workspace so you can monitor all of your clusters in a single place. See how to enable Azure Monitor logs integration on your HDInsight cluster by visiting our documentation on using Azure Monitor logs to monitor HDInsight clusters.

Query metrics tables in the logs blade

Once Log Analytics integration is enabled, which may take a few minutes, you can start querying the logs and metrics tables.

The Logs blade in a Log Analytics workspace lets you query collected metrics and logs across many clusters.

The computer availability tab in the logs blade of your Log Analytics Workspace lists a number of sample queries related to availability, such as:

Computers availability today: Chart the number of computers sending logs each hour.

List heartbeats: List all computer heartbeats from the last hour.

Last heartbeat of each computer: Show the last heartbeat sent by each computer.

Unavailable computers: List all known computers that didn't send a heartbeat in the last 5 hours.

Availability rate: Calculate the availability rate of each connected computer.
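To make the intent of these samples concrete, here is a small Python sketch of the logic behind the "Unavailable computers" query: find nodes whose last heartbeat is older than a threshold (five hours, matching the sample above). In Log Analytics itself this runs as a Kusto query over the Heartbeat table; the node names below are invented for the example.

```python
from datetime import datetime, timedelta

# A rough illustration of the "Unavailable computers" logic, not the
# actual Kusto query: report nodes whose last heartbeat is stale.

def unavailable_computers(last_heartbeats: dict, now: datetime,
                          threshold: timedelta = timedelta(hours=5)) -> list:
    """Return names of computers whose last heartbeat is older than threshold."""
    return sorted(name for name, ts in last_heartbeats.items()
                  if now - ts > threshold)

now = datetime(2019, 4, 1, 12, 0)
heartbeats = {
    "wn0-hdi": now - timedelta(minutes=2),   # healthy worker node
    "hn1-hdi": now - timedelta(hours=6),     # no heartbeat for 6 hours
}
print(unavailable_computers(heartbeats, now))  # -> ['hn1-hdi']
```

The same idea underlies the "Availability rate" sample: instead of a yes/no staleness check, it computes the fraction of expected heartbeat intervals in which each computer actually reported.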

Azure Monitor alerts

You can also set up Azure Monitor alerts that will trigger when the value of a metric or the results of a query meet certain conditions.

You can condition on a query returning a record with a value greater or less than some threshold, or even on the number of results returned by a query. For example, you could create an alert that sends an email when one or more nodes haven’t sent a heartbeat in one hour (that is, they are presumed to be unavailable). You can create multiple conditions that all need to be met for an alert to fire.

There are several types of actions you can choose to trigger when your alert fires, such as email, SMS, push notification, voice call, an Azure Function, a Logic App, a webhook, an ITSM connection, or an Automation runbook. You can set multiple actions for a single alert. Find more information about these different types of actions by visiting our documentation, “Create and manage action groups in the Azure portal.”

Finally, you can specify a severity for the alert in addition to the name. The ability to specify severity is a powerful tool that can be used when creating multiple alerts. For example, you could create one alert to raise a Warning (Sev 1) alert if a single head node becomes unavailable and another alert that raises a Critical (Sev 0) alert in the unlikely event that both head nodes go down. Alerts can be grouped by severity when viewed later.
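The head-node example above can be sketched as a simple severity mapping. The function and severity labels are hypothetical, not an Azure Monitor API; they only illustrate the idea of escalating from Warning to Critical as availability degrades.

```python
# Hypothetical sketch: map head-node availability to an alert severity,
# mirroring the example above (Sev 1 if one head node is down, Sev 0 if
# both are down). Not an actual Azure Monitor API.

def head_node_alert_severity(head_nodes_up: int, total_head_nodes: int = 2):
    """Return the severity to raise, or None if all head nodes are healthy."""
    down = total_head_nodes - head_nodes_up
    if down == 0:
        return None            # no alert needed
    if down < total_head_nodes:
        return "Sev1"          # Warning: a single head node is unavailable
    return "Sev0"              # Critical: all head nodes are down

print(head_node_alert_severity(1))  # one of two head nodes down -> Sev1
print(head_node_alert_severity(0))  # both down -> Sev0
```

In Azure Monitor you would express this as two separate alert rules with different result-count conditions and severities, so the alerts can later be grouped and triaged by severity.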

Azure Monitor alerts are an extremely customizable way to receive alerts for specific events.

Next steps

While HDInsight’s redundant architecture, designed for high availability, means that a single failure will never impact the functionality of your cluster, HDInsight also makes sure that you are always informed about potential availability issues so they can be mitigated early on. Between Apache Ambari and Azure Monitor logs integration, Azure HDInsight offers comprehensive solutions both for monitoring a cluster in depth and for monitoring many clusters at a glance. You can learn more and see concrete examples in our documentation, “How To Monitor Cluster Availability With Ambari and Azure Monitor Logs.”

Try HDInsight now

We hope you will take full advantage of monitoring on HDInsight and we are excited to see what you will build with Azure HDInsight. Read this developer guide and follow the quick start guide to learn more about implementing these pipelines and architectures on Azure HDInsight. Stay up-to-date on the latest Azure HDInsight news and features by following us on Twitter #AzureHDInsight and @AzureHDInsight. For questions and feedback, reach out to AskHDInsight@microsoft.com.

About HDInsight

Azure HDInsight is an easy, cost-effective, enterprise-grade service for open source analytics that enables customers to easily run popular open source frameworks including Apache Hadoop, Spark, Kafka, and others. The service is available in 36 public regions and Azure Government and National Clouds. Azure HDInsight powers mission-critical applications in a wide variety of sectors and enables a wide range of use cases including ETL, streaming, and interactive querying.
Source: Azure

The future of manufacturing is open

With the expansion of IoT across all industries, data is becoming the currency of innovation. Organizations have both an opportunity and a business imperative to adopt technologies quickly, build digital competencies, and offer new value-added services that will serve their broader ecosystem.

Manufacturing is an industry where IoT is having a transformational impact, yet one that also requires many companies to come together for IoT to be effective. We see several challenges that slow down innovation in manufacturing, such as proprietary data structures from legacy industrial assets and closed industrial solutions. These closed structures foster data silos and limit productivity, hindering production and profitability. It takes more than new software to drive transformation: it takes a new approach to open standards, an ecosystem mindset, and the ability to break data out of the “walled garden,” as well as new technology.

This is why Microsoft has invested heavily in making Azure work seamlessly with OPC UA. In fact, we are the leading contributor of open source software to the OPC Foundation. To further this open platform approach, we have collaborated with world-leading manufacturers to accelerate innovation in industrial IoT and shorten time to value. But we feel we need to do more, not just directly between Microsoft and our partners but across the industry and between the partners themselves. It’s not about what any one company can deliver within its own operations; it’s about what companies can share with others across the sector to help everyone achieve at new levels. It’s clearly a much bigger task than any one organization can take on, and today, I’m pleased to share more about the investments we are making to advance innovation in the manufacturing space by enabling open platforms.

Announcing the Open Manufacturing Platform

Today at Hannover Messe 2019, we are launching the Open Manufacturing Platform (OMP) together with the BMW Group, our partner on this initiative. Built on the Microsoft Azure Industrial IoT cloud platform, the OMP will provide a reference architecture and an open data model framework for community members, who will both contribute to and learn from each other’s industrial IoT projects. We’ve set up an initial approach and are actively working to bring new community members on board. The BMW Group’s initial use case centers on its IoT platform, built on Microsoft Azure, which powers the second generation of autonomous transport systems at one of its sites, greatly simplifying its logistics processes and creating greater efficiency. More information about this and the partnership can be found here.

The OMP provides a single open platform architecture that liberates data from legacy industrial assets, standardizes data models for more efficient data correlation, and most importantly, enables manufacturers to share their data with ecosystem partners in a controlled and secure way, allowing others to benefit from their insights. With pre-built industrial use cases and reference designs, community members will work together to address common industrial challenges while maintaining ownership over their own data. Our news release, shared jointly with the BMW Group this morning, can be found here.

A rising tide that lifts all boats

The recognition of the need for an open approach is taking hold across the industry, as evidenced by SAP’s announcement today of the Open Industry 4.0 Alliance. The alliance, formed between SAP and a number of European manufacturing leaders and focused on factories, plants, and warehouses, will help create an open ecosystem for the operation of highly automated factories.

OMP and the Open Industry 4.0 Alliance are complementary visions. Both recognize the need for an open platform for the cloud and intelligent edge on the ground in the factory. Both highlight an open data model and standards-based data exchange mechanisms that allow for cross-company collaboration.

We’ve been working closely with SAP on efforts like the Open Data Initiative and across the industry on a wide range of initiatives including the Industrial Internet Consortium, the Plattform Industrie 4.0 and the OPC Foundation. We look forward to continuing this fruitful partnership and working to align OMP and the Open Industry 4.0 Alliance. Collaboration is the lifeblood of future manufacturing and the more we work together, the more we can accomplish.

Read more here.
Source: Azure

Schema validation with Event Hubs

Event Hubs is a fully managed, real-time data ingestion service on Azure. It integrates seamlessly with other Azure services, and it allows Apache Kafka clients and applications to talk to Event Hubs without any code changes.

Apache Avro is a binary serialization format. It relies on schemas, defined in JSON, that describe which fields are present and their types. Because it's a binary format, you can produce and consume Avro messages to and from Event Hubs.
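To make the schema idea concrete, here is a minimal sketch of how a JSON-defined, Avro-style schema describes a record's fields and types. It uses only the Python standard library for illustration; in practice you would use the avro or fastavro packages, and the `Telemetry` schema and its fields are made up for this example.

```python
import json

# A minimal Avro-style record schema, defined in JSON as described above.
# The schema name and fields are hypothetical.
SCHEMA_JSON = """
{
  "type": "record",
  "name": "Telemetry",
  "fields": [
    {"name": "deviceId", "type": "string"},
    {"name": "temperature", "type": "double"}
  ]
}
"""

# Map Avro primitive type names to Python types for a simple structural check.
AVRO_TO_PY = {"string": str, "double": float, "int": int, "boolean": bool}

def conforms(record: dict, schema: dict) -> bool:
    """Return True if every schema field is present with a matching type."""
    for field in schema["fields"]:
        value = record.get(field["name"])
        expected = AVRO_TO_PY[field["type"]]
        if not isinstance(value, expected):
            return False
    return True

schema = json.loads(SCHEMA_JSON)
print(conforms({"deviceId": "dev-1", "temperature": 21.5}, schema))  # True
print(conforms({"deviceId": "dev-1"}, schema))                       # False
```

A real Avro library performs this check as part of encoding: a record that doesn't match the schema simply cannot be serialized.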

Event Hubs' focus is on the data pipeline. It doesn't validate the schema of the Avro events.

If the producers and consumers cannot be expected to stay in sync on the event schemas, there needs to be a "source of truth" that tracks the schemas for both producers and consumers.

Confluent has a product for this called Schema Registry, which is part of Confluent's open source offering.

Schema Registry can store schemas, list them, list all the versions of a given schema, retrieve a specific version, get the latest version of a schema, and perform schema validation. It has a UI, and you can also manage schemas through its REST APIs.
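The operations listed above can be modeled in a few lines. The toy class below is purely illustrative; the real Schema Registry exposes these operations over REST (for example, `GET /subjects` and `GET /subjects/{subject}/versions/latest`) and additionally enforces compatibility rules between versions.

```python
class ToySchemaRegistry:
    """In-memory model of the Schema Registry operations described above.
    Illustrative only -- not the real Confluent product."""

    def __init__(self):
        self._store = {}  # subject name -> list of schema versions

    def register(self, subject: str, schema: dict) -> int:
        """Store a new schema version; version numbers start at 1."""
        versions = self._store.setdefault(subject, [])
        versions.append(schema)
        return len(versions)

    def subjects(self) -> list:
        """List all registered subjects."""
        return list(self._store)

    def versions(self, subject: str) -> list:
        """List all version numbers for a given subject."""
        return list(range(1, len(self._store[subject]) + 1))

    def get(self, subject: str, version: int) -> dict:
        """Retrieve a specific version of a schema."""
        return self._store[subject][version - 1]

    def latest(self, subject: str) -> dict:
        """Get the latest version of a schema."""
        return self._store[subject][-1]

reg = ToySchemaRegistry()
v1 = reg.register("telemetry-value", {"type": "record", "name": "T", "fields": []})
v2 = reg.register("telemetry-value",
                  {"type": "record", "name": "T",
                   "fields": [{"name": "id", "type": "string"}]})
print(reg.subjects(), reg.versions("telemetry-value"))
```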

What are my options on Azure for the Schema Registry?

You can install and manage your own Apache Kafka cluster (IaaS).
You can install Confluent Enterprise from the Azure Marketplace.
You can use HDInsight to launch a Kafka cluster with the Schema Registry. I've put together an ARM template for this; see the GitHub repo for an HDInsight Kafka cluster with Confluent's Schema Registry.

Currently, Event Hubs stores only the event data; the metadata for the schemas doesn't get stored. To store the schema metadata, you can install a small Kafka cluster on Azure alongside the Schema Registry.
See the following GitHub post on how to configure the Schema Registry to work with Event Hubs.
In a future release, Event Hubs will be able to store the schemas' metadata along with the events. At that point, just having a Schema Registry on a VM will suffice; there will be no need for a small Kafka cluster.
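Assuming an Event Hubs namespace with the Kafka endpoint enabled, pointing a standalone Schema Registry at Event Hubs as its backing store comes down to a few settings. The fragment below is a sketch with placeholder values (namespace and connection string); the exact configuration is covered in the GitHub post referenced above.

```properties
# schema-registry.properties (sketch -- replace the placeholders)
listeners=http://0.0.0.0:8081

# Point the registry's backing Kafka store at the Event Hubs Kafka endpoint.
kafkastore.bootstrap.servers=SASL_SSL://<your-namespace>.servicebus.windows.net:9093
kafkastore.security.protocol=SASL_SSL
kafkastore.sasl.mechanism=PLAIN
# Event Hubs authenticates Kafka clients with the literal username
# "$ConnectionString" and the namespace connection string as the password.
kafkastore.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="$ConnectionString" \
  password="<your-event-hubs-connection-string>";
```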

Other than the Schema Registry, are there any alternative ways of doing schema validation for the events?

Yes, we can utilize the Event Hubs Capture feature for schema validation.

While we are capturing messages to Azure Blob storage or Azure Data Lake Store, we can trigger an Azure Function via a capture event. The Function can then run custom validation of the received message's schema by leveraging the Avro tools and libraries.
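A sketch of the parsing step such a Function would perform: Event Grid delivers a `Microsoft.Storage.BlobCreated` event as a JSON array, and the captured Avro blob's URL sits in the event's `data.url` field. The storage account and blob path below are made up, and `validator` stands in for whatever Avro tooling you plug in; only the event parsing uses real structure.

```python
import json

def blob_url_from_capture_event(event_json: str) -> str:
    """Pull the captured blob's URL out of an Event Grid
    'Microsoft.Storage.BlobCreated' event payload."""
    events = json.loads(event_json)  # Event Grid delivers a JSON array
    return events[0]["data"]["url"]

def validate_capture(event_json: str, validator) -> bool:
    """Hypothetical glue code: locate the captured Avro blob and hand it
    to a schema validator. 'validator' would wrap real Avro libraries."""
    url = blob_url_from_capture_event(event_json)
    return validator(url)

# Example payload, trimmed to the relevant fields; account/path are made up.
sample = json.dumps([{
    "eventType": "Microsoft.Storage.BlobCreated",
    "data": {"url": "https://myaccount.blob.core.windows.net/capture/hub/0.avro"}
}])
print(blob_url_from_capture_event(sample))
```

In the real Function, the body of `validator` would download the blob and check the Avro container's embedded schema against the expected one.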

Please see the documentation on capturing events through Azure Event Hubs into Azure Blob Storage or Azure Data Lake Storage, and/or see how to use Event Grid and Azure Functions to migrate Event Hubs data into a data warehouse.

We can also write Spark jobs that consume events from Event Hubs and validate the Avro messages with custom schema-validation Spark code, using Java packages such as org.apache.avro.* and kafka.serializer.*. Please look at this tutorial on how to stream data into Azure Databricks using Event Hubs.

Conclusion

Microsoft Azure is a comprehensive cloud computing platform that offers you both the control of IaaS and the higher-level services of PaaS.

After assessing your project, if schema validation is required, you can use the Event Hubs PaaS service with a single Schema Registry VM instance, or leverage the Event Hubs Capture feature for schema validation.
Source: Azure

Enabling precision medicine with integrated genomic and clinical data

Precision medicine tailors a patient's medical treatment by factoring in their genetic makeup and clinical data. The key to applying this methodology is integrating clinical data with an individual’s genomic data for the most complete longitudinal healthcare record to power the most precise and effective treatment.

Problem: data in silos, detached from the point of care

Currently, clinical information resides in silos (electronic healthcare records, radiological information systems, laboratory information systems, and picture archiving and communication systems), with little to no integration or interoperability between them. Furthermore, there is not just one genome for a patient, but multiple “omes,” including the genome, proteome, transcriptome, epigenome, microbiome, and beyond. The lack of a complete, integrated longitudinal patient record incorporating multiomics to power precision medicine has several detrimental effects. First and foremost, it results in less effective medicine and suboptimal patient outcomes. It can also delay diagnoses when the data required to support a clinical decision is not readily available. Working with an incomplete medical record can increase the risk of errors. Last but not least, it can exacerbate the lack of coordination across multidisciplinary care teams, resulting in suboptimal patient care and increased healthcare costs. For precision medicine, this presents a significant challenge around how to integrate clinical data systems and clinical genomic data. The cumulative result is the reduced feasibility of providing precision medicine at the point of care.

The solution: seamless connection of clinical data with genomic data

Kanteron Systems Platform is a patient-centric, workflow-aware, precision medicine solution. The solution integrates many key types of healthcare data for a complete patient longitudinal record to power precision medicine including medical imaging, digital pathology, clinical genomics, and pharmacogenomic data.

The figure below shows key data layers of the Kanteron Platform:

Benefits

The solution provides several key benefits to help fulfill the potential of precision medicine. First, it provides a clinical content management system across the full range of data types comprising the patient record. With the cost of full genomic sequencing now dipping below the $1,000 USD mark, a tsunami of genomic data is expected. Each genome record can take up to 150 GB or more of storage. The Kanteron Platform provides support for managing this massive growth in genomic data and paves the way for genomic sequencing at scale. Through the integration of data and support for multiomics, this solution can also be used to enable pharmacogenomics, in turn helping to increase medication efficacy and reduce adverse events. Artificial intelligence and machine learning are most powerful when applied to the full patient record, across the range of data types comprising this record. Through integration of key data types, the Kanteron Platform enables healthcare organizations to realize the full potential of artificial intelligence to improve patient outcomes and reduce healthcare costs.

Azure services that make a difference

Azure offers Kanteron’s customers a level of flexibility, scalability, security, and compliance that is not possible with on-premises installations. Azure is also available across 54 regions and 140 countries worldwide, and just expanded into South Africa, enabling healthcare organizations to deploy where required and satisfy any applicable data sovereignty requirements. Azure supports a vast range of compliance requirements as seen in the Compliance offerings. We now have 91 certifications and attestations. Key Azure services used to support the Kanteron Platform include both Azure Storage, and Virtual Machines.

Recommended next steps

Explore how the Kanteron Systems Platform can power your precision medicine practice to the next level through integration of genomic and clinical data, and support for advanced artificial intelligence.
Source: Azure