Connecting Node-RED to Azure IoT Central

Today I want to show how simple it is to connect a temperature/humidity sensor to Azure IoT Central using a Raspberry Pi and Node-RED.

As many of you know, Raspberry Pi is a small, single-board computer. Its low-cost, low-power nature makes it a natural fit for IoT projects. Node-RED is a flow-based, drag-and-drop programming tool designed for IoT. It enables the creation of robust automation flows in a web browser, simplifying IoT project development.

For my example, I’m using a Raspberry Pi 3 Model B and a simple DHT22 temperature and humidity sensor, but it should work with other models of the Pi. If you have a different kind of sensor, you should be able to adapt the guide below to use it, provided you can connect Node-RED to your sensor.

Configuring Azure IoT Central

Create an app.
Create a new device template with two telemetry measurements:

Temp (temp)
Humidity (humidity)

Create a real device and get the DPS connection information.
Use dps-keygen to provision the device and get a device connection string.

Identify the three parts of the resulting connection string and save them for later.
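
For reference, an Azure IoT device connection string has the general form shown below (the values here are placeholders); the host name, device ID, and shared access key are the three parts to save:

    HostName={your hub host}.azure-devices.net;DeviceId={YOUR DEVICE ID};SharedAccessKey={YOUR KEY}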

Connecting the DHT22 sensor

Before we can get data from our DHT22 sensor, we need to connect it to the Pi. The DHT22 typically has three pins broken out, but some boards have four. If yours has four, check the datasheet to confirm which pins are voltage (may be shown as +, VCC, or VDD), data (or signal), and ground.

With the Pi powered off, use jumper wires to connect your DHT22 as shown below:

NOTE: The power jumper (red) should go to 3.3V, the data jumper (yellow) to GPIO4, and the ground jumper (black) to ground. Some boards are different, so double-check your connections!

Installing required software

I started by installing Raspbian Lite, following the official guide. Then, I installed Node-RED. At this point you should be able to open a browser and visit http://raspberrypi.lan:1880 to see the Node-RED interface. Next, install the Azure IoT Hub nodes for Node-RED; the easiest way to do this is from the Node-RED interface, using the Manage Palette command.

Install the DHT22 nodes. Unfortunately, since this node has some lower-level hardware requirements, it can't be installed through the Manage Palette command. Please follow the installation instructions for that node.
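
As a hedged sketch, assuming the package providing the rpi dht22 node is node-red-contrib-dht-sensor and its native library prerequisites from the node's instructions are already in place, the node itself is installed from your Node-RED user directory and picked up after a restart:

    cd ~/.node-red
    npm install node-red-contrib-dht-sensor
    # restart Node-RED so the new node appears in the palette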

Configuring the flow

Now that you have Node-RED up and running on your Pi, you’re ready to create your flow. By default, Node-RED should already have a flow called “Flow 1,” but you can easily create a new one by selecting the (+) icon above the canvas.

Starting the flow with the inject node

The first node we will add to this flow is an input node. For this example, we will use the inject node, which simply injects an arbitrary JSON document into the flow. From the input section of the palette on the left, drag the inject node onto the canvas. Then, double-click it to open the configuration window and set the node properties as shown below:

This node will simply inject a JSON object where the payload is set to a timestamp. We don’t really care about that value. This is just a simple way to kick off the flow.

Getting data from the DHT22

In the Node-RED palette, find the rpi dht22 node and drag it onto the canvas. Double click on it to open the configuration window, and set the node properties as shown below:

Connect the inject node to the rpi dht22 node by dragging the little handle from one to the other.
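
The exact output isn’t shown here, but the rpi dht22 node puts the temperature reading in msg.payload and adds the humidity as msg.humidity, which is what the later steps rely on. A successful read produces a message roughly like the following (values are illustrative):

    {
        "payload": "22.6",
        "humidity": "41.2",
        "topic": "rpi-dht22"
    }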

Reformatting the message

The JSON message produced by the DHT22 node isn’t formatted correctly for sending to Azure IoT, so we need to fix that. We will use the change node to do this, so drag it out from the palette onto the canvas and connect it to the DHT22 node. Double click on it to open the configuration window and set the node properties as shown below:

For the functional part of this node, we will use JSONata, which is a query and transformation language for JSON documents. After selecting the JSONata type in the “to” selector, select the […] button to open the editor and enter the following:
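
The expression isn’t reproduced inline here, but it can be recovered from the flow export in the appendix. It looks like this; replace the two placeholders with the values from your Device Connection String:

    {
        "deviceId": "{YOUR DEVICE ID}",
        "key": "{YOUR KEY}",
        "protocol": "mqtt",
        "data": {
            "temp": $number(payload),
            "humidity": $number(humidity)
        }
    }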

Here we are extracting the temperature and humidity values from the input JSON message and putting them inside the data element in the resulting JSON message. We’re also adding the device ID and shared access key which you got from the Device Connection String earlier.

Sending the data to Azure IoT Central

Now that we’ve got the JSON message ready, find the Azure IoT Hub node in the palette and drag it onto the canvas. Again, double click on it to open the configuration window and set the properties as shown here:
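
In the flow export in the appendix, this node simply carries a name and a protocol:

    "type":"azureiothub", "name":"Azure IoT Hub", "protocol":"mqtt"

The device ID and shared access key are not stored in the node itself; they are carried in the message payload built by the change node in the previous step.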

Confirming your message and debugging

The final node we will add to our flow is a debug node, which simply outputs the message it is given to the debug panel in Node-RED. Connect it to the end of the flow (after Azure IoT Hub) and set the name to “Hub Response.”

If you’re interested in seeing the JSON message at any point in the flow, you can add more debug nodes anywhere you want. You can enable or disable the output of a debug node by selecting the little box on the right side of the node.

The flow

Here is what your flow should look like. I’ve added a couple of extra debug nodes while developing this flow, but you can see that only the Hub Response node is enabled.

Before you can run the flow, you need to deploy it from the workspace. To do this, select the red Deploy button at the top right of the Node-RED screen. Then, simply select the little box on the left of the “every minute” inject node and the flow will start. Since we configured that node to run every minute, it will continue to send messages to Azure IoT Central until you stop it by either disabling the flow or redeploying.

Pop back over to your IoT Central app and you should start seeing data within a minute or so.

As you can see, connecting Node-RED to Azure IoT Central is pretty simple. This is a great way to quickly prototype and experiment with different sensors and message payloads without having to write any code! You can also use this approach for creating gateways or protocol translators so you can easily connect almost anything to Azure IoT Central.

Appendix: Flow source

If you want to just copy-paste the whole thing in instead of building it up yourself, you can import the following JSON into Node-RED and just update the three values from your Device Connection String (see the instructions above).

[{"id":"9e47273a.f12738", "type":"tab", "label":"DHT22-IoTC", "disabled":false, "info":""}, {"id":"b3d8f5b6.a243b8", "type":"debug", "z":"9e47273a.f12738", "name":"Hub Response", "active":true, "tosidebar":true, "console":false, "tostatus":false, "complete":"true", "x":740, "y":340, "wires":[]}, {"id":"117b0c09.6b3a04", "type":"azureiothub", "z":"9e47273a.f12738", "name":"Azure IoT Hub", "protocol":"mqtt", "x":520, "y":340, "wires":[["b3d8f5b6.a243b8"]]}, {"id":"ee333823.1d33a8", "type":"inject", "z":"9e47273a.f12738", "name":"", "topic":"", "payload":"", "payloadType":"date", "repeat":"60", "crontab":"", "once":false, "onceDelay":"", "x":210, "y":120, "wires":[["38f14b0d.96eb14"]]}, {"id":"38f14b0d.96eb14", "type":"rpi-dht22", "z":"9e47273a.f12738", "name":"", "topic":"rpi-dht22", "dht":22, "pintype":"0", "pin":4, "x":400, "y":120, "wires":[["f0bfed44.e988b"]]}, {"id":"f0bfed44.e988b", "type":"change", "z":"9e47273a.f12738", "name":"", "rules":[{"t":"set", "p":"payload", "pt":"msg", "to":"{t "deviceId":"{YOUR DEVICE ID} ", t "key":"{YOUR KEY}", t "protocol":"mqtt", t "data": {t "temp": $number(payload), t "humidity": $number(humidity)t t }tt}", "tot":"jsonata"}], "action":"", "property":"", "from":"", "to":"", "reg":false, "x":280, "y":340, "wires":[["117b0c09.6b3a04", "db5b70be.81e2a"]]}, {"id":"db5b70be.81e2a", "type":"debug", "z":"9e47273a.f12738", "name":"Payload", "active":true, "tosidebar":true, "console":false, "tostatus":false, "complete":"payload", "x":500, "y":420, "wires":[]}]
Source: Azure

Azure Backup now supports PowerShell and ACLs for Azure Files

We are excited to reveal a set of new features for backing up Microsoft Azure file shares natively using Azure Backup. All backup-related features have also been released to support file shares connected to Azure File Sync.

Azure files with NTFS ACLs

Azure Backup now supports preserving and restoring New Technology File System (NTFS) access control lists (ACLs) for Azure Files, in preview. Starting in 2019, Azure Backup automatically captures your file ACLs when backing up file shares. When you need to go back in time, the file ACLs are restored along with the files and folders.

Use Azure Backup with PowerShell

You can now script backups for Azure file shares using PowerShell. Use the PowerShell cmdlets to configure backups, take on-demand backups, or even restore files from file shares protected by Azure Backup.

Using PowerShell, you can take on-demand backups that retain snapshots for up to 10 years. Schedulers can run these on-demand scripts with your chosen retention to take snapshots at regular weekly, monthly, or yearly intervals. Please refer to the limitations of on-demand backups with Azure Backup.
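
As a hedged sketch (cmdlet and parameter names follow the current Az.RecoveryServices module; the vault, storage account, and file share names are placeholders), an on-demand backup with a long retention looks roughly like this:

    # Select the Recovery Services vault that protects the file share
    $vault = Get-AzRecoveryServicesVault -ResourceGroupName "myRG" -Name "myVault"
    Set-AzRecoveryServicesVaultContext -Vault $vault

    # Find the protected file share inside its storage account container
    $container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -FriendlyName "mystorageaccount"
    $item = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType AzureFiles -Name "myfileshare"

    # Take an on-demand backup and keep the snapshot for up to 10 years
    Backup-AzRecoveryServicesBackupItem -Item $item -ExpiryDateTimeUTC (Get-Date).ToUniversalTime().AddYears(10)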

If you are looking for sample scripts, please write to AskAzureBackupTeam@microsoft.com. We have created a sample script using an Azure Automation runbook that enables you to schedule backups on a periodic basis and retain them for up to 10 years.

Manage backups

A key enabler we introduced last year was the ability to “Manage backups” right from the Azure Files portal. As soon as you configure protection for a file share using Azure Backup, the “Snapshots” button on your Azure Files portal changes to “Manage backups.”

Using “Manage backups,” you can take on-demand backups, restore file shares or individual files and folders, and even change the policy used for scheduling backups. You can also go to the Recovery Services vault that backs up the file share and edit the policies used to back up Azure file shares.

Email alerts

Alerts for backup and restore jobs of Azure file shares have been enabled. The alerting capability allows you to configure notifications of job failures to chosen email addresses.

Best practices

Accidental deletion of data can happen to storage accounts, file shares, and snapshots taken by Azure Backup. It is a best practice to lock storage accounts that have Azure Backup enabled to ensure your restore points are not deleted. Warnings are also displayed before protected file shares or snapshots created by Azure Backup are deleted, which helps prevent data loss through accidental deletion.

Related links and additional content

If you are new to Azure Backup, start configuring the backup on the Azure portal.
Want more details? Check out Azure Backup documentation or the preview blog, “Introducing backup for Azure file shares.”
Need help? Reach out to the Azure Backup forum for support.
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates.

Source: Azure

Welcome to the service mesh era: Introducing a new Istio blog post series

Adopting a microservices architecture brings a host of benefits, including increased autonomy, flexibility, and modularity. But the process of decoupling a single-tier monolithic application into smaller services introduces new obstacles: How do you know what’s running? How do you roll out new versions of your services? How do you secure and monitor all those containers?

To address these challenges, you can use a service mesh: software that helps you orchestrate, secure, and collect telemetry across distributed applications. A service mesh transparently oversees and monitors all traffic for your application, typically through a set of network proxies that sit alongside each microservice. Adopting a service mesh allows you to decouple your application from the network, and in turn, allows your operations and development teams to work independently.

Alongside IBM, Lyft, and others, Google launched Istio in 2016 as an open-source service mesh solution. Built on the high-performance Envoy proxy, Istio provides a configurable overlay on your microservices running in Kubernetes. It supports end-to-end encryption between services, granular traffic and authorization policies, and unified metrics, all without any changes to your application code.

Istio’s architecture is based on trusted service mesh software used internally at Google for years. And much in the same way we brought Kubernetes into the world, we wanted to make this exciting technology available to as many users as possible. To that end, we recently announced the beta availability of Istio on GKE, an important milestone in our quest to deliver a managed, mature service mesh that you can deploy with one click. You also heard from us about our vision for a service mesh that spans both the cloud and on-prem.

To kick off 2019, we thought we’d take a step back and dive deep into how you can use Istio right now, in production. This is the first post in a practical blog series on Istio and service mesh, where we will cover all kinds of user perspectives, from developers and cluster operators to security administrators and SREs. Through real use cases, we will shed light on the “what” and “how” of service mesh, but most importantly, how Istio can help you deliver immediate business value to your customers.

To start, let’s explore why Istio matters in the context of other ongoing shifts in the cloud-native ecosystem: towards abstraction from infrastructure, towards automation, and towards a hybrid cloud environment.

Automate everything

The world of modern software moves quickly. Increasingly, organizations are looking for ways to automate the development process from source code to release, in order to address business demands and increase velocity in a competitive landscape. Continuous delivery is a pipeline-based approach for automating application deployments, and represents a key pillar in DevOps best practices.

Istio’s declarative, CRD-based configuration model integrates seamlessly with continuous delivery systems, allowing you to incorporate Istio resources into your deployment pipelines. For example, you can configure your pipeline to automatically deploy Istio VirtualServices to manage traffic for a canary deployment. Doing so lets you leverage Istio’s powerful features, from granular traffic management to in-flight chaos testing, with zero manual intervention.
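
As a minimal sketch of what such a pipeline might apply for a canary (the service name and subsets below are placeholders, not from the original post), a weighted VirtualService looks like this:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews-canary
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10

A companion DestinationRule would define the v1 and v2 subsets; promoting the canary is then just a matter of the pipeline updating the weights.
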
With its declarative configuration model, Istio can also work with modern GitOps workflows, where source control serves as the central source of truth for your infrastructure and application configuration.

Serverless, with Istio

Serverless computing, meanwhile, transforms source code into running workloads that execute only when called. Adopting a serverless pattern can help organizations reduce infrastructure costs, while allowing developers to focus on writing features and delivering business value.

Serverless platforms work well because they decouple code and infrastructure. But most of the time, organizations aren’t only running serverless workloads; they also have stateful applications, including microservices apps on Kubernetes infrastructure. To address this, several open-source, Kubernetes-based serverless platforms have emerged. These platforms allow Kubernetes users to deploy both serverless functions and traditional Kubernetes applications onto the same cluster.

Last year, we released Knative, a new project that provides a common set of building blocks for running serverless applications on Kubernetes. Knative includes components for serving requests, handling event triggers, and building containerized functions from source code. Knative Serving is built on Istio, and brings Istio’s telemetry aggregation and security-by-default to serverless functions.

Knative aims to become the standard across Kubernetes-based serverless platforms. Further, the ability to treat serverless functions as services in the same way you treat traditional containers will help provide much-needed uniformity between the serverless and Kubernetes worlds. This standardization will allow you to use the same Istio traffic rules, authorization policies, and metrics pipelines across all your workloads.

Build once, run anywhere

As Kubernetes matures, users are increasingly adopting more complex cluster configurations. Today, you might have several clusters, not one. And those clusters might span hybrid environments, whether in the public cloud, in multiple clouds, or on-prem. You might also have microservices that have to talk to single-tier applications running in virtual machines, or service endpoints to manage and secure, or functions to spin up across clusters.

Driven by the need for lower latency, security, and cost savings, the era of multi-cloud is upon us, introducing the need for tools that span both cloud and on-prem environments.

Released with 1.0, Istio Multicluster is a feature that allows you to manage a cross-cluster service mesh using a single Istio control plane, so you can take advantage of Istio’s features even with a complex, multicluster mesh topology. With Istio Multicluster, you can use the same security roles across clusters, aggregate metrics, and route traffic to a new version of an application. The multicluster story gets easier in 1.1, as the new Galley component helps synchronize service registries between clusters.

Cloud Services Platform is another example of the push towards interoperable environments, combining solutions including Google Kubernetes Engine, GKE On-Prem, and Istio, towards the ultimate goal of creating a seamless Kubernetes experience across environments.

What’s next?

Subsequent posts in this series will cover Istio’s key features: traffic management, authentication, security, observability, IT administration, and infrastructure environments.
Whether you’re just getting started with Istio, or working to move Istio into your production environment, we hope this blog post series will have something relevant and actionable for you. We’re excited to have you along for the ride on our service mesh journey. Stay tuned!
Source: Google Cloud Platform

HDInsight Metastore Migration Tool open source release now available

We are excited to share the release of the Microsoft Azure HDInsight Metastore Migration Tool (HMMT), an open-source script that can be used for applying bulk edits to the Hive metastore.

The HDInsight Metastore Migration Tool is a low-latency, no-installation solution for challenges related to data migrations in Azure HDInsight. There are many reasons why a Hive data migration may need to take place. You may need to protect your data by enabling secure transfer on your Azure storage accounts. Perhaps you will be migrating your Hive tables from WASB to Azure Data Lake Storage (ADLS) Gen2 as part of your upgrade from HDInsight 3.6 to 4.0. Or you may have decided to organize the locations of your databases, tables, and user-defined functions (UDF) to follow a cohesive structure. With HMMT, these migration scenarios and many others no longer require manual intervention.

HMMT handles Hive metadata migration scenarios in a quick, safe, and controllable environment. This blog post is divided into three sections. First, the background to HMMT is outlined with respect to the Hive metastore and Hive storage patterns. The second section covers the design of HMMT and describes initial setup steps. Finally, some sample migrations are described and solved with HMMT as a demonstration of its usage and value.

Background

The Hive metastore

The Hive metastore is a SQL database for Hive metadata, such as table, database, and user-defined function (UDF) storage locations. The Hive metastore is provisioned automatically when an HDInsight cluster is created. Alternatively, an existing SQL database may be used to persist metadata across multiple clusters; that existing SQL database is then referred to as an external metastore. HMMT is intended to be used against external metastores so that metadata migrations persist over time and across multiple clusters.

Hive storage uniform resource identifiers

For each Hive table, database, or UDF available to the cluster, the Hive metastore keeps a record of that artifact’s location in external storage. Artifact locations are persisted in a Windows Azure Storage Blob (WASB) or in Azure Data Lake Storage. Each location is represented as an Azure storage uniform resource identifier (URI), which encodes the account type, account, container, and subcontainer path that the artifact lives in. The same URI pattern applies to Hive tables, databases, and UDFs.
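
As an illustration of that layout (the account, container, and path names below are placeholders), table locations recorded in the metastore look like:

    wasb://mycontainer@myaccount.blob.core.windows.net/hive/warehouse/table1
    abfss://myfilesystem@myaccount.dfs.core.windows.net/hive/warehouse/table1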

Suppose a Hive query is executed against table1. Hive will first attempt to read the table contents from the corresponding storage entry found in the Hive metastore. Hive supports commands for displaying and updating a table’s storage location:
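
Those commands aren’t reproduced in this excerpt; in HiveQL they look like the following (the table name and URI are placeholders):

    -- Show the table's current storage location (see the Location field in the output)
    DESCRIBE FORMATTED table1;

    -- Point the table at a new storage location
    ALTER TABLE table1 SET LOCATION 'wasbs://mycontainer@myaccount.blob.core.windows.net/hive/warehouse/table1';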

Changing the storage location of a table requires the execution of an update command corresponding to the table of interest. If multiple table locations are to be changed, multiple update commands must be executed. Since storage locations must be updated manually, wholesale changes to the metastore can be an error-prone and time-consuming task. The location update story concerning non-table artifacts is even less favorable – the location of a database or UDF cannot be changed from within Hive. Therefore, the motivation behind releasing HMMT to the public is to provide a pain-free way to update the storage location of Hive artifacts. HMMT directly alters the Hive metastore, which is the fastest (and only) way to make changes to Hive artifacts at scale.

How HMMT works

HMMT generates a series of SQL commands that will directly update the Hive metastore based on the input parameters. Only storage URIs that match the input parameters will be affected by the script. The tool can alter any combination of Hive storage accounts, account-types, containers, and subcontainer paths. Note that HMMT is exclusively supported on HDInsight 3.6 and onwards.

Start using HMMT right away by downloading it directly from the Microsoft HDInsight GitHub page. HMMT requires no installation. Make sure the script is run from an IP address that is whitelisted on the Hive metastore SQL server. HMMT can be run from any UNIX command line that has one of the supported query clients installed; the script does not necessarily need to be run from within the HDInsight cluster. The initially supported clients are Beeline and SqlCmd. Since Beeline is supported, HMMT can be run directly from any HDInsight cluster headnode.

Disclaimer: Since HMMT directly alters the contents of the Hive metastore, it is recommended to use the script with caution and care. When executing the script, the post-migration contents of the metastore will be shown as console output in order to describe the potential impact of the execution. For the specified migration parameters to take effect, the flag “liverun” must be passed to the HMMT command. The tool launches as a dry run by default. In addition, it is strongly recommended to keep backups of the Hive metastore even if you do not intend to use HMMT. More information regarding Hive metastore backups can be found at the end of this blog.

Usage examples

HMMT supports a wide variety of use cases related to the migration and organization of Hive metadata. The benefit of HMMT is that the tool provides an easy way to make sure that the Hive metastore reflects the results of a data migration. HMMT may also be executed against a set of artifacts in anticipation of an upcoming data migration. This section demonstrates the usage and value of HMMT using two examples. One example will cover a table migration related to secure storage transfer, and the other will describe the process to migrate Hive UDF JAR metadata.

Example 1: Enabling secure transfer

Suppose your Hive tables are stored across many different storage accounts, and you have recently enabled secure transfer on a selection of these accounts. Since enabling secure transfer does not automatically update the Hive metastore, the storage URIs must be modified to reflect the change (for example, from WASB to WASBS). With your IP whitelisted and a supported client installed, HMMT will update all matching URIs with the following command:
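
The command itself isn’t reproduced in this excerpt. Its shape is roughly the sketch below; the script and flag names here are illustrative placeholders only, so check the HMMT GitHub repository for the exact syntax:

    ./hive-metastore-migration.sh <sql-server> <database> <username> <password> \
        --src-type wasb --src-account Acc1,Acc2,Acc3 \
        --target SDS --queryclient beeline \
        --dest-type wasbs \
        --liverun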

The first four arguments passed to the script correspond to the SQL server, database, and credentials used to access the metastore.
The next four arguments correspond to the ‘source’ attributes to be searched for. In this case, the script will affect WASB accounts Acc1, Acc2, and Acc3. There will be no filtering on the container or subcontainer path. HMMT supports WASB, WASBS, ABFS, ABFSS, and ADL as storage migration options.
The target flag represents the table in the Hive metastore to be changed. The table SDS stores Hive table locations. Other options include DBS for Hive databases, FUNC_RU for Hive UDFs, and SKEWED_COL_VALUE_LOC_MAP for the skewed store of Hive tables.
The query client flag corresponds to the query command-line tool to be used. In this case, the client of choice is Apache Beeline.

The remaining flags correspond to the ‘destination’ attributes for affected URIs. In this case, all matching URIs specified by the source options will have their account type moved to WASBS. Up to one entry per destination flag is permitted. The values of these flags are merged together to form the post-migration URI pattern.

This sample script command will only pick up table URIs corresponding to WASB accounts, where the account name is “Acc1”, “Acc2”, or “Acc3.” The container and path options are left as a wildcard, meaning that every table under any of these three accounts will have its URI adjusted. The adjustment made by the script is to set the storage type to WASBS. No other aspects of the table URIs will be affected.

Example 2: UDF JAR organization

In this example, suppose you have loaded many UDFs into Hive over time. UDFs are implemented in JAR files, which may be stored in various account containers depending on which cluster the JAR was introduced from. As a result, the table FUNC_RU will have many entries across a variety of account containers and paths. If you wanted to clean up the locations of UDF JARs, you could do so using this command:
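
Again, the exact command isn’t shown here; using the same placeholder flag names as the previous sketch, it would look something like this:

    ./hive-metastore-migration.sh <sql-server> <database> <username> <password> \
        --src-type wasb --src-account Acc1 \
        --target FUNC_RU --queryclient beeline \
        --dest-container jarstoragecontainer --dest-path /jarfiles/ \
        --liverun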

This command will pick up UDF JAR URIs, which are exclusively found in the table FUNC_RU, in the WASB storage account “Acc1” for any container and subcontainer path. Once the script is complete, the Hive metastore will show that all JARs from that account can be found in the /jarfiles/ directory under the container “jarstoragecontainer."

Feedback and contributions

We would love to get your feedback. Please reach us with any feature requests, suggestions, and inquiries at askhdinsight@microsoft.com. We also encourage feature asks and source-code contributions to HMMT itself via the HDInsight GitHub repository.

Other resources

HMMT GitHub repository
Hive external metastore
Beeline usage
Microsoft SqlCmd
Azure SQL Server Whitelisting
Azure HDInsight
HDInsight on GitHub
MSDN forum
Stack Overflow
SQL Database Backup steps
Guide to HIVE UDFs

Source: Azure

Protecting your cloud VMs with Cloud IAP context-aware access controls

Organizations have increasing numbers of internet-facing apps and infrastructure that they need to protect. Since 2011, Google has been leveraging the BeyondCorp security model to protect our internet-facing resources, and over the past few years we have made it easier for you to adopt the same model for your apps, APIs, and infrastructure through context-aware access capabilities in our cloud products. At Next ‘18 in London, we added context-aware access capabilities to Cloud Identity-Aware Proxy (IAP) to help protect web apps. Today, we are extending these capabilities to TCP services such as SSH and RDP, to help protect access to your cloud-based virtual machines (VMs).

A zero trust security model for your apps and infrastructure

Context-aware access allows you to define and enforce granular access to cloud resources based on a user’s identity and the context of their request. This can help increase your organization’s security posture while decreasing complexity for users, giving them the ability to access apps or infrastructure resources securely from virtually anywhere and any trusted device.

Granular access controls

Unlike the all-or-nothing approach often used in the traditional network-based access model, context-aware access helps you ensure that access is restricted to the right people and only to the right resources. You can now determine who can access a VM based on unique security considerations such as location, device security status, and the user’s identity. In addition, VMs protected by IAP require no changes; simply turn on IAP, and access to your VM instance is protected.

Here’s how it works

Let’s say you’re an administrator who wants to allow SSH access to VMs for a group of DevOps users in GCP. You can now use Cloud IAP to enable access without exposing any services directly to the internet. The DevOps admin simply configures Cloud IAP’s TCP forwarding feature. Then, when a user runs SSH from the gcloud command-line tool, the SSH traffic is tunneled over a WebSocket connection to Cloud IAP, which applies any relevant context-aware access policies. If access is allowed, the tunneled SSH traffic is forwarded to the VM instance transparently. Remote Desktop Protocol (RDP) works similarly. As an administrator, all you have to do is configure access to the VM instances from the Cloud IAP IP subnet; your VM instances don’t even need public IP addresses or dedicated bastion hosts.

Getting started

Context-aware access for TCP services in Cloud IAP is now available in beta. To get started, navigate to the admin console and check out the documentation for step-by-step instructions.
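
To make the tunneling workflow described above concrete, here is a hedged sketch using current gcloud releases (the instance, zone, and firewall rule names are placeholders):

    # SSH to a VM with no public IP, tunneling through Cloud IAP
    gcloud compute ssh my-instance --zone us-central1-a --tunnel-through-iap

    # Or open a local tunnel for RDP (port 3389) and point your RDP client at localhost:3389
    gcloud compute start-iap-tunnel my-instance 3389 --local-host-port=localhost:3389 --zone us-central1-a

    # Allow ingress from Cloud IAP's TCP forwarding range to the instances
    gcloud compute firewall-rules create allow-ingress-from-iap \
        --direction=INGRESS --action=ALLOW --rules=tcp:22,tcp:3389 \
        --source-ranges=35.235.240.0/20
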
Source: Google Cloud Platform