How Azure Security Center helps you protect your environment from new vulnerabilities

Recently, the disclosure of a vulnerability (CVE-2019-5736) was announced in the open-source software (OSS) container runtime, runc. This vulnerability can allow an attacker to gain root-level code execution on a host. runc is the underlying container runtime beneath many popular containers.

Azure Security Center can help you detect vulnerable resources in your environment within Microsoft Azure, on-premises, or other clouds. Azure Security Center can also detect that an exploitation has occurred and alert you.

Azure Security Center offers several methods that can be applied to mitigate or detect malicious behavior:

Strengthen security posture – Azure Security Center periodically analyzes the security state of your resources. When it identifies potential security vulnerabilities, it creates recommendations that guide you through the process of configuring the necessary controls. We have plans to add recommendations for when unpatched resources are detected. You can find more information about strengthening security posture in our documentation, “Managing security recommendations in Azure Security Center.”
File Integrity Monitoring (FIM) – This method examines files and registry keys of operating systems, application software, and more, for changes that might indicate an attack. By enabling FIM, Azure Security Center will be able to detect changes in monitored directories that can indicate malicious activity. Guidance on how to enable FIM and add file tracking on Linux machines can be found in our documentation, “File Integrity Monitoring in Azure Security Center.”
Security alerts – Azure Security Center detects suspicious activities on Linux machines using the auditd framework. Collected records flow into a threat detection pipeline and surface as alerts when malicious activity is detected. Security alerts coverage will soon include new analytics to identify machines compromised via the runc vulnerability. You can find more information about security alerts in our documentation, “Azure Security Center detection capabilities.”
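Conceptually, file integrity monitoring boils down to comparing cryptographic digests of watched files against a known-good baseline. The sketch below is a minimal illustration of that idea, not Azure Security Center's actual implementation; the file names are invented for the demo:

```python
import hashlib
import os
import tempfile

def snapshot(paths):
    """Record a SHA-256 digest for each monitored file."""
    digests = {}
    for path in paths:
        with open(path, "rb") as f:
            digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

def detect_changes(baseline, current):
    """Return the files whose contents differ from the baseline."""
    return {p for p in baseline if baseline[p] != current.get(p)}

# Demo: baseline two files, tamper with one, detect the change.
d = tempfile.mkdtemp()
runc_path = os.path.join(d, "runc")          # stand-in for a watched binary
conf_path = os.path.join(d, "config.json")
for path, data in [(runc_path, b"original binary"), (conf_path, b"{}")]:
    with open(path, "wb") as f:
        f.write(data)
baseline = snapshot([runc_path, conf_path])
with open(runc_path, "wb") as f:             # simulate a malicious overwrite
    f.write(b"tampered binary")
changed = detect_changes(baseline, snapshot([runc_path, conf_path]))
```

Here `changed` contains only the tampered file, which is exactly the kind of signal FIM surfaces as a potential indicator of attack.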

To apply the best security hygiene practices, we recommend configuring your environment so that it has the latest updates from your distribution provider. System updates can be performed through Azure Security Center; for more guidance, visit our documentation, “Apply system updates in Azure Security Center.”
Source: Azure

Update 19.02 for Azure Sphere public preview now available

The Azure Sphere 19.02 release is available today. In our second quarterly release after public preview, our focus is on broader enablement of device capabilities, reducing your time to market with new reference solutions, and continuing to prioritize features based on feedback from organizations building with Azure Sphere.

Today Azure Sphere’s hardware offerings are centered around our first Azure Sphere certified MCU, the MediaTek MT3620. Expect to see additional silicon announcements in the near future, as we work to expand our silicon and hardware ecosystems to enable additional technical scenarios and ultimately deliver more choice to manufacturers.

Our 19.02 release focuses on broadening what you can accomplish with MT3620 solutions. With this release, organizations will be able to use new peripheral classes (I2C, SPI) from the A7 core. We continue to build on the private Ethernet functionality by adding new platform support for critical networking services (DHCP and SNTP) that enable a set of brownfield deployment scenarios. Additionally, by leveraging our new reference solutions and hardware modules, device builders can now bring the security of Azure Sphere to products even faster than before.

To build applications that leverage this new functionality, you will need to ensure that you have installed the latest Azure Sphere SDK Preview for Visual Studio. All Wi-Fi connected devices will automatically receive an updated Azure Sphere OS.

New connectivity options – This release supports DHCP and SNTP servers in private LAN configurations. You can optionally enable these services when connecting an MT3620 to a private Ethernet connection.
Broader device enablement – Beta APIs now enable hardware support for both I2C and SPI peripherals. Additionally, we have enabled broader configurability options for UART.
More space for applications – The MT3620 now supports 1 MB of space dedicated for your production application binaries.
Reducing time to market of MT3620-enabled products – To reduce complexity in getting started with the many aspects of Azure Sphere we have added several samples and reference solutions to our GitHub samples repo:

Private Ethernet – Demonstrates how to wire the supported microchip part and provides the software to begin developing a private Ethernet-based solution.
Real-time clock – Demonstrates how to set, manage, and integrate the MT3620 real time clock with your applications.
Bluetooth command and control – Demonstrates how to enable command and control scenarios by extending the Bluetooth Wi-Fi pairing solution released in 18.11.
Better security options for BLE – Extends the Bluetooth reference solution to support a PIN between the paired device and Azure Sphere.
Azure IoT – Demonstrates how to use Azure Sphere with either Azure IoT Central or an Azure IoT Hub.
CMake preview – Provides an early preview of CMake as an alternative for building Azure Sphere applications both inside and outside Visual Studio. This limited preview lets customers begin testing the use of existing assets in Azure Sphere development.

OS update protection – The Azure Sphere OS now protects against a set of update scenarios that would cause the device to fail to boot. The OS detects and recovers from these scenarios by automatically and atomically rolling back the device OS to its last known good configuration.
Latest Azure IoT SDK – The Azure Sphere OS has updated its Azure IoT SDK to the LTS Oct 2018 version.

All Wi-Fi connected devices that were previously updated to the 18.11 release will automatically receive the 19.02 Azure Sphere OS release. As a reminder, if your device is still running a release older than 18.11, it will be unable to authenticate to an Azure IoT Hub via DPS or receive OTA updates. See the Release Notes for how to proceed in that case.

As always, continued thanks to our preview customers for your comments and suggestions. Microsoft engineers and Azure Sphere community experts will respond to product-related questions on our MSDN forum and development questions on Stack Overflow. We also welcome product feedback and new feature requests.

Visit the Azure Sphere website for documentation and more information on how to get started with your Azure Sphere development kit. You can also email us to kick off an Azure Sphere engagement with your Microsoft representative.
Source: Azure

Learn how to build with Azure IoT: Upcoming IoT Deep Dive events

Microsoft IoT Show, the place to go to hear about the latest announcements, tech talks, and technical demos, is starting a new interactive, live-streaming event and technical video series called IoT Deep Dive!

Each IoT Deep Dive will bring in a set of IoT experts. The first episode, “Building End to End Industrial Solutions with PTC ThingWorx and Azure IoT,” features Joseph Biron, CTO of IoT at PTC, and Chafia Aouissi, Senior Program Manager for Azure IoT. Join us on February 20, 2019 from 9:00 AM – 9:45 AM Pacific Standard Time to walk through end to end IoT solutions, technical demos, and best practices.

Come learn and ask questions about how to build IoT solutions and deep dive into intelligent edge, tooling, DevOps, security, asset tracking, and other top requested technical deep dives. Perfect for developers, architects, or anyone who is ready to accelerate going from proof of concept to production, or needs best practices tips while building their solutions.

Upcoming events

IoT Deep Dive Live: Building End to End Industrial Solutions with PTC ThingWorx and Azure

PTC ThingWorx and Microsoft Azure IoT are proven industrial innovation solutions with a market-leading IoT cloud infrastructure. Sitting on top of Azure IoT, ThingWorx enables robust and rapid creation of IoT applications and solutions that take full advantage of Azure services such as IoT Hub. Join the event to learn how to build an E2E industrial solution. You can set up a reminder to join the live event.

When: February 20, 2019 at 9:00 AM – 9:45 AM Pacific Standard Time | Level 300
Learn about: ThingWorx, Vuforia Studio, Azure IoT, and Dynamics 365
Special guests:

Joseph Biron, Chief Technology Officer of IoT, PTC
Neal Hagermoser, Global ThingWorx COE Lead, PTC
Chafia Aouissi, Senior Program Manager, Azure IoT
Host: Pamela Cortez, Program Manager, Azure IoT

Industries and use cases: Smart connected product manufacturers in verticals including automotive, industrial equipment, aerospace, electronics, and high tech.

Location Intelligence for Transportation with Azure Maps 

Come learn how to use Azure Maps to provide location intelligence in different areas of transportation such as fleet management, asset tracking, and logistics.

When: March 6, 2019 9:00 AM – 9:45 AM Pacific Standard Time | Level 300
Learn about: Azure Maps, Azure IoT Hub, Azure IoT Central, and Azure Event Grid
Guest speakers:

Ricky Brundritt, Senior Program Manager, Azure IoT
Pamela Cortez, Program Manager, Azure IoT

Industries and use cases: Fleet management, logistics, asset management, and IoT

Submit questions before the events on the Microsoft IoT tech community or during the IoT Deep Dive live event itself! All videos will be hosted on Microsoft IoT Show after the live event.
Source: Azure

Under the hood: Performance, scale, security for cloud analytics with ADLS Gen2

On February 7, 2019 we announced the general availability of Azure Data Lake Storage (ADLS) Gen2. Azure is now the only cloud provider to offer a no-compromise cloud storage solution that is fast, secure, massively scalable, cost-effective, and fully capable of running the most demanding production workloads. In this blog post we’ll take a closer look at the technical foundation of ADLS that will power the end to end analytics scenarios our customers demand.

ADLS is the only cloud storage service that is purpose-built for big data analytics. It integrates with a broad range of analytics frameworks to enable a true enterprise data lake, maximizes performance via true filesystem semantics, scales to meet the needs of the most demanding analytics workloads, is priced at cloud object storage rates, and is flexible enough to support a broad range of workloads so that you are not required to create silos for your data.

A foundational part of the platform

The Azure Analytics Platform not only features a great data lake for storing your data with ADLS, but is rich with additional services and a vibrant ecosystem that allows you to succeed with your end to end analytics pipelines.

Azure features services such as HDInsight and Azure Databricks for processing data; Azure Data Factory to ingest and orchestrate data; and Azure SQL Data Warehouse, Azure Analysis Services, and Power BI to consume your data in a pattern known as the Modern Data Warehouse, allowing you to maximize the benefit of your enterprise data lake.

Additionally, an ecosystem of popular analytics tools and frameworks integrate with ADLS so that you can build the solution that meets your needs.

“Data management and data governance is top of mind for customers implementing cloud analytics solutions. The Azure Data Lake Storage Gen2 team have been fantastic partners ensuring tight integration to provide a best-in-class customer experience as our customers adopt ADLS Gen2.”

– Ronen Schwartz, Senior Vice president & General Manager of Data Integration and Cloud Integration, Informatica

"WANDisco’s Fusion data replication technology combined with Azure Data Lake Storage Gen2 provides our customers a compelling LiveData solution for hybrid analytics by enabling easy access to Azure Data Services without imposing any downtime or disruption to on premise operations.”

– David Richards, Co-Founder and CEO, WANdisco

“Microsoft continues to innovate in providing scalable, secure infrastructure which go hand in hand with Cloudera’s mission of delivering on the Enterprise Data Cloud. We are very pleased to see Azure Data Lake Storage Gen2 roll out globally. Our mutual customers can take advantage of the simplicity of administration this storage option provides when combined with our analytics platform.”

– Vikram Makhija, General Manager for Cloud, Cloudera


Performance is the number one driver of value for big data analytics workloads. The reason for this is simple: the more performant the storage layer, the less compute (the expensive part!) is required to extract the value from your data. Therefore, not only do you gain a competitive advantage by achieving insights sooner, you do so at a significantly reduced cost.

“We saw a 40 percent performance improvement and a significant reduction of our storage footprint after testing one of our market risk analytics workflows at Zurich’s Investment Management on Azure Data Lake Storage Gen2.”

– Valerio Bürker, Program Manager Investment Information Solutions, Zurich Insurance

Let’s look at how ADLS achieves this level of performance. The most notable feature is the Hierarchical Namespace (HNS), which allows this massively scalable storage service to arrange your data like a filesystem with a hierarchy of directories. All analytics frameworks (e.g., Spark, Hive) are built with an implicit assumption that the underlying storage service is a hierarchical filesystem. This is most obvious when data is written to temporary directories which are renamed at the completion of the job. For traditional cloud-based object stores, this is an O(n) operation (n copies followed by n deletes) that dramatically impacts performance. In ADLS this rename is a single atomic metadata operation.
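To make the cost difference concrete, here is a toy model (purely illustrative, not the service's internals) contrasting a directory rename on a flat object store with one under a hierarchical namespace:

```python
def rename_flat_store(store, src_prefix, dst_prefix):
    """Flat object store: 'renaming' a directory means copying and then
    deleting every object under the prefix -- O(n) operations."""
    ops = 0
    for key in [k for k in store if k.startswith(src_prefix)]:
        store[dst_prefix + key[len(src_prefix):]] = store.pop(key)
        ops += 2  # one copy plus one delete per object
    return ops

def rename_hns(directories, src, dst):
    """Hierarchical namespace: the directory entry is re-pointed in a
    single atomic metadata operation, however many files it holds."""
    directories[dst] = directories.pop(src)
    return 1

# A job commit that renames 1,000 temporary output files:
store = {f"_tmp/part-{i}": b"" for i in range(1000)}
flat_ops = rename_flat_store(store, "_tmp/", "output/")

dirs = {"_tmp": [f"part-{i}" for i in range(1000)]}
hns_ops = rename_hns(dirs, "_tmp", "output")
```

For the flat store the commit costs 2,000 operations and is not atomic (a failure mid-way leaves a half-renamed directory); under HNS it is one atomic operation regardless of file count.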

The other contributor to performance is the Azure Blob Filesystem (ABFS) driver. This driver takes advantage of the fact that the ADLS endpoint is optimized for big data analytics workloads. These workloads are most sensitive to maximizing throughput via large IO operations, as distinct from other general purpose cloud stores that must optimize for a much larger range of IO operations. This level of optimization leads to significant IO performance improvements that directly benefits the performance and cost aspects of running big data analytics workloads on Azure. The ABFS driver is contributed as part of Apache Hadoop® and is available in HDInsight and Azure Databricks, as well as other commercial Hadoop distributions.
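The effect of IO size on effective throughput can be approximated with simple arithmetic. The numbers below (a 5 ms per-request overhead and a 1 Gb/s link) are assumptions chosen purely for illustration:

```python
def effective_throughput(total_bytes, io_size, request_latency_s, bandwidth_bps):
    """Bytes per second actually achieved when every request pays a
    fixed latency cost on top of the raw transfer time."""
    requests = total_bytes / io_size
    total_time = requests * request_latency_s + total_bytes / bandwidth_bps
    return total_bytes / total_time

GIB = 1 << 30
LINK = 125_000_000  # 1 Gb/s expressed in bytes per second
small_io = effective_throughput(GIB, 64 * 1024, 0.005, LINK)        # 64 KiB reads
large_io = effective_throughput(GIB, 8 * 1024 * 1024, 0.005, LINK)  # 8 MiB reads
```

With small reads the fixed per-request cost dominates; with large reads the transfer approaches the raw bandwidth of the link, which is why a driver optimized for analytics workloads favors large IO operations.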


Scalability for big data analytics is also critically important. There’s no point having a solution that works great for a few TBs of data but collapses as the data size inevitably grows. The rate of growth of big data analytics projects tends to be non-linear as a consequence of more diverse and accessible sources of data. Most projects benefit from the principle that the more data you have, the better the insights. However, this leads to a design challenge: the system must scale at the same rate as the growth of the data. One of the great design pivots of big data analytics frameworks, such as Hadoop and Spark, is that they scale horizontally, meaning that as the data and/or processing grows, you can just add more nodes to your cluster and the processing continues unabated. This, however, relies on the storage layer scaling linearly as well.

This is where the value of building ADLS on top of the existing Azure Blob service shines. The EB scale of this service now applies to ADLS ensuring that no limits exist on the amount of data to be stored or accessed. In practical terms, customers can store 100s of PB of data which can be accessed with throughput to satisfy the most demanding workloads.


For customers wanting to build a data lake to serve the entire enterprise, security is no lightweight consideration. There are multiple aspects to providing end to end security for your data lake:

Authentication – Azure Active Directory OAuth bearer tokens provide industry standard authentication mechanisms, backed by the same identity service used throughout Azure and Office365.
Access control – A combination of Azure Role Based Access Control (RBAC) and POSIX-compliant Access Control Lists (ACLs) provides flexible and scalable access control. Significantly, the POSIX ACLs are the same mechanism used within Hadoop.
Encryption at rest and in transit – Data stored in ADLS is encrypted using either a system-supplied or customer-managed key. Additionally, data is encrypted using TLS 1.2 while in transit.
Network transport security – Given that ADLS exposes endpoints on the public Internet, transport-level protections are provided via Storage Firewalls that securely restrict where the data may be accessed from, enforced at the packet level.
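To see how the POSIX-style layer evaluates a request, here is a deliberately simplified sketch of the evaluation order (owner, then named users, then groups, then other). Real POSIX ACLs also involve a mask entry, omitted here, and the names and permission strings are invented:

```python
def check_access(acl, user, groups, want):
    """Minimal POSIX-style ACL check. Named-user entries take precedence
    over group entries; any matching group entry granting the permission
    suffices; otherwise fall back to the 'other' entry."""
    if user == acl.get("owner"):
        return want in acl.get("owner_perms", "")
    named_users = acl.get("users", {})
    if user in named_users:
        return want in named_users[user]
    matched = granted = False
    for g in groups:
        if g in acl.get("groups", {}):
            matched = True
            granted = granted or want in acl["groups"][g]
    if matched:
        return granted
    return want in acl.get("other", "")
```

Note that a named-user entry wins even when one of the caller's groups would grant more access, which is why entry ordering matters when you lock down a data lake.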

Tight integration with analytics frameworks results in an end to end secure pipeline. The HDInsight Enterprise Security Package makes end-user authentication flow through the cluster and to the data in the data lake.

Get started today!

We’re excited for you to try Azure Data Lake Storage! Get started today and let us know your feedback.

Get started with Azure Data Lake Storage.
Watch the video, “Create your first ADLS Gen2 Data Lake.”
Read the general availability announcement.
Learn how ADLS improves the Azure analytics platform in the blog post, “Individually great, collectively unmatched: Announcing updates to 3 great Azure Data Services.”
Refer to the Azure Data Lake Storage documentation.
Learn how to deploy an HDInsight cluster with ADLS.
Deploy an Azure Databricks workspace with ADLS.
Ingest data into ADLS using Azure Data Factory.

Source: Azure

Monitor at scale in Azure Monitor with multi-resource metric alerts

Our customers rely on Azure to run large scale applications and services critical to their business. To run services at scale, you need to set up alerts to proactively detect, notify, and remediate issues before they affect your customers. However, configuring alerts can be hard when you have a complex, dynamic environment with lots of moving parts.

Today, we are excited to release multi-resource support for metric alerts in Azure Monitor to help you set up critical alerts at scale. Metric alerts in Azure Monitor work on a host of multi-dimensional platform and custom metrics, and notify you when the metric breaches a threshold that was either defined by you or detected automatically.

With this new feature, you will be able to set up a single metric alert rule that monitors:

A list of virtual machines in one Azure region
All virtual machines in one or more resource groups in one Azure region
All virtual machines in a subscription in one Azure region

Benefits of using multi-resource metric alerts

Get alerting coverage faster: With a small number of rules, you can monitor all the virtual machines in your subscription. Multi-resource rules set at subscription or resource group level can automatically monitor new virtual machines deployed to the same resource group/subscription (in the same Azure region). Once you have such a rule created, you can deploy hundreds of virtual machines all monitored from day one without any additional effort.
Much smaller number of rules to manage: You no longer need to have a metric alert for every resource that you want to monitor.
You still get resource level notifications: You still get granular notifications per impacted resource, so you always have the information you need to diagnose issues.
Even simpler at-scale experience: Using Dynamic Thresholds along with multi-resource metric alerts, you can monitor each virtual machine without the need to manually identify and set thresholds that fit all the selected resources. The dynamic condition type applies tailored thresholds based on advanced machine learning (ML) capabilities that learn each metric's historical behavior and identify patterns and anomalies.
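Under the hood, a dynamic threshold is derived from the metric's own history rather than a hand-picked constant. The toy detector below (a rolling mean and standard-deviation band standing in for the far more sophisticated ML the service uses; the window size and sensitivity are illustrative) shows the basic idea:

```python
from statistics import mean, stdev

def dynamic_threshold_alerts(samples, window=30, sensitivity=3.0):
    """Flag any sample deviating more than `sensitivity` standard
    deviations from the rolling mean of the preceding `window` samples."""
    alerts = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(samples[i] - mu) > sensitivity * sigma:
            alerts.append(i)
    return alerts

# A CPU metric oscillating gently around 51 percent, with one spike.
cpu = [50.0 + (i % 5) * 0.5 for i in range(40)]
cpu[35] = 95.0
spikes = dynamic_threshold_alerts(cpu)
```

Because the band is learned per time series, a single rule can cover hundreds of VMs whose "normal" levels differ wildly, with no per-resource tuning.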

Setting up a multi-resource metric alert rule

When you set up a new metric alert rule in the alert rule creation experience, use the checkboxes to select all the virtual machines you want the rule to be applied to. Please note that all the resources must be in the same Azure region.

You can select one or more resource groups, or select a whole subscription to apply the rule to all virtual machines in the subscription.

If you select all virtual machines in your subscription, or one or more resource groups, you get the option to auto-grow your selection. Selecting this option means the alert rule will automatically monitor any new virtual machines that are deployed to this subscription or resource group. With this option selected, you don’t need to create a new rule or edit an existing rule whenever a new virtual machine is deployed.

You can also use Azure Resource Manager templates to deploy multi-resource metric alerts. Learn more in our documentation, “Understand how metric alerts work in Azure Monitor.”


The pricing for metric alert rules is based on the number of metric time series monitored by an alert rule. This same pricing applies to multi-resource metric alert rules.

Wrapping up

We are excited about this new capability that makes configuring and managing metric alert rules at scale easier. This functionality is currently only supported for virtual machines, with support for other resource types coming soon. We would love to hear what you think about it and what improvements we should make. Contact us at
Source: Azure

Protect Azure Virtual Machines using Storage Spaces Direct with Azure Site Recovery

Storage Spaces Direct (S2D) lets you host a guest cluster on Microsoft Azure, which is especially useful in scenarios where virtual machines (VMs) host a critical application like SQL Server, Scale-Out File Server, or SAP ASCS. You can learn more about clustering by reading the article, “Deploying IaaS VM Guest Clusters in Microsoft Azure.” I am also happy to share that with the latest Azure Site Recovery (ASR) update, you can now protect these business critical applications. ASR support for Storage Spaces Direct allows you to take your highly available application and make it even more resilient by providing protection against region-level failures.

We continue to deliver on our promise of simplicity, and you can protect your Storage Spaces Direct cluster in three simple steps:

Inside the recovery services vault, select +replicate.

1. Select a replication policy with application consistency off. Please note that only crash-consistent replication is currently supported.

2. Select all the nodes in the cluster and make them part of a Multi-VM consistency group. To learn more about Multi-VM consistency please visit our documentation, “Common questions: Azure-to-Azure replication.”

3. Lastly, select OK to enable the replication.

Next steps

Begin protecting virtual machines using Storage Spaces Direct. To get started visit our documentation, “Replicate Azure Virtual Machines using storage spaces direct to another Azure region.”

Disaster recovery between Azure regions is available in all Azure regions where ASR is available. Please note, this feature is only available for Azure Virtual Machines’ disaster recovery.

Related links and additional content

Check the most common queries on Azure Virtual Machine disaster recovery.
Learn more about the supported configurations for replicating Azure Virtual Machines.
Need help? Reach out to Azure Site Recovery forum for support.
Tell us how we can improve Azure Site Recovery by contributing new ideas and voting on existing ones.

Source: Azure

Anomaly detection using built-in machine learning models in Azure Stream Analytics

Built-in machine learning (ML) models for anomaly detection in Azure Stream Analytics significantly reduce the complexity and costs associated with building and training machine learning models. This feature is now available for public preview worldwide.

What is Azure Stream Analytics?

Azure Stream Analytics is a fully managed serverless PaaS offering on Azure that enables customers to analyze and process fast moving streams of data, and deliver real-time insights for mission critical scenarios. Developers can use a simple SQL language (extensible to include custom code) to author and deploy powerful analytics processing logic that can scale-up and scale-out to deliver insights with millisecond latencies.

Traditional way to incorporate anomaly detection capabilities in stream processing

Many customers use Azure Stream Analytics to continuously monitor massive amounts of fast-moving streams of data in order to detect issues that do not conform to expected patterns and prevent catastrophic losses. This in essence is anomaly detection.

For anomaly detection, customers traditionally relied on either sub-optimal methods of hard coding control limits in their queries, or used custom machine learning models. Development of custom learning models not only requires time, but also high levels of data science expertise along with nuanced data pipeline engineering skills. Such high barriers to entry precluded adoption of anomaly detection in streaming pipelines despite the associated value for many Industrial IoT sites.

Built-in machine learning functions for anomaly detection in Stream Analytics

With built-in machine learning based anomaly detection capabilities, Azure Stream Analytics reduces the complexity of building and training custom machine learning models to simple function calls. Two new unsupervised machine learning functions are being introduced to detect the two most commonly occurring classes of anomalies, namely temporary and persistent.

AnomalyDetection_SpikeAndDip function to detect temporary or short-lasting anomalies such as spikes or dips. This is based on the well-documented kernel density estimation algorithm.
AnomalyDetection_ChangePoint function to detect persistent or long-lasting anomalies such as bi-level changes, slow increasing trends, and slow decreasing trends. This is based on another well-known algorithm called exchangeability martingales.


SELECT sensorid, System.Timestamp as time, temperature as temp,
    AnomalyDetection_SpikeAndDip(temperature, 95, 120, 'spikesanddips')
        OVER(PARTITION BY sensorid LIMIT DURATION(second, 120)) as SpikeAndDipScores
FROM input

In the example above, the AnomalyDetection_SpikeAndDip function helps monitor a set of sensors for spikes or dips in the temperature readings. The underlying ML model uses a user-supplied confidence level of 95 percent to set the model sensitivity. A training event count of 120, corresponding to a 120-second sliding window, is supplied as a function parameter. Note that the job is partitioned by sensorid, which results in multiple ML models being trained under the hood, one for each sensor, all within the same single query.
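The per-partition behavior described above can be mimicked with a toy detector that keeps an independent sliding window per sensor. This is a crude z-score stand-in for the kernel density estimation the built-in function uses; the window size and sensitivity are arbitrary:

```python
from collections import defaultdict, deque
from statistics import mean, stdev

class PerSensorSpikeDetector:
    """One independent model per partition key, mirroring how a query
    partitioned by sensorid trains a separate model for each sensor."""

    def __init__(self, window=120, sensitivity=2.5):
        self.sensitivity = sensitivity
        self.history = defaultdict(lambda: deque(maxlen=window))

    def score(self, sensorid, temperature):
        """Return True if the reading looks like a spike or dip for this
        particular sensor, then fold it into the sensor's window."""
        window = self.history[sensorid]
        is_anomaly = False
        if len(window) >= 10:
            mu, sigma = mean(window), stdev(window)
            is_anomaly = sigma > 0 and abs(temperature - mu) > self.sensitivity * sigma
        window.append(temperature)
        return is_anomaly

detector = PerSensorSpikeDetector()
# Sensor "a" hovers around 20 degrees; sensor "b" around 80.
for i in range(30):
    detector.score("a", 20.0 + 0.1 * (i % 2))
    detector.score("b", 80.0 + 0.1 * (i % 2))
spike_on_a = detector.score("a", 35.0)   # a large jump for sensor "a"
normal_on_b = detector.score("b", 80.1)  # unremarkable for sensor "b"
```

A reading of 35 degrees is only anomalous relative to sensor "a"'s own history; this is the point of per-partition models, where "anomalous" is defined against each sensor's baseline rather than a global one.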

Get started today

We’re excited for you to try out anomaly detection functions in Azure Stream Analytics. To try this new feature, please refer to the feature documentation, "Anomaly Detection in Azure Stream Analytics."
Source: Azure

Moving your Azure Virtual Machines has never been easier!

To meet customer demand, Azure is continuously expanding. We’ve been adding new Azure regions and introducing new capabilities. As a result, customers can now move their existing virtual machines (VMs) to new regions while adopting the latest capabilities. There are other factors that prompt our customers to relocate their VMs. For example, you may want to do that to increase SLAs.

In this blog, we will walk you through the steps you need to follow to move your VMs across regions or within the same region.

Why do customers want to move their Azure IaaS Virtual Machines?

Some of the most common reasons that prompt our customers to move their virtual machines include:

•    Geographical proximity: “I deployed my VM in region A and now region B, which is closer to my end users, has become available.”

•    Mergers and acquisitions: “My organization was acquired, and the new management team wants to consolidate resources and subscriptions into one region.”

•    Data sovereignty: “My organization is based in the UK with a large local customer base. As a result of Brexit, I need to move my Azure resources from various European regions to the UK in order to comply with local rules and regulations.”

•    SLA requirements: “I deployed my VMs in Region A, and I would like to get a higher level of confidence regarding the availability of my services by moving my VMs into Availability Zones (AZ). Region A doesn’t have an AZ at the moment. I want to move my VMs to Region B, which is still within my latency limits and has Availability Zones.”

If you or your organization are going through any of these scenarios or you have a different reason to move your virtual machines, we’ve got you covered!

Move Azure VMs to a target region

For any of the scenarios outlined above, if you want to move your Azure Virtual Machines to a different region with the same configuration as the source region or increase your availability SLAs by moving your virtual machines into an Availability Zone, you can use Azure Site Recovery (ASR). We recommend taking the following steps to ensure a successful transition:

1.    Verify prerequisites: To move your VMs to a target region, there are a few prerequisites we recommend you gather. This ensures that you’re creating a basic understanding of the Azure Site Recovery replication, the components involved, the support matrix, etc.

2.    Prepare the source VMs: This involves ensuring the network connectivity of your VMs, certificates installed on your VMs, identifying the networking layout of your source and dependent components, etc.

3.    Prepare the target region: You should have the necessary permissions to create resources in the target region including the resources that are not replicated by Site Recovery. For example, permissions for your subscriptions in the target region, available quota in the target region, Site Recovery’s ability to support replication across the source-target regional pair, pre-creation of load balancers, network security groups (NSGs), key vault, etc.

4.    Copy data to the target region: Use Azure Site Recovery replication technology to copy data from the source VM to the target region.

5.    Test the configuration: Once the replication is complete, test the configuration by performing a failover test to a non-production network.

6.    Perform the move: Once you’re satisfied with the testing and you have verified the configuration, you can initiate the actual move to the target region.

7.    Discard the resources in the source region: Clean up the resources in the source region and stop replication of data.


Move your Azure VM ‘as is’

If you intend to retain the same configuration in the target region as in the source region, you can do so with Azure Site Recovery. Your virtual machine configuration and availability SLAs will be the same before and after the move. A single instance VM after the move will come back online as a single instance VM. VMs in an Availability Set after the move will be placed into an Availability Set, and VMs in an Availability Zone will be placed into an Availability Zone within the target region.

To learn more about the steps to move your VMs, refer to the documentation.

Move your Azure virtual machines to increase availability

As many of you know, we offer Availability Zones (AZs), a high availability offering that protects your applications and data from datacenter failures. AZs are unique physical locations within an Azure region and are equipped with independent power, cooling, and networking. To ensure resiliency, there’s a minimum of three separate zones in all enabled regions. With AZs, Azure offers 99.99 percent VM uptime SLA.

You can use Azure Site Recovery to move your single instance VM, or VMs in an Availability Set, into an Availability Zone, thereby achieving a 99.99 percent uptime SLA. You select the target Availability Zones when you enable replication for your VMs with Azure Site Recovery. Ideally, the VMs of an Availability Set should be spread across Availability Zones. Once you complete the move operation, the SLA for availability will be 99.99 percent. To learn more about the steps to move the VMs and improve your availability, refer to our documentation.

Azure natively provides you with the high availability and reliability you need for your mission-critical workloads, and you can choose to increase your SLAs and meet compliance requirements using the disaster recovery features provided by Azure Site Recovery. You can use the same service to increase availability of the virtual machines you have already deployed as described in this blog. Getting started with Azure Site Recovery is easy – simply check out the pricing information, and sign up for a free Azure trial. You can also visit the Azure Site Recovery forum on the Microsoft Developer Network (MSDN) for additional information and to engage with other customers.
Source: Azure

Maximize throughput with repartitioning in Azure Stream Analytics

Customers love Azure Stream Analytics for its ease of analyzing streams of data in movement, with the ability to set up a running pipeline within five minutes. Optimizing throughput has always been a challenge when trying to achieve high performance in a scenario that can't be fully parallelized. This occurs when you don't control the partition key of the input stream, or your source “sprays” input across multiple partitions that later need to be merged. You can now use a new extension of Azure Stream Analytics SQL to specify the number of partitions of a stream when reshuffling the data. This new capability unlocks performance and aids in maximizing throughput in such scenarios.

The new extension of Azure Stream Analytics SQL includes a keyword INTO that allows you to specify the number of partitions for a stream when performing reshuffling using a PARTITION BY statement. This new keyword, and the functionality it provides, is a key feature to achieve high performance throughput for the above scenarios, as well as to better control the data streams after a shuffle. To learn more about what’s new in Azure Stream Analytics, please see, “Eight new features in Azure Stream Analytics.”

What is repartitioning?

Repartitioning, or reshuffling, is required when processing data on a stream that is not sharded according to the natural input scheme, such as the PartitionId in the Event Hubs case. This might happen when you don’t control the routing of the event generators or you need to scale out your flow due to resource constraints. After repartitioning, each shard can be processed independently of others, and progress without additional synchronization between the shards. This allows you to linearly scale out your streaming pipeline.

You can specify the number of partitions the stream should be split into by using a newly introduced keyword INTO after a PARTITION BY statement, with a strictly positive integer that indicates the partition count. Please see below for an example:

SELECT * INTO [output] FROM [input] PARTITION BY DeviceID INTO 10

The query above will read from the input, regardless of whether it is naturally partitioned, repartition the stream tenfold according to the DeviceID dimension, and flush the data to the output. The dimension value (DeviceID) is hashed to determine which partition accepts which substream. The data will be flushed independently for each partitioned stream, assuming the output supports partitioned writes and either has 10 partitions or can handle an arbitrary partition count.
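To make the hashing step concrete, here is a small conceptual sketch of how a partition key maps events to substreams. Azure Stream Analytics' actual hash function is internal; Python's `zlib.crc32` is used here purely for illustration.

```python
# Conceptual sketch of hash-based repartitioning. The hash function and
# event shapes are illustrative, not Stream Analytics internals.
import zlib

def assign_partition(device_id: str, partition_count: int = 10) -> int:
    """Map a partition-key value to one of `partition_count` substreams."""
    return zlib.crc32(device_id.encode("utf-8")) % partition_count

events = [{"DeviceID": f"dev-{i}", "Temp": 20 + i} for i in range(6)]
partitions: dict[int, list] = {}
for e in events:
    partitions.setdefault(assign_partition(e["DeviceID"]), []).append(e)

# Every event with the same DeviceID lands in the same partition,
# so each partition can be processed independently of the others.
assert all(0 <= assign_partition(e["DeviceID"]) < 10 for e in events)
```

Because the assignment depends only on the key value, all events for a given DeviceID always land in the same substream, which is what allows each substream to progress without synchronization.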

A diagram of the data flow with the repartition in place is below:

Why and how to use repartitioning?

Use repartitioning to optimize the heavy parts of processing. It will process the data independently and simultaneously on disjoint subsets, even when the data is not naturally partitioned properly on input. The partitioning scheme is carried forward as long as the partition key stays the same.

Experiment and observe the resource utilization of your job to determine the exact number of partitions needed. Remember, the Streaming Unit (SU) count, which is the unit of scale for Azure Stream Analytics, must be adjusted so the number of physical resources available to the job can fit the partitioned flow. In general, six SUs is a good number to assign to each partition. If there are insufficient resources assigned to the job, the system will apply the repartition only if it benefits the job.
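The sizing guideline above can be sketched as simple arithmetic; the six-SUs-per-partition figure is the article's rule of thumb, not a hard limit.

```python
# Back-of-the-envelope SU sizing based on the rule of thumb above:
# roughly six Streaming Units per partition of the repartitioned flow.
def suggested_su_count(partition_count: int, su_per_partition: int = 6) -> int:
    """Rough starting point for a job's SU assignment; tune by observation."""
    return partition_count * su_per_partition

print(suggested_su_count(10))  # 10 partitions -> 60
```

Treat the result as a starting point and adjust it based on the observed resource utilization of the job.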

When joining two streams of data explicitly repartitioned, these streams must have the same partition key and partition count. The outcome is a stream that has the same partition scheme. Please see below for an example:

WITH step1 AS (SELECT * FROM [input1] PARTITION BY DeviceID INTO 10),
step2 AS (SELECT * FROM [input2] PARTITION BY DeviceID INTO 10)
SELECT * INTO [output] FROM step1
JOIN step2 ON step1.DeviceID = step2.DeviceID AND DATEDIFF(minute, step1, step2) BETWEEN 0 AND 5


Specifying a mismatching number of partitions or partition key would yield a compilation error when creating the job.

When writing a partitioned stream to an output, it works best if the output scheme matches the stream scheme by key and count, so each substream can be flushed independently of others. Alternatively, the stream must be merged and possibly repartitioned again by a different scheme before flushing. This would add to the general latency of the processing, as well as the resource utilization and should be avoided.

For use cases with SQL output, use explicit repartitioning to match optimal partition count to maximize throughput. Since SQL works best with eight writers, repartitioning the flow to eight before flushing, or somewhere further upstream, may prove beneficial for the job’s performance. For more information, please refer to the documentation, “Azure Stream Analytics output to Azure SQL Database.”

Next steps

Get started with Azure Stream Analytics and have a look at our documentation to understand how to leverage query parallelization in Azure Stream Analytics.

For any question, join the conversation on Stack Overflow.
Source: Azure

Benefits of using Azure API Management with microservices

The IT industry is experiencing a shift from monolithic applications to microservices-based architectures. The benefits of this new approach include:

Independent development and freedom to choose technology – Developers can work on different microservices at the same time and choose the best technologies for the problem they are solving.
Independent deployment and release cycle – Microservices can be updated individually on their own schedule.
Granular scaling – Individual microservices can scale independently, reducing the overall cost and increasing reliability.
Simplicity – Smaller services are easier to understand, which expedites development, testing, debugging, and launching a product.
Fault isolation – Failure of a microservice does not have to translate into failure of other services.

In this blog post we will explore:

How to design a simplified online store system to realize the above benefits.
Why and how to manage public facing APIs in microservice-based architectures.
How to get started with Azure API Management and microservices.

Example: Online store implemented with microservices

Let’s consider a simplified online store system. A visitor of the website needs to be able to see a product’s details, place an order, and review a placed order.

Whenever an order is placed, the system needs to process the order details and issue a shipping request. Based on user scenarios and business requirements, the system must have the following properties:

Granular scaling – Viewing product details happens on average at least 1,000 times more often than placing an order.
Simplicity – Independent user actions are clearly defined, and this separation needs to be reflected in the architecture of the system.
Fault isolation – Failure of the shipping functionality cannot affect viewing products or placing an order.

These requirements hint towards implementing the system with three microservices:

Order with public GET and POST API – Responsible for viewing and placing an order.
Product with public GET API – Responsible for viewing details of a product.
Shipping triggered internally by an event – Responsible for processing and shipping an order.

For this purpose we will use Azure Functions, which are easy to implement and manage. Their event-driven nature means that they are executed on, and billed per, interaction. This becomes useful when the store traffic is unpredictable. The underlying infrastructure scales down to zero in times of no traffic. It can also serve bursts of traffic when a marketing campaign goes viral or load increases during shopping holidays like Black Friday in the United States.

To maintain the scaling granularity, ensure simplicity, and keep release cycles independent, every microservice should be implemented in an individual Function App.

The order and product microservices are external facing functions with an HTTP Trigger. The shipping microservice is triggered indirectly by the order microservice, which creates a message in Azure Service Bus. For example, when you order an item, the website issues a POST Order API call which executes the order function. Next, your order is queued as a message in an Azure Service Bus instance which then triggers the shipping function for its processing.
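The order-to-shipping flow described above can be simulated in a few lines of plain Python. This is a toy sketch: the in-process queue stands in for Azure Service Bus, and the function and field names are illustrative, not the Azure Functions API.

```python
# Toy simulation of the event-driven flow: the HTTP-triggered order
# "function" enqueues a message, and the queue-triggered shipping
# "function" processes it. queue.Queue stands in for Azure Service Bus.
import json
import queue

service_bus: "queue.Queue[str]" = queue.Queue()
shipped: list[int] = []

def order_function(order: dict) -> None:
    """HTTP-triggered: accept the order and queue a shipping request."""
    service_bus.put(json.dumps(order))

def shipping_function(message: str) -> None:
    """Queue-triggered: process one shipping request."""
    shipped.append(json.loads(message)["order_id"])

order_function({"order_id": 42, "item": "book"})
# The platform runs this trigger loop for you; here we drain it manually.
while not service_bus.empty():
    shipping_function(service_bus.get())

assert shipped == [42]
```

The key property the sketch shows is the decoupling: the order function never calls the shipping function directly, so a shipping failure cannot break order placement.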

Top reasons to manage external API communication in microservices-based architectures

The proposed architecture has a fundamental problem: the way communication from the outside is handled.

Client applications are coupled to internal microservices. This becomes especially burdensome when you wish to split, merge, or rewrite microservices.
APIs are not surfaced under the same domain or IP address.
Common API rules cannot be easily applied across microservices.
Managing API changes and introducing new versions is difficult.

Although Azure Functions Proxies offer a unified API plane, they fall short in the other scenarios. These limitations can be addressed by fronting Azure Functions with an Azure API Management instance, now available in a serverless Consumption tier.

API Management abstracts APIs from their implementation and hosts them under the same domain or a static IP address. It allows you to decouple client applications from internal microservices. All your APIs in Azure API Management share a hostname and a static IP address. You may also assign custom domains.

Using API Management secures your APIs by aggregating them behind a single gateway rather than exposing your microservices directly. This helps you reduce the surface area for a potential attack. You can authenticate API requests using a subscription key, JWT token, client certificate, or custom headers. Traffic may be filtered down to trusted IP addresses only.
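As a minimal sketch of the subscription-key check the gateway performs on your behalf: the header name `Ocp-Apim-Subscription-Key` is the one API Management uses by default, but the key store and logic here are illustrative, since the real gateway manages keys for you.

```python
# Illustrative gateway-side subscription-key check. In API Management this
# is configured declaratively; the key set here is a stand-in.
VALID_KEYS = {"my-subscription-key"}

def authorize(headers: dict) -> bool:
    """Reject any request that lacks a known subscription key."""
    return headers.get("Ocp-Apim-Subscription-Key") in VALID_KEYS

assert authorize({"Ocp-Apim-Subscription-Key": "my-subscription-key"})
assert not authorize({})  # missing key -> request is rejected
```

Because the check happens at the gateway, unauthenticated traffic never reaches the backend functions at all.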

With API Management, you can also execute rules on APIs. You can define API policies on incoming requests and outgoing responses globally, per API, or per API operation. There are almost 50 policies, such as authentication methods, throttling, caching, and transformations. Learn more by visiting our documentation, “API Management policies.”

API Management simplifies changing APIs. You can manage your APIs throughout their full lifecycle from design phase, to introducing new versions or revisions. Contrary to revisions, versions are expected to contain breaking changes such as removal of API operations or changes to authentication.

You can also monitor APIs when using API Management. You can see usage metrics in your Azure API Management instance. You may log API calls in Azure Application Insights to create charts, monitor live traffic, and simplify debugging.

API Management makes it easy to publish APIs to external developers. Azure API Management comes with a developer portal which is an automatically generated, fully customizable website where visitors can discover APIs, learn how to use them, try them out interactively, download their OpenAPI specification, and finally sign up to acquire API keys.

How to use API Management with microservices

Azure API Management has recently become available in a new pricing tier. With its billing per execution, the Consumption tier is especially suited for microservice-based architectures and event-driven systems. For example, it would be a great choice for our hypothetical online store.

For more advanced systems, other tiers of API Management offer a richer feature set.

Regardless of the selected service tier, you can easily front your Azure Functions with an Azure API Management instance. It takes only a few minutes to get started with Azure API Management.
Source: Azure