New features in Azure Application Insights Metrics Explorer

Over the past weeks, Application Insights Metrics Explorer has introduced several new features that give you more options for visualizing metrics and simplify chart configuration. These include new percentage aggregation charts, pinning standard and custom grids to dashboards, control over the y-axis boundaries of charts, and separation of basic and advanced configuration options in chart settings.

Percentage aggregation charts

In many cases the actual value of a segmented metric is not as important as how it compares to the other values in the group. For example, you might want to visualize the percentage of failed vs. successful requests, where they sum up to 100%, instead of looking at the raw counts. Or you might want to see the percentage of HTTP requests handled by each server instance. To make this possible, we introduced a new percentage aggregation type, allowing users to switch between “Average” and “Average (%)”, “Sum” and “Sum (%)”, etc. Below is a sample chart based on the percentage aggregation:

To configure percentage aggregation, you need to turn on grouping, and select percentage aggregation from the Aggregation dropdown selector of chart settings:
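Conceptually, the percentage aggregation replaces each group's aggregated value with its share of the total across all groups. The Python sketch below illustrates the idea; the function and metric names are illustrative only and are not part of any Metrics Explorer API:

```python
def percentage_aggregation(values_by_group):
    """Convert raw aggregated values per group into percent-of-total shares."""
    total = sum(values_by_group.values())
    if total == 0:
        return {group: 0.0 for group in values_by_group}
    return {group: 100.0 * value / total
            for group, value in values_by_group.items()}

# E.g. raw request counts per result turn into shares that sum to 100%.
requests = {"success": 950, "failed": 50}
print(percentage_aggregation(requests))  # {'success': 95.0, 'failed': 5.0}
```

Switching a chart from “Sum” to “Sum (%)” applies roughly this kind of normalization to each time slice of the segmented metric.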

Hiding less commonly used advanced settings, unless the user really wants to see them

Common feedback from new Application Insights users is that there are too many options in the chart settings pane. Some of these options are quite advanced but critical for experienced users. Mixing basic and advanced options frustrated new users, who could not identify the essential settings and thus could not configure the charts the way they wanted. What did we do? A new checkbox on top of the chart configuration now filters between basic and advanced settings. New users see a simpler view; as they become more proficient with metrics explorer, they can reveal the advanced settings by checking the box on top of the chart details dialog:

Pinning Metrics Explorer grids to Azure dashboards

In the past, you could pin metrics explorer charts to dashboards. Now in addition to pinning charts, you can also pin grids. This small but very useful icon lets you customize your dashboards with those metrics that are better represented in a grid format:

Ability to freeze lower and upper boundaries of the y-axis on the charts

Freezing the y-axis of a chart becomes important when looking at small fluctuations of large values. For example, when the rate of successful requests drops from 99.99% to 99.5%, it may represent a significant reduction in the quality of service, but from the charting perspective, noticing such a small numeric fluctuation would be difficult or even impossible. With the new option, you can freeze the lower boundary of the chart to 99%, which makes this small drop more apparent. Another example is a fluctuation in available memory, where the value will technically never reach 0, so fixing the range to a higher value may make drops in available memory easier to spot. The two charts below show the same metric, with and without a fixed y-axis boundary:

To freeze the y-axis boundaries, you need to check advanced settings and specify the desired range under the Y-axis range section of the chart details dialog:

Source: Azure

Clustered Columnstore Index in Azure SQL Database

The columnstore index is the preferred technology for running analytics queries in Azure SQL Database. We recently announced general availability of In-Memory technologies for all Premium databases. Like In-Memory OLTP, columnstore index technology is not available in databases in the Standard and Basic pricing tiers today.

The columnstore technology is available in two flavors: clustered columnstore index (CCI) for data mart analytics workloads, and nonclustered columnstore index (NCCI) to run analytics queries on operational (i.e., OLTP) workloads. Please refer to NCCI vs CCI for the differences between these two flavors of columnstore indexes. The columnstore index can speed up the performance of analytics queries up to 100x while significantly reducing the storage footprint. The data compression achieved depends on the schema and the data, but we see around 10x data compression on average when compared to rowstore with no compression. This blog focuses on analytics workloads using CCI; NCCI will be covered in a future blog.

The clustered columnstore index is available in Azure SQL Database across all Premium editions. However, it is not yet available in the Standard and Basic pricing tiers. Using this technology, you can lower your storage cost and get similar or better query performance on lower Premium tiers.

The tables below show a typical multi-table join analytics query running on P1 and P15, both with and without a clustered columnstore index, as well as the storage savings achieved.

Query Performance: The key point to note below is that with a clustered columnstore index, the example query runs 5x faster on P1 than the same query running on P15 with rowstore, with no tuning. This can significantly lower the cost you need to pay to meet your workload requirements.

Pricing Tier | With Rowstore | With Columnstore | Performance Gains
------------ | ------------- | ---------------- | -----------------
P1 | 30.6 secs | 4.2 secs | 14x
P15 | 19.5 secs | 0.319 secs | 60x

Storage Size: The storage savings with columnstore compared to PAGE or NONE compressed tables are shown below. While the cost of storage is already included with Azure SQL Database, lower storage can enable you to choose a lower tier. Note that this is generated test data, so the compression is lower than what one would get for customer workloads.

Number of Rows | Size Rowstore (MB) | Size Columnstore (MB) | Savings
-------------- | ------------------ | --------------------- | -------
3,626,191 | 212 (PAGE compression) | 120 | 1.8x
3,626,191 | 756 (NONE compression) | 120 | 6.2x

The best part of columnstore index technology is that it does not require any changes to your application. All you need to do is create a columnstore index, or replace an existing index with one, on your table(s).

How does Columnstore Index work?

As described earlier, the columnstore is just an index that stores data in a table as columns as shown below. The queries can continue to access the table requiring no changes.

Columnstore index delivers significant data compression and query performance due to the following three key factors

Reduced IO and Storage: Since data is stored as individual columns, it compresses really well, as all values are drawn from the same domain (i.e., data type) and in many cases the values repeat or are similar. The compression achieved depends on the data distribution, but typical compression that we have seen is around 10x. This is significant because it enables you to reduce the storage as well as the IO footprint of your database significantly.
Only referenced columns need to be fetched: Most analytics queries fetch and process only a small set of columns. If you consider a typical Star Schema, the FACT table is the one with the most rows, and it has a large number of columns. With columnstore storage, SQL Server needs to fetch only the referenced columns, unlike rowstore, where the full row must be fetched regardless of the number of columns referenced in the query. For example, consider a FACT table with 100 columns and an analytics query that references only 5 of them. By fetching only the referenced columns, you can potentially reduce IO by 95%, under the simplifying assumption that all columns take the same storage. Note that this is on top of the roughly 10x data compression provided by columnstore.
Efficient Data Processing: SQL Server has an industry leading query engine for columnstore data to deliver up to 100x speed up in query performance. For details, please refer to Data load into clustered columnstore index.
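As a rough illustration of the arithmetic in the second point, the combined effect of column pruning and compression can be sketched as follows. The 10x compression ratio and equal column widths are simplifying assumptions taken from the text, not guarantees, and the row sizes are made up for illustration:

```python
def estimated_bytes_scanned(row_bytes, rows, total_cols, referenced_cols,
                            compression_ratio):
    """Rough estimate of bytes a scan must read.

    Rowstore reads full rows; columnstore reads only the referenced
    columns, further shrunk by compression. Assumes equally wide columns.
    """
    rowstore = row_bytes * rows
    columnstore = (row_bytes * rows) * (referenced_cols / total_cols) / compression_ratio
    return rowstore, columnstore

# A FACT table: 100 columns, a query touching 5 of them, ~10x compression.
rowstore, columnstore = estimated_bytes_scanned(
    row_bytes=400, rows=10_000_000, total_cols=100,
    referenced_cols=5, compression_ratio=10)
print(rowstore / columnstore)  # 200.0: 20x from column pruning times 10x from compression
```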

How do I create a clustered columnstore index?

Creating a clustered columnstore index is like creating any other index. For example, you can create a regular rowstore table as follows:

CREATE TABLE ACCOUNT (
    ACCOUNTKEY INT NOT NULL,
    ACCOUNTDESCRIPTION NVARCHAR (50),
    ACCOUNTTYPE NVARCHAR (50),
    ACCOUNTCODEALTERNATEKEY INT
)

Any rows inserted into the table above are stored in rowstore format. Now, if you want to convert this table to store data in columnstore format, all you need to do is execute the following SQL statement:

CREATE CLUSTERED COLUMNSTORE INDEX ACCOUNT_CI ON ACCOUNT

If the rowstore table had a clustered B-tree index, then you can execute the following SQL statement instead:

CREATE CLUSTERED COLUMNSTORE INDEX ACCOUNT_CI ON ACCOUNT WITH (DROP_EXISTING = ON)

When and where should you use a clustered columnstore index?

Clustered Columnstore index primarily targets analytics workloads. The table below shows the common scenarios that have been successfully deployed with this technology.

CCI (clustered columnstore index) – compression: 10x on average

Traditional DW workload with a Star or Snowflake schema: commonly you enable CCI on the FACT table but keep DIMENSION tables in rowstore with PAGE compression. Additional consideration: CCI is also worth considering for large dimension tables with more than 1 million rows.
Insert-mostly workload: many workloads, such as IoT (Internet of Things), insert large volumes of data with minimal updates/deletes. These workloads can benefit from the large data compression as well as the speedup of analytics queries.

CCI with one or more nonclustered indexes (NCIs) – compression: 10x on average, plus additional storage for the NCIs

Similar to the scenarios mentioned for CCI, but additionally requiring (a) PK/FK enforcement, (b) a significant number of queries with equality predicates or short range scans (NCIs speed up such queries by avoiding full table scans), or (c) updates/deletes of rows that can be efficiently located using NCIs.

Resources to get started

For more details, please refer to the following

Sample workload for columnstore index
Examples of production deployment of columnstore index
SQL Server Team's blogs on columnstore index
MSDN documentation on columnstore index

Source: Azure

Telemetry Platform Features in Azure Media Services

We are excited to announce new Azure Media Services (AMS) telemetry platform features, generally available through our new Telemetry API. Media Services telemetry allows you to monitor and measure the health of your services through a suite of telemetry data. Telemetry data is written to your Azure Storage account and can be processed and visualized using a wide array of data visualization tools.

Consuming Telemetry Data

This release includes telemetry metrics for Channel, Streaming Endpoint, and Archive entities. Telemetry data is written to an Azure Storage table in the storage account specified when configuring telemetry for your media services account. Telemetry data is stored in aggregate in a table, “TelemetryMetricsYYYYMMDD,” for each day’s data (where “YYYYMMDD” denotes the date timestamp).

Each table entry contains a set of common fields and a record with a set of entity-specific fields. The entry identifying fields include the following:

Property | Value | Example
-------- | ----- | -------
PartitionKey | {Account ID}_{Entity ID} | e49bef329c29495f9b9570989682069d_64435281c50a4dd8ab7011cb0f4cdf66
RowKey | {Seconds to Midnight}_{Random Value} | 01688_00199
Timestamp | The time at which the row entry was created | 2016-09-09T22:43:42.241Z
Type | The type of the entity providing telemetry | Channel
Name | The name of the telemetry event | ChannelHeartbeat
ObservedTime | The time at which the event occurred | 2016-09-09T22:42:36.924Z
ServiceID | Service ID | f70bd731-691d-41c6-8f2d-671d0bdc9c7e
Entity-specific Properties | {Record as defined by the event} | {Record}

The account ID is included in the partition key to simplify workflows where multiple media services accounts write data to the same storage account. The row key starts with the number of seconds to midnight to allow top-n style data queries within a partition (see the log tail table design pattern for more information). The observed timestamp is an approximate measure provided by the entity reporting telemetry.
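Based on the scheme described above, the table name and keys for a telemetry row can be derived roughly as follows. This is a hypothetical sketch: the zero-padding widths and the random-value generation are assumptions inferred from the example values, not the service's actual implementation:

```python
from datetime import datetime, timedelta

def telemetry_keys(account_id, entity_id, observed, random_value):
    """Build the daily table name, PartitionKey, and RowKey described above."""
    table = "TelemetryMetrics" + observed.strftime("%Y%m%d")
    partition_key = f"{account_id}_{entity_id}"
    midnight = (observed + timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0)
    seconds_to_midnight = int((midnight - observed).total_seconds())
    # Later events have fewer seconds to midnight, so they sort first
    # lexicographically -- the "log tail" pattern for top-n queries.
    row_key = f"{seconds_to_midnight:05d}_{random_value:05d}"
    return table, partition_key, row_key

t, pk, rk = telemetry_keys("e49bef32", "64435281",
                           datetime(2016, 9, 9, 23, 31, 52), 199)
print(t, pk, rk)  # TelemetryMetrics20160909 e49bef32_64435281 01688_00199
```

Because row keys decrease as the day progresses, reading the first n rows of a partition returns the n most recent telemetry entries.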

Entity-Specific Telemetry

The data in each telemetry row represents an aggregation of telemetry events raised over an aggregation time window, listed below. Each entity pushes telemetry with the following frequencies:

Channels: Every 60 seconds
Streaming Endpoints: Every 30 seconds
Archive: Every 60 seconds

Below is the schema description for channels, streaming endpoints, and archive entities.

Channels

Property | Value | Example
-------- | ----- | -------
TrackType | Type of track | video
TrackName | Name of the track | video
Bitrate | Expected bitrate of the track | 785,000
IncomingBitrate | Incoming bitrate of the track | 784,548
OverlapCount | Number of overlapping fragments received in ingest | 0
DiscontinuityCount | Number of discontinuities detected in ingest | 0
LastTimestamp | Last ingested data timestamp | 1800488800
NonincreasingCount | Count of fragments discarded due to non-increasing timestamp | 0
UnalignedKeyFrames | Boolean on whether we received fragments where key frames are not aligned | False
UnalignedPresentationTime | Boolean on whether we received fragments where presentation time is not aligned | False
UnexpectedBitrate | Boolean on whether IncomingBitrate and Bitrate differ by more than 50%, or, when Bitrate is 0, whether IncomingBitrate for an audio or video track is less than 40 kbps | False
Healthy | Boolean on whether OverlapCount, DiscontinuityCount, NonincreasingCount, UnalignedKeyFrames, UnalignedPresentationTime, and UnexpectedBitrate are all zero or false | True
CustomAttributes | Placeholder for custom attributes | 

Streaming Endpoints

Property | Value | Example
-------- | ----- | -------
HostName | Hostname of the endpoint | builddemoserver.origin.mediaservices.windows.net
StatusCode | HTTP status code | 200
ResultCode | HTTP result code detail | S_OK
RequestCount | Total requests received within the last aggregation window | 3
BytesSent | Total bytes sent within the last aggregation window | 2,987,358
ServerLatency | Average server latency, including storage, in milliseconds | 130
E2ELatency | Average end-to-end latency in milliseconds | 250

Archive

Property | Value | Example
-------- | ----- | -------
ManifestName | Name of the manifest | asset-eb149703-ed0a-483c-91c4-e4066e72cce3/a0a5cfbf-71ec-4bd2-8c01-a92a2b38c9ba.ism
TrackName | Name of the track | audio_1
TrackType | Type of track | audio
Bitrate | Track bitrate | 785,000
Healthy | Boolean on whether there were no discarded fragments or archive acquisition errors in storage | True
CustomAttributes | Placeholder for custom attributes | 

The schema above is designed to give good performance within the limits of Azure Table Storage:

Data is partitioned by account ID and service ID to allow telemetry from each service to be queried independently
Partitions contain the date to give a reasonable upper bound on the partition size
Row keys are in reverse time order to allow the most recent telemetry items to be queried for a given service

This should allow many of the common queries to be efficient:

Parallel, independent downloading of data for separate services
Retrieving all data for a given service in a date range
Retrieving the most recent data for a service

Visualizing Telemetry Data

Your Azure Storage account can export data to data visualization tools such as PowerBI, Application Insights, and AMS Live Monitoring Dashboard, among many others. Below is an example of how this data can be imported into and visualized directly with PowerBI.

First, select the telemetry tables for the days you are interested in visualizing. To import your data, select Microsoft Azure Table Storage from the Get Data menu. Enter your storage account credentials and select the tables to import. Next, by selecting Content.ObservedTime as the time axis and Average of Content.Healthy (cast to a decimal representation) as the value, plot the health of your channels and archive entities as a line graph. To plot channel health and archive health separately, add a filter on Content.Name. This visualization illustrates entity health, where a value of 1 represents perfectly healthy and a value of 0 represents perfectly unhealthy for the given time. Below are examples of the channel and archive health plotted over time.
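The health average described here amounts to casting the Healthy booleans to 0/1 and averaging them; a minimal sketch of that computation:

```python
def health_score(healthy_flags):
    """Average of Healthy booleans cast to 0/1: 1.0 means fully healthy."""
    if not healthy_flags:
        return None
    return sum(1 if h else 0 for h in healthy_flags) / len(healthy_flags)

# Three healthy heartbeats and one unhealthy one over a time window.
print(health_score([True, True, True, False]))  # 0.75
```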

To visualize the overall health of your services, collapse the time dimension by plotting the data as a pie chart. Selecting Content.Healthy as the legend and Count of Content.Healthy as the value produces the visualization below.

We’re excited to share these telemetry features and hope they provide useful information about the health of your Azure media services.

Thanks,

Azure Media Services Streaming Team
Source: Azure

New Microsoft Azure training and discounted certifications

This post is authored by Julia White, Corporate Vice President, Azure + Security Marketing.

As I recently discussed in my Top Cloud Myths of 2016 blog post, the use of cloud technology has become mainstream. Over 90% of the Fortune 500 companies now use at least one of Microsoft’s enterprise-grade services, and more than 60% are using three or more Microsoft cloud technologies. Azure compute usage has more than doubled year-over-year, and across a variety of industries, companies like Rolls Royce, Uber, and Coca-Cola are using Azure to transform their businesses.

With this rapid adoption of Azure, we have consistently heard from developers and IT operators about the need for Azure training and certification to help ensure they have the latest and greatest information. With the rapid innovation cycle of Azure, it’s critical to stay up to date on the latest technology and methodologies.

To make sure you have up-to-date technical skills and best practices using Azure, we are introducing new training resources and discounted access to certification. Today we are announcing three training offers that combine free access to our library of flexible online courses, discounts on our industry-standard Microsoft Certified Professional exams, and a discount on Linux certification offered through the Linux Foundation. In case you missed the recent news, Microsoft joined the Linux Foundation, further demonstrating our commitment to open source for Azure and as a company.

We are offering a broad range of learning resources – from a Massive Open Online Course, or MOOC, to full certification offerings. The MOOCs offer online videos, demos, labs, graded assessments, office hours, and more. When you complete a MOOC, you receive a digital certificate of completion and access to reduced-cost Microsoft Certified Professional exams for formal certification.

Check out all the learning options for Azure, and read my colleague Gavriella Schuster’s blog post to dive into more details about our new resources and discounted certifications!
Source: Azure

Azure Redis Cache diagnostic improvements

Early next year, we will transition Azure Redis Cache’s telemetry infrastructure to the new Azure Monitor service, thereby enhancing its monitoring and alerting experiences. Customers will enjoy the following benefits from this upgrade:

Getting service metrics out of the box – Today you need to create an Azure Storage account and configure Azure Redis to write metrics data to that account in order to monitor or set alerts for your cache. After the change, this is no longer required: all caches will automatically display metrics in the Azure portal. These new Redis metrics will also be accessible through the Azure Monitor service's REST API. If you need to retain metrics for longer than 30 days, you may still export them to your own storage account for archiving and offline analysis.
Managing alerts more effectively – With alert rules, the Azure Monitor service allows you to add trigger conditions on metrics and events, and it will notify you through standard channels (e.g., email, webhook) when any of the conditions is met. In addition, you will have more granular control over how alerts are configured: you will be able to set up alerts for each cache instance separately, instead of on a per-region and per-subscription basis as is the case today.
Integrating with 3rd-party tools – Logs from Azure Redis will be streamed in near real time to an Event Hub. This makes events from your cache instance available to 3rd-party log analytics systems, SIEMs, or custom telemetry pipelines almost instantaneously. Azure Monitor has a set of partners today who can ingest this data, and the list of ecosystem partners is growing continuously.

We will update the Azure Redis service in multiple phases starting at the beginning of January 2017. We plan to complete the rollout for all Azure regions within the month.

The update will be seamless for most users, who will simply see the new metrics appearing in the portal. You can start setting up alerts should you so choose. If you are reading the Azure Redis metrics through the Azure Insights library, you just need to upgrade to the latest library version. If you access the metrics data directly from a storage account, however, you will need to change your tool to use either the Azure Insights library or Azure Monitor REST API, or to reconfigure a storage account for exporting and archiving the data. Azure Monitoring REST API Walkthrough provides a good overview of how to programmatically access Azure resource metrics. To facilitate migrating your tool, Azure Redis will continue to write metrics data to your current account until the end of February 2017. It will only publish data to Azure Monitor service after that time. We would encourage you to make the necessary modifications as early as possible.

We hope that you will like the new Azure Redis metrics. Please share with us your thoughts when you have a chance to try them out.
Source: Azure

Kafka Connect for Azure IoT Hub

In Azure IoT, we believe in empowering developers to build IoT solutions with the technologies of their choice.  Today I’m excited to announce the release of Kafka Connect for Azure IoT Hub. This new Kafka Source Connector can be used to read telemetry data from devices connected to the Azure IoT Hub; this open source code can be found on GitHub.
 
Azure IoT Hub provides secure two-way communication with devices, device identity, and device management at extreme scale and performance. Kafka Connect for Azure IoT Hub enables developers to connect IoT Hub to open source systems using Kafka for a powerful, secure, and performant IoT solution. Kafka Connect for IoT Hub can also be used with the new managed Kafka solution available in Azure HDInsight.
 
Stay tuned for more announcements and follow us on GitHub to see what is coming next for Kafka Connect for Azure IoT Hub.
Source: Azure

Secondary Indexes on Column Store accelerate SQL Data Warehouse look up queries

We are pleased to announce that Azure SQL Data Warehouse now supports the creation of secondary B-Tree indexes (also referred to as nonclustered indexes, or NCIs) on column store tables (also referred to as clustered column store indexes, or CCIs). Most analytics queries aggregate large amounts of data and are served well by scanning the column store segments directly. However, there is often a need to look for a ‘needle in a haystack’, which translates to a query that looks up a single row or a small range of rows. Such lookup queries can see orders-of-magnitude (even 1000x) improvements in response time and potentially run in under a second if there is a B-Tree index on the filter column.

SQL Data Warehouse is your go-to SQL-based view across data, offering a fast, fully managed, petabyte-scale cloud solution. It is highly elastic, enabling you to provision and scale up to 60 times larger in seconds. You can scale compute and storage independently, allowing you to range from burst to archival scenarios, and pay based off what you're using instead of being locked into a confined bundle. Plus, SQL Data Warehouse offers the unique option to pause compute, giving you even more freedom to better manage your cloud costs.

Prior to the availability of secondary B-Tree indexes on column store tables, users could meet response-time requirements for their point lookup queries by duplicating column store data in a clustered B-Tree index. However, duplicating the data adds implementation complexity, storage cost, and latency. Some of these users have now tried the new secondary indexes over column store and are delighted that they can get the same interactive response time without the data duplication.

How to Create a Secondary Index on a Column Store Table

This follows the same syntax as the generic Create Index Transact-SQL statements. A simple test on 1TB TPC-H data demonstrated that the query time for selecting orders for a given orderkey from lineitem went down from 41 seconds to under a second after a secondary index on orderkey was added. 

Best Practices for Using Secondary Indexes

Here are some guidelines to bear in mind when using secondary indexes on column store tables.

Use them for high cardinality columns that are used as filters in queries returning a small number of rows.
Don’t be heavy-handed with secondary indexes, as there is an overhead to maintaining them during loads. It is best to limit yourself to 1 or 2 secondary indexes per table.

Secondary indexes can be created on partitioned column store tables as well. However, as they are local to each distribution and partition, they cannot be used to implement a UNIQUE constraint.

Next Steps

In this blog post we talked about the benefits of the new functionality offered by secondary B-Tree indexes on column store tables. This functionality is now available in all SQL Data Warehouse Azure regions worldwide. We encourage you to try it out if you have a use case for point lookups.

Learn More

Check out the many resources for learning more about SQL Data Warehouse, including:

What is Azure SQL Data Warehouse?
SQL Data Warehouse best practices
Video library
MSDN forum
Stack Overflow forum
Source: Azure

Live, online, step-by-step guidance from Cortana Intelligence Suite experts

Learn how to build intelligence into your applications by getting hands-on with the Cortana Intelligence Suite. On December 6, Microsoft Virtual Academy will host Microsoft Architects Todd Kitta and Jin Cho as they lead a workshop where you will create an end-to-end solution and finish the session with a working web app. Todd and Jin will pause their instruction to give you time to work through the project and to ask questions.

Learn how to architect solutions in Cortana Intelligence Suite

Bring your interest in data science and advanced analytics to learn what’s possible as we take a step-by-step look at the platform. We’ll explore key components, including Data Factory, HDInsight Spark, Azure Machine Learning, and Power BI, as we build the app. The instructors will talk about open source capabilities, walk through a machine learning model, show you how to set up a Data Factory pipeline, and much more.

Note: To best follow along with the instructors, be sure to set up a free trial subscription to Microsoft Azure before the event.

Course outline

Analytics State of the Union + Cortana Intelligence Suite Keynote
Building a Machine Learning Model and Operationalizing
Setting up Azure Data Factory
Developing a Data Factory Pipeline for Data Movement
Operationalizing Machine Learning Scoring with Azure Machine Learning and Data Factory
Summarizing Data Using HDInsight Spark
Visualizing Spark Data in Power BI
Deploying an Intelligent Web App
Wrap-up and Cleanup of Azure Resources

Register Now

Cortana Intelligence Suite End to End

Date: December 6, 2016
Time: 9am‒4pm PST
Where: Live, online virtual classroom
Cost: Free!

Register now
Source: Azure

Help us shape Azure Storage Client Libraries and Tools

The Azure Storage team is looking for feedback to help with the planning of the Storage Client Libraries and Tools. If you currently use any of the client libraries and/or tools for Azure Storage, please take our short survey; it shouldn't take more than 3 minutes to complete.

Once you complete the survey, you will also have the opportunity to learn more about the upcoming preview programs as well as help us shape the product roadmap.
Source: Azure

Integrated Vulnerability Assessment with Azure Security Center

Vulnerability management is a critical part of an organization’s security and compliance strategy. Security flaws are constantly being discovered and fixed by vendors, making it hard for organizations to keep up with security patches. Meanwhile, missing security updates are easy targets for attackers and can compromise the security of the entire network.

Traditional network-based scanners are available in the Azure Marketplace and are successfully used by customers for vulnerability assessment. Nevertheless, many Azure customers are looking for continuous, agent-based solutions for the following reasons:

Cloud environments tend to be more dynamic. Virtual machines (VMs) are spun up and down frequently, making it more difficult for a scheduled scan to cover all assets.
Network-based scanners require a user account on target virtual machines in order to provide full insight. In many cases, however, customers lack the ability to centrally manage such accounts in the cloud.
When resources are spread across different virtual networks, multiple network-based scanners are required to get access to all virtual machines.

As announced at the end of September, Azure Security Center now offers integrated vulnerability assessment with Qualys cloud agents (preview) as part of the Virtual Machine recommendations. If a virtual machine does not have an integrated vulnerability assessment solution already deployed, Security Center recommends that it be installed. The solution can be deployed to multiple VMs at one time, and the ability to automatically deploy it on new VMs as they are created will be added soon. Once deployed, the Qualys agent will start reporting vulnerability data to the Qualys management platform, which in turn provides vulnerability and health monitoring data back to Security Center. Users can quickly identify vulnerable VMs from the Security Center dashboard. Additional reports and information are available in the Qualys management console, which is linked directly from Security Center.

To get started, simply follow the “Add a vulnerability assessment solution” recommendation in Azure Security Center as shown in this article. You can also watch this short video on Channel 9. If you don't yet have a Qualys subscription, you can enable a free trial in just a few clicks. Once the trial is over, the Qualys agent will stop reporting vulnerabilities and can be easily removed from the Security Center dashboard without any impact to the VM.

Interested in learning more about Azure Security Center and its partner ecosystem integration?

Top 4 reasons for using Azure Security Center for partner security solutions
Managing security recommendations in Azure Security Center
Monitoring partner solutions with Azure Security Center
Integrating Security Center alerts with Azure log integration (Preview)
Security Resource Provider REST API Reference

Source: Azure