New Azure VPN Gateways now 6x faster

Running mission-critical workloads requires both performance and reliability. To improve your Azure VPN experience, we are introducing a new generation of VPN gateways with better performance and a better SLA, at the same price as our older gateways.

Many customers with network-intensive workloads in Azure Virtual Networks (VNets) are driving the need for increased cross-premises and cross-region VPN performance. To accommodate even more demanding workloads, we re-engineered our VPN Gateway service to provide 6x the performance, coupled with better reliability and backed by an even stricter SLA.

In addition to performance, many customers with mission-critical workloads need control over their VPN policies to meet compliance regulations. We now provide custom IPsec/IKE policy selection, giving you more flexibility to choose your encryption policy. We are also enhancing the new gateways to accommodate both route-based and policy-based VPNs. Although a route-based VPN that uses BGP to automatically learn routes is easier to manage, many customers have already deployed policy-based VPNs at their branch offices. The new VPN gateways allow multiple sites using policy-based VPNs to connect to the same VPN gateway.

New guidance

As we introduce the new VPN gateways, called VpnGw1, VpnGw2, and VpnGw3, we are also updating our deployment guidance. The existing Basic VPN gateway is unchanged, with the same 80-100 Mbps performance and a 99.9% SLA. The Basic VPN gateway is appropriate for non-production dev/test scenarios and should not be used for any production scenario.

For your production services, we strongly recommend that you select or migrate to the new VPN Gateways that have a 99.95% SLA. The new VPN gateways have a higher SLA and better performance at the same price as the old gateways. We will continue to support the old VPN gateways so you can manage existing deployments, but starting in September you will not be able to create the older Standard or High Performance VPN gateways.

Better performance

The new generation of Azure VPN gateways provides single-tunnel performance of up to 1 Gbps, and up to 1.25 Gbps aggregate across multiple tunnels, improving your access to VNets both from your premises and for cross-region VNet-to-VNet connectivity. Enabling the active-active VPN gateway option provides even higher throughput with multiple flows to your Azure VPN gateways.

Here are the details:

| VPN Gateway | Recommended Workload Type | Price ($ per hour) | Throughput Benchmark* | SLA | S2S & V2V Tunnels ($ per tunnel-hour) | P2S Tunnels (Max) |
| --- | --- | --- | --- | --- | --- | --- |
| Basic | Dev/Test | $0.04 | 100 Mbps | 99.9% | Max 10 (1-10: included) | 0 |
| VpnGw1 | Production | $0.19 | 650 Mbps | 99.95% | Max 30 (1-10: included; 11-30: $0.015) | 128 |
| VpnGw2 | Production | $0.49 | 1 Gbps | 99.95% | Max 30 (1-10: included; 11-30: $0.015) | 128 |
| VpnGw3 | Production | $1.25 | 1.25 Gbps | 99.95% | Max 30 (1-10: included; 11-30: $0.015) | 128 |

* Benchmark data obtained by running iperf3 between VNets in the same region, with a minimum duration of 120 seconds and up to 32 flows. Refer to this page for more details on how to measure throughput across your Azure VPN gateways.

VpnGw1 at 650 Mbps provides a 6.5x performance improvement over the old Standard gateway, and VpnGw2 at 1 Gbps provides 5x over the old High Performance gateway, at the same respective prices. We also increased the Site-to-Site (S2S) tunnel count from 10 to 30 tunnels, so you can connect more of your sites to the VPN gateway. There is a per-tunnel charge for the 11th through 30th S2S tunnels. We are also introducing a new, even higher performance VPN gateway called VpnGw3, which has shown 1.25 Gbps throughput with multiple tunnels in our tests. Please note that actual performance in production is highly dependent on application behavior, the quality of your ISP, and the actual distance (network path) from your physical VPN device to the Azure region hosting your VNet.

Customers often deploy an S2S VPN to connect branch offices to the same Azure VNet, while the main corporate WAN is accessed via ExpressRoute. The corporate WAN may also use an S2S VPN as a backup path in case of a connectivity issue with ExpressRoute.

If you have a 1 Gbps ExpressRoute circuit, you can now also have a 1 Gbps S2S tunnel on the backup path, so if a failover event occurs you still have a performant network connection to your VNets, albeit over the Internet. Note the performance caveats mentioned previously regarding the quality of your ISP.

New VPN capabilities – Custom IPsec/IKE policy & multi-site policy-based VPN

We are also releasing two new features to improve VPN manageability and give customers more choices: support for custom IPsec/IKE connection policies to satisfy your compliance and security requirements, and the ability to connect multiple on-premises networks using policy-based firewall devices to your Azure VPN gateway.

With custom IPsec/IKE policy, you can now set the exact cryptographic algorithms and key strengths on each S2S or VNet-to-VNet connection to satisfy your enterprise compliance and security requirements. Azure VPN gateways use a default set of IPsec/IKE cryptographic algorithms that maximizes interoperability with a wide range of third-party VPN devices, but the default list may not meet all your compliance requirements. For example, you may need a higher Diffie-Hellman group or PFS (Perfect Forward Secrecy) group than the default, or you may want to exclude certain cryptographic algorithms (e.g., SHA1 or 3DES). You can now specify the exact combination of cryptographic algorithms and key strengths, as shown in the example below:
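For illustration, here is a minimal sketch of such a custom policy, expressed as a connection's IPsec/IKE policy fragment. The property names and values are assumptions based on our reading of the ARM schema for virtual network gateway connections; verify them against the documentation linked at the end of this post.

// Illustrative only: a custom IPsec/IKE policy for one S2S connection.
// Property names and values are assumptions based on the ARM schema for
// virtualNetworkGateways/connections; verify against the linked docs.
const ipsecPolicy = {
  ikeEncryption: "AES256",       // IKE phase 1 encryption
  ikeIntegrity: "SHA384",        // IKE phase 1 integrity
  dhGroup: "DHGroup24",          // Diffie-Hellman group for IKE
  ipsecEncryption: "GCMAES256",  // IPsec phase 2 encryption
  ipsecIntegrity: "GCMAES256",   // IPsec phase 2 integrity
  pfsGroup: "PFS24",             // Perfect Forward Secrecy group
  saLifeTimeSeconds: 7200,       // security association lifetime
  saDataSizeKilobytes: 2048000   // data volume before rekey
};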

Additionally, you can now connect multiple on-premises policy-based VPN devices to your Azure VPN gateway by utilizing such custom policies.

We understand that configuring and maintaining VPNs for mission-critical workloads are complex tasks. These new VPN capabilities were developed based on customer feedback. We have rewritten much of our documentation and will be providing more deployment blueprints, guidance, and best practices.

Please let us know how we can further enhance the Azure VPN service. Here are some links to get started with the new VPN gateways:

About new VPN gateway SKUs & migration instructions
About cryptographic requirements and Azure VPN gateways
Configure IPsec/IKE policy on S2S VPN or VNet-to-VNet connections
Connect multiple policy-based VPN devices to Azure VPN gateway

Source: Azure

Embed Video Indexer insights in your website

Video Indexer embeddable widgets are a great way to start adding AI insights to your videos. Whether you want to add deep search to your published videos or make the video content on your website more engaging for your users, you can easily achieve that by using the embed option in the Video Indexer web application or by using the Video Indexer API.

Getting Started

To get started embedding Video Indexer insights in your website, you must have a registered account. If you don't have an account, you can easily sign in to Video Indexer using a Microsoft, Google, LinkedIn, or Azure Active Directory account and have one generated for you.

Video Indexer supports embedding two types of widgets into your application: Cognitive Insights and Player.

Cognitive Insights Widget

This widget contains all the visual insights extracted from the video during the indexing process, such as people appearances, top keywords, sentiment analysis, transcript, and search.

It also allows you to change the language and get all the insights based on the selected language. Here is an example:

Player Widget

The player widget is a customized Azure Media Player that, in addition to video streaming, provides extra features such as playback speed control and closed captions. Here is an example:

To embed a widget in your website, you need to get an embed code and paste it into your HTML file. The embed code contains an iframe tag with the embed URL.

You have two options to get the embed URL: via the Video Indexer web application or by calling the corresponding Video Indexer API method. We will cover both.

Get your embed code via Video Indexer web application (Public videos only)

You can easily get the embed code for your indexed videos with a click of a button:

1. Log in to your account at the Video Indexer web application.

2. Upload a video

3. After the indexing process has completed, click “play” on the video in the main gallery page.

4. Click the “embed” button and select the widget you want to embed (player/insights), with the desired options.

5. Copy and paste the code into your html file.

Note: if you embed via the web application, you can embed only public videos. Private videos require an accessToken parameter in the embed URL, which contains a one-hour access token for the video.

Get your embed code via Video Indexer API (Public or Private videos)

To get the embed URL that contains the accessToken for your video, you can use the Video Indexer API and call Get Insights Widget Url or Get Player Widget Url, passing the video id.

If you manage your own videos, you can also get the embed code based on your internal video id by calling Get Insight Widget By External Id.

To start working with the API, you first have to register and get your API subscription key. The Getting started with the Video Indexer API blog post covers this in detail.
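As a hedged sketch, fetching an embed URL from client-side code could look like the following. The endpoint path is a placeholder and the response handling is an assumption; the getting-started post above has the real endpoint and authentication details.

// Hypothetical sketch: request an insights-widget embed URL for a video.
// "<api-base>" is a placeholder for the real API base path from the docs;
// Ocp-Apim-Subscription-Key is the standard API gateway key header.
const apiKey = "<your-subscription-key>";
const videoId = "c4c1ad4c9a"; // example id used in this post
fetch(`<api-base>/${videoId}/InsightsWidgetUrl`, {
  headers: { "Ocp-Apim-Subscription-Key": apiKey }
})
  .then((response) => response.text()) // assuming the body is the embed URL
  .then((embedUrl) => {
    document.getElementById("insights-iframe").src = embedUrl;
  });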

After you have your embed URL, just set it as the “src” attribute of an iframe element, which you can place anywhere in your website:

<iframe width="580" height="580" src="https://www.videoindexer.ai/embed/insights/c4c1ad4c9a/?widgets=people,search" frameborder="0" allowfullscreen></iframe>

Embedding options

Video Indexer widgets are customizable to your needs. You can choose to embed only the insights widget or only the player, or embed them both.

Embed both types of widgets in your application

Copy and paste the embed codes for the player widget and the insights widget, and include the following JS file before the closing </body> tag:

<script src="https://breakdown.blob.core.windows.net/public/vb.widgets.mediator.js"></script>

The above file is required in order to handle cross-origin communication between the widgets.
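Putting it together, a page embedding both widgets for the example video from above might look like this (the iframe sizes are arbitrary):

<!-- Player and insights widgets for the same video, plus the mediator script
     that wires them together (placed before the closing </body> tag). -->
<iframe width="580" height="780" src="https://www.videoindexer.ai/embed/player/c4c1ad4c9a/" frameborder="0" allowfullscreen></iframe>
<iframe width="580" height="580" src="https://www.videoindexer.ai/embed/insights/c4c1ad4c9a/" frameborder="0" allowfullscreen></iframe>
<script src="https://breakdown.blob.core.windows.net/public/vb.widgets.mediator.js"></script>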

You can read more about how it works at our docs.

Embed cognitive insights and use your Azure Media Player

If you are using Azure Media Player in your website, you can easily embed the Video Indexer insights widget and have it communicate with your player through the Video Indexer communication plugin. Just paste the following script into your page after the Azure Media Player library script and you are all set:

<script src="https://breakdown.blob.core.windows.net/public/amp-vb.plugin.js"></script>

The plugin also lets you get the VTT captions file for your player and choose whether to sync the transcript language with your video.

For more information and code samples see the relevant Video Indexer docs.

Embed cognitive insights with any video player

If you are using another player, such as the YouTube player, Vimeo, or your own, you can still embed Video Indexer cognitive insights and make them communicate with your player, for example jumping to the relevant moment when a user clicks one of the widgets.

To achieve that, you will have to implement a few functions and listen for the “postMessage” JavaScript event.
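As a minimal sketch, assuming the insights widget posts an object with a time field (in seconds) from the videoindexer.ai origin (see the demo below for the exact message shape), the listener could look like this:

// Listen for messages posted by the embedded insights widget and seek your
// own player accordingly. "myPlayer" is a placeholder for your player object,
// and the message shape is an assumption; check the demo for specifics.
window.addEventListener("message", function (event) {
  if (event.origin !== "https://www.videoindexer.ai") return; // trust only the widget
  if (event.data && typeof event.data.time === "number") {
    myPlayer.currentTime = event.data.time; // jump to the clicked moment
  }
});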

Here is a detailed demo that demonstrates this approach.

How to customize Video Indexer widgets?

Video Indexer widgets are customizable to your needs. You can choose to embed only the insights that you think will be most valuable to your users.

Customize the cognitive insights widget

You can choose the types of insights you want by specifying them as the value of the following URL parameter, added to the embed code you get (from the API or from the web application):

&widgets=<list of wanted widgets>

The possible values are: people, keywords, sentiments, transcript, search.

For example, if you want to embed a widget containing only the people and search insights, the iframe embed URL will look like this:

https://www.videoindexer.ai/embed/insights/c4c1ad4c9a/?widgets=people,search

You can see a detailed demo here and read more at Video Indexer docs.

Customize the player widget

If you embed the Video Indexer player, you can choose the size of the player by specifying the size of the iframe.

For example:

<iframe width="640" height="360" src="https://www.videoindexer.ai/embed/player/{id}" frameborder="0" allowfullscreen></iframe>

By default, the Video Indexer player has auto-generated closed captions, based on the transcript extracted from the video in the source language that was selected when the video was uploaded.

If you want to embed with a different language, you can add &captions=<language | "all" | "false"> to the player embed URL, or pass "all" as the value to get captions in all available languages.

The embed URL will then look like this: https://www.videoindexer.ai/embed/player/6bc9113d26/?captions=italian. If you want to disable captions, pass "false" as the value of the captions parameter.

Autoplay: by default, the player starts playing the video. You can opt out by adding &autoplay=false to the embed URL above.
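For example, combining both options (all caption languages, no autoplay) for the player id shown above:

<iframe width="640" height="360" src="https://www.videoindexer.ai/embed/player/6bc9113d26/?captions=all&autoplay=false" frameborder="0" allowfullscreen></iframe>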

For more details, please take a look at the Video Indexer documentation. Follow us on Twitter @Video_Indexer to get the latest Video Indexer news. If you have any questions or need help, contact us at visupport@microsoft.com.
Source: Azure

Announcing Storage Service Encryption for Azure Managed Disks

We announced the general availability of Managed Disks in February 2017. Today, we are excited to announce Azure Storage Service Encryption (SSE) for Managed Disks. SSE provides encryption at rest and safeguards your data to meet your organizational security and compliance commitments.

SSE is enabled by default for all Managed Disks, Snapshots, and Images in all public and Germany regions. Starting June 10th, 2017, all new Managed Disks/Snapshots/Images and all new data written to existing Managed Disks are automatically encrypted at rest with keys managed by Microsoft. Visit the Managed Disks FAQ page for more details.

As always, we would love to hear your feedback via comments. If you run into any problems with this feature or have any questions, we are always ready to help you at our Azure Storage MSDN forum.

Learn more about Managed Disks using the links below:

Azure Managed Disks Overview

Migrate to Managed Disks
Source: Azure

Event Hubs Auto-Inflate, take control of your scale

Azure Event Hubs is a hyper-scalable telemetry ingestion service that can ingest millions of events per second. It provides a distributed streaming platform with low latency and configurable time retention, enabling you to ingest massive amounts of telemetry into the cloud and read the data from multiple applications using publish-subscribe semantics.

Event Hubs lets you scale with Throughput Units (TUs). A TU is the unit of reserved capacity that you purchase. A single TU entitles you to 1 MB/second or 1,000 events/second of ingress and 2 MB/second or 2,000 events/second of egress. This capacity has to be reserved/purchased when you create an Event Hubs namespace.

This reservation works well when you have steady, predictable usage that is not bound to change. However, many customers increase their usage of Event Hubs after onboarding to the service, and for greater data transfer they had to increase their predetermined TUs manually. Well, not anymore!

Event Hubs is launching the new Auto-Inflate feature, which enables you to scale up your TUs automatically to meet your usage needs. This simple opt-in feature gives you control to prevent throttling when your data ingress or egress rates exceed your predetermined TUs.

When enabling Auto-Inflate on your namespace, you can set a limit on the number of TUs to scale up to. This simple configuration lets you start small on your TUs and scale up as your data in Event Hubs grows. With no changes to your existing setup, this cost-effective value-add feature gives you more control based on your usage needs.
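As a rough sketch of what the opt-in looks like at the resource level, the namespace carries an auto-inflate flag and a TU ceiling. The property names below are assumptions based on our reading of the Microsoft.EventHub ARM schema; the article linked below has the authoritative template.

// Illustrative ARM resource fragment, written as a JS object for readability.
// Property names are assumptions based on the Microsoft.EventHub schema;
// verify against the "Enable auto-inflate on your namespace" article.
const namespaceResource = {
  type: "Microsoft.EventHub/namespaces",
  sku: { name: "Standard", tier: "Standard", capacity: 1 }, // start with 1 TU
  properties: {
    isAutoInflateEnabled: true,  // opt in to Auto-Inflate
    maximumThroughputUnits: 20   // ceiling to scale up to
  }
};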

This feature is now available in all Azure regions, and you can enable it on your existing Event Hubs namespaces. The article Enable auto-inflate on your namespace describes the auto-inflate (scale-up) feature in detail.

Next steps

Learn how you can enable this feature on your namespace – Enable auto-inflate on your namespace

Use ARM to enable the scale-up feature

Onboard to Azure Event Hubs

Start enjoying this feature, available today.

If you have any questions or suggestions, leave us a comment below.
Source: Azure

Microsoft Announces Virtual Event: Azure OpenDev on June 21st

Whether it is to drive agility, develop new skills, or engage and scale through community, open source is at the core of how developers deliver value in a cloud world. I can clearly see this accelerating through my work on Microsoft Azure. Open source technologies are an integral part of our offerings, and we have a strong partner ecosystem helping us deliver the open source solutions developers love on Azure.

To show what’s possible with open source on Azure, Microsoft will host the first Azure OpenDev online event on June 21st at 9 am (Pacific Time). This new event series is designed for developers and architects using open source technologies and the cloud to accelerate innovation and digital transformation of the business. The event will include open source thought leaders such as Canonical’s founder Mark Shuttleworth, Docker’s COO Scott Johnston, and other community advocates from Chef, Pivotal, Red Hat, and more, sharing their perspectives on what is possible with open source and the cloud. I hope you’ll save the date to participate in this community event.

The first ever Azure OpenDev

Azure OpenDev is an online conference series scheduled to occur every 3 to 4 months. In this first edition, we will showcase ways to run your open source applications and solutions in the cloud. We’ll show you how to create microservices with open languages and platforms such as Java and Node.js (and really anything else), leverage containers and orchestrators, and improve your DevOps pipeline using open tools. You’ll hear from people who are building and deploying open source solutions every day. They’ll share best practices, lessons learned, and helpful tips for using the cloud. You’ll also have the chance to ask about what’s on your mind through a Q&A with subject matter experts from Microsoft and our partners.

Check out the full speaker lineup and session topics on the event page.

Many opportunities to explore and learn

We’ve already released how-to sessions and resources so you can experience open source software and tools on Azure:

Watch Joe Binder, Principal Product Manager for Microsoft Azure, show how easy it is to deploy a Java Spring Boot app to Azure. Using IntelliJ and the Azure CLI 2.0, Joe takes an existing Spring Boot app, containerizes it, and quickly deploys it to Azure Web Apps on Linux as well as Kubernetes on Azure Container Services.

Matt Hernandez, Senior Program Manager for Microsoft Azure, deploys a sample Node.js MEAN application to Azure Web Apps on Linux, presenting the full developer experience with Visual Studio Code and the Azure CLI 2.0. Matt also showcases how to store the app’s data inside Cosmos DB, a drop-in replacement for MongoDB to which it maintains full protocol compatibility.

Each video comes with all the instructions, code, and scripts available on GitHub for you to try the services and solutions yourself. Whether you’re completely new to our cloud or already an Azure ninja, you’ll be able to learn and practice something new.

The how-to videos are available online today on the Azure OpenDev event page.

Save the date – and spread the word

Visit Azure OpenDev to learn more about the event and save the date to your calendar. Please help us spread the word to your friends and family. It takes a village!

If you are unable to watch Azure OpenDev live on June 21st at 9 am PT and enjoy the full live experience, all is not lost! Immediately after the event, all the sessions will be available on-demand on the event web page.
Source: Azure

Running OpenBSD on Azure

As you know, Microsoft has officially supported FreeBSD virtual machines (VMs) on Azure since 2014, and last year Microsoft announced the availability of FreeBSD 10.3 as a ready-made VM image in the Azure Marketplace. Now if you go to the Azure Marketplace, you will also see a FreeBSD 11 VM and a pfSense offering there too.

Today we are happy to share that Azure supports OpenBSD 6.1, the result of a collaboration between Esdenera and Microsoft. Esdenera is also bringing its OpenBSD-based firewall product to the Azure Marketplace.

What is OpenBSD?

The OpenBSD project produces a freely available, multi-platform 4.4BSD-based UNIX-like operating system. The goals place emphasis on correctness, security, standardization, and portability.

OpenBSD is thought of as the most secure UNIX-like operating system by many security professionals, as a result of the never-ending comprehensive source code audit.
OpenBSD is a full-featured UNIX-like operating system available in source and binary form at no charge.
OpenBSD integrates cutting-edge security technology suitable for building firewalls and private network services in a distributed environment.
OpenBSD benefits from strong ongoing development in many areas, offering opportunities to work with emerging technologies and an international community of developers and end users.

How to bring your own OpenBSD to Azure?

You can follow the guidance to prepare your custom OpenBSD image, convert the virtual hard disk to the fixed VHD format, upload it to Azure Storage, and create a virtual machine from the uploaded VHD.

The latest Azure Agent adds support for OpenBSD, contributed by Esdenera. The next release will be v2.2.13.

Do you want to use an OpenBSD-based virtual appliance on Azure?

The Esdenera Firewall 3 is a professional network appliance built for enterprise networks, Infrastructure-as-a-Service (IaaS), and remote access solutions. It is built upon Esdenera’s TNOS network operating system, an OpenBSD-based platform developed for Esdenera’s OEM customers.
Source: Azure

Optimizing Performance of Azure SQL Data Warehouse with SentryOne

There is constant talk about big data: endless marketing decks, whitepapers, and blog posts about how fast data is multiplying on-premises and in the cloud. In many cases, familiar database technologies, such as Symmetric Multi-Processing (SMP) or “scale-up” architecture, can no longer process growing data sets fast enough for businesses to make timely decisions. Massively Parallel Processing (MPP) or “scale-out” architecture is quickly becoming the preferred alternative, proven capable of handling larger (or massive) data sets.

Azure SQL Data Warehouse, or simply Azure SQL DW, allows companies to use MPP to take advantage of significant performance gains crunching large data sets, without the investment and overhead of maintaining on-premises hardware and software. Simply provision an Azure SQL DW instance, and you gain access to all the advantages of MPP without the commitment of purchasing and maintaining the infrastructure associated with it. That noted, because Azure is a pay-as-you-go solution, it’s important to be as efficient as possible with those resources to get the most value from this platform-as-a-service (PaaS).

SentryOne (formerly SQL Sentry) has always provided solutions for data professionals to monitor, diagnose, and optimize performance of SQL Server and the Microsoft Data Platform. SentryOne DW Sentry is the solution that provides critical performance insight for Azure SQL DW.

SentryOne DW Sentry provides unequaled visibility into one of the most expensive steps in MPP query execution: data movement. Additionally, you can monitor and be notified regarding any loads, backups, and restores of your data. Explore activity in an Outlook-style calendar view, or generate your own customized alerts based on thresholds and other logic such as excessive queuing and suspended queries due to lock contention or exhaustion of concurrency slots.

Data Movement Dashboard

Data movement is a natural part of how MPP systems operate, but heavy data movement can at times be an indication of poorly designed queries or incorrectly distributed tables. The DW Sentry Data Movement Dashboard is designed to let the user quickly identify data movement activity and zoom in to pinpoint the time periods where activity was highest. Additionally, it highlights activity that is unbalanced across compute nodes and their associated distributions.

DW Sentry, like all SentryOne solutions, provides the ability to zoom in on key periods of activity, go back to a relevant time period, and jump to other SentryOne diagnostic and optimization features.

Distributed Queries

Query concepts are different in MPP architecture: every query must be deconstructed into smaller pieces and run against individual distributed compute resources in the system. DW Sentry collects and displays the details of each MPP query, presenting all historical information in a way that allows filtering, sorting, grouping, and other historical analysis. It also provides the distributed query step details, so this information can be reviewed along with the primary query request information.

DW Sentry also provides alerting on query and load performance indicators, notifying users of performance issues.

Loads, Backups, and Restores

For Azure SQL DW, we track load processes from SSIS packages so load performance can be tracked over time, with a display similar to the distributed query collection.

Event Manager Calendar

One of the most interesting DW Sentry features for Azure SQL DW monitoring is one of the original components found in all SentryOne solutions, the Event Calendar. In MPP systems, concurrency is an important aspect of performance; the Event Calendar graphically displays all activity that is occurring at a given point in time to promote quick diagnosis of potential resource constraint issues.

Same SentryOne Client, Same SentryOne Monitoring Service

As with all SentryOne products, you don't have to change anything in your monitoring footprint to monitor on-premises targets, such as SQL Servers, Analysis Services, Hyper-V hosts and guests, or APS appliances, alongside cloud solutions like Azure SQL DB and Azure SQL DW. You can monitor all of it with a single implementation and view all the information in any SentryOne client. You can also easily trial and deploy the complete suite of products from the Azure Marketplace.

SentryOne continues to improve its Microsoft Data Platform performance optimization solutions, so watch for future announcements and further improvements. Keep growing your "big data" with the confidence that SentryOne will help you monitor, diagnose, and optimize your platform.

Sound interesting? Download a trial of SentryOne, which includes DW Sentry, allowing you to monitor Azure SQL DW. The trial also includes our full suite of solutions built to elevate performance across the entire Microsoft Data Platform.
Source: Azure

Announcing Storage Service Encryption with Customer Managed Keys limited preview

Today, we are excited to announce the preview of Azure Storage Service Encryption with Customer Managed Keys, integrated with Azure Key Vault, for Azure Blob Storage. Azure customers already benefit from Storage Service Encryption for Azure Blob and File Storage using Microsoft-managed keys.

Storage Service Encryption with Customer Managed Keys uses Azure Key Vault, which provides highly available and scalable secure storage for RSA cryptographic keys backed by FIPS 140-2 Level 2 validated HSMs (Hardware Security Modules). Key Vault streamlines the key management process and enables customers to fully maintain control of the keys used to encrypt data, and to manage and audit their key usage.

This is one of the features most requested by enterprise customers looking to protect sensitive data as part of their regulatory or compliance needs, such as HIPAA and BAA requirements.


Customers can generate or import their RSA key into Azure Key Vault and enable Storage Service Encryption. Azure Storage handles the encryption and decryption in a fully transparent fashion using envelope encryption: data is encrypted using an AES-based key, which is in turn protected using the Customer Managed Key stored in Azure Key Vault.
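Conceptually, envelope encryption follows the pattern sketched below. This is an illustration of the general technique in Node.js, not Azure Storage's actual implementation.

// Conceptual sketch of envelope encryption: encrypt data with a random AES
// data key, then wrap that data key with an RSA key (the customer-managed
// key). Illustrative only; this is not Azure Storage's implementation.
const crypto = require("crypto");

function envelopeEncrypt(plaintext, rsaPublicKeyPem) {
  const dataKey = crypto.randomBytes(32); // per-object AES-256 key
  const iv = crypto.randomBytes(12);      // GCM nonce
  const cipher = crypto.createCipheriv("aes-256-gcm", dataKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const wrappedKey = crypto.publicEncrypt(rsaPublicKeyPem, dataKey); // wrap with the KEK
  return { ciphertext, iv, tag: cipher.getAuthTag(), wrappedKey };
}

// Note: rotating the RSA key only requires re-wrapping the small data key,
// not re-encrypting the data itself, which is why key rotation is cheap.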

Customers can rotate their key in Azure Key Vault as their compliance policies dictate. When they rotate the key, Azure Storage detects the new key version and re-encrypts the Account Encryption Key for that storage account. This does not result in re-encryption of all data, and no other action is required from the user.

Customers can also revoke access to the storage account by revoking access to their key in Azure Key Vault. There are several ways to revoke access to your keys; please refer to Azure Key Vault PowerShell and Azure Key Vault CLI for more details. Revoking access will effectively block access to all blobs in the storage account, as the Account Encryption Key becomes inaccessible to Azure Storage.

Customers can enable this feature on all available redundancy types of Azure Blob storage, including premium storage, and can toggle between Microsoft-managed and customer-managed keys. There is no additional charge for enabling this feature.

You can enable this feature on any Azure Resource Manager storage account using the Azure Portal, Azure PowerShell, Azure CLI, or the Microsoft Azure Storage Resource Provider API.

To participate in the preview, please send an email to ssediscussions@microsoft.com. Find out more about Storage Service Encryption with Customer Managed Keys.
Source: Azure

Azure #CosmosDB: Case study around RU/min with the Universal Store Team

Karthik Tunga Gopinath, from the Universal Store Team at Microsoft, leveraged RU/min to optimize provisioning for his team's entire workload with Azure Cosmos DB. He shares his experience in this blog.

The Universal Store Team (UST) at Microsoft is the platform powering all “storefronts” of Microsoft assets. These storefronts can be web, rich client, or brick-and-mortar stores. This includes the commerce, fraud, risk, identity, catalog, and entitlements/license systems.

Our team in the UST streams user engagement data for paid applications, such as games, in near real-time. This data is used by Microsoft fraud systems to identify whether a customer is eligible for a refund upon request. The streaming data is ingested into Azure Cosmos DB. Today, this usage data, along with other insights provided by the Customer Knowledge platform, forms the key factors in the refund decision. Since this infrastructure is used for UST self-serve refunds, it is imperative that we drive down the operating cost as much as possible.

We chose Azure Cosmos DB primarily for three reasons: guaranteed SLAs, elastic scale, and global distribution. It is crucial for the data store to keep up with the incoming stream of events with guaranteed SLAs, and the storage needed to be replicated in order to serve refund queries faster across multiple regions, with support for disaster recovery.

The problem

Azure Cosmos DB provides predictable, low-latency performance backed by the most comprehensive SLA in the industry. Such performance requires capacity provisioning at per-second granularity. However, relying only on per-second provisioning made cost a concern for us, as we had to provision for peaks.

For the refund scenario, we need to store 2 TB of usage data. This, coupled with the fact that we cannot fully control the write skew, causes a few problems. The incoming data has temporal bursts for various reasons, such as new game releases, discounts on game purchases, weekday vs. weekend patterns, etc. This resulted in writes being frequently throttled. To avoid being throttled during bursts, we needed to allocate more RUs. This over-allocation proved to be expensive, since we didn't use all the allocated RUs the majority of the time.

Another reason we allocate more RUs is to decrease our Mean Time to Repair (MTTR). This matters primarily when trying to catch up to the current stream of metric events after a delay or failure; we need enough capacity to catch up as soon as possible. Currently, the platform handles between 2,500 and 4,000 writes/sec. In theory, we only need 24K RU/s, since each write costs 6 RUs given our document size. However, because of the skew, it is hard to predict when a write will happen on which partition. Also, the partition key we use is designed for extremely fast read access, for a good customer experience during self-service refunds.
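The capacity math above, written out with the numbers from this post:

// Back-of-the-envelope RU math using the figures quoted above.
const writesPerSecond = 4000;  // peak of the 2,500-4,000 writes/sec range
const ruPerWrite = 6;          // RU cost of one write at our document size
const theoreticalRu = writesPerSecond * ruPerWrite; // 24,000 RU/s in theory
// In practice, write skew and hot partitions pushed the allocation to
// 300K RU/s before RU/m was available.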

Request Units per Minute (RU/m) Stress Test Experiment

To test RU/m, we designed a catchup (failure recovery) test in our dev environment. Previously, we allocated 300K RU/s for the collection. We enabled RU/m and reduced compute capacity from 300K RU/s to 100K RU/s, which gave us an extra 1M RU/m. To push our writes to the limit and test the catchup scenario, we simulated an upstream failure: we stopped streaming for about 20 hours, then started streaming the backlog data and observed whether the application could catch up with the lower RU/s plus the additional RU/m. The dev environment had the same load as we see in production.

Data Lag Graph

RU/min usage

The catchup test is our worst-case workload. The first graph shows the lag in streaming data: we see that over time the application catches up and reduces the lag to near zero (real-time). From the second graph, we see that RU/m is utilized when the catchup test starts, showing that the load was beyond our allocated RU/s, which is the desired outcome for the test. RU/m usage stays between 35K and 40K until we catch up. This is the expected behavior, since the peak load on Azure Cosmos DB occurs during this period. The slow drop in RU/m usage occurs as the catchup nears completion: once the data is close to real-time, the application no longer needs the extra RU/m, since the provisioned RU/s is enough to meet the throughput requirements of normal operations most of the time.

RU/m usage during normal conditions

As mentioned above, during normal operations the streaming pipeline requires only 24K RU/s. However, because there might be a lot of activity on a specific partition (a “hot spot”), each partition can have unexpected capacity needs. Looking at the graph below, you can see sporadic RU/m consumption, as RU/m is still used for non-peak load. Such hot spots can happen for the numerous reasons mentioned above. We also noticed that the application did not experience any throttling during the entire test period. Previously, we allocated 300K RU/s to handle these bursts; now, with RU/m, we only need to provision 100K RU/s. RU/m also helped us during normal operation, not just during peak load.

RU/m usage during normal operations

Results

The above experiments proved that we could leverage RU/m and lower our RU/s allocation while still handling peak load. By leveraging RU/m, we reduced our operating cost by more than 66%.

Next steps

The team is actively working on ways to reduce the write skew and is working with the Azure Cosmos DB team to get the most out of RU/m.

Resources

Our vision is to be the most trusted database service for all modern applications. We want to enable developers to truly transform the world we are living in through the apps they are building, which is even more important than the individual features we are putting into Azure Cosmos DB. We spend countless hours talking to customers every day and adapting Azure Cosmos DB to make the experience truly stellar and fluid. We hope that the RU/m capability will enable you to do more and will make your development and maintenance even easier!

So, what are the next steps you should take?

First, understand the core concepts of Azure Cosmos DB.
Learn more about RU/m by reading the documentation:

How RU/m works
Enabling and disabling RU/m
Good use cases
Optimize your provisioning
Specify access to RU/m for specific operations
Visit the pricing page to understand billing implications

If you need any help or have questions or feedback, please reach out to us through askcosmosdb@microsoft.com. Stay up-to-date on the latest Azure Cosmos DB news (#CosmosDB) and features by following us on Twitter @AzureCosmosDB and join our LinkedIn Group.

About Azure Cosmos DB

Azure Cosmos DB started as “Project Florence” in late 2010 to address developer pain points faced by large-scale applications inside Microsoft. Observing that the challenges of building globally distributed apps are not unique to Microsoft, in 2015 we made the first generation of this technology available to Azure developers in the form of Azure DocumentDB. Since that time, we’ve added new features and introduced significant new capabilities. Azure Cosmos DB is the result. It represents the next big leap in globally distributed, at-scale cloud databases.
Source: Azure

New sample model for Azure Analysis Services

In April we announced the general availability of Azure Analysis Services, which evolved from the proven analytics engine in Microsoft SQL Server Analysis Services. The success of any modern data-driven organization requires that information be available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required, including finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics, before they can explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users so that all they need to do is connect to the model from any BI tool and immediately explore the data and gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

Last December we wrote a post on creating your first model with Azure Analysis Services. Now you can try Azure Analysis Services without the need to build anything, using our new sample model based on the Adventure Works Internet Sales database. It is designed to show a range of Analysis Services modeling features.

Ready to give it a try? Follow the steps in the rest of this blog post and you’ll see how easy it is.

Before getting started, you’ll need:

Azure Subscription – Sign up for a free trial.

Create an Analysis Services server in Azure

1. Go to http://portal.azure.com.

2. In the Menu blade, click New.

3. Expand Data + Analytics, and then click Analysis Services.

4. In the Analysis Services blade, enter the following and then click Create:

Server name: Type a unique name.
Subscription: Select your subscription.
Resource group: Select Create new, and then type a name for your new resource group.
Location: This is the Azure datacenter location that hosts the server. Choose a location nearest you.
Pricing tier: For our simple model, select D1. This is the smallest tier and great for getting started. The larger tiers are differentiated by how much cache and how many query processing units they have. Cache indicates how much data can be loaded into the cache after it has been compressed. Query processing units, or QPUs, indicate how many queries can be supported concurrently. Higher QPUs may mean better performance and allow for higher user concurrency.

Now that you’ve created a server, you can add the sample to it.

Adding the sample model to your server

1. On the overview blade of your server, click + New Model on the top left.

2. Under Choose data source, select Sample data and click Add.

It will take a few moments for the model to be created. When it is finished, you will see it in the list of models on your server:

The model can now be queried in Microsoft Excel or Power BI Desktop, or edited in Visual Studio, by clicking on the three dots next to the model name and selecting the tool that you wish to use.

Note: if you need to download any of these tools, click on the links below:

SQL Server Data Tools/Visual Studio – Download the latest version for free.

Power BI Desktop – Download the latest version for free.

Then you can start visualizing the data.

Learn more about Azure Analysis Services.
Source: Azure