Introducing proximity placement groups

Co-locate your Azure resources for improved application performance

The performance of your applications is central to the success of your IT organization. Application performance can directly impact your ability to increase customer satisfaction and ultimately grow your business.

Many factors can affect the performance of your applications. One of them is network latency, which is influenced, among other things, by the physical distance between the deployed virtual machines.

For example, when you place your Microsoft Azure Virtual Machines in a single Azure region, the physical distance between the virtual machines is reduced. Placing them within a single availability zone brings them closer still. However, as the Azure footprint grows, a single availability zone may span multiple physical data centers, resulting in network latency that can impact your overall application performance. If a region does not support availability zones, or if your application does not use them, the latency between the application tiers may increase as a result.

Today, we are announcing the preview of proximity placement groups, a new capability for achieving co-location of your Azure Infrastructure as a Service (IaaS) resources and low network latency among them.

Azure proximity placement groups are a new logical grouping capability for your Azure Virtual Machines, used as a deployment constraint when selecting where to place your virtual machines. When you assign your virtual machines to a proximity placement group, they are placed in the same data center, resulting in lower and more deterministic latency for your applications.

When to use proximity placement groups

Proximity placement groups improve overall application performance by reducing the network latency among virtual machines. You should consider using proximity placement groups for multi-tiered, IaaS-based deployments where application tiers are deployed using multiple virtual machines, availability sets, and/or virtual machine scale sets.

As an example, consider the case where each tier of your application is deployed in an availability set or virtual machine scale set for high availability. Using a single proximity placement group for all the tiers of your application, even if they use different virtual machine SKUs and sizes, forces all the deployments to land in the same data center for the best latency.
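As a sketch of what this looks like in an ARM template, an availability set for one tier can reference the same proximity placement group resource; the parameter names (`availabilitySetName`, `ppgName`) and fault/update domain counts below are illustrative placeholders, not prescribed values:

```json
{
  "apiVersion": "2018-06-01",
  "type": "Microsoft.Compute/availabilitySets",
  "name": "[parameters('availabilitySetName')]",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/proximityPlacementGroups/', parameters('ppgName'))]"
  ],
  "sku": {
    "name": "Aligned"
  },
  "properties": {
    "platformFaultDomainCount": 2,
    "platformUpdateDomainCount": 5,
    "proximityPlacementGroup": {
      "id": "[resourceId('Microsoft.Compute/proximityPlacementGroups', parameters('ppgName'))]"
    }
  }
}
```

Repeating the same `proximityPlacementGroup` reference on the availability set or scale set of every tier is what ties the whole application to one data center.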

To get the best results with proximity placement groups, make sure you're using accelerated networking and that your virtual machines are optimized for low latency.
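Accelerated networking is enabled on the network interface. A minimal sketch of such a NIC in an ARM template follows; `nicName` and `subnetId` are placeholder parameters, and accelerated networking is only supported on certain VM sizes:

```json
{
  "apiVersion": "2018-08-01",
  "type": "Microsoft.Network/networkInterfaces",
  "name": "[parameters('nicName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "enableAcceleratedNetworking": true,
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "subnet": {
            "id": "[parameters('subnetId')]"
          },
          "privateIPAllocationMethod": "Dynamic"
        }
      }
    ]
  }
}
```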

Getting started with proximity placement groups

The easiest way to start with proximity placement groups is to use them with your Azure Resource Manager (ARM) templates.

To create a proximity placement group resource, add the following statement to your template:

{
  "apiVersion": "2018-04-01",
  "type": "Microsoft.Compute/proximityPlacementGroups",
  "name": "[parameters('ppgName')]",
  "location": "[resourceGroup().location]"
}

To use this proximity placement group later in the template with a virtual machine (or an availability set or virtual machine scale set), add the following dependency and property:

{
  "name": "[parameters('virtualMachineName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2018-06-01",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/proximityPlacementGroups/', parameters('ppgName'))]"
  ],
  "properties": {
    "proximityPlacementGroup": {
      "id": "[resourceId('Microsoft.Compute/proximityPlacementGroups', parameters('ppgName'))]"
    }
  }
}

To learn more about proximity placement groups, see the tutorials on using proximity placement groups with Azure PowerShell and the Azure CLI.
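As an illustration of the CLI path, a minimal sketch might look like the following. The resource names (`myResourceGroup`, `myPPG`, `myVM`) are placeholders, and the exact options may vary by CLI version, so check `az ppg create --help` and `az vm create --help` before relying on them:

```shell
# Create a resource group and a proximity placement group in it
az group create --name myResourceGroup --location eastus
az ppg create --resource-group myResourceGroup --name myPPG

# Create a VM assigned to the proximity placement group,
# with accelerated networking for the lowest latency
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image UbuntuLTS \
  --ppg myPPG \
  --accelerated-networking true
```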

What to expect when using proximity placement groups

Proximity placement groups offer co-location in the same data center. However, because proximity placement groups add a deployment constraint, allocation failures can occur (for example, it may not be possible to place all of your Azure Virtual Machines in the same proximity placement group).

When you request the first virtual machine in the proximity placement group, the data center is selected automatically. In some cases, a subsequent request for a different virtual machine SKU may fail if that SKU is not available in the data center that was already selected. In this case, an OverconstrainedAllocationRequest error is returned. To troubleshoot, check which virtual machine SKUs are available in the chosen region or zone using the Azure portal or the APIs. If all of the desired SKUs are available, try changing the order in which you deploy them.
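One way to check SKU availability from the command line is sketched below with the Azure CLI; the region is an example, and the output can be long, so you may want to filter it:

```shell
# List the VM SKUs offered in a region before choosing sizes for the group
az vm list-skus --location eastus --output table
```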

For elastic deployments that scale out, a proximity placement group constraint on your deployment may result in a failure to satisfy the request. When using proximity placement groups, we recommend that you request all the virtual machines at the same time.
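One way to allocate a whole tier in a single request is to deploy it as a virtual machine scale set that references the proximity placement group. The fragment below is abridged for illustration (a complete scale set definition also needs an `upgradePolicy` and a `virtualMachineProfile`), and the SKU name, capacity, and parameter names are placeholders:

```json
{
  "apiVersion": "2018-06-01",
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "name": "[parameters('vmssName')]",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "Standard_DS2_v2",
    "capacity": 4
  },
  "dependsOn": [
    "[concat('Microsoft.Compute/proximityPlacementGroups/', parameters('ppgName'))]"
  ],
  "properties": {
    "proximityPlacementGroup": {
      "id": "[resourceId('Microsoft.Compute/proximityPlacementGroups', parameters('ppgName'))]"
    }
  }
}
```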

Proximity placement groups are in preview now and are offered free of charge in all public regions.

Please refer to our documentation for additional information about proximity placement groups.

Here’s what we’ve heard from SAP, who participated in the early preview program:

“It is really great to see this feature now publicly available. We are going to make use of it in our standard deployments. My team is automating large scale deployments of SAP landscapes. To ensure best performance of the systems it is essential to ensure low-latency between the different components of the system. Especially critical is the communication between Application server and the database, as well as the latency between HANA VMs when synchronous replication has to be enabled. In the late 2018 we did some measurements in various Azure regions and found out that sometimes the latency was not as expected and not in the optimal range. While discussing this with Microsoft, we were offered to join the early preview and evaluate the Proximity Placement Groups (PPG) feature. During our evaluation we were able to bring down the latency to less than 0.3 ms between all system components, which is more than sufficient to ensure great system performance. Best deterministic results we achieved when PPGs were combined with Network acceleration of VM NICs, which additionally improved the measured latencies.”

Ventsislav Ivanov, Development Architect, SAP
Source: Azure
