Introducing proximity placement groups

Co-locate your Azure resources for improved application performance

The performance of your applications is central to the success of your IT organization. Application performance can directly impact your ability to increase customer satisfaction and ultimately grow your business.

Many factors can affect the performance of your applications. One of them is network latency, which is affected, among other things, by the physical distance between the deployed virtual machines.

For example, when you place your Microsoft Azure Virtual Machines in a single Azure region, the physical distance between the virtual machines is reduced. Placing them within a single availability zone brings them even closer together. However, as the Azure footprint grows, a single availability zone may span multiple physical data centers, resulting in network latency that can impact your overall application performance. If a region does not support availability zones, or if your application does not use them, the latency between the application tiers may increase.

Today, we are announcing the preview of proximity placement groups, a new capability for achieving co-location of your Azure Infrastructure as a Service (IaaS) resources and low network latency among them.

Azure proximity placement groups are a new logical grouping capability for your Azure Virtual Machines, used as a deployment constraint when Azure selects where to place your virtual machines. When you assign your virtual machines to a proximity placement group, they are placed in the same data center, resulting in lower and more deterministic latency for your applications.

When to use proximity placement groups

Proximity placement groups improve the overall application performance by reducing the network latency among virtual machines. You should consider using proximity placement groups for multi-tiered, IaaS-based deployments where application tiers are deployed using multiple virtual machines, availability sets and/or virtual machine scale sets.

As an example, consider the case where each tier in your application is deployed in an availability set or virtual machine scale set for high availability. Using a single proximity placement group for all the tiers of your application, even if they use different virtual machine SKUs and sizes, forces all the deployments to land in the same data center for the best latency.

To get the best results with proximity placement groups, make sure you’re using accelerated networking and that your virtual machines are optimized for low latency.
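For example, a minimal Azure CLI sketch of this combination might look like the following; it assumes a proximity placement group named myPPG already exists in the resource group, and the resource names, image, and VM size shown are illustrative only:

# Create a VM inside the existing proximity placement group with accelerated networking enabled.
az vm create \
  --resource-group myResourceGroup \
  --name myAppVm01 \
  --image UbuntuLTS \
  --size Standard_D4s_v3 \
  --ppg myPPG \
  --accelerated-networking true

The --ppg parameter accepts the name or resource ID of an existing proximity placement group, and accelerated networking requires a VM size and OS image that support it.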

Getting started with proximity placement groups

The easiest way to start with proximity placement groups is to use them with your Azure Resource Manager (ARM) templates.

To create a proximity placement group resource, just add the following resource definition:

{
  "apiVersion": "2018-04-01",
  "type": "Microsoft.Compute/proximityPlacementGroups",
  "name": "[parameters('ppgName')]",
  "location": "[resourceGroup().location]"
}

To use this proximity placement group later in the template with a virtual machine (or availability set or virtual machine scale set), just add the following dependency and property:

{
  "name": "[parameters('virtualMachineName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2018-06-01",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/proximityPlacementGroups/', parameters('ppgName'))]"
  ],
  "properties": {
    "proximityPlacementGroup": {
      "id": "[resourceId('Microsoft.Compute/proximityPlacementGroups', parameters('ppgName'))]"
    }
  }
}

To learn more, see the tutorials on using proximity placement groups with PowerShell and the Azure CLI.
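As a quick taste of the CLI flow, the following sketch creates a proximity placement group, inspects it, and then creates an availability set for one application tier inside it; the resource names and region are illustrative:

# Create the proximity placement group.
az ppg create --resource-group myResourceGroup --name myPPG --location westus2

# Inspect the group and see which resources are already placed in it.
az ppg show --resource-group myResourceGroup --name myPPG

# Create an availability set for one application tier inside the group.
az vm availability-set create --resource-group myResourceGroup --name myAppTierAvSet --ppg myPPG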

What to expect when using proximity placement groups

Proximity placement groups offer co-location in the same data center. However, because a proximity placement group is an additional deployment constraint, allocation failures can occur (for example, you may not be able to place all of your virtual machines in the same proximity placement group).

When you ask for the first virtual machine in the proximity placement group, the data center is automatically selected. In some cases, a subsequent request for a different virtual machine SKU may fail because that SKU does not exist in the data center that was already selected; in that case, an OverconstrainedAllocationRequest error is returned. To troubleshoot, check which virtual machine SKUs are available in the chosen region or zone using the Azure portal or the APIs. If all of the desired SKUs are available, try changing the order in which you deploy them.
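One convenient way to check SKU availability is the Azure CLI. The sketch below lists the VM sizes whose names start with Standard_D that are offered in a given region, so you can confirm that every SKU in your deployment is actually available there; the region and size filter are illustrative:

# List the VM SKUs available in the region, filtered by size name prefix.
az vm list-skus --location westus2 --size Standard_D --output table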

For elastic deployments that scale out, a proximity placement group constraint on your deployment may result in a failure to satisfy the request. When using proximity placement groups, we recommend that you ask for all the virtual machines at the same time.
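In practice, one convenient way to request everything at once is to describe the proximity placement group and all of the virtual machines, availability sets, and scale sets in a single ARM template and submit it as a single deployment, for example (the template and parameter names are illustrative):

# Deploy the whole template, and therefore the whole topology, in one request.
az group deployment create --resource-group myResourceGroup --template-file azuredeploy.json --parameters ppgName=myPPG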

Proximity placement groups are in preview now and are offered free of charge in all public regions.

Please refer to our documentation for additional information about proximity placement groups.

Here’s what we’ve heard from SAP, who participated in the early preview program:

“It is really great to see this feature now publicly available. We are going to make use of it in our standard deployments. My team is automating large scale deployments of SAP landscapes. To ensure best performance of the systems it is essential to ensure low-latency between the different components of the system. Especially critical is the communication between Application server and the database, as well as the latency between HANA VMs when synchronous replication has to be enabled. In the late 2018 we did some measurements in various Azure regions and found out that sometimes the latency was not as expected and not in the optimal range. While discussing this with Microsoft, we were offered to join the early preview and evaluate the Proximity Placement Groups (PPG) feature. During our evaluation we were able to bring down the latency to less than 0.3 ms between all system components, which is more than sufficient to ensure great system performance. Best deterministic results we achieved when PPGs were combined with Network acceleration of VM NICs, which additionally improved the measured latencies.”

Ventsislav Ivanov, Development Architect, SAP
Source: Azure

Federation V2 is now KubeFed

Some time ago we talked about how Federation V2 on Red Hat OpenShift 3.11 enables users to spread their applications and services across multiple locales or clusters. Federation V2 is a fast-moving project, and a lot has changed since our last blog post. Among those changes, Federation V2 has been renamed to KubeFed and we have released […]
The post Federation V2 is now KubeFed appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Enhancing the customer experience with the Azure Networking MSP partner program

We are always looking for ways to improve the customer experience and allow our partners to complement our offerings. In support of these efforts, we are sharing the Azure Networking Managed Service Provider (MSP) program, along with partners that deliver value-added managed cloud network services to help enterprise customers connect, operationalize, and scale their mission-critical applications running in Azure.

The Azure Networking MSP Partner Program enables partners such as networking-focused MSPs, network carriers, and systems integrators (SIs) to use their rich networking experience to offer cloud and hybrid networking services around Azure’s growing portfolio of networking products and services.

Azure networking services are fundamental building blocks critical to cloud migration, optimal connectivity, and application security. New networking services such as Virtual WAN, ExpressRoute, Azure Firewall, and Azure Front Door further enrich this portfolio, allowing customers to deploy richer applications in the cloud. Networking MSP partners can help customers deploy and manage Azure networking services.

Azure Networking MSPs

Azure MSPs play a critical role in enterprise cloud transformation by bringing their deep knowledge and real-world experience to help enterprise customers migrate to Azure. Azure MSPs and the Azure Expert MSP program make it easy for customers to discover and engage specialized MSPs.

Azure Networking MSPs are a specialized set of MSPs that address enterprise networking needs and challenges across all aspects of cloud and hybrid networking. Their managed network services span the application lifecycle, including network architecture, planning, deployment, operations, maintenance, and optimization.

Azure Lighthouse – unblocking Azure Networking MSPs

Many enterprise customers, such as banks and financial institutions, want partners who can help them manage their Azure networking subscriptions. However, managing each customer’s subscriptions individually introduces a lot of manual work for these service providers.

Last week, we announced Azure Lighthouse, a unique set of capabilities that gives service provider partners a single control plane to view and manage Azure at scale across all their customers, with greater automation and efficiency. We also talked about how Azure Lighthouse enables management at scale for service providers.

With Azure Lighthouse, Azure Networking MSPs can seamlessly onboard customers via managed services offers in the Azure Marketplace or natively via ARM templates, empowering them to deliver a rich set of managed network experiences for their end customers.

Azure Networking MSP partners

Azure Networking partners play a big role in the Azure networking ecosystem, delivering Virtual WAN CPEs and hybrid networking services such as ExpressRoute to enterprises that are building cloud infrastructures. We welcome the following Azure Networking MSP launch partners into our Azure Networking MSP partner ecosystem.

These partners have invested in people, best practices, operations and tools to build and harness deep Azure Networking knowledge and service capabilities. They’ve trained their staff on Azure and have partnered closely with us in Azure Networking through technical workshops and design reviews.

These partners are also early adopters of Azure Lighthouse, building and delivering a new generation of managed network experiences for their end customers. We encourage networking MSPs, network carriers, and SIs worldwide that would like to join the Azure Networking MSP program to reach out via ManagedVirtualWAN@microsoft.com and bring their unique value and services to Azure customers.

In summary, we firmly believe that Azure customers will greatly benefit from the new cloud networking focused services our partners are bringing to the market. Customers will be able to use these services to augment their own in-house skills and move faster and more efficiently while getting the most out of the cloud to meet their enterprise business needs. For more information on how to engage with our Networking MSP partners, see the partner information on our MSP partners site.
Source: Azure

Epic Games Store: Cloud saves, mods, and zombies are coming

Improvements to the offline mode, plus cloud save games and, somewhat later, user reviews: Epic Games has updated the plans for its Epic Games Store. There is also an exclusive game from a developer who until recently was still vehemently opposed to such deals. (Epic Games, Steam)
Source: Golem

Deploying a UPI environment for OpenShift 4.1 on VMs and bare metal

With the release of Red Hat OpenShift 4, the concept of User Provisioned Infrastructure (UPI) has emerged to encompass the environments where the infrastructure (compute, network and storage resources) that hosts the OpenShift Container Platform is deployed by the user. This allows for more creative deployments, while leaving the management of the infrastructure to the […]
The post Deploying a UPI environment for OpenShift 4.1 on VMs and bare metal appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Advancing Microsoft Azure reliability

Reliance on cloud services continues to grow for industries, organizations, and people around the world. So now more than ever it is important that you can trust that the cloud solutions you rely on are secure, compliant with global standards and local regulations, keep data private and protected, and are fundamentally reliable. At Microsoft, we are committed to providing a trusted set of cloud services, giving you the confidence to unlock the potential of the cloud.

Over the past 12 months, Azure has operated core compute services at 99.995 percent average uptime across our global cloud infrastructure. However, at the scale Azure operates, we recognize that uptime alone does not tell the full story. We experienced three unique and significant incidents that impacted customers during this time period: a datacenter outage in the South Central US region in September 2018, Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA) challenges in November 2018, and DNS maintenance issues in May 2019.

Building and operating a global cloud infrastructure of 54 regions made up of hundreds of evolving services is a large and complex task, so we treat each incident as an important learning moment. Outages and other service incidents are a challenge for all public cloud providers, and we continue to improve our understanding of the complex ways in which factors such as operational processes, architectural designs, hardware issues, software flaws, and human factors can align to cause service incidents. All three of the incidents mentioned were the result of multiple failures that led to customer-impacting outages only through intricate interactions. In response, we are creating better ways to mitigate incidents through steps such as redundancies in our platform, quality assurance throughout our release pipeline, and automation in our processes. The capability for continuous, real-time improvement is one of the great advantages of cloud services, and while we will never eliminate all such risks, we are deeply focused on reducing both the frequency and the impact of service issues while being transparent with our customers, partners, and the broader industry.

Ensuring reliability is a fundamental responsibility for every Azure engineer. To augment these efforts, we have formed a new Quality Engineering team within my CTO office, working alongside our Site Reliability Engineering (SRE) team to pioneer new approaches to deliver an even more reliable platform. To keep improving our reliability, here are some of the initiatives that we already have underway:

Safe deployment practices – Azure approaches change automation through a safe deployment practice framework which aims to ensure that all code and configuration changes go through a cycle of specific stages. These stages include dev/test, staging, private previews, a hardware diversity pilot, and longer validation periods before a broader rollout to region pairs. This has dramatically reduced the risk that software changes will have negative impacts, and we are extending this mechanism to include software-defined infrastructure changes, such as networking and DNS.
Storage-account level failover – During the September 2018 datacenter outage, several storage stamps were physically damaged, requiring their immediate shut down. Because it is our policy to prioritize data retention over time-to-restore, we chose to endure a longer outage to ensure that we could restore all customer data successfully. A number of you have told us that you want more flexibility to make this decision for your own organizations, so we are empowering customers by previewing the ability to initiate your own failover at the storage-account level.
Expanding availability zones – Today, we have availability zones live in the 10 largest Azure regions, providing an additional reliability option for the majority of our customers. We are also working to bring availability zones to the next 10 largest Azure regions between now and 2021.
Project Tardigrade – At Build last month, I discussed Project Tardigrade, a new Azure service named after the nearly indestructible microscopic animals also known as water bears. This effort will detect hardware failures or memory leaks that are about to cause an operating system crash, so that Azure can freeze the affected virtual machines for a few seconds while their workloads are moved to a healthy host.
Low-to-zero-impact maintenance – We’re investing in improving zero-impact and low-impact update technologies, including hot patching, live migration, and in-place migration. We’ve deployed dozens of security and reliability patches to host infrastructure in the past year, many of which were implemented with no customer impact or downtime. We continue to invest in these technologies to bring their benefits to even more Azure services.
Fault injection and stress testing – Validating that systems will perform as designed in the face of failures is possible only by subjecting them to those failures. We increasingly inject faults into our services before they go to production, both at a small scale with service-specific load, stress, and failures, and at regional and availability zone (AZ) scale with full region and AZ failure drills in our private canary regions. Our plan is to eventually make these fault injection services available to customers so that they can perform the same validation on their own applications and services.

Look for us to share more details of our internal architecture and operations in the future. While we are taking all of these steps to improve foundational reliability, Azure also provides you with high availability, disaster recovery, and backup solutions that can enable your applications to meet business availability requirements and recovery objectives. We maintain detailed guidance on designing reliable applications, including best practices for architectural design, monitoring application health, and responding to failures and disasters.

Reliability is and continues to be a core tenet of our trusted cloud commitments, alongside compliance, security, privacy, and transparency. Across all these areas, we know that customer trust is earned and must be maintained, not just by saying the right thing but by doing the right thing. Microsoft believes that a trusted, responsible, and inclusive cloud is grounded in how we engage as a business, how we develop our technology, how we approach advocacy and outreach, and how we serve the communities in which we operate. Microsoft is committed to providing a trusted set of cloud services, giving you the confidence to unlock the potential of the cloud.
Source: Azure