Linux and Windows networking performance enhancements | Accelerated Networking

Microsoft Azure is pleased to announce a series of performance optimizations supporting the latest distributions of Linux (Ubuntu, Red Hat, CentOS) and Windows for all virtual machine (VM) sizes, providing up to 25 Gbps of networking throughput. 25 Gbps is currently the fastest published speed between VMs in the public cloud.

These optimizations have been deployed to the entire Azure computing fleet in coordination with the latest Linux operating systems being published to the Azure Marketplace. Other popular Linux operating systems plan to incorporate these optimizations through their regular updates.

To help our customers architect high-performance solutions, Azure is also publishing the expected network performance for VMs on our website. This will make it easier for our customers to reduce solution costs while maintaining optimal performance. See expected network performance per VM size for further information.

Some examples of today's expected performance metrics are highlighted below:

| VM Size | vCPU | Memory: GB | Local SSD: GB | Max data disks | Max cached and local disk throughput: IOPS / MBps (cache size in GB) | Max uncached disk throughput: IOPS / MBps | Max NICs / Expected network performance: Mbps |
|---|---|---|---|---|---|---|---|
| Standard_M64ms | 64 | 1792 | 2048 | 32 | 80,000 / 800 (6348) | 40,000 / 1,000 | 32 / 16000 |
| Standard_GS5 | 32 | 448 | 896 | 64 | 160,000 / 1,600 (4,224) | 80,000 / 2,000 | 8 / 20000 |
| Standard_DS15_v2 | 20 | 140 | 280 | 40 | 80,000 / 640 (720) | 64,000 / 960 | 8 / 20000 |
| Standard_M128s | 128 | 2048 | 4096 | 64 | 160,000 / 1,600 (12,696) | 80,000 / 2,000 | 32 / 25000 |

Maximize your VM’s Performance with Accelerated Networking (AN) – now widely available for 8+ core virtual machines

We are also pleased to announce the General Availability (GA) of Accelerated Networking (AN) for Windows with an expanded Public Preview for Linux.

AN provides very low latency and jitter in networking performance via Azure's in-house programmable hardware and technologies such as SR-IOV. In addition, by moving much of the SDN stack into hardware, compute cycles are returned to end-user applications, reducing load on the VM.
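AN is enabled per network interface on supported VM sizes. As an illustrative sketch only (the resource name, API version, and subnet reference are placeholders, not taken from Azure documentation for a specific deployment), an ARM template NIC resource would set the `enableAcceleratedNetworking` property:

```json
{
  "type": "Microsoft.Network/networkInterfaces",
  "apiVersion": "2017-06-01",
  "name": "myNic",
  "location": "[resourceGroup().location]",
  "properties": {
    "enableAcceleratedNetworking": true,
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "subnet": { "id": "[variables('subnetRef')]" },
          "privateIPAllocationMethod": "Dynamic"
        }
      }
    ]
  }
}
```

The VM attached to this NIC must be one of the supported sizes (8+ cores) and run a supported OS image for the setting to take effect.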

As an example of AN’s performance advantage, a growing number of Azure SQL services in production today have achieved a 70% improvement in several benchmarks, demonstrating real-world performance benefits for customers looking to run latency-sensitive workloads in the cloud.

AN for Linux continues its Public Preview and is supported by the latest operating systems published in the Azure Marketplace (Ubuntu, Red Hat, CentOS, and SLES). This preview is quickly expanding to more regions over the coming weeks. Additional operating systems such as FreeBSD will be supported with updates coming soon.

Instructions on how to sign up and participate in Azure's AN Linux Public Preview can be found in the Accelerated Networking Overview and Deployment Instructions, supplemented by a list of current limitations in the Accelerated Networking Service Update.

Expected network performance is the maximum aggregated bandwidth allocated per VM size across all NICs for all destinations. Upper limits are not guaranteed, but are intended to provide guidance for selecting the right VM size for a specific application. Actual network performance will depend on a variety of factors including number of TCP connections, network congestion, application workloads, and network settings. For more information on optimizing network throughput, see Optimizing Network Throughput for Linux and Windows. To achieve the expected network performance on Linux or Windows VMs, it may be necessary to select specific Azure-recommended versions. To produce comparable results, see How to Reliably Test for Virtual Machine Network Performance.
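As a quick sanity check when sizing, the advertised Mbps figure can be converted into an approximate best-case transfer time. The snippet below is a back-of-the-envelope sketch (it ignores TCP overhead and all of the factors listed above, and the payload size is hypothetical):

```shell
# Convert a VM's advertised network quota (Mbps) into an approximate
# line-rate transfer time for a given payload. Illustrative only:
# real throughput depends on TCP tuning, congestion, and workload.
mbps=25000          # e.g. Standard_M128s expected network performance
payload_gb=10       # hypothetical payload size in gigabytes
awk -v mbps="$mbps" -v gb="$payload_gb" 'BEGIN {
  seconds = gb * 8 * 1000 / mbps   # GB -> gigabits -> megabits / Mbps
  printf "~%.1f s to move %d GB at %d Mbps (line rate)\n", seconds, gb, mbps
}'
```

At 25,000 Mbps, a 10 GB transfer works out to roughly 3.2 seconds at line rate, which is why the tuning guidance above matters: the gap between this figure and a measured result shows how much the software stack is leaving on the table.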
Source: Azure

Going Hybrid with Kubernetes on Google Cloud Platform and Nutanix

By Allan Naim, Product GTM Lead, Kubernetes and Container Engine

Recently, we announced a strategic partnership with Nutanix to help remove friction from hybrid cloud deployments for enterprises. You can find the announcement blog post here.

Hybrid cloud allows organizations to run a variety of applications either on-premises or in the public cloud. With this approach, enterprises can:

Increase the speed at which they’re releasing products and features
Scale applications to meet customer demand
Move applications to the public cloud at their own pace
Reduce time spent on infrastructure and increase time spent on writing code
Reduce cost by improving resource utilization and compute efficiency

The vast majority of organizations have a portfolio of applications with varying needs. In some cases, data sovereignty and compliance requirements force a jurisdictional deployment model where an application and its data must reside in an on-premises environment or within a country’s boundaries. Conversely, mobile and IoT applications are characterized by unpredictable consumption models that make the on-demand, pay-as-you-go cloud model the best deployment target for these applications.

Hybrid cloud deployments can help deliver the security, compliance and compute power you require with the agility, flexibility and scale you need. Our hybrid cloud example will encompass three key components:

On-premises: Nutanix infrastructure
Public cloud: Google Cloud Platform (GCP)
Open source: Kubernetes and Containers

Containers provide an immutable and highly portable infrastructure that enables developers to predictably deploy apps across any environment where the container runtime engine can run. This makes it possible to run the same containerized application on bare metal, private cloud or public cloud. However, as developers move towards microservice architectures, they must solve a new set of challenges such as scaling, rolling updates, discovery, logging, monitoring and networking connectivity.

Google’s experience running our own container-based internal systems inspired us to create Kubernetes, an open-source container orchestrator, and Google Container Engine, a Google Cloud managed platform for running containerized applications across a pool of compute resources. Kubernetes abstracts away the underlying infrastructure and provides a consistent experience for running containerized applications. Kubernetes introduces the concept of a declarative deployment model: an operator supplies a template that describes how the application should run, and Kubernetes ensures the application’s actual state always matches the desired state. Kubernetes also manages container scheduling, scaling, health, lifecycle, load balancing, data persistence, logging and monitoring.
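As an example of this declarative model, a minimal Deployment manifest states only the desired outcome (the application name, image and replica count below are placeholders), and Kubernetes continuously reconciles the cluster toward it:

```yaml
apiVersion: apps/v1beta1   # Deployment API group current as of 2017
kind: Deployment
metadata:
  name: hello-web          # hypothetical application name
spec:
  replicas: 3              # desired state: three identical pods
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: nginx:1.13  # placeholder container image
        ports:
        - containerPort: 80
```

If a node fails and a pod disappears, the actual state (two replicas) no longer matches the desired state (three), so Kubernetes schedules a replacement without operator intervention.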

In a first phase, the Google Cloud-Nutanix partnership focuses on easing hybrid operations using Nutanix Calm as a single control plane for workload management across both on-premises Nutanix and GCP environments, with Kubernetes as the container management layer across the two. Nutanix Calm was recently announced at the Nutanix .NEXT conference and, once publicly available, will be used to automate provisioning and lifecycle operations across hybrid cloud deployments. Nutanix Enterprise Cloud OS supports a hybrid Kubernetes environment running on Google Compute Engine in the cloud and a Kubernetes cluster on Nutanix on-premises. Through this, customers can deploy portable application blueprints that run both in an on-premises Nutanix environment and in GCP.

Let’s walk through the steps involved in setting up a hybrid environment using Nutanix and GCP.

The steps involved are as follows:

Provision an on-premises 4-node Kubernetes cluster using a Nutanix Calm blueprint
Provision a 4-node Kubernetes cluster on Google Compute Engine using the same Nutanix Calm Kubernetes blueprint, configured for Google Cloud
Use kubectl to manage both the on-premises and Google Cloud Kubernetes clusters
Use Helm to deploy the same WordPress chart on both the on-premises and Google Cloud Kubernetes clusters

Provisioning an on-premises Kubernetes cluster using a Nutanix Calm blueprint
You can use Nutanix Calm to provision a Kubernetes cluster on-premises, and Nutanix Prism, an infrastructure management solution for virtualized data centers, to bootstrap a cluster of virtualized compute and storage. This results in a Nutanix-managed pool of compute and storage that’s ready to be orchestrated by Nutanix Calm for one-click deployment of popular commercial and open source packages.

The tools used to deploy the Nutanix and Google hybrid cloud stacks.

You can then select the Kubernetes blueprint to target the Nutanix on-premises environment.

The Calm Kubernetes blueprint pictured below configures a four-node Kubernetes cluster that includes all the base software on all the nodes and the master. We’ve also customized our Kubernetes blueprint to configure Helm Tiller on the cluster, so you can use Helm to deploy a WordPress chart. Calm blueprints also allow you to create workflows so that configuration tasks can take place in a specified order, as shown below with the “create” action.

Now, launch the Kubernetes Blueprint:

After a couple of minutes, the Kubernetes cluster is up and running with five VMs (one master node and four worker nodes):

Provisioning a Kubernetes cluster on Google Compute Engine with the same Nutanix Calm Kubernetes blueprint
Using Nutanix Calm, you can now deploy the Kubernetes blueprint onto GCP. The Kubernetes cluster is up and running on Compute Engine within a couple of minutes, again with five VMs (one master node + four worker nodes):

You’re now ready to deploy workloads across the hybrid environment. In this example, you’ll deploy a containerized WordPress stack.

Using kubectl to manage both on-premises and Google Cloud Kubernetes clusters
kubectl is the command-line tool that comes with Kubernetes for running commands against Kubernetes clusters.

You can now target each Kubernetes cluster across the hybrid environment and use kubectl to run basic commands. First, ssh into your on-premises environment and run a few commands.

# List out the nodes in the cluster

$ kubectl get nodes

NAME STATUS AGE
10.21.80.54 Ready 16m
10.21.80.59 Ready 16m
10.21.80.65 Ready 16m
10.21.80.67 Ready 16m

# View the cluster config

$ kubectl config view

apiVersion: v1
clusters:
- cluster:
    server: http://10.21.80.66:8080
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    user: default-admin
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users: []
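Rather than ssh-ing into each environment, the two clusters could also be driven from a single workstation by merging both endpoints into one kubeconfig. The sketch below assumes this layout; the GCE server address, cluster names and context names are placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://10.21.80.66:8080   # on-premises master (as above)
  name: onprem-cluster
- cluster:
    server: https://104.198.0.1       # hypothetical GCE master endpoint
  name: gce-cluster
contexts:
- context:
    cluster: onprem-cluster
    user: default-admin
  name: onprem
- context:
    cluster: gce-cluster
    user: default-admin
  name: gce
current-context: onprem
preferences: {}
users: []
```

With this in place, `kubectl --context gce get nodes` targets the Compute Engine cluster while `kubectl --context onprem get nodes` targets the Nutanix one.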

# Describe the storageclass configured. This is the Nutanix storage volume plugin for Kubernetes

$ kubectl get storageclass

NAME KIND
silver StorageClass.v1.storage.k8s.io

$ kubectl describe storageclass silver

Name: silver
IsDefaultClass: No
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/nutanix-volume
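Reconstructed from the kubectl output above, a manifest for the "silver" class might look roughly as follows (any provisioner-specific `parameters` block is omitted, since it isn't shown in the output):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: silver
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/nutanix-volume
```

PersistentVolumeClaims that name this class (or omit a class, if it is the default) are satisfied dynamically by the Nutanix volume plugin, which is what backs the WordPress volumes created below.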

Using Helm to deploy the same WordPress chart on both on-premises and Google Cloud Kubernetes clusters
This example uses Helm, a package manager for installing and managing Kubernetes applications; the Calm Kubernetes blueprint includes Helm as part of the cluster setup. The on-premises Kubernetes cluster is configured with Nutanix Acropolis, a storage provisioning system, which automatically creates Kubernetes persistent volumes for the WordPress pods.

Let’s deploy WordPress on-premise and on Google Cloud:

# Deploy wordpress

$ helm install wordpress-0.6.4.tgz

NAME: quaffing-crab
LAST DEPLOYED: Sun Jul 2 03:32:21 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME TYPE DATA AGE
quaffing-crab-mariadb Opaque 2 1s
quaffing-crab-wordpress Opaque 3 1s

==> v1/ConfigMap
NAME DATA AGE
quaffing-crab-mariadb 1 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
quaffing-crab-wordpress Pending silver 1s
quaffing-crab-mariadb Pending silver 1s

==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
quaffing-crab-mariadb 10.21.150.254 3306/TCP 1s
quaffing-crab-wordpress 10.21.150.73 80:32376/TCP,443:30998/TCP 1s

==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
quaffing-crab-wordpress 1 1 1 0 1s
quaffing-crab-mariadb

Then, you can run a few kubectl commands to browse the on-premises deployment.

# Take a look at the persistent volume claims

$ kubectl get pvc

NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
quaffing-crab-mariadb Bound 94d90daca29eaafa7439b33cc26187536e2fcdfc20d78deddda6606db506a646-nutanix-k8-volume 8Gi RWO 1m
quaffing-crab-wordpress Bound 764e5462d809a82165863af8423a3e0a52b546dd97211dfdec5e24b1e448b63c-nutanix-k8-volume 10Gi RWO 1m

# Take a look at the running pods

$ kubectl get po

NAME READY STATUS RESTARTS AGE
quaffing-crab-mariadb-3339155510-428wb 1/1 Running 0 3m
quaffing-crab-wordpress-713434103-5j613 1/1 Running 0 3m

# Take a look at the services exposed

$ kubectl get svc

NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 443/TCP 16d
quaffing-crab-mariadb 10.21.150.254 3306/TCP 4m
quaffing-crab-wordpress 10.21.150.73 #.#.#.# 80:32376/TCP,443:30998/TCP 4m

This on-premises environment did not have a load balancer provisioned, so we used the cluster IP to browse the WordPress site. The Google Cloud WordPress deployment automatically assigned a load balancer to the WordPress service along with an external IP address.
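The difference comes down to the Service type: a `type: LoadBalancer` Service receives an external IP automatically on Google Cloud, while on-premises it stays pending unless a load-balancer integration is present. A minimal sketch (the name and selector are placeholders matching a WordPress-style deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress          # hypothetical service name
spec:
  type: LoadBalancer       # cloud provider provisions an external IP
  selector:
    app: wordpress         # route to pods carrying this label
  ports:
  - port: 80
    targetPort: 80
```

On Compute Engine this is what produced the external IP seen above; on Nutanix the same manifest still works, but clients reach the service through its cluster IP or NodePort instead.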

Summary

Nutanix Calm provided a one-click, consistent deployment model to provision a Kubernetes cluster on both Nutanix Enterprise Cloud and Google Cloud.
Once the Kubernetes cluster is running in a hybrid environment, you can use the same tools (Helm, kubectl) to deploy containerized applications targeting the respective environment. This represents a “write once, deploy anywhere” model.
Kubernetes abstracts away the underlying infrastructure constructs, making it possible to consistently deploy and run containerized applications across heterogeneous cloud environments.

Next steps

Get started on Google Cloud Platform (GCP)
Visit Kubernetes getting started site and code
Join Kubernetes community and Slack chat
Follow Kubernetes on Twitter
Learn about Nutanix Calm
If you have feedback and/or questions, reach out to us here

Source: Google Cloud Platform