Deploying high-throughput workloads on GKE Autopilot with the Scale-Out compute class

GKE Autopilot is a full-featured, fully managed Kubernetes platform that combines the full power of the Kubernetes API with a hands-off approach to cluster management and operations. Since launching Autopilot last year, we've continued to innovate, adding capabilities to meet the demands of your workloads. We're excited to introduce the concept of compute classes in Autopilot, together with the Scale-Out compute class, which offers high-performance x86 and Arm compute, now available in Preview.

Autopilot compute classes are a curated set of hardware configurations on which you can deploy your workloads. In this initial release, we are introducing the Scale-Out compute class, which is designed for workloads that are optimized for a single thread per core and scale horizontally. The Scale-Out compute class currently supports two hardware architectures, x86 and Arm, allowing you to choose whichever one offers the best price-performance for your specific workload. It joins our original, general-purpose compute option and is designed for running workloads that benefit from the fastest CPU platforms available on Google Cloud, with greater cost-efficiency for applications that have high CPU utilization.

We also heard from you that some workloads would benefit from higher-performance compute. To serve this need, x86 workloads running on the Scale-Out compute class are currently served by 3rd Gen AMD EPYC™ processors with Simultaneous Multithreading (SMT) disabled, achieving the highest per-core benchmark results among x86 platforms in Google Cloud.

And for the first time, Autopilot supports Arm workloads. Currently utilizing the new Tau T2A VMs running on Ampere® Altra® Arm-based processors, the Scale-Out compute class gives your Arm workloads price-performance benefits combined with a thriving, open, end-to-end platform-independent ecosystem. Autopilot Arm Pods are currently available in us-central1, europe-west4, and asia-southeast1.

Deploying Arm workloads using the Scale-Out compute class

To deploy your Pods on a specific compute class and CPU architecture, simply add a Kubernetes nodeSelector or node affinity rule with the cloud.google.com/compute-class and kubernetes.io/arch labels to your deployment specification.

To run an Arm workload on Autopilot, you need a cluster running version 1.24.1-gke.1400 or later in one of the supported regions. You can upgrade an existing cluster to this version, or create a new one.
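If you already have an Autopilot cluster, here is a minimal sketch of upgrading its control plane to a supported version with the standard gcloud upgrade command; the cluster name and region below are placeholder assumptions, so substitute your own:

```
# Upgrade an existing Autopilot cluster's control plane to a version
# that supports the Scale-Out compute class. The cluster name and
# region are placeholders.
gcloud container clusters upgrade autopilot-arm \
  --master \
  --region us-central1 \
  --cluster-version 1.24.1-gke.1400
```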
To create a new Arm-supported cluster with the gcloud CLI, use the following:

```
CLUSTER_NAME=autopilot-arm
REGION=us-central1
VERSION=1.24.1-gke.1400
gcloud container clusters create-auto $CLUSTER_NAME \
  --release-channel "rapid" --region $REGION \
  --cluster-version $VERSION
```

For example, the following Deployment specification will deploy the official Nginx image on the Arm architecture:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-arm64
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        cloud.google.com/compute-class: Scale-Out
        kubernetes.io/arch: arm64
      containers:
      - name: nginx
        image: nginx:latest
```

Deploying x86 workloads on the Scale-Out compute class

The Scale-Out compute class also supports the x86 architecture; simply add a selector for the Scale-Out compute class. You can either explicitly set the architecture with kubernetes.io/arch: amd64 or omit that label from the selector, as x86 is the default.

To run an x86 Scale-Out workload on Autopilot, you need a cluster running version 1.24.1-gke.1400 or later in one of the supported regions. The same CLI command from the example above will get you an x86 Scale-Out-capable GKE Autopilot cluster.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-x86
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        cloud.google.com/compute-class: Scale-Out
      containers:
      - name: nginx
        image: nginx:latest
```

Deploying Spot Pods using the Scale-Out compute class

You can also combine compute classes with Spot Pods by adding the label cloud.google.com/gke-spot: "true" to the nodeSelector:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-arm64
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        cloud.google.com/gke-spot: "true"
        cloud.google.com/compute-class: Scale-Out
        kubernetes.io/arch: arm64
      containers:
      - name: nginx
        image: nginx:latest
```

Spot Pods are supported for both the x86 and Arm architectures when using the Scale-Out compute class.

Try the Scale-Out compute class on GKE Autopilot today!

To help you get started, check out our guides on creating an Autopilot cluster, getting started with compute classes, building images for Arm workloads, and deploying Arm workloads on GKE Autopilot.
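Finally, once one of the example Deployments above is running, you can sanity-check what Autopilot provisioned with plain kubectl. This is a generic sketch rather than a GKE-specific verification flow, and it assumes the app=nginx label from the examples:

```
# Show each node's CPU architecture and compute class labels.
kubectl get nodes -L kubernetes.io/arch,cloud.google.com/compute-class

# Confirm which nodes the nginx Pods were scheduled on.
kubectl get pods -l app=nginx -o wide
```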
Source: Google Cloud Platform
