OpenShift 4.2: Declarative Dynamic UI for your Operator

When building a Kubernetes-native application, CustomResourceDefinitions (CRDs) are the primary way to extend the Kubernetes API with custom resources. This post will cover generating a creation form for your custom resource based on its OpenAPI validation schema. Afterward, we will talk about the value you can get from Operator Descriptors to enable more complex interactions and improve the overall usability of your application.
Generate a creation form based on the OpenAPI schema
Many of our partners (ISVs) have certain requirements when building a UI form to guide users in creating an instance of their application or custom resource managed by their Operators. Starting from Kubernetes 1.8, CustomResourceDefinitions (CRDs) gained the ability to define an optional OpenAPI v3 based validation schema. Starting with Kubernetes 1.15, any new CRD feature requires a structural schema. This is important not only for data consistency and security; it also enables the potential to design and build a richer UI to improve the user experience when creating or mutating custom resources.
 
For example, here is a CRD manifest from one of our partners:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: null
  name: couchbaseclusters.couchbase.com
spec:
  …
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            …
            cluster:
              properties:
                …
                autoFailoverServerGroup:
                  type: boolean
                autoFailoverTimeout:
                  maximum: 3600
                  minimum: 5
                  type: integer
                …
              required:
              …
              - autoFailoverTimeout
              …

 
With the validation info, we can start associating these fields with corresponding UI input fields. Since the autoFailoverServerGroup field expects a boolean data type, we can assign this field a checkbox, a radio button, or a toggle switch. As for the autoFailoverTimeout field, we can simply limit the input to an integer between 5 and 3600. We can also denote that autoFailoverTimeout is a required field, while autoFailoverServerGroup is optional. So far, everything looks good. However, things start to get complicated for other data types or complex nested fields.
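
For illustration, a CouchbaseCluster spec fragment satisfying these constraints could look like the sketch below (the values are made up for this example):

spec:
  cluster:
    autoFailoverServerGroup: false   # optional boolean: checkbox, radio buttons, or a toggle switch
    autoFailoverTimeout: 120         # required integer, constrained to the range 5-3600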
 
From our partners who build Operators, one common use case we see is that the custom resource needs a Secret object as a prerequisite for creating an instance. In the CRD manifest, this would be specified similar to the code snippet below:

  …
  properties:
    credentials:
      type: string
      …

 
As we can see, the only viable validation from the OpenAPI schema checks the data type as “string”, which is fairly limited in terms of usability. Wouldn’t it be great if the UI could provide a searchable dropdown list of all the existing Secrets in your cluster? It could not only speed up the form-filling process but also reduce possible human errors compared with manual entry. This is where Operator Lifecycle Manager (OLM) descriptors come in.
Operator Descriptors enhancements
Prerequisites
 

Install Operator Lifecycle Manager

 
The Operator Lifecycle Manager (OLM) can be installed with one command on any Kubernetes cluster and works together with the OKD console. If you’re using Red Hat OpenShift 4, OLM comes pre-installed to manage and update the Operators on your cluster.
 
 

Generate an Operator manifest, ClusterServiceVersion

 
Generate the ClusterServiceVersion (CSV), the manifest OLM uses to describe the CRDs your Operator manages, the permissions it requires to function, and other installation information. See Generating a ClusterServiceVersion (CSV) for more information on generating it with the Operator SDK, or on manually defining the manifest file. You’ll only have to do this once, then carry these changes forward for successive releases of the Operator.
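
At a high level, a CSV is just another Kubernetes manifest. A trimmed sketch of the fields this post focuses on might look like the following (names and values are illustrative; the real examples later in this post come from the Couchbase, Prometheus, and etcd Operators):

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v0.1.0            # illustrative Operator name and version
spec:
  displayName: My Operator
  customresourcedefinitions:
    owned:
    - name: myapps.example.com         # illustrative CRD managed by this Operator
      version: v1alpha1
      kind: MyApp
      displayName: My App
      specDescriptors: []              # UI hints for spec fields (covered below)
      statusDescriptors: []            # UI hints for status fields (covered below)
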
specDescriptors
OLM introduces the notion of “descriptors” of both spec and status fields in Kubernetes API responses. Descriptors are intended to indicate various properties of a field in order to make decisions about their content. The schema for a descriptor is the same, regardless of type:

type Descriptor = {
  path: string; // Dot-delimited path of the field on the object
  displayName: string;
  description: string;

  /* Used to determine which "capabilities" this descriptor has, and which
     React component to use */
  'x-descriptors': SpecCapability[] | StatusCapability[];
  value?: any; // Optional value 
}

 
The x-descriptors field can be thought of as “capabilities” (and is referenced in the code using this term). Capabilities are defined in types.ts and provide a mapping between descriptors and different UI components (implemented as React components) using a URN format.
 

k8sResourcePrefix descriptor 

 
Recall the use case previously mentioned for specifying a Kubernetes resource in the CRD manifest. The “k8sResourcePrefix” descriptor is the OLM descriptor for exactly this purpose:

k8sResourcePrefix = 'urn:alm:descriptor:io.kubernetes:',

 
Let’s take CouchbaseCluster as an example to see how this descriptor can be adopted in the Couchbase Operator’s CSV file. First, inside the CRD manifest (couchbasecluster.crd.yaml):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: null
  name: couchbaseclusters.couchbase.com
spec:
  …
  names:
    kind: CouchbaseCluster
  …
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            …
            authSecret:
              minLength: 1
              type: string
            …
            tls:
              properties:
                static:
                  properties:
                    member:
                      properties:
                        serverSecret:
                          type: string
                      type: object
                    operatorSecret:
                      type: string
                  type: object
              type: object
          required:
          …
          - authSecret
          …

 
The validation block specifies that a Secret object (authSecret) storing the admin credentials is required for creating a CouchbaseCluster custom resource. For TLS (tls), two additional Secret objects are needed: one for the server Secret (tls.static.member.serverSecret) and the other for the Operator Secret (tls.static.operatorSecret).
 
To utilize the OLM Descriptors, inside Couchbase Operator’s CSV file, we first specify the k8sResourcePrefix descriptor as a “Secret” object (urn:alm:descriptor:io.kubernetes:Secret) and then point it to the fields on the CouchbaseCluster CRD object in the “path” field.

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: couchbase-operator.v1.1.0
  …
spec:
  customresourcedefinitions:
    owned:
    - description: Manages Couchbase clusters
      displayName: Couchbase Cluster
      kind: CouchbaseCluster
      name: couchbaseclusters.couchbase.com
      …
      specDescriptors:
      - description: The name of the secret object that stores the admin
          credentials.
        displayName: Auth Secret
        path: authSecret
        x-descriptors:
        - urn:alm:descriptor:io.kubernetes:Secret
        …
      - description: The name of the secret object that stores the server's
          TLS certificate.
        displayName: Server TLS Secret
        path: tls.static.member.serverSecret
        x-descriptors:
        - urn:alm:descriptor:io.kubernetes:Secret
        …
      - description: The name of the secret object that stores the
          Operator's TLS certificate.
        displayName: Operator TLS Secret
        path: tls.static.operatorSecret
        x-descriptors:
        - urn:alm:descriptor:io.kubernetes:Secret

 
Let’s take a closer look. In the CSV file:

Under the spec.customresourcedefinitions.owned section (i.e., the CRDs owned by this Operator, which can be more than one), specify the metadata of your custom resource.
Since this is for “creating or mutating” the custom resource, assign the k8sResourcePrefix descriptor under the “specDescriptors” section.
The description and displayName fields are straightforward: they are displayed on the UI as the help text and the field title.
The path field points to the field on the object, expressed as a dot-delimited path to where it lives inside the CRD (i.e., couchbasecluster.crd.yaml).
For x-descriptors, assign the `k8sResourcePrefix` descriptor and specify the resource type as “Secret” in “urn:alm:descriptor:io.kubernetes:Secret”.

Now, let’s take a look in the OpenShift console. We first have to install the Couchbase Operator from “OperatorHub” so the Operator is ready to be used on the cluster.
We can then create a CouchbaseCluster instance via the “Installed Operators” view:

Next, switch to the form view: a searchable dropdown component shows up on the UI for each of the Secret fields. This component lets us look up existing Secrets on the cluster and binds the selection to the corresponding field on the CouchbaseCluster object. It’s that simple.
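
For illustration, the Secret names picked in the dropdowns end up as plain string values on the custom resource; a CouchbaseCluster fragment might look like the sketch below (API version and resource names are made up for this example):

apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  authSecret: cb-example-auth          # selected from the Secrets dropdown
  tls:
    static:
      member:
        serverSecret: cb-server-tls    # tls.static.member.serverSecret
      operatorSecret: cb-operator-tls  # tls.static.operatorSecret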

resourceRequirements descriptor 

Specifying how much CPU and memory (RAM) each container of a pod needs is another use case worth mentioning. Again, in the couchbasecluster.crd.yaml manifest, we can see the fields for specifying the resource limits and requests for a running pod:


spec:
  …
  names:
    kind: CouchbaseCluster
  …
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            …
            servers:
              items:
                properties:
                  name:
                    minLength: 1
                    pattern: ^[-_a-zA-Z0-9]+$
                    type: string
                  pod:
                    properties:
                      …
                      resources:
                        properties:
                          limits:
                            properties:
                              cpu:
                                type: string
                              memory:
                                type: string
                              storage:
                                type: string
                            type: object
                          requests:
                            properties:
                              cpu:
                                type: string
                              memory:
                                type: string
                              storage:
                                type: string
                            type: object
                      …

 
As we can see, these fields are deeply nested and could be tricky to convert and organize into form fields. Instead, we can take advantage of the resourceRequirements descriptor by including it in the Couchbase Operator’s CSV file and pointing it to the resources field of the CouchbaseCluster object.

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: couchbase-operator.v1.1.0
  …
spec:
  customresourcedefinitions:
    owned:
    - description: Manages Couchbase clusters
      displayName: Couchbase Cluster
      kind: CouchbaseCluster
      name: couchbaseclusters.couchbase.com
      …
      specDescriptors:
        …
      - description: Limits describes the minimum/maximum amount of compute
          resources required/allowed.
        displayName: Resource Requirements
        path: servers[0].pod.resources
        x-descriptors:
        - urn:alm:descriptor:com.tectonic.ui:resourceRequirements
        …
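
For illustration, the data entered in that widget lands under servers[0].pod.resources on the custom resource. A hypothetical CouchbaseCluster fragment (server name and resource values are made up for this example):

spec:
  servers:
  - name: all-services                 # matches the ^[-_a-zA-Z0-9]+$ pattern above
    pod:
      resources:
        limits:
          cpu: "2"                     # maximum compute resources allowed
          memory: 4Gi
        requests:
          cpu: "1"                     # minimum compute resources required
          memory: 2Gi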

 
The Resource Requirements React component will then show up on the UIs for creating or mutating your custom resource in the OpenShift console. For example, in the Create Couchbase Cluster view, the UI shows both the Limits and Requests fields:

Likewise, in the CouchbaseCluster Details view, you can access the widgets to configure the Resource Limits and Requests, as shown in the screenshots below:

nodeAffinity, podAffinity, and podAntiAffinity descriptors

Assigning your running pods to particular nodes can be achieved with the affinity feature, which consists of two kinds of affinity: Node Affinity and Pod Affinity/Pod Anti-affinity. In the CRD manifest, these affinity-related fields can be deeply nested and fairly complicated (see the nodeAffinity, podAffinity, and podAntiAffinity fields in alertmanager.crd.yaml).
 
Similarly, we can leverage the nodeAffinity, podAffinity, and podAntiAffinity descriptors and point them to the affinity field of the Alertmanager object.

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: prometheusoperator.0.27.0
  …
spec:
  …
  displayName: Prometheus Operator
  …
  customresourcedefinitions:
    owned:
    …
    - name: alertmanagers.monitoring.coreos.com
      version: v1
      kind: Alertmanager
      displayName: Alertmanager
      description: Configures an Alertmanager for the namespace
      …
      specDescriptors:
        …
        - description: Node affinity is a group of node affinity scheduling rules
          displayName: Node Affinity
          path: affinity.nodeAffinity
          x-descriptors:
          - urn:alm:descriptor:com.tectonic.ui:nodeAffinity
        - description: Pod affinity is a group of inter pod affinity scheduling rules
          displayName: Pod Affinity
          path: affinity.podAffinity
          x-descriptors:
          - urn:alm:descriptor:com.tectonic.ui:podAffinity
        - description: Pod anti affinity is a group of inter pod anti affinity scheduling rules
          displayName: Pod Anti-affinity
          path: affinity.podAntiAffinity
          x-descriptors:
          - urn:alm:descriptor:com.tectonic.ui:podAntiAffinity
        …

 
Later, when we go ahead and create an Alertmanager instance in the console, we will see these UI widgets with clear visual grouping, along with input instructions guiding how to specify affinity rules as key/value pairs with a logical operator. The “operator” field is a dropdown that provides the viable options, and the “value” field is enabled or disabled dynamically based on the operator specified. For “preferred” rules, a weight is also required.
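
For illustration, the widgets produce a standard Kubernetes affinity stanza on the Alertmanager spec; a sketch with made-up keys and values:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch        # key/value pair joined by a logical operator
            operator: In                   # dropdown options: In, NotIn, Exists, DoesNotExist, Gt, Lt
            values:
            - amd64
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 10                         # "preferred" rules also require a weight
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: Exists               # "values" is disabled for Exists/DoesNotExist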

Through talking with customers, we’ve learned that the majority of our users treat the UI as a medium to learn and explore technical or API details. The affinity descriptors are a good example of the desired UX we strive to provide.

statusDescriptors

So far we have covered the OLM descriptors for the spec fields in Kubernetes API responses. In addition, OLM also provides a set of statusDescriptors for referencing fields in the status block of a custom resource. Some of them also come with an associated React component for richer interactions with the API. One example is the podStatuses descriptor.

podStatuses descriptor 

The podStatuses statusDescriptor is usually paired with the podCount specDescriptor. Users can specify the desired size of the custom resource being deployed with the podCount specDescriptor, while the podStatuses statusDescriptor provides a dynamic graphical widget that represents the latest member status of the custom resource being created or mutated.

Following the same pattern, in the code snippet below we can see how the etcd Operator applies the podCount and podStatuses descriptors in its CSV file for creating, mutating, and displaying the EtcdCluster custom resource in the console.

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: etcdoperator.v0.9.4
  …
spec:
  …
  displayName: etcd
  …
  customresourcedefinitions:
    owned:
    …
    - name: etcdclusters.etcd.database.coreos.com
      version: v1beta2
      kind: EtcdCluster
      displayName: etcd Cluster
      description: Represents a cluster of etcd nodes.
      …
      specDescriptors:
        - description: The desired number of member Pods for the etcd cluster.
          displayName: Size
          path: size
          x-descriptors:
          - 'urn:alm:descriptor:com.tectonic.ui:podCount'
        …
      statusDescriptors:
        - description: The status of each of the member Pods for the etcd cluster.
          displayName: Member Status
          path: members
          x-descriptors:
          - 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
        …
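
For reference, the members field that the path points at lives in the status block of the EtcdCluster resource and is reported by the etcd Operator; a simplified, illustrative fragment (pod names and grouping are a sketch, not the Operator's exact output):

status:
  size: 3
  members:
    ready:                             # pod names grouped by state, rendered by the
    - example-etcd-cluster-0000        # console as a graphical pod-status widget
    - example-etcd-cluster-0001
    - example-etcd-cluster-0002
    unready: []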

What’s next?
We hope the content and examples covered in this post will trigger community-wide discussions on how to improve the overall UX of Operator-managed applications for end users. If you would like to learn more about OLM descriptors, check out the GitHub page, where you can see the full list of specDescriptors and statusDescriptors that are currently available. Share your experience or feedback on the Operator Lifecycle Manager (OLM) through GitHub issues. If you want to explore more and contribute to Operator descriptors, check out the contributing guide.