Our commitment to you

An open letter to our customers, business partners and friends
I am sure that you are doing everything possible to support your employees, customers and families during this challenging time. We at Mirantis are doing the same.
We understand how critical our products and services are to your business. I want to assure you that every one of you can continue to rely on us. Mirantis continues to operate without any interruption as we have taken all necessary steps to protect the well-being of our employees in every country where we are present.
To that end:

We have assigned additional staff to customer support. We continuously track our support quality with particular attention to critical incidents.
Our product development continues without disruption, and our software release schedule remains on track.
We are soliciting feedback from all our customers on where we can do better and go above and beyond our contractual commitments and prior expectations.

In summary, we will do whatever it takes to help you achieve your goals.
To help us better serve you, I also have one ask: if you have any feedback, a request, or a problem we can help with, please contact me. I would love to hear from you.
Thank you for your continued trust in Mirantis, and may you and your loved ones be well.
Sincerely,
Adrian Ionel
CEO and Chairman
Mirantis
The post Our commitment to you appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Simplifying deployments of accelerated AI workloads on Red Hat OpenShift with NVIDIA GPU Operator

In this blog we would like to demonstrate how to use the new NVIDIA GPU operator to deploy GPU-accelerated workloads on an OpenShift cluster.
The new GPU operator enables OpenShift to schedule workloads that require GPGPUs as easily as one schedules CPU or memory for traditional, non-accelerated workloads. Create a container with a GPU workload inside it, request the GPU resource when creating the pod, and OpenShift takes care of the rest. This makes deploying GPU workloads to OpenShift clusters straightforward for users and administrators, since everything is managed at the cluster level rather than on the host machines. The GPU operator for OpenShift helps simplify and accelerate compute-intensive ML/DL modeling tasks for data scientists, and helps run inferencing tasks across data centers, public clouds, and at the edge. Typical workloads that benefit from GPU acceleration include image and speech recognition, visual search, and several others.
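For instance, a pod that needs a GPU simply asks for the nvidia.com/gpu resource in its spec. The following is a minimal sketch; the sample image is an assumption, and any CUDA-capable image would do:

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    # the image below is an illustrative assumption; use any CUDA-capable image
    image: nvidia/samples:vectoradd-cuda10.2
    resources:
      limits:
        nvidia.com/gpu: 1   # request one GPU; the operator's device plugin handles the rest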
We assume that you have an OpenShift 4.x cluster deployed with some worker nodes that have GPU devices.
$ oc get no
NAME                           STATUS   ROLES    AGE     VERSION
ip-10-0-130-177.ec2.internal   Ready    worker   33m     v1.16.2
ip-10-0-132-41.ec2.internal    Ready    master   42m     v1.16.2
ip-10-0-156-85.ec2.internal    Ready    worker   33m     v1.16.2
ip-10-0-157-132.ec2.internal   Ready    master   42m     v1.16.2
ip-10-0-170-127.ec2.internal   Ready    worker   4m15s   v1.16.2
ip-10-0-174-93.ec2.internal    Ready    master   42m     v1.16.2
In order to expose what features and devices each node has to OpenShift we first need to deploy the Node Feature Discovery (NFD) Operator (see here for more detailed instructions).
Once the NFD Operator is deployed we can take a look at one of our nodes; here we see the difference between the node before and after. Among the new labels describing the node features, we see:

feature.node.kubernetes.io/pci-10de.present=true

This indicates that we have at least one PCIe device from vendor ID 0x10de, which is NVIDIA's. The labels created by the NFD Operator are what the GPU Operator uses to determine where to deploy the driver containers for the GPU(s).
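For example, once NFD has labeled the nodes, you can list just the nodes carrying NVIDIA devices with a label selector (output illustrative):

$ oc get nodes -l feature.node.kubernetes.io/pci-10de.present=true
NAME                           STATUS   ROLES    AGE   VERSION
ip-10-0-170-127.ec2.internal   Ready    worker   9m    v1.16.2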
However, before we can deploy the GPU Operator we need to ensure that the appropriate RHEL entitlements have been created in the cluster (see here for more detailed instructions). After the RHEL entitlements have been deployed to the cluster, then we may proceed with installation of the GPU Operator.
The GPU Operator is currently installed via a Helm chart, so make sure that you have Helm v3+ installed. Once you have Helm installed, we can begin the GPU Operator installation.
     1. Add the Nvidia helm repo:
 

$ helm repo add nvidia https://nvidia.github.io/gpu-operator
"nvidia" has been added to your repositories

     2. Update the helm repo:

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nvidia" chart repository
Update Complete. ⎈ Happy Helming!⎈

     3. Install the GPU Operator helm chart:
 

$ helm install --devel https://nvidia.github.io/gpu-operator/gpu-operator-1.0.0.tgz \
  --set platform.openshift=true,operator.defaultRuntime=crio,nfd.enabled=false --wait --generate-name

     4. Monitor deployment of GPU Operator:
 

$ oc get pods -n gpu-operator-resources -w

This command will watch the gpu-operator-resources namespace as the operator rolls out on the cluster. Once the installation is completed you should see something like this in the gpu-operator-resources namespace.
We can see that both the nvidia-driver-validation and the nvidia-device-plugin-validation pods have completed successfully and we have four daemonsets, each running the number of pods that match the node label feature.node.kubernetes.io/pci-10de.present=true. Now we can inspect our GPU node once again.
Here we can see the latest changes to our node, which now include Capacity, Allocatable, and Allocated Resources entries for a new resource called nvidia.com/gpu. Since our GPU node only has one GPU, we see that reflected.
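A quick way to confirm this from the command line is to describe the node and look for the nvidia.com/gpu entries under Capacity, Allocatable, and Allocated resources (output illustrative; substitute your own node name):

$ oc describe node ip-10-0-170-127.ec2.internal | grep nvidia.com/gpu
  nvidia.com/gpu:  1
  nvidia.com/gpu:  1
  nvidia.com/gpu   0           0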
Now that we have the NFD Operator, cluster entitlements, and the GPU Operator deployed we can assign workloads that will use the GPU resources.
Let's begin by configuring Cluster Autoscaling for our GPU devices. This will allow us to create workloads that request GPU resources; the cluster will then automatically scale the GPU nodes up and down depending on the number of pending requests for these devices.
The first step is to create a ClusterAutoscaler resource definition, for example:
$ cat 0001-clusterautoscaler.yaml
apiVersion: "autoscaling.openshift.io/v1"
kind: "ClusterAutoscaler"
metadata:
  name: "default"
spec:
  podPriorityThreshold: -10
  resourceLimits:
    maxNodesTotal: 24
    gpus:
      - type: nvidia.com/gpu
        min: 0
        max: 16
  scaleDown:
    enabled: true
    delayAfterAdd: 10m
    delayAfterDelete: 5m
    delayAfterFailure: 30s
    unneededTime: 10m

$ oc create -f 0001-clusterautoscaler.yaml
clusterautoscaler.autoscaling.openshift.io/default created
 
Here we define the number of nvidia.com/gpu resources that we expect for the Autoscaler.
 
After we deploy the ClusterAutoscaler, we deploy the MachineAutoscaler resource that references the MachineSet that is used to scale the cluster:
$ cat 0002-machineautoscaler.yaml
apiVersion: "autoscaling.openshift.io/v1beta1"
kind: "MachineAutoscaler"
metadata:
  name: "gpu-worker-us-east-1a"
  namespace: "openshift-machine-api"
spec:
  minReplicas: 1
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: gpu-worker-us-east-1a

$ oc create -f 0002-machineautoscaler.yaml
machineautoscaler.autoscaling.openshift.io/gpu-worker-us-east-1a created
 
The metadata name should be a unique MachineAutoscaler name, and the MachineSet name at the end of the file should be the value of an existing MachineSet.
 
Looking at our cluster, we check what MachineSets are available:
 
$ oc get machinesets -n openshift-machine-api
NAME                                   DESIRED   CURRENT   READY   AVAILABLE   AGE
sj-022820-01-h4vrj-worker-us-east-1a   1         1         1       1           4h45m
sj-022820-01-h4vrj-worker-us-east-1b   1         1         1       1           4h45m
sj-022820-01-h4vrj-worker-us-east-1c   1         1         1       1           4h45m

In this example the third MachineSet sj-022820-01-h4vrj-worker-us-east-1c is the one that has GPU nodes.
 

$ oc get machineset sj-022820-01-h4vrj-worker-us-east-1c -n openshift-machine-api -o yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: sj-022820-01-h4vrj-worker-us-east-1c
  namespace: openshift-machine-api
spec:
  replicas: 1
  template:
    spec:
      providerSpec:
        value:
          instanceType: p3.2xlarge
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: us-east-1c
            region: us-east-1
...

 
We can create our MachineAutoscaler resource definition, which would look like this:
 
$ cat 0002-machineautoscaler.yaml
apiVersion: "autoscaling.openshift.io/v1beta1"
kind: "MachineAutoscaler"
metadata:
  name: "sj-022820-01-h4vrj-worker-us-east-1c"
  namespace: "openshift-machine-api"
spec:
  minReplicas: 1
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: sj-022820-01-h4vrj-worker-us-east-1c

$ oc create -f 0002-machineautoscaler.yaml
machineautoscaler.autoscaling.openshift.io/sj-022820-01-h4vrj-worker-us-east-1c created
We can now start to deploy RAPIDS using shared storage between multiple instances. Begin by creating a new project:
 

$ oc new-project rapids

 
Assuming you have a StorageClass that provides ReadWriteMany functionality, like OpenShift Container Storage with CephFS, we can create a PVC to attach to our RAPIDS instances. (`storageClassName` is the name of the StorageClass.)
 
$ cat 0003-pvc-for-ceph.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rapids-cephfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 25Gi
  storageClassName: example-storagecluster-cephfs

$ oc create -f 0003-pvc-for-ceph.yaml
persistentvolumeclaim/rapids-cephfs-pvc created

$ oc get pvc -n rapids
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
rapids-cephfs-pvc   Bound    pvc-a6ba1c38-6498-4b55-9565-d274fb8b003e   25Gi       RWX            example-storagecluster-cephfs   33s
 
Now that we have our shared storage deployed, we can finally deploy the RAPIDS template and create the new application inside our rapids namespace:
 
$ oc create -f 0004-rapids_template.yaml
template.template.openshift.io/rapids created

$ oc new-app rapids
--> Deploying template "rapids/rapids" to project rapids

     RAPIDS
     ---------
     Template for RAPIDS
     A RAPIDS pod has been created.

     * With parameters:
        * Number of GPUs=1
        * Rapids instance number=1

--> Creating resources ...
    service "rapids" created
    route.route.openshift.io "rapids" created
    pod "rapids" created
--> Success
    Access your application via route 'rapids-rapids.apps.sj-022820-01.perf-testing.devcluster.openshift.com'
    Run 'oc status' to view your app.
 
In a browser we can now load the route that the template created above: rapids-rapids.apps.sj-022820-01.perf-testing.devcluster.openshift.com

[Image: example notebook running using GPUs on OpenShift]
 
We can also see on our GPU node that RAPIDS is running and using the GPU resource:

$ oc describe node <gpu-node>

 
Given that more than one person wants to run Jupyter notebooks, let's create a second RAPIDS instance with its own dedicated GPU.
 

$ oc new-app rapids -p INSTANCE=2
--> Deploying template "rapids/rapids" to project rapids

     RAPIDS
     ---------
     Template for RAPIDS
     A RAPIDS pod has been created.

     * With parameters:
        * Number of GPUs=1
        * Rapids instance number=2

--> Creating resources ...
    service "rapids2" created
    route.route.openshift.io "rapids2" created
    pod "rapids2" created
--> Success
    Access your application via route 'rapids2-rapids.apps.sj-022820-01.perf-testing.devcluster.openshift.com'
    Run 'oc status' to view your app.

 
But we just used the only GPU resource on our GPU node, so the new deployment of RAPIDS (rapids2) is unschedulable due to insufficient GPU resources.
 

$ oc get pods -n rapids
NAME      READY   STATUS    RESTARTS   AGE
rapids    1/1     Running   0          30m
rapids2   0/1     Pending   0          2m44s

 
If we look at the event state of the rapids2 pod:
 

$ oc describe pod/rapids2 -n rapids
Events:
  Type     Reason            Age        From                Message
  ----     ------            ---        ----                -------
  Warning  FailedScheduling  <unknown>  default-scheduler   0/9 nodes are available: 9 Insufficient nvidia.com/gpu.
  Normal   TriggeredScaleUp  44s        cluster-autoscaler  pod triggered scale-up: [{openshift-machine-api/sj-022820-01-h4vrj-worker-us-east-1c 1->2 (max: 6)}]

 
We just need to wait for the ClusterAutoscaler and MachineAutoscaler to do their jobs and scale up the MachineSet, as seen above. Once the new node is created:
 

$ oc get no
NAME                           STATUS   ROLES    AGE   VERSION
(old nodes)
ip-10-0-167-0.ec2.internal     Ready    worker   72s   v1.16.2

 
The new RAPIDS instance will deploy to the new node once it becomes Ready, with no user intervention.
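Once the scale-up completes, both instances should be running; the output below is illustrative:

$ oc get pods -n rapids
NAME      READY   STATUS    RESTARTS   AGE
rapids    1/1     Running   0          45m
rapids2   1/1     Running   0          3m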
To summarize, the new NVIDIA GPU Operator simplifies the use of GPU resources in OpenShift clusters. In this blog we've demonstrated the use case of multi-user RAPIDS development using NVIDIA GPUs. Additionally, we've used OpenShift Container Storage and the ClusterAutoscaler to automatically scale up our special resource nodes as they are requested by applications.
As you observed, the NVIDIA GPU Operator is already relatively easy to deploy using Helm, and work is ongoing to support deployments right from OperatorHub, simplifying this process even further.
For more information on NVIDIA GPU Operator and OpenShift, please see the official Nvidia documentation.
1 – Helm 3 is in Tech Preview in OpenShift 4.3, and will GA in OpenShift 4.4
The post Simplifying deployments of accelerated AI workloads on Red Hat OpenShift with NVIDIA GPU Operator appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift Commons Briefing: JupyterHub on-demand (and other tools) with Red Hat’s Guillaume Moutier and Landon LaSmith

Welcome to the first briefing of the “All Things Data” series of OpenShift Commons briefings. We’ll be holding future briefings on Tuesdays at 8:00am PST, so reach out with any topics you’re interested in and remember to bookmark the OpenShift Commons Briefing calendar!
In this first briefing of the "All Things Data" OpenShift Commons series, Red Hat's Guillaume Moutier and Landon LaSmith demonstrated how to easily integrate Open Data Hub and OpenShift Container Storage to build your own data science platform. When working on data science projects, it's a guarantee that you will need different kinds of storage for your data: block, file, and object.
Open Data Hub (ODH) is an open source project that provides open source AI tools for running large and distributed AI workloads on OpenShift Container Platform.
OpenShift Container Storage (OCS) is software-defined storage for containers that provides you with every type of storage you need, from a simple, single source.
Briefing Slides: ODH on OCS
Additional Resources:
Culture of innovation: Open Data Hub
Open Data Hub Community Project Website: opendatahub.io
OpenShift AI/ML Resources: openshift.com/ai-ml
Product Documentation for Red Hat OpenShift Container Storage 4.2
Feedback:
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
 
The post OpenShift Commons Briefing: JupyterHub on-demand (and other tools) with Red Hat’s Guillaume Moutier and Landon LaSmith appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Announcing OpenShift Serverless 1.5.0 Tech Preview – A sneak peek of our GA

I am sure many of you are as excited as we are about cloud-native development, and one of the hot topics in the space is serverless. With that in mind, let's talk about our most recent release of OpenShift Serverless, which includes a number of features and functionalities that improve the developer experience in Kubernetes and enable many interesting application patterns and workloads.
For the uninitiated, OpenShift Serverless is based on the open source project Knative and helps developers deploy and run almost any containerized workload as a serverless workload. Applications can scale up or down (to zero) or react to and consume events without lock-in concerns. The Serverless user experience can be integrated with other OpenShift services, such as OpenShift Pipelines, Monitoring and Metering. Beyond autoscaling and events, it also provides a number of other features, such as:

Immutable revisions allow you to deploy new features, performing canary, A/B, or blue-green testing with gradual traffic rollout, with no sweat and following best practices.

Ready for the hybrid cloud: Truly portable serverless running anywhere OpenShift runs, that is on-premises or on any public cloud. Leverage data locality and SaaS when needed.

Use any programming language or runtime of choice. From Java, Python, Go and JavaScript to Quarkus, Spring Boot or Node.js.

One of the most interesting aspects of running serverless containers is that it offers an alternative path to application modernization that lets users reuse investments already made and what is available today. If you have a number of web applications, microservices or RESTful APIs built as containers that you would like to scale up and down based on the number of HTTP requests, that's a perfect fit. But if you would also like to build new event-driven systems that consume Apache Kafka messages or are triggered by new files being uploaded to Ceph (or S3), that's possible too. Autoscaling your containers to match the number of requests can improve your response time, offering a better quality of service, and increase your cluster density by allowing more applications to run, optimizing resource usage.
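As a sketch of what this request-based autoscaling looks like in practice, a Knative Service can declare a per-revision concurrency target through an annotation; the service name and image here are assumptions, not part of the release notes:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        # aim for ~10 concurrent requests per pod; scales to zero when idle
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
      - image: quay.io/example/myapp:latest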
New Features in 1.5.0 – Technology Preview
Based on Knative 0.12.1 – Keeping up with the release cadence of the community, we already include Knative 0.12 in Serving, Eventing and kn – the official Knative CLI. As with anything we ship as a product at Red Hat, this means we have validated these components on a variety of different platforms and configurations OpenShift runs.
Use of Kourier – By using Kourier we can keep the list of requirements to get Serverless installed on OpenShift to a minimum, with low resource consumption, faster cold starts, and no impact on non-serverless workloads running in the same namespace. In combination with fixes we implemented in OpenShift 4.3.5, the time to create an application from a pre-built container improved by 40-50%, depending on the container image size.
Before Kourier

After Kourier 

 
Disconnected installs (air gapped) – Given requests from several customers who want to benefit from serverless architectures and their programming model, but in controlled environments with restricted or no internet access, we are enabling the OpenShift Serverless operator to be installed in disconnected OpenShift clusters. The kn CLI, used to manage applications in Knative, is also available to download from the OpenShift cluster itself, even in disconnected environments.

The journey so far
We already have OpenShift Serverless being deployed and used on a number of OpenShift clusters by a variety of customers during the Technology Preview. These clusters are running on a number of different providers, such as on premises with bare-metal hardware or virtualized systems, or on the cloud running on AWS or Azure. These environments exposed our team to the different configurations that you really only get by running hybrid cloud solutions, which enabled us to cast a wide net during this validation period and take that feedback back to the community, improving quality and usability.
Install experience and upgrades with the Operator 

The Serverless operator deals with all the complexities of installing Knative on Kubernetes, offering a simplified experience. It takes it one step further by enabling an easy path to upgrades and updates, which are delivered over-the-air and can be applied automatically, so system administrators can rest assured that they receive CVE and bug fixes on production systems. Those concerned with automatic updates can also opt to apply them manually.
Integration with Console
With the integration with the OpenShift console, users have the ability to configure traffic distribution using the UI as an alternative to using kn, the CLI. Traffic splitting lets users perform a number of different techniques to roll out new versions and new features of their applications, the most common ones being A/B testing, canary releases or dark launches. By visualizing this in the topology view, users can quickly get an understanding of the architecture and deployment strategies being used and course correct if needed.
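The same traffic split can be expressed with the CLI. A hedged sketch that sends 10% of traffic to a new revision (the service and revision names are hypothetical):

$ kn service update myapp --traffic myapp-v1=90 --traffic myapp-v2=10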
 
 

The integration with the console provides a good visualization of event sources connected to services. The screenshot below, for example, has a service (kiosk) consuming messages from Apache Kafka, while two other applications (frontend) are scaled down to zero.
 
Deploy your first application and use Quarkus
To deploy your first serverless container using the CLI (kn), download the client and from a terminal execute: 
[markito@anakin ~]$ kn service create greeter --image quay.io/rhdevelopers/knative-tutorial-greeter:quarkus
Creating service 'greeter' in namespace 'default':
  0.133s The Route is still working to reflect the latest desired specification.
  0.224s Configuration "greeter" is waiting for a Revision to become ready.
  5.082s ...
  5.132s Ingress has not yet been reconciled.
  5.235s Ready to serve.

Service 'greeter' created to latest revision 'greeter-pjxfx-1' is available at URL:
http://greeter.default.apps.test.mycluster.org

This will create a Knative Service based on the container image provided. Quarkus, a Kubernetes native Java stack, is a perfect fit for building serverless applications in Java, given its blazing fast startup time and low memory footprint, but Knative can also run any other language or runtime. Creating a Knative Service object will manage multiple Kubernetes objects commonly used to deploy an application, such as Deployments, Routes and Services, providing a simplified experience for anyone getting started with Kubernetes development, with the added benefit of making it autoscale based on the number of requests and all other benefits already mentioned on this post. 
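For reference, the kn command above is roughly equivalent to applying a Knative Service manifest like this sketch:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
spec:
  template:
    spec:
      containers:
      - image: quay.io/rhdevelopers/knative-tutorial-greeter:quarkus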
 
You can also follow the excellent Knative Tutorial for more scenarios and samples. 
 
The journey so far has been exciting and we have been contributing to the Knative community since its inception. I would also like to send a big “thank you” to our team across engineering, QE and documentation for keeping up with the fast pace of the serverless space; they have been doing phenomenal work. 
 
Get started today with OpenShift Serverless following the installation instructions! 
The post Announcing OpenShift Serverless 1.5.0 Tech Preview – A sneak peek of our GA appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Community Blog Round Up 17 March 2020

Oddbit writes two incredible articles, one about configuring a passwordless serial console for the Raspberry Pi and another about configuring Open vSwitch with nmcli, while Carlos Camacho publishes Emilien Macchi's deep dive demo on containerized deployments without Paunch.
A passwordless serial console for your Raspberry Pi by oddbit
legendre on #raspbian asked:
How can i config rasp lite to open a shell on the serial uart on boot? Params are 1200-8-N-1 Dont want login running, just straight to sh
In this article, we’ll walk through one way of implementing this configuration.
Read more at https://blog.oddbit.com/post/2020-02-24-a-passwordless-serial-console/
TripleO deep dive session #14 (Containerized deployments without paunch) by Carlos Camacho
This is the 14th release of the TripleO “Deep Dive” sessions. Thanks to Emilien Macchi for this deep dive session about the status of the containerized deployment without Paunch.
Read more at https://www.anstack.com/blog/2020/02/18/tripleo-deep-dive-session-14.html
Configuring Open vSwitch with nmcli by oddbit
I recently acquired a managed switch for my home office in order to segment a few devices off onto their own isolated vlan. As part of this, I want to expose these vlans on my desktop using Open vSwitch (OVS), and I wanted to implement the configuration using NetworkManager rather than either relying on the legacy /etc/sysconfig/network-scripts scripts or rolling my own set of services. These are my notes in case I ever have to do this again.
Read more at https://blog.oddbit.com/post/2020-02-15-configuring-open-vswitch-with/
Source: RDO

Introduction to Security Contexts and SCCs

With Role-Based Access Control (RBAC), we have an OpenShift-wide tool to determine the actions (or verbs) each user can perform against each object in the API. Rules are defined by combining resources with API verbs into sets called roles, and with role bindings we attribute those rules to users. Once we have those users or service accounts, we can grant them access to those actions on particular resources. For example, a Pod running under a specific service account may be able to delete a ConfigMap, but not a Secret. That's an upper-level control-plane feature that doesn't take into account the underlying node permission model, meaning the Unix permission model and some of its newer kernel accouterments.
So, with good RBAC practices the container platform is protected from the objects it creates, but the node may not be: a Pod may not be able to delete an object in etcd using the API because RBAC restricts it, yet it may delete important files on the system and even stop the kubelet if programmed to do so. To prevent this scenario, SCCs (Security Context Constraints) come to the rescue.
Linux Processes and Privileges
Before going into deep waters with SCCs, let's go back in time and take a look at some of the key concepts Linux brings to us regarding processes. A good start is entering the command man capabilities in a Linux terminal; that manual page contains fundamentals that are very important for understanding the goal behind SCCs.
The first important distinction we need to make is between privileged and unprivileged processes. Privileged processes have user ID 0, the superuser or root, while unprivileged processes have non-zero user IDs. Privileged processes bypass kernel permission checks. That means the actions a process or thread can perform on operating system objects, such as files, directories, symbolic links, pseudo filesystems (procfs, cgroupfs, sysfs, etc.) and even memory objects such as shared memory regions, pipes and sockets, are unlimited and not verified by the system. In other words, the kernel won't check user, group or other permissions (from the Unix permission model, UGO: user, group and others) before granting access to a specific object on behalf of the process.
If we look at the list of processes running as root on a Linux system using the command ps -u root, we will find very important processes, such as systemd, which has PID 1 and is responsible for bootstrapping the user space in most distributions and initializing the most common services. For that it needs unrestricted access to the system.
Unprivileged processes, though, are subject to full permission checking based on process credentials (user ID, group ID, supplementary group list, etc.). The kernel makes an iterative check under each category (user, group and others), trying to match the user and group credentials of the running process against the target object's permissions in order to grant or deny access. Keep in mind that this is not the service account in OpenShift: this is the system user that runs the container process, if we want to speak containers.
After kernel 2.2, the concept of capabilities was introduced. In order to have more flexibility and enable the use of superuser or root features in a granular way, those super privileges were broken into small pieces that can be enabled or disabled independently. That is what we call capabilities. We can take a deeper look at http://man7.org/linux/man-pages/man7/capabilities.7.html
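You can inspect the capability sets of a running process via /proc and decode them with capsh from the libcap tools; the exact masks vary by kernel version, so treat this output as illustrative:

$ grep Cap /proc/self/status
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
$ capsh --decode=0000003fffffffff
0x0000003fffffffff=cap_chown,cap_dac_override,...,cap_audit_read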
As an example, let's say we have an application that needs special networking configuration: we need to configure one interface, open a port on the system's firewall, create a NAT rule, and add a new custom route to the system's routing table, but we don't need to make arbitrary changes to any file in the system. We can set CAP_NET_ADMIN instead of running the process as a privileged one.
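On a plain Linux host, such a granular grant could look like this; the binary path is hypothetical:

$ sudo setcap cap_net_admin+ep /usr/local/bin/mynetapp
$ getcap /usr/local/bin/mynetapp
/usr/local/bin/mynetapp = cap_net_admin+ep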
Beyond privileges and capabilities we have SELinux and AppArmor, both kernel security modules that can be added on top of capabilities to get even more fine-grained security rules, using access control security policies or program profiles. In addition, we have seccomp, a secure computing mode kernel facility that reduces the system calls available to a given process.
Finally, adding to all that, we still have interprocess communication, privilege escalation and access to the host namespaces when we begin to talk about containers. That is out of scope here at this point, but…
How does that translate to containers?
That said, we come back to containers and ask: what are containers again? They are processes segregated by namespaces and cgroups, and as such they have all the same security features described above. So how do we create containers with those security features?
Let's first take a look at the smallest piece of software that creates the container process: runc. As its definition on the GitHub page says, it is a tool to spawn and run containers according to the OCI specification. It is the default choice for OCI runtimes, although we have others, such as Kata Containers. In order to use runc, we need a file system image and a bundle with the configuration for the process. The short story on the bundle is that we must provide a JSON-formatted specification for the container, where all the configuration is taken into account. Check this part of its documentation: https://github.com/opencontainers/runtime-spec/blob/master/config.md#linux-process
From there we have fields such as apparmorProfile, capabilities or selinuxLabel. We can set user ID, group ID and supplementary group IDs. What tool then automates the process of getting the file system ready and passing down those parameters for us?
We can use podman, for example, for testing or development, running isolated containers or pods. It allows us to do it with special privileges as we show below:
Privileged bash terminal:
sudo podman run --privileged -it registry.access.redhat.com/rhel7/rhel /bin/bash
Process ntpd with privilege to change the system clock:
sudo podman run -d --cap-add SYS_TIME ntpd
Ok. Cool. But when it comes time to run those containers on Kubernetes or OpenShift, how do we configure those capabilities and security features?
Inside the OpenShift platform, CRI-O is the container engine that runs and manages containers. It is compliant with the Kubernetes Container Runtime Interface (CRI), giving kubelet a standard interface to call the container engine; all the magic is done by automating runc behind the scenes, while allowing other features to be developed in the engine itself.
Following the workflow above to run a pod in Kubernetes or OpenShift, we first make an API call to Kubernetes asking it to run a particular Pod. It could come from an oc command or from code, for example. The API will process that request and store it in etcd; the pod will be scheduled to a specific node, since the scheduler watches those events; finally, kubelet on that node will read the event and call the container runtime (CRI-O) with all the parameters and options requested to run the pod. I know it's very summarized, but the important thing here is that we need to pass parameters down to the API in order to have our Pod configured with the desired privileges. In the example below, a new pod gets scheduled to run on node 1.

What goes into that YAML file in order to request those privileges? Two different objects are implemented in the Kubernetes API: PodSecurityContext and SecurityContext. The first is, obviously, related to Pods and the second to a specific container. They are part of their respective types, so you can find those fields in the Pod and Container specs of YAML manifests. With that, they can be applied to an entire Pod, no matter how many containers it holds, or to specific containers in that Pod; the SecurityContext settings take precedence over the PodSecurityContext ones. You can find the security context source code under https://github.com/kubernetes/api/blob/master/core/v1/types.go.
Here we can find a few examples on how to configure security contexts for Pods. Below I present the first three fields of the SecurityContext object.
type SecurityContext struct {
	// The capabilities to add/drop when running containers.
	// Defaults to the default set of capabilities granted by the container runtime.
	// +optional
	Capabilities *Capabilities `json:"capabilities,omitempty" protobuf:"bytes,1,opt,name=capabilities"`
	// Run container in privileged mode.
	// Processes in privileged containers are essentially equivalent to root on the host.
	// Defaults to false.
	// +optional
	Privileged *bool `json:"privileged,omitempty" protobuf:"varint,2,opt,name=privileged"`
	// The SELinux context to be applied to the container.
	// If unspecified, the container runtime will allocate a random SELinux context for each
	// container. May also be set in PodSecurityContext. If set in both SecurityContext and
	// PodSecurityContext, the value specified in SecurityContext takes precedence.
	// +optional
	SELinuxOptions *SELinuxOptions `json:"seLinuxOptions,omitempty" protobuf:"bytes,3,opt,name=seLinuxOptions"`
	<…>
}

Here is an example of a yaml manifest configuration with capabilities on securityContext field:
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-4
spec:
  containers:
  - name: sec-ctx-4
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]

Ok. Now what? We have an idea of how to give superpowers to a container or Pod, even though they may be RBAC-restricted. How can we control this behavior?
Security Context Constraints
Finally, we get back to our main subject: how can I make sure that a specific Pod or Container doesn't request more than it should in terms of process privileges, and not only OpenShift object privileges under its API?
That's the role of Security Context Constraints: to check beforehand whether the system can pass a pod or container configuration request with a privileged or custom security context on to the cluster API, which would end up running a powerful container process. To get a taste of what an SCC looks like, here is an example:
oc get scc restricted -o yaml

allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
allowedCapabilities: null
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: MustRunAs
groups:
- system:authenticated
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: restricted denies access to all host features and requires
      pods to be run with a UID, and SELinux context that are allocated to the namespace. This
      is the most restrictive SCC and it is used by default for authenticated users.
  creationTimestamp: "2020-02-08T17:25:39Z"
  generation: 1
  name: restricted
  resourceVersion: "8237"
  selfLink: /apis/security.openshift.io/v1/securitycontextconstraints/restricted
  uid: 190ef798-af35-40b9-a980-0d369369a385
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret

That above is the default SCC; it has pretty basic permissions and accepts Pod configurations that don't request special security contexts. Just by looking at the names of the fields, we can get an idea of how many features it verifies before letting a workload with containers pass the API and get scheduled.
In conclusion, we have at hand a tool that allows an OpenShift admin to decide, before a Pod gets requested to the API and passed to the container runtime, whether an entire pod can run in privileged mode, have special capabilities, access directories and volumes in the host namespace, use special SELinux contexts, and what ID the container process can use, among other features.
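For example, granting an existing SCC to the service account a workload runs under is a one-liner (the account and project names here are hypothetical):

$ oc adm policy add-scc-to-user anyuid -z my-serviceaccount -n my-project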
In the next blog posts we’ll explore each field of an SCC, explore their underlying Linux technology, present the prebuilt ones and understand their relationship with the RBAC system to grant or deny special security contexts declared under Pod’s or container’s Spec field. Stay tuned!
The post Introduction to Security Contexts and SCCs appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Video: OpenShift is Kubernetes

Our very own Burr Sutter has produced a video explaining how Kubernetes and OpenShift relate to one another, and why OpenShift is Kubernetes, not a fork thereof.
The post Video: OpenShift is Kubernetes appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift Container Storage 4.2 in OpenShift Container Platform 4.2.14 – UPI Installation in Red Hat Virtualization

OCS 4.2 in OCP 4.2.14 – UPI installation in RHV
When OCS 4.2 GA was released, I was thrilled to finally test and deploy it in my lab. I read the documentation and saw that only vSphere and AWS installations are currently supported. My lab is installed in an RHV environment following the UPI bare-metal documentation, so in the beginning I was a bit disappointed. Then I realized it could be an interesting challenge to find a different way to use it, and, well, I found one during my late-night fun. All the following procedures are unsupported.
Prerequisites

An OCP 4.2.x cluster installed (the current latest version is 4.2.14)
The possibility to create new local disks inside the VMs (if you are using a virtualized environment) or servers with disks that can be used

Issues
The official OCS 4.2 installation in vSphere requires a minimum of 3 nodes, each using a 2TB volume (a PVC using the default "thin" storage class) for the OSD volumes, plus 10GB for each mon pod (3 in total, always using a PVC). It also requires 16 CPUs and 64GB RAM per node.
Use case scenario

bare-metal installations
vSphere clusters:
    without a shared datastore
    where you don't want to use the vSphere dynamic provisioner
    without enough space in the datastore
    without enough RAM or CPU
other virtualized installations (for example RHV, which is the one used for this article)

Challenges

create a PVC using local disks
change the default 2TB volumes size
define a different StorageClass (without using a default one) for the mon PODs and the OSD volumes
define different limits and requests per component

Solutions

use the local storage operator
create the ocs-storagecluster resource using a YAML file instead of the new interface. That also means adding the labels to the worker nodes that are going to be used by OCS

Procedures
Add the disks to the VMs: 2 disks for each node, a 10GB disk for the mon pod and a 100GB disk for the OSD volume.

Repeat for the other 2 nodes
The disks MUST be in the same order and have the same device name in all the nodes. For example, /dev/sdb MUST be the 10GB disk and /dev/sdc the 100GB disk in all the nodes.
[root@utility ~]# for i in {1..3} ; do ssh core@worker-${i}.ocp42.ssa.mbu.labs.redhat.com lsblk | egrep "^sdb.*|sdc.*$" ; done
sdb      8:16   0   10G  0 disk
sdc      8:32   0  100G  0 disk
sdb      8:16   0   10G  0 disk
sdc      8:32   0  100G  0 disk
sdb      8:16   0   10G  0 disk
sdc      8:32   0  100G  0 disk
[root@utility ~]#

Install the Local Storage Operator. Here is the official documentation.
Create the namespace
[root@utility ~]# oc new-project local-storage

Then install the operator from the OperatorHub

Wait for the operator POD up&running
[root@utility ~]# oc get pod -n local-storage
NAME                                     READY   STATUS    RESTARTS   AGE
local-storage-operator-ccbb59b45-nn7ww   1/1     Running   0          57s
[root@utility ~]#

The Local Storage Operator works using devices as references. The LocalVolume resource scans the nodes that match the selector and creates a StorageClass for the device.
Do not use different StorageClass names for the same device.
We need the Filesystem type for these volumes. Prepare the LocalVolume YAML file to create the resource for the mon pods, which use /dev/sdb:
[root@utility ~]# cat <<EOF > local-storage-filesystem.yaml
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks-fs"
  namespace: "local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1.ocp42.ssa.mbu.labs.redhat.com
          - worker-2.ocp42.ssa.mbu.labs.redhat.com
          - worker-3.ocp42.ssa.mbu.labs.redhat.com
  storageClassDevices:
    - storageClassName: "local-sc"
      volumeMode: Filesystem
      devicePaths:
        - /dev/sdb
EOF

Then create the resource
[root@utility ~]# oc create -f local-storage-filesystem.yaml
localvolume.local.storage.openshift.io/local-disks-fs created
[root@utility ~]#

Check if all the PODs are up&running and if the StorageClass and the PVs exist
[root@utility ~]# oc get pod -n local-storage
NAME                                     READY   STATUS    RESTARTS   AGE
local-disks-fs-local-diskmaker-2bqw4     1/1     Running   0          106s
local-disks-fs-local-diskmaker-8w9rz     1/1     Running   0          106s
local-disks-fs-local-diskmaker-khhm5     1/1     Running   0          106s
local-disks-fs-local-provisioner-g5dgv   1/1     Running   0          106s
local-disks-fs-local-provisioner-hkj69   1/1     Running   0          106s
local-disks-fs-local-provisioner-vhpj8   1/1     Running   0          106s
local-storage-operator-ccbb59b45-nn7ww   1/1     Running   0          15m
[root@utility ~]# oc get sc
NAME       PROVISIONER                    AGE
local-sc   kubernetes.io/no-provisioner   109s
[root@utility ~]# oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-68faed78   10Gi       RWO            Delete           Available           local-sc                84s
local-pv-780afdd6   10Gi       RWO            Delete           Available           local-sc                83s
local-pv-b640422f   10Gi       RWO            Delete           Available           local-sc                9s
[root@utility ~]#

The PVs were created.
Prepare the LocalVolume YAML file to create the resource for the OSD volumes which use /dev/sdc
We need the Block type for these volumes.
[root@utility ~]# cat <<EOF > local-storage-block.yaml
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1.ocp42.ssa.mbu.labs.redhat.com
          - worker-2.ocp42.ssa.mbu.labs.redhat.com
          - worker-3.ocp42.ssa.mbu.labs.redhat.com
  storageClassDevices:
    - storageClassName: "localblock-sc"
      volumeMode: Block
      devicePaths:
        - /dev/sdc
EOF

Then create the resource
[root@utility ~]# oc create -f local-storage-block.yaml
localvolume.local.storage.openshift.io/local-disks created
[root@utility ~]#

Check if all the PODs are up&running and if the StorageClass and the PVs exist
[root@utility ~]# oc get pod -n local-storage
NAME                                     READY   STATUS    RESTARTS   AGE
local-disks-fs-local-diskmaker-2bqw4     1/1     Running   0          6m33s
local-disks-fs-local-diskmaker-8w9rz     1/1     Running   0          6m33s
local-disks-fs-local-diskmaker-khhm5     1/1     Running   0          6m33s
local-disks-fs-local-provisioner-g5dgv   1/1     Running   0          6m33s
local-disks-fs-local-provisioner-hkj69   1/1     Running   0          6m33s
local-disks-fs-local-provisioner-vhpj8   1/1     Running   0          6m33s
local-disks-local-diskmaker-6qpfx        1/1     Running   0          22s
local-disks-local-diskmaker-pw5ql        1/1     Running   0          22s
local-disks-local-diskmaker-rc5hr        1/1     Running   0          22s
local-disks-local-provisioner-9qprp      1/1     Running   0          22s
local-disks-local-provisioner-kkkcm      1/1     Running   0          22s
local-disks-local-provisioner-kxbnn      1/1     Running   0          22s
local-storage-operator-ccbb59b45-nn7ww   1/1     Running   0          19m
[root@utility ~]# oc get sc
NAME            PROVISIONER                    AGE
local-sc        kubernetes.io/no-provisioner   6m36s
localblock-sc   kubernetes.io/no-provisioner   25s
[root@utility ~]# oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
local-pv-5c4e718c   100Gi      RWO            Delete           Available           localblock-sc            10s
local-pv-68faed78   10Gi       RWO            Delete           Available           local-sc                 6m13s
local-pv-6a58375e   100Gi      RWO            Delete           Available           localblock-sc            10s
local-pv-780afdd6   10Gi       RWO            Delete           Available           local-sc                 6m12s
local-pv-b640422f   10Gi       RWO            Delete           Available           local-sc                 4m58s
local-pv-d6db37fd   100Gi      RWO            Delete           Available           localblock-sc            5s
[root@utility ~]#

All the PVs were created.
Install OCS 4.2. Here is the official documentation.
Create the namespace "openshift-storage"
[root@utility ~]# cat <<EOF > ocs-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    openshift.io/cluster-monitoring: "true"
EOF
[root@utility ~]# oc create -f ocs-namespace.yaml
namespace/openshift-storage created
[root@utility ~]#

Add the labels to the workers
oc label node worker-1.ocp42.ssa.mbu.labs.redhat.com "cluster.ocs.openshift.io/openshift-storage=" --overwrite
oc label node worker-1.ocp42.ssa.mbu.labs.redhat.com "topology.rook.io/rack=rack0" --overwrite
oc label node worker-2.ocp42.ssa.mbu.labs.redhat.com "cluster.ocs.openshift.io/openshift-storage=" --overwrite
oc label node worker-2.ocp42.ssa.mbu.labs.redhat.com "topology.rook.io/rack=rack1" --overwrite
oc label node worker-3.ocp42.ssa.mbu.labs.redhat.com "cluster.ocs.openshift.io/openshift-storage=" --overwrite
oc label node worker-3.ocp42.ssa.mbu.labs.redhat.com "topology.rook.io/rack=rack3" --overwrite

Install the operator from the web interface

Check on the web interface if the operator is Up to date

And wait for the PODs up&running
[root@utility ~]# oc get pod -n openshift-storage
NAME                                  READY   STATUS    RESTARTS   AGE
noobaa-operator-85d86479fc-n8vp5      1/1     Running   0          106s
ocs-operator-65cf57b98b-rk48c         1/1     Running   0          106s
rook-ceph-operator-59d78cf8bd-4zcsz   1/1     Running   0          106s
[root@utility ~]#

Create the OCS Cluster Service YAML file
[root@utility ~]# cat <<EOF > ocs-cluster-service.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: 'local-sc'
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: 'localblock-sc'
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources: {}
EOF

Note the "monPVCTemplate" section, which defines the StorageClass "local-sc", and the "storageDeviceSets" section, which defines the storage sizes and the StorageClass "localblock-sc" used by the OSD volumes.
Now we can create the resource
[root@utility ~]# oc create -f ocs-cluster-service.yaml
storagecluster.ocs.openshift.io/ocs-storagecluster created
[root@utility ~]#
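While the operator creates the resources, you can follow progress with a simple watch:
watch "oc get pvc,pods -n openshift-storage"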

While the resources are being created, we can see the new PVCs bound to the Local Storage PVs:
[root@utility ~]# oc get pvc -n openshift-storage
NAME              STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rook-ceph-mon-a   Bound    local-pv-68faed78   10Gi       RWO            local-sc       13s
rook-ceph-mon-b   Bound    local-pv-b640422f   10Gi       RWO            local-sc       8s
rook-ceph-mon-c   Bound    local-pv-780afdd6   10Gi       RWO            local-sc       3s
[root@utility ~]# oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                               STORAGECLASS    REASON   AGE
local-pv-5c4e718c   100Gi      RWO            Delete           Available                                       localblock-sc            28m
local-pv-68faed78   10Gi       RWO            Delete           Bound       openshift-storage/rook-ceph-mon-a   local-sc                 34m
local-pv-6a58375e   100Gi      RWO            Delete           Available                                       localblock-sc            28m
local-pv-780afdd6   10Gi       RWO            Delete           Bound       openshift-storage/rook-ceph-mon-c   local-sc                 34m
local-pv-b640422f   10Gi       RWO            Delete           Bound       openshift-storage/rook-ceph-mon-b   local-sc                 33m
local-pv-d6db37fd   100Gi      RWO            Delete           Available                                       localblock-sc            28m
[root@utility ~]#

And now we can see the OSD PVCs and the PVs they are bound to:
[root@utility ~]# oc get pvc -n openshift-storage
NAME                      STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS    AGE
ocs-deviceset-0-0-7j2kj   Bound    local-pv-6a58375e   100Gi      RWO            localblock-sc   3s
ocs-deviceset-1-0-lmd97   Bound    local-pv-d6db37fd   100Gi      RWO            localblock-sc   3s
ocs-deviceset-2-0-dnfbd   Bound    local-pv-5c4e718c   100Gi      RWO            localblock-sc   3s
[root@utility ~]# oc get pv | grep localblock-sc
local-pv-5c4e718c                          100Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-2-0-dnfbd   localblock-sc                          31m
local-pv-6a58375e                          100Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-0-0-7j2kj   localblock-sc                          31m
local-pv-d6db37fd                          100Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-1-0-lmd97   localblock-sc                          31m
[root@utility ~]#

This is the first PVC created inside the OCS cluster, used by NooBaa:
[root@utility ~]# oc get pvc -n openshift-storage
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
db-noobaa-core-0          Bound    pvc-d8dbb86f-3d83-11ea-ac51-001a4a16017d   50Gi       RWO            ocs-storagecluster-ceph-rbd   72s

Wait for all the pods to be up and running:
[root@utility ~]# oc get pod -n openshift-storage
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-2qkl8                                            3/3     Running     0          5m31s
csi-cephfsplugin-4pbvl                                            3/3     Running     0          5m31s
csi-cephfsplugin-j8w82                                            3/3     Running     0          5m31s
csi-cephfsplugin-provisioner-647cd6996c-6mw9t                     4/4     Running     0          5m31s
csi-cephfsplugin-provisioner-647cd6996c-pbrxs                     4/4     Running     0          5m31s
csi-rbdplugin-9nj85                                               3/3     Running     0          5m31s
csi-rbdplugin-jmnqz                                               3/3     Running     0          5m31s
csi-rbdplugin-provisioner-6b8ff67dc4-jk5lm                        4/4     Running     0          5m31s
csi-rbdplugin-provisioner-6b8ff67dc4-rxjhq                        4/4     Running     0          5m31s
csi-rbdplugin-vrzjq                                               3/3     Running     0          5m31s
noobaa-core-0                                                     1/2     Running     0          2m34s
noobaa-operator-85d86479fc-n8vp5                                  1/1     Running     0          13m
ocs-operator-65cf57b98b-rk48c                                     0/1     Running     0          13m
rook-ceph-drain-canary-worker-1.ocp42.ssa.mbu.labs.redhat.w2cqv   1/1     Running     0          2m41s
rook-ceph-drain-canary-worker-2.ocp42.ssa.mbu.labs.redhat.whv6s   1/1     Running     0          2m40s
rook-ceph-drain-canary-worker-3.ocp42.ssa.mbu.labs.redhat.ll8gj   1/1     Running     0          2m40s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-d7d64976d8cm7   1/1     Running     0          2m28s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-864fdf78ppnpm   1/1     Running     0          2m27s
rook-ceph-mgr-a-5fd6f7578c-wbsb6                                  1/1     Running     0          3m24s
rook-ceph-mon-a-bffc546c8-vjrfb                                   1/1     Running     0          4m26s
rook-ceph-mon-b-8499dd679c-6pzm9                                  1/1     Running     0          4m11s
rook-ceph-mon-c-77cd5dd54-64z52                                   1/1     Running     0          3m46s
rook-ceph-operator-59d78cf8bd-4zcsz                               1/1     Running     0          13m
rook-ceph-osd-0-b46fbc7d7-hc2wz                                   1/1     Running     0          2m41s
rook-ceph-osd-1-648c5dc8d6-prwks                                  1/1     Running     0          2m40s
rook-ceph-osd-2-546d4d77fb-qb68j                                  1/1     Running     0          2m40s
rook-ceph-osd-prepare-ocs-deviceset-0-0-7j2kj-s72g4               0/1     Completed   0          2m56s
rook-ceph-osd-prepare-ocs-deviceset-1-0-lmd97-27chl               0/1     Completed   0          2m56s
rook-ceph-osd-prepare-ocs-deviceset-2-0-dnfbd-s7z8v               0/1     Completed   0          2m56s
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-d7b4b5b6hnpr   1/1     Running     0          2m12s

Our installation is now complete and OCS is fully operational.
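If you want to verify Ceph health directly, the Rook toolbox can be enabled by patching the OCSInitialization resource; this is the commonly documented approach, so treat it as a sketch and confirm it is appropriate for your environment:
oc patch OCSInitialization ocsinit -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'
oc rsh -n openshift-storage $(oc get pod -n openshift-storage -l app=rook-ceph-tools -o name) ceph status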
Now we can browse the NooBaa management console (for now it only works in Chrome) and create a new user to test the S3 object storage.
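The console URL can be read from its route (assuming the default route name noobaa-mgmt):
oc get route noobaa-mgmt -o jsonpath='{.spec.host}' -n openshift-storage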

Get the endpoint for the S3 object server
[root@utility ~]# oc get route s3 -o jsonpath='{.spec.host}' -n openshift-storage
s3-openshift-storage.apps.ocp42.ssa.mbu.labs.redhat.com
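An S3 client also needs credentials; with NooBaa these are typically stored in the noobaa-admin secret (the secret name is an assumption worth verifying on your cluster):
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d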

Test it with your preferred S3 client (I use Cyberduck on the Windows desktop I'm using to write this article)

Create something to verify that you can write

It works!
Set the ocs-storagecluster-cephfs StorageClass as the default one
[root@utility ~]# oc patch storageclass ocs-storagecluster-cephfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/ocs-storagecluster-cephfs patched
[root@utility ~]#
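If another StorageClass was already the default, you would also clear its annotation; for example, assuming a class named thin:
oc patch storageclass thin -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'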

Test the ocs-storagecluster-cephfs StorageClass by adding persistent storage to the registry
[root@utility ~]# oc edit configs.imageregistry.operator.openshift.io
storage:
  pvc:
    claim:
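Alternatively, the same change can be applied with a single patch; leaving claim empty should let the operator create the PVC itself:
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'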

Check the PVC that was created and wait for the new pod to be up and running:
[root@utility ~]# oc get pvc -n openshift-image-registry
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
image-registry-storage   Bound    pvc-ba4a07c1-3d86-11ea-ad40-001a4a1601e7   100Gi      RWX            ocs-storagecluster-cephfs   12s
[root@utility ~]# oc get pod -n openshift-image-registry
NAME                                               READY   STATUS    RESTARTS   AGE
cluster-image-registry-operator-655fb7779f-pn7ms   2/2     Running   0          36h
image-registry-5bdf96556-98jbk                     1/1     Running   0          105s
node-ca-9gbxg                                      1/1     Running   1          35h
node-ca-fzcrm                                      1/1     Running   0          35h
node-ca-gr928                                      1/1     Running   1          35h
node-ca-jkfzf                                      1/1     Running   1          35h
node-ca-knlcj                                      1/1     Running   0          35h
node-ca-mb6zh                                      1/1     Running   0          35h
[root@utility ~]#

Test it in a new project named test:
[root@utility ~]# oc new-project test
Now using project "test" on server "https://api.ocp42.ssa.mbu.labs.redhat.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app django-psql-example

to build a new example application in Python. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

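The $REGISTRY_URL variable used below has to point at the registry's external route; assuming the default route is enabled (the first command exposes it if needed), it can be set roughly like this:
oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}'
REGISTRY_URL=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')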
[root@utility ~]# podman pull alpine
Trying to pull docker.io/library/alpine...
Getting image source signatures
Copying blob c9b1b535fdd9 done
Copying config e7d92cdc71 done
Writing manifest to image destination
Storing signatures
e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a
[root@utility ~]# podman login -u $(oc whoami) -p $(oc whoami -t) $REGISTRY_URL --tls-verify=false
Login Succeeded!
[root@utility ~]# podman tag alpine $REGISTRY_URL/test/alpine
[root@utility ~]# podman push $REGISTRY_URL/test/alpine --tls-verify=false
Getting image source signatures
Copying blob 5216338b40a7 done
Copying config e7d92cdc71 done
Writing manifest to image destination
Storing signatures
[root@utility ~]# oc get is -n test
NAME     IMAGE REPOSITORY                                                                        TAGS     UPDATED
alpine   default-route-openshift-image-registry.apps.ocp42.ssa.mbu.labs.redhat.com/test/alpine   latest   3 minutes ago
[root@utility ~]#

The registry works!
Other Scenario
If your cluster is deployed on vSphere, uses the default "thin" StorageClass, and your datastore is big enough, you can skip the local storage setup and start directly from the OCS installation.
When creating the OCS Cluster Service, create a YAML file with your desired sizes and without a storageClassName (the default one will be used).
You can also remove the "monPVCTemplate" section if you are not interested in changing the storage size.
[root@utility ~]# cat <<EOF > ocs-cluster-service.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ''
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: ''
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources: {}
EOF

Limits and Requests
By default, limits and requests are set like this:
[root@utility ~]# oc describe node worker-1.ocp42.ssa.mbu.labs.redhat.com

Namespace          Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------          ----                              ------------  ----------  ---------------  -------------  ---
openshift-storage  noobaa-core-0                     4 (25%)       4 (25%)     8Gi (12%)        8Gi (12%)      13m
openshift-storage  rook-ceph-mgr-a-676d4b4796-54mtk  1 (6%)        1 (6%)      3Gi (4%)         3Gi (4%)       12m
openshift-storage  rook-ceph-mon-b-7d7747d8b4-k9txg  1 (6%)        1 (6%)      2Gi (3%)         2Gi (3%)       13m
openshift-storage  rook-ceph-osd-1-854847fd4c-482bt  1 (6%)        2 (12%)     4Gi (6%)         8Gi (12%)      12m

We can create a new YAML file to change those settings in the ocs-storagecluster StorageCluster resource:
[root@utility ~]# cat <<EOF > ocs-cluster-service-modified.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  resources:
    mon:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    mgr:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-core:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-db:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: 'local-sc'
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: 'localblock-sc'
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources:
      limits:
        cpu: 1
        memory: 4Gi
      requests:
        cpu: 1
        memory: 4Gi
EOF

And apply it:
[root@utility ~]# oc apply -f ocs-cluster-service-modified.yaml
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
storagecluster.ocs.openshift.io/ocs-storagecluster configured

We have to wait for the operator to read the new configuration and apply it:
[root@utility ~]# oc describe node worker-1.ocp42.ssa.mbu.labs.redhat.com

Namespace          Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------          ----                              ------------  ----------  ---------------  -------------  ---
openshift-storage  noobaa-core-0                     2 (12%)       2 (12%)     2Gi (3%)         2Gi (3%)       23s
openshift-storage  rook-ceph-mgr-a-54f87f84fb-pm4rn  1 (6%)        1 (6%)      1Gi (1%)         1Gi (1%)       56s
openshift-storage  rook-ceph-mon-b-854f549cd4-bgdb6  1 (6%)        1 (6%)      1Gi (1%)         1Gi (1%)       46s
openshift-storage  rook-ceph-osd-1-ff56d545c-p7hvn   1 (6%)        1 (6%)      4Gi (6%)         4Gi (6%)       50s

And now our pods are running with the new configuration applied.
Note that the OSD pods won't start if you choose values that are too low.
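To inspect the applied values directly, a jsonpath query along these lines also works (it prints the limits of each pod's first container only):
oc get pods -n openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources.limits}{"\n"}{end}'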
These are the sections available under spec.resources:

mon for rook-ceph-mon
mgr for rook-ceph-mgr
noobaa-core and noobaa-db for the two containers in the noobaa-core-0 pod
mds for rook-ceph-mds-ocs-storagecluster-cephfilesystem
rgw for rook-ceph-rgw-ocs-storagecluster-cephobjectstore
the resources section at the end (inside storageDeviceSets) for rook-ceph-osd

The rgw and mds sections only take effect the first time the resource is created:

spec:
  resources:
    mds:
      limits:
        cpu: 2
        memory: 4Gi
      requests:
        cpu: 2
        memory: 4Gi
    rgw:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi

Conclusions
Now you can enjoy your brand-new OCS 4.2 on OCP 4.2.x.
Much has changed since OCS 3.x: for example, OCS now consumes PVCs instead of using the attached disks directly, and for now there are a number of limitations for sustainability and supportability reasons.
We look forward to a fully supported installation for these scenarios.
UPDATES

The cluster used to write this article has been updated from 4.2.14 to 4.2.16 and then from 4.2.16 to 4.3.0.

The current OCS setup is still working.

Added Requests and Limits configurations.

The post Open Container Storage 4.2 in Open Container Platform 4.2.14 – UPI Installation in Red Hat Virtualization appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

The Block Editor is Now Supported on the WordPress Native Apps

Part of what helps WordPress power 35% of the web is language: WordPress is fully translated into 68 languages. Pair that with the WordPress native apps, which make WordPress available across devices, and you have a globally accessible tool.

Today we’re announcing app updates that bring the new Block editor to mobile devices, so on-the-go publishing is even easier for that 35%.

At Automattic, we speak 88 different languages, so we thought: why not use some of them to tell you about the editor updates? Instead of a few screenshots and bullet points, here are some of the people who build the editor and apps sharing their favorite tools and tricks for the mobile Block editor. To make it more accessible, we’ve also included English translations. 

(And for those who want more detail — yes, there are still screenshots and bullet points!)

Rafael, Brazilian Portuguese

Com o novo editor, a criação de conteúdo é mais intuitiva por que as opções de formatação de texto e inserção de arquivos são exibidas de uma forma bem simples.

Toque no ícone ⊕ enquanto estiver editando um post ou página para ver os blocos disponíveis como Parágrafo, Título, Imagem, Vídeo, Lista, Galeria, Mídia e texto, Espaçador e muitos outros.

Translation

With the new editor, creating content is more intuitive because the options to format text and add media are displayed in a simple way. Tap on the ⊕ icon when editing a post or page to see all the available blocks like Paragraph, Heading, Image, Video, List, Gallery, Media & Text, Spacer and more.

Anitaa, Tamil

பயணங்களில் மிகவும் விருப்பமுள்ள எனக்கு, பயண குறிப்புகளை பயண நேரத்திலேயே எழுதுவது வழக்கம். இந்தப் புதிய கைபேசி செயலி என் வேலையே மிகவும் எளிதாக்குகிறது. எனக்குப் பிடித்த சில அம்சவ்கள்:  

கி போர்ட்டில் உள்ள நேக்ஸ்ட் பொத்தானை அழுத்துவதன் மூலமே புதிய பத்தியை தொடங்க முடிவது.
பட்டியல் தொகுதியைப் பயன்படுத்தி எனது சொந்த பட்டியலை உருவாக்க முடியும்.

பட்டியலின் உள்ளெ பட்டியலை சரிபார்க்கும், அல்லது, துணை பட்டியலை உள்ளடக்கும் பட்டியல் பத்தியை ஆவலுடன் எதிர்பார்க்கிறேன். எனவே அடுத்த புதுப்பிப்பைப் பற்றி நான் மகிழ்ச்சியடைகிறேன்.

Translation

I love travelling and I spend a lot of time on my blog writing travel tips while on the go. My favorite features in the Block editor include:

Creating a new paragraph block by pressing the RETURN button on the keypad.
Adding a List block to create my own lists. You can even add sub-lists!

I look forward to seeing what’s coming next!

Mario, Spanish

Cuando escribo, doy mil vueltas sobre qué palabras utilizar y me cuesta decidirme. Uso mi móvil porque me da la posibilidad de capturar mis ideas justo en el momento que se me ocurren. Es por eso que de las cosas que más me gustan del Editor es que puedo moverme de un bloque de texto a otro con facilidad y también cambiarlos de lugar. Además, se puede hacer/deshacer muy fácilmente, y siempre se mantiene el historial de edición lo que me da mayor seguridad a la hora de cambiar incluso sólo pequeñas partes del contenido que voy escribiendo.

Translation

When I write, I walk around in circles and can never decide which words to use. So I use my mobile phone, which lets me capture ideas right when they occur to me. That’s why the things I appreciate in the new Editor are the abilities to move from block to block with ease and to change their order and since you can undo/redo quite easily and can see your editing history, I have confidence when I change even small bits of the post I’m writing.

Jaclyn, Chinese

用過 Gutenberg 古騰堡後網誌效率高很多!因為寫旅行文章,很多時候是在旅途中或是平日空擋等候時間紀錄和寫下想法,行動 app 讓我隨時隨地都可以編輯文章。行動古騰堡簡化了移動文章段落重新排序的步驟,讓文章的架構變得很清楚,也更容易管理。

Translation

The new block editor truly makes a difference in my blogging efficiency and experience. Since my blog is about traveling, I often scribble notes and thoughts during my trips. The block editor on mobile simplifies the process of moving paragraphs around and organizing content, so the architecture of the post becomes clearer and easier to reorganize.

To start using the block editor on your app, make sure to update to the latest version, and then opt in to using it! To opt in, navigate to My Site → Settings and toggle on Use Block Editor.

We hope you give the latest release a try; tell us about your favorite part of the mobile block editor once you’ve had a chance to try it.

We’d also love to know your thoughts on the general writing flow and on some of the newer blocks like video, list, and quote blocks. For specific feedback, you can reach out to us from within the app by going to Me → Help and Support, then selecting Contact Us.
Quelle: RedHat Stack