Migration Paths for RDO From CentOS 7 to 8

At the last CentOS Dojo, we were asked whether RDO would provide python 3 packages for OpenStack Ussuri on CentOS 7, and whether that would be "possible" as a way of helping with the upgrade path from Train to Ussuri. As "possible" is a vague term and the answer deserves more explanation than a simple yes or no, I've collected my thoughts in this post as a way to start a discussion within the RDO community.

Yes, upgrades are hard
We all know that upgrading a production OpenStack cloud is complex and depends strongly on each specific layout, deployment tooling (different deployment tools may or may not support OpenStack upgrades) and processes. In addition, upgrading from CentOS 7 to 8 requires an OS redeploy, which adds operational complexity to the migration. We are committed to helping RDO community users migrate their clouds to new versions of OpenStack and/or the operating system in different ways:

Providing RDO Train packages on CentOS 8. This allows users to choose between a one-step upgrade from CentOS7/Train -> CentOS8/Ussuri or splitting it into two steps: CentOS7/Train -> CentOS8/Train -> CentOS8/Ussuri.
RDO maintains OpenStack packages during the whole upstream maintenance cycle; for the Train release, this means until April 2021, giving operators time to plan and execute their migration paths.

Also, the Rolling Upgrades features provided by OpenStack allow operators to temporarily keep the agents on compute nodes running Train after the controllers have been upgraded to Ussuri, using Upgrade Levels in Nova or the built-in backwards-compatibility features in Neutron and other services.
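As a minimal illustration of the Nova part (not an RDO-specific recommendation; check the Nova documentation for the exact supported values), the RPC pin on the upgraded controllers would look roughly like this:

# /etc/nova/nova.conf on the Ussuri controllers -- illustrative sketch
[upgrade_levels]
# Cap compute RPC messages at the Train level so Train compute agents still
# understand them; remove the pin once every compute node runs Ussuri.
compute = train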

What "supporting an OpenStack release on a CentOS version" means in RDO
Before discussing the limitations and challenges of supporting RDO Ussuri on CentOS 7.7 using python 3, I'll describe what supporting a new RDO release means:

Build

Before we can start building OpenStack packages, we need all the dependencies required to build or run the OpenStack services. We use the libraries from the CentOS base repos as much as we can and avoid rebasing or forking CentOS base packages unless it's strongly justified.
OpenStack packages are built using DLRN for the RDO Trunk repos or in CBS, using jobs running in the post pipeline of review.rdoproject.org.
RDO also consumes packages from other CentOS SIGs, such as Ceph from the Storage SIG, KVM from the Virtualization SIG, or collectd from OpsTools.

Validate

We run CI jobs periodically to validate the packages provided in the repos. These jobs are executed using the Zuul instance in SoftwareFactory project or Jenkins in CentOS CI infra and deploy different configurations of OpenStack using Packstack, puppet-openstack-integration and TripleO.
Also, some upstream projects include CI jobs on CentOS using the RDO packages to gate every change.

Publish

RDO Trunk packages are published in https://trunk.rdoproject.org and validated repositories are moved to promoted links.
RDO CloudSIG packages are published in official CentOS mirrors after they are validated by CI jobs.

Challenges to provide python 3 packages for RDO Ussuri in CentOS 7
Build

While CentOS 7 includes a fairly wide set of python 2 modules (150+) in addition to the interpreter, the python 3 stack included in CentOS 7.7 is just the python interpreter and ~5 python modules. All the missing ones would need to be bootstrapped for python 3.
Some python bindings are provided as part of other builds, e.g. python-rbd and python-rados are part of Ceph in the Storage SIG, python-libguestfs is part of libguestfs in the base repo, etc. RDO doesn't own those packages, so commitment from their owners would be needed, or RDO would need to take ownership of them for this specific release (which means maintaining them until Train EOL).
Current specs in Ussuri tie the python version to the CentOS version. We'd need to figure out a way to switch the python version on CentOS 7 via tooling configuration and macros (see the sketch after this list).
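For illustration only, a macro-based switch in a spec file could look roughly like the following, assuming the EPEL-style %python3_pkgversion and %py3_* macros were made available on CentOS 7; this is a sketch, not the actual RDO tooling:

# Illustrative spec-file fragment -- assumes a python36 stack and its macros exist on CentOS 7
%if 0%{?rhel} == 7
%global python3_pkgversion 36
%endif

BuildRequires: python%{python3_pkgversion}-devel
BuildRequires: python%{python3_pkgversion}-setuptools

%build
%py3_build

%install
%py3_install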

Validate

In order to validate the python 3 builds of Ussuri on CentOS 7, the deployment tools (puppet-openstack, Packstack, Kolla and TripleO) would need upstream fixes to install python 3 packages instead of python 2 on CentOS 7. Ideally, new CI jobs should be added with this configuration to gate changes in those repositories. This would require support from the upstream communities.

Conclusion

Alternatives exist to help operators on the migration path from Train on CentOS 7 to Ussuri on CentOS 8 while avoiding a massive full cloud reboot.
Doing a fully supported RDO release of Ussuri on CentOS 7 would require a big effort in RDO and other projects that can't be covered with existing resources:

It would require a full bootstrap of the python 3 dependencies that are currently pulled from the CentOS base repositories as python 2 packages.
Other SIGs would need to provide python3 packages or, alternatively, RDO would need to maintain them for this specific release.
In order to validate the release, upstream deployment projects would need to support this new python 3 release on CentOS 7.

There may be room for intermediate solutions limited to a reduced set of packages that would help during the transition period. We'd need to hear details from interested community members about what is actually needed and what the desired migration workflow is. We will be happy to onboard new community members interested in contributing to this effort.

We are open to listening to and discussing other options that may help users: reach out and let us know how we can help.

Quelle: RDO

OCS 4.2 in OCP 4.2.14 – UPI installation in RHV

When OCS 4.2 GA was released last month, I was thrilled to finally test and deploy it in my lab. I read the documentation and saw that only vSphere and AWS installations were currently supported. My lab is installed in an RHV environment following the UPI Bare Metal documentation so, in the beginning, I was a bit disappointed. I realized that it could be an interesting challenge to find a different way to use it and, well, I found it while hacking away for some late night fun. All the following procedures are unsupported.
Prerequisites

An OCP 4.2.x cluster installed (the current latest version is 4.2.14)
The possibility to create new local disks inside the VMs (if you are using a virtualized environment) or servers with disks that can be used

Issues
The official OCS 4.2 installation on vSphere requires a minimum of 3 nodes, each using a 2TB volume (a PVC using the default "thin" storage class) for the OSD volumes, plus 10GB for each mon POD (3 in total, always using a PVC). It also requires 16 CPUs and 64GB of RAM per node.
Use case scenario

bare-metal installations
vSphere cluster

without a shared datastore
you don’t want to use the vSphere dynamic provisioner
without enough space in the datastore
without enough RAM or CPU

other virtualized installations (for example RHV, which is the one used for this article)

Challenges

create a PVC using local disks
change the default 2TB volume size
define a different StorageClass (without using a default one) for the mon PODs and the OSD volumes
define different limits and requests per component

Solutions

use the local storage operator
create the ocs-storagecluster resource using a YAML file instead of the new interface. That also means adding the labels to the worker nodes that are going to be used by OCS

Procedures
Add the disks to the VMs: 2 disks for each node, a 10GB disk for the mon POD and a 100GB disk for the OSD volume.

Repeat for the other 2 nodes
The disks MUST be in the same order and have the same device name in all the nodes. For example, /dev/sdb MUST be the 10GB disk and /dev/sdc the 100GB disk in all the nodes.
[root@utility ~]# for i in {1..3} ; do ssh core@worker-${i}.ocp42.ssa.mbu.labs.redhat.com lsblk | egrep "^sdb.*|sdc.*$" ; done
sdb 8:16 0 10G 0 disk
sdc 8:32 0 100G 0 disk
sdb 8:16 0 10G 0 disk
sdc 8:32 0 100G 0 disk
sdb 8:16 0 10G 0 disk
sdc 8:32 0 100G 0 disk
[root@utility ~]#

Install the Local Storage Operator (see the official documentation).
Create the namespace
[root@utility ~]# oc new-project local-storage

Then install the operator from the OperatorHub

Wait for the operator POD to be up and running
[root@utility ~]# oc get pod -n local-storage
NAME READY STATUS RESTARTS AGE
local-storage-operator-ccbb59b45-nn7ww 1/1 Running 0 57s
[root@utility ~]#

The Local Storage Operator works by using devices as references. The LocalVolume resource scans the nodes that match the selector and creates a StorageClass for the device.
Do not use different StorageClass names for the same device.
We need the Filesystem type for these volumes. Prepare the LocalVolume YAML file to create the resource for the mon PODs, which use /dev/sdb:
[root@utility ~]# cat <<EOF > local-storage-filesystem.yaml
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks-fs"
  namespace: "local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-1.ocp42.ssa.mbu.labs.redhat.com
        - worker-2.ocp42.ssa.mbu.labs.redhat.com
        - worker-3.ocp42.ssa.mbu.labs.redhat.com
  storageClassDevices:
  - storageClassName: "local-sc"
    volumeMode: Filesystem
    devicePaths:
    - /dev/sdb
EOF

Then create the resource
[root@utility ~]# oc create -f local-storage-filesystem.yaml
localvolume.local.storage.openshift.io/local-disks-fs created
[root@utility ~]#

Check that all the PODs are up and running and that the StorageClass and the PVs exist
[root@utility ~]# oc get pod -n local-storage
NAME READY STATUS RESTARTS AGE
local-disks-fs-local-diskmaker-2bqw4 1/1 Running 0 106s
local-disks-fs-local-diskmaker-8w9rz 1/1 Running 0 106s
local-disks-fs-local-diskmaker-khhm5 1/1 Running 0 106s
local-disks-fs-local-provisioner-g5dgv 1/1 Running 0 106s
local-disks-fs-local-provisioner-hkj69 1/1 Running 0 106s
local-disks-fs-local-provisioner-vhpj8 1/1 Running 0 106s
local-storage-operator-ccbb59b45-nn7ww 1/1 Running 0 15m
[root@utility ~]# oc get sc
NAME PROVISIONER AGE
local-sc kubernetes.io/no-provisioner 109s
[root@utility ~]# oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-68faed78 10Gi RWO Delete Available local-sc 84s
local-pv-780afdd6 10Gi RWO Delete Available local-sc 83s
local-pv-b640422f 10Gi RWO Delete Available local-sc 9s
[root@utility ~]#

The PVs were created.
Now prepare the LocalVolume YAML file to create the resource for the OSD volumes, which use /dev/sdc. We need the Block type for these volumes:
[root@utility ~]# cat <<EOF > local-storage-block.yaml
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-1.ocp42.ssa.mbu.labs.redhat.com
        - worker-2.ocp42.ssa.mbu.labs.redhat.com
        - worker-3.ocp42.ssa.mbu.labs.redhat.com
  storageClassDevices:
  - storageClassName: "localblock-sc"
    volumeMode: Block
    devicePaths:
    - /dev/sdc
EOF

Then create the resource
[root@utility ~]# oc create -f local-storage-block.yaml
localvolume.local.storage.openshift.io/local-disks created
[root@utility ~]#

Check that all the PODs are up and running and that the StorageClass and the PVs exist
[root@utility ~]# oc get pod -n local-storage
NAME READY STATUS RESTARTS AGE
local-disks-fs-local-diskmaker-2bqw4 1/1 Running 0 6m33s
local-disks-fs-local-diskmaker-8w9rz 1/1 Running 0 6m33s
local-disks-fs-local-diskmaker-khhm5 1/1 Running 0 6m33s
local-disks-fs-local-provisioner-g5dgv 1/1 Running 0 6m33s
local-disks-fs-local-provisioner-hkj69 1/1 Running 0 6m33s
local-disks-fs-local-provisioner-vhpj8 1/1 Running 0 6m33s
local-disks-local-diskmaker-6qpfx 1/1 Running 0 22s
local-disks-local-diskmaker-pw5ql 1/1 Running 0 22s
local-disks-local-diskmaker-rc5hr 1/1 Running 0 22s
local-disks-local-provisioner-9qprp 1/1 Running 0 22s
local-disks-local-provisioner-kkkcm 1/1 Running 0 22s
local-disks-local-provisioner-kxbnn 1/1 Running 0 22s
local-storage-operator-ccbb59b45-nn7ww 1/1 Running 0 19m
[root@utility ~]# oc get sc
NAME PROVISIONER AGE
local-sc kubernetes.io/no-provisioner 6m36s
localblock-sc kubernetes.io/no-provisioner 25s
[root@utility ~]# oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-5c4e718c 100Gi RWO Delete Available localblock-sc 10s
local-pv-68faed78 10Gi RWO Delete Available local-sc 6m13s
local-pv-6a58375e 100Gi RWO Delete Available localblock-sc 10s
local-pv-780afdd6 10Gi RWO Delete Available local-sc 6m12s
local-pv-b640422f 10Gi RWO Delete Available local-sc 4m58s
local-pv-d6db37fd 100Gi RWO Delete Available localblock-sc 5s
[root@utility ~]#

All the PVs were created.
Install OCS 4.2 (see the official documentation).
Create the namespace "openshift-storage"
[root@utility ~]# cat <<EOF > ocs-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    openshift.io/cluster-monitoring: "true"
EOF
[root@utility ~]# oc create -f ocs-namespace.yaml
namespace/openshift-storage created
[root@utility ~]#

Add the labels to the workers
oc label node worker-1.ocp42.ssa.mbu.labs.redhat.com "cluster.ocs.openshift.io/openshift-storage=" --overwrite
oc label node worker-1.ocp42.ssa.mbu.labs.redhat.com "topology.rook.io/rack=rack0" --overwrite
oc label node worker-2.ocp42.ssa.mbu.labs.redhat.com "cluster.ocs.openshift.io/openshift-storage=" --overwrite
oc label node worker-2.ocp42.ssa.mbu.labs.redhat.com "topology.rook.io/rack=rack1" --overwrite
oc label node worker-3.ocp42.ssa.mbu.labs.redhat.com "cluster.ocs.openshift.io/openshift-storage=" --overwrite
oc label node worker-3.ocp42.ssa.mbu.labs.redhat.com "topology.rook.io/rack=rack3" --overwrite

Install the operator from the web interface

Check on the web interface if the operator is Up to date

And wait for the PODs to be up and running
[root@utility ~]# oc get pod -n openshift-storage
NAME READY STATUS RESTARTS AGE
noobaa-operator-85d86479fc-n8vp5 1/1 Running 0 106s
ocs-operator-65cf57b98b-rk48c 1/1 Running 0 106s
rook-ceph-operator-59d78cf8bd-4zcsz 1/1 Running 0 106s
[root@utility ~]#

Create the OCS Cluster Service YAML file
[root@utility ~]# cat <<EOF > ocs-cluster-service.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: 'local-sc'
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: 'localblock-sc'
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources: {}
EOF

Notice the monPVCTemplate section, where we define the StorageClass local-sc, and the storageDeviceSets section, which sets the different storage sizes and the localblock-sc StorageClass used by the OSD volumes.
Now we can create the resource
[root@utility ~]# oc create -f ocs-cluster-service.yaml
storagecluster.ocs.openshift.io/ocs-storagecluster created
[root@utility ~]#

During the creation of the resources, we can see how the newly created PVCs are bound to the Local Storage PVs
[root@utility ~]# oc get pvc -n openshift-storage
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
rook-ceph-mon-a Bound local-pv-68faed78 10Gi RWO local-sc 13s
rook-ceph-mon-b Bound local-pv-b640422f 10Gi RWO local-sc 8s
rook-ceph-mon-c Bound local-pv-780afdd6 10Gi RWO local-sc 3s
[root@utility ~]# oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-5c4e718c 100Gi RWO Delete Available localblock-sc 28m
local-pv-68faed78 10Gi RWO Delete Bound openshift-storage/rook-ceph-mon-a local-sc 34m
local-pv-6a58375e 100Gi RWO Delete Available localblock-sc 28m
local-pv-780afdd6 10Gi RWO Delete Bound openshift-storage/rook-ceph-mon-c local-sc 34m
local-pv-b640422f 10Gi RWO Delete Bound openshift-storage/rook-ceph-mon-b local-sc 33m
local-pv-d6db37fd 100Gi RWO Delete Available localblock-sc 28m
[root@utility ~]#

And now we can see the OSD PVCs and the PVs they are bound to
[root@utility ~]# oc get pvc -n openshift-storage
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ocs-deviceset-0-0-7j2kj Bound local-pv-6a58375e 100Gi RWO localblock-sc 3s
ocs-deviceset-1-0-lmd97 Bound local-pv-d6db37fd 100Gi RWO localblock-sc 3s
ocs-deviceset-2-0-dnfbd Bound local-pv-5c4e718c 100Gi RWO localblock-sc 3s
[root@utility ~]# oc get pv | grep localblock-sc
local-pv-5c4e718c 100Gi RWO Delete Bound openshift-storage/ocs-deviceset-2-0-dnfbd localblock-sc 31m
local-pv-6a58375e 100Gi RWO Delete Bound openshift-storage/ocs-deviceset-0-0-7j2kj localblock-sc 31m
local-pv-d6db37fd 100Gi RWO Delete Bound openshift-storage/ocs-deviceset-1-0-lmd97 localblock-sc 31m
[root@utility ~]#

This is the first PVC created inside the OCS cluster used by noobaa
[root@utility ~]# oc get pvc -n openshift-storage
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
db-noobaa-core-0 Bound pvc-d8dbb86f-3d83-11ea-ac51-001a4a16017d 50Gi RWO ocs-storagecluster-ceph-rbd 72s

Wait for all the PODs to be up and running
[root@utility ~]# oc get pod -n openshift-storage
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-2qkl8 3/3 Running 0 5m31s
csi-cephfsplugin-4pbvl 3/3 Running 0 5m31s
csi-cephfsplugin-j8w82 3/3 Running 0 5m31s
csi-cephfsplugin-provisioner-647cd6996c-6mw9t 4/4 Running 0 5m31s
csi-cephfsplugin-provisioner-647cd6996c-pbrxs 4/4 Running 0 5m31s
csi-rbdplugin-9nj85 3/3 Running 0 5m31s
csi-rbdplugin-jmnqz 3/3 Running 0 5m31s
csi-rbdplugin-provisioner-6b8ff67dc4-jk5lm 4/4 Running 0 5m31s
csi-rbdplugin-provisioner-6b8ff67dc4-rxjhq 4/4 Running 0 5m31s
csi-rbdplugin-vrzjq 3/3 Running 0 5m31s
noobaa-core-0 1/2 Running 0 2m34s
noobaa-operator-85d86479fc-n8vp5 1/1 Running 0 13m
ocs-operator-65cf57b98b-rk48c 0/1 Running 0 13m
rook-ceph-drain-canary-worker-1.ocp42.ssa.mbu.labs.redhat.w2cqv 1/1 Running 0 2m41s
rook-ceph-drain-canary-worker-2.ocp42.ssa.mbu.labs.redhat.whv6s 1/1 Running 0 2m40s
rook-ceph-drain-canary-worker-3.ocp42.ssa.mbu.labs.redhat.ll8gj 1/1 Running 0 2m40s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-d7d64976d8cm7 1/1 Running 0 2m28s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-864fdf78ppnpm 1/1 Running 0 2m27s
rook-ceph-mgr-a-5fd6f7578c-wbsb6 1/1 Running 0 3m24s
rook-ceph-mon-a-bffc546c8-vjrfb 1/1 Running 0 4m26s
rook-ceph-mon-b-8499dd679c-6pzm9 1/1 Running 0 4m11s
rook-ceph-mon-c-77cd5dd54-64z52 1/1 Running 0 3m46s
rook-ceph-operator-59d78cf8bd-4zcsz 1/1 Running 0 13m
rook-ceph-osd-0-b46fbc7d7-hc2wz 1/1 Running 0 2m41s
rook-ceph-osd-1-648c5dc8d6-prwks 1/1 Running 0 2m40s
rook-ceph-osd-2-546d4d77fb-qb68j 1/1 Running 0 2m40s
rook-ceph-osd-prepare-ocs-deviceset-0-0-7j2kj-s72g4 0/1 Completed 0 2m56s
rook-ceph-osd-prepare-ocs-deviceset-1-0-lmd97-27chl 0/1 Completed 0 2m56s
rook-ceph-osd-prepare-ocs-deviceset-2-0-dnfbd-s7z8v 0/1 Completed 0 2m56s
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-d7b4b5b6hnpr 1/1 Running 0 2m12s

Our installation is now complete and OCS is fully operational.
Now we can browse the noobaa management console (for now it only works in Chrome) and create a new user to test the S3 object storage

Get the endpoint for the S3 object server
[root@utility ~]# oc get route s3 -o jsonpath='{.spec.host}' -n openshift-storage
s3-openshift-storage.apps.ocp42.ssa.mbu.labs.redhat.com

Test it with your preferred S3 client (I use Cyberduck on the Windows desktop I'm writing this article from).
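As an alternative to a GUI client, a quick command-line check could look like the sketch below. It assumes the AWS CLI is installed and that the noobaa-admin secret in openshift-storage exposes AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY keys; adjust it to the user and credentials you created in the console.

# Illustrative sketch -- endpoint and credential source are assumptions
S3_ENDPOINT=$(oc get route s3 -o jsonpath='{.spec.host}' -n openshift-storage)
export AWS_ACCESS_KEY_ID=$(oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
export AWS_SECRET_ACCESS_KEY=$(oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
# Create a bucket, upload a test object and list it (--no-verify-ssl because of the self-signed route certificate)
aws --endpoint-url https://$S3_ENDPOINT --no-verify-ssl s3 mb s3://test-bucket
aws --endpoint-url https://$S3_ENDPOINT --no-verify-ssl s3 cp /etc/hostname s3://test-bucket/
aws --endpoint-url https://$S3_ENDPOINT --no-verify-ssl s3 ls s3://test-bucket/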

Create something to check if you can write

It works!
Set the ocs-storagecluster-cephfs StorageClass as the default one
[root@utility ~]# oc patch storageclass ocs-storagecluster-cephfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/ocs-storagecluster-cephfs patched
[root@utility ~]#

Test the ocs-storagecluster-cephfs StorageClass by adding persistent storage to the registry. Edit the registry operator configuration and set the storage section as follows (leaving the claim empty lets the operator create the PVC):
[root@utility ~]# oc edit configs.imageregistry.operator.openshift.io
  storage:
    pvc:
      claim:
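If you prefer a non-interactive change, the same edit can be sketched as a single patch; this is an equivalent alternative, not part of the original procedure:

[root@utility ~]# oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'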

Check the PVC that was created and wait for the new POD to be up and running
[root@utility ~]# oc get pvc -n openshift-image-registry
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
image-registry-storage Bound pvc-ba4a07c1-3d86-11ea-ad40-001a4a1601e7 100Gi RWX ocs-storagecluster-cephfs 12s
[root@utility ~]# oc get pod -n openshift-image-registry
NAME READY STATUS RESTARTS AGE
cluster-image-registry-operator-655fb7779f-pn7ms 2/2 Running 0 36h
image-registry-5bdf96556-98jbk 1/1 Running 0 105s
node-ca-9gbxg 1/1 Running 1 35h
node-ca-fzcrm 1/1 Running 0 35h
node-ca-gr928 1/1 Running 1 35h
node-ca-jkfzf 1/1 Running 1 35h
node-ca-knlcj 1/1 Running 0 35h
node-ca-mb6zh 1/1 Running 0 35h
[root@utility ~]#

Test it in a new project named test
[root@utility ~]# oc new-project test
Now using project "test" on server "https://api.ocp42.ssa.mbu.labs.redhat.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app django-psql-example

to build a new example application in Python. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
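The $REGISTRY_URL variable used in the podman commands below isn't defined in the transcript; a minimal way to set it, assuming the registry's default route has been exposed, could be:

# Expose the registry route if it isn't already (assumption: not shown earlier in this lab)
[root@utility ~]# oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge --patch '{"spec":{"defaultRoute":true}}'
[root@utility ~]# REGISTRY_URL=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')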

[root@utility ~]# podman pull alpine
Trying to pull docker.io/library/alpine...
Getting image source signatures
Copying blob c9b1b535fdd9 done
Copying config e7d92cdc71 done
Writing manifest to image destination
Storing signatures
e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a
[root@utility ~]# podman login -u $(oc whoami) -p $(oc whoami -t) $REGISTRY_URL --tls-verify=false
Login Succeeded!
[root@utility ~]# podman tag alpine $REGISTRY_URL/test/alpine
[root@utility ~]# podman push $REGISTRY_URL/test/alpine --tls-verify=false
Getting image source signatures
Copying blob 5216338b40a7 done
Copying config e7d92cdc71 done
Writing manifest to image destination
Storing signatures
[root@utility ~]# oc get is -n test
NAME IMAGE REPOSITORY TAGS UPDATED
alpine default-route-openshift-image-registry.apps.ocp42.ssa.mbu.labs.redhat.com/test/alpine latest 3 minutes ago
[root@utility ~]#

The registry works!
Other Scenario
If your cluster is deployed on vSphere and uses the default "thin" StorageClass but your datastore isn't big enough, you can start directly from the OCS installation step.
When creating the OCS Cluster Service, create a YAML file with your desired sizes and without a storageClassName (the default one will be used).
You can also remove the monPVCTemplate section if you are not interested in changing the storage size.
[root@utility ~]# cat <<EOF > ocs-cluster-service.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ''
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: ''
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources: {}
EOF

Limits and Requests
By default, Limits and Requests are set as follows
[root@utility ~]# oc describe node worker-1.ocp42.ssa.mbu.labs.redhat.com

Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
openshift-storage noobaa-core-0 4 (25%) 4 (25%) 8Gi (12%) 8Gi (12%) 13m
openshift-storage rook-ceph-mgr-a-676d4b4796-54mtk 1 (6%) 1 (6%) 3Gi (4%) 3Gi (4%) 12m
openshift-storage rook-ceph-mon-b-7d7747d8b4-k9txg 1 (6%) 1 (6%) 2Gi (3%) 2Gi (3%) 13m
openshift-storage rook-ceph-osd-1-854847fd4c-482bt 1 (6%) 2 (12%) 4Gi (6%) 8Gi (12%) 12m

We can create our new YAML file to change those settings in the ocs-storagecluster StorageCluster resource
[root@utility ~]# cat <<EOF > ocs-cluster-service-modified.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  resources:
    mon:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    mgr:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-core:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-db:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: 'local-sc'
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: 'localblock-sc'
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources:
      limits:
        cpu: 1
        memory: 4Gi
      requests:
        cpu: 1
        memory: 4Gi
EOF

And apply
[root@utility ~]# oc apply -f ocs-cluster-service-modified.yaml
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
storagecluster.ocs.openshift.io/ocs-storagecluster configured

We have to wait for the operator to read the new configuration and apply it
[root@utility ~]# oc describe node worker-1.ocp42.ssa.mbu.labs.redhat.com

Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
openshift-storage noobaa-core-0 2 (12%) 2 (12%) 2Gi (3%) 2Gi (3%) 23s
openshift-storage rook-ceph-mgr-a-54f87f84fb-pm4rn 1 (6%) 1 (6%) 1Gi (1%) 1Gi (1%) 56s
openshift-storage rook-ceph-mon-b-854f549cd4-bgdb6 1 (6%) 1 (6%) 1Gi (1%) 1Gi (1%) 46s
openshift-storage rook-ceph-osd-1-ff56d545c-p7hvn 1 (6%) 1 (6%) 4Gi (6%) 4Gi (6%) 50s

And now we have our PODs with the new configurations applied.
Note that the OSD PODs won't start if you choose values that are too low.
Sections:

mon for rook-ceph-mon
mgr for rook-ceph-mgr
noobaa-core and noobaa-db for the 2 containers in the pod noobaa-core-0
mds for rook-ceph-mds-ocs-storagecluster-cephfilesystem
rgw for rook-ceph-rgw-ocs-storagecluster-cephobjectstore
the resources section at the end for rook-ceph-osd

The rgw and mds sections only take effect the first time the resource is created.

spec:
  resources:
    mds:
      limits:
        cpu: 2
        memory: 4Gi
      requests:
        cpu: 2
        memory: 4Gi
    rgw:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi

Conclusions
Now you can enjoy your brand-new OCS 4.2 cluster in OCP 4.2.x.
Much has changed compared to OCS 3.x, for example the use of PVCs instead of directly attached disks. For now, there are still many limitations, mostly for sustainability and supportability reasons.
We will wait for a fully supported installation for these scenarios.
UPDATES

The cluster used to write this article has been updated from 4.2.14 to 4.2.16 and then from 4.2.16 to 4.3.0.

The current OCS setup is still working

The post OCS 4.2 in OCP 4.2.14 – UPI installation in RHV appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

How to deploy IBM Blockchain Platform on Red Hat OpenShift

Arshiya Lal is an Offering Manager for IBM’s Blockchain Platform. She leads their developer experience portfolio, spearheads a program for blockchain start-ups, and informs strategy for IBM’s blockchain offerings. She’s been featured on the Bad Crypto podcast and spoken at Duke, North Carolina Tech Association, and various start-up events. Before joining IBM’s Blockchain group, she developed expertise in a variety of groundbreaking technologies including smart textiles, 3-D Printing, augmented reality, virtual reality, and gesture recognition. She is a graduate of the Georgia Institute of Technology in Atlanta, Georgia.
IBM recently announced a new version of the IBM Blockchain Platform (v2.1.0) which is optimized and certified to deploy on Red Hat OpenShift. This offering is well-suited for organizations who need to store a copy of the ledger and run workloads on their own infrastructure, meet specific data residency requirements, or deploy blockchain components in multi-cloud or hybrid cloud architectures to meet consortium needs.
The IBM Blockchain Platform together with Red Hat OpenShift offers:
Simplicity
Quickly build, operate, govern and grow a blockchain network with the most complete blockchain software, services, tools and sample codes available.
Flexibility
Containerize smart contracts, peers, certificate authorities and ordering services and easily deploy them within your preferred environments.
Reliability
Confidently create networks with high performance and availability for the different stages of blockchain development, deployment and production.
Steps to deploy
Below are some quick tips to help you get started deploying IBM Blockchain Platform (v2.1.0) on Red Hat OpenShift. For full documentation, go here.
Step 1: Access the software and documentation 
IBM Blockchain Platform V2.1.0 requires an entitlement key that is included with your order from Passport Advantage®. Get the entitlement key that is assigned to your ID:

Log in to the MyIBM Container Software Library with the IBMid and password that are associated with the entitled software.
In the Entitlement keys section, select Copy key to copy the entitlement key to the clipboard. For deployment instructions, see Deploying IBM Blockchain Platform V2.1.0.

Step 2: Evaluate the hardware and system configuration

Your system must meet the minimum hardware requirements. For more details, see System prerequisites. 
Ensure that you have a Red Hat OpenShift Container Platform 3.11 or 4.2 Kubernetes cluster available to install the IBM Blockchain Platform. For more information, see OpenShift Container Platform 3.11 and 4.2 documentation. 
You need to install and connect to your cluster by using the OpenShift Container Platform CLI to deploy the platform. 

Step 3: Get started
Complete the following steps to install IBM Blockchain Platform V2.1.0 (a rough command-line sketch of some of these steps follows the list).

Log in to your OpenShift cluster.
Create a new project.
Add security and access policies.
Create a secret for your entitlement key.
Deploy the IBM Blockchain Platform operator. 
Deploy the IBM Blockchain Platform console. 
Log in to the console.
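As a sketch of the first few steps from the command line (the project name, secret name and registry details here are assumptions; use the exact values from the deployment documentation):

oc login https://<your-openshift-api>:6443 -u <admin-user>
oc new-project blockchain-project        # hypothetical project name
# Pull secret for the IBM Entitled Registry; cp.icr.io with user "cp" is the usual
# convention for IBM entitled software -- confirm it in the linked documentation.
oc create secret docker-registry docker-key-secret \
  --docker-server=cp.icr.io --docker-username=cp \
  --docker-password=<your-entitlement-key> --docker-email=<your-ibm-id-email> \
  -n blockchain-project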

Want a jumpstart?
Engage with Blockchain Lab Services experts who know the platform better than anyone and start to unlock all the value that the IBM Blockchain Platform can bring to your business. With locations around the world, our experts can work side-by-side with you to accelerate the deployment and configuration of your blockchain network on Red Hat OpenShift.
For more information about Blockchain Lab Services contact your IBM Blockchain Platform sales representative. 
For more information: https://ibm.com/blockchain/platform 
 
The post How to deploy IBM Blockchain Platform on Red Hat OpenShift appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Changes to dockerproject.org APT and YUM repositories

While many people know about Docker, not that many know its history and where it came from. Docker was started as a project in the dotCloud company, founded by Solomon Hykes, which provided a PaaS solution. The project became so successful that dotCloud renamed itself to Docker, Inc. and focused on Docker as its primary product.

As the “Docker project” grew from being a proof of concept shown off at various meetups and at PyCon in 2013 to a real community project, it needed a website where people could learn about it and download it. This is why the “dockerproject.org” and “dockerproject.com” domains were registered.

With the move from dotCloud to Docker, Inc. and the shift of focus onto the Docker product, it made sense to move everything to the "docker.com" domain. This is where you now find the company website and documentation, and of course the APT and YUM repositories, which have been hosted at download.docker.com since 2017.

On the 31st of March 2020, we will be shutting down the legacy APT and YUM repositories hosted at dockerproject.org and dockerproject.com. These repositories haven't been updated with the latest releases of Docker, so the packages hosted there contain security vulnerabilities. Removing these repositories will make sure that people download the latest version of Docker, ensuring their security and providing the best experience possible.

What do I need to do?

If you are currently using the APT or YUM repositories from dockerproject.org or dockerproject.com, please update to use the repositories at download.docker.com.

You can find instructions for CentOS, Debian, Fedora and Ubuntu in the documentation.
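For example, on CentOS the switch could look like the following sketch; the old repo file name is an assumption, so adjust it to whatever file on your system still references dockerproject.org:

# Remove the legacy repository definition (file name may differ on your system)
sudo rm -f /etc/yum.repos.d/docker.repo
# Add the current repository and install the latest Docker Engine packages
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io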
The post Changes to dockerproject.org APT and YUM repositories appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/