Migration Paths for RDO From CentOS 7 to 8

At the last CentOS Dojo, we were asked whether RDO would provide python 3 packages for OpenStack Ussuri on CentOS 7 and whether it would be "possible" in the context of helping with the upgrade path from Train to Ussuri. As "possible" is a vague term and I think the response deserves more explanation than a binary one, I've collected my thoughts in this post as a way to start a discussion within the RDO community.

Yes, upgrades are hard
We all know that upgrading a production OpenStack cloud is complex and depends strongly on each specific layout, deployment tools (different deployment tools may or may not support OpenStack upgrades) and processes. In addition, upgrading from CentOS 7 to 8 requires an OS redeployment, which introduces operational complexity to the migration. We are committed to helping RDO community users migrate their clouds to new versions of OpenStack and/or operating systems in different ways:

Providing RDO Train packages on CentOS 8. This allows users to choose between doing a one-step upgrade from CentOS7/Train -> CentOS8/Ussuri or splitting it into two steps: CentOS7/Train -> CentOS8/Train -> CentOS8/Ussuri.
RDO maintains OpenStack packages during the whole upstream maintenance cycle – for the Train release, this is until April 2021. Operators can take some time to plan and execute their migration paths.

Also, the Rolling Upgrades features provided in OpenStack allow operators to temporarily keep agents on compute nodes running Train after the controllers have been upgraded to Ussuri, using Upgrade Levels in Nova or built-in backwards-compatibility features in Neutron and other services.
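As a small illustration of that mechanism, here is a minimal sketch of the Nova configuration involved, assuming Nova's documented upgrade-levels option (remove the pin once every node runs Ussuri):

# /etc/nova/nova.conf on the upgraded controllers
[upgrade_levels]
# Pin compute RPC messages to the Train level so that compute nodes
# still running Train can talk to controllers already on Ussuri.
compute = train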

What "supporting an OpenStack release in a CentOS version" means in RDO
Before discussing the limitations and challenges of supporting RDO Ussuri on CentOS 7.7 using python 3, I'll describe what supporting a new RDO release means:

Build

Before we can start building OpenStack packages, we need all the dependencies required to build or run OpenStack services. We use the libraries from the CentOS base repos as much as we can and avoid rebasing or forking CentOS base packages unless it's strongly justified.
OpenStack packages are built using DLRN in RDO Trunk repos or CBS, using jobs running in the post pipeline of review.rdoproject.org.
RDO also consumes packages from other CentOS SIGs, such as Ceph from the Storage SIG, KVM from the Virtualization SIG, or collectd from OpsTools.

Validate

We run CI jobs periodically to validate the packages provided in the repos. These jobs are executed using the Zuul instance in the SoftwareFactory project or Jenkins in the CentOS CI infra, and they deploy different configurations of OpenStack using Packstack, puppet-openstack-integration and TripleO.
Also, some upstream projects include CI jobs on CentOS using the RDO packages to gate every change.

Publish

RDO Trunk packages are published in https://trunk.rdoproject.org and validated repositories are moved to promoted links.
RDO CloudSIG packages are published in official CentOS mirrors after they are validated by CI jobs.

Challenges to provide python 3 packages for RDO Ussuri in CentOS 7
Build

While CentOS 7 includes quite a wide set of python 2 modules (150+) in addition to the interpreter, the python 3 stack included in CentOS 7.7 is just the python interpreter and ~5 python modules. All the missing ones would need to be bootstrapped for python 3.
Some python bindings are provided as part of other builds, e.g. python-rbd and python-rados are part of Ceph in the Storage SIG, python-libguestfs is part of libguestfs in the base repo, etc. RDO doesn't own those packages, so commitment from the owners would be needed, or RDO would need to take ownership of them for this specific release (which means maintaining them until Train EOL).
Current specs in Ussuri tie the python version to the CentOS version. We'd need to figure out a way to switch the python version on CentOS 7 via tooling configuration and macros.

Validate

In order to validate the python 3 builds for Ussuri on CentOS 7, the deployment tools (puppet-openstack, packstack, kolla and TripleO) would need upstream fixes to install python 3 packages instead of python 2 for CentOS 7. Ideally, new CI jobs should be added with this configuration to gate changes in those repositories. This would require support from the upstream communities.

Conclusion

Alternatives exist to help operators on the migration path from Train on CentOS 7 to Ussuri on CentOS 8 and avoid a massive full-cloud redeployment.
Doing a fully supported RDO release of Ussuri on CentOS 7 would require a big effort in RDO and other projects that can't be done with existing resources:

It would require a full bootstrap of the python 3 dependencies, which are currently pulled from the CentOS base repositories as python 2.
Other SIGs would need to provide python 3 packages or, alternatively, RDO would need to maintain them for this specific release.
In order to validate the release, upstream deployment projects would need to support this new python 3 release on CentOS 7.

There may be room for intermediate solutions, limited to a reduced set of packages, that would help in the transition period. We'd need to hear details from the interested community members about what is actually needed and what the desired migration workflow is. We will be happy to onboard new community members with an interest in contributing to this effort.

We are open to listening and discussing whatever other options may help users; come to us and let us know how we can help.

Source: RDO

OCS 4.2 in OCP 4.2.14 – UPI installation in RHV

When OCS 4.2 GA was released last month, I was thrilled to finally test and deploy it in my lab. I read the documentation and saw that only vSphere and AWS installations were currently supported. My lab is installed in an RHV environment following the UPI Bare Metal documentation, so at first I was a bit disappointed. Then I realized it could be an interesting challenge to find a different way to use it, and, well, I found one while hacking away for some late-night fun. All the following procedures are unsupported.
Prerequisites

An OCP 4.2.x cluster installed (the current latest version is 4.2.14)
The possibility to create new local disks inside the VMs (if you are using a virtualized environment) or servers with disks that can be used

Issues
The official OCS 4.2 installation on vSphere requires a minimum of 3 nodes, each using a 2TB volume (a PVC using the default "thin" storage class) for the OSD volumes, plus 10GB for each mon pod (3 in total, always using a PVC). It also requires 16 CPUs and 64GB RAM per node.
Use case scenario

bare-metal installations
vSphere cluster:
  without a shared datastore
  you don't want to use the vSphere dynamic provisioner
  without enough space in the datastore
  without enough RAM or CPU
other virtualized installations (for example RHV, which is the one used for this article)

Challenges

create a PVC using local disks
change the default 2TB volumes size
define a different StorageClass (without using a default one) for the mon PODs and the OSD volumes
define different limits and requests per component

Solutions

use the local storage operator
create the ocs-storagecluster resource using a YAML file instead of the new interface. That also means adding the labels to the worker nodes that are going to be used by OCS

Procedures
Add the disks to the VMs: 2 disks for each node, a 10GB disk for the mon pod and a 100GB disk for the OSD volume.

Repeat for the other 2 nodes.
The disks MUST be in the same order and have the same device names on all the nodes. For example, /dev/sdb MUST be the 10GB disk and /dev/sdc the 100GB disk on all the nodes.
[root@utility ~]# for i in {1..3} ; do ssh core@worker-${i}.ocp42.ssa.mbu.labs.redhat.com lsblk | egrep "^sdb.*|sdc.*$" ; done
sdb 8:16 0 10G 0 disk
sdc 8:32 0 100G 0 disk
sdb 8:16 0 10G 0 disk
sdc 8:32 0 100G 0 disk
sdb 8:16 0 10G 0 disk
sdc 8:32 0 100G 0 disk
[root@utility ~]#

Install the Local Storage Operator (see the official documentation).
Create the namespace
[root@utility ~]# oc new-project local-storage

Then install the operator from the OperatorHub

Wait for the operator pod to be up and running
[root@utility ~]# oc get pod -n local-storage
NAME READY STATUS RESTARTS AGE
local-storage-operator-ccbb59b45-nn7ww 1/1 Running 0 57s
[root@utility ~]#

The Local Storage Operator works using device paths as references. The LocalVolume resource scans the nodes that match the selector and creates a StorageClass for the device.
Do not use different StorageClass names for the same device.
We need the Filesystem volume mode for these volumes. Prepare the LocalVolume YAML file to create the resource for the mon pods, which use /dev/sdb
[root@utility ~]# cat <<EOF > local-storage-filesystem.yaml
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks-fs"
  namespace: "local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-1.ocp42.ssa.mbu.labs.redhat.com
        - worker-2.ocp42.ssa.mbu.labs.redhat.com
        - worker-3.ocp42.ssa.mbu.labs.redhat.com
  storageClassDevices:
  - storageClassName: "local-sc"
    volumeMode: Filesystem
    devicePaths:
    - /dev/sdb
EOF

Then create the resource
[root@utility ~]# oc create -f local-storage-filesystem.yaml
localvolume.local.storage.openshift.io/local-disks-fs created
[root@utility ~]#

Check that all the pods are up and running and that the StorageClass and the PVs exist
[root@utility ~]# oc get pod -n local-storage
NAME READY STATUS RESTARTS AGE
local-disks-fs-local-diskmaker-2bqw4 1/1 Running 0 106s
local-disks-fs-local-diskmaker-8w9rz 1/1 Running 0 106s
local-disks-fs-local-diskmaker-khhm5 1/1 Running 0 106s
local-disks-fs-local-provisioner-g5dgv 1/1 Running 0 106s
local-disks-fs-local-provisioner-hkj69 1/1 Running 0 106s
local-disks-fs-local-provisioner-vhpj8 1/1 Running 0 106s
local-storage-operator-ccbb59b45-nn7ww 1/1 Running 0 15m
[root@utility ~]# oc get sc
NAME PROVISIONER AGE
local-sc kubernetes.io/no-provisioner 109s
[root@utility ~]# oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-68faed78 10Gi RWO Delete Available local-sc 84s
local-pv-780afdd6 10Gi RWO Delete Available local-sc 83s
local-pv-b640422f 10Gi RWO Delete Available local-sc 9s
[root@utility ~]#

The PVs were created.
Prepare the LocalVolume YAML file to create the resource for the OSD volumes, which use /dev/sdc. We need the Block volume mode for these volumes.
[root@utility ~]# cat <<EOF > local-storage-block.yaml
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-1.ocp42.ssa.mbu.labs.redhat.com
        - worker-2.ocp42.ssa.mbu.labs.redhat.com
        - worker-3.ocp42.ssa.mbu.labs.redhat.com
  storageClassDevices:
  - storageClassName: "localblock-sc"
    volumeMode: Block
    devicePaths:
    - /dev/sdc
EOF

Then create the resource
[root@utility ~]# oc create -f local-storage-block.yaml
localvolume.local.storage.openshift.io/local-disks created
[root@utility ~]#

Check that all the pods are up and running and that the StorageClass and the PVs exist
[root@utility ~]# oc get pod -n local-storage
NAME READY STATUS RESTARTS AGE
local-disks-fs-local-diskmaker-2bqw4 1/1 Running 0 6m33s
local-disks-fs-local-diskmaker-8w9rz 1/1 Running 0 6m33s
local-disks-fs-local-diskmaker-khhm5 1/1 Running 0 6m33s
local-disks-fs-local-provisioner-g5dgv 1/1 Running 0 6m33s
local-disks-fs-local-provisioner-hkj69 1/1 Running 0 6m33s
local-disks-fs-local-provisioner-vhpj8 1/1 Running 0 6m33s
local-disks-local-diskmaker-6qpfx 1/1 Running 0 22s
local-disks-local-diskmaker-pw5ql 1/1 Running 0 22s
local-disks-local-diskmaker-rc5hr 1/1 Running 0 22s
local-disks-local-provisioner-9qprp 1/1 Running 0 22s
local-disks-local-provisioner-kkkcm 1/1 Running 0 22s
local-disks-local-provisioner-kxbnn 1/1 Running 0 22s
local-storage-operator-ccbb59b45-nn7ww 1/1 Running 0 19m
[root@utility ~]# oc get sc
NAME PROVISIONER AGE
local-sc kubernetes.io/no-provisioner 6m36s
localblock-sc kubernetes.io/no-provisioner 25s
[root@utility ~]# oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-5c4e718c 100Gi RWO Delete Available localblock-sc 10s
local-pv-68faed78 10Gi RWO Delete Available local-sc 6m13s
local-pv-6a58375e 100Gi RWO Delete Available localblock-sc 10s
local-pv-780afdd6 10Gi RWO Delete Available local-sc 6m12s
local-pv-b640422f 10Gi RWO Delete Available local-sc 4m58s
local-pv-d6db37fd 100Gi RWO Delete Available localblock-sc 5s
[root@utility ~]#

All the PVs were created.
Install OCS 4.2 (see the official documentation).
Create the namespace "openshift-storage"
[root@utility ~]# cat <<EOF > ocs-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    openshift.io/cluster-monitoring: "true"
EOF
[root@utility ~]# oc create -f ocs-namespace.yaml
namespace/openshift-storage created
[root@utility ~]#

Add the labels to the workers
oc label node worker-1.ocp42.ssa.mbu.labs.redhat.com "cluster.ocs.openshift.io/openshift-storage=" --overwrite
oc label node worker-1.ocp42.ssa.mbu.labs.redhat.com "topology.rook.io/rack=rack0" --overwrite
oc label node worker-2.ocp42.ssa.mbu.labs.redhat.com "cluster.ocs.openshift.io/openshift-storage=" --overwrite
oc label node worker-2.ocp42.ssa.mbu.labs.redhat.com "topology.rook.io/rack=rack1" --overwrite
oc label node worker-3.ocp42.ssa.mbu.labs.redhat.com "cluster.ocs.openshift.io/openshift-storage=" --overwrite
oc label node worker-3.ocp42.ssa.mbu.labs.redhat.com "topology.rook.io/rack=rack3" --overwrite

Install the operator from the web interface

Check on the web interface if the operator is Up to date

And wait for the pods to be up and running
[root@utility ~]# oc get pod -n openshift-storage
NAME READY STATUS RESTARTS AGE
noobaa-operator-85d86479fc-n8vp5 1/1 Running 0 106s
ocs-operator-65cf57b98b-rk48c 1/1 Running 0 106s
rook-ceph-operator-59d78cf8bd-4zcsz 1/1 Running 0 106s
[root@utility ~]#

Create the OCS Cluster Service YAML file
[root@utility ~]# cat <<EOF > ocs-cluster-service.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: 'local-sc'
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: 'localblock-sc'
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources: {}
EOF

Note the "monPVCTemplate" section, in which we define the StorageClass "local-sc", and the "storageDeviceSets" section, with the different storage sizes and the StorageClass "localblock-sc" used by the OSD volumes.
Now we can create the resource
[root@utility ~]# oc create -f ocs-cluster-service.yaml
storagecluster.ocs.openshift.io/ocs-storagecluster created
[root@utility ~]#

During the creation of the resources, we can see how the created PVCs are bound to the Local Storage PVs
[root@utility ~]# oc get pvc -n openshift-storage
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
rook-ceph-mon-a Bound local-pv-68faed78 10Gi RWO local-sc 13s
rook-ceph-mon-b Bound local-pv-b640422f 10Gi RWO local-sc 8s
rook-ceph-mon-c Bound local-pv-780afdd6 10Gi RWO local-sc 3s
[root@utility ~]# oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-5c4e718c 100Gi RWO Delete Available localblock-sc 28m
local-pv-68faed78 10Gi RWO Delete Bound openshift-storage/rook-ceph-mon-a local-sc 34m
local-pv-6a58375e 100Gi RWO Delete Available localblock-sc 28m
local-pv-780afdd6 10Gi RWO Delete Bound openshift-storage/rook-ceph-mon-c local-sc 34m
local-pv-b640422f 10Gi RWO Delete Bound openshift-storage/rook-ceph-mon-b local-sc 33m
local-pv-d6db37fd 100Gi RWO Delete Available localblock-sc 28m
[root@utility ~]#

And now we can see the OSD PVCs and the bound PVs
[root@utility ~]# oc get pvc -n openshift-storage
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ocs-deviceset-0-0-7j2kj Bound local-pv-6a58375e 100Gi RWO localblock-sc 3s
ocs-deviceset-1-0-lmd97 Bound local-pv-d6db37fd 100Gi RWO localblock-sc 3s
ocs-deviceset-2-0-dnfbd Bound local-pv-5c4e718c 100Gi RWO localblock-sc 3s
[root@utility ~]# oc get pv | grep localblock-sc
local-pv-5c4e718c 100Gi RWO Delete Bound openshift-storage/ocs-deviceset-2-0-dnfbd localblock-sc 31m
local-pv-6a58375e 100Gi RWO Delete Bound openshift-storage/ocs-deviceset-0-0-7j2kj localblock-sc 31m
local-pv-d6db37fd 100Gi RWO Delete Bound openshift-storage/ocs-deviceset-1-0-lmd97 localblock-sc 31m
[root@utility ~]#

This is the first PVC created inside the OCS cluster, used by NooBaa
[root@utility ~]# oc get pvc -n openshift-storage
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
db-noobaa-core-0 Bound pvc-d8dbb86f-3d83-11ea-ac51-001a4a16017d 50Gi RWO ocs-storagecluster-ceph-rbd 72s

Wait for all the pods to be up and running
[root@utility ~]# oc get pod -n openshift-storage
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-2qkl8 3/3 Running 0 5m31s
csi-cephfsplugin-4pbvl 3/3 Running 0 5m31s
csi-cephfsplugin-j8w82 3/3 Running 0 5m31s
csi-cephfsplugin-provisioner-647cd6996c-6mw9t 4/4 Running 0 5m31s
csi-cephfsplugin-provisioner-647cd6996c-pbrxs 4/4 Running 0 5m31s
csi-rbdplugin-9nj85 3/3 Running 0 5m31s
csi-rbdplugin-jmnqz 3/3 Running 0 5m31s
csi-rbdplugin-provisioner-6b8ff67dc4-jk5lm 4/4 Running 0 5m31s
csi-rbdplugin-provisioner-6b8ff67dc4-rxjhq 4/4 Running 0 5m31s
csi-rbdplugin-vrzjq 3/3 Running 0 5m31s
noobaa-core-0 1/2 Running 0 2m34s
noobaa-operator-85d86479fc-n8vp5 1/1 Running 0 13m
ocs-operator-65cf57b98b-rk48c 0/1 Running 0 13m
rook-ceph-drain-canary-worker-1.ocp42.ssa.mbu.labs.redhat.w2cqv 1/1 Running 0 2m41s
rook-ceph-drain-canary-worker-2.ocp42.ssa.mbu.labs.redhat.whv6s 1/1 Running 0 2m40s
rook-ceph-drain-canary-worker-3.ocp42.ssa.mbu.labs.redhat.ll8gj 1/1 Running 0 2m40s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-d7d64976d8cm7 1/1 Running 0 2m28s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-864fdf78ppnpm 1/1 Running 0 2m27s
rook-ceph-mgr-a-5fd6f7578c-wbsb6 1/1 Running 0 3m24s
rook-ceph-mon-a-bffc546c8-vjrfb 1/1 Running 0 4m26s
rook-ceph-mon-b-8499dd679c-6pzm9 1/1 Running 0 4m11s
rook-ceph-mon-c-77cd5dd54-64z52 1/1 Running 0 3m46s
rook-ceph-operator-59d78cf8bd-4zcsz 1/1 Running 0 13m
rook-ceph-osd-0-b46fbc7d7-hc2wz 1/1 Running 0 2m41s
rook-ceph-osd-1-648c5dc8d6-prwks 1/1 Running 0 2m40s
rook-ceph-osd-2-546d4d77fb-qb68j 1/1 Running 0 2m40s
rook-ceph-osd-prepare-ocs-deviceset-0-0-7j2kj-s72g4 0/1 Completed 0 2m56s
rook-ceph-osd-prepare-ocs-deviceset-1-0-lmd97-27chl 0/1 Completed 0 2m56s
rook-ceph-osd-prepare-ocs-deviceset-2-0-dnfbd-s7z8v 0/1 Completed 0 2m56s
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-d7b4b5b6hnpr 1/1 Running 0 2m12s

Our installation is now complete and OCS is fully operational.
Now we can browse the NooBaa management console (for now it only works in Chrome) and create a new user to test the S3 object storage

Get the endpoint for the S3 object server
[root@utility ~]# oc get route s3 -o jsonpath='{.spec.host}' -n openshift-storage
s3-openshift-storage.apps.ocp42.ssa.mbu.labs.redhat.com

Test it with your preferred S3 client (I use Cyberduck on the Windows desktop I'm using to write this article)

Create something to check if you can write

It works!
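The same write test can also be done from the command line with any generic S3 tool. A minimal sketch using the AWS CLI, assuming you have created an account in the NooBaa console and exported its keys as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (the bucket name is illustrative):

[root@utility ~]# S3_ENDPOINT=https://$(oc get route s3 -o jsonpath='{.spec.host}' -n openshift-storage)
# Create a bucket and upload a test object through the OCS S3 endpoint
[root@utility ~]# aws --endpoint-url $S3_ENDPOINT --no-verify-ssl s3 mb s3://my-test-bucket
[root@utility ~]# aws --endpoint-url $S3_ENDPOINT --no-verify-ssl s3 cp /etc/hostname s3://my-test-bucket/
[root@utility ~]# aws --endpoint-url $S3_ENDPOINT --no-verify-ssl s3 ls s3://my-test-bucket/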
Set the ocs-storagecluster-cephfs StorageClass as the default one
[root@utility ~]# oc patch storageclass ocs-storagecluster-cephfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/ocs-storagecluster-cephfs patched
[root@utility ~]#

Test the ocs-storagecluster-cephfs StorageClass by adding persistent storage to the registry
[root@utility ~]# oc edit configs.imageregistry.operator.openshift.io
storage:
  pvc:
    claim:

Check the PVC created and wait for the new pod to be up and running
[root@utility ~]# oc get pvc -n openshift-image-registry
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
image-registry-storage Bound pvc-ba4a07c1-3d86-11ea-ad40-001a4a1601e7 100Gi RWX ocs-storagecluster-cephfs 12s
[root@utility ~]# oc get pod -n openshift-image-registry
NAME READY STATUS RESTARTS AGE
cluster-image-registry-operator-655fb7779f-pn7ms 2/2 Running 0 36h
image-registry-5bdf96556-98jbk 1/1 Running 0 105s
node-ca-9gbxg 1/1 Running 1 35h
node-ca-fzcrm 1/1 Running 0 35h
node-ca-gr928 1/1 Running 1 35h
node-ca-jkfzf 1/1 Running 1 35h
node-ca-knlcj 1/1 Running 0 35h
node-ca-mb6zh 1/1 Running 0 35h
[root@utility ~]#

Test it in a new project called test
[root@utility ~]# oc new-project test
Now using project "test" on server "https://api.ocp42.ssa.mbu.labs.redhat.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

oc new-app django-psql-example

to build a new example application in Python. Or use kubectl to deploy a simple Kubernetes application:

kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

[root@utility ~]# podman pull alpine
Trying to pull docker.io/library/alpine...
Getting image source signatures
Copying blob c9b1b535fdd9 done
Copying config e7d92cdc71 done
Writing manifest to image destination
Storing signatures
e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a
[root@utility ~]# podman login -u $(oc whoami) -p $(oc whoami -t) $REGISTRY_URL --tls-verify=false
Login Succeeded!
[root@utility ~]# podman tag alpine $REGISTRY_URL/test/alpine
[root@utility ~]# podman push $REGISTRY_URL/test/alpine --tls-verify=false
Getting image source signatures
Copying blob 5216338b40a7 done
Copying config e7d92cdc71 done
Writing manifest to image destination
Storing signatures
[root@utility ~]# oc get is -n test
NAME IMAGE REPOSITORY TAGS UPDATED
alpine default-route-openshift-image-registry.apps.ocp42.ssa.mbu.labs.redhat.com/test/alpine latest 3 minutes ago
[root@utility ~]#

The registry works!
Other Scenario
If your cluster is deployed on vSphere and uses the default "thin" StorageClass but your datastore isn't big enough, you can start directly from the OCS installation.
When it comes to creating the OCS Cluster Service, create a YAML file with your desired sizes and without a storageClassName (the default one will be used).
You can also remove the "monPVCTemplate" section if you are not interested in changing the storage size.
[root@utility ~]# cat <<EOF > ocs-cluster-service.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ''
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: ''
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources: {}
EOF

Limits and Requests
Limits and requests are, by default, set as follows
[root@utility ~]# oc describe node worker-1.ocp42.ssa.mbu.labs.redhat.com

Namespace          Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------          ----                              ------------  ----------  ---------------  -------------  ---
openshift-storage  noobaa-core-0                     4 (25%)       4 (25%)     8Gi (12%)        8Gi (12%)      13m
openshift-storage  rook-ceph-mgr-a-676d4b4796-54mtk  1 (6%)        1 (6%)      3Gi (4%)         3Gi (4%)       12m
openshift-storage  rook-ceph-mon-b-7d7747d8b4-k9txg  1 (6%)        1 (6%)      2Gi (3%)         2Gi (3%)       13m
openshift-storage  rook-ceph-osd-1-854847fd4c-482bt  1 (6%)        2 (12%)     4Gi (6%)         8Gi (12%)      12m

We can create a new YAML file to change those settings in the ocs-storagecluster StorageCluster resource
[root@utility ~]# cat <<EOF > ocs-cluster-service-modified.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  resources:
    mon:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    mgr:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-core:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-db:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: 'local-sc'
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: 'localblock-sc'
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources:
      limits:
        cpu: 1
        memory: 4Gi
      requests:
        cpu: 1
        memory: 4Gi
EOF

And apply
[root@utility ~]# oc apply -f ocs-cluster-service-modified.yaml
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
storagecluster.ocs.openshift.io/ocs-storagecluster configured

We have to wait for the operator to read the new configuration and apply it
[root@utility ~]# oc describe node worker-1.ocp42.ssa.mbu.labs.redhat.com

Namespace          Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------          ----                              ------------  ----------  ---------------  -------------  ---
openshift-storage  noobaa-core-0                     2 (12%)       2 (12%)     2Gi (3%)         2Gi (3%)       23s
openshift-storage  rook-ceph-mgr-a-54f87f84fb-pm4rn  1 (6%)        1 (6%)      1Gi (1%)         1Gi (1%)       56s
openshift-storage  rook-ceph-mon-b-854f549cd4-bgdb6  1 (6%)        1 (6%)      1Gi (1%)         1Gi (1%)       46s
openshift-storage  rook-ceph-osd-1-ff56d545c-p7hvn   1 (6%)        1 (6%)      4Gi (6%)         4Gi (6%)       50s

And now our pods have the new configuration applied.
Note that the OSD pods won't start if you choose values that are too low.
Sections:

mon for rook-ceph-mon
mgr for rook-ceph-mgr
noobaa-core and noobaa-db for the 2 containers in the pod noobaa-core-0
mds for rook-ceph-mds-ocs-storagecluster-cephfilesystem
rgw for rook-ceph-rgw-ocs-storagecluster-cephobjectstore
the resources section at the end for rook-ceph-osd

The rgw and mds sections only work the first time we create the resource.

spec:
  resources:
    mds:
      limits:
        cpu: 2
        memory: 4Gi
      requests:
        cpu: 2
        memory: 4Gi
    rgw:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi

Conclusions
Now you can enjoy your brand-new OCS 4.2 cluster on OCP 4.2.x.
Things have changed compared to OCS 3.x, for example the use of PVCs instead of directly attached disks. For now, there are a lot of limitations, for sustainability and supportability reasons.
We will wait for a fully supported installation for these scenarios.
UPDATES

The cluster used to write this article has been updated from 4.2.14 to 4.2.16 and then from 4.2.16 to 4.3.0.

The current OCS setup is still working

Source: OpenShift

How to deploy IBM Blockchain Platform on Red Hat OpenShift

Arshiya Lal is an Offering Manager for IBM’s Blockchain Platform. She leads their developer experience portfolio, spearheads a program for blockchain start-ups, and informs strategy for IBM’s blockchain offerings. She’s been featured on the Bad Crypto podcast and spoken at Duke, North Carolina Tech Association, and various start-up events. Before joining IBM’s Blockchain group, she developed expertise in a variety of groundbreaking technologies including smart textiles, 3-D Printing, augmented reality, virtual reality, and gesture recognition. She is a graduate of the Georgia Institute of Technology in Atlanta, Georgia.
IBM recently announced a new version of the IBM Blockchain Platform (v2.1.0) which is optimized and certified to deploy on Red Hat OpenShift. This offering is well-suited for organizations who need to store a copy of the ledger and run workloads on their own infrastructure, meet specific data residency requirements, or deploy blockchain components in multi-cloud or hybrid cloud architectures to meet consortium needs.
The IBM Blockchain Platform together with Red Hat OpenShift offers:
Simplicity
Quickly build, operate, govern and grow a blockchain network with the most complete blockchain software, services, tools and sample codes available.
Flexibility
Containerize smart contracts, peers, certificate authorities and ordering services and easily deploy them within your preferred environments.
Reliability
Confidently create networks with high performance and availability for the different stages of blockchain development, deployment and production.
Steps to deploy
Below are some quick tips to help you get started deploying IBM Blockchain Platform (v2.1.0) on Red Hat OpenShift. For full documentation, go here.
Step 1: Access the software and documentation 
IBM Blockchain Platform V2.1.0 requires an entitlement key that is included with your order from Passport Advantage®. Get the entitlement key that is assigned to your ID:

Log in to MyIBM Container Software Library with the IBMid  and password that are associated with the entitled software.
In the Entitlement keys section, select Copy key to copy the entitlement key to the clipboard. For deployment instructions, see Deploying IBM Blockchain Platform V2.1.0.

Step 2: Evaluate the hardware and system configuration

Your system must meet the minimum hardware requirements. For more details, see System prerequisites. 
Ensure that you have a Red Hat OpenShift Container Platform 3.11 or 4.2 Kubernetes cluster available to install the IBM Blockchain Platform. For more information, see OpenShift Container Platform 3.11 and 4.2 documentation. 
You need to install and connect to your cluster by using the OpenShift Container Platform CLI to deploy the platform. 

Step 3: Get started
Complete the following steps to install IBM Blockchain Platform V2.1.0 (a rough command-line sketch of the first steps follows the list).

Log in to your OpenShift cluster.
Create a new project.
Add security and access policies.
Create a secret for your entitlement key.
Deploy the IBM Blockchain Platform operator. 
Deploy the IBM Blockchain Platform console. 
Log in to the console.
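As a rough sketch, the first four steps above typically map to oc commands like the ones below. These are illustrative only: the project and secret names are hypothetical, and the exact security policies and secret format come from the official deployment documentation.

# Log in to the cluster and create a project for the platform
oc login https://<your-cluster-api-url> -u <admin-user>
oc new-project blockchain-project

# Store the entitlement key as a pull secret for the IBM Entitled Registry
# (cp.icr.io with user "cp" is the documented convention for IBM entitled software)
oc create secret docker-registry docker-key-secret \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=<your-entitlement-key> \
  --docker-email=<your-ibm-id-email> \
  -n blockchain-project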

Want a jumpstart?
Engage with Blockchain Lab Services experts who know the platform better than anyone and start to unlock all the value that IBM Blockchain Platform can bring to your business. With locations around the world, our experts can work side-by-side with you to accelerate the deployment and configuration of your blockchain network on Red Hat OpenShift.
For more information about Blockchain Lab Services contact your IBM Blockchain Platform sales representative. 
For more information: https://ibm.com/blockchain/platform 
 
Source: OpenShift

Recap: London OpenShift Commons Gathering January 29th 2020 [Videos and Slides]

It's A Wrap! The first 2020 OpenShift Commons Gathering took place on January 29th in London at the IET Savoy Place.
This OpenShift Commons Gathering in London featured deep dives into OpenShift 4, DevSecOps, Operators, OKD4, Quarkus, Container Storage and much more!
 
The OpenShift Commons Gathering in London brought together over 350 Kubernetes and Cloud Native experts from all over the world to discuss container technologies, best practices for cloud native application developers and the open source software projects that underpin the OpenShift ecosystem.
Public Health England’s Francesco Giannoccaro discussing the use of open source technologies to support scientific computing at OpenShift Commons Gathering in London
 
Here are the videos and slides from the proceedings:
 

The Search for Connections Across the OpenShift Ecosystem
Diane Mueller (Red Hat)
Slides
Video

State of the Union: Unified Hybrid Cloud Vision
Julio Tapia (Red Hat)
Slides
Video

OpenShift 4 Release and Road Map Update
Duncan Hardie (Red Hat) Jan Kleinert (Red Hat)
Slides
Video

OKD4 Release Update & Road Map
Christian Glombek (Red Hat)
Slides
Video

State of DevSecOps: The Seventh Deadly Disease
John Willis (Red Hat)
Slides
Video

Case Study: OpenShift Hive at Worldpay
Bernd Malmqvist (Worldpay) Matt Simons (Worldpay)
Slides
Video

Future Finance Data Innovations with Open Banking and PSD2 @ Asiakastieto
Eero Arvonen (Suomen Asiakastieto)
Slides
Video

Lightning Talk: DevSecOps Culture with Open Source Tools
Benjy Portnoy (Aqua Security)
Slides
Video

Lightning Talk: Secure DevOps for OpenShift
Chris Kranz (Sysdig)
Slides
Video

State of the Operator Ecosystem: Framework, SDKs and Best Practices
Guil Barros (Red Hat) Jason Dobies (Red Hat)
Slides
Video

Hybrid Cloud Case Study: OpenShift at Deutsche Bank
Jeremy Crawford (Deutsche Bank) Dipesh Patel (Deutsche Bank)
Slides
Video

Lightning Talk: OpenShift Container Storage
Karena Angell (Red Hat)
Slides
Video

Case Study: OpenShift at Public Health England
Francesco Giannoccaro (Public Health England)
Slides
Video

OpenShift Hosted Services Update
Patrick Strick (Red Hat)
Slides
Video

AMA Panel – Red Hat Upstream Project Leads, Engineers and Product Managers
Diane Mueller (Red Hat) – moderator
N/A
Video

Closing: Road Ahead & Wrap-Up
Diane Mueller (Red Hat)
Slides
Video

 
To stay abreast of all the latest releases and events, please join OpenShift Commons and sign up for our mailing lists and Slack channel.
What is OpenShift Commons?
Commons builds connections and collaboration across OpenShift communities, projects, and stakeholders. In doing so we’ll enable the success of customers, users, partners, and contributors as we deepen our knowledge and experiences together.
Our goals go beyond code contributions. Commons is a place for companies using OpenShift to accelerate its success and adoption. To do this we’ll act as resources for each other, share best practices and provide a forum for peer-to-peer communication.
Join OpenShift Commons today!
Source: OpenShift

OpenShift 4.3: Deploy Applications with Helm 3

Helm is a package manager for Kubernetes that helps users create templated packages called Helm Charts to include all the Kubernetes resources required to deploy a particular application. Helm then assists with installing the Helm Chart on Kubernetes, and afterwards it can upgrade or roll back the installed package when new versions are available. Helm Charts are particularly useful for the installation and upgrade of stateless applications, given that the Kubernetes resources and the application image can simply be updated to newer versions. 
Helm 2 was based on a server-side component named Tiller, which was responsible for performing Helm operations on Kubernetes clusters. Tiller was designed prior to Kubernetes role-based access control (RBAC), and although useful for single-tenant clusters, its permissive configuration could grant users a wide array of unintended permissions. It was therefore recognised as a major security concern on multi-tenant clusters, which prevented many enterprise users from using Helm in production environments. OpenShift is an enterprise Kubernetes platform, and therefore we didn't recommend the use of Helm 2 in production, even though it was possible to disable OpenShift security features in order for Helm 2 to be used on OpenShift.
Helm 3 was recently released as GA by the Helm community, and a major update was the removal of Tiller, pivoting to a client-side architecture that addresses the aforementioned security concerns and removes the barrier to using Helm in enterprise environments. OpenShift welcomes this change with open arms, and we are thrilled to announce that OpenShift 4.3 supports Helm 3 as a Tech Preview feature. Full support is planned for upcoming releases of the platform.
Helm binaries are distributed alongside oc, odo and other OpenShift tools. 

Try out Helm 3 on OpenShift by following the instructions in OpenShift documentation.
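Getting started takes only a couple of commands. A minimal sketch, assuming the helm binary is on your PATH and you are logged in to a project with oc (the repository and release names are just examples):

# Add a chart repository and install a chart as a named release in the current namespace
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/mysql

# List the releases installed in the current namespace
helm list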
New in Helm 3
Helm 3 introduces many large and small enhancements, a few of which are detailed below:
Tiller is gone: Helm 3 removes Tiller as the server-side component that managed Helm Charts and moves to a client-side model where all operations are performed via the Helm 3 CLI, relying on Kubernetes RBAC for authorization and security features. When a user instructs the Helm CLI to install a Helm Chart, the information about the Helm Chart is fetched from the repository, rendered on the client and then applied to Kubernetes, while a record of this installation (known as a Release) is created within the namespace. 

Releases in namespaces: Release information in Helm 3 is stored in the same namespace as the release itself, using a secret as the storage mechanism for the Release information (see the example after this list).
Three-way strategic merge patch: Upgrades have moved to a 3-way merge, compared to the 2-way merge in Helm 2. In other words, Helm 3 takes the live state of the application resources into account in addition to the manifests in the old and new Helm Charts, and therefore preserves any manual or automatic (e.g. Horizontal Pod Autoscaler) changes that might have been applied to those resources. 
OCI Registries for charts: As an experimental feature, Helm 3 is exploring the use of OCI Registries for storing and distributing charts. This would allow users to take advantage of the security and provenance features available in these registries.
Chart validation: JSONSchema support has been added to Charts in order to define a structure for the values supported by the Chart. The Helm CLI then uses this schema to validate the values that the user provides to the Chart. 
Improved CRD support: Kubernetes Custom Resource Definition (CRD) installation is improved by treating CRDs as special resources. Helm 3 installs the CRDs included in a Chart first, waits until they are made available on the Kubernetes API and then continues the installation of the remaining resources from the Chart.
Library charts: a class of charts called "library charts" is introduced in Helm 3 in order to facilitate sharing snippets of code between charts and to encourage re-use. A library chart does not install anything and can only be defined as a dependency in other charts. 
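As referenced above, you can see how a release is recorded in its namespace by listing the secrets Helm 3 creates. A sketch, assuming a release named my-db was installed in the current project:

$ oc get secrets -l "owner=helm"
NAME                          TYPE                 DATA   AGE
sh.helm.release.v1.my-db.v1   helm.sh/release.v1   1      2m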
For a more exhaustive list of updates in Helm 3, refer to the Helm 3 documentation.
Migration to Helm 3
Most existing charts are compatible with Helm 3. However, given that Helm 3 takes advantage of the Kubernetes security model while Helm 2 did not, adjustments may be needed for your existing charts to deploy properly without Tiller. You can read more on how to migrate from Helm 2 to Helm 3 in the Helm 3 documentation.
Next on OpenShift
For future releases of OpenShift, we are working on a tighter integration of Helm 3 into the OpenShift tools and web console. The OpenShift embedded developer catalog, which is the central hub for all developer content, will add support for Helm Charts in addition to Operator-backed services, Templates, etc. Furthermore, integration with Helm releases is planned next so that developers can manage Helm releases directly from the OpenShift developer console.
We are very excited for the Helm 3 roadmap on OpenShift. Stay tuned…
Source: OpenShift

Driving innovation for connected cars using IBM Cloud

Cars have always been built for travel, but the experience of driving has changed dramatically over the last few decades. Today’s connected cars are not only equipped with seamless internet, but usually have a wireless local area network (WLAN) that allows the car to access data, send data and communicate with Internet of Things (IoT) technology to improve safety and efficiency and optimize performance. The car’s connectivity with the outside world provides a superior in-car experience with on-the-go access to all the features one might have at home.
Traditionally, the networks supporting this robust connectivity, unlike cars, have not been built for travel. Data is stored in a home network in a local data center, which causes latency issues as the car travels and massive amounts of data are transferred across borders. In addition, privacy legislation like the General Data Protection Regulation (GDPR) limits the transfer of personal data outside the EU, which not only creates a poor user experience on the road, but can also impact safety-related IoT insights.
We at Travelping saw an opportunity to use cloud-native technologies for networking to help the automotive industry negotiate the challenges of cross-border data management regulations and improve latency issues for auto manufacturers looking to gain real-time IoT insights.
Road less traveled is most efficient
Travelping develops carrier-grade cloud-native network functions (CNFs) that are used to design software-defined networking solutions. Using IBM Cloud infrastructure products and IBM Cloud Kubernetes Service, we created a cloud-native solution that transports data directly to the vehicles, eliminating latency issues while fulfilling requirements for GDPR. We had strict technical requirements for our IT infrastructure and chose IBM Cloud for several reasons. IBM has a global footprint, which was key for us to provide networking capabilities in the cloud and better manage compliance with GDPR and European Data Security Operation laws, which was not possible on other clouds. Many clouds in the field are what we call north-south clouds. They terminate web traffic. Our solution forwards the traffic for our mobile users — what we call east-west traffic. IBM Cloud is the only one that still allows us to transport data from node to node in a network, and not just terminate it.
For us, one of the biggest advantages in choosing IBM Cloud, in addition to all the automation and speed, is that as a team of 30 people, we can deliver globally on a cloud platform that is deployed globally. And we don’t need to invest a penny for that; we can utilize computer resources that are virtually everywhere.
Software-defined networking is a radical change in the way networking is approached today, as it brings the entire software development ecosystem close to the network, allowing operators to integrate all the network resources into the application domain. We moved to IBM Cloud Kubernetes Service and container deployment because you get an environment where you can run rather simple services with five-nines (99.999 percent) service availability. And it's a five-nines environment that you get mostly for free, by following Kubernetes or cloud-native principles. With Kubernetes, there's a common API. It works on private cloud and private deployments, but it also works in public clouds. You are totally agnostic, from developer notebook to private cloud deployments to edge deployments. You deploy in exactly the same way again and again. And this is only possible with Kubernetes.
Promise of 5G
For our industry, there's a promise of 5G, and that cannot be fulfilled by the carrier alone anymore. There needs to be trust between operators and cloud providers to deliver a distributed infrastructure. Operators trust software vendors like us to create services for them. The whole 5G promise needs to rest on more shoulders than it does at the moment, so that's a little bit of a paradigm shift. It's the first time in the mobile industry that we have had this kind of shift. We need to create another infrastructure for communications services in the field, and that needs to be distributed; the cloud is the foundation for that. You don't need to mount telecommunications equipment in owned data centers anymore because 90 percent of the spec is available in the cloud. You can book resources wherever you want to go. And this is a huge advantage: global carriers or local carriers can act globally and fulfill local regulations. A company from Germany can deploy in South Korea, as we have done on IBM Cloud. This was not possible in the past, but it's possible today with cloud resources. In our experience, especially in Europe, IBM plays a role because it is a trusted partner of big customers, and therefore the entry was relatively easy for us.
Read the Travelping case study for more details.
 
Source: Thoughts on Cloud

OpenShift 4.3: Deploying Applications in the OpenShift 4.3 Developer Perspective

Deploying applications
In this article, we will take a look at improvements to the user flows for deploying applications in the OpenShift 4.3 Developer Perspective. You can learn more about all the improvements in the OpenShift 4.3 release here. Since the initial launch of the Developer Perspective in the 4.2 release of OpenShift, we've had frequent feedback sessions with developers, developer advocates, stakeholders, and other community members to better understand how the experience meets their needs. While, overall, the user interface has been well received, we continue to gather and use the feedback to enhance the flows.
The +Add item in the left navigation of the Developer Perspective is the entry point for developers to add an application or service to their OpenShift project. The Add page offers six user flows for adding components: importing from Git, deploying Container Images, adding an item from the Developer Catalog, importing a Dockerfile from a Git repo, importing YAML, or adding a Database. Developers can easily create, build and deploy applications in real time using these user flows.

What are the improvements in 4.3?
Builder Image detection for From Git
The Import from Git flow has been enhanced to help users easily create applications by auto-filling the details, making the process more automated. We have introduced auto-detection of the builder image, providing assistance in determining the right build strategy.
In 4.3, as soon as the user enters a Git repo URL, the URL is validated. Once the URL is validated, builder image detection starts. The recommended builder image is indicated with a star and is selected by default.

By suggesting a builder image, we are trying to reduce the number of steps it takes a user to build their application. However, the user is free to select a different builder image. To further increase the efficiency of the flow, the 'Application' and 'Name' fields have smart defaults based on the 'Git Repo URL' entered. These fields can also be edited if the defaults are not what the user wants. Providing the user with optional suggestions in the form fields helps the user proceed faster without mandating what they enter.
Deploy Image from Image Stream 
The Deploy Image flow now offers the ability to deploy an image using an image name from an internal registry. This option was present in the 3.11 release of OpenShift and is being reintroduced in 4.3 with some enhancements.
The user identifies the container image to be deployed by selecting the associated Project, Image Streams, and Tag in the Image section of the form.

To improve on this flow from 3.11, upon project selection we verify that there is proper access to pull images from it. When there isn't proper access, the user can choose to grant that access via a checkbox, which is selected by default.

Resources section
In the initial 4.2 release of the Developer Perspective, the Import from Git, Import from Dockerfile and Deploy Image user flows created Deployment Configs by default. When the Serverless Operator was installed, a Serverless section was displayed, allowing the user to select a checkbox indicating that they wanted a Knative Service created.
In 4.3, we have added a Resources section to these flows, allowing the user to select the type of resource to create. By default, a Kubernetes Deployment will be created. Other resource types available for selection are Deployment Config and Knative Service. The Knative Service option is only available when the OpenShift Serverless Operator is installed. Since these forms are dynamic and change based on user selections, the 'Advanced options' available will differ depending on the resource that is selected; a sketch of the Knative Service case follows below.
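To make the resource choice concrete, selecting Knative Service results in a serverless, scale-to-zero workload instead of a plain Deployment. A minimal sketch of the kind of resource created, assuming the serving.knative.dev/v1 API and an illustrative image reference:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app
  namespace: my-project
spec:
  template:
    spec:
      containers:
      # Image built from the Git repo or chosen in the Deploy Image flow
      - image: image-registry.openshift-image-registry.svc:5000/my-project/my-app:latest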
Learn More
Interested in learning more about application development with OpenShift?  Here are some resources which may be helpful:

Red Hat resources on application development on OpenShift : developers.redhat.com/openshift

Provide Feedback

Join our OpenShift Developer Experience Google Group, participate in discussions or attend our Office Hours Feedback session
Drop us an email with your comments about the OpenShift Console user experience.

 
Source: OpenShift

OpenShift 4.3: Creating virtual machines on Kubernetes with OpenShift’s CNV

Whether you are a new or a seasoned Kubernetes user, or you’re just considering working with Kubernetes, you have probably started exploring the technology and how best to integrate virtual machines with the Kubernetes engine. But which solution fits your needs? Is there a way to leverage both the isolation virtual machines provide and the orchestration platform of the Kubernetes engine? With Red Hat OpenShift, you can do both.
OpenShift 4.3 offers the ability to run both container-based workloads and virtual machines side by side as workloads on a Kubernetes cluster. Installing the Container-native virtualization operator on OpenShift will allow you to create, run, and manage VMs, as well as provide templates to quickly create the same VM multiple times.

(image from: https://www.openshift.com/)
But what is the virtualization experience in OpenShift? How advanced or simple are these virtual machines? Are they both highly customizable for the advanced user and easy to use for the novice user? Is it similar to virtual machines as we know them today? Enter Container-native virtualization.
OpenShift's Container-native Virtualization (CNV) is Red Hat's solution for running VMs on a Kubernetes cluster. CNV is set to achieve two goals. The first is to help all users, whether they are long-term virtual machine users or new to the VM world, consolidate their workload footprint onto one platform, thus reducing the operational overhead of managing an additional virtualization platform alongside a container platform. The second is to leverage the strength of the Kubernetes engine and its ecosystem and help users modernize their traditional workloads' capabilities, orchestration, and architecture.
CNV is a set of extensions to Kubernetes, designed to integrate seamlessly with other workloads. By doing so, you can now enjoy the best of both worlds, running virtual machines as you know them, but with the added benefit of all the features that Kubernetes offers.
By positioning Kubernetes at the center of running all sorts of workloads, the transition from VMs to Kubernetes becomes as simple as possible for users of any experience level.
In this post we will demonstrate how easy it is to manage virtual machines in OpenShift and take a look at the added benefits of the underlying Kubernetes technology.
We will start by breaking down the key features in the creation process. Let’s jump right in.
Prerequisite
You have deployed an OpenShift cluster on bare metal and installed the Container-native Virtualization operator, which enables you to run virtual machines on it.
Introducing virtual machines on OpenShift 
You can find the virtual machines section in the left-side global navigation, under the 'Workloads' tab. A click on the 'Virtual Machines' menu item will land you on the list of virtual machines in your cluster. Naturally, if this is your first time creating a virtual machine on this cluster, this list will be empty at this point. 

Click on the 'Create Virtual Machine' primary button at the top of the list. This will bring you to our brand new VM creation wizard. The wizard covers the basic concepts required for any functioning virtual machine, as well as some more advanced features.
Create Virtual Machine wizard
Step 1: General

The first step, 'General', covers the basic info required to create a virtual machine. On completing this step, you can already skip everything else and jump to the review step by clicking the 'Review and Create' button. This button remains available from this point forward.
Let’s break this step down.
Templates
Templates are sets of virtual machine configurations defined by users. They may have all fields predefined or leave some of them empty; they may also include Networking and Storage configurations, as well as other properties. You can choose to use one of them as-is and create your virtual machine, or use them as starting points that get you closer to your desired configuration.
When a template has been modified the template field will reflect this by appending -modified to the template name.
If you are interested in starting a fresh configuration from scratch, select ‘None’.
Please note that some templates may include different configurations for the next steps on the wizard as well, while others may only affect this ‘General’ step.
Source Field
In the Source field you select the method of providing the operating system installation. There are four options to select from when providing a source for the VM:
PXE: 
PXE enables the user to install an OS and configure it over a network. PXE depends on DHCP to find the PXE Server on an L2 Network. 
URL:
An external URL to the .iso, .img, .qcow2 or .raw image that the virtual machine should be created from.
Container:
An ephemeral virtual machine disk image that will be pulled from a container registry.
Note that this method requires specifically prepared container images, not just any container (see the sketch after this list).
Select an existing Persistent Volume Claim, previously cloned or created, as a disk. When selecting "Attach Disk", the user will be prompted to select an available disk that has been previously cloned or created and made available as a Persistent Volume Claim.
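Under the hood, the wizard produces a VirtualMachine custom resource. As a reference, here is a minimal sketch of such a resource using the Container source type (assuming the kubevirt.io/v1alpha3 API and the community Fedora demo container disk image):

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: my-fedora-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          # Boot disk backed by the ephemeral container image below
          - name: containerdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
      # Ephemeral disk pulled from a container registry (the Container source type)
      - name: containerdisk
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo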
Operating systems, Flavors, and Workload profiles
Selecting an operating system will fetch the relevant required fields and the available options for that OS. Based on this, the console will provide predefined resource request configurations called 'Workload profile' and 'Flavor', suited to that OS, which we will look into next.
A Flavor is a CPU and memory resource request, combined into five off-the-shelf sizes: tiny, small, medium, large, and custom.
While these flavor sizes differ in specification from one operating system to another, they are aligned with the amount of resources required to run that OS with the profile the user has selected.
By selecting custom, the user can request different CPU and memory amounts, as long as they do not exceed the cluster's limits or go below the minimum required to run that OS.
The user can choose from three workload profiles: High-Performance, Server or Desktop.
Each of these profiles provides a configuration better suited to the character of the workload that will run on the requested VM.
Name and Description
This name will appear as the VM's name and as the title of the VM's page, as well as in any list the VM appears on. The description will appear only on the VM's overview page.
Step 2: Networking
The next step in the creation wizard is 'Networking'. Upon entering this step, you are presented with an already existing network interface (NIC). You can customize this network interface, add new ones, or remove any of them. In this step too, the 'Review and Create' button is available to skip the rest of the steps.
Step 3: Storage

The third step, 'Storage', shows a list of the disks to be included in the upcoming VM. Depending on the source you selected in the first step, this might be pre-populated with some disks, or with none.
The user can add, modify or remove a disk. Adding a disk means either creating a new disk or attaching an existing one.
Clicking the 'Add disk' button will pop up the disk modal, where you can choose the type of disk from the Source drop-down menu:
Blank (default), URL, Container, Attach disk, or Attach cloned disk.
Step 4: Advanced

The "Advanced" settings step of the create VM wizard introduces the ability to configure cloud-init. In future versions, additional options like virtual hardware and boot order will be included as well.
Cloud-Init
In this section you have two options to configure the “cloud-init” settings:

By filling a form (default) to define a set of common parameters like hostname and authorized SSH keys.
By providing a custom cloud-init script.
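Whichever option is used, the input ends up as user data on a cloud-init volume of the VM. A minimal sketch, assuming the cloudInitNoCloud volume type and placeholder values for the hostname and key:

```python
# Placeholder #cloud-config document; hostname and key are examples.
USER_DATA = """#cloud-config
hostname: demo-vm
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example.com
"""

def add_cloud_init(vm: dict) -> dict:
    """Attach the user data above to a VM body as a cloud-init disk."""
    spec = vm["spec"]["template"]["spec"]
    spec["domain"]["devices"].setdefault("disks", []).append(
        {"name": "cloudinitdisk", "disk": {"bus": "virtio"}}
    )
    spec.setdefault("volumes", []).append(
        {"name": "cloudinitdisk", "cloudInitNoCloud": {"userData": USER_DATA}}
    )
    return vm
```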

Step 5: Review

This last step is a summary of the settings you have configured. Here you can review and verify the configuration before creating the virtual machine.
From here you can jump back to any of the previous steps, either with the ‘Back’ button or by clicking their names in the left navigation.
Clicking “Create Virtual Machine” sets the VM creation in motion. Once the VM has been created, you land on the “Results” page, which lets you navigate to the details page or go to the list.
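For completeness, the wizard’s final click amounts to a single API call against the cluster. A sketch using the Python kubernetes client; the group and plural follow KubeVirt conventions, and the version shown is an assumption that may differ between CNV releases:

```python
from kubernetes import client, config

def create_vm(vm: dict, namespace: str = "default") -> dict:
    """Submit a VirtualMachine body such as the one sketched earlier."""
    config.load_kube_config()  # use load_incluster_config() inside a pod
    api = client.CustomObjectsApi()
    return api.create_namespaced_custom_object(
        group="kubevirt.io",
        version="v1alpha3",  # assumption; check your CNV release
        namespace=namespace,
        plural="virtualmachines",
        body=vm,
    )
```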

Summary
Container-native Virtualization adds the ability to easily create and manage traditional virtual machines inside OpenShift alongside standard container workloads. Its deep integration into the OpenShift UI makes the first steps easy and intuitive. Powered by Kubernetes, VMs have never been so well integrated with containers. Try this hybrid deployment and see for yourself how easy it is.
Read more about

OpenShift’s design repository
OpenShift’s Virtualization design repository
OpenShift Tech Topic Container-native Virtualization
Container-native Virtualization FAQs
Kubevirt (Upstream of CNV)

 
The post OpenShift 4.3: Creating virtual machines on Kubernetes with OpenShift’s CNV appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Introduction to Customer Empathy Workshops

Product feedback from users goes a long way. It’s why Red Hat’s OpenShift Web Console UI is as awesome as it is today. Features like Dashboards and Topology were added because of user feedback—and that’s how we plan on enhancing the console even further. One thing’s for sure: The path to a better console experience relies on continued customer engagement.
Thus, Red Hat has launched a series of workshops specifically geared towards engaging and empathizing with OpenShift customers in order to better understand their needs. We’ve dubbed them customer empathy workshops.
What they are
Our customer empathy workshops enable customers to collaborate with OpenShift user experience, development, and product management to directly influence the future of the OpenShift console. Each workshop focuses on a specific topic so that the group can really home in on the challenges they face in that area. Customers are introduced to our design thinking process as we dive into real product development challenges, starting with problem discovery and following with solution ideation.
The design thinking process

Hands-on activities give our customers the unique opportunity to connect with the OpenShift team, share their pain points, and collaborate with other community members throughout the session. This kind of collaboration makes the product what it is today, so we want to continue engaging with users as much as possible.
Value for our customers 
These workshops certainly help the product evolve, but they also give customers an opportunity to discover, impact, and connect.
Discover: Our customers will get the opportunity to learn how product decisions are made, from the small fixes to the larger feature additions. They can also share their pain points, what they struggle with, and where they need help, as well as learn how other companies have overcome similar obstacles.
 
Impact: Customers can lead the conversation around the OpenShift user experience, engage with other OpenShift users, and collaborate through knowledge sharing and group solution ideation.
 
Connect: Discussing the OpenShift console with users brings together folks from different countries, industries, and technical backgrounds. We hope that our participants walk away with new connections and feel even more connected to the OpenShift community.
Our opportunities
While customers are discovering, impacting, and connecting, we’re gaining valuable insight from all the feedback. Specifically, we have the opportunity to listen, prioritize, and design.
Listen: We want to learn more about how our customers use OpenShift: What their environment looks like, how many people are on their team, what their biggest pain points are, and more.
 
Prioritize: Through hands-on activities, we hope to better understand the problem statements that arise throughout our workshop. The more we learn about customer pain points and what ideal solutions might look like, the better we can design a powerful experience.
 
Design: At the end of the day, we want to take all customer feedback and implement features that make the OpenShift experience better. So after each workshop, we’ll analyze the data, explore the proposed solutions, and design a fix or new feature to address them.
Stay tuned
This series has been an exciting addition to our engagement efforts, and we’ve heard some great feedback from those who have already participated. Following each workshop, we’ll share a summary and preliminary results. So keep an eye out for upcoming customer empathy workshops and content. We look forward to sharing the results with you in an upcoming blog article!
The post Introduction to Customer Empathy Workshops appeared first on Red Hat OpenShift Blog.
Source: OpenShift

How Omnitracs Transformed to a DevOps Culture with OpenShift

Omnitracs has taken an interesting road to get to its current position as a leader in fleet management software for logistics and transportation companies. Their SaaS-based offering allows companies to track, monitor, and bring into compliance all of their trucks and shipping vehicles around the globe from one system. But just because Omnitracs users were consuming cloud-based software as a service doesn’t mean Omnitracs developers were fully utilizing the cloud and the agile methodology it enables.
That’s only been the case for the past year, in fact, since Omnitracs began adopting Red Hat OpenShift. Andrew Harrison, lead IT DevOps Engineer and lead of the Agents of Change team at Omnitracs, was tasked with building the company a road to the future of software development, and the pavement on this road was built with OpenShift.
Since 2014, Omnitracs has been growing rapidly, launching over 30 new products, and merging in the assets from a number of acquired companies. To keep up with all of this growth, the developers in the company had to transform their way of doing things, top to bottom.
Thus, a year ago, Harrison was placed in charge of effecting change throughout Omnitracs’ IT organization. That meant introducing DevOps, automation, agile methodologies, and continuous integration and deployment. That’s a tall order: a single team spreading such changes through an entire enterprise.
And yet, a year later, Harrison said he has successfully transitioned the company away from a “waterfall” style of development and deployment toward a DevOps- and agile-based approach, thanks to the help of the Red Hat OpenShift platform. Instead of code pouring in as it was completed, like a waterfall, developers were able to iterate over time in smaller chunks. While the move began with OpenShift 3.11, the company was also one of the first to roll OpenShift 4.1 out to production systems.
Harrison said the benefits of moving to OpenShift 3.11 were immediately noticeable. “With OpenShift 3.11, we were able to get immediate cost reduction because we moved everything out to the public cloud. We got rid of on-premise costs immediately. We wrote custom Ansible playbooks to make sure everything was infrastructure as code and was always repeatable. We reduced our environment deployment times from over a week to less than 2 hours,” said Harrison.
Those technical wins were paralleled by cultural wins as well, he added. “We had a total transformation in the IT department, especially in our organization. Our team, The Agents of Change, evolved from the traditional waterfall style to a real agile methodology which greatly reduced all of our release times.”
Soon after moving onto OpenShift, the team at Omnitracs embraced OpenShift 4.1 and the Operators model for service availability. Harrison noted that the Operators model enabled faster integration of services essential to building out a production-grade cluster. The team now uses Splunk for logging and Ansible for automation.
Software is not the only thing changing inside the Omnitracs OpenShift clusters, however. The company is also using this Kubernetes-based cloud software to improve its internal IT teams. Said Harrison: “We took the SRE [Site Reliability Engineering] model and turned it on its ear a little. We’re spinning up 35 new scrum teams in the coming years, and we’re not going to find SREs for each of those teams; it’s not going to work. So we took a different approach: we’re using a virtual ops approach, where we’re embedding with these teams now and teaching them the ops part of devops. They are going to be their own SREs. They will have the ability and proper permissions in OpenShift to spin up their own namespaces, start projects, give people access to them, and all of that. We’re really giving them the ability to provide care and feeding for their own areas, but at the same time, giving them the ownership of it. So now when they are building their applications, they are actively thinking about the operationalization of it.”
That transition also means internal developers are expanding their skill sets to include more capabilities, enhancing their resumes and growing their careers while also contributing to the growth of the company itself. It’s a win-win situation. “They know that the logs are collected in Splunk and through VictorOps, they get that notification back when something’s gone wrong. It’s just part of how they do their daily job now,” said Harrison. 
How did they manage to put all this power into the hands of the developers themselves? “Early adoption of OpenShift 4 was critical to our success. We were very much looking at this as if we were going to be on this for the next five years and didn’t want to get stuck on an older version, so we took that risk. We were very tightly working with Red Hat while doing this. They’ve been embedded with our teams for the last year. They are a part of our team. We drove innovation within our team and within Red Hat. The stuff we were working on was being brought back into Red Hat to bring the platform to where it is today,” said Harrison.
“That transformation we talked about earlier from traditional systems administrators in an operational role to devops engineers, we did that in less than a year. We weren’t doing this a year ago. We were traditional systems administrators in our silos. We had Linux guys, we had VMware guys, we had network guys. Now we’re all on a team together, we’re all doing this stuff every day. It’s been an amazing transformation for the technology we’re working in and for our careers. It’s been great, and that tight partnership facilitated that shift very easily. Having Operators available to us allowed us to deliver services to the dev teams almost immediately on day one,” said Harrison.
 
The post How Omnitracs Transformed to a DevOps Culture with OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift