Deploy PostgreSQL in OpenShift backed by OpenShift Container Storage

> Note: This scenario assumes you already have an OpenShift cluster or have followed the instructions in the Deploying OpenShift Container Storage 4 to OpenShift 4 Blog to set up an OpenShift Container Platform 4.2.14+ cluster using OpenShift Container Storage 4.

1. Overview
PostgreSQL has been the fastest growing open source RDBMS over the past decade. It has a solid community and has been adding features for many years. PostgreSQL offers ACID (Atomicity, Consistency, Isolation and Durability) properties. It has indexes (primary/unique), updatable views, triggers, foreign keys (FKs) and even stored procedures (SPs). PostgreSQL also features built-in replication via shipping the WAL (Write Ahead Log) to a number of database replicas, which can be used in read-only mode. It also supports synchronous replication, where the master waits for at least one replica to have written the data before acknowledging the commit.
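As an illustration of the replication options mentioned above, here is a minimal sketch using standard postgresql.conf parameters (the standby name is an example value, not from this scenario):

```conf
wal_level = replica                      # write enough WAL for streaming replicas
max_wal_senders = 3                      # allow up to three replication connections
synchronous_standby_names = 'standby1'   # commit waits for this standby's ACK;
                                         # leave empty for asynchronous replication
```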
1.1. This Blog will show you how to:
Create a PostgreSQL template that uses Red Hat OpenShift Container Storage 4 persistent storage and learn about the template variables.
Deploy a PostgreSQL database via oc new-app using our new template
Learn about pgbench basics and deploy a pgbench container
Run performance testing to measure the performance of your database
Do all this from the command line!
2. Prerequisites:

Make sure you have a running OCP cluster.
A copy of this git repo.
NOTE: The scripts in this lab will work on Linux (various distributions) and MacOS. They have not been tested on any Windows OS.
Please git clone this repository; you’ll find all the scripts needed in the ocs4postgresql directory:

git clone https://github.com/red-hat-storage/ocs-training.git
3. Deploying PostgreSQL
3.1. Our PostgreSQL example
This example will deploy a PostgreSQL database that will use OpenShift Container Storage persistent storage and then run pgbench on the newly deployed database. pgbench is maintained together with PostgreSQL. It is used to create data and also to stress the database. It has many options and variations, but we are going to use the default workload scenario, which is loosely based on TPC-B (a more write-oriented workload). We will also learn about the pgbench output.
3.2. Creating a PostgreSQL template that uses OpenShift Container Storage 4
OpenShift Container Platform comes pre-configured with two PostgreSQL templates to use:
oc get templates -n openshift -o custom-columns=NAME:.metadata.name|grep -i ^postgres
Example output:
postgresql-ephemeral
postgresql-persistent
We are going to create a new template based on the postgresql-persistent template. To do this, we run the create_ocs_postgresql_template script.
The script parameters (CSI_DRIVER, PV_SIZE and NEW_TEMPLATE_NAME) are self-explanatory. You can edit the script and change them, but remember that going forward, some sections might reference the default value of NEW_TEMPLATE_NAME. The script will perform the following tasks:

Change the name of the template (so it can coexist with the one we copied from).
Add the storage class (sc) we want to use in the template (the postgresql-persistent template just uses the default storage class in OCP).
Add/change the size of the Persistent Volume (PV) we want for the PostgreSQL database.
Run the `oc create` command to create the new template.
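The steps above can be sketched as a small shell fragment. This is illustrative only (not the contents of the actual script), applied to a trimmed template snippet; the real script operates on the full template object:

```shell
NEW_TEMPLATE_NAME=postgresql-persistent-ocs
PV_SIZE=50Gi

# Rename the template and set the PV size; in the real flow the input
# comes from: oc get template postgresql-persistent -n openshift -o yaml
edited=$(sed -e "s/name: postgresql-persistent$/name: $NEW_TEMPLATE_NAME/" \
             -e "s/storage: .*/storage: $PV_SIZE/" <<'EOF'
metadata:
  name: postgresql-persistent
objects:
- kind: PersistentVolumeClaim
  spec:
    storageClassName: ocs-storagecluster-ceph-rbd   # added by the script (CSI_DRIVER)
    resources:
      requests:
        storage: 1Gi
EOF
)
echo "$edited"
# The edited template would then be created with: oc create -n openshift -f -
```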

Run the create_ocs_postgresql_template script:
$ bash create_ocs_postgresql_template
After running the script, you should see another PostgreSQL template:
oc get templates -n openshift -o custom-columns=NAME:.metadata.name|grep -i ^postgres
Example output:
postgresql-ephemeral
postgresql-persistent
postgresql-persistent-ocs
The last template, postgresql-persistent-ocs, is the one that we are going to use.
3.3. Creating our project
Let’s create a project that will run our example scenario:
oc new-project my-postgresql
Once the project is created (oc will automatically switch to it), let’s create the PostgreSQL database:
oc new-app --name=postgresql --template=postgresql-persistent-ocs
Pay close attention to the output of the oc new-app command, especially the section below “* With parameters:”, as it contains the randomly generated username (“PostgreSQL Connection Username=”) and password (“PostgreSQL Connection Password=”) needed to connect to the database, for example:
* With parameters:
* Memory Limit=512Mi
* Namespace=openshift
* Database Service Name=postgresql
* PostgreSQL Connection Username=user8CK # generated
* PostgreSQL Connection Password=DmoXvvuh6PetIG5V # generated
* PostgreSQL Database Name=sampledb
* Volume Capacity=50Gi
* Version of PostgreSQL Image=10

You can monitor the creation of the PostgreSQL pod using oc get pods. You will see that two pods are being created: a deployer pod named postgresql-1-deploy, and the actual pod running the database, named postgresql-1-<id>.
The output should be similar to this:
$ oc get pods
NAME READY STATUS RESTARTS AGE
postgresql-1-deploy 0/1 Completed 0 65s
postgresql-1-ptcdm 1/1 Running 0 57s

Once the PostgreSQL pod is running and ready (in the above output the name is postgresql-1-ptcdm), we have a running database. Now we can create a pod that will contain pgbench; to do so, we will use a container I’ve created and used for all my PostgreSQL tests.
The yaml file looks like this:
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: pgbench
  name: pgbench
spec:
  containers:
  - image: quay.io/sagyvolkov/pgbench-container:0.1
    imagePullPolicy: IfNotPresent
    name: pgbench-pod
    resources: {}
    securityContext:
      capabilities: {}
      privileged: false
    terminationMessagePath: /dev/termination-log
  dnsPolicy: Default
  restartPolicy: OnFailure
  serviceAccount: ""
status: {}
You can copy/paste this YAML into a file (let’s call it pgbench.yaml) and then run:
oc -n my-postgresql apply -f pgbench.yaml
Once the pgbench pod is up, let’s make sure we can connect to our PostgreSQL database.
First we need the username and password that the oc new-app command returned. We also need the IP address of the service that was created when we ran oc new-app. To get the service IP, run:
oc get svc -o custom-columns=CLUSTER-IP:.spec.clusterIP
Now that we have all the information to test connectivity, we can rsh into the pgbench pod:
oc rsh pgbench
and then, once inside the pod, run: psql -U <username> -h <service IP> sampledb
for example:
psql -U user8CK -h 172.30.126.152 sampledb
Once you can see that you can login to the sampledb database, just type exit to leave psql.
Now, we can load data via pgbench. The pgbench container holds a wrapper script to run pgbench (as I wrote, this container is used for performance testing). It is out of the scope of this Blog to go over all the parameters of the run_pgbench script, but feel free to cat the script once you rsh to the pgbench pod.
To load our data, the command will be (again, this is run from within the pgbench pod): ./run_pgbench init <service IP> <username> 10 1 1 simple time 60 yes no <password> sampledb 10

> NOTE: Please leave the parameters that are not enclosed with <> as they are.
for example:

./run_pgbench init 172.30.126.152 user8CK 10 1 1 simple time 60 yes no DmoXvvuh6PetIG5V sampledb 5
One of the parameters in this script is the scale factor of the data. In this case it is set to 10 (the 4th parameter), which will create a very small database; you can use a much larger scale factor to create a bigger database (for example, a scale factor of 5350 is about 75GB in database size). When the load is done, we can use the same script to run the workload:
./run_pgbench workload 172.30.126.152 user8CK 10 1 1 simple time 60 yes no DmoXvvuh6PetIG5V sampledb 5
With these variables, run_pgbench will run pgbench for 60 seconds, using 2 clients and 2 threads, and the output will be sampled every 5 seconds.
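As a rough sizing aid (my arithmetic from the ratio quoted earlier, scale 5350 ≈ 75GB; not an official pgbench figure), you can estimate the database size per scale-factor unit:

```shell
# ~14.4 MB per scale-factor unit, derived from scale 5350 ≈ 75 GB
mb_per_unit=$(awk 'BEGIN{printf "%.1f", 75*1024/5350}')
echo "MB per scale unit: $mb_per_unit"
# Estimate for the scale factor of 10 used in this example:
awk -v m="$mb_per_unit" 'BEGIN{printf "scale 10 => ~%.0f MB\n", 10*m}'
```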
The output will be similar to this:
Running pgbench workload …
starting vacuum…end.
progress: 5.0 s, 589.6 tps, lat 3.390 ms stddev 2.279
progress: 10.0 s, 613.2 tps, lat 3.261 ms stddev 2.026
progress: 15.0 s, 623.6 tps, lat 3.207 ms stddev 2.399
progress: 20.0 s, 624.2 tps, lat 3.204 ms stddev 4.685
progress: 25.0 s, 690.2 tps, lat 2.898 ms stddev 1.555
progress: 30.0 s, 681.8 tps, lat 2.933 ms stddev 2.599
progress: 35.0 s, 632.4 tps, lat 3.141 ms stddev 7.810
progress: 40.0 s, 628.4 tps, lat 3.204 ms stddev 5.069
progress: 45.0 s, 568.6 tps, lat 3.517 ms stddev 3.696
progress: 50.0 s, 601.8 tps, lat 3.323 ms stddev 2.555
progress: 55.0 s, 583.4 tps, lat 3.429 ms stddev 3.358
progress: 60.0 s, 623.0 tps, lat 3.211 ms stddev 1.025
transaction type:
scaling factor: 10
query mode: simple
number of clients: 2
number of threads: 2
duration: 60 s
number of transactions actually processed: 37303
latency average = 3.217 ms
latency stddev = 3.716 ms
tps = 621.691291 (including connections establishing)
tps = 621.719866 (excluding connections establishing)
END-PGBENCH-WORKLOAD

real 1m0.026s
user 0m0.482s
sys 0m2.045s

What we can see here is that we achieved 37303 transactions during our 60-second test, with an average of roughly 621 tps (transactions per second) and an average latency of 3.217 ms.
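These summary numbers are internally consistent; a quick cross-check (tps = transactions/duration, and with 2 clients the average latency is roughly clients/tps):

```shell
tx=37303; dur=60; clients=2
# tps from the raw transaction count, and the latency implied by it
awk -v t="$tx" -v d="$dur" -v c="$clients" \
    'BEGIN{tps=t/d; printf "tps=%.1f avg_latency_ms=%.3f\n", tps, 1000*c/tps}'
```

This prints tps=621.7 and avg_latency_ms=3.217, matching the pgbench summary above.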
As previously stated, you can play with parameters of the run_pgbench script to run a heavier, longer workload or to create a bigger database.
Resources and Feedback
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
The post Deploy PostgreSQL in OpenShift backed by OpenShift Container Storage appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

OpenShift 4.3: User Management Improvements

The Red Hat OpenShift Web Console has always strived to be the easiest way to interact with OpenShift resources, and in version 4.3 we’ve added more capabilities around viewing and editing user management resources. Dedicated pages to view Users and Groups for the cluster have been added, allowing cluster admins to easily see who has access to the cluster and how they are organized. These new pages are consolidated under one navigation section, so there is now just one place to look for any user management resource. Let’s take a closer look.
Viewing cluster users in the console
The OpenShift Web Console now includes a list of users who have previously logged into the cluster, a place where admins can come to see which users have authenticated to the system using which Identity Provider.
Admins can now impersonate users from this list to see the console exactly as a user with those permissions would, making it easy to test and troubleshoot RBAC settings. Previously, impersonation was only available for role bindings, to test a single role; being able to impersonate a user and exercise all of their roles at once will ease more complex access-related tasks. To read more about impersonation in OpenShift, check out this blog post.

Details about an individual user can also be viewed, giving an admin a quick understanding of that user with the ability to view and edit the comprising YAML.

The Role Bindings tab for a user gives a summarized look at what roles that user has access to, with the ability to add additional role bindings right from that list.

Managing users in groups
Also new in OpenShift 4.3 is a dedicated view of the groups on the cluster. Admins can see what groups exist and how many users are contained in each.

Viewing the details of a group gives an overview, including a list of its current members with the option to view the details of a particular user.

Role bindings for the group are also viewable, letting an admin know what roles users in that group are inheriting, with the option to add more.

All in one place, with more to come
To make these User Management pages quick to locate, we’ve created a new navigation section to contain Users and Groups alongside Service Account objects, and also Roles and Role Bindings. This one area for all things User Management will serve as the home for future improvements as well, like continuing to refine how users are assigned roles.
If you’d like to learn more about what the OpenShift team is up to, check out our github design repo, or if you are interested in providing any feedback on any of the new 4.3 features, please take this brief 3-minute survey.
The post OpenShift 4.3: User Management Improvements appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Community Blog Round Up 20 January 2020

We’re super chuffed to see another THREE posts from our illustrious community – Adam Young talks about api_port failures and self-service speed bumps, while Lars explores literate programming.
Shift on Stack: api_port failure by Adam Young
I finally got a right-sized flavor for an OpenShift deployment: 25 GB Disk, 4 VCPU, 16 GB Ram. With that, I tore down the old cluster and tried to redeploy. Right now, the deploy is failing at the stage of the controller nodes querying the API port. What is going on?
Read more at https://adam.younglogic.com/2020/01/shift-on-stack-api_port-failure/
Self Service Speedbumps by Adam Young
The OpenShift installer is fairly specific in what it requires, and will not install into a virtual machine that does not have sufficient resources. These limits are 16 GB RAM, 4 Virtual CPUs, and 25 GB Disk Space. This is fairly frustrating if your cloud provider does not give you a flavor that matches this. The last item specifically is an artificial limitation as you can always create an additional disk and mount it, but the installer does not know to do that.
Read more at https://adam.younglogic.com/2020/01/self-service-speedbumps/
Snarl: A tool for literate blogging by Lars Kellogg-Stedman
Literate programming is a programming paradigm introduced by Donald Knuth in which a program is combined with its documentation to form a single document. Tools are then used to extract the documentation for viewing or typesetting or to extract the program code so it can be compiled and/or run. While I have never been very enthusiastic about literate programming as a development methodology, I was recently inspired to explore these ideas as they relate to the sort of technical writing I do for this blog.
Read more at https://blog.oddbit.com/post/2020-01-15-snarl-a-tool-for-literate-blog/
Quelle: RDO

Configure OpenShift Metrics with Prometheus backed by OpenShift Container Storage

Note: This scenario assumes you already have an OpenShift cluster or have followed the instructions in the Deploying OpenShift Container Storage 4 to OpenShift 4 Blog to set up an OpenShift Container Platform 4.2.14+ cluster using OpenShift Container Storage 4.

1. This Blog will cover:
What Prometheus is
Checking the current storage backend of your Prometheus environment
Making your monitoring data persistent
2. What is Prometheus?
From the official Prometheus website:
“Prometheus was started in 2012 by SoundCloud and is an open-source monitoring and alerting toolkit. Nowadays it is a stand-alone project and independent of any single company. Due to its design, it doesn’t rely on a cluster or distributed database, but all its nodes are autonomous. All communication happens through HTTP and Prometheus pulls information from its nodes rather than receiving them like Nagios, for example.”
3. Modify your Prometheus environment
By default, Prometheus in Red Hat OpenShift Container Platform 4 is deployed on ephemeral storage so it is now time to talk about adjusting the environment to your needs. Every supported configuration change is controlled through a central ConfigMap, which needs to be created before we can make changes.
3.1. Create the ConfigMap
When you start off with a clean installation of OpenShift, the ConfigMap to configure the Prometheus environment may not be present. To check if your ConfigMap is present, execute this:
oc -n openshift-monitoring get configmap cluster-monitoring-config

Output if the ConfigMap is not yet created:
Error from server (NotFound): configmaps "cluster-monitoring-config" not found

If you are missing the ConfigMap, create it:
oc -n openshift-monitoring create configmap cluster-monitoring-config

You can edit the ConfigMap with the following command. Do this now and ensure that the ConfigMap looks like below – especially the data section should be present:
oc -n openshift-monitoring edit configmap cluster-monitoring-config

ConfigMap content
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |

3.2. Configuring persistent storage for the Prometheus stack
The Prometheus stack consists of the Prometheus database and the alertmanager data. Persisting the data from both is a best practice, since data loss on either will cause you to lose your collected metrics and alerting data. View the official OpenShift 4.2 documentation about this topic for further information.
While the documentation recommends using the local-storage provider, we will set up the Prometheus stack to use OpenShift Container Storage. By doing so, we will ensure that the Prometheus Pods can move freely between Nodes. Watch out for our performance briefs where we will show what this means for performance, by comparing the performance of the default EmptyDir, the recommended local-storage and OpenShift Container Storage-backed Prometheus.
To configure the Prometheus stack to use OpenShift Container Storage, edit the ConfigMap that was created in the previous section:
oc -n openshift-monitoring edit configmap cluster-monitoring-config

ConfigMap content
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        metadata:
          name: prometheusdb
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 40Gi
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: alertmanager
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 40Gi

Once you save and exit the editor, the affected Pods will automatically be restarted and the new storage will be applied.

Note: It is not possible to retain data that was written on the default EmptyDir-based installation. Thus you will start with an empty database after changing the backend storage.
After a couple of minutes, the Alertmanager and Prometheus Pods will have restarted and you will see new PVCs in the openshift-monitoring namespace:

oc get -n openshift-monitoring pvc

Example output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
alertmanager-alertmanager-main-0 Bound pvc-2f6714f7-feff-11e9-9bdd-005056818b15 40Gi RWO ocs-storagecluster-ceph-rbd 102m
alertmanager-alertmanager-main-1 Bound pvc-2f6dd091-feff-11e9-9bdd-005056818b15 40Gi RWO ocs-storagecluster-ceph-rbd 102m
alertmanager-alertmanager-main-2 Bound pvc-2f74e00d-feff-11e9-9bdd-005056818b15 40Gi RWO ocs-storagecluster-ceph-rbd 102m
prometheusdb-prometheus-k8s-0 Bound pvc-e0f7b201-ff0c-11e9-9bdd-005056818b15 40Gi RWO ocs-storagecluster-ceph-rbd 4m34s
prometheusdb-prometheus-k8s-1 Bound pvc-e101b1db-ff0c-11e9-9bdd-005056818b15 40Gi RWO ocs-storagecluster-ceph-rbd 4m34s

3.3. Configure even more
You can configure a lot more inside the cluster-monitoring-config ConfigMap. Since this Blog is focused on Storage, the other options have been omitted. A great way to learn more is to go to the official OpenShift Container Platform documentation for configuring the Prometheus Cluster Monitoring stack.
One thing you want to check out in the documentation is how you can set up the alertmanager.yml and how to define the retention time of Prometheus. By default, Prometheus only retains the last 15 days worth of data.
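For example, the retention can be raised in the same cluster-monitoring-config ConfigMap (a sketch based on the cluster-monitoring documentation; 24h is an illustrative value, and the storage settings from the previous section would sit alongside it):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h
```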
Resources and Feedback
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
The post Configure OpenShift Metrics with Prometheus backed by OpenShift Container Storage appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

OpenShift 4.3: Quay Container Security Integration

Overview
In the Red Hat OpenShift 4.2 Web UI Console, we introduced a new Cluster Overview Dashboard as the landing page when users first log in. The dashboard is there to help users resolve issues more efficiently and maintain a healthy cluster. With the latest 4.3 release, we added an image security section to the cluster health dashboard card. This section will appear on the dashboard when the Container Security Operator gets installed.
Container Security Operator
The Container Security Operator brings Quay and Clair metadata into OpenShift.

Installing this operator enables cluster administrators to monitor known container image vulnerabilities in pods running on their Kubernetes cluster.

After deployment, there is no additional configuration for the operator to start querying the container registry for vulnerability data. If the registry supports image scanning, like Quay, then the vulnerabilities found will be exposed to the user by populating the ImageManifestVuln resource list and surfacing it on the cluster dashboard.
Vulnerability Details
Users will see the new “Image Security” section on the cluster health dashboard card. The card will list the number of vulnerabilities found and will provide a link to access more detailed information on the type of vulnerabilities.

Clicking on the link opens a popover with the breakdown of vulnerability by severity. Critical and high vulnerabilities will be listed first, followed by medium and lower risk. Fixable vulnerabilities are listed with two links. One link is the vulnerability name that goes out to the vulnerability view for the Quay instance the image is hosted on (for example, Quay.io), and the other is a link to view the Image Manifest Vulnerability resource details in the affected namespace.

Only the top five most severe vulnerabilities will be listed in the popover. For cases where there are more than five, users can navigate to the custom resource definition list to view all Image Manifest Vulnerability instances under the ImageManifestVuln resource.
Learn More
We are focused on helping users resolve issues rapidly to maintain healthy clusters. The Container Security Operator can now help by exposing vulnerability information on the cluster dashboard. Be on the lookout for additional security information to be surfaced in upcoming releases.
If you’d like to learn more about what the OpenShift team is up to or provide feedback on any of the new 4.3 features, please take this brief 3-minute survey.
The post OpenShift 4.3: Quay Container Security Integration appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Configure the OpenShift Image Registry backed by OpenShift Container Storage

1. Requirements

Note: This scenario assumes you already have Red Hat OpenShift Container Storage 4 running inside your Red Hat OpenShift Container Platform 4 environment. Alternatively, follow the instructions in the Deploying OpenShift Container Storage 4 to OpenShift 4 Blog to set up an OpenShift Container Platform 4.2.14+ cluster using OpenShift Container Storage 4.
In order to verify the presence of OpenShift Container Storage, please run:

oc describe project openshift-storage|grep Status

The result of the above command should be similar to:
Status: Active

2. Registry Introduction
OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster, as well as a source of images for workloads running on the cluster.
In addition, it is integrated into the cluster user authentication and authorization system, which means that access to create and retrieve images is controlled by defining user permissions on the image resources.
3. Image Registry Operator
The Registry is configured and managed by an infrastructure operator. The Image Registry Operator installs a single instance of the OpenShift Container Platform registry, and it manages all configuration of the registry, including setting up registry storage when you install an installer-provisioned infrastructure cluster on AWS, GCP, Azure, or OpenStack.
The Image Registry Operator runs in the openshift-image-registry namespace, and manages the registry instance in that location as well. All configuration and workload resources for the registry reside in that namespace.
In case you want to see detailed information about the registry operator, run:
oc describe configs.imageregistry.operator.openshift.io

4. Registry Storage Requirements
A registry needs to have storage in order to store its contents. Image data is stored in two locations. The actual image data is stored in a configurable storage location such as cloud storage or a filesystem volume.
The image metadata, which is exposed by the standard cluster APIs and is used to perform access control, is stored as standard API resources, specifically images and imagestreams.
4.1. Project
The registry is deployed in a Project (namespace) named openshift-image-registry.
To get more information about this project, run the following oc command:
oc describe project openshift-image-registry
Name: openshift-image-registry
Created: 3 hours ago
Labels: openshift.io/cluster-monitoring=true
Annotations: openshift.io/node-selector=
openshift.io/sa.scc.mcs=s0:c18,c17
openshift.io/sa.scc.supplemental-groups=1000340000/10000
openshift.io/sa.scc.uid-range=1000340000/10000
Display Name: <none>
Description: <none>
Status: Active
Node Selector: <none>
Quota: <none>
Resource limits: <none>

4.2. Pods
To view the registry related pods, run the command:
oc get pods -n openshift-image-registry

(The NAME of your machines will be different than shown below)
NAME READY STATUS RESTARTS AGE
cluster-image-registry-operator-74465655b4-gq44m 2/2 Running 0 3h16m
image-registry-7489584ddc-jhw2j 1/1 Running 0 3h16m
node-ca-4x477 1/1 Running 0 116m
node-ca-d82rv 1/1 Running 0 3h16m
node-ca-gxd8r 1/1 Running 0 3h16m
node-ca-kjp28 1/1 Running 0 3h16m
node-ca-lvb48 1/1 Running 0 116m
node-ca-ndwhh 1/1 Running 0 3h16m
node-ca-nwstp 1/1 Running 0 116m
node-ca-pwrrs 1/1 Running 0 3h16m

5. Review the current Registry Operator configuration settings
Let’s review the current Registry settings first. To do so, please run the command:
oc edit configs.imageregistry.operator.openshift.io/cluster

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  creationTimestamp: "2019-10-28T09:07:09Z"
  finalizers:
  - imageregistry.operator.openshift.io/finalizer
  generation: 3
  name: cluster
  resourceVersion: "16463"
  selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
  uid: 52880e6b-f962-11e9-8995-12a861d1434e
spec:
  defaultRoute: true
  httpSecret: 0c394aabee8e6a9ef8aa2c927c8f8c487f8ad3249ff67794a8685af4f76c72811c97ee4ddf936602dd9fca12e198c4eff413130568a4c356d7b6f14f805bcb59
  logging: 2
  managementState: Managed
  proxy:
    http: ""
    https: ""
    noProxy: ""
  readOnly: false
  replicas: 1
  requests:
    read:
      maxInQueue: 0
      maxRunning: 0
      maxWaitInQueue: 0s
    write:
      maxInQueue: 0
      maxRunning: 0
      maxWaitInQueue: 0s
  storage:
    s3:
      bucket: cluster-ocs-f562-9d4rh-image-registry-us-east-1-rjqkgcsxlotmwm
      encrypt: true
      keyID: ""
      region: us-east-1
      regionEndpoint: ""
status:
  conditions:
  - lastTransitionTime: "2019-10-28T09:07:10Z"
    reason: S3 Bucket Exists
    status: "True"
    type: StorageExists
  - lastTransitionTime: "2019-10-28T09:07:10Z"
    message: Public access to the S3 bucket and its contents have been successfully
      blocked.
    reason: Public Access Block Successful
    status: "True"
    type: StoragePublicAccessBlocked
  - lastTransitionTime: "2019-10-28T09:07:10Z"
    message: Tags were successfully applied to the S3 bucket
    reason: Tagging Successful
    status: "True"
    type: StorageTagged
  - lastTransitionTime: "2019-10-28T09:07:10Z"
    message: Default AES256 encryption was successfully enabled on the S3 bucket
    reason: Encryption Successful
    status: "True"
    type: StorageEncrypted
  - lastTransitionTime: "2019-10-28T09:07:10Z"
    message: Default cleanup of incomplete multipart uploads after one (1) day was
      successfully enabled
    reason: Enable Cleanup Successful
    status: "True"
    type: StorageIncompleteUploadCleanupEnabled
  - lastTransitionTime: "2019-10-28T09:07:56Z"
    message: The registry is ready
    reason: Ready
    status: "True"
    type: Available
  - lastTransitionTime: "2019-10-28T09:18:32Z"
    message: The registry is ready
    reason: Ready
    status: "False"
    type: Progressing
  - lastTransitionTime: "2019-10-28T09:07:11Z"
    status: "False"
    type: Degraded
  - lastTransitionTime: "2019-10-28T09:07:11Z"
    status: "False"
    type: Removed
  observedGeneration: 3
  readyReplicas: 0
  storage:
    s3:
      bucket: cluster-ocs-f562-9d4rh-image-registry-us-east-1-rjqkgcsxlotmwm
      encrypt: true
      keyID: ""
      region: us-east-1
      regionEndpoint: ""
  storageManaged: true

Note: The `storage` designation currently refers to `s3` and a `bucket` name. This is an initial deployment of OCP4 on AWS infrastructure.

storage:
  s3:
    bucket: cluster-ocs-f562-9d4rh-image-registry-us-east-1-rjqkgcsxlotmwm
    encrypt: true
    keyID: ""
    region: us-east-1
    regionEndpoint: ""

Close the VI editor by first pressing ESC and then : followed by q! and ENTER
6. Moving the registry storage to OpenShift Container Storage
In this section we will change the registry storage to OpenShift Container Storage, where it will consume CephFS RWX storage, as multiple pods will need to access the storage concurrently.
6.1. Storage Class
First we want to make sure that a CephFS storageclass is present, in order to create a Persistent Volume Claim for the registry storage.
To check for presence of an existing CephFS storage class, please run the following command:
oc get sc

This should result in an outcome similar to:
NAME PROVISIONER AGE
gp2 kubernetes.io/aws-ebs 5h57m
ocs-storagecluster-ceph-rbd (default) openshift-storage.rbd.csi.ceph.com 4h5m
ocs-storagecluster-cephfs openshift-storage.cephfs.csi.ceph.com 4h5m
openshift-storage.noobaa.io openshift-storage.noobaa.io/obc 3h59m

According to the above output, there is already a storage class named ocs-storagecluster-cephfs.
6.2. PVC (Persistent Volume Claim)
In this step we will set up a PVC named ocs4registry that uses our storage class named ocs-storagecluster-cephfs, which is going to be used for storing registry data.
First, please make sure to be inside the openshift-image-registry project.
oc project openshift-image-registry

In order to create the pvc, please run the following command:
oc create -f <(echo '{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "ocs4registry"
  },
  "spec": {
    "storageClassName": "ocs-storagecluster-cephfs",
    "accessModes": [ "ReadWriteMany" ],
    "resources": {
      "requests": { "storage": "100Gi" }
    }
  }
}')

This should result in:
persistentvolumeclaim/ocs4registry created

To check if it worked out well:
oc get pvc

Example output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ocs4registry Bound pvc-b7339457-fb23-11e9-846d-0a3016334dd1 100Gi RWX ocs-storagecluster-cephfs 60s
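If you prefer a manifest file to the inline JSON above, the equivalent claim expressed as YAML would look like this (same names and sizes as above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocs4registry
  namespace: openshift-image-registry
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ocs-storagecluster-cephfs
  resources:
    requests:
      storage: 100Gi
```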

6.3. Configure the Image Registry to use the newly created PVC
In this section we will instruct the Registry Operator to use the CephFS-backed RWX PVC.

Note: This method of moving the registry to OpenShift Container Storage will work exactly the same for OpenShift Container Platform on VMware infrastructure.
Now configure the registry to use OpenShift Container Storage by editing its configuration:

oc edit configs.imageregistry.operator.openshift.io

Find the storage section and add the following:
storage:
pvc:
claim: ocs4registry

Save and close the vi editor by pressing ESC, then typing :wq! and pressing ENTER.
Then check the /registry mountpoint inside the image-registry pod to validate that the pod now uses the OCS PVC instead of the S3 resources on AWS. Here is how to do this:
oc get pods

Note: The NAME of your machines will be different than shown below

Example output:
NAME READY STATUS RESTARTS AGE
cluster-image-registry-operator-6d65bcbd4b-7h6b6 2/2 Running 0 8h
image-registry-6c4dbbcdbb-9bl8w 1/1 Running 0 8m59s
node-ca-26q5d 1/1 Running 0 8h
node-ca-6tdrs 1/1 Running 0 8h
node-ca-9jdwt 1/1 Running 0 8h
node-ca-g6dr5 1/1 Running 0 8h
node-ca-jt7w8 1/1 Running 0 8h
node-ca-r9qtx 1/1 Running 0 7h41m
node-ca-srgv9 1/1 Running 0 7h41m
node-ca-wg2xs 1/1 Running 0 7h41m

We now open a remote shell on the registry pod, the one whose name starts with image-registry-*.

Note: The NAME of your registry pod will be different than shown below

oc rsh image-registry-6c4dbbcdbb-9bl8w

Once connected, a shell prompt appears; you are now running a shell on the pod itself.
From within this remote shell, we can run the df -h and mount commands, which show the filesystem information from the pod's perspective:
sh-4.2$ df -h | grep registry

This results in the following output:
172.30.107.130:6789,172.30.82.116:6789,172.30.125.23:6789:/volumes/csi/csi-vol-5b88bd0e-fc09-11e9-9939-0a580a820206 100G 0 100G 0% /registry

Alternatively, we can use the mount command:
sh-4.2$ mount | grep registry

Resulting in the following output:
172.30.107.130:6789,172.30.82.116:6789,172.30.125.23:6789:/volumes/csi/csi-vol-5b88bd0e-fc09-11e9-9939-0a580a820206 on /registry type ceph (rw,relatime,name=csi-cephfs-node,secret=<hidden>,acl,mds_namespace=ocs-storagecluster-cephfilesystem)

At this point, the image registry should be using the OCS RWX volume, backed by CephFS.
The output from either command shows that the /registry filesystem is mounted from /volumes/csi/csi-vol-5b88bd0e-fc09-11e9-9939-0a580a820206, served by the Ceph cluster that the Rook operator manages.
The df -h output lets us verify that 100Gi of space is available, and the mount output shows which options were used when mounting the /registry filesystem slice.
You can exit the pod remote shell rsh by either pressing Ctrl+D or by executing exit.
Resources and Feedback
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
The post Configure the OpenShift Image Registry backed by OpenShift Container Storage appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

QAD turns cost centers into profit centers with IBM Cloud

Cloud has been an important part of our strategy at QAD for well over a decade. In fact, among the established global manufacturing enterprise resource planning (ERP) and supply chain software providers, QAD was one of the first to offer cloud-based solutions, starting with QAD Supplier Portal in 2003 and then ERP in 2007.
Despite our early start in the cloud, it took several years for cloud ERP to reach critical mass in general, and in the manufacturing markets that are QAD's focus in particular. IT departments, comfortable running their own data centers, were skeptical about the cloud's ability to deliver the right level of availability, scalability, security and performance.
Driving manufacturing cloud ERP adoption
There were two key drivers behind increased cloud ERP adoption. First, the recession in the late 2000s forced IT departments to move away from being cost centers and toward being profit centers. For example, IT departments found ways to increase speed to market, added efficiencies to processes and provided decision-makers with useful analytics. This segued into the second driver of the move to the cloud: removing the burden of running ERP and allowing more time for business differentiation. Moving to QAD Cloud not only provides support 365 days a year, but also delivers 99.987 percent application uptime, disaster recovery and a complete suite of Defense in Depth Security.
Another milestone in QAD’s evolution toward the cloud came in 2015, when we unveiled an initiative, called Channel Islands, to begin rearchitecting our applications and the underlying platform for the cloud era, for Industry 4.0 and for smart manufacturing. We also started to take better advantage of emerging technologies.
Moving to a global cloud provider
These drivers have sustained cloud growth of roughly 30 percent year over year for several years for QAD. While we were already working with a few cloud providers, it became clear to us that we needed another cloud provider that was strong outside of North America. IBM, which had acquired SoftLayer a few years earlier, and which had an excellent reputation for cloud management, was operating IBM Cloud in places like Australia. We investigated further and found that IBM had international cloud facilities in several key regions that matched well to our expanding customer base, including Paris, Singapore and Hong Kong.
IBM was also a front-runner vendor based on its system availability run rate. Its data centers are at the high-tier classification and are designed to provide the highest level of availability and security that manufacturers need. These factors, coupled with its crisp and consistent execution, made IBM the obvious choice.
Maximizing IBM Cloud collaboration opportunities
IBM’s delivery of service has continuously met our needs. We have service level agreements (SLAs) with IBM, and we extend those SLAs to our customers. Our track record working with IBM has been excellent. The company understands that the job is not done by simply delivering to current SLAs. We have weekly operational calls with IBM to align our business and track the KPIs that drive our excellent service to our customers. We also collaborate at monthly strategic meetings to discuss short-term technology roadmaps such as improvements to our VMware deployments to ensure we maintain the highest availability. Finally, IBM’s recent acquisition of Red Hat cements our relationship even further since Red Hat Enterprise Linux is the primary OS used for QAD Adaptive ERP in the cloud.
Speaking of collaboration, I was recently invited to join an IBM customer advisory board. In that role, I'll be providing feedback as the voice of the customer and the voice of the customer's customer. With this kind of input, IBM can continue to provide technology that supports the next generation of ERP and supply chain solutions. IBM has kept a really open mind, working from the outside in as it develops its cloud technology roadmap.
What is the number one benefit to working with IBM? Understanding that the highest availability is assumed, and that teamwork and collaboration produce results that exceed the SLA in terms of service delivery. IBM provides enterprise class service delivery to us and we extend that to our customers.  Whenever we pick up the phone and ask for guidance or support, it’s always there. I cannot think of a single time when we needed IBM and IBM wasn’t responsive.
Read the case study for more details.
The post QAD turns cost centers into profit centers with IBM Cloud appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

The Block Editor is Now Supported on Mobile Devices

Part of what helps WordPress power 34% of the web is language: WordPress is fully translated into 68 languages. Pair that with the mobile apps, which make WordPress available across devices, and you have a globally accessible tool.

Today we’re announcing app updates that bring the new Block editor to mobile devices, so on-the-go publishing is even easier for that 34%.

At Automattic, we speak 88 different languages, so we thought: why not use some of them to tell you about the editor updates? Instead of a few screenshots and bullet points, here are some of the people who build the editor and apps sharing their favorite tools and tricks for the mobile Block editor. To make it more accessible, we’ve also included English translations. 

(And for those who want more detail — yes, there are still screenshots and bullet points!)

Rafael, Brazilian Portuguese

Com o novo editor, a criação de conteúdo é mais intuitiva por que as opções de formatação de texto e inserção de arquivos são exibidas de uma forma bem simples.

Toque no ícone ⊕ enquanto estiver editando um post ou página para ver os blocos disponíveis como Parágrafo, Título, Imagem, Vídeo, Lista, Galeria, Mídia e texto, Espaçador e muitos outros.

Translation

With the new editor, creating content is more intuitive because the options to format text and add media are displayed in a simple way. Tap on the ⊕ icon when editing a post or page to see all the available blocks like Paragraph, Heading, Image, Video, List, Gallery, Media & Text, Spacer and more.

Anitaa, Tamil

பயணங்களில் மிகவும் விருப்பமுள்ள எனக்கு, பயண குறிப்புகளை பயண நேரத்திலேயே எழுதுவது வழக்கம். இந்தப் புதிய கைபேசி செயலி என் வேலையே மிகவும் எளிதாக்குகிறது. எனக்குப் பிடித்த சில அம்சவ்கள்:  

கி போர்ட்டில் உள்ள நேக்ஸ்ட் பொத்தானை அழுத்துவதன் மூலமே புதிய பத்தியை தொடங்க முடிவது.பட்டியல் தொகுதியைப் பயன்படுத்தி எனது சொந்த பட்டியலை உருவாக்க முடியும்.

பட்டியலின் உள்ளெ பட்டியலை சரிபார்க்கும், அல்லது, துணை பட்டியலை உள்ளடக்கும் பட்டியல் பத்தியை ஆவலுடன் எதிர்பார்க்கிறேன். எனவே அடுத்த புதுப்பிப்பைப் பற்றி நான் மகிழ்ச்சியடைகிறேன்.

Translation

I love travelling and I spend a lot of time on my blog writing travel tips while on the go. My favorite features in the Block editor include:

Creating a new paragraph block by pressing the RETURN button on the keypad. Adding a List block to create my own lists. You can even add sub-lists!

I look forward to seeing what’s coming next!

Mario, Spanish

Cuando escribo, doy mil vueltas sobre qué palabras utilizar y me cuesta decidirme. Uso mi móvil porque me da la posibilidad de capturar mis ideas justo en el momento que se me ocurren. Es por eso que de las cosas que más me gustan del Editor es que puedo moverme de un bloque de texto a otro con facilidad y también cambiarlos de lugar. Además, se puede hacer/deshacer muy fácilmente, y siempre se mantiene el historial de edición lo que me da mayor seguridad a la hora de cambiar incluso sólo pequeñas partes del contenido que voy escribiendo.

Translation

When I write, I walk around in circles and can never decide which words to use. So I use my mobile phone, which lets me capture ideas right when they occur to me. That's why the things I appreciate in the new Editor are the abilities to move from block to block with ease and to change their order. And since you can undo/redo quite easily and can see your editing history, I have confidence when I change even small bits of the post I'm writing.

Jaclyn, Chinese

用過 Gutenberg 古騰堡後網誌效率高很多!因為寫旅行文章,很多時候是在旅途中或是平日空擋等候時間紀錄和寫下想法,行動 app 讓我隨時隨地都可以編輯文章。行動古騰堡簡化了移動文章段落重新排序的步驟,讓文章的架構變得很清楚,也更容易管理。

Translation

The new block editor truly makes a difference in my blogging efficiency and experience. Since my blog is about traveling, I often scribble notes and thoughts during my trips. The block editor on mobile simplifies the process of moving paragraphs around and organizing content, so the architecture of the post becomes clearer and easier to reorganize.

To start using the block editor on your app, make sure to update to the latest version, and then opt in to using it! To opt in, navigate to My Site → Settings and toggle on Use Block Editor.

We hope you give the latest release a try; tell us about your favorite part of the mobile block editor once you’ve had a chance to try it.

We’d also love to know your thoughts on the general writing flow and on some of the newer blocks like video, list, and quote blocks. For specific feedback, you can reach out to us from within the app by going to Me → Help and Support, then selecting Contact Us.
Quelle: RedHat Stack

Deploy Jenkins Pipelines in OpenShift 4 with OpenShift Container Storage 4

> Note: This scenario assumes you already have an OpenShift 4 cluster or have followed the instructions in the Deploying OpenShift Container Storage 4 to OpenShift 4 Blog to set up an OpenShift Container Platform 4.2.14+ cluster using OpenShift Container Storage 4.

1. Overview
Jenkins is one of the most important development infrastructure components, but can we make Jenkins pipelines run faster? Using OpenShift Container Storage we can speed up the build time of applications by using persistent storage to save the stateful data of dependencies and libraries, for example, that are needed during compilation.
1.1. In this blog you will learn how to:

Create a Jenkins template that uses OpenShift Container Storage 4 persistent storage.
Implement a BuildConfig for the JAX-RS project.
Use Jenkins’s PodTemplate in the BuildConfig to create Jenkins (Maven) slave Pods that will use OpenShift Container Storage 4 persistent storage.
Run a simple build and measure the performance.
Learn how OpenShift Container Storage 4 helps shorten the build time.
Run the demo in a multi-Jenkins environment simulating a large engineering organization with many groups/projects using different Jenkins instances.
Do all this from the command line.

2. Prerequisites:

Make sure you have a running OCP cluster. If you do not have a running OCP cluster, follow the instructions in the Deploying OpenShift Container Storage 4 to OpenShift 4 Blog to set up an OpenShift Container Platform 4.2.14+ cluster using OpenShift Container Storage 4.
A copy of this git repository.
NOTE: The scripts in this lab will work on Linux (various distributions) and MacOS. They have not been tested on any Windows OS.
Please git clone our repository; once cloned, you'll find all the scripts needed in the ocs4jenkins directory:

git clone https://github.com/red-hat-storage/ocs-training.git

3. Deploying Jenkins
3.1. Jenkins introduction
Jenkins is a free and open source automation server. The gist of Jenkins is very simple: it helps the software development process by doing things that a developer would normally do manually. It’s typically used to manage build and test processes, where lots of scripts and little bits of software need to be run in succession in order to produce working binaries, containers or virtual machine images. In that sense, it matches the concept of continuous delivery (CD) perfectly.
Jenkins is a server-based system that runs as a servlet in a containerized environment. It can interact with all major version control tools and can automate builds using tools like Maven, Ant and SBT.
Because of this level of integration with version control software, builds can easily be triggered by an action such as a “commit” in git. Builds can also be started via a daily/hourly cron job or even by simply requesting a build URL.
3.2. Our Jenkins demo
In our demo/example, we are going to use a Jenkins pipeline to build the openshift-tasks project (https://github.com/redhat-gpte-devopsautomation/openshift-tasks), which demonstrates how to implement a JAX-RS service. We are going to create an OpenShift project that will hold a Jenkins Pod (which also uses OpenShift Container Storage 4 persistent storage), and when we start our build, the Jenkins master Pod is going to create a Maven Pod to actually run the build. That Maven Pod will use OpenShift Container Storage 4 for persistent storage.
The actual code we compile is not important for the demo; we are just utilizing the build stages to show how OpenShift Container Storage 4 can save significant amounts of build time. The pipeline has well-defined stages:

Create the Maven Pod.
Clone the code.
Build the artifact (including getting all dependencies).
End the build.

> Note: Pipelines usually have more stages, including at least one testing stage, but we are skipping these stages here.
Figure 1. Jenkins pipeline demo components

3.3. Creating a Jenkins template that uses OpenShift Container Storage 4
OCP4 comes preconfigured with two Jenkins templates to use:
oc get templates -n openshift -o custom-columns=NAME:.metadata.name|grep -i jenkins

Example output:
jenkins-ephemeral
jenkins-persistent

We are going to create a new template based on the jenkins-persistent template. To do so, we are going to run the create_ocs_jenkins_template script.
The script parameters (CSI_DRIVER, PV_SIZE and NEW_TEMPLATE_NAME) are self-explanatory; you can edit the script and change them, but remember that other scripts in this lab might use the default value of NEW_TEMPLATE_NAME. The script will perform the following tasks:

Change the name of the template (so it can co-exist with the one we copied from).
Add the storage class (sc) we want to use in the template (the jenkins-persistent template just uses the default storage class in OCP).
Add/change the size of the PV we want for the Jenkins pod.
Add some Jenkins Java pod-creation parameters to speed up the creation of new containers.
Run the oc create command to create the new template.
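To illustrate the kind of edits the script makes, here is a hypothetical Python sketch of the transformation applied to an exported jenkins-persistent template; the field paths and function name are assumptions, and the real logic lives in the create_ocs_jenkins_template script:

```python
def customize_template(template, new_name, storage_class, pv_size):
    """Rename a template and pin its PVC objects to a given storage class and size."""
    template["metadata"]["name"] = new_name
    for obj in template.get("objects", []):
        if obj.get("kind") == "PersistentVolumeClaim":
            obj["spec"]["storageClassName"] = storage_class
            obj["spec"]["resources"] = {"requests": {"storage": pv_size}}
    return template

# A stripped-down stand-in for `oc get template jenkins-persistent -o json`
base = {
    "metadata": {"name": "jenkins-persistent"},
    "objects": [{"kind": "PersistentVolumeClaim", "spec": {}}],
}
ocs = customize_template(base, "jenkins-persistent-ocs",
                         "ocs-storagecluster-ceph-rbd", "10Gi")
```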

Run the create_ocs_jenkins_template script:
$ bash create_ocs_jenkins_template

After running the script, you should see another jenkins template:
oc get templates -n openshift -o custom-columns=NAME:.metadata.name|grep -i jenkins

Example output:
jenkins-ephemeral
jenkins-persistent
jenkins-persistent-ocs

The last jenkins template jenkins-persistent-ocs is the one that we are going to use.
3.4. Creating our project
Now that we have a Jenkins OpenShift Container Storage 4 template, we can deploy Jenkins and use the deploy_jenkins bash script to:

Create a project.
Create a persistent volume claim that will be used for all our builds.
Create a Jenkins server Pod (using the template from previous step).
Create the Jenkins pipeline build configuration (as a BuildConfig) for our openshift-tasks project.

The script accepts two variables from the command line: the OpenShift project name you want to use and the persistent storage driver you want to use (in our case ocs-storagecluster-ceph-rbd).
The real “magic” takes place at the BuildConfig object, so before running the script, let’s take a look:
1  kind: "BuildConfig"
2  apiVersion: "v1"
3  metadata:
4    name: "jax-rs-build"
5  spec:
6    strategy:
7      type: JenkinsPipeline
8      jenkinsPipelineStrategy:
9        jenkinsfile: |-
10         podTemplate(label: 'maven-s',
11           cloud: 'openshift',
12           inheritFrom: 'maven',
13           name: 'maven-s',
14           volumes: [persistentVolumeClaim(mountPath: '/home/jenkins/.m2', claimName: 'dependencies', readOnly: false)]
15         ) {
16           node("maven-s") {
17             stage('Source Checkout') {
18               git url: "https://github.com/redhat-gpte-devopsautomation/openshift-tasks.git"
19               script {
20                 def pom = readMavenPom file: 'pom.xml'
21                 def version = pom.version
22               }
23             }
24             // Using Maven, build the war file
25             stage('Build JAX-RS') {
26               echo "Building war file"
27               sh "mvn clean package -DskipTests=true"
28             }
29           }
30         }

So the pipeline is very simple, we create a Maven Pod (based on the OpenShift Container Platform Maven default image, line #10), git clone our code (line #18), and then create the artifact using Maven (line #27).
The podTemplate section is where we attach the persistent volume that the script created in the previous step (the claim is called "dependencies").
The importance of keeping the same claim is simple: for each build, we need to download all the dependencies in order to compile the code. Since these dependencies rarely change for the same code, we use OpenShift Container Storage 4 persistent storage to keep them around between builds, making any Maven build that follows the first one up to 90% faster.
After explaining all of this, let’s run the script:
bash deploy_jenkins myjenkins-1 ocs-storagecluster-ceph-rbd

3.5. Running the build and looking at results
The oc command to run a build is literally oc start-build; however, we are going to use the bash script run_builds to not only run this command for you but also run the build 5 times sequentially, measuring the duration of each run and writing the data to a log file per run. The script accepts two variables: the OpenShift project name where you created the Jenkins pod (and, of course, the BuildConfig and persistent volume), and a directory to place the outputs.
bash run_builds myjenkins-1 myjenkins-1

If we look at the newly created myjenkins-1 directory, it should have 10 files (2 files for each of the 5 runs of the build):
The files that match — are the output of the Jenkins build runs.
The files starting with “log-” will hold the build duration data. A quick grep sample of the results will show similar results to these:
cat myjenkins-1/log-myjenkins-1-jax-rs-build-*|grep 'Total time'

Example output:
[INFO] Total time: 01:39 min
[INFO] Total time: 5.337 s
[INFO] Total time: 3.510 s
[INFO] Total time: 3.258 s
[INFO] Total time: 2.930 s

What we are “grepping” for is the total time it took for the actual maven Pod to run the build. Or, to be precise, the mvn clean package -DskipTests=true command. As you can see, the first build in this example took 99 seconds, while all the consecutive builds took less than 5 seconds. The reason for this is that the dependencies are downloaded for the first build and then reused again and again for any other build that follows.
It is important to note that this is a fairly small project; bigger projects will see an even greater impact on the Maven commands, as their dependencies will most likely be much larger.
Also note that if we had used ephemeral storage for our Maven Pods, each of the 5 builds would have taken roughly 99 seconds. Doing some simple math, using ephemeral storage would have cost roughly 500 seconds to run 5 builds, versus roughly 115 seconds using OpenShift Container Storage 4 persistent storage for the Maven Pods!
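The arithmetic behind that comparison can be reproduced directly from the mvn "Total time" lines shown above; a short sketch (log format as in the example output):

```python
import re

def to_seconds(total_time):
    """Convert mvn 'Total time' values like '01:39 min' or '5.337 s' to seconds."""
    m = re.match(r"(\d+):(\d+) min", total_time)
    if m:
        return int(m.group(1)) * 60 + int(m.group(2))
    return float(total_time.split()[0])

# The five build durations from the example output above
times = ["01:39 min", "5.337 s", "3.510 s", "3.258 s", "2.930 s"]
persistent = sum(to_seconds(t) for t in times)   # first build pays for the downloads
ephemeral = to_seconds(times[0]) * len(times)    # every build would pay the full price
print(round(persistent), ephemeral)              # -> 114 495
```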
3.6. Running our demo in a multi-tenant environment
In real-life scenarios with Jenkins in the Kubernetes/DevOps world, there are usually several Jenkins servers running. It could be there’s a Jenkins server per development team, or maybe a Jenkins server per engineering group (Dev, QE, Support, Professional services and so on). It could be that a developer is working on several projects that require different versions of Jenkins or Jenkins plugins. As you can see, the notion of having many Jenkins servers running on a single OpenShift cluster using some sort of software defined storage is very real.
To simulate a multi-Jenkins server environment, we are going to reuse the previous scripts (deploy_jenkins and run_builds), wrapped by scripts that create a multi-Jenkins server environment. The init_and_deploy_jenkins-parallel bash script variables are easy to understand: the script deploys NUMBER_OF_PROJECTS instances of Jenkins, each project holding a single Jenkins server and named with the PROJECT_PREFIX prefix. The script creates the projects in batches of DEPLOY_INCREMENT to avoid resource issues during Pod creation.
To run the script:
bash init_and_deploy_jenkins-parallel

Once we have our Jenkins servers/Pods running, we can run our previous demo in parallel on all the Jenkins servers. For that we will use the run_builds-parallel script, which basically runs the run_builds script for each of the projects we created previously (remember, each OCP project holds a single Jenkins server). The variable NUMBER_OF_PROJECTS needs to match the number used in the init_and_deploy_jenkins-parallel script.
The script also creates a separate directory per project to store the output from the runs.
The script accepts one variable, a name for the run; all the per-project output directories will be created under this RUN_NAME directory. To run the script:
bash run_builds-parallel running_60_jenkins

Once all runs are done (should take roughly 10 minutes), you can simply run the calculate_results script to go through all directories and calculate all the averages per run.
This script has some variables that need to match previous scripts, NUMBER_OF_PROJECTS, PROJECT_PREFIX, BUILD_CONFIG and NUMBER_OF_BUILDS must match the variables from all 4 previous scripts. The script also accepts the RUN_NAME variable, the same one we used in the run_builds-parallel script.
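Conceptually, calculate_results averages the Nth build across all project directories; here is a hypothetical sketch of that aggregation (the real script parses the log files rather than taking numbers directly):

```python
def average_per_build(results):
    """Average build #i across all projects; every project ran the same build count."""
    per_build = list(zip(*results.values()))  # regroup: one tuple per build index
    return [sum(b) / len(b) for b in per_build]

# Hypothetical per-project durations in seconds for 5 sequential builds
runs = {
    "myjenkins-1": [91.0, 8.2, 5.4, 5.6, 4.7],
    "myjenkins-2": [92.0, 8.3, 5.5, 5.7, 4.8],
}
print(average_per_build(runs))  # first entry dominated by the dependency download
```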

> Note: Depending on where you are running the scripts (remotely from your laptop or from a node/pod inside the lab) and how well the Kubernetes scheduler spread the Jenkins and Maven pods, a run of 60 Jenkins pods doing 5 builds in parallel can take between 10 and 20 minutes, so you might want to reduce the number of projects running in parallel if you don't want to wait.

bash calculate_results running_60_jenkins

The output should be similar to this, in the sense that the average of the first build will be significantly higher than the rest (these numbers are in seconds):

Example output:
Average for build 1: 91.2667
Average for build 2: 8.248
Average for build 3: 5.41643
Average for build 4: 5.64875
Average for build 5: 4.7366

For the curious mind: Check to see if the Kubernetes scheduler has done a good job at distributing the 60 Jenkins pods:
$ oc get pods -o wide --all-namespaces | grep jenkins | grep -vi deploy | grep 1/1 | awk '{print $8}' | sort | uniq -c

Resources and Feedback
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
The post Deploy Jenkins Pipelines in OpenShift 4 with OpenShift Container Storage 4 appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Best of 2019 Blogs: Announcing Docker Enterprise 3.0 General Availability


One of the most popular blogs in 2019 was the release of Docker Enterprise 3.0. This post, originally published in July, recaps the key details.

Today, we’re excited to announce the general availability of Docker Enterprise 3.0 – the only desktop-to-cloud enterprise container platform enabling organizations to build and share applications and securely run them anywhere – from hybrid cloud to the edge.

Docker Enterprise 3.0 Demo

 

Leading up to GA, more than 2,000 people participated in the Docker Enterprise 3.0 public beta program to try it for themselves. We gathered feedback from some of these beta participants to find out what excites them most about the latest iteration of Docker Enterprise. Here are 3 things that customers are excited about and the features that support them:

Simplifying Kubernetes

Kubernetes is a powerful orchestration technology but due to its inherent complexity, many enterprises (including Docker customers) have struggled to realize the full value of Kubernetes on their own. Much of Kubernetes’ perceived complexity stems from a lack of intuitive security and manageability configurations that most enterprises expect and require for production-grade software. We’re addressing this challenge with Docker Kubernetes Service (DKS) – a Certified Kubernetes distribution that is included with Docker Enterprise 3.0. It’s the only offering that integrates Kubernetes from the developer desktop to production servers, with ‘sensible secure defaults’ out-of-the-box.

“Increasing application development velocity and digital agility are a strategic imperative for companies in all sectors today. Developer experience is the killer app,” said RedMonk co-founder, James Governor. “Docker Kubernetes Service and Docker Application aim to package and simplify developer and operator experience, making modern container based workflows more accessible to developers and operators alike.”

You can learn more about Docker Kubernetes Service here.

Automating Deployment of Containers and Kubernetes

One of the most common requests we’ve heard from customers has been to make it easier to deploy and manage their container environments. That’s why we introduced new lifecycle automation tools for day 1 and day 2 operations, helping customers accelerate and expand the deployment of containers and Kubernetes on their choice of infrastructure. Using a simple set of CLI commands, operations teams can easily deploy, scale, backup and restore, and upgrade their Docker Enterprise clusters across hybrid and multi-cloud deployment on AWS, Azure, or VMware.

Building Modern Applications 

With the ever-increasing emphasis on making things easier and faster for developers, it’s no surprise that Docker Desktop Enterprise and Docker Application created a lot of excitement amongst beta participants. Docker Desktop Enterprise is a new developer tool that decreases the “time-to-Docker” – accelerating developer onboarding and improving developer productivity. Docker Application, based on the CNAB standard, is a new application format that enables developers to bundle the many distributed resources that comprise a modern application into a single object that can be easily shared, installed and run anywhere. Docker Desktop Enterprise also allows users to quickly and easily create Docker Applications leveraging pre-defined Application Templates that support any language or framework.

“The Docker Enterprise platform and its approach to simplifying how containerized applications are built, shared and run allows us to fail fearlessly. We can test new services easily and quickly and if they work, we can immediately enhance the mortgage experience for our customers,” said Don Bauer, Lead DevOps Engineer, Citizens Bank. “Docker’s investment in new capabilities like Docker Application and simplified cluster management will further improve developer productivity and lifecycle automation for us so that we can continue to bring new, differentiated services to market faster.”

You can learn more about Docker Applications here.

How to Get Started

Learn More about What’s New in Docker Enterprise 3.0
Schedule a demo 

The post Best of 2019 Blogs: Announcing Docker Enterprise 3.0 General Availability appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis