Artificial Intelligence Apps with TensorFlow and Joget on OpenShift

This is a guest post by Julian Khoo, VP Product Development and Co-Founder at Joget Inc. Julian has almost 20 years of experience in enterprise software development, involving various products and platforms in application development, workflow management, content management, collaboration and e-commerce. 
Once just a pipe dream in the realm of science fiction, artificial intelligence (AI) and machine learning (ML) are now mainstream technologies in our everyday lives, with applications in image and voice recognition, language translation, chatbots, and predictive data analysis.

TensorFlow is arguably the most popular open source library for machine learning. Built by Google, TensorFlow is designed for training, testing, and deploying deep learning neural networks. Neural networks are used in a variety of applications, notably in classification problems such as speech and image recognition.

Containers and Kubernetes are key to accelerating the ML lifecycle as these technologies provide data scientists the much needed agility, flexibility, portability, and scalability to train, test, and deploy ML models. 
Red Hat OpenShift is the industry’s leading containers and Kubernetes hybrid cloud platform. It provides all the above benefits, and through the integrated DevOps capabilities and integration with hardware accelerators, OpenShift enables better collaboration between data scientists and software developers. This helps accelerate the roll out of intelligent applications across hybrid cloud (data center, edge, and public clouds).
Joget is an open source no-code/low-code application platform that empowers non-coders to visually build and maintain apps anytime, anywhere. By accelerating and democratizing app development, Joget is a natural fit for modern Kubernetes Hybrid Cloud platforms like Red Hat OpenShift. 
In this example, we will look at incorporating a trained TensorFlow neural network model into a Joget Workflow app running on OpenShift to perform image recognition.
To illustrate the use of image recognition in an app, we’ll design a simple Joget app:

A user uploads an image via a web interface
The uploaded image will be labeled and classified based on the image recognized 
The workflow process then routes to different activities depending on the assigned image label

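The routing rule in the last step can be sketched as a tiny shell function (the function name and labels here are hypothetical, mirroring the activities in the process described below):

```shell
# Hypothetical sketch of the routing decision: the label returned by
# image recognition selects the next workflow activity.
route_activity() {
  if [ "$1" = "lion" ]; then
    echo "Lion Activity"
  else
    echo "Non-Lion Activity"
  fi
}

route_activity "lion"        # prints: Lion Activity
route_activity "sports car"  # prints: Non-Lion Activity
```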
For demonstration purposes, let’s assume we are looking for images of lions, because lions are awesome!
Example: Incorporate Image Recognition in a Joget App
Deploy Joget on OpenShift
In a previous article, we looked at deploying the Joget platform on OpenShift using the Red Hat OpenShift Certified Joget Operator. Follow the steps in Automating Low Code App Deployment on Red Hat OpenShift with the Joget Operator to set up the Joget environment.

Develop AI Image Recognition Plugin
The TensorFlow project provides a sample model and Java code for labeling images.
We encapsulated it into the AI Label Image plugin (a custom Joget process tool plugin) that provides configuration options to select the file upload field, and determine where to store the results.

Design App for Image Recognition and Classification
Using the Form Builder, a simple form is designed to upload a file.

The App Generator is then used to generate the full working user interface (UI).

Using the Process Builder, a simple process is designed to handle the activity routing based on the image classification upon form submission, as per the process diagram below.

The AI Label Image tool is mapped to the AI Label Image plugin developed earlier.
 
AI Image Recognition App in Action
Once the app is published, a user can select the Upload Image link to upload the image.
The sample uses a pre-trained Inception model that recognizes about 1,000 different image labels.

Uploading an image of a lion will route to the “Lion Activity.”

On the other hand, uploading a different type of image (such as the car below) will route to the “Non-Lion Activity.”

What’s Next?
This small example serves to demonstrate the potential of harnessing AI/ML in your apps and workflow. Download the Joget app and plugin for this example, and get started with TensorFlow and Joget.
While this example on Joget Workflow uses a custom TensorFlow plugin, the upcoming next-generation Joget DX bundles AI plugins into the core platform, which will further simplify the integration of AI technology into your apps.

References

https://blog.openshift.com/automating-low-code-app-deployment-on-red-hat-openshift-with-the-joget-operator/

https://blog.openshift.com/how-to-automatically-scale-low-code-apps-with-joget-and-jboss-eap-on-openshift/

https://dev.joget.org/community/display/KBv6/AI+Label+Image+Plugin

https://blog.joget.org/2017/05/artificial-intelligence-and-automation.html

https://blog.joget.org/2019/03/artificial-intelligence-in-enterprise.html

 
The post Artificial Intelligence Apps with TensorFlow and Joget on OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Is it too late to integrate GitOps?

Author: Ryan Cook (rcook@redhat.com)
The idiom "missed the boat" describes the loss of an opportunity or a chance to do something. With OpenShift, the excitement to use this new and cool product immediately may create your own "missed the boat" moment in regards to managing and maintaining deployments, routes, and other OpenShift objects. But what if the opportunity isn't completely gone?
Continuing with our series on GitOps (LINK), this article walks through the process of migrating an application and its resources that were created manually to a process in which a GitOps tool manages the assets. To help us understand the process, we will manually deploy an httpd application. Using the steps below, we will create a namespace, deployment, and service, and then expose the service, which creates a route.
oc create -f https://raw.githubusercontent.com/openshift/federation-dev/master/labs/lab-4-assets/namespace.yaml
oc create -f https://raw.githubusercontent.com/openshift/federation-dev/master/labs/lab-4-assets/deployment.yaml
oc create -f https://raw.githubusercontent.com/openshift/federation-dev/master/labs/lab-4-assets/service.yaml
oc expose svc/httpd -n simple-app

We start with our sample application managed manually, then bring it under GitOps control in a way which ensures the application remains available.

Define a repository for the code
Export our current objects and load into git
Select and deploy a GitOps tool
Add the repository to our GitOps tool
Define the application in our GitOps tool
Perform a dry run of the object using the GitOps tool
Perform sync of the objects using the GitOps tool
Enable pruning and auto-syncing of the objects

As we have stated in the previous articles (LINK), when using GitOps the git repository is the source of truth for all of your objects within your Kubernetes cluster(s). We will assume that a git repository service is already in use within your organization. This git repository can be public or private, but it must be accessible by the Kubernetes clusters. The repository can be the same one where the application code lives, or a separate repository can be used specifically for deployments. It is suggested that the repository have strict permissions, as secrets, routes, and other objects need to be stored in it.
For this exercise a new public repository can be created on GitHub. The repository can be named whatever you like but for this example we will use the name blogpost for our repository.
If the YAML files for the objects have not previously been stored in git or locally, then the oc or kubectl binary can help us out. Below, we will request the YAML for our namespace, deployment, service, and route. Clone the newly created repository and cd into the directory.
oc get namespace simple-app -o yaml --export > namespace.yaml
oc get deployment httpd -o yaml -n simple-app --export > deployment.yaml
oc get service httpd -o yaml -n simple-app --export > service.yaml
oc get route httpd -o yaml -n simple-app --export > route.yaml

Make the following modification to deployment.yaml to remove a field that Argo CD cannot sync properly.
sed -i '/generation: .*/d' deployment.yaml
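To see what this deletion does, here is the same idea applied to a stand-in file (the file contents below are illustrative, not a real export):

```shell
# Create a stand-in deployment.yaml containing a generation field
cat > /tmp/deployment.yaml <<'EOF'
metadata:
  generation: 2
  name: httpd
EOF

# Delete the generation line, leaving the rest of the file intact
sed -i '/generation: .*/d' /tmp/deployment.yaml

cat /tmp/deployment.yaml
```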

We also must modify the route. We will first set a multiline variable and then we will replace ingress: null with the contents of the variable.
export ROUTE="  ingress:
  - conditions:
    - status: 'True'
      type: Admitted"

sed -i "s/  ingress: null/$ROUTE/g" route.yaml
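For reference, the status section of route.yaml changes roughly as follows (contents illustrative):

```yaml
# before
status:
  ingress: null

# after
status:
  ingress:
  - conditions:
    - status: 'True'
      type: Admitted
```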

Once we have these files it is time to save them into the git repository. From this point forward, the repository should be the source of truth for anything and manual changes to any of the objects should be prohibited.
git add namespace.yaml deployment.yaml service.yaml route.yaml
git commit -m 'initial commit of objects'
git push origin master

We will assume that Argo CD has already been deployed, as covered in this blog post (LINK). Next, we add the newly created repository containing the simple-app objects to Argo CD. Ensure that the repository below matches the one created in the previous steps.
argocd repo add https://github.com/cooktheryan/blogpost

Next, create the app. The app definition tells the GitOps tool which repository and path to use, which OpenShift cluster to manage the objects on, the specific branch of the repository to track, and whether to sync assets automatically or not.
argocd app create --project default \
--name simple-app --repo https://github.com/cooktheryan/blogpost.git \
--path . --dest-server https://kubernetes.default.svc \
--dest-namespace simple-app --revision master --sync-policy none
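The same application can also be defined declaratively as an Argo CD Application resource; a sketch assuming Argo CD is installed in the argocd namespace and using the repository from this example:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: simple-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/cooktheryan/blogpost.git
    path: .
    targetRevision: master
  destination:
    server: https://kubernetes.default.svc
    namespace: simple-app
```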

Once the application has been defined in Argo CD, the tool will begin to compare the currently deployed objects against those defined in the repository. Since the sync policy is disabled and pruning is not enabled, no items should be changed at this point. One thing you will notice is that the application in the Argo CD UI shows as "Out of Sync"; this is due to a missing label that Argo CD supplies. This label will not cause any assets to be redeployed when we run the sync.
Now let’s run a dry run to ensure no errors exist within our files.
argocd app sync simple-app --dry-run

If no errors show up during the dry run we can move forward with the sync.
argocd app sync simple-app

When running the command argocd app get on our simple-app application, we should see that the application is "Healthy" and "Synced". This means that all of the resources in our git repository now match those that are deployed.
argocd app get simple-app
Name: simple-app
Project: default
Server: https://kubernetes.default.svc
Namespace: simple-app
URL: https://argocd-server-route-argocd.apps.example.com/applications/simple-app
Repo: https://github.com/cooktheryan/blogpost.git
Target: master
Path: .
Sync Policy: <none>
Sync Status: Synced to master (60e1678)
Health Status: Healthy

At this point we can enable "auto-sync" and "pruning" to ensure that nothing is created manually, and that any time an object is updated and pushed to the repository, it will be deployed.
argocd app set simple-app --sync-policy automated --auto-prune
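In Argo CD's declarative form, the same policy corresponds to a syncPolicy block in the Application spec roughly like this (a sketch):

```yaml
syncPolicy:
  automated:
    prune: true
```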

Now you have successfully migrated an application that did not initially use GitOps to a GitOps managed application.
The post Is it too late to integrate GitOps? appeared first on Red Hat OpenShift Blog.

Recap: OpenShift Commons Gathering at Kubecon/NA San Diego [Videos Uploaded]

It's A Wrap! The OpenShift Commons Gathering took place on November 18th in San Diego, co-located with KubeCon NA.
This OpenShift Commons Gathering at Kubecon/NA featured deep dives into OpenShift 4, Service Mesh, Istio, Operators, OKD4, Project Quay and much more!
 
The OpenShift Commons Gathering in San Diego brought together Kubernetes and Cloud Native experts from all over the world to discuss container technologies, best practices for cloud native application developers and the open source software projects that underpin the OpenShift ecosystem.

Here are the videos (slides are coming up soon) from the proceedings:
 

Welcome to the Commons: Collaboration in Action
Diane Mueller (Red Hat) | Julio Tapia (Red Hat)
Slides
Video

Panel: Navigating Devops Transformation in the Enterprise – moderated by Diane Mueller (Red Hat)
Kris Pennella (Red Hat Open Innovation Labs) | John Willis (Red Hat) | Jabe Bloom (Red Hat) | Andrew Clay Shafer (Red Hat)
N/A
Video

OpenShift 4 Release and Road Map Update
Clayton Coleman (Red Hat) | Mike Barrett (Red Hat) | Derek Carr (Red Hat)
Slides
Video

OpenShift @ ING Case Study
M.F. Thijs Ebbers (ING)
Slides
Video

OpenShift @ Broadcom
Jose Chavez (Broadcom) | Ganesh Janakiraman (Broadcom)
Slides
Video

OpenShift @ Weather.com Case Study
Claude Ballew (The Weather Channel)
Slides
Video

Lightning Talks: ML Workloads with GPUs on OpenShift
Diane Feddema (Red Hat)
Slides
Video

Lightning Talk: OKD4
Christian Glombek (Red Hat)
Slides
Video

Lightning Talk: ProjectQuay.io
Joseph Schorr (Red Hat)
Slides
Video

State of the Operators: Framework, SDKs, and beyond
Rob Szumski (Red Hat)
Slides
Video

Operators in Action Panel – Builders, Users and Maintainers
Piyush Nimbalkar (Portworx) | Evan Pease (Couchbase) | Simon Croome (StorageOS) | Peter Hack (Dynatrace) | Jason Mimick (MongoDB)
N/A
Video

State of the Platform Services: Service Mesh and Beyond
Brian Redbeard Harrington (Red Hat) | Steve Dake (IBM)
Slides
Video

How to Deliver OpenShift-As-A-Service (just like Red Hat)
Jeremy Eder (Red Hat)
Slides
Video

OpenShift @ Omnitracs Case Study
Andrew Harrison (Omnitracs) | Brian Tomlinson (Red Hat)
Slides
Video

Reception Sponsor Lightning Talk: Bare Metal HCI for Red Hat OpenShift
Hiral Patel (Diamanti)
Slides
Video

OpenShift and Machine Learning at ExxonMobil
Cory Latschkowski (ExxonMobil) | Audrey Reznik (ExxonMobil)
Slides
Video

What’s Next AMA Panel with Red Hat OpenShift Engineers, Project Leads & Product Managers
Joe Fernandes (Red Hat)
N/A
Video

 
To stay abreast of all the latest releases and events, please join OpenShift Commons and sign up for our mailing lists and Slack channel.
What is OpenShift Commons?
Commons builds connections and collaboration across OpenShift communities, projects, and stakeholders. In doing so we’ll enable the success of customers, users, partners, and contributors as we deepen our knowledge and experiences together.
Our goals go beyond code contributions. Commons is a place for companies using OpenShift to accelerate its success and adoption. To do this we’ll act as resources for each other, share best practices and provide a forum for peer-to-peer communication.
Join OpenShift Commons today!
The post Recap: OpenShift Commons Gathering at Kubecon/NA San Diego [Videos Uploaded] appeared first on Red Hat OpenShift Blog.

Application Migration with Container-native virtualization

More and more frequently, modern applications are choosing a container-first development and deployment paradigm built on the foundation of Kubernetes. However, not all applications are fully modernized and containerized micro services. Many applications are a hybrid of architectures and technology which have existed for years, even decades. This can add complexity, both to the application architecture and management overhead, when a container-based, cloud-native application component needs to access existing functionality which is virtual machine based. 
Container-native virtualization provides flexibility during the modernization process so that you can focus on the most critical aspects first, while still being able to access, manage, and consume VM-based aspects using the new Kubernetes-centric tools. Based on the KubeVirt project, recently accepted by the CNCF, Container-native virtualization manages both virtual machines and containers through a single control plane saving time, resources, and budget. Red Hat Container-native virtualization delivers KubeVirt functionality directly to OpenShift customers and helps to manage both virtual machines and OpenShift deployments from a single platform. This single platform simplifies the management of virtual machines and containers with a common Kubernetes interface that standardizes orchestration, networking, and storage management while also supporting the long term move to containers. 
What does this mean for the developer? Regardless of where your company is in their digital transformation, you have a platform to develop on. Many legacy applications find it too complex or expensive to move to containers. Container-native virtualization allows you to continue using existing virtual machines without the overhead. And because Container-native virtualization is Kubernetes-native you keep the structure of your existing virtual machines on a platform that takes advantage of all things Kubernetes. 
If you have an application built on .NET and Windows Server 2012 R2, as an example, but you’re ready to start bringing Kubernetes into the equation, Container-native virtualization may be a path to do so. In this scenario legacy applications built on .NET continue to live on in its original state, without change. Little effort is needed to migrate this workload in its existing virtual machine to Container-native virtualization. Meanwhile, those applications that are ready to start development in a Kubernetes-native state can do so. Typically, this would cause major complexities, but with Container-native virtualization both environments can be managed through a single management pane without compromising their unique architectures. Leveraging Container-native virtualization minimizes management time and resources and allows your application to live under its current requirements while delivering flexibility for future technology upgrades. 
To get a sample of KubeVirt and Red Hat Container-native virtualization, visit us next week at KubeCon in booth #D1 and join us for the KubeVirt Intro and Deep Dive sessions. 
The post Application Migration with Container-native virtualization appeared first on Red Hat OpenShift Blog.

Improve business performance with better cloud container monitoring

Today’s hybrid multicloud reality can be a double-edged sword. While the transformation can increase speed, scale and flexibility, it can also limit your view of performance and security within containers. In fact, an IBM and McKinsey research report notes that 67 percent of those surveyed consider management consistency a priority concern.
Imagine transferring materials to a sealed container and losing track of what’s inside. A comparable dilemma can occur when transitioning from traditional infrastructure and applications to containers — gaps can occur in visibility. Another problem is that container isolation often creates concerns about security blind spots.
Cloud-native containers: What you can’t see can hurt you
Containerized environments move between clouds, so you have a level of portability in what is considered cloud-native. Containers bring a lot of complexity into the environment and make it very important to have visibility into what that environment looks like as it changes … and it’s changing constantly.
Regardless of what phase your enterprise is in during its cloud-native digital transformation, closing the visibility gap within your infrastructure and applications is a critical challenge that must be addressed.
If you’re using container and Kubernetes services, you need to monitor your applications to observe performance and security. You also need to audit, report and prove your environment is secure. The alternative is potential downtime and vulnerability to costly security incidents that can have compliance ramifications.
Container monitoring: Close the gap
How do you address this issue to ensure reliable and high-performing cloud-native applications?
Fortunately, Sysdig created a unified platform designed to deliver end-to-end visibility and security for containers. The solution runs on IBM Cloud technology and is called IBM Cloud Monitoring with Sysdig. It’s a fully managed enterprise-grade container monitoring service that helps administrators, DevOps teams and developers monitor, troubleshoot, define alerts and design custom dashboards.
The value Sysdig brings is the way our product operates — we watch all this system activity and then begin to map the pieces together. Sysdig keeps track of it, inventories it, and as things change in that inventory and applications move from one location to another, we see in real-time what’s different and give organizations the ability to manage all of it.
The Sysdig solution helps enterprises achieve robust visibility and control, including SecOps risk mitigation and greater DevOps efficiency. It supports the IBM Cloud Kubernetes Service, the IBM managed container server for rapid application delivery, and IBM Cloud Private, the IBM application platform for developing and managing on-premises, containerized applications. With this combination, enterprises experience less risk, better performance and a 40 percent reduction in mean time to repair.
Kubernetes and container monitoring: Better together
Sysdig teamed with IBM as both companies bet on the Kubernetes environment emerging as the industry standard. We were looking for a company to help in the Kubernetes environment.
Sysdig had a lot of synergy between what IBM was trying to achieve with their business around the cloud and what we built around our adoption of Kubernetes early on as well. IBM has created a very robust Kubernetes service where you can virtualize the underlying infrastructure. Sysdig can use those services, but it’s very important that we do that within an environment where we can deliver high levels of availability to customers.
According to CRN, a news source for solution providers and technology integrators, Sysdig is one of The 10 Hottest Container Startups of 2018. It’s driving innovation and delivering enterprise-ready solutions for deploying cloud-native applications at a massive scale.
Using this solution, you can:

Accelerate diagnosis and resolution of performance and security incidents
Control the costs of your monitoring infrastructure
Visualize your entire environment
Deploy securely, verify compliance and block threats at runtime
Get critical Kubernetes and container insights for dynamic microservice monitoring
Reduce the impact of abnormal situations with proactive alerts
Better manage and control user access to data
Troubleshoot applications and infrastructure

Our partnership with IBM is about helping customers that are beginning to deploy their applications into the IBM Kubernetes environment operationalize environments so they can confidently run cloud-native workloads and continue to grow.
Get started:
For more information about IBM Cloud Monitoring with Sysdig and to get started, visit:

IBM Cloud Monitoring with Sysdig
How to increase observability by using IBM Cloud Monitoring with Sysdig
Get started with IBM Cloud Monitoring with Sysdig

The post Improve business performance with better cloud container monitoring appeared first on Cloud computing news.
Source: Thoughts on Cloud

Mirantis Launches Kubernetes-as-a-Service (KaaS) Across Bare Metal, Public and Private Clouds

New product enables a self-service experience for developers and app owners across a fleet of K8s clusters with automated management and upgrades
KubeCon North America, November 19, 2019 —
Mirantis, the open cloud infrastructure company, announced the availability of its Kubernetes-as-a-Service (KaaS) beta release this morning at KubeCon North America. The continuously-updated K8s platform enables developers to create and manage Kubernetes clusters on demand through APIs or UI and eliminates the burden of managing a full stack of K8s components.
“Kubernetes is creating the new way for enterprises to build and run software as they move to cloud. However, lifecycle management for a fleet of Kubernetes clusters with full stack support is an unsolved challenge,” said Dave Van Everen, SVP Marketing, Mirantis. “With Mirantis KaaS, enterprises get zero touch, self-service Kubernetes clusters with a consistent developer experience across public clouds and on-prem infrastructure.”
At the heart of Mirantis KaaS is an automated lifecycle management service. It enables continuous, automated updates of the Kubernetes stack and related components, without impacting workloads. Moreover, end users can decide when they’d like to upgrade their self-service clusters. Mirantis KaaS deploys clusters in an HA configuration by default and utilizes built-in K8s features for rolling updates; therefore applications running on Mirantis KaaS will not experience downtime during an upgrade.
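Zero-downtime upgrades of this kind rely on standard Kubernetes rolling-update mechanics; for a Deployment, the relevant strategy settings look roughly like this (values illustrative, not Mirantis defaults):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
```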
With Mirantis KaaS enterprises can now:

Consume K8s as a service on any public cloud and on-prem, in either multi-cloud or hybrid configurations

Create a consistent developer experience on any public cloud or on-prem infrastructure, with appropriate enterprise security and governance
Enable application portability from one cloud to another and on-prem
Dramatically reduce the burden and cost of operating a large fleet of K8s clusters

The Mirantis KaaS beta release supports:

Bare Metal
Public Cloud
Private Cloud

Mirantis KaaS will be generally available in early 2020. The beta software is available for download here: https://www.mirantis.com/kaas
Join Mirantis for a live demo of the KaaS solution on December 12 at 10 AM PST: https://info.mirantis.com/live-demo-kaas
If you’re at KubeCon, RSVP to the K8sOrDie! party for an unforgettable evening of intrigue and fun: https://k8sordie.com/party/.

The post Mirantis Launches Kubernetes-as-a-Service (KaaS) Across Bare Metal, Public and Private Clouds appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Create and manage an OpenStack-based KaaS child cluster

Once you’ve deployed your KaaS management cluster, you can begin creating actual Kubernetes child clusters. These clusters will use the same cloud provider type as the management cluster, so if you’ve deployed your management nodes on OpenStack, your child cluster will also run on OpenStack.
The general process looks like this:

Create an empty cluster.
Add machines to the cluster.  As part of this process, Kubernetes gets deployed on the machines.
Download the KUBECONFIG so that you can access the cluster.  

We’re going to cover all of those steps in this article.  Let’s start by creating the cluster.
Create a child cluster
We’ll start by creating a child cluster based on OpenStack.  The general process for using other cloud providers is similar.

The first thing we need to do is gather artifacts from the host cloud itself. Log into your OpenStack Horizon dashboard and click API Access -> Download OpenStack RC file -> OpenStack clouds.yaml file.

Next we need the SSH key we’ll use to access the host machines.  If you haven’t already got one, you can create a new one from the command line. Make note of the file in which you store it:
$ ssh-keygen -t rsa -b 4096 -C "you@example.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/nchase/.ssh/id_rsa): kaas 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in kaas.
Your public key has been saved in kaas.pub.
The key fingerprint is:
SHA256:DKSxJ6ChGJJwwBnUPd3kEAXxn4TmKpTTKObCd3vtAeo you@example.com
The key’s randomart image is:
+---[RSA 4096]----+
|X=* o o=Bo       |
|=B o B .+..      |
|+   + + = .     |
|     o+oo o .    |
|   o = oS. o     |
|. o o o o        |
| o o + …       |
|  o o o. ..      |
|     E. ..       |
+----[SHA256]-----+

Now log in to the KaaS web UI using credentials for a user with either operator or writer permissions. Your administrator will have set this up ahead of time. (If you are the administrator, see the documentation for information on creating users.)

Select the required namespace.  Your administrator will have set this up ahead of time as well.  (Again, if you are the administrator, see the documentation.)

On the upper right side of the namespace page, click SSH keys -> Add SSH Key.
Name your key and click Upload Key to upload the public key file.  Make sure that you don’t upload the private key; for example, we named our key kaas, so the public key file is kaas.pub.

On the upper right side of the namespace page, click Credentials -> Add Credential.

You can add the cloud information manually, but it’s typically much easier to click the upload clouds.yaml link and upload the file we created in step 1. Uploading the file will auto-populate all of the required fields except the Password, which you’ll have to enter manually.  This is the same password you used to log into Horizon.

Scroll down and click the Create button to finish creating the credential.

Go back to the Clusters tab and click Create Cluster.

Name your cluster and decide what features you want to enable. KaaS enables you to create clusters with Istio service mesh, Harbor registries, and the Kubernetes dashboard simply by clicking checkboxes. You can also enable the StackLight Logging, Monitoring and Alerting option, as well as configure alerts. For the moment, however, we're going to stick with a vanilla cluster: just Kubernetes. At the time of this writing, the most current version of Kubernetes available is 1.15.3, but you also have the option to choose Kubernetes 1.14.6. You always have the option to update your clusters, and as new versions are added to KaaS they will be made available in the UI. Click Create to create the empty cluster.
Now you’ve got a cluster. When the PENDING status disappears, you can add actual capacity to it.

Add Machines to your KaaS cluster
Now that you have a cluster, you need to add machines on which the cluster will actually run.  For a bare metal cluster, those machines will be actual servers, but for an OpenStack-based cluster, they will be OpenStack VMs.  (Note that you don’t create these VMs manually; KaaS will take care of it for you in the background.)
Once you add machines to a cluster through the UI, KaaS automatically provisions them and adds them to the actual Kubernetes cluster, so you can also use these instructions for scaling up your KaaS child cluster.

Choose the Machines tab, then click the Create Machine button.

Let’s start by creating the control plane.  Because we are deploying HA clusters by default, we want a minimum of 3 control plane nodes, so we’ll specify 3 and click the Control Plane checkbox.

Select the SSH key we added earlier and designate a username to associate with it, such as ubuntu. In this case, the flavor and image are OpenStack parameters, as is the Availability Zone. Click Create to add the machines to the cluster.
KaaS will create the machines and assign them an IP address.  You can watch this progress from the Machines tab.

While we're waiting on these servers, we can go ahead and add two compute nodes, the minimum for a KaaS cluster. The process is exactly the same, except that we don't check the Control Plane box. You might also decide to use larger machines, since they will actually be hosting your workloads.

It will take a few minutes, but the machines will cycle through deployment status of Pending, then Updating, and finally to Ready.  If you check your OpenStack dashboard, you’ll gradually see these machines come online.

When the cluster is ready, we can try it out.
Testing the deployed Kubernetes cluster
Once the machines in the cluster show a status of READY, you can test it out.

Start by downloading the kubeconfig by going to the Clusters tab and clicking the arrow for the cluster you want to use.

Enter your KaaS password and click Download.

Make sure you have the kubectl client installed according to the instructions for your operating system.
Next you’ll need to go to the command line and point to the kubeconfig file you downloaded in the previous steps:
$ export KUBECONFIG=~/Downloads/kubeconfig-kaasdemo.yml

Now you can go ahead and check what resources are available:
$ kubectl get nodes
NAME                                             STATUS ROLES AGE VERSION
kaas-node-3cd4e4eb-7832-4ef8-a1c6-d1cc8ffca8dd   Ready <none> 20m v1.15.3-8+d06f4e3032941e
kaas-node-5d541aac-c52c-4c21-a867-5bc621926e81   Ready master 21m v1.15.3-8+d06f4e3032941e
kaas-node-81df4422-8c99-4d00-9108-52a003b45bae   Ready master 18m v1.15.3-8+d06f4e3032941e
kaas-node-8917ae73-e715-49e1-8684-c11d3b4216b2   Ready master 25m v1.15.3-8+d06f4e3032941e
kaas-node-ee1f417d-9e57-4c0b-8952-7fc9c9363fa8   Ready <none> 20m v1.15.3-8+d06f4e3032941e
$ kubectl get namespaces
NAME              STATUS AGE
default           Active 25m
kube-node-lease   Active 25m
kube-public       Active 25m
kube-system       Active 25m

Now you have a fully functional Kubernetes cluster you can use like any other.  
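If you want to verify the cluster beyond listing resources, a quick throwaway deployment works well. The snippet below is an illustrative smoke test (the deployment name "smoke-test" is arbitrary), guarded so it is a no-op when kubectl is not on the PATH:

```shell
# Illustrative smoke test; the deployment name "smoke-test" is arbitrary.
# Guarded so the snippet does nothing when kubectl is not available.
SMOKE="skipped"
if command -v kubectl >/dev/null 2>&1; then
  kubectl create deployment smoke-test --image=nginx
  kubectl rollout status deployment/smoke-test --timeout=120s
  kubectl get pods -l app=smoke-test -o wide
  # Clean up when finished.
  kubectl delete deployment smoke-test
  SMOKE="attempted"
fi
echo "smoke test: $SMOKE"
```

If the rollout completes and the pod lands on one of the worker (non-master) nodes, the cluster is scheduling workloads correctly.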
Want to try it out for yourself?  Download the Mirantis KaaS public beta!
The post Create and manage an OpenStack-based KaaS child cluster appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

OpenShift Hive: Cluster-as-a-Service

Red Hat OpenShift has enabled enterprise developers to use a fast feedback loop during the development of platforms and applications. The idea of ‘as-a-service’ arose from cloud providers’ ability to offer on-demand consumption of services and products, and this increased flexibility can further ease an organisation’s development path to production.
Kubernetes and Red Hat OpenShift free organisations to choose platforms across a number of cloud providers without lock-in, as workloads are abstracted from vendor-specific constructs. Kubernetes, and Red Hat OpenShift Container Platform, provide the ability to run operators, where an operator can act as an organisation’s very own consumable on-demand service while providing a unique user experience to its intended audience.
As a developer having a personal on demand environment was once one of the reasons for the rise of “shadow IT”. Organisations have since moved from the days of having to build servers for additional workloads through the use of new models of IT services thanks to virtualisation, PaaS and public/private cloud in an effort to adopt the on-demand/as-a-service utopia and enable their consumers to have the freedom to develop and produce strong value proposition products in today’s competitive market.
OpenShift has become the platform of choice for many organisations. However, this can mean developers are somewhat restricted in consuming a PaaS environment because of the process and management that surrounds it, in accordance with internal IT regulations. OpenShift Hive is an operator that enables operations teams to easily provision new PaaS environments for developers, improving productivity and reducing the process burden imposed by internal IT regulations. Hive can do this in a true DevOps fashion while still adhering to an organization’s regulations and security standards.
We will be exploring this operator in depth throughout this blog post getting familiar with its installation and uses to enable teams to achieve push-button personalised OpenShift clusters from a primary Hive enabled cluster.
OpenShift Hive Operator Overview
Hive provides API-driven cluster provisioning, reshaping, deprovisioning and configuration at scale. Under the covers it leverages the OpenShift 4 installer to deploy a standardised operating environment, letting consumers create a day-1 cluster with ease on a number of cloud provider platforms: AWS, Azure, and GCP.
There are three core components to Hive’s architecture. The operator itself acts as a deployer: it manages configuration, ensures components are in a running state, and handles reconciliation of CRs (Custom Resources) and general Hive configuration. An admission controller acts as a control gate for CR webhook validation and approves or denies requests to core CRs. Finally, the Hive controllers reconcile all CRs.
Hive Operating Architecture

[1] – Architecture Diagram
Hive utilizes a relatively simple process. API callers create a ClusterDeployment, which defines the desired state of the cluster to be created. Hive is designed to work in a multi-tenant environment, so secrets and ClusterDeployments can live in a separate namespace.
Hive generates ClusterDeployments through its command-line utility, hiveutil. These objects can be viewed as both YAML and JSON. The created objects describe the state of the cluster to be generated, with options to choose the release image, DNS, SSH parameters and many other configuration settings through hiveutil’s extensive command-line flags.
Once configuration of the ClusterDeployment definition is completed and approved, Hive launches an install pod. The install pod is made up of multiple containers, which are used to download and copy the user defined release image and oc binaries. Hive then copies the installer binary to the Hive sidecar and executes the installation. Following the installation, admin passwords and kubeconfig are created as secrets and can be obtained from the CLI for easy access.
If a cluster deployment fails, Hive attempts to clean up cloud resources and retry indefinitely (with backoff). When a cluster is ready to be deleted, Hive invokes a deletion job and deprovisions all cloud resources based on the infraID tag, using the openshift-install deprovision commands.
How To Use OpenShift Hive
Currently, Hive is not available in the OpenShift OperatorHub; it needs to be installed using one of a number of methods, which are covered in the GitHub documentation. From your control/bastion node, there is a set of core dependencies that should be installed and resolved prior to deploying Hive. These include, but are not limited to: Go, oc, Kustomize and MockGen. Most dependencies can be installed via yum or go get. Ensure that $GOPATH is set to your project directory. A useful command to view your Go environment variables is go env, as is go get -d ./..., which uses a special pattern to recursively search for and resolve go-gettable dependencies in the project.
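The dependency setup described above might be sketched as follows. The workspace path is a hypothetical example, and package installation commands will vary by distribution:

```shell
# Hypothetical workspace path; point GOPATH at your own project directory.
export GOPATH="$HOME/projects/hive-workspace"
mkdir -p "$GOPATH"

# Inspect the Go environment once the toolchain is installed.
if command -v go >/dev/null 2>&1; then
  go env GOPATH
  # From the project root, resolve go-gettable dependencies recursively:
  # go get -d ./...
else
  echo "Go toolchain not installed yet"
fi
```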
Once dependencies have been met, it’s time to install the Hive operator. In this guide, I installed Hive from the github repository using the latest master images to ensure I had the most recent capabilities and features.
Installation process
As a privileged user on an OpenShift cluster or OpenShift SDK/MiniShift, we will create a new project. The architecture described above uses a multi-tenant model, keeping the operator and its core resources in a separate project can be seen as good practice. Follow the steps below to install and deploy a Hive operator in its own namespace.
oc new-project hive

git clone https://github.com/openshift/hive.git
cd hive

make deploy

If you have correctly installed all the required dependencies and ensured the Go paths point to the appropriate places, the Hive operator should now be running its installation process. To verify this, the following command should provide some context and confirm that resources have been, or are in the process of being, created.
oc get pods -n hive
NAME READY STATUS RESTARTS AGE
hive-controllers-xxxxxxxxxx-xxxxx 1/1 Running 0 1m
hive-operator-xxxxxxxxxx-xxxxx 1/1 Running 0 1m
hiveadmission-xxxxxxxxxx-xxxxx 1/1 Running 0 1m
hiveadmission-xxxxxxxxxx-xxxxx 1/1 Running 0 1m

Following the installation of the Hive operator, validate that you have access to the hiveutil client commands. This will be the core way to interact with Hive for cluster deployments and to generate the cluster config. If this is not the case, run the following command from the project’s directory:
make hiveutil

Creating OpenShift Clusters
If you’ve followed along so far, you should now have the Hive operator deployed in its own namespace. At this point, we need to ensure our target cloud provider environment (AWS) has a valid Route 53 zone, which is standard practice in all OpenShift 4.x deployments.
Once the zone information has been confirmed, OpenShift will need a number of “secrets” made available to it either globally, per namespace or as environment variables; this will depend on the OpenShift architecture you or your organisation has chosen to use. First, we will want to create a file containing our docker registry pull secret (obtained from try.openshift.com). To ensure this pull secret is used for our cluster deployments, the file will be created in ~/.pull-secret.
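As a sketch, the pull-secret file can be created with owner-only permissions like this. The JSON body below is a placeholder; paste the actual pull secret obtained from try.openshift.com:

```shell
# Placeholder pull-secret content; replace with the real JSON obtained
# from try.openshift.com. umask/chmod keep the file owner-readable only.
umask 077
cat > "$HOME/.pull-secret" <<'EOF'
{"auths":{"cloud.openshift.com":{"auth":"PLACEHOLDER","email":"you@example.com"}}}
EOF
chmod 600 "$HOME/.pull-secret"
```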
Clusters created with OpenShift Hive do not have to be bound to the platform the job has been instantiated from. For example, deployments can be performed from a local Minishift instance to AWS, Azure, or GCP as the target provider.
In this blog post, the cloud provider for the deployment will be AWS. As such, Hive will need to be aware of AWS-related credentials. To accomplish this, we will set our environment variables with credential information, allowing Hive to inherit these during the cluster creation process. To check which environment variables have been set, use the env command. Alternatively, credentials can be defined in a file and specified at cluster creation time using --creds-file.
env
AWS_SECRET_ACCESS_KEY=blah-blah-secret-access-key-id-blah-blah
AWS_ACCESS_KEY_ID=blah-blah-secret-access-key-id-blah-blah

We now have created the core secrets OpenShift and Hive needs for the deployment. To generate a cluster using the hiveutil CLI, we can run the following command:
bin/hiveutil create-cluster --base-domain=mydomain.example.com --cloud=aws mycluster

Under the covers, hiveutil makes use of a template file with prepopulated options, leaning heavily on environment variables being defined (for example, cloud provider credentials). If these aren’t set, many of the options fall back to machine or template defaults, e.g. the default SSH key or OpenShift image version. The previous create-cluster command will inherit the secrets created earlier, including the desired credentials and pull secrets. If all required fields are satisfied, hiveutil will proceed to generate a deployer pod and create an OpenShift cluster. But what if we wanted to be more granular when creating an OpenShift cluster?
We could explicitly provide parameters based on command line flags available to hiveutil. For example:
bin/hiveutil create-cluster --base-domain=mydomain.example.com --cloud=aws --release-image=registry.svc.ci.openshift.org/origin/release:x.x --workers=1 --ssh-public-key=examplesshstring --uninstall-once=false --ssh-private-key-file=blah mycluster

As you can see, this can become quite cumbersome and difficult to manage with only a small set of configuration parameters defined.
A more manageable option would be to generate a static file and configure the desired specification before initiating a cluster deployment. As with all things OpenShift, we can view the ClusterDeployment in yaml and save the output to a file, if desired, enabling users to validate the configuration.
bin/hiveutil create-cluster --base-domain=mydomain.example.com --cloud=aws mycluster -o yaml

bin/hiveutil create-cluster --base-domain=mydomain.example.com --cloud=aws mycluster -o yaml > clusterdeployment-01.yaml

This gives an admin the ability to define their ClusterDeployment definition in a yaml text file before running a native oc apply, seen below, to create the objects specified. In this document, under the provisioning header is an example of what we might expect to see in a default cluster definition with minimal user configuration applied.
oc apply -f clusterdeployment-01.yaml

Up to this point, we have been able to quickly define what a desirable OpenShift cluster looks like through the creation of secrets and ClusterDeployment objects in OpenShift using a native oc apply command.
As Hive attempts to deploy a new cluster, an installer pod consisting of multiple containers will be created, during which the management sidecar container copies the installer binary from the other containers in the pod. Next, it invokes the OpenShift installer, inheriting the variables and secrets defined earlier.
We can easily create another cluster with a different set of parameters by duplicating or recreating the ClusterDeployment file and updating the cluster name and desired parameters, such as instance types. Once completed, all that is left to do is trigger a deployment using oc commands; for example:
oc apply -f clusterdeployment-02.yaml

As with a native install, we can tail logs and get a view of what the installer is doing using oc commands as follows:
oc get pods -o json --selector job-name==${CLUSTER_NAME}-install | jq -r '.items | .[].metadata.name'

oc logs -f -c hive

oc exec -c hive -- tail -f /tmp/openshift-install-console.log

Once the OpenShift cluster has been installed, as indicated in the logs, a secret will be created for the kubeadmin password. This will be required for command-line access or for accessing the web console. We can retrieve the password and web console URL using the following:
oc get cd ${CLUSTER_NAME} -o jsonpath='{ .status.webConsoleURL }'

oc get secret `oc get cd ${CLUSTER_NAME} -o jsonpath='{ .status.adminPasswordSecret.name }'` -o jsonpath='{ .data.password }' | base64 --decode

For users more fluent with Hive and its functionality, the admin kubeconfig is available as a secret after cluster creation. It can be retrieved using the following command:
oc get secret -n hive xyz-xyz-xyz-admin-kubeconfig -o json | jq ".data.kubeconfig" -r | base64 -d > cluster.kubeconfig

When a cluster is ready to be deleted, we simply delete the ClusterDeployment; Hive then deploys a deprovisioning pod and destroys all cluster resources based on infraID tags:
oc delete clusterdeployment ${CLUSTER_NAME} --wait=false

OpenShift Hive: Extending Its Usability Through Templates
Using Hive, we can create clusters through hiveutil create-cluster commands or with native oc commands such as oc apply -f clusterDeployment.yaml. Each method is unique: hiveutil leverages command-line flags for parameter updates, while oc apply allows us to set key-value pairs in a ClusterDeployment file.
This can still be viewed as a time consuming process to a developer or DevOps engineer who has to recreate ClusterDeployment yaml definitions without having a repeatable mechanism to quickly deploy clusters at scale.
To extend the functionality of clusters-as-a-service, I had the idea to introduce user templating. OpenShift already makes use of environment variables to obtain values set by users. Therefore, a command such as oc process allows for a cleaner clusters-as-code approach, using cluster definition files as templates that can be reused in a repeatable and malleable fashion.
Sticking with a similar change in the ClusterDeployment, we can simply parameterise the number of worker nodes, instance sizes, and cluster name. Below are the variables I want my ClusterDeployment to inherit.
$ cat cluster-deployment.params
CLUSTER_NAME=gallifrey
WORKER_COUNT=1
INSTANCE_TYPE=m4.xlarge
BASE_DOMAIN=example.cloud

The stub of code shown below is a snippet of a ClusterDeployment template file which has been parameterised to inherit the variables defined in a cluster-deployment.params file or from environment variables. We can now supply a ClusterDeployment template with the desired variables file and validate that the parameterisation has had the desired outcome using the oc process command.
A bit more on oc process: it makes use of OpenShift template files and their supplementary parameter files. For its deployment, OpenShift Hive uses a template file to create objects and populates the ClusterDeployment using environment variables; if no alternatives have been specified on the command line, it falls back on defaults specified in the template itself.
Linked is a Hive template file. The parameters component of the template is where user configuration substitutes template defaults. If defaults are defined (e.g. OpenShift version) and not overridden at the points described earlier, they become fallback options for the template file. We can toggle and append parameters to be more or less opinionated based on the desired approach. The rest of the template file is a highly parameterised version of the one generated from the oc apply seen earlier; a link to the default template GitHub file is here.
For the purposes of this blog I will pre-populate a number of fields in the template, so we can focus on a select few to showcase a cluster-as-a-service development model. The fields we want to parameterise have already been defined and used in this blog: name, domain, workers and instance type.
By using both a ClusterDeployment template and a populated parameters file with oc process, the notion of clusters as a service becomes a reality. Below is a snippet of pre/post templated files following the oc process command.
- apiVersion: hive.openshift.io/v1alpha1
  kind: ClusterDeployment
  metadata:
    creationTimestamp: null
    name: ${CLUSTER_NAME}
  spec:
    baseDomain: ${BASE_DOMAIN}
    clusterName: ${CLUSTER_NAME}
    compute:
    - name: worker
      platform:
        aws:
          rootVolume:
            iops: 100
            size: 22
            type: gp2
          type: m4.large
      replicas: ${WORKER_COUNT}

- apiVersion: hive.openshift.io/v1alpha1
  kind: ClusterDeployment
  metadata:
    creationTimestamp: null
    name: gallifrey
  spec:
    baseDomain: example.cloud
    clusterName: gallifrey
    compute:
    - name: worker
      platform:
        aws:
          rootVolume:
            iops: 100
            size: 22
            type: gp2
          type: m4.large
      replicas: 1

A final step following the processing of a template is the creation of the object itself.
oc process -f clusterdeployment-template.yaml --params-file=clusterdeploy.params | oc apply -f-

This applies and creates the objects templated with oc process. A deployer pod is created and builds an OpenShift cluster to the desired specification. Through oc process and a parameters file, we can increase the speed of delivery in the creation of OpenShift clusters, whilst adopting a self-service model when deploying from namespaces with RBAC applied. Defining a distinct parameter hierarchy would further enhance the speed of cluster development, where variables can be enforced at a cluster or project level, providing a developer with the minimal options needing configuration to build clusters specific to their use case.
oc process -f clusterdeployment-template.yaml --params-file=openshift-app1/clusterdeploy.params | oc apply -f-

Conclusion
To conclude, in this blog post we installed and used OpenShift’s Hive operator. During installation we resolved the dependencies needed to enable operator functionality, and from there we were able to build, destroy and reshape OpenShift clusters on demand.
The aim of this blog post was to deliver insight and a hands-on look into clusters-as-a-service through OpenShift’s Hive operator, using a number of deployment mechanisms including hiveutil and native OpenShift commands like oc apply and oc process. We have also seen its practicality extended when combined with a template file, which allows for fast processing and a reduction in the time taken to create clusters. For more information, go to the OpenShift Hive GitHub repository, where you can find all the resources needed to follow along and try it for yourself.
To extend the use of Hive even further, we can introduce an automation pipeline. This would be pivotal for developers who do not have direct access to clusters or where the creation of user generated ClusterDeployment files would not be an acceptable practice. We can achieve this through the use of git hooks and declarative pipelines, invoking the creation of OpenShift clusters based on updates to files in an organisations source control.
An addition to the automation pipelines described above is automated testing: when the pipeline succeeds, organisations can promote changes to different projects in true GitOps fashion, while in failure cases errors can be addressed and remediated. This provides auditability and accountability for consumers of Hive’s functions, aligning with enterprise ways of working. We will revisit this in a follow-up article, exploring how we can integrate and extend Hive’s functionality in a GitOps fashion.
The post OpenShift Hive: Cluster-as-a-Service appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

ExxonMobil and IBM Cloud fuel customer satisfaction at the pump

As digital technology has dramatically transformed how people live, it has also reshaped how consumers experience brands, turning transactions into experiences, and experiences into relationships. But digital transformation is more than simply building an app and going digital. It’s also about changing the way people think.
Many people can remember having to go to the bank to deposit a check with a teller. Overnight, the ATM transformed the way consumers interacted with banks, and the days of depending on tellers to cash checks were over.
That was a “wow” moment for me. The convenience this innovation created helped cement my loyalty to my bank. My expectations of my bank were reset, and that “wow” factor became the minimum requirement the bank needed to meet to keep my business. Fast forward to the present, and the check I once presented to the bank teller, I now photograph and deposit with my smartphone from the comfort of home.
Returning to the days of driving to a bank to deposit a check is now unthinkable. Savvy business innovation has created customers attuned to frictionless transactions. Businesses interested in thriving must meet that demand by continuing to provide exponentially greater “wow” experiences.
The same approach is leading ExxonMobil, an innovator in technology working to solve the world’s toughest energy challenges, to innovate at the fuel pump. In 1986, we were the first fuel retailer in the United States to offer a pay-at-the-pump solution, a groundbreaking innovation that allowed customers to stay by their vehicles while fueling and led to self-service fueling at any time.
Innovating payment at the pump
In 1997, we introduced SpeedPass+, now called Exxon Mobil Rewards+, a touchless payment system using RFID-enabled technology. Similar to electronic payment systems commonly used for tolls, buses and subways, the Exxon Mobil Rewards+ app provided our customers a tag for their key rings that allowed them to fuel up while keeping their credit cards safely stowed in their wallets.
When mobile technology transformed digital payment systems, ExxonMobil recognized an opportunity to integrate our payment and loyalty programs into a single mobile application. Building on our longstanding relationship with IBM, we worked with IBM Services and IBM iX to develop the first mobile fueling app deployed by a major retail brand. Built on the IBM Cloud, the Exxon Mobil Rewards+ app gives our valued customers the freedom to earn, pay and save, all from the comfort of their cars.

Putting customer satisfaction first
The Exxon Mobil Rewards+ app puts the customer front and center. It’s more secure than swiping a credit card and eliminates the need to keep track of yet another loyalty card. The app allows consumers to pay for fuel using their favorite mobile device, from smartphones to watches to in-car dashboards, all without leaving the comfort of their vehicles. Customers can earn loyalty points through Exxon Mobil Rewards+ on every purchase of fuel, car wash or convenience store purchase. The app accepts Apple Pay, Samsung Pay, Google Pay and all major credit or debit cards.
To date, the Exxon Mobil Rewards+ app has been downloaded more than 1 million times and can be used in more than 11,000 fuel stations in the United States and Canada. Download the Exxon Mobil Rewards+ app here.
To learn more about how IBM and ExxonMobil developed Speedpass+, view the video and watch the replay of this session at Think 2019 in San Francisco.
The post ExxonMobil and IBM Cloud fuel customer satisfaction at the pump appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Introduction to GitOps with OpenShift

In this blog post we are going to introduce the principles and patterns of GitOps and how you can implement these patterns
on OpenShift.
What is GitOps?
GitOps in short is a set of practices to use Git pull requests to manage infrastructure and application configurations.
Git repository in GitOps is considered the only source of truth and contains the entire state of the system so that the trail of
changes to the system state are visible and auditable.
Traceability of changes in GitOps is no novelty in itself as this approach is almost universally employed for the application source
code. However GitOps advocates applying the same principles (reviews, pull requests, tagging, etc) to infrastructure and application
configuration so that teams can benefit from the same assurance as they do for the application source code.
Although there is no precise definition or agreed upon set of rules, the following principles are an approximation of what constitutes a GitOps practice:

Declarative description of the system is stored in Git (configs, monitoring, etc)
Changes to the state are made via pull requests
The state of the running system is reconciled with the state in the Git repository

GitOps Principles

The definition of our systems is described as code
The configuration for our systems can be treated as code, so we can store it and have it automatically versioned in Git, our single source of truth.
That way we can easily roll out and roll back changes to our systems.

The desired system state and configuration is defined and versioned in Git
Having the desired configuration of our systems stored and versioned in Git gives us the ability to easily roll out and roll back changes to our systems and applications.
On top of that, we can leverage Git’s security mechanisms to ensure the ownership and provenance of the code.

Changes to the configuration can be automatically applied using PR mechanisms
Using Git Pull Requests we can manage in an easy way how changes are applied to the stored configuration, you can request reviews from different team members, run CI tests, etc.
On top of that you don’t need to share your cluster credentials with anyone, the person committing the change only needs access to the Git repository where the configuration is stored.

There is a controller that ensures no configuration drifts are present
As the desired system state is present in Git, we only need software that makes sure the current system state matches the desired state. If the states differ, this software
should be able to self-heal or report the drift, depending on its configuration.
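The core check such a controller performs can be illustrated with a small shell sketch. The function is generic: it compares a desired-state manifest from Git with a live manifest exported from the cluster (file names below are illustrative):

```shell
# Generic drift check: compare a desired-state manifest (from Git) with a
# live manifest exported from the cluster. File names are illustrative.
detect_drift() {
  # $1: desired-state file, $2: live-state file
  if diff -u "$1" "$2" >/dev/null 2>&1; then
    echo "in-sync"
  else
    echo "drift-detected"
  fi
}

# Against a real cluster you would first export live state, e.g.:
#   oc get deployment myapp -o yaml > /tmp/live.yaml
#   detect_drift git/deployment.yaml /tmp/live.yaml
```

Real GitOps controllers do semantically aware comparisons of Kubernetes objects rather than a textual diff, but the reconcile-on-difference idea is the same.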

GitOps Patterns on OpenShift
On-Cluster Resource Reconciler
In this pattern, a controller on the cluster is responsible for comparing the Kubernetes resources (YAML files) in the Git repository that acts as the single source of truth, with the resources on the cluster. When a discrepancy is detected, the controller would send out notifications and possibly take action to reconcile the resources on Kubernetes with the ones in the Git repository. Anthos Config Management and Weaveworks Flux use this pattern in their GitOps implementation.

External Resource Reconciler (Push)
A variation of the previous pattern is that one or a number of controllers are responsible for keeping resources in sync between pairs of Git repositories and Kubernetes clusters. The difference with the previous pattern is that the controllers are not necessarily running on any of the managed clusters. The Git-k8s cluster pairs are often defined as a CRD which configures the controllers on how the sync should take place. The controllers in this pattern compare the Git repository defined in the CRD with the resources on the Kubernetes cluster that is also defined in the CRD, and take action based on the result of the comparison. ArgoCD is one of the solutions that uses this pattern in its GitOps implementation.
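As a concrete illustration of the CRD-per-pair idea, ArgoCD models each Git-repo/cluster pair as an Application resource. The manifest below is a sketch: the repo URL, path and namespaces are placeholders:

```shell
# Write an ArgoCD Application manifest describing one Git-repo/cluster pair.
# repoURL, path and namespaces are placeholders.
cat > application.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/app-config.git
    targetRevision: HEAD
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      selfHeal: true
EOF
# Against a cluster with ArgoCD installed, the next step would be:
#   oc apply -f application.yaml
```

The syncPolicy.automated stanza is what turns the controller from drift detection into drift correction.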

GitOps on OpenShift
Multi-cluster Kubernetes Infrastructure Administration
The increased adoption of Kubernetes within organizations, together with multi-cloud strategies and edge computing, is
increasing the number of OpenShift clusters per customer.
Edge computing use cases may reach a massive scale of 100s to 1000s of deployments per customer. The result is that customers have to manage
multiple independent or cooperative OpenShift clusters running on-prem and on public clouds.
Some of the use cases around this problem space are:

Ensure clusters have similar state (configs, monitoring, storage, etc)
Recreate (or recover) clusters from a known state
Create new clusters with a known state
Rollout a change to multiple OpenShift clusters
Rollback a change to multiple OpenShift clusters
Associate templated configuration with different environments

Application Configuration
Applications often get deployed to multiple clusters (dev, stage, etc) throughout their life cycles before they reach production. Furthermore, the availability and
scalability requirements for applications frequently drive customers to deploy applications across multiple clusters on-premise and on public cloud across regions
or simply for burst-out capabilities.
Some of the customer needs are:

Promote applications (binary, config, etc) across clusters (e.g. dev, stage, etc)
Rollout application changes (binary, config, etc) to multiple OpenShift clusters
Rollback application changes to previous known stages

OpenShift GitOps Use-cases

Apply configs from Git
As a cluster admin, I want to store OpenShift cluster configs in a Git repository and have the cluster to apply them automatically, so that I can install a new cluster and bring it to an identical known state from the Git repository.

Sync with Secret Manager
As a cluster admin, I want to keep OpenShift secrets in sync with a secret manager like Vault, so that I can manage secrets in a secret management platform.

Detect config drift
As a cluster admin, I want OpenShift GitOps to detect and display a warning when cluster configs are not in sync with the designated Git repo, so that I can take action when there is a config drift.

Notify config drift
As a cluster admin, I want to be notified when OpenShift GitOps detects a config drift, so that I can take action when there is a configuration drift.

Sync on config drift
As a cluster admin, I want to perform a sync on an OpenShift cluster to sync with the configs stored in a Git repository when a config drift is detected, so that the OpenShift cluster would come back to a known state.

Auto-sync on config drift
As a cluster admin, I want to configure an OpenShift cluster to automatically sync with the configs stored in a Git repository when a config drift is detected, so that the OpenShift cluster would always be in sync with the config in the designated Git repository.

Multiple clusters in one registry
As a cluster admin, I want to define multiple OpenShift cluster configs in a single Git repository and apply them to clusters selectively, so that I can manage all cluster configs from a single source of truth.

Cluster config hierarchy (Inheritance)
As a cluster admin, I want to define a hierarchy of cluster configs (stage, prod, app portfolio, etc., with inheritance) in a Git repository, so that I can define configs that apply to one or multiple clusters.
As an example, if an admin defines a hierarchy of prod clusters → foo clusters → foo prod clusters in the Git repository, the union of the following configs is applied to the “foo” production cluster:

all production cluster configs
“foo” cluster configs
“foo” production cluster configs
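The union described above can be illustrated with plain files standing in for config layers (the file names and keys are made up for the example):

```shell
set -eu
mkdir -p hierarchy

# Layer 1: configs for all production clusters
echo "audit-logging: enabled" > hierarchy/prod.conf
# Layer 2: configs for all "foo" clusters
echo "region: us-east-1"      > hierarchy/foo.conf
# Layer 3: configs specific to the "foo" production cluster
echo "replicas: 5"            > hierarchy/foo-prod.conf

# The effective config for the "foo" production cluster is the union of all
# three layers, most general first so more specific layers can override keys
cat hierarchy/prod.conf hierarchy/foo.conf hierarchy/foo-prod.conf > hierarchy/effective.conf
cat hierarchy/effective.conf
```

In practice a tool such as Kustomize implements this layering with bases and overlays rather than simple concatenation.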

Templating and Overriding configs
As a cluster admin, I want to override a subset of inherited configs and their values, so that I can adjust the config for the specific clusters they are being applied to.

Granularly include and exclude configs
As a cluster admin, I want to define when a certain config should apply or not apply to clusters with certain characteristics, so that I can have granular control over including or excluding cluster configs.

Templating support
As a developer, I want to have a choice on how to define my application resources (Helm Chart, pure k8s yaml etc), so that I can pick the right format based on my application needs.
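Whichever format is chosen, both paths converge on plain manifests applied to the cluster. A sketch of that convergence (the chart path `./chart`, release name `my-release`, and manifest directory `./k8s` are assumptions for the example):

```shell
set -eu

render_manifests() {
  # Helm chart: render templates to plain manifests first
  if [ -d "./chart" ] && command -v helm >/dev/null 2>&1; then
    helm template my-release ./chart
  # Pure Kubernetes YAML: nothing to render
  elif [ -d "./k8s" ]; then
    cat ./k8s/*.yaml
  else
    echo "# no manifests found"
  fi
}

render_manifests > rendered.yaml
# Either way, the result is applied the same way:
#   oc apply -f rendered.yaml
echo "rendered $(wc -l < rendered.yaml) line(s) of manifests"
```

This is why GitOps tools like ArgoCD can support Helm, Kustomize, and raw YAML side by side: every format reduces to manifests before it reaches the cluster.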

GitOps Tools on OpenShift
ArgoCD
ArgoCD follows the External Resource Reconcile pattern: a central UI orchestrates one or many clusters and multiple Git repositories. One weakness is that if ArgoCD goes down, application management cannot be done.
Official Website
Flux
Flux follows the On-Cluster Resource Reconcile pattern. Since there is no central management of repository definitions, the remaining clusters can still manage their applications if one cluster goes down. One weakness is that the tool provides no central UI.
Official Website
Installing ArgoCD on OpenShift
In this blog series we are using ArgoCD, as it provides a great CLI and WebUI; future posts may explore alternatives such as Flux.
To deploy ArgoCD on OpenShift 4, follow these steps as a cluster admin:

Deploy ArgoCD components on OpenShift
# Create a new namespace for ArgoCD components
oc create namespace argocd
# Apply the ArgoCD Install Manifest
oc -n argocd apply -f https://raw.githubusercontent.com/argoproj/argo-cd/v1.2.2/manifests/install.yaml
# Get the initial admin password (in ArgoCD v1.x it is the argocd-server pod name)
ARGOCD_SERVER_PASSWORD=$(oc -n argocd get pod -l "app.kubernetes.io/name=argocd-server" -o jsonpath='{.items[*].metadata.name}')

Patch ArgoCD Server Deployment so we can expose it using an OpenShift Route
# Patch ArgoCD Server so no TLS is configured on the server (--insecure)
PATCH='{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"argocd-server"}],"containers":[{"command":["argocd-server","--insecure","--staticassets","/shared/app"],"name":"argocd-server"}]}}}}'
oc -n argocd patch deployment argocd-server -p "$PATCH"
# Expose the ArgoCD Server using an Edge OpenShift Route so TLS is used for incoming connections
oc -n argocd create route edge argocd-server --service=argocd-server --port=http --insecure-policy=Redirect

Deploy ArgoCD CLI Tool
# Download the argocd binary, place it under /usr/local/bin and give it execution permissions
curl -L https://github.com/argoproj/argo-cd/releases/download/v1.2.2/argocd-linux-amd64 -o /usr/local/bin/argocd
chmod +x /usr/local/bin/argocd

Update ArgoCD Server Admin Password
# Get ArgoCD Server Route Hostname
ARGOCD_ROUTE=$(oc -n argocd get route argocd-server -o jsonpath='{.spec.host}')
# Login with the current admin password
argocd --insecure --grpc-web login ${ARGOCD_ROUTE}:443 --username admin --password ${ARGOCD_SERVER_PASSWORD}
# Update the admin password
argocd --insecure --grpc-web --server ${ARGOCD_ROUTE}:443 account update-password --current-password ${ARGOCD_SERVER_PASSWORD} --new-password <your_new_password>

Now you should be able to use the ArgoCD WebUI and the ArgoCD CLI tool to interact with the ArgoCD server.
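As a quick next step once logged in, you could register a first application from a Git repository. The sketch below uses placeholder values (the repo URL, path, and app name are assumptions, not a real example repo), and only talks to the server when the `argocd` CLI is available:

```shell
set -eu

# Placeholder repo and app name -- substitute your own
REPO="https://github.com/example/my-app-config.git"
APP="my-app"

if command -v argocd >/dev/null 2>&1; then
  # Register the application and perform the first sync from Git
  argocd app create "$APP" --repo "$REPO" --path manifests \
    --dest-server https://kubernetes.default.svc --dest-namespace "$APP"
  argocd app sync "$APP"
else
  echo "would run: argocd app create $APP --repo $REPO --path manifests"
fi
```

After the sync, the application's health and sync status are visible in the WebUI, and any drift from the repository will be reported there.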

Next Steps
In future blog posts we will talk about multiple topics related to GitOps such as:

Multi-cluster management with GitOps
Disaster Recovery with GitOps
Moving to GitOps

The post Introduction to GitOps with OpenShift appeared first on Red Hat OpenShift Blog.