Community Blog Round Up 09 December 2019

As we sail down the Ussuri river, Ben and Colleen report on their experiences at the Shanghai Open Infrastructure Summit while Adam dives into Buildah.
Let’s Buildah Keystoneconfig by Adam Young
Buildah is a valuable tool in the container ecosystem. In an effort to get more familiar with it, and to finally get my hand-rolled version of Keystone to deploy on Kubernetes, I decided to work through building a couple of Keystone-based containers with Buildah.
Read more at https://adam.younglogic.com/2019/12/buildah-keystoneconfig/
Oslo in Shanghai by Ben Nemec
Despite my trepidation about the trip (some of it well-founded!), I made it to Shanghai and back for the Open Infrastructure Summit and Project Teams Gathering. I even managed to get some work done while I was there.
Read more at http://blog.nemebean.com/content/oslo-shanghai
Shanghai Open Infrastructure Forum and PTG by Colleen Murphy
The Open Infrastructure Summit, Forum, and Project Teams Gathering was held last week in the beautiful city of Shanghai. The event was held in the spirit of cross-cultural collaboration and attendees arrived with the intention of bridging the gap with a usually faraway but significant part of the OpenStack community.
Read more at http://www.gazlene.net/shanghai-forum-ptg.html
Quelle: RDO

Cloud modernization: Cultural and workplace adoption

Genuine digital transformation requires cloud modernization strategies that involve and affect people, processes and technology. In part one of this two-part series, I focused on cloud-based technologies that enable organizations to modernize infrastructures, platforms and applications.
This time around, I’ll examine the people aspect of cloud modernization and include some thoughts on process. I’ll cover:

Why it’s necessary to refresh and reskill your workforce
Why organizational restructuring is important
How to reengineer people and process
How to measure employee value in an era of transformation

The most effective digital modernization initiatives are holistic efforts that coordinate people, processes and technologies to improve business outcomes.
Refresh and reskill the workforce
Running IT operations in a cloud environment requires a different set of skills and knowledge than in a traditional application development world, because application and platform architectures are so dissimilar. Different engineering practices and design patterns therefore become relevant.
Consider the microservices architecture, in which a single application is made up of many small, loosely coupled and independently scalable and deployable components. If one instance of a service goes down, the same service can be respawned, ten times over if needed. Or consider the 12-factor methodology for building cloud-based services: developers and designers need to understand it and ingrain it into their designs.
Concepts like these weren’t that relevant in traditional application development. Understanding of a traditional infrastructure doesn’t necessarily translate to a cloud environment. For a cloud modernization initiative to succeed, enterprises must teach employees how to design, test and deploy native cloud-ready applications — or hire new employees who can.
The cloud model also calls for consuming more and more best-of-breed software-as-a-service (SaaS) capabilities provided and managed by third-party vendors. This heightens the need for consolidated service-level agreements and integrated service management skills in a multicloud environment, and these are very hard-to-find skills.
Restructure the organization
The notion of development best practices and operational excellence isn’t new. So why are concepts like DevSecOps so highly talked about now? The answer has more to do with organization than technology.
Five years from now, we may be laughing at the technology we have today. But whatever technology we’re using, we’ll want development and operations teams to be tightly integrated. In the microservices architecture, you own the end-to-end lifecycle of the service or domain. That’s very different from how traditional infrastructure IT teams work.
Cloud computing is increasingly managed by self-sustaining independent teams, each with their own specialties and organized around billing, claims or similar domains. There’s a squad lead. There’s a business analyst. There are a few designers and a few developers. There are probably some testers. There’s a DevOps engineer, or maybe a site reliability engineer (SRE). Each is part of the total squad — and squads can be replicated. As the industry matures, it’ll continue to trend toward that model.
Another trend in organizational restructuring is the growing adoption of centers for enablement. Centers of excellence or competencies are nothing new; centers for enablement are more hands-on and dynamic, focusing not on process but on performance and reuse of assets.
Adopt new rules and process reengineering
Site reliability engineering is all about building highly resilient applications and platforms. If a service suddenly goes down, other services remain working because they are independent. So, if a feature on an airline app stops working, users can still look at current flights or their airline bonus points because the problem is isolated.
The widespread use of automation and robotics has spawned a discipline called AIOps, which is revolutionizing service management. Enterprises initially took a reactive approach to monitoring systems and handling events. With new technology available, enterprises can be proactive about monitoring and management. AIOps allows organizations to be prescriptive: they can know what their systems and instrumentation will look like six months from now because the AI perceives trends and patterns.
Measure people and performance
In traditional IT development, developers are evaluated using metrics such as the number of defects per thousand lines of code (KLOC). But is this metric relevant in cloud-based development? Wouldn't a more valuable metric be the amount of time from ideation to deployment? And what about the percentage of a developer's time spent on building automation, or the number of reusable IT assets produced by individuals or squads? The idea is to use metrics that truly measure business value.
In a nirvana stage, mean time to recover a system should trend closely to time taken from ideation to deployment. This means the method and processes used for new requirements are exactly the same as those being used for bug fixes or other types of defects. They’re all treated as items in a prioritized list and will be consumed, built, packaged and deployed in exactly the same way.
Half measures simply won’t cut it. Digital modernization requires a cloud-first and automation-first strategy that emphasizes coordination between people, processes and technologies.
Learn more about IBM organizational change management services, as well as how guidance, migration, modernization, cloud-native application development and managed services from IBM professionals can help your journey to cloud.
The post Cloud modernization: Cultural and workplace adoption appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Microservices-Based Application Delivery with Citrix and Red Hat OpenShift

This is a guest post by Pushkar Patil. He is a Principal Product Manager at Citrix Systems, Inc.
 
Citrix is thrilled to have recently achieved Red Hat OpenShift Operator Certification (Press Release). This new integration simplifies the deployment and control of the Citrix Application Delivery Controller (ADC) to a few clicks through an easy-to-use Operator.
Before we dive into how you can use Citrix Operators to speed up implementation and control in OpenShift environments, let me cover the benefits of using the Citrix Cloud Native Stack and how it solves the challenges of integrating ingress in Kubernetes.
Benefits of Citrix Cloud Native Stack
 
A purpose-built software stack addressing the needs of stakeholders such as developers, DevOps, DevSecOps, SREs, and cluster admins. The picture below shows the components of the stack.

Citrix ADC is a feature-rich, application delivery controller that enhances the delivery and security of your microservices applications. Some of the key benefits include:
Production Grade Ingress 
Citrix ADC is proven to work at scale, providing features like advanced load balancing, TLS termination, L3-L7 protocol optimizations, and redundancy solutions to the internet’s largest web properties and thousands of enterprises.
Flexible
Citrix ADC supports architectural flexibility: Citrix has a complete array of ADC form factors for every environment (physical, virtual, containerized, bare metal, and cloud), inside and outside your cluster.
Agile
Better developer experience: Citrix ADC uses CRDs to deliver features like Content Rewrite/Responder, and now uses Operators to improve lifecycle management of the Citrix Ingress Controller and Citrix ADC CPX.
Monitoring 
Citrix Cloud Native Stack readily integrates with open source tools like Prometheus, Grafana, Kibana and many more.
Deep visibility and Troubleshooting 
Citrix ADM with Service Graphs provides actionable insight into the health and performance of applications and offers proactive troubleshooting for any issues.
Automated
Citrix ADC provides a REST API (NITRO) that integrates with automation frameworks, e.g. Ansible, Puppet, Chef, etc. This allows application development and DevOps teams to allocate new ADC services on demand as part of their application deployment workflow. The teams can develop application templates for advanced ADC functionality with simplified, application-specific configuration.
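To make this concrete, here is a minimal sketch of driving NITRO from a shell script. The management address (NSIP), credentials, and virtual server values below are placeholders for illustration, not values from this post; check the NITRO documentation for the exact payloads your ADC version expects.
# Placeholder management IP (NSIP) and credentials -- replace with your own.
NSIP="203.0.113.10"
NS_USER="nsroot"
NS_PASS="changeme"

# Create an HTTP load-balancing virtual server through the NITRO config API.
curl -k -s -X POST "https://${NSIP}/nitro/v1/config/lbvserver" \
  -H "X-NITRO-USER: ${NS_USER}" \
  -H "X-NITRO-PASS: ${NS_PASS}" \
  -H "Content-Type: application/json" \
  -d '{"lbvserver": {"name": "demo-vserver", "servicetype": "HTTP", "ipv46": "203.0.113.20", "port": 80}}'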
Elastic
Pooled Capacity provides the ability to share ADC capacity across all Citrix ADC form factors, across a data center and/or cloud, to ease migration of workloads.
 
Challenges running Ingress in OpenShift:
 
Kubernetes as an application development and deployment platform is excellent, but getting requests into and out of the cluster presents challenges. Shortcomings include:
Migrating legacy applications to OpenShift
Many applications were previously written to use TCP/UDP networking protocols directly, but Kubernetes Ingress objects don't support TCP, TCP-SSL, or UDP.
Failover handling
Because the ingress is the access point for traffic to the cluster, it should continue to serve customers without any downtime if a disaster results in the cluster being unavailable.
Consistent ingress on premises and in the cloud
Microservices can be deployed on-prem and on public cloud, and having inconsistent ingress mechanisms across locations adds complexity to operations.
External access from/to outside OpenShift cluster
The ability to seamlessly integrate into existing networking fabrics without additional hops or network re-architecture increases efficiency.
Security configuration
The ability to support security with SSL and mTLS is of great significance when it comes to ingress solutions.
Rolling upgrades
Upgrading without disruption is vital for production environments.
How Citrix ADC makes OpenShift ingress easier to implement
Citrix Operators are the secret sauce that enables automation and lifecycle management of Citrix ADC and the Citrix ingress solution for OpenShift clusters. They wrap the logic for deploying and operating Citrix ADC using Kubernetes constructs. More specifically, Citrix Operators directly address the challenges of ingress within Kubernetes.
Citrix Operators enable:

Deployment of the Citrix ADC and ingress controller quickly and easily, for serving microservices applications, including support for TCP/UDP protocols along with HTTP/HTTPS.
Citrix ADC to scale elastically and handle failover events without disruption.
Deployment of the Citrix Cloud Native Stack on any OpenShift platform (OpenShift products) in any environment to bring a consistent approach to ingress.
Automation of security configuration with certificate and key management using Let's Encrypt or any other cert and key management application.
Deployment in production OpenShift environments, because it is tested and supported by Red Hat and Citrix.
Citrix ADC and the ingress controller to perform software updates automatically without disrupting traffic.

 
How do I use the Citrix operators?
Here we walk through the steps to use the OpenShift Citrix Ingress Operator to configure a Citrix ADC VPX, a virtual machine form factor that resides outside the cluster.
The rest of this blog describes the features of Citrix Operators that can be used to deploy and operate Citrix ADC in cloud-native environments.
Citrix Operator installation for Red Hat OpenShift overview:
There are four ingress deployment models commonly used with the Citrix stack:
Two-Tier ingress – A CPX is deployed in the cluster behind the Citrix appliance to act as a DevOps-friendly abstraction layer. This deployment needs the CIC and CPX Operators.
Unified Ingress – The Citrix appliance sits outside the cluster serving microservices. This deployment needs just the CIC Operator.
Service Mesh Lite – Provision one or more CPX instances through which all microservices communicate, giving you granular traffic management between your apps. This deployment needs both the CIC and CPX Operators.
Service Mesh – Citrix ADC can be injected as a sidecar proxy to your applications and as a gateway to the service mesh cluster. This is not currently covered by Operators; we will be creating an Operator for this architecture in the future.

Deployment model    CIC Operator    CPX Operator
Single Tier         Yes             No
Dual Tier           Yes             Yes
ServiceMesh lite    Yes             Yes
ServiceMesh         No              No

 
To learn more about the pros and cons of these deployment choices, watch this CNCF webinar: https://www.youtube.com/watch?v=OhWYoYAHukA
Here are the steps for using the Unified Ingress deployment. There is a video that covers two-tier ingress at the end of this blog.
Steps 
Prerequisites

Access to the OpenShift Container Platform web console.

Procedure

Log in to the OpenShift Container Platform web console.
Navigate to Catalog → OperatorHub.
Type Citrix into the filter box to locate the Citrix Operator.
Click the Citrix Operator to display information about the Operator.
Click Install.
On the Create Operator Subscription page, select All namespaces on the cluster (default). This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster.
Select the alpha Update Channel.
Select the Automatic Approval Strategy.
Click Subscribe.
The Subscription Overview page displays the Citrix Operator’s installation progress.

 
Once the install is complete:
Go to the project where you want to host the CIC

Navigate to Installed Operators
Click on the installed Citrix Ingress Controller
In the Overview tab -> Create New
Edit the nsIP field to point to your Citrix ADC and update the license field to "yes"
Click Create
Navigate to Workload -> Deployment and find the CIC deployment
Verify the CIC pod is running and connects to the Citrix ADC upstream

 
Exposing the application:

Create a service for the application: Navigate to Networking > Services > Create Service
Create an ingress for the Apache application: Navigate to Networking > Ingress > Create Ingress
Update the VIP of the Citrix ADC in the ingress configuration and apply it (a sketch of such an ingress follows below)
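As a reference, here is a minimal sketch of such an ingress. The hostname, service name, and VIP below are placeholders, and the annotations shown (the citrix ingress class and ingress.citrix.com/frontend-ip for the VIP) are assumptions to verify against the Citrix ingress controller version you deploy:
oc apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: apache-ingress
  annotations:
    kubernetes.io/ingress.class: "citrix"
    # Placeholder VIP configured on the Citrix ADC VPX.
    ingress.citrix.com/frontend-ip: "192.0.2.10"
spec:
  rules:
  - host: apache.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: apache
          servicePort: 80
EOF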

 
Get started by viewing this Technical Video to deploy 2-tier architecture using Operators: https://youtu.be/TqSJ6z7wIw0 

The post Microservices-Based Application Delivery with Citrix and Red Hat OpenShift appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Lessons learned: How IBM Global Financing tackles application modernization with Red Hat OpenShift

IBM is like many of its clients when it comes to application modernization. The company faces the same challenges of trying to balance preserving the business value of established investments while remaining agile enough to respond to changing market demands.
Within IBM Global Financing, the financing arm that provides leases and loans to IBM clients and business partners, IBM is always looking for ways to modernize its own portfolio of mission-critical applications. Each application is periodically evaluated to determine how best to evolve, whether it’s a large mainframe application with up to 20 million lines of code or a newer web-based application.
IBM Global Financing thought it had its transformation plan mapped out. However, the advent of Red Hat OpenShift allowed the group to accelerate its plans.
Live and in production in just over one month
With the IBM acquisition of Red Hat in July 2019, several IBM Global Financing squads began experimenting with Red Hat OpenShift to accelerate modernization efforts.
Just over one month later (including two weeks of testing), IBM Global Financing went live with its first application using Red Hat OpenShift on IBM Cloud. The new application, called IBM Global Financing Concierge, is a Watson Assistant-based chatbot that serves up information from back-end heritage systems to help IBM Global Financing employees answer client questions and make decisions with greater speed and accuracy.
A key differentiator of the new Red Hat OpenShift environment is the concept of self-service, which enables developers to put their tested code directly into production. Prior to using Red Hat OpenShift, IBM Global Financing had a complex deployment process involving multiple IBM teams. Now, the development team has full authority to deploy new code end to end.
The quick success with the Watson Assistant chatbot spawned ideas to scale Red Hat OpenShift more broadly across the IBM Global Financing portfolio. IBM Global Financing formed a Red Hat Think Tank to bring together IBM Global Financing architects from around the world who had deep knowledge of the IBM application portfolio, and challenged them to figure out the patterns and migration paths to take advantage of Red Hat OpenShift.
Three lessons learned from adopting Red Hat OpenShift
Using Red Hat OpenShift enabled IBM Global Financing to accelerate in three important ways.

Scalability: IBM Global Financing, like many other parts of IBM, gets massive surges at the end of every quarter. Prior to Red Hat OpenShift, IBM Global Financing had to employ manual efforts to ensure its infrastructure could handle these surges. With Red Hat OpenShift, applications are automatically scaled up or down based on user volumes. IBM Global Financing is able to dynamically scale the performance of its applications to handle surge volume.
Resiliency: If there are problems with the IT infrastructure, Red Hat OpenShift can adjust by dynamically spawning extra containers that enable applications to continue to function without any down time.
Uptime: Red Hat OpenShift has allowed IBM Global Financing to eliminate eight- to 12-hour maintenance windows. Instead of employees scrambling to make deployments work during a weekend, they can apply a blue-green strategy and seamlessly deploy without disrupting users.

The results: Accelerating the application modernization journey
Since July, the IBM Global Financing team has modernized four assets on Red Hat OpenShift, and it plans to complete two more by the end of the year. The team will be using modern web front-end applications to tap into mainframe functions through APIs and microservices.
The IBM Global Financing team has accelerated the application modernization journey through the use of Red Hat OpenShift. In just a few months it has redefined the future of innovation across IBM Global Financing IT with Red Hat OpenShift as the model architecture. Learn how Red Hat OpenShift can benefit your business.
IBM Global Financing is the world's largest captive IT financier, with an asset base of over $41 billion. With clients in more than 60 countries, and expertise in IT financing, working capital and credit, IBM Global Financing offers flexible payment plans and leasing solutions for IBM software, services and IT infrastructure, including Red Hat OpenShift. Learn more about IBM Global Financing.
The post Lessons learned: How IBM Global Financing tackles application modernization with Red Hat OpenShift appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Wavefront Automates and Unifies Red Hat OpenShift Observability, Full Stack

This is a guest post by Gordana Neskovic (@nesgor), a Senior Product Marketing Manager for Wavefront by VMware. Gordana has held Data Scientist / AI Architect roles at Wells Fargo, Pinterest, and SFO-ITT. Her current interests are at the intersection of cloud monitoring, operational analytics, and data science. She holds a Ph.D. in Electrical Engineering.
Special Thank You to Anirban Dey (@Anirbandey2011), and Srinivas Kandula (@kanduls) for contribution to this blog.
Anirban is a Product Line Manager at Wavefront by VMware. He is currently involved with expanding the Wavefront Integration portfolio and building new Wavefront integrations. He loves to learn and play with all the latest technologies around the developer platform/ecosystem and solve critical customer problems. In his spare time, he loves to travel, explore various cuisines, and listen to music.
Srinivas is a Staff Engineer at Wavefront By VMware. He is currently working on adding more integrations to Wavefront. He loves to learn and play with the latest technologies and build solutions with them. In his spare time, he loves to go cycling, swimming, and running.
Red Hat OpenShift is an enterprise Kubernetes platform intended to make the process of developing, deploying and managing cloud-native applications easier, scalable and more flexible. Wavefront by VMware provides enterprise-grade observability and analytics for OpenShift environments across multiple clouds. Wavefront ingests, analyzes and visualizes OpenShift telemetry – metrics, histograms, traces, and span logs – across the full stack, including distributed applications, containers, microservices, and cloud infrastructure.
As a result of Wavefront's collaboration with Red Hat, you can now get automated, full-stack enterprise observability for OpenShift through the Red Hat OpenShift Certified Wavefront Operator for OpenShift 4.1 and later. This Operator is available in OperatorHub embedded in OpenShift, a registry for finding Kubernetes Operator-backed services.
With the Red Hat OpenShift Certified Wavefront Operator, engineers, developers, and OpenShift operators get:

Accelerated and automated transition into Kubernetes and application observability
Streamlined day 2 observability operations: from deployments of the Wavefront Collector for Kubernetes and Wavefront proxies to managed configurations and upgrades
Automated full-stack enterprise observability and deep-insight analytics across OpenShift environments

Figure 1: Complete Visibility into OpenShift Cluster Nodes, Namespaces, Pods, Containers
What’s the Red Hat OpenShift Certified Wavefront Operator?
The Red Hat OpenShift Certified Wavefront Operator installs, upgrades and constantly checks the health of the Wavefront Collector for Kubernetes. It reduces costs and risks of managing observability for OpenShift environments and applications at scale. It also improves time-to-value by pretesting and expediting the Wavefront Collector for Kubernetes deployment and configuration.
In essence, the Red Hat OpenShift Certified Wavefront Operator is a method of packaging, deploying, and managing OpenShift observability using the Kubernetes APIs and kubectl tooling. The Operator runs in a pod on the cluster and interacts with the Kubernetes API server. It installs a Wavefront Collector for Kubernetes instance on each node in the OpenShift cluster, taking advantage of custom resource definitions, an extension mechanism in Kubernetes. Unless you specify that you want to send data to Wavefront using direct ingestion, the Operator also installs and configures one or more Wavefront proxies. The Operator watches for Wavefront Collector for Kubernetes instances and is notified when they are added or modified. When the Operator receives a notification, it runs a loop to ensure that all the required connections between the Wavefront Collector for Kubernetes and the OpenShift environment are available and configured in the way the user expressed in the specification.
The Red Hat OpenShift Certified Wavefront Operator also provides for new Wavefront Collector for Kubernetes versions to be deployed using a rolling update, avoiding downtime and making it easy to stay up to date.
Through continuous certification, the interoperability and safety of the Red Hat OpenShift Certified Wavefront Operator are verified on an ongoing basis, with a fast turnaround for security updates.
Installing Red Hat OpenShift Certified Wavefront Operator
It's easy to install the Red Hat OpenShift Certified Wavefront Operator by following these steps:

Using the OpenShift administrator console web interface, browse to the OpenShift OperatorHub
Search for Wavefront Operator
Click on the Wavefront Operator tile
Go through Wavefront Operator Overview and check all the operator metadata and links
Click on the Install button
Follow the step-by-step installation instructions

Enterprise Observability for OpenShift that’s Automated and Full-Stack
Once the Red Hat OpenShift Certified Wavefront Operator is installed and configured, Wavefront:

Automatically recognizes Kubernetes services
Discovers Kubernetes workloads and instruments Java-based services across any cloud
Populates pre-packaged multi-layered Kubernetes dashboards
Reports at scale, up to 1-sec resolution (sub-1-sec with histograms), streaming health and SLO metrics for OpenShift clusters, nodes, pods, and containerized applications
Provides detailed information about Kubernetes environment operations and autoconfigures a set of Kubernetes-related alerts

Figure 2: Wavefront Enterprise Observability for OpenShift, Full-Stack
You can now start using and customizing the out-of-the-box, pre-configured dashboards. You can alleviate code issues by understanding Kubernetes system metrics and instantly troubleshoot containers and application microservices. With Wavefront's powerful analytics, you can correlate and quickly drill down across applications running on OpenShift, containers, Kubernetes, and cloud.
Also, engineers such as developers and SREs can deep-dive into any incident using distributed tracing to trace transactions across distributed services and identify root cause in seconds. With Wavefront’s sophisticated AI-driven performance analytics and trends prediction, you can proactively get an alert on anomalies across containerized applications running on OpenShift environments. 
Ready to Get Started?
If you'd like immediate, deep visibility into your entire OpenShift environment, sign up for a Wavefront free trial. With the Red Hat OpenShift Certified Wavefront Operator, you will benefit from an accelerated transition into enterprise observability for OpenShift that's automated and full stack. Wavefront streamlines day 2 observability operations, providing deep-insight analytics across your entire OpenShift environment, including containerized applications, Kubernetes, and the underlying infrastructure.
 
The post Wavefront Automates and Unifies Red Hat OpenShift Observability, Full Stack appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

OpenShift 4 Pro Tip: Custom Branding

You can customize the OpenShift Container Platform web console to set a custom logo and product name. This is especially helpful if you need to tailor the web console to meet specific corporate or government requirements.
Add a Custom Logo and Product Name
Prerequisites
Create a file of the logo that you want to use. The logo can be a file in any common image format (e.g. GIF, JPG, PNG, or SVG) and is constrained to a max-height of 60px.
Procedure
1: Import your logo file into a ConfigMap in the openshift-config namespace:
$ oc create configmap console-custom-logo --from-file=~/path/to/console-custom-logo.png -n openshift-config

2: Edit the web console’s Operator configuration to include customLogoFile and customProductName:
$ oc apply -f - <<EOF
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    customProductName: MyProduct
    customLogoFile:
      name: console-custom-logo
      key: console-custom-logo.png
EOF

Once the Operator configuration is updated, it will sync the custom logo ConfigMap into the console namespace, mount it to the console pod, and redeploy.
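To watch that redeploy happen, a command along these lines should work, assuming the console Deployment keeps its default name in the openshift-console namespace:
$ oc rollout status deployment/console -n openshift-console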
3: Check for success. If there are any issues, the console cluster operator will report Degraded, and the console Operator configuration will also report CustomLogoDegraded, but with reasons like KeyOrFilenameInvalid or NoImageProvided.
To check the clusteroperator, run:
$ oc get clusteroperator console -o yaml

To check the console Operator configuration, run:
$ oc get console.operator.openshift.io -o yaml

The post OpenShift 4 Pro Tip: Custom Branding appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Disaster Recovery with GitOps

Whether your OpenShift clusters are hosted on-premises or in the cloud, downtime happens. It could be a temporary outage, or it could be an extended outage with no resolution in sight. This article explains how GitOps can be used for the rapid redeployment of your Kubernetes objects. One important thing to note is that GitOps can only restore Kubernetes objects, which means any persistent data required for an application to function correctly must be restored separately for stateful applications, such as databases, to be back in service.
Continuing with our usage of Argo CD, we will discuss two different ways to start the process of restoring these objects. For both of these procedures we assume that a new cluster has been deployed and that Argo CD has also been deployed. We also assume that the same OpenShift routes and DNS zones will be used, because the OpenShift routes should be stored within Git as well.
Using the Argo CD binary, you will need to manually define the repositories and Argo CD applications. This process requires you to have a list of repositories and the commands to define them within Argo CD. For example, we could run the following to restore our simple-app project.
argocd repo add https://github.com/cooktheryan/blogpost
argocd app create --project default \
  --name simple-app --repo https://github.com/cooktheryan/blogpost.git \
  --path . --dest-server https://kubernetes.default.svc \
  --dest-namespace simple-app --revision master

This process works as long as you have a list of all repositories, git branches, and namespaces documented. Once these items are all defined and loaded into Argo CD, the objects will begin to deploy within the cluster and sync with Argo CD.
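If that inventory is kept in a machine-readable form, the restore itself can be scripted. The sketch below assumes a hypothetical repos.txt with one line per application in the form name,repo,path,namespace,branch; it simply replays the two commands shown above:
#!/bin/bash
# repos.txt (hypothetical inventory): name,repo,path,namespace,branch per line
while IFS=, read -r name repo path namespace branch; do
  argocd repo add "${repo}"
  argocd app create --project default \
    --name "${name}" --repo "${repo}" \
    --path "${path}" --dest-server https://kubernetes.default.svc \
    --dest-namespace "${namespace}" --revision "${branch}"
done < repos.txt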
With some planning, though, we can make this process better by using Git to manage our GitOps resources. Storing a copy of the ConfigMap and the various Argo CD applications within Git, or even in something as simple as a file or object share that exists outside the data center hosting the OpenShift cluster, will allow us to rapidly redefine the objects managed by Argo CD.
First, let’s take a look at the configmap in YAML format. We will see the repositories currently defined within Argo CD.
oc get configmap -n argocd argocd-cm -o yaml
apiVersion: v1
data:
  repositories: |
    - url: https://github.com/cooktheryan/blogpost
    - url: http://github.com/openshift/federation-dev.git
    - sshPrivateKeySecret:
        key: sshPrivateKey
        name: repo-federation-dev-3296805493
      url: git@github.com:openshift/federation-dev.git
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"argocd-cm","app.kubernetes.io/part-of":"argocd"},"name":"argocd-cm","namespace":"argocd"}}
  creationTimestamp: "2019-09-26T18:46:47Z"
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cm
  namespace: argocd
  resourceVersion: "474704"
  selfLink: /api/v1/namespaces/argocd/configmaps/argocd-cm
  uid: fe331084-e08d-11e9-a49a-52fdfc072182

We will next save the configmap in YAML format.
oc get configmap -n argocd argocd-cm -o yaml --export > argocd-cm.yaml

But what if my repository requires an SSH key? In that case, we need to export the secret as well. If your repositories do not require an SSH key or authentication, skip this step. The ConfigMap identifies the name of the secret used by the repository.
oc get secrets -n argocd repo-federation-dev-3296805493 -o yaml > repo-federation-dev-secret.yaml

We will now need to back up any Argo CD applications. This can be done per individual application or by exporting all of the applications to a single YAML file; for this example we export them all, since we only have one application within Argo CD. It is suggested to store the applications individually, within the same Git repository where the Kubernetes objects are defined, so that they are under revision control and available in the event of a disaster.
oc get applications -o yaml --export > simple-app-backup.yaml

Now that we have all of the required Argo CD objects, we can import them into the new server that was deployed when the new environment was brought online.
First, we will update the configmap to include our previously defined repositories.
oc apply -f argocd-cm.yaml -n argocd

OPTIONAL: If credentials were used for any of the repositories then the credentials in the secret must be imported before running a repo list.
oc apply -f repo-federation-dev-secret.yaml -n argocd

Next, we will restore our Argo CD applications, which will cause our Kubernetes objects like namespaces, services, deployments, and routes to deploy onto the cluster.
oc create -f simple-app-backup.yaml -n argocd

At this point all of the objects should begin to deploy, and the applications within Argo CD should report a healthy state. As with all backup solutions, it makes sense to test this DR procedure frequently. This could be done on a per-application basis on another cluster or with CodeReady Containers.
In the coming weeks we will publish another disaster recovery post containing information on what to do if your cluster fails and how a global load balancer can keep the lights on.
The post Disaster Recovery with GitOps appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Modernization and cloud innovation lead to cultural shift at Blue Cross Blue Shield of Massachusetts

Blue Cross Blue Shield of Massachusetts is one of the largest health insurance providers in the state of Massachusetts. In the Enterprise Technology Organization, we run mission-critical applications 24 hours a day, 365 days a year and process extremely large transaction volumes related to healthcare claims and enrollment.
To put this in perspective, we processed 94 million eligibility requests alone last year.
Modernizing mission-critical and enterprise systems
To support millions of monthly transactions, we are using systems built in 2010 on a traditional, multi-tiered technology stack with application server technology.
While this infrastructure has served us well, our market share has increased in a way that it has become harder to preserve our systems’ availability when there’s an issue. For this reason, we began looking for a modern, cloud-based technology platform that could provide greater flexibility in handling transaction volume spikes. We wanted to pilot new technologies to understand what the process would be like to move a component of our systems to a new technology platform.
We reached out to IBM because they’ve been a strategic partner and we have a significant investment in IBM technologies. The sales team connected us with the IBM Garage to help us with this technology innovation process.
The IBM Garage team told us that we could work together to create a minimum viable product (MVP), which would, surprisingly, only take four weeks. We also asked about a future platform and what the roadmap would be for our existing IBM technologies. IBM Garage experts in mission-critical and enterprise systems quickly understood the large amount of complex code supporting our systems and the state of our systems technologically and architecturally. Thus we began exploring cloud computing platforms, specifically IBM cloud offerings.
Cloud innovation with microservices and containerization
After evaluating technology for modernizing mission-critical electronic data interchange (EDI) applications, we were most interested in an on-premises cloud platform for both business and technology reasons. This cloud innovation option would provide more scalability, a strategic architectural direction, and the opportunity to preserve our technology investment with minimal code rewriting.
We chose to test our mission-critical apps on IBM Cloud Pak for Applications software running on the Red Hat OpenShift container platform because this scalable, containerized infrastructure helps us take advantage of new technologies while retaining our investment in IBM technology and architecturally evolving it.
Now, this wasn’t really a “lift and shift.” It was more of a strategic architecture with calculated changes to our technology platform and code. Our strategic goal was to get to a new cloud-based container platform in both the shortest time and smallest cost investment possible.
The IBM Garage team members have been instrumental in educating us on rapid application modernization and empowering our infrastructure, development and architectural groups. The IBM Garage Method was essential to facilitating knowledge transfer.
As part of the method, they taught us an agile technique called pair programming, where developers work in pairs and one developer writes code while the other simultaneously reviews it. Our developers, system administrators and infrastructure experts paired with the corresponding IBM Garage experts.
Pairing and rotating meant that everyone was hands-on with every component; there were no silos because we all knew what was involved in the migration of the services selected. We made fast progress because the learning curve diminished. From an infrastructure perspective, pairing allowed us to get everything done in short order and quickly resolve any roadblocks.
The IBM Garage team members literally and figuratively worked side-by-side with us; we were impressed with their collaboration and dedication.
Cultural transformation through cloud technology
We were amazed that in four weeks, we had a complete demonstration of the new system in the development environment and could even monitor performance and check that it was meeting our service level agreement.
In addition to the cultural shift – learning new ways to work – the biggest takeaway is the technology itself. The world is moving to cloud technologies and now, so are we.
With a modernized infrastructure that includes containerization, we have superior monitoring at the container level and can terminate unnecessary containers. Updated integrations enable us to automate deploys in a more robust fashion, without as much manual intervention. Automation gives us time to proactively refine systems that couldn’t get the same attention in the past, when we were too busy troubleshooting and reacting.
As a customer-focused company, modernized systems allow us to provide the best service we can. Our technology can scale to meet customer demand around the clock without downtime.
We look forward to further collaboration with the IBM Garage team since we have ambitious plans to fully transition to the modernized platform by the end of 2020. The eligibility and benefits piece that we worked on with the IBM Garage will go into production, but that is only about 15 percent of our overall system.
We plan to use both the technical knowledge and agile development approach that we’ve learned from the IBM Garage team to modernize the rest of our applications on the Red Hat OpenShift container platform running IBM Cloud Pak for Applications software. This MVP through the IBM Garage was a truly transformative process.
Want to experience the IBM Garage for your business? Schedule a no-charge visit with the IBM Garage to get started.
The post Modernization and cloud innovation lead to cultural shift at Blue Cross Blue Shield of Massachusetts appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Multi-cluster Management with GitOps

In this blog post we are going to introduce Multi-cluster Management patterns with GitOps and how you can implement these patterns on OpenShift.
If you’re interested in diving into an interactive tutorial, try this link.
In the introductory blog post to GitOps we described some of the use cases that we can solve with GitOps on OpenShift. In today's blog post we are going to describe how we can leverage GitOps patterns to perform tasks on multiple clusters.
We are going to explore the following use cases:

Deploy an application to multiple clusters
Customize the application by cluster
Perform a canary deployment

During this blog post we are not going to cover advanced GitOps workflows; instead we are going to show you basic capabilities around the topic. More advanced posts about GitOps workflows will follow.
Environment

Two OpenShift 4.1 clusters, one for preproduction (context: pre) environment and one for production (context: pro) environment.
ArgoCD used as the GitOps tool
Demo files here

Deploy an Application to Multiple Clusters
In this first example, we are going to deploy our base application to both clusters.
As we are using ArgoCD as our GitOps tool, an ArgoCD server is already deployed in our environment, as well as the argocd CLI tool.
Our application definition can be found here

Ensure we have access to both clusters
$ oc --context pre get nodes
NAME                                              STATUS   ROLES    AGE   VERSION
ip-10-0-128-17.ap-southeast-1.compute.internal    Ready    master   19h   v1.13.4+ab8449285
ip-10-0-136-41.ap-southeast-1.compute.internal    Ready    worker   19h   v1.13.4+ab8449285
ip-10-0-151-90.ap-southeast-1.compute.internal    Ready    worker   19h   v1.13.4+ab8449285

$ oc --context pro get nodes
NAME                                              STATUS   ROLES    AGE   VERSION
ip-10-0-140-239.ap-southeast-1.compute.internal   Ready    master   19h   v1.13.4+ab8449285
ip-10-0-142-57.ap-southeast-1.compute.internal    Ready    worker   19h   v1.13.4+ab8449285
ip-10-0-170-168.ap-southeast-1.compute.internal   Ready    worker   19h   v1.13.4+ab8449285

Ensure we have our clusters registered in ArgoCD
$ argocd cluster list
SERVER                                       NAME   STATUS       MESSAGE
https://api.openshift.pre.example.com:6443   pre    Successful
https://api.openshift.pro.example.com:6443   pro    Successful
https://kubernetes.default.svc                      Successful

Add our GitOps repository to ArgoCD
$ argocd repo add https://github.com/mvazquezc/gitops-demo.git

repository 'https://github.com/mvazquezc/gitops-demo.git' added

Deploy our application to preproduction and production clusters
# Create the application on Preproduction cluster
$ argocd app create --project default --name pre-reversewords --repo https://github.com/mvazquezc/gitops-demo.git --path reversewords_app/base --dest-server https://api.openshift.pre.example.com:6443 --dest-namespace reverse-words --revision pre

application 'pre-reversewords' created

# Create the application on Production cluster
$ argocd app create --project default --name pro-reversewords --repo https://github.com/mvazquezc/gitops-demo.git --path reversewords_app/base --dest-server https://api.openshift.pro.example.com:6443 --dest-namespace reverse-words --revision pro

application 'pro-reversewords' created

4.1 The above commands create new ArgoCD Applications named pre-reversewords and pro-reversewords, which will be deployed on the preproduction and production clusters in the reverse-words namespace, using the code from the pre/pro branches located under the path reversewords_app/base

As we haven’t defined a sync policy, we need to force ArgoCD to sync the Git repo content on our pre and pro clusters
$ argocd app sync pre-reversewords
$ argocd app sync pro-reversewords

After a few seconds we will see our application deployed on pre and pro clusters
# Get application status on preproduction cluster
$ argocd app get pre-reversewords

Name: pre-reversewords
Project: default
Server: https://api.openshift.pre.example.com:6443
Namespace: reverse-words
URL: https://argocd.apps.example.com/applications/pre-reversewords
Repo: https://github.com/mvazquezc/gitops-demo.git
Target: pre
Path: reversewords_app/base
Sync Policy: <none>
Sync Status: Synced to pre (306ce10)
Health Status: Healthy

GROUP  KIND        NAMESPACE      NAME           STATUS  HEALTH
       Namespace                  reverse-words  Synced
       Service     reverse-words  reverse-words  Synced  Healthy
apps   Deployment  reverse-words  reverse-words  Synced  Healthy

# Get application status on production cluster
$ argocd app get pro-reversewords

Name: pro-reversewords
Project: default
Server: https://api.openshift.pro.example.com:6443
Namespace: reverse-words
URL: https://argocd.apps.example.com/applications/pro-reversewords
Repo: https://github.com/mvazquezc/gitops-demo.git
Target: pro
Path: reversewords_app/base
Sync Policy: <none>
Sync Status: Synced to pro (98bbfb1)
Health Status: Healthy

GROUP  KIND        NAMESPACE      NAME           STATUS  HEALTH
       Namespace                  reverse-words  Synced
       Service     reverse-words  reverse-words  Synced  Healthy
apps   Deployment  reverse-words  reverse-words  Synced  Healthy

Our application defines a service for accessing its API, let’s try to access and get the release name for both clusters
# Get the preproduction cluster LB hostname
$ PRE_LB_HOSTNAME=$(oc --context pre -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')
# Get the production cluster LB hostname
$ PRO_LB_HOSTNAME=$(oc --context pro -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')
# Access the preproduction LB and get the release name
$ curl http://${PRE_LB_HOSTNAME}:8080

Reverse Words Release: Base release. App version: v0.0.2
# Access the production LB and get the release name
$ curl http://${PRO_LB_HOSTNAME}:8080

Reverse Words Release: Base release. App version: v0.0.2

As you have seen, we have been able to deploy to multiple clusters from a single tool (ArgoCD). In the next section we are going to explore how we can override some configurations depending on the destination cluster by using embedded Kustomize on ArgoCD.
Customize the Application by Cluster
In this second example, we are going to modify the application behavior depending on which cluster is deployed.
We want the application to have a release name preproduction or production depending on which environment the application gets deployed on.
ArgoCD leverages Kustomize under the hood to deal with configuration overrides across environments.
The way we organize our application in Git is as follows:

The Git Repository has two branches, pre which has manifests for preproduction env, and pro for production env.

Application overrides can be found in their respective folders and branches:

Preproduction cluster overrides
Production cluster overrides

We placed the application overrides in the Git repository; there is only one override, which configures a release name different from the default based on the cluster the application gets deployed on.
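For context, an overlay of this shape usually consists of a kustomization.yaml that points at the base and lists the patch file. The following is an illustrative sketch of that layout (using the bases: and patchesStrategicMerge: fields common in Kustomize at the time), not the literal contents of the demo repository:
# reversewords_app/overlays/pre/kustomization.yaml (illustrative sketch)
bases:
- ../../base
patchesStrategicMerge:
- deployment.yaml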
Deploy our Kustomized application to preproduction and production clusters
# Create the application on Preproduction cluster
argocd app create --project default --name pre-kustomize-reversewords --repo https://github.com/mvazquezc/gitops-demo.git --path reversewords_app/overlays/pre --dest-server https://api.openshift.pre.example.com:6443 --dest-namespace reverse-words --revision pre --sync-policy automated

application 'pre-kustomize-reversewords' created

# Create the application on Production cluster
argocd app create --project default --name pro-kustomize-reversewords --repo https://github.com/mvazquezc/gitops-demo.git --path reversewords_app/overlays/pro --dest-server https://api.openshift.pro.example.com:6443 --dest-namespace reverse-words --revision pro --sync-policy automated

application 'pro-kustomize-reversewords' created

2.1 The above commands create new ArgoCD Applications named pre-kustomize-reversewords and pro-kustomize-reversewords, which will be deployed on the preproduction and production clusters in the reverse-words namespace, using the code from the pre and pro branches respectively. Each application gets its code from a different folder under our overlays folder; that way the application is customized depending on which environment it is deployed on. Note that only the modified values are stored in the overlay folder; the base application is still deployed from the base folder, so we don't end up with duplicate application files.

As we have defined an automated sync policy, we don't need to force the sync; ArgoCD will start syncing our application once it gets created. On top of that, if changes are made to the application repository, ArgoCD will re-deploy them for us.
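For reference, an automated sync policy can also be toggled on an existing application from the CLI; a minimal sketch, assuming the Argo CD CLI version in use supports the flag:
# Enable automated sync on an application that was created without it.
argocd app set pre-kustomize-reversewords --sync-policy automated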

After a few seconds we will see our application deployed on the pre and pro clusters
# Get application status on preproduction cluster
$ argocd app get pre-kustomize-reversewords

Name: pre-kustomize-reversewords
Project: default
Server: https://api.openshift.pre.example.com:6443
Namespace: reverse-words
URL: https://argocd.apps.example.com/applications/pre-kustomize-reversewords
Repo: https://github.com/mvazquezc/gitops-demo.git
Target: pre
Path: reversewords_app/overlays/pre
Sync Policy: Automated
Sync Status: Synced to pre (306ce10)
Health Status: Healthy

GROUP  KIND        NAMESPACE      NAME           STATUS  HEALTH
       Namespace                  reverse-words  Synced
       Service     reverse-words  reverse-words  Synced  Healthy
apps   Deployment  reverse-words  reverse-words  Synced  Healthy

# Get application status on production cluster
$ argocd app get pro-kustomize-reversewords

Name: pro-kustomize-reversewords
Project: default
Server: https://api.openshift.pro.example.com:6443
Namespace: reverse-words
URL: https://argocd.apps.example.com/applications/pro-kustomize-reversewords
Repo: https://github.com/mvazquezc/gitops-demo.git
Target: pro
Path: reversewords_app/overlays/pro
Sync Policy: Automated
Sync Status: Synced to pro (98bbfb1)
Health Status: Healthy

GROUP  KIND        NAMESPACE      NAME           STATUS  HEALTH
       Namespace                  reverse-words  Synced
       Service     reverse-words  reverse-words  Synced  Healthy
apps   Deployment  reverse-words  reverse-words  Synced  Healthy

Our application defines a service for accessing its API, let’s try to access and get the release name for both clusters
# Get the preproduction cluster LB hostname
$ PRE_LB_HOSTNAME=$(oc --context pre -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')
# Get the production cluster LB hostname
$ PRO_LB_HOSTNAME=$(oc --context pro -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')
# Access the preproduction LB and get the release name
$ curl http://${PRE_LB_HOSTNAME}:8080

Reverse Words Release: Preproduction release. App version: v0.0.2
# Access the production LB and get the release name
$ curl http://${PRO_LB_HOSTNAME}:8080

Reverse Words Release: Production release. App version: v0.0.2

As you have seen, we have been able to deploy to multiple clusters and use custom configurations depending on which cluster we are using to deploy the application. In the next section we are going to explore how we can use GitOps to perform a basic canary deployment.
Perform a Canary Deployment
A common practice is to deploy a new version of an application to a small subset of the available clusters, and once the application has been proven to work as expected, then it gets promoted to the rest of the clusters.
We are going to use the Kustomized apps that we created before; let's verify which versions we are running:
# Get the preproduction cluster LB hostname
$ PRE_LB_HOSTNAME=$(oc --context pre -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')
# Get the production cluster LB hostname
$ PRO_LB_HOSTNAME=$(oc --context pro -n reverse-words get svc reverse-words -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')
# Access the preproduction LB and get the release name
$ curl http://${PRE_LB_HOSTNAME}:8080

Reverse Words Release: Preproduction release. App version: v0.0.2
# Access the production LB and get the release name
$ curl http://${PRO_LB_HOSTNAME}:8080

Reverse Words Release: Production release. App version: v0.0.2

As you can see, the currently deployed version is v0.0.2; let's perform a canary deployment of v0.0.3.

We need to update the container image used on the preproduction cluster, so we are going to modify the Deployment overlay as follows:
# reversewords_app/overlays/pre/deployment.yaml in git branch pre
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reverse-words
  labels:
    app: reverse-words
spec:
  template:
    spec:
      containers:
      - name: reverse-words
        image: quay.io/mavazque/reversewords:v0.0.3
        env:
        - name: RELEASE
          value: "Preproduction release"
      - $patch: replace

We send our changes to the git repository
git add reversewords_app/overlays/pre/deployment.yaml
git commit -m "Updated preproduction image version from v0.0.2 to v0.0.3"
git push origin pre

ArgoCD will detect the update in our code and deploy the new changes. Now we should see version v0.0.3 deployed on pre and version v0.0.2 deployed on pro.
# Access the preproduction LB and get the release name
$ curl http://${PRE_LB_HOSTNAME}:8080

Reverse Words Release: Preproduction release. App version: v0.0.3
# Access the production LB and get the release name
$ curl http://${PRO_LB_HOSTNAME}:8080

Reverse Words Release: Production release. App version: v0.0.2

Let’s verify that our application is working as expected
$ curl http://${PRE_LB_HOSTNAME}:8080 -X POST -d '{"word":"PALC"}'

{"reverse_word":"CLAP"}

The application is working fine, so now it's time to update production to v0.0.3 as well. Let's update the overlay:
# reversewords_app/overlays/pro/deployment.yaml in git branch pro
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reverse-words
  labels:
    app: reverse-words
spec:
  template:
    spec:
      containers:
      - name: reverse-words
        image: quay.io/mavazque/reversewords:v0.0.3
        env:
        - name: RELEASE
          value: "Production release"
      - $patch: replace

Send the changes to Git
git add reversewords_app/overlays/pro/deployment.yaml
git commit -m "Updated production image version from v0.0.2 to v0.0.3"
git push origin pro

Get versions in use
# Access the preproduction LB and get the release name
$ curl http://${PRE_LB_HOSTNAME}:8080

Reverse Words Release: Preproduction release. App version: v0.0.3
# Access the production LB and get the release name
$ curl http://${PRO_LB_HOSTNAME}:8080

Reverse Words Release: Production release. App version: v0.0.3

We should now update the base deployment so that new deployments use version v0.0.3.
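As a sketch of that final step, and assuming the image tag lives in reversewords_app/base/deployment.yaml in both environment branches, the bump is just another pair of commits:
for branch in pre pro; do
  git checkout "${branch}"
  # Path and tag location assumed for illustration.
  sed -i 's|reversewords:v0.0.2|reversewords:v0.0.3|' reversewords_app/base/deployment.yaml
  git commit -am "Updated base image version from v0.0.2 to v0.0.3"
  git push origin "${branch}"
done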

Final Thoughts

We have updated our application by modifying the application overlays in Git. This is a very basic scenario; advanced scenarios may include CI tests, multiple approvals, etc.

We have pushed our code to the pre/pro branches directly, which is not a good practice; in a real-life scenario a more advanced workflow should be used. We will discuss GitOps workflows in future blog posts.

We have used the Argo CD CLI, but Argo CD also has a web UI where you can perform almost the same operations; on top of that, you can visualize your applications and their components.

Next Steps
In future blog posts we will talk about multiple topics related to GitOps such as:

GitOps Workflows in Production
Disaster Recovery with GitOps
Moving to GitOps

The post Multi-cluster Management with GitOps appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Top Kubernetes Operators advancing across the Operator Capability Model

At KubeCon North America 2019 we highlighted what it means to deliver a mature Kubernetes Operator. A Kubernetes Operator is a method of packaging, deploying and managing a Kubernetes application. The key attribute of an Operator is the active, ongoing management of the application, including failover, backups, upgrades and autoscaling, just like a cloud service.
These capabilities are ranked into five levels, which are used to gauge maturity. We refer to this as the Operator Capability Model, which outlines a set of possible capabilities that can be applied to an application. Of course, if your app doesn't store stateful data, a backup might not be applicable to you, but log processing or alerting might be important. The user experience the Operator model aims for is a cloud-like, self-managing service with knowledge baked in from the experts.
 

Several community Operators have shown what is possible when advancing across this capability model, which we want to recognize as a goal for the rest of the community.
Couchbase
Couchbase has an extremely advanced Operator that handles data rebalancing, rack/zone awareness, and auto recovery for a swath of operational events that can happen to your NoSQL cluster. Integrated metrics and dashboards for deep insight into your Couchbase clusters push the Operator strongly into Phase IV of the Capability Model.
But that’s not all, Couchbase also integrates itself natively into the OpenShift Console, so Couchbase users that are using the Operator to self-service their clusters can easily reconfigure clusters and check on its health. Read more about using Couchbase on OpenShift in the technical implementation guide.
Dynatrace
The Dynatrace OneAgent Operator allows users to easily deploy full-stack monitoring for Kubernetes clusters. The Dynatrace OneAgent automatically monitors the workload running in containers down to the code and request level. Stay tuned for additional Operator features coming from Dynatrace.
This entire Operator can help OpenShift cluster administrators reach Phase IV for their own applications running on the cluster. Read more in the Dynatrace product brief.
MongoDB
The MongoDB Enterprise Kubernetes Operator enables easy deploys of MongoDB into Kubernetes clusters, using MongoDB’s management, monitoring and backup platforms, Ops Manager and Cloud Manager.
The Operator allows developers a self-service model to get three different types of MongoDB databases, following the best practices for developer environments, replicated high availability clusters for production, and sharded clusters for horizontal scaling. As you can imagine, configuring each of these environments requires the expertise of MongoDB’s engineers which is already built into the Operator. Find out more in the MongoDB on OpenShift product brief.
Portworx
Portworx Enterprise is a cloud-native storage solution for production workloads and provides high-availability, data protection and security for containerized applications. Portworx Enterprise enables you to migrate entire applications, including data, between clusters in a single data center or cloud, or between clouds, with a single kubectl command.
Storage is a critical part of your clusters running on the cloud or on premises. The Portworx Operator exports metrics which can be used in conjunction with the Prometheus Operator to collect, alert and show Grafana dashboards of this data. Robust monitoring brings this Operator up to capability Phase IV. Portworx has a handy config generator for the Operator and more detailed instructions for using Portworx on OpenShift.
StorageOS
StorageOS is a cloud-native, software-defined storage platform that transforms commodity server or cloud-based disk capacity into enterprise-class persistent storage for containers. StorageOS is ideal for deploying databases, message buses, and other mission-critical stateful solutions, where rapid recovery and fault tolerance are essential.
The StorageOS Operator installs and manages StorageOS within a cluster. Cluster nodes may contribute local or attached disk-based storage into a distributed pool, which is then available to all cluster members via a global namespace. StorageOS also comes with out of the box support for Prometheus to make sure your cluster is healthy as Day 2 actions are taken, such as pinning the Nodes used for the storage cluster into a specific Node Pool. This Operator reaches Phase IV. Read the product brief for more information.
All available in OperatorHub
Check out all of these great Operators and more at OperatorHub.io. If you’re browsing from within your OpenShift 4 cluster, you can filter for Phase IV “Deep Insights” and Phase V “Auto Pilot.”
If you’re working on an Operator for OpenShift, find out more about our Operator Certification program.

 
The post Top Kubernetes Operators advancing across the Operator Capability Model appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift