Telco revolution or evolution: Depends on your perspective, but your network is changing

As the market embraces edge computing and 5G networks, telecommunications service providers are increasingly looking for ways to migrate their monolithic services to microservices and containers. These providers are moving from legacy hardware appliances to virtualized network functions to containerized network functions on cloud infrastructure. Red Hat’s partnership with a rich ecosystem of software-defined networking (SDN) vendors, independent software vendors (ISVs), and network equipment providers (NEPs), as well as its deep involvement in the open source projects powering these initiatives, gives customers the choices and long-life support they need to build the services infrastructure that supports their business needs both today and tomorrow – as well as the journey in between.

Happy Together
The key word here is “journey”, not “toggle” or “flip of a switch.” Rather than a sudden change where all workloads switch from virtualized to containerized at once, environments will go through a period where both types of applications, virtualized and containerized, must coexist. For example, mobile phone service providers are unlikely to rip and replace applications and services that are spread around the world, due to scale, geography and various other business and technical reasons. As infrastructure reaches the end of its life, or as new applications are developed, containerized applications can be introduced to handle new services or growth in subscriber or bandwidth capacity, rather than rewriting an entire deployment as containers all at once. A similar transition can be observed in automobiles, where hybrid cars bridge the gap between today’s gas-powered vehicles and tomorrow’s all-electric ones – virtualized network functions (VNFs) are the comparable bridge on the journey from legacy hardware solutions to containerized network functions (CNFs).
Why is this journey taking place? It all comes down to flexibility. The sheer number of networks becoming software-based and the number of connections within those networks mean that network infrastructure needs to be able to scale with demand. Using the mobile service provider example:

Physical network functions – Initially, specialized hardware was required to provide phone connectivity as well as new network functions such as voicemail, call forwarding, conferencing, etc. Physical hardware had to be sized for peak usage, resulting in huge capital and operating expenditures on resources that sat largely unused and therefore generated no revenue.
NFV – Virtualized network functions allowed for increased flexibility and more dynamic resource allocation. Unlike dedicated hardware solutions, virtualized applications are abstracted from the hardware, so the underlying infrastructure could be utilized more efficiently, reducing the expenditures for resources that are not being used.
CNF – In addition to scaling to serve a growing user base, providers are now rolling out new applications to stay ahead of their competition. Instead of offering a single service (like voice calling), providers are creating new applications enabling innovations like live transcription, instant meeting coordination, and even real-time translation to break down communication barriers. Containerization brings with it a cornucopia of benefits affecting everything from app development to security. From the beginning, development time is reduced because each container image is just the application – no additional VM complexity. This means that containers can be created, updated or destroyed much faster than traditional infrastructure – further accelerating time-to-market and reducing the management burden with easier updates and rollbacks. Containers add a layer of abstraction not present in VMs. While VMs rely on the infrastructure layer to provide benefits such as resilience, containers are cloud-native and are built to be independent of their infrastructure. This abstraction also enhances security, not just because patches can be rolled out faster, but because just the container host can be patched – as opposed to multiple, individual guest operating systems that each need attention. By making application development faster, scaling easier and management less complex, containers allow providers to launch new applications (services) faster and gain a competitive advantage.

 
At Red Hat, we believe that the future is all about choice and that our customers know the best ways to serve their own customers. As modern telecommunications networks explore new architectures to meet tomorrow’s customer demands, Red Hat is ready to help along the way.
From current NFV to nascent and growing CNF deployments, providers are at different stages of evolving their networks. As providers continue their journeys towards network transformation, they can count on infrastructures running Red Hat OpenStack Platform to power production-ready NFV applications. Red Hat OpenStack brings the stability of long-life releases backed by a 10-year support lifecycle that lets providers focus on business, not managing infrastructure. Red Hat OpenStack’s integration with a large ecosystem of partners and ISVs allows customers to choose how to build the most effective solution – beyond what a single vendor can offer. Customers around the world choose Red Hat OpenStack to serve as a platform for today’s workloads, but also as a future-ready foundation.
While already under way, the transition from virtualized to containerized network functions is going through a period where networks will need to run both types side by side. The “cap and grow” strategy allows VMs to exist for their full expected life cycle, while new applications are written as containers – decreasing the ratio of VMs to containers over a period of years. Providers aren’t converting functional VMs into containers en masse; rather, they are writing or sourcing new applications (e.g., 5G functions) as containers, while they maintain VMs with older technology that will eventually decline in number.
Finally comes the question of “where can I run my apps?” – “Wherever you want!” The conversation about containers would not be complete without mentioning one of their best assets: portability. Because containers are so abstracted from the infrastructure, OpenShift can run them on your cloud of choice and move them as needed. For example, if a container is deployed in a public cloud but needs to move on premises for data locality reasons, it can.
Red Hat OpenShift Container Platform deployed in a public cloud is great, but when Red Hat OpenShift is deployed on top of Red Hat OpenStack Platform, the result is an environment supporting both virtualized and containerized network functions. This allows both current and future applications to share the same infrastructure, and our customers can move at their own pace, using this deployment model of coexistence to migrate their VM workloads or containers.
At the vanguard of network transformation, providers are fully containerizing their mobile network core and are writing their own cloud-native applications. For added stability and faster development, Red Hat’s Universal Base Image (UBI) reduces risk by serving as a certified, supportable foundation for container-based workloads: standardized innovation. Working within the partner ecosystem, Red Hat is expanding support and certification for CNFs with ISVs, along with early access customer trials. Upstream, Red Hat is also working with the Kubernetes and other project communities to bring telecommunications requirements into the key projects for supporting CNF use cases.
What does this mean for our customers? Whether rolling out CNF, supporting a mixed environment of CNF and NFV, or running dedicated NFV environments, Red Hat has the platform, the experience and the services to power their network function strategy wherever they are in their journey. Truly delivering open source solutions, Red Hat can be the trusted, impartial adviser that guides service providers through this transformation and beyond, without the shackles of single-vendor (or forked) proprietary offerings – enabling customers to build exactly what they want to deliver, when and where they need it.
The post Telco revolution or evolution: Depends on your perspective, but your network is changing appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Mirantis Acquires Docker Enterprise Platform Business

Industry-leading Docker Enterprise container platform complements existing Kubernetes technology from Mirantis

Campbell, Calif. – November 13, 2019 – Mirantis, the open cloud company, announced today its acquisition of Docker’s Enterprise Platform business. Its industry-leading container platform, employees and hundreds of enterprise customers will accelerate Mirantis’ goal to deliver Kubernetes-as-a-Service with a consistent experience for developers on any cloud and on-prem infrastructure. Terms of the deal are confidential.

Docker Enterprise is the only platform that enables developers to seamlessly build, share and safely run any application anywhere – from public cloud to hybrid cloud to the edge. One third of Fortune 100 companies use Docker Enterprise as their high-velocity innovation platform.

Mirantis and the newly acquired Docker Enterprise team will continue to develop and support the Docker Enterprise platform and add new capabilities that enterprise clients expect:

A zero touch, as-a-service experience to eliminate the administration, integration and operation burden for customers 
Mirantis Kubernetes and related cloud-native technologies 
A proven enterprise business model with a strong financial foundation  

“The Mirantis Kubernetes technology joined with the Docker Enterprise Container Platform brings simplicity and choice to enterprises moving to the cloud. Delivered as a service, it’s the easiest and fastest path to the cloud for new and existing applications,” said Adrian Ionel, CEO and co-founder at Mirantis. “The Docker Enterprise employees are among the most talented cloud native experts in the world and can be immensely proud of what they achieved. We’re very grateful for the opportunity to create an exciting future together and welcome the Docker Enterprise team, customers, partners, and community.”

Commitment to open source and collaboration with Docker, Inc.

Mirantis and Docker will work together on core upstream technology, contributing to open source development. In addition, Mirantis and Docker will continue to ensure integration between their products with Docker focused on Docker Desktop and Docker Hub and Mirantis on the Docker Enterprise container platform.

According to Gartner, “cloud computing continues to be the platform for innovation that the digital business demands. Cloud has become the foundation that enables businesses to transform, differentiate and gain a competitive advantage. In 2020, infrastructure, applications and data will continue to proliferate everywhere, forcing organizations to extend their hybrid and multi cloud strategies to the edge.” As enterprises continue their cloud adoption, they want to avoid expensive operations, developer roadblocks and cloud lock-in. A container platform based on open standards gives enterprises the strategic flexibility needed to run applications wherever they need.

Mirantis has made significant investments in Kubernetes which will flow into the Docker Enterprise platform and benefit all of its customers. Kubernetes has become the de facto container orchestrator with an active, vibrant community, and Mirantis will combine its Kubernetes technology with Docker Enterprise to fuel the next wave of cloud-native innovation for enterprises.

Additional Resources

Mirantis will host a webinar on Thursday, November 21, 2019 at 9:00 a.m. PT to discuss its vision for combining Docker Enterprise with Mirantis. Register here: https://info.mirantis.com/mirantis-product-update
Read the blog post by Adrian Ionel: https://www.mirantis.com/blog/mirantis-acquires-docker-enterprise-platform-business/

About Mirantis
Mirantis helps enterprises move to the cloud on their terms, delivering a true cloud experience on any infrastructure, powered by Kubernetes. The company uses a unique as-a-service model empowering developers to build, share and run their applications anywhere – from public to hybrid cloud and to the edge. Mirantis serves many of the world’s leading enterprises, including Adobe, Cox Communications, DocuSign, Reliance Jio, STC, Vodafone, and Volkswagen. Learn more at www.mirantis.com.

 

###

 

Media Contact

Joseph Eckert for Mirantis

jeckertflak@gmail.com

 

¹ 2020 Planning Guide for Cloud Computing, Gartner, October 2019

 The post Mirantis Acquires Docker Enterprise Platform Business appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

What We Announced Today and Why it Matters

Today we announced that we have acquired the Docker Enterprise platform business from Docker, Inc., including the industry-leading Docker Enterprise platform and 750 customers.

Why Docker Enterprise, and Why Now?

Docker led the container revolution and showed the world a better way to build, share and run software. No infrastructure company has had a bigger impact on developers in the last decade.

Docker Enterprise is the only independent container platform that enables developers to seamlessly build, share and safely run their applications anywhere – from public cloud, to hybrid cloud to the edge.  One-third of Fortune 100 and one-fifth of Global 500 companies use Docker Enterprise. Two years ago Docker Enterprise started to ship Kubernetes as part of its Universal Control Plane and many of its customers are using it today or plan to use it in the near future.

This acquisition will accelerate Mirantis’ vision to deliver Kubernetes-as-a-Service with a consistent experience for developers on any cloud and on-prem infrastructure. 

Why Mirantis?

Mirantis has always brought strategic flexibility to companies moving to cloud. Our Kubernetes technology joined with Docker Enterprise brings even greater simplicity and choice to enterprises with a cloud-first mandate.

By basing the combined technology on flexible infrastructure and open standards like Kubernetes for full application portability, along with our expertise in providing market-leading enterprise support for open source software, Mirantis is a compelling alternative to lock-in platforms like VMware and Red Hat.

As enterprises continue to accelerate moving to hybrid and multi-cloud architectures, they want to avoid cloud lock-in and cloud sprawl. Selecting a container platform based on open standards allows enterprises to retain the strategic flexibility needed to run applications wherever needed. 

What Exactly Is Mirantis Acquiring?

Mirantis is acquiring Docker’s Enterprise business including products, technology, IP, and customer and partner relationships and will onboard former Docker Enterprise employees to continue to provide Docker Enterprise users a world-class customer experience.

The Docker Enterprise platform technology and associated IP: Docker Enterprise Engine, Docker Trusted Registry, Docker Universal Control Plane, Docker CLI 
All Docker Enterprise customers and contracts 
Strategic technology alliances and partner programs

What is Mirantis adding to Docker’s Enterprise business?

Its K8s-as-a-Service technology and expertise 
A shared product vision to deliver a consistent developer experience on any infrastructure, powered by K8s
A sound financial foundation with a proven track record of long-term success
Ongoing commitment to open source development and open standards 
The Mirantis as-a-service model for a simpler customer experience with greater economic value

Where Do We Go From Here?

Mirantis will assume the Docker Enterprise customer contracts. As the leader in enterprise support and managed services for open cloud infrastructure, Mirantis will work with customers on an ongoing basis to ensure continuity in support and services.

What About Docker Swarm?

The primary orchestrator going forward is Kubernetes. Mirantis is committed to providing an excellent experience to all Docker Enterprise platform customers and currently expects to support Swarm for at least two years, depending on customer input into the roadmap. Mirantis is also evaluating options for making the transition to Kubernetes easier for Swarm users.

Where You Can Learn More About Today’s News

Mirantis will be hosting a webinar on November 21, where we will be available to answer additional questions about the news and our vision for the future. Register here for the webinar.

If you’re a Docker Enterprise customer, an FAQ is available to answer your basic questions and provide a few ways to reach out to Mirantis. Of course, we will also reach out to you in the near term and ensure that your transition experience is seamless and positive.

We’re happy to note that this acquisition will enable Docker, Inc. to focus on advancing developers’ workflows when assembling, sharing and deploying modern applications. We see significant opportunities to collaborate with Docker, Inc. as both companies accelerate the move to containers and Kubernetes.
The post What We Announced Today and Why it Matters appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

Federated Prometheus with Thanos Receive

OpenShift Container Platform 4 comes with a Prometheus monitoring stack preconfigured. This stack is in charge of getting cluster metrics to ensure everything is working seamlessly, so cool, isn’t it?
But what happens if we have more than one OpenShift cluster and we want to consume those metrics from a single tool? Let me introduce you to Thanos.
In the words of its creators, Thanos is a set of components that can be composed into a highly available metrics system with unlimited storage capacity, which can be added seamlessly on top of existing Prometheus deployments.

NOTE: Prometheus instances and Thanos components deployed by prometheus-operator don’t have Red Hat commercial support yet; they are supported by the community.
NOTE: Prometheus remote_write is an experimental feature.

Architecture
In this blog post we are going to go through the deployment and configuration of multiple Prometheus instances; for that task we are going to use the Prometheus Operator available in the in-cluster Operator Marketplace.
We will have two OpenShift 4 clusters; each cluster comes with a pre-configured Prometheus instance managed by the OpenShift Cluster Monitoring Operator, and those Prometheus instances are already scraping our clusters.
Since we cannot yet modify the configuration of the existing Prometheus instances managed by the Cluster Monitoring Operator (we will be able to modify some properties in OCP 4.2), we will deploy new instances using the Prometheus Operator. Also, we don’t want the new Prometheus instances to scrape the exact same cluster data; instead we will configure the new instances to get the cluster metrics from the managed Prometheus instances using Prometheus Federation.

Prometheus will be configured to send all metrics to the Thanos Receive using remote_write.
Thanos Receive receives the metrics sent by the different Prometheus instances and persist them into the S3 Storage.
Thanos Store Gateway will be deployed so we can query persisted data on the S3 Storage.
Thanos Querier will be deployed, the Querier will answer user’s queries getting the required information from the Thanos Receiver and from the S3 Storage through the Thanos Store Gateway if needed.

Below a diagram depicting the architecture:

NOTE: Steps below assume you have valid credentials to connect to your clusters using oc tooling. We will refer to cluster1 as the west2 context, cluster2 as the east1 context, and cluster3 as the east2 context. Take a look at this video to know how to flatten your config files.

Deploying Thanos Store Gateway
The Store Gateway will be deployed only in one of the clusters, in this scenario we’re deploying it in Cluster3 (east2).
We want our metrics to persist indefinitely as well, so an S3 bucket is required. We will use AWS S3 for storing the persisted Prometheus data; you can find the required steps to create an AWS S3 bucket here.
We need a secret that stores the S3 configuration (and credentials) for the Store Gateway to connect to AWS S3.
Download the file store-s3-secret.yaml and modify the credentials accordingly.
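For reference, the file follows the Thanos object storage configuration format; a minimal sketch looks like the following (bucket name, endpoint and keys below are placeholders, not the values from the downloadable file):

type: S3
config:
  bucket: "thanos-store"
  endpoint: "s3.us-east-2.amazonaws.com"
  access_key: "<AWS_ACCESS_KEY_ID>"
  secret_key: "<AWS_SECRET_ACCESS_KEY>"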
oc --context east2 create namespace thanos
oc --context east2 -n thanos create secret generic store-s3-credentials --from-file=store-s3-secret.yaml

At the time of this writing the Thanos Store Gateway requires the anyuid SCC to work on OCP 4, so we are going to create a service account with that privilege:
oc --context east2 -n thanos create serviceaccount thanos-store-gateway
oc --context east2 -n thanos adm policy add-scc-to-user anyuid -z thanos-store-gateway

Download the file store-gateway.yaml containing the required definitions for deploying the Store Gateway.
oc --context east2 -n thanos create -f store-gateway.yaml

After a few seconds we should see the Store Gateway pod up and running:
oc --context east2 -n thanos get pods -l "app=thanos-store-gateway"
NAME READY STATUS RESTARTS AGE
thanos-store-gateway-0 1/1 Running 0 2m18s

Deploying Thanos Receive
Thanos Receive will be deployed only in one of the clusters, in this scenario we’re deploying it in Cluster3 (east2).
Thanos Receive requires a secret that stores the S3 configuration (and credentials) in order to persist data into S3; we are going to reuse the credentials created for the Store Gateway.
Our Thanos Receive instance will require clients to provide a Bearer Token in order to authenticate and be able to send metrics, we are going to deploy an OAuth Proxy in front of the Thanos Receive for providing such service.
We need to generate a session secret as well as annotate the ServiceAccount that will run the pods indicating which OpenShift Route will redirect to the oauth proxy.
oc --context east2 -n thanos create serviceaccount thanos-receive
oc --context east2 -n thanos create secret generic thanos-receive-proxy --from-literal=session_secret=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c43)
oc --context east2 -n thanos annotate serviceaccount thanos-receive serviceaccounts.openshift.io/oauth-redirectreference.thanos-receive='{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"thanos-receive"}}'

On top of that, using delegated authentication and authorization requires the cluster role system:auth-delegator to be assigned to the service account the oauth_proxy is running under, so we are going to add this role to the service account we just created:
oc --context east2 -n thanos adm policy add-cluster-role-to-user system:auth-delegator -z thanos-receive

Download the file thanos-receive.yaml containing the required definitions for deploying the Thanos Receive.
oc --context east2 -n thanos create -f thanos-receive.yaml

After a few seconds we should see the Thanos Receive pod up and running:
oc --context east2 -n thanos get pods -l "app=thanos-receive"
NAME READY STATUS RESTARTS AGE
thanos-receive-0 2/2 Running 0 112s

Now we can publish our Thanos receive instance using an OpenShift Route:
oc --context east2 -n thanos create route reencrypt thanos-receive --service=thanos-receive --port=web-proxy --insecure-policy=Redirect

Create ServiceAccounts for sending metrics
Since our Thanos Receive instance requires clients to provide a Bearer Token in order to authenticate and be able to send metrics, we need to create two ServiceAccounts (one per cluster) and give them the proper rights so they can authenticate against the oauth-proxy.
In our case we have configured the oauth-proxy to authenticate any account that has access to the thanos namespace in the cluster where it’s running (east2):
-openshift-delegate-urls={"/":{"resource":"namespaces","resourceName":"thanos","namespace":"thanos","verb":"get"}}

So it is enough to create the ServiceAccounts in the namespace and grant them the view role:
oc --context east2 -n thanos create serviceaccount west2-metrics
oc --context east2 -n thanos adm policy add-role-to-user view -z west2-metrics
oc --context east2 -n thanos create serviceaccount east1-metrics
oc --context east2 -n thanos adm policy add-role-to-user view -z east1-metrics

Deploying Prometheus instances using the Prometheus Operator
First things first, we need to deploy a new Prometheus instance into each cluster. We are going to use the Prometheus Operator for that task, so let’s start by deploying the operator.
We will deploy the operator on west2 and east1 clusters.
Deploying the Prometheus Operator into a new Namespace
A new namespace where the Operator and the Prometheus instances will be deployed needs to be created.

Once logged in the OpenShift Console, on the left menu go to Home -> Projects and click on Create Project:

Fill in the required information, we’ve used thanos as our namespace name:

Now we are ready to deploy the Prometheus Operator; we’re going to use the in-cluster Operator Marketplace for that.

On the left menu go to Catalog -> OperatorHub:

From the list of Operators, choose Prometheus Operator:

Accept the Community Operator supportability warning (if prompted):

Install the Operator by clicking on Install:

Create the subscription to the operator:

After a few seconds you should see the operator installed.

NOTE: Above steps have to be performed in both clusters

Deploying Prometheus Instance
At this point we should have the Prometheus Operator already running on our namespace, which means we can start the deployment of our Prometheus instances leveraging it.
Configuring Serving CA to Connect to Cluster Managed Prometheus
Our Prometheus instance needs to connect to the Cluster Managed Prometheus instance in order to gather the cluster-related metrics. This connection uses TLS, so we will use the Serving CA to validate the Targets endpoints (Cluster Managed Prometheus).
The Serving CA is located in the openshift-monitoring namespace; we will create a copy in our namespace so we can use it in our Prometheus instances:
oc --context west2 -n openshift-monitoring get configmap serving-certs-ca-bundle --export -o yaml | oc --context west2 -n thanos apply -f -
oc --context east1 -n openshift-monitoring get configmap serving-certs-ca-bundle --export -o yaml | oc --context east1 -n thanos apply -f -

Configuring Required Cluster Role for Prometheus
We are going to use Service Monitors to discover Cluster Managed Prometheus instances and connect to them, in order to do so we need to grant specific privileges to the ServiceAccount that runs our Prometheus instances.
As you may know, the Cluster Managed Prometheus instances include the oauth proxy to perform authentication and authorization, in order to be able to authenticate we need a ServiceAccount that can GET all namespaces in the cluster. The token for this ServiceAccount will be used as Bearer Token to authenticate our connections to the Cluster Managed Prometheus instances.
Download cluster-role.yaml file containing the required ClusterRole and ClusterRoleBinding.
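If you want a sense of what that file defines before applying it, a minimal sketch along these lines would grant the needed access (the object names and the prometheus-k8s ServiceAccount binding here are assumptions; the downloaded file is authoritative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: federated-prometheus
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: federated-prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: federated-prometheus
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: thanos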
Now we are ready to create the ClusterRole and ClusterRoleBinding in both clusters:
oc --context west2 -n thanos create -f cluster-role.yaml
oc --context east1 -n thanos create -f cluster-role.yaml

Configuring Authentication for Thanos Receive
We need to create a secret containing the bearer token for the ServiceAccount we created before and that will grant access to the Thanos Receive, this secret will be mounted in the
Prometheus pod so it can be used to authenticate against the Thanos Receive:
oc --context west2 -n thanos create secret generic metrics-bearer-token --from-literal=metrics_bearer_token=$(oc --context east2 -n thanos serviceaccounts get-token west2-metrics)
oc --context east1 -n thanos create secret generic metrics-bearer-token --from-literal=metrics_bearer_token=$(oc --context east2 -n thanos serviceaccounts get-token east1-metrics)

Deploying Prometheus Instance
In order to deploy the Prometheus instance, we need to create a Prometheus object. On top of that two ServiceMonitors will be created. The ServiceMonitors have the required configuration for scraping the /federate endpoint from the Cluster Managed Prometheus instances. We will use openshift-oauth-proxy to protect our Prometheus instances so unauthenticated users cannot see our metrics.
As we want to protect our Prometheus instances using oauth-proxy we need to generate a session secret as well as annotate the ServiceAccount that will run the pods indicating which OpenShift Route will redirect to the oauth proxy.
oc --context west2 -n thanos create secret generic prometheus-k8s-proxy --from-literal=session_secret=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c43)
oc --context east1 -n thanos create secret generic prometheus-k8s-proxy --from-literal=session_secret=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c43)

oc --context west2 -n thanos annotate serviceaccount prometheus-k8s serviceaccounts.openshift.io/oauth-redirectreference.prometheus-k8s='{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"federated-prometheus"}}'
oc --context east1 -n thanos annotate serviceaccount prometheus-k8s serviceaccounts.openshift.io/oauth-redirectreference.prometheus-k8s='{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"federated-prometheus"}}'

Download the following files:

prometheus-thanos-receive.yaml
service-monitor-west2.yaml
service-monitor-east1.yaml

First, we will create the Prometheus instances and the required ServiceMonitor for scraping the Cluster Managed Prometheus instance on west2, then we will do the same for east1.
We need to modify the prometheus-thanos-receive.yaml in order to configure the remote_write url where Thanos Receive is listening:
THANOS_RECEIVE_HOSTNAME=$(oc --context east2 -n thanos get route thanos-receive -o jsonpath='{.status.ingress[*].host}')
sed -i.orig "s/<THANOS_RECEIVE_HOSTNAME>/${THANOS_RECEIVE_HOSTNAME}/g" prometheus-thanos-receive.yaml
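For orientation, the relevant part of the Prometheus custom resource in that file looks roughly like the snippet below (a sketch, not the full file; the secret mount path assumes the secret is listed under spec.secrets, and the placeholder is what the sed command above replaces):

spec:
  # (other Prometheus spec fields omitted)
  remoteWrite:
  - url: https://<THANOS_RECEIVE_HOSTNAME>/api/v1/receive
    bearerTokenFile: /etc/prometheus/secrets/metrics-bearer-token/metrics_bearer_token
    tlsConfig:
      insecureSkipVerify: true  # may be needed if the route certificate is not in the pod trust store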

oc --context west2 -n thanos create -f prometheus-thanos-receive.yaml
oc --context west2 -n thanos create -f service-monitor-west2.yaml
oc --context east1 -n thanos create -f prometheus-thanos-receive.yaml
oc --context east1 -n thanos create -f service-monitor-east1.yaml

ServiceMonitor Notes
The Prometheus Operator introduces additional resources in Kubernetes; one of these resources is the ServiceMonitor. A ServiceMonitor describes the set of targets to be monitored by Prometheus. You can learn more about that here.
You can see the following properties used in the ServiceMonitors we created above (a combined sketch follows this list):

honorLabels: true -> We want to keep the labels from the Cluster Managed Prometheus instance
- '{__name__=~".+"}' -> We want to get all the metrics found on the /federate endpoint
scheme: https -> The Cluster Managed Prometheus instance is configured to use TLS, so we need to use the https port for connecting to it
bearerTokenFile: <omitted> -> In order to authenticate through the oauth proxy we need to send a token from a SA that can GET all namespaces
caFile: <omitted> -> We will use this CA to validate Prometheus Targets certificates
serverName: <omitted> -> This is the Server Name we expect targets to report back
namespaceSelector + selector -> We will use these selectors to get pods running in the openshift-monitoring namespace that have the label prometheus: k8s
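Putting those properties together, a federation ServiceMonitor shaped roughly like the one below is what the downloaded files define (the object name, port name, CA path and serverName here are illustrative assumptions):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: cluster-federation
  namespace: thanos
spec:
  endpoints:
  - interval: 30s
    path: /federate
    port: web
    scheme: https
    honorLabels: true
    params:
      'match[]':
      - '{__name__=~".+"}'
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    tlsConfig:
      caFile: /etc/prometheus/configmaps/serving-certs-ca-bundle/service-ca.crt
      serverName: prometheus-k8s.openshift-monitoring.svc
  namespaceSelector:
    matchNames:
    - openshift-monitoring
  selector:
    matchLabels:
      prometheus: k8s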

After a few seconds we should see our Prometheus instances running on both clusters:
oc --context west2 -n thanos get pods -l "prometheus=federated-prometheus"
NAME READY STATUS RESTARTS AGE
prometheus-federated-prometheus-0 4/4 Running 1 104s
prometheus-federated-prometheus-1 4/4 Running 1 104s

oc --context east1 -n thanos get pods -l "prometheus=federated-prometheus"
NAME READY STATUS RESTARTS AGE
prometheus-federated-prometheus-0 4/4 Running 1 53s
prometheus-federated-prometheus-1 4/4 Running 1 53s

Now we can publish our prometheus instances using an OpenShift Route:
oc --context west2 -n thanos create route reencrypt federated-prometheus --service=prometheus-k8s --port=web-proxy --insecure-policy=Redirect
oc --context east1 -n thanos create route reencrypt federated-prometheus --service=prometheus-k8s --port=web-proxy --insecure-policy=Redirect

Deploying Custom Application
Our Prometheus instance is getting the cluster metrics from the Cluster Monitoring managed Prometheus. Now we are going to deploy a custom application and gather metrics from this application as well, so you can see the potential benefits of this solution.
The custom application exports some Prometheus metrics that we want to gather, we’re going to define a ServiceMonitor to get the following metrics:

total_reversed_words – Number of words reversed by our application
endpoints_accesed{endpoint} – Number of requests on a given endpoint

Deploying the application to both clusters
Download the file reversewords.yaml
oc --context west2 create namespace reverse-words-app
oc --context west2 -n reverse-words-app create -f reversewords.yaml
oc --context east1 create namespace reverse-words-app
oc --context east1 -n reverse-words-app create -f reversewords.yaml

After a few seconds we should see the Reverse Words pod up and running:
oc --context west2 -n reverse-words-app get pods -l "app=reverse-words"
NAME READY STATUS RESTARTS AGE
reverse-words-cb5b44bdb-hvg88 1/1 Running 0 95s

oc --context east1 -n reverse-words-app get pods -l "app=reverse-words"
NAME READY STATUS RESTARTS AGE
reverse-words-cb5b44bdb-zxlr6 1/1 Running 0 60s

Let’s go ahead and expose our application:
oc --context west2 -n reverse-words-app create route edge reverse-words --service=reverse-words --port=http --insecure-policy=Redirect
oc --context east1 -n reverse-words-app create route edge reverse-words --service=reverse-words --port=http --insecure-policy=Redirect

If we query the metrics for our application, we will get something like this:
curl -ks https://reverse-words-reverse-words-app.apps.west-2.sysdeseng.com/metrics | grep total_reversed_words | grep -v ^#
total_reversed_words 0

Let’s send some words and see how the metric increases:
curl -ks https://reverse-words-reverse-words-app.apps.west-2.sysdeseng.com/ -X POST -d '{"word": "PALC"}'
{"reverse_word":"CLAP"}

curl -ks https://reverse-words-reverse-words-app.apps.west-2.sysdeseng.com/metrics | grep total_reversed_words | grep -v ^#
total_reversed_words 1

In order to get these metrics into Prometheus, we need a ServiceMonitor that scrapes the metrics endpoint from our application.
Download the following files:

service-monitor-reversewords-west2.yaml
service-monitor-reversewords-east1.yaml

And create the ServiceMonitors:
oc --context west2 -n thanos create -f service-monitor-reversewords-west2.yaml
oc --context east1 -n thanos create -f service-monitor-reversewords-east1.yaml

After a few moments we should see a new Target within our Prometheus instance:

Deploying Thanos Querier
At this point we have:

Thanos Receive listening for metrics and persisting data to AWS S3
Thanos Store Gateway configured to get persisted data from AWS S3
Prometheus instances deployed on both clusters gathering cluster and custom app metrics and sending metrics to Thanos Receive

We can now go ahead and deploy the Thanos Querier, which will provide a unified WebUI for getting metrics from all our clusters.
The Thanos Querier connects to the Thanos Receive and Thanos Store Gateway instances over gRPC; we are going to use standard OpenShift services to provide this connectivity since all components are running in the same cluster.
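To make that concrete, the query container in thanos-querier-thanos-receive.yaml points at those services with --store flags, along the lines of this sketch (the image tag and service DNS names are assumptions; the downloaded file is authoritative):

      - name: thanos-querier
        image: quay.io/thanos/thanos:v0.8.1
        args:
        - query
        - --grpc-address=0.0.0.0:10901
        - --http-address=0.0.0.0:9090
        - --store=thanos-receive.thanos.svc.cluster.local:10901
        - --store=thanos-store-gateway.thanos.svc.cluster.local:10901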
As we already did with Prometheus instances, we are going to protect the Thanos Querier WebUI with the openshift-oauth-proxy, so first of all a session secret has to be created:
oc --context east2 -n thanos create secret generic thanos-querier-proxy --from-literal=session_secret=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c43)

Download the thanos-querier-thanos-receive.yaml.

NOTE: Port http/9090 is needed in the service until Grafana allows connecting to datasources using service account bearer tokens, so that we can connect through the oauth-proxy

oc --context east2 -n thanos create serviceaccount thanos-querier
oc --context east2 -n thanos create -f thanos-querier-thanos-receive.yaml

After a few seconds we should see the Querier pod up and running:
oc --context east2 -n thanos get pods -l "app=thanos-querier"
NAME READY STATUS RESTARTS AGE
thanos-querier-5f7cc544c-p9mn2 2/2 Running 0 2m43s

Annotate the SA with the route name so oauth proxy handles the authentication:
oc --context east2 -n thanos annotate serviceaccount thanos-querier serviceaccounts.openshift.io/oauth-redirectreference.thanos-querier='{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"thanos-querier"}}'

Time to expose the Thanos Querier WebUI:
oc --context east2 -n thanos create route reencrypt thanos-querier --service=thanos-querier --port=web-proxy --insecure-policy=Redirect

If we go now to the Thanos Querier WebUI we should see two stores:

Receive: East2 Thanos Receive
Store Gateway: S3 Bucket

Grafana
Now that we have Prometheus and Thanos components deployed, we are going to deploy Grafana.
Grafana will use Thanos Querier as Prometheus datasource and will enable the creation of graphs from aggregated metrics from all your clusters.
We have prepared a small demo with example dashboards for you to get a sneak peek of what can be done.
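For reference, the prometheus.json datasource file used below provisions Thanos Querier as a Prometheus datasource for Grafana; a minimal sketch could look like this (the in-cluster service name assumes the http/9090 service port mentioned earlier):

{
  "apiVersion": 1,
  "datasources": [
    {
      "name": "Thanos-Querier",
      "type": "prometheus",
      "access": "proxy",
      "url": "http://thanos-querier.thanos.svc.cluster.local:9090",
      "isDefault": true
    }
  ]
}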
Deploying Grafana
As we did before with Prometheus and Thanos Querier, we want to protect Grafana access with openshift-oauth-proxy, so let’s start by creating a session secret:
oc --context east2 -n thanos create secret generic grafana-proxy --from-literal=session_secret=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c43)

Annotate the SA with the route name so oauth proxy handles the authentication:
oc --context east2 -n thanos create serviceaccount grafana
oc --context east2 -n thanos annotate serviceaccount grafana serviceaccounts.openshift.io/oauth-redirectreference.grafana='{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"grafana"}}'

Download the following files:

grafana.ini
prometheus.json
grafana-dashboards.yaml
reversewords-dashboard.yaml
clusters-dashboard.yaml
grafana.yaml

oc --context east2 -n thanos create secret generic grafana-config --from-file=grafana.ini
oc --context east2 -n thanos create secret generic grafana-datasources --from-file=prometheus.yaml=prometheus.json
oc --context east2 -n thanos create -f reversewords-dashboard.yaml
oc --context east2 -n thanos create -f grafana-dashboards.yaml
oc --context east2 -n thanos create -f clusters-dashboard.yaml
oc --context east2 -n thanos create -f grafana.yaml

Now we can expose the Grafana WebUI using an OpenShift Route:
oc --context east2 -n thanos create route reencrypt grafana --service=grafana --port=web-proxy --insecure-policy=Redirect

Once logged in, we should see two demo dashboards available for us to use:

The OCP Cluster Dashboard has a cluster selector so we can select which cluster we want to get the metrics from.
Metrics from east-1

Metrics from west-2

We can have aggregated metrics as well, example below.
Metrics from reversed words

Next Steps

Configure a Thanos Receiver Hashring

The post Federated Prometheus with Thanos Receive appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

OpenShift 4.2 vSphere Install with Static IPs

In my previous blog I went over how to install OpenShift 4.2 on VMware vSphere 6.7 using DHCP. Using DHCP with address reservation via MAC address filtering is a common way of ensuring network configurations are set and consistent on Red Hat Enterprise Linux CoreOS (RHCOS).
Many environments would rather use static IP configuration to achieve the same consistent network configurations. With the release of OpenShift 4.2, we have now added the ability to configure network configurations (and persist them across reboots) in the preboot ignition phase for RHCOS.
In this blog we are going to go over how to install OpenShift 4.2 on VMware vSphere using static IPs.
Environment Overview
As in the previous blog, I am using vSphere version 6.7.0 and ESXi version 6.7.0 Update 3. I will be following the official documentation, where you can read more information about prerequisites including the need to set up DNS, a load balancer, artifacts, and other ancillary services/items.
Prerequisites
It’s always important that you get familiar with the prerequisites by reading the official documentation before you install. There you can find more details about the prerequisites and what they entail. I will go over the prerequisites at a high level, and will link examples where possible.
vSphere Credentials
I will be using my administrative credentials for vSphere. I will also be passing these credentials to the OpenShift 4 installer and, by extension, to the OpenShift cluster. It’s not a requirement to do so, and installing without the credentials will effectively turn your installation into a “Bare Metal UPI” installation. Most notably, you’ll lose the ability to dynamically create VMDKs for your applications at install time.
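With credentials provided, dynamic VMDK provisioning is exposed to workloads through the in-tree vSphere volume provisioner; conceptually, a StorageClass along the lines of this sketch is what drives it (the name and datastore here are illustrative, not what the installer necessarily creates):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: datastore1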
DNS
Like any OpenShift 4 installation, the first consideration you need to take into account when setting up DNS for OpenShift 4 is the “cluster id”. The “cluster id” becomes part of your cluster’s FQDN. For example; with a cluster id of “openshift4” and my domain of “example.com”, the cluster domain (i.e. the FQDN) for my cluster is openshift4.example.com
DNS entries are created using the $CLUSTERID.$DOMAIN cluster domain FQDN nomenclature. All DNS lookups will be based on this cluster domain. Using my example cluster domain, openshift4.example.com, I have the following DNS entries set up in my environment. Note that the etcd servers are pointed to the IP of the masters, and they are in the form of etcd-$INDEX.
[chernand@laptop ~]$ dig master1.openshift4.example.com +short
192.168.1.111
[chernand@laptop ~]$ dig master2.openshift4.example.com +short
192.168.1.112
[chernand@laptop ~]$ dig master3.openshift4.example.com +short
192.168.1.113
[chernand@laptop ~]$ dig worker1.openshift4.example.com +short
192.168.1.114
[chernand@laptop ~]$ dig worker2.openshift4.example.com +short
192.168.1.115
[chernand@laptop ~]$ dig bootstrap.openshift4.example.com +short
192.168.1.116
[chernand@laptop ~]$ dig etcd-0.openshift4.example.com +short
192.168.1.111
[chernand@laptop ~]$ dig etcd-1.openshift4.example.com +short
192.168.1.112
[chernand@laptop ~]$ dig etcd-2.openshift4.example.com +short
192.168.1.113

The DNS lookup for the API endpoints also needs to be in place. OpenShift 4 expects api.$CLUSTERDOMAIN and api-int.$CLUSTERDOMAIN to be configured; they can both be set to the same IP address – which will be the IP of the load balancer.
[chernand@laptop ~]$ dig api.openshift4.example.com +short
192.168.1.110
[chernand@laptop ~]$ dig api-int.openshift4.example.com +short
192.168.1.110

A wildcard DNS entry needs to be in place for the OpenShift 4 ingress router, which is also a load balanced endpoint.
[chernand@laptop ~]$ dig *.apps.openshift4.example.com +short
192.168.1.110

In addition to the mentioned entries, you’ll also need to add SRV records. These records are needed for the masters to find the etcd servers. This needs to be in the form of _etcd-server-ssl._tcp.$CLUSTERDOMAIN in your DNS server.
[chernand@laptop ~]$ dig _etcd-server-ssl._tcp.openshift4.example.com SRV +short
0 10 2380 etcd-0.openshift4.example.com.
0 10 2380 etcd-1.openshift4.example.com.
0 10 2380 etcd-2.openshift4.example.com.

Please review the official documentation to read more about the prerequisites for DNS before installing.
Load Balancer
You will need a load balancer to frontend the APIs, both internal and external, and the OpenShift router. Although Red Hat has no official recommendation as to which load balancer to use, one that supports SNI is necessary (most load balancers do this today).
You will need to configure port 6443 and 22623 to point to the bootstrap and master nodes. The below example is using HAProxy (NOTE that it must be TCP sockets to allow SSL passthrough):
frontend openshift-api-server
    bind *:6443
    default_backend openshift-api-server
    mode tcp
    option tcplog

backend openshift-api-server
    balance source
    mode tcp
    server btstrap 192.168.1.116:6443 check
    server master1 192.168.1.111:6443 check
    server master2 192.168.1.112:6443 check
    server master3 192.168.1.113:6443 check

frontend machine-config-server
    bind *:22623
    default_backend machine-config-server
    mode tcp
    option tcplog

backend machine-config-server
    balance source
    mode tcp
    server btstrap 192.168.1.116:22623 check
    server master1 192.168.1.111:22623 check
    server master2 192.168.1.112:22623 check
    server master3 192.168.1.113:22623 check

You will also need to configure 80 and 443 to point to the worker nodes. The HAProxy configuration is below (keeping in mind that we’re using TCP sockets).
frontend ingress-http
    bind *:80
    default_backend ingress-http
    mode tcp
    option tcplog

backend ingress-http
    balance source
    mode tcp
    server worker1 192.168.1.114:80 check
    server worker2 192.168.1.115:80 check

frontend ingress-https
    bind *:443
    default_backend ingress-https
    mode tcp
    option tcplog

backend ingress-https
    balance source
    mode tcp
    server worker1 192.168.1.114:443 check
    server worker2 192.168.1.115:443 check

More information about load balancer configuration (and general networking guidelines) can be found in the official documentation page.
Web server
A web server is needed in order to hold the ignition configurations and install artifacts needed to install RHCOS. Any webserver will work as long as the webserver can be reached by the bootstrap, master, and worker nodes during installation. I will be using Apache, and created a directory specifically for the ignition files.
[chernand@laptop ~]$ mkdir -p /var/www/html/{ignition,install}
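Later steps fetch files from this host on port 8080, so the web server needs to listen there. With Apache, a minimal sketch (assuming a default httpd layout; adjust firewall and SELinux settings as needed in your environment) is:

# /etc/httpd/conf/httpd.conf (default is Listen 80)
Listen 8080

[root@webserver ~]# systemctl restart httpd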

Artifacts
You will need to obtain the installation artifacts by visiting try.openshift.com, where you can log in and click on “VMware vSphere”. You will need:

OpenShift4 Client Tools
OpenShift4 OVA
Pull Secret
You will also need the RHCOS ISO and the OpenShift4 Metal BIOS file

You will need to put the client and the installer in your $PATH; in my example I put mine in /usr/local/bin.
[chernand@laptop ~]$ which oc
/usr/local/bin/oc
[chernand@laptop ~]$ which kubectl
/usr/local/bin/kubectl
[chernand@laptop ~]$ which openshift-install
/usr/local/bin/openshift-install

I’ve also downloaded my pullsecret as pull-secret.json and saved it under a ~/.openshift directory I created.
[chernand@laptop ~]$ file ~/.openshift/pull-secret.json
/home/chernand/.openshift/pull-secret.json: JSON data

An ssh-key is needed. This is used in order to login to the RHCOS server if you ever need to debug the system.
[chernand@laptop ~]$ file ~/.ssh/id_rsa.pub
/home/chernand/.ssh/id_rsa.pub: OpenSSH RSA public key

For more information about ssh and RHCOS, you can read that at the official documentation site.
Installation
Once the prerequisites are in place, you’re ready to begin the installation. The current installation of OpenShift 4 on vSphere must be done in stages. I will go over each stage step by step.
vSphere Preparations
For static IP configurations, you will need to upload the ISO into a datastore. On the vSphere WebUI, click on the “Storage” navigation button (it looks like stacked cylinders), and click on the datastore you’d like to upload the ISO to. In my example, I have a datastore specifically for ISOs:

On the right hand side window, you’ll see the summary page with navigation buttons. Select “Upload Files” and select the RHCOS ISO.

NOTE: Make sure you upload the ISO to a datastore that all your ESXi hosts have access to.

Once uploaded, you should have something like this:

You will also need the aforementioned OpenShift 4 Metal BIOS file. I’ve downloaded the bios file and saved it as bios.raw.gz on my webserver.
[root@webserver ~]# ll /var/www/html/install/bios.raw.gz
-rw-r--r--. 1 root root 700157452 Oct 15 11:54 /var/www/html/install/bios.raw.gz

Generate Install Configuration
Now that you’ve uploaded the ISO to vSphere for installation, you can go ahead and generate the install-config.yaml file. This file tells OpenShift about the environment you’re going to install to.
Before you create this file you’ll need an installation directory to store all your artifacts. I’m going to name mine openshift4.
[chernand@laptop ~]$ mkdir openshift4
[chernand@laptop ~]$ cd openshift4/

NOTE: Stay in this directory for the remainder of the installation procedure.

I’m going to export some environment variables that will make the creation of the install-config.yaml file easier. Please substitute your configuration where applicable.
[chernand@laptop openshift4]$ export DOMAIN=example.com
[chernand@laptop openshift4]$ export CLUSTERID=openshift4
[chernand@laptop openshift4]$ export VCENTER_SERVER=vsphere.example.com
[chernand@laptop openshift4]$ export VCENTER_USER="administrator@vsphere.local"
[chernand@laptop openshift4]$ export VCENTER_PASS='supersecretpassword'
[chernand@laptop openshift4]$ export VCENTER_DC=DC1
[chernand@laptop openshift4]$ export VCENTER_DS=datastore1
[chernand@laptop openshift4]$ export PULL_SECRET=$(< ~/.openshift/pull-secret.json)
[chernand@laptop openshift4]$ export OCP_SSH_KEY=$(< ~/.ssh/id_rsa.pub)

Once you’ve exported those, go ahead and create the install-config.yaml file in the openshift4 directory by running the following:
[chernand@laptop openshift4]$ cat <<EOF > install-config.yaml
apiVersion: v1
baseDomain: ${DOMAIN}
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ${CLUSTERID}
networking:
  clusterNetworks:
  - cidr: 10.254.0.0/16
    hostPrefix: 24
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  vsphere:
    vcenter: ${VCENTER_SERVER}
    username: ${VCENTER_USER}
    password: ${VCENTER_PASS}
    datacenter: ${VCENTER_DC}
    defaultDatastore: ${VCENTER_DS}
pullSecret: '${PULL_SECRET}'
sshKey: '${OCP_SSH_KEY}'
EOF

I’m going over the options at a high level:

baseDomain – This is the domain of your environment.
metadata.name – This is your clusterid
Note: this makes all FQDNs for the cluster have the openshift4.example.com domain.
platform.vsphere – This is your vSphere specific configuration. This is optional and you can find a “standard” install config example in the docs.
pullSecret – This pull secret can be obtained by going to cloud.redhat.com.
Note: I saved mine as ~/.openshift/pull-secret.json
sshKey – This is your public SSH key (e.g. id_rsa.pub)

NOTE: The OpenShift installer removes this file during the install process, so you may want to keep a copy of it somewhere.

Create Ignition Files
The next step in the process is to create the installer manifest files using the openshift-install command. Keep in mind that you need to be in the install directory you created (in my case that’s the openshift4 directory).
[chernand@laptop openshift4]$ openshift-install create manifests
INFO Consuming “Install Config” from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings

Note that the installer tells you that the masters are schedulable. For this installation, we need to set the masters to not schedulable.
[chernand@laptop openshift4]$ sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml
[chernand@laptop openshift4]$ cat manifests/cluster-scheduler-02-config.yml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}

To find out more about why you can’t run workloads on OpenShift 4.2 on the control plane, please refer to the official documentation.
Once the manifests are created, you can go ahead and create the ignition files for installation.
[chernand@laptop openshift4]$ openshift-install create ignition-configs
INFO Consuming “Master Machines” from target directory
INFO Consuming “Openshift Manifests” from target directory
INFO Consuming “Worker Machines” from target directory
INFO Consuming “Common Manifests” from target directory

Next, create an append-bootstrap.ign file. This file will tell RHCOS where to download the bootstrap.ign file to configure itself for the OpenShift cluster.
[chernand@laptop openshift4]$ cat <<EOF > append-bootstrap.ign
{
  "ignition": {
    "config": {
      "append": [
        {
          "source": "http://192.168.1.110:8080/ignition/bootstrap.ign",
          "verification": {}
        }
      ]
    },
    "timeouts": {},
    "version": "2.1.0"
  },
  "networkd": {},
  "passwd": {},
  "storage": {},
  "systemd": {}
}
EOF

Next, copy over the ignition files to this webserver:
[chernand@laptop ~]$ sudo scp *.ign root@192.168.1.110:/var/www/html/ignition/
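Before moving on, it can be worth a quick check that the web server actually serves the ignition files and the BIOS image over the port referenced in append-bootstrap.ign and the kernel arguments (8080 here); both should return an HTTP 200 status:

[chernand@laptop ~]$ curl -sI http://192.168.1.110:8080/ignition/bootstrap.ign | head -1
[chernand@laptop ~]$ curl -sI http://192.168.1.110:8080/install/bios.raw.gz | head -1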

You are now ready to create the VMs.
Creating the Virtual Machines
You create the RHCOS VMs for OpenShift 4 the same way you do any other VM. I will go over the process of creating the bootstrap VM. The process is similar for the masters and workers.
On the VMs and Templates navigation screen (the one that looks like sheets of paper), right click your openshift4 folder and select New Virtual Machine.

The “New Virtual Machine” wizard will start.

Make sure “Create a new virtual machine” is selected and click next. On the next screen, name this VM “bootstrap” and make sure it gets created in the openshift4 folder. It should look like this:

On the next screen, choose an ESXi host in your cluster for the initial creation of this bootstrap VM, and click “Next”. On the next screen, it will ask you which datastore to use for the installation. Choose the datastore appropriate for your installation.

On the next page, it’ll ask you to set the compatibility version. Go ahead and select “ESXi 6.7 and Later” for the version and select next. On the next page, set the OS Family to “Linux” and the Version to “Red Hat Enterprise Linux 7 (64-Bit)”.

After you click next, it will ask you to customize the hardware. For the bootstrap, set 4 vCPUs, 8 GB of RAM, and a 120 GB hard drive.

On The “New CD/DVD Drive” select “Datastore ISO File” and select the RHCOS ISO file you’ve uploaded earlier.
Next, click on the “VM Options” tab and scroll down and expand “Advanced”. Set “Latency Sensitivity” to “High”.

Click “Next”. This will bring you to the overview page:

Go ahead and click “Finish”, to create this VM.
You will need to run through these steps at least 5 more times (3 masters and 2 workers). Use the table below (based on the official documentation) to create your other 5 VMs.

MACHINE     OPERATING SYSTEM               vCPU   RAM     STORAGE
Masters     Red Hat Enterprise Linux 7     4      16 GB   120 GB
Workers     Red Hat Enterprise Linux 7     4      8 GB    120 GB

NOTE: If you’re cloning from the bootstrap, make sure you adjust the parameters accordingly and select “thin provision” for the disk clone.

Once you have all the servers created, the openshift4 directory should look something like this:

Next, boot up your bootstrap VM and open up the console. You’ll get the “RHEL CoreOS Installer” install splash screen. Hit the TAB button to interrupt the boot countdown so you can pass kernel parameters for the install.

On this screen, you’ll pass the following parameters. Please note that this needs to be done all on one line, I broke it up for easier readability.
ip=192.168.1.116::192.168.1.1:255.255.255.0:bootstrap.openshift4.example.com:ens192:none
nameserver=192.168.1.2
coreos.inst.install_dev=sda
coreos.inst.image_url=http://192.168.1.110:8080/install/bios.raw.gz
coreos.inst.ignition_url=http://192.168.1.110:8080/ignition/append-bootstrap.ign

NOTE: Using the ip=… syntax will set the host with the static IP you provided, persistently across reboots. The syntax is: ip=$IPADDRESS::$DEFAULTGW:$NETMASK:$HOSTNAMEFQDN:$IFACE:none nameserver=$DNSSERVERIP

This is how it looked in my environment:

Do this for ALL your servers, substituting the correct IP/config where appropriate (an example for master1 follows the list below). I did mine in the following order:

Bootstrap
Masters
Workers
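As an example of the substitution, master1 from the DNS section above would get a line roughly like the following (again all on one line, broken up here for readability; master.ign is one of the ignition files generated by openshift-install and copied to the web server earlier):

ip=192.168.1.111::192.168.1.1:255.255.255.0:master1.openshift4.example.com:ens192:none
nameserver=192.168.1.2
coreos.inst.install_dev=sda
coreos.inst.image_url=http://192.168.1.110:8080/install/bios.raw.gz
coreos.inst.ignition_url=http://192.168.1.110:8080/ignition/master.ign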

Bootstrap Process
Back on the installation host, wait for the bootstrap to complete using the OpenShift installer.
[chernand@laptop openshift4]$ openshift-install wait-for bootstrap-complete --log-level debug
DEBUG OpenShift Installer v4.2.0
DEBUG Built from commit 90ccb37ac1f85ae811c50a29f9bb7e779c5045fb
INFO Waiting up to 30m0s for the Kubernetes API at https://api.openshift4.example.com:6443…
INFO API v1.14.6+2e5ed54 up
INFO Waiting up to 30m0s for bootstrapping to complete…
DEBUG Bootstrap status: complete
INFO It is now safe to remove the bootstrap resources

Once you see this message you can safely delete the bootstrap VM and continue with the installation.
Finishing Install
With the bootstrap process completed, the cluster is actually up and running, but it is not yet in a state where it’s ready to receive workloads. Finish the install process by first exporting the KUBECONFIG environment variable.
[chernand@laptop openshift4]$ export KUBECONFIG=~/openshift4/auth/kubeconfig

You can now access the API. You first need to check if there are any CSRs that are pending for any of the nodes. You can do this by running oc get csr, which will list all the CSRs for your cluster.
[chernand@laptop openshift4]$ oc get csr
NAME AGE REQUESTOR CONDITION
csr-4hn7m 6m36s system:node:master3.openshift4.example.com Approved,Issued
csr-4p6jz 7m8s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-6gvgh 6m21s system:node:worker2.openshift4.example.com Approved,Issued
csr-8q4q4 6m20s system:node:master1.openshift4.example.com Approved,Issued
csr-b5b8g 6m36s system:node:master2.openshift4.example.com Approved,Issued
csr-dc2vr 6m41s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-fwprs 6m22s system:node:worker1.openshift4.example.com Approved,Issued
csr-k6vfk 6m40s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-l97ww 6m42s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-nm9hr 7m8s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued

You can approve any pending CSRs by running the following command (please read more about certificates in the official documentation):
[chernand@laptop openshift4]$ oc get csr --no-headers | awk '{print $1}' | xargs oc adm certificate approve

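If you’d rather approve only the CSRs that are still pending, instead of piping every CSR name through the approve command, a go-template filter like the following should also work; it selects only the CSRs that have no status yet:
[chernand@laptop openshift4]$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve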
After you’ve verified that all CSRs are approved, you should be able to see your nodes.
[chernand@laptop openshift4]$ oc get nodes
NAME STATUS ROLES AGE VERSION
master1.openshift4.example.com Ready master 9m55s v1.14.6+c07e432da
master2.openshift4.example.com Ready master 10m v1.14.6+c07e432da
master3.openshift4.example.com Ready master 10m v1.14.6+c07e432da
worker1.openshift4.example.com Ready worker 9m56s v1.14.6+c07e432da
worker2.openshift4.example.com Ready worker 9m55s v1.14.6+c07e432da

In order to complete the installation, you need to add storage to the image registry. For testing clusters, you can set this to emptyDir (for more permanent storage, please see the official doc for more information).
[chernand@laptop openshift4]$ oc patch configs.imageregistry.operator.openshift.io cluster \
--type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

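To confirm the patch took effect, you can check that the image registry operator reports Available and that its pods come up:
[chernand@laptop openshift4]$ oc get clusteroperator image-registry
[chernand@laptop openshift4]$ oc get pods -n openshift-image-registry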
At this point, you can now finish the installation process.
[chernand@laptop openshift4]$ openshift-install wait-for install-complete
INFO Waiting up to 30m0s for the cluster at https://api.openshift4.example.com:6443 to initialize…
INFO Waiting up to 10m0s for the openshift-console route to be created…
INFO Install complete!
INFO To access the cluster as the system:admin user when using ‘oc’, run ‘export KUBECONFIG=/home/chernand/openshift4/auth/kubeconfig’
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.openshift4.example.com
INFO Login to the console with user: kubeadmin, password: STeaa-LjEB3-fjNzm-2jUFA

Once you’ve seen this message, the install is complete and the cluster is ready to use. If you provided your vSphere credentials, you’ll have a storage class already configured so you can create storage.
[chernand@laptop openshift4]$ oc get sc
NAME PROVISIONER AGE
thin (default) kubernetes.io/vsphere-volume 13m

You can use this storage class to dynamically create VMDKs for your applications.
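As a quick test of the dynamic provisioning, you could create a small PersistentVolumeClaim against the thin storage class; the claim name and size here are arbitrary examples:
[chernand@laptop openshift4]$ oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vmdk-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: thin
  resources:
    requests:
      storage: 1Gi
EOF
Once the claim binds (oc get pvc test-vmdk-claim), a VMDK backing the volume should have been created on the datastore.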
If you didn’t provide your vSphere credentials, you can consult the VMware Documentation site for how to set up storage integration with Kubernetes.
Conclusion
In this blog, we went over how to install OpenShift 4 on VMware using the UPI method and how to set up RHCOS with static IPs. We also demonstrated the vSphere integration that allows OpenShift to create VMDKs for applications.
Running OpenShift 4 on VMware is a great way to run your containerized workloads on your virtualization platform. Managing your application workloads with OpenShift on top of VMware gives you the flexibility to move workloads if the need arises.
We invite you to install OpenShift 4 on VMware vSphere and share your experience!
The post OpenShift 4.2 vSphere Install with Static IPs appeared first on Red Hat OpenShift Blog.
Source: OpenShift

How Boston Children’s Hospital Augments Doctors Cognition with Red Hat OpenShift

Software can be an enabler for healers. At Red Hat, we’ve seen this firsthand from customers like Boston Children’s Hospital. That venerable infirmary is using Red Hat OpenShift and Linux containers to enhance their medical capabilities and to augment their doctors’ cognitive capacity.
The Mother of all Demos
Let’s rewind 50 years, to when Silicon Valley was a quiet suburb with a big college doing very strange things for its day. We’re now back at the Stanford Research Institute, also known as SRI. This facility, in December of 1968, hosted the Mother of all Demos: a revolutionary and groundbreaking technology demonstration.
The Mother of all Demos was run by Douglas Engelbart, the head of SRI’s Augmentation Research Center. Back in 1968, this group demonstrated the following things on stage for the very first time: the mouse, the GUI, the word processor, the Internet, the collaborative Internet-based word processor, the on-screen cursor, the chording keyboard and telepresence. You could also make a compelling argument that SRI invented the computer monitor as the standard visual interface.

Engelbart envisioned a world where humans use computers to expand their consciousness and their data analysis capabilities. Computers were not seen by SRI as the means to an end, but rather as tools to allow humans to become even more curious, experimental and exploratory. They were tools to increase productivity and mental capacity, not just for doing your taxes or counting things.
Today, however, the breakthroughs are much more modest in comparison to that original Mother of all Demos. In a way, we’re only now realizing the second phase of Engelbart’s dream, the first being the proliferation of desktop computing and word processing. Beyond those mundane compute tasks, we see Google’s search engine as the force that has most augmented human cognition today.
That doesn’t mean the second phase of human cognition enhancement hasn’t taken place, however. It’s just not being written about as much as those flashy face-swapping applications.
The Future is Now
When Dr. Ellen Grant, Director of an innovative research team at Boston Children’s Hospital, took the stage at Red Hat Summit in Boston earlier this year, she was able to show the incredible power computing can unleash when it is coupled with human knowledge and human cooperation.
ChRis, AKA the Research Integration Service, was created through a collaboration between Boston Children’s Hospital, Boston University, and Red Hat. The system uses the scalability and flexibility of Red Hat OpenShift to provide an infrastructure for storing, analyzing and sharing patient data among doctors across a number of hospitals.
Dr. Grant explained how the system helps her diagnose a patient. Without ChRis, if a new patient presenting with unexplained seizures showed up in her office, she’d spend 15 to 20 minutes looking through around 5,000 images of the child’s brain generated by an MRI. After that, she’d forward the patient to other doctors for extensive tests and medication, none of which are guaranteed to fix the problem.
With ChRis, Dr. Grant would instead first process those 5,000 images at scale across the compute cluster. The resulting data would add information to those images, such as coloring the various regions of the brain, and highlighting areas where the brain structure has more than a standard deviation from the normal.
Dr. Grant can then compare these MRIs to those of other patients with similar symptoms, even though that data has been anonymized to protect the patients’ identities. The information on those patients is not confined to simply those who’ve attended Boston Children’s Hospital, either: many hospitals can share ChRis, and share their data with other facilities. They call these other hospital datacenters “Enclaves.”
This is tremendously important, said Dr. Grant, as children’s hospitals often see extremely rare diseases and afflictions, making data on those cases scarce. It’s almost incumbent upon these hospitals to share their patient information safely, as the sample pool for some medical issues is far below the needed threshold for proper statistical analysis.
Dr. Grant laid out a hypothetical scenario around this fictional seizure patient, and she intimated that without ChRis, the child could be in for a lifetime of seizures and ineffective medical treatments, all because the initial examination would be from a doctor who had only 15 to 20 minutes to find a defect in the child’s brain MRIs.

“Now this is the future of precision medicine: this is what we want to do and this is not possible without the Red Hat infrastructure and ChRis to bridge those two worlds together. Our lead engineer is working hand-in-hand with the Red Hat engineers so there’s no black boxes, and that’s another critical point in medicine: I need to know what happens to my data. I need to trace it through so that I understand the analysis that I get. Working together in open source, yet encrypted environments has now helped us share our collective knowledge to better serve and save while protecting individual identity. Together we are changing how healthcare works and it’s about time,” said Dr. Grant.
Red Hat is providing Boston Children’s Hospital with the tools they need to save lives and innovate in medicine. That’s their business, after all. We’re here to provide the infrastructure that allows them to do their work in a better, more productive and more impactful manner. But the innovation they’ve unlocked has been thanks to their skilled technical and medical teams.
While SRI has since moved on to a much wider range of research than took place in Engelbart’s time, we see our work on open source, Linux, Kubernetes and OpenShift as an extension of that 50-year-old dream. We’re here to help augment the capabilities of humans around the world, and in doing so, enable those humans to cure many medical diseases and save lives.
Learn more in the press release.
The post How Boston Children’s Hospital Augments Doctors Cognition with Red Hat OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Community Blog Round Up 11 November 2019

As we dive into the Ussuri development cycle, I’m sad to report that there’s not a lot of writing happening upstream.
If you’re one of those people waiting for a call to action, THIS IS IT! We want to hear about your story, your problem, your accomplishment, your analogy, your fight, your win, your loss – all of it.
And, in the meantime, Adam Young says it’s not that cloud is difficult, it’s networking! Fierce words, Adam. And a super fierce article to boot.
Deleting Trunks in OpenStack before Deleting Ports by Adam Young
Cloud is easy. It is networking that is hard.
Read more at https://adam.younglogic.com/2019/11/deleting-trunks-before-ports/
Source: RDO