OpenShift Commons Briefing: Automate and Scale Your Data Pipelines the Cloud Native Way with Guillaume Moutier (Red Hat)

In this briefing, Guillaume Moutier, Senior Principal Technical Evangelist at Red Hat, gives an overview of building automated and scalable data pipelines in the cloud, leveraging Ceph notifications, Kafka, and Knative Eventing and Serving.
With the accelerating need for data agility around the globe, it’s important that the right data be in the right place at the right time. Failure to meet these demands can even result in regulatory non-compliance, as data retention policies change almost daily. Guillaume gives an introduction to what it means to build automated and scalable data pipelines in OpenShift.
Slides from the Briefing: Automate-and-scale-your-data-pipelines-the-Cloud-Native-Way
Additional Resources:

Red Hat Container Storage 4
AMQ Streams and Kafka on OpenShift
Knative

To stay abreast of all the latest releases and events, please join OpenShift Commons and sign up for our mailing lists and Slack channel.
What is OpenShift Commons?
Commons builds connections and collaboration across OpenShift communities, projects, and stakeholders. In doing so we’ll enable the success of customers, users, partners, and contributors as we deepen our knowledge and experiences together.
Our goals go beyond code contributions. Commons is a place for companies using OpenShift to accelerate its success and adoption. To do this we’ll act as resources for each other, share best practices and provide a forum for peer-to-peer communication.
Join OpenShift Commons today!
The post OpenShift Commons Briefing: Automate and Scale Your Data Pipelines the Cloud Native Way with Guillaume Moutier (Red Hat) appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Fully Automated Management of Egress IPs with the egressip-ipam-operator

Introduction
Egress IPs are an OpenShift feature that allows an IP address (the egress IP) to be assigned to a namespace so that all outbound traffic from that namespace appears to originate from that IP address (technically, it is NATed to the specified IP).
This feature is useful within many enterprise environments as it allows for the establishment of firewall rules between namespaces and other services outside of the OpenShift cluster. The egress IP becomes the network identity of the namespace and all the applications running in it. Without egress IP, traffic from different namespaces would be indistinguishable because by default outbound traffic is NATed with the IP of the nodes, which are normally shared among projects. 

To clarify the concept, the diagram above shows two namespaces (A and B), each running two pods (A1, A2, B1, B2). A is a namespace whose applications can connect to a database in the company’s network. B is not authorized to do so. The A namespace is configured with an egress IP, so all of its pods’ outbound connections egress with that IP. A firewall is configured to allow connections from that IP to an enterprise database. The B namespace is not configured with an egress IP, so its pods egress using the node’s IP. Those IPs are not allowed by the firewall to connect to the database.
However, enabling this feature requires several manual configuration steps. Also, when running on cloud providers, additional configuration is needed.
While discussing this with a customer, we realized there was an opportunity to automate the entire process with an operator.
 
The egressip-ipam-operator 
The purpose of the egressip-ipam-operator is to manage the assignment of egressIPs (IPAM) to namespaces and to ensure that the necessary configuration in OpenShift and the underlying infrastructure is consistent.
IPs can be assigned to namespaces via an annotation or the egressip-ipam-operator can select one from a preconfigured CIDR range.
For a bare metal deployment, the configuration would be similar to the example below:
 
apiVersion: redhatcop.redhat.io/v1alpha1
kind: EgressIPAM
metadata:
  name: egressipam-baremetal
spec:
  cidrAssignments:
    - labelValue: "true"
      CIDR: 192.169.0.0/24
  topologyLabel: egressGateway
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
This configuration states that nodes selected by the nodeSelector should be divided in groups based on the topology label and each group will receive egressIPs from the specified CIDR.
In this example, we have only one group, which in most cases is enough for a bare metal configuration. Multiple groups are needed when nodes are spread across multiple subnets, where different CIDRs are required to make the addresses routable. This is exactly what happens with multi-AZ deployments on cloud providers (see more about this below).
Users can opt in to having their namespaces receive egress IPs by adding the following annotation to the namespace: 
egressip-ipam-operator.redhat-cop.io/egressipam=<egressIPAM>. 
So, in the case of the example from above the annotation would take the form: 
egressip-ipam-operator.redhat-cop.io/egressipam=egressipam-baremetal.
When this occurs, the namespace is assigned an egress IP per cidrAssignment.
In the case of bare metal, a node is selected by OpenShift to carry that egress IP.
It is also possible for the user to specify which egress IPs a namespace should have. In this case, a second annotation is needed with the following format: 
egressip-ipam-operator.redhat-cop.io/egressips=IP1,IP2…
The annotation value is a comma-separated list of IPs. There must be exactly one IP per cidrAssignment.
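Putting the two annotations together, an opted-in namespace manifest might look like the following sketch (the namespace name and the specific IP are illustrative; the EgressIPAM name matches the bare metal example above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-egress-ns   # hypothetical namespace name
  annotations:
    # opt in to IPAM from the egressipam-baremetal EgressIPAM
    egressip-ipam-operator.redhat-cop.io/egressipam: egressipam-baremetal
    # optional: request a specific IP from the CIDR instead of letting the operator choose
    egressip-ipam-operator.redhat-cop.io/egressips: 192.169.0.10
```

Without the second annotation, the operator simply picks a free IP from the 192.169.0.0/24 range defined above.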
AWS Support
The egressip-ipam-operator can also work with Amazon Web Services (AWS). In this case, the operator has additional tasks to perform because it needs to configure the EC2 VM instances to carry the additional IPs. This is because, as with most cloud providers, AWS must control the IPs that are assigned to VMs.
For the AWS use case, the EgressIPAM configuration appears as follows:
apiVersion: redhatcop.redhat.io/v1alpha1
kind: EgressIPAM
metadata:
  name: egressipam-aws
spec:
  cidrAssignments:
    - labelValue: "eu-central-1a"
      CIDR: 10.0.128.0/20
    - labelValue: "eu-central-1b"
      CIDR: 10.0.144.0/20
    - labelValue: "eu-central-1c"
      CIDR: 10.0.160.0/20
  topologyLabel: topology.kubernetes.io/zone
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
Here, we can see multiple cidrAssignments, one per availability zone, in which the cluster is installed. Also, notice that the topologyLabel must be specified as topology.kubernetes.io/zone to identify the availability zone. The CIDRs must be the same as the CIDRs used for the node subnet.
When a namespace with the opt-in annotation is created, the following actions occur:

One IP per cidrAssignment is assigned to the namespace.
One VM per zone is selected to carry the corresponding IP.
The OpenShift nodes corresponding to the AWS VMs are configured to carry that IP.

Installation
For detailed instructions on how to install the egress-ipam-operator, see the github repository.
Conclusion
Every time there is an automation opportunity in or around OpenShift, we should consider capturing the automation as an operator and, possibly, also consider open sourcing the resulting operator. In this case, we automated the operations around egress IPs. 
Keep in mind that this operator is not officially supported by Red Hat; it is currently managed by the Container Community of Practice (CoP) at Red Hat, which provides best-effort support. Feedback and contributions (for example, supporting additional cloud providers) are welcome.
 
The post Fully Automated Management of Egress IPs with the egressip-ipam-operator appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Accessing CodeReady Containers on a Remote Server

While installing an OpenShift cluster on a cloud isn’t difficult, the old school developer in me wants as much of my environment as possible to be in my house and fully under my control. I have some spare hardware in my basement that I wanted to use as an OpenShift 4 installation, but not enough to warrant a full blown cluster.
CodeReady Containers, or CRC for short, is perfect for that. Rather than try to rephrase what it is, I’ll just copy it directly from their site:
“CodeReady Containers brings a minimal, preconfigured OpenShift 4.1 or newer cluster to your local laptop or desktop computer for development and testing purposes. CodeReady Containers is delivered as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Windows 10.”
The hiccup is that while my server is in my basement, I don’t want to have to physically sit at the machine to use it. Since CRC is deployed as a virtual machine, I needed a way to get to that VM from any other machine on my home network. This blog talks about how to configure HAProxy on the host machine to allow access to CRC from elsewhere on the network.
I ran the following steps on a CentOS 8 installation, but they should work on any of the supported Linux distributions. You’ll also need some form of DNS resolution between your client machines and the DNS entries that CRC expects. In my case, I use a Pi-hole installation running on a Raspberry Pi (which effectively uses dnsmasq as described later in this post).
It’ll become obvious very quickly when you read this, but you’ll need sudo access on the CRC host machine.
Running CRC
The latest version of CRC can be downloaded from Red Hat’s site. You’ll need to download two things:

The crc binary itself, which is responsible for the management of the CRC virtual machine
Your pull secret, which is used during creation; save this in a file somewhere on the host machine

This blog isn’t going to go into the details of setting up CRC. Detailed information can be found in the Getting Started Guide in the CRC documentation.
That said, if you’re looking for a TL;DR version of that guide, it boils down to:
crc setup
crc start -p

Make sure CRC is running on the destination machine before continuing, since we’ll need the IP address that the VM is running on.
Configuring the Host Machine
We’ll use firewalld and HAProxy to route the host’s inbound traffic to the CRC instance. Before we can configure that, we’ll need to install a few dependencies:
sudo dnf -y install haproxy policycoreutils-python-utils

Configuring the Firewall
The CRC host machine needs to allow inbound connections on a variety of ports used by OpenShift. The following commands configure the firewall to open up those ports:
sudo systemctl start firewalld
sudo firewall-cmd --add-port=80/tcp --permanent
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --add-port=443/tcp --permanent
sudo systemctl restart firewalld
sudo semanage port -a -t http_port_t -p tcp 6443

Configuring HA Proxy
Once the firewall is configured to allow traffic into the server, HAProxy is used to forward it to the CRC instance. Before we can configure that, we’ll need to know the IP of the server itself, as well as the IP of the CRC virtual machine:
export SERVER_IP=$(hostname --ip-address)
export CRC_IP=$(crc ip)

Note: If your server is running DHCP, you’ll want to take steps to ensure its IP doesn’t change, either by changing it to run on a static IP or by configuring DHCP reservations. Instructions for how to do that are outside the scope of this blog, but chances are if you’re awesome enough to want to set up a remote CRC instance, you know how to do this already.
We’re going to replace the default haproxy.cfg file, so to be safe, create a backup copy:
cd /etc/haproxy
sudo cp haproxy.cfg haproxy.cfg.orig

Replace the contents of the haproxy.cfg file with the following:
global
debug

defaults
log global
mode http
timeout connect 0
timeout client 0
timeout server 0

frontend apps
bind SERVER_IP:80
bind SERVER_IP:443
option tcplog
mode tcp
default_backend apps

backend apps
mode tcp
balance roundrobin
option ssl-hello-chk
server webserver1 CRC_IP check

frontend api
bind SERVER_IP:6443
option tcplog
mode tcp
default_backend api

backend api
mode tcp
balance roundrobin
option ssl-hello-chk
server webserver1 CRC_IP:6443 check

Note: Generally speaking, setting the timeouts to 0 is a bad idea. In this context, we set them to 0 to keep websockets from timing out. Since you are (or rather, should be) running CRC in a development environment, this shouldn’t be as big of a problem.
You can either manually change the instances of SERVER_IP and CRC_IP as appropriate, or run the following commands to automatically perform the replacements:
sudo sed -i "s/SERVER_IP/$SERVER_IP/g" haproxy.cfg
sudo sed -i "s/CRC_IP/$CRC_IP/g" haproxy.cfg
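
To sanity-check the substitution logic before editing the real file, you can rehearse it on a scratch copy (the addresses below are made-up examples, not your real server or CRC IPs):

```shell
# Rehearse the placeholder replacement on a throwaway file, inside a
# subshell so the example values don't clobber the variables exported above.
(
  SERVER_IP=192.168.1.10   # example value only
  CRC_IP=192.168.130.11    # example value only
  printf 'bind SERVER_IP:443\nserver webserver1 CRC_IP check\n' > /tmp/haproxy-demo.cfg
  sed -i "s/SERVER_IP/$SERVER_IP/g; s/CRC_IP/$CRC_IP/g" /tmp/haproxy-demo.cfg
  cat /tmp/haproxy-demo.cfg
)
# bind 192.168.1.10:443
# server webserver1 192.168.130.11 check
```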

Once that’s finished, start HAProxy:
sudo systemctl start haproxy

Configuring DNS for Clients
As I said earlier, your client machines will need to be able to resolve the DNS entries used by CRC. This will vary depending on how you handle DNS. One possible option is to use dnsmasq on your client machine.
Before doing that, you’ll need to update NetworkManager to use dnsmasq. This is done by creating a new NetworkManager config file:
cat << EOF > /tmp/00-use-dnsmasq.conf
[main]
dns=dnsmasq
EOF

sudo mv /tmp/00-use-dnsmasq.conf /etc/NetworkManager/conf.d/00-use-dnsmasq.conf

You’ll also need to add DNS entries for the CRC server:
cat << EOF > /tmp/01-crc.conf
address=/apps-crc.testing/SERVER_IP
address=/api.crc.testing/SERVER_IP
EOF

sudo mv /tmp/01-crc.conf /etc/NetworkManager/dnsmasq.d/01-crc.conf

Again, you can either manually enter the IP of the host machine or use the following commands to replace it:
sudo sed -i "s/SERVER_IP/$SERVER_IP/g" /etc/NetworkManager/dnsmasq.d/01-crc.conf

Once the changes have been made, restart NetworkManager:
sudo systemctl reload NetworkManager

Accessing CRC
The crc binary provides subcommands for discovering the authentication information to access the CRC instance:
crc console --url

https://console-openshift-console.apps-crc.testing

crc console --credentials
To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p mhk2X-Y8ozE-9icYb-uLCdV https://api.crc.testing:6443'

The URL from the first command will access the web console from any machine with the appropriate DNS resolution configured. The login credentials can be determined from the output of the second command.
To give credit where it is due, much of this information came from this gist by Trevor McKay.
Happy Coding :)
The post Accessing CodeReady Containers on a Remote Server appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Running Remote Workshops

As humans, we are hardwired to prefer collaborating with physical and visual cues, and we often find it easier to explain and share complex ideas through drawing or other visual means. That is easiest when we are in the same room, where we can use tools like whiteboards or simply a pen and paper. In the current climate, where we are either unable to travel to collaborate or simply want to reduce our impact on the environment, the ability to effectively collaborate remotely is critical. 
Even with the rise of great and ubiquitous video conferencing capabilities, remote collaboration often leaves a lot to be desired. Whilst not the same as being able to look someone in the eye or be physically present with them, there are a number of things we can do to make remote collaboration easier and more effective.
Rules of engagement

Everyone must be on camera
All participants should use headphones or a good-quality conference speaker device
Ensure that you have a reliable internet link
Act like you are physically in the room
Focus your attention on the session

General standards

Keep sessions short, max 4 hours, but preferably 2 hours
Create and adhere to a formal agenda and times
Break up the sessions over several days as necessary
Be aware of time zones when planning
Break at least every 90min

Facilitators

Ensure that everyone is included and participates 
Ask for input and feedback frequently
Avoid having long periods of one person presenting or talking
Timebox inputs, but still give people the opportunity to share their thoughts 
Try to keep an open microphone during the sessions  (and encourage others to do so). I know this goes against the conventional wisdom, but it’s better than talking and presenting into the void. (Note that this doesn’t apply in a noisy or distracting environment.)

Individual participation

Turn off notifications to reduce distractions 
Treat the session with the same respect you would as if you were in the room with a customer or colleague; don’t read email or do other unrelated work
Where possible, try to utilize two screens: one with the participant videos, and the other with the presentation or whiteboard

Tools 

Instant Messaging: Set up a dedicated workshop/work session channel to share links and pictures (for example, Slack)
Video Conference: Pick the best video conferencing tool that all participants can access, such as Zoom, Google Hangouts (some companies forbid the required Google accounts on corporate machines), or Webex (functionality can be limited on some operating systems, such as Linux), and be aware of participants’ limitations.
Web Whiteboard: Pick an online collaboration tool that allows everyone to participate, and encourage everyone to get involved (see below for some options)
Shared Docs: Share agenda, notepads, spreadsheets, and so on with the whole team and let everyone contribute in real time (Google Docs)
Cellphone: Your camera can be used to share pictures of diagrams and anything else relevant.
Pen and Paper: Take notes, capture your thoughts, doodle 

Whiteboard style Collaboration Tools
We can use a number of different tools, including:
Google JamBoard

Link: https://jamboard.google.com
Web Browser and/or Mobile App: Both
Cost: Included in GSuite (There is a paid interactive display for Boardrooms)
Pros and cons: 

Integrated with Google GSuite and GDrive
Simple and intuitive to use
Work in web browser and/or app
Requires a Google login to secure file access (challenging if some attendees don’t have a Google account)
Can add sticky notes, but can’t add text directly 
Can add an image from GDrive and embed other GSuite docs
Through the iOS and Android apps you can directly access the camera and embed images.
A Jam can be downloaded as a PDF or saved as an editable frame
Multiple frames can be created in one Jam
Jams can be opened and added to later.

Conclusion: Jamboard is simple and well integrated; if you don’t need to share with external people who lack Google accounts, it’s a great tool.

A Web Whiteboard

Link: https://awwapp.com/
Web Browser and/or Mobile App: Web Browser
Cost: Ad-supported; the premium version is ad-free
Capabilities: 

Simple and intuitive to use
Can be shared with anyone just via the link
Board can be exported as an image or PDF
Requires an account to save a board or create multiple boards
Has tools to create basic shapes (rectangle and circle)
Can add typed text directly 

Conclusion: A great, simple web whiteboard tool that anyone can access; but if privacy and access control are concerns, don’t use it.

There are of course many others, some with great features, but these are the ones that I have tried.
Workshop Setup Checklist
We’ve found that the more we prepare for a workshop, the greater the degree of success we’ll see.  Some things we’ll want to consider:

Create a detailed agenda 

Agenda Template

Attendee List
Start and End Time
Detailed Session Description
Collaboration Tools to be used
Links to Collaboration tools and any relevant docs

Agree on the workshop start and end times for each session
Send the agenda out for comment and feedback at least a week before
Set up and test the teleconference tools; make sure ahead of time that everyone can access the chosen tool and test it (there is nothing more frustrating than having to deal with connectivity issues that eat into the session time)
Create Shared Docs and ensure that all attendees have access to the docs
Link all docs to the agenda

Conclusion
Remote work is likely to become more common even after the current emergency, particularly in the IT field, but while it can be jarring to those who are used to face-to-face contact, teleworking options are already quite advanced, and there’s no reason we can’t communicate effectively while spread out around the world. We just need to be certain that we’re prepared, both in terms of content and in terms of technology.
The post Running Remote Workshops appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Tips, Tricks, and Best Practices for Distributed RDO Teams

While a lot of RDO contributors are remote, there are many more who are not and now find themselves in lockdown or working from home due to the coronavirus. A few members of the RDO community requested tips, tricks, and best practices for working on and managing a distributed team.
Connectivity
I mean, obviously, there needs to be enough bandwidth, which might normally be just fine, but if you have a partner and kids also using the internet, video calls might become impossible.
Communicate with the family to work out a schedule or join the call without video so you can still participate.
Manage Expectations
Even if you’re used to being remote AND don’t have a partner / family invading your space, there is added stress in the new reality.
Be sure to manage expectations with your boss about priorities, focus, goals, project tracking, and mental health.
This will be an ongoing conversation that evolves as projects and situations evolve.
Know Thyself
Some people NEED to get ready in the morning, dress in business clothes, and work in a specific space. Some people can wake up, grab their laptop and work from the bed.
Some people NEED to get up once an hour to walk around the block. Some people are content to take a break once every other hour or more.
Some people NEED to physically be in the office around other people. Some will be totally content to work from home.
Sure, some things aren’t optional, but work with what you can.
Figure out what works for you.
Embrace #PhysicalDistance Not #SocialDistance
Remember to stay connected socially with your colleagues. Schedule a meeting without an agenda where you chat about whatever.
Come find the RDO Technical Community Liaison, leanderthal, and your other favorite collaborators on Freenode IRC on channels #rdo and #tripleo.
For that matter, don’t forget to reach out to your friends and family.
Even introverts need to maintain a certain level of connection.
Further Reading
There’s a ton of information about working remotely / distributed productivity and this is, by no means, an exhaustive list, but to get you started:

Ergonomic Essentials for Remote Working
Cornell University Ergonomics
Wikipedia.Org Time Management
You’re Taking Breaks The Wrong Way
WHO | Health Workforce Burnout
Wikipedia.Org Stress Management
Lessons from Community Leaders on Working Remote
Top 15 Tips To Effectively Manage Remote Employees

Now let’s hear from you!
What tips, tricks, and resources do you recommend to work from home, especially in this time of stress? Please add your advice in the comments below.
And, as always, thank you for being a part of the RDO community!
Source: RDO

Red Hat OpenShift 4 and Red Hat Virtualization: Together at Last

OpenShift 4 was launched not quite a year ago at Red Hat Summit 2019.  One of the more significant announcements was the ability for the installer to deploy an OpenShift cluster using full-stack automation.  This means that the administrator only needs to provide credentials to a supported Infrastructure-as-a-Service, such as AWS, and the installer provisions all of the resources needed (virtual machines, storage, networks) and integrates them together.
Over time, the full-stack automation experience has expanded to include Azure, Google Cloud Platform, and Red Hat OpenStack, allowing customers to deploy OpenShift clusters across different clouds and even on premises with the same fully automated experience.
For organizations who need enterprise virtualization, but not the API-enabled, quota enforced consumption of infrastructure provided by Red Hat OpenStack, Red Hat Virtualization (RHV) provides a robust and trusted platform to consolidate workloads and provide the resiliency, availability, and manageability of a traditional hypervisor.
When using RHV, OpenShift’s “bare metal” installation experience, where there existed no testing or integration between OpenShift and the underlying infrastructure, has been the solution so far.  But, the wait is over! OpenShift 4.4 nightly releases now offer the full-stack automation experience for RHV!

Getting started with OpenShift on RHV
As you would expect from the full-stack automation installation experience, getting started is straightforward with just a few prerequisites, listed below.  You can also use the quick start guide for more thorough and detailed instructions.

You need a RHV deployment with RHV Manager.  It doesn’t matter if you’re using a self-hosted Manager or standalone, just be sure you’re using RHV version 4.3.7.2 or later.
Until OpenShift 4.4 is generally available, you will need to download and use the nightly release of the OpenShift installer, available from https://cloud.redhat.com.
Network requirements:

DHCP is required for full-stack automated installs to assign IPs to nodes as they are created.
Identify three (3) IP addresses you can statically allocate to the cluster and create two (2) DNS entries, as below.  These are used for communicating with the cluster as well as internal DNS and API access.

An IP address for the internal-only OpenShift API endpoint
An IP address for the internal OpenShift DNS, with an external DNS record of api.clustername.basedomain for this address
An IP address for the ingress load balancer, with an external DNS record of *.apps.clustername.basedomain for this address.

Create an ovirt-config.yaml file for the credentials you want to use; this file has just four lines:

ovirt_url: https://rhv-m.host.name/ovirt-engine/api
ovirt_username: user@domain.tld
ovirt_password: password
ovirt_insecure: True

For now, the last value, “ovirt_insecure”, should be “True”.  As documented in this BZ, even if the RHV-M certificate is trusted by the client where openshift-install is executing from, that doesn’t mean that the pods deployed to OpenShift trust the certificate.  We are working on a solution to this, so please keep an eye on the BZ for when it’s been addressed!  Remember, this is tech preview :D
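
As an aside, if you handle client DNS with dnsmasq (as in the CodeReady Containers article earlier), the two records from the network requirements can be expressed as a short config fragment. This is a hypothetical sketch; substitute your own cluster name, base domain, and the IP addresses you allocated:

```
# /etc/dnsmasq.d/openshift-rhv.conf (illustrative; names and IPs are placeholders)
address=/api.clustername.basedomain/API_RECORD_IP
address=/apps.clustername.basedomain/INGRESS_IP
```

A dnsmasq address=/domain/ entry matches the domain and all of its subdomains, so the second line covers the *.apps.clustername.basedomain wildcard record.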

With the prerequisites out of the way, let’s move on to deploying OpenShift to Red Hat Virtualization!
Magic (but really automation)!
Starting the install process, as with all OpenShift 4 deployments, uses the openshift-install binary.  Once we answer the questions, the process is wholly automated and we don’t have to do anything but wait for it to complete!

# log level debug isn't necessary, but gives detailed insight to what's
# happening
# the "dir" parameter tells the installer to use the provided directory
# to store any artifacts related to the installation
[notroot@jumphost ~] openshift-install create cluster --log-level=debug --dir=orv
? SSH Public Key /home/notroot/.ssh/id_rsa.pub
? Platform ovirt
? Select the oVirt cluster Cluster2
? Select the oVirt storage domain nvme
? Select the oVirt network VLAN101
? Enter the internal API Virtual IP 10.0.101.219
? Enter the internal DNS Virtual IP 10.0.101.220
? Enter the ingress IP  10.0.101.221
? Base Domain lab.lan
? Cluster Name orv
? Pull Secret [? for help] **********************

snip snip snip

INFO Waiting up to 30m0s for the cluster at https://api.orv.lab.lan:6443 to initialize…
INFO Waiting up to 10m0s for the openshift-console route to be created…
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/notroot/orv/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.orv.lab.lan
INFO Login to the console with user: kubeadmin, password: passw-wordp-asswo-rdpas

The result, after a few minutes of waiting, is a fully functioning OpenShift cluster, ready for the final configuration to be applied, like deploying logging and monitoring, and configuring a persistent storage provider.
From a RHV perspective, the installer has created a template virtual machine, which was used to deploy all of the member nodes, regardless of role, for the OpenShift cluster.  As you saw at the end of the video, not only does the installer use this template, but the Machine API integration also makes use of it when creating new VMs when scaling the nodes.  Scaling nodes manually is as easy as one command line (oc scale --replicas=# machineset)!

Deploying OpenShift
To get started testing and trying OpenShift full-stack automated deployments to your RHV clusters, the installer can be found in the Red Hat OpenShift Cluster Manager.  For now, the full-stack automation experience on RHV is in developer preview, so please send us any feedback and questions you have via Bugzilla.  The quickest way to reach us is to use “OpenShift Container Platform” as the product, with “Installer” as the component and “OpenShift on RHV” as the sub-component.
The post Red Hat OpenShift 4 and Red Hat Virtualization: Together at Last appeared first on Red Hat OpenShift Blog.
Source: OpenShift

On Working Remotely: An Automattic Reader

How does a distributed company — a group of people with shared business goals but spread out around the world, representing different cultures, family settings, and local health considerations — stick together during a major health crisis like the COVID-19 pandemic?

We don’t intend to make it sound easy. And we are aware — from our families, our communities, the businesses we support, and our customers — that many, if not most companies cannot actually work 100 percent remotely because of the nature of their business.

For those who can transition to distributed work in the wake of this evolving crisis, we wanted to suggest ideas that might help colleagues work well together even when you’re no longer all sharing the same physical space.

We’re lucky that many Automatticians have shared advice and best practices based on their many years of working from home — and we’ve compiled some of these resources below to empower others to listen to and support their coworkers during a difficult and disruptive time.

Erin ‘Foletto’ Casali, Jetpack Design Operations Wrangler, offers a detailed read on setting up your remote work strategy for companies and individuals. (Note: Notion listed Erin’s piece as one of the best remote work guides on its wiki.)
Cate Huston, who leads Automattic’s Developer Experience team, led a “Crash Course in Remote Management” webinar, presented with Vaya Consulting.
Lori McLeese, Global Head of Human Resources, shared distributed best practices in a Q&A with True Ventures.
Simon Ouderkirk, Jetpack Data Wrangler, focused on the value of connection in his post, “Phatic Communication, or Talk for the Sake of Talking.”
Beau Lebens, WooCommerce Engineering Lead, posted concepts and a snapshot of a day in the distributed work life.
Marcus Kazmierczak, a Special Projects Principal Engineer, wrote about the keys to effective asynchronous communication.
Aaron Douglas, the WordPress iOS App team lead, shared some thoughts on staying mindful during video calls.
James Huff, Happiness Engineer, published his recommendations from 10 years of working for Automattic.
Artur Piszek, who leads the Earn team, came up with a primer and four pillars for remote work.
Sara Rosso, Director of Marketing, wrote on the importance of remote meetups, especially when travel for in-real-life meetups is all but impossible. (Bonus from Sara: three essential skills.)
Cate Huston again, this time on fixing five common pain points of working at home. (Note: this post is email-gated.)
Jeff Pearce, WordPress.org Creative Technologist, shared about the importance of morning routines.
Sasha Stone, Happiness Engineer, focuses on optimizing distributed life for self-care.
Marjorie Asturias, Partnerships Wrangler, came up with five tips for working from home, which she shared on Fiverr’s blog.
Erin Casali again, this time with some timeless tips from 2015, on setting processes and choosing tools for collaboration.

Of course, from his first post on remote work to his most recent one reflecting on the COVID-19 pandemic, to his Distributed podcast and beyond, founder and CEO Matt Mullenweg is a prominent voice on remote work and distributed culture. To send you off on a lighter note, Matt published his first “What’s In My Bag” post in 2014 and has done it again several times since.

We hope these resources are helpful to you during these trying times, and that you and everyone in your communities stay safe.
Quelle: RedHat Stack

Red Hat OpenShift Installation Process Experiences on IBM Z/LinuxONE

Red Hat OpenShift Installation Process Experiences on IBM Z/LinuxONE

OpenShift stands out as a leader with a security-focused, supported Kubernetes platform—including a foundation based on Red Hat Enterprise Linux.
But we already knew all that. The game changer for OpenShift is the release of OCP version 4.x: OpenShift 4 is powered by Kubernetes Operators and Red Hat’s commitment to full-stack security, so you can develop and scale big ideas for the enterprise.
OpenShift started on distributed systems, was later extended to IBM Power Systems, and is now available on IBM Z. This creates a seamless user experience across major architectures such as x86, PPC, and s390x!
This article’s goal is to share my experience on how to install the OpenShift Container Platform (OCP) 4.2.19 on IBM Z. We will use the minimum requirements to get our environment up and running. That said, for production or performance testing use the recommended hardware configuration from the official Red Hat documentation. The minimum machine requirements for a cluster with user-provisioned infrastructure are as follows:
The smallest OpenShift Container Platform clusters require the following hosts:
* One temporary bootstrap machine.
* Three control plane, or master, machines.
* At least two compute, or worker, machines.
The bootstrap, control plane (often called masters), and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system.
All the RHCOS machines require networking to be configured in initramfs during boot so they can fetch their Ignition config files from the Machine Config Server. The machines are configured with static IP addresses; no DHCP server is required.
To install on IBM Z under z/VM, we require a single z/VM virtual NIC in layer 2 mode. You also need:
* A direct-attached OSA
* A z/VM VSwitch set up.
Minimum Resource Requirements
Each cluster machine must meet the following minimum requirements, in our case, these are the resource requirements for the VMs on IBM z/VM:

For our testing purposes (and resource limitations) we used a DASD model 54 for each node instead of the 120 GB recommended by the official Red Hat documentation.
Make sure to install OpenShift Container Platform version 4.2 on one of the following IBM hardware platforms:

IBM z13, z14, or z15.
LinuxONE, any version.

Hardware Requirements

1 LPAR with 3 IFLs that supports SMT2.
1 OSA or RoCE network adapter.

Operating System Requirements

One instance of z/VM 7.1.

This is the environment we created to install the OpenShift Container Platform following the minimum resource requirements. Keep in mind that other services are required in this environment, and you can run them either on Z or provide them to the Z box from outside: DNS (name resolution), HAProxy (our load balancer), a workstation (the client system where we run the CLI commands for OCP), and HTTPd (serving files such as the Red Hat CoreOS image and the Ignition files generated in later sections of this guide):

Network Topology Requirements
Before you install OpenShift Container Platform, you must provision two layer-4 load balancers. The API requires one load balancer and the default Ingress Controller needs the second load balancer to provide ingress to applications. In our case, we used a single instance of HAProxy running on a Red Hat Enterprise Linux 8 VM as our load balancer.
The following HAProxy configuration provides the load balancer layer for our purposes. Edit /etc/haproxy/haproxy.cfg and add:
listen ingress-http

bind *:80
mode tcp

server worker0 :80 check
server worker1 :80 check

listen ingress-https

bind *:443
mode tcp

server worker0 :443 check
server worker1 :443 check

listen api

bind *:6443
mode tcp

server bootstrap :6443 check
server master0 :6443 check
server master1 :6443 check
server master2 :6443 check

listen api-int

bind *:22623
mode tcp

server bootstrap :22623 check
server master0 :22623 check
server master1 :22623 check

server master2 :22623 check

Don’t forget to open the respective ports on the system’s firewall as well as set the SELinux boolean as follows:
# firewall-cmd --add-port=443/tcp
# firewall-cmd --add-port=443/tcp --permanent

# firewall-cmd --add-port=80/tcp
# firewall-cmd --add-port=80/tcp --permanent

# firewall-cmd --add-port=6443/tcp
# firewall-cmd --add-port=6443/tcp --permanent

# firewall-cmd --add-port=22623/tcp
# firewall-cmd --add-port=22623/tcp --permanent

# setsebool -P haproxy_connection_any 1

The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file.
Required DNS Records:
api.<cluster_name>.<base_domain>.

This DNS record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
api-int.<cluster_name>.<base_domain>.

This DNS record must point to the load balancer for the control plane machines. This record must be resolvable from all the nodes within the cluster.
The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If it cannot resolve the node names, proxied API calls can fail, and you cannot retrieve logs from Pods.
*.apps.<cluster_name>.<base_domain>.

A wildcard DNS record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
etcd-<index>.<cluster_name>.<base_domain>.

OpenShift Container Platform requires DNS records for each etcd instance to point to the control plane machines that host the instances. The etcd instances are differentiated by index values, which start with 0 and end with n-1, where n is the number of control plane machines in the cluster. The DNS record must resolve to a unicast IPv4 address for the control plane machine, and the records must be resolvable from all the nodes in the cluster.
_etcd-server-ssl._tcp.<cluster_name>.<base_domain>.

For each control plane machine, OpenShift Container Platform also requires a SRV DNS record for etcd server on that machine with priority 0, weight 10 and port 2380. A cluster that uses three control plane machines requires the following records:
# _service._proto.name. TTL class SRV priority weight port target.
_etcd-server-ssl._tcp… 86400 IN SRV 0 10 2380 etcd-0…
_etcd-server-ssl._tcp… 86400 IN SRV 0 10 2380 etcd-1…
_etcd-server-ssl._tcp… 86400 IN SRV 0 10 2380 etcd-2…

As a summary, this is how our DNS records defined in our domain zone would look like when using Bind as my DNS server :
$TTL 86400
@ IN SOA .. admin.. (
2020021813 ;Serial
3600 ;Refresh
1800 ;Retry
604800 ;Expire
86400 ;Minimum TTL
)

;Name Server Information
@ IN NS ..

;IP Address for Name Server
IN A

;A Record for the following Host name

haproxy IN A
bootstrap IN A
master0 IN A
master1 IN A
master2 IN A
workstation IN A

compute0 IN A
compute1 IN A

etcd-0. IN A
etcd-1. IN A
etcd-2. IN A

;CNAME Record

api. IN CNAME haproxy..
api-int. IN CNAME haproxy..
*.apps. IN CNAME haproxy..

_etcd-server-ssl._tcp… 86400 IN SRV 0 10 2380 etcd-0…
_etcd-server-ssl._tcp… 86400 IN SRV 0 10 2380 etcd-1…
_etcd-server-ssl._tcp… 86400 IN SRV 0 10 2380 etcd-2…
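As a cross-check before moving on, the full set of required forward names can be enumerated programmatically. The sketch below uses a hypothetical cluster name ocp4 and base domain example.com purely for illustration:

```python
# Sketch: enumerate the DNS names an OpenShift UPI cluster requires,
# given a cluster name and base domain (values here are illustrative).

def required_records(cluster, domain, n_masters=3):
    zone = f"{cluster}.{domain}"
    records = [
        f"api.{zone}.",        # external + internal API load balancer
        f"api-int.{zone}.",    # internal API load balancer
        f"*.apps.{zone}.",     # wildcard for the ingress router
    ]
    # One A record per etcd instance, indexed 0 .. n-1.
    records += [f"etcd-{i}.{zone}." for i in range(n_masters)]
    # SRV record name for the etcd server endpoints.
    records += [f"_etcd-server-ssl._tcp.{zone}."]
    return records

recs = required_records("ocp4", "example.com")
print("\n".join(recs))
```

Checking each of these names with dig against your Bind server before starting the install saves a failed bootstrap later.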

Don’t forget to create the reverse records for your zone as well. Here is an example of how we set up ours:
$TTL 86400
@ IN SOA .. admin.. (
2020021813 ;Serial
3600 ;Refresh
1800 ;Retry
604800 ;Expire
86400 ;Minimum TTL
)
;Name Server Information
@ IN NS ..
IN A

;Reverse lookup for Name Server
IN PTR ..

;PTR Record IP address to Hostname
IN PTR haproxy..
IN PTR bootstrap..
IN PTR master0..
IN PTR master1..
IN PTR master2..
IN PTR compute0..
IN PTR compute1..
IN PTR workstation..

Where the value at the start of each record is the last octet of that host's IP address.
Make sure that your Bind9 DNS server also provides access to the outside world (that is, Internet access) by using the forwarders parameter in the options section of your /etc/named.conf:
options {
// listen-on port 53 { 127.0.0.1; };
// listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
secroots-file "/var/named/data/named.secroots";
recursing-file "/var/named/data/named.recursing";
allow-query { localhost; ; };
forwarders { ; };
};

For the sections Generating an SSH private key and Installing the CLI as well as Manually Creating the installation configuration files, we used the Workstation VM using RHEL8.
Generating an SSH private key and adding it to the agent
In our case, we used a Linux workstation as the base system outside of the OCP cluster. The next steps were done in this system.
If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and to the installation program.
If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name>

Then access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files:

https://…/openshift-v4/s390x/clients/ocp/latest/openshift-install-linux-4.2.18.tar.gz

Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf .tar.gz

From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Installing the CLI
You can install the CLI in order to interact with OpenShift Container Platform using a command-line interface.
From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate to the page for your installation type and click Download Command-line Tools.
Click the folder for your operating system and architecture and click the compressed file.
– Save the file to your file system.

https://…/openshift-v4/s390x/clients/ocp/latest/openshift-client-linux-4.2.18.tar.gz

– Extract the compressed file.
– Place it in a directory that is on your PATH.
After you install the CLI, it is available using the oc command:
$ oc <command>

Manually creating the installation configuration file
For installations of OpenShift Container Platform that use user-provisioned infrastructure, you must manually generate your installation configuration file.
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>

Customize the following install-config.yaml file template and save it in the <installation_directory>.
Sample install-config.yaml file for bare metal
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters. For IBM Z, please make sure to add architecture: s390x for both compute and controlPlane nodes or the config-cluster.yaml file will be generated with AMD64.
apiVersion: v1
baseDomain:
compute:
- architecture: s390x
  hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  architecture: s390x
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: ''
sshKey: ''
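Since a wrong architecture or a nonzero compute replica count only surfaces much later as a broken cluster, a quick sanity check of these fields is worthwhile. Below is a minimal Python sketch (not part of the installer) that validates the IBM Z-specific values on a plain dict standing in for the parsed YAML:

```python
# Sketch: check the install-config.yaml fields that commonly bite on
# IBM Z UPI installs, using a dict in place of the parsed YAML.

def validate_install_config(cfg):
    problems = []
    if cfg["controlPlane"].get("architecture") != "s390x":
        problems.append("controlPlane.architecture must be s390x")
    if cfg["controlPlane"].get("replicas") != 3:
        problems.append("controlPlane.replicas must be 3")
    for pool in cfg["compute"]:
        if pool.get("architecture") != "s390x":
            problems.append("compute.architecture must be s390x")
        if pool.get("replicas") != 0:
            problems.append("compute.replicas must be 0 for UPI")
    return problems

sample = {
    "controlPlane": {"architecture": "s390x", "replicas": 3},
    "compute": [{"architecture": "s390x", "replicas": 0}],
}
print(validate_install_config(sample))  # an empty list means the checks pass
```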

Creating the Kubernetes manifest and Ignition config files
Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.
Generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir=<installation_directory>

WARNING There are no compute nodes specified. The cluster will not fully initialize without compute nodes.
INFO Consuming “Install Config” from target directory
Modify the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file to prevent Pods from being scheduled on the control plane machines:
1. Open the manifests/cluster-scheduler-02-config.yml file.
2. Locate the mastersSchedulable parameter and set its value to false.
3. Save and exit the file.
Create the Ignition config files:
$ ./openshift-install create ignition-configs --dir=<installation_directory>

The following files are generated in the directory:
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
Copy the files master.ign, worker.ign, and bootstrap.ign to the HTTPD node, where you should have configured an HTTP server (Apache) to serve these files during the creation of the Red Hat Enterprise Linux CoreOS VMs.
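Before serving the files, it is worth confirming that each Ignition file is intact JSON, since a truncated copy produces cryptic boot failures much later. A small sketch, using a stand-in temporary file rather than a real bootstrap.ign:

```python
# Sketch: sanity-check generated Ignition files before copying them to
# the HTTP server; each must parse as JSON with an "ignition" section.
import json
import pathlib
import tempfile

def check_ignition(path):
    """Return True if the file is valid JSON with an ignition section."""
    data = json.loads(pathlib.Path(path).read_text())
    return "ignition" in data

# Illustrative stand-in for a real bootstrap.ign:
tmp = tempfile.NamedTemporaryFile("w", suffix=".ign", delete=False)
json.dump({"ignition": {"version": "2.2.0"}}, tmp)
tmp.close()
print(check_ignition(tmp.name))
```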
Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines
Download the Red Hat Enterprise Linux CoreOS installation files from the RHCOS image mirror
Download the following files:
* The initramfs: rhcos-<version>-installer-initramfs.img
* The kernel: rhcos-<version>-installer-kernel
* The operating system image for the disk on which you want to install RHCOS. This type can differ by virtual machine:
* rhcos-<version>-s390x-metal-dasd.raw.gz for DASD (we used the DASD version)
Create parameter files. The following parameters are specific to a particular virtual machine:
* For coreos.inst.install_dev=, specify dasda for a DASD installation.
* For rd.dasd=, specify the DASD where RHCOS is to be installed.
The bootstrap machine parameter file is called bootstrap-0.parm; the master parameter files are numbered 0 through 2, and the worker parameter files from 0 upwards. All other parameters can stay as they are.
Example parameter file we used on our environment, bootstrap-0.parm, for the bootstrap machine:
rd.neednet=1 coreos.inst=yes
coreos.inst.install_dev=
coreos.inst.image_url=http:///rhcos-4.2.18.raw.gz
coreos.inst.ignition_url=http:///bootstrap.ign
vlan=eth0.<1110>:
ip=:::::eth0.<1110>:off
nameserver=
rd.znet=qeth,<0.0.1f00>,<0.0.1f01>,<0.0.1f02>,layer2=1,portno=0
cio_ignore=all,!condev
rd.dasd=<0.0.0202>

Where = physical interface, = virtual interface alias for enc1e00 and <1100> = vlan ID
Note that for your environment the rd.znet=, rd.dasd=, coreos.inst.install_dev=, will all be different for you.
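Because only a handful of values change from node to node, the parameter files can be generated from a template. The sketch below uses made-up IP addresses together with the device numbers from our example; treat all of them as placeholders for your own environment:

```python
# Sketch: render a per-node .parm file from a template. Device numbers,
# addresses, and the VLAN ID are illustrative placeholders.

PARM_TEMPLATE = (
    "rd.neednet=1 coreos.inst=yes "
    "coreos.inst.install_dev=dasda "
    "coreos.inst.image_url=http://{http_server}/rhcos-4.2.18.raw.gz "
    "coreos.inst.ignition_url=http://{http_server}/{ignition} "
    "ip={ip}::{gateway}:{netmask}:{hostname}:eth0.1110:off "
    "nameserver={dns} "
    "rd.znet=qeth,0.0.1f00,0.0.1f01,0.0.1f02,layer2=1,portno=0 "
    "cio_ignore=all,!condev rd.dasd=0.0.0202"
)

def parm_file(hostname, ip, ignition, http_server="192.168.0.5",
              gateway="192.168.0.1", netmask="255.255.255.0",
              dns="192.168.0.2"):
    """Fill the template for one cluster node."""
    return PARM_TEMPLATE.format(**locals())

parm = parm_file("bootstrap", "192.168.0.10", "bootstrap.ign")
print(parm)
```

Generating master0-2 and the workers is then just a loop over hostnames, IPs, and Ignition file names.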
Each VM on z/VM requires access to the initramfs, kernel, and parameter (.parm) files on its internal disk. We used a common approach: create a VM whose internal disk serves as a repository for all these files, and give all the other VMs in the cluster (bootstrap, master0, master1, ... worker1) access to this repository VM's disk (often in read-only mode). This saves disk space, as these files are only used in the first stage of the process to load each cluster VM's files into the server's memory. Each cluster VM has a dedicated disk for RHCOS, which is a completely separate disk (as previously covered, the model 54 ones).
Transfer the initramfs, kernel and all parameter (.parm) files to the repository VM’s local A disk on z/VM from an external FTP server:
==> ftp <VM_REPOSITORY_IP>

VM TCP/IP FTP Level 710

Connecting to <VM_REPOSITORY_IP>, port 21
220 (vsFTPd 3.0.2)
USER (identify yourself to the host):
>>>USER <username>
331 Please specify the password.
Password:
>>>PASS ********
230 Login successful.
Command:
cd <repositoryofimages>
ascii
get <parmfile_bootstrap>.parm
get <parmfile_master>.parm
get <parmfile_worker>.parm
locsite fix 80
binary
get <kernel_image>.img
get <initramfs_file>

Example of the VM definition (userid=LNXDB030) for the bootstrap VM on IBM z/VM for this installation:
USER LNXDB030 LBYONLY 16G 32G
INCLUDE DFLT
COMMAND DEFINE STORAGE 16G STANDBY 16G
COMMAND DEFINE VFB-512 AS 0101 BLK 524288
COMMAND DEFINE VFB-512 AS 0102 BLK 524288
COMMAND DEFINE VFB-512 AS 0103 BLK 524288
COMMAND DEFINE NIC 1E00 TYPE QDIO
COMMAND COUPLE 1E00 SYSTEM VSWITCHG
CPU 00 BASE
CPU 01
CPU 02
CPU 03
MACHINE ESA 8
OPTION APPLMON CHPIDV ONE
POSIXINFO UID 100533
MDISK 0191 3390 436 50 USAW01
MDISK 0201 3390 1 END LXDBC0

Where USER LNXDB030 LBYONLY 16G 32G is the userid, password, and memory definition; COMMAND DEFINE VFB-512 AS 0101 BLK 524288 is the swap definition; COMMAND DEFINE NIC 1E00 TYPE QDIO is the NIC definition; COMMAND COUPLE 1E00 SYSTEM VSWITCHG couples the NIC to the vswitch; MDISK 0191 3390 436 50 USAW01 is where you put the EXEC to run; and MDISK 0201 3390 1 END LXDBC0 is the mod54 minidisk for RHCOS.
Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node.
Log in to CMS on the bootstrap machine.
IPL CMS

Create the exec file to punch the other files (kernel, parm file, initramfs) to start the linux installation on each linux servers part of Openshift cluster using the mdisk 191, this example shows the bootstrap exec file:
/* EXAMPLE EXEC FOR OC LINUX INSTALLATION */

TRACE O
'CP SP CON START CL A *'
'EXEC VMLINK MNT3 191 <1191 Z>'
'CL RDR'
'CP PUR RDR ALL'
'CP SP PU * RDR CLOSE'
'PUN KERNEL IMG Z (NOH'
'PUN BOOTSTRAP PARM Z (NOH'
'PUN INITRAMFS IMG Z (NOH'
'CH RDR ALL KEEP NOHOLD'
'CP IPL 00C'

The line EXEC VMLINK MNT3 191 shows that the disk from the repository VM will be linked to this VM's EXEC process, making the files we already transferred to the repository VM's local disk available to the VM where this EXEC file is run, for example the bootstrap VM.
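Since the EXEC differs between nodes only in the PARM file it punches, it can be templated rather than retyped per machine. A hypothetical Python sketch that renders the same EXEC body per node:

```python
# Sketch: render the punch EXEC for each cluster node; only the PARM
# file name (matching the node) changes between them.

def punch_exec(node):
    """Return the EXEC contents that punch kernel, parm, and initramfs."""
    return "\n".join([
        "/* EXEC TO PUNCH INSTALL FILES FOR {0} */".format(node.upper()),
        "TRACE O",
        "'CP SP CON START CL A *'",
        "'EXEC VMLINK MNT3 191 <1191 Z>'",
        "'CL RDR'",
        "'CP PUR RDR ALL'",
        "'CP SP PU * RDR CLOSE'",
        "'PUN KERNEL IMG Z (NOH'",
        "'PUN {0} PARM Z (NOH'".format(node.upper()),
        "'PUN INITRAMFS IMG Z (NOH'",
        "'CH RDR ALL KEEP NOHOLD'",
        "'CP IPL 00C'",
    ])

print(punch_exec("master0"))
```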
Call the EXEC file to start the bootstrap installation process
<BOOTSTRAP> EXEC

Once the installation of Red Hat CoreOS finishes, make sure to re-IPL this VM so it loads the Linux OS from its internal DASD:
#CP IPL 201

Then you will see RHCOS loading from its internal model 54 DASD disk:
Red Hat Enterprise Linux CoreOS 42s390x.81.20200131.0 (Ootpa) 4.2
SSH host key: <SHA256key>
SSH host key: <SHA256key>
SSH host key: <SHA256key>
eth0.1100: <ipaddress> fe80::3ff:fe00:9a
bootstrap login:

Repeat this procedure for the other machines in the cluster, which means applying the same steps for creating the Red Hat Enterprise Linux CoreOS with the respective changes to master0, master1, master2, compute0 and compute1.
Make sure to include IPL 201 in the VM definitions so that whenever the VM starts, it will automatically IPL the 201 disk (RHCOS). Example:
USER LNXDB030 LBYONLY 16G 32G
INCLUDE DFLT
COMMAND DEFINE STORAGE 16G STANDBY 16G
COMMAND DEFINE VFB-512 AS 0101 BLK 524288
COMMAND DEFINE VFB-512 AS 0102 BLK 524288
COMMAND DEFINE VFB-512 AS 0103 BLK 524288
COMMAND DEFINE NIC 1E00 TYPE QDIO
COMMAND COUPLE 1E00 SYSTEM VSWITCHG
CPU 00 BASE
CPU 01
CPU 02
CPU 03
IPL 201
MACHINE ESA 8
OPTION APPLMON CHPIDV ONE
POSIXINFO UID 100533
MDISK 0191 3390 436 50 USAW01
MDISK 0201 3390 1 END LXDBC0

Creating the cluster
To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.
Monitor the bootstrap process:
$ ./openshift-install --dir=<installation_directory> wait-for bootstrap-complete --log-level=debug

After the bootstrap process is complete, remove the bootstrap machine from the load balancer.
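Editing haproxy.cfg by hand works, but the removal can also be scripted so it is repeatable. A small Python sketch, operating on illustrative config content rather than our real file:

```python
# Sketch: drop the bootstrap backend lines from an haproxy.cfg once the
# bootstrap process completes (the config text here is illustrative).

def remove_bootstrap(cfg_text):
    """Return the config with any 'server bootstrap ...' lines removed."""
    return "\n".join(
        line for line in cfg_text.splitlines()
        if not line.strip().startswith("server bootstrap")
    )

cfg = """listen api
    bind *:6443
    mode tcp
    server bootstrap 192.168.0.10:6443 check
    server master0 192.168.0.11:6443 check"""
print(remove_bootstrap(cfg))
```

Remember to reload HAProxy after changing the file so the old backend stops receiving traffic.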
Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during
OpenShift Container Platform installation.
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin

Review the pending certificate signing requests (CSRs) and ensure that you see a client and server request with Pending or Approved status for each machine that you added to the cluster:
$ oc get csr

NAME AGE REQUESTOR CONDITION
csr-2qwv8 106m system:node:worker1. Approved,Issued
csr-2sjrr 61m system:node:worker1. Approved,Issued
csr-5s2rd 30m system:node:worker1. Approved,Issued
csr-9v5wz 15m system:node:worker1. Approved,Issued
csr-cffn6 127m system:servi…:node-bootstrapper Approved,Issued
csr-lmlsj 46m system:node:worker1. Approved,Issued
csr-qhwd8 76m system:node:worker1. Approved,Issued
csr-zz2z7 91m system:node:worker1. Approved,Issued
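If some requests stay Pending, they must be approved before the nodes can join; the oc adm certificate approve command handles that per CSR. The sketch below shows how the Pending names could be picked out of oc get csr -o json style output; the sample data is made up, not real cluster output:

```python
# Sketch: filter the CSRs that are still awaiting approval from
# `oc get csr -o json`-shaped data. In practice the resulting names
# would be passed to `oc adm certificate approve`.

def pending_csrs(csr_list):
    """Return names of CSRs that carry no Approved condition yet."""
    names = []
    for item in csr_list["items"]:
        conditions = item.get("status", {}).get("conditions", [])
        if not any(c.get("type") == "Approved" for c in conditions):
            names.append(item["metadata"]["name"])
    return names

sample = {"items": [
    {"metadata": {"name": "csr-2qwv8"},
     "status": {"conditions": [{"type": "Approved"}]}},
    {"metadata": {"name": "csr-n3wwx"}, "status": {}},
]}
print(pending_csrs(sample))
```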

Check if all the nodes are Ready and healthy:
$ oc get nodes

NAME STATUS ROLES AGE VERSION
master0. Ready master 3d3h v1.14.6+c383847f6
master1. Ready master 3d3h v1.14.6+c383847f6
master2. Ready master 3d3h v1.14.6+c383847f6
worker0. Ready worker 3d3h v1.14.6+c383847f6
worker1. Ready worker 3d3h v1.14.6+c383847f6

Initial Operator configuration
After the control plane initializes, you must immediately configure some Operators so that they all become available.
Watch the cluster components come online (wait until all show True in the AVAILABLE column):
$ watch -n5 oc get clusteroperators

NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
authentication 4.2.0 True False False 69s
cloud-credential 4.2.0 True False False 12m
cluster-autoscaler 4.2.0 True False False 11m
console 4.2.0 True False False 46s
dns 4.2.0 True False False 11m
image-registry 4.2.0 False True False 5m26s
ingress 4.2.0 True False False 5m36s
kube-apiserver 4.2.0 True False False 8m53s
kube-controller-manag 4.2.0 True False False 7m24s
kube-scheduler 4.2.0 True False False 12m
machine-api 4.2.0 True False False 12m
machine-config 4.2.0 True False False 7m36s
marketplace 4.2.0 True False False 7m54m
monitoring 4.2.0 True False False 7h54s
network 4.2.0 True False False 5m9s
node-tuning 4.2.0 True False False 11m
openshift-apiserver 4.2.0 True False False 11m
openshift-controller- 4.2.0 True False False 5m43s
openshift-samples 4.2.0 True False False 3m55s
operator-lifecycle-man 4.2.0 True False False 11m
operator-lifecycle-ma 4.2.0 True False False 11m
service-ca 4.2.0 True False False 11m
service-catalog-apiser 4.2.0 True False False 5m26s
service-catalog-contro 4.2.0 True False False 5m25s
storage 4.2.0 True False False 5m30s

You will notice that the image-registry Operator shows False. To fix this, patch it to use an emptyDir for storage:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Reference: https://docs.openshift.com/container-platform/4.2/installing/installing_ibm_z/installing-ibm-z.html#installation-registry-storage-config_installing-ibm-z
Once the config is patched, the Operator automatically reconciles the image-registry to that state.
This is how the output of $ oc get co (short for clusteroperators) should look:
$ oc get clusteroperators

NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
authentication 4.2.0 True False False 69s
cloud-credential 4.2.0 True False False 12m
cluster-autoscaler 4.2.0 True False False 11m
console 4.2.0 True False False 46s
dns 4.2.0 True False False 11m
image-registry 4.2.0 True False False 1ms
ingress 4.2.0 True False False 5m36s
kube-apiserver 4.2.0 True False False 8m53s
kube-controller-manag 4.2.0 True False False 7m24s
kube-scheduler 4.2.0 True False False 12m
machine-api 4.2.0 True False False 12m
machine-config 4.2.0 True False False 7m36s
marketplace 4.2.0 True False False 7m54m
monitoring 4.2.0 True False False 7h54s
network 4.2.0 True False False 5m9s
node-tuning 4.2.0 True False False 11m
openshift-apiserver 4.2.0 True False False 11m
openshift-controller- 4.2.0 True False False 5m43s
openshift-samples 4.2.0 True False False 3m55s
operator-lifecycle-man 4.2.0 True False False 11m
operator-lifecycle-ma 4.2.0 True False False 11m
service-ca 4.2.0 True False False 11m
service-catalog-apiser 4.2.0 True False False 5m26s
service-catalog-contro 4.2.0 True False False 5m25s
storage 4.2.0 True False False 5m30s

Monitor for cluster completion:
$ ./openshift-install --dir=<installation_directory> wait-for install-complete
INFO Waiting up to 30m0s for the cluster to initialize…

The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server.
INFO Waiting up to 30m0s for the cluster at https://api..:6443 to initialize…
INFO Waiting up to 10m0s for the openshift-console route to be created…
INFO Install complete!
INFO To access the cluster as the system:admin user when using ‘oc’, run ‘export KUBECONFIG=/root//auth/kubeconfig’
INFO Access the OpenShift web-console here: https://console-openshift-console.apps..
INFO Login to the console with user: kubeadmin, password: 3cXGD-Mb9CC-hgAN8-7S9YG

Login using a web browser: http://console-openshift-console.apps..
This article only covers the installation process. For day-2 operations, keep in mind that no storage was configured for persistent workloads; I will cover that process in my next article. For now, Red Hat OpenShift 4 is ready to be explored, and the following video helps familiarize you with the graphical user interface from the developer perspective:
Youtube Developer video: https://www.youtube.com/watch?v=opdrYhIjqrg&feature=youtu.be
References:
Official Red Hat OpenShift Documentation:

https://docs.openshift.com/container-platform/4.2/installing/installing_ibm_z/installing-ibm-z.html

Key people that collaborated with this article:
Alexandre de Oliveira, Edi Lopes Alves, Alex Souza, Adam Young, Apostolos Dedes (Toly) and Russ Popeil
Filipe Miranda is a Senior Solutions Architect at Red Hat. The views expressed in this article are his alone, and he is responsible for the information provided in the article.
The post Red Hat OpenShift Installation Process Experiences on IBM Z/LinuxONE appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

OpenShift Commons Briefing: Workload Consistency During Ceph Updates and Adding New Storage Devices with Red Hat’s Sagy Volkov

This is the second briefing of the “All Things Data” series of OpenShift Commons briefings. Future briefings are Tuesdays at 8:00am PST, so reach out with any topics you’re interested in and remember to bookmark the OpenShift Commons Briefing calendar!
In this second briefing for the “All Things Data” OpenShift Commons series, Red Hat’s Sagy Volkov gave a live demonstration of an OpenShift workload remaining online and running while Ceph storage updates and additions were being performed. This workload resilience and consistency during storage updates and additions is crucial to maintaining highly available applications in your OpenShift clusters.
Additional Resources:
OpenShift Container Storage: openshift.com/storage
Product Documentation for Red Hat OpenShift Container Storage 4.2
Feedback:
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
 
The post OpenShift Commons Briefing: Workload Consistency During Ceph Updates and Adding New Storage Devices with Red Hat’s Sagy Volkov appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

WPBlockTalk: A Free Online Event Focused on the Block Editor

Ready to explore the possibilities with the block editor? WPBlockTalk is a free and live virtual event that will bring together designers, developers, and other WordPress enthusiasts from across the WordPress community.

Topics to expect:

Building the block editor: what it takes to develop the block editor, what features are on the roadmap, and how you can contribute
Developing blocks: inspiration and ideas for developing your own custom blocks
Designing with blocks: learn more about using blocks to make powerful and versatile layouts and templates

If you’re passionate and curious about the future of WordPress, then this April 2 event is for you!

If you’re busy that day, don’t worry — all the talks will also be published on WordPress.tv for you to watch (and re-watch) whenever you like.

In the meantime, join the WPBlockTalk email list for registration details, speaker and schedule updates, and more. We look forward to seeing you online!
Quelle: RedHat Stack