OpenShift Commons Gathering in Milan 2019 – Recap [Slides]

The first Italian OpenShift Commons Gathering
drew over 300 participants to Milan!
 
On September 18th, 2019, the first OpenShift Commons Gathering Milan brought together over 300 experts to discuss container technologies, operators, the operator framework and the open source software projects that support the OpenShift ecosystem. This was the first OpenShift Commons Gathering to take place in Italy.
The standing-room-only event hosted 11 talks in a whirlwind day of discussions. Of particular interest to the community was Christian Glombek’s presentation updating the status and roadmap for OKD4 and Fedora CoreOS.
Highlights from the Gathering included an OpenShift 4 roadmap update, customer stories from Amadeus, the leading travel technology company, and local stories from Poste Italiane and SIA S.p.A. In addition to the technical updates and customer talks, there was plenty of time to network during the breaks and enjoy the famous Italian coffee.
Here are the slides from the event:
(Please note: edited videos will be uploaded to YouTube soon.)

9:30 a.m.
Welcome to the Commons: Collaboration in Action
Diane Mueller (Red Hat)
Slides
Video

9:50 a.m.
Red Hat’s Unified Hybrid Cloud Vision
Brian Gracely (Red Hat)
Slides
Video

10:30 a.m.
OpenShift 4.1 Release Update and Road Map
William Markito Oliveira (Red Hat)  |  Christopher Blum (Red Hat)
Slides
Video

11:30 a.m.
Customer Keynote: OpenShift @ Amadeus
Salvatore Dario Minonne (Amadeus)
Slides
Video

12:00 p.m.
State of the Operators: Framework, SDKs, Hubs and beyond
Guil Barros (Red Hat)
Slides
Video

12:30 p.m.
Update on OKD4 and Fedora CoreOS
Christian Glombek (Red Hat)
Slides
Video

2:00 p.m.
OpenShift Managed su Azure
Marco D’Angelo (Microsoft)
Slides
Video

2:30 p.m.
Open Banking with Microservices Architectures and Apache Kafka on OpenShift
Paolo Gigante (Poste Italiane) | Pierluigi Sforza (Poste Italiane) | Paolo Patierno (Red Hat)
Slides
Video

3:00 p.m.
State of Serverless/Service Mesh
Giuseppe Bonocore (Red Hat) | William Markito Oliveira (Red Hat)
Slides
Video

4:15 p.m.
Case Study: OpenShift @ SIA
Nicola Nicolotti (SIA S.p.A.) | Matteo Combi (SIA S.p.A.)
Slides
Video

4:45 p.m.
State of Cloud Native Storage
Christopher Blum (Red Hat)
Slides
Video

5:10 p.m.
AMA panel
Engineers & Product Managers (Red Hat OpenShift) + customer
 N/A
Video

5:30 p.m.
Road Ahead at OpenShift Wrap-Up
Diane Mueller & Tanja Repo (Red Hat)
Slides
Video

 
To stay up to date on all the latest releases and events, please join OpenShift Commons and sign up for our mailing lists & Slack channel.
 
What is OpenShift Commons?
Commons builds connections and collaboration across OpenShift communities, projects, and stakeholders. In doing so we enable the success of customers, users, partners and contributors as we deepen our knowledge and experiences together.
Our goals go beyond code contributions. Commons is a place for companies using OpenShift to accelerate its success and adoption. To do this we’ll act as resources for each other, share best practices and provide a forum for peer-to-peer communication.
Join OpenShift Commons today!
 
Join us in the upcoming Commons Gatherings!
The OpenShift Commons Gatherings continue – please join us next time at:

October 28, 2019 in San Francisco, California – event is co-located with ODSC/West
November 18, 2019 in San Diego, California –  event is co-located with Kubecon/NA

 
The post OpenShift Commons Gathering in Milan 2019 – Recap [Slides] appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

International airline Etihad Airways delivers upscale flight booking with solution built on IBM Cloud

The airline industry is going through a huge transformation, both in how it provides customer service and in finding new ways to deliver a better travel experience.
Etihad Airways is the national airline of the United Arab Emirates and hospitality is a key part of the culture. Excelling in customer service is important and having a technology platform that enables a friendly user experience is what we want to achieve and what our travelers have come to expect.
Creating improved flight booking to help travelers Choose Well
The first touch point we chose to address is traveler check-in. How can we create a technology solution that provides the same fast and consistent experience across all touch points, digital and physical? What this means is that whether a traveler starts the flight booking on an iPad, then moves to a mobile phone, then goes to the airport, we want them to have a consistent, easy and intuitive travel experience.
We were looking for a partner that would enable us to achieve this objective as fast as possible. Speed to market is important because the travel industry is very competitive.
We have a strategy, which was launched last year, branded “Choose Well.” Through Choose Well, we aim to empower our travelers to choose the right product at the right price point and with the option of the right ancillaries and services to meet their needs. The marketing campaign began in late November and we had to very quickly transform our technology to match that branding experience.
Moving past established technologies and silos to land a seamless solution
One of our challenges was the fact that the airline industry uses a lot of established technologies and silo-based solutions. To provide a transparent and seamless travel experience, we needed to have the platforms in place that could deliver that flight booking functionality across all devices and all use cases.
Originally, the strategy was to build the microservices to deliver this flight booking functionality from scratch. However, doing that has some risk and time penalty. In connecting with other airline teams and IBM representatives, we learned that using the IBM Cloud, which has industry-specific solutions to the travel sector, would significantly reduce our development time by using some of the existing assets that the technology offers.
The ability to work within an IBM Garage to further speed innovation was also fundamental to our decision to work with IBM. This way, we could engage various stakeholders from both companies in a constructive way and drive an outcome as quickly as possible.
We used IBM Garage methodologies at a nearby IBM site to co-create a flight booking solution with IBM. Our companies worked together as a single team to deliver what we were looking to achieve using existing assets, microservices and APIs to connect in an easier and significantly more efficient way. With the pre-built microservices architecture, our service orchestration platform and IBM API Connect, we’re able to connect very quickly and easily to existing systems like Sabre and also to new technologies like WhatsApp.
Gaining speed, improving efficiencies and enhancing customer experience
A very quick and successful proof-of-concept (POC) project gave us confidence in the solution path. Following that proof of concept, we launched the first release of the minimum viable product (MVP) in just 15 weeks.
With this project, the team started deploying the solution in the United States to be close to our host system. It was subsequently moved to the IBM Cloud data centers in the United Kingdom and then to Germany to optimize cost and performance. Moving workload between data centers was something that in the past would have taken weeks. With IBM Cloud, it took just hours.
By working with the IBM team and using IBM Garage methodologies, we achieved our goal. We are now offering a consistent and a more innovative flight booking and travel experience to our customers across touch points.
Learn more about IBM Garage and schedule a no-charge visit with the IBM Garage to get started.
The post International airline Etihad Airways delivers upscale flight booking with solution built on IBM Cloud appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

9 steps to awesome with Kubernetes/OpenShift presented by Burr Sutter

Burr Sutter gave a terrific talk in India in July, where he laid out the terms, systems and processes needed to set up Kubernetes for developers. This is an introductory presentation, which may be useful for your larger community of Kubernetes users once you’ve already set up User Provisioned Infrastructure (UPI) in Red Hat OpenShift for them, though it does go into the deeper details of actually running a cluster. To follow along, Burr created an accompanying GitHub repository, so you too can learn how to set up an awesome Kubernetes cluster in just 9 steps.

The post 9 steps to awesome with Kubernetes/OpenShift presented by Burr Sutter appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

A Look into the Technical Details of Kubernetes 1.16

This week Kubernetes 1.16 is expected and we want to highlight the technical features that enterprise Kubernetes users should know about. With Custom Resource Definitions (CRDs) moving into official general availability, storage improvements, and more, this release hardens the project and celebrates the main extension points for building cloud native applications on Kubernetes.
CRDs to GA
Custom Resource Definitions (CRDs) were introduced into upstream Kubernetes by Red Hat engineers in version 1.7. From the beginning, they were designed as a future-proof implementation of what was previously prototyped as ThirdPartyResources. Work on CRDs has stayed focused on the original goal of making custom resources production ready, culminating in a generally available feature in Kubernetes, highlighted by the promotion of the API to v1 in 1.16.
CRDs have become a cornerstone of API extensions in the Kubernetes ecosystem, and they are the basis of innovation and a core building block of OpenShift 4. Red Hat has continued pushing CRDs forward ever since, as one of the main drivers in the community behind the feature and stability improvements that finally led to the v1 API. This progress made OpenShift 4 possible.
Let’s take a deeper look at what will change in the v1 API of Custom Resource Definitions (in the apiextensions.k8s.io/v1 API group). The main theme is around consistency of data stored in CustomResources:
The goal is that consumers of data stored in CustomResources can rely on it having been validated on creation and on every update, such that the data:

follows a well-known schema
is strictly typed
and only contains values that the developers intended to be stored in the custom resource.

 
We know all of these properties from native resources, for example pods: pods have a well-known structure for metadata, spec, spec.containers, spec.volumes, etc.

Every field in a pod is strictly typed, e.g. every field is either a string, a number, an array, an object or a map. Wrong types are rejected by the kube-apiserver: one cannot put a string where a number is expected.
Unknown fields are stripped when creating a pod: a user or a controller cannot store arbitrary custom fields, e.g. pod.spec.myCustomField. For API compatibility reasons the Kubernetes API conventions say to drop and not reject those fields.

In order to fulfill these 3 properties, CRDs in the v1 version of apiextensions.k8s.io require:

That a schema is defined (in `CRD.spec.versions[n].schema`) – example:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.openshift.io
spec:
  group: example.openshift.io
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
2. That the schema is structural (https://kubernetes.io/blog/2019/06/20/crd-structural-schema/; KEP: https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190425-structural-openapi.md). The example above is structural.
3. That pruning of unknown fields (those which are not specified in the schema of (1)) is enabled (pruning used to be opt-in in v1beta1 via `CRD.spec.preserveUnknownFields: false`; KEP: https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20180731-crd-pruning.md). Pruning is enabled for the example above because it is a v1 manifest, where this is the default.
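As an illustrative sketch (the object name and field values are hypothetical), creating a custom resource against the crontabs CRD above with a field that is not declared in the schema shows pruning in action:

```yaml
apiVersion: example.openshift.io/v1
kind: CronTab
metadata:
  name: my-crontab        # hypothetical object name
spec:
  cronSpec: "* * * * */5"
  image: my-cron-image
  myCustomField: "value"  # not in the schema: the kube-apiserver drops it on create
```

After creation, reading the object back would show spec without myCustomField, because pruning removes unknown fields rather than rejecting the request.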

These are all formal requirements about CRDs and their CustomResources, checked by the kube-apiserver automatically. But there is an additional dimension of high quality APIs in the Kubernetes ecosystem: API review and approval.
API Review and Approval
Getting APIs right does not only mean to be a good fit for the described business logic. APIs must be

compatible with Kubernetes API Machinery of today and tomorrow,
future-proof in their own domain, i.e. certain API patterns are good and some are knowingly bad for later extensions.

The core Kubernetes developers had to learn painful lessons in the first releases of the Kubernetes platform, and eventually introduced a process called “API Review”. There is a set of people in the community who very carefully and with a lot of experience review every change against the APIs that are considered part of Kubernetes. Concretely, these are the APIs under the *.k8s.io domain.
To make it clear to the API consumer that APIs in *.k8s.io follow all the quality standards of core Kubernetes, CRDs under this domain must also go through the API Review process (this is not a new requirement, but has been in place for a long time) and, new in this release, must link the API review approval PR in an annotation:
metadata:
  annotations:
    api-approved.kubernetes.io: "https://github.com/kubernetes/kubernetes/pull/78458"
Without this annotation, a CRD under the *.k8s.io domain is rejected by the API server.
There are discussions about introducing other reserved domains for the wider Kubernetes community, e.g. *.x-k8s.io, with different, lower requirements than for core resources under *.k8s.io.
CRD Defaulting to Beta
Next to the presented theme of CRD data consistency, another important feature in 1.16 is the promotion of defaulting to beta. Defaulting is known to everybody for native resources, i.e. unspecified fields in a manifest are automatically set to default values on creation by the kube-apiserver.
For example, pod.spec.restartPolicy defaults to Always. Hence, if the user does not set that field, the API server will set and persist Always as the value.
Old objects already persisted in etcd can also get new fields when read from etcd, using the defaulting mechanism. This is an important difference from mutating admission webhooks, which are not called on reads from etcd and hence cannot simulate real defaulting.
Defaults are an important API feature which heavily drives an API design. Defaults are now definable in CRD OpenAPI schemas. Here is an example from an OpenShift 4 CRD:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: kubeapiservers.operator.openshift.io
spec:
  scope: Cluster
  group: operator.openshift.io
  names:
    kind: KubeAPIServer
    plural: kubeapiservers
    singular: kubeapiserver
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              logLevel:
                type: string
                default: Normal
              managementState:
                pattern: ^(Managed|Force)$
                default: Managed
                type: string

When such an object is created without explicitly setting logLevel and managementState, the log level will be Normal and the managementState will be Managed.
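As a sketch of the resulting behavior (assuming the CRD above is installed), a minimal manifest and the object the kube-apiserver persists would look like:

```yaml
# What the user submits:
apiVersion: operator.openshift.io/v1
kind: KubeAPIServer
metadata:
  name: cluster
spec: {}
# What the API server persists and returns, with defaults applied:
#   spec:
#     logLevel: Normal
#     managementState: Managed
```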
Kubectl independence
Kubectl came to life almost five years ago as a replacement for the initial CLI for Kubernetes: kubecfg. Its main goals were:

improved user experience 
and modularity. 

Initially these goals were met, but over time the tool evolved unevenly, flourishing in some places but not in others. Red Hat engineers have worked on the extensibility and stability of kubectl since the beginning, because this was required to make OpenShift possible as an enterprise distribution of Kubernetes.
The initial discussions about the possibility of splitting kubectl out of the main Kubernetes repository to allow faster iteration and shorter release cycles were started almost two years ago. Unfortunately, the years the kubectl code lived in the main Kubernetes repository caused it to have a tight coupling with some of the internals of Kubernetes. 
Several Red Hat engineers were involved in this effort from the start, refactoring the existing code to make it less coupled with internals, exposing libraries such as k8s.io/api (https://github.com/kubernetes/api/) and k8s.io/client-go (https://github.com/kubernetes/client-go/), to name a few, which are the foundation for many of the existing integrations. 
One of the biggest offenders in that fight with the internals was the fact that the entire kubectl codebase relied on the internal API versions (in other words, the internal representation of all the resources exposed by the kube-apiserver). Changing this required a lot of manual and mundane work to rewrite every piece of code to work properly with the external, official APIs (in other words, the ones you use on a regular basis when interacting with a cluster).
Many long hours of sometimes hard, other times dull work were put into this effort, which resulted in the recent first brave step of moving (almost all) kubectl code to a staging directory. In short, a staging repository is one that is treated as an external one, having its own distinct import path (in this case k8s.io/kubectl).
Reaching this first visible goal brings us several important implications. Kubectl is currently being published (through the publishing-bot) into its own repository that can be easily consumed by the external actors as k8s.io/kubectl. Even though there are a few commands left in the main kubernetes tree, we are working hard on closing this gap, while trying to figure out how the final extraction piece will work, mostly from the testing and release point of view.
Storage improvements
For this release, SIG-storage focused on bringing feature parity between Container Storage Interface (CSI) and in-tree drivers, as well as improving the stability of CSI sidecars and filling in functionality gaps.
We are working on migrating in-tree drivers and replacing them with their CSI equivalents. This effort spans multiple releases, with more work to follow, but steady progress has been made.
Some of the features the Red Hat storage team designed and implemented include:

Volume cloning (beta) to allow users to create volumes from existing sources.
CSI volume expansion as default, which brings feature parity between in-tree and CSI drivers. 
Raw block support improvements and bug fixes, especially when using raw block on iSCSI volumes.
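As a hedged sketch of what volume expansion looks like from the user's side (the StorageClass name and CSI driver name below are hypothetical), the StorageClass must allow expansion, and the user then simply raises the requested size on the PVC:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-csi
provisioner: example.csi.vendor.com   # hypothetical CSI driver
allowVolumeExpansion: true
---
# To expand, patch the PVC's requested size, e.g. from 10Gi to 20Gi:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-csi
  resources:
    requests:
      storage: 20Gi
```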

Learn more
Kubernetes 1.16 brings enhancements to CRDs and storage. Check out the Kubernetes 1.16 release notes to learn more.
The post A Look into the Technical Details of Kubernetes 1.16 appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Talium, Irene Energy remove barriers to accessing electricity in Africa

Approximately 600 million people do not have electricity in Africa according to reports on World Bank data. Even though progress has been made to get more people in Africa on the grid, the absolute number of people without power remains the same due to population growth.
In rural areas, cell phones are vital for people with no access to banks to send and receive money, access medical care and stay in contact with family and friends, according to The African Gourmet. Some people have to walk miles to the nearest town to drop off their cell phone for charging, and the wait could be three days. Furthermore, they may have to allocate as much as 24 percent of their daily living allowance per charge.
Talium has brought its expertise in blockchain projects and experience as an energy sector systems integrator to Irene Energy. Together, the companies have architected a solution to improve access to electricity in Africa and lower costs.
Reducing the costs of accessing electricity in Africa
Universal electrification is hard and expensive, according to Quartz Africa. Grid connections cost anywhere from $250 to more than $2,500 depending on proximity to the grid. Mini-grids that offer a grid-like service still cost between $500 and $1,500 to connect each household. These are steep costs for both providers and consumers.
Irene Energy wanted to create a flexible and cost-effective back-office infrastructure for energy service providers built on blockchain technologies. It chose the Stellar payment network to enable low-cost micropayments and needed a secure way to manage user credentials so that smaller companies and individuals could participate in the energy market. Typically, the way to address this requirement is through hardware with built-in encryption. But that would be an expensive proposition and contrary to our project goal of reducing the costs of accessing electricity in Africa. So, we instead looked to lower the costs of the back-office technologies being used by energy service providers.
We chose IBM Cloud Data Shield, which runs containerized applications in a secure enclave on an IBM Cloud Kubernetes Service host. This solution simplifies data-in-use protection by a huge margin, while at the same time addressing the huge scalability concerns of the Irene Energy project. With the Irene Energy platform, there are no up-front costs. This is not only because the platform is on the cloud, but also because companies do not have to design their applications to be compatible with security requirements. Instead, IBM Cloud Data Shield automates that security process.
More affordable and accessible electricity
It was a great experience to partner with IBM because we could see that there was mutual interest in building something together. IBM really cares about what’s going on in the field; and, in this case, the lack of electricity in Africa. We felt that IBM wanted to address this very real concern with a constructive solution.
Reducing the cost of back-office technology for electric service providers with IBM Cloud means electricity can be available to more people in rural areas.
The Irene Energy platform is making electricity more affordable and accessible for millions of people in Africa. It also facilitates electricity roaming and shared ownership of electricity assets.
Watch the video or read the case study to learn more.
The post Talium, Irene Energy remove barriers to accessing electricity in Africa appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

4 steps to modernize and cloud-enable applications

Customers today are no longer satisfied by the traditional consumer-business relationship. Instead, they expect engaging and informative digital experiences. In order to match these expectations and stay ahead of the curve, organizations must lean into digital transformation. Businesses need to modernize both customer-facing and enterprise applications to support a customer-centric approach to business. Developing a strategy to cloud-enable applications is crucial in gaining and maintaining a competitive advantage, especially when, according to a Forrester study, by 2023, 90 percent of current applications will still be in use, but most won’t have received sufficient modernization investment. This means there’s an opportunity for businesses who invest now in application modernization.
Application modernization: Taking a phased approach
Cloud-enabling applications doesn’t have to be an all-or-nothing proposition. Application modernization is best achieved by taking a phased approach, one that’s tailored to business goals and application architecture.
Companies can simplify and extend functionality while still meeting business and IT requirements by carefully choosing which applications to prioritize when modernizing for a hybrid cloud environment. This allows organizations to capitalize on the benefits of the cloud without disrupting existing workloads in on-premises environments.
Prime applications for digital transformation, and gain a competitive edge, through the following four steps.
1. Simplify with containers
Putting existing applications into containers is the first step to simplifying application deployment and management. Containers encapsulate the application with minimal or no changes to the application itself, enabling consistent testing and deployment that reduces costs and simplifies operations.
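As a minimal sketch of this first step (the base image tag, jar name, and port are illustrative assumptions), an existing Java service can be containerized without changing the application itself:

```dockerfile
# Hypothetical example: package an existing, unmodified jar into a container image
FROM registry.access.redhat.com/ubi8/openjdk-11
COPY target/orders-service.jar /deployments/app.jar
EXPOSE 8080
CMD ["java", "-jar", "/deployments/app.jar"]
```

The application binary is untouched; only the packaging changes, which is what makes this step low risk.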
2. Extend with APIs
Extend existing applications with APIs that securely expose their full capabilities to developers. The applications become reusable across clouds to easily access and build new capabilities. Beyond APIs, this approach relies on an agile integration strategy that supports the volume of connections and variety of architectures required.
3. Decompose with microservices
Use microservices to break down monolithic applications into deployable components, where each component performs a single function. Businesses can then further enhance development agility and efficiency by putting each microservice in its own container. Using Kubernetes, companies can then manage and deliver the microservices of existing applications.
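The pattern above can be sketched as a Kubernetes Deployment that runs one microservice in its own container (all names and the image reference are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders            # one single-function component of the former monolith
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example.com/orders:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```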
4. Refactor with new microservices
Refactoring involves building new microservices. In some instances, it may be easier to develop new applications utilizing cloud-native development practices instead of working with a current monolith. This provides teams with the ability to deliver innovation to users, encourage creative thinking and allow developers to experiment in a low-risk fashion.
Find your next step to modernize applications
Application modernization is a critical aspect of business modernization. Leading organizations that prioritize cloud-enabling their applications are breaking away from the competition by enhancing customer experiences and accelerating development and delivery.
Read the smart paper “Simplify and extend apps with an open, hybrid cloud” to learn more about application modernization and the unique approach, tools and solutions offered by IBM for application modernization.
The post 4 steps to modernize and cloud-enable applications appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Enabling OpenShift 4 Clusters to Stop and Resume Cluster VMs

There are a lot of reasons to stop and resume OpenShift Cluster VMs:

Save money on cloud hosting costs

Use a cluster only during daytime hours – for example for
exploratory or development work. If this is a cluster for just one
person it does not need to be running when the only user is not
using it.

Deploy a few clusters for students ahead of time when teaching a
workshop or class, making sure that the clusters have prerequisites
installed ahead of time.

Background
When installing OpenShift 4 clusters a bootstrap certificate is created
that is used on the master nodes to create certificate signing requests
(CSRs) for kubelet client certificates (one for each kubelet) that will
be used to identify each kubelet on any node.
Because certificates cannot be revoked, this certificate is created with
a short expiry time; 24 hours after cluster installation it can no
longer be used. All nodes other than the master nodes have a service
account token, which is revocable. The bootstrap certificate is
therefore only valid for 24 hours after cluster installation, and after
that only again at each 30-day certificate rotation.
If the master kubelets do not have a 30 day client certificate (the
first only lasts 24 hours), then missing the kubelet client certificate
refresh window renders the cluster unusable because the bootstrap
credential cannot be used when the cluster is woken back up.
Practically, this requires an OpenShift 4 cluster to be running for at
least 25 hours after installation before it can be shut down.
The following process enables cluster shutdown right after installation.
It also enables cluster resume at any time in the next 30 days.
Note that this process only works up until the 30 day certificate
rotation. But for most test / development / classroom / etc. clusters
this will be a usable approach because these types of clusters are
usually rather short lived.
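For context, the validity window of any client certificate can be inspected with a standard openssl invocation. The on-node path below is illustrative and may vary by release; the runnable part of this sketch generates a throwaway one-day certificate locally and checks its expiry the same way:

```shell
# On a master node the check would look something like (path may vary):
#   sudo openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem

# Self-contained demonstration with a throwaway 1-day certificate:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-kubelet" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 2>/dev/null
openssl x509 -noout -enddate -in /tmp/demo-cert.pem   # prints a line like: notAfter=...
```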
Preparing the Cluster to support stopping of VMs
These steps will enable a successful restart of a cluster after its VMs
have been stopped. This process has been tested on OpenShift 4.1.11 and
higher – including developer preview builds of OpenShift 4.2.

From the VM from which you ran the OpenShift installer, create the
following DaemonSet manifest. This DaemonSet pulls down the same
service account token bootstrap credential that is used on all the
non-master nodes in the cluster.
cat << 'EOF' >$HOME/kubelet-bootstrap-cred-manager-ds.yaml.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubelet-bootstrap-cred-manager
  namespace: openshift-machine-config-operator
  labels:
    k8s-app: kubelet-bootstrap-cred-manager
spec:
  selector:
    matchLabels:
      k8s-app: kubelet-bootstrap-cred-manager
  template:
    metadata:
      labels:
        k8s-app: kubelet-bootstrap-cred-manager
    spec:
      containers:
      - name: kubelet-bootstrap-cred-manager
        image: quay.io/openshift/origin-cli:v4.0
        command: ['/bin/bash', '-ec']
        args:
        - |
          #!/bin/bash

          set -eoux pipefail

          while true; do
            unset KUBECONFIG

            echo "----------------------------------------------------------------------"
            echo "Gather info..."
            echo "----------------------------------------------------------------------"
            # context
            intapi=$(oc get infrastructures.config.openshift.io cluster -o "jsonpath={.status.apiServerURL}")
            context="$(oc --config=/etc/kubernetes/kubeconfig config current-context)"
            # cluster
            cluster="$(oc --config=/etc/kubernetes/kubeconfig config view -o "jsonpath={.contexts[?(@.name==\"$context\")].context.cluster}")"
            server="$(oc --config=/etc/kubernetes/kubeconfig config view -o "jsonpath={.clusters[?(@.name==\"$cluster\")].cluster.server}")"
            # token
            ca_crt_data="$(oc get secret -n openshift-machine-config-operator node-bootstrapper-token -o "jsonpath={.data.ca.crt}" | base64 --decode)"
            namespace="$(oc get secret -n openshift-machine-config-operator node-bootstrapper-token -o "jsonpath={.data.namespace}" | base64 --decode)"
            token="$(oc get secret -n openshift-machine-config-operator node-bootstrapper-token -o "jsonpath={.data.token}" | base64 --decode)"

            echo "----------------------------------------------------------------------"
            echo "Generate kubeconfig"
            echo "----------------------------------------------------------------------"

            export KUBECONFIG="$(mktemp)"
            kubectl config set-credentials "kubelet" --token="$token" >/dev/null
            ca_crt="$(mktemp)"; echo "$ca_crt_data" > $ca_crt
            kubectl config set-cluster $cluster --server="$intapi" --certificate-authority="$ca_crt" --embed-certs >/dev/null
            kubectl config set-context kubelet --cluster="$cluster" --user="kubelet" >/dev/null
            kubectl config use-context kubelet >/dev/null

            echo "----------------------------------------------------------------------"
            echo "Print kubeconfig"
            echo "----------------------------------------------------------------------"
            cat "$KUBECONFIG"

            echo "----------------------------------------------------------------------"
            echo "Whoami?"
            echo "----------------------------------------------------------------------"
            oc whoami
            whoami

            echo "----------------------------------------------------------------------"
            echo "Moving to real kubeconfig"
            echo "----------------------------------------------------------------------"
            cp /etc/kubernetes/kubeconfig /etc/kubernetes/kubeconfig.prev
            chown root:root ${KUBECONFIG}
            chmod 0644 ${KUBECONFIG}
            mv "${KUBECONFIG}" /etc/kubernetes/kubeconfig

            echo "----------------------------------------------------------------------"
            echo "Sleep 60 seconds..."
            echo "----------------------------------------------------------------------"
            sleep 60
          done
        securityContext:
          privileged: true
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/kubernetes/
          name: kubelet-dir
      nodeSelector:
        node-role.kubernetes.io/master: ""
      priorityClassName: "system-cluster-critical"
      restartPolicy: Always
      securityContext:
        runAsUser: 0
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 120
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 120
      volumes:
      - hostPath:
          path: /etc/kubernetes/
          type: Directory
        name: kubelet-dir
EOF

Deploy the DaemonSet to your cluster.
oc apply -f $HOME/kubelet-bootstrap-cred-manager-ds.yaml.yaml

Delete the secrets csr-signer-signer and csr-signer from the
openshift-kube-controller-manager-operator namespace:
oc delete secrets/csr-signer-signer secrets/csr-signer -n openshift-kube-controller-manager-operator

This will trigger the Cluster Operators to re-create the CSR signer
secrets which are used when the cluster starts back up to sign the
kubelet client certificate CSRs. You can watch as various operators
switch from Progressing=False to Progressing=True and back to
Progressing=False. The operators that will cycle are
kube-apiserver, openshift-controller-manager,
kube-controller-manager and monitoring.
watch oc get clusteroperators

Sample Output.
NAME                                        VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                              4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
cloud-credential                            4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
cluster-autoscaler                          4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
console                                     4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
dns                                         4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
image-registry                              4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
ingress                                     4.2.0-0.nightly-2019-08-27-072819   True        False         False      3h46m
insights                                    4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
kube-apiserver                              4.2.0-0.nightly-2019-08-27-072819   True        True          False      18h
kube-controller-manager                     4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
kube-scheduler                              4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
machine-api                                 4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
machine-config                              4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
marketplace                                 4.2.0-0.nightly-2019-08-27-072819   True        False         False      3h46m
monitoring                                  4.2.0-0.nightly-2019-08-27-072819   True        False         False      3h45m
network                                     4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
node-tuning                                 4.2.0-0.nightly-2019-08-27-072819   True        False         False      3h46m
openshift-apiserver                         4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
openshift-controller-manager                4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
openshift-samples                           4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
operator-lifecycle-manager                  4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
operator-lifecycle-manager-catalog          4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
operator-lifecycle-manager-packageserver    4.2.0-0.nightly-2019-08-27-072819   True        False         False      3h46m
service-ca                                  4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
service-catalog-apiserver                   4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
service-catalog-controller-manager          4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h
storage                                     4.2.0-0.nightly-2019-08-27-072819   True        False         False      18h

Once all Cluster Operators show Available=True,
Progressing=False and Degraded=False the cluster is ready
for shutdown.
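The manual watch can also be scripted. The helper below is a minimal sketch that parses the `oc get clusteroperators --no-headers` table (column order as in the sample output above); the function name and polling interval are my own.

```shell
#!/usr/bin/env bash
# Sketch: succeed only when every operator row reports
# AVAILABLE=True, PROGRESSING=False, DEGRADED=False.
# Reads the table (without headers) on stdin.
operators_settled() {
  awk '{ if ($3 != "True" || $4 != "False" || $5 != "False") busy = 1 } END { exit busy }'
}

# Typical use against a live cluster (assumes a working kubeconfig):
#   until oc get clusteroperators --no-headers | operators_settled; do sleep 10; done
```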

Stopping the cluster VMs
Use the tools native to the cloud environment that your cluster is
running on to shut down the VMs.
The following command will shut down the VMs that make up a cluster on
Amazon Web Services.
Prerequisites:

The Amazon Web Services Command Line Interface, awscli, is
installed.

$HOME/.aws/credentials has the proper AWS credentials available to
execute the command.

REGION points to the region your VMs are deployed in.

CLUSTERNAME is set to the Cluster Name you used during
installation. For example cluster-${GUID}.

export REGION=us-east-2
export CLUSTERNAME=cluster-${GUID}

aws ec2 stop-instances --region ${REGION} --instance-ids $(aws ec2 describe-instances --filters "Name=tag:Name,Values=${CLUSTERNAME}-*" "Name=instance-state-name,Values=running" --query Reservations[*].Instances[*].InstanceId --region ${REGION} --output text)

Starting the cluster VMs
Use the tools native to the cloud environment that your cluster is
running on to start the VMs.
The following commands will start the cluster VMs in Amazon Web
Services.
export REGION=us-east-2
export CLUSTERNAME=cluster-${GUID}

aws ec2 start-instances --region ${REGION} --instance-ids $(aws ec2 describe-instances --filters "Name=tag:Name,Values=${CLUSTERNAME}-*" "Name=instance-state-name,Values=stopped" --query Reservations[*].Instances[*].InstanceId --region ${REGION} --output text)

Recovering the cluster
If the cluster missed the initial 24 hour certificate rotation, some nodes
in the cluster may be in NotReady state. Validate whether any nodes are
NotReady. Note that immediately after waking up the cluster the nodes
may show Ready, but will switch to NotReady within a few seconds.
oc get nodes

Sample Output.
NAME                                         STATUS     ROLES    AGE   VERSION
ip-10-0-132-82.us-east-2.compute.internal    NotReady   worker   18h   v1.14.0+b985ea310
ip-10-0-134-223.us-east-2.compute.internal   NotReady   master   19h   v1.14.0+b985ea310
ip-10-0-147-233.us-east-2.compute.internal   NotReady   master   19h   v1.14.0+b985ea310
ip-10-0-154-126.us-east-2.compute.internal   NotReady   worker   18h   v1.14.0+b985ea310
ip-10-0-162-210.us-east-2.compute.internal   NotReady   master   19h   v1.14.0+b985ea310
ip-10-0-172-133.us-east-2.compute.internal   NotReady   worker   18h   v1.14.0+b985ea310
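The check above is easy to script. The helper below is a sketch (the name is mine) that counts NotReady rows in `oc get nodes --no-headers` output:

```shell
#!/usr/bin/env bash
# Sketch: count nodes whose STATUS column (field 2) is NotReady.
# Reads `oc get nodes --no-headers` output on stdin.
not_ready_count() {
  awk '$2 == "NotReady" { n++ } END { print n + 0 }'
}

# Typical use: oc get nodes --no-headers | not_ready_count
```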

If some nodes show NotReady the nodes will start issuing Certificate
Signing Requests (CSRs). Repeat the following command until you see a
CSR for each NotReady node in the cluster with Pending in the
Condition column.
oc get csr

Once you see the CSRs they need to be approved. The following command
approves all outstanding CSRs.
oc get csr -oname | xargs oc adm certificate approve
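To tell when the expected CSRs have actually shown up, you can count the Pending ones; this is a sketch and the helper name is mine (it assumes the CONDITION value is the last field of each row, as in `oc get csr` output):

```shell
#!/usr/bin/env bash
# Sketch: count Pending CSRs from `oc get csr --no-headers` output on stdin.
pending_csr_count() {
  awk '$NF == "Pending" { n++ } END { print n + 0 }'
}

# Typical use: oc get csr --no-headers | pending_csr_count
```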

When you double-check the CSRs (using oc get csr) you should see that
they have been Approved and Issued (again in the Condition column).
Double check that all nodes now show Ready. Note that this may take a
few seconds after approving the CSRs.
oc get nodes

Sample Output.
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-132-82.us-east-2.compute.internal    Ready    worker   18h   v1.14.0+b985ea310
ip-10-0-134-223.us-east-2.compute.internal   Ready    master   19h   v1.14.0+b985ea310
ip-10-0-147-233.us-east-2.compute.internal   Ready    master   19h   v1.14.0+b985ea310
ip-10-0-154-126.us-east-2.compute.internal   Ready    worker   18h   v1.14.0+b985ea310
ip-10-0-162-210.us-east-2.compute.internal   Ready    master   19h   v1.14.0+b985ea310
ip-10-0-172-133.us-east-2.compute.internal   Ready    worker   18h   v1.14.0+b985ea310

Your cluster is now fully ready to be used again.
Ansible Playbook to recover cluster
The following Ansible Playbook should recover a cluster after wake up.
Note the 5 minute sleep to give the nodes enough time to settle, start
all pods and issue CSRs.
Prerequisites:

Ansible installed

Current user either has a .kube/config that grants cluster-admin
permissions or a KUBECONFIG environment variable set that points
to a kube config file with cluster-admin permissions.

OpenShift Command Line interface (oc) in the current user’s PATH.

- name: Run cluster recover actions
  hosts: localhost
  connection: local
  gather_facts: False
  become: no
  tasks:
  - name: Wait 5 minutes for Nodes to settle and pods to start
    pause:
      minutes: 5

  - name: Get CSRs that need to be approved
    command: oc get csr -oname
    register: r_csrs
    changed_when: false

  - name: Approve all Pending CSRs
    when: r_csrs.stdout_lines | length > 0
    command: "oc adm certificate approve {{ item }}"
    loop: "{{ r_csrs.stdout_lines }}"

Summary
Following this process enables you to stop OpenShift 4 Cluster VMs right
after installation without having to wait for the 24h certificate
rotation to occur.
It also enables you to resume cluster VMs that were stopped during the
window in which the 24h certificate rotation would have occurred.
The post Enabling OpenShift 4 Clusters to Stop and Resume Cluster VMs appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift 4.2 Disconnected Install

In a previous blog, it was announced that Red Hat is making the OpenShift nightly builds available to everyone. This gives users a chance to test upcoming features before their general availability. One of the features planned for OpenShift 4.2 is the ability to perform a “disconnected” or “air gapped” install, allowing you to install in an environment without access to the Internet or outside world.
NOTE: nightly builds are unsupported and are for testing purposes only!
In this blog I will be going over how to perform a disconnected install in a lab environment. I will also give an overview of my environment, how to mirror the needed images, and any other tips and tricks I’ve learned along the way.
Environment Overview
In my environment, I have two networks. One network is completely disconnected and has no access to the Internet. The other network is connected to the Internet and has full access. I will use a bastion host that has access to both networks. This bastion host will perform the following functions.

Registry server (where I will mirror the content)
Apache web server (where I will store installation artifacts)
Installation host (where I will be performing the installation from)

Here is a high-level overview of the environment I’ll be working on.

In my environment, I have already set up DNS, DHCP, and other ancillary services for my network. Also, it’s important to get familiar with the OpenShift 4 prerequisites before attempting an install.
Doing a disconnected install can be challenging, so I recommend trying a fully connected OpenShift 4 install first to familiarize yourself with the install process (as they are quite similar).
Registry Set Up
You can use your own registry or build one from scratch. I used the following steps to build one from scratch. Since I’ll be using a container for my registry, and Apache for my webserver, I will need podman and httpd on my host.
yum -y install podman httpd httpd-tools

Create the directories you’ll need to run the registry. These directories will be mounted in the container running the registry.
mkdir -p /opt/registry/{auth,certs,data}

Next, generate an SSL certificate for the registry.  This can, optionally, be self-signed if you don’t have an existing, trusted, certificate authority. I’ll be using registry.ocp4.example.com as the hostname of my registry. Make sure your hostname is in DNS and resolves to the correct IP.
cd /opt/registry/certs
openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt

Generate a username and password (the registry requires bcrypt-formatted passwords) for access to your registry.
htpasswd -bBc /opt/registry/auth/htpasswd dummy dummy

Make sure to open port 5000 on your host, as this is the default port for the registry. Since I am using Apache to stage the files I need for installation, I will open port 80 as well.
firewall-cmd --add-port=5000/tcp --zone=internal --permanent
firewall-cmd --add-port=5000/tcp --zone=public   --permanent
firewall-cmd --add-service=http  --permanent
firewall-cmd --reload

Now you’re ready to run the container. Here I specify the directories I want to mount inside the container. I also specify I want to run on port 5000. I recommend you put this in a shell script for ease of starting.
podman run --name poc-registry -p 5000:5000 \
  -v /opt/registry/data:/var/lib/registry:z \
  -v /opt/registry/auth:/auth:z \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" \
  -e "REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v /opt/registry/certs:/certs:z \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  docker.io/library/registry:2

Verify connectivity to your registry with curl. Provide it the username and password you created.
curl -u dummy:dummy -k https://registry.ocp4.example.com:5000/v2/_catalog

Note, this should return an “empty” repo

If you have issues connecting try to stop the container.
podman stop poc-registry

Once it’s down, you can start it back up using the same podman run command as before.
Obtaining Artifacts
You will need the preview builds for 4.2 in order to do a disconnected install. Specifically, you will need the client binaries along with the install artifacts. This can be found in the dev preview links provided below.

Client Binaries
Install Artifacts

Download the binaries and any installation artifacts you may need for the installation. The file names will differ depending on when you choose to download the preview builds (they get updated often).
You can inspect the nightly release notes and extract the build number from there. I did this with the curl command.
export BUILDNUMBER=$(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/latest/release.txt | grep 'Name:' | awk '{print $NF}')
echo ${BUILDNUMBER}
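The same parsing can be factored into a function and run against a saved copy of release.txt, which is handy on the disconnected side. This is a sketch; the function name is mine and it assumes the "Name:" line format shown by the mirror's release.txt.

```shell
#!/usr/bin/env bash
# Sketch: extract the build number from a saved release.txt
# (same "Name:" line the curl pipeline above greps for).
build_from_release_txt() {
  awk '/Name:/ { print $NF; exit }' "$1"
}

# Typical use: BUILDNUMBER=$(build_from_release_txt /var/www/html/release.txt)
```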

To download the client binaries to your staging server/area (in my case, it’s the registry server itself) use curl:
curl -o /var/www/html/openshift-client-linux-${BUILDNUMBER}.tar.gz https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/latest/openshift-client-linux-${BUILDNUMBER}.tar.gz

curl -o /var/www/html/openshift-install-linux-${BUILDNUMBER}.tar.gz https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/latest/openshift-install-linux-${BUILDNUMBER}.tar.gz

You’ll also need these clients on your registry host, so feel free to un-tar them now.
tar -xzf /var/www/html/openshift-client-linux-${BUILDNUMBER}.tar.gz -C /usr/local/bin/
tar -xzf /var/www/html/openshift-install-linux-${BUILDNUMBER}.tar.gz -C /usr/local/bin/

Depending on what kind of install you are doing, you will need to do one of the following.
PXE Install
If you’re doing a PXE install, you’ll need the BIOS, initramfs, and the kernel files. For example:
curl -o /var/www/html/rhcos-${BUILDNUMBER}-metal-bios.raw.gz https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-${BUILDNUMBER}-metal-bios.raw.gz

curl -o /var/www/html/rhcos-${BUILDNUMBER}-installer-initramfs.img https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-${BUILDNUMBER}-installer-initramfs.img

curl -o /var/www/html/rhcos-${BUILDNUMBER}-installer-kernel https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-${BUILDNUMBER}-installer-kernel

Once you have staged these, copy them over into your environment. Once they are on your PXE install server and your configuration is updated, you can proceed to mirror your images.
ISO Install
If you’re doing an ISO install, you’ll still need the BIOS file, but only the ISO instead of the initramfs and kernel files.
curl -o /var/www/html/rhcos-${BUILDNUMBER}-metal-bios.raw.gz https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-${BUILDNUMBER}-metal-bios.raw.gz

curl -o /var/www/html/rhcos-${BUILDNUMBER}-installer.iso https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-${BUILDNUMBER}-installer.iso

Once these are staged, copy them over to where you’ll need them for the installation. The BIOS file will need to be on a web server accessible to the OpenShift nodes. The ISO can be burned onto a disk/usb drive or mounted via your virtualization platform.
Once that’s done, you can proceed to mirror the container images.
Mirroring Images
The installation images will need to be mirrored in order to complete the installation. Before you begin you need to make sure you have the following in place.

An internal registry to mirror the images to (like the one I just built)

You’ll also need the certificate of this registry
The username/password for access

A pullsecret obtained at https://cloud.redhat.com/openshift/install/pre-release

I downloaded mine and saved it as ~/pull-secret.json

The oc and openshift-install CLI tools installed
The jq command is also helpful

First, you will need to get the information to mirror. This information can be obtained via the dev-preview release notes. With this information, I constructed the following environment variables.
export OCP_RELEASE="4.2.0-0.nightly-2019-08-29-062233"
export AIRGAP_REG='registry.ocp4.example.com:5000'
export AIRGAP_REPO='ocp4/openshift4'
export UPSTREAM_REPO='openshift-release-dev'   ## or 'openshift'
export AIRGAP_SECRET_JSON="${HOME}/pull-secret-2.json"
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=${AIRGAP_REG}/${AIRGAP_REPO}:${OCP_RELEASE}
export RELEASE_NAME="ocp-release-nightly"

I will now go over how to construct these environment variables from the release notes:

OCP_RELEASE – Can be obtained by the Release Metadata.Version section of the release page.
AIRGAP_REG – This is your registry’s hostname with port
AIRGAP_REPO – This is the name of the repo in your registry (you don’t have to create it beforehand)
UPSTREAM_REPO – Can be obtained from the Pull From section of the release page.
AIRGAP_SECRET_JSON – This is the path to your pull secret with your registry’s information (which we will create later)
OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE – This environment variable is set so the installer knows to use your registry.
RELEASE_NAME – This can be obtained in the Pull From section of the release page.

Before you can mirror the images, you’ll need to add the authentication for your registry to your pull secret file (the one you got from try.openshift.com). This will look something like this:
{
  "registry.ocp4.example.com:5000": {
    "auth": "ZHVtbXk6ZHVtbXk=",
    "email": "noemail@localhost"
  }
}

The base64 value is simply the registry’s credentials encoded in username:password format. For example, with the username dummy and password dummy, I created the base64 by running:
echo -n 'dummy:dummy' | base64 -w0
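You can sanity-check the encoded value by decoding it back (this assumes GNU coreutils base64, as used above):

```shell
# Decode the auth string back to username:password to verify it
printf '%s' 'ZHVtbXk6ZHVtbXk=' | base64 --decode
# prints: dummy:dummy
```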

You can add your registry’s information to your pull secret by using jq and the pull secret you downloaded (thus creating a new pull secret file with your registry’s information).
jq '.auths += {"registry.ocp4.example.com:5000": {"auth": "ZHVtbXk6ZHVtbXk=","email": "noemail@localhost"}}' < ~/pull-secret.json > ~/pull-secret-2.json

Also, if needed and you haven’t done so already, make sure you trust the self-signed certificate. This is needed in order for oc to be able to login to your registry during the mirror process.
cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract

With this in place, you can mirror the images with the following command.
oc adm release mirror -a ${AIRGAP_SECRET_JSON} \
  --from=quay.io/${UPSTREAM_REPO}/${RELEASE_NAME}:${OCP_RELEASE} \
  --to-release-image=${AIRGAP_REG}/${AIRGAP_REPO}:${OCP_RELEASE} \
  --to=${AIRGAP_REG}/${AIRGAP_REPO}

Part of the output will have an example imageContentSources to put in your install-config.yaml file. It’ll look something like this.
imageContentSources:
- mirrors:
  - registry.ocp4.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release-nightly
- mirrors:
  - registry.ocp4.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev

Save this output, as you’ll need it later.
Installation
At this point you can proceed with the normal installation procedure, with the main difference being what you specify in the install-config.yaml file when you create the ignition configs.
Please refer to the official documentation for specific installation information. You’re most likely doing a Bare Metal install, so my previous blog would be helpful to look over as well.
When creating an install-config.yaml file, you need to specify additional parameters like the example below.
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp4
networking:
  clusterNetworks:
  - cidr: 10.254.0.0/16
    hostPrefix: 24
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '{"auths":{"registry.ocp4.example.com:5000": {"auth": "ZHVtbXk6ZHVtbXk=","email": "noemail@localhost"}}}'
sshKey: 'ssh-rsa .... root@helper'
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - registry.ocp4.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release-nightly
- mirrors:
  - registry.ocp4.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev

Some things to note here:

pullSecret – only the information about your registry is needed.
sshKey – the contents of your id_rsa.pub file (or another ssh public key that you want to use)
additionalTrustBundle – this is your crt file for your registry. (i.e. the output of cat domain.crt)
imageContentSources – the local mirror registry and the original source the images are expected to come from (images whose metadata does not match an expected source should be considered tampered with)

You will also need to export the OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE environment variable. This tells OpenShift which image to use for bootstrapping. This is in the form of ${AIRGAP_REG}/${AIRGAP_REPO}:${OCP_RELEASE}. It looked like this in my environment:
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=registry.ocp4.example.com:5000/ocp4/openshift4:4.2.0-0.nightly-2019-08-29-062233
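The value is just the three earlier variables joined together, so composing it explicitly avoids typos. A minimal sketch, using the example values from this post:

```shell
#!/usr/bin/env bash
# Compose the override from its parts (example values from this post)
AIRGAP_REG='registry.ocp4.example.com:5000'
AIRGAP_REPO='ocp4/openshift4'
OCP_RELEASE='4.2.0-0.nightly-2019-08-29-062233'
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE="${AIRGAP_REG}/${AIRGAP_REPO}:${OCP_RELEASE}"
echo "${OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE}"
# prints: registry.ocp4.example.com:5000/ocp4/openshift4:4.2.0-0.nightly-2019-08-29-062233
```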

I created my install-config.yaml under my ~/ocp4 install directory. At this point you can create your Ignition configs as you would normally.
# openshift-install create ignition-configs --dir=/root/ocp4
INFO Consuming "Install Config" from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
WARNING Found override for ReleaseImage. Please be warned, this is not advised

Please note that it warns you about overriding the image and that, for the 4.2 dev preview, the masters are schedulable.

At this point, you can proceed with the installation as you would normally.
Troubleshooting
A good thing to do during the bootstrapping process is to log in to the bootstrap server and tail the journal logs as the bootstrapping process progresses. Many errors or misconfigurations can be spotted immediately when tailing this log.
[core@bootstrap ~]$ journalctl -b -f -u bootkube.service

There are times where you might have to approve the worker/master node’s CSR. You can check pending CSRs with the oc get csr command. This is important to check since the cluster operators won’t finish without any worker nodes added. You can approve all the pending CSRs in one shot with the following command.
[user@bastion ~]$ oc get csr --no-headers | awk '{print $1}' | xargs oc adm certificate approve

After the bootstrap process is done, it’s helpful to see your cluster operators running. You can do this with the oc get co command. It’s helpful to have this in a watch in a separate window.
[user@bastion ~]$ watch oc get co

The two most common issues involve the image-registry and ingress operators: the openshift-install command waits for both to come up before it considers the install a success. Make sure you’ve approved the CSRs for your machines and you’ve configured storage for your image-registry. The commands I’ve provided should help you navigate any issues you may have.
Conclusion
In this blog, I went over how you can prepare for a disconnected install and how to perform a disconnected install using the nightly developer preview of OpenShift 4. Disconnected installs were a highly popular request for OpenShift 4, and we are excited to bring you a preview build.
Nightly builds are a great way to preview what’s up and coming with OpenShift, so you can test things before the GA release. We are excited to bring you this capability and hope that you find it useful. If you have any questions or comments, feel free to use the comment section below!
The post OpenShift 4.2 Disconnected Install appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Simplify modernization and build cloud-native with open source technologies

Cloud-native technologies are the new normal for application development. Cloud-native creates immeasurable business value with increased velocity and reduced operational costs. Together, these support emerging business opportunities.
Advancements in application development have focused on net new applications. We have seen that existing applications that cannot easily move to the cloud have been left on traditional technologies. As a result, less than 20 percent of enterprise workloads are deployed to the cloud, according to an IBM-commissioned study by McKinsey & Company.
At IBM, we see open source as a foundation for the new hybrid multicloud world, and our recent acquisition of Red Hat underscores our long commitment to open technologies.
Open source allows consistency and choice
Key open source technologies – containers, Kubernetes, Istio, Knative and others – together define the new hybrid multicloud platform, providing consistency and choice across any cloud provider. These technologies allow developers to build applications to support enterprise workloads using a common technology base with flexible vendor choices. They establish freedom for enterprises to deploy applications across public, private and hybrid cloud platforms.
Kubernetes provides a container orchestration layer that consistently manages workloads. Developers have full freedom of choice on languages, runtimes, and frameworks, while Kubernetes maintains a consistent operational platform across diverse technologies. This approach provides a basis for microservices-based container applications as well as existing enterprise applications.
New IBM open source project accelerates the cloud journey
In 2017, IBM began modernizing our software portfolio into containers and Kubernetes, and optimized more than 100 products for Red Hat OpenShift. In addition to our own journey, we’ve learned a lot about modernization from our clients and partners. Together, we’ve migrated or modernized more than 100,000 workloads.
With the new IBM Cloud Pak for Applications, we’ve encoded our experience into a set of technologies to accelerate the journey to cloud. Built on open source technologies, IBM Cloud Pak for Applications delivers tools, technologies and platforms designed to bring WebSphere workloads to any cloud through Kabanero.io and Red Hat Runtimes.
IBM Cloud Pak for Applications provides a rich set of open source technologies and functions that allow enterprises to secure and curate their favorite frameworks and runtimes, including those using Java, Open Liberty, SPRING BOOT® with Tomcat®, JBoss®, Node.js®, Vert.x and more. IBM Cloud Pak for Applications performs vulnerability scanning on all open source frameworks and runtimes to prevent security issues. All IBM Cloud Paks are supported by IBM and contain Docker-certified middleware.
Move WebSphere applications to any cloud
For existing applications, modernization tools in the new IBM Cloud Pak chart a path to modernize WebSphere applications into a fully open source stack. IBM Cloud Pak for Applications analyzes applications and provides a modernization plan specific for each application. Many WebSphere applications can be migrated to containers with automation and without code changes.
In the end, traditional applications are ready to deploy to any cloud — from OpenShift on IBM Cloud, an existing infrastructure, or to your cloud of choice.
IBM Cloud Pak for Applications delivers the open technology platform for the future and enables businesses to address the 80 percent of enterprise workloads that have yet to move to the cloud, according to the report.
Learn more about IBM Cloud Pak for Applications and register to join us for the upcoming IBM Application Modernization Technical Conference 2019 in Chicago, IL, United States on 24-25 September 2019. Experience two days of in-depth technical sessions for developers, administrators and architects at the inaugural conference and hear from top subject matter experts from our labs, IBM Business Partners and customers.
The post Simplify modernization and build cloud-native with open source technologies appeared first on Cloud computing news.
Source: Thoughts on Cloud