From the AWS Blog: What Red Hat OpenShift Dedicated can do for you

While Red Hat OpenShift makes it easier for teams to implement and run Kubernetes-based Linux container infrastructure, there are scenarios where a team may be too small or spread too thin to administer an OpenShift cluster on their own. For these teams, we offer Red Hat OpenShift Dedicated, a fully managed and provisioned service from Red Hat, hosted on AWS. These two services go hand in hand to provide production-grade container-based infrastructure on top of Amazon’s worldwide cloud infrastructure.
But what does that actually mean for an IT executive trying to suss out the total costs, savings and optimizations offered by moving to OpenShift Dedicated? Ryan Niksch, Partner Solutions Architect at Amazon, has written an extensive blog entry detailing the exact benefits of using OpenShift Dedicated. That should be some useful information for anyone evaluating the many hosted Kubernetes options available in the marketplace. The piece is full of wisdom like this:
You can take advantage of cost reductions of up to 70% using Reserved Instances, which match the pervasive running instances. This is ideal for the master and infrastructure nodes of the Red Hat OpenShift solutions running in your account. The reference architecture for Red Hat OpenShift on AWS recommends spanning nodes over three availability zones, which translates to three master instances. The master and infrastructure nodes scale differently; so, there will be three additional instances for the infrastructure nodes. Purchasing reserved instances to offset the costs of the master nodes and the infrastructure nodes can free up funds for your next project.
Check out the whole article on the AWS Blog, here.
 
The post From the AWS Blog: What Red Hat OpenShift Dedicated can do for you appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Troubleshooting OpenShift Internal Networking

There are many times in OpenShift where microservices need to talk to each other internally without exposing routes to the outside world. These microservices interact via the Kubernetes service API, which acts as a load balancer that resolves a set of pods to a single IP.
While OpenShift and the service API work together to abstract most of the complexity behind networking, problems can still arise when trying to interact with other microservices deployed on OpenShift. In this post we’ll talk about some of the most common problems around internal networking and how to troubleshoot them. For the examples, we’ll assume a basic app consisting of a single ClusterIP Service and Deployment, each called hello-world.
The following service will be used as an example for each issue described in this post.
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: my-namespace
spec:
  selector:
    app: hello-world
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8443

How To Begin Troubleshooting
The best way to begin troubleshooting is to oc rsh into the pod that is making the call to your target service. For example, if I have a deployment called rhel that is attempting to talk to another deployment called hello-world, I would access the rhel pod to begin troubleshooting my networking error:
oc rsh rhel-2-fqjbw -n my-namespace

Once inside the pod, attempt running “curl -v” against the endpoint your pod is trying to reach. The verbose output will often reveal the issue behind your networking error. The examples below pair some sample curl -v output with the possible errors associated with it.

curl -v Output and Possible Errors

sh-4.2$ curl -v -k https://hello-world:443
Could not resolve host: hello-world; Unknown error
Closing connection 0
curl: (6) Could not resolve host: hello-world; Unknown error

Possible errors:
1) Service does not exist
2) Target hostname is incorrect

sh-4.2$ curl -v -k https://hello-world.my-namespace:443
About to connect() to hello-world.my-namespace port 443 (#0)
Trying 172.30.209.206...
Connection timed out
Failed connect to hello-world.my-namespace:443; Connection timed out
Closing connection 0
curl: (7) Failed connect to hello-world.my-namespace:443; Connection timed out

Possible errors:
1) Isolation policy is blocking traffic

sh-4.2$ curl -v -k https://hello-world:443
About to connect() to hello-world port 443 (#0)
Trying 172.30.250.96...
No route to host
Failed connect to hello-world:443; No route to host
Closing connection 0
curl: (7) Failed connect to hello-world:443; No route to host

Possible errors:
1) Service selector is incorrect
2) Service port is incorrect
3) Service targetPort name is not specified on the deployment

sh-4.2$ curl -v -k https://hello-world:443
About to connect() to hello-world port 443 (#0)
Trying 10.128.2.44...
Connection refused
Failed connect to hello-world:443; Connection refused
Closing connection 0
curl: (7) Failed connect to hello-world:443; Connection refused

Possible errors:
1) Service clusterIP is none
2) Service targetPort is incorrect
3) Container does not expose targetPort

 
Note that this only covers some of the most common networking errors that I have observed and that your particular error may not be covered in this post.
Let’s begin looking at some of the most common errors around OpenShift networking.
Service Does Not Exist
First things first, you need a service object to be able to route traffic to the desired app. You can quickly create a service with the “oc expose” command:
oc expose deployment hello-world # For Deployment objects
oc expose deploymentconfig hello-world # For Deployment Configs

Alternatively, you can use the hello-world service YAML at the beginning of this post as an example to help get started. Once you have written the YAML, create the service with:
oc apply -f $PATH_TO_SERVICE_YAML
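Once the service exists, a quick check confirms it before moving on; the names below are the example service and namespace used throughout this post:
oc get svc hello-world -n my-namespace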

Target Hostname Is Incorrect
The most common networking issues are caused by attempting to reference an incorrect host name. The host name of an app is determined by the name of its service. Depending on whether or not the source and target apps are in the same namespace, the target host name will be either <service-name> or <service-name>.<namespace>.
Source and Target Apps in the Same Namespace
If your source and target apps are in the same OpenShift namespace, then the target hostname will simply be the name of the target service. Using the hello-world service above as an example, any app trying to talk to the hello-world app would simply use the host name hello-world.
Source and Target Apps in Different Namespaces
The target host name will be a little different if the source and target apps live in different namespaces. In this case the target host name will be <service-name>.<namespace>. Using the hello-world service above as an example, any app trying to talk to the hello-world app from a different namespace would use the host name hello-world.my-namespace.
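To quickly check both forms from inside the source pod, you can point curl at the service name and at the namespace-qualified name. The fully qualified variant below assumes the default cluster domain of cluster.local:
# Same namespace as the hello-world service
curl -v -k https://hello-world:443

# Different namespace (short form and fully qualified form)
curl -v -k https://hello-world.my-namespace:443
curl -v -k https://hello-world.my-namespace.svc.cluster.local:443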
SDN Isolation Policy is Blocking Traffic
The OpenShift SDN supports three different modes for networking, with network policy being the default in OpenShift 4. It is possible that your mode’s isolation policy has not been configured to allow traffic to reach your app.
If using network policy mode, ensure that a NetworkPolicy object has been created that allows traffic to reach your target app.
If using multitenant mode, ensure that your source and target apps’ namespaces have been joined together to allow network traffic.
Your pods should already be able to reach each other with subnet mode.
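For the network policy case, here is a minimal sketch of a NetworkPolicy that admits traffic to the hello-world pods; the selectors and port come from the example app, and you would widen or narrow them to match your own topology:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-to-hello-world
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: hello-world
  ingress:
  - from:
    - podSelector: {}   # any pod in this namespace; add a namespaceSelector for cross-namespace traffic
    ports:
    - protocol: TCP
      port: 8443        # the pod port (the service targetPort)

For multitenant mode, the equivalent step is joining the two projects, for example with oc adm pod-network join-projects --to=my-namespace <source-namespace>.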
Service Selector is Incorrect
The most common way to route traffic with a service is to use a label selector that matches a label on the app’s pods. In the example service above, the hello-world service will route traffic to pods with a label app=hello-world. Make sure that the target Deployment or DeploymentConfig sets a label on each pod that matches the service selector and vice versa.
Here’s part of an example Deployment that sets the “app=hello-world” label that the service selector expects on each pod. Notice the “template.metadata.labels.app” value, which sets the pod “app=hello-world” label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
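A quick way to confirm that the selector and the pod labels actually line up is to list the pods using the same selector the service uses, and to check that the service has endpoints; an empty endpoints list means the selector is matching nothing:
oc get pods -l app=hello-world -n my-namespace
oc get endpoints hello-world -n my-namespace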

Service clusterIP is None
Notice in the above hello-world service that the YAML specification lacks a clusterIP key-value pair. This means that OpenShift will automatically assign the hello-world service an IP address. Compare that to this modified service below, in which we set the clusterIP to “None”:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: my-namespace
spec:
  selector:
    app: hello-world
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8443
  clusterIP: None

This is actually a headless service, meaning that the service is not assigned an IP address. Headless services have many different use cases, but if you’re running a simple architecture with the intention of a Deployment or DeploymentConfig being load-balanced by a service, you may have accidentally created a headless service. Check that the service has a clusterIP allocated with “oc get svc hello-world -o yaml”. If you can verify that the clusterIP is None, delete the service and apply it again without a clusterIP spec.
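As a sketch of that last step, assuming the corrected service definition (with the clusterIP line removed) is saved in hello-world-service.yaml:
oc delete svc hello-world -n my-namespace
oc apply -f hello-world-service.yaml
oc get svc hello-world -n my-namespace   # should now show an assigned CLUSTER-IP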
Service Ports are Incorrect or are Not Exposed
Part of a service’s job is to identify the port and targetPort of an application. The service accepts traffic on its port and redirects it to the targetPort on the running container. Another common source of internal traffic issues is that these port values are either incorrect or not exposed by the container.
Service Port is Incorrect
The hello-world service above specifies port 443 and targetPort 8443:
apiVersion: v1
kind: Service
...
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8443

This port, port 443, reroutes to port 8443 on the target container. Make sure that your requests are sent to port 443. Otherwise, the service will not be able to route your request to the targetPort of the container.
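To double-check the mapping without reading the full YAML, a jsonpath query against the example service prints the port and the targetPort it forwards to:
oc get svc hello-world -n my-namespace -o jsonpath='{.spec.ports[0].port} -> {.spec.ports[0].targetPort}{"\n"}'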
Service targetPort is Incorrect
Your service targetPort may be incorrect if your request is hitting the service port and is still failing. Make sure your service’s targetPort is specifying a port that is exposed by the container.
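One way to cross-check this is to compare the service's targetPort against the containerPorts declared by the running pods; with the example labels, something like the following works. Note that a container can listen on a port without declaring it, so an empty result here is not conclusive on its own:
oc get svc hello-world -n my-namespace -o jsonpath='{.spec.ports[0].targetPort}{"\n"}'
oc get pods -l app=hello-world -n my-namespace -o jsonpath='{.items[0].spec.containers[0].ports[*].containerPort}{"\n"}'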
Service targetPort Name is not Specified on the Deployment
Imagine the hello-world service above exposed ports like this instead:
apiVersion: v1
kind: Service
...
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: https

This port, port 443, is targeting a port called “https”. The targetPort name refers to a port exposed on the Deployment or DeploymentConfig, so either resource is expected to declare a port named “https” for “targetPort: https” to resolve.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  ...
    containers:
    - ...
      ports:
      - containerPort: 8443
        protocol: TCP
        name: https

Notice the “ports:” stanza at the bottom of the deployment. Since the service refers to the “https” targetPort, the deployment must also declare a corresponding port named “https” to route traffic to. In this case, the service accepts traffic at port 443 and reroutes it to port 8443, which is specified by the containerPort of the Deployment.
If your service uses a name instead of a number for its targetPort, make sure that name is also specified on the Deployment object.
Container does not Expose targetPort
This is one that is often overlooked. Sometimes the issue is not an OpenShift problem but instead has to do with the running container. If you know that the port and targetPort are specified and configured properly, then the running container may not be exposing the targetPort.
Take the following Spring Boot application.properties, for example:
server.port=8444

Given the hello-world service above, you can expect this application.properties config to be the root cause of your networking issue. The targetPort is set to 8443, but the container is actually exposing port 8444.
We can fix this by modifying the application.properties to instead read as:
server.port=8443

Although this example was in Spring Boot, a similar troubleshooting approach can be taken with any runtime or application source. Make sure that your app is configured to listen on the port specified by your service’s targetPort.
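A quick way to confirm what the application is really listening on is to rsh into one of the target pods and probe the targetPort directly, bypassing the service entirely. The pod name below is illustrative, and this assumes curl is available in the image:
oc rsh -n my-namespace hello-world-1-abcde
# then, inside the pod:
curl -v -k https://localhost:8443

If this direct probe fails, the fix belongs in the application configuration rather than in the service.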
Thanks for Reading!
Hopefully this was able to help you troubleshoot any networking issues you’re experiencing in OpenShift. Although this did not cover every networking issue you could possibly experience, I think it covers the most common (and arguably the most frustrating) errors.
For more information on OpenShift networking and the Kubernetes service API, check out the following links. Until next time!

https://docs.openshift.com/container-platform/4.1/networking/understanding-networking.html (OpenShift 4.1)
https://docs.openshift.com/container-platform/3.11/architecture/networking/networking.html (OpenShift 3.11)

https://kubernetes.io/docs/concepts/services-networking/service/

The post Troubleshooting OpenShift Internal Networking appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Cloud-native application deployment across hybrid multicloud infrastructure: 451 analyst interview

Modern enterprises are quickly adopting a cloud-native approach to speed up the application lifecycle and deliver features quickly and more efficiently to users. Jay Lyman, principal analyst with 451 Research, details the flexibility of cloud-native applications across a hybrid multicloud infrastructure. In the following video interviews he shares how Kubernetes is used to deploy varied applications, how containers and Kubernetes enable hybrid multicloud deployments, and what other capabilities are needed to fully embrace cloud-native benefits.
Cloud connection: What’s the connection between cloud-native, hybrid cloud and multicloud infrastructures?
As enterprises move to expand their IT infrastructure to embrace hybrid multicloud, finding “the best execution venue”, where applications are placed and run on the most appropriate environment, is very important. Watch the video below to learn how cloud-native application development can be adapted to any cloud environment, whether that be public, private or a multicloud infrastructure.
Watch the video.
Cloud-native applications: What types of applications are being run across hybrid and multicloud environments?
A variety of different applications can be designed through cloud native and deployed across hybrid and multicloud infrastructures, including both internal and external-facing applications.  Among modern enterprises today, a large majority of DevOps teams are planning to adopt containers and Kubernetes within the next few years. Although most DevOps activations are run on-premises and in a private cloud, there is a strong movement towards utilizing more managed services and DevOps in the public cloud. In the video below, Lyman details the use of cloud native in both on-premises infrastructure and private and public clouds.
Watch the video.
Containers and Kubernetes: How does cloud-native software relate to hybrid and multicloud deployments in the enterprise?
Cloud-native software is a good fit for hybrid and multicloud environments for a variety of reasons. Because cloud-native applications are built to optimize both cloud architectures and automation, they can more easily take advantage of operational functions that allow them to deploy consistently and effectively across multiple infrastructures. Watch the video below to learn how cloud-native software, like containers, Kubernetes and other microservices, allow organizations to pursue the “best execution venue” for their applications.
Watch the video.
Hybrid multicloud and Kubernetes: What makes Kubernetes a match for hybrid and multicloud deployments and what else is needed?
Kubernetes is beneficial in hybrid and multicloud environments because it is a distributed application framework, meaning it is built to manage applications across varied infrastructures. There is a huge predicted movement towards Kubernetes over the next few years, which coincides with the growing movement towards hybrid and multicloud. However, in addition to Kubernetes, enterprises need security and compliance capabilities, as well as testing and certification to make sure that it works with existing DevOps tooling and middleware. In the video below, Lyman expands upon what is needed and how businesses can achieve improved efficiency and increased revenue through successful integration of Kubernetes.
Watch the video.
 
To learn more about the compatibility of cloud-native and hybrid multicloud deployments, and its benefits and requirements, read this business impact brief.
 
The post Cloud-native application deployment across hybrid multicloud infrastructure: 451 analyst interview appeared first on Cloud computing news.
Source: Thoughts on Cloud

Good tech in practice: How cloud is making the world a better place

Working in tech, it’s always a source of pride for me when the technology that I work with is used for good and there’s an impact that goes beyond profit.
Recently, I started working on a customer story where the client is using cloud technology and AI to drive social change. This experience got me researching more into how IBM technologies are impacting the larger community and I found some fantastic projects using cloud, data and AI.
Using tech for good in Asia Pacific
Below are just a sampling of the stories I found from Asia Pacific across three different industries.
Healthcare: Improving primary healthcare and prevention with AI
Ikure is a social enterprise that meets primary healthcare and prevention needs through a unique combination of health outreach initiatives, skills development and technology intervention. Ikure, working with IBM, used AI and predictive capabilities to create a model that identifies patients at the highest risk of suffering a heart attack, helping doctors see the most urgent cases first and ultimately save lives. The best part of building on an IBM Cloud stack is that the solution can be up and running quickly without needing to recode. This becomes especially useful if iKure, or other companies that build a similar product, have to deploy or embed their application elsewhere. The video below explains the concept in a simplified demo environment. Watch it to learn more about this project.

Environment: Reducing coastal erosion on Australia’s Gold Coast
Coastal erosion is a major challenge in most countries and results in damage to or loss of not just the beaches, but also infrastructure. Coastal erosion disrupts fishing, navigation, recreation and other activities. Australia’s Gold Coast Council has invested AU$14 million into rehabilitation projects, but these are prohibitively expensive, time-consuming and only enable groups to focus on a small fraction of the vast coastline.
Nature’s own defense against coastal erosion is seagrasses. They are effective in stabilizing the sea floor but are slow to develop, taking 50 years to regrow. Sea grasses are also easily damaged by changes in the environment and by increased wastewater entering our oceans.
To ensure their survival, intervention plans require ongoing monitoring of sea grass meadows through the use of underwater video footage and manual assessment by marine scientists. IBM is helping marine scientists use AI for image segmentation, eliminating long hours of manually labeling video footage, taking the labeling time from eight hours to 20 minutes. The AI model is reaching an accuracy of 91 percent and with additional data and training is expected to increase even more in accuracy. You can read the article by ZDnet to learn more about this project.
Animals: Improving pet adoption possibilities
For animal lovers and homeless dogs, adoption shelters are where matches-in-heaven get made. But there are various issues when it comes to finding the right match. This often includes not knowing a dog’s history. People looking to adopt are eager to identify a dog’s breed and how that could affect possible health issues in the future. A lack of such information can prevent some dogs from finding the right home.
Joel Joseph developed a simple yet effective app using IBM Cloud and Watson capabilities to help remedy this situation. Instead of categorizing a dog’s breed by sight, Joel built a model to visually recognize a dog’s breed. He trained IBM Watson using approximately 20,000 dog photos of around 120 different breeds. Now, animal lovers can simply take a photo of the dog and let Watson advise them on the breed. The app is currently a minimum viable product (MVP) and improvements are in the works. Joel’s article on how he got started and where the project is headed is a really good read. Visit the page to check it out.
Building technology solutions that drive cross-border social change
In addition to the three above examples of projects in Asia, IBM is working with a number of clients and official bodies to create and execute social and environmental welfare projects across the globe. Many of these projects are not limited by geography. Below are two such examples.
Smart water solutions address water scarcity
IBM collaborated with SweetSense Inc. to build an IoT network of water flow sensors in Northern Kenya. The cloud-hosted water management platform uses sensors to provide supply and demand patterns based on groundwater extraction data and help water managers reduce water loss through leaks, theft or metering inaccuracies. You can learn more about how smart technology can help monitor and manage water resources here.
Plastic Bank tackles pollution and poverty with blockchain
Plastic Bank is mobilizing recycling entrepreneurs from amongst the world’s poorest communities to clean up plastic waste in return for life-changing goods. Watch the below video or read the case study to learn more.

 
Powering the next great idea
These solutions all mobilize tech for good. Technology is an enabler that can help bring new ideas and social impact to life. IBM Cloud and its team of experts can help you drive business and social change, too. Schedule a no-charge visit with the IBM Garage team to discuss your ideas, or start building with the full catalogue of IBM Cloud services now.
 
The post Good tech in practice: How cloud is making the world a better place appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenShift 4.2 on Azure Preview

Introduction
In this blog we will be showing a video on how to get Red Hat OpenShift 4 installed on Microsoft Azure using the full stack automated method. This method differs from the pre-existing infrastructure method, as the full stack automation gets you from zero to a full OpenShift deployment, creating all the required infrastructure components automatically.
Currently, installing OpenShift 4 on Azure is under tech preview. It won’t be supported until the GA release of OpenShift 4.2. This blog is meant for those who want to get a preview on what’s coming. Detailed instructions are below if you wish to follow along!

Prerequisites
It’s important that you get familiar with the general prerequisites by looking at the official documentation for OpenShift. There you can find specific details about the requirements and installation details for either full-stack automated or for pre-existing infrastructure deployments. I have broken up the prerequisites into sections and have marked those that are optional.
DNS
You will need to have a DNS domain already controlled by Azure. The OpenShift installer will configure DNS resolution (internal and external) for the cluster. This can be done by buying a domain on Azure or delegating a domain (or subdomain) to Azure. In either case, make sure the domain is set ahead of time.
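For example, if you have the Azure CLI installed (see the section below), delegating a subdomain comes down to creating the zone and then pointing your registrar or parent zone at the name servers Azure assigns; the resource group name here is a placeholder:
az network dns zone create -g my-resource-group -n example.com
az network dns zone show -g my-resource-group -n example.com --query nameServers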
During the install, you will be providing a $CLUSTERID. This ID will be used as part of the FQDN of the components created for your cluster. In other words, the ID will become part of your DNS name. For example, a domain of example.com and a $CLUSTERID of ocp4 will yield an OpenShift domain of ocp4.example.com for your cluster.
Choose wisely.
Azure CLI Tools (Optional)
It’s useful to install the Azure az CLI client. Although you can do all of what you need for Azure from the web UI, it’s helpful to have the CLI tool installed for debugging or streamlining the setup process.
Once you’ve installed the Azure CLI, you will need to log in to set up the CLI for access. Be sure to visit the Getting Started page for more information. Once set up, verify that you have a connection to your account with the following:
az account show

The output should look something like this
{
  "environmentName": "AzureCloud",
  "id": "VVVVVVVV-VVVV-VVVV-VVVV-VVVVVVVVVVVV",
  "isDefault": true,
  "name": "Microsoft Azure Account",
  "state": "Enabled",
  "tenantId": "WWWWWWWW-WWWW-WWWW-WWWW-WWWWWWWWWWWW",
  "user": {
    "name": "user@email.com",
    "type": "user"
  }
}

Again, you don’t need the Azure CLI tool, but it does help.
OpenShift CLI Tools
In order to install and interact with OpenShift, you will need to download some CLI tools. These can be found by going to try.openshift.com and logging in with your Red Hat Customer Portal credentials. Click on Azure (note that it’s only Developer Preview currently). You will need to download the following:

The OpenShift Installer
The OpenShift CLI tools (includes oc and kubectl)
Download or copy your pull secret

You may need the “dev preview” binaries instead, as dev previews are always being updated. Always consult try.openshift.com for details.
Install
In this section I will be going over the installation of OpenShift 4.2 dev preview on Azure, with the assumption you have an Azure account and that you performed all of the prerequisites. I will be installing the following:

The installer will set up 3 master nodes, 3 worker nodes, and 1 bootstrap node.
I will be using az.redhatworkshops.io as my example domain.
I will be using openshift4 as my clusterid.
I am doing the install from a Linux host.

Creating a Service Principal
A Service Principal needs to be created for the installer to use. A Service Principal can be thought of as a “robot” account for automation on Azure. More information about Service Principals can be found in the Microsoft Docs. To create a service principal, run the following command:
az ad sp create-for-rbac --name chernand-azure-video-sp

When successful, it should output the information about the service principal. Save this information somewhere as the installer will need it to do the install. The information should look something like this.
{
  "appId": "ZZZZZZZZ-ZZZZ-ZZZZ-ZZZZ-ZZZZZZZZZZZZ",
  "displayName": "chernand-azure-video-sp",
  "name": "http://chernand-azure-video-sp",
  "password": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "tenant": "YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY"
}

Next, you need to give the service principal the right roles in order to properly install OpenShift. The service principal needs to have at least Contributor and User Access Administrator roles assigned in your subscription.
az role assignment create --assignee ZZZZZZZZ-ZZZZ-ZZZZ-ZZZZ-ZZZZZZZZZZZZ --role Contributor
az role assignment create --assignee ZZZZZZZZ-ZZZZ-ZZZZ-ZZZZ-ZZZZZZZZZZZZ --role "User Access Administrator"

NOTE: The UUID passed to --assignee is the appId in the output when you created the service principal.

In order to properly mint credentials for components in the cluster, your service principal needs to request the following application permission before you can deploy OpenShift on Azure: Azure Active Directory Graph -> Application.ReadWrite.OwnedBy
You can request permissions using the Azure portal or the Azure CLI. (You can read more about Azure Active Directory Permissions at the Microsoft Azure website)
az ad app permission add --id ZZZZZZZZ-ZZZZ-ZZZZ-ZZZZ-ZZZZZZZZZZZZ \
  --api 00000002-0000-0000-c000-000000000000 \
  --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role

NOTE: The Application.ReadWrite.OwnedBy permission is granted to the application only after it is provided an “Admin Consent” by the tenant administrator. If you are the tenant administrator, you can run the following to grant this permission.

az ad app permission grant --id ZZZZZZZZ-ZZZZ-ZZZZ-ZZZZ-ZZZZZZZZZZZZ \
  --api 00000002-0000-0000-c000-000000000000

You will also need your Subscription ID; you can get this by running the following.
az account list --output table

Installing OpenShift
It’s best to create a working directory when creating a cluster. This directory will hold all the install artifacts, including the initial kubeadmin account.
mkdir ~/ocp4

Run the openshift-install create install-config command specifying this working directory. This creates the initial install config (install-config.yaml) and stores it in that directory. You will need information about your service principal you created earlier.
$ openshift-install create install-config --dir=~/ocp4
? SSH Public Key /home/chernand/.ssh/azure_rsa.pub
? Platform azure
? azure subscription id 12345678-1234-1234-1234-123456789012
? azure tenant id YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY
? azure service principal client id ZZZZ-ZZZZ-ZZZZ-ZZZZZZZZZZZZ
? azure service principal client secret [? for help] ***********
INFO Saving user credentials to "/home/chernand/.azure/osServicePrincipal.json"
? Region centralus
? Base Domain az.redhatworkshops.io
? Cluster Name openshift4
? Pull Secret [? for help] ****************************

Let’s go over the Azure specific options.

azure subscription id – This is your subscription id. This can be obtained by running: az account list --output table
azure tenant id – Your tenant id (this was in the output when you created your service principal)
azure service principal client id – This is the appId from the service principal creation output.
azure service principal client secret – This is the password from the service principal creation output.

The install-config.yaml file is in the ~/ocp4 working directory. It also creates a ~/.azure/osServicePrincipal.json file. Inspect these files if you wish.
cat ~/ocp4/install-config.yaml
cat ~/.azure/osServicePrincipal.json

After you’ve inspected these files, go ahead and install OpenShift.
openshift-install create cluster --dir=~/ocp4/

When the install is finished, you’ll see the following output.
INFO Consuming "Install Config" from target directory
INFO Creating infrastructure resources...
INFO Waiting up to 30m0s for the Kubernetes API at https://api.openshift4.az.redhatworkshops.io:6443...
INFO API v1.14.0+8e63b6d up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.openshift4.az.redhatworkshops.io:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/chernand/ocp4/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.openshift4.az.redhatworkshops.io:6443
INFO Login to the console with user: kubeadmin, password: 5char-5char-5char-5char

Set the KUBECONFIG environment variable to connect to your cluster.
export KUBECONFIG=$HOME/ocp4/auth/kubeconfig

Verify that your cluster is up and running.
$ oc cluster-info
Kubernetes master is running at https://api.openshift4.az.redhatworkshops.io:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Post Install
After your cluster is deployed, you may want to do some additional configuration tasks such as:

Configuring authentication and additional users
Adding additional routes and/or sharding network traffic
Migrating OpenShift services to specific nodes
Adding additional persistent storage or a dynamic storage provisioner
Adding more nodes to the cluster

It’s important to note that the kubeadmin user is meant to be a temporary admin user. You should replace this user with a more permanent admin user when you configure authentication.
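As a hedged sketch of that authentication step, one common approach is an htpasswd identity provider: store an htpasswd file as a secret in openshift-config, reference it from the cluster OAuth resource, and grant the new user cluster-admin. The user name, password and file names below are illustrative, and the htpasswd command comes from the httpd-tools package:
# Create a local user database and store it as a secret
htpasswd -c -B -b users.htpasswd admin 'MyS3cretPassw0rd'
oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config

# Point the cluster OAuth configuration at the secret
oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF

# Grant the new user cluster-admin; retire kubeadmin once this login is verified
oc adm policy add-cluster-role-to-user cluster-admin admin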
Conclusion
In this blog we went over how to install OpenShift 4 on Azure using the full stack automated method. It’s important to note that this method is marked as developer preview, meaning it’s not supported by Red Hat. However, the installer is ready for you to deploy and test for non-production workloads. Please feel free to try it and provide feedback by leaving a comment below or by reaching out via the Customer Portal Discussions page.
The post OpenShift 4.2 on Azure Preview appeared first on Red Hat OpenShift Blog.
Source: OpenShift

National Express signs 8-year cloud deal with Vodafone Business and IBM

UK’s largest coach operator National Express turns to new IBM & Vodafone venture for hybrid cloud boost. (PRNewsfoto/IBM)
National Express has signed an eight-year deal with Vodafone Business and IBM to help the UK-based coach company with its hybrid cloud plans, reports Computer Weekly. Under the agreement, infrastructure for the transportation provider will move to IBM Cloud as part of a larger hybrid cloud strategy.
The deal will enable National Express to more “effectively manage multiple clouds in different locations and from different vendors,” shared IBM. The deal will also help National Express to “seamlessly scale up and down in support of usage spikes. Additional security and risk management will be added to protect the transport operator’s technology infrastructure and provide greater resilience.”
Michael Valocchi, IBM general manager of the venture with Vodafone shared the following with Computer Weekly: “What we’re building for National Express is a future-proof platform that’s going to allow them to use hybrid cloud, to have the flexibility and scalability they need. It’s a way for them to innovate, both from a consumer experience [point of view] and from an operational perspective, by bringing together the predictive nature of maintenance and vehicle placement, and the operational benefits that brings.”
Vodafone Business and IBM launched their joint venture earlier this year to help companies innovate faster. The joint venture aims to provide the open, flexible technologies enterprises need to integrate multiple clouds and prepare for a digital future enabled by AI, 5G, edge computing and Software Defined Networking (SDN).
Read more about the National Express cloud deal in the full article from Computer Weekly.
The post National Express signs 8-year cloud deal with Vodafone Business and IBM appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenShift Scale-CI: Part 1 – Evolution

If you’ve played around with Kubernetes or Red Hat OpenShift, which is an enterprise-ready version of Kubernetes for production environments, the following questions may have occurred to you:

How large can we scale our OpenShift-based applications in our environments?
What are the cluster limits? How can we plan our environment according to object limits?
How can we tune our cluster to get maximum performance?
What are the challenges of running and maintaining a large and/or dense cluster?
How can we make sure each OpenShift release is stable, performant and satisfies our requirements in our own environments?

We, the OpenShift Scalability team at Red Hat, created an automation pipeline and tooling called OpenShift Scale-CI to help answer all of these questions. OpenShift Scale-CI automates the installation, configuration and running of various performance and scale tests on OpenShift across multiple cloud providers.
Motivation behind building Scale-CI
There are two areas which led us to build Scale-CI:

Providing a green signal for every OpenShift product release, for all product changes to support scale and for shipping our Scalability and Performance guide with the product. 
Onboarding workloads to see how well they perform at scales above thousands of nodes per cluster.

It is important to find out at what point any system starts to slow down or completely fall apart. It could be because of various reasons:

Your cluster has low Master ApiServer, Kubelet QPS and Burst values.
The etcd backend quota size might be too low for large and dense clusters.
The number of objects running on the cluster is beyond the supported cluster limits. 

This motivated us to scale test each and every release of OpenShift and ship the Scalability and Performance guide with each OpenShift release which helps users plan/tune their environment accordingly. 
In order to make efficient use of lab hardware, or of hourly paid compute and storage in the public cloud that can get very expensive at large scale, automation does a far better job at optimization than humans do at the endless wash, rinse and repeat cycle of CI-based testing. This led us to create automation and tooling that works on any cloud provider and runs performance and scale tests covering various components of OpenShift; Kubelet, control plane, SDN, monitoring with Prometheus, router, logging, cluster limits and storage can all be tested with the click of a button.
We used to spend weeks running tests and capturing data. Scale-CI speeds up the process, saving lots of time and money on compute and storage resources. Most importantly, it gives us the time to work on creative tasks like tooling and designing new scale tests to add to the framework.
Not every team or user has the luxury of building automation and tooling, or of access to the hardware, to test how well their application or OpenShift component works at scales above 2,000 nodes. Being part of the Performance and Scalability team, we have access to a huge amount of hardware, and this motivated us to build Scale-CI in such a way that anyone can use it and participate in the community around it. Users can submit a pull request on GitHub with a set of templates to get their workload onboarded into the pipeline. The onboarded workloads are automatically tested at scale on an OpenShift cluster built with the latest and greatest builds. It doesn’t hurt that this entire process is managed and maintained by the OpenShift Scalability team.
You can find us online at openshift-scale github organization. Any feedback or contributions are most welcome. Keep an eye out for our next blog, OpenShift Scale-CI Deep Dive: Part 2, which will have information about the various Scale-CI components including workloads, pipeline and the tooling we use to test OpenShift at scale.
The post OpenShift Scale-CI: Part 1 – Evolution appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Introducing Red Hat OpenShift 4.2 in Developer Preview: Releasing Nightly Builds 

You might have read about the architectural changes and enhancements in Red Hat OpenShift 4 that resulted in operational and installation benefits. Or maybe you read about how OpenShift 4 assists with developer innovation and hybrid cloud deployments. I want to draw attention to another part of OpenShift 4 that we haven’t exposed to you yet…until today.
When Red Hat acquired CoreOS, and had the opportunity to blend Container Linux with RHEL and Tectonic with OpenShift, the innovation did not remain only in the products we brought to market. 
An exciting part about working on new cloud-native technology is the ability to redefine how you work. Redefine how you hammer that nail with your hammer. These Red Hat engineers were building a house, and sometimes the tools they needed simply did not exist. 
OpenShift 4 represents new features, methods, and use cases that had not been attempted before with its upstream components (Kubernetes). A lot of the development process was around building internal tooling that would enable OpenShift to be successful as a distribution of Kubernetes and as an application platform. There were three specific results of that tooling work that resulted in some exciting improvements that you might already be using, but didn’t realize it just yet.

Over The Air
One of the technology concepts we leveraged from CoreOS is called “over the air” updates (OTA). If you are like me, you are probably thinking “Why is that considered important? We have been doing yum, docker, and apt updates over the air for years!” This is different in some pretty significant ways.

We created an entirely new software distribution system to intelligently suggest and apply Kubernetes cluster upgrades. This software component runs as a SaaS service from cloud.redhat.com and it allows us to help OpenShift 4 clusters to automatically upgrade.

By combining Red Hat’s deep experience in managing software lifecycle with Kubernetes-native configuration, automatic management with Kubernetes Operators, and a deep feedback loop between cluster health and software rollout, we can help reduce the complexity and risk in updating your production clusters while you can increase the speed at which you can roll out critical security fixes. Upgrade at the “click of a button” without stress.

This SaaS service has knowledge of what it is doing. Imagine if you had more than 1,200 lines of business (LOBs) and each one deployed up to 250 OpenShift clusters. They installed on multiple and different infrastructures. But they all used your software to do so.

You could start becoming aware when some organically updated before others and you could observe success rates and issues. If you found a common pattern you could stop the upgrade from reaching clusters that had not been upgraded yet. This allows Red Hat to more deeply partner with our customers to help shoulder the responsibility of running OpenShift.

OpenShift 4 introduced an optional component called telemeter. When enabled, we can have immediate awareness and feedback when a piece of OpenShift isn’t functioning properly so we can create a fix for your issue before you even know the problem is there.

We take care of updating the operating system and the platform together. You no longer need to cycle them on two different maintenance windows and we no longer have to suffer from a configuration skew or poor target system while updating the layers on top of the stack.

All four of those benefits combine to form over the air updates (OTA). OTA is more than pushing software around. It’s about us reaching out to you and declaring that we want to do more for you. Enabling over the air updates in OpenShift may help you keep your OPEX as low as possible. 
Continuous Improvements
OpenShift 4 has been streaming continuous fixes for a few weeks. Hopefully you have noticed by now that OpenShift 4 has been able to release 8 z-stream patches in 11 weeks. That is almost one per week, and that is our goal. In OpenShift 3 we would typically release a z-stream patch once every 3-5 weeks. Why have we done this? We did this for two reasons:

With the OTA upgrade framework in place we can have a scientifically higher confidence probability of success.
You do not have to take all the z-streams when they are released as they are cumulative, but by allowing them out each week when they are ready, you also don’t have to wait. If you are being affected by an issue, you can now have the lessons learned from other customers at your fingertips. 

When you are moving around that much software across public cloud and datacenter OpenShift deployments, you need strong automated testing solutions. Building that to the scale (1,000s of clusters per week) and diversity (AWS, Azure, GCP, IBM Public, vSphere, OpenStack, RHV) needed for OpenShift 4 was a large engineering project. 
In this new world of OTA coupled with continuous improvements, we refined how we were leveraging CI/CD to the point of extending its checkpoints all the way down to our customers.
Nightly Builds:
Starting today, we have decided to expose our customers and partners to an opportunity to gain access to our nightly builds. 
In the past, we would have had to run a high touch beta to give our customers this level of access. By leveraging the new tooling built for OTA and the above continuous improvements, we are now in a position where we can expose nightly builds for a future release of OpenShift (in this case OpenShift 4.2).
These are the caveats of nightly builds: they are not for production usage, they are not supported, they will have little documentation, and not all the features will be baked in them. But we intend for them to get better and better the closer they get to a GA milestone. The documentation should slowly grow as well. 
What they do offer is the ability to get the earliest possible look at features as they are being developed. That can help during deployment planning and ISV level integrations. It is for those reasons we feel our customers and partners will truly enjoy this new opportunity.
It’s About Working as a Community:
We believe that a transformation in software development is beginning with our continuous contributions in open source, our new Kubernetes over the air upgrades, and the automated integration, testing and rollout that makes these nightly builds available. 
We have extended the classic continuous feedback loop used in agile software development to reduce the time it takes to bring fixes and innovation from customer to community to Red Hat back to the customer. The health of production clusters is assessed and translated directly to critical interventions and fixes that rapidly make their way out to the fleet. 
A software loop that in the past might have only been available to hosted services is now available for users and customers of OpenShift. We see this as a true evolution in open source software and I’m excited about where it is going to lead us…together. 
To find out more go to try.openshift.com. Log in and select your infrastructure. Then look for the new developer preview link!
The post Introducing Red Hat OpenShift 4.2 in Developer Preview: Releasing Nightly Builds  appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Now Available: Google Cloud Platform Developer Preview on OpenShift 4.2

In the spirit of releasing early and often, we’re pleased to announce a Developer Preview of installation support for Red Hat OpenShift 4.2 on the Google Cloud Platform. Download the latest stable nightly build and give it a try!
As with our other supported cloud providers, the installer CLI will handle the necessary infrastructure creation as well as deploy the latest version of OpenShift. This means anyone with an account on GCP can now go from zero to running a production quality cluster by answering seven simple questions and waiting for everything to boot up. Just wire up Google Cloud DNS, provide the installer CLI with suitable permissions and let it take care of the rest. Join our mailing list and let us know what you think!

Tried the CLI and found a deployment topology not yet handled out of the box? We want to hear about that too! 
Afterwards you might consider taking a test drive of our user provided infrastructure preview. If you have prior experience managing compute instances, cloud firewalls, VPCs and load balancers with the gcloud CLI you shouldn’t find it significantly more time consuming once you’ve made your infrastructure decisions. 
We still believe opinionated installs are ideal for most deployments (especially when based on the technology powering OpenShift Online), but we realize the real world can be a complicated place. 
Our desire for this more advanced installation method is to afford users the utmost control over the underlying infrastructure all the while ensuring the critical Day 2 benefits of OpenShift’s automated upgrades, monitoring and world class support remain intact.  
The post Now Available: Google Cloud Platform Developer Preview on OpenShift 4.2 appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Of Ranchers and iPads: How British Columbia Replaced Paperwork with OpenShift and Aporeto

Morgan Mueller is a Range Agrologist working out of Williams Lake
The cattle rancher relies on a few trusty belongings out on the dusty trail: a good horse, strong coffee and a well-charged iPad with a backup battery. That last pairing of items may seem far astray from the rucksacks of those that herd “dogies,” steers and moo-cows, but in the northwestern region of Canada, there used to be even stranger things carried in the trail bags of cowherds.
For many years, herds of cattle grazing on provincial government land had to be documented and accounted for by hand. That meant a mountain of paperwork for rangers upon their return to the ranch. Instead of a bag full of beans and rawhide, they were lugging around a phonebook’s worth of paperwork to account for just where their bovines had been.
When Todd Wilson, product director of Enterprise DevOps for the Province of British Columbia, and his team began working with Red Hat OpenShift and Aporeto, they weren’t thinking about the cattle grazing on grasslands 1,000 miles north of them. Instead, they were looking for a way for the software developers inside the government of British Columbia to increase their velocity.
“Most of us are concentrated in Victoria, but we do have remote teams in Vancouver, Kamloops and Prince Rupert. Our datacenter is in Kamloops, so we’re fairly distributed across the province,” said Wilson, describing the teams he’s working with.
“We’re about three years into trying to transform how the province of British Columbia develops its applications and solutions,” said Wilson. “Getting into GitHub was one of the first things we did. We also began evangelizing the benefits of open source as a way to level the technical playing field between the government sector and the enterprise sector.”
Wilson said that OpenShift served as a unified platform for application developers across the province, giving all the teams using the system a consistent way to deliver their products. Once that existed, the next step was to bring legacy systems, such as mainframe databases, online within reach of the OpenShift cluster. That’s where Aporeto came in. Wilson said the team was asking, “How do we evolve our security story so we can branch out of our OpenShift cluster and address some security needs of legacy systems in legacy zones, while also accessing cloud services, without dying on the ever growing complexity of firewall rules?”
Aporeto provided an encrypted pathway to those older systems, bringing them online and accessible to OpenShift users and developers. This type of full system access via a cloud-like developer experience has unlocked software as a path to solve even the most obscure and non-digital of problems within the province.
Problems such as that phonebook of paper required to document cattle grazing.
Home Page on the Range
“We’re trying to decrease barriers to some remote areas up north with resource management and range apps. Ranchers in remote areas can apply to graze their cattle on provincial land. One of the things these ranchers were challenged with was connectivity. They’d have to print their plans on paper and account for where these cattle are grazing, and whose cattle is where. Now they have an iPad app. They download it, they can take it offline, on a horse on the range, collect data on their ride, count cows, then upload all that stuff. It eliminates a paper process. This was revolutionary for those ranchers. They’d typically have to spend a whole week doing paperwork after a range ride. Now that’s compressed,” said Wilson.
This success is emblematic of the specific ways British Columbia is now able to solve its problems with software. Wilson said they’re now hosting over 100 applications on OpenShift, and that other provinces in Canada are looking into how they can leverage similar architectures, and share open source projects to cut down on replicated development work.
The ultimate goal, however, is embodied in the British Columbia Developers’ Exchange. This platform allows the government to engage with individual developers and small teams to address software problems in a manner similar to bug bounties: problems are defined, and small teams or individuals can take them on in exchange for pay. The goal is to eliminate the need for these teams to become full fledged, paperwork encumbered government contractors, saving everyone time and work.
But opening the doors of public systems to the actual public requires some intense architectural design and some serious security considerations. Wilson said this is the next big step for the BC teams.
“We’ve got a strategy we’re kicking off now: a zero trust framework project. We kicked off [in June]. Aporeto is forming part of that, but they’re not the entire story. We’re using tools to help secure the supply chain, adding better static source code analysis into the workflow, and building a registry of signed images that we vet and maintain. Now this is done in an ad hoc manner and not regularized. The idea is this project will provide all the kit and a process around that kit to provide confidence in the business area that the apps are kept up to date. One of the biggest challenges we have in the datacenter is pretty much zero visibility into these apps. When you ask what is the footprint of our vulnerabilities, they can’t tell you because they don’t know,” said Wilson.
The full solution to this problem involves many tools across the chain, he said: “OpenShift, Aporeto, Aqua Security, GitHub and Artifactory. All this produces a ton of artifacts around our security posture. We need to knit that together in a regular way. We want to make sure the development teams have a nice easy way to use the right libs, tools and images, and they can get their apps through to production,” said Wilson.
The post Of Ranchers and iPads: How British Columbia Replaced Paperwork with OpenShift and Aporeto appeared first on Red Hat OpenShift Blog.
Source: OpenShift