OpenShift 4.2: Console customization

In OpenShift 4, we built a brand new UI (the Console) from the ground up with the goal of keeping it simple, while still giving admins the ability to customize and extend it for their needs. In this blog, we’ll answer the following Console-related questions:

What can I customize in the Console?
What are some of the Console customization use cases?
How can I enable the different Console customizations?

The new Console customizations are built using Custom Resource Definitions (CRDs), giving cluster admins a powerful mechanism to modify the Console as they see fit. The Console code detects these CRDs and dynamically updates the Console views. New in OpenShift 4.2, the following Console CRDs are available for use:

ConsoleNotifications
ConsoleExternalLogLinks
ConsoleLinks
ConsoleCLIDownloads

The first thing to notice is that we use the identifier “Console” as the prefix for all of our Console customization CRDs. You can access the Console CRDs from the Administration / Custom Resource Definitions navigation item in the Console. Just filter by “Console” to remove other items from the list:
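If you prefer the CLI, you can produce the same filtered list with a single command (this assumes you are logged in to the cluster with the oc client):
oc get crd | grep console.openshift.io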

Now let’s explore what each of these CRDs enable us to customize:
The ConsoleNotifications CRD allows admins to create a banner in a color of their choice, with optional text and hyperlinks embedded in it. The banner can be placed at the top, the bottom, or both the top and bottom of the page. Take a look at the following sample schema:
apiVersion: console.openshift.io/v1
kind: ConsoleNotification
metadata:
  name: example
spec:
  text: This is an example notification message with an optional link.
  location: BannerTop # Other options are BannerBottom, BannerTopBottom
  link:
    href: 'https://www.example.com'
    text: Optional link text
  color: '#fff'
  backgroundColor: purple
This generates the following view:

Potential Use Case: I want to clearly mark that this is a production cluster.
The ConsoleExternalLogLinks CRD enables you to link to external logging solutions instead of using OpenShift Container Platform’s EFK logging stack.
hrefTemplate:
hrefTemplate is an absolute, secure URL (it must use https) for the log link, including variables to be replaced. Variables are specified in the URL with the format ${variableName}, for instance ${containerName}, and are replaced with the corresponding values from the resource. The resource is a pod.
The supported variables are:
* ${resourceName} – name of the resource which contains the logs
* ${resourceUID} – UID of the resource which contains the logs
* e.g. `11111111-2222-3333-4444-555555555555`
* ${containerName} – name of the resource’s container that contains the logs
* ${resourceNamespace} – namespace of the resource that contains the logs
* ${podLabels} – JSON representation of labels matching the pod with the logs
* e.g. `{"key1":"value1","key2":"value2"}`
Example hrefTemplate: https://example.com/logs?resourceName=${resourceName}&containerName=${containerName}&resourceNamespace=${resourceNamespace}&podLabels=${podLabels}
Take a look at the following sample logging configuration:
apiVersion: console.openshift.io/v1
kind: ConsoleExternalLogLink
metadata:
  creationTimestamp: '2019-09-09T18:50:09Z'
  generation: 1
  name: example
  resourceVersion: '310302'
  selfLink: /apis/console.openshift.io/v1/consoleexternalloglinks/example
  uid: a5a05876-d332-11e9-9414-0aebe39a74f4
spec:
  hrefTemplate: >-
    https://example.com/logs?resourceName=${resourceName}&containerName=${containerName}&resourceNamespace=${resourceNamespace}&podLabels=${podLabels}
  text: Example Logs

Potential Use Case: I want to send my logs to DataDog.
The ConsoleLinks CRD enables a cluster admin to add links to the following menus:

User Menu
Help Menu
Application Menu

User Menu
For the User menu you have the following configuration items:

DISPLAY STRING
URL

Take a look at the following sample link:
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example
spec:
  href: 'https://www.example.com'
  location: UserMenu
  text: Additional user menu link
This generates the following User Menu:

Potential Use Case: I want to link to additional user information.
Help Menu
For the Help menu you have the following configuration items:

DISPLAY STRING
URL

Take a look at the following sample link:
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example
spec:
  href: 'https://www.example.com'
  location: HelpMenu
  text: Additional help menu link
This generates the following Help Menu:

Potential Use Case: I want to link to additional help information.
Application Menu
For the Application menu you have the following configuration items:

DISPLAY STRING
URL
SECTION
LOGO

Take a look at the following sample link:
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example
spec:
  applicationMenu:
    section: ThirdParty Applications
  href: 'https://www.example.com'
  location: ApplicationMenu
  text: My App link
This generates the following Application Menu:

Potential Use Case: I want to link to another application dashboard.
The ConsoleCLIDownloads CRD is designed to enable admins to create download links to useful command-line tools for their users. The CLI download description can include markdown such as paragraphs, unordered lists, code, links, etc.
Take a look at the following sample CLI:
apiVersion: console.openshift.io/v1
kind: ConsoleCLIDownload
metadata:
  creationTimestamp: '2019-07-25T07:03:43Z'
  generation: 1
  name: example
  resourceVersion: '413626'
  selfLink: /apis/console.openshift.io/v1/consoleclidownloads/example
  uid: 56df3c2e-aeaa-11e9-b4e5-0690f22365f6
spec:
  description: >
    This is an example CLI download description that can include markdown such
    as paragraphs, unordered lists, code, [links](https://www.example.com), etc.
    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce a lobortis
    justo, eu suscipit purus.
  displayName: examplecli
  links:
    - href: 'https://www.example.com'
The results will generate the following view:
Potential Use Case: Allow third-party or customer applications to offer their CLIs for download.
Using CRDs to enable customization affords us the ability to programmatically update the Console. Any Operator can now extend the Console by using the Console customization CRDs. The following example shows a link to the Couchbase user interface via the Application Launcher menu. This entry was programmatically added when the Couchbase Operator was installed. Operators can use any of the Console CRDs to enhance their user experience in the Console.
 

The post OpenShift 4.2: Console customization appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Mirantis Partners With OpenStack Foundation to Support Upgraded COA Exam

Answering community demand, the Certified OpenStack Administrator Exam is back and now available for the Rocky release
Campbell, CA, October 17, 2019 — Mirantis announced today that it is providing resources to the OpenStack Foundation, including becoming the new administrators of the upgraded Certified OpenStack Administrator (COA) exam.
“With the OpenStack market forecasted to grow to $7.7 billion by 2022 according to 451 research, the demand for Certified OpenStack Administrators is clearly strong and set to continue growing for many years to come,” said Mark Collier, COO of the OpenStack Foundation. “We are excited to collaborate with Mirantis, who has stepped up to provide the resources needed to manage the COA, including the administration of the vendor-neutral OpenStack certification exam.”
During the three years of the COA offering, nearly 3,000 professionals across 77 countries have taken the exam, providing qualified talent to meet the growing number of OpenStack job opportunities. The exam’s intent is to seed a vibrant market and ecosystem of OpenStack professionals and training providers, and there are now dozens of OpenStack training providers around the world.
“As one of its early community leaders, Mirantis has trained more than 20,000 professionals on OpenStack and continues to see significant career opportunities for individuals with skills in administering and operating the platform,” said Dave Van Everen, SVP, Marketing, Mirantis. “We’re extremely proud of our accomplishments as an OpenStack training provider and look forward to our collaboration with the OpenStack Foundation continuing to offer a vendor-neutral certification exam that can help community members grow their careers in meaningful ways.” 
Anyone interested in becoming a COA can buy an exam through the OpenStack website or through one of the many OpenStack training partners in the marketplace. If an organization is interested in becoming an official COA Training Partner, please contact ecosystem@openstack.org. 
About the OpenStack Foundation (OSF)
The OpenStack Foundation (OSF) supports the development and adoption of open infrastructure globally, across a community of over 100,000 individuals in 187 countries, by hosting open source projects and communities of practice, including datacenter cloud, edge computing, NFV, CI/CD and container infrastructure.
The post Mirantis Partners With OpenStack Foundation to Support Upgraded COA Exam appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

OpenShift 4.2: The API Explorer

One of the most notable feature additions in Red Hat OpenShift 4.2’s Web Console UI is the new Explore page, featuring API explorer functionality. This feature will help novice users to learn more about Kubernetes resources while completing everyday tasks in the console.
What is the API explorer?
The API explorer is a new area of the console where you can sort through the various Kubernetes resources and learn more about how they can be used. For each resource, users will get the definition, the metadata that makes up the resource itself, and the individuals with access to view and create instances on the cluster.  
Why did we add the Explore page?
The console user interface (UI) is a good place to get started using OpenShift, especially for novice users. An IBM design article [1] describes how UIs act as a central place to browse systems or start making sense of large data sets. Web UIs are also great for tasks that require human intervention. On the other hand, command line interfaces (CLIs) are great for more automated tasks or tasks where the user knows exactly what they want to do and how to do it.
Through user feedback, we confirmed that OpenShift users do leverage the web console to learn about core concepts behind OpenShift and the underlying Kubernetes framework. We saw an opportunity to improve the learning experience right in the console. The Explore page is a place where users can dig more thoroughly into Kubernetes concepts and resources.
How can you use the API Explorer?
Here’s how it works. Visit the Explore page by navigating to the Home section and clicking Explore. You should see a list of the API resources available for viewing.
On this page, you can filter by group, version, or scope (namespace or cluster). You can also sort the list by kind, group, version, or namespaced resources.

If you click into a resource you will see the resource’s overview, where you’ll get information about the resource including whether it’s namespaced, what actions you can take on it, short names, and a description. This is just like using the command oc explain in the CLI. Using the tabs, you can access the resource’s schema and its project-specific information such as instances and access review.
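The same information is available from the CLI; for example, to inspect the Resource Quota schema referenced below, you can drill into nested fields with oc explain:
oc explain resourcequota
oc explain resourcequota.metadata.ownerReferences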

The Schema tab allows you to drill down into some of the resource’s specific schema. In the example above, we’ve navigated from general Resource Quota schema to more specific information about its metadata and more specifically its ownerReferences. Breadcrumbs allow you to easily navigate back to a higher level of granularity.
Under Access Review, you can see which users, groups, and service accounts have access to perform a certain action on a specific resource. In the example below, we can see that two groups and one user have permission to create Resource Quotas. To edit access, navigate to the Administration section.

You can also access the API Explorer from the YAML editor when editing resources.
Click the ‘View Schema’ link in the upper right hand corner of the YAML editor and the resource schema will open in a side panel on the right. Now you can view a resource’s schema while editing a resource!

Through the addition of the Explore page, we hope to increase our users’ ability to learn more about Kubernetes and the underlying framework of OpenShift.
If you’d like to learn more about what the OpenShift team is up to or provide feedback on any of the new 4.2 features, please take this brief 3-minute survey.
The post OpenShift 4.2: The API Explorer appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Tips for taking the new OpenStack COA (Certified OpenStack Administrator) exam – October 2019

The Certified OpenStack Administrator (COA) certification, originally launched in the middle of 2016, was the first professional certification offered by the OpenStack Foundation. It is designed to help companies identify top talent in the industry and help job seekers demonstrate their skills.
A Certified OpenStack Administrator is a professional, typically with at least six months of OpenStack experience, who has the skills required to provide day-to-day operation and management of an OpenStack cloud.
Thanks to the collaborative efforts of the OpenStack Foundation and Mirantis, the Certified OpenStack Administrator (COA) exam is back, effective October 2019!
Mirantis is an official OpenStack Foundation training partner, with several OpenStack classes designed to help you fully prepare for the COA exam. Mirantis is also partnering with the OpenStack Foundation on the development and management of the new COA exam!
The new COA exam is based on OpenStack Rocky and covers the core compute, storage, image, and networking services. 
This blog is intended to provide you with valuable information on the new COA exam through a “Question & Answer” format.
What was announced for the new COA re-launch?
Mirantis announced today that it is providing resources to the OpenStack Foundation, including becoming the new administrators of the upgraded Certified OpenStack Administrator (COA) exam.
We are planning to continue COA exam sales starting October 17. If you’re interested in becoming a COA, you will be able to buy an exam through the OpenStack website or through one of the many OpenStack training partners in the marketplace.
What is different for the new COA?
Some items that have changed include:

The exam is browser-based, using the most recent version of Google Chrome. Support for Microsoft Edge and Mozilla Firefox is coming soon.
The exam dates and times will be scheduled in advance and posted on the COA events Web page.
The use of any online docs (for example, docs.openstack.org) is no longer supported.
The domain knowledge requirements have changed slightly:

Core OpenStack components are still covered
Supported OpenStack release is now Rocky

This blog should answer many, if not all, of your questions.
Why get certified?
Great question to start with!
The COA is an industry-wide, vendor-neutral certification, verifying that you have a predefined skill set. Once you get your certification, add it to your resume! Managers hiring OpenStack professionals trust the COA and use it to weed out less qualified job applicants. Plus, with technology changing so rapidly, having a COA certification shows you are maintaining your skill set and keeping pace with the technology.
How do I register for the COA exam?
Simple, visit the COA events Web page!
Where can I find logistical details related to taking the exam?
Details related to cost, ID requirements, system requirements, instructions on how to enroll, and more, can be found on the OpenStack COA Web page.
Candidates are monitored virtually by a proctor during the exam session via streaming audio, video, and screen sharing feeds. The screen sharing feed allows proctors to view candidates’ desktops (including all monitors). 
The exam is browser-based. You can take the exam in your office or at home. However, you need to be in a quiet place and alone. You might be asked to use your webcam to verify your environment. 
Each COA exam is Guaranteed to Run (GTR) if there is at least one student registered. Visit the OpenStack COA Web page for more details. You must register in advance for the COA.
Note: The COA exam is in English only.
What skills should I focus on?
You are required to know how to use both the OpenStack command line client and the (Horizon) Dashboard UI.
In general, it is easiest to perform as many tasks as possible using the Dashboard UI.  However, there are several tasks that require command line use.
The exam focuses on OpenStack services for Identity (Keystone), Compute (Nova), Object Storage (Swift), Block Storage (Cinder), Networking (Neutron), and Images (Glance). 
For more details, read the OpenStack COA Requirements.  These requirements have changed slightly for the new COA. Please make sure you read them!
Do I need to know how to install and configure OpenStack?
No, during the exam you use a pre-configured all-in-one environment. The tasks/questions focus on the operation and administration of an OpenStack environment. For example, creating users, images, networks, and so on.
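To give a concrete sense of such tasks, here are a few illustrative CLI calls; the project, names and values are made up purely for this example:
openstack user create --project demo --password secret alice
openstack network create private-net
openstack subnet create --network private-net --subnet-range 192.168.10.0/24 private-subnet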
What OpenStack release is covered by the COA exam?
The COA is based on the OpenStack Rocky release.
What if my organization is running an older release of OpenStack, such as Mitaka?
In general, if you are experienced with more recent OpenStack releases, then you should be able to complete the required tasks for the COA exam.  For example, you need to be familiar with the OpenStack command line client. If your current release does not include the command line client, then you might not be successful.
How (& when) do I know if I passed?
Exams are scored automatically and a score report will be made available within three (3) business days. A passing score of 70% or higher is required. You are notified via email with your score and certification-related information.  You will receive a certificate and logo (add it to your email signature!) for personal use.
The certification is valid for 36 months from the exam date.
What if I don’t pass the exam?
If you do not pass the exam, you can retake it.  However, you must pay full price for the retake.
What if I took the previous COA?
If you took the previous COA and are certified, then you do not need to do anything until your certification expires.
If your COA certification has expired, you should sign up to take the new COA exam to be re-certified.
If you have an unused voucher for the previous COA, please contact the OpenStack Foundation at cert@openstack.org
What about the Mirantis OCM100 OpenStack certification & exam?
The Mirantis OCM100 (OpenStack Certification) is being discontinued. If you are interested in an OpenStack certification, please enroll in the COA, through the OpenStack Foundation using the OpenStack COA Web page.
If you have a current unused voucher for the Mirantis OCM100 exam, you can still take the OCM100 exam through December 20, 2019. The OCM100 voucher can not be used for the COA exam. Note, passing the OCM100 exam does not certify you for the COA.
If you are currently OpenStack certified through Mirantis, you do not need another OpenStack certification unless it is requested by your employer.
What is the difference between the OpenStack COA exam and exams from other vendors?
The OpenStack COA exam is a vendor neutral exam. Tasks are performed on an OpenStack cluster without any vendor add-ons that might change the way OpenStack works. Reference implementations are utilized, such as Logical Volume Manager (LVM) for Block Storage, Open vSwitch (OVS) for L2 networking, or KVM/QEMU for the hypervisor.
Other vendor exams are focused on vendor products, and are typically one certification in a series of multiple certifications, all focused on the vendor product(s). For example, Red Hat has their own certification for their OpenStack product: EX210 – Red Hat Certified System Administrator in Red Hat OpenStack exam. The Red Hat training and certification includes, for example, Red Hat® Ceph Storage, which is not vendor neutral.
Any Hints & Tips you can share?
First, read the COA Exam Tips from the OpenStack Foundation.
Next, ensure your machine is compatible with the exam environment requirements. Run the compatibility check tool before the day of the exam.
There are several older resources available to you, from 2016-17, discussing the original COA, that might still contain some valuable information:

Tips to pass OpenStack Foundation (COA) Certified OpenStack Administrator exam.
The OpenStack COA Exam: 10 tips for better chances
How to Pass the Certified OpenStack Administrator Exam
The COA: Why it matters for your career and how to pass it
COA – What You Need to Know
Are You Certifiable? Why Cloud Certifications Matter

How about some tips while taking the exam?

Probably the most important tip I can give you: pay attention to the time!  Don’t spend too much time on any one question/task!  
Attempt to answer all of the questions. Skip the ones you don’t know and work on the ones you do know.
The Dashboard UI was designed to be user friendly – use it as much as possible.
Use copy/paste as much as possible to avoid wasting time typing, including correcting errors!
Access to the OpenStack docs is no longer supported!
Use the command line help (--help).
If you encounter an issue with the exam environment, let the exam proctor know ASAP.

During the exam, am I allowed to use the online documents for help?
No. You are not allowed access to online docs. If you try to access any non-exam URL, such as docs.openstack.org, you will fail the exam.
All other documents, including hand-written notes, are not allowed.
I’m not familiar with the CLI.  Where can I go for help?
Suppose, for example, you are asked to create a new image from the CLI.  Assuming you know that creating an image requires use of the openstack image create command, here is an example of the help:
openstack image create --help
The response provides the command syntax plus a brief explanation of each operand.
Here is the command syntax section:

You can see, for example, image-name is required.
Further, it makes sense that you need to specify the image source, such as a file (--file).
The remaining text (from the --help request, not shown here) explains more details about each operand. For example:

Image visibility: what is the difference between public, private, community, or shared?  Is there a default for image visibility?
Disk format: what formats are supported?  Is there a default disk format?
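Putting those operands together, a complete invocation might look like the following (the file name and image name are just examples):
openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public my-image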

How do I know if I’m ready for the COA?
First, as discussed earlier, you should have practical, day-to-day, experience with OpenStack.
In addition, review the COA requirements. Can you perform the listed tasks:

… without referring to the online doc for every task? 
… without referring to personal notes or cheat sheets?
… without referring to content from a class?
… without relying on the online help or man pages?

If you use help (--help), do you understand the output? For example, when creating an image, you still need to have a general understanding of what operands to use (file, format, etc), so that the --help output makes sense.

Remember, the COA exam expects you to have day-to-day experience using and managing OpenStack for at least 6 months.
On average, a third of COA exam takers do not pass. You can retake the exam at any time, however, you must pay full price for each retake. 
Lastly, practice, practice, practice. Practice makes perfect!
OK, but how do I practice for the exam?
If needed, use a tool, such as DevStack, to deploy an OpenStack environment to test your skills.
You might also enroll in a class. Need more info on training?  Keep reading!
What training is available prior to the exam?
Excellent question!
The best path to certification is through one of the OpenStack Foundation training partners who provide vendor neutral OpenStack training, such as Mirantis.
Mirantis provides two courses that prepare you for the COA exam:

OS100: OpenStack Bootcamp I (3 days)
OS250: OpenStack Accelerated Bootcamp (4 days)

Each Mirantis course provides a Comprehensive Practice set of lab exercises where you can test your OpenStack knowledge and skills. If you complete the Comprehensive Practice without help from the class instructor and without referring to the class materials, you are ready to take the COA!
In addition, each Mirantis course includes a voucher to take the new COA exam!
You can enroll in a public (Instructor-Led) class or a self-paced (OnDemand) class.  In addition, you can also request a private class for your company.
All Mirantis OpenStack courses are at the Rocky release – a perfect match for the COA exam!
Need more details? Check us out!
There are several COA Prep courses available from other vendors. However, if any course has not been updated for the new COA, it might not be beneficial to you.
What if I have other questions?
While I tried to create a fairly comprehensive list of questions for this blog, you might have others.
Please send an email to the OpenStack Foundation at cert@openstack.org for any other questions.
The post Tips for taking the new OpenStack COA (Certified OpenStack Administrator) exam – October 2019 appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Istio Webinar Postmortem

Live demos are a thing of beauty — you never know when the demo gods will decide to break your precious setup. During yesterday’s webinar, Your Application Deserves Better than Kubernetes Ingress: Istio vs. Kubernetes, the first thing I wanted to show during the Istio demo was exposing an app using NodePort. Applying the deployment file resulted in the following:
$ kubectl apply -f flaskapp-deployment.yaml
$ kubectl get pods
NAME                            READY STATUS
flaskapp-deployment-pod-1       0/1 ImagePullBackOff
flaskapp-deployment-pod-2       0/1 ImagePullBackOff
flaskapp-deployment-pod-3       0/1 ImagePullBackOff
What gives? This deployment spec has worked time and again and has no business failing to pull the image. I could only guess that Docker Hub was down (unlikely), and I had two options: 1) rebuild the image from source, or 2) use a backup environment to show the end state. In order to stay on topic within the given time period, I decided to go with option #2 to at least showcase Istio Ingress at its best.
After the webinar I decided to take a look at the environment; and lo and behold!
$ kubectl get pods
NAME                            READY STATUS
flaskapp-deployment-pod-1       1/1 Running
flaskapp-deployment-pod-2       1/1 Running
flaskapp-deployment-pod-3       1/1 Running
And a quick search of Docker Hub Status later:

As if by magic, Docker Hub had decided to stop working within the exact time period of the webinar. I learned a few things from this event:

Don’t rely on a public repository / link for your demos
Failures occur where you least expect them
Always prepare a backup

So that being said, I present to you how I envisioned the demo for this session:

I hope you enjoyed the live session regardless, and if you didn’t attend, I’m sorry you missed out on all the fun. Here’s hoping for a more successful demo next time!
The post Istio Webinar Postmortem appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Introducing Red Hat OpenShift 4.2: Developers get an expanded and improved toolbox  

Today Red Hat announces Red Hat OpenShift 4.2, extending its commitment to simplifying and automating the cloud and empowering developers to innovate.
Red Hat OpenShift 4, introduced in May, is the next generation of Red Hat’s trusted enterprise Kubernetes platform, reengineered to address the complexity of managing container-based applications in production systems. It is designed as a self-managing platform with automatic software updates and lifecycle management across hybrid cloud environments, built on the trusted foundation of Red Hat Enterprise Linux and Red Hat Enterprise Linux CoreOS. 
The Red Hat OpenShift 4.2 release focuses on tooling that is designed to deliver a developer-centric user experience. It also helps cluster administrators by easing the management of the platform and applications, with the availability of OpenShift migration tooling from 3.x to 4.x, as well as newly supported disconnected installs. 
How Developers can innovate on Kubernetes
Red Hat OpenShift is designed to help organizations implement a Kubernetes infrastructure that is designed for rapid application development and deployment. The result is a platform that enables IT operations and developers to collaborate together and effectively deploy containerized applications. 
With the OpenShift 4.2 release, developers have a streamlined Kubernetes experience, thanks to a dedicated developer perspective, new tooling and additional plugins that they can enable for container builds, CI/CD pipelines and serverless deployments. This empowers developers to focus on coding rather than dealing with the specifics of Kubernetes operations. 

Application topology view in developer console.

New developer perspective in the OpenShift Console.
Updates in OpenShift 4.2 to help developers include the availability of these client tools:

Web Console with a developer perspective so developers are able to focus on what matters to them, surfacing only information and configuration developers need to know. An enhanced UI for application topology and application builds makes it easier for developers to build, deploy and visualize containerized applications and cluster resources.

odo, a developer-focused command line interface that simplifies application development on OpenShift. Using a “git push” style interaction, this CLI helps developers who are unfamiliar with Kubernetes create applications on OpenShift without needing to understand the details of Kubernetes operations (see the short sketch below).
Red Hat OpenShift Connector for Microsoft Visual Studio Code, JetBrains IDE (including IntelliJ) and Eclipse Desktop IDE, making it easier to plug into existing developer pipelines. Developers can develop, build, debug and deploy their applications on OpenShift without leaving their favorite coding IDE.
Red Hat OpenShift Deployment Extension for Microsoft Azure DevOps. Users of this DevOps toolchain can now deploy their built applications to Azure Red Hat OpenShift, or any other OpenShift cluster directly from Microsoft Azure DevOps.

Visual studio plug-in view.
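As a rough sketch of the “git push” style odo flow mentioned above, a component can be created, pushed and exposed in three commands (the component type, name and port are illustrative, based on the odo 1.x command set):
odo create nodejs myapp
odo push
odo url create --port 8080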
OpenShift on a laptop or desktop
With Red Hat CodeReady Containers now generally available, team members can develop on OpenShift on a laptop. A preconfigured OpenShift cluster is tailored for laptop or desktop development, making it easier to get going quickly with a personal cluster.
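Once the crc binary is downloaded, getting a personal cluster running is essentially a two-step affair (a sketch; exact prompts and flags vary by version):
crc setup
crc start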
Service Mesh
Based on the Istio, Kiali and Jaeger projects and enhanced with Kubernetes Operators, OpenShift Service Mesh is designed to simplify the development, deployment and ongoing management of applications on OpenShift. It delivers a set of capabilities well suited for modern, cloud native applications such as microservices. This helps to free developer teams from the complex tasks of having to implement bespoke networking services for their applications and business logic. 
With Red Hat OpenShift Service Mesh, recently made generally available on OpenShift 4, customers can benefit from an end-to-end developer-focused experience. This includes key capabilities such as tracing and measurement, visualization and observability, and “one-click” service mesh installation and configuration. Operational and security benefits of the service mesh include encryption of east/west traffic in the cluster and integration with an API gateway from Red Hat 3scale.

Enhanced visualization of cluster traffic with Kiali in OpenShift Service Mesh.
Serverless
OpenShift Serverless helps developers deploy and run applications that will scale up or scale to zero on-demand. Based on the open source project Knative, OpenShift Serverless is currently in Technology Preview and is available as an Operator on every OpenShift 4 cluster. The OpenShift Serverless Operator provides an easy way to get started and install the components necessary to deploy serverless applications or functions with OpenShift. With the Developer-focused Console perspective available in OpenShift 4.2, serverless options are enabled for common developer workflows, such as Import from Git or Deploy an Image, allowing users to create serverless applications directly from the console.

 
 
Configuring a serverless deployment in the OpenShift Console.
Beyond the integration with the developer console, some key serverless updates that make development on Kubernetes easier include: kn, the Knative CLI providing an intuitive user experience; the ability to group objects necessary for applications; immutable point-in-time snapshots of code and configuration; and the ability to map a network endpoint to a specific revision or service. These features, available to try out in Technology Preview through OpenShift Serverless, help make it easier for developers to get started with serverless architecture and give them the flexibility to deploy their applications regardless of the infrastructure environment across the hybrid cloud, without lock-in concerns.
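For example, deploying and then updating a serverless service with kn takes only a couple of commands; the sample image here is the standard Knative hello-world container, used purely as an illustration:
kn service create helloworld --image gcr.io/knative-samples/helloworld-go --env TARGET=OpenShift
kn service update helloworld --env TARGET=Serverless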
Cloud-native CI/CD with Pipelines
Continuous Integration and Continuous Delivery (CI/CD) are key practices of modern software development allowing developers to deploy more quickly and reliably. CI/CD tools provide teams the ability to get feedback through a streamlined and automated process, which is critical for agility. In OpenShift, you have a choice of using Jenkins or the new OpenShift Pipelines for CI/CD capabilities. 
We offer these two options because, while Jenkins is used by the majority of enterprises today, we are also looking to the future of cloud-native CI/CD with the open source Tekton project. OpenShift Pipelines, based on Tekton, better supports pipeline-as-code and GitOps approaches common in cloud-native solutions. In OpenShift Pipelines each step is executed in its own container, so resources are only used when the step is running. This gives developers full control over their team’s delivery pipelines, plugins and access control with no central CI/CD server to manage.
OpenShift Pipelines is currently in Developer Preview and available as an Operator on every OpenShift 4 cluster. OpenShift users can also choose to run Jenkins in OpenShift, which is available in OpenShift 3 and 4. 

Red Hat OpenShift Pipelines.
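To give a feel for the pipeline-as-code style, a minimal Tekton task, the building block that OpenShift Pipelines assembles into pipelines, looks roughly like the sketch below (it follows the Tekton alpha API current at the time; the names and image are illustrative):
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: say-hello
spec:
  steps:
    - name: echo
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ['echo']
      args: ['Hello from a Tekton step']
As described above, each step runs in its own container, so resources are consumed only while that step executes.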
In total these updates can enable development teams to get started more quickly in Red Hat OpenShift.
Containers managed across the hybrid cloud
OpenShift is designed to deliver a cloud-like experience across the hybrid cloud with automated installation and platform updates. While the release has already been available across major public cloud providers, private clouds, virtualization platforms and bare-metal servers, 4.2 introduces general availability of OpenShift 4 on two new public clouds, Microsoft Azure and Google Cloud Platform, as well as on a private cloud, OpenStack.
Installer enhancements have been made across environments. New in this release is support for disconnected installs. Disconnected installation and cluster-wide proxy enablement with support for providing your own CA bundles help customers meet requirements for regulatory standards and internal security protocols. With disconnected installation, customers can get the latest version of OpenShift Container Platform in environments not accessible via the Internet, or with strict image testing policies.
Additionally, full stack automated installs using Red Hat Enterprise Linux CoreOS, a smaller footprint variant of Red Hat Enterprise Linux, can help customers get started in less than an hour to be up and running in the cloud. 
OpenShift Container Platform enables customers to build, deploy and manage their container-based applications consistently across cloud and on-premises infrastructure. With simplified, automated and faster installations of OpenShift Container Platform 4.2 now generally available for AWS, Azure, OpenStack and GCP, enterprises can operate the Kubernetes platform across a hybrid cloud environment.
“At Google Cloud we are committed to providing customers with the flexibility to deploy all types of enterprise workloads on GCP. Google and Red Hat share a long-standing collaboration across Kubernetes, support for the open source community, and a mutual belief that open standards and open innovation are good for customers. We look forward to helping OpenShift customers leverage the power of Google Cloud through this partnership,” said Rayn Veerubhotla, director, partnerships at Google Cloud.
“We are excited to see the 4.2 release bringing new tooling and local development experiences to OpenShift users,” said Gabe Monroy, director of program management for the Azure Application Platform, Microsoft. “Azure offers a range of OpenShift solutions including Azure Red Hat OpenShift, a fully managed OpenShift service jointly operated by Red Hat and Microsoft.  With Red Hat OpenShift Container Platform 4 now generally available on Azure, enterprises can create hybrid cloud environments that can meet their current needs and also evolve to handle future requirements.”
Migration tooling to help users upgrade from OpenShift 3 to 4
Existing OpenShift users can move to the latest release of OpenShift Container Platform more easily with the new workload migration tooling available alongside OpenShift 4.2 in the coming weeks. What was previously a more manual undertaking is now simpler and faster: customers can copy workloads from one OpenShift cluster to another. For example, a cluster admin selects an OpenShift Container Platform 3.x source cluster and a project (or namespace) within that cluster. The admin then chooses to copy or move the associated persistent volumes to the destination OpenShift Container Platform 4.x cluster. Applications continue running on the source cluster until the admin decides to shut down the application on the source cluster.
Covering a wide variety of enterprise use cases, the migration tool provides options to stage the migration and cutover. Persistent volumes can be handled in a couple of ways: 

Copying the data using a middle repository, leveraging project Velero. This migration tool can target a storage backend that is different from the original. For example, moving from Gluster to Ceph.
Keeping the data in the existing repository and attaching it to the new cluster – swinging the PV.
Using Restic, for a filesystem copy approach. 

Early access
Customers often would like earlier access to code, in beta form, to try it out. To help expand the ways our customers can experiment with the latest OpenShift updates, starting with the previews of OpenShift 4.2, customers and partners have an opportunity to gain access to our nightly builds. Note that nightly builds are not for production usage: they are not supported, they have little documentation, and not all features will be baked into them. We intend for them to get better and better the closer they get to a GA milestone.
With this, customers and partners have the ability to get the earliest possible look at features as they are being developed, which can help during deployment planning and ISV level integrations. 
Note for community users of OKD
Work has begun on the OKD 4.0 release, the open source community distribution of Kubernetes that powers Red Hat OpenShift, and all are invited to give feedback on current development efforts for OKD4, Fedora CoreOS (FCOS) and Kubernetes by joining in the conversations on the OKD Working Group or check on the status of the efforts on OKD.io.
Get started
Organizations across industries and around the world use OpenShift to accelerate application development and delivery. 
“As a company transitioning to containers and Kubernetes, we wanted to work with Red Hat given their deep expertise in Kubernetes for enterprises. In adopting Red Hat OpenShift 4, we are able to focus on our business of IT cybersecurity — our developers get to focus on code, while administrators can work with a trusted platform that can be easier to manage thanks to automated updates.” — Sean Muller, enterprise architect & technology leader, LiquidIT
“We use Red Hat OpenShift as our Kubernetes solution, running a number of our critical systems in our private cloud environment. Using the automation in Red Hat OpenShift, we are able to continually deliver better functionality for our customers from our teams with short time-to-market and low risk.” — Alv Skjellet, head of IT platforms, Norsk Tipping
“With our focus on the commercial transportation industry, we have been working with Red Hat given their expertise in cloud-native technology to power our fleet management solutions for transportation and logistics companies. Red Hat OpenShift 4 is already helping us to unify our work on this trusted enterprise Kubernetes platform across our hybrid cloud environment. Each cluster can be deployed using a single command. Kubernetes Operators help to enable “one-click” upgrades and lifecycle management. Operators provide automation for our applications teams to move easily to an as-a-service model. We look forward to maximizing our use of the tools that are especially powerful for our developers so they can focus on innovations on our applications.” — Justin Newcom, vice president of Global Information Technology, Omnitracs 
With the OpenShift 4.2 release, available soon, we continue to deliver the industry’s most comprehensive enterprise Kubernetes platform with full-stack automated operations to manage hybrid cloud and multi-cloud deployments, optimized for developer productivity and frictionless innovation. Learn more:

Try out OpenShift
Read about Red Hat OpenShift customers

Footnote:
The use of the word “partnership” does not mean a legal partnership or any other form of legal relationship between Red Hat, Inc. and any other entity.
The post Introducing Red Hat OpenShift 4.2: Developers get an expanded and improved toolbox   appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Building an open ML platform with Red Hat OpenShift and Open Data Hub Project

While the potential of Artificial Intelligence (AI) and Machine Learning (ML) abounds, organizations face challenges scaling AI. The major challenges are:

Sharing and Collaboration: Unable to easily share and collaborate, iteratively and rapidly 
Data access: Access to data is bespoke, manual and time consuming 
On-demand: No on-demand access to ML tools and frameworks and compute infrastructure
Production: Models are remaining prototypes and not going into production 
Tracking and Explaining AI: Reproducing, tracking and explaining results of AI/ML is hard 

Unaddressed, these challenges impact the speed, efficiency and productivity of highly valuable data science teams. This leads to frustration and a lack of job satisfaction, and ultimately the promise of AI/ML to the business goes unrealized.
IT departments are being challenged to address the above. IT has to deliver a cloud-like experience to data scientists: a platform that offers freedom of choice, is easy to access, is fast and agile, scales on demand and is resilient. The use of open source technologies prevents lock-in and maintains long-term strategic leverage over cost.
In many ways, a similar dynamic has played out in the world of application development over the past few years, leading to microservices, the hybrid cloud, automation and agile processes. IT has addressed this with containers, Kubernetes and the open hybrid cloud.
So how does IT address this challenge in the world of AI? By learning from its own experience in application development and applying it to AI/ML. IT addresses the challenge by building an AI platform that is container based, that helps build AI/ML services with agile processes that accelerate innovation, and that is built with the hybrid cloud in mind.

To do this, we start with Red Hat OpenShift, the industry-leading container and Kubernetes platform for the hybrid cloud, with over 1300 customers worldwide and a fast-growing ML software and hardware ecosystem (e.g. NVIDIA, H2O.ai, Starburst, PerceptiLabs, etc.). Several customers (e.g. HCA Healthcare, ExxonMobil, BMW Group, etc.) have deployed containerized ML toolchains and DevOps processes on OpenShift, together with our ecosystem partners, to operationalize their preferred ML architecture and accelerate workflows for data scientists.
We have also initiated the Open Data Hub project, an example architecture built from several upstream open source technologies, to show how the entire ML lifecycle can be demonstrated on top of OpenShift.
The Open Data Hub Project
So what is the Open Data Hub project? Open Data Hub is an open source community project that implements end-to-end AI/ML workflows, from data ingestion to transformation to model training and serving, with containers on Kubernetes on OpenShift. It is a reference implementation of how to build an open AI/ML-as-a-service solution based on OpenShift with open source tools, e.g. TensorFlow, JupyterHub, Spark, etc. Red Hat IT has operationalized the Open Data Hub project to provide AI and ML services within Red Hat. In addition, OpenShift integrates with key ML software and hardware ISVs such as NVIDIA, Seldon, Starburst and others in this space to help operationalize your ML architecture.

The Open Data Hub project addresses the following use cases:

As a Data Scientist, I want a “self-service cloud like” experience for my Machine Learning projects. 
As a Data Scientist, I want the freedom of choice to access a variety of cutting edge open source frameworks and tools for AI/ML. 
As a Data Scientist, I want access to data sources needed for my model training. 
As a Data Scientist, I want access to computational resources such as CPU, Memory and GPUs. 
As a Data Scientist, I want to collaborate with my colleagues and share with them my work, obtain feedback improve and iterate rapidly.  
As a Data Scientist, I want to work with developers (and devops) to deliver my ML models and work into production 
As a data engineer, I want to provide data scientists with access to various data sources with appropriate data controls and governance. 
As an IT admin/operator, I want to easily control the life cycle (install, configure and updates) of the open source components and technologies. And I want to have appropriate controls and quotas in place. 

The Open Data Hub project integrates a number of open source tools to enable an end-to-end AI/ML workflow. The Jupyter notebook ecosystem is provided as the primary interface and experience for the data scientist, mainly because Jupyter is widely popular amongst data scientists today. Data scientists can create and manage their Jupyter notebook workspaces with an embedded JupyterHub. While they can create or import new notebooks, the Open Data Hub project also includes a number of pre-existing notebooks called the “AI Library”. This AI Library is an open source collection of AI components, machine learning algorithms and solutions to common use cases that allows rapid prototyping.
JupyterHub integrates with OpenShift RBAC (Role Based Access Control), so users only need their existing OpenShift platform credentials and sign on only once (single sign-on). JupyterHub has a convenient user interface (called the spawner) that lets users choose a notebook from a drop-down menu and specify the amount of compute resources (such as the number of CPU cores, memory and GPUs) needed to run it. Once the data scientist spawns the notebook, the Kubernetes scheduler (part of OpenShift) takes care of scheduling it appropriately. Users can perform their experiments, then save and share their work. In addition, expert users can access the OpenShift CLI shell directly from their notebooks to make use of Kubernetes primitives such as Job, or OpenShift capabilities such as Tekton or Knative. They can also access these from OpenShift’s convenient graphical user interface (GUI), the OpenShift web console.

Next, the Open Data Hub project provides capabilities to manage data pipelines. Ceph is provided as the S3-compatible object store for data. Apache Spark is provided to stream data from external sources or from the embedded Ceph S3 store, and also for manipulating data. Apache Kafka is also provided for advanced control of the data pipeline (where there can be multiple data ingest, transformation, analysis and store steps).
The data scientist has accessed data and built a model. Now what? The data scientist may want to share her or his work as a service with others, such as fellow data scientists, application developers or analysts. Inference servers make this happen. The Open Data Hub project provides Seldon as the inference server, so that models can be exposed as a RESTful service.
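As a rough illustration of what exposing a model as a RESTful service means in practice, a client can request predictions over plain HTTP; the host name and feature values below are hypothetical and simply follow the general shape of Seldon's prediction API:
curl -X POST http://model.example.com/api/v0.1/predictions \
  -H 'Content-Type: application/json' \
  -d '{"data": {"ndarray": [[5.1, 3.5, 1.4, 0.2]]}}'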
At this point, there are one or more models being served by the Seldon inference server. To monitor the performance of model serving, the Open Data Hub project also provides metrics collection and reporting with Prometheus and Grafana, two very popular open source monitoring tools. This provides the necessary feedback loop to monitor the health of the AI model, particularly in production.

As you can see, the Open Data Hub project enables a cloud-like experience for an end-to-end workflow, from data access and manipulation through model training and finally model serving.
Putting it all Together
As an OpenShift administrator, how do I make all this happen? Enter the Open Data Hub project Operator!
The Open Data Hub project Operator manages the install, configuration and lifecycle of the Open Data Hub project. This includes the deployment of the aforementioned tools such as JupyterHub, Ceph, Spark, Kafka, Seldon, Prometheus and Grafana. The Open Data Hub project Operator can be found in the OpenShift web console (as a community operator). OpenShift admins can therefore choose to make the Open Data Hub project available to specific OpenShift projects. This is done once. After that, data scientists log into their project spaces in the OpenShift web console, find the Open Data Hub project Operator installed and available in their projects, and can instantiate the Open Data Hub project for their project with a simple click and start accessing the tools (described above) right away. This setup can also be configured to be highly available and resilient.

To get started with the Open Data Hub project, try it out by simply following the install and basic tutorial instructions. Architecture details about the Open Data Hub project can be found here. Looking forward, we are excited about the roadmap. Among the highlights are additional integrations with Kubeflow, a focus on data governance and security, and integrations with rules-based systems, especially the open source tools Drools and OptaPlanner. As a community project, you can have a say in all this by joining the Open Data Hub project community.
In summary, there are serious challenges to scaling AI/ML within an enterprise and realizing its full potential. Red Hat OpenShift has a track record of addressing similar challenges in the software world. The Open Data Hub is a community project and reference architecture that provides a cloud-like experience for end-to-end AI/ML workflows on a hybrid cloud platform with OpenShift. We have an exciting and robust roadmap in service of this basic mission. Our vision is to encourage a robust and thriving open source community for AI on OpenShift.
 
The post Building an open ML platform with Red Hat OpenShift and Open Data Hub Project appeared first on Red Hat OpenShift Blog.
Source: OpenShift

What is Rancher? The KaaS Landscape, Part 1

This article is the first in a series of pieces describing the various players in the Kubernetes as a Service (KaaS) area. The idea is to give those considering using KaaS an idea of what’s out there and what will best suit their needs. (If you’re not sure whether this technology is for you, you might start with a look at KaaS in relation to another developer enablement strategy, Platform as a Service (PaaS).)
Can’t stand the suspense? Download this PDF to get a feature-by-feature comparison of KaaS and PaaS solutions from leading vendors.
We’ll start by discussing the pros and cons of Rancher, one of the first such tools on the market.
What is Rancher?
Rancher is primarily a KaaS, in that it’s designed to help deploy and manage Kubernetes clusters. It includes both a web-based GUI and a command line interface that enable you to create and scale not just clusters, but also Kubernetes objects such as pods and deployments.  You can also import existing clusters to be managed by the Rancher interface.  
While it does include an application catalog that gives it some similar capabilities to a PaaS, its architecture places it firmly in the KaaS camp; commands generally get proxied through the Rancher server, but once deployed, clusters can also operate independently.

Rancher is designed to integrate with other infrastructure tools such as CI/CD tools, code repositories, monitoring, and user management, and can deploy clusters to most available providers, such as OpenStack, AWS, and Microsoft Azure.
A quick overview of using Rancher
The Rancher server can be downloaded and installed for free, so you can quickly get a feel for what it does. Let’s take a quick look at what the experience is like.

Once the software is deployed, you can finish configuration from the provided web address.  Start by creating your credentials and confirming the URL you’ll use to access the server.
Now you’re ready to add a new cluster.
Click Add Cluster to get started.
Out of the box, Rancher supports multiple cloud providers.  Choose what’s convenient for you.

If your provider requires additional information, the UI will prompt you for it.

Now we need to configure the nodes for the cluster.  For example, if you specified 2 nodes, you will see two additional VMs created. Note that by default these nodes are in a different zone than the Rancher server.

Rancher handles the security, and in some ways it’s very convenient; for example Global RBAC control makes it easier to work with multi-cluster applications.  On the other hand, you have to be sure you understand what it’s doing; because of the way resources are created, you must make sure to delete them from Rancher or you might have difficulty deleting them externally.  (Or at least, that’s what happened when we tested this on Google Compute Engine.)
While the cluster is deploying, you can download the Rancher CLI to manage it if necessary.

Once the cluster has been deployed, you can deploy applications from provided repositories. You can also add your own charts repository.

You can also deploy containers directly. Rancher makes it easy to deploy using typical patterns, such as a stateful set or a pod that runs on a cron schedule, but there is one downside: it does not appear to support private image repositories, which can make it difficult to deploy sensitive applications.
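Under the hood, those patterns correspond to standard Kubernetes workload objects; for instance, a pod that runs on a cron schedule is ultimately a CronJob resource along these lines (the name, image and schedule are illustrative):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: '0 2 * * *'
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: busybox
              command: ['sh', '-c', 'echo generating report']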

So that’s the general workflow; let’s summarize the ups and downs of using Rancher.
Advantages of using Rancher for KaaS
Rancher has been around for several years, and as such it’s a fairly comprehensive system, and it does have a number of advantages, including:

Support for multi-cluster applications:  To help mitigate deployment and management errors, Rancher can deploy the same Helm-based application on multiple clusters simultaneously.  It will also handle upgrading those applications.
Support for multiple operating systems:  While some KaaS and PaaS offerings are locked into a specific operating system — for example, OpenShift requires the use of Red Hat Enterprise Linux (RHEL) — Rancher supports multiple operating systems, including Ubuntu 16.04 and 18.04, RHEL, the RancherOS container-optimized operating system, and even Windows Server 2019.
User management:  By providing the ability to control global RBAC settings, Rancher makes it easier to manage multiple clouds and multi-cloud applications.
Networking support:  Rancher includes Container Network Interface (CNI) support for Canal, Flannel, Calico, and Weave.
Storage support:  Rancher includes support for multiple storage drivers, including:

Amazon EBS Disk
AzureFile
AzureDisk
Ceph RBD
Gluster Volume
Google Persistent Disk
OpenStack Cinder Volume
ScaleIO Volume
StorageOS
VMware vSphere Volume

Disadvantages of using Rancher for KaaS
While Rancher is a very capable product, there are a few areas where you will want to be cautious:

No secure storage of secrets:  Rancher isn’t designed for heavy-duty security; secrets are stored in plain text rather than encrypted.
VMs run using CRDs:  Many organizations still need to run VMs for applications that can’t be containerized, but Rancher’s method for doing this is through the use of Custom Resource Definitions to create “VM Pods”.  This creates additional complexity and overhead compared to using something like Virtlet, which treats VMs as first-class citizens.
No private registry capability: while Rancher does enable you to import a private application catalog, it doesn’t have the ability to deploy containerized applications from a private image repository, making it more difficult to ensure particular images are used as the basis for applications.

Overall
Rancher is a comprehensive Kubernetes as a Service system that can increase developer productivity for general application development, including multi-cluster applications. For those building applications in highly controlled and regulated environments, for companies with significant non-containerized workloads, or where security is a topmost concern, however, it has some gaps that need to be considered.
The post What is Rancher? The KaaS Landscape, Part 1 appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

Launching OpenShift/Kubernetes Support for Solarflare Cloud Onload

This is a guest post co-written by Solarflare, a Xilinx company. Miklos Reiter is Software Development Manager at Solarflare and leads the development of Solarflare’s Cloud Onload Operator. Zvonko Kaiser is Team Lead at Red Hat and leads the development of the Node Feature Discovery operator.
 

Figure 1: Demo of Onload accelerating Netperf in a Pod
 
Solarflare, now part of Xilinx, and Red Hat have collaborated to bring Solarflare’s Cloud Onload for Kubernetes to Red Hat’s OpenShift Container Platform. Solarflare’s Cloud Onload accelerates and scales network-intensive applications such as in-memory databases, software load balancers and web servers. The OpenShift Container Platform empowers developers to innovate and ship faster as the leading hybrid cloud, enterprise Kubernetes container platform.
The Solarflare Cloud Onload Operator automates the deployment of Cloud Onload for Red Hat OpenShift and Kubernetes. Two distinct use cases are supported:

Acceleration of workloads using Multus and macvlan/ipvlan
Acceleration of workloads over a Calico network

This blog post describes the first use case; a future blog post will focus on the second use case. 
Solarflare’s Cloud Onload Operator provides an integration path with Red Hat OpenShift Container Platform’s Device Plugin framework, which allows OpenShift to allocate and schedule containers according to the availability of specialized hardware resources. The Cloud Onload Operator uses the Multus multi-networking support and is compatible with both the immutable Red Hat CoreOS operating system and Red Hat Enterprise Linux. The Node Feature Discovery operator is also a part of this story, as it helps to automatically discover and use compute nodes with high-performance Solarflare network adapters; Multus makes these adapters available to containers in addition to the usual Kubernetes network interface. OpenShift 4.2 will include the Node Feature Discovery operator.
Below is a network benchmark showing the benefits of Cloud Onload on OpenShift.
Up to 15x Performance Increase

Figure 2: NetPerf request-response performance with Onload versus the kernel
 
Figure 2 above illustrates the dramatic acceleration in network performance that can be achieved with Cloud Onload. With Cloud Onload, a NetPerf TCP request-response test completes significantly more transactions per second than with the native kernel network stack alone.
Moreover, performance scales almost linearly as we scale the number of NetPerf test streams up to the number of CPU cores in each server. In this test, Cloud Onload achieves eight times the kernel transaction rate with one stream, rising to a factor of 15 for 36 concurrent streams.
This test used a pair of machines with Solarflare XtremeScale X2541 100G adapters connected back-to-back (without going via a switch). The servers were using 2 x Intel Xeon E5-2697 v4 CPUs running at 2.30GHz.
Integration with Red Hat OpenShift
Deployment of Onload Drivers
The Cloud Onload Operator automates the deployment of the Onload kernel drivers and userspace libraries in Kubernetes.
For portability across operating systems, the kernel drivers are distributed as a driver container image. The operator ships with versions built against Red Hat Enterprise Linux and Red Hat CoreOS kernels. For non-standard kernels, one can build a custom driver container image. The operator automatically runs the driver container on each Kubernetes node, which loads the kernel modules.
The driver container also installs the user-space libraries on the host. Using a device plugin, the operator then injects the user-space libraries, together with the necessary Onload device files, into every pod that requires Onload.
Deployment of Onload on Kubernetes is significantly simplified by the operator: there is no need to build Onload into application container images or to write a custom sidecar injector or other logic to achieve the same effect.
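A pod opts in to this injection simply by requesting the device plugin resource in its spec. A minimal sketch (the solarflare.com/sfc resource name matches the DaemonSet example later in this post; the pod name and image are placeholders):
# Hypothetical pod that requests Onload injection from the Cloud Onload device plugin
apiVersion: v1
kind: Pod
metadata:
  name: onload-example
spec:
  containers:
    - name: app
      image: example.com/myapp:latest    # placeholder application image
      resources:
        limits:
          solarflare.com/sfc: 1          # resource exposed by the Cloud Onload Operator's device plugin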
Configuring Multus
OpenShift 4 ships with the Multus multi-networking plugin. Multus enables the creation of multiple network interfaces for Kubernetes pods.
Before we can create accelerated pods, we need to define a Network Attachment Definition (NAD) in the Kubernetes API. This object specifies which of the node’s interfaces to use for accelerated traffic, and also how to assign IP addresses to pod interfaces.
The Multus network configuration can vary from node to node, which is useful to assign static IPs to pods, or if the name of the Solarflare interface to use varies between nodes.
The following steps create a Multus network that can provide a macvlan subinterface for every pod that requests one. The plugin automatically allocates static IPs to configure the subinterface for each pod.
First, we create the NetworkAttachmentDefinition(NAD) object:
cat << EOF | oc apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: onload-network
EOF
Then on each node that uses this network, we write a Multus config file specifying the properties of this network:

mkdir -p /etc/cni/multus/net.d
cat << EOF > /etc/cni/multus/net.d/onload-network.conf
{
  "cniVersion": "0.3.0",
  "type": "macvlan",
  "name": "onload-network",
  "master": "sfc0",
  "mode": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "172.20.0.0/16",
    "rangeStart": "172.20.10.1",
    "rangeEnd": "172.20.10.253",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
EOF
Here, master specifies the name of the Solarflare interface on the node, and rangeStart and rangeEnd specify non-overlapping subsets of the subnet IP range.
An alternative to the macvlan plugin is the ipvlan plugin. The main difference between the ipvlan subinterfaces and the macvlan is that the ipvlan subinterfaces share the parent interface’s MAC address, providing better scalability in the L2 switching infrastructure. Cloud Onload 7.0 adds support for accelerating ipvlan subinterfaces in addition to macvlan subinterfaces. OpenShift 4.2 will add support for ipvlan.
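As a sketch, an ipvlan version of the node-level configuration shown above could be written to the same /etc/cni/multus/net.d/onload-network.conf path (this assumes Cloud Onload 7.0 or later; the interface name and IP range simply mirror the earlier macvlan example, and "l2" is the CNI ipvlan plugin's default mode):
{
  "cniVersion": "0.3.0",
  "type": "ipvlan",
  "name": "onload-network",
  "master": "sfc0",
  "mode": "l2",
  "ipam": {
    "type": "host-local",
    "subnet": "172.20.0.0/16",
    "rangeStart": "172.20.10.1",
    "rangeEnd": "172.20.10.253",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}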
Node Feature Discovery
A large cluster often runs on servers with different hardware. This means that workloads requiring high-performance networking may need to be explicitly scheduled to nodes with the appropriate hardware specification. In particular, Cloud Onload requires Solarflare XtremeScale X2 network adapters.
To assist with scheduling, we can use Node Feature Discovery. The NFD operator automatically detects hardware features and advertises them using node labels. We can use these node labels to restrict which nodes are used by the Cloud Onload Operator, by setting the Cloud Onload Operator’s nodeSelector property.
In the future, NFD will be available from the operator marketplace in OpenShift 4.2. At the time of writing, NFD is installed manually as follows:
$ git clone https://github.com/openshift/cluster-nfd-operator
$ cd cluster-nfd-operator
$ make deploy
We can check that NFD has started successfully by confirming that all pods in the openshift-nfd namespace are running:
$ oc get pods -n openshift-nfd
At this point, all compute nodes with Solarflare NICs should have a node label indicating the presence of a PCI device with the Solarflare vendor ID (0x1924). We can check this by querying for nodes with the relevant label:
$ oc get nodes -l feature.node.kubernetes.io/pci-1924.present=true
We can now use this node label in the Cloud Onload Operator’s nodeSelector to restrict the nodes used with Onload. For maximum flexibility, we can, of course, use any node labels configured in the cluster.
Cloud Onload Installation
Installation requires downloading a zip file containing a number of YAML manifests from the Solarflare support website https://support.solarflare.com. Following the installation instructions in the README.txt contained in the zip file, we edit the example custom resource to specify the kernel version of the cluster worker nodes we are running:
kernelVersion: "4.18.0-80.1.2.el8_0.x86_64"
and the node selector:
nodeSelector:
  beta.kubernetes.io/arch: amd64
  node-role.kubernetes.io/worker: ''
  feature.node.kubernetes.io/pci-1924.present: 'true'
We then apply the manifests:
$ for yaml_spec in manifests/*; do oc apply -f $yaml_spec; done
We expect to list the Solarflare Cloud Onload Operator on https://operatorhub.io soon, for installation using the Operator Lifecycle Manager and OpenShift’s built-in Operator Hub support.
Running the NetPerf Benchmark
We are now ready to create pods that can run Onload.
Netperf Test Image
We now build a container image that includes the netperf performance benchmark tool, using a Fedora base image. Most common distributions that use glibc are compatible with Onload; this excludes extremely lightweight images such as Alpine Linux, which uses musl instead of glibc.
The following Dockerfile produces the required image.
netperf.Dockerfile:

FROM fedora:29
# Build tools plus basic networking utilities for the test image
RUN dnf -y install gcc make net-tools httpd iproute iputils procps-ng kmod which
# Fetch, build, and install netperf 2.7.0 from source
ADD https://github.com/HewlettPackard/netperf/archive/netperf-2.7.0.tar.gz /root
RUN tar -xzf /root/netperf-2.7.0.tar.gz
RUN netperf-netperf-2.7.0/configure --prefix=/usr
RUN make install
CMD ["/bin/bash"]
We build the image:
$ docker build -t netperf -f netperf.Dockerfile .
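Before the DaemonSet below can pull this image, it typically needs to be tagged and pushed to a registry reachable from the cluster. A sketch, using the same {{ docker_registry }} placeholder for the registry hostname as the DaemonSet example:
$ docker tag netperf {{ docker_registry }}/netperf:latest
$ docker push {{ docker_registry }}/netperf:latest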
Example onloaded netperf daemonset
This is an example daemonset that runs netperf test pods on all nodes that have Solarflare interfaces.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: netperf
spec:
  selector:
    matchLabels:
      name: netperf
  template:
    metadata:
      labels:
        name: netperf
      annotations:
        k8s.v1.cni.cncf.io/networks: onload-network
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      containers:
        - name: netperf
          image: {{ docker_registry }}/netperf:latest
          stdin: true
          tty: true
          resources:
            limits:
              solarflare.com/sfc: 1
Here {{ docker_registry }} is the registry hostname (and :port if required).
The important sections are:

The annotations section under spec/template/metadata specifies which Multus network to use. With this annotation, Multus will provision a macvlan interface for the pods.
The resources section under containers requests Onload acceleration from the Cloud Onload Operator.

Running Onload inside accelerated pods with OpenShift/Multus
Each netperf test pod we have created has two network interfaces:
eth0: the default OpenShift pod network interface
net1: the Solarflare macvlan interface to be used with Onload
Any traffic between the net1 interfaces of two pods can be accelerated using Onload by either:

Prefixing the command with "onload"
Running with the environment variable LD_PRELOAD=libonload.so (see the sketch below)
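For example, the netserver command used later in this post could be started with the preload variable instead of the onload prefix. A sketch; note that, unlike the onload --profile=latency wrapper used below, this applies Onload's default settings rather than the latency profile:
$ LD_PRELOAD=libonload.so netserver -p 4444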

Note: One caveat to the above is that two accelerated pods can only communicate using Onload if they are running on different nodes. (Onload bypasses the kernel’s macvlan driver to send traffic directly to the NIC, so traffic directed at another pod on the same node will not arrive.)
To run a simple netperf latency test we open a shell on each of two pods by running:
$ kubectl get pods
$ kubectl exec -it <pod_name> bash
On pod 1:
$ ifconfig net1
net1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
      inet 172.20.0.16  netmask 255.255.0.0  broadcast 0.0.0.0
[…]
bash-4.4$ onload --profile=latency netserver -p 4444
oo:netserver[107]: Using OpenOnload 201811 Copyright 2006-2018 Solarflare Communications, 2002-2005 Level 5 Networks [4]
Starting netserver with host 'IN(6)ADDR_ANY' port '4444' and family AF_UNSPEC

On pod 2:
$ onload --profile=latency netperf -p 4444 -H 172.20.0.16 -t TCP_RR
Running multiple NetPerf pairs in parallel produced the results shown above.
Obtaining Cloud Onload
Visit https://solarflare.com/cloud-onload/ to learn more about Cloud Onload or make a sales inquiry. An evaluation of Solarflare’s Cloud Onload Operator for Kubernetes and OpenShift can be arranged on request.
 
The post Launching OpenShift/Kubernetes Support for Solarflare Cloud Onload appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift