Import Your WordPress Site to WordPress.com — Including Themes and Plugins

Since the early days of WordPress, it’s been possible to export your posts, images, and other content to an export file, and then transfer this content into another WordPress site.

Select WordPress from the list of options to import your site.

This basic WordPress import moved content, but didn’t include other important stuff like themes, plugins, users, or settings. Your imported site would have the same pages, posts, and images (great!) but look and work very differently from the way you or your users expect (less great).

There’s a reason that was written in the past tense: WordPress.com customers can now copy over everything from a self-hosted WordPress site — including themes and plugins — and create a carbon copy on WordPress.com. You’ll be able to enjoy all the features of your existing site, plus the benefits of our fast, secure hosting with tons of features, and our world-class customer service.

Select “Everything” to import your entire WordPress site to WordPress.com.

To prep for your import, sign up for a WordPress.com account — if you’d like to import themes and plugins, be sure to select the Business or eCommerce plan — and install Jetpack (for free) on your self-hosted site to link it to WordPress.com. To start the actual import, head to Tools > Import in your WordPress.com dashboard.

Then sit back and relax while we take care of moving your old site to a new sunny spot at WordPress.com. We’ll let you know when it’s ready to roll!
Source: RedHat Stack

Securing Your Containers Isn’t Enough

The post Securing Your Containers Isn’t Enough appeared first on Mirantis | Pure Play Open Cloud.
The following is a guest post by Tim Reilly, CEO of Zettaset.
Containers have many use cases and benefits, but despite their impact, containers also create a host of new security challenges. Containers have short lifespans and move fast, so gaining full visibility into what’s happening in the environment can be difficult. Unlike virtual machines, containers aren’t always isolated from each other, so one compromised container can quickly lead to another. And if your data is compromised, it won’t matter that the container itself is secure.
When it comes to containers, the leading security concern continues to be the security of data that’s stored in them. Many organizations rely on access control, monitoring/logging, and existing workload security solutions to protect their container environments, but very few organizations have incorporated encryption because of the performance effect and complexity it often adds.
In a recent study, the extensive use of encryption, threat intelligence sharing, and integrating security into the DevOps process were all associated with lower than average data breach costs. Among those, encryption had the greatest impact – reducing breach costs by an average of $360,000. But encrypting data in containers is very different from encrypting data in other environments.
Join Zettaset & Mirantis on Wednesday, April 8th at 10am PDT for a deep dive into the issues surrounding protecting your container-based data and how to defend it against an ever-growing landscape of security threats and attackers.
In this webinar, our security experts will discuss:

Aspects of a container architecture that make it more vulnerable to security threats
How encryption protects data at the heart of your applications
How to overcome the hurdles of deploying encryption in containers while maintaining the benefits of DevOps environments
Best practices for implementing encryption in container environments

Save My Spot
Source: Mirantis

Make Your Business More Accessible with New Blocks

From our support sessions with customers each month, we know that growing your brand or business is a top website goal. And in this unprecedented time in which more people around the world are staying at home, it’s important to promote your products and services online to reach a wider audience and connect with more people.

Our team has been hard at work improving the block editor experience. We’ve launched six new blocks that integrate WordPress.com and Jetpack-enabled sites with popular services — Eventbrite, Calendly, Pinterest, Mapbox, Google Calendar, and OpenTable — enabling you to embed rich content and provide booking and scheduling options right on your blog or website.

Whether you’re an online boutique, a pilates studio, an independent consultant, or a local restaurant, these blocks offer you more ways to promote your brand or business. Take a look at each block — or simply jump to a specific one below.

Eventbrite | Calendly | Pinterest | Mapbox | Google Calendar | OpenTable

Promote online events with the Eventbrite block

Looking for a way to promote an online event (like your museum’s virtual curator talk or your company’s webinar on remote work), or even an at-home livestream performance for your fans and followers? Offering key features of the popular event registration platform, the Eventbrite block embeds events on posts and pages so your visitors can register and purchase tickets right from your site.

Quick-start guide:

1. To use this block, you need an Eventbrite account. If you don’t have one, sign up at Eventbrite for free.
2. In the block editor, click the Add Block (+) button and search for and select the Eventbrite Checkout block.
3. Enter the URL of your Eventbrite event. Read these steps from Eventbrite if you need help.
4. Select from two options: an In-page Embed shows the event details and registration options directly on your site. The Button & Modal option shows just a button; when clicked, the event details pop up so your visitor can register.

Learn more on the Eventbrite block support page.

Schedule sessions with the Calendly block

Want to make it easier for people to book private meditation sessions or language lessons with you? The Calendly block, featured recently in our guide on moving your classes online, is a handy way for your clients and students to book a session directly on your site — eliminating the time spent coordinating schedules. You can also use the Calendly block to schedule team meetings or group events.

Quick-start guide:

1. To use this block, you need a Calendly account. Create one for free at Calendly.
2. In the block editor, click the Add Block (+) button and search for and select the Calendly block.
3. Enter your Calendly web address or embed code. Follow these steps from Calendly if you need help.
4. Select from two styles: the Inline style embeds a calendar directly onto your site; the Link style inserts a button that a visitor can click to open a pop-up calendar.

Note: this block is currently available to sites on the WordPress.com Premium, Business, or eCommerce plans. It’s free on Jetpack sites.

Learn more on the Calendly block support page.

Up your visual game with the Pinterest block

Strong visuals help to provide inspiration, tell your stories, and sell your products and services. Pinterest is an engaging way for bloggers, influencers, and small business owners to enhance their site content and expand their following. With the Pinterest block, you can embed and share pins, boards, and profiles on your site.

Quick-start guide:

1. In the block editor, click the Add Block (+) button and search for and select the Pinterest block.
2. Paste the URL of a pin, board, or profile you’d like to display and click Embed. Note that you can only embed public boards.

Pro tip: in the block editor, go to Layout Elements and select Layout Grid to create a visually striking layout with pins, boards, and profiles, as shown above.

Display locations with the Map block

A map on your site is a quick visual way to display a location, like your restaurant’s takeout window or the drop-off spot for donations to a local food bank. Powered by mapping platform Mapbox, the Map block embeds a customized map on your site. Show the location of your business, a chain of boutique hotels, the meeting spots for your nonprofit’s volunteers, and more.

Quick-start guide:

1. In the block editor, click the Add Block (+) button and search for and select the Map block.
2. In the text field, type the location you want to display and select the correct location from among the results that appear.
3. Click on the red marker to edit the title and caption of the marker.
4. Explore the toolbar for block-specific settings. Add more markers, for example, by clicking the Add a marker button.
5. In the sidebar, customize your map’s appearance (including colors, height, and zoom level).

Explore more settings on the Map block support page.

Share your calendar with the Google Calendar block

Are you an author planning a book tour (or a series of online readings)? A digital marketing consultant hosting social media workshops? A neighborhood pop-up bakery? With the Google Calendar block, you can display a calendar of upcoming events or your hours of operation.

Quick-start guide:

1. In Google Calendar, click the three dots next to your calendar name and select Settings and sharing. Under Access Permissions, ensure Make available to public is checked. Click on Integrate calendar on the left and copy the code under Embed code.
2. In the block editor, click the Add Block (+) button, search for and select the Custom HTML block, and paste the code you copied in Google Calendar.
3. Publish your post or page. The next time you edit this post or page, you’ll see the code has been converted to shortcode.
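For reference, the embed code you copy from Google Calendar is an iframe along these lines (the calendar ID and timezone below are placeholders — yours will differ):

```html
<iframe
  src="https://calendar.google.com/calendar/embed?src=YOUR_CALENDAR_ID%40group.calendar.google.com&ctz=America%2FNew_York"
  style="border: 0" width="800" height="600" frameborder="0" scrolling="no">
</iframe>
```

Paste the whole snippet, unmodified, into the Custom HTML block.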

Explore more settings on the Google Calendar block support page.

Streamline reservations with the OpenTable block

If you’re a restaurant or cafe owner, a primary goal of your site is to increase the number of bookings. Sure, people aren’t dining out right now, but you can be ready to take reservations in the future. With the OpenTable block, people can reserve a table directly from a post or page instead of calling or booking through a different reservation service.

Quick-start guide:

1. To use this block, your restaurant must be listed on OpenTable. Create an OpenTable listing now.
2. In the block editor, click the Add Block (+) button and search for and select the OpenTable block.
3. Enter your OpenTable Reservation Widget embed code. Check this OpenTable guide if you need help.
4. Explore the block’s toolbar and sidebar settings. For example, choose from four different embed styles: Standard, Tall, Wide, and Button.

Note: this block is currently available to sites on the WordPress.com Premium, Business, or eCommerce plans. It’s free on Jetpack sites.

Learn more on the OpenTable block support page.

Which blocks are you most excited about?

Stay tuned for more new blocks soon!
Source: RedHat Stack

The ultimate guide to Kubernetes

The post The ultimate guide to Kubernetes appeared first on Mirantis | Pure Play Open Cloud.
One thing we know about Kubernetes is that it’s got a reputation for being difficult to use.  Fortunately, it doesn’t have to be that way. Here at Mirantis we’re committed to making things easy for you to get your work done, so we’ve decided to put together this guide to Kubernetes.
It will ultimately be organized in two ways: task-based, where you can find information on tasks such as deploying Kubernetes, creating an application, or communicating between containers, and object-based, where you can find out about specific Kubernetes objects, such as Deployments, Pods, or Services.
You’ll notice that some sections don’t have content yet; we plan on filling them out as time goes on, so if you have suggestions for topics you’d like us to cover, or links to resources you find particularly valuable, please let us know.
Task-based
Introduction to Kubernetes
Don’t Be Scared of Kubernetes
Kubernetes has the broadest capabilities of any container orchestrator available today, which adds up to a lot of power and complexity. That can be overwhelming for a lot of people jumping in for the first time — enough to scare people off from getting started. Here are five things you might be afraid of, and five ways to get started.
Introduction to Kubernetes
Kubernetes provides a way for operators to provide self service access for developers who need to instantiate and orchestrate containers. Here’s an introduction to get you started.
Deploying Kubernetes
Building Your First Certified Kubernetes Cluster On-Premises
While the following entries show how to create basic dev/test clusters, this article explains how to create a production cluster using Docker Enterprise.
How to install Kubernetes with Kubeadm: A quick and dirty guide
Sometimes you just need a Kubernetes cluster, and you don’t want to mess around. This article is a quick and dirty guide to creating a single-node Kubernetes cluster using Kubeadm, a tool the K8s community created to simplify the deployment process.
Multi-node Kubernetes with KDC: A quick and dirty guide
Kubeadm-dind-cluster, or KDC, is a configurable script that enables you to easily create a multi-node cluster on a single machine by deploying Kubernetes nodes as Docker containers (hence the Docker-in-Docker (dind) part of the name) rather than VMs or separate bare metal machines. 
Create and manage an OpenStack-based KaaS child cluster
Once you’ve deployed your KaaS management cluster, you can begin creating actual Kubernetes child clusters. These clusters will use the same cloud provider type as the management cluster, so if you’ve deployed your management nodes on OpenStack, your child cluster will also run on OpenStack.
How to deploy Airship in a Bottle: A quick and dirty guide
Airship is designed to deploy OpenStack, but it does so on top of Kubernetes; since the first thing Airship does is deploy a Kubernetes cluster, it’s another option for getting a cluster up and running.
Configuring Kubernetes and components
Virtlet: run VMs as Kubernetes pods
Virtlet enables you to run VMs as first class citizens within Kubernetes; this article explains how and why to make that work.
Everything you ever wanted to know about using etcd with Kubernetes v1.6 (but were afraid to ask)
The etcd key-value store is the only stateful component of the Kubernetes control plane. This makes matters for an administrator simpler, but when etcd went from v2 to v3, it was a headache for operators.
Coming soon:

Development
Infrastructure and operations
Security
Networking

Got suggestions for our upcoming sections? Let us know in the comments!
Source: Mirantis

Create with confidence — and better blocks

In the last few years, the teams working on the block editor have learned a lot about how people build sites now and how they want to build sites in the future.

The latest version represents the culmination of these discoveries, and the next stage in the editor’s evolution.

With better visuals and more advanced features, it’ll keep designers, developers, writers, and editors productive and happy, and — tension-building drumroll — it’s in your editor right now!

What’s new

With a comprehensive visual refresh, a plethora of new features, and dozens of bug fixes, the new block editor comes with a lot to unpack.

What follows is just a small (but delectable) sample of the many ways we’ve upgraded your editing experience. (You can get the full list of goodies in the release notes.)

We hope you enjoy.

A revamped editor UI

The first thing you’ll notice is the slick UI. Buttons, icons, text, and dropdowns are all sporting a contrast boost, with bolder colors and more whitespace between buttons, text labels, and menu items.

The new block editor’s UI

As you navigate through the editor’s menus, individual items are clearly highlighted, allowing you to quickly identify what you’ve selected.

Active menu items have distinct highlights

The block toolbars are now simpler, displaying the most commonly-used features. For example, paragraph blocks show only bold, italic, and link formatting buttons. You’ll find all the extra options in the dropdown menu.

The block toolbar options are simpler and uncluttered

What’s more, instead of listing blocks within a fixed-height container, the block inserter now spans the height of the window. You’ll now see more blocks and block categories at once with less scrolling.

The block inserter spans the full height of your screen

Introducing block patterns

With the block editor as your canvas you can design almost any layout you can imagine – but building intricate page structures should never get in the way of your creative process.

Here’s where the blocks really shine: along with individual blocks, the editor now includes block patterns, a library of predefined, reusable block layouts that you can use on any page or post.

To check out the list of available patterns, click on the block pattern icon (on the top right) to reveal a collection of pre-built layouts:

Block patterns are groups of individual blocks combined to create elegant layouts

Pick the pattern you want to use, and it will appear in your editor ready for you to customize with your own content.

Right now, you’ll find a few introductory patterns – Two Columns of Text, Two Buttons, Cover, and Two Images Side by Side – but we’ll be adding more and more patterns as they’re available. When the block patterns API opens up to third-party authors, you’ll also be able to develop and share your own.

(Have an idea for a great pattern? The block editor developer community is actively seeking ideas. The more ideas they receive, the better your editor will be!)

Colors, colors everywhere

When it comes to words and columns, websites aren’t newspapers: things don’t have to be black and white.

Use the new Text Color selector tool to change the color of sentences, and even individual words and letters. Highlight the text you’d like to change, then click on the arrow dropdown and select “Text Color.”

Select “Text Color” from the options

Pick the color of your word or character

 

To change the background colors of your columns, select the column and head to the sidebar, to Color settings.

Columns get background colors too!

The road ahead is paved with blocks

There’s still a long way to go, and the editor’s community of contributors hasn’t given its collective keyboards a moment’s rest. Work on polishing UI elements like the sidebar and dropdowns continues along with advancements to block patterns and other exciting features.

Are there ways we could improve the site editing experience even more? Please let us know! We’re always keen to hear how we can make the web a better place for everyone.
Source: RedHat Stack

How to Move Your Classes Online — and Charge for Them

We are proud to host many websites for language tutors, yoga schools, and personal fitness coaches around the world.

It’s exciting to see how educators and consultants across different industries are getting creative with their online offerings: language teachers conduct 1:1 sessions to help students hone pronunciation, yoga studios live-stream group sessions, and instructors lead writing boot camps via Zoom breakout rooms. Even my own strength coach is monitoring my workouts — I launch the camera on my phone, place it against the wall, and do deadlifts while he supervises.

Last year we launched Recurring Payments to support creators, consultants, small businesses, and other professionals in establishing dependable income streams. We were very pleased to discover that online educators using this feature are thriving as well!

Marta, for example, runs Spanish Teacher Barcelona, a Spanish language school located in — you guessed it! — Barcelona. She offers 1:1 sessions and classes in a coworking space in the city’s Gracia neighborhood. For customers that cannot meet in person, she hosts private lessons online, available with a subscription. She offers three subscription plans to meet the variety of needs of her students.

Ready to set up your own subscription-based service or move your existing classes online? Here’s a quick guide to get you set up with the right tools, so you can focus instead on providing the best educational environment possible. 

Set up your online class today

Below, we’ll cover the steps you can take to get your classes or private lessons up and running with the Recurring Payments feature. We’ll also recommend tools to make scheduling 1:1 sessions and operating your classes easier, like the Calendly block and various video conferencing tools. 

1. Create a “Subscribe” page to promote your class or service

You need to convince your customers that your subscription is worth paying for. A typical way to do this is with a “Subscribe” page where you explain the benefits of your services.

Take a look at the “Join” page on Longreads.com, an online publication that publishes and curates nonfiction storytelling on the web and funds stories with memberships:

A few tips to make your offer irresistible:

Focus on the benefits for the customer.
Provide a few subscription options, such as classes at different frequencies and at different price points.
Add testimonials if you can — people love to read reviews.

Create this page by going to My Sites → Pages → Add New.

2. Add a subscription with the Recurring Payments feature

Recurring Payments allows you to create renewable payments. Your subscribers will enter their credit card details, and will then be charged automatically every month or every year.

Recurring Payments is currently available on any of our paid plans. To get started, you’ll need to create an account with Stripe, a global money transfer service. We partner with Stripe to make sure payments end up safely in your bank account.

You can start collecting Recurring Payments in five minutes.

On the “Subscribe” page you created above, search for the “Recurring Payments” block:

After clicking “Connect to Stripe,” you’ll be able to connect your existing Stripe account or create a new one.

Now you can create your first subscription.

Set the price, frequency (we recommend monthly to start), and the title of your subscription, like Writing Bootcamp, 3 breakout sessions/month or Conversational French for Beginners, 4 classes/month.

That’s it! Your subscription is now created. Once you publish the page and activate your Stripe account, your customers will be able to subscribe to this service.

Subscriptions are dependable: your subscribers will be automatically charged at the beginning of the next renewal period (in a month or a year). You don’t have to remind or nudge them, and they also don’t have to remember to pay you — everything is handled.

For more details, please read this Recurring Payments support article.

Would you rather sell access to your services as a one-time purchase? Check out the Simple Payments feature.

3. Schedule your lessons

Your subscribers can set up a time for their lessons using a service like Calendly, a handy tool that allows them to select a free slot in your schedule. We recently created the Calendly block to bring some of the service’s key features to you. While editing your page, search for the “Calendly” block.

Remember to check if the subscription is active: before hopping on an online meeting, you need to confirm that the person scheduling a call is indeed a paying subscriber. Check the list of your active Recurring Payments subscribers located in your WordPress.com dashboard under My Sites → Earn → Payments. Read more about managing your list of subscribers.

4. Select a tool to host your class

Video conferencing tools are very useful for teaching. Apart from seeing the other person, you can share your screen, send files, or even host a session for multiple people, lecture-style.

You can use Google Hangouts, Skype, or Zoom (which is what we use for our meetings here at WordPress.com). Zoom has put together a handy tutorial for teachers.

If you’d like additional setup tips on selecting a theme for your website, adding content and media, and adding students as viewers or contributors, read our support tutorial on building a virtual classroom.

What amazing class are you going to launch?
Source: RedHat Stack

Expert Advice: How to Make a Great Website for Your Small Business – Webinar

Whether you already own a small business or are exploring the idea of starting one, you’ll come away from this 60-minute live webinar with a wealth of actionable advice on how to maximize your digital presence.

Date: Thursday, April 2, 2020
Time: 11:00 am PDT | 1:00 pm CDT | 2:00 pm EDT | 18:00 UTC
Registration link: https://zoom.us/webinar/register/4215849773038/WN_at0PB64eTo2I0zJx-74g2Q
Who’s invited: Business owners, freelancers, entrepreneurs, and anyone interested in starting a small business or side gig.

Hosts Steve Dixon and Kathryn Presner, WordPress.com Happiness Engineers, have many combined years of experience helping small-business owners create and launch successful websites. They’ll give you tips on site design, search engine optimization (SEO), monetization, and mobile optimization. You’ll be able to submit questions beforehand—in the registration form—and during the live webinar.

Everyone is welcome, even if you already have a site, and even if your site wasn’t built on WordPress.com. We know you’re busy, so if you can’t make the live event, you’ll be able to watch a recording of the webinar on our YouTube channel.

Live attendance is limited, so be sure to register early. We look forward to seeing you on the webinar!
Source: RedHat Stack

Part 2: How to enable Hardware Accelerators on OpenShift, SRO Building Blocks

Excerpt from a DriverContainer manifest
Introduction
In Part 1: How to Enable Hardware Accelerators on OpenShift we gave a high-level overview of the Special Resource Operator (SRO) and a detailed view of the workflow on enabling hardware accelerators. 
Part 2 will go into detailed construction of the enablement, and explain which building blocks/features the SRO provides to make life easier. 
The most important part is the DriverContainer and its interaction with the cluster during deployment and updates. We will show how we can handle multiple DriverContainer vendors, and how SRO can manage them. 
Automatic Runtime Information Injection
Parts of the enablement stack rely on different runtime information which needs to be included in manifests and other resources like BuildConfigs. 
SRO will auto inject this information using several keywords that are used as placeholders in the manifests. 
We are using Go’s text/template package to implement data-driven templates for generating textual output.
We have identified the following crucial runtime information that can be used inside the operator to update the resources or in the manifests to be auto injected. Going through the manifests one by one we will see those patterns in action. 

{{.RuntimeArchitecture}} For different architectures, one can download different drivers and take different actions. This is based on runtime.GOARCH. We can leverage the templating if clauses to create a mapping, e.g. {{if eq .RuntimeArchitecture "amd64"}}x86_64{{end}}. Here we create a mapping from the arch amd64, as exposed by OpenShift and Go, to x86_64, the latter being more common.
{{.OperatingSystemMajor}} Depending on the OS deployed on the workers, SRO will use NFD’s features to extract the correct OS version. We are mainly concerned with the major version; hence we replace this tag with rhel7, rhel8, or fedora31. One has to make sure that the files, containers, etc. being referenced actually exist.
{{.OperatingSystemMajorMinor}} Sometimes drivers are built for specific minor releases of the operating system. To have more control, one can use this tag to do a fine-grained selection.  This results in rhel7.6, rhel8.2 and so forth. 
{{.NodeFeature}} This is the label from the special resource as reported by NFD; e.g., in the case of the GPU it is feature.node.kubernetes.io/pci-10de.present. We are using this label throughout SRO to deploy the stack only on nodes that expose this hardware.
{{.HardwareResource}} This is a human-readable name of the hardware one wants to enable. We’re using this in the manifests so that we have unique names for these special resources. The name tag should follow this semantic: <vendor>-<specialresource>. In the case of the GPU, we are setting HARDWARE=nvidia-gpu in the Makefile that is used to populate the ConfigMap with the right annotations.
{{.ClusterVersion}} Sometimes features are only available on certain versions of the cluster being used to deploy images. See the DriverContainer section for how one would modify the image tag to pull a specific version.
{{.KernelVersion}} Building or deploying drivers without knowledge of the currently running kernel version makes no sense. Additionally, the kernel version is used together with the OS version to select the right drivers or to build them on the node.
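As a rough sketch of how this data-driven templating works with Go's text/template package (the struct, field values, and image name below are illustrative, not SRO's actual types), including the amd64-to-x86_64 mapping via an if clause:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// runtimeInfo is a hypothetical stand-in for the runtime data SRO gathers;
// the field names match the placeholders described above.
type runtimeInfo struct {
	RuntimeArchitecture  string
	OperatingSystemMajor string
	KernelVersion        string
}

// render expands the placeholders in a manifest snippet, mapping the
// Go arch name "amd64" to the more common "x86_64".
func render(info runtimeInfo) string {
	manifest := `image: example.com/driver-{{.OperatingSystemMajor}}-` +
		`{{if eq .RuntimeArchitecture "amd64"}}x86_64{{else}}{{.RuntimeArchitecture}}{{end}}` +
		`:{{.KernelVersion}}`
	tmpl := template.Must(template.New("manifest").Parse(manifest))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, info); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Prints: image: example.com/driver-rhel8-x86_64:4.18.0-147.el8.x86_64
	fmt.Println(render(runtimeInfo{
		RuntimeArchitecture:  "amd64",
		OperatingSystemMajor: "rhel8",
		KernelVersion:        "4.18.0-147.el8.x86_64",
	}))
}
```

The same mechanism extends naturally to the remaining placeholders; SRO simply executes the parsed manifest templates against the runtime information it has gathered from the cluster and NFD.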

Attaching Metadata to SRO Resources
Currently, there are two ways to attach metadata to resources in a cluster: (1) labels and (2) annotations. With labels, one can select objects and find collections of objects; one cannot do this with annotations. The extended character set of annotations makes them preferable for saving metadata, and since we’re not interested in any kind of filtering of these resources, SRO uses annotations to add metadata.
The following is a collection of annotations used throughout the operator to enhance the functionality of a specific resource (DaemonSet, Pod, BuildConfig, etc.).

specialresource.openshift.io/wait: “true” Sometimes one needs an ordered startup of resources, and a dependent resource cannot start or be deployed before the preceding one is in a specific state. This annotation instructs SRO to wait for a resource and only proceed with the next one when the previous one has reached a set state: a Pod in the Running or Completed state, a DaemonSet with all Pods in the Running state, etc.
specialresource.openshift.io/wait-for-logs: “<REGEX>” Besides having all Pods of a DaemonSet or a standalone Pod in the Running state, sometimes one needs some information from the logs to say that a Pod is ready. This annotation will look into the logs of the resource and use the “<REGEX>” to match against the log.
specialresource.openshift.io/state: “driver-container” This will tell SRO in which state the resources are supposed to be running. See the next chapter for a detailed overview of the states and validation steps. 
specialresource.openshift.io/driver-container-vendor: <nfd> We are coupling a BuildConfig with a DriverContainer with this annotation. We are using the NFD label of the special resource to have a unique key for this very specific hardware. In some circumstances, one could have several pairs of BuildConfigs and DriverContainers.
specialresource.openshift.io/nfd: <feature-label> This annotation is only valid in the hardware configuration ConfigMap and will be ignored otherwise. The feature label is a unique key for various functions in SRO and is the label exposed by NFD for specific hardware. 
specialresource.openshift.io/hardware: <vendor>-<device>  A human-readable tag, also only valid in the hardware configuration ConfigMap, to describe the hardware being enabled. It is the textual description of the NFD feature-label discussed in the previous point. Used for the naming of all OpenShift resources that are created for this hardware type. 
specialresource.openshift.io/callback: <function> Sometimes one will need a custom function to enable a specific functionality or to make changes to a resource that are only valid for this manifest. SRO will look for this annotation and execute the <function> only for this manifest. 

We are using one label (specialresource.openshift.io/config=true) to filter the hardware configurations during reconciliation, which is how SRO gathers the right configs for hardware enablement.
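To make this concrete, a hypothetical DaemonSet manifest using a few of these annotations might look like the sketch below (the names, image, and log pattern are illustrative, not taken from an actual SRO deployment):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nvidia-gpu-driver-build
  annotations:
    # SRO waits until all Pods of this DaemonSet are Running...
    specialresource.openshift.io/wait: "true"
    # ...and until this pattern appears in the Pod logs
    specialresource.openshift.io/wait-for-logs: "Done, loaded kernel module"
    # the state this resource belongs to
    specialresource.openshift.io/state: "driver-container"
spec:
  selector:
    matchLabels:
      app: nvidia-gpu-driver-build
  template:
    metadata:
      labels:
        app: nvidia-gpu-driver-build
    spec:
      nodeSelector:
        # only schedule on nodes exposing the hardware (NFD label)
        feature.node.kubernetes.io/pci-10de.present: "true"
      containers:
      - name: driver-container
        image: example.com/nvidia-gpu-driver:latest
```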
Predefined State Transitions in SRO and Validation
The SRO pattern follows a specific set of states. We went into great detail in Part 1 about what each of those states does, but for completeness, we are going to list the annotations used in the manifests and add some brief comments.
SRO can handle multiple DriverContainers at the same time. Sometimes one needs multiple drivers to enable a specific functionality. SRO will use customized labels for each vendor-hardware combination.

"specialresource.openshift.io/driver-container-<vendor>-<hardware>": "ready" Drivers are pulled or built on the cluster. We can now go on and enable the runtime.
"specialresource.openshift.io/runtime-enablement-<vendor>-<hardware>": "ready" Any functionality (e.g. a prestart hook) needed to enable the hardware in a container is done; move on and expose the hardware as an extended resource in the cluster.
"specialresource.openshift.io/device-plugin-<vendor>-<hardware>": "ready" The cluster has updated its capacity with the special resource; we can now deploy the monitoring stack.
"specialresource.openshift.io/device-monitoring-<vendor>-<hardware>": "ready" This is the last state that SRO watches. We can still use the other annotations to wait for resources or inject runtime information beyond this state.
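To make these state labels concrete, here is a hypothetical sketch of the labels a node might carry after the first two states have completed for NVIDIA GPU hardware (the nvidia-gpu tag matches the hardware ConfigMap shown later in this post; the snippet is illustrative, not taken from a live cluster):

```yaml
# Node metadata after the driver container and runtime enablement states
# have finished for the nvidia-gpu special resource:
metadata:
  labels:
    specialresource.openshift.io/driver-container-nvidia-gpu: "ready"
    specialresource.openshift.io/runtime-enablement-nvidia-gpu: "ready"
```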

To validate the previous state, the current state resource will deploy an initContainer that executes the validation step. InitContainers run to completion and the dependent Pod will only be started if the initContainer exits successfully. Using this functionality, we have implemented a simple ordering and a gate for each state. We do not have to have a dedicated state for each of the validation steps. 
Throughout one state we are using the same name for all different resources. Taking, for example, the DriverContainer state, the name consists of {{.HardwareResource}}-{{.GroupName.DriverBuild}}. For each state there is a GroupName; the complete list of predefined GroupNames follows:
GroupName: resourceGroupName{
    DriverBuild:            "driver-build",
    DriverContainer:        "driver-container",
    RuntimeEnablement:      "runtime-enablement",
    DevicePlugin:           "device-plugin",
    DeviceMonitoring:       "device-monitoring",
    DeviceGrafana:          "device-grafana",
    DeviceFeatureDiscovery: "device-feature-discovery",
},
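To make the naming concrete, here is a minimal Go sketch (the type and function names are assumptions for illustration, not SRO's actual source) showing how a templated name such as {{.HardwareResource}}-{{.GroupName.DriverBuild}} expands for the nvidia-gpu hardware tag:

```go
package main

import (
	"bytes"
	"text/template"
)

// resourceGroupName mirrors two of the predefined GroupNames listed above.
type resourceGroupName struct {
	DriverBuild     string
	DriverContainer string
}

// runtimeInfo is the data handed to the manifest templates; HardwareResource
// carries the human-readable tag from the hardware configuration ConfigMap.
type runtimeInfo struct {
	HardwareResource string
	GroupName        resourceGroupName
}

// renderName expands a name template like
// "{{.HardwareResource}}-{{.GroupName.DriverBuild}}" into the concrete
// resource name shared by all resources of one state.
func renderName(tmpl string, info runtimeInfo) (string, error) {
	t, err := template.New("name").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, info); err != nil {
		return "", err
	}
	return buf.String(), nil
}
```

For the nvidia-gpu tag this yields the name nvidia-gpu-driver-build, which is why every resource of one state can be located by its shared prefix.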

With all those text fragments and predefined templates, one only has to update the hardware configuration ConfigMap with the right values. Then, SRO takes care of almost everything. Next, we have an example of a ConfigMap for GPUs.
Hardware Configuration ConfigMaps 
For each accelerator or DriverContainer one has to create a dedicated ConfigMap. Each ConfigMap describes the states that need to be executed to enable this accelerator. Here is an example patch.yaml we use in the Makefile to patch the ConfigMap for the GPU.
metadata:
  labels:
    specialresource.openshift.io/config: "true"
  annotations:
    specialresource.openshift.io/nfd: feature.node.kubernetes.io/pci-10de.present
    specialresource.openshift.io/hardware: "nvidia-gpu"

The Makefile will create the ConfigMap from the files stored for the accelerator in the subdirectory <sro>/recipes/nvidia-gpu/manifests and patch the ConfigMap with the correct accelerator configuration.
SRO now knows the label to filter the nodes, how to annotate the manifests for node selection, and how to add the glue between DriverContainer and BuildConfig. Lastly, a human-readable tag is used to identify the hardware by a descriptive name. The following chapter shows how all the information mentioned above comes into play.
Building Blocks Breakdown by State
Presumably, vendors are creating prebuilt DriverContainers with a tested configuration of hardware drivers and kernel versions, so SRO need only pull the DriverContainer. If a DriverContainer cannot be pulled, this could mean that there is no prebuilt DriverContainer for this combination of driver and kernel version; they might be incompatible.
Pulling a prebuilt DriverContainer should always have the highest priority before attempting to build one from source. Building is only a fallback solution, but sometimes it is the only one; perhaps the vendor is very early in development, or simply has not established a CI/CD pipeline for automatic builds.
SRO will first try to pull a DriverContainer; if it detects an ImagePullBackOff or ErrImagePull, it will kick off the building of the drivers via a BuildConfig. How SRO knows which BuildConfig to use and how the DriverContainer is annotated is shown next.
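The pull-first, build-as-fallback decision boils down to a check like the following Go sketch (a simplified illustration; the function name is invented, and SRO's real reconciler inspects the waiting container state reported by the kubelet):

```go
package main

// shouldFallBackToBuild reports whether a DriverContainer image pull failed in
// a way that warrants building the drivers from source via a BuildConfig.
// The reason strings are the waiting-state reasons set by the kubelet.
func shouldFallBackToBuild(waitingReason string) bool {
	switch waitingReason {
	case "ImagePullBackOff", "ErrImagePull":
		return true
	default:
		return false
	}
}
```

Note that transient states such as ContainerCreating must not trigger a build; only a genuine pull failure does.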
BuildConfig and DriverContainer Relationship
SRO can handle multiple DriverContainer <-> BuildConfig combinations. The annotation representing this relationship is:
annotations:
specialresource.openshift.io/driver-container-vendor: {{.NodeFeature}}

The {{.NodeFeature}} is populated with the value of specialresource.openshift.io/nfd from the ConfigMap, a unique key used to identify the drivers for specific hardware. Some enablements need more than one driver to enable a specific functionality.
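As an illustration of the coupling (a hypothetical sketch; the NFD label shown is the pci-10de one used for NVIDIA GPUs elsewhere in this post), both the BuildConfig and the DriverContainer DaemonSet end up carrying the same vendor key after template expansion:

```yaml
# On the BuildConfig:
metadata:
  annotations:
    specialresource.openshift.io/driver-container-vendor: feature.node.kubernetes.io/pci-10de.present
---
# On the DriverContainer DaemonSet; the identical value is what pairs the two:
metadata:
  annotations:
    specialresource.openshift.io/driver-container-vendor: feature.node.kubernetes.io/pci-10de.present
```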
State BuildConfig
Taking a look at 0000-state-driver-buildconfig.yaml, one can easily spot the places where SRO will inject runtime information. For every label or name we are appending {{.HardwareResource}} to tag all resources belonging to the special resource we want to enable.
The BuildConfig has to be able to build drivers for different operating system versions. Thus, we inject the operating system into the BuildConfig so that when the input source (a GitHub repository) is checked out, the build system knows which directory holds the build data for rhel7 or rhel8.
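A sketch of what that injection could look like in the BuildConfig's source section (the template variable names here are invented for illustration; only the idea of selecting the per-OS directory is taken from the text):

```yaml
source:
  type: Git
  git:
    uri: {{.DriverSourceRepo}}   # hypothetical variable: the vendor's GitHub repository
  # The injected operating system selects the directory holding the build
  # data, e.g. rhel7 or rhel8:
  contextDir: {{.OperatingSystem}}
```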
The resulting image has runtime information injected because we want to make sure that only compiled drivers land on the node that has the right kernel version. 
output:
  to:
    kind: ImageStreamTag
    name: {{.HardwareResource}}-{{.GroupName.DriverContainer}}
Last but not least, we have the driver-container-vendor key that is used to match a BuildConfig to a DriverContainer, as described above.
State DriverContainer
For 0001-state-driver.yaml we follow the same pattern: append the hardware tag to any label or name and try to pull the image the BuildConfig has created.
In disconnected environments, a prebuilt DriverContainer could be pushed to the internal registry to prevent the BuildConfig from being created and the build process from being started. The BuildConfig installs several RPMs. In future versions of the operator, we will support custom mirrors located in disconnected environments.
annotations:
  specialresource.openshift.io/wait: "true"
  specialresource.openshift.io/wait-for-logs: '\+ wait \d+|\+ sleep infinity'
  specialresource.openshift.io/inject-runtime-info: "true"
  specialresource.openshift.io/state: "driver-container"
  specialresource.openshift.io/driver-container-vendor: {{.NodeFeature}}
The DriverContainer uses most of the annotations we have described. Essentially, we are telling the operator to wait for this DaemonSet until all its Pods are in the Running state, and to match the regex against the logs before advancing to the next state.
After the DriverContainer is up and fully running, SRO will label the nodes with the provided state from the annotation.
As discussed, the last annotation is used to bond the DriverContainer with the BuildConfig. 
nodeSelector:
  {{.NodeFeature}}: "true"
  feature.node.kubernetes.io/kernel-version.full: {{.KernelVersion}}

The above snippet shows how the runtime information can be used to run the DriverContainer only on specific nodes in the cluster: (1) only run the DriverContainer on nodes that have the special resource exposed, and (2) place the DriverContainer only on nodes where the kernel version matches. This way we prevent the DriverContainer from running on the wrong node.
State Runtime Enablement
The runtime enablement manifest has two new interesting constructs. The DriverContainer has labeled the node with …/driver-container: ready. The runtime enablement uses nodeAffinity to deploy the Pod only when the drivers are ready. 
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: specialresource.openshift.io/driver-container-{{.HardwareResource}}
            operator: In
            values:
            - ready
Each special resource will have its own set of state labels; this way, one can build up a hierarchy of enablements. For example, we can enable a dependent driver when driver-container-<vendor-one>-<hardware> is ready.
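A dependent driver could then express this hierarchy with a nodeAffinity on the first vendor's state label (a sketch following the matchExpressions shape used in this post; <vendor-one>-<hardware> is a placeholder):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: specialresource.openshift.io/driver-container-<vendor-one>-<hardware>
          operator: In
          values:
          - ready
```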
The second construct is the use of an initContainer to run the validation step of the previous state.
spec:
  initContainers:
  - image: quay.io/openshift-psap/ubi8-kmod
    name: specialresource-driver-validation-{{.HardwareResource}}
    command: ["/bin/entrypoint.sh"]
    volumeMounts:
    - name: init-entrypoint
      mountPath: /bin/entrypoint.sh
      readOnly: true
      subPath: entrypoint.sh
Again, we are appending the human-readable tag to all names to distinguish the resources belonging to one special resource. The runtime enablement will check if the drivers are loaded for the special resource.
For each container's entry point we are using a ConfigMap that holds a bash script. This is a very flexible way to adapt or add commands for the startup.
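Such an entry-point ConfigMap might look like the following sketch (the script body is an assumption; the init-entrypoint name and entrypoint.sh key follow the volume mount shown earlier):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: init-entrypoint
data:
  entrypoint.sh: |
    #!/bin/bash
    # Hypothetical validation: block until the driver module shows up, so the
    # dependent Pod only starts once the previous state has really finished.
    until lsmod | grep -q "^nvidia"; do
      sleep 5
    done
```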
State Device Plugin
No new features are introduced for the manifests in this state, but let's recap what we're leveraging here: (1) special-resource-specific naming, (2) nodeAffinity on the previous state, (3) an initContainer for validation, and (4) a nodeSelector to only run on a node exposing the special resource.
State Device Monitoring, Device Feature Discovery
After exposing the special resource to the cluster via an Extended Resource, these two states have only one common dependency: the readiness of the device plugin. The nodeAffinity of these states is the same, which means they are executed in parallel.
State Grafana
The Grafana state is special because we are using a custom callback that enables Grafana to read the cluster's Prometheus data. This callback is only valid for Grafana and cannot be generalized to other manifests.
metadata:
  name: specialresource-grafana-{{.HardwareResource}}
  namespace: openshift-sro
  annotations:
    specialresource.openshift.io/callback: specialresource-grafana-configmap

The specialresource-grafana-configmap value is a key into a map of functions. If one wants more custom callbacks, one has to append the function to the map and annotate the manifest with the corresponding key.
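A minimal Go sketch of such a dispatch map (the type and function names are invented for illustration; SRO's actual callbacks operate on full Kubernetes objects rather than string maps):

```go
package main

import "fmt"

// callbackFunc is a stand-in for a manifest-specific mutation hook.
type callbackFunc func(obj map[string]string) error

// callbacks maps the annotation value to the function to execute.
var callbacks = map[string]callbackFunc{
	"specialresource-grafana-configmap": func(obj map[string]string) error {
		// e.g. wire Grafana up to the cluster's Prometheus datasource.
		obj["datasource"] = "cluster-prometheus"
		return nil
	},
}

// runCallback looks up the callback annotation on a manifest and executes the
// registered function; manifests without the annotation pass through untouched.
func runCallback(annotations, obj map[string]string) error {
	key, ok := annotations["specialresource.openshift.io/callback"]
	if !ok {
		return nil
	}
	cb, ok := callbacks[key]
	if !ok {
		return fmt.Errorf("no callback registered for %q", key)
	}
	return cb(obj)
}
```

Adding a new callback is then just a new map entry plus the matching annotation on the manifest.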
Container Probes
There are currently three probes that the kubelet can act upon. We will briefly discuss why we haven't used probes in SRO to provide a means of synchronization or ordering.

livenessProbe  If this probe fails, the kubelet kills the Container and an action is taken according to the restart policy. This probe cannot help us in the case of SRO regarding synchronization or ordering.
readinessProbe  This probe indicates whether a Container can service requests. If it fails, the Pod's IP address is removed from the endpoints. A readiness probe cannot be used in SRO in a general way because almost all driver and similar containers do not provide any (externally) accessible service.
startupProbe  The thought behind using this probe was to signal the readiness of an application within the Container to SRO. The only problem is that, regardless of the state of the application, the status of the Pod is always Running. If the probe fails, the Container is killed. We cannot deduce any readiness of the application within that container.

To truly know if a specific application in a Container is really ready, one must look for the Running phase and examine the logs. Some applications are really fast, and keeping an eye on the Running status of a Pod is, in most cases, sufficient. 
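That log-plus-Running-phase check is exactly what the wait-for-logs annotation automates; in Go it boils down to something like the following sketch (the pattern shown is the idle-loop match from the DriverContainer example, used here purely as an illustration):

```go
package main

import "regexp"

// readyPattern matches the point where the driver container parks itself in
// its idle loop -- a sign that driver loading has finished.
var readyPattern = regexp.MustCompile(`\+ sleep infinity`)

// logsIndicateReady reports whether the captured container log contains the
// readiness marker; the caller should additionally check the Pod is Running.
func logsIndicateReady(log string) bool {
	return readyPattern.MatchString(log)
}
```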
Future Work
In this second part of the blog series, we discussed in detail the building blocks of SRO and how one can use them to enable their own accelerator. One important missing piece is how to use SRO in disconnected environments, with or without a proxy.
Stay tuned for part three, which will discuss the open points and a new accelerator that we enabled with SRO.
Sneak Preview of Part 3
The post Part 2: How to enable Hardware Accelerators on OpenShift, SRO Building Blocks appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Guide to Installing an OKD 4.4 Cluster on your Home Lab

 
Take OKD 4, the Community Distribution of Kubernetes that powers Red Hat OpenShift, for a test drive on your Home Lab. 
Craig Robinson at East Carolina University has created an excellent blog explaining how to install OKD 4.4 in your home lab!
What is OKD?
OKD is the upstream community-supported version of the Red Hat OpenShift Container Platform (OCP). OpenShift expands vanilla Kubernetes into an application platform designed for enterprise use at scale. Starting with the release of OpenShift 4, the default operating system is Red Hat CoreOS, which provides an immutable infrastructure and automated updates. OKD's default operating system is Fedora CoreOS, which, like OKD, is the upstream version of Red Hat CoreOS.
Instructions for Deploying OKD 4 Beta on your Home Lab
For those of you who have a Home Lab, the purpose of this step-by-step guide is to help you successfully build an OKD 4.4 cluster at home that you can take for a test drive. VMWare is the example hypervisor used in the guide, but you can use Hyper-V, libvirt, VirtualBox, bare metal, or other platforms just as easily.
Experience is an excellent way to learn new technologies. Used hardware for a home lab that could run an OKD cluster is relatively inexpensive these days ($250–$350), especially when compared to a cloud-hosted solution costing over $250 per month.
This guide assumes you have a virtualization platform, basic knowledge of Linux, and the ability to Google.

Check out the step-by-step guide here on Medium.com

Once you've gained some experience with OpenShift by using the open source upstream combination of OKD and FCOS (Fedora CoreOS) to build your own cluster on your home lab, be sure to share your feedback and any issues with the OKD-WG on this Beta release of OKD in the OKD GitHub repo: https://github.com/openshift/okd
 
Additional Resources:

To report issues, use the OKD Github Repo: https://github.com/openshift/okd
For support check out the #openshift-users channel on k8s Slack
The OKD Working Group meets bi-weekly to discuss development and next steps. Meeting schedule and location are tracked in the openshift/community repo.
Google group for okd-wg: https://groups.google.com/forum/#!forum/okd-wg

This should get you up and going. Good luck on your journey with OpenShift! 
The post Guide to Installing an OKD 4.4 Cluster on your Home Lab appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift Commons Briefing: Bringing OpenShift to IBM Cloud with Chris Rosen (IBM)


 
In this briefing, IBM Cloud's Chris Rosen discusses the logistics of bringing OpenShift to IBM Cloud and walks us through how to make the most of this new offering from IBM Cloud.
Red Hat OpenShift is now available on IBM Cloud as a fully managed OpenShift service that leverages the enterprise scale and security of IBM Cloud, so you can focus on developing and managing your applications. It’s directly integrated into the same Kubernetes service that maintains 25 billion on-demand forecasts daily at The Weather Company.
Chris Rosen walks us through how to:

Enjoy dashboards with a native OpenShift experience, and push-button integrations with high-value IBM and Red Hat middleware and advanced services.
Rely on continuous availability with multizone clusters across six regions globally.
Move workloads and data more securely with Bring Your Own Key; Level 4 FIPS; and built-in industry compliance including PCI, HIPAA, GDPR, SOC1 and SOC2.
Start fast and small using one-click provisioning and metered billing, with no long-term commitment.

Slides here: Red Hat OpenShift on IBM Cloud – Webinar – 2020-03-18
Additional Resources:

Red Hat OpenShift on IBM Cloud: https://www.ibm.com/ca-en/cloud/openshift
Documentation: https://cloud.ibm.com/docs/openshift?topic=openshift-service-arch
Get Started Tutorials: https://www.ibm.com/cloud/openshift/get-started

To stay abreast of all the latest releases and events, please join the OpenShift Commons and our mailing lists & Slack channel.

What is OpenShift Commons?
Commons builds connections and collaboration across OpenShift communities, projects, and stakeholders. In doing so we’ll enable the success of customers, users, partners, and contributors as we deepen our knowledge and experiences together.
Our goals go beyond code contributions. Commons is a place for companies using OpenShift to accelerate its success and adoption. To do this we’ll act as resources for each other, share best practices and provide a forum for peer-to-peer communication.
Join OpenShift Commons today!
The post OpenShift Commons Briefing: Bringing OpenShift to IBM Cloud with Chris Rosen (IBM) appeared first on Red Hat OpenShift Blog.
Source: OpenShift