OpenShift Commons Briefing: OpenShift Multi-Cloud Object Gateway Deep Dive with Eran Tamir (Red Hat)

 
In this briefing, Red Hat’s Eran Tamir gives a deep dive on the OpenShift Multi-Cloud Object Gateway, a new data federation service introduced in OpenShift Container Storage 4.2. The technology is based on the NooBaa project, which was acquired by Red Hat in November 2018 and recently open sourced. The Multi-Cloud Object Gateway has an object interface with an S3-compatible API. The service is deployed automatically as part of OpenShift Container Storage 4.2 and provides the same functionality regardless of its hosting environment: simplicity, and a single experience anywhere.
In its default deployment, the Multi-Cloud Object Gateway provides a local object service: every data bucket is backed, by default, by local storage, or by cloud-native storage if hosted in the cloud. No additional configuration is required. The Multi-Cloud Object Gateway’s object service API is always an S3 API, which means a single experience on-premises and in the cloud, for any cloud provider. This translates to a zero learning curve when moving to, or adding, a new cloud vendor, which in turn translates into greater agility for your teams. There’s more information here.
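Because the interface is plain S3, standard S3 tooling can be pointed straight at the gateway. Below is a minimal sketch using the AWS CLI; the endpoint URL and bucket name are illustrative placeholders, not actual defaults, and credentials would come from the gateway’s provisioned account:

# Point any S3 client at the Multi-Cloud Object Gateway's S3 endpoint
# (endpoint and bucket below are hypothetical examples)
aws --endpoint-url https://s3-openshift-storage.apps.example.com s3 ls
aws --endpoint-url https://s3-openshift-storage.apps.example.com s3 cp report.csv s3://my-bucket/

The same two commands work unchanged whether the gateway runs on-premises or in any public cloud, which is exactly the single-experience point made above.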
Briefing Slides: RED HAT STORAGE MULTI-CLOUD OBJECT GATEWAY Eran Tamir
Additional Resources:
Noobaa Operator: https://github.com/noobaa/noobaa-operator
Product Documentation for Red Hat OpenShift Container Storage 4.2
Feedback:
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
Source: OpenShift

SAP and Red Hat collaborate to provide AI/ML on Data Hub

SAP is the largest provider of ERP for enterprises worldwide, and SAP systems hold a lot of important business data. That makes them a natural fit for analysis and machine learning tasks, but actually constructing a system capable of integrating with SAP Data Hub and running machine learning workloads is a complex undertaking. That’s why SAP has worked with Red Hat to bring OpenShift and Data Hub together. The company has published a detailed blog explaining the integrations, the business advantages, and the architecture of such a setup. From their blog:
With the support of Jerry Platz, VP of SAP solutions at Inspired Intellect, the project team helped implement example cases at the Co-Innovation Lab (COIL) in California. Leveraging the COIL gave Inspired Intellect more tools to develop, train, and test machine learning models. Platz shared, “We recognize the issues Gartner identified in many of our client engagements, and we understand that solving them often requires innovation in both technology and organizational approach. That’s where the COIL partnership provides value. Clients can work shoulder-to-shoulder with SAP, Red Hat and ourselves on real-world problems, using the latest tools and methodologies.” Platz added, “Of course creating business outcomes is not just about accessing the latest tools, but creating a strategy that identifies the most valuable use-cases and enabling AI/ML throughout the organization. This COIL approach allows customers to reduce time-to-outcome by pre-packaging a wide variety of use-cases. Of course, each case must be tailored towards specific needs, but in our experience, the ‘prepackaged cases’ we have developed provide a very good start.” Platz continued, “I’d also like to comment further on Gartner’s ‘vendor strategy’ / integration complexity challenge. By standardizing our approaches on the Red Hat portfolio with SAP, we can make significant strides to overcome this challenge. Our most advanced clients have previously had to deal with a piecemeal approach, often with costly self-built integrations. What they really want is a stable, modular and less complicated platform with all the functionality. That’s what we are delivering, using SAP Data Hub and the Red Hat software portfolio.”
Source: OpenShift

Scaling Persistent Volume Claims with Red Hat OpenShift Container Storage v4.2

1. Objective
When choosing a storage solution for dynamic provisioning of persistent volume claims (PVCs) in OpenShift Container Platform, the time it takes to bind and prepare a PVC for use with application pods is a crucial factor.
For Red Hat OpenShift Container Storage v4.2 we performed a series of tests investigating how OCP v4.2 behaves from a scalability point of view. We wanted to know how fast application pods start when they are provided PVCs from different storage classes, and to get numbers that can be used when choosing a storage solution for OCP application pods.
The test results presented in this document are recommended values for OpenShift Container Storage v4.2 and do not show the real limits for OpenShift Container Storage v4.2, which are higher. We will conduct more scalability tests for future OpenShift Container Storage releases.
For future OpenShift Container Storage releases we plan to target configurations where more pods are running on the OpenShift Container Platform cluster and actively requesting PVCs originating from OpenShift Container Storage.
In this document we describe the test processes and results gathered during PVC scale test execution with OpenShift Container Storage v4.2, showing why OpenShift Container Storage is the supreme storage solution for use cases where pod density and PVC allocation speed are key, such as CI/CD environments.
2. Relevance of Scalability Testing
When deploying OpenShift Container Platform clusters on public clouds, the cloud provider’s native storage solution can be used to provision storage to application pods running on OpenShift Container Platform. Different public cloud providers offer different solutions that attempt to satisfy the storage needs of OpenShift Container Platform users, but from a scalability point of view most of them have limitations that can be problematic in deployments where the speed of storage allocation is crucial. When native storage classes are the source of PVCs used by application pods running in an OpenShift Container Platform cluster, there are two limiting factors that prevent users from reaching a higher number of pods with allocated Persistent Volume Claims in a short period of time.
These limitations imposed on users by public cloud providers can be summarized as follows:

Number of allowed block/volume devices per cloud instance
API throttling

Public cloud providers impose limits on the maximum number of volumes that can be attached to a cloud instance (in this case an OpenShift Container Platform node). This caps the number of PVCs that can be allocated to pods on an OpenShift Container Platform node at the maximum number of volumes supported by the cloud provider; with a 40-volume limit and one PVC per pod, for example, a node tops out at roughly 40 such pods.
The values cited here for cloud providers might not be exhaustive and can change, as the maximum number of block devices supported by cloud providers can be updated. For exact values, check your cloud provider’s documentation.
At the time of this writing, public cloud providers allow you to attach different numbers of block devices per instance, and the maximum values differ depending on the cloud provider and machine type used. Below is a short example of the current situation.
AWS recommends a maximum of 40 volumes per instance; going beyond that number can cause problems with machine functionality and is supported only on a best-effort basis.
Many EC2 instance types support a maximum of 28 attachments (including network interfaces, EBS volumes, and NVMe instance store volumes), which lowers the number of volumes that can be attached to an instance even further.
Azure documents up to 20 volumes, and GCP offers different options depending on machine type, up to a maximum of 120 persistent disks for custom and predefined machine types (see the limits section of each provider’s machine documentation).
Another limiting factor in the public cloud is the cloud API rate, which dictates how many API requests can be served during a given time period. These values differ from cloud provider to cloud provider, but in the case of busy OpenShift Container Platform clusters that need to issue many API requests in a short period of time (a usual scenario in high-performance/scalability situations), it is not uncommon that requests can’t be served (requests ending with errors like “API rate exceeded”) and requests to allocate volumes for a pod will not be satisfied.
Every request for a volume is at least one API request; often there is another API request to tag the volume, and if different users are issuing volume delete operations at the same time, those generate additional API requests. Cloud providers allow only a limited number of API requests, as specified in their documentation (AWS, Azure, GCP); special API limits can sometimes be negotiated at additional cost and are cloud provider defined.
3. OpenShift Container Storage To The Rescue
When OpenShift Container Storage is used as the storage provider for application pods in OpenShift Container Platform clusters, the maximum number of volumes per instance is limited only by the Linux kernel itself, and these limits are much higher than the limits imposed by cloud providers.
Also, cloud limitations such as API request throttling are no longer relevant when using OpenShift Container Storage for provisioning persistent storage. In this case, when a user sends a request for a new PVC, the API requests stay inside the OpenShift Container Platform cluster without the need to interact with the public cloud provider’s API.
Other benefits, like superior performance and the possibility to use the same storage solution across private/hybrid/public clouds with the same API and the same management tools, provide a better overall experience for end users and easier-to-maintain day-2 operations for DevOps teams.
4. PVC Scale Test Description
In the PVC scale tests we test the ability of OpenShift Container Storage to provision PVCs to application pods, and we measure the time it takes for the pods to reach the “Ready” state with the PVC bound to the pod.
Once a PVC is in the Bound state, it is considered ready for use by applications.
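For reference, below is a minimal sketch of the kind of PVC these tests request. The storage class name is an assumption that follows the example-storagecluster naming used later in this post; in a real cluster, use the RBD or CephFS storage class created by your OpenShift Container Storage installation.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scale-test-pvc-001                # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi                        # same size as the PVCs used in section 6.2.1
  storageClassName: example-storagecluster-ceph-rbd   # assumed RBD class name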
Three different configurations for OpenShift Container Storage were covered as described in Table 1.
Table 1. Tested configurations during PVC Scale Testing

In the above table we specify cluster size parameters, number of tested PVCs and how many nodes were used for that specific test.
5. How Is The Pod Start Time Measured?
When a pod starts, it reports various timestamps to the OpenShift Container Platform API, and for every pod you can find out when it was considered Ready. See the example below, taken from the OCP API (/api/v1/namespaces/<namespace>/pods):
status.conditions:
type = Ready
status = True

An example

"conditions": [
  {
    "status": "True",
    "lastProbeTime": null,
    "type": "Ready",
    "lastTransitionTime": "2018-12-13T17:27:57Z"
  }
]

A pod with an attached PVC (PVC in Bound state) is considered Ready when the condition of type Ready has status True.
lastTransitionTime is the time the pod last transitioned into this state, as defined in the pod conditions documentation.
Important note: pods were not restarted once started, so lastTransitionTime was not updated with a new value.
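To illustrate how these timestamps can be collected, the one-liner below is a sketch (the scale-test namespace name is made up) that prints each pod’s name and the time it last became Ready; subtracting .metadata.creationTimestamp from this value gives the per-pod startup time:

kubectl get pods -n scale-test -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].lastTransitionTime}{"\n"}{end}'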
6. Test Results
6.1 Create Operation
For all tested configurations (1500, 3000, 5000 pods/PVCs), it was possible to scale up to the requested number of pods/PVCs for both storage classes that are part of the OpenShift Container Storage v4.2 setup: Ceph RBD and CephFS. During allocation of PVCs for pods, we encountered no problems with OpenShift Container Storage v4.2, such as crashing or restarting pods in the openshift-storage namespace.
Graph 1. Time to start Pods with PVC from OpenShift Container Storage v4.2 storage backend

The graph above shows how long it takes to start different numbers of pods with PVCs from the two OpenShift Container Storage v4.2 storage classes. It is visible that pod startup time increases linearly, and from the numbers we can conclude that it takes approximately 1.6 s to start a pod with an attached PVC (total time / number of pods); at that rate, for example, 1500 pods take roughly 2400 s, or 40 minutes.
On a 9-node OpenShift Container Storage v4.2 cluster, the requirement to scale up to 5000 PVCs could easily be satisfied.
6.2 Delete Operation
Another data point we were interested in was the time necessary to remove all volumes on the Ceph side after the pods/PVCs are removed. For this test we deleted the project created in section 6.1 and measured the time it took until all volumes were removed on the storage backend side.
For Ceph RBD we considered the delete successful when the number of RBD images in example-storagecluster-cephblockpool reached zero, e.g.:
rbd ls example-storagecluster-cephblockpool | wc -l

For CephFS we repeated the same process, in this case using:
ceph fs subvolume ls example-storagecluster-cephfilesystem csi |grep name | wc -l

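The timing itself can be captured with a simple polling loop. Below is a sketch of the approach for the RBD case, run from wherever the Ceph tools are available (for example, the Ceph toolbox pod); the 10-second interval is arbitrary:

start=$(date +%s)
# Poll until every RBD image backing a deleted PVC is gone
while [ "$(rbd ls example-storagecluster-cephblockpool | wc -l)" -gt 0 ]; do
  sleep 10
done
echo "All RBD images deleted after $(( $(date +%s) - start )) seconds"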
Delete operations worked without problems and the OpenShift Container Platform cluster was fully operational during the whole delete process.
Graph 2 – Time necessary to delete pods with empty PVCs

6.2.1 Delete operation with data present on PVCs
It is known that the delete process differs depending on whether there is data on a PVC or the PVC is empty, so deletion times for PVCs with data may differ from those for PVCs without data.
For this test we created 1500 pods, each with one 2 GB PVC mounted, and afterwards wrote a 1 GB file per pod using fio with random writes. Once the write operation finished, ceph df showed the following:
# ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 6.0 TiB 1.3 TiB 4.7 TiB 4.7 TiB 78.18
TOTAL 6.0 TiB 1.3 TiB 4.7 TiB 4.7 TiB 78.18

POOLS:
POOL ID STORED OBJECTS USED %USED MAX AVAIL
example-storagecluster-cephblockpool 1 1.6 TiB 424.64k 4.7 TiB 82.27 344 GiB
example-storagecluster-cephfilesystem-metadata 2 2.2 KiB 22 384 KiB 0 344 GiB
example-storagecluster-cephfilesystem-data0 3 0 B 0 0 B 0 344 GiB

For the CephFS case, the ceph df output was as follows:
# ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 6.0 TiB 1.6 TiB 4.4 TiB 4.4 TiB 73.53
TOTAL 6.0 TiB 1.6 TiB 4.4 TiB 4.4 TiB 73.53

POOLS:
POOL ID STORED OBJECTS USED %USED MAX AVAIL
example-storagecluster-cephblockpool 1 393 MiB 25.80k 1.2 GiB 0.09 439 GiB
example-storagecluster-cephfilesystem-metadata 2 258 MiB 3.09k 774 MiB 0.06 439 GiB
example-storagecluster-cephfilesystem-data0 3 1.5 TiB 385.50k 4.4 TiB 77.36 439 GiB

From this we can see that the data set written during the test was indeed stored on the storage backend; df -h from inside the pods confirmed the same.
The graph below shows the time it takes to free up storage space after deleting the project holding the OpenShift Container Platform pods and PVCs.
Graph 3 – Time to delete 1500 PVCs with 1 GB data present on every PVC

As in the empty-PVC delete test, we monitored the number of volumes in Ceph as an indication that they were properly deleted from the storage backend:
rbd ls example-storagecluster-cephblockpool | wc -l

and for CEPHFS
ceph fs subvolume ls example-storagecluster-cephfilesystem csi |grep name | wc -l

6.3 Concurrent create / delete operations
In real life scenarios there will be many concurrent create / delete operations trying to simultaneously access the same storage resources. This kind of workload is typical for CI/CD pipelines and other modern cloud native applications. A storage layer that can absorb many concurrent create/delete operations in a performant way is therefore crucial for a successful deployment of such applications in OpenShift Container Platform.
For this purpose we tested a scenario where create pod/PVC operations in one OpenShift Container Platform project ran in parallel with delete operations on another OpenShift Container Platform project containing pods/PVCs.
The graph below shows the results we got for that test when creating 1500 pods requesting storage from OpenShift Container Storage v4.2 while simultaneously deleting PVCs in another OpenShift Container Platform project.
Graph 4: Time necessary to start Pods with PVC from OCP v4.2 when running delete PVC operation in parallel with create operation. Comparison with pure creation time (blue bars).

From the above graph we can see that the time to start 1500 pods with PVCs from OpenShift Container Storage v4.2 increases if there are concurrent requests hitting the backend and competing for storage resources. Nevertheless, the OpenShift Container Storage layer still answered PVC creation requests in a timely manner, without issues like timeouts, and could easily cope with the workload.
7. Conclusion
The PVC scale tests with OpenShift Container Platform / OpenShift Container Storage v4.2 ran very stably and confirmed that OpenShift Container Storage v4.2 scales very fast and is able to satisfy very demanding, fast-changing CI/CD environments.
There were no issues with the storage backend during the PVC scale process up to the currently recommended 5000 PVCs on a 9-node OpenShift Container Storage v4.2 cluster.
At the same time, deleting pods with attached PVCs showed that the whole process of cleaning the storage backend (deleting PVCs triggers PV deletion on the OpenShift Container Platform side and RBD image removal on the storage side) was smooth and without issues, even during massive pod/PVC delete operations.
The obvious benefit of OpenShift Container Storage v4.2 when it comes to scaling is that it enables OpenShift Container Platform users to start more pods per node in cases where pods request PVCs for their work.
As shown in Table 1 above, we were able to start 125 pods per node, every pod having a PVC attached, for a total of 1500 pods scheduled on a 12-node OpenShift Container Platform cluster. This is much higher than the declared supported values for public cloud providers: on average, 10 times more pods with PVCs per node than what public cloud providers offer for the same customer scenario, where the requirement is to provide storage (PVCs) to many OpenShift Container Platform pods, and to do so quickly.
In this specific test, using the same cluster, it was possible to start 1500 pods on 12 application nodes when OpenShift Container Storage v4.2 was used as the storage backend. If a customer wanted to achieve the same pod density in AWS with the gp2 storage class as the storage provider for application pods, it would not be possible; to get the same number of pods it would be necessary to add more machines to the OpenShift Container Platform cluster, incurring additional cost.
Compared with OpenShift Container Storage v3.11, PVC scalability on OpenShift Container Storage v4.2 gives much better results and is more stable in many aspects, especially when it comes to tearing down volumes on the storage side after deleting PVCs on the OpenShift Container Platform side.
8. Future work
The PVC counts per storage configuration in the write-up above are recommended values for OpenShift Container Storage v4.2, not actual limits, but for stability we recommend using them as maximum values. We are working to verify other test configurations, and the results of that work will be provided alongside future OpenShift Container Storage releases.
Source: OpenShift

Securing applications at the Edge with Trusted Docker Containers

Last week we presented the webinar How to Build a Basic Edge Cloud. One of the topics that drew the most attention was security, so we wanted to bring you this whitepaper, Secure the IoT Edge with Trusted Docker Containers.
Deploying applications to the edge requires special attention to security to prevent the compromise of end devices. Mirantis has partnered with Intel to secure the last mile by anchoring the Docker Enterprise Platform to hardware primitives in the Trusted Platform Module (TPM), leveraging Intel Platform Trust Technology (Intel PTT).
Some of the key steps we have taken to supply hardened enterprise security for trusted containers for our customers deploying at the edge include: 

Security in transit: Docker Enterprise leverages the Trusted Platform Module to create credentials and generate key pairs for secure connection to enterprise infrastructure.
Security at rest: Docker Enterprise leverages disk encryption to protect images in an encrypted volume, backed by keys in the TPM.
Node integrity: Security services tied to Docker Engine and to secure boot use a secure cryptoprocessor such as a Trusted Platform Module (TPM) to measure container infrastructure files and prevent compromised files and data from being accessed.
Image integrity: In the Docker Trusted Registry, images are signed prior to delivery to end devices. Once the image is received on the end device, Docker Content Trust verifies image integrity (see the sketch after this list).
Node attestation: Critical Docker infrastructure is measured against the Integrity Measurement Architecture and chained to the integrity of the Secure Boot flow, and can be attested by a remote verifier.
Registry authentication: Docker Trusted Registry authenticates the device identity with credentials stored in a TPM.

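To illustrate the image integrity point: content trust enforcement on a Docker client is a single environment variable, as sketched below. The registry and image names are hypothetical.

# With content trust enabled, pulls of unsigned images fail
export DOCKER_CONTENT_TRUST=1
docker pull dtr.example.com/edge/sensor-app:1.0   # succeeds only if the tag is signed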
All of these features enhance the Docker Enterprise Platform and provide the foundational capabilities required to extend the secure deployment of apps to the edge and IoT.
Interested in more details about how this all works? Please download the whitepaper.
Source: Mirantis

Enterprise Kubernetes with OpenShift (Part one)

The question “What’s the difference between Kubernetes and OpenShift?” comes up every now and then, and it is quite like asking: “What’s the difference between an engine and a car?”
To answer the latter, a car is a product that immediately makes you productive: it is ready to get you where you want to go. The engine, in return, won’t get you anywhere unless you assemble it with other essential components that will form in the end a … car.
As for the first question, in essence, you can think of it as Kubernetes being the engine that drives OpenShift, and OpenShift as the complete car (hence platform) that will get you where you want to go.
This question comes up every now and then, so the goal of this blog post is to remind you that:

at the heart of OpenShift IS Kubernetes, and that it is a 100% certified Kubernetes, fully open source and non-proprietary, which means:

The API to the OpenShift cluster is 100% Kubernetes.
Nothing changes between a container running on any other Kubernetes and running on OpenShift. No changes to the application.

OpenShift brings added-value features to complement Kubernetes, and that’s what makes it a turnkey platform, readily usable in production and significantly improving the developer experience, as will be shown throughout this post. That’s what makes it both the successful enterprise Platform-as-a-Service (PaaS) everyone knows about from a developer perspective, and the very reliable Container-as-a-Service from a production standpoint.

OpenShift IS Kubernetes, 100% Certified by the CNCF
 
Certified Kubernetes is at the core of OpenShift. Users of `kubectl` love its power once they are done with the learning curve. Users transitioning from an existing Kubernetes cluster to OpenShift frequently point out how much they love redirecting their kubeconfig to an OpenShift cluster and having all of their existing scripts work perfectly.
 
You may have heard of the OpenShift CLI tool called `oc`. This tool is command-compatible with `kubectl`, but adds a few extra helpers that get your job done. But first, let’s see how oc is just kubectl:

kubectl command                        oc equivalent
kubectl get pods                       oc get pods
kubectl get namespaces                 oc get namespaces
kubectl create -f deployment.yaml      oc create -f deployment.yaml

 
Here are the results of using kubectl commands against an OpenShift API:

kubectl get pods => well, it returns … pods

 

kubectl get namespaces => well, it returns… namespaces

kubectl create -f mydeployment.yaml => it creates the Kubernetes resources just like it would on any other Kubernetes platform. Let’s verify that with the following video:

 
In other words, the Kubernetes API is fully exposed in OpenShift, 100% compliant with upstream. That’s why OpenShift is a CNCF-certified Kubernetes distribution.
 
OpenShift brings added-value features to complement Kubernetes
 
While the Kubernetes API is 100% accessible within OpenShift, the kubectl command-line lacks many features that could make it more user-friendly. That’s why Red Hat complements Kubernetes with a set of features and command-line tools like oc (the OpenShift client) and odo (OpenShift Do, targeting developers).
1 – “oc” complements “kubectl” with extra power and simplicity
oc, for instance, is the OpenShift command-line that adds several features over kubectl, like the ability to create new namespaces and easily switch contexts, plus commands for developers, such as building container images and deploying applications directly from source code or binaries, also known as the Source-to-Image process, or s2i.
Let’s take a look at a few instances of where oc has built-in helpers and additional functionality to make your day to day life easier.
The first example is namespace management. Every Kubernetes cluster has multiple namespaces, usually to provide environments from development to production, but also to give every developer a sandbox environment, for instance. This means you’re going to switch between them frequently, since kubectl commands are contextual to your namespace. With kubectl, you will frequently see folks use helper scripts to do this, but with oc you just say oc project foobar to switch to the foobar namespace.
And if you can’t remember your namespace name? You can just list them with oc get projects. What if you only had access to a subset of the namespaces on the cluster? That command should list them, right? Not so with kubectl, unless you have RBAC access to list all namespaces on the cluster, which is not frequently granted on larger clusters. With oc, however, you easily get a list of your namespaces. This is one small way OpenShift is enterprise-ready and designed to scale with both your human users and your applications.
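A quick side-by-side sketch of that namespace workflow (the foobar namespace is made up):

# kubectl: switching namespaces means rewriting your current context
kubectl config set-context --current --namespace=foobar

# oc: one command to switch, and listing works with only namespace-scoped access
oc project foobar
oc get projects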
 
2 – ODO improves the developer experience over kubectl
Another tool that Red Hat ships with OpenShift is odo, a command-line that streamlines the developer experience, allowing developers to quickly deploy local code to a remote OpenShift cluster and to have an efficient inner loop where their changes are instantly synced with the running container in the remote OpenShift cluster, avoiding the burden of rebuilding the image, pushing it to a registry, and deploying it again.
Here are a few examples where the oc or odo command makes life easier when working with containers and Kubernetes.
In the following section, let’s compare a kubectl-based workflow to using oc or odo.

Deploying code to OpenShift without being a YAML native-speaker:

Kubernetes / kubectl

$> git clone https://github.com/sclorg/nodejs-ex.git
1- Create a Dockerfile that builds the image from code
————–
FROM node
WORKDIR /usr/src/app
COPY package*.json ./
COPY index.js ./
COPY ./app ./app
RUN npm install
EXPOSE 3000
CMD [ "npm", "start" ]
————–
2- build the image
$> podman build …
3- login to a registry
podman login …
4- Push the image to a registry
podman push
5- Create the YAML files needed to deploy the app (deployment.yaml, service.yaml, and ingress.yaml are the bare minimum)
6- deploy the manifest files:
kubectl apply -f .

OpenShift / oc

$> oc new-app https://github.com/sclorg/nodejs-ex.git --name myapp

OpenShift / odo

$> git clone https://github.com/sclorg/nodejs-ex.git
$> odo create component nodejs myapp
$> odo push
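From there, the inner loop described above stays tight: odo 1.x also provided a watch mode that synced local file changes to the running container as you saved them. Continuing the component created above, a sketch is simply:

$> odo watch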

 

Switching contexts: changing working namespace or working cluster

Kubernetes / kubectl

1- create a context in kubeconfig for "myproject"
2- kubectl config use-context …

OpenShift / oc

oc project "myproject"

 
 
Quality assurance process: “I have coded a new alpha feature, should we ship it to production?”
When you try a prototype car and the engineer says, “I’ve put in some new types of brakes, and honestly I’m not sure if they’re safe yet… but GO AHEAD and try it!”, do you blindly do so? I guess NO, and we feel the same way at Red Hat :)
That’s why we might refrain from shipping alpha features until they mature and we have battle-tested them during our qualification process and feel they are safe to use. Usually a feature goes through a Dev Preview phase, then a Tech Preview phase, then a General Availability phase once it is stable for production.
Why is that so? Because, like in any other software craftsmanship, some initial concepts in Kubernetes might never make it into a final release, or they could land functionally, but with a very different implementation than what was initially delivered as an alpha feature. Because Red Hat supports more than a thousand customers running business-critical missions on OpenShift, we believe in delivering a stable and long-supported platform.
Red Hat actively delivers frequent OpenShift releases and updates the Kubernetes version within OpenShift. For instance, OpenShift 4.3, the current GA release, embeds Kubernetes 1.16, just one version behind the upstream Kubernetes 1.17; this is deliberate, in order to deliver production-grade Kubernetes and do extra quality assurance within the OpenShift release cycle.
 
The Kubernetes escalation flaw: “There is a critical Kubernetes bug in Production, do I need to upgrade all my production up to 3 releases to get the fix?”
In the upstream Kubernetes project, fixes are usually delivered in the next release and sometimes backported to one or two minor releases, spanning a six-month time frame.
Red Hat has a proven track record of fixing critical bugs earlier than others, and over a much longer time frame. Take the Kubernetes privilege escalation flaw (CVE-2018-1002105), which was discovered in Kubernetes 1.11 and fixed upstream only back to 1.10.11, leaving releases 1.9 and earlier subject to the flaw.
By contrast, Red Hat patched OpenShift back to version 3.2 (Kubernetes 1.2), spanning nine OpenShift releases, showing it will actively support its customers in these difficult situations. (See this blog for further information.)
 
Kubernetes upstream benefits from OpenShift and Red Hat’s contributions to code
Red Hat is the second-largest code contributor to Kubernetes behind Google and currently employs 3 of the top 5 Kubernetes code contributors. It is little known that many critical features of upstream Kubernetes were contributed by Red Hat. Some major examples:

RBAC: for some time, Kubernetes didn’t implement RBAC features (ClusterRole, ClusterRoleBinding), until Red Hat engineers decided to implement them in Kubernetes itself rather than as an added-value feature of OpenShift. So is Red Hat afraid of improving Kubernetes? Of course not; that’s what makes it Red Hat and not just another open-core software provider. Improvements made in the upstream communities mean more sustainability and broader adoption, which ultimately is the goal: to make these open source projects drive benefits for customers.
Pod Security Policies: initially, these concepts that allow secure execution of applications within pods were present in OpenShift, known as Security Context Constraints (SCCs). Again, Red Hat decided to contribute the idea upstream, and now everyone using Kubernetes or OpenShift benefits from it.

There are many more examples, but these simple illustrations show that Red Hat is committed to making Kubernetes an even more successful project.
 
So now the real question: what is the difference between OpenShift and Kubernetes? :)
 
By now, I hope you understand that Kubernetes is a core component of OpenShift, but nonetheless ONE component among MANY others. Just installing Kubernetes does not give you a production-grade platform: you’ll need to add authentication, networking, security, monitoring, log management, and more. You will also have to pick your tools from everything available (see the CNCF landscape to get an idea of the complexity of the ecosystem) and maintain the cohesion of all of them as a whole, doing updates and regression tests whenever there is a new version of one of these components. In the end you are turning into a software vendor, except that you are spending effort on building and maintaining a platform rather than investing in the business value that will differentiate you from your competitors.
With OpenShift, Red Hat has decided to shield you from this complexity and deliver a comprehensive platform, including not only Kubernetes at its core, but also all the essential open source tools that make it an enterprise-ready solution you can confidently run in production. Of course, if you already have your own stacks, you can opt out and plug in your existing solutions.
OpenShift – a smarter Kubernetes Platform
 
Let’s look at Figure 1: surrounding Kubernetes are all the areas where Red Hat adds features that are not in Kubernetes by design, among which:
1- A trusted OS foundation: RHEL CoreOS or RHEL
Red Hat has been the leading provider of Linux for business-critical applications for over 20 years and is putting that experience into delivering a SOLID and TRUSTED foundation for running containers in production. RHEL CoreOS shares the same kernel as RHEL but is optimized for running containers and managing Kubernetes clusters at scale: it has a smaller footprint, and its immutable nature makes it easier to install clusters and adds auto-scaling and auto-remediation for workers, among other things. All these features make it the perfect foundation for delivering the same OpenShift experience anywhere, from bare metal to private and public clouds.
2- Automated operations
Automated installation and day-2 operations are key OpenShift features that make it easier to administrate and upgrade, and that provide a first-class container platform. The use of operators at the core of the OpenShift 4 architecture is a strong foundation that makes this possible.
OpenShift 4 also includes an extremely rich ecosystem of operator-based solutions, developed by Red Hat and by third-party partners (see the operators catalog for Red Hat hosted operators, or operatorhub.io, a public marketplace created by Red Hat, for community operators too).

OpenShift 4 gives you access to over 180 operators from the integrated catalog
 
3- Developer services
Since 2011, OpenShift has been a PaaS, or Platform-as-a-Service, meaning that its goal is to make developers’ daily lives easier, allowing them to focus on delivering code, with rich out-of-the-box support for languages such as Java, Node.js, PHP, Ruby, Python, and Go, and services like CI/CD, databases, etc. OpenShift 4 offers a rich catalog of over 100 services delivered through Operators, either by Red Hat or by our strong ecosystem of partners.
OpenShift 4 also adds a graphical UI (the developer console) dedicated to developers, allowing them to easily deploy applications to their namespaces from different sources (git code, external registries, Dockerfile…) and providing a visual representation of the application components to show how they interact.
The developer console shows the components of your application and eases interaction with Kubernetes
 
In addition, OpenShift provides the CodeReady set of tools for developers, such as CodeReady Workspaces, a fully containerized web IDE that runs on top of OpenShift itself, providing an IDE-as-a-service experience. Developers who still want to run everything on their laptop can rely on CodeReady Containers, an all-in-one OpenShift 4 running on the laptop.

The integrated webIDE-as-a-service allows you to efficiently develop on Kubernetes/OpenShift
 
OpenShift also offers advanced CI/CD features out-of-the-box, such as a containerized Jenkins with a DSL to accelerate writing your pipelines, or Tekton (now in Tech Preview) for a more Kubernetes-native CI/CD experience. Both solutions offer native integration with the OpenShift console, allowing you to trigger pipelines, view deployments, inspect logs, and so on.
4- Application services
OpenShift allows you to deploy traditional stateful applications alongside cutting-edge cloud-native applications, supporting modern architectures such as microservices and serverless. In fact, OpenShift Service Mesh provides Istio, Kiali, and Jaeger out-of-the-box to support your adoption of microservices. OpenShift Serverless includes Knative, but also joint initiatives with Microsoft such as KEDA to provide Azure Functions on top of OpenShift.

The integrated OpenShift ServiceMesh (Istio, Kiali, Jaeger) helps you with microservices development
 
To reduce the gap between legacy applications and containers, OpenShift now even allows you to migrate your legacy virtual machines to OpenShift itself using Container Native Virtualization (now in Tech Preview), making hybrid applications a reality and easing portability across clouds, both private and public.

A Windows 2019 Virtual Machine running natively on OpenShift with Container Native Virtualization (currently in Tech preview)
 
 
5- Cluster Services
Every enterprise-grade platform requires supporting services like monitoring, centralized logs, security mechanisms, authentication and authorization, and network management. All of these come out-of-the-box with OpenShift, built on open source solutions like Elasticsearch, Prometheus, and Grafana. These solutions are packed with pre-built dashboards, metrics, and alerts born of Red Hat’s experience in monitoring clusters at scale, giving you the most important information for your production right away.
OpenShift also adds essential enterprise services like authentication, with a built-in OAuth provider and integration with your identity providers such as LDAP, Active Directory, OpenID Connect, and so on.
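As an illustration, wiring in an identity provider is a small change to the cluster’s OAuth resource. Below is a sketch for an LDAP provider; the provider name, URL, and attribute choices are placeholders to adapt to your directory:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: corp-ldap                # placeholder name
    mappingMethod: claim
    type: LDAP
    ldap:
      url: "ldap://ldap.example.com/ou=users,dc=example,dc=com?uid"
      insecure: true               # illustration only; use a CA bundle in production
      attributes:
        id: ["dn"]
        preferredUsername: ["uid"]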

Out-of-the-Box Grafana dashboards allows you to monitor your OpenShift cluster
 

Out-of-the-Box Prometheus metrics and alerting rules (+150) allows you to monitor your OpenShift cluster
What’s next?
This rich set of features, and the deep expertise Red Hat has in the Kubernetes ecosystem, are the reasons why OpenShift has a significant head start over other solutions in the market, as we can see in the following figure (see this article for more information).
 

“So far, Red Hat stands out as the market leader with 44 percent market share.
The company is reaping the fruit of its hands-on sales strategy, where they consult and train enterprise developers first and then monetize once the enterprise deploys containers in production.”
(source: https://www.lightreading.com/nfv/containers/ihs-red-hat-container-strategy-is-paying-off/d/d-id/753863)
 
I hope you enjoyed this first part of a series to come, where I will be discussing the benefits that OpenShift adds on top of Kubernetes in every one of these categories.
 
 
Source: OpenShift

Latest Mirantis Addition Provides Developer-Friendly Tool to Run Kubernetes

 
Following Docker Enterprise acquisition, the addition of Kontena resources brings technology and expertise to modernize apps at enterprise scale 
 
Campbell, Calif – February 25, 2020 — Following its November acquisition of Docker Enterprise, Mirantis, the open cloud company, took another step to bolster its Kubernetes lineup, announcing today the addition of the engineering and leadership of cloud container services company Kontena.
 
Kontena provides easy-to-use, fully integrated Kubernetes products for DevOps and software development teams to deploy, run, monitor and operate Kubernetes clusters and workloads on private, public, and hybrid cloud infrastructures running bare metal or virtual machines. The software is open source and available under the Apache 2.0 license. Kontena is currently used by hundreds of start-ups and software development teams working for some of the biggest enterprises in the world. The entire Kontena team will join Mirantis. 
 
Mirantis has made significant investments in Kubernetes, and this latest acquisition will accelerate its product roadmap in multi-cluster management, cluster visibility and insights, and tools for application developers. Mirantis will leverage the IP acquired from Kontena for existing Kubernetes technology in Docker Enterprise, including Docker Kubernetes Service (DKS) and Universal Control Plane (UCP). 
 
“We want to deliver the best Kubernetes experience for developers and enterprises,” said Adrian Ionel, Mirantis CEO. “With a small but talented team, Kontena engineers have built tools that make Kubernetes significantly easier for developers to use. Together, Mirantis and talent acquired from Kontena will extend our core Kubernetes offering, based on Docker Enterprise, with key developer-facing capabilities that deliver faster app development.”
Source: Mirantis

OpenShift Commons Briefing: Data Protection and Disaster Recovery Solutions with Venkat Kolli (Red Hat)

As more and more business-critical applications move to the OpenShift platform, it is important to start thinking about how to protect these applications and their data.
In this briefing, Red Hat’s Venkat Kolli walks through the different failure scenarios that can impact application availability in OpenShift, and the different Backup & Disaster Recovery (DR) solutions designed to protect your OpenShift applications against these failures. While traditional Backup & DR solutions have existed for a while in enterprise data centers, these solutions need to evolve to address the needs of the new container infrastructure. We will explore the differences between traditional approaches to backup & DR and the changes in approach required for OpenShift infrastructure.
Briefing Slides: Data Protection and Disaster Recovery Solutions for OpenShift
Additional Resources:
Product Documentation for Red Hat OpenShift Container Storage 4.2
Feedback:
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
Source: OpenShift

Mirantis will continue to support and develop Docker Swarm

Here at Mirantis, we’re excited to announce our continued support for Docker Swarm, while also investing in new features requested by customers.
Following our acquisition of Docker Enterprise in November 2019, we affirmed at least two years of continued Swarm support, pending discussions with customers. These conversations have led us to the conclusion that our customers want continued support of Swarm without an implied end date.
Mirantis’ goal is to simplify container usage at enterprise scale with freedom of choice for orchestrators. Swarm has a proven track record of running mission critical container workloads in demanding production environments, and our customers can rest assured that Mirantis will continue to support their Swarm investments.
To that end, Mirantis will be continuing to invest in active Swarm development. Recently Mirantis developed Swarm Jobs, a new service mode enabling run-and-done workloads on a Swarm cluster.
In addition, Mirantis is very excited to announce a commitment to the development of Cluster Volume Support with CSI Plugins. Originally discussed at DockerCon 2019, this development proposal received very positive feedback from the community. By leveraging the Container Storage Interface plugin architecture, Swarm will be able to use the growing CSI ecosystem to handle distributed persistent volumes, supporting a wider range of backend storage options and more flexible and intelligent scheduling.
Swarm Jobs and Swarm Volume Support are part of our 2020 Roadmap with dates announced at KubeCon EU.
Source: Mirantis

Tech Preview: Get visibility into your OpenShift costs across your hybrid infrastructure

Do you know if your OpenShift project is currently on budget? If you deploy more containers right now or if OpenShift dynamically increases capacity, would that put your project in the red? 
Red Hat is introducing a new cost management SaaS offering that is included at no additional charge with your Red Hat OpenShift Container Platform subscription. Cost management is an OpenShift Container Platform service that is currently available in Technology Preview. The service, which customers access from cloud.redhat.com/beta, gives you visibility into your costs across on-premises and cloud environments. 
With cost management for OpenShift, you can easily aggregate costs across hybrid cloud infrastructure (on-premises, Amazon Web Services, Azure, with more cloud platforms on the roadmap) and track budget requirements.

 
It’s easy to get started with AWS and Azure – just configure your cloud account to provide the required information and add it as a source for cost management. A unique benefit of Red Hat’s service is that you can also add your on-premises environments if you’re running OpenShift Container Platform 4.3 with metering. For more details, see the “Getting Started with Cost Management” guide.
Once you have your sources set up, you may want to take cost management to the next level with more advanced features. Cost models allow you to apply markup ratios to reflect the real costs of running an environment.
But one of the most powerful features is the use of tagging to map charges to projects and organizations. Tags are used for many things, not just cost management, and your organization may already have a taxonomy for tagging and/or naming. Our own cost management product manager, Sergio Ocón-Cárdenas, wrote a blog with some best practice guidelines for tagging, so take a look at that before jumping into the documentation for “Managing cost data using tagging.”
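As a quick illustration of the idea (the app key and inventory value here are hypothetical, not a prescribed taxonomy), applying the same key/value on both sides lets the cost data line up:

# AWS side: tag the resources, and activate the key as a cost allocation tag
#   Key: app    Value: inventory
# OpenShift side: apply a matching label to the project or its pods
oc label namespace inventory app=inventory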
As I mentioned, cost management is still in Tech Preview, which means we’re providing early access to a feature that is not supported through normal channels. We are giving you the ability to test functionality and provide feedback during the development process. So give cost management a try, and if you have any questions or suggestions, please email costmanagement@redhat.com.
Source: OpenShift