RDO Train Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Train for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Train is the 20th release from the OpenStack project, which is the work of more than 1115 contributors from around the world.
The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/7/cloud/x86_64/openstack-train/. While we normally also have the release available via http://mirror.centos.org/altarch/7/cloud/ppc64le/ and http://mirror.centos.org/altarch/7/cloud/aarch64/, there have been issues with the mirror network which are currently being addressed via https://bugs.centos.org/view.php?id=16590.
The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.
All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.
PLEASE NOTE: At this time, RDO Train provides packages for CentOS 7 only. We plan to move RDO to CentOS 8 as soon as possible during the Ussuri development cycle, so Train will be the last release supporting CentOS 7.

Interesting things in the Train release include:

OpenStack Ansible, which provides Ansible playbooks and roles for deployment, added Murano support and fully migrated from rsyslog to systemd-journald. This project makes it possible to deploy OpenStack from source in a way that is scalable while also being simple to operate, upgrade, and grow.
Ironic, the Bare Metal service, aims to produce an OpenStack service and associated libraries capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner. Among a myriad of other highlights, this project added basic support for building software RAID and now offers a new tool for building ramdisk images, ironic-python-agent-builder.

Other improvements include:

Tobiko is now available within RDO! This project is an OpenStack testing framework focusing on areas mostly complementary to Tempest. While Tempest's main focus has been testing the OpenStack REST APIs, Tobiko's main focus is testing OpenStack system operations while “simulating” the use of the cloud as a final user would. Tobiko's test cases populate the cloud with workloads such as instances, allow the CI workflow to perform an operation such as an update or upgrade, and then run test cases to validate that the cloud workloads are still functional.
Other highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/train/highlights.html.

Contributors
During the Train cycle, we saw the following new RDO contributors:

Joel Capitao
Zoltan Caplovic
Sorin Sbarnea
Sławek Kapłoński
Damien Ciabrini
Beagles
Soniya Vyas
Kevin Carter (cloudnull)
fpantano
Michał Dulko
Stephen Finucane
Sofer Athlan-Guyot
Gauvain Pocentek
John Fulton
Pete Zaitcev

Welcome to all of you and Thank You So Much for participating!
But we wouldn’t want to overlook anyone. A super massive Thank You to all 65 contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories:

Adam Kimball
Alan Bishop
Alex Schultz
Alfredo Moralejo
Arx Cruz
Beagles
Bernard Cafarelli
Bogdan Dobrelya
Brian Rosmaita
Carlos Goncalves
Cédric Jeanneret
Chandan Kumar
Damien Ciabrini
Daniel Alvarez
David Moreau Simard
Dmitry Tantsur
Emilien Macchi
Eric Harney
fpantano
Gael Chamoulaud
Gauvain Pocentek
Jakub Libosvar
James Slagle
Javier Peña
Joel Capitao
John Fulton
Jon Schlueter
Kashyap Chamarthy
Kevin Carter (cloudnull)
Lee Yarwood
Lon Hohberger
Luigi Toscano
Luka Peschke
marios
Martin Kopec
Martin Mágr
Matthias Runge
Michael Turek
Michał Dulko
Michele Baldessari
Natal Ngétal
Nicolas Hicher
Nir Magnezi
Otherwiseguy
Gabriele Cerami
Pete Zaitcev
Quique Llorente
Radomiropieralski
Rafael Folco
Rlandy
Sagi Shnaidman
shrjoshi
Sławek Kapłoński
Sofer Athlan-Guyot
Soniya Vyas
Sorin Sbarnea
Stephen Finucane
Steve Baker
Steve Linabery
Tobias Urdin
Tony Breeds
Tristan de Cacqueray
Victoria Martinez de la Cruz
Wes Hayutin
Yatin Karel
Zoltan Caplovic

The Next Release Cycle
At the end of one release, focus shifts immediately to the next, Ussuri, which has an estimated GA the week of 11-15 May 2020. The full schedule is available at https://releases.openstack.org/ussuri/schedule.html.
Twice during each release cycle, RDO hosts official Test Days shortly after the first and third milestones; therefore, the upcoming test days are 19-20 December 2019 for Milestone One and 16-17 April 2020 for Milestone Three.
Get Started
There are three ways to get started with RDO.
To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.
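On a CentOS 7 machine, the usual RDO quickstart looks roughly like this (a sketch; consult the RDO documentation for the current steps):

$ sudo yum install -y centos-release-openstack-train
$ sudo yum install -y openstack-packstack
$ sudo packstack --allinone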
For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.
Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.
Get Help
The RDO Project participates in a Q&A service at https://ask.openstack.org. We also have the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.
The #rdo channel on Freenode IRC is also an excellent place to find and give help.
We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net); however, we have a more focused audience within the RDO venues.
Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.
Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.
Quelle: RDO

Installing Service Mesh is Only the Beginning

A microservices architecture breaks the monolithic application into many smaller pieces and introduces new communication patterns between services, such as fault tolerance and dynamic routing. One of the major challenges in managing a microservices architecture is understanding how services are composed and connected, and how all the individual components operate, from a global perspective down into particular detail.
Besides the advantages of breaking services down into microservices (like agility, scalability, increased reusability, better testability and easy upgrades and versioning), this paradigm also increases the complexity of securing them, because method calls made via in-process communication become many separate network requests that need to be secured. Every new service you introduce needs to be protected from man-in-the-middle attacks and data leaks, needs access control, and needs auditing of who is using which resources and when. Each service may also be written in a different programming language. A Service Mesh like Istio provides traffic control and communication security capabilities at the platform level and frees application writers from those tasks, allowing them to focus on business logic.
But even though the Service Mesh helps to offload the extra coding, developers still need to observe and manage how the services communicate as they deploy an application. With OpenShift Service Mesh, Kiali is packaged along with Istio to make that task easier. In this post we will show how to use Kiali's capabilities to observe and manage an Istio Service Mesh. We will use a reference demo application to demonstrate how Kiali can compare different service versions and how you can configure traffic routing using Istio config resources. Then we will add mutual TLS to all the demo components in order to make our deployment communications more secure. Kiali will assist in this process, helping to spot misconfigurations and unprotected communications.
How Does Kiali Work? Using A/B Testing as an Example
One pretty common exercise that a Service Mesh is perfect for is A/B testing to compare application versions, and with a microservices application this can be more complex than with a single monolith. Let's walk through such a comparison, and the traffic routing that supports it, using a reference demo application.
Travel Agency Demo
Travel Agency is a demo microservices application that simulates a travel portal scenario. It creates two namespaces:

The first namespace, travel-portal, deploys an application per city representing a personalized portal where users search for and book travel. In our example we will have three applications: paris-portal, rome-portal and london-portal. Every portal application has two services: the web service handles users coming from the web channel, while the vip service takes requests from priority channels like special travel brokers. All portals consume a travel quote service hosted in the second namespace.
The second namespace, travel-agency, hosts the services that calculate quotes for travel. A main travels service is the business entry point for the travel agency. It receives a city and a user as parameters and calculates all the elements that compose a travel budget: airfares, lodging, car reservation and travel insurance. Several services calculate the separate prices, and the travels service is responsible for aggregating them into a single response. Additionally, some users, like vip users, can have access to special discounts, managed by a specific discounts service.

The interaction between all services from the example can be shown in the following picture:

In the next steps we are going to deploy a new version of every service in the travel-agency namespace that will run in parallel with the first deployed version. Let's imagine that the new version adds features that we want to test with live users so we can compare the results. Obviously, in the real world this could be complex and highly dependent on the domain, but for our example we will focus on the response time that portals get, assuming that a slower portal will cause our users to lose interest.
One of the first steps we can do in Kiali is to enable Response time labels on the Graph:

The graph helps us identify services that may have problems. In our example everything is green and healthy, but the response times raise the suspicion that the new version 2 probably has some slower features compared with version 1.
Our next stop will be to take a closer look into the travels application metrics:

Under the Inbound Metrics tab we have data about the portal calls; Kiali can show these metrics split by several criteria. Grouping by app shows that all portals have seen increased response times since the moment version 2 was deployed.

If we show Inbound Metrics grouped by app and version, we spot an interesting difference: response time has increased in general, but portals that handle vip users behave worse.
Also, we can continue using Kiali to investigate and correlate these results with traces:

And also with logs from the workloads, if more information is needed:

Taking Action with Kiali
From our investigation phase we have spotted a slower response time from version 2 and even slower for vip user requests.
There can be multiple strategies from here, like undeploying the whole version 2, partial deployment of version 2 service by service, limiting which users can access the new version, or a combination of all of those.
In our case, we are going to show how we can use Kiali Actions to add Istio traffic routing into our example that can help to implement some of the previous strategies.
Matching routing
The first action we can perform is to add Istio resources that route traffic coming from vip users to version 2 and the rest of the requests to version 1.
Kiali allows you to create Istio configurations from a high-level user interface. From the actions located in the service details we can open the Matching Routing Wizard and discriminate requests using headers, as shown in the picture:
Kiali will create the specific VirtualService and DestinationRule resources under the service. As part of our strategy we will add similar rules for the suspected services: travels, flights, hotels, cars and insurances.
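The generated configuration for the travels service looks roughly like the sketch below. This is an illustration rather than the wizard's exact output: the portal header used to discriminate vip requests and the version subset labels are assumptions based on the demo description.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: travels
spec:
  hosts:
  - travels
  http:
  - match:
    - headers:
        portal:           # hypothetical header identifying vip portal traffic
          exact: vip
    route:
    - destination:
        host: travels
        subset: v2
  - route:
    - destination:
        host: travels
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: travels
spec:
  host: travels
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2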
When we have finished creating Matching Routing for our version 2 services we can check that Kiali has created the correct Istio configuration using the “Istio Config” section:

Once this routing configuration is applied we can see the results in the Response time edges of the Graph:

Now in our example, all traffic coming from vip portals is routed to version 2, while the rest of the traffic uses the previous version 1, which has returned to its normal response time. The graph also shows that vip user requests carry extra load, as they need to access the discounts service.
Weighted routing
If we examine the discounts service, we can see big differences in response time between version 1 and version 2:

Once we have spotted a clear cause for the slower response, we can decide to move most of the traffic to version 1 while keeping some traffic on version 2 to gather more data and observe the differences. This helps limit the impact on the overall performance of the app.
We can use the Weighted Routing Wizard to set 90% of the traffic into version 1 and maintain only a 10% for version 2:
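The resulting route for the discounts service would look something like this sketch (subset names are assumed to follow the same v1/v2 convention as above):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: discounts
spec:
  hosts:
  - discounts
  http:
  - route:
    - destination:
        host: discounts
        subset: v1
      weight: 90          # most traffic goes back to the known-good version
    - destination:
        host: discounts
        subset: v2
      weight: 10          # keep a small sample on version 2 for observation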
Once the Istio configuration is created we can enable Request percentage in the graph and examine the discounts service:

Suspend traffic
Kiali also allows you to suspend traffic partially or totally for a specific destination using the Suspend Traffic Wizard:

This action allows you to stop traffic for a specific workload, in other words, to distribute the traffic among the rest of the connected workloads. Users can also stop the whole service, returning a fast HTTP 503 error, to implement a strategy to “fail sooner” and recover rather than letting slow requests flood the overall application.
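In plain Istio configuration, the “fail sooner” behaviour can be expressed with an abort fault. The following is a minimal sketch of that idea, assuming we stop the whole discounts service; the wizard generates equivalent resources for you:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: discounts
spec:
  hosts:
  - discounts
  http:
  - fault:
      abort:
        httpStatus: 503   # every request fails fast instead of queuing behind a slow backend
        percentage:
          value: 100
    route:
    - destination:
        host: discounts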
Make the Service Mesh Work for You
Microservices scenarios demand good observability tooling and practices. In this post we have shown how to combine Kiali's capabilities to observe, correlate traces via the Jaeger integration, define strategies and perform actions on an Istio-based microservices application.
The Service Mesh is a great tool that solves complex problems introduced by the microservices paradigm. As a component of OpenShift Service Mesh, Kiali provides the key observability features you need to truly understand all the telemetry and distributed tracing coming out of the service mesh. Kiali does the work of correlating and processing the status of the Service Mesh, making it easy to see the state of the mesh at a glance.
No longer do you have to deal with separate consoles or understand how to configure special dashboards. You don't have to learn how to fetch traces for each service or which rules are needed to apply an A/B test. Kiali builds a topology from traffic metrics and combines multiple namespaces in a single view. Using animations, users can identify a slow bottleneck mapped to a slow animation in the graph. And it is all seamlessly integrated under the OpenShift Service Mesh console.
With OpenShift Service Mesh and Kiali, developers have the tools they need to offload the complexity of creating and managing the intraservice communications that are the glue of a microservices-based application.
The post Installing Service Mesh is Only the Beginning appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

5 key elements for a successful cloud migration

As organizations continue to increase their cloud investment to drive business forward, cloud adoption has become integral to IT optimization. Cloud migration allows businesses to be more agile, improve efficiency and provide better customer experiences. Today, the need for stability and flexibility has never been greater.
An optimized IT infrastructure is different for each organization, but it often consists of a combination of public cloud, private cloud and traditional IT environments. In an IDG cloud computing survey, 73 percent of key IT decision-makers reported having already adopted this combination of cloud technology, and another 17 percent intended to do so in the next 12 months.
However, for businesses that are worried about disruption to their operations, adopting a cloud infrastructure doesn’t have to be an all-or-nothing proposition. Companies can start reaping the benefits from cloud technologies while continuing to run assets on existing on-premises environments by incorporating applications into a hybrid cloud model.
5 steps to successfully migrate applications to the cloud
Migrating to a cloud environment can help improve operational performance and agility, workload scalability, and security. From virtually any source, businesses can migrate workloads and quickly begin capitalizing on the following hybrid cloud benefits:

Greater agility with IT resources on demand, which enables companies to scale during unexpected surges or seasonal usage patterns.
Reduced capital expenditure by shifting from an up-front capital expense model to a pay-as-you-go, operating expense approach.
Enhanced security with various options throughout the stack—from physical hardware and networking to software and people.

Before embarking on the cloud migration process, use the steps below to gain a clear understanding of what’s involved.
1. Develop a strategy.
This should be done early and in a way that prioritizes business objectives over technology.
2. Identify the right applications.
Not all apps are cloud friendly. Some perform better on private or hybrid clouds than on a public cloud. Some may need minor tweaking while others need in-depth code changes. A full analysis of architecture, complexity and implementation is easier to do before the migration rather than after.
3. Secure the right cloud provider.
A key aspect of optimization will involve selecting a cloud provider that can help guide the cloud migration process during the transition and beyond. Some key questions to ask include: What tools, including third-party, does the cloud provider have available to help make the process easier? Can it support public, private and multicloud environments at any scale? How can it help businesses deal with complex interdependencies, inflexible architectures or redundant and out-of-date technology?
4. Maintain data integrity and operational continuity.
Managing risk is critical, and sensitive data can be exposed during a cloud migration. Post-migration validation of business processes is crucial to ensure that automated controls are producing the same outcomes without disrupting normal operations.
5. Adopt an end-to-end approach.
Service providers should have a robust and proven methodology to address every aspect of the migration process. This should include the framework to manage complex transactions on a consistent basis and on a global scale. Make sure to spell all of this out in the service-level agreement (SLA) with agreed-upon milestones for progress and results.
IT optimization drives digital transformation
The results of IT optimization—including accelerated adoption, cost-effectiveness, and scalability—will help drive business innovation and digital transformation. Adopting cloud with a phased approach, carefully considering which applications and workloads to migrate, can help companies obtain these benefits without disrupting business operations.
Learn more about improving IT flexibility and IBM services for extending workloads to cloud by reading the smart paper “Optimize IT: Accelerate digital transformation”.
The post 5 key elements for a successful cloud migration appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Expedite innovation by design with IBM Garage

Disruption across all industries has made rapid innovation an existential imperative. IBM Garage is a consultancy within IBM that co-creates with clients across industries, enabling them to adopt a new way of working and deliver innovative solutions rapidly. By integrating design throughout the solution lifecycle, IBM Garage maintains a laser focus on defining and delivering fast, impactful business outcomes that provide real value to end users, moving from an idea to a production pilot in just six to eight weeks. Pilots are used to gather user feedback, test key hypotheses and decide whether to continue to invest in the idea, expand the pilot, or pivot.
IBM Garage Method for Cloud: Expanding design thinking
At the center of our approach is the IBM Garage Method for Cloud, with prescriptive guidance for defining, designing, developing and scaling innovative applications. In the method, Enterprise Design Thinking practices are combined with principles and key practices from lean startup and agile to drive a laser focus on defining, delivering and validating a minimum viable product (MVP). Agile and DevOps practices are used by cross-disciplinary squads to deliver the MVP in weekly iterations at the direction of an empowered product owner from the client.
We apply design thinking to drive alignment between stakeholders and sponsors of an initiative, across business and IT, multiple business units, and within IT across development and operations organizations. The “secret sauce” of the Garage is a two-day Enterprise Design Thinking Workshop where we define an MVP to be built to test key hypotheses. We work across business and IT to ensure that what is being defined is implementable and truly minimal in terms of time and investment. When we do IT-focused projects, we use the same approach to ensure the business of IT is the focus, and that the project itself is also defined in terms of the projected business impact.
Enterprise Design Thinking ensures that what we build focuses on user needs by applying empathy to get to the heart of users' problems, and to design a delightful user experience. We integrate design within the solution lifecycle with our designers working side-by-side with product owners, developers and architects designing and developing solutions in one-week iterations. Every week we do a playback (one of the keys of Enterprise Design Thinking) to the sponsors and stakeholders to review the progress being made on the MVP and ensure there is alignment on the priorities and decisions being made. Most often, the MVPs are intended to be minimal-function pilot production solutions, because we find that quickly putting a working solution in the hands of target users gets the highest-fidelity feedback and is the shortest path to delivering business value.
The IBM Garage experience: Client stories
What does an IBM Garage experience and MVP look and feel like? Hear from our clients:

Mueller, Inc., a provider of construction materials, talks about how “Getting this type of real-world feedback and insight from the field …was a complete game-changer”.
Elaw Tecnologia explains the scope of their MVP and business in a recent blog post.
American Airlines talks about the impact of taking an MVP built with IBM Cloud and scaling it rapidly to meet customer needs impacted by hurricanes in this video.

How could your business benefit from the IBM Garage experience? Schedule a no-cost visit with IBM Garage.
The post Expedite innovation by design with IBM Garage appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

A PodPreset Based Webhook Admission Controller

One of the fundamental principles of cloud native applications is the ability to consume assets that are externalized from the application itself during runtime. This feature affords portability across different deployment targets as properties may differ from environment to environment. This pattern is also one of the principles of the Twelve Factor app and is supported through a variety of mechanisms within Kubernetes. Secrets and ConfigMaps are implementations in which assets can be stored whereas the injection point within an application can include environment variables or volume mounts. As Kubernetes and cloud native technologies have matured, there has been an increasing need to dynamically configure applications at runtime even though Kubernetes makes use of a declarative configuration model. Fortunately, Kubernetes contains a pluggable model that enables the validation and modification of applications submitted to the platform as pods, known as admission controllers. These controllers can either accept, reject or accept with modifications the pod which is attempting to be created.
The ability to modify pods at creation time allows both application developers and platform managers the ability to offer capabilities that surpass any limitation that may be imposed by strict declarative configurations. One such implementation of this feature is a concept called PodPresets which enables the injection of ConfigMaps, Secrets, volumes, volume mounts, and environment variables at creation time to pods matching a set of labels. Kubernetes has supported enabling the use of this feature since version 1.6 and the OpenShift Container Platform (OCP) made it available in the 3.6 release. However, due to a perceived direction change for dynamically injecting these types of resources into pods, the feature became deprecated in version 3.7 and removed in 3.11 which left a void for users attempting to take advantage of the provided capabilities.
As time went on and Kubernetes and OpenShift continued to mature, a new mechanism for providing admission controllers was created. Instead of admission plugins being defined within the API server itself, they can be run externally, with the API server making an HTTP invocation to a remote endpoint. This mechanism, known as a webhook admission controller, gives end users the ability to easily extend the platform with their own set of features. One of the most popular uses of webhook admission controllers is the injection of a sidecar container to support applications running on the Istio Service Mesh. Even though the upstream PodPresets admission plugin was deprecated in OpenShift, there has been continued desire for this type of feature. Thanks to a webhook-based admission solution, this functionality can be restored with minimal changes. The remainder of this entry provides an overview of the webhook admission controller, including its deployment and implementation in an OpenShift environment.
The PodPreset Webhook admission controller project is located on GitHub in a repository called podpreset-webhook within the Red Hat Community of Practice organization and contains all of the resources necessary to deploy the solution. The assets contained within the repository consist of a set of OpenShift/Kubernetes resources that register a new PodPreset Custom Resource Definition (CRD), configure the permissions necessary for the controller to function properly, and facilitate a deployment of a container image. Once started, the container exposes a web server endpoint that listens for requests sent by the API server to determine whether newly created resources are valid and/or should be modified.
ValidatingWebhookConfigurations and MutatingWebhookConfigurations are Kubernetes/OpenShift objects that register the types of resources that should undergo assessment by an external web server, as well as the location of where this component resides within the cluster. In the case of the PodPreset Webhook admission controller, at initialization time a MutatingWebhookConfiguration resource is dynamically created that dictates that all newly created pods should be considered by the web server. The API server sends an AdmissionReview object that contains information related to the newly created resource, including the pod itself. Logic contained within the web server (identical to the included upstream Kubernetes PodPreset admission plugin) determines whether the labels on the pod match a PodPreset custom resource contained within the same namespace as the pod request. If a match is found, the pod is mutated per the rules contained in the PodPreset. Finally, a resulting response is sent back to the API server containing any modifications that should be applied to the original pod (as JSON patches) and whether the object itself is valid. The pod then continues through the remainder of the admission process until the object is persisted in etcd.
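To make that exchange concrete, here is a trimmed sketch of an AdmissionReview response such a web server might return; the field values are illustrative, and the patch field carries a base64-encoded JSON patch in the real payload:

{
  "apiVersion": "admission.k8s.io/v1beta1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<uid copied from the incoming request>",
    "allowed": true,
    "patchType": "JSONPatch",
    "patch": "<base64 of a JSON patch>"
  }
}

Decoded, a patch injecting the FOO=bar environment variable used later in this post could look like:

[
  { "op": "add", "path": "/spec/containers/0/env", "value": [ { "name": "FOO", "value": "bar" } ] }
]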
With an understanding of the functionality of the PodPreset webhook admission controller, let’s deploy it to an OpenShift environment.
As a logged in user with elevated permissions, clone or download the repository to your local machine.
$ git clone https://github.com/redhat-cop/podpreset-webhook
$ cd podpreset-webhook

Next, create a new project called podpreset-webhook
$ oc new-project podpreset-webhook

Deploy the resources to the newly created project
$ oc apply -f deploy/crds/redhatcop_v1alpha1_podpreset_crd.yaml
$ oc apply -f deploy/service_account.yaml
$ oc apply -f deploy/clusterrole.yaml
$ oc apply -f deploy/cluster_role_binding.yaml
$ oc apply -f deploy/role.yaml
$ oc apply -f deploy/role_binding.yaml
$ oc apply -f deploy/secret.yaml
$ oc apply -f deploy/webhook.yaml

Verify the controller pods have started by viewing all pods within the namespace
$ oc get pods

NAME READY STATUS RESTARTS AGE
podpreset-webhook-665f68679b-nnx8n 1/1 Running 0 56s
podpreset-webhook-665f68679b-pn96c 1/1 Running 1 55s

Verify the controller also created the MutatingWebhookConfiguration
$ oc get mutatingwebhookconfiguration mutating-webhook-configuration -o yaml

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  creationTimestamp:
  generation: 1
  name: mutating-webhook-configuration
  resourceVersion: ""
  selfLink: /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations/mutating-webhook-configuration
  uid:
webhooks:
- admissionReviewVersions:
  - v1beta1
  clientConfig:
    caBundle:
    service:
      name: podpreset-webhook
      namespace: podpreset-webhook
      path: /mutate-pods
  failurePolicy: Ignore
  name: podpresets.admission.redhatcop.redhat.io
  namespaceSelector:
    matchExpressions:
    - key: control-plane
      operator: DoesNotExist
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - pods
    scope: '*'
  sideEffects: Unknown
  timeoutSeconds: 30

As expressed in the generated MutatingWebhookConfiguration, a webhook called podpresets.admission.redhatcop.redhat.io was registered, specifying that every pod creation invokes the web server exposed by the podpreset-webhook service within the podpreset-webhook namespace at the /mutate-pods endpoint.
With the webhook server ready to accept requests from the API, let's walk through a scenario that demonstrates the functionality of this solution. As described previously, PodPresets can inject several different types of runtime requirements into applications, including environment variables. For this instance, PodPresets will be used to dynamically inject an environment variable named FOO with a value of bar. To confirm the existence of the environment variable, an example application will be deployed which repeatedly prints out the value of the FOO environment variable every 30 seconds.
First, let’s start by defining a new PodPreset object:
apiVersion: redhatcop.redhat.io/v1alpha1
kind: PodPreset
metadata:
  name: podpreset-example
spec:
  env:
  - name: FOO
    value: bar
  selector:
    matchLabels:
      role: podpreset-example

Save the content to a file called podpreset-example.yaml and execute the following command to add the PodPreset to the project:
$ oc create -f podpreset-example.yaml

Now, create the application:
$ oc run podpreset-webhook-app --image=registry.access.redhat.com/ubi8/ubi-minimal:latest --command=true -- bash -c 'while true; do echo "Value of FOO is: $FOO" && sleep 30; done'

Wait until the application starts to run and then view the logs:
$ oc logs -f $(oc get pods -l deploymentconfig=podpreset-webhook-app -o name)

Value of FOO is:
As seen in the above output, no value is currently present for the environment variable named FOO. In the PodPreset object, only pods with the label “role=podpreset-example” will have this environment variable automatically injected.
Patch the DeploymentConfig for the application to include the requisite label:
$ oc patch dc/podpreset-webhook-app -p '{"spec":{"template":{"metadata":{"labels":{"role":"podpreset-example"}}}}}'

Wait until the new pod has been deployed and view the logs:
$ oc logs -f $(oc get pods -l deploymentconfig=podpreset-webhook-app -o name)

Value of FOO is: bar
Confirm the environment variable has been set to match the value shown in the pod log output by describing the pod itself:
$ oc describe $(oc get pods -l deploymentconfig=podpreset-webhook-app -o name)

Containers:
  podpreset-webhook-app:
    Container ID:   cri-o://571afdbfea25089333a16ec758cadd77e663f0a81da9953374fa167e7bba5f89
    Image:          registry.access.redhat.com/ubi8/ubi-minimal:latest
    Image ID:       registry.access.redhat.com/ubi8/ubi-minimal@sha256:ffbb6e58a87ec743b29214dc8484db0fe5157e8533c09e17590120c80af66dcc
    Port:
    Host Port:
    Command:
      bash
      -c
      while true; do echo "Value of FOO is: $FOO" && sleep 30; done
    State:          Running
      Started:      Thu, 26 Sep 2019 21:00:27 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      FOO:  bar
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nns9t (ro)

As you can see, the environment variable FOO has been set to the value bar as described in the PodPreset API object and applied at runtime thanks to the PodPreset admission controller. Environment variables are only one type of resource that can be managed by this MutatingWebhook. Other use cases include injecting secrets, which may contain certificates or credentials, as volumes for consumption by applications. This decouples the logic necessary to support namespace-level specifications, since one environment may differ from another. The ability to dynamically manipulate resource definitions across a fleet of applications demonstrates the benefits of this functionality, which can be deployed to both OpenShift and Kubernetes platforms.
The post A PodPreset Based Webhook Admission Controller appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Cloud modernization: A holistic approach

Digital transformation can be empowering at an organizational scale. It can help transform the customer experience, power innovation, increase agility and flexibility, reduce operating costs and drive data-based decision-making.
But outdated IT infrastructures and applications too often stand in the way. Genuine digital transformation — in which enterprises take full advantage of digital technologies such as artificial intelligence, automation, connected devices and remote collaboration and communications platforms — benefits from a cloud modernization strategy that involves people, processes and technology.
While several types of modernization, including infrastructure modernization, platform modernization, application modernization, business process modernization and cultural/workplace modernization, can enable full digital transformation, advancements in cloud and related technologies can have a significant impact on each of them.
Effective digital modernization initiatives are typically coordinated efforts. It makes little sense, for example, to launch an application modernization project in isolation when its effectiveness hinges on modernized infrastructures, platforms and business processes based in the cloud.
We believe in a holistic approach.
Creating a coordinated cloud modernization strategy
Organizations that didn’t reap the promised benefits of the cloud may have failed because they tried to run with a piecemeal process. Coordinated modernization efforts should align with the enterprise’s business and cloud strategies. In other words, companies need to crawl before they can walk.
Once strategies are aligned, enterprise IT can develop the cloud modernization plan. To minimize disruption, IT should first identify modernization patterns and potential problems across all realms.
One of the biggest challenges can be converting the business rules integrated into existing systems and apps. How can businesses extract them and centralize these rules? How can IT teams ensure that there is no redundancy when outdated rules are applied to a modern cloud environment? These questions and others should be answered before a cloud modernization plan is enacted.
Other boxes to check in the planning stages of cloud modernization include containerization of workflows and API enablements of business capabilities.
Established systems that have not been modernized are typically built on large, highly integrated applications, and changing specific components in these apps can be complex and costly. To facilitate modernization, IT should first decouple these apps from the underlying infrastructure with APIs. Atomizing these monolithic apps into granular, cloud-based business capabilities then becomes simpler and carries less risk. This enables an environment where change can be isolated and deployment platform decisions can be made in response to application needs and market demands.
Achieving cost efficiencies
When large, established organizations embrace a cloud-first strategy, they typically retain some mainframe capabilities. That’s understandable, as often these existing capabilities drive core business functions and support current profits. They are absolutely essential for running day-to-day operations and ensuring the appropriate level of data protection and application availability. For example, IBM offers IT infrastructure solutions with flexible pricing structures built for a hybrid multicloud environment.
For organizations to effectively scale operations and achieve cost efficiencies, it is important to identify the cloud environment where processes and applications are best suited to run. Selection of on premises or off premises and public, private or hybrid cloud infrastructure is critical. Some processes can be done more efficiently in a distributed environment that uses cloud-based data links such as cloud broker. One excellent example is transactional processing, such as validating a person’s Social Security number and financial records for a loan application. Here the privacy of PII data is imperative — it requires a mission-critical back end for the data and a flexible, public cloud-based front end for filling out the loan app.
Measuring success and simplifying the journey to cloud
Modernizing enterprise IT without measuring the risks and rewards of each step in the journey may not be the best approach. What are the potential benefits and pitfalls of change versus the status quo? What tools and techniques should be deployed to measure results? These all are essential questions that IT and business decision-makers should consider.
Cloud-based modernization of enterprise IT requires detailed planning and collaboration. All facets of modernization should be in sync to achieve meaningful digital transformation. Enterprises should tread lightly on the road to cloud adoption. Each cloud journey is unique, and while there are best practices, organizations should learn as they go and adapt as needed.
Risk-reward assessments can be a helpful tool for measuring the success of cloud modernization — and for mitigating failure. We’ll highlight some strategies for measuring these risks and rewards in a forthcoming article.
Learn more about how guidance, migration, modernization, cloud-native application development and managed services from IBM professionals can help your journey to cloud.
The post Cloud modernization: A holistic approach appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Creating a GPU-enabled node with OpenShift 4.2 in Amazon EC2

OpenShift Container Platform 4 uses the Machine API operator to fully automate infrastructure provisioning. The Machine API provides full stack infrastructure automation on public or private clouds.
With the release of OpenShift Container Platform 4.2, administrators can easily define GPU-enabled nodes in EC2 using the Machine API and the Node Feature Discovery operator.
These instructions assume that you have OpenShift Container Platform 4 installed in AWS using the Installer Provided Infrastructure (IPI) installation method.
Machines and Machine Sets
The Machine API operator defines several custom resources to manage nodes as OpenShift objects. These include Machines and MachineSets.

A Machine defines instances with a desired configuration in a given cloud provider.
A MachineSet ensures that the specified number of machines exist on the provider. A MachineSet can scale machines up and down, providing self-healing functionality for the infrastructure.

The Machine/MachineSet abstraction allows OpenShift Container Platform to manage nodes the same way it manages pods in replica sets. They can be created, deleted, updated, scaled, and destroyed from the same object definition.
In this blog post we copy and modify a default worker MachineSet configuration to create a GPU-enabled MachineSet (and Machines) for the AWS EC2 cloud provider.
NOTE: This blog post shows how to deploy a GPU-enabled node running Red Hat Enterprise Linux CoreOS. With OpenShift Container Platform 4.2, GPUs are supported on Red Hat Enterprise Linux (RHEL) 7 nodes only, so the process shown here is not supported. Please see the release notes for details.
Add a GPU Node
First view the existing nodes, Machines, and MachineSets.
Each node is an instance of a machine definition with a specific AWS region and OpenShift role.
$ oc get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-131-156.us-east-2.compute.internal Ready master 21m v1.14.6+c07e432da
ip-10-0-132-241.us-east-2.compute.internal Ready worker 16m v1.14.6+c07e432da
ip-10-0-144-128.us-east-2.compute.internal Ready master 21m v1.14.6+c07e432da
ip-10-0-151-24.us-east-2.compute.internal Ready worker 15m v1.14.6+c07e432da
ip-10-0-166-12.us-east-2.compute.internal Ready worker 16m v1.14.6+c07e432da
ip-10-0-173-34.us-east-2.compute.internal Ready master 21m v1.14.6+c07e432da

The Machines and MachineSets exist in the openshift-machine-api namespace. Each worker machine set is associated with a different availability zone within the AWS region. The installer automatically load balances workers across availability zones.
$ oc get machinesets -n openshift-machine-api
NAME DESIRED CURRENT READY AVAILABLE AGE
openshift-blog-txxtf-worker-us-east-2a 1 1 1 1 22m
openshift-blog-txxtf-worker-us-east-2b 1 1 1 1 22m
openshift-blog-txxtf-worker-us-east-2c 1 1 1 1 22m

Right now there is only one worker machine per machine set, though a machine set could be scaled to add a node in a particular region and zone.
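For example, adding a second worker in us-east-2a is a single scale operation (shown for illustration only; we leave the replica counts at 1 in this walkthrough):

$ oc scale machineset openshift-blog-txxtf-worker-us-east-2a -n openshift-machine-api --replicas=2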
$ oc get machines -n openshift-machine-api | grep worker
openshift-blog-txxtf-worker-us-east-2a-8grnb running m4.large us-east-2 us-east-2a 51m
openshift-blog-txxtf-worker-us-east-2b-vw4ph running m4.large us-east-2 us-east-2b 51m
openshift-blog-txxtf-worker-us-east-2c-qpd4q running m4.large us-east-2 us-east-2c 51m

Make a copy of one of the existing worker MachineSet definitions and output the result to a JSON file. This will be the basis for our GPU-enabled worker machine set definition.
$ oc get machineset openshift-blog-txxtf-worker-us-east-2a -n openshift-machine-api -o json > openshift-blog-txxtf-gpu-us-east-2a.json

Notice that we are replacing “worker” with “gpu” in the file name. This will be the name of our new MachineSet.
Edit the JSON file. Make the following changes to the new MachineSet definition:
Change the instance type of the new MachineSet definition to p3, which includes an NVIDIA Tesla V100 GPU. Read more about AWS P3 instance types: https://aws.amazon.com/ec2/instance-types/#Accelerated_Computing
$ jq .spec.template.spec.providerSpec.value.instanceType openshift-blog-txxtf-gpu-us-east-2a.json
"p3.2xlarge"

Change the name and self link to a unique name that identifies the new MachineSet. Then delete the status section from the definition.
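If you prefer to make these edits non-interactively, a jq invocation can handle all of them at once. This is a sketch that assumes the default field layout of an IPI-generated MachineSet:

$ jq '.metadata.name = "openshift-blog-txxtf-gpu-us-east-2a"
    | .metadata.selfLink = "/apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/openshift-blog-txxtf-gpu-us-east-2a"
    | .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] = "openshift-blog-txxtf-gpu-us-east-2a"
    | .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] = "openshift-blog-txxtf-gpu-us-east-2a"
    | .spec.template.spec.providerSpec.value.instanceType = "p3.2xlarge"
    | del(.status)' \
  openshift-blog-txxtf-gpu-us-east-2a.json > tmp.json && mv tmp.json openshift-blog-txxtf-gpu-us-east-2a.json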
A diff of the original worker definition and the new GPU-enabled node definition looks like this:
$ oc -n openshift-machine-api get machineset openshift-blog-txxtf-worker-us-east-2a -o json | diff openshift-blog-txxtf-gpu-us-east-2a.json -
10c10
<         "name": "openshift-blog-txxtf-gpu-us-east-2a",
---
>         "name": "openshift-blog-txxtf-worker-us-east-2a",
13c13
<         "selfLink": "/apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/openshift-blog-txxtf-gpu-us-east-2a",
---
>         "selfLink": "/apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/openshift-blog-txxtf-worker-us-east-2a",
21c21
<             "machine.openshift.io/cluster-api-machineset": "openshift-blog-txxtf-gpu-us-east-2a"
---
>             "machine.openshift.io/cluster-api-machineset": "openshift-blog-txxtf-worker-us-east-2a"
31c31
<             "machine.openshift.io/cluster-api-machineset": "openshift-blog-txxtf-gpu-us-east-2a"
---
>             "machine.openshift.io/cluster-api-machineset": "openshift-blog-txxtf-worker-us-east-2a"
60c60
<             "instanceType": "p3.2xlarge",
---
>             "instanceType": "m4.large",
104a105,111
>     },
>     "status": {
>         "availableReplicas": 1,
>         "fullyLabeledReplicas": 1,
>         "observedGeneration": 1,
>         "readyReplicas": 1,
>         "replicas": 1

Create the new MachineSet from the definition.
$ oc create -f openshift-blog-txxtf-gpu-us-east-2a.json
machineset.machine.openshift.io/openshift-blog-txxtf-gpu-us-east-2a created

View the new MachineSet.
$ oc -n openshift-machine-api get machinesets | grep gpu
openshift-blog-txxtf-gpu-us-east-2a 1 1 1 1 4m21s

The MachineSet replica count is set to “1” so a new Machine object is created automatically. View the new Machine object.
$ oc -n openshift-machine-api get machines | grep gpu
openshift-blog-txxtf-gpu-us-east-2a-rd665 running p3.2xlarge us-east-2 us-east-2a 4m36s

Eventually a node will come up based on the new definition. Find the node name.
$ oc -n openshift-machine-api get machines | grep gpu
openshift-blog-txxtf-gpu-us-east-2a-rd665 running p3.2xlarge us-east-2 us-east-2a 5m8s

View the metadata associated with the new node, including its labels. You can see the instance-type, operating system, zone, region, and hostname.
$ oc get node ip-10-0-132-138.us-east-2.compute.internal -o json | jq .metadata.labels
{
  "beta.kubernetes.io/arch": "amd64",
  "beta.kubernetes.io/instance-type": "p3.2xlarge",
  "beta.kubernetes.io/os": "linux",
  "failure-domain.beta.kubernetes.io/region": "us-east-2",
  "failure-domain.beta.kubernetes.io/zone": "us-east-2a",
  "kubernetes.io/arch": "amd64",
  "kubernetes.io/hostname": "ip-10-0-132-138",
  "kubernetes.io/os": "linux",
  "node-role.kubernetes.io/worker": "",
  "node.openshift.io/os_id": "rhcos"
}

Note that there is no need to specify a namespace for the node. The node definition is cluster scoped.
Deploy the Node Feature Discovery Operator
After the GPU-enabled node is created, it’s time to discover the GPU enabled node so it can be scheduled. The first step in this process is to install the Node Feature Discovery (NFD) operator.
The NFD operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift.
Install the Node Feature Discovery operator from OperatorHub in the OpenShift Container Platform console.

Once the NFD operator is installed from OperatorHub, select Node Feature Discovery from the installed operators list and select Create instance. This installs the openshift-nfd operator into the openshift-operators namespace.

Verify the operator is installed and running.
$ oc get pods -n openshift-operators
NAME READY STATUS RESTARTS AGE
nfd-operator-fd55688bd-4rrkq 1/1 Running 0 18m

Next, browse to the installed operator in the console. Select Create Node Feature Discovery.

Select Create to build a NFD Custom Resource. This will create NFD pods in the openshift-nfd namespace that poll the OpenShift nodes for hardware resources and catalogue them.

After a successful build, verify that an NFD pod is running on each node.
$ oc get pods -n openshift-nfd
NAME READY STATUS RESTARTS AGE
nfd-master-mc99f 1/1 Running 0 51s
nfd-master-t7rrl 1/1 Running 0 51s
nfd-master-w9pgx 1/1 Running 0 51s
nfd-worker-5wwnw 1/1 Running 2 51s
nfd-worker-h2p2p 1/1 Running 2 51s
nfd-worker-n99l2 1/1 Running 2 51s
nfd-worker-xmmqx 1/1 Running 2 51s

The NFD operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de. View the NVIDIA GPU discovered by the NFD operator.
$ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'
Roles: worker
feature.node.kubernetes.io/pci-1013.present=true
feature.node.kubernetes.io/pci-10de.present=true
feature.node.kubernetes.io/pci-1d0f.present=true

10DE appears in the node feature list for the GPU-enabled node we defined, so the NFD operator correctly identified the hardware on the node created from our GPU-enabled MachineSet.
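With that label in place, workloads can target the GPU node through an ordinary nodeSelector. A minimal sketch follows; the pod name and image are illustrative, and a real GPU workload would additionally need the NVIDIA drivers and device plugin covered in the follow-up posts:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload                # hypothetical example pod
spec:
  nodeSelector:
    feature.node.kubernetes.io/pci-10de.present: "true"   # label applied by NFD above
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal:latest
    command: ["sleep", "infinity"]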
Conclusion
And that’s it! The Machine API lets us define, template, and scale nodes as easily as pods in ReplicaSets. We used the worker MachineSet definition as a template to create a GPU-enabled MachineSet definition with a different AWS instanceType. After we built a new node from this definition, we installed the Node Feature Discovery operator from OperatorHub. It detected the GPU and labeled the node so the GPU can be exposed to OpenShift’s scheduler. Taken together this is a great example of the power and simplicity of full stack automation in OpenShift Container Platform 4.2.
Subsequent blog posts will describe the process for loading GPU drivers and running jobs. The approaches differ slightly depending on whether the GPU drivers and libraries are installed directly on the host or deployed via a pod daemon set.
Resources

AWS Adds Nvidia GPUs

Accelerated Computing with Nvidia GPU on OpenShift

OpenShift 4 Architecture

The post Creating a GPU-enabled node with OpenShift 4.2 in Amazon EC2 appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

IBM Cloud news: Latest enhancements and client wins in regulated industries

IBM shared news of numerous enhancements to its IBM Cloud and AI portfolio this week, reports EnterpriseAI. The new capabilities announced make it easier for businesses to scale AI across clouds. The innovations are designed to help organizations overcome barriers to AI and can be run across any cloud with the IBM Cloud Pak for Data platform.
Enhanced public cloud security features were also announced this week. These include state-of-the-art cryptographic technology for the cloud called “Keep Your Own Key,” increased bandwidth for its next-gen virtual servers on the cloud, and new multizone regions, adding to the global IBM cloud data center footprint.
Arvind Krishna, IBM Senior Vice President for Cloud and Cognitive Software said in a press release that the announcements unveil “new capabilities for the IBM public cloud, designed to provide clients with the highest available level of security, leading data protection and enterprise-grade infrastructure to run Red Hat OpenShift.”
Client wins span transportation, banking and healthcare
Highly regulated industries, such as banking, healthcare and transportation, have layers of compliance requirements and hefty fines enacted if regulations aren’t met.
IBM is gaining in this field due to its highly secure public cloud. According to an IBM press release, clients in highly regulated industries such as banking and healthcare are choosing IBM Cloud for mission-critical workloads. Such clients include Aegean Airlines, BNP Paribas, Elaw Tecnologia SA, and Home Trust.
Clients want trust, security
Hillery Hunter, Vice President and CTO of IBM Cloud, spoke with TechRepublic about data security and the importance of a cloud partner businesses can trust while at the Gartner IT Symposium.
“When people are thinking about the cloud journey, there are so many human factors, but there’s also business factors, there’s regulatory concerns, security concerns,” shared Hunter. “To kind of start from the side of the human factors, people that are looking at a cloud journey want to be sure they have developers that are ready for that.”
“When we look at things from the kinds of things that a risk officer, a security officer, an information officer in a major corporation thinks about,” Hunter continued, “what we see is their concerns about public cloud are often related to regulations and privacy. In the IBM public cloud and with things like our Watson AI services, we have had a policy since day one where we said, ‘Your data is your data. We will never leverage your data that you’ve given to us to do your business on the public cloud. You have not given that data to us for use for any other purposes other than yours. We won’t use that data to train our AI models or to train AI models for other clients.’”
Read the full news coverage from EnterpriseAI and TechRepublic and learn more about the latest announcements from IBM.
The post IBM Cloud news: Latest enhancements and client wins in regulated industries appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Introducing OpenShift Container Storage 4.2

Red Hat OpenShift Container Storage is software-defined storage integrated with and optimized for Red Hat OpenShift Container Platform. It runs anywhere OpenShift does: on-premise or in the public cloud. OpenShift Container Storage 4.2 is built on Red Hat Ceph® Storage, Rook, and NooBaa to provide container native storage services that support block, file, and object services. For the initial 4.2 release, OpenShift Container Storage will be supported on OpenShift platforms deployed on Amazon Web Services and VMware.
Red Hat OpenShift Container Storage supports a variety of traditional and cloud-native workloads including:

block storage for databases and messaging.
shared file storage for continuous integration and data aggregation.
object storage for archival, backup, and media storage.

With OpenShift 4.x, Red Hat has rearchitected OpenShift to bring the power of Kubernetes Operators to our enterprise-grade Kubernetes distribution, automating complex workflows such as deployment, bootstrapping, configuration, provisioning, scaling, upgrading, monitoring and resource management. In conjunction, OpenShift Container Storage 4.2 transforms the cloud storage customer experience by making it easier for Red Hat customers to install, upgrade and manage storage on OpenShift.
Focusing on Customer Experience
With OpenShift Container Storage 4.2, we focused on the customer experience from the ground up, with the goal of making storage easy and accessible for all OpenShift administrators — whether they are new to storage or already ninja-level storage gurus — so anyone can install, upgrade and manage the storage in a public-cloud-like experience.
We focused on the most common tasks, from deployment through to day-to-day operations, while emphasizing greater scalability and deeper integration with OpenShift to:

Deploy storage services.
Expand storage.
Provide out-of-the-box dashboards in OpenShift Administrator Console to indicate health and status of the storage.
Alert users when there’s a storage issue.

Simple Deployment
Leveraging the Kubernetes Operator framework, OpenShift Container Storage (OCS) automates a lot of the complexity involved in providing cloud native storage for OpenShift. OCS integrates deeply into cloud native environments by providing a seamless experience for scheduling, lifecycle management, resource management, security, monitoring, and user experience of the storage resources.
To deploy OpenShift Container Storage, the administrator can go to the OpenShift Administrator Console and navigate to the OperatorHub to find the OpenShift Container Storage Operator:

Once you have found the OpenShift Container Storage operator, follow the instructions to install and subscribe to it. After the operator has been installed, navigate to the installed OpenShift Container Storage Operator, where you'll see the capabilities it offers.

To create the storage service, click “Create instance” on the “Storage Cluster” tile.

You will be prompted to select the nodes that will be used to set up the storage cluster. Once the node selection is made and submitted, the OpenShift Container Storage operator will begin setting up an OCS storage cluster, provisioning the underlying storage subsystem, deploying the necessary drivers, and finally setting up the storage classes that allow your OpenShift users to easily provision and consume the storage services that have just been deployed.
Integrated Monitoring and Management
One of the key capabilities provided is the integrated monitoring and management when you deploy OpenShift Container Storage. Specifically, dashboards, monitoring and alerting are provided out-of-the-box. Administrators can easily manage not just their compute but also their storage infrastructure.
If the storage services are impacted, the OpenShift Container Storage Operator will actively perform healing and recovery as needed to ensure data is resilient and available to users. Administrators no longer need to act on healing operations, set up jobs to rebalance or redistribute data, or even upgrade the storage services themselves; the OpenShift Container Storage operator handles all of this. For administrators concerned about automatic upgrades, the OpenShift Container Storage Operator can be configured for manual upgrades to meet organizational maintenance policies or considerations.
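At the Operator Lifecycle Manager level, manual upgrades are expressed through the operator's Subscription. The following is a sketch only; the subscription name, namespace and channel are assumptions about a typical OCS 4.2 installation:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ocs-operator               # assumed subscription name
  namespace: openshift-storage
spec:
  channel: stable-4.2              # assumed channel
  name: ocs-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual      # upgrades wait for administrator approval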
Expansion Made Simple
Gone are the days when expanding a storage cluster required the expertise of a storage specialist. With a simple click on the “Add Capacity” action, the administrator can quickly grow the OCS storage cluster whenever additional capacity is needed.
The administrator can then specify how much additional capacity they wish to add to the storage cluster, without having to know the details of how the underlying storage devices are provisioned.
Consuming Storage Services
OpenShift Container Storage may be used to provide storage for a number of workloads:

Block storage for application development and testing environments that include databases, document stores, and messaging systems.
File storage for CI/CD build environments, web application storage, and for ingest and aggregation of datasets for machine learning.
Multi-cloud object storage for CI/CD build artifacts, origin storage, data archives, and pre-trained machine learning models that are ready for serving.

To enable user provisioning of storage, OCS provides storage classes that are ready-to-use when OCS is deployed.
Users and/or developers can request dynamically provisioned storage by including the storage class in their PersistentVolumeClaim requests for their applications.
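For example, a database workload could request a block volume with a claim like the following sketch; the storage class name assumes the default RBD class that an OCS deployment creates:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-database-storage                       # illustrative claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed default OCS block storage class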
In addition to block and file-based storage classes, OCS introduces native object storage services for Kubernetes. To provide support for object buckets, OCS introduces the Object Bucket Claims (OBC) and Object Buckets (OB) concept, which takes inspiration from Persistent Volume Claims (PVC) and Persistent Volumes (PV).
A generic, dynamic bucket provisioning API similar to Persistent Volumes and Persistent Volume Claims is introduced, so users familiar with the PVC/PV model will recognize the pattern in bucket provisioning. As a result, oc can easily be used to create, list, and delete object bucket claims and object buckets.
Applications that require an object bucket create an Object Bucket Claim (OBC) and refer to the object storage class name, just as you would request a Persistent Volume Claim with a Storage Class name. Let's see how to create the Object Bucket Claim (OBC) using the following my-bucket-claim.yaml:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket-claim
spec:
  generateBucketName: "my-bucket"
  storageClassName: openshift-storage.noobaa.io
  SSL: false

The object bucket permissions are scoped to the namespace. This means that all the OBCs from the same namespace will receive credentials that have permission to use any other bucket provisioned in the namespace.
You can use oc to create the Object Bucket Claim (OBC): 
$ oc create -f my-bucket-claim.yaml

objectbucketclaim.objectbucket.io/my-bucket-claim created
Use oc to confirm that the OB and the accompanying OBC were created:
$ oc get objectbucket

NAME                         STORAGE-CLASS                 CLAIM-NAMESPACE   CLAIM-NAME        RECLAIM-POLICY   PHASE   AGE
obc-bucket-my-bucket-claim   openshift-storage.noobaa.io   bucket            my-bucket-claim   Delete           Bound   80s
After creating the OBC, the following Kubernetes resources are created:

an ObjectBucket (OB), which contains the bucket endpoint information, a reference to the OBC, and a reference to the storage class.
a ConfigMap in the same namespace as the OBC, which contains the endpoint that applications connect to in order to consume the object interface.
a Secret in the same namespace as the OBC, which contains the key pairs needed to access the bucket.

The OBC can, in turn, be attached to a deployment or connected to by referencing its endpoint.
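For example, a deployment can pull the bucket endpoint and credentials straight into its environment. This sketch assumes the ConfigMap and Secret carry the same name as the OBC, as described above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bucket-consumer              # illustrative consumer application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bucket-consumer
  template:
    metadata:
      labels:
        app: bucket-consumer
    spec:
      containers:
      - name: app
        image: registry.access.redhat.com/ubi8/ubi-minimal:latest
        command: ["sleep", "infinity"]
        envFrom:
        - configMapRef:
            name: my-bucket-claim    # bucket endpoint details
        - secretRef:
            name: my-bucket-claim    # access key pair for the bucket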
Resources and Feedback
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
The post Introducing OpenShift Container Storage 4.2 appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

KubeCon 2019 Q&A: Mirantis Will Showcase Its Latest Kubernetes Technology and Launch Something New at Booth P21

The post KubeCon 2019 Q&A: Mirantis Will Showcase Its Latest Kubernetes Technology and Launch Something New at Booth P21 appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis