Deploying HA MongoDB on OpenShift using Portworx

This is a guest post written by Gou Rao, CTO and Co-Founder of Portworx, leading the company's technology, market, and solution execution strategy. Previously, Gou was the CTO of Data Protection at Dell, in charge of the technical direction, strategy and architecture. Portworx is a cloud-native storage platform to run persistent workloads deployed on a […]
Source: OpenShift

Why a modern integration strategy is essential for digital transformation

With customer expectations evolving quickly, enterprises are rushing to digitally transform and adopt cloud and AI capabilities. Companies are looking for ways to innovate faster, create personalized customer experiences, and deliver actionable business insights sooner. The key is unlocking the hidden power of data and services.
Unlock data and services to power your digital transformation
From smartphones to Internet of Things (IoT) devices such as smart speakers and appliances, more services and more data are being generated by companies and consumers than ever before. But before you can turn these into a competitive advantage, you first need to be able to access and connect to all of them, no matter where they live across multiple clouds and on-premises environments.
Traditional, centralized integration architectures don't scale well. They simply can't keep up with the speed and volume of integrations required to digitally transform. Modern integration requires speed, flexibility, security and scale, but ripping and replacing simply isn't an option for large organizations.
What’s needed is an approach that balances traditional and modern integration, allowing businesses to use their existing investments, integrate hybrid environments and drive speed and efficiency, all while lowering costs.
Consider an agile integration strategy
By using a decentralized, container-based, microservices-aligned approach to integration, businesses can empower their teams to build and support the volume of integrations required to keep up in today’s digital world. We call this an agile integration strategy.
If you’re interested in learning more about agile integration strategy, please read the IBM Agile Integration Handbook or watch our on-demand webinar.
Adopting an agile integration approach requires a variety of integration capabilities, including API management, messaging, event streams and, of course, data and application integration.
In fact, Gartner predicts that by 2022, “at least 65 percent of large organizations will have implemented an HIP [hybrid integration platform] to power their digital transformation.”
Introducing the IBM Cloud Integration Platform
I’m pleased to announce IBM Cloud Integration Platform, the industry’s most complete hybrid integration platform, designed to support the scale, security and economics required for your integration and digital transformation initiatives.
IBM Cloud Integration Platform helps organizations easily connect applications, services and data across multiple clouds by bringing these capabilities together into a single user experience.
IBM Cloud Integration Platform capabilities include:

API lifecycle management
Data and application integration
Enterprise messaging
Event streams (Apache Kafka)
High-speed data transfer

The platform is container-based and can be deployed in any on-premises or cloud environment that supports Kubernetes. This allows businesses to set up the appropriate organizational models and governance practices they need to support a decentralized and self-service approach to integration. The platform's common asset repository allows teams to find, share and reuse integration resources, helping to improve efficiency.
The new IBM Cloud Integration Platform lets you make use of existing investments while gaining the much-needed flexibility and speed that an agile integration strategy provides, in a secured environment. This can accelerate your organization's ability to deliver the personalized experiences your customers demand, take advantage of cutting-edge AI applications and deliver game-changing business insights to help you make more informed decisions faster.
The bottom line is that this can help companies cut the time and cost of integration by one third.
Interested in learning more about how IBM can help you turn your data and services into a competitive advantage? Visit our IBM Cloud Integration Platform page today.
Source: Thoughts on Cloud

About the February 2019 Cri-O / RunC / Docker vulnerability

What OpenShift Online and OpenShift Dedicated customers should know about the recently announced vulnerability in runc/docker/CRI-O. On February 11th, 2019, details of a vulnerability (CVE-2019-5736) that researchers have confirmed is present in certain versions of runc (impacting Docker and CRI-O) were published. These tools are deployed as part of the OpenShift product and impact the Red […]
Source: OpenShift

Introducing a New Way to Try Red Hat OpenShift Online Pro

Red Hat OpenShift Online hosting has been available since 2011, and to date, well over 4 million applications have been launched on OpenShift Online. This service has been available in two tiers: the free Starter plan and the paid Pro plan. Both plans offered the same OpenShift experience, with the Starter plan geared toward developers […]
Source: OpenShift

OpenShift Protects against Nasty Container Exploit

Have you ever done something that was difficult for you to do, but you did it anyway because you cared about the people it would affect? Maybe it was something people honestly forgot you were even doing because you have been doing it for so long? This week I would like to pause and say […]
Source: OpenShift

New IBM services help companies manage the new multicloud world

Enterprises are going through a huge transformation in how they operate.
Today, according to a recent study, 85 percent of enterprises operate in a multicloud environment. The IBM Institute for Business Value estimates that by 2021, 98 percent of the organizations it studied plan to adopt multicloud architectures.
Navigating multicloud complexity
There are no two ways about it: it's a multicloud world. With most businesses already running, and trying to manage, five or more cloud environments, often from multiple vendors, companies are struggling to keep up. The IBV study also states that just 38 percent of those same enterprises will have the procedures and tools they need to operate in this environment.
This is compounded by the fact that managing these multiple clouds is largely customized and can be complex, with potentially major security implications and a lack of consistent management and integration tools. What's required are services that take an integrated approach, giving companies a single management and operations system that addresses three critical layers:

Business management. Applications that provide digital service ordering, modern service management, and cost governance.
Orchestration. An automation layer that enables services of different types, from different vendors, to be delivered in an optimized manner and made available to consumers.
Operations. A layer that enables infrastructure and operations admins to monitor and maintain systems, including legacy infrastructure, private cloud, public cloud and container environments.

Simplifying multicloud management
To help navigate this complexity, IBM is embracing the multicloud reality and vision for our clients. Our next step is building on a recent partnership expansion with ServiceNow to offer new services designed to help enterprises simplify the management of their IT resources across multiple cloud providers and on-premises environments. Not only can IBM Services for Multicloud Management help you address those three layers, it also includes a unified, self-service experience that enables companies to:

integrate with the ServiceNow Portal to configure and buy cloud services/solutions from multiple cloud providers
offer a global DevOps pipeline and performance management services
offer data center performance, cloud health, container management and AIOps management
provide workload planning, cloud sourcing, procurement and cost and asset management

Introducing new IBM Services for Cloud Strategy and Design
To succeed on any cloud platform and deliver real business value, it’s essential for companies to build the right strategy following a broad assessment. At IBM, we’re working with clients across industries to help them determine which processes, methods and applications need to be moved or modernized for cloud.
With our new IBM Services for Cloud Strategy and Design, IBM is providing a comprehensive set of consulting services to advise clients on their journey to hybrid cloud. Services include design, migration, integration, roadmapping and architectural services, with support for multiple vendor platforms. Our enhanced cloud capabilities, combined with our Cloud Innovate tools, IBM Cloud approach and automated decision accelerators, help companies architect the best holistic approach to cloud.
Dedicated teams of certified IBM consultants work with clients to help design, build and manage their cloud architecture with open, secure multicloud strategies. With the right multicloud support in place, we're helping companies with application development, migration, modernization and management for faster deployment.
There’s no question cloud is here to stay. The question is, is your company ready for a multicloud world?
Learn more about these new IBM Cloud Services and get started today.
Source: Thoughts on Cloud

The Modern Software Platform

This is the first post in an ongoing series which will explore the changes, improvements, and additions we’re planning for the next big release of Red Hat OpenShift, version 4.0. Check in each week for more information that will prepare you for the shift to 4.0. From the time the fledgling Kubernetes community met at […]
Source: OpenShift

What’s the Difference Between OpenShift and Kubernetes?

Over on the Red Hat Blog, Brian “redbeard” Harrington has laid out an excellent new post explaining just how Kubernetes, Red Hat OpenShift and OKD all relate to one another. From his post: At CoreOS we considered Kubernetes to be the “kernel” of distributed systems. We recognized that a well designed job scheduler, operating across […]
Source: OpenShift

Unified Edge Cloud Infrastructure for PNFs, VNFs, Mobile Edge — Webinar Q&A

One of Mirantis’ most popular webinars in 2018 was one we presented with Cloudify as part of our launch of MCP Edge, a version of Mirantis Cloud Platform software tuned specifically for edge cloud infrastructure. In case you missed the webinar, you can watch the recording and view the Q&A below.
Is the low latency characteristic of the edge cloud mainly a function of the cloud being close to the user?
Satish Salagame (Mirantis): The user's proximity to the edge and avoiding multiple network hops is certainly a key component. However, the edge infrastructure design should ensure that unnecessary delays are not introduced, especially in the datapath. This is where EPA (Enhanced Platform Awareness) features like NUMA-aware scheduling, CPU pinning and huge pages all help, and data plane acceleration techniques such as SR-IOV and DPDK speed up packet processing. This is why edge cloud infrastructure has a lot in common with NFVI.
Shay Naeh (Cloudify): There are many use cases that require low latency, and the central cloud as we see it today is going to be broken into smaller edge clouds for use cases like connected cars and augmented reality, which require latency of less than 20ms. But latency is only one reason for the edge. The second reason is that you don't want to transfer all the enormous data points to the central clouds; I call it a data tsunami of information from IoT, connected cars, etc.
Satish: So you want to process everything locally, aggregate it, and send it to the central cloud just for learning, and then propagate what was learned from edge to edge. Say a special use case shows up at one of the edges; you can teach the other edges about it, and they will be informed even though the use case was learned at another edge. So the two main reasons are the new application use cases that require low latency, and the enormous number of data points that will be generated with 5G, IoT, and new scenarios.
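To make the EPA features Satish mentioned concrete, here is a minimal sketch of how a Kubernetes pod can request huge pages and pinned CPUs. This is an illustrative manifest, not something from MCP Edge: the pod and image names are made up, and CPU pinning additionally assumes the kubelet runs with the static CPU manager policy.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-vnf                         # hypothetical name
spec:
  containers:
  - name: vnf
    image: example.com/dpdk-app:latest   # hypothetical image
    resources:
      # requests == limits gives the pod Guaranteed QoS; with the kubelet's
      # static CPU manager policy, integer CPU counts are then pinned to cores
      requests:
        cpu: "4"
        memory: 4Gi
        hugepages-2Mi: 1Gi   # 2 MiB huge pages pre-allocated on the node
      limits:
        cpu: "4"
        memory: 4Gi
        hugepages-2Mi: 1Gi
    volumeMounts:
    - name: hugepages
      mountPath: /dev/hugepages
  volumes:
  - name: hugepages
    emptyDir:
      medium: HugePages      # exposes the huge page mount to the container
```

NUMA-aware placement of those pinned CPUs relative to devices is handled on the node side, which is why these EPA capabilities are properties of the infrastructure rather than of the workload itself.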
Does Virtlet, used in Mirantis Cloud Platform Edge, solve all the problems associated with VNFs?
Satish: Virtlet is certainly one critical building block in solving some of the VNF problems we talked about. It allows a VM-based VNF to run unmodified in a k8s environment. However, it doesn't solve all the problems. For example, if we have a complex VNF with multiple components, each running as a separate VM, and a proprietary VNFM designed for OpenStack or some other VIM, it takes some effort to adapt this VNF to the k8s/Virtlet environment. However, there will be many use cases where Virtlet can be used to design a very efficient, small-footprint k8s edge cloud. It also provides a great transition path as more and more VNFs become containerized and cloud-native.
How does Virtlet compare with Kubevirt?
Satish: See our blog on the topic.
How does the MCP Master of Masters work with Cloudify?
Satish: The MCP Master of Masters is focused on the deployment and lifecycle management of infrastructure. The key differentiation here is that the MCP Master of Masters is focused on infrastructure orchestration and infrastructure management, whereas Cloudify is more focused on workload orchestration. In the edge cloud case, that includes edge applications and VNFs. That’s the fundamental difference between the two, and working together, they complement each other and make a powerful edge stack.
Shay: It’s not only VNFs, it can be any distributed application that you would like to run, and you can deploy it on multiple edges and manage it using Cloudify. The MCP Master of Masters will provide the infrastructure, and Cloudify will run on top of it and provision the workloads on the edges.
Satish: Obviously, the MCP Master of Masters will have to communicate with Cloudify, providing inventory information to the orchestrator as well as profile information for each edge cloud being managed by MCP, so that the orchestrator has all the required information to launch the edge applications and VNFs appropriately in the correct edge environment.
What is the business use case for abstracting away environments with Cloudify?
Ilan Adler (Cloudify): The use cases are reducing transformation cost, reusing existing investments and components (software and hardware) to enable cloud-native Edge, and using a hybrid stack to allow a smoother transition to cloud-native Edge by integrating existing network services with new cloud-native edge management based on Kubernetes.
How is this solution different from an access/core cloud management solution for a telco?
Satish: The traditional access/aggregation telco networks focused on carrying the traffic to the core for processing. However, with Edge computing, there are two important aspects:

Edge clouds close to the user process data at the edge itself
Backhauling the data to the core cloud is avoided

Both are critical as we move to 5G.
Have you considered using a lightweight (small footprint), fast, containerized VM approach like Kata Containers? The benefit is VMs with the speed of containers that act and look like containers in K8s.
Satish: We briefly looked at Kata Containers. Our focus was on key networking capabilities and the ability to handle VNF workloads that need to run as VMs. Based on our research we found Virtlet to be the best candidate for our needs.
What's the procedure to import a VM into Virtlet?
Nick Chase (Mirantis): Virtlet creates VM pods that run regular qcow2 images, so the first step is to create a qcow2 image for your VM. Next, host it at an HTTPS URL, then create a pod manifest just as you would for a Docker container, specifying that the pod should run on a machine that has Virtlet installed. Also, the image URI has a virtlet.cloud prefix indicating that it's a VM pod. Watch a demo of MCP Edge with Virtlet.
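As a rough sketch of what such a manifest looks like, here is a minimal VM pod modeled on Virtlet's published CirrOS example; the exact annotation and node selector depend on how Virtlet is deployed in your cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cirros-vm
  annotations:
    # route this pod to the Virtlet runtime instead of the container runtime
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  nodeSelector:
    extraRuntime: virtlet    # schedule onto a node that has Virtlet installed
  containers:
  - name: cirros-vm
    # the virtlet.cloud prefix marks this as a VM image; the rest is the
    # HTTPS location of the qcow2 image (protocol prefix omitted)
    image: virtlet.cloud/download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

From Kubernetes' point of view this is an ordinary pod: it can be listed, deleted, and attached to like any container-based pod.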
Regarding the networking part, do you still use OvS or proceed with the SR-IOV since it supports interworking with Calico (as of the new version of MCP)?
Satish: In the architecture we showed today, we are not using OvS. It's a pure Kubernetes cluster with CNI-Genie, which allows us to use multiple CNIs: CNI-SRIOV for data plane acceleration, plus Calico or Flannel. Our default for networking is Calico.
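With CNI-Genie, the network(s) for a given pod are typically selected through a pod annotation. A minimal sketch follows; the plugin names must match the CNI configurations actually installed on the node, and the image is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dataplane-pod
  annotations:
    cni: "calico,sriov"   # CNI-Genie attaches one interface per listed plugin
spec:
  containers:
  - name: vnf
    image: example.com/vnf:latest   # hypothetical image
```

This is what lets a VNF pod keep control and management traffic on Calico while its data plane interface uses SR-IOV.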
From your experience in real-world scenarios, is the placement, migration (based on agreed-on SLAs and user mobility), and replication of VNFs a challenging task? If yes, why? Which VNF type is more challenging?
Satish: Yes, these are all challenging tasks, especially with complex VNFs that:

Contain multiple VNF components (VMs)
Require multiple tenant networks (Control, Management, Data planes)
Have proprietary VNF managers
Require complex on-boarding mechanisms.

Does the Cloudify entity behave as an NFVO? Or an OSS/BSS?
Shay: Cloudify can work as an NFVO, a VNFM, and a Service Orchestrator. In essence, it's all a function of which blueprints you choose to utilize. Cloudify is not an OSS/BSS system.
Does the Service Orchestrator include NFVO?
Shay: Yes
In the “Edge Computing Orchestration” slide, there is a red arrow pointing to the public cloud. What types of things is it orchestrating in a public cloud?
Satish: It could orchestrate pretty much everything in the public cloud as well: applications, networking, managed services, infrastructure, etc.
Are the SO and the end-to-end orchestrator the same?
Satish: Yes
In the ETSI model, is Mirantis operating as the NFVI and VIM, and Cloudify acting as the VNFM and NFVO?
Shay: Yes. Mirantis provides the infrastructure and the capability to run workloads on top of it. Cloudify manages the lifecycle operations of each of the VNFs (this is the role of the VNFM, or VNF Manager), and it also creates the workloads and the service chaining between the VNFs. This translates into a service, and stitching together multiple capabilities to provide a service is the responsibility of the NFVO. This service can be complex, span multiple edges and multiple domains and, if needed, connect to core backends.
Satish: As we move to 5G and start dealing with network slicing and complex applications, this becomes even more critical: having an intelligent orchestrator like Cloudify orchestrating the required VNFs and doing the service function chaining in a very dynamic fashion. That will be an extremely powerful thing to combine with MCP.
What is your view on other open source orchestration platforms like ONAP, OSM?
Satish: See our blog comparing different NFV orchestration platforms. Also see SWOT analyses and scorecards in our Open NFV Executive Briefing Center.
What is the function of the end to end orchestrator?
Shay: When you’re going to have multiple edges and different types of edges, you’d like to have one easy, centralized way to manage all those edges. In addition to that, you need to run different operations on different edges, and there are different models to do this. You can have a master orchestrator that can talk to a local orchestrator, and just send commands, and the local orchestrator is a control point for the master orchestrator, but still you need the master orchestrator.
Another, more advanced way to do it is to have an autonomous orchestrator to which the master only delegates work. When there is no connection to the master orchestrator, it will work on its own and manage the lifecycle operations of the edge, including healing, scaling, etc., autonomously and independently. While disconnected it runs as a local orchestrator, and when the connection resumes, it can aggregate all the information and send it to the master orchestrator.
So you need to handle many edges, possibly hundreds or thousands, and you need to do it in a way that is efficient and acceptable for the use case you are trying to orchestrate.
For the OpenStack edge deployment, what is the minimal footprint? A single node?
Satish: A single node is possible, but it is still a work in progress. Our initial goal for MCP Edge is to support a minimum of 3–6 nodes.
With respect to service design (say using TOSCA model), can we define a service having a mix of k8s pods and VM pods?
Nick: I would assume yes because the VM pods are treated as first-class containers, right?
Shay: Yes, definitely. Moreover, Cloudify can actually be the glue that creates a service chain between Kubernetes workloads, pods and VMs, as well as external services like databases. We implement the service broker interface, which provides a way for cloud-native Kubernetes services and pods to access external services as if they were internal native services. This uses the service broker API; tomorrow you can bring the service into Kubernetes, and it will be transparent because you implemented it in a cloud-native way. The service provider exposes a catalog through which you can access an external service, for example a database running on Amazon. That should be very easy.
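As a rough illustration of how such a mixed service might be described, here is a simplified Cloudify TOSCA blueprint combining a container pod and a Virtlet VM pod. The plugin import, node types and image URL are simplified, partly hypothetical placeholders, not a verbatim DSL reference:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - plugin:cloudify-kubernetes-plugin   # assumed plugin name

node_templates:

  web_pod:                              # an ordinary container pod
    type: cloudify.kubernetes.resources.Pod
    properties:
      definition:
        apiVersion: v1
        kind: Pod
        metadata: {name: web}
        spec:
          containers:
          - name: web
            image: nginx:stable

  legacy_vnf_vm:                        # a VM pod handled by Virtlet
    type: cloudify.kubernetes.resources.Pod
    properties:
      definition:
        apiVersion: v1
        kind: Pod
        metadata:
          name: legacy-vnf
          annotations:
            kubernetes.io/target-runtime: virtlet.cloud
        spec:
          nodeSelector: {extraRuntime: virtlet}
          containers:
          - name: legacy-vnf
            image: virtlet.cloud/images.example.com/vnf.qcow2   # hypothetical
    relationships:
      - type: cloudify.relationships.depends_on
        target: web_pod
```

Because both workloads are just pods to Kubernetes, the orchestrator can model, deploy and chain them uniformly.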
How is a new edge site provisioned/introduced? Is some automation possible by the Master of Masters?
Satish: Yes, provisioning of a new edge cloud and its subsequent lifecycle management (LCM) will be handled by the Master of Masters in an automated way. The Master of Masters will have multiple edge cloud configurations and, using those configurations (blueprints), it will be able to provision multiple edge clouds.
Would this become an alternative to OpenStack, which manages VMs today? If not, how would OpenStack be used with Edge cloud?
Satish: Depending on the use cases, an edge cloud may consist of any of the following:

Pure k8s cluster with Virtlet
Pure OpenStack cluster
Combination of OpenStack + k8s clusters

The overall architecture will depend on the use cases and edge applications to be supported.
Can the NFVO be part of the end-to-end orchestrator?
Satish: Yes
Is the application orchestration dynamic?
Satish: Yes, you can have it be dynamic based on inputs in the TOSCA blueprint.
How do you ensure an end-to-end SLA for a critical application connecting between Edge clouds?
Satish: One way to do this is by creating a network slice with the required end-to-end SLA characteristics and launching the critical edge application in that slice.
Source: Mirantis

python-tempestconf’s journey

For those who are not familiar with python-tempestconf, it's a tool for generating a Tempest configuration file, which is required for running Tempest tests against a live OpenStack cluster. It queries a cloud and automatically discovers the settings that weren't provided by the user.
Internal project
In August 2016 the config_tempest tool was decoupled from the Red Hat Tempest fork, and the python-tempestconf repository was created under the redhat-openstack GitHub organization. The tool became an internal tool used to generate tempest.conf in downstream jobs that ran Tempest.
Why we like `python-tempestconf`
The reason is quite simple. We at Red Hat were (and still are) running many different OpenStack jobs, with different configurations, that execute Tempest. That's where python-tempestconf stepped in. We didn't have to implement the logic for creating or modifying tempest.conf within each job configuration; we just used python-tempestconf, which did that for us. And it's not only about generating tempest.conf itself: the tool also creates basic users, uploads an image and creates basic flavors, all of which are required for running Tempest tests.
python-tempestconf was also beneficial for engineers who preferred not to struggle with creating a tempest.conf file from scratch and instead let the tool generate it for them. The generated tempest.conf was sufficient for running simple Tempest tests.
Imagine you have a fresh OpenStack deployment and you want to run some Tempest tests to make sure the deployment was successful. To do that, you can run python-tempestconf, which will do the basic configuration for you and generate a tempest.conf, and then execute Tempest. That's it. Isn't that easy?
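On a fresh deployment the flow might look like this (a sketch: the cloud name is illustrative, and flag spellings should be checked against the tool's --help):

```bash
# Generate tempest.conf, creating the basic resources Tempest needs
# (users, an image, flavors) along the way
discover-tempest-config --os-cloud devstack-admin --create --out etc/tempest.conf
```

Tempest can then be pointed at the generated etc/tempest.conf to run its tests.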
I have to admit, when I joined Red Hat, and more specifically the OpenStack team, I kind of struggled with all the information about OpenStack and Tempest; it was a lot of new information at once. Therefore I really liked that I could generate a tempest.conf to use for running just basic tests. If I had had to write the tempest.conf myself, my learning process would have been a little slower. I'm really grateful that we had the tool at that time.
Shipping in a package
At the beginning of 2017 we started to ship a python-tempestconf RPM package. It's available in the RDO repositories from Ocata onward. The python-tempestconf package is also installed as a dependency of the openstack-tempest package, so if a user installs openstack-tempest, python-tempestconf is installed as well. At that time we also changed the entry point, and the tool is now executed via the discover-tempest-config command. You can read all about that in this article.
Upstream project
By the end of 2017 python-tempestconf had become an upstream project under the OpenStack organization.
We have significantly improved the tool since then, not only its code but also its documentation, which contains all the information a user needs; see here. In my opinion, every project designed for a wider audience of users (and python-tempestconf, as an upstream project, fulfills that condition) should have proper documentation. Following python-tempestconf's documentation, any user should be able to execute it, set the desired arguments and set special Tempest options without any major problems.
I would say there are three major improvements. One is the user documentation, which I've already mentioned. The second and third are improvements to the code itself: os-client-config integration, and a refactoring that simplifies adding new OpenStack services the tool can generate configuration for.
os-client-config is a library for collecting client configuration for using an OpenStack cloud in a consistent way. By importing the library, a user can specify OpenStack credentials in two different ways:

Using OS_* environment variables, which is perhaps the most common way. It requires sourcing credentials before running python-tempestconf. In a packstack environment that's the keystonerc_admin/demo file, and in devstack there is the openrc script.
Using the --os-cloud parameter, which takes one argument: the name of the cloud that holds the required credentials. Those are stored in a clouds.yaml file, as sketched just below.
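A sketch of the second approach (the cloud name and credentials here are made up):

```yaml
# ~/.config/openstack/clouds.yaml
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.com:5000/v3
      username: admin
      password: secret
      project_name: admin
      user_domain_name: Default
      project_domain_name: Default
```

With that file in place, running discover-tempest-config --os-cloud mycloud picks the credentials up automatically.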

The second code improvement was simplifying how new OpenStack services the tool can generate a tempest.conf for are added. If you want a service added, just file a bug in our StoryBoard; see python-tempestconf's contributor guide. If you feel like it, you can also implement it yourself. Adding a new service requires creating a new file representing the service and implementing a few required methods, roughly as sketched below.
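To give a feel for the shape of such a file, here is a heavily simplified, partly hypothetical sketch; the real base classes and the exact required method names are documented in the contributor guide:

```python
# hypothetical sketch of a python-tempestconf service definition;
# consult config_tempest's services package for the real base class and hooks
class MyService(Service):
    def set_extensions(self):
        # discover which API extensions the deployed service exposes
        self.extensions = self.list_extensions()

    def set_versions(self):
        # discover which API versions the endpoint serves
        self.versions = self.list_versions()

    @staticmethod
    def get_service_extension_key():
        # name of the tempest.conf option that stores the discovered extensions
        return 'api_extensions'
```

The tool then merges whatever such a class discovers into the generated tempest.conf.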
To conclude
The tool has gone through major refactoring and has improved significantly since it was moved to its own repository in August 2016. If you're a Tempest user, I'd recommend trying python-tempestconf if you haven't already.
Source: RDO