Using Red Hat OpenStack Platform director to deploy co-located Ceph storage – Part One

An exciting new feature in Red Hat OpenStack Platform 11 is full Red Hat OpenStack Platform director support for deploying Red Hat Ceph Storage directly on your overcloud compute nodes. Often called hyperconverged, or HCI (for Hyperconverged Infrastructure), this deployment model places the Red Hat Ceph Storage Object Storage Daemons (OSDs) and storage pools directly on the compute nodes.
Co-locating Red Hat Ceph Storage in this way can significantly reduce both the physical and financial footprint of your deployment without requiring any compromise on storage.

Red Hat OpenStack Platform director is the deployment and lifecycle management tool for Red Hat OpenStack Platform. With director, operators can deploy and manage OpenStack from within the same convenient and powerful lifecycle tool.
There are two primary ways to deploy this type of storage, which we currently refer to as pure HCI and mixed HCI.

Pure HCI: All compute nodes in the overcloud are co-located with Ceph Storage services. You can find a complete deployment scenario in the Hyperconverged Infrastructure Guide available on the Red Hat portal.

Mixed HCI: The overcloud is deployed with both standard compute nodes and co-located compute nodes. This requires the creation of a new custom role in director. Customizing roles is part of the composability features provided with Red Hat OpenStack Platform director. You can find further information about this deployment scenario, including accompanying code available from GitHub, in the Hyperconverged Infrastructure Guide and the Hyperconverged Red Hat OpenStack Platform 10 and Red Hat Ceph Storage 2 Reference Architecture.

In this two-part blog series we are going to focus on the Pure HCI scenario, demonstrating how to deploy an overcloud in which all compute nodes also host Ceph. We do this using the Red Hat OpenStack Platform director. In this example we also implement resource isolation so that the Compute and Ceph services have their own dedicated resources and do not conflict with each other. We then show the results in action with a set of Browbeat benchmark tests.
But first …
Before we get into the actual deployment, let’s take a look at some of the benefits around co-locating storage and compute resources.

Smaller deployment footprint: When you perform the initial deployment, you co-locate more services on each node, consolidating the architecture onto fewer physical servers.

Easier to plan, cheaper to start out: Co-location provides a decent option when your resources are limited. For example, instead of using six nodes, three for Compute and three for Ceph Storage, you can co-locate the storage and use only three nodes.

More efficient capacity usage: You can utilize the same hardware resources for both Compute and Ceph services. For example, the Ceph OSDs and the compute services can take advantage of the same CPU, RAM, and solid-state drive (SSD). Many commodity hardware options provide decent resources that can accommodate both services on the same node.

Resource isolation: Red Hat addresses the noisy neighbor effect through resource isolation, which you orchestrate through Red Hat OpenStack Platform director.

However, while co-location realizes many benefits, there are some considerations to be aware of with this deployment model. Co-location does not necessarily offer reduced latency in storage I/O. This is due to the distributed nature of Ceph storage: storage data is spread across different OSDs, and OSDs are spread across several hyperconverged nodes. An instance on one node might need to access storage data from OSDs spread across several other nodes.
The Lab
Now that we fully understand the benefits and considerations for using co-located storage, let’s take a look at a deployment scenario to see it in action. 

We have developed a scenario using Red Hat OpenStack Platform 11 that deploys and demonstrates a simple “Pure HCI” environment. Here are the details.
We are using three nodes for simplicity:

1 director node
1 Controller node
1 Compute node (Compute + Ceph)

Each of these nodes has the same specifications:

Dell PowerEdge R530
Intel Xeon CPU E5-2630 v3 @ 2.40GHz – 8 cores with hyper-threading, providing us with a total of 16 threads.
32 GB RAM
278 GB SSD

Of course for production installs you would need a much more detailed architecture; this scenario simply allows us to quickly and easily demonstrate the advantages of co-located storage. 

This scenario follows these resource isolation guidelines:

Reserve enough resources for 1 Ceph OSD on the Compute node
Reserve enough resources to potentially scale an extra OSD on the same Compute node
Plan for instances to use 2 GB of RAM on average, but reserve 0.5 GB per instance on the Compute node for overhead.
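As a rough sketch, the arithmetic behind these guidelines might look like the following. The 3 GB-per-OSD figure is a common rule of thumb, and the helper function itself is illustrative, not something taken from the director templates:

```python
# Illustrative sketch of the reserved-memory math behind HCI resource
# isolation (feeds nova's reserved_host_memory_mb setting). The 3 GB-per-OSD
# figure is a rule-of-thumb assumption, not a value from director.

GB = 1024  # MB per GB

def reserved_host_memory_mb(total_mem_gb, num_osds, avg_instance_gb, overhead_gb):
    """Estimate memory to withhold from nova for Ceph OSDs plus per-instance overhead."""
    osd_mem_mb = num_osds * 3 * GB                 # ~3 GB per OSD
    usable_mb = total_mem_gb * GB - osd_mem_mb     # what's left for instances
    # How many average-sized instances fit, counting their overhead:
    num_instances = usable_mb // ((avg_instance_gb + overhead_gb) * GB)
    # Reserve the OSD memory plus the overhead of every instance:
    return int(osd_mem_mb + num_instances * overhead_gb * GB)

# This lab's numbers: 32 GB node, room for 2 OSDs (1 now, 1 future),
# 2 GB average instances with 0.5 GB overhead each.
print(reserved_host_memory_mb(32, 2, 2, 0.5))
```

For this lab's 32 GB node, the sketch works out to reserving roughly 11 GB from nova, leaving room for about ten 2 GB instances alongside the OSDs.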

This scenario uses network isolation using VLANs:

The default Compute node deployment templates shipped with tripleo-heat-templates do not attach Compute nodes to the Storage Management network, so we need to change that. They require a simple modification to accommodate the Storage Management network, which is illustrated later.
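As a preview, the change is along these lines (the file paths shown are the defaults shipped with tripleo-heat-templates; your environment files and NIC layout will differ): the Compute role's Storage Management port is mapped to a no-op template by default and needs to point at a real port definition, and the Compute NIC config needs a VLAN for that network.

```yaml
# Sketch only -- adjust paths and NIC devices for your environment.
resource_registry:
  # Replace the default no-op mapping so Compute nodes get a port on the
  # Storage Management network:
  OS::TripleO::Compute::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
```

The matching NIC configuration template for the Compute role then gets an additional VLAN entry using the standard `StorageMgmtNetworkVlanID` and `StorageMgmtIpSubnet` parameters.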

Now that we have everything ready, we are set to deploy our hyperconverged solution! But you’ll have to wait for next time for that so check back soon to see the deployment in action in Part Two of the series!

Want to find out how Red Hat can help you plan, implement and run your OpenStack environment? Join Red Hat Architects Dave Costakos and Julio Villarreal Pelegrino in “Don’t fail at scale: How to plan, build, and operate a successful OpenStack cloud” today.
For full details on architecting your own Red Hat OpenStack Platform deployment check out the official Architecture Guide. And for details about Red Hat OpenStack Platform networking see the detailed Networking Guide.
Source: Red Hat Stack