“Hyper-converged” is a strategy for running services. Traditionally, we would instinctively split our concerns into separate groups such as Storage and Compute, keeping each layer isolated for the team responsible for that component. In a hyper-converged deployment, we instead run all of these roles on the same set of systems, which is more cost-effective now that modern hardware is fast enough to handle them together.

Deploying OpenStack and Ceph together in a hyper-converged cluster, for example, makes much more effective use of hardware in terms of both density (i.e. how much space a cluster consumes in a data center) and resource utilization (see the inventory sketch below). Hardware is expensive, so fully utilizing both the rack space and the available pool of resources ends up being more cost-effective.
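To make that concrete, here is a minimal inventory sketch loosely modeled on kolla-ansible's multinode inventory file. The hostnames are hypothetical placeholders and the exact group names vary by tooling; the essential idea is that the same hosts appear under both the compute (OpenStack) and storage (Ceph) groups, rather than each role getting its own dedicated machines:

```
# Hypothetical hyper-converged layout: every host carries both roles.
[control]
node1

[compute]
node1
node2
node3

[storage]
node1
node2
node3
```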

As an example, you might have six servers: three for OpenStack and three for Ceph, none of which may be fully utilized. Each server takes up space, draws power, and carries a base per-server cost that you have to spend at minimum. If you instead deploy those roles across a three-server hyper-converged cluster, you have effectively halved the base cost of the cluster and can add servers as additional resources are needed, as the sketch below illustrates.
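Here is a small Python sketch of that arithmetic; the per-server base cost is a hypothetical placeholder, not real pricing:

```python
# Hypothetical cost comparison between a separated and a hyper-converged
# cluster. PER_SERVER_BASE_COST is a placeholder figure, not real pricing.
PER_SERVER_BASE_COST = 5_000  # minimum spend per server (USD)

# Separated layout: 3 dedicated OpenStack servers + 3 dedicated Ceph servers.
separated_servers = 3 + 3
separated_cost = separated_servers * PER_SERVER_BASE_COST

# Hyper-converged layout: 3 servers, each running both compute and storage.
hyperconverged_servers = 3
hyperconverged_cost = hyperconverged_servers * PER_SERVER_BASE_COST

print(f"Separated:       {separated_servers} servers, ${separated_cost:,}")
print(f"Hyper-converged: {hyperconverged_servers} servers, ${hyperconverged_cost:,}")
print(f"Base-cost reduction: {1 - hyperconverged_cost / separated_cost:.0%}")
```

Running this prints a 50% base-cost reduction, matching the halving described above; real savings depend on how fully each role utilizes the shared hardware.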

Our Private Cloud Core is an example of a hyper-converged system, but we also recommend converged approaches (compute and storage only) once the PCC's Compute and Storage resources are optimally utilized. Typically, the Networking component of a hyper-converged system doesn't need to scale at the same rate as Compute and Storage. For more information, check out Converged vs Hyper-Converged.