Ceph Crimson – Performance Testing on NVMe

[Diagram: multi-region Flex Metal Cloud]

We are closely following the Ceph advancements targeting NVMe performance. We use Ceph as the hosted private cloud core of our Infrastructure as a Service product. Our concern, shared by many long-time fans of Ceph, is how Ceph will adapt to NVMe performance when its history has long been rooted in spinning media.

As an example, we currently use very high-performance NVMe drives in almost all of our converged Ceph and OpenStack clusters. The raw IOPS can only be described as stunning. They are extremely fast, much faster than almost any workload you can throw at them. See the specs below:

[Image: Intel SSD DC P4610 performance specifications]
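As a rough way to see what a drive like this can do locally, here is a sketch of the kind of fio job we use for baseline measurements. The device path and job parameters are assumptions for illustration, not an exact reproduction of the spec-sheet test; random reads are non-destructive, but only point this at a drive that is not in use by Ceph or a filesystem:

```ini
; 4K random-read job against the raw NVMe device
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
numjobs=4
runtime=60
time_based
group_reporting

[raw-nvme-randread]
; assumption: adjust to the NVMe namespace under test
filename=/dev/nvme0n1
```

Run it with `fio jobfile.fio`; the `IOPS=` figure in the read summary is the raw number the rest of this post compares against.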

When we introduce Ceph as the HA storage backend, as many of you know, this places quite a bit of software between the CPUs and the storage. While brilliant in stability, scaling, data integrity, and more, one thing Ceph is not brilliant at is giving the CPUs access to all those IOPS.

There are many reasons for this, both on the server and in the network, but the net effect is that, after all of the work Ceph has to do, performance appears to be roughly 1/10 of what the gear can actually deliver. When we connect from a VM to a very lightly loaded Ceph NVMe cluster, this is what we see for reads, for example.

[Graph: roughly 50K read IOPS observed from the VM]
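For the VM-side numbers, a similar fio job run inside the guest against the Ceph-backed block device gives an apples-to-apples comparison with the raw-drive baseline. The device path and parameters here are assumptions, not the exact job behind the graph:

```ini
; 4K random-read job run inside the VM against the Ceph-backed volume
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
numjobs=4
runtime=60
time_based
group_reporting

[ceph-vm-randread]
; assumption: the attached volume appears as /dev/vdb in the guest
filename=/dev/vdb
```

Dividing the `IOPS=` result of this job by the raw-device result is where the roughly 1/10 figure comes from.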

Some modifications and tweaks can certainly bring those numbers up. But the gap between that and what the gear can do is wide enough that we evaluated Ceph against other potential solutions. In particular, when you look at what NVMe over Fabrics can deliver, it hands a very complex problem to the Ceph wizards.

To the whole team working on Ceph Crimson: we are very excited and ready to help in any way we can.


Todd Robinson, InMotion Hosting President

That’s it! Again, please connect with me on LinkedIn or leave a comment below with requests or questions, or even corrections if I managed to mess something up!

Also, if you believe in open source and your company is using a “Mega Public Cloud”, please consider our On-Demand Flex Metal Private Cloud. Built on open source, no lock-in, better costs, and private from the start.

Todd Robinson | President at InMotion Hosting and BoldGrid, and a big OpenStack fan!
