Cloud hosting for enterprise-level deployments needs a highly scalable storage solution to manage important business data. As technology and best practices move toward cloud-based services to keep up with growing businesses, Ceph was born out of the need for a software-defined storage solution that supports a sustainable model for growth.


Our background on Ceph comes from it being part of our hyper-converged private clouds and the underlying software for our large-scale object storage clusters. For more info, check out our OpenMetal Private Cloud.

What is Ceph?

Ceph is an open source storage platform designed to provide object, block, and file storage from a single system. Built to be self-healing and self-managing, Ceph reduces administration time and costs by dealing with outages on its own. It also aims for completely distributed operation without a single point of failure and is scalable to the exabyte level. Ceph runs on commodity hardware and replicates data to make it fault-tolerant.

How Does Ceph Work?

Ceph can employ five distinct daemons that are all fully distributed and can run on the same set of servers, allowing users to interact directly with them:

  • Ceph monitors (ceph-mon) keep track of active and failed cluster nodes.
  • Ceph managers (ceph-mgr) run alongside monitor daemons to provide additional monitoring and interfaces to external monitoring and management systems.
  • Metadata servers (ceph-mds) store the metadata of inodes and directories.
  • Object storage devices (ceph-osd) store the actual content files.
  • Representational state transfer (RESTful) gateways (ceph-rgw) expose the object storage layer as an interface compatible with the Amazon S3 and OpenStack Swift APIs.

The deployment of one or more Ceph monitors and two or more Ceph object storage devices is called a Ceph Storage Cluster. In action, the Ceph filesystem, Ceph object storage, and Ceph block devices read data from and write data to the Ceph Storage Cluster. Within the Ceph Storage Cluster, the Ceph object storage devices store the data as objects on storage nodes. A Ceph Storage Cluster can have thousands of storage nodes.

Within the storage system itself, Ceph uses distributed object storage, which is a computer data storage architecture that treats data as objects. This is different than other storage architectures that manage data in a file hierarchy, like file systems. Through Ceph’s software libraries, users gain direct access to the reliable autonomic distributed object store (RADOS) object-based storage system, which also provides a foundation for some of Ceph’s features, like RADOS Block Device and the Ceph Filesystem. 
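To make the difference concrete, here is a toy sketch of object-based storage (this is illustrative Python, not the real RADOS API): every object lives in a flat namespace inside a pool and carries its own data and metadata, with no directory hierarchy to traverse.

```python
# Toy sketch of object-based storage: a flat namespace of named objects,
# each carrying its own data and metadata -- no file hierarchy involved.

class Pool:
    def __init__(self, name):
        self.name = name
        self.objects = {}          # flat namespace: object name -> (data, metadata)

    def write(self, obj_name, data, **metadata):
        self.objects[obj_name] = (data, metadata)

    def read(self, obj_name):
        return self.objects[obj_name][0]

    def stat(self, obj_name):
        return self.objects[obj_name][1]

pool = Pool("my-pool")
pool.write("invoice-2024-001", b"...pdf bytes...", content_type="application/pdf")
data = pool.read("invoice-2024-001")      # fetched by name, no path traversal
meta = pool.stat("invoice-2024-001")
```

Because lookups are by object name rather than by walking a directory tree, this model distributes and scales naturally, which is what RADOS exploits.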

Block Storage

Ceph provides access to block storage by mounting the Ceph Cluster as a block device through a Linux kernel module called RBD, the RADOS Block Device.

When data is written to Ceph using a block device, Ceph automatically stripes and replicates the data across the cluster. Ceph’s object storage system also allows users to mount Ceph as a thin-provisioned block device. Ceph block devices utilize RADOS capabilities, which include snapshotting, replication and consistency. The block device can also provide block storage to virtual machines in OpenStack.
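The striping and replication described above can be sketched roughly as follows. This is a toy model under stated assumptions, not Ceph's actual behavior: the stripe size, the round-robin placement, and the OSD count are all simplifications (real clusters use multi-megabyte stripe units and CRUSH-based placement).

```python
# Toy sketch of striping + replication: a block-device write is cut into
# fixed-size stripe units, and each unit is stored on several different
# OSDs so that losing one OSD does not lose data.

STRIPE_UNIT = 4          # bytes per stripe unit (real clusters use e.g. 4 MiB)
REPLICAS = 3             # copies of each stripe unit

osds = {i: {} for i in range(6)}    # 6 pretend OSDs: id -> {unit_name: bytes}

def write_striped(name, data):
    units = [data[i:i + STRIPE_UNIT] for i in range(0, len(data), STRIPE_UNIT)]
    for n, unit in enumerate(units):
        # round-robin stand-in for Ceph's real CRUSH placement
        targets = [(n + r) % len(osds) for r in range(REPLICAS)]
        for osd_id in targets:
            osds[osd_id][f"{name}.{n}"] = unit
    return len(units)

def read_striped(name, n_units):
    out = b""
    for n in range(n_units):
        # any surviving replica can serve the unit
        out += next(osd[f"{name}.{n}"] for osd in osds.values() if f"{name}.{n}" in osd)
    return out

n = write_striped("vol1", b"hello ceph world")
assert read_striped("vol1", n) == b"hello ceph world"
```

The point of the sketch is that every stripe unit exists on several OSDs at once, so reads can proceed even when some devices are down.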

There is also great news coming with Ceph Crimson, a rebuilt OSD implementation purpose-built to leverage the power of NVMe drives.

Filesystem

Ceph’s filesystem (CephFS) is a POSIX-compliant filesystem (POSIX is a family of standards that ensures file compatibility between operating systems) that uses a Ceph Storage Cluster to store its data. With the Ceph metadata server cluster, maps of the directories and file names are stored within RADOS clusters. In addition, the metadata server cluster can scale and rebalance the filesystem dynamically to distribute data evenly among cluster hosts, ensuring high performance and preventing heavy loads within the cluster.
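The split between the metadata servers and the data pool can be sketched as a toy model (illustrative only; real CephFS inodes, striping, and MDS behavior are far richer): the metadata side maps paths to inode numbers, while file contents live as objects keyed by inode, so an operation like rename touches only metadata.

```python
# Toy model of CephFS's metadata/data split: a metadata map from paths
# to inode numbers, and a separate data pool keyed by inode. Renames are
# metadata-only operations; the data objects are never rewritten.

import itertools

inode_counter = itertools.count(1)
mds = {}           # metadata: path -> inode number
data_pool = {}     # data: inode number -> file contents

def create(path, contents):
    ino = next(inode_counter)
    mds[path] = ino
    data_pool[ino] = contents

def rename(old, new):
    mds[new] = mds.pop(old)     # only the path -> inode map changes

def read(path):
    return data_pool[mds[path]]

create("/reports/q1.txt", b"Q1 numbers")
rename("/reports/q1.txt", "/archive/q1.txt")
assert read("/archive/q1.txt") == b"Q1 numbers"
```

Keeping the path map small and separate is what lets the metadata server cluster rebalance directory trees without moving any file data.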

Object Storage – Client Tools/RGW

Within the storage system itself, Ceph uses distributed object storage. This is separate from the concept of connecting to the Ceph Cluster to use it as an object store. Ceph has a native object storage gateway called RGW. It is a service that runs on several or all of the members of a cluster and provides an S3-compatible API and gateway for your programs to add, remove, and retrieve objects. OpenMetal Clouds all come with on-demand Ceph Object Storage as part of the Core and with stand-alone Storage Clusters.
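The kind of request routing an S3-compatible gateway performs can be sketched like this. This is a conceptual stand-in only: real RGW also handles authentication signatures, ACLs, multipart uploads, and much more, and the backing store here is just a dictionary.

```python
# Minimal sketch of S3-style request routing: HTTP verb + /bucket/key
# mapped onto object operations against a backing store.

buckets = {}    # bucket name -> {key: bytes}

def handle(method, path, body=None):
    bucket, _, key = path.lstrip("/").partition("/")
    if method == "PUT" and key:
        buckets.setdefault(bucket, {})[key] = body
        return 200, b""
    if method == "GET" and key:
        obj = buckets.get(bucket, {}).get(key)
        return (200, obj) if obj is not None else (404, b"")
    if method == "DELETE" and key:
        buckets.get(bucket, {}).pop(key, None)
        return 204, b""
    if method == "GET":                       # GET /bucket -> list keys
        return 200, "\n".join(buckets.get(bucket, {})).encode()
    return 400, b""

handle("PUT", "/photos/cat.jpg", b"...jpeg bytes...")
status, body = handle("GET", "/photos/cat.jpg")
assert (status, body) == (200, b"...jpeg bytes...")
```

Because the interface is verbs against bucket/key paths, existing S3 client tools and SDKs can talk to RGW without Ceph-specific code.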

It can be a bit confusing for sure. A quick description of block storage vs object storage may help.

Ceph Storage Cluster

A Ceph Storage Cluster is the deployment of two daemon types: one or more Ceph monitors and two or more Ceph object storage devices. The Ceph Storage Cluster is the foundation for all Ceph deployments and could contain thousands of storage devices.

How Does It Work?

In action, the Ceph filesystem, Ceph object storage, and Ceph block devices read data from and write data to the Ceph Storage Cluster. Within the Ceph Storage Cluster, the Ceph object storage devices store the data as objects on storage nodes. Object storage devices store the actual content files, and Ceph monitors keep track of active and failed cluster devices.
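A key part of how this works is that Ceph computes where each object lives (via its CRUSH algorithm) instead of consulting a central lookup table. Here is a much-simplified stand-in using rendezvous (HRW) hashing, not real CRUSH, that shows the essential property: every client independently computes the same set of OSDs for an object from the object's name alone.

```python
# Simplified stand-in for CRUSH-style placement using rendezvous (HRW)
# hashing: each client computes the same OSD set for an object from the
# object name alone -- no central metadata lookup required.

import hashlib

OSDS = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4"]
REPLICAS = 3

def place(obj_name, osds=OSDS, replicas=REPLICAS):
    def score(osd):
        # deterministic per (object, OSD) pair
        return hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest()
    # the highest-scoring OSDs win; stable for a given name + OSD set
    return sorted(osds, key=score, reverse=True)[:replicas]

# Any two clients agree on placement without talking to each other:
assert place("vm-disk-01.chunk42") == place("vm-disk-01.chunk42")
```

Real CRUSH additionally respects failure domains (racks, hosts) and weights, but the decentralized, computed placement shown here is why a Ceph Storage Cluster can scale to thousands of nodes without a placement bottleneck.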

Users setting up, modifying, and taking down Ceph Clusters can use the ceph-deploy tool. Made exclusively for Ceph, ceph-deploy lets users launch Ceph quickly and easily with practical initial configuration settings. The tool gives you the ability to install Ceph packages on remote hosts, create a cluster, add monitors, gather and forget keys, add object storage devices, take down clusters, and more. (Note that newer Ceph releases have superseded ceph-deploy with cephadm as the recommended deployment tool.)

In summary, we believe Ceph is great software, and it forms the basis of our storage systems – both on hyper-converged and converged clouds and on the stand-alone, Ceph-powered, petabyte-scale storage systems we offer.

Powered by OpenStack

Our Ceph-powered storage clusters provide exabyte-level storage with unparalleled reliability. Take charge of replication, erasure coding, and performance enhancements using NVMe drives. Seamlessly replicate data for recovery. Select your ideal Ceph version. Redefine your storage experience.