Common Ceph Commands



Introduction

With this deployment of OpenStack, Ceph is the backend storage cluster.
As an operator, you may find yourself needing to work with the Ceph cluster.
This guide aims to cover common tasks when working with Ceph.


Getting Started

To get started working with Ceph from the command line, log in over SSH as
root to one of the original hardware nodes. Each node already has the Ceph CLI
tool installed.
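
As a quick check after connecting, you can verify that the CLI is installed
and can reach the cluster. The hostname below is taken from the example output
later in this guide and is only illustrative:

$ ssh root@giddy-possum.local
$ ceph --version
$ ceph -s

ceph -s is shorthand for ceph status, which is covered in the next section.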


Ceph Health and Disk Usage

The following sections explain how to determine the state and disk utilization
of the Ceph cluster.

Ceph Status

How much space can the Ceph cluster store? Is the Ceph cluster healthy? To
answer these questions, use ceph status.

Detailed output of ceph status:

$ ceph status
cluster:
  id:     e274eb83-13d9-4f1c-971d-f1babc2bf846
  health: HEALTH_WARN
          1 daemons have recently crashed

services:
  mon: 3 daemons, quorum loving-ox,giddy-possum,lackadaisical-scorpion (age 7d)
  mgr: giddy-possum(active, since 7d), standbys: loving-ox, lackadaisical-scorpion
  osd: 3 osds: 3 up (since 7d), 3 in (since 7d)
  rgw: 3 daemons active (giddy-possum.rgw0, lackadaisical-scorpion.rgw0, loving-ox.rgw0)

task status:

data:
  pools:   12 pools, 329 pgs
  objects: 20.30k objects, 79 GiB
  usage:   230 GiB used, 2.4 TiB / 2.6 TiB avail
  pgs:     329 active+clean

io:
  client:   0 B/s rd, 636 KiB/s wr, 0 op/s rd, 120 op/s wr

The disk usage of this Ceph cluster can be seen in the above output under the
data: portion as 230 GiB used, 2.4 TiB / 2.6 TiB avail. This is the
raw storage Ceph has to work with and does not reflect the actual usable
capacity, since Ceph, in our configuration, keeps three copies, or replicas,
of the data.
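
To confirm the replication factor, you can ask Ceph for the size setting of a
pool. For example, using the volumes pool shown in the next section:

$ ceph osd pool get volumes size
size: 3

With three replicas, the usable capacity is roughly one third of the raw
figure: about 800 GiB of the 2.4 TiB shown as available, which lines up with
the MAX AVAIL values reported by ceph df in the next section.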

In addition, you can see the status of this cluster under the health:
portion. A healthy cluster reports HEALTH_OK. In this case, the
cluster is not healthy, as indicated by HEALTH_WARN. Troubleshooting this
status falls outside the scope of this guide.
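
While full troubleshooting is beyond this guide, two commands are useful for
seeing more context about a warning like the one above:

$ ceph health detail
$ ceph crash ls

ceph health detail prints the full text of each active health check, and
ceph crash ls lists the recent daemon crashes that triggered this particular
warning.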


Ceph Pools

Data in Ceph is stored in pools.

To obtain more information about the pools, start by running ceph df.

Here’s an example:

$ ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
ssd    2.6 TiB  2.4 TiB  227 GiB   230 GiB       8.58
TOTAL  2.6 TiB  2.4 TiB  227 GiB   230 GiB       8.58

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1  367 KiB        3  1.1 MiB      0    773 GiB
images                  2   32  6.6 GiB      894   20 GiB   0.85    773 GiB
volumes                 3   32   31 GiB    8.45k   92 GiB   3.81    773 GiB
vms                     4   32   38 GiB   10.75k  115 GiB   4.73    773 GiB
backups                 5   32      0 B        0      0 B      0    773 GiB
metrics                 6   32      0 B        0      0 B      0    773 GiB
manila_data             7   32      0 B        0      0 B      0    773 GiB
manila_metadata         8   32      0 B        0      0 B      0    773 GiB
.rgw.root               9   32  3.2 KiB        7   84 KiB      0    773 GiB
default.rgw.log        10   32  3.4 KiB      207  384 KiB      0    773 GiB
default.rgw.control    11   32      0 B        8      0 B      0    773 GiB
default.rgw.meta       12    8      0 B        0      0 B      0    773 GiB

Under POOLS you can see how the data in Ceph is organized.

Of importance is the MAX AVAIL column. This is an estimate of how much more
data can be written to each pool before the cluster reaches capacity, with
replication already taken into account.
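
To see each pool's settings, such as its replica count and placement group
numbers, you can list the pools in detail:

$ ceph osd pool ls detail

The output includes one line per pool with its replicated size, pg_num, and
other parameters.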


Ceph OSDs

With Private Cloud Core, the Ceph cluster is arranged so that there is a
single OSD per physical hardware node, mapped to that node's NVMe drive, for a
total of three OSDs in a single cluster.

Get the status of all OSDs using ceph osd status:

$ ceph osd status
ID  HOST                           USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  giddy-possum.local            76.7G   817G      0      819       3        0   exists,up
 1  loving-ox.local               76.7G   817G     40      199k      4        0   exists,up
 2  lackadaisical-scorpion.local  76.7G   817G     38      181k      2        0   exists,up

List disk utilization statistics per OSD using ceph osd df:

$ ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META      AVAIL    %USE  VAR   PGS  STATUS
 0    ssd  0.87329   1.00000  894 GiB   77 GiB   76 GiB  2.4 MiB  1022 MiB  818 GiB  8.58  1.00  329      up
 2    ssd  0.87329   1.00000  894 GiB   77 GiB   76 GiB  2.4 MiB  1022 MiB  818 GiB  8.58  1.00  329      up
 1    ssd  0.87329   1.00000  894 GiB   77 GiB   76 GiB  2.4 MiB  1022 MiB  818 GiB  8.58  1.00  329      up
                       TOTAL  2.6 TiB  230 GiB  227 GiB  7.2 MiB   3.0 GiB  2.4 TiB  8.58
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
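
To see how these OSDs map onto the hardware nodes in the CRUSH hierarchy, use
ceph osd tree:

$ ceph osd tree

The output lists each host with its OSDs underneath, along with their weights
and up/down status.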
