Introduction

It is now possible to make use of ephemeral storage with the Private Cloud Core platform. This storage type is available through additional Compute nodes. In this guide, you will learn how to create an instance that makes use of ephemeral storage.
What is Ephemeral Storage?

Ephemeral storage is fast, temporary disk space that does not persist after the instance is terminated. This type of storage is well-suited for CI/CD job runners, distributed worker nodes, or database systems: anything that requires fast storage or has inherent data redundancy. For an overview of how data can be stored, see Storage Concepts in the OpenStack documentation.
Prerequisites

To use ephemeral storage, you must have an additional Compute node added to the Private Cloud Core. To add this node type, reach out to your account manager, who can add it for you. We do not currently support adding this node through Flex Metal Central.

Note: The Storage and Compute nodes do not make use of ephemeral storage. These nodes, when added, join the existing Ceph cluster.
Background

With our latest update, a new class of flavors has been added that, once selected, ensures your instance is spun up on an ephemeral storage Compute node. We use LVM to manage the local NVMe drive, providing ephemeral storage. Flavors prefixed with cc are what allow for ephemeral storage.
Procedure

To have an instance use ephemeral storage, set its flavor during instance creation to any flavor prefixed with cc. For example, cc.small is a flavor that makes an instance's storage ephemeral, providing 25GB of ephemeral disk space. The amount of ephemeral storage a flavor sets is listed under its Ephemeral Disk column.
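As a sketch, creating an instance with an ephemeral flavor from the OpenStack CLI could look like the following. Only the flavor name (cc.small) comes from this guide; the image, network, and key names are placeholder assumptions you would replace with values from your cloud.

```shell
# Hypothetical sketch: image, network, and key names are assumptions;
# only the cc.small flavor comes from this guide.
CMD='openstack server create --flavor cc.small --image "CentOS 8" --network private --key-name mykey ephemeral-1'
echo "$CMD"
# Run it once the names match your cloud:
# eval "$CMD"
```

Because the flavor determines placement, no other scheduling hints are needed; the cc prefix alone lands the instance on an ephemeral storage Compute node.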
Instance Boot Considerations

When booting an instance with the ephemeral flavors, there are several boot source options available. Of those options, the following boot sources are important to consider:

[Table: Boot Sources]
How can Ephemeral Storage be Used?

When an instance is spun up, its ephemeral storage can be accessed using the path /mnt. For example, an instance named ephemeral-1 was spun up using the cc.small flavor. Using SSH and the df command, the varying mount points can be seen:

[root@ephemeral-1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        877M     0  877M   0% /dev
tmpfs           909M     0  909M   0% /dev/shm
tmpfs           909M   17M  892M   2% /run
tmpfs           909M     0  909M   0% /sys/fs/cgroup
/dev/vda1        25G  2.5G   23G  10% /
/dev/vdb         25G  1.1G   24G   5% /mnt
tmpfs           182M     0  182M   0% /run/user/1000

In this list, /dev/vdb is the block device mapped directly to the NVMe drive through the path /mnt.
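The ephemeral mount point can also be located programmatically. A minimal sketch that extracts the mount path of a given device from df output, run here against a captured sample since the device only exists on the instance:

```shell
# Sketch: find the mount point of a device in `df` output.
# On the instance you would pipe live output: df -h | ephemeral_mount /dev/vdb
ephemeral_mount() {
  awk -v dev="$1" '$1 == dev { print $NF }'
}

# Captured sample of the df output shown above.
sample='Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G  2.5G   23G  10% /
/dev/vdb         25G  1.1G   24G   5% /mnt'

printf '%s\n' "$sample" | ephemeral_mount /dev/vdb   # prints: /mnt
```

This is handy in provisioning scripts that need to target the ephemeral disk without hardcoding /mnt.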
/mnt.

Note: Some operating systems format the ephemeral block device with the VFAT file system. You may consider reformatting the device to another file system, like ext4. The following is a CentOS 8 instance showing the file system types, where /dev/vdb has the file system type vfat:

$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  1.8G     0  1.8G   0% /dev
tmpfs          tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs          tmpfs     1.9G   17M  1.9G   1% /run
tmpfs          tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1      xfs        50G  1.7G   49G   4% /
/dev/vdb       vfat       50G   32K   50G   1% /mnt
tmpfs          tmpfs     374M     0  374M   0% /run/user/1000
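In automation, you may want to check the current filesystem type before reformatting (for example, the output of lsblk -no FSTYPE /dev/vdb) and only act on it when needed. A minimal sketch, where treating vfat and msdos as the reformat-worthy types is an assumption:

```shell
# Sketch: decide whether an ephemeral device should be reformatted,
# given its current filesystem type (e.g. from `lsblk -no FSTYPE /dev/vdb`).
# Treating vfat/msdos as reformat-worthy is an assumption for this example.
needs_reformat() {
  case "$1" in
    vfat|msdos) echo yes ;;
    *)          echo no  ;;
  esac
}

needs_reformat vfat   # prints: yes
needs_reformat ext4   # prints: no
```

Guarding the reformat this way makes the script safe to re-run after the device has already been converted.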
Performance

This section goes into detail about the performance differences between an LVM ephemeral device and one loaded from Ceph. An instance called ephemeral running CentOS 8 has been created. This instance uses the cc.medium flavor, providing 50GB of ephemeral storage. The operating system is booted from a Ceph volume. The following tests compare read and write speeds between the Ceph volume and the ephemeral drive. Both random and sequential read and write tests are performed. In addition, the ephemeral device is formatted to ext4.
Convert to ext4

Using SSH and the command df -hT, the file systems can be seen:

[root@ephemeral ~]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  1.8G     0  1.8G   0% /dev
tmpfs          tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs          tmpfs     1.9G  8.5M  1.9G   1% /run
tmpfs          tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1      xfs        50G  2.2G   48G   5% /
/dev/vdb       vfat       50G   32K   50G   1% /mnt
tmpfs          tmpfs     374M     0  374M   0% /run/user/1000

The ephemeral drive in this output is /dev/vdb and is mounted to /mnt. The file system is set to VFAT and will be changed to ext4. This example uses umount /mnt && mkfs.ext4 /dev/vdb to format the drive to ext4:

[root@ephemeral ~]# umount /mnt && mkfs.ext4 /dev/vdb
mke2fs 1.45.6 (20-Mar-2020)
/dev/vdb contains a vfat file system
Proceed anyway? (y,N) y
Creating filesystem with 13107200 4k blocks and 3276800 inodes
Filesystem UUID: 637150f9-a48b-4fd0-9fcc-62fb23d800b5
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424

Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done

Once the drive is reformatted, mount it to /mnt again and confirm the file system:

[root@ephemeral ~]# mount -a
[root@ephemeral ~]# df -hT | grep vdb
/dev/vdb       ext4       49G   53M   47G   1% /mnt
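One caveat worth noting: depending on the image, the ephemeral mount may have an /etc/fstab entry (often written by cloud-init) that still names the old filesystem type, so a reformatted device may fail to mount on reboot. A sketch of the fstab fix, run here against a throwaway copy of the file; the entry layout and the vfat-to-ext4 change are assumptions matching the example above:

```shell
# Sketch: point an fstab entry for /dev/vdb at ext4 instead of vfat.
# Operates on a temp copy here; on a real instance, back up and edit /etc/fstab.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/vda1  /     xfs   defaults  0 0
/dev/vdb   /mnt  vfat  defaults  0 0
EOF

# Rewrite only the /dev/vdb -> /mnt entry's filesystem type.
sed -i 's|^\(/dev/vdb[[:space:]]\{1,\}/mnt[[:space:]]\{1,\}\)vfat|\1ext4|' "$FSTAB"
grep vdb "$FSTAB"   # the vdb entry now reads ext4
```

After updating the entry, mount -a (as shown above) picks up the corrected type, and the mount survives a reboot.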
Performance Tests

Now that the ephemeral drive's file system is set to ext4, the performance of disk reads and writes will be compared to the root drive (/). The application fio is used to test disk performance. The tests performed are:
- Random disk reads and writes
- Sequential disk reads and writes
Results

Above each table are the parameters passed to fio. These tests were performed within a Private Cloud Core – Small using an additional Compute – Large node.
rw=randread bs=4k size=2g numjobs=1 iodepth=2 runtime=60 end_fsync=1 ioengine=posixaio

[Table: Bandwidth (MiB/s) for the Ceph volume and the ephemeral drive]

rw=randwrite bs=4k size=2g numjobs=1 iodepth=2 runtime=60 end_fsync=1 ioengine=posixaio

[Table: Bandwidth (MiB/s) for the Ceph volume and the ephemeral drive]

rw=read bs=4k size=2g numjobs=1 iodepth=2 runtime=60 end_fsync=1 ioengine=posixaio

[Table: Bandwidth (MiB/s) for the Ceph volume and the ephemeral drive]

rw=write bs=4k size=2g numjobs=1 iodepth=2 runtime=60 end_fsync=1 ioengine=posixaio

[Table: Bandwidth (MiB/s) for the Ceph volume and the ephemeral drive]
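The parameter sets above map directly onto an fio job file. The following sketch reproduces the random-read run; the job name and target directory are assumptions, and swapping the rw= value (randwrite, read, write) covers the other three tests:

```shell
# Sketch: write an fio job file reproducing the random-read run above.
# Job name and directory are assumptions; change rw= for the other tests.
JOB=$(mktemp)
cat > "$JOB" <<'EOF'
[ephemeral-randread]
directory=/mnt
rw=randread
bs=4k
size=2g
numjobs=1
iodepth=2
runtime=60
end_fsync=1
ioengine=posixaio
EOF

# Run on the instance (requires fio and the /mnt mount):
# fio "$JOB"
grep '^rw=' "$JOB"   # prints: rw=randread
```

Pointing directory= at / instead of /mnt gives the matching run against the Ceph-backed root drive, so the same job file serves both sides of the comparison.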