Install and Configure Ceph Block Device on Ubuntu 18.04

In this guide, we are going to learn how to install and configure Ceph Block Device on Ubuntu 18.04. Ceph Block Device is one of the deployment options of a Ceph Storage Cluster; the others are Ceph Object Storage and Ceph File System.

The Ceph block device is also known as the Reliable Autonomic Distributed Object Store (RADOS) Block Device (RBD). Ceph block devices interact with Ceph OSDs using the librbd library.

Ceph can be mounted as a block device just like a normal hard drive. When data is written to a Ceph block device, it is striped across multiple Object Storage Devices (OSDs) in the Ceph storage cluster.

RBD integrates with KVM and can therefore be used to provide block storage for cloud computing platforms such as OpenStack.

Install and Configure Ceph Block Device on Ubuntu 18.04

Before you can proceed, ensure that you have a running Ceph storage cluster.

Follow the guide below to learn how to install and set up a Ceph Storage Cluster on Ubuntu 18.04.

Setup Three Node Ceph Storage Cluster on Ubuntu 18.04

Deployment Architecture

Configure Ceph Block Devices on Ceph Storage Cluster

Prepare Ceph Client for Ceph Deployment

Create Ceph Deployment User

On the Ceph client, create a Ceph deployment user with passwordless sudo rights for installing Ceph packages and configurations, just as was done on the Ceph OSD nodes. Do not use the username ceph, as it is reserved.

Replace the username cephadmin accordingly.

useradd -m -s /bin/bash cephadmin
passwd cephadmin
echo -e "Defaults:cephadmin !requiretty\ncephadmin ALL=(ALL:ALL) NOPASSWD:ALL" >> /etc/sudoers.d/cephadmin
chmod 0440 /etc/sudoers.d/cephadmin
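
Before moving on, you can optionally confirm that passwordless sudo works for the new user; the quick check below, run as root, is just a sanity test.

# confirms cephadmin can run sudo without a password prompt
su - cephadmin -c 'sudo -n true' && echo "passwordless sudo OK"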

Configure Time Synchronization

To ensure that the time is synchronized between the Ceph cluster and the Ceph client, configure your Ceph client to use the same NTP server as your Ceph cluster.

apt install chrony
vim /etc/chrony/chrony.conf
...
# pool ntp.ubuntu.com        iburst maxsources 4
# pool 0.ubuntu.pool.ntp.org iburst maxsources 1
# pool 1.ubuntu.pool.ntp.org iburst maxsources 1
# pool 2.ubuntu.pool.ntp.org iburst maxsources 2
pool ntp.kifarunix-demo.com iburst
...

Restart chrony and enable it to run on system boot.

systemctl enable chrony
systemctl restart chrony
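
To optionally confirm that the client is now synchronizing against the NTP server you configured, you can query chrony's sources;

chronyc sources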

Install Python 2

Python 2 is required to deploy Ceph on Ubuntu 18.04. You can install Python 2 by executing the command below;

apt install python-minimal
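
You can optionally confirm that the Python 2 interpreter is in place, since ceph-deploy depends on it;

python --version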

Setup Password-less SSH login to Ceph Client

Log in as the Ceph deployment user you created on your Ceph admin node, generate password-less SSH keys, and copy them to the client.

We have already done this in our guide on setting up Ceph cluster on Ubuntu 18.04. If you followed the guide, simply update the user SSH configuration file, ~/.ssh/config, with the connection details of the Ceph client.

su - cephadmin
vim ~/.ssh/config
Host ceph-osd01
   Hostname ceph-osd01
   User cephadmin
Host ceph-osd02
   Hostname ceph-osd02
   User cephadmin
Host ceph-osd03
   Hostname ceph-osd03
   User cephadmin
Host ceph-client
   Hostname ceph-client
   User cephadmin

If you do not have a DNS server, update the hosts file with the Ceph client details.

echo "192.168.2.118 ceph-client.kifarunix-demo.com ceph-client" >> /etc/hosts

Next, copy the SSH keys to Ceph client.

ssh-copy-id ceph-client
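
Optionally, verify that the password-less login works by running a simple remote command from the admin node;

ssh ceph-client hostname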

Install Ceph Utilities on Ceph Admin Node

Run the command below on the Ceph admin node to install the common utilities used to mount and interact with the Ceph storage cluster. These are provided by the ceph-common package.

sudo apt install ceph-common

Install Ceph on Ceph Client

Install Ceph packages on the Ceph client by running the commands below. Run the deployment from the Ceph deployment user's cluster configuration directory.

su - cephadmin
cd kifarunix-cluster/
ceph-deploy install ceph-client

Copy Ceph configuration and keys to Ceph client.

ceph-deploy admin ceph-client
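
The admin keyring copied to the client is readable by root only by default. Since the commands in this guide are run with sudo on the client, that is sufficient; if you would rather run ceph and rbd commands there as a regular user, you can optionally make the keyring readable, for example;

ssh ceph-client "sudo chmod +r /etc/ceph/ceph.client.admin.keyring"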

Create Block Device Pools

In order to use the Ceph block device client, you need to create a pool for the RADOS Block Device (RBD) and initialize it.

  • A pool is a logical group for storing objects. Pools manage placement groups, replicas, and the CRUSH rule for the pool.
  • A placement group is a fragment of a logical object pool that places objects as a group into OSDs. The Ceph client calculates which placement group an object should be stored in.

To create a Ceph pool, use the command below;

ceph osd pool create {pool-name} pg_num pgp_num

Where:

  • {pool-name} is the name of the Ceph pool you are creating.
  • pg_num is the total number of placement groups for the pool. It is recommended that pg_num be set to 128 when using fewer than 5 OSDs in your Ceph cluster (see the worked example after this list).
  • pgp_num specifies the total number of placement groups for placement purposes. It should be equal to the total number of placement groups.
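
As a rough, commonly cited rule of thumb, pg_num can be estimated as (100 × number of OSDs) / replica count, rounded up to the nearest power of two. For the three-OSD cluster used in this guide with the default replica count of 3, that works out to (100 × 3) / 3 = 100, which rounds up to 128, the value used below.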

Hence, from the Ceph admin node, SSH into the Ceph client.

cephadmin@ceph-admin:~$ ssh ceph-client

Next, create an OSD pool called kifarunixrbd with a placement group number of 128.

cephadmin@ceph-client:~$ sudo ceph osd pool create kifarunixrbd 128 128

Associate the pool you created with the respective application to prevent unauthorized types of clients from writing data to it. The application can be one of;

  • cephfs (Ceph Filesystem).
  • rbd (Ceph Block Device).
  • rgw (Ceph Object Gateway).

To associate the pool created above with RBD, simply execute the command below. Replace the name of the pool accordingly.

cephadmin@ceph-client:~$ sudo ceph osd pool application enable kifarunixrbd rbd

Once you have created a pool and enabled an application for it, you can initialize the pool using the command below;

cephadmin@ceph-client:~$ sudo rbd pool init -p kifarunixrbd

To list available pools;

sudo ceph osd lspools

Creating Block Device Images

Before adding a block device to a node, create an image for it in the Ceph storage cluster. On the Ceph client, the command takes the form below;

rbd create <image-name> --size <megabytes> --pool <pool-name>

For example, to create a 1GB block device image in the pool created above, kifarunixrbd, simply run the command;

cephadmin@ceph-client:~$ sudo rbd create disk01 --size 1G --pool kifarunixrbd

To list the images in your pool, e.g. kifarunixrbd;

sudo rbd ls -l kifarunixrbd
NAME     SIZE PARENT FMT PROT LOCK 
disk01 1 GiB          2         

To retrieve information about the image created, run the command;

sudo rbd --image disk01 -p kifarunixrbd info
rbd image 'disk01':
	size 1 GiB in 256 objects
	order 22 (4 MiB objects)
	id: 37886b8b4567
	block_name_prefix: rbd_data.37886b8b4567
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Sat Mar 14 21:39:19 2020

To remove an image from the pool;

sudo rbd rm disk01 -p kifarunixrbd

To move it to trash for later deletion;

sudo rbd trash move kifarunixrbd/disk01

To restore an image from trash back to the pool, obtain the image ID as assigned in the trash store, then restore the image using that ID;

sudo rbd trash list kifarunixrbd
37986b8b4567 disk01

Where kifarunixrbd is the name of the pool.

sudo rbd trash restore kifarunixrbd/37986b8b4567

To permanently delete an image from the trash;

sudo rbd trash remove kifarunixrbd/37986b8b4567

Mapping Images to Block Devices

After creating an image, you can map it to a block device using the rbd map command. However, before you can map the image, disable any features that are unsupported by the kernel by running the command below. Replace the pool and image names accordingly.

cephadmin@ceph-client:~$ sudo rbd feature disable kifarunixrbd/disk01 object-map fast-diff deep-flatten
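
If you find yourself disabling the same features for every new image, an alternative is to set the default feature set for newly created images in ceph.conf on the client. The snippet below is a minimal sketch; layering is the only feature needed for kernel mapping on the stock Ubuntu 18.04 kernel, and if your Ceph release does not accept feature names for this option, the equivalent numeric value for layering is 1. This only affects images created after the change.

sudo vim /etc/ceph/ceph.conf
...
[client]
rbd default features = layering
...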

The rbd command will load the RBD kernel module if it is not already loaded.

Next, map the image to block device.

sudo rbd map disk01 --pool kifarunixrbd
/dev/rbd0

To show block device images mapped to kernel modules with the rbd command;

cephadmin@ceph-client:~$ sudo rbd showmapped
id pool         image  snap device    
0  kifarunixrbd disk01 -    /dev/rbd0

To unmap a block device image, use the command rbd unmap /dev/rbd/{poolname}/{imagename}. For example;

sudo rbd unmap /dev/rbd/kifarunixrbd/disk01
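
You can also unmap by referencing the mapped device node directly, for example;

sudo rbd unmap /dev/rbd0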

Create FileSystem on Ceph Block Device

The mapped Ceph block device is now ready. All that is left is to create a file system on it and mount it to make it usable.

For example, to create an XFS file system on it;

sudo mkfs.xfs /dev/rbd0 -L cephbd
meta-data=/dev/rbd0              isize=512    agcount=9, agsize=31744 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
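
XFS is just an example; if you prefer ext4, you could alternatively create an ext4 file system on the device;

sudo mkfs.ext4 /dev/rbd0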

Mounting Ceph Block Device on Ubuntu 18.04

You can now mount the block device. For example, to mount it under /media/ceph directory;

sudo mkdir /media/ceph
sudo mount /dev/rbd0 /media/ceph

You can also mount it using its path under /dev/rbd, as follows;

sudo mount /dev/rbd/kifarunixrbd/disk01 /media/ceph/

Check mounted Filesystems;

df -hT -P /dev/rbd0
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/rbd0      xfs  1014M   34M  981M   4% /media/ceph
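
Note that neither the mapping nor the mount persists across reboots by default. If you need that, one option is the rbdmap helper shipped with ceph-common: list the image in /etc/ceph/rbdmap, add a noauto entry to /etc/fstab, and enable the rbdmap service. The lines below are a minimal sketch that assumes the admin keyring at /etc/ceph/ceph.client.admin.keyring; adapt the user and keyring to your setup.

# map disk01 at boot using the admin keyring
echo "kifarunixrbd/disk01 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" | sudo tee -a /etc/ceph/rbdmap
# noauto so that rbdmap, rather than the early boot fstab pass, mounts it
echo "/dev/rbd/kifarunixrbd/disk01 /media/ceph xfs noauto 0 0" | sudo tee -a /etc/fstab
sudo systemctl enable rbdmap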

There you go.

If you check the Ceph cluster health;

root@ceph-osd01:~# ceph --status
  cluster:
    id:     ecc4e749-830a-4ec5-8af9-22fcb5cadbca
    health: HEALTH_OK
 
  services:
    mon: 2 daemons, quorum ceph-osd01,ceph-osd02
    mgr: ceph-osd01(active), standbys: ceph-osd02
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   1 pools, 128 pgs
    objects: 18  objects, 14 MiB
    usage:   3.1 GiB used, 8.9 GiB / 12 GiB avail
    pgs:     128 active+clean

That marks the end of our guide on how to install and Configure Ceph Block Device on Ubuntu 18.04.


