Install and Setup Ceph Storage Cluster on Ubuntu 22.04

Follow through this post to learn how to install and set up a Ceph storage cluster on Ubuntu 22.04. Ceph is a scalable distributed storage system designed for cloud infrastructure and web-scale object storage. It can also be used to provide Ceph Block Storage as well as Ceph File System storage.

As of this blog post update, Ceph Reef is the current stable release.

Install and Configure Ceph Storage Cluster on Ubuntu 22.04

The Ceph Storage Cluster Daemons

A Ceph Storage Cluster is made up of different daemons, each performing a specific role.

  • Ceph Object Storage Daemon (OSD, ceph-osd)
    • It provides the Ceph object data store.
    • It also handles data replication, data recovery and rebalancing, and provides storage information to the Ceph Monitors.
    • At least one OSD is required per storage device.
  • Ceph Monitor (ceph-mon)
    • It maintains maps of the entire Ceph cluster state including monitor map, manager map, the OSD map, and the CRUSH map.
    • manages authentication between daemons and clients.
    • A Ceph cluster must contain a minimum of three running monitors in order to be both redundant and highly-available.  If you have five or more nodes, it is recommended to run five monitors.
  • Ceph Manager (ceph-mgr)
    • keeps track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load.
    • manages and exposes Ceph cluster web dashboard and API.
    • At least two managers are required for HA.
  • Ceph Metadata Server (MDS):
    • Manages metadata for the Ceph File System (CephFS). Coordinates metadata access and ensures consistency across clients.
    • One or more MDS daemons are required, depending on the needs of the CephFS deployment.
  • RADOS Gateway (RGW):
    • Also called “Ceph Object Gateway”
    • It is a component of the Ceph storage system that provides object storage services with a RESTful interface. RGW allows applications and users to interact with Ceph storage using industry-standard APIs, such as the S3 API (compatible with Amazon S3) and the Swift API (compatible with OpenStack Swift).

Ceph Storage Cluster Deployment Methods

There are different methods you can use to deploy Ceph storage cluster.

  • cephadm leverages container technology (Podman or Docker) and systemd to deploy and manage Ceph services on a cluster of machines.
  • Rook deploys and manages Ceph clusters running in Kubernetes, while also enabling management of storage resources and provisioning via Kubernetes APIs.
  • ceph-ansible deploys and manages Ceph clusters using Ansible.
  • ceph-salt installs Ceph using Salt and cephadm.
  • jaas.ai/ceph-mon installs Ceph using Juju.
  • A Puppet module is also available to install Ceph via Puppet.
  • Ceph can also be installed manually.

cephadm and Rook are the recommended methods for deploying a Ceph storage cluster.

Ceph Deployment Requirements

Depending on the deployment method you choose, there are different requirements for deploying a Ceph storage cluster.

In this tutorial, we will use cephadm to install Ceph on Ubuntu 22.04.

Below are the requirements for deploying a Ceph storage cluster via cephadm;

  • Python 3
  • Systemd
  • A container runtime (Podman or Docker)
  • Time synchronization (such as Chrony or NTP)
  • LVM2 for provisioning storage devices

All the required dependencies are installed automatically by the bootstrap process.

Prepare Ceph Nodes for Ceph Storage Cluster Deployment on Ubuntu 22.04

Our Ceph Storage Cluster Deployment Architecture

The diagram below depicts our ceph storage cluster deployment architecture. In a typical production environment, you would have at least 3 monitor nodes as well as at least 3 OSDs.

[Diagram: Ceph storage cluster deployment architecture]

If your cluster nodes are in the same network subnet, cephadm will automatically add up to five monitors to the subnet, as new hosts are added to the cluster.

Ceph Storage Nodes Hardware Requirements

Check the hardware recommendations page for the Ceph storage cluster nodes hardware requirements.

Attach Storage Disks to Ceph OSD Nodes

Each Ceph OSD node in our architecture above has an unallocated LVM logical volume of 100 GB. Check how to create logical volumes in Linux.
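
If you have not yet created the volume, below is a minimal sketch of how it could be done, assuming the spare, unused data disk on each OSD node is /dev/vdb (replace with your actual device name);

# Assumption: /dev/vdb is the spare, unused data disk on this OSD node
sudo pvcreate /dev/vdb
sudo vgcreate ceph-vg /dev/vdb
sudo lvcreate -l 100%FREE -n ceph-lv ceph-vg

You can then confirm the logical volume details;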

lvdisplay ceph-vg

Sample output;

  --- Logical volume ---
  LV Path                /dev/ceph-vg/ceph-lv
  LV Name                ceph-lv
  VG Name                ceph-vg
  LV UUID                bxP17M-313C-sxsz-LIPf-xPoC-vXFg-KxgEE0
  LV Write Access        read/write
  LV Creation host, time ceph-osd2, 2023-11-15 17:39:40 +0000
  LV Status              available
  # open                 0
  LV Size                <100.00 GiB
  Current LE             25599
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

Note that you can also use raw, unused block devices instead of logical volumes.

Run System Update

On all the nodes, update your system package cache.

apt update

Set Hostnames and Update Hosts File

To begin with, set up your nodes' hostnames;

hostnamectl set-hostname ceph-admin

Set the respective hostnames on other nodes.

If you are not using DNS for name resolution, then update the hosts file accordingly.

For example, in our setup, each node hosts file should contain the lines below;

less /etc/hosts
...
192.168.122.240 ceph-admin
192.168.122.45 ceph-mon
192.168.122.231 ceph-osd1
192.168.122.49 ceph-osd2
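
You can optionally confirm that each node can resolve and reach the others by name;

for host in ceph-admin ceph-mon ceph-osd1 ceph-osd2; do ping -c 1 $host; done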

Set Time Synchronization

Ensure that the time on all the nodes is synchronized. Thus, install Chrony on each node and set it up such that all nodes use the same NTP server.

apt install chrony -y

Edit the Chrony configuration and set your NTP server by replacing the NTP server pools with your NTP server address.

vim /etc/chrony/chrony.conf

Define your NTP Server. Replace ntp.kifarunix-demo.com with your respective NTP server address.

...
# pool ntp.ubuntu.com        iburst maxsources 4
# pool 0.ubuntu.pool.ntp.org iburst maxsources 1
# pool 1.ubuntu.pool.ntp.org iburst maxsources 1
# pool 2.ubuntu.pool.ntp.org iburst maxsources 2
pool ntp.kifarunix-demo.com iburst
...

Restart Chronyd

systemctl restart chronyd
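
You can verify that time synchronization is working;

chronyc sources
chronyc tracking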

Install SSH Server on Each Node

Ceph deployment through cephadm utility requires that an SSH server is installed on all the nodes.

Ubuntu 22.04 comes with SSH server already installed. If not, install and start it as follows;

apt install openssh-server
systemctl enable --now sshd

Install Python3

Python 3 is required to deploy Ceph on Ubuntu 22.04, and it is installed by default on Ubuntu 22.04.
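
You can confirm the installed Python version;

python3 --version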

Create Ceph Deployment User on Ceph Admin Node

On the ceph admin node, create a Ceph deployment user with sudo rights required for installing Ceph packages and configurations, as shown below. Do not use the username ceph, as it is reserved.

Replace cephadmin username accordingly.

useradd -m -s /bin/bash cephadmin
passwd cephadmin
echo "cephadmin ALL=(ALL:ALL) NOPASSWD:ALL" >> /etc/sudoers.d/cephadmin
visudo -cf /etc/sudoers.d/cephadmin

If you get /etc/sudoers.d/cephadmin: parsed OK, then proceed. Otherwise, fix any errors.

chmod 0440 /etc/sudoers.d/cephadmin
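
You can optionally confirm that the cephadmin user has passwordless sudo rights;

su - cephadmin -c 'sudo whoami'

The command should print root.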

Install Docker CE on Each Node

The cephadm utility is used to bootstrap a Ceph cluster and to manage ceph daemons deployed with systemd and Docker containers.

Thus, on each Node, run the command below to install Docker CE;

sudo apt update
sudo apt install apt-transport-https \
	ca-certificates \
	curl \
	gnupg-agent \
	software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker-ce.gpg
echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -sc) stable" | \
sudo tee /etc/apt/sources.list.d/docker-ce.list
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io -y
sudo systemctl enable --now docker
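
You can confirm that Docker is installed and running;

sudo docker --version
sudo systemctl is-active docker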

Enable Root Login on Other Nodes from Ceph Admin Node

In order to add other nodes to the Ceph cluster using Ceph Admin Node, you will have to use the root user account.

Thus, on the Ceph Monitor and Ceph OSD nodes, enable root login from the Ceph Admin node;

vim /etc/ssh/sshd_config

Add the configs below, replacing the IP address for Ceph Admin accordingly.

Match Address 192.168.122.240
        PermitRootLogin yes
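
Optionally, check the SSH configuration for syntax errors before reloading the service;

sshd -t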

Reload ssh;

systemctl reload sshd

Setup Ceph Storage Cluster on Ubuntu 22.04

Install cephadm Utility on Ceph Admin Node

On the Ceph admin node, you need to install the cephadm utility.

Cephadm installs and manages a Ceph cluster using containers and systemd, with tight integration with the CLI and dashboard GUI.

  • cephadm only supports Octopus and newer releases.
  • cephadm is fully integrated with the new orchestration API and fully supports the new CLI and dashboard features to manage cluster deployment.
  • cephadm requires container support (podman or docker) and Python 3.

To install cephadm on Ubuntu 22.04, you can either use apt or simply download the binary and install it on the system.

The method to use will depend on the Ceph version you are deploying. In this guide, we are installing Ceph Reef which, as of this post update, is at version 18.2.0.

If you check the cephadm utility provided by the default repos, it is a lower version;

apt-cache policy cephadm
cephadm:
  Installed: (none)
  Candidate: 17.2.6-0ubuntu0.22.04.1
  Version table:
     17.2.6-0ubuntu0.22.04.1 500
        500 http://de.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages
     17.2.5-0ubuntu0.22.04.3 500
        500 http://de.archive.ubuntu.com/ubuntu jammy-security/universe amd64 Packages
     17.1.0-0ubuntu3 500
        500 http://de.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages

The surest way to install the latest version of cephadm is by adding the official Ceph package repository;

wget -q -O- 'https://download.ceph.com/keys/release.asc' | \
gpg --dearmor -o /etc/apt/trusted.gpg.d/cephadm.gpg
echo deb https://download.ceph.com/debian-reef/ $(lsb_release -sc) main \
> /etc/apt/sources.list.d/cephadm.list
apt update

Confirm the version;

apt-cache policy cephadm
cephadm:
  Installed: (none)
  Candidate: 18.2.0-1jammy
  Version table:
     18.2.0-1jammy 500
        500 https://download.ceph.com/debian-reef jammy/main amd64 Packages
     17.2.6-0ubuntu0.22.04.1 500
        500 http://de.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages
     17.2.5-0ubuntu0.22.04.3 500
        500 http://de.archive.ubuntu.com/ubuntu jammy-security/universe amd64 Packages
     17.1.0-0ubuntu3 500
        500 http://de.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages

apt install cephadm
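
Once installed, confirm the installed cephadm version;

sudo cephadm version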

Initialize Ceph Cluster Monitor On Ceph Admin Node

Your nodes are now ready to deploy a Ceph storage cluster. To begin with, switch to cephadmin user;

su - cephadmin
whoami

Output;

cephadmin

It is now time to bootstrap the Ceph cluster in order to create the first Ceph monitor daemon on Ceph admin node. Thus, run the command below, substituting the IP address with that of the Ceph admin node accordingly.

sudo cephadm bootstrap --mon-ip 192.168.122.240
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chrony.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 3.4.4 is present
systemctl is present
lvcreate is present
Unit chrony.service is enabled and running
Host looks OK
Cluster fsid: 70d227de-83e3-11ee-9dda-ff8b7941e415
Verifying IP 192.168.122.240 port 3300 ...
Verifying IP 192.168.122.240 port 6789 ...
Mon IP `192.168.122.240` is in CIDR network `192.168.122.0/24`
Mon IP `192.168.122.240` is in CIDR network `192.168.122.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v18...
Ceph version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.122.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Verifying port 8765 ...
Verifying port 8443 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph-admin...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying ceph-exporter service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

	     URL: https://ceph-admin:8443/
	    User: admin
	Password: hnrpt41gff

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/70d227de-83e3-11ee-9dda-ff8b7941e415/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

	sudo /usr/sbin/cephadm shell --fsid 70d227de-83e3-11ee-9dda-ff8b7941e415 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

	sudo /usr/sbin/cephadm shell 

Please consider enabling telemetry to help improve Ceph:

	ceph telemetry on

For more information see:

	https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.

According to the documentation, the bootstrap command will do the following;

  • Create a monitor and manager daemon for the new cluster on the localhost.
  • Generate a new SSH key for the Ceph cluster and add it to the root user’s /root/.ssh/authorized_keys file.
  • Write a copy of the public key to /etc/ceph/ceph.pub.
  • Write a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with the new cluster.
  • Write a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.
  • Add the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.
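
Since cephadm runs the Ceph daemons as containers, you can confirm on the admin node that the bootstrap containers (mon, mgr and the monitoring stack) are up. Note that the container names include the cluster FSID, so yours will differ;

sudo docker ps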

Enable Ceph CLI

When bootstrap command completes, a command for accessing Ceph CLI is provided. Execute that command to access Ceph CLI;

sudo /usr/sbin/cephadm shell \
	--fsid 70d227de-83e3-11ee-9dda-ff8b7941e415 \
	-c /etc/ceph/ceph.conf \
	-k /etc/ceph/ceph.client.admin.keyring

This drops you into the Ceph CLI shell.

You can then run ceph commands, for example to check the Ceph status;

ceph -s
  cluster:
    id:     70d227de-83e3-11ee-9dda-ff8b7941e415
    health: HEALTH_WARN
            mon ceph-admin is low on available space
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum ceph-admin (age 61m)
    mgr: ceph-admin.ykkdly(active, since 59m)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

You can exit the Ceph CLI by pressing Ctrl+D, or by typing exit and pressing ENTER.

There are other ways in which you can access the Ceph CLI. For example, you can run Ceph CLI commands using cephadm command.

sudo cephadm shell -- ceph -s

Or you could install the Ceph CLI tools on the host;

sudo cephadm add-repo --release reef
sudo cephadm install ceph-common

With this method, you can then run the Ceph commands directly;

sudo ceph -s
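
You can also list all the Ceph daemons managed by the orchestrator;

sudo ceph orch ps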

Copy SSH Keys to Other Ceph Nodes

Copy the SSH public key generated by the bootstrap command to the root user account on the Ceph Monitor, OSD1 and OSD2 nodes. Ensure root login from the admin node is permitted on these nodes, as configured earlier.

sudo ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon
sudo ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd1
sudo ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd2

Drop into Ceph CLI

You can drop into the Ceph CLI to execute the next commands.

sudo /usr/sbin/cephadm shell \
	--fsid 70d227de-83e3-11ee-9dda-ff8b7941e415 \
	-c /etc/ceph/ceph.conf \
	-k /etc/ceph/ceph.client.admin.keyring

Or, if you installed the ceph-common package, there is no need to drop into the CLI, as you can execute the ceph commands directly from the terminal.

Add Ceph Monitor Node to Ceph Cluster

At this point, we have only provisioned the Ceph Admin node. You can list all the hosts known to the Ceph orchestrator (ceph-mgr) using the command below;

sudo ceph orch host ls

Sample output;

HOST        ADDR             LABELS  STATUS  
ceph-admin  192.168.122.240  _admin          
1 hosts in cluster

So next, add the Ceph Monitor node to the cluster.

Assuming you have copied the Ceph SSH public key, execute the command below to add the Ceph Monitor to the cluster;

sudo ceph orch host add ceph-mon

Sample command output;

Added host 'ceph-mon' with addr '192.168.122.45'

Next, label the host with its role (remember our ceph-monitor also doubles up as an OSD, hence label mon/osd0 below);

sudo ceph orch host label add ceph-mon mon/osd0

Add Ceph OSD Nodes to Ceph Cluster

Similarly, add the OSD Nodes to the cluster;

sudo ceph orch host add ceph-osd1
sudo ceph orch host add ceph-osd2

Define their respective labels;

for i in {1..2}; do sudo ceph orch host label add ceph-osd$i osd$i; done

List Ceph Cluster Nodes;

You can list the Ceph cluster nodes;

sudo ceph orch host ls

Sample output;

HOST        ADDR             LABELS    STATUS  
ceph-admin  192.168.122.240  _admin            
ceph-mon    192.168.122.45   mon/osd0          
ceph-osd1   192.168.122.231  osd1              
ceph-osd2   192.168.122.49   osd2              
4 hosts in cluster

Attach Logical Storage Volumes to Ceph OSD Nodes

In our setup, we have an unallocated logical volume of 100 GB on each OSD node to be used as the backing store for the OSD daemons.
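
If you are using raw disks instead of logical volumes, you can first list the devices that the orchestrator considers available on each host;

sudo ceph orch device ls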

To attach the logical volumes to the OSD node, run the command below. Replace ceph-vg/ceph-lv with Volume group and logical volume names accordingly. Otherwise, use the raw device path.

sudo ceph orch daemon add osd ceph-mon:ceph-vg/ceph-lv

Command output;

Created osd(s) 0 on host 'ceph-mon'

Repeat the same for the other OSD nodes.

sudo ceph orch daemon add osd ceph-osd1:ceph-vg/ceph-lv
sudo ceph orch daemon add osd ceph-osd2:ceph-vg/ceph-lv

The Ceph Nodes are now ready for OSD use.
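
You can confirm that the OSD daemons are up and running;

sudo ceph osd tree
sudo ceph orch ps --daemon-type osd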

Check Ceph Cluster Health

To verify the health status of the Ceph cluster, simply execute the command ceph -s on the admin node or even on each OSD node (if you have installed the cephadm/ceph commands there).

To check Ceph cluster health status from the admin node;

sudo ceph -s

Sample output;

  cluster:
    id:     70d227de-83e3-11ee-9dda-ff8b7941e415
    health: HEALTH_OK
 
  services:
    mon: 4 daemons, quorum ceph-admin,ceph-mon,ceph-osd1,ceph-osd2 (age 28m)
    mgr: ceph-admin.ykkdly(active, since 100m), standbys: ceph-mon.grwzmv
    osd: 3 osds: 3 up (since 15m), 3 in (since 16m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   81 MiB used, 300 GiB / 300 GiB avail
    pgs:     1 active+clean 
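
You can also check the overall and per-pool storage utilization;

sudo ceph df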

Accessing Ceph Admin Web User Interface

The bootstrap command gives a URL and credentials to use to access the Ceph admin web user interface;

Ceph Dashboard is now available at:

	     URL: https://ceph-admin:8443/
	    User: admin
	Password: hnrpt41gff

Thus, open the browser and navigate to the URL above. You can also use the Ceph admin node IP address instead.

If a firewall is running, open the dashboard port (8443/tcp).

Enter the provided credentials, reset your admin password, and proceed to log in to the Ceph Admin UI.

[Screenshot: Ceph Reef 18 dashboard login page]

If you want, you can activate the telemetry module by clicking the Activate button, or from the CLI;

sudo cephadm shell -- ceph telemetry on --license sharing-1-0

Go through the other Ceph dashboard menus to see more about your Ceph cluster.

Ceph Dashboard;

[Screenshot: Ceph Reef dashboard]

Under the Cluster menu, you can see other details: hosts, disks, OSDs, etc.

[Screenshot: Ceph cluster menu]

There you go. That marks the end of our tutorial on how to install and setup Ceph storage cluster on Ubuntu 22.04.

Reference

Deploying a new ceph cluster

Other Tutorials

Install and setup GlusterFS on Ubuntu 22.04/Ubuntu 20.04

Easily Install and Configure Samba File Server on Ubuntu 22.04
