Follow this post to learn how to install and set up a Ceph Storage cluster on Ubuntu 22.04. Ceph is a scalable distributed storage system designed for cloud infrastructure and web-scale object storage. It can also provide Ceph Block Storage as well as Ceph File System storage.
As of this writing, Ceph Quincy is the current stable release.
Install and Setup Ceph Storage Cluster on Ubuntu 22.04
Ceph Storage Cluster setup requires at least one Ceph Monitor, one Ceph Manager, and one Ceph OSD (Object Storage Daemon), and possibly a Ceph Metadata Server for providing Ceph File System storage.
These components perform various roles;
- Ceph Admin node (cephadm) - the node on which the Ceph deployment script (cephadm) is installed.
- Ceph Object Storage Daemon (OSD, ceph-osd) - provides the Ceph object data store. It also performs data replication, data recovery and rebalancing, and provides storage information to the Ceph Monitor.
- Ceph Monitor (ceph-mon) - maintains maps of the entire Ceph cluster state, including the monitor map, manager map, OSD map, and CRUSH map. It also manages authentication between daemons and clients.
- Ceph Manager (ceph-mgr) - keeps track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. It also manages and exposes the Ceph cluster web dashboard and API. At least two managers are required for high availability (HA).
Ceph Quincy Deployment Requirements
Below are the requirements for deploying Ceph Quincy storage cluster;
- Python 3
- Systemd
- Podman or Docker for running containers (we use docker in this setup)
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices.
Prepare Ceph Nodes for Ceph Storage Cluster Deployment on Ubuntu 22.04
Attach Storage Disks to Ceph OSD Nodes
Each Ceph OSD node in our architecture has an unallocated LVM logical volume of 4 GB. Check how to create logical volumes in Linux, or see the sketch that follows.
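If you still need to create the logical volume, below is a minimal sketch of how it could be done, assuming a spare 4 GB raw disk at /dev/sdb and the ceph-vg/ceph-lv volume group/logical volume names used in this guide (the device name is an assumption; adjust it to match your environment);
sudo pvcreate /dev/sdb
sudo vgcreate ceph-vg /dev/sdb
sudo lvcreate -l 100%FREE -n ceph-lv ceph-vg
You can then confirm the logical volume exists as shown below.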
lvs
Sample output;
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
ceph-lv ceph-vg -wi-a----- <4.00g
Run System Update
On all the nodes, update your system package cache.
apt update
Set hostnames and Update Hosts File
To begin with, set up the hostnames on your nodes;
hostnamectl set-hostname ceph-admin
Set the respective hostnames on the other nodes, as shown in the example below.
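For instance, for the node names used in this guide, run the respective command on each node;
hostnamectl set-hostname ceph-mon
hostnamectl set-hostname ceph-osd1
hostnamectl set-hostname ceph-osd2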
If you are not using DNS for name resolution, then update the hosts file accordingly.
For example, in our setup, each node's hosts file should contain the lines below;
less /etc/hosts
...
192.168.56.124 ceph-admin
192.168.56.129 ceph-mon
192.168.56.130 ceph-osd1
192.168.56.131 ceph-osd2
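If you prefer, you can append these entries on each node with a single command like the sketch below (the IP addresses are from our example setup; replace them with yours);
cat << 'EOF' | sudo tee -a /etc/hosts
192.168.56.124 ceph-admin
192.168.56.129 ceph-mon
192.168.56.130 ceph-osd1
192.168.56.131 ceph-osd2
EOF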
Set Time Synchronization
Ensure that the time on all the nodes is synchronized. Thus, install Chrony on each node and set it up such that all nodes use the same NTP server.
apt install chrony -y
Edit the Chrony configuration and set your NTP server by replacing the NTP server pools with your NTP server address.
vim /etc/chrony/chrony.conf
Define your NTP Server. Replace ntp.kifarunix-demo.com with your respective NTP server address.
...
# pool ntp.ubuntu.com iburst maxsources 4
# pool 0.ubuntu.pool.ntp.org iburst maxsources 1
# pool 1.ubuntu.pool.ntp.org iburst maxsources 1
# pool 2.ubuntu.pool.ntp.org iburst maxsources 2
pool ntp.kifarunix-demo.com iburst
...
Restart Chronyd
systemctl restart chronyd
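You can then confirm that the node is synchronizing against your NTP server;
chronyc sources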
Install SSH Server on Each Node
Ceph deployment through the cephadm utility requires that an SSH server is installed on all the nodes.
Ubuntu 22.04 comes with SSH server already installed. If not, install and start it as follows;
apt install openssh-server
systemctl enable --now sshd
Install Python3
Python 3 is required to deploy Ceph on Ubuntu 22.04, and it is installed by default on Ubuntu 22.04.
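You can confirm the installed Python version;
python3 -V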
Create Ceph Deployment User on Ceph Admin Node
On the Ceph admin node, create a deployment user with the sudo rights required for installing Ceph packages and configurations, as shown below. Do not use the username ceph, as it is reserved. Replace the cephadmin username accordingly.
useradd -m -s /bin/bash cephadmin
passwd cephadmin
echo "cephadmin ALL=(ALL:ALL) NOPASSWD:ALL" >> /etc/sudoers.d/cephadmin
visudo -cf /etc/sudoers.d/cephadmin
If you get /etc/sudoers.d/cephadmin: parsed OK, then proceed. Otherwise, fix any errors.
chmod 0440 /etc/sudoers.d/cephadmin
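You can optionally verify that passwordless sudo works for the new user (the command should print root without prompting for a sudo password);
su - cephadmin -c 'sudo whoami'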
Install Docker CE on Each Node
The cephadm utility is used to bootstrap a Ceph cluster and to manage ceph daemons deployed with systemd and Docker containers.
Thus, on each Node, run the command below to install Docker CE;
sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker-ce.gpg
echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -sc) stable" | sudo tee /etc/apt/sources.list.d/docker-ce.list
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io -y
sudo systemctl enable --now docker
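You can confirm that Docker is installed and the service is running;
sudo docker version
sudo systemctl status docker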
Enable Root Login on Other Nodes from Ceph Admin Node
In order to add other nodes to the Ceph cluster from the Ceph Admin node, you will have to use the root user account.
Thus, on the Ceph Monitor and Ceph OSD nodes, enable root login via SSH from the Ceph Admin node;
vim /etc/ssh/sshd_config
Add the configuration below at the end of the file, replacing the IP address of the Ceph Admin node accordingly.
Match Address 192.168.56.124
PermitRootLogin yes
Reload ssh;
systemctl reload sshd
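Optionally, you can confirm that the directive applies to connections from the Ceph Admin node using sshd's extended test mode (the IP below is our example Admin node address; the output should show permitrootlogin yes);
sudo sshd -T -C addr=192.168.56.124,user=root,host=ceph-admin | grep -i permitrootlogin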
Setup Ceph Storage Cluster on Ubuntu 22.04
Install cephadm Utility on Ceph Admin Node
On the Ceph admin node, you need to install the cephadm utility.
Cephadm installs and manages a Ceph cluster using containers and systemd, with tight integration with the CLI and dashboard GUI.
- cephadm only supports Octopus and newer releases.
- cephadm is fully integrated with the new orchestration API and fully supports the new CLI and dashboard features to manage cluster deployment.
- cephadm requires container support (podman or docker) and Python 3.
You can check other recommended methods of deploying Ceph.
To install cephadm on Ubuntu 22.04;
sudo wget -q https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm -P /usr/bin/
sudo chmod +x /usr/bin/cephadm
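You can verify that the utility is executable and working (note that this may pull the Ceph container image the first time it runs);
sudo cephadm version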
Setup Ceph Cluster Monitor On Ceph Admin Node
Your nodes are now ready to deploy a Ceph storage cluster. To begin with, switch to the cephadmin user;
su - cephadmin
whoami
Output;
cephadmin
Initialize Ceph Cluster monitor on Ceph Admin Node
It is now time to bootstrap the Ceph cluster in order to create the first Ceph Monitor daemon on the Ceph Admin node. Run the command below, substituting the IP address of the Ceph Admin node accordingly.
sudo cephadm bootstrap --mon-ip 192.168.56.124
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit systemd-timesyncd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit systemd-timesyncd.service is enabled and running
Host looks OK
Cluster fsid: 7d9f97d0-e718-11ec-a3f2-9d53aa092ff4
Verifying IP 192.168.56.124 port 3300 ...
Verifying IP 192.168.56.124 port 6789 ...
Mon IP `192.168.56.124` is in CIDR network `192.168.56.0/24`
Mon IP `192.168.56.124` is in CIDR network `192.168.56.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.56.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph-admin...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
             URL: https://ceph-admin:8443/
            User: admin
        Password: ej0hzlh4ks
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/7d9f97d0-e718-11ec-a3f2-9d53aa092ff4/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /usr/bin/cephadm shell --fsid 7d9f97d0-e718-11ec-a3f2-9d53aa092ff4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /usr/bin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

Bootstrap complete.
According to the documentation, the bootstrap command;
- Create a monitor and manager daemon for the new cluster on the localhost.
- Generate a new SSH key for the Ceph cluster and add it to the root user's /root/.ssh/authorized_keys file.
- Write a copy of the public key to /etc/ceph/ceph.pub.
- Write a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with the new cluster.
- Write a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.
- Add the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.
You can confirm that the Ceph containers are now running on the admin node;
sudo docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED          STATUS          PORTS     NAMES
21903dc86c78   quay.io/prometheus/alertmanager:v0.23.0   "/bin/alertmanager -…"   6 minutes ago    Up 6 minutes              ceph-7d9f97d0-e718-11ec-a3f2-9d53aa092ff4-alertmanager-ceph-admin
b1089c966910   quay.io/ceph/ceph-grafana:8.3.5           "/bin/sh -c 'grafana…"   6 minutes ago    Up 6 minutes              ceph-7d9f97d0-e718-11ec-a3f2-9d53aa092ff4-grafana-ceph-admin
2d244538efe7   quay.io/prometheus/prometheus:v2.33.4     "/bin/prometheus --c…"   7 minutes ago    Up 7 minutes              ceph-7d9f97d0-e718-11ec-a3f2-9d53aa092ff4-prometheus-ceph-admin
4d7de2cabc17   quay.io/prometheus/node-exporter:v1.3.1   "/bin/node_exporter …"   8 minutes ago    Up 8 minutes              ceph-7d9f97d0-e718-11ec-a3f2-9d53aa092ff4-node-exporter-ceph-admin
037bb43a7c17   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   11 minutes ago   Up 11 minutes             ceph-7d9f97d0-e718-11ec-a3f2-9d53aa092ff4-crash-ceph-admin
f79bcf2f95a0   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mgr -…"   14 minutes ago   Up 14 minutes             ceph-7d9f97d0-e718-11ec-a3f2-9d53aa092ff4-mgr-ceph-admin-qgwzfj
19df40c47f9a   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mon -…"   14 minutes ago   Up 14 minutes             ceph-7d9f97d0-e718-11ec-a3f2-9d53aa092ff4-mon-ceph-admin
Enable Ceph CLI
When the bootstrap command completes, a command for accessing the Ceph CLI is provided in its output. Execute that command to access the Ceph CLI;
sudo /usr/bin/cephadm shell --fsid 7d9f97d0-e718-11ec-a3f2-9d53aa092ff4 \
-c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
This drops you into the Ceph CLI shell running inside a Docker container;
You can now run ceph commands, e.g. to check the Ceph cluster status;
ceph -s
  cluster:
    id:     643e4b1e-e679-11ec-ad5a-ad66b59a3777
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph-admin (age 17m)
    mgr: ceph-admin.nryrev(active, since 5m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
You can exit the container CLI by pressing Ctrl+D, or by typing exit and pressing ENTER.
There are other ways in which you can access the Ceph CLI. For example, you can run Ceph CLI commands using cephadm command.
sudo cephadm shell -- ceph -s
Or you could install the Ceph CLI tools on the host (as of this writing, however, there are no ceph-common repos for Ubuntu 22.04 Jammy, hence the install commands below do not work at the moment);
sudo cephadm add-repo --release quincy
sudo cephadm install ceph-common
With this method, you can then run the Ceph commands directly on the host;
sudo ceph -s
Copy SSH Keys to Other Ceph Nodes
Copy the SSH public key generated by the bootstrap command to the root user account on the Ceph Monitor, OSD1 and OSD2 nodes. Ensure root login from the Ceph Admin node is permitted on these nodes, as configured earlier.
sudo ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon
sudo ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd1
sudo ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd2
Drop into Ceph CLI
Since we were not able to install the ceph-common package on the host, let's first drop into the Ceph CLI before proceeding.
sudo /usr/bin/cephadm shell --fsid 7d9f97d0-e718-11ec-a3f2-9d53aa092ff4 \
-c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Add Ceph Monitor Node to Ceph Cluster
At this point, only the Ceph Admin node has been provisioned. You can list all the hosts known to the Ceph orchestrator (ceph-mgr) using the command below;
ceph orch host ls
Sample output;
HOST        ADDR            LABELS  STATUS
ceph-admin  192.168.56.124  _admin
Next, assuming you have already copied the Ceph SSH public key, execute the command below to add the Ceph Monitor node to the cluster;
ceph orch host add ceph-mon
Sample command output;
Added host 'ceph-mon' with addr '192.168.56.129'
Next, label the host with its role (remember our Ceph Monitor node also doubles up as an OSD, hence the mon/osd label below);
ceph orch host label add ceph-mon mon/osd
Add Ceph OSD Nodes to Ceph Cluster
Similarly, add the OSD Nodes to the cluster;
ceph orch host add ceph-osd1
ceph orch host add ceph-osd2
Define their respective labels;
for i in {1..2}; do ceph orch host label add ceph-osd$i osd$i; done
List Ceph Cluster Nodes
You can list the Ceph cluster nodes;
ceph orch host ls
Sample output;
HOST        ADDR            LABELS   STATUS
ceph-admin  192.168.56.124  _admin
ceph-mon    192.168.56.129  mon/osd
ceph-osd1   192.168.56.130  osd1
ceph-osd2   192.168.56.131  osd2
4 hosts in cluster
Attach Logical Storage Volumes to Ceph OSD Nodes
In our setup, we have an unallocated logical volume of 4 GB on each OSD node to be used as the backing store for the OSD daemons.
To attach the logical volume to an OSD node, run the command below. Replace ceph-vg/ceph-lv with your volume group and logical volume names accordingly.
ceph orch daemon add osd ceph-mon:ceph-vg/ceph-lv
Repeat the same for the other OSD nodes.
ceph orch daemon add osd ceph-osd1:ceph-vg/ceph-lv
ceph orch daemon add osd ceph-osd2:ceph-vg/ceph-lv
The Ceph Nodes are now ready for OSD use.
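Still from within the Ceph CLI, you can confirm that the OSD daemons have been created and are up;
ceph osd tree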
Check Ceph Cluster Health
To verify the health status of the Ceph cluster, simply execute the command ceph -s from the Ceph CLI on the admin node;
ceph -s
Sample output;
  cluster:
    id:     7d9f97d0-e718-11ec-a3f2-9d53aa092ff4
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum ceph-admin,ceph-mon,ceph-osd2,ceph-osd1 (age 3m)
    mgr: ceph-admin.qgwzfj(active, since 14m), standbys: ceph-mon.lmmtlr
    osd: 3 osds: 2 up (since 15m), 2 in (since 19s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   11 MiB used, 30 GiB / 30 GiB avail
    pgs:
Accessing Ceph Admin Web User Interface
The bootstrap command output gives a URL and credentials to use to access the Ceph admin web user interface;
Ceph Dashboard is now available at:
URL: https://ceph-admin:8443/
User: admin
Password: ej0hzlh4ks
Thus, open the browser and navigate to the URL.
Open the dashboard port on the firewall, if one is running, as shown in the example below.
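For example, if UFW is the active firewall on the admin node (an assumption for this sketch; adjust for your firewall), you can open the default dashboard port as follows;
sudo ufw allow 8443/tcp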
Enter the provided credentials, reset your admin password, and proceed to log in to the Ceph Admin UI.
If you want, you can activate the telemetry module by clicking the Activate button, or from the CLI;
sudo cephadm shell -- ceph telemetry on --license sharing-1-0
Go through the other Ceph dashboard menus to see more details about Ceph.
Ceph Dashboard;
Under the Cluster menu, you can see other details: hosts, disks, OSDs, etc.
There you go. That marks the end of our tutorial on how to install and set up a Ceph storage cluster on Ubuntu 22.04.
Other Tutorials
Install and setup GlusterFS on Ubuntu 22.04/Ubuntu 20.04
Easily Install and Configure Samba File Server on Ubuntu 22.04