Setup Replicated GlusterFS Volume on Ubuntu

Follow through this tutorial to learn how to set up a replicated GlusterFS volume on Ubuntu. There are different types of volume architectures that you may want to consider, including distributed, replicated, distributed replicated, dispersed, and distributed dispersed volumes. This guide focuses on the replicated type.

A replicated GlusterFS volume provides reliability and data redundancy, cushioning against data loss in case one of the bricks gets damaged. This is because exact copies of the data are stored on all the bricks that make up the volume. You need at least two bricks to create a volume with two replicas, or a minimum of three bricks to create a volume with three replicas.

It is recommended to use at least three bricks in order to avoid split-brain issues.
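
If storing a full third copy of the data is not feasible, GlusterFS also supports an arbiter brick, which holds only file names and metadata but still guards against split-brain. As an illustration only (the volume name arbiter_volume and brick path /gfsvolume/arb0 are placeholders; the last brick listed becomes the arbiter), such a volume could be created with;

gluster volume create arbiter_volume replica 3 arbiter 1 \
gfs01:/gfsvolume/arb0 \
gfs02:/gfsvolume/arb0 \
gfs03:/gfsvolume/arb0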

In our deployment architecture, we are using three storage servers, each with an extra storage partition attached apart from the root partition.

  • Storage Node 1:
    • Hostname: gfs01.kifarunix-demo.com
    • IP address: 192.168.57.6
    • Gluster Storage Disk: /dev/sdb1
    • Size: 4GB
    • Mount Point: /gfsvolume
    • OS: Ubuntu 22.04/Ubuntu 20.04
  • Storage Node 2:
    • Hostname: gfs02.kifarunix-demo.com
    • IP address: 192.168.56.124
    • Gluster Storage Disk: /dev/sdb1
    • Size: 4GB
    • Mount Point: /gfsvolume
    • OS: Ubuntu 22.04/Ubuntu 20.04
  • Storage Node 3:
    • Hostname: gfs03.kifarunix-demo.com
    • IP address: 192.168.56.125
    • Gluster Storage Disk: /dev/sdb1
    • Size: 4GB
    • Mount Point: /gfsvolume
    • OS: Ubuntu 22.04/Ubuntu 20.04
  • GlusterFS Client:
    • Hostname: gfsclient.kifarunix-demo.com
    • IP address: 192.168.43.197
    • OS: Ubuntu 22.04/Ubuntu 20.04

Before you can proceed;

  • Install GlusterFS Server on Ubuntu
  • Open GlusterFS Ports on Firewall
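
Apart from the brick ports covered later in this guide, the glusterd management daemon listens on port 24007/tcp, and the nodes must be able to reach one another on it for peer probing to work. If you are using UFW, rules along the lines below, run on each node with the peer IPs substituted, should suffice;

ufw allow from <Peer-IP> to any port 24007 proto tcp comment "GlusterFS management"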

Create GlusterFS Trusted Storage Pool

A storage pool is a cluster of storage nodes which provide bricks to the storage volume.
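
Peer probing by hostname requires that the nodes can resolve one another. If you are not relying on DNS, entries like the following, built from the IP addresses in the architecture above, can be added to /etc/hosts on each node;

192.168.57.6	gfs01.kifarunix-demo.com	gfs01
192.168.56.124	gfs02.kifarunix-demo.com	gfs02
192.168.56.125	gfs03.kifarunix-demo.com	gfs03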

To create the GlusterFS TSP, run the command below from one of the nodes, replacing SERVER with the hostname of each node being probed.

gluster peer probe SERVER

For example, to create the trusted storage pool containing Node 02 and Node 03 from Node 01;

gluster peer probe gfs02
gluster peer probe gfs03

If all is well, you should get a successful probe; peer probe: success.
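
Note that the node you probed from may be listed by its IP address rather than its hostname on the other peers. To update it to a hostname, probe it back from any other node; for example, on gfs02;

gluster peer probe gfs01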

To get the status of the TSP peers;

gluster peer status

Sample output;

Number of Peers: 2

Hostname: gfs02
Uuid: b81803a8-893a-499e-9a87-6bac00a62822
State: Peer in Cluster (Connected)

Hostname: gfs03
Uuid: 88cf40a0-d458-4080-8c7a-c3cddbce86c0
State: Peer in Cluster (Connected)
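
For a more compact, one-line-per-node view of the pool that also includes the local node, you can run;

gluster pool list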

Create Replicated GlusterFS Storage Volume

First of all, ensure the storage drive is mounted on each node. In our setup, we are using a non-root partition, /dev/sdb1, mounted under /gfsvolume.

df -hTP /gfsvolume/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdb1      ext4  3.9G   24K  3.7G   1% /gfsvolume
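
If the extra partition is not yet prepared, something along the following lines gets each node to that state. Note that mkfs.ext4 erases any existing data on /dev/sdb1;

mkfs.ext4 /dev/sdb1
mkdir -p /gfsvolume
mount /dev/sdb1 /gfsvolume
echo '/dev/sdb1 /gfsvolume ext4 defaults 0 0' >> /etc/fstab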

To create a replicated GlusterFS storage volume, use the gluster volume create command, whose CLI syntax is shown below;

gluster volume create <NEW-VOLNAME> [[replica <COUNT> \
[arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] \
[disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] \
<NEW-BRICK> <TA-BRICK>... [force]

For example, to create a replicated storage volume using the three nodes, replace the name of the volume, replicated_volume, as well as the node hostnames accordingly;

gluster volume create replicated_volume replica 3 transport tcp gfs01:/gfsvolume/gv0 \
gfs02:/gfsvolume/gv0 \
gfs03:/gfsvolume/gv0

Sample command output;

volume create: replicated_volume: success: please start the volume to access data

Start GlusterFS Volume

Once you have created the volume, start it so that you can begin storing data in it.

The command, gluster volume start VOLUME_NAME, is used to start a volume.

gluster volume start replicated_volume

Sample command output;

volume start: replicated_volume: success

Check the volume status;

gluster volume status
Status of volume: replicated_volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs01:/gfsvolume/gv0                  49152     0          Y       2050 
Brick gfs02:/gfsvolume/gv0                  50073     0          Y       16260
Brick gfs03:/gfsvolume/gv0                  60961     0          Y       1421 
Self-heal Daemon on localhost               N/A       N/A        Y       2071 
Self-heal Daemon on gfs03                   N/A       N/A        Y       1438 
Self-heal Daemon on gfs02                   N/A       N/A        Y       16277
 
Task Status of Volume replicated_volume
------------------------------------------------------------------------------
There are no active volume tasks

Check the gluster volume information;

gluster volume info
 
Volume Name: replicated_volume
Type: Replicate
Volume ID: 4f522843-13df-4042-b73a-ff6722dc9891
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gfs01:/gfsvolume/gv0
Brick2: gfs02:/gfsvolume/gv0
Brick3: gfs03:/gfsvolume/gv0
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
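
Optionally, you can restrict which clients are allowed to mount the volume using the auth.allow volume option, which takes a comma-separated list of addresses or wildcard patterns. For example, to permit only our demo client;

gluster volume set replicated_volume auth.allow 192.168.43.197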

Open GlusterFS Volume Ports on Firewall

In order for the clients to connect to the volumes created, you need to open the respective node's brick port on the firewall. The ports are shown in the gluster volume status output above.

Similarly, ensure that the nodes can communicate with each other on these ports.

For example, if you are using UFW, on Node 01, allow clients and the other Gluster nodes to connect to port 49152/tcp by running the commands below;

ufw allow from <Client-IP-or-Network> to any port 49152 proto tcp comment "GlusterFS Client Access"
ufw allow from <Node02 IP> to any port 49152 proto tcp comment "GlusterFS Node02"
ufw allow from <Node03 IP> to any port 49152 proto tcp comment "GlusterFS Node03"

On Node 02, allow clients and the other nodes to connect to port 50073/tcp by running the commands below;

ufw allow from <Client-IP-or-Network> to any port 50073 proto tcp comment "GlusterFS Client Access"
ufw allow from <Node01-IP> to any port 50073 proto tcp comment "GlusterFS Node01"
ufw allow from <Node03 IP> to any port 50073 proto tcp comment "GlusterFS Node03"

On Node 03, allow clients and the other nodes to connect to port 60961/tcp by running the commands below;

ufw allow from <Client-IP-or-Network> to any port 60961 proto tcp comment "GlusterFS Client Access"
ufw allow from <Node01 IP> to any port 60961 proto tcp comment "GlusterFS Node01"
ufw allow from <Node02 IP> to any port 60961 proto tcp comment "GlusterFS Node02"
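
Note that brick ports are assigned dynamically when a brick starts, which is why the three nodes above ended up with different ports, and a brick's port can change if it is restarted. On recent GlusterFS releases the ports are picked from a configurable range (49152-60999 by default, tunable via the base-port and max-port options in /etc/glusterfs/glusterd.vol), so if per-brick rules become tedious, you could instead open the whole range to trusted hosts;

ufw allow from <Trusted-IP-or-Network> to any port 49152:60999 proto tcp comment "GlusterFS bricks"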

Mount Replicated GlusterFS Volume on Clients

As an example, we are using Ubuntu systems as GlusterFS clients. Thus, install the GlusterFS client and proceed as follows to mount the replicated GlusterFS volume.

Ensure the client can resolve the Gluster nodes' hostnames.

Create the mount point;

mkdir /mnt/gfsvol

Mount the replicated volume. If using domain names, ensure they are resolvable.

mount -t glusterfs gfs01:/replicated_volume /mnt/gfsvol/

Run the df command to check the mounted filesystems.

df -hTP /mnt/gfsvol/
Filesystem               Type            Size  Used Avail Use% Mounted on
gfs01:/replicated_volume fuse.glusterfs  3.9G   41M  3.7G   2% /mnt/gfsvol

From other clients, you can mount the volume via any of the other nodes;

mount -t glusterfs gfs02:/replicated_volume /mnt/gfsvol/

To auto-mount the volume on system boot, you need to add the line below to /etc/fstab.

gfs01:/replicated_volume /mnt/gfsvol glusterfs defaults,_netdev 0 0
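
The server named in the mount command is only used to fetch the volume file at mount time; afterwards, the client talks to all the bricks directly. Even so, to avoid a failed mount at boot when gfs01 is down, you can list fallback servers using the backup-volfile-servers mount option;

gfs01:/replicated_volume /mnt/gfsvol glusterfs defaults,_netdev,backup-volfile-servers=gfs02:gfs03 0 0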

To test the replication, create some test files on the client. Since this is a replicated volume, an exact copy of every file should be stored on each of the bricks. See the example below;

mkdir /mnt/gfsvol/Test-dir
touch /mnt/gfsvol/Test-dir/{test-file,test-file-two}

If you check on node01, node02 and node03, they should all contain the same files. On node01;

ls /gfsvolume/gv0/Test-dir/
test-file  test-file-two

On node02,

ls /gfsvolume/gv0/Test-dir/
test-file  test-file-two

On node03,

ls /gfsvolume/gv0/Test-dir/
test-file  test-file-two
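
You can also confirm that the replicas are healthy and that there are no files pending heal on any of the bricks;

gluster volume heal replicated_volume info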

That concludes our guide on setting up replicated GlusterFS volume on Ubuntu.
