Setup Software RAID on Debian 10

In this tutorial, you will learn how to set up software RAID on Debian 10. RAID is an acronym for Redundant Array of Independent Disks (the term inexpensive is occasionally used instead of independent). RAID combines multiple devices/inexpensive disk drives into an array that yields performance better than that of one large and expensive drive.

Some of the major reasons to use RAID include:

• enhanced transfer speed
• enhanced number of transactions per second
• increased single block device capacity
• greater efficiency in recovering from a single disk failure

There are three possible types of RAID:

  • Firmware RAID
  • Hardware RAID
  • Software RAID.

This guide focuses on setting up Software RAID on Debian 10.

Setup Software RAID on Debian 10

Software RAID is used to implement the various RAID levels in the kernel block device code. The Linux kernel contains a multiple device (MD) driver that allows the RAID solution to be completely hardware independent.

RAID Levels

The various RAID levels that can be implemented include;

RAID Level 0 (striping)

  • With RAID level 0, data is broken down into small chunks (stripes) and written across the member disks of the array.
  • The storage capacity of the array is equal to the sum of the capacities of the member disks/partitions, as in the example below.
  • RAID Level 0 provides high I/O performance.
  • It provides no fault tolerance; if one device in the array fails, the whole array fails.
  • Requires a minimum of two storage devices.
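
For example, striping two 4 GB disks with RAID 0 yields roughly 8 GB of usable capacity.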

RAID Level 1 (mirroring)

  • With RAID Level 1, an identical (mirrored) copy of the data is written to each member drive of the array.
  • Provides redundancy and hence high data availability. If one member drive of the array fails, the data on the other drives can still be used. When you add another disk to the array, the data on the existing disk is copied to the new disk as well.
  • The storage capacity of a level 1 array is equal to the capacity of the smallest mirrored hard disk in a Hardware RAID or the smallest mirrored partition in a Software RAID, which makes it less space efficient, as in the example below.
  • Requires a minimum of two storage devices.
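
For example, mirroring two 4 GB disks with RAID 1 gives only 4 GB of usable capacity.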

RAID Level 5 (Striping with Parity)

  • This is one of the most commonly used RAID levels.
  • Requires at least three storage drives/devices.
  • In this level, data is striped across the member drives in the array along with parity information. Parity is raw binary data whose value is calculated so that it can be used to reconstruct the striped data from the other drives if one of the drives in the array fails.
  • Provides fault tolerance.
  • The storage capacity is equal to the capacity of the smallest member partition multiplied by the number of partitions minus one, as in the example below.
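
For example, three 4 GB partitions in a RAID 5 array give (3 - 1) x 4 GB = 8 GB of usable capacity.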

RAID Level 6 (Striping with double parity)

  • Similar to RAID level 5 except that it uses double parity.
  • It can withstand two disk failures in the array.
  • Requires at least 4 devices.
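
Since two devices' worth of space holds parity, the usable capacity is the capacity of the smallest member partition multiplied by the number of partitions minus two; for example, four 4 GB partitions give (4 - 2) x 4 GB = 8 GB.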

RAID Level 10 (mirroring + striping)

  • RAID level 10 combines the performance advantages of level 0 with the redundancy of level 1.
  • Often denoted as RAID 1+0 (a stripe of mirrors).
  • Requires at least 4 devices.
  • Half of the storage devices are used for data mirroring, which makes it less space efficient, as in the example below.
  • It is the most expensive of the RAID levels, with lower usable capacity and higher system cost.
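
For example, four 4 GB disks in a RAID 10 array yield 8 GB of usable capacity; the other 8 GB holds the mirror copies.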

Setup Software RAID on Debian 10

So how do you set up Software RAID on Debian 10? In this tutorial, we will demonstrate how to set up RAID Level 1 on Debian 10.

Creating RAID Partitions on the RAID Disks

To set up RAID Level 1, you need at least two drives/partitions. On our demo server, we have already attached two disks, /dev/sdb and /dev/sdc, each of 4GB, as shown below;

lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   15G  0 disk 
├─sda1   8:1    0   13G  0 part /
├─sda2   8:2    0    1K  0 part 
└─sda5   8:5    0    2G  0 part [SWAP]
sdb      8:16   0    4G  0 disk 
sdc      8:32   0    4G  0 disk

In order to use a disk as a RAID disk, you need to create a partition of the RAID type on each disk.

In this demo, we use the parted command for this purpose. parted may not be installed by default on Debian; hence, install it if necessary;

apt install parted
  • Set the partition table (disk label) type on each disk. We use msdos in this setup. If prompted whether to destroy existing data, accept and proceed.
parted -a optimal /dev/sdb mklabel msdos
parted -a optimal /dev/sdc mklabel msdos
  • Create the partition and set the filesystem type.
parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
parted -a optimal /dev/sdc mkpart primary ext4 0% 100%
  • Flag the partition as a software RAID partition.
parted -a optimal /dev/sdb set 1 raid on
parted -a optimal /dev/sdc set 1 raid on
  • Display the partition table.
parted -a optimal /dev/sdb print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 4295MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  4295MB  4294MB  primary               raid
parted -a optimal /dev/sdc print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 4295MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  4295MB  4294MB  primary  ext4         raid
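
You can also confirm that the new partitions now appear under each disk (output not shown here);

lsblk /dev/sdb /dev/sdc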

Setup Software RAID on Debian 10

Now that the two disks are set up, you can go ahead and set up software RAID on Debian 10.

Before you can proceed, ensure that you have the mdadm package installed. mdadm is the utility used to manage MD devices, aka Linux Software RAID.

Check if the package is installed;

apt list -a mdadm
Listing... Done
mdadm/focal,now 4.1-5ubuntu1 amd64 [installed,automatic]

If not installed, you can install it by running the command below;

apt install mdadm

The basic command line syntax for the mdadm command is;

mdadm [mode] <raiddevice> [options] <component-devices>

[mode] specifies the major mdadm operation mode, which can be one of the following;

  • Assemble (-A, --assemble): assembles the components of a previously created array into an active array.
  • Build (-B, --build): Builds an array that doesn’t have per-device metadata (superblocks).
  • Create (-C, --create): Creates a new array with per-device metadata (superblocks).
  • Follow/Monitor (-F, --follow, --monitor): Monitor one or more md devices and act on any state changes. This is only meaningful for RAID1, 4, 5, 6, 10 or multipath arrays.
  • Grow (-G, --grow): Grow (or shrink) an array, or otherwise reshape it in some way.
  • Incremental Assembly (-I, --incremental): Add a single device to an appropriate array.
  • Manage: This is for doing things to specific components of an array such as adding new spares and removing faulty devices.
  • Misc: This is an ‘everything else’ mode that supports operations on active arrays, operations on component devices such as erasing old superblocks, and information gathering operations.
  • Auto-detect (--auto-detect): This mode does not act on a specific device or array, but rather it requests the Linux Kernel to activate any auto-detected arrays.

So as an example, let us see how to create a RAID level 1 array using the two disks we set up above.

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[bc]1

The options;

  • -l, --level= sets the RAID level, which can be one of: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, raid10, 10, multipath.
  • -n, --raid-devices= specifies the number of active devices in the array.

To use short command line options;

mdadm -C /dev/md0 -l raid1 -n 2 /dev/sd[bc]1

For other command line options, consult man mdadm.

The above command creates /dev/md0 as a RAID 1 array consisting of the /dev/sdb1 and /dev/sdc1 partitions.

mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Check the status of the RAID;

mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jun 11 07:13:55 2021
        Raid Level : raid1
        Array Size : 4190208 (4.00 GiB 4.29 GB)
     Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Jun 11 07:14:35 2021
             State : clean, resyncing 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 35% complete

              Name : debian:0  (local to host debian)
              UUID : c2afb48c:8391bfc4:41455be3:18ce0c01
            Events : 5

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

To list detailed information about each component device in the array;

mdadm --examine /dev/sd[bc]1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c2afb48c:8391bfc4:41455be3:18ce0c01
           Name : debian:0  (local to host debian)
  Creation Time : Fri Jun 11 07:13:55 2021
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 8380416 (4.00 GiB 4.29 GB)
     Array Size : 4190208 (4.00 GiB 4.29 GB)
    Data Offset : 6144 sectors
   Super Offset : 8 sectors
   Unused Space : before=6064 sectors, after=0 sectors
          State : active
    Device UUID : 20315496:2688da76:1ce6f030:36585c40

    Update Time : Fri Jun 11 07:15:22 2021
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : ba07c260 - correct
         Events : 11


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c2afb48c:8391bfc4:41455be3:18ce0c01
           Name : debian:0  (local to host debian)
  Creation Time : Fri Jun 11 07:13:55 2021
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 8380416 (4.00 GiB 4.29 GB)
     Array Size : 4190208 (4.00 GiB 4.29 GB)
    Data Offset : 6144 sectors
   Super Offset : 8 sectors
   Unused Space : before=6064 sectors, after=0 sectors
          State : active
    Device UUID : 777754fd:4bef2715:3abb3ae7:cb25d0d1

    Update Time : Fri Jun 11 07:15:22 2021
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 7131292 - correct
         Events : 11


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

You can also check the status by running the command below;

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[1] sdb1[0]
      4190208 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

From the output above;

  • The Personalities line shows the RAID levels the kernel currently supports.
  • The md0 device line shows the state of the array, the RAID level currently set on the device and the devices used in the array.
  • The next line indicates the usable size of the array in blocks.
  • [n/m], e.g. [2/2], shows that ideally the array would have n devices; currently, m devices are in use. When m >= n, things are good.
  • [UU] means both disks on the RAID array are UP.

Read more on Mdstat page.
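
While the initial resync is still running, you can also watch the /proc/mdstat output refresh in real time. A simple approach, assuming the watch utility (part of procps) is available, is;

watch -n 5 cat /proc/mdstat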

Create a Filesystem on RAID Device

Once you have created a RAID device, you need to create a filesystem on it before you can mount and use it.

Note that, as shown above, we created a RAID 1 array which combined two 4G disks into a single 4G mirrored device.

Hence, run the command below to create an ext4 filesystem on the /dev/md0 RAID device.

mkfs.ext4 /dev/md0
mke2fs 1.45.5 (07-Jan-2020)
Creating filesystem with 1047552 4k blocks and 262144 inodes
Filesystem UUID: 97240b9e-8286-49fe-a304-a98bd3f66c42
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Mounting Software RAID Device on Debian 10

You can now mount your RAID device at a convenient location.

mount /dev/md0 /mnt/

To confirm the mounting;

df -hT -P /mnt/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       ext4  3.9G   16M  3.7G   1% /mnt

To automount the device on boot, update the /etc/fstab file by adding a line similar to the one below;

/dev/md0 /mnt ext4 defaults 0 0
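
Alternatively, you can reference the filesystem by its UUID, which is more robust should the device name ever change. The UUID below is the one printed by mkfs.ext4 above; you can also retrieve it with blkid /dev/md0;

UUID=97240b9e-8286-49fe-a304-a98bd3f66c42 /mnt ext4 defaults 0 0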

Also, you need to update the /etc/mdadm/mdadm.conf file with a prototype configuration describing the currently active arrays. This can be generated with the mdadm --detail --scan command;

mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=debian:0 UUID=c2afb48c:8391bfc4:41455be3:18ce0c01

To append this information to mdadm.conf, run;

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Once you update the mdadm.conf, you can then update initramfs.

update-initramfs -u

Once the initramfs has been updated, you can reboot the system to confirm that the RAID device is mounted automatically.
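
After the reboot, a quick check such as the one below should show /dev/md0 mounted on /mnt;

findmnt /mnt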

If one of the disks gets damaged, you can remove it from the array and add a replacement; the data will then be rebuilt onto the new disk from the surviving mirror.
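
As a rough sketch of that recovery workflow, assuming /dev/sdb1 is the failed member and the replacement disk has been partitioned and flagged for RAID exactly like the original, you would mark the device as failed, remove it from the array, and then add the new partition back so that mdadm rebuilds the mirror;

# assuming /dev/sdb1 is the failed member and its replacement is already partitioned
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md0 --add /dev/sdb1

You can then monitor the rebuild progress with cat /proc/mdstat.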

And that marks the end of our guide on how to set up Software RAID on Debian 10.

Other Tutorials

Install and Setup GlusterFS Storage Cluster on CentOS 8

Setup GlusterFS Distributed Replicated Volume on CentOS 8

Install and Configure Ceph Block Device on Ubuntu 18.04

Setup Three Node Ceph Storage Cluster on Ubuntu 18.04
