Create RAID Level 10 (RAID 1+0) on Ubuntu 20.04

Software RAID on Linux can be managed using the mdadm tool. In this tutorial, you will learn how to create RAID level 10 (RAID 1+0) on Ubuntu 20.04 using the mdadm utility. There are several standard RAID levels, and some of them, such as RAID level 1 (mirroring) and RAID level 0 (striping), can be combined to provide both storage redundancy and better performance, improving the chances of data recovery in case some disks fail.

In our previous guide, we covered how to create and set up RAID level 1:

Setup Software RAID on Ubuntu 20.04

Attach Physical Drives to your Machine

RAID level 10 (1+0) requires at least four drives.

We already have four physical drives attached to our system, /dev/sd[b-e], each 4G in size.

lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 93.8M  1 loop /snap/core/8935
loop1                       7:1    0   67M  1 loop /snap/lxd/14133
sda                         8:0    0   15G  0 disk 
├─sda1                      8:1    0    1M  0 part 
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   14G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0   14G  0 lvm  /
sdb                         8:16   0    4G  0 disk 
sdc                         8:32   0    4G  0 disk 
sdd                         8:48   0    4G  0 disk 
sde                         8:64   0    4G  0 disk

Creating RAID Partitions

Partition the disks attached above and flag the partitions for RAID use. The -s option tells parted to run non-interactively.

for i in {b..e}; do parted -s -a optimal /dev/sd$i mklabel msdos; done
for i in {b..e}; do parted -s -a optimal /dev/sd$i mkpart primary ext4 0% 100%; done
for i in {b..e}; do parted -s -a optimal /dev/sd$i set 1 raid on; done
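The three loops can also be combined into a single pass per disk. The sketch below prints each parted command as a dry run (via echo) so you can review the exact commands before executing them; remove the echo and run as root to apply them for real (-s makes parted non-interactive).

```shell
# Dry run: print every parted command the partitioning step would execute.
# Remove "echo" (and run as root) to actually partition the disks.
for i in {b..e}; do
  echo parted -s -a optimal /dev/sd$i mklabel msdos
  echo parted -s -a optimal /dev/sd$i mkpart primary ext4 0% 100%
  echo parted -s -a optimal /dev/sd$i set 1 raid on
done
```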

Check the partition table of one of the disks;

parted -a optimal /dev/sdb print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 4295MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  4295MB  4294MB  primary               raid

You can do the same for other disks.

Create RAID Level 10 (RAID 1+0) on Ubuntu 20.04

Once the disks are set up, you can now create RAID level 10 (1+0) on Ubuntu 20.04 using the mdadm command.

Check if the mdadm package is installed;

apt list -a mdadm
Listing... Done
mdadm/focal-updates,now 4.1-5ubuntu1.2 amd64 [installed,automatic]
mdadm/focal 4.1-5ubuntu1 amd64

If not installed, you can install it by running the command below;

apt install mdadm

The basic command line syntax for the mdadm command is;

mdadm [mode] <raiddevice> [options] <component-devices>

[mode] specifies any major mdadm operation mode which can be one of the following;

  • Assemble (-A, --assemble): assembles the components of a previously created array into an active array.
  • Build (-B, --build): Builds an array that doesn’t have per-device metadata (superblocks).
  • Create (-C, --create): Creates a new array with per-device metadata (superblocks).
  • Follow/Monitor (-F, --follow, --monitor): Monitor one or more md devices and act on any state changes. This is only meaningful for RAID1, 4, 5, 6, 10 or multipath arrays.
  • Grow (-G, --grow): Grow (or shrink) an array, or otherwise reshape it in some way.
  • Incremental Assembly (-I, --incremental): Add a single device to an appropriate array.
  • Manage: This is for doing things to specific components of an array such as adding new spares and removing faulty devices.
  • Misc: This is an ‘everything else’ mode that supports operations on active arrays, operations on component devices such as erasing old superblocks, and information gathering operations.
  • Auto-detect (--auto-detect): This mode does not act on a specific device or array, but rather it requests the Linux kernel to activate any auto-detected arrays.

To create RAID level 10 (RAID 1+0) on Ubuntu 20.04, run a command like the one below;

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]1

Sample output;

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

For information on mdadm options, consult man pages.

Check the status of the RAID

You can check the status of the created RAID device above using the command below;

mdadm --detail /dev/md0

Sample output;

/dev/md0:
           Version : 1.2
     Creation Time : Tue Jun 15 18:35:00 2021
        Raid Level : raid10
        Array Size : 8380416 (7.99 GiB 8.58 GB)
     Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Jun 15 18:35:42 2021
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : ubuntu20:0  (local to host ubuntu20)
              UUID : 4491a495:a29490e6:3e353c6d:cffac47d
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1
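Fields such as State can be extracted from this output for simple monitoring scripts. A minimal sketch, run here against a sample of the output above so it works anywhere; on a live system you would pipe mdadm --detail /dev/md0 instead of using a hard-coded string:

```shell
# Sample "State" line copied from the mdadm --detail output above.
detail='             State : clean'

# Split on " : " and keep the value part.
state=$(printf '%s' "$detail" | awk -F' : ' '{print $2}')
echo "state=$state"   # state=clean
```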

To list detailed information about each component device in the array;

mdadm --examine /dev/sd[bcde]1

You can also check the status by running the command below;

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid10 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      8380416 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      
unused devices: <none>

From the output above;

  • The Personalities line shows the RAID levels the kernel currently supports.
  • The md device line shows the state of the array, the RAID level set on the device and the devices used in the array.
  • The next line indicates the usable size of the array in blocks.
  • [n/m], e.g. [4/4], shows that the array is configured with n devices, of which m are currently active; the array is healthy when m equals n. Each U means the corresponding device is up (a _ in its place would mean that device is down or missing), so [UUUU] means all four devices are in the array and up.
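The [n/m] and [UUUU] fields make it easy to script a quick health check. A minimal sketch, shown here against a sample mdstat line so it runs anywhere; on a real system you would read /proc/mdstat itself:

```shell
# Sample line taken from the /proc/mdstat output above; on a live system,
# grep the md0 stanza out of /proc/mdstat instead.
mdstat_line='8380416 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]'

# A "_" inside the [UUUU] field marks a device that is down or missing.
if printf '%s' "$mdstat_line" | grep -q '_'; then
  echo "array degraded"
else
  echo "array healthy"
fi
```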

Create a Filesystem on RAID 10 Device

Once you have created the RAID 10 device, you need to create a filesystem on it to make it usable.

We used four disks of 4G each. Instead of the full 16G, RAID 10 halves the usable capacity, so only about 8G is available for use.

The data is both mirrored and striped across the disks in the array.
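The capacity figure follows directly from the layout: with the default near=2 layout shown in the mdadm --detail output, every block is stored twice, so usable space is the raw capacity divided by the number of copies. A quick sketch of the arithmetic:

```shell
# RAID 10 with two near-copies stores 2 copies of every block, so
# usable capacity = (number of disks * disk size) / copies.
disks=4
size_gb=4
copies=2
raw=$((disks * size_gb))
usable=$((raw / copies))
echo "raw=${raw}G usable=${usable}G"   # raw=16G usable=8G
```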

To create a filesystem on the RAID 10 device, run the command below, which creates an EXT4 filesystem;

mkfs.ext4 /dev/md0

Mounting RAID 10 Device

You can now mount your RAID 10 device to start using it;

mount /dev/md0 /mnt

To confirm the mounting;

df -hT -P /mnt/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       ext4  7.9G   36M  7.4G   1% /mnt

To automount the device on boot, update the /etc/fstab file by adding a line similar to the one below;

/dev/md0 /mnt ext4 defaults 0 0
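Note that md device names are not guaranteed to be stable across reboots (an array created as /dev/md0 can reappear as /dev/md127), so mounting by filesystem UUID is more robust. Get the UUID with blkid -s UUID -o value /dev/md0 and use a line like the sketch below; the UUID shown is only illustrative, and nofail lets the system keep booting even if the array is unavailable.

```shell
# /etc/fstab entry mounting the array by UUID (substitute the UUID
# reported by blkid on your system; this value is an example only).
UUID=2f9d3c41-5b7e-4a10-9c6d-8e21f0a3b4c5 /mnt ext4 defaults,nofail 0 0
```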

You also need to update /etc/mdadm/mdadm.conf so that the array can be assembled automatically at boot. The mdadm --detail --scan command prints a configuration line describing each currently active array;

mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=ubuntu20:0 UUID=244a7fd9:d6fcc210:9b559249:df999270

To append this information to mdadm.conf, run;

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
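One caveat: the >> redirection appends unconditionally, so running the command a second time leaves a duplicate ARRAY line in mdadm.conf. A guarded sketch, demonstrated on a temporary file so it can run anywhere; on a real system the file is /etc/mdadm/mdadm.conf and the line comes from mdadm --detail --scan:

```shell
# Append the ARRAY line only if an identical line is not already present.
conf=$(mktemp)    # stands in for /etc/mdadm/mdadm.conf in this demo
scan_line='ARRAY /dev/md0 metadata=1.2 name=ubuntu20:0 UUID=244a7fd9:d6fcc210:9b559249:df999270'

grep -qxF "$scan_line" "$conf" || echo "$scan_line" >> "$conf"
grep -qxF "$scan_line" "$conf" || echo "$scan_line" >> "$conf"   # second run is a no-op

grep -c '^ARRAY' "$conf"   # prints 1, not 2
```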

Once you have updated mdadm.conf, update the initramfs so the new configuration is included in the boot image.

update-initramfs -u

When done updating the initramfs, you can reboot the system to confirm that the RAID device mounts automatically.

And that marks the end of our guide on how to create RAID level 10 (1+0) on Ubuntu 20.04.

Consult man mdadm for more information on its usage.

Other Tutorials

Setup Software RAID on Rocky Linux 8

Setup Software RAID on Debian 10

Easy way to Setup NFS Server on Ubuntu 20.04
