Have you ever wondered if you could move a Linux OS installation to another drive? Well, yes, that is definitely possible. Moving a Linux installation to another drive involves copying the existing system files to the new drive and updating the bootloader configuration.
Migrating Linux OS Installation to Another Drive
Why Move Linux OS to Another Drive?
But why would one want to move a Linux OS installation to another drive? There could be various reasons, including:
- Upgrade or Replace Hard Drive: The most common reason is to replace an existing hard drive with a larger one or upgrade to a faster drive, such as a solid-state drive (SSD).
- Migration to a New Computer: If you’re getting a new computer and want to retain your existing Linux setup, moving the installation to the new drive can be a way to achieve this.
- Disk Space Issues: Running out of disk space on the current drive might prompt the need to move to a larger one.
- Performance Improvement: Switching to a faster drive type, like moving from an HDD to an SSD, can significantly improve system performance.
- Backup and Recovery: Moving the installation to a different drive can be part of a backup and recovery strategy, especially if you’re concerned about the health of your current drive.
- Experimentation and Testing: Linux enthusiasts and system administrators might want to experiment with different drives or configurations for testing purposes.
- Repartitioning and Resizing: If you need to repartition your drives or resize existing partitions, moving the Linux installation to a different drive allows for more flexibility.
- Replacing Faulty Drive: If your current drive is failing or has bad sectors, moving the installation to a new drive can help in avoiding data loss and system instability.
In my case, I will be moving my Ubuntu OS installation from an old laptop to a new laptop with better performance.
So, what are the steps that you can use to move Linux OS to another drive?
- Back up your current system to avoid data loss just in case things don’t go well.
- Ensure the new drive has the same or more disk space than the current drive.
- Create Bootable USB with a Live ISO of your preferred Linux distro. I am using Ubuntu 22.04 Live ISO here.
- Remove your new laptop's drive and attach it to your current Linux system, or vice versa. Ensure you have a disk/card reader, whether for SSD or HDD.
- Boot your current Linux system from the Live CD/USB. This is necessary because you can’t make changes to a mounted Linux system.
- Create partitions on the new drive similar to the partitions on the current drive.
- Copy data from the current drive to the new drive
- Update FSTAB accordingly.
- Install the GRUB on the new drive.
- Update Initial RAM filesystem.
- Cross your Fingers, Reboot and confirm
So, let’s get into this!
Backup your Linux System Data
Before you can even think of attempting to play with your system disk, ensure you have your backups done and verified.
You can check various backup tutorials we have;
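For instance, here is a minimal sketch of backing up your home directory to an external drive with rsync; the destination path is just an example, so adjust it to your setup:
rsync -avhP /home/ /media/kifarunix/backup-drive/home-backup/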
Confirm Drive Sizes
I am moving my Linux OS from a 256 GB SSD to another 256 GB drive.
My current Linux OS disk usage;
df -hT -P /
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vgubuntu-root ext4 232G 164G 57G 75% /
And it is an SSD;
lsblk -o name,rota
NAME ROTA
nvme0n1 0
├─nvme0n1p1 0
├─nvme0n1p2 0
└─nvme0n1p3 0
└─nvme0n1p3_crypt 0
├─vgubuntu-root 0
└─vgubuntu-swap_1 0
ROTA=0 indicates a non-rotational device, usually a solid-state drive (SSD).
As you can see, my drive is also LUKS encrypted! See the lsblk output, nvme0n1p3_crypt.
Create Bootable USB with a Live ISO of your Preferred Linux Distro
As already mentioned, I am using Ubuntu 22.04 Live ISO burnt into a bootable USB drive.
I used the dd command to create the bootable USB drive with the live Ubuntu 22.04 image;
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 7.5G 0 disk
└─sda1 8:1 1 7.5G 0 part
Create bootable Live USB!
sudo dd bs=1M if=/media/kifarunix/vol02/ubuntu-22.04-desktop-amd64.iso of=/dev/sda status=progress oflag=sync
Note that the target should be the whole device (/dev/sda), not a partition such as /dev/sda1; otherwise the USB drive may not boot.
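It is also worth verifying the ISO checksum against the official SHA256SUMS published on the Ubuntu releases page before writing it; for example (the path here is my local copy):
sha256sum /media/kifarunix/vol02/ubuntu-22.04-desktop-amd64.iso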
Boot your Current Linux System from the Live CD/USB
With the live bootable USB attached to your current Linux system (that is, the system that has our current Linux OS installation):
- Power it off
- Boot into the BIOS/UEFI setup to update the boot order. Set the bootable USB as the first boot device.
- Save the changes and boot the system into the live OS.
- When the live OS boots, choose Try Ubuntu (remember we are using an Ubuntu live image here) and proceed to boot into it.
Detach the New System Drive and Attach it to your Current Linux System
- Power off the system with your new drive.
- Once it is off, detach the drive (how to remove a drive varies from system to system; use YouTube videos to check how to detach the drive on your respective laptop model).
- After you have removed the drive, attach it to the current laptop (now running the Live ISO) via a card/drive reader.
Create Partitions on the New Drive
You need to create partitions on the new system’s drive similar to the partitions on the current Linux OS installation drive.
Check available drives
Now that we have attached the new laptop's drive to our current Linux system, let’s confirm the drives from the Live OS!
sudo su -
lsblk | grep -iv loop
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 1 7.5G 0 disk
├─sda1 8:1 1 4.7G 0 part /cdrom
├─sda2 8:2 1 4.9M 0 part
├─sda3 8:3 1 300K 0 part
└─sda4 8:4 1 2.8G 0 part /var/crash
/var/log
sdb 8:16 0 238.5G 0 disk /media/ubuntu/SSD
nvme0n1 259:0 0 238.5G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part
├─nvme0n1p2 259:2 0 732M 0 part
└─nvme0n1p3 259:3 0 237.3G 0 part
Where:
- sda is the bootable USB drive
- sdb is the new Linux OS installation drive
- nvme is the current drive with our current Linux OS installation.
Confirm the Partitions and Partition Scheme
Confirm the partitions on the current Linux OS installation drive;
parted /dev/nvme0n1 p
Sample output;
Model: WDC PC SN730 SDBPNTY-256G-1101 (nvme)
Disk /dev/nvme0n1: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 538MB 537MB fat32 EFI System Partition boot, esp
2 538MB 1305MB 768MB ext4
3 1305MB 256GB 255GB
- The drive is partitioned using a GPT (GUID Partition Table) partition table.
- It is also a UEFI setup, as shown by the “boot” and “esp” (EFI System Partition) flags.
Is the Drive LUKS encrypted?
One thing to note also is that my current Linux OS installation drive is LUKS encrypted!
blkid | grep nvme0n1
/dev/nvme0n1p1: UUID="CF45-D10D" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="2a76337e-fa5b-4def-8958-54eb2b15a2ab"
/dev/nvme0n1p2: UUID="6b905fe4-46c6-438e-a800-30ca9ff9f047" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="73482e3b-d43a-4432-bb36-ce342f231b06"
/dev/nvme0n1p3: UUID="f36fbf36-b4e7-43d8-add5-55eb31fe92e2" TYPE="crypto_LUKS" PARTUUID="cb48c987-3dfa-4b6c-a413-5a45a7310ae7"
As you can see, the /dev/nvme0n1p3 device is of type TYPE="crypto_LUKS".
Swap might also be there, but until you unlock the LUKS device, you won't see it. Anyway, you can always add swap whenever you want.
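For reference, here is a minimal sketch of how you could add a swap file later on a running ext4-based system; the size and path are just examples:
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile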
If you need to ensure that the new drive will also be LUKS encrypted, then you need to partition it, create a LUKS container on the root partition, and set up the filesystems, as shown in the next sections.
Create Partitions on New Drive
So, let’s label the new drive with a GPT partition table. First, unmount it if it was auto-mounted;
umount /dev/sdb
parted /dev/sdb mklabel gpt
If you are prompted about existing data being destroyed, confirm and proceed.
Create an EFI partition of 512 MiB, similar to the current Linux OS installation;
parted /dev/sdb mkpart primary fat32 1MiB 513MiB
Set the esp flag on the EFI partition;
parted /dev/sdb set 1 esp on
Create the boot partition. I created a boot partition of 1 GB, a little bigger than the one on the current Linux OS installation;
parted /dev/sdb mkpart primary ext4 513MiB 1537MiB
Create the root partition;
parted /dev/sdb mkpart primary ext4 1537MiB 100%
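Optionally, you can check that the new partitions are optimally aligned; parted should report each partition as aligned:
parted /dev/sdb align-check optimal 1
parted /dev/sdb align-check optimal 2
parted /dev/sdb align-check optimal 3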
Confirm;
parted /dev/sdb p
Model: JMicron Generic (scsi)
Disk /dev/sdb: 256GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 538MB 537MB ext4 primary boot, esp
2 538MB 1612MB 1074MB primary
3 1612MB 256GB 254GB primary
lsblk
sda 8:0 1 7.5G 0 disk
├─sda1 8:1 1 4.7G 0 part /cdrom
├─sda2 8:2 1 4.9M 0 part
├─sda3 8:3 1 300K 0 part
└─sda4 8:4 1 2.8G 0 part /var/crash
/var/log
sdb 8:16 0 238.5G 0 disk
├─sdb1 8:17 0 512M 0 part
├─sdb2 8:18 0 1G 0 part
└─sdb3 8:19 0 237G 0 part
nvme0n1 259:0 0 238.5G 0 disk
├─nvme0n1p1 259:4 0 512M 0 part
├─nvme0n1p2 259:5 0 732M 0 part
└─nvme0n1p3 259:6 0 237.3G 0 part
Is the LUKS Drive using LVM?
You would normally just run the pvs command to check, but since we are running on a Live OS, you first need to unlock the LUKS device.
cryptsetup luksOpen /dev/nvme0n1p3 dm_crypt-0
You will be prompted to enter the LUKS decryption passphrase.
Once you unlock the drive, check using pvs or dmsetup table command.
pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/dm_crypt-0 vgubuntu lvm2 a-- 237.24g 36.00m
or
dmsetup table
Either should show you whether LVM is in use!
Sample output of the dmsetup table command;
dm_crypt-0: 0 497534976 crypt aes-xts-plain64 :64:logon:cryptsetup:f36fbf36-b4e7-43d8-add5-55eb31fe92e2-d0 0 259:6 32768
vgubuntu-root: 0 495460352 linear 253:0 2048
vgubuntu-swap_1: 0 1998848 linear 253:0 495462400
So, if using LVM, then before you create the logical volumes on your new drive, you need to LUKS encrypt it first.
Thus, let’s encrypt the new drive's root partition with LUKS;
cryptsetup luksFormat /dev/sdb3
Be sure to keep your passphrase safe and in a place you can easily retrieve!
Unlock the LUKS drive so you can create the partitions.
cryptsetup luksOpen /dev/sdb3 dm_crypt-1
This will add a <name> mapping under /dev/mapper/, in this case, /dev/mapper/dm_crypt-1.
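You can also confirm the mapping details with cryptsetup;
cryptsetup status dm_crypt-1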
lsblk
sdb 8:16 0 238.5G 0 disk
├─sdb1 8:17 0 512M 0 part
├─sdb2 8:18 0 1G 0 part
└─sdb3 8:19 0 237G 0 part
└─dm_crypt-1 253:3 0 237G 0 crypt
nvme0n1 259:0 0 238.5G 0 disk
├─nvme0n1p1 259:4 0 512M 0 part
├─nvme0n1p2 259:5 0 732M 0 part
└─nvme0n1p3 259:6 0 237.3G 0 part
└─dm_crypt-0 253:0 0 237.2G 0 crypt
├─vgubuntu-root
│ 253:1 0 236.3G 0 lvm
└─vgubuntu-swap_1
253:2 0 976M 0 lvm
You can now proceed to create the LVM volumes on the new drive using the mapped device, /dev/mapper/dm_crypt-1. Create the physical volume;
pvcreate /dev/mapper/dm_crypt-1
Create the volume group. Do not use the same VG name as the existing one!
vgcreate ubuntuvg /dev/mapper/dm_crypt-1
Create the logical volumes. If you want, you can create a swap LV as well.
For example;
lvcreate -L 1G -n swap ubuntuvg
Then create the root LV using the entire remaining space;
lvcreate -l +100%FREE -n root ubuntuvg
Confirm the logical volumes;
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root ubuntuvg -wi-a----- <235.96g
swap ubuntuvg -wi-a----- 1.00g
root vgubuntu -wi-a----- 236.25g
swap_1 vgubuntu -wi-a----- 976.00m
Create Filesystems on the Partitions
The EFI partition should be formatted as FAT32, and the boot and root devices as EXT4;
mkfs.fat -F32 /dev/sdb1
mkfs.ext4 /dev/sdb2
mkfs.ext4 /dev/mapper/ubuntuvg-root
Make swap!
mkswap /dev/mapper/ubuntuvg-swap
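Optionally, you can activate and verify the swap from the live session, then deactivate it again;
swapon /dev/mapper/ubuntuvg-swap
swapon --show
swapoff /dev/mapper/ubuntuvg-swap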
Move Linux OS to Another Drive
It is now time to migrate or move your Linux OS installation to another drive.
Remember that our Linux OS installation currently resides on the /dev/nvme0n1 drive (with root inside /dev/nvme0n1p3), and we are migrating it to our new system's drive, /dev/sdb.
Mount Old and New Drives
Old here means the drive with the Linux OS installation, /dev/nvme0n1.
mkdir /mnt/{sdb,nvme}
- Mount the current Linux OS root partition to /mnt/nvme,
- the boot partition to /mnt/nvme/boot,
- and the EFI partition to /mnt/nvme/boot/efi.
mount /dev/mapper/vgubuntu-root /mnt/nvme
mount /dev/nvme0n1p2 /mnt/nvme/boot/
mount /dev/nvme0n1p1 /mnt/nvme/boot/efi
Similarly, mount the new drive;
mount /dev/mapper/ubuntuvg-root /mnt/sdb/
mkdir /mnt/sdb/boot
mount /dev/sdb2 /mnt/sdb/boot/
mkdir /mnt/sdb/boot/efi
mount /dev/sdb1 /mnt/sdb/boot/efi/
This is how the drives are mounted now;
df -hT
/dev/mapper/vgubuntu-root ext4 232G 163G 58G 75% /mnt/nvme
/dev/nvme0n1p2 ext4 704M 127M 526M 20% /mnt/nvme/boot
/dev/nvme0n1p1 vfat 511M 6.1M 505M 2% /mnt/nvme/boot/efi
/dev/mapper/ubuntuvg-root ext4 232G 36K 220G 1% /mnt/sdb
/dev/sdb2 ext4 974M 28K 907M 1% /mnt/sdb/boot
/dev/sdb1 vfat 511M 4.0K 511M 1% /mnt/sdb/boot/efi
Clone the Linux OS to new Drive
Now that the partitions are created and mounted, copy the data from the old disk to the new disk. To ensure that proper permissions and ownership are retained, use the rsync command.
You can exclude unnecessary directories such as /tmp, /proc, /dev, /sys, etc.
rsync -avhP --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt/nvme/ /mnt/sdb/
The rsync options, -avhP, used are explained below:
- -a: Stands for "archive" and is used to preserve the file attributes and permissions during the synchronization. It's a shorthand for several other flags like -rlptgoD.
- -v: Stands for "verbose" and makes rsync display detailed information about the files being copied, which can be helpful for tracking progress.
- -h: Stands for "human-readable" and makes the output more easily understandable for humans by using units like "K" (kilobytes), "M" (megabytes), etc.
- -P: Combines two options:
  - --progress: Displays progress information during the transfer, including the percentage of completion.
  - --partial: Allows resuming partially transferred files.
Depending on the size of your data, the copying may take some time to complete!
Once the data copying is done, you can confirm the disk usage on both drives;
du -hs /mnt/sdb
163G /mnt/sdb
du -hs /mnt/nvme/
163G /mnt/nvme/
All seems to be good!
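For a more thorough, though slower, verification, you can optionally do an rsync dry run with checksums; it should report no differences:
rsync -avhn --checksum --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt/nvme/ /mnt/sdb/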
Install GRUB Bootloader
Next, install the GRUB bootloader onto the new disk. Proceed as follows;
mount --bind /dev /mnt/sdb/dev
mount --bind /dev/pts /mnt/sdb/dev/pts
mount --bind /proc /mnt/sdb/proc
mount --rbind /sys /mnt/sdb/sys
If your system is UEFI, ensure that you mount /sys with the --rbind option. This enables the grub-install command to manipulate the EFI variables.
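Before chrooting, you can optionally confirm that the EFI variables are exposed (on a UEFI system);
ls /sys/firmware/efi/efivars | head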
Chroot into your system disk and install bootloader:
chroot /mnt/sdb/
Run the command below to install GRUB on the specified device's boot sector or EFI System Partition (ESP) if it's an EFI system;
grub-install /dev/sdb
If your distribution ships the GRUB tools under different names (for example, grub2-install on Fedora/RHEL-based systems), update the command above accordingly.
Sample output;
Installing for x86_64-efi platform.
Installation finished. No error reported.
If you get the warnings below;
grub-install: warning: EFI variables cannot be set on this system.
grub-install: warning: You will have to complete the GRUB setup manually.
Ensure /sys is mounted with the --rbind option, as described above.
Verify GRUB installation;
grub-install --recheck /dev/sdb
Update the GRUB configuration to reflect the changes.
update-grub
Update the Filesystem Table (FSTAB)
FSTAB (/etc/fstab) defines how storage devices and partitions are mounted into the filesystem hierarchy at system boot.
First, let's get the old system's fstab configuration (note that we are still within the chroot!);
grep -vE "^$|^#" etc/fstab
UUID=4713c4fd-460e-4d1f-953e-69c472907234 / ext4 errors=remount-ro 0 1
UUID=8c32b607-959f-4ba0-859c-047607b8ec89 /boot ext4 defaults 0 2
UUID=83A8-304C /boot/efi vfat umask=0077 0 1
UUID=620960b0-096e-4384-9217-2c928c9737fe none swap sw 0 0
You have to update the device UUIDs to match your new devices' UUIDs.
You can get the UUIDs using the blkid command;
blkid | grep -E 'swap|sdb'
/dev/nvme0n1p1: UUID="43C1-9E44" TYPE="vfat" PARTLABEL="primary" PARTUUID="430cc037-97f6-4364-9c84-26ba9f782446"
/dev/nvme0n1p2: UUID="86e7c866-3199-4c5e-90cc-5c162d1f8955" TYPE="ext4" PARTLABEL="primary" PARTUUID="f50ada98-c395-44e5-9b1b-f17576f94e5a"
/dev/nvme0n1p3: UUID="4709471c-9d3a-4d98-8dc5-6247bc31a366" TYPE="crypto_LUKS" PARTLABEL="primary" PARTUUID="db703ec6-4d44-4724-ae47-6792f18aa614"
/dev/mapper/vgubuntu--1-swap_2: UUID="620960b0-096e-4384-9217-2c928c9737fe" TYPE="swap"
/dev/mapper/nvme0n1p3_crypt: UUID="hf1lmF-3qxg-0Leq-hLeW-vFH1-feMa-Q4TAvN" TYPE="LVM2_member"
/dev/mapper/vgubuntu-swap: UUID="41135eca-3e0c-4731-9263-25340f0a7f57" TYPE="swap"
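You can also print a single device's UUID directly. For example, for the new EFI partition (device name as per this walkthrough);
blkid -s UUID -o value /dev/sdb1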
Then update fstab as follows;
cat > etc/fstab << 'EOL'
UUID=4709471c-9d3a-4d98-8dc5-6247bc31a366 / ext4 errors=remount-ro 0 1
UUID=86e7c866-3199-4c5e-90cc-5c162d1f8955 /boot ext4 defaults 0 2
UUID=43C1-9E44 /boot/efi vfat umask=0077 0 1
UUID=41135eca-3e0c-4731-9263-25340f0a7f57 none swap sw 0 0
EOL
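If your util-linux version is recent enough, you can optionally sanity-check the new fstab from within the chroot;
findmnt --verify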
At this point, you are now almost done.
Update LUKS Crypt Table
The crypt table (/etc/crypttab) defines the layout and configuration of LUKS-encrypted devices that are unlocked at boot.
cat etc/crypttab
dm_crypt UUID=4713c4fd-460e-4d1f-953e-69c472907234 none luks,discard
Where:
- dm_crypt: The name of the mapped device that will be created when the encrypted device is unlocked (/dev/mapper/dm_crypt).
- UUID=4713c4fd-460e-4d1f-953e-69c472907234: The UUID (Universally Unique Identifier) of the encrypted block device. The UUID is a unique identifier assigned to the underlying block device.
- none: The passphrase or key used to unlock the encrypted device. In this case, it is set to none, indicating that the system will prompt for the passphrase.
- luks: The encryption format or type. Here it indicates that the encrypted device is formatted using the LUKS (Linux Unified Key Setup) encryption standard.
- discard: Specifies that the discard (TRIM) operation should be enabled for the encrypted device. TRIM is a feature that allows the operating system to inform the SSD about blocks of data that are no longer in use, helping with performance and wear leveling on SSDs that support it.
Get the LUKS device UUID;
cryptsetup luksUUID /dev/sdb3
4709471c-9d3a-4d98-8dc5-6247bc31a366
Update crypt table;
echo "dm_crypt-1 UUID=4709471c-9d3a-4d98-8dc5-6247bc31a366 none luks,discard" > etc/crypttab
Update Initramfs
Regenerate the initramfs to include the changes made to the LUKS-encrypted drive.
update-initramfs -u
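Optionally, you can confirm that cryptsetup support made it into the regenerated initramfs (assuming Ubuntu's initramfs-tools; the image name glob is an assumption, adjust for your kernel version);
lsinitramfs /boot/initrd.img-* | grep -i cryptsetup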
Update GRUB
Update the GRUB configuration to reflect the changes.
update-grub
Exit Chroot and Reboot the System
Exit the chroot environment and unmount the /dev, /sys, /proc bind mounts. Note that these were mounted under /mnt/sdb, the new drive; and since /sys was mounted with --rbind, unmount it recursively with -R;
exit
umount -R /mnt/sdb/sys
umount /mnt/sdb/proc
umount /mnt/sdb/dev/pts
umount /mnt/sdb/dev
umount /mnt/sdb/boot/efi /mnt/sdb/boot /mnt/sdb
Power off the System
- Power off the system
- Remove the live bootable USB
- Detach the new drive from the reader and install it back into the new laptop (or, if you were replacing a drive in the same machine, swap out the old Linux OS installation drive)
Power on the System and Confirm if all is good
If everything went well, you should boot into your system just as it was before the migration!