Creating a RAID 1 array on Ubuntu with mdadm

In my home server that I use for various tasks and projects, I have two old 1 TB drives that came from a Windows machine. Until now I kept them with the NTFS filesystem because I needed the files that were on them. Now that those files have been moved to my NAS, I can use the two drives for something else, like moving my MySQL databases onto a RAID 1 mirror to have a live copy if something goes wrong.

So, what is RAID?

RAID stands for Redundant Array of Independent (or Inexpensive) Disks.

In other words, it’s a way to combine multiple physical disk drives into one or more logical volumes to obtain data redundancy, better performance, or both.

In this tutorial I will cover RAID 1, also known as mirroring.

RAID 1

RAID 1 requires a minimum of two disks.

It has excellent redundancy: your data is written to both disks (mirroring), so if one drive fails, the other keeps working and you have an exact copy available and ready to use. In fact, no matter how many disks your array has, it remains functional as long as even a single disk is operational, but be aware that in that case no redundant copy is left.

Two disks, with exactly the same data on both.

The array size is limited by the size of the smallest disk in your array.

Performance

Write and read performance differ greatly between array types.

In our case, write speed is dictated by the slowest drive in your array, because the data is written to both drives simultaneously.

On the other hand, reads can be faster, as data can be served from either drive in the array.

How to do it?

To create a software RAID array with mdadm, follow these steps:

In Linux a disk drive is also referred to as a “block device”, so to list the block devices attached to your system, use the lsblk (list block devices) command:

$ lsblk

sda      8:0    0   7,3T  0 disk 
├─sda1   8:1    0   128M  0 part 
└─sda2   8:2    0   7,3T  0 part /mnt/usb8tb
sdb      8:16   0 223,6G  0 disk 
├─sdb1   8:17   0   512M  0 part /boot/efi
├─sdb2   8:18   0 215,6G  0 part /
└─sdb3   8:19   0   7,5G  0 part [SWAP]
sdc      8:32   0 931,5G  0 disk 
└─sdc1   8:33   0 931,5G  0 part /mnt/2
sdd      8:48   0 931,5G  0 disk 
└─sdd1   8:49   0 931,5G  0 part /mnt/1
sde      8:64   0 931,5G  0 disk 
├─sde1   8:65   0   300M  0 part 
├─sde2   8:66   0    99M  0 part 
├─sde3   8:67   0   128M  0 part 
├─sde4   8:68   0 194,8G  0 part 
└─sde5   8:69   0 491,3G  0 part /mnt/3

In my case I will use the sdc and sdd disk drives to create the array; luckily, they are the same size.

You can also see that these drives are mounted at the /mnt/1 and /mnt/2 mount points:

sdc      8:32   0 931,5G  0 disk 
└─sdc1   8:33   0 931,5G  0 part /mnt/2
sdd      8:48   0 931,5G  0 disk 
└─sdd1   8:49   0 931,5G  0 part /mnt/1

Since the disks are already mounted, we first need to unmount them:

$ sudo umount /mnt/1
$ sudo umount /mnt/2
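
If either umount command complains that the target is busy, something is still using that mount point. One way to see what is holding it open (assuming the psmisc package is installed, which it usually is on Ubuntu) is fuser:

$ sudo fuser -vm /mnt/1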

After you unmount the drives, if you run the lsblk command again you will see that this time no mount points are listed for sdc and sdd:

sdc      8:32   0 931,5G  0 disk 
└─sdc1   8:33   0 931,5G  0 part 
sdd      8:48   0 931,5G  0 disk 
└─sdd1   8:49   0 931,5G  0 part 

As I said, these drives came from a Windows system, so there is an NTFS file system on them, but it’s better to check just to be sure. We can do that using lsblk again, like this:

$ lsblk -no FSTYPE /dev/sdc

ntfs
$ lsblk -no FSTYPE /dev/sdd

ntfs
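
If you want a fuller picture in one go, lsblk -f should also show the filesystem type together with the label and UUID for both drives at once, something like:

$ lsblk -f /dev/sdc /dev/sdd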

Now that the playground is set, let’s install mdadm and set up the RAID array.

1. Install mdadm

$ sudo apt install mdadm
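
You can quickly confirm that the tool is available (and see which version was installed) with:

$ mdadm --version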

2. Do some checks

This is not strictly necessary for my setup, but I’ll do it anyway just to see the result. We will examine the disks to check whether any RAID is already configured on them, with the following command:

$ sudo mdadm --examine /dev/sdc1 /dev/sdd1
mdadm: No md superblock detected on /dev/sdc1.
mdadm: No md superblock detected on /dev/sdd1.

As you can see, it says No md superblock detected on /dev/sd... This means these partitions are not used by any RAID array, but they also aren’t ready yet to be used in one. We have to partition these block devices specifically for software RAID. We’ll use parted for this.

3. Create RAID partitions on the two disks

For that I will use parted; if you are more comfortable with fdisk, there is no problem using it instead.

Below is the entire parted session, but I will also explain it line by line. After you type the parted -a optimal /dev/sdc command, you will be dropped into the (parted) prompt, where you will enter the following commands:

mklabel gpt

Here we choose the partition table type, in this case GPT. For a drive this size GPT is not strictly required; the old-fashioned msdos label would also work.

mkpart primary ext4 0% 100%

This creates one partition spanning the entire disk, from the beginning (0%) to the end (100%).

set 1 raid on

This sets a flag on partition 1, in our case raid; it could also be boot, swap, etc.

align-check

This checks whether the partition alignment is optimal, as requested by the -a optimal option when we started parted.

print

Prints information about the disk and its partition table.

$ sudo parted -a optimal /dev/sdc
GNU Parted 3.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt                                                      
Warning: The existing disk label on /dev/sdc will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? y                                                                 
(parted) mkpart primary ext4 0% 100%                                      
(parted) set 1 raid on                                                    
(parted) align-check
alignment type(min/opt)  [optimal]/minimal? optimal                       
Partition number? 1                                                       
1 aligned
(parted) print                                                            
Model: ATA WDC WD10EZEX-22B (scsi)
Disk /dev/sdc: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  1000GB  1000GB  ext4         primary  raid

Repeat the same steps for the second disk (sdd).
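
If you don’t want to go through the interactive prompt again, the same steps can be scripted in one go with parted’s -s (script) mode; a rough equivalent for the second disk would be something like this (double-check the device name, as this wipes the existing partition table without asking):

$ sudo parted -s -a optimal /dev/sdd mklabel gpt mkpart primary ext4 0% 100% set 1 raid on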

Once the partitions are created, let’s verify the changes on both disks with the sudo mdadm --examine /dev/sdc1 /dev/sdd1 command:

$ sudo mdadm --examine /dev/sdc1 /dev/sdd1
/dev/sdc:
   MBR Magic : aa55
Partition[0] :   1953525167 sectors at            1 (type ee)
/dev/sdd:
   MBR Magic : aa55
Partition[0] :   1953525167 sectors at            1 (type ee)

4. Create the RAID 1 block device and add the disks to the array

$ sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[c-d]1
mdadm: /dev/sdc1 appears to contain an ext2fs file system
       size=976760832K  mtime=Thu Jan  1 02:00:00 1970
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: /dev/sdd1 appears to contain an ext2fs file system
       size=976760832K  mtime=Thu Jan  1 02:00:00 1970
Continue creating array? 
Continue creating array? (y/n) y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Now the array is resyncing. This happens because mdadm has no idea that the disks are empty; it simply copies the blocks (mostly zeros at this point) from one disk to the other, one by one.

You can check how it progresses by typing the following command:

$ cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdd1[1] sdc1[0]
      976628736 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.7% (7010368/976628736) finish=126.4min speed=127791K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>
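
If you’d rather watch the progress update live instead of re-running the command, wrapping it in watch (which refreshes every two seconds by default) works well:

$ watch cat /proc/mdstat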

Now we wait until the resync job is finished; in my case that will take a little over two hours. When the job is done, we check the RAID devices and the array with the following commands.

First, we’ll check the RAID devices:

$ sudo mdadm -E /dev/sd[c-d]1

 /dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : d285bbe8:07d1b67f:535ca58a:a0a45a51
           Name : tuxb:0  (local to host tuxb)
  Creation Time : Tue Dec  1 23:57:12 2020
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953257472 (931.39 GiB 1000.07 GB)
     Array Size : 976628736 (931.39 GiB 1000.07 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : ab9f7727:479dc3b2:fe25aa01:2ce0a473

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Dec  2 02:07:22 2020
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 4813e561 - correct
         Events : 1586


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : d285bbe8:07d1b67f:535ca58a:a0a45a51
           Name : tuxb:0  (local to host tuxb)
  Creation Time : Tue Dec  1 23:57:12 2020
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953257472 (931.39 GiB 1000.07 GB)
     Array Size : 976628736 (931.39 GiB 1000.07 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : f553b660:7a23ed20:11612026:105970f7

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Dec  2 02:07:22 2020
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 97bdd3d6 - correct
         Events : 1586


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

And now we’ll check the array:

$ sudo mdadm --detail /dev/md0


/dev/md0:
           Version : 1.2
     Creation Time : Tue Dec  1 23:57:12 2020
        Raid Level : raid1
        Array Size : 976628736 (931.39 GiB 1000.07 GB)
     Used Dev Size : 976628736 (931.39 GiB 1000.07 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Dec  2 02:07:22 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : tuxb:0  (local to host tuxb)
              UUID : d285bbe8:07d1b67f:535ca58a:a0a45a51
            Events : 1586

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1

If you run these commands while the array is still resyncing, there should be no problem; you will just see State : clean, resyncing instead of State : clean as shown above.

5. Create and mount the file system

To create an ext4 file system on /dev/md0, use mkfs.ext4 like this:

$ sudo mkfs.ext4 /dev/md0

Now let’s mount the RAID array and see how it works.

For that you will need to create a mount point; I prefer /mnt/raid1tb. Create the mount point like this:

$ sudo mkdir /mnt/raid1tb

And then mount the array with the following command:

$ sudo mount /dev/md0 /mnt/raid1tb
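
You can confirm that the array is mounted and see its usable size with df, for example:

$ df -h /mnt/raid1tb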

Go to /mnt/raid1tb and check that it works by creating some folders or files. If everything is OK, let’s add the array to the fstab file so that it is mounted automatically at boot time.

To do this we have to edit the /etc/fstab file and add our new block device.

My preferred editor is vim, but of course you can use your favorite one. To do this with vim, these are the steps:

$ sudo vim /etc/fstab

Add the following lines to your fstab file:

 #RAID 1 TB
 /dev/md0    /mnt/raid1tb    ext4    defaults    0 0
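
Depending on your setup, you may also want to record the array in mdadm’s configuration file so that it is reliably assembled under the same /dev/md0 name at boot (otherwise it can come up with a different name, such as /dev/md127, and the fstab entry would then fail). On Ubuntu that usually looks something like this:

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
$ sudo update-initramfs -u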

After editing the fstab file, let’s do a test to see if there are any errors in the new entry. Who knows? Maybe a typo or something else slipped your attention, and you don’t want your system to hang at startup.

To do this, first unmount the array like this:

$ sudo umount /mnt/raid1tb

How do you remount everything in fstab without a restart? Nothing simpler, just like this:

$ sudo mount -a

In my case I got the following error:

mount: /mnt/1: can't find UUID=f2aa75ac-da40-4f75-a30b-17df0a0695dc.

This is because I forgot to delete the old fstab entry for one of the drives that I used to create this array.

Hope it’s useful!
