
Using a RAID system with DietPi – part 4

Part 4: RAID maintenance.

The first three posts, “Using a RAID system with DietPi – part 1”, “– part 2” and “– part 3”, covered the setup of, access to, and management/diagnosis of a RAID system, using a RAID 5 as an example.

This blog post deals with maintenance tasks, e.g. changing RAID disks, changing RAID levels, dissolving a RAID assembly, etc.
These tasks will be updated and extended whenever suitable topics come up (e.g. via the DietPi Forum).

This blog post is one of a series regarding setup, usage and management of a RAID system:

  1. Using a RAID system with DietPi – part 1: System overview and installation of the RAID
  2. Using a RAID system with DietPi – part 2: Access the RAID
  3. Using a RAID system with DietPi – part 3: Basic management and diagnosis tasks
  4. Using a RAID system with DietPi – part 4: RAID maintenance
  5. Using a RAID system with DietPi – part 5: Redundancy tasks

Table of contents

  1. Adding a hot spare disk
  2. Extending a RAID5 or RAID6 storage
  3. Switch from RAID5 to RAID6
  4. Dissolving a RAID assembly

1. Adding a hot spare disk

A hot spare disk acts as a “passive” redundant disk in the RAID assembly. If one of the other disks fails, the hot spare disk takes over the function of the failed disk via an automatic rebuild.
The hot spare disk needs the identical formatting/partitioning as all other disks in the RAID, as described in the installation chapter of an earlier blog post.

Then it can be added to the RAID assembly via

mdadm /dev/md0 --add /dev/sdX

where sdX is the additional disk (i.e. ‘X’ has to be replaced with the correct drive letter), e.g. /dev/sde.
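
After adding the disk, it is worth verifying that it shows up as a spare. A quick check via the detail output, with illustrative output (device names and numbers will differ on your system):

mdadm --detail /dev/md0
# the new disk should appear at the end of the device table with the state "spare", e.g.:
#    4       8       64        -      spare   /dev/sde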

After this, the RAID configuration within the file /etc/mdadm/mdadm.conf needs to be updated. To do so, the command

mdadm --examine --scan --verbose

can be executed and its output has to be edited into the mdadm.conf file (changing the corresponding RAID assembly section of the file).
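
A minimal sketch of this update, assuming the scan output shall replace the old ARRAY definition (the output shown is illustrative; UUID, name and device list will differ on your system):

mdadm --examine --scan --verbose
# example output (one ARRAY line per assembly, followed by a devices= line):
# ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=4 UUID=... name=raid:0
#    devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
nano /etc/mdadm/mdadm.conf   # replace the old ARRAY ... lines with the new output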

2. Extending a RAID5 or RAID6 storage

If the storage of the RAID needs to be extended (e.g. because it is running out of space), one or more further disks may be added. As mentioned above, it is good practice to always use identical hard disk drive models.
The extension procedure consists of the following steps (a condensed command sketch follows the list):

  1. The new drive is connected to the system and prepared as described above (delete existing partitions)
  2. The new drive is added as a hot spare disk and the mdadm.conf file is edited, both as described before
  3. The system configuration for the next boot is updated (update-initramfs -u -k all)
  4. The RAID’s disk structure is extended
  5. The RAID’s storage is resized
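
A condensed sketch of steps 1 to 3, assuming the new disk is /dev/sde and may be wiped completely (double check the device name, the wipe is destructive; use the partition, e.g. /dev/sde1, instead if your RAID members are partitions):

wipefs --all /dev/sde              # step 1: remove old partition/filesystem signatures
mdadm /dev/md0 --add /dev/sde      # step 2: add the disk as a hot spare
mdadm --examine --scan --verbose   # step 2: regenerate the ARRAY line for /etc/mdadm/mdadm.conf
update-initramfs -u -k all         # step 3: update the boot configuration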

2.1 Extending the RAID disk structure

In this step, the RAID system is extended via

mdadm --grow --raid-devices=X /dev/md0

The value of ‘X’ is set to the new number of devices in the RAID (e.g. if a RAID5 is extended from 4 to 5 disk devices, raid-devices=5 is set).
The RAID grow procedure can be observed via the status output

cat /proc/mdstat
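
The output can be refreshed periodically, e.g. every 5 seconds, via watch; an illustrative progress line during the grow looks like this (device names, sizes and speed will differ):

watch -n 5 cat /proc/mdstat
# md0 : active raid5 sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
#       [===>.................]  reshape = 17.4% (...) finish=95.2min speed=...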

2.2 Resizing the RAID storage

After the RAID grow procedure is finished, the RAID storage has to be resized, which can be done via dietpi-drive_manager (resize /mnt/raid).
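
If the resize shall be done by hand instead, and assuming the RAID carries an ext4 filesystem (an assumption; dietpi-drive_manager detects the filesystem automatically), the equivalent command would be roughly:

resize2fs /dev/md0   # grow a mounted ext4 filesystem to the new size of /dev/md0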

3. Switch from RAID5 to RAID6

As a prerequisite, one additional hard disk is needed to form a RAID6.

Converting a RAID5 assembly to RAID6 needs these basic steps:

  1. Adding an additional disk as a so-called “hot spare” disk
  2. Updating the RAID configuration
  3. Converting from RAID5 to RAID6
  4. Final check/update of the RAID configuration

3.1 Adding an additional disk as a so-called “hot spare” disk

To use the additional disk, it needs the identical formatting/partitioning as all other disks in the RAID, as described within chapter 3.
Then it can be added to the RAID assembly via

mdadm /dev/md0 --add /dev/sdX

where sdX is the additional disk (i.e. ‘X’ has to be replaced with the correct drive letter), e.g. /dev/sde.

3.2 Updating the RAID configuration

In this step, the RAID configuration within the file /etc/mdadm/mdadm.conf needs to be updated. To do so, the command

mdadm --examine --scan --verbose

can be executed and its output has to be edited into the mdadm.conf file (changing the corresponding RAID assembly section of the file).

3.3 Converting from RAID5 to RAID6

In this step, the conversion is started via

mdadm --grow /dev/md0 --level=6 --backup-file=/<path_to_different_location>/md0.bak

The backup file is needed for a restart if the conversion is interrupted in any way. It must not be located on the RAID itself; therefore, <path_to_different_location> has to be replaced with an appropriate location (e.g. /root, if the root filesystem does not reside on the RAID, resulting in --backup-file=/root/md0.bak as used below).
During the conversion, the status of the RAID assembly (via mdadm --detail /dev/md0) changes from “clean” to “degraded” to “reshaping” and finishes with “clean”.
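
The reshaping progress can again be followed via /proc/mdstat; the assembly state alone can be extracted from the detail output, e.g.:

watch -n 10 cat /proc/mdstat                # follow the reshape progress
mdadm --detail /dev/md0 | grep 'State :'    # prints e.g. "State : clean, reshaping"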

If the conversion was interrupted (e.g. by a system reboot), it can be restarted via:

mdadm --run /dev/md0
mdadm --grow --continue --backup-file=/root/md0.bak /dev/md0

3.4 Final check/update of the RAID configuration

As a final step, the RAID configuration is checked and updated as described before the conversion step (section 3.2). The mdadm.conf needs a change, because the former hot spare disk is now a regular disk within the RAID assembly.
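
A quick way to spot the needed change is to compare a fresh scan against the current configuration (after the conversion, the ARRAY line should show level=raid6 and the increased device count):

mdadm --examine --scan --verbose   # new ARRAY line, now with level=raid6
cat /etc/mdadm/mdadm.conf          # compare and update the ARRAY entry accordingly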

4. Dissolving a RAID assembly

A RAID assembly (or rather its superblock) can only be unlinked or deleted if there are no write accesses any more. That means all services and processes that may access the RAID assembly should be terminated (e.g. via service samba stop or kill -9 processID).

To find all of them, the command lsof can be used:

root@raid:~# lsof /dev/md0
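
An alternative is fuser (from the psmisc package), which lists the processes using the mounted filesystem together with the access type (-m selects the filesystem on the device, -v enables verbose output):

root@raid:~# fuser -mv /dev/md0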

In addition, the RAID assembly must be unmounted (e.g. via dietpi-drive_manager or via umount /dev/md0).

If the umount command gives an error message like [target is busy], this indicates that services/processes are still accessing the RAID.

After the unmount of the RAID the assembly is stopped via:

root@raid:~# mdadm --stop /dev/md0

If the disks shall later be used in another RAID assembly, the old RAID information will be detected and a confirmation to reuse these RAID partitions of the disks will be requested.

If the disks shall be used for any “normal” disk operation, it is necessary to delete the RAID information by clearing the superblock on all partitions (disks):

root@raid:~# mdadm --zero-superblock /dev/sdX1
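
A minimal sketch to clear all member partitions in one go, assuming four disks with the partitions /dev/sda1 to /dev/sdd1 (adjust the pattern to your setup; this irrevocably deletes the RAID metadata):

for part in /dev/sd[a-d]1; do
    mdadm --zero-superblock "$part"
done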

At last, the mdadm.conf configuration is edited and the entries regarding the old RAID assembly are deleted or commented out. This may be done by simply deleting the two [ARRAY /dev/md/0 ...] lines at the end of the file.

Finally, if the unmount of the RAID was done by hand (via umount /dev/md0 instead of via dietpi-drive_manager), the mount directive in the file /etc/fstab has to be deleted.
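
The entry to delete will look similar to the following illustrative line (the device may also be referenced by its UUID; mount point and options depend on the system):

/dev/md0 /mnt/raid ext4 noatime,lazytime 0 0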
