Part 4: RAID maintenance.
The first three posts “Using a RAID system with DietPi – part 1”, “– part 2” and “– part 3” dealt with the setup, the access and the management/diagnosis of a RAID system, using a RAID 5 as an example.
This blog post deals with maintenance tasks, e.g. changing RAID disks, changing RAID levels, dissolving a RAID assembly, etc.
These tasks will be updated or extended whenever suitable issues come up (e.g. via the DietPi Forum).
This blog post is one of a series regarding setup, usage and management of a RAID system:
- Using a RAID system with DietPi – part 1: System overview and installation of the RAID
- Using a RAID system with DietPi – part 2: Access the RAID
- Using a RAID system with DietPi – part 3: Basic management and diagnosis tasks
- Using a RAID system with DietPi – part 4: RAID maintenance
- Using a RAID system with DietPi – part 5: Redundancy tasks
Table of contents
- Adding a hot spare disk
- Extending a RAID5 or RAID6 storage
- Switch from RAID5 to RAID6
- Dissolving a RAID assembly
1. Adding a hot spare disk
A hot spare disk acts as a “passive” redundant disk in the RAID assembly. If one of the other disks fails, the hot spare disk takes over the function of the failed disk via an automatic rebuild.
The hot spare disk needs the identical formatting/partitioning as all other disks in the RAID, as described in the installation chapter of an earlier blog post.
Then it can be added to the RAID assembly via
mdadm /dev/md0 --add /dev/sdX
where sdX is the additional disk (i.e. ‘X’ has to be replaced with the correct disk letter, e.g. /dev/sde).
After this, the RAID configuration within the file /etc/mdadm/mdadm.conf needs to be updated. For this, the command
mdadm --examine --scan --verbose
can be executed and its output has to be merged into the mdadm.conf file (change the corresponding RAID assembly section of the file).
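For illustration, assuming the new disk appears as /dev/sde (a hypothetical device name), the hot spare procedure might look like this:
# Add the prepared disk as a hot spare to the existing array
mdadm /dev/md0 --add /dev/sde
# Verify that the new disk is listed with the role "spare"
mdadm --detail /dev/md0
# Print the current ARRAY definitions for merging into /etc/mdadm/mdadm.conf
mdadm --examine --scan --verbose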
2. Extending a RAID5 or RAID6 storage
If the storage of the RAID needs to be extended (e.g. because it is running out of space), one or more further disks may be added. As mentioned above, it is good practice to always use identical hard disk drive models.
The extension procedure consists of the following steps (a consolidated command sketch follows after this list):
- The new drive is connected to the system and prepared as described above (delete existing partitions)
- The new drive is added as a hot spare disk as described before, and the mdadm.conf file is edited accordingly
- The system configuration for the next boot is updated (update-initramfs -u -k all)
- The RAID’s disk structure is extended
- The RAID’s storage is resized
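A minimal sketch of this command sequence, assuming a RAID5 at /dev/md0 that grows from 4 to 5 disks and a new disk appearing as /dev/sde (both hypothetical values):
# Add the prepared disk as a hot spare
mdadm /dev/md0 --add /dev/sde
# Print the current ARRAY definition and merge it into /etc/mdadm/mdadm.conf
mdadm --examine --scan --verbose
# Update the boot configuration so the changed setup is known at the next boot
update-initramfs -u -k all
# Grow the array from 4 to 5 active devices
mdadm --grow --raid-devices=5 /dev/md0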
2.1 Extending the RAID disk structure
In this step, the RAID system is extended via
mdadm --grow --raid-devices=X /dev/md0
The value of ‘X’ is set to the new total number of devices in the RAID (e.g. if a RAID5 is extended from 4 to 5 disk devices, raid-devices=5 is set).
The RAID grow procedure can be observed via the status output
cat /proc/mdstat
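To follow the progress continuously instead of re-running the command by hand, the status file can be polled, e.g. with the standard watch utility:
# Refresh the RAID status every 5 seconds (stop with Ctrl+C)
watch -n 5 cat /proc/mdstat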
2.2 Resizing the RAID storage
After the RAID grow procedure has finished, the RAID storage has to be resized, which can be done via the dietpi-drive_manager (resize /mnt/raid).
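Alternatively, and assuming the RAID holds an ext4 filesystem (an assumption, not confirmed here), the resize can be done by hand; without a size argument, resize2fs grows the filesystem to the full device size:
# Grow the ext4 filesystem to fill the enlarged array
resize2fs /dev/md0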
3. Switch from RAID5 to RAID6
As a prerequisite, a further hard disk is needed to achieve a RAID6.
Converting a RAID5 assembly to RAID6 needs these basic steps:
- Adding an additional disk as a so-called “hot spare” disk
- Updating the RAID configuration
- Converting from RAID5 to RAID6
- Final check/update of the RAID configuration
3.1 Adding an additional disk as a so-called “hot spare” disk
To use the additional disk, it needs the identical formatting/partitioning as all other disks in the RAID, as described in chapter 3.
Then it can be added to the RAID assembly via
mdadm /dev/md0 --add /dev/sdX
where sdX is the additional disk (i.e. ‘X’ has to be replaced with the correct disk letter, e.g. /dev/sde).
3.2 Updating the RAID configuration
In this step, the RAID configuration within the file /etc/mdadm/mdadm.conf
needs to be updated. Therefore, the command
mdadm --examine --scan --verbose
can be executed and the output has to be edited into the mdadm.conf
file (change the correspondig RAID assembly area of the file).
3.3 Converting from RAID5 to RAID6
In this step, the conversion is started via
mdadm --grow /dev/md0 --level=6 --backup-file=/<path_to_different_location>/md0.bak
The backup file is needed to restart the conversion if it is interrupted in any way. It must not be located on the RAID itself, therefore <path_to_different_location> has to be replaced with an appropriate location.
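For example, with the backup file placed on the system drive under /root (any location outside the RAID works):
# Start the conversion to RAID6; the backup file must live outside the RAID
mdadm --grow /dev/md0 --level=6 --backup-file=/root/md0.bak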
During the conversion, the status of the RAID assembly (via mdadm --detail /dev/md0) changes from “clean” to “degraded” to “reshaping” and finishes with “clean”.
If the conversion was interrupted (e.g. by a system reboot), it can be restarted via:
mdadm --run /dev/md0
mdadm --grow --continue --backup-file=/root/md0.bak /dev/md0
3.4 Final check/update of the RAID configuration
As a final step, the RAID configuration is checked and updated as described before the conversion step. The mdadm.conf file needs a change, because the former hot spare disk is now an active disk within the RAID assembly.
4. Dissolving a RAID assembly
A RAID assembly (or rather its superblock) can only be detached or deleted if there are no more write accesses. This means that all services and processes that may access the RAID assembly should be terminated first (e.g. via service samba stop or kill -9 processID).
To find all of them, the command lsof can be used:
root@raid:~# lsof /dev/md0
In addition, the RAID assembly must be unmounted (e.g. via the dietpi-drive_manager or via umount /dev/md0).
If the umount command gives an error message like [target is busy], this indicates that services/processes are still accessing the RAID.
After the unmount of the RAID the assembly is stopped via:
root@raid:~# mdadm --stop /dev/md0
If the disks shall later be used in another RAID assembly, the old RAID information will be detected and a confirmation to reuse these RAID partitions will be requested.
If the disks shall instead be used for “normal” disk operation, it is necessary to delete the RAID information by clearing the superblock on all partitions (disks).
root@raid:~# mdadm --zero-superblock /dev/sdX1
At last, the mdadm.conf configuration is edited and the entries regarding the old RAID assembly are deleted or commented out. This may be done by simply deleting the two ARRAY /dev/md/0 ... lines at the end of the file.
Finally, if the unmount of the RAID was done by hand (via umount /dev/md0 instead of via the dietpi-drive_manager), the mount directive in the file /etc/fstab has to be deleted.
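Putting the steps together, a minimal teardown sketch, assuming the array /dev/md0 is built from the partitions /dev/sdb1 to /dev/sde1 (hypothetical device names) and Samba is the only service still using it:
# Stop services accessing the RAID (Samba as an example)
service samba stop
# Check for remaining processes with open files on the array
lsof /dev/md0
# Unmount the RAID and stop the array
umount /dev/md0
mdadm --stop /dev/md0
# Clear the RAID superblock on every member partition
for d in /dev/sd[b-e]1; do mdadm --zero-superblock "$d"; done
# Afterwards: remove the ARRAY lines from /etc/mdadm/mdadm.conf and,
# if the mount was set up by hand, the corresponding /etc/fstab entry.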