Creating a bug report/issue
I have searched the existing open and closed issues
Required Information
- DietPi version | G_DIETPI_VERSION_CORE=9 G_DIETPI_VERSION_SUB=11 G_DIETPI_VERSION_RC=2 G_GITBRANCH='master' G_GITOWNER='MichaIng'
- Distro version | bookworm
- Kernel version | Linux Myles 6.1.0-31-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.128-1 (2025-02-07) x86_64 GNU/Linux
- Architecture | amd64
- SBC model | bare metal
Expected behaviour
- The RAID5 array assembles and mounts on boot, as it did before the update.
Actual behaviour
- The RAID5 array does not assemble/mount after updating to 9.11. When I try to assemble it manually, I get this error:
sudo mdadm --assemble --force /dev/md0 /dev/sd[dbacf] -v
mdadm: looking for devices for /dev/md0
mdadm: Cannot assemble mbr metadata on /dev/sda
mdadm: /dev/sda has no superblock - assembly aborted
mdadm --examine
mdadm: No devices to examine
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 7.3T 0 disk
├─sda1 8:1 0 128M 0 part
└─sda2 8:2 0 7.3T 0 part
sdb 8:16 0 7.3T 0 disk
├─sdb1 8:17 0 128M 0 part
└─sdb2 8:18 0 7.3T 0 part
sdc 8:32 0 465.8G 0 disk
├─sdc1 8:33 0 64M 0 part /boot/efi
└─sdc2 8:34 0 465.7G 0 part /
sdd 8:48 0 7.3T 0 disk
├─sdd1 8:49 0 128M 0 part
└─sdd2 8:50 0 7.3T 0 part
sde 8:64 0 7.3T 0 disk
├─sde1 8:65 0 128M 0 part
└─sde2 8:66 0 7.3T 0 part
sdf 8:80 0 7.3T 0 disk
├─sdf1 8:81 0 128M 0 part
└─sdf2 8:82 0 7.3T 0 part
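Since lsblk shows each 8 TB disk carrying a 128M and a 7.3T partition while the assemble command targeted the whole disks, a hedged first check (a sketch; whether any of these partitions actually hold the members is an assumption) is whether the superblocks live on the partitions instead:
# Look for md superblocks on the large partitions rather than the bare disks
mdadm --examine /dev/sd[abdef]2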
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This configuration was auto-generated on Sat, 08 Feb 2025 17:20:13 -0800 by mkconf
ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=5 UUID=5e588dd2:6100b17a:510363b4:6c519cb4 name=Myles:0
devices=/dev/sda,/dev/sdb,/dev/sdf,/dev/sdd,/dev/sdc
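Since mdadm.conf already records the array UUID, a hedged alternative to naming devices is to let mdadm scan for members by that UUID (a sketch; it only helps if intact superblocks are found somewhere):
# Assemble by array UUID instead of by /dev/sdX names
mdadm --assemble --scan --uuid=5e588dd2:6100b17a:510363b4:6c519cb4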
root@Myles:~# mdadm --examine /dev/sda
/dev/sda:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
root@Myles:~# mdadm --examine /dev/sdb
/dev/sdb:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
root@Myles:~# mdadm --examine /dev/sdc
/dev/sdc:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
root@Myles:~# mdadm --examine /dev/sdd
/dev/sdd:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
root@Myles:~# mdadm --examine /dev/sdf
/dev/sdf:
MBR Magic : aa55
Partition[0] : 976773167 sectors at 1 (type ee)
root@Myles:~#
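The "MBR Magic : aa55" lines with a single type-ee partition are just the protective MBR of GPT-partitioned disks, so --examine on the whole devices reports nothing useful here. A hedged way to see on which block devices mdadm actually finds metadata (a sketch):
# Report every device on which mdadm finds a RAID superblock
mdadm --examine --scan -v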
Extra details
So I think the overall issue is that when the server rebooted, the device names changed: sda, sdb, sdc, etc. got swapped around. I'm not sure how to find the new correct order.
I can run
mdadm --create --assume-clean /dev/md0 --level=5 --raid-devices=5 /dev/sdd /dev/sdb /dev/sda /dev/sdd /dev/sde
to recreate the RAID, but without knowing the correct order I'm kinda stuck.
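Before trusting any guessed order: --create --assume-clean rewrites only the superblocks and skips the initial sync, so as long as chunk size, layout and data offset match the original, the data blocks stay untouched and a read-only check will show whether an order is plausible. A minimal sketch (assuming ext4 directly on /dev/md0; /mnt/raidtest is a placeholder path):
# After recreating with a guessed order, check read-only before any write
fsck.ext4 -n /dev/md0
# Optionally mount read-only for a look around
mkdir -p /mnt/raidtest
mount -o ro /dev/md0 /mnt/raidtest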
root@Myles:~# fsck.ext4 /dev/md0
e2fsck 1.47.0 (5-Feb-2023)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/md0
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
Found a gpt partition table in /dev/md0
One edit: I think I found the right order, but it has to run a scan (by removing --assume-clean) before I can test… in 12 hours.
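One more hedged observation: fsck reported a GPT partition table inside /dev/md0, and the blkid output later in the thread lists /dev/md0p1, so the filesystem may live on that partition rather than on the raw md device. A sketch of the read-only check once the resync is done (/mnt/raidtest is a placeholder path):
lsblk /dev/md0
fsck.ext4 -n /dev/md0p1
mount -o ro /dev/md0p1 /mnt/raidtest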
I did not migrate my RAID, but based on your experience I will note my UUIDs in advance (via blkid or ls -l /dev/disk/by-partuuid/), as well as the mdadm.conf file.
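A hedged sketch of recording those identifiers ahead of time (the target paths are placeholders):
# Save drive/partition identifiers and the current array definition for reference
blkid > /root/blkid.txt
ls -l /dev/disk/by-id/ /dev/disk/by-partuuid/ > /root/disk-ids.txt
mdadm --detail --scan > /root/mdadm-arrays.txt
cp /etc/mdadm/mdadm.conf /root/mdadm.conf.bak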
I always wanted to investigate using UUIDs for defining the RAID disks and add it to our blog post series (Search Results for “raid” – DietPi Blog), to avoid such situations…
What does your “old” mdadm.conf file contain? In my current file there is the array UUID rather than a list of /dev/sd... device names (like ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=4 UUID=84a4c297:9de37779:389cd5f5:ab3529ac name=raid:0). Maybe with this information you can find out the order.
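Besides the old mdadm.conf, the member superblocks themselves record each disk's slot: mdadm --examine prints a "Device Role" line per member, which gives the order regardless of how the kernel renamed the disks. A sketch (only useful while the original superblocks are still readable):
for dev in /dev/sd[abdef]; do
    echo "== $dev"
    mdadm --examine "$dev" | grep -E 'Array UUID|Device Role'
done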
When I created it the first time, this is what it was:
ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=5 UUID=8f35ed59:f5079edc:8553eb33:724d1b8f name=Myle>
devices=/dev/sdd,/dev/sdb,/dev/sda,/dev/sdc,/dev/sdf
root@Myles:~# blkid
/dev/sdf: UUID="6584281f-4b7a-5206-d274-34bca75da06a" UUID_SUB="ef933208-f470-d175-2bd5-07f8a531a4f5" LABEL="Myles:0" TYPE="linux_raid_member"
/dev/sdd: UUID="6584281f-4b7a-5206-d274-34bca75da06a" UUID_SUB="f3b2747b-06ad-5367-d17f-657f9baa8be5" LABEL="Myles:0" TYPE="linux_raid_member"
/dev/sdb: UUID="6584281f-4b7a-5206-d274-34bca75da06a" UUID_SUB="917f1024-b45a-9f17-d97d-755444f29dfa" LABEL="Myles:0" TYPE="linux_raid_member"
/dev/md0p1: PARTLABEL="primary" PARTUUID="3783a883-4eeb-4949-834a-2d6269a0389e"
/dev/sde: UUID="6584281f-4b7a-5206-d274-34bca75da06a" UUID_SUB="46d14c58-13db-7add-30f1-51fb4e0fb375" LABEL="Myles:0" TYPE="linux_raid_member"
/dev/sdc2: UUID="b05bc48c-ada7-4634-8927-face55570ae0" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="root" PARTUUID="53d9e500-0bbe-4e87-a722-addeeac6b2ab"
/dev/sdc1: UUID="A4F6-9F29" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI" PARTUUID="b741a1f9-2a9c-421a-913b-2430c096596b"
/dev/sda: UUID="6584281f-4b7a-5206-d274-34bca75da06a" UUID_SUB="20a7a91d-083e-cc09-8c7d-ec530c41a7b1" LABEL="Myles:0" TYPE="linux_raid_member"
Now, taking a shot in the dark, I took the value of the letter (d=4, b=2, etc…), then found my drives in dietpi-drive_manager and took the new values in that order, hoping that the drive manager kept the drives in the same order.
From this site I found that if I can find the right order I should be fine. I do think I'm on the right track.
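A hedged way to double-check that mapping is to tie the current sdX letters to the physical drives via their model/serial numbers rather than their position (a sketch):
# Stable identifiers for the current kernel names
lsblk -d -o NAME,MODEL,SERIAL,SIZE
ls -l /dev/disk/by-id/ | grep -v -- '-part'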
root@Myles:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Feb 23 21:04:03 2025
Raid Level : raid5
Array Size : 31255576576 (29.11 TiB 32.01 TB)
Used Dev Size : 7813894144 (7.28 TiB 8.00 TB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Feb 24 10:45:30 2025
State : clean, degraded, recovering
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Rebuild Status : 88% complete
Name : Myles:0 (local to host Myles)
UUID : 6584281f:4b7a5206:d27434bc:a75da06a
Events : 9318
Number Major Minor RaidDevice State
0 8 64 0 active sync /dev/sde
1 8 16 1 active sync /dev/sdb
2 8 0 2 active sync /dev/sda
3 8 48 3 active sync /dev/sdd
5 8 80 4 spare rebuilding /dev/sdf
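Once the rebuild finishes and the data checks out, a hedged follow-up (not from the thread) is to persist the new array definition so the next reboot assembles it by UUID rather than by drive letter, and to watch the remaining resync:
# Append the running array definition (remove the stale ARRAY line for the old UUID first)
mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf
update-initramfs -u
# Monitor resync progress
cat /proc/mdstat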