Corrupt GPT table on mdadm NVME RAID

Creating a bug report/issue

:white_check_mark: I have searched the existing open and closed issues

Required Information

  • DietPi version | cat /boot/dietpi/.version
    • G_DIETPI_VERSION_CORE=10
      G_DIETPI_VERSION_SUB=0
      G_DIETPI_VERSION_RC=1
      G_GITBRANCH='master'
      G_GITOWNER='MichaIng'
  • Distro version | echo $G_DISTRO_NAME $G_RASPBIAN
    • trixie
  • Kernel version | uname --all
    • Linux R5BP 6.1.115-vendor-rk35xx #1 SMP Wed Dec 24 10:54:39 UTC 2025 aarch64 GNU/Linux
  • Architecture | dpkg --print-architecture
    • arm64
  • SBC model | echo $G_HW_MODEL_NAME or (EG: RPi3)
    • ROCK 5B (aarch64) - It's actually a ROCK 5B+
  • Power supply used | (EG: 5V 1A RAVpower)
    • Radxa Power PD 30W - 9V / 2A, 12V / 2.5A
  • SD card used | (EG: SanDisk ultra)
    • SanDisk ImageMate 64GB

Additional Information (if applicable)

  • Can this issue be replicated on a fresh installation of DietPi?
    • Yes
  • Bug report ID | echo $G_HW_UUID
    • ee71bfae-d00d-4bb6-b536-6568f1989cac

Steps to reproduce

After a successful write of the Radxa 5B image to the SD card using Balena Etcher, the GPT table on the card is corrupt on first boot.

  1. Flash the SD card with the Radxa 5B image using Balena Etcher
  2. Customize the setup files:
    1. dietpi.txt
    2. dietpi-wifi.txt
    3. dietpiEnv.txt
      1. Change fdtfile=rockchip/rk3588-rock-5b.dtb to fdtfile=rockchip/rk3588-rock-5b-plus.dtb
  3. Insert the SD card into the board and wait for the setup to complete
  4. Log in and run fdisk -l
  5. An error is shown for the boot partition

Expected behaviour

  • No errors with the GPT partitions.

Actual behaviour

The boot partition is corrupt, but the backup is OK.

  • Disk /dev/mtdblock0: 16 MiB, 16777216 bytes, 32768 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 7B7A6FDE-8147-4712-A91F-524B1DDF8D33
    The primary GPT table is corrupt, but the backup appears OK, so that will be used.
    

Extra details

Everything works fine, no issues while using the system whatsoever.
I’m building a RAID 1 with the board so that’s why I found this error. First I thought it was the cheap SD card I was using, so I bought a brand new SanDisk, but the error remains.
I flashed the SD cards multiple times, both of them, but the error is consistent, every single time I flash the image to the card and boot them.

According to the user, it was a problem on an NVMe disk.

Yeah, it doesn’t look like a DietPi issue, but rather something with mdadm.

After I’m done creating the array, I see this error on both devices in it.

cbaldan@R5BP:~$ sudo fdisk -l /dev/nvme0n1 /dev/nvme1n1
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: WDC PC SN520 SDAPNUW-512G-1006
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9BD20D8C-B6A5-4A75-A853-79003834B9CE

Device         Start        End    Sectors   Size Type
/dev/nvme0n1p1  2048 1000214527 1000212480 476.9G Linux RAID
The primary GPT table is corrupt, but the backup appears OK, so that will be used.


Disk /dev/nvme1n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: WDC PC SN520 SDAPNUW-512G-1014
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8BA87062-B713-4498-9E5C-B5745EE0789B

Device         Start        End    Sectors   Size Type
/dev/nvme1n1p1  2048 1000214527 1000212480 476.9G Linux RAID

Isn’t it expected that adding drives to a RAID erases the partitions? GPT does keep a backup table, but restoring it would then break the RAID, since it would overwrite sections of the drive that are used for RAID metadata, wouldn’t it?

Though it’s weird that fdisk even detects a GPT-partitioned disk when the primary table is broken. Did you properly wipe the disks before adding them to the RAID?

E.g. our article on this topic states that existing partitions need to be removed: Using a RAID system with DietPi – part 1 – DietPi Blog
Overwriting the first MiB of the drive with zeros, followed by partprobe, should do the same; even better is to erase all traces of the prior partition table.
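For reference, a hedged sketch of that wipe. The device name is just an example from this thread, and this destroys all data on the drive:

```shell
# Example device; substitute your actual drive (DESTROYS all data on it)
DISK=/dev/nvme0n1
# Remove known filesystem/RAID/partition-table signatures
sudo wipefs --all "$DISK"
# Zero the first MiB, where the protective MBR and primary GPT live
sudo dd if=/dev/zero of="$DISK" bs=1M count=1 conv=fsync
# Make the kernel re-read the (now empty) partition table
sudo partprobe "$DISK"
```

Note that the backup GPT sits in the last sectors of the disk, so tools like wipefs or sgdisk that know about it are more thorough than zeroing the start alone.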

Thanks for the tutorial, I hadn’t seen Dietpi’s when I was looking for help.

I’ve just redone the entire RAID setup: unmounted the device, deleted the RAID, cleared the superblock on both drives, and deleted the partitions they had. The corrupt GPT table message was gone for both devices, and dietpi-drive_manager showed the drives exactly like in the tutorial, with the “format required” message.

Then I followed the tutorial to a T; the only difference from the other tutorial I had followed (https://youtu.be/CJ0ed38N8-s) was creating a partition of type fd on both drives before creating the array.

But still, right after I run the mdadm --create command, the corrupt GPT table error is back on both drives.

I guess it’s just how it is; maybe the utils aren’t fully updated to support NVMe drives yet?

cbaldan@R5BP:~$ sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
(...)
cbaldan@R5BP:~$ sudo fdisk -l /dev/nvme1n1 /dev/nvme0n1
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/nvme1n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: WDC PC SN520 SDAPNUW-512G-1014
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8BA87062-B713-4498-9E5C-B5745EE0789B
The primary GPT table is corrupt, but the backup appears OK, so that will be used.


Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: WDC PC SN520 SDAPNUW-512G-1006
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9BD20D8C-B6A5-4A75-A853-79003834B9CE

When you create the RAID, it writes metadata to the beginning of the disk, which overwrites part of the GPT. That is why you see the error.

Don’t write the RAID to the whole disk; instead, create a partition (spanning the full disk) and build the RAID from that.

Or, if you want to use the whole disk instead of partitions, wipe the table before creating the RAID.
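A hedged sketch of the partition-based approach. The device names come from this thread; the parted invocation is an example, not from the tutorial, and it destroys existing data:

```shell
# Create a GPT with a single full-disk partition flagged as RAID on each drive
# (DESTROYS existing data; device names are examples from this thread)
for d in /dev/nvme0n1 /dev/nvme1n1; do
  sudo parted --script "$d" mklabel gpt mkpart primary 0% 100% set 1 raid on
done
# Build the mirror from the partitions, not the raw disks, so the GPT
# and the mdadm metadata no longer overlap
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
```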

You can also force mdadm to write the metadata to the end of the disk with metadata version 1.0:

mdadm --create /dev/md1 --metadata=1.0 ...
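Filled out, that might look like the following; the flags besides --metadata are taken from the create command shown earlier in this thread:

```shell
# metadata 1.0 stores the superblock at the END of each member device,
# so it no longer clobbers the primary GPT at the start of the disk
sudo mdadm --create /dev/md1 --metadata=1.0 --level=1 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme1n1
```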

BTW, I also use a RAID on my NAS, but I don’t want the hassle of mdadm, so I just use Btrfs, which has built-in RAID functionality.
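If you go that route, a minimal sketch. The mount point is an example, and this reformats both drives:

```shell
# Btrfs native RAID1 across both drives: data and metadata each mirrored
# (DESTROYS existing contents; /mnt/raid is an example mount point)
sudo mkfs.btrfs --force --data raid1 --metadata raid1 /dev/nvme0n1 /dev/nvme1n1
sudo mkdir -p /mnt/raid
sudo mount /dev/nvme0n1 /mnt/raid   # mounting either member mounts the array
sudo btrfs filesystem df /mnt/raid  # shows the RAID1 profiles in use
```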


I got it sorted out.

The solution was to nuke the GPT table using gdisk’s zap function, found under the expert commands menu.

Then I followed the tutorial and no issues showing up on fdisk -l.

Credits: random guy at https://askubuntu.com/a/474712/424813
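For anyone scripting this, sgdisk offers a non-interactive equivalent of gdisk’s expert-menu zap, with the same caveat that it destroys the partition table:

```shell
# Non-interactive equivalent of gdisk's expert-menu "z" (zap):
# destroys the primary and backup GPT plus the protective MBR
sudo sgdisk --zap-all /dev/nvme0n1
sudo sgdisk --zap-all /dev/nvme1n1
sudo partprobe   # re-read the partition tables
```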


@Jappe thanks for the tip, I’m going to try Btrfs since it’s considered mature for RAID 1, but it’s odd that a tool that is almost 20 years old is still not considered 100% mature, with very divided opinions everywhere.

But I like the simplicity and flexibility it has over mdadm. I’m new to all this; I just hope I don’t lose all the pictures I’m self-hosting in this array (Immich) ¯\_(ツ)_/¯