I think I already sent that lsblk/cat output, except for the kernel error messages. I will check that.
I also assume that the dd copy/clone process was successful.
Generally speaking: do you have a clue where the problem could be?
Did this problem occur before? And if yes, how was it solved?
Because upgrading a hard drive is a rather common operation and shouldn’t be rocket science.
When I did a copy via dd, the target disk had the identical UUID/PARTUUID as the source disk. I checked this via the blkid command.
This UUID from blkid needs to match the UUID within /etc/fstab.
Given this, I do not understand your remark “But the drive IDs are different again and now I have this layout” and would like to investigate it, because I assume that the IDs are identical when using dd to copy disks.
And if the UUIDs of the disks are identical, the contents of /etc/fstab should also match and the new disk environment should run out of the box.
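For example, a quick way to verify the match (a sketch; the device name and UUID are placeholders, not taken from this system):

blkid /dev/sda1
# prints e.g.: /dev/sda1: UUID="a1b2c3d4-…" TYPE="ext4" PARTUUID="…"
grep a1b2c3d4 /etc/fstab
# should return the fstab entry that mounts this partition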
Next try.
This time I used a different approach.
I also connected a monitor to the HDMI port; before, I was only connected over SSH.
Both SSDs were connected via a powered USB hub. I formatted the new 2TB SSD and created an EXT4 partition of the disk’s full size.
Then, with dietpi-drive_manager, I moved the dietpi-userdata folder (about 600 GB) to the new disk, which took some time but completed without any error messages.
After moving the data I turned off the Raspi, removed the 1 TB disk and restarted.
In the first seconds after boot and login everything seems to be fine.
But then, probably while the services (Apache2, PHP, MariaDB etc.) try to start, I get a kind of “panic screen” saying:
xhci_hcd … xHCI host controller not responding, assume dead
and then lots of error messages are shown, mostly EXT4-fs errors, because all hard disks seem to be inaccessible from then on.
The 2TB disk is brand new, and on a Windows system it works like a breeze, so I guess there's nothing wrong with the disk itself.
Strangely, when I remove the 256 GB boot device and start from a freshly installed DietPi system on a micro SD card, I don't see that kind of trouble.
Maybe because the dietpi-userdata still resides on the micro SD and no services etc. try to access the 2TB SSD.
Also strange:
I can mount the 2 TB SSD, and when I use MC to navigate around the dietpi-userdata folder, all the data seem to be there.
But entering a folder for the first time often takes several seconds, sometimes 20 or more.
Could it be that the device drivers cannot deal with an SSD of that size, or what else could be the reason for this strange behaviour?
And I also assume that this xHCI host error already occurred after my first attempts with dd.
I just didn’t see the error messages because I was connected via SSH.
Any ideas?
Does it boot correctly when the 1TB drive is connected?
Can you plug in both hard drives and show the output of lsblk -o name,fstype,label,size,ro,type,mountpoint,partuuid,uuid
and cat /etc/fstab.
As soon as the 2TB drive is connected, some seconds after boot (and login) the xHCI error appears.
Without login it also crashes.
So there’s no time to issue the lsblk command etc.
If only the 1 TB drive is connected, the system won’t boot correctly because the dietpi-userdata folder is missing.
First of all, as long as it is not the root filesystem, dd is IMO overkill, a potential cause of issues, and extremely slow. Just attach the new drive, use dietpi-drive_manager to move the userdata (which copies files, not raw data bit by bit), detach the old drive and mount the new one to the old mount point. I see no reason to keep UUID/PARTUUID the same in this case.
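For illustration, a rough manual equivalent of what dietpi-drive_manager automates (a sketch only; the target mount point /mnt/newdrive is an assumption, not taken from this system):

dietpi-services stop        # stop services so no files are in use during the copy
rsync -aHAX /mnt/dietpi_userdata/ /mnt/newdrive/        # copy files, preserving permissions, hard links, ACLs and xattrs
# then update /etc/fstab so the old mount point refers to the new partition's UUID, and remount
dietpi-services start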
The device path /dev/sd* btw is irrelevant. It is automatically assigned based on the order in which the kernel detects storage devices. The new SSD is obviously detected faster or earlier than the old one. Since we however use UUIDs for the /etc/fstab entries, this doesn't matter.
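For illustration, such an entry looks roughly like this (UUID, mount point and options are placeholders, not taken from this system):

UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890 /mnt/dietpi_userdata ext4 noatime,lazytime,rw,nofail 0 0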
But it seems like this is what you finally did.
Probably this SSD has an issue with UAS? Try to get its USB ID:
lsusb
Let’s assume it is 152d:1567, then add the following to the end of the line (space-separated) in /boot/cmdline.txt:
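This addition is presumably the usb-storage.quirks kernel parameter, where the :u flag disables UAS for the device with the given vendor:product ID:

usb-storage.quirks=152d:1567:u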
Today I tried the lsusb command, but it doesn't show me any of the attached drives. It should show three devices:
micro SD Card used for boot
M.2 USB 3.0 256 GB drive attached to USB 3.0 port #1
Samsung 2 TB drive attached to USB port #2
> root@DietPi:~# lsusb
> Bus 002 Device 002: ID 152d:0578 JMicron Technology Corp. / JMicron USA Technology Corp. JMS578 SATA 6Gb/s
> Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
> Bus 001 Device 004: ID 046d:c534 Logitech, Inc. Unifying Receiver
> Bus 001 Device 003: ID 152d:0576 JMicron Technology Corp. / JMicron USA Technology Corp. Gen1 SATA 6Gb/s Bridge
> Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
> Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
But the drives are there and recognized:
> root@DietPi:~# lsblk
> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> sda 8:0 0 1,8T 0 disk
> └─sda1 8:1 0 875,6G 0 part
> sdb 8:16 0 238,5G 0 disk
> ├─sdb1 8:17 0 128M 0 part
> └─sdb2 8:18 0 238,3G 0 part
> mmcblk0 179:0 0 7,5G 0 disk
> ├─mmcblk0p1 179:1 0 128M 0 part /boot
> └─mmcblk0p2 179:2 0 7,4G 0 part /
Edit: Oops, I forgot that I won't see the SSDs themselves but only the USB interfaces they're connected to. I will check which one the 2 TB SSD is.
The two JMicron devices are USB-SATA adapters. You can (unmount +) detach one of the USB drives/docking stations to see which of the two is gone. Or you can just add both (it doesn't break anything) for testing:
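Presumably that would be a single quirks entry covering both JMicron IDs from the lsusb output above (comma-separated, with :u disabling UAS for each):

usb-storage.quirks=152d:0578:u,152d:0576:u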
We have seen issues with JMicron adapters in 2021. I thought it had been fixed by the RPi guys or with recent kernel/firmware versions. But I guess running the usb-storage.quirks suggestion from @MichaIng fixed it for you, right?