I am having similar errors, or at least I think so. Docker just stopped working overnight, and I've been trying to get it running again without success.
These are the logs I get:
> dietpi@zuble:~$ sudo systemctl daemon-reload && sudo systemctl start docker.service
> dietpi@zuble:~$ sudo journalctl -u docker
> Jan 10 21:53:14 zuble systemd[1]: Started Docker Application Container Engine.
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.862194122Z" level=debug msg="Listener created for HTTP on fd ()"
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.864240510Z" level=debug msg="Golang's threads limit set to 56070"
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.865498807Z" level=info msg="parsed scheme: \"unix\"" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.865574751Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.865642270Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.865684325Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.866261751Z" level=debug msg="metrics API listening on /var/run/docker/metrics.sock"
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.869233714Z" level=info msg="parsed scheme: \"unix\"" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.869609159Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.869884103Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.870075788Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.872631307Z" level=debug msg="Using default logging driver journald"
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.872906362Z" level=debug msg="processing event stream" module=libcontainerd namespace=plugins.moby
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.874004196Z" level=debug msg="[graphdriver] priority list: [btrfs zfs overlay2 fuse-overlayfs aufs overlay devicemapper vfs]"
> Jan 10 21:53:17 zuble dockerd[1478]: time="2023-01-10T21:53:17.885449101Z" level=error msg="[graphdriver] prior storage driver overlay2 failed: lstat /mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7: bad message"
> Jan 10 21:53:17 zuble dockerd[1478]: time="2023-01-10T21:53:17.888619675Z" level=debug msg="Cleaning up old mountid : start."
> Jan 10 21:53:17 zuble dockerd[1478]: failed to start daemon: error initializing graphdriver: lstat /mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7: bad message
> Jan 10 21:53:17 zuble systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
> Jan 10 21:53:17 zuble systemd[1]: docker.service: Failed with result 'exit-code'.
> dietpi@zuble:~$ sudo dockerd
> DEBU[2023-01-10T22:03:01.815256161Z] Listener created for HTTP on unix (/var/run/docker.sock)
> DEBU[2023-01-10T22:03:01.816946087Z] Golang's threads limit set to 56070
> INFO[2023-01-10T22:03:01.817911142Z] parsed scheme: "unix" module=grpc
> INFO[2023-01-10T22:03:01.817977012Z] scheme "unix" not registered, fallback to default scheme module=grpc
> DEBU[2023-01-10T22:03:01.818005661Z] metrics API listening on /var/run/docker/metrics.sock
> INFO[2023-01-10T22:03:01.818061401Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
> INFO[2023-01-10T22:03:01.818337846Z] ClientConn switching balancer to "pick_first" module=grpc
> INFO[2023-01-10T22:03:01.821360883Z] parsed scheme: "unix" module=grpc
> INFO[2023-01-10T22:03:01.821439605Z] scheme "unix" not registered, fallback to default scheme module=grpc
> INFO[2023-01-10T22:03:01.821498716Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
> INFO[2023-01-10T22:03:01.821533457Z] ClientConn switching balancer to "pick_first" module=grpc
> DEBU[2023-01-10T22:03:01.823928235Z] processing event stream module=libcontainerd namespace=plugins.moby
> DEBU[2023-01-10T22:03:01.824755605Z] Using default logging driver journald
> DEBU[2023-01-10T22:03:01.825117605Z] [graphdriver] priority list: [btrfs zfs overlay2 fuse-overlayfs aufs overlay devicemapper vfs]
> ERRO[2023-01-10T22:03:04.819504918Z] [graphdriver] prior storage driver overlay2 failed: lstat /mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7: bad message
> DEBU[2023-01-10T22:03:04.820647307Z] Cleaning up old mountid : start.
> failed to start daemon: error initializing graphdriver: lstat /mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7: bad message
I'm really out of my depth here.
Any idea how I can start digging for a solution?
Many thanks
I had to move the data off caixa1hdd, using the DietPi tool, in order to unmount the drive,
although I got errors similar to the previous Docker ones:
[ SUB1 ] DietPi Updating user data location >
[ INFO ] DietPi-Set_userdata | - From : /mnt/caixa1hdd/dietpi_userdata
[ INFO ] DietPi-Set_userdata | - To : /mnt/dietpi_userdata
[ INFO ] DietPi-Set_userdata | Please wait...
[ OK ] DietPi-Set_userdata | rm /mnt/dietpi_userdata
[ OK ] DietPi-Set_userdata | mkdir -p /mnt/dietpi_userdata
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/lower': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/work': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/diff': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/link': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/l/SXJVIUB2CGZSGTVBYWTWMCYIG4': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/26d39d4e30a29962fd0d7913bc96b219575449369338e185791126a716b96dab/committed': Bad message
[ OK ] DietPi-Set_userdata | Free space check: path=/mnt/dietpi_userdata | available=53267 MiB | required=2676 MiB
[ INFO ] DietPi-Set_userdata | Moving your existing data from /mnt/caixa1hdd/dietpi_userdata to /mnt/dietpi_userdata, please wait...
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/l/SXJVIUB2CGZSGTVBYWTWMCYIG4': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/26d39d4e30a29962fd0d7913bc96b219575449369338e185791126a716b96dab/committed': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/diff': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/link': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/work': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/lower': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7': Bad message
[FAILED] DietPi-Set_userdata | Failed to copy /mnt/caixa1hdd/dietpi_userdata/ to /mnt/dietpi_userdata.
[FAILED] DietPi-Set_userdata | Exited with error
sudo fsck -a /dev/sda1
fsck from util-linux 2.38
/dev/sda1 contains an errored filesystem, forced check.
/dev/sda1: inode 34876241 seems to contain garbage.
/dev/sda1: INCONSISTENCY UNEXPECTED; RUN fsck MANUALLY.
(i.e., no -a or -p options)
This is what I got.
Better to get a new disk and clone it before repairing, right?
I have caixa2hdd synced to caixa1hdd, but I only discovered the problem a few days after it happened, so shouldn't those same files be corrupted on hdd2 as well? I ask because I ran fsck on both units and only hdd1 appears to have problems.
Can I swap the two units?
What do you mean by “synced”? Did you clone it bit-by-bit, e.g. via dd? Then the errors are present on both. Or did you copy the content, e.g. with rsync? Then corrupted files are either skipped or their related metadata reset, and if it is not an RPi, the bootloader will be missing, so the system cannot boot from this drive.
> better get a new disk and clone before repair

Only if you intend to have a professional data recovery service try to restore more data than fsck is able to. Otherwise, the earlier you repair it via fsck the better, I'd say, to minimise the risk that the existing corruption causes more.
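In case it helps, here is a rough sketch of what the manual fsck run (without -a/-p, as the fsck output above demands) looks like. It is demonstrated on a throwaway ext4 image file so it is safe to run anywhere; on the real system you would stop Docker, unmount the drive, and point e2fsck at /dev/sda1 instead of the image.

```shell
# On the real system, roughly (paths/device from this thread):
#   sudo systemctl stop docker
#   sudo umount /mnt/caixa1hdd        # fsck must run on an unmounted filesystem
#   sudo e2fsck -f /dev/sda1          # forced interactive check, no -a/-p
# Safe demo below: a scratch ext4 image instead of a real disk.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 status=none  # 8 MiB scratch image
mkfs.ext4 -q -F "$img"                               # fresh ext4 filesystem on it
e2fsck -f -y "$img"                                  # forced full check; -y answers "yes" to repairs
```

With -y, e2fsck repairs everything it can without prompting; drop it if you want to review each fix interactively.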
My setup is the following:
/ + /boot in sda (ssd)
user-data is in /mnt/caixa1hdd/dietpi_userdata
dietpi-backup into /mnt/caixa1hdd/dietpi-backup
dietpi-sync from /mnt/caixa1hdd to /mnt/caixa2hdd/dietpi-sync
hdd1 is the one with the invalid checksum, and thus the one giving me the error: [graphdriver] prior storage driver overlay2 failed
What would be the steps to move /mnt/caixa2hdd/dietpi-sync/dietpi_userdata to /mnt/caixa1hdd/dietpi_userdata? That might work.
I formatted the drive that was giving the invalid checksum, hdd1, and ran rsync -a /hdd2 /hdd1 on another machine, so the previous content of hdd1 should be the same, except for the corrupted files, which rsync took care of.
I tried to boot the Pi, but SSH just hangs after asking for the passphrase, and I get no HDMI output either.