Docker not starting

I am having similar errors, or at least I think so. Docker just stopped working overnight at some point.
I've been trying to get it working again, without success.
These are the logs I get.

> dietpi@zuble:~$ sudo systemctl daemon-reload && sudo systemctl start docker.service
> dietpi@zuble:~$ sudo journalctl  -u docker
> Jan 10 21:53:14 zuble systemd[1]: Started Docker Application Container Engine.
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.862194122Z" level=debug msg="Listener created for HTTP on fd ()"
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.864240510Z" level=debug msg="Golang's threads limit set to 56070"
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.865498807Z" level=info msg="parsed scheme: \"unix\"" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.865574751Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.865642270Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.865684325Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.866261751Z" level=debug msg="metrics API listening on /var/run/docker/metrics.sock"
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.869233714Z" level=info msg="parsed scheme: \"unix\"" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.869609159Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.869884103Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.870075788Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.872631307Z" level=debug msg="Using default logging driver journald"
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.872906362Z" level=debug msg="processing event stream" module=libcontainerd namespace=plugins.moby
> Jan 10 21:53:14 zuble dockerd[1478]: time="2023-01-10T21:53:14.874004196Z" level=debug msg="[graphdriver] priority list: [btrfs zfs overlay2 fuse-overlayfs aufs overlay devicemapper vfs]"
> Jan 10 21:53:17 zuble dockerd[1478]: time="2023-01-10T21:53:17.885449101Z" level=error msg="[graphdriver] prior storage driver overlay2 failed: lstat /mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7: bad message"
> Jan 10 21:53:17 zuble dockerd[1478]: time="2023-01-10T21:53:17.888619675Z" level=debug msg="Cleaning up old mountid : start."
> Jan 10 21:53:17 zuble dockerd[1478]: failed to start daemon: error initializing graphdriver: lstat /mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7: bad message
> Jan 10 21:53:17 zuble systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
> Jan 10 21:53:17 zuble systemd[1]: docker.service: Failed with result 'exit-code'.
> dietpi@zuble:~$ sudo dockerd
> DEBU[2023-01-10T22:03:01.815256161Z] Listener created for HTTP on unix (/var/run/docker.sock) 
> DEBU[2023-01-10T22:03:01.816946087Z] Golang's threads limit set to 56070          
> INFO[2023-01-10T22:03:01.817911142Z] parsed scheme: "unix"                         module=grpc
> INFO[2023-01-10T22:03:01.817977012Z] scheme "unix" not registered, fallback to default scheme  module=grpc
> DEBU[2023-01-10T22:03:01.818005661Z] metrics API listening on /var/run/docker/metrics.sock 
> INFO[2023-01-10T22:03:01.818061401Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
> INFO[2023-01-10T22:03:01.818337846Z] ClientConn switching balancer to "pick_first"  module=grpc
> INFO[2023-01-10T22:03:01.821360883Z] parsed scheme: "unix"                         module=grpc
> INFO[2023-01-10T22:03:01.821439605Z] scheme "unix" not registered, fallback to default scheme  module=grpc
> INFO[2023-01-10T22:03:01.821498716Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
> INFO[2023-01-10T22:03:01.821533457Z] ClientConn switching balancer to "pick_first"  module=grpc
> DEBU[2023-01-10T22:03:01.823928235Z] processing event stream                       module=libcontainerd namespace=plugins.moby
> DEBU[2023-01-10T22:03:01.824755605Z] Using default logging driver journald        
> DEBU[2023-01-10T22:03:01.825117605Z] [graphdriver] priority list: [btrfs zfs overlay2 fuse-overlayfs aufs overlay devicemapper vfs] 
> ERRO[2023-01-10T22:03:04.819504918Z] [graphdriver] prior storage driver overlay2 failed: lstat /mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7: bad message 
> DEBU[2023-01-10T22:03:04.820647307Z] Cleaning up old mountid : start.             
> failed to start daemon: error initializing graphdriver: lstat /mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7: bad message

Honestly, I am out of my depth here.
Any idea how I can start digging for a solution?
Many thanks

Somehow the storage driver failed. I see you moved the DietPi user data to an external disk. What filesystem do you use?

lsblk -o name,fstype,label,size,ro,type,mountpoint,partuuid,uuid

And can you check for kernel error messages?

dmesg -l err,crit,alert,emerg

Yep, the OS/root is running on the SSD, the data is on one HDD, and the other HDD is a backup of the first…

dietpi@zuble:~$ lsblk -o name,fstype,label,size,ro,type,mountpoint,partuuid,uuid
NAME   FSTYPE LABEL   SIZE RO TYPE MOUNTPOINT     PARTUUID                             UUID
sda                  56.5G  0 disk
├─sda1 vfat           128M  0 part /boot          455d093c-01                          806D-D19A
└─sda2 ext4          56.4G  0 part /              455d093c-02                          9e1cb772-602b-4210-99bd-2eb47ea516a9
sdb                 931.5G  0 disk
└─sdb1 ext4         931.5G  0 part /mnt/caixa2hdd 5590c5b2-1384-4260-9934-8b46af93a657 9bbbdd94-6e31-456d-b5a1-a0f8ef14c53f
sdc                 931.5G  0 disk
└─sdc1 ext4         931.5G  0 part /mnt/caixa1hdd a8e52344-4507-4d80-9d54-7308ab5a7631 50a6eb09-1f8b-44ee-b1b3-9a0f95dc783a
dietpi@zuble:~$ dmesg -l err,crit,alert,emerg
[72210.684525] EXT4-fs error (device sdc1): ext4_lookup:1836: inode #34876247: comm find: iget: checksum invalid
[72210.684680] EXT4-fs error (device sdc1): ext4_lookup:1836: inode #34876247: comm find: iget: checksum invalid
[72210.841918] EXT4-fs error (device sdc1): ext4_lookup:1836: inode #34876244: comm find: iget: checksum invalid
[72210.842203] EXT4-fs error (device sdc1): ext4_lookup:1836: inode #34876244: comm find: iget: checksum invalid
[72210.842425] EXT4-fs error (device sdc1): ext4_lookup:1836: inode #34876241: comm find: iget: checksum invalid
[72210.842495] EXT4-fs error (device sdc1): ext4_lookup:1836: inode #34876241: comm find: iget: checksum invalid

It looks like this data flow wasn't the best idea, or it was just a mechanical accident :confused:

Try to unmount the disk and perform some checks.
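
For example, unmount first (mountpoint taken from your lsblk output above):

umount /mnt/caixa1hdd

and then, with the partition unmounted, run the check: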

fsck -a /dev/sdc1

I had to move the data off caixa1hdd, using the DietPi tool, in order to unmount it.

Although I got errors similar to the previous Docker ones:

[ SUB1 ] DietPi Updating user data location > 
[ INFO ] DietPi-Set_userdata |  - From : /mnt/caixa1hdd/dietpi_userdata
[ INFO ] DietPi-Set_userdata |  - To   : /mnt/dietpi_userdata
[ INFO ] DietPi-Set_userdata | Please wait...
[  OK  ] DietPi-Set_userdata | rm /mnt/dietpi_userdata
[  OK  ] DietPi-Set_userdata | mkdir -p /mnt/dietpi_userdata
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/lower': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/work': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/diff': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/link': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/l/SXJVIUB2CGZSGTVBYWTWMCYIG4': Bad message
du: cannot access '/mnt/caixa1hdd/dietpi_userdata/docker-data/overlay2/26d39d4e30a29962fd0d7913bc96b219575449369338e185791126a716b96dab/committed': Bad message
[  OK  ] DietPi-Set_userdata | Free space check: path=/mnt/dietpi_userdata | available=53267 MiB | required=2676 MiB
[ INFO ] DietPi-Set_userdata | Moving your existing data from /mnt/caixa1hdd/dietpi_userdata to /mnt/dietpi_userdata, please wait...
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/l/SXJVIUB2CGZSGTVBYWTWMCYIG4': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/26d39d4e30a29962fd0d7913bc96b219575449369338e185791126a716b96dab/committed': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/diff': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/link': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/work': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7-init/lower': Bad message
cp: cannot stat '/mnt/caixa1hdd/dietpi_userdata/./docker-data/overlay2/e6a944298c6f3b5d5c5802c663a23c3efb67b79d4b94f0837226196cb730f5e7': Bad message
[FAILED] DietPi-Set_userdata | Failed to copy /mnt/caixa1hdd/dietpi_userdata/ to /mnt/dietpi_userdata.
[FAILED] DietPi-Set_userdata | Exited with error

Edit:
I will check it outside the Pi.

You could disconnect all disks and boot your Pi off an SD card :wink:

Once online, just connect (not mount) the disk in question and perform some checks.
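
For example, once booted (the device name here is an assumption; confirm it with lsblk first):

lsblk -o name,fstype,size,mountpoint
fsck /dev/sdX1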

Easier to just shut down and plug it into my main machine :slight_smile:

sudo fsck -a /dev/sda1

fsck from util-linux 2.38
/dev/sda1 contains a file system with errors, check forced.
/dev/sda1: Inode 34876241 seems to contain garbage.

/dev/sda1: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
	(i.e., without -a or -p options)

This is what I got.
Better to get a new disk and clone it before repairing, right?

A backup is always recommended. As well, we could consult @MichaIng.

Thanks for the time, Joulinar, will wait then :slight_smile:

I have caixa2hdd synced to caixa1hdd, but I only found the problem a few days after it happened, so those same files should be corrupted on HDD2 as well, no?
Because I fsck'd both units and only HDD1 appears to have problems.
Can I swap the two units?

Not necessarily. Are both disks identical? I mean, content-wise?

What do you mean by “synced”? Did you clone it bit-by-bit via e.g. dd? Then the errors are contained on both. Or did you copy the content, like with rsync? Then corrupted files are either skipped or the related metadata reset, and if it is not an RPi, the bootloader will be missing, so the system cannot boot from this drive.
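
For illustration, the difference looks like this (device names and paths here are placeholders, not your actual ones):

dd if=/dev/sdX of=/dev/sdY bs=4M status=progress

clones the raw device bit-by-bit, carrying any filesystem corruption along with it, while

rsync -a /mnt/source/ /mnt/target/

copies at the file level, so unreadable files are reported and skipped rather than reproduced.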

> better get a new disk and clone before repair

Only if you consider a professional data recovery service to try to get more data restored than fsck is able to. Otherwise, the earlier you repair it via fsck the better, I’d say, minimising the risk that the existing corruption causes more.
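
Since fsck asked for a manual run, that would be something like this, on the unmounted partition (-f forces a full check, -y answers yes to the repair prompts):

e2fsck -f -y /dev/sda1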

Just got time to get back to it.

My setup is the following:
/ + /boot on sda (SSD)
user data in /mnt/caixa1hdd/dietpi_userdata
dietpi-backup to /mnt/caixa1hdd/dietpi-backup
dietpi-sync from /mnt/caixa1hdd to /mnt/caixa2hdd/dietpi-sync

HDD1 is the one with the invalid checksums, and thus the one giving me the error: [graphdriver] prior storage driver overlay2 failed

What would be the steps to move /mnt/caixa2hdd/dietpi-sync/dietpi_userdata into /mnt/caixa1hdd/dietpi_userdata? That might work.

What would be the goal? DietPi-Sync creates a copy of the existing data, so it should be the same data on both sides.

Yes, but the drive where the user data folder is located is corrupted, and the other drive, which holds the sync, is not.

My question is: how can I move caixa2hdd/dietpi-sync/dietpi_userdata into caixa1hdd/dietpi_userdata?

You can use rsync to copy the data back.
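
A minimal sketch, using the paths from your posts (double-check them before running; the trailing slashes matter to rsync):

rsync -a /mnt/caixa2hdd/dietpi-sync/dietpi_userdata/ /mnt/caixa1hdd/dietpi_userdata/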

So the -a (archive) flag, to preserve ownership and permissions?

I formatted the drive giving me the invalid checksums, HDD1, and did rsync -a /hdd2 /hdd1 on another machine, so the previous content of HDD1 should be the same, except for the corrupted files that rsync took care of.
I tried to boot the Pi, but SSH just hangs after asking for the passphrase, and I get no HDMI response either.

Any idea, @MichaIng?

Maybe the drive is dying.
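
One way to check its health, assuming smartmontools is installed (the device name is a placeholder):

sudo smartctl -a /dev/sdX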

So with the content of HDD1 replaced by that of HDD2, which shows no errors, it should boot without problems?
