Navidrome service doesn't start on boot

Creating a bug report/issue

Required Information

  • DietPi version
    G_DIETPI_VERSION_CORE=8
    G_DIETPI_VERSION_SUB=20
    G_DIETPI_VERSION_RC=1
    G_GITBRANCH='master'
    G_GITOWNER='MichaIng'
    G_LIVE_PATCH_STATUS[0]='applied'
  • Distro version | bookworm
  • Kernel version | Linux DietPi 6.1.21-v8+ #1642 SMP PREEMPT Mon Apr 3 17:24:16 BST 2023 aarch64 GNU/Linux
  • Architecture | arm64
  • SBC model | RPi 4

Additional Information (if applicable)

  • Software title | Navidrome
  • Was the software title installed freshly or updated/migrated? Freshly installed
  • Can this issue be replicated on a fresh installation of DietPi? Yes

Steps to reproduce

  1. Fresh install of DietPi
  2. Fresh install of Navidrome

Expected behaviour

Navidrome service running normally after boot

Actual behaviour

Navidrome won’t start right after the system boots. Here is the output of journalctl -u navidrome:

Aug 12 08:40:34 DietPi systemd[1]: navidrome.service: Starting requested but asserts failed.
Aug 12 08:40:34 DietPi systemd[1]: Assertion failed for navidrome.service - navidrome (DietPi).

If I then restart Navidrome service, it works properly.

I am no Linux expert, any help is welcome :slight_smile:
Thanks!

Short question: do you have any network share mounted that Navidrome should access?

No, Navidrome data is located on a physically connected hard drive.
And I have no network share mounted.

nearly the same :rofl:

Can you share the output of the following:

cat /etc/fstab
lsblk -o name,fstype,label,size,ro,type,mountpoint,partuuid,uuid

Haha alright :slight_smile:

Here you are:

root@DietPi:~# cat /etc/fstab
# You can use "dietpi-drive_manager" to setup mounts.
# NB: It overwrites and re-creates physical drive mount entries on use.
#----------------------------------------------------------------
# NETWORK
#----------------------------------------------------------------


#----------------------------------------------------------------
# TMPFS
#----------------------------------------------------------------
tmpfs /tmp tmpfs size=1922M,noatime,lazytime,nodev,nosuid,mode=1777
tmpfs /var/log tmpfs size=50M,noatime,lazytime,nodev,nosuid

#----------------------------------------------------------------
# MISC: ecryptfs, vboxsf, glusterfs, mergerfs, bind, Btrfs subvolume
#----------------------------------------------------------------


#----------------------------------------------------------------
# SWAP SPACE
#----------------------------------------------------------------


#----------------------------------------------------------------
# PHYSICAL DRIVES
#----------------------------------------------------------------
PARTUUID=0b153577-02 / ext4 noatime,lazytime,rw 0 1
PARTUUID=0b153577-01 /boot vfat noatime,lazytime,rw 0 2
UUID=310de04f-5d3a-4a7b-86bf-3132840a70cf /mnt/hdd1 ext4 noatime,lazytime,rw,nofail,noauto,x-systemd.automount

root@DietPi:~# lsblk -o name,fstype,label,size,ro,type,mountpoint,partuuid,uuid
NAME        FSTYPE LABEL   SIZE RO TYPE MOUNTPOINT PARTUUID                             UUID
sda                        1.8T  0 disk
└─sda1      ext4           1.8T  0 part /mnt/hdd1  e1aa30ed-f559-4c00-8fc0-0eb764c058f6 310de04f-5d3a-4a7b-86bf-3132840a70cf
mmcblk0                   28.9G  0 disk
├─mmcblk0p1 vfat           128M  0 part /boot      0b153577-01                          D33A-B557
└─mmcblk0p2 ext4          28.8G  0 part /          0b153577-02                          076d28bd-d8f2-47da-8e22-28b4493b8787

For testing, can you adjust the /etc/fstab line for the drive to use the following mount options only:

noatime,lazytime,rw,nofail,auto

How does it look after a reboot?
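For reference, the adjusted /etc/fstab entry would then look like this (a sketch based on the UUID shown in the lsblk output above; the noauto,x-systemd.automount pair is replaced by a plain auto mount at boot):

```
UUID=310de04f-5d3a-4a7b-86bf-3132840a70cf /mnt/hdd1 ext4 noatime,lazytime,rw,nofail,auto
```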

Well now Navidrome starts fine!
So is it related to HDD mount options? I don’t think I have modified those (I used dietpi-drive_manager to mount it).

No, it’s not something you did. It seems to be a behaviour of systemd on Debian Bookworm. We already had a similar case and are looking for ways to work around it. For the time being, don’t use the drive manager, as it will reset the mount options :wink:

Ping @MichaIng

Understood. Thanks a lot for your help! :smiley:

It is slightly different from the other case. There, the issue was the EnvironmentFile= directive in the [Service] block, which did not trigger the systemd automount. Here it is AssertPathExists= in the [Unit] block. However, similarly, checking whether a path exists should trigger the automount, as it does when running e.g. ls /mnt/dietpi_userdata/navidrome or stat /mnt/dietpi_userdata/navidrome.

Here we cannot add a touch command like in the other case. We added AssertPathExists= because ReadWritePaths= is used as well: if the latter fails because the directory "really" does not exist, it throws a very confusing error message that gives no clue unless you already know what it is about. So we added AssertPathExists=, which is checked first. However, it seems not to show much more information either? At least the word "Assertion" in the error message can be traced back to the systemd service.

The question is now whether just removing AssertPathExists= works, or whether ReadWritePaths= fails as well and does not trigger the automount.

@KosmosMonk
Can you test this:

mkdir -p /etc/systemd/system/navidrome.service.d
echo -e '[Unit]\nAssertPathExists=' > /etc/systemd/system/navidrome.service.d/test.conf

Then revert the /etc/fstab change and reboot.
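The override works because assigning an empty value to a list-type systemd directive such as AssertPathExists= clears everything set in the main unit file. A minimal self-contained sketch of the drop-in mechanism, using a hypothetical temporary directory in place of /etc/systemd/system:

```shell
# Stand-in for /etc/systemd/system/navidrome.service.d (temporary dir for illustration)
d=$(mktemp -d)
mkdir -p "$d/navidrome.service.d"
# An empty assignment resets any AssertPathExists= set in the shipped unit file
printf '[Unit]\nAssertPathExists=\n' > "$d/navidrome.service.d/test.conf"
cat "$d/navidrome.service.d/test.conf"
```

On the real system, the override takes effect after a systemctl daemon-reload (or the reboot suggested above); systemctl cat navidrome should then show the drop-in appended below the main unit file.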

That worked; no error in the Navidrome logs.

My Navidrome is not restarting; please find attached a screenshot of the status.

This is an 11-month-old issue. Please create a new one.