systemctl: Failed to connect to bus: No medium found

Creating a bug report/issue

I have searched the existing open and closed issues

Required Information

  • DietPi version
    G_DIETPI_VERSION_CORE=9
    G_DIETPI_VERSION_SUB=2
    G_DIETPI_VERSION_RC=1
    G_GITBRANCH='master'
    G_GITOWNER='MichaIng'

  • Distro version
    bookworm 0

  • Kernel version
    Linux DietRPI 6.1.21-v8+ #1642 SMP PREEMPT Mon Apr 3 17:24:16 BST 2023 aarch64 GNU/Linux

  • Architecture
    arm64

  • SBC model
    RPi 4 Model B (aarch64)

  • Power supply used
    5.1V 3A

  • SD card used
    SSD Samsung EVO870

Additional Information (if applicable)

I have created a systemd service to control a Noctua fan via a Python script.
When I run the command:
systemctl --user status noctua
I receive the following error:
Failed to connect to bus: No medium found

Extra details

I have followed the troubleshooting in the following thread:

I installed the D-Bus user session by running apt install dbus-user-session.
It worked for a moment, but it doesn't anymore.
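For reference, a minimal sketch of the environment check that usually explains this error (assuming the default /run/user layout; the variable values here are the conventional ones, not confirmed from your system):

```shell
# "No medium found" typically means systemctl --user cannot find the
# per-user D-Bus socket. These variables are normally set by pam_systemd
# at login, but are missing after "su" or in stripped-down shells.
# Re-exporting them (as the logged-in user, not root) is a common workaround:
export XDG_RUNTIME_DIR="/run/user/$(id -u)"
export DBUS_SESSION_BUS_ADDRESS="unix:path=${XDG_RUNTIME_DIR}/bus"
echo "$DBUS_SESSION_BUS_ADDRESS"
# then retry:
#   systemctl --user status noctua
```

If the socket does not exist at all, `sudo loginctl enable-linger dietpi` keeps the `systemd --user` instance (and its bus) running even without an active login session.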

Thank you for your time!

Usually it should be:

systemctl status <service name>

I followed this tutorial, which uses systemctl --user.

systemctl status returns:

● DietRPI
    State: running
    Units: 285 loaded (incl. loaded aliases)
     Jobs: 0 queued
   Failed: 0 units
    Since: Thu 1970-01-01 01:00:02 CET; 54 years 2 months ago
  systemd: 252.22-1~deb12u1
   CGroup: /
           ├─init.scope
           │ └─1 /sbin/init
           ├─system.slice
           │ ├─containerd.service
           │ │ ├─ 600 /usr/bin/containerd
           │ │ ├─1185 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 10fd90f55e75259da9447f57cac398fd42aeb34000008597384cc0417894440c -address /run/containerd/containerd.sock
           │ │ ├─1231 /usr/bin/containerd-shim-runc-v2 -namespace moby -id e4c703deeabd557eb4e5a5fb2af58eefef5482f88d686335b1f29524e9193a15 -address /run/containerd/containerd.sock
           │ │ └─1383 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 3322faf3d0315541d30bdbcd2d891a5f38f92112bcab4a04008929744b484631 -address /run/containerd/containerd.sock
           │ ├─cron.service
           │ │ └─418 /usr/sbin/cron -f
           │ ├─dbus.service
           │ │ └─419 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
           │ ├─docker-10fd90f55e75259da9447f57cac398fd42aeb34000008597384cc0417894440c.scope
           │ │ └─1298 cloudflared --no-autoupdate tunnel run
           │ ├─docker-3322faf3d0315541d30bdbcd2d891a5f38f92112bcab4a04008929744b484631.scope
           │ │ ├─1432 /package/admin/s6/command/s6-svscan -d4 -- /run/service
           │ │ ├─1783 s6-supervise s6-linux-init-shutdownd
           │ │ ├─1785 /package/admin/s6-linux-init/command/s6-linux-init-shutdownd -c /run/s6/basedir -g 3000 -C -B
           │ │ ├─1806 s6-supervise frontend
           │ │ ├─1807 s6-supervise nginx
           │ │ ├─1808 s6-supervise backend
           │ │ ├─1810 s6-supervise s6rc-fdholder
           │ │ ├─1812 s6-supervise s6rc-oneshot-runner
           │ │ ├─1828 /package/admin/s6/command/s6-ipcserverd -1 -- /package/admin/s6/command/s6-ipcserver-access -v0 -E -l0 -i data/rules -- /package/admin/s6/command/s6-sudod -t 30000 -- /package/admin/s6-rc/command/s6-rc-oneshot-run -l ../.. --
           │ │ ├─1987 "nginx: master process nginx"
           │ │ ├─1989 bash ./run backend
           │ │ ├─1994 node --abort_on_uncaught_exception --max_old_space_size=250 index.js
           │ │ ├─2014 "nginx: worker process"
           │ │ ├─2015 "nginx: worker process"
           │ │ ├─2016 "nginx: worker process"
           │ │ ├─2017 "nginx: worker process"
           │ │ └─2018 "nginx: cache manager process"
           │ ├─docker-e4c703deeabd557eb4e5a5fb2af58eefef5482f88d686335b1f29524e9193a15.scope
           │ │ ├─1303 httpd -DFOREGROUND
           │ │ ├─1616 httpd -DFOREGROUND
           │ │ ├─1617 httpd -DFOREGROUND
           │ │ └─1619 httpd -DFOREGROUND
           │ ├─docker.service
           │ │ ├─ 628 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
           │ │ ├─1213 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.18.0.3 -container-port 443
           │ │ ├─1274 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 81 -container-ip 172.18.0.3 -container-port 81
           │ │ └─1317 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.18.0.3 -container-port 80
           │ ├─ifup@eth0.service
           │ │ └─509 dhclient -4 -v -i -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -I -df /var/lib/dhcp/dhclient6.eth0.leases eth0
           │ ├─ssh.service
           │ │ └─612 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"
           │ ├─system-getty.slice
           │ │ └─getty@tty1.service
           │ │   └─602 /sbin/agetty -o "-p -- \\u" --noclear - linux
           │ ├─systemd-journald.service
           │ │ └─155 /lib/systemd/systemd-journald
           │ ├─systemd-logind.service
           │ │ └─424 /lib/systemd/systemd-logind
           │ └─systemd-udevd.service
           │   └─udev
           │     └─176 /lib/systemd/systemd-udevd
           └─user.slice
             └─user-1000.slice
               ├─session-6.scope
               │ ├─2201 "sshd: dietpi [priv]"
               │ ├─2207 "sshd: dietpi@pts/0"
               │ ├─2208 -bash
               │ ├─2228 sudo systemctl status
               │ ├─2229 sudo systemctl status
               │ ├─2230 systemctl status
               │ └─2231 "(pager)"
               └─user@1000.service
                 ├─app.slice
                 │ └─noctua.service
                 │   ├─636 /usr/bin/sudo /usr/bin/python3 /home/dietpi/Public/Services/RPI-scripts/noctua/noctuaFan.py
                 │   └─644 /usr/bin/python3 /home/dietpi/Public/Services/RPI-scripts/noctua/noctuaFan.py
                 └─init.scope
                   ├─603 /lib/systemd/systemd --user
                   └─604 "(sd-pam)"

But if I try
systemctl status noctua it returns a "service not found" error. The unit was created under the per-user manager (systemd --user), so the system manager (PID 1) does not know about it.

Maybe creating the service at the user level was a mistake and I should redo it without the --user flag.
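If you do move it to the system level, a minimal system unit might look like this. This is only a sketch: the script path is taken from the systemctl status tree above, and the [Install] target is an assumption.

```ini
[Unit]
Description=Noctua fan control (sketch)
After=multi-user.target

[Service]
# A system unit already runs as root, so the sudo wrapper seen in the
# old ExecStart is unnecessary; add User=dietpi instead if you want it
# to run unprivileged.
ExecStart=/usr/bin/python3 /home/dietpi/Public/Services/RPI-scripts/noctua/noctuaFan.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Saved as /etc/systemd/system/noctua.service, it would be activated with `sudo systemctl daemon-reload && sudo systemctl enable --now noctua` and queried with plain `systemctl status noctua` (no --user).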

Just give it a try

It seems to fix my issue.
Thanks for the advice!