HDD not spinning down after installing another one

Creating a bug report/issue

Required Information

  • DietPi version | latest
  • Distro version | bullseye
  • Kernel version | Linux DietPi 6.0.13-meson64 #22.11.2 SMP PREEMPT Sun Dec 18 16:52:19 CET 2022 aarch64 GNU/Linux
  • SBC model | Odroid HC4

Hi guys, I will try to explain as best I can what has happened and what I have tried without success.

Until two weeks ago, I had two drives installed on my Odroid HC4: one HDD (sdb1, a Seagate Barracuda) and one SSD (sda1). The SSD held a swap partition and the user data (not the rootfs), and the HDD only held data downloaded by Transmission. For the HDD I had set a 10-minute idle spindown in dietpi-drive_manager, and even though this drive doesn’t support APM, it went into standby perfectly.

When I received the new HDD (a Seagate IronWolf, sda1), I replaced the SSD with it after moving the user data back to the microSD card and deleting the swap partition. Then I went to set up idle spindown, and it didn’t work: neither of the two drives goes into standby (the new one doesn’t support APM either).

I started looking for solutions online.
I found this post https://andrejacobs.org/linux/spinning-down-seagate-hard-drives-in-ubuntu-20-04/ and decided to install the openSeaChest tools to set the idle spindown on the new drive. Before anything else, I disabled hdparm in dietpi-drive_manager.
I installed the openSeaChest tools as described in the post and checked that the new drive had no idle timers set, so I set one, and now it goes into standby. I then did the same for the old drive, but nothing: it doesn’t go into standby, or if it does, something wakes it up.

Then I installed hd-idle to try another way of spinning down the old drive, and again nothing… If it goes to sleep, something wakes it up…

I searched again for something that could be waking the drive up and found this: https://serverfault.com/a/834902. I installed auditd, ran the same commands as described there, and the result suggests that hdparm is the process that wakes the drive.
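For reference, the approach from that answer boils down to an audit watch on the block device node. A minimal rules fragment might look like this (the file path and device name are examples, not taken from the original post):

```
# /etc/audit/rules.d/disk-wake.rules (example): log read/write/attribute
# access to the old drive's device node
-w /dev/sdb -p rwa -k disk-wake
```

After the drive wakes up, `ausearch -k disk-wake` lists the matching records, including the executable that touched the device.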

I don’t know what else to do. Do you know another tool to find out what wakes the drive, or which processes are using the drive in real time?

It’s driving me nuts, because the board is in the living room, and hearing the drive go into standby and wake up all the time is a bit annoying.

Thank you so much as always.

Maybe @MichaIng could have a look

APM isn’t supported by most modern drives anymore, indeed. It’s basically a dying concept, as power management is nowadays handled internally by the drive hardware. But spindown/standby is/should be a dedicated feature.

Does it go into standby/spindown directly when you run this command?

hdparm -S 120 /dev/sda

Adjust /dev/sda to match the actual device name.
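You can verify the resulting power state with `hdparm -C`. A small sketch (the device name and the parsing helper are examples, not part of hdparm itself):

```shell
# Helper to extract the state word from `hdparm -C` output (pure text parsing)
parse_state() { awk -F': *' '/drive state is/ {print $2}'; }

# Query the drive's current power state; /dev/sda is an example device name.
# Typical states reported: "active/idle", "standby", "sleeping".
if command -v hdparm >/dev/null 2>&1; then
    hdparm -C /dev/sda 2>/dev/null | parse_state
fi
```

Running this before and after the spindown timeout shows whether the drive actually entered standby.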

So if I understand correctly, the automatic spindown works for the new drive after you configured it with Seagate’s own tool openSeaChest, and for the old drive with hd-idle? But in both cases they wake up again quickly?

and the result suggests that hdparm is the process that wakes the drive.

hdparm is not a process but a one-time tool which gets or sets drive attributes and then finishes. So it is very strange that it is the one waking up the drive, unless someone or something actively called the command. What software do you have installed? You can also use htop to see the list of active processes. Probably something is regularly calling hdparm to read drive information and show it on some GUI or web interface.

Sorry @MichaIng for not writing you earlier,

hdparm -S 120 /dev/sda

I have tested that command with a shorter timeout, and it works fine with both drives, but the old one keeps waking up after a few minutes.

So if I understand correctly, the automatic spindown works for the new drive after you configured it with Seagate’s own tool openSeaChest, and for the old drive with hd-idle? But in both cases they wake up again quickly?

That’s correct, but only the old drive wakes up again.

What software do you have installed? You can also use htop to see the list of active processes. Probably something is regularly calling hdparm to read drive information and show it on some GUI or web interface.

Software installed…
(screenshot of the installed software list, 2023-01-22)

And outside dietpi-software: traccar, hd-idle and auditd

Process list:

 Running Processes

Real memory: 3.69 GiB total / 2.6 GiB free / 755.45 MiB cached   Swap space: 0 bytes total / 0 bytes free

ID	Owner	Size	Command
1266 	traccar 	352.54 MiB 	/opt/traccar/jre/bin/java -jar tracker-server.jar conf/traccar.xml
9334 	homeassistant 	339.26 MiB 	/home/homeassistant/.pyenv/versions/3.10.9/bin/python3.10 /home/homeassistant/.p ...
1494 	plex 	67.69 MiB 	/usr/lib/plexmediaserver/Plex Media Server
1459 	debian-transmission 	52.65 MiB 	/usr/bin/transmission-daemon -f --log-error
2035 	plex 	41.51 MiB 	Plex Plug-in [com.plexapp.system] /usr/lib/plexmediaserver/Resources/Plug-ins-91 ...
1506 	vaultwarden 	34.36 MiB 	/opt/vaultwarden/vaultwarden
1433 	root 	30.02 MiB 	php-fpm: master process (/etc/php/7.4/fpm/php-fpm.conf)
1557 	cloudflared 	28.85 MiB 	/usr/local/bin/cloudflared proxy-dns --port 5053 --upstream ...
1269 	root 	28.51 MiB 	/usr/bin/python3 /usr/bin/fail2ban-server -xf start
1372 	root 	28 MiB 	/usr/bin/perl /usr/share/webmin/miniserv.pl /etc/webmin/miniserv.conf
1421 	root 	22.73 MiB 	/usr/sbin/smbd --foreground --no-process-group
1435 	www-data 	18.66 MiB 	php-fpm: pool www
7317 	root 	17.43 MiB 	/usr/sbin/smbd --foreground --no-process-group
1436 	www-data 	16.94 MiB 	php-fpm: pool www
1434 	www-data 	16.93 MiB 	php-fpm: pool www
1437 	www-data 	16.92 MiB 	php-fpm: pool www
1400 	root 	14.52 MiB 	/usr/sbin/nmbd --foreground --no-process-group
909 	root 	13.03 MiB 	/usr/libexec/udisks2/udisksd
1429 	root 	12.33 MiB 	/usr/sbin/smbd --foreground --no-process-group
1 	root 	11.97 MiB 	/sbin/init
2279 	plex 	11.96 MiB 	/usr/lib/plexmediaserver/Plex Tuner Service /usr/lib/plexmediaserver/Resources/T ...
1427 	root 	11.88 MiB 	/usr/sbin/smbd --foreground --no-process-group
1284 	pihole 	9.55 MiB 	/usr/bin/pihole-FTL -f
825 	root 	7.91 MiB 	/lib/systemd/systemd-journald
901 	root 	6.53 MiB 	/usr/libexec/accounts-daemon
908 	root 	6.32 MiB 	/lib/systemd/systemd-logind
950 	root 	6.01 MiB 	/usr/libexec/polkitd --no-debug
854 	root 	5.91 MiB 	/lib/systemd/systemd-udevd
9788 	root 	5.55 MiB 	-bash
1453 	www-data 	5.18 MiB 	nginx: worker process
1428 	root 	4.8 MiB 	/usr/sbin/smbd --foreground --no-process-group
1454 	www-data 	4.44 MiB 	nginx: worker process
902 	messagebus 	3.76 MiB 	/usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd- ...
1720 	root 	3.63 MiB 	/bin/bash /usr/sbin/fancontrol
1529 	root 	2.57 MiB 	/usr/sbin/cron -f
9783 	root 	2.43 MiB 	/usr/sbin/dropbear -p 1022 -W 65536 -s -g
11506 	root 	2.36 MiB 	ps --cols 2048 -eo user:80,ruser:80,group:80,rgroup:80,pid,ppid,pgid,pcpu,rss,ni ...
1455 	www-data 	2.34 MiB 	nginx: worker process
1789 	root 	1.8 MiB 	/sbin/auditd
1276 	root 	1.44 MiB 	/sbin/agetty -o -p -- \u --noclear tty1 linux
1456 	www-data 	1012 KiB 	nginx: worker process
1452 	root 	1008 KiB 	nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
1312 	xrdp 	604 KiB 	/usr/sbin/xrdp
11505 	root 	516 KiB 	sh -c ps --cols 2048 -eo user:80,ruser:80,group:80,rgroup:80,pid,ppid,pgid,pcpu, ...
1279 	root 	480 KiB 	/usr/sbin/xrdp-sesman
11410 	root 	428 KiB 	sleep 10
936 	root 	184 KiB 	/usr/sbin/dropbear -p 1022 -W 65536 -s -g
2 	root 	0 kB 	[kthreadd]
3 	root 	0 kB 	[rcu_gp]
4 	root 	0 kB 	[rcu_par_gp]
5 	root 	0 kB 	[slub_flushwq]
6 	root 	0 kB 	[netns]
10 	root 	0 kB 	[mm_percpu_wq]
11 	root 	0 kB 	[rcu_tasks_kthread]
12 	root 	0 kB 	[rcu_tasks_trace_kthread]
13 	root 	0 kB 	[ksoftirqd/0]
14 	root 	0 kB 	[rcu_preempt]
15 	root 	0 kB 	[migration/0]
16 	root 	0 kB 	[cpuhp/0]
17 	root 	0 kB 	[cpuhp/1]
18 	root 	0 kB 	[migration/1]
19 	root 	0 kB 	[ksoftirqd/1]
22 	root 	0 kB 	[cpuhp/2]
23 	root 	0 kB 	[migration/2]
24 	root 	0 kB 	[ksoftirqd/2]
27 	root 	0 kB 	[cpuhp/3]
28 	root 	0 kB 	[migration/3]
29 	root 	0 kB 	[ksoftirqd/3]
32 	root 	0 kB 	[kdevtmpfs]
33 	root 	0 kB 	[inet_frag_wq]
36 	root 	0 kB 	[kauditd]
37 	root 	0 kB 	[oom_reaper]
38 	root 	0 kB 	[writeback]
39 	root 	0 kB 	[kcompactd0]
40 	root 	0 kB 	[ksmd]
41 	root 	0 kB 	[khugepaged]
42 	root 	0 kB 	[cryptd]
43 	root 	0 kB 	[kintegrityd]
44 	root 	0 kB 	[kblockd]
45 	root 	0 kB 	[blkcg_punt_bio]
46 	root 	0 kB 	[tpm_dev_wq]
47 	root 	0 kB 	[edac-poller]
48 	root 	0 kB 	[devfreq_wq]
49 	root 	0 kB 	[watchdogd]
51 	root 	0 kB 	[kworker/0:1H-mmc_complete]
69 	root 	0 kB 	[kswapd0]
70 	root 	0 kB 	[ecryptfs-kthread]
81 	root 	0 kB 	[kthrotld]
190 	root 	0 kB 	[xenbus_probe]
320 	root 	0 kB 	[spi0]
377 	root 	0 kB 	[vfio-irqfd-clea]
434 	root 	0 kB 	[bch_btree_io]
435 	root 	0 kB 	[bcache]
436 	root 	0 kB 	[bch_journal]
498 	root 	0 kB 	[mld]
499 	root 	0 kB 	[kworker/3:1H-kblockd]
500 	root 	0 kB 	[ipv6_addrconf]
501 	root 	0 kB 	[kstrp]
507 	root 	0 kB 	[zswap1]
508 	root 	0 kB 	[zswap1]
509 	root 	0 kB 	[zswap-shrink]
510 	root 	0 kB 	[kworker/u9:0]
627 	root 	0 kB 	[kworker/0:2-mm_percpu_wq]
629 	root 	0 kB 	[card0-crtc0]
631 	root 	0 kB 	[irq/17-dw_hdmi_top_irq]
632 	root 	0 kB 	[irq/17-ff600000.hdmi-tx]
671 	root 	0 kB 	[irq/20-ffe09000.usb]
683 	root 	0 kB 	[irq/21-ffe05000.sd]
685 	root 	0 kB 	[irq/23-ffe05000.sd cd]
692 	root 	0 kB 	[mmc_complete]
699 	root 	0 kB 	[charger_manager]
732 	root 	0 kB 	[kworker/1:1H-kblockd]
742 	root 	0 kB 	[stmmac_wq]
755 	root 	0 kB 	[ata_sff]
756 	root 	0 kB 	[scsi_eh_0]
757 	root 	0 kB 	[scsi_tmf_0]
758 	root 	0 kB 	[scsi_eh_1]
759 	root 	0 kB 	[scsi_tmf_1]
761 	root 	0 kB 	[kworker/2:3-mm_percpu_wq]
764 	root 	0 kB 	[kworker/0:3-wg-crypt-wg0]
784 	root 	0 kB 	[jbd2/mmcblk0p1-8]
785 	root 	0 kB 	[ext4-rsv-conver]
925 	root 	0 kB 	[rc0]
935 	root 	0 kB 	[irq/36-panfrost-mmu]
937 	root 	0 kB 	[irq/37-panfrost-job]
938 	root 	0 kB 	[pan_js]
939 	root 	0 kB 	[pan_js]
940 	root 	0 kB 	[pan_js]
941 	root 	0 kB 	[irq/38-vdec]
979 	root 	0 kB 	[sugov:0]
1159 	root 	0 kB 	[irq/25-mdio_mux-0.0:00]
1305 	root 	0 kB 	[wg-crypt-wg0]
1464 	root 	0 kB 	[jbd2/sdb1-8]
1465 	root 	0 kB 	[ext4-rsv-conver]
1480 	root 	0 kB 	[jbd2/sda1-8]
1481 	root 	0 kB 	[ext4-rsv-conver]
3393 	root 	0 kB 	[kworker/2:2H-kblockd]
3485 	root 	0 kB 	[kworker/3:0-events]
5760 	root 	0 kB 	[kworker/1:0-cgroup_destroy]
6479 	root 	0 kB 	[kworker/u8:0-events_power_efficient]
6869 	root 	0 kB 	[kworker/2:0-cgroup_destroy]
7824 	root 	0 kB 	[kworker/3:2-mm_percpu_wq]
7920 	root 	0 kB 	[kworker/u8:2-events_power_efficient]
9261 	root 	0 kB 	[kworker/1:0H-kblockd]
9288 	root 	0 kB 	[kworker/3:0H]
9660 	root 	0 kB 	[kworker/u8:3-ext4-rsv-conversion]
9703 	root 	0 kB 	[kworker/0:2H]
9776 	root 	0 kB 	[kworker/1:2-events]
9943 	root 	0 kB 	[kworker/2:1H]
10060 	root 	0 kB 	[kworker/3:2H]
10801 	root 	0 kB 	[kworker/0:0H]
10836 	root 	0 kB 	[kworker/2:0H]
10885 	root 	0 kB 	[kworker/1:1]
10886 	root 	0 kB 	[kworker/1:3-events]
10902 	root 	0 kB 	[kworker/2:1-events]
11215 	root 	0 kB 	[kworker/2:2]
11451 	root 	0 kB 	[kworker/u8:1-events_power_efficient]

I tested stopping all the services and then sent the command

hdparm -Y /dev/sdb1

to put the drive into standby, and there were no wake-ups for 2 hours.

Can you think of anything else I can test?

Thank you!

So it’s one of these services, given that the drive spins up again once all services are started? -Y sends the drive into the lowest possible power state. -S would be the better test, since it puts the drive into the same standby state that is applied after the spindown timeout. -S 120 applies a 10-minute standby timeout, but regardless of which value you apply here, the drive is also sent into standby immediately by this command.

Do you have your dietpi_userdata stored on this drive? Otherwise I’d bet on Webmin. Loop through the individual services:

systemctl stop webmin
hdparm -S 120 /dev/sda
# ... wait and check whether drive still wakes up
systemctl start webmin
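That loop can be sketched over several candidate services at once. The service names below are examples taken from this system’s process list; the commands are printed rather than executed so the sequence can be reviewed before running it as root:

```shell
# Candidate services that might poll the drives; adjust to your setup.
candidates="webmin smbd nmbd plexmediaserver transmission-daemon"

# Dry run: print the test sequence for each service. Replace `echo` with
# direct execution (as root) once the sequence looks right.
for svc in $candidates; do
    echo "systemctl stop $svc"
    echo "hdparm -S 120 /dev/sda    # re-arm the 10-minute spindown"
    echo "# ...wait past the timeout, then check: hdparm -C /dev/sda"
    echo "systemctl start $svc"
done
```

Whichever iteration lets the drive stay in standby points at the culprit.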

Well, it seems the problem is Webmin. I’ve stopped it, and I don’t know yet whether the disk still wakes up, because right now I am not at home, but I have installed fatrace to check access to the disk, and before stopping Webmin, every five minutes or less I was getting this line:

df(130814): CO /mnt/odroidhdd

I don’t know what df is, but the process is not listed in htop, or I don’t have time to see it.

After stopping Webmin I don’t get any lines at all. I’ll have to check at home whether the disk is in standby mode.
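For reference, a fatrace run like mine can be reproduced along these lines (a sketch; `-c` limits fatrace to the mount of the current directory, `-t` adds timestamps, and the mount point is the one from this thread):

```shell
# Watch file accesses on the data drive's mount (needs root and fatrace).
if command -v fatrace >/dev/null 2>&1 && cd /mnt/odroidhdd 2>/dev/null; then
    fatrace -c -t -s 600 || true    # -s 600: stop after 10 minutes
fi

# A reported line such as "df(130814): CO /mnt/odroidhdd" means: process df
# with PID 130814 opened (O) and closed (C) a file on /mnt/odroidhdd.
line='df(130814): CO /mnt/odroidhdd'
proc=${line%%(*}    # strip everything from the first "(" -> process name
echo "$proc"
```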

I found in an OMV thread that Webmin has a setting under Webmin configuration → background status collection, which looks like this:

I am going to test disabling that setting and check again whether there is any access to the disk.

If not, I will uninstall Webmin for the moment, but it’s strange that this only happens to one drive… and it never happened before. One thing particular to this disk is that I have a free-space warning on it.

I will update.

Update: I’ve uninstalled Webmin, and the disk still doesn’t go to sleep… The new one goes into standby fine.

Today I used fatrace and auditd again; fatrace didn’t register any access to the drive, but auditd shows the same as days before:

time->Wed Jan 25 10:22:49 2023
type=PROCTITLE msg=audit(1674638569.122:845): proctitle=68647061726D002D59002F6465762F736731
type=PATH msg=audit(1674638569.122:845): item=0 name="/dev/sg1" inode=378 dev=00:05 mode=020660 ouid=0 ogid=6 rdev=15:01 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
type=CWD msg=audit(1674638569.122:845): cwd="/mnt/odroidhdd"
type=SYSCALL msg=audit(1674638569.122:845): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd623bbe0 a2=800 a3=0 items=1 ppid=27924 pid=42598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=4294967295 comm="hdparm" exe="/usr/sbin/hdparm" subj=unconfined key=(null)

I had set up hdparm again yesterday to test whether it worked after uninstalling Webmin, and maybe that is why auditd shows this. I am going to run another test with hdparm disabled in dietpi-drive_manager.

df is a command-line utility that shows mounted filesystems; just run it on a console to see. However, it doesn’t usually spin up drives, at least not that I’ve ever observed: the filesystem size/usage information is stored somewhere in memory and does not need to be read from the hardware.

hdparm, however, is indeed a utility that reads information from the underlying physical drive, so most commands this tool supports do wake up drives. The question is which software calls hdparm regularly. There is no service or anything pre-installed on DietPi which would do that, and hdparm doesn’t ship anything like that by itself. It is a plain command-line utility as well, with some udev rules to apply the spindown timeout at boot and to put the drive to sleep/wake it up when/after hibernating the system.

Could you run your fatrace/auditd watch on /usr/sbin/hdparm instead, to see which process is executing this file?
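A minimal sketch of such a watch, assuming auditd is installed (the key name is arbitrary):

```shell
key=hdparm-exec
if command -v auditctl >/dev/null 2>&1; then
    # Watch executions (-p x) of the hdparm binary and tag them with our key
    # (needs root; fails harmlessly otherwise).
    auditctl -w /usr/sbin/hdparm -p x -k "$key" 2>/dev/null || true
    # ...wait for the drive to wake up, then see who executed it.
    # -i resolves numeric IDs and syscall numbers to readable names; the
    # ppid= field in the SYSCALL record points at the calling process.
    ausearch -k "$key" -i 2>/dev/null | tail -n 40 || true
fi
```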