Would it be possible to extend the cloudshell display to include stats from more than two drives? I have a 32 GB eMMC, a 16 GB USB flash drive for backup and a 120 GB SSD, but only the first two appear on the display. My coding skills are not up to the job.
Yep, no worries, I'll take a look. It will have to wait until after v135 is released (work is stacked up), but here's the ticket for tracking: https://github.com/Fourdee/DietPi/issues/582
Thanks Fourdee, I look forward to it. The reason for the request is that the SSD somehow became disconnected/unmounted, and it took a while for me to realise that was the cause of some errors.
Bizarre. Did this occur while the device was powered on, or after a reboot? Might be worth checking dmesg after this occurs again; the last entries should give some info, or paste them here and I'll take a look.
Not really sure when it happened, but the three devices I have running DietPi are on 24/7 (Banana Pi M1, Raspberry Pi 3, XU4) and are only rebooted if necessary. I'll watch for it happening again and check the logs.
It happened again - the SSD failed. I suspect overheating: the XU4 was simultaneously updating Emby databases on several devices, although those files are on the eMMC, not the SSD. Anyhow, the SSD felt very warm. A software reboot dropped into emergency mode. Rather than attach a monitor, I powered down, left it to cool, then started again and all is well.
This is from dmesg - it goes on like this, so here is just a selection:
Had to reserve v136 for some bug fixes, so I've nudged your request to v137 and will take a look in the next few days.
HDD temps:
You might be able to read the HDD temps by running:
apt-get install hddtemp
hddtemp /dev/sdb
It reads the SMART info directly from the drive, so if it's not available, possibly the cloudshell USB <> SATA converter doesn't support it. In which case, you'll have to go by touch, or an IR temp gun.
Usually, operating temps for SSDs are 0-70°C; is it possible yours is running hotter than this?
Ready for testing.
Had to change a lot of the source code and how it saves settings, so this will reset your current DietPi-Cloudshell settings.
Please run the following commands to update (copy and paste all into term):
rm /DietPi/dietpi/.dietpi-cloudshell
cat << _EOF_ > /etc/systemd/system/dietpi-cloudshell.service
[Unit]
Description=dietpi-cloudshell on main screen
[Service]
Type=forking
StandardOutput=tty
# These are run from dietpi-cloudshell and autostart
# setterm fails to apply power saving unless it originates from a tty
ExecStartPre=/bin/bash -c 'setterm --term linux --blank 0 --powersave off'
ExecStartPre=/bin/bash -c 'tput civis'
ExecStart=/bin/bash -c '/DietPi/dietpi/dietpi-cloudshell 1 &'
ExecStop=/bin/bash -c 'setterm -reset'
ExecStop=/bin/bash -c '/DietPi/dietpi/func/dietpi-notify 0 DietPi-Cloudshell terminated, have a nice day!'
[Install]
WantedBy=multi-user.target
_EOF_
systemctl daemon-reload
wget https://raw.githubusercontent.com/Fourdee/DietPi/testing/dietpi/dietpi-cloudshell -O /DietPi/dietpi/dietpi-cloudshell
Reboot the system, then run dietpi-cloudshell to configure. You'll be looking for the "Storage" settings option to set the mount paths, then enable the additional scenes as needed.
Some initial observations: before the USB2/3 screen there is some garbage text on screen for a very brief period - I can't read it and I doubt I'd be quick enough to photograph it.
I have the SSD mounted as /mnt/ssd and it is the second device in cloudshell - sometimes the screen says "mount not active" and on other cycles it appears as it should (the same applies to the flash/rootfs storage). A few reboots later and the situation isn't any different. I don't remember seeing "mount not active" in v7.
Can I also request the option to give drives identifying names instead of "USB Storage 1" - such as SSD, HD, flash drive etc.?
The pictures show the blue line extending onto the second line - minor, I know, but I'm sure you'll want it to look perfect - and the "mount not active" message.
The next shows some screen garbage - an error that is quickly overwritten.
Thanks for the update - the line is fixed as you say, I have changed the drive names, and the garbage text doesn't happen any more for some reason.
I still get the "mount not active" message for the eMMC and the SSD, but not consistently. A reboot doesn't make any difference. On some cycles they are reported OK, on others not.
"Mount not active" is triggered when df | grep <mount path> (e.g. /mnt/usb_1) returns no results. So either the drive is being temporarily dismounted, or something else is going on.
Let me monitor my XU4 over the next few days and see if I can replicate.
Next time you see "mount not active" and you're connected to the system over SSH, run df -h and see if your drive is listed.
I'm still intermittently getting this message, and all drives are present with df -h. Sometimes the rootfs drive is reported as "Mount not active", which is a bit of a paradox.