Just wanted to share this with the DietPi devs to see what they think.
I recently purchased a Rock64 (1GB RAM) and am very pleased with the price and performance compared to the Raspberry Pi 3B+ and also against my 3.2 GHz Intel dual-core (Haswell) Pentium file server.
I've been doing various tests on the Rock64 using the DietPi image, Ayufan's Debian Stretch minimal image and Armbian's Ubuntu Bionic minimal image.
One of the tests performed was to set up all 5 environments with Docker and run an SABnzbd container within Docker.
In all 5 environments (including the RPi 3B+ and my Intel file server), download performance of the SABnzbd Docker container is equal (I bandwidth-limited them all to 30Mbps and they all achieved that consistently throughout the download). However, the biggest difference between them is the I/O performance when unrar unpacks a download. This is what I get (note: exact same download in all tests):
1) Intel dual-core Pentium file server running at 3.2 GHz (SSD connected via SATA III) running Debian Stretch minimal - a 5GB rar set unpacks in 58 seconds.
2) Raspberry Pi 3B+ (SSD connected via USB2) running DietPi - a 5GB rar set unpacks in 5 minutes 30 seconds.
3) Rock64 (SSD connected via USB3) running Ayufan's Debian Stretch minimal - a 5GB rar set unpacks in 3 minutes.
4) Rock64 (SSD connected via USB3) running Armbian Bionic minimal - a 5GB rar set unpacks in 3 minutes.
5) Rock64 (SSD connected via USB3) running DietPi - a 5GB rar set unpacks in 1 minute 57 seconds!
I'm very impressed with the Rock64 USB3 performance, but am puzzled as to why it is so much faster under DietPi than with Ayufan's Stretch image and the Armbian Ubuntu Bionic image. All three seem to run the same kernel version.
What do you think the difference could be? I have tried my best to ensure all of these environments are the same and the same SSD drive has been used for all tests.
I’m actually now considering ditching my Intel file server and getting a dual USB3 dock for attaching 2 x large HDDs to the rock64 running DietPi.
I have also compiled snapraid for the Rock64, and I get 81MB/s when doing a parity sync using a USB3-connected SSD (data drive) and a USB3-connected HDD (parity drive), both attached to the Rock64 via a USB3 hub.
I am also wondering about that huge difference. Let's go through some possible reasons:
Make sure that you unrar onto disk and not accidentally into RAM, as DietPi by default uses a RAMdisk for the /DietPi and /var/log directories. /tmp as well, but that should be a RAMdisk on nearly all Linux images. But I guess you are sure about this already.
I also guess you stopped any background services that could interfere, e.g. cron executions?
We mount drives by default with the async and noatime options. The first (async) leads to disk writes being held in RAM until a certain block size is reached, but this should be the default on nearly all Linux distros as well. The second (noatime) is not included in the mount defaults and prevents file access times from being saved. This reduces file access overhead a bit, though you will then only see "last modified" and not "last accessed" times. It would be worth checking what the defaults are on the Debian/Ubuntu images and whether that actually has an influence on unpacking, where only new files should be created. Either way, these settings should never have that big an influence.
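To compare, the active mount options can be read and even changed on a running system; the mount point below is just an example, not necessarily yours:

findmnt -no OPTIONS /mnt/ssd          # show the options the SSD is actually mounted with
mount -o remount,noatime /mnt/ssd     # remount with noatime to compare unpack times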
As you used docker, the SABnzbd + unrar version/setup should be the same.
One last thing: if cron was up, on ARMbian a cron job running every minute is in place that sets the priority of every network file transfer process to real-time: /etc/cron.d/make_nas_processes_faster. The cron job itself should not have any impact (just a very quick scan and task), but I am not sure how the raised process priorities might affect other running tasks with lower priority. However, on an idle system this should not influence unpacking speed in a measurable way either.
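For illustration, a job of that kind typically looks roughly like the sketch below; the process list here is an assumption for the example, not a verbatim copy of Armbian's file:

# /etc/cron.d/make_nas_processes_faster (illustrative sketch only)
# Every minute, give known NAS daemons real-time I/O priority:
* * * * * root for p in $(pgrep -x 'smbd|nfsd|sshd'); do ionice -c1 -p "$p"; done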
So finally I have no clue what might have such a huge impact. Check the above and also check htop for other running background processes that might interfere.
Definitely extracting to disk and not RAM (I only have a 1GB RAM rock64 and the extracted file is 4.4GB in size).
I stopped and disabled any unused services before testing.
In all cases, the USB3 SSD was mounted using defaults,noatime.
Same sabnzbd docker image used for all tests.
I have to admit I had not checked cron jobs before running the tests. I thought they would just be minimal stuff like logrotate, maybe some time sync stuff, etc.
Following on from the tests in SABnzbd, I thought I would simplify the testing somewhat and just ran the test below, using the non-free version of unrar directly from the command line (Docker removed from the equation), on the DietPi image and the Ayufan Stretch minimal image. (The Armbian SD card has been reused in another SBC today, so I didn't test Armbian again.)
The test results below make me wonder how accurate the job reporting is for download speed and unpack speed in sabnzbd.
4.4GB, 90-file rar set unpacked using UNRAR 5.30 beta 2 freeware to a USB3-connected SSD, from and to the same folder on the SSD. Tests carried out straight after first boot.
SSD using ext4 filesystem and mounted using defaults,noatime
Command used for all tests: time unrar x file001.rar
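To make repeated runs comparable, the kernel page cache can also be dropped before each run; a small sketch, assuming the rar set lives at the example path /mnt/ssd/test and the commands run as root:

cd /mnt/ssd/test                    # example path to the rar set
sync                                # flush pending writes to disk
echo 3 > /proc/sys/vm/drop_caches   # drop the page cache so each run starts cold
time unrar x file001.rar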
Nice results. Jep, these look more realistic. Also, that both results for each system are so close is an indication that they were accurate and not disturbed. Yey, DietPi still wins, even if it's just close.
The reason, when doing a 4.4 GB unrar on 1 GB RAM, could indeed be the lower idle RAM usage, allowing slightly more caching, and perhaps a tiny bit more free CPU time as well, due to some default system services being disabled/masked on DietPi (dbus, NTP, additional TTYs and some others).
CPU at 100% during unpack?
Did the task actually fill RAM, at least with cache (yellow bar in htop)?
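Both can be checked from a second shell while the unpack runs, e.g.:

free -h      # the buff/cache column shows how much RAM is used as page cache
vmstat 1     # us/sy columns show CPU saturation, b/wa show tasks blocked on I/O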
Ah, one more thing: what about the swap file? DietPi pre-creates one, which should be 1 GB on a 1 GB RAM system, so 2 GB overall is assured. But an actively used swap file might slow down parallel unpacking onto the same disk.
Swappiness is set to 1 on DietPi, which leads to the swap file being used only as a last resort. On default systems it is used more/earlier, theoretically allowing more cache use (of otherwise unused RAM), but producing disk I/O sooner as well. Not sure about the pros/cons in combination with unpacking.
=> So the swap file could be taken out of the equation as well, by disabling it on all systems before benchmarking, at least if RAM even came close to filling up and swap was used.
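For example (both the value and the swap-off reset on reboot):

cat /proc/sys/vm/swappiness   # current value (DietPi sets 1)
swapoff -a                    # disable all swap for the duration of the benchmark
swapon -a                     # re-enable afterwards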
As I seem to have the Rock64 testing bug at the moment, I thought I would try the same unrar test again, same SSD etc., but this time on a clean install of Arch Linux ARM (which uses unrar 5.61 and kernel 4.18.12, with no swap and no services running).
Arch Linux ARM results:
real 1m21.725s
user 0m18.597s
sys 0m24.845s
32 seconds faster this time!
Maybe unrar 5.61 is making the difference, or maybe the USB3 drivers (or filesystem caching mechanics) improved significantly between kernel 4.4 and kernel 4.18? Or maybe Arch Linux ARM runs the Rock64 at 1.5 GHz by default (rather than 1.3 GHz)? Not sure how to check this.
I bet it’s the newer kernel version, I remember that there was something improved about disk I/O, perhaps the USB3 drivers on top of that.
But it could have other (supporting) reasons, as Arch is a really different distro, while ARMbian, Ayufan Stretch, Raspbian and DietPi are all based on Debian.
Indeed, on dietpi.com it's listed with 1.3 GHz, while on the Arch Linux ARM page it's listed as 1.5 GHz. In any official specifications I could not find anything so far…
Very strange; we just run it on defaults and do not change anything about the CPU clock. So if it's really 1.3 GHz on DietPi while 1.5 GHz on a default Arch install, something must have been done. Would be interesting to find out; perhaps we can then offer overclocking for the Rock64 as well.
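For checking, the configured clocks can be read via the standard cpufreq sysfs interface (the paths below are the usual ones on ARM SBCs):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq   # current clock in kHz
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq   # maximum supported clock in kHz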
An indication/result of higher clocks should of course be higher temps. Would be interesting to check as well:
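e.g. via the standard thermal sysfs node:

cat /sys/class/thermal/thermal_zone0/temp   # SoC temperature in millidegrees Celsius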
Further feedback on this whilst testing with Arch Linux ARM:
Tried doing some scp copying of data from the Rock64 to my file server, and it bombed out part way through with I/O errors when reading from the USB3-connected SSD.
I compiled snapraid for Arch Linux ARM and then tried to do a parity sync, and that failed part way through with I/O errors when reading from the USB3-connected SSD.
I switched back to DietPi and did another snapraid sync, and I get I/O errors at the same point. So it looks like either the SSD is on its last legs (it is very old now) or it is not getting enough voltage to power itself through the USB3 port.
I need to find another mains powered enclosure to move the SSD into to rule out the USB port power issue.
Moved the SSD to a mains powered enclosure and the I/O error issue has gone away. So clearly the SSD needs more power than the rock64 can provide, regardless of distro.
snapraid sync performance is also better under Arch Linux ARM: I get 91MB/s when doing a parity sync from the USB3 SSD (data drive) to the USB3 HDD (parity drive), both connected to a USB3 hub on the Rock64's USB3 port.
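For anyone wanting to reproduce the setup, a minimal snapraid.conf for one data drive plus one parity drive looks roughly like this (mount points are examples, not the ones used above); a parity sync is then just "snapraid sync":

# /etc/snapraid.conf (minimal sketch, example paths)
parity /mnt/parity/snapraid.parity
content /mnt/parity/snapraid.content
content /var/snapraid/snapraid.content
data d1 /mnt/data/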
Ah jep, this is mostly the case. Always attach SSDs/HDDs with a dedicated power supply to an SBC. The USB power is not sufficient in most cases and/or decreases stability.
A USB port can mostly only power a USB stick and the like reliably.
I had to splice a 5VDC power supply into the external drive on my Nextcloud machine, otherwise it wouldn't spin up and it would crash my SBC. The RPi has a hack to make it output more power, but it's easier to cut a cable and splice power into it, or to use a dedicated power supply for the drive. That provided proper power and is much more stable.
On the RPi, the default max current on all USB ports together(!) is 0.6A (=3W); with the max USB power setting (enabled by default on DietPi), it's increased to 1.2A (=6W).
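On the RPi this is the max_usb_current flag in the boot configuration (already set by DietPi):

# /boot/config.txt
max_usb_current=1   # raise the combined USB current limit from 0.6A to 1.2A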
This might be different on the Rock64, but still: even if the USB ports provided enough power, the PSU of the board might be insufficient.
This is a Rock64 with a recent Armbian image (there I added the 1.4 GHz cpufreq OPP, but we did not allow the 1.5 GHz setting since too many instability reports occurred). With ayufan images you can dynamically load a DT overlay for the higher clock speeds, and since DietPi is just a modified userland on top of ayufan's work, it should work there exactly the same (for details, do a web search for 'ayufan DT overclock' or something like that).
Trashed SSD performance is quite common when the drive is powered by the SBC; the reason is usually undervoltage and not undercurrent (on almost all SBC USB ports there are current limiters in action exceeding the commonly known 500mA for USB2 or 900mA for USB3 – the ROCK64 allows 650mA on each USB2 port and 950mA on the USB3 port, while an RPi 3 uses 1.2A for all 4 USB2 ports combined, and a NanoPi M4, for example, has one global 2A current limiter for all 4 USB3 ports – you always need to study the schematics).
But as already said: usually it's undervoltage (cable/contact resistance between PSU and board, and again between board and disk) causing the problems, and not limited current (same on the RPi 3, where 1.2A is set by default – compare with 'vcgencmd get_config int' output – but due to polyfuses and Micro USB powering, the voltage available to USB peripherals often drops below 4.5V, and then the majority of external SSDs get into trouble; the majority of 2.5" HDDs already have trouble with less than 4.75V).
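i.e. something like the following on an RPi (the grep is just a convenience; the flag appears in the integer config dump when set):

vcgencmd get_config int | grep -i usb   # look for max_usb_current=1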