Pihole & Nextcloud: Reducing CPU load

I don’t think so. I installed via dietpi-software: Pihole, Unbound, Nextcloud, Wireguard, which leads to the following packages being installed right now:

114 Nextcloud
81 LLSP
82 LLMP
92 Certbot
172 Wireguard
87 Sqlite
88 MariaDB
91 Redis
17 Git
130 Python3

Hmm, since it’s an RPi 4, the shared bus for USB and Ethernet can’t be the reason either.

Which filesystem do you use on the USB drive?

Strange also that I/O is only visible on the SD card. Nextcloud data and database are both definitely on the USB drive? I’m not aware of any disk writes during downloads; that would be very bad behaviour. I can try to test this tomorrow, since my personal Nextcloud is set up the same way, only on an RPi 2.

ok, I did some testing and it seems lighttpd is creating some kind of cache before the actual download happens. At least if I download files using a web browser, chunks are stored in /var/cache/lighttpd/uploads beforehand. Yes I know, it’s a folder called uploads, but it actually buffers data ahead of a download as well. This would explain the high I/O on your SD card, as this folder is still located there. Putting it into memory should make the download available much faster.

At least in my test, a 4GB file became available in a short time.
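If you want to watch this yourself, a quick loop on the cache folder should be enough (just a sketch; the interval and tools are my choice):

watch -n 1 'du -sh /var/cache/lighttpd/uploads'   # folder size should grow while lighttpd prepares the download

The size should drop again once the download has been served.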



root@DietPi:~# df -Th | grep hdd
/dev/sda2      ext4      1.8T  527G  1.3T  29% /mnt/hdd



At least the dietpi_userdata should be on the HDD (if I understand correctly).

root@DietPi:~# ls /mnt/hdd/dietpi_userdata/
Music  Pictures  Video  downloads  mysql  nextcloud_data

nextcloud_data contains folders for the users that use Nextcloud (and the actual files), and mysql seems to be database-related :smiley: So I think everything is on the HDD.

I’m downloading via the client, but I don’t think it makes a difference.

Can you tell me how to do this?

What I don’t understand, if this is the problem, is: why am I seemingly the only one with it? I’d think my setup is kinda… basic, as I only use the default dietpi installations without any changes (as far as I remember).

No, I also run Nextcloud on an RPi 4 4GB and I also have problems with large files (1 GB and bigger), but I gave up and just don’t store such big files anymore.
When you search the web for the “big file download” problem, you see we are not the only ones :wink:
Most of the time I read about the PHP limits and the 32-bit OS limitation. But on GitHub I found an issue with another solution (they claim the streamer is buggy).

As far as I can see, an admin of this forum has already requested a pull about this issue, but it looks like it’s not implemented yet.
But you can try the fix yourself:

The fix seems to be a one-liner, basically:

In lib/private/Streamer.php, exchange

public function __construct(IRequest $request, int $size, int $numberOfFiles)

for

public function __construct(IRequest $request, float $size, int $numberOfFiles)
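If you prefer not to edit the file by hand, a small sed should do the same (only a sketch; the path assumes the default DietPi Nextcloud location, so adjust it if yours differs, and keep a backup first):

cp /var/www/nextcloud/lib/private/Streamer.php /var/www/nextcloud/lib/private/Streamer.php.bak   # backup before touching it
sed -i 's/int $size, int $numberOfFiles/float $size, int $numberOfFiles/' /var/www/nextcloud/lib/private/Streamer.php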

You would need to add the following line to /etc/fstab if you’d like to test having the lighttpd upload folder located in RAM:

tmpfs /var/cache/lighttpd/uploads tmpfs size=2G,noatime,lazytime,nodev,nosuid,mode=1777

Once rebooted, this should create a 2GB temp file system (half of your memory). Don’t worry, memory is only used when data is actually stored there. While downloading a larger file, you should be able to watch the memory usage grow. I hope the desktop client works the same way as the web browser.
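To verify the tmpfs after the reboot, something like this should be enough (a minimal check; the exact output will differ):

df -h /var/cache/lighttpd/uploads   # should show a 2.0G tmpfs mounted on this path
watch -n 1 free -m                  # used memory should grow while a download is being prepared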

This did not help, sadly. I also found lots of threads about problems with big files, but (as you said) most of them have to do with 32-bit and the PHP limits. I found not one where the whole DNS resolving crashes.


This actually fixes the problem! Things happen as you said: RAM use goes up to about 1GB (I assume the whole 900MB file gets put into RAM), the download starts, and there are no crashes. After the download, the RAM is free again. No crashes, no problems. I restored the CPU affinity for php-fpm just in case, and it still works fine. I’m really happy I can finally use the cloud to the fullest! :smiley: Thank you very much!

ok, it’s not a perfect solution, as you are limited to a file size of 2GB, since that is the size of the directory. Not sure how it will behave if you try to download larger files :roll_eyes:

I just uploaded a 10GB file with no problems.

Downloading:
RAM went up to about 2GB and the download started, again with no crashes. While downloading, used RAM got lower and lower. Once the 2GB were downloaded, the download stopped. I had a 2GB temp file in my local folder, which was deleted after a short while. I tried it twice, same behaviour both times. No visible error messages (Edit: “Connection closed” appears in the Nextcloud client). Looks like I can’t get files >= 2GB out of the cloud.

While this isn’t a huge problem for now, I wonder if there is a fix for this - Can the temp folder on the Pi be put onto the HDD? Or was it there before? Would this be bad for the life of the HDD?

Or could the temp folder be changed that it works with bigger files?

That’s basically what I stated above. The current download is limited to files smaller than 2GB. Uploading works differently and goes to your HDD directly. Of course, you could try to link the folder /var/cache/lighttpd/uploads to your HDD. Afterwards you would need to test how it behaves from a performance point of view.
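If you wanted to go that symlink route instead of the tmpfs, it could look roughly like this (just a sketch; the HDD path is a placeholder and the tmpfs line from above would have to be removed from /etc/fstab first):

systemctl stop lighttpd
umount /var/cache/lighttpd/uploads   # only if the tmpfs from above is still mounted
mv /var/cache/lighttpd/uploads /mnt/path/to/HDD/lighttpd_uploads
ln -s /mnt/path/to/HDD/lighttpd_uploads /var/cache/lighttpd/uploads
systemctl start lighttpd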

But I guess I found the perfect solution for your case :smiley:

You could keep the tmpfs as it is now. It will mainly serve all downloads up to 2GB; they will be stored in chunks inside memory beforehand. For larger files, we simply add another temp folder on your HDD to cover download chunks once our folder in memory fills up. For this you just need to adjust the web server configuration file

nano /etc/lighttpd/lighttpd.conf

Search for the parameter server.upload-dirs (somewhere near the beginning) and adjust the value as follows:

server.upload-dirs          = ( "/var/cache/lighttpd/uploads", "/mnt/path/to/HDD/tmp" )

Save and leave the file. Now we can create the temp folder on the HDD:

mkdir -p /mnt/path/to/HDD/tmp
chown www-data:www-data /mnt/path/to/HDD/tmp
systemctl restart lighttpd.service

If all goes well, your web server should be up again. On a large download, you should see chunks being stored in /var/cache/lighttpd/uploads first. Once that fills up, it should continue inside /mnt/path/to/HDD/tmp. Have a look at how performance develops once chunks are stored on the HDD.
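To watch the overflow happening, something like this should work (just a sketch; keep the second path in sync with whatever you put into lighttpd.conf):

watch -n 1 'du -sh /var/cache/lighttpd/uploads /mnt/path/to/HDD/tmp'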

The basic idea came from this forum post: https://redmine.lighttpd.net/boards/3/topics/9819

Thank you so much again. This solution works flawlessly.

Both directories get filled immediately once a download is scheduled, and the files are removed while the download proceeds. Performance did not take a hit (I’ve downloaded about 1GB of the HDD-stored parts so far). And, most importantly, no crashes of DNS resolving.

Just out of interest: what would happen if I tried to download a file that’s larger than 2GB plus whatever’s left on my HDD? Would I get an error, or would it just crash?

Usually it should work without issue if you download larger files. Just give it a try and watch performance.

Hmm, but I would expect the directory located in memory to be filled first, and the HDD to be used only afterwards. At least that’s how it behaved on my test system :thinking:

I currently don’t have a 1.5TB file around, so I can’t really test it for now :smiley: But I don’t think this will be a problem at all, since I always have a look at the usage, so this was just out of interest :slight_smile:

Both folders were definitely filled while the file was being prepared. Maybe it’s different because I used the client?

I’m mainly happy it works fine for now. I still had the download of the 10GB file queued when I started my PC a few minutes ago, and I didn’t even notice. It’s such a quality-of-life improvement, incredible :smiley:

There should be no need to test with a 1.5TB file. I guess a file bigger than 2GB should be sufficient, as that would already fully occupy the temp folder we created in memory.

Anyway, good that it’s working for you now :slight_smile:

Nice that multiple upload directories can be defined. We could even add this to dietpi-software, using either a dedicated tmpfs or /tmp, plus /mnt/dietpi_userdata/lighttpd_tmp or so.
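Purely as an illustration of what such a default could look like (nothing is implemented yet; the paths are simply the ones suggested above):

server.upload-dirs          = ( "/tmp", "/mnt/dietpi_userdata/lighttpd_tmp" )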

I still wonder what the reason for this is. For uploads, it makes sense not to store/overwrite the final destination with incomplete data in case of a connection loss/cancellation, but for downloads I see no reason why data is copied from disk to disk before it is served. Chunks could be stored in internal memory directly.

Not sure whether this is specific to Lighttpd or the same with other web servers, and whether it’s specific to how Nextcloud serves downloads or the same for regular downloads from a served directory in /var/www.

I don’t think this is specific to Nextcloud. It looks like a behaviour of Lighttpd. That means we would need to test with each available web server how it behaves when it comes to downloading/uploading larger files.