What could further help would be to move the temp upload folder of the web server to your HDD. That might improve I/O.
Would that be “tempdirectory” in /var/www/nextcloud/config/config.php? It’s not in there yet, but the Nextcloud docs say that it’s the parameter to “Override where Nextcloud stores temporary files.”
I’d just add
'tempdirectory' => '/mnt/hdd/tmp',
and then it should use my external HDD as the temp file location? (/mnt/hdd is the mount point for my HDD)
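For reference, this is how I’d expect the full entry to look inside the $CONFIG array (syntax as in the sample config; I assume the directory has to exist and be writable by www-data):
// /var/www/nextcloud/config/config.php (fragment)
$CONFIG = array (
  // ...existing settings...
  'tempdirectory' => '/mnt/hdd/tmp',
);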
I did some reading and it looks like this will not really solve anything. https://github.com/nextcloud/server/issues/19682
At least in my test, uploads are stored in chunks inside /mnt/dietpi_userdata/nextcloud_data/admin/uploads:
root@DietPi3:~# ls -latr /mnt/dietpi_userdata/nextcloud_data/admin/uploads/web-file-upload-2c01c5374f030cfb937fa89d31ba2a99-1650278500775
total 296968
drwxr-xr-x 3 www-data www-data 4096 Apr 18 12:41 ..
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 713031680
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 723517440
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 734003200
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 744488960
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 754974720
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 765460480
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 775946240
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 786432000
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 796917760
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 807403520
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 817889280
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 828375040
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:48 838860800
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:49 849346560
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:49 859832320
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:49 870318080
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:49 880803840
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:49 891289600
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:49 901775360
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:49 912261120
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:49 922746880
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:49 933232640
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:49 943718400
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:50 954204160
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:50 964689920
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:50 975175680
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:50 985661440
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:50 996147200
-rw-r--r-- 1 www-data www-data 10485760 Apr 18 12:51 1006632960
drwxr-xr-x 2 www-data www-data 4096 Apr 18 12:51 .
root@DietPi3:~#
How much memory does your RPi have?
There is a tmp upload directory for PHP, used (AFAIK) when uploading files via the web interface (not when done via the Nextcloud clients). That is set to /tmp, which is a tmpfs, so moving it to disk would likely worsen things, and it has no effect when just browsing the Nextcloud interface.
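You can check which directory PHP currently uses for this (upload_tmp_dir is the standard PHP ini directive; if it is empty, PHP falls back to the system default, usually /tmp):
php -i | grep upload_tmp_dir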
EDIT: Ah whoops, you found that Nextcloud-internal upload dir already. I wasn’t sure whether this is used by Nextcloud as an override for the PHP tmp upload dir, but it seems so. I wanted to find this in the docs but only found part_file_in_storage, which defaults to true and again states it differently: https://docs.nextcloud.com/server/stable/admin_manual/configuration_server/config_sample_php_parameters.html
Confusing.
Sounds like the PHP temp path needs to be changed?
It’s a Pi 4; htop lists it at 3.75G. Even when loading huge files, the memory is never fully used.
Hmm, at least for me uploads go to /mnt/dietpi_userdata/nextcloud_data/admin/uploads, where they are stored until the upload is complete. Afterwards they get assembled into the target file at the target location.
Let me link myself, as I’m too lazy to type things twice. This is basically the same as the other topic:
https://dietpi.com/forum/t/nextcloud-slow-upload-and-freezing-when-uploading-big-files/6518/12
Copying my edit from above:
EDIT: Ah whoops, you found that Nextcloud-internal upload dir already. I wasn’t sure whether this is used by Nextcloud as an override for the PHP tmp upload dir, but it seems so. I wanted to find this in the docs but only found part_file_in_storage, which defaults to true and again states it differently: https://docs.nextcloud.com/server/stable/admin_manual/configuration_server/config_sample_php_parameters.html
Confusing.
It’s a Pi 4; htop lists it at 3.75G. Even when loading huge files, the memory is never fully used.
What is the largest file you are trying to upload? And is it a single user who is uploading large files, or are there multiple users who will do this? Because one idea would be to move the upload folder into memory, but there you are limited by the physical RAM size. It might be an option if the files you upload will not be bigger than 2 GB.
On 32-bit images, the max upload size is limited to 2 GiB anyway. Although with the now-chunked uploads I’m not 100% sure whether this is still true. See the Nextcloud docs about this: https://docs.nextcloud.com/server/stable/admin_manual/configuration_files/big_file_upload_configuration.html
The info box there says that chunks are used by the Nextcloud clients, not for web interface uploads.
Just to be clear: uploading is no problem, just downloading.
To be sure, I tried uploading a 3.46 GB file via the Nextcloud desktop app for Windows (I moved the file into a folder and waited). lighttpd and php-fpm shoot up from time to time, but the internet still works fine, and the upload goes smoothly, too. Load stays <3.00.
For downloading, I right-click the file in the folder and choose “Always keep on this device”. Doing this on a ~900 MB file leads to the behaviour described in this thread. This happens on several devices, even if there are no other uploads/downloads at the same time.
Do you use a reverse proxy?
I don’t think so. I installed via dietpi-software: Pi-hole, Unbound, Nextcloud, WireGuard, which leads to the following packages installed right now:
114 Nextcloud
81 LLSP
82 LLMP
92 Certbot
172 Wireguard
87 Sqlite
88 MariaDB
91 Redis
17 Git
130 Python3
Hmm, since it’s an RPi 4, the shared bus for USB and Ethernet (an issue on older models) is also not a reason.
Which filesystem do you use on the USB drive?
Strange also that I/O is only visible on the SD card. Nextcloud data and database are both definitely on the USB drive? I’m not aware of any disk writes during downloads; that would be very bad behaviour. I can try to test this tomorrow, since my personal Nextcloud is set up the same way, just on an RPi 2.
OK, I did some testing and it seems lighttpd is creating some kind of cache before the actual download happens. At least if I download files using a web browser, chunks are stored in /var/cache/lighttpd/uploads beforehand. Yes, I know, it’s a folder called uploads, but it’s actually storing data ahead of a download as well. This would explain the high I/O on your SD card, as this folder is still located on it. Putting it into memory should make the download available much faster.
At least in my test, a 4 GB file became available in a short time.
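If you want to verify this on your side, you can watch that directory fill up while a download is being prepared (plain watch/du, nothing DietPi-specific):
watch -n 1 'du -sh /var/cache/lighttpd/uploads'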
root@DietPi:~# df -Th | grep hdd
/dev/sda2 ext4 1.8T 527G 1.3T 29% /mnt/hdd
At least the dietpi_userdata should be on the HDD (if I understand correctly).
root@DietPi:~# ls /mnt/hdd/dietpi_userdata/
Music Pictures Video downloads mysql nextcloud_data
nextcloud_data contains folders for the users of the Nextcloud instance (and the actual files), and mysql seems database-related, so I think everything is on the HDD.
I’m downloading via the client, but I don’t think it makes a difference.
Can you tell me how to do this?
What I don’t understand, if this is the problem: why am I seemingly the only one with it? I’d think my setup is kinda… basic, as I only use the default DietPi installations without any changes (as far as I remember).
No, I also run Nextcloud on an RPi 4 4GB and I also have problems with large files (1 GB and bigger), but I gave up and just don’t store such big files anymore.
When you search the web for the big file download problem, you see we are not the only ones.
Most of the time I read about the PHP limits and the 32-bit OS limitation. But on GitHub I found an issue with another solution (they claim the streamer is buggy):
https://github.com/nextcloud/server/issues/15117#issuecomment-701242456
As far as I can see, an admin of this forum already opened a pull request about this issue, but it looks like it’s not merged yet.
But you can try/fix it on your own:
The fix seems to be basically a one-liner. In lib/private/Streamer.php, exchange
public function __construct(IRequest $request, int $size, int $numberOfFiles)
for
public function __construct(IRequest $request, float $size, int $numberOfFiles)
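For background: on a 32-bit OS, PHP integers are capped at 2^31-1 (about 2 GiB), which would explain why an int $size overflows for larger archives. You can check this on the Pi:
php -r 'var_dump(PHP_INT_MAX);'
# prints int(2147483647) on 32-bit PHP, int(9223372036854775807) on 64-bit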
You would need to add the following line to /etc/fstab if you’d like to test having the lighttpd upload folder located in RAM:
tmpfs /var/cache/lighttpd/uploads tmpfs size=2G,noatime,lazytime,nodev,nosuid,mode=1777
Once rebooted, this should create a 2 GB temp file system (half of your memory). Don’t worry, memory is only used when data is actually stored there. While downloading a larger file, you should be able to watch the memory usage grow. I hope the desktop client works the same way as the web browser does.
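If you don’t want to reboot, you can also activate the new fstab entry right away and verify it (standard mount/df usage):
mount /var/cache/lighttpd/uploads
df -h /var/cache/lighttpd/uploads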
This did not help, sadly. I also found lots of threads about problems with big files, but (as you said) most of them have to do with 32-bit and the PHP limits; I found not one where the whole DNS resolving crashes.
This actually fixes the problem! Things happen as you said: RAM use goes up to about 1 GB (I assume the whole 900 MB file gets put into RAM), the download starts, and there are no crashes. After the download, the RAM is free again. No crashes, no problems. I restored the CPU affinity for php-fpm just in case, and it still works fine. I’m really happy I can finally use the cloud to the fullest! Thank you very much!
OK, it’s not a perfect solution, as you are limited to a file size of 2 GB, since this is the size of the tmpfs. Not sure how it will behave if you try to download larger files.
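If you want to test with bigger files, a tmpfs can also be resized on the fly without a reboot (the size is just an example; keep it below your physical RAM):
mount -o remount,size=3G /var/cache/lighttpd/uploads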
I just uploaded a 10 GB file with no problems.
Downloading:
RAM went up to about 2 GB and the download started, no crashes again. While downloading, the used RAM got lower and lower. Once the 2 GB were downloaded, the download stopped. I had a 2 GB temp file in my local folder, which was deleted after a short while. I tried it twice, same behaviour both times. No visible error messages (Edit: “Connection closed” appears in the Nextcloud client). Looks like I can’t get files >=2 GB out of the cloud.
While this isn’t a huge problem for now, I wonder if there is a fix for this: can the temp folder on the Pi be put onto the HDD? Or was it there before? Would this be bad for the lifespan of the HDD?
Or could the temp folder be changed so that it works with bigger files?
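Edit: From a quick look at the lighttpd docs, the directory for those chunks seems to be controlled by the server.upload-dirs directive, so something like this in /etc/lighttpd/lighttpd.conf might move them to the HDD (just a guess on my side, untested; /mnt/hdd/lighttpd-uploads is a made-up path that would need to exist and be writable by www-data):
server.upload-dirs = ( "/mnt/hdd/lighttpd-uploads" )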