Nextcloud Installation

I used the menu interface to install Nextcloud with Apache. It seems that Apache does not have a prefork module installed, and I have an issue with forked processes consuming all my CPU/memory, causing the server to crash.

Was this installation intended to be run with Lighttpd?

Why wouldn’t a prefork module be installed? Do I simply need to install and configure it? I don’t mind modifying my system; I’m just asking for a best known method to resolve a runaway Apache prefork issue. Just trying to avoid pitfalls in the interest of saving time.

With the little time I had over the last couple days, I poked at it a little, and found that the prefork module is actually enabled.

The reason I thought it was not installed is due to the following output:

root@nextcloudpi:/etc/apache2/conf-available# apachectl -l                                                                    
Compiled in modules:                                                                                                          
  core.c                                                                                                                      
  mod_so.c                                                                                                                    
  mod_watchdog.c                                                                                                              
  http_core.c                                                                                                                 
  mod_log_config.c                                                                                                            
  mod_logio.c                                                                                                                 
  mod_version.c                                                                                                               
  mod_unixd.c

I ran some command (I don’t remember what it was) and found that it was actually installed, just not listed here. In hindsight that makes sense: apachectl -l only lists modules compiled into the binary, while mpm_prefork is loaded as a shared module, so it only shows up in the full module list from apachectl -M.

After researching the prefork subject, and reading in this article that mpm_worker gives the best performance and memory usage, I disabled php7.0, enabled mpm_worker, then attempted to enable php7.0 again.

It appears that php7.0 will not run with the installed version of mpm_worker:

root@nextcloudpi:~# a2enmod php7.0                                                                                            
Considering dependency mpm_prefork for php7.0:                                                                                
Considering conflict mpm_event for mpm_prefork:                                                                               
Considering conflict mpm_worker for mpm_prefork:                                                                              
ERROR: Module mpm_worker is enabled - cannot proceed due to conflicts. It needs to be disabled first!                         
ERROR: Could not enable dependency mpm_prefork for php7.0, aborting

So I re-enabled mpm_prefork with php7.0, exactly the way it was in the original installation.

After learning best configuration practices from here, I created the file: /etc/apache2/conf-available/performancetune.conf with the following contents:

<IfModule mpm_prefork_module>
    StartServers 2
    MinSpareServers 2
    MaxSpareServers 5
    MaxClients 150
    # must be customized
    ServerLimit 20
    # must be customized
    MaxRequestsPerChild 50
</IfModule>

KeepAlive Off
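A side note on directive names: Debian Stretch ships Apache 2.4, where MaxClients and MaxRequestsPerChild still work but are legacy aliases for the current 2.4 spellings. The same settings, written in the 2.4 names, would look like this (note that ServerLimit is a hard cap, so with ServerLimit 20 prefork will never actually start more than 20 children regardless of the 150):

```apache
<IfModule mpm_prefork_module>
    StartServers            2
    MinSpareServers         2
    MaxSpareServers         5
    MaxRequestWorkers       150   # 2.4 name for MaxClients (capped by ServerLimit)
    ServerLimit             20
    MaxConnectionsPerChild  50    # 2.4 name for MaxRequestsPerChild
</IfModule>
```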

Then enabled that configuration:

a2enconf performancetune

I had another issue, however. It seems that Nextcloud does not have any method to back off incoming data when clients send streams faster than the storage medium can handle. The server would also crash under too much throughput. I changed the nextcloud.conf file to contain the following:

<IfModule mod_bw.c>
    BandwidthModule On
    ForceBandWidthModule On
    # Overall limit: 4 MB/s
    Bandwidth all "4194304"
    MaxConnection all "400"
    # Any file over 1 MB will be limited to 500 KB/s
    LargeFileLimit * 1024 512000
    BandWidthError 510

    Alias /nextcloud "/var/www/nextcloud/"

    <Directory /var/www/nextcloud/>
        Options +FollowSymlinks
        AllowOverride All

        <IfModule mod_dav.c>
            Dav off
        </IfModule>

        SetEnv HOME /var/www/nextcloud
        SetEnv HTTP_HOME /var/www/nextcloud

        # Hard-code 128M OPcache size, only for /nextcloud, to suppress
        # the warning on the Nextcloud admin panel.
        php_admin_value opcache.memory_consumption 128
    </Directory>
</IfModule>

I’m guessing that’s probably not the best way to use the bandwidth module, but it seems to be working extremely well. I think my server is now behaving; these settings made a massive improvement. I will probably need to tweak them some, as these values were just straight-up guesses. I’ll run some heavy client syncs over the next couple of days to see how it does.

If anybody has input, please share.

@twentyninehairs

May I ask which machine/Debian version you are using? Tweaking as detailed as yours depends significantly on the system. Generally everything should work just fine with the default configuration provided by DietPi. For low-RAM (SBC) devices we already tweak settings to reduce resource requirements in a way that does not risk usual tasks/behaviour.

The default webserver with DietPi is indeed Lighttpd, but you can choose another within dietpi-software before installing Nextcloud, or manually install another webserver stack first and install Nextcloud afterwards.

I am not too familiar with the other Apache MPM modules, but mpm_prefork is in every case I know of the default together with mod_php, which presumably has a reason, even if it might not work well for you.

If you want a different, in some cases less memory-consuming, method of running PHP, try Lighttpd or Nginx with php-fpm (which is the default with the related DietPi webserver stacks).

So far best wishes and (still) merry Christmas and happy new year :smiley: !

Thank you for your reply! I had a very merry Christmas, I hope you did as well! :smiley:

I’m running the default DietPi image for a Raspberry Pi 3, a new installation. It uses Debian Stretch, but I suppose you know that. I selected Apache from the menu before running the Nextcloud installation because I read in a few non-DietPi forums that it was more compatible. I don’t know how compatible Lighttpd is, and they probably didn’t either, but I selected Apache because I thought I might have fewer issues.

These posted settings still cause too much I/O overhead for my installation and need more tweaking if I’m going to run it this way. They vastly reduced the problem, but it is still too severe. I haven’t fully troubleshot it yet, but I think large file transfers that span a very long period may still be starting too many processes. At any rate, there is still high CPU I/O wait time during large, long file transfers.

For a small home installation, is there any downside that you know of to using Lighttpd? This is for something like 6-12 users, and I suppose I will keep the bandwidth down to around 2-4 MB/s. But some files I need to transfer are maybe 60 GB in size or more. I use it for small files also, but big files definitely transfer over a long period of time.

I’ve never used Lighttpd before. Do you think it would have fewer issues with this type of setup?

Do you know if there would be issues using the DietPi menu to switch between Lighttpd and Apache2 to compare the two? I’m sure I would need to rerun the Nextcloud install script after switching webservers for the first time using the menu?

@twentyninehairs
Yeah, we also enjoyed Christmas here :slight_smile:!

About Lighttpd:

  • One thing is that we do not yet have an optimized ownCloud/Nextcloud config for it, as someone with deeper Lighttpd knowledge would be helpful. So some incompatible modules (dav) and redirections could be missing. But in comparison to Nginx, where much adjustment is necessary, on Lighttpd ownCloud/Nextcloud runs out of the box, as far as I could test (my production server is on Apache). I have also never heard of an issue from others.
  • As it uses php-fpm like Nginx, performance and memory consumption should be more or less the same.
  • Lighttpd is not officially supported/recommended, as it is listed as not fully compatible at sabre.io, the DAV backend of Nextcloud: http://sabre.io/dav/webservers/ But WebDAV using the Windows Nextcloud client, as well as Thunderbird Lightning as a CalDAV client, worked in my tests.
  • I would suggest testing it with the Nextcloud apps/features you/your users need.
  • Consider Nginx as well: it has developed faster over the last years, supports more features/modules, and has overtaken Lighttpd in some performance benchmarks. But the difference should be small, if any, since, as said, both use php-fpm.

About switching webserver:

  • I just did this for some reason on test systems. You can uninstall only the webserver using dietpi-software, choose the other one, and then run “dietpi-software reinstall 114” to reinstall Nextcloud. On existing instances this will just install the other webserver-specific config.
  • Theoretically you should also be able to install two webservers at once and activate/deactivate them: service apache2 start/stop, service php7.0-fpm start/stop + service nginx start/stop. But yeah, it’s all about testing :smiley:.

@MichaIng

Sorry for the late replies. This is for my home network, and I work on it when I can. I use the Zabbix network monitoring software to monitor my network performance and understand what is happening.

I can see that the problem with Apache really is not that the server is too heavy to handle the load of a small number of users (8-12) with limited bandwidth. In my testing, Apache appears to handle the load easily, using a very small fraction of the Raspberry Pi 3’s system resources. It appears that the default settings allow Apache to far outstrip hardware capacity. I believe I can continue to tweak the Apache config into a very robust server, but I’m not opposed to using another type of webserver.

Hard drive I/O throughput might be the major bottleneck for this device. It’s mostly an issue of CPU cycles wasted waiting on disk writes. In my case, I’m using a 64 GB USB flash drive for the MariaDB database (and DietPi user data; it’s large to mitigate flash wear-leveling issues) and a 4 TB USB HD as the main storage for the user data. Both are formatted EXT4.

I reduced the resources given to Apache to the following contents in my performancetune.conf file:

StartServers 1
MinSpareServers 1
MaxSpareServers 2
MaxClients 50
# must be customized
ServerLimit 3
# must be customized
MaxRequestsPerChild 100

KeepAlive Off

With these reduced settings, I can see that disk I/O to MariaDB on the USB stick probably poses a minor issue with my installation. The server is very responsive, with file transfers averaging about 2.4 MB/s. But just under a quarter of CPU time is spent waiting for disk I/O to MariaDB.

Also, there are very brief periods where the Apache server attempts to write large chunks of data to the main storage drive all at once. When a large file syncs through the client, the client transfers a large chunk of the file to the server for a while, then the server goes down for about 10 minutes with very high disk reads and writes to the main storage directory. Hard disk utilization spikes to between 13 MB/s and 15 MB/s. During this time the client errors out, stating that no connection to the server is available; no data transfers to/from the server, and the web app is non-responsive. Disk I/O time per operation spikes and flatlines at 50 ms. After about 10 minutes of this, the server recovers, only to do it all over again. I don’t think those files ever actually transfer.

I’m not sure the bandwidth module settings I listed previously are working properly. When I reduced the overall bandwidth setting to what I thought would be 1 MB/s (‘Bandwidth all “1048576”’) and limited files over 1 MB to 100 KB/s (‘LargeFileLimit * 1024 102400’), I didn’t see any relevant change in the file transfer rates from clients.
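For reference, mod_bw’s Bandwidth and LargeFileLimit rate arguments are given in bytes per second (and LargeFileLimit’s size threshold in KB), so the values in play convert like this (plain shell arithmetic, nothing mod_bw-specific):

```shell
# mod_bw rate values are bytes/second; convert the ones used above:
echo $(( 4194304 / 1024 / 1024 ))   # 4   -> "Bandwidth all 4194304" is 4 MB/s
echo $(( 512000  / 1024 ))          # 500 -> "LargeFileLimit * 1024 512000" is 500 KB/s
echo $(( 1048576 / 1024 / 1024 ))   # 1   -> "Bandwidth all 1048576" is 1 MB/s
echo $(( 102400  / 1024 ))          # 100 -> "LargeFileLimit * 1024 102400" is 100 KB/s
```

So the reduced settings should cap large files at roughly 100 KB/s; if client transfer rates don’t change, the module may not be applying to those requests at all.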

Do you have issues syncing large files with a Windows client? Maybe files >2 GB?

@MichaIng

Upon further analysis, I realized that when there are problems, both the read and write speed to the storage hard drive spike to 13-15 MB/s, the disk I/O time completely flatlines at 50 ms, and the network I/O traffic pattern exactly inversely matches the disk I/O traffic.

I believe the maximum practical throughput of the USB bus on the Pi should be around 30 MB/s. That means throughput to the storage HD is hitting the throughput headroom of that USB bus. Other things use that same bus as well, likely accounting for the small remaining percentage of I/O when running at 13 MB/s. If I’m not mistaken, the Ethernet adapter is connected to the same USB bus as the USB ports. There is other disk I/O happening at the same time as well. This points to most of the headroom of the USB bus being taken up by writing a single file to the storage drive. The server probably becomes non-responsive for something like 10 minutes because Ethernet traffic is crowded off that bus. I could see this corrupting some filesystems or databases if I/O to other drives times out while the server is in the middle of writing a file.
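For anyone following along, the ~30 MB/s figure is roughly half the USB 2.0 wire speed; on the Pi 3, the LAN9514 chip hangs the Ethernet port off that same single USB 2.0 bus, so disk and network traffic share one budget:

```shell
# USB 2.0 signals at 480 Mbit/s; in bytes per second that is:
echo $(( 480 * 1000 * 1000 / 8 ))   # 60000000, i.e. ~60 MB/s theoretical
# Protocol overhead roughly halves this in practice (~30 MB/s usable),
# and on the Pi 3 the on-board Ethernet shares that same bus.
```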

I think the solution would be to set up a kind of QoS system in the kernel to manage the bottleneck of the USB bus that everything is tied to. I might use cgroups to throttle my main storage HD to 5 MB/s, for example. I’m not sure if other subsystems need limits as well.

Yeah, Nextcloud on an SBC is a bit overkill…it takes a lot of oomph to run it…but if you are patient, it can perform well enough for photo backups from phones.

There is also a way to pre-generate all the thumbnails…but it must be run through a command shell (get it in the apps tab under admin):
https://github.com/rullzer/PreviewGenerator

sudo -u www-data /var/www/nextcloud/occ preview:pre-generate

It will take some time for your little processor to chop through and make the thumbnails…then it’s MUCH faster updating the webpage.

They say it should be run as a cron job every 10 minutes…but I have failed to put mine into crontab effectively.
Anyone want to help?

@WarHawk If you installed Nextcloud with the DietPi install script, you only need to use the command:

ncc preview:pre-generate

I do not have that plugin, but I’m guessing you could set that up by editing your crontab with the command

crontab -e

Then try putting this at the end of the file, and save it.

*/10 * * * * ncc preview:pre-generate

Please don’t confuse this with the issue I’m trying to solve, however. The problem I’m having at the moment is that a Windows client cannot sync large files to the server at all, because USB hard drive I/O saturates the USB bus.

It doesn’t sync large files at all. I would not care if the transfers were slow, but it doesn’t work at all, and I would really like to solve this. When large chunks of files are written to the USB hard drive, it looks like the Ethernet adapter is starved of packets on that bus, and clients cannot reach the server while those chunks are written. So a client times out waiting to communicate about the file being uploaded and disconnects. Then, when the server finishes writing a chunk, it becomes available again, and the client starts over sending the same file. It never completes.

I’m trying to keep the USB bus from saturating by imposing I/O limits. Digging into this further today, I found that DietPi is already using cgroups for some I/O.

Does anybody have any suggestions for limiting I/O to a USB disk? I have no experience with cgroups, so I will have some difficulty knowing the best practice for modifying a system that is already in place.
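For what it’s worth, on a Stretch-era kernel with cgroup v1, a per-device write cap goes through the blkio throttle interface. The sketch below is illustrative, not a tested DietPi recipe: the group name usbdisk is arbitrary, and the 8:0 major:minor numbers are placeholders you’d look up for your own drive (ls -l /dev/sda). It needs root.

```shell
# Create a blkio cgroup and cap writes to the USB HD at 5 MB/s.
# The throttle value is bytes per second; "8:0" is the device's major:minor.
mkdir -p /sys/fs/cgroup/blkio/usbdisk
echo "8:0 $(( 5 * 1024 * 1024 ))" \
    > /sys/fs/cgroup/blkio/usbdisk/blkio.throttle.write_bps_device

# Move the Apache workers into the group so only their I/O is limited:
for pid in $(pgrep apache2); do
    echo "$pid" > /sys/fs/cgroup/blkio/usbdisk/cgroup.procs
done
```

One caveat with this approach: newly forked Apache children won’t inherit the group unless the parent is placed in it before they spawn, so a systemd drop-in or service-level setting is usually cleaner than moving PIDs by hand.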

@WarHawk I might also recommend installing Webmin; DietPi has it as an optional install. If you disable background data collection in the settings, Webmin doesn’t take many resources to run, maybe around 40 MB of memory. This utility makes managing a Linux system a lot easier, and it has a GUI for managing cron jobs. Well worth it in my opinion.

@MichaIng Back to troubleshooting my large-file sync issue: I mucked with the cgroup system, and I think I made some headway, but at one point I crashed my server and lost all access to the storage drives. I think I understand what to do, but I don’t know if using cgroups to pin hard drive I/O to a specific throughput is a good answer, because it caps performance at a fixed number. I wonder if ionice might be something to look into?
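ionice would indeed avoid the fixed-number problem: it works per process rather than per device, lowering a process’s priority in the I/O scheduler, so the disk can still run at full speed when nothing else wants it. A minimal sketch (the copy command and paths are just illustrations, and note that these priorities only take effect under the CFQ I/O scheduler):

```shell
# Run a bulk copy in the "idle" scheduling class (-c 3): it only gets
# disk time when no other process is doing I/O.
ionice -c 3 cp /mnt/usbstick/bigfile /mnt/usbhd/

# Or reprioritize an already-running process by PID
# ("best effort" class 2, lowest priority 7):
# ionice -c 2 -n 7 -p <pid>
```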

Anyway, I realized that this must have been happening when the server was piecing several large file chunks back together. That explains why the duration was so long and why both read and write speeds were maxed out.

I found here that the following setting can be placed in Nextcloud’s config.php file:

'part_file_in_storage' => false,

According to that documentation, Nextcloud stores the part files created during upload in the same storage location as the upload target. Setting this to false stores the part files in the root of the user folder instead.

My user folder and MariaDB database are on a 64 GB USB flash drive that is apparently not capable of saturating the USB bus. (It’s 64 GB to mitigate write wear-leveling issues.)

This is not a particularly good fix, because it doesn’t eliminate the root cause, the USB bus saturation. But it was good enough to break up the saturating I/O and keep my system responsive, and uploads of large files now complete. I think the UI took a minor performance hit when I made this change, though.

The Apache bandwidth module is also working; it just doesn’t always hold to the numbers I think it should. There is definitely a massive change in data throughput, and things sometimes break when I shut it off, so it’s definitely doing its thing. I’m still tweaking parameters to optimize performance. It seems the bandwidth settings can be loaded globally by themselves, so I created a file called bw.conf in conf-available and put all my settings in that. I’ll post those settings once I dial them in from test runs.

All in all, I’m very happy with Apache and Nextcloud. Apache is definitely not too heavy for the Pi; it just needs its parameters reined in to acceptable levels. If I ever get all this straightened out, I can post a tutorial with configuration changes, if you would like.

Large file transfers are still taking the server down for something like 10 minutes at a time while part files are combined. All those files now complete, and nothing is ever corrupted, but I still don’t like that my server is sometimes down.

The Pi’s core processing capability is more than powerful enough to run Nextcloud; the only real problem is the USB bus bottleneck. I never tested Lighttpd, but I don’t see how that would change this issue.

I tried using WiFi instead of a wired Ethernet connection, thinking it might be on a different bus. There is too much wireless noise where I live to keep a constant connection on the 2.4 GHz band, even with the Pi sitting next to the access point. According to the tests I ran, it didn’t seem to help, but I was not really able to run adequate testing due to all the noise. Sad, but true.

I can’t see any way of fixing this without decreasing transfer speed to/from the storage hard drive. I thought about connecting the hard drive to a USB 1.0 hub, then connecting that hub to a USB 2.0 hub, and the 2.0 hub to the Pi. I believe that should reduce transfer speed to around 800 KB/s, if it works at all, but I think that is too slow. I’m comfortable with a minimum of 2 MB/s, with 4-8 MB/s ideal. Using cgroups is the only other solution I can think of, and I haven’t been able to implement that within a reasonable amount of time.

So, I think I am going to give up on the Pi, and use an ODROID-C2 instead.

Nice…I had to install the Preview Generator app from the admin apps page (I don’t think it’s installed stock on the DietPi install).

When I ran the

occ preview:pre-generate

it would run for a second…then stop

I got it to go thru by installing screen, then running:

occ preview:generate-all

in the /var/www/nextcloud directory (where occ resides),
then detaching from the screen session and letting it run in the background.
It has taken a LOOOOOOOOOOOONG time to go through several hundred gigs of photos…

I think I got it to work without crashing:

sudo -u www-data php /var/www/nextcloud/occ preview:pre-generate

Guess I will add

*/10     * * * * www-data php /var/www/nextcloud/occ preview:pre-generate

to /etc/crontab

WarHawk
Note that with Nextcloud on DietPi you can use ncc, which is a global function placed by DietPi as a shortcut for: sudo -u www-data php /var/www/nextcloud/occ

With ownCloud, the shortcut is occ. They differ to allow ownCloud and Nextcloud to be installed in parallel :wink:.

But AFAIK these global functions are not available within crontab. Cron is executed in a non-interactive sub-shell, so /etc/profile and /etc/bash.bashrc, which are required for ncc/occ to be available, are not sourced there. So there you need to use the full sudo + command path, as you did.

Instead of adding the job directly to /etc/crontab, I suggest you add it to the www-data user’s crontab, to skip using sudo:
crontab -u www-data -e
There add:
*/10 * * * * php /var/www/nextcloud/occ preview:pre-generate

E.g. this is my www-data crontab:

2018-12-08 01:21:38 root@micha:/var/log# crontab -u www-data -l
*/15 * * * * php /var/www/nextcloud/cron.php
10 * * * * php /var/www/nextcloud/occ preview:pre-generate -vvv &>> /var/log/micha-pre-generate.log

The Nextcloud cron job is added by the DietPi-Software installer. I just run pre-generation every hour and do debug logging, left over from when I was debugging/testing preview generation times and such :wink:.

Seems I am getting the same timeout as they were in the threads

Does the preview:generate-all delete all the previews and re-generate them?

Uggh…I will figure this darn thing out…LOL

preview:generate-all does not delete any previews, AFAIK; it just re-scans all files and “requests” previews for them. The internal (core, not app) generator then generates them if they don’t already exist.

So the app just does, recursively for all files, the same thing that would happen if you manually browsed through all directories/files, opened details, the gallery app and such, forcing Nextcloud core to generate previews in all usual sizes/ratios.

Seems I am getting the same timeout as they were in the threads

You mean the cron job just stops, even though it has not yet finished? I’m not 100% sure about PHP timeouts, but note that you need to edit the ini file in /etc/php/7.0/cli/php.ini|conf.d to apply changes for occ/cron.php commands.
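If the CLI limit is indeed the culprit, the relevant knob would look like the fragment below. A caveat: by default the PHP CLI SAPI ships with max_execution_time = 0 (unlimited), so it’s worth checking the effective value first with php -i | grep max_execution_time before assuming this is what kills the job:

```ini
; /etc/php/7.0/cli/php.ini (or a drop-in file in the adjacent conf.d/)
; 0 = no limit; a positive value is seconds per script run
max_execution_time = 0
```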

Nice…because I know preview:generate-all is working…I even have it set to -vvv output to a file, and I can see it slowly chopping through…it does have to go through over 300 GB of files and whatnot, so it’s really slow…I guess once it generates them I can set it to run every 10 minutes…if I did that now, it would spawn multiple instances and lock up the OPiPC…so far it’s doing well…but preview:pre-generate stops after about 5 seconds of running (it seems the developer had issues with it as well).
As long as it doesn’t re-generate all of them, and only picks up where it left off and continues on, then I think preview:generate-all is the way I should go.

The cron job I added to run preview:pre-generate stops; I never see it going in htop. But when I set up preview:generate-all, it runs and runs and runs…though if I have it set to start too soon, it ends up spawning multiple instances and bogs the SBC down until it locks up…so I set it to start only once a day until it chops through the initial generation of the files.

crontab -e -u www-data

0 22  * * * php /var/www/nextcloud/occ preview:generate-all -vvv &>> /var/log/nextcloud/files.created.txt
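One way to stop overlapping runs from piling up (so the job could safely fire more often than once a day) is to wrap the command in flock, which simply skips a run while the previous one still holds the lock. The lock file path below is just an illustration:

```shell
# flock -n exits immediately (non-zero) if the lock is already held,
# so a second cron invocation never overlaps a still-running one.
LOCK=/tmp/preview-generate.lock   # hypothetical path
flock -n "$LOCK" echo "previous run finished, starting a new one"
```

In the crontab entry above that would become, e.g., 0 22 * * * flock -n /tmp/preview-generate.lock php /var/www/nextcloud/occ preview:generate-all …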

All I can say is, it’s MUCH faster loading the pages with the previews pre-generated…

Do you have yours set for AJAX or cron?

generate-all will definitely be slower, since it re-scans all files. pre-generate only walks through the files added since the last execution, so it should be much faster.

But yeah, if there is no solution to keep it running for more than 5 minutes, that’s a problem. Although, since you run it every 10 minutes, even if you add too many images at once to complete within 5 minutes, the second or third cron execution will finish the job :wink:.

Just checked my logs: it runs longer than 5 minutes in my case if I add many pictures. Which webserver do you use? Lighttpd with php-fpm (the default on DietPi)?
I use Apache with mod_php, so things might be different. Worth investigating how this can be solved; if a fix is found, we can add it to the DietPi install script.

Apache2…

Not sure how to roll back to Lighttpd since it is already set up for Apache2

Correction…it’s back to Cron rather than AJAX…I’ll leave that alone…I have a stock install, but a non-standard build of DietPi.

I am running MariaDB though

nextcloud:/home/warhawk# mysql_upgrade

This installation of MySQL is already upgraded to 10.1.37-MariaDB, use --force if you still need to run mysql_upgrade

Here is everything from my “settings” page

Nextcloud
Version: 14.0.4.2
Apps installed: 35
Apps updates available: 0

PHP
Version: 7.0.30
Memory Limit: 512 MB
Max Execution Time: 3600
Upload max size: 511 MB

Database
Type: mysql
Version: 10.1.37
Size: 131.6 MB

I will say this…I was having fits for a while after a power failure…I ended up using dietpi-drive_manager to scan and fix my external drive…it had a few corruptions…after I did that, smooth sailing.

WarHawk
No need to switch to Lighttpd; that would just cause config trouble, and you would need to reinstall PHP and Nextcloud as well (at least the configuration steps). As said, I use Apache here as well and have since starting with Nextcloud (ownCloud at that time).

You said the developer (you mean rullzer?) has this issue as well? Do you have a link? I will run some tests here again and check the PHP configs.