MariaDB (for ownCloud) High CPU Load

I recently set up a new RPi4b with DietPi (ARMv6 32) and have it set up to back up a couple of Windows computers with rsync. For added utility, I installed ownCloud to be able to access the backed-up files remotely, essentially as a read-only option. The other day, I saw the CPU temp was high and found that the mariadbd process was showing 400% CPU usage. After a reboot, the process does not initially appear in 'top', but several hours later it shows 100% usage. Perhaps if I leave it longer, it will start using additional cores again.

The ownCloud configuration is almost completely stock as installed by dietpi-software. Below are the few install choices and post-install configuration changes I made:

  • Lighttpd for the web server

  • Added 'files_external_allow_create_new_local' => true, to /var/www/owncloud/config/config.php and added my USB drive as a storage location in the ownCloud web interface.

  • Used the dietpi-letsencrypt utility to set up SSL, along with a DDNS address, port forwarding in my router, and adding the domain to the trusted list in config.php

I’m the only user, and the CPU usage rises to 100% and beyond without any access or activity on my part. The external drive also stays idle.

I know next to nothing about MySQL databases, so please let me know what additional info I could provide or if there are any troubleshooting steps I can take.

Thanks in advance,
~Mike

You could try having a look at the logs. Maybe there are hints inside:

journalctl -u mariadb
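If the journal stays quiet, MariaDB's slow query log can catch long-running statements as they happen. A sketch, assuming the default passwordless root socket login; the 5-second threshold is just an example value:

```shell
# Turn on the slow query log at runtime (no restart needed) and log
# any statement that runs longer than 5 seconds.
mysql -e "SET GLOBAL slow_query_log = ON;"
mysql -e "SET GLOBAL long_query_time = 5;"

# Show where the slow-log entries are written:
mysql -e "SELECT @@slow_query_log_file;"
```

Note these runtime settings are lost on a service restart; to make them permanent they would have to go into a my.cnf drop-in instead.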

Thanks. The contents of the log file are pasted below. There doesn’t seem to be anything significant there to my untrained eye. The timestamp corresponds to when I last restarted the service. At that point, CPU usage for the process was 0%, and it stayed that way for at least the first hour. When I checked again 9 hours later, usage was back to 100% with no further logging.

-- Journal begins at Thu 2022-11-10 19:54:14 PST, ends at Fri 2022-11-11 06:24:58 PST. --
Nov 10 19:54:22 DietPi systemd[1]: Starting MariaDB 10.5.15 database server...
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] /usr/sbin/mariadbd (mysqld 10.5.15-MariaDB-0+deb11u1) starting as process 505 ...
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: Uses event mutexes
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: Number of pools: 1
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: Using generic crc32 instructions
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: Using Linux native AIO
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: Completed initialization of buffer pool
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: 128 rollback segments are active.
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: Creating shared tablespace for temporary tables
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: 10.5.15 started; log sequence number 291705846; transaction id 231848
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] InnoDB: Loading buffer pool(s) from /mnt/dietpi_userdata/mysql/ib_buffer_pool
Nov 10 19:54:23 DietPi mariadbd[505]: 2022-11-10 19:54:23 0 [Note] Plugin 'FEEDBACK' is disabled.
Nov 10 19:54:24 DietPi mariadbd[505]: 2022-11-10 19:54:24 0 [Note] Server socket created on IP: '127.0.0.1'.
Nov 10 19:54:24 DietPi mariadbd[505]: 2022-11-10 19:54:24 0 [Note] Reading of all Master_info entries succeeded
Nov 10 19:54:24 DietPi mariadbd[505]: 2022-11-10 19:54:24 0 [Note] Added new Master_info '' to hash table
Nov 10 19:54:24 DietPi mariadbd[505]: 2022-11-10 19:54:24 0 [Note] /usr/sbin/mariadbd: ready for connections.
Nov 10 19:54:24 DietPi mariadbd[505]: Version: '10.5.15-MariaDB-0+deb11u1'  socket: '/run/mysqld/mysqld.sock'  port: 3306  Raspbian 11
Nov 10 19:54:24 DietPi systemd[1]: Started MariaDB 10.5.15 database server.
Nov 10 19:54:24 DietPi /etc/mysql/debian-start[525]: Upgrading MySQL tables if necessary.
Nov 10 19:54:24 DietPi /etc/mysql/debian-start[530]: Looking for 'mysql' as: /usr/bin/mysql
Nov 10 19:54:24 DietPi /etc/mysql/debian-start[530]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Nov 10 19:54:24 DietPi /etc/mysql/debian-start[530]: This installation of MariaDB is already upgraded to 10.5.15-MariaDB.
Nov 10 19:54:24 DietPi /etc/mysql/debian-start[530]: There is no need to run mysql_upgrade again for 10.5.15-MariaDB.
Nov 10 19:54:24 DietPi /etc/mysql/debian-start[530]: You can use --force if you still want to run mysql_upgrade
Nov 10 19:54:24 DietPi /etc/mysql/debian-start[539]: Checking for insecure root accounts.
Nov 10 19:54:24 DietPi /etc/mysql/debian-start[543]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables
Nov 10 19:54:26 DietPi mariadbd[505]: 2022-11-10 19:54:26 0 [Note] InnoDB: Buffer pool(s) load completed at 221110 19:54:26

OK, indeed nothing in the logs. Can you execute the following as soon as MariaDB goes wild?

mysql -e "SHOW PROCESSLIST;"
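As a side note, the Info column of plain SHOW PROCESSLIST is truncated, so the full statement text may be cut off. A sketch for getting the whole query and filtering for long runners, assuming passwordless root socket login; the 60-second cutoff is an arbitrary example:

```shell
# SHOW FULL PROCESSLIST prints the complete query text instead of the
# truncated Info column.
mysql -e "SHOW FULL PROCESSLIST;"

# In batch mode (-B) the output is tab-separated and Time is field 6,
# so awk can keep just the header plus queries running longer than 60 s:
mysql -B -e "SHOW FULL PROCESSLIST;" | awk -F'\t' 'NR==1 || $6+0 > 60'
```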

Sure thing. It took a little longer this time for the high CPU usage to occur, but here's the output of the SHOW PROCESSLIST command while the mariadbd process is using 100% CPU.

+----+----------+-----------+----------+---------+------+--------------+------------------------------------------------------------------------------------------------------+----------+
| Id | User     | Host      | db       | Command | Time | State        | Info                                                                                                 | Progress |
+----+----------+-----------+----------+---------+------+--------------+------------------------------------------------------------------------------------------------------+----------+
| 66 | oc_admin | localhost | owncloud | Query   |  439 | Sending data | select `fileid`, `name`, `user_id` from `oc_filecache` `fc` join `oc_mounts` on `storage` = `storage |    0.000 |
| 68 | root     | localhost | NULL     | Query   |    0 | starting     | SHOW PROCESSLIST                                                                                     |    0.000 |
+----+----------+-----------+----------+---------+------+--------------+------------------------------------------------------------------------------------------------------+----------+

Can you check if the first row persists when you repeat the command?

Yes it does, and CPU usage for the process remains at 100%.

+----+----------+-----------+----------+---------+------+--------------+------------------------------------------------------------------------------------------------------+----------+
| Id | User     | Host      | db       | Command | Time | State        | Info                                                                                                 | Progress |
+----+----------+-----------+----------+---------+------+--------------+------------------------------------------------------------------------------------------------------+----------+
| 51 | oc_admin | localhost | owncloud | Query   |  235 | Sending data | select `fileid`, `name`, `user_id` from `oc_filecache` `fc` join `oc_mounts` on `storage` = `storage |    0.000 |
| 52 | root     | localhost | NULL     | Query   |    0 | starting     | SHOW PROCESSLIST                                                                                     |    0.000 |
+----+----------+-----------+----------+---------+------+--------------+------------------------------------------------------------------------------------------------------+----------+

root@DietPi:~# mysql -e "SHOW PROCESSLIST";
+----+----------+-----------+----------+---------+------+--------------+------------------------------------------------------------------------------------------------------+----------+
| Id | User     | Host      | db       | Command | Time | State        | Info                                                                                                 | Progress |
+----+----------+-----------+----------+---------+------+--------------+------------------------------------------------------------------------------------------------------+----------+
| 51 | oc_admin | localhost | owncloud | Query   |  246 | Sending data | select `fileid`, `name`, `user_id` from `oc_filecache` `fc` join `oc_mounts` on `storage` = `storage |    0.000 |
| 53 | root     | localhost | NULL     | Query   |    0 | starting     | SHOW PROCESSLIST                                                                                     |    0.000 |
+----+----------+-----------+----------+---------+------+--------------+------------------------------------------------------------------------------------------------------+----------+

Hmm, the query seems to be stuck, as Time keeps increasing. Maybe something to ask on the ownCloud forum, to find out what the reason could be.

Do you maybe have an enormous number of files stored in your ownCloud that change often (because of rsync)? Maybe the file-scanning process just takes that long?
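To test that theory, one could check how large the file cache has grown and trigger the scan manually so its progress is visible. A sketch; the occ path matches the config.php location mentioned earlier, and www-data is assumed as the web server user:

```shell
# Row count of the ownCloud file cache; a count in the millions would
# support the "scan takes forever" theory. -N suppresses the header so
# only the number is printed.
rows=$(mysql -N -e "SELECT COUNT(*) FROM owncloud.oc_filecache;")
echo "oc_filecache rows: $rows"

# Run the file scan in the foreground to watch it work:
sudo -u www-data php /var/www/owncloud/occ files:scan --all
```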

Thank you for the help and suggestions. I will pursue it on the ownCloud forum.

I thought about the possibility of it taking a long time to process the files. All data is on an external hard drive that I regularly swap out while storing the other drive offsite. The drive has 2.0 TB on it. However, it always seems to be idle while the CPU usage is high. I can feel that it is not spinning.

If there is something it just needs to get through that would be fine if it was limited to using one core. But when I first found this was happening, all four cores were maxed out and the processor was too hot.

You could try keeping it running for a while and waiting a couple of days to see how it goes.

Yeah, I read about some ownCloud users, also on an RPi, for whom the scanning alone took around 2 days.


Sounds like a good suggestion. The only issue is if it starts using all four cores again and gets the processor too hot. I installed the cpulimit program and have used it to (hopefully) limit the process to 100% of one core. So I'll let it run like that for several days and see what happens.
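For reference, a sketch of that limiting; cpulimit has to be re-applied after each reboot (the PID changes), while a systemd CPU quota persists across service restarts:

```shell
# Throttle the running mariadbd to roughly one core's worth of CPU
# (100% = one core on a 4-core Pi); -b backgrounds cpulimit itself.
cpulimit -p "$(pidof mariadbd)" -l 100 -b

# More durable alternative: a systemd resource-control quota that
# follows the service rather than a single PID.
systemctl set-property mariadb.service CPUQuota=100%
```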