MariaDB not starting - causing Nextcloud Error 500

I was trying to figure out why my Nextcloud instance started giving me an Error 500 when I found out that MariaDB isn’t starting and keeps restarting itself in dietpi-services. I can’t figure out why, though, or where to look for logs.

Any ideas on where to start so I could potentially fix this problem?

Many thanks for your report.

Please paste the output of the following:

journalctl -u mariadb
tail -20 /var/log/mysql/error.log

journalctl -u mariadb reveals this being repeated over and over again:

Aug 01 18:26:33 Ouroboros systemd[1]: mariadb.service: Service RestartSec=5s expired, scheduling restart.
Aug 01 18:26:33 Ouroboros systemd[1]: mariadb.service: Scheduled restart job, restart counter is at 24.
Aug 01 18:26:33 Ouroboros systemd[1]: Stopped MariaDB 10.3.22 database server.
Aug 01 18:26:33 Ouroboros systemd[1]: Starting MariaDB 10.3.22 database server...
Aug 01 18:26:33 Ouroboros mysqld[3152]: 2020-08-01 18:26:33 0 [Note] /usr/sbin/mysqld (mysqld 10.3.22-MariaDB-0+deb10u1) starting as process 3152 ...
Aug 01 18:26:34 Ouroboros systemd[1]: mariadb.service: Main process exited, code=killed, status=6/ABRT
Aug 01 18:26:34 Ouroboros systemd[1]: mariadb.service: Failed with result 'signal'.
Aug 01 18:26:34 Ouroboros systemd[1]: Failed to start MariaDB 10.3.22 database server.
Aug 01 18:26:39 Ouroboros systemd[1]: mariadb.service: Service RestartSec=5s expired, scheduling restart.
Aug 01 18:26:39 Ouroboros systemd[1]: mariadb.service: Scheduled restart job, restart counter is at 25.
Aug 01 18:26:39 Ouroboros systemd[1]: Stopped MariaDB 10.3.22 database server.
Aug 01 18:26:39 Ouroboros systemd[1]: Starting MariaDB 10.3.22 database server...
Aug 01 18:26:39 Ouroboros mysqld[3251]: 2020-08-01 18:26:39 0 [Note] /usr/sbin/mysqld (mysqld 10.3.22-MariaDB-0+deb10u1) starting as process 3251 ...
Aug 01 18:26:40 Ouroboros systemd[1]: mariadb.service: Main process exited, code=killed, status=6/ABRT
Aug 01 18:26:40 Ouroboros systemd[1]: mariadb.service: Failed with result 'signal'.
Aug 01 18:26:40 Ouroboros systemd[1]: Failed to start MariaDB 10.3.22 database server.
Aug 01 18:26:45 Ouroboros systemd[1]: mariadb.service: Service RestartSec=5s expired, scheduling restart.
Aug 01 18:26:45 Ouroboros systemd[1]: mariadb.service: Scheduled restart job, restart counter is at 26.
Aug 01 18:26:45 Ouroboros systemd[1]: Stopped MariaDB 10.3.22 database server.
Aug 01 18:26:45 Ouroboros systemd[1]: Starting MariaDB 10.3.22 database server...
Aug 01 18:26:46 Ouroboros mysqld[3341]: 2020-08-01 18:26:46 0 [Note] /usr/sbin/mysqld (mysqld 10.3.22-MariaDB-0+deb10u1) starting as process 3341 ...
Aug 01 18:26:47 Ouroboros systemd[1]: mariadb.service: Main process exited, code=killed, status=6/ABRT
Aug 01 18:26:47 Ouroboros systemd[1]: mariadb.service: Failed with result 'signal'.
Aug 01 18:26:47 Ouroboros systemd[1]: Failed to start MariaDB 10.3.22 database server.

The error.log shows this:

Resource Limits:
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             30493                30493                processes 
Max open files            16364                16364                files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       30493                30493                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us        
Core pattern: core

It looks like MariaDB is being killed by something, but I can’t tell by what…

Please restart MariaDB and post the complete error.log:

systemctl restart mariadb.service
cat /var/log/mysql/error.log
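
You could also quickly check whether the kernel OOM killer was involved (although status=6/ABRT usually means mysqld aborted itself rather than being killed externally). Nothing DietPi-specific is assumed here:

dmesg | grep -iE 'oom|out of memory'
journalctl -k | grep -i oom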

This keeps being repeated in the error.log:

2020-08-02 10:16:15 0 [Note] InnoDB: Using Linux native AIO
2020-08-02 10:16:15 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-08-02 10:16:15 0 [Note] InnoDB: Uses event mutexes
2020-08-02 10:16:15 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-08-02 10:16:15 0 [Note] InnoDB: Number of pools: 1
2020-08-02 10:16:15 0 [Note] InnoDB: Using generic crc32 instructions
2020-08-02 10:16:15 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-08-02 10:16:15 0 [Note] InnoDB: Completed initialization of buffer pool
2020-08-02 10:16:15 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2020-08-02 10:16:15 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=3455401268
2020-08-02 10:16:15 0 [Note] InnoDB: Starting final batch to recover 21 pages from redo log.
2020-08-02 10:16:16 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2020-08-02 10:16:16 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2020-08-02 10:16:16 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2020-08-02 10:16:16 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2020-08-02 10:16:16 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2020-08-02 10:16:16 0 [Note] InnoDB: 10.3.22 started; log sequence number 3455410844; transaction id 8707007
2020-08-02 10:16:16 0 [Note] InnoDB: Loading buffer pool(s) from /mnt/f28f98de-752b-4d8c-81d2-982c7b5f037b/dietpi_userdata/mysql/ib_buffer_pool
2020-08-02 10:16:16 0 [ERROR] InnoDB: Space id and page no stored in the page, read in are [page id: space=12, page number=1505], should be [page id: space=12, page number=1501]
2020-08-02 10:16:16 0 [Note] Plugin 'FEEDBACK' is disabled.
2020-08-02 10:16:16 0 [Note] Recovering after a crash using tc.log
2020-08-02 10:16:16 0 [ERROR] Bad magic header in tc log
2020-08-02 10:16:16 0 [ERROR] Crash recovery failed. Either correct the problem (if it's, for example, out of memory error) and restart, or delete tc log and start mysqld with --tc-heuristic-recover={commit|rollback}
2020-08-02 10:16:16 0 [ERROR] Can't init tc log
2020-08-02 10:16:16 0 [ERROR] Aborting

2020-08-02 10:16:16 0x906fa3e0  InnoDB: Assertion failure in file /build/mariadb-10.3-GK21G6/mariadb-10.3-10.3.22/storage/innobase/btr/btr0cur.cc line 310
InnoDB: Failing assertion: btr_page_get_next(latch_leaves.blocks[0]->frame) == page_get_page_no(page)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: https://mariadb.com/kb/en/library/innodb-recovery-modes/
InnoDB: about forcing recovery.
200802 10:16:16 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, 
something is definitely wrong and this may fail.

Server version: 10.3.22-MariaDB-0+deb10u1
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=0
max_threads=153
thread_count=5
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 466218 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x932007a8
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x906f9d24 thread_stack 0x49000

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x0): 
Connection ID (thread ID): 1
Status: NOT_KILLED

Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on

The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
Writing a core file...
Working directory at /mnt/f28f98de-752b-4d8c-81d2-982c7b5f037b/dietpi_userdata/mysql
Resource Limits:
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             30493                30493                processes 
Max open files            16364                16364                files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       30493                30493                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us        
Core pattern: core

Please run:

mv /mnt/dietpi_userdata/mysql/tc.log /mnt/dietpi_userdata/mysql/tc.save
systemctl restart mariadb.service

That didn’t seem to fix it, sadly. It still has the same error in the mysql log.
Do you mind quickly letting me know what the tc.log file is?

tc.log is used to perform crash recovery, but in your case it was corrupted — that’s why it needed to be moved aside. It’s related to this error:

[ERROR] Bad magic header in tc log
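
For reference, the startup log itself points to a fallback if moving the file aside alone is not enough: starting mysqld once with --tc-heuristic-recover. This is a rough sketch only — run it after a backup, the commit/rollback choice depends on your data, and the server should apply the recovery and exit before you start the service normally again:

systemctl stop mariadb
/usr/sbin/mysqld --user=mysql --tc-heuristic-recover=rollback
systemctl start mariadb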

Please restart MariaDB and post the error.log again.

The error log looks much the same, with it repeating this message:

2020-08-02 11:02:55 0 [Note] InnoDB: Using Linux native AIO
2020-08-02 11:02:55 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-08-02 11:02:55 0 [Note] InnoDB: Uses event mutexes
2020-08-02 11:02:55 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-08-02 11:02:55 0 [Note] InnoDB: Number of pools: 1
2020-08-02 11:02:55 0 [Note] InnoDB: Using generic crc32 instructions
2020-08-02 11:02:55 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-08-02 11:02:55 0 [Note] InnoDB: Completed initialization of buffer pool
2020-08-02 11:02:55 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2020-08-02 11:02:55 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=3455401268
2020-08-02 11:02:55 0 [Note] InnoDB: Starting final batch to recover 21 pages from redo log.
2020-08-02 11:02:56 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2020-08-02 11:02:56 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2020-08-02 11:02:56 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2020-08-02 11:02:56 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2020-08-02 11:02:56 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2020-08-02 11:02:56 0 [Note] InnoDB: Waiting for purge to start
2020-08-02 11:02:56 0 [ERROR] InnoDB: Space id and page no stored in the page, read in are [page id: space=12, page number=1505], should be [page id: space=12, page number=1501]
2020-08-02 11:02:56 0x906fa3e0  InnoDB: Assertion failure in file /build/mariadb-10.3-GK21G6/mariadb-10.3-10.3.22/storage/innobase/btr/btr0cur.cc line 310
InnoDB: Failing assertion: btr_page_get_next(latch_leaves.blocks[0]->frame) == page_get_page_no(page)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: https://mariadb.com/kb/en/library/innodb-recovery-modes/
InnoDB: about forcing recovery.
200802 11:02:56 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, 
something is definitely wrong and this may fail.

Server version: 10.3.22-MariaDB-0+deb10u1
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=0
max_threads=153
thread_count=4
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 466218 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0xa4400848
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x906f9d24 thread_stack 0x49000

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x0): 
Connection ID (thread ID): 4
Status: NOT_KILLED

Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on

The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
Writing a core file...
Working directory at /mnt/f28f98de-752b-4d8c-81d2-982c7b5f037b/dietpi_userdata/mysql
Resource Limits:
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             30493                30493                processes 
Max open files            16364                16364                files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       30493                30493                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us        
Core pattern: core

Puh, it looks like your database got corrupted.

Do you have a backup from your NextCloud database?

Oooooh nooooo. This is really bad, since my Gitea instance isn’t loading because of it either.
Uh, I don’t know if there are any database backups, or at least I don’t know where they’d be saved (yeah, I know, no backups = dumb). If it makes any difference, I did very recently upgrade Nextcloud to the latest version. Do you think that might have something to do with it, and might there be clues about what happened in a log somewhere?

Also, is there a way to fix the database before I move on with any deeper repairs? I looked around and apparently I could use mysqlcheck. Would that be safe in this instance?

First thing before doing anything: create a backup/copy of your database directory. If things go wrong (even worse than they already are :wink: ) you’ll be able to copy the data back.

/mnt/f28f98de-752b-4d8c-81d2-982c7b5f037b/dietpi_userdata/mysql
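
Something simple like this would do, assuming there is enough free space on the drive (the .bak name is just an example):

systemctl stop mariadb
cp -a /mnt/dietpi_userdata/mysql /mnt/dietpi_userdata/mysql.bak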

One option is to post your error.log on a specialised MariaDB board. Maybe there are experts there who are more knowledgeable than I am on fixing such deep database-related issues.

https://mariadb.com/kb/en/community/+questions/
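
For reference, the recovery-modes page that the crash log links to describes innodb_force_recovery, which can sometimes let the server start just long enough to dump the data. A rough sketch, assuming the Debian default config file (adjust the path if yours differs), and only after the backup above — higher values risk further data loss:

# in /etc/mysql/mariadb.conf.d/50-server.cnf, under [mysqld]:
innodb_force_recovery = 1

systemctl restart mariadb
mysqldump --all-databases > /root/all-databases.sql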

So I looked around and I think I’m just gonna redeploy the apps and recreate the database. Do you know a quick way of doing that while keeping the old data from their dietpi_userdata folders, even with the database gone?
Thanks

You could try to run dietpi-software reinstall 114. It should force a re-installation of NextCloud and all related software packages. But I’m not sure if this will work with a corrupted database, therefore it might be good to back up all your data from /mnt/dietpi_userdata/nextcloud_data first. In the worst case, you would need to uninstall and install NextCloud completely. If the NextCloud DB has to be recreated from scratch, you will lose the link between NextCloud and the data on the OS layer. Even if the data still exist on the OS layer, NextCloud will not be able to display them. But it should be possible to get the data displayed again by running ncc files:scan --all — see the sketch below.
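
A rough sketch of that sequence, using the paths and commands mentioned above (the backup target name is just an example):

cp -a /mnt/dietpi_userdata/nextcloud_data /mnt/dietpi_userdata/nextcloud_data.bak
dietpi-software reinstall 114
ncc files:scan --all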

Sounds good! What about the Gitea instance, though? It seems like it also uses the corrupt database and won’t work without it. Will that be fine if I recreate the database and use the same dietpi_userdata folder, or?

Honestly, I don’t have much experience with Gitea and how it works.

Usually you could use the Gitea backup/restore function, but I guess this will not work due to the corrupted database.

https://docs.gitea.io/en-us/backup-and-restore/
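
For reference (and probably not usable here because of the corrupted database), the documented backup command is gitea dump, which writes a gitea-dump-*.zip of the repositories, config and database. A sketch only — the working directory and run-as user below are assumptions, check your Gitea service unit for the actual values:

cd /mnt/dietpi_userdata/gitea    # assumed data/working directory
sudo -u gitea gitea dump         # assumed user name and binary on PATH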

I’ve tried reinstalling Nextcloud, but MariaDB still fails with the same error on startup, even after the reinstall.
Do I need to delete the old database in order for this to work? I’m prepared to just restart gitea/nextcloud from scratch if I have to.

Probably you would need to run a complete uninstall, do a reboot, and then a fresh installation to set it up from scratch — roughly like the sketch below.
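
Using the NextCloud software ID mentioned earlier (I’m not certain of the Gitea ID off-hand, dietpi-software list should show it):

dietpi-software uninstall 114
reboot
dietpi-software install 114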

I know this isn’t quite on topic for this thread, but since I’m already replying in it:
I’ve simply re-flashed the DietPi distro, as I can afford to just redeploy everything - I’m the only user :stuck_out_tongue:. However, I can’t seem to stop Pi-hole from being accessible from my external domain. The 99-dietpi-pihole-block_public_admin.conf file is in the /etc/lighttpd/conf-enabled folder and I’ve enabled the lighty mod, so I don’t quite know what’s going on. There are no errors from the lighttpd syntax checker either, and the file is being loaded.
Any quick solution to this, or?

EDIT: Found my own solution! I put my main website in a subfolder in /var/www and set lighttpd to change the document root to that subfolder based on the domain. Did the same for nextcloud, too, with my own tweaks and a subdomain. Looks very slick!
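
For reference, per-domain document roots in lighttpd look roughly like this (the file name, domain and folder are placeholders):

# e.g. in /etc/lighttpd/conf-enabled/99-my-vhosts.conf
$HTTP["host"] == "www.example.com" {
    server.document-root = "/var/www/mysite"
}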

Hmm, that’s strange. I just checked it on a VM and it is working well there. The Pi-hole admin page was not reachable from the internet; I just got 403 Forbidden.

Can you do

lighty-enable-mod dietpi-pihole-block_public_admin
service lighttpd force-reload