uwjhn
April 11, 2022, 8:34am
1
I’ve been running DietPi (8.3.1) on a Raspberry Pi for years. Since last night, and without any reason I know of, the CPU load of InfluxDB (v1.8.10, git: 1.8 688e697c51fd) has increased from about nothing to roughly 110% (25% over all cores). A reboot didn’t solve the issue. Any suggestions what to log or test? The storage on the Pi is not full at all.
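A few commands that might help narrow down where the load is coming from — a rough sketch; the database name mydb is only a placeholder, and runaway series cardinality is a common cause of high CPU in InfluxDB 1.x:
$ top -b -n1 -p "$(pgrep influxd)"                            # CPU/memory of the influxd process
$ influx -execute 'SHOW DIAGNOSTICS'                          # runtime info InfluxDB reports about itself
$ influx -execute 'SHOW SERIES CARDINALITY' -database mydb    # number of series in a database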
Maybe you could ask the question on the InfluxDB community forum: https://community.influxdata.com/
I guess those folks are more specialised in analysing InfluxDB.
uwjhn
April 11, 2022, 3:20pm
3
Seems to be solved by switching to TSI indexing.
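For reference, the switch to TSI looked roughly like this — a sketch of the usual InfluxDB 1.8 procedure; config and data paths are the Debian defaults and may differ on your setup:
$ sudo systemctl stop influxdb
# in /etc/influxdb/influxdb.conf, [data] section, set:  index-version = "tsi1"
$ sudo -u influxdb influx_inspect buildtsi -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal
$ sudo systemctl start influxdb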
uwjhn
June 3, 2022, 5:56am
5
The problem was only solved temporarily. Now it’s back. “Fatal error: Out of Memory”. I noticed that the 8GB RAM of my Pi is barely used (only 700 MB). How can I change the settings of the influxDB docker container to use more of the memory?
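A quick way to check whether any memory cap is actually applied to the InfluxDB process — a sketch, assuming it really runs in a container (the container name influxdb is just a placeholder):
$ free -h                                        # overall memory usage on the Pi
$ cat /proc/"$(pgrep influxd)"/limits            # per-process limits (address space, RSS)
$ docker stats --no-stream                       # usage vs. limit, if it is a container
$ docker update --memory 4g --memory-swap 4g influxdb   # raise the limit on an existing container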
Where do you see the error message? Somewhere inside Docker or container logs? Maybe you could ask the container developer for support as well.
uwjhn
June 3, 2022, 6:23am
7
The errors are from the logs:
$ journalctl -u influxdb.service | grep "error"
Here is a short extract:
Jun 03 07:21:41 jhnPi influxd-systemd-start.sh[1640]: ts=2022-06-03T06:21:41.450109Z lvl=info msg="Error replacing new TSM files" log_id=0ar2GsGW000 engine=tsm1 tsm1_level=2 tsm1_strategy=level trace_id=0ar3TiBG001 op_name=tsm1_compact_group db_shard_id=636 error="cannot allocate memory"
Jun 03 07:21:43 jhnPi influxd-systemd-start.sh[1640]: ts=2022-06-03T06:21:43.450267Z lvl=info msg="Error replacing new TSM files" log_id=0ar2GsGW000 engine=tsm1 tsm1_level=2 tsm1_strategy=level trace_id=0ar3Tp~G000 op_name=tsm1_compact_group db_shard_id=652 error="cannot allocate memory"
Jun 03 07:21:43 jhnPi influxd-systemd-start.sh[1640]: ts=2022-06-03T06:21:43.452220Z lvl=info msg="Error replacing new TSM files" log_id=0ar2GsGW000 engine=tsm1 tsm1_level=2 tsm1_strategy=level trace_id=0ar3Tp~G001 op_name=tsm1_compact_group db_shard_id=636 error="cannot allocate memory"
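The shard IDs from the log (636, 652) can be mapped to a database and retention policy, which might show which data set triggers the failing compactions — a sketch, the path components are placeholders:
$ influx -execute 'SHOW SHARDS'
$ influx_inspect report -detailed /var/lib/influxdb/data/<database>/<rp>/<shard_id>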
Did you try to restart your system?
And is this an error message from inside the container? I understood you are running InfluxDB from Docker.
uwjhn
June 3, 2022, 6:48am
9
System restart: yes, often. It sometimes helps for a few hours. I installed InfluxDB directly via dietpi-software.
Those are basically two conflicting statements, as we don’t install InfluxDB as a Docker container.
uwjhn
June 3, 2022, 9:02am
11
Yes. I do not know why I assumed it was a Docker installation, so please forget about the Docker comment.
Sorry.
Are you using Node-RED together with InfluxDB? And are you on a 32-bit system? I found two issues (neither DietPi-related, and a bit older) describing similar problems. Maybe you can have a look.
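To check whether the system is actually 32-bit (which limits a single process to roughly 3 GB of address space, regardless of the 8 GB of RAM), something like this should do:
$ uname -m                    # aarch64 = 64-bit kernel, armv7l = 32-bit
$ getconf LONG_BIT            # word size of the userland
$ dpkg --print-architecture   # arm64 vs. armhf on Debian/DietPi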
GitHub issue (opened 07 Jul 2016, closed 17 Oct 2018, labelled “need more info”):
### Bug report
**System info:** InfluxDB 0.13.0-1, Linux softsw69 2.6.34.10-0.… 6-desktop #1 SMP PREEMPT 2011-12-13 18:27:38 +0100 x86_64 x86_64 x86_64 GNU/Linux (Suse Linux)
**Steps to reproduce:**
After almost two days of writing around 5,000 metrics per second, InfluxDB crashes. When I start the InfluxDB process again, I get this error in the log:
> [cacheloader] 2016/07/07 10:40:44 reading file /home/influxdb/wal/performance/default/2/_02089.wal, size 10485900
> [cacheloader] 2016/07/07 10:40:44 reading file /home/influxdb/wal/_internal/monitor/4/_00002.wal, size 0
> [shard] 2016/07/07 10:40:44 /home/influxdb/data/_internal/monitor/4 database index loaded in 1.725057ms
> [store] 2016/07/07 10:40:44 /home/influxdb/data/_internal/monitor/4 opened in 1.04092455s
> [cacheloader] 2016/07/07 10:40:46 reading file /home/influxdb/wal/performance/default/2/_02090.wal, size 10487370
> [cacheloader] 2016/07/07 10:40:48 reading file /home/influxdb/wal/performance/default/2/_02091.wal, size 10487395
> [cacheloader] 2016/07/07 10:40:50 reading file /home/influxdb/wal/performance/default/2/_02092.wal, size 6489923
> [cacheloader] 2016/07/07 10:40:51 reading file /home/influxdb/wal/performance/default/2/_02093.wal, size 10486224
> [cacheloader] 2016/07/07 10:40:53 reading file /home/influxdb/wal/performance/default/2/_02094.wal, size 10486823
> [cacheloader] 2016/07/07 10:40:54 reading file /home/influxdb/wal/performance/default/2/_02095.wal, size 10486442
> [cacheloader] 2016/07/07 10:40:56 reading file /home/influxdb/wal/performance/default/2/_02096.wal, size 6498702
> [cacheloader] 2016/07/07 10:40:57 reading file /home/influxdb/wal/performance/default/2/_02097.wal, size 10487287
> [cacheloader] 2016/07/07 10:40:59 reading file /home/influxdb/wal/performance/default/2/_02098.wal, size 10487150
> [cacheloader] 2016/07/07 10:41:01 reading file /home/influxdb/wal/performance/default/2/_02099.wal, size 10486054
> [cacheloader] 2016/07/07 10:41:04 reading file /home/influxdb/wal/performance/default/2/_02100.wal, size 6571972
> [cacheloader] 2016/07/07 10:41:05 reading file /home/influxdb/wal/performance/default/2/_02101.wal, size 971433
> [cacheloader] 2016/07/07 10:41:05 reading file /home/influxdb/wal/performance/default/2/_02102.wal, size 0
> [tsm1] 2016/07/07 10:41:05 beginning full compaction of group 0, 4 TSM files
> [tsm1] 2016/07/07 10:41:05 compacting full group (0) /home/influxdb/data/performance/default/2/000000336-000000006.tsm (#0)
> [tsm1] 2016/07/07 10:41:05 compacting full group (0) /home/influxdb/data/performance/default/2/000000416-000000005.tsm (#1)
> [tsm1] 2016/07/07 10:41:05 compacting full group (0) /home/influxdb/data/performance/default/2/000000464-000000005.tsm (#2)
> [tsm1] 2016/07/07 10:41:05 compacting full group (0) /home/influxdb/data/performance/default/2/000000496-000000005.tsm (#3)
> [tsm1] 2016/07/07 10:41:05 error compacting TSM files: cannot allocate memory
> [tsm1] 2016/07/07 10:41:05 beginning level 1 compaction of group 0, 2 TSM files
> [tsm1] 2016/07/07 10:41:05 compacting level 1 group (0) /home/influxdb/data/performance/default/2/000000521-000000001.tsm (#0)
> [tsm1] 2016/07/07 10:41:05 compacting level 1 group (0) /home/influxdb/data/performance/default/2/000000522-000000001.tsm (#1)
> [tsm1] 2016/07/07 10:41:05 beginning level 3 compaction of group 0, 4 TSM files
> [tsm1] 2016/07/07 10:41:05 compacting level 3 group (0) /home/influxdb/data/performance/default/2/000000500-000000003.tsm (#0)
> [tsm1] 2016/07/07 10:41:05 compacting level 3 group (0) /home/influxdb/data/performance/default/2/000000504-000000003.tsm (#1)
> [tsm1] 2016/07/07 10:41:05 compacting level 3 group (0) /home/influxdb/data/performance/default/2/000000508-000000003.tsm (#2)
> [tsm1] 2016/07/07 10:41:05 compacting level 3 group (0) /home/influxdb/data/performance/default/2/000000512-000000003.tsm (#3)
> fatal error: runtime: out of memory
>
> runtime stack:
> runtime.throw(0xce4740, 0x16)
> /usr/local/go/src/runtime/panic.go:547 +0x90
> runtime.sysMap(0xc84ef00000, 0x100000, 0x456100, 0x10efab8)
What is happening?
I know that there is still space on disk.
uwjhn
June 3, 2022, 9:30am
13
Thanks. I am already working on the “solution” from the Node-RED forum, which I also found a few days ago. It is not working yet. I’ll try to keep you updated.