DietPi-RAMlog is not a good solution for small systems

Hello there,

I’m investigating DietPi, for use in “small” systems. (Looks interesting.) But I cannot really comprehend the “DietPi-RAMlog” component.

The DietPi-RAMlog will delete the logs every hour. This means that:

  • The logs are almost useless. (If you log in at the wrong time, all logs will be gone.)
  • Log spam can bring down the system.

Btw. the more widespread rsyslog / logrotate is just as bad. However, before rsyslog, most distributions used syslogd. (Why they switched to rsyslog, I have no idea. It’s not like syslogd is very huge and bulky. The one I’m using is literally 200 lines.)

Syslogd has a builtin feature for “rotating”.
Here’s the busybox version: busybox/syslogd.c at master · brgl/busybox · GitHub
(It’s the first feature.)
And here’s the big syslogd: GitHub - troglobit/sysklogd: BSD syslog daemon and syslog()/syslogp() API for Linux, RFC3164 + RFC5424

Builtin “rotating” has a lot of advantages:

  • It cannot fail.
  • It cannot break the max memory usage.
  • It will not bring down the system, if you spam the log.
  • It cannot forget to run the “rotate”.
  • You will know exactly how much ram/flash you’ll be using on logs.

Also, syslogd is not a custom-built “DietPi-RAMlog”. Syslogd is old, tried, and tested through the ages.
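To make the bounded-memory claim concrete, here is a toy shell re-implementation of the size-based rotation that busybox syslogd performs internally (via its `-s` max-size and `-b` backup-count flags); the file names and limits here are made up purely for the demo:

```shell
#!/bin/sh
# Toy size-based log rotation, mimicking what syslogd does before each write.
# With MAX_KB and KEEP fixed, total usage is bounded by roughly MAX_KB*(KEEP+1).
LOG=/tmp/demo.log
MAX_KB=1     # rotate once the live file reaches this size (KiB)
KEEP=3       # number of rotated files to keep (.1 .. .3)

rm -f "$LOG" "$LOG".*
: > "$LOG"

log_line() {
    size=$(wc -c < "$LOG")
    if [ "$size" -ge $(( MAX_KB * 1024 )) ]; then
        # shift demo.log.2 -> demo.log.3, demo.log.1 -> demo.log.2, ...
        i=$KEEP
        while [ "$i" -gt 1 ]; do
            [ -f "$LOG.$((i - 1))" ] && mv "$LOG.$((i - 1))" "$LOG.$i"
            i=$((i - 1))
        done
        mv "$LOG" "$LOG.1"   # the copy beyond KEEP was just overwritten
    fi
    printf '%s\n' "$1" >> "$LOG"
}

# Even a long spam loop cannot exceed the bound:
i=0
while [ "$i" -lt 5000 ]; do
    log_line "spam $i"
    i=$((i + 1))
done
ls "$LOG"* | wc -l   # never more than KEEP+1 files
```

However long the spam loop runs, disk/RAM usage stays capped at roughly MAX_KB × (KEEP + 1); the real daemon gives the same guarantee in C, per write, with no external cron job involved.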

Hello and thank you for your post.
You can change the logging behaviour with dietpi-config. There are two types of logging to the RAM and one option for persistent logging / log rotate.

The “full logging” option however installs rsyslog and logrotate indeed.

The default RAMlog option is indeed meant to allow viewing logs directly after an error has occurred or been triggered. There is another RAMlog option which writes logs to a different dir on disk before clearing them.

The benefit really is zero disk writes, slightly enhanced performance, and longer SD card lifetime that way.

Not sure whether syslogd is better. It is rarely used nowadays, and I’m not aware of any issues with rsyslog and logrotate. But of course syslogd is more lightweight.

However, we actually aim to switch all logging to journald, mostly done already: journalctl
It must run anyway for systemd to work and any other logging daemon just doubles system logs and adds overhead. So as long as no plain text log files are explicitly wanted, I’d always try logging to and reading from journald.
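For anyone tuning this: journald’s RAM footprint and burst behaviour are configurable in `/etc/systemd/journald.conf`. The keys below are from journald.conf(5); the values are only illustrative, not DietPi’s defaults:

```ini
[Journal]
Storage=volatile          # keep the journal in /run (RAM) only
RuntimeMaxUse=50M         # cap the RAM journal, similar to a RAMlog tmpfs size
RateLimitIntervalSec=30s  # rate limiting: per service, at most
RateLimitBurst=10000      # this many messages per interval, then drop
```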

Have you tried a simple “while(1) do_some_random_logging()”?
(Swap should be disabled, as on all flash-based systems.) Maybe they’ve solved that kind of spam, but the use of cron / logrotate indicates otherwise.

Indeed. But the “OverlayFS / read-only” feature will solve that as well (with rsyslog). And it’s not a bad feature to use anyway. (Although I prefer the more manual root-ro script approach.)

I’ve never really given much notice to journalctl. It seems to have some nice features (auto rotation). But it also says “… and also forwarding messages to existing syslog implementations”. If that’s the case, it just adds to the problem, really. But maybe that’s configurable?
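For what it’s worth, that forwarding is indeed configurable; per journald.conf(5) there is a dedicated switch, which is off by default on current systemd:

```ini
[Journal]
ForwardToSyslog=no   # do not duplicate messages into a separate syslog daemon
```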

And ofc, your logs should not be written to flash either :wink:

Why would someone write such code for an unconditional endless logging burst? That breaks any logging system sooner or later, regardless of whether it logs to RAM or disk, with limited size or not. And I was arguing from the end-user perspective, using software that usually does no such logging bursts (given no debug flags are set), not from the developer perspective, which cannot rely on any specific system logging being used anyway. Of course system admins may also write their own scripts for their own systems, using syslog daemons, but that’s rare, I guess.

On DietPi, swap is configured to be used only when really required. So as long as you do not exceed your physical RAM, the size of the swap file is irrelevant, it is simply not used. However, of course one might want to free the disk space by consequently disabling the swap file then :wink:. If one however runs a system where during short periods RAM usage exceeds physical RAM, then one simply requires a swap file. I personally wouldn’t run a system where this is too regularly the case, but extend RAM or reduce tasks then, but some users may simply not have the option, or it really is a short daily memory usage peak only that does not involve much overall disk writes, like Pi-hole (very low base RAM usage) doing Gravity updates (can exceed 512 MiB of RPi Zero W, very often used as Pi-hole hardware).

But not sure why it’s relevant for the logging question? We either log to disk or to an always limited RAM space, limited not by the system memory but by the individual tmpfs size (50 MiB by default for DietPi-RAMlog) or by daemon-internal limits (which journald has, including rate limiting to protect against bursts). So the swap space is never affected by logging, unless the daemon is manually (mis)configured.
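As an illustration of where that limit comes from: a tmpfs cap is a property of the mount itself, e.g. via an /etc/fstab line like the following (illustrative only, not DietPi’s actual entry):

```
tmpfs  /var/log  tmpfs  size=50M,noatime,nosuid,nodev  0  0
```

Writes beyond the `size=` cap fail with “no space left on device” instead of consuming more RAM.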

Okay, that is a very specific use case, if you do not intend to write any persistent data, change no settings, do no updates/maintenance etc. on a regular basis, like for sensor or monitoring systems and such. But I guess for at least 95% of all systems with typical DietPi use cases, overlayfs is not feasible, or at least causes more issues than it solves, e.g. if one regularly needs to disable => reboot => enable => reboot whenever persistent changes need to be made.

Finally DietPi and its software implementations are exactly designed to minimise disk writes, where RAMlog is one part of, the dedicated userdata directory another, the slim base image one etc. Of course it doesn’t fit each and every use case, but it should cover the most common use cases, at least that’s the aim :slightly_smiling_face:.

When programs fail, they tend to do unintended stuff. Endless cycles of “error logs” are very common.

Could be. It was relevant enough, though, for Raspberry Pi OS to build it into their rather limited config menu. So it can’t be that exotic.

We’re using similar techniques on all of our production systems. (Especially Windows units.)

Btw. it doesn’t mean that all writes are discarded. (Although often it does.) But data storage etc. is of course writable.

No, sorry about that. My point was that if anyone were going to test these things, they should take care. A (random) log burst will eat up all your RAM (if your logs are in RAM). When this happens, your system might decide to swap, which on small systems means they become almost useless, and if your swap file is on flash (as is standard), it will rather quickly destroy it. (Some flash only has 10k erase cycles.)
Also, the OOM killer will not help in this case. (Unless they’ve made some kind of change.) The OOM killer doesn’t kill filesystems.
… so, be careful.

In such a case, a tmpfs with limited space is much better than one that could fill the whole RAM or disk, isn’t it? However, in my experience it is rather a rare case; good software is written to error out instead of staying in a loop, and systemd (the wrapper service) by default exits the restart loop after some attempts. But of course such software exists, same as simply very verbose logging software. That’s why with DietPi-RAMlog logs are cleared every hour.

Of course as an option it’s nice, same as it’s an option to disable DietPi-RAMlog for the cases where it makes sense.

I know how overlayfs can work. It is not feasible to add each and every directory, where each and every software in our catalogue may write to, as a write-through dir. Some are mixed binary + config + data directories. And of course in most cases I guess one is more interested in keeping software up-to-date easily, without requiring to disable the overlayfs first, than in assuring that nothing unintended can be written, especially when in practice those unintended locations are anyway never written to unless you explicitly run a command to do so. But it really depends on the use case, of course, and which part of system security you trust more or give more weight etc. I’m really just talking about the most reasonable “default” setup.

No, such a burst will error out once the tmpfs is full with its 50 MiB default size for DietPi-RAMlog. As I said above, that is actually the benefit of a limited-size tmpfs as RAM log solution compared to e.g. systemd-journald, which can log to RAM as well (does by default on DietPi, not anymore by default on Debian) but has a much higher /run tmpfs limit of usually 512 MiB, which can be more critical on low-RAM SBCs. It also has a 10% free RAM limit, so it cannot fill the RAM entirely, but at 90% usage, of course, another peak can then lead to OOM kills much more easily, and failing system logs are usually a bigger problem than failing service/software logs (those in /var/log).
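The “errors out” behaviour is easy to demonstrate without actually filling a tmpfs: on Linux, writes to the standard `/dev/full` device always fail with ENOSPC, exactly as writes to a full tmpfs would:

```shell
# /dev/full is a standard Linux device whose writes always fail with ENOSPC,
# mimicking a completely full 50 MiB RAMlog tmpfs.
if echo "one more log line" > /dev/full 2>/dev/null; then
    echo "write succeeded (unexpected)"
else
    echo "write failed with ENOSPC, as on a full tmpfs"
fi
```

A logging daemon hitting this condition simply drops or errors on the write; the rest of the system’s RAM is untouched.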

Log2ram seems to work VERY well; it can even install and run a zram setup via its config.

I believe that would be a good option for an addon as well.