Thoughts about Docker

What do you think about Docker? Are there real advantages to using it on a device like a Raspberry Pi? I know it can be useful on servers with a large number of services, or for software development, to get a consistent base environment and less config fiddling when you migrate to another system / environment. But what about end users who just want to run stable services and don't change much on their system (set up and forget, basically)?
I only see added complication, because it adds another layer of configuration.
What do you think?

Let me give my favorite answer :wink: > it depends

Yes, you are right, Docker adds another layer that could cause issues. It also consumes resources (around 100 MB of memory) just to have the daemon running, before counting any containers. On smaller SBCs this might already be a challenge; on a modern SBC like an RPi 4 with 4-8 GB it doesn't really matter. The biggest risk might be that if Docker fails, all your containers become unavailable as well. Another point: if several containers require a database, each of them might bring its own, which consumes more resources compared to a single database running natively.
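If you want to check that footprint on your own system, one rough way (assuming a systemd-based install with memory accounting enabled) is to ask systemd about the Docker service, and then look at per-container usage separately:

```sh
# Show the Docker daemon's own resource usage as reported by systemd
# (the Memory: line appears when cgroup memory accounting is active)
systemctl status docker --no-pager

# One-shot snapshot of what each running container consumes on top of that
docker stats --no-stream
```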

On the other hand, Docker can simplify things. In most cases, Docker containers are ready to use and users don't need to think about how to set up an application. There is no need to take care of dependencies or system settings. It can also reduce installation time, as you don't need to compile anything. And some applications are available as Docker containers only.
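To illustrate the "ready to use" point, a typical install is a single command. This is just a generic sketch; the image name, port, and volume are placeholders, not a specific recommendation:

```sh
# Pull and start a self-contained application in one step:
# -d runs it in the background, -p maps a host port to the container,
# -v keeps the app's data in a named volume across restarts/upgrades.
docker run -d --name someapp \
  -p 8080:8080 \
  -v someapp-data:/data \
  example/someapp:latest
```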

As you can see, it's not really black and white. It depends on your needs and personal preferences.


I would agree that it depends, and also on the skill set or energy the individual has to put into it.

For some things it is nice to be able to just select the software and have it install.

On the other hand, using Docker images can help make things very neat, and you can just back up your mount points (or run them from a network location, e.g. a NAS). If you ever need to move to a new device, or yours breaks, it can be as simple as installing DietPi + Docker and then starting up all the containers on the new device; your applications won't even know they moved.
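As a sketch of that migration story (the paths and hostname here are hypothetical), assuming the persistent data lives in bind mounts under a single directory:

```sh
# Copy all persistent container data to the replacement device over SSH
rsync -a /mnt/docker-data/ newpi.local:/mnt/docker-data/

# On the new device, after installing Docker, recreate the stack.
# If the containers are described in a compose file, it's one command:
docker compose up -d
```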

It also helps avoid conflicts over resources or ports, and it makes upgrading a very clean exercise (delete the old container, pull the latest image, and start it).
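That upgrade cycle looks roughly like this (the container and image names are examples only); because the data lives in a volume, it survives the container being deleted:

```sh
docker stop myapp && docker rm myapp    # delete the old container
docker pull example/myapp:latest        # pull the latest image
docker run -d --name myapp \
  -p 8080:8080 -v myapp-data:/data \
  example/myapp:latest                  # start it again on the new version
```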

The downside of Docker is that it’s another tool that you have to learn to use correctly, and while it’s extremely popular and well supported, it’s not something that was targeted at home users so it can take a depth of knowledge in things like networking, DNS, filesystems etc. if you need to do anything beyond the most basic of setups. There are benefits but the cost of that is time invested in learning/setting it up.

I run a Pi4 with about 10 containers, some are general applications I’ve downloaded the image for and some are things I’ve created myself. If you’re curious about it, one of the nice things about Docker’s isolation is that you can just install it and try playing around with it including things you may already be running natively on your OS, and just expose them on a different port while you learn what you want to do.
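For example (nginx used purely as a stand-in for whatever you already run natively), you can bring up a containerized copy on a different host port and experiment without touching the native service:

```sh
# The native service keeps port 80; the container copy answers on 8081.
docker run -d --name web-test -p 8081:80 nginx

# Poke at it, then throw it away when done:
docker rm -f web-test
```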

Basically, one mission of DietPi is to make "native" software installs, with a smaller memory footprint, as easy as Docker otherwise can be. But it isn't always possible, or is hard to achieve :wink:.

Most importantly Docker shouldn’t be seen as “the” solution for every case, like sadly some developers do, e.g. building their (simple) software all around a Docker container, instead of the other way round (which would allow a native install without Docker). It makes sense when many components need to be installed and setup to work together, which may otherwise require time to learn about, implies the risk of misconfiguration etc. Also in a highly sensitive environment one may like the additional security/sandboxing layer which Docker implies, though this can be achieved in other ways (firewall, apparmor, SELinux, interface binding, …).

The downside, aside from additional disk and RAM usage, from my side is the limited ability to configure the container and the contained software. I know this is basically one of the points behind Docker, so that users cannot break the setup and developers/maintainers do not need to deal with support related to wrong configs. But I've seen a lot of very badly configured containers where I would like to be able to fix things, or at least get some insight into the actual setup behind the components, and that is naturally not possible without building and maintaining your own container. I.e. especially for experienced admins, one of the major points behind Docker is its exact weakness, and it basically renders its usage a downside when you (think you) know better than the container maintainers how things should be set up, at least for your particular use case :wink:.
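For completeness, the "build your own container" escape hatch usually means deriving from the upstream image and overlaying your own config. A minimal sketch, with every name here hypothetical:

```sh
# Derive a fixed image from the upstream one, replacing a config file
# you disagree with. You now carry the maintenance burden described above.
cat > Dockerfile <<'EOF'
FROM example/upstream-app:latest
COPY my-fixed-app.conf /etc/app/app.conf
EOF
docker build -t upstream-app-fixed .
```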

There are good reasons not to use docker - for example, certain hardware monitoring systems, or system upgrade processes/daemons. There are also good reasons to use docker as it provides significant isolation for potentially conflicting applications/containers.

In my particular use case, I want netdata on the iron, or as close as possible to it. But I also want the ability to very easily upgrade certain applications and servers without having to worry excessively about the OS concerns (or conflicts) in doing so. Docker provides this in spades to those who know how to properly use it.

It’s a damn fine tool and very much should be included and kept updated. I don’t think it’s our job to protect people from software power tools. After all, that’s how some really amazing things get built - by pushing the conventional expectations.

–jmp


I know that with UnRaid there are "packages" for the Docker images that contain pre-set-up data locations and most of the other needed configuration, so persistent data is retained (for databases and whatnot).

Upgrading is as simple as telling it to upgrade the "package": it downloads the new Docker builds, reuses the previous config, and voilà, you're up and running and as stable as can be. The biggest caveat is that yes, it is more "user" intensive in configuration and maintenance, not a set-it-and-forget-it kind of thing. However, the versatility of the Docker "packages" comes from being maintained by a community; I think it would be hella hard to do as a single maintainer.

I use Docker and Portainer (a GUI for Docker) to build and maintain many programs. They do work well, but a beginner will flounder until they learn.
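Portainer itself ships as a container; the install follows roughly the pattern from Portainer's own docs (check upstream for the current image tag and ports before relying on this):

```sh
# Named volume for Portainer's own settings
docker volume create portainer_data

# Run Portainer CE; mounting the Docker socket lets it manage containers
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```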

There is a guy online who shows how to install/configure/update/maintain many packages on RPi and other SBCs, and he already has a huge App Template library set up:
GitHub - novaspirit/pi-hosted: Raspberry Pi Self Hosted Server Based on Docker / Portainer.io