Hello,
As the title says, after about 4 days of uptime my log gets flooded with the same message.
This article really cleared up the basics, but since I'm not very knowledgeable about this, I wanted to confirm.
Is this supposed to happen? My Pi isn't exposed to the internet, so it shouldn't be caused by a DDoS attack; is the table just filling up from regular traffic?
Option 1, which completely removes nf_conntrack support, seems to be the only permanent solution. I don't want to increase the max table size at the cost of memory, since the RasPi is already limited in that regard. Besides, I suppose a larger table would just give the same error once it fills up as well.
Am I risking anything major/important by proceeding? Will there be a noticeable tradeoff in features?
Thanks in advance.
If I'm not mistaken, this message comes from iptables, and I'm not sure it's a good idea to remove the limit on the maximum number of connections.
It would be better to find out why you have so many open connections. Have you checked the current number?
/sbin/sysctl net.netfilter.nf_conntrack_count
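If the count sits near the max, it also helps to see who owns the entries. A rough sketch using the `conntrack` CLI from the conntrack-tools package; since the real command needs root, a small hypothetical sample of its output stands in here:

```shell
# On a live system, pipe `sudo conntrack -L` into the awk pipeline below.
# The sample lines are made up but follow the real output format.
sample='tcp      6 431999 ESTABLISHED src=192.168.1.10 dst=93.184.216.34 sport=51234 dport=443 src=93.184.216.34 dst=192.168.1.10 sport=443 dport=51234 [ASSURED] use=1
udp      17 25 src=192.168.1.10 dst=8.8.8.8 sport=40000 dport=53 src=8.8.8.8 dst=192.168.1.10 sport=53 dport=40000 use=1
tcp      6 110 TIME_WAIT src=192.168.1.20 dst=93.184.216.34 sport=51235 dport=80 src=93.184.216.34 dst=192.168.1.20 sport=80 dport=51235 use=1'

# Each line has two src= fields (original and reply direction); keep only
# the first, then tally per source IP, busiest first.
counts=$(printf '%s\n' "$sample" |
  awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^src=/) { sub(/^src=/, "", $i); print $i; break } }' |
  sort | uniq -c | sort -rn)
printf '%s\n' "$counts"
```

Whichever IP (or local program) dominates that tally is the one filling the table.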
To reduce the number, you could try shortening the timeout values:
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 60
maybe even to a lower value.
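To make those settings survive a reboot, you could put them in a sysctl drop-in file; the path and filename below are just a convention I'd use, not anything official:

```
# /etc/sysctl.d/90-conntrack.conf — shorter conntrack timeouts
# Apply without rebooting: sudo sysctl --system
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 60
```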
https://kodeslogic.medium.com/how-to-fix-nf-conntrack-table-full-dropping-packet-a5fedc6c463d
The output of
/sbin/sysctl net.netfilter.nf_conntrack_count
is 65536, which is the same as the maximum allowed by
/sbin/sysctl -a|grep -i nf_conntrack_max
Could you please elaborate on checking open connections?
As for programs, I don't have port forwarding, so I use Pitunnel as a failsafe. Other than that, it's just Nextcloud and Deluge.
And regarding the timeout values, I've shortened them all to 60 and applied the changes with sysctl -p, but no change so far.
Unfortunately, the article you linked won't open for me at the time of writing, so I haven't been through it.
Aren't the entries supposed to be cleared automatically? Otherwise, won't they just keep piling up over time and lead to the same error?
What happens if you stop Deluge? Maybe there are too many open torrents/connections?
That… actually works!
The value seems to be dropping slowly over the hours; it's currently at 64906.
So Deluge is opening too many connections over a certain period. I suppose reducing its max global connections setting would counter this?
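For keeping an eye on it, here's a quick utilisation check I put together; the hard-coded values are just the numbers from this thread, with the real sysctl reads shown in comments:

```shell
# How full is the conntrack table? On a live system, replace the
# hard-coded values with the commented sysctl reads.
count=64906   # $(sysctl -n net.netfilter.nf_conntrack_count)
max=65536     # $(sysctl -n net.netfilter.nf_conntrack_max)

# Integer percentage of the table currently in use.
pct=$(( 100 * count / max ))
echo "conntrack table ${pct}% full (${count}/${max})"
```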
Thanks for your help!
Not sure if there is something you can do about it in Deluge itself.
Alright, I'll just ask in the Deluge forums.