Hmm, I don’t know Python syntax, but in the first script at least I see an f.close(), which looks like you close the file after writing to it. And it is only a single file (right?), and Python would be badly behaved if it didn’t lock a file that is opened via f = open. So in theory there should indeed be only one file open at any time.
But as I said, I am no Python expert; perhaps it creates temp files or such, and then you would indeed face the issue.
How do you execute the script? Does it run in the background, executing the code in a loop, or is it triggered externally via cron, a systemd timer or such?
Please check htop to see whether more than one instance of the script is running in parallel. If so, you may need to increase the execution interval. Generally I would advise running the script only once at startup and doing the file writes in a loop every X seconds. Then it should also not be necessary to declare, open and close the file each time; just do that once at script start, as in the sketch below.
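Roughly something like this is what I mean (just a sketch, I don’t know your actual file name or what you write, so data.csv and the written line are made up):

```python
import time

# Placeholder name; use whatever file your script actually writes to.
f = open("data.csv", "a")   # open once at script start, in append mode

while True:
    # This write is made up; put your real logging/CSV line here.
    f.write("some measurement\n")
    f.flush()               # push the data to disk without closing the file
    time.sleep(2)           # your write interval
```

That way only one file descriptor is in use for the whole runtime, no matter how often the loop runs.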
Hi, there are no other instances in htop; it runs from start-up in an infinite loop.
The point is it never closes the file, so eventually it reaches the limit of 1024 open files that I see in ulimit.
As a temporary solution I now run the loop once per minute instead of every 2 seconds as before, so over the 8 hours it needs to run (about 480 iterations) it won’t reach the limit of 1024, and there is no crash.
Interesting, so it seems that f.close() does not really close the file, or at least does not reduce that count towards the limit. EDIT: Or the file is opened once per written line without ever being closed, and that is the issue? We would need to see the whole code with the loop to check whether that is the case, as it seems to have been for the guy in the linked topic.
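If the loop looks roughly like this (pure guess, we’d need the full code to be sure; the file name and the written line are made up), then every iteration opens a fresh handle and nothing ever closes it:

```python
import time

while True:
    f = open("data.csv", "a")      # a fresh file object every iteration
    f.write("some measurement\n")  # placeholder for the real write
    # no f.close() here: cleanup is left to the garbage collector, so the
    # open descriptors can pile up until the ulimit of 1024 is reached
    time.sleep(2)
```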
Yep, then the solution is as I suggested above: open/declare the file and the writer once before the loop starts, and then just write to it within the loop.
Perhaps there is also some sort of garbage collector that only really frees the descriptors counted towards the limit some time after f.close(), but the approach above sounds much cleaner anyway, especially when writing at intervals of a few seconds.