Hi @ingoratsdorf - not sure if you can help here? I think higher RAM use is expected as the data grows, since we cache some data for performance reasons and to reduce disk reads/writes. I'd recommend raising the container's memory limit to at least 300 MB, or 512 MB to be safe.
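
For reference, a minimal sketch of raising that limit in Docker Compose (the service and image names here are assumptions; adjust to your setup):

```yaml
services:
  netalertx:
    image: ghcr.io/jokob-sk/netalertx   # assumed image name; use your own
    mem_limit: 512m                     # raised from 256m as suggested above
```

With plain `docker run`, the equivalent is the `--memory=512m` flag.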
---
Hello,

I run NetAlertX in a Docker container. Everything was fine for a few days: RAM usage was steady at 160 MB with some peaks at 185 MB, so I set a Docker memory limit of 256 MB.

That limit has now been reached (I wasn't checking the trend every day; it's the holidays) and the container stopped. When I restart it, usage jumps to 250 MB within a few minutes and it soon dies by hitting the limit again.

The database is 50 MB (and presumably isn't held entirely in RAM), and the largest log file is 23 MB (which shouldn't be loaded into RAM either; for reference, my log files live in bind-mounted volumes on disk, not in tmpfs).

So I wonder what could be using the RAM in the Python server (`ps` shows the Python server is the main consumer), and what can I do about it?

As a last resort, how could I profile the Python code (I mean down to the function level) to find the memory hog? One possible starting point is sketched below. Has anyone prior experience with its RAM usage?
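
A minimal sketch using Python's standard-library `tracemalloc` (where exactly to hook this into the server is an assumption; it would need to be added near NetAlertX's start-up code):

```python
import tracemalloc

tracemalloc.start(25)  # record up to 25 stack frames per allocation

def log_top_allocations(limit: int = 10) -> None:
    """Print the source lines currently holding the most memory."""
    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics("lineno")[:limit]:
        print(stat)

# Call log_top_allocations() periodically (e.g. from the server's main
# loop) and compare successive dumps to see which lines keep growing.
```

Comparing two snapshots with `snapshot.compare_to(older_snapshot, "lineno")` shows the growth between them, which is usually more telling than a single dump.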