Recent Discussions

Potential Memory Leak

Unanswered
Crème D’Argent posted this in #questions
327 messages
Crème D’Argent (OP)
I generally just need someone knowledgeable enough to tell me if I'm tripping or not.
I run a server that has an average of 20+ players, with active periods getting around 40 to 50.

Server Info:
PaperMC: Latest build of 1.21.8
Container RAM: 16gb
Allocated RAM: 12gb
Plugins: 40-ish

All run through Pelican Panel, so it's dockerized.

The memory usage started looking suspicious to me: we've been hitting the container RAM limit every 2 days or so, even though server load was never anything special.
So: since the Xms flag is 12gb, it always takes up 12gb by default. That's fine. And I'd expect another 1-2gb of overhead from native memory, so 14gb of usage would seem fine in my head. But it often goes beyond that, and leaks past the 16gb limit until it crashes. Is this normal?

The leak is slow, and heap dumps show that it's not in heap memory. It's most definitely something in native memory. Nothing else runs in the container.
What is the best way to diagnose this? I've tried playing with flags: none, Aikar's, ZGC. I'm debating whether it's Paper itself, since I'm running an older version?
I can't replicate the leak on a cloned server as it requires people actually playing.
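(For anyone landing here with the same question: HotSpot's Native Memory Tracking is the standard tool for this — start the JVM with `-XX:NativeMemoryTracking=summary` and query it with `jcmd <pid> VM.native_memory summary`, which breaks native usage down by category, including thread stacks. As a lighter in-process check, the sketch below — a hypothetical snippet, not from this server — logs live thread count and JVM non-heap usage; a steadily climbing thread count is a strong hint that thread stacks are the native-memory consumer.)

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.ThreadMXBean;

public class NativeUsageProbe {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();

        // Thread counts: if "live" keeps rising between samples, threads leak.
        System.out.println("Live threads: " + threads.getThreadCount());
        System.out.println("Peak threads: " + threads.getPeakThreadCount());

        // Non-heap here covers JVM-managed areas (metaspace, code cache);
        // it does NOT cover all native allocations — NMT is needed for that.
        System.out.println("Non-heap used MB: " + nonHeap.getUsed() / (1024 * 1024));
    }
}
```

Sampling this every few minutes and graphing the thread count is usually enough to confirm or rule out a thread leak before reaching for NMT.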
Crème D’Argent (OP)
For reference, right now, after running for 4h 30min, it is at 14.70gb usage.
Yakutian Laika
!spark
@Yakutian Laika !spark
Crème D’Argent (OP)
How will spark help with native memory usage? I've already profiled memory through it.
Nothing of help in that info, as spark only profiles heap memory
I literally even mentioned doing a heap dump
Pixiebob
We want to verify your claims and a spark report is the fastest and easiest way to check
This is standard procedure.
A lot of the time users claim they allocated X memory and their spark report shows a different story
@Pixiebob We want to verify your claims and a spark report is the fastest and easiest way to check
Crème D’Argent (OP)
Just shooting a !spark doesn't tell me that ;P I'm more than happy to provide one, though I've narrowed it down to thread allocation issues
Now what I'm unsure of is how to format the crash report so I can actually see what's being allocated in those threads
Crème D’Argent (OP)
I'm on the verge of giving up. It's abysmally hard to read the threads, and I'm not quite sure what the threshold for 'normal' is, so I'd appreciate another pair of eyes
Crème D’Argent (OP)
316 craft scheduler threads... definitely a thread leak, no?
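(Why scheduler threads would show up as a *native* leak: every Java thread reserves a native stack outside the heap — around 1 MB each with the default `-Xss` — so ~316 idle scheduler threads can pin hundreds of MB of native memory while heap dumps stay clean. A hypothetical sketch for watching this, counting live threads whose names share a prefix between samples:)

```java
import java.util.Set;

public class ThreadLeakCheck {
    // Count live threads whose name starts with the given prefix,
    // e.g. "Craft Scheduler Thread" on a Paper server.
    static long countByPrefix(String prefix) {
        Set<Thread> live = Thread.getAllStackTraces().keySet();
        return live.stream().filter(t -> t.getName().startsWith(prefix)).count();
    }

    public static void main(String[] args) {
        long schedulerThreads = countByPrefix("Craft Scheduler Thread");
        // Rough native cost: one default-sized (~1 MB) stack per thread.
        System.out.println("Scheduler threads: " + schedulerThreads
                + " (~" + schedulerThreads + " MB of stack reservations)");
    }
}
```

If that count climbs and never drops, scheduler tasks are blocking and never returning their threads to the pool.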
@Crème D’Argent 316 craft scheduler threads... definitely a thread leak, no?
Chum salmon
/spark profiler start --timeout 300 --thread *

Run this so we can see what the threads are actually doing
Chum salmon
Ping me when it's done
Crème D’Argent (OP)
@Chum salmon
For reference, at the end of this profiler, ram used was 14.3gb on the container (running for 40 minutes due to a crash)
Chum salmon
I will just point out that this crash is from either running out of RAM or running out of threads, so it's possible it's still some memory issue
But also that is a lot of threads
29 Bukkit async and 19 Folia async, plus another ~30 spawned by plugins using their own thread pools
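(A common source of plugin-owned thread leaks is creating an `ExecutorService` per event or per reload and never shutting it down — each abandoned pool keeps its threads, and their native stacks, alive forever. A hypothetical sketch of the usual fix, tying the pool's lifecycle to plugin disable:)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolLifecycle {
    // Create the pool ONCE, not per event/command/reload.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Called from the plugin's onDisable(): stop the pool so its
    // threads (and their native stacks) are actually released.
    void shutDownPool() throws InterruptedException {
        pool.shutdown();                                  // no new tasks
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
            pool.shutdownNow();                           // interrupt stragglers
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PoolLifecycle p = new PoolLifecycle();
        p.pool.submit(() -> System.out.println("task ran"));
        p.shutDownPool();
        System.out.println("pool shut down: " + p.pool.isShutdown());
    }
}
```

A plugin that skips the shutdown step leaks its whole pool on every `/reload`, which matches the "plugins using their own thread pools" count above.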
@Chum salmon I will just point out that this crash is from either running out of RAM or running out of threads, so it's possible it's still some memory issue
Crème D’Argent (OP)
I don't think it's memory, given that the container only reached 15gb, but it might be that the allocated heap got full, so...
Entirely possible
I will say, the server does have a custom plugin that uses Javalin for a webserver, but that has never been an issue, and it doesn't seem to be spawning any threads
Or well, any additional ones
None of the plugins are async except... FAWE I think?