I am running a Quarkus Operator SDK operator as a packaged Docker image deployed to Kubernetes (NOT dev mode). Over time, jvm_threads_live_threads increases in the running pod. A thread dump shows many WAITING threads with names like "pool-10-thread-1", "pool-11-thread-1", etc., which suggests that executor/thread pools are created repeatedly and never shut down.
This happens in a running pod (same process/PID), not only across restarts.
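If it matters, my understanding is that jvm_threads_live_threads is Micrometer's jvm.threads.live gauge, which reads ThreadMXBean.getThreadCount(), so the growth can also be cross-checked in-process with a minimal sketch like this (the class name is just for illustration):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Minimal sketch: read the JMX counters that (as far as I understand) back the
// jvm.threads.* gauges, to confirm the metric matches the actual thread count.
public class LiveThreadCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("live threads: " + threads.getThreadCount()
                + ", peak: " + threads.getPeakThreadCount()
                + ", daemon: " + threads.getDaemonThreadCount());
    }
}
```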
Expected behavior
A stable operator should not continuously create new executor pool threads during normal operation. jvm_threads_live_threads should remain roughly steady after startup.
Actual behavior
jvm_threads_live_threads increases over time while the pod keeps running.
- Thread dump shows many WAITING threads with names like:
  "pool-10-thread-1" prio=5 tid=33 WAITING
  "pool-11-thread-1" ... WAITING
- Growth correlates with repeated “recompilation/redeploy” cycles and/or operator runtime events (watch reconnect, leader election, etc.).
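For clarity, the only generic pattern I know of that produces exactly this naming and state is a JDK executor that is created per event and never shut down. A purely hypothetical illustration (not code from the SDK or from my reconciler, class and method names made up):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical illustration only: each call builds a fresh pool with the
// default thread factory, so its worker shows up as "pool-N-thread-1" and
// stays WAITING forever because shutdown() is never called.
public class LeakyExecutorIllustration {
    void onEvent() {
        ExecutorService executor = Executors.newFixedThreadPool(1); // new "pool-N" per call
        executor.submit(() -> { /* short-lived task */ });
        // missing executor.shutdown() -> one extra idle thread per event
    }
}
```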
Reproducible application:

- Dump screenshot:

- Heap dump attached: heapdump-pingpong.zip
If this behavior is expected or caused by a specific configuration, I would appreciate guidance on how to debug or tune the operator so it does not keep accumulating pool-* threads. If it is not expected:

- Are there recommended places in the Operator SDK / Quarkus / Fabric8 stack where I should investigate further?
- What additional diagnostics (logs, thread dumps, configuration checks) would help identify whether this is a leak, a misconfiguration, or intentional behavior (e.g., multiple watchers/executors)? I've included a sketch of one such diagnostic below.

Thank you in advance for any pointers.
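For concreteness, the diagnostic I have in mind is a periodic log of live-thread counts grouped by executor name prefix. A minimal sketch, assuming the quarkus-scheduler extension and jakarta.* packages are available (the bean name is my own):

```java
import io.quarkus.scheduler.Scheduled;
import jakarta.enterprise.context.ApplicationScoped;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import org.jboss.logging.Logger;

// Hypothetical diagnostic bean: logs live-thread counts grouped by executor
// prefix once a minute, so growth can be correlated with operator events
// (watch reconnects, leader election, reconciles) in the same log stream.
@ApplicationScoped
public class ThreadGrowthLogger {

    private static final Logger LOG = Logger.getLogger(ThreadGrowthLogger.class);

    @Scheduled(every = "1m")
    void logThreadCounts() {
        Set<Thread> live = Thread.getAllStackTraces().keySet();
        Map<String, Integer> byPrefix = new TreeMap<>();
        for (Thread t : live) {
            // "pool-10-thread-1" -> "pool-10"; other threads keep their full name
            String prefix = t.getName().replaceAll("-thread-\\d+$", "");
            byPrefix.merge(prefix, 1, Integer::sum);
        }
        LOG.infof("live threads: %d, by prefix: %s", live.size(), byPrefix);
    }
}
```

If the SDK or Quarkus already exposes something equivalent, I'm happy to use that instead.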