Expose DefaultSerializerProvider.flushCachedSerializers()? (or prevent Metaspace leaking without it) #1905
Comments
Interesting idea. Yes, ideally caches should always be size-bound. My main concern is that construction of serializers/deserializers is really, really expensive, so it's a bit different from things like reusing buffers. I think use of weak references may not work quite as well here (compared to buffer recycling), but a combination of a bounded cache and the ability to clear it seems like an improvement. One thing to note, too, which may be obvious: simply dropping the ObjectMapper and constructing a new one also releases all cached serializers. As to how to expose flush: I don't think it should be part of the general SerializerProvider API.
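For illustration, a minimal sketch of the "drop and recreate the ObjectMapper" approach mentioned above; the holder class and its method names are illustrative application code, not part of Jackson's API:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

// Illustrative holder: replacing the mapper lets the old instance, and
// every serializer/deserializer it has cached, become garbage-collectable.
public class MapperHolder {
    private volatile ObjectMapper mapper = new ObjectMapper();

    public ObjectMapper current() {
        return mapper;
    }

    // Call occasionally (e.g. every N requests) to discard all cached
    // serializers together with the mapper instance that owns them.
    public void recreate() {
        mapper = new ObjectMapper();
    }
}
```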
Ok. So... first things first: bound caching should be done, but for 3.0.
Second part on exposing... after thinking about this a bit, I think that the existing exposure should work. This is quite specialized functionality, and the limited exposure is intentional. Given this, I think the situation as-is should remain for 2.9: I don't want to add an API change that is not strictly needed, even though I understand it'd be more convenient not to need the cast.
Would you consider the possibility of letting the outside world define the implementation of the cache?
@LukeButtersFunnelback I am not dead set against such a possibility. There'd be many details wrt the API to expose for 3.0, to be discussed, but I am open to such a possibility, yes.
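Purely for illustration, one rough, hypothetical shape a user-supplied cache contract could take; none of these types exist in Jackson, and any eventual 3.0 API would likely look different:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical only: neither this interface nor the implementation is part
// of Jackson; they just sketch what "letting the outside world define the
// cache" used by SerializerCache/DeserializerCache might mean.
public interface CacheBackend<K, V> {
    V get(K key);
    void put(K key, V value);
    void clear();
}

// One possible user-supplied implementation: an LRU map bounded to maxEntries.
class LruCacheBackend<K, V> implements CacheBackend<K, V> {
    private final Map<K, V> map;

    LruCacheBackend(final int maxEntries) {
        // access-ordered LinkedHashMap that evicts the eldest entry once full
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    @Override public synchronized V get(K key) { return map.get(key); }
    @Override public synchronized void put(K key, V value) { map.put(key, value); }
    @Override public synchronized void clear() { map.clear(); }
}
```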
Fair enough - if you are going to bound SerializerCache and DeserializerCache in 3.0, could TypeFactory._typeCache come along for the ride?
Oh, yeah - re dropping and recreating ObjectMapper - yes, I can see how that could be a workable approach. Unfortunately in my real case Spring has passed it off to a lot of other places, which makes that tricky to actually do unless there's some Spring magic I'm unaware of. Potentially I could try to wrap the ObjectMapper originally given to Spring with something that passes calls through and occasionally rebuilds the underlying mapper. That said, if there's a good prospect of it being fixed in 3.0 we can probably just wait for that.
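A rough sketch of that pass-through idea, assuming Jackson 2.x where ObjectMapper methods such as writeValueAsString can be overridden; RecyclingObjectMapper and REBUILD_EVERY are illustrative names, and a real wrapper would need to cover every entry point the framework actually calls:

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: a pass-through mapper handed to Spring that swaps out
// the real mapper it delegates to every REBUILD_EVERY calls, so serializers
// cached for no-longer-used generated classes can be collected. Only one
// entry point is overridden here to show the pattern.
public class RecyclingObjectMapper extends ObjectMapper {
    private static final int REBUILD_EVERY = 10_000;

    private final AtomicInteger calls = new AtomicInteger();
    private volatile ObjectMapper delegate = new ObjectMapper();

    private ObjectMapper delegate() {
        if (calls.incrementAndGet() % REBUILD_EVERY == 0) {
            delegate = new ObjectMapper(); // drops all cached (de)serializers
        }
        return delegate;
    }

    @Override
    public String writeValueAsString(Object value) throws JsonProcessingException {
        return delegate().writeValueAsString(value);
    }
}
```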
@mattsheppard Wrt dropping the ObjectMapper: yes, I can see how that gets tricky once Spring has handed it around. With Jackson 3.x I think the idea of both more configurable and replaceable components can help.
If I provide Jackson's ObjectMapper with new dynamically generated types on each request, I seem to get a process whose Metaspace usage grows forever (or which eventually throws an OutOfMemoryError if I set a maximum bound on Metaspace).
I created an example project to exhibit this at https://github.com/mattsheppard/JacksonDatabindMetaspaceLeak (it uses Groovy to generate the dynamic classes because that's what the application I originally found the problem in used, but it doesn't appear to be Groovy-specific).
Note that this leaking occurs even if I call the TypeFactory's clearCache() method, which was discussed in #489.
I stumbled across
((DefaultSerializerProvider) mapper.getSerializerProvider()).flushCachedSerializers();
after exploring some heap dumps, and calling that every once in a while seems to be sufficient to run more-or-less indefinitely. If it's not possible to use weak references or allow for some sort of bounded cache to avoid the actual problem (#489 suggests you may be reluctant), would it at least be possible to expose the flush method on SerializerProvider to avoid the cast?
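For reference, a small sketch of that periodic-flush workaround, combining the TypeFactory cache clear from #489 with the serializer-provider flush; the CacheFlusher class and FLUSH_INTERVAL value are illustrative, not part of Jackson:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ser.DefaultSerializerProvider;

// Sketch of the workaround: every FLUSH_INTERVAL requests, clear both the
// type cache (#489) and the serializer cache so entries for no-longer-used
// generated classes can be garbage-collected.
public class CacheFlusher {
    private static final int FLUSH_INTERVAL = 1_000;
    private int requestCount;

    public synchronized void maybeFlush(ObjectMapper mapper) {
        if (++requestCount % FLUSH_INTERVAL != 0) {
            return;
        }
        mapper.getTypeFactory().clearCache();
        // The cast below is exactly what this issue asks to avoid.
        ((DefaultSerializerProvider) mapper.getSerializerProvider()).flushCachedSerializers();
    }
}
```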