fix(*) various fixes, several pertaining to FFI & Gateway integration #308
Conversation
Force-pushed from 88fd2e3 to 82a3c86
👍 I've been taking this for a spin this afternoon. I tested all runtimes; Wasmer and Wasmtime are working flawlessly now, not a single segfault or unexplained error from
Hmm, this looks like a third thing at first glance... Is this with V8 and the same cache invalidation test suite? By the way, does that suite pass entirely for you? Am I missing anything besides my local Postgres/Redis instances?
Oh, actually all I needed to do was build the binary without debug mode; that makes sense, especially with the no-pool patch enabled. Great.
Oh, I spoke too soon. I got a number of these:
I encountered it last week too but forgot about it.
Proxy-Wasm: ensure the filter chain pool is the one it is allocated with.
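The invariant behind this commit can be sketched as follows. This is a simplified, hypothetical model, not the module's actual API: plain `malloc` stands in for nginx pool allocation, and all names (`pool_t`, `chain_create`, `chain_destroy`) are illustrative. The point is only that a filter chain records the pool it was allocated with, and teardown must use that same pool.

```c
#include <stdlib.h>

/* Hypothetical sketch: a chain remembers its owning pool, and
 * releasing it through any other pool is rejected. In real nginx
 * code, freeing through the wrong pool corrupts or frees foreign
 * memory; here we model it with a simple allocation counter. */

typedef struct pool_s {
    size_t nalloc;   /* allocations still live in this pool */
} pool_t;

typedef struct filter_chain_s {
    pool_t *pool;    /* the pool this chain was allocated with */
} filter_chain_t;

static filter_chain_t *
chain_create(pool_t *pool)
{
    filter_chain_t *c = malloc(sizeof(filter_chain_t));
    if (c == NULL) {
        return NULL;
    }

    c->pool = pool;  /* record the owning pool at creation time */
    pool->nalloc++;
    return c;
}

static int
chain_destroy(filter_chain_t *c, pool_t *pool)
{
    if (c->pool != pool) {
        return -1;   /* not the pool it was allocated with: reject */
    }

    pool->nalloc--;
    free(c);
    return 0;
}
```

With that guard in place, a caller holding the wrong pool gets an error instead of silently corrupting the other pool's accounting.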
Force-pushed from 82a3c86 to 1ed6e0d
Ah, so I was mistaken about this before. You (currently) do need a Postgres instance for this test, even when selecting
Yes, I came up with that testing pattern in the Gateway lol :) I guess there are two questions, the first of which relates to successfully running all of this. I do have a Postgres instance running, and I am trying to run the whole suite (all DB modes) as it is in the
This works to a certain point, and then the last few tests fail:
So as I was saying above, I encountered this when investigating the coredumps, and I remember going down a Kong spec utils/YAML rabbit hole. Do you not experience this problem, and if you don't, are there any other steps I may have missed before running the tests? And then, my other question was about the V8 coredump:
[slacking you about the
Yup! It causes a bunch of churn on the kong filter chain entities, so it just seems the easiest way to uncover issues right now.
2 segfault fixes:
fix(ffi) correctly set filter->log during chain loading
fix(proxy-wasm) always unset instance ctx filter chain
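The second fix follows a common pattern for this class of segfault: an instance context caches a back-pointer to its filter chain, and every teardown path must clear it, or the next request through the instance dereferences freed memory. A minimal, hypothetical sketch (invented names, not the module's actual structs):

```c
#include <stddef.h>

/* Hypothetical sketch of the "always unset instance ctx filter
 * chain" idea: the context's cached chain pointer is cleared
 * unconditionally on teardown, so later use-sites can detect a
 * missing chain instead of following a dangling pointer. */

typedef struct filter_chain_s filter_chain_t;  /* opaque here */

typedef struct instance_ctx_s {
    filter_chain_t *filter_chain;  /* cached chain; may outlive it */
} instance_ctx_t;

static void
chain_detach(instance_ctx_t *ictx)
{
    /* unconditional: safe to call on every teardown/error path */
    ictx->filter_chain = NULL;
}

static int
ictx_has_chain(instance_ctx_t *ictx)
{
    /* use-site guard: NULL means "no chain", never "freed chain" */
    return ictx->filter_chain != NULL;
}
```

The key property is that `chain_detach` is cheap and idempotent, so calling it on every exit path costs nothing and turns a use-after-free into a detectable NULL check.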