chore(llmobs): minimize number of span/eval encoding #13273
base: main
Conversation
Bootstrap import analysis

Comparison of import times between this PR and base.

Summary
The average import time from this PR is: 248 ± 8 ms.
The average import time from base is: 249 ± 9 ms.
The import time difference between this PR and base is: -0.8 ± 0.4 ms.
The difference is not statistically significant (z = -2.18).

Import time breakdown
The following import paths have shrunk:
Benchmarks

Benchmark execution time: 2025-04-29 18:41:46
Comparing candidate commit 1e15750 in PR branch
Found 14 performance improvements and 2 performance regressions!
Performance is the same for 484 metrics, 8 unstable metrics.

scenario:iast_aspects-ospathsplitdrive_aspect
scenario:iast_aspects-replace_aspect
scenario:packagesupdateimporteddependencies-import_many
scenario:packagesupdateimporteddependencies-import_many_cached
scenario:packagesupdateimporteddependencies-import_many_stdlib
scenario:packagesupdateimporteddependencies-import_many_stdlib_cached
scenario:packagesupdateimporteddependencies-import_many_unknown
scenario:packagesupdateimporteddependencies-import_many_unknown_cached
scenario:packagesupdateimporteddependencies-import_one
scenario:packagesupdateimporteddependencies-import_one_cache
scenario:packagesupdateimporteddependencies-import_one_stdlib
scenario:packagesupdateimporteddependencies-import_one_stdlib_cache
scenario:packagesupdateimporteddependencies-import_one_unknown
scenario:packagesupdateimporteddependencies-import_one_unknown_cache
scenario:samplingrules-average_match
scenario:samplingrules-low_match
15bf572 to 1e15750
This PR does an experimental refactor to minimize the number of JSON encoding attempts we make per span/eval event.
Previously, we called safe_json() (which is just a wrapper around json.dumps()) the following times per span/eval event:
1) in enqueue(), to calculate the event size and determine whether the event should be truncated due to size limits
2) in encoder.put(), to again calculate the event size and grow the encoder's buffer accordingly
3) in encoder.encode(), to finally JSON encode the payload before flushing
Thanks to #12966 we got rid of 2), but we still encode the same event twice before sending. Considering the average size of span event payloads, this is redundancy that needs to go. A rough sketch of this redundant path is shown below.
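For illustration only, here is a minimal sketch of the redundant pre-PR path. Only safe_json() and json.dumps() come from the description above; the writer class, size limit, and truncation placeholder are hypothetical stand-ins, not the actual ddtrace code.

```python
import json


def safe_json(obj):
    # Thin wrapper around json.dumps(); the fallback here is an assumption for the sketch.
    try:
        return json.dumps(obj)
    except (TypeError, ValueError):
        return "{}"


class RedundantWriter:
    """Hypothetical pre-PR flow: each event is JSON-encoded once for the size check and again at flush."""

    EVENT_SIZE_LIMIT = 1 << 20  # assumed per-event byte cap, for illustration only

    def __init__(self):
        self._events = []

    def enqueue(self, event):
        # Encoding #1: serialize only to measure the event and decide on truncation.
        if len(safe_json(event)) > self.EVENT_SIZE_LIMIT:
            event = {"truncated": True}  # placeholder truncation, not the real logic
        self._events.append(event)

    def flush(self):
        # Encoding #2: the very same events are serialized again to build the payload.
        payload = safe_json(self._events)
        self._events = []
        return payload.encode("utf-8")
```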
This PR attempts to solve the issue by encoding every span/eval event only once, at enqueue(), and storing the encoded events as bytes/strings in the writer buffer. At flush time, we create the encoded payload by concatenating the buffered strings together. Encoding has to move to enqueue time because some of our logic relies on the encoded event sizes; a sketch of this encode-once flow is included below.
The tradeoffs here are:
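To make the encode-once approach concrete, here is a minimal sketch under the same assumptions as the previous snippet (class name, size limit, and truncation placeholder are hypothetical, not the actual ddtrace implementation): each event is serialized a single time in enqueue(), and flush() only concatenates the pre-encoded bytes.

```python
import json


class EncodeOnceWriter:
    """Hypothetical post-PR flow: encode at enqueue(), concatenate raw bytes at flush()."""

    EVENT_SIZE_LIMIT = 1 << 20  # assumed per-event byte cap, for illustration only

    def __init__(self):
        self._buffer = []      # already-encoded events, kept as bytes
        self._buffer_size = 0  # running payload size, tracked without re-encoding

    def enqueue(self, event):
        # The single json.dumps() call per event; all size logic reuses these bytes.
        encoded = json.dumps(event).encode("utf-8")
        if len(encoded) > self.EVENT_SIZE_LIMIT:
            encoded = b'{"truncated": true}'  # placeholder truncation, not the real logic
        self._buffer.append(encoded)
        self._buffer_size += len(encoded)

    def flush(self):
        if not self._buffer:
            return b"[]"
        # Join the pre-encoded events into a JSON array; no event is serialized twice.
        payload = b"[" + b",".join(self._buffer) + b"]"
        self._buffer = []
        self._buffer_size = 0
        return payload
```

Joining the pre-encoded objects with commas inside brackets yields a JSON array equivalent to encoding the whole list at once, which is what lets the second serialization pass be dropped.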
Checklist
Reviewer Checklist