chore(llmobs): minimize number of span/eval encoding #13273

Draft · wants to merge 1 commit into base: main

Conversation

Contributor

@Yun-Kim Yun-Kim commented Apr 25, 2025

This PR is an experimental refactor that minimizes the number of JSON encoding passes we make per span/eval event.

Previously, we called safe_json() (a thin wrapper around json.dumps()) the following times per span/eval event:

  1. at enqueue(), to calculate the event size and decide whether to truncate the event due to size limits
  2. at encoder.put(), to calculate the event size again and grow the encoder's buffer accordingly
  3. at encoder.encode(), to finally JSON-encode the payload before flushing.

Thanks to #12966 we got rid of 2), but we still encode the same event twice before sending. Given the average size of span event payloads, this redundancy is worth eliminating.
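For illustration, here is a minimal sketch of the remaining double-encoding flow. The class and method names are hypothetical stand-ins, not the actual ddtrace writer API:

```python
import json


def safe_json(obj):
    # Thin wrapper around json.dumps(), mirroring the wrapper described above.
    return json.dumps(obj, default=str)


class OldWriter:
    """Sketch of the current flow: each event is JSON-encoded twice."""

    EVENT_SIZE_LIMIT = 1024

    def __init__(self):
        self._buffer = []

    def enqueue(self, event):
        # Encoding #1: performed only to measure the event's serialized size.
        if len(safe_json(event)) > self.EVENT_SIZE_LIMIT:
            event = {"truncated": True}  # placeholder for real truncation logic
        self._buffer.append(event)

    def encode(self):
        # Encoding #2: the same events are serialized again to build the payload.
        return safe_json(self._buffer).encode("utf-8")
```

The size-check result from enqueue() is thrown away, so all the work of the first json.dumps() pass is repeated at encode time.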

This PR attempts to solve this by encoding each span/eval metric exactly once, at enqueue(), and storing the encoded events as byte strings in the writer buffer. At flush time, we build the payload by concatenating the buffered byte strings. Encoding has to move to enqueue time because some of our logic depends on encoded event sizes.
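The encode-once approach can be sketched as follows. Again, the names are hypothetical and simplified relative to the real writer:

```python
import json


def safe_json(obj):
    # Thin wrapper around json.dumps(), as described above.
    return json.dumps(obj, default=str)


class EncodeOnceWriter:
    """Sketch of the proposed flow: encode at enqueue(), concatenate at flush."""

    EVENT_SIZE_LIMIT = 1024

    def __init__(self):
        self._buffer = []  # holds already-encoded events as bytes
        self._size = 0

    def enqueue(self, event):
        # The only json.dumps() call: its result is both measured and kept.
        encoded = safe_json(event).encode("utf-8")
        if len(encoded) > self.EVENT_SIZE_LIMIT:
            encoded = safe_json({"truncated": True}).encode("utf-8")
        self._buffer.append(encoded)
        self._size += len(encoded)

    def flush(self):
        # No re-encoding: the payload is a JSON array built by joining the
        # pre-encoded event bytes with commas.
        payload = b"[" + b",".join(self._buffer) + b"]"
        self._buffer.clear()
        self._size = 0
        return payload
```

The trade-off described below is visible here: flush() manually produces the array delimiters, so payload correctness now depends on string concatenation rather than on a single json.dumps() call over the whole buffer.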

The tradeoffs here are:

  • PRO: cuts the JSON encoding cost of span events to 50% of the current system (or 33% of the original three-pass system)
  • CON: makes payload construction slightly more complex to maintain, since we explicitly concatenate pre-encoded strings instead of encoding everything at the end.

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy


CODEOWNERS have been resolved as:

ddtrace/llmobs/_writer.py                                               @DataDog/ml-observability

github-actions bot commented Apr 25, 2025

Bootstrap import analysis

Comparison of import times between this PR and base.

Summary

The average import time from this PR is: 248 ± 8 ms.

The average import time from base is: 249 ± 9 ms.

The import time difference between this PR and base is: -0.8 ± 0.4 ms.

The difference is not statistically significant (z = -2.18).

Import time breakdown

The following import paths have shrunk:

ddtrace.auto 1.297 ms (0.52%)
ddtrace 0.674 ms (0.27%)
ddtrace.bootstrap.sitecustomize 0.623 ms (0.25%)
ddtrace.bootstrap.preload 0.623 ms (0.25%)
ddtrace.internal.products 0.623 ms (0.25%)
ddtrace.internal.remoteconfig.client 0.623 ms (0.25%)


pr-commenter bot commented Apr 25, 2025

Benchmarks

Benchmark execution time: 2025-04-29 18:41:46

Comparing candidate commit 1e15750 in PR branch yunkim/llmobs-writer-optimize-encoding with baseline commit 1e1626a in branch main.

Found 14 performance improvements and 2 performance regressions! Performance is the same for 484 metrics, 8 unstable metrics.

scenario:iast_aspects-ospathsplitdrive_aspect

  • 🟥 execution_time [+551.733ns; +625.573ns] or [+15.248%; +17.288%]

scenario:iast_aspects-replace_aspect

  • 🟥 execution_time [+554.476ns; +626.655ns] or [+11.687%; +13.208%]

scenario:packagesupdateimporteddependencies-import_many

  • 🟩 execution_time [-27.083ms; -26.926ms] or [-99.717%; -99.141%]

scenario:packagesupdateimporteddependencies-import_many_cached

  • 🟩 execution_time [-27.137µs; -26.042µs] or [-18.399%; -17.657%]

scenario:packagesupdateimporteddependencies-import_many_stdlib

  • 🟩 execution_time [-725.209ms; -721.857ms] or [-100.130%; -99.667%]

scenario:packagesupdateimporteddependencies-import_many_stdlib_cached

  • 🟩 execution_time [-728.113ms; -723.985ms] or [-100.261%; -99.693%]

scenario:packagesupdateimporteddependencies-import_many_unknown

  • 🟩 execution_time [-29.783ms; -29.624ms] or [-97.520%; -96.999%]

scenario:packagesupdateimporteddependencies-import_many_unknown_cached

  • 🟩 execution_time [-244.844µs; -229.933µs] or [-23.809%; -22.359%]

scenario:packagesupdateimporteddependencies-import_one

  • 🟩 execution_time [-11.290ms; -11.229ms] or [-100.094%; -99.554%]

scenario:packagesupdateimporteddependencies-import_one_cache

  • 🟩 execution_time [-742.470ns; -645.364ns] or [-10.633%; -9.243%]

scenario:packagesupdateimporteddependencies-import_one_stdlib

  • 🟩 execution_time [-10.304ms; -10.260ms] or [-100.035%; -99.600%]

scenario:packagesupdateimporteddependencies-import_one_stdlib_cache

  • 🟩 execution_time [-672.701ns; -582.499ns] or [-9.718%; -8.415%]

scenario:packagesupdateimporteddependencies-import_one_unknown

  • 🟩 execution_time [-19.723ms; -19.617ms] or [-100.038%; -99.502%]

scenario:packagesupdateimporteddependencies-import_one_unknown_cache

  • 🟩 execution_time [-749.076ns; -685.022ns] or [-10.732%; -9.814%]

scenario:samplingrules-average_match

  • 🟩 execution_time [-187.938µs; -184.395µs] or [-36.130%; -35.449%]

scenario:samplingrules-low_match

  • 🟩 execution_time [-349.268µs; -346.157µs] or [-67.706%; -67.103%]

@Yun-Kim Yun-Kim force-pushed the yunkim/llmobs-writer-optimize-encoding branch from 15bf572 to 1e15750 on April 29, 2025 at 17:49