
Add setting of manual.keep on the root span for AI Guard #5578

Open
y9v wants to merge 1 commit into master from ai-guard-add-manual-keep-to-root-span

Conversation


@y9v y9v commented Apr 10, 2026

What does this PR do?
This PR sets the manual.keep tag and the decision maker tag on the root span when an AI Guard span is present in the trace.

Motivation:
We want to keep all traces that contain AI Guard spans.
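The described behavior can be sketched roughly as follows. This is a minimal illustration, not the actual dd-trace-rb implementation: the AI Guard span name, the decision maker tag name, and its value are all assumptions here.

```ruby
# Minimal sketch (assumed names, not the real implementation): when a finished
# trace contains an AI Guard span, force-keep the trace by tagging its root.
Span = Struct.new(:name, :tags) do
  def set_tag(key, value)
    tags[key] = value
  end
end

AI_GUARD_SPAN_NAME = 'ai_guard'    # assumed name of the AI Guard span
MANUAL_KEEP_TAG    = 'manual.keep' # Datadog's manual-keep tag
DECISION_MAKER_TAG = '_dd.p.dm'    # assumed decision maker tag

def keep_root_if_ai_guard!(root_span, spans)
  return unless spans.any? { |s| s.name == AI_GUARD_SPAN_NAME }

  root_span.set_tag(MANUAL_KEEP_TAG, true)
  root_span.set_tag(DECISION_MAKER_TAG, '-4') # '-4' = manual keep (assumed)
end
```

Tagging only the root span is enough because trace-level sampling decisions are made per trace, not per span.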

Change log entry
None.

Additional Notes:
APPSEC-61355

How to test the change?
CI is enough.

@y9v y9v self-assigned this Apr 10, 2026
@y9v y9v requested review from a team as code owners April 10, 2026 13:26
@y9v y9v requested a review from mabdinur April 10, 2026 13:26
@github-actions

Typing analysis

Note: Ignored files are excluded from the next sections.

steep:ignore comments

This PR introduces 2 steep:ignore comments, and clears 2 steep:ignore comments.

Introduced:
lib/datadog/ai_guard/evaluation.rb:27
lib/datadog/ai_guard/evaluation.rb:70
Cleared:
lib/datadog/ai_guard/evaluation.rb:21
lib/datadog/ai_guard/evaluation.rb:64

@datadog-official

datadog-official bot commented Apr 10, 2026

✅ Tests

🎉 All green!

❄️ No new flaky tests detected
🧪 All tests passed

🎯 Code Coverage (details)
Patch Coverage: 88.89%
Overall Coverage: 95.35% (-0.02%)

This comment will be updated automatically if new data arrives.
🔗 Commit SHA: ededce9 | Docs | Datadog PR Page

@pr-commenter

pr-commenter bot commented Apr 10, 2026

Benchmarks

Benchmark execution time: 2026-04-10 13:50:06

Comparing candidate commit ededce9 in PR branch ai-guard-add-manual-keep-to-root-span with baseline commit 3e20c4d in branch master.

Found 0 performance improvements and 0 performance regressions! Performance is the same for all 46 metrics, with 0 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because most changes are small:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'
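The significance rule above can be sketched numerically. This is a simplified normal-approximation sketch, not the benchmarking platform's actual statistics; the method names are illustrative.

```ruby
# Simplified sketch of the A/B significance rule: build a confidence interval
# over the relative difference of means, then flag a change only when the
# whole interval lies outside the +/- threshold.
def mean(xs)
  xs.sum(0.0) / xs.size
end

def sample_variance(xs)
  m = mean(xs)
  xs.sum(0.0) { |x| (x - m)**2 } / (xs.size - 1)
end

# 95% normal-approximation CI of
# (mean(candidate) - mean(baseline)) / mean(baseline)
def relative_diff_ci(candidate, baseline, z: 1.96)
  base_mean = mean(baseline)
  diff = (mean(candidate) - base_mean) / base_mean
  se = Math.sqrt(sample_variance(candidate) / candidate.size +
                 sample_variance(baseline) / baseline.size) / base_mean
  [diff - z * se, diff + z * se]
end

# Significant only if the CI is entirely outside [-threshold, +threshold].
def significant?(ci, threshold)
  lower, upper = ci
  lower > threshold || upper < -threshold
end
```

For example, a candidate that is 2% slower with a CI of [1.02%, 2.98%] is significant against a 1% threshold but not against a 2% one, because in the latter case the interval straddles the threshold.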

