
fix(llmobs): avoid raising errors during llmobs integration span processing #10713

Merged · 10 commits into main on Sep 23, 2024

Conversation

@Yun-Kim (Contributor) commented Sep 19, 2024

This PR does 2 things:

User facing changes

  • Captures any errors raised by an integration-specific `_llmobs_set_tags()` method and logs them instead of potentially crashing the user application.

Non-user facing changes

Refactors the `BaseLLMIntegration` class and its child classes to share a single, cleaner `llmobs_set_tags()` method, which internally wraps a call to the abstract method `_llmobs_set_tags()` (implemented by each integration) in a try/except. We also no longer need to check `integration.is_pc_sampled_llmobs(span)` since we do not currently do any sampling; if we add sampling later, it can be handled inside `llmobs_set_tags()`.
tl;dr: `_llmobs_set_tags()` is now an abstract method that every LLM integration must implement, and its signature takes the same arguments/keyword arguments as `llmobs_set_tags()`:

  • `span`: the span to annotate
  • `args`: list of positional arguments passed to the traced method
  • `kwargs`: dict of keyword arguments passed to the traced method. If an integration requires additional data not contained in args/kwargs (such as the model instance in Gemini or the `tool_input` dictionary in LangChain), we can pass it in through this dict.
  • `response`: the response returned by the LLM provider (streamed or non-streamed)
  • `operation`: string denoting the LLM operation (e.g. "completion", "chat", "embedding", "chain", "retrieval")
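As an illustration, a minimal hypothetical integration implementing the new abstract hook might look like the sketch below. Only the signature (span, args, kwargs, response, operation) comes from the PR description; the class, span stub, and tag names are invented for the example:

```python
class _StubSpan:
    """Stand-in for a ddtrace span, for illustration only."""
    def __init__(self):
        self.tags = {}

    def set_tag(self, key, value):
        self.tags[key] = value


class ExampleIntegration:
    def _llmobs_set_tags(self, span, args, kwargs, response=None, operation=""):
        span.set_tag("llmobs.operation", operation)
        # Extra data an integration needs (e.g. a model instance) rides
        # along in the kwargs dict rather than in a bespoke parameter.
        if "model_instance" in kwargs:
            span.set_tag("llmobs.model", str(kwargs["model_instance"]))
        if response is not None:
            span.set_tag("llmobs.has_response", True)


span = _StubSpan()
ExampleIntegration()._llmobs_set_tags(
    span, args=[], kwargs={"model_instance": "gemini-pro"},
    response={"text": "hi"}, operation="completion",
)
print(span.tags["llmobs.operation"])  # → completion
```

The point of the fixed signature is that every integration receives the same five inputs, so the base class can call the hook uniformly.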

I refactored each integration to follow this new signature, which included merging the logic for handling streamed responses and for additional required arguments (e.g. the model instance, tool inputs).

Previously each integration did its own thing for llmobs_set_tags() with arbitrary args/kwargs, and it was difficult to maintain. Now that we have a strict function signature, future integrations should be simpler to create, and existing integrations should be easier to maintain.
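The shared base-class pattern described above can be sketched as follows. This is a simplified illustration, not the actual code in ddtrace/llmobs/_integrations/base.py, and the log message is invented:

```python
import abc
import logging

log = logging.getLogger(__name__)


class BaseLLMIntegration(abc.ABC):
    def llmobs_set_tags(self, span, args, kwargs, response=None, operation=""):
        # Shared entry point: delegate to the integration-specific hook,
        # but never let a tagging bug propagate into the traced application.
        try:
            self._llmobs_set_tags(
                span, args, kwargs, response=response, operation=operation
            )
        except Exception:
            log.error("Error extracting LLMObs fields for span, skipping", exc_info=True)

    @abc.abstractmethod
    def _llmobs_set_tags(self, span, args, kwargs, response=None, operation=""):
        """Implemented by each integration; may raise on malformed data."""


class BrokenIntegration(BaseLLMIntegration):
    def _llmobs_set_tags(self, span, args, kwargs, response=None, operation=""):
        raise ValueError("malformed response")


# The error is logged rather than raised, so the traced application keeps running.
BrokenIntegration().llmobs_set_tags(span=None, args=[], kwargs={})
```

Because the try/except lives in the public `llmobs_set_tags()`, no individual integration needs its own error handling to satisfy the user-facing guarantee.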

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy

CODEOWNERS have been resolved as:

releasenotes/notes/fix-llmobs-integrations-safe-tagging-5e170868e5758510.yaml  @DataDog/apm-python
ddtrace/_trace/trace_handlers.py                                        @DataDog/apm-sdk-api-python
ddtrace/contrib/internal/anthropic/_streaming.py                        @DataDog/ml-observability
ddtrace/contrib/internal/anthropic/patch.py                             @DataDog/ml-observability
ddtrace/contrib/internal/google_generativeai/_utils.py                  @DataDog/ml-observability
ddtrace/contrib/internal/google_generativeai/patch.py                   @DataDog/ml-observability
ddtrace/contrib/internal/langchain/patch.py                             @DataDog/ml-observability
ddtrace/contrib/internal/openai/_endpoint_hooks.py                      @DataDog/ml-observability
ddtrace/contrib/internal/openai/utils.py                                @DataDog/ml-observability
ddtrace/llmobs/_integrations/anthropic.py                               @DataDog/ml-observability
ddtrace/llmobs/_integrations/base.py                                    @DataDog/ml-observability
ddtrace/llmobs/_integrations/bedrock.py                                 @DataDog/ml-observability
ddtrace/llmobs/_integrations/gemini.py                                  @DataDog/ml-observability
ddtrace/llmobs/_integrations/langchain.py                               @DataDog/ml-observability
ddtrace/llmobs/_integrations/openai.py                                  @DataDog/ml-observability
tests/llmobs/test_llmobs_integrations.py                                @DataDog/ml-observability

pr-commenter bot commented Sep 19, 2024

Benchmarks

Benchmark execution time: 2024-09-23 17:28:00

Comparing candidate commit bfa070c in PR branch yunkim/refactor-integrations with baseline commit 3fc23d7 in branch main.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 353 metrics, 47 unstable metrics.

Base automatically changed from yunkim/refactor-llmobs-integrations to main September 19, 2024 21:01
datadog-dd-trace-py-rkomorn bot commented Sep 20, 2024

Datadog Report

Branch report: yunkim/refactor-integrations
Commit report: bfa070c
Test service: dd-trace-py

✅ 0 Failed, 592 Passed, 694 Skipped, 20m 12.03s Total duration (16m 51.39s time saved)

@Yun-Kim Yun-Kim marked this pull request as ready for review September 20, 2024 21:00
@Yun-Kim Yun-Kim requested review from a team as code owners September 20, 2024 21:00
@mabdinur (Contributor) left a comment

The fix looks good to me. Moving forward, should we try to remove the LLM logic from ddtrace/_trace/trace_handlers.py?

@Yun-Kim Yun-Kim enabled auto-merge (squash) September 23, 2024 22:18
@Yun-Kim Yun-Kim merged commit cf6f007 into main Sep 23, 2024
575 checks passed
@Yun-Kim Yun-Kim deleted the yunkim/refactor-integrations branch September 23, 2024 22:27
The backport to 2.11 failed:

The process '/usr/bin/git' failed with exit code 1

To backport manually, run these commands in your terminal:

# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-2.11 2.11
# Navigate to the new working tree
cd .worktrees/backport-2.11
# Create a new branch
git switch --create backport-10713-to-2.11
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 cf6f007dcde78958ea44514f0ceca3701635b068
# Push it to GitHub
git push --set-upstream origin backport-10713-to-2.11
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-2.11

Then, create a pull request where the base branch is 2.11 and the compare/head branch is backport-10713-to-2.11.

The backport to 2.12 failed:

The process '/usr/bin/git' failed with exit code 1

To backport manually, run the same commands as for 2.11, substituting 2.12 throughout (working tree .worktrees/backport-2.12, branch backport-10713-to-2.12). Then, create a pull request where the base branch is 2.12 and the compare/head branch is backport-10713-to-2.12.

The backport to 2.13 failed:

The process '/usr/bin/git' failed with exit code 1

To backport manually, run the same commands as for 2.11, substituting 2.13 throughout (working tree .worktrees/backport-2.13, branch backport-10713-to-2.13). Then, create a pull request where the base branch is 2.13 and the compare/head branch is backport-10713-to-2.13.

@Yun-Kim (Contributor, Author) commented Sep 24, 2024

This PR depends on too many code changes from other features (e.g. tool calling, streamed response support, and refactors) for backporting to be practical: bringing it to 2.11, 2.12, and 2.13 would require a large amount of non-trivial code changes. We will leave this as a standalone fix to be released in 2.14.

mabdinur pushed a commit that referenced this pull request Sep 25, 2024