
opentelemetry-instrumentation-vertexai: support StreamGenerateContent #3297

Open
codefromthecrypt opened this issue Feb 25, 2025 · 2 comments · May be fixed by #3331
@codefromthecrypt
Contributor

What problem do you want to solve?

Right now, if you use LangChain with Vertex AI in a chatbot, the initialization will look like this:

    return ChatVertexAI(
        model_name=os.getenv("CHAT_MODEL"), streaming=True, temperature=temperature
    )

When a generation occurs, if you have bootstrapped your deps, you'll see a normal platform span like this:

google.cloud.aiplatform.v1beta1.PredictionService/StreamGenerateContent

You won't yet see a GenAI span: non-streaming calls are instrumented, but streaming hasn't been implemented yet:
https://github.com/open-telemetry/opentelemetry-python-contrib/blob/opentelemetry-instrumentation-vertexai%3D%3D2.0b0/instrumentation-genai/opentelemetry-instrumentation-vertexai/src/opentelemetry/instrumentation/vertexai/__init__.py#L81-L82
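The streaming case is structurally different from the non-streaming one: the span can't end when the call returns, because the response is an iterator the caller consumes later. A minimal sketch of the generator-wrapping pattern that handles this, in plain Python (`FakeSpan` stands in for a real OpenTelemetry span, and all names here are illustrative, not the instrumentation package's actual API):

```python
# Sketch: keep the "span" open until the chunk stream is exhausted,
# ending it in a finally block so errors and early exits also close it.

class FakeSpan:
    """Stand-in for an OpenTelemetry span (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.ended = False

    def end(self):
        self.ended = True


def instrument_stream(span, inner):
    """Wrap a chunk iterator so the span ends when the stream finishes."""
    try:
        for chunk in inner:
            yield chunk
    finally:
        span.end()


span = FakeSpan("chat streaming")
chunks = list(instrument_stream(span, iter(["Hel", "lo"])))
print(chunks, span.ended)  # ['Hel', 'lo'] True
```

The `try`/`finally` matters: if the caller abandons the stream mid-way, generator cleanup still runs the `finally` and the span is not leaked.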

Describe the solution you'd like

I'd like the next release of opentelemetry-instrumentation-vertexai to instrument google.cloud.aiplatform.v1beta1.PredictionService/StreamGenerateContent.

Describe alternatives you've considered

Currently, we use langtrace, because its data is closest to the semantic conventions.

    from langtrace_python_sdk.instrumentation import VertexAIInstrumentation

    VertexAIInstrumentation().instrument()
    return ChatVertexAI(
        model_name=os.getenv("CHAT_MODEL"), streaming=True, temperature=temperature
    )

Additional Context

cc @aabmass. FYI, this is the specific code I would like to remove: https://github.com/elastic/elasticsearch-labs/blob/main/example-apps/chatbot-rag-app/api/llm_integrations.py#L23-L26

Would you like to implement a fix?

None

@aabmass
Member

aabmass commented Feb 25, 2025

I'd like the next release of opentelemetry-instrumentation-vertexai to include google.cloud.aiplatform.v1beta1.PredictionService/StreamGenerateContent

Absolutely, I'm working on it now. I also need to support the async API; it should be quite straightforward.
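The async API follows the same shape as the sync one, just with an async generator. A hedged sketch of that variant (again with a stand-in span object and illustrative names, not the package's actual code):

```python
# Sketch: the async analogue of the stream-wrapping pattern, ending the
# stand-in span only once the async chunk stream is fully consumed.
import asyncio


class FakeSpan:
    """Stand-in for an OpenTelemetry span (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.ended = False

    def end(self):
        self.ended = True


async def instrument_astream(span, inner):
    """Wrap an async chunk iterator so the span ends when it finishes."""
    try:
        async for chunk in inner:
            yield chunk
    finally:
        span.end()


async def fake_chunks():
    # Stand-in for a streaming model response.
    for c in ["Hel", "lo"]:
        yield c


async def main():
    span = FakeSpan("chat streaming (async)")
    out = [c async for c in instrument_astream(span, fake_chunks())]
    return out, span.ended


out, ended = asyncio.run(main())
print(out, ended)  # ['Hel', 'lo'] True
```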

@aabmass
Member

aabmass commented Feb 25, 2025

Also, thanks for trying things out.

@aabmass aabmass linked a pull request Mar 5, 2025 that will close this issue