What problem do you want to solve?

Right now, if you use LangChain with VertexAI in a chatbot, the initialization looks like this:

```python
return ChatVertexAI(
    model_name=os.getenv("CHAT_MODEL"),
    streaming=True,
    temperature=temperature,
)
```
When a generation occurs, if you have bootstrapped your deps, you'll see a normal platform span like this:

```
google.cloud.aiplatform.v1beta1.PredictionService/StreamGenerateContent
```
You won't yet see a GenAI span: while the non-streaming path has been implemented, streaming hasn't. https://github.com/open-telemetry/opentelemetry-python-contrib/blob/opentelemetry-instrumentation-vertexai%3D%3D2.0b0/instrumentation-genai/opentelemetry-instrumentation-vertexai/src/opentelemetry/instrumentation/vertexai/__init__.py#L81-L82
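For context on why streaming needs separate handling: with a non-streaming call the span can end when the instrumented function returns, but with streaming the span has to stay open until the caller exhausts the response iterator. A minimal stdlib-only sketch of that pattern (the `Span` class and `instrument_stream` function are illustrative stand-ins, not the instrumentation's real API):

```python
class Span:
    """Toy stand-in for an OpenTelemetry span (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.ended = False

    def end(self):
        self.ended = True


def instrument_stream(span_name, stream):
    """Wrap a streaming response so the span ends only when the
    iterator is exhausted, not when the instrumented call returns."""
    span = Span(span_name)

    def wrapped():
        try:
            for chunk in stream:
                yield chunk
        finally:
            span.end()  # close the span once streaming finishes

    return span, wrapped()


span, chunks = instrument_stream(
    "google.cloud.aiplatform.v1beta1.PredictionService/StreamGenerateContent",
    iter(["Hello", ", ", "world"]),
)
assert not span.ended                       # span stays open while streaming
assert list(chunks) == ["Hello", ", ", "world"]
assert span.ended                           # span ends once the stream is drained
```

The real instrumentation additionally has to record the accumulated response on the span, but the lifetime issue above is the core difference from the non-streaming case.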
Describe the solution you'd like

I'd like the next release of opentelemetry-instrumentation-vertexai to include instrumentation for `google.cloud.aiplatform.v1beta1.PredictionService/StreamGenerateContent`.
Describe alternatives you've considered

Currently, we use Langtrace, as its data is most similar to the semantic conventions:

```python
from langtrace_python_sdk.instrumentation import VertexAIInstrumentation

VertexAIInstrumentation().instrument()
return ChatVertexAI(
    model_name=os.getenv("CHAT_MODEL"),
    streaming=True,
    temperature=temperature,
)
```
Additional Context

cc @aabmass. FYI, this is the specific code I would like to remove: https://github.com/elastic/elasticsearch-labs/blob/main/example-apps/chatbot-rag-app/api/llm_integrations.py#L23-L26

Would you like to implement a fix?

None
Absolutely, I'm working on it now. I also need to support the async API; it should be quite straightforward.
Also, thanks for trying things out.
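Supporting the async API mentioned in the comment involves the same subtlety as sync streaming: the span must stay open until the async stream is exhausted, which means wrapping the response in an async generator. A stdlib-only sketch (the `Span` class and function names are illustrative, not the instrumentation's real API):

```python
import asyncio


class Span:
    """Toy stand-in for an OpenTelemetry span (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.ended = False

    def end(self):
        self.ended = True


async def fake_stream():
    # Stand-in for an async StreamGenerateContent response
    for chunk in ["Hello", ", ", "world"]:
        yield chunk


def instrument_async_stream(span_name, stream):
    """Wrap an async streaming response so the span ends only
    when the async iterator is exhausted."""
    span = Span(span_name)

    async def wrapped():
        try:
            async for chunk in stream:
                yield chunk
        finally:
            span.end()  # span closes when the async stream is drained

    return span, wrapped()


async def main():
    span, chunks = instrument_async_stream(
        "google.cloud.aiplatform.v1beta1.PredictionService/StreamGenerateContent",
        fake_stream(),
    )
    out = [c async for c in chunks]
    assert out == ["Hello", ", ", "world"]
    assert span.ended


asyncio.run(main())
```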
aabmass