tests: Move langchain under toxgen #4349


Draft

sentrivana wants to merge 7 commits into master
Conversation

sentrivana (Contributor)

WIP; this will likely need quite a bit of tweaking.
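For context, moving an integration "under toxgen" means its tox environments are no longer hand-maintained in tox.ini but generated from a declarative entry by the populate-tox script. A minimal sketch of what such an entry could look like, assuming the dict-style `TEST_SUITE_CONFIG` format in `scripts/populate_tox/config.py`; the key names and extra dependencies below are illustrative and not taken from this PR's diff:

```python
# Hypothetical toxgen entry for the langchain test suite.
# Keys and dependency pins are assumptions for illustration only;
# the actual entry would live in scripts/populate_tox/config.py.
TEST_SUITE_CONFIG = {
    "langchain": {
        "package": "langchain",
        "deps": {
            # Extra packages installed into every generated environment.
            "*": ["openai", "tiktoken"],
        },
    },
}
```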


codecov bot commented Apr 29, 2025

❌ 5 Tests Failed:

| Tests completed | Failed | Passed | Skipped |
| --------------- | ------ | ------ | ------- |
| 23474           | 5      | 23469  | 5860    |
Top 3 failed tests, by shortest run time:
tests.integrations.langchain.test_langchain test_langchain_agent[True-False-False]
Stack Traces | 0.26s run time
.../integrations/langchain/test_langchain.py:167: in test_langchain_agent
    list(agent_executor.stream({"input": "How many letters in the word eudca"}))
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain/agents/agent.py:1451: in stream
    for step in iterator:
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain/agents/agent_iterator.py:174: in __iter__
    for chunk in self.agent_executor._iter_next_step(
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain/agents/agent.py:1066: in _iter_next_step
    output = self.agent.plan(
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain/agents/agent.py:461: in plan
    output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain_core/runnables/base.py:2053: in invoke
    input = step.invoke(
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain_core/runnables/base.py:4064: in invoke
    return self.bound.invoke(
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain_core/language_models/chat_models.py:166: in invoke
    self.generate_prompt(
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain_core/language_models/chat_models.py:544: in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain_core/language_models/chat_models.py:408: in generate
    raise e
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain_core/language_models/chat_models.py:398: in generate
    self._generate_with_cache(
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain_core/language_models/chat_models.py:577: in _generate_with_cache
    return self._generate(
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain_community/chat_models/openai.py:439: in _generate
    response = self.completion_with_retry(
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../langchain_community/chat_models/openai.py:356: in completion_with_retry
    return self.client.create(**kwargs)
sentry_sdk/integrations/openai.py:277: in _sentry_patched_create_sync
    return _execute_sync(f, *args, **kwargs)
sentry_sdk/integrations/openai.py:263: in _execute_sync
    raise e from None
sentry_sdk/integrations/openai.py:260: in _execute_sync
    result = f(*args, **kwargs)
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../openai/_utils/_utils.py:287: in wrapper
    return func(*args, **kwargs)
.tox/py3.10-langchain-v0.0.354/lib/python3.10.../chat/completions/completions.py:925: in create
    return self._post(
.tox/py3.10-langchain-v0.0.354/lib/python3.10....../site-packages/openai/_base_client.py:1239: in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
.tox/py3.10-langchain-v0.0.354/lib/python3.10....../site-packages/openai/_base_client.py:1034: in request
    raise self._make_status_error_from_response(err.response) from None
E   openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: badkey. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
tests.integrations.langchain.test_langchain test_span_origin
Stack Traces | 0.261s run time
.../integrations/langchain/test_langchain.py:336: in test_span_origin
    list(agent_executor.stream({"input": "How many letters in the word eudca"}))
(The rest of the stack trace is identical to the failure above, ending in the same openai.AuthenticationError: Error code: 401, invalid_api_key.)
tests.integrations.langchain.test_langchain test_langchain_agent[False-False-True]
Stack Traces | 0.301s run time
(Stack trace identical to the first failure above: the same call at test_langchain.py:167 ends in openai.AuthenticationError: Error code: 401, invalid_api_key.)
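All five failures share the same symptom: in the py3.10-langchain-v0.0.354 environment the agent run falls through Sentry's OpenAI integration to a real chat-completions request with the placeholder key "badkey" and gets a 401 back, which suggests the OpenAI client stub used by the test suite is not taking effect in this newly generated environment. A minimal sketch of the kind of stubbing that keeps such a test offline, using only unittest.mock; the fixture and attribute names are illustrative, not the test file's actual code:

```python
from unittest import mock


def run_agent_offline(agent_executor, llm, prompt):
    """Illustrative sketch: run the agent with the OpenAI client stubbed out.

    ``llm.client`` is the bound chat-completions resource whose ``create()``
    the stack traces above end up calling; replacing it avoids any real HTTP
    request (and any real API key).
    """
    canned = mock.MagicMock()  # stand-in for a ChatCompletion response
    with mock.patch.object(llm, "client") as client:
        client.create.return_value = canned
        return list(agent_executor.stream({"input": prompt}))
```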


Base automatically changed from ivana/toxgen/move-more-stuff to master on April 29, 2025, 09:58