
Still defaults to OpenAI model after setting llm, summary_llm, and agent to gemini/gemini-1.5-pro #822

Open
swimmingmachine opened this issue Jan 18, 2025 · 2 comments
Labels
bug Something isn't working

Comments

@swimmingmachine

from paperqa import Docs, Settings
from paperqa.settings import AgentSettings

# valid extensions include .pdf, .txt, and .html
doc_paths = ("rhm1.pdf", "rhm2.pdf")

# Prepare the Docs object by adding a bunch of documents
docs = Docs()
for doc_path in doc_paths:
    docs.add(doc_path)

# Set up how we want to query the Docs object
settings = Settings()
settings.llm = "gemini/gemini-1.5-pro"
settings.answer.answer_max_sources = 3

# Query the Docs object to get an answer
session = docs.query(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="gemini/gemini-1.5-pro", 
        summary_llm="gemini/gemini-1.5-pro", 
        agent=AgentSettings(agent_llm="gemini/gemini-1.5-pro", agent_llm_config={"model_name": "gemini/gemini-1.5-pro"}),
        embedding="gemini/text-embedding-004",
        paper_directory="docs"
    ),
)
print(session)

I got the following error:
raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Error code: 404 - {'error': {'message': 'The model gpt-4o-2024-08-06 does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

During handling of the above exception, another exception occurred:

 raise NotFoundError(

litellm.exceptions.NotFoundError: litellm.NotFoundError: OpenAIException - Error code: 404 - {'error': {'message': 'The model gpt-4o-2024-08-06 does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
Received Model Group=gpt-4o-2024-08-06
Available Model Group Fallbacks=None


dosubot bot commented Jan 18, 2025

Hey there, @swimmingmachine! I'm here to help you with any bugs, questions, or contributions you might have. Let's tackle this issue together!

The error you're encountering suggests that the paperqa library is still attempting to use the default OpenAI model gpt-4o-2024-08-06, despite your settings specifying the gemini/gemini-1.5-pro model. This could be due to a few reasons:

  1. Incorrect Configuration: Ensure that the settings are correctly applied to the Docs object. Double-check that the Settings object is properly configured and passed to the query method.

  2. Model Availability: Verify that the gemini/gemini-1.5-pro model is available and correctly configured in your environment, and that any credentials needed to access it are set up (see the litellm sketch below this list).

  3. Fallback Mechanism: The error message indicates that there might be a fallback mechanism to the default OpenAI model. Check if there is any fallback configuration in the paperqa or litellm settings that might be causing this behavior.

  4. Library Version: Ensure that you are using the latest version of the paperqa library, as there might have been updates or bug fixes related to model configuration.

If these steps do not resolve the issue, you may need to look into the library's documentation or source code to understand how it handles model selection and fallback mechanisms.
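
A quick way to rule out cause 2 is to call the model through litellm directly, outside of paperqa. This is a minimal sketch, assuming the litellm package is installed and a GEMINI_API_KEY environment variable is set; if this call fails, the problem is model access or credentials rather than paperqa's settings:

import litellm

# Connectivity check: talk to Gemini through litellm directly, bypassing
# paperqa entirely. Assumes GEMINI_API_KEY is exported in the environment.
response = litellm.completion(
    model="gemini/gemini-1.5-pro",
    messages=[{"role": "user", "content": "Reply with the word: pong"}],
)
print(response.choices[0].message.content)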


dosubot bot added the bug label Jan 18, 2025
@eamag

eamag commented Feb 12, 2025

I think you have to pass a config too, like

import os

from paperqa import Settings, ask

model = "gemini/gemini-2.0-flash-exp"

# Route the model through an explicit litellm model_list with an API key,
# so paperqa doesn't fall back to its OpenAI default.
llm_config = {
    "model_list": [
        {
            "model_name": model,
            "litellm_params": {
                "model": model,
                "api_key": os.getenv("GEMINI_API_KEY"),
            },
        }
    ]
}

paper_directory = "docs"  # folder containing your papers
question = "What manufacturing challenges are unique to bispecific antibodies?"

s = Settings(
    llm=model,
    llm_config=llm_config,
    summary_llm=model,
    summary_llm_config=llm_config,
    embedding="ollama/granite3-dense",
    paper_directory=paper_directory,
    verbosity=3,
)
# The agent LLM is configured separately from llm/summary_llm.
s.agent.agent_llm = model
s.agent.agent_llm_config = llm_config

answer_response = ask(
    question,
    settings=s,
)
print(answer_response)

I haven't tried Google's embeddings, though.
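
If you want to try them, by analogy it might just be a matter of swapping in the embedding name from the original post; this is an untested sketch, reusing the model and llm_config variables from above:

# Untested: same Gemini settings, but with Google's embedding model from
# the original post instead of a local Ollama one. Assumes GEMINI_API_KEY
# is exported so litellm can authenticate the embedding calls as well.
s = Settings(
    llm=model,
    llm_config=llm_config,
    summary_llm=model,
    summary_llm_config=llm_config,
    embedding="gemini/text-embedding-004",
    paper_directory=paper_directory,
)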
