Description
On the agent, I set function_calling_llm, intending tool information to be passed to the model through native function calls rather than injected into the prompt. However, monitoring with an event listener showed that this never takes effect; tools are always passed via the prompt.
After reviewing the source code, I found the cause: litellm.utils.supports_function_calling(self.model, custom_llm_provider=provider) is used to determine whether the model supports function calling. Since our model is newly developed in-house and not included in litellm's model list, the check reports no support. How should I address this problem?
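For reference, here is a minimal sketch of how the check behaves for an unlisted model, plus the workaround I'm considering. The model name is a placeholder for our in-house model, and I'm assuming that litellm.register_model updates the same model map that supports_function_calling reads:

```python
import litellm
from litellm.utils import supports_function_calling

# Placeholder name for our in-house model, served via an OpenAI-compatible endpoint.
MODEL = "openai/my-company-model"

# The model is not in litellm's model map, so the capability check
# that crewAI relies on reports no function-calling support.
print(supports_function_calling(MODEL, custom_llm_provider="openai"))  # expected: False

# Possible workaround (assumption): register the model with litellm so the
# capability lookup can find it, using the same keys as litellm's model cost map.
litellm.register_model({
    "openai/my-company-model": {
        "max_tokens": 8192,
        "input_cost_per_token": 0.0,
        "output_cost_per_token": 0.0,
        "litellm_provider": "openai",
        "mode": "chat",
        "supports_function_calling": True,
    }
})

print(supports_function_calling(MODEL, custom_llm_provider="openai"))  # hopefully True now
```

I'm not sure whether this is the intended way to handle custom models in crewAI, or whether the function_calling_llm path should offer an explicit override instead of relying solely on litellm's model list.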
Steps to Reproduce
null
Expected behavior
null
Screenshots/Code snippets
null
Operating System
Ubuntu 20.04
Python Version
3.10
crewAI Version
0.193.2
crewAI Tools Version
0.73.1
Virtual Environment
Venv
Evidence
null
Possible Solution
null
Additional context
null