
function_calling_llm does not work #3708

@moqiaaa

Description


In the agent, I set function_calling_llm intending to pass tool information through function calls rather than injecting it into the prompt. However, monitoring with an event listener showed that this did not take effect: the agent always used the prompt method.

After reviewing the source code, I found that the cause is that litellm.utils.supports_function_calling(self.model, custom_llm_provider=provider) is used to decide whether the model supports function calling. Since this model was newly developed by our company and is not included in litellm's model list, the check fails. How should I address this problem?

Steps to Reproduce

null

Expected behavior

null

Screenshots/Code snippets

null

Operating System

Ubuntu 20.04

Python Version

3.10

crewAI Version

0.193.2

crewAI Tools Version

0.73.1

Virtual Environment

Venv

Evidence

null

Possible Solution

null

Additional context

null

Labels

bug (Something isn't working)