
Conversation

@devin-ai-integration
Contributor

Fix function_calling_llm support for custom models

Summary

Fixes #3708 where function_calling_llm did not work for custom models not in litellm's supported models list.

The issue occurred because, when a user set function_calling_llm to a custom model (e.g., a company's proprietary model served through a custom provider), the framework checked whether the model supports function calling via litellm.utils.supports_function_calling(). For models not in litellm's list, this check returned False, so function calling was disabled even when the model actually supported it.
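In pseudocode, the old gate looked roughly like the sketch below. This is a simplified illustration, not the verbatim crewAI source; litellm.utils.supports_function_calling() is a real litellm helper, but the wrapper name here is hypothetical.

import litellm

def model_supports_tools(model: str) -> bool:
    # Hypothetical wrapper illustrating the old behavior: returns False for
    # any model missing from litellm's registry, even if the underlying
    # model does support function calling.
    return litellm.utils.supports_function_calling(model=model)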

Changes (sketched in code below):

  1. BaseLLM class: Added supports_function_calling() method that returns True by default (similar to existing supports_stop_words() method)
  2. LLM class: Added supports_function_calling parameter to allow explicit override of litellm's auto-detection
  3. Tests: Added test coverage for both the BaseLLM default and LLM override functionality
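A condensed sketch of changes 1 and 2 (illustrative shape only, not the verbatim diff; the private attribute name is an assumption):

from typing import Optional

import litellm

class BaseLLM:
    def supports_function_calling(self) -> bool:
        # New default: assume function calling works, mirroring supports_stop_words().
        return True

class LLM(BaseLLM):
    def __init__(self, model: str, supports_function_calling: Optional[bool] = None, **kwargs):
        self.model = model
        self._supports_function_calling = supports_function_calling

    def supports_function_calling(self) -> bool:
        # An explicit user override wins; otherwise fall back to litellm's auto-detection.
        if self._supports_function_calling is not None:
            return self._supports_function_calling
        return litellm.utils.supports_function_calling(model=self.model)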

Usage examples:

# For custom models through litellm that support function calling
llm = LLM(model="custom-provider/my-model", supports_function_calling=True)

# For custom LLMs extending BaseLLM (now works by default)
class MyLLM(BaseLLM):
    def call(self, messages, tools=None, **kwargs):
        # implementation goes here
        pass
    # No need to override supports_function_calling() unless you want it to return False
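For context, the override is meant to flow through to an agent's function_calling_llm. A hedged wiring sketch (role/goal/backstory are placeholder values):

from crewai import Agent, LLM

tool_llm = LLM(model="custom-provider/my-model", supports_function_calling=True)

agent = Agent(
    role="Researcher",                    # placeholder
    goal="Answer questions using tools",  # placeholder
    backstory="An example agent",         # placeholder
    function_calling_llm=tool_llm,        # now honored for models outside litellm's list
)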

Review & Testing Checklist for Human

  • Run full test suite - I encountered pytest plugin conflicts (pytest-recording vs pytest-vcr) and could only verify via direct Python execution. Please run uv run pytest tests/ to ensure no regressions
  • Test with an actual custom model - Create an agent whose model is not in litellm's list and verify function_calling_llm works with the new supports_function_calling=True parameter
  • Verify backward compatibility - Check that existing custom LLMs that override supports_function_calling() still work correctly
  • Check dependency changes - The uv.lock file was regenerated from scratch due to corruption; review for any unexpected dependency updates

Test Plan

  1. Create an LLM with a custom model name: LLM(model="company/custom-model", supports_function_calling=True)
  2. Set it as function_calling_llm on an agent
  3. Verify that tools use function calling (via event listeners or logging)
  4. Run the existing test suite to ensure no regressions (see the test sketch below)
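A minimal pytest sketch of steps 1 and 4 (test names, the BaseLLM import path, and its constructor signature are assumptions; the PR's actual tests may differ):

from crewai import LLM
from crewai.llms.base_llm import BaseLLM  # import path assumed

class DummyLLM(BaseLLM):
    def call(self, messages, tools=None, **kwargs):
        return "ok"

def test_base_llm_defaults_to_function_calling():
    # Change 1: BaseLLM subclasses no longer need an override.
    assert DummyLLM(model="company/custom-model").supports_function_calling() is True

def test_llm_explicit_override_bypasses_litellm_check():
    # Change 2: the parameter bypasses litellm's supported-models detection.
    llm = LLM(model="company/custom-model", supports_function_calling=True)
    assert llm.supports_function_calling() is True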

Notes

- Add supports_function_calling() method to BaseLLM class with default True
- Add supports_function_calling parameter to LLM class to allow override of litellm check
- Add tests for both BaseLLM default and LLM override functionality
- Fixes #3708: Custom models not in litellm's list can now use function calling

Co-Authored-By: João <[email protected]>
@devin-ai-integration
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

- Add noqa comment for hardcoded test JWT token
- Add return statement to satisfy ruff RET503 check

Co-Authored-By: João <[email protected]>
