
Conversation

@Akshat8510

1. Link to an existing issue (if applicable):

Problem:
The LiteLLM adapter's _content_to_message_param returned early as soon as it encountered a function_response part. In multipart Content objects (common in vision-language workflows), this caused the remaining parts, such as text captions or image blobs, to be silently discarded.
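For illustration, the old control flow looked roughly like this (a simplified sketch, not the adapter's exact code):

```python
def _old_content_to_message_param(content):
    # Simplified sketch of the previous early-return behavior.
    for part in content.parts:
        if part.function_response:
            # Returning here drops every part that follows, e.g. a text
            # caption or an inline_data image blob in the same Content.
            return {
                "role": "tool",
                "tool_call_id": part.function_response.id,
                "content": str(part.function_response.response),
            }
    # ... handling for text and other parts lived after this point,
    # so it never ran once a tool response had been seen.
```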

Solution:
I modified _content_to_message_param to accumulate all parts into a list of messages instead of returning on the first function_response part. A simplified sketch of the new flow follows the list below.

  • function_response parts are converted to messages with the tool role.
  • All other parts (text, inline_data, etc.) are processed via the _get_content helper and appended as a subsequent message (usually with the user role).
  • Updated the docstring to reflect support for mixed/multipart content.
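A minimal, self-contained sketch of the accumulation approach (plain dicts and a text-only stand-in for _get_content keep it runnable; the real adapter uses litellm's typed message classes and its own helpers):

```python
import json

async def _get_content(parts, provider=None):
    # Stand-in for the real helper, which also handles inline_data
    # and other part types; this version joins text parts only.
    return "\n".join(p.text for p in parts if p.text)

async def _content_to_message_param(content, provider=None):
    messages = []
    other_parts = []
    for part in content.parts or []:
        if part.function_response:
            # Each tool result becomes its own tool-role message.
            # json.dumps is a simplified serialization for this sketch.
            messages.append({
                "role": "tool",
                "tool_call_id": part.function_response.id,
                "content": json.dumps(part.function_response.response),
            })
        else:
            other_parts.append(part)  # text, inline_data, etc.
    if other_parts:
        # Remaining parts are appended as a subsequent message instead
        # of being silently dropped.
        messages.append({
            "role": "user",
            "content": await _get_content(other_parts, provider=provider),
        })
    if not messages:
        return []
    return messages if len(messages) > 1 else messages[0]
```

Returning a bare message in the single-part case preserves existing call sites, which the backward-compatibility scenario in the testing plan below exercises.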

Testing Plan

Unit Tests:

  • I have verified the logic with a custom test suite covering mixed content.
  • All local verification tests pass.

Summary of verification:
I ran a test suite covering three core scenarios (a sketch of one such test follows the list):

  1. Mixed Content (Tool + Text): Verified that a tool response followed by text now returns a list of 2 messages (tool, then user).
  2. Multiple Tools: Verified that parallel tool responses are correctly returned as a list of tool messages.
  3. Text Only: Verified backward compatibility; standard text messages still return as a single message.
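For instance, the parallel-tools scenario might be checked like this (a sketch only: it assumes pytest-asyncio, the dict-shaped messages from the sketch above, and made-up tool names and call IDs):

```python
import pytest
from google.genai import types

@pytest.mark.asyncio
async def test_parallel_tool_responses_become_tool_messages():
    # _content_to_message_param as sketched in the solution above.
    content = types.Content(
        role="user",
        parts=[
            types.Part(function_response=types.FunctionResponse(
                id="call_1", name="get_weather", response={"temp_c": 21})),
            types.Part(function_response=types.FunctionResponse(
                id="call_2", name="get_time", response={"hour": 9})),
        ],
    )
    result = await _content_to_message_param(content)
    assert isinstance(result, list) and len(result) == 2
    assert all(msg["role"] == "tool" for msg in result)
```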

Manual End-to-End (E2E) Tests:

I used a reproduction script to simulate the conversion of a types.Content object containing both a FunctionResponse and a text part.
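The script itself isn't attached here; a minimal reproduction along these lines (made-up tool name and payload, dict-shaped messages assumed) would produce output like the logs below:

```python
import asyncio
from google.genai import types

async def main():
    # Mixed content: a tool response followed by a text caption.
    mixed = types.Content(
        role="user",
        parts=[
            types.Part(function_response=types.FunctionResponse(
                id="call_1", name="lookup", response={"ok": True})),
            types.Part(text="caption describing the attached image"),
        ],
    )
    result = await _content_to_message_param(mixed)  # as sketched above
    if isinstance(result, list):
        print(f"Result: List of {len(result)} messages")
        for i, msg in enumerate(result):
            print(f"  [{i}] Role: {msg['role']}")
    else:
        print(f"Result: Single message. Role: {result['role']}")

asyncio.run(main())
```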

Terminal Logs:

```
--- Testing: Mixed Content ---
Result: List of 2 messages
  [0] Role: tool
  [1] Role: user
--- Testing: Multiple Tools ---
Result: List of 2 messages
  [0] Role: tool
  [1] Role: tool
--- Testing: Text Only ---
Result: Single message. Role: user
```

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.

google-cla bot commented Jan 9, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@gemini-code-assist

Summary of Changes

Hello @Akshat8510, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves an issue in the LiteLLM adapter where multipart content objects, particularly those combining tool responses with text or media, were not fully processed. The changes ensure that all components of a complex content object are correctly converted into a list of messages, enhancing the adapter's ability to support advanced vision-language and tool-use workflows without data loss.

Highlights

  • Enhanced Content Handling in LiteLLM Adapter: The _content_to_message_param function in the LiteLLM adapter has been updated to correctly process multipart types.Content objects, preventing the silent discarding of non-function_response parts.
  • Accumulation of All Content Parts: The function now accumulates all parts of a types.Content object into a list of messages, rather than returning prematurely after the first function_response.
  • Consistent Message Role Assignment: function_response parts are now consistently converted to messages with the tool role, while other content types (text, inline_data) are processed and appended as separate messages, typically with the user role.
  • Docstring Update: The docstring for _content_to_message_param has been updated to accurately reflect its new capability to handle mixed and multipart content.


adk-bot added the label "models [Component] Issues related to model support" on Jan 9, 2026
adk-bot (Collaborator) commented Jan 9, 2026

Response from ADK Triaging Agent

Hello @Akshat8510, thank you for your contribution!

Before we can merge this pull request, we need you to sign our Contributor License Agreement (CLA). You can do so by following the instructions at https://cla.developers.google.com/.

Once you've signed the CLA, the status check will be updated, and we can proceed with the review. Thanks!

gemini-code-assist bot left a comment

Code Review

This pull request aims to fix an issue with handling mixed tool responses and other content types in the LiteLLM adapter. The approach of accumulating messages is correct, but the current implementation has some regressions. It introduces a less safe method for serializing tool responses, and more critically, it incorrectly handles thought and function_call parts in messages, leading to dropped information. Additionally, a large part of the original message processing logic has become unreachable dead code. I've provided detailed feedback on how to address these issues.

…responses while preserving original logic

Signed-off-by: Akshat Kumar <[email protected]>
@Akshat8510

@gemini-code-assist /gemini review

gemini-code-assist bot left a comment

Code Review

This pull request refactors the _content_to_message_param function to correctly handle mixed content containing both tool responses and other media like text. The change correctly separates tool responses from other parts and constructs a list of messages, fixing a bug where parts of the content were being silently dropped. My review identifies one critical issue: a large block of dead code that was not removed after the refactoring. I've also suggested improving the function's docstring to reflect its new, more complex behavior, which is important for future maintainability.

@Akshat8510

@gemini-code-assist /gemini review

gemini-code-assist bot left a comment

Code Review

This pull request effectively refactors _content_to_message_param to correctly handle mixed content, addressing a bug where parts of multipart messages were being discarded. The new logic for separating and processing tool responses from other content is a significant improvement. I've identified a couple of issues: some leftover dead code from the refactoring and a bug in how reasoning content is constructed. Addressing these will make the implementation solid.

```python
        content_parts.append(part)

    message_content = await _get_content(content_parts, provider=provider) or None
    reasoning_content = "\n".join([p.thought for p in reasoning_parts]) if reasoning_parts else None
```
gemini-code-assist bot commented (severity: high)

There appears to be a bug in the construction of reasoning_content. It's currently using p.thought, which seems to be a boolean flag, instead of p.text, which holds the actual reasoning text. This will result in incorrect content for the thought field of the ChatCompletionAssistantMessage.

Suggested change:

```diff
-reasoning_content = "\n".join([p.thought for p in reasoning_parts]) if reasoning_parts else None
+reasoning_content = "\n".join([p.text for p in reasoning_parts if p.text]) if reasoning_parts else None
```

```python
    if not final_messages:
        return []

    return final_messages if len(final_messages) > 1 else final_messages[0]
```
gemini-code-assist bot commented (severity: high)

After this return statement, there is a block of unreachable code from line 551 to 608. This seems to be leftover code from the refactoring that should be removed to improve code clarity and maintainability.

@Akshat8510

@gemini-code-assist /gemini review

gemini-code-assist bot left a comment

Code Review

This pull request refactors _content_to_message_param to correctly handle mixed content, including tool responses alongside text or media. The previous implementation would prematurely return upon encountering a tool response, discarding subsequent content. The new logic correctly accumulates all parts and constructs a list of messages. While the core logic of the fix is sound, a significant block of dead code from the previous implementation has been left in the function, which needs to be removed.

@Akshat8510

Closing this PR, as the issue has been addressed in commit fdc286a. Glad to see the data-loss issue for mixed content is resolved! Looking forward to contributing more.

Akshat8510 closed this on Jan 9, 2026

Development

Successfully merging this pull request may close these issues:

[Bug] LiteLlm adapter silently drops non-function_response parts in multipart Content