Description
Issue Summary
Tool responses from MCP (Model Context Protocol) servers are double-serialized when converted to LiteLLM message format, creating triple-nested JSON that prevents Claude, GPT, and other non-Gemini models from parsing and presenting tool results to users.
Environment
- ADK Version: 1.19.0
- Affected Models: All models via LiteLLM (Claude via Vertex AI, Azure OpenAI GPT-5, etc.)
- Tool Integration: MCP servers (e.g., Google Workspace via Gluon Link)
- Working Models: Native Gemini models (not affected - use different conversion path)
Problem Description
When MCP tools return responses, the data comes as already-serialized JSON strings. However, the _content_to_message_param() function in lite_llm.py (line 313) unconditionally calls _safe_json_serialize() on these responses, which performs json.dumps() on already-JSON strings. This creates escaped, triple-nested JSON that models cannot parse.
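The double serialization can be reproduced in isolation with nothing but `json.dumps` (a minimal sketch: the real `_safe_json_serialize` helper adds error handling, but its core is `json.dumps`):

```python
import json

# What an MCP server hands back: an already-serialized JSON string.
mcp_response = '{"type": "files", "count": 2}'

# What lite_llm.py effectively does at line 313: serialize it again.
double = json.dumps(mcp_response)

# The result is a JSON *string literal*, not a JSON object:
assert json.loads(double) == mcp_response        # decodes to a str...
assert not isinstance(json.loads(double), dict)  # ...never to structured data
```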
Root Cause
File: lite_llm.py
Line: 313
```python
tool_messages.append(
    ChatCompletionToolMessage(
        role="tool",
        tool_call_id=part.function_response.id,
        content=_safe_json_serialize(part.function_response.response),  # Double serialization
    )
)
```

The `_safe_json_serialize()` helper (lines 180-189) calls `json.dumps()` without checking whether the input is already a JSON string, causing:
```text
Original MCP response:                  {"type": "files", "count": 2, ...}
After first serialization (MCP):        '{"type": "files", "count": 2, ...}'
After second serialization (line 313):  '"{\"type\": \"files\", \"count\": 2, ...}"'
Final in conversation:                  '{"content": [{"type": "text", "text": "{\\\"type\\\": ..."}]}'  # Triple-nested!
```
Impact
Models cannot parse tool results:
- Claude receives `"{\\\"type\\\": \\\"files\\\"..."` instead of clean JSON
- GPT receives the same malformed triple-escaped structure
- Models fail to extract actual tool data (file listings, search results, etc.)
- Users see incomplete responses: the model says "I'll list files" but no actual file data is displayed
- Models can call tools successfully, but cannot present results to users
This breaks the entire tool usage workflow for non-Gemini models.
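The breakage is visible from the model's side: a single `json.loads` pass, which is all a standard tool-result parser performs, yields an opaque string rather than structured data (a sketch using the example payload from above):

```python
import json

# What the model receives in the tool message (double-serialized):
received = json.dumps(json.dumps({"type": "files", "count": 2}))

decoded = json.loads(received)
assert isinstance(decoded, str)  # still opaque text, not a dict

# Only a second, non-standard decode pass recovers the data:
assert json.loads(decoded) == {"type": "files", "count": 2}
```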
Steps to Reproduce
1. Configure an ADK agent with a LiteLLM model and MCP tools:

   ```python
   model = LiteLlm(model="vertex_ai/claude-sonnet-4-5@20250929", stream=True)
   toolset = McpToolset(mcp_servers={"google": mcp_config})
   agent = Agent(model=model, toolset=toolset)
   ```

2. Send a request that triggers tool execution:
   "List my recent files from Google Drive"

3. Observe in logs:
   - Tool executes successfully
   - MCP returns valid JSON: `{"type": "files", "items": [...]}`
   - Conversion to LiteLLM format double-serializes
   - Model receives: `'{"content": [{"type": "text", "text": "{\\\"type\\\"..."}]}'`

4. Model response:
   - Says: "I'll list the files from your Drive root directory."
   - Shows: No actual file data (cannot parse the triple-nested JSON)
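The buggy conversion step can also be reproduced without a live MCP server by driving the serializing expression directly (a sketch with a hypothetical stand-in dataclass; the real `function_response` part and `ChatCompletionToolMessage` types live in ADK and LiteLLM):

```python
import json
from dataclasses import dataclass


@dataclass
class FunctionResponse:
    """Hypothetical stand-in for the function_response part in ADK."""
    id: str
    response: object


def _safe_json_serialize(obj) -> str:
    # Simplified: the real helper in lite_llm.py adds error handling,
    # but its core is an unconditional json.dumps.
    return json.dumps(obj)


# MCP tools hand back a pre-serialized JSON string:
fr = FunctionResponse(id="call_1", response='{"type": "files", "items": []}')

# The buggy conversion (mirrors line 313):
content = _safe_json_serialize(fr.response)

# The tool message content is now an escaped string literal:
assert json.loads(content) == fr.response
assert not isinstance(json.loads(content), dict)
```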
Example Log Evidence
From uvicorn_debug.log (line 3355):
```json
{
  "content": [
    {
      "type": "text",
      "text": "{\n  \"type\": \"files\",\n  \"path\": \"/\",\n  \"count\": 2,\n  \"items\": [\n    {\n      \"name\": \"mydoc_v2.pdf\",\n      \"owner\": \"my_user\",\n      \"sharing\": \"Private\",\n      \"modified\": \"2025-06-13\",\n      \"size\": \"1.6 MB\",\n      \"link\": \"https://drive.google.com/file/d/...\",\n      \"id\": \"1F4y...\"\n    }\n  ]\n}"
    }
  ]
}
```

Notice: the entire JSON payload is inside a quoted string within the `"text"` field, with escaped quotes `\"`. This is the result of double serialization.
What Claude/GPT receives:
'{"content": [{"type": "text", "text": "{\\n \\"type\\": \\"files\\",\\n ..."}]}'
They cannot parse this as structured data.
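Decoding the logged payload confirms this: the `text` field holds a string that must itself be JSON-decoded a second time before any structure appears (a sketch; the payload is trimmed from the log excerpt above):

```python
import json

# Trimmed reconstruction of the logged tool message:
logged = {
    "content": [
        {
            "type": "text",
            "text": "{\n  \"type\": \"files\",\n  \"path\": \"/\",\n  \"count\": 2\n}",
        }
    ]
}

text = logged["content"][0]["text"]
assert isinstance(text, str)                # opaque text to the model
assert json.loads(text)["type"] == "files"  # structure appears only after an extra decode
assert json.loads(text)["count"] == 2
```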
Expected Behavior
Tool responses should be passed through as-is when they're already JSON strings, or serialized only once if they're Python objects:
```python
# Correct handling:
if isinstance(part.function_response.response, str):
    content = part.function_response.response  # Already serialized by MCP
else:
    content = _safe_json_serialize(part.function_response.response)  # Serialize Python objects
```

Proposed Fix
File: lite_llm.py
Lines: 306-320
```python
# BUGFIX: Check if response is already a string before serializing
tool_messages = []
for part in content.parts:
    if part.function_response:
        # If the response is already a string (from MCP), don't serialize again
        response_content = (
            part.function_response.response
            if isinstance(part.function_response.response, str)
            else _safe_json_serialize(part.function_response.response)
        )
        tool_messages.append(
            ChatCompletionToolMessage(
                role="tool",
                tool_call_id=part.function_response.id,
                content=response_content,
            )
        )
if tool_messages:
    return tool_messages if len(tool_messages) > 1 else tool_messages[0]
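A quick check of the proposed guard (a hypothetical test sketch, not from the ADK test suite, with `_safe_json_serialize` simplified to its `json.dumps` core) confirms both input shapes end up serialized exactly once:

```python
import json


def _safe_json_serialize(obj) -> str:
    return json.dumps(obj)  # simplified core of the real helper


def fixed_response_content(response) -> str:
    """Mirrors the proposed fix: pass strings through, serialize objects."""
    if isinstance(response, str):
        return response
    return _safe_json_serialize(response)


# MCP case: a pre-serialized string passes through untouched.
assert fixed_response_content('{"type": "files"}') == '{"type": "files"}'

# Native-tool case: a Python object is serialized exactly once,
# so one json.loads pass recovers the structured data.
assert json.loads(fixed_response_content({"type": "files"})) == {"type": "files"}
```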