Initial Checks
- I confirm that I'm using the latest version of Pydantic AI
- I confirm that I searched for my issue in https://github.com/pydantic/pydantic-ai/issues before opening this issue
Description
There is an edge case where, if multiple UserPromptParts are consecutive, the next ToolReturnPart ends up with copies of every message part so far in the conversation. As the agent's flow moves forward, this means the size of the conversation doubles every time a ToolReturnPart is emitted, causing an explosion in context.
This bug was introduced in the v1.0.7 release.
I believe the root cause is this line:
https://github.com/pydantic/pydantic-ai/blob/main/pydantic_ai_slim/pydantic_ai/_agent_graph.py#L1186
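For intuition, here is a deliberately simplified sketch of the failure mode. This is not pydantic-ai's actual code: Req and buggy_clean are illustrative stand-ins, and the real merge logic is more involved. The point is that merging consecutive requests by extending a parts list in place mutates objects the caller's history still references, so every subsequent cleaning pass re-appends the merged parts:

from dataclasses import dataclass, field

@dataclass
class Req:
    parts: list[str] = field(default_factory=list)

def buggy_clean(history: list[Req]) -> list[Req]:
    # Stand-in for the suspect merge: fold each request into the previous
    # one by extending its parts list *in place*.
    cleaned: list[Req] = []
    for msg in history:
        if cleaned:
            cleaned[-1].parts.extend(msg.parts)
        else:
            cleaned.append(msg)
    return cleaned

stub = Req(['tool-return'])
history = [stub, Req(['user prompt'])]
buggy_clean(history)  # mutates `stub`, which the original history still holds
buggy_clean(history)  # cleaning again re-appends the already-merged parts
print(stub.parts)     # ['tool-return', 'user prompt', 'user prompt']

In the real agent loop the duplication compounds each time a ToolReturnPart is emitted, which is what produces the doubling described above.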
I have provided a unit test that triggers the beginning of the "bad state", which can result in this repetitive duplication.
The unit test may be appended to the bottom of tests/test_history_processor.py
The test will fail on main.
I am opening a PR with a fix for this bug as well, and will ensure no regressions occur.
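Once appended, the failing test below can be run in isolation with pytest's standard -k filter:

pytest tests/test_history_processor.py -k test_clean_message_history_keeps_tool_stub_separate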
Example Code
def test_clean_message_history_keeps_tool_stub_separate():
    """Regression guard for b26a6872f that merged tool-return stubs into the next user request."""
    # TODO: imports should get moved to the top whenever we open the PR
    from pydantic_ai._agent_graph import _clean_message_history
    from pydantic_ai.messages import ModelRequest, ToolReturnPart, UserPromptPart

    tool_stub = ModelRequest(
        parts=[
            ToolReturnPart(
                tool_name='summarize',
                content='summaries galore',
                tool_call_id='call-1',
            )
        ]
    )
    user_request = ModelRequest(parts=[UserPromptPart(content='fresh prompt')])

    cleaned = _clean_message_history([tool_stub, user_request])

    assert len(cleaned) == 2, 'tool-return stubs must remain separate from subsequent user prompts'
    assert len(cleaned[0].parts) == 1, 'tool-return part started as unique and should remain unique'
    assert isinstance(cleaned[0].parts[0], ToolReturnPart)
    assert isinstance(cleaned[1].parts[0], UserPromptPart)

Python, Pydantic AI & LLM client version
Python 3.12, 3.13
`pydantic_ai>=1.0.7`
LLM Clients tested:
* Claude Sonnet 4 from Vertex AI
* Claude Sonnet 4-5 from Vertex AI
* ChatGPT 5 Codex from OpenAI
* ChatGPT 5 from Azure
* ChatGPT 5 from OpenAI
* Qwen3 Coder 480b from Cerebras
I am confident that this bug is present.