fix: trim history by logical turn boundaries instead of message count since history is always a clean sequence of alternating user and assistant messages#71
Status: Open
Danchi-1 wants to merge 1 commit into mofa-org:main
📋 Summary
The previous history trimming logic used a raw message count (1 + max_exchanges * 2) based on the assumption that every conversation turn is exactly two messages: one user, one assistant. That assumption breaks down when tool-calling is involved, because a single logical turn can span four or more messages. The new logic groups messages into logical turns by identifying User message boundaries, then drops only complete oldest turns until the session is within the max_exchanges limit.
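The mismatch between the old count-based cutoff and a tool-calling turn can be illustrated as follows (the message labels here are illustrative stand-ins, not the project's actual types):

```rust
fn main() {
    // One logical tool-calling turn in the OpenAI-style chat format
    // (labels are illustrative only).
    let turn = [
        "user",                   // the question
        "assistant (tool_calls)", // assistant decides to call a tool
        "tool (result)",          // tool output fed back in
        "assistant (final)",      // final answer to the user
    ];
    assert_eq!(turn.len(), 4); // one logical turn, four messages

    // The old heuristic kept `1 + max_exchanges * 2` messages
    // (system prompt + two messages per turn), so with tool-calling
    // turns the cutoff can land in the middle of a turn.
    let max_exchanges = 3;
    let old_keep = 1 + max_exchanges * 2;
    assert_eq!(old_keep, 7); // but 3 tool-calling turns + system prompt = 13 messages
}
```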
🔗 Related Issues
Closes #30
Related to #
🧠 Context
The change is needed because manage_history trimmed conversation history by raw message count, which assumes every turn is exactly two messages, but tool-calling turns can span four or more. It solves the problem of history trimming slicing through the middle of a tool-call sequence, leaving orphaned or mismatched messages that corrupt the context sent to the AI provider. The fix adopts a turn-boundary approach by grouping everything between User messages as one atomic unit because User messages are the only reliable structural delimiter in the OpenAI message format that marks where one logical exchange ends and another begins.
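A minimal sketch of the turn-boundary approach described above. The `Message`/`Role` types and the `manage_history` signature are hypothetical stand-ins for the session's real message structs, not the PR's actual code:

```rust
// Hypothetical stand-ins for the session's OpenAI-style message types.
#[derive(Debug, Clone, PartialEq)]
enum Role { System, User, Assistant, Tool }

#[derive(Debug, Clone)]
struct Message { role: Role }

/// Keep at most `max_exchanges` logical turns, where a turn is a User
/// message plus everything up to (but not including) the next User message.
/// The leading system prompt, if present, is always preserved.
fn manage_history(messages: &mut Vec<Message>, max_exchanges: usize) {
    // Never count the system prompt as part of a turn.
    let start = if matches!(messages.first(), Some(m) if m.role == Role::System) { 1 } else { 0 };

    if max_exchanges == 0 {
        messages.truncate(start);
        return;
    }

    // Each User message marks the start of a logical turn.
    let turn_starts: Vec<usize> = (start..messages.len())
        .filter(|&i| messages[i].role == Role::User)
        .collect();

    if turn_starts.len() <= max_exchanges {
        return; // within the window: nothing to trim
    }

    // Drop whole oldest turns: everything before the first User message
    // of the newest `max_exchanges` turns is removed atomically.
    let keep_from = turn_starts[turn_starts.len() - max_exchanges];
    messages.drain(start..keep_from);
}

fn main() {
    // Turn 1 is a complete tool-call sequence (4 messages); turn 2 is plain.
    let mut msgs: Vec<Message> = [
        Role::System,
        Role::User, Role::Assistant, Role::Tool, Role::Assistant,
        Role::User, Role::Assistant,
    ].into_iter().map(|role| Message { role }).collect();

    manage_history(&mut msgs, 1);

    // The whole tool-call turn was dropped as one unit.
    assert_eq!(msgs.len(), 3);
    assert_eq!(msgs[0].role, Role::System);
    assert_eq!(msgs[1].role, Role::User);
}
```

Treating everything between two User messages as one unit is what keeps an `assistant+tool_calls` message and its tool results together: they can only ever be dropped alongside the User message that started the turn.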
🛠️ Changes
🧪 How You Tested
Start a session and send more messages than max_history_exchanges allows. Verify that the oldest turns are dropped cleanly, that the system prompt is always the first message, and that responses remain coherent.
Configure a session with enable_local_mcp = true and trigger a single tool call. Let it complete fully (user -> assistant+tool_calls -> tool result -> final assistant), then send enough subsequent messages to push it out of the history window. Verify the entire tool-call turn is dropped as one unit with no orphaned messages remaining.
Trigger a turn where the assistant calls more than one tool. After history trimming occurs, inspect session.messages and confirm all tool result messages from that turn were dropped together alongside their parent assistant tool-call message.
Send exactly max_history_exchanges turns. Verify nothing is trimmed and all messages are preserved.
Send max_history_exchanges + 1 turns. Verify only the single oldest complete turn is dropped and nothing else.
Add a temporary debug log after each manage_history call that prints session.messages and manually confirm no message of type Tool or Assistant+tool_calls exists without its corresponding counterpart.
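The manual inspection in the last step could be automated with a small invariant check. The `Role` type and `history_is_well_formed` helper below are hypothetical, sketching what "no orphaned messages" means under the assumptions in this PR:

```rust
#[derive(Debug, PartialEq)]
enum Role { System, User, Assistant, Tool }

/// A trimmed history is well formed if the first non-system message is a
/// User message, and every Tool result directly follows its parent
/// assistant tool-call message (or a sibling Tool result).
fn history_is_well_formed(roles: &[Role]) -> bool {
    let body = if roles.first() == Some(&Role::System) { &roles[1..] } else { roles };
    if !body.is_empty() && body.first() != Some(&Role::User) {
        return false; // history starts mid-turn: a turn was sliced
    }
    body.windows(2)
        .all(|w| w[1] != Role::Tool || matches!(w[0], Role::Assistant | Role::Tool))
}

fn main() {
    use Role::*;
    // A complete tool-call turn survives trimming intact.
    assert!(history_is_well_formed(&[System, User, Assistant, Tool, Assistant]));
    // An orphaned tool result (its parent assistant tool-call was trimmed away).
    assert!(!history_is_well_formed(&[System, Tool, Assistant]));
    // A turn sliced so the history starts with an assistant reply.
    assert!(!history_is_well_formed(&[System, Assistant, User, Assistant]));
}
```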
📸 Screenshots / Logs (if applicable)
🧹 Checklist
Code Quality
cargo fmt run
cargo clippy passes without warnings
Testing
cargo test passes locally without any error
Documentation
PR Hygiene
main
🚀 Deployment Notes (if applicable)
🧩 Additional Notes for Reviewers