Conversation

@Tarquinen Tarquinen commented Dec 26, 2025

  • Remove support for injecting synthetic assistant messages for context pruning nudges
  • Simplify codebase to only use user message injection

Injecting assistant messages is too risky because it is unknown how different providers handle edge cases such as missing reasoning blocks or malformed assistant content. User message injection is safer and more predictable across all providers. Since nearly all SOTA models are now reasoning models, user message injection is effectively the default behavior already, with less risk.
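A minimal sketch of the retained approach, with illustrative names (`Message`, `injectPruningNudge`) that are assumptions, not identifiers from this codebase: the pruning nudge is appended as a synthetic user message, avoiding the provider-specific requirements (reasoning blocks, content shape) that synthetic assistant turns can trip over.

```typescript
// Hypothetical sketch of user message injection for a context-pruning nudge.
// All names here are illustrative, not taken from the actual codebase.
type Role = "user" | "assistant";

interface Message {
  role: Role;
  content: string;
  synthetic?: boolean; // marks injected messages so they can be filtered later
}

// Append the nudge as a user message. Providers reliably accept plain user
// text, whereas a synthetic assistant turn might need reasoning blocks or a
// provider-specific content structure to be valid.
function injectPruningNudge(history: Message[], nudge: string): Message[] {
  return [...history, { role: "user", content: nudge, synthetic: true }];
}

const history: Message[] = [
  { role: "user", content: "Summarize the log file." },
  { role: "assistant", content: "Here is the summary..." },
];

const nudged = injectPruningNudge(
  history,
  "[note] Older tool outputs were pruned to fit the context window.",
);
console.log(nudged.length); // 3
console.log(nudged[2].role); // "user"
```

Because the injected turn is a plain user message, no model detection (the removed `isReasoningModel` state) is needed: the same code path works for every provider.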

Tarquinen and others added 2 commits December 25, 2025 17:14
Remove the ability to inject synthetic assistant messages for context
pruning nudges. This simplifies the codebase to only use user message
injection.

Reasoning: Injecting assistant messages is risky because it's unknown
how different providers handle edge cases like missing reasoning blocks
or malformed assistant content. User message injection is safer and
more predictable across all providers.

Removed:
- getLastAssistantMessage() helper
- createSyntheticAssistantMessage() function
- isReasoningModel state property
- chat.params hook for model detection
- wrapPrunableToolsAssistant() function
- All assistant prompt files (nudge and system)
@Tarquinen Tarquinen merged commit ae0415c into dev Dec 26, 2025
1 check passed
@Tarquinen Tarquinen deleted the refactor/remove-assistant-message-injection branch December 26, 2025 02:38