v1.1.1-beta.1 - Split prune into discard/extract, assistant-role injection, config restructure #180
Conversation
- Add protectedTurns config option to protect recent tools from pruning
- Track currentTurn in session state using step-start parts
- Store turn number on each cached tool parameter entry
- Skip caching tools that are within the protected turn window
- Exclude turn-protected tools from nudge counter
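A minimal sketch of how that turn window might be applied; the entry shape and function name are illustrative, not the plugin's actual internals.

```ts
// Illustrative cache entry shape; the plugin's real structure may differ.
interface CachedToolEntry {
  toolCallId: string;
  turn: number; // currentTurn value recorded when the entry was cached
}

// A tool call stays protected (excluded from pruning and from the nudge
// counter) while its recorded turn falls inside the recent-turn window.
function isTurnProtected(
  entry: CachedToolEntry,
  currentTurn: number,
  protectedTurns: number,
): boolean {
  return currentTurn - entry.turn < protectedTurns;
}
```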
Reject prune requests for IDs not found in the tool cache, which catches both hallucinated IDs and turn-protected tools that aren't shown in the <prunable-tools> list.
When the last tool used was prune, inject a cooldown message instead of the full prunable-tools list. This prevents the model from repeatedly invoking the prune tool in successive turns. Also refactors magic strings into named constants for better maintainability.
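A rough sketch of these two behaviours, the ID validation and the cooldown; the helper names, constant text, and cache API are assumptions.

```ts
// Hypothetical names; the plugin's actual constants and cache API differ.
const PRUNE_COOLDOWN_MESSAGE =
  "The prune tool was just used; keep working on the task before pruning again.";

// Reject prune requests for IDs missing from the tool cache: this catches
// hallucinated IDs as well as turn-protected tools that were never listed
// in <prunable-tools>.
function findInvalidPruneIds(
  requestedIds: string[],
  toolCache: Map<string, unknown>,
): string[] {
  return requestedIds.filter((id) => !toolCache.has(id));
}

// When the last tool used was prune, inject the cooldown note instead of the
// full list so the model does not chain prune calls across successive turns.
function selectInjection(
  lastToolUsed: string | undefined,
  prunableList: string,
): string {
  return lastToolUsed === "prune" ? PRUNE_COOLDOWN_MESSAGE : prunableList;
}
```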
- Note that some tools in context may not appear in the list (expected)
- Clarify that the list is only injected when tools are available
Renames the flat protectedTurns config to a nested turnProtection object matching the structure of nudge. It now has:
- enabled: boolean to toggle the feature
- turns: number of turns to protect tools after use
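For reference, a sketch of the before/after shapes implied by this rename; the numeric values are placeholders, not the plugin's defaults.

```ts
// Before: flat option (placeholder value).
const before = { protectedTurns: 2 };

// After: nested object mirroring the structure of `nudge`.
const after = {
  turnProtection: {
    enabled: true, // toggle the feature
    turns: 2,      // number of turns to protect tools after use
  },
};
```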
…tion Feat/turn based tool protection
…quent tiny prunes
Encourage consolidated pruning guidance
Refactor prompt loading and rename pruning prompt files
- Replace reason-as-first-id pattern with structured metadata object
- metadata.reason: 'completion' | 'noise' | 'consolidation'
- metadata.distillation: optional object for consolidation findings
- Update prompts to clarify distillation rules per reason type
- Add docs/ to gitignore
Implement hidden distillation in prune tool
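A rough TypeScript shape for the structured metadata described above; only the field names and reason values come from the commit, the rest is assumption.

```ts
// Field names follow the commit above; everything else is an assumption.
type PruneReason = "completion" | "noise" | "consolidation";

interface PruneMetadata {
  reason: PruneReason;
  // Optional findings kept when reason is "consolidation";
  // the exact value type is a guess.
  distillation?: Record<string, string>;
}

// Replaces the older reason-as-first-id call pattern, e.g. (illustrative):
// prune({ ids: ["tool_123"], metadata: { reason: "noise" } })
```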
Update prune distillation guidelines for high fidelity
- Splits system and nudge prompts into separate files for discard, extract, or both.
- Updates prompt loading to pick the appropriate system prompt based on configuration.
- Updates nudge selection so the nudge string is chosen dynamically.
- Deletes the previous monolithic prompt files.
- Remove cross-references between discard/extract tools so each works independently
- Add decision hierarchy in both-mode: discard as default, extract for preservation
- Consolidate redundant 'WHEN TO X' scenarios with trigger lists
- Change trigger lists to focus on timing ('evaluate when') vs prescriptive action
- Remove confusing 'minimal distillation for cleanup' concept from extract prompts
- Standardize structure across all system prompts and nudges
…rd-and-extract Split prune tool into discard and extract
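A hedged sketch of how the mode-dependent prompt selection could look; the mode names mirror the split above, but the function, config access, and file layout are assumptions.

```ts
type PruneMode = "discard" | "extract" | "both";

// Assumed config shape: tools.discard / tools.extract reduced to enabled flags.
interface ToolsConfig {
  discard: { enabled: boolean };
  extract: { enabled: boolean };
}

// Pick which system prompt / nudge pair to load for the current configuration.
function resolvePruneMode(tools: ToolsConfig): PruneMode | null {
  if (tools.discard.enabled && tools.extract.enabled) return "both";
  if (tools.discard.enabled) return "discard";
  if (tools.extract.enabled) return "extract";
  return null; // neither tool enabled: inject nothing
}

// The corresponding prompt files would then be loaded per mode, e.g.
// system.<mode>.md and nudge.<mode>.md (file names illustrative).
```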
Yes, just something simple like a log file is fine. I can easily have a little tool make it easier to visualize.
- Move turnProtection from per-tool to top-level
- Consolidate nudge and protectedTools into tools.settings
- Simplify tools.discard and tools.extract to just enabled flags
- Rename pruningSummary to pruneNotification
- Remove strategies.discardTool and strategies.extractTool
Refactor config schema to consolidate tool settings
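Put together, the consolidated schema described above might look roughly like this; only the key names come from the bullet points, and the values are placeholders.

```ts
// Placeholder values; only the key names come from the commit above.
const config = {
  turnProtection: { enabled: true, turns: 2 }, // now top-level, not per-tool
  tools: {
    settings: {
      nudge: { enabled: true },    // moved here from the top level
      protectedTools: ["example"], // moved here; value is a placeholder
    },
    discard: { enabled: true }, // reduced to an enabled flag
    extract: { enabled: true }, // reduced to an enabled flag
  },
  pruneNotification: { enabled: true }, // formerly pruningSummary
};
```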
Fix edit tool pruning to use correct field names
- add .prettierrc with project style rules
- add format and format:check scripts to package.json
- add format check step to PR workflow
chore: add prettier for code formatting
- Add saveContext method to Logger that saves transformed messages as JSON
- Save to ~/.config/opencode/logs/dcp/context/{sessionId}/{timestamp}.json
- Only logs when debug.enabled is true in config
- Minimize output by stripping unnecessary metadata (IDs, summary, path, etc.)
- Keep essential fields: role, time, tokens, text/tool parts
Add debug context logging for session messages
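A minimal sketch of what the described saveContext behaviour amounts to; the Logger wiring, timestamp format, and metadata-stripping step are assumptions and not shown in full.

```ts
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Sketch only: assumes `messages` have already been minimized down to
// role, time, tokens, and text/tool parts.
function saveContext(
  sessionId: string,
  messages: unknown[],
  debugEnabled: boolean,
): void {
  if (!debugEnabled) return; // only log when debug.enabled is true

  const dir = path.join(
    os.homedir(),
    ".config", "opencode", "logs", "dcp", "context",
    sessionId,
  );
  fs.mkdirSync(dir, { recursive: true });

  // Timestamp-based file name; the real format may differ.
  const file = path.join(dir, `${Date.now()}.json`);
  fs.writeFileSync(file, JSON.stringify(messages, null, 2));
}
```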
v1.1.0-beta.2 - Version bump
Switch from user role to assistant role message for injecting prunable-tools context. This reduces the likelihood of the model directly addressing the injected content as if responding to user input, improving the conversational experience.
Reasoning models expect their encrypted reasoning parts in assistant messages, which we cannot generate. This adds detection via chat.params hook and appends a synthetic user message (<system-context-injection/>) to close the assistant turn properly when a reasoning model is active.
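The two notes above combine roughly as follows; the message shape and wiring are simplified assumptions, not the opencode plugin API.

```ts
// Simplified message shape for illustration only.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

function injectPrunableContext(
  messages: ChatMessage[],
  prunableList: string,
  isReasoningModel: boolean,
): ChatMessage[] {
  // Inject as an assistant-role message so the model does not answer the
  // injected content as if it were user input.
  const result: ChatMessage[] = [
    ...messages,
    { role: "assistant", content: prunableList },
  ];

  // Reasoning models expect encrypted reasoning parts in assistant messages,
  // which the plugin cannot generate, so close the assistant turn with a
  // synthetic user message.
  if (isReasoningModel) {
    result.push({ role: "user", content: "<system-context-injection/>" });
  }

  return result;
}
```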
…nt messages
- Nudge prompts now use 'I must/I will' instead of 'You must/You should'
- Prunable tools wrapper and cooldown message use first-person
- System prompts updated to correctly describe injection as assistant message
…olete injected_context_handling instructions
Use assistant role for prunable-tools injection
v1.1.0-beta.3 - Version bump
…istillation Simplify extract distillation to array format
v1.1.1-beta.1 - Version bump
Summary
- discard and extract tools for more granular context management

Full Changelog: v1.0.3...v1.1.1-beta.1