**Supersede Writes** — Prunes write tool inputs for files that have subsequently been read. When a file is written and later read, the original write content becomes redundant since the current file state is captured in the read result. Runs automatically on every request with zero LLM cost.

**Discard Tool** — Exposes a `discard` tool that the AI can call to remove completed or noisy tool outputs from context. Use this for task-completion cleanup and for removing irrelevant outputs.

**Extract Tool** — Exposes an `extract` tool that the AI can call to distill valuable context into concise summaries before removing the raw outputs. Use this when key findings must be preserved while reducing context size.

**On Idle Analysis** *(legacy)* — Uses a language model to semantically analyze conversation context during idle periods and identify tool outputs that are no longer relevant. Disabled by default.

DCP uses its own config file:

```jsonc
{
  // Enable or disable the plugin
  "enabled": true,
  // Enable debug logging to ~/.config/opencode/logs/dcp/
  "debug": false,
  // Notification display: "off", "minimal", or "detailed"
  "pruneNotification": "detailed",
  // Protect from pruning for <turns> message turns
  "turnProtection": {
    "enabled": false,
    "turns": 4,
  },
  // LLM-driven context pruning tools
  "tools": {
    // Shared settings for all prune tools
    "settings": {
      // Nudge the LLM to use prune tools (every <nudgeFrequency> tool results)
      "nudgeEnabled": true,
      "nudgeFrequency": 10,
      // Additional tools to protect from pruning
      "protectedTools": [],
    },
    // Removes tool content from context without preservation (for completed tasks or noise)
    "discard": {
      "enabled": true,
    },
    // Distills key findings into preserved knowledge before removing raw content
    "extract": {
      "enabled": true,
      // Show distillation content as an ignored message notification
      "showDistillation": false,
    },
  },
  // Automatic pruning strategies
  "strategies": {
    // Remove duplicate tool calls (same tool with same arguments)
    "deduplication": {
      "enabled": true,
      // Additional tools to protect from pruning
      "protectedTools": [],
    },
    // Prune write tool inputs when the file has been subsequently read
    "supersedeWrites": {
      "enabled": true,
    },
    // (Legacy) Run an LLM on idle to analyze which tool calls are no longer relevant
    "onIdle": {
      "enabled": false,
      // Additional tools to protect from pruning
      "protectedTools": [],
      // Override model for analysis (format: "provider/model")
      // "model": "anthropic/claude-haiku-4-5",
      // Show toast notifications when model selection fails
      "showModelErrorToasts": true,
      // When true, fallback models are not permitted
      "strictModelSelection": false,
    },
  },
}
```
</details>
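
As an illustration, a minimal override using only the keys documented above might enable turn protection and keep the legacy idle analyzer off (this sketch assumes, but does not confirm, that a partial user config is merged over the defaults):

```jsonc
{
  // Keep recent tool outputs exempt from pruning for 6 message turns
  "turnProtection": {
    "enabled": true,
    "turns": 6,
  },
  "strategies": {
    // The legacy idle analyzer is already off by default; shown for explicitness
    "onIdle": { "enabled": false },
  },
}
```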
### Turn Protection
When enabled, turn protection prevents tool outputs from being pruned for a configurable number of message turns. This gives the AI time to reference recent tool outputs before they become prunable. It applies to the `discard` and `extract` tools as well as to the automatic strategies.
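
For example, to shield the last four message turns from all pruning, using the `turnProtection` keys from the config reference above:

```jsonc
"turnProtection": {
  "enabled": true,
  "turns": 4,
},
```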
### Protected Tools
By default, these tools are always protected from pruning across all strategies: