Add "Stop" button during LLM generation #1150
This PR implements a "Stop" button that allows users to halt AI generation mid-stream when they notice the response is going in the wrong direction. The implementation follows a clean three-part architecture for reliable stopping.
Key Features
🛑 Immediate Stop Capability: Users can halt AI generation at any point during streaming by clicking the prominent red "Stop" button that replaces the Send button.
🔄 Message Restoration: When stopped, the original user message is automatically restored to the input field, making it easy to refine the prompt and try again (see the client-side sketch after this list).
💰 Cost Efficient: Real network abort via AbortController ensures streaming stops immediately, preventing unnecessary token consumption.
✅ Clean State Management: The UI transitions smoothly between streaming and ready states with clear status messages.
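For illustration, a minimal sketch of what the client-side stop handler could look like is shown below. The /ai-chat-stop endpoint is the one added in this PR; everything else (the argument names, the request body shape, and the setPrompt/setStatus callbacks) is an assumption rather than the actual VZCodeContext API.

```ts
// Minimal client-side sketch. The /ai-chat-stop endpoint is from this PR;
// the parameter names, request body shape, and the setPrompt/setStatus
// callbacks are illustrative assumptions, not the actual VZCodeContext API.
interface StopGenerationArgs {
  chatId: string;
  lastUserMessage: string;
  setPrompt: (text: string) => void;
  setStatus: (status: 'ready' | 'streaming' | 'stopped') => void;
}

async function handleStopGeneration({
  chatId,
  lastUserMessage,
  setPrompt,
  setStatus,
}: StopGenerationArgs): Promise<void> {
  // Ask the server to abort the in-flight generation for this chat.
  await fetch('/ai-chat-stop', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ chatId }),
  });

  // Restore the original prompt so the user can refine it and retry.
  setPrompt(lastUserMessage);
  setStatus('stopped');
}
```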
Implementation Architecture
Backend (Server-side)
generationControl.ts: Manages AbortController instances by chat ID for immediate network termination
chatStopFlag.ts: ShareDB-based flags for graceful loop termination
llmStreaming.ts: Integrated AbortController with cooperative cancellation checks
/ai-chat-stop: RESTful API for triggering stop operations
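As a rough sketch of how the AbortController registry and the stop endpoint could fit together: only the /ai-chat-stop route and the idea of per-chat AbortController instances come from this PR; the function names, the Express wiring, and the request body shape are assumptions.

```ts
import express from 'express';

// Server-side sketch of a per-chat AbortController registry plus the stop
// endpoint. Function names, Express wiring, and request body shape are
// assumptions; only the /ai-chat-stop route comes from this PR.
const controllers = new Map<string, AbortController>();

// Called when a generation starts; the returned signal is passed to the
// streaming request so it can be torn down later.
export function registerGeneration(chatId: string): AbortSignal {
  const controller = new AbortController();
  controllers.set(chatId, controller);
  return controller.signal;
}

// Called by the stop endpoint (and on normal completion, for cleanup).
export function abortGeneration(chatId: string): void {
  controllers.get(chatId)?.abort();
  controllers.delete(chatId);
}

const app = express();
app.use(express.json());

// REST endpoint that triggers the stop for a given chat.
app.post('/ai-chat-stop', (req, res) => {
  const { chatId } = req.body;
  abortGeneration(chatId); // aborts the upstream LLM request immediately
  // A ShareDB-based stop flag would also be set here so the streaming loop
  // can exit gracefully; that part is omitted from this sketch.
  res.json({ stopped: true });
});
```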
Frontend (Client-side)
handleStopGeneration() added to VZCodeContext
User Experience
Before (during generation):

After clicking Stop:

The screenshots show the complete workflow: the AI chat interface during generation, and the clean result after stopping with the original message restored for refinement.
Technical Details
The implementation uses three complementary stopping mechanisms:
AbortController for immediate network termination
ShareDB-based stop flags for graceful termination of the streaming loop
isStopRequested() flags for responsive UI updates
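A rough sketch of how these mechanisms could interact inside the streaming loop follows; streamCompletion, appendToChat, and the signature of isStopRequested() are stand-in names and assumptions, not the actual llmStreaming.ts API.

```ts
// Sketch of cooperative cancellation in the streaming loop. streamCompletion,
// isStopRequested, and appendToChat are stand-ins for the real LLM client,
// the ShareDB flag check, and the document-update helper.
declare function streamCompletion(
  prompt: string,
  options: { signal: AbortSignal },
): Promise<AsyncIterable<string>>;
declare function isStopRequested(chatId: string): Promise<boolean>;
declare function appendToChat(chatId: string, token: string): void;

async function streamWithCancellation(
  chatId: string,
  prompt: string,
  signal: AbortSignal,
): Promise<void> {
  // Passing the signal to the LLM client means an abort tears down the
  // network request immediately, so no further tokens are consumed.
  const stream = await streamCompletion(prompt, { signal });

  for await (const token of stream) {
    // Cooperative check: exit gracefully if a stop was requested via the
    // flag, even if the network abort has not surfaced yet.
    if (signal.aborted || (await isStopRequested(chatId))) {
      break;
    }
    appendToChat(chatId, token);
  }
}
```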
All existing tests pass (105 passed | 1 skipped) with no regressions introduced.
Fixes #1145.