fix: handle unexpected 'content_block_start' event order (fixes #4) #11
Conversation
…k_start` events in Claude SSE responses.
Summary of Changes

Hello @em4go, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request resolves a critical bug related to event ordering when translating streaming responses from OpenAI-compatible APIs to the Anthropic SSE protocol. Specifically, it addresses scenarios where providers like GitHub Copilot may not send the expected initial role-only chunk.
Code Review
This pull request effectively addresses a critical bug where message_start events could be sent out of order by some OpenAI-compatible providers like GitHub Copilot. The introduction of the ensureMessageStarted helper function is a clean solution that guarantees the correct event sequence by lazily emitting the message_start event just before the first content block is sent. The changes are logical and correctly placed to handle all content types (thinking, text, and tool use). My review includes a couple of suggestions for improvement regarding error handling and code consistency, which you can find in the detailed comments.
```go
ensureMessageStarted := func() {
	if param.MessageStarted {
		return
	}
	messageStart := map[string]interface{}{
		"type": "message_start",
		"message": map[string]interface{}{
			"id":            param.MessageID,
			"type":          "message",
			"role":          "assistant",
			"model":         param.Model,
			"content":       []interface{}{},
			"stop_reason":   nil,
			"stop_sequence": nil,
			"usage": map[string]interface{}{
				"input_tokens":  0,
				"output_tokens": 0,
			},
		},
	}
	messageStartJSON, _ := json.Marshal(messageStart)
	results = append(results, "event: message_start\ndata: "+string(messageStartJSON)+"\n\n")
	param.MessageStarted = true
}
```
This new helper is defined as a closure. For consistency with other helper functions in this file like `stopThinkingContentBlock` and `stopTextContentBlock`, consider making it a standalone package-level function. This would look like `func ensureMessageStarted(param *ConvertOpenAIResponseToAnthropicParams, results *[]string)` and would improve code consistency and maintainability.
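A minimal self-contained sketch of that refactor, assuming the params struct exposes the `MessageStarted`, `MessageID`, and `Model` fields used in the diff (the struct here is a simplified stand-in, not the project's actual type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-in for the real params struct; only the fields the
// helper touches are included.
type ConvertOpenAIResponseToAnthropicParams struct {
	MessageStarted bool
	MessageID      string
	Model          string
}

// ensureMessageStarted as a package-level function. It appends through a
// *[]string so the caller's slice is updated, and is idempotent: the
// message_start event is emitted at most once per message.
func ensureMessageStarted(param *ConvertOpenAIResponseToAnthropicParams, results *[]string) {
	if param.MessageStarted {
		return
	}
	messageStart := map[string]interface{}{
		"type": "message_start",
		"message": map[string]interface{}{
			"id":            param.MessageID,
			"type":          "message",
			"role":          "assistant",
			"model":         param.Model,
			"content":       []interface{}{},
			"stop_reason":   nil,
			"stop_sequence": nil,
			"usage": map[string]interface{}{
				"input_tokens":  0,
				"output_tokens": 0,
			},
		},
	}
	messageStartJSON, err := json.Marshal(messageStart)
	if err != nil {
		return // skip emitting a malformed event
	}
	*results = append(*results, "event: message_start\ndata: "+string(messageStartJSON)+"\n\n")
	param.MessageStarted = true
}

func main() {
	param := &ConvertOpenAIResponseToAnthropicParams{MessageID: "msg_1", Model: "claude-test"}
	var results []string
	ensureMessageStarted(param, &results) // emits once
	ensureMessageStarted(param, &results) // no-op on the second call
	fmt.Println(len(results), param.MessageStarted)
}
```

Passing `results` as a pointer is the main difference from the closure version, since the closure captured the slice variable directly.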
```go
messageStartJSON, _ := json.Marshal(messageStart)
results = append(results, "event: message_start\ndata: "+string(messageStartJSON)+"\n\n")
```
The error from json.Marshal is ignored. While it's unlikely to fail for this static data structure, it's a good practice to handle errors to prevent silent failures or sending malformed events. Consider checking the error and returning if it's not nil. This would prevent a malformed event from being sent. Ideally, the error should also be logged.
```go
messageStartJSON, err := json.Marshal(messageStart)
if err != nil {
	return
}
results = append(results, "event: message_start\ndata: "+string(messageStartJSON)+"\n\n")
```
Description
This PR fixes a bug where the `message_start` event was sometimes skipped or sent out of order when using the GitHub Copilot provider, causing the "Unexpected event order" error in amp code.

The Issue

The existing implementation relied on receiving a streaming chunk containing `{ role: "assistant" }` to trigger the `message_start` event. While standard OpenAI endpoints typically send this as the very first chunk, GitHub Copilot's OpenAI-compatible API (and potentially others) sometimes skips this initial "role-only" chunk and sends content immediately (e.g., text diffs or thinking blocks).

When this happened, the translator would emit `content_block_start` events before `message_start`, violating the Anthropic SSE protocol and causing client errors.

The Fix

I introduced a helper, `ensureMessageStarted()`, that checks if `message_start` has been sent. This helper is called immediately before emitting any:

- `content_block_start` for thinking
- `content_block_start` for text
- `content_block_start` for tool_use

This guarantees strict adherence to the protocol regardless of how the upstream provider chunks their response.
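For context, the Anthropic Messages streaming protocol expects events in this order for each message (payloads abbreviated here); the bug meant the stream could begin at `content_block_start`, which clients reject:

```
event: message_start
data: {"type":"message_start","message":{...}}

event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{...}}

event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{...}}

event: content_block_stop
data: {"type":"content_block_stop","index":0}

event: message_delta
data: {"type":"message_delta","delta":{"stop_reason":"end_turn"},...}

event: message_stop
data: {"type":"message_stop"}
```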
Verification
I wasn't able to reproduce the specific chunking behavior locally, so I'm opening this as a draft. @Cr4ckPwd, could you please test if this resolves the error on your end?
Fixes #4