
Conversation


@em4go commented on Dec 7, 2025

Description

This PR fixes a bug where the message_start event was sometimes skipped or emitted out of order when using the GitHub Copilot provider, causing the "Unexpected event order" error in the amp client.

The Issue

The existing implementation relied on receiving a streaming chunk containing { role: "assistant" } to trigger the message_start event. While standard OpenAI endpoints typically send this as the very first chunk, GitHub Copilot's OpenAI-compatible API (and potentially others) sometimes skips this initial "role-only" chunk and sends content immediately (e.g., text deltas or thinking blocks).

When this happened, the translator would emit content_block_start events before message_start, violating the Anthropic SSE protocol and causing client errors.
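
For reference, the Anthropic streaming protocol expects events in roughly this order (payloads elided):

	event: message_start
	event: content_block_start
	event: content_block_delta   (repeated per chunk)
	event: content_block_stop
	event: message_delta
	event: message_stop

When the role-only chunk never arrives, the old code never triggered message_start, so the stream began with content_block_start and the client rejected it.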

The Fix

I introduced a helper, ensureMessageStarted(), that lazily emits message_start if it has not already been sent. It is called immediately before emitting any of the following:

  • Thinking content (content_block_start for thinking)
  • Text content (content_block_start for text)
  • Tool calls (content_block_start for tool_use)

This guarantees strict adherence to the protocol regardless of how the upstream provider chunks its response.
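
Below is a minimal, self-contained sketch of the guard pattern. The type and function names here are illustrative stand-ins, not the repository's actual types; the real helper is quoted in the review comment further down.

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative only: a stripped-down stand-in for the translator state.
type translator struct {
	messageStarted bool
	results        []string
}

// ensureMessageStarted lazily emits message_start exactly once.
func (t *translator) ensureMessageStarted() {
	if t.messageStarted {
		return
	}
	payload, err := json.Marshal(map[string]interface{}{"type": "message_start"})
	if err != nil {
		return
	}
	t.results = append(t.results, "event: message_start\ndata: "+string(payload)+"\n\n")
	t.messageStarted = true
}

// emitTextBlock stands in for any of the three emission paths
// (thinking, text, tool_use): each calls the guard first.
func (t *translator) emitTextBlock(text string) {
	t.ensureMessageStarted()
	t.results = append(t.results, "event: content_block_start\n\n")
	// content_block_delta / content_block_stop would follow here.
	_ = text
}

func main() {
	// Simulate a provider whose first chunk is content,
	// with no leading role-only chunk.
	t := &translator{}
	t.emitTextBlock("hello")
	fmt.Print(t.results[0]) // message_start still comes first
}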

Verification

I wasn't able to reproduce the specific chunking behavior locally, so I'm opening this as a draft. @Cr4ckPwd, could you please test if this resolves the error on your end?

Fixes #4

@gemini-code-assist

Summary of Changes

Hello @em4go, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug related to event ordering when translating streaming responses from OpenAI-compatible APIs to the Anthropic SSE protocol. Specifically, it addresses scenarios where providers like GitHub Copilot might not send the expected initial role: "assistant" chunk, causing content_block_start events to be emitted before message_start. The fix introduces a robust mechanism to ensure message_start is always sent first, preventing client errors and improving the reliability of the translation process.

Highlights

  • Ensured message_start event order: Implemented a new helper function ensureMessageStarted() to guarantee that the message_start event is always emitted before any content_block_start events, strictly adhering to the Anthropic SSE protocol.
  • Addressed GitHub Copilot compatibility: Fixed an issue where OpenAI-compatible providers, such as GitHub Copilot, might skip the initial role: "assistant" chunk, leading to out-of-order events and client errors in the Anthropic SSE translation.
  • Proactive event emission: The ensureMessageStarted() helper is now invoked proactively before emitting thinking content, text content, and tool call content blocks, ensuring protocol compliance regardless of the upstream provider's chunking behavior.


@gemini-code-assist bot left a comment


Code Review

This pull request effectively addresses a critical bug where message_start events could be sent out of order by some OpenAI-compatible providers like GitHub Copilot. The introduction of the ensureMessageStarted helper function is a clean solution that guarantees the correct event sequence by lazily emitting the message_start event just before the first content block is sent. The changes are logical and correctly placed to handle all content types (thinking, text, and tool use). My review includes a couple of suggestions for improvement regarding error handling and code consistency, which you can find in the detailed comments.

Comment on lines +135 to +158
ensureMessageStarted := func() {
	if param.MessageStarted {
		return
	}
	messageStart := map[string]interface{}{
		"type": "message_start",
		"message": map[string]interface{}{
			"id":            param.MessageID,
			"type":          "message",
			"role":          "assistant",
			"model":         param.Model,
			"content":       []interface{}{},
			"stop_reason":   nil,
			"stop_sequence": nil,
			"usage": map[string]interface{}{
				"input_tokens":  0,
				"output_tokens": 0,
			},
		},
	}
	messageStartJSON, _ := json.Marshal(messageStart)
	results = append(results, "event: message_start\ndata: "+string(messageStartJSON)+"\n\n")
	param.MessageStarted = true
}


Severity: medium

This new helper is defined as a closure. For consistency with other helper functions in this file like stopThinkingContentBlock and stopTextContentBlock, consider making it a standalone package-level function. This would look like func ensureMessageStarted(param *ConvertOpenAIResponseToAnthropicParams, results *[]string) and would improve code consistency and maintainability.
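
For illustration, the standalone form might look like the sketch below (assuming the ConvertOpenAIResponseToAnthropicParams fields shown in the diff above; the pointer to the results slice is needed so the append is visible to the caller, and the error check from the next comment is folded in):

func ensureMessageStarted(param *ConvertOpenAIResponseToAnthropicParams, results *[]string) {
	if param.MessageStarted {
		return
	}
	messageStart := map[string]interface{}{
		"type": "message_start",
		"message": map[string]interface{}{
			"id":            param.MessageID,
			"type":          "message",
			"role":          "assistant",
			"model":         param.Model,
			"content":       []interface{}{},
			"stop_reason":   nil,
			"stop_sequence": nil,
			"usage": map[string]interface{}{
				"input_tokens":  0,
				"output_tokens": 0,
			},
		},
	}
	messageStartJSON, err := json.Marshal(messageStart)
	if err != nil {
		return
	}
	*results = append(*results, "event: message_start\ndata: "+string(messageStartJSON)+"\n\n")
	param.MessageStarted = true
}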

Comment on lines +155 to +156
	messageStartJSON, _ := json.Marshal(messageStart)
	results = append(results, "event: message_start\ndata: "+string(messageStartJSON)+"\n\n")


Severity: medium

The error from json.Marshal is ignored. While it's unlikely to fail for this static data structure, it's good practice to handle the error to avoid silently emitting a malformed event. Consider checking it and returning early if it's non-nil; ideally, the error should also be logged.

		messageStartJSON, err := json.Marshal(messageStart)
		if err != nil {
			return
		}
		results = append(results, "event: message_start\ndata: "+string(messageStartJSON)+"\n\n")
