refactor: extract error parsing into dedicated files for providers #977
Conversation
📝 Walkthrough

Summary by CodeRabbit

✏️ Tip: You can customize this high-level summary in your review settings.

Walkthrough: Centralized error parsing across multiple providers by introducing provider-specific parse helpers and routing non-OK responses to them; converted several provider error-detail fields to pointer types.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
This stack of pull requests is managed by Graphite.
🧪 Test Suite Available: This PR can be tested by a repository admin.
78eb82c to 3540e84 (Compare)
Actionable comments posted: 1
🧹 Nitpick comments (3)
core/providers/gemini/errors.go (1)
43-48: Trailing newline in aggregated error message. The message concatenation appends a newline after each error message, leaving a trailing newline in the final string. Consider trimming it for cleaner error messages.

```diff
 for _, errorResp := range errorResps {
 	if errorResp.Error != nil {
 		message = message + errorResp.Error.Message + "\n"
 	}
 }
+message = strings.TrimSuffix(message, "\n")
 if bifrostErr.Error == nil {
```

Note: You'll need to add `"strings"` to the imports if not already present.

core/providers/cohere/errors.go (1)

9-20: Consider guarding against empty Type when unmarshal fails. When `HandleProviderAPIError` fails to unmarshal the response body, `errorResp.Type` will be an empty string. Currently, `bifrostErr.Type` is always set to `&errorResp.Type`, which would point to an empty string in error scenarios.

For consistency with how `Code` is handled (only set if non-nil), consider only setting `Type` when it's non-empty:

```diff
 func parseCohereError(resp *fasthttp.Response) *schemas.BifrostError {
 	var errorResp CohereError
 	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResp)
-	bifrostErr.Type = &errorResp.Type
+	if errorResp.Type != "" {
+		bifrostErr.Type = &errorResp.Type
+	}
 	if bifrostErr.Error == nil {
 		bifrostErr.Error = &schemas.ErrorField{}
 	}
 	bifrostErr.Error.Message = errorResp.Message
 	if errorResp.Code != nil {
 		bifrostErr.Error.Code = errorResp.Code
 	}
 	return bifrostErr
 }
```

core/providers/elevenlabs/errors.go (1)

58-67: Inconsistent error construction pattern between validation and non-validation errors. When validation errors produce a message, a new `BifrostError` is constructed and returned (lines 59-66). However, the non-validation error path (lines 81-85) reuses the `bifrostErr` returned by `HandleProviderAPIError`.

This inconsistency could cause subtle differences in behavior: for example, if `HandleProviderAPIError` populates additional fields on `bifrostErr` in the future, validation errors would lose that information.

Consider using the same pattern for both cases:

```diff
 if message != "" {
-	return &schemas.BifrostError{
-		IsBifrostError: false,
-		StatusCode:     schemas.Ptr(resp.StatusCode()),
-		Error: &schemas.ErrorField{
-			Type:    schemas.Ptr(errorType),
-			Message: message,
-		},
-	}
+	if bifrostErr.Error == nil {
+		bifrostErr.Error = &schemas.ErrorField{}
+	}
+	bifrostErr.Error.Type = schemas.Ptr(errorType)
+	bifrostErr.Error.Message = message
+	return bifrostErr
 }
```
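The `&errorResp.Type` pitfall called out for the Cohere parser can be shown in a tiny, self-contained Go sketch (the struct here is hypothetical, not the actual Bifrost type): taking the address of a zero-valued field always yields a non-nil pointer, so downstream nil checks pass even though the value is empty.

```go
package main

import "fmt"

// Hypothetical error shape: when unmarshalling fails, fields stay zero-valued.
type apiError struct {
	Type string
}

func main() {
	var e apiError // pretend json.Unmarshal failed, so Type stays ""
	p := &e.Type

	fmt.Println(p != nil)  // true: the pointer itself is non-nil
	fmt.Println(*p == "")  // true: but it points at an empty string
}
```

This is why guarding the assignment behind `if errorResp.Type != ""` is the safer pattern: consumers checking the pointer for nil would otherwise treat the empty string as a real error type.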
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
core/providers/anthropic/anthropic.go (4 hunks), core/providers/anthropic/errors.go (1 hunk), core/providers/anthropic/types.go (1 hunk), core/providers/cohere/cohere.go (4 hunks), core/providers/cohere/errors.go (1 hunk), core/providers/elevenlabs/elevenlabs.go (4 hunks), core/providers/elevenlabs/errors.go (2 hunks), core/providers/elevenlabs/types.go (1 hunk), core/providers/gemini/errors.go (2 hunks), core/providers/gemini/gemini.go (5 hunks), core/providers/gemini/types.go (1 hunk), core/providers/openai/errors.go (1 hunk), core/providers/openai/openai.go (5 hunks), core/providers/utils/utils.go (1 hunk)
🧰 Additional context used
🧬 Code graph analysis (6)
core/providers/anthropic/errors.go (2)
- core/providers/anthropic/types.go (1): `AnthropicError` (387-393)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)

core/providers/openai/errors.go (2)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
- core/schemas/bifrost.go (1): `BifrostErrorExtraFields` (420-424)

core/providers/cohere/errors.go (2)
- core/providers/cohere/types.go (1): `CohereError` (528-532)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)

core/providers/elevenlabs/errors.go (2)
- core/providers/elevenlabs/types.go (1): `ElevenlabsError` (157-159)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)

core/providers/openai/openai.go (2)
- core/providers/openai/errors.go (1): `ParseOpenAIError` (10-38)
- core/schemas/bifrost.go (5): `TextCompletionStreamRequest` (88-88), `ChatCompletionStreamRequest` (90-90), `ResponsesStreamRequest` (92-92), `SpeechStreamRequest` (95-95), `TranscriptionStreamRequest` (97-97)

core/providers/gemini/errors.go (2)
- core/providers/gemini/types.go (2): `GeminiGenerationError` (1341-1343), `GeminiGenerationErrorStruct` (1345-1350)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
🔇 Additional comments (30)
core/providers/utils/utils.go (1)
319-322: Good defensive copy of response body. Creating a copy before unmarshalling ensures the raw response remains available for error messages even if the original buffer is reused by fasthttp's pooling. This is the right approach for reliability.
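The defensive-copy pattern praised here can be sketched without fasthttp (function and type names below are hypothetical): copy the raw bytes before parsing, so both the parsed value and the raw fallback message survive even if the transport later recycles the buffer.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseWithCopy copies the body before unmarshalling, mimicking the pattern
// reviewed above: fasthttp may reuse the buffer backing the response body,
// so error parsing must not keep references into it.
func parseWithCopy(body []byte) (map[string]string, string) {
	raw := make([]byte, len(body))
	copy(raw, body)

	var parsed map[string]string
	if err := json.Unmarshal(raw, &parsed); err != nil {
		// Fall back to the raw text when the body is not valid JSON.
		return nil, string(raw)
	}
	return parsed, string(raw)
}

func main() {
	buf := []byte(`{"message":"rate limited"}`)
	parsed, raw := parseWithCopy(buf)

	// Simulate the transport reusing the buffer after parsing.
	copy(buf, []byte(`XXXXXXXXXXXXXXXXXXXXXXXXXX`))

	fmt.Println(parsed["message"]) // still "rate limited"
	fmt.Println(raw)               // raw copy is also unaffected
}
```

The key point is that `raw` is a fresh allocation; without the copy, the raw-response fallback message could be silently corrupted once the pool hands the buffer to the next request.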
core/providers/elevenlabs/types.go (1)
156-159: LGTM - nullable Detail field. Making `Detail` a pointer with `omitempty` correctly handles cases where the error response may not include detail information, and aligns with the centralized error parsing approach.

core/providers/anthropic/types.go (1)

386-393: LGTM - nullable Error field prevents nil pointer dereferences. Converting to a pointer type correctly handles API responses where the error field may be absent, aligning with the centralized error parsing pattern.
core/providers/openai/openai.go (5)
408-408: LGTM - centralized error parsing for text completion streaming. The delegation to `ParseOpenAIError` with the correct `TextCompletionStreamRequest` type is appropriate.

815-815: LGTM - centralized error parsing for chat completion streaming. Correctly uses `ChatCompletionStreamRequest` for the streaming context.

1256-1256: LGTM - centralized error parsing for responses streaming. Correctly uses `ResponsesStreamRequest` for the streaming context.

1649-1649: LGTM - centralized error parsing for speech streaming. Correctly uses `SpeechStreamRequest` for the streaming context.

1923-1923: LGTM - centralized error parsing for transcription streaming. Correctly uses `TranscriptionStreamRequest` for the streaming context.

core/providers/gemini/types.go (1)

1341-1357: LGTM! The error type definitions are well-structured. Using a pointer for the `Error` field with `omitempty` correctly handles the nullable nature of the error details in the JSON response.

core/providers/gemini/errors.go (2)

11-35: LGTM! The `ToGeminiError` function correctly handles nil checks and properly maps BifrostError fields to the Gemini error structure.

56-67: LGTM! The fallback to single error parsing handles the case where Gemini returns a single error object instead of an array. The logic correctly populates the BifrostError fields.

core/providers/gemini/gemini.go (5)

99-99: LGTM! The error handling is correctly refactored to use the centralized `parseGeminiError` function.

157-157: LGTM! Consistent with the error handling refactoring pattern.

252-252: LGTM! Consistent with the error handling refactoring pattern.

479-479: LGTM! Consistent with the error handling refactoring pattern for streaming responses.

727-727: LGTM! Consistent with the error handling refactoring pattern for transcription streaming.
core/providers/anthropic/errors.go (1)
9-19: LGTM! The `parseAnthropicError` function follows the same centralized error parsing pattern as other providers. The nil checks and field mappings are correct.

core/providers/elevenlabs/elevenlabs.go (4)

97-97: LGTM! Consistent with the error handling refactoring pattern.

229-229: LGTM! Consistent with the error handling refactoring pattern.

349-349: LGTM! Consistent with the error handling refactoring pattern for streaming responses.

486-486: LGTM! Consistent with the error handling refactoring pattern.

core/providers/cohere/cohere.go (4)

152-155: LGTM! The error handling is now delegated to the centralized `parseCohereError` function, consolidating previously duplicated logic.

210-212: LGTM! Consistent with the refactoring pattern applied across other error paths in this file.

385-388: LGTM! Streaming error handling now properly delegates to the centralized error parser.

598-601: LGTM! ResponsesStream error handling follows the same consolidated pattern.

core/providers/anthropic/anthropic.go (4)

151-154: LGTM! Error handling in `completeRequest` now delegates to the centralized `parseAnthropicError` function.

195-198: LGTM! Consistent error handling delegation in `listModelsByKey`.

449-452: LGTM! The shared streaming handler now uses centralized error parsing, which benefits all providers using this function.

779-782: LGTM! Responses stream error handling follows the consolidated pattern.

core/providers/elevenlabs/errors.go (1)

12-14: LGTM on the signature simplification. Removing the `providerName` parameter and delegating to `HandleProviderAPIError` aligns with the standardized error parsing approach across other providers.
3540e84 to e8afa58 (Compare)
Actionable comments posted: 1
🧹 Nitpick comments (1)
core/providers/elevenlabs/errors.go (1)
12-89: Centralized ElevenLabs error parsing looks good; consider also setting the top-level error type.

The new `parseElevenlabsError` correctly reuses `HandleProviderAPIError` and then:

- Aggregates validation errors into a combined message (with locations and joined types) when `ValidationErrors` is non-empty.
- Falls back to the non-validation `Detail.Status`/`Detail.Message` fields when present, updating `bifrostErr.Error` in place.

For consistency with `NewProviderAPIError`, you may want to also set `bifrostErr.Type` (not just `bifrostErr.Error.Type`) in both branches so downstream code can rely on the top-level type field as well.
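The validation-error aggregation described here can be sketched in isolation (the shapes and helper below are hypothetical, standing in for the actual ElevenLabs types): fold each error's location path and message into one human-readable string.

```go
package main

import (
	"fmt"
	"strings"
)

// validationError is a stand-in for a single entry in a validation-errors
// payload: a location path plus a message.
type validationError struct {
	Loc []string
	Msg string
}

// formatValidationErrors joins location and message per error, then joins
// the entries into one combined message, as the parser above does.
func formatValidationErrors(errs []validationError) string {
	parts := make([]string, 0, len(errs))
	for _, e := range errs {
		parts = append(parts, fmt.Sprintf("%s: %s", strings.Join(e.Loc, "."), e.Msg))
	}
	return strings.Join(parts, "; ")
}

func main() {
	fmt.Println(formatValidationErrors([]validationError{
		{Loc: []string{"body", "voice_id"}, Msg: "field required"},
		{Loc: []string{"body", "text"}, Msg: "too long"},
	}))
	// body.voice_id: field required; body.text: too long
}
```

Using `strings.Join` with an explicit separator avoids the trailing-delimiter artifact that the Gemini parser's newline concatenation was flagged for.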
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
core/providers/anthropic/anthropic.go (4 hunks), core/providers/anthropic/errors.go (1 hunk), core/providers/anthropic/types.go (1 hunk), core/providers/cohere/cohere.go (4 hunks), core/providers/cohere/errors.go (1 hunk), core/providers/elevenlabs/elevenlabs.go (4 hunks), core/providers/elevenlabs/errors.go (2 hunks), core/providers/elevenlabs/types.go (1 hunk), core/providers/gemini/errors.go (2 hunks), core/providers/gemini/gemini.go (5 hunks), core/providers/gemini/types.go (1 hunk), core/providers/openai/errors.go (1 hunk), core/providers/openai/openai.go (5 hunks), core/providers/utils/utils.go (1 hunk)
🚧 Files skipped from review as they are similar to previous changes (4)
- core/providers/anthropic/errors.go
- core/providers/cohere/errors.go
- core/providers/anthropic/types.go
- core/providers/openai/errors.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
core/providers/gemini/gemini.go, core/providers/elevenlabs/elevenlabs.go, core/providers/cohere/cohere.go, core/providers/elevenlabs/types.go, core/providers/gemini/types.go, core/providers/gemini/errors.go, core/providers/elevenlabs/errors.go, core/providers/openai/openai.go, core/providers/anthropic/anthropic.go, core/providers/utils/utils.go
🧬 Code graph analysis (3)
core/providers/gemini/errors.go (4)
- core/providers/gemini/types.go (2): `GeminiGenerationError` (1341-1343), `GeminiGenerationErrorStruct` (1345-1350)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
- core/schemas/utils.go (1): `Ptr` (16-18)
- core/utils.go (1): `Ptr` (51-53)

core/providers/elevenlabs/errors.go (2)
- core/providers/elevenlabs/types.go (1): `ElevenlabsError` (157-159)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)

core/providers/openai/openai.go (2)
- core/providers/openai/errors.go (1): `ParseOpenAIError` (10-38)
- core/schemas/bifrost.go (5): `TextCompletionStreamRequest` (88-88), `ChatCompletionStreamRequest` (90-90), `ResponsesStreamRequest` (92-92), `SpeechStreamRequest` (95-95), `TranscriptionStreamRequest` (97-97)
🔇 Additional comments (9)
core/providers/anthropic/anthropic.go (1)
153-153: Excellent refactoring to centralize error parsing. The consistent delegation to `parseAnthropicError(resp)` across all error paths (non-streaming and streaming) improves maintainability and reduces code duplication. The centralized function properly handles Anthropic-specific error details while delegating base error handling to the provider utilities, maintaining clean separation of concerns.

core/providers/utils/utils.go (1)

319-322: HandleProviderAPIError body copy looks correct. Copying `resp.Body()` into a local slice before unmarshalling avoids aliasing/reuse issues with fasthttp's internal buffer while preserving existing behavior.

core/providers/cohere/cohere.go (1)

155-155: Cohere HTTP error paths now consistently delegate to `parseCohereError`. All non-200 branches correctly route through `parseCohereError(resp)`, which keeps error parsing centralized and removes local duplication without changing the surrounding control flow.

Also applies to: 211-211, 388-388, 602-602

core/providers/openai/openai.go (1)

408-408: Streaming OpenAI HTTP errors correctly unified on `ParseOpenAIError`. Switching all streaming HTTP error branches to `ParseOpenAIError` aligns them with non-streaming handlers and centralizes error shaping (including provider/model/request-type metadata) in one place.

Also applies to: 815-815, 1256-1256, 1650-1650, 1923-1923

core/providers/elevenlabs/elevenlabs.go (1)

97-98: ElevenLabs HTTP error handling correctly routed through `parseElevenlabsError`. Non-200 responses for list models, speech, streaming speech, and transcription now all use `parseElevenlabsError(resp)`, which removes duplicated parsing logic at call sites while preserving existing logging and control flow.

Also applies to: 227-230, 346-350, 484-487

core/providers/gemini/gemini.go (1)

98-100: Gemini HTTP error paths now uniformly use `parseGeminiError`. Centralizing non-200 handling on `parseGeminiError(resp)` across sync and streaming entry points keeps Gemini error parsing consistent and removes per-callsite JSON/body handling.

Also applies to: 156-158, 251-253, 477-480, 725-728

core/providers/elevenlabs/types.go (1)

158-159: Making `ElevenlabsError.Detail` optional is a sound modeling improvement. Switching `Detail` to `*ElevenlabsErrorDetail` with `omitempty` matches the API's "detail may be missing" behavior and avoids serializing empty detail objects; the existing custom `UnmarshalJSON` continues to apply on the inner type.

core/providers/gemini/types.go (1)

1341-1357: GeminiGenerationError types align with expected Gemini error payloads. The `GeminiGenerationError`, `GeminiGenerationErrorStruct`, and `GeminiGenerationErrorDetails` structs correctly model a standard Gemini error (code/message/status plus typed details with fieldViolations) and provide a solid basis for the centralized `parseGeminiError` logic.

core/providers/gemini/errors.go (1)

11-35: LGTM! Clean error conversion logic. The function safely converts BifrostError to GeminiGenerationError with appropriate nil checks and default values. The implementation is straightforward and correct.
e8afa58 to 455b127 (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (1)
core/providers/gemini/errors.go (1)
11-35: Unify multi-error and single-error handling in `parseGeminiError` (trim newline, propagate code).

`ToGeminiError` itself looks fine, but `parseGeminiError` still treats array vs single error responses inconsistently:

For `[]GeminiGenerationError` (lines 40-53), it:

- Concatenates messages with `+ "\n"`, leaving a trailing newline in `bifrostErr.Error.Message`.
- Ignores `errorResp.Error.Code` entirely.

For a single `GeminiGenerationError` (lines 56-65), it:

- Sets `bifrostErr.Error.Code` from `errorResp.Error.Code`.
- Sets `bifrostErr.Error.Message` from `errorResp.Error.Message`.

To keep error shape consistent and avoid cosmetic artifacts in messages, consider:

- After building the concatenated `message`, strip the trailing newline:

```diff
-	var message string
-	for _, errorResp := range errorResps {
-		if errorResp.Error != nil {
-			message = message + errorResp.Error.Message + "\n"
-		}
-	}
+	var message string
+	for _, errorResp := range errorResps {
+		if errorResp.Error != nil && errorResp.Error.Message != "" {
+			message += errorResp.Error.Message + "\n"
+		}
+	}
+	if message != "" {
+		message = strings.TrimSuffix(message, "\n")
+	}
```

- When at least one element has a non-nil `Error`, also set `bifrostErr.Error.Code` from the first such element, mirroring the single-error path:

```diff
-	if bifrostErr.Error == nil {
-		bifrostErr.Error = &schemas.ErrorField{}
-	}
-	bifrostErr.Error.Message = message
+	if bifrostErr.Error == nil {
+		bifrostErr.Error = &schemas.ErrorField{}
+	}
+
+	// Preserve a representative error code when available
+	for _, er := range errorResps {
+		if er.Error != nil {
+			bifrostErr.Error.Code = schemas.Ptr(strconv.Itoa(er.Error.Code))
+			break
+		}
+	}
+
+	bifrostErr.Error.Message = message
```

This way, both array and single-object responses populate `Code` and `Message` in the same way, and consumers don't see spurious trailing newlines.

Also applies to: 37-67
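The trim-the-trailing-newline suggestion above can be demonstrated with a small standalone helper (the function name is hypothetical): accumulate messages with a newline separator, then strip the final one with `strings.TrimSuffix`.

```go
package main

import (
	"fmt"
	"strings"
)

// joinMessages mirrors the aggregation pattern discussed above: join
// non-empty per-error messages with newlines, without leaving a trailing
// newline on the combined result.
func joinMessages(msgs []string) string {
	var b strings.Builder
	for _, m := range msgs {
		if m != "" {
			b.WriteString(m)
			b.WriteString("\n")
		}
	}
	return strings.TrimSuffix(b.String(), "\n")
}

func main() {
	fmt.Printf("%q\n", joinMessages([]string{"quota exceeded", "invalid model"}))
	// "quota exceeded\ninvalid model"
}
```

An alternative is to collect the messages in a slice and use `strings.Join(parts, "\n")`, which never produces a trailing separator in the first place.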
🧹 Nitpick comments (1)
core/providers/elevenlabs/errors.go (1)
12-89: ElevenLabs error parsing is robust, but consider wiring provider/request context into `ExtraFields`.

The new `parseElevenlabsError` correctly:

- Leverages `HandleProviderAPIError` as a common entry point.
- Distinguishes validation vs non-validation formats using `Detail.ValidationErrors`.
- Aggregates validation messages/locations/types into a single human-readable message.
- Falls back to `Detail.Message`/`Detail.Status` when there are no validation errors.

One thing to consider (for consistency with helpers like `ParseOpenAIError`) is attaching provider/request context to `ExtraFields` on the returned `*schemas.BifrostError` (e.g., `Provider`, `RequestType`, `ModelRequested`). With the current `func parseElevenlabsError(resp *fasthttp.Response)` signature, the parser can't set that context itself; if you rely on those fields for logs or metrics, you may want to:

- Either extend the signature to accept provider/request context and populate `ExtraFields` here, or
- Have call sites wrap the returned error and set `ExtraFields` before returning upstream.

If you don't currently depend on those fields for ElevenLabs errors, the existing behavior is functionally fine.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
core/providers/anthropic/anthropic.go (4 hunks), core/providers/anthropic/errors.go (1 hunk), core/providers/anthropic/types.go (1 hunk), core/providers/cohere/cohere.go (4 hunks), core/providers/cohere/errors.go (1 hunk), core/providers/elevenlabs/elevenlabs.go (4 hunks), core/providers/elevenlabs/errors.go (2 hunks), core/providers/elevenlabs/types.go (1 hunk), core/providers/gemini/errors.go (2 hunks), core/providers/gemini/gemini.go (5 hunks), core/providers/gemini/types.go (1 hunk), core/providers/openai/errors.go (1 hunk), core/providers/openai/openai.go (5 hunks), core/providers/utils/utils.go (1 hunk)
🚧 Files skipped from review as they are similar to previous changes (5)
- core/providers/openai/errors.go
- core/providers/anthropic/errors.go
- core/providers/anthropic/anthropic.go
- core/providers/gemini/gemini.go
- core/providers/cohere/errors.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
core/providers/cohere/cohere.go, core/providers/elevenlabs/elevenlabs.go, core/providers/elevenlabs/types.go, core/providers/utils/utils.go, core/providers/openai/openai.go, core/providers/elevenlabs/errors.go, core/providers/gemini/errors.go, core/providers/gemini/types.go, core/providers/anthropic/types.go
🧬 Code graph analysis (2)
core/providers/openai/openai.go (3)
- core/providers/openai/errors.go (1): `ParseOpenAIError` (10-38)
- core/schemas/bifrost.go (5): `TextCompletionStreamRequest` (88-88), `ChatCompletionStreamRequest` (90-90), `ResponsesStreamRequest` (92-92), `SpeechStreamRequest` (95-95), `TranscriptionStreamRequest` (97-97)
- core/schemas/models.go (1): `Model` (109-129)

core/providers/elevenlabs/errors.go (2)
- core/providers/elevenlabs/types.go (1): `ElevenlabsError` (157-159)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
🔇 Additional comments (10)
core/providers/cohere/cohere.go (4)
152-155: LGTM! Consistent error handling refactoring. The replacement of manual error handling with `parseCohereError(resp)` improves code organization by centralizing error parsing logic. The debug logging is preserved, and the return type matches the method signature.

210-212: LGTM! Error handling correctly centralized. The use of `parseCohereError(resp)` is consistent with the refactoring approach and properly handles non-200 responses.

385-388: LGTM! Streaming error handling properly refactored. The centralized error parsing is correctly applied for streaming responses, with proper cleanup deferred before the error return.

600-603: The `parseCohereError` implementation is sound and properly handles all error cases. The function delegates response body parsing to `HandleProviderAPIError`, which safely copies and unmarshals the response body. It correctly maps HTTP status codes, handles Cohere-specific error fields (Type, Message, Code), and protects against edge cases like malformed JSON or missing optional fields. The implementation follows the centralized error handling pattern consistently across the provider.

core/providers/anthropic/types.go (1)

386-393: AnthropicError pointer+omitempty change looks correct. Making `Error` a pointer with `json:"error,omitempty"` aligns with optional error payloads and avoids emitting empty objects; assuming call sites nil-check before dereference, this is a safe, compatible refactor.

core/providers/utils/utils.go (1)

318-323: Copying the response body before unmarshalling is a solid safety improvement. Using a local `body` slice avoids coupling error parsing to fasthttp's internal buffer lifecycle while preserving existing behavior for callers.

core/providers/openai/openai.go (1)

145-148: Centralizing OpenAI HTTP-error handling via ParseOpenAIError is consistent and correct. All non-streaming and streaming OpenAI paths now delegate HTTP-level error parsing to `ParseOpenAIError`, passing the appropriate `RequestType`, provider key, and model (or empty for ListModels). This keeps error construction and `ExtraFields` population consistent with the new helper in core/providers/openai/errors.go without changing control flow.

Also applies to: 265-268, 405-409, 645-649, 1109-1112, 1253-1257, 1457-1460, 1531-1534, 1647-1650, 1805-1809, 1920-1924

core/providers/gemini/types.go (1)

1341-1357: GeminiGenerationError type hierarchy matches Google RPC error schema. Defining `GeminiGenerationError`, `GeminiGenerationErrorStruct`, and `GeminiGenerationErrorDetails` with code/message/status plus typed details (`@type`, `fieldViolations`) cleanly models Gemini's error payloads and is a good basis for the new centralized parser.

core/providers/elevenlabs/elevenlabs.go (1)

96-98: ElevenLabs call sites now correctly delegate HTTP errors to `parseElevenlabsError`. The switch from `parseElevenlabsError(providerName, resp)` to `parseElevenlabsError(resp)` across list-models, speech, streaming speech, and transcription paths matches the new helper signature and keeps the control flow and status-code checks intact.

Also applies to: 226-230, 347-350, 484-487

core/providers/elevenlabs/types.go (1)

156-159: Making `Detail` optional on ElevenlabsError is appropriate. Switching `Detail` to `*ElevenlabsErrorDetail` with `json:"detail,omitempty"` matches the varied ElevenLabs error payloads and works cleanly with the existing custom unmarshalling plus the new parser, which already checks for `nil`.
455b127 to 8e37f9a (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (1)
core/providers/gemini/errors.go (1)
42-54: Inconsistent error field extraction between array and single response paths. The array response path (lines 42-54) only extracts `Message` and leaves a trailing newline, while the single response path (lines 59-65) extracts both `Code` and `Message`. For consistency, the array path should:

- Trim the trailing newline using `strings.TrimSuffix(message, "\n")`
- Extract the `Code` from the first error (similar to single response handling)

```diff
 if len(errorResps) > 0 {
 	var message string
+	var code *string
 	for _, errorResp := range errorResps {
 		if errorResp.Error != nil {
 			message = message + errorResp.Error.Message + "\n"
+			if code == nil && errorResp.Error.Code != 0 {
+				code = schemas.Ptr(strconv.Itoa(errorResp.Error.Code))
+			}
 		}
 	}
 	if bifrostErr.Error == nil {
 		bifrostErr.Error = &schemas.ErrorField{}
 	}
-	bifrostErr.Error.Message = message
+	bifrostErr.Error.Message = strings.TrimSuffix(message, "\n")
+	if code != nil {
+		bifrostErr.Error.Code = code
+	}
 	return bifrostErr
 }
```

Note: You'll also need to add `"strings"` to the imports.
🧹 Nitpick comments (1)
core/providers/cohere/errors.go (1)
9-20: Consider guarding against empty error fields when unmarshalling fails. When `HandleProviderAPIError` fails to unmarshal the response (e.g., malformed JSON), it returns early with the raw response as the message, but `errorResp` fields remain zero-valued. The subsequent assignments on Lines 12 and 16 will then set `Type` to a pointer to an empty string and overwrite the useful raw response message with an empty string.

```diff
 func parseCohereError(resp *fasthttp.Response) *schemas.BifrostError {
 	var errorResp CohereError
 	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResp)
-	bifrostErr.Type = &errorResp.Type
-	if bifrostErr.Error == nil {
-		bifrostErr.Error = &schemas.ErrorField{}
+	if errorResp.Type != "" {
+		bifrostErr.Type = &errorResp.Type
+	}
+	if errorResp.Message != "" {
+		if bifrostErr.Error == nil {
+			bifrostErr.Error = &schemas.ErrorField{}
+		}
+		bifrostErr.Error.Message = errorResp.Message
 	}
-	bifrostErr.Error.Message = errorResp.Message
 	if errorResp.Code != nil {
+		if bifrostErr.Error == nil {
+			bifrostErr.Error = &schemas.ErrorField{}
+		}
 		bifrostErr.Error.Code = errorResp.Code
 	}
 	return bifrostErr
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
core/providers/anthropic/anthropic.go (4 hunks), core/providers/anthropic/errors.go (1 hunk), core/providers/anthropic/types.go (1 hunk), core/providers/cohere/cohere.go (4 hunks), core/providers/cohere/errors.go (1 hunk), core/providers/elevenlabs/elevenlabs.go (4 hunks), core/providers/elevenlabs/errors.go (2 hunks), core/providers/elevenlabs/types.go (1 hunk), core/providers/gemini/errors.go (2 hunks), core/providers/gemini/gemini.go (5 hunks), core/providers/gemini/types.go (1 hunk), core/providers/openai/errors.go (1 hunk), core/providers/openai/openai.go (5 hunks), core/providers/utils/utils.go (1 hunk)
🚧 Files skipped from review as they are similar to previous changes (5)
- core/providers/elevenlabs/types.go
- core/providers/openai/errors.go
- core/providers/cohere/cohere.go
- core/providers/elevenlabs/errors.go
- core/providers/elevenlabs/elevenlabs.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
core/providers/gemini/gemini.go, core/providers/cohere/errors.go, core/providers/anthropic/types.go, core/providers/anthropic/errors.go, core/providers/openai/openai.go, core/providers/gemini/errors.go, core/providers/anthropic/anthropic.go, core/providers/utils/utils.go, core/providers/gemini/types.go
🧬 Code graph analysis (4)
core/providers/cohere/errors.go (2)
- core/providers/cohere/types.go (1): `CohereError` (528-532)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)

core/providers/anthropic/errors.go (2)
- core/providers/anthropic/types.go (1): `AnthropicError` (387-393)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)

core/providers/openai/openai.go (2)
- core/providers/openai/errors.go (1): `ParseOpenAIError` (10-38)
- core/schemas/bifrost.go (5): `TextCompletionStreamRequest` (88-88), `ChatCompletionStreamRequest` (90-90), `ResponsesStreamRequest` (92-92), `SpeechStreamRequest` (95-95), `TranscriptionStreamRequest` (97-97)

core/providers/gemini/errors.go (2)
- core/providers/gemini/types.go (2): `GeminiGenerationError` (1341-1343), `GeminiGenerationErrorStruct` (1345-1350)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
🔇 Additional comments (17)
core/providers/gemini/types.go (1)
1341-1357: LGTM - Error types restructured for centralized parsing.The new error types are well-structured with
GeminiGenerationErrorwrapping an optional pointer toGeminiGenerationErrorStruct. This aligns with the centralized error parsing pattern used across other providers.Minor observation: Unlike the Anthropic error type,
GeminiGenerationErrorStructfields don't haveomitemptytags. This is acceptable since these fields are typically always present in Gemini error responses.core/providers/anthropic/types.go (1)
389-392: LGTM - Error field correctly changed to pointer with omitempty.Converting
Errorto a pointer type enables nil-safe handling in the centralizedparseAnthropicErrorfunction, and theomitemptytag prevents serializing null error fields in JSON output.core/providers/anthropic/errors.go (1)
9-19: LGTM - Centralized error parsing for Anthropic.The function correctly delegates to
HandleProviderAPIErrorand enriches the error with Anthropic-specific type and message fields when available. The nil check on Line 13 is a good defensive pattern even thoughHandleProviderAPIErrorcurrently always initializes theErrorfield.core/providers/utils/utils.go (1)
317-337: LGTM - Body copy prevents potential data corruption.Creating a local copy of
resp.Body()before unmarshalling is the correct approach. The fasthttp response body references an internal buffer that could be reused, so copying it ensures stable data for both JSON unmarshalling and raw response message construction in the error path.core/providers/gemini/gemini.go (5)
99-99: LGTM - Error handling centralized in completeRequest.

The error path now uses the centralized `parseGeminiError(resp)` function, which standardizes error handling across all Gemini API calls.

157-157: LGTM - Consistent error handling in listModelsByKey.

252-252: LGTM - Consistent error handling in ChatCompletion.

477-479: LGTM - Consistent error handling in SpeechStream.

725-727: LGTM - Consistent error handling in TranscriptionStream.

core/providers/anthropic/anthropic.go (4)
153-153: LGTM - Error handling centralized in completeRequest.

The error path now uses the centralized `parseAnthropicError(resp)` function, replacing the previous inline error construction.

197-197: LGTM - Consistent error handling in listModelsByKey.

449-451: LGTM - Streaming error handling unified.

The streaming error path now uses the same `parseAnthropicError(resp)` function as non-streaming paths, replacing the previously separate `parseStreamAnthropicError` helper.

797-799: LGTM - Consistent error handling in HandleAnthropicResponsesStream.

core/providers/openai/openai.go (2)
408-408: LGTM! Streaming error handling now uses centralized parser.

The switch to `ParseOpenAIError` from `errors.go` aligns with the refactor goal of consolidating duplicate error handling. This is consistent with the non-streaming handlers in this file.

815-815: Consistent error handling refactor across all streaming endpoints.

All streaming handlers (`ChatCompletionStream`, `ResponsesStream`, `SpeechStream`, `TranscriptionStream`) now consistently use the centralized `ParseOpenAIError` function with appropriate request types. This eliminates code duplication and standardizes error parsing.

Also applies to: 1256-1256, 1649-1649, 1923-1923
core/providers/gemini/errors.go (2)
11-35: LGTM! Clean conversion from BifrostError to GeminiGenerationError.The function correctly handles nil input, extracts relevant fields with proper nil checks, and constructs the GeminiGenerationError with the new pointer-based struct layout.
56-67: Single response handling looks correct.The fallback from array to single object parsing is appropriate for handling different Gemini API error response formats. The Code and Message extraction is properly implemented here.
8e37f9a to
9e093fc
Compare
Actionable comments posted: 0
♻️ Duplicate comments (3)
core/providers/elevenlabs/elevenlabs.go (3)
226-230: Speech error path correctly uses centralized parser (same concern as listModels)

The move to `parseElevenlabsError(resp)` here matches the listModels path and keeps behavior consistent; logging still includes `providerName`. The only outstanding question is the same provider alias consistency noted in the listModels comment.

346-350: SpeechStream non‑200 handling aligned with new parser

For streaming, the non‑200 path now also goes through `parseElevenlabsError(resp)`, which is consistent with the rest of the provider. Behavior looks unchanged apart from delegating to the centralized helper; the alias/label concern is the same as in the listModels comment.

484-487: Transcription error handling consistently routed through centralized parser

Transcription's non‑200 branch now mirrors the other methods by using `parseElevenlabsError(resp)` after logging. This keeps error handling uniform across Elevenlabs operations; the only remaining consideration is the shared provider key/alias check mentioned earlier.
🧹 Nitpick comments (4)
core/providers/elevenlabs/elevenlabs.go (1)
96-98: Centralized Elevenlabs error parsing looks good; verify provider key/alias usage

Switching to `parseElevenlabsError(resp)` keeps the non‑200 path simple and aligned with the new centralized helper. One thing to double‑check at the stack level: since `providerName` is no longer passed in, ensure `parseElevenlabsError` (and the underlying `HandleProviderAPIError`) still tag the `BifrostError` with the same provider key/alias as `GetProviderKey()` (including any custom provider aliases from `CustomProviderConfig`). If it currently hardcodes `schemas.Elevenlabs` instead of using the alias, you might want to thread the alias in or assert that this difference is intentional.

core/providers/gemini/gemini.go (1)
99-100: Centralizing Gemini HTTP error parsing via `parseGeminiError` looks consistent; consider whether you still need per-call context on the returned error.

All non-200 HTTP cases now funnel through `parseGeminiError(resp)`, including streaming setup paths. That's a nice simplification and correctly avoids reusing `resp.Body()` after release thanks to `HandleProviderAPIError` cloning the body.

One thing to double‑check: previously, streaming error helpers took `providerName` (and in some providers, request type/model) when building `*schemas.BifrostError`. With the new helper only taking `*fasthttp.Response`, those contextual fields (e.g., `ExtraFields.Provider`, `ExtraFields.RequestType`, `ExtraFields.ModelRequested`, or `Error.Type` derived from provider status) now depend on being added further up the stack, if at all.

If any downstream logging/metrics/clients rely on those being present for HTTP‑level failures, it may be worth:

- either enriching the `*schemas.BifrostError` at these call sites just before returning, or
- extending `parseGeminiError` to accept and populate that context.

If nothing currently depends on that metadata, the current refactor is fine as‑is.

Also applies to: 155-158, 250-253, 476-480, 724-728
core/providers/gemini/errors.go (1)
12-36: Gemini error translation logic is sound; optionally propagate `status`/`details` into `BifrostError` for richer diagnostics.

The new `ToGeminiError` and `parseGeminiError` flows look correct:

- `ToGeminiError` safely handles `nil` and sensibly defaults `code` from `StatusCode`, `status` from `Error.Type`, and `message` from `Error.Message`.
- `parseGeminiError` correctly:
  - Tries `[]GeminiGenerationError` first, aggregating messages and setting `Error.Code` from the first entry.
  - Falls back to a single `GeminiGenerationError`.
  - Relies on `HandleProviderAPIError` to clone the body and always return a non‑nil base `*BifrostError`, so there's no NPE risk.

Two optional polish points you may want to consider:

1. Map the Gemini `status` into `BifrostError`. In both the slice and single‑error branches you have access to `Error.Status`, but right now that isn't surfaced into `BifrostError` (e.g., `Error.Type`). If upstream tooling uses `Error.Type` to distinguish categories (INVALID_ARGUMENT, PERMISSION_DENIED, etc.), setting it from `status` would make these errors more useful.
2. Optionally surface selected `details`. `GeminiGenerationErrorDetails` is defined but currently unused in the mapping. Even just appending the first field-violation description into the aggregated `Message` (behind a small helper) would significantly improve debuggability while keeping the structure simple.

These are not correctness issues, just potential improvements for observability and ergonomics.
Also applies to: 38-79
core/providers/gemini/types.go (1)
1341-1357: New `GeminiGenerationError*` types correctly model the Gemini error envelope; consider `omitempty` if you want sparse re‑encoding.

The new error types line up with Gemini's documented error shape (`error.code`, `error.message`, `error.status`, `error.details` with `@type` and `fieldViolations[].description`) and work fine for unmarshalling and for `ToGeminiError`'s serialization.

If you intend these fields to behave as optional when you serialize them back out (e.g., to avoid emitting empty `details` or a blank `status`), you might want to add `omitempty` to the struct tags:

```go
type GeminiGenerationErrorStruct struct {
	Code    int                            `json:"code,omitempty"`
	Message string                         `json:"message,omitempty"`
	Status  string                         `json:"status,omitempty"`
	Details []GeminiGenerationErrorDetails `json:"details,omitempty"`
}
```

Not required for correctness, just a minor alignment with "optional fields" semantics and slightly cleaner JSON on the wire.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- core/providers/anthropic/anthropic.go (4 hunks)
- core/providers/anthropic/errors.go (1 hunks)
- core/providers/anthropic/types.go (1 hunks)
- core/providers/cohere/cohere.go (4 hunks)
- core/providers/cohere/errors.go (1 hunks)
- core/providers/elevenlabs/elevenlabs.go (4 hunks)
- core/providers/elevenlabs/errors.go (2 hunks)
- core/providers/elevenlabs/types.go (1 hunks)
- core/providers/gemini/errors.go (2 hunks)
- core/providers/gemini/gemini.go (5 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/openai/errors.go (1 hunks)
- core/providers/openai/openai.go (5 hunks)
- core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- core/providers/elevenlabs/types.go
- core/providers/cohere/errors.go
- core/providers/openai/errors.go
- core/providers/anthropic/anthropic.go
- core/providers/utils/utils.go
- core/providers/elevenlabs/errors.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/cohere/cohere.go
- core/providers/openai/openai.go
- core/providers/elevenlabs/elevenlabs.go
- core/providers/anthropic/types.go
- core/providers/anthropic/errors.go
- core/providers/gemini/gemini.go
- core/providers/gemini/errors.go
- core/providers/gemini/types.go
🧬 Code graph analysis (2)
core/providers/anthropic/errors.go (2)
core/providers/anthropic/types.go (1)

- `AnthropicError` (387-393)

core/providers/utils/utils.go (1)

- `HandleProviderAPIError` (317-338)

core/providers/gemini/errors.go (4)

core/providers/gemini/types.go (2)

- `GeminiGenerationError` (1341-1343)
- `GeminiGenerationErrorStruct` (1345-1350)

core/providers/utils/utils.go (1)

- `HandleProviderAPIError` (317-338)

core/schemas/utils.go (1)

- `Ptr` (16-18)

core/utils.go (1)

- `Ptr` (51-53)
🔇 Additional comments (7)
core/providers/openai/openai.go (1)
405-409: Centralizing streaming error handling on `ParseOpenAIError` looks good; verify behavior with `StreamBody` responses

Using `ParseOpenAIError` across all OpenAI streaming handlers (`TextCompletionStreaming`, `ChatCompletionStreaming`, `ResponsesStreaming`, `SpeechStream`, `TranscriptionStream`) improves consistency and deduplicates error parsing. The `requestType`, `providerName`, and `request.Model` arguments are all wired correctly.

One subtle point to double‑check: in each of these handlers, `resp.StreamBody = true` is set before `client.Do`, and on non‑200 responses you immediately call `ParseOpenAIError(resp, ...)` without reading from `resp.BodyStream()`. If `ParseOpenAIError` (via helpers in `errors.go`/`providerUtils`) assumes a buffered body and reads from `resp.Body()` instead of the stream, it may see an empty or incomplete body for these streaming responses.

Please confirm that `ParseOpenAIError` correctly handles responses where `StreamBody` is enabled (e.g., by reading/buffering the stream in the error path), or adjust either the streaming call‑sites or the helper so error JSON from streaming endpoints is reliably available to the parser.

Also applies to: 812-816, 1253-1257, 1646-1650, 1920-1924
core/providers/anthropic/types.go (1)
386-393: AnthropicError pointer + `omitempty` change is sound and matches the new parser

Making `Error` a pointer with `json:"error,omitempty"` aligns with `parseAnthropicError`, which nil‑checks `errorResp.Error` before reading it, so there's no new nil‑deref risk. The only behavior change is that re‑marshalled `AnthropicError` values will now omit the `error` field entirely when it's absent, instead of emitting an empty object. This looks desirable, but it's worth confirming no external consumer depended on the field always being present.

Please run or extend any Anthropic error‑path tests/assertions that inspect serialized error JSON to confirm they still match expectations.

core/providers/anthropic/errors.go (1)
core/providers/anthropic/errors.go (1)
1-20: parseAnthropicError correctly layers Anthropic details on top of the shared Bifrost errorThe helper cleanly composes with
HandleProviderAPIError: generic parsing and status handling stay inproviderUtils, and when the body matchesAnthropicErroryou populateError.TypeandError.Messagein a single place. The defensivebifrostErr.Error == nilcheck is harmless (and future‑proof) given the current utils contract.Recommend adding or running Anthropic integration/unit tests that hit a few real error shapes (e.g., invalid API key, bad params, and a malformed body) to confirm
BifrostError.Error.Typeand.Messageare populated as expected while the generic fallback still works when unmarshalling fails.core/providers/cohere/cohere.go (4)
151-155: Centralizing non‑200 error handling incompleteRequestlooks good; ensure metadata is preservedSwitching to
parseCohereError(resp)here is consistent with the new centralized error‑parsing pattern and removes duplication. Please just confirm thatparseCohereErrorpopulatesBifrostError.ExtraFields(at leastProviderandRequestType) in a way that matches what callers expect, sincecompleteRequestitself no longer adds that context.
209-212: Consistent error parsing inlistModelsByKey; verify list‑models context is capturedDelegating the non‑200 path to
parseCohereError(resp)keeps Cohere error handling uniform with other operations. Double‑check thatparseCohereErrorincludes enough context for list‑models errors (provider, request type, and ideally endpoint/URI) now that this function only forwards its result.
384-388: Streaming HTTP error path: confirmparseCohereErroris safe with streaming responses and doesn’t releaserespFor the streaming chat path, the pattern:
- sets
resp.StreamBody = true,- on non‑200 defers
ReleaseStreamingResponse(resp), and- then calls
parseCohereError(resp),is reasonable, but it assumes that:
parseCohereErrorcan safely read and buffer the body from a response whose body is being streamed, andparseCohereErrordoes not itself callReleaseStreamingResponse/fasthttp.ReleaseResponse, to avoid double‑releasing/closing.Please verify both assumptions against the implementation in
core/providers/cohere/errors.goand shared utils.
599-603: Same streaming‑error concerns apply toResponsesStreamThis mirrors the streaming chat error path and has the same requirements:
parseCohereError(resp)must handle a streaming response body correctly, and- it must not independently release/close
resp, since that’s already handled via the deferredReleaseStreamingResponse(resp).If those invariants hold, this centralization of error handling for responses streaming is solid.
9e093fc to
257bd7a
Compare
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
core/providers/elevenlabs/errors.go (1)
58-67: Inconsistent error construction in validation error path.

The validation error handling creates a new `BifrostError` instead of updating the `bifrostErr` obtained from `HandleProviderAPIError` (line 14). This is inconsistent with the non-validation error path (lines 80-86) and discards any context from `HandleProviderAPIError`, such as the copied response body.

Apply this diff to align with the standardized error handling pattern:

```diff
 if message != "" {
+	if bifrostErr.Error == nil {
+		bifrostErr.Error = &schemas.ErrorField{}
+	}
+	bifrostErr.Error.Type = schemas.Ptr(errorType)
+	bifrostErr.Error.Message = message
-	return &schemas.BifrostError{
-		IsBifrostError: false,
-		StatusCode:     schemas.Ptr(resp.StatusCode()),
-		Error: &schemas.ErrorField{
-			Type:    schemas.Ptr(errorType),
-			Message: message,
-		},
-	}
 }
```
🧹 Nitpick comments (2)
core/providers/gemini/errors.go (1)
44-52: Consider using strings.Builder for concatenating error messages.

String concatenation in a loop using the `+` operator (line 51) creates intermediate string allocations. For better performance with multiple errors, consider using `strings.Builder`:

```diff
-	var message string
+	var messageBuilder strings.Builder
 	var firstError *GeminiGenerationErrorStruct
 	for _, errorResp := range errorResps {
 		if errorResp.Error != nil {
 			if firstError == nil {
 				firstError = errorResp.Error
 			}
-			message = message + errorResp.Error.Message + "\n"
+			messageBuilder.WriteString(errorResp.Error.Message)
+			messageBuilder.WriteString("\n")
 		}
 	}
 	// Trim trailing newline
-	message = strings.TrimSuffix(message, "\n")
+	message := strings.TrimSuffix(messageBuilder.String(), "\n")
```

core/providers/openai/errors.go (1)
1-42: ParseOpenAIError logic looks solid; consider removing redundant nil-check.

The flow via `providerUtils.HandleProviderAPIError` and the subsequent field copying correctly handles both JSON and non‑JSON error bodies, and `ExtraFields` is now always populated. Given `HandleProviderAPIError` always returns a non‑nil `*schemas.BifrostError` today, the `if bifrostErr != nil { ... }` guard before setting `ExtraFields` is redundant and can be safely dropped to simplify the code.

```diff
-	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
-	if bifrostErr != nil {
-		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
-			Provider:       providerName,
-			ModelRequested: model,
-			RequestType:    requestType,
-		}
-	}
+	// Set ExtraFields so provider/model/request metadata is always attached
+	bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
+		Provider:       providerName,
+		ModelRequested: model,
+		RequestType:    requestType,
+	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- core/providers/anthropic/anthropic.go (4 hunks)
- core/providers/anthropic/errors.go (1 hunks)
- core/providers/anthropic/types.go (1 hunks)
- core/providers/cohere/cohere.go (4 hunks)
- core/providers/cohere/errors.go (1 hunks)
- core/providers/elevenlabs/elevenlabs.go (4 hunks)
- core/providers/elevenlabs/errors.go (2 hunks)
- core/providers/elevenlabs/types.go (1 hunks)
- core/providers/gemini/errors.go (2 hunks)
- core/providers/gemini/gemini.go (5 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/openai/errors.go (1 hunks)
- core/providers/openai/openai.go (5 hunks)
- core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
- core/providers/cohere/cohere.go
- core/providers/cohere/errors.go
- core/providers/anthropic/types.go
- core/providers/anthropic/errors.go
- core/providers/elevenlabs/types.go
- core/providers/anthropic/anthropic.go
- core/providers/utils/utils.go
- core/providers/gemini/gemini.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/elevenlabs/elevenlabs.go
- core/providers/gemini/errors.go
- core/providers/openai/errors.go
- core/providers/openai/openai.go
- core/providers/elevenlabs/errors.go
- core/providers/gemini/types.go
🧠 Learnings (1)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.
Applied to files:
- core/providers/elevenlabs/elevenlabs.go
- core/providers/gemini/errors.go
- core/providers/openai/errors.go
- core/providers/openai/openai.go
- core/providers/elevenlabs/errors.go
- core/providers/gemini/types.go
🧬 Code graph analysis (4)
core/providers/gemini/errors.go (4)
core/providers/gemini/types.go (2)
- `GeminiGenerationError` (1341-1343)
- `GeminiGenerationErrorStruct` (1345-1350)

core/providers/utils/utils.go (1)

- `HandleProviderAPIError` (317-338)

core/schemas/utils.go (1)

- `Ptr` (16-18)

core/utils.go (1)

- `Ptr` (51-53)
core/providers/openai/errors.go (2)
core/providers/utils/utils.go (1)
- `HandleProviderAPIError` (317-338)

core/schemas/bifrost.go (1)

- `BifrostErrorExtraFields` (420-424)
core/providers/openai/openai.go (2)
core/providers/openai/errors.go (1)
- `ParseOpenAIError` (10-42)

core/schemas/bifrost.go (5)

- `TextCompletionStreamRequest` (88-88)
- `ChatCompletionStreamRequest` (90-90)
- `ResponsesStreamRequest` (92-92)
- `SpeechStreamRequest` (95-95)
- `TranscriptionStreamRequest` (97-97)
core/providers/elevenlabs/errors.go (2)
core/providers/elevenlabs/types.go (1)
- `ElevenlabsError` (157-159)

core/providers/utils/utils.go (1)

- `HandleProviderAPIError` (317-338)
🔇 Additional comments (6)
core/providers/elevenlabs/elevenlabs.go (1)
97-97: LGTM! Error parsing calls correctly updated.

All call sites to `parseElevenlabsError` have been consistently updated to pass only the `resp` parameter, aligning with the centralized error handling approach across providers.

Also applies to: 229-229, 349-349, 486-486
core/providers/elevenlabs/errors.go (2)
12-14: LGTM! Function signature and initialization aligned with standardized approach.

The updated signature removes the `providerName` parameter and uses `HandleProviderAPIError` to obtain a base error structure, consistent with the centralized error handling pattern.
70-88: LGTM! Non-validation error handling correctly updates the base error.

This path properly updates the `bifrostErr` from `HandleProviderAPIError` instead of creating a new error, which is the correct approach for the standardized error handling pattern.

core/providers/gemini/errors.go (1)
12-36: LGTM!

The function correctly transforms a BifrostError into the new GeminiGenerationError format. The nil handling, field extraction, and struct construction are all appropriate.
core/providers/gemini/types.go (1)
1345-1357: PR summary claim appears inaccurate: error struct fields are not actually marked optional.

The PR summary states "made error struct fields optional with appropriate `omitempty` tags," but `Code`, `Message`, and `Status` in `GeminiGenerationErrorStruct` (and `Type` in `GeminiGenerationErrorDetails`) lack `omitempty` tags. Based on Gemini API documentation, these are standard error response fields that are always present. The struct definitions are correct; these fields should remain required. But the PR summary description doesn't match the implementation and should be clarified.

Likely an incorrect or invalid review comment.
core/providers/openai/openai.go (1)
408-408: Centralizing streaming HTTP error handling on ParseOpenAIError looks correct and consistent.

All streaming handlers now delegate non‑200 HTTP responses to `ParseOpenAIError` with the appropriate `RequestType`, `providerName`, and `request.Model`, and they correctly defer `ReleaseStreamingResponse` before returning. This keeps error semantics aligned with the non‑streaming paths and guarantees `ExtraFields` metadata on all OpenAI streaming HTTP errors.

Also applies to: 815-816, 1256-1257, 1649-1650, 1923-1924
dc2aa28 to
a5c379a
Compare
a5c379a to
6bd2b80
Compare
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
core/providers/elevenlabs/errors.go (1)
58-67: Inconsistent error handling: validation errors bypass centralized error construction.

The validation error path creates a new `BifrostError` instead of updating the `bifrostErr` returned by `HandleProviderAPIError`. This inconsistency means validation errors don't benefit from the centralized error handling setup and follow a different construction path than non-validation errors.

Apply this diff to align validation error handling with non-validation errors:

```diff
 if message != "" {
-	return &schemas.BifrostError{
-		IsBifrostError: false,
-		StatusCode:     schemas.Ptr(resp.StatusCode()),
-		Error: &schemas.ErrorField{
-			Type:    schemas.Ptr(errorType),
-			Message: message,
-		},
-	}
+	if bifrostErr.Error == nil {
+		bifrostErr.Error = &schemas.ErrorField{}
+	}
+	bifrostErr.Error.Type = schemas.Ptr(errorType)
+	bifrostErr.Error.Message = message
+	return bifrostErr
 }
```

core/providers/cohere/cohere.go (1)
432-462: `chunkIndex` is incremented twice per event in `ChatCompletionStream`

Within the streaming goroutine, `chunkIndex` is incremented at line 432 and again at line 461, causing the exposed `ChunkIndex` in `ExtraFields` to skip values (1, 3, 5, …) instead of being sequential per chunk. This is a minor correctness issue in the metadata.

You can make `ChunkIndex` strictly sequential by removing one of the increments, for example:

```diff
-	chunkIndex++
-
 	// Extract response ID from message-start events
 	if event.Type == StreamEventMessageStart && event.ID != nil {
 		responseID = *event.ID
 	}
@@
 	lastChunkTime = time.Now()
-	chunkIndex++
+	chunkIndex++
```

Or, alternatively, keep a single increment at the top of the loop and remove the one after `lastChunkTime`; the important part is that each logical chunk only increments once.
🧹 Nitpick comments (2)
core/providers/openai/openai.go (1)
405-409: Streaming text completion HTTP error handling is correct; optional DRY helper

Using `ParseOpenAIError` here brings streaming HTTP errors in line with non‑streaming handling and keeps `ExtraFields` consistent. The `defer providerUtils.ReleaseStreamingResponse(resp)` before returning ensures the stream resources are cleaned up after parsing. Given this exact pattern is repeated across several streaming handlers, you could optionally extract a small helper (e.g., `handleOpenAIStreamingHTTPError(...)`) to avoid duplication and keep the error path consistent in one place, but functionally this is fine as‑is.

core/providers/gemini/errors.go (1)
38-79: Multi-/single-error parsing is consistent; consider a small defensive guard on `bifrostErr`.

The refactored `parseGeminiError`:

- Uses `HandleProviderAPIError` to support both `[]GeminiGenerationError` and `GeminiGenerationError` payloads.
- Aggregates multi-error messages into a newline-joined string and trims the trailing newline.
- Sets `Error.Code` from the first error and `Error.Message` to the combined message, so array and single responses now populate both fields consistently (fixing the earlier inconsistency noted in past reviews).

This all looks correct. The only minor robustness concern is that the function assumes `providerUtils.HandleProviderAPIError` never returns a nil `*schemas.BifrostError`; if that contract ever changes, the `bifrostErr.Error = ...` writes will panic.

If you'd like to decouple from that assumption, you could defensively ensure `bifrostErr` is non-nil before use:

```diff
 func parseGeminiError(resp *fasthttp.Response) *schemas.BifrostError {
 	var errorResps []GeminiGenerationError
 	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResps)
+	if bifrostErr == nil {
+		bifrostErr = &schemas.BifrostError{}
+	}
 	if len(errorResps) > 0 {
@@
 		if bifrostErr.Error == nil {
 			bifrostErr.Error = &schemas.ErrorField{}
 		}
@@
 		bifrostErr = providerUtils.HandleProviderAPIError(resp, &errorResp)
+		if bifrostErr == nil {
+			bifrostErr = &schemas.BifrostError{}
+		}
 		if errorResp.Error != nil {
 			if bifrostErr.Error == nil {
 				bifrostErr.Error = &schemas.ErrorField{}
 			}
```

Not strictly required if the existing contract is guaranteed, but it makes this helper more self-contained and future-proof.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- core/providers/anthropic/anthropic.go (4 hunks)
- core/providers/anthropic/errors.go (1 hunks)
- core/providers/anthropic/types.go (1 hunks)
- core/providers/cohere/cohere.go (4 hunks)
- core/providers/cohere/errors.go (1 hunks)
- core/providers/elevenlabs/elevenlabs.go (4 hunks)
- core/providers/elevenlabs/errors.go (2 hunks)
- core/providers/elevenlabs/types.go (1 hunks)
- core/providers/gemini/errors.go (2 hunks)
- core/providers/gemini/gemini.go (5 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/openai/errors.go (1 hunks)
- core/providers/openai/openai.go (5 hunks)
- core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- core/providers/anthropic/anthropic.go
- core/providers/cohere/errors.go
- core/providers/openai/errors.go
- core/providers/utils/utils.go
- core/providers/anthropic/types.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/elevenlabs/elevenlabs.go
- core/providers/gemini/gemini.go
- core/providers/gemini/errors.go
- core/providers/elevenlabs/errors.go
- core/providers/cohere/cohere.go
- core/providers/anthropic/errors.go
- core/providers/gemini/types.go
- core/providers/openai/openai.go
- core/providers/elevenlabs/types.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.
Applied to files:
- core/providers/elevenlabs/elevenlabs.go
- core/providers/gemini/gemini.go
- core/providers/gemini/errors.go
- core/providers/elevenlabs/errors.go
- core/providers/cohere/cohere.go
- core/providers/anthropic/errors.go
- core/providers/gemini/types.go
- core/providers/openai/openai.go
- core/providers/elevenlabs/types.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.
Applied to files:
core/providers/openai/openai.go
🧬 Code graph analysis (3)
core/providers/elevenlabs/errors.go (2)
core/providers/elevenlabs/types.go (1)
- `ElevenlabsError` (157-159)

core/providers/utils/utils.go (1)

- `HandleProviderAPIError` (317-338)
core/providers/anthropic/errors.go (2)
core/providers/anthropic/types.go (1)
- `AnthropicError` (387-393)

core/providers/utils/utils.go (1)

- `HandleProviderAPIError` (317-338)
core/providers/openai/openai.go (2)
core/providers/openai/errors.go (1)
- `ParseOpenAIError` (10-42)

core/schemas/models.go (1)

- `Model` (109-129)
🔇 Additional comments (14)
core/providers/elevenlabs/types.go (1)
158-158: LGTM: Error detail field properly converted to optional pointer.

The change to `*ElevenlabsErrorDetail` with `omitempty` correctly aligns with the centralized error handling refactor, allowing error details to be absent when not provided by the API.

core/providers/elevenlabs/elevenlabs.go (1)
97-97: LGTM: Error parsing calls correctly updated to new signature.All four call sites consistently updated to use the refactored
parseElevenlabsError(resp)signature without theproviderNameparameter, aligning with the centralized error handling approach.Also applies to: 229-229, 349-349, 486-486
core/providers/openai/openai.go (4)
812-816: Consistent centralized error parsing for chat completion stream.

Same pattern as text completions: on non-200 HTTP status you release the streaming response and delegate to `ParseOpenAIError` with `ChatCompletionStreamRequest` and `request.Model`. This keeps chat stream HTTP errors aligned with the rest of the provider's error handling.

1253-1257: Responses stream HTTP errors now flow through the unified parser.

For `/v1/responses` streaming, non-200 HTTP responses are now parsed via `ParseOpenAIError` with `ResponsesStreamRequest`, which matches how non-streaming responses errors are handled and ensures consistent `BifrostError` shaping. Resource cleanup via `ReleaseStreamingResponse` remains intact.

1646-1650: Speech stream HTTP error path aligned with shared OpenAI error parsing.

On speech SSE endpoints, routing non-OK HTTP responses through `ParseOpenAIError` with `SpeechStreamRequest` and the requested model gives callers consistent error metadata with the other OpenAI operations. The surrounding defer keeps the streaming response lifecycle correct.

1920-1924: Transcription stream HTTP errors correctly use `ParseOpenAIError`.

The transcription streaming handler now mirrors the other streaming functions: for non-200 status codes you release the streaming response and use `ParseOpenAIError` with `TranscriptionStreamRequest` and `request.Model`. This centralizes parsing and maintains consistent error shaping across all OpenAI streaming APIs.

core/providers/anthropic/errors.go (1)
9-20: LGTM! Solid error parsing implementation.

The function correctly centralizes Anthropic error parsing with appropriate defensive nil checks and proper field mapping. The pointer assignment pattern on line 16 (`&errorResp.Error.Type`) is safe because the nested struct is heap-allocated during JSON unmarshalling, making it a valid Go pattern for optional fields.

The integration with `HandleProviderAPIError` ensures consistent error handling across providers, aligning with the broader refactoring goals.

Note: This review is based on the isolated file. Since this PR is part of a stack (dependent PRs #979 and #999), a holistic review of the entire stack would provide additional context for the error handling patterns being established across all providers.
core/providers/gemini/types.go (1)
1341-1357: Gemini error type definitions align with parser usage.

Struct shapes and JSON tags match how `parseGeminiError` consumes Gemini error envelopes; the pointer wrapper on `Error` keeps the envelope optional and avoids nil derefs in the parser. No changes requested here.

core/providers/gemini/gemini.go (1)
99-100: Centralized Gemini HTTP error handling via `parseGeminiError` looks consistent.

All non-200 HTTP paths (sync and streaming) now delegate to `parseGeminiError(resp)`, which aligns this provider with the unified error parsing approach used elsewhere in the PR. Body decoding, latency handling, and streaming resource cleanup are preserved.

Also applies to: 157-158, 251-253, 477-480, 725-728
core/providers/gemini/errors.go (1)
12-36: `ToGeminiError` mapping from `BifrostError` is clear and side-effect free.

The helper cleanly derives `code`, `status`, and `message` from `BifrostError` (with sane defaults) and wraps them in the new `GeminiGenerationError` envelope used across the refactored parser. No changes requested.

core/providers/cohere/cohere.go (4)
151-155: Centralizing non-200 error handling via `parseCohereError` looks good.

Routing all non-OK responses in `completeRequest` through `parseCohereError` keeps the HTTP error path consistent with other providers and removes duplication, without changing the success path behavior.

209-212: Consistent list-models error handling via `parseCohereError`.

Having `listModelsByKey` delegate non-200 responses to `parseCohereError` aligns list-models with the same centralized error-parsing logic used elsewhere, which should simplify future maintenance.

384-388: Streaming HTTP error path now shares the centralized parser; verify behavior with `StreamBody`.

Using `parseCohereError(resp)` for non-200 statuses in `ChatCompletionStream` unifies streaming and non-streaming error handling. Given `resp.StreamBody = true`, please double-check that `parseCohereError` (and the underlying utils it calls) correctly read and buffer the error body when `StreamBody` is set, and don't rely on patterns that assume a non-stream response.

It would be good to exercise this by hitting the chat stream endpoint with an invalid key and confirming:

- The stream is not started,
- You receive a single `BifrostError` with the expected Cohere error payload,
- No panics or leaks occur around `ReleaseStreamingResponse`.

599-603: Shared error parser for responses stream; confirm handling of non-200 SSE responses.

`ResponsesStream` now also delegates non-OK HTTP responses to `parseCohereError(resp)`, which is consistent with the rest of the provider.

As with `ChatCompletionStream`, please verify with a responses-stream request that:

- A non-200 response (e.g., bad key) is turned into a single coherent `BifrostError`,
- The body is fully consumed and `ReleaseStreamingResponse` runs cleanly,
- No assumptions in `parseCohereError` break when `resp.StreamBody` is true.
6bd2b80 to
982bc60
Compare
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
core/providers/elevenlabs/errors.go (1)
18-67: Preserve StatusCode in validation error path.

The validation error handling (lines 18-67) returns a new `BifrostError` (lines 59-66), discarding the `bifrostErr` obtained from `HandleProviderAPIError` at line 14. This loses the `StatusCode` and other fields. The non-validation path (lines 71-86) correctly updates `bifrostErr.Error` and returns `bifrostErr`.

Apply this diff to preserve StatusCode:

```diff
 if message != "" {
-	return &schemas.BifrostError{
-		IsBifrostError: false,
-		StatusCode:     schemas.Ptr(resp.StatusCode()),
-		Error: &schemas.ErrorField{
-			Type:    schemas.Ptr(errorType),
-			Message: message,
-		},
-	}
+	if bifrostErr.Error == nil {
+		bifrostErr.Error = &schemas.ErrorField{}
+	}
+	bifrostErr.Error.Type = schemas.Ptr(errorType)
+	bifrostErr.Error.Message = message
+	return bifrostErr
 }
```
🧹 Nitpick comments (1)
core/providers/anthropic/errors.go (1)
9-19: Consider creating explicit string copies for clarity.

The error field extraction works correctly, but taking the address of `errorResp.Error.Type` directly (line 16) is slightly unusual. While safe in Go (strings are immutable), creating an explicit copy would be clearer:

```diff
 func parseAnthropicError(resp *fasthttp.Response) *schemas.BifrostError {
 	var errorResp AnthropicError
 	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResp)
 	if errorResp.Error != nil {
 		if bifrostErr.Error == nil {
 			bifrostErr.Error = &schemas.ErrorField{}
 		}
-		bifrostErr.Error.Type = &errorResp.Error.Type
+		errType := errorResp.Error.Type
+		bifrostErr.Error.Type = &errType
 		bifrostErr.Error.Message = errorResp.Error.Message
 	}
 	return bifrostErr
 }
```

This makes the ownership semantics explicit and follows the pattern used elsewhere in the codebase.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- core/providers/anthropic/anthropic.go (4 hunks)
- core/providers/anthropic/errors.go (1 hunks)
- core/providers/anthropic/types.go (1 hunks)
- core/providers/cohere/cohere.go (4 hunks)
- core/providers/cohere/errors.go (1 hunks)
- core/providers/elevenlabs/elevenlabs.go (4 hunks)
- core/providers/elevenlabs/errors.go (2 hunks)
- core/providers/elevenlabs/types.go (1 hunks)
- core/providers/gemini/errors.go (2 hunks)
- core/providers/gemini/gemini.go (5 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/openai/errors.go (1 hunks)
- core/providers/openai/openai.go (5 hunks)
- core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- core/providers/openai/errors.go
- core/providers/elevenlabs/elevenlabs.go
- core/providers/cohere/errors.go
- core/providers/anthropic/types.go
- core/providers/gemini/gemini.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/elevenlabs/types.go
- core/providers/cohere/cohere.go
- core/providers/anthropic/anthropic.go
- core/providers/utils/utils.go
- core/providers/elevenlabs/errors.go
- core/providers/openai/openai.go
- core/providers/anthropic/errors.go
- core/providers/gemini/errors.go
- core/providers/gemini/types.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.
Applied to files:
- core/providers/elevenlabs/types.go
- core/providers/cohere/cohere.go
- core/providers/anthropic/anthropic.go
- core/providers/utils/utils.go
- core/providers/elevenlabs/errors.go
- core/providers/openai/openai.go
- core/providers/anthropic/errors.go
- core/providers/gemini/errors.go
- core/providers/gemini/types.go
🧬 Code graph analysis (4)
core/providers/elevenlabs/errors.go (2)
- core/providers/elevenlabs/types.go (1): `ElevenlabsError` (157-159)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
core/providers/openai/openai.go (2)
- core/providers/openai/errors.go (1): `ParseOpenAIError` (10-42)
- core/schemas/bifrost.go (5): `TextCompletionStreamRequest` (88-88), `ChatCompletionStreamRequest` (90-90), `ResponsesStreamRequest` (92-92), `SpeechStreamRequest` (95-95), `TranscriptionStreamRequest` (97-97)
core/providers/anthropic/errors.go (2)
- core/providers/anthropic/types.go (1): `AnthropicError` (387-393)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
core/providers/gemini/errors.go (3)
- core/providers/gemini/types.go (2): `GeminiGenerationError` (1341-1343), `GeminiGenerationErrorStruct` (1345-1350)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
- core/utils.go (1): `Ptr` (51-53)
🔇 Additional comments (7)
core/providers/cohere/cohere.go (1)
154-154: LGTM! Consistent centralization of error parsing.

The error handling paths now consistently delegate to `parseCohereError(resp)`, which aligns with the PR's goal of extracting error parsing into dedicated files. This reduces code duplication and improves maintainability.

Also applies to: 211-211, 387-387, 602-602
core/providers/utils/utils.go (1)
317-338: Good fix for body preservation during error parsing.

The change to copy `resp.Body()` before unmarshalling prevents use-after-free issues, since fasthttp reuses internal buffers. This is consistent with the pattern used in `completeRequest` (line 164) and ensures error parsing functions receive stable data.
157-159: LGTM! Proper nullable handling for optional error detail.Converting
Detailto a pointer withomitemptycorrectly models the optional nature of error details in ElevenLabs API responses. This prevents nil pointer issues when the API returns errors without detailed information and allows proper JSON serialization.core/providers/anthropic/anthropic.go (1)
153-153: LGTM! Consistent error parsing centralization.All error handling paths now use
parseAnthropicError(resp), removing duplication and aligning with the PR's goal. The exported streaming functions (HandleAnthropicChatCompletionStreaming,HandleAnthropicResponsesStream) correctly use the same centralized error parser.Also applies to: 197-197, 451-451, 799-799
core/providers/openai/openai.go (1)
146-146: Excellent: Centralized error handling across all request types.All error paths now delegate to
ParseOpenAIError, ensuring consistent error metadata (provider, model, request type) and eliminating duplicated parsing logic across streaming and non-streaming endpoints.Also applies to: 267-267, 408-408, 648-648, 815-815, 1111-1111, 1256-1256, 1459-1459, 1533-1533, 1649-1649, 1808-1808, 1923-1923
core/providers/gemini/types.go (1)
1341-1357: LGTM: Clean error type definitions.

The new error types (`GeminiGenerationError`, `GeminiGenerationErrorStruct`, `GeminiGenerationErrorDetails`) provide clear separation between the error envelope and structured error details, with appropriate JSON tags for optional fields.

core/providers/gemini/errors.go (1)
38-79: LGTM: Consistent error extraction for both array and single responses.

The function correctly handles both `[]GeminiGenerationError` and single `GeminiGenerationError` formats:

- Array path: concatenates messages (with trailing newline trimmed), extracts Code from first error
- Single response path: extracts both Code and Message
- Both paths update `bifrostErr.Error` and return `bifrostErr`, preserving StatusCode from `HandleProviderAPIError`
982bc60 to
0f9320e
Compare
Actionable comments posted: 0
🧹 Nitpick comments (3)
core/providers/elevenlabs/errors.go (1)
44-51: Minor: Leading space when message is empty but locations exist.

If `messages` is empty but `locations` is populated, the resulting message will have a leading space (e.g., `" [body.field]"` instead of `"[body.field]"`).

```diff
 if len(messages) > 0 {
 	message = strings.Join(messages, "; ")
 }
 if len(locations) > 0 {
 	locationStr := strings.Join(locations, ", ")
-	message = message + " [" + locationStr + "]"
+	if message != "" {
+		message = message + " [" + locationStr + "]"
+	} else {
+		message = "[" + locationStr + "]"
+	}
 }
```

core/providers/openai/errors.go (1)
32-39: Redundant nil check — `HandleProviderAPIError` always returns non-nil.

Looking at `HandleProviderAPIError` in `core/providers/utils/utils.go`, it always returns a pointer to a newly allocated `BifrostError`, never `nil`. The check at line 33 is unnecessary.

```diff
-	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
-	if bifrostErr != nil {
-		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
-			Provider:       providerName,
-			ModelRequested: model,
-			RequestType:    requestType,
-		}
-	}
+	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
+	bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
+		Provider:       providerName,
+		ModelRequested: model,
+		RequestType:    requestType,
+	}
```

core/providers/gemini/errors.go (1)
40-78: Consider: Response body is parsed twice when falling through to single-object handling.
`HandleProviderAPIError` is called twice — once for the array format (line 42) and once for the single-object format (line 70). While correct (the body is copied internally), this is somewhat wasteful when the response is a single object.

An alternative would be to try single-object parsing first (the more common case?) or use `json.RawMessage` to avoid double parsing. However, this is a minor optimization and the current approach is functionally correct.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- core/providers/anthropic/anthropic.go (4 hunks)
- core/providers/anthropic/errors.go (1 hunks)
- core/providers/anthropic/types.go (1 hunks)
- core/providers/cohere/cohere.go (4 hunks)
- core/providers/cohere/errors.go (1 hunks)
- core/providers/elevenlabs/elevenlabs.go (4 hunks)
- core/providers/elevenlabs/errors.go (2 hunks)
- core/providers/elevenlabs/types.go (1 hunks)
- core/providers/gemini/errors.go (2 hunks)
- core/providers/gemini/gemini.go (5 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/openai/errors.go (1 hunks)
- core/providers/openai/openai.go (5 hunks)
- core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- core/providers/elevenlabs/elevenlabs.go
- core/providers/anthropic/anthropic.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
- core/providers/cohere/cohere.go
- core/providers/elevenlabs/types.go
- core/providers/utils/utils.go
- core/providers/elevenlabs/errors.go
- core/providers/openai/errors.go
- core/providers/cohere/errors.go
- core/providers/anthropic/types.go
- core/providers/gemini/errors.go
- core/providers/gemini/gemini.go
- core/providers/gemini/types.go
- core/providers/openai/openai.go
- core/providers/anthropic/errors.go
🧬 Code graph analysis (6)
core/providers/elevenlabs/errors.go (2)
- core/providers/elevenlabs/types.go (1): `ElevenlabsError` (157-159)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
core/providers/openai/errors.go (2)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
- core/schemas/bifrost.go (1): `BifrostErrorExtraFields` (421-425)
core/providers/cohere/errors.go (2)
- core/providers/cohere/types.go (1): `CohereError` (543-547)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
core/providers/gemini/errors.go (4)
- core/providers/gemini/types.go (2): `GeminiGenerationError` (1341-1343), `GeminiGenerationErrorStruct` (1345-1350)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
- core/schemas/utils.go (1): `Ptr` (16-18)
- core/utils.go (1): `Ptr` (51-53)
core/providers/openai/openai.go (2)
- core/providers/openai/errors.go (1): `ParseOpenAIError` (10-42)
- core/schemas/bifrost.go (5): `TextCompletionStreamRequest` (88-88), `ChatCompletionStreamRequest` (90-90), `ResponsesStreamRequest` (92-92), `SpeechStreamRequest` (95-95), `TranscriptionStreamRequest` (97-97)
core/providers/anthropic/errors.go (2)
- core/providers/anthropic/types.go (1): `AnthropicError` (387-393)
- core/providers/utils/utils.go (1): `HandleProviderAPIError` (317-338)
🔇 Additional comments (24)
core/providers/utils/utils.go (1)
319-322: LGTM: Body copy prevents use-after-free.

Creating a local copy of `resp.Body()` before unmarshalling ensures the original buffer isn't consumed or mutated during the unmarshal operation. This is a sound defensive programming practice for fasthttp response bodies.

core/providers/anthropic/types.go (1)

389-392: LGTM: Pointer field enables nil-safe error handling.

Changing `Error` to a pointer with `omitempty` aligns with the centralized error parsing in `parseAnthropicError`, which checks for nil before accessing nested fields. This prevents forced serialization of empty error objects and supports optional error details.

core/providers/cohere/cohere.go (4)
154-154: LGTM: Centralized error parsing.

Routing through `parseCohereError(resp)` consolidates error handling and ensures consistent error field population across all request types.

211-211: LGTM: Consistent error handling.

Using `parseCohereError(resp)` for list models errors maintains consistency with the centralized error parsing pattern.

387-387: LGTM: Unified streaming error path.

Streaming errors now use `parseCohereError(resp)`, aligning with non-streaming error handling.

602-602: LGTM: Consistent streaming error handling.

`ResponsesStream` now uses the centralized `parseCohereError(resp)` for HTTP errors.

core/providers/cohere/errors.go (1)
9-21: LGTM: Centralized Cohere error parsing.

The function safely extracts error details and delegates to `providerUtils.HandleProviderAPIError`. The nil-check on line 13 prevents nil pointer dereferences, and optional Code assignment is handled correctly.

core/providers/elevenlabs/types.go (1)

158-158: LGTM: Pointer field enables optional detail handling.

Changing `Detail` to a pointer with `omitempty` allows nil-safe parsing and prevents forced serialization of empty detail objects. The custom `UnmarshalJSON` already handles nil cases appropriately.

core/providers/anthropic/errors.go (1)

9-20: LGTM: Safe Anthropic error parsing.

The function properly checks for nil on line 12 before accessing `errorResp.Error` fields, preventing nil pointer dereferences. The delegation to `providerUtils.HandleProviderAPIError` ensures consistent error construction.

core/providers/gemini/gemini.go (5)
99-99: LGTM: Centralized error parsing.

Routing through `parseGeminiError(resp)` consolidates error handling for all non-OK responses.

157-157: LGTM: Consistent error handling.

List models errors now use `parseGeminiError(resp)` for consistent error construction.

252-252: LGTM: Unified chat completion error path.

Chat completion errors now use the centralized `parseGeminiError(resp)`.

479-479: LGTM: Consistent streaming error handling.

`SpeechStream` HTTP errors now use `parseGeminiError(resp)`.

727-727: LGTM: Unified transcription stream errors.

`TranscriptionStream` now uses the centralized `parseGeminiError(resp)` for HTTP errors.

core/providers/openai/openai.go (5)
408-408: LGTM: Centralized error parsing with context.

Routing through `ParseOpenAIError` with requestType, providerName, and model ensures consistent error field population including ExtraFields.

815-815: LGTM: Consistent streaming error handling.

Chat completion stream errors now use `ParseOpenAIError` with full context.

1256-1256: LGTM: Unified responses stream error path.

`ResponsesStream` HTTP errors use the centralized `ParseOpenAIError`.

1649-1649: LGTM: Consistent speech stream error handling.

`SpeechStream` now uses `ParseOpenAIError` for HTTP errors.

1923-1923: LGTM: Unified transcription stream errors.

`TranscriptionStream` uses the centralized `ParseOpenAIError` with context parameters.

core/providers/elevenlabs/errors.go (1)
12-88: LGTM! Clean refactor to centralized error handling.

The function correctly handles both validation errors (array format) and non-validation errors (single object format), delegating to `HandleProviderAPIError` for initial parsing.

core/providers/openai/errors.go (1)

9-42: LGTM! Well-structured OpenAI error parser.

The function correctly extracts OpenAI error fields and unconditionally sets `ExtraFields` metadata (addressing the previous review feedback). Using `schemas.BifrostError` as the unmarshal target is a pragmatic approach given the structural similarity.

core/providers/gemini/types.go (1)

1341-1357: LGTM! Clean error type definitions.

The new `GeminiGenerationError` hierarchy is well-structured and aligns with the Gemini API error response format. Using a pointer for the `Error` field enables nil checks for proper error detection.

core/providers/gemini/errors.go (2)

12-36: LGTM! `ToGeminiError` correctly converts BifrostError to Gemini format.

The conversion logic properly extracts status, message, and code from the BifrostError and constructs a well-formed `GeminiGenerationError` for response serialization.

38-65: LGTM! Array error aggregation is now consistent with single-error handling.

The Code field is now extracted from the first error (line 61), and messages are properly concatenated with trailing newline trimmed (lines 51, 55). This addresses the previous review feedback about inconsistent field extraction.
0f9320e to
14a7a70
Compare
Actionable comments posted: 4
🧹 Nitpick comments (6)
core/providers/openai/errors.go (1)
32-39: Consider removing the defensive nil check.

The condition `if bifrostErr != nil` on line 33 is defensive but unnecessary. Based on the code snippet from `core/providers/utils/utils.go` (lines 316-337), `HandleProviderAPIError` always returns a non-nil `*schemas.BifrostError`. Removing the conditional would simplify the code:

```diff
-	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
-	if bifrostErr != nil {
-		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
-			Provider:       providerName,
-			ModelRequested: model,
-			RequestType:    requestType,
-		}
-	}
+	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
+	bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
+		Provider:       providerName,
+		ModelRequested: model,
+		RequestType:    requestType,
+	}
```

core/providers/gemini/errors.go (1)
13-36: Harden `ToGeminiError`: consider falling back to `bifrostErr.Error.Code` when `StatusCode` is nil.

Right now `Code` defaults to 500 unless `bifrostErr.StatusCode` is set (lines 26-28). If some call paths construct `*schemas.BifrostError` without `StatusCode` but with `Error.Code`, you'll lose the real status/code.

Suggested minimal diff:

```diff
 func ToGeminiError(bifrostErr *schemas.BifrostError) *GeminiGenerationError {
 @@
 	code := 500
 @@
 	if bifrostErr.StatusCode != nil {
 		code = *bifrostErr.StatusCode
+	} else if bifrostErr.Error != nil && bifrostErr.Error.Code != nil {
+		if parsed, err := strconv.Atoi(*bifrostErr.Error.Code); err == nil {
+			code = parsed
+		}
 	}
 	return &GeminiGenerationError{
```

core/providers/gemini/types.go (1)
1341-1357: Consider adding `omitempty` and naming the `FieldViolations` element type (if these structs are ever marshaled).

If `GeminiGenerationErrorStruct`/`GeminiGenerationErrorDetails` are only used for unmarshaling provider responses, this is fine. If they're ever marshaled (tests, logs, passthrough), `omitempty` and a named violation type will keep output smaller and improve reuse.

Example refactor:

```diff
 type GeminiGenerationErrorStruct struct {
-	Code    int                            `json:"code"`
-	Message string                         `json:"message"`
-	Status  string                         `json:"status"`
-	Details []GeminiGenerationErrorDetails `json:"details"`
+	Code    int                            `json:"code,omitempty"`
+	Message string                         `json:"message,omitempty"`
+	Status  string                         `json:"status,omitempty"`
+	Details []GeminiGenerationErrorDetails `json:"details,omitempty"`
 }

 type GeminiGenerationErrorDetails struct {
 	Type string `json:"@type"`
-	FieldViolations []struct {
-		Description string `json:"description"`
-	} `json:"fieldViolations"`
+	FieldViolations []GeminiFieldViolation `json:"fieldViolations,omitempty"`
 }
+
+type GeminiFieldViolation struct {
+	Description string `json:"description,omitempty"`
+}
```

core/providers/cohere/cohere.go (1)
151-155: Avoid logging full provider error bodies (truncate / guard).

Line 153 logs `string(resp.Body())` on every non-200; this can get large/noisy and may unintentionally include provider details you don't want in logs. Consider truncating and/or logging status + request id fields if available.

core/providers/anthropic/errors.go (1)
9-20: LGTM! Effective refactoring that centralizes error parsing.

The extraction of error parsing into a dedicated helper is a clean improvement that reduces code duplication across streaming and non-streaming paths.

Note: Lines 16-17 handle `Type` as a pointer but `Message` as a value. This inconsistency suggests `Type` is optional while `Message` is required. A brief comment explaining this design choice would improve maintainability, especially as this pattern is being replicated across multiple providers in the stack.

Consider adding a comment:

```diff
 func parseAnthropicError(resp *fasthttp.Response) *schemas.BifrostError {
 	var errorResp AnthropicError
 	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResp)
+	// Populate error details from Anthropic's error response
+	// Type is optional (*string), Message is required (string)
 	if errorResp.Error != nil {
 		if bifrostErr.Error == nil {
 			bifrostErr.Error = &schemas.ErrorField{}
 		}
 		bifrostErr.Error.Type = &errorResp.Error.Type
 		bifrostErr.Error.Message = errorResp.Error.Message
 	}
 	return bifrostErr
 }
```

core/providers/elevenlabs/elevenlabs.go (1)
226-230: Avoid logging full provider error bodies at Debug if they can include sensitive request details.

`string(resp.Body())` can include user-provided text and/or identifiers; consider truncating/size-limiting (and/or structured fields) to reduce accidental sensitive-data retention in logs.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- core/providers/anthropic/anthropic.go (4 hunks)
- core/providers/anthropic/errors.go (1 hunks)
- core/providers/anthropic/types.go (1 hunks)
- core/providers/cohere/cohere.go (4 hunks)
- core/providers/cohere/errors.go (1 hunks)
- core/providers/elevenlabs/elevenlabs.go (4 hunks)
- core/providers/elevenlabs/errors.go (2 hunks)
- core/providers/elevenlabs/types.go (1 hunks)
- core/providers/gemini/errors.go (2 hunks)
- core/providers/gemini/gemini.go (5 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/openai/errors.go (1 hunks)
- core/providers/openai/openai.go (5 hunks)
- core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- core/providers/cohere/errors.go
- core/providers/elevenlabs/types.go
- core/providers/utils/utils.go
- core/providers/anthropic/types.go
- core/providers/gemini/gemini.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
core/providers/gemini/errors.go, core/providers/cohere/cohere.go, core/providers/anthropic/anthropic.go, core/providers/openai/openai.go, core/providers/elevenlabs/errors.go, core/providers/elevenlabs/elevenlabs.go, core/providers/openai/errors.go, core/providers/anthropic/errors.go, core/providers/gemini/types.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.
Applied to files:
core/providers/gemini/errors.go, core/providers/cohere/cohere.go, core/providers/anthropic/anthropic.go, core/providers/openai/openai.go, core/providers/elevenlabs/errors.go, core/providers/elevenlabs/elevenlabs.go, core/providers/openai/errors.go, core/providers/anthropic/errors.go, core/providers/gemini/types.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.
Applied to files:
core/providers/openai/openai.go, core/providers/openai/errors.go
🧬 Code graph analysis (5)
core/providers/gemini/errors.go (3)
- core/providers/gemini/types.go (2): GeminiGenerationError (1341-1343), GeminiGenerationErrorStruct (1345-1350)
- core/providers/utils/utils.go (1): HandleProviderAPIError (317-338)
- core/schemas/utils.go (1): Ptr (16-18)

core/providers/openai/openai.go (2)
- core/providers/openai/errors.go (1): ParseOpenAIError (10-42)
- core/schemas/bifrost.go (5): TextCompletionStreamRequest (88-88), ChatCompletionStreamRequest (90-90), ResponsesStreamRequest (92-92), SpeechStreamRequest (95-95), TranscriptionStreamRequest (97-97)

core/providers/elevenlabs/errors.go (2)
- core/providers/elevenlabs/types.go (1): ElevenlabsError (157-159)
- core/providers/utils/utils.go (1): HandleProviderAPIError (317-338)

core/providers/openai/errors.go (2)
- core/providers/utils/utils.go (1): HandleProviderAPIError (317-338)
- core/schemas/bifrost.go (1): BifrostErrorExtraFields (421-425)

core/providers/anthropic/errors.go (2)
- core/providers/anthropic/types.go (1): AnthropicError (387-393)
- core/providers/utils/utils.go (1): HandleProviderAPIError (317-338)
🔇 Additional comments (8)
core/providers/openai/openai.go (1)
408-408: LGTM! Centralized error handling is consistent across all streaming endpoints. The refactor successfully routes all streaming error responses through the external `ParseOpenAIError` function. All calls use the correct request types and pass the appropriate parameters (response, request type, provider name, model). This eliminates code duplication and provides a single maintenance point for error parsing logic.
ParseOpenAIErrorfunction. All calls use the correct request types and pass the appropriate parameters (response, request type, provider name, model). This eliminates code duplication and provides a single maintenance point for error parsing logic.Also applies to: 815-815, 1256-1256, 1649-1649, 1923-1923
core/providers/gemini/errors.go (1)
39-79: Stack-wide Gemini error envelope verification complete. All error type migrations are properly aligned:
- Old `GeminiChatRequestError` types have been removed throughout the codebase
- New `GeminiGenerationError` and `GeminiGenerationErrorStruct` types are correctly defined in `types.go` and used consistently in error parsing
- `parseGeminiError` is wired from all expected callsites in `gemini.go` (5 invocations covering streaming and non-streaming flows)

No lingering references to removed types or integration issues detected across the stack.
core/providers/cohere/cohere.go (2)
209-212: LGTM: swapping inline error construction for `parseCohereError`.

This reduces duplication and keeps `listModelsByKey` aligned with other Cohere call sites.
599-603: No issue here; the error body is available at this point.

The response body is readable in the error path because `BodyStream()` is only used in the successful response path (inside the goroutine after the status check passes). At line 602, when `parseCohereError(resp)` is called, the response is fresh from the HTTP request, and `HandleProviderAPIError` properly reads the body via `resp.Body()` with a copy operation (`body := append([]byte(nil), resp.Body()...)`). No prior operations consume the body before the error check.

Likely an incorrect or invalid review comment.
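The copy-vs-alias behavior behind that idiom can be demonstrated with a plain byte slice standing in for fasthttp's reusable body buffer:

```go
package main

import "fmt"

// defensiveCopy is the append([]byte(nil), ...) idiom as a named helper:
// it returns a slice backed by fresh memory, safe to keep even after the
// transport reuses its internal buffer.
func defensiveCopy(b []byte) []byte {
	return append([]byte(nil), b...)
}

func main() {
	// buf stands in for fasthttp's internal body buffer.
	buf := []byte("original error body")

	alias := buf                 // shares the underlying array
	copied := defensiveCopy(buf) // independent copy

	// Simulate the transport reusing the buffer for the next response.
	copy(buf, "REUSED REUSED REUSED")

	fmt.Println(string(alias))  // clobbered by the reuse
	fmt.Println(string(copied)) // prints "original error body"
}
```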
core/providers/anthropic/anthropic.go (1)
153-153: LGTM! Consistent application of centralized error parsing.

The refactoring correctly replaces inline error handling with `parseAnthropicError(resp)` across all error paths:

- Non-streaming: `completeRequest` (line 153), `listModelsByKey` (line 197)
- Streaming: `HandleAnthropicChatCompletionStreaming` (line 451), `HandleAnthropicResponsesStream` (line 799)

The pattern is applied consistently and maintains the same error semantics while improving code organization.
Also applies to: 197-197, 451-451, 799-799
core/providers/elevenlabs/elevenlabs.go (2)
484-487: LGTM: consistent centralized error parsing usage.
96-98: Signature migration verified; error context properly maintained via utility function. All four call sites (lines 97, 229, 349, 486) use the new signature correctly. Provider error context is preserved through `providerUtils.HandleProviderAPIError(resp, &errorResp)`, which constructs the `BifrostError` with status code and structured error details. No stale call sites with the old signature remain.

core/providers/elevenlabs/errors.go (1)
12-89: Fix streamed-body error parsing (likely empty `resp.Body()` when streaming).

The core issue is real: when `SpeechStream` sets `resp.StreamBody = true` (confirmed at line 305 in elevenlabs.go), `resp.Body()` may be empty on error paths, causing `HandleProviderAPIError` to fail gracefully with no error details, since it reads from `resp.Body()` (line 319 in utils.go).

However, the suggested fix uses `resp.SetBodyRaw(b)`, which does not exist in fasthttp's Response API. Instead, you need to buffer from `resp.BodyStream()` before calling `HandleProviderAPIError`. Consider modifying `HandleProviderAPIError` itself to handle streamed bodies, or buffer the stream data directly in `parseElevenlabsError` and pass it to a modified version of the handler.

Also note: the secondary fix for the `locations` condition check (adding `&& message != ""`) is valid and should be applied.
14a7a70 to 0f9320e
0f9320e to 14a7a70
14a7a70 to 0f9320e
0f9320e to be363a1
Actionable comments posted: 1
♻️ Duplicate comments (1)
core/providers/elevenlabs/elevenlabs.go (1)
346-350: Streaming error path may parse an empty body.

As noted in a previous review, when `resp.StreamBody = true` (line 305), `resp.Body()` may be empty even when the provider returns a JSON error body. This causes `parseElevenlabsError(resp)` to fall back to a generic message. Consider buffering from `resp.BodyStream()` before parsing errors in streaming contexts.
🧹 Nitpick comments (3)
core/providers/openai/errors.go (1)
9-42: LGTM - Centralized error parsing with ExtraFields always set.

The function correctly addresses the previous review feedback by setting `ExtraFields` unconditionally.

Minor note: the `if bifrostErr != nil` check on line 33 is redundant since `HandleProviderAPIError` always returns a non-nil `*schemas.BifrostError`. The check is harmless but could be removed.

```diff
-	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
-	if bifrostErr != nil {
-		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
-			Provider:       providerName,
-			ModelRequested: model,
-			RequestType:    requestType,
-		}
-	}
+	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
+	bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
+		Provider:       providerName,
+		ModelRequested: model,
+		RequestType:    requestType,
+	}
```

core/providers/gemini/errors.go (1)
38-79: Addresses past review feedback; consider optimizing double-parse.

The function correctly handles both array and single error formats. Past review comments have been addressed:
- Trailing newline is now trimmed (line 55)
- Code is extracted from the first error in array responses (lines 60-62)
However, the function calls `HandleProviderAPIError` twice (lines 42 and 70), parsing the response body each time. This works because `fasthttp.Response.Body()` returns the same buffer on repeated calls, but it's inefficient as it parses JSON twice when the array parse fails.

Consider parsing once and detecting the format:

```diff
 func parseGeminiError(resp *fasthttp.Response) *schemas.BifrostError {
-	// Try to parse as []GeminiGenerationError
-	var errorResps []GeminiGenerationError
-	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResps)
-	if len(errorResps) > 0 {
+	statusCode := resp.StatusCode()
+	body := append([]byte(nil), resp.Body()...)
+
+	// Try array format first
+	var errorResps []GeminiGenerationError
+	if err := sonic.Unmarshal(body, &errorResps); err == nil && len(errorResps) > 0 {
+		bifrostErr := &schemas.BifrostError{
+			IsBifrostError: false,
+			StatusCode:     &statusCode,
+			Error:          &schemas.ErrorField{},
+		}
 		var message string
 		// ... rest of array handling
 		return bifrostErr
 	}

-	// Try to parse as GeminiGenerationError
+	// Try single object format
 	var errorResp GeminiGenerationError
-	bifrostErr = providerUtils.HandleProviderAPIError(resp, &errorResp)
+	bifrostErr := &schemas.BifrostError{
+		IsBifrostError: false,
+		StatusCode:     &statusCode,
+		Error:          &schemas.ErrorField{},
+	}
+	if err := sonic.Unmarshal(body, &errorResp); err != nil {
+		bifrostErr.Error.Message = fmt.Sprintf("provider API error: %s", string(body))
+		return bifrostErr
+	}
 	if errorResp.Error != nil {
 		// ... rest of single handling
 	}
 	return bifrostErr
 }
```

This avoids parsing the body twice and provides clearer control flow.
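The try-array-then-object fallback can be isolated as a small standalone sketch using the standard `encoding/json` package (the real code uses sonic, and the type here is illustrative rather than the actual Gemini envelope):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type genError struct {
	Error *struct {
		Code    int    `json:"code"`
		Message string `json:"message"`
	} `json:"error"`
}

// firstMessage tries the array envelope first, then a single object,
// decoding the same byte slice rather than re-reading the response.
// Unmarshalling an object into a slice (or vice versa) fails cleanly,
// which is what makes the shape detection work.
func firstMessage(body []byte) string {
	var many []genError
	if err := json.Unmarshal(body, &many); err == nil && len(many) > 0 && many[0].Error != nil {
		return many[0].Error.Message
	}
	var one genError
	if err := json.Unmarshal(body, &one); err == nil && one.Error != nil {
		return one.Error.Message
	}
	return string(body) // fall back to the raw body
}

func main() {
	fmt.Println(firstMessage([]byte(`[{"error":{"code":400,"message":"bad field"}}]`)))
	fmt.Println(firstMessage([]byte(`{"error":{"code":404,"message":"model not found"}}`)))
}
```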
core/providers/gemini/types.go (1)
1352-1357: Consider extracting the anonymous struct for better maintainability.

The `FieldViolations` field uses an anonymous struct, which reduces code reusability and clarity. Extracting it to a named type would improve maintainability.

Apply this diff to extract the anonymous struct:

```diff
+type GeminiFieldViolation struct {
+	Description string `json:"description"`
+}
+
 type GeminiGenerationErrorDetails struct {
 	Type            string `json:"@type"`
-	FieldViolations []struct {
-		Description string `json:"description"`
-	} `json:"fieldViolations"`
+	FieldViolations []GeminiFieldViolation `json:"fieldViolations"`
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- core/providers/anthropic/anthropic.go (4 hunks)
- core/providers/anthropic/errors.go (1 hunks)
- core/providers/anthropic/types.go (1 hunks)
- core/providers/cohere/cohere.go (4 hunks)
- core/providers/cohere/errors.go (1 hunks)
- core/providers/elevenlabs/elevenlabs.go (4 hunks)
- core/providers/elevenlabs/errors.go (2 hunks)
- core/providers/elevenlabs/types.go (1 hunks)
- core/providers/gemini/batch_test.go (0 hunks)
- core/providers/gemini/errors.go (2 hunks)
- core/providers/gemini/gemini.go (5 hunks)
- core/providers/gemini/types.go (1 hunks)
- core/providers/openai/errors.go (1 hunks)
- core/providers/openai/openai.go (5 hunks)
- core/providers/utils/utils.go (1 hunks)
💤 Files with no reviewable changes (1)
- core/providers/gemini/batch_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- core/providers/cohere/cohere.go
- core/providers/anthropic/errors.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
core/providers/gemini/gemini.go, core/providers/cohere/errors.go, core/providers/openai/errors.go, core/providers/elevenlabs/errors.go, core/providers/elevenlabs/types.go, core/providers/anthropic/anthropic.go, core/providers/gemini/errors.go, core/providers/openai/openai.go, core/providers/elevenlabs/elevenlabs.go, core/providers/gemini/types.go, core/providers/anthropic/types.go, core/providers/utils/utils.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.
Applied to files:
core/providers/gemini/gemini.go, core/providers/cohere/errors.go, core/providers/openai/errors.go, core/providers/elevenlabs/errors.go, core/providers/elevenlabs/types.go, core/providers/anthropic/anthropic.go, core/providers/gemini/errors.go, core/providers/openai/openai.go, core/providers/elevenlabs/elevenlabs.go, core/providers/gemini/types.go, core/providers/anthropic/types.go, core/providers/utils/utils.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.
Applied to files:
core/providers/openai/errors.gocore/providers/openai/openai.go
🧬 Code graph analysis (5)
core/providers/cohere/errors.go (2)
- core/providers/cohere/types.go (1): CohereError (543-547)
- core/providers/utils/utils.go (1): HandleProviderAPIError (317-338)

core/providers/openai/errors.go (1)
- core/schemas/bifrost.go (1): BifrostErrorExtraFields (421-425)

core/providers/elevenlabs/errors.go (2)
- core/providers/elevenlabs/types.go (1): ElevenlabsError (157-159)
- core/providers/utils/utils.go (1): HandleProviderAPIError (317-338)

core/providers/gemini/errors.go (4)
- core/providers/gemini/types.go (2): GeminiGenerationError (1341-1343), GeminiGenerationErrorStruct (1345-1350)
- core/providers/utils/utils.go (1): HandleProviderAPIError (317-338)
- core/schemas/utils.go (1): Ptr (16-18)
- core/utils.go (1): Ptr (51-53)

core/providers/openai/openai.go (2)
- core/providers/openai/errors.go (1): ParseOpenAIError (10-42)
- core/schemas/bifrost.go (5): TextCompletionStreamRequest (88-88), ChatCompletionStreamRequest (90-90), ResponsesStreamRequest (92-92), SpeechStreamRequest (95-95), TranscriptionStreamRequest (97-97)
🔇 Additional comments (18)
core/providers/utils/utils.go (1)
319-322: LGTM! Defensive copy prevents use-after-free.

The `append([]byte(nil), resp.Body()...)` idiom correctly creates a copy of the response body before unmarshalling. This is important because fasthttp may reuse the underlying buffer, and the copy ensures the body data remains valid for both unmarshalling and error message construction.

core/providers/anthropic/types.go (1)

389-392: LGTM! Pointer type enables proper nil handling.

Converting `Error` to a pointer type with `omitempty` is appropriate for optional error details. This aligns with the centralized error parsing in `parseAnthropicError`, which must now check for nil before accessing `Error.Type` or `Error.Message`.

core/providers/anthropic/anthropic.go (4)

151-154: LGTM! Centralized error parsing.

The refactor correctly delegates error handling to `parseAnthropicError`, which aligns with the PR's goal of centralizing error parsing logic across providers.
196-198: LGTM! Consistent error handling pattern.
449-452: LGTM! Streaming error path now uses centralized parser.
797-800: LGTM! Responses stream error handling consolidated.

core/providers/elevenlabs/elevenlabs.go (3)
96-98: LGTM! Simplified error parsing call.
227-230: LGTM! Consistent with centralized error handling.
484-487: LGTM! Error handling aligned with other providers.

core/providers/gemini/gemini.go (5)
98-100: LGTM - Error handling centralized. The change to use `parseGeminiError(resp)` aligns with the PR's objective to extract error parsing into dedicated files.

156-158: LGTM - Consistent error handling pattern. Same centralized error parsing approach applied to the list models endpoint.

250-253: LGTM - ChatCompletion error path updated. Error handling now routes through the centralized `parseGeminiError`.

477-480: LGTM - SpeechStream error handling consolidated. Streaming error path now uses the same centralized parser.

725-728: LGTM - TranscriptionStream follows the same pattern. All Gemini error handling paths are now consistent.
core/providers/elevenlabs/types.go (1)
157-159: LGTM - Optional detail field is appropriate.

Making `Detail` a pointer with `omitempty` correctly handles cases where the ElevenLabs API returns errors without detail information. The existing `UnmarshalJSON` method on `*ElevenlabsErrorDetail` continues to work correctly.

core/providers/elevenlabs/errors.go (1)

12-88: LGTM - Centralized error parsing for ElevenLabs.

The refactored function correctly handles both validation errors (array format) and non-validation errors (single object format). The early return for validation errors with a complete BifrostError is appropriate.

One observation: if `errorResp.Detail` is non-nil but `ValidationErrors` is empty and `Message` is nil, the function will return the base `bifrostErr` from `HandleProviderAPIError` with potentially only an empty `ErrorField`. This is acceptable as the status code is still preserved.

core/providers/gemini/errors.go (1)

12-36: LGTM - ToGeminiError conversion function.

Correctly derives a `GeminiGenerationError` from a `BifrostError`, handling nil cases and extracting relevant fields.
GeminiGenerationErrorfrom aBifrostError, handling nil cases and extracting relevant fields.core/providers/openai/openai.go (1)
408-408: LGTM! Centralized error parsing is consistently applied. All streaming endpoints now route error handling through `ParseOpenAIError`, which properly populates error metadata including provider, model, and request type. The refactoring reduces code duplication and ensures consistent error handling across all OpenAI streaming operations.
ParseOpenAIError, which properly populates error metadata including provider, model, and request type. The refactoring reduces code duplication and ensures consistent error handling across all OpenAI streaming operations.Also applies to: 815-815, 1256-1256, 1649-1649, 1923-1923
Merge activity

Summary
Refactored error handling across multiple providers to improve code organization and reduce duplication.
Changes
Extracted error parsing into dedicated `errors.go` files for Anthropic, Cohere, OpenAI, and Gemini providers

Type of change
Affected areas
How to test
Run the test suite to ensure error handling still works correctly:
Additionally, test API calls with invalid credentials or parameters to verify error responses are properly handled.
Breaking changes
Related issues
Improves code maintainability and reduces duplication.
Security considerations
No security implications as this is a refactoring of internal error handling.
Checklist
Read `docs/contributing/README.md` and followed the guidelines