
Conversation

@Pratham-Mishra04
Collaborator

Summary

Refactored error handling across multiple providers to improve code organization and reduce duplication.

Changes

  • Extracted error parsing logic into dedicated errors.go files for Anthropic, Cohere, OpenAI, and Gemini providers (see the sketch after this list)
  • Consolidated duplicate error handling code into reusable functions
  • Fixed potential nil pointer dereferences in error handling
  • Made error struct fields optional with proper omitempty tags
  • Standardized error parsing approach across providers
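
A minimal sketch of the pattern these errors.go files follow, based on the descriptions in this PR (the package name, import paths, and the wire-format struct are illustrative assumptions, not the exact code):

package exampleprovider

import (
	providerUtils "github.com/maximhq/bifrost/core/providers/utils" // assumed import path
	"github.com/maximhq/bifrost/core/schemas"                       // assumed import path
	"github.com/valyala/fasthttp"
)

// exampleError is a stand-in for a provider's error wire format. The pointer
// plus omitempty means an absent "error" object stays nil instead of becoming
// an empty struct, which is what the nil-pointer fixes in this PR rely on.
type exampleError struct {
	Error *struct {
		Type    string `json:"type"`
		Message string `json:"message"`
	} `json:"error,omitempty"`
}

// parseExampleError turns a non-OK HTTP response into a *schemas.BifrostError.
func parseExampleError(resp *fasthttp.Response) *schemas.BifrostError {
	var errorResp exampleError
	// Shared helper: copies the body, unmarshals into errorResp, and builds the base error.
	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResp)
	if bifrostErr.Error == nil {
		bifrostErr.Error = &schemas.ErrorField{}
	}
	if errorResp.Error != nil { // nil-safe: the error body may be absent or malformed
		bifrostErr.Error.Type = &errorResp.Error.Type
		bifrostErr.Error.Message = errorResp.Error.Message
	}
	return bifrostErr
}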

Type of change

  • Bug fix
  • Feature
  • Refactor
  • Documentation
  • Chore/CI

Affected areas

  • Core (Go)
  • Transports (HTTP)
  • Providers/Integrations
  • Plugins
  • UI (Next.js)
  • Docs

How to test

Run the test suite to ensure error handling still works correctly:

# Core/Transports
go version
go test ./core/providers/...

Additionally, test API calls with invalid credentials or parameters to verify error responses are properly handled.

Breaking changes

  • Yes
  • No

Related issues

Improves code maintainability and reduces duplication.

Security considerations

No security implications as this is a refactoring of internal error handling.

Checklist

  • I read docs/contributing/README.md and followed the guidelines
  • I added/updated tests where appropriate
  • I updated documentation where needed
  • I verified builds succeed (Go and UI)
  • I verified the CI pipeline passes locally if applicable

@coderabbitai
Contributor

coderabbitai bot commented Dec 1, 2025

📝 Walkthrough

Summary by CodeRabbit

  • Refactor

    • Centralized and unified error parsing across providers, including streaming, for more consistent and reliable error messages.
    • Improved handling of optional provider error details to avoid missing or noisy error information.
  • Tests

    • Removed a legacy Gemini batch test file, reducing internal test coverage for batch-related behaviors.


Walkthrough

Centralized error parsing across multiple providers by introducing provider-specific parse helpers and routing non-OK responses to them; converted several provider error-detail fields to pointer types with omitempty; updated utils to copy response bodies before unmarshalling; removed some stream-specific parsers and a Gemini test file.
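
For context, a rough sketch of what the body-copy change in HandleProviderAPIError amounts to (simplified; the JSON library, fallback message format, and imports of the real utils are assumed):

func HandleProviderAPIError(resp *fasthttp.Response, errorStruct any) *schemas.BifrostError {
	// resp.Body() aliases fasthttp's internal buffer, which can be reused once the
	// response is released, so work on a private copy.
	body := make([]byte, len(resp.Body()))
	copy(body, resp.Body())

	bifrostErr := &schemas.BifrostError{
		StatusCode: schemas.Ptr(resp.StatusCode()),
		Error:      &schemas.ErrorField{},
	}
	if err := json.Unmarshal(body, errorStruct); err != nil {
		// Fall back to the raw (copied) body so the message survives buffer reuse.
		bifrostErr.Error.Message = string(body)
	}
	// Provider-specific parsers fill in Type/Message/Code from errorStruct afterwards.
	return bifrostErr
}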

Changes

Cohort / File(s) Summary
Anthropic
core/providers/anthropic/anthropic.go, core/providers/anthropic/errors.go, core/providers/anthropic/types.go
Added parseAnthropicError(resp) and replaced inline/stream-specific error handling to use it; removed parseStreamAnthropicError; changed AnthropicError.Error to *struct{...} with json:"error,omitempty".
Cohere
core/providers/cohere/cohere.go, core/providers/cohere/errors.go
Added parseCohereError(resp) and replaced manual non-200 and streaming error construction with the centralized parser.
ElevenLabs
core/providers/elevenlabs/elevenlabs.go, core/providers/elevenlabs/errors.go, core/providers/elevenlabs/types.go
Simplified parseElevenlabsError signature to func(resp *fasthttp.Response) *schemas.BifrostError; delegated parsing to providerUtils.HandleProviderAPIError; made ElevenlabsError.Detail a pointer *ElevenlabsErrorDetail with json:"detail,omitempty".
Gemini
core/providers/gemini/gemini.go, core/providers/gemini/errors.go, core/providers/gemini/types.go
Introduced GeminiGenerationError types, replaced older Gemini chat error types, added unified parseGeminiError(resp) that aggregates multi-entry errors, removed stream-specific parser, and updated call sites.
OpenAI
core/providers/openai/errors.go, core/providers/openai/openai.go
Added centralized ParseOpenAIError(resp, requestType, providerName, model) and updated call sites to use it; removed prior in-file stream-specific parsers.
Provider utils
core/providers/utils/utils.go
HandleProviderAPIError now copies resp.Body() into a local buffer and uses that copy for unmarshalling and rawResponse construction to avoid consuming the original body.
Tests removed
core/providers/gemini/batch_test.go
Removed entire Gemini batch test file (deletes many batch-related unit tests and coverage).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Pay special attention to Gemini type changes and all call sites expecting prior Gemini error shapes.
  • Verify nil-safety and JSON behavior for pointerized fields: AnthropicError.Error, ElevenlabsError.Detail, GeminiGenerationError.Error.
  • Confirm all replaced call sites use the new parser signatures and that streaming paths handle errors identically.
  • Review HandleProviderAPIError body-copy for memory implications on large responses.
  • Note the removed Gemini batch tests—ensure coverage elsewhere or restore tests if needed.

Poem

🐇
I hop through logs and tidy strings,
I stitch the errors into wings.
Pointers softened, parsers sing,
Streams now echo what they bring.
A carrot-clad commit—code springs.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 76.92%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title clearly summarizes the main refactoring objective: extracting error parsing logic into dedicated files for providers.
  • Description check: ✅ Passed. The description covers all required template sections, including summary, changes, type of change, affected areas, testing instructions, breaking changes, and checklists.

Collaborator Author

Pratham-Mishra04 commented Dec 1, 2025

This stack of pull requests is managed by Graphite. Learn more about stacking.

@github-actions
Contributor

github-actions bot commented Dec 1, 2025

🧪 Test Suite Available

This PR can be tested by a repository admin.

Run tests for PR #977

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 78eb82c to 3540e84 on December 2, 2025 14:40
@Pratham-Mishra04 Pratham-Mishra04 marked this pull request as ready for review December 2, 2025 15:09
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
core/providers/gemini/errors.go (1)

43-48: Trailing newline in aggregated error message.

The message concatenation appends a newline after each error message, leaving a trailing newline in the final string. Consider trimming it for cleaner error messages.

 		for _, errorResp := range errorResps {
 			if errorResp.Error != nil {
 				message = message + errorResp.Error.Message + "\n"
 			}
 		}
+		message = strings.TrimSuffix(message, "\n")
 		if bifrostErr.Error == nil {

Note: You'll need to add "strings" to the imports if not already present.

core/providers/cohere/errors.go (1)

9-20: Consider guarding against empty Type when unmarshal fails.

When HandleProviderAPIError fails to unmarshal the response body, errorResp.Type will be an empty string. Currently, bifrostErr.Type is always set to &errorResp.Type, which would point to an empty string in error scenarios.

For consistency with how Code is handled (only set if non-nil), consider only setting Type when it's non-empty:

 func parseCohereError(resp *fasthttp.Response) *schemas.BifrostError {
 	var errorResp CohereError
 	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResp)
-	bifrostErr.Type = &errorResp.Type
+	if errorResp.Type != "" {
+		bifrostErr.Type = &errorResp.Type
+	}
 	if bifrostErr.Error == nil {
 		bifrostErr.Error = &schemas.ErrorField{}
 	}
 	bifrostErr.Error.Message = errorResp.Message
 	if errorResp.Code != nil {
 		bifrostErr.Error.Code = errorResp.Code
 	}
 	return bifrostErr
 }
core/providers/elevenlabs/errors.go (1)

58-67: Inconsistent error construction pattern between validation and non-validation errors.

When validation errors produce a message, a new BifrostError is constructed and returned (lines 59-66). However, the non-validation error path (lines 81-85) reuses the bifrostErr returned by HandleProviderAPIError.

This inconsistency could cause subtle differences in behavior—for example, if HandleProviderAPIError populates additional fields on bifrostErr in the future, validation errors would lose that information.

Consider using the same pattern for both cases:

 			if message != "" {
-				return &schemas.BifrostError{
-					IsBifrostError: false,
-					StatusCode:     schemas.Ptr(resp.StatusCode()),
-					Error: &schemas.ErrorField{
-						Type:    schemas.Ptr(errorType),
-						Message: message,
-					},
-				}
+				if bifrostErr.Error == nil {
+					bifrostErr.Error = &schemas.ErrorField{}
+				}
+				bifrostErr.Error.Type = schemas.Ptr(errorType)
+				bifrostErr.Error.Message = message
+				return bifrostErr
 			}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 71676b6 and 3540e84.

📒 Files selected for processing (14)
  • core/providers/anthropic/anthropic.go (4 hunks)
  • core/providers/anthropic/errors.go (1 hunks)
  • core/providers/anthropic/types.go (1 hunks)
  • core/providers/cohere/cohere.go (4 hunks)
  • core/providers/cohere/errors.go (1 hunks)
  • core/providers/elevenlabs/elevenlabs.go (4 hunks)
  • core/providers/elevenlabs/errors.go (2 hunks)
  • core/providers/elevenlabs/types.go (1 hunks)
  • core/providers/gemini/errors.go (2 hunks)
  • core/providers/gemini/gemini.go (5 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/openai/errors.go (1 hunks)
  • core/providers/openai/openai.go (5 hunks)
  • core/providers/utils/utils.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
core/providers/anthropic/errors.go (2)
core/providers/anthropic/types.go (1)
  • AnthropicError (387-393)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/openai/errors.go (2)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/schemas/bifrost.go (1)
  • BifrostErrorExtraFields (420-424)
core/providers/cohere/errors.go (2)
core/providers/cohere/types.go (1)
  • CohereError (528-532)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/elevenlabs/errors.go (2)
core/providers/elevenlabs/types.go (1)
  • ElevenlabsError (157-159)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/openai/openai.go (2)
core/providers/openai/errors.go (1)
  • ParseOpenAIError (10-38)
core/schemas/bifrost.go (5)
  • TextCompletionStreamRequest (88-88)
  • ChatCompletionStreamRequest (90-90)
  • ResponsesStreamRequest (92-92)
  • SpeechStreamRequest (95-95)
  • TranscriptionStreamRequest (97-97)
core/providers/gemini/errors.go (2)
core/providers/gemini/types.go (2)
  • GeminiGenerationError (1341-1343)
  • GeminiGenerationErrorStruct (1345-1350)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
🔇 Additional comments (30)
core/providers/utils/utils.go (1)

319-322: Good defensive copy of response body.

Creating a copy before unmarshalling ensures the raw response remains available for error messages even if the original buffer is reused by fasthttp's pooling. This is the right approach for reliability.

core/providers/elevenlabs/types.go (1)

156-159: LGTM - nullable Detail field.

Making Detail a pointer with omitempty correctly handles cases where the error response may not include detail information, and aligns with the centralized error parsing approach.

core/providers/anthropic/types.go (1)

386-393: LGTM - nullable Error field prevents nil pointer dereferences.

Converting to a pointer type correctly handles API responses where the error field may be absent, aligning with the centralized error parsing pattern.
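
A tiny stand-alone illustration of what the pointer + omitempty combination buys (using a stand-in struct, since the full AnthropicError definition isn't reproduced here):

package main

import (
	"encoding/json"
	"fmt"
)

type wireError struct {
	Error *struct {
		Type    string `json:"type"`
		Message string `json:"message"`
	} `json:"error,omitempty"`
}

func main() {
	var e wireError
	_ = json.Unmarshal([]byte(`{}`), &e) // body with no "error" object
	fmt.Println(e.Error == nil)          // true: absent stays nil, nothing to dereference by accident

	out, _ := json.Marshal(e)
	fmt.Println(string(out)) // {} -- omitempty drops the nil pointer instead of emitting "error":null
}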

core/providers/openai/openai.go (5)

408-408: LGTM - centralized error parsing for text completion streaming.

The delegation to ParseOpenAIError with the correct TextCompletionStreamRequest type is appropriate.


815-815: LGTM - centralized error parsing for chat completion streaming.

Correctly uses ChatCompletionStreamRequest for the streaming context.


1256-1256: LGTM - centralized error parsing for responses streaming.

Correctly uses ResponsesStreamRequest for the streaming context.


1649-1649: LGTM - centralized error parsing for speech streaming.

Correctly uses SpeechStreamRequest for the streaming context.


1923-1923: LGTM - centralized error parsing for transcription streaming.

Correctly uses TranscriptionStreamRequest for the streaming context.

core/providers/gemini/types.go (1)

1341-1357: LGTM!

The error type definitions are well-structured. Using a pointer for the Error field with omitempty correctly handles the nullable nature of the error details in the JSON response.

core/providers/gemini/errors.go (2)

11-35: LGTM!

The ToGeminiError function correctly handles nil checks and properly maps BifrostError fields to the Gemini error structure.


56-67: LGTM!

The fallback to single error parsing handles the case where Gemini returns a single error object instead of an array. The logic correctly populates the BifrostError fields.

core/providers/gemini/gemini.go (5)

99-99: LGTM!

The error handling is correctly refactored to use the centralized parseGeminiError function.


157-157: LGTM!

Consistent with the error handling refactoring pattern.


252-252: LGTM!

Consistent with the error handling refactoring pattern.


479-479: LGTM!

Consistent with the error handling refactoring pattern for streaming responses.


727-727: LGTM!

Consistent with the error handling refactoring pattern for transcription streaming.

core/providers/anthropic/errors.go (1)

9-19: LGTM!

The parseAnthropicError function follows the same centralized error parsing pattern as other providers. The nil checks and field mappings are correct.

core/providers/elevenlabs/elevenlabs.go (4)

97-97: LGTM!

Consistent with the error handling refactoring pattern.


229-229: LGTM!

Consistent with the error handling refactoring pattern.


349-349: LGTM!

Consistent with the error handling refactoring pattern for streaming responses.


486-486: LGTM!

Consistent with the error handling refactoring pattern.

core/providers/cohere/cohere.go (4)

152-155: LGTM!

The error handling is now delegated to the centralized parseCohereError function, consolidating previously duplicated logic.


210-212: LGTM!

Consistent with the refactoring pattern applied across other error paths in this file.


385-388: LGTM!

Streaming error handling now properly delegates to the centralized error parser.


598-601: LGTM!

ResponsesStream error handling follows the same consolidated pattern.

core/providers/anthropic/anthropic.go (4)

151-154: LGTM!

Error handling in completeRequest now delegates to the centralized parseAnthropicError function.


195-198: LGTM!

Consistent error handling delegation in listModelsByKey.


449-452: LGTM!

The shared streaming handler now uses centralized error parsing, which benefits all providers using this function.


779-782: LGTM!

Responses stream error handling follows the consolidated pattern.

core/providers/elevenlabs/errors.go (1)

12-14: LGTM on the signature simplification.

Removing the providerName parameter and delegating to HandleProviderAPIError aligns with the standardized error parsing approach across other providers.

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 3540e84 to e8afa58 Compare December 4, 2025 13:13
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
core/providers/elevenlabs/errors.go (1)

12-89: Centralized ElevenLabs error parsing looks good; consider also setting top-level error type

The new parseElevenlabsError correctly reuses HandleProviderAPIError and then:

  • Aggregates validation errors into a combined message (with locations and joined types) when ValidationErrors is non-empty.
  • Falls back to the non-validation Detail.Status/Detail.Message fields when present, updating bifrostErr.Error in place.

For consistency with NewProviderAPIError, you may want to also set bifrostErr.Type (not just bifrostErr.Error.Type) in both branches so downstream code can rely on the top-level type field as well.
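
Concretely, that could look like the following in both branches (a sketch against the variable names used above):

	bifrostErr.Type = schemas.Ptr(errorType) // top-level type, matching NewProviderAPIError
	if bifrostErr.Error == nil {
		bifrostErr.Error = &schemas.ErrorField{}
	}
	bifrostErr.Error.Type = schemas.Ptr(errorType)
	bifrostErr.Error.Message = message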

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3540e84 and e8afa58.

📒 Files selected for processing (14)
  • core/providers/anthropic/anthropic.go (4 hunks)
  • core/providers/anthropic/errors.go (1 hunks)
  • core/providers/anthropic/types.go (1 hunks)
  • core/providers/cohere/cohere.go (4 hunks)
  • core/providers/cohere/errors.go (1 hunks)
  • core/providers/elevenlabs/elevenlabs.go (4 hunks)
  • core/providers/elevenlabs/errors.go (2 hunks)
  • core/providers/elevenlabs/types.go (1 hunks)
  • core/providers/gemini/errors.go (2 hunks)
  • core/providers/gemini/gemini.go (5 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/openai/errors.go (1 hunks)
  • core/providers/openai/openai.go (5 hunks)
  • core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • core/providers/anthropic/errors.go
  • core/providers/cohere/errors.go
  • core/providers/anthropic/types.go
  • core/providers/openai/errors.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/gemini/gemini.go
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/cohere/cohere.go
  • core/providers/elevenlabs/types.go
  • core/providers/gemini/types.go
  • core/providers/gemini/errors.go
  • core/providers/elevenlabs/errors.go
  • core/providers/openai/openai.go
  • core/providers/anthropic/anthropic.go
  • core/providers/utils/utils.go
🧬 Code graph analysis (3)
core/providers/gemini/errors.go (4)
core/providers/gemini/types.go (2)
  • GeminiGenerationError (1341-1343)
  • GeminiGenerationErrorStruct (1345-1350)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/schemas/utils.go (1)
  • Ptr (16-18)
core/utils.go (1)
  • Ptr (51-53)
core/providers/elevenlabs/errors.go (2)
core/providers/elevenlabs/types.go (1)
  • ElevenlabsError (157-159)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/openai/openai.go (2)
core/providers/openai/errors.go (1)
  • ParseOpenAIError (10-38)
core/schemas/bifrost.go (5)
  • TextCompletionStreamRequest (88-88)
  • ChatCompletionStreamRequest (90-90)
  • ResponsesStreamRequest (92-92)
  • SpeechStreamRequest (95-95)
  • TranscriptionStreamRequest (97-97)
🔇 Additional comments (9)
core/providers/anthropic/anthropic.go (1)

153-153: Excellent refactoring to centralize error parsing.

The consistent delegation to parseAnthropicError(resp) across all error paths (non-streaming and streaming) improves maintainability and reduces code duplication. The centralized function properly handles Anthropic-specific error details while delegating base error handling to the provider utilities, maintaining clean separation of concerns.

core/providers/utils/utils.go (1)

319-322: HandleProviderAPIError body copy looks correct

Copying resp.Body() into a local slice before unmarshalling avoids aliasing/reuse issues with fasthttp’s internal buffer while preserving existing behavior.

core/providers/cohere/cohere.go (1)

155-155: Cohere HTTP error paths now consistently delegate to parseCohereError

All non-200 branches correctly route through parseCohereError(resp), which keeps error parsing centralized and removes local duplication without changing the surrounding control flow.

Also applies to: 211-211, 388-388, 602-602

core/providers/openai/openai.go (1)

408-408: Streaming OpenAI HTTP errors correctly unified on ParseOpenAIError

Switching all streaming HTTP error branches to ParseOpenAIError aligns them with non-streaming handlers and centralizes error shaping (including provider/model/request-type metadata) in one place.

Also applies to: 815-815, 1256-1256, 1650-1650, 1923-1923

core/providers/elevenlabs/elevenlabs.go (1)

97-98: ElevenLabs HTTP error handling correctly routed through parseElevenlabsError

Non-200 responses for list models, speech, streaming speech, and transcription now all use parseElevenlabsError(resp), which removes duplicated parsing logic at call sites while preserving existing logging and control flow.

Also applies to: 227-230, 346-350, 484-487

core/providers/gemini/gemini.go (1)

98-100: Gemini HTTP error paths now uniformly use parseGeminiError

Centralizing non-200 handling on parseGeminiError(resp) across sync and streaming entry points keeps Gemini error parsing consistent and removes per-callsite JSON/body handling.

Also applies to: 156-158, 251-253, 477-480, 725-728

core/providers/elevenlabs/types.go (1)

158-159: Making ElevenlabsError.Detail optional is a sound modeling improvement

Switching Detail to *ElevenlabsErrorDetail with omitempty matches the API’s “detail may be missing” behavior and avoids serializing empty detail objects; the existing custom UnmarshalJSON continues to apply on the inner type.

core/providers/gemini/types.go (1)

1341-1357: GeminiGenerationError* types align with expected Gemini error payloads

The GeminiGenerationError, GeminiGenerationErrorStruct, and GeminiGenerationErrorDetails structs correctly model a standard Gemini error (code/message/status plus typed details with fieldViolations) and provide a solid basis for the centralized parseGeminiError logic.

core/providers/gemini/errors.go (1)

11-35: LGTM! Clean error conversion logic.

The function safely converts BifrostError to GeminiGenerationError with appropriate nil checks and default values. The implementation is straightforward and correct.

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from e8afa58 to 455b127 on December 5, 2025 14:01
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
core/providers/gemini/errors.go (1)

11-35: Unify multi‑error and single‑error handling in parseGeminiError (trim newline, propagate code)

ToGeminiError itself looks fine, but parseGeminiError still treats array vs single error responses inconsistently:

  • For []GeminiGenerationError (lines 40‑53), it:

    • Concatenates messages with + "\n", leaving a trailing newline in bifrostErr.Error.Message.
    • Ignores errorResp.Error.Code entirely.
  • For single GeminiGenerationError (lines 56‑65), it:

    • Sets bifrostErr.Error.Code from errorResp.Error.Code.
    • Sets bifrostErr.Error.Message from errorResp.Error.Message.

To keep error shape consistent and avoid cosmetic artifacts in messages, consider:

  1. After building the concatenated message, strip the trailing newline:
-		var message string
-		for _, errorResp := range errorResps {
-			if errorResp.Error != nil {
-				message = message + errorResp.Error.Message + "\n"
-			}
-		}
+		var message string
+		for _, errorResp := range errorResps {
+			if errorResp.Error != nil && errorResp.Error.Message != "" {
+				message += errorResp.Error.Message + "\n"
+			}
+		}
+		if message != "" {
+			message = strings.TrimSuffix(message, "\n")
+		}
  2. When at least one element has a non‑nil Error, also set bifrostErr.Error.Code from the first such element, mirroring the single‑error path:
-		if bifrostErr.Error == nil {
-			bifrostErr.Error = &schemas.ErrorField{}
-		}
-		bifrostErr.Error.Message = message
+		if bifrostErr.Error == nil {
+			bifrostErr.Error = &schemas.ErrorField{}
+		}
+
+		// Preserve a representative error code when available
+		for _, er := range errorResps {
+			if er.Error != nil {
+				bifrostErr.Error.Code = schemas.Ptr(strconv.Itoa(er.Error.Code))
+				break
+			}
+		}
+
+		bifrostErr.Error.Message = message

This way, both array and single‑object responses populate Code and Message in the same way, and consumers don’t see spurious trailing newlines.

Also applies to: 37-67

🧹 Nitpick comments (1)
core/providers/elevenlabs/errors.go (1)

12-89: ElevenLabs error parsing is robust, but consider wiring provider/request context into ExtraFields

The new parseElevenlabsError correctly:

  • Leverages HandleProviderAPIError as a common entry point.
  • Distinguishes validation vs non‑validation formats using Detail.ValidationErrors.
  • Aggregates validation messages/locations/types into a single human‑readable message.
  • Falls back to Detail.Message/Detail.Status when there are no validation errors.

One thing to consider (for consistency with helpers like ParseOpenAIError) is attaching provider/request context to ExtraFields on the returned *schemas.BifrostError (e.g., Provider, RequestType, ModelRequested). With the current func parseElevenlabsError(resp *fasthttp.Response) signature, the parser can’t set that context itself; if you rely on those fields for logs or metrics, you may want to:

  • Either extend the signature to accept provider/request context and populate ExtraFields here, or
  • Have call sites wrap the returned error and set ExtraFields before returning upstream.

If you don’t currently depend on those fields for ElevenLabs errors, the existing behavior is functionally fine.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e8afa58 and 455b127.

📒 Files selected for processing (14)
  • core/providers/anthropic/anthropic.go (4 hunks)
  • core/providers/anthropic/errors.go (1 hunks)
  • core/providers/anthropic/types.go (1 hunks)
  • core/providers/cohere/cohere.go (4 hunks)
  • core/providers/cohere/errors.go (1 hunks)
  • core/providers/elevenlabs/elevenlabs.go (4 hunks)
  • core/providers/elevenlabs/errors.go (2 hunks)
  • core/providers/elevenlabs/types.go (1 hunks)
  • core/providers/gemini/errors.go (2 hunks)
  • core/providers/gemini/gemini.go (5 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/openai/errors.go (1 hunks)
  • core/providers/openai/openai.go (5 hunks)
  • core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • core/providers/openai/errors.go
  • core/providers/anthropic/errors.go
  • core/providers/anthropic/anthropic.go
  • core/providers/gemini/gemini.go
  • core/providers/cohere/errors.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/cohere/cohere.go
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/elevenlabs/types.go
  • core/providers/utils/utils.go
  • core/providers/openai/openai.go
  • core/providers/elevenlabs/errors.go
  • core/providers/gemini/errors.go
  • core/providers/gemini/types.go
  • core/providers/anthropic/types.go
🧬 Code graph analysis (2)
core/providers/openai/openai.go (3)
core/providers/openai/errors.go (1)
  • ParseOpenAIError (10-38)
core/schemas/bifrost.go (5)
  • TextCompletionStreamRequest (88-88)
  • ChatCompletionStreamRequest (90-90)
  • ResponsesStreamRequest (92-92)
  • SpeechStreamRequest (95-95)
  • TranscriptionStreamRequest (97-97)
core/schemas/models.go (1)
  • Model (109-129)
core/providers/elevenlabs/errors.go (2)
core/providers/elevenlabs/types.go (1)
  • ElevenlabsError (157-159)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
🔇 Additional comments (10)
core/providers/cohere/cohere.go (4)

152-155: LGTM! Consistent error handling refactoring.

The replacement of manual error handling with parseCohereError(resp) improves code organization by centralizing error parsing logic. The debug logging is preserved, and the return type matches the method signature.


210-212: LGTM! Error handling correctly centralized.

The use of parseCohereError(resp) is consistent with the refactoring approach and properly handles non-200 responses.


385-388: LGTM! Streaming error handling properly refactored.

The centralized error parsing is correctly applied for streaming responses, with proper cleanup deferred before the error return.


600-603: The parseCohereError implementation is sound and properly handles all error cases.

The function delegates response body parsing to HandleProviderAPIError, which safely copies and unmarshals the response body. It correctly maps HTTP status codes, handles Cohere-specific error fields (Type, Message, Code), and protects against edge cases like malformed JSON or missing optional fields. The implementation follows the centralized error handling pattern consistently across the provider.

core/providers/anthropic/types.go (1)

386-393: AnthropicError pointer+omitempty change looks correct

Making Error a pointer with json:"error,omitempty" aligns with optional error payloads and avoids emitting empty objects; assuming call sites nil‑check before dereference, this is a safe, compatible refactor.

core/providers/utils/utils.go (1)

318-323: Copying the response body before unmarshalling is a solid safety improvement

Using a local body slice avoids coupling error parsing to fasthttp’s internal buffer lifecycle while preserving existing behavior for callers.

core/providers/openai/openai.go (1)

145-148: Centralizing OpenAI HTTP‑error handling via ParseOpenAIError is consistent and correct

All non‑streaming and streaming OpenAI paths now delegate HTTP‑level error parsing to ParseOpenAIError, passing the appropriate RequestType, provider key, and model (or empty for ListModels). This keeps error construction and ExtraFields population consistent with the new helper in core/providers/openai/errors.go without changing control flow.

Also applies to: 265-268, 405-409, 645-649, 1109-1112, 1253-1257, 1457-1460, 1531-1534, 1647-1650, 1805-1809, 1920-1924
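
For reference, the call shape now shared by these sites looks roughly like this (streaming chat completion shown; the other handlers differ only in the request-type constant, and the surrounding return plumbing is illustrative):

	if resp.StatusCode() != fasthttp.StatusOK {
		// request type, provider key, and requested model all land in ExtraFields
		return nil, ParseOpenAIError(resp, schemas.ChatCompletionStreamRequest, providerName, request.Model)
	}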

core/providers/gemini/types.go (1)

1341-1357: GeminiGenerationError type hierarchy matches Google RPC error schema

Defining GeminiGenerationError, GeminiGenerationErrorStruct, and GeminiGenerationErrorDetails with code/message/status/details[@type,fieldViolations] cleanly models Gemini’s error payloads and is a good basis for the new centralized parser.

core/providers/elevenlabs/elevenlabs.go (1)

96-98: ElevenLabs call sites now correctly delegate HTTP errors to parseElevenlabsError

The switch from parseElevenlabsError(providerName, resp) to parseElevenlabsError(resp) across list‑models, speech, streaming speech, and transcription paths matches the new helper signature and keeps the control flow and status‑code checks intact.

Also applies to: 226-230, 347-350, 484-487

core/providers/elevenlabs/types.go (1)

156-159: Making Detail optional on ElevenlabsError is appropriate

Switching Detail to *ElevenlabsErrorDetail with json:"detail,omitempty" matches the varied ElevenLabs error payloads and works cleanly with the existing custom unmarshalling plus the new parser, which already checks for nil.

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 455b127 to 8e37f9a on December 6, 2025 10:05
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
core/providers/gemini/errors.go (1)

42-54: Inconsistent error field extraction between array and single response paths.

The array response path (lines 42-54) only extracts Message and leaves a trailing newline, while the single response path (lines 59-65) extracts both Code and Message. For consistency, the array path should:

  1. Trim the trailing newline using strings.TrimSuffix(message, "\n")
  2. Extract the Code from the first error (similar to single response handling)
 	if len(errorResps) > 0 {
 		var message string
+		var code *string
 		for _, errorResp := range errorResps {
 			if errorResp.Error != nil {
 				message = message + errorResp.Error.Message + "\n"
+				if code == nil && errorResp.Error.Code != 0 {
+					code = schemas.Ptr(strconv.Itoa(errorResp.Error.Code))
+				}
 			}
 		}
 		if bifrostErr.Error == nil {
 			bifrostErr.Error = &schemas.ErrorField{}
 		}
-		bifrostErr.Error.Message = message
+		bifrostErr.Error.Message = strings.TrimSuffix(message, "\n")
+		if code != nil {
+			bifrostErr.Error.Code = code
+		}
 		return bifrostErr
 	}

Note: You'll also need to add "strings" to the imports.

🧹 Nitpick comments (1)
core/providers/cohere/errors.go (1)

9-20: Consider guarding against empty error fields when unmarshalling fails.

When HandleProviderAPIError fails to unmarshal the response (e.g., malformed JSON), it returns early with the raw response as the message, but errorResp fields remain zero-valued. The subsequent assignments on Lines 12 and 16 will then set Type to a pointer to an empty string and overwrite the useful raw response message with an empty string.

 func parseCohereError(resp *fasthttp.Response) *schemas.BifrostError {
 	var errorResp CohereError
 	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResp)
-	bifrostErr.Type = &errorResp.Type
-	if bifrostErr.Error == nil {
-		bifrostErr.Error = &schemas.ErrorField{}
+	if errorResp.Type != "" {
+		bifrostErr.Type = &errorResp.Type
+	}
+	if errorResp.Message != "" {
+		if bifrostErr.Error == nil {
+			bifrostErr.Error = &schemas.ErrorField{}
+		}
+		bifrostErr.Error.Message = errorResp.Message
 	}
-	bifrostErr.Error.Message = errorResp.Message
 	if errorResp.Code != nil {
+		if bifrostErr.Error == nil {
+			bifrostErr.Error = &schemas.ErrorField{}
+		}
 		bifrostErr.Error.Code = errorResp.Code
 	}
 	return bifrostErr
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 455b127 and 8e37f9a.

📒 Files selected for processing (14)
  • core/providers/anthropic/anthropic.go (4 hunks)
  • core/providers/anthropic/errors.go (1 hunks)
  • core/providers/anthropic/types.go (1 hunks)
  • core/providers/cohere/cohere.go (4 hunks)
  • core/providers/cohere/errors.go (1 hunks)
  • core/providers/elevenlabs/elevenlabs.go (4 hunks)
  • core/providers/elevenlabs/errors.go (2 hunks)
  • core/providers/elevenlabs/types.go (1 hunks)
  • core/providers/gemini/errors.go (2 hunks)
  • core/providers/gemini/gemini.go (5 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/openai/errors.go (1 hunks)
  • core/providers/openai/openai.go (5 hunks)
  • core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • core/providers/elevenlabs/types.go
  • core/providers/openai/errors.go
  • core/providers/cohere/cohere.go
  • core/providers/elevenlabs/errors.go
  • core/providers/elevenlabs/elevenlabs.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/gemini/gemini.go
  • core/providers/cohere/errors.go
  • core/providers/anthropic/types.go
  • core/providers/anthropic/errors.go
  • core/providers/openai/openai.go
  • core/providers/gemini/errors.go
  • core/providers/anthropic/anthropic.go
  • core/providers/utils/utils.go
  • core/providers/gemini/types.go
🧬 Code graph analysis (4)
core/providers/cohere/errors.go (2)
core/providers/cohere/types.go (1)
  • CohereError (528-532)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/anthropic/errors.go (2)
core/providers/anthropic/types.go (1)
  • AnthropicError (387-393)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/openai/openai.go (2)
core/providers/openai/errors.go (1)
  • ParseOpenAIError (10-38)
core/schemas/bifrost.go (5)
  • TextCompletionStreamRequest (88-88)
  • ChatCompletionStreamRequest (90-90)
  • ResponsesStreamRequest (92-92)
  • SpeechStreamRequest (95-95)
  • TranscriptionStreamRequest (97-97)
core/providers/gemini/errors.go (2)
core/providers/gemini/types.go (2)
  • GeminiGenerationError (1341-1343)
  • GeminiGenerationErrorStruct (1345-1350)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
🔇 Additional comments (17)
core/providers/gemini/types.go (1)

1341-1357: LGTM - Error types restructured for centralized parsing.

The new error types are well-structured with GeminiGenerationError wrapping an optional pointer to GeminiGenerationErrorStruct. This aligns with the centralized error parsing pattern used across other providers.

Minor observation: Unlike the Anthropic error type, GeminiGenerationErrorStruct fields don't have omitempty tags. This is acceptable since these fields are typically always present in Gemini error responses.

core/providers/anthropic/types.go (1)

389-392: LGTM - Error field correctly changed to pointer with omitempty.

Converting Error to a pointer type enables nil-safe handling in the centralized parseAnthropicError function, and the omitempty tag prevents serializing null error fields in JSON output.

core/providers/anthropic/errors.go (1)

9-19: LGTM - Centralized error parsing for Anthropic.

The function correctly delegates to HandleProviderAPIError and enriches the error with Anthropic-specific type and message fields when available. The nil check on Line 13 is a good defensive pattern even though HandleProviderAPIError currently always initializes the Error field.

core/providers/utils/utils.go (1)

317-337: LGTM - Body copy prevents potential data corruption.

Creating a local copy of resp.Body() before unmarshalling is the correct approach. The fasthttp response body references an internal buffer that could be reused, so copying it ensures stable data for both JSON unmarshalling and raw response message construction in the error path.

core/providers/gemini/gemini.go (5)

99-99: LGTM - Error handling centralized in completeRequest.

The error path now uses the centralized parseGeminiError(resp) function, which standardizes error handling across all Gemini API calls.


157-157: LGTM - Consistent error handling in listModelsByKey.


252-252: LGTM - Consistent error handling in ChatCompletion.


477-479: LGTM - Consistent error handling in SpeechStream.


725-727: LGTM - Consistent error handling in TranscriptionStream.

core/providers/anthropic/anthropic.go (4)

153-153: LGTM - Error handling centralized in completeRequest.

The error path now uses the centralized parseAnthropicError(resp) function, replacing the previous inline error construction.


197-197: LGTM - Consistent error handling in listModelsByKey.


449-451: LGTM - Streaming error handling unified.

The streaming error path now uses the same parseAnthropicError(resp) function as non-streaming paths, replacing the previously separate parseStreamAnthropicError helper.


797-799: LGTM - Consistent error handling in HandleAnthropicResponsesStream.

core/providers/openai/openai.go (2)

408-408: LGTM! Streaming error handling now uses centralized parser.

The switch to ParseOpenAIError from errors.go aligns with the refactor goal of consolidating duplicate error handling. This is consistent with the non-streaming handlers in this file.


815-815: Consistent error handling refactor across all streaming endpoints.

All streaming handlers (ChatCompletionStream, ResponsesStream, SpeechStream, TranscriptionStream) now consistently use the centralized ParseOpenAIError function with appropriate request types. This eliminates code duplication and standardizes error parsing.

Also applies to: 1256-1256, 1649-1649, 1923-1923

core/providers/gemini/errors.go (2)

11-35: LGTM! Clean conversion from BifrostError to GeminiGenerationError.

The function correctly handles nil input, extracts relevant fields with proper nil checks, and constructs the GeminiGenerationError with the new pointer-based struct layout.


56-67: Single response handling looks correct.

The fallback from array to single object parsing is appropriate for handling different Gemini API error response formats. The Code and Message extraction is properly implemented here.

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 8e37f9a to 9e093fc on December 8, 2025 11:54
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (3)
core/providers/elevenlabs/elevenlabs.go (3)

226-230: Speech error path correctly uses centralized parser (same concern as listModels)

The move to parseElevenlabsError(resp) here matches the listModels path and keeps behavior consistent; logging still includes providerName. The only outstanding question is the same provider alias consistency noted in the listModels comment.


346-350: SpeechStream non‑200 handling aligned with new parser

For streaming, the non‑200 path now also goes through parseElevenlabsError(resp), which is consistent with the rest of the provider. Behavior looks unchanged apart from delegating to the centralized helper; alias/label concern is the same as in the listModels comment.


484-487: Transcription error handling consistently routed through centralized parser

Transcription’s non‑200 branch now mirrors the other methods by using parseElevenlabsError(resp) after logging. This keeps error handling uniform across Elevenlabs operations; only remaining consideration is the shared provider key/alias check mentioned earlier.

🧹 Nitpick comments (4)
core/providers/elevenlabs/elevenlabs.go (1)

96-98: Centralized Elevenlabs error parsing looks good; verify provider key/alias usage

Switching to parseElevenlabsError(resp) keeps the non‑200 path simple and aligned with the new centralized helper. One thing to double‑check at the stack level: since providerName is no longer passed in, ensure parseElevenlabsError (and the underlying HandleProviderAPIError) still tag the BifrostError with the same provider key/alias as GetProviderKey() (including any custom provider aliases from CustomProviderConfig). If it currently hardcodes schemas.Elevenlabs instead of using the alias, you might want to thread the alias in or assert that this difference is intentional.

core/providers/gemini/gemini.go (1)

99-100: Centralizing Gemini HTTP error parsing via parseGeminiError looks consistent; consider whether you still need per-call context on the returned error.

All non-200 HTTP cases now funnel through parseGeminiError(resp), including streaming setup paths. That’s a nice simplification and correctly avoids reusing resp.Body() after release thanks to HandleProviderAPIError cloning the body.

One thing to double‑check: previously, streaming error helpers took providerName (and in some providers, request type/model) when building *schemas.BifrostError. With the new helper only taking *fasthttp.Response, those contextual fields (e.g., ExtraFields.Provider, ExtraFields.RequestType, ExtraFields.ModelRequested or Error.Type derived from provider status) now depend on being added further up the stack, if at all.

If any downstream logging/metrics/clients rely on those being present for HTTP‑level failures, it may be worth:

  • either enriching the *schemas.BifrostError at these call sites just before returning, or
  • extending parseGeminiError to accept and populate that context.

If nothing currently depends on that metadata, the current refactor is fine as‑is.

Also applies to: 155-158, 250-253, 476-480, 724-728
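
If that metadata does turn out to be needed, a lightweight option is to enrich at the call site rather than widening the parser signature (a sketch; the surrounding variables and return shape belong to the handler, and the request-type constant varies per path):

	if resp.StatusCode() != fasthttp.StatusOK {
		bifrostErr := parseGeminiError(resp)
		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
			Provider:       providerName,
			RequestType:    schemas.ChatCompletionStreamRequest, // whichever request type this path serves
			ModelRequested: request.Model,
		}
		return nil, bifrostErr
	}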

core/providers/gemini/errors.go (1)

12-36: Gemini error translation logic is sound; optionally propagate status/details into BifrostError for richer diagnostics.

The new ToGeminiError and parseGeminiError flows look correct:

  • ToGeminiError safely handles nil and sensibly defaults code from StatusCode, status from Error.Type, and message from Error.Message.
  • parseGeminiError correctly:
    • Tries []GeminiGenerationError first, aggregating messages and setting Error.Code from the first entry.
    • Falls back to single GeminiGenerationError.
    • Relies on HandleProviderAPIError to clone the body and always return a non‑nil base *BifrostError, so there’s no NPE risk.

Two optional polish points you may want to consider:

  1. Map Gemini status into BifrostError
    In both the slice and single‑error branches you have access to Error.Status, but right now that isn’t surfaced into BifrostError (e.g., Error.Type). If upstream tooling uses Error.Type to distinguish categories (INVALID_ARGUMENT, PERMISSION_DENIED, etc.), setting it from status would make these errors more useful.

  2. Optionally surface selected details
    GeminiGenerationErrorDetails is defined but currently unused in the mapping. Even just appending the first field-violation description into the aggregated Message (behind a small helper) would significantly improve debuggability while keeping the structure simple.

These are not correctness issues, just potential improvements for observability and ergonomics.

Also applies to: 38-79
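
Both polish points together would look roughly like this inside parseGeminiError (a sketch; the FieldViolations/Description names on GeminiGenerationErrorDetails are assumed from the JSON shape, since that struct body isn't shown above, and bifrostErr.Error is assumed already initialized by the surrounding branch):

	if errorResp.Error != nil {
		if errorResp.Error.Status != "" {
			// Surface the RPC status (e.g. INVALID_ARGUMENT) as the Bifrost error type.
			bifrostErr.Error.Type = schemas.Ptr(errorResp.Error.Status)
		}
		// Append the first field violation, if present, for quicker debugging.
		for _, detail := range errorResp.Error.Details {
			if len(detail.FieldViolations) > 0 && detail.FieldViolations[0].Description != "" {
				bifrostErr.Error.Message += " (" + detail.FieldViolations[0].Description + ")"
				break
			}
		}
	}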

core/providers/gemini/types.go (1)

1341-1357: New GeminiGenerationError* types correctly model the Gemini error envelope; consider omitempty if you want sparse re‑encoding.

The new error types line up with Gemini’s documented error shape (error.code, error.message, error.status, error.details{ @type, fieldViolations[].description }) and work fine for unmarshalling and for ToGeminiError’s serialization.

If you intend these fields to behave as optional when you serialize them back out (e.g., to avoid emitting empty details or blank status), you might want to add omitempty to the struct tags:

type GeminiGenerationErrorStruct struct {
    Code    int                            `json:"code,omitempty"`
    Message string                         `json:"message,omitempty"`
    Status  string                         `json:"status,omitempty"`
    Details []GeminiGenerationErrorDetails `json:"details,omitempty"`
}

Not required for correctness, just a minor alignment with “optional fields” semantics and slightly cleaner JSON on the wire.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8e37f9a and 9e093fc.

📒 Files selected for processing (14)
  • core/providers/anthropic/anthropic.go (4 hunks)
  • core/providers/anthropic/errors.go (1 hunks)
  • core/providers/anthropic/types.go (1 hunks)
  • core/providers/cohere/cohere.go (4 hunks)
  • core/providers/cohere/errors.go (1 hunks)
  • core/providers/elevenlabs/elevenlabs.go (4 hunks)
  • core/providers/elevenlabs/errors.go (2 hunks)
  • core/providers/elevenlabs/types.go (1 hunks)
  • core/providers/gemini/errors.go (2 hunks)
  • core/providers/gemini/gemini.go (5 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/openai/errors.go (1 hunks)
  • core/providers/openai/openai.go (5 hunks)
  • core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • core/providers/elevenlabs/types.go
  • core/providers/cohere/errors.go
  • core/providers/openai/errors.go
  • core/providers/anthropic/anthropic.go
  • core/providers/utils/utils.go
  • core/providers/elevenlabs/errors.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/cohere/cohere.go
  • core/providers/openai/openai.go
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/anthropic/types.go
  • core/providers/anthropic/errors.go
  • core/providers/gemini/gemini.go
  • core/providers/gemini/errors.go
  • core/providers/gemini/types.go
🧬 Code graph analysis (2)
core/providers/anthropic/errors.go (2)
core/providers/anthropic/types.go (1)
  • AnthropicError (387-393)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/gemini/errors.go (4)
core/providers/gemini/types.go (2)
  • GeminiGenerationError (1341-1343)
  • GeminiGenerationErrorStruct (1345-1350)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/schemas/utils.go (1)
  • Ptr (16-18)
core/utils.go (1)
  • Ptr (51-53)
🔇 Additional comments (7)
core/providers/openai/openai.go (1)

405-409: Centralizing streaming error handling on ParseOpenAIError looks good; verify behavior with StreamBody responses

Using ParseOpenAIError across all OpenAI streaming handlers (TextCompletionStreaming, ChatCompletionStreaming, ResponsesStreaming, SpeechStream, TranscriptionStream) improves consistency and deduplicates error parsing. The requestType, providerName, and request.Model arguments are all wired correctly.

One subtle point to double‑check: in each of these handlers, resp.StreamBody = true is set before client.Do, and on non‑200 responses you immediately call ParseOpenAIError(resp, ...) without reading from resp.BodyStream(). If ParseOpenAIError (via helpers in errors.go/providerUtils) assumes a buffered body and reads from resp.Body() instead of the stream, it may see an empty or incomplete body for these streaming responses.

Please confirm that ParseOpenAIError correctly handles responses where StreamBody is enabled (e.g., by reading/buffering the stream in the error path), or adjust either the streaming call‑sites or the helper so error JSON from streaming endpoints is reliably available to the parser.

Also applies to: 812-816, 1253-1257, 1646-1650, 1920-1924
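
If the helper does read resp.Body(), one option is to buffer the stream in the error path before parsing (a sketch using fasthttp's stream accessors; the handler's return shape is illustrative):

	if resp.StatusCode() != fasthttp.StatusOK {
		if resp.IsBodyStream() {
			// Drain the error body into the response so resp.Body() (and the parser) can see it.
			if body, readErr := io.ReadAll(resp.BodyStream()); readErr == nil {
				resp.SetBody(body)
			}
		}
		return nil, ParseOpenAIError(resp, schemas.ChatCompletionStreamRequest, providerName, request.Model)
	}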

core/providers/anthropic/types.go (1)

386-393: AnthropicError pointer + omitempty change is sound and matches the new parser

Making Error a pointer with json:"error,omitempty" aligns with parseAnthropicError, which nil‑checks errorResp.Error before reading it, so there’s no new nil‑deref risk. The only behavior change is that re‑marshalled AnthropicError values will now omit the error field entirely when it’s absent, instead of emitting an empty object—this looks desirable, but it’s worth confirming no external consumer depended on the field always being present.

Please run or extend any Anthropic error‑path tests/assertions that inspect serialized error JSON to confirm they still match expectations.

core/providers/anthropic/errors.go (1)

1-20: parseAnthropicError correctly layers Anthropic details on top of the shared Bifrost error

The helper cleanly composes with HandleProviderAPIError: generic parsing and status handling stay in providerUtils, and when the body matches AnthropicError you populate Error.Type and Error.Message in a single place. The defensive bifrostErr.Error == nil check is harmless (and future‑proof) given the current utils contract.

Recommend adding or running Anthropic integration/unit tests that hit a few real error shapes (e.g., invalid API key, bad params, and a malformed body) to confirm BifrostError.Error.Type and .Message are populated as expected while the generic fallback still works when unmarshalling fails.
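
A minimal table-driven sketch of such a test (the response bodies mirror Anthropic's documented error shape; exact expectations should be adapted to the real AnthropicError fields and to HandleProviderAPIError's fallback behavior):

package anthropic

import (
	"testing"

	"github.com/valyala/fasthttp"
)

func TestParseAnthropicError(t *testing.T) {
	cases := []struct {
		name     string
		status   int
		body     string
		wantType string
		wantMsg  string
	}{
		{
			name:     "invalid api key",
			status:   401,
			body:     `{"type":"error","error":{"type":"authentication_error","message":"invalid x-api-key"}}`,
			wantType: "authentication_error",
			wantMsg:  "invalid x-api-key",
		},
		{
			name:     "bad params",
			status:   400,
			body:     `{"type":"error","error":{"type":"invalid_request_error","message":"max_tokens is required"}}`,
			wantType: "invalid_request_error",
			wantMsg:  "max_tokens is required",
		},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			resp := fasthttp.AcquireResponse()
			defer fasthttp.ReleaseResponse(resp)
			resp.SetStatusCode(tc.status)
			resp.SetBodyString(tc.body)

			bifrostErr := parseAnthropicError(resp)
			if bifrostErr == nil || bifrostErr.Error == nil {
				t.Fatal("expected a populated BifrostError")
			}
			if bifrostErr.Error.Type == nil || *bifrostErr.Error.Type != tc.wantType {
				t.Errorf("type = %v, want %q", bifrostErr.Error.Type, tc.wantType)
			}
			if bifrostErr.Error.Message != tc.wantMsg {
				t.Errorf("message = %q, want %q", bifrostErr.Error.Message, tc.wantMsg)
			}
		})
	}
}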

core/providers/cohere/cohere.go (4)

151-155: Centralizing non‑200 error handling in completeRequest looks good; ensure metadata is preserved

Switching to parseCohereError(resp) here is consistent with the new centralized error‑parsing pattern and removes duplication. Please just confirm that parseCohereError populates BifrostError.ExtraFields (at least Provider and RequestType) in a way that matches what callers expect, since completeRequest itself no longer adds that context.


209-212: Consistent error parsing in listModelsByKey; verify list‑models context is captured

Delegating the non‑200 path to parseCohereError(resp) keeps Cohere error handling uniform with other operations. Double‑check that parseCohereError includes enough context for list‑models errors (provider, request type, and ideally endpoint/URI) now that this function only forwards its result.


384-388: Streaming HTTP error path: confirm parseCohereError is safe with streaming responses and doesn’t release resp

For the streaming chat path, the pattern:

  • sets resp.StreamBody = true,
  • on non‑200 defers ReleaseStreamingResponse(resp), and
  • then calls parseCohereError(resp),

is reasonable, but it assumes that:

  1. parseCohereError can safely read and buffer the body from a response whose body is being streamed, and
  2. parseCohereError does not itself call ReleaseStreamingResponse / fasthttp.ReleaseResponse, to avoid double‑releasing/closing.

Please verify both assumptions against the implementation in core/providers/cohere/errors.go and shared utils.


599-603: Same streaming‑error concerns apply to ResponsesStream

This mirrors the streaming chat error path and has the same requirements:

  • parseCohereError(resp) must handle a streaming response body correctly, and
  • it must not independently release/close resp, since that’s already handled via the deferred ReleaseStreamingResponse(resp).

If those invariants hold, this centralization of error handling for responses streaming is solid.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
core/providers/elevenlabs/errors.go (1)

58-67: Inconsistent error construction in validation error path.

The validation error handling creates a new BifrostError instead of updating the bifrostErr obtained from HandleProviderAPIError (line 14). This is inconsistent with the non-validation error path (lines 80-86) and discards any context from HandleProviderAPIError, such as the copied response body.

Apply this diff to align with the standardized error handling pattern:

 		if message != "" {
+			if bifrostErr.Error == nil {
+				bifrostErr.Error = &schemas.ErrorField{}
+			}
+			bifrostErr.Error.Type = schemas.Ptr(errorType)
+			bifrostErr.Error.Message = message
-			return &schemas.BifrostError{
-				IsBifrostError: false,
-				StatusCode:     schemas.Ptr(resp.StatusCode()),
-				Error: &schemas.ErrorField{
-					Type:    schemas.Ptr(errorType),
-					Message: message,
-				},
-			}
 		}
🧹 Nitpick comments (2)
core/providers/gemini/errors.go (1)

44-52: Consider using strings.Builder for concatenating error messages.

String concatenation in a loop using the + operator (line 51) creates intermediate string allocations. For better performance with multiple errors, consider using strings.Builder:

-		var message string
+		var messageBuilder strings.Builder
 		var firstError *GeminiGenerationErrorStruct
 		for _, errorResp := range errorResps {
 			if errorResp.Error != nil {
 				if firstError == nil {
 					firstError = errorResp.Error
 				}
-				message = message + errorResp.Error.Message + "\n"
+				messageBuilder.WriteString(errorResp.Error.Message)
+				messageBuilder.WriteString("\n")
 			}
 		}
 		// Trim trailing newline
-		message = strings.TrimSuffix(message, "\n")
+		message := strings.TrimSuffix(messageBuilder.String(), "\n")
core/providers/openai/errors.go (1)

1-42: ParseOpenAIError logic looks solid; consider removing redundant nil-check.

The flow via providerUtils.HandleProviderAPIError and the subsequent field copying correctly handles both JSON and non‑JSON error bodies, and ExtraFields is now always populated. Given HandleProviderAPIError always returns a non‑nil *schemas.BifrostError today, the if bifrostErr != nil { ... } guard before setting ExtraFields is redundant and can be safely dropped to simplify the code.

-	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
-	if bifrostErr != nil {
-		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
-			Provider:       providerName,
-			ModelRequested: model,
-			RequestType:    requestType,
-		}
-	}
+	// Set ExtraFields so provider/model/request metadata is always attached
+	bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
+		Provider:       providerName,
+		ModelRequested: model,
+		RequestType:    requestType,
+	}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9e093fc and 257bd7a.

📒 Files selected for processing (14)
  • core/providers/anthropic/anthropic.go (4 hunks)
  • core/providers/anthropic/errors.go (1 hunks)
  • core/providers/anthropic/types.go (1 hunks)
  • core/providers/cohere/cohere.go (4 hunks)
  • core/providers/cohere/errors.go (1 hunks)
  • core/providers/elevenlabs/elevenlabs.go (4 hunks)
  • core/providers/elevenlabs/errors.go (2 hunks)
  • core/providers/elevenlabs/types.go (1 hunks)
  • core/providers/gemini/errors.go (2 hunks)
  • core/providers/gemini/gemini.go (5 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/openai/errors.go (1 hunks)
  • core/providers/openai/openai.go (5 hunks)
  • core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
  • core/providers/cohere/cohere.go
  • core/providers/cohere/errors.go
  • core/providers/anthropic/types.go
  • core/providers/anthropic/errors.go
  • core/providers/elevenlabs/types.go
  • core/providers/anthropic/anthropic.go
  • core/providers/utils/utils.go
  • core/providers/gemini/gemini.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/gemini/errors.go
  • core/providers/openai/errors.go
  • core/providers/openai/openai.go
  • core/providers/elevenlabs/errors.go
  • core/providers/gemini/types.go
🧠 Learnings (1)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/gemini/errors.go
  • core/providers/openai/errors.go
  • core/providers/openai/openai.go
  • core/providers/elevenlabs/errors.go
  • core/providers/gemini/types.go
🧬 Code graph analysis (4)
core/providers/gemini/errors.go (4)
core/providers/gemini/types.go (2)
  • GeminiGenerationError (1341-1343)
  • GeminiGenerationErrorStruct (1345-1350)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/schemas/utils.go (1)
  • Ptr (16-18)
core/utils.go (1)
  • Ptr (51-53)
core/providers/openai/errors.go (2)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/schemas/bifrost.go (1)
  • BifrostErrorExtraFields (420-424)
core/providers/openai/openai.go (2)
core/providers/openai/errors.go (1)
  • ParseOpenAIError (10-42)
core/schemas/bifrost.go (5)
  • TextCompletionStreamRequest (88-88)
  • ChatCompletionStreamRequest (90-90)
  • ResponsesStreamRequest (92-92)
  • SpeechStreamRequest (95-95)
  • TranscriptionStreamRequest (97-97)
core/providers/elevenlabs/errors.go (2)
core/providers/elevenlabs/types.go (1)
  • ElevenlabsError (157-159)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
🔇 Additional comments (6)
core/providers/elevenlabs/elevenlabs.go (1)

97-97: LGTM! Error parsing calls correctly updated.

All call sites to parseElevenlabsError have been consistently updated to pass only the resp parameter, aligning with the centralized error handling approach across providers.

Also applies to: 229-229, 349-349, 486-486

core/providers/elevenlabs/errors.go (2)

12-14: LGTM! Function signature and initialization aligned with standardized approach.

The updated signature removes the providerName parameter and uses HandleProviderAPIError to obtain a base error structure, consistent with the centralized error handling pattern.


70-88: LGTM! Non-validation error handling correctly updates the base error.

This path properly updates the bifrostErr from HandleProviderAPIError instead of creating a new error, which is the correct approach for the standardized error handling pattern.

core/providers/gemini/errors.go (1)

12-36: LGTM!

The function correctly transforms a BifrostError into the new GeminiGenerationError format. The nil handling, field extraction, and struct construction are all appropriate.

core/providers/gemini/types.go (1)

1345-1357: PR summary claim appears inaccurate: error struct fields are not actually marked optional.

The PR summary states "made error struct fields optional with appropriate omitempty tags," but Code, Message, and Status in GeminiGenerationErrorStruct (and Type in GeminiGenerationErrorDetails) lack omitempty tags. Based on Gemini API documentation, these are standard error response fields that are always present. The struct definitions are correct—these fields should remain required—but the PR summary description doesn't match the implementation and should be clarified.

Likely an incorrect or invalid review comment.

core/providers/openai/openai.go (1)

408-409: Centralizing streaming HTTP error handling on ParseOpenAIError looks correct and consistent.

All streaming handlers now delegate non‑200 HTTP responses to ParseOpenAIError with the appropriate RequestType, providerName, and request.Model, and they correctly defer ReleaseStreamingResponse before returning. This keeps error semantics aligned with the non‑streaming paths and guarantees ExtraFields metadata on all OpenAI streaming HTTP errors.

Also applies to: 815-816, 1256-1257, 1649-1650, 1923-1924

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch 3 times, most recently from dc2aa28 to a5c379a Compare December 11, 2025 14:37
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from a5c379a to 6bd2b80 Compare December 11, 2025 19:21
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
core/providers/elevenlabs/errors.go (1)

58-67: Inconsistent error handling: validation errors bypass centralized error construction.

The validation error path creates a new BifrostError instead of updating the bifrostErr returned by HandleProviderAPIError. This inconsistency means validation errors don't benefit from the centralized error handling setup and follow a different construction path than non-validation errors.

Apply this diff to align validation error handling with non-validation errors:

 		if message != "" {
-			return &schemas.BifrostError{
-				IsBifrostError: false,
-				StatusCode:     schemas.Ptr(resp.StatusCode()),
-				Error: &schemas.ErrorField{
-					Type:    schemas.Ptr(errorType),
-					Message: message,
-				},
-			}
+			if bifrostErr.Error == nil {
+				bifrostErr.Error = &schemas.ErrorField{}
+			}
+			bifrostErr.Error.Type = schemas.Ptr(errorType)
+			bifrostErr.Error.Message = message
+			return bifrostErr
 		}
 	}
core/providers/cohere/cohere.go (1)

432-462: chunkIndex is incremented twice per event in ChatCompletionStream

Within the streaming goroutine, chunkIndex is incremented at Line 432 and again at Line 461, causing the exposed ChunkIndex in ExtraFields to skip values (1, 3, 5, …) instead of being sequential per chunk. This is a minor correctness issue in the metadata.

You can make ChunkIndex strictly sequential by removing one of the increments, for example:

-                chunkIndex++
-
                 // Extract response ID from message-start events
                 if event.Type == StreamEventMessageStart && event.ID != nil {
                     responseID = *event.ID
                 }
@@
                     lastChunkTime = time.Now()
                     chunkIndex++

Or, alternatively, keep a single increment at the top of the loop and remove the one after lastChunkTime—the important part is that each logical chunk only increments once.
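A minimal sketch of the invariant being suggested, with hypothetical names (events, buildChunk, emitChunk) standing in for Cohere's real stream loop, just to show where the single increment belongs:

// Hypothetical loop shape; one increment per emitted chunk, nothing else touches chunkIndex.
chunkIndex := -1
for event := range events {
	chunkIndex++ // exactly one increment per logical chunk
	chunk := buildChunk(event)
	chunk.ExtraFields.ChunkIndex = chunkIndex // yields 0, 1, 2, ... with no gaps
	emitChunk(chunk)
	lastChunkTime = time.Now()
}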

🧹 Nitpick comments (2)
core/providers/openai/openai.go (1)

405-409: Streaming text completion HTTP error handling is correct; optional DRY helper

Using ParseOpenAIError here brings streaming HTTP errors in line with non‑streaming handling and keeps ExtraFields consistent. The defer providerUtils.ReleaseStreamingResponse(resp) before returning ensures the stream resources are cleaned up after parsing. Given this exact pattern is repeated across several streaming handlers, you could optionally extract a small helper (e.g., handleOpenAIStreamingHTTPError(...)) to avoid duplication and keep the error path consistent in one place, but functionally this is fine as‑is.
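If the helper route is taken, it could be as small as the sketch below; the name handleOpenAIStreamingHTTPError is the hypothetical one from this comment, and the parameter types mirror what ParseOpenAIError is assumed to accept.

// Hypothetical helper extracted from the repeated streaming error path.
func handleOpenAIStreamingHTTPError(
	resp *fasthttp.Response,
	requestType schemas.RequestType,
	providerName schemas.ModelProvider,
	model string,
) *schemas.BifrostError {
	// Safe only because ParseOpenAIError copies what it needs before returning;
	// callers must not touch resp after this helper runs.
	defer providerUtils.ReleaseStreamingResponse(resp)
	return ParseOpenAIError(resp, requestType, providerName, model)
}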

core/providers/gemini/errors.go (1)

38-79: Multi-/single-error parsing is consistent; consider a small defensive guard on bifrostErr.

The refactored parseGeminiError:

  • Uses HandleProviderAPIError to support both []GeminiGenerationError and GeminiGenerationError payloads.
  • Aggregates multi-error messages into a newline-joined string and trims the trailing newline.
  • Sets Error.Code from the first error and Error.Message to the combined message, so array and single responses now populate both fields consistently (fixing the earlier inconsistency noted in past reviews).

This all looks correct. The only minor robustness concern is that the function assumes providerUtils.HandleProviderAPIError never returns a nil *schemas.BifrostError; if that contract ever changes, the bifrostErr.Error = ... writes will panic.

If you’d like to decouple from that assumption, you could defensively ensure bifrostErr is non-nil before use:

 func parseGeminiError(resp *fasthttp.Response) *schemas.BifrostError {
 	var errorResps []GeminiGenerationError
 	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResps)
+	if bifrostErr == nil {
+		bifrostErr = &schemas.BifrostError{}
+	}
 	if len(errorResps) > 0 {
@@
 	bifrostErr = providerUtils.HandleProviderAPIError(resp, &errorResp)
+	if bifrostErr == nil {
+		bifrostErr = &schemas.BifrostError{}
+	}
 	if errorResp.Error != nil {

Not strictly required if the existing contract is guaranteed, but it makes this helper more self-contained and future-proof.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 46611e8 and 6bd2b80.

📒 Files selected for processing (14)
  • core/providers/anthropic/anthropic.go (4 hunks)
  • core/providers/anthropic/errors.go (1 hunks)
  • core/providers/anthropic/types.go (1 hunks)
  • core/providers/cohere/cohere.go (4 hunks)
  • core/providers/cohere/errors.go (1 hunks)
  • core/providers/elevenlabs/elevenlabs.go (4 hunks)
  • core/providers/elevenlabs/errors.go (2 hunks)
  • core/providers/elevenlabs/types.go (1 hunks)
  • core/providers/gemini/errors.go (2 hunks)
  • core/providers/gemini/gemini.go (5 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/openai/errors.go (1 hunks)
  • core/providers/openai/openai.go (5 hunks)
  • core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • core/providers/anthropic/anthropic.go
  • core/providers/cohere/errors.go
  • core/providers/openai/errors.go
  • core/providers/utils/utils.go
  • core/providers/anthropic/types.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/gemini/gemini.go
  • core/providers/gemini/errors.go
  • core/providers/elevenlabs/errors.go
  • core/providers/cohere/cohere.go
  • core/providers/anthropic/errors.go
  • core/providers/gemini/types.go
  • core/providers/openai/openai.go
  • core/providers/elevenlabs/types.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/gemini/gemini.go
  • core/providers/gemini/errors.go
  • core/providers/elevenlabs/errors.go
  • core/providers/cohere/cohere.go
  • core/providers/anthropic/errors.go
  • core/providers/gemini/types.go
  • core/providers/openai/openai.go
  • core/providers/elevenlabs/types.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.

Applied to files:

  • core/providers/openai/openai.go
🧬 Code graph analysis (3)
core/providers/elevenlabs/errors.go (2)
core/providers/elevenlabs/types.go (1)
  • ElevenlabsError (157-159)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/anthropic/errors.go (2)
core/providers/anthropic/types.go (1)
  • AnthropicError (387-393)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/openai/openai.go (2)
core/providers/openai/errors.go (1)
  • ParseOpenAIError (10-42)
core/schemas/models.go (1)
  • Model (109-129)
🔇 Additional comments (14)
core/providers/elevenlabs/types.go (1)

158-158: LGTM: Error detail field properly converted to optional pointer.

The change to *ElevenlabsErrorDetail with omitempty correctly aligns with the centralized error handling refactor, allowing error details to be absent when not provided by the API.

core/providers/elevenlabs/elevenlabs.go (1)

97-97: LGTM: Error parsing calls correctly updated to new signature.

All four call sites consistently updated to use the refactored parseElevenlabsError(resp) signature without the providerName parameter, aligning with the centralized error handling approach.

Also applies to: 229-229, 349-349, 486-486

core/providers/openai/openai.go (4)

812-816: Consistent centralized error parsing for chat completion stream

Same pattern as text completions: on non‑200 HTTP status you release the streaming response and delegate to ParseOpenAIError with ChatCompletionStreamRequest and request.Model. This keeps chat stream HTTP errors aligned with the rest of the provider’s error handling.


1253-1257: Responses stream HTTP errors now flow through unified parser

For /v1/responses streaming, non‑200 HTTP responses are now parsed via ParseOpenAIError with ResponsesStreamRequest, which matches how non‑streaming responses errors are handled and ensures consistent BifrostError shaping. Resource cleanup via ReleaseStreamingResponse remains intact.


1646-1650: Speech stream HTTP error path aligned with shared OpenAI error parsing

On speech SSE endpoints, routing non‑OK HTTP responses through ParseOpenAIError with SpeechStreamRequest and the requested model will give callers consistent error metadata with the other OpenAI operations. The surrounding defer keeps the streaming response lifecycle correct.


1920-1924: Transcription stream HTTP errors correctly use ParseOpenAIError

The transcription streaming handler now mirrors the other streaming functions: for non‑200 status codes you release the streaming response and use ParseOpenAIError with TranscriptionStreamRequest and request.Model. This centralizes parsing and maintains consistent error shaping across all OpenAI streaming APIs.

core/providers/anthropic/errors.go (1)

9-20: LGTM! Solid error parsing implementation.

The function correctly centralizes Anthropic error parsing with appropriate defensive nil checks and proper field mapping. The pointer assignment pattern on line 16 (&errorResp.Error.Type) is safe because the nested struct is heap-allocated during JSON unmarshalling, making it a valid Go pattern for optional fields.

The integration with HandleProviderAPIError ensures consistent error handling across providers, aligning with the broader refactoring goals.

Note: This review is based on the isolated file. Since this PR is part of a stack (dependent PRs #979 and #999), a holistic review of the entire stack would provide additional context for the error handling patterns being established across all providers.

core/providers/gemini/types.go (1)

1341-1357: Gemini error type definitions align with parser usage.

Struct shapes and JSON tags match how parseGeminiError consumes Gemini error envelopes; the pointer wrapper on Error keeps the envelope optional and avoids nil derefs in the parser. No changes requested here.

core/providers/gemini/gemini.go (1)

99-100: Centralized Gemini HTTP error handling via parseGeminiError looks consistent.

All non-200 HTTP paths (sync and streaming) now delegate to parseGeminiError(resp), which aligns this provider with the unified error parsing approach used elsewhere in the PR. Body decoding, latency handling, and streaming resource cleanup are preserved.

Also applies to: 157-158, 251-253, 477-480, 725-728

core/providers/gemini/errors.go (1)

12-36: ToGeminiError mapping from BifrostError is clear and side-effect free.

The helper cleanly derives code, status, and message from BifrostError (with sane defaults) and wraps them in the new GeminiGenerationError envelope used across the refactored parser. No changes requested.

core/providers/cohere/cohere.go (4)

151-155: Centralizing non-200 error handling via parseCohereError looks good

Routing all non-OK responses in completeRequest through parseCohereError keeps the HTTP error path consistent with other providers and removes duplication, without changing the success path behavior.


209-212: Consistent list-models error handling via parseCohereError

Having listModelsByKey delegate non-200 responses to parseCohereError aligns list-models with the same centralized error-parsing logic used elsewhere, which should simplify future maintenance.


384-388: Streaming HTTP error path now shares centralized parser; verify behavior with StreamBody

Using parseCohereError(resp) for non-200 statuses in ChatCompletionStream unifies streaming and non-streaming error handling. Given resp.StreamBody = true, please double-check that parseCohereError (and the underlying utils it calls) correctly reads and buffers the error body when StreamBody is set, and doesn't rely on patterns that assume a non-stream response.

It would be good to exercise this by hitting the chat stream endpoint with an invalid key and confirming:

  • The stream is not started,
  • You receive a single BifrostError with the expected Cohere error payload,
  • No panics or leaks occur around ReleaseStreamingResponse.

599-603: Shared error parser for responses stream; confirm handling of non-200 SSE responses

ResponsesStream now also delegates non-OK HTTP responses to parseCohereError(resp), which is consistent with the rest of the provider.

As with ChatCompletionStream, please verify with a responses-stream request that:

  • A non-200 response (e.g., bad key) is turned into a single coherent BifrostError,
  • The body is fully consumed and ReleaseStreamingResponse runs cleanly,
  • No assumptions in parseCohereError break when resp.StreamBody is true.

@TejasGhatte TejasGhatte force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 6bd2b80 to 982bc60 Compare December 11, 2025 20:02
@TejasGhatte TejasGhatte mentioned this pull request Dec 11, 2025
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 982bc60 to 6bd2b80 Compare December 11, 2025 20:13
@TejasGhatte TejasGhatte force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 6bd2b80 to 982bc60 Compare December 11, 2025 20:32
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 982bc60 to 6bd2b80 Compare December 11, 2025 20:33
@TejasGhatte TejasGhatte force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 6bd2b80 to 982bc60 Compare December 11, 2025 21:21
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 982bc60 to 6bd2b80 Compare December 11, 2025 21:33
@TejasGhatte TejasGhatte force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 6bd2b80 to 982bc60 Compare December 11, 2025 21:47
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
core/providers/elevenlabs/errors.go (1)

18-67: Preserve StatusCode in validation error path.

The validation error handling (lines 18-67) returns a new BifrostError (lines 59-66), discarding the bifrostErr obtained from HandleProviderAPIError at line 14. This loses the StatusCode and other fields. The non-validation path (lines 71-86) correctly updates bifrostErr.Error and returns bifrostErr.

Apply this diff to preserve StatusCode:

 		if message != "" {
-			return &schemas.BifrostError{
-				IsBifrostError: false,
-				StatusCode:     schemas.Ptr(resp.StatusCode()),
-				Error: &schemas.ErrorField{
-					Type:    schemas.Ptr(errorType),
-					Message: message,
-				},
-			}
+			if bifrostErr.Error == nil {
+				bifrostErr.Error = &schemas.ErrorField{}
+			}
+			bifrostErr.Error.Type = schemas.Ptr(errorType)
+			bifrostErr.Error.Message = message
+			return bifrostErr
 		}
🧹 Nitpick comments (1)
core/providers/anthropic/errors.go (1)

9-19: Consider creating explicit string copies for clarity.

The error field extraction works correctly, but taking the address of errorResp.Error.Type directly (line 16) is slightly unusual. While safe in Go (strings are immutable), creating an explicit copy would be clearer:

 func parseAnthropicError(resp *fasthttp.Response) *schemas.BifrostError {
 	var errorResp AnthropicError
 	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResp)
 	if errorResp.Error != nil {
 		if bifrostErr.Error == nil {
 			bifrostErr.Error = &schemas.ErrorField{}
 		}
-		bifrostErr.Error.Type = &errorResp.Error.Type
+		errType := errorResp.Error.Type
+		bifrostErr.Error.Type = &errType
 		bifrostErr.Error.Message = errorResp.Error.Message
 	}
 	return bifrostErr
 }

This makes the ownership semantics explicit and follows the pattern used elsewhere in the codebase.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6bd2b80 and 982bc60.

📒 Files selected for processing (14)
  • core/providers/anthropic/anthropic.go (4 hunks)
  • core/providers/anthropic/errors.go (1 hunks)
  • core/providers/anthropic/types.go (1 hunks)
  • core/providers/cohere/cohere.go (4 hunks)
  • core/providers/cohere/errors.go (1 hunks)
  • core/providers/elevenlabs/elevenlabs.go (4 hunks)
  • core/providers/elevenlabs/errors.go (2 hunks)
  • core/providers/elevenlabs/types.go (1 hunks)
  • core/providers/gemini/errors.go (2 hunks)
  • core/providers/gemini/gemini.go (5 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/openai/errors.go (1 hunks)
  • core/providers/openai/openai.go (5 hunks)
  • core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • core/providers/openai/errors.go
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/cohere/errors.go
  • core/providers/anthropic/types.go
  • core/providers/gemini/gemini.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/elevenlabs/types.go
  • core/providers/cohere/cohere.go
  • core/providers/anthropic/anthropic.go
  • core/providers/utils/utils.go
  • core/providers/elevenlabs/errors.go
  • core/providers/openai/openai.go
  • core/providers/anthropic/errors.go
  • core/providers/gemini/errors.go
  • core/providers/gemini/types.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/providers/elevenlabs/types.go
  • core/providers/cohere/cohere.go
  • core/providers/anthropic/anthropic.go
  • core/providers/utils/utils.go
  • core/providers/elevenlabs/errors.go
  • core/providers/openai/openai.go
  • core/providers/anthropic/errors.go
  • core/providers/gemini/errors.go
  • core/providers/gemini/types.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.

Applied to files:

  • core/providers/openai/openai.go
🧬 Code graph analysis (4)
core/providers/elevenlabs/errors.go (2)
core/providers/elevenlabs/types.go (1)
  • ElevenlabsError (157-159)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/openai/openai.go (2)
core/providers/openai/errors.go (1)
  • ParseOpenAIError (10-42)
core/schemas/bifrost.go (5)
  • TextCompletionStreamRequest (88-88)
  • ChatCompletionStreamRequest (90-90)
  • ResponsesStreamRequest (92-92)
  • SpeechStreamRequest (95-95)
  • TranscriptionStreamRequest (97-97)
core/providers/anthropic/errors.go (2)
core/providers/anthropic/types.go (1)
  • AnthropicError (387-393)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/gemini/errors.go (3)
core/providers/gemini/types.go (2)
  • GeminiGenerationError (1341-1343)
  • GeminiGenerationErrorStruct (1345-1350)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/utils.go (1)
  • Ptr (51-53)
🔇 Additional comments (7)
core/providers/cohere/cohere.go (1)

154-154: LGTM! Consistent centralization of error parsing.

The error handling paths now consistently delegate to parseCohereError(resp), which aligns with the PR's goal of extracting error parsing into dedicated files. This reduces code duplication and improves maintainability.

Also applies to: 211-211, 387-387, 602-602

core/providers/utils/utils.go (1)

317-338: Good fix for body preservation during error parsing.

The change to copy resp.Body() before unmarshalling prevents use-after-free issues since fasthttp reuses internal buffers. This is consistent with the pattern used in completeRequest (line 164) and ensures error parsing functions receive stable data.
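For readers skimming the thread, the approved pattern is essentially the sketch below; the copy line matches the append([]byte(nil), resp.Body()...) idiom quoted later in this review, while the decoder call and the errorTarget name are illustrative assumptions.

// Copy before unmarshalling: fasthttp may reuse the buffer backing resp.Body().
body := append([]byte(nil), resp.Body()...)
if err := json.Unmarshal(body, errorTarget); err != nil {
	// errorTarget is a placeholder for the provider-specific error struct;
	// non-JSON bodies can fall back to being surfaced as plain text.
}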

core/providers/elevenlabs/types.go (1)

157-159: LGTM! Proper nullable handling for optional error detail.

Converting Detail to a pointer with omitempty correctly models the optional nature of error details in ElevenLabs API responses. This prevents nil pointer issues when the API returns errors without detailed information and allows proper JSON serialization.

core/providers/anthropic/anthropic.go (1)

153-153: LGTM! Consistent error parsing centralization.

All error handling paths now use parseAnthropicError(resp), removing duplication and aligning with the PR's goal. The exported streaming functions (HandleAnthropicChatCompletionStreaming, HandleAnthropicResponsesStream) correctly use the same centralized error parser.

Also applies to: 197-197, 451-451, 799-799

core/providers/openai/openai.go (1)

146-146: Excellent: Centralized error handling across all request types.

All error paths now delegate to ParseOpenAIError, ensuring consistent error metadata (provider, model, request type) and eliminating duplicated parsing logic across streaming and non-streaming endpoints.

Also applies to: 267-267, 408-408, 648-648, 815-815, 1111-1111, 1256-1256, 1459-1459, 1533-1533, 1649-1649, 1808-1808, 1923-1923

core/providers/gemini/types.go (1)

1341-1357: LGTM: Clean error type definitions.

The new error types (GeminiGenerationError, GeminiGenerationErrorStruct, GeminiGenerationErrorDetails) provide clear separation between the error envelope and structured error details, with appropriate JSON tags for optional fields.

core/providers/gemini/errors.go (1)

38-79: LGTM: Consistent error extraction for both array and single responses.

The function correctly handles both []GeminiGenerationError and single GeminiGenerationError formats:

  • Array path: concatenates messages (with trailing newline trimmed), extracts Code from first error
  • Single response path: extracts both Code and Message
  • Both paths update bifrostErr.Error and return bifrostErr, preserving StatusCode from HandleProviderAPIError

@TejasGhatte TejasGhatte force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 982bc60 to 0f9320e Compare December 12, 2025 06:37
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
core/providers/elevenlabs/errors.go (1)

44-51: Minor: Leading space when message is empty but locations exist.

If messages is empty but locations is populated, the resulting message will have a leading space (e.g., " [body.field]" instead of "[body.field]").

 			if len(messages) > 0 {
 				message = strings.Join(messages, "; ")
 			}
 			if len(locations) > 0 {
 				locationStr := strings.Join(locations, ", ")
-				message = message + " [" + locationStr + "]"
+				if message != "" {
+					message = message + " [" + locationStr + "]"
+				} else {
+					message = "[" + locationStr + "]"
+				}
 			}
core/providers/openai/errors.go (1)

32-39: Redundant nil check — HandleProviderAPIError always returns non-nil.

Looking at HandleProviderAPIError in core/providers/utils/utils.go, it always returns a pointer to a newly allocated BifrostError, never nil. The check at line 33 is unnecessary.

-	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
-	if bifrostErr != nil {
-		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
-			Provider:       providerName,
-			ModelRequested: model,
-			RequestType:    requestType,
-		}
+	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
+	bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
+		Provider:       providerName,
+		ModelRequested: model,
+		RequestType:    requestType,
 	}
core/providers/gemini/errors.go (1)

40-78: Consider: Response body is parsed twice when falling through to single-object handling.

HandleProviderAPIError is called twice—once for array format (line 42) and once for single-object format (line 70). While correct (the body is copied internally), this is somewhat wasteful when the response is a single object.

An alternative would be to try single-object parsing first (more common case?) or use json.RawMessage to avoid double parsing. However, this is a minor optimization and the current approach is functionally correct.
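One simple single-pass variant checks the first non-space byte instead of decoding twice; this is an illustrative sketch (with bytes and encoding/json imported), not a drop-in replacement for parseGeminiError.

// Hypothetical single-pass decode: branch on array vs. object, unmarshal once per shape.
func decodeGeminiErrorsSketch(body []byte) []GeminiGenerationError {
	trimmed := bytes.TrimSpace(body)
	if len(trimmed) > 0 && trimmed[0] == '[' {
		var many []GeminiGenerationError
		if err := json.Unmarshal(trimmed, &many); err == nil {
			return many
		}
		return nil
	}
	var one GeminiGenerationError
	if err := json.Unmarshal(trimmed, &one); err == nil {
		return []GeminiGenerationError{one}
	}
	return nil
}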

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 982bc60 and 0f9320e.

📒 Files selected for processing (14)
  • core/providers/anthropic/anthropic.go (4 hunks)
  • core/providers/anthropic/errors.go (1 hunks)
  • core/providers/anthropic/types.go (1 hunks)
  • core/providers/cohere/cohere.go (4 hunks)
  • core/providers/cohere/errors.go (1 hunks)
  • core/providers/elevenlabs/elevenlabs.go (4 hunks)
  • core/providers/elevenlabs/errors.go (2 hunks)
  • core/providers/elevenlabs/types.go (1 hunks)
  • core/providers/gemini/errors.go (2 hunks)
  • core/providers/gemini/gemini.go (5 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/openai/errors.go (1 hunks)
  • core/providers/openai/openai.go (5 hunks)
  • core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/anthropic/anthropic.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/cohere/cohere.go
  • core/providers/elevenlabs/types.go
  • core/providers/utils/utils.go
  • core/providers/elevenlabs/errors.go
  • core/providers/openai/errors.go
  • core/providers/cohere/errors.go
  • core/providers/anthropic/types.go
  • core/providers/gemini/errors.go
  • core/providers/gemini/gemini.go
  • core/providers/gemini/types.go
  • core/providers/openai/openai.go
  • core/providers/anthropic/errors.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/providers/cohere/cohere.go
  • core/providers/elevenlabs/types.go
  • core/providers/utils/utils.go
  • core/providers/elevenlabs/errors.go
  • core/providers/openai/errors.go
  • core/providers/cohere/errors.go
  • core/providers/anthropic/types.go
  • core/providers/gemini/errors.go
  • core/providers/gemini/gemini.go
  • core/providers/gemini/types.go
  • core/providers/openai/openai.go
  • core/providers/anthropic/errors.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.

Applied to files:

  • core/providers/openai/errors.go
  • core/providers/openai/openai.go
🧬 Code graph analysis (6)
core/providers/elevenlabs/errors.go (2)
core/providers/elevenlabs/types.go (1)
  • ElevenlabsError (157-159)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/openai/errors.go (2)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/schemas/bifrost.go (1)
  • BifrostErrorExtraFields (421-425)
core/providers/cohere/errors.go (2)
core/providers/cohere/types.go (1)
  • CohereError (543-547)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/gemini/errors.go (4)
core/providers/gemini/types.go (2)
  • GeminiGenerationError (1341-1343)
  • GeminiGenerationErrorStruct (1345-1350)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/schemas/utils.go (1)
  • Ptr (16-18)
core/utils.go (1)
  • Ptr (51-53)
core/providers/openai/openai.go (2)
core/providers/openai/errors.go (1)
  • ParseOpenAIError (10-42)
core/schemas/bifrost.go (5)
  • TextCompletionStreamRequest (88-88)
  • ChatCompletionStreamRequest (90-90)
  • ResponsesStreamRequest (92-92)
  • SpeechStreamRequest (95-95)
  • TranscriptionStreamRequest (97-97)
core/providers/anthropic/errors.go (2)
core/providers/anthropic/types.go (1)
  • AnthropicError (387-393)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
🔇 Additional comments (24)
core/providers/utils/utils.go (1)

319-322: LGTM: Body copy prevents use-after-free.

Creating a local copy of resp.Body() before unmarshalling ensures the original buffer isn't consumed or mutated during the unmarshal operation. This is a sound defensive programming practice for fasthttp response bodies.

core/providers/anthropic/types.go (1)

389-392: LGTM: Pointer field enables nil-safe error handling.

Changing Error to a pointer with omitempty aligns with the centralized error parsing in parseAnthropicError, which checks for nil before accessing nested fields. This prevents forced serialization of empty error objects and supports optional error details.
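As a generic illustration of why the pointer-plus-omitempty shape matters (the types below are stand-ins, not the actual AnthropicError):

// Stand-in type showing the pattern: an absent "error" key unmarshals to a nil pointer,
// and a nil pointer is omitted on marshal thanks to omitempty.
type providerError struct {
	Error *struct {
		Type    string `json:"type"`
		Message string `json:"message"`
	} `json:"error,omitempty"`
}

func messageOf(e providerError) string {
	if e.Error == nil { // the nil check parseAnthropicError relies on
		return ""
	}
	return e.Error.Message
}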

core/providers/cohere/cohere.go (4)

154-154: LGTM: Centralized error parsing.

Routing through parseCohereError(resp) consolidates error handling and ensures consistent error field population across all request types.


211-211: LGTM: Consistent error handling.

Using parseCohereError(resp) for list models errors maintains consistency with the centralized error parsing pattern.


387-387: LGTM: Unified streaming error path.

Streaming errors now use parseCohereError(resp), aligning with non-streaming error handling.


602-602: LGTM: Consistent streaming error handling.

ResponsesStream now uses the centralized parseCohereError(resp) for HTTP errors.

core/providers/cohere/errors.go (1)

9-21: LGTM: Centralized Cohere error parsing.

The function safely extracts error details and delegates to providerUtils.HandleProviderAPIError. The nil-check on line 13 prevents nil pointer dereferences, and optional Code assignment is handled correctly.

core/providers/elevenlabs/types.go (1)

158-158: LGTM: Pointer field enables optional detail handling.

Changing Detail to a pointer with omitempty allows nil-safe parsing and prevents forced serialization of empty detail objects. The custom UnmarshalJSON already handles nil cases appropriately.

core/providers/anthropic/errors.go (1)

9-20: LGTM: Safe Anthropic error parsing.

The function properly checks for nil on line 12 before accessing errorResp.Error fields, preventing nil pointer dereferences. The delegation to providerUtils.HandleProviderAPIError ensures consistent error construction.

core/providers/gemini/gemini.go (5)

99-99: LGTM: Centralized error parsing.

Routing through parseGeminiError(resp) consolidates error handling for all non-OK responses.


157-157: LGTM: Consistent error handling.

List models errors now use parseGeminiError(resp) for consistent error construction.


252-252: LGTM: Unified chat completion error path.

Chat completion errors now use the centralized parseGeminiError(resp).


479-479: LGTM: Consistent streaming error handling.

SpeechStream HTTP errors now use parseGeminiError(resp).


727-727: LGTM: Unified transcription stream errors.

TranscriptionStream now uses the centralized parseGeminiError(resp) for HTTP errors.

core/providers/openai/openai.go (5)

408-408: LGTM: Centralized error parsing with context.

Routing through ParseOpenAIError with requestType, providerName, and model ensures consistent error field population including ExtraFields.


815-815: LGTM: Consistent streaming error handling.

Chat completion stream errors now use ParseOpenAIError with full context.


1256-1256: LGTM: Unified responses stream error path.

ResponsesStream HTTP errors use the centralized ParseOpenAIError.


1649-1649: LGTM: Consistent speech stream error handling.

SpeechStream now uses ParseOpenAIError for HTTP errors.


1923-1923: LGTM: Unified transcription stream errors.

TranscriptionStream uses the centralized ParseOpenAIError with context parameters.

core/providers/elevenlabs/errors.go (1)

12-88: LGTM! Clean refactor to centralized error handling.

The function correctly handles both validation errors (array format) and non-validation errors (single object format), delegating to HandleProviderAPIError for initial parsing.

core/providers/openai/errors.go (1)

9-42: LGTM! Well-structured OpenAI error parser.

The function correctly extracts OpenAI error fields and unconditionally sets ExtraFields metadata (addressing the previous review feedback). Using schemas.BifrostError as the unmarshal target is a pragmatic approach given the structural similarity.

core/providers/gemini/types.go (1)

1341-1357: LGTM! Clean error type definitions.

The new GeminiGenerationError hierarchy is well-structured and aligns with the Gemini API error response format. Using a pointer for the Error field enables nil checks for proper error detection.

core/providers/gemini/errors.go (2)

12-36: LGTM! ToGeminiError correctly converts BifrostError to Gemini format.

The conversion logic properly extracts status, message, and code from the BifrostError and constructs a well-formed GeminiGenerationError for response serialization.


38-65: LGTM! Array error aggregation is now consistent with single-error handling.

The Code field is now extracted from the first error (line 61), and messages are properly concatenated with trailing newline trimmed (lines 51, 55). This addresses the previous review feedback about inconsistent field extraction.

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 0f9320e to 14a7a70 Compare December 12, 2025 08:27
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (6)
core/providers/openai/errors.go (1)

32-39: Consider removing the defensive nil check.

The condition if bifrostErr != nil on line 33 is defensive but unnecessary. Based on the code snippet from core/providers/utils/utils.go (lines 316-337), HandleProviderAPIError always returns a non-nil *schemas.BifrostError. Removing the conditional would simplify the code:

-	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
-	if bifrostErr != nil {
-		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
-			Provider:       providerName,
-			ModelRequested: model,
-			RequestType:    requestType,
-		}
-	}
+	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
+	bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
+		Provider:       providerName,
+		ModelRequested: model,
+		RequestType:    requestType,
+	}
core/providers/gemini/errors.go (1)

13-36: Harden ToGeminiError: consider falling back to bifrostErr.Error.Code when StatusCode is nil.
Right now Code defaults to 500 unless bifrostErr.StatusCode is set (Lines 26-28). If some call paths construct *schemas.BifrostError without StatusCode but with Error.Code, you'll lose the real status/code.

Suggested minimal diff:

 func ToGeminiError(bifrostErr *schemas.BifrostError) *GeminiGenerationError {
@@
 	code := 500
@@
 	if bifrostErr.StatusCode != nil {
 		code = *bifrostErr.StatusCode
+	} else if bifrostErr.Error != nil && bifrostErr.Error.Code != nil {
+		if parsed, err := strconv.Atoi(*bifrostErr.Error.Code); err == nil {
+			code = parsed
+		}
 	}
 	return &GeminiGenerationError{
core/providers/gemini/types.go (1)

1341-1357: Consider adding omitempty + naming FieldViolations element type (if these structs are ever marshaled).
If GeminiGenerationErrorStruct / GeminiGenerationErrorDetails are only used for unmarshaling provider responses, this is fine. If they’re ever marshaled (tests, logs, passthrough), omitempty and a named violation type will keep output smaller and improve reuse.

Example refactor:

 type GeminiGenerationErrorStruct struct {
-	Code    int                            `json:"code"`
-	Message string                         `json:"message"`
-	Status  string                         `json:"status"`
-	Details []GeminiGenerationErrorDetails `json:"details"`
+	Code    int                            `json:"code,omitempty"`
+	Message string                         `json:"message,omitempty"`
+	Status  string                         `json:"status,omitempty"`
+	Details []GeminiGenerationErrorDetails `json:"details,omitempty"`
 }
 
 type GeminiGenerationErrorDetails struct {
 	Type            string `json:"@type"`
-	FieldViolations []struct {
-		Description string `json:"description"`
-	} `json:"fieldViolations"`
+	FieldViolations []GeminiFieldViolation `json:"fieldViolations,omitempty"`
 }
+
+type GeminiFieldViolation struct {
+	Description string `json:"description,omitempty"`
+}
core/providers/cohere/cohere.go (1)

151-155: Avoid logging full provider error bodies (truncate / guard).
Line 153 logs string(resp.Body()) on every non-200; this can get large/noisy and may unintentionally include provider details you don’t want in logs. Consider truncating and/or logging status + request id fields if available.
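One way to bound what gets logged, sketched with a hypothetical helper (the 512-byte cap and the logger call are assumptions; the logger call is shown only as a comment because the project's logger API isn't visible here):

// Hypothetical truncation helper for error-body logging.
func truncateForLog(body []byte, max int) string {
	if len(body) <= max {
		return string(body)
	}
	return string(body[:max]) + "... (truncated)"
}

// e.g. logger.Debug("cohere non-200 response", "status", resp.StatusCode(), "body", truncateForLog(resp.Body(), 512))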

core/providers/anthropic/errors.go (1)

9-20: LGTM! Effective refactoring that centralizes error parsing.

The extraction of error parsing into a dedicated helper is a clean improvement that reduces code duplication across streaming and non-streaming paths.

Note: Lines 16-17 handle Type as a pointer but Message as a value. This inconsistency suggests Type is optional while Message is required. A brief comment explaining this design choice would improve maintainability, especially as this pattern is being replicated across multiple providers in the stack.

Consider adding a comment:

 func parseAnthropicError(resp *fasthttp.Response) *schemas.BifrostError {
 	var errorResp AnthropicError
 	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResp)
+	// Populate error details from Anthropic's error response
+	// Type is optional (*string), Message is required (string)
 	if errorResp.Error != nil {
 		if bifrostErr.Error == nil {
 			bifrostErr.Error = &schemas.ErrorField{}
 		}
 		bifrostErr.Error.Type = &errorResp.Error.Type
 		bifrostErr.Error.Message = errorResp.Error.Message
 	}
 	return bifrostErr
 }
core/providers/elevenlabs/elevenlabs.go (1)

226-230: Avoid logging full provider error bodies at Debug if they can include sensitive request details.

string(resp.Body()) can include user-provided text and/or identifiers; consider truncating/size-limiting (and/or structured fields) to reduce accidental sensitive-data retention in logs.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0f9320e and 14a7a70.

📒 Files selected for processing (14)
  • core/providers/anthropic/anthropic.go (4 hunks)
  • core/providers/anthropic/errors.go (1 hunks)
  • core/providers/anthropic/types.go (1 hunks)
  • core/providers/cohere/cohere.go (4 hunks)
  • core/providers/cohere/errors.go (1 hunks)
  • core/providers/elevenlabs/elevenlabs.go (4 hunks)
  • core/providers/elevenlabs/errors.go (2 hunks)
  • core/providers/elevenlabs/types.go (1 hunks)
  • core/providers/gemini/errors.go (2 hunks)
  • core/providers/gemini/gemini.go (5 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/openai/errors.go (1 hunks)
  • core/providers/openai/openai.go (5 hunks)
  • core/providers/utils/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • core/providers/cohere/errors.go
  • core/providers/elevenlabs/types.go
  • core/providers/utils/utils.go
  • core/providers/anthropic/types.go
  • core/providers/gemini/gemini.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/gemini/errors.go
  • core/providers/cohere/cohere.go
  • core/providers/anthropic/anthropic.go
  • core/providers/openai/openai.go
  • core/providers/elevenlabs/errors.go
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/openai/errors.go
  • core/providers/anthropic/errors.go
  • core/providers/gemini/types.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/providers/gemini/errors.go
  • core/providers/cohere/cohere.go
  • core/providers/anthropic/anthropic.go
  • core/providers/openai/openai.go
  • core/providers/elevenlabs/errors.go
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/openai/errors.go
  • core/providers/anthropic/errors.go
  • core/providers/gemini/types.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.

Applied to files:

  • core/providers/openai/openai.go
  • core/providers/openai/errors.go
🧬 Code graph analysis (5)
core/providers/gemini/errors.go (3)
core/providers/gemini/types.go (2)
  • GeminiGenerationError (1341-1343)
  • GeminiGenerationErrorStruct (1345-1350)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/schemas/utils.go (1)
  • Ptr (16-18)
core/providers/openai/openai.go (2)
core/providers/openai/errors.go (1)
  • ParseOpenAIError (10-42)
core/schemas/bifrost.go (5)
  • TextCompletionStreamRequest (88-88)
  • ChatCompletionStreamRequest (90-90)
  • ResponsesStreamRequest (92-92)
  • SpeechStreamRequest (95-95)
  • TranscriptionStreamRequest (97-97)
core/providers/elevenlabs/errors.go (2)
core/providers/elevenlabs/types.go (1)
  • ElevenlabsError (157-159)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/openai/errors.go (2)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/schemas/bifrost.go (1)
  • BifrostErrorExtraFields (421-425)
core/providers/anthropic/errors.go (2)
core/providers/anthropic/types.go (1)
  • AnthropicError (387-393)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
🔇 Additional comments (8)
core/providers/openai/openai.go (1)

408-408: LGTM! Centralized error handling is consistent across all streaming endpoints.

The refactor successfully routes all streaming error responses through the external ParseOpenAIError function. All calls use the correct request types and pass the appropriate parameters (response, request type, provider name, model). This eliminates code duplication and provides a single maintenance point for error parsing logic.

Also applies to: 815-815, 1256-1256, 1649-1649, 1923-1923
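
For orientation, here is a sketch of what each guarded streaming call site looks like under these parameters. The guard condition, the `providerName`/`request` variables, and the parameter order (response, request type, provider name, model) are assumptions drawn from this comment, not copied from openai.go:

```go
// Hypothetical call-site shape; ParseOpenAIError's real signature may differ.
if resp.StatusCode() != fasthttp.StatusOK {
	return nil, ParseOpenAIError(resp, schemas.ChatCompletionStreamRequest, providerName, request.Model)
}
```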

core/providers/gemini/errors.go (1)

39-79: Stack-wide Gemini error envelope verification complete. All error type migrations are properly aligned:

  • Old GeminiChatRequestError types have been removed throughout the codebase
  • New GeminiGenerationError and GeminiGenerationErrorStruct types are correctly defined in types.go and used consistently in error parsing
  • parseGeminiError is wired from all expected call sites in gemini.go (5 invocations covering streaming and non-streaming flows)

No lingering references to removed types or integration issues detected across the stack.

core/providers/cohere/cohere.go (2)

209-212: LGTM: swapping inline error construction for parseCohereError.
This reduces duplication and keeps listModelsByKey aligned with other Cohere call sites.


599-603: No issue here; the error body is available at this point.

The response body is readable in the error path because BodyStream() is only used in the successful response path (inside the goroutine after the status check passes). At line 602, when parseCohereError(resp) is called, the response is fresh from the HTTP request, and HandleProviderAPIError properly reads the body via resp.Body() with a copy operation (body := append([]byte(nil), resp.Body()...)). No prior operations consume the body before the error check.

Likely an incorrect or invalid review comment.

core/providers/anthropic/anthropic.go (1)

153-153: LGTM! Consistent application of centralized error parsing.

The refactoring correctly replaces inline error handling with parseAnthropicError(resp) across all error paths:

  • Non-streaming: completeRequest (line 153), listModelsByKey (line 197)
  • Streaming: HandleAnthropicChatCompletionStreaming (line 451), HandleAnthropicResponsesStream (line 799)

The pattern is applied consistently and maintains the same error semantics while improving code organization.

Also applies to: 197-197, 451-451, 799-799

core/providers/elevenlabs/elevenlabs.go (2)

484-487: LGTM: consistent centralized error parsing usage.


96-98: Signature migration verified; error context properly maintained via utility function.

All four call sites (lines 97, 229, 349, 486) use the new signature correctly. Provider error context is preserved through providerUtils.HandleProviderAPIError(resp, &errorResp) which constructs the BifrostError with status code and structured error details. No stale call sites with the old signature remain.

core/providers/elevenlabs/errors.go (1)

12-89: Fix streamed-body error parsing (likely empty resp.Body() when streaming).

The core issue is real: when SpeechStream sets resp.StreamBody = true (confirmed at line 305 in elevenlabs.go), resp.Body() may be empty on error paths, so HandleProviderAPIError, which reads from resp.Body() (line 319 in utils.go), falls back to a generic error with no provider details.

However, the suggested fix uses resp.SetBodyRaw(b), which does not exist in fasthttp's Response API. Instead, you need to buffer from resp.BodyStream() before calling HandleProviderAPIError. Consider modifying HandleProviderAPIError itself to handle streamed bodies, or buffer the stream data directly in parseElevenlabsError and pass it to a modified version of the handler.

Also note: the secondary fix for the locations condition check (adding && message != "") is valid and should be applied.
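
A possible shape for that buffering, sketched under stated assumptions: `bufferStreamedBody` is an illustrative helper (not code from this PR), and it relies only on fasthttp's documented `Response.BodyStream()`, `SetBody()`, and `Body()` methods:

```go
package elevenlabs

import (
	"io"

	"github.com/valyala/fasthttp"
)

// bufferStreamedBody drains a streamed response body so parsers that read
// resp.Body() (such as HandleProviderAPIError) can still see the provider's
// JSON error payload. Sketch only; error handling kept deliberately simple.
func bufferStreamedBody(resp *fasthttp.Response) []byte {
	stream := resp.BodyStream()
	if stream == nil {
		return resp.Body() // no stream configured; body already in memory
	}
	buf, err := io.ReadAll(stream)
	if err != nil {
		return resp.Body()
	}
	resp.SetBody(buf) // re-attach the buffered bytes for resp.Body() callers
	return buf
}
```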

@TejasGhatte TejasGhatte force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 14a7a70 to 0f9320e Compare December 12, 2025 09:53
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 0f9320e to 14a7a70 Compare December 12, 2025 11:24
@TejasGhatte TejasGhatte force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 14a7a70 to 0f9320e Compare December 12, 2025 11:25
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 12-01-refactor_core_providers_error_handling_standardization branch from 0f9320e to be363a1 Compare December 12, 2025 11:26
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
core/providers/elevenlabs/elevenlabs.go (1)

346-350: Streaming error path may parse an empty body.

As noted in a previous review, when resp.StreamBody = true (line 305), resp.Body() may be empty even when the provider returns a JSON error body. This causes parseElevenlabsError(resp) to fall back to a generic message. Consider buffering from resp.BodyStream() before parsing errors in streaming contexts.

🧹 Nitpick comments (3)
core/providers/openai/errors.go (1)

9-42: LGTM - Centralized error parsing with ExtraFields always set.

The function correctly addresses the previous review feedback by setting ExtraFields unconditionally.

Minor note: The if bifrostErr != nil check on line 33 is redundant since HandleProviderAPIError always returns a non-nil *schemas.BifrostError. The check is harmless but could be removed.

-	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
-	if bifrostErr != nil {
-		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
-			Provider:       providerName,
-			ModelRequested: model,
-			RequestType:    requestType,
-		}
+	// Set ExtraFields unconditionally so provider/model/request metadata is always attached
+	bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
+		Provider:       providerName,
+		ModelRequested: model,
+		RequestType:    requestType,
 	}
core/providers/gemini/errors.go (1)

38-79: Addresses past review feedback; consider optimizing double-parse.

The function correctly handles both array and single error formats. Past review comments have been addressed:

  • Trailing newline is now trimmed (line 55)
  • Code is extracted from the first error in array responses (lines 60-62)

However, the function calls HandleProviderAPIError twice (lines 42 and 70), parsing the response body each time. This works because fasthttp.Response.Body() returns the same buffer on repeated calls, but it's inefficient as it parses JSON twice when the array parse fails.

Consider parsing once and detecting the format:

 func parseGeminiError(resp *fasthttp.Response) *schemas.BifrostError {
-	// Try to parse as []GeminiGenerationError
-	var errorResps []GeminiGenerationError
-	bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResps)
-	if len(errorResps) > 0 {
+	statusCode := resp.StatusCode()
+	body := append([]byte(nil), resp.Body()...)
+	
+	// Try array format first
+	var errorResps []GeminiGenerationError
+	if err := sonic.Unmarshal(body, &errorResps); err == nil && len(errorResps) > 0 {
+		bifrostErr := &schemas.BifrostError{
+			IsBifrostError: false,
+			StatusCode:     &statusCode,
+			Error:          &schemas.ErrorField{},
+		}
 		var message string
 		// ... rest of array handling
 		return bifrostErr
 	}
 
-	// Try to parse as GeminiGenerationError
+	// Try single object format
 	var errorResp GeminiGenerationError
-	bifrostErr = providerUtils.HandleProviderAPIError(resp, &errorResp)
+	bifrostErr := &schemas.BifrostError{
+		IsBifrostError: false,
+		StatusCode:     &statusCode,
+		Error:          &schemas.ErrorField{},
+	}
+	if err := sonic.Unmarshal(body, &errorResp); err != nil {
+		bifrostErr.Error.Message = fmt.Sprintf("provider API error: %s", string(body))
+		return bifrostErr
+	}
 	if errorResp.Error != nil {
 		// ... rest of single handling
 	}
 	return bifrostErr
 }

This avoids parsing the body twice and provides clearer control flow.

core/providers/gemini/types.go (1)

1352-1357: Consider extracting the anonymous struct for better maintainability.

The FieldViolations field uses an anonymous struct, which reduces code reusability and clarity. Extracting it to a named type would improve maintainability.

Apply this diff to extract the anonymous struct:

+type GeminiFieldViolation struct {
+	Description string `json:"description"`
+}
+
 type GeminiGenerationErrorDetails struct {
 	Type            string `json:"@type"`
-	FieldViolations []struct {
-		Description string `json:"description"`
-	} `json:"fieldViolations"`
+	FieldViolations []GeminiFieldViolation `json:"fieldViolations"`
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 14a7a70 and be363a1.

📒 Files selected for processing (15)
  • core/providers/anthropic/anthropic.go (4 hunks)
  • core/providers/anthropic/errors.go (1 hunks)
  • core/providers/anthropic/types.go (1 hunks)
  • core/providers/cohere/cohere.go (4 hunks)
  • core/providers/cohere/errors.go (1 hunks)
  • core/providers/elevenlabs/elevenlabs.go (4 hunks)
  • core/providers/elevenlabs/errors.go (2 hunks)
  • core/providers/elevenlabs/types.go (1 hunks)
  • core/providers/gemini/batch_test.go (0 hunks)
  • core/providers/gemini/errors.go (2 hunks)
  • core/providers/gemini/gemini.go (5 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/openai/errors.go (1 hunks)
  • core/providers/openai/openai.go (5 hunks)
  • core/providers/utils/utils.go (1 hunks)
💤 Files with no reviewable changes (1)
  • core/providers/gemini/batch_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • core/providers/cohere/cohere.go
  • core/providers/anthropic/errors.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/gemini/gemini.go
  • core/providers/cohere/errors.go
  • core/providers/openai/errors.go
  • core/providers/elevenlabs/errors.go
  • core/providers/elevenlabs/types.go
  • core/providers/anthropic/anthropic.go
  • core/providers/gemini/errors.go
  • core/providers/openai/openai.go
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/gemini/types.go
  • core/providers/anthropic/types.go
  • core/providers/utils/utils.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/providers/gemini/gemini.go
  • core/providers/cohere/errors.go
  • core/providers/openai/errors.go
  • core/providers/elevenlabs/errors.go
  • core/providers/elevenlabs/types.go
  • core/providers/anthropic/anthropic.go
  • core/providers/gemini/errors.go
  • core/providers/openai/openai.go
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/gemini/types.go
  • core/providers/anthropic/types.go
  • core/providers/utils/utils.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.

Applied to files:

  • core/providers/openai/errors.go
  • core/providers/openai/openai.go
🧬 Code graph analysis (5)
core/providers/cohere/errors.go (2)
core/providers/cohere/types.go (1)
  • CohereError (543-547)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/openai/errors.go (1)
core/schemas/bifrost.go (1)
  • BifrostErrorExtraFields (421-425)
core/providers/elevenlabs/errors.go (2)
core/providers/elevenlabs/types.go (1)
  • ElevenlabsError (157-159)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/providers/gemini/errors.go (4)
core/providers/gemini/types.go (2)
  • GeminiGenerationError (1341-1343)
  • GeminiGenerationErrorStruct (1345-1350)
core/providers/utils/utils.go (1)
  • HandleProviderAPIError (317-338)
core/schemas/utils.go (1)
  • Ptr (16-18)
core/utils.go (1)
  • Ptr (51-53)
core/providers/openai/openai.go (2)
core/providers/openai/errors.go (1)
  • ParseOpenAIError (10-42)
core/schemas/bifrost.go (5)
  • TextCompletionStreamRequest (88-88)
  • ChatCompletionStreamRequest (90-90)
  • ResponsesStreamRequest (92-92)
  • SpeechStreamRequest (95-95)
  • TranscriptionStreamRequest (97-97)
🔇 Additional comments (18)
core/providers/utils/utils.go (1)

319-322: LGTM! Defensive copy prevents use-after-free.

The append([]byte(nil), resp.Body()...) idiom correctly creates a copy of the response body before unmarshalling. This is important because fasthttp may reuse the underlying buffer, and the copy ensures the body data remains valid for both unmarshalling and error message construction.
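
For reference, a minimal sketch of the idiom, under stated assumptions: the package and helper names are illustrative, and this is not the actual HandleProviderAPIError body, only the copy-then-unmarshal step it now performs:

```go
package providerutils

import (
	"fmt"

	"github.com/bytedance/sonic"
	"github.com/valyala/fasthttp"
)

// decodeErrorBody copies the response body before unmarshalling so the bytes
// stay valid even if fasthttp later reuses the underlying buffer.
func decodeErrorBody(resp *fasthttp.Response, out any) error {
	body := append([]byte(nil), resp.Body()...) // defensive copy
	if err := sonic.Unmarshal(body, out); err != nil {
		return fmt.Errorf("provider API error (status %d): %s", resp.StatusCode(), string(body))
	}
	return nil
}
```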

core/providers/anthropic/types.go (1)

389-392: LGTM! Pointer type enables proper nil handling.

Converting Error to a pointer type with omitempty is appropriate for optional error details. This aligns with the centralized error parsing in parseAnthropicError which must now check for nil before accessing Error.Type or Error.Message.
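
Concretely, the guard this implies inside parseAnthropicError looks roughly like the fragment below; the surrounding function body is assumed, and only the Message assignment is shown:

```go
// Fragment, not the full parser: errorResp.Error is now a pointer and may be nil.
var errorResp AnthropicError
bifrostErr := providerUtils.HandleProviderAPIError(resp, &errorResp)
if errorResp.Error != nil {
	bifrostErr.Error.Message = errorResp.Error.Message
}
return bifrostErr
```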

core/providers/anthropic/anthropic.go (4)

151-154: LGTM! Centralized error parsing.

The refactor correctly delegates error handling to parseAnthropicError, which aligns with the PR's goal of centralizing error parsing logic across providers.


196-198: LGTM! Consistent error handling pattern.


449-452: LGTM! Streaming error path now uses centralized parser.


797-800: LGTM! Responses stream error handling consolidated.

core/providers/elevenlabs/elevenlabs.go (3)

96-98: LGTM! Simplified error parsing call.


227-230: LGTM! Consistent with centralized error handling.


484-487: LGTM! Error handling aligned with other providers.

core/providers/gemini/gemini.go (5)

98-100: LGTM - Error handling centralized.

The change to use parseGeminiError(resp) aligns with the PR's objective to extract error parsing into dedicated files.


156-158: LGTM - Consistent error handling pattern.

Same centralized error parsing approach applied to the list models endpoint.


250-253: LGTM - ChatCompletion error path updated.

Error handling now routes through the centralized parseGeminiError.


477-480: LGTM - SpeechStream error handling consolidated.

Streaming error path now uses the same centralized parser.


725-728: LGTM - TranscriptionStream follows the same pattern.

All Gemini error handling paths are now consistent.

core/providers/elevenlabs/types.go (1)

157-159: LGTM - Optional detail field is appropriate.

Making Detail a pointer with omitempty correctly handles cases where the ElevenLabs API returns errors without detail information. The existing UnmarshalJSON method on *ElevenlabsErrorDetail continues to work correctly.

core/providers/elevenlabs/errors.go (1)

12-88: LGTM - Centralized error parsing for ElevenLabs.

The refactored function correctly handles both validation errors (array format) and non-validation errors (single object format). The early return for validation errors with a complete BifrostError is appropriate.

One observation: if errorResp.Detail is non-nil but its ValidationErrors slice is empty and its Message is nil, the function returns the base bifrostErr from HandleProviderAPIError, potentially with an empty ErrorField. This is acceptable because the status code is still preserved.

core/providers/gemini/errors.go (1)

12-36: LGTM - ToGeminiError conversion function.

Correctly derives a GeminiGenerationError from a BifrostError, handling nil cases and extracting relevant fields.
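
A hedged sketch of what such a conversion can look like is below; the GeminiGenerationErrorStruct field names (Code, Message) and their types are assumptions based on Gemini's usual error envelope, not taken from this PR, and the helper name is deliberately different from the real ToGeminiError:

```go
// toGeminiErrorSketch mirrors the described behavior: handle nil input and
// copy over the message and status code when present.
func toGeminiErrorSketch(bifrostErr *schemas.BifrostError) *GeminiGenerationError {
	if bifrostErr == nil {
		return nil
	}
	out := &GeminiGenerationError{Error: &GeminiGenerationErrorStruct{}}
	if bifrostErr.Error != nil {
		out.Error.Message = bifrostErr.Error.Message // assumed field
	}
	if bifrostErr.StatusCode != nil {
		out.Error.Code = *bifrostErr.StatusCode // assumed field and type
	}
	return out
}
```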

core/providers/openai/openai.go (1)

408-408: LGTM! Centralized error parsing is consistently applied.

All streaming endpoints now route error handling through ParseOpenAIError, which properly populates error metadata including provider, model, and request type. The refactoring reduces code duplication and ensures consistent error handling across all OpenAI streaming operations.

Also applies to: 815-815, 1256-1256, 1649-1649, 1923-1923

Collaborator Author

Pratham-Mishra04 commented Dec 12, 2025

Merge activity

  • Dec 12, 1:21 PM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Dec 12, 1:22 PM UTC: @Pratham-Mishra04 merged this pull request with Graphite.

@Pratham-Mishra04 Pratham-Mishra04 merged commit 812b5db into main Dec 12, 2025
8 checks passed
@Pratham-Mishra04 Pratham-Mishra04 deleted the 12-01-refactor_core_providers_error_handling_standardization branch December 12, 2025 13:22