
Conversation


@akshaydeo akshaydeo commented Dec 4, 2025

Summary

Add batch delete functionality to the Bifrost API, allowing users to delete batch jobs. The implementation focuses on the Gemini provider and adds the necessary interface plumbing (unsupported-operation stubs) for the other providers.

Changes

  • Added BatchDeleteRequest method to the Bifrost core
  • Implemented batch delete functionality for the Gemini provider
  • Added BatchDelete interface method to all providers (with unsupported operation responses for non-Gemini providers)
  • Added new request/response schemas for batch deletion
  • Enhanced the GenAI router to support batch delete operations
  • Improved file upload handling for the Gemini provider, including support for resumable uploads

Type of change

  • Feature
  • Refactor

Affected areas

  • Core (Go)
  • Transports (HTTP)
  • Providers/Integrations

How to test

Test batch deletion with the Gemini provider:

# Create a batch job
curl -X POST "http://localhost:8080/genai/v1beta/models/gemini-1.5-pro:batchGenerateContent" \
  -H "Content-Type: application/json" \
  -d '{"model":"gemini-1.5-pro", "src":[{"contents":[{"parts":[{"text":"Hello"}]}]}]}'

# Delete the batch job (replace BATCH_ID with the ID from the create response)
curl -X DELETE "http://localhost:8080/genai/v1beta/batches/BATCH_ID" \
  -H "Content-Type: application/json"

Breaking changes

  • No

Related issues

Implements batch deletion functionality for better resource management.

Security considerations

No additional security implications beyond existing authentication mechanisms.

Checklist

  • I added/updated tests where appropriate
  • I verified builds succeed (Go and UI)

@akshaydeo akshaydeo mentioned this pull request Dec 4, 2025

akshaydeo commented Dec 4, 2025

This stack of pull requests is managed by Graphite. Learn more about stacking.


coderabbitai bot commented Dec 4, 2025

📝 Walkthrough

Summary by CodeRabbit

  • New Features
    • Added batch delete operation across supported providers
    • Implemented resumable file upload capability with session management
    • Extended Gemini batch operations with create, list, retrieve, cancel, and delete endpoints
    • Added batch job status mapping and response converters for Gemini integration


Walkthrough

Adds batch delete operation support across the Bifrost platform. Introduces BatchDeleteRequest type and Provider interface method. Implements full batch delete for Gemini provider with DELETE request construction. Marks operation as unsupported for remaining providers. Adds SDK response converters for batch operations and resumable file upload subsystem with session management in HTTP transport layer.

Changes

Cohort / File(s) / Summary:

  • Core Request Type & Schema Definitions (core/schemas/bifrost.go, core/schemas/provider.go): Introduces the BatchDeleteRequest constant and adds a BatchDelete field to AllowedRequests. Updates IsOperationAllowed to handle batch delete requests and extends the Provider interface with the new BatchDelete method signature.
  • Bifrost Core Orchestration (core/bifrost.go): Implements the BatchDeleteRequest method, which validates requests, resolves provider context, determines the base provider type, conditionally gathers authentication keys, executes deletion via retry logic, and augments errors with request metadata.
  • Gemini Provider, Full Implementation (core/providers/gemini/gemini.go): Implements full BatchDelete functionality for the Gemini provider with DELETE request construction, API key attachment, latency capture, error handling via parseGeminiError, and response population with deletion status.
  • Gemini Provider, SDK Converters & Types (core/providers/gemini/batch.go, core/providers/gemini/types.go): Adds comprehensive SDK response converters (ToGeminiJobState, ToGeminiBatchJobResponse, ToGeminiBatchRetrieveResponse, ToGeminiBatchListResponse, ToGeminiBatchCancelResponse). Introduces Gemini SDK constants (job states) and types for batch operations. Updates GeminiFileUploadRequest with MimeType and ResumableSessionID fields.
  • Unsupported Batch Delete Stubs (core/providers/anthropic/batch.go, core/providers/mistral/batch.go, core/providers/parasail/batch.go, core/providers/sgl/batch.go, core/providers/cerebras/cerebras.go, core/providers/cohere/cohere.go, core/providers/ollama/ollama.go, core/providers/openrouter/openrouter.go): Adds a BatchDelete method to each provider returning an unsupported-operation error; all follow a consistent signature and error-handling pattern.
  • HTTP Transport, Resumable Upload & Batch Routing (transports/bifrost-http/integrations/genai.go): Introduces a resumable file upload subsystem with in-memory session storage, automatic cleanup, phase-1 metadata exchange, and phase-2 content submission. Adds batch-oriented routing for Gemini GenAI operations (create/list/retrieve/cancel/delete) with session-based state propagation and context carrying for provider/filename metadata.
  • HTTP Transport, Router & Integration (transports/bifrost-http/integrations/router.go): Extends BatchRequest with a DeleteRequest field. Adds the BatchDeleteResponseConverter type and updates RouteConfig to support batch delete response conversion. Implements handleBatchRequest logic for delete operations and adds early-return handling for resumable upload initialization errors.
  • HTTP Transport, Utilities (transports/bifrost-http/integrations/utils.go): Adds Content-Length header setting in the sendSuccess response and corresponding debug logging for response metadata.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Areas requiring extra attention:

  • core/providers/gemini/gemini.go: Full batch delete implementation including URL construction for both full resource names and short IDs, error parsing, and latency instrumentation
  • transports/bifrost-http/integrations/genai.go: Complex resumable upload subsystem with session lifecycle management, cleanup logic, phase-1/phase-2 request/response flows, and multi-endpoint coordination
  • transports/bifrost-http/integrations/router.go: Batch delete request handling integration, error propagation for resumable uploads, and converter application logic
  • core/providers/gemini/types.go & batch.go: SDK type definitions and converter implementations for Gemini batch operations with proper nil-safety and metadata population
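
For the URL-construction concern in the first bullet, a sketch of handling both full resource names and short IDs. The helper name is hypothetical; the path shape follows the public Gemini REST API, with the API key normally attached via the x-goog-api-key header rather than the URL:

```go
package main

import (
	"fmt"
	"strings"
)

// buildGeminiBatchDeleteURL accepts either a full resource name
// ("batches/<id>") or a bare ID and produces the DELETE endpoint URL.
func buildGeminiBatchDeleteURL(base, batchID string) string {
	name := batchID
	if !strings.HasPrefix(name, "batches/") {
		name = "batches/" + name
	}
	return fmt.Sprintf("%s/v1beta/%s", base, name)
}

func main() {
	base := "https://generativelanguage.googleapis.com"
	fmt.Println(buildGeminiBatchDeleteURL(base, "abc123"))
	fmt.Println(buildGeminiBatchDeleteURL(base, "batches/abc123"))
}
```

Both calls normalize to the same URL, which is the behavior the review asks to verify.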

Poem

🐰 Batch deletion hops along,
Gemini sings the implementation song,
While others politely decline the call,
Resumable uploads upload it all!
New sessions bloom and cleanup sweeps—
The platform grows while the rabbit leaps! 🌱✨

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
  • Title check (❓ Inconclusive): The title is vague and does not clearly summarize the main change; it uses the generic '[DO NOT MERGE]' prefix and a developer-facing branch name rather than describing the feature being added. Resolution: update the title to clearly describe the main feature (e.g., 'Add batch delete functionality to Bifrost' or 'Implement batch delete for Gemini provider').
✅ Passed checks (2 passed)
  • Description check (✅ Passed): The PR description covers the main sections of the template (Summary, Changes, Type of change, Affected areas, How to test, Breaking changes), though it is light on testing detail, and the security considerations could be more thorough.
  • Docstring Coverage (✅ Passed): Docstring coverage is 96.88%, above the required 80.00% threshold.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7cb3901 and 4f5a21c.

📒 Files selected for processing (17)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/cerebras/cerebras.go (1 hunks)
  • core/providers/cohere/cohere.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/gemini.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/ollama.go (1 hunks)
  • core/providers/openrouter/openrouter.go (1 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (13 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • core/providers/openrouter/openrouter.go
  • core/providers/cerebras/cerebras.go
  • core/schemas/bifrost.go
  • core/providers/mistral/batch.go
  • core/providers/ollama/ollama.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • transports/bifrost-http/integrations/utils.go
  • core/providers/gemini/batch.go
  • core/providers/sgl/batch.go
  • core/providers/cohere/cohere.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/gemini.go
  • core/providers/parasail/batch.go
  • core/providers/anthropic/batch.go
  • core/schemas/provider.go
  • transports/bifrost-http/integrations/router.go
  • core/bifrost.go
  • core/providers/gemini/types.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • transports/bifrost-http/integrations/utils.go
  • core/providers/gemini/batch.go
  • core/providers/sgl/batch.go
  • core/providers/cohere/cohere.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/gemini.go
  • core/providers/parasail/batch.go
  • core/providers/anthropic/batch.go
  • core/schemas/provider.go
  • transports/bifrost-http/integrations/router.go
  • core/bifrost.go
  • core/providers/gemini/types.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.

Applied to files:

  • transports/bifrost-http/integrations/utils.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/router.go
🧬 Code graph analysis (11)
transports/bifrost-http/integrations/utils.go (1)
plugins/mocker/main.go (1)
  • Response (97-103)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (13)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1627-1627)
  • GeminiJobStateRunning (1628-1628)
  • GeminiJobStateSucceeded (1629-1629)
  • GeminiJobStateFailed (1630-1630)
  • GeminiJobStateCancelling (1631-1631)
  • GeminiJobStateCancelled (1632-1632)
  • GeminiBatchJobResponseSDK (1663-1669)
  • GeminiBatchMetadata (1544-1557)
  • GeminiBatchStats (1478-1482)
  • GeminiBatchDest (1578-1581)
  • GeminiBatchListResponseSDK (1672-1675)
core/providers/sgl/batch.go (4)
core/providers/sgl/sgl.go (1)
  • SGLProvider (18-24)
core/schemas/account.go (1)
  • Key (8-18)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/cohere/cohere.go (2)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
transports/bifrost-http/integrations/genai.go (3)
core/schemas/provider.go (1)
  • Provider (315-364)
core/providers/gemini/types.go (2)
  • GeminiFileUploadRequest (1701-1707)
  • Type (781-781)
core/schemas/batch.go (9)
  • BifrostBatchCreateRequest (65-82)
  • BatchRequestItem (31-37)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchListRequest (112-126)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchRetrieveRequest (143-149)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchCancelRequest (190-196)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/gemini.go (2)
core/schemas/bifrost.go (4)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
  • BifrostResponseExtraFields (299-310)
  • RequestType (83-83)
core/providers/utils/utils.go (4)
  • CheckOperationAllowed (472-484)
  • NewBifrostOperationError (563-574)
  • SetExtraHeaders (179-209)
  • MakeRequestWithContext (40-94)
core/providers/parasail/batch.go (3)
core/providers/parasail/parasail.go (1)
  • ParasailProvider (17-23)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/anthropic/batch.go (3)
core/providers/anthropic/anthropic.go (1)
  • AnthropicProvider (25-33)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/schemas/provider.go (1)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
transports/bifrost-http/integrations/router.go (3)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (37-37)
core/schemas/bifrost.go (2)
  • BatchDeleteRequest (103-103)
  • FileUploadRequest (104-104)
core/schemas/provider.go (1)
  • Provider (315-364)
core/bifrost.go (4)
core/schemas/bifrost.go (5)
  • BatchDeleteRequest (103-103)
  • BifrostError (370-379)
  • ErrorField (388-395)
  • BifrostErrorExtraFields (437-441)
  • RequestType (83-83)
core/schemas/provider.go (2)
  • Provider (315-364)
  • CustomProviderConfig (249-255)
ui/lib/types/config.ts (2)
  • CustomProviderConfig (165-170)
  • RequestType (124-146)
core/schemas/account.go (1)
  • Key (8-18)
🔇 Additional comments (11)
core/providers/parasail/batch.go (1)

35-38: ParasailProvider.BatchDelete stub matches existing unsupported pattern.

The new method cleanly returns NewUnsupportedOperationError for BatchDeleteRequest, consistent with other Parasail batch operations. No issues from this change.

core/schemas/provider.go (1)

183-189: BatchDelete support is wired correctly into schemas and Provider, but verify external implementors.

The additions of AllowedRequests.BatchDelete, the BatchDeleteRequest branch in IsOperationAllowed, and the Provider.BatchDelete method keep batch operation gating consistent and give Bifrost a uniform delete surface. Behavior-wise this looks correct.

Note that extending the Provider interface is a source/binary‑breaking change for any out‑of‑tree provider implementations; ensure any custom providers are updated (at least with an “unsupported operation” stub) before rolling this out.

Also applies to: 232-233, 352-353
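
A toy illustration of the gating pattern described here, with assumed field and method names:

```go
package main

import "fmt"

// AllowedRequests sketches the allow-list extended in this PR.
type AllowedRequests struct {
	BatchCreate bool
	BatchCancel bool
	BatchDelete bool
}

// IsOperationAllowed maps a request type onto the corresponding flag,
// denying anything it does not recognize.
func (a AllowedRequests) IsOperationAllowed(requestType string) bool {
	switch requestType {
	case "batch_create":
		return a.BatchCreate
	case "batch_cancel":
		return a.BatchCancel
	case "batch_delete":
		return a.BatchDelete
	default:
		return false
	}
}

func main() {
	allowed := AllowedRequests{BatchCreate: true, BatchDelete: true}
	fmt.Println(allowed.IsOperationAllowed("batch_delete")) // true
	fmt.Println(allowed.IsOperationAllowed("batch_cancel")) // false
}
```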

transports/bifrost-http/integrations/router.go (1)

80-89: Batch delete wiring in router looks consistent.

BatchRequest.DeleteRequest, BatchDeleteResponseConverter, the new RouteConfig field, and the schemas.BatchDeleteRequest case in handleBatchRequest are all aligned and match the existing batch operation patterns.

Also applies to: 159-162, 268-292, 707-708, 825-845

core/providers/gemini/batch.go (2)

247-269: Status→job‑state mapping looks good.

ToGeminiJobState cleanly maps Bifrost batch statuses into the Gemini job state enums and falls back to JOB_STATE_PENDING for unknown statuses; no issues here.


375-385: Cancel converter is simple and consistent.

ToGeminiBatchCancelResponse just mirrors ID and normalized state, which is consistent with other converters; no further changes needed.

transports/bifrost-http/integrations/genai.go (3)

786-871: Upload parsers are structured well; only minor polish needed.

The split between resumable (parseGeminiResumableUpload + parseGeminiMultipartRelated) vs. standard multipart (parseGeminiMultipartUpload) vs. raw (parseGeminiRawUpload) is clear and consistent. Beyond the separate issues already called out (debug prints, purpose handling), the parsing logic itself is reasonable and matches the GenAI/Gemini request shapes.

Also applies to: 942-988, 990-1015


669-691: Batch PreCallbacks for provider/model/batchID extraction look solid.

The helpers to extract provider from headers, model from URL, and batch IDs from path (including normalizing batches/ prefix and :batchGenerateContent / :cancel suffixes) are consistent and defensive.

Also applies to: 692-716, 718-750, 752-784


1175-1199: Gemini file list/retrieve/delete PreCallbacks are consistent and safe.

Provider extraction and file_id parsing/validation for list/retrieve/delete endpoints are straightforward and align with the rest of the GenAI integration.

Also applies to: 1201-1225, 1227-1250

core/providers/gemini/types.go (3)

1623-1634: LGTM! Job state constants are well-defined.

The SDK job state constants are properly defined and clearly distinguished from the internal batch states. The naming convention and values align with the Google GenAI SDK expectations.


1639-1647: LGTM! SDK batch types are well-structured.

The GeminiBatchCreateRequestSDK and other SDK types are properly defined. The Src field using interface{} is appropriate for handling both file references (string) and inline requests (array), though validation of this field should occur in the transport layer (already flagged in past reviews).
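
A sketch of what that transport-layer validation of the interface{} Src field could look like (the function name is hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// validateSrc accepts either a file reference (string) or inline
// requests (slice), rejecting empty values and unexpected types.
func validateSrc(src interface{}) error {
	switch v := src.(type) {
	case string:
		if v == "" {
			return errors.New("src: empty file reference")
		}
		return nil
	case []interface{}:
		if len(v) == 0 {
			return errors.New("src: empty inline request list")
		}
		return nil
	default:
		return fmt.Errorf("src: unsupported type %T", src)
	}
}

func main() {
	fmt.Println(validateSrc("files/input-1"))                 // <nil>
	fmt.Println(validateSrc([]interface{}{map[string]any{}})) // <nil>
	fmt.Println(validateSrc(42))
}
```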


1702-1707: LGTM! File upload request structure is properly updated.

The GeminiFileUploadRequest changes are well-defined:

  • MimeType field appropriately captures the file's MIME type
  • ResumableSessionID is correctly marked as internal-only with json:"-"
  • Comments clearly document each field's purpose

The past review comments flag concerns about MimeType extraction and forwarding in the transport layer parsers, which is the correct place to address those issues.



@akshaydeo akshaydeo marked this pull request as ready for review December 4, 2025 13:55
@akshaydeo akshaydeo changed the title gemini-sdk-batch-support [DO NOT MERGE] gemini-sdk-batch-support Dec 4, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (7)
transports/bifrost-http/integrations/utils.go (1)

194-194: Redundant Content-Length header setting.

The fasthttp library automatically sets the Content-Length header when SetBody() is called. Explicitly setting it here is unnecessary.

Apply this diff to remove the redundant header:

-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
 	ctx.SetBody(responseBody)
core/providers/gemini/batch.go (3)

870-896: Consider defensive handling for batch stats calculations.

The calculations for PendingRequestCount and SuccessfulRequestCount (lines 889-890) could produce negative values if the underlying data is inconsistent (e.g., Completed > Total or Failed > Completed).

 			BatchStats: &GeminiBatchStats{
 				RequestCount:           resp.RequestCounts.Total,
-				PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-				SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
+				PendingRequestCount:    max(0, resp.RequestCounts.Total - resp.RequestCounts.Completed),
+				SuccessfulRequestCount: max(0, resp.RequestCounts.Completed - resp.RequestCounts.Failed),
 			},

(Note: the built-in max for integers requires Go 1.21 or later.)

898-933: Same defensive handling consideration applies here.

The PendingRequestCount and SuccessfulRequestCount calculations at lines 916-917 have the same potential for negative values as noted above.


935-972: Same defensive handling consideration for list response conversion.

Lines 955-956 have the same calculation pattern that could produce negative values.

transports/bifrost-http/integrations/genai.go (3)

52-67: Consider adding graceful shutdown for the cleanup goroutine.

The goroutine started in init() runs forever with no way to stop it. While this works for long-running services, it could cause issues in tests or if the package is used in a context where cleanup is expected. Consider exposing a shutdown mechanism or using context cancellation.


217-237: Consider extracting duplicated resumable upload route configuration.

The POST and PUT routes for /upload/v1beta/files/resumable/{session_id} share nearly identical configuration. Consider extracting the common logic to reduce duplication.

// Example refactor:
func createResumableUploadRouteConfig(method string, pathPrefix string) RouteConfig {
    return RouteConfig{
        Type:   RouteConfigTypeGenAI,
        Path:   pathPrefix + "/upload/v1beta/files/resumable/{session_id}",
        Method: method,
        // ... shared configuration
    }
}

Also applies to: 273-293


746-778: Minor: extractGeminiBatchIDFromPathCancel duplicates logic from extractGeminiBatchIDFromPath.

The only difference is the strings.TrimSuffix(batchIDStr, ":cancel") line. Consider consolidating into a single function with a parameter, or rely on the router's path matching to exclude the :cancel suffix.
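
A possible consolidation along those lines (names and parameters are suggestions):

```go
package main

import (
	"fmt"
	"strings"
)

// extractGeminiBatchID normalizes a path segment into a bare batch ID,
// trimming an optional "batches/" prefix and an optional action suffix
// such as "cancel", so one helper serves both call sites.
func extractGeminiBatchID(segment, action string) string {
	id := strings.TrimPrefix(segment, "batches/")
	if action != "" {
		id = strings.TrimSuffix(id, ":"+action)
	}
	return id
}

func main() {
	fmt.Println(extractGeminiBatchID("batches/abc123:cancel", "cancel")) // abc123
	fmt.Println(extractGeminiBatchID("abc123", ""))                      // abc123
}
```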

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6a51a9f and 5a7e7a2.

📒 Files selected for processing (25)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/bedrock/batch.go (1 hunks)
  • core/providers/cerebras/batch.go (1 hunks)
  • core/providers/cohere/batch.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/files.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/batch.go (1 hunks)
  • core/providers/openai/batch.go (1 hunks)
  • core/providers/openrouter/batch.go (1 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (14 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/openrouter/batch.go
  • core/schemas/bifrost.go
  • core/providers/anthropic/batch.go
  • core/providers/mistral/batch.go
  • core/providers/perplexity/batch.go
  • core/providers/openai/batch.go
  • core/providers/cohere/batch.go
  • core/providers/cerebras/batch.go
  • core/providers/ollama/batch.go
  • core/schemas/batch.go
  • core/providers/groq/batch.go
  • core/bifrost.go
  • core/providers/sgl/batch.go
  • core/providers/elevenlabs/batch.go
  • core/providers/gemini/batch.go
  • transports/bifrost-http/integrations/utils.go
  • core/providers/azure/batch.go
  • core/providers/parasail/batch.go
  • core/providers/vertex/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/schemas/provider.go
  • core/providers/bedrock/batch.go
  • core/providers/gemini/files.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/types.go
🧬 Code graph analysis (17)
core/providers/openrouter/batch.go (5)
core/providers/openrouter/openrouter.go (1)
  • OpenRouterProvider (18-23)
core/schemas/account.go (1)
  • Key (8-17)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/schemas/bifrost.go (1)
ui/lib/types/config.ts (1)
  • RequestType (122-134)
core/providers/anthropic/batch.go (4)
core/providers/anthropic/anthropic.go (1)
  • AnthropicProvider (21-28)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/mistral/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/openai/batch.go (4)
core/providers/openai/openai.go (1)
  • OpenAIProvider (24-30)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/cohere/batch.go (4)
core/providers/cohere/cohere.go (1)
  • CohereProvider (67-73)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/cerebras/batch.go (4)
core/providers/cerebras/cerebras.go (1)
  • CerebrasProvider (16-21)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/ollama/batch.go (4)
core/providers/ollama/ollama.go (1)
  • OllamaProvider (18-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/schemas/batch.go (2)
core/schemas/provider.go (1)
  • Provider (313-362)
core/schemas/bifrost.go (2)
  • ModelProvider (32-32)
  • BifrostResponseExtraFields (295-304)
core/providers/groq/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/sgl/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/azure/batch.go (5)
core/providers/azure/azure.go (1)
  • AzureProvider (23-28)
core/schemas/account.go (1)
  • Key (8-17)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/parasail/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/vertex/batch.go (4)
core/providers/vertex/vertex.go (1)
  • VertexProvider (57-62)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/bedrock/batch.go (4)
core/providers/bedrock/bedrock.go (1)
  • BedrockProvider (29-35)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/gemini/files.go (1)
core/schemas/files.go (1)
  • BifrostFileUploadResponse (89-105)
⏰ Context from checks skipped due to the 900000ms timeout; you can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: Graphite / mergeability_check (×12)
🔇 Additional comments (35)
core/providers/sgl/batch.go (1)

35-38: LGTM!

The BatchDelete method correctly follows the established pattern for unsupported operations in the SGL provider, consistent with other batch methods in this file.

core/providers/groq/batch.go (1)

35-38: LGTM!

The BatchDelete method correctly follows the established pattern for unsupported operations in the Groq provider, consistent with other batch methods in this file.

core/providers/openrouter/batch.go (1)

35-38: LGTM!

The BatchDelete method correctly follows the established pattern for unsupported operations in the OpenRouter provider, consistent with other batch methods in this file.

core/providers/vertex/batch.go (1)

35-38: LGTM!

The BatchDelete method correctly follows the established pattern for unsupported operations in the Vertex AI provider, consistent with other batch methods in this file.

core/providers/azure/batch.go (1)

438-441: LGTM!

The BatchDelete method correctly follows the established pattern for unsupported operations in the Azure provider, consistent with other batch methods across all providers.

core/providers/anthropic/batch.go (1)

783-786: LGTM!

The BatchDelete method correctly follows the established pattern for unsupported operations in the Anthropic provider, consistent with other batch methods across all providers.

core/providers/mistral/batch.go (1)

35-38: LGTM!

The BatchDelete method correctly follows the established pattern for unsupported operations in the Mistral provider, consistent with other batch methods in this file.

core/providers/elevenlabs/batch.go (1)

35-38: LGTM! Consistent implementation.

The BatchDelete method correctly follows the established pattern for unsupported batch operations in the Elevenlabs provider, consistent with the other Batch* methods in this file.

core/providers/cohere/batch.go (1)

35-38: LGTM! Proper unsupported operation handling.

The BatchDelete implementation correctly returns an unsupported operation error, maintaining consistency with other batch operations in the Cohere provider.

core/schemas/bifrost.go (1)

103-103: LGTM! RequestType constant properly added.

The BatchDeleteRequest constant is correctly placed within the batch operation constants group and follows the established naming convention.

core/providers/bedrock/batch.go (1)

931-934: LGTM! Consistent with Bedrock batch API pattern.

The BatchDelete method correctly returns an unsupported operation error, matching the pattern of other unsupported batch operations in the Bedrock provider.

core/providers/cerebras/batch.go (1)

35-38: LGTM! Follows established pattern.

The BatchDelete implementation is correct and consistent with other unsupported batch operations in the Cerebras provider.

core/providers/openai/batch.go (1)

597-600: LGTM! Proper unsupported operation.

The BatchDelete method correctly signals that this operation is not supported by OpenAI provider, following the same pattern as other providers.

core/providers/ollama/batch.go (1)

35-38: LGTM! Completes the provider interface.

The BatchDelete implementation correctly returns an unsupported operation error, maintaining consistency with the Ollama provider's other batch operations.

core/schemas/provider.go (3)

182-182: LGTM! AllowedRequests field added correctly.

The BatchDelete field is properly added to the AllowedRequests struct, maintaining consistency with other batch operation flags.


231-232: LGTM! Permission check implemented correctly.

The BatchDeleteRequest case is properly handled in IsOperationAllowed, returning the ar.BatchDelete flag value as expected.


350-351: LGTM! Provider interface extended properly.

The BatchDelete method is correctly added to the Provider interface with the appropriate signature, matching the pattern of other batch operations. All providers in this PR implement this interface method.

core/providers/perplexity/batch.go (1)

35-39: LGTM!

The BatchDelete implementation follows the established pattern for unsupported operations in this provider, consistent with the other Batch* methods above.

core/providers/parasail/batch.go (1)

35-39: LGTM!

The BatchDelete implementation is consistent with the existing unsupported batch operations pattern.

core/bifrost.go (1)

1133-1209: LGTM!

The BatchDeleteRequest method follows the established pattern from other batch operations (BatchRetrieveRequest, BatchCancelRequest, BatchResultsRequest) with proper validation, provider resolution, key selection, and retry logic.

core/schemas/batch.go (1)

265-281: LGTM!

The BifrostBatchDeleteRequest and BifrostBatchDeleteResponse types follow the established patterns from other batch operations. The response structure with ID, Object, and Deleted fields aligns with standard delete response semantics.

core/providers/gemini/batch.go (3)

846-868: LGTM!

The ToGeminiJobState function provides a clear mapping from Bifrost batch statuses to Gemini SDK job states, with sensible defaults for edge cases.
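The mapping can be pictured as a single switch with a pending default. The state strings below follow the Google GenAI SDK naming; the exact set of Bifrost statuses used here is an assumption for illustration, not the real ToGeminiJobState source.

```go
package main

import "fmt"

// toGeminiJobState sketches the Bifrost-status → Gemini SDK job-state
// mapping, with a sensible default for unknown statuses.
func toGeminiJobState(status string) string {
	switch status {
	case "validating", "in_progress", "finalizing":
		return "JOB_STATE_RUNNING"
	case "completed":
		return "JOB_STATE_SUCCEEDED"
	case "failed", "expired":
		return "JOB_STATE_FAILED"
	case "cancelling", "cancelled":
		return "JOB_STATE_CANCELLED"
	default:
		return "JOB_STATE_PENDING"
	}
}

func main() {
	fmt.Println(toGeminiJobState("completed")) // JOB_STATE_SUCCEEDED
	fmt.Println(toGeminiJobState("something-new")) // JOB_STATE_PENDING
}
```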


974-984: LGTM!

Simple and correct conversion for cancel response.


986-1043: LGTM!

The BatchDelete implementation correctly:

  • Validates operation is allowed
  • Validates batch_id is required
  • Handles both prefixed (batches/...) and non-prefixed batch IDs
  • Uses HTTP DELETE method
  • Accepts both 200 OK and 204 No Content as success responses
  • Returns properly structured response with latency metadata

The implementation is consistent with the patterns established by BatchCancel and BatchRetrieve methods.

transports/bifrost-http/integrations/router.go (3)

88-88: LGTM - Consistent extension of BatchRequest struct.

The addition of DeleteRequest field follows the established pattern for other batch request types.


159-162: LGTM - BatchDeleteResponseConverter type definition.

Follows the same pattern as other batch response converters in this file.


820-841: LGTM - BatchDeleteRequest handling follows established pattern.

The implementation is consistent with other batch request handlers (create, list, retrieve, cancel, results), including proper nil checks, error handling, post-callback invocation, and optional response conversion.

core/providers/gemini/files.go (3)

466-485: LGTM - Well-structured wrapper type and conversion function.

The GeminiFileUploadResponseWrapper provides proper structure for the Gemini API response format, and ToGeminiFileUploadResponse correctly maps Bifrost fields to Gemini fields. The hardcoded "application/octet-stream" MIME type is a reasonable default when the actual MIME type is not available in the Bifrost response.


488-519: LGTM - Consistent conversion implementations.

ToGeminiFileListResponse and ToGeminiFileRetrieveResponse follow the same patterns as ToGeminiFileUploadResponse and correctly handle all field mappings including optional fields like ExpiresAt.


522-549: LGTM - Clean helper functions.

  • toGeminiFileState: Properly maps Bifrost status to Gemini state strings with sensible default handling
  • formatGeminiTimestamp: Correctly returns empty string for zero values and uses RFC3339 format
  • safeDerefInt64: Safe nil pointer handling
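A minimal sketch of the last two helpers, matching the behavior the comment describes (empty string for zero timestamps, RFC3339 otherwise, nil-safe int64 dereference). The real implementations live in core/providers/gemini/files.go; signatures here are assumed.

```go
package main

import (
	"fmt"
	"time"
)

// formatGeminiTimestamp returns "" for a zero Unix timestamp,
// otherwise the UTC time in RFC3339 format.
func formatGeminiTimestamp(unix int64) string {
	if unix == 0 {
		return ""
	}
	return time.Unix(unix, 0).UTC().Format(time.RFC3339)
}

// safeDerefInt64 dereferences a *int64, treating nil as 0.
func safeDerefInt64(p *int64) int64 {
	if p == nil {
		return 0
	}
	return *p
}

func main() {
	fmt.Println(formatGeminiTimestamp(0) == "")    // true
	fmt.Println(formatGeminiTimestamp(1700000000)) // 2023-11-14T22:13:20Z
	var n *int64
	fmt.Println(safeDerefInt64(n)) // 0
}
```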
transports/bifrost-http/integrations/genai.go (1)

438-661: LGTM - Batch route configurations follow established patterns.

The CreateGenAIBatchRouteConfigs function properly implements batch operations (create, list, retrieve, cancel, delete) with:

  • Consistent provider extraction from headers
  • Proper request/response conversion
  • Appropriate error handling

The implementation aligns with the existing file route configurations in this file.

core/providers/gemini/types.go (4)

1578-1589: LGTM - SDK job state constants.

The new GeminiJobState* constants properly define the Google GenAI SDK job states, which are distinct from the internal batch states defined earlier in the file.


1594-1651: LGTM - SDK batch request/response types.

The new SDK-aligned types (GeminiBatchCreateRequestSDK, GeminiBatchJobResponseSDK, etc.) are well-structured:

  • Src interface{} correctly allows both string (file reference) and array (inline requests)
  • Types mirror the Google GenAI Python SDK expectations
  • Proper JSON tags for serialization

1663-1678: LGTM - File request types are properly defined.

The GeminiFileListRequest, GeminiFileRetrieveRequest, and GeminiFileDeleteRequest types provide clean structures for file operations with appropriate optional fields.


1656-1662: No breaking changes from removing Provider field.

The GeminiFileUploadRequest struct removal of the Provider field does not break any existing callers. No code in the codebase accesses this field directly. The provider information is obtained from the HTTP context (bifrostContextKeyProvider) and maintained separately in upload sessions, making the removal safe.

Comment on lines +45 to +50
func generateSessionID() string {
bytes := make([]byte, 16)
rand.Read(bytes)
return hex.EncodeToString(bytes)
}

⚠️ Potential issue | 🟠 Major

Handle error from crypto/rand.Read.

While crypto/rand.Read rarely fails, ignoring the error could lead to predictable session IDs if the random source is unavailable, which could be a security concern for session hijacking.

 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Fallback to a less ideal but still unique ID
+		// This should rarely happen in practice
+		return fmt.Sprintf("%d-%d", time.Now().UnixNano(), time.Now().UnixNano())
+	}
 	return hex.EncodeToString(bytes)
 }
🤖 Prompt for AI Agents
transports/bifrost-http/integrations/genai.go lines 45-49: the call to
crypto/rand.Read ignores its error which can lead to weak/predictable session
IDs if the random source fails; change generateSessionID to return (string,
error) (or propagate an error) and check the error from rand.Read (or
io.ReadFull) before hex-encoding the bytes, returning an error if reading
failed; update all callers to handle the new error return (or if you prefer a
panicking behavior for unrecoverable failure, explicitly handle the error with a
clear panic/log.Fatalf and keep the string return).
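The error-propagating variant the prompt asks for can be sketched like this (a minimal version, assuming callers are updated to handle the new error return):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// generateSessionID returns a 32-hex-character ID, propagating any failure
// from crypto/rand instead of silently ignoring it.
func generateSessionID() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", fmt.Errorf("generate session id: %w", err)
	}
	return hex.EncodeToString(b), nil
}

func main() {
	id, err := generateSessionID()
	fmt.Println(len(id), err) // 32 <nil>
}
```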

Comment on lines +240 to +260
fmt.Printf("[DEBUG] FileUploadResponseConverter (phase2 POST): using raw response\n")
return resp.ExtraFields.RawResponse, nil
}
result := gemini.ToGeminiFileUploadResponse(resp)
// If displayName is empty, use the original filename from context
if result.File.DisplayName == "" {
if originalFilename := (*ctx).Value(contextKeyOriginalFilename{}); originalFilename != nil {
if filename, ok := originalFilename.(string); ok && filename != "" {
result.File.DisplayName = filename
fmt.Printf("[DEBUG] FileUploadResponseConverter (phase2 POST): set displayName from context=%s\n", filename)
}
}
}
fmt.Printf("[DEBUG] FileUploadResponseConverter (phase2 POST): converted response=%+v\n", result)
return result, nil

⚠️ Potential issue | 🟡 Minor

Remove all debug fmt.Printf statements.

There are numerous [DEBUG] print statements throughout this file that should be removed before merging. These include:

  • Lines 240, 249, 253-254: FileUploadResponseConverter debug logs
  • Lines 792-793, 813-814, 818-819, 824, 835-836, 841, 854-855, 860-861: parseGeminiResumableUpload debug logs
  • Lines 1001, 1013, 1022, 1034, 1046, 1062-1063: parseGeminiResumableUploadPhase2 debug logs
  • Lines 1072-1077: setResumableUploadFinalStatus debug logs
  • Lines 1108, 1125, 1129, 1147-1148: extractGeminiFileUploadParams debug logs

If debug logging is needed for production troubleshooting, use a proper logger with configurable log levels instead.

Also applies to: 792-793, 813-814, 818-819, 824-824, 835-836, 841-841, 854-855, 860-861, 1001-1001, 1013-1013, 1022-1022, 1034-1034, 1046-1046, 1062-1063, 1072-1077, 1108-1108, 1125-1125, 1129-1129, 1147-1148

🤖 Prompt for AI Agents
transports/bifrost-http/integrations/genai.go lines ~240-254 (and also at the
other locations listed): remove all debug fmt.Printf calls (the “[DEBUG]” print
statements) present around the specified lines; replace them with calls to the
package's logger (e.g., logger.Debugf or equivalent) if persistent debug logging
is required, or simply delete the lines if no logging is needed, and clean up
any unused fmt imports left behind.

}
} else {
// This is file content
uploadReq.File = []byte(content)

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Check the context around line 929 in genai.go
cat -n transports/bifrost-http/integrations/genai.go | sed -n '920,940p'

Repository: maximhq/bifrost

Length of output: 748


🏁 Script executed:

# Search for multipart/related usage and binary file handling
rg -n "multipart/related" --type go -C5

Repository: maximhq/bifrost

Length of output: 3198


🏁 Script executed:

# Check how 'content' variable is assigned before line 929
rg -n "content.*=" transports/bifrost-http/integrations/genai.go | head -20

Repository: maximhq/bifrost

Length of output: 757


🏁 Script executed:

# Get the full parseGeminiMultipartRelated function
cat -n transports/bifrost-http/integrations/genai.go | sed -n '867,934p'

Repository: maximhq/bifrost

Length of output: 2542


🏁 Script executed:

# Search for tests related to binary uploads or multipart/related
fd -e go --type f | xargs rg -l "parseGeminiMultipartRelated|binary.*upload|multipart.*test" 2>/dev/null

Repository: maximhq/bifrost

Length of output: 104


🏁 Script executed:

# Check if there are any existing issues or comments about binary data handling
rg -n "binary|null.*byte|UTF-8|encoding" transports/bifrost-http/integrations/genai.go

Repository: maximhq/bifrost

Length of output: 76


Binary-safety risk: string round-trip in multipart/related parsing.

The request body is converted to a string at line 887 (strings.Split(string(body), delimiter)), each part is extracted as a string, and the file is finally converted back to bytes at line 929. In Go, []byte↔string conversions preserve the raw bytes exactly, so the round trip alone does not mangle binary data, but it allocates needless copies and invites corruption the moment any UTF-8-aware processing (rune-based trimming, sanitization, logging) touches the intermediate string, since null bytes and invalid UTF-8 sequences would then be dropped or replaced with U+FFFD.

Use bytes.Split() and keep the payload as []byte throughout the parsing process to avoid the copies and eliminate that risk.

🤖 Prompt for AI Agents
transports/bifrost-http/integrations/genai.go around line 887-929: the
multipart/related parsing currently converts the request body to a string
(strings.Split(string(body), delimiter)) and later reconstructs the file with
[]byte(content), corrupting binary data; change the parsing to operate on the
raw []byte using bytes.Split (or bytes.Index/bytes.SplitN) so you never convert
body to string, extract the part slice directly as []byte, and assign
uploadReq.File = partSlice (or a copy) instead of converting through string;
ensure any boundary/index operations use byte delimiters and adjust variable
types accordingly to preserve binary integrity.

Comment on lines +868 to +897
fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename)
fileResponse, bifrostErr := g.client.FileUploadRequest(requestCtx, fileReq.UploadRequest)
if bifrostErr != nil {
errMsg := "unknown error"
if bifrostErr.Error != nil {
errMsg = bifrostErr.Error.Message
}
fmt.Printf("[DEBUG] router: FileUploadRequest error: %s (provider=%s)\n", errMsg, fileReq.UploadRequest.Provider)
g.sendError(ctx, bifrostCtx, config.ErrorConverter, bifrostErr)
return
}
fmt.Printf("[DEBUG] router: FileUploadRequest success, response ID=%s\n", fileResponse.ID)
if config.PostCallback != nil {
fmt.Printf("[DEBUG] router: calling PostCallback\n")
if err := config.PostCallback(ctx, req, fileResponse); err != nil {
fmt.Printf("[DEBUG] router: PostCallback error: %v\n", err)
g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute post-request callback"))
return
}
fmt.Printf("[DEBUG] router: PostCallback success\n")
}
if config.FileUploadResponseConverter != nil {
fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n")
response, err = config.FileUploadResponseConverter(bifrostCtx, fileResponse)
fmt.Printf("[DEBUG] router: FileUploadResponseConverter done, err=%v\n", err)

⚠️ Potential issue | 🟡 Minor

Remove debug fmt.Printf statements before merging.

These debug statements use fmt.Printf instead of the structured logger and should be removed before merging:

  • Line 868: fmt.Printf("[DEBUG] router: calling FileUploadRequest...
  • Line 875: fmt.Printf("[DEBUG] router: FileUploadRequest error...
  • Line 879: fmt.Printf("[DEBUG] router: FileUploadRequest success...
  • Line 881: fmt.Printf("[DEBUG] router: calling PostCallback\n"
  • Line 883: fmt.Printf("[DEBUG] router: PostCallback error...
  • Line 887: fmt.Printf("[DEBUG] router: PostCallback success\n"
  • Line 890: fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n"
  • Line 892: fmt.Printf("[DEBUG] router: FileUploadResponseConverter done...

If debug logging is needed for production, use g.logger.Debug(...) instead.

 	case schemas.FileUploadRequest:
 		if fileReq.UploadRequest == nil {
 			g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(nil, "Invalid file upload request"))
 			return
 		}
-		fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename)
 		fileResponse, bifrostErr := g.client.FileUploadRequest(requestCtx, fileReq.UploadRequest)
 		if bifrostErr != nil {
-			errMsg := "unknown error"
-			if bifrostErr.Error != nil {
-				errMsg = bifrostErr.Error.Message
-			}
-			fmt.Printf("[DEBUG] router: FileUploadRequest error: %s (provider=%s)\n", errMsg, fileReq.UploadRequest.Provider)
 			g.sendError(ctx, bifrostCtx, config.ErrorConverter, bifrostErr)
 			return
 		}
-		fmt.Printf("[DEBUG] router: FileUploadRequest success, response ID=%s\n", fileResponse.ID)
 		if config.PostCallback != nil {
-			fmt.Printf("[DEBUG] router: calling PostCallback\n")
 			if err := config.PostCallback(ctx, req, fileResponse); err != nil {
-				fmt.Printf("[DEBUG] router: PostCallback error: %v\n", err)
 				g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute post-request callback"))
 				return
 			}
-			fmt.Printf("[DEBUG] router: PostCallback success\n")
 		}
 		if config.FileUploadResponseConverter != nil {
-			fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n")
 			response, err = config.FileUploadResponseConverter(bifrostCtx, fileResponse)
-			fmt.Printf("[DEBUG] router: FileUploadResponseConverter done, err=%v\n", err)
 		} else {
 			response = fileResponse
 		}
🤖 Prompt for AI Agents
transports/bifrost-http/integrations/router.go around lines 868 to 892: several
debug statements use fmt.Printf and must be removed or replaced with the
structured logger before merging; remove the listed fmt.Printf calls (lines 868,
875, 879, 881, 883, 887, 890, 892) and, if you still need debug visibility,
replace them with g.logger.Debug(...) calls that include the same contextual
fields (provider, purpose, filename, error details, response ID, etc.) and drop
any raw fmt.Printf usages so production code uses the structured logger.

Comment on lines 1001 to 1032
fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err)
g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response"))
return
}

fmt.Printf("[DEBUG] router: sending file upload success response\n")
g.sendSuccess(ctx, bifrostCtx, config.ErrorConverter, response)

⚠️ Potential issue | 🟡 Minor

Remove remaining debug statements in file request handling.

Similar to the file upload path, these debug statements should be removed:

 	if err != nil {
-		fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err)
 		g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response"))
 		return
 	}
 
-	fmt.Printf("[DEBUG] router: sending file upload success response\n")
 	g.sendSuccess(ctx, bifrostCtx, config.ErrorConverter, response)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-		fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err)
 		g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response"))
 		return
 	}
-	fmt.Printf("[DEBUG] router: sending file upload success response\n")
 	g.sendSuccess(ctx, bifrostCtx, config.ErrorConverter, response)
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/router.go around lines 1001 to 1007
there are leftover fmt.Printf debug statements; remove the two fmt.Printf lines
(the "router: file response conversion error" and "router: sending file upload
success response") so the handler no longer prints raw debug output to stdout;
if you need to preserve diagnostics replace them with the project's structured
logger at the appropriate level (e.g., logger.Debugf) or drop them entirely.


ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
ctx.SetBody(responseBody)
fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))

⚠️ Potential issue | 🟠 Major

Remove debug log that exposes full response body.

Logging the full response body may expose sensitive information such as PII, API keys, authentication tokens, or other confidential data. Debug logs should not include complete response payloads in production environments.

Apply this diff to remove the sensitive debug log:

 	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
 }
🤖 Prompt for AI Agents
transports/bifrost-http/integrations/utils.go around line 196: remove the debug
fmt.Printf that prints the full response body since it can expose sensitive
data; replace it with a non-sensitive log (e.g., log status and content length
only) or remove the statement entirely, ensuring no raw responseBody or
string(responseBody) is logged and that any remaining logging follows safe
practices (no PII or tokens).

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 5a7e7a2 to 3bf5d39 Compare December 4, 2025 20:44

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (5)
transports/bifrost-http/integrations/router.go (1)

868-893: Remove or replace debug fmt.Printf logging in file request handlers

The fmt.Printf debug statements in the file upload path and the trailing conversion/success logs are still present and bypass the structured logger:

  • Around the upload path (e.g., provider/purpose/filename, error/success, callbacks, converters).
  • Around file response conversion error and final “success response” lines.

These match the previously raised concern and should be replaced with g.logger.Debug(...) or removed entirely before this ships.

Also applies to: 1001-1006

transports/bifrost-http/integrations/genai.go (4)

45-49: Handle error from crypto/rand.Read.

This issue was already flagged. Ignoring the error from crypto/rand.Read could lead to predictable session IDs if the random source is unavailable.


240-254: Remove debug fmt.Printf statements.

These debug statements were already flagged in a previous review. They should be removed or replaced with a proper logger with configurable log levels.


887-929: String round-trip in multipart/related parsing.

This issue was already flagged. Converting the binary body to string (line 887) and back to bytes (line 929) copies the payload needlessly and leaves it exposed to corruption if any UTF-8-aware processing touches the intermediate string. Use bytes.Split() instead of strings.Split() and keep the payload as []byte throughout.


792-861: Remove all debug fmt.Printf statements.

This issue was already flagged in a previous review. The numerous debug statements throughout the resumable upload handling code should be removed or replaced with a proper structured logger.

Also applies to: 1072-1077, 1108-1148

🧹 Nitpick comments (8)
core/providers/gemini/types.go (1)

1594-1602: Document the flexible Src field type.

The Src field is defined as interface{}, which can accept either a string (file reference) or an array of inline requests. Consider adding a comment documenting the expected types for clarity.

Apply this diff to improve documentation:

 type GeminiBatchCreateRequestSDK struct {
 	Model string `json:"model,omitempty"`
-	// Src can be either:
-	// - A string like "files/display_name" for file-based input
-	// - An array of inline request objects
+	// Src can be either a string (e.g., "files/display_name") for file-based input
+	// or an array of GeminiBatchInlineRequest for inline requests
 	Src interface{} `json:"src,omitempty"`
 }
transports/bifrost-http/integrations/router.go (1)

421-427: Consider cancelling the Bifrost context when PreCallback short-circuits resumable uploads

In the ErrResumableUploadInit branch you return early without invoking cancel(), unlike the other code paths that eventually defer cancel() or pass it into streaming. If ConvertToBifrostContext uses a context.WithCancel, this risks a small but avoidable context/goroutine leak on resumable-init short-circuits.

You could defensively call cancel() before returning:

-		if config.PreCallback != nil {
-			if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
-				// Check if this is a resumable upload init that was already handled
-				if err == ErrResumableUploadInit {
-					// Response was already written by the PreCallback, just return
-					return
-				}
+		if config.PreCallback != nil {
+			if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
+				// Check if this is a resumable upload init that was already handled
+				if err == ErrResumableUploadInit {
+					// Response was already written by the PreCallback; clean up context and return
+					cancel()
+					return
+				}
core/providers/gemini/files.go (1)

465-549: Gemini file conversion helpers are well-structured and consistent

The new wrapper type and ToGeminiFile* converters cleanly translate Bifrost file responses into the Gemini JSON shapes, with sane defaults (e.g., application/octet-stream MIME type, UTC timestamps, safe handling of optional ExpiresAt). This design should work well with the GenAI-facing routes.

transports/bifrost-http/integrations/genai.go (5)

207-314: Extract shared logic between POST and PUT resumable upload routes.

The POST (lines 209-261) and PUT (lines 265-314) routes for resumable uploads have nearly identical FileRequestConverter and FileUploadResponseConverter implementations. Consider extracting these to shared functions to reduce duplication and ease maintenance.

Example extraction:

// Shared converter for resumable upload phase 2
func resumableUploadFileRequestConverter(ctx *context.Context, req interface{}) (*FileRequest, error) {
    if geminiReq, ok := req.(*gemini.GeminiFileUploadRequest); ok {
        provider := schemas.Gemini
        // Use a checked type assertion so an unexpected context value cannot panic.
        if p, ok := (*ctx).Value(bifrostContextKeyProvider).(schemas.ModelProvider); ok {
            provider = p
        }
        bifrostReq := &schemas.BifrostFileUploadRequest{
            Provider: provider,
            File:     geminiReq.File,
            Filename: geminiReq.Filename,
            Purpose:  geminiReq.Purpose,
        }
        return &FileRequest{
            Type:          schemas.FileUploadRequest,
            UploadRequest: bifrostReq,
        }, nil
    }
    return nil, errors.New("invalid file upload request type")
}

663-778: Extract common provider extraction logic.

The provider extraction pattern (reading from header, defaulting to Gemini, storing in context) is duplicated across extractGeminiBatchCreateParams, extractGeminiBatchListQueryParams, extractGeminiBatchIDFromPath, and extractGeminiBatchIDFromPathCancel. Consider extracting to a helper function.

+// extractProviderFromHeader extracts and sets provider in context
+func extractProviderFromHeader(ctx *fasthttp.RequestCtx, bifrostCtx *context.Context) schemas.ModelProvider {
+    provider := string(ctx.Request.Header.Peek("x-model-provider"))
+    if provider == "" {
+        provider = string(schemas.Gemini)
+    }
+    *bifrostCtx = context.WithValue(*bifrostCtx, bifrostContextKeyProvider, schemas.ModelProvider(provider))
+    return schemas.ModelProvider(provider)
+}

 func extractGeminiBatchCreateParams(ctx *fasthttp.RequestCtx, bifrostCtx *context.Context, req interface{}) error {
-    provider := string(ctx.Request.Header.Peek("x-model-provider"))
-    if provider == "" {
-        provider = string(schemas.Gemini)
-    }
-    *bifrostCtx = context.WithValue(*bifrostCtx, bifrostContextKeyProvider, schemas.ModelProvider(provider))
+    extractProviderFromHeader(ctx, bifrostCtx)
     // ... rest of function

1145-1145: Hardcoded path prefix may drift from route definition.

The resumable upload URL uses a hardcoded path /genai/upload/v1beta/files/resumable/ which should match the route defined at line 211 (pathPrefix + "/upload/v1beta/files/resumable/{session_id}"). If the route path changes, this URL construction will break. Consider deriving this path from a shared constant or the route configuration.


174-193: Consider extracting common FileRequestConverter/BatchRequestConverter pattern.

Multiple route configurations have nearly identical converter implementations that:

  1. Type-assert the request
  2. Extract provider from context (defaulting to Gemini)
  3. Build a Bifrost request with the provider

This pattern is repeated for file upload, file list, file retrieve, file delete, and all batch operations. Extracting shared helper functions would reduce code duplication significantly.

Also applies to: 217-236, 273-292, 324-341, 365-380, 404-419, 450-490


511-531: BatchCreateRequestConverter field name is misleading for list/retrieve/cancel/delete operations; consider renaming to BatchRequestConverter across all integrations.

The field is used polymorphically for all batch operation types (list, retrieve, cancel, delete) via the BatchRequest.Type field, but the name suggests it only handles create operations. While response converters use operation-specific names (BatchListResponseConverter, BatchRetrieveResponseConverter, etc.), the request converter remains generic. To improve clarity, consider renaming BatchCreateRequestConverter to BatchRequestConverter in RouteConfig and all integration files where it's used (anthropic.go, openai.go, bedrock.go, genai.go).

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5a7e7a2 and 3bf5d39.

📒 Files selected for processing (25)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/bedrock/batch.go (1 hunks)
  • core/providers/cerebras/batch.go (1 hunks)
  • core/providers/cohere/batch.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/files.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/batch.go (1 hunks)
  • core/providers/openai/batch.go (1 hunks)
  • core/providers/openrouter/batch.go (1 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (14 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
  • transports/bifrost-http/integrations/utils.go
  • core/schemas/bifrost.go
  • core/providers/bedrock/batch.go
  • core/providers/azure/batch.go
  • core/providers/sgl/batch.go
  • core/providers/vertex/batch.go
  • core/schemas/batch.go
  • core/providers/openrouter/batch.go
  • core/providers/groq/batch.go
  • core/providers/mistral/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/parasail/batch.go
  • core/providers/openai/batch.go
  • core/schemas/provider.go
  • core/providers/perplexity/batch.go
  • core/providers/anthropic/batch.go
  • core/providers/gemini/batch.go
  • core/providers/gemini/files.go
  • core/providers/elevenlabs/batch.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/cohere/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/bifrost.go
  • core/providers/gemini/types.go
  • core/providers/ollama/batch.go
  • core/providers/cerebras/batch.go
🧬 Code graph analysis (11)
core/providers/parasail/batch.go (2)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/openai/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/perplexity/batch.go (4)
core/providers/perplexity/perplexity.go (1)
  • PerplexityProvider (19-24)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/anthropic/batch.go (4)
core/providers/anthropic/anthropic.go (1)
  • AnthropicProvider (21-28)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/gemini/batch.go (5)
core/schemas/batch.go (15)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1582-1582)
  • GeminiJobStateRunning (1583-1583)
  • GeminiJobStateSucceeded (1584-1584)
  • GeminiJobStateFailed (1585-1585)
  • GeminiJobStateCancelling (1586-1586)
  • GeminiJobStateCancelled (1587-1587)
  • GeminiBatchJobResponseSDK (1618-1624)
  • GeminiBatchMetadata (1499-1512)
  • GeminiBatchStats (1433-1437)
  • GeminiBatchDest (1533-1536)
  • GeminiBatchListResponseSDK (1627-1630)
core/schemas/bifrost.go (5)
  • BifrostError (364-373)
  • Gemini (48-48)
  • BatchDeleteRequest (103-103)
  • BifrostResponseExtraFields (295-304)
  • RequestType (83-83)
core/providers/utils/utils.go (4)
  • CheckOperationAllowed (408-420)
  • NewBifrostOperationError (449-460)
  • SetExtraHeaders (178-208)
  • MakeRequestWithContext (39-93)
core/schemas/provider.go (1)
  • Provider (313-362)
core/providers/gemini/files.go (1)
core/schemas/files.go (1)
  • BifrostFileUploadResponse (89-105)
transports/bifrost-http/integrations/genai.go (3)
transports/bifrost-http/integrations/router.go (3)
  • EmbeddingResponseConverter (129-129)
  • RequestParser (217-217)
  • RouteConfig (270-301)
core/providers/gemini/types.go (10)
  • GeminiFileUploadRequest (1656-1662)
  • Type (779-779)
  • GeminiFileListRequest (1665-1669)
  • GeminiFileRetrieveRequest (1672-1674)
  • GeminiFileDeleteRequest (1677-1679)
  • GeminiBatchCreateRequestSDK (1596-1602)
  • GeminiBatchListRequestSDK (1633-1636)
  • GeminiBatchRetrieveRequestSDK (1639-1641)
  • GeminiBatchCancelRequestSDK (1644-1646)
  • GeminiBatchDeleteRequestSDK (1649-1651)
core/providers/gemini/batch.go (4)
  • ToGeminiBatchJobResponse (871-896)
  • ToGeminiBatchListResponse (936-972)
  • ToGeminiBatchRetrieveResponse (899-933)
  • ToGeminiBatchCancelResponse (975-984)
core/providers/cohere/batch.go (4)
core/providers/cohere/cohere.go (1)
  • CohereProvider (67-73)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/ollama/batch.go (3)
core/providers/ollama/ollama.go (1)
  • OllamaProvider (18-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/cerebras/batch.go (1)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Graphite / mergeability_check (×6)
🔇 Additional comments (15)
core/providers/gemini/types.go (1)

1578-1589: LGTM! Job state constants properly defined.

The new job state constants follow a consistent naming pattern and cover all necessary lifecycle states for batch jobs.

core/providers/openai/batch.go (1)

597-600: LGTM! Consistent unsupported operation implementation.

The BatchDelete method correctly returns an unsupported operation error, consistent with OpenAI's current API capabilities and the pattern used across other providers.

core/providers/anthropic/batch.go (1)

783-786: LGTM! Consistent with other batch methods.

The BatchDelete implementation follows the established pattern for unsupported operations in the Anthropic provider.

core/providers/cohere/batch.go (1)

35-38: LGTM! Maintains consistency with other batch operations.

The BatchDelete method correctly returns an unsupported operation error, maintaining consistency with all other batch methods in this provider.

core/providers/perplexity/batch.go (1)

35-38: LGTM! Correct unsupported operation handling.

The implementation is consistent with other batch operations in the Perplexity provider.

core/schemas/provider.go (1)

182-182: LGTM! Complete and consistent integration of BatchDelete.

The changes properly integrate BatchDelete into:

  1. The AllowedRequests struct for operation gating
  2. The IsOperationAllowed switch case for validation
  3. The Provider interface for implementation

All additions follow the established patterns for other batch operations.

Also applies to: 231-232, 350-351

core/providers/cerebras/batch.go (1)

35-38: LGTM! Consistent with provider patterns.

The BatchDelete implementation correctly returns an unsupported operation error, matching the pattern for all other batch operations in the Cerebras provider.

core/providers/ollama/batch.go (1)

35-38: LGTM! Final provider implementation is consistent.

The BatchDelete method correctly implements the unsupported operation pattern, completing the consistent interface implementation across all providers.

core/bifrost.go (1)

1133-1209: BatchDeleteRequest implementation is consistent with existing Batch APIs

Validation, provider/config lookup, key selection, and retry/error-enrichment all mirror the established patterns for retrieve/cancel/results; this looks correct and cohesive with the existing batch surface.

core/providers/parasail/batch.go (1)

35-38: Parasail BatchDelete stub matches existing unsupported batch pattern

The BatchDelete method correctly advertises unsupported status using the shared NewUnsupportedOperationError helper, consistent with the other Parasail batch operations.

core/providers/elevenlabs/batch.go (1)

35-38: Elevenlabs BatchDelete stub is wired consistently

The BatchDelete implementation cleanly follows the existing pattern for unsupported Elevenlabs batch operations and aligns with the Provider interface.

transports/bifrost-http/integrations/router.go (1)

81-89: BatchDelete routing and conversion hooks are correctly plumbed

The additional DeleteRequest field on BatchRequest, the BatchDeleteResponseConverter type and RouteConfig field, and the new BatchDeleteRequest branch in handleBatchRequest all follow the existing batch patterns (create/list/retrieve/cancel/results) and should integrate cleanly with the new core.BatchDeleteRequest API.

Also applies to: 159-162, 271-292, 820-841

core/providers/gemini/batch.go (1)

986-1043: Gemini BatchDelete implementation looks correct and aligned with other batch ops

The new BatchDelete:

  • Enforces operation-allowed and batch_id presence.
  • Builds the correct DELETE URL for both raw IDs and batches/... resource names.
  • Uses x-goog-api-key consistently with other Gemini batch calls.
  • Treats 200/204 as success and returns a typed BifrostBatchDeleteResponse with enriched ExtraFields.

This is consistent with the existing Gemini batch behaviors.

transports/bifrost-http/integrations/genai.go (2)

438-661: New batch API routes look structurally sound.

The CreateGenAIBatchRouteConfigs function properly implements:

  • Batch create with both file-based and inline request support (lines 463-482)
  • Batch list with pagination (lines 503-543)
  • Batch retrieve, cancel, and delete operations
  • Proper provider extraction and context propagation
  • Response converters that check for raw responses before converting

The route structure follows the established patterns in this file.


1033-1042: Session access pattern is safe; concurrent reuse is not possible.

Session IDs are generated from 16 cryptographically random bytes (line 48), making reuse extremely unlikely. Within a single request, parseGeminiResumableUploadPhase2 (line 1033) and extractGeminiResumableUploadParams (line 1097-1105) are called sequentially as RequestParser and PreCallback in the same request handler, not concurrently. Sessions are deleted immediately after use (line 1105), preventing reuse across requests. The cleanup goroutine (lines 57-60) removes sessions only after 1 hour, which doesn't affect active request processing.

Comment on lines +847 to +868
func ToGeminiJobState(status schemas.BatchStatus) string {
switch status {
case schemas.BatchStatusValidating:
return GeminiJobStatePending
case schemas.BatchStatusInProgress:
return GeminiJobStateRunning
case schemas.BatchStatusFinalizing:
return GeminiJobStateRunning
case schemas.BatchStatusCompleted:
return GeminiJobStateSucceeded
case schemas.BatchStatusFailed:
return GeminiJobStateFailed
case schemas.BatchStatusCancelling:
return GeminiJobStateCancelling
case schemas.BatchStatusCancelled:
return GeminiJobStateCancelled
case schemas.BatchStatusExpired:
return GeminiJobStateFailed
default:
return GeminiJobStatePending
}
}
⚠️ Potential issue | 🟠 Major

Fix BatchStats computation to avoid negative pending counts

The SDK converters generally look good, but there’s a corner case in the stats math:

  • In BatchRetrieve, BifrostBatchRetrieveResponse.RequestCounts is populated with Completed and Failed, while Total is left at its zero value.
  • ToGeminiBatchRetrieveResponse and ToGeminiBatchListResponse currently use resp.RequestCounts.Total directly to derive RequestCount and PendingRequestCount.

When Total == 0 and Completed > 0, PendingRequestCount = Total - Completed becomes negative, which is invalid.

Consider defensively deriving totals and pending counts, for example:

-	result.Metadata = &GeminiBatchMetadata{
-		Name:       resp.ID,
-		State:      ToGeminiJobState(resp.Status),
-		CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
-		BatchStats: &GeminiBatchStats{
-			RequestCount:           resp.RequestCounts.Total,
-			PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-			SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-		},
-	}
+	total := resp.RequestCounts.Total
+	if total == 0 {
+		total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
+	}
+	success := resp.RequestCounts.Completed
+	if success < 0 {
+		success = 0
+	}
+	pending := total - (success + resp.RequestCounts.Failed)
+	if pending < 0 {
+		pending = 0
+	}
+
+	result.Metadata = &GeminiBatchMetadata{
+		Name:       resp.ID,
+		State:      ToGeminiJobState(resp.Status),
+		CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
+		BatchStats: &GeminiBatchStats{
+			RequestCount:           total,
+			PendingRequestCount:    pending,
+			SuccessfulRequestCount: success,
+		},
+	}

and mirror the same pattern inside ToGeminiBatchListResponse when populating BatchStats, so the invariants RequestCount ≥ 0, PendingRequestCount ≥ 0, and RequestCount ≈ success + failures + pending always hold.

Also applies to: 899-919, 941-958

🤖 Prompt for AI Agents
core/providers/gemini/batch.go lines ~847-868 (and similarly at 899-919,
941-958): the current stats math can produce negative PendingRequestCount when
resp.RequestCounts.Total is zero but Completed/Failed are non-zero; instead
derive totals defensively: compute total := resp.RequestCounts.Total; if total
== 0 { total = resp.RequestCounts.Completed + resp.RequestCounts.Failed +
resp.RequestCounts.Pending }; compute pending := total -
resp.RequestCounts.Completed - resp.RequestCounts.Failed; if pending < 0 {
pending = 0 }; then set RequestCount = total and PendingRequestCount = pending
(ensuring RequestCount ≥ 0, PendingRequestCount ≥ 0 and RequestCount ≈ success +
failures + pending). Apply the same change to ToGeminiBatchListResponse and any
other places noted.
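The suggested fix reduces to a small pure function, which makes the invariants easy to unit-test in isolation. A sketch — the struct mirrors the review's `RequestCounts` fields and is not the actual helper in the codebase:

```go
package main

import "fmt"

type requestCounts struct {
	Total, Completed, Failed int
}

// deriveStats returns non-negative (total, pending, success) counts even
// when Total was left at its zero value, as in BatchRetrieve responses.
func deriveStats(rc requestCounts) (total, pending, success int) {
	total = rc.Total
	if total == 0 {
		// Reconstruct the total from the counts we do have.
		total = rc.Completed + rc.Failed
	}
	success = rc.Completed
	pending = total - (success + rc.Failed)
	if pending < 0 {
		pending = 0
	}
	return total, pending, success
}

func main() {
	fmt.Println(deriveStats(requestCounts{Total: 0, Completed: 5, Failed: 1}))
	fmt.Println(deriveStats(requestCounts{Total: 10, Completed: 5, Failed: 1}))
}
```

With `Total == 0` this yields total 6, pending 0, success 5 instead of a negative pending count.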

copy(uploadReq.File, body)
uploadReq.Filename = session.Filename
uploadReq.MimeType = session.MimeType
uploadReq.Purpose = "batch" // Default purpose for file uploads via GenAI API
⚠️ Potential issue | 🟡 Minor

Hardcoded purpose may be incorrect for non-batch uploads.

The purpose is hardcoded to "batch" for all file uploads via the resumable upload API. If this endpoint is used for other purposes (e.g., general file storage for chat context), this default may be incorrect. Consider allowing the purpose to be specified in the metadata during phase 1.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai.go around line 1056, the file
upload purpose is hardcoded to "batch", which can be incorrect for non-batch
uploads; change the code to read the purpose from the phase-1 metadata (e.g.,
check uploadReq.Metadata or incoming request metadata) and set uploadReq.Purpose
to that value if present, otherwise fall back to "batch"; additionally
validate/normalize the provided purpose against allowed values (or sanitize it)
before assigning to uploadReq.Purpose so invalid values are not sent to the
GenAI API.
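The suggested validate-then-fallback logic might look like the following sketch. The allowed-purpose set here is hypothetical; the values the upstream API actually accepts should be substituted:

```go
package main

import (
	"fmt"
	"strings"
)

// allowedPurposes is a hypothetical whitelist; replace with the set of
// purposes the upstream API actually accepts.
var allowedPurposes = map[string]bool{"batch": true, "fine-tune": true}

// resolvePurpose prefers the purpose supplied in phase-1 metadata and
// falls back to "batch" when it is absent or not in the whitelist.
func resolvePurpose(requested string) string {
	requested = strings.ToLower(strings.TrimSpace(requested))
	if allowedPurposes[requested] {
		return requested
	}
	return "batch"
}

func main() {
	fmt.Println(resolvePurpose("fine-tune"), resolvePurpose(""), resolvePurpose("bogus"))
}
```

The session struct from phase 1 would carry the resolved purpose alongside `Filename` and `MimeType`.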

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 2

♻️ Duplicate comments (7)
transports/bifrost-http/integrations/router.go (2)

868-892: Debug statements flagged in previous review.

These fmt.Printf debug statements were already flagged in a past review comment. They should be removed or replaced with structured logging before merging.


1001-1007: Debug statements flagged in previous review.

These fmt.Printf debug statements were already flagged in a past review comment. They should be removed before merging.

transports/bifrost-http/integrations/genai.go (5)

45-49: Error from crypto/rand.Read not handled (flagged in previous review).

This was already flagged in a past review. Ignoring the error could lead to predictable session IDs if the random source fails.
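Handling the error is cheap, and a failed read should abort session creation rather than fall through to a zeroed, predictable ID. A minimal sketch of the pattern:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"log"
)

// newSessionID returns a 32-hex-character session ID and surfaces any
// failure of the system random source instead of silently ignoring it.
func newSessionID() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", fmt.Errorf("generate session id: %w", err)
	}
	return hex.EncodeToString(b), nil
}

func main() {
	id, err := newSessionID()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(id))
}
```

The caller would return a 500 to the client on error instead of starting a resumable session.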


240-254: Debug statements flagged in previous review.

These fmt.Printf debug statements were already flagged in a past review comment. They should be removed or replaced with structured logging.


792-861: Debug statements flagged in previous review.

Multiple fmt.Printf debug statements in parseGeminiResumableUpload and related functions were flagged in a past review. They should be removed or replaced with structured logging.


927-930: Binary data corruption flagged in previous review.

The conversion through string(body) at line 887 and back to []byte(content) at line 929 corrupts binary file uploads. This was already flagged in a past review.


1066-1079: Remove debug statements from setResumableUploadFinalStatus.

These debug statements should be removed before merging.

🧹 Nitpick comments (2)
transports/bifrost-http/integrations/genai.go (2)

217-237: Consider extracting duplicate FileRequestConverter logic.

The FileRequestConverter closures at lines 217-237, 273-293 are nearly identical. Consider extracting to a shared helper function to reduce duplication.

// Helper to create FileRequest from GeminiFileUploadRequest
func createFileUploadRequest(ctx *context.Context, geminiReq *gemini.GeminiFileUploadRequest) (*FileRequest, error) {
    provider := schemas.Gemini
    if p := (*ctx).Value(bifrostContextKeyProvider); p != nil {
        provider = p.(schemas.ModelProvider)
    }
    return &FileRequest{
        Type: schemas.FileUploadRequest,
        UploadRequest: &schemas.BifrostFileUploadRequest{
            Provider: provider,
            File:     geminiReq.File,
            Filename: geminiReq.Filename,
            Purpose:  geminiReq.Purpose,
        },
    }, nil
}

1011-1064: Remove debug statements and clean up session handling.

The debug statements here should be removed. Additionally, the comment on line 1059 notes that the session shouldn't be deleted here, but extractGeminiResumableUploadParams (line 1105) deletes it later. Ensure this ordering is always correct to avoid race conditions.

Remove the debug statements and consider adding a brief code comment clarifying the session lifecycle (created in phase 1, used in phase 2 parser, deleted in phase 2 PreCallback).
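If the parser and PreCallback reads can be collapsed into a single consumption point, one way to make the lifecycle race-proof is to look up and delete the session atomically. A sketch, assuming a mutex-guarded session map (all names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

type uploadSession struct {
	Filename string
	MimeType string
}

type sessionStore struct {
	mu       sync.Mutex
	sessions map[string]uploadSession
}

// take looks up and deletes the session in one critical section, so each
// session is consumed exactly once (created in phase 1, used in phase 2).
func (s *sessionStore) take(id string) (uploadSession, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	sess, ok := s.sessions[id]
	if ok {
		delete(s.sessions, id)
	}
	return sess, ok
}

func main() {
	store := &sessionStore{sessions: map[string]uploadSession{
		"abc": {Filename: "data.jsonl", MimeType: "application/jsonl"},
	}}
	_, first := store.take("abc")
	_, second := store.take("abc")
	fmt.Println(first, second)
}
```

A second `take` for the same ID fails, which also makes accidental session reuse across requests impossible by construction.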

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5a7e7a2 and 3bf5d39.

📒 Files selected for processing (25) — the same 25 files listed in the previous review
🚧 Files skipped from review as they are similar to previous changes (7)
  • transports/bifrost-http/integrations/utils.go
  • core/providers/cohere/batch.go
  • core/providers/sgl/batch.go
  • core/schemas/bifrost.go
  • core/providers/cerebras/batch.go
  • core/providers/anthropic/batch.go
  • core/providers/mistral/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/schemas/provider.go
  • core/providers/ollama/batch.go
  • core/providers/azure/batch.go
  • core/providers/parasail/batch.go
  • core/schemas/batch.go
  • core/providers/openai/batch.go
  • core/providers/vertex/batch.go
  • core/providers/groq/batch.go
  • core/providers/openrouter/batch.go
  • core/bifrost.go
  • core/providers/bedrock/batch.go
  • core/providers/perplexity/batch.go
  • core/providers/gemini/batch.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/elevenlabs/batch.go
  • core/providers/gemini/files.go
  • core/providers/gemini/types.go
🧬 Code graph analysis (16)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/ollama/batch.go (4)
core/providers/ollama/ollama.go (1)
  • OllamaProvider (18-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/azure/batch.go (4)
core/providers/azure/azure.go (1)
  • AzureProvider (23-28)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/parasail/batch.go (4)
core/providers/parasail/parasail.go (1)
  • ParasailProvider (17-22)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/schemas/batch.go (2)
core/schemas/provider.go (1)
  • Provider (313-362)
core/schemas/bifrost.go (2)
  • ModelProvider (32-32)
  • BifrostResponseExtraFields (295-304)
core/providers/openai/batch.go (4)
core/providers/openai/openai.go (1)
  • OpenAIProvider (24-30)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/vertex/batch.go (4)
core/providers/vertex/vertex.go (1)
  • VertexProvider (57-62)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/groq/batch.go (2)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/bifrost.go (3)
core/schemas/bifrost.go (5)
  • BatchDeleteRequest (103-103)
  • BifrostError (364-373)
  • ErrorField (382-389)
  • BifrostErrorExtraFields (431-435)
  • RequestType (83-83)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/provider.go (2)
  • Provider (313-362)
  • CustomProviderConfig (248-254)
core/providers/bedrock/batch.go (4)
core/providers/bedrock/bedrock.go (1)
  • BedrockProvider (29-35)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/perplexity/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/gemini/batch.go (3)
core/schemas/batch.go (15)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/gemini/types.go (10)
  • GeminiJobStatePending (1582-1582)
  • GeminiJobStateRunning (1583-1583)
  • GeminiJobStateSucceeded (1584-1584)
  • GeminiJobStateFailed (1585-1585)
  • GeminiJobStateCancelling (1586-1586)
  • GeminiJobStateCancelled (1587-1587)
  • GeminiBatchJobResponseSDK (1618-1624)
  • GeminiBatchMetadata (1499-1512)
  • GeminiBatchStats (1433-1437)
  • GeminiBatchDest (1533-1536)
core/providers/utils/utils.go (3)
  • CheckOperationAllowed (408-420)
  • SetExtraHeaders (178-208)
  • MakeRequestWithContext (39-93)
transports/bifrost-http/integrations/router.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (36-36)
core/schemas/bifrost.go (2)
  • BatchDeleteRequest (103-103)
  • FileUploadRequest (104-104)
core/schemas/provider.go (1)
  • Provider (313-362)
transports/bifrost-http/integrations/genai.go (4)
core/providers/gemini/types.go (2)
  • GeminiFileUploadRequest (1656-1662)
  • Type (779-779)
core/schemas/files.go (2)
  • BifrostFileUploadRequest (53-66)
  • BifrostFileUploadResponse (89-105)
core/providers/gemini/files.go (1)
  • ToGeminiFileUploadResponse (472-485)
core/providers/gemini/batch.go (3)
  • ToGeminiBatchJobResponse (871-896)
  • ToGeminiBatchListResponse (936-972)
  • ToGeminiBatchRetrieveResponse (899-933)
core/providers/elevenlabs/batch.go (4)
core/providers/elevenlabs/elevenlabs.go (1)
  • ElevenlabsProvider (23-29)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/gemini/files.go (1)
core/schemas/files.go (1)
  • BifrostFileUploadResponse (89-105)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: Graphite / mergeability_check (×12)
🔇 Additional comments (30)
core/providers/gemini/types.go (2)

1578-1589: LGTM! Clear distinction between SDK and internal batch states.

The new GeminiJobState* constants are well-documented as "Google GenAI SDK job states" and properly differentiated from the existing GeminiBatchState* constants. The naming convention and value format are consistent.


1656-1662: LGTM! File upload enhancements align with resumable upload support.

The addition of MimeType and ResumableSessionID fields appropriately extends the file upload functionality. The MimeType field enables explicit MIME type handling, while ResumableSessionID (marked as internal with json:"-") supports resumable upload sessions.

core/providers/vertex/batch.go (1)

35-38: LGTM! Consistent with other unsupported batch operations.

The BatchDelete implementation follows the same pattern as other batch methods in this file, correctly returning an unsupported operation error.

core/providers/azure/batch.go (1)

438-441: LGTM! Follows the established pattern.

The BatchDelete stub correctly signals that Azure does not support this operation.

core/providers/openrouter/batch.go (1)

35-38: LGTM! Consistent stub implementation.

core/providers/bedrock/batch.go (1)

932-935: LGTM! Consistent implementation.

core/schemas/provider.go (2)

182-182: LGTM! Proper extension of AllowedRequests.

The BatchDelete field is correctly placed among other batch operations and follows the established naming convention.


231-232: LGTM! Complete and consistent Provider interface extension.

The BatchDelete capability is properly integrated across all three required locations:

  1. AllowedRequests.BatchDelete field (line 182)
  2. IsOperationAllowed case handler (lines 231-232)
  3. Provider interface method (lines 350-351)

The implementation follows the established patterns for other batch operations.

Also applies to: 350-351

core/providers/groq/batch.go (1)

35-38: LGTM! Consistent stub implementation.

core/providers/openai/batch.go (1)

597-600: LGTM! Completes the consistent BatchDelete stub pattern across all providers.

All reviewed providers (Vertex, Azure, OpenRouter, Bedrock, Groq, and OpenAI) implement BatchDelete as an unsupported operation using the same pattern. This consistency is excellent for maintainability.

core/providers/perplexity/batch.go (1)

35-38: LGTM!

The BatchDelete stub implementation follows the established pattern used by other batch methods in this file, correctly returning an unsupported operation error with the appropriate request type constant.

core/providers/ollama/batch.go (1)

35-38: LGTM!

The BatchDelete stub implementation is consistent with the other batch method stubs in this file and correctly implements the Provider interface requirement.

core/schemas/batch.go (1)

265-281: LGTM!

The BifrostBatchDeleteRequest and BifrostBatchDeleteResponse types are well-structured and follow the established patterns from other batch request/response types in this file. The request mirrors BifrostBatchRetrieveRequest and BifrostBatchCancelRequest, while the response appropriately includes a Deleted confirmation field.

core/bifrost.go (1)

1133-1209: LGTM!

The BatchDeleteRequest implementation follows the established pattern from other batch methods (BatchRetrieveRequest, BatchCancelRequest, BatchResultsRequest):

  • Consistent nil/empty validation for request, provider, and batch_id
  • Proper context defaulting
  • Same base provider type resolution logic for custom providers
  • Correct key selection pattern
  • Appropriate use of executeRequestWithRetries
  • Error extra fields populated consistently
core/providers/parasail/batch.go (1)

35-38: LGTM!

The BatchDelete stub implementation is consistent with the other batch method stubs and correctly follows the established provider pattern.

core/providers/elevenlabs/batch.go (1)

35-38: LGTM!

The BatchDelete stub correctly follows the same pattern as other unsupported batch operations in this provider, returning the appropriate error with consistent parameter usage.

core/providers/gemini/files.go (3)

466-485: LGTM!

The new wrapper type and updated converter correctly align with Google's API response format. The structured approach with GeminiFileUploadResponseWrapper is cleaner than returning a raw map.


487-519: LGTM!

The new ToGeminiFileListResponse and ToGeminiFileRetrieveResponse converters follow consistent patterns and correctly map Bifrost fields to Gemini's SDK format.


521-549: LGTM!

The helper functions toGeminiFileState, formatGeminiTimestamp, and safeDerefInt64 are clean, handle edge cases appropriately (zero timestamp returns empty string, nil pointer returns 0), and improve code readability.

transports/bifrost-http/integrations/router.go (4)

88-88: LGTM!

The DeleteRequest field addition to BatchRequest follows the established pattern for other batch operation fields.


159-162: LGTM!

The BatchDeleteResponseConverter type definition is consistent with other batch response converter types in this file.


422-426: LGTM!

Proper sentinel error handling for ErrResumableUploadInit to short-circuit processing when a resumable upload initialization has already been handled by the PreCallback.


820-841: LGTM!

The BatchDeleteRequest case follows the same pattern as other batch operations (BatchCreate, BatchList, etc.) with proper nil checks, error handling, post-callback execution, and response conversion.

core/providers/gemini/batch.go (3)

846-868: LGTM!

The ToGeminiJobState function provides a comprehensive mapping from Bifrost batch statuses to Gemini SDK job states. Mapping BatchStatusExpired to GeminiJobStateFailed is a reasonable choice since Gemini doesn't have an explicit expired state.
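The mapping described above can be sketched as a plain switch; the status string values on the left are illustrative stand-ins for Bifrost's BatchStatus constants, while the returned values mirror the Google GenAI SDK job-state names, with `expired` folded into `JOB_STATE_FAILED` as noted.

```go
package main

import "fmt"

// toGeminiJobState sketches the Bifrost-status -> Gemini SDK job-state
// mapping the review describes. The input strings here are assumptions,
// not Bifrost's actual BatchStatus constant values.
func toGeminiJobState(status string) string {
	switch status {
	case "validating":
		return "JOB_STATE_PENDING"
	case "in_progress", "finalizing":
		return "JOB_STATE_RUNNING"
	case "completed":
		return "JOB_STATE_SUCCEEDED"
	case "failed", "expired": // Gemini has no explicit expired state
		return "JOB_STATE_FAILED"
	case "cancelling":
		return "JOB_STATE_CANCELLING"
	case "cancelled":
		return "JOB_STATE_CANCELLED"
	default:
		return "JOB_STATE_UNSPECIFIED"
	}
}

func main() {
	fmt.Println(toGeminiJobState("expired")) // JOB_STATE_FAILED
}
```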


986-1043: LGTM!

The BatchDelete implementation follows the established pattern from other batch operations:

  • Proper operation allowed check
  • Input validation for empty batch ID
  • Correct URL construction handling both prefixed and unprefixed batch IDs
  • Appropriate HTTP DELETE method
  • Handles both 200 OK and 204 No Content success responses

887-892: The SuccessfulRequestCount calculation is correct. The Completed field in BatchRequestCounts represents all finished requests (both successful and failed), not only successful completions. The formula Completed - Failed accurately computes the successful count. This is consistent across all providers: Anthropic maps its Succeeded + Errored to Completed, and OpenAI/Azure use Completed to include all finished requests regardless of outcome.

Likely an incorrect or invalid review comment.

transports/bifrost-http/integrations/genai.go (4)

23-67: Session management implementation looks good with one consideration.

The uploadSession type and cleanup goroutine provide appropriate session lifecycle management. The 1-hour expiration with 10-minute cleanup interval is reasonable.

Note: The cleanup goroutine started in init() runs for the lifetime of the process, which is acceptable for a server application. Consider documenting this behavior.


438-501: LGTM - Batch create route configuration.

The batch create endpoint configuration properly handles both file-based (src as string) and inline (src as array) request formats, with correct provider extraction from context.


503-543: LGTM - Batch list route configuration.

The batch list endpoint properly extracts pagination parameters and converts to Bifrost format.


621-661: LGTM - Batch delete route configuration.

The batch delete endpoint follows the established pattern for batch operations with proper provider extraction and response handling (returns empty object on success).

Comment on lines +1594 to +1647
// GeminiBatchCreateRequestSDK represents the SDK format for batch create requests.
// The SDK sends: batches.create(model="...", src=[...] or src="files/...")
type GeminiBatchCreateRequestSDK struct {
Model string `json:"model,omitempty"`
// Src can be either:
// - A string like "files/display_name" for file-based input
// - An array of inline request objects
Src interface{} `json:"src,omitempty"`
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for functions that handle GeminiBatchCreateRequestSDK
echo "=== Functions handling GeminiBatchCreateRequestSDK ==="
rg -n "GeminiBatchCreateRequestSDK" --type=go core/providers/gemini/

echo -e "\n=== Type assertions and switches on Src ==="
rg -n -C3 "\.Src|type assert|switch.*type" --type=go core/providers/gemini/ | head -100

echo -e "\n=== Looking for conversion/validation functions ==="
ast-grep --pattern $'func $_(req $_ GeminiBatchCreateRequestSDK $_) $$$'

Repository: maximhq/bifrost

Length of output: 924


🏁 Script executed:

#!/bin/bash
# Search for all batch-related functions and converters
echo "=== Batch-related conversion functions ==="
rg -n "batch|Batch" --type=go core/providers/gemini/ | grep -i "func\|convert" | head -30

echo -e "\n=== Custom UnmarshalJSON implementations ==="
rg -n "UnmarshalJSON" --type=go core/providers/gemini/

echo -e "\n=== Files in gemini provider directory ==="
ls -la core/providers/gemini/

echo -e "\n=== Search for any handling of Src field in entire gemini provider ==="
rg -n "Src" --type=go core/providers/gemini/ -B2 -A2

Repository: maximhq/bifrost

Length of output: 6507


🏁 Script executed:

#!/bin/bash
# Read the BatchCreate function to see how it handles GeminiBatchCreateRequestSDK
echo "=== BatchCreate function implementation ==="
sed -n '124,250p' core/providers/gemini/batch.go

echo -e "\n=== buildBatchRequestItems function ==="
sed -n '57,122p' core/providers/gemini/batch.go

Repository: maximhq/bifrost

Length of output: 6307


🏁 Script executed:

#!/bin/bash
# Search for all usages of GeminiBatchCreateRequestSDK
echo "=== All references to GeminiBatchCreateRequestSDK ==="
rg -n "GeminiBatchCreateRequestSDK" --type=go

echo -e "\n=== Check if any SDK types are actually used ==="
rg -n "GeminiBatchCreateRequestSDK|GeminiBatchInlineRequest|GeminiBatchJobResponseSDK" --type=go core/providers/gemini/

echo -e "\n=== Check test file ==="
cat core/providers/gemini/batch_test.go

Repository: maximhq/bifrost

Length of output: 3264


🏁 Script executed:

#!/bin/bash
# Read the genai.go file around the usage points
echo "=== Line 448 area ==="
sed -n '430,470p' transports/bifrost-http/integrations/genai.go

echo -e "\n=== Line 451 area (more context) ==="
sed -n '445,480p' transports/bifrost-http/integrations/genai.go

echo -e "\n=== Line 678 area ==="
sed -n '670,710p' transports/bifrost-http/integrations/genai.go

Repository: maximhq/bifrost

Length of output: 4475


🏁 Script executed:

#!/bin/bash
# Read the complete BatchCreateRequestConverter function
echo "=== Complete BatchCreateRequestConverter function ==="
sed -n '451,550p' transports/bifrost-http/integrations/genai.go

Repository: maximhq/bifrost

Length of output: 3604


🏁 Script executed:

#!/bin/bash
# Check if there's validation in BatchCreate that requires InputFileID or Requests
echo "=== Validation in core provider BatchCreate ==="
sed -n '140,155p' core/providers/gemini/batch.go

Repository: maximhq/bifrost

Length of output: 660


Src field handling lacks explicit error messaging for invalid types.

The Src field in GeminiBatchCreateRequestSDK is processed via type switch in transports/bifrost-http/integrations/genai.go (lines 451–481), supporting both string (file reference) and []interface{} (inline requests) cases. However, three issues exist:

  1. Invalid Src types that are neither string nor []interface{} are silently ignored, resulting in empty Requests and InputFileID being passed to the core provider.
  2. Non-map items within the []interface{} array are silently skipped without indication to the user.
  3. The downstream validation in BatchCreate catches this with the error "either input_file_id or requests must be provided", but this doesn't clarify that the issue originated from an invalid Src format.

Add explicit validation in the HTTP transport layer to return a clear error for unsupported Src types (e.g., "src must be a string (file reference) or array of request objects").

🤖 Prompt for AI Agents
In core/providers/gemini/types.go around lines 1594 to 1602 and in
transports/bifrost-http/integrations/genai.go around lines 451 to 481, the Src
field handling silently ignores unsupported types and non-map array items;
update the transport layer to validate Src explicitly: if Src is a string treat
as file ref, if it's an array iterate and require each item be a map/object and
return a clear HTTP 400 error when any item is not a map, and if Src is any
other type return a clear HTTP 400 error like "src must be a string (file
reference) or array of request objects" so the client receives a precise
validation message instead of downstream generic errors.

Comment on lines +1138 to +1169
// Build the upload URL for phase 2
// Use the request's host and scheme to build the URL
scheme := "http"
if ctx.IsTLS() {
scheme = "https"
}
host := string(ctx.Host())
uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)

fmt.Printf("[DEBUG] extractGeminiFileUploadParams: returning upload URL=%s\n", uploadURL)

// Send the upload URL response
ctx.Response.Header.Set("X-Goog-Upload-URL", uploadURL)
ctx.Response.Header.Set("X-Goog-Upload-Status", "active")
ctx.Response.Header.SetContentType("application/json")
ctx.SetStatusCode(200)

// Return empty JSON object as response body
ctx.SetBody([]byte("{}"))

// Mark that response was written
*bifrostCtx = context.WithValue(*bifrostCtx, contextKeyResponseWritten{}, true)

// Return sentinel error to signal router to skip further processing
return ErrResumableUploadInit
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if there's existing handling for forwarded headers in the codebase
rg -n -i "x-forwarded|forwarded-host|forwarded-proto" --type go

Repository: maximhq/bifrost

Length of output: 294


🏁 Script executed:

# Look at the X-Forwarded-Proto handling in session.go
sed -n '135,180p' transports/bifrost-http/handlers/session.go

Repository: maximhq/bifrost

Length of output: 1471


🏁 Script executed:

# Search for X-Forwarded-Host in the codebase
rg -n "X-Forwarded-Host" --type go

# Also check if ctx.Host() is used in genai.go or if there are URL construction patterns
rg -n "ctx\.Host\(\)" transports/bifrost-http/integrations/genai.go

# Check the broader pattern in genai.go
rg -n "fmt\.Sprintf.*://" transports/bifrost-http/integrations/genai.go | head -20

Repository: maximhq/bifrost

Length of output: 220


🏁 Script executed:

# Check for README or documentation
find . -type f -name "README*" -o -name "*.md" | head -20

# Search for any comments or issues about reverse proxy in the codebase
rg -n -i "reverse.?proxy|proxy|forwarded" --type go --type md | head -30

Repository: maximhq/bifrost

Length of output: 4570


🏁 Script executed:

# Look at more context around the problematic code in genai.go
sed -n '1120,1170p' transports/bifrost-http/integrations/genai.go

# Check if there are other places in genai.go that handle ctx.IsTLS()
rg -n "ctx\.IsTLS\(\)" transports/bifrost-http/integrations/genai.go

Repository: maximhq/bifrost

Length of output: 2080


🏁 Script executed:

# Search for other URL constructions with scheme/host in handlers and integrations
rg -n "fmt\.Sprintf.*https?://" transports/bifrost-http/ --type go | head -20

# Also check if there are any other genai file upload related functions
rg -n "X-Goog-Upload" transports/bifrost-http/ --type go

Repository: maximhq/bifrost

Length of output: 971


🏁 Script executed:

# Search for utility functions that might handle forwarded headers
rg -n "func.*Forward|func.*Scheme|func.*Host" transports/bifrost-http/ --type go | grep -i "forward\|scheme"

# Check if there's a common patterns file or utils
find transports/bifrost-http -name "*util*" -o -name "*helper*" -o -name "*common*" | head -10

Repository: maximhq/bifrost

Length of output: 146


🏁 Script executed:

# Check handlers/utils.go
cat -n transports/bifrost-http/handlers/utils.go | head -80

# Check integrations/utils.go
cat -n transports/bifrost-http/integrations/utils.go | head -80

Repository: maximhq/bifrost

Length of output: 6347


Support reverse proxy headers for resumable upload URL construction.

The upload URL at line 1145 uses ctx.Host() and ctx.IsTLS() directly without considering X-Forwarded-Proto and X-Forwarded-Host headers. When Bifrost runs behind a reverse proxy, this URL will reference internal host/scheme instead of the client-facing address, causing the client's subsequent upload requests to fail. Follow the pattern from session.go (lines 139, 175) which checks X-Forwarded-Proto for HTTPS detection, and similarly add support for X-Forwarded-Host.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai.go around lines 1138 to 1163, the
upload URL is built using ctx.Host() and ctx.IsTLS() which breaks when behind a
reverse proxy; change the logic to first inspect request headers for
X-Forwarded-Proto (use "https" if it contains "https", otherwise "http") and
X-Forwarded-Host (use its value if present and non-empty), falling back to
ctx.IsTLS() and ctx.Host() only if those headers are missing; construct
uploadURL from the chosen scheme and host and keep the rest of the response flow
identical.
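The header-precedence logic the prompt describes can be isolated into a pure function, which keeps the response flow untouched; the function and parameter names below are illustrative, not code from session.go or genai.go.

```go
package main

import "fmt"

// resolveExternalOrigin picks the client-facing scheme and host for URLs
// returned to clients, preferring X-Forwarded-Proto / X-Forwarded-Host
// set by a reverse proxy and falling back to the direct connection's
// TLS state and Host header when those are absent.
func resolveExternalOrigin(xfProto, xfHost string, isTLS bool, directHost string) (scheme, host string) {
	scheme = "http"
	if isTLS {
		scheme = "https"
	}
	switch xfProto {
	case "https":
		scheme = "https"
	case "http":
		scheme = "http"
	}
	host = directHost
	if xfHost != "" {
		host = xfHost
	}
	return scheme, host
}

func main() {
	// Behind a TLS-terminating proxy: internal hop is plain HTTP,
	// but the upload URL must use the external address.
	s, h := resolveExternalOrigin("https", "api.example.com", false, "10.0.0.5:8080")
	fmt.Printf("%s://%s/genai/upload/v1beta/files/resumable/SESSION\n", s, h)
	// https://api.example.com/genai/upload/v1beta/files/resumable/SESSION
}
```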

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 3bf5d39 to 9e1cf19 Compare December 5, 2025 15:27

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
core/providers/gemini/batch.go (1)

887-891: Address the BatchStats computation issue to prevent negative counts.

The SDK converters still contain the issue previously flagged: when resp.RequestCounts.Total is 0 but Completed or Failed are non-zero, the math produces negative PendingRequestCount values.

This affects:

  • ToGeminiBatchJobResponse (lines 887-891)
  • ToGeminiBatchRetrieveResponse (lines 914-918)
  • ToGeminiBatchListResponse (lines 953-957)

Apply defensive computation as suggested in the previous review to ensure valid, non-negative counts.

Also applies to: 914-918, 953-957
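A sketch of the defensive computation in question, assuming (per the earlier review note) that Completed counts all finished requests including failures; the function name is ours, not the converters'.

```go
package main

import "fmt"

// safeBatchStats derives non-negative pending/succeeded counts from raw
// request counters, clamping at zero so that a response with Total == 0
// but non-zero Completed/Failed cannot yield negative values.
func safeBatchStats(total, completed, failed int64) (pending, succeeded int64) {
	succeeded = completed - failed // Completed includes both outcomes
	if succeeded < 0 {
		succeeded = 0
	}
	pending = total - completed
	if pending < 0 {
		pending = 0
	}
	return pending, succeeded
}

func main() {
	// Total missing (0) but 5 requests finished, 2 of them failed.
	pending, succeeded := safeBatchStats(0, 5, 2)
	fmt.Println(pending, succeeded) // 0 3
}
```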

transports/bifrost-http/integrations/router.go (2)

868-892: Remove debug fmt.Printf statements before merging.

These debug statements should be removed or replaced with structured logging before production. This was flagged in a previous review.


1001-1001: Remove debug fmt.Printf statement before merging.

This debug statement should be removed or replaced with structured logging before production. This was flagged in a previous review.

transports/bifrost-http/integrations/genai.go (1)

463-482: Validate Src field type explicitly to provide clear error messages.

The Src field is processed via type switch but silently ignores unsupported types, resulting in empty Requests and InputFileID. This leads to a generic downstream error instead of a clear validation message.

Based on previous review feedback, add explicit validation:

 // Handle src field - can be string (file reference) or array (inline requests)
 switch src := sdkReq.Src.(type) {
 case string:
 	// File-based input: src="files/display_name"
 	bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
 case []interface{}:
 	// Inline requests: src=[{contents: [...], config: {...}}]
 	requests := make([]schemas.BatchRequestItem, 0, len(src))
 	for i, item := range src {
 		if itemMap, ok := item.(map[string]interface{}); ok {
 			customID := fmt.Sprintf("request-%d", i)
 			requests = append(requests, schemas.BatchRequestItem{
 				CustomID: customID,
 				Body:     itemMap,
 			})
+		} else {
+			return nil, fmt.Errorf("src array item %d must be an object, got %T", i, item)
 		}
 	}
 	bifrostReq.Requests = requests
+default:
+	if sdkReq.Src != nil {
+		return nil, errors.New("src must be a string (file reference) or array of request objects")
+	}
 }
🧹 Nitpick comments (2)
core/providers/gemini/files.go (1)

472-485: Consider preserving actual MIME type instead of hardcoding.

ToGeminiFileUploadResponse hardcodes MimeType as "application/octet-stream". The BifrostFileUploadResponse schema doesn't appear to carry the original MIME type, but if the upload request or response contains this information, preserving it would provide more accurate metadata to clients.

If the MIME type is available elsewhere (e.g., from the original request or stored in extra fields), consider passing it through:

-			MimeType:       "application/octet-stream",
+			MimeType:       getMimeTypeOrDefault(resp), // or pass as parameter
transports/bifrost-http/integrations/genai.go (1)

174-180: Consider extracting repeated provider resolution into a helper function.

The pattern of extracting provider from context with a Gemini default is repeated across multiple route converters. This could be simplified with a helper function.

// Helper function to add at the top of the file
func getProviderFromContext(ctx *context.Context) schemas.ModelProvider {
	if p := (*ctx).Value(bifrostContextKeyProvider); p != nil {
		return p.(schemas.ModelProvider)
	}
	return schemas.Gemini
}

Then in each converter:

-// Get provider from context
-provider := schemas.Gemini
-if p := (*ctx).Value(bifrostContextKeyProvider); p != nil {
-	provider = p.(schemas.ModelProvider)
-}
+provider := getProviderFromContext(ctx)

Also applies to: 220-223, 276-279, 327-330, 368-371, 407-410

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3bf5d39 and 9e1cf19.

📒 Files selected for processing (25)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/bedrock/batch.go (1 hunks)
  • core/providers/cerebras/batch.go (1 hunks)
  • core/providers/cohere/batch.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/files.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/batch.go (1 hunks)
  • core/providers/openai/batch.go (1 hunks)
  • core/providers/openrouter/batch.go (1 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (14 hunks)
  • transports/bifrost-http/integrations/router.go (9 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
  • transports/bifrost-http/integrations/utils.go
  • core/providers/azure/batch.go
  • core/providers/cohere/batch.go
  • core/providers/sgl/batch.go
  • core/schemas/bifrost.go
  • core/providers/vertex/batch.go
  • core/providers/mistral/batch.go
  • core/schemas/batch.go
  • core/providers/perplexity/batch.go
  • core/providers/cerebras/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/bedrock/batch.go
  • core/providers/anthropic/batch.go
  • core/bifrost.go
  • core/schemas/provider.go
  • core/providers/ollama/batch.go
  • core/providers/groq/batch.go
  • core/providers/openai/batch.go
  • core/providers/gemini/batch.go
  • core/providers/elevenlabs/batch.go
  • core/providers/parasail/batch.go
  • core/providers/gemini/files.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/openrouter/batch.go
  • core/providers/gemini/types.go
🧬 Code graph analysis (10)
core/providers/bedrock/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/anthropic/batch.go (4)
core/providers/anthropic/anthropic.go (1)
  • AnthropicProvider (21-28)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/bifrost.go (4)
core/schemas/bifrost.go (4)
  • BatchDeleteRequest (103-103)
  • BifrostError (364-373)
  • BifrostErrorExtraFields (431-435)
  • RequestType (83-83)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/provider.go (2)
  • Provider (313-362)
  • CustomProviderConfig (248-254)
core/schemas/account.go (1)
  • Key (8-17)
core/schemas/provider.go (2)
core/schemas/bifrost.go (2)
  • BatchDeleteRequest (103-103)
  • BifrostError (364-373)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/ollama/batch.go (4)
core/providers/ollama/ollama.go (1)
  • OllamaProvider (18-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/openai/batch.go (4)
core/providers/openai/openai.go (1)
  • OpenAIProvider (24-30)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/gemini/batch.go (3)
core/schemas/batch.go (11)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1582-1582)
  • GeminiJobStateRunning (1583-1583)
  • GeminiJobStateSucceeded (1584-1584)
  • GeminiJobStateFailed (1585-1585)
  • GeminiJobStateCancelling (1586-1586)
  • GeminiJobStateCancelled (1587-1587)
  • GeminiBatchJobResponseSDK (1618-1624)
  • GeminiBatchMetadata (1499-1512)
  • GeminiBatchStats (1433-1437)
  • GeminiBatchDest (1533-1536)
  • GeminiBatchListResponseSDK (1627-1630)
core/providers/utils/utils.go (2)
  • CheckOperationAllowed (408-420)
  • NewBifrostOperationError (449-460)
core/providers/elevenlabs/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/parasail/batch.go (4)
core/providers/parasail/parasail.go (1)
  • ParasailProvider (17-22)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/gemini/files.go (2)
core/schemas/files.go (1)
  • BifrostFileUploadResponse (89-105)
ui/lib/constants/logs.ts (1)
  • Status (133-133)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (25)
core/providers/openrouter/batch.go (1)

35-38: LGTM! Consistent with existing batch operation stubs.

The BatchDelete implementation correctly follows the established pattern for unsupported batch operations in the OpenRouter provider.

core/providers/bedrock/batch.go (1)

932-935: LGTM! Consistent stub for unsupported operation.

The BatchDelete stub correctly indicates that batch deletion is not supported by the Bedrock provider, following the same pattern as other providers.

core/providers/groq/batch.go (1)

35-38: LGTM! Consistent with other unsupported batch operations.

The BatchDelete stub appropriately returns an unsupported operation error, matching the pattern used for all batch operations in the Groq provider.

core/providers/openai/batch.go (1)

597-600: LGTM! Appropriate stub for unsupported operation.

The BatchDelete stub correctly indicates that batch deletion is not supported by the OpenAI provider, following the established error handling pattern.

core/schemas/provider.go (1)

182-182: LGTM! Consistent integration of BatchDelete capability.

The changes properly integrate BatchDelete into the provider framework:

  • Added to AllowedRequests for operation gating
  • Integrated into IsOperationAllowed switch statement
  • Added to Provider interface with correct signature

All changes follow the established patterns for other batch operations.

Also applies to: 231-232, 350-351

core/providers/anthropic/batch.go (1)

783-786: LGTM! Appropriate stub for unsupported operation.

The BatchDelete stub correctly indicates that batch deletion is not currently supported by the Anthropic provider, consistent with the error handling pattern used across providers.

core/providers/gemini/batch.go (1)

986-1043: LGTM! Well-implemented BatchDelete for Gemini provider.

The implementation correctly:

  • Validates required batch_id
  • Builds the proper URL with batches/ prefix handling
  • Uses DELETE HTTP method
  • Sets appropriate headers
  • Handles both 200 and 204 success status codes
  • Returns a properly structured response
core/providers/parasail/batch.go (1)

35-38: LGTM! Consistent with other unsupported batch operations.

The BatchDelete stub appropriately returns an unsupported operation error, matching the pattern used for all batch operations in the Parasail provider.

core/bifrost.go (1)

1133-1209: LGTM!

The BatchDeleteRequest method follows the established pattern used by other batch operations (BatchCancelRequest, BatchResultsRequest, etc.) with consistent:

  • Input validation (nil check, provider required, batch_id required)
  • Context defaulting
  • Provider lookup and config retrieval
  • Custom provider base type resolution
  • Key selection for providers requiring keys
  • Retry execution with proper request type
  • Error field augmentation
core/providers/elevenlabs/batch.go (1)

35-38: LGTM!

The BatchDelete stub correctly follows the established pattern for unsupported operations, consistent with the other batch method stubs in this file.

core/providers/ollama/batch.go (1)

35-38: LGTM!

The BatchDelete stub correctly follows the established pattern for unsupported operations, consistent with the other batch method stubs in this file.

transports/bifrost-http/integrations/router.go (5)

88-88: LGTM!

The DeleteRequest field addition to BatchRequest follows the established pattern for other batch request types.


159-161: LGTM!

The BatchDeleteResponseConverter type definition follows the established pattern for other batch response converters.


291-291: LGTM!

The BatchDeleteResponseConverter field in RouteConfig follows the established pattern for other batch response converters.


820-840: LGTM!

The BatchDeleteRequest case in handleBatchRequest follows the exact same pattern as other batch request handlers (BatchCreateRequest, BatchCancelRequest, etc.), with consistent:

  • Nil request validation
  • Client method invocation
  • PostCallback execution
  • Response converter application

422-426: ErrResumableUploadInit is properly defined and exported. The error is declared in genai.go (lines 35-36) as an exported sentinel error with appropriate documentation, and is correctly checked at line 423 in router.go.

core/providers/gemini/files.go (2)

521-549: LGTM!

The helper functions are well-implemented:

  • toGeminiFileState correctly inverts ToBifrostFileStatus for round-trip compatibility
  • formatGeminiTimestamp properly handles the zero-value case and uses UTC
  • safeDerefInt64 is a standard nil-safe dereference pattern

487-506: No action needed. The ToGeminiFileListResponse function correctly does not populate the URI field because FileObject items in the list response do not contain URI information. Unlike ToGeminiFileUploadResponse and ToGeminiFileRetrieveResponse which operate on full response objects with StorageURI fields, the list response contains only basic FileObject metadata without storage URI data.

transports/bifrost-http/integrations/genai.go (4)

664-778: LGTM! Well-structured parameter extraction with proper validation.

The batch parameter extractors include appropriate validation (non-empty batch_id, proper format with "batches/" prefix) and handle provider defaults consistently.
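The validation described here amounts to a small normalization step. A minimal sketch of that pattern, with a hypothetical helper name (not the reviewed code):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeBatchID sketches the validation the extractors perform: reject
// empty IDs and strip an optional "batches/" prefix so downstream code
// always sees the bare ID. The helper name is illustrative.
func normalizeBatchID(raw string) (string, error) {
	id := strings.TrimPrefix(strings.TrimSpace(raw), "batches/")
	if id == "" {
		return "", fmt.Errorf("batch_id must not be empty")
	}
	return id, nil
}

func main() {
	id, err := normalizeBatchID("batches/abc123")
	fmt.Println(id, err) // abc123 <nil>
}
```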


780-806: LGTM! Clear request routing based on upload protocol.

The function properly delegates to specialized parsers based on the upload protocol and content type, providing a clean separation of concerns.


1011-1113: LGTM! Proper session lifecycle management in phase 2.

The phase 2 handler correctly retrieves session metadata, validates it, copies file content, and cleans up the session after extracting provider information. The session cleanup at line 1105 ensures no memory leaks.


1169-1245: LGTM! Consistent parameter extraction pattern across file operations.

The file list/retrieve/delete parameter extractors follow the same provider extraction and validation pattern established in the batch operations, maintaining good consistency across the codebase.

core/providers/gemini/types.go (3)

1578-1589: LGTM! SDK-aligned job state constants for GenAI compatibility.

The GeminiJobState constants are correctly separated from the internal GeminiBatchState constants, allowing proper translation between the Google GenAI SDK format and internal representations.


1603-1651: LGTM! Comprehensive SDK batch type definitions.

The SDK batch types properly mirror the Google GenAI Python SDK format, providing clear request/response structures for batch create, list, retrieve, cancel, and delete operations. The separation of SDK types from internal types maintains a clean boundary between external API surface and internal implementation.


1656-1662: LGTM! GeminiFileUploadRequest properly supports resumable uploads.

The additions of MimeType and ResumableSessionID fields enable proper MIME type handling and multi-phase resumable upload tracking. Removing the Provider field (noted in the summary) reduces public coupling to the schemas package, which is a good architectural decision.

Comment on lines +52 to +68
func init() {
go func() {
ticker := time.NewTicker(10 * time.Minute)
for range ticker.C {
now := time.Now()
uploadSessions.Range(func(key, value interface{}) bool {
if session, ok := value.(*uploadSession); ok {
if now.Sub(session.CreatedAt) > time.Hour {
uploadSessions.Delete(key)
}
}
return true
})
}
}()
}

⚠️ Potential issue | 🟠 Major

Provide a mechanism to stop the cleanup goroutine for graceful shutdown.

The cleanup goroutine runs indefinitely with no way to stop it. If the application needs to shut down gracefully, this goroutine will continue running and prevent clean termination.

Consider using a context or stop channel:

+var cleanupStop = make(chan struct{})
+
 func init() {
 	go func() {
 		ticker := time.NewTicker(10 * time.Minute)
+		defer ticker.Stop()
 		for {
-			for range ticker.C {
+			select {
+			case <-ticker.C:
 				now := time.Now()
 				uploadSessions.Range(func(key, value interface{}) bool {
 					if session, ok := value.(*uploadSession); ok {
 						if now.Sub(session.CreatedAt) > time.Hour {
 							uploadSessions.Delete(key)
 						}
 					}
 					return true
 				})
+			case <-cleanupStop:
+				return
 			}
 		}
 	}()
 }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai.go around lines 52 to 67, the
cleanup goroutine spawned in init() runs forever with no shutdown signal; change
it to accept a cancellable signal (either a package-level stop channel or a
context.Context) and use a select between ticker.C and ctx.Done()/stopChan to
terminate; on shutdown close/ cancel the signal, stop the ticker with
ticker.Stop(), and return from the goroutine so the process can exit cleanly
(also add an exported Shutdown/Close function or wire the context from
application startup to call cancellation).

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (9)
transports/bifrost-http/integrations/router.go (1)

868-893: Remove fmt.Printf debug logging from file request paths

These [DEBUG] fmt.Printf statements should not remain in production request handling; they were already flagged in a prior review. Replace them with structured logging (g.logger.Debug(...)) if you still need diagnostics, or remove them entirely:

  • Lines 868, 875, 879, 881, 883, 887, 890, 892
  • Line 1001 (file response conversion error)

Also applies to: 1001-1001

transports/bifrost-http/integrations/genai.go (5)

45-49: Handle crypto/rand.Read errors in generateSessionID

rand.Read can fail; ignoring its error may yield predictable session IDs, which is undesirable even for upload sessions. This was raised previously and still applies.

Consider:

func generateSessionID() string {
    bytes := make([]byte, 16)
-   rand.Read(bytes)
+   if _, err := rand.Read(bytes); err != nil {
+       // Extremely rare; fall back to time-based ID rather than silently weakening randomness
+       now := time.Now().UnixNano()
+       return fmt.Sprintf("%d-%d", now, now)
+   }
    return hex.EncodeToString(bytes)
}

867-934: Critical: multipart/related parsing still corrupts binary file content

parseGeminiMultipartRelated still converts the full body to a string and splits on string boundaries:

  • parts := strings.Split(string(body), delimiter)
  • Later, file content is assigned via uploadReq.File = []byte(content)

This path corrupts arbitrary binary payloads (null bytes, non‑UTF‑8 sequences) and was previously flagged as critical.

Operate on []byte throughout, e.g.:

- delimiter := "--" + boundary
- parts := strings.Split(string(body), delimiter)
+ delimiter := []byte("--" + boundary)
+ parts := bytes.Split(body, delimiter)

- // work with string `part` and `content`
+ for _, part := range parts {
+     part = bytes.TrimSpace(part)
+     if len(part) == 0 || bytes.Equal(part, []byte("--")) {
+         continue
+     }
+     // Find header/content separator as byte indexes, then:
+     //   headers := part[:headerEnd]
+     //   content := part[contentStart:]
+     // and pass `content` directly into metadata unmarshal or assign to uploadReq.File (copy if needed).
+ }

This avoids string conversion and preserves exact binary content.


792-806: Remove remaining fmt.Printf debug logging in GenAI integration

There are many [DEBUG] fmt.Printf calls left in the Gemini file/resumable helpers (parsers, callbacks, status setters). These were previously flagged and should not remain in production:

  • Around parseGeminiFileUploadRequest / parseGeminiResumableUpload / parseGeminiResumableUploadPhase2
  • In setResumableUploadFinalStatus
  • In extractGeminiResumableUploadParams and extractGeminiFileUploadParams

Replace them with the integration’s structured logger (e.g., logger.Debug(...)) if you need persistent diagnostics, or delete them outright, and drop any now-unused fmt imports.

Also applies to: 814-865, 1013-1063, 1072-1077, 1108-1109, 1125-1125, 1139-1147


1051-1057: Avoid hardcoding file purpose to "batch" in resumable phase‑2

Phase‑2 resumable uploads unconditionally set uploadReq.Purpose = "batch". That may be wrong if the same endpoint is used for non‑batch uploads (e.g., general files for prompting).

Consider:

  • Carrying the intended purpose in the phase‑1 metadata/session and reusing it here; or
  • Allowing the client to specify purpose in metadata with validation and falling back to "batch" only when none is provided.

1138-1145: Make resumable upload URL construction reverse‑proxy aware

The upload URL for phase‑2 is built from ctx.IsTLS() and ctx.Host() only:

scheme := "http"
if ctx.IsTLS() { scheme = "https" }
host := string(ctx.Host())
uploadURL := fmt.Sprintf("%s://%s/...", scheme, host, ...)

Behind a reverse proxy, this will produce internal host/scheme rather than the client‑visible one, breaking resumable uploads.

Read X-Forwarded-Proto and X-Forwarded-Host first (falling back to ctx.IsTLS() / ctx.Host() only if they’re unset), following the pattern used elsewhere in the HTTP layer, then construct uploadURL from those values.

core/providers/gemini/batch.go (3)

870-896: Fix defensive stats computation to prevent negative counts.

The BatchStats math at lines 887-891 can produce negative PendingRequestCount or SuccessfulRequestCount when resp.RequestCounts.Total is zero but Completed or Failed are non-zero (e.g., from BatchRetrieve which leaves Total at zero).

Apply the defensive totals pattern suggested in the previous review to ensure all counts remain non-negative.

-	result.Metadata = &GeminiBatchMetadata{
-		Name:       resp.ID,
-		State:      ToGeminiJobState(resp.Status),
-		CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
-		BatchStats: &GeminiBatchStats{
-			RequestCount:           resp.RequestCounts.Total,
-			PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-			SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-		},
-	}
+	total := resp.RequestCounts.Total
+	if total == 0 {
+		total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
+	}
+	success := resp.RequestCounts.Completed - resp.RequestCounts.Failed
+	if success < 0 {
+		success = 0
+	}
+	pending := total - resp.RequestCounts.Completed - resp.RequestCounts.Failed
+	if pending < 0 {
+		pending = 0
+	}
+
+	result.Metadata = &GeminiBatchMetadata{
+		Name:       resp.ID,
+		State:      ToGeminiJobState(resp.Status),
+		CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
+		BatchStats: &GeminiBatchStats{
+			RequestCount:           total,
+			PendingRequestCount:    pending,
+			SuccessfulRequestCount: success,
+		},
+	}

Based on learnings from previous reviews.


898-933: Apply the same defensive stats fix here.

Lines 914-918 have the identical negative count issue. Use the same defensive computation pattern to derive total, success, and pending with bounds checks.

Based on learnings from previous reviews.


935-972: Apply the same defensive stats fix in the batch loop.

Lines 953-957 within the loop have the same issue. Each batch item needs the defensive totals computation to avoid negative counts.

Based on learnings from previous reviews.

🧹 Nitpick comments (2)
transports/bifrost-http/integrations/router.go (1)

80-89: BatchDelete wiring through router and batch handler is coherent

Adding DeleteRequest to BatchRequest, the BatchDeleteResponseConverter type/field on RouteConfig, and the schemas.BatchDeleteRequest case in handleBatchRequest collectively mirror existing batch operations and correctly delegate to BatchDeleteRequest on the client.

The BatchCreateRequestConverter field now serves as a generic batch converter for all batch types; consider renaming it to BatchRequestConverter in a follow-up to better reflect its broader role.

Also applies to: 159-162, 291-291, 820-841

transports/bifrost-http/integrations/genai.go (1)

438-661: Gemini batch route configs integrate cleanly with Bifrost batch APIs

The new batch routes:

  • Map Gemini SDK batch structs (create/list/retrieve/cancel/delete) to BifrostBatch*Request types, including handling of src (file vs inline) and pageSize/pageToken.
  • Use PreCallbacks to derive provider from headers and normalize batch_id path params (batches/{id}, stripping :cancel where needed).
  • Respect RawResponse passthrough and otherwise convert via gemini.ToGemini* helpers.

This batch plumbing looks correct and consistent with the rest of the router.

BatchCreateRequestConverter on RouteConfig is now used as a generic batch converter for all batch operations; consider renaming it (and associated comments) to something like BatchRequestConverter in a later cleanup for clarity.

Also applies to: 663-778

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3bf5d39 and 9e1cf19.

📒 Files selected for processing (25)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/bedrock/batch.go (1 hunks)
  • core/providers/cerebras/batch.go (1 hunks)
  • core/providers/cohere/batch.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/files.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/batch.go (1 hunks)
  • core/providers/openai/batch.go (1 hunks)
  • core/providers/openrouter/batch.go (1 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (14 hunks)
  • transports/bifrost-http/integrations/router.go (9 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
  • core/providers/bedrock/batch.go
  • transports/bifrost-http/integrations/utils.go
  • core/providers/openai/batch.go
  • core/providers/groq/batch.go
  • core/providers/elevenlabs/batch.go
  • core/providers/openrouter/batch.go
  • core/providers/anthropic/batch.go
  • core/schemas/batch.go
  • core/providers/perplexity/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/mistral/batch.go
  • core/providers/vertex/batch.go
  • core/schemas/provider.go
  • core/schemas/bifrost.go
  • core/providers/azure/batch.go
  • core/providers/cerebras/batch.go
  • core/providers/parasail/batch.go
  • core/providers/gemini/files.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/batch.go
  • core/providers/cohere/batch.go
  • core/bifrost.go
  • core/providers/sgl/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/ollama/batch.go
  • core/providers/gemini/types.go
🧬 Code graph analysis (11)
core/providers/vertex/batch.go (4)
core/providers/vertex/vertex.go (1)
  • VertexProvider (57-62)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (1)
ui/lib/types/config.ts (1)
  • RequestType (122-134)
core/providers/azure/batch.go (5)
core/providers/azure/azure.go (1)
  • AzureProvider (22-27)
core/schemas/account.go (1)
  • Key (8-17)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/cerebras/batch.go (4)
core/providers/cerebras/cerebras.go (1)
  • CerebrasProvider (16-21)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/gemini/files.go (1)
core/schemas/files.go (1)
  • BifrostFileUploadResponse (89-105)
transports/bifrost-http/integrations/genai.go (3)
core/schemas/provider.go (1)
  • Provider (313-362)
core/schemas/files.go (5)
  • BifrostFileUploadRequest (53-66)
  • BifrostFileUploadResponse (89-105)
  • BifrostFileListRequest (108-124)
  • BifrostFileRetrieveRequest (136-145)
  • BifrostFileDeleteRequest (167-176)
core/schemas/models.go (1)
  • Model (109-129)
core/bifrost.go (3)
core/schemas/bifrost.go (3)
  • BatchDeleteRequest (103-103)
  • BifrostError (364-373)
  • RequestType (83-83)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/provider.go (1)
  • Provider (313-362)
core/providers/sgl/batch.go (4)
core/providers/sgl/sgl.go (1)
  • SGLProvider (18-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
transports/bifrost-http/integrations/router.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (36-36)
core/schemas/bifrost.go (2)
  • BatchDeleteRequest (103-103)
  • FileUploadRequest (104-104)
core/schemas/provider.go (1)
  • Provider (313-362)
core/providers/ollama/batch.go (4)
core/providers/ollama/ollama.go (1)
  • OllamaProvider (18-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (25)
core/schemas/bifrost.go (1)

103-103: LGTM!

The BatchDeleteRequest constant addition follows the established pattern and is positioned logically among other batch operation types.

core/schemas/provider.go (3)

182-182: LGTM!

The BatchDelete field addition to AllowedRequests follows the established pattern for other batch operations.


231-232: LGTM!

The BatchDeleteRequest case in IsOperationAllowed correctly returns the BatchDelete flag, maintaining consistency with other batch operation checks.


350-351: LGTM!

The BatchDelete method addition to the Provider interface follows the established signature pattern, ensuring all providers implement batch delete support.

core/providers/vertex/batch.go (1)

35-38: LGTM!

The BatchDelete stub implementation correctly returns an unsupported operation error, consistent with other batch operations in Vertex provider.

core/providers/azure/batch.go (1)

438-441: LGTM!

The BatchDelete stub implementation follows the established pattern for unsupported operations in Azure provider.

core/providers/mistral/batch.go (1)

35-38: LGTM!

The BatchDelete stub implementation correctly returns an unsupported operation error, matching the pattern used for other batch operations.

core/providers/cohere/batch.go (1)

35-38: LGTM!

The BatchDelete stub implementation follows the established unsupported operation pattern for Cohere provider.

core/providers/cerebras/batch.go (1)

35-38: LGTM!

The BatchDelete stub implementation is consistent with other unsupported batch operations in Cerebras provider.

core/providers/ollama/batch.go (1)

35-38: LGTM!

The BatchDelete stub implementation correctly follows the unsupported operation pattern for Ollama provider.

core/providers/sgl/batch.go (1)

35-38: SGL BatchDelete stub matches existing unsupported-operation pattern

Signature and error wiring are consistent with other SGL batch methods and the Provider interface; no issues.

core/providers/parasail/batch.go (1)

35-38: Parasail BatchDelete stub is correctly wired as unsupported

Implementation cleanly returns the standardized unsupported-operation error and matches the Provider interface.

core/bifrost.go (1)

1133-1209: BatchDeleteRequest implementation is consistent with other batch APIs

Validation, config resolution, key selection, retry wiring, and ExtraFields population mirror BatchRetrieve/BatchCancel/BatchResults; behavior looks correct.

core/providers/gemini/files.go (1)

466-485: Typed Gemini upload wrapper looks correct

Wrapper shape ({"file": {...}}) and field mapping from BifrostFileUploadResponse to GeminiFileResponse are coherent; timestamp/status helpers are reused correctly.

transports/bifrost-http/integrations/router.go (1)

421-426: Sentinel handling for resumable-upload init is correctly short‑circuiting

Treating ErrResumableUploadInit as a special case and returning without sending an additional error/response cleanly supports PreCallback‑handled resumable init flows.

transports/bifrost-http/integrations/genai.go (4)

78-132: GenAI chat/embedding/speech/transcription routing and converters look correct

The main /v1beta/models/{model:*} route correctly branches GeminiGenerationRequest into embedding/chat/speech/transcription Bifrost requests and uses the appropriate ToGemini* converters plus streaming config; no functional concerns.


165-205: Gemini file routing (upload, resumable, list, retrieve, delete) is coherently mapped to Bifrost

  • File routes now use Gemini SDK request types and convert to the corresponding BifrostFile*Request with provider taken from context/header.
  • Resumable POST/PUT routes share a consistent phase‑2 parser, converter, and post‑callback, and correctly reuse the same file upload machinery.
  • List/retrieve/delete routes extract IDs/query params and map them cleanly into Bifrost requests, with RawResponse passthrough when present.

Overall, the file API surface looks consistent and aligns well with the core schema types.

Also applies to: 207-314, 318-355, 357-433


1171-1177: Provider and path‑param extraction helpers for file list/retrieve/delete look good

extractGeminiFileListQueryParams, extractGeminiFileRetrieveParams, and extractGeminiFileDeleteParams:

  • Default provider to Gemini or take it from x-model-provider.
  • Safely parse pageSize/pageToken and file_id from query/path.
  • Populate the Gemini SDK request structs used by the converters and set provider in the Bifrost context.

These helpers align with the rest of the integration’s parameter handling.

Also applies to: 1195-1219, 1221-1245


1249-1252: Batch route registration is correctly hooked into GenAI router

Including CreateGenAIBatchRouteConfigs("/genai", handlerStore) alongside the existing GenAI and file routes cleanly wires the new batch endpoints into the GenAI router.

core/providers/gemini/batch.go (3)

846-868: LGTM! Status mapping is comprehensive.

The conversion logic correctly maps all Bifrost batch statuses to their Gemini SDK equivalents, with a sensible default fallback.
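A mapping of this shape is typically a switch with an explicit default; a sketch using illustrative state names (stand-ins for the actual Bifrost and Gemini SDK constants used by ToGeminiJobState):

```go
package main

import "fmt"

// toJobState sketches the status translation the review describes. The
// string values here are illustrative, not the real constants.
func toJobState(status string) string {
	switch status {
	case "in_progress":
		return "JOB_STATE_RUNNING"
	case "completed":
		return "JOB_STATE_SUCCEEDED"
	case "failed":
		return "JOB_STATE_FAILED"
	case "cancelled":
		return "JOB_STATE_CANCELLED"
	default:
		// sensible fallback for unknown or newly added statuses
		return "JOB_STATE_UNSPECIFIED"
	}
}

func main() {
	fmt.Println(toJobState("completed")) // JOB_STATE_SUCCEEDED
	fmt.Println(toJobState("mystery"))   // JOB_STATE_UNSPECIFIED
}
```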


974-984: LGTM! Simple and correct converter.

This function appropriately converts only the essential fields without complex computations.


986-1043: LGTM! BatchDelete implementation follows established patterns.

The method correctly:

  • Validates required fields
  • Handles both batch ID formats (with/without "batches/" prefix)
  • Accepts appropriate HTTP status codes for DELETE (200 and 204)
  • Returns proper response with latency metadata
core/providers/gemini/types.go (3)

1578-1589: LGTM! SDK job state constants are well-defined.

The constants correctly distinguish SDK job states from internal batch states, with clear naming and appropriate values.


1591-1651: LGTM! SDK batch types are correctly structured.

The type definitions appropriately model the Google GenAI SDK batch API surface. The Src interface{} field in GeminiBatchCreateRequestSDK is correctly typed to accept both string (file reference) and array (inline requests) formats.

Note: The past review comment about Src field validation applies to the transport layer handling (transports/bifrost-http/integrations/genai.go), not these type definitions.


1655-1662: LGTM! File upload request properly updated for resumable uploads.

The changes appropriately:

  • Add MimeType field for explicit MIME type handling
  • Add internal ResumableSessionID field for resumable upload session tracking
  • Remove Provider field to simplify the public API

These modifications align with the resumable upload functionality mentioned in the PR objectives.

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 9e1cf19 to 53d30ec (December 5, 2025 18:24)
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (9)
transports/bifrost-http/integrations/genai.go (6)

51-67: Consider adding a shutdown signal for the cleanup goroutine

The init-spawned cleanup goroutine runs indefinitely with no way to stop it, which can complicate graceful shutdown. If your process supports coordinated shutdown, consider wiring a stop channel or context into this ticker loop so it can exit cleanly.


23-49: Handle crypto/rand.Read errors when generating session IDs

generateSessionID ignores the error from rand.Read, which is rare but can lead to weak or predictable IDs if the randomness source fails.

Consider handling the error explicitly, e.g.:

 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
-	return hex.EncodeToString(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Extremely rare; fall back to a time-based ID rather than silently using zeroed bytes
+		return fmt.Sprintf("%d-%d", time.Now().UnixNano(), time.Now().UnixNano())
+	}
+	return hex.EncodeToString(bytes)
 }

438-661: Batch route configuration is structurally sound but needs stricter Src validation

The new CreateGenAIBatchRouteConfigs correctly wires Gemini SDK batch routes (create/list/retrieve/cancel/delete) to Bifrost batch requests and response converters, using PreCallbacks to derive provider, model, and batch IDs.

However, in the create converter, Src handling silently ignores unsupported types and non-map array items:

switch src := sdkReq.Src.(type) {
case string:
    // ok
case []interface{}:
    for i, item := range src {
        if itemMap, ok := item.(map[string]interface{}); ok {
            // ok
        }
        // invalid items silently skipped
    }
    // other types fall through with empty Requests/InputFileID
}

For invalid Src inputs this flows into core, which later returns a generic “either input_file_id or requests must be provided”.

It would be better to validate at the transport layer and return a clear 400, e.g.:

-	switch src := sdkReq.Src.(type) {
+	switch src := sdkReq.Src.(type) {
 	case string:
 		bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
 	case []interface{}:
 		requests := make([]schemas.BatchRequestItem, 0, len(src))
 		for i, item := range src {
-			if itemMap, ok := item.(map[string]interface{}); ok {
-				customID := fmt.Sprintf("request-%d", i)
-				requests = append(requests, schemas.BatchRequestItem{
-					CustomID: customID,
-					Body:     itemMap,
-				})
-			}
+			itemMap, ok := item.(map[string]interface{})
+			if !ok {
+				return nil, fmt.Errorf("src[%d] must be an object", i)
+			}
+			customID := fmt.Sprintf("request-%d", i)
+			requests = append(requests, schemas.BatchRequestItem{
+				CustomID: customID,
+				Body:     itemMap,
+			})
 		}
 		bifrostReq.Requests = requests
+	default:
+		return nil, fmt.Errorf("src must be a string (file reference) or array of request objects")
 	}

so clients receive specific feedback when src has the wrong shape.


1011-1063: Hardcoded Purpose may be too narrow for resumable uploads

In parseGeminiResumableUploadPhase2, uploadReq.Purpose is always set to "batch". If this resumable pathway is ever reused for non-batch uploads (e.g., general file storage for other Gemini features), this hard-coding will be incorrect.

Consider either:

  • Passing the purpose from phase-1 metadata and storing it in the uploadSession, then using that here, or
  • At least making "batch" a default only when no explicit purpose is known.
-	uploadReq.Filename = session.Filename
-	uploadReq.MimeType = session.MimeType
-	uploadReq.Purpose = "batch" // Default purpose for file uploads via GenAI API
+	uploadReq.Filename = session.Filename
+	uploadReq.MimeType = session.MimeType
+	if session.Purpose != "" {
+		uploadReq.Purpose = session.Purpose
+	} else {
+		uploadReq.Purpose = "batch"
+	}

(with Purpose added to uploadSession if needed).


1115-1163: Support X-Forwarded headers when building resumable upload URL

extractGeminiFileUploadParams builds the phase-2 upload URL using ctx.IsTLS() and ctx.Host() only:

scheme := "http"
if ctx.IsTLS() {
    scheme = "https"
}
host := string(ctx.Host())
uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)

Behind a reverse proxy, this can yield an internal host/scheme instead of the client-facing address, breaking the client’s follow-up upload call.

Consider honoring X-Forwarded-Proto and X-Forwarded-Host first, falling back to ctx.IsTLS()/ctx.Host():

-	scheme := "http"
-	if ctx.IsTLS() {
-		scheme = "https"
-	}
-	host := string(ctx.Host())
+	scheme := "http"
+	if xfProto := strings.ToLower(string(ctx.Request.Header.Peek("X-Forwarded-Proto"))); strings.Contains(xfProto, "https") {
+		scheme = "https"
+	} else if ctx.IsTLS() {
+		scheme = "https"
+	}
+
+	host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
+	if host == "" {
+		host = string(ctx.Host())
+	}

so the generated URL works correctly when Bifrost is deployed behind a proxy.


780-793: Remove or replace [DEBUG] fmt.Printf logging with structured logger

There are numerous fmt.Printf calls in the new resumable/file/batch plumbing (e.g., parseGeminiFileUploadRequest, parseGeminiResumableUpload, parseGeminiResumableUploadPhase2, setResumableUploadFinalStatus, extractGeminiResumableUploadParams, extractGeminiFileUploadParams) that:

  • Print paths, session IDs, providers, filenames, and sometimes full response bodies.
  • Bypass the existing logger abstraction.
  • Risk leaking sensitive data to stdout in production.

These should be removed or migrated to logger.Debug(...) with carefully chosen, non-sensitive fields. For example:

-	fmt.Printf("[DEBUG] parseGeminiResumableUploadPhase2: bodyLen=%d, filename=%s, provider=%s\n", len(body), session.Filename, session.Provider)
+	// logger.Debug("parseGeminiResumableUploadPhase2", "bodyLen", len(body), "filename", session.Filename, "provider", session.Provider)

and similar for the other debug statements.

Also applies to: 814-815, 824-836, 841-842, 860-862, 1013-1014, 1046-1047, 1071-1077, 1088-1109, 1125-1130, 1147-1157

transports/bifrost-http/integrations/utils.go (1)

194-197: Remove debug log that prints full response body

The fmt.Printf in sendSuccess logs the entire response payload, which can expose sensitive data (PII, API keys, tokens) and should not be present in production. The explicit Content-Length header is redundant because SetBody already sets it.

Recommend removing the debug line (or replacing it with structured logging that omits bodies):

-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
-	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
+	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
+	ctx.SetBody(responseBody)
transports/bifrost-http/integrations/router.go (1)

868-893: Remove file-path debug fmt.Printf statements

The [DEBUG] fmt.Printf calls in handleFileRequest (logging provider, purpose, filenames, errors, and conversion failures) bypass the structured logger and may leak sensitive file metadata or error details. They should not remain in production.

Recommend removing them or switching to g.logger.Debug(...) with sanitized fields:

-		fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename)
...
-			fmt.Printf("[DEBUG] router: FileUploadRequest error: %s (provider=%s)\n", errMsg, fileReq.UploadRequest.Provider)
...
-		fmt.Printf("[DEBUG] router: FileUploadRequest success, response ID=%s\n", fileResponse.ID)
...
-			fmt.Printf("[DEBUG] router: calling PostCallback\n")
...
-				fmt.Printf("[DEBUG] router: PostCallback error: %v\n", err)
...
-			fmt.Printf("[DEBUG] router: PostCallback success\n")
...
-			fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n")
...
-			fmt.Printf("[DEBUG] router: FileUploadResponseConverter done, err=%v\n", err)
...
-	if err != nil {
-		fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err)
+	if err != nil {

Also applies to: 1000-1001

core/providers/gemini/batch.go (1)

846-972: Fix BatchStats math in SDK converters to avoid negative pending/success counts

ToGeminiBatchJobResponse, ToGeminiBatchRetrieveResponse, and ToGeminiBatchListResponse currently compute:

  • RequestCount = resp.RequestCounts.Total
  • PendingRequestCount = resp.RequestCounts.Total - resp.RequestCounts.Completed
  • SuccessfulRequestCount = resp.RequestCounts.Completed - resp.RequestCounts.Failed

When Total is left at zero (as in BatchRetrieve, where only Completed/Failed are populated), this can produce negative pending counts and inconsistent totals.

Derive totals defensively and clamp pending to non-negative, e.g.:

-	result := &GeminiBatchJobResponseSDK{
-		Name:  resp.ID,
-		State: ToGeminiJobState(resp.Status),
-	}
-
-	// Add metadata if available
-	if resp.CreatedAt > 0 {
-		result.Metadata = &GeminiBatchMetadata{
-			Name:       resp.ID,
-			State:      ToGeminiJobState(resp.Status),
-			CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
-			BatchStats: &GeminiBatchStats{
-				RequestCount:           resp.RequestCounts.Total,
-				PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-				SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-			},
-		}
-	}
+	result := &GeminiBatchJobResponseSDK{
+		Name:  resp.ID,
+		State: ToGeminiJobState(resp.Status),
+	}
+
+	if resp.CreatedAt > 0 {
+		total := resp.RequestCounts.Total
+		if total == 0 {
+			// Derive total from known components when not explicitly set
+			total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
+		}
+		success := resp.RequestCounts.Completed - resp.RequestCounts.Failed
+		if success < 0 {
+			success = 0
+		}
+		pending := total - (success + resp.RequestCounts.Failed)
+		if pending < 0 {
+			pending = 0
+		}
+
+		result.Metadata = &GeminiBatchMetadata{
+			Name:       resp.ID,
+			State:      ToGeminiJobState(resp.Status),
+			CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
+			BatchStats: &GeminiBatchStats{
+				RequestCount:           total,
+				PendingRequestCount:    pending,
+				SuccessfulRequestCount: success,
+			},
+		}
+	}

and mirror the same pattern in ToGeminiBatchRetrieveResponse and inside the loop in ToGeminiBatchListResponse when populating BatchStats, so invariants like RequestCount ≥ 0 and PendingRequestCount ≥ 0 always hold.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9e1cf19 and 53d30ec.

📒 Files selected for processing (25)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/bedrock/batch.go (1 hunks)
  • core/providers/cerebras/batch.go (1 hunks)
  • core/providers/cohere/batch.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/files.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/batch.go (1 hunks)
  • core/providers/openai/batch.go (1 hunks)
  • core/providers/openrouter/batch.go (1 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (14 hunks)
  • transports/bifrost-http/integrations/router.go (9 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
  • core/providers/parasail/batch.go
  • core/providers/mistral/batch.go
  • core/schemas/batch.go
  • core/bifrost.go
  • core/providers/vertex/batch.go
  • core/providers/cohere/batch.go
  • core/providers/anthropic/batch.go
  • core/schemas/bifrost.go
  • core/providers/groq/batch.go
  • core/schemas/provider.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/openrouter/batch.go
  • core/providers/perplexity/batch.go
  • core/providers/sgl/batch.go
  • core/providers/azure/batch.go
  • core/providers/bedrock/batch.go
  • core/providers/cerebras/batch.go
  • core/providers/elevenlabs/batch.go
  • core/providers/openai/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/gemini/types.go
  • core/providers/gemini/files.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/batch.go
  • transports/bifrost-http/integrations/utils.go
  • core/providers/ollama/batch.go
🧬 Code graph analysis (11)
core/providers/openrouter/batch.go (5)
core/providers/openrouter/openrouter.go (1)
  • OpenRouterProvider (18-23)
core/schemas/account.go (1)
  • Key (8-17)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/perplexity/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/sgl/batch.go (2)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/azure/batch.go (4)
core/providers/azure/azure.go (1)
  • AzureProvider (22-27)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/bedrock/batch.go (4)
core/providers/bedrock/bedrock.go (1)
  • BedrockProvider (28-34)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/cerebras/batch.go (3)
core/providers/cerebras/cerebras.go (1)
  • CerebrasProvider (16-21)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/openai/batch.go (4)
core/providers/openai/openai.go (1)
  • OpenAIProvider (24-30)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
transports/bifrost-http/integrations/router.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (36-36)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/providers/gemini/files.go (1)
core/schemas/files.go (1)
  • BifrostFileUploadResponse (89-105)
transports/bifrost-http/integrations/genai.go (4)
core/schemas/provider.go (1)
  • Provider (313-362)
core/schemas/bifrost.go (4)
  • ModelProvider (32-32)
  • Gemini (48-48)
  • FileUploadRequest (104-104)
  • BatchCreateRequest (98-98)
core/providers/gemini/types.go (3)
  • GeminiFileUploadRequest (1656-1662)
  • Type (779-779)
  • GeminiBatchCreateRequestSDK (1596-1602)
core/schemas/batch.go (2)
  • BifrostBatchCreateRequest (65-82)
  • BatchRequestItem (31-37)
core/providers/ollama/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (22)
core/providers/bedrock/batch.go (1)

931-935: Unsupported BatchDelete stub looks correct and consistent with the stack.

The BatchDelete method cleanly returns a structured unsupported-operation error keyed by schemas.BatchDeleteRequest and provider.GetProviderKey(), matching the new BatchDelete flow and the pattern used for other non‑Gemini providers in this stack. No additional wiring or validation is needed here until Bedrock gains native batch delete support.

core/providers/perplexity/batch.go (1)

35-38: LGTM! Consistent stub implementation.

The BatchDelete method correctly follows the established pattern for unsupported batch operations in the Perplexity provider, matching the approach used by the other five batch methods in this file.

core/providers/elevenlabs/batch.go (1)

35-38: LGTM! Consistent stub implementation.

The BatchDelete method correctly follows the established pattern for unsupported operations in the Elevenlabs provider, matching the implementation style of the other five batch operation methods.

core/providers/ollama/batch.go (1)

35-38: BatchDelete stub correctly mirrors other unsupported batch operations

The BatchDelete implementation cleanly matches the existing Batch* unsupported stubs (same error helper, correct BatchDeleteRequest type, provider key usage), satisfying the updated provider interface without changing Ollama behavior. No issues from a correctness or integration perspective.

core/providers/sgl/batch.go (1)

35-38: LGTM! Consistent implementation following established pattern.

The BatchDelete method correctly follows the same pattern as the other five Batch* methods in this file, appropriately returning an unsupported operation error for the SGL provider.

core/providers/azure/batch.go (1)

438-441: LGTM! Stub implementation follows the established pattern.

The BatchDelete stub correctly returns an unsupported operation error, consistent with the PR's objective to provide primary BatchDelete support for Gemini while adding interface stubs for other providers. The implementation aligns with Azure's other batch operation patterns in this file.

core/providers/cerebras/batch.go (1)

35-38: LGTM! Consistent stub implementation.

The BatchDelete method correctly follows the established pattern of other Batch* stubs in this file. The implementation appropriately returns an unsupported operation error using the correct request type constant and provider key.

core/providers/openai/batch.go (1)

597-600: Implementation is correct and follows established pattern for unsupported operations.

The stub correctly returns an unsupported operation error without a CheckOperationAllowed check. This pattern is consistent across all unsupported operation stubs in the codebase (vertex, sgl, parasail, openrouter, and perplexity providers all use the same approach), and OpenAI's Batch API does not provide a delete endpoint. The implementation is appropriate.

core/providers/gemini/files.go (1)

466-485: Gemini upload response wrapper looks correct and SDK-aligned

The GeminiFileUploadResponseWrapper and updated ToGeminiFileUploadResponse cleanly wrap the file object under "file" and map the core fields as expected; no functional issues spotted.

core/providers/gemini/types.go (2)

1568-1651: SDK batch/job state and request/response types are well-shaped

The added Gemini job-state constants and SDK-facing batch structs (GeminiBatchCreateRequestSDK, GeminiBatchJobResponseSDK, etc.) match the GenAI SDK surface and integrate cleanly with the new converters in gemini/batch.go.


1655-1679: File request structs align with new file routing

GeminiFileUploadRequest and the list/retrieve/delete request types line up with the GenAI file routes and provider conversions in genai.go; structure and JSON tags look correct.

core/providers/openrouter/batch.go (1)

35-38: OpenRouter BatchDelete stub is consistent with other unsupported batch methods

The new BatchDelete implementation correctly returns NewUnsupportedOperationError and matches the pattern used by the other OpenRouter batch methods.

transports/bifrost-http/integrations/router.go (3)

80-89: BatchDelete wiring into batch request/route config looks consistent

Extending BatchRequest with DeleteRequest and adding BatchDeleteResponseConverter in RouteConfig cleanly integrates batch delete into the existing batch plumbing; naming and usage are consistent with the other batch fields.

Also applies to: 159-162, 271-292


421-426: Resumable-upload sentinel handling in PreCallback is safe

Special-casing ErrResumableUploadInit to return early from createHandler correctly skips Bifrost execution when the PreCallback has already written the HTTP response, without impacting normal error handling for other cases.


820-841: BatchDelete handler mirrors other batch operations correctly

The new schemas.BatchDeleteRequest branch in handleBatchRequest validates DeleteRequest, calls BatchDeleteRequest on the client, runs PostCallback, and uses BatchDeleteResponseConverter when present. This matches the pattern for the other batch operations and should behave as expected.

core/providers/gemini/batch.go (1)

986-1043: Gemini BatchDelete implementation is correct and matches other batch methods

BatchDelete validates batch_id, builds the proper /batches/{id} DELETE URL (handling both bare IDs and batches/…), sends the request with x-goog-api-key, and returns a well-formed BifrostBatchDeleteResponse on 200/204. This is consistent with the other Gemini batch operations.

transports/bifrost-http/integrations/genai.go (6)

165-205: GenAI file route configs and conversions look coherent

The file routes (/upload/v1beta/files, /v1beta/files, /v1beta/files/{file_id} with GET/DELETE) correctly:

  • Use Gemini-specific request types (GeminiFileUploadRequest, GeminiFileListRequest, etc.).
  • Convert to the corresponding Bifrost file requests with provider derived from context.
  • Use appropriate PreCallbacks to populate provider, query params, and path params.

This wiring matches the Gemini provider’s file API expectations.

Also applies to: 322-337, 357-433


780-806: Multipart and raw upload parsers are reasonable

parseGeminiFileUploadRequest correctly dispatches between resumable/multipart/raw paths based on headers, while parseGeminiMultipartUpload and parseGeminiRawUpload assemble GeminiFileUploadRequest with file bytes and best-effort filename extraction. The overall parsing strategy looks sound given fasthttp’s primitives.

Also applies to: 936-1009


808-865: Resumable upload parsing is functionally correct; be aware of multipart/related edge cases

parseGeminiResumableUpload and parseGeminiMultipartRelated correctly distinguish:

  • JSON-only phase-1 metadata (storing session with display name, MIME type, size).
  • Multipart/related with metadata + file content.
  • Raw body uploads as a fall-back.

While the current multipart/related parsing via manual boundary splitting is a bit low-level, it is functionally consistent; if you later see interoperability issues with less standard clients, consider moving to a dedicated multipart parser for robustness.

Also applies to: 867-934


1081-1113: Resumable phase-2 PreCallback correctly derives provider and original filename

extractGeminiResumableUploadParams:

  • Validates session_id.
  • Loads provider and original filename from the stored session.
  • Deletes the session once data is extracted.
  • Stores provider and filename in the Bifrost context for downstream converters.

This is a good separation of concerns for provider/filename propagation.


1195-1245: File retrieve/delete param extractors are straightforward and correct

extractGeminiFileRetrieveParams and extractGeminiFileDeleteParams:

  • Normalize provider from x-model-provider (defaulting to Gemini).
  • Validate and extract file_id from the path.
  • Populate the appropriate Gemini request structs.

This is the right place to centralize that wiring.


1247-1252: Batch routes correctly integrated into GenAI router

Adding CreateGenAIBatchRouteConfigs("/genai", handlerStore) to NewGenAIRouter ensures the new batch endpoints are registered alongside the existing GenAI and file routes without changing their behavior.

Comment on lines +1066 to +1085
// setResumableUploadFinalStatus sets the X-Goog-Upload-Status header to "final" for phase 2 responses
func setResumableUploadFinalStatus(ctx *fasthttp.RequestCtx, req interface{}, resp interface{}) error {
// Set the upload status to final to signal completion of resumable upload
ctx.Response.Header.Set("X-Goog-Upload-Status", "final")

// Log the response for debugging
respJSON, _ := sonic.Marshal(resp)
fmt.Printf("[DEBUG] setResumableUploadFinalStatus: set X-Goog-Upload-Status=final, response body=%s\n", string(respJSON))

// Also log the full response headers for debugging
fmt.Printf("[DEBUG] setResumableUploadFinalStatus: status code=%d\n", ctx.Response.StatusCode())

return nil
}

⚠️ Potential issue | 🟡 Minor

Post-callback helper is fine; consider dropping debug prints if not needed

setResumableUploadFinalStatus correctly sets X-Goog-Upload-Status: final for phase-2 responses. The debug logging of full response JSON and status code is useful during development but should be dropped or moved to the structured logger before production.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai.go around lines 1066 to 1079, drop
the two fmt.Printf debug prints (and the sonic.Marshal call) or replace them
with the project's structured logger at debug level; specifically remove
marshaling of the entire response for performance, and either call the existing
logger (e.g., logger.Debugf/Debug) to log a concise message including status or
completely remove the debug logging for production. Ensure no unused imports
remain after removing fmt/sonic.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (14)
transports/bifrost-http/integrations/utils.go (1)

196-196: Remove debug log that exposes full response body.

This debug log prints the complete response body, which may contain sensitive data (PII, tokens, credentials). This should be removed before merging.

 	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
 	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
 }
transports/bifrost-http/integrations/genai.go (9)

52-67: Cleanup goroutine runs forever with no shutdown signal.

The init goroutine loops on ticker.C with no way to stop, and the ticker is never closed. This can block graceful shutdown or leak goroutines in tests. Consider adding a package‑level stop channel or context plus defer ticker.Stop() so the owning process can terminate the cleaner cleanly.


780-865: Resumable upload phase‑1 metadata handling is solid but logs should not use fmt.Printf.

The resumable handler:

  • Correctly branches on multipart/related vs JSON metadata vs raw bytes.
  • Parses snake_case metadata into GeminiFileUploadRequest.
  • Creates and stores an uploadSession with filename/MIME/size, and associates a ResumableSessionID.

However, multiple [DEBUG] fmt.Printf calls (content type, body length, parsed metadata, session ID, raw content) should not be left in production. Either:

  • Remove these lines entirely, or
  • Replace with the structured logger used elsewhere (e.g., logger.Debugf in this package).

This also applies to similar debug prints in nearby functions.


1011-1063: Resumable phase‑2 parser is mostly correct but hardcodes purpose to "batch".

parseGeminiResumableUploadPhase2:

  • Validates session_id from the path.
  • Loads the session, ensuring it exists and has valid type.
  • Copies the body into uploadReq.File and propagates filename/MIME from the session.
  • Stores ResumableSessionID for later PreCallback use.

But it unconditionally sets uploadReq.Purpose = "batch". If this resumable endpoint is or will be used for non‑batch uploads (e.g., general file storage for chat context), hardcoding the purpose could be incorrect and surprising. Prefer:

  • Reading an explicit purpose from metadata in phase‑1 (with validation), and
  • Falling back to "batch" only when no explicit purpose is provided.

1066-1079: setResumableUploadFinalStatus should not rely on fmt.Printf for logging.

Functionally, this callback correctly:

  • Sets X-Goog-Upload-Status: final.
  • Logs the response and status code.

Swap fmt.Printf for the integration’s logger (or remove the logs) to avoid writing debug output directly to stdout in production.


1081-1113: Resumable phase‑2 provider/filename extraction is good; debug print should use logger.

extractGeminiResumableUploadParams:

  • Validates session_id.
  • Loads provider + original filename from the session, defaulting provider to Gemini.
  • Cleans up the session entry after extraction.
  • Stores both provider and original filename into the Bifrost context.

This is the right place to bind provider and original display name. The [DEBUG] fmt.Printf should be converted to structured logging or removed.


45-49: Handle crypto/rand.Read failure when generating session IDs.

generateSessionID ignores the error from rand.Read, which can silently fall back to zeroed bytes and weaken ID unpredictability if the RNG fails. At minimum, check the error and either panic/log‑fatal or return a clearly invalid ID.

 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Treat this as unrecoverable; adjust to your logging policy.
+		panic(fmt.Sprintf("failed to generate secure session ID: %v", err))
+	}
 	return hex.EncodeToString(bytes)
 }

438-501: Validate Src type and contents explicitly for batch create.

GeminiBatchCreateRequestSDK.Src accepts string (file ref) or []interface{} (inline requests), but unsupported types and non‑map items in the slice are silently ignored, letting an invalid src bubble down and trigger the generic "either input_file_id or requests must be provided" error in the provider. That makes debugging client mistakes hard.

Consider validating at this layer:

  • If Src is neither string nor []interface{}: return a clear 400 error like “src must be a string (file reference) or array of request objects”.
  • If Src is []interface{}, require each element to be a map[string]interface{} (or a strongly typed inline struct); if any element is not, return a 400 with a precise message rather than silently skipping.

This keeps API feedback actionable and avoids opaque downstream failures.


867-934: Critical: multipart/related parsing corrupts binary file content.

parseGeminiMultipartRelated converts the raw body to a string and back:

  • parts := strings.Split(string(body), delimiter)
  • Later assigns file content via uploadReq.File = []byte(content).

This will corrupt arbitrary binary data, since string(body) assumes UTF‑8 and may mangle embedded NULs or arbitrary bytes. All splitting and slicing must be done on []byte.

Refactor to operate purely on []byte, for example:

- delimiter := "--" + boundary
- parts := strings.Split(string(body), delimiter)
+ delimiter := []byte("--" + boundary)
+ parts := bytes.Split(body, delimiter)

- headerEnd := strings.Index(part, "\r\n\r\n")
+ headerEnd := bytes.Index(part, []byte("\r\n\r\n"))
  // ...

- headers := part[:headerEnd]
- content := part[headerEnd:]
+ headers := string(part[:headerEnd]) // headers can be treated as text
+ content := part[headerEnd:]

- uploadReq.File = []byte(content)
+ uploadReq.File = append([]byte(nil), content...) // copy raw bytes

Ensure all other operations (e.g., trimming, JSON unmarshal of metadata) only convert the header/JSON portion to string, never the raw file bytes.


1115-1163: Honor reverse‑proxy headers when constructing resumable upload URL.

extractGeminiFileUploadParams builds the phase‑2 upload URL using ctx.IsTLS() and ctx.Host(), which will reflect the internal server address when running behind a reverse proxy. Clients will then receive an unusable URL.

Follow the existing pattern in session.go and elsewhere:

  • Prefer X-Forwarded-Proto (or Forwarded) to determine http vs https.
  • Prefer X-Forwarded-Host for the host when present.
  • Fall back to ctx.IsTLS() / ctx.Host() only if no forwarded headers exist.

This ensures the upload URL matches the external address seen by clients.

transports/bifrost-http/integrations/router.go (2)

855-893: Remove fmt.Printf debug logs from file upload handling.

Within handleFileRequest’s FileUploadRequest case, multiple [DEBUG] fmt.Printf calls log provider, purpose, filename, errors, callbacks, and converter activity. These should not be left in the router:

  • They bypass the structured logger and clutter stdout.
  • They can leak filenames/purposes in logs unexpectedly.

Either remove them entirely or replace with g.logger.Debug(...) at the appropriate points if this visibility is still needed.


1000-1012: Remove debug fmt.Printf on file response conversion errors.

The final if err != nil block logs conversion errors via fmt.Printf before sending an error response:

if err != nil {
    fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err)
    g.sendError(...)
}

This should either:

  • Use g.logger.Debug/Error with structured context, or
  • Drop the print entirely and rely on centralized logging.

Avoiding direct fmt.Printf keeps logging consistent and production‑safe.

core/providers/gemini/batch.go (2)

870-896: Fix BatchStats math to avoid negative or inconsistent counts.

The current logic:

RequestCount:           resp.RequestCounts.Total,
PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,

assumes:

  • Total is always populated, and
  • Completed includes both successes and failures.

But in some flows (e.g., BatchRetrieve, listing), Total is zero while Completed/Failed are non‑zero, yielding negative pending counts and incorrect success numbers.

Recommend deriving totals defensively, e.g.:

total := resp.RequestCounts.Total
if total == 0 {
    total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
}
success := resp.RequestCounts.Completed - resp.RequestCounts.Failed
if success < 0 {
    success = 0
}
pending := total - (success + resp.RequestCounts.Failed)
if pending < 0 {
    pending = 0
}

BatchStats: &GeminiBatchStats{
    RequestCount:           total,
    PendingRequestCount:    pending,
    SuccessfulRequestCount: success,
}

and mirror the same computation in ToGeminiBatchRetrieveResponse and ToGeminiBatchListResponse so invariants hold (RequestCount ≥ 0, PendingRequestCount ≥ 0, and RequestCount ≈ success + failures + pending).


935-972: ToGeminiBatchListResponse follows the same pattern; reuse fixed stats computation.

The list converter builds GeminiBatchJobResponseSDK entries from the list data, wiring IDs, states, and timestamps into metadata. Apply the same defensive stats computation here to avoid negative pending counts when Total is zero or not provided by upstream.

🧹 Nitpick comments (5)
core/providers/gemini/files.go (2)

466-470: Misleading comment: fields use camelCase, not snake_case.

The comment states "Uses snake_case field names to match Google's API format," but GeminiFileResponse uses camelCase JSON tags (e.g., displayName, mimeType, sizeBytes). This is actually correct for Google's Gemini API. Consider updating the comment to reflect the actual casing.

 // ToGeminiFileUploadResponse converts a Bifrost file upload response to Gemini format.
-// Uses snake_case field names to match Google's API format.
+// Uses camelCase field names to match Google's Gemini API format.
 // GeminiFileUploadResponseWrapper is a wrapper that contains the file response for the upload API.
 type GeminiFileUploadResponseWrapper struct {
 	File GeminiFileResponse `json:"file"`
 }

472-485: Consider preserving actual MIME type if available.

The MimeType is hardcoded to "application/octet-stream". While this is a safe fallback, it may not accurately represent the actual file type. If the original upload captured the MIME type, consider storing it in BifrostFileUploadResponse and using it here.

transports/bifrost-http/integrations/utils.go (1)

194-194: Redundant Content-Length header setting.

fasthttp.RequestCtx.SetBody() automatically sets the Content-Length header based on the body size. This explicit header setting is not harmful but is unnecessary.

transports/bifrost-http/integrations/genai.go (2)

712-778: Batch ID extractors correctly normalize IDs but share duplicated logic.

Both extractGeminiBatchIDFromPath and extractGeminiBatchIDFromPathCancel:

  • Default provider from header.
  • Enforce batch_id presence and non‑empty string.
  • Normalize IDs to the batches/<id> format, trimming :cancel where appropriate.

Consider extracting the shared “read + normalize batch_id” steps into a small helper to avoid drift between the two code paths, but the current behavior is functionally sound.
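The shared helper could look something like this (name and error text are illustrative, not the codebase's):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeBatchID collapses the forms handled by both extractors: it
// trims an optional ":cancel" suffix and ensures the "batches/<id>"
// prefix, rejecting empty IDs.
func normalizeBatchID(raw string) (string, error) {
	id := strings.TrimSuffix(raw, ":cancel")
	if id == "" {
		return "", fmt.Errorf("batch_id is required")
	}
	if !strings.HasPrefix(id, "batches/") {
		id = "batches/" + id
	}
	return id, nil
}

func main() {
	id, _ := normalizeBatchID("abc123:cancel")
	fmt.Println(id) // batches/abc123
}
```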


984-1009: Raw body upload handling is fine but doesn’t derive MIME type.

The raw upload path copies the entire body into uploadReq.File, and optionally extracts filename from Content-Disposition. It leaves MimeType unset, which may be acceptable if downstream code infers MIME type elsewhere; if not, consider populating it from Content-Type as a best‑effort hint.
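A best-effort hint can be parsed from the request's Content-Type header (fasthttp exposes it via `ctx.Request.Header.ContentType()`; a plain string stands in for it here). Sketch only:

```go
package main

import (
	"fmt"
	"mime"
)

// mimeHint extracts the media type from a Content-Type header value,
// returning "" when the header is missing or unparsable so the caller
// can leave MimeType unset rather than guess.
func mimeHint(contentType string) string {
	if contentType == "" {
		return ""
	}
	mt, _, err := mime.ParseMediaType(contentType)
	if err != nil {
		return ""
	}
	return mt
}

func main() {
	fmt.Println(mimeHint("application/json; charset=utf-8")) // application/json
}
```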

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9e1cf19 and 53d30ec.

📒 Files selected for processing (25)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/bedrock/batch.go (1 hunks)
  • core/providers/cerebras/batch.go (1 hunks)
  • core/providers/cohere/batch.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/files.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/batch.go (1 hunks)
  • core/providers/openai/batch.go (1 hunks)
  • core/providers/openrouter/batch.go (1 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (14 hunks)
  • transports/bifrost-http/integrations/router.go (9 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
  • core/providers/bedrock/batch.go
  • core/providers/ollama/batch.go
  • core/providers/cerebras/batch.go
  • core/providers/azure/batch.go
  • core/schemas/bifrost.go
  • core/providers/perplexity/batch.go
  • core/providers/anthropic/batch.go
  • core/providers/mistral/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/openrouter/batch.go
  • core/providers/openai/batch.go
  • core/providers/parasail/batch.go
  • core/schemas/provider.go
  • core/providers/groq/batch.go
  • core/bifrost.go
  • core/providers/vertex/batch.go
  • core/providers/sgl/batch.go
  • core/schemas/batch.go
  • core/providers/cohere/batch.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/batch.go
  • transports/bifrost-http/integrations/utils.go
  • core/providers/gemini/files.go
  • core/providers/elevenlabs/batch.go
  • core/providers/gemini/types.go
  • transports/bifrost-http/integrations/router.go
🧬 Code graph analysis (10)
core/providers/openai/batch.go (4)
core/providers/openai/openai.go (1)
  • OpenAIProvider (24-30)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/parasail/batch.go (2)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/schemas/provider.go (2)
core/schemas/bifrost.go (2)
  • BatchDeleteRequest (103-103)
  • BifrostError (364-373)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/groq/batch.go (4)
core/providers/groq/groq.go (1)
  • GroqProvider (16-21)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/vertex/batch.go (4)
core/providers/vertex/vertex.go (1)
  • VertexProvider (57-62)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/sgl/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/schemas/batch.go (2)
core/schemas/provider.go (1)
  • Provider (313-362)
core/schemas/bifrost.go (2)
  • ModelProvider (32-32)
  • BifrostResponseExtraFields (295-304)
core/providers/cohere/batch.go (4)
core/providers/cohere/cohere.go (1)
  • CohereProvider (67-73)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/gemini/files.go (1)
core/schemas/files.go (1)
  • BifrostFileUploadResponse (89-105)
core/providers/elevenlabs/batch.go (2)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (34)
core/providers/elevenlabs/batch.go (1)

35-38: LGTM!

The BatchDelete stub follows the established pattern for unsupported operations, consistent with the other Batch* methods in this file. The method signature correctly uses the new BifrostBatchDeleteRequest and BifrostBatchDeleteResponse types.

core/providers/vertex/batch.go (1)

35-38: LGTM!

The BatchDelete stub correctly follows the established pattern for unsupported operations, consistent with the other Batch* methods in this file.

core/providers/openrouter/batch.go (1)

35-38: LGTM!

The BatchDelete stub correctly follows the established pattern for unsupported operations, consistent with the other Batch* methods in this file.

core/providers/cohere/batch.go (1)

35-38: LGTM!

The BatchDelete implementation follows the established pattern for unsupported batch operations in this provider.

core/schemas/provider.go (3)

182-182: LGTM!

The BatchDelete field addition to AllowedRequests is correctly placed and follows the naming conventions.


231-232: LGTM!

The BatchDeleteRequest case in IsOperationAllowed correctly returns the BatchDelete field and maintains consistency with other batch operations.


350-351: LGTM!

The BatchDelete method addition to the Provider interface follows the established pattern for batch operations with appropriate signature and documentation.

core/providers/openai/batch.go (1)

597-600: LGTM!

The BatchDelete implementation correctly returns an unsupported operation error, consistent with the pattern used across other providers.

core/providers/groq/batch.go (1)

35-38: LGTM!

The BatchDelete implementation follows the consistent pattern for unsupported batch operations across all providers.

core/schemas/batch.go (2)

265-272: LGTM!

The BifrostBatchDeleteRequest struct follows the established pattern for batch request types with appropriate fields and JSON tags.


274-281: LGTM!

The BifrostBatchDeleteResponse struct is well-designed with appropriate fields, including a Deleted boolean to indicate success, and follows the standard response pattern.

core/providers/sgl/batch.go (1)

35-38: LGTM!

The BatchDelete implementation is consistent with the unsupported operation pattern used across all provider stubs.

core/providers/parasail/batch.go (1)

35-38: LGTM!

The BatchDelete implementation follows the established pattern for unsupported batch operations.

core/bifrost.go (1)

1133-1209: LGTM!

The BatchDeleteRequest public API method is well-implemented and follows the exact pattern established by other batch operations (BatchCancel, BatchRetrieve, BatchResults). The implementation includes:

  • Proper input validation for nil request, missing provider, and missing batch ID
  • Consistent error handling and messaging
  • Appropriate base provider type determination for custom providers
  • Key selection logic for providers that require authentication
  • Retry logic through the executeRequestWithRetries helper
  • Error augmentation with request metadata
transports/bifrost-http/integrations/genai.go (9)

207-261: Resumable phase‑2 file upload routing and response enrichment look correct.

The POST/PUT resumable routes correctly reuse the same parser, map GeminiFileUploadRequest → BifrostFileUploadRequest, propagate provider from context, and use PostCallback to set X-Goog-Upload-Status and the original filename when missing. This wiring matches the resumable init flow and Bifrost’s file API shape.


263-314: PUT variant for resumable uploads is consistent with POST.

The PUT route mirrors the POST resumable handler: same parser, provider resolution, and response conversion, differing only in HTTP method. This maintains compatibility with clients that prefer PUT without duplicating logic.


663-684: Batch create pre‑callback correctly normalizes provider and model.

extractGeminiBatchCreateParams sensibly defaults the provider header to Gemini, stores it on the Bifrost context, and strips :batchGenerateContent from the path model segment before assigning to the SDK request. This keeps both provider and model consistent between router and provider.


686-710: Batch list query parsing is straightforward and robust.

The pre‑callback maps x-model-provider into the Bifrost context and parses pageSize / pageToken from query args into GeminiBatchListRequestSDK, ignoring parse failures gracefully. That’s a reasonable, non‑surprising behavior.


936-982: Multipart/form‑data upload path looks correct and falls back to filename when metadata missing.

parseGeminiMultipartUpload:

  • Reads the metadata JSON if present and sets uploadReq.Filename.
  • Reads the file content into a correctly sized buffer.
  • Falls back to fileHeader.Filename when metadata omitted.

This is a reasonable mapping from the SDK’s multipart format to GeminiFileUploadRequest.


1171-1193: File list query param extraction is consistent with other extractors.

The code:

  • Defaults provider from x-model-provider.
  • Parses pageSize and pageToken into GeminiFileListRequest (Limit and After).

This mirrors the batch list flow and seems correct.


1195-1219: Retrieve‑file extractor correctly validates file_id and patches the SDK request.

extractGeminiFileRetrieveParams:

  • Sets provider in context from header (default Gemini).
  • Validates that file_id exists and is non‑empty.
  • Copies it into the typed GeminiFileRetrieveRequest.

Nothing stands out as problematic here.


1221-1245: Delete‑file extractor matches retrieve‑file pattern.

The delete extractor:

  • Handles provider exactly like the retrieve path.
  • Validates file_id presence and non‑empty string.
  • Sets FileID on GeminiFileDeleteRequest.

This is consistent and should interoperate cleanly with the router’s FileDeleteRequest path.


1247-1255: GenAI router wiring includes batch routes as expected.

NewGenAIRouter now appends CreateGenAIBatchRouteConfigs to the same /genai prefix used by existing chat and file routes. This cleanly exposes the new batch surface without changing existing endpoints.

core/providers/gemini/batch.go (3)

847-868: ToGeminiJobState mapping looks reasonable.

The mapping from internal schemas.BatchStatus to SDK job states is sensible (e.g., InProgress → RUNNING, Completed → SUCCEEDED, Cancelling → CANCELLING). Treating Expired as FAILED is a choice but matches many APIs that consider expiry terminal/error.


898-933: ToGeminiBatchRetrieveResponse conversion is consistent aside from stats math.

The retrieve converter correctly:

  • Propagates ID, Status, timestamps, and operation name.
  • Sets Dest.FileName when OutputFileID is present.

Once the BatchStats calculation is fixed as above, this will be a solid SDK mapping.


986-1043: BatchDelete implementation and URL construction look correct.

BatchDelete:

  • Checks provider capability via CheckOperationAllowed.
  • Validates BatchID.
  • Constructs the delete URL for both batches/<id> and raw <id> forms.
  • Uses DELETE, attaches API key, and treats 200/204 as success, otherwise defers to parseGeminiError.
  • Returns a minimal BifrostBatchDeleteResponse with latency metadata.

This matches the router’s new delete path and should behave well against the Gemini API.
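The URL normalization and 200/204 handling described above can be sketched as follows (base URL and function names are illustrative, not the provider's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// deleteURL builds the batch-delete endpoint for both the
// "batches/<id>" and bare "<id>" forms.
func deleteURL(base, batchID string) string {
	if !strings.HasPrefix(batchID, "batches/") {
		batchID = "batches/" + batchID
	}
	return base + "/v1beta/" + batchID
}

// isDeleteSuccess mirrors the review's note that both 200 OK and
// 204 No Content count as success for DELETE.
func isDeleteSuccess(status int) bool {
	return status == 200 || status == 204
}

func main() {
	fmt.Println(deleteURL("https://generativelanguage.googleapis.com", "abc"))
	fmt.Println(isDeleteSuccess(204)) // true
}
```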

transports/bifrost-http/integrations/router.go (5)

80-89: BatchRequest extension for delete operations is coherent.

Adding DeleteRequest *schemas.BifrostBatchDeleteRequest to BatchRequest matches the expanded batch API surface and integrates cleanly with the existing Type switch in handleBatchRequest.


159-162: New BatchDeleteResponseConverter type fits the existing converter pattern.

Defining a dedicated converter for BifrostBatchDeleteResponse keeps RouteConfig extensible and mirrors the other batch response converters, making integration code uniform.


271-301: RouteConfig additions for batch delete look correct.

Including BatchDeleteResponseConverter in RouteConfig and using BatchCreateRequestConverter as the generic batch converter is consistent with how batch, file, and inference routes are distinguished in RegisterRoutes and createHandler.


440-454: Batch routing detection and dispatch are cleanly integrated.

Using config.BatchCreateRequestConverter != nil to identify batch routes, then delegating to handleBatchRequest with a strongly typed BatchRequest, aligns with the existing inference/file split. The early defer cancel() ensures contexts are cleaned up for batch routes.


702-853: Batch delete handling mirrors other batch operations.

The new schemas.BatchDeleteRequest branch:

  • Validates DeleteRequest non‑nil.
  • Calls g.client.BatchDeleteRequest.
  • Applies PostCallback and optional BatchDeleteResponseConverter before sending success.

This mirrors create/list/retrieve/cancel/results patterns and should be easy for integration code to reason about.

core/providers/gemini/types.go (3)

1578-1590: SDK job state constants align with documented GenAI SDK states.

Defining GeminiJobState* alongside internal batch states clarifies the distinction between internal/provider statuses and SDK‑visible states and supports the new converters in batch.go. Names and values look consistent.


1591-1651: New SDK batch types provide a clean surface for router ↔ provider integration.

The added SDK‑facing types:

  • GeminiBatchCreateRequestSDK (with flexible Src),
  • Inline request/config structs,
  • GeminiBatchJobResponseSDK, list/request/cancel/delete SDK structs,

match the payloads expected by the Google GenAI SDK and give the HTTP layer a strongly typed target for JSON unmarshaling. Combined with the converters in batch.go and the request mappers in genai.go, this is a solid abstraction boundary.


1655-1662: GeminiFileUploadRequest redesign matches resumable upload needs.

Changing GeminiFileUploadRequest to:

  • Keep raw File []byte and ResumableSessionID internal (json:"-"),
  • Expose filename, mimeType, and purpose as JSON fields,

fits the resumable upload protocol: phase‑1 carries metadata only, phase‑2 carries bytes, and the router/provider can mix these via in‑memory fields without leaking internal details on the wire.

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 53d30ec to 5dffad9 Compare December 6, 2025 11:00
@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 5dffad9 to db0afec Compare December 7, 2025 09:51
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (9)
transports/bifrost-http/integrations/genai.go (7)

45-49: Handle error from crypto/rand.Read.

While crypto/rand.Read rarely fails, ignoring the error could lead to predictable session IDs if the random source is unavailable, which is a security concern for session hijacking.

 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Fallback - this should rarely happen
+		return fmt.Sprintf("%d", time.Now().UnixNano())
+	}
 	return hex.EncodeToString(bytes)
 }

52-67: Provide a mechanism to stop the cleanup goroutine for graceful shutdown.

The cleanup goroutine runs indefinitely with no way to stop it. This prevents clean application termination.

+var cleanupStop = make(chan struct{})
+
 func init() {
 	go func() {
 		ticker := time.NewTicker(10 * time.Minute)
+		defer ticker.Stop()
 		for {
-			for range ticker.C {
+			select {
+			case <-ticker.C:
 				now := time.Now()
 				uploadSessions.Range(func(key, value interface{}) bool {
 					if session, ok := value.(*uploadSession); ok {
 						if now.Sub(session.CreatedAt) > time.Hour {
 							uploadSessions.Delete(key)
 						}
 					}
 					return true
 				})
+			case <-cleanupStop:
+				return
 			}
 		}
 	}()
 }

240-254: Remove debug fmt.Printf statements before merging.

There are numerous [DEBUG] print statements throughout this file that should be removed or replaced with a proper logger with configurable log levels. These debug statements are present at lines 240, 249, 253-254, 792-793, 814, 818, 824, 835, 841, 854, 860, 1013, 1022, 1034, 1046, 1062-1063, 1072-1077, 1108, 1125, 1129, 1147-1148.


885-931: Binary payloads should not round-trip through string in multipart/related parsing.

The request body is converted to a string at line 887 (strings.Split(string(body), delimiter)), the part is extracted as a string, and converted back to bytes at line 929. Although Go's []byte↔string conversions preserve bytes exactly, treating a binary payload as a string forces extra allocations and invites UTF-8-aware string operations (trimming, ranging, rune-level replacement) that can silently corrupt null bytes or invalid UTF-8 sequences, breaking binary file uploads.

Use bytes.Split() or bytes.Index() to keep the payload as []byte throughout the parsing process.
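A simplified sketch of the byte-level approach (the real parser must also strip part headers; the function name is illustrative):

```go
package main

import (
	"bytes"
	"fmt"
)

// splitParts separates a multipart/related body on the boundary
// delimiter without ever converting to string, so NUL bytes and
// invalid UTF-8 in binary parts survive intact.
func splitParts(body []byte, boundary string) [][]byte {
	delim := []byte("--" + boundary)
	var parts [][]byte
	for _, p := range bytes.Split(body, delim) {
		// Trim boundary framing; a production parser would parse
		// part headers instead of trimming blindly.
		p = bytes.Trim(p, "\r\n-")
		if len(p) > 0 {
			parts = append(parts, p)
		}
	}
	return parts
}

func main() {
	body := []byte("--b\r\nhello\r\n--b\r\n\x00\x01\xff\r\n--b--")
	for _, p := range splitParts(body, "b") {
		fmt.Printf("%d bytes\n", len(p))
	}
}
```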


1056-1056: Hardcoded purpose may be incorrect for non-batch uploads.

The purpose is hardcoded to "batch" for all file uploads via the resumable upload API. If this endpoint is used for other purposes (e.g., general file storage for chat context), this default may be incorrect. Consider allowing the purpose to be specified in the metadata during phase 1.


1138-1148: Support reverse proxy headers for resumable upload URL construction.

The upload URL uses ctx.Host() and ctx.IsTLS() directly without considering X-Forwarded-Proto and X-Forwarded-Host headers. When Bifrost runs behind a reverse proxy, the URL will reference internal host/scheme instead of the client-facing address, causing subsequent upload requests to fail.

 			// Build the upload URL for phase 2
-			// Use the request's host and scheme to build the URL
-			scheme := "http"
-			if ctx.IsTLS() {
-				scheme = "https"
+			scheme := string(ctx.Request.Header.Peek("X-Forwarded-Proto"))
+			if scheme == "" {
+				if ctx.IsTLS() {
+					scheme = "https"
+				} else {
+					scheme = "http"
+				}
+			}
+			host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
+			if host == "" {
+				host = string(ctx.Host())
 			}
-			host := string(ctx.Host())

1066-1079: Remove debug prints from setResumableUploadFinalStatus.

The debug logging marshaling the full response JSON (lines 1072-1077) should be removed or moved to the structured logger. This is consistent with the feedback to remove all [DEBUG] print statements.

core/providers/gemini/batch.go (1)

871-896: [Duplicate] Defensive stats computation still needed to avoid negative pending counts.

This is the same issue flagged in the previous review. The converters ToGeminiBatchJobResponse (line 889), ToGeminiBatchRetrieveResponse (line 916), and ToGeminiBatchListResponse (line 955) all compute PendingRequestCount as Total - Completed, which can become negative when Total == 0 but Completed > 0.

As noted in the earlier review, BatchRetrieve populates RequestCounts.Completed and RequestCounts.Failed but leaves Total at zero. This will cause the SDK converters to produce invalid negative pending counts.

The recommended fix from the previous review should be applied:

+	total := resp.RequestCounts.Total
+	if total == 0 {
+		total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
+	}
+	pending := total - resp.RequestCounts.Completed - resp.RequestCounts.Failed
+	if pending < 0 {
+		pending = 0
+	}
+
 	result.Metadata = &GeminiBatchMetadata{
 		Name:       resp.ID,
 		State:      ToGeminiJobState(resp.Status),
 		CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
 		BatchStats: &GeminiBatchStats{
-			RequestCount:           resp.RequestCounts.Total,
-			PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-			SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
+			RequestCount:           total,
+			PendingRequestCount:    pending,
+			SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
 		},
 	}

Apply the same pattern to all three converter functions.

Also applies to: 899-933, 936-972

transports/bifrost-http/integrations/router.go (1)

870-894: [Duplicate] Remove debug fmt.Printf statements before merging.

These debug statements are still present from the previous review. As noted before, they should be removed or replaced with structured logging using g.logger.Debug(...):

Lines to address:

  • Line 870: fmt.Printf("[DEBUG] router: calling FileUploadRequest...
  • Lines 873-877: Error message construction and debug print
  • Line 881: fmt.Printf("[DEBUG] router: FileUploadRequest success...
  • Lines 883-885: PostCallback debug prints
  • Lines 889, 894: FileUploadResponseConverter debug prints
  • Line 1012: fmt.Printf("[DEBUG] router: file response conversion error...

If debug logging is needed for production, use the structured logger instead of raw stdout.

Also applies to: 1012-1012

🧹 Nitpick comments (4)
core/providers/gemini/files.go (2)

466-470: Misleading comment: JSON tags use camelCase, not snake_case.

The comment states "Uses snake_case field names" but GeminiFileResponse uses camelCase JSON tags (displayName, mimeType, sizeBytes, etc.). Consider updating the comment to accurately reflect the format.

-// Uses snake_case field names to match Google's API format.
+// Uses camelCase field names to match Google's API format.

472-485: Hardcoded MimeType may not reflect actual file type.

MimeType is hardcoded to "application/octet-stream". If the original upload request or response contains the actual MIME type, consider propagating it instead of using a generic fallback.

transports/bifrost-http/integrations/genai.go (1)

503-531: Consider renaming converter field for clarity.

The BatchCreateRequestConverter field is being used for BatchListRequest (line 511), BatchRetrieveRequest (line 553), BatchCancelRequest (line 591), and BatchDeleteRequest (line 629). While functionally correct since it returns the appropriate BatchRequest.Type, using a more generic field name like BatchRequestConverter would improve readability and reduce confusion.

transports/bifrost-http/integrations/router.go (1)

92-99: Note: FileRequest.DeleteRequest field appears unused.

The DeleteRequest field was added to FileRequest (line 97), but it's not referenced in handleFileRequest (lines 858-1023). The existing FileDeleteRequest case (lines 952-972) already handles file deletion using fileReq.DeleteRequest.

If this field is intended for future use, consider adding a comment. Otherwise, it may be unnecessary at this time.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5dffad9 and db0afec.

📒 Files selected for processing (25)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/bedrock/batch.go (1 hunks)
  • core/providers/cerebras/batch.go (1 hunks)
  • core/providers/cohere/batch.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/files.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/batch.go (1 hunks)
  • core/providers/openai/batch.go (1 hunks)
  • core/providers/openrouter/batch.go (1 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (14 hunks)
  • transports/bifrost-http/integrations/router.go (9 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (11)
  • core/providers/vertex/batch.go
  • core/schemas/bifrost.go
  • core/providers/bedrock/batch.go
  • core/providers/cohere/batch.go
  • transports/bifrost-http/integrations/utils.go
  • core/providers/openai/batch.go
  • core/providers/ollama/batch.go
  • core/providers/anthropic/batch.go
  • core/bifrost.go
  • core/schemas/provider.go
  • core/providers/parasail/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/azure/batch.go
  • core/providers/mistral/batch.go
  • core/providers/elevenlabs/batch.go
  • core/providers/sgl/batch.go
  • core/providers/cerebras/batch.go
  • core/providers/openrouter/batch.go
  • core/providers/gemini/files.go
  • core/providers/groq/batch.go
  • core/providers/perplexity/batch.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/batch.go
  • core/providers/gemini/types.go
  • core/schemas/batch.go
🧬 Code graph analysis (9)
core/providers/mistral/batch.go (2)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/elevenlabs/batch.go (3)
core/providers/elevenlabs/elevenlabs.go (1)
  • ElevenlabsProvider (23-29)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/sgl/batch.go (4)
core/providers/sgl/sgl.go (1)
  • SGLProvider (18-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/cerebras/batch.go (4)
core/providers/cerebras/cerebras.go (1)
  • CerebrasProvider (16-21)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/openrouter/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/groq/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/perplexity/batch.go (4)
core/providers/perplexity/perplexity.go (1)
  • PerplexityProvider (19-24)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
transports/bifrost-http/integrations/router.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (36-36)
core/schemas/bifrost.go (2)
  • BatchDeleteRequest (103-103)
  • FileUploadRequest (104-104)
core/schemas/provider.go (1)
  • Provider (313-362)
core/providers/gemini/batch.go (3)
core/schemas/batch.go (3)
  • BatchStatus (5-5)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/gemini/types.go (5)
  • GeminiBatchJobResponseSDK (1618-1624)
  • GeminiBatchMetadata (1499-1512)
  • GeminiBatchStats (1433-1437)
  • GeminiBatchDest (1533-1536)
  • GeminiBatchListResponseSDK (1627-1630)
core/providers/utils/utils.go (4)
  • CheckOperationAllowed (408-420)
  • NewBifrostOperationError (449-460)
  • SetExtraHeaders (178-208)
  • MakeRequestWithContext (39-93)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (17)
core/providers/azure/batch.go (1)

437-441: LGTM!

The BatchDelete method correctly returns an unsupported operation error, consistent with the pattern used by other providers that don't support batch deletion.

core/schemas/batch.go (1)

265-282: LGTM!

The new BifrostBatchDeleteRequest and BifrostBatchDeleteResponse types are well-structured and consistent with existing batch request/response patterns in the codebase.

core/providers/openrouter/batch.go (1)

35-39: LGTM!

The BatchDelete method follows the same pattern as other batch operations in this provider, correctly returning an unsupported operation error.

core/providers/mistral/batch.go (1)

35-39: LGTM!

The BatchDelete method is consistent with the existing batch operation stubs in this provider.

core/providers/cerebras/batch.go (1)

35-39: LGTM!

The BatchDelete method is consistent with the existing batch operation stubs in this provider.

core/providers/sgl/batch.go (1)

35-38: LGTM!

The BatchDelete implementation correctly follows the established pattern for unsupported batch operations, consistent with other Batch* methods in this file and across other providers.

core/providers/perplexity/batch.go (1)

35-38: LGTM!

Consistent implementation of unsupported BatchDelete operation, matching the pattern used by other providers.

core/providers/elevenlabs/batch.go (1)

35-38: LGTM!

The BatchDelete implementation follows the established unsupported operation pattern consistently.

core/providers/groq/batch.go (1)

35-38: LGTM!

Consistent BatchDelete stub matching the established pattern across providers.

transports/bifrost-http/integrations/genai.go (1)

663-778: LGTM - Batch parameter extraction functions.

The extraction functions follow a consistent pattern: extract provider from header (defaulting to Gemini), validate required path parameters, and properly format batch IDs with the batches/ prefix. Good input validation with clear error messages.

core/providers/gemini/batch.go (1)

986-1043: LGTM! BatchDelete implementation is solid.

The implementation correctly:

  • Validates operation permissions and required parameters
  • Handles both batch ID formats (batches/xxx and bare IDs)
  • Uses HTTP DELETE method appropriately
  • Accepts both 200 OK and 204 No Content as success responses
  • Returns properly structured BifrostBatchDeleteResponse
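Two of the behaviors called out above — accepting both batch ID formats and treating 200/204 as success — can be sketched in isolation like this (helper names are illustrative, not the provider's actual functions):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeBatchID accepts both "batches/xxx" and bare "xxx" IDs,
// prefixing bare IDs as the review describes.
func normalizeBatchID(id string) string {
	if strings.HasPrefix(id, "batches/") {
		return id
	}
	return "batches/" + id
}

// isDeleteSuccess treats both 200 OK and 204 No Content as success.
func isDeleteSuccess(status int) bool {
	return status == 200 || status == 204
}

func main() {
	fmt.Println(normalizeBatchID("abc123"), normalizeBatchID("batches/abc123"), isDeleteSuccess(204))
}
```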
transports/bifrost-http/integrations/router.go (3)

159-161: LGTM! BatchDeleteResponseConverter follows established patterns.

The new converter type and RouteConfig field are consistent with other batch operation converters (Create, List, Retrieve, Cancel, Results).

Also applies to: 291-291


424-428: LGTM! Clean handling for resumable upload initialization.

The special-case handling for ErrResumableUploadInit allows the PreCallback to handle resumable upload session initialization and write its own response, then return gracefully without error propagation. This is an appropriate pattern for this use case.


822-842: LGTM! Batch delete handling is consistent with existing patterns.

The BatchDeleteRequest case follows the same structure as other batch operations:

  • Validates the request
  • Invokes the client method
  • Executes optional PostCallback
  • Applies optional response converter
core/providers/gemini/types.go (3)

1578-1589: LGTM! SDK job state constants are well-defined.

The new GeminiJobState* constants are clearly documented as Google GenAI SDK-specific states and are properly distinguished from the internal GeminiBatchState* constants. The naming and values align with the SDK format.


1591-1651: LGTM! SDK batch types are correctly structured.

The new SDK-compatible batch types properly mirror the Google GenAI SDK format:

  • GeminiBatchCreateRequestSDK.Src as interface{} appropriately supports both string (file reference) and array (inline requests) inputs
  • Request/response types include all necessary fields with correct JSON tags
  • The types enable SDK-compatible batch operations while maintaining type safety

Note: The past review comment about Src field validation applies to the transport layer (transports/bifrost-http/integrations/genai.go), not to these type definitions.


1656-1662: LGTM! GeminiFileUploadRequest refactor supports resumable uploads.

The changes appropriately:

  • Add MimeType field for explicit content type specification (replacing implicit provider-based handling)
  • Add ResumableSessionID for resumable upload session tracking
  • Remove Provider field, reducing dependency on the schemas package

These modifications align with the PR's goal of supporting resumable uploads and refactoring file handling.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

♻️ Duplicate comments (11)
transports/bifrost-http/integrations/utils.go (1)

194-196: Avoid logging full response body in sendSuccess.

This debug fmt.Printf logs the entire JSON payload, which can leak PII/secrets and is costly on hot paths. The explicit Content-Length header is fine; the body log is not.

Apply this diff to drop the unsafe log while keeping the header:

-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
-	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
+	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
+	ctx.SetBody(responseBody)

If you still want debug observability, prefer a structured log that only includes status and len(responseBody), without the body contents.
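A safer debug line would carry only the status and payload length, never the payload itself — for example (a sketch, not the project's logger API):

```go
package main

import "fmt"

// logSendSuccess builds the safer debug line suggested above: status and
// content length only, so response bodies never reach the logs.
func logSendSuccess(status, contentLen int) string {
	return fmt.Sprintf("sendSuccess: status=%d, contentLen=%d", status, contentLen)
}

func main() {
	fmt.Println(logSendSuccess(200, 42))
}
```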

transports/bifrost-http/integrations/router.go (1)

864-895: Remove fmt.Printf debug logs or switch to structured logger

There are several [DEBUG] fmt.Printf calls in the file-upload path (FileUploadRequest, post-callback, response conversion, and final error logging). These bypass the structured g.logger and will spam stdout in production. Prior review already requested their removal or replacement.

Recommend deleting them or converting to g.logger.Debug(...) with structured fields; keep user-visible behavior unchanged.

Also applies to: 1011-1013

core/providers/gemini/batch.go (1)

870-896: Fix BatchStats math to avoid negative pending/success counts

All three SDK converters currently derive stats as:

  • RequestCount: resp.RequestCounts.Total
  • PendingRequestCount: resp.RequestCounts.Total - resp.RequestCounts.Completed
  • SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed

For Gemini batches where Total is left at 0 but Completed/Failed are non-zero (e.g., BatchRetrieve only sets those fields), this produces negative pending/successful counts, which is invalid.

Derive totals defensively and clamp pending to ≥ 0, e.g.:

+func deriveStats(counts schemas.BatchRequestCounts) (total, pending, success int) {
+    total = counts.Total
+    if total == 0 {
+        total = counts.Completed + counts.Failed
+    }
+    success = counts.Completed
+    if success < 0 {
+        success = 0
+    }
+    pending = total - (success + counts.Failed)
+    if pending < 0 {
+        pending = 0
+    }
+    return
+}

and then in each converter:

-    BatchStats: &GeminiBatchStats{
-        RequestCount:           resp.RequestCounts.Total,
-        PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-        SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-    },
+    total, pending, success := deriveStats(resp.RequestCounts)
+    BatchStats: &GeminiBatchStats{
+        RequestCount:           total,
+        PendingRequestCount:    pending,
+        SuccessfulRequestCount: success,
+    },

Apply consistently in ToGeminiBatchJobResponse, ToGeminiBatchRetrieveResponse, and ToGeminiBatchListResponse.

Also applies to: 898-933, 935-972

transports/bifrost-http/integrations/genai.go (8)

23-67: Session store and cleanup goroutine need shutdown hook

The uploadSessions map plus ticker-based cleanup is fine, but the init() goroutine runs forever with no way to stop it, which complicates graceful shutdown and tests.

Consider adding a package-level stop channel or context so the cleanup loop can exit cleanly on server shutdown (as previously suggested), and ensure ticker.Stop() is called.


44-49: Handle error from crypto/rand.Read when generating session IDs

generateSessionID ignores the error from rand.Read, which can (in rare cases) yield predictable or zeroed IDs if the random source fails.

Handle the error and either:

  • return a (string, error) and propagate it, or
  • fall back to a clearly best-effort ID (e.g., time-based) with a comment explaining why.

808-865: Resumable upload phase-1 logic is fine but debug prints should be removed

The phase-1 handler:

  • Parses JSON metadata and stores the filename, MIME type, and size in uploadSession.
  • Generates a session ID and stores it on uploadReq for the PreCallback to consume.
  • Falls back gracefully if metadata unmarshal fails.

However, multiple fmt.Printf calls log content type, body, parsed metadata, session IDs, and raw content lengths. These should be removed or replaced with the structured logger at an appropriate level before production.


867-934: Fix binary corruption in parseGeminiMultipartRelated

This function converts the raw body to a string and splits on the boundary:

  • parts := strings.Split(string(body), delimiter)
  • Later, file content is reconstituted with []byte(content).

This corrupts arbitrary binary data (null bytes, non-UTF-8 sequences). Replace all string-based splitting with byte-based operations (bytes.Split, bytes.Index, bytes.TrimSpace) and keep the file content as []byte slices end-to-end, only converting JSON metadata segments to bytes/strings as needed for sonic.Unmarshal.

This is a correctness blocker for binary uploads.


1011-1064: Resumable phase-2: avoid debug prints and reconsider hardcoded purpose

Phase-2 handler correctly:

  • Validates session_id.
  • Loads the session and copies body into uploadReq.File.
  • Restores filename/MIME from the session.
  • Associates the session ID for later PreCallback use.

Issues:

  • Multiple fmt.Printf debug logs should be removed or switched to the logger.
  • uploadReq.Purpose = "batch" is hardcoded and may be wrong for non-batch uploads going through this path. Consider deriving purpose from metadata/session or allowing a default that matches the actual API usage.

1066-1079: Drop heavy debug logging from setResumableUploadFinalStatus

Setting X-Goog-Upload-Status: final is correct, but marshalling the entire resp and printing it plus status code on every call is noisy and potentially expensive.

Remove these fmt.Printf calls (and the sonic.Marshal) or replace them with concise structured logging if needed at debug level.


1081-1112: Resumable phase-2 PreCallback is correct; drop debug print

extractGeminiResumableUploadParams:

  • Validates session_id.
  • Loads and deletes the session.
  • Sets bifrostContextKeyProvider and stores original filename into context.

This is the right place to restore provider/filename. Just remove the fmt.Printf debug line and rely on the logger if you still need diagnostics.


1115-1163: Respect reverse-proxy headers when building resumable upload URL and remove debug prints

extractGeminiFileUploadParams:

  • Sets provider on bifrostCtx.
  • For metadata-only requests with a resumable session, updates the session with provider, constructs a phase-2 URL using ctx.IsTLS() and ctx.Host(), and returns ErrResumableUploadInit after writing the response.

Issues:

  1. URL construction ignores X-Forwarded-Proto / X-Forwarded-Host. Behind a reverse proxy, clients will receive an internal host/scheme and subsequent uploads may fail. Mirror the pattern used elsewhere (e.g., in session.go) to prefer forwarded headers and fall back to ctx.IsTLS()/ctx.Host().
  2. Several fmt.Printf debug statements should be removed or converted to structured logger calls.
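The forwarded-header preference can be sketched as below. The parameters stand in for the fasthttp ctx accessors (`X-Forwarded-Proto`, `X-Forwarded-Host`, `ctx.IsTLS()`, `ctx.Host()`); this is a minimal illustration of the fallback order, not the pattern in session.go verbatim.

```go
package main

import "fmt"

// buildBaseURL prefers reverse-proxy forwarded headers and falls back to
// the direct connection's TLS flag and Host.
func buildBaseURL(xfProto, xfHost string, isTLS bool, host string) string {
	scheme := "http"
	if isTLS {
		scheme = "https"
	}
	if xfProto != "" {
		scheme = xfProto // proxy-reported scheme wins
	}
	h := host
	if xfHost != "" {
		h = xfHost // proxy-reported host wins
	}
	return scheme + "://" + h
}

func main() {
	fmt.Println(buildBaseURL("https", "api.example.com", false, "10.0.0.5:8080"))
	fmt.Println(buildBaseURL("", "", true, "localhost:8080"))
}
```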
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5dffad9 and db0afec.

📒 Files selected for processing (25)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/bedrock/batch.go (1 hunks)
  • core/providers/cerebras/batch.go (1 hunks)
  • core/providers/cohere/batch.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/files.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/batch.go (1 hunks)
  • core/providers/openai/batch.go (1 hunks)
  • core/providers/openrouter/batch.go (1 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (14 hunks)
  • transports/bifrost-http/integrations/router.go (9 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (13)
  • core/providers/cerebras/batch.go
  • core/providers/elevenlabs/batch.go
  • core/providers/vertex/batch.go
  • core/schemas/provider.go
  • core/providers/gemini/files.go
  • core/providers/ollama/batch.go
  • core/providers/perplexity/batch.go
  • core/providers/anthropic/batch.go
  • core/providers/cohere/batch.go
  • core/bifrost.go
  • core/providers/groq/batch.go
  • core/providers/openrouter/batch.go
  • core/providers/parasail/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/schemas/bifrost.go
  • core/providers/bedrock/batch.go
  • transports/bifrost-http/integrations/utils.go
  • core/providers/openai/batch.go
  • core/providers/azure/batch.go
  • core/providers/sgl/batch.go
  • core/schemas/batch.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/batch.go
  • core/providers/mistral/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/gemini/types.go
🧬 Code graph analysis (7)
core/providers/openai/batch.go (4)
core/providers/openai/openai.go (1)
  • OpenAIProvider (24-30)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/azure/batch.go (3)
core/providers/azure/azure.go (1)
  • AzureProvider (22-27)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/sgl/batch.go (4)
core/providers/sgl/sgl.go (1)
  • SGLProvider (18-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/schemas/batch.go (2)
core/schemas/provider.go (1)
  • Provider (313-362)
core/schemas/bifrost.go (2)
  • ModelProvider (32-32)
  • BifrostResponseExtraFields (295-304)
core/providers/gemini/batch.go (4)
core/schemas/batch.go (13)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1582-1582)
  • GeminiJobStateRunning (1583-1583)
  • GeminiJobStateSucceeded (1584-1584)
  • GeminiJobStateFailed (1585-1585)
  • GeminiJobStateCancelling (1586-1586)
  • GeminiJobStateCancelled (1587-1587)
  • GeminiBatchJobResponseSDK (1618-1624)
  • GeminiBatchMetadata (1499-1512)
  • GeminiBatchStats (1433-1437)
  • GeminiBatchDest (1533-1536)
  • GeminiBatchListResponseSDK (1627-1630)
core/schemas/bifrost.go (5)
  • BifrostError (364-373)
  • Gemini (48-48)
  • BatchDeleteRequest (103-103)
  • BifrostResponseExtraFields (295-304)
  • RequestType (83-83)
core/providers/utils/utils.go (4)
  • CheckOperationAllowed (408-420)
  • NewBifrostOperationError (449-460)
  • SetExtraHeaders (178-208)
  • MakeRequestWithContext (39-93)
core/providers/mistral/batch.go (2)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
transports/bifrost-http/integrations/router.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (36-36)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/provider.go (1)
  • Provider (313-362)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (25)
core/schemas/bifrost.go (1)

85-105: BatchDeleteRequest RequestType addition looks consistent.

The new BatchDeleteRequest enum value fits the existing batch request naming/ordering and aligns with the new batch delete schemas/providers.

core/providers/bedrock/batch.go (1)

931-935: Bedrock BatchDelete stub matches unsupported-operation pattern.

This no-op implementation cleanly advertises that Bedrock doesn’t support batch delete, consistent with other unsupported operations.

core/providers/azure/batch.go (1)

437-441: Azure BatchDelete unsupported stub is wired correctly.

Signature and error behavior align with other Azure batch methods and the shared NewUnsupportedOperationError helper; good placeholder until Azure exposes delete.

core/providers/sgl/batch.go (1)

35-38: SGL BatchDelete stub is consistent with other SGL batch methods.

Extends the existing “not supported” pattern to the new BatchDelete operation; no additional changes needed.

core/providers/openai/batch.go (1)

597-600: OpenAI BatchDelete stub correctly advertises unsupported behavior.

Implements the new interface method and cleanly returns a shared unsupported-operation error; matches patterns used across providers.

core/providers/mistral/batch.go (1)

35-38: Mistral BatchDelete stub aligns with existing unsupported batch APIs.

Consistent use of NewUnsupportedOperationError for the new request type; interface is satisfied without implying batch delete support.

core/schemas/batch.go (1)

265-281: Batch delete request/response schemas look well-shaped.

BifrostBatchDeleteRequest/Response follow existing batch/file delete conventions (ID + deleted + ExtraFields) and give providers room via ExtraParams.

transports/bifrost-http/integrations/router.go (3)

80-89: BatchRequest: DeleteRequest wiring looks consistent

Adding DeleteRequest *schemas.BifrostBatchDeleteRequest is consistent with other batch fields and enables the new delete path without changing existing behavior.


159-162: BatchDeleteResponseConverter integration is coherent

The new BatchDeleteResponseConverter type and its inclusion in RouteConfig follow the existing pattern for other batch converters and keep the public surface symmetrical. No functional issues spotted.

Also applies to: 271-301


822-843: BatchDelete branch mirrors other batch operations correctly

The schemas.BatchDeleteRequest case validates input, calls g.client.BatchDeleteRequest, runs PostCallback, and runs/omits BatchDeleteResponseConverter exactly in line with other batch cases. This is structurally sound.

core/providers/gemini/batch.go (2)

846-868: Status mapping to SDK job states looks correct

ToGeminiJobState covers all known schemas.BatchStatus values and maps them to the appropriate GeminiJobState* constants, with a sensible default to JOB_STATE_PENDING.
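A simplified sketch of that mapping, using string statuses in place of the typed schemas.BatchStatus values (the exact pairings are illustrative; the authoritative version is ToGeminiJobState in core/providers/gemini/batch.go):

```go
package main

import "fmt"

// toJobState maps a batch status to an SDK job state, defaulting to
// JOB_STATE_PENDING for validating and unknown statuses.
func toJobState(status string) string {
	switch status {
	case "in_progress", "finalizing":
		return "JOB_STATE_RUNNING"
	case "completed":
		return "JOB_STATE_SUCCEEDED"
	case "failed", "expired":
		return "JOB_STATE_FAILED"
	case "cancelling":
		return "JOB_STATE_CANCELLING"
	case "cancelled":
		return "JOB_STATE_CANCELLED"
	default:
		return "JOB_STATE_PENDING"
	}
}

func main() {
	fmt.Println(toJobState("completed"), toJobState("validating"))
}
```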


986-1043: Gemini BatchDelete implementation is aligned with other operations

BatchDelete performs standard gating (CheckOperationAllowed), validates BatchID, builds the correct DELETE URL (with/without batches/ prefix), sends the request, and treats 200/204 as success, returning a populated BifrostBatchDeleteResponse with latency in ExtraFields. This matches other provider operations and looks functionally solid.

transports/bifrost-http/integrations/genai.go (10)

165-205: File route converters for Gemini look structurally correct

The new file endpoints (upload, resumable POST/PUT, list, retrieve, delete) correctly:

  • Create SDK request types via GetRequestTypeInstance.
  • Convert to FileRequest with provider from bifrostContextKeyProvider.
  • Use appropriate Bifrost request/response structs and converters.

This wiring matches existing router patterns and looks functionally sound.

Also applies to: 207-314


438-661: Batch route configs are coherent and align with the new core Batch* APIs

CreateGenAIBatchRouteConfigs wires SDK batch requests/responses to Bifrost BatchCreate/List/Retrieve/Cancel/Delete operations consistently:

  • Provider extracted via bifrostContextKeyProvider.
  • Path patterns and methods match expected GenAI endpoints.
  • Response converters support raw passthrough and SDK formatting.

Reusing BatchCreateRequestConverter for all batch operation types is a slight misnomer, but it matches the existing router contract.


664-710: Batch create PreCallback correctly sets provider and model

extractGeminiBatchCreateParams:

  • Derives provider from x-model-provider (defaulting to Gemini) and sets it on bifrostCtx.
  • Extracts model from the URL and removes the :batchGenerateContent suffix.

This keeps Bifrost and SDK request fields aligned.


780-806: File upload entrypoint routing for resumable and multipart is sensible

parseGeminiFileUploadRequest distinguishes:

  • Resumable/multipart (X-Goog-Upload-Protocol) → parseGeminiResumableUpload.
  • multipart/* → parseGeminiMultipartUpload.
  • Fallback raw body → parseGeminiRawUpload.

This branching is clear and supports GenAI SDK patterns.


936-982: Multipart/form-data handling looks correct and preserves filenames

parseGeminiMultipartUpload:

  • Reads metadata JSON if present to set uploadReq.Filename.
  • Extracts required file part, reads full content, and uses the header filename as a fallback when metadata is absent.

This is straightforward and seems robust.


984-1009: Raw body upload parsing is straightforward and safe

parseGeminiRawUpload copies the full body into uploadReq.File and attempts to parse a filename from Content-Disposition. This is appropriate for non-multipart uploads.
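Header-based filename extraction of this kind is typically a one-liner with the standard library — a sketch similar in spirit to the raw-upload path (assuming the header follows RFC 6266 `filename=` syntax):

```go
package main

import (
	"fmt"
	"mime"
)

// filenameFromDisposition parses the filename parameter out of a
// Content-Disposition header, returning "" when absent or malformed.
func filenameFromDisposition(header string) string {
	_, params, err := mime.ParseMediaType(header)
	if err != nil {
		return ""
	}
	return params["filename"]
}

func main() {
	fmt.Println(filenameFromDisposition(`attachment; filename="data.jsonl"`))
}
```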


1171-1192: File list PreCallback/provider extraction looks good

extractGeminiFileListQueryParams:

  • Sets provider from x-model-provider with Gemini default.
  • Maps pageSize/pageToken query params to GeminiFileListRequest.Limit and After.

This matches the GenAI API semantics and is consistent with other extractors.


1195-1219: File retrieve PreCallback wiring is correct

extractGeminiFileRetrieveParams:

  • Sets provider from header with Gemini default.
  • Validates file_id and assigns it into the SDK request struct.

No issues found.


1221-1244: File delete PreCallback is consistent with retrieve

extractGeminiFileDeleteParams mirrors the retrieve variant for file_id and provider extraction and sets GeminiFileDeleteRequest.FileID. Looks correct.


1248-1255: NewGenAIRouter: inclusion of batch routes is correct

NewGenAIRouter now appends both file and batch route configs to the base GenAI routes. This is the expected integration point for the new batch features.

core/providers/gemini/types.go (3)

1578-1590: SDK job-state constants align with Google GenAI semantics

The GeminiJobState* constants mirror documented SDK job states and are suitable targets for ToGeminiJobState. This separation from internal batch states is clean.


1591-1651: SDK batch request/response types look compatible with GenAI SDK

The new SDK structs (GeminiBatchCreateRequestSDK, inline request/config, job/list/list-request/retrieve/cancel/delete) reflect the expected shapes:

  • Src polymorphism for file vs inline.
  • Name fields for resource IDs.
  • BatchJobs/NextPageToken for list responses.

They match how the GenAI router uses them and should serialize/deserialize correctly.


1655-1662: GeminiFileUploadRequest shape matches new transport and provider usage

Switching GeminiFileUploadRequest to:

  • File []byte (non-JSON),
  • Filename, MimeType, Purpose,
  • ResumableSessionID (non-JSON, internal),

is consistent with how genai.go parsers populate uploads and how the Gemini provider expects to send files upstream. No issues spotted with this struct definition.

Comment on lines 423 to 432
if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
// Check if this is a resumable upload init that was already handled
if err == ErrResumableUploadInit {
// Response was already written by the PreCallback, just return
return
}
g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Call cancel() before early-returning on ErrResumableUploadInit

When PreCallback returns ErrResumableUploadInit, the handler returns without ever calling cancel(), leaving the cancellable context from ConvertToBifrostContext un-cancelled on this path. Even though no upstream call is made, it’s safer and more consistent to cancel here.

Consider:

if config.PreCallback != nil {
    if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
        // Check if this is a resumable upload init that was already handled
        if err == ErrResumableUploadInit {
-           // Response was already written by the PreCallback, just return
-           return
+           // Response was already written by the PreCallback; clean up context and return
+           cancel()
+           return
        }
        g.sendError(ctx, bifrostCtx, config.ErrorConverter,
            newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
        return
    }
}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/router.go around lines 423 to 429, the
handler returns early when PreCallback yields ErrResumableUploadInit but fails
to call the cancel function created by ConvertToBifrostContext; update the error
branch so that you invoke cancel() before returning on ErrResumableUploadInit to
ensure the cancellable context is properly cleaned up, leaving the rest of the
error handling path unchanged.

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from db0afec to 4d7148c Compare December 7, 2025 17:01
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

♻️ Duplicate comments (10)
transports/bifrost-http/integrations/router.go (2)

423-428: Ensure cancellable context is cleaned up on ErrResumableUploadInit

When PreCallback returns ErrResumableUploadInit, the handler returns without invoking cancel(), unlike other paths that defer cancel() before handing off to batch/file/inference handlers. While this path doesn’t call into Bifrost, it’s more consistent (and future-proof against PreCallbacks that might spawn work) to call cancel() before returning here.


870-895: Remove remaining fmt.Printf debug logging from file handlers

The fmt.Printf debug statements in the file upload path and file response error handling are still present and log directly to stdout, bypassing the structured logger (g.logger). These should either be removed or converted to g.logger.Debug(...) calls with appropriate context before this stack is merged.

Also applies to: 1012-1013

core/providers/gemini/batch.go (1)

870-896: Still vulnerable to negative pending counts in BatchStats.

The past review comment flagging negative PendingRequestCount when resp.RequestCounts.Total == 0 remains unaddressed. Lines 889-890, 915-917, and 954-956 all perform Total - Completed without defensive checks, which produces invalid negative counts when Bifrost populates only Completed and Failed but leaves Total at zero.

Apply the defensive math suggested in the previous review:

+	total := resp.RequestCounts.Total
+	if total == 0 {
+		total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
+	}
+	pending := total - resp.RequestCounts.Completed
+	if pending < 0 {
+		pending = 0
+	}
+	success := resp.RequestCounts.Completed - resp.RequestCounts.Failed
+	if success < 0 {
+		success = 0
+	}
+
 	result.Metadata = &GeminiBatchMetadata{
 		Name:       resp.ID,
 		State:      ToGeminiJobState(resp.Status),
 		CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
 		BatchStats: &GeminiBatchStats{
-			RequestCount:           resp.RequestCounts.Total,
-			PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-			SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
+			RequestCount:           total,
+			PendingRequestCount:    pending,
+			SuccessfulRequestCount: success,
 		},
 	}

Mirror the same pattern in ToGeminiBatchListResponse (lines 954-956).

Also applies to: 898-933, 935-972

transports/bifrost-http/integrations/genai.go (7)

45-49: Unchecked error from crypto/rand.Read remains a security risk.

The past review comment remains valid: ignoring the error from crypto/rand.Read at line 47 could lead to predictable session IDs if the random source fails, creating a session hijacking vulnerability.

Check the error and either return it to the caller or use a fallback:

 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		return fmt.Sprintf("%d-%d", time.Now().UnixNano(), time.Now().Unix())
+	}
 	return hex.EncodeToString(bytes)
 }

52-67: Cleanup goroutine still lacks graceful shutdown.

The past review comment remains unaddressed: the cleanup goroutine spawned in init() runs indefinitely with no way to stop it, preventing clean application termination.

Add a stop channel:

+var cleanupStop = make(chan struct{})
+
 func init() {
 	go func() {
 		ticker := time.NewTicker(10 * time.Minute)
+		defer ticker.Stop()
-		for range ticker.C {
+		for {
+			select {
+			case <-ticker.C:
 				now := time.Now()
 				uploadSessions.Range(func(key, value interface{}) bool {
 					if session, ok := value.(*uploadSession); ok {
 						if now.Sub(session.CreatedAt) > time.Hour {
 							uploadSessions.Delete(key)
 						}
 					}
 					return true
 				})
+			case <-cleanupStop:
+				return
+			}
 		}
 	}()
 }

438-502: Src field still silently ignores invalid types.

The past review comment remains valid: the type switch at lines 464-482 silently ignores Src values that are neither string nor []interface{}, leaving both InputFileID and Requests empty. The downstream validation in BatchCreate then returns a generic error that doesn't indicate the root cause.

Add explicit validation:

 		// Handle src field - can be string (file reference) or array (inline requests)
 		switch src := sdkReq.Src.(type) {
 		case string:
 			// File-based input: src="files/display_name"
 			bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
 		case []interface{}:
 			// Inline requests: src=[{contents: [...], config: {...}}]
 			requests := make([]schemas.BatchRequestItem, 0, len(src))
 			for i, item := range src {
 				if itemMap, ok := item.(map[string]interface{}); ok {
 					customID := fmt.Sprintf("request-%d", i)
 					requests = append(requests, schemas.BatchRequestItem{
 						CustomID: customID,
 						Body:     itemMap,
 					})
+				} else {
+					return nil, fmt.Errorf("src array item %d must be an object, got %T", i, item)
 				}
 			}
 			bifrostReq.Requests = requests
+		default:
+			return nil, errors.New("src must be a string (file reference) or array of request objects")
 		}

867-934: Critical: multipart/related parsing still copies and can mangle binary payloads.

The past review comment remains unaddressed: converting the request body to a string at line 887 (strings.Split(string(body), delimiter)) and back to bytes at line 929 ([]byte(content)) allocates a full copy of the upload body at each conversion (the conversions themselves are lossless in Go), and strings.TrimSpace on each part can strip whitespace bytes that belong to the file content itself. Large or whitespace-edged binary uploads through the multipart/related endpoint therefore pay redundant copies and risk corruption.

Use bytes.Split() or bytes.Index() to work on the raw bytes:

-	// Split body by boundary
 	delimiter := "--" + boundary
-	parts := strings.Split(string(body), delimiter)
+	delimiterBytes := []byte(delimiter)
+	// Use bytes operations to preserve binary data
+	var parts [][]byte
+	remaining := body
+	for {
+		idx := bytes.Index(remaining, delimiterBytes)
+		if idx == -1 {
+			if len(remaining) > 0 {
+				parts = append(parts, remaining)
+			}
+			break
+		}
+		parts = append(parts, remaining[:idx])
+		remaining = remaining[idx+len(delimiterBytes):]
+	}
 
 	for _, part := range parts {
-		part = strings.TrimSpace(part)
-		if part == "" || part == "--" {
+		part = bytes.TrimSpace(part)
+		if len(part) == 0 || bytes.Equal(part, []byte("--")) {
 			continue
 		}
 
 		// Split headers from content
-		headerEnd := strings.Index(part, "\r\n\r\n")
+		headerEnd := bytes.Index(part, []byte("\r\n\r\n"))
 		if headerEnd == -1 {
-			headerEnd = strings.Index(part, "\n\n")
+			headerEnd = bytes.Index(part, []byte("\n\n"))
 			if headerEnd == -1 {
 				continue
 			}
 		}
 
-		headers := part[:headerEnd]
-		content := part[headerEnd:]
-		content = strings.TrimPrefix(content, "\r\n\r\n")
-		content = strings.TrimPrefix(content, "\n\n")
+		headers := string(part[:headerEnd])
+		content := part[headerEnd:]
+		content = bytes.TrimPrefix(content, []byte("\r\n\r\n"))
+		content = bytes.TrimPrefix(content, []byte("\n\n"))
 
 		// Check content type of this part
-		headersLower := strings.ToLower(headers)
+		headersLower := strings.ToLower(string(headers))
 		if strings.Contains(headersLower, "application/json") {
 			// This is metadata
 			var metadata struct {
 				File struct {
 					DisplayName string `json:"display_name"`
 					MimeType    string `json:"mime_type"`
 				} `json:"file"`
 			}
-			if err := sonic.Unmarshal([]byte(content), &metadata); err == nil {
+			if err := sonic.Unmarshal(content, &metadata); err == nil {
 				if metadata.File.DisplayName != "" {
 					uploadReq.Filename = metadata.File.DisplayName
 				}
 				if metadata.File.MimeType != "" {
 					uploadReq.MimeType = metadata.File.MimeType
 				}
 			}
 		} else {
 			// This is file content - keep as binary
-			uploadReq.File = []byte(content)
+			uploadReq.File = content
 		}
 	}

240-254: Remove debug fmt.Printf statements before merge.

The past review comment remains valid: numerous [DEBUG] print statements are scattered throughout this file at the annotated line ranges. These should be removed or replaced with proper structured logging before merging to production.

If debug logging is needed, use the existing logger with configurable log levels instead of fmt.Printf.

Also applies to: 792-793, 813-814, 818-819, 824-824, 835-836, 841-841, 854-855, 860-861, 1001-1001, 1013-1013, 1022-1022, 1034-1034, 1046-1046, 1062-1063, 1072-1077, 1108-1108, 1125-1125, 1129-1129, 1147-1148


1056-1056: Hardcoded purpose may be incorrect for non-batch uploads.

The past review comment remains valid: the purpose is hardcoded to "batch" for all resumable file uploads at line 1056. If this endpoint is used for other purposes (e.g., general file storage for chat context), this default will be incorrect.

Read the purpose from phase-1 metadata:

-	uploadReq.Purpose = "batch" // Default purpose for file uploads via GenAI API
+	// Use purpose from session metadata if available, otherwise default to "batch"
+	if session.Purpose != "" {
+		uploadReq.Purpose = session.Purpose
+	} else {
+		uploadReq.Purpose = "batch"
+	}

And store it during phase 1 in the session.


1138-1163: Upload URL construction ignores reverse proxy headers.

The past review comment remains valid: the upload URL at lines 1140-1145 uses ctx.Host() and ctx.IsTLS() directly without checking X-Forwarded-Proto and X-Forwarded-Host headers. When Bifrost runs behind a reverse proxy, this produces an internal URL that causes the client's subsequent upload requests to fail.

Follow the pattern from session.go:

-	// Build the upload URL for phase 2
-	// Use the request's host and scheme to build the URL
-	scheme := "http"
-	if ctx.IsTLS() {
-		scheme = "https"
+	// Determine scheme and host, respecting reverse proxy headers
+	scheme := "http"
+	if proto := string(ctx.Request.Header.Peek("X-Forwarded-Proto")); proto != "" {
+		if strings.Contains(proto, "https") {
+			scheme = "https"
+		}
+	} else if ctx.IsTLS() {
+		scheme = "https"
+	}
+	
+	host := string(ctx.Host())
+	if forwardedHost := string(ctx.Request.Header.Peek("X-Forwarded-Host")); forwardedHost != "" {
+		host = forwardedHost
 	}
-	host := string(ctx.Host())
 	uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)
🧹 Nitpick comments (1)
core/bifrost.go (1)

1133-1209: BatchDeleteRequest implementation is consistent with other batch APIs

The BatchDeleteRequest method follows the same validation, config resolution, base-provider/key selection, and retry pattern as BatchCancelRequest / BatchRetrieveRequest, and correctly tags ExtraFields on failure. The only noticeable duplication is the repeated provider/config/key boilerplate across batch methods; if this area evolves further, consider a small shared helper to reduce repetition, but it’s not blocking for this stack.
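If that refactor is ever pursued, a rough sketch of such a helper could use generics to wrap the shared validation and key resolution around each operation-specific call. All names below are illustrative and do not match the real bifrost core API:

```go
package main

import (
	"errors"
	"fmt"
)

// resolvedKey stands in for the provider/config/key triple the Batch*
// methods each resolve today (hypothetical shape).
type resolvedKey struct{ Provider, Key string }

func resolveProviderKey(provider string) (resolvedKey, error) {
	if provider == "" {
		return resolvedKey{}, errors.New("provider is required")
	}
	return resolvedKey{Provider: provider, Key: "key-for-" + provider}, nil
}

// withBatchBoilerplate runs the shared validation/resolution once, then
// delegates to the operation-specific provider call.
func withBatchBoilerplate[Req, Resp any](provider string, req Req,
	op func(resolvedKey, Req) (Resp, error)) (Resp, error) {
	var zero Resp
	rk, err := resolveProviderKey(provider)
	if err != nil {
		return zero, err
	}
	return op(rk, req)
}

func main() {
	out, err := withBatchBoilerplate("gemini", "batch-123",
		func(rk resolvedKey, id string) (string, error) {
			return fmt.Sprintf("deleted %s via %s", id, rk.Provider), nil
		})
	fmt.Println(out, err)
}
```

Each Batch* method would then shrink to one withBatchBoilerplate call plus its provider-specific request body.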

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between db0afec and 4d7148c.

📒 Files selected for processing (25)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/bedrock/batch.go (1 hunks)
  • core/providers/cerebras/batch.go (1 hunks)
  • core/providers/cohere/batch.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/files.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/batch.go (1 hunks)
  • core/providers/openai/batch.go (1 hunks)
  • core/providers/openrouter/batch.go (1 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (14 hunks)
  • transports/bifrost-http/integrations/router.go (9 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
  • core/providers/bedrock/batch.go
  • transports/bifrost-http/integrations/utils.go
  • core/schemas/batch.go
  • core/schemas/bifrost.go
  • core/providers/parasail/batch.go
  • core/providers/ollama/batch.go
  • core/providers/perplexity/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/sgl/batch.go
  • core/providers/openai/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/anthropic/batch.go
  • core/providers/gemini/batch.go
  • core/providers/mistral/batch.go
  • core/providers/openrouter/batch.go
  • core/schemas/provider.go
  • core/providers/elevenlabs/batch.go
  • core/providers/groq/batch.go
  • core/providers/gemini/files.go
  • core/providers/gemini/types.go
  • core/providers/cerebras/batch.go
  • core/providers/cohere/batch.go
  • transports/bifrost-http/integrations/genai.go
  • core/bifrost.go
  • core/providers/azure/batch.go
  • core/providers/vertex/batch.go
🧬 Code graph analysis (15)
core/providers/openai/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
transports/bifrost-http/integrations/router.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (36-36)
core/schemas/bifrost.go (2)
  • BatchDeleteRequest (103-103)
  • FileUploadRequest (104-104)
core/schemas/provider.go (1)
  • Provider (313-362)
core/providers/anthropic/batch.go (4)
core/providers/anthropic/anthropic.go (1)
  • AnthropicProvider (21-28)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/gemini/batch.go (4)
core/schemas/batch.go (11)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1582-1582)
  • GeminiJobStateRunning (1583-1583)
  • GeminiJobStateSucceeded (1584-1584)
  • GeminiJobStateFailed (1585-1585)
  • GeminiJobStateCancelling (1586-1586)
  • GeminiJobStateCancelled (1587-1587)
  • GeminiBatchJobResponseSDK (1618-1624)
  • GeminiBatchMetadata (1499-1512)
  • GeminiBatchStats (1433-1437)
  • GeminiBatchDest (1533-1536)
  • GeminiBatchListResponseSDK (1627-1630)
core/providers/utils/utils.go (2)
  • SetExtraHeaders (178-208)
  • MakeRequestWithContext (39-93)
core/schemas/provider.go (1)
  • Provider (313-362)
core/providers/mistral/batch.go (3)
core/providers/mistral/mistral.go (1)
  • MistralProvider (17-22)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/openrouter/batch.go (4)
core/providers/openrouter/openrouter.go (1)
  • OpenRouterProvider (18-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/elevenlabs/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/groq/batch.go (4)
core/providers/groq/groq.go (1)
  • GroqProvider (16-21)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/gemini/types.go (1)
core/schemas/models.go (1)
  • Model (109-129)
core/providers/cerebras/batch.go (4)
core/providers/cerebras/cerebras.go (1)
  • CerebrasProvider (16-21)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/cohere/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/bifrost.go (3)
core/schemas/bifrost.go (3)
  • BatchDeleteRequest (103-103)
  • BifrostError (364-373)
  • RequestType (83-83)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/provider.go (2)
  • Provider (313-362)
  • CustomProviderConfig (248-254)
core/providers/azure/batch.go (4)
core/providers/azure/azure.go (1)
  • AzureProvider (22-27)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/providers/vertex/batch.go (3)
core/providers/vertex/vertex.go (1)
  • VertexProvider (57-62)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (19)
core/providers/cerebras/batch.go (1)

35-38: LGTM! Consistent scaffolding for batch delete support.

The BatchDelete stub follows the established pattern for unsupported batch operations in the Cerebras provider, correctly returning an unsupported operation error.

core/providers/elevenlabs/batch.go (1)

35-38: LGTM! Consistent implementation.

The BatchDelete method correctly implements the unsupported operation pattern, maintaining consistency with other batch operations in the Elevenlabs provider.

core/providers/vertex/batch.go (1)

35-38: LGTM! Proper scaffolding for Vertex AI.

The BatchDelete stub correctly follows the established pattern for unsupported batch operations in the Vertex AI provider.

core/providers/openrouter/batch.go (1)

35-38: LGTM! Consistent with OpenRouter's batch operation pattern.

The BatchDelete stub properly returns an unsupported operation error, maintaining consistency with other batch methods.

core/providers/cohere/batch.go (1)

35-38: LGTM! Proper unsupported operation handling.

The BatchDelete method correctly implements the unsupported operation pattern for the Cohere provider.

core/providers/azure/batch.go (1)

438-441: LGTM! Appropriate unsupported operation for Azure.

The BatchDelete stub correctly returns an unsupported operation error. While Azure implements other batch operations, batch deletion is not yet supported, and this stub provides the appropriate interface implementation.

core/providers/groq/batch.go (1)

35-38: LGTM! Consistent implementation for Groq.

The BatchDelete stub properly follows the established pattern for unsupported batch operations.

core/schemas/provider.go (3)

182-182: LGTM! Proper extension of AllowedRequests.

The BatchDelete field correctly extends the AllowedRequests struct, enabling authorization control for batch delete operations.


231-232: LGTM! Correct switch case addition.

The BatchDeleteRequest case properly maps to the BatchDelete field, maintaining consistency with other batch operation mappings.


350-351: LGTM! Well-integrated interface extension.

The BatchDelete method addition to the Provider interface is properly documented and positioned. All providers in this PR now implement this interface method, ensuring consistent API surface across the stack.

core/providers/sgl/batch.go (1)

35-38: BatchDelete stub matches existing unsupported pattern

The SGL BatchDelete implementation correctly follows the established "unsupported operation" pattern used for the other batch methods on this provider; no issues from a core/stack perspective.

core/providers/openai/batch.go (1)

597-600: OpenAI BatchDelete correctly surfaced as unsupported

The OpenAI BatchDelete method cleanly returns NewUnsupportedOperationError, which is consistent with the rest of the stack given there is no upstream OpenAI batch delete API wired here.

core/providers/mistral/batch.go (1)

35-38: Mistral BatchDelete unsupported implementation looks correct

The Mistral BatchDelete method matches the existing unsupported Batch* pattern and is wired to the proper BatchDeleteRequest request type; this aligns with the rest of the stack.

core/providers/anthropic/batch.go (1)

783-786: Anthropic BatchDelete stub aligns with provider capabilities

The Anthropic BatchDelete method correctly exposes the API surface while returning the standardized unsupported-operation error, which is appropriate given there is no Anthropic batch delete endpoint wired in this stack.

transports/bifrost-http/integrations/router.go (3)

81-89: BatchRequest extended cleanly for delete operations

Adding DeleteRequest *schemas.BifrostBatchDeleteRequest to BatchRequest is a straightforward extension and keeps all batch shapes in one place; it fits well with the new delete path in handleBatchRequest.


159-162: BatchDeleteResponseConverter wiring is consistent with existing batch converters

The new BatchDeleteResponseConverter type and the corresponding RouteConfig.BatchDeleteResponseConverter field mirror the pattern used for other batch responses, keeping integration-specific shaping out of the core router. This should make plugging Gemini/GenAI batch-delete SDK responses in genai.go straightforward without impacting other integrations.

Also applies to: 277-291


822-842: BatchDelete routing matches other batch operations

The new schemas.BatchDeleteRequest branch in handleBatchRequest is structured identically to create/list/retrieve/cancel/results: it validates DeleteRequest, calls BatchDeleteRequest on the Bifrost client, runs PostCallback, then applies the optional BatchDeleteResponseConverter. This is the right place in the stack for the new delete capability and looks correct end‑to‑end.

core/providers/gemini/batch.go (1)

986-1043: LGTM! BatchDelete implementation follows established patterns.

The BatchDelete method correctly mirrors the structure of BatchCancel, including URL construction, header handling, error checking, and response formatting. Accepting both 200 and 204 status codes (line 1029) is appropriate for DELETE operations.

core/providers/gemini/types.go (1)

1578-1651: LGTM! SDK batch types are well-structured.

The new GenAI SDK job state constants (lines 1578-1589) and SDK batch types (lines 1591-1651) provide a clean separation between internal batch states and SDK-facing representations. The type definitions are clear and properly documented.

Note: The validation issue for the Src field mentioned in past reviews is a transport-layer concern (in genai.go) rather than a type definition issue, and has been flagged separately.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (13)
transports/bifrost-http/integrations/utils.go (1)

188-197: Remove raw body debug logging from sendSuccess.

fmt.Printf at Line 196 logs the full JSON response body to stdout, which is both noisy in production and risks leaking sensitive data. It was already flagged in a previous review.

Recommend dropping the log entirely (or, if you really need it, switch to the structured logger and avoid logging full payloads):

 	responseBody, err := sonic.Marshal(response)
@@
-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
-	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
+	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
+	ctx.SetBody(responseBody)
transports/bifrost-http/integrations/router.go (3)

864-895: Remove fmt.Printf debug logging from file upload path.

The file‑upload handler still uses several fmt.Printf debug statements (Lines 870, 877, 881, 883, 885, 889, 892, 894). These bypass the router logger and will spam stdout in production; this was already called out in earlier reviews.

If you still need diagnostics here, switch to g.logger.Debug(...) with structured fields and avoid printing raw errors in multiple places; otherwise, just remove them:

-		fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename)
@@
-			errMsg := "unknown error"
-			if bifrostErr.Error != nil {
-				errMsg = bifrostErr.Error.Message
-			}
-			fmt.Printf("[DEBUG] router: FileUploadRequest error: %s (provider=%s)\n", errMsg, fileReq.UploadRequest.Provider)
@@
-		fmt.Printf("[DEBUG] router: FileUploadRequest success, response ID=%s\n", fileResponse.ID)
@@
-			fmt.Printf("[DEBUG] router: calling PostCallback\n")
@@
-				fmt.Printf("[DEBUG] router: PostCallback error: %v\n", err)
@@
-			fmt.Printf("[DEBUG] router: PostCallback success\n")
@@
-			fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n")
@@
-			fmt.Printf("[DEBUG] router: FileUploadResponseConverter done, err=%v\n", err)

1011-1013: Remove final fmt.Printf on file response conversion errors.

The bottom of handleFileRequest still prints conversion errors via fmt.Printf before sending an error response, duplicating prior feedback and mixing stdout logging with the structured error path:

-	if err != nil {
-		fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err)
-		g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response"))
-		return
-	}
+	if err != nil {
+		g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to convert file response"))
+		return
+	}

423-429: Call cancel() before early‑returning on ErrResumableUploadInit.

When PreCallback returns ErrResumableUploadInit, the handler exits without invoking the cancel function returned by ConvertToBifrostContext, so that cancellable context is never cleaned up on this path. This was already raised in a previous review.

Recommend:

 		if config.PreCallback != nil {
 			if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
-				// Check if this is a resumable upload init that was already handled
-				if err == ErrResumableUploadInit {
-					// Response was already written by the PreCallback, just return
-					return
-				}
+				// Check if this is a resumable upload init that was already handled
+				if err == ErrResumableUploadInit {
+					// Response was already written by the PreCallback; clean up context and return
+					cancel()
+					return
+				}
 				g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
 				return
 			}
 		}
transports/bifrost-http/integrations/genai.go (8)

52-67: Provide a shutdown signal for the cleanup goroutine

The init goroutine with time.NewTicker(10 * time.Minute) runs forever and has no way to be stopped; the goroutine and its ticker are leaked, and a graceful-shutdown path cannot wait for it to finish. Consider wiring a stop channel or context and stopping the ticker when shutting down the HTTP server, as previously suggested.


207-261: Resumable upload phase‑2 response converter still uses debug fmt.Printf

FileUploadResponseConverter for the POST resumable endpoint logs multiple [DEBUG] messages via fmt.Printf, including dumping the converted response. These were previously flagged for removal or replacement with structured logging and will be noisy in production.

You can either remove these debug prints entirely or switch to the existing logger with a concise message (no full JSON marshal of the response), and drop the fmt usage if nothing else needs it.
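For reference, a leveled-logger sketch using the stdlib log/slog (the project's own logger interface may differ) shows why routing diagnostics through a logger is safe: at the default Info level, Debug lines never reach output, and no full response body needs to be marshaled.

```go
package main

import (
	"bytes"
	"fmt"
	"log/slog"
)

// logUploadResult logs one Debug and one Info line into a buffer at
// LevelInfo and reports which of them actually appeared.
func logUploadResult() (debugShown, infoShown bool) {
	var buf bytes.Buffer
	logger := slog.New(slog.NewTextHandler(&buf, &slog.HandlerOptions{Level: slog.LevelInfo}))
	logger.Debug("resumable upload phase 2", "session", "abc") // suppressed at Info
	logger.Info("upload complete", "file", "files/abc123")     // kept; no full-body marshal
	out := buf.Bytes()
	return bytes.Contains(out, []byte("phase 2")), bytes.Contains(out, []byte("upload complete"))
}

func main() {
	d, i := logUploadResult()
	fmt.Println(d, i) // false true
}
```

Lowering the handler level to slog.LevelDebug re-enables the diagnostic lines when needed, without code changes at each call site.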


442-488: Validate Src type in SDK batch create to avoid opaque downstream errors

GeminiBatchCreateRequestSDK.Src is type‑switched only for string and []interface{}; other types (or non‑map items inside the slice) are silently ignored, so the provider later fails with a generic “either input_file_id or requests must be provided” error. This makes bad src payloads hard to debug.

Consider rejecting unsupported shapes early:

-        switch src := sdkReq.Src.(type) {
+        switch src := sdkReq.Src.(type) {
         case string:
             bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
         case []interface{}:
             // existing loop...
+        default:
+            return nil, fmt.Errorf("src must be a string (file reference) or array of request objects")
         }

and, inside the []interface{} branch, validate that each element is a map[string]interface{} and return a clear error if not, instead of silently skipping it.


1011-1063: Resumable phase‑2 handler still has debug prints and hardcoded purpose

parseGeminiResumableUploadPhase2:

  • Uses multiple fmt.Printf debug statements that should be removed or replaced with structured logging before production.
  • Hardcodes uploadReq.Purpose = "batch", which may be incorrect if the resumable upload is reused for non‑batch purposes (prior feedback).

Consider:

  • Making Purpose part of the phase‑1 metadata/session so it can be preserved here.
  • Dropping the debug prints or routing them through the existing logger at debug level with concise messages.
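The first point could be sketched as follows; the field and function names are assumptions mirroring this file's uploadSession/uploadSessions, with the Purpose field as the proposed addition:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// uploadSession carries phase-1 state across to phase 2.
type uploadSession struct {
	CreatedAt time.Time
	Purpose   string // captured from phase-1 metadata
}

var uploadSessions sync.Map

// beginResumableUpload records the caller-supplied purpose in phase 1.
func beginResumableUpload(sessionID, purpose string) {
	uploadSessions.Store(sessionID, &uploadSession{CreatedAt: time.Now(), Purpose: purpose})
}

// resolvePurpose falls back to "batch" only when phase 1 supplied none.
func resolvePurpose(sessionID string) string {
	if v, ok := uploadSessions.Load(sessionID); ok {
		if s, ok := v.(*uploadSession); ok && s.Purpose != "" {
			return s.Purpose
		}
	}
	return "batch"
}

func main() {
	beginResumableUpload("sess-1", "fine-tune")
	beginResumableUpload("sess-2", "")
	fmt.Println(resolvePurpose("sess-1"), resolvePurpose("sess-2")) // fine-tune batch
}
```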

1066-1079: Post‑callback correctly marks upload as final; drop heavy debug logging

setResumableUploadFinalStatus correctly sets X-Goog-Upload-Status: final, but marshaling the full response via sonic.Marshal and printing it with fmt.Printf is expensive and noisy, and was previously flagged.

You can keep just the header mutation and, if needed, log a short message via the structured logger without serializing the whole response.


45-48: Handle crypto/rand.Read error when generating session IDs

generateSessionID ignores the error from rand.Read, which can theoretically produce predictable IDs if the random source fails. At minimum, check the error and fall back to a timestamp‑based ID (or log and fail fast) instead of silently proceeding.

 func generateSessionID() string {
-	bytes := make([]byte, 16)
-	rand.Read(bytes)
-	return hex.EncodeToString(bytes)
+	bytes := make([]byte, 16)
+	if _, err := rand.Read(bytes); err != nil {
+		// Very unlikely, but avoid silently using zero bytes.
+		// Fall back to a timestamp-based ID so it's still unique-ish.
+		return fmt.Sprintf("%d-%d", time.Now().UnixNano(), time.Now().UnixNano())
+	}
+	return hex.EncodeToString(bytes)
 }

867-934: Fix binary data corruption in multipart/related parsing

parseGeminiMultipartRelated currently does:

  • parts := strings.Split(string(body), delimiter)
  • Works with part as a string
  • Assigns file content via uploadReq.File = []byte(content)

The string conversions are lossless in Go but copy the entire body at each step, and the TrimSpace applied to each whole part can strip whitespace bytes that belong to the file content, as previously flagged.

Refactor to operate purely on []byte:

+   import "bytes"
...
-   delimiter := "--" + boundary
-   parts := strings.Split(string(body), delimiter)
+   delimiter := []byte("--" + boundary)
+   parts := bytes.Split(body, delimiter)
...
-   for _, part := range parts {
-       part = strings.TrimSpace(part)
-       if part == "" || part == "--" {
+   for _, partBytes := range parts {
+       partBytes = bytes.TrimSpace(partBytes)
+       if len(partBytes) == 0 || bytes.Equal(partBytes, []byte("--")) {
            continue
        }
-       headerEnd := strings.Index(part, "\r\n\r\n")
+       headerSep := []byte("\r\n\r\n")
+       headerEnd := bytes.Index(partBytes, headerSep)
        if headerEnd == -1 {
-           headerEnd = strings.Index(part, "\n\n")
+           headerSep = []byte("\n\n")
+           headerEnd = bytes.Index(partBytes, headerSep)
            if headerEnd == -1 {
                continue
            }
        }
-
-       headers := part[:headerEnd]
-       content := part[headerEnd:]
-       content = strings.TrimPrefix(content, "\r\n\r\n")
-       content = strings.TrimPrefix(content, "\n\n")
+       headers := string(partBytes[:headerEnd])
+       content := partBytes[headerEnd+len(headerSep):]
...
-       if strings.Contains(headersLower, "application/json") {
+       if strings.Contains(headersLower, "application/json") {
            // metadata
-           if err := sonic.Unmarshal([]byte(content), &metadata); err == nil {
+           if err := sonic.Unmarshal(content, &metadata); err == nil {
                ...
        } else {
            // This is file content
-           uploadReq.File = []byte(content)
+           uploadReq.File = append([]byte(nil), content...)
        }

This avoids the full-body string round-trips and their copies.


1115-1163: Support reverse‑proxy headers when constructing resumable upload URL

extractGeminiFileUploadParams builds the resumable upload URL using ctx.IsTLS() and ctx.Host() directly. Behind a reverse proxy, this will return the internal host/scheme, so the client receives an unusable URL. Prior review already called this out.

You can mirror the pattern from session.go and honor X-Forwarded-Proto / X-Forwarded-Host:

-        // Build the upload URL for phase 2
-        // Use the request's host and scheme to build the URL
-        scheme := "http"
-        if ctx.IsTLS() {
-            scheme = "https"
-        }
-        host := string(ctx.Host())
+        // Build the upload URL for phase 2, honoring reverse-proxy headers
+        scheme := "http"
+        if xfProto := strings.ToLower(string(ctx.Request.Header.Peek("X-Forwarded-Proto"))); xfProto != "" {
+            if strings.Contains(xfProto, "https") {
+                scheme = "https"
+            }
+        } else if ctx.IsTLS() {
+            scheme = "https"
+        }
+        host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
+        if host == "" {
+            host = string(ctx.Host())
+        }
         uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)

This makes the resumable URL usable for clients when Bifrost is fronted by a proxy.
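The header-precedence logic can be isolated into a small helper. The sketch below uses a plain map in place of fasthttp's header API, and `resolveExternalOrigin` is a hypothetical name chosen for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveExternalOrigin derives the client-facing scheme and host,
// preferring reverse-proxy headers over the direct connection. In the real
// handler the inputs would come from ctx.Request.Header, ctx.IsTLS(),
// and ctx.Host().
func resolveExternalOrigin(headers map[string]string, directTLS bool, directHost string) (string, string) {
	scheme := "http"
	if xf := strings.ToLower(headers["X-Forwarded-Proto"]); strings.Contains(xf, "https") {
		scheme = "https"
	} else if xf == "" && directTLS {
		scheme = "https"
	}
	host := headers["X-Forwarded-Host"]
	if host == "" {
		host = directHost
	}
	return scheme, host
}

func main() {
	h := map[string]string{
		"X-Forwarded-Proto": "https",
		"X-Forwarded-Host":  "api.example.com",
	}
	scheme, host := resolveExternalOrigin(h, false, "10.0.0.5:8080")
	fmt.Printf("%s://%s/genai/upload/v1beta/files/resumable/SESSION_ID\n", scheme, host)
}
```

With the proxy headers present, the resumable URL is built from the external origin rather than the internal listener address.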

core/providers/gemini/batch.go (1)

871-972: Defensively derive batch stats to avoid negative pending/success counts

The current stats math:

  • Uses RequestCount = resp.RequestCounts.Total
  • Computes PendingRequestCount = Total - Completed
  • Computes SuccessfulRequestCount = Completed - Failed

In BifrostBatchRetrieveResponse (and sometimes list entries), RequestCounts.Total is left at zero while Completed/Failed are non‑zero, so PendingRequestCount and SuccessfulRequestCount can become negative, which is invalid for counts and can break SDK consumers. This matches the previously flagged issue.

A more defensive pattern in all three converters (ToGeminiBatchJobResponse, ToGeminiBatchRetrieveResponse, ToGeminiBatchListResponse) would be:

-    result.Metadata = &GeminiBatchMetadata{
-        Name:       resp.ID,
-        State:      ToGeminiJobState(resp.Status),
-        CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
-        BatchStats: &GeminiBatchStats{
-            RequestCount:           resp.RequestCounts.Total,
-            PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-            SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-        },
-    }
+    total := resp.RequestCounts.Total
+    if total == 0 {
+        total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
+    }
+    if total < 0 {
+        total = 0
+    }
+    success := resp.RequestCounts.Completed
+    if success < 0 {
+        success = 0
+    }
+    failures := resp.RequestCounts.Failed
+    if failures < 0 {
+        failures = 0
+    }
+    pending := total - (success + failures)
+    if pending < 0 {
+        pending = 0
+    }
+
+    result.Metadata = &GeminiBatchMetadata{
+        Name:       resp.ID,
+        State:      ToGeminiJobState(resp.Status),
+        CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
+        BatchStats: &GeminiBatchStats{
+            RequestCount:           total,
+            PendingRequestCount:    pending,
+            SuccessfulRequestCount: success,
+        },
+    }

and analogously for the list converter (using batch.RequestCounts instead of resp.RequestCounts), so RequestCount ≥ 0, PendingRequestCount ≥ 0, and RequestCount ≈ success + failures + pending always hold.

🧹 Nitpick comments (2)
core/providers/gemini/files.go (1)

465-549: Gemini file SDK converters look consistent; optional MIME/URI enrichment

The new ToGeminiFileUploadResponse, ToGeminiFileListResponse, and ToGeminiFileRetrieveResponse mappings are consistent with the Gemini SDK schema and reuse formatGeminiTimestamp/toGeminiFileState/safeDerefInt64 correctly. The default MimeType and missing URI in list responses are acceptable given the current BifrostFile* shapes, but if you later plumb MIME type and storage URI into BifrostFileListResponse/BifrostFileRetrieveResponse, it would be worth populating those here too for parity with upload responses.

transports/bifrost-http/integrations/genai.go (1)

1081-1112: Resumable upload param extraction and session cleanup look correct

extractGeminiResumableUploadParams safely:

  • Validates session_id from the path.
  • Loads provider and original filename from uploadSessions.
  • Deletes the session once consumed.
  • Stashes provider and filename in bifrostCtx for downstream converters.

Apart from the debug fmt.Printf, this flow is sound and avoids session leaks.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between db0afec and 4d7148c.

📒 Files selected for processing (25)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/bedrock/batch.go (1 hunks)
  • core/providers/cerebras/batch.go (1 hunks)
  • core/providers/cohere/batch.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/files.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/batch.go (1 hunks)
  • core/providers/openai/batch.go (1 hunks)
  • core/providers/openrouter/batch.go (1 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (14 hunks)
  • transports/bifrost-http/integrations/router.go (9 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (13)
  • core/providers/perplexity/batch.go
  • core/providers/cerebras/batch.go
  • core/providers/elevenlabs/batch.go
  • core/providers/openai/batch.go
  • core/providers/bedrock/batch.go
  • core/providers/ollama/batch.go
  • core/providers/vertex/batch.go
  • core/providers/cohere/batch.go
  • core/schemas/bifrost.go
  • core/schemas/batch.go
  • core/providers/openrouter/batch.go
  • core/providers/groq/batch.go
  • core/providers/anthropic/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/mistral/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/schemas/provider.go
  • core/providers/sgl/batch.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/utils.go
  • core/providers/gemini/files.go
  • core/providers/gemini/types.go
  • core/providers/azure/batch.go
  • core/providers/parasail/batch.go
  • core/bifrost.go
  • core/providers/gemini/batch.go
🧬 Code graph analysis (9)
core/providers/mistral/batch.go (5)
core/providers/mistral/mistral.go (1)
  • MistralProvider (17-22)
core/schemas/account.go (1)
  • Key (8-17)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/sgl/batch.go (4)
core/providers/sgl/sgl.go (1)
  • SGLProvider (18-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (391-402)
transports/bifrost-http/integrations/genai.go (5)
core/schemas/provider.go (1)
  • Provider (313-362)
core/schemas/bifrost.go (11)
  • ModelProvider (32-32)
  • Gemini (48-48)
  • FileUploadRequest (104-104)
  • FileListRequest (105-105)
  • FileRetrieveRequest (106-106)
  • FileDeleteRequest (107-107)
  • BatchCreateRequest (98-98)
  • BatchListRequest (99-99)
  • BatchRetrieveRequest (100-100)
  • BatchCancelRequest (101-101)
  • BatchDeleteRequest (103-103)
transports/bifrost-http/integrations/router.go (5)
  • EmbeddingResponseConverter (129-129)
  • RequestParser (217-217)
  • FileRequestConverter (105-105)
  • FileRequest (92-99)
  • BatchRequest (81-89)
core/providers/gemini/types.go (10)
  • GeminiFileUploadRequest (1656-1662)
  • Type (779-779)
  • GeminiFileListRequest (1665-1669)
  • GeminiFileRetrieveRequest (1672-1674)
  • GeminiFileDeleteRequest (1677-1679)
  • GeminiBatchCreateRequestSDK (1596-1602)
  • GeminiBatchListRequestSDK (1633-1636)
  • GeminiBatchRetrieveRequestSDK (1639-1641)
  • GeminiBatchCancelRequestSDK (1644-1646)
  • GeminiBatchDeleteRequestSDK (1649-1651)
core/schemas/batch.go (10)
  • BifrostBatchCreateRequest (65-82)
  • BatchRequestItem (31-37)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchListRequest (112-126)
  • BifrostBatchRetrieveRequest (143-149)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchCancelRequest (190-196)
  • BifrostBatchCancelResponse (199-208)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/gemini/files.go (1)
core/schemas/files.go (1)
  • BifrostFileUploadResponse (89-105)
core/providers/gemini/types.go (1)
core/schemas/models.go (1)
  • Model (109-129)
core/providers/parasail/batch.go (3)
core/providers/parasail/parasail.go (1)
  • ParasailProvider (17-22)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (364-373)
  • BatchDeleteRequest (103-103)
core/bifrost.go (4)
core/schemas/bifrost.go (5)
  • BatchDeleteRequest (103-103)
  • BifrostError (364-373)
  • ErrorField (382-389)
  • BifrostErrorExtraFields (431-435)
  • RequestType (83-83)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/provider.go (1)
  • Provider (313-362)
core/schemas/account.go (1)
  • Key (8-17)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (7)
  • BatchStatus (5-5)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1582-1582)
  • GeminiJobStateRunning (1583-1583)
  • GeminiJobStateSucceeded (1584-1584)
  • GeminiJobStateFailed (1585-1585)
  • GeminiJobStateCancelling (1586-1586)
  • GeminiJobStateCancelled (1587-1587)
  • GeminiBatchJobResponseSDK (1618-1624)
  • GeminiBatchMetadata (1499-1512)
  • GeminiBatchStats (1433-1437)
  • GeminiBatchDest (1533-1536)
  • GeminiBatchListResponseSDK (1627-1630)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (25)
core/providers/sgl/batch.go (1)

35-38: BatchDelete stub is consistent with other unsupported batch ops.

The new BatchDelete method cleanly mirrors the existing unsupported Batch* methods for SGL; interface coverage looks good.

core/providers/azure/batch.go (1)

437-441: Azure BatchDelete unsupported stub looks correct.

Method signature and unsupported‑operation behavior are consistent with other Azure batch methods and the Provider interface.

core/schemas/provider.go (1)

161-183: BatchDelete wiring in schemas is consistent with existing batch APIs.

Adding BatchDelete to AllowedRequests, IsOperationAllowed, and the Provider interface cleanly extends the existing batch surface; it matches patterns for create/list/retrieve/cancel/results.

Also applies to: 196-233, 340-352

core/providers/mistral/batch.go (1)

35-38: Mistral BatchDelete unsupported implementation matches existing pattern.

The method correctly returns NewUnsupportedOperationError with BatchDeleteRequest and the provider key, consistent with other batch stubs.

core/bifrost.go (1)

1133-1209: BatchDeleteRequest implementation is consistent with other batch methods.

Validation, config lookup, key selection, retry handling, and error ExtraFields assignment all mirror the existing Batch* methods; API surface and behavior look coherent.

core/providers/parasail/batch.go (1)

35-38: Parasail BatchDelete stub aligns with other unsupported batch operations.

Implementation is straightforward and matches the existing unsupported Batch* methods for this provider.

transports/bifrost-http/integrations/router.go (3)

80-89: Extending BatchRequest with DeleteRequest looks correct.

New DeleteRequest *BifrostBatchDeleteRequest field integrates cleanly with the existing batch wrapper; used only when Type == BatchDeleteRequest.


159-162: BatchDelete response converter wiring is consistent with existing batch converters.

The BatchDeleteResponseConverter type and corresponding RouteConfig.BatchDeleteResponseConverter field follow the same pattern as the other Batch* converters; no issues.

Also applies to: 286-292


705-855: BatchDelete handling in handleBatchRequest is aligned with other batch operations.

The new schemas.BatchDeleteRequest case validates DeleteRequest, calls BatchDeleteRequest on the client, runs PostCallback, and applies the optional BatchDeleteResponseConverter, mirroring the other batch cases; behavior looks correct.

Also applies to: 822-843

core/providers/gemini/batch.go (2)

847-868: Bifrost → Gemini job state mapping looks reasonable

ToGeminiJobState covers all schemas.BatchStatus variants and degrades unknown values to JOB_STATE_PENDING. Mapping BatchStatusExpired to JOB_STATE_FAILED is a defensible choice given the SDK state set.


986-1043: BatchDelete implementation aligns with other Gemini batch operations

The new BatchDelete method correctly validates batch_id, builds the URL for both batches/{id} and already-prefixed IDs, sends DELETE, and treats 200/204 as success while populating latency metadata. This matches the pattern in BatchCancel and the other batch methods.

transports/bifrost-http/integrations/genai.go (12)

175-235: File upload request conversion and provider resolution look correct

The primary /upload/v1beta/files route correctly converts GeminiFileUploadRequest into BifrostFileUploadRequest, resolves the provider from context, and prefers RawResponse when present. This matches the patterns used for other providers and keeps Gemini‑specific SDK wrapping in the provider layer.


263-314: PUT resumable continuation path mirrors POST semantics correctly

The PUT resumable route shares the same request parser, provider resolution, and response shaping as the POST variant, which is useful for client compatibility. The “original filename from context” logic and final‑status post‑callback are wired consistently.


322-433: File list / retrieve / delete routes: conversions look consistent

The new Gemini file list, retrieve, and delete routes correctly:

  • Read provider from x-model-provider header with Gemini as default.
  • Populate the GeminiFile*Request types from path/query params.
  • Convert to BifrostFile*Request and back to SDK‑style responses, returning an empty object on delete (matching Gemini behavior).

No functional issues stand out here.


503-543: Batch list / retrieve / cancel route wiring matches core schemas

The new batch list, retrieve, and cancel routes:

  • Correctly default provider to Gemini or take it from x-model-provider.
  • Map SDK request types into the appropriate BifrostBatch*Request structs.
  • Delegate response shaping to the Gemini converters, while honoring RawResponse passthrough.

The path patterns and PreCallback helpers (extractGeminiBatchListQueryParams, extractGeminiBatchIDFromPath, extractGeminiBatchIDFromPathCancel) are consistent with the underlying core behavior.


621-658: Batch delete route wiring is consistent with other batch endpoints

The DELETE /v1beta/batches/{batch_id} route properly:

  • Normalizes batch_id via extractGeminiBatchIDFromPath.
  • Wraps into BifrostBatchDeleteRequest.
  • Returns an empty object on success, which matches Gemini’s behavior for delete.

This looks correct and aligned with the new provider‑level BatchDelete.


664-744: Good reuse of provider extraction helpers for all batch endpoints

The extractGeminiBatchCreateParams, extractGeminiBatchListQueryParams, and extractGeminiBatchIDFromPath* helpers consistently:

  • Derive provider from x-model-provider (defaulting to Gemini).
  • Normalize model or batch_id path parameters.
  • Populate the respective SDK request types in place.

This keeps transport concerns localized and avoids duplicating logic per route.


780-806: Resumable vs standard/multipart detection logic is reasonable

parseGeminiFileUploadRequest distinguishes:

  • X-Goog-Upload-Protocol of resumable/multipart → resumable handler.
  • multipart/* → standard multipart form handler.
  • Otherwise → raw body.

This routing matches the GenAI SDK patterns and should let both SDK and non‑SDK clients interoperate.
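The dispatch described above reduces to a small classifier. The sketch below is a standalone approximation (`classifyUpload` is a hypothetical name; the real code reads fasthttp request headers):

```go
package main

import (
	"fmt"
	"strings"
)

// classifyUpload mirrors the upload routing: the X-Goog-Upload-Protocol
// header wins, then the Content-Type, otherwise the raw body path.
func classifyUpload(uploadProtocol, contentType string) string {
	switch strings.ToLower(uploadProtocol) {
	case "resumable", "multipart":
		return "resumable"
	}
	if strings.HasPrefix(strings.ToLower(contentType), "multipart/") {
		return "multipart-form"
	}
	return "raw"
}

func main() {
	fmt.Println(classifyUpload("resumable", ""))
	fmt.Println(classifyUpload("", "multipart/form-data; boundary=B"))
	fmt.Println(classifyUpload("", "application/octet-stream"))
}
```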


937-982: Multipart/form-data upload handler looks solid

parseGeminiMultipartUpload correctly:

  • Parses optional JSON metadata for displayName.
  • Extracts the file part from ctx.MultipartForm().
  • Reads exactly fileHeader.Size bytes into memory and falls back to the original filename when metadata is absent.

No obvious correctness issues here.


985-1009: Raw body upload path is straightforward and safe

parseGeminiRawUpload validates non‑empty body, copies it into uploadReq.File, and attempts to derive a filename from Content-Disposition. That’s a reasonable default path for simple clients and keeps the request body immutable.


1171-1245: File list/retrieve/delete param extractors are consistent and safe

The extractGeminiFileListQueryParams, extractGeminiFileRetrieveParams, and extractGeminiFileDeleteParams helpers all:

  • Normalize provider from x-model-provider.
  • Validate required path params (file_id).
  • Populate the Gemini SDK request types in place.

These helpers are consistent with the batch equivalents and look correct.


1247-1252: GenAI router now composes batch, file, and core routes cleanly

NewGenAIRouter now appends CreateGenAIBatchRouteConfigs after the existing chat and file routes, which keeps the stack of GenAI endpoints cohesive. No routing conflicts are apparent given the distinct path prefixes.


812-865: Consider returning an error for invalid or missing metadata instead of silent failure

When sonic.Unmarshal fails to parse the metadata JSON, the function only logs a debug message and returns nil, leaving uploadReq with uninitialized fields. This defers the error detection to a downstream generic "file required" check, making diagnosis harder.

Return an error explicitly when metadata parsing fails or when required fields (DisplayName, MimeType) are missing, rather than proceeding with a partially initialized upload session.

core/providers/gemini/types.go (2)

1578-1651: SDK batch job state and response types align with converters

The added Google GenAI SDK job state constants and the GeminiBatch*SDK types (GeminiBatchJobResponseSDK, list/create/list‑request/retrieve/cancel/delete) match how the transport and provider converters are using them. The separation between internal batch states (GeminiBatchState*) and SDK‑facing job states (GeminiJobState*) is clear and makes the ToGeminiJobState mapping straightforward.


1655-1662: Updated GeminiFileUploadRequest supports MIME type and resumable metadata cleanly

The redesigned GeminiFileUploadRequest with File, Filename, MimeType, Purpose, and internal ResumableSessionID matches the new upload parsing logic in genai.go and decouples this type from core schemas. This should make it easier to evolve Gemini‑specific file features without touching core schemas.

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 4d7148c to ee1a292 Compare December 7, 2025 18:18
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (11)
core/providers/elevenlabs/batch.go (1)

36-37: Use blank identifiers for unused parameters.

The existing Batch* methods in this file consistently use _ for unused parameters. Update BatchDelete to follow the same pattern.

Apply this diff:

-func (provider *ElevenlabsProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *ElevenlabsProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
transports/bifrost-http/integrations/router.go (2)

873-897: Remove debug statements before merging.

Multiple fmt.Printf debug statements remain in the file upload handling paths. These should be removed or replaced with structured logging.

Also applies to: 1015-1015


427-430: Call cancel() before returning on ErrResumableUploadInit.

The handler returns early without calling the cancel function, leaving the context un-cancelled on this path.

transports/bifrost-http/integrations/genai.go (8)

46-50: Handle error from crypto/rand.Read.

Ignoring the error from crypto/rand.Read could lead to predictable session IDs if the random source fails, creating a security risk.


53-68: Provide shutdown mechanism for cleanup goroutine.

The cleanup goroutine runs indefinitely with no way to stop it, preventing graceful application shutdown.


469-488: Add default case to handle unexpected Src types.

The switch on sdkReq.Src silently ignores types other than string and []interface{}, which can cause confusing downstream failures when input is malformed.

Apply this diff:

 		switch src := sdkReq.Src.(type) {
 		case string:
 			bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
 		case []interface{}:
 			requests := make([]schemas.BatchRequestItem, 0, len(src))
 			for i, item := range src {
 				if itemMap, ok := item.(map[string]interface{}); ok {
 					customID := fmt.Sprintf("request-%d", i)
 					requests = append(requests, schemas.BatchRequestItem{
 						CustomID: customID,
 						Body:     itemMap,
 					})
 				}
 			}
 			bifrostReq.Requests = requests
+		default:
+			return nil, fmt.Errorf("invalid src type: expected string or array, got %T", src)
 		}

873-940: Critical: Binary data corruption in multipart/related parsing.

Converting the request body to string at line 893 (strings.Split(string(body), delimiter)) and back to bytes at line 935 ([]byte(content)) corrupts binary file uploads containing null bytes or invalid UTF-8 sequences.

Use bytes.Split() instead of strings.Split() to preserve binary data integrity throughout parsing.


1062-1062: Hardcoded purpose may be incorrect for non-batch uploads.

The purpose is hardcoded to "batch" for all resumable uploads, which may be incorrect if this endpoint is used for other purposes.

Consider reading purpose from phase-1 metadata or allowing it to be specified.


1144-1151: Support reverse proxy headers for resumable upload URL construction.

The upload URL uses ctx.Host() and ctx.IsTLS() directly without considering X-Forwarded-Proto and X-Forwarded-Host headers, causing failures when Bifrost runs behind a reverse proxy.

Check forwarded headers first before falling back to direct context methods.


1138-1141: Avoid mutating uploadSession after storing in sync.Map.

Line 1140 mutates session.Provider while the session pointer is stored in the concurrent map, creating a data race with the cleanup goroutine that reads session fields.

Load, copy, mutate the copy, and store the new pointer back instead of mutating in place.


1255-1257: Route registration order causes batch requests to match generic catch-all route.

The generic route /v1beta/models/{model:*} is registered before the batch-specific route /v1beta/models/{model}:batchGenerateContent, causing batch requests to be routed to the wrong handler.

Reorder so batch routes are registered before generic routes:

routes := CreateGenAIBatchRouteConfigs("/genai", handlerStore)
routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
routes = append(routes, CreateGenAIRouteConfigs("/genai")...)
🧹 Nitpick comments (9)
core/providers/groq/batch.go (1)

36-37: Use blank identifiers for unused parameters.

The existing Batch* methods in this file consistently use _ for unused parameters (lines 11, 16, 21, 26, 31). Update BatchDelete to follow the same pattern.

Apply this diff:

-func (provider *GroqProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *GroqProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/parasail/batch.go (1)

36-37: Use blank identifiers for unused parameters.

The existing Batch* methods in this file consistently use _ for unused parameters (lines 11, 16, 21, 26, 31). Update BatchDelete to follow the same pattern.

Apply this diff:

-func (provider *ParasailProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *ParasailProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/perplexity/batch.go (1)

36-37: Use blank identifiers for unused parameters.

The existing Batch* methods in this file consistently use _ for unused parameters (lines 11, 16, 21, 26, 31). Update BatchDelete to follow the same pattern.

Apply this diff:

-func (provider *PerplexityProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *PerplexityProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/azure/batch.go (1)

22-23: Use blank identifiers for unused parameters.

Following the pattern established across other providers and Go conventions for documenting intentionally unused parameters, use _ instead of named parameters.

Apply this diff:

-func (provider *AzureProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *AzureProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/cohere/cohere.go (1)

847-848: Use blank identifiers for unused parameters.

The existing Batch* methods in this file consistently use _ for unused parameters (lines 822, 827, 832, 837, 842). Update BatchDelete to follow the same pattern.

Apply this diff:

-func (provider *CohereProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *CohereProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/cerebras/cerebras.go (1)

265-266: Use blank identifiers for unused parameters.

The existing File* and Batch* methods in this file consistently use _ for unused parameters (lines 215, 220, 225, 230, 235, 240, 245, 250, 255, 260). Update BatchDelete to follow the same pattern.

Apply this diff:

-func (provider *CerebrasProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *CerebrasProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/openrouter/openrouter.go (1)

285-288: Make BatchDelete param naming consistent with other unsupported stubs (avoid unused param lint).
BatchCreate/BatchList/... use _ params, but BatchDelete uses named params that are unused. Suggest aligning.

-// BatchDelete is not supported by OpenRouter provider.
-func (provider *OpenRouterProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+// BatchDelete is not supported by OpenRouter provider.
+func (provider *OpenRouterProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/bifrost.go (1)

1303-1312: Consider load-balancing key choice for delete (avoid always using keys[0]).
If multiple keys are configured, always picking the first key can create uneven usage. If bifrost.keySelector is safe with model="", using it here would match the broader load-balancing intent.

core/providers/ollama/ollama.go (1)

251-255: Use _ params for unsupported BatchDelete stub (consistency).
Small consistency win with other unsupported methods in this file.

-func (provider *OllamaProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *OllamaProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aedefda and 4f8eb5d.

📒 Files selected for processing (23)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/cerebras/cerebras.go (1 hunks)
  • core/providers/cohere/cohere.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/gemini.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/ollama.go (1 hunks)
  • core/providers/openrouter/openrouter.go (2 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (13 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
  • core/schemas/bifrost.go
  • core/schemas/batch.go
  • transports/bifrost-http/integrations/utils.go
  • core/providers/anthropic/batch.go
  • core/providers/sgl/batch.go
  • core/providers/vertex/batch.go
  • core/schemas/provider.go
  • core/providers/mistral/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/cohere/cohere.go
  • core/providers/perplexity/batch.go
  • core/providers/groq/batch.go
  • core/providers/elevenlabs/batch.go
  • core/providers/parasail/batch.go
  • core/providers/cerebras/cerebras.go
  • core/providers/openrouter/openrouter.go
  • core/providers/azure/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/gemini/gemini.go
  • core/bifrost.go
  • core/providers/gemini/batch.go
  • core/providers/ollama/ollama.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/types.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/providers/cohere/cohere.go
  • core/providers/perplexity/batch.go
  • core/providers/groq/batch.go
  • core/providers/elevenlabs/batch.go
  • core/providers/parasail/batch.go
  • core/providers/cerebras/cerebras.go
  • core/providers/openrouter/openrouter.go
  • core/providers/azure/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/gemini/gemini.go
  • core/bifrost.go
  • core/providers/gemini/batch.go
  • core/providers/ollama/ollama.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/types.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.

Applied to files:

  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/integrations/genai.go
🧬 Code graph analysis (11)
core/providers/perplexity/batch.go (4)
core/providers/perplexity/perplexity.go (1)
  • PerplexityProvider (19-25)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/groq/batch.go (4)
core/providers/groq/groq.go (1)
  • GroqProvider (16-22)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/elevenlabs/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/parasail/batch.go (4)
core/providers/parasail/parasail.go (1)
  • ParasailProvider (17-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/cerebras/cerebras.go (5)
core/schemas/account.go (1)
  • Key (8-17)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
ui/lib/types/logs.ts (1)
  • BifrostError (226-232)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/openrouter/openrouter.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
ui/lib/types/logs.ts (1)
  • BifrostError (226-232)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/azure/batch.go (3)
core/providers/azure/azure.go (1)
  • AzureProvider (26-32)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
transports/bifrost-http/integrations/router.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (37-37)
core/schemas/provider.go (1)
  • Provider (315-364)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (13)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1627-1627)
  • GeminiJobStateRunning (1628-1628)
  • GeminiJobStateSucceeded (1629-1629)
  • GeminiJobStateFailed (1630-1630)
  • GeminiJobStateCancelling (1631-1631)
  • GeminiJobStateCancelled (1632-1632)
  • GeminiBatchJobResponseSDK (1663-1669)
  • GeminiBatchMetadata (1544-1557)
  • GeminiBatchStats (1478-1482)
  • GeminiBatchDest (1578-1581)
  • GeminiBatchListResponseSDK (1672-1675)
core/providers/ollama/ollama.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/types.go (1)
core/schemas/models.go (1)
  • Model (109-129)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (10)
core/providers/gemini/types.go (1)

1623-1696: LGTM!

The SDK job state constants and batch types are well-structured and follow consistent naming conventions. The additions integrate cleanly with the existing Gemini batch API types and align with the PR's goal of adding batch delete support via the Google GenAI SDK format.

transports/bifrost-http/integrations/router.go (3)

88-88: LGTM: BatchDelete field properly integrated.

The DeleteRequest field follows the established pattern for batch operations and correctly uses the BifrostBatchDeleteRequest type from the schema layer.


159-161: LGTM: BatchDeleteResponseConverter properly defined.

The converter type and RouteConfig field integration follow the established patterns for other batch response converters.

Also applies to: 291-291


825-845: LGTM: BatchDelete request handling is consistent.

The BatchDeleteRequest case follows the exact pattern established by other batch operations, with proper validation, error handling, and optional response conversion.

core/providers/gemini/batch.go (1)

248-270: LGTM: Status mapping is comprehensive.

ToGeminiJobState correctly maps all Bifrost batch statuses to Gemini job states with appropriate defaults.

transports/bifrost-http/integrations/genai.go (5)

213-320: LGTM: Resumable upload endpoints properly structured.

The dual POST/PUT endpoints for phase-2 resumable uploads correctly extract session metadata, convert requests, and handle response propagation with original filename preservation.

Note: Debug statements at lines 246, 255, 260 are already flagged for removal in past review.


669-784: LGTM: Batch parameter extractors are well-implemented.

The parameter extraction functions properly handle provider defaulting, query parameter parsing, and batch ID normalization with appropriate error handling.


942-1015: LGTM: Standard multipart and raw upload parsers are correct.

Both parseGeminiMultipartUpload and parseGeminiRawUpload properly handle file data extraction with appropriate byte copying and filename parsing.


1072-1119: LGTM: Resumable upload finalization logic is correct.

The final status setting and session cleanup properly handle the phase-2 response flow with appropriate session lifecycle management.

Note: Debug statements at lines 1078, 1082 are already flagged for removal in past review.


1175-1251: LGTM: File parameter extractors are consistent and correct.

The file parameter extraction functions properly handle provider defaulting, path parameter extraction, and query parameter parsing with appropriate error handling.

Comment on lines +1249 to +1339
// BatchDeleteRequest deletes a batch job.
func (bifrost *Bifrost) BatchDeleteRequest(ctx context.Context, req *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
	if req == nil {
		return nil, &schemas.BifrostError{
			IsBifrostError: false,
			Error: &schemas.ErrorField{
				Message: "batch delete request is nil",
			},
		}
	}
	if req.Provider == "" {
		return nil, &schemas.BifrostError{
			IsBifrostError: false,
			Error: &schemas.ErrorField{
				Message: "provider is required for batch delete request",
			},
		}
	}
	if req.BatchID == "" {
		return nil, &schemas.BifrostError{
			IsBifrostError: false,
			Error: &schemas.ErrorField{
				Message: "batch_id is required for batch delete request",
			},
		}
	}
	if ctx == nil {
		ctx = bifrost.ctx
	}

	provider := bifrost.getProviderByKey(req.Provider)
	if provider == nil {
		return nil, &schemas.BifrostError{
			IsBifrostError: false,
			Error: &schemas.ErrorField{
				Message: "provider not found for batch delete request",
			},
		}
	}

	config, err := bifrost.account.GetConfigForProvider(req.Provider)
	if err != nil {
		return nil, newBifrostErrorFromMsg(fmt.Sprintf("failed to get config for provider %s: %v", req.Provider, err.Error()))
	}
	if config == nil {
		return nil, newBifrostErrorFromMsg(fmt.Sprintf("config is nil for provider %s", req.Provider))
	}

	// Determine the base provider type for key requirement checks
	baseProvider := req.Provider
	if config.CustomProviderConfig != nil && config.CustomProviderConfig.BaseProviderType != "" {
		baseProvider = config.CustomProviderConfig.BaseProviderType
	}

	var key schemas.Key
	if providerRequiresKey(baseProvider, config.CustomProviderConfig) {
		keys, keyErr := bifrost.getAllSupportedKeys(&ctx, req.Provider, baseProvider)
		if keyErr != nil {
			return nil, newBifrostError(keyErr)
		}
		if len(keys) > 0 {
			key = keys[0]
		}
	}

	response, bifrostErr := executeRequestWithRetries(&ctx, config, func() (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
		return provider.BatchDelete(ctx, key, req)
	}, schemas.BatchDeleteRequest, req.Provider, "")
	if bifrostErr != nil {
		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
			RequestType: schemas.BatchDeleteRequest,
			Provider:    req.Provider,
		}
		return nil, bifrostErr
	}
	return response, nil
}

⚠️ Potential issue | 🟡 Minor

Add ExtraFields.RequestType/Provider to early validation errors for BatchDeleteRequest.
Right now nil/provider/batch_id validation returns don’t populate ExtraFields, which makes client-side error handling/log correlation weaker.

 func (bifrost *Bifrost) BatchDeleteRequest(ctx context.Context, req *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	if req == nil {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch delete request is nil",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
 	if req.Provider == "" {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "provider is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
 	if req.BatchID == "" {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch_id is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			},
 		}
 	}
🤖 Prompt for AI Agents
In core/bifrost.go around lines 1249 to 1325, the early validation and
pre-flight error returns in BatchDeleteRequest do not populate
BifrostError.ExtraFields (RequestType and Provider); update each early return
(nil req, missing provider, missing batch_id, provider not found, and any
config-get errors before executeRequestWithRetries) to set ExtraFields =
schemas.BifrostErrorExtraFields{RequestType: schemas.BatchDeleteRequest,
Provider: <use req.Provider when available or empty string>} so clients can
correlate errors by request type and provider; ensure the
newBifrostError/newBifrostErrorFromMsg returns include these ExtraFields before
returning.

Comment on lines +289 to +292
		BatchStats: &GeminiBatchStats{
			RequestCount:           resp.RequestCounts.Total,
			PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
			SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
		},

⚠️ Potential issue | 🟠 Major

Fix BatchStats computation to avoid negative counts.

When resp.RequestCounts.Total is zero but Completed or Failed are non-zero, the computed PendingRequestCount = Total - Completed becomes negative. Derive totals defensively to maintain invariants.

Apply this pattern to all three converters (ToGeminiBatchJobResponse, ToGeminiBatchRetrieveResponse, ToGeminiBatchListResponse):

-		BatchStats: &GeminiBatchStats{
-			RequestCount:           resp.RequestCounts.Total,
-			PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-			SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-		},
+		// Derive totals defensively to prevent negative counts
+		total := resp.RequestCounts.Total
+		if total == 0 && (resp.RequestCounts.Completed > 0 || resp.RequestCounts.Failed > 0) {
+			total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
+		}
+		pending := total - resp.RequestCounts.Completed - resp.RequestCounts.Failed
+		if pending < 0 {
+			pending = 0
+		}
+		success := resp.RequestCounts.Completed - resp.RequestCounts.Failed
+		if success < 0 {
+			success = 0
+		}
+		BatchStats: &GeminiBatchStats{
+			RequestCount:           total,
+			PendingRequestCount:    pending,
+			SuccessfulRequestCount: success,
+		},

Also applies to: 316-320, 355-359

🤖 Prompt for AI Agents
In core/providers/gemini/batch.go around lines 289-293 (and similarly at 316-320
and 355-359), the BatchStats math can produce negative counts when telemetry is
inconsistent; replace the direct uses with defensive computations: set
RequestCount = max(resp.RequestCounts.Total, resp.RequestCounts.Completed,
resp.RequestCounts.Failed), PendingRequestCount = max(0, RequestCount -
resp.RequestCounts.Completed), and SuccessfulRequestCount = max(0,
resp.RequestCounts.Completed - resp.RequestCounts.Failed). Apply this same
pattern in ToGeminiBatchJobResponse, ToGeminiBatchRetrieveResponse, and
ToGeminiBatchListResponse so all three converters compute non-negative,
consistent batch stats.
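The max-based clamping described in the prompt can be sketched as a small standalone helper. RequestCounts and BatchStats below are simplified stand-ins for the repo's actual schema types, not the real definitions:

```go
package main

import "fmt"

// RequestCounts mirrors the shape referenced in the review (assumed fields).
type RequestCounts struct {
	Total, Completed, Failed int
}

// BatchStats holds the derived, non-negative counters.
type BatchStats struct {
	RequestCount, PendingRequestCount, SuccessfulRequestCount int
}

func maxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

// deriveStats clamps every derived value at zero so inconsistent
// upstream telemetry (e.g. Total left unset) never yields negative counts.
func deriveStats(rc RequestCounts) BatchStats {
	total := maxInt(rc.Total, rc.Completed+rc.Failed)
	return BatchStats{
		RequestCount:           total,
		PendingRequestCount:    maxInt(0, total-rc.Completed-rc.Failed),
		SuccessfulRequestCount: maxInt(0, rc.Completed-rc.Failed),
	}
}

func main() {
	// Total missing but Completed/Failed populated: no negatives.
	fmt.Println(deriveStats(RequestCounts{Total: 0, Completed: 7, Failed: 2})) // → {9 0 5}
}
```

Factoring the derivation into one helper also keeps the three converters from drifting apart.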

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
transports/bifrost-http/integrations/genai.go (1)

180-200: Add MimeType field to BifrostFileUploadRequest and forward it from Gemini requests. The geminiReq.MimeType is parsed and available in all three conversion functions (lines 180-200, 223-243, 279-299) but is not forwarded to schemas.BifrostFileUploadRequest because the struct lacks a MimeType field. Either add the field to the struct or store it in ExtraParams.

♻️ Duplicate comments (14)
core/schemas/bifrost.go (1)

85-109: New RequestType constant batch_delete is fine; ensure UI/SDK enums stay in sync (already called out earlier).

Please confirm the downstack PRs (Graphite stack) include the corresponding UI RequestType update and any exhaustive switches.

core/providers/sgl/batch.go (1)

35-38: Fix unused-param pattern in BatchDelete stub (match rest of file).

- func (provider *SGLProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+ func (provider *SGLProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
  	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
  }
transports/bifrost-http/integrations/router.go (2)

426-431: Call cancel() before early return on ErrResumableUploadInit.

This path returns without canceling the context created by lib.ConvertToBifrostContext, which is inconsistent with the other early-return paths.

 if config.PreCallback != nil {
 	if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
 		// Check if this is a resumable upload init that was already handled
 		if err == ErrResumableUploadInit {
 			// Response was already written by the PreCallback, just return
+			cancel()
 			return
 		}
 		g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
 		return
 	}
 }

873-897: Remove fmt.Printf debug statements (use structured logger if needed).

Also applies to: 1014-1016

core/providers/gemini/batch.go (1)

272-375: BatchStats math can produce negative counts when Total is unset.

PendingRequestCount: Total - Completed and SuccessfulRequestCount: Completed - Failed can go negative if Total is 0 but Completed/Failed are populated (common when upstream doesn’t set totals). Consider defensively deriving total/pending/success before populating GeminiBatchStats (apply similarly in create/retrieve/list converters).

transports/bifrost-http/integrations/genai.go (9)

45-50: Handle error from crypto/rand.Read when generating session IDs.
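A minimal sketch of the requested pattern; newSessionID here is a hypothetical helper, not the handler's actual code:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newSessionID returns a random hex ID, propagating the (rare) failure
// from crypto/rand instead of silently using a zeroed buffer.
func newSessionID() (string, error) {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		return "", fmt.Errorf("generate session id: %w", err)
	}
	return hex.EncodeToString(buf), nil
}

func main() {
	id, err := newSessionID()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(id)) // → 32
}
```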


52-68: Cleanup goroutine needs a shutdown signal (and should stop the ticker).


213-267: Remove [DEBUG] fmt.Printf noise (and avoid logging full bodies).

Also applies to: 786-812, 814-871, 1017-1070, 1072-1119, 1121-1173


873-940: Critical: multipart/related parsing corrupts binary uploads by converting body to string.

strings.Split(string(body), ...) and []byte(content) will mangle non-UTF8 / null bytes. Parse as []byte (e.g., bytes.Split / bytes.Index) end-to-end.


1057-1063: Don’t hardcode Purpose = "batch" for resumable phase-2 uploads.


1137-1142: Avoid mutating *uploadSession stored in sync.Map (data race risk).
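A copy-on-write sketch of the suggested fix; the uploadSession fields are assumed. This still allows lost updates under write contention, but it prevents readers from observing an in-place, half-written mutation:

```go
package main

import (
	"fmt"
	"sync"
)

// uploadSession mirrors the session shape referenced in the review (assumed fields).
type uploadSession struct {
	Status   string
	Received int64
}

var sessions sync.Map

// setStatus stores an updated copy instead of mutating the
// *uploadSession held in the map while other goroutines read it.
func setStatus(id, status string) {
	v, ok := sessions.Load(id)
	if !ok {
		return
	}
	old := v.(*uploadSession)
	updated := *old         // copy the struct
	updated.Status = status // mutate the copy only
	sessions.Store(id, &updated)
}

func main() {
	sessions.Store("s1", &uploadSession{Status: "active"})
	setStatus("s1", "final")
	v, _ := sessions.Load("s1")
	fmt.Println(v.(*uploadSession).Status) // → final
}
```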


1144-1163: Resumable upload URL construction should respect reverse-proxy headers.

Use X-Forwarded-Proto / X-Forwarded-Host (fall back to ctx.IsTLS() / ctx.Host()).


444-507: Validate sdkReq.Src and fail fast on unsupported types.

Right now a non-string/non-[]interface{} Src silently yields empty input, which then surfaces as a confusing downstream error.
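A fail-fast type switch along these lines would surface the problem at the boundary (normalizeSrc and its exact semantics are assumptions, not the file's actual helper):

```go
package main

import (
	"errors"
	"fmt"
)

// normalizeSrc rejects unsupported Src shapes immediately instead of
// silently producing empty input for the downstream batch request.
func normalizeSrc(src interface{}) ([]interface{}, error) {
	switch v := src.(type) {
	case string:
		if v == "" {
			return nil, errors.New("src: empty file name")
		}
		return []interface{}{v}, nil
	case []interface{}:
		if len(v) == 0 {
			return nil, errors.New("src: empty inline request list")
		}
		return v, nil
	default:
		return nil, fmt.Errorf("src: unsupported type %T", src)
	}
}

func main() {
	if _, err := normalizeSrc(42); err != nil {
		fmt.Println(err) // → src: unsupported type int
	}
}
```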


1255-1258: Batch routes must be registered before catch-all model route (first-match-wins).

/v1beta/models/{model:*} will swallow :batchGenerateContent unless batch routes come first.

 func NewGenAIRouter(client *bifrost.Bifrost, handlerStore lib.HandlerStore, logger schemas.Logger) *GenAIRouter {
-	routes := CreateGenAIRouteConfigs("/genai")
-	routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
-	routes = append(routes, CreateGenAIBatchRouteConfigs("/genai", handlerStore)...)
+	routes := CreateGenAIBatchRouteConfigs("/genai", handlerStore)
+	routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
+	routes = append(routes, CreateGenAIRouteConfigs("/genai")...)
🧹 Nitpick comments (7)
core/providers/cohere/cohere.go (1)

846-849: Use _ for unused parameters to match the file’s existing unsupported-operation style.

Suggested tweak:

-func (provider *CohereProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *CohereProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/mistral/batch.go (1)

35-38: Prefer _ for unused params in unsupported methods (consistency / avoids linter noise).

-func (provider *MistralProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *MistralProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/openrouter/openrouter.go (1)

285-313: Make BatchDelete match the existing “unsupported” method signatures in this file (use _).

-func (provider *OpenRouterProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *OpenRouterProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/vertex/batch.go (1)

35-38: Optional: use _ for unused params (matches common Batch* stubs).

-func (provider *VertexProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *VertexProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/parasail/batch.go (1)

35-38: Optional: use _ for unused params for consistency with other unsupported Batch* methods.

-func (provider *ParasailProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *ParasailProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/bifrost.go (1)

1249-1325: BatchDeleteRequest wiring looks consistent with other Batch*Request methods.

Two optional improvements:

  1. Consider populating ExtraFields.RequestType/Provider for the early validation returns (nil req / missing provider / missing batch_id) to improve downstream error handling consistency.
  2. When providerRequiresKey(...) and multiple keys exist, consider using bifrost.keySelector(...) (vs keys[0]) if you want better key distribution for batch delete as well.
core/providers/gemini/gemini.go (1)

2054-2111: Consider returning a canonical batch ID (avoid “short vs full name” ambiguity).

Right now ID echoes request.BatchID, which may be either "batches/..." or a short id. Since you already normalize for the URL, consider also returning the canonical resource name you actually targeted (and/or standardize with create/retrieve semantics).

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aedefda and 4f8eb5d.

📒 Files selected for processing (23)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/cerebras/cerebras.go (1 hunks)
  • core/providers/cohere/cohere.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/gemini.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/ollama.go (1 hunks)
  • core/providers/openrouter/openrouter.go (2 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (13 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • transports/bifrost-http/integrations/utils.go
  • core/providers/groq/batch.go
  • core/providers/elevenlabs/batch.go
  • core/schemas/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/schemas/provider.go
  • core/providers/cohere/cohere.go
  • core/providers/mistral/batch.go
  • core/providers/perplexity/batch.go
  • core/providers/cerebras/cerebras.go
  • core/schemas/bifrost.go
  • core/bifrost.go
  • core/providers/sgl/batch.go
  • core/providers/azure/batch.go
  • core/providers/parasail/batch.go
  • core/providers/ollama/ollama.go
  • core/providers/gemini/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/anthropic/batch.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/vertex/batch.go
  • core/providers/gemini/gemini.go
  • core/providers/openrouter/openrouter.go
  • core/providers/gemini/types.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/schemas/provider.go
  • core/providers/cohere/cohere.go
  • core/providers/mistral/batch.go
  • core/providers/perplexity/batch.go
  • core/providers/cerebras/cerebras.go
  • core/schemas/bifrost.go
  • core/bifrost.go
  • core/providers/sgl/batch.go
  • core/providers/azure/batch.go
  • core/providers/parasail/batch.go
  • core/providers/ollama/ollama.go
  • core/providers/gemini/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/anthropic/batch.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/vertex/batch.go
  • core/providers/gemini/gemini.go
  • core/providers/openrouter/openrouter.go
  • core/providers/gemini/types.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.

Applied to files:

  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/integrations/genai.go
🧬 Code graph analysis (12)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/cohere/cohere.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
ui/lib/types/logs.ts (1)
  • BifrostError (226-232)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/mistral/batch.go (4)
core/providers/mistral/mistral.go (1)
  • MistralProvider (17-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/schemas/bifrost.go (1)
ui/lib/types/config.ts (1)
  • RequestType (122-144)
core/providers/sgl/batch.go (4)
core/providers/sgl/sgl.go (1)
  • SGLProvider (18-24)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/parasail/batch.go (4)
core/providers/parasail/parasail.go (1)
  • ParasailProvider (17-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/ollama/ollama.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
ui/lib/types/logs.ts (1)
  • BifrostError (226-232)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (13)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1627-1627)
  • GeminiJobStateRunning (1628-1628)
  • GeminiJobStateSucceeded (1629-1629)
  • GeminiJobStateFailed (1630-1630)
  • GeminiJobStateCancelling (1631-1631)
  • GeminiJobStateCancelled (1632-1632)
  • GeminiBatchJobResponseSDK (1663-1669)
  • GeminiBatchMetadata (1544-1557)
  • GeminiBatchStats (1478-1482)
  • GeminiBatchDest (1578-1581)
  • GeminiBatchListResponseSDK (1672-1675)
core/providers/anthropic/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/vertex/batch.go (4)
core/providers/vertex/vertex.go (1)
  • VertexProvider (58-64)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/gemini.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/utils/utils.go (4)
  • CheckOperationAllowed (472-484)
  • NewBifrostOperationError (513-524)
  • SetExtraHeaders (179-209)
  • MakeRequestWithContext (40-94)
core/schemas/provider.go (1)
  • Provider (315-364)
core/providers/openrouter/openrouter.go (3)
core/schemas/batch.go (4)
  • BifrostBatchCreateRequest (65-82)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (8)
core/schemas/provider.go (2)

165-189: AllowedRequests: BatchDelete flag + switch mapping looks consistent with existing gating semantics.

Given the Provider interface now includes BatchDelete, please ensure the full Graphite stack (incl. downstack PRs) compiles with all provider implementations updated accordingly.

Also applies to: 197-246


314-364: Provider interface extension (BatchDelete) is clean, but confirm all implementers are updated in-stack.

This is a compile-time breaking interface change for any out-of-tree/custom providers; please verify downstream integrations (and any internal custom provider wrappers) implement BatchDelete.

core/providers/azure/batch.go (1)

21-24: Verify azure/batch.go imports were updated (context/schemas/providerUtils) so this compiles.

The hunk shown doesn’t include the import block; please confirm the real file has the needed imports for context.Context, schemas, and providerUtils.

core/providers/perplexity/batch.go (1)

35-38: BatchDelete unsupported stub matches existing pattern.

transports/bifrost-http/integrations/router.go (2)

80-89: BatchRequest.DeleteRequest addition looks consistent with the existing Batch request struct.


159-162: BatchDelete converter hook + handler wiring is consistent with other batch operations.

Also applies to: 291-301, 825-846

core/providers/gemini/batch.go (1)

245-270: ToGeminiJobState mapping is straightforward; default-to-pending is reasonable.

core/providers/gemini/types.go (1)

1623-1697: SDK batch request/response structs look fine; keep transport-layer validation for Src.

Comment on lines +377 to +384

// BatchDelete is not supported by Anthropic provider.
func (provider *AnthropicProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
}

⚠️ Potential issue | 🔴 Critical

Build break: context.Context used but context isn’t imported.

BatchDelete uses context.Context, but the import block only includes time, providerUtils, schemas, fasthttp. Add the missing import.

 import (
+	"context"
 	"time"

 	providerUtils "github.com/maximhq/bifrost/core/providers/utils"
 	"github.com/maximhq/bifrost/core/schemas"
 	"github.com/valyala/fasthttp"
 )

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In core/providers/anthropic/batch.go around lines 377 to 381, the BatchDelete
signature references context.Context but the file's import block doesn't include
the "context" package; add "context" to the imports for this file (alongside
time, providerUtils, schemas, fasthttp) so the code compiles.

Comment on lines +264 to +267
// BatchDelete is not supported by Cerebras provider.
func (provider *CerebrasProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n core/providers/cerebras/cerebras.go

Repository: maximhq/bifrost

Length of output: 15578


Use _ for unused parameters in unsupported stub to match recent batch operation methods.

- func (provider *CerebrasProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+ func (provider *CerebrasProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
  	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
  }

All other batch operation stubs (BatchCreate, BatchList, BatchRetrieve, BatchCancel, BatchResults) and file operation stubs consistently use _ for unused parameters. BatchDelete should follow the same pattern.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// BatchDelete is not supported by Cerebras provider.
func (provider *CerebrasProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
}
// BatchDelete is not supported by Cerebras provider.
func (provider *CerebrasProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
}
🤖 Prompt for AI Agents
In core/providers/cerebras/cerebras.go around lines 264 to 267, the BatchDelete
stub lists unused parameters by name; change the method signature to use
underscores for unused parameters (e.g., replace named params with _ types) to
match the other batch operation stubs, leaving the body returning the
unsupported operation error unchanged.

Comment on lines +1702 to 1707
File []byte `json:"-"` // Raw file content (not serialized)
Filename string `json:"filename"` // Original filename
MimeType string `json:"mimeType"` // MIME type of the file
Purpose string `json:"purpose"` // Purpose of the file (e.g., "batch")
ResumableSessionID string `json:"-"` // Session ID for resumable uploads (internal use)
}

⚠️ Potential issue | 🟡 Minor

Verify MimeType handling consistency across file upload parsers.

The concern about consistent MimeType population across multipart, multipart/related, raw-body, and resumable upload flows is valid and should be addressed. However, verification could not be completed because the specific struct definition and HTTP parser implementations could not be located in the current repository state.

Please confirm:

  • That the struct definition contains the MimeType field and it is consistently extracted from Content-Type headers in all file upload parser flows
  • That downstream upload functions receive and utilize the MimeType value
🤖 Prompt for AI Agents
In core/providers/gemini/types.go around lines 1702 to 1707, the MimeType field
is present but you must verify and enforce consistent population across all
upload parsers: inspect multipart, multipart/related, raw-body and resumable
upload code paths to ensure each extracts the Content-Type (or detects MIME) and
sets File.MimeType before creating the file struct; ensure any helper that
parses headers returns the MIME and that downstream upload/storage functions
accept and use File.MimeType (propagate it through function signatures if
missing); add unit/integration tests for each upload flow asserting MimeType is
populated and used, and add a fallback detection (e.g., sniffing) where
Content-Type may be absent.
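The fallback-detection step the prompt suggests can be sketched with the standard library's MIME sniffer. `resolveMimeType` is a hypothetical helper, not a Bifrost function; it prefers a declared Content-Type and only sniffs when the header is absent.

```go
package main

import (
	"fmt"
	"net/http"
)

// resolveMimeType returns the declared Content-Type if present, otherwise
// falls back to sniffing the payload's leading bytes. http.DetectContentType
// always returns a valid MIME type ("application/octet-stream" at worst).
func resolveMimeType(header string, data []byte) string {
	if header != "" {
		return header
	}
	return http.DetectContentType(data)
}

func main() {
	fmt.Println(resolveMimeType("", []byte("%PDF-1.4")))        // sniffed: application/pdf
	fmt.Println(resolveMimeType("application/jsonl", []byte{})) // declared header wins
}
```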

Comment on lines +251 to +254
// BatchDelete is not supported by Ollama provider.
func (provider *OllamaProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's check if there's any PR context or stack information
cd core/providers/ollama && wc -l ollama.go

Repository: maximhq/bifrost

Length of output: 72


🏁 Script executed:

# Read the ollama.go file to see the pattern of unsupported methods
cat -n core/providers/ollama/ollama.go | head -300

Repository: maximhq/bifrost

Length of output: 15822


Align unsupported stub signature with the _-param pattern used in recent batch/file methods.

This method doesn't use ctx, key, or request. The preceding batch methods (BatchCreate, BatchList, BatchRetrieve, BatchCancel, BatchResults) and following file methods (FileUpload, FileList, FileRetrieve, FileDelete, FileContent) all use _ for unused parameters; BatchDelete should match this pattern for consistency.

- func (provider *OllamaProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+ func (provider *OllamaProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
  	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
  }
📝 Committable suggestion


Suggested change
// BatchDelete is not supported by Ollama provider.
func (provider *OllamaProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
}
// BatchDelete is not supported by Ollama provider.
func (provider *OllamaProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
}
🤖 Prompt for AI Agents
In core/providers/ollama/ollama.go around lines 251 to 254, the BatchDelete
method currently declares unused parameters (ctx, key, request); change the
signature to use underscores for unused params (e.g., func (provider
*OllamaProvider) BatchDelete(_ context.Context, _ schemas.Key, _
*schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse,
*schemas.BifrostError)) so it matches the `_`-param pattern used by other
batch/file methods, leaving the body and return (unsupported operation error)
unchanged.

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 4f8eb5d to 536795a Compare December 12, 2025 19:26
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (15)
transports/bifrost-http/integrations/utils.go (1)

196-196: Remove debug log that exposes full response body.

Logging the full response body may expose sensitive information such as PII, API keys, authentication tokens, or other confidential data. This concern was already raised in a previous review.

Apply this diff to remove the sensitive debug log:

 	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
core/providers/cerebras/cerebras.go (1)

264-267: Consider matching the file’s existing stub signature style (_ params) for BatchDelete.
This is consistency-only (unless your lint rules enforce it).

core/providers/sgl/batch.go (1)

35-39: Optional: make BatchDelete signature consistent with the other SGL stubs (_ params).

core/providers/anthropic/batch.go (1)

3-9: Fix build break: context package is missing from imports.
BatchDelete uses context.Context but context isn’t imported.

 import (
+	"context"
 	"time"

 	providerUtils "github.com/maximhq/bifrost/core/providers/utils"
 	"github.com/maximhq/bifrost/core/schemas"
 	"github.com/valyala/fasthttp"
 )

Also applies to: 378-381

core/bifrost.go (1)

1249-1325: Populate BifrostError.ExtraFields on BatchDeleteRequest early returns (RequestType/Provider).
This improves log correlation/UX consistency with other request handlers in this file.

 func (bifrost *Bifrost) BatchDeleteRequest(ctx context.Context, req *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	if req == nil {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch delete request is nil",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
 	if req.Provider == "" {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "provider is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
 	if req.BatchID == "" {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch_id is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			},
 		}
 	}
 	if ctx == nil {
 		ctx = bifrost.ctx
 	}

 	provider := bifrost.getProviderByKey(req.Provider)
 	if provider == nil {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "provider not found for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			},
 		}
 	}

 	config, err := bifrost.account.GetConfigForProvider(req.Provider)
 	if err != nil {
-		return nil, newBifrostErrorFromMsg(fmt.Sprintf("failed to get config for provider %s: %v", req.Provider, err.Error()))
+		bifrostErr := newBifrostErrorFromMsg(fmt.Sprintf("failed to get config for provider %s: %v", req.Provider, err.Error()))
+		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{RequestType: schemas.BatchDeleteRequest, Provider: req.Provider}
+		return nil, bifrostErr
 	}
 	if config == nil {
-		return nil, newBifrostErrorFromMsg(fmt.Sprintf("config is nil for provider %s", req.Provider))
+		bifrostErr := newBifrostErrorFromMsg(fmt.Sprintf("config is nil for provider %s", req.Provider))
+		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{RequestType: schemas.BatchDeleteRequest, Provider: req.Provider}
+		return nil, bifrostErr
 	}
 
 	// ... keep the rest as-is ...
 }
core/providers/gemini/gemini.go (1)

2054-2111: Harden BatchDelete batch-id normalization + return canonical ID; verify accepted success codes.
The URL builder only handles batches/<id> vs <id>. If callers pass a full resource path (contains / but not batches/ prefix), this can still generate a wrong URL. Also, you return ID: request.BatchID even though the request may be normalized. Finally, please verify whether Gemini’s delete can return 202 Accepted (in addition to 200/204).

@@
-    batchID := request.BatchID
-    var url string
-    if strings.HasPrefix(batchID, "batches/") {
-        url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, batchID)
-    } else {
-        url = fmt.Sprintf("%s/batches/%s", provider.networkConfig.BaseURL, batchID)
-    }
+    batchID := request.BatchID
+    // Accept either short ID ("abc"), canonical ("batches/abc"), or full resource path.
+    var url string
+    if strings.Contains(batchID, "/") {
+        url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, strings.TrimPrefix(batchID, "/"))
+    } else {
+        batchID = "batches/" + batchID
+        url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, batchID)
+    }
@@
-    if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
+    if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent /* && resp.StatusCode() != fasthttp.StatusAccepted */ {
         return nil, parseGeminiError(resp)
     }
@@
     return &schemas.BifrostBatchDeleteResponse{
-        ID:      request.BatchID,
+        ID:      batchID,
         Object:  "batch",
         Deleted: true,
Google Gemini (Generative Language API) v1beta batches delete endpoint: what HTTP status codes indicate success (200 vs 204 vs 202)?
transports/bifrost-http/integrations/router.go (2)

426-431: Call cancel() before returning on ErrResumableUploadInit.
Right now that early return skips cancelling the context created by ConvertToBifrostContext.

 if config.PreCallback != nil {
     if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
         // Check if this is a resumable upload init that was already handled
         if err == ErrResumableUploadInit {
             // Response was already written by the PreCallback, just return
+            cancel()
             return
         }
         g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
         return
     }
 }

873-897: Remove fmt.Printf debug logs from file handling paths.
These should be replaced with the structured logger or dropped.

Also applies to: 1022-1023

core/providers/gemini/batch.go (1)

283-295: Prevent negative PendingRequestCount / inconsistent BatchStats in SDK converters.
Current PendingRequestCount: Total - Completed can go negative when Total is unset/zero but Completed is non-zero (common for retrieve/list shapes).

@@
-            BatchStats: &GeminiBatchStats{
-                RequestCount:           resp.RequestCounts.Total,
-                PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-                SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-            },
+            // Derive totals defensively to keep counts non-negative and consistent.
+            BatchStats: func() *GeminiBatchStats {
+                total := resp.RequestCounts.Total
+                if total == 0 && (resp.RequestCounts.Completed > 0 || resp.RequestCounts.Failed > 0 || resp.RequestCounts.Pending > 0) {
+                    total = resp.RequestCounts.Completed + resp.RequestCounts.Failed + resp.RequestCounts.Pending
+                }
+                pending := total - resp.RequestCounts.Completed - resp.RequestCounts.Failed
+                if pending < 0 {
+                    pending = 0
+                }
+                success := resp.RequestCounts.Completed - resp.RequestCounts.Failed
+                if success < 0 {
+                    success = 0
+                }
+                return &GeminiBatchStats{
+                    RequestCount:           total,
+                    PendingRequestCount:    pending,
+                    SuccessfulRequestCount: success,
+                }
+            }(),

(Apply the same pattern in ToGeminiBatchRetrieveResponse and ToGeminiBatchListResponse where you currently repeat the same math.)

Also applies to: 311-321, 350-360

transports/bifrost-http/integrations/genai.go (6)

45-68: Handle rand.Read error + make cleanup goroutine stoppable (graceful shutdown).
Both are reliability/security footguns in long-running servers.


873-940: Critical: multipart/related parsing corrupts binary uploads (string conversion).
strings.Split(string(body), ...) + []byte(content) will corrupt arbitrary binary. Parse on []byte.

@@
-func parseGeminiMultipartRelated(ctx *fasthttp.RequestCtx, uploadReq *gemini.GeminiFileUploadRequest, body []byte, contentType string) error {
+func parseGeminiMultipartRelated(ctx *fasthttp.RequestCtx, uploadReq *gemini.GeminiFileUploadRequest, body []byte, contentType string) error {
@@
-    delimiter := "--" + boundary
-    parts := strings.Split(string(body), delimiter)
+    delimiter := []byte("--" + boundary)
+    parts := bytes.Split(body, delimiter)
 
-    for _, part := range parts {
-        part = strings.TrimSpace(part)
-        if part == "" || part == "--" {
+    for _, part := range parts {
+        part = bytes.TrimSpace(part)
+        if len(part) == 0 || bytes.Equal(part, []byte("--")) {
             continue
         }
 
-        headerEnd := strings.Index(part, "\r\n\r\n")
-        if headerEnd == -1 {
-            headerEnd = strings.Index(part, "\n\n")
-            if headerEnd == -1 {
-                continue
-            }
-        }
+        headerEnd := bytes.Index(part, []byte("\r\n\r\n"))
+        sepLen := 4
+        if headerEnd == -1 {
+            headerEnd = bytes.Index(part, []byte("\n\n"))
+            sepLen = 2
+            if headerEnd == -1 {
+                continue
+            }
+        }
 
-        headers := part[:headerEnd]
-        content := part[headerEnd:]
-        content = strings.TrimPrefix(content, "\r\n\r\n")
-        content = strings.TrimPrefix(content, "\n\n")
+        headers := part[:headerEnd]
+        content := part[headerEnd+sepLen:]
 
-        headersLower := strings.ToLower(headers)
-        if strings.Contains(headersLower, "application/json") {
+        headersLower := bytes.ToLower(headers)
+        if bytes.Contains(headersLower, []byte("application/json")) {
             ...
-            if err := sonic.Unmarshal([]byte(content), &metadata); err == nil {
+            if err := sonic.Unmarshal(content, &metadata); err == nil {
                 ...
             }
         } else {
-            uploadReq.File = []byte(content)
+            uploadReq.File = append([]byte(nil), content...)
         }
     }
     return nil
 }

(Requires adding bytes import.)


444-507: Fix GenAI route registration order so batch routes aren’t swallowed by {model:*}.
With first-match routing, /v1beta/models/{model:*} can capture :batchGenerateContent. Register batch routes before the catch-all.

 func NewGenAIRouter(client *bifrost.Bifrost, handlerStore lib.HandlerStore, logger schemas.Logger) *GenAIRouter {
-    routes := CreateGenAIRouteConfigs("/genai")
-    routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
-    routes = append(routes, CreateGenAIBatchRouteConfigs("/genai", handlerStore)...)
+    routes := CreateGenAIBatchRouteConfigs("/genai", handlerStore)
+    routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
+    routes = append(routes, CreateGenAIRouteConfigs("/genai")...)
     return &GenAIRouter{
         GenericRouter: NewGenericRouter(client, handlerStore, routes, logger),
     }
 }

Also applies to: 1255-1258


469-488: Fail fast on unexpected sdkReq.Src types (avoid silent empty batch requests).
Right now unknown types fall through and produce an invalid Bifrost request.


798-803: Remove fmt.Printf debug logging (and avoid logging raw bodies).
Several added debug prints include request metadata and bodies; these should be dropped or routed via structured logger at debug level with safe redaction.

Also applies to: 820-871, 1019-1070, 1072-1085, 1114-1118, 1131-1169


1057-1066: Avoid mutating *uploadSession after storing in sync.Map; honor X-Forwarded-* headers for the upload URL.

  • Mutating the stored pointer can race with the cleanup goroutine. Prefer load→copy→store (or store provider at creation time).
  • ctx.Host()/ctx.IsTLS() breaks behind reverse proxies; use X-Forwarded-Host / X-Forwarded-Proto.

Also applies to: 1137-1142, 1144-1152

🧹 Nitpick comments (2)
transports/bifrost-http/integrations/utils.go (1)

194-194: Redundant Content-Length header.

fasthttp automatically sets the Content-Length header when calling ctx.SetBody(), making this explicit header setting unnecessary.

Apply this diff to remove the redundant line:

 	responseBody, err := sonic.Marshal(response)
 	if err != nil {
 		g.sendError(ctx, bifrostCtx, errorConverter, newBifrostError(err, "failed to encode response"))
 		return
 	}
 
-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
 	ctx.SetBody(responseBody)
 	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
core/providers/cohere/cohere.go (1)

846-850: Optional: align BatchDelete stub signature with other Cohere stubs (_ params).
Keeps the provider file consistent and avoids lint noise if enabled.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4f8eb5d and 536795a.

📒 Files selected for processing (23)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/cerebras/cerebras.go (1 hunks)
  • core/providers/cohere/cohere.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/gemini.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/ollama.go (1 hunks)
  • core/providers/openrouter/openrouter.go (2 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (13 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
  • core/schemas/bifrost.go
  • core/providers/azure/batch.go
  • core/providers/parasail/batch.go
  • core/providers/mistral/batch.go
  • core/providers/groq/batch.go
  • core/schemas/provider.go
  • core/providers/ollama/ollama.go
  • core/providers/vertex/batch.go
  • core/providers/openrouter/openrouter.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • transports/bifrost-http/integrations/utils.go
  • core/schemas/batch.go
  • core/providers/gemini/gemini.go
  • core/providers/sgl/batch.go
  • core/providers/anthropic/batch.go
  • core/providers/perplexity/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/cerebras/cerebras.go
  • core/providers/cohere/cohere.go
  • core/providers/elevenlabs/batch.go
  • core/bifrost.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/batch.go
  • core/providers/gemini/types.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • transports/bifrost-http/integrations/utils.go
  • core/schemas/batch.go
  • core/providers/gemini/gemini.go
  • core/providers/sgl/batch.go
  • core/providers/anthropic/batch.go
  • core/providers/perplexity/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/cerebras/cerebras.go
  • core/providers/cohere/cohere.go
  • core/providers/elevenlabs/batch.go
  • core/bifrost.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/batch.go
  • core/providers/gemini/types.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.

Applied to files:

  • transports/bifrost-http/integrations/utils.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/integrations/genai.go
🧬 Code graph analysis (7)
core/schemas/batch.go (2)
core/schemas/provider.go (1)
  • Provider (315-364)
core/schemas/bifrost.go (2)
  • ModelProvider (32-32)
  • BifrostResponseExtraFields (299-309)
core/providers/sgl/batch.go (4)
core/providers/sgl/sgl.go (1)
  • SGLProvider (18-24)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/anthropic/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/perplexity/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/cerebras/cerebras.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/cohere/cohere.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
ui/lib/types/logs.ts (1)
  • BifrostError (226-232)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (13)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1627-1627)
  • GeminiJobStateRunning (1628-1628)
  • GeminiJobStateSucceeded (1629-1629)
  • GeminiJobStateFailed (1630-1630)
  • GeminiJobStateCancelling (1631-1631)
  • GeminiJobStateCancelled (1632-1632)
  • GeminiBatchJobResponseSDK (1663-1669)
  • GeminiBatchMetadata (1544-1557)
  • GeminiBatchStats (1478-1482)
  • GeminiBatchDest (1578-1581)
  • GeminiBatchListResponseSDK (1672-1675)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (7)
core/schemas/batch.go (1)

265-282: New BatchDelete request/response types look consistent with existing batch schemas.
No issues spotted with the struct shapes/tags; ExtraParams being excluded from JSON is consistent with prior patterns.

core/providers/elevenlabs/batch.go (1)

35-39: BatchDelete unsupported stub is consistent with the provider’s other Batch* stubs.

core/providers/perplexity/batch.go (1)

35-39: BatchDelete unsupported stub matches the provider’s existing Batch* behavior.

transports/bifrost-http/integrations/router.go (1)

825-846: BatchDelete routing + converter hook looks consistent.
Matches the existing batch patterns (call → PostCallback → optional converter).

core/providers/gemini/types.go (3)

1623-1634: LGTM! SDK job state constants are well-defined.

The constants are properly documented and clearly distinguished from the internal batch states. The naming convention is consistent and follows Go best practices.


1636-1696: SDK batch type definitions are well-structured.

The new SDK-specific types properly mirror the Google GenAI SDK's expected format with appropriate JSON tags and documentation. The type definitions themselves are sound.

Note: A previous review raised concerns about validation of the Src field (line 1646) in the HTTP transport layer. While the flexible interface{} type here is intentional and correct for the SDK format, ensure the validation suggestions from past reviews have been addressed in the request parsing logic.


1700-1707: File upload request structure updated correctly for resumable uploads.

The changes properly support the resumable upload feature:

  • MimeType field added with appropriate JSON serialization
  • ResumableSessionID correctly marked as internal-only with json:"-"
  • Removal of Provider field (if it existed) aids decoupling

Note: Previous reviews identified concerns about MimeType extraction in upload parsers (parseGeminiMultipartUpload, parseGeminiRawUpload) and forwarding in converter functions. Those concerns relate to how this struct is populated and used in the transport layer, not the type definition itself.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (18)
core/providers/cerebras/cerebras.go (1)

264-267: Align BatchDelete stub signature with _ unused-param convention.

-func (provider *CerebrasProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *CerebrasProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/sgl/batch.go (1)

35-38: Use _ for unused params to match the rest of this file’s stubs.

-func (provider *SGLProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *SGLProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/ollama/ollama.go (1)

251-254: Align BatchDelete stub signature with the _-param pattern used nearby.

-func (provider *OllamaProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *OllamaProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
transports/bifrost-http/integrations/utils.go (1)

194-197: Remove debug log that prints full response body (PII/secrets risk).

fmt.Printf(... body=%s ...) can leak sensitive data to stdout/stderr. Drop it or log only non-sensitive metadata.

Also consider using ctx.Response.Header.SetContentLength(len(responseBody)) (or omit manual Content-Length entirely) instead of setting it via string formatting.

- ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
+ ctx.Response.Header.SetContentLength(len(responseBody))
  ctx.SetBody(responseBody)
- fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
core/providers/gemini/gemini.go (1)

2054-2111: Harden BatchDelete batch_id normalization for full resource names.

Today it only special-cases "batches/". If callers pass a full resource path containing / (but not starting with "batches/"), the URL becomes invalid.

A robust pattern is: treat any batchID containing / as an already-qualified path; otherwise format as batches/<id>.

  batchID := request.BatchID
  var url string
- if strings.HasPrefix(batchID, "batches/") {
-   url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, batchID)
- } else {
-   url = fmt.Sprintf("%s/batches/%s", provider.networkConfig.BaseURL, batchID)
- }
+ if strings.Contains(batchID, "/") {
+   url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, strings.TrimPrefix(batchID, "/"))
+ } else {
+   url = fmt.Sprintf("%s/batches/%s", provider.networkConfig.BaseURL, batchID)
+ }
core/bifrost.go (1)

1249-1325: Populate BifrostError.ExtraFields on all early returns for BatchDeleteRequest.

Right now the early validation/config/provider-not-found returns omit ExtraFields, unlike other request methods in this file.

 	if req == nil {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch delete request is nil",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
 	if req.Provider == "" {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "provider is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
 	if req.BatchID == "" {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch_id is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			},
 		}
 	}
 	...
 	if provider == nil {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "provider not found for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			},
 		}
 	}
 	...
 	if err != nil {
-		return nil, newBifrostErrorFromMsg(...)
+		bifrostErr := newBifrostErrorFromMsg(...)
+		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{RequestType: schemas.BatchDeleteRequest, Provider: req.Provider}
+		return nil, bifrostErr
 	}
 	if config == nil {
-		return nil, newBifrostErrorFromMsg(...)
+		bifrostErr := newBifrostErrorFromMsg(...)
+		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{RequestType: schemas.BatchDeleteRequest, Provider: req.Provider}
+		return nil, bifrostErr
 	}
transports/bifrost-http/integrations/genai.go (9)

180-200: Verify MimeType and Purpose propagation across upload flows (and wire through if supported).
Right now you parse GeminiFileUploadRequest.MimeType and set Purpose defaults in phase-2; confirm downstream request structs actually carry/need these fields so they aren’t silently discarded.

#!/bin/bash
set -euo pipefail

# 1) Confirm whether the core upload request supports MimeType/Purpose fields.
rg -n --type go "type BifrostFileUploadRequest struct" -A40

# 2) Show all places where BifrostFileUploadRequest is constructed in genai integration.
rg -n --type go "BifrostFileUploadRequest\{" -n transports/bifrost-http/integrations/genai.go

# 3) Confirm what the Gemini provider actually uses for file upload (mimeType/purpose).
rg -n --type go "FileUpload|mime|MimeType|purpose|Purpose" core/providers/gemini -S

Also applies to: 223-243, 1057-1063


45-50: Handle error from crypto/rand.Read in generateSessionID.
Ignoring the error can lead to weak session IDs on entropy failure.


52-68: Provide a shutdown mechanism for the cleanup goroutine + stop the ticker.
Current init() goroutine runs forever and never ticker.Stop()s.


1255-1258: Fix GenAI route registration order: batch routes must come before the {model:*} catch-all.
With first-match-wins routing, POST .../models/{model}:batchGenerateContent can be captured by the generic {model:*} route if registered first.

func NewGenAIRouter(client *bifrost.Bifrost, handlerStore lib.HandlerStore, logger schemas.Logger) *GenAIRouter {
-    routes := CreateGenAIRouteConfigs("/genai")
-    routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
-    routes = append(routes, CreateGenAIBatchRouteConfigs("/genai", handlerStore)...)
+    routes := CreateGenAIBatchRouteConfigs("/genai", handlerStore)
+    routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
+    routes = append(routes, CreateGenAIRouteConfigs("/genai")...)

873-940: Critical: multipart/related parsing corrupts binary file data (string conversion).
strings.Split(string(body), ...) + []byte(content) will corrupt non-UTF8 / null-containing payloads. Parse as bytes end-to-end (bytes.Split / bytes.Index).


1137-1142: Critical: avoid mutating *uploadSession after storing in sync.Map (data race).
Load→copy→store the updated struct instead of mutating the shared pointer.


1144-1152: Support X-Forwarded-Proto / X-Forwarded-Host when constructing the resumable upload URL.
Using ctx.IsTLS() / ctx.Host() breaks behind reverse proxies.


469-488: Validate GeminiBatchCreateRequestSDK.Src types and fail fast on unsupported values.
Currently non-string / non-[]interface{} values are silently ignored, producing confusing downstream errors.


244-260: Remove fmt.Printf debug statements (replace with structured logger or drop).
This is noisy in production and bypasses log levels/fields.

Also applies to: 798-803, 820-848, 1019-1069, 1077-1084, 1114-1115, 1131-1136, 1153-1154

transports/bifrost-http/integrations/router.go (2)

425-431: Call cancel() before returning on ErrResumableUploadInit.
You return before hitting any defer cancel() paths, leaving the cancellable context un-cancelled on this branch.

if config.PreCallback != nil {
    if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
        // Check if this is a resumable upload init that was already handled
        if err == ErrResumableUploadInit {
-           // Response was already written by the PreCallback, just return
+           // Response was already written by the PreCallback; clean up and return
+           cancel()
            return
        }
        g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
        return
    }
}

873-897: Remove fmt.Printf debug logging from file handling paths (use structured logger if needed).
These print to stdout and bypass the router logger; they were already flagged in earlier reviews.

Also applies to: 1021-1024

core/providers/gemini/batch.go (1)

289-293: Fix BatchStats math to avoid negative counts (defensive totals/pending).
PendingRequestCount := Total - Completed can go negative if Total is unset/0 while Completed/Failed are non-zero.

Also applies to: 316-320, 355-359

🧹 Nitpick comments (3)
core/providers/parasail/batch.go (1)

35-38: Consider using _ for unused params to match other unsupported stubs.

-func (provider *ParasailProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *ParasailProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/gemini/batch.go (1)

311-335: Consider guarding CreatedAt == 0 to avoid emitting 1970-01-01 timestamps.
If CreatedAt can be zero in some flows, prefer leaving CreateTime empty rather than formatting Unix(0).

Also applies to: 350-363

core/providers/gemini/types.go (1)

1700-1707: Double-check GeminiFileUploadRequest JSON field naming matches the actual SDK payloads.
If some clients send mime_type / display_name (snake_case), you may need custom unmarshaling or dual tags (similar to other structs in this file).

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4f8eb5d and 536795a.

📒 Files selected for processing (23)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/cerebras/cerebras.go (1 hunks)
  • core/providers/cohere/cohere.go (1 hunks)
  • core/providers/elevenlabs/batch.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/gemini.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/groq/batch.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/ollama.go (1 hunks)
  • core/providers/openrouter/openrouter.go (2 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/perplexity/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/providers/vertex/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (13 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • core/providers/azure/batch.go
🚧 Files skipped from review as they are similar to previous changes (10)
  • core/schemas/bifrost.go
  • core/schemas/provider.go
  • core/providers/mistral/batch.go
  • core/providers/vertex/batch.go
  • core/providers/groq/batch.go
  • core/providers/elevenlabs/batch.go
  • core/providers/cohere/cohere.go
  • core/providers/anthropic/batch.go
  • core/schemas/batch.go
  • core/providers/openrouter/openrouter.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/perplexity/batch.go
  • core/providers/ollama/ollama.go
  • transports/bifrost-http/integrations/utils.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/cerebras/cerebras.go
  • core/providers/sgl/batch.go
  • core/providers/gemini/batch.go
  • transports/bifrost-http/integrations/genai.go
  • core/bifrost.go
  • core/providers/gemini/types.go
  • core/providers/parasail/batch.go
  • core/providers/gemini/gemini.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/providers/perplexity/batch.go
  • core/providers/ollama/ollama.go
  • transports/bifrost-http/integrations/utils.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/cerebras/cerebras.go
  • core/providers/sgl/batch.go
  • core/providers/gemini/batch.go
  • transports/bifrost-http/integrations/genai.go
  • core/bifrost.go
  • core/providers/gemini/types.go
  • core/providers/parasail/batch.go
  • core/providers/gemini/gemini.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.

Applied to files:

  • transports/bifrost-http/integrations/utils.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/integrations/genai.go
🧬 Code graph analysis (10)
core/providers/perplexity/batch.go (2)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
transports/bifrost-http/integrations/router.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (37-37)
core/schemas/bifrost.go (2)
  • BatchDeleteRequest (103-103)
  • FileUploadRequest (104-104)
core/schemas/provider.go (1)
  • Provider (315-364)
core/providers/cerebras/cerebras.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
ui/lib/types/logs.ts (1)
  • BifrostError (226-232)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/sgl/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (12)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1627-1627)
  • GeminiJobStateRunning (1628-1628)
  • GeminiJobStateSucceeded (1629-1629)
  • GeminiJobStateFailed (1630-1630)
  • GeminiJobStateCancelling (1631-1631)
  • GeminiJobStateCancelled (1632-1632)
  • GeminiBatchJobResponseSDK (1663-1669)
  • GeminiBatchMetadata (1544-1557)
  • GeminiBatchStats (1478-1482)
  • GeminiBatchDest (1578-1581)
  • GeminiBatchListResponseSDK (1672-1675)
transports/bifrost-http/integrations/genai.go (4)
core/schemas/bifrost.go (11)
  • ModelProvider (32-32)
  • Gemini (48-48)
  • FileUploadRequest (104-104)
  • FileListRequest (105-105)
  • FileRetrieveRequest (106-106)
  • FileDeleteRequest (107-107)
  • BatchCreateRequest (98-98)
  • BatchListRequest (99-99)
  • BatchRetrieveRequest (100-100)
  • BatchCancelRequest (101-101)
  • BatchDeleteRequest (103-103)
core/providers/gemini/types.go (5)
  • GeminiFileUploadRequest (1701-1707)
  • Type (781-781)
  • GeminiFileListRequest (1710-1714)
  • GeminiFileRetrieveRequest (1717-1719)
  • GeminiFileDeleteRequest (1722-1724)
transports/bifrost-http/integrations/router.go (7)
  • RequestParser (217-217)
  • FileRequestConverter (105-105)
  • FileRequest (92-99)
  • RouteConfig (270-301)
  • FileUploadResponseConverter (165-165)
  • BatchCreateRequestConverter (102-102)
  • BatchRequest (81-89)
core/schemas/batch.go (10)
  • BifrostBatchCreateRequest (65-82)
  • BatchRequestItem (31-37)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchListRequest (112-126)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchRetrieveRequest (143-149)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchCancelRequest (190-196)
  • BifrostBatchCancelResponse (199-208)
  • BifrostBatchDeleteRequest (266-272)
core/bifrost.go (4)
core/schemas/bifrost.go (5)
  • BatchDeleteRequest (103-103)
  • BifrostError (369-378)
  • ErrorField (387-394)
  • BifrostErrorExtraFields (436-440)
  • RequestType (83-83)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/provider.go (2)
  • Provider (315-364)
  • CustomProviderConfig (249-255)
ui/lib/types/config.ts (2)
  • CustomProviderConfig (163-168)
  • RequestType (122-144)
core/providers/gemini/types.go (1)
core/schemas/models.go (1)
  • Model (109-129)
core/providers/parasail/batch.go (2)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/gemini.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (4)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
  • BifrostResponseExtraFields (299-309)
  • RequestType (83-83)
core/providers/utils/utils.go (4)
  • CheckOperationAllowed (472-484)
  • NewBifrostOperationError (513-524)
  • SetExtraHeaders (179-209)
  • MakeRequestWithContext (40-94)
core/schemas/provider.go (1)
  • Provider (315-364)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (3)
core/providers/perplexity/batch.go (1)

35-38: BatchDelete stub matches existing unsupported-operation contract.
Consistent with other Batch* stubs and uses the shared NewUnsupportedOperationError(...) factory.

transports/bifrost-http/integrations/router.go (1)

80-89: BatchDelete plumbing in GenericRouter is consistent with other batch operations.
The request validation → client call → post-callback → optional converter flow matches the existing create/list/retrieve/cancel/results patterns.

Also applies to: 159-162, 270-301, 825-846

core/providers/gemini/types.go (1)

1623-1697: SDK batch types look fine; keep transport-layer validation strict.
Given Src is interface{}, make sure all validation happens in the HTTP integration layer (type checks + per-item shape checks) to avoid ambiguous downstream failures.

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 536795a to bc3aeb6 on December 13, 2025 13:11
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (14)
core/providers/ollama/ollama.go (1)

251-254: Make BatchDelete stub match the _-parameter convention used by other unsupported methods in this file.
This is already called out in prior review comments for this PR.

core/providers/cerebras/cerebras.go (1)

264-267: Use _ parameters for consistency with the other unsupported batch stubs in this file.
Already covered by prior review comments.

core/providers/sgl/batch.go (1)

35-38: Match the existing _-parameter pattern used by other SGL batch stubs.
Already flagged in prior review comments.

transports/bifrost-http/integrations/router.go (2)

394-435: Call cancel() before returning on ErrResumableUploadInit.
Right now (Line 428-431) the handler returns early and skips the cancel() from ConvertToBifrostContext, which is a cleanup leak on this path. This was already flagged in prior review comments.


867-900: Remove fmt.Printf debug logging (use structured logger or drop).
Already called out in prior review comments; these should not go in before merge.

Also applies to: 1021-1024

transports/bifrost-http/integrations/genai.go (8)

1253-1262: Reorder route registration to prevent batch requests from matching generic route.

Batch routes are registered after the generic route, causing batch requests to be incorrectly handled. The generic route at /v1beta/models/{model:*} is registered before the batch-specific route at /v1beta/models/{model}:batchGenerateContent. Since fasthttp/router uses first-match-wins semantics, requests like POST /genai/v1beta/models/gemini-1.5-pro:batchGenerateContent will match the generic catch-all pattern (with model="gemini-1.5-pro:batchGenerateContent") and be routed to the generic handler instead of the batch handler.

Reorder route registration so batch routes are appended before generic routes:

 func NewGenAIRouter(client *bifrost.Bifrost, handlerStore lib.HandlerStore, logger schemas.Logger) *GenAIRouter {
-	routes := CreateGenAIRouteConfigs("/genai")
+	routes := CreateGenAIBatchRouteConfigs("/genai", handlerStore)
 	routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
-	routes = append(routes, CreateGenAIBatchRouteConfigs("/genai", handlerStore)...)
+	routes = append(routes, CreateGenAIRouteConfigs("/genai")...)
 
 	return &GenAIRouter{
 		GenericRouter: NewGenericRouter(client, handlerStore, routes, logger),
 	}
 }
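First-match-wins can be illustrated with a toy matcher (prefix matching only; real fasthttp/router patterns are richer, so treat this as an illustration of registration order, not of its matching rules):

```go
package main

import (
	"fmt"
	"strings"
)

type route struct {
	prefix  string // simplified pattern: plain prefix match
	handler string
}

// match returns the handler of the first route whose pattern matches,
// mimicking first-match-wins registration order.
func match(routes []route, path string) string {
	for _, r := range routes {
		if strings.HasPrefix(path, r.prefix) {
			return r.handler
		}
	}
	return "404"
}

func main() {
	generic := route{"/v1beta/models/", "generic"}
	batch := route{"/v1beta/models/gemini-1.5-pro:batchGenerateContent", "batch"}
	path := "/v1beta/models/gemini-1.5-pro:batchGenerateContent"

	// Generic first: the batch request is swallowed by the catch-all.
	fmt.Println(match([]route{generic, batch}, path)) // generic
	// Batch first: the specific route wins.
	fmt.Println(match([]route{batch, generic}, path)) // batch
}
```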

46-50: Handle error from crypto/rand.Read to prevent weak session IDs.

The error from crypto/rand.Read is ignored at line 48. While rare, failure of the random source could produce predictable session IDs, creating a security risk for session hijacking in resumable uploads.

Apply this diff:

 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// This should rarely happen; panic is appropriate for unrecoverable failure
+		panic(fmt.Sprintf("failed to generate session ID: %v", err))
+	}
 	return hex.EncodeToString(bytes)
 }
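If panicking on a failed random read feels too aggressive, an alternative sketch returns the error to the caller instead (names here are illustrative, not from the codebase):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newSessionID returns a 32-hex-character ID, propagating any failure of
// the system random source instead of silently using zeroed bytes.
func newSessionID() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", fmt.Errorf("generate session ID: %w", err)
	}
	return hex.EncodeToString(b), nil
}

func main() {
	id, err := newSessionID()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(id)) // 32
}
```

Either way, the failure must surface; a predictable session ID is the one outcome to rule out.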

53-68: Add shutdown mechanism for cleanup goroutine.

The cleanup goroutine spawned in init() runs indefinitely with no way to stop it, preventing graceful application shutdown. Consider adding a stop channel:

+var cleanupStop = make(chan struct{})
+
 func init() {
 	go func() {
 		ticker := time.NewTicker(10 * time.Minute)
+		defer ticker.Stop()
-		for range ticker.C {
+		for {
+			select {
+			case <-ticker.C:
 				now := time.Now()
 				uploadSessions.Range(func(key, value interface{}) bool {
 					if session, ok := value.(*uploadSession); ok {
 						if now.Sub(session.CreatedAt) > time.Hour {
 							uploadSessions.Delete(key)
 						}
 					}
 					return true
 				})
+			case <-cleanupStop:
+				return
+			}
 		}
 	}()
 }

246-260: Remove all debug fmt.Printf statements before merging.

Numerous [DEBUG] print statements remain throughout the file (over 30 instances). These should be removed or replaced with the project's structured logger before merging to production.

Examples:

  • Lines 246, 249, 253-254: FileUploadResponseConverter debug logs
  • Lines 798, 813-814, 818-819, 820, 824, 830, 841, 847, 854-855, 860, 866: parseGeminiResumableUpload debug logs
  • Lines 1019, 1022, 1028, 1034, 1040, 1046, 1052, 1062-1063, 1068: parseGeminiResumableUploadPhase2 debug logs
  • Lines 1078-1083: setResumableUploadFinalStatus debug logs
  • Lines 1114, 1131, 1135, 1153: extractGeminiFileUploadParams debug logs

If debug logging is needed for production troubleshooting, replace with the package's structured logger (e.g., logger.Debugf).

Also applies to: 798-799, 813-814, 818-819, 820-820, 824-824, 830-830, 841-841, 847-847, 854-855, 860-860, 866-866, 1001-1001, 1013-1013, 1019-1019, 1022-1022, 1028-1028, 1034-1034, 1040-1040, 1046-1046, 1052-1052, 1062-1063, 1068-1068, 1078-1083, 1114-1114, 1131-1131, 1135-1135, 1153-1153


873-940: Avoid string round-trips when parsing multipart/related bodies.

The parser converts the request body to a string at line 893 (strings.Split(string(body), delimiter)), slices the parts as strings, and converts back to bytes at line 935. Go's string/[]byte conversions are byte-preserving, so the round-trip alone does not corrupt data, but each conversion copies the entire body, and holding binary payloads as strings invites corruption the moment any rune-aware operation is applied to the content. Operating on []byte end to end is safer and avoids the extra copies for large uploads.

Use bytes.Split() and byte operations throughout:

 func parseGeminiMultipartRelated(ctx *fasthttp.RequestCtx, uploadReq *gemini.GeminiFileUploadRequest, body []byte, contentType string) error {
 	// Extract boundary from content type
 	boundary := ""
 	for _, param := range strings.Split(contentType, ";") {
 		param = strings.TrimSpace(param)
 		if strings.HasPrefix(param, "boundary=") {
 			boundary = strings.TrimPrefix(param, "boundary=")
 			boundary = strings.Trim(boundary, "\"")
 			break
 		}
 	}
 
 	if boundary == "" {
 		return errors.New("missing boundary in multipart/related content type")
 	}
 
 	// Split body by boundary
-	delimiter := "--" + boundary
-	parts := strings.Split(string(body), delimiter)
+	delimiter := []byte("--" + boundary)
+	parts := bytes.Split(body, delimiter)
 
 	for _, part := range parts {
-		part = strings.TrimSpace(part)
-		if part == "" || part == "--" {
+		part = bytes.TrimSpace(part)
+		if len(part) == 0 || string(part) == "--" {
 			continue
 		}
 
 		// Split headers from content
-		headerEnd := strings.Index(part, "\r\n\r\n")
+		headerEnd := bytes.Index(part, []byte("\r\n\r\n"))
 		if headerEnd == -1 {
-			headerEnd = strings.Index(part, "\n\n")
+			headerEnd = bytes.Index(part, []byte("\n\n"))
 			if headerEnd == -1 {
 				continue
 			}
 		}
 
-		headers := part[:headerEnd]
+		headers := string(part[:headerEnd])
 		content := part[headerEnd:]
-		content = strings.TrimPrefix(content, "\r\n\r\n")
-		content = strings.TrimPrefix(content, "\n\n")
+		content = bytes.TrimPrefix(content, []byte("\r\n\r\n"))
+		content = bytes.TrimPrefix(content, []byte("\n\n"))
 
 		// Check content type of this part
 		headersLower := strings.ToLower(headers)
 		if strings.Contains(headersLower, "application/json") {
 			// This is metadata - Google GenAI SDK sends snake_case fields
 			var metadata struct {
 				File struct {
 					DisplayName string `json:"display_name"`
 					MimeType    string `json:"mime_type"`
 				} `json:"file"`
 			}
-			if err := sonic.Unmarshal([]byte(content), &metadata); err == nil {
+			if err := sonic.Unmarshal(content, &metadata); err == nil {
 				if metadata.File.DisplayName != "" {
 					uploadReq.Filename = metadata.File.DisplayName
 				}
 				if metadata.File.MimeType != "" {
 					uploadReq.MimeType = metadata.File.MimeType
 				}
 			}
 		} else {
 			// This is file content
-			uploadReq.File = []byte(content)
+			uploadReq.File = make([]byte, len(content))
+			copy(uploadReq.File, content)
 		}
 	}
 
 	return nil
 }

1062-1062: Allow purpose to be specified in phase-1 metadata.

The purpose is hardcoded to "batch" for all file uploads via the resumable upload API (line 1062). If this endpoint is used for other purposes (e.g., general file storage for chat context), this default may be incorrect.

Consider reading the purpose from the phase-1 metadata:

 	// Populate the upload request with session metadata and file content
 	uploadReq.File = make([]byte, len(body))
 	copy(uploadReq.File, body)
 	uploadReq.Filename = session.Filename
 	uploadReq.MimeType = session.MimeType
-	uploadReq.Purpose = "batch" // Default purpose for file uploads via GenAI API
+	// Use purpose from metadata if available, otherwise default to "batch"
+	if session.Purpose != "" {
+		uploadReq.Purpose = session.Purpose
+	} else {
+		uploadReq.Purpose = "batch"
+	}

You'll also need to add a Purpose field to the uploadSession struct and populate it during phase 1.


1144-1169: Support reverse proxy headers for upload URL construction.

The upload URL at lines 1145-1151 uses ctx.Host() and ctx.IsTLS() directly without considering X-Forwarded-Proto and X-Forwarded-Host headers. When Bifrost runs behind a reverse proxy, this URL will reference the internal host/scheme instead of the client-facing address, causing the client's subsequent upload requests to fail.

Follow the pattern from session.go (lines 139, 175) which checks X-Forwarded-Proto:

 		// Build the upload URL for phase 2
-		// Use the request's host and scheme to build the URL
-		scheme := "http"
-		if ctx.IsTLS() {
-			scheme = "https"
+		// Check for reverse proxy headers first
+		scheme := string(ctx.Request.Header.Peek("X-Forwarded-Proto"))
+		if scheme == "" {
+			if ctx.IsTLS() {
+				scheme = "https"
+			} else {
+				scheme = "http"
+			}
+		} else if !strings.Contains(scheme, "https") {
+			scheme = "http"
+		} else {
+			scheme = "https"
 		}
-		host := string(ctx.Host())
+		
+		host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
+		if host == "" {
+			host = string(ctx.Host())
+		}
+		
 		uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)
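The header-precedence rules can be isolated into a small pure function, sketched here with illustrative names (not the actual handler code):

```go
package main

import "fmt"

// resolveOrigin picks the client-facing scheme and host, preferring reverse
// proxy headers over values seen on the direct connection.
func resolveOrigin(xfProto, xfHost string, isTLS bool, directHost string) (scheme, host string) {
	scheme = xfProto
	if scheme != "https" {
		// Anything other than an explicit https forward is treated as http,
		// unless the direct connection itself is TLS.
		if scheme == "" && isTLS {
			scheme = "https"
		} else {
			scheme = "http"
		}
	}
	host = xfHost
	if host == "" {
		host = directHost
	}
	return scheme, host
}

func main() {
	s, h := resolveOrigin("https", "api.example.com", false, "10.0.0.5:8080")
	fmt.Printf("%s://%s\n", s, h) // https://api.example.com
}
```

Keeping the precedence logic in one pure helper also makes the proxy cases trivially unit-testable.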

469-488: Add default case to handle unexpected src types.

The switch statement on sdkReq.Src handles string and []interface{} cases, but silently ignores any other type. If Src is neither (e.g., due to malformed input or SDK changes), both InputFileID and Requests remain unset, which may cause confusing downstream errors or silent failures.

Add a default case:

 		// Handle src field - can be string (file reference) or array (inline requests)
 		switch src := sdkReq.Src.(type) {
 		case string:
 			// File-based input: src="files/display_name"
 			// TrimPrefix is safe even if prefix doesn't exist
 			bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
 		case []interface{}:
 			// Inline requests: src=[{contents: [...], config: {...}}]
 			requests := make([]schemas.BatchRequestItem, 0, len(src))
 			for i, item := range src {
 				if itemMap, ok := item.(map[string]interface{}); ok {
 					customID := fmt.Sprintf("request-%d", i)
 					requests = append(requests, schemas.BatchRequestItem{
 						CustomID: customID,
 						Body:     itemMap,
 					})
 				}
 			}
 			bifrostReq.Requests = requests
+		default:
+			return nil, fmt.Errorf("invalid src type: expected string or array, got %T", src)
 		}
core/providers/gemini/batch.go (1)

292-296: Compute BatchStats defensively to prevent negative counts.

The BatchStats computation can produce negative values when resp.RequestCounts.Total is zero but Completed or Failed are non-zero. This occurs in three converter functions and matches the concern raised in previous reviews.

For example, when Total = 0, Completed = 5, Failed = 1:

  • PendingRequestCount = 0 - 5 = -5 (invalid)
  • SuccessfulRequestCount = 5 - 1 = 4 (correct, but misleading given negative pending)

Apply defensive math in all three functions (ToGeminiBatchJobResponse, ToGeminiBatchRetrieveResponse, ToGeminiBatchListResponse):

+	total := resp.RequestCounts.Total
+	if total == 0 && (resp.RequestCounts.Completed > 0 || resp.RequestCounts.Failed > 0) {
+		total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
+	}
+	pending := total - resp.RequestCounts.Completed - resp.RequestCounts.Failed
+	if pending < 0 {
+		pending = 0
+	}
+	success := resp.RequestCounts.Completed - resp.RequestCounts.Failed
+	if success < 0 {
+		success = 0
+	}
+
 	BatchStats: &GeminiBatchStats{
-		RequestCount:           resp.RequestCounts.Total,
-		PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-		SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
+		RequestCount:           total,
+		PendingRequestCount:    pending,
+		SuccessfulRequestCount: success,
 	},

Also applies to: 319-323, 358-362
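The defensive math above, condensed into a standalone sketch (struct and function names are illustrative, not from the codebase):

```go
package main

import "fmt"

type requestCounts struct{ Total, Completed, Failed int }

// deriveStats reconstructs a plausible total when the provider reports zero
// and clamps derived counts at zero, so callers never see negative values.
func deriveStats(c requestCounts) (total, pending, success int) {
	total = c.Total
	if total == 0 && (c.Completed > 0 || c.Failed > 0) {
		total = c.Completed + c.Failed
	}
	pending = total - c.Completed - c.Failed
	if pending < 0 {
		pending = 0
	}
	success = c.Completed - c.Failed
	if success < 0 {
		success = 0
	}
	return total, pending, success
}

func main() {
	// The problem case from the review: Total=0, Completed=5, Failed=1.
	fmt.Println(deriveStats(requestCounts{Total: 0, Completed: 5, Failed: 1})) // 6 0 4
}
```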

🧹 Nitpick comments (1)
transports/bifrost-http/integrations/router.go (1)

707-858: BatchDelete execution path looks correct; consider mirroring “nil response” handling if PostCallback/converter intentionally suppresses bodies.
For files, you explicitly treat response == nil as “headers/status already set” (Line 1027-1030). Batch handlers always sendSuccess(...). If PostCallback or BatchDeleteResponseConverter can intentionally return nil, you may want the same guard here for consistency.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 536795a and bc3aeb6.

📒 Files selected for processing (19)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/cerebras/cerebras.go (1 hunks)
  • core/providers/cohere/cohere.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/gemini.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/ollama.go (1 hunks)
  • core/providers/openrouter/openrouter.go (2 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (13 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
  • core/schemas/bifrost.go
  • transports/bifrost-http/integrations/utils.go
  • core/providers/anthropic/batch.go
  • core/providers/azure/batch.go
  • core/bifrost.go
  • core/providers/gemini/gemini.go
  • core/providers/openrouter/openrouter.go
  • core/providers/cohere/cohere.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/providers/parasail/batch.go
  • core/providers/ollama/ollama.go
  • core/schemas/batch.go
  • core/providers/mistral/batch.go
  • core/schemas/provider.go
  • core/providers/sgl/batch.go
  • core/providers/cerebras/cerebras.go
  • core/providers/gemini/batch.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/types.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/providers/parasail/batch.go
  • core/providers/ollama/ollama.go
  • core/schemas/batch.go
  • core/providers/mistral/batch.go
  • core/schemas/provider.go
  • core/providers/sgl/batch.go
  • core/providers/cerebras/cerebras.go
  • core/providers/gemini/batch.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/gemini/types.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.

Applied to files:

  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/integrations/genai.go
🧬 Code graph analysis (8)
core/providers/parasail/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/schemas/batch.go (3)
core/schemas/provider.go (1)
  • Provider (315-364)
core/schemas/bifrost.go (1)
  • ModelProvider (32-32)
ui/lib/types/config.ts (1)
  • ModelProvider (182-185)
core/providers/mistral/batch.go (4)
core/providers/mistral/mistral.go (1)
  • MistralProvider (17-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/sgl/batch.go (4)
core/providers/sgl/sgl.go (1)
  • SGLProvider (18-24)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (12)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1627-1627)
  • GeminiJobStateRunning (1628-1628)
  • GeminiJobStateSucceeded (1629-1629)
  • GeminiJobStateFailed (1630-1630)
  • GeminiJobStateCancelling (1631-1631)
  • GeminiJobStateCancelled (1632-1632)
  • GeminiBatchJobResponseSDK (1663-1669)
  • GeminiBatchMetadata (1544-1557)
  • GeminiBatchStats (1478-1482)
  • GeminiBatchDest (1578-1581)
  • GeminiBatchListResponseSDK (1672-1675)
transports/bifrost-http/integrations/genai.go (4)
core/schemas/provider.go (1)
  • Provider (315-364)
core/providers/gemini/types.go (2)
  • GeminiFileUploadRequest (1701-1707)
  • Type (781-781)
core/schemas/files.go (6)
  • BifrostFileUploadRequest (53-66)
  • FilePurpose (5-5)
  • BifrostFileUploadResponse (89-105)
  • BifrostFileListRequest (108-124)
  • BifrostFileRetrieveRequest (137-146)
  • BifrostFileDeleteRequest (168-177)
core/schemas/batch.go (11)
  • BifrostBatchCreateRequest (65-82)
  • BatchRequestItem (31-37)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchListRequest (112-126)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchRetrieveRequest (143-149)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchCancelRequest (190-196)
  • BifrostBatchCancelResponse (199-208)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/gemini/types.go (1)
core/schemas/models.go (1)
  • Model (109-129)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (5)
core/schemas/batch.go (1)

265-281: BatchDelete schema types look consistent with existing batch schemas.
No concerns with the JSON tags / ExtraFields pattern here. (Noting this PR is in a Graphite stack; ensure downstream OpenAPI/router changes land together.)

core/providers/mistral/batch.go (1)

35-38: BatchDelete unsupported stub matches existing Mistral batch stub style.

core/providers/parasail/batch.go (1)

35-38: BatchDelete unsupported stub matches existing Parasail batch stub style.

core/schemas/provider.go (1)

162-189: Provider permissioning + interface wiring for batch_delete is correct—ensure UI/config surfaces the new flag.
Because AllowedRequests is an explicit allowlist, batch_delete will be denied unless set true (when AllowedRequests is non-nil). Please verify the Graphite stack includes any UI/config/OpenAPI updates needed to set batch_delete.

Also applies to: 191-246, 314-364

transports/bifrost-http/integrations/router.go (1)

80-89: BatchDelete router wiring (request shape + response converter hook) is clean and consistent with other batch ops.

Also applies to: 159-162, 268-301


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
core/providers/azure/batch.go (1)

1-24: Fix build break: add required imports for new BatchDelete.

 package azure
+
+import (
+	"context"
+
+	providerUtils "github.com/maximhq/bifrost/core/providers/utils"
+	"github.com/maximhq/bifrost/core/schemas"
+)
 
 // splitJSONL splits JSONL content into individual lines.
 func splitJSONL(data []byte) [][]byte {
   ...
 }
♻️ Duplicate comments (16)
core/providers/anthropic/batch.go (1)

3-9: Fix build break: missing context import for context.Context.

 import (
+	"context"
 	"time"

 	providerUtils "github.com/maximhq/bifrost/core/providers/utils"
 	"github.com/maximhq/bifrost/core/schemas"
 	"github.com/valyala/fasthttp"
 )

Also applies to: 379-383

core/providers/gemini/gemini.go (1)

2111-2119: Harden delete URL construction to accept full resource paths, not just batches/ prefix.

If callers pass a full resource name containing / (not starting with batches/), the current else branch will generate a malformed URL.

 	// Build URL for delete operation
 	batchID := request.BatchID
 	var url string
-	if strings.HasPrefix(batchID, "batches/") {
-		url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, batchID)
-	} else {
+	if strings.Contains(batchID, "/") {
+		url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, strings.TrimPrefix(batchID, "/"))
+	} else {
 		url = fmt.Sprintf("%s/batches/%s", provider.networkConfig.BaseURL, batchID)
 	}
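A standalone sketch of the suggested URL logic (helper name illustrative), exercising both a bare ID and full resource paths:

```go
package main

import (
	"fmt"
	"strings"
)

// batchDeleteURL builds the delete endpoint, accepting either a bare ID
// ("abc") or a full resource path ("batches/abc", "projects/p/batches/abc").
func batchDeleteURL(baseURL, batchID string) string {
	if strings.Contains(batchID, "/") {
		return fmt.Sprintf("%s/%s", baseURL, strings.TrimPrefix(batchID, "/"))
	}
	return fmt.Sprintf("%s/batches/%s", baseURL, batchID)
}

func main() {
	base := "https://api.example.com/v1beta"
	fmt.Println(batchDeleteURL(base, "batches/abc")) // https://api.example.com/v1beta/batches/abc
	fmt.Println(batchDeleteURL(base, "abc"))         // https://api.example.com/v1beta/batches/abc
}
```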
core/providers/sgl/batch.go (1)

35-38: Use _ for unused params (consistency with other SGL batch stubs).

- func (provider *SGLProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+ func (provider *SGLProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
  	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
  }
core/providers/ollama/ollama.go (1)

251-254: Use _ for unused params (match surrounding unsupported methods).

- func (provider *OllamaProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+ func (provider *OllamaProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
  	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
  }
transports/bifrost-http/integrations/router.go (2)

426-431: Call cancel() before returning on ErrResumableUploadInit.

 if config.PreCallback != nil {
   if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
     // Check if this is a resumable upload init that was already handled
     if err == ErrResumableUploadInit {
-      // Response was already written by the PreCallback, just return
+      // Response was already written by the PreCallback; clean up context and return
+      cancel()
       return
     }
     g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
     return
   }
 }

873-897: Remove fmt.Printf debug logging from request path (use structured logger or drop).

Also applies to: 1022-1024

core/providers/gemini/batch.go (1)

286-297: Fix BatchStats math to avoid negative/invalid counts.

Consider deriving totals/pending defensively (and reusing in all 3 converters):

-        BatchStats: &GeminiBatchStats{
-            RequestCount:           resp.RequestCounts.Total,
-            PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-            SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-        },
+        total := resp.RequestCounts.Total
+        if total == 0 && (resp.RequestCounts.Completed > 0 || resp.RequestCounts.Failed > 0) {
+            total = resp.RequestCounts.Completed + resp.RequestCounts.Failed
+        }
+        pending := total - resp.RequestCounts.Completed - resp.RequestCounts.Failed
+        if pending < 0 {
+            pending = 0
+        }
+        success := resp.RequestCounts.Completed - resp.RequestCounts.Failed
+        if success < 0 {
+            success = 0
+        }
+        BatchStats: &GeminiBatchStats{
+            RequestCount:           total,
+            PendingRequestCount:    pending,
+            SuccessfulRequestCount: success,
+        },

Also applies to: 314-324, 353-364

transports/bifrost-http/integrations/genai.go (9)

1255-1258: Register batch routes before the generic {model:*} route (first-match-wins).

CreateGenAIRouteConfigs("/genai") adds /v1beta/models/{model:*}; batch routes should be registered before it.

- routes := CreateGenAIRouteConfigs("/genai")
- routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
- routes = append(routes, CreateGenAIBatchRouteConfigs("/genai", handlerStore)...)
+ routes := CreateGenAIBatchRouteConfigs("/genai", handlerStore)
+ routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
+ routes = append(routes, CreateGenAIRouteConfigs("/genai")...)

45-50: Handle crypto/rand.Read error in generateSessionID.


53-68: Add a shutdown signal for the cleanup goroutine + stop the ticker.


875-936: Avoid string round-trips in multipart/related parsing (operate on bytes throughout).


798-799: Remove fmt.Printf debug logs (use structured logger or drop).

Also applies to: 820-825, 830-831, 841-848, 860-861, 1019-1041, 1052-1053, 1068-1069, 1078-1083, 1114-1115, 1131-1136, 1153-1154


1138-1142: Avoid mutating *uploadSession after storing in sync.Map (data race).


1144-1152: Respect X-Forwarded-Proto / X-Forwarded-Host when constructing resumable upload URL.


1057-1063: Don’t hardcode resumable upload Purpose = "batch" for all uploads.


469-488: Fail fast on unexpected sdkReq.Src types (add default:).

🧹 Nitpick comments (1)
core/providers/cohere/cohere.go (1)

869-872: Looks fine; optional: use _ params for consistency with other Cohere unsupported batch methods.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 536795a and bc3aeb6.

📒 Files selected for processing (19)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/cerebras/cerebras.go (1 hunks)
  • core/providers/cohere/cohere.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/gemini.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/ollama.go (1 hunks)
  • core/providers/openrouter/openrouter.go (2 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (13 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • core/schemas/bifrost.go
  • core/providers/mistral/batch.go
  • transports/bifrost-http/integrations/utils.go
  • core/bifrost.go
  • core/providers/cerebras/cerebras.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/schemas/batch.go
  • core/providers/parasail/batch.go
  • core/providers/cohere/cohere.go
  • core/providers/sgl/batch.go
  • core/providers/azure/batch.go
  • core/providers/anthropic/batch.go
  • core/providers/gemini/batch.go
  • core/schemas/provider.go
  • core/providers/gemini/gemini.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/ollama/ollama.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/gemini/types.go
  • core/providers/openrouter/openrouter.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/schemas/batch.go
  • core/providers/parasail/batch.go
  • core/providers/cohere/cohere.go
  • core/providers/sgl/batch.go
  • core/providers/azure/batch.go
  • core/providers/anthropic/batch.go
  • core/providers/gemini/batch.go
  • core/schemas/provider.go
  • core/providers/gemini/gemini.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/ollama/ollama.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/gemini/types.go
  • core/providers/openrouter/openrouter.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.

Applied to files:

  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/router.go
🧬 Code graph analysis (11)
core/schemas/batch.go (2)
core/schemas/provider.go (1)
  • Provider (315-364)
core/schemas/bifrost.go (2)
  • ModelProvider (32-32)
  • BifrostResponseExtraFields (299-309)
core/providers/parasail/batch.go (3)
core/providers/parasail/parasail.go (1)
  • ParasailProvider (17-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/sgl/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/azure/batch.go (3)
core/providers/azure/azure.go (1)
  • AzureProvider (26-32)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/anthropic/batch.go (2)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (13)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1627-1627)
  • GeminiJobStateRunning (1628-1628)
  • GeminiJobStateSucceeded (1629-1629)
  • GeminiJobStateFailed (1630-1630)
  • GeminiJobStateCancelling (1631-1631)
  • GeminiJobStateCancelled (1632-1632)
  • GeminiBatchJobResponseSDK (1663-1669)
  • GeminiBatchMetadata (1544-1557)
  • GeminiBatchStats (1478-1482)
  • GeminiBatchDest (1578-1581)
  • GeminiBatchListResponseSDK (1672-1675)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/gemini/gemini.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (5)
  • BifrostError (369-378)
  • Gemini (48-48)
  • BatchDeleteRequest (103-103)
  • BifrostResponseExtraFields (299-309)
  • RequestType (83-83)
core/providers/utils/utils.go (4)
  • CheckOperationAllowed (472-484)
  • NewBifrostOperationError (513-524)
  • SetExtraHeaders (179-209)
  • MakeRequestWithContext (40-94)
core/schemas/provider.go (1)
  • Provider (315-364)
core/providers/ollama/ollama.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (369-378)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
transports/bifrost-http/integrations/router.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (37-37)
core/schemas/provider.go (1)
  • Provider (315-364)
core/providers/gemini/types.go (1)
core/schemas/models.go (1)
  • Model (109-129)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (8)
core/schemas/batch.go (1)

265-282: Schema additions look consistent with existing batch request/response patterns.

core/schemas/provider.go (1)

165-189: Provider capability gating + interface extension for BatchDelete looks coherent.

Also applies to: 191-247, 314-364

core/providers/parasail/batch.go (1)

35-38: Parasail BatchDelete unsupported-op stub matches the existing provider pattern.

core/providers/openrouter/openrouter.go (1)

285-309: OpenRouter unsupported BatchDelete stub is consistent with the provider’s existing unsupported batch surface.

transports/bifrost-http/integrations/router.go (1)

88-89: Batch delete routing + converter plumbing looks consistent with other batch ops.

The BatchRequest extension + handleBatchRequest switch case follow the existing create/list/retrieve/cancel/results pattern.

Also applies to: 159-162, 291-292, 825-846

core/providers/gemini/types.go (3)

1623-1634: LGTM! Clear separation of SDK job states from internal batch states.

The new job state constants are well-documented and clearly distinguished from the internal batch states. The naming convention is consistent and the comment clarifies their purpose.


1636-1696: SDK batch type definitions are well-structured.

The new SDK batch types provide a clear interface for the Google GenAI SDK format. The use of interface{} for the Src field in GeminiBatchCreateRequestSDK provides necessary flexibility to handle both string (file reference) and array (inline requests) inputs.

However, note that past review comments have flagged validation concerns about the Src field handling in the transport layer. While the type definition itself is appropriate, ensure the transport layer validation addresses those concerns.


1702-1707: GeminiFileUploadRequest structure is well-designed.

The addition of MimeType and ResumableSessionID fields enhances the file upload functionality:

  • MimeType allows proper MIME type tracking
  • ResumableSessionID supports resumable upload sessions (appropriately marked as internal with json:"-")

The struct fields are appropriately typed and tagged. However, past review comments have identified concerns about MimeType not being consistently extracted and forwarded across all file upload parser paths (multipart, raw, resumable). While the type definition is sound, verify that the transport layer properly populates these fields in all code paths.

Comment on lines +314 to +315
// Add metadata
result.Metadata = &GeminiBatchMetadata{
Name: resp.ID,
State: ToGeminiJobState(resp.Status),
CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
BatchStats: &GeminiBatchStats{

⚠️ Potential issue | 🟡 Minor

Avoid emitting 1970 timestamps when CreatedAt == 0.

In ToGeminiBatchRetrieveResponse / ToGeminiBatchListResponse, consider guarding CreatedAt > 0 (like ToGeminiBatchJobResponse already does) before setting CreateTime.

Also applies to: 353-358

🤖 Prompt for AI Agents
In core/providers/gemini/batch.go around lines 314-319 (and similarly 353-358),
the code unconditionally formats resp.CreatedAt which emits a 1970 timestamp
when CreatedAt == 0; update both places to check if resp.CreatedAt > 0 before
setting result.Metadata.CreateTime (e.g., only set CreateTime =
time.Unix(resp.CreatedAt, 0).Format(time.RFC3339) when > 0), otherwise leave
CreateTime empty or nil to avoid the 1970-01-01 value.
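The guard described above can be factored into a small standalone helper. This is a hedged sketch: the helper name and the UTC normalization are illustrative additions, not code from the PR.

```go
package main

import (
	"fmt"
	"time"
)

// formatCreateTime renders a Unix-seconds value as RFC 3339, or returns ""
// when createdAt is zero/negative so callers never emit 1970-01-01.
func formatCreateTime(createdAt int64) string {
	if createdAt <= 0 {
		return ""
	}
	return time.Unix(createdAt, 0).UTC().Format(time.RFC3339)
}

func main() {
	fmt.Printf("%q\n", formatCreateTime(0)) // → ""
	fmt.Printf("%q\n", formatCreateTime(1)) // → "1970-01-01T00:00:01Z"
}
```

Both converters could then assign `result.Metadata.CreateTime` from this helper and skip the field when the result is empty.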

Comment on lines +2093 to +2161
// BatchDelete deletes a batch job for Gemini.
func (provider *GeminiProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
if err := providerUtils.CheckOperationAllowed(schemas.Gemini, provider.customProviderConfig, schemas.BatchDeleteRequest); err != nil {
return nil, err
}

providerName := provider.GetProviderKey()

if request.BatchID == "" {
return nil, providerUtils.NewBifrostOperationError("batch_id is required", nil, providerName)
}

// Create HTTP request
req := fasthttp.AcquireRequest()
resp := fasthttp.AcquireResponse()
defer fasthttp.ReleaseRequest(req)
defer fasthttp.ReleaseResponse(resp)

// Build URL for delete operation
batchID := request.BatchID
var url string
if strings.HasPrefix(batchID, "batches/") {
url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, batchID)
} else {
url = fmt.Sprintf("%s/batches/%s", provider.networkConfig.BaseURL, batchID)
}

provider.logger.Debug("gemini batch delete url: " + url)
providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil)
req.SetRequestURI(url)
req.Header.SetMethod(http.MethodDelete)
if key.Value != "" {
req.Header.Set("x-goog-api-key", key.Value)
}
req.Header.SetContentType("application/json")

// Make request
latency, bifrostErr := providerUtils.MakeRequestWithContext(ctx, provider.client, req, resp)
if bifrostErr != nil {
return nil, bifrostErr
}

// Handle response
if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
return nil, parseGeminiError(resp)
}

return &schemas.BifrostBatchDeleteResponse{
ID: request.BatchID,
Object: "batch",
Deleted: true,
ExtraFields: schemas.BifrostResponseExtraFields{
RequestType: schemas.BatchDeleteRequest,
Provider: providerName,
Latency: latency.Milliseconds(),
},
}, nil
}

⚠️ Potential issue | 🔴 Critical

Fix likely compile error + preserve error metadata in Gemini BatchDelete.

parseGeminiError is called with a different arity here than elsewhere in this file; also you probably want RequestMetadata for consistent error shaping.

 	// Handle response
 	if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
-		return nil, parseGeminiError(resp)
+		return nil, parseGeminiError(resp, &providerUtils.RequestMetadata{
+			Provider:    providerName,
+			RequestType: schemas.BatchDeleteRequest,
+		})
 	}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (original code, followed by the revised version):
// BatchDelete deletes a batch job for Gemini.
func (provider *GeminiProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
if err := providerUtils.CheckOperationAllowed(schemas.Gemini, provider.customProviderConfig, schemas.BatchDeleteRequest); err != nil {
return nil, err
}
providerName := provider.GetProviderKey()
if request.BatchID == "" {
return nil, providerUtils.NewBifrostOperationError("batch_id is required", nil, providerName)
}
// Create HTTP request
req := fasthttp.AcquireRequest()
resp := fasthttp.AcquireResponse()
defer fasthttp.ReleaseRequest(req)
defer fasthttp.ReleaseResponse(resp)
// Build URL for delete operation
batchID := request.BatchID
var url string
if strings.HasPrefix(batchID, "batches/") {
url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, batchID)
} else {
url = fmt.Sprintf("%s/batches/%s", provider.networkConfig.BaseURL, batchID)
}
provider.logger.Debug("gemini batch delete url: " + url)
providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil)
req.SetRequestURI(url)
req.Header.SetMethod(http.MethodDelete)
if key.Value != "" {
req.Header.Set("x-goog-api-key", key.Value)
}
req.Header.SetContentType("application/json")
// Make request
latency, bifrostErr := providerUtils.MakeRequestWithContext(ctx, provider.client, req, resp)
if bifrostErr != nil {
return nil, bifrostErr
}
// Handle response
if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
return nil, parseGeminiError(resp)
}
return &schemas.BifrostBatchDeleteResponse{
ID: request.BatchID,
Object: "batch",
Deleted: true,
ExtraFields: schemas.BifrostResponseExtraFields{
RequestType: schemas.BatchDeleteRequest,
Provider: providerName,
Latency: latency.Milliseconds(),
},
}, nil
}
// BatchDelete deletes a batch job for Gemini.
func (provider *GeminiProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
if err := providerUtils.CheckOperationAllowed(schemas.Gemini, provider.customProviderConfig, schemas.BatchDeleteRequest); err != nil {
return nil, err
}
providerName := provider.GetProviderKey()
if request.BatchID == "" {
return nil, providerUtils.NewBifrostOperationError("batch_id is required", nil, providerName)
}
// Create HTTP request
req := fasthttp.AcquireRequest()
resp := fasthttp.AcquireResponse()
defer fasthttp.ReleaseRequest(req)
defer fasthttp.ReleaseResponse(resp)
// Build URL for delete operation
batchID := request.BatchID
var url string
if strings.HasPrefix(batchID, "batches/") {
url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, batchID)
} else {
url = fmt.Sprintf("%s/batches/%s", provider.networkConfig.BaseURL, batchID)
}
provider.logger.Debug("gemini batch delete url: " + url)
providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil)
req.SetRequestURI(url)
req.Header.SetMethod(http.MethodDelete)
if key.Value != "" {
req.Header.Set("x-goog-api-key", key.Value)
}
req.Header.SetContentType("application/json")
// Make request
latency, bifrostErr := providerUtils.MakeRequestWithContext(ctx, provider.client, req, resp)
if bifrostErr != nil {
return nil, bifrostErr
}
// Handle response
if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
return nil, parseGeminiError(resp, &providerUtils.RequestMetadata{
Provider: providerName,
RequestType: schemas.BatchDeleteRequest,
})
}
return &schemas.BifrostBatchDeleteResponse{
ID: request.BatchID,
Object: "batch",
Deleted: true,
ExtraFields: schemas.BifrostResponseExtraFields{
RequestType: schemas.BatchDeleteRequest,
Provider: providerName,
Latency: latency.Milliseconds(),
},
}, nil
}
🤖 Prompt for AI Agents
In core/providers/gemini/gemini.go around lines 2093-2150, the call to
parseGeminiError(resp) here uses the wrong arity and omits RequestMetadata;
update the error path to call parseGeminiError with the same parameters used
elsewhere in this file (pass resp, providerName, and a RequestMetadata that sets
RequestType to schemas.BatchDeleteRequest) so the code compiles and the returned
error includes consistent provider and request metadata.

@akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from bc3aeb6 to 28810ab (December 15, 2025 03:26)
@coderabbitai (bot) left a comment


Actionable comments posted: 0

♻️ Duplicate comments (15)
transports/bifrost-http/integrations/utils.go (1)

194-196: Remove debug log that prints full JSON response body.

fmt.Printf("[DEBUG] sendSuccess: ... body=%s\n", ...) logs the entire response body, which can leak PII/tokens and bypasses the structured logger. For production, avoid logging full payloads; if you need observability, log only status and length via the configured logger.

Apply this diff to keep behavior while removing the unsafe log:

-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
-	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
+	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
+	ctx.SetBody(responseBody)
core/providers/cerebras/cerebras.go (1)

264-267: Fix unused parameters in Cerebras BatchDelete stub to avoid compile error.

ctx, key, and request are unused, which will fail compilation. Match the existing stub pattern by using _ for these parameters.

-// BatchDelete is not supported by Cerebras provider.
-func (provider *CerebrasProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
-	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
-}
+// BatchDelete is not supported by Cerebras provider.
+func (provider *CerebrasProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
+}

Please run your usual go build/CI pipeline to confirm no unused-parameter errors remain in this file.

core/providers/gemini/gemini.go (1)

2105-2162: Fix BatchDelete error path to call parseGeminiError with metadata (correct arity).

parseGeminiError is called elsewhere as parseGeminiError(resp, meta); here it’s invoked with only resp, which will not compile and also drops RequestType/Provider metadata.

Update the error branch to match other batch operations:

-	// Handle response
-	if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
-		return nil, parseGeminiError(resp)
-	}
+	// Handle response
+	if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
+		return nil, parseGeminiError(resp, &providerUtils.RequestMetadata{
+			Provider:    providerName,
+			RequestType: schemas.BatchDeleteRequest,
+		})
+	}

Please verify all parseGeminiError call sites have consistent signatures by grepping for it in core/providers/gemini/gemini.go.

core/bifrost.go (1)

1263-1339: Populate ExtraFields on early BatchDeleteRequest errors for consistency.

The core flow mirrors other batch methods, but the early error paths (nil req, missing provider/batch_id, provider not found, config/key errors) don’t set BifrostError.ExtraFields. Other public APIs (e.g. BatchResultsRequest) populate RequestType and often Provider, which helps clients correlate/log failures.

Consider aligning BatchDeleteRequest with that pattern. Example adjustments:

 	if req == nil {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch delete request is nil",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
 	if req.Provider == "" {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "provider is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
 	if req.BatchID == "" {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch_id is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			},
 		}
 	}
@@
 	provider := bifrost.getProviderByKey(req.Provider)
 	if provider == nil {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "provider not found for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			},
 		}
 	}
@@
-	config, err := bifrost.account.GetConfigForProvider(req.Provider)
-	if err != nil {
-		return nil, newBifrostErrorFromMsg(fmt.Sprintf("failed to get config for provider %s: %v", req.Provider, err.Error()))
-	}
-	if config == nil {
-		return nil, newBifrostErrorFromMsg(fmt.Sprintf("config is nil for provider %s", req.Provider))
-	}
+	config, err := bifrost.account.GetConfigForProvider(req.Provider)
+	if err != nil {
+		bErr := newBifrostErrorFromMsg(fmt.Sprintf("failed to get config for provider %s: %v", req.Provider, err.Error()))
+		bErr.ExtraFields = schemas.BifrostErrorExtraFields{
+			RequestType: schemas.BatchDeleteRequest,
+			Provider:    req.Provider,
+		}
+		return nil, bErr
+	}
+	if config == nil {
+		bErr := newBifrostErrorFromMsg(fmt.Sprintf("config is nil for provider %s", req.Provider))
+		bErr.ExtraFields = schemas.BifrostErrorExtraFields{
+			RequestType: schemas.BatchDeleteRequest,
+			Provider:    req.Provider,
+		}
+		return nil, bErr
+	}
@@
 	if providerRequiresKey(baseProvider, config.CustomProviderConfig) {
 		keys, keyErr := bifrost.getAllSupportedKeys(&ctx, req.Provider, baseProvider)
 		if keyErr != nil {
-			return nil, newBifrostError(keyErr)
+			bErr := newBifrostError(keyErr)
+			bErr.ExtraFields = schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			}
+			return nil, bErr
 		}
core/providers/sgl/batch.go (1)

35-38: Use _ for unused parameters in SGL BatchDelete stub to avoid compile error.

ctx, key, and request are unused here, which will fail compilation. Match the existing SGL batch stubs by using _ for these parameters.

-// BatchDelete is not supported by SGL provider.
-func (provider *SGLProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
-	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
-}
+// BatchDelete is not supported by SGL provider.
+func (provider *SGLProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
+}

Please confirm via go build ./... that there are no remaining unused-parameter errors in SGL batch stubs.

core/providers/ollama/ollama.go (1)

251-254: Align BatchDelete stub parameters with other unsupported methods

Other unsupported batch/file methods here use _ for unused parameters; BatchDelete should match for consistency and to satisfy linters.

-// BatchDelete is not supported by Ollama provider.
-func (provider *OllamaProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+// BatchDelete is not supported by Ollama provider.
+func (provider *OllamaProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/anthropic/batch.go (1)

3-9: Add missing context import for BatchDelete

BatchDelete uses context.Context but the file doesn’t import the context package, causing a compile error.

-import (
-	"time"
-
-	providerUtils "github.com/maximhq/bifrost/core/providers/utils"
-	"github.com/maximhq/bifrost/core/schemas"
-	"github.com/valyala/fasthttp"
-)
+import (
+	"context"
+	"time"
+
+	providerUtils "github.com/maximhq/bifrost/core/providers/utils"
+	"github.com/maximhq/bifrost/core/schemas"
+	"github.com/valyala/fasthttp"
+)

You can keep the existing BatchDelete body as‑is once the import is added.

Also applies to: 380-383

transports/bifrost-http/integrations/genai.go (7)

798-803: Remove or downgrade [DEBUG] fmt.Printf logging in hot paths

There are many fmt.Printf("[DEBUG] ...") calls in the Gemini file upload/resumable helpers (parseGeminiFileUploadRequest, parseGeminiResumableUpload, parseGeminiMultipartRelated, parseGeminiResumableUploadPhase2, setResumableUploadFinalStatus, extractGeminiResumableUploadParams, extractGeminiFileUploadParams). These:

  • Spam stdout in production.
  • Marshal entire responses (sonic.Marshal(resp)) just for logging, which is expensive.
  • Risk leaking file metadata in logs.

Either remove these or route them through the structured logger at debug level with concise messages, and drop any unnecessary marshaling.

Also applies to: 820-821, 830-831, 840-842, 847-848, 866-867, 1019-1020, 1052-1053, 1068-1069, 1077-1083, 1114-1115, 1131-1136, 1153-1159
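One way to apply this suggestion is to format a concise message once and hand it to a `Debug(string)`-style logger. The `Debug(string)` shape matches the `provider.logger.Debug` call seen elsewhere in this PR; the helper and logger names below are illustrative, and the real `schemas.Logger` interface may carry more methods.

```go
package main

import "fmt"

// formatUploadLog builds a concise debug line: status and payload size only,
// never the payload body, so logs cannot leak file contents or tokens.
func formatUploadLog(status int, bodyLen int) string {
	return fmt.Sprintf("file upload response: status=%d len=%d", status, bodyLen)
}

// Logger mirrors the Debug(string) shape used by provider.logger in this PR.
type Logger interface {
	Debug(msg string)
}

type stdoutLogger struct{}

func (stdoutLogger) Debug(msg string) { fmt.Println("DEBUG:", msg) }

func main() {
	var l Logger = stdoutLogger{}
	body := []byte(`{"file":{"name":"files/abc"}}`)
	l.Debug(formatUploadLog(200, len(body))) // → DEBUG: file upload response: status=200 len=29
}
```

This also removes the need to `sonic.Marshal` a response purely for logging.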


45-68: Handle crypto/rand.Read errors and make cleanup goroutine stoppable

generateSessionID ignores the error from rand.Read, and the cleanup goroutine spawned in init runs forever with no shutdown signal.

  • If rand.Read fails, you silently generate an all‑zero session ID, which weakens unpredictability.
  • The unbounded goroutine complicates graceful shutdown and tests.

A minimal improvement:

func generateSessionID() string {
	bytes := make([]byte, 16)
-	rand.Read(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Extremely rare; fall back to a timestamp-based ID
+		return fmt.Sprintf("%d-%d", time.Now().UnixNano(), time.Now().UnixNano())
+	}
	return hex.EncodeToString(bytes)
}

And, as previously suggested, consider adding a package‑level stop signal for the ticker so this goroutine can terminate cleanly on service shutdown.


181-197: Fix Purpose type mismatch, wire MimeType through, and tighten SDK Src validation

Three related issues around file and batch handling:

  1. Purpose type mismatch in resumable converters (compile‑time bug)
    In the POST/PUT resumable routes you assign a string where BifrostFileUploadRequest.Purpose is a schemas.FilePurpose:
// phase 1 / non-resumable (correct)
bifrostReq := &schemas.BifrostFileUploadRequest{
    Provider: provider,
    File:     geminiReq.File,
    Filename: geminiReq.Filename,
-   Purpose:  schemas.FilePurpose(geminiReq.Purpose),
+   Purpose:  schemas.FilePurpose(geminiReq.Purpose),
}

// phase 2 POST / PUT (needs same cast)
bifrostReq := &schemas.BifrostFileUploadRequest{
    Provider: provider,
    File:     geminiReq.File,
    Filename: geminiReq.Filename,
-   Purpose:  geminiReq.Purpose,
+   Purpose:  schemas.FilePurpose(geminiReq.Purpose),
}
  2. MimeType is parsed but never forwarded to core
    GeminiFileUploadRequest.MimeType is populated in some parsers but dropped when building BifrostFileUploadRequest. To keep MIME info end‑to‑end, add:
bifrostReq := &schemas.BifrostFileUploadRequest{
    Provider: provider,
    File:     geminiReq.File,
    Filename: geminiReq.Filename,
-   Purpose:  schemas.FilePurpose(geminiReq.Purpose),
+   Purpose:  schemas.FilePurpose(geminiReq.Purpose),
+   MimeType: geminiReq.MimeType,
}

…and the same in the POST/PUT resumable routes.

  3. Ensure MimeType is set in all upload parsers
    • parseGeminiMultipartRelated and parseGeminiResumableUpload already set uploadReq.MimeType.
    • parseGeminiMultipartUpload only reads displayName; extend the JSON metadata to also accept "mime_type" and set uploadReq.MimeType.
    • parseGeminiRawUpload should at least try to set uploadReq.MimeType from Content-Type:
func parseGeminiRawUpload(ctx *fasthttp.RequestCtx, uploadReq *gemini.GeminiFileUploadRequest) error {
    body := ctx.Request.Body()
    if len(body) == 0 {
        return errors.New("file content is required")
    }

    uploadReq.File = make([]byte, len(body))
    copy(uploadReq.File, body)

+   if ct := string(ctx.Request.Header.ContentType()); ct != "" {
+       uploadReq.MimeType = ct
+   }
@@
    return nil
}

Together this keeps MIME information consistent for all upload paths and avoids a hard compile error in the resumable routes.

Also applies to: 231-237, 287-293, 949-961, 990-1015


448-488: Validate SDK Src more strictly to avoid silent misconfiguration

CreateGenAIBatchRouteConfigs’s create converter only handles Src as string or []interface{}, silently ignoring unsupported types and non‑map entries:

switch src := sdkReq.Src.(type) {
case string:
    ...
case []interface{}:
    ...
}

This can produce an empty InputFileID and Requests, leading to a vague core error (“either input_file_id or requests must be provided”).

Add explicit validation:

        switch src := sdkReq.Src.(type) {
        case string:
            bifrostReq.InputFileID = strings.TrimPrefix(src, "files/")
        case []interface{}:
            requests := make([]schemas.BatchRequestItem, 0, len(src))
            for i, item := range src {
                itemMap, ok := item.(map[string]interface{})
-               if itemMap, ok := item.(map[string]interface{}); ok {
-                   customID := fmt.Sprintf("request-%d", i)
-                   requests = append(requests, schemas.BatchRequestItem{
-                       CustomID: customID,
-                       Body:     itemMap,
-                   })
-               }
+               if !ok {
+                   return nil, fmt.Errorf("src[%d] must be an object, got %T", i, item)
+               }
+               customID := fmt.Sprintf("request-%d", i)
+               requests = append(requests, schemas.BatchRequestItem{
+                   CustomID: customID,
+                   Body:     itemMap,
+               })
            }
            bifrostReq.Requests = requests
+       default:
+           return nil, fmt.Errorf("src must be a string or array of request objects, got %T", src)
        }

This makes bad SDK inputs fail fast with a clear HTTP‑layer error instead of propagating a generic core error.


849-856: Fix data race on uploadSession stored in sync.Map

You store *uploadSession in uploadSessions and later mutate the pointed‑to struct (session.Provider) while the cleanup goroutine concurrently reads from the same objects:

  • Phase 1 metadata: uploadSessions.Store(sessionID, session) (struct with CreatedAt, etc.).
  • extractGeminiFileUploadParams: loads the pointer and writes session.Provider = ....
  • init cleanup goroutine ranges over uploadSessions and reads session.CreatedAt.

Mutating shared structs after storing them in sync.Map leads to data races under the Go memory model.

Treat uploadSession as immutable: when you need to set Provider, copy and re‑store:

        if uploadReq.ResumableSessionID != "" {
            // Update the session with the provider
            if sessionVal, ok := uploadSessions.Load(uploadReq.ResumableSessionID); ok {
                if session, ok := sessionVal.(*uploadSession); ok {
-                   session.Provider = schemas.ModelProvider(provider)
+                   updated := *session
+                   updated.Provider = schemas.ModelProvider(provider)
+                   uploadSessions.Store(uploadReq.ResumableSessionID, &updated)
                }
            }
        }

Keep the cleanup goroutine read‑only and only call Delete on the map entry itself.

Also applies to: 1100-1112, 1138-1141
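The copy-then-Store pattern above can be shown in isolation. This is a hedged sketch with a simplified session struct and illustrative names; it is not the PR's actual `uploadSession` definition.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// uploadSession is a simplified stand-in for the session struct in the PR.
type uploadSession struct {
	Provider  string
	CreatedAt time.Time
}

var sessions sync.Map

// setProvider treats the stored *uploadSession as immutable: it copies the
// struct, mutates the copy, and re-Stores it, so concurrent readers (such as
// a cleanup goroutine) never observe a partially written struct.
func setProvider(id, provider string) bool {
	val, ok := sessions.Load(id)
	if !ok {
		return false
	}
	old := val.(*uploadSession)
	updated := *old // shallow copy; safe while all fields are values
	updated.Provider = provider
	sessions.Store(id, &updated)
	return true
}

func main() {
	sessions.Store("s1", &uploadSession{CreatedAt: time.Now()})
	setProvider("s1", "gemini")
	v, _ := sessions.Load("s1")
	fmt.Println(v.(*uploadSession).Provider) // → gemini
}
```

Readers keep holding the old pointer until their next `Load`, which is acceptable for this session-metadata use case.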


1145-1152: Honor reverse‑proxy headers when building resumable upload URL

extractGeminiFileUploadParams builds the phase‑2 upload URL using ctx.IsTLS() and ctx.Host() only:

scheme := "http"
if ctx.IsTLS() {
    scheme = "https"
}
host := string(ctx.Host())
uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)

Behind a reverse proxy, this produces internal host/scheme, breaking clients that follow the returned URL.

Mirror the pattern used in session.go: prefer X-Forwarded-Proto and X-Forwarded-Host when present, falling back to ctx.IsTLS() / ctx.Host() only if those headers are absent.
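The fallback order the review asks for can be sketched as a pure function. The header values are passed in directly here; in the real handler they would come from `ctx.Request.Header.Peek("X-Forwarded-Proto")` and `ctx.Request.Header.Peek("X-Forwarded-Host")`. The function name is illustrative.

```go
package main

import "fmt"

// externalBase derives scheme://host for client-facing URLs, preferring
// reverse-proxy headers and falling back to the direct connection's
// TLS state and Host only when those headers are absent.
func externalBase(fwdProto, fwdHost string, isTLS bool, host string) string {
	scheme := "http"
	if isTLS {
		scheme = "https"
	}
	if fwdProto != "" {
		scheme = fwdProto
	}
	if fwdHost != "" {
		host = fwdHost
	}
	return fmt.Sprintf("%s://%s", scheme, host)
}

func main() {
	// Behind a TLS-terminating proxy the internal hop is plain HTTP.
	fmt.Println(externalBase("https", "api.example.com", false, "10.0.0.5:8080")) // → https://api.example.com
	// No proxy headers: use the direct connection's values.
	fmt.Println(externalBase("", "", true, "localhost:8080")) // → https://localhost:8080
}
```

Note that trusting these headers is only safe when the service sits behind a proxy that strips client-supplied copies.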


509-549: Fix route registration order so batch endpoints aren’t shadowed

NewGenAIRouter currently registers routes in this order:

routes := CreateGenAIRouteConfigs("/genai")                // generic /v1beta/models/{model:*}
routes = append(routes, CreateGenAIFileRouteConfigs(...)...)
routes = append(routes, CreateGenAIBatchRouteConfigs(...)...)

Because fasthttp/router uses first‑match semantics, the generic /v1beta/models/{model:*} route will capture requests like:

  • POST /genai/v1beta/models/gemini-1.5-pro:batchGenerateContent

before the more specific batch route /v1beta/models/{model}:batchGenerateContent, so batch requests never hit the batch handler.

Register batch routes first, then generic:

 func NewGenAIRouter(client *bifrost.Bifrost, handlerStore lib.HandlerStore, logger schemas.Logger) *GenAIRouter {
-    routes := CreateGenAIRouteConfigs("/genai")
-    routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
-    routes = append(routes, CreateGenAIBatchRouteConfigs("/genai", handlerStore)...)
+    routes := CreateGenAIBatchRouteConfigs("/genai", handlerStore)
+    routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
+    routes = append(routes, CreateGenAIRouteConfigs("/genai")...)

This ensures :batchGenerateContent paths are handled by the intended batch handlers.

Also applies to: 551-587, 589-625, 627-664, 1253-1258

core/providers/gemini/batch.go (1)

271-359: Defensively compute BatchStats and guard zero CreatedAt timestamps

Across these converters, current math can yield negative PendingRequestCount / SuccessfulRequestCount when Total is zero but Completed / Failed are non‑zero, and CreatedAt == 0 produces a misleading 1970 timestamp.

Consider deriving totals defensively and skipping zero timestamps, e.g.:

 func ToGeminiBatchJobResponse(resp *schemas.BifrostBatchCreateResponse) *GeminiBatchJobResponseSDK {
@@
-	if resp.CreatedAt > 0 {
-		result.Metadata = &GeminiBatchMetadata{
-			Name:       resp.ID,
-			State:      ToGeminiJobState(resp.Status),
-			CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
-			BatchStats: &GeminiBatchStats{
-				RequestCount:           resp.RequestCounts.Total,
-				PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-				SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-			},
-		}
-	}
+	if resp.CreatedAt > 0 {
+		total := resp.RequestCounts.Total
+		if total == 0 {
+			total = resp.RequestCounts.Completed + resp.RequestCounts.Failed + resp.RequestCounts.Pending
+		}
+		success := resp.RequestCounts.Completed - resp.RequestCounts.Failed
+		if success < 0 {
+			success = 0
+		}
+		pending := total - resp.RequestCounts.Completed - resp.RequestCounts.Failed
+		if pending < 0 {
+			pending = 0
+		}
+		result.Metadata = &GeminiBatchMetadata{
+			Name:       resp.ID,
+			State:      ToGeminiJobState(resp.Status),
+			CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
+			BatchStats: &GeminiBatchStats{
+				RequestCount:           total,
+				PendingRequestCount:    pending,
+				SuccessfulRequestCount: success,
+			},
+		}
+	}

Apply the same pattern (defensive total/success/pending and CreatedAt > 0 guard) in:

  • ToGeminiBatchRetrieveResponse when building result.Metadata
  • ToGeminiBatchListResponse when setting job.Metadata.CreateTime and job.Metadata.BatchStats.
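
The clamping logic can be factored into one small helper shared by all three converters (field names mirror the diff above; `deriveStats` itself is hypothetical):

```go
package main

import "fmt"

// deriveStats clamps pending/success so malformed provider counts can
// never go negative, and reconstructs the total when it is missing.
func deriveStats(total, completed, failed, pendingIn int64) (t, pending, success int64) {
	t = total
	if t == 0 {
		t = completed + failed + pendingIn
	}
	success = completed - failed
	if success < 0 {
		success = 0
	}
	pending = t - completed - failed
	if pending < 0 {
		pending = 0
	}
	return t, pending, success
}

func main() {
	// Total missing but per-state counts present: total is reconstructed.
	fmt.Println(deriveStats(0, 7, 2, 1)) // 10 1 5
	// Inconsistent upstream counts clamp to zero instead of going negative.
	fmt.Println(deriveStats(5, 9, 1, 0)) // 5 0 8
}
```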
🧹 Nitpick comments (1)
core/providers/cohere/cohere.go (1)

869-872: Make BatchDelete stub match other Cohere batch stubs

For consistency with BatchCreate, BatchList, etc., you can mark unused params with _:

-// BatchDelete is not supported by Cohere provider.
-func (provider *CohereProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+// BatchDelete is not supported by Cohere provider.
+func (provider *CohereProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bc3aeb6 and 28810ab.

📒 Files selected for processing (19)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/cerebras/cerebras.go (1 hunks)
  • core/providers/cohere/cohere.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/gemini.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/ollama.go (1 hunks)
  • core/providers/openrouter/openrouter.go (2 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (13 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
  • core/schemas/bifrost.go
  • core/schemas/batch.go
  • core/providers/parasail/batch.go
  • core/providers/azure/batch.go
  • core/providers/mistral/batch.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/openrouter/openrouter.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • transports/bifrost-http/integrations/utils.go
  • core/providers/anthropic/batch.go
  • core/providers/gemini/batch.go
  • core/schemas/provider.go
  • core/providers/sgl/batch.go
  • core/providers/cerebras/cerebras.go
  • core/bifrost.go
  • core/providers/cohere/cohere.go
  • core/providers/ollama/ollama.go
  • core/providers/gemini/types.go
  • core/providers/gemini/gemini.go
  • transports/bifrost-http/integrations/genai.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • transports/bifrost-http/integrations/utils.go
  • core/providers/anthropic/batch.go
  • core/providers/gemini/batch.go
  • core/schemas/provider.go
  • core/providers/sgl/batch.go
  • core/providers/cerebras/cerebras.go
  • core/bifrost.go
  • core/providers/cohere/cohere.go
  • core/providers/ollama/ollama.go
  • core/providers/gemini/types.go
  • core/providers/gemini/gemini.go
  • transports/bifrost-http/integrations/genai.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.

Applied to files:

  • transports/bifrost-http/integrations/utils.go
  • transports/bifrost-http/integrations/genai.go
🧬 Code graph analysis (9)
core/providers/anthropic/batch.go (4)
core/providers/anthropic/anthropic.go (1)
  • AnthropicProvider (25-33)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (12)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1627-1627)
  • GeminiJobStateRunning (1628-1628)
  • GeminiJobStateSucceeded (1629-1629)
  • GeminiJobStateFailed (1630-1630)
  • GeminiJobStateCancelling (1631-1631)
  • GeminiJobStateCancelled (1632-1632)
  • GeminiBatchJobResponseSDK (1663-1669)
  • GeminiBatchMetadata (1544-1557)
  • GeminiBatchStats (1478-1482)
  • GeminiBatchDest (1578-1581)
  • GeminiBatchListResponseSDK (1672-1675)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/sgl/batch.go (3)
core/providers/sgl/sgl.go (1)
  • SGLProvider (18-24)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/cerebras/cerebras.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/bifrost.go (4)
core/schemas/bifrost.go (5)
  • BatchDeleteRequest (103-103)
  • BifrostError (370-379)
  • ErrorField (388-395)
  • BifrostErrorExtraFields (437-441)
  • RequestType (83-83)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/provider.go (2)
  • Provider (315-364)
  • CustomProviderConfig (249-255)
core/schemas/account.go (1)
  • Key (8-17)
core/providers/ollama/ollama.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
ui/lib/types/logs.ts (1)
  • BifrostError (226-232)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/types.go (1)
core/schemas/models.go (1)
  • Model (109-129)
core/providers/gemini/gemini.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (4)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
  • BifrostResponseExtraFields (299-310)
  • RequestType (83-83)
core/providers/utils/utils.go (4)
  • CheckOperationAllowed (472-484)
  • NewBifrostOperationError (563-574)
  • SetExtraHeaders (179-209)
  • MakeRequestWithContext (40-94)
core/schemas/provider.go (1)
  • Provider (315-364)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (3)
core/schemas/provider.go (1)

183-189: BatchDelete wiring in schemas and Provider interface looks consistent.

Adding AllowedRequests.BatchDelete, the BatchDeleteRequest case in IsOperationAllowed, and the Provider.BatchDelete method cleanly extends the existing batch-operation pattern and should integrate smoothly with the rest of the stack.

Also applies to: 232-233, 352-353

core/providers/gemini/types.go (2)

1623-1634: SDK batch job state constants and types look consistent

The new GeminiJobState* constants and SDK batch request/response structs (GeminiBatchCreateRequestSDK, GeminiBatchInlineRequest, GeminiBatchJobResponseSDK, list/retrieve/cancel/delete SDK types) align with the GenAI SDK shapes and are used cleanly by the converters in core/providers/gemini/batch.go and the GenAI HTTP layer.

No issues from a typing/JSON-tag perspective.

Also applies to: 1636-1697


1700-1707: GeminiFileUploadRequest struct shape is appropriate for new upload flows

The updated GeminiFileUploadRequest (raw File bytes, Filename, MimeType, Purpose, and internal ResumableSessionID) is a good fit for the new file/resumable handling in genai.go. With the wiring fixes noted there (propagating MimeType and casting Purpose correctly), this type should work well end‑to‑end.

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 28810ab to 45c2601 Compare December 15, 2025 03:56
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (19)
transports/bifrost-http/integrations/utils.go (1)

196-196: Remove debug log that prints the full response body (PII/security risk).

This debug fmt.Printf logs the entire JSON response body, which can easily contain PII, secrets, or other sensitive payloads. For a production‑facing GenAI gateway, logging raw bodies is a significant privacy/compliance risk and contradicts safe logging practices.

Recommend removing this line (or gating it behind a very strict dev‑only flag and never logging full bodies). A safe pattern is to log only metadata (status, content length, maybe a stable response ID), e.g.:

-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
+	// Optional: keep a non-sensitive log if needed
+	// log.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d\n", len(responseBody))
core/providers/ollama/ollama.go (1)

251-254: Align BatchDelete stub parameters with other unsupported Ollama methods

For consistency with the other unsupported batch/file methods here, consider using _ for the unused parameters:

-// BatchDelete is not supported by Ollama provider.
-func (provider *OllamaProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+// BatchDelete is not supported by Ollama provider.
+func (provider *OllamaProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
transports/bifrost-http/integrations/router.go (2)

423-435: Call cancel() on all PreCallback error paths (including ErrResumableUploadInit)

When PreCallback returns an error, the function currently returns without ever invoking cancel(), so the derived context from ConvertToBifrostContext is never canceled on these paths.

Consider:

		if config.PreCallback != nil {
			if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
				// Check if this is a resumable upload init that was already handled
				if err == ErrResumableUploadInit {
-					// Response was already written by the PreCallback, just return
-					return
+					// Response was already written by the PreCallback; clean up context and return
+					cancel()
+					return
				}
-				g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
-				return
+				g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
+				cancel()
+				return
			}
		}

868-898: Remove fmt.Printf debug logging from file handlers or switch to structured logger

The file upload/content paths still use raw fmt.Printf for debug logging (e.g., “router: calling FileUploadRequest…”, “…error…”, “…success…”, and the conversion error message). These bypass the router’s schemas.Logger and will spam stdout in production.

Recommend either removing them or replacing with g.logger.Debug(...) calls that carry the same context, and dropping the fmt.Printf usage entirely.

Also applies to: 1021-1033

core/providers/gemini/gemini.go (1)

2105-2162: Fix parseGeminiError call in BatchDelete and include request metadata

In BatchDelete, the error path calls parseGeminiError(resp) with the wrong arity and omits the usual RequestMetadata, which will break compilation and drop provider/request‑type context on errors.

Align it with other Gemini batch methods:

 	// Handle response
 	if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
-		return nil, parseGeminiError(resp)
+		return nil, parseGeminiError(resp, &providerUtils.RequestMetadata{
+			Provider:    providerName,
+			RequestType: schemas.BatchDeleteRequest,
+		})
 	}
core/providers/gemini/batch.go (5)

288-292: Fix BatchStats computation to avoid negative counts.

This code has the same issue flagged in the previous review: when resp.RequestCounts.Total is zero but Completed or Failed are non-zero, the computed PendingRequestCount = Total - Completed becomes negative, and SuccessfulRequestCount = Completed - Failed can also be negative.


314-314: Guard against emitting 1970 timestamps when CreatedAt == 0.

This line unconditionally formats resp.CreatedAt, which will emit 1970-01-01 when CreatedAt == 0. This is the same issue flagged in the previous review.


316-320: Fix BatchStats computation to avoid negative counts.

Same BatchStats negative count issue as in ToGeminiBatchJobResponse.


353-353: Guard against emitting 1970 timestamps when CreatedAt == 0.

Same unconditional CreatedAt formatting issue as in ToGeminiBatchRetrieveResponse.


355-359: Fix BatchStats computation to avoid negative counts.

Same BatchStats negative count issue as in previous converters.

transports/bifrost-http/integrations/genai.go (9)

45-50: Handle error from crypto/rand.Read.

The error from crypto/rand.Read is ignored at line 48, which could lead to predictable session IDs if the random source fails. This is a security concern.
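
A sketch of the intended handling, assuming a hex‑encoded 16‑byte ID (the `newSessionID` helper is illustrative, not the PR's actual function):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newSessionID returns a random hex ID, failing loudly instead of
// silently continuing with predictable (zeroed) bytes.
func newSessionID() (string, error) {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		return "", fmt.Errorf("generate session id: %w", err)
	}
	return hex.EncodeToString(buf), nil
}

func main() {
	id, err := newSessionID()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(id)) // 32 hex characters
}
```

Callers that cannot return an error should at least abort the upload rather than hand out a guessable session ID.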


52-68: Provide shutdown mechanism for cleanup goroutine.

The cleanup goroutine runs indefinitely with no way to stop it, preventing graceful application shutdown.


469-488: Add default case to handle unexpected src types.

The switch statement handles string and []interface{} cases but silently ignores other types. Additionally, within the []interface{} case, non-map items are silently skipped without indication to the user.


786-812: Remove debug print statement.

Line 798 contains a debug fmt.Printf that should be removed before merging, or replaced with a structured logger call.


814-871: Remove debug print statements.

Multiple debug fmt.Printf statements at lines 820, 824, 830, 841, 847, 860 should be removed before merging.


873-940: Critical: Binary data corruption in multipart/related parsing.

Lines 887 and 935 convert the request body to a string and back to bytes, corrupting binary data. This is the same critical issue flagged in previous reviews.


1017-1070: Remove debug prints and fix hardcoded purpose.

Multiple debug fmt.Printf statements should be removed. Line 1062 hardcodes Purpose = "batch", which may be incorrect for non-batch uploads, as flagged in previous reviews.


1072-1085: Remove debug prints from post-callback helper.

Debug fmt.Printf statements at lines 1078, 1082 should be removed or replaced with structured logging.


1121-1173: Multiple issues: debug prints, reverse proxy headers, and session mutation.

  1. Debug fmt.Printf statements at lines 1131, 1135, 1153 should be removed.
  2. Lines 1144-1151 don't consider X-Forwarded-Proto and X-Forwarded-Host headers for reverse proxy scenarios (flagged in previous reviews).
  3. Line 1140 directly mutates session.Provider, which could cause data races (flagged in previous reviews).
🧹 Nitpick comments (2)
transports/bifrost-http/integrations/utils.go (1)

188-195: Consider relying on fasthttp to manage Content-Length (or ensure it always matches the body).

You’re hard‑coding Content-Length from len(responseBody) just before ctx.SetBody. This is correct today, but it’s easy for the header to become stale if sendSuccess is later refactored (compression, middleware mutation, retries, etc.), while fasthttp can manage it automatically from the response body.

If you don’t have a specific need to override it, consider removing the manual header and letting fasthttp derive Content-Length from the body:

-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
-	ctx.SetBody(responseBody)
+	ctx.SetBody(responseBody)

If you keep it, just be aware any future body changes must also update this header to avoid inconsistencies.

core/providers/cohere/cohere.go (1)

869-872: Use _ for unused BatchDelete parameters to match other Cohere stubs

All other Cohere batch stubs use _ for unused parameters; this one can be made consistent:

-// BatchDelete is not supported by Cohere provider.
-func (provider *CohereProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+// BatchDelete is not supported by Cohere provider.
+func (provider *CohereProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 28810ab and 45c2601.

📒 Files selected for processing (19)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/azure/batch.go (1 hunks)
  • core/providers/cerebras/cerebras.go (1 hunks)
  • core/providers/cohere/cohere.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/gemini.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/ollama.go (1 hunks)
  • core/providers/openrouter/openrouter.go (2 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (13 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
  • core/schemas/bifrost.go
  • core/schemas/batch.go
  • core/providers/cerebras/cerebras.go
  • core/providers/sgl/batch.go
  • core/providers/openrouter/openrouter.go
  • core/providers/mistral/batch.go
  • core/bifrost.go
  • core/providers/azure/batch.go
  • core/providers/anthropic/batch.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • transports/bifrost-http/integrations/utils.go
  • core/providers/ollama/ollama.go
  • core/schemas/provider.go
  • core/providers/gemini/batch.go
  • core/providers/parasail/batch.go
  • core/providers/cohere/cohere.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/gemini/types.go
  • core/providers/gemini/gemini.go
  • transports/bifrost-http/integrations/genai.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • transports/bifrost-http/integrations/utils.go
  • core/providers/ollama/ollama.go
  • core/schemas/provider.go
  • core/providers/gemini/batch.go
  • core/providers/parasail/batch.go
  • core/providers/cohere/cohere.go
  • transports/bifrost-http/integrations/router.go
  • core/providers/gemini/types.go
  • core/providers/gemini/gemini.go
  • transports/bifrost-http/integrations/genai.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.

Applied to files:

  • transports/bifrost-http/integrations/utils.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/integrations/genai.go
🧬 Code graph analysis (5)
core/providers/ollama/ollama.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
ui/lib/types/logs.ts (1)
  • BifrostError (226-232)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (13)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1627-1627)
  • GeminiJobStateRunning (1628-1628)
  • GeminiJobStateSucceeded (1629-1629)
  • GeminiJobStateFailed (1630-1630)
  • GeminiJobStateCancelling (1631-1631)
  • GeminiJobStateCancelled (1632-1632)
  • GeminiBatchJobResponseSDK (1663-1669)
  • GeminiBatchMetadata (1544-1557)
  • GeminiBatchStats (1478-1482)
  • GeminiBatchDest (1578-1581)
  • GeminiBatchListResponseSDK (1672-1675)
core/providers/parasail/batch.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/cohere/cohere.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
ui/lib/types/logs.ts (1)
  • BifrostError (226-232)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
transports/bifrost-http/integrations/router.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (37-37)
core/schemas/bifrost.go (2)
  • BatchDeleteRequest (103-103)
  • FileUploadRequest (104-104)
core/schemas/provider.go (1)
  • Provider (315-364)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (9)
core/schemas/provider.go (1)

162-189: BatchDelete schema and interface wiring looks consistent

The new BatchDelete flag, switch case, and Provider.BatchDelete method are aligned with the existing batch operations and the RequestType enum; no issues from a core/contracts perspective.

Also applies to: 191-247, 314-364

core/providers/parasail/batch.go (1)

35-38: Parasail BatchDelete unsupported stub matches existing pattern

This stub correctly uses NewUnsupportedOperationError and is consistent with the other Parasail batch operations.

transports/bifrost-http/integrations/router.go (1)

80-89: BatchDelete routing and converter hooks look correct

BatchRequest.DeleteRequest, BatchDeleteResponseConverter, the new RouteConfig.BatchDeleteResponseConverter, and the schemas.BatchDeleteRequest case in handleBatchRequest are all wired consistently with the existing batch operations; the flow (validate → core client call → optional PostCallback → optional converter) matches the other Batch* handlers.

Also applies to: 159-162, 268-301, 714-846

core/providers/gemini/batch.go (2)

247-269: LGTM: Status mapping is comprehensive.

The conversion logic correctly maps all Bifrost batch statuses to their Gemini SDK equivalents, with a sensible default to GeminiJobStatePending for unknown states.


375-385: LGTM: Cancel response converter is minimal and correct.

The converter appropriately returns only the essential fields (Name and State) for a cancel operation response.

core/providers/gemini/types.go (3)

1623-1634: LGTM: Job state constants are comprehensive.

The new Google GenAI SDK job state constants cover all expected states and are correctly aligned with the converters in batch.go.


1639-1696: SDK batch types are well-defined; Src field validation noted in transport layer review.

The SDK batch types correctly mirror the Google GenAI SDK format. The Src field in GeminiBatchCreateRequestSDK (line 1646) is an interface{} that requires validation in the transport layer, which was already flagged in previous reviews for genai.go.


1702-1707: LGTM: File upload request restructured for resumable uploads.

The changes appropriately replace Provider with MimeType and add ResumableSessionID for session tracking. Note that past review comments flag MimeType population gaps in the transport layer parsers, which should be addressed.

transports/bifrost-http/integrations/genai.go (1)

669-784: LGTM: Batch parameter extraction is consistent and correct.

The parameter extraction functions consistently handle provider extraction, query parameter parsing, and batch ID normalization with the batches/ prefix.

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 45c2601 to 7cb3901 Compare December 15, 2025 04:24
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (17)
core/providers/ollama/ollama.go (1)

251-254: Use _ for unused parameters to match the pattern of other batch/file operation stubs.

All other unsupported batch operation methods (BatchCreate, BatchList, BatchRetrieve, BatchCancel, BatchResults) and file operation methods in this file consistently use _ for unused parameters. The BatchDelete method should follow the same pattern.

Apply this diff to align with the established pattern:

-func (provider *OllamaProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *OllamaProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/cerebras/cerebras.go (1)

264-267: Use _ for unused parameters to match the pattern of other batch/file operation stubs.

All other unsupported batch operation methods (BatchCreate, BatchList, BatchRetrieve, BatchCancel, BatchResults) and file operation methods in this file consistently use _ for unused parameters. The BatchDelete method should follow the same pattern.

Apply this diff to align with the established pattern:

-func (provider *CerebrasProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+func (provider *CerebrasProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
transports/bifrost-http/integrations/utils.go (1)

194-196: Remove debug log that exposes full response body.

Logging the full response body may expose sensitive information such as PII, API keys, authentication tokens, or other confidential data.

Apply this diff to remove the sensitive debug log:

 	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
 	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
core/bifrost.go (1)

1263-1339: Add ExtraFields.RequestType/Provider to early validation errors for consistency and better error handling.

Early validation error returns (nil request, missing provider, missing batch_id, provider not found, config errors) don't populate ExtraFields, which makes client-side error handling and log correlation more difficult. Compare with BatchResultsRequest (lines 1172-1261), which consistently populates ExtraFields in all early validation errors.

Apply this diff to add ExtraFields to early validation errors:

 func (bifrost *Bifrost) BatchDeleteRequest(ctx context.Context, req *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	if req == nil {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch delete request is nil",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
 	if req.Provider == "" {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "provider is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
 	if req.BatchID == "" {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch_id is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			},
 		}
 	}
 	if ctx == nil {
 		ctx = bifrost.ctx
 	}
 
 	provider := bifrost.getProviderByKey(req.Provider)
 	if provider == nil {
 		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "provider not found for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			},
 		}
 	}
 
 	config, err := bifrost.account.GetConfigForProvider(req.Provider)
 	if err != nil {
-		return nil, newBifrostErrorFromMsg(fmt.Sprintf("failed to get config for provider %s: %v", req.Provider, err.Error()))
+		bifrostErr := newBifrostErrorFromMsg(fmt.Sprintf("failed to get config for provider %s: %v", req.Provider, err.Error()))
+		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
+			RequestType: schemas.BatchDeleteRequest,
+			Provider:    req.Provider,
+		}
+		return nil, bifrostErr
 	}
 	if config == nil {
-		return nil, newBifrostErrorFromMsg(fmt.Sprintf("config is nil for provider %s", req.Provider))
+		bifrostErr := newBifrostErrorFromMsg(fmt.Sprintf("config is nil for provider %s", req.Provider))
+		bifrostErr.ExtraFields = schemas.BifrostErrorExtraFields{
+			RequestType: schemas.BatchDeleteRequest,
+			Provider:    req.Provider,
+		}
+		return nil, bifrostErr
 	}
transports/bifrost-http/integrations/genai.go (10)

100-115: Avoid data races on uploadSession by treating it as immutable.

uploadSession pointers are stored in a sync.Map, but the struct is mutated after storage:

  • extractGeminiFileUploadParams sets session.Provider.
  • The cleanup goroutine concurrently reads session.CreatedAt.
  • Phase‑2 helpers read session.Filename/Provider.

Mutating shared structs from multiple goroutines via sync.Map still causes data races.

Instead, copy-on-write when updating:

-			if sessionVal, ok := uploadSessions.Load(uploadReq.ResumableSessionID); ok {
-				if session, ok := sessionVal.(*uploadSession); ok {
-					session.Provider = schemas.ModelProvider(provider)
-				}
-			}
+			if sessionVal, ok := uploadSessions.Load(uploadReq.ResumableSessionID); ok {
+				if session, ok := sessionVal.(*uploadSession); ok {
+					updated := *session
+					updated.Provider = schemas.ModelProvider(provider)
+					uploadSessions.Store(uploadReq.ResumableSessionID, &updated)
+				}
+			}

and keep the cleanup goroutine strictly read‑only except for uploadSessions.Delete(key).

Also applies to: 1088-1112, 1138-1141


786-871: Resumable upload parsing: consider stricter error handling and drop debug prints.

parseGeminiResumableUpload correctly branches on uploadProtocol and content type, but:

  • Metadata JSON parsing errors are logged and then ignored, potentially leaving a half‑initialized session.
  • Numerous fmt.Printf debug statements will be noisy in production.

Consider:

  • Returning an error if metadata JSON is invalid for the metadata‑only phase.
  • Replacing fmt.Printf with the structured logger at debug level, or removing them entirely.
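A minimal sketch of the stricter metadata-phase handling suggested above; the types and field names are illustrative, not the actual parser.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// uploadMetadata models the metadata-only first phase of a resumable upload.
type uploadMetadata struct {
	File struct {
		DisplayName string `json:"display_name"`
	} `json:"file"`
}

// parseUploadMetadata fails fast on malformed JSON instead of logging the
// error and continuing with a half-initialized session.
func parseUploadMetadata(body []byte) (*uploadMetadata, error) {
	var md uploadMetadata
	if err := json.Unmarshal(body, &md); err != nil {
		return nil, fmt.Errorf("invalid resumable upload metadata JSON: %w", err)
	}
	return &md, nil
}

func main() {
	if _, err := parseUploadMetadata([]byte(`{not json`)); err != nil {
		fmt.Println("rejected malformed metadata")
	}
	if md, err := parseUploadMetadata([]byte(`{"file":{"display_name":"a.jsonl"}}`)); err == nil {
		fmt.Println(md.File.DisplayName) // a.jsonl
	}
}
```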

1072-1085: Strip heavy debug logging from setResumableUploadFinalStatus.

Setting X-Goog-Upload-Status: final is correct, but serializing and logging the full response body with sonic.Marshal and fmt.Printf on every resumable completion will be noisy and wasteful.

Either remove these debug prints or gate them behind a debug log level using the project logger.


1087-1119: Resumable upload param extraction: behavior is sound, but remove debug prints.

extractGeminiResumableUploadParams:

  • Correctly looks up the session, falls back to schemas.Gemini, and deletes the session after extracting provider and filename.
  • Stores the provider and original filename in bifrostCtx for later response shaping.

Only concern is the extra fmt.Printf debug line; prefer structured logging or omit.


45-68: Handle rand.Read errors and make cleanup goroutine stoppable.

generateSessionID ignores the error from rand.Read, and the cleanup goroutine spawned in init runs forever with no way to stop it:

  • Ignoring rand.Read can yield weak/predictable session IDs if the entropy source fails.
  • The infinite goroutine complicates graceful shutdown.

Consider:

-func generateSessionID() string {
-	bytes := make([]byte, 16)
-	rand.Read(bytes)
-	return hex.EncodeToString(bytes)
-}
+func generateSessionID() string {
+	bytes := make([]byte, 16)
+	if _, err := rand.Read(bytes); err != nil {
+		// Extremely rare; fall back to time-based ID so we still get uniqueness.
+		now := time.Now().UnixNano()
+		return fmt.Sprintf("%x-%x", now, now)
+	}
+	return hex.EncodeToString(bytes)
+}
+
+var uploadSessionCleanupStop = make(chan struct{})
 
 func init() {
 	go func() {
 		ticker := time.NewTicker(10 * time.Minute)
-		for range ticker.C {
+		defer ticker.Stop()
+		for {
+			select {
+			case <-ticker.C:
 				now := time.Now()
 				uploadSessions.Range(func(key, value interface{}) bool {
 					if session, ok := value.(*uploadSession); ok {
 						if now.Sub(session.CreatedAt) > time.Hour {
 							uploadSessions.Delete(key)
 						}
 					}
 					return true
 				})
+			case <-uploadSessionCleanupStop:
+				return
+			}
 		}
 	}()
 }

And expose a package-level Shutdown (or similar) to close uploadSessionCleanupStop during server shutdown.

#!/bin/bash
# Optional: search for a central shutdown hook where you can close uploadSessionCleanupStop.
rg -n "Shutdown|Close|stop.*router" transports/bifrost-http --type go -C3 || true

873-939: Fix binary corruption in parseGeminiMultipartRelated (operate on []byte, not string).

This parser converts the request body to a string and back to []byte:

  • parts := strings.Split(string(body), delimiter)
  • content slices and []byte(content) for file data.

This corrupts binary uploads (null bytes, non‑UTF‑8 sequences).

Refactor to use byte operations throughout, e.g.:

-	// Split body by boundary
-	delimiter := "--" + boundary
-	parts := strings.Split(string(body), delimiter)
-
-	for _, part := range parts {
-		part = strings.TrimSpace(part)
-		if part == "" || part == "--" {
+	delimiter := []byte("--" + boundary)
+	parts := bytes.Split(body, delimiter)
+
+	for _, part := range parts {
+		part = bytes.TrimSpace(part)
+		if len(part) == 0 || bytes.Equal(part, []byte("--")) {
 			continue
 		}
 
-		// Split headers from content
-		headerEnd := strings.Index(part, "\r\n\r\n")
-		if headerEnd == -1 {
-			headerEnd = strings.Index(part, "\n\n")
+		// Split headers from content
+		headerEnd := bytes.Index(part, []byte("\r\n\r\n"))
+		if headerEnd == -1 {
+			headerEnd = bytes.Index(part, []byte("\n\n"))
 			if headerEnd == -1 {
 				continue
 			}
 		}
 
-		headers := part[:headerEnd]
-		content := part[headerEnd:]
-		content = strings.TrimPrefix(content, "\r\n\r\n")
-		content = strings.TrimPrefix(content, "\n\n")
+		headers := string(part[:headerEnd])
+		content := part[headerEnd:]
+		content = bytes.TrimPrefix(content, []byte("\r\n\r\n"))
+		content = bytes.TrimPrefix(content, []byte("\n\n"))
@@
-		if strings.Contains(headersLower, "application/json") {
+		if strings.Contains(headersLower, "application/json") {
@@
-			if err := sonic.Unmarshal([]byte(content), &metadata); err == nil {
+			if err := sonic.Unmarshal(content, &metadata); err == nil {
@@
-		} else {
-			// This is file content
-			uploadReq.File = []byte(content)
+		} else {
+			// This is file content
+			uploadReq.File = append([]byte(nil), content...)

(remember to import bytes at the top).


469-488: Handle unexpected Src types in batch create converter.

The switch on sdkReq.Src only handles string and []interface{}; any other type silently leaves both InputFileID and Requests empty, which will fail later without a clear error.

Add a default case that fails fast:

 		switch src := sdkReq.Src.(type) {
@@
 		case []interface{}:
@@
-					bifrostReq.Requests = requests
-				}
+					bifrostReq.Requests = requests
+				}
+			default:
+				return nil, fmt.Errorf("invalid src type: expected string or array, got %T", src)
 		}

1017-1070: Resumable phase‑2: avoid hardcoded purpose and drop debug prints.

Phase‑2 parsing is otherwise correct, but:

  • uploadReq.Purpose = "batch" is hardcoded; if resumable uploads are reused for non‑batch purposes later, this will be wrong.
  • Multiple fmt.Printf calls will spam logs.

Suggestions:

-	uploadReq.Purpose = "batch" // Default purpose for file uploads via GenAI API
+	if session.Purpose != "" {
+		uploadReq.Purpose = session.Purpose
+	} else {
+		uploadReq.Purpose = "batch" // sensible default for batch uploads
+	}

(assuming you record Purpose in uploadSession during phase‑1), and replace the fmt.Printf statements with the existing logger (or remove them).


1121-1169: Use forwarded headers when constructing resumable upload URL.

extractGeminiFileUploadParams builds the resumable upload URL from ctx.IsTLS() and ctx.Host(). Behind a reverse proxy, this will point clients at the internal host instead of the public one.

Follow the pattern used elsewhere (e.g., session.go) and respect X-Forwarded-Proto and X-Forwarded-Host:

-			// Use the request's host and scheme to build the URL
-			scheme := "http"
-			if ctx.IsTLS() {
-				scheme = "https"
-			}
-			host := string(ctx.Host())
+			// Use forwarded headers when present to build external URL
+			scheme := "http"
+			if xfProto := string(ctx.Request.Header.Peek("X-Forwarded-Proto")); strings.Contains(strings.ToLower(xfProto), "https") {
+				scheme = "https"
+			} else if ctx.IsTLS() {
+				scheme = "https"
+			}
+			host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
+			if host == "" {
+				host = string(ctx.Host())
+			}
 			uploadURL := fmt.Sprintf("%s://%s/genai/upload/v1beta/files/resumable/%s", scheme, host, uploadReq.ResumableSessionID)

1255-1257: Fix route registration order so batch routes win over generic /models/{model:*}.

NewGenAIRouter still registers the generic catch‑all route before batch routes:

routes := CreateGenAIRouteConfigs("/genai")
routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
routes = append(routes, CreateGenAIBatchRouteConfigs("/genai", handlerStore)...)

Given fasthttp/router’s first‑match semantics, a request like:

POST /genai/v1beta/models/gemini-1.5-pro:batchGenerateContent

will match /v1beta/models/{model:*} instead of the batch‑specific route.

Reorder registration:

-func NewGenAIRouter(client *bifrost.Bifrost, handlerStore lib.HandlerStore, logger schemas.Logger) *GenAIRouter {
-	routes := CreateGenAIRouteConfigs("/genai")
-	routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
-	routes = append(routes, CreateGenAIBatchRouteConfigs("/genai", handlerStore)...)
+func NewGenAIRouter(client *bifrost.Bifrost, handlerStore lib.HandlerStore, logger schemas.Logger) *GenAIRouter {
+	routes := CreateGenAIBatchRouteConfigs("/genai", handlerStore)
+	routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
+	routes = append(routes, CreateGenAIRouteConfigs("/genai")...)

so more specific batch routes take precedence over the generic model route.

core/providers/gemini/gemini.go (1)

2147-2149: Fix parseGeminiError call and attach BatchDelete metadata.

parseGeminiError is called with a single argument here, unlike the rest of this file where it receives the response and a *RequestMetadata. This will not compile and also drops provider/request-type context for the error.

Update to pass metadata consistent with other batch methods:

-	// Handle response
-	if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
-		return nil, parseGeminiError(resp)
-	}
+	// Handle response
+	if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
+		return nil, parseGeminiError(resp, &providerUtils.RequestMetadata{
+			Provider:    providerName,
+			RequestType: schemas.BatchDeleteRequest,
+		})
+	}
core/providers/sgl/batch.go (1)

35-38: Use _ for unused parameters to match the other stubs.

This stub matches the pattern of other unsupported batch methods, but its parameters are named and unused; every other unsupported-operation stub in this file uses _ for them.

Align with the existing stubs:

-// BatchDelete is not supported by SGL provider.
-func (provider *SGLProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+// BatchDelete is not supported by SGL provider.
+func (provider *SGLProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/gemini/batch.go (1)

247-373: Defensively compute BatchStats and guard CreateTime to avoid negative counts and 1970 timestamps.

The current converters can emit invalid stats and bogus timestamps:

  • PendingRequestCount = Total - Completed becomes negative when Total == 0 but Completed > 0.
  • SuccessfulRequestCount = Completed - Failed can be negative if Failed > Completed.
  • CreateTime is always set in ToGeminiBatchRetrieveResponse / ToGeminiBatchListResponse, so CreatedAt == 0 produces a 1970 timestamp.

Recommend:

  1. Centralize stats derivation (example helper):
func computeGeminiBatchStats(c schemas.BatchRequestCounts) *GeminiBatchStats {
	total := c.Total
	if total == 0 && (c.Completed > 0 || c.Failed > 0) {
		total = c.Completed + c.Failed
	}
	success := c.Completed - c.Failed
	if success < 0 {
		success = 0
	}
	pending := total - success - c.Failed
	if pending < 0 {
		pending = 0
	}
	return &GeminiBatchStats{
		RequestCount:           total,
		PendingRequestCount:    pending,
		SuccessfulRequestCount: success,
	}
}
  1. Use it and gate CreateTime in the converters:
 func ToGeminiBatchJobResponse(resp *schemas.BifrostBatchCreateResponse) *GeminiBatchJobResponseSDK {
@@
-	if resp.CreatedAt > 0 {
-		result.Metadata = &GeminiBatchMetadata{
-			Name:       resp.ID,
-			State:      ToGeminiJobState(resp.Status),
-			CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
-			BatchStats: &GeminiBatchStats{
-				RequestCount:           resp.RequestCounts.Total,
-				PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-				SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-			},
-		}
-	}
+	if resp.CreatedAt > 0 {
+		result.Metadata = &GeminiBatchMetadata{
+			Name:       resp.ID,
+			State:      ToGeminiJobState(resp.Status),
+			CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
+			BatchStats: computeGeminiBatchStats(resp.RequestCounts),
+		}
+	}
@@
 func ToGeminiBatchRetrieveResponse(resp *schemas.BifrostBatchRetrieveResponse) *GeminiBatchJobResponseSDK {
@@
-	result.Metadata = &GeminiBatchMetadata{
-		Name:       resp.ID,
-		State:      ToGeminiJobState(resp.Status),
-		CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
-		BatchStats: &GeminiBatchStats{
-			RequestCount:           resp.RequestCounts.Total,
-			PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-			SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-		},
-	}
+	result.Metadata = &GeminiBatchMetadata{
+		Name:  resp.ID,
+		State: ToGeminiJobState(resp.Status),
+	}
+	if resp.CreatedAt > 0 {
+		result.Metadata.CreateTime = time.Unix(resp.CreatedAt, 0).Format(time.RFC3339)
+	}
+	result.Metadata.BatchStats = computeGeminiBatchStats(resp.RequestCounts)
@@
 func ToGeminiBatchListResponse(resp *schemas.BifrostBatchListResponse) *GeminiBatchListResponseSDK {
@@
-		job.Metadata = &GeminiBatchMetadata{
-			Name:       batch.ID,
-			State:      ToGeminiJobState(batch.Status),
-			CreateTime: time.Unix(batch.CreatedAt, 0).Format(time.RFC3339),
-			BatchStats: &GeminiBatchStats{
-				RequestCount:           batch.RequestCounts.Total,
-				PendingRequestCount:    batch.RequestCounts.Total - batch.RequestCounts.Completed,
-				SuccessfulRequestCount: batch.RequestCounts.Completed - batch.RequestCounts.Failed,
-			},
-		}
+		job.Metadata = &GeminiBatchMetadata{
+			Name:  batch.ID,
+			State: ToGeminiJobState(batch.Status),
+		}
+		if batch.CreatedAt > 0 {
+			job.Metadata.CreateTime = time.Unix(batch.CreatedAt, 0).Format(time.RFC3339)
+		}
+		job.Metadata.BatchStats = computeGeminiBatchStats(batch.RequestCounts)

This keeps Gemini SDK stats non-negative and avoids spurious 1970 timestamps when Bifrost doesn't have creation times populated.

🧹 Nitpick comments (2)
transports/bifrost-http/integrations/genai.go (1)

670-705: Minor: reuse existing provider extraction helper for batch routes if possible.

extractGeminiBatchCreateParams duplicates the header‑based provider extraction and bifrostCtx update present in other helpers (e.g. extractGeminiFileListQueryParams). Consider factoring a small helper like:

func setProviderFromHeader(ctx *fasthttp.RequestCtx, bifrostCtx *context.Context) schemas.ModelProvider { ... }

and reusing it across extractors to avoid drift.
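A simplified sketch of what such a helper could look like, using a plain header map in place of fasthttp's RequestCtx; the header name x-bf-provider, the fallback behavior, and the helper name are all assumptions for illustration.

```go
package main

import "fmt"

// providerFromHeader centralizes the header-based provider extraction so the
// per-route extractors stop duplicating it. It falls back to a default
// provider when the header is absent or empty.
func providerFromHeader(headers map[string]string, fallback string) string {
	if p := headers["x-bf-provider"]; p != "" {
		return p
	}
	return fallback
}

func main() {
	fmt.Println(providerFromHeader(map[string]string{"x-bf-provider": "vertex"}, "gemini")) // vertex
	fmt.Println(providerFromHeader(map[string]string{}, "gemini"))                          // gemini
}
```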

core/providers/gemini/types.go (1)

1706-1706: Clarify whether ResumableSessionID should be exported or make it unexported.

The ResumableSessionID field is exported (public) but marked as "internal use" in the comment. If this field is truly for internal package use only, it should be unexported (lowercase: resumableSessionID). If it needs to be accessed from other packages, the comment should be updated to explain why it's public.

If the field is truly internal, apply this diff:

-	ResumableSessionID string `json:"-"`        // Session ID for resumable uploads (internal use)
+	resumableSessionID string `json:"-"`        // Session ID for resumable uploads (internal use)

And add getter/setter methods if external access is needed:

// GetResumableSessionID returns the resumable session ID for internal use
func (g *GeminiFileUploadRequest) GetResumableSessionID() string {
	return g.resumableSessionID
}

// SetResumableSessionID sets the resumable session ID for internal use
func (g *GeminiFileUploadRequest) SetResumableSessionID(id string) {
	g.resumableSessionID = id
}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 28810ab and 7cb3901.

📒 Files selected for processing (18)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/cerebras/cerebras.go (1 hunks)
  • core/providers/cohere/cohere.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/gemini.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/ollama.go (1 hunks)
  • core/providers/openrouter/openrouter.go (2 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/schemas/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (13 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • core/providers/anthropic/batch.go
  • core/providers/openrouter/openrouter.go
  • core/schemas/bifrost.go
  • core/providers/mistral/batch.go
  • transports/bifrost-http/integrations/router.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/schemas/batch.go
  • core/schemas/provider.go
  • core/providers/ollama/ollama.go
  • core/bifrost.go
  • core/providers/gemini/batch.go
  • core/providers/sgl/batch.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/parasail/batch.go
  • core/providers/gemini/gemini.go
  • core/providers/cohere/cohere.go
  • core/providers/gemini/types.go
  • core/providers/cerebras/cerebras.go
  • transports/bifrost-http/integrations/utils.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/schemas/batch.go
  • core/schemas/provider.go
  • core/providers/ollama/ollama.go
  • core/bifrost.go
  • core/providers/gemini/batch.go
  • core/providers/sgl/batch.go
  • transports/bifrost-http/integrations/genai.go
  • core/providers/parasail/batch.go
  • core/providers/gemini/gemini.go
  • core/providers/cohere/cohere.go
  • core/providers/gemini/types.go
  • core/providers/cerebras/cerebras.go
  • transports/bifrost-http/integrations/utils.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.

Applied to files:

  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/utils.go
🧬 Code graph analysis (11)
core/schemas/batch.go (2)
core/schemas/provider.go (1)
  • Provider (315-364)
core/schemas/bifrost.go (2)
  • ModelProvider (32-32)
  • BifrostResponseExtraFields (299-310)
core/schemas/provider.go (2)
core/schemas/bifrost.go (1)
  • BatchDeleteRequest (103-103)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/ollama/ollama.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/bifrost.go (3)
core/schemas/bifrost.go (3)
  • BatchDeleteRequest (103-103)
  • BifrostError (370-379)
  • RequestType (83-83)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/provider.go (1)
  • Provider (315-364)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (13)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1627-1627)
  • GeminiJobStateRunning (1628-1628)
  • GeminiJobStateSucceeded (1629-1629)
  • GeminiJobStateFailed (1630-1630)
  • GeminiJobStateCancelling (1631-1631)
  • GeminiJobStateCancelled (1632-1632)
  • GeminiBatchJobResponseSDK (1663-1669)
  • GeminiBatchMetadata (1544-1557)
  • GeminiBatchStats (1478-1482)
  • GeminiBatchDest (1578-1581)
  • GeminiBatchListResponseSDK (1672-1675)
core/providers/sgl/batch.go (4)
core/providers/sgl/sgl.go (1)
  • SGLProvider (18-24)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
transports/bifrost-http/integrations/genai.go (4)
core/schemas/bifrost.go (11)
  • ModelProvider (32-32)
  • Gemini (48-48)
  • FileUploadRequest (104-104)
  • FileListRequest (105-105)
  • FileRetrieveRequest (106-106)
  • FileDeleteRequest (107-107)
  • BatchCreateRequest (98-98)
  • BatchListRequest (99-99)
  • BatchRetrieveRequest (100-100)
  • BatchCancelRequest (101-101)
  • BatchDeleteRequest (103-103)
core/providers/gemini/types.go (10)
  • GeminiFileUploadRequest (1701-1707)
  • Type (781-781)
  • GeminiFileListRequest (1710-1714)
  • GeminiFileRetrieveRequest (1717-1719)
  • GeminiFileDeleteRequest (1722-1724)
  • GeminiBatchCreateRequestSDK (1641-1647)
  • GeminiBatchListRequestSDK (1678-1681)
  • GeminiBatchRetrieveRequestSDK (1684-1686)
  • GeminiBatchCancelRequestSDK (1689-1691)
  • GeminiBatchDeleteRequestSDK (1694-1696)
core/schemas/files.go (1)
  • BifrostFileUploadRequest (53-66)
core/schemas/batch.go (6)
  • BifrostBatchCreateRequest (65-82)
  • BifrostBatchListRequest (112-126)
  • BifrostBatchRetrieveRequest (143-149)
  • BifrostBatchCancelRequest (190-196)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/providers/parasail/batch.go (4)
core/providers/parasail/parasail.go (1)
  • ParasailProvider (17-23)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/gemini.go (4)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (4)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
  • BifrostResponseExtraFields (299-310)
  • RequestType (83-83)
core/providers/utils/utils.go (4)
  • CheckOperationAllowed (472-484)
  • NewBifrostOperationError (563-574)
  • SetExtraHeaders (179-209)
  • MakeRequestWithContext (40-94)
core/schemas/provider.go (1)
  • Provider (315-364)
core/providers/gemini/types.go (1)
core/schemas/models.go (1)
  • Model (109-129)
core/providers/cerebras/cerebras.go (3)
core/schemas/batch.go (2)
  • BifrostBatchDeleteRequest (266-272)
  • BifrostBatchDeleteResponse (275-281)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (14)
core/schemas/provider.go (3)

183-183: LGTM!

The BatchDelete field addition to AllowedRequests follows the established pattern for batch operations.


232-233: LGTM!

The BatchDeleteRequest case handling in IsOperationAllowed is consistent with other batch operation checks.


352-353: LGTM!

The BatchDelete method signature in the Provider interface follows the same pattern as other batch operations (BatchCancel, BatchRetrieve, etc.).

core/schemas/batch.go (1)

265-281: LGTM!

The BifrostBatchDeleteRequest and BifrostBatchDeleteResponse types follow the established patterns for batch operations in this file. The request includes the necessary Provider, BatchID, and ExtraParams fields, while the response appropriately includes ID, Object, Deleted, and ExtraFields.
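A minimal sketch of the two shapes, assuming only the field sets named above (Provider, BatchID, ExtraParams; ID, Object, Deleted); the names, JSON tags, and types here are illustrative — the canonical definitions live in core/schemas/batch.go:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// BatchDeleteRequest mirrors the request shape described above
// (illustrative types; see core/schemas/batch.go for the real ones).
type BatchDeleteRequest struct {
	Provider    string                 `json:"provider"`
	BatchID     string                 `json:"batch_id"`
	ExtraParams map[string]interface{} `json:"extra_params,omitempty"`
}

// BatchDeleteResponse mirrors the response shape: the deleted job's ID,
// an object discriminator, and the deleted flag.
type BatchDeleteResponse struct {
	ID      string `json:"id"`
	Object  string `json:"object"`
	Deleted bool   `json:"deleted"`
}

// marshalDeleteResponse renders the response as the JSON a client would see.
func marshalDeleteResponse(r BatchDeleteResponse) string {
	b, _ := json.Marshal(r)
	return string(b)
}

func main() {
	resp := BatchDeleteResponse{ID: "batches/abc123", Object: "batch", Deleted: true}
	fmt.Println(marshalDeleteResponse(resp)) // {"id":"batches/abc123","object":"batch","deleted":true}
}
```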

core/providers/parasail/batch.go (1)

35-38: LGTM!

The BatchDelete method follows the same pattern as other batch operation stubs in this file (BatchCreate, BatchList, BatchRetrieve, BatchCancel, BatchResults), using named parameters consistently.

transports/bifrost-http/integrations/genai.go (7)

627-664: Return a structured delete result instead of always {}.

For the batch delete route, the converter currently always returns an empty object even though BifrostBatchDeleteResponse includes ID, Deleted, etc. Returning at least the identifier and deleted flag can help clients confirm which job was deleted.

For example:

-		BatchDeleteResponseConverter: func(ctx *context.Context, resp *schemas.BifrostBatchDeleteResponse) (interface{}, error) {
-			if resp.ExtraFields.RawResponse != nil {
-				return resp.ExtraFields.RawResponse, nil
-			}
-			// Return empty object on successful delete
-			return map[string]interface{}{}, nil
-		},
+		BatchDeleteResponseConverter: func(ctx *context.Context, resp *schemas.BifrostBatchDeleteResponse) (interface{}, error) {
+			if resp.ExtraFields.RawResponse != nil {
+				return resp.ExtraFields.RawResponse, nil
+			}
+			return map[string]interface{}{
+				"name":    resp.ID,
+				"deleted": resp.Deleted,
+			}, nil
+		},



753-783: Trim :cancel suffix and normalize batch IDs consistently.

extractGeminiBatchIDFromPathCancel correctly strips :cancel and ensures a batches/ prefix, but similar normalization for non‑cancel paths lives in extractGeminiBatchIDFromPath. Ensure both helpers are used consistently in the route configs so all Gemini batch operations work with either bare IDs or fully‑qualified batches/... names. Current usage looks correct, just calling out the intentional behavior.
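The normalization described above can be sketched as a single helper (hypothetical name; the real logic is split across the two extractors in genai.go): strip a trailing `:cancel` action suffix, then ensure the `batches/` resource prefix, so either a bare ID or a fully-qualified name yields the same canonical form.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeGeminiBatchID illustrates the behavior called out above:
// accept "abc123", "abc123:cancel", or "batches/abc123" and always
// return the canonical "batches/<id>" resource name.
func normalizeGeminiBatchID(id string) string {
	id = strings.TrimSuffix(id, ":cancel")
	if !strings.HasPrefix(id, "batches/") {
		id = "batches/" + id
	}
	return id
}

func main() {
	fmt.Println(normalizeGeminiBatchID("abc123:cancel"))  // batches/abc123
	fmt.Println(normalizeGeminiBatchID("batches/abc123")) // batches/abc123
	fmt.Println(normalizeGeminiBatchID("abc123"))         // batches/abc123
}
```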


942-988: Multipart upload: good handling of metadata; small nit on filename fallback.

The standard multipart/form-data path is fine and uses the metadata JSON to set uploadReq.Filename when present, with a sensible fallback to fileHeader.Filename. No issues here.


990-1015: Raw upload path is reasonable (copies body and optionally derives filename).

The raw upload handler clones the body into a fresh slice and attempts to infer Filename from Content-Disposition. This is fine; just be aware that some clients may omit Content-Disposition, in which case leaving Filename empty is expected.


1175-1183: Provider extraction for file list is consistent; no issues.

extractGeminiFileListQueryParams correctly sets the provider from x-model-provider (defaulting to Gemini) and then maps pageSize and pageToken into the Gemini SDK request. Looks good.


1201-1225: File retrieve: good parameter validation and provider handling.

extractGeminiFileRetrieveParams validates file_id, sets provider from header, and plumbs the ID into GeminiFileRetrieveRequest. No concerns.


1227-1248: File delete: mirrors retrieve behavior correctly.

extractGeminiFileDeleteParams mirrors the retrieve extractor: validates file_id, sets provider, and fills GeminiFileDeleteRequest. Implementation is consistent and fine.

core/providers/gemini/types.go (2)

1623-1634: LGTM! Google GenAI SDK job state constants are well-defined.

The constants follow consistent naming and align with the SDK's expected job states. The distinction from the internal batch states (BATCH_STATE_* vs JOB_STATE_*) is appropriately documented.


1656-1659: No actionable changes needed. GeminiBatchInlineConfig intentionally supports only ResponseModalities because it represents the Google GenAI Python SDK's batch API format, not Bifrost's internal format. This is confirmed by explicit comments (lines 1637, 1662) stating these types match the SDK format. GeminiBatchGenerateContentRequest is for Bifrost's internal file-based batch format and uses the full GenerationConfig, while inline batch requests use the SDK's limited config schema with ResponseModalities as a []string (matching the SDK contract) rather than the internally-typed []Modality.

Comment on lines +869 to +872
// BatchDelete is not supported by Cohere provider.
func (provider *CohereProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
}

⚠️ Potential issue | 🔴 Critical

Fix unused parameters in Cohere BatchDelete stub.

ctx, key, and request are declared but unused. Unused function parameters are legal in Go (unlike unused local variables), but linters flag them and every other unsupported stub here uses _. Match the surrounding unsupported methods:

-// BatchDelete is not supported by Cohere provider.
-func (provider *CohereProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+// BatchDelete is not supported by Cohere provider.
+func (provider *CohereProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// BatchDelete is not supported by Cohere provider.
func (provider *CohereProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
}
// BatchDelete is not supported by Cohere provider.
func (provider *CohereProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
}
🤖 Prompt for AI Agents
In core/providers/cohere/cohere.go around lines 869-872, the BatchDelete stub
declares named parameters ctx, key, and request but does not use them, which
trips linters and diverges from local convention; change the signature to use blank
identifiers (e.g., func (provider *CohereProvider) BatchDelete(_
context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) ...) so
the parameters are ignored, keeping the same return values and body (returning
the unsupported operation error) to match surrounding unsupported methods.

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 7cb3901 to 160c8e7 Compare December 15, 2025 05:49
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (16)
core/schemas/bifrost.go (1)

85-109: Align UI RequestType union with new BatchDeleteRequest.

The new BatchDeleteRequest = "batch_delete" fits the existing enum, but ui/lib/types/config.ts::RequestType (snippet provided) still doesn’t list "batch_delete". Please ensure the UI union is extended in this stack so front‑end config and exhaustive checks stay in sync with the backend request types.
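A sketch of the extended union; only `"batch_delete"` comes from this PR — the other member names below are placeholders, not the real contents of ui/lib/types/config.ts:

```typescript
// Placeholder members stand in for the existing union; only
// "batch_delete" is the addition this comment asks for.
type RequestType =
  | "chat_completion"
  | "batch_create"
  | "batch_cancel"
  | "batch_delete"; // new member mirroring schemas.BatchDeleteRequest

// An exhaustive switch over the union fails to compile if a member
// is forgotten, which is how the "exhaustive checks" stay in sync.
function label(rt: RequestType): string {
  switch (rt) {
    case "chat_completion": return "chat";
    case "batch_create": return "create";
    case "batch_cancel": return "cancel";
    case "batch_delete": return "delete";
  }
}

console.assert(label("batch_delete") === "delete");
```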

transports/bifrost-http/integrations/utils.go (1)

188-197: Remove fmt.Printf that logs full JSON body from sendSuccess.

Printing the entire responseBody in sendSuccess risks leaking PII, tokens, or other sensitive payloads to logs and can bloat log volume, especially for batch or file responses. The explicit Content-Length header is fine; the raw-body debug print should go.

Recommended change:

-	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
-	ctx.SetBody(responseBody)
-	fmt.Printf("[DEBUG] sendSuccess: status=200, contentLen=%d, body=%s\n", len(responseBody), string(responseBody))
+	ctx.Response.Header.Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
+	ctx.SetBody(responseBody)

Use your structured logger with status/length only if you still need observability.

core/providers/ollama/ollama.go (1)

251-254: Make BatchDelete stub match _-param pattern to avoid unused-parameter issues.

ctx, key, and request are unused here, while other unsupported batch/file methods already use _ to satisfy Go’s unused-parameter rules and keep style consistent. Please align this method:

-// BatchDelete is not supported by Ollama provider.
-func (provider *OllamaProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+// BatchDelete is not supported by Ollama provider.
+func (provider *OllamaProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/providers/cohere/cohere.go (1)

869-872: Match Cohere BatchDelete stub to _-param style used by other batch methods.

This stub currently declares ctx, key, and request but doesn’t use them, unlike the other unsupported batch/file methods which use _ to avoid unused-parameter problems.

Apply:

-// BatchDelete is not supported by Cohere provider.
-func (provider *CohereProvider) BatchDelete(ctx context.Context, key schemas.Key, request *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
+// BatchDelete is not supported by Cohere provider.
+func (provider *CohereProvider) BatchDelete(_ context.Context, _ schemas.Key, _ *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
 	return nil, providerUtils.NewUnsupportedOperationError(schemas.BatchDeleteRequest, provider.GetProviderKey())
 }
core/bifrost.go (1)

1263-1339: Populate ExtraFields.RequestType/Provider on all early BatchDeleteRequest errors.

The core flow and retry wiring look good, but most early returns (nil req, missing provider/batch_id, provider not found, config/key errors) don’t set ExtraFields, so clients can’t reliably correlate these failures by request type/provider. Only the post‑executeRequestWithRetries error is enriched.

To align with BatchResultsRequest and prior feedback, set ExtraFields on all error paths in this method. For example:

func (bifrost *Bifrost) BatchDeleteRequest(ctx context.Context, req *schemas.BifrostBatchDeleteRequest) (*schemas.BifrostBatchDeleteResponse, *schemas.BifrostError) {
	if req == nil {
-		return nil, &schemas.BifrostError{
+		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch delete request is nil",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
	if req.Provider == "" {
-		return nil, &schemas.BifrostError{
+		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "provider is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+			},
 		}
 	}
	if req.BatchID == "" {
-		return nil, &schemas.BifrostError{
+		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "batch_id is required for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			},
 		}
 	}
	…
	provider := bifrost.getProviderByKey(req.Provider)
	if provider == nil {
-		return nil, &schemas.BifrostError{
+		return nil, &schemas.BifrostError{
 			IsBifrostError: false,
 			Error: &schemas.ErrorField{
 				Message: "provider not found for batch delete request",
 			},
+			ExtraFields: schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			},
 		}
 	}
-
-	config, err := bifrost.account.GetConfigForProvider(req.Provider)
-	if err != nil {
-		return nil, newBifrostErrorFromMsg(fmt.Sprintf("failed to get config for provider %s: %v", req.Provider, err.Error()))
-	}
-	if config == nil {
-		return nil, newBifrostErrorFromMsg(fmt.Sprintf("config is nil for provider %s", req.Provider))
-	}
+	config, err := bifrost.account.GetConfigForProvider(req.Provider)
+	if err != nil {
+		bErr := newBifrostErrorFromMsg(fmt.Sprintf("failed to get config for provider %s: %v", req.Provider, err.Error()))
+		bErr.ExtraFields = schemas.BifrostErrorExtraFields{
+			RequestType: schemas.BatchDeleteRequest,
+			Provider:    req.Provider,
+		}
+		return nil, bErr
+	}
+	if config == nil {
+		bErr := newBifrostErrorFromMsg(fmt.Sprintf("config is nil for provider %s", req.Provider))
+		bErr.ExtraFields = schemas.BifrostErrorExtraFields{
+			RequestType: schemas.BatchDeleteRequest,
+			Provider:    req.Provider,
+		}
+		return nil, bErr
+	}
	…
	if providerRequiresKey(baseProvider, config.CustomProviderConfig) {
		keys, keyErr := bifrost.getAllSupportedKeys(&ctx, req.Provider, baseProvider)
		if keyErr != nil {
-			return nil, newBifrostError(keyErr)
+			bErr := newBifrostError(keyErr)
+			bErr.ExtraFields = schemas.BifrostErrorExtraFields{
+				RequestType: schemas.BatchDeleteRequest,
+				Provider:    req.Provider,
+			}
+			return nil, bErr
		}

The existing enrichment after executeRequestWithRetries can stay as-is.

transports/bifrost-http/integrations/router.go (2)

868-898: Remove fmt.Printf debug logging from file handlers (use structured logger or drop).

There are several raw fmt.Printf("[DEBUG] ...") calls in the file upload path and error handling:

  • Lines 873, 880–881, 884, 886–887, 892, 895, 897 (FileUploadRequest flow).
  • Line 1022 (file response conversion error).

These should not ship to production; use g.logger.Debug(...) (with structured fields) if you need persistent debug logs, or remove them entirely:

-		fmt.Printf("[DEBUG] router: calling FileUploadRequest for provider=%s, purpose=%s, filename=%s\n", fileReq.UploadRequest.Provider, fileReq.UploadRequest.Purpose, fileReq.UploadRequest.Filename)
@@
-			errMsg := "unknown error"
-			if bifrostErr.Error != nil {
-				errMsg = bifrostErr.Error.Message
-			}
-			fmt.Printf("[DEBUG] router: FileUploadRequest error: %s (provider=%s)\n", errMsg, fileReq.UploadRequest.Provider)
@@
-		fmt.Printf("[DEBUG] router: FileUploadRequest success, response ID=%s\n", fileResponse.ID)
@@
-			fmt.Printf("[DEBUG] router: calling PostCallback\n")
@@
-				fmt.Printf("[DEBUG] router: PostCallback error: %v\n", err)
@@
-			fmt.Printf("[DEBUG] router: PostCallback success\n")
@@
-			fmt.Printf("[DEBUG] router: calling FileUploadResponseConverter\n")
@@
-			fmt.Printf("[DEBUG] router: FileUploadResponseConverter done, err=%v\n", err)
@@
-	if err != nil {
-		fmt.Printf("[DEBUG] router: file response conversion error: %v\n", err)
+	if err != nil {

Also applies to: 1021-1023


425-432: Call cancel() before returning on ErrResumableUploadInit to clean up context.

When PreCallback returns ErrResumableUploadInit, the handler returns without ever invoking cancel() from ConvertToBifrostContext, leaking the cancellable context on this path.

You can fix it inline:

		if config.PreCallback != nil {
			if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
				// Check if this is a resumable upload init that was already handled
				if err == ErrResumableUploadInit {
-					// Response was already written by the PreCallback, just return
-					return
+					// Response was already written by the PreCallback; clean up context and return
+					cancel()
+					return
				}
core/providers/gemini/gemini.go (1)

2146-2149: Fix parseGeminiError call to include metadata (current code won’t compile).

This call site passes only resp, but parseGeminiError elsewhere is always invoked with (*fasthttp.Response, *providerUtils.RequestMetadata). As written, this will not compile and also drops provider/request-type metadata for errors on batch delete.

Consider:

-	if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
-		return nil, parseGeminiError(resp)
-	}
+	if resp.StatusCode() != fasthttp.StatusOK && resp.StatusCode() != fasthttp.StatusNoContent {
+		return nil, parseGeminiError(resp, &providerUtils.RequestMetadata{
+			Provider:    providerName,
+			RequestType: schemas.BatchDeleteRequest,
+		})
+	}
core/providers/gemini/batch.go (1)

247-293: Defensively compute batch stats and avoid emitting 1970 timestamps.

The SDK converters currently:

  • Use RequestCount: resp.RequestCounts.Total and PendingRequestCount: Total - Completed, SuccessfulRequestCount: Completed - Failed.
  • Unconditionally format CreateTime for retrieve/list responses via time.Unix(...).Format(...).

This can yield:

  • Negative PendingRequestCount / SuccessfulRequestCount when Total is unset (0) but Completed / Failed are non-zero, or when counts are inconsistent.
  • A misleading 1970-01-01T00:00:00Z CreateTime when CreatedAt == 0.

Consider centralizing a helper that derives non-negative, consistent stats and only setting CreateTime when CreatedAt > 0, then using it in all three converters. For example (sketch):

+func deriveGeminiBatchStats(c schemas.BatchRequestCounts) *GeminiBatchStats {
+	total := c.Total
+	if total == 0 && (c.Completed > 0 || c.Failed > 0 || c.Pending > 0) {
+		total = c.Completed + c.Failed + c.Pending
+	}
+	if total < 0 {
+		total = 0
+	}
+	pending := total - c.Completed - c.Failed
+	if pending < 0 {
+		pending = 0
+	}
+	success := c.Completed - c.Failed
+	if success < 0 {
+		success = 0
+	}
+	return &GeminiBatchStats{
+		RequestCount:           total,
+		PendingRequestCount:    pending,
+		SuccessfulRequestCount: success,
+	}
+}
@@ func ToGeminiBatchJobResponse(...) {
-	if resp.CreatedAt > 0 {
-		result.Metadata = &GeminiBatchMetadata{
-			Name:       resp.ID,
-			State:      ToGeminiJobState(resp.Status),
-			CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
-			BatchStats: &GeminiBatchStats{
-				RequestCount:           resp.RequestCounts.Total,
-				PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-				SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-			},
-		}
-	}
+	if resp.CreatedAt > 0 {
+		result.Metadata = &GeminiBatchMetadata{
+			Name:       resp.ID,
+			State:      ToGeminiJobState(resp.Status),
+			CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
+			BatchStats: deriveGeminiBatchStats(resp.RequestCounts),
+		}
+	}
@@ func ToGeminiBatchRetrieveResponse(...) {
-	result.Metadata = &GeminiBatchMetadata{
-		Name:       resp.ID,
-		State:      ToGeminiJobState(resp.Status),
-		CreateTime: time.Unix(resp.CreatedAt, 0).Format(time.RFC3339),
-		BatchStats: &GeminiBatchStats{
-			RequestCount:           resp.RequestCounts.Total,
-			PendingRequestCount:    resp.RequestCounts.Total - resp.RequestCounts.Completed,
-			SuccessfulRequestCount: resp.RequestCounts.Completed - resp.RequestCounts.Failed,
-		},
-	}
+	result.Metadata = &GeminiBatchMetadata{
+		Name:  resp.ID,
+		State: ToGeminiJobState(resp.Status),
+	}
+	if resp.CreatedAt > 0 {
+		result.Metadata.CreateTime = time.Unix(resp.CreatedAt, 0).Format(time.RFC3339)
+	}
+	result.Metadata.BatchStats = deriveGeminiBatchStats(resp.RequestCounts)
@@ func ToGeminiBatchListResponse(...) {
-		job.Metadata = &GeminiBatchMetadata{
-			Name:       batch.ID,
-			State:      ToGeminiJobState(batch.Status),
-			CreateTime: time.Unix(batch.CreatedAt, 0).Format(time.RFC3339),
-			BatchStats: &GeminiBatchStats{
-				RequestCount:           batch.RequestCounts.Total,
-				PendingRequestCount:    batch.RequestCounts.Total - batch.RequestCounts.Completed,
-				SuccessfulRequestCount: batch.RequestCounts.Completed - batch.RequestCounts.Failed,
-			},
-		}
+		job.Metadata = &GeminiBatchMetadata{
+			Name:  batch.ID,
+			State: ToGeminiJobState(batch.Status),
+		}
+		if batch.CreatedAt > 0 {
+			job.Metadata.CreateTime = time.Unix(batch.CreatedAt, 0).Format(time.RFC3339)
+		}
+		job.Metadata.BatchStats = deriveGeminiBatchStats(batch.RequestCounts)

Also applies to: 300-321, 337-359

transports/bifrost-http/integrations/genai.go (7)

45-50: Handle crypto/rand.Read error in generateSessionID (security/robustness).

rand.Read can fail; ignoring the error risks weak or repeated IDs if the random source is unavailable. Better to handle the error and fall back or surface an error.

For example:

 func generateSessionID() string {
 	bytes := make([]byte, 16)
-	rand.Read(bytes)
-	return hex.EncodeToString(bytes)
+	if _, err := rand.Read(bytes); err != nil {
+		// Fallback: time-based ID (should be extremely rare)
+		return fmt.Sprintf("%d-%d", time.Now().UnixNano(), time.Now().UnixNano())
+	}
+	return hex.EncodeToString(bytes)
 }

52-68: Cleanup goroutine runs forever with no stop signal or ticker.Stop().

The init()-spawned goroutine loops on ticker.C indefinitely and never stops the ticker. This makes graceful shutdown harder (goroutine keeps running) and slightly leaks resources.

Consider adding a package-level stop channel or context and stopping the ticker:

+var uploadSessionCleanupStop = make(chan struct{})
+
 func init() {
 	go func() {
 		ticker := time.NewTicker(10 * time.Minute)
-		for range ticker.C {
+		defer ticker.Stop()
+		for {
+			select {
+			case <-ticker.C:
+				now := time.Now()
+				uploadSessions.Range(func(key, value interface{}) bool {
+					if session, ok := value.(*uploadSession); ok {
+						if now.Sub(session.CreatedAt) > time.Hour {
+							uploadSessions.Delete(key)
+						}
+					}
+					return true
+				})
+			case <-uploadSessionCleanupStop:
+				return
+			}
 		}
-			now := time.Now()
-			uploadSessions.Range(func(key, value interface{}) bool {
-				if session, ok := value.(*uploadSession); ok {
-					if now.Sub(session.CreatedAt) > time.Hour {
-						uploadSessions.Delete(key)
-					}
-				}
-				return true
-			})
-		}
 	}()
}

and expose a shutdown hook to close uploadSessionCleanupStop on process exit.


34-35: Avoid mutating uploadSession structs after storing them in sync.Map (data race risk).

uploadSessions stores *uploadSession in a sync.Map, but later code mutates the struct fields:

  • In extractGeminiFileUploadParams, session.Provider is assigned after Store.
  • Cleanup goroutine reads session.CreatedAt concurrently.
  • extractGeminiResumableUploadParams also reads fields and deletes the entry.

Mutating shared structs through *uploadSession while other goroutines read them can introduce data races even though the map itself is concurrency-safe. Treat uploadSession values as immutable after storing:

-			if sessionVal, ok := uploadSessions.Load(uploadReq.ResumableSessionID); ok {
-				if session, ok := sessionVal.(*uploadSession); ok {
-					session.Provider = schemas.ModelProvider(provider)
-				}
-			}
+			if sessionVal, ok := uploadSessions.Load(uploadReq.ResumableSessionID); ok {
+				if session, ok := sessionVal.(*uploadSession); ok {
+					updated := *session
+					updated.Provider = schemas.ModelProvider(provider)
+					uploadSessions.Store(uploadReq.ResumableSessionID, &updated)
+				}
+			}

and keep all other code (cleanup, readers) read-only on the stored value.


Also applies to: 846-856, 1100-1112


448-488: Handle unexpected sdkReq.Src types in batch create to fail fast.

BatchCreateRequestConverter switches on sdkReq.Src for string and []interface{} but silently ignores any other type, leaving both InputFileID and Requests unset. That will cause confusing downstream errors.

Add a default case that returns a clear error:

		switch src := sdkReq.Src.(type) {
		case string:
@@
		case []interface{}:
@@
-		}
+		default:
+			return nil, fmt.Errorf("invalid src type: expected string or array, got %T", src)
+		}

1255-1258: Register batch routes before the generic /models/{model:*} route to avoid catch‑all hijacking.

NewGenAIRouter currently builds routes as:

routes := CreateGenAIRouteConfigs("/genai")                 // generic /v1beta/models/{model:*}
routes = append(routes, CreateGenAIFileRouteConfigs(...)...)
routes = append(routes, CreateGenAIBatchRouteConfigs(...)...)

With fasthttp/router’s first-match semantics, the generic /v1beta/models/{model:*} handler will match paths like /v1beta/models/gemini-1.5-pro:batchGenerateContent before the more specific batch route (/v1beta/models/{model}:batchGenerateContent), so batch requests will be misrouted.

Reorder registration so batch routes are added first:

-func NewGenAIRouter(client *bifrost.Bifrost, handlerStore lib.HandlerStore, logger schemas.Logger) *GenAIRouter {
-	routes := CreateGenAIRouteConfigs("/genai")
-	routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
-	routes = append(routes, CreateGenAIBatchRouteConfigs("/genai", handlerStore)...)
+func NewGenAIRouter(client *bifrost.Bifrost, handlerStore lib.HandlerStore, logger schemas.Logger) *GenAIRouter {
+	routes := CreateGenAIBatchRouteConfigs("/genai", handlerStore)
+	routes = append(routes, CreateGenAIFileRouteConfigs("/genai", handlerStore)...)
+	routes = append(routes, CreateGenAIRouteConfigs("/genai")...)

Also applies to: 448-452


1060-1063: Avoid hardcoding file Purpose to "batch" and support reverse‑proxy headers when building resumable upload URL.

Two related concerns:

  1. parseGeminiResumableUploadPhase2 currently sets every resumable upload’s purpose to "batch":
uploadReq.Purpose = "batch"

If this endpoint is reused for non-batch uploads in the future, that default may be wrong. Prefer:

  • Reading purpose from phase‑1 metadata or request headers, or
  • At least allowing it to be overridden upstream instead of hardcoding.
  2. extractGeminiFileUploadParams builds the phase‑2 upload URL using ctx.IsTLS() and ctx.Host() only, ignoring X-Forwarded-Proto / X-Forwarded-Host. Behind a reverse proxy this yields internal URLs that clients cannot reach.

A sketch for more proxy‑aware URL building:

-	scheme := "http"
-	if ctx.IsTLS() {
-		scheme = "https"
-	}
-	host := string(ctx.Host())
+	scheme := "http"
+	if xfProto := string(ctx.Request.Header.Peek("X-Forwarded-Proto")); strings.Contains(strings.ToLower(xfProto), "https") {
+		scheme = "https"
+	} else if ctx.IsTLS() {
+		scheme = "https"
+	}
+	host := string(ctx.Request.Header.Peek("X-Forwarded-Host"))
+	if host == "" {
+		host = string(ctx.Host())
+	}

and consider threading a configurable purpose into uploadReq.Purpose from metadata rather than hardcoding "batch".


Also applies to: 1121-1168


786-871: Strip or replace fmt.Printf debug logging in GenAI file/resumable paths.

There are many [DEBUG] fmt.Printf calls in the new GenAI integration:

  • parseGeminiFileUploadRequest, parseGeminiResumableUpload, parseGeminiResumableUploadPhase2.
  • setResumableUploadFinalStatus.
  • extractGeminiResumableUploadParams, extractGeminiFileUploadParams.

These:

  • Spam stdout in production.
  • May log request bodies / filenames / provider IDs (potentially sensitive).
  • Add overhead (e.g., marshaling full responses just for debug output).

Before merging, either remove them or replace with the structured logger at an appropriate level (e.g., logger.Debug) and avoid logging full bodies where not strictly necessary.

Example cleanup:

-	fmt.Printf("[DEBUG] parseGeminiResumableUpload: contentType=%s, bodyLen=%d\n", contentType, len(body))
@@
-	fmt.Printf("[DEBUG] setResumableUploadFinalStatus: set X-Goog-Upload-Status=final, response body=%s\n", string(respJSON))
-	fmt.Printf("[DEBUG] setResumableUploadFinalStatus: status code=%d\n", ctx.Response.StatusCode())
@@
-	fmt.Printf("[DEBUG] extractGeminiResumableUploadParams: sessionID=%s, provider=%s, filename=%s\n", sessionIDStr, provider, originalFilename)
@@
-	fmt.Printf("[DEBUG] extractGeminiFileUploadParams: provider=%s\n", provider)

Also applies to: 818-842, 1019-1070, 1072-1083, 1114-1117, 1131-1169

🧹 Nitpick comments (1)
core/providers/gemini/gemini.go (1)

2122-2129: Optionally return the normalized batch ID you actually deleted.

The URL normalizes batchID to "batches/<id>", but the response still returns ID: request.BatchID. For consistency with BatchCreate / BatchRetrieve (which use the canonical resource name), you may want to:

-	batchID := request.BatchID
+	batchID := request.BatchID
 	var url string
 	if strings.HasPrefix(batchID, "batches/") {
 		url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, batchID)
 	} else {
-		url = fmt.Sprintf("%s/batches/%s", provider.networkConfig.BaseURL, batchID)
+		batchID = "batches/" + batchID
+		url = fmt.Sprintf("%s/%s", provider.networkConfig.BaseURL, batchID)
 	}
@@
-		ID:      request.BatchID,
+		ID:      batchID,

Also applies to: 2151-2157

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7cb3901 and 160c8e7.

📒 Files selected for processing (17)
  • core/bifrost.go (1 hunks)
  • core/providers/anthropic/batch.go (1 hunks)
  • core/providers/cerebras/cerebras.go (1 hunks)
  • core/providers/cohere/cohere.go (1 hunks)
  • core/providers/gemini/batch.go (1 hunks)
  • core/providers/gemini/gemini.go (1 hunks)
  • core/providers/gemini/types.go (1 hunks)
  • core/providers/mistral/batch.go (1 hunks)
  • core/providers/ollama/ollama.go (1 hunks)
  • core/providers/openrouter/openrouter.go (2 hunks)
  • core/providers/parasail/batch.go (1 hunks)
  • core/providers/sgl/batch.go (1 hunks)
  • core/schemas/bifrost.go (1 hunks)
  • core/schemas/provider.go (3 hunks)
  • transports/bifrost-http/integrations/genai.go (13 hunks)
  • transports/bifrost-http/integrations/router.go (7 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • core/providers/anthropic/batch.go
  • core/providers/cerebras/cerebras.go
  • core/providers/parasail/batch.go
  • core/schemas/provider.go
  • core/providers/sgl/batch.go
  • core/providers/gemini/types.go
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)

Files:

  • core/schemas/bifrost.go
  • core/providers/mistral/batch.go
  • core/bifrost.go
  • core/providers/cohere/cohere.go
  • core/providers/gemini/gemini.go
  • transports/bifrost-http/integrations/utils.go
  • core/providers/gemini/batch.go
  • core/providers/openrouter/openrouter.go
  • core/providers/ollama/ollama.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/router.go
🧠 Learnings (2)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.

Applied to files:

  • core/schemas/bifrost.go
  • core/providers/mistral/batch.go
  • core/bifrost.go
  • core/providers/cohere/cohere.go
  • core/providers/gemini/gemini.go
  • transports/bifrost-http/integrations/utils.go
  • core/providers/gemini/batch.go
  • core/providers/openrouter/openrouter.go
  • core/providers/ollama/ollama.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/router.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.

Applied to files:

  • transports/bifrost-http/integrations/utils.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/router.go
🧬 Code graph analysis (8)
core/schemas/bifrost.go (1)
ui/lib/types/config.ts (1)
  • RequestType (122-144)
core/providers/mistral/batch.go (3)
core/providers/mistral/mistral.go (1)
  • MistralProvider (17-23)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/cohere/cohere.go (2)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/gemini/gemini.go (3)
core/schemas/bifrost.go (4)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
  • BifrostResponseExtraFields (299-310)
  • RequestType (83-83)
core/providers/utils/utils.go (4)
  • CheckOperationAllowed (472-484)
  • NewBifrostOperationError (563-574)
  • SetExtraHeaders (179-209)
  • MakeRequestWithContext (40-94)
core/schemas/provider.go (1)
  • Provider (315-364)
core/providers/gemini/batch.go (2)
core/schemas/batch.go (13)
  • BatchStatus (5-5)
  • BatchStatusValidating (8-8)
  • BatchStatusInProgress (10-10)
  • BatchStatusFinalizing (11-11)
  • BatchStatusCompleted (12-12)
  • BatchStatusFailed (9-9)
  • BatchStatusCancelling (14-14)
  • BatchStatusCancelled (15-15)
  • BatchStatusExpired (13-13)
  • BifrostBatchCreateResponse (85-109)
  • BifrostBatchRetrieveResponse (152-187)
  • BifrostBatchListResponse (129-140)
  • BifrostBatchCancelResponse (199-208)
core/providers/gemini/types.go (11)
  • GeminiJobStatePending (1627-1627)
  • GeminiJobStateRunning (1628-1628)
  • GeminiJobStateSucceeded (1629-1629)
  • GeminiJobStateFailed (1630-1630)
  • GeminiJobStateCancelling (1631-1631)
  • GeminiJobStateCancelled (1632-1632)
  • GeminiBatchJobResponseSDK (1663-1669)
  • GeminiBatchMetadata (1544-1557)
  • GeminiBatchStats (1478-1482)
  • GeminiBatchDest (1578-1581)
  • GeminiBatchListResponseSDK (1672-1675)
core/providers/openrouter/openrouter.go (2)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
core/providers/ollama/ollama.go (4)
core/schemas/account.go (1)
  • Key (8-17)
core/schemas/bifrost.go (2)
  • BifrostError (370-379)
  • BatchDeleteRequest (103-103)
ui/lib/types/logs.ts (1)
  • BifrostError (226-232)
core/providers/utils/utils.go (1)
  • NewUnsupportedOperationError (455-466)
transports/bifrost-http/integrations/router.go (3)
transports/bifrost-http/integrations/genai.go (1)
  • ErrResumableUploadInit (37-37)
core/schemas/bifrost.go (2)
  • BatchDeleteRequest (103-103)
  • FileUploadRequest (104-104)
core/schemas/provider.go (1)
  • Provider (315-364)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (2)
core/providers/mistral/batch.go (1)

35-38: Review comment is factually incorrect.

The review claims that "other unsupported stubs in this file" follow the _ pattern and suggests this is the established convention. However, examining core/providers/mistral/batch.go shows that:

  1. All other batch stubs in the same file (BatchCreate, BatchList, BatchRetrieve, BatchCancel, BatchResults at lines 11, 16, 21, 26, 31) use named parameters without blanking them.

  2. Across the codebase, most unsupported batch methods in other providers (Anthropic, Parasail, SGL's BatchDelete) also use named parameters, not _.

  3. Go does not enforce unused parameter rules—only unused local variables. Named parameters that are unused are valid Go style.

The suggestion to use _ may be reasonable for consistency within a particular style guide, but the factual basis claimed in the review (that existing stubs follow this pattern) is incorrect.

Likely an incorrect or invalid review comment.
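The compiler behavior the rebuttal relies on is easy to demonstrate. In this sketch (the stub names and error type are illustrative, not the provider's actual signatures), both the named-parameter and blank-parameter forms compile, because Go rejects unused local variables but not unused parameters:

```go
package main

import "fmt"

// stubError stands in for the provider's error type.
type stubError struct{ msg string }

// batchDeleteNamed leaves its parameters named but unused, like the
// existing batch stubs in the file; this is valid Go.
func batchDeleteNamed(ctx interface{}, batchID string) *stubError {
	return &stubError{msg: "batch delete is not supported"}
}

// batchDeleteBlank uses the blank identifier instead; equally valid,
// just a different style choice.
func batchDeleteBlank(_ interface{}, _ string) *stubError {
	return &stubError{msg: "batch delete is not supported"}
}

func main() {
	fmt.Println(batchDeleteNamed(nil, "batches/x").msg)
	fmt.Println(batchDeleteBlank(nil, "batches/x").msg)
}
```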

transports/bifrost-http/integrations/router.go (1)

714-845: BatchDelete switch branch in handleBatchRequest looks consistent with other batch handlers.

The new schemas.BatchDeleteRequest case correctly validates DeleteRequest, calls BatchDeleteRequest, respects PostCallback, and applies BatchDeleteResponseConverter when present, mirroring patterns used for other batch operations.
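The dispatch pattern described above can be sketched with a type switch. The request types and return values here are stand-ins for the `schemas.*` types, not the router's real signatures; the point is how a delete branch validates its input before dispatching, alongside the other batch operations:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical request types standing in for schemas.BatchDeleteRequest etc.
type batchDeleteRequest struct{ BatchID string }
type batchRetrieveRequest struct{ BatchID string }

// handleBatchRequest dispatches on the concrete request type, validating
// required fields in each branch before "calling the provider".
func handleBatchRequest(req interface{}) (string, error) {
	switch r := req.(type) {
	case *batchDeleteRequest:
		if r.BatchID == "" {
			return "", errors.New("batch id is required")
		}
		return "deleted " + r.BatchID, nil
	case *batchRetrieveRequest:
		if r.BatchID == "" {
			return "", errors.New("batch id is required")
		}
		return "retrieved " + r.BatchID, nil
	default:
		return "", errors.New("unsupported batch operation")
	}
}

func main() {
	out, err := handleBatchRequest(&batchDeleteRequest{BatchID: "batches/abc"})
	fmt.Println(out, err)
}
```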

@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 160c8e7 to 202b205 Compare December 15, 2025 06:12
@akshaydeo akshaydeo changed the base branch from 11-28-batch-apis to graphite-base/1001 December 15, 2025 06:13
@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from 202b205 to ceff9af Compare December 15, 2025 06:17
@graphite-app graphite-app bot changed the base branch from graphite-base/1001 to main December 15, 2025 06:18
@akshaydeo akshaydeo force-pushed the 12-04-gemini-sdk-batch-support branch from ceff9af to 4f5a21c Compare December 15, 2025 06:18