batch config for each key #1104
Summary by CodeRabbit

📝 Walkthrough

Adds multi-key support for batch and file APIs: provider interface methods now accept `[]Key`; core selects/filters multiple keys and orchestrates serial per-key pagination; new serial cursor/helper types are added; DB, migrations, UI, tests, and providers are updated to expose `UseForBatchAPI` and Bedrock batch S3 configuration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Bifrost as BifrostCore
    participant ConfigDB as ConfigStore/DB
    participant Provider
    Client->>Bifrost: Batch/File request (provider, optional model, cursor)
    Bifrost->>ConfigDB: Fetch provider keys (includes UseForBatchAPI, BatchS3Config)
    ConfigDB-->>Bifrost: Keys[]
    Bifrost->>Bifrost: filter/select keys (model compatibility, UseForBatchAPI, overrides)
    Bifrost->>Provider: Provider.FileList/BatchList(keys[], request)
    Provider-->>Bifrost: per-key responses (data, native cursors, latency, errors)
    Bifrost-->>Client: Aggregated response (data, latency, serial nextCursor or error)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–90 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (2 inconclusive), ✅ Passed checks (3 passed)
This stack of pull requests is managed by Graphite. Learn more about stacking.

🧪 Test Suite Available: This PR can be tested by a repository admin.
2011bb9 to 9bc94c1
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (15)
tests/integrations/tests/test_bedrock.py (1)
1243-1285: Avoid hard-coded S3 bucket and empty `roleArn` in batch create test

Hard-coding `s3_bucket = "bifrost-batch-api-file-upload-testing"` and sending `roleArn=""` makes this integration test tightly coupled to one AWS setup and inconsistent with the rest of the suite, which reads these from `integration_settings`. Consider sourcing both from config (or env) and skipping when missing, or at least centralizing these constants with a clear comment so they can be overridden per environment; if `roleArn` is truly unused by Bifrost here, omitting the field may better reflect that contract.

core/providers/bedrock/s3.go (1)
87-99: Guard against nil `modelID` in `ConvertBedrockRequestsToJSONL` to avoid panics

`ConvertBedrockRequestsToJSONL` now takes `modelID *string` but always dereferences it (`*modelID`) without a nil check, which will panic if a caller passes `nil` (very easy to do now that the type is a pointer). Consider making `modelInput` conditional on `modelID` instead of hard dereferencing. For example:

```diff
-func ConvertBedrockRequestsToJSONL(requests []schemas.BatchRequestItem, modelID *string) ([]byte, error) {
+func ConvertBedrockRequestsToJSONL(requests []schemas.BatchRequestItem, modelID *string) ([]byte, error) {
 	var buf bytes.Buffer
 	for _, req := range requests {
-		// Build the Bedrock batch request format
-		bedrockReq := map[string]interface{}{
-			"recordId": req.CustomID,
-			"modelInput": map[string]interface{}{
-				"modelId": *modelID,
-			},
-		}
+		// Build the Bedrock batch request format
+		modelInput := map[string]interface{}{}
+		if modelID != nil {
+			modelInput["modelId"] = *modelID
+		}
+		bedrockReq := map[string]interface{}{
+			"recordId": req.CustomID,
+			"modelInput": modelInput,
+		}
```

This keeps existing behavior when `modelID` is non-nil while making the function safe (and future-proof) when it's nil.

core/providers/openai/openai.go (6)
2367-2440: Use `buildRequestURL` and path escaping for FileRetrieve to honor custom provider paths

The multi-key retry logic and error propagation look good, but the request URL is built with
provider.networkConfig.BaseURL + "/v1/files/" + request.FileID, bypassingbuildRequestURLand not escaping thefile_id. This means:
- Any custom path overrides configured via
customProviderConfig(andGetRequestPath) are ignored forFileRetrieve.- A future change in OpenAI’s ID format with characters needing path-escaping could break the call.
Recommend switching to
buildRequestURLandurl.PathEscapefor consistency with the rest of the provider:- providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) - req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/files/" + request.FileID) + providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) + req.SetRequestURI(provider.buildRequestURL( + ctx, + "/v1/files/"+url.PathEscape(request.FileID), + schemas.FileRetrieveRequest, + ))Apply the same pattern to similar OpenAI endpoints that currently concatenate
BaseURLwith IDs.
2442-2534: FileDelete multi-key retry is solid; align URL construction and support raw payloads

The "try each key until success" pattern, including fasthttp resource release, is implemented correctly. A few consistency improvements:
- URL construction: Like
FileRetrieve, this usesprovider.networkConfig.BaseURL + "/v1/files/" + request.FileID. PreferbuildRequestURLplusurl.PathEscape(request.FileID)to honor custom paths:- providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) - req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/files/" + request.FileID) + providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) + req.SetRequestURI(provider.buildRequestURL( + ctx, + "/v1/files/"+url.PathEscape(request.FileID), + schemas.FileDeleteRequest, + ))
- Raw request/response: You correctly plumb
sendBackRawRequest/sendBackRawResponseintoHandleProviderResponseand then copy them intoresult.ExtraFields. That matches existing patterns and looks good.
2536-2612: FileContent multi-key retry is correct; prefer `buildRequestURL` and path-escape output file IDs

The loop over
keyswith first-success semantics and proper Acquire/Release is good. Two points:
- URL construction: You currently use
provider.networkConfig.BaseURL + "/v1/files/" + request.FileID + "/content". For consistency with other operations and custom providers, switch tobuildRequestURLand escape the path segment:- providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) - req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/files/" + request.FileID + "/content") + providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) + req.SetRequestURI(provider.buildRequestURL( + ctx, + "/v1/files/"+url.PathEscape(request.FileID)+"/content", + schemas.FileContentRequest, + ))
- Empty keys behavior: If
keysis empty, the method returns(nil, nil)rather than an error. If upstream can construct such a call (e.g., after filtering keys byuse_for_batch_api), it may be safer to fail explicitly.
2826-2899: BatchRetrieve multi-key retry: fix URL construction and provider selection for error metadata

The retry-over-keys logic is fine, but there are two inconsistencies:
- URL construction:
req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/batches/" + request.BatchID)bypassesbuildRequestURLand doesn’t escapebatch_id. For consistency and custom providers:- providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) - req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/batches/" + request.BatchID) + providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) + req.SetRequestURI(provider.buildRequestURL( + ctx, + "/v1/batches/"+url.PathEscape(request.BatchID), + schemas.BatchRetrieveRequest, + ))
- Provider in validation error: For missing
batch_idyou userequest.Providerin the error. That can diverge fromprovider.GetProviderKey()for custom-named providers. Using the provider’s own key will keep error metadata consistent:- if request.BatchID == "" { - return nil, providerUtils.NewBifrostOperationError("batch_id is required", nil, request.Provider) - } + if request.BatchID == "" { + return nil, providerUtils.NewBifrostOperationError("batch_id is required", nil, providerName) + }
2901-3001: BatchCancel multi-key retry: align URL building and provider metadata with the rest of the provider

This follows the same retry pattern as
BatchRetrieve, but:
- URL path is hard-coded:
provider.networkConfig.BaseURL + "/v1/batches/" + request.BatchID + "/cancel"should go throughbuildRequestURLandurl.PathEscapeso custom provider paths and unusual IDs are handled:- providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) - req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/batches/" + request.BatchID + "/cancel") + providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) + req.SetRequestURI(provider.buildRequestURL( + ctx, + "/v1/batches/"+url.PathEscape(request.BatchID)+"/cancel", + schemas.BatchCancelRequest, + ))
- Validation error provider: For missing
batch_idyou currently hard-codeschemas.OpenAIintoNewBifrostOperationError. UseproviderNameso errors on custom OpenAI-compatible providers point to the right provider:- if request.BatchID == "" { - return nil, providerUtils.NewBifrostOperationError("batch_id is required", nil, schemas.OpenAI) - } + if request.BatchID == "" { + return nil, providerUtils.NewBifrostOperationError("batch_id is required", nil, providerName) + }
3003-3104: BatchResults composition is correct; reuse `buildRequestURL` for output file download and clarify provider usage

The higher-level flow (call
BatchRetrieveto getoutput_file_id, then fetch and parse JSONL) is solid, and the JSONL parsing viaParseJSONLis a good reuse. A couple of cleanups:
- Download URL: You currently use
provider.networkConfig.BaseURL + "/v1/files/" + *batchResp.OutputFileID + "/content". For consistency and configurability:- providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) - req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/files/" + *batchResp.OutputFileID + "/content") + providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) + req.SetRequestURI(provider.buildRequestURL( + ctx, + "/v1/files/"+url.PathEscape(*batchResp.OutputFileID)+"/content", + schemas.FileContentRequest, + ))
- Provider in validation error: For missing
batch_id, the error hard-codesschemas.OpenAI. PreferproviderName := provider.GetProviderKey()as used elsewhere to ensure the error’s provider field reflects custom names:- if request.BatchID == "" { - return nil, providerUtils.NewBifrostOperationError("batch_id is required", nil, schemas.OpenAI) - } + if request.BatchID == "" { + return nil, providerUtils.NewBifrostOperationError("batch_id is required", nil, providerName) + }core/providers/anthropic/anthropic.go (5)
1224-1304: BatchRetrieve multi-key retry is solid; tweak provider used in validation error

The per-key retry logic, use of
buildRequestURLwithurl.PathEscape, and propagation of raw payloads into the returnedBifrostBatchRetrieveResponseall look good. One minor consistency issue:
- For missing
batch_id, the validation error usesschemas.Anthropicdirectly rather than the resolvedproviderName := provider.GetProviderKey(). If a custom provider name is configured, this will mislabel the provider in error metadata. Consider:- if request.BatchID == "" { - return nil, providerUtils.NewBifrostOperationError("batch_id is required", nil, schemas.Anthropic) - } + if request.BatchID == "" { + return nil, providerUtils.NewBifrostOperationError("batch_id is required", nil, providerName) + }
1306-1412: BatchCancel multi-key retry: correct flow, but URL should use `buildRequestURL` and custom provider key in validation error

The "try each key until success" pattern, including setting Anthropic headers and mapping request counts, looks good. Two improvements:
- Validation error provider: Same as
BatchRetrieve, the missingbatch_iderror hard-codesschemas.Anthropic. PreferproviderNamefor consistency with custom provider naming.- URL construction:
req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/messages/batches/" + request.BatchID + "/cancel")bypassesbuildRequestURLand omitsurl.PathEscape. Align it with other Anthropic calls:- providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) - req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/messages/batches/" + request.BatchID + "/cancel") + providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) + req.SetRequestURI(provider.buildRequestURL( + ctx, + "/v1/messages/batches/"+url.PathEscape(request.BatchID)+"/cancel", + schemas.BatchCancelRequest, + ))
1414-1519: BatchResults JSONL parsing is good; prefer `buildRequestURL` for the results endpoint

This method correctly fetches the results JSONL and converts each line into a
BatchResultItem, capturing parse errors viaParseJSONL. One consistency issue:
- The URL is currently built with
provider.networkConfig.BaseURL + "/v1/messages/batches/" + request.BatchID + "/results". To keep path overrides and escaping consistent with other Anthropic endpoints, consider:- providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) - req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/messages/batches/" + request.BatchID + "/results") + providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) + req.SetRequestURI(provider.buildRequestURL( + ctx, + "/v1/messages/batches/"+url.PathEscape(request.BatchID)+"/results", + schemas.BatchResultsRequest, + ))
1851-1961: FileDelete multi-key retry: behavior is correct; prefer `buildRequestURL` for consistency

The logic for delete (including special handling for 204 No Content and mapping Anthropic's file delete response into Bifrost) is solid. For consistency with other Anthropic endpoints and custom path overrides:
- Replace the hard-coded URL concatenation with
buildRequestURLandurl.PathEscape:- providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) - req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/files/" + request.FileID) + providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) + req.SetRequestURI(provider.buildRequestURL( + ctx, + "/v1/files/"+url.PathEscape(request.FileID), + schemas.FileDeleteRequest, + ))Everything else (beta header, status handling, raw payload propagation) looks good.
1963-2042: FileContent multi-key retry is correct; align URL building with other Anthropic file endpoints

The method correctly:
- Iterates over
keyswith first-success semantics.- Uses the Files beta header and
anthropic-version.- Returns
BifrostFileContentResponsewith copied body bytes and content type, plus latency.As with
FileDelete, the URL is built by string concatenation onprovider.networkConfig.BaseURL. For consistent handling of custom paths and to escapefile_id, consider:- providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) - req.SetRequestURI(provider.networkConfig.BaseURL + "/v1/files/" + request.FileID + "/content") + providerUtils.SetExtraHeaders(ctx, req, provider.networkConfig.ExtraHeaders, nil) + req.SetRequestURI(provider.buildRequestURL( + ctx, + "/v1/files/"+url.PathEscape(request.FileID)+"/content", + schemas.FileContentRequest, + ))core/providers/bedrock/bedrock.go (1)
1579-1684: Guard multi-key operations against empty key slices to avoid `(nil, nil)` returns

For several multi-key functions (`FileRetrieve`, `FileDelete`, `FileContent`, `BatchRetrieve`, `BatchCancel`) the loop over `keys` may be skipped when `len(keys) == 0`, leaving `lastErr` as `nil` and causing the function to return `(nil, nil)`. That's a latent correctness issue and can trigger nil dereferences at call sites that expect either a non-nil response or a non-nil error.

Recommend adding explicit empty-keys checks, consistent with the Gemini provider's implementations, e.g.:
@@ func (provider *BedrockProvider) FileRetrieve(ctx context.Context, keys []schemas.Key, request *schemas.BifrostFileRetrieveRequest) (*schemas.BifrostFileRetrieveResponse, *schemas.BifrostError) { - // Parse S3 URI - bucketName, s3Key := parseS3URI(request.FileID) + // Parse S3 URI + bucketName, s3Key := parseS3URI(request.FileID) if bucketName == "" || s3Key == "" { return nil, providerUtils.NewBifrostOperationError("invalid S3 URI format, expected s3://bucket/key", nil, providerName) } - var lastErr *schemas.BifrostError + if len(keys) == 0 { + return nil, providerUtils.NewBifrostOperationError("no keys provided for file retrieve", nil, providerName) + } + + var lastErr *schemas.BifrostError @@ func (provider *BedrockProvider) FileDelete(ctx context.Context, keys []schemas.Key, request *schemas.BifrostFileDeleteRequest) (*schemas.BifrostFileDeleteResponse, *schemas.BifrostError) { - // Parse S3 URI - bucketName, s3Key := parseS3URI(request.FileID) + // Parse S3 URI + bucketName, s3Key := parseS3URI(request.FileID) if bucketName == "" || s3Key == "" { return nil, providerUtils.NewBifrostOperationError("invalid S3 URI format, expected s3://bucket/key", nil, providerName) } - var lastErr *schemas.BifrostError + if len(keys) == 0 { + return nil, providerUtils.NewBifrostOperationError("no keys provided for file delete", nil, providerName) + } + + var lastErr *schemas.BifrostError @@ func (provider *BedrockProvider) FileContent(ctx context.Context, keys []schemas.Key, request *schemas.BifrostFileContentRequest) (*schemas.BifrostFileContentResponse, *schemas.BifrostError) { - // Parse S3 URI - bucketName, s3Key := parseS3URI(request.FileID) + // Parse S3 URI + bucketName, s3Key := parseS3URI(request.FileID) if bucketName == "" || s3Key == "" { return nil, providerUtils.NewBifrostOperationError("invalid S3 URI format, expected s3://bucket/key", nil, providerName) } - var lastErr *schemas.BifrostError + if len(keys) == 0 { + return nil, providerUtils.NewBifrostOperationError("no keys provided for file content", nil, providerName) + } + + var lastErr *schemas.BifrostError @@ func (provider *BedrockProvider) BatchRetrieve(ctx context.Context, keys []schemas.Key, request *schemas.BifrostBatchRetrieveRequest) (*schemas.BifrostBatchRetrieveResponse, *schemas.BifrostError) { - if request.BatchID == "" { + if request.BatchID == "" { return nil, providerUtils.NewBifrostOperationError("batch_id (job ARN) is required", nil, providerName) } - var lastErr *schemas.BifrostError + if len(keys) == 0 { + return nil, providerUtils.NewBifrostOperationError("no keys provided for batch retrieve", nil, providerName) + } + + var lastErr *schemas.BifrostError @@ func (provider *BedrockProvider) BatchCancel(ctx context.Context, keys []schemas.Key, request *schemas.BifrostBatchCancelRequest) (*schemas.BifrostBatchCancelResponse, *schemas.BifrostError) { - if request.BatchID == "" { + if request.BatchID == "" { return nil, providerUtils.NewBifrostOperationError("batch_id (job ARN) is required", nil, providerName) } - var lastErr *schemas.BifrostError + if len(keys) == 0 { + return nil, providerUtils.NewBifrostOperationError("no keys provided for batch cancel", nil, providerName) + } + + var lastErr *schemas.BifrostErrorThis keeps behavior unchanged for normal calls and hardens the API against misconfiguration.
Also applies to: 1686-1774, 1776-1873, 2321-2468, 2470-2580
core/providers/gemini/gemini.go (1)

2704-2721: Unused `keys` parameter in `FileContent` should be explicitly ignored

`FileContent` was updated to the multi-key signature:

```go
func (provider *GeminiProvider) FileContent(ctx context.Context, keys []schemas.Key, request *schemas.BifrostFileContentRequest) ...
```

but the `keys` parameter is never used. (Unused function parameters don't fail compilation in Go, unlike unused local variables, but ignoring the parameter explicitly makes the intent clear.) Since Gemini doesn't support direct file content download, the parameter can be intentionally ignored using `_`:

```diff
-// FileContent downloads file content from Gemini.
+// FileContent downloads file content from Gemini.
 // Note: Gemini Files API doesn't support direct content download.
 // Files are accessed via their URI in API requests.
-func (provider *GeminiProvider) FileContent(ctx context.Context, keys []schemas.Key, request *schemas.BifrostFileContentRequest) (*schemas.BifrostFileContentResponse, *schemas.BifrostError) {
+func (provider *GeminiProvider) FileContent(ctx context.Context, _ []schemas.Key, request *schemas.BifrostFileContentRequest) (*schemas.BifrostFileContentResponse, *schemas.BifrostError) {
```

(Just change `keys` to `_`; keep the return types as they are in your code.) This preserves behavior while making the unused parameter explicit.
🧹 Nitpick comments (17)

core/schemas/batch.go (1)

65-83: Pointer `Model` fields align with optional routing; clarify JSON tag intent

Switching `Model` to `*string` across batch request types is consistent with making the model hint optional and fits the multi-key/batch design. In `BifrostBatchCreateRequest` you also added `,omitempty`, but `BifrostBatchList/Retrieve/Cancel/Results` still serialize `model` even when nil (as `null`). If these are also meant to be optional, consider adding `,omitempty` for consistency; if they're required for those flows, a brief comment to that effect would make the contract clearer at call sites (see the small example below).

Also applies to: 117-133, 150-159, 205-214, 234-246
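A minimal illustrative struct showing the tag difference discussed above; the type and field names here are stand-ins, not the actual schema definitions:

```go
// Illustrative only: how the two JSON tags behave for a nil *string.
type batchModelTagExample struct {
	Model          *string `json:"model"`           // nil serializes as "model": null
	ModelOmitEmpty *string `json:"model,omitempty"` // nil is omitted from the payload
}
```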
core/internal/testutil/batch.go (1)

22-40: Tests correctly updated for pointer `Model` fields

Using `schemas.Ptr(testConfig.ChatModel)` on batch-create requests keeps the helpers aligned with the new `*string` `Model` fields and avoids any ambiguity at call sites. You might optionally also populate `Model` on the subsequent retrieve/cancel/results requests when you start relying on model-based routing there, but the current tests remain valid with `Model` treated as an optional hint.

Also applies to: 114-131, 186-203, 296-311, 692-699
framework/configstore/rdb.go (1)

10-10: Batch key flags and Bedrock batch S3 config persistence look consistent; consider de-duplicating JSON handling

- `UseForBatchAPI` is now wired through all write paths (`UpdateProvidersConfig`, `UpdateProvider`, `AddProvider`) and read back via `GetProvidersConfig` into `schemas.Key`, which will let selection logic gate batch usage per key as intended.
- Bedrock `BatchS3Config` is serialized with `sonic.Marshal` into `BedrockBatchS3ConfigJSON` when non-nil and cleared (nil) otherwise, so the column won't retain stale config when a key's batch S3 config is removed.
- One small improvement: the `BatchS3Config` → JSON logic is duplicated in three places; extracting a tiny helper (e.g. `marshalBatchS3Config(...) (*string, error)`) would reduce repetition and make future tweaks (validation, error wrapping) cheaper. A sketch follows below.

Functionally this all looks sound; the refactor is optional.

Also applies to: 201-318, 371-456, 459-554, 644-683
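A minimal sketch of the suggested helper, assuming the `sonic` and `schemas` packages already imported in `rdb.go`; the name `marshalBatchS3Config` is the reviewer's suggestion, not existing code:

```go
// marshalBatchS3Config serializes a key's Bedrock batch S3 config for storage,
// returning nil when the config is nil so stale JSON is cleared.
func marshalBatchS3Config(cfg *schemas.BatchS3Config) (*string, error) {
	if cfg == nil {
		return nil, nil
	}
	data, err := sonic.Marshal(cfg)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal Bedrock batch S3 config: %w", err)
	}
	s := string(data)
	return &s, nil
}
```

Each write path could then assign `dbKey.BedrockBatchS3ConfigJSON` from this helper instead of repeating the marshal-and-clear logic.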
tests/integrations/tests/test_openai.py (1)
2105-2105: Filtering batch list by both provider and model

Including `"model": model` in `extra_query` when listing batches tightens the filter to a specific model, which matches the multi-key/multi-model batch design. Please double-check that the underlying list endpoint ignores unknown filters and treats `model` consistently across providers (especially Anthropic vs Bedrock/OpenAI) so older backends don't start rejecting these requests.

ui/lib/types/schemas.ts (2)
89-99: S3/batch config schemas align with backend types; consider optional invariantsThe new
s3BucketConfigSchema/batchS3ConfigSchemashapes match the TS and GoBatchS3Config/S3BucketConfigstructures (bucket_name/prefix/is_default and buckets[]), so they should serialize cleanly end‑to‑end. If you later need stronger guarantees (e.g., at most oneis_defaultbucket, or at least one bucket present), those can be added here via.refineonbatchS3ConfigSchemawithout changing wire format.
170-170: Per-key use_for_batch_api flag: semantics look good; defaulting is handled upstream
use_for_batch_api: z.boolean().optional()adds the per‑key batch flag without breaking existing configs (field simply omitted). Backend code already treats a nil/absent flag as “false”, so this optional field is safe. If the form UX would benefit from always having an explicit boolean, you could later add.default(false)at the form schema layer instead of here to avoid changing raw config semantics.core/schemas/files.go (1)
55-56: *Model fields as string improve optionality; verify callers and JSON exposureSwitching all
Modelfields on file requests fromstringto*stringis consistent with the broader pointer‑based model handling and makes “no model provided” distinguishable from an empty string. A couple of things to double‑check:
- Callers and downstream handlers that previously compared
req.Model != ""now need to handlenilsafely.- These structs still have
json:"model"withoutomitempty, so if they are ever serialized in responses or logs, unset values will now appear as"model": nullinstead of"model": "".If these types are strictly request‑side and not part of any public response surface, this is fine; otherwise you may want
json:"model,omitempty"or a small helper to normalize nil → empty string at the edge.Also applies to: 111-112, 148-149, 188-189, 217-218
transports/bifrost-http/lib/config.go (1)
2138-2143: Redacted config exposes use_for_batch_api with sensible defaultAdding
UseForBatchAPIto the redacted keys (copying the pointer when set, defaulting toPtr(false)when nil) guarantees the UI / API consumers always see an explicit boolean for the new per‑key batch flag. The only trade‑off is that you can no longer distinguish “unset” from “false” in the redacted view; if that distinction ever matters, you might instead omit the field when nil and let clients treat absence as false.ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (2)
54-54: Unused variableuseForBatchAPI.The
useForBatchAPIvariable is declared viaform.watch()but never referenced in the component. This appears to be leftover from development or intended for conditional rendering that wasn't implemented.const supportsBatchAPI = BATCH_SUPPORTED_PROVIDERS.includes(providerName); - const useForBatchAPI = form.watch("key.use_for_batch_api");
177-177: Consider passingkeysprop toModelMultiselect.Per the component definition in
modelMultiselect.tsx, thekeysprop enables fetching models via API with specific key IDs. Currently, onlyprovideris passed, which may limit the model options to cached or default models rather than models available for the specific keys being configured.If the intent is to show models available for the key being edited, consider passing the key value:
- <ModelMultiselect provider={providerName} value={field.value || []} onChange={field.onChange} /> + <ModelMultiselect provider={providerName} keys={form.watch("key.value") ? [form.watch("key.value")] : undefined} value={field.value || []} onChange={field.onChange} />framework/configstore/tables/key.go (1)
283-289: Consider usingsonic.Unmarshalfor consistency.The BeforeSave hook uses
sonic.Marshal(line 167) but AfterFind usesjson.Unmarshal. While functionally equivalent, usingsonic.Unmarshalwould maintain consistency with the serialization path and potentially improve performance.if k.BedrockBatchS3ConfigJSON != nil && *k.BedrockBatchS3ConfigJSON != "" { var batchS3Config schemas.BatchS3Config - if err := json.Unmarshal([]byte(*k.BedrockBatchS3ConfigJSON), &batchS3Config); err != nil { + if err := sonic.Unmarshal([]byte(*k.BedrockBatchS3ConfigJSON), &batchS3Config); err != nil { return err } bedrockConfig.BatchS3Config = &batchS3Config }core/providers/openai/openai.go (2)
2244-2365: FileList multi-key aggregation looks correct but drops raw request/response and global pagination semantics

The per-key looping, error handling, and fasthttp Acquire/Release patterns look sound, and aggregating `Latency` as a sum across keys is reasonable. Two follow-ups to consider:

- Raw payloads: `HandleProviderResponse` is invoked with `sendBackRawRequest` / `sendBackRawResponse`, but the returned `rawRequest` / `rawResponse` are ignored and never surfaced in `bifrostResp.ExtraFields`. If raw payloads are expected for `FileList`, either propagate at least one key's raw request/response onto `bifrostResp.ExtraFields` or pass `false` for those flags to avoid unnecessary work.
- Pagination expectations: `Limit`, `After`, and `Order` are applied per key, so a single client-level `limit` can yield up to `limit * len(keys)` results and `HasMore` / cursor-style pagination isn't represented in the top-level response. If callers rely on list-style pagination, it may be worth documenting or revisiting this behavior.
2718-2824: BatchList aggregation works but ignores raw payloads and pagination metadataThe multi-key aggregation for OpenAI batches is structurally sound (per-key request, aggregation into
[]BifrostBatchRetrieveResponse, summed latency). A few behavior gaps:
- Raw request/response ignored: You compute
sendBackRawRequest/sendBackRawResponse, callHandleProviderResponse, and getrawRequest/rawResponseper key, but never surface them anywhere. Either:
- Attach at least one key’s raw request/response to the top-level
BifrostBatchListResponse.ExtraFields, or- Pass
falsefor these flags to avoid doing extra work.- Pagination fields dropped:
OpenAIBatchListResponseexposesFirstID,LastID, andHasMore, andschemas.BifrostBatchListResponsehas matching fields, but the code only setsObject,Data, andLatency. If clients rely on pagination, consider at least wiring through the first key’s metadata or explicitly documenting that pagination isn’t preserved in multi-key mode.core/providers/anthropic/anthropic.go (2)
1112-1222: Anthropic BatchList aggregation works but ignores raw payloads and pagination metadataThe implementation correctly loops over
keys, applies relevant query params (limit,before_id,after_id), and aggregatesBifrostBatchRetrieveResponseitems with summed latency. A few notes:
HandleProviderResponseis called withsendBackRawResponsebutrawRequest/rawResponseare discarded and not exposed anywhere; if raw payloads are not needed for list operations, consider calling it withfalsefor both flags to avoid the overhead.AnthropicBatchListResponse’s pagination fields (e.g.,HasMore, cursors) are dropped;schemas.BifrostBatchListResponsealso has pagination fields that remain at zero values. If clients rely on pagination, you may want to at least surface the first key’s metadata or clearly document that pagination isn’t preserved in multi-key mode.
1652-1768: FileList multi-key implementation for Anthropic is consistent with Files API designThe Anthropic Files API list implementation:
- Correctly applies
limitandafter_idquery params per key.- Includes required headers (
x-api-key,anthropic-version,AnthropicFilesAPIBetaHeader).- Aggregates
FileObjectentries with appropriate mapping (bytes, created_at, filename, purpose, status) and sums latency intoExtraFields.Latency.Same caveats as for batch list: raw request/response are computed but unused, and any Files API pagination metadata is dropped. If you don’t intend to expose raw payloads or pagination for multi-key list operations, this is fine as-is; otherwise, consider wiring those through.
core/providers/bedrock/bedrock.go (2)

1889-1936: Clarify the `role_arn` error message now that key-config ARN is supported

The new precedence (`ExtraParams["role_arn"]` first, then `key.BedrockKeyConfig.ARN`) is good, and the nil-check on `request.Model` is also a necessary guard with the new pointer type. The remaining log/error message:

`provider.logger.Error("role_arn is required for Bedrock batch API (provide in extra_params)")`

is now slightly misleading since `role_arn` can also come from key config. Consider updating it to mention both sources to avoid confusion for users debugging config issues.
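A hedged sketch of the described precedence with a message that names both sources; `extraParams` and `keyARN` are stand-ins for the actual request and key-config fields:

```go
// Resolve roleArn: client-supplied extra_params override the key's configured ARN.
roleArn := ""
if v, ok := extraParams["role_arn"].(string); ok && v != "" {
	roleArn = v
} else if keyARN != nil && *keyARN != "" {
	roleArn = *keyARN
}
if roleArn == "" {
	return nil, providerUtils.NewBifrostOperationError(
		"role_arn is required for Bedrock batch API (set extra_params.role_arn or configure the key's ARN)",
		nil, providerName)
}
```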
2120-2257: Consider explicitly rejecting empty key slices in BatchList for consistency

`BatchList` aggregates jobs across all keys and always returns a response, even when `len(keys) == 0`, yielding an empty list with zero latency. Other providers (e.g., Gemini) explicitly treat "no keys" as a configuration error. For consistency and clearer failure modes, consider adding an empty-keys guard here as well, similar to:

```diff
 func (provider *BedrockProvider) BatchList(ctx context.Context, keys []schemas.Key, request *schemas.BifrostBatchListRequest) (*schemas.BifrostBatchListResponse, *schemas.BifrostError) {
 	if err := providerUtils.CheckOperationAllowed(schemas.Bedrock, provider.customProviderConfig, schemas.BatchListRequest); err != nil {
 		return nil, err
 	}
 	providerName := provider.GetProviderKey()
+
+	if len(keys) == 0 {
+		return nil, providerUtils.NewBifrostOperationError("no keys provided for batch list", nil, providerName)
+	}
```

Not critical given typical call paths, but it will surface misconfigurations more predictably.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `ui/package-lock.json` is excluded by `!**/package-lock.json`
- `ui/public/images/nebius.jpeg` is excluded by `!**/*.jpeg`
📒 Files selected for processing (46)
Makefile(1 hunks)core/bifrost.go(9 hunks)core/internal/testutil/batch.go(5 hunks)core/providers/anthropic/anthropic.go(8 hunks)core/providers/azure/azure.go(3 hunks)core/providers/bedrock/batch.go(2 hunks)core/providers/bedrock/bedrock.go(13 hunks)core/providers/bedrock/s3.go(1 hunks)core/providers/cerebras/cerebras.go(2 hunks)core/providers/cohere/cohere.go(2 hunks)core/providers/elevenlabs/elevenlabs.go(2 hunks)core/providers/gemini/gemini.go(21 hunks)core/providers/groq/groq.go(2 hunks)core/providers/mistral/mistral.go(2 hunks)core/providers/nebius/nebius.go(2 hunks)core/providers/ollama/ollama.go(2 hunks)core/providers/openai/openai.go(9 hunks)core/providers/openrouter/openrouter.go(2 hunks)core/providers/parasail/parasail.go(2 hunks)core/providers/perplexity/perplexity.go(2 hunks)core/providers/sgl/sgl.go(2 hunks)core/providers/vertex/vertex.go(2 hunks)core/schemas/account.go(2 hunks)core/schemas/batch.go(5 hunks)core/schemas/bifrost.go(1 hunks)core/schemas/files.go(5 hunks)core/schemas/provider.go(1 hunks)core/utils.go(1 hunks)framework/configstore/migrations.go(2 hunks)framework/configstore/rdb.go(9 hunks)framework/configstore/tables/key.go(4 hunks)tests/integrations/config.json(5 hunks)tests/integrations/tests/test_bedrock.py(2 hunks)tests/integrations/tests/test_openai.py(5 hunks)transports/bifrost-http/handlers/inference.go(1 hunks)transports/bifrost-http/integrations/anthropic.go(1 hunks)transports/bifrost-http/lib/config.go(4 hunks)transports/config.schema.json(1 hunks)ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx(4 hunks)ui/app/workspace/providers/views/providerKeyForm.tsx(0 hunks)ui/components/ui/modelMultiselect.tsx(1 hunks)ui/components/ui/tagInput.tsx(1 hunks)ui/lib/schemas/providerForm.ts(3 hunks)ui/lib/types/config.ts(6 hunks)ui/lib/types/schemas.ts(3 hunks)ui/package.json(2 hunks)
💤 Files with no reviewable changes (1)
- ui/app/workspace/providers/views/providerKeyForm.tsx
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
core/schemas/account.gocore/utils.gocore/providers/bedrock/s3.gocore/schemas/bifrost.gotransports/config.schema.jsoncore/providers/bedrock/batch.gocore/schemas/batch.goui/package.jsoncore/providers/sgl/sgl.goui/components/ui/modelMultiselect.tsxframework/configstore/tables/key.goMakefilecore/schemas/files.goui/app/workspace/providers/fragments/apiKeysFormFragment.tsxtransports/bifrost-http/lib/config.goui/lib/types/config.tstests/integrations/tests/test_bedrock.pycore/internal/testutil/batch.goframework/configstore/rdb.gotransports/bifrost-http/handlers/inference.gocore/providers/elevenlabs/elevenlabs.gocore/providers/ollama/ollama.gotests/integrations/tests/test_openai.pyui/components/ui/tagInput.tsxframework/configstore/migrations.goui/lib/types/schemas.tscore/providers/cohere/cohere.gocore/providers/vertex/vertex.gotransports/bifrost-http/integrations/anthropic.gocore/bifrost.gocore/providers/parasail/parasail.gocore/schemas/provider.goui/lib/schemas/providerForm.tscore/providers/nebius/nebius.gocore/providers/groq/groq.gocore/providers/cerebras/cerebras.gocore/providers/mistral/mistral.gocore/providers/perplexity/perplexity.gocore/providers/openai/openai.gocore/providers/azure/azure.gotests/integrations/config.jsoncore/providers/bedrock/bedrock.gocore/providers/anthropic/anthropic.gocore/providers/openrouter/openrouter.gocore/providers/gemini/gemini.go
🧠 Learnings (4)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.
Applied to files:
core/schemas/account.gocore/utils.gocore/providers/bedrock/s3.gocore/schemas/bifrost.gocore/providers/bedrock/batch.gocore/schemas/batch.gocore/providers/sgl/sgl.goframework/configstore/tables/key.gocore/schemas/files.gotransports/bifrost-http/lib/config.gocore/internal/testutil/batch.goframework/configstore/rdb.gotransports/bifrost-http/handlers/inference.gocore/providers/elevenlabs/elevenlabs.gocore/providers/ollama/ollama.goframework/configstore/migrations.gocore/providers/cohere/cohere.gocore/providers/vertex/vertex.gotransports/bifrost-http/integrations/anthropic.gocore/bifrost.gocore/providers/parasail/parasail.gocore/schemas/provider.gocore/providers/nebius/nebius.gocore/providers/groq/groq.gocore/providers/cerebras/cerebras.gocore/providers/mistral/mistral.gocore/providers/perplexity/perplexity.gocore/providers/openai/openai.gocore/providers/azure/azure.gocore/providers/bedrock/bedrock.gocore/providers/anthropic/anthropic.gocore/providers/openrouter/openrouter.gocore/providers/gemini/gemini.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.
Applied to files:
transports/bifrost-http/lib/config.gotransports/bifrost-http/handlers/inference.gotransports/bifrost-http/integrations/anthropic.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.
Applied to files:
core/providers/openai/openai.go
📚 Learning: 2025-12-14T14:43:30.902Z
Learnt from: Radheshg04
Repo: maximhq/bifrost PR: 980
File: core/providers/openai/images.go:10-22
Timestamp: 2025-12-14T14:43:30.902Z
Learning: Enforce the OpenAI image generation SSE event type values across the OpenAI image flow in the repository: use "image_generation.partial_image" for partial chunks, "image_generation.completed" for the final result, and "error" for errors. Apply this consistently in schemas, constants, tests, accumulator routing, and UI code within core/providers/openai (and related Go files) to ensure uniform event typing and avoid mismatches.
Applied to files:
core/providers/openai/openai.go
🧬 Code graph analysis (27)
core/schemas/account.go (1)
ui/lib/types/config.ts (2)
S3BucketConfig(52-56)BatchS3Config(58-60)
core/utils.go (3)
core/schemas/bifrost.go (10)
RequestType(85-85)BatchCreateRequest(100-100)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)BatchResultsRequest(104-104)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)FileContentRequest(109-109)ui/lib/types/config.ts (1)
RequestType(137-159)transports/bifrost-http/handlers/inference.go (2)
BatchCreateRequest(292-299)BatchListRequest(302-307)
core/providers/bedrock/s3.go (1)
core/schemas/batch.go (1)
BatchRequestItem(31-37)
core/schemas/batch.go (3)
core/schemas/provider.go (1)
Provider(312-359)core/schemas/bifrost.go (1)
ModelProvider(32-32)core/schemas/models.go (1)
Model(109-129)
framework/configstore/tables/key.go (2)
core/schemas/account.go (2)
BedrockKeyConfig(56-64)BatchS3Config(50-52)ui/lib/types/config.ts (2)
BedrockKeyConfig(63-71)BatchS3Config(58-60)
core/schemas/files.go (1)
core/schemas/models.go (1)
Model(109-129)
ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (7)
ui/components/ui/form.tsx (5)
FormItem(161-161)FormLabel(162-162)FormDescription(164-164)FormControl(163-163)FormMessage(165-165)ui/components/ui/switch.tsx (1)
Switch(36-36)ui/components/ui/modelMultiselect.tsx (1)
ModelMultiselect(27-192)ui/components/ui/separator.tsx (1)
Separator(43-43)ui/components/ui/button.tsx (1)
Button(70-70)ui/components/ui/alert.tsx (3)
Alert(42-42)AlertTitle(42-42)AlertDescription(42-42)ui/components/ui/input.tsx (1)
Input(15-69)
transports/bifrost-http/lib/config.go (4)
core/utils.go (1)
Ptr(56-58)core/schemas/utils.go (1)
Ptr(16-18)core/schemas/account.go (2)
BedrockKeyConfig(56-64)BatchS3Config(50-52)ui/lib/types/config.ts (2)
BedrockKeyConfig(63-71)BatchS3Config(58-60)
ui/lib/types/config.ts (2)
core/schemas/account.go (2)
S3BucketConfig(42-46)BatchS3Config(50-52)core/network/http.go (1)
GlobalProxyType(46-46)
core/internal/testutil/batch.go (1)
core/utils.go (1)
Ptr(56-58)
framework/configstore/rdb.go (2)
core/schemas/account.go (2)
BedrockKeyConfig(56-64)BatchS3Config(50-52)ui/lib/types/config.ts (2)
BedrockKeyConfig(63-71)BatchS3Config(58-60)
core/providers/elevenlabs/elevenlabs.go (4)
core/schemas/batch.go (2)
BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)core/schemas/bifrost.go (4)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)ui/lib/types/logs.ts (1)
BifrostError(226-232)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
framework/configstore/migrations.go (2)
framework/migrator/migrator.go (3)
New(131-149)DefaultOptions(100-106)Migration(62-69)framework/configstore/tables/key.go (2)
TableKey(13-58)TableKey(61-61)
core/providers/cohere/cohere.go (3)
core/schemas/batch.go (1)
BifrostBatchListRequest(118-133)core/schemas/bifrost.go (2)
BifrostError(461-470)BatchListRequest(101-101)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/vertex/vertex.go (3)
core/schemas/bifrost.go (7)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)ui/lib/types/logs.ts (1)
BifrostError(226-232)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
transports/bifrost-http/integrations/anthropic.go (3)
core/schemas/batch.go (1)
BatchRequestItem(31-37)core/utils.go (1)
Ptr(56-58)core/schemas/utils.go (1)
Ptr(16-18)
core/bifrost.go (3)
core/schemas/account.go (1)
Key(8-19)core/schemas/provider.go (1)
Provider(312-359)core/schemas/bifrost.go (19)
RequestType(85-85)BatchCreateRequest(100-100)BifrostError(461-470)ErrorField(479-486)BifrostErrorExtraFields(528-532)BifrostContextKeySelectedKeyID(122-122)BifrostContextKeySelectedKeyName(123-123)BifrostResponse(322-342)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)FileContentRequest(109-109)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)BatchResultsRequest(104-104)ModelProvider(32-32)BifrostContextKeyDirectKey(121-121)BifrostContextKeySkipKeySelection(127-127)
core/schemas/provider.go (2)
core/schemas/account.go (1)
Key(8-19)core/schemas/bifrost.go (1)
BifrostError(461-470)
core/providers/nebius/nebius.go (4)
core/schemas/batch.go (3)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)core/schemas/bifrost.go (7)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)transports/bifrost-http/handlers/inference.go (1)
BatchListRequest(302-307)
core/providers/groq/groq.go (2)
core/schemas/bifrost.go (7)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/cerebras/cerebras.go (4)
core/schemas/files.go (8)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)BifrostFileRetrieveRequest(146-159)BifrostFileRetrieveResponse(167-183)BifrostFileDeleteRequest(186-198)BifrostFileDeleteResponse(206-212)BifrostFileContentRequest(215-227)BifrostFileContentResponse(235-241)core/schemas/bifrost.go (7)
BifrostError(461-470)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)core/schemas/batch.go (8)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)BifrostBatchResultsResponse(285-294)
core/providers/mistral/mistral.go (2)
core/schemas/bifrost.go (2)
BifrostError(461-470)BatchListRequest(101-101)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/openai/openai.go (5)
core/schemas/bifrost.go (7)
BifrostError(461-470)OpenAI(35-35)FileListRequest(106-106)RequestType(85-85)FileRetrieveRequest(107-107)BatchListRequest(101-101)BatchRetrieveRequest(102-102)core/providers/openai/errors.go (1)
ParseOpenAIError(10-42)core/schemas/provider.go (1)
Provider(312-359)core/providers/openai/files.go (3)
OpenAIFileListResponse(26-30)OpenAIFileResponse(14-23)OpenAIFileDeleteResponse(33-37)core/providers/openai/batch.go (2)
OpenAIBatchListResponse(51-57)OpenAIBatchResponse(20-41)
core/providers/bedrock/bedrock.go (6)
core/schemas/account.go (2)
Key(8-19)BedrockKeyConfig(56-64)core/providers/bedrock/types.go (1)
DefaultBedrockRegion(11-11)core/providers/utils/utils.go (3)
NewBifrostOperationError(564-575)NewConfigurationError(550-560)NewProviderAPIError(579-594)core/utils.go (1)
Ptr(56-58)core/providers/bedrock/files.go (1)
S3ListObjectsResponse(46-50)core/providers/bedrock/batch.go (1)
ToBifrostBatchStatus(127-148)
core/providers/anthropic/anthropic.go (4)
core/schemas/bifrost.go (3)
BifrostError(461-470)Anthropic(37-37)RequestType(85-85)core/providers/utils/utils.go (2)
MakeRequestWithContext(40-94)NewBifrostOperationError(564-575)core/providers/anthropic/batch.go (1)
ParseAnthropicError(232-249)core/providers/anthropic/types.go (1)
AnthropicFilesAPIBetaHeader(17-17)
core/providers/openrouter/openrouter.go (3)
core/schemas/account.go (1)
Key(8-19)core/schemas/bifrost.go (1)
BifrostError(461-470)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/gemini/gemini.go (4)
core/schemas/bifrost.go (6)
Gemini(48-48)BatchListRequest(101-101)BifrostResponseExtraFields(390-401)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)BatchResultsRequest(104-104)core/schemas/batch.go (8)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)BifrostBatchRetrieveRequest(150-159)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)BifrostBatchResultsResponse(285-294)core/providers/gemini/types.go (1)
GeminiBatchListResponse(1607-1610)core/providers/gemini/files.go (1)
GeminiFileListResponse(35-38)
9bc94c1 to 95bcd45
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
core/providers/bedrock/s3.go (1)
87-99: Guard against nil `modelID` before dereferencing

`ConvertBedrockRequestsToJSONL` now takes `modelID *string` but unconditionally uses `*modelID` for `"modelId"`, so a nil `modelID` will panic at runtime. Also, with a nil `modelID` you'd strip `"model"` from `req.Body` / `req.Params` and never set any model.

Consider validating up front and failing fast, e.g.:

```go
func ConvertBedrockRequestsToJSONL(requests []schemas.BatchRequestItem, modelID *string) ([]byte, error) {
	if modelID == nil || *modelID == "" {
		return nil, fmt.Errorf("modelID is required for Bedrock batch JSONL conversion")
	}
	// existing logic...
}
```
♻️ Duplicate comments (1)
ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (1)
539-643: `BedrockBatchS3ConfigSection` component is defined but never rendered

This is a duplicate of the past review comment. The component is fully implemented with correct bucket management logic (add/remove/set default), but has no JSX integration. It should be conditionally rendered within the Bedrock section after the `BatchAPIFormField` at line 532:

```diff
 {supportsBatchAPI && <BatchAPIFormField control={control} form={form} />}
+{supportsBatchAPI && useForBatchAPI && <BedrockBatchS3ConfigSection control={control} form={form} />}
```

Note: The condition should include `useForBatchAPI` to only show S3 configuration when batch API is enabled for the key.
🧹 Nitpick comments (12)
ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (2)
54-54: Unused variable: consider using it to conditionally render S3 configuration

The `useForBatchAPI` variable is watched but never used. It likely should gate the `BedrockBatchS3ConfigSection` rendering to only show S3 bucket configuration when batch API is enabled for the key:

```tsx
{supportsBatchAPI && useForBatchAPI && <BedrockBatchS3ConfigSection control={control} form={form} />}
```

This would align with the feature's intent: only show S3 configuration when the user has enabled batch API support for the specific key.
554-554: Consider improving type safety for bucket operations

The bucket manipulation functions use `any` types, which reduces type safety. Consider defining a proper type for the bucket structure:

```ts
type S3Bucket = {
  bucket_name: string;
  prefix: string;
  is_default: boolean;
};
```

Then use it in the functions:

```diff
-const newBuckets = currentBuckets.filter((_: any, i: number) => i !== index);
+const newBuckets = currentBuckets.filter((_: S3Bucket, i: number) => i !== index);

-const newBuckets = currentBuckets.map((bucket: any, i: number) => ({
+const newBuckets = currentBuckets.map((bucket: S3Bucket, i: number) => ({
```

Also applies to: 564-564
core/schemas/bifrost.go (1)
195-247: Nil-safe model handling for file/batch requests is correct

The updated `GetRequestFields` branches for File*/Batch* requests correctly avoid nil dereferences and return an empty model string when `Model` is unset, which matches the new pointer semantics and preserves prior behavior for non-batch/file calls. If this grows further, you could factor the repeated pointer-unwrapping into a small helper, but it's not required now.
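A minimal sketch of such a helper, with a hypothetical name (nothing like this exists in `core/schemas` today):

```go
// derefOrEmpty returns the pointed-to string, or "" when the pointer is nil.
func derefOrEmpty(s *string) string {
	if s == nil {
		return ""
	}
	return *s
}
```

Each File*/Batch* branch could then return `derefOrEmpty(req.Model)` instead of repeating the nil check.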
framework/configstore/rdb.go (1)

271-287: Bedrock BatchS3Config JSON persistence is sound; consider tightening one branch

Using `sonic.Marshal` to populate `BedrockBatchS3ConfigJSON` when `BedrockKeyConfig.BatchS3Config != nil` in UpdateProvidersConfig, UpdateProvider, and AddProvider correctly serializes the batch S3 config alongside the structured `BedrockKeyConfig`. UpdateProvider/AddProvider also explicitly set `BedrockBatchS3ConfigJSON = nil` when `BatchS3Config` is nil, ensuring stale JSON is cleared.

In UpdateProvidersConfig, you only set `BedrockBatchS3ConfigJSON` when `BatchS3Config != nil`; for clarity and future-proofing, you might also explicitly set it to nil when `BedrockKeyConfig` is non-nil but `BatchS3Config` is nil, mirroring the other two code paths and avoiding any dependence on GORM's default-value behavior.

For example, inside the `if key.BedrockKeyConfig != nil` block:

```go
if key.BedrockKeyConfig.BatchS3Config != nil {
	data, err := sonic.Marshal(key.BedrockKeyConfig.BatchS3Config)
	if err != nil {
		return fmt.Errorf("failed to marshal Bedrock batch S3 config: %w", err)
	}
	s := string(data)
	dbKey.BedrockBatchS3ConfigJSON = &s
} else {
	dbKey.BedrockBatchS3ConfigJSON = nil
}
```

Also applies to: 409-425, 529-545, 10-10
core/providers/utils/pagination.go (1)
10-14: `Logger` field is stored but never used

The `Logger` field is passed to `NewSerialListHelper` and stored in the struct, but it's never referenced in any of the helper's methods. Consider removing it if logging is not needed, or add logging statements where appropriate (e.g., when cursor decoding fails or when advancing to the next key).

```diff
 type SerialListHelper struct {
 	Keys   []schemas.Key
 	Cursor *schemas.SerialCursor
-	Logger schemas.Logger
 }
```

If you intend to use logging, the corresponding parameter in `NewSerialListHelper` should also be removed.
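A hedged sketch of the constructor with the unused parameter dropped; the real signature in `core/providers/utils/pagination.go` may differ:

```go
// NewSerialListHelper builds a helper that pages through the given keys serially.
func NewSerialListHelper(keys []schemas.Key, cursor *schemas.SerialCursor) *SerialListHelper {
	return &SerialListHelper{
		Keys:   keys,
		Cursor: cursor,
	}
}
```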
core/internal/testutil/account.go (1)

163-181: Fix indentation inconsistency in new Bedrock key block

The new Bedrock key block has inconsistent indentation compared to the surrounding code: it's missing the leading tab that aligns it with the other keys in the slice.

```diff
-	{
-		Models: []string{},
-		Weight: 1.0,
-		BedrockKeyConfig: &schemas.BedrockKeyConfig{
+		{
+			Models: []string{},
+			Weight: 1.0,
+			BedrockKeyConfig: &schemas.BedrockKeyConfig{
```

core/providers/openai/openai.go (1)
2374-2619: Multi-key "try-each-key" loops are robust; optionally guard against empty key slices

The new implementations for `FileRetrieve`, `FileDelete`, `FileContent`, `BatchRetrieve`, and `BatchCancel` all follow a consistent and sound pattern:

- Iterate sequentially over `keys`, attempting the request with each.
- On provider/network/HTTP/parse errors, set `lastErr` and continue.
- On first success, return the converted Bifrost response with latency and raw fields.
- Ensure every `fasthttp.AcquireRequest` / `AcquireResponse` is matched by a `Release` on all branches, avoiding leaks.

One edge case: if `len(keys) == 0`, these methods return `(nil, nil)` because the loop body never runs and `lastErr` remains nil. If upstream code already guarantees a non-empty key slice for these operations, this is fine; otherwise you may want to add an early check such as:

```go
if len(keys) == 0 {
	return nil, providerUtils.NewBifrostOperationError("no keys configured for this operation", nil, providerName)
}
```

to make the failure mode explicit rather than returning a nil result and nil error. (A generic sketch of this loop shape follows below.)
Also applies to: 2840-3015
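For illustration only, here is a hedged sketch of the shared shape of these loops, assuming the repository's `schemas` and `providerUtils` packages are in scope; `tryKeys` and `doRequestWithKey` are hypothetical names, not existing code:

```go
// tryKeys runs doRequestWithKey against each key in order and returns the
// first successful response, or the last error once every key has failed.
func tryKeys[T any](
	ctx context.Context,
	keys []schemas.Key,
	providerName schemas.ModelProvider,
	doRequestWithKey func(ctx context.Context, key schemas.Key) (*T, *schemas.BifrostError),
) (*T, *schemas.BifrostError) {
	if len(keys) == 0 {
		// Fail fast instead of falling through and returning (nil, nil).
		return nil, providerUtils.NewBifrostOperationError("no keys configured for this operation", nil, providerName)
	}
	var lastErr *schemas.BifrostError
	for _, key := range keys {
		resp, err := doRequestWithKey(ctx, key)
		if err != nil {
			lastErr = err
			continue // try the next key
		}
		return resp, nil // first success wins
	}
	return nil, lastErr
}
```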
core/providers/azure/azure.go (1)
943-1072: Azure FileList doesn't wire through Limit/Order; consider adding for consistency with OpenAI and Anthropic

The serial pagination across keys is well-structured. However, OpenAI's Files API supports `limit` (1-10,000, default 10,000) and `order` parameters (asc/desc by `created_at`), and other providers like Anthropic already propagate these from the request into their query parameters. Azure's FileList currently only passes `api-version`, an optional `purpose`, and the native `after` cursor; it ignores `request.Limit` and `request.Order`. If Azure's files API supports these parameters, consider wiring them through for consistency. If not, this is safe to leave as-is.
1112-1229: Anthropic: multi-key batch/file orchestration and serial pagination look solid
BatchListandFileListnow useNewSerialListHelperwith provider-native cursors (after_id/ last IDs), building encoded cursors (NextCursor/After) across keys; request construction and cursor plumbing are coherent with the helper’s design.BatchRetrieve,BatchCancel,BatchResults,FileRetrieve,FileDelete, andFileContentiterate over keys sequentially, return on first success, and reliably close fasthttp requests/responses on all paths while propagating the last error if all keys fail.BatchCreate’s updatedroleArnandmodelIDderivation (clientextra_paramsfirst, then key config deployments/ARN) is backward compatible and aligns with how Anthropic batch jobs are modeled.- Optional: if there is any scenario where these APIs might be called with an empty
keysslice, consider returning a configuration error instead of(nil, nil)to make such misconfigurations easier to debug.Also applies to: 1231-1526, 1659-2057
core/providers/bedrock/bedrock.go (1)
1422-2727: Bedrock: multi-key S3 and batch orchestration are correctly wired and resource-safe
FileListusesNewSerialListHelperoverkeyswith S3NextContinuationToken, building a unifiedFileListcursor (After) while correctly parsing S3 URIs and prefixes fromstorage_config/extra_params.FileRetrieve,FileDelete, andFileContentsequentially try each Bedrock key, signing S3 requests per-key, closing response bodies on all branches, and returning the first successful result or a meaningfullastErrif all fail.BatchListmirrors this serial-pagination pattern over Bedrock’snextToken, andBatchRetrieve/BatchCancelcorrectly map Bedrock job responses intoBifrostBatchRetrieveResponse/BifrostBatchCancelResponse, including metadata, request counts (via manifest), and timing.BatchResultscomposesBatchRetrieve,FileList, andFileContentto walk all JSONL result files under the output S3 prefix, aggregating results and parse errors and falling back to a single JSONL object when listing fails.BatchCreate’s newroleArn/modelIDresolution (client override, then key config) plus the updated call to the multi-keyBatchRetrieveimplementation keeps behavior consistent while aligning with the new interface.- Optional: as with Anthropic, if upstream ever passes an empty
keysslice into these multi-key methods, consider returning a configuration error rather than(nil, nil)to make misconfigurations explicit.core/providers/gemini/gemini.go (2)
1516-1523: Consider simplifying the model extraction logic.The current pattern declares
var model stringand then conditionally assigns it. This can be simplified using idiomatic Go:- var model string - if request.Model != nil { - _, model = schemas.ParseModelString(*request.Model, schemas.Gemini) - } - // We default gemini 2.5 flash + model := "" + if request.Model != nil { + _, model = schemas.ParseModelString(*request.Model, schemas.Gemini) + } + // Default to gemini-2.5-flash if no model specified if model == "" { model = "gemini-2.5-flash" }
1896-1925: Consider enhancing error context for multi-key operations.The multi-key fallback pattern correctly tries each key until success. However, when all keys fail, only the last error is returned without indicating that multiple keys were attempted. Consider whether aggregating errors or providing context about the number of keys tried would improve debuggability.
This pattern is consistent across BatchRetrieve, BatchCancel, BatchResults, FileRetrieve, FileDelete, and other operations, so any improvement would benefit the entire codebase.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
ui/package-lock.jsonis excluded by!**/package-lock.jsonui/public/images/nebius.jpegis excluded by!**/*.jpeg
📒 Files selected for processing (62)
Makefile(1 hunks)core/bifrost.go(16 hunks)core/go.mod(1 hunks)core/internal/testutil/account.go(6 hunks)core/internal/testutil/batch.go(5 hunks)core/providers/anthropic/anthropic.go(12 hunks)core/providers/azure/azure.go(8 hunks)core/providers/bedrock/batch.go(2 hunks)core/providers/bedrock/bedrock.go(22 hunks)core/providers/bedrock/s3.go(1 hunks)core/providers/cerebras/cerebras.go(2 hunks)core/providers/cohere/cohere.go(2 hunks)core/providers/elevenlabs/elevenlabs.go(2 hunks)core/providers/gemini/gemini.go(21 hunks)core/providers/groq/groq.go(2 hunks)core/providers/mistral/mistral.go(2 hunks)core/providers/nebius/nebius.go(2 hunks)core/providers/ollama/ollama.go(2 hunks)core/providers/openai/openai.go(14 hunks)core/providers/openrouter/openrouter.go(2 hunks)core/providers/parasail/parasail.go(2 hunks)core/providers/perplexity/perplexity.go(2 hunks)core/providers/sgl/sgl.go(2 hunks)core/providers/utils/pagination.go(1 hunks)core/providers/vertex/vertex.go(2 hunks)core/schemas/account.go(2 hunks)core/schemas/batch.go(5 hunks)core/schemas/bifrost.go(1 hunks)core/schemas/files.go(5 hunks)core/schemas/pagination.go(1 hunks)core/schemas/provider.go(1 hunks)core/utils.go(1 hunks)examples/plugins/hello-world/go.mod(1 hunks)framework/configstore/migrations.go(2 hunks)framework/configstore/rdb.go(9 hunks)framework/configstore/tables/key.go(4 hunks)framework/go.mod(1 hunks)plugins/governance/go.mod(1 hunks)plugins/jsonparser/go.mod(1 hunks)plugins/logging/go.mod(1 hunks)plugins/maxim/go.mod(1 hunks)plugins/mocker/go.mod(1 hunks)plugins/otel/go.mod(1 hunks)plugins/semanticcache/go.mod(1 hunks)plugins/telemetry/go.mod(1 hunks)tests/integrations/config.json(5 hunks)tests/integrations/tests/test_bedrock.py(2 hunks)tests/integrations/tests/test_openai.py(5 hunks)tests/scripts/1millogs/go.mod(1 hunks)transports/bifrost-http/handlers/inference.go(1 hunks)transports/bifrost-http/integrations/anthropic.go(1 hunks)transports/bifrost-http/lib/config.go(4 hunks)transports/config.schema.json(1 hunks)transports/go.mod(1 hunks)ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx(4 hunks)ui/app/workspace/providers/views/providerKeyForm.tsx(0 hunks)ui/components/ui/modelMultiselect.tsx(1 hunks)ui/components/ui/tagInput.tsx(1 hunks)ui/lib/schemas/providerForm.ts(3 hunks)ui/lib/types/config.ts(6 hunks)ui/lib/types/schemas.ts(3 hunks)ui/package.json(2 hunks)
💤 Files with no reviewable changes (1)
- ui/app/workspace/providers/views/providerKeyForm.tsx
✅ Files skipped from review due to trivial changes (4)
- tests/scripts/1millogs/go.mod
- plugins/jsonparser/go.mod
- framework/go.mod
- examples/plugins/hello-world/go.mod
🚧 Files skipped from review as they are similar to previous changes (20)
- core/providers/perplexity/perplexity.go
- core/providers/bedrock/batch.go
- ui/components/ui/tagInput.tsx
- tests/integrations/tests/test_bedrock.py
- tests/integrations/tests/test_openai.py
- Makefile
- core/schemas/files.go
- transports/bifrost-http/lib/config.go
- transports/bifrost-http/integrations/anthropic.go
- tests/integrations/config.json
- transports/config.schema.json
- ui/lib/types/schemas.ts
- core/utils.go
- ui/lib/types/config.ts
- ui/components/ui/modelMultiselect.tsx
- core/providers/vertex/vertex.go
- core/schemas/account.go
- core/schemas/batch.go
- ui/lib/schemas/providerForm.ts
- framework/configstore/migrations.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
plugins/governance/go.modcore/providers/utils/pagination.goplugins/logging/go.modcore/schemas/bifrost.goplugins/semanticcache/go.modplugins/otel/go.modcore/internal/testutil/batch.goui/app/workspace/providers/fragments/apiKeysFormFragment.tsxui/package.jsonplugins/maxim/go.modcore/schemas/pagination.goframework/configstore/tables/key.gocore/go.modplugins/telemetry/go.modtransports/go.modcore/providers/parasail/parasail.goplugins/mocker/go.modtransports/bifrost-http/handlers/inference.gocore/bifrost.gocore/providers/cohere/cohere.goframework/configstore/rdb.gocore/schemas/provider.gocore/providers/bedrock/s3.gocore/providers/ollama/ollama.gocore/providers/openai/openai.gocore/providers/elevenlabs/elevenlabs.gocore/providers/azure/azure.gocore/providers/nebius/nebius.gocore/providers/sgl/sgl.gocore/internal/testutil/account.gocore/providers/mistral/mistral.gocore/providers/cerebras/cerebras.gocore/providers/openrouter/openrouter.gocore/providers/anthropic/anthropic.gocore/providers/gemini/gemini.gocore/providers/groq/groq.gocore/providers/bedrock/bedrock.go
🧠 Learnings (4)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.
Applied to files:
core/providers/utils/pagination.gocore/schemas/bifrost.gocore/internal/testutil/batch.gocore/schemas/pagination.goframework/configstore/tables/key.gocore/providers/parasail/parasail.gotransports/bifrost-http/handlers/inference.gocore/bifrost.gocore/providers/cohere/cohere.goframework/configstore/rdb.gocore/schemas/provider.gocore/providers/bedrock/s3.gocore/providers/ollama/ollama.gocore/providers/openai/openai.gocore/providers/elevenlabs/elevenlabs.gocore/providers/azure/azure.gocore/providers/nebius/nebius.gocore/providers/sgl/sgl.gocore/internal/testutil/account.gocore/providers/mistral/mistral.gocore/providers/cerebras/cerebras.gocore/providers/openrouter/openrouter.gocore/providers/anthropic/anthropic.gocore/providers/gemini/gemini.gocore/providers/groq/groq.gocore/providers/bedrock/bedrock.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.
Applied to files:
transports/bifrost-http/handlers/inference.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.
Applied to files:
core/providers/openai/openai.go
📚 Learning: 2025-12-14T14:43:30.902Z
Learnt from: Radheshg04
Repo: maximhq/bifrost PR: 980
File: core/providers/openai/images.go:10-22
Timestamp: 2025-12-14T14:43:30.902Z
Learning: Enforce the OpenAI image generation SSE event type values across the OpenAI image flow in the repository: use "image_generation.partial_image" for partial chunks, "image_generation.completed" for the final result, and "error" for errors. Apply this consistently in schemas, constants, tests, accumulator routing, and UI code within core/providers/openai (and related Go files) to ensure uniform event typing and avoid mismatches.
Applied to files:
core/providers/openai/openai.go
🧬 Code graph analysis (20)
core/providers/utils/pagination.go (3)
core/schemas/account.go (1)
Key(8-19)core/schemas/pagination.go (4)
SerialCursor(12-16)DecodeSerialCursor(32-53)NewSerialCursor(56-62)EncodeSerialCursor(19-28)core/schemas/logger.go (1)
Logger(28-55)
core/schemas/bifrost.go (3)
transports/bifrost-http/handlers/inference.go (3)
TranscriptionRequest(285-289)BatchCreateRequest(292-299)BatchListRequest(302-307)core/schemas/provider.go (1)
Provider(312-359)core/schemas/models.go (1)
Model(109-129)
core/internal/testutil/batch.go (2)
core/schemas/models.go (1)
Model(109-129)core/utils.go (1)
Ptr(56-58)
ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (1)
ui/components/ui/modelMultiselect.tsx (1)
ModelMultiselect(27-192)
core/schemas/pagination.go (1)
transports/bifrost-http/main.go (1)
Version(72-72)
core/providers/parasail/parasail.go (4)
core/schemas/files.go (8)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)BifrostFileRetrieveRequest(146-159)BifrostFileRetrieveResponse(167-183)BifrostFileDeleteRequest(186-198)BifrostFileDeleteResponse(206-212)BifrostFileContentRequest(215-227)BifrostFileContentResponse(235-241)core/schemas/bifrost.go (7)
BifrostError(461-470)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)core/schemas/batch.go (8)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)BifrostBatchResultsResponse(285-294)
transports/bifrost-http/handlers/inference.go (3)
core/utils.go (1)
Ptr(56-58)core/schemas/batch.go (1)
BifrostBatchCreateRequest(65-83)core/schemas/bifrost.go (1)
ModelProvider(32-32)
core/bifrost.go (3)
core/schemas/bifrost.go (13)
BifrostError(461-470)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)FileContentRequest(109-109)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)BatchResultsRequest(104-104)ModelProvider(32-32)BifrostContextKeyDirectKey(121-121)BifrostContextKeySkipKeySelection(127-127)ListModelsRequest(88-88)core/schemas/provider.go (1)
Provider(312-359)core/schemas/account.go (1)
Key(8-19)
core/providers/cohere/cohere.go (3)
core/schemas/batch.go (7)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)core/schemas/bifrost.go (4)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
framework/configstore/rdb.go (2)
core/schemas/account.go (2)
BedrockKeyConfig(56-64)BatchS3Config(50-52)ui/lib/types/config.ts (2)
BedrockKeyConfig(63-71)BatchS3Config(58-60)
core/schemas/provider.go (1)
core/schemas/account.go (1)
Key(8-19)
core/providers/bedrock/s3.go (1)
core/schemas/batch.go (1)
BatchRequestItem(31-37)
core/providers/ollama/ollama.go (2)
core/schemas/bifrost.go (4)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/elevenlabs/elevenlabs.go (2)
core/schemas/bifrost.go (7)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/azure/azure.go (5)
core/schemas/files.go (9)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)FileObject(40-50)BifrostFileRetrieveRequest(146-159)BifrostFileRetrieveResponse(167-183)BifrostFileDeleteRequest(186-198)BifrostFileDeleteResponse(206-212)BifrostFileContentRequest(215-227)BifrostFileContentResponse(235-241)core/schemas/bifrost.go (7)
BifrostError(461-470)BifrostResponseExtraFields(390-401)RequestType(85-85)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)FileContentRequest(109-109)core/providers/utils/utils.go (2)
ShouldSendBackRawRequest(607-612)ShouldSendBackRawResponse(615-620)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)core/schemas/provider.go (1)
Provider(312-359)
core/providers/mistral/mistral.go (2)
core/schemas/bifrost.go (4)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/cerebras/cerebras.go (4)
core/schemas/files.go (7)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)BifrostFileRetrieveRequest(146-159)BifrostFileDeleteRequest(186-198)BifrostFileDeleteResponse(206-212)BifrostFileContentRequest(215-227)BifrostFileContentResponse(235-241)core/schemas/bifrost.go (7)
BifrostError(461-470)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)core/schemas/batch.go (6)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)BifrostBatchCancelRequest(205-214)BifrostBatchResultsRequest(234-246)
core/providers/openrouter/openrouter.go (5)
core/schemas/account.go (1)
Key(8-19)core/schemas/batch.go (3)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)core/schemas/bifrost.go (6)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)FileRetrieveRequest(107-107)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)core/schemas/files.go (1)
BifrostFileListRequest(109-128)
core/providers/groq/groq.go (4)
core/schemas/batch.go (1)
BifrostBatchListRequest(118-133)core/schemas/bifrost.go (2)
BifrostError(461-470)BatchListRequest(101-101)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)transports/bifrost-http/handlers/inference.go (1)
BatchListRequest(302-307)
core/providers/bedrock/bedrock.go (9)
core/schemas/account.go (2)
Key(8-19)BedrockKeyConfig(56-64)core/schemas/files.go (9)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)FileObject(40-50)BifrostFileRetrieveRequest(146-159)BifrostFileRetrieveResponse(167-183)BifrostFileDeleteRequest(186-198)BifrostFileDeleteResponse(206-212)BifrostFileContentRequest(215-227)BifrostFileContentResponse(235-241)core/schemas/bifrost.go (13)
BifrostError(461-470)BifrostResponseExtraFields(390-401)RequestType(85-85)FileListRequest(106-106)ErrorField(479-486)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)FileContentRequest(109-109)Bedrock(38-38)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)BatchResultsRequest(104-104)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)core/providers/utils/utils.go (3)
NewBifrostOperationError(564-575)NewConfigurationError(550-560)NewProviderAPIError(579-594)core/schemas/provider.go (1)
Provider(312-359)core/schemas/utils.go (1)
Ptr(16-18)core/schemas/batch.go (10)
BifrostBatchRetrieveRequest(150-159)BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)BatchRequestCounts(40-48)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BatchStatusCancelling(14-14)BifrostBatchResultsRequest(234-246)BifrostBatchResultsResponse(285-294)core/providers/bedrock/batch.go (2)
ToBifrostBatchStatus(127-148)BedrockBatchJobResponse(53-69)
🪛 GitHub Actions: PR #1104
framework/configstore/tables/key.go
[warning] 166-166: Go extractor warning: BedrockKeyConfig.BatchS3Config undefined. (BedrockKeyConfig type has no field named BatchS3Config)
🪛 GitHub Check: CodeQL
core/bifrost.go
[failure] 3347-3347: Clear-text logging of sensitive information
Sensitive data returned by an access to SecretKey flows to a logging call.
Sensitive data returned by an access to SecretKey flows to a logging call.
🔇 Additional comments (55)
plugins/maxim/go.mod (1)
3-3: Verify Go 1.25.5 compatibility with all project dependencies and workflows.The Go version is being bumped from 1.24.3 to 1.25.5. While the PR checklist indicates Go builds have been verified, confirm that this version bump does not introduce any breaking changes or compatibility issues with the project's dependency tree (particularly the AWS SDK, database drivers, and protocol buffer dependencies listed below).
plugins/semanticcache/go.mod (1)
3-3: Go version upgrade is straightforward; verify full stack for compatibility.The toolchain version bump from 1.24.3 to 1.25.5 is a minor, low-risk change. Since you've noted in the PR checklist that Go builds were verified locally and in CI, this appears safe.
However, I have visibility only into this
go.modfile. To confirm the full scope of this PR—which objectives claim multi-key batch/file API support—I'd need to review the actual implementation files (provider interfaces, core routing, DB schema changes, migrations, etc.). Those files should contain the substantive changes referenced in the AI summary.Can you provide the full set of changed files in this PR or its stack so I can review the batch/file API implementation changes? Alternatively, if this is a stack-managed PR, point me to the other PR(s) in the stack so I can understand the full context.
plugins/otel/go.mod (1)
3-3: Toolchain upgrade to Go 1.25.5 is consistent with repo-wide changes.No dependency modifications; all pinned versions remain unchanged. This toolchain bump aligns with the coordinated upgrades across other modules in the repository.
plugins/governance/go.mod (1)
3-3: Go toolchain version bump is consistent with repository-wide upgrade.core/go.mod (1)
3-3: Go toolchain upgrade applied consistently.plugins/mocker/go.mod (1)
3-3: Toolchain version bump applied consistently with other modules.transports/go.mod (1)
3-3: Go toolchain upgrade is consistent across modules.plugins/telemetry/go.mod (1)
3-3: Toolchain upgrade aligns with repo-wide changes.plugins/logging/go.mod (1)
3-3: Go toolchain upgrade is consistent with coordinated repo updates.ui/package.json (2)
46-46: Dependency updates look good.lucide-react version 0.552.0 exists and has no known security vulnerabilities. The update from ^0.542.0 improves reproducibility by using exact version pinning. Similarly, zod 4.2.1 has no direct vulnerabilities and is a safe patch upgrade within Zod 4.
63-63: Test batch API form validations after updating Zod.Zod 4.2.1 is a stable release and represents a safe minor version update from 4.1.5 within Zod 4. No vulnerabilities have been reported for Zod 4.x versions. Run the UI test suite (
pnpm test) to ensure form validation behavior remains consistent with this update.ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (4)
4-4: LGTM!The new imports support batch API functionality (Button, Switch) and S3 bucket management (Plus, Trash2 icons), plus the ModelMultiselect component for improved model selection.
Also applies to: 7-7, 9-9, 14-14
17-47: LGTM!The batch API support constant and form field are well-implemented. The provider list clearly defines which providers support batch operations, and the form field provides a clean toggle with helpful descriptions.
177-177: LGTM!The migration from TagInput to ModelMultiselect improves UX with async model loading, better multi-selection, and provider-aware model fetching.
183-183: LGTM!The conditional rendering strategy correctly places the batch API toggle: generic providers (OpenAI, Anthropic, Gemini) render it at line 183, while Azure and Bedrock render it within their provider-specific configuration sections for better grouping.
Also applies to: 251-251, 532-532
core/internal/testutil/batch.go (1)
23-40: Pointer-basedModelwiring in batch tests looks consistentUpdating
BifrostBatchCreateRequest.Modeltoschemas.Ptr(testConfig.ChatModel)across the batch tests correctly matches the new*stringfield while preserving the raw"model"in request bodies for downstream providers. No issues spotted.Also applies to: 114-131, 186-203, 296-312, 691-699
core/providers/elevenlabs/elevenlabs.go (1)
720-737: Multi‑key signatures for Elevenlabs batch/file ops match the interfaceSwitching these unsupported methods to accept
[]schemas.Keykeeps behavior the same while aligning Elevenlabs with the updatedProviderinterface for batch and file operations. No further changes needed.Also applies to: 745-762
core/providers/cerebras/cerebras.go (1)
219-237: Cerebras batch/file methods correctly updated to[]schemas.KeyThe signature changes to accept
[]schemas.Keyfor unsupported batch and file operations are consistent with the new multi-key Provider contract and keep the existing “unsupported_operation” behavior intact.Also applies to: 245-262
framework/configstore/rdb.go (1)
241-250: UseForBatchAPI/Enabled propagation between DB and public config looks correctPersisting
UseForBatchAPIontables.TableKeyin UpdateProvidersConfig/UpdateProvider/AddProvider and then surfacing bothEnabledandUseForBatchAPIback intoschemas.Keyin GetProvidersConfig matches the new batch‑routing semantics and keeps “Enabled” as the runtime toggle while letting the config drive batch eligibility. This is consistent across all three write paths.Also applies to: 379-388, 499-508, 672-680
transports/bifrost-http/handlers/inference.go (1)
1232-1266: BatchCreate now correctly populates pointerModelCreating a local
*stringfrommodelNameand wiring it intoBifrostBatchCreateRequest.Modelaligns this handler with the new optional*stringschema while preserving existing validation and extra‑params handling. Looks good.core/schemas/pagination.go (1)
9-28: SerialCursor encoding/versioning utilities look correctThe
SerialCursorshape plusEncodeSerialCursor/NewSerialCursorprovide a compact, URL‑safe cursor with explicit versioning. Aside from the Decode comment nit, the implementation is straightforward and suitable for multi‑key serial pagination.Also applies to: 55-62
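For readers outside this package, the cursor is conceptually just a versioned (key index, native cursor) pair serialized as URL-safe base64 JSON. A self-contained illustration of that shape (local types only; not the exact schemas.SerialCursor fields or function signatures):

package main

import (
    "encoding/base64"
    "encoding/json"
    "fmt"
)

// serialCursor mirrors the idea: which key we are on, the provider-native
// cursor for that key, and a version for forward compatibility.
type serialCursor struct {
    Version      int    `json:"v"`
    KeyIndex     int    `json:"k"`
    NativeCursor string `json:"n"`
}

func encode(c serialCursor) (string, error) {
    b, err := json.Marshal(c)
    if err != nil {
        return "", err
    }
    return base64.RawURLEncoding.EncodeToString(b), nil
}

func decode(s string) (serialCursor, error) {
    var c serialCursor
    b, err := base64.RawURLEncoding.DecodeString(s)
    if err != nil {
        return c, err
    }
    return c, json.Unmarshal(b, &c)
}

func main() {
    token, _ := encode(serialCursor{Version: 1, KeyIndex: 2, NativeCursor: "file_abc"})
    fmt.Println(token) // opaque, URL-safe token handed back to the client
    c, _ := decode(token)
    fmt.Printf("%+v\n", c)
}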
core/providers/utils/pagination.go (2)
39-58: LGTM — GetCurrentKey handles edge cases correctly.The method properly handles empty keys slice, nil cursor (defaults to index 0), and out-of-bounds cursor index. The logic is clear and defensive.
68-94: LGTM — BuildNextCursor correctly manages pagination state transitions.The logic properly handles:
- Empty keys (returns exhausted)
- Current key has more pages (returns cursor for same key)
- Current key exhausted with more keys remaining (advances to next key)
- All keys exhausted (returns empty cursor)
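For reference, the transitions listed above reduce to a few lines of logic. An illustrative version with stand-in types (not the helper's real signature):

package main

import "fmt"

type cursor struct {
    KeyIndex     int
    NativeCursor string
}

// nextCursor mirrors the described behavior: stay on the current key while the
// provider reports more pages, advance to the next key when it is exhausted,
// and return nil once every key has been walked.
func nextCursor(numKeys, currentKey int, providerNext string) *cursor {
    if numKeys == 0 {
        return nil // nothing to paginate
    }
    if providerNext != "" {
        return &cursor{KeyIndex: currentKey, NativeCursor: providerNext}
    }
    if currentKey+1 < numKeys {
        return &cursor{KeyIndex: currentKey + 1} // start next key from its first page
    }
    return nil // all keys exhausted
}

func main() {
    fmt.Println(nextCursor(3, 0, "page-2")) // same key, next native page
    fmt.Println(nextCursor(3, 0, ""))       // advance to key 1
    fmt.Println(nextCursor(3, 2, ""))       // nil: done
}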
core/internal/testutil/account.go (1)
117-314: LGTM — UseForBatchAPI flags consistently applied across providers.The
UseForBatchAPI: bifrost.Ptr(true)flag is correctly added to enable batch API testing for OpenAI, Anthropic, Cohere, Azure, Vertex, Mistral, Groq, Parasail, Elevenlabs, Perplexity, Cerebras, Gemini, OpenRouter, and Nebius providers.framework/configstore/tables/key.go (3)
48-51: LGTM — New persistence fields for batch API configuration.The new fields
BedrockBatchS3ConfigJSON(for Bedrock batch S3 config) andUseForBatchAPI(with default false) are correctly defined with appropriate GORM tags.
258-291: LGTM — AfterFind correctly reconstructs BatchS3Config.The condition at line 258 properly includes
BedrockBatchS3ConfigJSONpresence check, and the deserialization logic (lines 283-289) correctly unmarshals the JSON into aBatchS3Configstruct and assigns it tobedrockConfig.BatchS3Config.
166-175: The code at lines 166-175 is correct. TheBatchS3Configfield exists inschemas.BedrockKeyConfig(defined incore/schemas/account.goat line 63) and is properly used for JSON serialization in theBeforeSavehook.Likely an incorrect or invalid review comment.
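For reviewers less familiar with the table hooks, the persistence pattern being validated here is a plain JSON round-trip between the structured config and a nullable text column. A self-contained illustration (local types; the real hooks live on tables.TableKey and use sonic on the write path):

package main

import (
    "encoding/json"
    "fmt"
)

type batchS3Config struct {
    Buckets []string `json:"buckets"`
}

type tableKey struct {
    BatchS3Config     *batchS3Config // structured form used by the app
    BatchS3ConfigJSON *string        // serialized form stored in the DB column
}

// beforeSave serializes the structured config into the JSON column,
// clearing the column when the config is removed.
func (k *tableKey) beforeSave() error {
    if k.BatchS3Config == nil {
        k.BatchS3ConfigJSON = nil
        return nil
    }
    b, err := json.Marshal(k.BatchS3Config)
    if err != nil {
        return err
    }
    s := string(b)
    k.BatchS3ConfigJSON = &s
    return nil
}

// afterFind rebuilds the structured config from the stored JSON, if any.
func (k *tableKey) afterFind() error {
    if k.BatchS3ConfigJSON == nil {
        return nil
    }
    var cfg batchS3Config
    if err := json.Unmarshal([]byte(*k.BatchS3ConfigJSON), &cfg); err != nil {
        return err
    }
    k.BatchS3Config = &cfg
    return nil
}

func main() {
    k := tableKey{BatchS3Config: &batchS3Config{Buckets: []string{"my-batch-bucket"}}}
    _ = k.beforeSave()
    k.BatchS3Config = nil
    _ = k.afterFind()
    fmt.Println(*k.BatchS3ConfigJSON, k.BatchS3Config.Buckets)
}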
core/bifrost.go (4)
3220-3288: LGTM — Multi-key retrieval for batch/file operations is well-implemented.The
getKeysForBatchAndFileOpsfunction correctly:
- Respects direct key override from context
- Filters disabled keys
- Filters by
UseForBatchAPIfor batch operations- Applies model filtering when specified
- Sorts keys by ID for deterministic pagination order
- Returns clear error messages distinguishing batch vs non-batch failures
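For context, the selection described above is essentially a filter-then-sort pass. A simplified sketch with stand-in types (the real code operates on schemas.Key and also handles deployment-style model matching, so treat this as illustrative only):

package main

import (
    "fmt"
    "sort"
)

type key struct {
    ID             string
    Enabled        *bool
    UseForBatchAPI *bool
    Models         []string
}

func keysForBatchOps(keys []key, model string, isBatchOp bool) []key {
    var out []key
    for _, k := range keys {
        if k.Enabled != nil && !*k.Enabled {
            continue // disabled keys never participate
        }
        if isBatchOp && (k.UseForBatchAPI == nil || !*k.UseForBatchAPI) {
            continue // batch/file ops only use explicitly opted-in keys
        }
        if model != "" && len(k.Models) > 0 && !contains(k.Models, model) {
            continue // honor per-key model restrictions when a model is given
        }
        out = append(out, k)
    }
    // deterministic order so serial pagination cursors stay stable across calls
    sort.Slice(out, func(i, j int) bool { return out[i].ID < out[j].ID })
    return out
}

func contains(list []string, s string) bool {
    for _, v := range list {
        if v == s {
            return true
        }
    }
    return false
}

func main() {
    t := true
    for _, k := range keysForBatchOps([]key{
        {ID: "b", Enabled: &t, UseForBatchAPI: &t},
        {ID: "a", Enabled: &t},
    }, "", true) {
        fmt.Println(k.ID) // only "b" survives the batch filter
    }
}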
2717-2767: LGTM — requestWorker correctly routes single-key vs multi-key operations.The logic properly distinguishes:
BatchCreateandFileUploaduse single key selection- Other batch/file operations (
BatchList,BatchRetrieve, etc.) use multi-key retrieval- Keys are passed to
handleProviderRequestfor downstream provider calls
2849-2850: LGTM — handleProviderRequest signature change supports dual-path operations.The updated signature accepting both
key schemas.Keyandkeys []schemas.Keyallows the function to handle both single-key operations (chat, completion, etc.) and multi-key operations (batch list, file list, etc.) cleanly.
3314-3326: LGTM — Batch API key filtering in selectKeyFromProviderForModel.The added logic correctly filters keys to only include those with
UseForBatchAPIenabled when processing batch or file request types, with a clear error message when no batch-enabled keys are found.core/schemas/provider.go (1)
341-358: Interface updated for multi-key batch and file operations.The changes are correct and complete:
- Multi-key operations (
BatchList,BatchRetrieve,BatchCancel,BatchResults,FileList,FileRetrieve,FileDelete,FileContent) acceptkeys []Keyto enable operations across multiple provider keys- Single-key operations (
BatchCreate,FileUpload) correctly remain withkey Keysince they create resources tied to a specific keyAll provider implementations have been updated to match these new signatures.
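Put differently, the contract now separates "create against one key" from "query across all eligible keys". A trimmed illustration of that shape (method names follow the review; parameter and return types are abbreviated stand-ins for the schemas.* types):

package main

import "context"

type Key struct{ ID string }

// Abbreviated request/response placeholders standing in for the schemas.* types.
type (
    BatchCreateReq struct{}
    BatchListReq   struct{}
    BatchResp      struct{}
)

// batchCapableProvider sketches the split: creation binds to a single key,
// while list/retrieve-style operations receive every eligible key so the
// provider can paginate or fall back across them.
type batchCapableProvider interface {
    BatchCreate(ctx context.Context, key Key, req *BatchCreateReq) (*BatchResp, error)
    FileUpload(ctx context.Context, key Key, data []byte) (*BatchResp, error)

    BatchList(ctx context.Context, keys []Key, req *BatchListReq) (*BatchResp, error)
    BatchRetrieve(ctx context.Context, keys []Key, batchID string) (*BatchResp, error)
    BatchCancel(ctx context.Context, keys []Key, batchID string) (*BatchResp, error)
    BatchResults(ctx context.Context, keys []Key, batchID string) (*BatchResp, error)
    FileList(ctx context.Context, keys []Key, req *BatchListReq) (*BatchResp, error)
    FileRetrieve(ctx context.Context, keys []Key, fileID string) (*BatchResp, error)
    FileDelete(ctx context.Context, keys []Key, fileID string) (*BatchResp, error)
}

func main() {}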
core/providers/parasail/parasail.go (1)
190-233: Multi-key file/batch signatures for Parasail are correctly aligned with the Provider interfaceAll affected Parasail methods now accept
[]schemas.Keyand still correctly returnNewUnsupportedOperationErrorwith the appropriateRequestType. This keeps the implementation consistent with the updatedProviderinterface without changing runtime behavior.core/providers/ollama/ollama.go (1)
231-274: Ollama batch/file methods correctly switched to multi-key signaturesThe Batch* and File* methods now take
[]schemas.Keyand still immediately returnNewUnsupportedOperationErrorwith the rightRequestType, matching the updated Provider interface without altering behavior.core/providers/cohere/cohere.go (1)
849-892: Cohere batch/file methods now conform to multi-key Provider interfaceThe Cohere Batch* and File* methods correctly switched to
[]schemas.Keyparameters and still returnNewUnsupportedOperationErrorwith the rightRequestType. This keeps the provider aligned with the new interface while maintaining previous behavior.core/providers/openai/openai.go (1)
2244-2372: Serial multi-key FileList/BatchList implementation for OpenAI looks correct and API-friendlyThe new FileList and BatchList implementations:
- Use
NewSerialListHelper+GetCurrentKey/BuildNextCursorto serialize pagination across keys while honoring each key’s native cursor.- Correctly propagate
limit,purpose(FileList), andorder(FileList) into OpenAI query params.- Build result slices with
make(..., 0, len(...))so empty responses marshal as[]rather thannull.- Set
HasMoreand cross-key cursor (After/NextCursor) via the helper.This is consistent with the multi-key pattern used elsewhere and should work well with clients relying on unified pagination semantics.
Also applies to: 2725-2838
core/providers/sgl/sgl.go (1)
229-271: SGL: multi-key batch/file signatures correctly aligned with Provider interfaceThe updated file and batch methods now take
[]schemas.Keywhile still returningNewUnsupportedOperationError, keeping SGL in sync with the multi-key Provider interface without changing behavior.core/providers/nebius/nebius.go (1)
244-285: Nebius: multi-key batch/file method signatures updated without behavior changeAll batch and file methods now accept
[]schemas.Keyand still return an unsupported-operation error, which is consistent with the new Provider interface and preserves existing semantics.core/providers/groq/groq.go (1)
258-300: Groq: batch/file APIs now multi-key and remain explicitly unsupportedThe batch and file methods now take
[]schemas.Keyand consistently returnNewUnsupportedOperationErrorwith the correct request types, matching the multi-key Provider interface while keeping Groq’s unsupported status explicit.core/providers/openrouter/openrouter.go (1)
291-333: OpenRouter: multi-key batch/file signatures with unchanged unsupported behaviorBatch and file methods now accept slices of keys and still use
NewUnsupportedOperationErrorwith the appropriateRequestType, keeping OpenRouter compatible with the multi-key Provider interface without altering behavior.core/providers/mistral/mistral.go (1)
266-308: Mistral: batch/file methods migrated to []Key and kept unsupportedThe batch and file operations now accept
[]schemas.Keyand still returnNewUnsupportedOperationErrorfor the appropriate request types, which is consistent with the multi-key Provider contract and current feature set.core/providers/gemini/gemini.go (14)
1629-1734: LGTM!The per-key batch list helper is well-structured. Error handling correctly returns an empty list for unsupported endpoints (404/405) while propagating other errors. Latency tracking and response conversion follow established patterns.
1736-1810: LGTM! Multi-key serial pagination is correctly implemented.The BatchList function properly orchestrates serial pagination across multiple keys using the
SerialListHelper. The implementation:
- Validates keys before processing
- Handles exhausted keys gracefully
- Correctly maps between Bifrost's PageToken and native cursors
- Propagates latency and builds appropriate next cursors
1812-1894: LGTM!The per-key batch retrieve helper is well-implemented with proper error handling and response construction. The logic correctly determines if a batch job is done based on its state.
1927-1990: LGTM!The per-key batch cancel helper is correctly implemented with appropriate error handling for both supported and unsupported scenarios. Debug logging aids troubleshooting.
1992-2022: LGTM!BatchCancel correctly implements the multi-key fallback pattern, consistent with other operations. The implementation properly validates inputs and tries each key sequentially.
2046-2202: LGTM!The per-key batch results helper handles the complexity of Gemini's dual result formats (file-based and inline) correctly. The implementation:
- Properly checks processing state before attempting retrieval
- Handles both file-based and inline response formats
- Provides informative messages when results aren't available
- Includes parse error tracking
2204-2235: LGTM!BatchResults correctly implements the multi-key fallback pattern. The function properly validates inputs and iterates through keys until results are successfully retrieved.
2373-2474: LGTM!The per-key file list helper is well-implemented with proper pagination support. File metadata conversion and error handling follow established patterns. Latency tracking is correctly propagated.
2476-2549: LGTM!FileList correctly implements serial pagination across multiple keys, mirroring the BatchList pattern. The implementation properly handles cursor mapping between Bifrost's After field and Gemini's native pageToken.
2551-2633: LGTM!The per-key file retrieve helper correctly handles Gemini's file ID format requirements and properly converts metadata to Bifrost format. Implementation is clean and follows established patterns.
2635-2664: LGTM!FileRetrieve correctly implements the multi-key fallback pattern. Input validation and key iteration follow the established approach used in other operations.
2666-2715: LGTM!The per-key file delete helper correctly implements the DELETE operation with appropriate handling of both 200 and 204 success status codes. Implementation is clean and follows established patterns.
2717-2746: LGTM!FileDelete correctly implements the multi-key fallback pattern. The implementation is consistent with other file and batch operations.
2751-2764: Signature updated for consistency with multi-key API pattern.FileContent's signature was updated to accept
[]schemas.Keyfor consistency with other operations, even though the function returns an unsupported operation error. This maintains interface consistency across all multi-key operations.
Force-pushed from 95bcd45 to d3526f2 (Compare)
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
transports/bifrost-http/handlers/inference.go (2)
1378-1382: Potential panic from unchecked type assertion.The type assertion on line 1380 lacks the
okcheck, which could cause a panic ifbatch_idis not a string. Compare withbatchRetrieve(line 1341) which uses the safer pattern.Apply this diff to add the safety check:
func (h *CompletionHandler) batchCancel(ctx *fasthttp.RequestCtx) { // Get batch ID from URL parameter - batchID := ctx.UserValue("batch_id").(string) - if batchID == "" { + batchID, ok := ctx.UserValue("batch_id").(string) + if !ok || batchID == "" { SendError(ctx, fasthttp.StatusBadRequest, "batch_id is required") return }
1417-1423: Same unchecked type assertion issue.Apply the same fix as
batchCancel:func (h *CompletionHandler) batchResults(ctx *fasthttp.RequestCtx) { // Get batch ID from URL parameter - batchID := ctx.UserValue("batch_id").(string) - if batchID == "" { + batchID, ok := ctx.UserValue("batch_id").(string) + if !ok || batchID == "" { SendError(ctx, fasthttp.StatusBadRequest, "batch_id is required") return }framework/configstore/rdb.go (1)
271-287: Read path confirmed, but fix sonic/json serialization inconsistencyThe read/unmarshal path exists in
framework/configstore/tables/key.go(AfterFindhook, line 285) and correctly hydratesBedrockKeyConfig.BatchS3Configfrom the stored JSON. However, there's a serialization mismatch: writes usesonic.Marshal(rdb.go and tables/key.go:167) while reads usejson.Unmarshal(tables/key.go:285). Usesonic.Unmarshalfor the read path to match the write library, or switch all Bedrock marshalling tojson.Marshalfor consistency with Azure and Vertex providers.Optionally wrap marshal/unmarshal errors with context for better diagnostics, e.g.
fmt.Errorf("failed to marshal bedrock batch_s3_config: %w", err).
♻️ Duplicate comments (5)
core/schemas/account.go (1)
18-18: Backfill for existing keys still not implemented.The comment says "migrated keys default to true" but there's no actual backfill UPDATE in the migration. Either add the backfill logic or update the comment to reflect the actual behavior (NULL/false for existing keys).
Based on past review comments, this issue was already flagged but remains unresolved.
framework/configstore/migrations.go (1)
1781-1803: Migration comment is inconsistent with implementation.The comment on line 1782 states "Existing keys are backfilled with use_for_batch_api = TRUE" but no backfill UPDATE is performed. Compare with
migrationAddEnabledColumnToKeyTable(lines 1121-1123) which correctly backfills:if err := tx.Exec("UPDATE config_keys SET enabled = TRUE WHERE enabled IS NULL").Error; err != nil { return fmt.Errorf("failed to backfill enabled column: %w", err) }Either add the backfill or correct the comment.
This was flagged in past reviews. If the intent is for existing keys to be excluded from batch APIs by default (NULL/false), update the comment. If existing keys should have batch API enabled, add the backfill.
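If the intent is that existing keys should keep batch access, the backfill can mirror the enabled-column migration quoted above. A sketch under that assumption (assumes the same config_keys table and a *gorm.DB transaction, as in the existing migration helpers):

package migrations

import (
    "fmt"

    "gorm.io/gorm"
)

// backfillUseForBatchAPI marks keys that existed before the column was added,
// mirroring how migrationAddEnabledColumnToKeyTable backfills `enabled`.
func backfillUseForBatchAPI(tx *gorm.DB) error {
    if err := tx.Exec("UPDATE config_keys SET use_for_batch_api = TRUE WHERE use_for_batch_api IS NULL").Error; err != nil {
        return fmt.Errorf("failed to backfill use_for_batch_api column: %w", err)
    }
    return nil
}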
ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (1)
539-643:BedrockBatchS3ConfigSectionis defined but never rendered.This component implements full S3 bucket configuration UI for Bedrock batch operations but is not integrated into the JSX. It should be conditionally rendered after the
BatchAPIFormFieldfor Bedrock when batch API is enabled.Based on past review comments, add the component to the Bedrock section:
{supportsBatchAPI && <BatchAPIFormField control={control} form={form} />} + {supportsBatchAPI && <BedrockBatchS3ConfigSection control={control} form={form} />} </div>core/bifrost.go (1)
3347-3347: Security: Clear-text logging of sensitive data (SecretKey).This debug log statement outputs the entire
keysslice, which includes sensitive fields likeSecretKeyfor Bedrock configurations. This is flagged by CodeQL as a security vulnerability.- bifrost.logger.Debug(fmt.Sprintf("filtering keys for model: %s %v", model, keys)) + bifrost.logger.Debug(fmt.Sprintf("filtering keys for model: %s, key count: %d", model, len(keys)))Alternatively, if key identifiers are needed for debugging, log only non-sensitive fields:
keyIDs := make([]string, len(keys)) for i, k := range keys { keyIDs[i] = k.ID } bifrost.logger.Debug(fmt.Sprintf("filtering keys for model: %s, keys: %v", model, keyIDs))core/providers/openai/openai.go (1)
3017-3118: BatchResults’ reuse of BatchRetrieve + JSONL parsing is correct and logging is compatible
- First calls
BatchRetrieve(ctx, keys, ...)to obtainOutputFileID, then checks for empty ID and errors out early if results are not ready.- Then iterates over
keys, performing/v1/files/{output_file_id}/contentGETs until one succeeds:
- Releases
req/respon all paths.- Records
lastErron HTTP or decode failure and continues.- Uses
ParseJSONLwith a callback that:
- Unmarshals each line into
schemas.BatchResultItem.- Logs parsing failures via
provider.logger.Warn("failed to parse batch result line: %v", err)and returns the error so it’s captured inparseResult.Errors.- Builds a
BifrostBatchResultsResponsewithResults,Latency(from the successful download),ParseErrorsif any, andRequestType/Provider.The flow is coherent and resilient to partial JSONL corruption while preserving detailed parse errors.
🧹 Nitpick comments (12)
ui/lib/schemas/providerForm.ts (1)
94-104: Bedrock batch S3 + use_for_batch_api wiring looks good; consider tightening validationThe new
S3BucketConfigSchema/BatchS3ConfigSchema,bedrock_key_config.batch_s3_config, andkey.use_for_batch_apifields align with the Go schemas and should serialize cleanly through the form.You may want to optionally add a couple of refinements:
- Enforce that, if
batch_s3_configis present,buckets(if provided) is non-empty.- Enforce at most one
is_defaultbucket to avoid ambiguous configuration.Those can be layered as
.superRefineonBatchS3ConfigSchemawithout changing the external shape.Also applies to: 105-119, 146-156
core/schemas/account.go (1)
41-46: Consider using*boolforIsDefaultto properly supportomitempty.The
IsDefaultfield is a plainboolwithomitempty, which meansfalsevalues will be omitted from JSON output sincefalseis the zero value. If you need to distinguish between "explicitly false" and "not set," use*boolinstead. Iffalsebeing omitted is acceptable, this is fine.ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (1)
54-54: Remove unused variableuseForBatchAPI.The variable is watched but never used in the component. This appears to be leftover from an incomplete implementation.
const supportsBatchAPI = BATCH_SUPPORTED_PROVIDERS.includes(providerName); - const useForBatchAPI = form.watch("key.use_for_batch_api");ui/lib/types/config.ts (1)
98-106: Consider addinguse_for_batch_apito DefaultModelProviderKey for consistency.The new
use_for_batch_apifield is not included inDefaultModelProviderKey. Whileundefinedwill default tofalseper the Go implementation, consider explicitly setting it for clarity:export const DefaultModelProviderKey: ModelProviderKey = { id: "", name: "", value: "", models: [], weight: 1.0, enabled: true, + use_for_batch_api: false, };core/internal/testutil/account.go (1)
163-181: Formatting inconsistency in new Bedrock key block.The new Bedrock key entry has inconsistent indentation compared to other entries in the slice. The opening brace should be on the same line as a comma from the previous entry, and the content should be indented with a tab.
- { - Models: []string{}, - Weight: 1.0, - BedrockKeyConfig: &schemas.BedrockKeyConfig{ + { + Models: []string{}, + Weight: 1.0, + BedrockKeyConfig: &schemas.BedrockKeyConfig{core/providers/utils/pagination.go (2)
10-14: Logger field is stored but unused.The
Loggerfield is passed toNewSerialListHelperand stored in the struct, but it's never used by any of the helper methods. Consider either using it for debug logging or removing it to avoid confusion.type SerialListHelper struct { Keys []schemas.Key Cursor *schemas.SerialCursor - Logger schemas.Logger }Or if logging is intended for future use, add a brief comment explaining the purpose.
19-34: Consider validating cursor KeyIndex bounds during construction.If the encoded cursor contains a
KeyIndexthat exceedslen(keys), this won't be detected untilGetCurrentKey()is called. Consider adding bounds validation in the constructor:if encodedCursor != nil && *encodedCursor != "" { cursor, err := schemas.DecodeSerialCursor(*encodedCursor) if err != nil { return nil, err } + if cursor.KeyIndex >= len(keys) { + return nil, fmt.Errorf("cursor key index %d out of bounds (have %d keys)", cursor.KeyIndex, len(keys)) + } helper.Cursor = cursor }This provides a clearer error message upfront rather than silently returning
falsefromGetCurrentKey().core/providers/anthropic/anthropic.go (2)
1659-2057: Anthropic Files multi-key implementation is correct; same empty-keys guard + minor doc nitThe new multi-key versions of
FileList,FileRetrieve,FileDelete, andFileContentare consistent and look correct:
FileListleveragesNewSerialListHelperoverrequest.After, uses the helper’s native cursor per key (after_id), and builds an encoded cursor viaBuildNextCursorintoBifrostFileListResponse.After. File conversions (AnthropicFileListResponse→schemas.FileObject) are straightforward and reuse existing timestamp handling.FileRetrieve,FileDelete, andFileContentcorrectly:
- Iterate keys in order, setting
x-api-key,anthropic-version, andanthropic-beta.- On each key, perform the appropriate HTTP method and decode/convert into Bifrost responses or raw content.
- Track
lastErrso failures on some keys don’t prevent trying remaining keys.Two small follow-ups:
Same as for batch methods: if
keysis empty, these functions currently fall through and return(nil, nil). Adding an earlyif len(keys) == 0 { return nil, NewConfigurationError("no keys provided for files operation", providerName) }(or similar) would make misuse fail loudly instead of returning an ambiguous success withnilresponse.The duplicated doc comments above
FileList(“lists files from all provided keys and aggregates results” vs “lists files using serial pagination across keys”) say essentially the same thing. You could trim to a single, focused comment to keep docs tight.
1112-1231: Anthropic batch multi-key behavior is solid; consider guarding empty key slicesThe multi-key implementations for
BatchRetrieve(line 1232),BatchCancel(line 1314), andBatchResults(line 1422) correctly implement "try each key until found/successful" patterns:
- All iterate keys, collect per-key errors in
lastErr, and return on first success.- Each properly sets headers (
x-api-key,anthropic-version), handles decode/parse errors, and adapts Anthropic responses to Bifrost formats.However, there is an unguarded edge case: if
keysis empty, thefor _, key := range keysloop never executes,lastErrremains nil, and these methods return(nil, nil). While this may be intentional, it creates an ambiguous success state without data and without an explicit error. Adding an early check—if len(keys) == 0 { return nil, providerUtils.NewBifrostOperationError("no keys provided", nil, providerName) }—would improve clarity and prevent nil returns that might be misinterpreted upstream.The pattern of preserving only the last per-key error is acceptable for now; future observability improvements could aggregate per-key failures into
ExtraFields.ParseErrorsor similar if needed.core/providers/bedrock/bedrock.go (3)
1591-1885: S3 multi-key FileRetrieve/Delete/Content logic is sound; add an explicit empty-keys errorThe new multi-key S3 helpers:
FileRetrieve: parsesfile_idas S3 URI once, then iterates keys performingHEADagainst the bucket with per-key region and signed request, returning on first 200 and constructingBifrostFileRetrieveResponsefrom headers.FileDelete: similarly loops over keys performing signedDELETEand returns success on first 204/200; otherwise accumulates the last failure.FileContent: loops over keys doing signedGET, returning content +Content-Typeon first 200 and capturing per-call latency.All response bodies are read and closed on every path, and S3 URL construction correctly uses
escapeS3KeyForURL.As with Anthropic, the only robustness gap is:
- If
keysis empty, these helpers fall through and return(nil, nil). Upstream code is unlikely to expect anilresponse with no error. Adding a shared guard (e.g.,if len(keys) == 0 { return nil, NewConfigurationError("no keys provided for Bedrock file operation", providerName) }) would avoid silent misconfiguration.
1903-1936: BatchCreate: clearer role ARN precedence, model validation, and integration with new BatchRetrieveThe
BatchCreatechanges look good:
- Role resolution now sensibly prefers
request.ExtraParams["role_arn"]and falls back tokey.BedrockKeyConfig.ARNif present, only erroring when neither is set.- A missing
request.Modelis now rejected early with a clear error instead of flowing into later logic.modelIDproperly respects per-keyDeploymentsmapping, defaulting back to the requested model when no mapping exists.- After
CreateModelInvocationJob, you now call the multi-keyBatchRetrievewith[]schemas.Key{key}to enrich the create response with real job status, falling back to a minimalVALIDATINGresponse if retrieval fails.All of this aligns well with the new multi-key job retrieval path and improves configuration ergonomics for Bedrock batch users.
Also applies to: 2092-2113
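For reviewers skimming the diff, the precedence can be summarized in a few lines. A simplified sketch with stand-in types (the real code reads request.ExtraParams and key.BedrockKeyConfig; the names below are illustrative):

package main

import (
    "errors"
    "fmt"
)

type bedrockKeyConfig struct {
    ARN         *string
    Deployments map[string]string // requested model -> Bedrock model ID
}

// resolveBatchJobParams mirrors the described precedence: an explicit role_arn
// in extra params wins, then the key's configured ARN; the model ID uses the
// per-key deployment mapping when one exists.
func resolveBatchJobParams(extraParams map[string]any, cfg *bedrockKeyConfig, model string) (roleArn, modelID string, err error) {
    if v, ok := extraParams["role_arn"].(string); ok && v != "" {
        roleArn = v
    } else if cfg != nil && cfg.ARN != nil {
        roleArn = *cfg.ARN
    } else {
        return "", "", errors.New("no role ARN provided in request extra params or key config")
    }

    modelID = model
    if cfg != nil {
        if mapped, ok := cfg.Deployments[model]; ok {
            modelID = mapped
        }
    }
    return roleArn, modelID, nil
}

func main() {
    arn := "arn:aws:iam::123456789012:role/example"
    role, id, _ := resolveBatchJobParams(nil, &bedrockKeyConfig{ARN: &arn}, "example-model-id")
    fmt.Println(role, id)
}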
2132-2285: Add upfront guards inBatchRetrieveandBatchCancelfor empty keys; fix latency reporting inconsistency inBatchCancelThe multi-key batch operations handle pagination and key iteration correctly, but two minor refinements improve clarity and consistency:
Empty keys defense –
BatchRetrieve,BatchCancel, andBatchResultscurrently loop overkeyswithout an explicit guard. Ifkeysis empty, they return(nil, nil)without a clear error. Although Bifrost validates keys before invoking provider methods, adding a defensive check at the provider level (similar to the explicit check inBatchListviaSerialListHelper.GetCurrentKey()) makes this failure mode unambiguous and matches the defensive pattern used elsewhere in the codebase.Latency reporting in
BatchCancel– The success path (after retrieve succeeds) reports only the stop call's latency, while the failure path (after retrieve fails) computestotalLatency := time.Since(startTime)covering both stop and retrieve. For consistency, measure and report the combined stop+retrieve latency in both cases.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
ui/package-lock.jsonis excluded by!**/package-lock.jsonui/public/images/nebius.jpegis excluded by!**/*.jpeg
📒 Files selected for processing (62)
Makefile(1 hunks)core/bifrost.go(16 hunks)core/go.mod(1 hunks)core/internal/testutil/account.go(6 hunks)core/internal/testutil/batch.go(5 hunks)core/providers/anthropic/anthropic.go(12 hunks)core/providers/azure/azure.go(8 hunks)core/providers/bedrock/batch.go(2 hunks)core/providers/bedrock/bedrock.go(22 hunks)core/providers/bedrock/s3.go(1 hunks)core/providers/cerebras/cerebras.go(2 hunks)core/providers/cohere/cohere.go(2 hunks)core/providers/elevenlabs/elevenlabs.go(2 hunks)core/providers/gemini/gemini.go(21 hunks)core/providers/groq/groq.go(2 hunks)core/providers/mistral/mistral.go(2 hunks)core/providers/nebius/nebius.go(2 hunks)core/providers/ollama/ollama.go(2 hunks)core/providers/openai/openai.go(14 hunks)core/providers/openrouter/openrouter.go(2 hunks)core/providers/parasail/parasail.go(2 hunks)core/providers/perplexity/perplexity.go(2 hunks)core/providers/sgl/sgl.go(2 hunks)core/providers/utils/pagination.go(1 hunks)core/providers/vertex/vertex.go(2 hunks)core/schemas/account.go(2 hunks)core/schemas/batch.go(5 hunks)core/schemas/bifrost.go(1 hunks)core/schemas/files.go(5 hunks)core/schemas/pagination.go(1 hunks)core/schemas/provider.go(1 hunks)core/utils.go(1 hunks)examples/plugins/hello-world/go.mod(1 hunks)framework/configstore/migrations.go(2 hunks)framework/configstore/rdb.go(9 hunks)framework/configstore/tables/key.go(4 hunks)framework/go.mod(1 hunks)plugins/governance/go.mod(1 hunks)plugins/jsonparser/go.mod(1 hunks)plugins/logging/go.mod(1 hunks)plugins/maxim/go.mod(1 hunks)plugins/mocker/go.mod(1 hunks)plugins/otel/go.mod(1 hunks)plugins/semanticcache/go.mod(1 hunks)plugins/telemetry/go.mod(1 hunks)tests/integrations/config.json(5 hunks)tests/integrations/tests/test_bedrock.py(2 hunks)tests/integrations/tests/test_openai.py(5 hunks)tests/scripts/1millogs/go.mod(1 hunks)transports/bifrost-http/handlers/inference.go(1 hunks)transports/bifrost-http/integrations/anthropic.go(1 hunks)transports/bifrost-http/lib/config.go(4 hunks)transports/config.schema.json(1 hunks)transports/go.mod(1 hunks)ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx(4 hunks)ui/app/workspace/providers/views/providerKeyForm.tsx(0 hunks)ui/components/ui/modelMultiselect.tsx(1 hunks)ui/components/ui/tagInput.tsx(1 hunks)ui/lib/schemas/providerForm.ts(3 hunks)ui/lib/types/config.ts(6 hunks)ui/lib/types/schemas.ts(3 hunks)ui/package.json(2 hunks)
💤 Files with no reviewable changes (1)
- ui/app/workspace/providers/views/providerKeyForm.tsx
🚧 Files skipped from review as they are similar to previous changes (26)
- Makefile
- core/utils.go
- plugins/otel/go.mod
- tests/integrations/config.json
- plugins/semanticcache/go.mod
- transports/bifrost-http/integrations/anthropic.go
- tests/scripts/1millogs/go.mod
- core/providers/bedrock/batch.go
- plugins/maxim/go.mod
- transports/config.schema.json
- core/providers/bedrock/s3.go
- core/schemas/pagination.go
- plugins/logging/go.mod
- ui/components/ui/modelMultiselect.tsx
- transports/go.mod
- core/schemas/files.go
- ui/package.json
- tests/integrations/tests/test_bedrock.py
- core/providers/mistral/mistral.go
- plugins/governance/go.mod
- core/schemas/provider.go
- core/providers/cohere/cohere.go
- plugins/mocker/go.mod
- core/go.mod
- framework/go.mod
- core/providers/parasail/parasail.go
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
plugins/jsonparser/go.modui/lib/types/schemas.tscore/providers/ollama/ollama.gotransports/bifrost-http/lib/config.goui/lib/schemas/providerForm.tscore/schemas/account.gocore/providers/utils/pagination.gotransports/bifrost-http/handlers/inference.gocore/schemas/bifrost.goui/app/workspace/providers/fragments/apiKeysFormFragment.tsxplugins/telemetry/go.modcore/schemas/batch.goframework/configstore/tables/key.goframework/configstore/migrations.gocore/providers/vertex/vertex.gocore/bifrost.gotests/integrations/tests/test_openai.pycore/providers/nebius/nebius.goframework/configstore/rdb.goexamples/plugins/hello-world/go.modui/lib/types/config.tscore/providers/openai/openai.gocore/internal/testutil/batch.goui/components/ui/tagInput.tsxcore/providers/perplexity/perplexity.gocore/providers/azure/azure.gocore/providers/sgl/sgl.gocore/providers/groq/groq.gocore/providers/anthropic/anthropic.gocore/internal/testutil/account.gocore/providers/cerebras/cerebras.gocore/providers/elevenlabs/elevenlabs.gocore/providers/bedrock/bedrock.gocore/providers/openrouter/openrouter.gocore/providers/gemini/gemini.go
🧠 Learnings (4)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.
Applied to files:
core/providers/ollama/ollama.gotransports/bifrost-http/lib/config.gocore/schemas/account.gocore/providers/utils/pagination.gotransports/bifrost-http/handlers/inference.gocore/schemas/bifrost.gocore/schemas/batch.goframework/configstore/tables/key.goframework/configstore/migrations.gocore/providers/vertex/vertex.gocore/bifrost.gocore/providers/nebius/nebius.goframework/configstore/rdb.gocore/providers/openai/openai.gocore/internal/testutil/batch.gocore/providers/perplexity/perplexity.gocore/providers/azure/azure.gocore/providers/sgl/sgl.gocore/providers/groq/groq.gocore/providers/anthropic/anthropic.gocore/internal/testutil/account.gocore/providers/cerebras/cerebras.gocore/providers/elevenlabs/elevenlabs.gocore/providers/bedrock/bedrock.gocore/providers/openrouter/openrouter.gocore/providers/gemini/gemini.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.
Applied to files:
transports/bifrost-http/lib/config.gotransports/bifrost-http/handlers/inference.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.
Applied to files:
core/providers/openai/openai.go
📚 Learning: 2025-12-14T14:43:30.902Z
Learnt from: Radheshg04
Repo: maximhq/bifrost PR: 980
File: core/providers/openai/images.go:10-22
Timestamp: 2025-12-14T14:43:30.902Z
Learning: Enforce the OpenAI image generation SSE event type values across the OpenAI image flow in the repository: use "image_generation.partial_image" for partial chunks, "image_generation.completed" for the final result, and "error" for errors. Apply this consistently in schemas, constants, tests, accumulator routing, and UI code within core/providers/openai (and related Go files) to ensure uniform event typing and avoid mismatches.
Applied to files:
core/providers/openai/openai.go
🧬 Code graph analysis (18)
core/providers/ollama/ollama.go (2)
core/schemas/bifrost.go (7)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
transports/bifrost-http/lib/config.go (3)
core/utils.go (1)
Ptr(56-58)core/schemas/utils.go (1)
Ptr(16-18)core/schemas/account.go (2)
BedrockKeyConfig(56-64)BatchS3Config(50-52)
core/schemas/account.go (1)
ui/lib/types/config.ts (2)
S3BucketConfig(52-56)BatchS3Config(58-60)
core/providers/utils/pagination.go (3)
core/schemas/account.go (1)
Key(8-19)core/schemas/pagination.go (4)
SerialCursor(12-16)DecodeSerialCursor(32-53)NewSerialCursor(56-62)EncodeSerialCursor(19-28)core/schemas/logger.go (1)
Logger(28-55)
transports/bifrost-http/handlers/inference.go (3)
core/utils.go (1)
Ptr(56-58)core/schemas/utils.go (1)
Ptr(16-18)core/schemas/batch.go (1)
BifrostBatchCreateRequest(65-83)
core/schemas/bifrost.go (1)
transports/bifrost-http/handlers/inference.go (3)
TranscriptionRequest(285-289)BatchCreateRequest(292-299)BatchListRequest(302-307)
ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (7)
ui/components/ui/form.tsx (5)
FormItem(161-161)FormLabel(162-162)FormDescription(164-164)FormControl(163-163)FormMessage(165-165)ui/components/ui/switch.tsx (1)
Switch(36-36)ui/components/ui/modelMultiselect.tsx (1)
ModelMultiselect(27-192)ui/components/ui/separator.tsx (1)
Separator(43-43)ui/components/ui/button.tsx (1)
Button(70-70)ui/components/ui/alert.tsx (3)
Alert(42-42)AlertTitle(42-42)AlertDescription(42-42)ui/components/ui/input.tsx (1)
Input(15-69)
framework/configstore/tables/key.go (2)
core/schemas/account.go (2)
BedrockKeyConfig(56-64)BatchS3Config(50-52)ui/lib/types/config.ts (2)
BedrockKeyConfig(63-71)BatchS3Config(58-60)
framework/configstore/migrations.go (2)
framework/migrator/migrator.go (3)
New(131-149)DefaultOptions(100-106)Migration(62-69)framework/configstore/tables/key.go (2)
TableKey(13-58)TableKey(61-61)
core/providers/vertex/vertex.go (5)
core/schemas/batch.go (8)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)BifrostBatchResultsResponse(285-294)core/schemas/bifrost.go (6)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)FileRetrieveRequest(107-107)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)transports/bifrost-http/handlers/inference.go (1)
BatchListRequest(302-307)core/schemas/files.go (1)
BifrostFileListRequest(109-128)
ui/lib/types/config.ts (2)
core/schemas/account.go (2)
S3BucketConfig(42-46)BatchS3Config(50-52)core/network/http.go (1)
GlobalProxyType(46-46)
core/internal/testutil/batch.go (1)
core/utils.go (1)
Ptr(56-58)
core/providers/perplexity/perplexity.go (5)
core/schemas/batch.go (3)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)core/schemas/bifrost.go (4)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)ui/lib/types/logs.ts (1)
BifrostError(226-232)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)transports/bifrost-http/handlers/inference.go (1)
BatchListRequest(302-307)
core/providers/sgl/sgl.go (4)
core/schemas/files.go (1)
BifrostFileListRequest(109-128)core/schemas/bifrost.go (3)
BifrostError(461-470)FileListRequest(106-106)BatchListRequest(101-101)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)core/schemas/batch.go (1)
BifrostBatchListRequest(118-133)
core/providers/anthropic/anthropic.go (6)
core/schemas/batch.go (6)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)BifrostBatchRetrieveRequest(150-159)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)core/schemas/bifrost.go (9)
BifrostError(461-470)Anthropic(37-37)BatchListRequest(101-101)BifrostResponseExtraFields(390-401)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)core/providers/utils/utils.go (3)
ShouldSendBackRawResponse(615-620)NewBifrostOperationError(564-575)ShouldSendBackRawRequest(607-612)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)core/schemas/files.go (7)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)FileObject(40-50)BifrostFileRetrieveRequest(146-159)BifrostFileRetrieveResponse(167-183)BifrostFileDeleteRequest(186-198)BifrostFileDeleteResponse(206-212)core/providers/anthropic/types.go (2)
AnthropicFileListResponse(516-521)AnthropicFileResponse(505-513)
core/providers/cerebras/cerebras.go (5)
core/schemas/files.go (8)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)BifrostFileRetrieveRequest(146-159)BifrostFileRetrieveResponse(167-183)BifrostFileDeleteRequest(186-198)BifrostFileDeleteResponse(206-212)BifrostFileContentRequest(215-227)BifrostFileContentResponse(235-241)core/schemas/bifrost.go (7)
BifrostError(461-470)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)ui/lib/types/logs.ts (1)
BifrostError(226-232)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)core/schemas/batch.go (8)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)BifrostBatchResultsResponse(285-294)
core/providers/elevenlabs/elevenlabs.go (3)
core/schemas/account.go (1)
Key(8-19)core/schemas/bifrost.go (4)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/gemini/gemini.go (3)
core/schemas/batch.go (7)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)BifrostBatchRetrieveRequest(150-159)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)core/providers/gemini/types.go (1)
GeminiBatchListResponse(1607-1610)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)
🪛 GitHub Actions: PR #1104
framework/configstore/tables/key.go
[error] 166-166: BedrockKeyConfig.BatchS3Config undefined. (type *schemas.BedrockKeyConfig has no field or method BatchS3Config)
🪛 GitHub Check: CodeQL
core/bifrost.go
[failure] 3347-3347: Clear-text logging of sensitive information
Sensitive data returned by an access to SecretKey flows to a logging call.
d3526f2 to e02b598 (Compare)
e02b598 to e4552d3 (Compare)
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
core/providers/bedrock/s3.go (1)
87-125: Potential nil-pointer panic onmodelIDinConvertBedrockRequestsToJSONL
modelIDis now a*string, but the function unconditionally dereferences it:"modelInput": map[string]interface{}{ "modelId": *modelID, },If any caller passes
nil(e.g., optional model not set on the batch request), this will panic.Consider defensively validating
modelIDand returning an error instead of panicking, e.g.:-func ConvertBedrockRequestsToJSONL(requests []schemas.BatchRequestItem, modelID *string) ([]byte, error) { +func ConvertBedrockRequestsToJSONL(requests []schemas.BatchRequestItem, modelID *string) ([]byte, error) { + if modelID == nil || *modelID == "" { + return nil, fmt.Errorf("modelID is required for Bedrock batch JSONL conversion") + } @@ - "modelInput": map[string]interface{}{ - "modelId": *modelID, - }, + "modelInput": map[string]interface{}{ + "modelId": *modelID, + },(or alternatively, change the parameter back to a plain
stringand dereference at the call site where you can guarantee non-nil.)tests/integrations/tests/test_bedrock.py (1)
1246-1285: Hard-coded S3 bucket and empty roleArn may reduce test configurability. In
test_16_batch_create you now:
- Hard-code
s3_bucket = "bifrost-batch-api-file-upload-testing".- Pass
roleArn=""tocreate_model_invocation_job.If this test runs in environments where that bucket doesn’t exist or isn’t reachable, the S3
put_object call will fail before you hit the try/except that skips on authorization errors. Other tests in this file still take s3_bucket (and batch_role_arn) from integration config, which keeps them environment-configurable. Consider either:
- Deriving s3_bucket from integration settings (as in other tests) or from the key’s batch S3 config, or
The empty
roleArnis fine as long as the Bifrost Bedrock endpoint now sources the actual role from key/config rather than the request.core/providers/openai/openai.go (1)
2374-2447: Guard multi-key “try-until-success” flows against empty key slices.Across FileRetrieve, FileDelete, FileContent, BatchRetrieve, BatchCancel, and the inner loop of BatchResults, you follow a
var lastErr *BifrostError; for _, key := range keys { ... }; return nil, lastErrpattern. Ifkeysis empty, these methods currently return(nil, nil), which is ambiguous at the call site and forces callers to special-case nil responses.Consider adding an explicit
if len(keys) == 0 { ... }guard at the top of each of these methods to return a configuration-style error (or at least a clear BifrostError) when no usable keys are available.Also applies to: 2450-2541, 2543-2619, 2840-2913, 2915-3015, 3017-3118
♻️ Duplicate comments (1)
ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (1)
539-644: Critical: BedrockBatchS3ConfigSection is dead code.This component is fully implemented with S3 bucket management functionality (add/remove buckets, set default, form fields) but is never rendered anywhere in the JSX. As noted in the past review comment, this appears to be part of an incomplete feature implementation.
The component should be conditionally rendered in the Bedrock section when batch API support is enabled. Apply this fix after Line 532:
{supportsBatchAPI && <BatchAPIFormField control={control} form={form} />} + {supportsBatchAPI && useForBatchAPI && <BedrockBatchS3ConfigSection control={control} form={form} />} </div> )}Note: This fix also addresses the unused
useForBatchAPIvariable issue (Line 54) by using it to conditionally show the S3 config section only when the batch API toggle is enabled.
🧹 Nitpick comments (9)
ui/app/workspace/logs/views/filters.tsx (1)
323-339: Disable action button during recalculation.The "Recalculate costs" menu item should be disabled while
recalculatingis true to prevent duplicate submissions and provide clear feedback that an operation is in progress.Apply this diff:
- <CommandItem className="cursor-pointer" onSelect={handleRecalculateCosts}> + <CommandItem + className="cursor-pointer" + disabled={recalculating} + onSelect={handleRecalculateCosts} + > <Calculator className="text-muted-foreground size-4" /> - <span className="text-sm">Recalculate costs</span> + <span className="text-sm"> + {recalculating ? "Recalculating..." : "Recalculate costs"} + </span> </CommandItem>ui/components/ui/checkbox.tsx (1)
14-14: Extra space in className string.There's an extra space between "border" and "outline-none" in the className. While CSS parsers handle this gracefully, it's inconsistent with the rest of the string.
Apply this diff to remove the extra space:
- "peer border-input dark:bg-input/30 data-[state=checked]:bg-primary data-[state=checked]:text-primary-foreground dark:data-[state=checked]:bg-primary data-[state=checked]:border-primary focus-visible:border-ring focus-visible:ring-ring/50 aria-invalid:ring-destructive/20 dark:aria-invalid:ring-destructive/40 aria-invalid:border-destructive size-4 shrink-0 rounded-[4px] border outline-none focus-visible:ring-[3px] disabled:cursor-not-allowed disabled:opacity-50", + "peer border-input dark:bg-input/30 data-[state=checked]:bg-primary data-[state=checked]:text-primary-foreground dark:data-[state=checked]:bg-primary data-[state=checked]:border-primary focus-visible:border-ring focus-visible:ring-ring/50 aria-invalid:ring-destructive/20 dark:aria-invalid:ring-destructive/40 aria-invalid:border-destructive size-4 shrink-0 rounded-[4px] border outline-none focus-visible:ring-[3px] disabled:cursor-not-allowed disabled:opacity-50",core/schemas/bifrost.go (1)
183-247: Pointer-aware model handling for file/batch requests looks solid; minor DRY opportunityThe new branches correctly handle
*stringmodels without risking panics and preserve prior behavior (empty model when unset, no fallbacks) for file and batch requests. The repeatedif req.Model != nil { ... } else { ... }pattern across all file/batch cases is slightly verbose; if this grows further, consider a small helper (e.g.,modelOrEmpty(ptr *string) string) to keepGetRequestFieldscompact.core/internal/testutil/account.go (1)
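A minimal sketch of the helper suggested above; the name, signature, and call site are illustrative only, not code from this PR:

```go
package main

import "fmt"

// modelOrEmpty is the kind of small helper suggested above: it collapses
// the repeated nil-vs-empty checks on *string model fields into one place.
func modelOrEmpty(ptr *string) string {
	if ptr == nil {
		return ""
	}
	return *ptr
}

func main() {
	model := "gpt-4o"
	fmt.Printf("%q %q\n", modelOrEmpty(&model), modelOrEmpty(nil)) // "gpt-4o" ""
}
```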
118-315: Per-keyUseForBatchAPIflags and extra Bedrock key are consistent with new batch semanticsMarking test keys with
UseForBatchAPI: bifrost.Ptr(true)across providers, plus introducing a dedicated Bedrock key (usingAWS_BEDROCK_ARNand deployments) gives you fine-grained control over which keys participate in batch flows and matches the PR’s intent. The pattern is consistent across providers and keeps batch concerns test-local.One thing to keep in mind: because these are test utilities, a missing env var will still yield a key with an empty
Value/ARN. If that becomes noisy in CI, consider adding lightweight validation or comments documenting the required env vars for batch tests.tests/integrations/tests/test_openai.py (1)
2103-2106: Includingmodelinextra_queryforbatches.listis reasonablePassing both
providerandmodelintoextra_queryintest_47_batch_listis consistent with per-model batch routing. Even if the backend currently ignoresmodelfor listing, this is backwards compatible and gives you room to filter by model as needed.framework/configstore/rdb.go (1)
668-668: Minor: trailing whitespace.Line 668 has trailing whitespace after the closing brace. Consider removing it for consistency.
core/bifrost.go (2)
2319-2433: Warn-level logging on dropped requests may be noisy under sustained load. The new Warn in both tryRequest and tryStreamRequest when dropExcessRequests is enabled will emit one log per dropped request. At high QPS this can flood logs. Consider either:
Warnin bothtryRequestandtryStreamRequestwhendropExcessRequestsis enabled will emit one log per dropped request. At high QPS this can flood logs.Consider either:
- Downgrading to
Info/Debug, or- Adding basic rate limiting or sampling around this log.
Behaviorally it’s correct; this is purely about operational noise.
Also applies to: 2438-2610
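One way to realize the sampling idea mentioned above; the counter, the 1-in-100 interval, and the message format are assumptions for illustration, not code from this PR:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// droppedCount tracks how many requests have been dropped since startup.
var droppedCount atomic.Uint64

// warnDropped logs roughly one warning per 100 dropped requests instead of
// one warning per drop, which keeps the signal without flooding logs at
// high QPS.
func warnDropped(provider string) {
	if n := droppedCount.Add(1); n%100 == 1 {
		fmt.Printf("WARN: dropping excess requests for provider %s (%d dropped so far)\n", provider, n)
	}
}

func main() {
	for i := 0; i < 250; i++ {
		warnDropped("openai")
	}
	// Prints three warnings (at drops 1, 101, and 201) instead of 250.
}
```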
2693-2769: Multi-key selection +UseForBatchAPIsemantics work but are slightly inconsistent across file endpointsThe new flow in
requestWorkerandgetKeysForBatchAndFileOpsfor batch/file operations looks sound overall:
- Direct key in context (
BifrostContextKeyDirectKey) still short-circuits selection for both list-all and batch/file ops.getKeysForBatchAndFileOpscorrectly:
- Skips disabled keys.
- For batch ops, requires
UseForBatchAPI == true.- Optionally filters by
Modelswhenmodelis provided.- Respects
canProviderKeyValueBeEmpty(baseProviderType)for keyless providers.- Sorts by ID for deterministic pagination.
- Single-key selection still goes through
selectKeyFromProviderForModel, which now:
- Can skip selection entirely for keyless providers.
- Applies model/deployment checks and Azure/Bedrock/Vertex deployment maps.
- Allows explicit key selection via
BifrostContextKeyAPIKeyName.- Falls back to weighted random selection via
keySelector.Two semantic points worth double-checking:
UseForBatchAPIcoverage differs for file operations:
- Batch endpoints:
BatchCreateis single-key and gated viaselectKeyFromProviderForModel, which filters toUseForBatchAPI == trueforisBatchRequestType.BatchList/Retrieve/Cancel/Resultsare multi-key and gated viagetKeysForBatchAndFileOpswithisBatchOp=true, which also enforcesUseForBatchAPI == true.- File endpoints:
FileUploadis single-key and also gated viaselectKeyFromProviderForModelbecauseisFileRequestTypeis true there.FileList/Retrieve/Delete/Contentare multi-key but do not useUseForBatchAPI;getKeysForBatchAndFileOpsskips that check whenisBatchOp=false.That means disabling “Use for Batch APIs” on a key will block it for
Batch*andFileUpload, but the same key can still be used forFileList/FileRetrieve/FileDelete/FileContent.If your intent is “UseForBatchAPI controls all batch + file APIs”, you probably want the file multi-key path to honor it as well (e.g., by passing a separate
isFileOpflag intogetKeysForBatchAndFileOpsand checking it there), and/or to scope theselectKeyFromProviderForModelfilter to onlyisBatchRequestType. Right now behavior is asymmetric.Error message wording for file-only gating:
selectKeyFromProviderForModelreturns:
no config found for batch APIs. Please enable 'Use for Batch APIs' on at least one key for provider: %v- This message is used for both batch and file request types because the guard is
isBatchRequestType(requestType) || isFileRequestType(requestType).If you keep gating file operations with
UseForBatchAPI, consider updating the wording (e.g., “batch/file APIs”) so it matches what actually happens, or tightening the condition to only batch types if that’s the real intent.The implementation is otherwise correct; this is mainly about ensuring the
UseForBatchAPIsemantics and error text match what you plan to expose to users.Also applies to: 2789-2792, 3220-3288, 3293-3326, 3331-3383
core/providers/bedrock/bedrock.go (1)
2498-2608: Latency accounting inconsistency between success and error paths.In BatchCancel, the error path (line 2590) includes the latency of the failed BatchRetrieve call via
time.Since(startTime), but the success path (line 2602) only includes the stop operation latency and excludes the BatchRetrieve latency from line 2575. For consistent observability, both paths should either include or exclude the retrieve operation time.Consider this adjustment for consistent latency tracking on the success path:
// After stopping, retrieve the job to get updated status + retrieveStartTime := time.Now() retrieveResp, bifrostErr := provider.BatchRetrieve(ctx, keys, &schemas.BifrostBatchRetrieveRequest{ Provider: request.Provider, BatchID: request.BatchID, }) if bifrostErr != nil { // Return basic response if retrieve fails // Compute total latency including stop + failed retrieve totalLatency := time.Since(startTime) return &schemas.BifrostBatchCancelResponse{ ID: request.BatchID, Object: "batch", Status: schemas.BatchStatusCancelling, ExtraFields: schemas.BifrostResponseExtraFields{ RequestType: schemas.BatchCancelRequest, Provider: providerName, Latency: totalLatency.Milliseconds(), }, }, nil } + totalLatency := time.Since(startTime) return &schemas.BifrostBatchCancelResponse{ ID: retrieveResp.ID, Object: "batch", Status: retrieveResp.Status, ExtraFields: schemas.BifrostResponseExtraFields{ RequestType: schemas.BatchCancelRequest, Provider: providerName, - Latency: latency.Milliseconds(), + Latency: totalLatency.Milliseconds(), }, }, nil
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
ui/package-lock.jsonis excluded by!**/package-lock.jsonui/public/images/nebius.jpegis excluded by!**/*.jpeg
📒 Files selected for processing (68)
Makefile(1 hunks)core/bifrost.go(16 hunks)core/changelog.md(1 hunks)core/go.mod(1 hunks)core/internal/testutil/account.go(6 hunks)core/internal/testutil/batch.go(5 hunks)core/providers/anthropic/anthropic.go(12 hunks)core/providers/azure/azure.go(8 hunks)core/providers/bedrock/batch.go(2 hunks)core/providers/bedrock/bedrock.go(22 hunks)core/providers/bedrock/s3.go(1 hunks)core/providers/cerebras/cerebras.go(2 hunks)core/providers/cohere/cohere.go(2 hunks)core/providers/elevenlabs/elevenlabs.go(2 hunks)core/providers/gemini/gemini.go(21 hunks)core/providers/groq/groq.go(2 hunks)core/providers/mistral/mistral.go(2 hunks)core/providers/nebius/nebius.go(2 hunks)core/providers/ollama/ollama.go(2 hunks)core/providers/openai/openai.go(14 hunks)core/providers/openrouter/openrouter.go(2 hunks)core/providers/parasail/parasail.go(2 hunks)core/providers/perplexity/perplexity.go(2 hunks)core/providers/sgl/sgl.go(2 hunks)core/providers/utils/pagination.go(1 hunks)core/providers/vertex/vertex.go(2 hunks)core/schemas/account.go(2 hunks)core/schemas/batch.go(5 hunks)core/schemas/bifrost.go(1 hunks)core/schemas/files.go(5 hunks)core/schemas/pagination.go(1 hunks)core/schemas/provider.go(1 hunks)core/utils.go(1 hunks)examples/plugins/hello-world/go.mod(1 hunks)framework/configstore/migrations.go(2 hunks)framework/configstore/rdb.go(9 hunks)framework/configstore/tables/key.go(4 hunks)framework/go.mod(1 hunks)plugins/governance/go.mod(1 hunks)plugins/jsonparser/go.mod(1 hunks)plugins/logging/go.mod(1 hunks)plugins/maxim/go.mod(1 hunks)plugins/mocker/go.mod(1 hunks)plugins/otel/go.mod(1 hunks)plugins/semanticcache/go.mod(1 hunks)plugins/telemetry/go.mod(1 hunks)tests/integrations/config.json(5 hunks)tests/integrations/tests/test_bedrock.py(2 hunks)tests/integrations/tests/test_openai.py(5 hunks)tests/scripts/1millogs/go.mod(1 hunks)transports/bifrost-http/handlers/inference.go(1 hunks)transports/bifrost-http/integrations/anthropic.go(1 hunks)transports/bifrost-http/lib/config.go(4 hunks)transports/changelog.md(1 hunks)transports/config.schema.json(1 hunks)transports/go.mod(1 hunks)ui/app/workspace/logs/page.tsx(4 hunks)ui/app/workspace/logs/views/filters.tsx(7 hunks)ui/app/workspace/logs/views/logsTable.tsx(3 hunks)ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx(4 hunks)ui/app/workspace/providers/views/providerKeyForm.tsx(0 hunks)ui/components/ui/checkbox.tsx(2 hunks)ui/components/ui/modelMultiselect.tsx(1 hunks)ui/components/ui/tagInput.tsx(1 hunks)ui/lib/schemas/providerForm.ts(3 hunks)ui/lib/types/config.ts(6 hunks)ui/lib/types/schemas.ts(3 hunks)ui/package.json(2 hunks)
💤 Files with no reviewable changes (1)
- ui/app/workspace/providers/views/providerKeyForm.tsx
✅ Files skipped from review due to trivial changes (1)
- core/changelog.md
🚧 Files skipped from review as they are similar to previous changes (30)
- transports/bifrost-http/integrations/anthropic.go
- plugins/maxim/go.mod
- transports/go.mod
- ui/lib/types/schemas.ts
- transports/bifrost-http/handlers/inference.go
- plugins/jsonparser/go.mod
- core/internal/testutil/batch.go
- plugins/telemetry/go.mod
- core/utils.go
- core/schemas/account.go
- plugins/otel/go.mod
- core/schemas/batch.go
- core/schemas/files.go
- ui/lib/schemas/providerForm.ts
- core/go.mod
- core/providers/utils/pagination.go
- plugins/mocker/go.mod
- core/providers/vertex/vertex.go
- core/providers/elevenlabs/elevenlabs.go
- Makefile
- core/schemas/pagination.go
- framework/configstore/migrations.go
- core/providers/bedrock/batch.go
- transports/config.schema.json
- framework/go.mod
- ui/components/ui/modelMultiselect.tsx
- tests/integrations/config.json
- core/providers/ollama/ollama.go
- ui/lib/types/config.ts
- tests/scripts/1millogs/go.mod
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
plugins/semanticcache/go.modtransports/changelog.mdui/app/workspace/logs/views/logsTable.tsxtests/integrations/tests/test_bedrock.pyui/components/ui/checkbox.tsxtransports/bifrost-http/lib/config.goui/package.jsonui/app/workspace/providers/fragments/apiKeysFormFragment.tsxframework/configstore/tables/key.gocore/providers/parasail/parasail.goplugins/logging/go.modcore/providers/mistral/mistral.gocore/schemas/bifrost.goui/app/workspace/logs/page.tsxcore/schemas/provider.gotests/integrations/tests/test_openai.pycore/internal/testutil/account.gocore/providers/azure/azure.gocore/bifrost.goframework/configstore/rdb.goui/components/ui/tagInput.tsxcore/providers/bedrock/s3.gocore/providers/groq/groq.goui/app/workspace/logs/views/filters.tsxcore/providers/cerebras/cerebras.goexamples/plugins/hello-world/go.modcore/providers/perplexity/perplexity.gocore/providers/sgl/sgl.gocore/providers/openrouter/openrouter.gocore/providers/openai/openai.gocore/providers/nebius/nebius.goplugins/governance/go.modcore/providers/gemini/gemini.gocore/providers/anthropic/anthropic.gocore/providers/cohere/cohere.gocore/providers/bedrock/bedrock.go
🧠 Learnings (4)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.
Applied to files:
transports/bifrost-http/lib/config.goframework/configstore/tables/key.gocore/providers/parasail/parasail.gocore/providers/mistral/mistral.gocore/schemas/bifrost.gocore/schemas/provider.gocore/internal/testutil/account.gocore/providers/azure/azure.gocore/bifrost.goframework/configstore/rdb.gocore/providers/bedrock/s3.gocore/providers/groq/groq.gocore/providers/cerebras/cerebras.gocore/providers/perplexity/perplexity.gocore/providers/sgl/sgl.gocore/providers/openrouter/openrouter.gocore/providers/openai/openai.gocore/providers/nebius/nebius.gocore/providers/gemini/gemini.gocore/providers/anthropic/anthropic.gocore/providers/cohere/cohere.gocore/providers/bedrock/bedrock.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.
Applied to files:
transports/bifrost-http/lib/config.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.
Applied to files:
core/providers/openai/openai.go
📚 Learning: 2025-12-14T14:43:30.902Z
Learnt from: Radheshg04
Repo: maximhq/bifrost PR: 980
File: core/providers/openai/images.go:10-22
Timestamp: 2025-12-14T14:43:30.902Z
Learning: Enforce the OpenAI image generation SSE event type values across the OpenAI image flow in the repository: use "image_generation.partial_image" for partial chunks, "image_generation.completed" for the final result, and "error" for errors. Apply this consistently in schemas, constants, tests, accumulator routing, and UI code within core/providers/openai (and related Go files) to ensure uniform event typing and avoid mismatches.
Applied to files:
core/providers/openai/openai.go
🧬 Code graph analysis (19)
transports/bifrost-http/lib/config.go (3)
core/utils.go (1)
Ptr(56-58)core/schemas/utils.go (1)
Ptr(16-18)core/schemas/account.go (2)
BedrockKeyConfig(56-64)BatchS3Config(50-52)
framework/configstore/tables/key.go (2)
core/schemas/account.go (2)
BedrockKeyConfig(56-64)BatchS3Config(50-52)ui/lib/types/config.ts (2)
BedrockKeyConfig(63-71)BatchS3Config(58-60)
core/providers/parasail/parasail.go (3)
core/schemas/account.go (1)
Key(8-19)core/schemas/bifrost.go (7)
BifrostError(461-470)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/schemas/bifrost.go (3)
transports/bifrost-http/handlers/inference.go (3)
TranscriptionRequest(285-289)BatchCreateRequest(292-299)BatchListRequest(302-307)core/schemas/provider.go (1)
Provider(312-359)core/schemas/models.go (1)
Model(109-129)
core/schemas/provider.go (3)
core/schemas/account.go (1)
Key(8-19)core/schemas/bifrost.go (1)
BifrostError(461-470)ui/lib/types/logs.ts (1)
BifrostError(226-232)
core/internal/testutil/account.go (3)
core/utils.go (1)
Ptr(56-58)core/schemas/utils.go (1)
Ptr(16-18)core/schemas/account.go (2)
Key(8-19)BedrockKeyConfig(56-64)
core/providers/azure/azure.go (5)
core/providers/utils/utils.go (2)
SetExtraHeaders(179-209)MakeRequestWithContext(40-94)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)core/utils.go (1)
Ptr(56-58)core/schemas/utils.go (1)
Ptr(16-18)core/providers/azure/types.go (1)
AzureAPIVersionDefault(4-4)
core/bifrost.go (3)
core/schemas/bifrost.go (21)
BifrostContextKeyFallbackIndex(125-125)BifrostContextKeyFallbackRequestID(120-120)RequestType(85-85)BatchCreateRequest(100-100)FileUploadRequest(105-105)BifrostError(461-470)BifrostContextKeySelectedKeyID(122-122)BifrostContextKeySelectedKeyName(123-123)BifrostResponse(322-342)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)FileContentRequest(109-109)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)BatchResultsRequest(104-104)ModelProvider(32-32)BifrostContextKeyDirectKey(121-121)BifrostContextKeySkipKeySelection(127-127)ListModelsRequest(88-88)core/schemas/provider.go (2)
Provider(312-359)CustomProviderConfig(246-252)core/schemas/account.go (1)
Key(8-19)
framework/configstore/rdb.go (2)
core/schemas/account.go (2)
BedrockKeyConfig(56-64)BatchS3Config(50-52)ui/lib/types/config.ts (2)
BedrockKeyConfig(63-71)BatchS3Config(58-60)
core/providers/bedrock/s3.go (1)
core/schemas/batch.go (1)
BatchRequestItem(31-37)
core/providers/groq/groq.go (3)
core/schemas/batch.go (8)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)BifrostBatchResultsResponse(285-294)core/schemas/bifrost.go (7)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/cerebras/cerebras.go (4)
core/schemas/files.go (6)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)BifrostFileRetrieveRequest(146-159)BifrostFileRetrieveResponse(167-183)BifrostFileContentRequest(215-227)BifrostFileContentResponse(235-241)core/schemas/bifrost.go (7)
BifrostError(461-470)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)core/schemas/batch.go (4)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)
core/providers/perplexity/perplexity.go (3)
core/schemas/batch.go (8)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)BifrostBatchResultsResponse(285-294)core/schemas/bifrost.go (5)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/sgl/sgl.go (3)
core/schemas/files.go (1)
BifrostFileListRequest(109-128)core/schemas/bifrost.go (7)
BifrostError(461-470)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/openrouter/openrouter.go (5)
core/schemas/account.go (1)
Key(8-19)core/schemas/batch.go (1)
BifrostBatchListRequest(118-133)core/schemas/bifrost.go (4)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)ui/lib/types/logs.ts (1)
BifrostError(226-232)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/openai/openai.go (2)
core/providers/utils/utils.go (2)
ShouldSendBackRawResponse(615-620)ShouldSendBackRawRequest(607-612)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)
core/providers/nebius/nebius.go (4)
core/schemas/batch.go (8)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)BifrostBatchResultsResponse(285-294)core/schemas/bifrost.go (4)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)transports/bifrost-http/handlers/inference.go (1)
BatchListRequest(302-307)
core/providers/anthropic/anthropic.go (5)
core/schemas/batch.go (7)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)BifrostBatchRetrieveRequest(150-159)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BatchRequestCounts(40-48)core/schemas/bifrost.go (8)
BifrostError(461-470)Anthropic(37-37)BatchListRequest(101-101)BifrostResponseExtraFields(390-401)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)FileRetrieveRequest(107-107)core/providers/utils/utils.go (3)
ShouldSendBackRawResponse(615-620)HandleProviderResponse(359-442)ShouldSendBackRawRequest(607-612)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)core/schemas/files.go (5)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)FileObject(40-50)BifrostFileRetrieveRequest(146-159)BifrostFileContentResponse(235-241)
core/providers/bedrock/bedrock.go (6)
core/schemas/account.go (2)
Key(8-19)BedrockKeyConfig(56-64)core/schemas/files.go (3)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)FileObject(40-50)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)core/providers/utils/utils.go (3)
NewBifrostOperationError(564-575)NewConfigurationError(550-560)NewProviderAPIError(579-594)core/providers/bedrock/types.go (2)
DefaultBedrockRegion(11-11)BedrockError(463-467)core/providers/bedrock/batch.go (1)
ToBifrostBatchStatus(127-148)
🪛 GitHub Actions: PR #1104
framework/configstore/tables/key.go
[error] 166-166: BedrockKeyConfig.BatchS3Config undefined (type *schemas.BedrockKeyConfig has no field or method BatchS3Config)
🪛 LanguageTool
transports/changelog.md
[grammar] ~3-~3: Use a hyphen to join words.
Context: ....com/hpbyte) - chore: increased provider level timeout limit to 48 hours - chore:...
(QB_NEW_EN_HYPHEN)
🔇 Additional comments (63)
examples/plugins/hello-world/go.mod (1)
3-3: Go version upgrade aligns with coordinated repo update.The toolchain version bump from 1.24.3 to 1.25.5 is part of a repo-wide upgrade and poses minimal risk. No transitive dependencies have changed, and the module declaration remains intact.
Verify that all
go.modfiles across the repository (including other example plugins, core, framework, transports, and tests) have been updated to 1.25.5 for consistency.plugins/logging/go.mod (1)
3-3: Go version upgrade consistent with repo-wide coordinated change.The toolchain bump to 1.25.5 aligns with the coordinated upgrade across multiple modules. All require blocks (direct and indirect dependencies) remain unchanged, indicating a safe version transition.
Confirm that the CI pipeline (Go build and tests) passes with Go 1.25.5 for all modules, and that all plugin modules in
plugins/*/go.modhave been similarly updated.ui/components/ui/tagInput.tsx (1)
69-69: LGTM! Verify visual consistency across input components.The height increase and addition of horizontal padding improve the visual appearance and user experience of the tag input field. The change aligns with similar UI refinements noted in related components.
Please verify that the new height (
h-7) and horizontal padding (px-2) are consistent with other input elements in your design system to ensure a cohesive user interface across the application.transports/changelog.md (1)
1-5: Verify scope alignment and changelog accuracy.The changelog entries span multiple features beyond the stated PR objective ("batch config for each key"). Line 2 (cost recalculation) appears orthogonal to batch API configuration. Confirm that:
- All five entries represent genuine changes in this PR (not cherry-picked or unrelated)
- The entries accurately reflect the code changes across Core, Transports, UI, and tests mentioned in the PR objectives
- Any multi-key method signature changes or provider interface updates are adequately captured
plugins/semanticcache/go.mod (3)
3-3: Go 1.25.5 is a stable production-ready release. The release maintains the Go 1 promise of compatibility, with almost all Go programs expected to continue compiling and running as before. Go 1.25.5 (released 2025-12-02) includes two security fixes to the crypto/x509 package, as well as bug fixes to the mime and os packages. No action required beyond standard testing in the target deployment environment.
3-3: No action needed. Thego.sumfile is properly maintained and consistent withgo.mod, containing correct checksums for all dependencies.
5-11: Ensure all dependencies are tested with Go 1.25.5.Confirm
go test ./...passes for this plugin. All modules in the repository have been consistently upgraded to Go 1.25.5, and backward compatibility is maintained.plugins/governance/go.mod (1)
3-3: All Go version directives across the repository (13 modules) have been consistently updated to 1.25.5, confirming proper cross-module alignment in the stack. Go 1.25 maintains backward compatibility with no breaking language changes, and Go 1.25.5 includes security fixes to the crypto/x509 package. The change is correct and ready.ui/app/workspace/logs/page.tsx (1)
564-565: LGTM! Clean prop passing for external data refresh.The
fetchLogsandfetchStatscallbacks are correctly passed toLogsDataTable, enabling the child components to trigger data refreshes after operations like cost recalculation. The functions are well-defined with proper error handling.ui/app/workspace/logs/views/logsTable.tsx (1)
24-25: LGTM! Proper prop threading to child components.The
fetchLogsandfetchStatsprops are correctly added to the interface, destructured, and propagated toLogFiltersComponent. Type signatures are consistent throughout.Also applies to: 41-42, 88-88
ui/app/workspace/logs/views/filters.tsx (1)
8-9: BothuseRecalculateLogCostsMutationandRecalculateCostResponseexist in the codebase and are properly exported. No issues with these imports.ui/components/ui/checkbox.tsx (2)
5-5: Import order change has no functional impact.The React import has been moved after the lucide-react import. This is a stylistic change with no runtime implications.
20-20: Good enhancement for checked state contrast.Adding
text-primary-foregroundto the CheckIcon ensures proper contrast in the checked state, improving accessibility and visual consistency.ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (6)
4-4: New imports support batch API UI elements.The added imports (Button, ModelMultiselect, Switch, Info/Plus/Trash2 icons) are appropriately used throughout the component for batch API functionality.
Also applies to: 7-7, 9-9, 14-14
26-47: BatchAPIFormField component is well-structured.The component provides clear UI for enabling batch API per key with appropriate description and Switch control. The default value handling (
field.value ?? false) prevents undefined state.
183-183: Correct conditional rendering for non-Bedrock/non-Azure batch support.The BatchAPIFormField is appropriately shown for providers that support batch APIs, excluding Bedrock and Azure which have their own sections. The logic is clear and maintainable.
251-251: BatchAPIFormField correctly placed in Azure section.The field is rendered after Azure-specific configuration, maintaining logical grouping of related settings.
532-532: BatchAPIFormField correctly placed in Bedrock section.The field is rendered after Bedrock-specific configuration. However, this highlights that BedrockBatchS3ConfigSection (defined below) should also be rendered here when batch API is enabled.
Based on the past review comment and the defined BedrockBatchS3ConfigSection component, verify whether this component should be rendered conditionally after Line 532.
177-177: ModelMultiselect component is properly implemented and handles provider context correctly.The component exists at
ui/components/ui/modelMultiselect.tsxwith proper TypeScript types. It fetches provider-specific models viauseLazyGetModelsQuery, supports batch filtering through thekeysprop, and disables appropriately when no provider is selected. The change to AsyncMultiselect provides improved UX with search, debouncing, and dynamic model creation.ui/package.json (2)
46-46: Exact version pin prevents automatic patch updates.Version 0.552.0 is confirmed to exist. By removing the caret prefix and pinning to an exact version, this change prevents automatic patch-level updates while also being 9 minor versions behind the current latest (0.561.0). Consider whether this freeze is intentional or if the caret prefix should be restored to allow patch-level updates like
^0.552.0.
63-63: Clarify that Zod 4 breaking changes occurred in 4.0.0, not 4.2.1.Zod 4 introduces breaking changes primarily in error customization APIs, but these were established in version 4.0.0, not in 4.2.1. Version 4.2.1 is a minor release containing only bug fixes and enhancements, not breaking changes.
If upgrading from Zod 3 to 4.2.1, the compatibility concerns are valid and include changes to default value logic where defaults are now applied even within optional fields, stricter number validation (z.number() no longer accepts infinite values, .int() accepts safe integers only), and unified error customization API.
However, the verification script includes
useRef()checks which are not Zod-related (useRef is a React hook, not a Zod validation method). Remove this check and focus on the Zod-specific patterns:.refine(),.superRefine(),z.number(),.default(), and.prefault().core/internal/testutil/account.go (1)
69-77: Expanded fallback configuration is clear and flexibleAdding typed fallback slices per modality (text, transcription, speech, embedding) makes test scenarios more explicit and avoids overloading a single
Fallbacksfield. No functional issues here.tests/integrations/tests/test_openai.py (2)
1961-2061: Batch create test remains consistent with provider-specific S3 configWithin
test_46_batch_create_with_file, the current flow—uploading the JSONL via Files API, then creating a batch withextra_bodycontainingprovider,model, andoutput_s3_uri—matches the new per-key/per-provider batch semantics. No issues from this change in this diff.
2171-2173: Whitespace-only adjustments in batch testsThe changes around these lines are purely formatting/blank-line tweaks in
test_48_batch_retrieve,test_49_batch_cancel, andtest_50_batch_e2e_file_api. They don’t affect behavior.Also applies to: 2292-2297, 2372-2380
transports/bifrost-http/lib/config.go (1)
2138-2143: LGTM - proper default handling for batch API flag.The code correctly defaults
UseForBatchAPItofalsewhen the field is nil, ensuring consistent redacted output.framework/configstore/rdb.go (1)
277-286: Consistent BatchS3Config serialization pattern.The code correctly serializes
BatchS3Configto JSON when present and sets the field to nil otherwise. This pattern is consistently applied acrossUpdateProvidersConfig,UpdateProvider, andAddProvidermethods.core/bifrost.go (2)
1948-1986: Defensive queue existence check afterprepareProviderlooks correctThe extra
Load+okcheck afterprepareProvideradds a useful guard against inconsistent state and concurrent races; returning a clear"request queue not found"error is better than panicking on a bad type assertion. No issues with the locking pattern here (RW → upgrade to write → re-check then prepare).
2037-2067: Additional fallback debug logging is safe and helpfulThe new debug logs in
shouldTryFallbacks,prepareFallbackRequest,handleRequest, andhandleStreamRequestonly include provider/model identifiers and error messages—no key material or other secrets—and should materially help debugging fallback behavior without impacting prod logs (since they’re at debug level).Also applies to: 2072-2125, 2148-2233, 2240-2314
core/schemas/provider.go (1)
311-359: Provider interface multi-key signatures are consistent with core orchestrationUpdating
BatchList/BatchRetrieve/BatchCancel/BatchResultsandFileList/FileRetrieve/FileDelete/FileContentto acceptkeys []Keyaligns with howcore/bifrost.gonow passes multiple keys into providers for batch/file operations. The interface shape matches the updated provider implementations in this PR and keeps single-key operations (BatchCreate,FileUpload, chat/embedding, etc.) unchanged.core/providers/parasail/parasail.go (1)
185-233: Parasail batch/file signatures now match the multi-key Provider interfaceThe updated
FileList/FileRetrieve/FileDelete/FileContentandBatchList/BatchRetrieve/BatchCancel/BatchResultsmethods correctly accept[]schemas.Keyand still return a standardized unsupported-operation error viaproviderUtils.NewUnsupportedOperationError. This keeps Parasail compliant with the Provider interface while preserving existing behavior.core/providers/mistral/mistral.go (1)
551-574: Mistral batch/file unsupported stubs updated cleanly to multi-key
BatchList/BatchRetrieve/BatchCancel/BatchResultsandFileList/FileRetrieve/FileDelete/FileContentnow take[]schemas.Keyand continue to delegate toNewUnsupportedOperationError. This neatly updates the API surface without altering runtime behavior.Also applies to: 576-599
core/providers/nebius/nebius.go (1)
238-261: Nebius multi-key batch/file signatures are consistent and behavior-preservingThe Nebius provider’s
BatchList/BatchRetrieve/BatchCancel/BatchResultsandFileList/FileRetrieve/FileDelete/FileContentfunctions now accept[]schemas.Keyand still return unsupported-operation errors. This keeps Nebius aligned with the new Provider interface while leaving behavior unchanged.Also applies to: 263-285
core/providers/sgl/sgl.go (1)
223-246: SGL provider: multi-key batch/file stubs correctly mirror interface
FileList/FileRetrieve/FileDelete/FileContentandBatchList/BatchRetrieve/BatchCancel/BatchResultsnow take slices of keys and continue to returnNewUnsupportedOperationError. This is the expected no-op implementation for an unsupported feature with the new multi-key shape.Also applies to: 248-271
core/providers/groq/groq.go (1)
252-275: Groq batch/file method updates match the new multi-key contractThe Groq provider’s
BatchList/BatchRetrieve/BatchCancel/BatchResultsandFileList/FileRetrieve/FileDelete/FileContentimplementations now accept[]schemas.Keyand still return the standardized unsupported-operation error. No behavioral change beyond the updated parameter type.Also applies to: 277-300
core/providers/perplexity/perplexity.go (1)
255-278: Perplexity provider: multi-key batch/file stubs are consistent with interface
BatchList/BatchRetrieve/BatchCancel/BatchResultsandFileList/FileRetrieve/FileDelete/FileContentnow accept[]schemas.Keyand still immediately returnNewUnsupportedOperationError. This keeps the Perplexity provider compatible with the multi-key Provider interface without changing runtime behavior.Also applies to: 280-303
core/providers/openai/openai.go (3)
2244-2372: Multi-key FileList pagination looks correct and consistent with SerialListHelper.Serial helper initialization, use of the native cursor instead of request.After, and the BuildNextCursor-based After construction all look sound. Empty-data responses use an empty slice, avoiding
null in JSON.
2725-2837: BatchList multi-key pagination and cursor wiring are aligned with the new helper pattern.Using NewSerialListHelper + nativeCursor and feeding HasMore/lastBatchID into BuildNextCursor gives consistent cross-key pagination, and the response uses NextCursor as expected for batch lists.
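For readers unfamiliar with the serial-cursor idea referenced here, a self-contained sketch of the concept follows. It is not the repository's SerialCursor/BuildNextCursor implementation, whose exact fields and signatures are not reproduced here; the JSON-in-base64 encoding is an assumption for illustration:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// serialCursor is a conceptual stand-in for the cursor used to paginate
// across multiple keys one at a time: it remembers which key the previous
// page came from and that provider's own pagination token.
type serialCursor struct {
	KeyIndex     int    `json:"key_index"`
	NativeCursor string `json:"native_cursor"`
}

func encodeCursor(c serialCursor) string {
	b, _ := json.Marshal(c) // marshaling this struct cannot fail
	return base64.RawURLEncoding.EncodeToString(b)
}

func decodeCursor(s string) (serialCursor, error) {
	var c serialCursor
	b, err := base64.RawURLEncoding.DecodeString(s)
	if err != nil {
		return c, err
	}
	return c, json.Unmarshal(b, &c)
}

// buildNextCursor advances within the current key while the provider
// reports more pages, then moves on to the next key; an empty string
// means every key has been exhausted.
func buildNextCursor(cur serialCursor, providerHasMore bool, lastID string, totalKeys int) string {
	if providerHasMore {
		return encodeCursor(serialCursor{KeyIndex: cur.KeyIndex, NativeCursor: lastID})
	}
	if cur.KeyIndex+1 < totalKeys {
		return encodeCursor(serialCursor{KeyIndex: cur.KeyIndex + 1})
	}
	return ""
}

func main() {
	next := buildNextCursor(serialCursor{}, true, "batch_123", 2)
	c, _ := decodeCursor(next)
	fmt.Printf("%+v\n", c) // {KeyIndex:0 NativeCursor:batch_123}
}
```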
3017-3118: BatchResults JSONL parsing and error surfacing look solid.Reusing BatchRetrieve to obtain
output_file_id, then downloading content per key and decoding JSONL with ParseJSONL, correctly separates transport errors (via lastErr) from per-line parse errors (reported via ExtraFields.ParseErrors). Logging of malformed lines vialogger.Warn("failed to parse batch result line: %v", err)is appropriate and consistent with the Logger varargs contract. Based on learnings, this logging pattern is correct.core/providers/cerebras/cerebras.go (2)
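A minimal sketch of JSONL decoding with per-line error collection, mirroring the behavior described above; the function name and return shape are illustrative, not the actual ParseJSONL API:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// parseJSONL decodes one JSON object per line. Malformed lines are
// collected as parse errors instead of failing the whole download, which
// keeps transport failures and per-line failures cleanly separated.
func parseJSONL(content string) (results []map[string]any, parseErrs []string) {
	sc := bufio.NewScanner(strings.NewReader(content))
	for lineNo := 1; sc.Scan(); lineNo++ {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var obj map[string]any
		if err := json.Unmarshal([]byte(line), &obj); err != nil {
			parseErrs = append(parseErrs, fmt.Sprintf("line %d: %v", lineNo, err))
			continue
		}
		results = append(results, obj)
	}
	return results, parseErrs
}

func main() {
	out, errs := parseJSONL("{\"custom_id\":\"a\"}\nnot-json\n{\"custom_id\":\"b\"}")
	fmt.Println(len(out), errs) // 2 [line 2: invalid character ...]
}
```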
219-237: Cerebras file/batch signatures correctly follow the new multi-key provider interface.Updating these unsupported operations to accept
[]schemas.Keywhile still returningNewUnsupportedOperationErrorkeeps Cerebras aligned with the shared Provider interface without changing runtime behavior.Also applies to: 244-262
244-262: Cerebras batch APIs remain explicitly unsupported with the right request types.BatchList/BatchRetrieve/BatchCancel/BatchResults all now take
[]schemas.Keyand still produce the expected “unsupported_operation” BifrostError for their respective request types, which is the right contract until Cerebras adds native file/batch support.core/providers/cohere/cohere.go (2)
844-867: Cohere batch APIs correctly updated to the multi-key signature while remaining unsupported.The switch from
schemas.Keyto[]schemas.Keyon BatchList/BatchRetrieve/BatchCancel/BatchResults matches the new Provider interface, and continuing to returnNewUnsupportedOperationErrorpreserves existing behavior.
874-892: Cohere file APIs now conform to the multi-key shape without changing behavior.FileList/FileRetrieve/FileDelete/FileContent now accept
[]schemas.Keyand still return the standard unsupported-operation error, which is consistent with the current capabilities of the Cohere provider.core/providers/openrouter/openrouter.go (1)
290-308: OpenRouter unsupported batch/file APIs correctly moved to multi-key signatures.All batch and file operations now take
[]schemas.Keyand still returnNewUnsupportedOperationErrorwith the right request type, keeping OpenRouter compatible with the updated Provider interface without altering runtime behavior.Also applies to: 315-333
core/providers/azure/azure.go (3)
943-1072: Azure FileList serial pagination + empty-slice Data behavior look good. The `NewSerialListHelper`-based flow, use of the helper's native cursor instead of `request.After`, and the `After`/`HasMore` handling via `BuildNextCursor` are all consistent with the intended cross-key pagination semantics. Returning an empty `[]FileObject{}` when all keys are exhausted also fixes the previous null-vs-[] concern.
1484-1602: Azure BatchList serial pagination and cursor wiring are consistent with the OpenAI pattern. `SerialListHelper` integration, reusing the native cursor, and constructing `NextCursor`/`HasMore` from `openAIResp.HasMore` and the last batch ID provide predictable multi-key pagination, and the batch conversion properly reuses raw request/response when enabled.
1809-1864: BatchResults composition via BatchRetrieve + FileContent and JSONL parsing is clean. Reusing the multi-key `BatchRetrieve` and `FileContent` implementations to obtain `output_file_id` and then download content, followed by a single `ParseJSONL` pass with parse errors surfaced via `ExtraFields.ParseErrors`, yields a simple and robust implementation. Latency is sensibly taken from the file-content response.
core/providers/gemini/gemini.go (6)
1516-1523: LGTM! Model parsing logic correctly handles the pointer and provides a sensible default. The implementation properly handles the pointer to `Model`, uses `schemas.ParseModelString` to extract the model name (handling provider-prefixed strings), and defaults to "gemini-2.5-flash" when no model is specified. This aligns with Gemini's stable model offerings.
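As a small illustration of the parse-with-default idea discussed here, the sketch below strips an optional provider prefix and falls back to a default model. The function body is a simplified stand-in, not the repository's `ParseModelString`.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveModel strips an optional "provider/" prefix and falls back to a
// default model when none is supplied.
func resolveModel(model *string) string {
	const defaultModel = "gemini-2.5-flash"
	if model == nil || *model == "" {
		return defaultModel
	}
	// "gemini/gemini-2.5-pro" -> "gemini-2.5-pro"
	if i := strings.LastIndex(*model, "/"); i >= 0 {
		return (*model)[i+1:]
	}
	return *model
}

func main() {
	m := "gemini/gemini-2.5-pro"
	fmt.Println(resolveModel(&m))  // gemini-2.5-pro
	fmt.Println(resolveModel(nil)) // gemini-2.5-flash
}
```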
1736-1810: LGTM! Serial pagination across keys implemented correctly. The `BatchList` method properly:
- Uses `NewSerialListHelper` to manage multi-key pagination state
- Exhausts all pages from one key before moving to the next
- Handles empty keys gracefully
- Propagates per-request latency
- Builds and encodes continuation cursors correctly
The pattern ensures consistent pagination within each key's scope and provides a seamless experience across multiple keys.
1896-1925: LGTM! Multi-key retrieval pattern implemented correctly. The method properly iterates through all provided keys, attempting retrieval with each until successful. Debug logging for failed attempts aids troubleshooting without exposing sensitive details. Resource management is handled cleanly within the `batchRetrieveByKey` helper using defer statements.
2476-2549: LGTM! FileList serial pagination follows the established pattern. The implementation mirrors the BatchList approach, using serial pagination to exhaust one key's files before moving to the next. Cursor handling is appropriate for Gemini's `After` token-based pagination, and the empty keys case is handled gracefully.
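On token-based pagination in general, a tiny sketch of threading an opaque continuation token into a list URL follows; the query-parameter names are illustrative, not Gemini's exact API.

```go
package main

import (
	"fmt"
	"net/url"
)

// buildListURL appends pagination parameters to a base list endpoint.
// Parameter names here are placeholders for whatever the provider expects.
func buildListURL(base string, limit int, pageToken string) string {
	u, _ := url.Parse(base)
	q := u.Query()
	q.Set("pageSize", fmt.Sprintf("%d", limit))
	if pageToken != "" {
		q.Set("pageToken", pageToken)
	}
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(buildListURL("https://example.com/v1/files", 50, ""))
	fmt.Println(buildListURL("https://example.com/v1/files", 50, "abc123"))
}
```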
2635-2746: LGTM! FileRetrieve and FileDelete consistently implement the multi-key pattern. Both methods follow the established multi-key orchestration pattern (a minimal sketch follows after this list):
- Iterate through keys until one succeeds
- Track and return the last error if all attempts fail
- Log debug information for troubleshooting
- Handle empty keys defensively
The consistency across all multi-key operations (batch and file) improves maintainability and reduces cognitive load.
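Here is the promised sketch of the try-each-key-until-success shape, reduced to plain Go with a callback standing in for the provider call; none of the names below are the repository's actual types.

```go
package main

import (
	"errors"
	"fmt"
)

// retrieveWithKeys tries each key in order and returns the first success,
// or the last error if every key fails.
func retrieveWithKeys(keys []string, fetch func(key string) (string, error)) (string, error) {
	if len(keys) == 0 {
		return "", errors.New("no keys available")
	}
	var lastErr error
	for _, key := range keys {
		result, err := fetch(key)
		if err != nil {
			lastErr = err
			continue // try the next key
		}
		return result, nil
	}
	return "", lastErr
}

func main() {
	fetch := func(key string) (string, error) {
		if key == "good-key" {
			return "batch-object", nil
		}
		return "", fmt.Errorf("key %q rejected", key)
	}
	out, err := retrieveWithKeys([]string{"bad-key", "good-key"}, fetch)
	fmt.Println(out, err) // batch-object <nil>
}
```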
2751-2765: LGTM! FileContent signature updated for consistency. The signature now accepts multiple keys to match other file operations, even though Gemini doesn't support direct file content downloads. The clear error message guides users to use the file URI in requests instead, which is the Gemini-recommended approach.
core/providers/anthropic/anthropic.go (5)
1112-1229: LGTM! BatchList serial pagination correctly adapted for Anthropic. The implementation properly:
- Uses the `AfterID` cursor field appropriate for Anthropic's pagination
- Converts Anthropic batches to Bifrost format
- Tracks `lastBatchID` to build continuation cursors
- Handles edge cases (empty keys, no results)
The serial pagination helper abstracts cursor management while respecting Anthropic's native pagination semantics.
1231-1419: LGTM! BatchRetrieve and BatchCancel properly manage resources in multi-key loops. Both methods correctly handle the fasthttp resource lifecycle (see the sketch after this list):
- Acquire request/response per iteration
- Release resources in all error paths with explicit `fasthttp.ReleaseRequest(req)` and `fasthttp.ReleaseResponse(resp)` calls
- Release resources before returning success
This explicit resource management approach (vs defer in helpers) is appropriate for loops where resources are created and released per iteration.
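For readers unfamiliar with fasthttp's pooled request/response objects, the sketch below shows the acquire-per-iteration, release-on-every-exit-path style described above. It is a generic illustration, not the provider code itself.

```go
package main

import (
	"fmt"

	"github.com/valyala/fasthttp"
)

// fetchWithKeys acquires fasthttp objects at the top of each attempt and
// releases them on every exit path, instead of one deferred release.
func fetchWithKeys(client *fasthttp.Client, url string, apiKeys []string) ([]byte, error) {
	var lastErr error
	for _, apiKey := range apiKeys {
		req := fasthttp.AcquireRequest()
		resp := fasthttp.AcquireResponse()

		req.SetRequestURI(url)
		req.Header.SetMethod(fasthttp.MethodGet)
		req.Header.Set("Authorization", "Bearer "+apiKey)

		if err := client.Do(req, resp); err != nil {
			lastErr = err
			fasthttp.ReleaseRequest(req)
			fasthttp.ReleaseResponse(resp)
			continue // network error: try the next key
		}
		if resp.StatusCode() != fasthttp.StatusOK {
			lastErr = fmt.Errorf("unexpected status %d", resp.StatusCode())
			fasthttp.ReleaseRequest(req)
			fasthttp.ReleaseResponse(resp)
			continue
		}

		// Copy the body before releasing the response to avoid use-after-free.
		body := append([]byte(nil), resp.Body()...)
		fasthttp.ReleaseRequest(req)
		fasthttp.ReleaseResponse(resp)
		return body, nil
	}
	return nil, lastErr
}

func main() {
	_ = fetchWithKeys // wiring left to the caller; shown for shape only
}
```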
1421-1526: LGTM! BatchResults correctly implements the multi-key pattern with JSONL parsing. The method properly (a decoding sketch follows below):
- Manages fasthttp resources explicitly in the multi-key loop
- Uses `providerUtils.ParseJSONL` for line-by-line parsing of Anthropic's JSONL results
- Aggregates parse errors into `ExtraFields.ParseErrors` for visibility
- Returns results from the first successful key
The JSONL parsing is handled robustly with per-line error handling.
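The sketch below shows the per-line JSONL decoding idea: one JSON object per line, with parse failures collected rather than aborting the whole result set. The item struct is a generic stand-in, not the repository's `ParseJSONL` or its types.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type batchResultItem struct {
	CustomID string          `json:"custom_id"`
	Response json.RawMessage `json:"response"`
}

// parseJSONL decodes one JSON object per line, collecting per-line parse
// errors instead of failing the whole result set.
func parseJSONL(data string) (items []batchResultItem, parseErrs []string) {
	sc := bufio.NewScanner(strings.NewReader(data))
	line := 0
	for sc.Scan() {
		line++
		text := strings.TrimSpace(sc.Text())
		if text == "" {
			continue
		}
		var item batchResultItem
		if err := json.Unmarshal([]byte(text), &item); err != nil {
			parseErrs = append(parseErrs, fmt.Sprintf("line %d: %v", line, err))
			continue
		}
		items = append(items, item)
	}
	return items, parseErrs
}

func main() {
	data := `{"custom_id":"req-1","response":{"ok":true}}
not-json
{"custom_id":"req-2","response":{"ok":false}}`
	items, errs := parseJSONL(data)
	fmt.Println(len(items), errs) // 2 [line 2: ...]
}
```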
1659-1783: LGTM! FileList serial pagination mirrors the BatchList pattern. The implementation correctly adapts serial pagination for file listings:
- Uses the `After` cursor field with the `after_id` query parameter
- Tracks `lastFileID` for cursor generation
- Converts Anthropic file objects to Bifrost format
- Handles empty keys and exhausted pagination gracefully
The consistency with BatchList reduces cognitive load and improves maintainability.
1785-2057: LGTM! FileRetrieve, FileDelete, and FileContent correctly implement the multi-key pattern. All three methods properly:
- Manage fasthttp resources explicitly with proper cleanup in all paths
- Iterate through keys until one succeeds
- Track and return the last error if all attempts fail
Noteworthy details:
- FileDelete correctly handles both 200 OK (with body) and 204 No Content responses
- FileContent properly extracts `Content-Type` with a sensible default and copies the body to prevent use-after-free
The consistent implementation across all file operations ensures predictable behavior.
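On the 200-vs-204 point above, here is a small sketch of accepting both success shapes for a delete call; the response struct is a generic stand-in, not the provider's exact schema.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type deleteResult struct {
	ID      string `json:"id"`
	Deleted bool   `json:"deleted"`
}

// interpretDelete accepts both a 200 response carrying a JSON body and a
// bare 204 No Content, treating either as a successful deletion.
func interpretDelete(status int, body []byte, fileID string) (deleteResult, error) {
	switch status {
	case http.StatusNoContent:
		return deleteResult{ID: fileID, Deleted: true}, nil
	case http.StatusOK:
		var res deleteResult
		if err := json.Unmarshal(body, &res); err != nil {
			return deleteResult{}, err
		}
		return res, nil
	default:
		return deleteResult{}, fmt.Errorf("delete failed with status %d", status)
	}
}

func main() {
	r1, _ := interpretDelete(204, nil, "file_123")
	r2, _ := interpretDelete(200, []byte(`{"id":"file_123","deleted":true}`), "file_123")
	fmt.Println(r1, r2)
}
```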
core/providers/bedrock/bedrock.go (6)
1422-1589: Serial pagination implementation looks solid. The FileList method correctly implements serial pagination across multiple keys using the SerialListHelper. The logic properly handles key exhaustion, native cursor continuation, and per-key configuration. Latency tracking is appropriate for the serial pattern.
2132-2285: BatchList serial pagination is consistent with FileList. The implementation mirrors the FileList pattern appropriately, using Bedrock's NextToken for native pagination. The helper correctly manages cursor state across keys.
1591-1696: Try-each-key pattern is implemented consistently. The FileRetrieve, FileDelete, FileContent, and BatchRetrieve methods all follow a consistent pattern: iterate through keys, attempt the operation with per-key credentials/region, return on first success, or surface the last error. Context cancellation is properly handled by returning immediately, which is the correct behavior when a user cancels a request.
Also applies to: 1698-1786, 1788-1885, 2349-2496
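On the context-cancellation point, a compact sketch of checking `ctx.Err()` inside the key loop so a cancelled request stops retrying immediately; the call signature is illustrative only.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// tryKeys stops retrying as soon as the caller's context is cancelled,
// instead of burning through the remaining keys.
func tryKeys(ctx context.Context, keys []string, call func(ctx context.Context, key string) error) error {
	var lastErr error
	for _, key := range keys {
		if err := ctx.Err(); err != nil {
			return err // request cancelled or timed out: return immediately
		}
		if err := call(ctx, key); err != nil {
			lastErr = err
			continue
		}
		return nil
	}
	if lastErr == nil {
		lastErr = errors.New("no keys available")
	}
	return lastErr
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()
	err := tryKeys(ctx, []string{"k1", "k2"}, func(ctx context.Context, key string) error {
		time.Sleep(20 * time.Millisecond) // simulate a slow call
		return fmt.Errorf("key %s failed", key)
	})
	fmt.Println(err) // context deadline exceeded after the first slow attempt
}
```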
2610-2727: BatchResults orchestration is well-structured. The method correctly delegates to BatchRetrieve, FileList, and FileContent with the full key set, allowing each operation to try multiple keys as needed. Latency aggregation across multiple file downloads is appropriate.
1903-1919: Role ARN fallback logic is well-implemented. The method first checks the client-provided `role_arn` from extra_params, then falls back to the key's configured ARN. This provides a good balance of flexibility and default configuration.
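A minimal sketch of that request-then-key fallback order is below; the field and parameter names are placeholders, not the repository's exact schema.

```go
package main

import "fmt"

// resolveRoleARN prefers a per-request role_arn from extra params and falls
// back to the ARN configured on the key.
func resolveRoleARN(extraParams map[string]any, keyConfiguredARN string) (string, error) {
	if v, ok := extraParams["role_arn"].(string); ok && v != "" {
		return v, nil
	}
	if keyConfiguredARN != "" {
		return keyConfiguredARN, nil
	}
	return "", fmt.Errorf("no role ARN provided in request or key config")
}

func main() {
	arn, _ := resolveRoleARN(map[string]any{}, "arn:aws:iam::123456789012:role/batch-role")
	fmt.Println(arn)
}
```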
1932-1947: Model ID resolution is correct. The nil check at line 1932 guards the subsequent usage, and the model ID correctly falls back to the deployment mapping when available.
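A hypothetical sketch of a nil-guarded model ID with a deployment-map fallback; the map shape and lookup key are invented for illustration and do not mirror the actual key configuration.

```go
package main

import "fmt"

// resolveModelID returns the explicit model ID when present, otherwise looks
// up a deployment mapping configured on the key (hypothetical shape).
func resolveModelID(requested *string, deployments map[string]string, alias string) (string, error) {
	if requested != nil && *requested != "" {
		return *requested, nil
	}
	if id, ok := deployments[alias]; ok {
		return id, nil
	}
	return "", fmt.Errorf("no model ID for alias %q", alias)
}

func main() {
	deployments := map[string]string{"claude-fast": "anthropic.claude-3-haiku-20240307-v1:0"}
	id, _ := resolveModelID(nil, deployments, "claude-fast")
	fmt.Println(id)
}
```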
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
core/bifrost.go (2)
2716-2792: Multi-key batch/file flow mostly looks good; watch UseForBatchAPI semantics and add a ctx nil guard
The refactor in `requestWorker` to distinguish:
- single-key ops (`BatchCreate`, `FileUpload`) using `selectKeyFromProviderForModel`, and
- multi-key ops (other batch/file requests) using `getKeysForBatchAndFileOps` with deterministic ID-sorted keys
is consistent with the new provider interface and should work fine for key-requiring providers. A few follow-ups are worth considering:
Inconsistent UseForBatchAPI enforcement between single‑key and multi‑key file ops
`selectKeyFromProviderForModel` now filters keys for both `isBatchRequestType(requestType)` and `isFileRequestType(requestType)`, so FileUpload uses only `UseForBatchAPI == true` keys. `getKeysForBatchAndFileOps` only applies the `UseForBatchAPI` filter when `isBatchOp == true`, so multi-key file list/retrieve/delete/content will happily use keys where `UseForBatchAPI` is nil or false. That yields:
- FileUpload → requires `use_for_batch_api: true`
- FileList/Retrieve/Delete/Content → do not require `use_for_batch_api: true`
If the intent is that the flag truly gates "batch APIs" only (and not Files), then `selectKeyFromProviderForModel` probably should not treat file request types as batch-gated. If the intent is that the flag also gates file APIs, then `getKeysForBatchAndFileOps` should apply the filter whenever `isFileRequestType(requestType)` as well, not just for `isBatchOp`. Right now behavior is split. I'd recommend aligning these two paths so "Use for Batch APIs" has a single, predictable meaning; a sketch of one shared rule follows below.
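One way to express that single shared rule, sketched with illustrative types rather than the repository's actual schemas: both the single-key and multi-key paths would call the same filter, so the gating decision lives in exactly one place.

```go
package main

import "fmt"

type requestKind int

const (
	kindBatchOp requestKind = iota
	kindFileOp
	kindInference
)

type key struct {
	ID             string
	UseForBatchAPI *bool
}

// filterKeys applies one shared rule for both batch and file operations so
// that single-key and multi-key paths cannot drift apart.
func filterKeys(keys []key, kind requestKind) []key {
	requireBatchFlag := kind == kindBatchOp || kind == kindFileOp // one definition, used everywhere
	var out []key
	for _, k := range keys {
		if requireBatchFlag && (k.UseForBatchAPI == nil || !*k.UseForBatchAPI) {
			continue
		}
		out = append(out, k)
	}
	return out
}

func main() {
	t := true
	keys := []key{{ID: "a"}, {ID: "b", UseForBatchAPI: &t}}
	fmt.Println(len(filterKeys(keys, kindFileOp)))    // 1
	fmt.Println(len(filterKeys(keys, kindInference))) // 2
}
```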
Defensive nil guard for ctx in selectKeyFromProviderForModel
`selectKeyFromProviderForModel` currently does:

```go
if skipKeySelection, ok := (*ctx).Value(schemas.BifrostContextKeySkipKeySelection).(bool); ok && skipKeySelection && isKeySkippingAllowed(providerKey) {
	return schemas.Key{}, nil
}
```

without checking `ctx != nil`. All current call sites pass `&req.Context`, which is non-nil, but the method signature doesn't document that requirement and future callers could easily pass nil and hit a panic. A small defensive tweak keeps things safe:

```diff
 func (bifrost *Bifrost) selectKeyFromProviderForModel(ctx *context.Context, requestType schemas.RequestType, providerKey schemas.ModelProvider, model string, baseProviderType schemas.ModelProvider) (schemas.Key, error) {
-	// Check if key has been set in the context explicitly
-	if ctx != nil {
-		key, ok := (*ctx).Value(schemas.BifrostContextKeyDirectKey).(schemas.Key)
-		if ok {
-			return key, nil
-		}
-	}
-	// Check if key skipping is allowed
-	if skipKeySelection, ok := (*ctx).Value(schemas.BifrostContextKeySkipKeySelection).(bool); ok && skipKeySelection && isKeySkippingAllowed(providerKey) {
-		return schemas.Key{}, nil
-	}
+	// Check if key has been set in the context explicitly
+	if ctx != nil {
+		if key, ok := (*ctx).Value(schemas.BifrostContextKeyDirectKey).(schemas.Key); ok {
+			return key, nil
+		}
+		// Check if key skipping is allowed
+		if skipKeySelection, ok := (*ctx).Value(schemas.BifrostContextKeySkipKeySelection).(bool); ok && skipKeySelection && isKeySkippingAllowed(providerKey) {
+			return schemas.Key{}, nil
+		}
+	}
```
This keeps behavior the same for all existing call sites while avoiding a latent panic if someone ever passes a nil context pointer. --- `3180-3288`: **getKeysForBatchAndFileOps is generally correct; align UseForBatchAPI semantics with selectKeyFromProviderForModel** `getKeysForBatchAndFileOps` does the right things structurally: - honors a direct key from `BifrostContextKeyDirectKey`, - pulls keys from the account and fails fast on “no keys found”, - filters out disabled keys and, when `isBatchOp == true`, only keeps `UseForBatchAPI == true`, - optionally filters by `model` when `key.Models` is non‑empty, - respects providers whose key value can be empty, and - sorts keys by ID to keep pagination deterministic. The only concern is consistency with `selectKeyFromProviderForModel`: - Here, `UseForBatchAPI` is **only** enforced when `isBatchOp` is true. - In `selectKeyFromProviderForModel`, `UseForBatchAPI` is enforced for **both** batch and file request types. That means: - FileUpload (single‑key) obeys `UseForBatchAPI`. - File list/retrieve/delete/content (multi‑key) **ignore** `UseForBatchAPI`. If the flag is meant to gate *all* “batch‑style” operations (including Files), consider updating `getKeysForBatchAndFileOps` to also respect it when `isFileRequestType(requestType)` in the caller. If not, it may be worth tightening the condition in `selectKeyFromProviderForModel` to only apply to `isBatchRequestType`, so file behavior is consistent across single vs multi‑key paths. </blockquote></details> <details> <summary>core/providers/openai/openai.go (3)</summary><blockquote> `2374-2619`: **Guard against empty key slices in FileRetrieve/Delete/Content** The “try each key until one succeeds” pattern for `FileRetrieve`, `FileDelete`, and `FileContent` is good and the fasthttp acquire/release lifecycle is handled carefully in all branches. One corner case: if `keys` is empty, these functions skip the loop and return `(nil, lastErr)` where `lastErr` is `nil`. That yields `(nil, nil)`, which makes it hard for callers to distinguish “no keys available” from “successful lookup with an empty result” and can lead to nil dereferences if callers assume a non-nil response whenever `err == nil`. Consider an explicit guard at the top of each function, after basic validation: ```diff func (provider *OpenAIProvider) FileRetrieve(ctx context.Context, keys []schemas.Key, request *schemas.BifrostFileRetrieveRequest) (*schemas.BifrostFileRetrieveResponse, *schemas.BifrostError) { @@ - providerName := provider.GetProviderKey() + providerName := provider.GetProviderKey() + + if len(keys) == 0 { + return nil, providerUtils.NewBifrostOperationError("no keys provided for file retrieval", nil, providerName) + }and analogous checks for
FileDeleteandFileContent. This makes the contract explicit and avoids ambiguous(nil, nil)returns.
2840-3015: BatchRetrieve/BatchCancel multi-key retry pattern is good; same empty-keys concernFor
BatchRetrieveandBatchCancel, the sequential “try each key until success, rememberlastErrif all fail” behavior is appropriate, and fasthttp resources are correctly released along every path. The additional wiring ofExtraFields.RequestTypeandRequestCountson success is also correct.As with the File* methods, if
keysis empty these functions fall through and return(nil, lastErr)wherelastErrisnil. To avoid an ambiguous(nil, nil)outcome, it would be safer to explicitly error on an empty key slice at the top of each method, e.g.:func (provider *OpenAIProvider) BatchRetrieve(ctx context.Context, keys []schemas.Key, request *schemas.BifrostBatchRetrieveRequest) (*schemas.BifrostBatchRetrieveResponse, *schemas.BifrostError) { @@ - providerName := provider.GetProviderKey() + providerName := provider.GetProviderKey() + + if len(keys) == 0 { + return nil, providerUtils.NewBifrostOperationError("no keys provided for batch retrieve", nil, providerName) + }and similarly for
BatchCancel.
3017-3117: BatchResults JSONL parsing is well-structured; handle empty keys explicitlyThe
BatchResultsflow is nicely composed:
- Reusing
BatchRetrieveto locateoutput_file_id.- Trying each key until a successful
/v1/files/{output_file_id}/contentdownload.- Parsing JSONL with
providerUtils.ParseJSONL, accumulatingBatchResultItementries and surfacing per-line parse failures viaExtraFields.ParseErrorswhile logging them withlogger.Warn.Two robustness points:
Empty keys + BatchRetrieve
Ifkeysis empty,BatchRetrievecurrently returns(nil, nil), which would causebatchRespto benilhere and panic when dereferencingbatchResp.OutputFileID. Given you already depend onkeysbeing non-empty, it’s safer to enforce that locally:func (provider *OpenAIProvider) BatchResults(ctx context.Context, keys []schemas.Key, request *schemas.BifrostBatchResultsRequest) (*schemas.BifrostBatchResultsResponse, *schemas.BifrostError) { @@
- providerName := provider.GetProviderKey()
- providerName := provider.GetProviderKey()
- if len(keys) == 0 {
- return nil, providerUtils.NewBifrostOperationError("no keys provided for batch results", nil, providerName)
- }
- Empty keys in the download loop
Similarly, ifkeysis empty the download loop is skipped and you return(nil, lastErr)withlastErr == nil. Adding the guard above also covers this case and guarantees callers never see a(nil, nil)result.With that small guard, the JSONL parsing and multi-key retry behavior look solid.
core/providers/azure/azure.go (1)
1836-1848: Fix theBatchResultscompile error by correcting theBatchRetrievecall.There is a syntax error (
BatchRetr ieve) that breaks the build (expected ';', found ieve). This must be corrected to the proper method name.func (provider *AzureProvider) BatchResults(ctx context.Context, keys []schemas.Key, request *schemas.BifrostBatchResultsRequest) (*schemas.BifrostBatchResultsResponse, *schemas.BifrostError) { providerName := provider.GetProviderKey() - // First, retrieve the batch to get the output_file_id (using all keys) - batchResp, bifrostErr := provider.BatchRetr ieve(ctx, keys, &schemas.BifrostBatchRetrieveRequest{ + // First, retrieve the batch to get the output_file_id (using all keys) + batchResp, bifrostErr := provider.BatchRetrieve(ctx, keys, &schemas.BifrostBatchRetrieveRequest{ Provider: request.Provider, BatchID: request.BatchID, })core/providers/bedrock/bedrock.go (1)
1591-1696: Guard Bedrock multi-key operations against empty key slices to avoid(nil, nil)and potential panics.All these methods iterate over
keysand return(nil, nil)whenkeysis empty (the loop body never runs andlastErrremains nil). In particular,BatchResultsimmediately dereferencesbatchRespfromBatchRetrieve, so ifBatchRetrievereturns(nil, nil)whenkeysis empty,BatchResultswill panic. Returning a clear configuration error when no Bedrock keys are available is safer and consistent with other providers.func (provider *BedrockProvider) FileRetrieve(ctx context.Context, keys []schemas.Key, request *schemas.BifrostFileRetrieveRequest) (*schemas.BifrostFileRetrieveResponse, *schemas.BifrostError) { @@ if bucketName == "" || s3Key == "" { return nil, providerUtils.NewBifrostOperationError("invalid S3 URI format, expected s3://bucket/key", nil, providerName) } + if len(keys) == 0 { + return nil, providerUtils.NewConfigurationError("no Bedrock keys available for file retrieve operation", providerName) + } + var lastErr *schemas.BifrostError for _, key := range keys { @@ func (provider *BedrockProvider) FileDelete(ctx context.Context, keys []schemas.Key, request *schemas.BifrostFileDeleteRequest) (*schemas.BifrostFileDeleteResponse, *schemas.BifrostError) { @@ if bucketName == "" || s3Key == "" { return nil, providerUtils.NewBifrostOperationError("invalid S3 URI format, expected s3://bucket/key", nil, providerName) } + if len(keys) == 0 { + return nil, providerUtils.NewConfigurationError("no Bedrock keys available for file delete operation", providerName) + } + var lastErr *schemas.BifrostError for _, key := range keys { @@ func (provider *BedrockProvider) FileContent(ctx context.Context, keys []schemas.Key, request *schemas.BifrostFileContentRequest) (*schemas.BifrostFileContentResponse, *schemas.BifrostError) { @@ if bucketName == "" || s3Key == "" { return nil, providerUtils.NewBifrostOperationError("invalid S3 URI format, expected s3://bucket/key", nil, providerName) } + if len(keys) == 0 { + return nil, providerUtils.NewConfigurationError("no Bedrock keys available for file content operation", providerName) + } + var lastErr *schemas.BifrostError for _, key := range keys { @@ func (provider *BedrockProvider) BatchRetrieve(ctx context.Context, keys []schemas.Key, request *schemas.BifrostBatchRetrieveRequest) (*schemas.BifrostBatchRetrieveResponse, *schemas.BifrostError) { @@ if request.BatchID == "" { return nil, providerUtils.NewBifrostOperationError("batch_id (job ARN) is required", nil, providerName) } + if len(keys) == 0 { + return nil, providerUtils.NewConfigurationError("no Bedrock keys available for batch retrieve operation", providerName) + } + var lastErr *schemas.BifrostError for _, key := range keys { @@ func (provider *BedrockProvider) BatchCancel(ctx context.Context, keys []schemas.Key, request *schemas.BifrostBatchCancelRequest) (*schemas.BifrostBatchCancelResponse, *schemas.BifrostError) { @@ if request.BatchID == "" { return nil, providerUtils.NewBifrostOperationError("batch_id (job ARN) is required", nil, providerName) } + if len(keys) == 0 { + return nil, providerUtils.NewConfigurationError("no Bedrock keys available for batch cancel operation", providerName) + } + var lastErr *schemas.BifrostError for _, key := range keys { @@ func (provider *BedrockProvider) BatchResults(ctx context.Context, keys []schemas.Key, request *schemas.BifrostBatchResultsRequest) (*schemas.BifrostBatchResultsResponse, *schemas.BifrostError) { @@ providerName := provider.GetProviderKey() + if len(keys) == 0 { + return nil, providerUtils.NewConfigurationError("no Bedrock keys available for batch results operation", providerName) + } + // 
First, retrieve the batch to get the output S3 URI prefix (using all keys) batchResp, bifrostErr := provider.BatchRetrieve(ctx, keys, &schemas.BifrostBatchRetrieveRequest{Also applies to: 1698-1786, 1788-1885, 2349-2496, 2498-2608, 2610-2727
♻️ Duplicate comments (3)
ui/app/workspace/logs/views/filters.tsx (1)
63-63: Use therecalculatingstate to provide loading feedback.The
recalculatingloading state is destructured but never used. Users have no visual indication that recalculation is in progress, which can lead to confusion or duplicate clicks.Consider applying this enhancement to show loading state on the button:
<CommandItem className="cursor-pointer" onSelect={handleRecalculateCosts}> - <Calculator className="text-muted-foreground size-4" /> - <span className="text-sm">Recalculate costs</span> + <Calculator className="text-muted-foreground size-4" /> + <span className="text-sm">{recalculating ? "Recalculating..." : "Recalculate costs"}</span> </CommandItem>Alternatively, disable the button during recalculation:
-<CommandItem className="cursor-pointer" onSelect={handleRecalculateCosts}> +<CommandItem + className="cursor-pointer" + onSelect={handleRecalculateCosts} + disabled={recalculating} +> <Calculator className="text-muted-foreground size-4" /> <span className="text-sm">Recalculate costs</span> </CommandItem>core/bifrost.go (1)
2039-2141: Additional debug logging around fallbacks improves observability and avoids secretsThe new debug logs in
shouldTryFallbacks,handleRequest, andshouldContinueWithFallbacks:
- only include provider, model, counts, and error messages, and
- do not log key values or other sensitive fields,
which both fixes the prior CodeQL complaint about logging
SecretKeyand gives better insight into fallback behavior without increasing risk.core/providers/azure/azure.go (1)
1079-1164: Add an explicit empty-keys guard to Azure FileRetrieve to avoid(nil, nil)responses.If
keysis empty, the loop never executes and the function returns(nil, nil), which is ambiguous for callers and inconsistent with other Azure multi-key methods that now return a configuration error on empty key sets.func (provider *AzureProvider) FileRetrieve(ctx context.Context, keys []schemas.Key, request *schemas.BifrostFileRetrieveRequest) (*schemas.BifrostFileRetrieveResponse, *schemas.BifrostError) { providerName := provider.GetProviderKey() if request.FileID == "" { return nil, providerUtils.NewBifrostOperationError("file_id is required", nil, providerName) } + if len(keys) == 0 { + return nil, providerUtils.NewConfigurationError("no Azure keys available for file retrieve operation", providerName) + } + sendBackRawRequest := providerUtils.ShouldSendBackRawRequest(ctx, provider.sendBackRawRequest) sendBackRawResponse := providerUtils.ShouldSendBackRawResponse(ctx, provider.sendBackRawResponse) var lastErr *schemas.BifrostError for _, key := range keys {
🧹 Nitpick comments (6)
tests/integrations/tests/test_openai.py (1)
2105-2105: Model parameter in batch list query aligns with multi-key support implementationThe official OpenAI batch list endpoint accepts
limitas a query parameter, but the model parameter is not documented in the standard OpenAI API. This is a custom backend extension for the multi-key batch API support mentioned in this PR's objectives. The addition of"model": modelto the extra_query for batch list operations is intentional and necessary for the backend to select the appropriate provider key with the UseForBatchAPI flag enabled. Since this is part of a wrapper layer that filters batch operations by provider and model (unlike OpenAI's native API which only supportslimit,after, andorder), including the model parameter in the batch list query is appropriate and consistent with the multi-key support implementation.core/utils.go (1)
201-209: Batch/file RequestType helpers are correct and centralize logic
isBatchRequestTypeandisFileRequestTypecorrectly cover all current batch and fileRequestTypevariants and keep classification in one place, which should simplify future branching. Just remember to extend these helpers if new batch/file request types are added.tests/integrations/tests/test_bedrock.py (1)
1246-1275: Hard‑coded S3 bucket and emptyroleArncouple test to infraIn
test_16_batch_createyou now:
- Hard‑code
s3_bucket = "bifrost-batch-api-file-upload-testing".- Pass
roleArn=""intocreate_model_invocation_job.This makes the test rely on server‑side configuration (e.g., Bedrock
BatchS3Configand role) matching these values. That’s fine if the integration config is guaranteed to be in sync, but it will cause surprising failures if someone points tests at a different environment.It’s worth either:
- Documenting that this bucket name must match the server’s batch S3 config, or
- Reading the bucket name (and, if needed later, any role defaults) from
integration_settingsto keep the test environment‑agnostic.transports/bifrost-http/lib/config.go (1)
2138-2143: Redacted provider config now surfaces batch flags and Bedrock batch S3 configIn
GetProviderConfigRedacted:
- You re‑expose
UseForBatchAPI, defaulting nil tofalse, so external consumers can see which keys are batch‑eligible even in redacted views.- You also pass through
BedrockKeyConfig.BatchS3Configunmodified on the redacted config.Both seem intentional and useful for UI/config tooling. Just confirm:
- That treating missing
UseForBatchAPIasfalsein the redacted view matches your intended semantics (vs. leaving it nil/omitted), and- That all fields inside
BatchS3Configare indeed considered non‑sensitive for API consumers; if any become sensitive later (e.g., secrets rather than ARNs/paths), they’ll need their own redaction rules.Also applies to: 2260-2263
core/providers/openai/openai.go (1)
2244-2371: Serial multi-key FileList implementation looks solid; optional raw payload wiringThe serial pagination across keys via
NewSerialListHelper,GetCurrentKey, andBuildNextCursoris a clean way to implement cross-keyafterhandling, and the OpenAI-specific cursor (after=lastFileID) is respected. Empty-key cases also return an empty list withHasMore=false, which is a sensible contract.If you ever want
FileListto expose raw request/response like other OpenAI endpoints, you could capture and attach the raw payloads instead of discarding them:- var openAIResp OpenAIFileListResponse - _, _, bifrostErr = providerUtils.HandleProviderResponse(body, &openAIResp, nil, sendBackRawRequest, sendBackRawResponse) + var openAIResp OpenAIFileListResponse + rawRequest, rawResponse, bifrostErr := providerUtils.HandleProviderResponse(body, &openAIResp, nil, sendBackRawRequest, sendBackRawResponse) if bifrostErr != nil { return nil, bifrostErr } @@ - bifrostResp := &schemas.BifrostFileListResponse{ + bifrostResp := &schemas.BifrostFileListResponse{ @@ - } + } + if sendBackRawRequest { + bifrostResp.ExtraFields.RawRequest = rawRequest + } + if sendBackRawResponse { + bifrostResp.ExtraFields.RawResponse = rawResponse + }This keeps list-level behavior consistent with other endpoints that honor the
SendBackRaw*flags.core/providers/anthropic/anthropic.go (1)
1659-1662: Remove duplicate docstring. There are two docstrings for FileList. The first one ("lists files from all provided keys and aggregates results") is inaccurate - the implementation uses serial pagination, not aggregation. Keep only the second docstring.

```diff
-// FileList lists files from all provided keys and aggregates results.
 // FileList lists files using serial pagination across keys.
 // Exhausts all pages from one key before moving to the next.
 func (provider *AnthropicProvider) FileList(ctx context.Context, keys []schemas.Key, request *schemas.BifrostFileListRequest) (*schemas.BifrostFileListResponse, *schemas.BifrostError) {
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
ui/package-lock.json is excluded by !**/package-lock.json
ui/public/images/nebius.jpeg is excluded by !**/*.jpeg
📒 Files selected for processing (70)
Makefile(1 hunks)core/bifrost.go(16 hunks)core/changelog.md(1 hunks)core/go.mod(1 hunks)core/internal/testutil/account.go(6 hunks)core/internal/testutil/batch.go(5 hunks)core/providers/anthropic/anthropic.go(12 hunks)core/providers/azure/azure.go(8 hunks)core/providers/bedrock/batch.go(2 hunks)core/providers/bedrock/bedrock.go(22 hunks)core/providers/bedrock/s3.go(1 hunks)core/providers/cerebras/cerebras.go(2 hunks)core/providers/cohere/cohere.go(2 hunks)core/providers/elevenlabs/elevenlabs.go(2 hunks)core/providers/gemini/gemini.go(21 hunks)core/providers/groq/groq.go(2 hunks)core/providers/mistral/mistral.go(2 hunks)core/providers/nebius/nebius.go(2 hunks)core/providers/ollama/ollama.go(2 hunks)core/providers/openai/openai.go(14 hunks)core/providers/openrouter/openrouter.go(2 hunks)core/providers/parasail/parasail.go(2 hunks)core/providers/perplexity/perplexity.go(2 hunks)core/providers/sgl/sgl.go(2 hunks)core/providers/utils/pagination.go(1 hunks)core/providers/vertex/vertex.go(2 hunks)core/schemas/account.go(2 hunks)core/schemas/batch.go(5 hunks)core/schemas/bifrost.go(1 hunks)core/schemas/files.go(5 hunks)core/schemas/pagination.go(1 hunks)core/schemas/provider.go(1 hunks)core/utils.go(1 hunks)docs/integrations/bedrock-sdk/files-and-batch.mdx(1 hunks)docs/integrations/openai-sdk/files-and-batch.mdx(2 hunks)examples/plugins/hello-world/go.mod(1 hunks)framework/configstore/migrations.go(2 hunks)framework/configstore/rdb.go(9 hunks)framework/configstore/tables/key.go(4 hunks)framework/go.mod(1 hunks)plugins/governance/go.mod(1 hunks)plugins/jsonparser/go.mod(1 hunks)plugins/logging/go.mod(1 hunks)plugins/maxim/go.mod(1 hunks)plugins/mocker/go.mod(1 hunks)plugins/otel/go.mod(1 hunks)plugins/semanticcache/go.mod(1 hunks)plugins/telemetry/go.mod(1 hunks)tests/integrations/config.json(5 hunks)tests/integrations/tests/test_bedrock.py(2 hunks)tests/integrations/tests/test_openai.py(5 hunks)tests/scripts/1millogs/go.mod(1 hunks)transports/bifrost-http/handlers/inference.go(1 hunks)transports/bifrost-http/integrations/anthropic.go(1 hunks)transports/bifrost-http/lib/config.go(4 hunks)transports/changelog.md(1 hunks)transports/config.schema.json(1 hunks)transports/go.mod(1 hunks)ui/app/workspace/logs/page.tsx(4 hunks)ui/app/workspace/logs/views/filters.tsx(7 hunks)ui/app/workspace/logs/views/logsTable.tsx(3 hunks)ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx(4 hunks)ui/app/workspace/providers/views/providerKeyForm.tsx(0 hunks)ui/components/ui/checkbox.tsx(2 hunks)ui/components/ui/modelMultiselect.tsx(1 hunks)ui/components/ui/tagInput.tsx(1 hunks)ui/lib/schemas/providerForm.ts(3 hunks)ui/lib/types/config.ts(6 hunks)ui/lib/types/schemas.ts(3 hunks)ui/package.json(2 hunks)
💤 Files with no reviewable changes (1)
- ui/app/workspace/providers/views/providerKeyForm.tsx
✅ Files skipped from review due to trivial changes (1)
- transports/changelog.md
🚧 Files skipped from review as they are similar to previous changes (25)
- examples/plugins/hello-world/go.mod
- transports/bifrost-http/integrations/anthropic.go
- framework/configstore/migrations.go
- Makefile
- ui/app/workspace/logs/views/logsTable.tsx
- plugins/maxim/go.mod
- tests/scripts/1millogs/go.mod
- core/schemas/account.go
- ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx
- core/changelog.md
- ui/package.json
- framework/go.mod
- plugins/mocker/go.mod
- transports/config.schema.json
- transports/go.mod
- ui/lib/schemas/providerForm.ts
- ui/lib/types/schemas.ts
- ui/components/ui/checkbox.tsx
- core/schemas/pagination.go
- core/providers/nebius/nebius.go
- ui/components/ui/tagInput.tsx
- core/go.mod
- core/providers/ollama/ollama.go
- core/providers/openrouter/openrouter.go
- plugins/telemetry/go.mod
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
always check the stack if there is one for the current PR. do not give localized reviews for the PR, always see all changes in the light of the whole stack of PRs (if there is a stack, if there is no stack you can continue to make localized suggestions/reviews)
Files:
plugins/governance/go.moddocs/integrations/bedrock-sdk/files-and-batch.mdxcore/schemas/batch.goui/app/workspace/logs/views/filters.tsxtests/integrations/tests/test_bedrock.pycore/providers/bedrock/batch.goplugins/semanticcache/go.modcore/schemas/bifrost.gocore/schemas/files.gotests/integrations/tests/test_openai.pycore/providers/utils/pagination.gocore/providers/bedrock/s3.godocs/integrations/openai-sdk/files-and-batch.mdxui/app/workspace/logs/page.tsxtransports/bifrost-http/lib/config.gocore/providers/mistral/mistral.gotransports/bifrost-http/handlers/inference.goframework/configstore/tables/key.goui/lib/types/config.tscore/providers/parasail/parasail.gocore/schemas/provider.gotests/integrations/config.jsonframework/configstore/rdb.gocore/internal/testutil/account.gocore/utils.gocore/internal/testutil/batch.goplugins/jsonparser/go.modcore/bifrost.gocore/providers/sgl/sgl.gocore/providers/cerebras/cerebras.goplugins/otel/go.modcore/providers/elevenlabs/elevenlabs.gocore/providers/vertex/vertex.gocore/providers/openai/openai.goplugins/logging/go.modcore/providers/cohere/cohere.goui/components/ui/modelMultiselect.tsxcore/providers/azure/azure.gocore/providers/groq/groq.gocore/providers/perplexity/perplexity.gocore/providers/anthropic/anthropic.gocore/providers/gemini/gemini.gocore/providers/bedrock/bedrock.go
🧠 Learnings (4)
📚 Learning: 2025-12-09T17:07:42.007Z
Learnt from: qwerty-dvorak
Repo: maximhq/bifrost PR: 1006
File: core/schemas/account.go:9-18
Timestamp: 2025-12-09T17:07:42.007Z
Learning: In core/schemas/account.go, the HuggingFaceKeyConfig field within the Key struct is currently unused and reserved for future Hugging Face inference endpoint deployments. Do not flag this field as missing from OpenAPI documentation or require its presence in the API spec until the feature is actively implemented and used. When the feature is added, update the OpenAPI docs accordingly; otherwise, treat this field as non-breaking and not part of the current API surface.
Applied to files:
core/schemas/batch.gocore/providers/bedrock/batch.gocore/schemas/bifrost.gocore/schemas/files.gocore/providers/utils/pagination.gocore/providers/bedrock/s3.gotransports/bifrost-http/lib/config.gocore/providers/mistral/mistral.gotransports/bifrost-http/handlers/inference.goframework/configstore/tables/key.gocore/providers/parasail/parasail.gocore/schemas/provider.goframework/configstore/rdb.gocore/internal/testutil/account.gocore/utils.gocore/internal/testutil/batch.gocore/bifrost.gocore/providers/sgl/sgl.gocore/providers/cerebras/cerebras.gocore/providers/elevenlabs/elevenlabs.gocore/providers/vertex/vertex.gocore/providers/openai/openai.gocore/providers/cohere/cohere.gocore/providers/azure/azure.gocore/providers/groq/groq.gocore/providers/perplexity/perplexity.gocore/providers/anthropic/anthropic.gocore/providers/gemini/gemini.gocore/providers/bedrock/bedrock.go
📚 Learning: 2025-12-12T08:25:02.629Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: transports/bifrost-http/integrations/router.go:709-712
Timestamp: 2025-12-12T08:25:02.629Z
Learning: In transports/bifrost-http/**/*.go, update streaming response handling to align with OpenAI Responses API: use typed SSE events such as response.created, response.output_text.delta, response.done, etc., and do not rely on the legacy data: [DONE] termination marker. Note that data: [DONE] is only used by the older Chat Completions and Text Completions streaming APIs. Ensure parsers, writers, and tests distinguish SSE events from the [DONE] sentinel and handle each event type accordingly for correct stream termination and progress updates.
Applied to files:
transports/bifrost-http/lib/config.gotransports/bifrost-http/handlers/inference.go
📚 Learning: 2025-12-11T11:58:25.307Z
Learnt from: Pratham-Mishra04
Repo: maximhq/bifrost PR: 1000
File: core/providers/openai/responses.go:42-84
Timestamp: 2025-12-11T11:58:25.307Z
Learning: In core/providers/openai/responses.go (and related OpenAI response handling), document and enforce the API format constraint: if ResponsesReasoning != nil and the response contains content blocks, all content blocks should be treated as reasoning blocks by default. Implement type guards or parsing logic accordingly, and add unit tests to verify that when ResponsesReasoning is non-nil, content blocks are labeled as reasoning blocks. Include clear comments in the code explaining the rationale and ensure downstream consumers rely on this behavior.
Applied to files:
core/providers/openai/openai.go
📚 Learning: 2025-12-14T14:43:30.902Z
Learnt from: Radheshg04
Repo: maximhq/bifrost PR: 980
File: core/providers/openai/images.go:10-22
Timestamp: 2025-12-14T14:43:30.902Z
Learning: Enforce the OpenAI image generation SSE event type values across the OpenAI image flow in the repository: use "image_generation.partial_image" for partial chunks, "image_generation.completed" for the final result, and "error" for errors. Apply this consistently in schemas, constants, tests, accumulator routing, and UI code within core/providers/openai (and related Go files) to ensure uniform event typing and avoid mismatches.
Applied to files:
core/providers/openai/openai.go
🧬 Code graph analysis (23)
core/schemas/batch.go (2)
core/schemas/bifrost.go (1)
ModelProvider(32-32)core/schemas/models.go (1)
Model(109-129)
ui/app/workspace/logs/views/filters.tsx (2)
ui/lib/types/logs.ts (1)
LogFilters(286-301)ui/lib/store/apis/baseApi.ts (1)
getErrorMessage(174-199)
core/schemas/bifrost.go (3)
transports/bifrost-http/handlers/inference.go (3)
TranscriptionRequest(285-289)BatchCreateRequest(292-299)BatchListRequest(302-307)core/schemas/provider.go (1)
Provider(312-359)core/schemas/models.go (1)
Model(109-129)
core/schemas/files.go (1)
core/schemas/models.go (1)
Model(109-129)
core/providers/utils/pagination.go (3)
core/schemas/account.go (1)
Key(8-19)core/schemas/pagination.go (4)
SerialCursor(12-16)DecodeSerialCursor(32-53)NewSerialCursor(56-62)EncodeSerialCursor(19-28)core/schemas/logger.go (1)
Logger(28-55)
core/providers/bedrock/s3.go (1)
core/schemas/batch.go (1)
BatchRequestItem(31-37)
core/providers/mistral/mistral.go (4)
core/schemas/batch.go (2)
BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)core/schemas/bifrost.go (3)
BifrostError(461-470)BatchListRequest(101-101)FileListRequest(106-106)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)core/schemas/files.go (2)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)
transports/bifrost-http/handlers/inference.go (6)
core/utils.go (1)
Ptr(56-58)core/schemas/utils.go (1)
Ptr(16-18)core/schemas/batch.go (1)
BifrostBatchCreateRequest(65-83)core/schemas/provider.go (1)
Provider(312-359)core/schemas/bifrost.go (1)
ModelProvider(32-32)core/schemas/models.go (1)
Model(109-129)
framework/configstore/tables/key.go (2)
core/schemas/account.go (2)
BedrockKeyConfig(56-64)BatchS3Config(50-52)ui/lib/types/config.ts (2)
BedrockKeyConfig(63-71)BatchS3Config(58-60)
ui/lib/types/config.ts (2)
core/schemas/account.go (2)
S3BucketConfig(42-46)BatchS3Config(50-52)core/network/http.go (1)
GlobalProxyType(46-46)
core/schemas/provider.go (4)
core/schemas/account.go (1)
Key(8-19)core/schemas/batch.go (1)
BifrostBatchListRequest(118-133)core/schemas/bifrost.go (1)
BifrostError(461-470)ui/lib/types/logs.ts (1)
BifrostError(226-232)
core/utils.go (3)
core/schemas/bifrost.go (11)
RequestType(85-85)BatchCreateRequest(100-100)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)BatchResultsRequest(104-104)FileUploadRequest(105-105)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)FileContentRequest(109-109)ui/lib/types/config.ts (1)
RequestType(137-159)transports/bifrost-http/handlers/inference.go (2)
BatchCreateRequest(292-299)BatchListRequest(302-307)
core/internal/testutil/batch.go (1)
core/utils.go (1)
Ptr(56-58)
core/bifrost.go (3)
core/schemas/bifrost.go (19)
RequestType(85-85)BatchCreateRequest(100-100)FileUploadRequest(105-105)BifrostError(461-470)BifrostContextKeySelectedKeyID(122-122)BifrostContextKeySelectedKeyName(123-123)BifrostResponse(322-342)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)FileContentRequest(109-109)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)BatchResultsRequest(104-104)ModelProvider(32-32)BifrostContextKeyDirectKey(121-121)BifrostContextKeySkipKeySelection(127-127)ListModelsRequest(88-88)core/schemas/provider.go (1)
Provider(312-359)core/schemas/account.go (1)
Key(8-19)
core/providers/sgl/sgl.go (4)
core/schemas/files.go (3)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)BifrostFileRetrieveRequest(146-159)core/schemas/bifrost.go (7)
BifrostError(461-470)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)core/schemas/batch.go (3)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)
core/providers/vertex/vertex.go (3)
core/schemas/batch.go (5)
BifrostBatchListRequest(118-133)BifrostBatchRetrieveRequest(150-159)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)core/schemas/bifrost.go (4)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/openai/openai.go (8)
core/schemas/files.go (9)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)FileObject(40-50)BifrostFileRetrieveRequest(146-159)BifrostFileRetrieveResponse(167-183)BifrostFileDeleteRequest(186-198)BifrostFileDeleteResponse(206-212)BifrostFileContentRequest(215-227)BifrostFileContentResponse(235-241)core/schemas/bifrost.go (9)
BifrostError(461-470)OpenAI(35-35)FileListRequest(106-106)BifrostResponseExtraFields(390-401)RequestType(85-85)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)FileContentRequest(109-109)BatchListRequest(101-101)core/providers/utils/utils.go (5)
ShouldSendBackRawResponse(615-620)ShouldSendBackRawRequest(607-612)NewBifrostOperationError(564-575)CheckAndDecodeBody(488-496)HandleProviderResponse(359-442)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)core/providers/openai/files.go (3)
OpenAIFileListResponse(26-30)OpenAIFileResponse(14-23)OpenAIFileDeleteResponse(33-37)core/providers/openai/errors.go (1)
ParseOpenAIError(10-42)core/schemas/batch.go (4)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)BatchRequestCounts(40-48)core/providers/openai/batch.go (2)
OpenAIBatchListResponse(51-57)OpenAIBatchResponse(20-41)
core/providers/cohere/cohere.go (3)
core/schemas/batch.go (4)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)core/schemas/bifrost.go (3)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/azure/azure.go (6)
core/schemas/bifrost.go (3)
BifrostError(461-470)BifrostResponseExtraFields(390-401)RequestType(85-85)core/providers/utils/utils.go (6)
NewConfigurationError(550-560)ShouldSendBackRawRequest(607-612)ShouldSendBackRawResponse(615-620)HandleProviderResponse(359-442)SetExtraHeaders(179-209)MakeRequestWithContext(40-94)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)core/utils.go (1)
Ptr(56-58)core/schemas/utils.go (1)
Ptr(16-18)core/providers/openai/errors.go (1)
ParseOpenAIError(10-42)
core/providers/groq/groq.go (4)
core/schemas/account.go (1)
Key(8-19)core/schemas/batch.go (8)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveRequest(150-159)BifrostBatchRetrieveResponse(167-202)BifrostBatchCancelRequest(205-214)BifrostBatchCancelResponse(222-231)BifrostBatchResultsRequest(234-246)BifrostBatchResultsResponse(285-294)core/schemas/bifrost.go (7)
BifrostError(461-470)BatchListRequest(101-101)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)core/schemas/files.go (8)
BifrostFileListRequest(109-128)BifrostFileListResponse(136-143)BifrostFileRetrieveRequest(146-159)BifrostFileRetrieveResponse(167-183)BifrostFileDeleteRequest(186-198)BifrostFileDeleteResponse(206-212)BifrostFileContentRequest(215-227)BifrostFileContentResponse(235-241)
core/providers/perplexity/perplexity.go (3)
core/schemas/batch.go (1)
BifrostBatchListRequest(118-133)core/schemas/bifrost.go (2)
BifrostError(461-470)BatchListRequest(101-101)core/providers/utils/utils.go (1)
NewUnsupportedOperationError(455-467)
core/providers/anthropic/anthropic.go (4)
core/schemas/batch.go (6)
BifrostBatchListRequest(118-133)BifrostBatchListResponse(136-147)BifrostBatchRetrieveResponse(167-202)BifrostBatchRetrieveRequest(150-159)BifrostBatchCancelRequest(205-214)BatchRequestCounts(40-48)core/providers/utils/utils.go (3)
CheckOperationAllowed(473-485)HandleProviderResponse(359-442)ShouldSendBackRawRequest(607-612)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)core/providers/anthropic/types.go (4)
AnthropicFileListResponse(516-521)AnthropicFilesAPIBetaHeader(17-17)AnthropicFileResponse(505-513)AnthropicFileDeleteResponse(524-527)
core/providers/gemini/gemini.go (6)
core/schemas/utils.go (1)
ParseModelString(23-36)core/schemas/bifrost.go (11)
Gemini(48-48)BifrostError(461-470)RequestType(85-85)BatchListRequest(101-101)BifrostResponseExtraFields(390-401)BatchRetrieveRequest(102-102)BatchCancelRequest(103-103)BatchResultsRequest(104-104)FileListRequest(106-106)FileRetrieveRequest(107-107)FileDeleteRequest(108-108)core/providers/utils/utils.go (3)
RequestMetadata(598-602)NewBifrostOperationError(564-575)SetExtraHeaders(179-209)core/schemas/provider.go (1)
Provider(312-359)core/providers/gemini/types.go (1)
GeminiBatchListResponse(1607-1610)core/providers/utils/pagination.go (1)
NewSerialListHelper(19-34)
🪛 GitHub Actions: PR #1104
core/providers/azure/azure.go
[error] 1842-1842: azure.go:1842:46: expected ';', found ieve
Summary
Adds explicit option to enable or disable batch APIs on a provider key.
Changes
Type of change
Affected areas
How to test
Describe the steps to validate this change. Include commands and expected outcomes.
If adding new configs or environment variables, document them here.
Screenshots/Recordings
If UI changes, add before/after screenshots or short clips.
Breaking changes
If yes, describe impact and migration instructions.
Related issues
Link related issues and discussions. Example: Closes #123
Security considerations
Note any security implications (auth, secrets, PII, sandboxing, etc.).
Checklist
I have read docs/contributing/README.md and followed the guidelines