Feature: Add local OpenAI-compatible model discovery to /model #201
Conversation
This PR is intentionally limited to local OpenAI-compatible providers, but I think it would be possible to implement something similar for remote OpenAI providers in a future update.
Vasanthdev2004
left a comment
I took a closer pass on this and I don't think it's ready to merge yet. Two issues stood out:
- Local OpenAI-compatible model discovery can accidentally flip a local backend onto the Codex transport.
  The PR now exposes raw `/v1/models` IDs directly in `/model`, but provider resolution still special-cases names like `gpt-5.4` and `codexplan` as Codex aliases regardless of base URL. I verified that with `CLAUDE_CODE_USE_OPENAI=1` and `OPENAI_BASE_URL=http://127.0.0.1:8080/v1`, selecting `gpt-5.4` resolves to `codex_responses`, and the `/provider` summary starts reporting `Codex` for what is actually a local OpenAI-compatible setup.
  Relevant paths:
  - `src/utils/providerDiscovery.ts`
  - `src/utils/model/modelOptions.ts`
  - `src/services/api/providerConfig.ts`
  - `src/utils/model/providers.ts`
  - `src/commands/provider/provider.tsx`
  I think this needs a guard before merge, e.g. don't apply Codex alias routing when the configured base URL is explicitly local/non-Codex, or filter/remap conflicting discovered IDs.
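A guard along these lines could look like the following sketch. All names here (`isLocalBaseUrl`, `resolveTransport`, `CODEX_ALIASES`) are illustrative assumptions, not the PR's actual identifiers:

```typescript
// Sketch of a base-URL guard for Codex alias routing (hypothetical names).
export function isLocalBaseUrl(baseUrl: string): boolean {
  try {
    const host = new URL(baseUrl).hostname;
    // Node's WHATWG URL keeps brackets on IPv6 hostnames, hence "[::1]".
    return host === "localhost" || host === "127.0.0.1" || host === "[::1]";
  } catch {
    return false; // unparseable URL: treat as non-local, keep current behavior
  }
}

// IDs the resolver currently special-cases, per the review above.
const CODEX_ALIASES = new Set(["gpt-5.4", "codexplan"]);

export function resolveTransport(modelId: string, baseUrl: string): string {
  // Only apply Codex alias routing for non-local base URLs; a local
  // OpenAI-compatible server keeps the plain OpenAI transport even if it
  // happens to expose a conflicting model ID.
  if (CODEX_ALIASES.has(modelId) && !isLocalBaseUrl(baseUrl)) {
    return "codex_responses";
  }
  return "openai_chat";
}
```

The same predicate could also drive the filter/remap option: drop or rename conflicting discovered IDs when `isLocalBaseUrl` is true.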
- `/model` is now synchronously blocked on refreshing the local `/models` endpoint.
  The command now does `await fetchBootstrapData()` before the picker renders for local OpenAI-compatible scopes, and that local discovery path waits on `listOpenAICompatibleModels()` with a 5s timeout. So if the saved local backend is down, sleeping, or misconfigured, opening `/model` stalls before the picker even appears. That's especially rough because saved `/provider` profiles are applied at startup, so `/model` is one of the main recovery paths when a local provider is broken.
  Relevant paths:
  - `src/commands/model/model.tsx`
  - `src/services/api/bootstrap.ts`
  - `src/utils/providerDiscovery.ts`
  I think this refresh should be backgrounded, or the picker should open immediately using the last scoped cache and refresh asynchronously.
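One way to background the refresh, sketched under assumed names (`openModelPicker`, the `render` callback, and the model shape are illustrative, not the PR's actual API):

```typescript
// Sketch: render the picker immediately from the last scoped cache, then
// update it when the (timeout-bounded) network refresh resolves.
type Model = { id: string };

export async function openModelPicker(
  cached: Model[],
  refresh: () => Promise<Model[]>,
  render: (models: Model[], stale: boolean) => void,
): Promise<void> {
  render(cached, true); // picker appears at once, marked as possibly stale
  try {
    const fresh = await refresh(); // e.g. the existing 5s-timeout discovery
    render(fresh, false);
  } catch {
    // Backend down or misconfigured: keep the cached list so /model
    // remains usable as a recovery path.
  }
}
```

With this shape, a dead backend costs nothing at open time; the user can re-scope away from the broken provider while the refresh fails in the background.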
Non-blocking note:
- The `/provider` integration is only partial right now. `buildCurrentProviderSummary()` uses the new local-provider label helper, but the saved-profile confirmation path still hardcodes `OpenAI-compatible` for `openai` profiles. Not a merge blocker by itself, but it leaves the `/provider` UX inconsistent with the new startup/current-provider labeling.
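A minimal fix is to route every `/provider` surface through one label helper. This is a sketch under assumed names (`localProviderLabel`, `providerLabel`, and the profile shape are hypothetical), not the PR's actual code:

```typescript
// Hypothetical shared labeling path so saved-profile confirmations match the
// startup/current-provider summaries instead of hardcoding a string.
type Profile = { kind: "openai" | "anthropic"; baseUrl?: string };

export function localProviderLabel(_baseUrl: string): string {
  // A fuller implementation might recognize known servers (LM Studio, etc.)
  // from the URL; this sketch always returns the generic label.
  return "Local OpenAI-compatible";
}

export function providerLabel(profile: Profile): string {
  if (profile.kind === "openai" && profile.baseUrl) {
    return localProviderLabel(profile.baseUrl); // shared path, no hardcoding
  }
  return profile.kind === "openai" ? "OpenAI-compatible" : "Anthropic";
}
```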
Addressed the requested changes in 54a355d. What changed:
Added regression coverage for:
Validated with:
please fix conflicts
@kevincodex1 Conflicts resolved
kevincodex1
left a comment
This is great! One more look @Vasanthdev2004 @gnanam1990
Vasanthdev2004
left a comment
Rechecked this on the latest head (ca8d62e) and the earlier blockers look resolved now.
What I verified:
- local OpenAI-compatible endpoints no longer get flipped onto the Codex transport just because they expose IDs like `gpt-5.4` or `codexplan`
- `/model` opens immediately for local OpenAI-compatible scopes instead of blocking on the `/models` refresh
- saved-profile labeling is now aligned with the local-provider labeling used in the current-provider/startup summaries
I reran:
- `bun test ./src/services/api/providerConfig.local.test.ts ./src/utils/providerDiscovery.test.ts ./src/utils/model/providers.test.ts ./src/commands/model/model.test.tsx ./src/commands/provider/provider.test.tsx ./src/services/api/codexShim.test.ts`
- `bun run build`
- `bun run smoke`
All of those passed on the current head, so from my side this looks good to merge.
…wb#201)
* Add local OpenAI-compatible model discovery to /model
* Guard local OpenAI model discovery from Codex routing
* Preserve remote OpenAI Codex alias behavior

Summary
This PR extends model selection beyond Anthropic-only entries when OpenClaude is pointed at a local OpenAI-compatible server.
It adds support for discovering models from local `/v1/models` endpoints, caches those results per local provider/base URL, and uses them in `/model` so users can switch directly between locally hosted models on LM Studio and other OpenAI-compatible servers.
What changed
- `/models` results are refreshed before `/model` opens so the picker is current in the active session
User impact
Before:
- `/model` mostly showed Anthropic-first options, even when OpenClaude was configured against LM Studio or another local OpenAI-compatible backend
After:
- `/model` shows models reported by the active local OpenAI-compatible server
Implementation notes
- `/model` triggers a refresh for scoped local OpenAI-compatible providers before opening the picker
- falls back to the label `Local OpenAI-compatible` when the server is local but not recognized
Testing
Passed:
- `bun test src/utils/providerDiscovery.test.ts src/services/api/providerConfig.local.test.ts src/commands/provider/provider.test.tsx`
Focused coverage includes:
- `/models` discovery
Notes
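As a note, the discovery-and-cache flow described in the summary can be sketched roughly as follows. The function name `listOpenAICompatibleModels` matches the one mentioned in the review, but the body, cache shape, and response handling here are assumptions, not the PR's implementation:

```typescript
// Sketch: fetch the OpenAI-compatible model list with a bounded timeout and
// cache results per base URL, falling back to the cache when the backend
// is unreachable.
type ModelEntry = { id: string };

export const modelCache = new Map<string, ModelEntry[]>(); // keyed by base URL

export async function listOpenAICompatibleModels(
  baseUrl: string,
  timeoutMs = 5000,
): Promise<ModelEntry[]> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // GET {baseUrl}/models, e.g. http://127.0.0.1:8080/v1/models
    const res = await fetch(`${baseUrl.replace(/\/$/, "")}/models`, {
      signal: controller.signal,
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const body = (await res.json()) as { data: ModelEntry[] };
    modelCache.set(baseUrl, body.data);
    return body.data;
  } catch (err) {
    // Dead or misconfigured backend: serve the last cached list if any.
    const cached = modelCache.get(baseUrl);
    if (cached) return cached;
    throw err;
  } finally {
    clearTimeout(timer);
  }
}
```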