
Feature: Add local OpenAI-compatible model discovery to /model #201

Merged
kevincodex1 merged 4 commits into Gitlawb:main from technomancer702:feature/local-openai-model-discovery
Apr 5, 2026

Conversation

@technomancer702 (Contributor) commented Apr 2, 2026

Summary

This PR extends model selection beyond Anthropic-only entries when OpenClaude is pointed at a local OpenAI-compatible server.

It adds support for discovering models from local /v1/models endpoints, caches those results per local provider/base URL, and uses them in /model so users can switch directly between locally hosted models served by LM Studio and other OpenAI-compatible servers.

What changed

  • Added local OpenAI-compatible model discovery via /models
  • Scoped cached model options by active provider/base URL
  • Refreshed local model discovery when /model opens so the picker is current in the active session
  • Reused shared local-provider labeling across the UI
  • Generalized local-provider naming beyond LM Studio to common local OpenAI-compatible servers
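The discovery-and-caching flow described above can be sketched roughly as follows. The helper names and the cache-key shape here are illustrative, not the PR's actual API:

```typescript
// Illustrative sketch: discover models from a local OpenAI-compatible server
// and cache them scoped by provider + base URL.

interface DiscoveredModel {
  id: string;
}

// Cache keyed by provider + normalized base URL so one local backend's
// models never leak into another provider session.
const modelCache = new Map<string, string[]>();

function cacheKey(provider: string, baseUrl: string): string {
  return `${provider}::${baseUrl.replace(/\/+$/, "")}`;
}

async function listOpenAICompatibleModels(baseUrl: string): Promise<string[]> {
  // OpenAI-compatible servers expose GET {baseUrl}/models,
  // returning { data: [{ id, ... }] }.
  const res = await fetch(`${baseUrl.replace(/\/+$/, "")}/models`);
  if (!res.ok) throw new Error(`model discovery failed: ${res.status}`);
  const body = (await res.json()) as { data: DiscoveredModel[] };
  return body.data.map((m) => m.id);
}

async function refreshLocalModels(provider: string, baseUrl: string): Promise<string[]> {
  const ids = await listOpenAICompatibleModels(baseUrl);
  modelCache.set(cacheKey(provider, baseUrl), ids);
  return ids;
}
```

Scoping the cache by base URL (not just provider name) is what keeps two different local servers that both identify as `openai` from sharing stale model lists.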

User impact

Before:

  • /model mostly showed Anthropic-first options, even when OpenClaude was configured against LM Studio or another local OpenAI-compatible backend

After:

  • /model shows models reported by the active local OpenAI-compatible server
  • Local providers are labeled more clearly in provider summaries and startup UI

Implementation notes

  • Local OpenAI-compatible model discovery is handled in bootstrap and cached in global config
  • Cache entries are scoped so models from one local backend do not leak into another provider session
  • /model triggers a refresh for scoped local OpenAI-compatible providers before opening the picker
  • Added provider-label detection for common local servers including:
    • LM Studio
    • Ollama
    • LocalAI
    • Jan
    • KoboldCpp
    • llama.cpp
    • vLLM
    • Open WebUI
    • text-generation-webui
  • Falls back to Local OpenAI-compatible when the server is local but not recognized
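The label detection could look something like this sketch; the actual helper name and matching rules in the PR may differ:

```typescript
// Illustrative sketch: map a server identification string (e.g. from a
// response header or configured hint) to a friendly provider label.

const LOCAL_PROVIDER_PATTERNS: Array<[RegExp, string]> = [
  [/lm[\s-]?studio/i, "LM Studio"],
  [/ollama/i, "Ollama"],
  [/localai/i, "LocalAI"],
  [/\bjan\b/i, "Jan"],
  [/koboldcpp/i, "KoboldCpp"],
  [/llama[.\-]cpp/i, "llama.cpp"],
  [/vllm/i, "vLLM"],
  [/open[\s-]?webui/i, "Open WebUI"],
  [/text-generation-webui/i, "text-generation-webui"],
];

function detectLocalProviderLabel(serverHint: string): string {
  for (const [pattern, label] of LOCAL_PROVIDER_PATTERNS) {
    if (pattern.test(serverHint)) return label;
  }
  // Server is local but not one we recognize.
  return "Local OpenAI-compatible";
}
```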

Testing

Passed:

  • bun test src/utils/providerDiscovery.test.ts src/services/api/providerConfig.local.test.ts src/commands/provider/provider.test.tsx

Focused coverage includes:

  • local /models discovery
  • cache scoping for local OpenAI-compatible providers
  • provider summary labeling for generic local backends
  • LM Studio and other local-provider label detection

Notes

  • This change is intentionally limited to local OpenAI-compatible providers and does not alter remote OpenAI-compatible provider behavior.

@tunnckoCore (Contributor) commented Apr 2, 2026

Sounds good. We should support discovery for OpenAI too. Does this fix that as well? I mean, currently they are hardcoded to codexplan and codexspark.


Does this change actually work for any OpenAI-compatible server, including OpenAI itself? Or does it have some special paths?

@technomancer702 (Contributor, Author) commented Apr 2, 2026

Sounds good. We should support discovery for OpenAI too. Does this fix that as well? I mean, currently they are hardcoded to codexplan and codexspark.

Does this change actually work for any OpenAI-compatible server, including OpenAI itself? Or does it have some special paths?

This PR is intentionally limited to local OpenAI-compatible providers, but something similar could definitely be implemented for remote OpenAI providers in a future update.

@Vasanthdev2004 (Collaborator) left a comment

I took a closer pass on this and I don't think it's ready to merge yet. Two issues stood out:

  1. Local OpenAI-compatible model discovery can accidentally flip a local backend onto the Codex transport.
    The PR now exposes raw /v1/models IDs directly in /model, but provider resolution still special-cases names like gpt-5.4 and codexplan as Codex aliases regardless of base URL. I verified that with CLAUDE_CODE_USE_OPENAI=1 and OPENAI_BASE_URL=http://127.0.0.1:8080/v1, selecting gpt-5.4 resolves to codex_responses, and /provider summary starts reporting Codex for what is actually a local OpenAI-compatible setup.

    Relevant paths:

    • src/utils/providerDiscovery.ts
    • src/utils/model/modelOptions.ts
    • src/services/api/providerConfig.ts
    • src/utils/model/providers.ts
    • src/commands/provider/provider.tsx

    I think this needs a guard before merge, e.g. don't apply Codex alias routing when the configured base URL is explicitly local/non-Codex, or filter/remap conflicting discovered IDs.

  2. /model is now synchronously blocked on refreshing the local /models endpoint.
    The command now does await fetchBootstrapData() before the picker renders for local OpenAI-compatible scopes, and that local discovery path waits on listOpenAICompatibleModels() with a 5s timeout. So if the saved local backend is down, sleeping, or misconfigured, opening /model stalls before the picker even appears.

    That's especially rough because saved /provider profiles are applied at startup, so /model is one of the main recovery paths when a local provider is broken.

    Relevant paths:

    • src/commands/model/model.tsx
    • src/services/api/bootstrap.ts
    • src/utils/providerDiscovery.ts

    I think this refresh should be backgrounded, or the picker should open immediately using the last scoped cache and refresh asynchronously.
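That suggestion could be sketched like this (names are illustrative, not the PR's actual API): render the picker immediately from the last scoped cache and let the refresh complete in the background.

```typescript
// Illustrative sketch: open the model picker without blocking on the
// local /models refresh, then update the picker if fresh results arrive.

type ModelIds = string[];

async function openModelPicker(
  cached: ModelIds | undefined,
  refresh: () => Promise<ModelIds>,
  render: (models: ModelIds, stale: boolean) => void,
): Promise<void> {
  // Render right away with whatever we have; never block on the network.
  render(cached ?? [], true);

  // Kick off the refresh without awaiting it before first render.
  refresh()
    .then((fresh) => render(fresh, false))
    .catch(() => {
      // Backend down, sleeping, or misconfigured: the stale picker
      // stays usable, which matters when /model is the recovery path.
    });
}
```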

Non-blocking note:

  • The /provider integration is only partial right now. buildCurrentProviderSummary() uses the new local-provider label helper, but the saved-profile confirmation path still hardcodes OpenAI-compatible for openai profiles. Not a merge blocker by itself, but it leaves /provider UX inconsistent with the new startup/current-provider labeling.
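For reference, the base-URL guard suggested in point 1 could take a shape like the following sketch. The alias set, helper names, and transport strings are illustrative (borrowing `codex_responses` and the model IDs mentioned above), not the project's actual resolution code:

```typescript
// Illustrative sketch: skip Codex alias routing when the configured
// base URL points at an explicitly local, non-Codex endpoint.

const CODEX_ALIASES = new Set(["gpt-5.4", "codexplan", "codexspark"]);

function isLocalBaseUrl(baseUrl: string | undefined): boolean {
  if (!baseUrl) return false;
  try {
    const host = new URL(baseUrl).hostname;
    return (
      host === "localhost" ||
      host === "127.0.0.1" ||
      host === "[::1]" ||
      host.endsWith(".local")
    );
  } catch {
    return false; // unparseable base URL: fall through to default routing
  }
}

function resolveTransport(
  modelId: string,
  baseUrl?: string,
): "codex_responses" | "chat_completions" {
  // Guard: never flip a local OpenAI-compatible backend onto the Codex
  // transport just because it exposes a Codex-looking model ID.
  if (CODEX_ALIASES.has(modelId) && !isLocalBaseUrl(baseUrl)) {
    return "codex_responses";
  }
  return "chat_completions";
}
```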

@technomancer702 (Contributor, Author)

Addressed the requested changes in 54a355d.

What changed:

  • Guarded Codex transport/provider routing behind the configured base URL so explicit local OpenAI-compatible endpoints no longer flip to Codex just because they expose IDs like gpt-5.4 or codexplan.
  • Changed /model to open immediately for local OpenAI-compatible scopes and refresh model discovery in the background instead of awaiting the /models call before rendering.
  • Aligned saved-profile labeling with the local-provider labeling already used in current-provider/startup summaries.

Added regression coverage for:

  • local OpenAI-compatible + gpt-5.4 staying on chat_completions/openai
  • Codex aliases still resolving to Codex when no non-Codex base URL is configured
  • /provider summary/save messaging for local OpenAI-compatible endpoints
  • /model not awaiting local discovery refresh before opening

Validated with:
bun test src/services/api/providerConfig.local.test.ts src/utils/model/providers.test.ts src/commands/provider/provider.test.tsx src/commands/model/model.test.tsx src/services/api/codexShim.test.ts

@kevincodex1 (Contributor)

Please fix conflicts

@technomancer702 (Contributor, Author)

@kevincodex1 Conflicts resolved

@kevincodex1 (Contributor) left a comment

This is great! One more look, @Vasanthdev2004 @gnanam1990

@Vasanthdev2004 (Collaborator) left a comment

Rechecked this on the latest head (ca8d62e) and the earlier blockers look resolved now.

What I verified:

  • local OpenAI-compatible endpoints no longer get flipped onto the Codex transport just because they expose IDs like gpt-5.4 or codexplan
  • /model opens immediately for local OpenAI-compatible scopes instead of blocking on the /models refresh
  • saved-profile labeling is now aligned with the local-provider labeling used in the current-provider/startup summaries

I reran:

  • bun test ./src/services/api/providerConfig.local.test.ts ./src/utils/providerDiscovery.test.ts ./src/utils/model/providers.test.ts ./src/commands/model/model.test.tsx ./src/commands/provider/provider.test.tsx ./src/services/api/codexShim.test.ts
  • bun run build
  • bun run smoke

All of those passed on the current head, so from my side this looks good to merge.

@kevincodex1 kevincodex1 merged commit c534aa5 into Gitlawb:main Apr 5, 2026
1 check passed
euxaristia pushed a commit to euxaristia/openclaude that referenced this pull request Apr 13, 2026
…wb#201)

* Add local OpenAI-compatible model discovery to /model

* Guard local OpenAI model discovery from Codex routing

* Preserve remote OpenAI Codex alias behavior