feat(provider): align provider and model workflows #324
kevincodex1 merged 8 commits into Gitlawb:main
Conversation
Pull request overview
This PR aligns the /provider and /model workflows by introducing persisted, in-app managed provider profiles (add/edit/delete/activate), applying the active profile into the runtime environment merge, and refreshing OpenAI-compatible model options from the active provider endpoint before showing the model picker.
Changes:
- Added persisted provider profile management (including active profile selection) and applied the active profile into runtime env merges.
- Added OpenAI-compatible model discovery (with Ollama tags fallback) and wired `/model` to refresh/discover models and use a per-profile cache.
- Updated model options resolution to read additional OpenAI model options from the active provider profile cache.
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| src/utils/providerProfiles.ts | New provider profile CRUD + active profile env application + per-profile model option cache. |
| src/utils/model/openaiModelDiscovery.ts | New utility to discover models from OpenAI-compatible /models (with Ollama /api/tags fallback). |
| src/utils/model/modelOptions.ts | Reads additional model options from the active profile’s OpenAI cache when applicable. |
| src/utils/managedEnv.ts | Ensures active provider profile is applied after settings/env merges so it “wins”. |
| src/utils/config.ts | Extends global config shape with provider profiles and per-profile OpenAI model cache. |
| src/components/ProviderManager.tsx | New TUI flow for managing provider profiles. |
| src/commands/provider/provider.tsx | Replaces /provider wizard entrypoint with the ProviderManager flow. |
| src/commands/provider/index.ts | Updates command metadata (description; removes immediate execution hook). |
| src/commands/model/model.tsx | Refreshes OpenAI-compatible discovered models before showing picker and persists cache. |
```ts
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.CLAUDE_CODE_USE_BEDROCK
delete process.env.CLAUDE_CODE_USE_VERTEX
delete process.env.CLAUDE_CODE_USE_FOUNDRY
```
applyProviderProfileToProcessEnv() clears several provider-routing flags, but it does not clear CLAUDE_CODE_USE_GEMINI or CLAUDE_CODE_USE_GITHUB. Because getAPIProvider() checks GEMINI/GITHUB before OPENAI, a previously-set GEMINI/GITHUB flag will override the selected provider profile and prevent the profile from taking effect. Clear those flags here as well (and any other higher-precedence routing flags you want the profile to override).
Suggested change:

```diff
  delete process.env.CLAUDE_CODE_USE_FOUNDRY
+ delete process.env.CLAUDE_CODE_USE_GEMINI
+ delete process.env.CLAUDE_CODE_USE_GITHUB
```
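A table-driven variant would make this clearing exhaustive by construction, so a newly added routing flag only has to be registered in one place. This is a sketch under assumed names (`PROVIDER_ROUTING_FLAGS` and `clearProviderRoutingFlags` are not the PR's actual code):

```typescript
// Hypothetical helper: clear every provider-routing flag in one pass so the
// active profile is authoritative regardless of pre-existing env state.
const PROVIDER_ROUTING_FLAGS = [
  'CLAUDE_CODE_USE_OPENAI',
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_VERTEX',
  'CLAUDE_CODE_USE_FOUNDRY',
  'CLAUDE_CODE_USE_GEMINI',
  'CLAUDE_CODE_USE_GITHUB',
] as const

function clearProviderRoutingFlags(env: NodeJS.ProcessEnv = process.env): void {
  for (const flag of PROVIDER_ROUTING_FLAGS) {
    delete env[flag]
  }
}
```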
```ts
process.env.ANTHROPIC_MODEL = profile.model
if (profile.provider === 'anthropic') {
  process.env.ANTHROPIC_BASE_URL = profile.baseUrl

  if (profile.apiKey) {
    process.env.ANTHROPIC_API_KEY = profile.apiKey
  } else {
    delete process.env.ANTHROPIC_API_KEY
  }

  delete process.env.OPENAI_BASE_URL
  delete process.env.OPENAI_API_BASE
  delete process.env.OPENAI_MODEL
  delete process.env.OPENAI_API_KEY
  return
}

process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL = profile.baseUrl
process.env.OPENAI_MODEL = profile.model

if (profile.apiKey) {
```
applyProviderProfileToProcessEnv() sets process.env.ANTHROPIC_MODEL unconditionally, even when applying an OpenAI-compatible profile. Since ANTHROPIC_MODEL is treated as a global model override (and is consulted before OPENAI_MODEL in getUserSpecifiedModelSetting), this can create confusing/incorrect routing if other code expects ANTHROPIC_MODEL to only contain Anthropic model IDs. Consider only setting ANTHROPIC_MODEL for Anthropic profiles (and optionally deleting it when switching to OpenAI) so OPENAI_MODEL is the source of truth for OpenAI-compatible profiles.
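One way to keep the model override scoped per provider, as the comment suggests, is a small branch that sets exactly one of the two variables and deletes the other on each switch. This is a sketch; `applyModelEnv` and the pared-down profile shape are assumptions, not the PR's code:

```typescript
// Sketch: ANTHROPIC_MODEL is only set for Anthropic profiles, and the
// competing override is deleted on every switch so exactly one is live.
type MinimalProfile = { provider: 'openai' | 'anthropic'; model: string }

function applyModelEnv(
  profile: MinimalProfile,
  env: NodeJS.ProcessEnv = process.env,
): void {
  if (profile.provider === 'anthropic') {
    env.ANTHROPIC_MODEL = profile.model
    delete env.OPENAI_MODEL
  } else {
    env.OPENAI_MODEL = profile.model
    delete env.ANTHROPIC_MODEL // OPENAI_MODEL is the source of truth here
  }
}
```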
```tsx
  });
  return <SetModelAndClose args={args} onDone={onDone} />;
}
await refreshOpenAIModelOptionsCache();
```
refreshOpenAIModelOptionsCache() is awaited before rendering the picker. With DISCOVERY_TIMEOUT_MS=5000 and sequential fallbacks (multiple /models URLs + optional Ollama tags), /model can block for up to ~15s on unreachable endpoints, even though failures are swallowed. Consider making discovery non-blocking (render picker immediately and refresh cache in the background), or parallelizing requests / lowering timeouts so the command stays responsive.
Suggested change:

```diff
- await refreshOpenAIModelOptionsCache();
+ void refreshOpenAIModelOptionsCache().catch(() => {});
```
```ts
function haveSameModelOptions(left: ModelOption[], right: ModelOption[]): boolean {
  if (left.length !== right.length) {
    return false;
  }
  return left.every((option, index) => {
    const other = right[index];
    return (
      other !== undefined &&
      option.value === other.value &&
      option.label === other.label &&
      option.description === other.description &&
      option.descriptionForModel === other.descriptionForModel
    );
  });
}

async function refreshOpenAIModelOptionsCache(): Promise<void> {
  if (getAPIProvider() !== 'openai') {
    return;
  }
  try {
    const discoveredOptions = await discoverOpenAICompatibleModelOptions();
    if (discoveredOptions.length === 0) {
      return;
    }
    const currentOptions = getActiveOpenAIModelOptionsCache();
    if (haveSameModelOptions(currentOptions, discoveredOptions)) {
      return;
    }
    setActiveOpenAIModelOptionsCache(discoveredOptions);
```
haveSameModelOptions() does an order-sensitive deep comparison. The /v1/models response order isn’t guaranteed to be stable across calls, so this can cause unnecessary config writes (and repeated discovery on every /model invocation). Sorting both option arrays (e.g., by value) before comparing/storing, or comparing by a Set of values, would make the cache update idempotent.
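An order-insensitive variant could sort both sides by `value` before comparing pairwise. The sketch below uses a pared-down `ModelOption` shape (fields beyond `value`/`label` are omitted for brevity) and an assumed helper name:

```typescript
type ModelOption = { value: string; label: string }

// Sketch: sort copies of both arrays by `value`, then compare pairwise, so a
// reordered /v1/models response no longer triggers an unnecessary cache write.
function sortedByValue(options: ModelOption[]): ModelOption[] {
  return [...options].sort((a, b) => a.value.localeCompare(b.value))
}

function haveSameModelOptionsUnordered(
  left: ModelOption[],
  right: ModelOption[],
): boolean {
  if (left.length !== right.length) return false
  const l = sortedByValue(left)
  const r = sortedByValue(right)
  return l.every((opt, i) => opt.value === r[i].value && opt.label === r[i].label)
}
```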
```ts
export async function discoverOpenAICompatibleModelOptions(): Promise<
  ModelOption[]
> {
  if (getAPIProvider() !== 'openai') {
```
discoverOpenAICompatibleModelOptions() bails unless getAPIProvider() === 'openai'. However, the app treats certain OpenAI models as a separate 'codex' provider (see getAPIProvider() logic), and the new OpenAI preset default model is a codex model. That means model discovery will silently never run for those OpenAI-compatible sessions. Consider allowing discovery for both 'openai' and 'codex' (and any other OpenAI-shim providers that should share this behavior).
Suggested change:

```diff
- if (getAPIProvider() !== 'openai') {
+ const openAICompatibleProviders = ['openai', 'codex']
+ if (!openAICompatibleProviders.includes(getAPIProvider())) {
```
```ts
  });
}

async function refreshOpenAIModelOptionsCache(): Promise<void> {
  if (getAPIProvider() !== 'openai') {
```
refreshOpenAIModelOptionsCache() only runs when getAPIProvider() === 'openai'. But getAPIProvider() returns 'codex' for certain OpenAI models (including the new OpenAI preset default gpt-5.3-codex), which will skip discovery and keep the picker stale for those sessions. Consider enabling this cache refresh for both 'openai' and 'codex' providers (since both route through the OpenAI-compatible shim).
Suggested change:

```diff
- if (getAPIProvider() !== 'openai') {
+ const provider = getAPIProvider()
+ if (provider !== 'openai' && provider !== 'codex') {
```
```ts
const additionalOptions =
  getAPIProvider() === 'openai'
```
getModelOptions() only pulls from getActiveOpenAIModelOptionsCache() when getAPIProvider() === 'openai'. If the session is using the OpenAI-compatible shim in 'codex' mode (getAPIProvider() === 'codex'), the discovered model cache won’t be shown in the picker. Consider treating both 'openai' and 'codex' as OpenAI-compatible here.
Suggested change:

```diff
- const additionalOptions =
-   getAPIProvider() === 'openai'
+ const apiProvider = getAPIProvider()
+ const additionalOptions =
+   apiProvider === 'openai' || apiProvider === 'codex'
```
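Since the same "openai or codex" check recurs at all three call sites flagged above, a shared predicate would keep them from drifting apart. A sketch; the helper and constant names are assumptions:

```typescript
// Sketch: one predicate for every "routes through the OpenAI-compatible shim"
// check, so discovery, cache refresh, and the model picker stay in sync.
const OPENAI_COMPATIBLE_PROVIDERS = new Set(['openai', 'codex'])

function isOpenAICompatibleProvider(provider: string): boolean {
  return OPENAI_COMPATIBLE_PROVIDERS.has(provider)
}
```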
```ts
export type ProviderProfile = {
  id: string
  name: string
  provider: 'openai' | 'anthropic'
  baseUrl: string
  model: string
  apiKey?: string
}
```
The new exported type name ProviderProfile in utils/config.ts clashes with the existing exported ProviderProfile union type in utils/providerProfile.ts. This makes imports ambiguous/confusing and increases the chance of accidentally importing the wrong type. Consider renaming one of them (e.g., ManagedProviderProfile / ProviderProfileRecord) to keep type names unambiguous.
Vasanthdev2004
left a comment
Two blocker-level issues stood out for me on this pass:
`src/utils/managedEnv.ts:180-182` / `src/utils/providerProfiles.ts:299-306`
The new active-profile application is unconditional during env setup, so it overrides explicit provider selection that was already placed in `process.env` before startup. That breaks the existing CLI contract where explicit flags/env should win. A concrete repro is:
- apply `--provider ollama --model qwen2.5:3b` (which sets `CLAUDE_CODE_USE_OPENAI=1`, `OPENAI_BASE_URL=http://localhost:11434/v1`, `OPENAI_MODEL=qwen2.5:3b`)
- then `applyActiveProviderProfileFromConfig()` runs
- the active saved profile rewrites `OPENAI_BASE_URL`/`OPENAI_MODEL`

I reproduced the equivalent directly in the review worktree and saw the env change from Ollama values to the saved OpenAI profile values. That means saved profile state is currently stronger than explicit startup intent, which feels wrong for `--provider`.
src/utils/providerProfiles.ts:265-297
applyProviderProfileToProcessEnv() does not clear all provider-selection flags before applying the active profile. It deletes CLAUDE_CODE_USE_OPENAI, CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, and CLAUDE_CODE_USE_FOUNDRY, but it leaves CLAUDE_CODE_USE_GEMINI and CLAUDE_CODE_USE_GITHUB untouched.
That makes the active profile non-authoritative. I reproduced this directly by setting stale CLAUDE_CODE_USE_GEMINI=1 / CLAUDE_CODE_USE_GITHUB=1, then applying an Anthropic profile: both stale flags remained set afterward. Since getAPIProvider() checks those flags first, the runtime can still believe it is on Gemini/GitHub even after the active profile is applied.
The overall provider-manager direction is good, but I think these env-precedence issues need to be fixed before merge.
gnanam1990
left a comment
Requesting changes.
Thanks for the work on this. I like the overall direction, but I found two issues that look important to fix before merge.
1. Active provider profiles do not fully clear competing provider flags. In `applyProviderProfileToProcessEnv()`, the new code clears `CLAUDE_CODE_USE_OPENAI`, `CLAUDE_CODE_USE_BEDROCK`, `CLAUDE_CODE_USE_VERTEX`, and `CLAUDE_CODE_USE_FOUNDRY`, but it does not clear `CLAUDE_CODE_USE_GEMINI` or `CLAUDE_CODE_USE_GITHUB`. Provider resolution checks Gemini first, then GitHub, then OpenAI, so if either of those env vars is already set, activating an OpenAI or Anthropic profile can still leave the process logically pointed at the wrong provider. That looks like a real behavior regression for the new provider manager flow.

2. The Ollama preset default model looks mismatched for a local Ollama profile. `DEFAULT_OLLAMA_MODEL` is currently set to `kimi-k2.5:cloud`, and the provider manager form/placeholder uses that for Ollama too. Since this preset is wired to `http://localhost:11434/v1`, a cloud-style default model here looks inconsistent with the repo's other Ollama defaults and likely wrong for a typical local install. I think this should default to a normal local Ollama model instead.

I'd suggest fixing both before merge:
- clear `CLAUDE_CODE_USE_GEMINI` and `CLAUDE_CODE_USE_GITHUB` when applying an active provider profile
- change the Ollama preset/default model to a local Ollama model that matches the repo's existing expectations
Thanks for the detailed review. Addressed both the earlier and latest blocker comments across 00a3a5d and efa2c98. Active profile apply now clears all competing provider flags, including Gemini and GitHub, so provider resolution is authoritative (`src/utils/providerProfiles.ts`). Verified with `bun test src/utils/providerProfiles.test.ts`.
Thanks for the follow-up. The earlier blockers look addressed now, but I still see one remaining functional issue, so I'm holding approval: if the last provider profile is deleted, the current session env is not cleared. I also noticed one smaller UX inconsistency: the new form still shows the model placeholder.
Thanks for the follow-up. The earlier blocker comments look addressed, but I'm still holding approval on one small edge case. In the delete-last-profile path, the new cleanup now clears the current session's provider env whenever there is no next active profile. That appears to remove not only saved-profile state, but also an explicit provider selection that may have come from startup flags or shell env. Since this PR now intentionally preserves explicit startup provider selection in the normal env-merge path, I think this final delete path should respect that same contract. Please fix that edge case, and I'll be happy to recheck.
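One way to honor that contract is to gate the cleanup on a marker that is only set when the env was applied by profile management. The marker name appears later in this thread; the helper and the managed-key list below are assumptions, not the PR's actual code:

```typescript
// Sketch: only clear provider env on last-profile delete when profile
// management set it, tracked via a marker written at profile-apply time.
const PROFILE_ENV_MARKER = 'CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED'
const MANAGED_KEYS = ['OPENAI_BASE_URL', 'OPENAI_MODEL', 'OPENAI_API_KEY']

function clearProfileEnvIfManaged(env: NodeJS.ProcessEnv = process.env): boolean {
  if (env[PROFILE_ENV_MARKER] !== '1') {
    // Env came from explicit startup flags or the shell; leave it alone.
    return false
  }
  for (const key of [...MANAGED_KEYS, PROFILE_ENV_MARKER]) {
    delete env[key]
  }
  return true
}
```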
Went through the PR and just saw some minor things, which I commented on. More nitpicking than actually blocking. Feel free to take the comments into account before merging.
@chrizzlekicks Thank you for the suggestion. I will merge the updates.
@gnanam1990 Thanks for catching this edge case. Fixed in 52fa0fc: delete-last-profile now preserves explicit startup provider env and only clears provider env when it was applied by active-profile management. Added regression tests for both branches in src/utils/providerProfiles.test.ts. Re-ran focused tests + smoke, all passing. |
gnanam1990
left a comment
Thanks for the follow-up work here. I rechecked the latest head; the earlier blocker comments now look addressed, and I appreciate the careful fix/test pass across the provider env edge cases.
@Varshith-JV-1410 fix conflicts
Please have a look, @Vasanthdev2004.
@kevincodex1 just a min
@gnanam1990 Thanks for the review and follow-up. I've addressed the requested fixes, resolved the merge conflict with latest main, and re-ran smoke on the current PR head (5578953) successfully.
Vasanthdev2004
left a comment
I rechecked the latest head, and two blocker-level issues still remain for me:
1. `applyProviderProfileToProcessEnv()` still leaves stale `CLAUDE_CODE_USE_GEMINI`/`CLAUDE_CODE_USE_GITHUB` behind. In `src/utils/providerProfiles.ts:265-297`, the function clears `CLAUDE_CODE_USE_OPENAI`, `CLAUDE_CODE_USE_BEDROCK`, `CLAUDE_CODE_USE_VERTEX`, and `CLAUDE_CODE_USE_FOUNDRY`, but it still does not clear the Gemini/GitHub flags. I reproduced this directly in the review worktree: starting with both stale flags set, then applying an Anthropic profile still leaves `CLAUDE_CODE_USE_GEMINI=1` and `CLAUDE_CODE_USE_GITHUB=1` in `process.env`.

2. Explicit startup provider/model selection is still overridden by the active saved profile during env setup. `applySafeConfigEnvironmentVariables()` still unconditionally calls `applyActiveProviderProfileFromConfig()` in `src/utils/managedEnv.ts:180-182`. I reproduced this by setting an explicit OpenAI/Ollama-style startup intent in `process.env` (`CLAUDE_CODE_USE_OPENAI=1`, `OPENAI_BASE_URL=http://localhost:11434/v1`, `OPENAI_MODEL=qwen2.5:3b`), then saving an active OpenAI profile in config and calling `applySafeConfigEnvironmentVariables()`: the env gets rewritten back to the saved profile (`https://api.openai.com/v1`, `gpt-4o`, persisted API key). So the "preserve explicit startup provider selection" fix is still incomplete.
The direction here is good, but I still need those two precedence/authoritativeness issues fixed before I can approve it.
@Vasanthdev2004 Thanks for the review, looking into it.
@Varshith-JV-1410 ping when you're done!
@Vasanthdev2004 Thanks for the detailed recheck. I pushed an additional hardening fix in 48fe3da to cover the stale-marker edge case you called out. The active profile now does not override explicit startup provider/model selection even if `CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED` is stale, and I added a regression test for that case in `src/utils/providerProfiles.test.ts`. Re-ran focused tests + smoke, all passing.
Vasanthdev2004
left a comment
Looks good to merge from my side now.
I rechecked the two blocker paths we flagged earlier:
- `applyProviderProfileToProcessEnv()` now clears the competing Gemini/GitHub flags
- active profile application now respects explicit startup provider intent, including the stale profile-marker case

I also reran:
- `bun test ./src/utils/providerProfiles.test.ts`
- `npm run test:provider-recommendation`
- `bun run smoke`
Those all passed, and my direct env repro now keeps the explicit Ollama-style startup selection intact instead of rewriting it back to the saved profile.
@kevincodex1 good to go!
@gnanam1990 @kevincodex1 @Vasanthdev2004 @chrizzlekicks Thanks a lot for the thorough review and iterative feedback throughout this PR. Really appreciate the careful checks, it helped tighten the implementation significantly.
* feat(provider): align provider and model workflows
* fix(provider): clear gemini/github flags and use local ollama default
* fix(provider): preserve explicit startup provider selection
* fix(provider): clear env when deleting last profile
* chore(provider): apply review nits in ProviderManager
* fix(provider): preserve explicit env on last-profile delete
* fix(provider): preserve explicit env when profile marker is stale

---------

Co-authored-by: Gitlawb <gitlawb@users.noreply.github.com>