
feat(provider): align provider and model workflows#324

Merged
kevincodex1 merged 8 commits into Gitlawb:main from Varshith-JV-1410:feat/provider-model-parity
Apr 4, 2026

Conversation

@Varshith-JV-1410
Contributor

Summary

  • what changed:

    • Replaced the existing /provider entry behavior with a full provider manager flow for add, edit, delete, and activate profile actions.
    • Added persisted provider profile state, active profile state, and per-profile model cache state.
    • Wired active profile application into runtime environment merge so provider/model/base URL/key are applied consistently.
    • Updated /model to refresh OpenAI-compatible model options before showing the picker, and read model options from the active profile cache.
    • Added OpenAI-compatible model discovery utility with fallback behavior for Ollama-style tags endpoints.
    • Updated Ollama preset defaults to use localhost base URL.
  • why it changed:

    • Align openclaude provider/model behavior with free-code-ollama.
    • Enable multi-provider setup without manual environment edits each time.
    • Keep model selection aligned with the currently active provider profile.
    • Improve local development UX by defaulting Ollama to localhost.
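
For context, the profile-to-env application described above can be sketched as follows. The `ProviderProfile` shape matches the type added in src/utils/config.ts; the helper name and the env record parameter are illustrative, not the PR's actual code:

```typescript
type ProviderProfile = {
  id: string
  name: string
  provider: 'openai' | 'anthropic'
  baseUrl: string
  model: string
  apiKey?: string
}

// Apply a profile into an env record (process.env in the real flow).
function applyProfile(
  env: Record<string, string | undefined>,
  profile: ProviderProfile,
): void {
  if (profile.provider === 'anthropic') {
    env.ANTHROPIC_BASE_URL = profile.baseUrl
    env.ANTHROPIC_MODEL = profile.model
    if (profile.apiKey) env.ANTHROPIC_API_KEY = profile.apiKey
    else delete env.ANTHROPIC_API_KEY
    // Clear OpenAI-compatible routing so the profile is authoritative.
    delete env.CLAUDE_CODE_USE_OPENAI
    delete env.OPENAI_BASE_URL
    delete env.OPENAI_MODEL
    delete env.OPENAI_API_KEY
    return
  }
  env.CLAUDE_CODE_USE_OPENAI = '1'
  env.OPENAI_BASE_URL = profile.baseUrl
  env.OPENAI_MODEL = profile.model
  if (profile.apiKey) env.OPENAI_API_KEY = profile.apiKey
  else delete env.OPENAI_API_KEY
}
```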

Impact

  • user-facing impact:

    • /provider now supports end-to-end profile management in-app.
    • Active provider choice persists and is applied automatically.
    • /model can show freshly discovered models from active OpenAI-compatible endpoints.
  • developer/maintainer impact:

    • Added dedicated provider profile management and model discovery modules.
    • Extended config shape for provider profiles and per-profile model cache.
    • Environment application now intentionally prioritizes active provider profile values.
    • Change set is intentionally scoped to provider/model path files only.

Testing

  • bun run build
  • bun run smoke
  • focused tests:
    • bun run test:provider
    • npm run test:provider-recommendation
    • Result: 69 pass, 0 fail (provider test run)
    • Result: 41 pass, 0 fail (provider-recommendation/profile run)

Notes

  • provider/model path tested:

    • Automated tests covered provider profile persistence, startup env application, profile mismatch handling, and provider recommendation behavior.
    • Smoke run validated build + CLI launch/version path.
  • screenshots attached (if UI changed):

[9 screenshot images]
  • follow-up work or known limitations:
    • OpenAI-compatible model discovery is best-effort and depends on endpoint reachability/auth.
    • Discovery failures are intentionally non-blocking so /model remains usable.
    • A manual UX pass on interactive provider manager flows can be added after merge if maintainers want extra validation.
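
The best-effort discovery with Ollama fallback can be illustrated with a pure normalizer over the two response shapes involved: an OpenAI-style /v1/models body and an Ollama-style /api/tags body. The function name and types here are hypothetical, not the PR's code:

```typescript
type OpenAIModelsResponse = { data?: Array<{ id?: string }> }
type OllamaTagsResponse = { models?: Array<{ name?: string }> }

function extractModelIds(
  body: OpenAIModelsResponse & OllamaTagsResponse,
): string[] {
  // OpenAI-compatible /v1/models: { data: [{ id: "gpt-4o" }, ...] }
  if (Array.isArray(body.data)) {
    return body.data
      .map(m => m.id)
      .filter((id): id is string => typeof id === 'string')
  }
  // Ollama /api/tags fallback: { models: [{ name: "llama3.1:8b" }, ...] }
  if (Array.isArray(body.models)) {
    return body.models
      .map(m => m.name)
      .filter((n): n is string => typeof n === 'string')
  }
  // Unrecognized body: return nothing rather than fail, keeping /model usable.
  return []
}
```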

Copilot AI review requested due to automatic review settings April 4, 2026 05:52
Contributor

Copilot AI left a comment


Pull request overview

This PR aligns the /provider and /model workflows by introducing persisted, in-app managed provider profiles (add/edit/delete/activate), applying the active profile into the runtime environment merge, and refreshing OpenAI-compatible model options from the active provider endpoint before showing the model picker.

Changes:

  • Added persisted provider profile management (including active profile selection) and applied the active profile into runtime env merges.
  • Added OpenAI-compatible model discovery (with Ollama tags fallback) and wired /model to refresh/discover models and use a per-profile cache.
  • Updated model options resolution to read additional OpenAI model options from the active provider profile cache.

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 8 comments.

Show a summary per file
File Description
src/utils/providerProfiles.ts New provider profile CRUD + active profile env application + per-profile model option cache.
src/utils/model/openaiModelDiscovery.ts New utility to discover models from OpenAI-compatible /models (with Ollama /api/tags fallback).
src/utils/model/modelOptions.ts Reads additional model options from the active profile’s OpenAI cache when applicable.
src/utils/managedEnv.ts Ensures active provider profile is applied after settings/env merges so it “wins”.
src/utils/config.ts Extends global config shape with provider profiles and per-profile OpenAI model cache.
src/components/ProviderManager.tsx New TUI flow for managing provider profiles.
src/commands/provider/provider.tsx Replaces /provider wizard entrypoint with the ProviderManager flow.
src/commands/provider/index.ts Updates command metadata (description; removes immediate execution hook).
src/commands/model/model.tsx Refreshes OpenAI-compatible discovered models before showing picker and persists cache.


Comment thread src/utils/providerProfiles.ts Outdated
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.CLAUDE_CODE_USE_BEDROCK
delete process.env.CLAUDE_CODE_USE_VERTEX
delete process.env.CLAUDE_CODE_USE_FOUNDRY

Copilot AI Apr 4, 2026


applyProviderProfileToProcessEnv() clears several provider-routing flags, but it does not clear CLAUDE_CODE_USE_GEMINI or CLAUDE_CODE_USE_GITHUB. Because getAPIProvider() checks GEMINI/GITHUB before OPENAI, a previously-set GEMINI/GITHUB flag will override the selected provider profile and prevent the profile from taking effect. Clear those flags here as well (and any other higher-precedence routing flags you want the profile to override).

Suggested change:
- delete process.env.CLAUDE_CODE_USE_FOUNDRY
+ delete process.env.CLAUDE_CODE_USE_FOUNDRY
+ delete process.env.CLAUDE_CODE_USE_GEMINI
+ delete process.env.CLAUDE_CODE_USE_GITHUB
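
The suggested fix amounts to clearing every routing flag in one place before applying a profile. A sketch (the helper name and env record are illustrative; the flag names come from the diff and the review comment):

```typescript
const PROVIDER_ROUTING_FLAGS = [
  'CLAUDE_CODE_USE_OPENAI',
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_VERTEX',
  'CLAUDE_CODE_USE_FOUNDRY',
  'CLAUDE_CODE_USE_GEMINI',
  'CLAUDE_CODE_USE_GITHUB',
] as const

// Remove every provider-routing flag so the profile being applied
// cannot be shadowed by a higher-precedence stale flag.
function clearProviderRoutingFlags(
  env: Record<string, string | undefined>,
): void {
  for (const flag of PROVIDER_ROUTING_FLAGS) delete env[flag]
}
```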

Comment on lines +271 to +292
process.env.ANTHROPIC_MODEL = profile.model
if (profile.provider === 'anthropic') {
process.env.ANTHROPIC_BASE_URL = profile.baseUrl

if (profile.apiKey) {
process.env.ANTHROPIC_API_KEY = profile.apiKey
} else {
delete process.env.ANTHROPIC_API_KEY
}

delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_API_BASE
delete process.env.OPENAI_MODEL
delete process.env.OPENAI_API_KEY
return
}

process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL = profile.baseUrl
process.env.OPENAI_MODEL = profile.model

if (profile.apiKey) {

Copilot AI Apr 4, 2026


applyProviderProfileToProcessEnv() sets process.env.ANTHROPIC_MODEL unconditionally, even when applying an OpenAI-compatible profile. Since ANTHROPIC_MODEL is treated as a global model override (and is consulted before OPENAI_MODEL in getUserSpecifiedModelSetting), this can create confusing/incorrect routing if other code expects ANTHROPIC_MODEL to only contain Anthropic model IDs. Consider only setting ANTHROPIC_MODEL for Anthropic profiles (and optionally deleting it when switching to OpenAI) so OPENAI_MODEL is the source of truth for OpenAI-compatible profiles.

});
return <SetModelAndClose args={args} onDone={onDone} />;
}
await refreshOpenAIModelOptionsCache();

Copilot AI Apr 4, 2026


refreshOpenAIModelOptionsCache() is awaited before rendering the picker. With DISCOVERY_TIMEOUT_MS=5000 and sequential fallbacks (multiple /models URLs + optional Ollama tags), /model can block for up to ~15s on unreachable endpoints, even though failures are swallowed. Consider making discovery non-blocking (render picker immediately and refresh cache in the background), or parallelizing requests / lowering timeouts so the command stays responsive.

Suggested change:
- await refreshOpenAIModelOptionsCache();
+ void refreshOpenAIModelOptionsCache().catch(() => {});
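
The non-blocking variant the comment suggests can be sketched as a fire-and-forget wrapper. The helper is illustrative; `refresh` stands in for `refreshOpenAIModelOptionsCache`:

```typescript
// Kick off the refresh without awaiting it, so the picker renders
// immediately; discovery failures are swallowed rather than surfaced.
function refreshInBackground(refresh: () => Promise<void>): void {
  void refresh().catch(() => {
    // Intentionally non-blocking: a stale cache is better than a hung /model.
  })
}
```

The trade-off is that freshly discovered models only appear on the next invocation of the picker, which may or may not be acceptable UX here.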

Comment on lines +275 to +297
function haveSameModelOptions(left: ModelOption[], right: ModelOption[]): boolean {
if (left.length !== right.length) {
return false;
}
return left.every((option, index) => {
const other = right[index];
return other !== undefined && option.value === other.value && option.label === other.label && option.description === other.description && option.descriptionForModel === other.descriptionForModel;
});
}
async function refreshOpenAIModelOptionsCache(): Promise<void> {
if (getAPIProvider() !== 'openai') {
return;
}
try {
const discoveredOptions = await discoverOpenAICompatibleModelOptions();
if (discoveredOptions.length === 0) {
return;
}
const currentOptions = getActiveOpenAIModelOptionsCache();
if (haveSameModelOptions(currentOptions, discoveredOptions)) {
return;
}
setActiveOpenAIModelOptionsCache(discoveredOptions);

Copilot AI Apr 4, 2026


haveSameModelOptions() does an order-sensitive deep comparison. The /v1/models response order isn’t guaranteed to be stable across calls, so this can cause unnecessary config writes (and repeated discovery on every /model invocation). Sorting both option arrays (e.g., by value) before comparing/storing, or comparing by a Set of values, would make the cache update idempotent.
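
An order-insensitive version of the comparison could sort both sides by `value` before the element-wise check. A sketch, with `ModelOption` reduced to two of the compared fields:

```typescript
type ModelOption = { value: string; label: string }

// Compare two option lists ignoring order, so an unstable /v1/models
// response order does not trigger spurious cache writes.
function haveSameModelOptions(
  left: ModelOption[],
  right: ModelOption[],
): boolean {
  if (left.length !== right.length) return false
  const byValue = (a: ModelOption, b: ModelOption) =>
    a.value.localeCompare(b.value)
  const l = [...left].sort(byValue)
  const r = [...right].sort(byValue)
  return l.every((option, index) => {
    const other = r[index]
    return (
      other !== undefined &&
      option.value === other.value &&
      option.label === other.label
    )
  })
}
```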

export async function discoverOpenAICompatibleModelOptions(): Promise<
ModelOption[]
> {
if (getAPIProvider() !== 'openai') {

Copilot AI Apr 4, 2026


discoverOpenAICompatibleModelOptions() bails unless getAPIProvider() === 'openai'. However, the app treats certain OpenAI models as a separate 'codex' provider (see getAPIProvider() logic), and the new OpenAI preset default model is a codex model. That means model discovery will silently never run for those OpenAI-compatible sessions. Consider allowing discovery for both 'openai' and 'codex' (and any other OpenAI-shim providers that should share this behavior).

Suggested change:
- if (getAPIProvider() !== 'openai') {
+ const openAICompatibleProviders = ['openai', 'codex']
+ if (!openAICompatibleProviders.includes(getAPIProvider())) {

});
}
async function refreshOpenAIModelOptionsCache(): Promise<void> {
if (getAPIProvider() !== 'openai') {

Copilot AI Apr 4, 2026


refreshOpenAIModelOptionsCache() only runs when getAPIProvider() === 'openai'. But getAPIProvider() returns 'codex' for certain OpenAI models (including the new OpenAI preset default gpt-5.3-codex), which will skip discovery and keep the picker stale for those sessions. Consider enabling this cache refresh for both 'openai' and 'codex' providers (since both route through the OpenAI-compatible shim).

Suggested change:
- if (getAPIProvider() !== 'openai') {
+ const provider = getAPIProvider();
+ if (provider !== 'openai' && provider !== 'codex') {

Comment on lines +546 to +547
const additionalOptions =
getAPIProvider() === 'openai'

Copilot AI Apr 4, 2026


getModelOptions() only pulls from getActiveOpenAIModelOptionsCache() when getAPIProvider() === 'openai'. If the session is using the OpenAI-compatible shim in 'codex' mode (getAPIProvider() === 'codex'), the discovered model cache won’t be shown in the picker. Consider treating both 'openai' and 'codex' as OpenAI-compatible here.

Suggested change:
- const additionalOptions =
-   getAPIProvider() === 'openai'
+ const apiProvider = getAPIProvider()
+ const additionalOptions =
+   apiProvider === 'openai' || apiProvider === 'codex'

Comment thread src/utils/config.ts
Comment on lines +183 to +190
export type ProviderProfile = {
id: string
name: string
provider: 'openai' | 'anthropic'
baseUrl: string
model: string
apiKey?: string
}

Copilot AI Apr 4, 2026


The new exported type name ProviderProfile in utils/config.ts clashes with the existing exported ProviderProfile union type in utils/providerProfile.ts. This makes imports ambiguous/confusing and increases the chance of accidentally importing the wrong type. Consider renaming one of them (e.g., ManagedProviderProfile / ProviderProfileRecord) to keep type names unambiguous.

Collaborator

@Vasanthdev2004 Vasanthdev2004 left a comment


Two blocker-level issues stood out for me on this pass:

  1. src/utils/managedEnv.ts:180-182 / src/utils/providerProfiles.ts:299-306

The new active-profile application is unconditional during env setup, so it overrides explicit provider selection that was already placed in process.env before startup. That breaks the existing CLI contract where explicit flags/env should win. A concrete repro is:

  • apply --provider ollama --model qwen2.5:3b (which sets CLAUDE_CODE_USE_OPENAI=1, OPENAI_BASE_URL=http://localhost:11434/v1, OPENAI_MODEL=qwen2.5:3b)
  • then applyActiveProviderProfileFromConfig() runs
  • the active saved profile rewrites OPENAI_BASE_URL / OPENAI_MODEL

I reproduced the equivalent directly in the review worktree and saw the env change from Ollama values to the saved OpenAI profile values. That means saved profile state is currently stronger than explicit startup intent, which feels wrong for --provider.

  2. src/utils/providerProfiles.ts:265-297

applyProviderProfileToProcessEnv() does not clear all provider-selection flags before applying the active profile. It deletes CLAUDE_CODE_USE_OPENAI, CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, and CLAUDE_CODE_USE_FOUNDRY, but it leaves CLAUDE_CODE_USE_GEMINI and CLAUDE_CODE_USE_GITHUB untouched.

That makes the active profile non-authoritative. I reproduced this directly by setting stale CLAUDE_CODE_USE_GEMINI=1 / CLAUDE_CODE_USE_GITHUB=1, then applying an Anthropic profile: both stale flags remained set afterward. Since getAPIProvider() checks those flags first, the runtime can still believe it is on Gemini/GitHub even after the active profile is applied.

The overall provider-manager direction is good, but I think these env-precedence issues need to be fixed before merge.
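
The precedence contract this review asks for, explicit startup env wins and the saved profile only fills gaps, can be sketched as a merge helper (illustrative names, not the PR's code):

```typescript
// Merge the saved profile's env under the explicit startup env:
// a key the user set at startup is never overwritten by profile state.
function mergeEnv(
  startupEnv: Record<string, string | undefined>,
  profileEnv: Record<string, string>,
): Record<string, string | undefined> {
  const merged = { ...startupEnv }
  for (const [key, value] of Object.entries(profileEnv)) {
    if (merged[key] === undefined) merged[key] = value // profile fills gaps only
  }
  return merged
}
```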

Collaborator

@gnanam1990 gnanam1990 left a comment


Requesting changes.

Thanks for the work on this. I like the overall direction, but I found two issues that look important to fix before merge.

  1. Active provider profiles do not fully clear competing provider flags.
    In applyProviderProfileToProcessEnv(), the new code clears CLAUDE_CODE_USE_OPENAI, CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, and CLAUDE_CODE_USE_FOUNDRY, but it does not clear CLAUDE_CODE_USE_GEMINI or CLAUDE_CODE_USE_GITHUB. Provider resolution checks Gemini first, then GitHub, then OpenAI, so if either of those env vars is already set, activating an OpenAI or Anthropic profile can still leave the process logically pointed at the wrong provider. That looks like a real behavior regression for the new provider manager flow.

  2. The Ollama preset default model looks mismatched for a local Ollama profile.
    DEFAULT_OLLAMA_MODEL is currently set to kimi-k2.5:cloud, and the provider manager form/placeholder uses that for Ollama too. Since this preset is wired to http://localhost:11434/v1, a cloud-style default model here looks inconsistent with the repo’s other Ollama defaults and likely wrong for a typical local install. I think this should default to a normal local Ollama model instead.

I’d suggest fixing both before merge:

  • clear CLAUDE_CODE_USE_GEMINI and CLAUDE_CODE_USE_GITHUB when applying an active provider profile
  • change the Ollama preset/default model to a local Ollama model that matches the repo’s existing expectations

@Varshith-JV-1410 Varshith-JV-1410 marked this pull request as draft April 4, 2026 06:12
@Varshith-JV-1410 Varshith-JV-1410 marked this pull request as ready for review April 4, 2026 06:12
@Varshith-JV-1410
Contributor Author

Thanks for the detailed review. Addressed both the earlier and latest blocker comments across 00a3a5d and efa2c98:

  • Active profile apply now clears all competing provider flags, including Gemini and GitHub, so provider resolution is authoritative: src/utils/providerProfiles.ts
  • Explicit startup provider selection now wins over saved active profile state, so startup flags/env are not overwritten: src/utils/providerProfiles.ts
  • Ollama preset default now uses a local model default (llama3.1:8b): src/utils/providerProfiles.ts
  • Added regression coverage for all of the above: src/utils/providerProfiles.test.ts

Validation run (all passing):

  • bun test src/utils/providerProfiles.test.ts
  • npm run test:provider-recommendation
  • bun run smoke

@gnanam1990
Collaborator

Thanks for the follow-up. The earlier blockers look addressed now, but I still see one remaining functional issue, so I’m holding approval.

If the last provider profile is deleted, the current session env is not cleared. In deleteProviderProfile() the code only reapplies env when a nextActiveProfile exists. When the final profile is removed, nothing clears the old CLAUDE_CODE_USE_*, OPENAI_*, or ANTHROPIC_* values from the running session, so the process can keep using the deleted provider until restart even though no active profile remains.
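
The fix being requested can be sketched as a delete handler that reapplies the next active profile when one exists and otherwise clears the session's provider env (the helper names and key list are illustrative):

```typescript
const PROVIDER_ENV_KEYS = [
  'CLAUDE_CODE_USE_OPENAI',
  'OPENAI_BASE_URL',
  'OPENAI_MODEL',
  'OPENAI_API_KEY',
  'ANTHROPIC_BASE_URL',
  'ANTHROPIC_MODEL',
  'ANTHROPIC_API_KEY',
] as const

function onProfileDeleted(
  env: Record<string, string | undefined>,
  remainingProfiles: number,
  reapplyNextActive: () => void,
): void {
  if (remainingProfiles > 0) {
    // Another profile exists: let it take over the session env.
    reapplyNextActive()
    return
  }
  // Last profile removed: clear stale provider env instead of keeping it.
  for (const key of PROVIDER_ENV_KEYS) delete env[key]
}
```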

I also noticed one smaller UX inconsistency: the new form still shows the model placeholder e.g. kimi-k2.5:cloud even though the Ollama default was changed to llama3.1:8b.

@Varshith-JV-1410
Contributor Author

Latest commit:

  • dc3b26d fix(provider): clear env when deleting last profile

Included in this commit:

  • Clears provider-related session env when deleting the final profile (no next active profile).
  • Keeps prior precedence/flag-clearing behavior intact.
  • Updates model placeholder to local Ollama example (llama3.1:8b).
  • Adds/updates regression tests in src/utils/providerProfiles.test.ts.

@gnanam1990
Collaborator

Thanks for the follow-up. The earlier blocker comments look addressed, but I’m still holding approval on one small edge case.

In the delete-last-profile path, the new cleanup now clears the current session’s provider env whenever there is no next active profile. That appears to remove not only saved-profile state, but also an explicit provider selection that may have come from startup flags or shell env. Since this PR now intentionally preserves explicit startup provider selection in the normal env-merge path, I think this final delete path should respect that same contract.

Please fix that edge case, and I’ll be happy to recheck.

Comment thread src/components/ProviderManager.tsx Outdated (×10 collapsed threads)
@chrizzlekicks
Contributor

Went through the PR and just saw some minor things, which I commented. More nitpicking than actually blocking. Feel free to take the comments into account before merging.

@Varshith-JV-1410
Contributor Author

@chrizzlekicks Thank you for the suggestions. I will merge the updates.

@Varshith-JV-1410
Contributor Author

@gnanam1990 Thanks for catching this edge case. Fixed in 52fa0fc: delete-last-profile now preserves explicit startup provider env and only clears provider env when it was applied by active-profile management. Added regression tests for both branches in src/utils/providerProfiles.test.ts. Re-ran focused tests + smoke, all passing.

Collaborator

@gnanam1990 gnanam1990 left a comment


Thanks for the follow-up work here. I rechecked the latest head, the earlier blocker comments now look addressed, and I appreciate the careful fix/test pass across the provider env edge cases.

@gnanam1990
Collaborator

@Varshith-JV-1410 fix conflicts

Contributor

@kevincodex1 kevincodex1 left a comment


this is great!

@kevincodex1
Contributor

@Vasanthdev2004 please have a look.

@Vasanthdev2004
Collaborator

@kevincodex1 just a min

@Varshith-JV-1410
Contributor Author

@gnanam1990 Thanks for the review and follow-up. I've addressed the requested fixes, resolved the merge conflict with latest main, and re-ran smoke on the current PR head (5578953) successfully.

Collaborator

@Vasanthdev2004 Vasanthdev2004 left a comment


I rechecked the latest head, and two blocker-level issues still remain for me:

  1. applyProviderProfileToProcessEnv() still leaves stale CLAUDE_CODE_USE_GEMINI / CLAUDE_CODE_USE_GITHUB behind.
    In src/utils/providerProfiles.ts:265-297, the function clears CLAUDE_CODE_USE_OPENAI, CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, and CLAUDE_CODE_USE_FOUNDRY, but it still does not clear the Gemini/GitHub flags. I reproduced this directly in the review worktree: starting with both stale flags set, then applying an Anthropic profile still leaves CLAUDE_CODE_USE_GEMINI=1 and CLAUDE_CODE_USE_GITHUB=1 in process.env.

  2. Explicit startup provider/model selection is still overridden by the active saved profile during env setup.
    applySafeConfigEnvironmentVariables() still unconditionally calls applyActiveProviderProfileFromConfig() in src/utils/managedEnv.ts:180-182. I reproduced this by setting an explicit OpenAI/Ollama-style startup intent in process.env (CLAUDE_CODE_USE_OPENAI=1, OPENAI_BASE_URL=http://localhost:11434/v1, OPENAI_MODEL=qwen2.5:3b), then saving an active OpenAI profile in config and calling applySafeConfigEnvironmentVariables(): the env gets rewritten back to the saved profile (https://api.openai.com/v1, gpt-4o, persisted API key). So the "preserve explicit startup provider selection" fix is still incomplete.

The direction here is good, but I still need those two precedence/authoritativeness issues fixed before I can approve it.

@Varshith-JV-1410
Contributor Author

@Vasanthdev2004 Thanks for the review, looking into it.

@Vasanthdev2004
Collaborator

@Varshith-JV-1410 ping when you're done!

@Varshith-JV-1410
Contributor Author

@Vasanthdev2004 Thanks for the detailed recheck. I pushed an additional hardening fix in 48fe3da to cover the stale-marker edge case you called out. The active profile now does not override explicit startup provider/model selection even if CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED is stale, and I added a regression test for that case in src/utils/providerProfiles.test.ts. Re-ran focused tests + smoke, all passing.

Collaborator

@Vasanthdev2004 Vasanthdev2004 left a comment


Looks good to merge from my side now.

I rechecked the two blocker paths we flagged earlier:

  • applyProviderProfileToProcessEnv() now clears the competing Gemini/GitHub flags
  • active profile application now respects explicit startup provider intent, including the stale profile-marker case

I also reran:

  • bun test ./src/utils/providerProfiles.test.ts
  • npm run test:provider-recommendation
  • bun run smoke

Those all passed, and my direct env repro now keeps the explicit Ollama-style startup selection intact instead of rewriting it back to the saved profile.

@Vasanthdev2004
Collaborator

@kevincodex1 good to go!

@Varshith-JV-1410
Contributor Author

@gnanam1990 @kevincodex1 @Vasanthdev2004 @chrizzlekicks Thanks a lot for the thorough review and iterative feedback throughout this PR. Really appreciate the careful checks, it helped tighten the implementation significantly.

@kevincodex1 kevincodex1 merged commit ef881b2 into Gitlawb:main Apr 4, 2026
euxaristia pushed a commit to euxaristia/openclaude that referenced this pull request Apr 13, 2026
* feat(provider): align provider and model workflows

* fix(provider): clear gemini/github flags and use local ollama default

* fix(provider): preserve explicit startup provider selection

* fix(provider): clear env when deleting last profile

* chore(provider): apply review nits in ProviderManager

* fix(provider): preserve explicit env on last-profile delete

* fix(provider): preserve explicit env when profile marker is stale

---------

Co-authored-by: Gitlawb <gitlawb@users.noreply.github.com>
