Conversation
The Ollama docs at github.com/ollama/ollama/blob/main/docs/openai.md now return a 404 — the OpenAI-compatibility doc was moved to docs/api/openai-compatibility.mdx and published at https://docs.ollama.com/api/openai-compatibility. Point the Setup Guide link there so users can actually read the docs.
Sub-1s recordings were silently returning an empty string, so short utterances like 'yes', 'no', or 'stop' never made it into the transcript and the user had no indication anything went wrong. The original guard existed to avoid a whisper.cpp assertion on buffers shorter than 1s, but it applied unconditionally — including to Parakeet, Apple Speech, and Cohere which have no such constraint. Rather than branch per provider, pad the buffer with trailing silence on the way in: whisper.cpp no longer asserts, every other provider just sees a moment of silence after the speech. An empty buffer (no audio at all) is still an early-return, since padding zero samples wouldn't help. Closes #276
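The padding approach described above can be sketched as follows. This is a minimal illustration, not the shipped code: the function name, signature, and the assumption of a mono `[Float]` PCM buffer are all hypothetical.

```swift
/// Pad a mono PCM buffer with trailing silence so it is at least `minSeconds`
/// long. An empty buffer is returned unchanged, matching the early-return
/// described above. Names here are illustrative, not the actual API.
func padWithTrailingSilence(_ samples: [Float],
                            sampleRate: Int,
                            minSeconds: Double = 1.0) -> [Float] {
    guard !samples.isEmpty else { return samples } // no audio at all: padding zero samples wouldn't help
    let minCount = Int(Double(sampleRate) * minSeconds)
    guard samples.count < minCount else { return samples }
    // whisper.cpp no longer asserts; other providers just see a moment of silence.
    return samples + [Float](repeating: 0, count: minCount - samples.count)
}
```

Because the padding happens on the way in, no per-provider branching is needed.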
Claude Opus 4.7 uses extended thinking by default and rejects the `temperature` parameter. Sending it causes every request to fail with HTTP 400: `temperature is deprecated for this model`. Add `SettingsStore.isTemperatureUnsupported(_:)` that covers both reasoning models (o1/o3/gpt-5/...) and Claude Opus 4.7+, and use it at each live call site (ContentView, CommandModeService, RewriteModeService) to gate the temperature parameter. `isReasoningModel` still gates `max_completion_tokens` / reasoning token budgets, which remain OpenAI-specific. Fixes #285
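A sketch of what the gating helper might look like. The prefix lists and the simplified version check are assumptions for illustration, not the shipped implementation:

```swift
/// Sketch of SettingsStore.isTemperatureUnsupported(_:). The model-id
/// prefixes below are assumptions; the real list lives in SettingsStore.
func isTemperatureUnsupported(_ model: String) -> Bool {
    let id = model.lowercased()
    // Reasoning models (o1/o3/gpt-5/...) reject `temperature`.
    if ["o1", "o3", "gpt-5"].contains(where: { id.hasPrefix($0) }) { return true }
    // Claude Opus 4.7+ also rejects it; a real check would compare versions
    // rather than match a single prefix.
    if id.hasPrefix("claude-opus-4-7") || id.hasPrefix("claude-opus-4.7") { return true }
    return false
}
```

Each live call site (ContentView, CommandModeService, RewriteModeService) would consult this before attaching `temperature`, while `isReasoningModel` keeps gating the OpenAI-specific `max_completion_tokens` / reasoning token budgets.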
Adds an abstraction to choose the provider strategy based on the model provider: for Anthropic we use their Messages API; for all others we start with the Responses API and fall back to the Chat Completions API.
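One plausible shape for that abstraction, sketched under assumed names (`ProviderRouteStrategy` and the concrete types are illustrative, not the PR's actual identifiers):

```swift
/// Hypothetical routing abstraction: each provider supplies a primary
/// endpoint and an optional fallback to retry on failure.
protocol ProviderRouteStrategy {
    var endpoint: String { get }
    var fallbackEndpoint: String? { get }
}

struct AnthropicRouteStrategy: ProviderRouteStrategy {
    let endpoint = "/v1/messages"          // Anthropic Messages API only
    let fallbackEndpoint: String? = nil    // no fallback needed
}

struct ResponsesRouteStrategy: ProviderRouteStrategy {
    let endpoint = "/responses"                          // try Responses first
    let fallbackEndpoint: String? = "/chat/completions"  // then fall back
}
```

The caller picks a strategy from the configured provider and only retries when `fallbackEndpoint` is non-nil.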
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d8de00483b
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```diff
-    "type": "function",
-    "function": [
-        "name": "execute_terminal_command",
-        "description": """
+    "name": "execute_terminal_command",
+    "description": """
```
Keep chat tool definitions in the function wrapper
This tool schema was flattened to top-level name/description/parameters, but the chat-completions path still sends config.tools as-is, and chat-completions expects each tool under {"type":"function","function":{...}}. When /responses is unavailable and the client falls back to /chat/completions, command-mode tool calls can be rejected or ignored, so terminal/MCP execution stops working on chat-only OpenAI-compatible endpoints.
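One hedged way to address this is to re-wrap a flattened schema before it travels down the chat-completions path. The dictionary shape and function name below are assumptions for illustration:

```swift
/// Re-wrap a flattened tool schema under {"type":"function","function":{...}}
/// as chat-completions expects. Passes already-wrapped tools through unchanged.
func wrapForChatCompletions(_ tool: [String: Any]) -> [String: Any] {
    if tool["type"] as? String == "function", tool["function"] != nil {
        return tool  // already in chat-completions shape
    }
    return ["type": "function", "function": tool]
}
```

Applying this only on the fallback path keeps the flattened top-level shape for /responses while restoring compatibility for chat-only endpoints.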
```diff
 } else if endpoint.contains("/responses") || endpoint.contains("/chat/completions") || endpoint.contains("/api/chat") || endpoint.contains("/api/generate") {
     fullURL = endpoint
 } else {
-    fullURL = endpoint + "/chat/completions"
+    fullURL = endpoint + "/responses"
 }
```
Retry API verification on chat-completions endpoint
The connection test now defaults non-Anthropic providers to .../responses and treats failures as hard errors, but there is no fallback to .../chat/completions. For providers that are OpenAI-compatible but chat-only, verification will report failure even though runtime requests can still work via chat-completions, creating false negatives in settings and blocking a reliable “verify connection” flow.
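A minimal sketch of a two-stage verification that avoids the false negative. The `probe(_:)` helper is hypothetical — it stands in for whatever test request the settings flow already performs:

```swift
/// Verify against /responses first, then retry /chat/completions before
/// reporting failure. `probe` is a hypothetical helper that performs one
/// test request and reports success.
func verifyConnection(baseURL: String,
                      probe: (String) async -> Bool) async -> Bool {
    if await probe(baseURL + "/responses") { return true }
    // Chat-only OpenAI-compatible providers succeed here.
    return await probe(baseURL + "/chat/completions")
}
```

This mirrors the runtime Responses→chat-completions fallback, so verification and live requests agree.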
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: bf353d4ff6
```swift
where !attemptedResponsesFallback &&
    self.isResponsesRequest(currentRequest) &&
    routePlan.fallbackFormat != nil &&
```
Allow fallback for responses payloads on chat-style endpoints
executeWithRetry only triggers the Responses→chat-completions fallback when isResponsesRequest(currentRequest) is true, but this predicate is path-based. In the same change, ResponsesRouteStrategy.endpoint intentionally keeps /api/chat and /api/generate unchanged, so those requests are sent with Responses-format bodies but are not recognized as “responses requests” here. On chat-only OpenAI-compatible providers, a 400/404 from that first attempt will now fail the turn instead of retrying with chat-completions, breaking the intended compatibility flow.
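One way out, sketched with assumed payload key names, is to classify the request by body format rather than URL path, so a Responses-shaped body sent to /api/chat or /api/generate still qualifies for the fallback:

```swift
/// Recognize a Responses-format payload by its shape instead of its path.
/// The key names ("input" vs "messages") reflect the two wire formats and
/// are assumptions about how this codebase serializes its requests.
func isResponsesFormatBody(_ body: [String: Any]) -> Bool {
    // Responses payloads carry "input"; chat-completions payloads carry "messages".
    body["input"] != nil && body["messages"] == nil
}
```

With a shape-based predicate, a 400/404 on the first attempt can still trigger the chat-completions retry regardless of which endpoint path the strategy chose.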
```swift
while usedToolNames.contains(candidate) {
    candidate = Self.sanitizeToolName("\(base)_\(counter)")
    counter += 1
```
Avoid non-terminating dedupe loop for long MCP tool names
This deduplication loop can become non-terminating when the sanitized base name is already 64 characters. sanitizeToolName truncates to 64 chars, so "\(base)_2", "\(base)_3", etc. collapse back to the same truncated string, candidate never changes, and while usedToolNames.contains(candidate) never exits. A server set with two colliding long tool names would hang MCP tool catalog rebuild/reload.
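A terminating variant would truncate the base so the numeric suffix always fits inside the 64-character limit, guaranteeing each candidate differs from the last. The function and parameter names below are illustrative:

```swift
/// Dedupe a tool name against a used set while respecting a length cap.
/// Truncating the base *before* appending the suffix means "\(base)_2" and
/// "\(base)_3" can never collapse to the same string, so the loop terminates.
func dedupedToolName(base: String, used: Set<String>, maxLength: Int = 64) -> String {
    var candidate = String(base.prefix(maxLength))
    var counter = 2
    while used.contains(candidate) {
        let suffix = "_\(counter)"
        candidate = String(base.prefix(maxLength - suffix.count)) + suffix
        counter += 1
    }
    return candidate
}
```

Any remaining sanitization (allowed characters, etc.) would apply to the truncated base before the suffix is attached, so it cannot undo the uniqueness of the suffix.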
Description
This PR adds a bunch of features in order to support MCP for command-mode execution. The features are:
Type of Change
Related Issues
Testing
swiftlint --strict --config .swiftlint.yml Sources
swiftformat --config .swiftformat Sources
sh build_incremental.sh

Notes