fix: use correct token param for newer OpenAI models #435
Merged
gabrielste1n merged 4 commits into OpenWhispr:main on Mar 15, 2026
Conversation
- Add `tokenParam` and `supportsTemperature` fields to `CloudModelDefinition` and the model registry, replacing fragile string prefix matching
- Add `getOpenAiApiConfig()` helper with a fallback for unregistered models
- Fix temperature regression: gpt-4.1 models now correctly send `temperature` (they support it); gpt-5 models correctly omit it
- Add GPT-5.4 as new flagship model across all 10 locales
- Remove `isOlderOpenAiModel()` in favor of registry-driven lookup
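The registry-driven lookup described above could look roughly like the sketch below. The names `CloudModelDefinition` and `getOpenAiApiConfig()` come from the PR description; the field shapes, registry entries, and prefix-matching fallback are assumptions for illustration, not the actual OpenWhispr code.

```typescript
// Sketch of a registry-driven model config lookup (shapes are assumed).
type TokenParam = "max_tokens" | "max_completion_tokens";

interface CloudModelDefinition {
  id: string;
  tokenParam: TokenParam;
  supportsTemperature: boolean;
}

// Hypothetical registry entries matching the behavior the PR describes.
const MODEL_REGISTRY: CloudModelDefinition[] = [
  { id: "gpt-4o", tokenParam: "max_tokens", supportsTemperature: true },
  { id: "gpt-4.1", tokenParam: "max_completion_tokens", supportsTemperature: true },
  { id: "gpt-5", tokenParam: "max_completion_tokens", supportsTemperature: false },
];

// Fallback for unregistered models: assume the newer parameter name and
// no temperature support (the conservative choice for future models).
function getOpenAiApiConfig(
  modelId: string
): Pick<CloudModelDefinition, "tokenParam" | "supportsTemperature"> {
  const entry = MODEL_REGISTRY.find((m) => modelId.startsWith(m.id));
  return entry ?? { tokenParam: "max_completion_tokens", supportsTemperature: false };
}
```

Keeping the per-model facts in data rather than in scattered `startsWith` checks means adding a model (like GPT-5.4 here) is a one-line registry change.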
gpt-4.1 models support temperature on both the Responses API and Chat Completions endpoints. Move the temperature check outside the if/else branch so it applies to both API paths.
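Hoisting the check out of the endpoint branch might look like the sketch below. The function and parameter names are hypothetical; only the structure (temperature applied after the per-endpoint if/else) reflects the fix described above.

```typescript
// Hypothetical request-body builder showing the temperature check
// applied outside the endpoint-specific if/else, so both API paths get it.
interface RequestBody {
  model: string;
  temperature?: number;
  [key: string]: unknown;
}

// Assumed predicate: gpt-4.1 supports temperature, gpt-5 does not,
// per the PR description.
function supportsTemperature(modelId: string): boolean {
  return !modelId.startsWith("gpt-5");
}

function buildRequestBody(
  modelId: string,
  useResponsesApi: boolean,
  temperature: number
): RequestBody {
  const body: RequestBody = { model: modelId };
  if (useResponsesApi) {
    body.max_output_tokens = 1024; // Responses API token parameter
  } else {
    body.max_completion_tokens = 1024; // Chat Completions token parameter
  }
  // Outside the branch: applies to both the Responses API and
  // Chat Completions paths when the model supports it.
  if (supportsTemperature(modelId)) {
    body.temperature = temperature;
  }
  return body;
}
```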
gabrielste1n approved these changes on Mar 15, 2026
Fix missing `max_completion_tokens` parameter for GPT-5 models

When selecting a GPT-5 model (e.g. gpt-5-mini, gpt-5-nano, gpt-5.2) as the reasoning model, the app sent no token-limit parameter at all in the API request body, and the OpenAI API rejected the request with an error.
The root cause was that the request body construction had no else branch for models that weren't classified as legacy — so gpt-5* models fell through with no token parameter set.
Changes
- Add `isOlderOpenAiModel()` helper with a clear comment, replacing two duplicated inline conditions
- Send `max_completion_tokens` for non-legacy models (gpt-5* and gpt-4.1*)
- `processWithOpenAI()`: now sends `max_tokens` for legacy models (gpt-4o*, gpt-4-, gpt-3), `max_completion_tokens` for newer Chat Completions models, and `max_output_tokens` for the Responses API endpoint
- `processTextStreaming()`: same fix applied to the streaming path
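The selection logic in that list could be sketched as below. `isOlderOpenAiModel()` is the helper named in the change list (which a later commit replaces with the registry lookup); its body here is an assumption reconstructed from the prefixes listed above.

```typescript
// Assumed implementation of the legacy-model check from the change list.
function isOlderOpenAiModel(modelId: string): boolean {
  // Legacy chat models still expect max_tokens.
  return (
    modelId.startsWith("gpt-4o") ||
    modelId.startsWith("gpt-4-") ||
    modelId.startsWith("gpt-3")
  );
}

// Picks the token-limit parameter name for a given model and endpoint,
// mirroring the processWithOpenAI()/processTextStreaming() fix.
function tokenParamFor(modelId: string, isResponsesApi: boolean): string {
  if (isResponsesApi) {
    return "max_output_tokens"; // Responses API endpoint
  }
  return isOlderOpenAiModel(modelId) ? "max_tokens" : "max_completion_tokens";
}
```

With this in place, gpt-5* models no longer fall through the branch with no token parameter at all, which was the root cause of the rejected requests.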