
Conversation

@luispater

No description provided.

@gemini-code-assist

Summary of Changes

Hello @luispater, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the model resolution and configuration management within the system. The primary goal is to provide more explicit control over how models are aliased and routed to their upstream providers, particularly for Gemini, Claude, and Codex. By introducing dedicated configuration sections for model mappings and updating the underlying executor logic, the changes ensure that the correct upstream model is consistently identified and used, while also improving the system's ability to detect and react to configuration changes. This leads to a more robust and configurable model handling pipeline.

Highlights

  • Enhanced Model Configuration: Introduced explicit models configuration for Gemini, Claude, and Codex providers in config.example.yaml, allowing users to define upstream model names and client-facing aliases for better control over model routing.
  • Refactored Model Resolution: Streamlined model resolution logic across various executors (Antigravity, Claude, Codex, Gemini, Vertex, IFlow, OpenAI Compat, Qwen) by replacing util.ResolveOriginalModel with direct req.Model usage or new resolveUpstreamModel methods, ensuring consistent handling of model aliases.
  • Improved Configuration Change Detection: Added new functions and structures (models_summary.go, oauth_model_mappings.go) to accurately summarize and hash model configurations and OAuth model mappings, enabling more precise detection of configuration changes.
  • Generic Model Info Builder: Implemented a generic buildConfigModels function in sdk/cliproxy/service.go to dynamically create ModelInfo objects from various provider-specific model configurations, reducing code duplication and improving maintainability.
  • OAuth Model Mapping Update: Modified the applyOAuthModelMapping function in sdk/cliproxy/auth/model_name_mappings.go to return both the resolved upstream model and updated metadata, enhancing flexibility in how model mappings are applied during request processing.
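The generic model-info builder described above can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual code: the `namedModel` interface, `geminiModelConfig` type, and field names are hypothetical stand-ins for the provider-specific config types in `sdk/cliproxy/service.go`.

```go
package main

import "fmt"

// ModelInfo mirrors the shape of a registry entry the service exposes.
// Field names are illustrative, not the project's actual struct.
type ModelInfo struct {
	ID       string // client-facing alias
	Upstream string // upstream model name
}

// namedModel is the minimal behavior a generic builder would need from any
// provider-specific model config (Gemini, Claude, Codex, ...).
type namedModel interface {
	UpstreamName() string
	Alias() string
}

// buildConfigModels converts a slice of provider model configs into
// ModelInfo entries, falling back to the upstream name when no alias
// is configured and skipping entries with neither.
func buildConfigModels[T namedModel](cfgs []T) []*ModelInfo {
	out := make([]*ModelInfo, 0, len(cfgs))
	for _, c := range cfgs {
		alias := c.Alias()
		if alias == "" {
			alias = c.UpstreamName()
		}
		if alias == "" {
			continue
		}
		out = append(out, &ModelInfo{ID: alias, Upstream: c.UpstreamName()})
	}
	return out
}

// geminiModelConfig is a hypothetical provider config implementing namedModel.
type geminiModelConfig struct {
	Name      string
	AliasName string
}

func (g geminiModelConfig) UpstreamName() string { return g.Name }
func (g geminiModelConfig) Alias() string        { return g.AliasName }

func main() {
	models := buildConfigModels([]geminiModelConfig{
		{Name: "gemini-2.0-pro", AliasName: "pro"},
		{Name: "gemini-2.0-flash"}, // no alias: exposed under its upstream name
	})
	for _, m := range models {
		fmt.Printf("%s -> %s\n", m.ID, m.Upstream)
	}
}
```

A single generic function like this is what lets the PR drop one hand-written builder per provider: each provider's config type only has to satisfy the small interface.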


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  ¹ Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant feature for model aliasing for API key-based providers, along with a substantial refactoring of model name resolution across various executors. The changes enhance configuration flexibility and improve code maintainability by reducing duplication, notably through the introduction of a generic function for building model configurations. My review focuses on ensuring the new logic is applied consistently and identifying opportunities for performance improvements. I've found a potential issue in openai_compat_executor.go regarding model validation and suggested a performance enhancement for model lookups in model_definitions.go.

Comment on lines +64 to 67
translated = NormalizeThinkingConfig(translated, req.Model, allowCompat)
if errValidate := ValidateThinkingConfig(translated, req.Model); errValidate != nil {
return resp, errValidate
}


medium

The NormalizeThinkingConfig and ValidateThinkingConfig functions are called with req.Model, but they should use the modelOverride if it's available. This is inconsistent with other executors (e.g., codex_executor.go) and could lead to incorrect thinking configuration validation if an alias maps to a model with different capabilities. The actual model that will be used for the upstream request should be used for validation.

	model := req.Model
	if modelOverride != "" {
		model = modelOverride
	}
	translated = NormalizeThinkingConfig(translated, model, allowCompat)
	if errValidate := ValidateThinkingConfig(translated, model); errValidate != nil {
		return resp, errValidate
	}

Comment on lines +156 to 159
translated = NormalizeThinkingConfig(translated, req.Model, allowCompat)
if errValidate := ValidateThinkingConfig(translated, req.Model); errValidate != nil {
return nil, errValidate
}


medium

Similar to the Execute method, NormalizeThinkingConfig and ValidateThinkingConfig should use the modelOverride if it exists, instead of req.Model. This ensures the thinking configuration is validated against the actual upstream model.

	model := req.Model
	if modelOverride != "" {
		model = modelOverride
	}
	translated = NormalizeThinkingConfig(translated, model, allowCompat)
	if errValidate := ValidateThinkingConfig(translated, model); errValidate != nil {
		return nil, errValidate
	}

Comment on lines +785 to +809
// LookupStaticModelInfo searches all static model definitions for a model by ID.
// Returns nil if no matching model is found.
func LookupStaticModelInfo(modelID string) *ModelInfo {
	if modelID == "" {
		return nil
	}
	allModels := [][]*ModelInfo{
		GetClaudeModels(),
		GetGeminiModels(),
		GetGeminiVertexModels(),
		GetGeminiCLIModels(),
		GetAIStudioModels(),
		GetOpenAIModels(),
		GetQwenModels(),
		GetIFlowModels(),
	}
	for _, models := range allModels {
		for _, m := range models {
			if m != nil && m.ID == modelID {
				return m
			}
		}
	}
	return nil
}


medium

The current implementation of LookupStaticModelInfo re-creates a slice of all model groups on every call. For better performance and cleaner code, consider initializing a map of all static models once and then performing a simple map lookup. This would be more efficient, especially if the number of models grows.

You can add the following at the package level:

import "sync"

var (
	allStaticModels map[string]*ModelInfo
	initOnce        sync.Once
)

func initializeStaticModels() {
	allStaticModels = make(map[string]*ModelInfo)
	modelGroups := [][]*ModelInfo{
		GetClaudeModels(),
		GetGeminiModels(),
		GetGeminiVertexModels(),
		GetGeminiCLIModels(),
		GetAIStudioModels(),
		GetOpenAIModels(),
		GetQwenModels(),
		GetIFlowModels(),
	}
	for _, group := range modelGroups {
		for _, model := range group {
			if model != nil && model.ID != "" {
				if _, exists := allStaticModels[model.ID]; !exists {
					allStaticModels[model.ID] = model
				}
			}
		}
	}
}

Then replace the function with this suggestion:

// LookupStaticModelInfo searches all static model definitions for a model by ID.
// Returns nil if no matching model is found.
func LookupStaticModelInfo(modelID string) *ModelInfo {
	initOnce.Do(initializeStaticModels)
	if modelID == "" {
		return nil
	}
	return allStaticModels[modelID]
}

@luispater luispater merged commit 06075c0 into router-for-me:main Dec 30, 2025
1 of 2 checks passed
@luispater luispater deleted the plus branch December 30, 2025 15:34