feat(llm): add OPENAI_BASE_URL environment variable support #3
🎯 Overview
Add support for customizing the OpenAI API endpoint via the `OPENAI_BASE_URL` environment variable.

💡 Motivation
Many users need to route requests through local proxies, enterprise gateways, or other OpenAI-compatible endpoints. Currently, the OpenAI provider hardcodes its endpoint, so changing it requires code changes or a separate provider.
🔧 Solution
This PR adds minimal, backward-compatible support for endpoint customization:
Changes Made
New constants (2 lines):
- `OPENAI_BASE_URL_ENV`: environment variable name
- `DEFAULT_OPENAI_API_URL`: renamed from `OPENAI_API_URL`

New helper function (16 lines):
- `get_api_endpoint()`: resolves the URL from the environment, falling back to the default

Request execution update (2 lines)

Comprehensive tests (77 lines)

Total: 96 lines added, 2 lines modified
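For illustration, the resolution logic described above might look like the following sketch. The constant values and the exact signature of `get_api_endpoint()` are assumptions based on this summary, not the PR's actual code:

```rust
use std::env;

// Assumed values, mirroring the constants described in this PR.
const OPENAI_BASE_URL_ENV: &str = "OPENAI_BASE_URL";
const DEFAULT_OPENAI_API_URL: &str = "https://api.openai.com/v1";

/// Resolve the API endpoint: use OPENAI_BASE_URL when set and non-blank,
/// otherwise fall back to the default. Trailing slashes are trimmed so
/// path joining stays predictable.
fn get_api_endpoint() -> String {
    match env::var(OPENAI_BASE_URL_ENV) {
        Ok(url) if !url.trim().is_empty() => {
            url.trim().trim_end_matches('/').to_string()
        }
        _ => DEFAULT_OPENAI_API_URL.to_string(),
    }
}

fn main() {
    // Unset: the default endpoint is used.
    env::remove_var(OPENAI_BASE_URL_ENV);
    println!("{}", get_api_endpoint()); // → https://api.openai.com/v1

    // Set with a trailing slash: the slash is trimmed.
    env::set_var(OPENAI_BASE_URL_ENV, "http://localhost:8080/v1/");
    println!("{}", get_api_endpoint()); // → http://localhost:8080/v1
}
```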
Design Principles
📊 Testing
All tests pass.
Test Coverage
📖 Usage Examples
Default OpenAI (unchanged)
Local proxy
Enterprise gateway
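The three scenarios above differ only in the value of `OPENAI_BASE_URL` visible to the process. A minimal sketch (the proxy and gateway URLs are illustrative, not part of this PR):

```rust
use std::env;

fn main() {
    // Default OpenAI (unchanged): leave OPENAI_BASE_URL unset and the
    // provider uses the built-in endpoint.
    env::remove_var("OPENAI_BASE_URL");
    assert!(env::var("OPENAI_BASE_URL").is_err());

    // Local proxy: point the provider at a locally running gateway.
    env::set_var("OPENAI_BASE_URL", "http://localhost:8080/v1");

    // Enterprise gateway: an internal OpenAI-compatible endpoint.
    env::set_var("OPENAI_BASE_URL", "https://ai-gateway.example.com/openai/v1");
    println!("{}", env::var("OPENAI_BASE_URL").unwrap());
}
```

In practice these would be set in the shell or deployment environment rather than in code; the point is that no provider configuration changes are needed.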
🔍 Edge Cases Handled
""" "/v1//v1Model validation still applies: This only changes the endpoint URL. The provider still validates OpenAI model names (gpt-4o, gpt-4.5, etc.)
For non-OpenAI models: If you need to use models with different names (e.g.,
llama-3), consider using a separate custom provider or relaxingsupports_model().Embeddings not included: This PR only covers chat completions. If needed, a follow-up PR can add similar support to
src/embedding/provider/openai.rs.📝 Checklist
🤝 Design Review
This implementation was designed after consulting with multiple AI models (Gemini and Codex) to ensure:
Co-Authored-By: Claude Sonnet 4.5 [email protected]