English | 中文
Enable OpenAI Codex CLI to work with GLM (Zhipu AI) models by running a local proxy that converts OpenAI Responses API format to GLM Chat Completions format.
- ✅ Full Codex Compatibility - Works seamlessly with OpenAI Codex CLI
- ✅ Streaming Support - Real-time streaming responses
- ✅ Tool Calling - Supports `apply_patch`, `exec`, and other Codex tools
- ✅ Multi-turn Conversations - Maintains conversation context
- ✅ Automatic Model Mapping - Maps OpenAI model names to GLM equivalents
- ✅ Easy Setup - Single Python file, no complex dependencies
- Chinese Developers - Use GLM (Zhipu AI) instead of OpenAI GPT-4
- Cost-Conscious Teams - GLM offers competitive pricing for Chinese users
- Local Development - Run Codex with domestic LLM providers
- Privacy-Focused Projects - Keep code within China's borders
- Learning & Education - Access powerful coding AI without OpenAI account
| Scenario | Example Command |
|---|---|
| Quick Prototyping | `codex exec "Create a REST API with authentication"` |
| Code Generation | `codex exec "Generate TypeScript interfaces from JSON schema"` |
| Bug Fixing | `codex exec "Fix the null pointer exception in UserService.java"` |
| Testing | `codex exec "Add unit tests for the calculator module"` |
| Documentation | `codex exec "Generate API documentation from code comments"` |
| Refactoring | `codex exec "Refactor this class to use dependency injection"` |
```
┌─────────────────┐
│    Codex CLI    │  Sends:    Responses API Request
│   (User Side)   │  Receives: Responses API Response
└────────┬────────┘
         │ Responses API Format
         ▼
┌──────────────────────────────────────────┐
│    Codex GLM Proxy (localhost:18765)     │
│                                          │
│  ┌────────────────────────────────────┐  │
│  │ Request Converter                  │  │
│  │ - Responses API → Chat Completions │  │
│  │ - Tool call history handling       │  │
│  │ - Model name mapping               │  │
│  └────────────────────────────────────┘  │
│                                          │
│  ┌────────────────────────────────────┐  │
│  │ Response Converter                 │  │
│  │ - Chat Completions → Responses API │  │
│  │ - Tool call streaming              │  │
│  │ - Event sequencing                 │  │
│  └────────────────────────────────────┘  │
└────────┬─────────────────────────────────┘
         │ Chat Completions API Format
         ▼
┌─────────────────┐
│     GLM API     │  Sends:    Chat Completions Request
│   (Zhipu AI)    │  Receives: Chat Completions Response
└─────────────────┘
```
```mermaid
sequenceDiagram
    participant U as User
    participant C as Codex CLI
    participant P as GLM Proxy
    participant G as GLM API
    U->>C: codex exec "Create hello.py"
    C->>P: Responses API Request<br/>(Responses API format)
    Note over P: Convert to Chat Completions API
    P->>G: Chat Completions Request<br/>(GLM format)
    G-->>P: Chat Completions Response<br/>(Streaming)
    Note over P: Convert back to Responses API
    P-->>C: Responses API Response<br/>(Streaming events)
    C-->>U: Display results & apply changes
```
1. Request Conversion (Codex → GLM)

```json
// OpenAI Responses API (from Codex)
{
  "model": "gpt-4o",
  "input": [
    {"type": "message", "role": "user", "content": "..."},
    {"type": "function_call", "call_id": "...", "name": "exec"}
  ]
}
```

⬇️ Proxy Converts ⬇️

```json
// GLM Chat Completions (to GLM)
{
  "model": "glm-4-plus",
  "messages": [
    {"role": "user", "content": "..."},
    {"role": "assistant", "tool_calls": [...]}
  ],
  "tools": [...]
}
```

2. Response Conversion (GLM → Codex)
| GLM Chat Completions | Responses API Event |
|---|---|
| `delta.content` | `response.output_text.delta` |
| `delta.tool_calls` | `response.function_call_arguments.delta` |
| `finish_reason` | `response.completed` |
Note: Codex CLI sends and receives Responses API format, while GLM API uses Chat Completions format. The proxy handles the bidirectional conversion.
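The request-side conversion described above can be sketched in a few lines of Python. This is a simplified illustration, not the proxy's actual code: the function name, the partial `MODEL_MAP`, and the exact field handling are assumptions based on the examples and tables in this README.

```python
# Sketch of the Responses API -> Chat Completions request conversion.
# Simplified illustration only; the real proxy handles more cases.

MODEL_MAP = {"gpt-4o": "glm-4-plus", "gpt-4": "glm-4", "gpt-4o-mini": "glm-4-flash"}

def responses_to_chat(request: dict) -> dict:
    """Convert an OpenAI Responses API request body to Chat Completions shape."""
    messages = []
    for item in request.get("input", []):
        if item.get("type") == "message":
            messages.append({"role": item["role"], "content": item["content"]})
        elif item.get("type") == "function_call":
            # Replay a prior tool call as an assistant tool_calls entry
            messages.append({
                "role": "assistant",
                "tool_calls": [{
                    "id": item["call_id"],
                    "type": "function",
                    "function": {"name": item["name"],
                                 "arguments": item.get("arguments", "{}")},
                }],
            })
        elif item.get("type") == "function_call_output":
            # A tool result becomes a role-"tool" message
            messages.append({"role": "tool",
                             "tool_call_id": item["call_id"],
                             "content": item["output"]})
    return {
        "model": MODEL_MAP.get(request.get("model", ""), request.get("model")),
        "messages": messages,
        "tools": request.get("tools", []),
    }
```

The key structural change is flattening the typed `input` list into a role-based `messages` list, which is what makes multi-turn and tool-call history work.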
```mermaid
graph LR
    A[Codex Request] --> B[Proxy]
    B --> C{Tool Call?}
    C -->|Yes| D[Execute Tool]
    D --> E[Return Result]
    E --> B
    C -->|No| F[Stream to GLM]
    F --> G[Get Response]
    G --> H[Convert & Return]
    H --> A
```
- Python 3.8+
- GLM API key (Get one here)
- OpenAI Codex CLI installed
1. Clone the repository

   ```bash
   git clone https://github.com/JichinX/codex-glm-proxy.git
   cd codex-glm-proxy
   ```

2. Set your GLM API key

   ```bash
   export GLM_API_KEY="your_glm_api_key_here"
   ```

3. Start the proxy

   ```bash
   python3 proxy.py
   # or use the convenience script
   ./scripts/start.sh
   ```

   The proxy will run on `http://localhost:18765`.

4. Configure Codex CLI

   Create or update `~/.codex/config.toml`:

   ```toml
   model_provider = "glm-proxy"
   model = "gpt-4o"

   [model_providers.glm-proxy]
   name = "GLM via Proxy"
   base_url = "http://localhost:18765/v4"
   wire_api = "responses"
   ```

5. Test it!

   ```bash
   mkdir test-codex && cd test-codex && git init
   codex exec "Create a Python hello world program" --full-auto
   ```
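Before pointing Codex at the proxy, you can sanity-check it from Python. This is a small sketch, assuming the proxy is on its default port and exposes the `/health` endpoint used by the helper scripts:

```python
import urllib.request

def proxy_healthy(base: str = "http://localhost:18765") -> bool:
    """Return True if the local proxy answers its /health endpoint.

    Assumes the default port and the /health route mentioned in this README.
    """
    try:
        with urllib.request.urlopen(f"{base}/health", timeout=3) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, etc. -> proxy is not reachable
        return False
```

If this returns `False`, check that `./scripts/start.sh` ran and that nothing else is bound to the port.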
| Environment Variable | Default | Description |
|---|---|---|
| `GLM_API_KEY` | (required) | Your GLM API key |
| `GLM_API_BASE` | `https://open.bigmodel.cn/api/coding/paas/v4` | GLM API endpoint |
| `PROXY_PORT` | `18765` | Local proxy port |
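A minimal sketch of how a proxy like this might read those variables, assuming plain `os.environ` lookups with the defaults from the table above (the helper name is illustrative, not the proxy's actual code):

```python
import os

# Defaults mirror the configuration table; GLM_API_KEY has no default.
GLM_API_BASE = os.environ.get(
    "GLM_API_BASE", "https://open.bigmodel.cn/api/coding/paas/v4"
)
PROXY_PORT = int(os.environ.get("PROXY_PORT", "18765"))

def require_api_key() -> str:
    """Fail fast with a clear message if GLM_API_KEY was not exported."""
    key = os.environ.get("GLM_API_KEY")
    if not key:
        raise RuntimeError("GLM_API_KEY is required; export it before starting the proxy")
    return key
```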
The proxy automatically maps OpenAI model names to GLM equivalents:
| OpenAI Model | GLM Model | Notes |
|---|---|---|
| `gpt-4` | `glm-4` | Standard GPT-4 |
| `gpt-4-turbo` | `glm-4` | GPT-4 Turbo |
| `gpt-4o` | `glm-4-plus` | Recommended for best coding |
| `gpt-4o-mini` | `glm-4-flash` | Faster, cheaper |
| `gpt-3.5-turbo` | `glm-4-flash` | Legacy support |
| `gpt-5.x-codex` | `glm-5` | Future Codex models |
Recommendation: Use model = "gpt-4o" in your Codex config for best results.
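The mapping table amounts to a simple lookup with a couple of special cases. A sketch (the pass-through fallback for unknown names is an assumption; per the Troubleshooting notes, the real proxy may reject unmapped models instead):

```python
# Illustrative model-name mapping, mirroring the table above.
MODEL_MAP = {
    "gpt-4": "glm-4",
    "gpt-4-turbo": "glm-4",
    "gpt-4o": "glm-4-plus",
    "gpt-4o-mini": "glm-4-flash",
    "gpt-3.5-turbo": "glm-4-flash",
}

def map_model(openai_name: str) -> str:
    """Return the GLM equivalent for an OpenAI model name.

    Unknown names pass through unchanged here (an assumption for this
    sketch -- the proxy itself may handle them differently).
    """
    # "gpt-5.x-codex" style future Codex models map to glm-5
    if openai_name.startswith("gpt-5") and "codex" in openai_name:
        return "glm-5"
    return MODEL_MAP.get(openai_name, openai_name)
```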
```bash
# Start proxy (background)
./scripts/start.sh

# Check if running
curl http://localhost:18765/health

# View logs
tail -f /tmp/codex-glm-proxy.log

# Stop proxy
./scripts/stop.sh
```

The proxy sits between Codex CLI and GLM API, converting:

```
Codex CLI → Responses API → Proxy → Chat Completions API → GLM
```
Request Converter:

- Converts OpenAI Responses API format to GLM Chat Completions
- Handles tool call history and function results
- Filters unsupported tools

Response Converter:

- Streams Chat Completions responses back to Responses API format
- Maintains proper event sequencing
- Handles tool call streaming
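As an illustration of the response-side event sequencing, here is a heavily simplified sketch that turns Chat Completions streaming deltas into `(event_type, data)` pairs. The real proxy emits proper SSE events with much more bookkeeping (`response.created`, output items, usage, and so on); the event names follow the mapping table shown earlier.

```python
def chat_chunks_to_events(chunks):
    """Yield (event_type, data) pairs for a stream of Chat Completions chunks.

    Simplified sketch; not the proxy's actual event emitter.
    """
    for chunk in chunks:
        choice = chunk["choices"][0]
        delta = choice.get("delta", {})
        if delta.get("content"):
            # Text deltas become response.output_text.delta events
            yield ("response.output_text.delta", delta["content"])
        for call in delta.get("tool_calls") or []:
            args = call.get("function", {}).get("arguments", "")
            if args:
                # Tool-call argument fragments stream as their own events
                yield ("response.function_call_arguments.delta", args)
        if choice.get("finish_reason"):
            # A finish_reason closes the response
            yield ("response.completed", choice["finish_reason"])
```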
```bash
# Simple task
codex exec "Create a Python function to calculate Fibonacci" --full-auto

# More complex project
codex exec "Build a REST API with FastAPI for todo management" --full-auto

# With tests
codex exec "Create a calculator module with unit tests" --full-auto
```

Troubleshooting common issues:

- Cause: Model name not properly mapped
  Solution: Ensure you're using a known model like `gpt-4o` in the config
- Cause: Tool call history not properly handled
  Solution: Update to the latest version of the proxy
- Cause: Proxy crashed
  Solution: Check the logs at `/tmp/codex-glm-proxy.log` and restart
- Cause: Proxy not running
  Solution: Start the proxy with `./scripts/start.sh`
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenAI Codex - The amazing coding agent
- Zhipu AI GLM - Powerful Chinese LLM
- Inspired by the need to use Codex with local/alternative LLM providers
✅ Production Ready - Fully tested with all Codex features
| Feature | Status |
|---|---|
| Text conversations | ✅ Working |
| Model mapping | ✅ Working |
| Streaming responses | ✅ Working |
| Tool calling | ✅ Working |
| Multi-turn conversations | ✅ Working |
| Tool call history | ✅ Working |
| Tool call results | ✅ Working |
Made with ❤️ by the community, for the community

Star ⭐ this repo if you find it useful!