English | 中文
Enable OpenAI Codex CLI to work with GLM (Zhipu AI) models by running a local proxy that converts the OpenAI Responses API format to the GLM Chat Completions format.
- ✅ Full Codex Compatibility - Works seamlessly with OpenAI Codex CLI
- ✅ Streaming Support - Real-time streaming responses
- ✅ Tool Calling - Supports `apply_patch`, `exec`, and other Codex tools
- ✅ Multi-turn Conversations - Maintains conversation context
- ✅ Automatic Model Mapping - Maps OpenAI model names to GLM equivalents
- ✅ Easy Setup - Single Python file, no complex dependencies
- Chinese Developers - Use GLM (Zhipu AI) instead of OpenAI GPT-4
- Cost-Conscious Teams - GLM offers competitive pricing for Chinese users
- Local Development - Run Codex with domestic LLM providers
- Privacy-Focused Projects - Keep code within China's borders
- Learning & Education - Access powerful coding AI without OpenAI account
| Scenario | Example Command |
|---|---|
| Quick Prototyping | `codex exec "Create a REST API with authentication"` |
| Code Generation | `codex exec "Generate TypeScript interfaces from JSON schema"` |
| Bug Fixing | `codex exec "Fix the null pointer exception in UserService.java"` |
| Testing | `codex exec "Add unit tests for the calculator module"` |
| Documentation | `codex exec "Generate API documentation from code comments"` |
| Refactoring | `codex exec "Refactor this class to use dependency injection"` |
```
┌─────────────────┐
│    Codex CLI    │  Sends:    Responses API Request
│   (User Side)   │  Receives: Responses API Response
└────────┬────────┘
         │ Responses API Format
         ▼
┌──────────────────────────────────────────┐
│    Codex GLM Proxy (localhost:18765)     │
│                                          │
│  ┌────────────────────────────────────┐  │
│  │ Request Converter                  │  │
│  │ - Responses API → Chat Completions │  │
│  │ - Tool call history handling       │  │
│  │ - Model name mapping               │  │
│  └────────────────────────────────────┘  │
│                                          │
│  ┌────────────────────────────────────┐  │
│  │ Response Converter                 │  │
│  │ - Chat Completions → Responses API │  │
│  │ - Tool call streaming              │  │
│  │ - Event sequencing                 │  │
│  └────────────────────────────────────┘  │
└────────┬─────────────────────────────────┘
         │ Chat Completions API Format
         ▼
┌─────────────────┐
│     GLM API     │  Sends:    Chat Completions Request
│   (Zhipu AI)    │  Receives: Chat Completions Response
└─────────────────┘
```
```mermaid
sequenceDiagram
    participant U as User
    participant C as Codex CLI
    participant P as GLM Proxy
    participant G as GLM API

    U->>C: codex exec "Create hello.py"
    C->>P: Responses API Request<br/>(Responses API format)
    Note over P: Convert to Chat Completions API
    P->>G: Chat Completions Request<br/>(GLM format)
    G-->>P: Chat Completions Response<br/>(Streaming)
    Note over P: Convert back to Responses API
    P-->>C: Responses API Response<br/>(Streaming events)
    C-->>U: Display results & apply changes
```
1. Request Conversion (Codex → GLM)

```json
// OpenAI Responses API (from Codex)
{
  "model": "gpt-4o",
  "input": [
    {"type": "message", "role": "user", "content": "..."},
    {"type": "function_call", "call_id": "...", "name": "exec"}
  ]
}
```

⬇️ Proxy Converts ⬇️

```json
// GLM Chat Completions (to GLM)
{
  "model": "glm-4-plus",
  "messages": [
    {"role": "user", "content": "..."},
    {"role": "assistant", "tool_calls": [...]}
  ],
  "tools": [...]
}
```
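In Python, this direction amounts to walking the `input` items and rebuilding a `messages` array. The following is a minimal, hypothetical sketch (function and variable names are illustrative, not taken from `proxy.py`):

```python
# Illustrative sketch only; the real proxy.py may differ in detail.
MODEL_MAP = {"gpt-4o": "glm-4-plus"}  # see the full mapping table below


def convert_request(responses_req: dict) -> dict:
    """Rebuild a GLM Chat Completions request from a Responses API request."""
    messages = []
    for item in responses_req.get("input", []):
        kind = item.get("type")
        if kind == "message":
            messages.append({"role": item["role"], "content": item["content"]})
        elif kind == "function_call":
            # Replay a prior tool call from history as an assistant turn.
            messages.append({
                "role": "assistant",
                "tool_calls": [{
                    "id": item["call_id"],
                    "type": "function",
                    "function": {
                        "name": item["name"],
                        "arguments": item.get("arguments", "{}"),
                    },
                }],
            })
        elif kind == "function_call_output":
            # Feed the tool's result back as a tool-role message.
            messages.append({
                "role": "tool",
                "tool_call_id": item["call_id"],
                "content": item.get("output", ""),
            })
    return {
        "model": MODEL_MAP.get(responses_req["model"], responses_req["model"]),
        "messages": messages,
        "tools": responses_req.get("tools", []),
        "stream": True,
    }
```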
2. Response Conversion (GLM → Codex)

```
GLM Chat Completions     →    Responses API Event
────────────────────          ────────────────────────
delta.content            →    response.output_text.delta
delta.tool_calls         →    response.function_call_arguments.delta
finish_reason            →    response.completed
```
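A hedged Python sketch of that mapping for one streaming chunk (the event names come from the table above; `chunk_to_events` and `sse` are hypothetical helper names, not the proxy's actual API):

```python
import json


def chunk_to_events(chunk: dict):
    """Yield (event_name, payload) pairs for one streaming GLM chunk."""
    choice = chunk["choices"][0]
    delta = choice.get("delta", {})
    if delta.get("content"):
        yield "response.output_text.delta", {"delta": delta["content"]}
    for call in delta.get("tool_calls") or []:
        args = call.get("function", {}).get("arguments")
        if args:
            yield "response.function_call_arguments.delta", {"delta": args}
    if choice.get("finish_reason"):
        yield "response.completed", {}


def sse(event: str, payload: dict) -> str:
    """Format one event as a server-sent-events frame."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"
```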
Note: Codex CLI sends and receives Responses API format, while GLM API uses Chat Completions format. The proxy handles the bidirectional conversion.
```mermaid
graph LR
    A[Codex Request] --> B[Proxy]
    B --> C{Tool Call?}
    C -->|Yes| D[Execute Tool]
    D --> E[Return Result]
    E --> B
    C -->|No| F[Stream to GLM]
    F --> G[Get Response]
    G --> H[Convert & Return]
    H --> A
```
- Python 3.8+
- GLM API key (Get one here)
- OpenAI Codex CLI installed
- Clone the repository

  ```bash
  git clone https://github.com/JichinX/codex-glm-proxy.git
  cd codex-glm-proxy
  ```

- Set your GLM API key

  ```bash
  export GLM_API_KEY="your_glm_api_key_here"
  ```

- Start the proxy

  ```bash
  python3 proxy.py
  # Or use the convenience script
  ./scripts/start.sh
  ```

  The proxy will run on `http://localhost:18765`

- Configure Codex CLI

  Create or update `~/.codex/config.toml`:

  ```toml
  model_provider = "glm-proxy"
  model = "gpt-4o"

  [model_providers.glm-proxy]
  name = "GLM via Proxy"
  base_url = "http://localhost:18765/v4"
  wire_api = "responses"
  ```

- Test it!

  ```bash
  mkdir test-codex && cd test-codex && git init
  codex exec "Create a Python hello world program" --full-auto
  ```
| Environment Variable | Default | Description |
|---|---|---|
| `GLM_API_KEY` | (required) | Your GLM API key |
| `GLM_API_BASE` | `https://open.bigmodel.cn/api/coding/paas/v4` | GLM API endpoint |
| `PROXY_PORT` | `18765` | Local proxy port |
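These are plain environment variables; here is a sketch of how they would typically be read in Python (the variable names match the table above, the code itself is illustrative rather than the proxy's exact source):

```python
import os

# Required; raises KeyError at startup if missing.
GLM_API_KEY = os.environ["GLM_API_KEY"]

# Optional, with the documented defaults.
GLM_API_BASE = os.environ.get(
    "GLM_API_BASE", "https://open.bigmodel.cn/api/coding/paas/v4"
)
PROXY_PORT = int(os.environ.get("PROXY_PORT", "18765"))
```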
The proxy automatically maps OpenAI model names to GLM equivalents:

| OpenAI Model | GLM Model | Notes |
|---|---|---|
| `gpt-4` | `glm-4` | Standard GPT-4 |
| `gpt-4-turbo` | `glm-4` | GPT-4 Turbo |
| `gpt-4o` | `glm-4-plus` | Recommended for best coding |
| `gpt-4o-mini` | `glm-4-flash` | Faster, cheaper |
| `gpt-3.5-turbo` | `glm-4-flash` | Legacy support |
| `gpt-5.x-codex` | `glm-5` | Future Codex models |

Recommendation: Use `model = "gpt-4o"` in your Codex config for best results.
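The table is effectively a lookup; a Python sketch mirroring it (the `gpt-5`-prefix check and the default fallback are assumptions for illustration, not documented proxy behavior):

```python
MODEL_MAP = {
    "gpt-4": "glm-4",
    "gpt-4-turbo": "glm-4",
    "gpt-4o": "glm-4-plus",
    "gpt-4o-mini": "glm-4-flash",
    "gpt-3.5-turbo": "glm-4-flash",
}


def map_model(openai_name: str) -> str:
    """Return the GLM equivalent of an OpenAI model name."""
    if openai_name.startswith("gpt-5") and "codex" in openai_name:
        return "glm-5"  # future Codex models, per the table above
    return MODEL_MAP.get(openai_name, "glm-4-plus")  # assumed fallback
```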
```bash
# Start proxy (background)
./scripts/start.sh

# Check if running
curl http://localhost:18765/health

# View logs
tail -f /tmp/codex-glm-proxy.log

# Stop proxy
./scripts/stop.sh
```

The proxy sits between Codex CLI and the GLM API, converting:

```
Codex CLI → Responses API → Proxy → Chat Completions API → GLM
```
Request Converter:

- Converts OpenAI Responses API format to GLM Chat Completions
- Handles tool call history and function results
- Filters unsupported tools

Response Converter:

- Streams Chat Completions responses back to Responses API format
- Maintains proper event sequencing
- Handles tool call streaming
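Putting the two converters together, the core loop looks roughly like the sketch below. It reuses the hypothetical `convert_request`, `chunk_to_events`, and `sse` helpers from the earlier sketches, and assumes GLM's standard `/chat/completions` path under `GLM_API_BASE`; none of this is the literal `proxy.py` code:

```python
import json
import urllib.request


def forward(responses_req: dict, api_key: str, api_base: str):
    """Stream one converted request through GLM and yield SSE frames."""
    glm_req = convert_request(responses_req)        # Request Converter
    http_req = urllib.request.Request(
        f"{api_base}/chat/completions",
        data=json.dumps(glm_req).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(http_req) as resp:
        for raw in resp:                            # SSE lines from GLM
            line = raw.strip()
            if not line.startswith(b"data: "):
                continue
            data = line[len(b"data: "):]
            if data == b"[DONE]":
                break
            for event, payload in chunk_to_events(json.loads(data)):
                yield sse(event, payload)           # Response Converter
```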
```bash
# Simple task
codex exec "Create a Python function to calculate Fibonacci" --full-auto

# More complex project
codex exec "Build a REST API with FastAPI for todo management" --full-auto

# With tests
codex exec "Create a calculator module with unit tests" --full-auto
```

Cause: Model name not properly mapped
Solution: Ensure you're using a known model like `gpt-4o` in the config

Cause: Tool call history not properly handled
Solution: Update to the latest version of the proxy

Cause: Proxy crashed
Solution: Check the logs at `/tmp/codex-glm-proxy.log` and restart

Cause: Proxy not running
Solution: Start the proxy with `./scripts/start.sh`
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenAI Codex - The amazing coding agent
- Zhipu AI GLM - Powerful Chinese LLM
- Inspired by the need to use Codex with local/alternative LLM providers
✅ Production Ready - Fully tested with all Codex features
| Feature | Status |
|---|---|
| Text conversations | ✅ Working |
| Model mapping | ✅ Working |
| Streaming responses | ✅ Working |
| Tool calling | ✅ Working |
| Multi-turn conversations | ✅ Working |
| Tool call history | ✅ Working |
| Tool call results | ✅ Working |
Made with ❤️ by the community, for the community

Star ⭐ this repo if you find it useful!