Merged
Changes from 3 commits
134 changes: 51 additions & 83 deletions README.md
@@ -1,33 +1,24 @@
# OpenClaude

OpenClaude is an open-source coding-agent CLI that works with more than one model provider.
OpenClaude is an open-source coding-agent CLI for cloud and local model providers.

Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping the same terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.
Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping one terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.

## Why OpenClaude

- Use one CLI across cloud and local model providers
- Save provider profiles inside the app with `/provider`
- Run locally with Ollama or Atomic Chat
- Keep core coding-agent workflows: bash, file tools, grep, glob, agents, tasks, MCP, and web tools

## Provenance & Legal Notice

OpenClaude is derived from Anthropic's Claude Code CLI source code, which was
inadvertently exposed in March 2026 through a packaging error in npm. The
original Claude Code source is proprietary software owned by Anthropic PBC.

This project adds multi-provider support, strips telemetry, and adapts the
codebase for open use. It is not an authorized fork or open-source release
by Anthropic.
[![PR Checks](https://github.com/Gitlawb/openclaude/actions/workflows/pr-checks.yml/badge.svg?branch=main)](https://github.com/Gitlawb/openclaude/actions/workflows/pr-checks.yml)
[![Release](https://img.shields.io/github/v/tag/Gitlawb/openclaude?label=release&color=0ea5e9)](https://github.com/Gitlawb/openclaude/tags)
[![Discussions](https://img.shields.io/badge/discussions-open-7c3aed)](https://github.com/Gitlawb/openclaude/discussions)
[![Security Policy](https://img.shields.io/badge/security-policy-0f766e)](SECURITY.md)
[![License](https://img.shields.io/badge/license-MIT-2563eb)](LICENSE)

**"Claude" and "Claude Code" are trademarks of Anthropic PBC.**
[Quick Start](#quick-start) | [Setup Guides](#setup-guides) | [Providers](#supported-providers) | [Source Build](#source-build-and-local-development) | [VS Code Extension](#vs-code-extension) | [Community](#community)

Contributors should be aware that the legal status of distributing code
derived from Anthropic's proprietary source is unresolved. See the LICENSE
file for details.
## Why OpenClaude

---
- Use one CLI across cloud APIs and local model backends
- Save provider profiles inside the app with `/provider`
- Run with OpenAI-compatible services, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported providers
- Keep coding-agent workflows in one place: bash, file tools, grep, glob, agents, tasks, MCP, and web tools
- Use the bundled VS Code extension for launch integration and theme support

## Quick Start

@@ -37,7 +28,7 @@ file for details.
npm install -g @gitlawb/openclaude
```

If the npm install path later reports `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.
If the install later reports `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.
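A minimal sketch of that check (the package names below are only the common defaults and vary by platform):

```bash
# Install ripgrep with your platform's package manager (package names vary).
brew install ripgrep                # macOS (Homebrew)
sudo apt-get install -y ripgrep     # Debian/Ubuntu
# Confirm the binary is visible in the same terminal you will start OpenClaude from.
rg --version
```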

### Start

@@ -47,8 +38,8 @@ openclaude

Inside OpenClaude:

- run `/provider` for guided setup of OpenAI-compatible, Gemini, Ollama, or Codex profiles
- run `/onboard-github` for GitHub Models setup
- run `/provider` for guided provider setup and saved profiles
- run `/onboard-github` for GitHub Models onboarding

### Fastest OpenAI setup

@@ -94,8 +85,6 @@ $env:OPENAI_MODEL="qwen2.5-coder:7b"
openclaude
```

---

## Setup Guides

Beginner-friendly guides:
@@ -109,40 +98,26 @@ Advanced and source-build guides:
- [Advanced Setup](docs/advanced-setup.md)
- [Android Install](ANDROID_INSTALL.md)

---

## Supported Providers

| Provider | Setup Path | Notes |
| --- | --- | --- |
| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and compatible local `/v1` servers |
| Gemini | `/provider` or env vars | Google Gemini support through the runtime provider layer (API key, access token, or local ADC) |
| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and other compatible `/v1` servers |
| Gemini | `/provider` or env vars | Supports API key, access token, or local ADC workflow on current `main` |
| GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
| Codex | `/provider` | Uses existing Codex credentials when available |
| Ollama | `/provider` or env vars | Local inference with no API key |
| Atomic Chat | advanced setup | Local Apple Silicon backend |
| Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |

For Gemini, `/provider` can now save the API-key path, a securely stored access-token path, or a local ADC profile.
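For the OpenAI-compatible row, one quick sanity check before saving a profile is to confirm the local server actually answers on its `/v1` endpoint; the ports below are only the usual defaults for Ollama and LM Studio, so adjust for your setup:

```bash
# List the models a local OpenAI-compatible server exposes (adjust host/port as needed).
curl -s http://localhost:11434/v1/models   # Ollama default port
curl -s http://localhost:1234/v1/models    # LM Studio default port
```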

---

## What Works

- Tool-driven coding workflows
Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
- Streaming responses
Real-time token output and tool progress
- Tool calling
Multi-step tool loops with model calls, tool execution, and follow-up responses
- Images
URL and base64 image inputs for providers that support vision
- Provider profiles
Guided setup plus saved `.openclaude-profile.json` support
- Local and remote model backends
Cloud APIs, local servers, and Apple Silicon local inference

---
- **Tool-driven coding workflows**: Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
- **Streaming responses**: Real-time token output and tool progress
- **Tool calling**: Multi-step tool loops with model calls, tool execution, and follow-up responses
- **Images**: URL and base64 image inputs for providers that support vision
- **Provider profiles**: Guided setup plus saved `.openclaude-profile.json` support
- **Local and remote model backends**: Cloud APIs, local servers, and Apple Silicon local inference

## Provider Notes

@@ -155,13 +130,9 @@ OpenClaude supports multiple providers, but behavior is not identical across all

For best results, use models with strong tool/function calling support.
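One rough way to check tool support on an OpenAI-compatible backend is to send a minimal request with a `tools` array and see whether the reply contains a tool call. The endpoint, model, and function below are placeholders, not project defaults:

```bash
# Placeholder endpoint and model: probe whether the backend returns a tool call at all.
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-coder:7b",
    "messages": [{"role": "user", "content": "What time is it in UTC?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_time",
        "description": "Return the current time for a timezone",
        "parameters": {
          "type": "object",
          "properties": {"timezone": {"type": "string"}},
          "required": ["timezone"]
        }
      }
    }]
  }'
```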

---

## Agent Routing

Route different agents to different AI providers within the same session. Useful for cost optimization (cheap model for code review, powerful model for complex coding) or leveraging model strengths.

### Configuration
OpenClaude can route different agents to different models through settings-based routing. This is useful for cost optimization or splitting work by model strength.

Add to `~/.claude/settings.json`:

@@ -187,29 +158,19 @@ Add to `~/.claude/settings.json`:
}
```

### How It Works

- **agentModels**: Maps model names to OpenAI-compatible API endpoints
- **agentRouting**: Maps agent types or team member names to model names
- **Priority**: `name` > `subagent_type` > `"default"` > global provider
- **Matching**: Case-insensitive, hyphen/underscore equivalent (`general-purpose` = `general_purpose`)
- **Teams**: Team members are routed by their `name` — no extra config needed

When no routing match is found, the global provider (env vars) is used as fallback.
When no routing match is found, the global provider remains the fallback.

> **Note:** `api_key` values in `settings.json` are stored in plaintext. Keep this file private and do not commit it to version control.
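A purely illustrative sketch of that mapping follows; the endpoint URLs, key values, and per-entry field names are assumptions rather than the project's confirmed schema, so treat it only as a shape to compare against the real block above and merge keys into `~/.claude/settings.json` by hand:

```bash
# Illustration only: field names, URLs, and keys are assumptions, not confirmed configuration.
cat > /tmp/openclaude-routing-example.json <<'EOF'
{
  "agentModels": {
    "cheap-reviewer": { "base_url": "https://api.example.com/v1", "api_key": "sk-example", "model": "gpt-4o-mini" },
    "heavy-coder": { "base_url": "https://api.example.com/v1", "api_key": "sk-example", "model": "gpt-4o" }
  },
  "agentRouting": {
    "code-review": "cheap-reviewer",
    "general_purpose": "heavy-coder",
    "default": "heavy-coder"
  }
}
EOF
```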

---

## Web Search and Fetch

By default, `WebSearch` now works on non-Anthropic models using DuckDuckGo. This gives GPT-4o, DeepSeek, Gemini, Ollama, and other OpenAI-compatible providers a free web search path out of the box.
By default, `WebSearch` works on non-Anthropic models using DuckDuckGo. This gives GPT-4o, DeepSeek, Gemini, Ollama, and other OpenAI-compatible providers a free web search path out of the box.

>**Note:** DuckDuckGo fallback works by scraping search results and may be rate-limited, blocked, or subject to DuckDuckGo's Terms of Service. If you want a more reliable supported option, configure Firecrawl.
> **Note:** DuckDuckGo fallback works by scraping search results and may be rate-limited, blocked, or subject to DuckDuckGo's Terms of Service. If you want a more reliable supported option, configure Firecrawl.

For Anthropic-native backends (Anthropic/Vertex/Foundry) and Codex responses, OpenClaude keeps the native provider web search behavior.
For Anthropic-native backends and Codex responses, OpenClaude keeps the native provider web search behavior.

`WebFetch` works but uses basic HTTP plus HTML-to-markdown conversion. That fails on JavaScript-rendered pages (React, Next.js, Vue SPAs) and sites that block plain HTTP requests.
`WebFetch` works, but its basic HTTP plus HTML-to-markdown path can still fail on JavaScript-rendered sites or sites that block plain HTTP requests.

Set a [Firecrawl](https://firecrawl.dev) API key if you want Firecrawl-powered search/fetch behavior:

Expand All @@ -219,14 +180,12 @@ export FIRECRAWL_API_KEY=your-key-here

With Firecrawl enabled:

- `WebSearch` can use Firecrawl's search API (while DuckDuckGo remains the default free path for non-Claude models)
- `WebSearch` can use Firecrawl's search API while DuckDuckGo remains the default free path for non-Claude models
- `WebFetch` uses Firecrawl's scrape endpoint instead of raw HTTP, handling JS-rendered pages correctly

Free tier at [firecrawl.dev](https://firecrawl.dev) includes 500 credits. The key is optional.

---

## Source Build
## Source Build and Local Development

```bash
bun install
@@ -239,20 +198,31 @@ Helpful commands:
- `bun run dev`
- `bun run smoke`
- `bun run doctor:runtime`
- `bun run verify:privacy`
- focused `bun test ...` runs for the areas you touch
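Put together, a local loop might look like the sketch below; the `bun test` path is only an example of a focused run, not a required target:

```bash
# One-time install, then the checks listed above before opening a PR.
bun install
bun run smoke
bun run doctor:runtime
bun run verify:privacy
bun test src/providers   # focused run for the area you touched (example path)
```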

---
## Repository Structure

## VS Code Extension
- `src/` - core CLI/runtime
- `scripts/` - build, verification, and maintenance scripts
- `docs/` - setup, contributor, and project documentation
- `python/` - standalone Python helpers and their tests
- `vscode-extension/openclaude-vscode/` - VS Code extension
- `.github/` - repo automation, templates, and CI configuration
- `bin/` - CLI launcher entrypoints

The repo includes a VS Code extension in [`vscode-extension/openclaude-vscode`](vscode-extension/openclaude-vscode) for OpenClaude launch integration and theme support.
## VS Code Extension

---
The repo includes a VS Code extension in [`vscode-extension/openclaude-vscode`](vscode-extension/openclaude-vscode) for OpenClaude launch integration, provider-aware control-center UI, and theme support.

## Security

If you believe you found a security issue, see [SECURITY.md](SECURITY.md).

---
## Community

- Use [GitHub Discussions](https://github.com/Gitlawb/openclaude/discussions) for Q&A, ideas, and community conversation
- Use [GitHub Issues](https://github.com/Gitlawb/openclaude/issues) for confirmed bugs and actionable feature work

## Contributing

@@ -264,16 +234,14 @@ For larger changes, open an issue first so the scope is clear before implementation.
- `bun run smoke`
- focused `bun test ...` runs for touched areas

---

## Disclaimer

OpenClaude is an independent community project and is not affiliated with, endorsed by, or sponsored by Anthropic.

"Claude" and "Claude Code" are trademarks of Anthropic.

---
The project has a complex history around the original Claude Code codebase. This README is intended to document the current OpenClaude project and its workflows; provenance and legal questions are separate from ordinary feature and contributor documentation.

## License

MIT
See [LICENSE](LICENSE).
1 change: 1 addition & 0 deletions python/__init__.py
@@ -0,0 +1 @@
# Python helper package for standalone provider-side utilities.
File renamed without changes.
File renamed without changes.
File renamed without changes.
1 change: 1 addition & 0 deletions python/tests/__init__.py
@@ -0,0 +1 @@
# Pytest package marker for the Python helper test suite.
5 changes: 5 additions & 0 deletions python/tests/conftest.py
@@ -0,0 +1,5 @@
from pathlib import Path
import sys

# Make the sibling `python/` helper modules importable from this test package.
sys.path.insert(0, str(Path(__file__).resolve().parents[1]))
test_atomic_chat_provider.py → python/tests/test_atomic_chat_provider.py
@@ -1,6 +1,6 @@
"""
test_atomic_chat_provider.py
Run: pytest test_atomic_chat_provider.py -v
Run: pytest python/tests/test_atomic_chat_provider.py -v
"""

import pytest
16 changes: 14 additions & 2 deletions test_ollama_provider.py → python/tests/test_ollama_provider.py
@@ -1,6 +1,6 @@
"""
test_ollama_provider.py
Run: pytest test_ollama_provider.py -v
Run: pytest python/tests/test_ollama_provider.py -v
"""

import pytest
@@ -13,25 +13,31 @@
check_ollama_running,
)


def test_normalize_strips_prefix():
assert normalize_ollama_model("ollama/llama3:8b") == "llama3:8b"


def test_normalize_no_prefix():
assert normalize_ollama_model("codellama:34b") == "codellama:34b"


def test_normalize_empty():
assert normalize_ollama_model("") == ""


def test_converts_string_content():
messages = [{"role": "user", "content": "Hello!"}]
result = anthropic_to_ollama_messages(messages)
assert result == [{"role": "user", "content": "Hello!"}]


def test_converts_text_block_list():
messages = [{"role": "user", "content": [{"type": "text", "text": "What is Python?"}]}]
result = anthropic_to_ollama_messages(messages)
assert result[0]["content"] == "What is Python?"


def test_converts_image_block_to_placeholder():
messages = [{"role": "user", "content": [{"type": "image", "source": {}}, {"type": "text", "text": "Describe this"}]}]
result = anthropic_to_ollama_messages(messages)
@@ -68,6 +74,7 @@ def test_converts_multi_turn():
assert len(result) == 3
assert result[1]["role"] == "assistant"


@pytest.mark.asyncio
async def test_ollama_running_true():
mock_response = MagicMock()
@@ -77,13 +84,15 @@ async def test_ollama_running_true():
result = await check_ollama_running()
assert result is True


@pytest.mark.asyncio
async def test_ollama_running_false_on_exception():
with patch("ollama_provider.httpx.AsyncClient") as MockClient:
MockClient.return_value.__aenter__.return_value.get = AsyncMock(side_effect=Exception("refused"))
result = await check_ollama_running()
assert result is False


@pytest.mark.asyncio
async def test_list_models_returns_names():
mock_response = MagicMock()
@@ -95,6 +104,7 @@ async def test_list_models_returns_names():
models = await list_ollama_models()
assert "llama3:8b" in models


@pytest.mark.asyncio
async def test_ollama_chat_returns_anthropic_format():
mock_response = MagicMock()
@@ -115,9 +125,11 @@ async def test_ollama_chat_returns_anthropic_format():
assert result["role"] == "assistant"
assert "42" in result["content"][0]["text"]


@pytest.mark.asyncio
async def test_ollama_chat_prepends_system():
captured = {}

async def mock_post(url, json=None, **kwargs):
captured.update(json or {})
m = MagicMock()
@@ -134,7 +146,7 @@ async def mock_post(url, json=None, **kwargs):
await ollama_chat(
model="llama3:8b",
messages=[{"role": "user", "content": "Hi"}],
system="Be helpful."
system="Be helpful.",
)
assert captured["messages"][0]["role"] == "system"
assert "helpful" in captured["messages"][0]["content"]
5 changes: 3 additions & 2 deletions test_smart_router.py → python/tests/test_smart_router.py
@@ -2,7 +2,7 @@
test_smart_router.py
--------------------
Tests for the SmartRouter.
Run: pytest test_smart_router.py -v
Run: pytest python/tests/test_smart_router.py -v
"""

import pytest
@@ -18,6 +18,7 @@
def fake_api_key(monkeypatch):
monkeypatch.setenv("FAKE_KEY", "test-key")


def make_provider(name, healthy=True, configured=True,
latency=100.0, cost=0.002, errors=0, requests=0):
p = Provider(
Expand All @@ -33,7 +34,7 @@ def make_provider(name, healthy=True, configured=True,
p.error_count = errors
p.request_count = requests
if not configured:
p.api_key_env = "" # makes is_configured False for non-ollama
p.api_key_env = "" # makes is_configured False for non-local providers
return p

