2 changes: 1 addition & 1 deletion hindsight-api-slim/hindsight_api/config.py

@@ -354,7 +354,7 @@ def normalize_config_dict(config: dict[str, Any]) -> dict[str, Any]:
 # Provider-specific default models
 PROVIDER_DEFAULT_MODELS = {
     "openai": "gpt-4o-mini",
-    "anthropic": "claude-haiku-4-5-20251001",
+    "anthropic": "claude-haiku-4-5",
     "gemini": "gemini-2.5-flash",
     "groq": "openai/gpt-oss-120b",
     "minimax": "MiniMax-M2.7",
6 changes: 4 additions & 2 deletions hindsight-docs/docs/sdks/integrations/claude-code.md

@@ -74,13 +74,15 @@ The plugin automatically starts and stops `hindsight-embed` via `uvx`. Requires
 
 Set an LLM provider:
 ```bash
-export OPENAI_API_KEY="sk-your-key"        # Auto-detected, uses gpt-4o-mini
+export OPENAI_API_KEY="sk-your-key"
 # or
-export ANTHROPIC_API_KEY="your-key"        # Auto-detected, uses claude-3-5-haiku
+export ANTHROPIC_API_KEY="your-key"
 # or
 export HINDSIGHT_LLM_PROVIDER=claude-code  # No API key needed
 ```
 
+The model is selected automatically by the Hindsight API. To override, set `HINDSIGHT_API_LLM_MODEL`.
+
 ### 3. Existing Local Server
 
 If you already have `hindsight-embed` running, leave `hindsightApiUrl` empty and set `apiPort` to match your server's port. The plugin will detect it automatically.
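The two resulting modes of configuring the model can be sketched as follows (the pinned model name below is only an illustration; any model your provider supports works):

```shell
# Mode 1: set only the provider key; the Hindsight API picks the model.
export ANTHROPIC_API_KEY="your-key"

# Mode 2: additionally pin a model to override the server-side selection.
export HINDSIGHT_API_LLM_MODEL="claude-haiku-4-5"
```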
31 changes: 16 additions & 15 deletions hindsight-docs/docs/sdks/integrations/openclaw.md

@@ -17,22 +17,22 @@ This plugin integrates [hindsight-embed](https://vectorize.io/hindsight/cli), a
 Choose one provider and set its API key:
 
 ```bash
-# Option A: OpenAI (uses gpt-4o-mini for memory extraction)
+# Option A: OpenAI
 export OPENAI_API_KEY="sk-your-key"
 
-# Option B: Anthropic (uses claude-3-5-haiku for memory extraction)
+# Option B: Anthropic
 export ANTHROPIC_API_KEY="your-key"
 
-# Option C: Gemini (uses gemini-2.5-flash for memory extraction)
+# Option C: Gemini
 export GEMINI_API_KEY="your-key"
 
-# Option D: Groq (uses openai/gpt-oss-20b for memory extraction)
+# Option D: Groq
 export GROQ_API_KEY="your-key"
 
-# Option E: Claude Code (uses claude-sonnet-4-20250514, no API key needed)
+# Option E: Claude Code (no API key needed)
 export HINDSIGHT_API_LLM_PROVIDER=claude-code
 
-# Option F: OpenAI Codex (uses gpt-4o-mini, no API key needed)
+# Option F: OpenAI Codex (no API key needed)
 export HINDSIGHT_API_LLM_PROVIDER=openai-codex
 ```
 
@@ -161,20 +161,21 @@ By default, the plugin retains `user` and `assistant` messages after each turn.
 
 The plugin auto-detects your LLM provider from these environment variables:
 
-| Provider | Env Var | Default Model | Notes |
-|----------|---------|---------------|-------|
-| OpenAI | `OPENAI_API_KEY` | `gpt-4o-mini` | |
-| Anthropic | `ANTHROPIC_API_KEY` | `claude-3-5-haiku-20241022` | |
-| Gemini | `GEMINI_API_KEY` | `gemini-2.5-flash` | |
-| Groq | `GROQ_API_KEY` | `openai/gpt-oss-20b` | |
-| Claude Code | `HINDSIGHT_API_LLM_PROVIDER=claude-code` | `claude-sonnet-4-20250514` | No API key needed |
-| OpenAI Codex | `HINDSIGHT_API_LLM_PROVIDER=openai-codex` | `gpt-4o-mini` | No API key needed |
+| Provider | Env Var | Notes |
+|----------|---------|-------|
+| OpenAI | `OPENAI_API_KEY` | |
+| Anthropic | `ANTHROPIC_API_KEY` | |
+| Gemini | `GEMINI_API_KEY` | |
+| Groq | `GROQ_API_KEY` | |
+| Claude Code | `HINDSIGHT_API_LLM_PROVIDER=claude-code` | No API key needed |
+| OpenAI Codex | `HINDSIGHT_API_LLM_PROVIDER=openai-codex` | No API key needed |
+
+The model is selected automatically by the Hindsight API. To override, set `HINDSIGHT_API_LLM_MODEL`.
 
 **Override with explicit config:**
 
 ```bash
 export HINDSIGHT_API_LLM_PROVIDER=openai
-export HINDSIGHT_API_LLM_MODEL=gpt-4o-mini
 export HINDSIGHT_API_LLM_API_KEY=sk-your-key
 
 # Optional: custom base URL (OpenRouter, Azure, vLLM, etc.)
27 changes: 14 additions & 13 deletions hindsight-integrations/claude-code/scripts/lib/llm.py

@@ -16,13 +16,13 @@
 
 # Provider detection table — same order as Openclaw
 PROVIDER_DETECTION = [
-    {"name": "openai", "key_env": "OPENAI_API_KEY", "default_model": "gpt-4o-mini"},
-    {"name": "anthropic", "key_env": "ANTHROPIC_API_KEY", "default_model": "claude-3-5-haiku-20241022"},
-    {"name": "gemini", "key_env": "GEMINI_API_KEY", "default_model": "gemini-2.5-flash"},
-    {"name": "groq", "key_env": "GROQ_API_KEY", "default_model": "openai/gpt-oss-20b"},
-    {"name": "ollama", "key_env": "", "default_model": "llama3.2"},
-    {"name": "openai-codex", "key_env": "", "default_model": "gpt-5.2-codex"},
-    {"name": "claude-code", "key_env": "", "default_model": "claude-sonnet-4-5-20250929"},
+    {"name": "openai", "key_env": "OPENAI_API_KEY"},
+    {"name": "anthropic", "key_env": "ANTHROPIC_API_KEY"},
+    {"name": "gemini", "key_env": "GEMINI_API_KEY"},
+    {"name": "groq", "key_env": "GROQ_API_KEY"},
+    {"name": "ollama", "key_env": ""},
+    {"name": "openai-codex", "key_env": ""},
+    {"name": "claude-code", "key_env": ""},
 ]
 
 # Providers that don't require an API key
@@ -59,7 +59,7 @@ def detect_llm_config(config: dict) -> dict:
         return {
             "provider": override_provider,
             "api_key": override_key or "",
-            "model": override_model or (pinfo["default_model"] if pinfo else None),
+            "model": override_model,
             "base_url": override_base_url,
             "source": "HINDSIGHT_API_LLM_PROVIDER override",
         }
@@ -83,7 +83,7 @@ def detect_llm_config(config: dict) -> dict:
         return {
             "provider": cfg_provider,
             "api_key": api_key,
-            "model": config.get("llmModel") or override_model or (pinfo["default_model"] if pinfo else None),
+            "model": config.get("llmModel") or override_model,
             "base_url": override_base_url,
             "source": "plugin config",
         }
@@ -99,7 +99,7 @@ def detect_llm_config(config: dict) -> dict:
         return {
             "provider": pinfo["name"],
             "api_key": api_key,
-            "model": override_model or pinfo["default_model"],
+            "model": override_model,
             "base_url": override_base_url,
             "source": f"auto-detected from {pinfo['key_env']}",
         }
@@ -117,13 +117,14 @@ def detect_llm_config(config: dict) -> dict:
     raise RuntimeError(
         "No LLM configuration found for Hindsight.\n\n"
         "Option 1: Set a standard provider API key (auto-detect):\n"
-        "  export OPENAI_API_KEY=sk-your-key    # Uses gpt-4o-mini\n"
-        "  export ANTHROPIC_API_KEY=your-key    # Uses claude-3-5-haiku\n\n"
+        "  export OPENAI_API_KEY=sk-your-key\n"
+        "  export ANTHROPIC_API_KEY=your-key\n\n"
         "Option 2: Override with Hindsight-specific env vars:\n"
         "  export HINDSIGHT_API_LLM_PROVIDER=openai\n"
         "  export HINDSIGHT_API_LLM_API_KEY=sk-your-key\n\n"
         "Option 3: Use an external Hindsight API (server-side LLM):\n"
-        "  Set hindsightApiUrl in settings.json or HINDSIGHT_API_URL env var"
+        "  Set hindsightApiUrl in settings.json or HINDSIGHT_API_URL env var\n\n"
+        "The model will be selected automatically by Hindsight. To override: export HINDSIGHT_API_LLM_MODEL=your-model"
     )
 
 
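The effect of dropping `default_model` from the detection table can be sketched as a minimal loop. This is a simplified, hypothetical reduction of `detect_llm_config` (the real function also handles plugin config, override env vars, and base URLs): the model now stays `None` unless `HINDSIGHT_API_LLM_MODEL` is set, leaving selection to the Hindsight API.

```python
import os

# Same order as the PROVIDER_DETECTION table above: first key found wins.
PROVIDER_DETECTION = [
    {"name": "openai", "key_env": "OPENAI_API_KEY"},
    {"name": "anthropic", "key_env": "ANTHROPIC_API_KEY"},
    {"name": "gemini", "key_env": "GEMINI_API_KEY"},
    {"name": "groq", "key_env": "GROQ_API_KEY"},
]


def autodetect(environ=os.environ):
    """Return (provider, api_key, model) or None if no key is set.

    The model is None unless HINDSIGHT_API_LLM_MODEL overrides it, so the
    Hindsight API chooses its own per-provider default.
    """
    override_model = environ.get("HINDSIGHT_API_LLM_MODEL")
    for pinfo in PROVIDER_DETECTION:
        api_key = environ.get(pinfo["key_env"])
        if api_key:
            return (pinfo["name"], api_key, override_model)
    return None
```

Note the design choice this PR makes: instead of each client pinning a (quickly outdated) model string, the server-side `PROVIDER_DEFAULT_MODELS` table in `config.py` becomes the single source of truth.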
36 changes: 17 additions & 19 deletions hindsight-integrations/openclaw/src/index.ts

@@ -496,13 +496,13 @@ export function formatMemories(results: MemoryResult[]): string {
 
 // Provider detection from standard env vars
 const PROVIDER_DETECTION = [
-  { name: 'openai', keyEnv: 'OPENAI_API_KEY', defaultModel: 'gpt-4o-mini' },
-  { name: 'anthropic', keyEnv: 'ANTHROPIC_API_KEY', defaultModel: 'claude-3-5-haiku-20241022' },
-  { name: 'gemini', keyEnv: 'GEMINI_API_KEY', defaultModel: 'gemini-2.5-flash' },
-  { name: 'groq', keyEnv: 'GROQ_API_KEY', defaultModel: 'openai/gpt-oss-20b' },
-  { name: 'ollama', keyEnv: '', defaultModel: 'llama3.2' },
-  { name: 'openai-codex', keyEnv: '', defaultModel: 'gpt-5.2-codex' },
-  { name: 'claude-code', keyEnv: '', defaultModel: 'claude-sonnet-4-5-20250929' },
+  { name: 'openai', keyEnv: 'OPENAI_API_KEY' },
+  { name: 'anthropic', keyEnv: 'ANTHROPIC_API_KEY' },
+  { name: 'gemini', keyEnv: 'GEMINI_API_KEY' },
+  { name: 'groq', keyEnv: 'GROQ_API_KEY' },
+  { name: 'ollama', keyEnv: '' },
+  { name: 'openai-codex', keyEnv: '' },
+  { name: 'claude-code', keyEnv: '' },
 ];
 
 function detectLLMConfig(pluginConfig?: PluginConfig): {
@@ -529,11 +529,10 @@ function detectLLMConfig(pluginConfig?: PluginConfig): {
      );
    }
 
-    const providerInfo = PROVIDER_DETECTION.find(p => p.name === overrideProvider);
     return {
       provider: overrideProvider,
       apiKey: overrideKey || '',
-      model: overrideModel || (providerInfo?.defaultModel),
+      model: overrideModel,
       baseUrl: overrideBaseUrl,
       source: 'HINDSIGHT_API_LLM_PROVIDER override',
     };
@@ -565,7 +564,7 @@ function detectLLMConfig(pluginConfig?: PluginConfig): {
     return {
       provider: pluginConfig.llmProvider,
       apiKey,
-      model: pluginConfig.llmModel || overrideModel || providerInfo?.defaultModel,
+      model: pluginConfig.llmModel || overrideModel,
       baseUrl: overrideBaseUrl,
       source: 'plugin config',
     };
@@ -585,8 +584,8 @@ function detectLLMConfig(pluginConfig?: PluginConfig): {
     return {
       provider: providerInfo.name,
       apiKey,
-      model: overrideModel || providerInfo.defaultModel,
-      baseUrl: overrideBaseUrl, // Only use explicit HINDSIGHT_API_LLM_BASE_URL
+      model: overrideModel,
+      baseUrl: overrideBaseUrl,
       source: `auto-detected from ${providerInfo.keyEnv}`,
     };
   }
@@ -609,21 +608,20 @@ function detectLLMConfig(pluginConfig?: PluginConfig): {
   throw new Error(
     `No LLM configuration found for Hindsight memory plugin.\n\n` +
     `Option 1: Set a standard provider API key (auto-detect):\n` +
-    `  export OPENAI_API_KEY=sk-your-key   # Uses gpt-4o-mini\n` +
-    `  export ANTHROPIC_API_KEY=your-key   # Uses claude-3-5-haiku\n` +
-    `  export GEMINI_API_KEY=your-key      # Uses gemini-2.5-flash\n` +
-    `  export GROQ_API_KEY=your-key        # Uses openai/gpt-oss-20b\n\n` +
+    `  export OPENAI_API_KEY=sk-your-key\n` +
+    `  export ANTHROPIC_API_KEY=your-key\n` +
+    `  export GEMINI_API_KEY=your-key\n` +
+    `  export GROQ_API_KEY=your-key\n\n` +
     `Option 2: Use Codex or Claude Code (no API key needed):\n` +
     `  export HINDSIGHT_API_LLM_PROVIDER=openai-codex  # Requires 'codex auth login'\n` +
     `  export HINDSIGHT_API_LLM_PROVIDER=claude-code   # Requires Claude Code CLI\n\n` +
     `Option 3: Set llmProvider in openclaw.json plugin config:\n` +
-    `  "llmProvider": "openai", "llmModel": "gpt-4o-mini"\n\n` +
+    `  "llmProvider": "openai"\n\n` +
     `Option 4: Override with Hindsight-specific env vars:\n` +
     `  export HINDSIGHT_API_LLM_PROVIDER=openai\n` +
-    `  export HINDSIGHT_API_LLM_MODEL=gpt-4o-mini\n` +
     `  export HINDSIGHT_API_LLM_API_KEY=sk-your-key\n` +
     `  export HINDSIGHT_API_LLM_BASE_URL=https://openrouter.ai/api/v1  # Optional\n\n` +
-    `Tip: Use a cheap/fast model for memory extraction (e.g., gpt-4o-mini, claude-3-5-haiku, or free models on OpenRouter)`
+    `The model will be selected automatically by Hindsight. To override: export HINDSIGHT_API_LLM_MODEL=your-model`
   );
 }
 