---
title: "What's new in Hindsight 0.4.22"
description: New features and improvements in Hindsight 0.4.22
authors: [nicoloboschi]
date: 2026-03-31
hide_table_of_contents: true
---

Hindsight 0.4.22 is primarily a bugfix release, with fixes across providers, integrations, and the recall pipeline. It also adds mental model trigger tag filtering and exposes document metadata through the API and Control Plane.

<!-- truncate -->

- [**Mental Model Tag Filtering**](#mental-model-tag-filtering): Control how memories are filtered during mental model refresh with `tags_match` and `tag_groups`.
- [**Document Metadata API**](#document-metadata-api): Retained document metadata is now exposed in list/get endpoints and the Control Plane UI.

## Mental Model Tag Filtering

When refreshing a mental model with tags, Hindsight previously hardcoded `all_strict` tag matching, which silently excluded all untagged memories from the refresh. Mental model triggers now support `tags_match` and `tag_groups` fields, giving you explicit control over how memories are filtered during refresh.

For example, you can use `any` matching to include untagged memories alongside tagged ones, or define tag groups for complex filtering logic. The Control Plane UI adds corresponding dropdowns and inputs for these new fields.

Existing mental models without these fields keep the previous `all_strict` behavior — no migration required.
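
The practical difference between the modes can be sketched with a small example. This is a simplified illustration of one plausible reading of the matching semantics, not Hindsight's actual filtering code; only the mode names (`all_strict`, `any`) come from the release.

```python
# Hypothetical sketch of tag-matching modes for mental model refresh.
# Not Hindsight's implementation — an illustration of the semantics described above.

def matches(memory_tags: set[str], trigger_tags: set[str], mode: str) -> bool:
    if mode == "all_strict":
        # Memory must carry every trigger tag; untagged memories never match.
        return bool(memory_tags) and trigger_tags <= memory_tags
    if mode == "any":
        # Memory matches if it shares any tag, or carries no tags at all.
        return not memory_tags or bool(memory_tags & trigger_tags)
    raise ValueError(f"unknown tags_match mode: {mode}")

memories = [set(), {"billing"}, {"billing", "prod"}]
trigger = {"billing", "prod"}

print([matches(m, trigger, "all_strict") for m in memories])  # [False, False, True]
print([matches(m, trigger, "any") for m in memories])         # [True, True, True]
```

Under `all_strict`, the untagged memory and the partially tagged memory are both dropped from the refresh; under `any`, all three are included.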

## Document Metadata API

The `retain` endpoint has always accepted a `metadata` dict, but that data was never surfaced back through the API. This release properly exposes `document_metadata` in both the list and get document endpoints, and displays it in the Control Plane documents table and detail panel.

This is useful for any workflow that tags documents at ingest time — for example, by source system, user, or session ID.
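
As a sketch of what this enables: once ingest-time metadata comes back from the list endpoint, clients can slice documents by it. The response shape below is a hypothetical example, not taken from the API reference; only the `document_metadata` field name comes from the release.

```python
# Hypothetical shape of a list-documents response after this release:
# each entry now carries the `document_metadata` supplied at retain time.
documents = [
    {"id": "doc-1", "document_metadata": {"source": "zendesk", "user": "u-42"}},
    {"id": "doc-2", "document_metadata": {"source": "slack", "user": "u-42"}},
    {"id": "doc-3", "document_metadata": {"source": "zendesk", "user": "u-7"}},
]

# Client-side filtering by ingest-time tags becomes straightforward.
zendesk_docs = [d["id"] for d in documents
                if d["document_metadata"].get("source") == "zendesk"]
print(zendesk_docs)  # ['doc-1', 'doc-3']
```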

## Other Updates

**Improvements**
- Custom LLM parameters via `HINDSIGHT_API_LLM_EXTRA_BODY` — pass arbitrary JSON to `extra_body` on every OpenAI-compatible API call, useful for vLLM and custom model servers. *(Contributed by @emirhan-gazi.)*
- Codex integration now retains structured tool calls (`function_call`, `local_shell_call`, `web_search_call`, etc.) as JSON content blocks, enabled by default.
- API responses now include an `X-Ignored-Params` header to warn when unknown request parameters were silently ignored.
- LiteLLM embeddings support optional output dimensions via `HINDSIGHT_API_EMBEDDINGS_LITELLM_SDK_OUTPUT_DIMENSIONS`. *(Contributed by @bullbo.)*
- ZeroEntropy reranker now supports a configurable base URL for self-hosted deployments. *(Contributed by @iskhakovt.)*
- Experience fact classification now correctly categorizes first-person agent actions (code changes, debugging, discoveries) as `experience` facts instead of `world` facts, improving recall precision for coding agents and agentic workflows.
- LLM provider initialization refactored to use centralized `from_env()` pattern with proper config constants.
- 13 previously undocumented config fields are now documented (Gemini safety settings, retain batch tokens, webhook settings, audit log settings, and more).
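
The `HINDSIGHT_API_LLM_EXTRA_BODY` mechanism from the first improvement above can be sketched as follows. This is a simplified illustration of parsing a JSON env var and merging it into an OpenAI-compatible request body — an assumption about the mechanism, not Hindsight's code; the JSON keys shown are examples of vLLM-style sampling options.

```python
import json
import os

# Example value an operator might set; the keys inside the JSON are whatever
# the target server accepts (e.g. vLLM-specific sampling parameters).
os.environ["HINDSIGHT_API_LLM_EXTRA_BODY"] = '{"top_k": 40, "repetition_penalty": 1.1}'

def build_request_body(base: dict) -> dict:
    # Hypothetical sketch: parse the env var and merge its keys into
    # every outgoing request body (extra keys win on conflict).
    extra = json.loads(os.environ.get("HINDSIGHT_API_LLM_EXTRA_BODY", "{}"))
    return {**base, **extra}

body = build_request_body({"model": "my-model", "messages": []})
print(body["top_k"])  # 40
```

In practice the OpenAI-compatible SDKs expose this as an `extra_body` argument per call; the merge above only illustrates the effect on the wire-level payload.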

**Bug Fixes**
- Cohere reranker on Azure AI Foundry endpoints no longer hits 404 errors from double-path URLs — uses httpx directly when a custom `base_url` is configured. *(Contributed by @kagura-agent.)*
- Claude Code LLM provider no longer suffers from MCP tool deferral when too many built-in tools are loaded — built-in tools are now disabled so MCP tools load eagerly. *(Contributed by @mkremnev.)*
- Recall endpoint no longer drops metadata from the response.
- Codex integration merges new settings on upgrade instead of overwriting existing configuration.
- LlamaIndex integration fixes for `document_id` handling, memory API, and ReAct trace formatting.
- OpenClaw defers heavy initialization to `service.start()` to avoid CLI slowdown.
- Gemini `thought_signature` is now read from the correct object for 3.1+ tool calls.

## Feedback and Community

Hindsight 0.4.22 is a drop-in replacement for 0.4.x with no breaking changes.

Share your feedback:

- [GitHub Discussions](https://github.com/vectorize-io/hindsight/discussions)
- [GitHub Issues](https://github.com/vectorize-io/hindsight/issues)

For detailed changes, see the [full changelog](/changelog).