Basic Memory is a local-first knowledge management system built on the Model Context Protocol (MCP). It enables bidirectional communication between LLMs (like Claude) and markdown files, creating a personal knowledge graph that can be traversed using links between documents.
See the README.md file for a project overview.
- Install: `just install` or `pip install -e ".[dev]"`
- Run tests: `uv run pytest -p pytest_mock -v` or `just test`
- Single test: `pytest tests/path/to/test_file.py::test_function_name`
- Lint: `just lint` or `ruff check . --fix`
- Type check: `just type-check` or `uv run pyright`
- Format: `just format` or `uv run ruff format .`
- Run all code checks: `just check` (runs lint, format, type-check, test)
- Create db migration: `just migration "Your migration message"`
- Run development MCP Inspector: `just run-inspector`
- Line length: 100 characters max
- Python 3.12+ with full type annotations
- Format with ruff (consistent styling)
- Import order: standard lib, third-party, local imports
- Naming: snake_case for functions/variables, PascalCase for classes
- Prefer async patterns with SQLAlchemy 2.0
- Use Pydantic v2 for data validation and schemas
- CLI uses Typer for command structure
- API uses FastAPI for endpoints
- Follow the repository pattern for data access
- Tools communicate to api routers via the httpx ASGI client (in process)
- Avoid "private" functions (prefixed with `_`) in modules or classes
- `/alembic` - Alembic db migrations
- `/api` - FastAPI implementation of REST endpoints
- `/cli` - Typer command-line interface
- `/markdown` - Markdown parsing and processing
- `/mcp` - Model Context Protocol server implementation
- `/models` - SQLAlchemy ORM models
- `/repository` - Data access layer
- `/schemas` - Pydantic models for validation
- `/services` - Business logic layer
- `/sync` - File synchronization services
- MCP tools are defined in src/basic_memory/mcp/tools/
- MCP prompts are defined in src/basic_memory/mcp/prompts/
- MCP tools should be atomic, composable operations
- Use `textwrap.dedent()` for multi-line string formatting in prompts and tools
- MCP Prompts are used to invoke tools and format content with instructions for an LLM
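The `textwrap.dedent()` convention can be sketched with a hypothetical prompt helper (the function name and prompt wording are illustrative, not from the codebase):

```python
import textwrap

def format_context_prompt(topic: str, notes: list[str]) -> str:
    # dedent() strips the common leading whitespace, so the multi-line
    # prompt can be indented naturally inside the function body.
    return textwrap.dedent(f"""\
        Continuing conversation on: {topic}

        Relevant notes:
        {", ".join(notes)}

        Please review the notes above before responding.
        """)

print(format_context_prompt("search", ["Search Design", "FTS5 Index"]))
```

Keeping the literal indented and dedenting at the end preserves readable source while emitting a clean, left-aligned prompt string.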
- Schema changes require Alembic migrations
- SQLite is used for indexing and full text search, files are source of truth
- Testing uses pytest with asyncio support (strict mode)
- Test database uses in-memory SQLite
- Avoid using mocks in tests; each test runs in a standalone environment with an in-memory SQLite database and a temporary file directory, so mocks are rarely needed (see the fixtures in conftest.py)
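The standalone-environment pattern can be sketched without mocks using only the standard library (a simplified stand-in for the real conftest.py fixtures; table names and fields are illustrative):

```python
import sqlite3
import tempfile
from pathlib import Path

def index_roundtrip() -> str:
    # Each test gets a fresh environment: a temp directory holds the
    # markdown files (the source of truth) and an in-memory SQLite
    # database acts as the disposable index.
    with tempfile.TemporaryDirectory() as tmp:
        note = Path(tmp) / "note.md"
        note.write_text("# Example\n")

        db = sqlite3.connect(":memory:")  # vanishes when the test ends
        db.execute("CREATE TABLE entities (title TEXT, path TEXT)")
        db.execute("INSERT INTO entities VALUES (?, ?)", ("Example", str(note)))
        row = db.execute("SELECT title FROM entities").fetchone()
        return row[0]

print(index_roundtrip())  # → Example
```

Because both the database and the files are created per test and discarded afterward, tests exercise real I/O paths instead of mock behavior.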
- Entity: Any concept, document, or idea represented as a markdown file
- Observation: A categorized fact about an entity (`- [category] content`)
- Relation: A directional link between entities (`- relation_type [[Target]]`)
- Frontmatter: YAML metadata at the top of markdown files
- Knowledge representation follows a precise markdown format:
- Observations with [category] prefixes
- Relations with WikiLinks [[Entity]]
- Frontmatter with metadata
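A note in this format might look like the following. The content and the exact frontmatter keys shown here are illustrative assumptions, not taken from the repository:

```markdown
---
title: Coffee Brewing
tags: [coffee, methods]
---

# Coffee Brewing

## Observations
- [method] Pour-over brewing highlights acidity

## Relations
- requires [[Coffee Grinder]]
- part_of [[Morning Routine]]
```

Observations carry a `[category]` prefix, relations pair a relation type with a WikiLink target, and the YAML frontmatter supplies metadata for indexing.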
- Sync knowledge: `basic-memory sync` or `basic-memory sync --watch`
- Import from Claude: `basic-memory import claude conversations`
- Import from ChatGPT: `basic-memory import chatgpt`
- Import from Memory JSON: `basic-memory import memory-json`
- Check sync status: `basic-memory status`
- Tool access: `basic-memory tools` (provides CLI access to MCP tools)
- Guide: `basic-memory tools basic-memory-guide`
- Continue: `basic-memory tools continue-conversation --topic="search"`
Basic Memory exposes these MCP tools to LLMs:

Content Management:
- `write_note(title, content, folder, tags)` - Create/update markdown notes with semantic observations and relations
- `read_note(identifier, page, page_size)` - Read notes by title, permalink, or memory:// URL with knowledge graph awareness
- `read_file(path)` - Read raw file content (text, images, binaries) without knowledge graph processing

Knowledge Graph Navigation:
- `build_context(url, depth, timeframe)` - Navigate the knowledge graph via memory:// URLs for conversation continuity
- `recent_activity(type, depth, timeframe)` - Get recently updated information within a specified timeframe (e.g., "1d", "1 week")

Search & Discovery:
- `search_notes(query, page, page_size)` - Full-text search across all content with filtering options

Visualization:
- `canvas(nodes, edges, title, folder)` - Generate Obsidian canvas files for knowledge graph visualization
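The canvas output format can be sketched as a minimal JSON Canvas document. Node and edge fields follow the public JSON Canvas spec; the file names and layout values are illustrative, and the tool's actual output may differ:

```python
import json

# Minimal JSON Canvas structure: file nodes positioned on a plane,
# connected by a directed edge.
canvas = {
    "nodes": [
        {"id": "n1", "type": "file", "file": "Coffee Brewing.md",
         "x": 0, "y": 0, "width": 400, "height": 300},
        {"id": "n2", "type": "file", "file": "Coffee Grinder.md",
         "x": 500, "y": 0, "width": 400, "height": 300},
    ],
    "edges": [
        {"id": "e1", "fromNode": "n1", "toNode": "n2"},
    ],
}
print(json.dumps(canvas, indent=2))
```

Writing this structure to a `.canvas` file lets Obsidian render the referenced notes and their relations visually.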
MCP Prompts for better AI interaction:
- `ai_assistant_guide()` - Guidance on effectively using Basic Memory tools for AI assistants
- `continue_conversation(topic, timeframe)` - Continue previous conversations with relevant historical context
- `search_notes(query, after_date)` - Search with detailed, formatted results for better context understanding
- `recent_activity(timeframe)` - View recently changed items with formatted output
- `json_canvas_spec()` - Full JSON Canvas specification for Obsidian visualization
Basic Memory emerged from and enables a new kind of development process that combines human and AI capabilities. Instead of using AI just for code generation, we've developed a true collaborative workflow:
- AI (LLM) writes initial implementation based on specifications and context
- Human reviews, runs tests, and commits code with any necessary adjustments
- Knowledge persists across conversations using Basic Memory's knowledge graph
- Development continues seamlessly across different AI sessions with consistent context
- Results improve through iterative collaboration and shared understanding
This approach has allowed us to tackle more complex challenges and build a more robust system than either humans or AI could achieve independently.
Basic Memory integrates Claude directly into the development workflow through GitHub:
Using the GitHub Model Context Protocol server, Claude can:
- Repository Management:
- View repository files and structure
- Read file contents
- Create new branches
- Create and update files
- Issue Management:
- Create new issues
- Comment on existing issues
- Close and update issues
- Search across issues
- Pull Request Workflow:
- Create pull requests
- Review code changes
- Add comments to PRs
This integration enables Claude to participate as a full team member in the development process, not just as a code generation tool. Claude's GitHub account (bm-claudeai) is a member of the Basic Machines organization with direct contributor access to the codebase.
With GitHub integration, the development workflow includes:
- Direct code review - Claude can analyze PRs and provide detailed feedback
- Contribution tracking - All of Claude's contributions are properly attributed in the Git history
- Branch management - Claude can create feature branches for implementations
- Documentation maintenance - Claude can keep documentation updated as the code evolves
With this integration, the AI assistant is a full-fledged team member rather than just a tool for generating code snippets.
Basic Memory Pro is a desktop GUI application that wraps the basic-memory CLI/MCP tools:
- Built with Tauri (Rust), React (TypeScript), and a Python FastAPI sidecar
- Provides visual knowledge graph exploration and project management
- Uses the same core codebase but adds a desktop-friendly interface
- Project configuration is shared between CLI and Pro versions
- Multiple project support with visual switching interface
- Local repo: /Users/phernandez/dev/basicmachines/basic-memory-pro
- GitHub: https://github.com/basicmachines-co/basic-memory-pro
Basic Memory uses uv-dynamic-versioning for automatic version management based on git tags:
- Development versions: Automatically generated from commits (e.g., `0.12.4.dev26+468a22f`)
- Beta releases: Created by tagging with beta suffixes (e.g., `v0.13.0b1`, `v0.13.0rc1`)
- Stable releases: Created by tagging with version numbers (e.g., `v0.13.0`)
- Triggered on every push to the `main` branch
- Publishes dev versions like `0.12.4.dev26+468a22f` to PyPI
- Allows continuous testing of latest changes
- Users install with: `pip install basic-memory --pre --force-reinstall`
- Create beta tag: `git tag v0.13.0b1 && git push origin v0.13.0b1`
- Automatically builds and publishes to PyPI as a pre-release
- Users install with: `pip install basic-memory --pre`
- Use for milestone testing before stable release
- Create version tag: `git tag v0.13.0 && git push origin v0.13.0`
- Automatically builds, creates a GitHub release, and publishes to PyPI
- Users install with: `pip install basic-memory`
- No manual version bumping required
- Versions automatically derived from git tags
- `pyproject.toml` uses `dynamic = ["version"]`
- `__init__.py` dynamically reads the version from package metadata
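Reading the version from package metadata can be sketched with the standard library. The helper name and fallback value are illustrative, not the actual `__init__.py` implementation:

```python
from importlib import metadata

def package_version(name: str) -> str:
    # The installed version comes from package metadata, which
    # uv-dynamic-versioning derives from git tags at build time.
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        # Fallback when running from an uninstalled checkout.
        return "0.0.0.dev0"

print(package_version("basic-memory"))
```

This keeps the version defined in exactly one place (the git tag) rather than hardcoded in source files.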