GibsonAI/memori

Open-Source Memory Engine for LLMs, AI Agents & Multi-Agent Systems

Make LLMs context-aware with human-like memory, dual-mode retrieval, and automatic context injection.

Learn more · Join Discord

Available on PyPI · License: MIT · Python 3.8+


🎯 Philosophy

  • A second memory for all your LLM work - never repeat context again
  • Dual-mode memory injection - Conscious short-term memory + Auto intelligent search
  • Flexible database connections - SQLite, PostgreSQL, MySQL support
  • Pydantic-based intelligence - Structured memory processing with validation
  • Simple, reliable architecture - Just works out of the box

⚡ Quick Start

Install Memori:

pip install memorisdk

Example with LiteLLM

  1. Install LiteLLM:
pip install litellm
  2. Set your OpenAI API key:
export OPENAI_API_KEY="sk-your-openai-key-here"
  3. Run this Python script:
from memori import Memori
from litellm import completion

# Initialize memory
memori = Memori(conscious_ingest=True)
memori.enable()

print("=== First Conversation - Establishing Context ===")
response1 = completion(
    model="gpt-4o-mini",
    messages=[{
        "role": "user", 
        "content": "I'm working on a Python FastAPI project"
    }]
)

print("Assistant:", response1.choices[0].message.content)
print("\n" + "="*50)
print("=== Second Conversation - Memory Provides Context ===")

response2 = completion(
    model="gpt-4o-mini", 
    messages=[{
        "role": "user",
        "content": "Help me add user authentication"
    }]
)
print("Assistant:", response2.choices[0].message.content)
print("\n💡 Notice: Memori automatically knows about your FastAPI Python project!")

🚀 Ready to explore more?


🧠 How It Works

1. Universal Recording

memori.enable()  # Records ALL LLM conversations automatically

2. Intelligent Processing

  • Entity Extraction: Extracts people, technologies, projects
  • Smart Categorization: Facts, preferences, skills, rules
  • Pydantic Validation: Structured, type-safe memory storage
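To make the extraction and categorization step concrete, here is a toy, stdlib-only sketch of the kind of structured output the memory agent produces. The real agent uses an LLM with Pydantic-validated outputs; the names `ProcessedMemory` and `categorize` are illustrative assumptions, not Memori's API.

```python
# Illustrative sketch only: keyword heuristics stand in for the
# LLM-driven extraction and categorization Memori actually performs.
from dataclasses import dataclass

@dataclass
class ProcessedMemory:
    category: str        # "fact", "preference", "skill", "rule", or "context"
    entities: list       # extracted people / technologies / projects
    content: str

def categorize(text: str) -> ProcessedMemory:
    lowered = text.lower()
    if "prefer" in lowered or "like" in lowered:
        category = "preference"
    elif "always" in lowered or "never" in lowered:
        category = "rule"
    elif "experienced" in lowered:
        category = "skill"
    else:
        category = "fact"
    # Naive entity pass: capitalized tokens stand in for real NER
    entities = [w.strip(".,") for w in text.split() if w[:1].isupper()]
    return ProcessedMemory(category, entities, text)

memory = categorize("I prefer FastAPI for Python services")
```

The typed, validated record is what makes downstream search and promotion reliable: every stored memory has the same shape regardless of which conversation produced it.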

3. Dual Memory Modes

🧠 Conscious Mode - Short-Term Working Memory

conscious_ingest=True  # One-shot short-term memory injection
  • At Startup: Conscious agent analyzes long-term memory patterns
  • Memory Promotion: Moves essential conversations to short-term storage
  • One-Shot Injection: Injects working memory once at conversation start
  • Like Human Short-Term Memory: Names, current projects, preferences readily available

🔍 Auto Mode - Dynamic Database Search

auto_ingest=True  # Continuous intelligent memory retrieval
  • Every LLM Call: Retrieval agent analyzes user query intelligently
  • Full Database Search: Searches through entire memory database
  • Context-Aware: Injects relevant memories based on current conversation
  • Performance Optimized: Caching, async processing, background threads

🧠 Memory Modes Explained

Conscious Mode - Short-Term Working Memory

# Mimics human conscious memory - essential info readily available
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    conscious_ingest=True,  # 🧠 Short-term working memory
    openai_api_key="sk-..."
)

How Conscious Mode Works:

  1. At Startup: Conscious agent analyzes long-term memory patterns
  2. Essential Selection: Promotes 5-10 most important conversations to short-term
  3. One-Shot Injection: Injects this working memory once at conversation start
  4. No Repeats: Won't inject again during the same session
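The promotion step above can be sketched as a ranking problem: score stored conversations and keep the top few as working memory. The scoring heuristic and field names below are illustrative assumptions, not Memori's actual implementation.

```python
# Sketch of conscious-mode promotion: frequently mentioned, recently
# seen conversations are treated as "essential" and promoted.
def promote_essential(conversations, limit=5):
    # Higher mention count and more recent activity rank higher
    ranked = sorted(
        conversations,
        key=lambda c: (c["mention_count"], c["last_seen"]),
        reverse=True,
    )
    return ranked[:limit]

history = [
    {"text": "My name is Ada", "mention_count": 9, "last_seen": 300},
    {"text": "Working on a FastAPI app", "mention_count": 7, "last_seen": 500},
    {"text": "One-off question about regex", "mention_count": 1, "last_seen": 100},
]
working_memory = promote_essential(history, limit=2)
```

In Memori the equivalent analysis is done by the conscious agent against long-term memory, and the promoted set is injected once at conversation start.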

Auto Mode - Dynamic Intelligent Search

# Searches entire database dynamically based on user queries
memori = Memori(
    database_connect="sqlite:///my_memory.db", 
    auto_ingest=True,  # 🔍 Smart database search
    openai_api_key="sk-..."
)

How Auto Mode Works:

  1. Every LLM Call: Retrieval agent analyzes user input
  2. Query Planning: Uses AI to understand what memories are needed
  3. Smart Search: Searches through entire database (short-term + long-term)
  4. Context Injection: Injects 3-5 most relevant memories per call
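Conceptually, each auto-mode call scores all stored memories against the current query and injects the best matches. The word-overlap scoring below is a toy stand-in for Memori's LLM-driven retrieval agent; the function name is a hypothetical.

```python
# Toy sketch of auto-mode retrieval: rank memories by word overlap
# with the query and return the top k non-zero matches.
def retrieve_relevant(query, memories, k=3):
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(m.lower().split())), m) for m in memories]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for score, m in scored[:k] if score > 0]

memories = [
    "User is building a FastAPI project",
    "User prefers pytest to test code",
    "User asked about travel plans in May",
]
context = retrieve_relevant("help me test my FastAPI project", memories, k=2)
```

The real retrieval agent plans the query with an LLM instead of bag-of-words overlap, but the contract is the same: query in, small ranked set of relevant memories out.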

Combined Mode - Best of Both Worlds

# Get both working memory AND dynamic search
memori = Memori(
    conscious_ingest=True,  # Working memory once
    auto_ingest=True,       # Dynamic search every call
    openai_api_key="sk-..."
)

Intelligence Layers:

  1. Memory Agent - Processes every conversation with Pydantic structured outputs
  2. Conscious Agent - Analyzes patterns, promotes long-term → short-term memories
  3. Retrieval Agent - Intelligently searches and selects relevant context

What gets prioritized in Conscious Mode:

  • 👤 Personal Identity: Your name, role, location, basic info
  • ❤️ Preferences & Habits: What you like, work patterns, routines
  • 🛠️ Skills & Tools: Technologies you use, expertise areas
  • 📊 Current Projects: Ongoing work, learning goals
  • 🤝 Relationships: Important people, colleagues, connections
  • 🔄 Repeated References: Information you mention frequently

🗄️ Memory Types

| Type | Purpose | Example | Auto-Promoted |
|------|---------|---------|---------------|
| Facts | Objective information | "I use PostgreSQL for databases" | ✅ High frequency |
| Preferences | User choices | "I prefer clean, readable code" | ✅ Personal identity |
| Skills | Abilities & knowledge | "Experienced with FastAPI" | ✅ Expertise areas |
| Rules | Constraints & guidelines | "Always write tests first" | ✅ Work patterns |
| Context | Session information | "Working on e-commerce project" | ✅ Current projects |

🔧 Configuration

Simple Setup

from memori import Memori

# Conscious mode - Short-term working memory
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    template="basic", 
    conscious_ingest=True,  # One-shot context injection
    openai_api_key="sk-..."
)

# Auto mode - Dynamic database search
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    auto_ingest=True,  # Continuous memory retrieval
    openai_api_key="sk-..."
)

# Combined mode - Best of both worlds
memori = Memori(
    conscious_ingest=True,  # Working memory + 
    auto_ingest=True,       # Dynamic search
    openai_api_key="sk-..."
)

Advanced Configuration

from memori import Memori, ConfigManager

# Load from memori.json or environment
config = ConfigManager()
config.auto_load()

memori = Memori()
memori.enable()

Create memori.json:

{
  "database": {
    "connection_string": "postgresql://user:pass@localhost/memori"
  },
  "agents": {
    "openai_api_key": "sk-...",
    "conscious_ingest": true,
    "auto_ingest": false
  },
  "memory": {
    "namespace": "my_project",
    "retention_policy": "30_days"
  }
}

🔌 Universal Integration

Works with ANY LLM library:

memori.enable()  # Enable universal recording

# LiteLLM (recommended)
from litellm import completion
completion(model="gpt-4", messages=[...])

# OpenAI
import openai
client = openai.OpenAI()
client.chat.completions.create(...)

# Anthropic  
import anthropic
client = anthropic.Anthropic()
client.messages.create(...)

# All automatically recorded and contextualized!

🛠️ Memory Management

Automatic Background Analysis

# Automatic analysis every 6 hours (when conscious_ingest=True)
memori.enable()  # Starts background conscious agent

# Manual analysis trigger
memori.trigger_conscious_analysis()

# Get essential conversations
essential = memori.get_essential_conversations(limit=5)

Memory Retrieval Tools

from memori.tools import create_memory_tool

# Create memory search tool for your LLM
memory_tool = create_memory_tool(memori)

# Use in function calling
tools = [memory_tool]
completion(model="gpt-4", messages=[...], tools=tools)

Context Control

# Get relevant context for a query
context = memori.retrieve_context("Python testing", limit=5)
# Returns: 3 essential + 2 specific memories

# Search by category
skills = memori.search_memories_by_category("skill", limit=10)

# Get memory statistics
stats = memori.get_memory_stats()

📋 Database Schema

-- Core tables created automatically
chat_history        # All conversations
short_term_memory   # Recent context (expires)
long_term_memory    # Permanent insights  
rules_memory        # User preferences
memory_entities     # Extracted entities
memory_relationships # Entity connections
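Memori creates and manages these tables for you. For orientation, this hedged sqlite3 sketch shows the general shape of a short-term table with an expiry column; the column names here are simplified assumptions, not the real schema.

```python
# Simplified stand-in for the short_term_memory table: content plus a
# category and an expiry timestamp. The actual schema is created and
# managed by Memori automatically.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE short_term_memory (
           id INTEGER PRIMARY KEY,
           content TEXT NOT NULL,
           category TEXT,
           expires_at TEXT
       )"""
)
conn.execute(
    "INSERT INTO short_term_memory (content, category, expires_at) "
    "VALUES (?, ?, ?)",
    ("Working on a FastAPI project", "context", "2025-01-31"),
)
rows = conn.execute(
    "SELECT content FROM short_term_memory WHERE category = 'context'"
).fetchall()
```

The expiry column is what distinguishes short-term from long-term storage: expired rows age out, while promoted insights live in `long_term_memory` permanently.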

📁 Project Structure

memori/
├── core/           # Main Memori class, database manager
├── agents/         # Memory processing with Pydantic  
├── database/       # SQLite/PostgreSQL/MySQL support
├── integrations/   # LiteLLM, OpenAI, Anthropic
├── config/         # Configuration management
├── utils/          # Helpers, validation, logging
└── tools/          # Memory search tools

Examples

Framework Integrations

Memori works seamlessly with popular AI frameworks:

| Framework | Description | Example | Features |
|-----------|-------------|---------|----------|
| 🤖 Agno | Memory-enhanced agent framework integration with persistent conversations | Simple chat agent with memory search | Memory tools, conversation persistence, contextual responses |
| 👥 CrewAI | Multi-agent system with shared memory across agent interactions | Collaborative agents with memory | Agent coordination, shared memory, task-based workflows |
| 🌊 Digital Ocean AI | Memory-enhanced customer support using Digital Ocean's AI platform | Customer support assistant with conversation history | Context injection, session continuity, support analytics |
| 🔗 LangChain | Enterprise-grade agent framework with advanced memory integration | AI assistant with LangChain tools and memory | Custom tools, agent executors, memory persistence, error handling |
| 🚀 Swarms | Multi-agent system framework with persistent memory capabilities | Memory-enhanced Swarms agents with auto/conscious ingestion | Agent memory persistence, multi-agent coordination, contextual awareness |

Interactive Demos

Explore Memori's capabilities through these interactive demonstrations:

| Title | Description | Tools Used | Live Demo |
|-------|-------------|------------|-----------|
| 🌟 Personal Diary Assistant | A comprehensive diary assistant with mood tracking, pattern analysis, and personalized recommendations. | Streamlit, LiteLLM, OpenAI, SQLite | Run Demo |
| 🌍 Travel Planner Agent | Intelligent travel planning with CrewAI agents, real-time web search, and memory-based personalization. Plans complete itineraries with budget analysis. | CrewAI, Streamlit, OpenAI, SQLite | |
| 🧑‍🔬 Researcher Agent | Advanced AI research assistant with persistent memory, real-time web search, and comprehensive report generation. Builds upon previous research sessions. | Agno, Streamlit, OpenAI, ExaAI, SQLite | Run Demo |

🤝 Contributing

📄 License

MIT License - see LICENSE for details.


Made for developers who want their AI agents to remember and learn