Bhodi is a chatbot framework for working with vectorized documentation.
uv venv -p 3.10
source .venv/bin/activate
mkdir models # store your models here
mkdir data_to_vector # store the data to vectorize here
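The two directories above can be created in one idempotent step; a minimal sketch (the comments describing their purpose are assumptions based on the setup notes in this README):

```shell
# -p makes the setup idempotent, so re-running it is safe.
mkdir -p models          # model files (e.g. GGUF) go here
mkdir -p data_to_vector  # documents to be vectorized go here
ls -d models data_to_vector
```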
Check config.py and configure your models, embedders, and tokenizers.
I suggest putting your models in the models directory.
Hint: You can download models from Huggingface.
- Install the project's dependencies (from pyproject.toml):
uv sync
- For Nvidia CUDA:
export CMAKE_ARGS="-DGGML_CUDA=on"
uv pip install llama-cpp-python
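llama-cpp-python's build reads extra CMake flags from the CMAKE_ARGS environment variable, so before running the install it is worth confirming the variable is actually exported in the current shell; a quick check:

```shell
# Export the CUDA flag and confirm the build will see it.
export CMAKE_ARGS="-DGGML_CUDA=on"
echo "CMAKE_ARGS=$CMAKE_ARGS"
# then: uv pip install llama-cpp-python
```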
Llama.cpp installation: check the official llama.cpp repo for instructions on building llama.cpp for your hardware.
- Build the CLI tools: the indexer that loads data into the vectorstore, and the chatbot entry point
- Execute this in the project root directory:
uv build
uv pip install dist/bhodi_doc_analyzer-0.1.0-py3-none-any.whl
- To index data with bhodi-index:
bhodi-index path/to/your/data # Supports a single file or a directory (only thoroughly tested with PDFs)
- To use bhodi as a chatbot:
bhodi
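Since bhodi-index accepts either a single file or a directory, one way to preview which PDFs a directory run would pick up is a `find` before indexing. A sketch (the placeholder file is created only for the demo; the real bhodi-index call is left as a comment):

```shell
# Create a demo directory with one placeholder PDF.
mkdir -p data_to_vector
touch data_to_vector/manual.pdf
# Preview the PDFs an indexing run over the directory would see.
find data_to_vector -type f -name '*.pdf'
# then index for real: bhodi-index data_to_vector
```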
🌟 CURRENT FEATURES / IMPLEMENTED
- Basic chatbot TUI
- Vectorize/index several types of files
- Chat logs
- Easy to use
- Chat Memory with RAG
⚙️ NEEDED FEATURES / NEED TO BE IMPLEMENTED (PRIORITY)
- Copy Markdown code blocks generated by the chat (maybe a Textual widget)
- Better indexing of files, for more consistent and accurate RAG
💡 COULD BE USEFUL / NEED TO BE IMPLEMENTED (NOT PRIORITY)
- Embedding models served via API, such as Google Gemini or OpenAI