This is the code repository accompanying the Medium Article: A Starter Pack to building a local Chatbot using Ollama, LangGraph and RAG
-
Download Ollama
-

You can download Ollama from https://www.ollama.com.
Verify Ollama is Running
-

Once installed, ensure that Ollama is running by accessing: http://localhost:11434/
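If you want a script to fail fast when the server is down, the same endpoint can be checked programmatically. A minimal sketch using only the standard library; the `is_ollama_running` helper is our own illustration, not part of this repo:

```python
import urllib.request
import urllib.error

def is_ollama_running(url: str = "http://localhost:11434/") -> bool:
    """Return True if an Ollama server responds at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            # A healthy Ollama server answers GET / with HTTP 200.
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: treat the server as not running.
        return False

print("Ollama running:", is_ollama_running())
```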
Pull Required Models
-

With Ollama running, you'll need to pull the following three models:
ollama pull nomic-embed-text
ollama pull llama3.1
ollama pull deepseek-r1:8b

| Name             | Usage                                   |
|------------------|-----------------------------------------|
| nomic-embed-text | Text embedding for RAG                  |
| llama3.1         | Simple chat, fast response              |
| deepseek-r1:8b   | Complex chat, well-thought-out response |

Setup environment
-

conda env create -f env.yml
conda activate chatbot

Create vector store
-

python create_vs.py

Run webapp
-
streamlit run app.py

- Trigger RAG: ask a question related to Kredivo
- Trigger System 2: ask it to plan an itinerary
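The two triggers above imply a routing step: quick questions go to llama3.1, while requests that need deliberate multi-step planning ("System 2", e.g. building an itinerary) go to deepseek-r1:8b. How the app actually routes is defined by its LangGraph graph; the keyword heuristic below is only an illustrative sketch, not the repo's implementation:

```python
def route_model(query: str) -> str:
    """Choose an Ollama model for a query.

    Hypothetical heuristic: queries that look like they need multi-step
    planning ("System 2") go to the slower, more deliberate model;
    everything else goes to the fast chat model.
    """
    system2_markers = ("plan", "itinerary", "step by step", "compare")
    if any(marker in query.lower() for marker in system2_markers):
        return "deepseek-r1:8b"  # complex chat, well-thought-out response
    return "llama3.1"            # simple chat, fast response


print(route_model("What is Kredivo's late fee?"))       # llama3.1
print(route_model("Plan a 3-day itinerary for Kyoto"))  # deepseek-r1:8b
```

In the real app, this kind of decision would typically be a conditional edge in the LangGraph graph, possibly made by an LLM classifier rather than keywords.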
