This is a web-based chatbot application designed to answer transportation-related queries using an LLM (Microsoft-Phi-3-mini-128k) with a Retrieval-Augmented Generation (RAG) implementation.
The application forwards user questions about transportation to the Microsoft-Phi-3-mini-128k model and uses Retrieval-Augmented Generation (RAG) to ground the model's answers in retrieved context, keeping responses accurate and contextually relevant.
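As an illustration of that flow (not the project's actual code), here is a minimal, self-contained RAG sketch: a tiny in-memory corpus, TF-IDF retrieval, and a context-grounded prompt that would then be sent to the Phi-3-mini-128k model. The documents, function names, and prompt format are all placeholders.

```python
# Minimal RAG sketch: retrieve relevant snippets, then build a grounded prompt.
# The corpus and prompt wording below are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DOCUMENTS = [
    "Bus route 42 runs every 15 minutes between the central station and the airport.",
    "Off-peak metro tickets are discounted by 20% on weekdays after 7 pm.",
    "Bicycles are allowed on regional trains outside of rush hour.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(DOCUMENTS)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [DOCUMENTS[i] for i in top]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the LLM can ground its answer."""
    context = "\n".join(retrieve(query))
    return (
        "Answer the transportation question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    prompt = build_prompt("How often does the airport bus run?")
    print(prompt)  # this prompt would be passed to the Phi-3-mini-128k model
```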
- Real-time query processing
- Retrieval-Augmented Generation (RAG) for enhanced answer accuracy
- Interactive and responsive user interface
- Model: Microsoft-Phi-3-mini-128k
- Backend Framework: FastAPI
- Frontend Framework: React + Vite
- CSS: Tailwind, Material-Tailwind, hover.dev
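This stack implies a simple client-server split: the React frontend sends the user's question to a FastAPI endpoint, which runs the RAG pipeline and returns the generated answer. Below is a minimal sketch of that wiring, assuming a single POST `/query` route; the actual route name, request/response shape, and CORS settings in `backend/main.py` may differ.

```python
# Hypothetical /query endpoint; the real backend may expose different routes.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel

app = FastAPI()

# Allow the Vite dev server (default port 5173) to call the API during development.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5173"],
    allow_methods=["*"],
    allow_headers=["*"],
)

class QueryRequest(BaseModel):
    question: str

class QueryResponse(BaseModel):
    answer: str

def generate_answer(question: str) -> str:
    # Stub so the sketch runs standalone; the real app would run the RAG
    # pipeline here (retrieve context, prompt Phi-3-mini-128k).
    return f"(stubbed answer for: {question})"

@app.post("/query", response_model=QueryResponse)
def query(request: QueryRequest) -> QueryResponse:
    return QueryResponse(answer=generate_answer(request.question))
```

The frontend would then POST `{ "question": "..." }` to `/query` (e.g., with fetch or axios) and render the returned answer.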
To get the project up and running, follow these steps:
All necessary dependencies for the backend are listed in the requirements.txt file located in the backend folder. To install them, run:
```bash
pip install -r backend/requirements.txt
```
Running the Backend

Navigate to the backend directory and start the FastAPI server with uvicorn:

```bash
cd backend
uvicorn main:app --reload
```
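With the default settings, the API is served at http://127.0.0.1:8000, and FastAPI's interactive documentation is available at http://127.0.0.1:8000/docs.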
Install the required dependencies for the frontend by navigating to the frontend folder and running:
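```bash
npm install
```

(This assumes npm; use yarn or pnpm instead if the project is set up with a different package manager.)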
Running the Frontend
Navigate to the frontend directory:
```bash
cd frontend
```
Start the Vite development server:
```bash
npm run dev
```
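By default, Vite serves the app at http://localhost:5173; open that URL in a browser to interact with the chatbot.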
Contributions are welcome! Please fork the repository and create a pull request with your changes. Ensure your code follows the project's coding standards and includes appropriate tests.
This project is licensed under the MIT License. See the LICENSE file for more details.