AI Assistant to organize visits to the Château de Versailles.
- Docker and Docker Compose installed
- Mistral AI API key (create a `.env` file at the root with your key)
Create a `.env` file at the root of the project:

```
MISTRAL_API_KEY=your_mistral_api_key
MISTRAL_MODEL=mistral-medium-latest
EMBEDDING_MODEL=mistral-embed
```

From the project root, execute:

```bash
docker compose up --build
```

Wait until the logs display:
```
backend-1 | INFO: Started server process [1]
backend-1 | INFO: Application startup complete.
backend-1 | INFO: Uvicorn running on http://0.0.0.0:8001
```
- Frontend (user interface): http://localhost:8501
- Backend API (Swagger UI): http://localhost:8001/docs
```
Vers-AI-lles/
├── backend/              # FastAPI API + LangGraph agents
├── frontend/             # Streamlit interface
└── docker-compose.yaml
```
The backend is built with FastAPI and LangGraph to orchestrate conversational AI agents.
app.py - FastAPI entry point
- Exposes two endpoints:
  - `POST /chat`: evaluation endpoint (stateless)
  - `POST /`: main endpoint with session management
- Configures CORS to allow frontend requests
- Initializes the graph managers (`GraphManager` and `GraphManagerEval`)
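For reference, a minimal sketch of how this wiring could look; the `ChatRequest` schema and the managers' `run()` methods are assumptions for illustration, not the project's actual code:

```python
# Minimal sketch of app.py. ChatRequest and the managers' run() methods are
# assumptions for illustration only.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel

from setup_graph import GraphManager, GraphManagerEval

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # let the Streamlit frontend call the API
    allow_methods=["*"],
    allow_headers=["*"],
)

graph_manager = GraphManager()
graph_manager_eval = GraphManagerEval()


class ChatRequest(BaseModel):  # hypothetical request schema
    message: str
    session_id: str | None = None


@app.post("/")
def chat_with_session(request: ChatRequest):
    """Main endpoint: keeps per-session conversation state."""
    return {"answer": graph_manager.run(request.message, request.session_id)}


@app.post("/chat")
def chat_stateless(request: ChatRequest):
    """Evaluation endpoint: each request is handled independently."""
    return {"answer": graph_manager_eval.run(request.message)}
```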
setup_graph.py - Core of the LangGraph agent system
- `State`: Pydantic model that maintains the conversation state (see the sketch after this list)
  - `messages`: message history
  - `user_wants_road_in_versailles`: the user wants an itinerary
  - `user_wants_specific_info`: the user wants specific information
  - `user_asks_off_topic`: the question is off-topic
  - `necessary_info_for_road`: information collected to create the itinerary (date, time, group type, duration, budget)
- Agents:
  - `IntentAgent`: analyzes user intent (visit, specific info, off-topic)
  - `ItineraryInfoAgent`: collects the information needed to create an itinerary (conversational mode)
  - `ItineraryInfoAgentEval`: evaluation version that extracts all the info in a single pass
  - `RoadInVersaillesAgent`: generates a personalized itinerary using RAG
  - `SpecificInfoAgent`: answers specific questions about the château
  - `OffTopicAgent`: handles off-topic or courtesy questions
- Graphs:
  - `GraphManager`: standard conversational mode (progressive info collection)
  - `GraphManagerEval`: evaluation mode (direct extraction without intermediate questions)
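A sketch of what the `State` model could look like, based on the fields listed above; the exact types and defaults are assumptions:

```python
# Sketch of the State model; field names come from the README, exact types
# and defaults are assumptions.
from typing import Optional

from pydantic import BaseModel, Field


class NecessaryInfoForRoad(BaseModel):  # hypothetical sub-model
    date: Optional[str] = None
    time: Optional[str] = None
    group_type: Optional[str] = None
    duration: Optional[str] = None
    budget: Optional[str] = None


class State(BaseModel):
    messages: list[dict] = Field(default_factory=list)  # message history
    user_wants_road_in_versailles: bool = False
    user_wants_specific_info: bool = False
    user_asks_off_topic: bool = False
    necessary_info_for_road: NecessaryInfoForRoad = Field(
        default_factory=NecessaryInfoForRoad
    )
```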
embedding.py - Embedding system
- Uses Mistral AI API to generate embeddings
- `embed_query()`: handles long texts by splitting them into chunks with overlap
- Similarity functions: cosine, Manhattan, Euclidean
- `select_top_n_similar_documents()`: selects the most relevant documents for RAG
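A sketch of how the chunking and top-n selection might work; the chunk size, overlap, and the exact Mistral SDK calls depend on the implementation and SDK version, so they are assumptions here:

```python
# Sketch of embed_query() and select_top_n_similar_documents(); chunk size,
# overlap and the Mistral SDK calls are assumptions.
import os

import numpy as np
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])


def embed_query(text: str, chunk_size: int = 2000, overlap: int = 200) -> np.ndarray:
    """Embed a possibly long text by averaging embeddings of overlapping chunks."""
    step = chunk_size - overlap
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), step)] or [text]
    response = client.embeddings.create(model="mistral-embed", inputs=chunks)
    vectors = np.array([item.embedding for item in response.data])
    return vectors.mean(axis=0)


def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))


def select_top_n_similar_documents(query_vec: np.ndarray, documents: list[dict], n: int = 50) -> list[dict]:
    """Return the n documents whose stored embeddings are closest to the query."""
    ranked = sorted(documents, key=lambda doc: euclidean_distance(query_vec, np.array(doc["embedding"])))
    return ranked[:n]
```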
create_db.py - Data preparation
- Loads data from `data/tips.json` (tips about Versailles)
- Generates embeddings for each tip
- Saves the enriched documents to `data/tips_embedded.json`
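A sketch of this preparation step; the `text` key in `tips.json` is an assumption:

```python
# Sketch of create_db.py; the "text" key in tips.json is an assumption.
import json

from embedding import embed_query  # embedding module described above

with open("data/tips.json", encoding="utf-8") as f:
    tips = json.load(f)

# Attach an embedding vector to each tip.
enriched = [{**tip, "embedding": embed_query(tip["text"]).tolist()} for tip in tips]

with open("data/tips_embedded.json", "w", encoding="utf-8") as f:
    json.dump(enriched, f, ensure_ascii=False)
```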
list.py - In-memory database
- Contains `longlist`: the list of embedded documents loaded at startup
- Used by `RoadInVersaillesAgent` for RAG
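Conceptually, this amounts to loading the embedded tips once at import time (a sketch, assuming the file path above):

```python
# Sketch of list.py: load the embedded tips once so agents can reuse them
# without re-reading the file on every request.
import json

with open("data/tips_embedded.json", encoding="utf-8") as f:
    longlist = json.load(f)
```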
rag_config.py - RAG Configuration (legacy)
- Configuration file for the RAG system
User interface built with Streamlit.
front.py - Main application
- Chat interface with the backend
- Message history management in `st.session_state`
- API calls to `http://backend:8001/`
- Styled display of user and assistant messages
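A sketch of the chat loop; the payload and response shapes are assumptions, and the real app styles messages through `components.py` rather than the built-in Streamlit chat elements used here:

```python
# Sketch of front.py; payload/response shapes are assumptions, and the real
# app uses custom HTML components instead of st.chat_message.
import requests
import streamlit as st

BACKEND_URL = "http://backend:8001/"

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation kept in session state.
for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask about your visit to Versailles"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)

    response = requests.post(BACKEND_URL, json={"message": prompt}).json()
    answer = response.get("answer", "")

    st.session_state.messages.append({"role": "assistant", "content": answer})
    st.chat_message("assistant").write(answer)
```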
components.py - UI Components
- HTML rendering functions for messages
- Header, loading spinner, etc.
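For illustration, such a rendering helper might look like this (the class names and markup are assumptions that would have to match `styles.css`):

```python
# Hypothetical rendering helper; CSS class names must match styles.css.
import streamlit as st


def render_message(role: str, content: str) -> None:
    """Render a chat bubble with a CSS class matching the sender ('user' or 'assistant')."""
    st.markdown(f'<div class="message {role}">{content}</div>', unsafe_allow_html=True)
```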
styles.css - Custom styles
- Modern design for the chat interface
docker-compose.yaml
- `backend` service: exposes port 8001
- `frontend` service: exposes port 8501
- Environment variables shared from `.env`
- Mounted volumes for hot-reload development
- Intent analysis: the `IntentAgent` determines what the user wants
- Conditional routing (see the sketch after this list):
  - If visit → `ItineraryInfoAgent` collects the necessary info
  - If specific info → `SpecificInfoAgent` responds directly
  - If off-topic → `OffTopicAgent` redirects politely
- Itinerary generation: once all the info is collected, `RoadInVersaillesAgent` creates a personalized plan using RAG (Retrieval-Augmented Generation) with 50 relevant documents
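A sketch of this routing with LangGraph; the node names follow the agents above, while the placeholder callables and exact wiring are assumptions:

```python
# Sketch of the conditional routing; node names follow the README, the
# placeholder node and exact wiring are assumptions.
from langgraph.graph import END, StateGraph
from pydantic import BaseModel


class State(BaseModel):  # minimal stand-in for the full State model above
    user_wants_road_in_versailles: bool = False
    user_wants_specific_info: bool = False
    user_asks_off_topic: bool = False


def placeholder(state: State) -> dict:
    """Stand-in for the real agents; a real node returns state updates."""
    return {}


def route_after_intent(state: State) -> str:
    """Pick the next node based on the intent flags set by IntentAgent."""
    if state.user_wants_road_in_versailles:
        return "itinerary_info"
    if state.user_wants_specific_info:
        return "specific_info"
    return "off_topic"


graph = StateGraph(State)
graph.add_node("intent", placeholder)          # IntentAgent
graph.add_node("itinerary_info", placeholder)  # ItineraryInfoAgent
graph.add_node("specific_info", placeholder)   # SpecificInfoAgent
graph.add_node("off_topic", placeholder)       # OffTopicAgent
graph.add_node("road", placeholder)            # RoadInVersaillesAgent

graph.set_entry_point("intent")
graph.add_conditional_edges("intent", route_after_intent)
graph.add_edge("itinerary_info", "road")
graph.add_edge("road", END)
graph.add_edge("specific_info", END)
graph.add_edge("off_topic", END)

compiled_graph = graph.compile()
```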
The system uses a database of tips about Versailles:
- Embedding documents with Mistral AI
- Similarity calculated by Euclidean distance
- Top 50 documents injected into the prompt to contextualize the response
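Put together, the retrieval step might look roughly like this (the prompt wording and the `text` document field are assumptions):

```python
# Sketch of the retrieval step; prompt wording and the "text" field are assumptions.
def build_rag_prompt(question: str, documents: list[dict]) -> str:
    """Inject the retrieved tips into the prompt sent to the LLM."""
    context = "\n".join(f"- {doc['text']}" for doc in documents)
    return (
        "You are a visit planner for the Château de Versailles.\n"
        f"Relevant tips:\n{context}\n\n"
        f"User request: {question}"
    )


# Usage, reusing the helpers sketched earlier:
# top_docs = select_top_n_similar_documents(embed_query(question), longlist, n=50)
# prompt = build_rag_prompt(question, top_docs)
```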
Backend:

```bash
cd backend
pip install -r requirements.txt
uvicorn app:app --reload --port 8001
```

Frontend:

```bash
cd frontend
pip install -r requirements.txt
streamlit run front.py
```

Regenerate the embeddings database:

```bash
cd backend
python create_db.py
```

- LangGraph allows creating agent workflows with routing conditions
- Structured output: all agents use `with_structured_output()` to guarantee valid JSON responses (see the sketch after this list)
- Evaluation mode: disabled for now; designed to extract information in a single request
- Session management: messages are kept in the `State` to maintain context
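As an illustration of the structured-output point, here is a sketch using LangChain's `ChatMistralAI`; the `IntentResult` schema is an assumption based on the `State` fields above:

```python
# Sketch of structured output; IntentResult is a hypothetical schema based on
# the State fields above. Requires MISTRAL_API_KEY in the environment.
from langchain_mistralai import ChatMistralAI
from pydantic import BaseModel


class IntentResult(BaseModel):
    user_wants_road_in_versailles: bool
    user_wants_specific_info: bool
    user_asks_off_topic: bool


llm = ChatMistralAI(model="mistral-medium-latest")
structured_llm = llm.with_structured_output(IntentResult)

result = structured_llm.invoke("I'd like to visit the gardens next Saturday morning.")
# result is an IntentResult instance, so downstream code always gets typed fields.
```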
Project developed during Hackathon Datacraft 2024