The LLM Visibility Optimization Tool is a comprehensive system designed to enhance the visibility of websites in responses generated by Large Language Models (LLMs). It combines advanced scraping, embedding, and benchmarking techniques to provide actionable insights for improving content quality and relevance.
- Purpose: Converts any website into a private, conversational AI knowledge base.
- How It Works (a minimal code sketch follows this list):
  - Scrapes website content using Playwright and BeautifulSoup.
  - Converts the HTML to Markdown for better readability.
  - Embeds the content locally using ChromaDB and HuggingFace embeddings.
  - Allows users to query the content via a locally hosted LLM (e.g., Ollama).
- Tech Stack:
  - Backend: Python (Flask)
  - Frontend: HTML, TailwindCSS, JavaScript
  - Web Scraping: Playwright, BeautifulSoup4, markdownify
  - Vector Database: ChromaDB
  - Embeddings: all-MiniLM-L6-v2
  - LLM Runtime: Ollama
  - Orchestration: LangChain
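The "How It Works" steps above map onto a simple scrape, convert, and embed pipeline. The sketch below is a minimal illustration of that flow, assuming the `langchain_community` and `langchain_text_splitters` packages; the function names and chunking parameters are assumptions for clarity, not the actual contents of `scraper.py` or `vector_store.py`.

```python
# Illustrative scrape -> Markdown -> embed pipeline (assumed names, not the repo's exact code).
from playwright.sync_api import sync_playwright
from bs4 import BeautifulSoup
from markdownify import markdownify as md
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_text_splitters import RecursiveCharacterTextSplitter

def scrape_to_markdown(url: str) -> str:
    """Render the page with Playwright, clean it with BeautifulSoup, and convert it to Markdown."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        html = page.content()
        browser.close()
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()  # drop non-content elements before conversion
    return md(str(soup))

def index_markdown(markdown_text: str, persist_dir: str = "vector_db") -> Chroma:
    """Chunk the Markdown and embed it locally into a persistent ChromaDB store."""
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_text(markdown_text)
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    return Chroma.from_texts(chunks, embeddings, persist_directory=persist_dir)
```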
- Purpose: Evaluates how well a website is optimized for visibility in LLM-generated responses.
- Key Dimensions:
  - Relevance: Measures alignment with user queries.
  - Authority: Assesses trustworthiness and credibility.
  - Comprehensiveness: Evaluates the depth and breadth of content.
  - Clarity: Analyzes content structure and readability.
  - Recency: Checks for up-to-date information.
  - Actionability: Looks for clear calls-to-action.
- Scoring System (see the sketch after this list):
  - Weighted average of the dimension scores.
  - Actionable recommendations based on those scores.
- Tech Stack:
  - Python modules: LangChain, ChromaDB, HuggingFace, Ollama
- Configuration: benchmark_config.py
- Analysis Engine: geo_benchmark.py
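To make the scoring concrete, here is a small sketch of a weighted average over the six dimensions with a threshold-based recommendation pass. The weights and the 60-point threshold are assumptions for illustration, not the values defined in `benchmark_config.py`.

```python
# Illustrative weighted-average scoring; the weights and threshold are assumed,
# not the actual configuration in benchmark_config.py.
from typing import Dict, List

WEIGHTS = {
    "relevance": 0.25,
    "authority": 0.20,
    "comprehensiveness": 0.20,
    "clarity": 0.15,
    "recency": 0.10,
    "actionability": 0.10,
}

def overall_score(scores: Dict[str, float]) -> float:
    """Combine per-dimension scores (0-100) into a single weighted overall score."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def recommendations(scores: Dict[str, float], threshold: float = 60.0) -> List[str]:
    """Flag any dimension scoring below the threshold as an improvement target."""
    return [f"Improve {dim}: scored {value:.0f}/100"
            for dim, value in scores.items() if value < threshold]
```

For example, a site scoring 80 in every dimension gets an overall score of 80, while a 50 in authority alone lowers the overall score and surfaces "Improve authority" in the recommendations.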
- Python 3.8 or higher
- pip
- Pull the llama3 model: `ollama pull llama3`
- Clone the repository and enter it:
  - `git clone https://github.com/sh4shv4t/LLM-Visibility-Optimization-Tool.git`
  - `cd LLM-Visibility-Optimization-Tool`
- Create and activate a virtual environment:
  - Windows: `python -m venv venv`, then `.\venv\Scripts\activate`
  - Mac/Linux: `python3 -m venv venv`, then `source venv/bin/activate`
- Install dependencies: `pip install -r requirements.txt`
- Install the Playwright browsers: `playwright install`
- Start the app: `python app.py`
- Open http://127.0.0.1:5000 in your browser.
- Enter a URL.
- Click Scrape & Index.
- Wait for success message.
- Click Analyze Quality to run the GEO Benchmark System.
- Review the detailed report with scores and recommendations.
- Type a question related to the scraped content.
- Click Ask.
- Answer is generated using retrieved context + local LLM.
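Under the hood this is a standard retrieval-augmented generation step: the most relevant chunks are pulled from the vector store and passed to the local LLM as context. The sketch below is a minimal illustration, assuming the `langchain_community` package layout and the `llama3` model pulled during setup; it is not the exact code in `qa_app.py` or `app.py`.

```python
# Illustrative retrieval-augmented QA step (assumed structure, not the repo's exact code).
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA

# Reopen the vector store persisted by the Scrape & Index step.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = Chroma(persist_directory="vector_db", embedding_function=embeddings)

# Local LLM served by Ollama.
llm = Ollama(model="llama3")

# Retrieve relevant chunks and let the LLM answer from that context.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=store.as_retriever())
result = qa.invoke({"query": "What services does this website offer?"})
print(result["result"])
```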
LLM-Visibility-Optimization-Tool/
├── app.py
├── scraper.py
├── vector_store.py
├── qa_app.py
├── geo_benchmark.py
├── benchmark_config.py
├── templates/
│ └── index.html
├── scraped_content.md
├── vector_db/
├── requirements.txt
└── README.md
- Multi-document ingestion.
- Chat history and follow-up question memory.
- Streaming model responses.
- Support for additional file formats (e.g., PDFs, text files).
- Visual dashboards for benchmarking results.
For issues or questions:
- Check the troubleshooting section in GEO_BENCHMARK_GUIDE.md.
- Open an issue in the GitHub repository.
- Contact the repository owner for further assistance.