# Enhanced Browser MCP

A crew of AI agents that handle web research so Claude doesn't have to.

Enhanced Browser MCP delegates research to a specialized crew of AI workers. One tool call triggers parallel searches across 25+ sources, intelligent scraping, and AI synthesis - returning concise findings instead of raw web content.

## 90%+ Token Reduction

| Without Enhanced Browser | With Enhanced Browser |
| --- | --- |
| Claude searches, gets 10+ URLs | AI crew searches 25+ sources in parallel |
| Claude reads each page (5,000+ tokens each) | Jina Reader scrapes pages (free) |
| Claude synthesizes manually | Gemini synthesizes findings (free) |
| 50,000+ tokens consumed | ~200-1,500 tokens returned |

The crew does the heavy lifting. Claude gets the insights.

## How It Works

```text
Claude Code
    │
    │ research("your question")
    ▼
┌─────────────────────────────────────────────┐
│         Enhanced Browser MCP                 │
│         (AI Research Crew)                   │
│                                              │
│  ┌──────────────────────────────────────┐   │
│  │     Query Intelligence Layer          │   │
│  │  Detects: code, finance, medical,     │   │
│  │  academic, weather, packages, etc.    │   │
│  └─────────────────┬────────────────────┘   │
│                    │                         │
│    ┌───────────────┼───────────────┐        │
│    ▼               ▼               ▼        │
│ ┌──────┐    ┌───────────┐    ┌──────────┐  │
│ │Search│    │  Scrape   │    │Synthesize│  │
│ │ Crew │───▶│   Crew    │───▶│   Crew   │  │
│ │(25+) │    │  (Jina)   │    │ (Gemini) │  │
│ └──────┘    └───────────┘    └──────────┘  │
│                                              │
│  Returns: Key findings, details, sources     │
└─────────────────────────────────────────────┘
```
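
In code terms, the flow is roughly the following. This is a minimal, self-contained sketch with stubbed helpers; `run_engine`, `jina_scrape`, and `gemini_summarize` are hypothetical stand-ins, not the project's actual API:

```python
import asyncio

async def run_engine(name: str, query: str) -> list[str]:
    # Search crew: each engine adapter returns candidate URLs (stubbed here).
    return [f"https://example.com/{name}/result"]

async def jina_scrape(url: str) -> str:
    # Scrape crew: Jina Reader converts a URL to clean text (stubbed here).
    return f"(scraped text of {url})"

async def gemini_summarize(query: str, pages: list[str]) -> str:
    # Synthesize crew: Gemini condenses pages into short findings (stubbed here).
    return f"Key findings for {query!r} from {len(pages)} pages"

async def research(query: str) -> str:
    engines = ["serper", "brave", "wikipedia"]  # chosen by the query intelligence layer
    batches = await asyncio.gather(*(run_engine(e, query) for e in engines))
    urls = [u for batch in batches for u in batch][:3]  # rank/dedupe, then cap
    pages = await asyncio.gather(*(jina_scrape(u) for u in urls))
    return await gemini_summarize(query, pages)

print(asyncio.run(research("FastAPI rate limiting")))
```

The parallel fan-out is what keeps wall-clock time low even when many engines are queried.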

## 25+ Search Sources

The crew automatically selects relevant sources based on your query:

### General Web (API key needed)

- Serper - Google results (2,500 free/month)
- Brave - Privacy-focused search (2,000 free/month)
- Tavily - AI-optimized search (1,000 free/month)
- Exa - Neural/semantic search (1,000 free/month)

### Code & Packages (17 free sources)

- GitHub - Code and repositories
- StackOverflow - Programming Q&A
- npm - Node.js packages
- PyPI - Python packages
- crates.io - Rust packages
- Docker Hub - Container images
- Libraries.io - Cross-platform packages (API key optional)

### Academic & Medical

- ArXiv - Scientific preprints (free)
- PubMed - Medical literature (free)
- Semantic Scholar - Academic papers (free)
- Wikipedia - Encyclopedia (free)

### Knowledge & Reference

- Wolfram Alpha - Computational knowledge (2,000 free/month)
- Wikidata - Structured facts (free)
- Open Library - Books (free)

### Finance

- Alpha Vantage - Stocks and forex (25 free/day)
- CoinGecko - Cryptocurrency data (free)

### Other

- OpenWeatherMap - Weather data (1,000 free/day)
- Nominatim - Geocoding (free)
- HackerNews - Tech discussions (free)
- Reddit - Community discussions (API key needed)
- NewsAPI - News articles (100 free/day)

## Quick Start

### 1. Clone and Install

```bash
git clone https://github.com/bigph00t/enhanced-browser-mcp.git
cd enhanced-browser-mcp
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -e .
```

### 2. Get API Keys

Required:

| API | Free Tier | Get Key |
| --- | --- | --- |
| Google Gemini | Generous | aistudio.google.com/app/apikey |

Recommended (better results):

| API | Free Tier | Get Key |
| --- | --- | --- |
| Serper | 2,500/month | serper.dev |
| Brave | 2,000/month | brave.com/search/api |

17 free sources work without any API keys - see `.env.example` for the full list.

### 3. Configure

```bash
cp .env.example .env
# Edit .env with your API keys
```

### 4. Add to Claude Code

Add to `~/.claude.json`:

```json
{
  "mcpServers": {
    "enhanced-browser": {
      "type": "stdio",
      "command": "/path/to/enhanced-browser-mcp/venv/bin/enhanced-browser",
      "args": [],
      "env": {
        "GOOGLE_API_KEY": "your-gemini-key",
        "SERPER_API_KEY": "your-serper-key",
        "BRAVE_API_KEY": "your-brave-key"
      }
    }
  }
}
```

### 5. Restart Claude Code

## Usage

Just use `research()` for everything:

```python
research("your question here")
```

The crew automatically selects engines based on your query:

| Query Type | Engines Auto-Selected |
| --- | --- |
| Code questions | + GitHub, StackOverflow, npm, PyPI, crates |
| Finance questions | + Alpha Vantage, CoinGecko |
| Medical questions | + PubMed, Semantic Scholar |
| Academic topics | + ArXiv, Wikipedia, Semantic Scholar |
| Math/calculations | + Wolfram Alpha |
| Weather queries | + OpenWeatherMap |
| Location queries | + Nominatim |
| Opinions/comparisons | + HackerNews, Reddit |
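
A plausible sketch of that routing, assuming simple keyword matching (the `ROUTES` table and `select_engines` below are illustrative; the actual detection rules live in the query intelligence layer):

```python
# Hypothetical keyword-based routing; the real query intelligence
# layer may use different rules and more signals than plain keywords.
ROUTES = {
    ("stock", "forex", "crypto", "price"): ["alphavantage", "coingecko"],
    ("error", "library", "install", "package"): ["github", "stackoverflow", "pypi"],
    ("paper", "study", "clinical", "trial"): ["arxiv", "pubmed", "semanticscholar"],
    ("weather", "forecast"): ["openweathermap"],
}

def select_engines(query: str) -> list[str]:
    engines = ["serper", "brave"]  # general web engines are always candidates
    words = set(query.lower().split())
    for keywords, extra in ROUTES.items():
        if words & set(keywords):
            engines.extend(extra)
    return engines

print(select_engines("pip install error with FastAPI"))
# ['serper', 'brave', 'github', 'stackoverflow', 'pypi']
```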

### Depth Options

```python
research("What is HTMX?", depth="quick")            # ~10s, simple facts
research("How to implement rate limiting")          # ~30s, default
research("Rust vs Go for CLI tools", depth="deep")  # ~60s, comprehensive
```

| Depth | Time | Tokens | Best For |
| --- | --- | --- | --- |
| `quick` | ~10s | 200-400 | Facts, definitions, "what is X" |
| `medium` | ~30s | 400-800 | How-to, technical docs (default) |
| `deep` | ~60s | 800-1500 | Comparisons, thorough research |

### Add Context

```python
research("best TTS API", context="for YouTube automation, comparing cost and quality")
```

## Tools Reference

| Tool | Description |
| --- | --- |
| `research` | Main tool - the crew searches, scrapes, synthesizes |
| `research_async` | Start research in the background |
| `research_status` | Check if a background task is done |
| `research_wait` | Wait for a background task and get results |
| `scrape` | Read a specific URL via Jina Reader |
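
For example, when you already have a URL and just want its content (the URL below is only an illustration):

```python
scrape("https://fastapi.tiangolo.com/deployment/")
# Returns the page as clean text via Jina Reader - no search, no synthesis
```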

### Async Research

For deep research while continuing other work:

```python
# Start background research
task = research_async("PostgreSQL indexing best practices", depth="deep")
# Returns: {"task_id": "a1b2c3d4", "status": "started"}

# Continue working...

# Get results when ready
results = research_wait(task_id="a1b2c3d4")
```
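
If you prefer polling over blocking, check the task with `research_status` first (the exact status values are an assumption based on the `"started"` response above):

```python
status = research_status(task_id="a1b2c3d4")
# e.g. {"task_id": "a1b2c3d4", "status": "started"} until the crew finishes
results = research_wait(task_id="a1b2c3d4")  # then collect the findings
```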

## Example Output

```markdown
## Research Results (medium)
**Query:** FastAPI rate limiting with Redis

## Key Findings
FastAPI doesn't include built-in rate limiting. The standard solution is **SlowAPI**,
a library that wraps the `limits` package and integrates with FastAPI's dependency
injection system.

## Important Details
- SlowAPI decorator syntax: `@limiter.limit("5/minute")`
- For distributed deployments, use Redis backend
- Alternative: Implement at reverse proxy level (nginx, Cloudflare)

## Recommended Links
- [SlowAPI Documentation](https://slowapi.readthedocs.io)
- [FastAPI Production Guide](https://fastapi.tiangolo.com/deployment/)

---
*Engines: serper, brave, stackoverflow | Pages scraped: 3 | Results: 12*
```

## History & Session Awareness

The crew remembers your searches and uses them to provide better, non-repetitive results.

How it works:

- Each search saves findings, scraped content, and sources to `history/`
- Related searches are automatically detected via keyword matching (sketched below)
- Gemini sees previous findings and focuses on NEW information
- Supports up to 1,000 historical searches
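
A rough sketch of keyword-overlap matching against stored history (hypothetical; the project's actual matcher may normalize or weight terms differently):

```python
# Hypothetical keyword-overlap matcher over saved history entries.
def related_searches(query: str, history: list[dict], min_overlap: int = 2) -> list[dict]:
    words = set(query.lower().split())
    related = []
    for entry in history:  # each entry stores the past query plus its findings
        overlap = words & set(entry["query"].lower().split())
        if len(overlap) >= min_overlap:
            related.append(entry)
    return related

history = [
    {"query": "what is WebSockets", "findings": "..."},
    {"query": "WebSocket vs SSE", "findings": "..."},
]
print(len(related_searches("WebSocket vs SSE performance", history)))  # 1
```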

Progressive research example:

```python
research("what is WebSockets", depth="quick")               # Baseline knowledge
research("WebSocket vs SSE", depth="medium")                # Finds 1 related, avoids repetition
research("real-time architecture patterns", depth="deep")   # Finds 2 related, builds on prior
```

History is stored locally in `history/` and is gitignored by default.

## Configuration

### Environment Variables

```bash
# Required
GOOGLE_API_KEY=your_gemini_key

# Recommended
SERPER_API_KEY=your_serper_key
BRAVE_API_KEY=your_brave_key

# Optional premium
TAVILY_API_KEY=your_tavily_key
EXA_API_KEY=your_exa_key
NEWSAPI_KEY=your_newsapi_key
WOLFRAM_APP_ID=your_wolfram_key
ALPHAVANTAGE_API_KEY=your_alphavantage_key
OPENWEATHER_API_KEY=your_openweather_key
LIBRARIES_API_KEY=your_libraries_key
GITHUB_TOKEN=your_github_token
```

The crew automatically detects which APIs are configured and only uses those.
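
That detection can be as simple as checking which environment variables are present. A minimal sketch (the mapping below is illustrative, not the project's actual table):

```python
import os

# Illustrative mapping from env var to the engine it unlocks; the
# project's real table covers all 25+ sources.
KEYED_ENGINES = {
    "SERPER_API_KEY": "serper",
    "BRAVE_API_KEY": "brave",
    "TAVILY_API_KEY": "tavily",
    "NEWSAPI_KEY": "newsapi",
}
FREE_ENGINES = ["wikipedia", "arxiv", "pubmed", "hackernews"]  # no key required

def available_engines() -> list[str]:
    keyed = [engine for var, engine in KEYED_ENGINES.items() if os.getenv(var)]
    return FREE_ENGINES + keyed
```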

## Troubleshooting

### "No results found"

- Check that `GOOGLE_API_KEY` is set (required for synthesis)
- Add at least one search API key (Serper, Brave) for better results
- Free sources (DuckDuckGo, Wikipedia) work without keys

### Slow responses

- `deep` mode takes 45-90 seconds by design
- Use `quick` or `medium` for faster results

### Missing expected sources

- Check that `.env` has the API key for that source
- The crew only calls configured engines

## Development

```bash
pip install -e .

# Run server directly
python -m search_crew.server

# Test research function
python -c "
import asyncio
from search_crew.research import research
result = asyncio.run(research('test query', depth='quick'))
print(result)
"
```

## License

MIT

## Credits
