A powerful search agent built with the Google Agent Development Kit (ADK) that performs web searches and content extraction.
This project implements a sophisticated, autonomous search agent powered by Google's Agent Development Kit (ADK) and the Gemini 2.5 Flash model. It's designed to be a multi-platform research tool that leverages AI to intelligently search, extract, and organize information from the web.
🔗 Multi-Source Search: Instead of relying on a single search engine, the agent can search across:
- GitHub repositories
- Reddit communities and discussions
- Stack Overflow questions and answers
- Wikipedia articles
- Python 3.7+
- Google ADK - Agent Development Kit for building AI agents
- BeautifulSoup4 - HTML parsing and content extraction
- Requests - HTTP library for web requests
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd Search-Agent-Using-Google-ADK
  ```

- Create a virtual environment:

  ```bash
  python -m venv venv
  ```

- Set up environment variables:

  Go to the `search-agent` folder, then create a `.env` file based on `.env.example` and add your Google API credentials:

  ```env
  GOOGLE_GENAI_USE_VERTEXAI=0
  GOOGLE_API_KEY=your_google_genai_api_key
  GOOGLE_CSE_ID=your_google_custom_search_engine_id
  SEARCH_API_KEY=your_google_search_api_key
  ```
- `GOOGLE_GENAI_USE_VERTEXAI`: Set to 0 to use the standard Google AI API (or 1 for Vertex AI)
- `GOOGLE_API_KEY`: Your Google Gemini API key. Get it from Google AI Studio
- `GOOGLE_CSE_ID`: Your Google Custom Search Engine ID. Create a custom search engine at Google Custom Search
- `SEARCH_API_KEY`: Your Google Search API key for programmatic search access. Get it from Google Cloud Console
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the app:

  ```bash
  adk web
  ```

  Then open the URL shown in the terminal.
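The `.env` values above must be present as environment variables before the agent starts. If you are not using a loader such as `python-dotenv`, a minimal hand-rolled loader could look like the sketch below (the `load_env` helper is illustrative, not part of this project):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: reads KEY=VALUE lines, skips blanks and '#' comments.

    Illustrative helper only; python-dotenv does the same job more robustly.
    """
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
            # Do not clobber variables already set in the real environment.
            os.environ.setdefault(key.strip(), value.strip())
    return values
```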
The project requires the following Python packages:
- `requests>=2.25.0` - HTTP requests library
- `beautifulsoup4>=4.9.0` - HTML/XML parsing
- `google-adk` - Google Agent Development Kit
Search-Agent-Using-Google-ADK/
├── README.md # This file
├── requirements.txt # Python dependencies
└── search-agent/
├── __init__.py # Package initialization
├── agent.py # Main agent implementation
├── __pycache__/ # Python cache directory
└── data/ # Storage for search results and content
The search agent includes the following tools that can be used autonomously:
Performs a web search using Google Custom Search API (CSE).
- Input: `query` - Search query string
- Output: Dictionary with query, source, results, and total count
- Uses: `SEARCH_API_KEY` and `GOOGLE_CSE_ID` environment variables
- Returns per result: title, URL, snippet
Searches GitHub for repositories.
- Input:
  - `query` - Search query string
  - `sort` - Sort order (default: "stars"; options: stars, forks, updated)
- Output: Dictionary with query, total count, and repository results
- Returns per repo: name, URL, description, stars, language, updated_at
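A minimal sketch of the shaping step against GitHub's public repository search API (field names follow GitHub's REST API; the helper names are assumptions, not the project's code):

```python
# GitHub REST API repository search endpoint (no auth needed for light use).
GITHUB_SEARCH = "https://api.github.com/search/repositories"

def build_github_params(query, sort="stars"):
    """Parameters for GET /search/repositories; sort may be stars, forks, or updated."""
    return {"q": query, "sort": sort, "order": "desc", "per_page": 5}

def shape_repos(query, payload):
    """Reduce a raw GitHub search response to the per-repo fields listed above."""
    repos = [
        {
            "name": r.get("full_name"),
            "url": r.get("html_url"),
            "description": r.get("description"),
            "stars": r.get("stargazers_count"),
            "language": r.get("language"),
            "updated_at": r.get("updated_at"),
        }
        for r in payload.get("items", [])
    ]
    return {"query": query, "total_count": payload.get("total_count", 0), "repositories": repos}
```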
Searches Reddit for posts across all subreddits.
- Input:
  - `query` - Search query string
  - `sort` - Sort order (default: "relevance")
- Output: Dictionary with query and post results
- Returns per post: title, URL, subreddit, score, num_comments, author, selftext (truncated to 500 chars)
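Reddit exposes search results as JSON at `/search.json`, which is the likely backing for this tool. A sketch of the response shaping, including the 500-character `selftext` truncation (helper names are illustrative assumptions):

```python
# Reddit's public JSON search endpoint.
REDDIT_SEARCH = "https://www.reddit.com/search.json"

def shape_posts(query, payload, max_text=500):
    """Reduce a raw Reddit listing to the per-post fields listed above,
    truncating selftext to max_text characters."""
    posts = []
    for child in payload.get("data", {}).get("children", []):
        d = child.get("data", {})
        posts.append({
            "title": d.get("title"),
            "url": "https://www.reddit.com" + d.get("permalink", ""),
            "subreddit": d.get("subreddit"),
            "score": d.get("score"),
            "num_comments": d.get("num_comments"),
            "author": d.get("author"),
            "selftext": (d.get("selftext") or "")[:max_text],
        })
    return {"query": query, "posts": posts}
```

The subreddit-scoped variant described next differs mainly in the URL (`/r/<subreddit>/search.json` with `restrict_sr=1`), so the same shaping applies.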
Searches within a specific subreddit.
- Input:
  - `subreddit` - Subreddit name (without the r/ prefix)
  - `query` - Search query string
  - `sort` - Sort order (default: "relevance")
- Output: Dictionary with query, subreddit, and post results
- Returns per post: title, URL, score, num_comments, author, selftext (truncated to 500 chars)
Searches Stack Overflow for questions.
- Input:
  - `query` - Search query string
  - `sort` - Sort order (default: "relevance")
- Output: Dictionary with query, total count, and question results
- Returns per question: title, URL, score, answer_count, is_answered, tags, body (truncated to 300 chars)
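A sketch of the same pattern against the Stack Exchange API's `/search/advanced` method, with the 300-character body truncation (the helper names are assumptions; `filter=withbody` is the API's built-in filter that includes question bodies):

```python
# Stack Exchange API search endpoint, scoped to the stackoverflow site.
SO_SEARCH = "https://api.stackexchange.com/2.3/search/advanced"

def build_so_params(query, sort="relevance"):
    """Parameters for /search/advanced; 'withbody' asks the API to include bodies."""
    return {"q": query, "sort": sort, "order": "desc",
            "site": "stackoverflow", "filter": "withbody"}

def shape_questions(query, payload, max_body=300):
    """Reduce a raw Stack Exchange response to the per-question fields above."""
    questions = [
        {
            "title": q.get("title"),
            "url": q.get("link"),
            "score": q.get("score"),
            "answer_count": q.get("answer_count"),
            "is_answered": q.get("is_answered"),
            "tags": q.get("tags", []),
            "body": (q.get("body") or "")[:max_body],
        }
        for q in payload.get("items", [])
    ]
    return {"query": query, "total": len(questions), "questions": questions}
```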
Searches Wikipedia for articles.
- Input: `query` - Search query string
- Output: Dictionary with query and article results
- Returns per article: title, URL, snippet, content (truncated to 1000 chars)
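Wikipedia search is typically backed by the MediaWiki Action API (`action=query&list=search`). A sketch of the parameter building and result shaping (helper names are illustrative; the truncated `content` field would come from a follow-up `extracts` query, which is omitted here):

```python
# MediaWiki Action API endpoint for English Wikipedia.
WIKI_API = "https://en.wikipedia.org/w/api.php"

def build_wiki_params(query, limit=5):
    """Parameters for a full-text search via action=query, list=search."""
    return {"action": "query", "list": "search", "srsearch": query,
            "srlimit": limit, "format": "json"}

def shape_articles(query, payload):
    """Reduce a raw MediaWiki search response to title, URL, and snippet."""
    articles = [
        {
            "title": hit.get("title"),
            # Article URLs follow the /wiki/<Title_with_underscores> convention.
            "url": "https://en.wikipedia.org/wiki/" + hit.get("title", "").replace(" ", "_"),
            "snippet": hit.get("snippet"),
        }
        for hit in payload.get("query", {}).get("search", [])
    ]
    return {"query": query, "articles": articles}
```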
Extracts and parses content from a URL.
- Input: `url` - URL string
- Output: Dictionary with URL, title, content, and status code
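Since the project lists `requests` and `beautifulsoup4` as dependencies, the extraction step presumably looks something like the sketch below (function names and exact behavior are assumptions; the parsing is split out so it can be exercised without a network call):

```python
import requests
from bs4 import BeautifulSoup

def parse_page(url, html, status_code=200):
    """Strip scripts/styles and collapse whitespace into a plain-text body."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()
    title = soup.title.get_text(strip=True) if soup.title else ""
    text = " ".join(soup.get_text(separator=" ").split())
    return {"url": url, "title": title, "content": text, "status_code": status_code}

def extract_content(url, timeout=10):
    """Fetch a URL and return the parsed dictionary described above."""
    resp = requests.get(url, timeout=timeout, headers={"User-Agent": "search-agent"})
    return parse_page(url, resp.text, resp.status_code)
```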
Saves content to a file in the data directory.
- Input:
  - `file_name` - Name for the file
  - `content` - Content to write
- Output: Status dictionary with file path, success status, and content size
Lists all files in the data directory.
- Output: Dictionary with success status and list of file names
Reads content from a file in the data directory.
- Input: `file_name` - Name of the file to read
- Output: Dictionary with file path, content, size, and status
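Taken together, the three file tools amount to a thin wrapper around the `data/` directory. A stdlib-only sketch of the trio (names and return shapes are assumptions based on the descriptions above):

```python
from pathlib import Path

# Storage directory for search results and extracted content.
DATA_DIR = Path("data")

def save_content(file_name, content):
    """Write content to data/<file_name>, creating the directory if needed."""
    DATA_DIR.mkdir(exist_ok=True)
    path = DATA_DIR / file_name
    path.write_text(content, encoding="utf-8")
    return {"file_path": str(path), "success": True, "size": len(content)}

def list_files():
    """List file names currently stored in the data directory."""
    if not DATA_DIR.exists():
        return {"success": True, "files": []}
    return {"success": True,
            "files": sorted(p.name for p in DATA_DIR.iterdir() if p.is_file())}

def read_content(file_name):
    """Read data/<file_name> back and report its path, content, and size."""
    path = DATA_DIR / file_name
    content = path.read_text(encoding="utf-8")
    return {"file_path": str(path), "content": content,
            "size": len(content), "success": True}
```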
This project is open source and available under the MIT License.
Contributions are welcome! Please feel free to submit pull requests or open issues for bugs and feature requests.