A high-powered content collection and analysis system that uses multiple Large Language Models (LLMs) to collect, validate, and summarize content from various sources. It helps you stay up-to-date with the latest information by automatically filtering and processing content based on your interests.
- **Multi-source Content Collection**
  - RSS feeds from major tech and security blogs
  - Reddit channel monitoring
- **Intelligent Content Processing**
  - Keyword-based content filtering
  - Date-based filtering (excludes content older than 10 days)
  - Multi-LLM validation system for relevance checking
  - Automated content summarization
  - Configurable output formats
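The keyword and date filters above can be pictured with a small sketch. This is illustrative only: the function name `is_relevant` and its signature are assumptions, not the project's actual API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the filtering step: keep an entry only if it is
# newer than the age cutoff AND its title or content mentions a keyword.
KEYWORDS = ["security", "machine learning"]
MAX_AGE_DAYS = 10  # the README's default age filter

def is_relevant(title: str, content: str, published: datetime,
                keywords=KEYWORDS, max_age_days=MAX_AGE_DAYS) -> bool:
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    if published < cutoff:          # date-based filter
        return False
    text = f"{title} {content}".lower()
    return any(kw.lower() in text for kw in keywords)  # keyword filter
```

Matching is case-insensitive here and checks both title and content, mirroring the behaviour the configuration section describes.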
- Install and run Ollama locally:

  ```bash
  # Install Ollama
  curl -fsSL https://ollama.com/install.sh | sh

  # Start the Ollama service
  ollama serve

  # In a new terminal, pull the required model
  ollama pull mistral
  ```
- Install uv (if not already installed):

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```
- Clone the repository:

  ```bash
  git clone https://github.com/shaidar/ion-cannon.git
  cd ion-cannon
  ```
- Create and activate a virtual environment:

  ```bash
  uv venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```
- Install the package:

  ```bash
  uv pip install -e ".[dev]"
  ```
The system is highly configurable through the settings file. You can customize:
- **Content Sources**
  - `RSS_FEEDS`: List of RSS feed URLs to monitor
  - `REDDIT_CHANNELS`: Reddit channels to monitor
- **Reddit API Setup**
  - Go to https://www.reddit.com/prefs/apps
  - Click "create another app..." at the bottom
  - Select "script" as the application type
  - Fill in:
    - name: ion-cannon (or any name you prefer)
    - description: Content collection tool
    - redirect uri: http://localhost:8080
  - Click "create app"
  - Note your:
    - client_id (shown under your app name)
    - client_secret
  - Add the credentials to your settings:

    ```python
    # filepath: ion_cannon/config/settings.py
    REDDIT_CLIENT_ID = "your_client_id"
    REDDIT_CLIENT_SECRET = "your_client_secret"
    REDDIT_USER_AGENT = "ion-cannon:v1.0.0 (by /u/your_username)"
    ```
- **Content Filtering**
  - `KEYWORDS`: List of keywords to filter content (matches both title and content)
  - Default age filter of 10 days (configurable in code)
- **LLM Settings**
  - Configure multiple LLMs for content validation
  - Set up dedicated LLMs for summarization
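The README does not specify how verdicts from multiple validation LLMs are combined. One plausible scheme is a simple majority vote; the sketch below is an illustration of that idea, not the project's actual logic.

```python
from collections import Counter

def majority_vote(votes: list[bool]) -> bool:
    """Combine per-model relevance verdicts; a tie counts as not relevant."""
    tally = Counter(votes)
    return tally[True] > tally[False]
```

Requiring a strict majority (ties rejected) keeps the filter conservative: an item passes only when more models vote "relevant" than "irrelevant".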
Example settings:

```python
# filepath: ion_cannon/config/settings.py
RSS_FEEDS = [
    "https://www.schneier.com/blog/atom.xml",
    "https://krebsonsecurity.com/feed/",
]

KEYWORDS = [
    "security",
    "artificial intelligence",
    "machine learning",
]
```
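The example above omits the LLM settings. A hypothetical fragment might look like the following; the setting names and values here are assumptions for illustration, so check the actual settings module for the real ones.

```python
# filepath: ion_cannon/config/settings.py
# Hypothetical LLM configuration -- names are illustrative, not confirmed
# by this README.
VALIDATION_MODELS = ["mistral", "llama3"]   # models asked to vote on relevance
SUMMARIZATION_MODEL = "mistral"             # model used to write summaries
OLLAMA_BASE_URL = "http://localhost:11434"  # Ollama's default local endpoint
```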
Basic usage:

```bash
# Collect and process content with default settings
ion-cannon collect

# Use multiple LLMs for better validation
ion-cannon collect --multi-llm

# Save processed content to a specific directory
ion-cannon collect --output ./my-reports

# Show verbose output during processing
ion-cannon collect --verbose
```
List configured sources:

```bash
# Show the basic source configuration
ion-cannon sources

# Show detailed source information
ion-cannon sources --verbose
```
Set up the development environment:

```bash
# Install development dependencies
uv pip install -e ".[dev]"

# Run tests
pytest

# Run the linter
ruff check .

# Run the formatter
ruff format .
```
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the GNU General Public License v3 - see the LICENSE file for details.