topdown-profiler exposes an MCP (Model Context Protocol) server that enables AI assistants to collect, query, and analyze Intel Top-Down Microarchitecture Analysis (TMA) data.
Add to your project's `.mcp.json`:

```json
{
  "mcpServers": {
    "topdown": {
      "command": "topdown",
      "args": ["mcp-serve"]
    }
  }
}
```

Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `~/.config/Claude/claude_desktop_config.json` (Linux):
```json
{
  "mcpServers": {
    "topdown": {
      "command": "topdown",
      "args": ["mcp-serve"],
      "env": {
        "TOPDOWN_DB_PATH": "/path/to/your/data.db"
      }
    }
  }
}
```

To serve over HTTP instead:

```json
{
  "mcpServers": {
    "topdown": {
      "command": "topdown",
      "args": ["mcp-serve", "--transport", "http", "--port", "8000"]
    }
  }
}
```

Run a TMA collection for a process.
Parameters:
- `process_name` (str, required): Process name to profile (e.g. `redis-server`)
- `level` (int, default 2): TMA analysis level 1-6
- `duration_seconds` (int, default 30): Collection duration
- `system_wide` (bool, default false): Profile all CPUs
- `labels` (dict, optional): Labels like `{"git_branch": "unstable", "test_name": "set-get-100"}`
Example prompt: "Profile redis-server for 30 seconds at level 3 with labels git_branch=unstable and test_name=set-get-100"
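Outside an assistant, the same collection can be requested with a raw MCP `tools/call` JSON-RPC message. The tool name `run_collection` below is a placeholder (discover the real names via the server's `tools/list` response); the argument names match the parameters above:

```python
import json

# Hypothetical MCP tools/call request. The tool name "run_collection"
# is a placeholder -- query the server's tools/list for actual names.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_collection",
        "arguments": {
            "process_name": "redis-server",
            "level": 3,
            "duration_seconds": 30,
            "labels": {"git_branch": "unstable", "test_name": "set-get-100"},
        },
    },
}
print(json.dumps(request, indent=2))
```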
Find ranked CPU bottlenecks from stored data.
Parameters:
- `process_name` (str, optional): Filter by process
- `labels` (dict, optional): Filter by labels
- `last_hours` (float, default 24): Time window
- `min_percentage` (float, default 5): Minimum threshold
- `top_n` (int, default 10): Max results
Example prompt: "What are the top bottlenecks for redis-server on branch unstable?"
Find which benchmarks/runs hit a specific TMA bottleneck.
Parameters:
- `metric_name` (str, required): TMA metric (e.g. `DRAM_Bound`, `L3_Bound`)
- `min_pct` (float, default 5): Minimum percentage
- `labels` (dict, optional): Label filters
- `last_hours` (float, default 24): Time window
Example prompt: "Which benchmarks are DRAM-bound above 15%?"
VTune-style pipeline slot funnel showing where 100% of CPU slots go.
Parameters:
- `run_id` (str, optional): Specific run
- `process_name` (str, optional): Filter by process
- `labels` (dict, optional): Filter by labels
- `level` (int, default 3): Max drill-down depth
Example prompt: "Show me the pipeline funnel for redis-server running set-get-100"
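To illustrate what the funnel drills into (the tree shape and percentages below are made up, not tool output), a sketch that follows the dominant child at each TMA level until it reaches a leaf:

```python
# Illustrative only: a made-up TMA tree whose top level sums to 100%
# of pipeline slots. The funnel always descends into the largest child.
tree = {
    "Retiring": 35.0,
    "Bad_Speculation": 5.0,
    "Frontend_Bound": 15.0,
    "Backend_Bound": {
        "Core_Bound": 10.0,
        "Memory_Bound": {"L3_Bound": 12.0, "DRAM_Bound": 23.0},
    },
}

def total(node):
    """Percentage of slots under a node (leaf values are percentages)."""
    return node if isinstance(node, float) else sum(total(v) for v in node.values())

def funnel(node, path=()):
    """Yield (dotted_path, pct) along the dominant branch of the tree."""
    if isinstance(node, float):
        return
    name, child = max(node.items(), key=lambda kv: total(kv[1]))
    yield ".".join(path + (name,)), total(child)
    yield from funnel(child, path + (name,))

for metric, pct in funnel(tree):
    print(f"{metric}: {pct:.1f}% of slots")
```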
Compare two profiling runs by ID.
Parameters:
- `run_id_a` (str, required): Baseline run ID
- `run_id_b` (str, required): Comparison run ID
Example prompt: "Compare run abc123 with run def456"
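Conceptually, a run comparison reduces to per-metric deltas between the two runs' TMA percentages. A minimal sketch with invented numbers (the real tool operates on stored run IDs, not raw dicts):

```python
# Made-up metric snapshots for two runs; keys are TMA metric paths.
run_a = {"Backend_Bound.Memory_Bound.DRAM_Bound": 23.1, "Retiring": 35.2}
run_b = {"Backend_Bound.Memory_Bound.DRAM_Bound": 14.6, "Retiring": 41.8}

# Delta in percentage points (pp), b relative to a.
deltas = {m: round(run_b.get(m, 0.0) - run_a.get(m, 0.0), 1)
          for m in sorted(set(run_a) | set(run_b))}
for metric, delta in deltas.items():
    print(f"{metric}: {delta:+.1f} pp")
```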
Compare latest runs matching two different label sets.
Parameters:
- `label_a` (dict, required): Baseline labels (e.g. `{"build_variant": "release"}`)
- `label_b` (dict, required): Comparison labels (e.g. `{"build_variant": "debug"}`)
- `process_name` (str, optional): Process filter
Example prompt: "Compare release vs debug builds of redis-server"
Explain a TMA metric with description, typical causes, and tuning hints.
Parameters:
- `metric_name` (str, required): Full path or leaf name (e.g. `DRAM_Bound` or `Backend_Bound.Memory_Bound.DRAM_Bound`)
Example prompt: "Explain what L3_Bound means and how to fix it"
List recent profiling runs.
Parameters:
- `process_name` (str, optional): Filter by process
- `labels` (dict, optional): Filter by labels
- `last_hours` (float, default 24): Time window
Example prompt: "Show me all profiling runs from the last 24 hours"
| URI | Description |
|---|---|
| `topdown://runs/{run_id}/tree` | Full TMA hierarchy for a run |
| `topdown://metrics` | All 120+ known TMA metrics |
| `topdown://methodology` | Intel TMA methodology overview |
Runs are tagged with auto-detected labels (arch, kernel, cpu, hostname) and user-supplied labels. The AI can filter queries by any combination:
- "What are the bottlenecks for branch unstable on test set-get-100?" → filters by `git_branch` + `test_name`
- "Compare oss-standalone vs oss-cluster topology" → filters by `topology`
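The filtering semantics amount to a subset match on the label dictionary: a run matches when every requested label is present with the same value. A sketch with illustrative run records (field names are assumptions, not the stored schema):

```python
# Illustrative run records; the real storage schema may differ.
runs = [
    {"id": "abc123", "labels": {"git_branch": "unstable", "test_name": "set-get-100"}},
    {"id": "def456", "labels": {"git_branch": "main", "test_name": "set-get-100"}},
]

def match(run, wanted):
    """True when every requested label matches the run's labels."""
    return all(run["labels"].get(k) == v for k, v in wanted.items())

hits = [r["id"] for r in runs if match(r, {"git_branch": "unstable"})]
print(hits)  # ['abc123']
```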
| Variable | Description | Default |
|---|---|---|
| `TOPDOWN_DB_PATH` | SQLite database path | `~/.topdown/data.db` |
| `TOPDOWN_BACKEND` | Storage backend (`sqlite` or `postgresql`) | `sqlite` |
| `TOPDOWN_DSN` | PostgreSQL connection string | - |
| `TOPDOWN_TOPLEV_PATH` | Path to `toplev.py` | `toplev.py` |
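A client-side sketch of resolving these settings, assuming the usual environment-variable-with-fallback pattern (the defaults mirror the table; how topdown resolves them internally is not shown here):

```python
import os

# Defaults mirror the configuration table; environment variables override.
db_path = os.environ.get("TOPDOWN_DB_PATH",
                         os.path.expanduser("~/.topdown/data.db"))
backend = os.environ.get("TOPDOWN_BACKEND", "sqlite")
dsn = os.environ.get("TOPDOWN_DSN")  # only needed for the postgresql backend
toplev_path = os.environ.get("TOPDOWN_TOPLEV_PATH", "toplev.py")

if backend == "postgresql" and not dsn:
    raise SystemExit("TOPDOWN_DSN is required when TOPDOWN_BACKEND=postgresql")
print(f"backend={backend} db={db_path}")
```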