TritonParse


A comprehensive visualization and analysis tool for Triton kernel compilation and launches, helping developers analyze, debug, and understand how Triton kernels are compiled and executed.

🌐 Try it online → https://meta-pytorch.org/tritonparse/

✨ Key Features

πŸ” Visualization & Analysis

  • πŸš€ Launch Difference Analysis - Detect and visualize kernel launch parameter variations
  • πŸ“Š IR Code View - Side-by-side IR viewing with synchronized highlighting and line mapping
  • πŸ”„ File Diff View - Compare kernels across different trace files side-by-side
  • πŸ“ Multi-format IR Support - View TTGIR, TTIR, LLIR, PTX, and AMDGCN
  • 🎯 Interactive Code Views - Click-to-highlight corresponding lines across IR stages

🔧 Reproducer & Debugging Tools

  • 🔄 Standalone Script Generation - Extract any kernel into a self-contained Python script
  • 💾 Tensor Data Reconstruction - Preserve actual tensor data or use statistical approximation
  • 🎯 Custom Templates - Flexible reproducer templates for different workflows
  • 🐛 Bug Isolation - Share reproducible test cases for debugging and collaboration

📊 Structured Logging & Analysis

  • 📝 Compilation & Launch Tracing - Capture detailed events with source mapping
  • 🔍 Stack Trace Integration - Full Python stack traces for debugging
  • 📈 Metadata Extraction - Comprehensive kernel statistics

πŸ› οΈ Developer Tools

  • 🌐 Browser-based Interface - No installation required, works in your browser
  • πŸ”’ Privacy-first - All processing happens locally, no data uploaded

🚀 Quick Start

1. Generate Traces

```python
import tritonparse.structured_logging
import tritonparse.utils

# Initialize logging
tritonparse.structured_logging.init("./logs/", enable_trace_launch=True)

# Your Triton/PyTorch code here
# ... your kernels ...

# Parse and generate trace files
tritonparse.utils.unified_parse("./logs/", out="./parsed_output")
```
πŸ“ Example output (click to expand)
================================================================================
πŸ“ TRITONPARSE PARSING RESULTS
================================================================================
πŸ“‚ Parsed files directory: /scratch/findhao/tritonparse/tests/parsed_output
πŸ“Š Total files generated: 2

πŸ“„ Generated files:
   1. πŸ“ dedicated_log_triton_trace_findhao__mapped.ndjson.gz (7.2KB)
   2. πŸ“ log_file_list.json (181B)
================================================================================
βœ… Parsing completed successfully!
================================================================================

2. Visualize Results

Visit https://meta-pytorch.org/tritonparse/ and open your local trace files (.ndjson.gz format).

🔒 Privacy Note: Your trace files are processed entirely in your browser - nothing is uploaded to any server!

3. Generate Reproducers (Optional)

Extract any kernel into a standalone, executable Python script for debugging or testing:

```shell
# Generate reproducer from first launch event
tritonparse reproduce ./parsed_output/trace.ndjson.gz --line 2 --out-dir repro_output

# Run the generated reproducer
cd repro_output/<kernel_name>/
python repro_*.py
```

Python API:

```python
from tritonparse.reproducer.orchestrator import reproduce

result = reproduce(
    input_path="./parsed_output/trace.ndjson.gz",
    line_index=1,           # Which launch event (1-based)
    out_dir="repro_output",
)
```
🎯 Common Reproducer Use Cases:

  • 🐛 Bug Isolation: Extract a failing kernel into a minimal standalone script
  • ⚡ Performance Testing: Benchmark specific kernels without running the full application
  • 🤝 Team Collaboration: Share reproducible test cases with colleagues or in bug reports
  • 📊 Regression Testing: Compare kernel behavior and performance across different versions
  • 🔍 Deep Debugging: Modify and experiment with kernel parameters in isolation

πŸ› οΈ Installation

For basic usage (trace generation), there are four installation options:

```shell
# Install the nightly version
pip install -U --pre tritonparse
# Install the stable version
pip install tritonparse
# Install from source
git clone https://github.com/meta-pytorch/tritonparse.git
cd tritonparse
pip install -e .
# Install the latest version directly from GitHub
pip install git+https://github.com/meta-pytorch/tritonparse.git
```

Prerequisites: Python ≥ 3.10, Triton ≥ 3.4.0, GPU required (NVIDIA/AMD)

TritonParse relies on new features in Triton. Please install the latest version of Triton:

```shell
pip install triton
```

📚 Complete Documentation

| 📖 Guide | Description |
| --- | --- |
| 🏠 Wiki Home | Complete documentation and quick navigation |
| 📦 Installation | Setup guide for all scenarios |
| 📋 Usage Guide | Complete workflow, reproducer generation, and examples |
| 🌐 Web Interface | Master the visualization interface |
| 🔧 Developer Guide | Contributing and architecture overview |
| 📝 Code Formatting | Formatting standards and tools |
| ❓ FAQ | Quick answers and troubleshooting |

📊 Understanding Triton Compilation

TritonParse visualizes the complete Triton compilation pipeline:

```
Python Source → TTIR → TTGIR → LLIR → PTX/AMDGCN
```

Each stage can be inspected and compared to understand optimization transformations.
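The stage labels above are the same ones shown in the IR Code View. As a quick orientation aid, the pipeline can be written down as ordered (stage, description) pairs; the descriptions are informal summaries for this sketch, not tool output:

```python
# Triton compilation pipeline, in order (labels as listed above).
# Descriptions are informal summaries for orientation.
PIPELINE = [
    ("Python Source", "the @triton.jit kernel as written"),
    ("TTIR", "Triton IR: hardware-agnostic MLIR dialect"),
    ("TTGIR", "Triton GPU IR: adds GPU layout and scheduling information"),
    ("LLIR", "LLVM IR: lowered for the target backend"),
    ("PTX/AMDGCN", "final assembly for NVIDIA (PTX) or AMD (AMDGCN) GPUs"),
]

def stage_index(name):
    """Return the position of a stage label in the pipeline, or -1 if unknown."""
    for i, (stage, _) in enumerate(PIPELINE):
        if stage == name:
            return i
    return -1
```

Knowing this ordering helps when diffing IR: an issue visible in TTGIR but not TTIR, for instance, was introduced by the GPU-specific lowering step between them.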

🤝 Contributing

We welcome contributions! Please see our Developer Guide for:

  • Development setup and prerequisites
  • Code formatting standards (Formatting Guide)
  • Pull request and code review process
  • Testing guidelines
  • Architecture overview

📞 Support & Community

📄 License

This project is licensed under the BSD-3 License - see the LICENSE file for details.


✨ Ready to get started? Visit our Installation Guide or try the online tool directly!