Prebuilt, standalone C++ artifacts for spconv — no Python, no pain.
This repository provides a fully automated build & release pipeline that converts
traveller59/spconv + FindDefinition/cumm
into production-ready C++ headers and shared libraries (libspconv.so).
Designed for high-performance inference environments such as:
- TensorRT custom plugins
- CUDA-native C++ inference engines
- Embedded & autonomous driving platforms (Jetson Orin)
💡 If you’ve ever struggled with “spconv build” — this repo is for you.
spconv is an industry-standard sparse convolution library, but:
- ❌ Strongly coupled with Python build/runtime
- ❌ Painful to integrate into pure C++ inference stacks
- ❌ Hard to cross-compile for Jetson / Drive platforms
- ❌ CI-unfriendly for reproducible builds
This project solves exactly that.
✅ Standalone C++ Distribution
- No Python required at runtime or link time
- Clean headers + `libspconv.so`
✅ Production-Grade Architectures
- x86_64: Data center / workstation GPUs
- aarch64: NVIDIA Jetson AGX Orin / OrinX
✅ Inference-Oriented Build
- Optimized for TensorRT plugin integration
- Suitable for custom CUDA operators
✅ Fully Automated CI
- GitHub Actions builds & publishes binaries
- Versioned, reproducible, downloadable artifacts
✅ Flattened Header Layout
- `spconv`, `cumm`, and `tensorview` headers included
- Ready for direct CMake consumption
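With the flattened layout, CMake consumption can be a single imported target. A minimal sketch, assuming the tarball was extracted to a prefix of your choosing (`SPCONV_ROOT` and the paths below are illustrative, not shipped by this repo):

```cmake
# Sketch: consume the extracted tarball as an imported target.
# SPCONV_ROOT is a hypothetical cache variable pointing at the extraction prefix.
set(SPCONV_ROOT "/opt/libspconv" CACHE PATH "libspconv install prefix")

add_library(spconv SHARED IMPORTED)
set_target_properties(spconv PROPERTIES
    IMPORTED_LOCATION "${SPCONV_ROOT}/lib/libspconv.so"
    INTERFACE_INCLUDE_DIRECTORIES "${SPCONV_ROOT}/include")

# Then link your plugin or engine against it, e.g.:
# target_link_libraries(my_trt_plugin PRIVATE spconv CUDA::cudart)
```

An imported target keeps the include path and library location attached to one name, so downstream targets only need a `target_link_libraries` call.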
All binaries are published on the 👉 Releases Page
```
libspconv-{version}-{arch}-cuda{cuda_version}.tar.gz
├── include/
│   ├── spconvlib/
│   │   ├── cumm/        # CUDA MMA & kernel primitives
│   │   └── spconv/      # Sparse convolution core
│   └── tensorview/      # TensorView utilities
└── lib/
    └── libspconv.so     # Shared library
```
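Picking the right artifact follows directly from the naming scheme above. A small shell sketch (the version and CUDA values are examples; substitute the release you actually downloaded):

```shell
# Sketch: derive the release artifact name from the documented scheme
# libspconv-{version}-{arch}-cuda{cuda_version}.tar.gz
VERSION=2.3.6            # hypothetical release version -- check the Releases page
ARCH=$(uname -m)         # x86_64 or aarch64
CUDA_VERSION=11.8        # must match your CUDA toolkit

TARBALL="libspconv-${VERSION}-${ARCH}-cuda${CUDA_VERSION}.tar.gz"
echo "${TARBALL}"        # e.g. libspconv-2.3.6-x86_64-cuda11.8.tar.gz

# After downloading from the Releases page:
# tar -xzf "${TARBALL}" -C /opt/libspconv
```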
- 🔌 TensorRT Plugin Development
- 🚗 Autonomous Driving Perception Pipelines
- ⚡ Sparse CNN Inference Engines
- 🧠 CUDA / HPC Research Projects
- 📦 Binary Distribution in CI/CD
| Architecture | Platform | OS | CUDA | Compute Capability |
|---|---|---|---|---|
| x86_64 | PC / Server | Ubuntu 20.04 | 11.8 | 8.0 / 8.6 / 8.9 |
| aarch64 | Jetson Orin / OrinX | Ubuntu 20.04 (JetPack 5.x / DriveOS 6.0+) | 11.4 | 8.7 |
⚠️ Other CUDA / OS versions may work but are not officially validated.
- Inference-first, not training
- C++ native, not Python-bound
- Reproducible builds, not local hacks
- CI-driven, not manual compilation
This repository does not modify the original algorithms.
- Build & CI Scripts: MIT License
- spconv: Apache 2.0 — traveller59
- cumm: Apache 2.0 — FindDefinition
🙏 Full credit to the original authors.
If you use this in research or production, please cite the original spconv project.
If this saved you days of build pain or unblocked your inference pipeline, a star is the best way to say thanks ⭐
- [ ] TensorRT plugin examples
- [ ] CMake config package (find_package support)
- [ ] CUDA 12.x support