A Multi-Object Tracking (MOT) system using YOLOv8 ONNX + ByteTrack, optimized for CPU-only deployment on OpenShift.
This is a proof-of-concept for a lightweight, containerized MOT system designed for:
- Small footprint: CPU-only ONNXRuntime stack (~200-400 MB container vs 12+ GB GPU stacks)
- Fast iteration: Quick builds and deployments for development
- Scalable architecture: One pod per RTSP stream
- Production-ready: Easy GPU/TensorRT migration path without changing app I/O
- Detector: YOLOv8n in ONNX format (~13 MB model)
- Runtime: ONNXRuntime with CPUExecutionProvider
- Tracker: Standalone ByteTrack (pure Python, no PyTorch dependency)
- Video I/O: OpenCV with RTSP support
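To make the detector stack above concrete, here is a minimal preprocessing sketch for feeding a frame to the ONNX model. It is an illustration, not the project's actual code: a NumPy-only nearest-neighbour resize stands in for `cv2.resize`, and the real `track.py` may letterbox rather than stretch.

```python
import numpy as np

def preprocess(frame_bgr: np.ndarray, img_size: int = 640) -> np.ndarray:
    """Resize a BGR frame to the model input and return an NCHW float32 tensor."""
    h, w = frame_bgr.shape[:2]
    # Nearest-neighbour resize with plain NumPy (cv2.resize in practice).
    rows = (np.arange(img_size) * h // img_size).clip(0, h - 1)
    cols = (np.arange(img_size) * w // img_size).clip(0, w - 1)
    resized = frame_bgr[rows][:, cols]
    rgb = resized[:, :, ::-1]            # BGR -> RGB
    chw = rgb.transpose(2, 0, 1)         # HWC -> CHW
    return chw[np.newaxis].astype(np.float32) / 255.0  # NCHW, scaled to [0, 1]
```

The resulting tensor would then go to an `onnxruntime.InferenceSession` created with `providers=["CPUExecutionProvider"]`; Ultralytics YOLOv8 exports typically name the input `"images"`.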
```
bytetrack-onyx/
├── models/                    # Place yolov8n.onnx here (~13 MB)
├── app/
│   ├── track.py               # Main tracking application
│   └── requirements.txt       # Python dependencies
├── k8s/                       # OpenShift/Kubernetes manifests
│   ├── configmap-streams.yaml
│   └── deployment.yaml
├── Dockerfile                 # CPU-only container image
└── pyproject.toml             # UV/pip package configuration
```
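The standalone ByteTrack tracker listed above hinges on two-stage, score-aware association: high-confidence detections are matched to tracks first, then low-confidence ones get a second chance against the still-unmatched tracks. A minimal greedy sketch of that idea (the real implementation also uses Kalman-filter prediction and optimal linear assignment rather than greedy IoU matching):

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, dets, scores, high=0.5, iou_thr=0.3):
    """Two-stage ByteTrack-style association: high-score detections first,
    then low-score detections against the tracks still unmatched."""
    high_ids = [i for i in range(len(dets)) if scores[i] >= high]
    low_ids = [i for i in range(len(dets)) if scores[i] < high]
    matches, free_tracks = [], list(range(len(tracks)))
    for det_ids in (high_ids, low_ids):      # the two association stages
        for d in det_ids:
            best, best_iou = None, iou_thr
            for t in free_tracks:
                v = iou(tracks[t], dets[d])
                if v > best_iou:
                    best, best_iou = t, v
            if best is not None:
                matches.append((best, d))
                free_tracks.remove(best)
    return matches, free_tracks
```

The second pass is what lets ByteTrack keep IDs alive through partial occlusions, where detector confidence drops below the usual filtering threshold.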
- UV (Python package manager): `curl -LsSf https://astral.sh/uv/install.sh | sh`
- YOLOv8n ONNX model: download from Hugging Face and place at `models/yolov8n.onnx`
- Container runtime: Podman or Docker
- OpenShift CLI: `oc` (for deployment)
```bash
# Install dependencies with UV
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
uv pip install -e ".[dev]"

# Or use pip directly
pip install -r app/requirements.txt

# Run locally (requires model and video source)
export STREAM_URL="rtsp://user:pass@camera/stream1"
export MODEL_PATH="models/yolov8n.onnx"
python app/track.py
```

The project uses GitHub Actions for automated builds and deployments:
- Trigger: Push to main branch or create a git tag
- Registry: GitHub Container Registry (ghcr.io)
- Security: Automatic Trivy vulnerability scanning
- Caching: BuildKit layer caching for fast rebuilds
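Based on the triggers, registry, scanning, and caching described above, the workflow file (kept under `.github/workflows/`) might look roughly like the following sketch; the authoritative version is the one in the repository, documented in CI-CD.md:

```yaml
name: build-and-push
on:
  push:
    branches: [main]
    tags: ["v*"]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          platforms: linux/amd64
          tags: ghcr.io/daniwk/mot-lite:cpu
          cache-from: type=gha          # BuildKit layer cache
          cache-to: type=gha,mode=max
      - uses: aquasecurity/trivy-action@master
        with:
          image-ref: ghcr.io/daniwk/mot-lite:cpu
```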
Workflow:

```bash
# 1. Commit and push changes
git add .
git commit -m "feat: add new feature"
git push origin main

# 2. GitHub Actions automatically:
#    - Builds AMD64 image
#    - Scans for vulnerabilities
#    - Pushes to ghcr.io/daniwk/mot-lite:cpu

# 3. Deploy to OpenShift
oc set image deployment/mot-lite-stream1 \
  tracker=ghcr.io/daniwk/mot-lite:cpu \
  -n mot-bytetrack
```

See CI-CD.md for detailed documentation.
```bash
# Standard build
docker build -t ghcr.io/daniwk/mot-lite:cpu .

# Optimized build with BuildKit
DOCKER_BUILDKIT=1 docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t ghcr.io/daniwk/mot-lite:cpu .

# Push to registry (requires authentication)
docker push ghcr.io/daniwk/mot-lite:cpu
```

```bash
# Create namespace
oc new-project mot-dev

# Update stream URLs in k8s/configmap-streams.yaml
# Update image registry in k8s/deployment.yaml

# Deploy
oc apply -f k8s/configmap-streams.yaml
oc apply -f k8s/deployment.yaml

# Monitor logs
oc logs -f deployment/mot-lite-stream1 -n mot-dev

# Check all streams
oc get pods -n mot-dev -l app=mot-lite
```

Environment variables control runtime behavior:
- `STREAM_URL`: RTSP/video source (required)
- `MODEL_PATH`: Path to ONNX model (default: `/models/yolov8n.onnx`)
- `IMG`: Input size for detection (default: `640`)
- `CONF`: Confidence threshold (default: `0.5`)
- `NMS`: Non-maximum suppression threshold (default: `0.45`)
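A sketch of how `track.py` might read these variables, with the names and defaults listed above (the actual parsing in the app may differ, e.g. in validation or types):

```python
import os

def load_config(env=os.environ) -> dict:
    """Collect runtime settings from environment variables."""
    return {
        "stream_url": env["STREAM_URL"],  # required; raises KeyError if unset
        "model_path": env.get("MODEL_PATH", "/models/yolov8n.onnx"),
        "img": int(env.get("IMG", "640")),
        "conf": float(env.get("CONF", "0.5")),
        "nms": float(env.get("NMS", "0.45")),
    }
```

In the OpenShift deployment these values come from `k8s/configmap-streams.yaml`, so each pod picks up its own stream without an image rebuild.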
The project includes a fully automated CI/CD pipeline using GitHub Actions:
- Automated builds on every push to main branch
- Pull request testing (builds but doesn't push)
- Security scanning with Trivy
- Multi-tag support (cpu, latest, semantic versioning)
- BuildKit caching for 50-95% faster rebuilds
- Deployment commands auto-generated in build summary
See CI-CD.md for complete documentation.
- GPU acceleration: Swap to TensorRT backend for 30+ FPS
- API endpoint: Add FastAPI for metrics/tracks export
- Persistent tracking: Store track data in database
- StatefulSet: Auto-map pod ordinals to stream keys
- Health metrics: Prometheus/Grafana monitoring
- Multi-stage Dockerfile: Use `Dockerfile.optimized` for smaller images