# [CVPR 2026] VAD-GS: Visibility-Aware Densification for 3D Gaussian Splatting in Dynamic Urban Scenes
Project page | Paper | YouTube | Bilibili
## Clone this repository and check out the dev branch

```shell
git clone https://github.com/YikangZhang1641/VAD-GS.git
cd VAD-GS
git checkout -b dev origin/dev
```
## Set up the environment
```shell
# Set up the conda environment
conda create -n vadgs python=3.8
conda activate vadgs

# Install torch (matching your CUDA version)
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113

# Install requirements
pip install -r requirements.txt

# Install submodules
pip install ./submodules/diff-gaussian-rasterization
pip install ./submodules/simple-knn
pip install ./submodules/simple-waymo-open-dataset-reader
pip install ./submodules/MyPropagation
```
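Before training, it can help to confirm that the freshly built submodules are importable. A minimal sketch, assuming the installed package names match the submodule folder names (they may differ in your build):

```python
import importlib.util
from typing import Dict, Iterable

# Module names assumed from the submodule folders above; adjust if the
# packages register under different names.
PACKAGES = ("torch", "diff_gaussian_rasterization", "simple_knn")

def check_imports(names: Iterable[str]) -> Dict[str, bool]:
    """Return {module name: whether Python can locate it} without importing."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    for name, ok in check_imports(PACKAGES).items():
        print(f"{name}: {'ok' if ok else 'MISSING'}")
```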
```
data/
├── nuscenes/
│   ├── raw/
│   └── processed_10Hz/
│       ├── mini/
│       │   ├── 000/
│       │   │   ├── images/
│       │   │   ├── ego_pose/
│       │   │   ├── lidar_depth/
│       │   │   └── ...
│       │   ├── 001/
│       │   └── ...
└── waymo/
    └── ...
```
- We provide a nuScenes example here. Download and extract it to the folder path above.
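After extracting, you can sanity-check the layout before training. A hypothetical helper (not part of the repo), assuming each scene folder contains the `images/`, `ego_pose/`, and `lidar_depth/` sub-directories shown above:

```python
from pathlib import Path
from typing import Dict, List

# Sub-directory names assumed from the layout above.
REQUIRED_SUBDIRS = ("images", "ego_pose", "lidar_depth")

def missing_subdirs(scene_dir: Path) -> List[str]:
    """Names of required sub-directories absent from one scene folder."""
    return [name for name in REQUIRED_SUBDIRS if not (scene_dir / name).is_dir()]

def check_split(split_dir: Path) -> Dict[str, List[str]]:
    """Map each scene folder (e.g. '000', '001') to its missing sub-dirs."""
    return {scene.name: missing_subdirs(scene)
            for scene in sorted(split_dir.iterdir()) if scene.is_dir()}

if __name__ == "__main__":
    report = check_split(Path("data/nuscenes/processed_10Hz/mini"))
    for scene, missing in report.items():
        print(f"{scene}: {'ok' if not missing else 'missing ' + ', '.join(missing)}")
```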
For training:

```shell
python train.py --config configs/example/nuscenes_train_000.yaml
```

To generate visual outputs:

```shell
python render.py --config configs/example/nuscenes_train_000.yaml mode evaluate
```

For evaluation:

```shell
python metrics.py --config configs/example/nuscenes_train_000.yaml
```
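To process several scenes in a row, the three commands above can be chained in a small batch script. A sketch, assuming configs follow the `configs/example/nuscenes_train_*.yaml` naming; the loop itself is not part of the repo:

```shell
#!/usr/bin/env bash
# Hypothetical batch runner: train, render, and evaluate each example
# nuScenes config in sequence.
set -euo pipefail
shopt -s nullglob   # skip the loop cleanly if no configs match

run_pipeline() {
    local config="$1"
    python train.py --config "$config"
    python render.py --config "$config" mode evaluate
    python metrics.py --config "$config"
}

for config in configs/example/nuscenes_train_*.yaml; do
    echo "=== ${config} ==="
    run_pipeline "$config"
done
```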
