WildGS-SLAM: Monocular Gaussian Splatting SLAM in Dynamic Environments

Jianhao Zheng* · Zihan Zhu* · Valentin Bieri · Marc Pollefeys · Songyou Peng · Iro Armeni

Computer Vision and Pattern Recognition (CVPR) 2025

teaser_image

Given a monocular video sequence captured in the wild with dynamic distractors, WildGS-SLAM accurately tracks the camera trajectory and reconstructs a 3D Gaussian map for static elements, effectively removing all dynamic components.



Table of Contents
  1. Installation
  2. Quick Demo
  3. Run
  4. Evaluation
  5. Acknowledgement
  6. Citation
  7. Contact

Installation

  1. First, make sure you clone the repo with the --recursive flag so that all submodules are downloaded:
git clone --recursive https://github.com/GradientSpaces/WildGS-SLAM.git
cd WildGS-SLAM
  2. Create a new conda environment (the simplest way to manage the environment is Anaconda):
conda create --name wildgs-slam python=3.10
conda activate wildgs-slam
  3. Install CUDA 11.8 and the torch-related packages:
pip install numpy==1.26.3 # do not use numpy >= v2.0.0
conda install --channel "nvidia/label/cuda-11.8.0" cuda-toolkit
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-2.1.0+cu118.html
pip3 install -U xformers==0.0.22.post7+cu118 --index-url https://download.pytorch.org/whl/cu118
  4. Install the remaining dependencies:
python -m pip install -e thirdparty/lietorch/
python -m pip install -e thirdparty/diff-gaussian-rasterization-w-pose/
python -m pip install -e thirdparty/simple-knn/
  5. Check the installation (see also the version check after this list):
python -c "import torch; import lietorch; import simple_knn; import diff_gaussian_rasterization; print(torch.cuda.is_available())"
  6. Now install the droid backends and the other requirements:
python -m pip install -e .
python -m pip install -r requirements.txt
  7. Install MMCV (used by the metric depth estimator):
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu118/torch2.1.0/index.html
  8. Download the pretrained model droid.pth and put it inside the pretrained folder.
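
As a slightly more verbose variant of the check in step 5, the following standalone sketch (not part of the repo) prints the installed versions so you can confirm they match the pinned torch 2.1.0 / CUDA 11.8 stack:

import torch

# Confirm the pinned stack: torch 2.1.0 built against CUDA 11.8.
print("torch:", torch.__version__)            # expect 2.1.0+cu118
print("CUDA (build):", torch.version.cuda)    # expect 11.8
print("GPU available:", torch.cuda.is_available())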

Quick Demo

First, download and unzip the crowd sequence of the Wild-SLAM dataset:

bash scripts_downloading/download_demo_data.sh

Then, run WildGS-SLAM with the following command:

python run.py  ./configs/Dynamic/Wild_SLAM_Mocap/crowd_demo.yaml

If you encounter a CUDA out-of-memory error, a quick fix is to lower the image resolution. For example, add the following lines to your configs/Dynamic/Wild_SLAM_Mocap/crowd_demo.yaml file:

cam:
  H_out: 240
  W_out: 400
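
If you are unsure whether this is needed, you can check the free GPU memory before launching; the snippet below is a standalone convenience check (not part of the repo):

import torch

# Report free vs. total GPU memory; if free memory is low, lowering
# cam.H_out / cam.W_out in the config is a cheap mitigation.
free_b, total_b = torch.cuda.mem_get_info()
print(f"free: {free_b / 1e9:.1f} GB / total: {total_b / 1e9:.1f} GB")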

Run

Wild-SLAM Mocap Dataset (🤗 Hugging Face)

Download the dataset with the commands below. Although WildGS-SLAM is a monocular SLAM system, we also provide depth frames for RGB-D SLAM methods. The commands download only the 10 dynamic sequences; we also provide some static sequences, which you can download from the Hugging Face page if you are interested in testing with them.

bash scripts_downloading/download_wild_slam_mocap_scene1.sh
bash scripts_downloading/download_wild_slam_mocap_scene2.sh

You can run WildGS-SLAM via the following commands:

python run.py  ./configs/Dynamic/Wild_SLAM_Mocap/{config_file} #run a single sequence
bash scripts_run/run_wild_slam_mocap_all.sh #run all dynamic sequences

Wild-SLAM iPhone Dataset (🤗 Hugging Face)

Download the dataset with the following command:

bash scripts_downloading/download_wild_slam_iphone.sh

You can run WildGS-SLAM on any of the sequences via the following command:

python run.py  ./configs/Dynamic/Wild_SLAM_iPhone/{config_file} #run a single sequence

The data was collected with an iPhone and no GT camera poses are available. Therefore, no files related to pose evaluation will be saved.

Bonn Dynamic Dataset

Download the data as below; it will be saved into the ./Datasets/Bonn folder. Note that the script only downloads the 8 sequences reported in the paper. To get other sequences, you can download them from the website of the Bonn Dynamic Dataset.

bash scripts_downloading/download_bonn.sh

You can run WildGS-SLAM via the following commands:

python run.py  ./configs/Dynamic/Bonn/{config_file} #run a single sequence
bash scripts_run/run_bonn_all.sh #run all dynamic sequences

We have prepared config files for the 8 sequences. Note that this dataset requires pose preprocessing, which we have implemented in the dataloader. If you want to test with sequences other than the ones provided, don't forget to specify dataset: 'bonn_dynamic' in your config file. The easiest way is to inherit from bonn_dynamic.yaml, as in the sketch below.
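
For example, a config for an additional sequence might look like the following. The inherit_from and data keys here are assumptions modeled on the provided configs; check an existing file under ./configs/Dynamic/Bonn for the exact schema.

inherit_from: ./configs/Dynamic/Bonn/bonn_dynamic.yaml   # hypothetical key name
dataset: 'bonn_dynamic'
data:                                                    # hypothetical section/field names
  input_folder: ./Datasets/Bonn/rgbd_bonn_my_sequence
  output: ./output/Bonn/my_sequence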

TUM RGB-D (dynamic) Dataset

Download the data (9 dynamic sequences) as below; it will be saved into the ./Datasets/TUM_RGBD folder.

bash scripts_downloading/download_tum.sh

The config files for the 9 dynamic sequences of this dataset can be found under ./configs/Dynamic/TUM_RGBD. You can run WildGS-SLAM as follows:

python run.py  ./configs/Dynamic/TUM_RGBD/{config_file} #run a single sequence
bash scripts_run/run_tum_dynamic_all.sh #run all dynamic sequences

Evaluation

Camera poses

The camera trajectories are automatically evaluated after each run of WildGS-SLAM (if GT poses are provided). Statistics of the results are summarized in {save_dir}/traj/metrics_full_traj.txt. The estimated camera poses are saved in {save_dir}/traj/est_poses_full.txt following the TUM format.
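
The TUM format stores one pose per line as timestamp tx ty tz qx qy qz qw (translation plus an xyzw quaternion). If you want to post-process the trajectory yourself, a minimal standalone loader could look like this (not part of the repo):

import numpy as np

def load_tum_trajectory(path):
    # Parse a TUM-format trajectory: each non-comment line is
    # "timestamp tx ty tz qx qy qz qw".
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            rows.append([float(v) for v in line.split()])
    data = np.array(rows)
    return data[:, 0], data[:, 1:4], data[:, 4:8]  # timestamps, translations, quaternions

# e.g. ts, trans, quats = load_tum_trajectory("{save_dir}/traj/est_poses_full.txt")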

We provide a Python script to summarize the RMSE of the ATE [cm]:

python scripts_run/summarize_pose_eval.py

Novel View Synthesis

Only supported for the Wild-SLAM Mocap dataset. (Todo: this will take some time to be released.)

Acknowledgement

We adapted some code from several awesome repositories, including MonoGS, DROID-SLAM, Splat-SLAM, GlORIE-SLAM, nerf-on-the-go and Metric3D V2. Thanks for making the code publicly available.

Citation

If you find our code or paper useful, please cite

@inproceedings{Zheng2025WildGS,
  author    = {Zheng, Jianhao and Zhu, Zihan and Bieri, Valentin and Pollefeys, Marc and Peng, Songyou and Armeni, Iro},
  title     = {WildGS-SLAM: Monocular Gaussian Splatting SLAM in Dynamic Environments},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2025}
}

Contact

Contact Jianhao Zheng for questions, comments, and bug reports.
