NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations

Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan (Celine) Lin

Accepted at ICML 2023. [ Paper | Video | Slide ]

An Overview of NeRFool

  • Generalizable Neural Radiance Fields (GNeRF) are one of the most promising real-world solutions for novel view synthesis, thanks to their cross-scene generalization capability and thus the possibility of instant rendering on new scenes. While adversarial robustness is essential for real-world applications, little study has been devoted to understanding its implications for GNeRF. In this work, we present NeRFool, which to the best of our knowledge is the first work that sets out to understand the adversarial robustness of GNeRF. Specifically, NeRFool unveils the vulnerability patterns and important insights regarding GNeRF's adversarial robustness, and provides guidelines for defending against our proposed attacks.
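The core attack idea is to perturb the source views that a pretrained GNeRF conditions on, within an L-infinity budget, so that the rendered novel view degrades. Below is a minimal PGD-style sketch of that idea, assuming a PyTorch setup; render_fn, the tensor shapes, and the default hyperparameters are illustrative stand-ins, and the actual attack (including the Adam-based variant used by the commands further down) is implemented in eval_adv.py:

import torch
import torch.nn.functional as F

def attack_source_views(render_fn, src_imgs, src_cameras, tgt_camera, tgt_img,
                        epsilon=8 / 255, step=1 / 255, iters=1000):
    # Perturbation on the source views, optimized by projected gradient ascent.
    delta = torch.zeros_like(src_imgs, requires_grad=True)
    for _ in range(iters):
        # render_fn is a stand-in for rendering the target view with a pretrained GNeRF.
        pred = render_fn(src_imgs + delta, src_cameras, tgt_camera)
        loss = F.mse_loss(pred, tgt_img)  # maximize rendering error at the target view
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()                       # gradient ascent step
            delta.clamp_(-epsilon, epsilon)                         # stay within the L-inf budget
            delta.copy_((src_imgs + delta).clamp(0, 1) - src_imgs)  # keep perturbed pixels valid
        delta.grad.zero_()
    return delta.detach()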

Citation

  • If you find our work interesting or helpful to your research, please consider citing our paper:
@article{fu2023nerfool,
  title={NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations},
  author={Fu, Yonggan and Yuan, Ye and Kundu, Souvik and Wu, Shang and Zhang, Shunyao and Lin, Yingyan},
  journal={arXiv preprint arXiv:2306.06359},
  year={2023}
}

Code Usage

Prerequisites

  • Install the conda environment:
conda env create -f env.yml
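The environment name is defined inside env.yml; activate it before running the commands below. The name "nerfool" here is only an assumption, so check env.yml for the actual one:

conda activate nerfool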
  • Prepare the evaluation data: The evaluation datasets, including LLFF, NeRF Synthetic, and DeepVoxels, are organized in the following structure:
├──data/
    ├──nerf_llff_data/
    ├──nerf_synthetic/
    ├──deepvoxels/

They can be downloaded by running the following command under the data/ directory:

bash download_eval_data.sh
  • Prepare the pretrained model: To evaluate the adversarial robustness of pretrained GNeRFs, you can download the official IBRNet model from here.

  • Update the paths to datasets & pretrained models in the configuration files: configs/eval_*.
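The exact keys depend on the config format inherited from IBRNet; the snippet below only illustrates the kind of edits meant here, using hypothetical key names and placeholder paths rather than the file's verbatim contents:

# configs/eval_llff.txt (illustrative; use the key names actually present in the file)
rootdir = /path/to/NeRFool               # hypothetical key: repository / output root
ckpt_path = /path/to/ibrnet_model.pth    # hypothetical key: pretrained IBRNet checkpoint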

Attacking GNeRFs using NeRFool

  • Attack a specific view direction using a view-specific attack scheme on the LLFF dataset:
CUDA_VISIBLE_DEVICES=0 python eval_adv.py --config ../configs/eval_llff.txt --expname test --num_source_views 4 --adv_iters 1000 --adv_lr 1 --epsilon 8 --use_adam --adam_lr 1e-3 --lr_gamma=1 --view_specific
  • Generate universal adversarial perturbations across different views on the LLFF dataset:
CUDA_VISIBLE_DEVICES=0 python eval_adv.py --config ../configs/eval_llff.txt --expname test --num_source_views 4 --adv_iters 1000 --adv_lr 1 --epsilon 8 --use_adam --adam_lr 1e-3 --lr_gamma=1
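Note that the only difference between the two commands is the --view_specific flag: with it, a perturbation is optimized for one specific target view; without it, a single universal perturbation is optimized to transfer across views. Based on common adversarial-attack conventions (not verified against the code), --epsilon 8 likely denotes an L-infinity budget of 8/255 in pixel space, --adv_iters sets the number of attack iterations, and --use_adam with --adam_lr switches the inner update from signed gradient steps to Adam.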

Acknowledgement

This codebase is built on top of [IBRNet].
