
ViViDex: Learning Vision-based Dexterous Manipulation from Human Videos

Zerui Chen¹, Shizhe Chen¹, Etienne Arlaud¹, Ivan Laptev², Cordelia Schmid¹

¹ WILLOW, INRIA Paris, France
² MBZUAI

This is the implementation of ViViDex under the SAPIEN simulator, a novel system for learning dexterous manipulation skills from human videos.

Installation 👷

git clone https://github.com/zerchen/vividex_sapien.git

conda create -n rl python=3.10
conda activate rl
conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 pytorch-cuda=12.1 -c pytorch -c nvidia
pip install -r requirements.txt

Usage 🚀

cd tools
# Train the state-based policy
python train.py env.name=seq_name env.norm_traj=True

Available seq_name values can be found at: norm_trajectories. You can also download trained checkpoints here and consult their config files for reference. Once the state-based policies are trained, roll them out with generate_expert_trajs.py and train the visual policy with imitate_train.py, using either behavior cloning (BC) or a diffusion policy.
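For intuition on the imitation step, behavior cloning regresses expert actions from observations collected by rolling out the state-based policies. The following is a minimal conceptual sketch using a linear policy and synthetic data, not the repository's actual code (imitate_train.py uses neural policies); all shapes and names below are illustrative assumptions:

```python
import numpy as np

# Toy expert dataset standing in for rolled-out state-based trajectories.
# Shapes are illustrative only: 256 steps, 8-dim observations, 3-dim actions.
rng = np.random.default_rng(0)
obs = rng.normal(size=(256, 8))
true_w = rng.normal(size=(8, 3))
actions = obs @ true_w  # expert actions

# Behavior cloning with a linear policy: minimize ||obs @ W - actions||^2
# via least squares (a neural network replaces this in practice).
W, *_ = np.linalg.lstsq(obs, actions, rcond=None)

# The cloned policy should reproduce the expert on the training data.
mse = float(np.mean((obs @ W - actions) ** 2))
print(f"BC training MSE: {mse:.2e}")
```

The same supervised-regression structure carries over when the observations are visual features and the policy is a network or diffusion model; only the function class and loss change.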

Real robot 🤖

Please refer to our UR5 ROS code and Allegro hand ROS code as examples for setting up the real robot experiments.

Acknowledgements

Parts of the code are based on DexArt, DexPoint, and 3D-Diffusion-Policy. We thank the authors for sharing their excellent work!

Citation 📝

If you find ViViDex useful for your research, please consider citing our paper:

@inproceedings{chen2025vividex,
  title={{ViViDex}: Learning Vision-based Dexterous Manipulation from Human Videos},
  author={Chen, Zerui and Chen, Shizhe and Arlaud, Etienne and Laptev, Ivan and Schmid, Cordelia},
  booktitle={ICRA},
  year={2025}
}

About

ViViDex implementation under the SAPIEN simulator, ICRA 2025
