94 changes: 90 additions & 4 deletions README.md
@@ -14,20 +14,106 @@ We use `python 3.7.6`. You should install the newest `pytorch chumpy vctoolkit o

*Installing `pytorch` with CUDA is recommended. The system can only run at ~40 fps on a CPU (i7-8700) and ~90 fps on a GPU (GTX 1080Ti).*

```
conda create -n "imuposer" python=3.7
conda activate imuposer
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=10.2 -c pytorch

python -m pip install -r requirements.txt
python -m pip install -e src/
```
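
To check that the CUDA build of `pytorch` was picked up (an optional sanity check):
```
import torch

# Prints the installed version and True if a CUDA-enabled build and a GPU are available.
print(torch.__version__, torch.cuda.is_available())
```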

We use `<ROOT>` to refer to the root path of this repository in your file system. Prepare folders in the following format:
```
<ROOT>
└── TransPose
└── data
└── dataset_raw
```
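
If these folders do not exist yet, a minimal sketch to create them (the placeholder path is an assumption; substitute your actual `<ROOT>`):
```
import os

# Placeholder for <ROOT>; substitute the real repository root on your machine.
root = '/path/to/ROOT'
os.makedirs(os.path.join(root, 'TransPose', 'data', 'dataset_raw'), exist_ok=True)
```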

### Prepare SMPL body model

1. Register an account at https://smpl.is.tue.mpg.de/download.php and download the SMPL model: click `SMPL for Python`, then `Download version 1.0.0 for Python 2.7 (female/male, 10 shape PCs)`. The file `SMPL_python_v.1.0.0.zip` will be downloaded. Put it in `<ROOT>/TransPose/data/dataset_raw` and unzip it:

```
<ROOT>/data/dataset_raw$ unzip SMPL_python_v.1.0.0.zip
<ROOT>/data/dataset_raw$ rm -r __MACOSX/
<ROOT>/data/dataset_raw$ rm SMPL_python_v.1.0.0.zip
<ROOT>/data/dataset_raw$ mv smpl/models ../../
<ROOT>/data/dataset_raw$ rm -r smpl/
```
2. In `config.py`, set `paths.smpl_file` to the model path.

### Prepare pre-trained network weights

1. Download weights from [here](https://xinyu-yi.github.io/TransPose/files/weights.pt).
```
<ROOT>/data/dataset_raw$ wget https://xinyu-yi.github.io/TransPose/files/weights.pt
<ROOT>/data/dataset_raw$ mv weights.pt ..
```
2. In `config.py`, set `paths.weights_file` to the weights path.
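
For reference, a sketch of what these two settings might look like in `config.py`. The attribute names come from the repository; the values are assumptions based on where the files were moved in the steps above (the SMPL model filename is the assumed name from the SMPL 1.0.0 release):
```
class paths:
    # Assumed locations after `mv smpl/models ../../` and `mv weights.pt ..` above.
    smpl_file = 'models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl'  # SMPL body model (assumed filename)
    weights_file = 'data/weights.pt'                           # pre-trained network weights
```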

### Prepare test datasets (optional)

1. Register an account and download the DIP-IMU dataset from [here](https://dip.is.tue.mpg.de/): click `DIP IMU AND OTHERS - DOWNLOAD SERVER 1` (approx. 2.5 GB). We use the raw (unnormalized) data.
```
<ROOT>/data/dataset_raw$ unzip DIPIMUandOthers.zip
<ROOT>/data/dataset_raw$ rm DIPIMUandOthers.zip
<ROOT>/data/dataset_raw/DIP_IMU_and_Others$ unzip DIP_IMU.zip
<ROOT>/data/dataset_raw/DIP_IMU_and_Others$ mv DIP_IMU ..
<ROOT>/data/dataset_raw$ rm -r DIP_IMU_and_Others
```
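
To confirm the extracted DIP-IMU files are readable, a minimal sketch that opens one recording and prints its keys. The folder layout (one subject sub-folder per set of recordings) is an assumption; adjust the glob pattern if your layout differs:
```
import glob
import pickle

# Pick any one of the raw DIP-IMU recordings extracted above.
files = sorted(glob.glob('data/dataset_raw/DIP_IMU/*/*.pkl'))
assert files, 'no .pkl files found -- check the extraction path'

with open(files[0], 'rb') as f:
    data = pickle.load(f, encoding='latin1')  # the pickles were written with Python 2
print(files[0], list(data.keys()))
```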
Follow the `Prepare AMASS and DIP_IMU` or `3. Download training data` section of https://github.com/bryanbocao/IMUPoser/blob/main/README.md to download the AMASS dataset. Note that `<ROOT>/data/raw` in the IMUPoser repository corresponds to `<ROOT>/data/dataset_raw` in this repository.

2. Download the TotalCapture dataset from https://cvssp.org/data/totalcapture/data. Select `Vicon Groundtruth - The real world position and orientation`. Files for the following five subjects (`Subject1 Subject2 Subject3 Subject4 Subject5`) will be downloaded:
```
s1_vicon_pos_ori.tar.gz
s2_vicon_pos_ori.tar.gz
s3_vicon_pos_ori.tar.gz
s4_vicon_pos_ori.tar.gz
s5_vicon_pos_ori.tar.gz
```
Put them into this folder: `<ROOT>/data/dataset_raw/TotalCapture/official`. Untar the files:
```
<ROOT>/data/dataset_raw/TotalCapture/official$ for file in *.tar.gz; do tar -xvzf "$file" -C .; done
<ROOT>/data/dataset_raw/TotalCapture/official$ rm -r *.tar.gz
```

Where to find the ```DIP_recalculate``` data:

https://github.com/Xinyu-Yi/TransPose/blob/4963e71ae33c3ea5ac24fcc053015804e9705ad1/config.py#L21
```
# DIP recalculates the SMPL poses for TotalCapture dataset. You should acquire the pose data from the DIP authors.
raw_totalcapture_dip_dir = 'data/dataset_raw/TotalCapture/DIP_recalculate' # contain ground-truth SMPL pose (*.pkl)
```
The goal here is to load the ground-truth SMPL poses and IMU data for the TotalCapture dataset. Useful pointers:
```
https://github.com/eth-ait/dip18?tab=readme-ov-file
https://github.com/eth-ait/aitviewer/blob/main/examples/load_DIP_TC.py
https://github.com/eth-ait/aitviewer/blob/8fb6d4661303579ef04b3bf63ac907dbaecff2ff/examples/load_DIP_TC.py#L14
```

From https://dip.is.tue.mpg.de/download.php, select ```ORIGINAL TotalCapture DATA W/ CORRESPONDING REFERENCE SMPL Poses (wo/ normalization, approx. 250MB)```. The file named ```TotalCapture_Real_60FPS.zip``` will be downloaded.
```
<ROOT>/data/dataset_raw/TotalCapture/DIP_recalculate$ unzip TotalCapture_Real_60FPS.zip
Archive: TotalCapture_Real_60FPS.zip
creating: TotalCapture_Real_60FPS/
inflating: TotalCapture_Real_60FPS/s4_acting3.pkl
inflating: TotalCapture_Real_60FPS/s5_freestyle3.pkl
inflating: TotalCapture_Real_60FPS/s3_freestyle3.pkl
inflating: TotalCapture_Real_60FPS/s3_acting2.pkl
inflating: TotalCapture_Real_60FPS/s1_rom3.pkl
...
<ROOT>/data/dataset_raw/TotalCapture/DIP_recalculate$ rm TotalCapture_Real_60FPS.zip
<ROOT>/data/dataset_raw/TotalCapture/DIP_recalculate$ mv TotalCapture_Real_60FPS/* .
<ROOT>/data/dataset_raw/TotalCapture/DIP_recalculate$ rm -r TotalCapture_Real_60FPS
```
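
As a quick check, a minimal sketch of loading one of the extracted pickles. The `gt`, `ori`, and `acc` key names are assumptions based on the DIP release; the loop below only prints whatever keys are actually present:
```
import pickle

import numpy as np

path = 'data/dataset_raw/TotalCapture/DIP_recalculate/s1_rom3.pkl'
with open(path, 'rb') as f:
    data = pickle.load(f, encoding='latin1')  # the pickles were written with Python 2

# Assumed keys: 'gt' (SMPL poses), 'ori' (IMU orientations), 'acc' (IMU accelerations).
for key in data:
    print(key, np.asarray(data[key]).shape)
```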

The ground-truth SMPL poses used in our evaluation are provided by the DIP authors, so you may also need to contact them to obtain the data.

3. In `config.py`, set `paths.raw_dipimu_dir` to the DIP-IMU dataset path; set `paths.raw_totalcapture_dip_dir` to the TotalCapture SMPL poses (from the DIP authors) path; and set `paths.raw_totalcapture_official_dir` to the TotalCapture official `gt` path. Please refer to the comments in the code for more details; a sketch of these settings is shown below.
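
A sketch of these settings, assuming the directory layout produced by the steps above (only the `raw_totalcapture_dip_dir` value is quoted from the repository's `config.py`; the other two values are assumptions):
```
class paths:
    raw_dipimu_dir = 'data/dataset_raw/DIP_IMU'                                  # raw DIP-IMU data (assumed layout)
    raw_totalcapture_dip_dir = 'data/dataset_raw/TotalCapture/DIP_recalculate'   # ground-truth SMPL poses (*.pkl)
    raw_totalcapture_official_dir = 'data/dataset_raw/TotalCapture/official'     # official Vicon ground truth (assumed layout)
```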

### Run the example
@@ -76,7 +76,7 @@ The saved files are:
- `shape.pt`, which contains a list of tensors in shape [10] for the subject shape (SMPL parameter).
- `tran.pt`, which contains a list of tensors in shape [#frames, 3] for the global (root) 3D positions.
- `vacc.pt`, which contains a list of tensors in shape [#frames, 6, 3] for 6 synthetic IMU acceleration measurements (global).
- `vrot.pt`, which contains a list of tensors in shape [#frames, 6, 3, 3] for 6 synthetic IMU orientation measurements (global).

All sequences are at 60 fps.
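
A minimal sketch for inspecting the preprocessed tensors. The `work_dir` value is an assumption; point it at whichever folder the preprocessing script wrote its outputs to:
```
import os

import torch

work_dir = 'data/dataset_work/DIP_IMU'  # assumed output folder; adjust to your setup

vacc = torch.load(os.path.join(work_dir, 'vacc.pt'))  # list of [#frames, 6, 3] tensors
vrot = torch.load(os.path.join(work_dir, 'vrot.pt'))  # list of [#frames, 6, 3, 3] tensors
tran = torch.load(os.path.join(work_dir, 'tran.pt'))  # list of [#frames, 3] tensors
print(len(vacc), vacc[0].shape, vrot[0].shape, tran[0].shape)
```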

9 changes: 9 additions & 0 deletions config.py
@@ -44,3 +44,12 @@ class joint_set:
```
acc_scale = 30
vel_scale = 3


class train:
    '''
    Batch size of 256 using an Adam [Kingma and Ba 2014] optimizer with a learning rate lr = 1e-3, as in the paper.
    '''
    epochs = 3000
    batch_size = 256
    learning_rate = 1e-3  # 10^-3, matching the paper
    optimizer_str = 'Adam'
```
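
For reference, a sketch of how these settings might be consumed when constructing the optimizer; the `model` here is a hypothetical stand-in for the actual network:
```
import torch

from config import train

model = torch.nn.Linear(72, 72)  # hypothetical placeholder; substitute the real network

optimizer_cls = getattr(torch.optim, train.optimizer_str)           # resolves to torch.optim.Adam
optimizer = optimizer_cls(model.parameters(), lr=train.learning_rate)  # lr = 1e-3, as in the paper
```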
2 changes: 1 addition & 1 deletion preprocess.py
@@ -38,7 +38,7 @@ def _syn_acc(v):
```
data_pose, data_trans, data_beta, length = [], [], [], []
for ds_name in amass_data:
    print('\rReading', ds_name)
    for npz_fname in tqdm(glob.glob(os.path.join(paths.raw_amass_dir, ds_name, '*/*_poses.npz'))):
        try: cdata = np.load(npz_fname)
        except: continue
```