
Commit 03f1a8c

add evaluation codes
1 parent 23f9f25 commit 03f1a8c

31 files changed: +161884 -1 lines changed

.gitignore (+2)

.idea*
*~

README.md (+4 -1)

@@ -143,7 +143,7 @@ _Ayan Sinha\*, Chiho Choi\*, Karthik Ramani_
 ##### Realtime and robust hand tracking from depth. [\[PDF\]](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/yichenw-cvpr14_handtracking.pdf) [\[Project Page\]](https://www.microsoft.com/en-us/research/people/yichenw/)
 *Chen Qian, Xiao Sun, Yichen Wei, Xiaoou Tang and Jian Sun*

-##### Latent Regression Forest: Structured Estimation of 3D Hand Posture. [\[PDF\]](http://www.iis.ee.ic.ac.uk/dtang/cvpr_14.pdf) [\[Project Page\]](http://www.iis.ee.ic.ac.uk/dtang/hand.html)
+##### Latent regression forest: Structured estimation of 3d articulated hand posture. [\[PDF\]](http://www.iis.ee.ic.ac.uk/dtang/cvpr_14.pdf) [\[Project Page\]](http://www.iis.ee.ic.ac.uk/dtang/hand.html)
 *Danhang Tang, Hyung Jin Chang, Alykhan Tejani, T-K. Kim*

 ##### User-specific hand modeling from monocular depth sequences. [\[PDF\]](http://www.cs.toronto.edu/~jtaylor/papers/CVPR2014-UserSpecificHandModeling.pdf) [\[Project Page\]](https://www.microsoft.com/en-us/research/publication/user-specific-hand-modeling-from-monocular-depth-sequences/)

@@ -217,6 +217,9 @@ _Ayan Sinha\*, Chiho Choi\*, Karthik Ramani_
 *[Jonathan Tompson](http://cims.nyu.edu/~tompson/), New York University*

 ## Other Related Papers
+##### \[2017 Neurocomputing\] Multi-task, Multi-domain Learning: application to semantic segmentation and pose regression.
+*Damien Fourure, Rémi Emonet, Élisa Fromont, Damien Muselet, Natalia Neverova, Alain Trémeau, Christian Wolf*
+
 ##### [\[arXiv:1704.02463\]](https://arxiv.org/abs/1704.02463) First-Person Hand Action Benchmark with RGB-D Videos and 3D Hand Pose Annotations. [\[PDF\]](https://arxiv.org/pdf/1704.02463.pdf)
 *Guillermo Garcia-Hernando, Shanxin Yuan, Seungryul Baek, Tae-Kyun Kim*

evaluation/.gitignore (+3)

*~
*.pyc
src/permute_id_results.py

evaluation/README.md (+58)

# Show evaluations on hand pose estimation

## Description
This project provides code to evaluate the performance of hand pose estimation methods on several public datasets, including the [NYU](http://cims.nyu.edu/~tompson/NYU_Hand_Pose_Dataset.htm), [ICVL](http://www.iis.ee.ic.ac.uk/~dtang/hand.html), and [MSRA](https://www.microsoft.com/en-us/research/people/yichenw/?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fpeople%2Fyichenw%2F) hand pose datasets. We collect the predicted labels of several prior works that are available online and visualize their performance.

## Evaluation metric
Two types of evaluation metrics are widely used for hand pose estimation (a short code sketch follows the list):
- Mean error for each joint
- Success rate:
  - The proportion of test frames whose average joint error falls below a threshold
  - The proportion of test frames whose maximum joint error falls below a threshold
  - The proportion of all joints whose error falls below a threshold
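For concreteness, here is a minimal NumPy sketch of these metrics. The function names and the `(num_frames, num_joints, 3)` array layout are our own assumptions for illustration and need not match what `src/compute_error.py` actually does:
```
import numpy as np

def joint_errors(pred, gt):
    """Euclidean error per frame and joint.

    pred, gt: arrays of shape (num_frames, num_joints, 3), e.g. loaded
    with np.loadtxt(path).reshape(-1, num_joints, 3) for uvd labels.
    """
    return np.linalg.norm(pred - gt, axis=2)  # shape: (num_frames, num_joints)

def success_rates(errors, threshold):
    """The three success-rate variants listed above, at one threshold."""
    mean_frame = (errors.mean(axis=1) <= threshold).mean()  # average error per frame
    max_frame = (errors.max(axis=1) <= threshold).mean()    # worst joint per frame
    joint = (errors <= threshold).mean()                    # all joints pooled
    return mean_frame, max_frame, joint
```
The mean error for each joint is then `errors.mean(axis=0)`, and sweeping `threshold` over a range of values yields the success-rate curves.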

## Methods and corresponding predicted labels

### ICVL
- LRF \[1\]: CVPR'14, [Predicted labels](http://www.iis.ee.ic.ac.uk/~dtang/dataset/Results.tar.gz)
- DeepModel \[2\]: IJCAI'16, [Predicted labels](http://xingyizhou.xyz/IJCAI16_ICVL.txt)
- Guo_Baseline \[3\]: ICIP'17, [Predicted labels](https://github.com/guohengkai/region-ensemble-network/blob/master/results/icvl_basic.txt)
- REN_4x6x6 \[3\]: ICIP'17, [Predicted labels](https://github.com/guohengkai/region-ensemble-network/blob/master/results/icvl_ren_4x6x6.txt)

### NYU
- DeepPrior \[4\]: CVWW'15, [Predicted labels](https://www.tugraz.at/fileadmin/user_upload/Institute/ICG/Downloads/team_lepetit/3d_hand_pose/CVWW15_ICVL_Prior.txt)
- DeepPrior-Refinement \[4\]: CVWW'15, [Predicted labels](https://www.tugraz.at/fileadmin/user_upload/Institute/ICG/Downloads/team_lepetit/3d_hand_pose/CVWW15_ICVL_Prior-Refinement.txt)
- Feedback \[5\]: ICCV'15, [Predicted labels](https://www.tugraz.at/fileadmin/user_upload/Institute/ICG/Downloads/team_lepetit/3d_hand_pose/ICCV15_NYU_Feedback.txt)
- DeepModel \[2\]: IJCAI'16, [Predicted labels](http://xingyizhou.xyz/IJCAI16_NYU.txt)
- Lie-X \[6\]: IJCV'16, [Predicted labels](https://web.bii.a-star.edu.sg/~xuchi/Lie-X/lie_hand_jnts_estm_result.txt)
- Guo_Baseline \[3\]: ICIP'17, [Predicted labels](https://github.com/guohengkai/region-ensemble-network/blob/master/results/nyu_basic.txt)
- REN_4x6x6 \[3\]: ICIP'17, [Predicted labels](https://github.com/guohengkai/region-ensemble-network/blob/master/results/nyu_ren_4x6x6.txt)

### MSRA
- TODO

### Notes
Only 14 out of the 36 NYU joints are used for evaluation; we use the joints with ids [0, 3, 6, 9, 12, 15, 18, 21, 24, 25, 27, 30, 31, 32]. All labels are in (u, v, d) format, where u and v are pixel coordinates and d is depth.

For Lie-X, the original predicted labels are in (x, y, z) format and use a different joint order. We convert the labels from xyz to uvd and permute the joints to stay consistent with the other methods.

For Guo_Baseline and REN_4x6x6, we also permute the joint order. The corresponding label files provided in the folder results/nyu are therefore not exactly the same as the [original ones](https://github.com/guohengkai/region-ensemble-network/blob/master/results/).
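The xyz-to-uvd conversion mentioned above is a standard pinhole projection using the depth camera intrinsics. The sketch below is illustrative only: the intrinsic values are approximate NYU (Kinect) ones, sign and axis conventions vary between datasets, and `JOINT_PERM` is a placeholder, so the actual logic of `src/convert_results_xyz2uvd.py` and `src/permute_id_results.py` may differ:
```
import numpy as np

# Approximate NYU (Kinect) depth intrinsics; placeholders for illustration.
FX, FY, CX, CY = 588.03, 587.07, 320.0, 240.0

def xyz2uvd(xyz):
    """Project (x, y, z) camera coordinates to (u, v, d) pixel coordinates."""
    xyz = np.asarray(xyz, dtype=np.float64)
    u = xyz[..., 0] / xyz[..., 2] * FX + CX
    v = xyz[..., 1] / xyz[..., 2] * FY + CY
    return np.stack([u, v, xyz[..., 2]], axis=-1)

# Reordering joints to a common order is a single fancy-index over axis 1.
JOINT_PERM = list(range(14))  # placeholder permutation

def permute_joints(labels):
    """labels: array of shape (num_frames, num_joints, 3)."""
    return labels[:, JOINT_PERM, :]
```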

## Usage
Use the Python script to show the evaluation results:
```
python src/compute_error.py icvl/nyu/msra max-frame/mean-frame/joint method_names in_files
```
The first argument selects the dataset to evaluate and the second selects which of the success-rate types listed above is used. The remaining arguments alternate between a method name and the path to its predicted-label file.
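For example, to plot the max-frame success rate of two methods on ICVL (method names and label-file paths as in the scripts below):
```
python src/compute_error.py icvl max-frame \
    DeepModel results/icvl/IJCAI16_ICVL_DeepModel.txt \
    LRF results/icvl/CVPR14_LRF_Results.txt
```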

We also provide easy-to-use bash scripts that display the performance of several methods; just run:
```
sh evaluate_{dataset}.sh
```

## Reference
- \[1\] Tang, Danhang, et al. "Latent regression forest: Structured estimation of 3d articulated hand posture." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014.
- \[2\] Zhou, Xingyi, et al. "Model-based deep hand pose estimation." arXiv preprint arXiv:1606.06854 (2016).
- \[3\] Guo, Hengkai, et al. "Region Ensemble Network: Improving Convolutional Network for Hand Pose Estimation." arXiv preprint arXiv:1702.02447 (2017).
- \[4\] Oberweger, Markus, Paul Wohlhart, and Vincent Lepetit. "Hands deep in deep learning for hand pose estimation." arXiv preprint arXiv:1502.06807 (2015).
- \[5\] Oberweger, Markus, Paul Wohlhart, and Vincent Lepetit. "Training a feedback loop for hand pose estimation." Proceedings of the IEEE International Conference on Computer Vision. 2015.
- \[6\] Xu, Chi, et al. "Lie-X: Depth Image Based Articulated Object Pose Estimation, Tracking, and Action Recognition on Lie Groups." arXiv preprint arXiv:1609.03773 (2016).

@@ -0,0 +1 @@
+python src/convert_results_xyz2uvd.py nyu results/nyu/IJCV16_lie_hand_jnts_estm_result.txt results/nyu/IJCV16_lie_hand_jnts_estm_result_uvd.txt

evaluation/evaluate_icvl.sh (+7)

python src/compute_error.py icvl max-frame\
    baseline results/icvl/ICIP17_ICVL_Guo_Basic.txt\
    ren_4x6x6 results/icvl/ICIP17_ICVL_Guo_REN_4x6x6.txt\
    DeepModel results/icvl/IJCAI16_ICVL_DeepModel.txt\
    LRF results/icvl/CVPR14_LRF_Results.txt
    #DeepPrior results/icvl/CVWW15_ICVL_Prior.txt
    #DeepPrior-Refine results/icvl/CVWW15_ICVL_Prior-Refinement.txt

evaluation/evaluate_msra.sh

Whitespace-only changes.

evaluation/evaluate_nyu.sh (+8)

python src/compute_error.py nyu max-frame\
    Baseline results/nyu/ICIP17_NYU_Guo_Basic.txt\
    REN_4x6x6 results/nyu/ICIP17_NYU_Guo_REN_4x6x6.txt\
    DeepPrior results/nyu/CVWW15_NYU_Prior.txt\
    DeepPrior-Refine results/nyu/CVWW15_NYU_Prior-Refinement.txt\
    Feedback results/nyu/ICCV15_NYU_Feedback.txt\
    DeepModel results/nyu/IJCAI16_NYU_DeepModel.txt\
    Lie-X results/nyu/IJCV16_lie_hand_jnts_estm_result_uvd.txt
