
Commit f6e4848

Typos on paragraph #125; 137
Plus minor formatting proposals
1 parent 46f4c2f commit f6e4848

File tree

1 file changed (+7 −7 lines)


README.md (+7 −7)
@@ -64,10 +64,10 @@ conda install -c conda-forge pytables
 ```
 ## Codes

-There are three different section of this project.
-1.Data pre-processing
-2.Training and testing
-3.Video demo section
+There are three different sections of this project.
+1. Data pre-processing
+2. Training and testing
+3. Video demo section

 We will go through the details in the following sections.

 This repository is for IMDB, WIKI, and Morph2 datasets.
@@ -89,7 +89,7 @@ python TYY_MORPH_create_db.py --output morph_db_align.npz

 <img src="https://github.com/shamangary/SSR-Net/blob/master/merge_val_morph2.png" height="300"/>

-The experiments are done by randomly choosing 80% of the dataset as training and 20% of the dataset as validation(or testing). The details of the setting in each dataset is in the paper.
+The experiments are done by randomly choosing 80% of the dataset as training and 20% of the dataset as validation (or testing). The details of the settings for each dataset are in the paper.

 For MobileNet and DenseNet:
 ```
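The random 80%/20% split described in the changed paragraph can be sketched as follows (the helper name, seed, and toy arrays are illustrative, not from this repository):

```python
import numpy as np

def split_dataset(images, labels, train_frac=0.8, seed=0):
    """Randomly split a dataset into training and validation subsets."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(images))          # shuffle indices reproducibly
    n_train = int(train_frac * len(images))     # 80% of the samples
    train_idx, val_idx = idx[:n_train], idx[n_train:]
    return (images[train_idx], labels[train_idx],
            images[val_idx], labels[val_idx])

# Toy data standing in for aligned face images and age labels.
x = np.zeros((10, 64, 64, 3))
y = np.arange(10)
x_tr, y_tr, x_val, y_val = split_dataset(x, y)
print(len(x_tr), len(x_val))  # 8 2
```

Each run with the same seed reproduces the same partition; every sample lands in exactly one of the two subsets.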
@@ -122,7 +122,7 @@ KERAS_BACKEND=tensorflow CUDA_VISIBLE_DEVICES='' python TYY_demo_mtcnn.py TGOP.m

 KERAS_BACKEND=tensorflow CUDA_VISIBLE_DEVICES='' python TYY_demo_mtcnn.py TGOP.mp4 '3'
 ```
-+ Note: You may choose different pre-trained models. However, the morph2 dataset is under a well controlled enviroment and it is much more smaller than IMDB and WIKI, the pre-trained models from morph2 may perform ly on the in-the-wild images. Therefore, IMDB or WIKI pre-trained models are recommended for in-the-wild images or video demo.
++ Note: You may choose different pre-trained models. However, since the morph2 dataset was collected in a well-controlled environment and is much smaller than IMDB and WIKI, the pre-trained models from morph2 may perform poorly on in-the-wild images. Therefore, the IMDB or WIKI pre-trained models are recommended for in-the-wild images or the video demo.

 + We use dlib detection and face alignment in the previous experimental section since the face data is well organized. However, dlib cannot provide satisfactory face detection for in-the-wild video. Therefore we use mtcnn as the detection process in the demo section.
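The detect-then-predict structure of the video demo described in the hunk above can be sketched as follows; the detector is stubbed out here so the sketch is self-contained (in the real demo this would be an MTCNN detector returning face boxes, and `predict_age` would be the trained SSR-Net model):

```python
import numpy as np

def detect_faces_stub(frame):
    """Stand-in for a face detector; returns (x, y, w, h) boxes.

    Hypothetical stub: the actual demo uses MTCNN, which handles
    in-the-wild frames better than dlib's detector.
    """
    return [(100, 80, 60, 60)]

def demo_loop(frames, detector, predict_age):
    """Sketch of the video demo: detect faces per frame, predict an age each."""
    results = []
    for frame in frames:
        for (x, y, w, h) in detector(frame):
            face = frame[y:y + h, x:x + w]   # crop the detected face region
            results.append(predict_age(face))
    return results

frames = [np.zeros((240, 320, 3), dtype=np.uint8)] * 2
ages = demo_loop(frames, detect_faces_stub, lambda face: 30.0)
print(ages)  # [30.0, 30.0]
```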
@@ -134,7 +134,7 @@ Considering the face detection process (MTCNN or Dlib) is not fast enough for re
 cd ./demo
 KERAS_BACKEND=tensorflow CUDA_VISIBLE_DEVICES='' python TYY_demo_ssrnet_lbp_webcam.py
 ```
-+ Note that the covered region of face detection is different when you use MTCNN, Dlib, or LBP. You should choose similiar size between the inference and the training.
++ Note that the covered region of face detection differs when you use MTCNN, Dlib, or LBP. You should choose a similar face-region size between inference and training.
 + Also, the pre-trained models are mainly for the evaluation of the datasets. They are not really for the real-world images. You should always retrain the model by your own dataset. In webcam demo, we found that morph2 pre-trained model actually perform better than wiki pre-trained model. The discussion will be included in our future work.
 + If you are Asian, you might want to use the megaage_asian pre-trained model.
 + The Morph2 pre-trained model is good for webcam but the gender model is overfitted and not practical.
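The note about detectors covering different face regions is commonly handled by expanding the detected box by a margin before cropping, so the crop fed to the network matches the region seen at training time. A minimal sketch, assuming a (x, y, w, h) box convention and an illustrative margin value:

```python
import numpy as np

def crop_face(frame, box, margin=0.4):
    """Crop a detected face box from a frame, expanded by a relative margin.

    `box` is (x, y, w, h); the margin (illustrative value) compensates for
    detectors such as MTCNN, Dlib, or LBP covering different face regions.
    """
    x, y, w, h = box
    mx, my = int(margin * w), int(margin * h)
    x0, y0 = max(x - mx, 0), max(y - my, 0)          # clip to top-left corner
    x1 = min(x + w + mx, frame.shape[1])             # clip to frame width
    y1 = min(y + h + my, frame.shape[0])             # clip to frame height
    return frame[y0:y1, x0:x1]

frame = np.zeros((480, 640, 3), dtype=np.uint8)
face = crop_face(frame, (300, 200, 100, 100))
print(face.shape)  # (180, 180, 3)
```

Keeping the margin consistent between training crops and inference crops is the practical point of the note above.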
