# Smart classroom scenario
This repository contains TensorFlow code for training and deploying person detection (PD) and action recognition (AR) models for the smart classroom use case. You can define your own list of possible actions (see the annotation file [format](./README_DATA.md) and the model training steps below to change the list of actions), but this repository shows an example with 6 action classes: standing, sitting, raising hand, writing, turned around and lying on the desk.

## Pre-requisites
- Ubuntu 16.04 / 18.04
- Python 2.7

## Installation
 1. Create virtual environment
 ```bash
 virtualenv venv -p python2 --prompt="(action)"
 ```

 2. Activate virtual environment and setup OpenVINO variables
 ```bash
 . venv/bin/activate
 . /opt/intel/openvino/bin/setupvars.sh
 ```
 **NOTE**: A good practice is to add `. /opt/intel/openvino/bin/setupvars.sh` to the end of `venv/bin/activate`:
 ```bash
 echo ". /opt/intel/openvino/bin/setupvars.sh" >> venv/bin/activate
 ```

 3. Install modules
 ```bash
 pip2 install -e .
 ```

## Model training
This repository allows you to carry out the full cycle of the model training procedure. There are two ways to get a highly accurate model:
 - Fine-tune from the provided [initial weights](https://download.01.org/opencv/openvino_training_extensions/models/action_detection/person-detection-action-recognition-0006.tar.gz) (see the download sketch after this list). This way is the simplest and fastest, because it reduces training to a single stage: training the PD&AR model directly.
 - Run the full cycle of model pre-training on classification and detection datasets followed by the final PD&AR model training. To get the most accurate model we recommend pre-training it on the following tasks:
   1. Classification on the ImageNet dataset (see the classifier training [instruction](./README_CLASSIFIER.md))
   2. Detection on the Pascal VOC0712 dataset (see the detector training [instruction](./README_DETECTOR.md))
   3. Detection on the MS COCO dataset
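For reference, a minimal sketch of downloading and unpacking the initial weights; the `init_weights` directory name here is an arbitrary choice, and the checkpoint file names inside the archive may differ:
```bash
# Download the pre-trained PD&AR weights referenced above
wget https://download.01.org/opencv/openvino_training_extensions/models/action_detection/person-detection-action-recognition-0006.tar.gz

# Unpack into a separate directory; the extracted checkpoint is then passed
# to the training script via the -i key (see "Action Detection model training" below)
mkdir -p init_weights
tar -xzf person-detection-action-recognition-0006.tar.gz -C init_weights
```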
## Data preparation
To prepare a dataset, follow the [instructions](./README_DATA.md).

## Action list definition
The repository is configured for a 6-class action detection task, but you can easily define your own set of actions. After the [data preparation](#data-preparation) step you should have a configured class mapping file; the class `IDs` from that file are used below. Change the `configs/action/pedestriandb_twinnet_actionnet.yml` file according to your set of actions:
 1. The `ACTIONS_MAP` field maps class `IDs` of the input data onto the final set of actions. Note that the `undefined` class (if you have one) should be placed at the end of the action list, so that it can be excluded during training.
 2. The `VALID_ACTION_NAMES` field stores the names of the valid actions which you want to recognize (excluding the `undefined` action).
 3. If you have an `undefined` class, set the `UNDEFINED_ACTION_ID` field to the `ID` of this class from the `ACTIONS_MAP` map. Also add this `ID` to the `IGNORE_CLASSES` list.
 4. If you plan to use the demo mode (see the [demonstration](#action-detection-model-demonstration) section), change the colors of the actions by setting the `ACTION_COLORS_MAP` and `UNDEFINED_ACTION_COLOR` fields.
 5. You can exclude some actions from the training procedure by adding them to the `IGNORE_CLASSES` list, but to achieve the best performance it is recommended to label all boxes with persons even if the target action is undefined for them (these boxes are still useful for training the person detection part of the model).

Below you can see an example of a valid field definition:
```yaml
"ACTIONS_MAP": {0: 0,   # sitting --> sitting
                1: 3,   # standing --> standing
                2: 2,   # raising_hand --> raising_hand
                3: 0,   # listening --> sitting
                4: 0,   # reading --> sitting
                5: 1,   # writing --> writing
                6: 5,   # lie_on_the_desk --> lie_on_the_desk
                7: 0,   # busy --> sitting
                8: 0,   # in_group_discussions --> sitting
                9: 4,   # turned_around --> turned_around
                10: 6}  # __undefined__ --> __undefined__
"VALID_ACTION_NAMES": ["sitting", "writing", "raising_hand", "standing", "turned_around", "lie_on_the_desk"]
"UNDEFINED_ACTION_NAME": "undefined"
"UNDEFINED_ACTION_ID": 6
"IGNORE_CLASSES": [6]
"ACTION_COLORS_MAP": {0: [0, 255, 0],
                      1: [255, 0, 255],
                      2: [0, 0, 255],
                      3: [255, 0, 0],
                      4: [0, 153, 255],
                      5: [153, 153, 255]}
"UNDEFINED_ACTION_COLOR": [255, 255, 255]
```
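As an illustration of shrinking the action set, a hypothetical reduced configuration with only three input classes (two valid actions plus `undefined`) might look like the sketch below; the class `IDs` and colors are examples only and must match your own class mapping file:
```yaml
"ACTIONS_MAP": {0: 0,  # sitting --> sitting
                1: 1,  # standing --> standing
                2: 2}  # __undefined__ --> __undefined__
"VALID_ACTION_NAMES": ["sitting", "standing"]
"UNDEFINED_ACTION_NAME": "undefined"
"UNDEFINED_ACTION_ID": 2
"IGNORE_CLASSES": [2]
"ACTION_COLORS_MAP": {0: [0, 255, 0],
                      1: [255, 0, 0]}
"UNDEFINED_ACTION_COLOR": [255, 255, 255]
```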
## Person Detection and Action Recognition model training
Assume we have a pre-trained model and want to fine-tune the PD&AR model. In this case the training procedure consists of the following stages:
 1. [Model training](#action-detection-model-training)
 2. [Model evaluation](#action-detection-model-evaluation)
 3. [Model demonstration](#action-detection-model-demonstration)
 4. [Graph optimization](#action-detection-model-optimization)
 5. [Export to IR format](#export-to-ir-format)

### Action Detection model training
If you want to fine-tune the model with a custom set of actions, you can use the provided initial weights. To do this, run the command:
```Shell
python2 tools/models/train.py -c configs/action/pedestriandb_twinnet_actionnet.yml \ # path to config file
                              -t <PATH_TO_DATA_FILE> \ # file with train data paths
                              -l <PATH_TO_LOG_DIR> \ # directory for logging
                              -b 4 \ # batch size
                              -n 1 \ # number of target GPU devices
                              -i <PATH_TO_INIT_WEIGHTS> \ # initial model weights
                              --src_scope "ActionNet/twinnet" # name of scope to load weights from
```

Note that to continue model training (e.g. after an interruption) from your snapshot, you should run the same command but with the `-s <PATH_TO_SNAPSHOT>` key and without the `--src_scope` key:
```Shell
python2 tools/models/train.py -c configs/action/pedestriandb_twinnet_actionnet.yml \ # path to config file
                              -t <PATH_TO_DATA_FILE> \ # file with train data paths
                              -l <PATH_TO_LOG_DIR> \ # directory for logging
                              -b 4 \ # batch size
                              -n 1 \ # number of target GPU devices
                              -s <PATH_TO_SNAPSHOT> # snapshot model weights
```

If you want to initialize the model from weights other than the provided ones, set the appropriate `--src_scope` key value (see the sketch below):
 - To initialize the model after pre-training on the ImageNet classification dataset, set `--src_scope "ImageNetModel/rmnet"`
 - To initialize the model after pre-training on the Pascal VOC or MS COCO detection dataset, set `--src_scope "SSD/rmnet"`
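For example, a sketch of fine-tuning from ImageNet-pre-trained classifier weights; the `<PATH_TO_IMAGENET_WEIGHTS>` placeholder is hypothetical and stands for the checkpoint produced by the classifier training step:
```Shell
python2 tools/models/train.py -c configs/action/pedestriandb_twinnet_actionnet.yml \ # path to config file
                              -t <PATH_TO_DATA_FILE> \ # file with train data paths
                              -l <PATH_TO_LOG_DIR> \ # directory for logging
                              -b 4 \ # batch size
                              -n 1 \ # number of target GPU devices
                              -i <PATH_TO_IMAGENET_WEIGHTS> \ # ImageNet-pre-trained model weights
                              --src_scope "ImageNetModel/rmnet" # name of scope to load weights from
```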
### Action Detection model evaluation
To evaluate the quality of the trained Action Detection model, prepare the test data according to the [instructions](./README_DATA.md) and run:

```Shell
python2 tools/models/eval.py -c configs/action/pedestriandb_twinnet_actionnet.yml \ # path to config file
                             -v <PATH_TO_DATA_FILE> \ # file with test data paths
                             -b 4 \ # batch size
                             -s <PATH_TO_SNAPSHOT> # snapshot model weights
```

### Action Detection model demonstration

```Shell
python2 tools/models/demo.py -c configs/action/pedestriandb_twinnet_actionnet.yml \ # path to config file
                             -i <PATH_TO_VIDEO_FILE> \ # file with video
                             -s <PATH_TO_SNAPSHOT> # snapshot model weights
```

Note that to scale the output window you can specify the `--out_scale` key with the desired scale factor, e.g. `--out_scale 0.5`.

### Action Detection model optimization

```Shell
python2 tools/models/export.py -c configs/action/pedestriandb_twinnet_actionnet.yml \ # path to config file
                               -s <PATH_TO_SNAPSHOT> \ # snapshot model weights
                               -o <PATH_TO_OUTPUT_DIR> # directory for the output model
```

Note that the frozen graph will be stored in: `<PATH_TO_OUTPUT_DIR>/frozen.pb`.

### Export to IR format

Run the Model Optimizer for the trained Action Detection model (OpenVINO should be installed beforehand):
```Shell
python mo_tf.py --input_model <PATH_TO_FROZEN_GRAPH> \
                --output_dir <OUTPUT_DIR> \
                --model_name SmartClassroomActionNet
```
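If the conversion succeeds, the IR files (named after the `--model_name` value) should appear in the output directory; a quick check might look like this:
```Shell
ls <OUTPUT_DIR>
# Expected output (IR model definition and weights):
# SmartClassroomActionNet.xml  SmartClassroomActionNet.bin
```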