Please install and set up AIMET before proceeding further.
This model was tested with the torch_gpu
variant of AIMET 1.25.
Install necessary dependencies as follows:
```bash
pip install pycocotools
export PYTHONPATH=$PYTHONPATH:<path to parent of aimet_model_zoo_path>
```
The MS-COCO 2017 validation dataset can be downloaded from here:
The dataset's root folder (which you pass as an argument) should have two subfolders: annotations and images. The annotations folder should contain only JSON files. The images folder should contain three subfolders: train2017, val2017, and test2017.
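The expected layout can be verified before launching evaluation. Below is a small sketch that checks the folder structure described above; the helper name `check_coco_root` is illustrative and not part of AIMET or the model zoo.

```python
# Sanity-check the MS-COCO 2017 folder layout expected by the evaluator.
# Folder names come from the README above; this helper is an assumption,
# not an AIMET API.
from pathlib import Path

def check_coco_root(root: str) -> list:
    """Return a list of layout problems (empty if the structure looks correct)."""
    root = Path(root)
    problems = []
    ann = root / "annotations"
    img = root / "images"
    if not ann.is_dir():
        problems.append("missing annotations/ folder")
    elif any(p.is_file() and p.suffix != ".json" for p in ann.iterdir()):
        problems.append("annotations/ should contain only .json files")
    if not img.is_dir():
        problems.append("missing images/ folder")
    else:
        for sub in ("train2017", "val2017", "test2017"):
            if not (img / sub).is_dir():
                problems.append(f"missing images/{sub}/ folder")
    return problems
```

Running this on your dataset root and fixing any reported problems avoids confusing failures inside the evaluation script.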
To run evaluation with QuantSim in AIMET, use the following command:

```bash
python3 aimet_zoo_torch/yolox/evaluators/yolox_quanteval.py \
  --model-config <configuration to be tested> \
  --dataset-path <path to MS-COCO 2017> \
  --batch-size <number of images per batch; default is 64>
```
Available model configurations are:
- yolox_s
- yolox_l
- The original prepared YOLOX checkpoint can be downloaded from here:
- The Quantization Simulation (Quantsim) Configuration file can be downloaded from here: default_config_per_channel.json (Please see this page for more information on this file).
- Weight quantization: 8 bits, per-channel symmetric quantization
- Bias parameters are not quantized
- Activation quantization: 8 bits, asymmetric quantization
- Model inputs are quantized
- Percentile is used as the quantization scheme
- The percentile value is set to 99.9942 for YOLOX-s (found by search)
- The percentile value is set to 99.99608 for YOLOX-l (found by search)
- BatchNorm Folding (BNF) has been applied to the optimized checkpoint
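To illustrate the percentile calibration idea above: instead of using the raw min/max of observed activations, the clipping range is taken at the configured percentile (e.g. 99.9942 for YOLOX-s) so that rare outliers do not stretch the 8-bit grid. The sketch below is a simplified illustration of that idea, not the AIMET implementation; the function names are assumptions.

```python
# Illustration of percentile-based asymmetric 8-bit encoding selection.
# Not AIMET code; a minimal sketch of the calibration concept.
import numpy as np

def percentile_encoding(acts: np.ndarray, percentile: float, bitwidth: int = 8):
    """Return (min, max, scale, offset) for asymmetric quantization,
    clipping the range at the given percentile instead of raw min/max."""
    lo = np.percentile(acts, 100.0 - percentile)
    hi = np.percentile(acts, percentile)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # keep zero exactly representable
    n_steps = 2 ** bitwidth - 1
    scale = (hi - lo) / n_steps
    offset = round(lo / scale) if scale else 0
    return lo, hi, scale, offset

def fake_quantize(acts, scale, offset, bitwidth: int = 8):
    """Quantize then dequantize, simulating 8-bit activation quantization."""
    q = np.clip(np.round(acts / scale) - offset, 0, 2 ** bitwidth - 1)
    return (q + offset) * scale
```

With a heavy-tailed activation distribution, the 99.9942th percentile range is much tighter than the raw max, which reduces quantization error for the bulk of the values at the cost of clipping a handful of outliers.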
Below are the mAP@0.5:0.95 results of the PyTorch YOLOX model for the MS-COCO 2017 dataset:
| Model Configuration | FP32 (%) | INT8 (%) |
|---|---|---|
| YOLOX-s | 40.5 | 39.7 |
| YOLOX-l | 49.7 | 48.8 |