Please install and set up AIMET before proceeding further. This model was tested with the torch_gpu variant of AIMET 1.22.2.
- Install scikit-image (skimage) as follows:
```bash
pip install scikit-image
```
- Clone the FFNet repo:
```bash
git clone https://github.com/Qualcomm-AI-research/FFNet
```
- Copy ("hardcopy") the two folders below for the dataloader and model evaluation imports:
```
datasets/cityscapes/dataloader
datasets/cityscapes/utils
```
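If you prefer to script this copy step, the sketch below is one way to do it. The source and destination paths are assumptions (they depend on where the FFNet repo was cloned and on the directory from which ffnet_quanteval.py is run) and should be adjusted accordingly.

```python
import shutil

# Placeholder paths: adjust src_root to the cloned FFNet repo and dst_root to the
# directory from which the evaluation script will import the dataloader/utils.
src_root = "FFNet/datasets/cityscapes"
dst_root = "./datasets/cityscapes"

for folder in ("dataloader", "utils"):
    # Requires Python 3.8+ for dirs_exist_ok
    shutil.copytree(f"{src_root}/{folder}", f"{dst_root}/{folder}", dirs_exist_ok=True)
```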
- Add AIMET Model Zoo to the PYTHONPATH:
```bash
export PYTHONPATH=$PYTHONPATH:<aimet_model_zoo_path>
```
- The Cityscapes dataset can be downloaded from here:
- In the datasets/cityscapes/dataloader/base_loader.py script, change the Cityscapes dataset path to point to the directory where the dataset was downloaded (a placeholder example follows).
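The variable that holds the dataset root differs between FFNet versions, so the snippet below is purely illustrative; CITYSCAPES_ROOT is a hypothetical name standing in for whatever path variable base_loader.py actually defines.

```python
# Hypothetical example only: open datasets/cityscapes/dataloader/base_loader.py
# and point its Cityscapes root path at your local copy of the dataset
# (which typically contains the leftImg8bit/ and gtFine/ folders).
CITYSCAPES_ROOT = "/path/to/cityscapes"
```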
- The original prepared FFNet checkpoint can be downloaded from here:
- The Quantization Simulation (QuantSim) configuration file can be downloaded from here: default_config_per_channel.json (see this page for more information on this file).
To run evaluation with QuantSim in AIMET, use the following command:
```bash
python ffnet_quanteval.py \
    --model-name <model name to quantize; default: segmentation_ffnet78S_dBBB_mobile> \
    --use-cuda <use CUDA or CPU; default: True> \
    --batch-size <number of images per batch; default: 8>
```
The following quantization configuration was used:
- Weight quantization: 8 bits, per-channel symmetric quantization
- Bias parameters are not quantized
- Activation quantization: 8 bits, asymmetric quantization
- Model inputs are quantized
- TF-Enhanced was used as the quantization scheme
- Cross-layer equalization (CLE) has been applied to the optimized checkpoint
- For low-resolution models with the pre_down suffix, quantization is disabled for the GaussianConv2D layer
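These settings are what ffnet_quanteval.py configures through AIMET. The sketch below is a rough, illustrative mapping of the same configuration onto the AIMET 1.22 Python APIs, not the script's actual code: a small stand-in network and random calibration tensors replace FFNet and Cityscapes, and default_config_per_channel.json is the configuration file downloaded above.

```python
import torch
from aimet_common.defs import QuantScheme
from aimet_torch.cross_layer_equalization import equalize_model
from aimet_torch.quantsim import QuantizationSimModel

# Stand-in network; in practice this would be the prepared FFNet checkpoint
# loaded through the FFNet repo's model builders.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.BatchNorm2d(16),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 19, 1),  # 19 Cityscapes classes
).eval()

input_shape = (1, 3, 512, 1024)   # assumed input resolution; FFNet variants differ
dummy_input = torch.rand(*input_shape)

# Cross-layer equalization is applied to the FP32 model (in place) before simulation.
equalize_model(model, input_shapes=input_shape)

# 8-bit weights and activations with the TF-Enhanced scheme; per-channel symmetric
# weights, asymmetric activations, unquantized biases, and quantized model inputs
# come from the per-channel config file (assumed to be in the working directory).
sim = QuantizationSimModel(
    model,
    dummy_input=dummy_input,
    quant_scheme=QuantScheme.post_training_tf_enhanced,
    default_param_bw=8,
    default_output_bw=8,
    config_file="default_config_per_channel.json",
)

# Calibration callback: run representative data through sim.model so AIMET can
# compute activation encodings. Random tensors stand in for Cityscapes batches here.
def pass_calibration_data(sim_model, _args):
    with torch.no_grad():
        for _ in range(4):
            sim_model(torch.rand(*input_shape))

sim.compute_encodings(pass_calibration_data, forward_pass_callback_args=None)

# sim.model is the quantization-simulated model used for the INT8 mIoU evaluation.
```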
Below are the mIoU results of the PyTorch FFNet models on the Cityscapes dataset:
| Model Configuration | FP32 mIoU (%) | INT8 mIoU (%) |
|---|---|---|
| segmentation_ffnet78S_dBBB_mobile | 81.3 | 80.7 |
| segmentation_ffnet54S_dBBB_mobile | 80.8 | 80.1 |
| segmentation_ffnet40S_dBBB_mobile | 79.2 | 78.9 |
| segmentation_ffnet78S_BCC_mobile_pre_down | 80.6 | 80.4 |
| segmentation_ffnet122NS_CCC_mobile_pre_down | 79.3 | 79.0 |