PyTorch-FFNet

Setup AI Model Efficiency Toolkit

Please install and set up AIMET before proceeding further. This model was tested with the torch_gpu variant of AIMET 1.22.2.

Additional Dependencies

  1. Install scikit-image as follows:
pip install scikit-image

Model modifications & Experiment Setup

  1. Clone the FFNet repo:
git clone https://github.com/Qualcomm-AI-research/FFNet
  2. Copy the two folders below into your working directory; they are needed for the dataloader and model-evaluation imports:
datasets/cityscapes/dataloader
datasets/cityscapes/utils
  3. Add AIMET Model Zoo to the PYTHONPATH:
export PYTHONPATH=$PYTHONPATH:<aimet_model_zoo_path>
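The three steps above can be combined into a short shell sequence. This is a sketch: it assumes the two folders sit at these paths inside the clone, that evaluation is run from the current directory, and the model-zoo path is a placeholder to replace with your own checkout.

```shell
# Sketch of the setup steps above; paths are illustrative.
git clone https://github.com/Qualcomm-AI-research/FFNet
cp -r FFNet/datasets/cityscapes/dataloader .
cp -r FFNet/datasets/cityscapes/utils .
export PYTHONPATH=$PYTHONPATH:/path/to/aimet-model-zoo   # placeholder path
```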

Dataset

The Cityscapes dataset can be downloaded from here:

In the datasets/cityscapes/dataloader/base_loader.py script, change the Cityscapes dataset path to point to the directory where the dataset was downloaded.
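For example, if base_loader.py keeps the dataset root in a module-level variable, the edit looks like the fragment below. The variable name is hypothetical; use whichever path variable the script actually defines.

```python
# Hypothetical illustration of the edit in
# datasets/cityscapes/dataloader/base_loader.py.
# CITYSCAPES_ROOT is an assumed name; point it at your local download,
# which should contain the leftImg8bit/ and gtFine/ subfolders.
CITYSCAPES_ROOT = "/data/cityscapes"
```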

Model checkpoint and configuration

Usage

To run evaluation with QuantSim in AIMET, use the following command:

python ffnet_quanteval.py \
		--model-name <model name for quantization, default is segmentation_ffnet78S_dBBB_mobile> \
		--use-cuda <Use cuda or cpu, default is True> \
		--batch-size <Number of images per batch, default is 8>
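For instance, to evaluate the 54S variant on GPU with the default batch size (this assumes the setup above is complete and the dataset path has been set):

```shell
# Concrete invocation example with the placeholders filled in
python ffnet_quanteval.py \
    --model-name segmentation_ffnet54S_dBBB_mobile \
    --use-cuda True \
    --batch-size 8
```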

Quantization Configuration (INT8)

  • Weight quantization: 8 bits, per channel symmetric quantization
  • Bias parameters are not quantized
  • Activation quantization: 8 bits, asymmetric quantization
  • Model inputs are quantized
  • TF-Enhanced was used as the quantization scheme
  • Cross-layer equalization (CLE) has been applied to the optimized checkpoint
  • For low-resolution models with the pre_down suffix, the GaussianConv2D layer is excluded from quantization
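The weight and activation schemes above differ mainly in where the zero point sits. The plain-NumPy sketch below is an illustration of those two schemes, not the AIMET implementation; the function names are ours. Per-channel symmetric quantization uses one scale per output channel with the zero point fixed at 0, while asymmetric quantization derives both a scale and a zero point from the observed min/max range.

```python
import numpy as np

def fake_quant_weights_per_channel_symmetric(w, n_bits=8):
    # One scale per output channel (rows of w); zero point fixed at 0.
    qmax = 2 ** (n_bits - 1) - 1                       # 127 for 8 bits
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)  # int grid [-128, 127]
    return q * scale                                   # dequantized view

def fake_quant_activations_asymmetric(x, n_bits=8):
    # Per-tensor scale and zero point derived from the observed min/max.
    qmax = 2 ** n_bits - 1                             # 255 for 8 bits
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax
    zero_point = int(np.round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax)
    return (q - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 16))       # 4 output channels
x = rng.random((8, 16)) * 6.0          # non-negative, ReLU-like activations
w_err = np.abs(w - fake_quant_weights_per_channel_symmetric(w)).max()
x_err = np.abs(x - fake_quant_activations_asymmetric(x)).max()
print(f"max weight error: {w_err:.4f}, max activation error: {x_err:.4f}")
```

The asymmetric scheme spends its full 8-bit range on the observed activation interval, which is why it suits post-ReLU tensors, while the symmetric scheme keeps zero exactly representable, which matters for weights.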

Results

Below are the mIoU results of the PyTorch FFNet model for the Cityscapes dataset:

| Model Configuration | FP32 mIoU (%) | INT8 mIoU (%) |
|---|---|---|
| segmentation_ffnet78S_dBBB_mobile | 81.3 | 80.7 |
| segmentation_ffnet54S_dBBB_mobile | 80.8 | 80.1 |
| segmentation_ffnet40S_dBBB_mobile | 79.2 | 78.9 |
| segmentation_ffnet78S_BCC_mobile_pre_down | 80.6 | 80.4 |
| segmentation_ffnet122NS_CCC_mobile_pre_down | 79.3 | 79.0 |