PyTorch InverseForm (Semantic Segmentation)

Setup AI Model Efficiency Toolkit (AIMET)

Please install and set up AIMET before proceeding further. This model was tested with the torch_gpu variant of AIMET 1.22.2.
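
If the install succeeded, a quick import check along these lines should pass. The check itself is a convenience sketch, not part of the official AIMET instructions:

   import torch
   from aimet_torch.quantsim import QuantizationSimModel   # core QuantSim entry point used below

   # The torch_gpu variant expects a CUDA-capable machine.
   assert torch.cuda.is_available()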

Experiment setup

  • Clone the InverseForm repo and apply the evaluation patch:
   git clone https://github.com/Qualcomm-AI-research/InverseForm.git
   cd InverseForm
   git checkout d5fec5b065c9a3c7afab48db84071fc3537bad7a
   patch -p1 < ../aimet-model-zoo/zoo_torch/examples/inverseform/inverseform_eval.patch
  • Install dependencies:
   python -m pip install runx
   python -m pip install fire
   python -m pip install scikit-image
  • Append the repo location to your PYTHONPATH (a Python alternative is sketched below):
   export PYTHONPATH=<path to inverseform repo>:$PYTHONPATH
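
Equivalently, the repo location can be added from inside Python, as in this small sketch (the path is a placeholder):

   import sys

   repo_root = "/path/to/InverseForm"    # placeholder: wherever the repo was cloned
   if repo_root not in sys.path:
       sys.path.insert(0, repo_root)     # same effect as the PYTHONPATH export above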

Model checkpoints and configuration

  • The InverseForm model checkpoints can be downloaded from the Releases page.
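
Once downloaded, a checkpoint can be inspected with plain PyTorch. The filename below is an assumption derived from the --checkpoint-prefix values in the Usage section:

   import torch

   # Hypothetical filename: checkpoint prefix plus a .pth extension.
   state = torch.load("inverseform-w16_w8a8.pth", map_location="cpu")
   print(list(state.keys())[:5])         # peek at the first few entries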

Dataset

  • This evaluation uses the Cityscapes dataset, which can be obtained from https://www.cityscapes-dataset.com/ (registration required).

Usage

  • To run HRNet-16-Slim-IF evaluation with QuantSim in AIMET, use the following:
python inverseform_quanteval.py \
	--checkpoint-prefix inverseform-w16_w8a8 \
	--arch "lighthrnet.HRNet16" --hrnet_base "16" \
	--use-cuda <Run evaluation on GPU>
  • To run OCRNet-48-IF evaluation with QuantSim in AIMET, use the following (a sketch of the underlying QuantSim flow follows these commands):
python inverseform_quanteval.py \
	--checkpoint-prefix inverseform-w48_w8a8 \
	--arch "ocrnet.HRNet" --hrnet_base "48" \
	--use-cuda <Run evaluation on GPU>
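
For orientation, the commands above drive an AIMET QuantizationSimModel. The sketch below shows the general QuantSim flow using the settings listed under Quantization configuration; the model object, input resolution, and calibration loop are placeholder assumptions, and the real logic lives in inverseform_quanteval.py:

   import torch
   from aimet_common.defs import QuantScheme
   from aimet_torch.quantsim import QuantizationSimModel

   model = build_inverseform_model()                    # placeholder: FP32 model loaded from a checkpoint
   dummy_input = torch.randn(1, 3, 1024, 2048).cuda()   # Cityscapes-sized input; shape is an assumption

   sim = QuantizationSimModel(model,
                              dummy_input=dummy_input,
                              quant_scheme=QuantScheme.post_training_tf_enhanced,  # "TF-Enhanced"
                              default_param_bw=8,       # 8-bit weights
                              default_output_bw=8)      # 8-bit activations

   # Calibrate quantizer encodings by running representative data through the model.
   def pass_calibration_data(sim_model, _):
       sim_model.eval()
       with torch.no_grad():
           sim_model(dummy_input)                       # in practice: iterate over real calibration images

   sim.compute_encodings(forward_pass_callback=pass_calibration_data,
                         forward_pass_callback_args=None)

   # sim.model can now be evaluated on the validation set exactly like the FP32 model.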

Quantization configuration

  • Weight quantization: 8-bit, per-tensor symmetric quantization
  • Bias parameters are not quantized
  • Activation quantization: 8-bit asymmetric quantization
  • Model inputs are quantized
  • TF-Enhanced was used as the quantization scheme
  • Cross-layer equalization (CLE) and AdaRound have been applied to the optimized checkpoints (a sketch of these two steps follows)
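
The released checkpoints already include these optimizations, but for reference this is roughly how CLE and AdaRound are applied with the AIMET 1.x API. The model, data loader, input shape, and output paths are placeholder assumptions:

   import torch
   from aimet_common.defs import QuantScheme
   from aimet_torch.cross_layer_equalization import equalize_model
   from aimet_torch.adaround.adaround_weight import Adaround, AdaroundParameters

   model = build_inverseform_model()          # placeholder: FP32 model
   input_shape = (1, 3, 1024, 2048)           # Cityscapes-sized input; shape is an assumption
   dummy_input = torch.randn(input_shape)
   data_loader = make_calibration_loader()    # placeholder: DataLoader over calibration images

   # Cross-layer equalization: rescales weights across consecutive layers in place.
   equalize_model(model, input_shape)

   # AdaRound: learns a per-weight rounding policy and exports the resulting
   # encodings to <path>/<filename_prefix>.encodings for use with QuantSim.
   params = AdaroundParameters(data_loader=data_loader, num_batches=16)
   model = Adaround.apply_adaround(model, dummy_input, params,
                                   path="./adaround",
                                   filename_prefix="inverseform",
                                   default_param_bw=8,
                                   default_quant_scheme=QuantScheme.post_training_tf_enhanced)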