|Model |Download |Download (with sample test data)|ONNX version|Opset version|Accuracy |
|-------------|:--------------|:--------------|:--------------|:--------------|:--------------|
|YOLOv4 |[251 MB](model/yolov4.onnx) |[236 MB](model/yolov4.tar.gz)|1.6 |11 |mAP of 0.5733 |
|YOLOv4-int8 |[63.0 MB](model/yolov4-int8.onnx) |[61.8 MB](model/yolov4-int8.tar.gz)|1.9.0 |11 |mAP of 0.570 |

> Compared with YOLOv4, YOLOv4-int8's mAP declines by 0.33% and performance improves by 1.59x.
>
> Note that performance depends on the test hardware.
>
> Performance data here was collected with an Intel® Xeon® Platinum 8280 Processor (1 socket, 4 cores per instance), CentOS Linux 8.3, and a data batch size of 1.

### Source
Tensorflow YOLOv4 => ONNX YOLOv4

Pretrained yolov4 weights can be downloaded [here](https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT).

## Validation accuracy
YOLOv4:
mAP50 on the COCO 2017 dataset is 0.5733, based on the original TensorFlow [model](https://github.com/hunglc007/tensorflow-yolov4-tflite#map50-on-coco-2017-dataset).

YOLOv4-int8:
mAP50 on the COCO 2017 dataset is 0.570; the metric is COCO box mAP@[IoU=0.50:0.95 | area=large | maxDets=100].
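
For reference, this metric can be computed with `pycocotools` once the model's detections have been exported to the standard COCO results JSON format. A minimal sketch, where the annotation and results file names are placeholders:

```python
# Minimal sketch: compute COCO box mAP with pycocotools.
# "instances_val2017.json" is the standard COCO 2017 annotation file;
# "detections.json" is an assumed results file produced by the model.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes("detections.json")          # model detections

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the full AP/AR table
# stats[5] is AP @[IoU=0.50:0.95 | area=large | maxDets=100], the metric quoted above
print("AP @[IoU=0.50:0.95 | area=large | maxDets=100] =", coco_eval.stats[5])
```
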
<hr>

## Quantization
YOLOv4-int8 is obtained by quantizing the YOLOv4 model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform quantization. View the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/object_detection/onnx_model_zoo/yolov4/quantization/ptq/README.md) to understand how to use Intel® Neural Compressor for quantization.

### Environment
onnx: 1.9.0
onnxruntime: 1.10.0
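
A quick way to confirm that the local environment matches these versions:

```python
# Sanity check: confirm the installed packages match the versions listed above.
import onnx
import onnxruntime

print("onnx:", onnx.__version__)                # expected: 1.9.0
print("onnxruntime:", onnxruntime.__version__)  # expected: 1.10.0
```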

### Prepare model
```shell
# fetch the FP32 model (raw file URL, so wget downloads the .onnx itself rather than the GitHub HTML page)
wget https://github.com/onnx/models/raw/main/vision/object_detection_segmentation/yolov4/model/yolov4.onnx
```

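Before quantizing, it can help to confirm that the download succeeded and to inspect the graph's input/output signatures. A minimal sketch, assuming the file was saved as `yolov4.onnx` in the current directory:

```python
# Minimal sketch: validate the downloaded model and print its I/O signatures.
import onnx
import onnxruntime as ort

model_path = "yolov4.onnx"
onnx.checker.check_model(onnx.load(model_path))  # raises if the graph is malformed

sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print("input :", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)
```
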
### Model quantization
```bash
# --input_model: FP32 model path (*.onnx)
bash run_tuning.sh --input_model=path/to/model \
                   --config=yolov4.yaml \
                   --data_path=path/to/COCO2017 \
                   --output_model=path/to/save
```
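
For readers who prefer the Python API over the shell wrapper, the flow that a script like `run_tuning.sh` drives looks roughly as follows with the Neural Compressor 1.x API. This is a sketch under assumptions: the YAML is expected to define the calibration/evaluation dataloaders and the accuracy criterion, and the file names are placeholders; see the linked example for the exact recipe.

```python
# Rough sketch of post-training quantization with Intel® Neural Compressor 1.x.
# Assumes yolov4.yaml configures the dataloaders, accuracy target, and tuning space.
from neural_compressor.experimental import Quantization, common

quantizer = Quantization("yolov4.yaml")        # load the quantization/tuning config
quantizer.model = common.Model("yolov4.onnx")  # FP32 ONNX model to quantize
q_model = quantizer.fit()                      # calibrate, quantize, and tune to the accuracy goal
q_model.save("yolov4-int8.onnx")               # write out the INT8 model
```
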
<hr>

## Publication/Attribution
* [YOLOv4: Optimal Speed and Accuracy of Object Detection](https://arxiv.org/abs/2004.10934). Alexey Bochkovskiy, Chien-Yao Wang, Hong-Yuan Mark Liao.
* Original models from [Darknet Github repository](https://github.com/AlexeyAB/darknet).

## References
* This model is directly converted from [hunglc007/tensorflow-yolov4-tflite](https://github.com/hunglc007/tensorflow-yolov4-tflite).
* [Intel® Neural Compressor](https://github.com/intel/neural-compressor)
<hr>

## Contributors
* [Jennifer Wang](https://github.com/jennifererwangg)
* [XinyuYe-Intel](https://github.com/XinyuYe-Intel) (Intel)
* [mengniwang95](https://github.com/mengniwang95) (Intel)
* [airMeng](https://github.com/airMeng) (Intel)
* [ftian1](https://github.com/ftian1) (Intel)
* [hshen14](https://github.com/hshen14) (Intel)

## License
MIT License