Compared with version 2, the version 3 detection model shows improved accuracy, and the version 2.1 recognition model is optimized for both accuracy and CPU speed.
Compared with the 1.1 models, which were trained with the static graph programming paradigm, models 2.0 and later are trained with dynamic graphs and achieve comparable performance.
All models in this tutorial are PP-OCR-series models. For an introduction to algorithms and models trained on public datasets, refer to the algorithm overview tutorial.
The downloadable models provided by PaddleOCR include the inference model, trained model, pre-trained model and nb model. The differences between them are as follows:
| model type | model format | description |
| --- | --- | --- |
| inference model | inference.pdmodel, inference.pdiparams | Used for inference based on the Paddle inference engine, detail; see also the sketch after this table. |
| trained model, pre-trained model | *.pdparams, *.pdopt, *.states | Checkpoints saved during training, which store the model parameters and are mostly used for model evaluation and continued training. |
| nb model | *.nb | Model optimized by Paddle-Lite, suitable for mobile-side deployment scenarios (Paddle-Lite is needed for nb model deployment). |
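As a quick illustration of how the inference model format is consumed, here is a minimal sketch based on the paddle.inference Python API; the model directory name, the dummy input size, and the CPU-only configuration are illustrative assumptions, not part of the model list above.

```python
import numpy as np
from paddle.inference import Config, create_predictor

# Paths are placeholders; point them at an extracted inference model directory.
config = Config("ch_PP-OCRv3_det_infer/inference.pdmodel",
                "ch_PP-OCRv3_det_infer/inference.pdiparams")
config.disable_gpu()  # CPU inference for this sketch; GPU can be enabled instead
predictor = create_predictor(config)

# Feed a dummy normalized image; real usage would preprocess an actual image
# (resize, normalize, HWC -> CHW) before copying it into the input tensor.
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
input_handle.copy_from_cpu(np.random.rand(1, 3, 640, 640).astype("float32"))

predictor.run()

output_handle = predictor.get_output_handle(predictor.get_output_names()[0])
print(output_handle.copy_to_cpu().shape)  # e.g. the detection probability map
```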
The relationship of the above models is as follows.
1. Text Detection Model
1.1 Chinese Detection Model
| model name | description | config | model size | download |
| --- | --- | --- | --- | --- |
| ch_PP-OCRv3_det_slim | [New] Slim quantization with distillation lightweight model, supporting Chinese, English and multilingual text detection | | | |
Note: The trained model is fine-tuned on the pre-trained model with real data and synthesized vertical text data, which achieves better performance in real scenes. The pre-trained model is trained directly on the full set of real and synthesized data, and is more suitable for fine-tuning on your own dataset.
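To make the file roles in the note above concrete, here is a minimal sketch (with a tiny stand-in network, not the actual PP-OCR architecture) of how the *.pdparams and *.pdopt files of a trained or pre-trained model are typically consumed: fine-tuning restores only the weights, while resuming training also restores the optimizer state.

```python
import paddle
from paddle import nn

# Tiny stand-in network; real PP-OCR models follow the same save/load pattern.
net = nn.Sequential(nn.Conv2D(3, 8, 3), nn.ReLU())
opt = paddle.optimizer.Adam(parameters=net.parameters())

# A training checkpoint consists of weights (*.pdparams) plus optimizer state (*.pdopt).
paddle.save(net.state_dict(), "best_accuracy.pdparams")
paddle.save(opt.state_dict(), "best_accuracy.pdopt")

# Fine-tuning on your own dataset: restore only the (pre-trained) weights.
net.set_state_dict(paddle.load("best_accuracy.pdparams"))

# Resuming interrupted training: also restore the optimizer state.
opt.set_state_dict(paddle.load("best_accuracy.pdopt"))
```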
2.2 English Recognition Model
| model name | description | config | model size | download |
| --- | --- | --- | --- | --- |
| en_PP-OCRv3_rec_slim | [New] Slim quantization with distillation lightweight model, supporting English text recognition (a usage sketch follows this table) | | | |
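As a quick way to try the English models listed above, the sketch below uses the paddleocr pip package, which downloads the corresponding detection and recognition weights on first use. The image path is a placeholder, and the call shown matches the 2.x paddleocr releases that ship PP-OCRv3; the exact output structure can vary slightly between package versions.

```python
# pip install paddleocr paddlepaddle
from paddleocr import PaddleOCR

# lang="en" selects the English recognition model; weights are fetched automatically.
ocr = PaddleOCR(lang="en", use_angle_cls=True)

# "example.jpg" is a placeholder path to an image containing English text.
result = ocr.ocr("example.jpg", cls=True)
for page in result:      # one entry per image
    for line in page:    # each line: [bounding box, (text, confidence)]
        print(line)
```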
Paddle-Lite is an updated version of Paddle-Mobile, an open-source deep learning framework designed to make it easy to perform inference on mobile, embedded, and IoT devices. It can further optimize the inference model and generate the nb model used on edge devices. Optimizing the quantization model with Paddle-Lite is recommended because the INT8 format is then used for model storage and inference.
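As a sketch of that optimization step, the snippet below uses the Opt helper from the paddlelite pip package to convert an inference model into an nb model. The input and output paths are placeholders, and the helper's method names should be checked against the Paddle-Lite version you have installed.

```python
# pip install paddlelite  (the same tool is also exposed as the paddle_lite_opt CLI)
from paddlelite.lite import Opt

opt = Opt()
# Placeholder paths to a downloaded and extracted inference model.
opt.set_model_file("ch_PP-OCRv3_det_slim_infer/inference.pdmodel")
opt.set_param_file("ch_PP-OCRv3_det_slim_infer/inference.pdiparams")
opt.set_valid_places("arm")              # target hardware for mobile deployment
opt.set_model_type("naive_buffer")       # output format expected by Paddle-Lite
opt.set_optimize_out("ch_PP-OCRv3_det_slim_opt")  # writes ch_PP-OCRv3_det_slim_opt.nb
opt.run()
```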
This chapter lists OCR nb models for PP-OCRv2 and earlier versions. You can obtain the latest nb models from the tables above.