
zihaomu / deep-text-recognition-benchmark

License: Apache-2.0
Provides OCR models in ONNX format so that the OpenCV DNN module can use them directly and correctly.


Projects that are alternatives of or similar to deep-text-recognition-benchmark

video-subtitle-extractor
Extracts hard-coded subtitles from videos and generates SRT files. No third-party API required; text recognition runs locally. A deep-learning-based framework for video subtitle extraction, covering subtitle region detection and subtitle content extraction. A GUI tool for extracting hard-coded subtitles (hardsubs) from videos and generating SRT files.
Stars: ✭ 1,763 (+5409.38%)
Mutual labels:  ocr
ddddocr
General-purpose CAPTCHA recognition OCR, PyPI edition.
Stars: ✭ 4,093 (+12690.63%)
Mutual labels:  ocr
Inventory Kamera
Scans Genshin Impact characters, artifacts, and weapons from the game window into a JSON file.
Stars: ✭ 348 (+987.5%)
Mutual labels:  ocr
scanbot-sdk-example-ios
No description or website provided.
Stars: ✭ 17 (-46.87%)
Mutual labels:  ocr
CLPR.pytorch
End to End Chinese License Plate Recognition
Stars: ✭ 75 (+134.38%)
Mutual labels:  ocr
blinkid-in-browser
BlinkID In-browser SDK for WebAssembly-enabled browsers.
Stars: ✭ 40 (+25%)
Mutual labels:  ocr
paperbase
Open source document organizer with automatic OCR and full text search
Stars: ✭ 21 (-34.37%)
Mutual labels:  ocr
tibetan-ocr
Python OCR for Handwritten Tibetan Manuscripts
Stars: ✭ 19 (-40.62%)
Mutual labels:  ocr
erpnext ocr
🐍 ⚗️ Optical Character Recognition using tesseract within Frappe.
Stars: ✭ 58 (+81.25%)
Mutual labels:  ocr
shape-context-ocr
The Shape Context is a shape descriptor that captures the relative positions of other points on the shape contours, and is used to recognize characters.
Stars: ✭ 20 (-37.5%)
Mutual labels:  ocr
form-segmentation
Let's explore how we can extract text from forms
Stars: ✭ 42 (+31.25%)
Mutual labels:  ocr
crnn.mxnet
CRNN in MXNet; can be trained with Chinese characters.
Stars: ✭ 47 (+46.88%)
Mutual labels:  ocr
Handwritten-Text-Recognition
IAM dataset
Stars: ✭ 25 (-21.87%)
Mutual labels:  ocr
ocr2text
Convert a PDF via OCR to a TXT file in UTF-8 encoding
Stars: ✭ 90 (+181.25%)
Mutual labels:  ocr
i-librarian-free
I, Librarian - open-source version of a PDF managing SaaS.
Stars: ✭ 110 (+243.75%)
Mutual labels:  ocr
webgrep
Grep Web pages with extra features like JS deobfuscation and OCR
Stars: ✭ 86 (+168.75%)
Mutual labels:  ocr
lego-mindstorms-51515-jetson-nano
Combines the LEGO Mindstorms 51515 with the NVIDIA Jetson Nano
Stars: ✭ 31 (-3.12%)
Mutual labels:  ocr
ZUCC ZhenFangHelper
Automatic login, course selection, and information retrieval for the student edition of the ZhengFang educational administration system.
Stars: ✭ 36 (+12.5%)
Mutual labels:  ocr
NLP-image-to-text
code to extract text from images
Stars: ✭ 28 (-12.5%)
Mutual labels:  ocr
receipt-parser-server
Receipt parser server written in python.
Stars: ✭ 64 (+100%)
Mutual labels:  ocr

Provide OCR models for OpenCV DNN

In the original repository, the model is defined in a way that is incompatible with the OpenCV DNN module, so the module cannot directly load the converted ONNX model correctly. This fork adjusts the model structure and provides a script for conversion to ONNX.

After completing model training, use transform_to_onnx.py to convert the trained model to ONNX format; a sketch of a typical export follows.
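
For reference, here is a minimal sketch of such an export using torch.onnx.export. The TinyRecognizer stand-in, input size, file name, and opset are illustrative assumptions; the actual transform_to_onnx.py in this repository may differ.

import torch
import torch.nn as nn

# Stand-in module; the real model comes from this repository's model.py,
# with weights loaded from a trained checkpoint via load_state_dict.
class TinyRecognizer(nn.Module):
    def __init__(self, num_classes=37):  # e.g. 36 characters + 1 CTC blank (assumed)
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 100, num_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

model = TinyRecognizer().eval()
dummy = torch.randn(1, 1, 32, 100)  # grayscale 32x100 crop, the benchmark's default input size
torch.onnx.export(
    model, dummy, "recognizer.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,  # a commonly supported opset for OpenCV DNN
)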

You can use the OpenCV samples text_recog.cpp (C++) or text_recog.py (Python) to check that the model runs correctly; a minimal Python check is also sketched below. More usage examples can be found at https://github.com/opencv/opencv/blob/master/samples/dnn/text_detection.cpp.
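
As a quick sanity check from Python, a converted model can also be loaded directly with cv2.dnn.readNetFromONNX. This is a minimal sketch assuming a grayscale 32x100 input normalized to [-1, 1]; adjust the file name and preprocessing to match your model.

import cv2
import numpy as np

# Loading fails with an exception if OpenCV DNN cannot parse the ONNX file.
net = cv2.dnn.readNetFromONNX("recognizer.onnx")

# Placeholder image; in practice, pass a cropped word image.
img = np.zeros((32, 100), dtype=np.uint8)
# (pixel - 127.5) / 127.5 maps values to [-1, 1].
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0 / 127.5, size=(100, 32), mean=(127.5,))
net.setInput(blob)
out = net.forward()
print(out.shape)  # e.g. (1, sequence_length, num_classes) for a CTC head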

Pre-trained models in ONNX format are provided

These models run correctly in OpenCV DNN. Download link: https://drive.google.com/drive/folders/1cTbQ3nuZG-EKWak6emD_s8_hHXWz7lAr?usp=sharing


Below is the original README

What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis

| paper | training and evaluation data | failure cases and cleansed labels | pretrained model | Baidu version (password: rryk) |

Official PyTorch implementation of our four-stage STR framework, which most existing STR models fit into.
Using this framework allows for module-wise analysis of the contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets.
Such analyses remove the hindrances in current comparisons and clarify the performance gains of the existing modules.

Honors

Based on this framework, we took 1st place in ICDAR2013 Focused Scene Text and ICDAR2019 ArT, and 3rd place in ICDAR2017 COCO-Text and ICDAR2019 ReCTS (Task 1).
The differences between our paper and the ICDAR challenges are summarized here.

Updates

Aug 3, 2020: added a guideline for using Baidu warpctc, which reproduces the CTC results of our paper.
Dec 27, 2019: added FLOPS to our paper, plus minor updates such as log_dataset.txt and ICDAR2019-NormalizedED.
Oct 22, 2019: added confidence scores, and rearranged the output format of training logs.
Jul 31, 2019: the paper was accepted at the International Conference on Computer Vision (ICCV), Seoul 2019, as an oral talk.
Jul 25, 2019: added code for floating-point 16 (FP16) calculation; check @YacobBY's pull request.
Jul 16, 2019: added the ST_spe.zip dataset, whose word images contain special characters from the SynthText (ST) dataset; see this issue.
Jun 24, 2019: added gt.txt of failure cases, containing the path and label of each image; see image_release_190624.zip.
May 17, 2019: uploaded resources to Baidu Netdisk as well, and added Run demo (check @sharavsambuu's Colab demo also).
May 9, 2019: updated the PyTorch version from 1.0.1 to 1.1.0, switched to torch.nn.CTCLoss instead of torch-baidu-ctc, and made various minor updates.

Getting Started

Dependency

  • This work was tested with PyTorch 1.3.1, CUDA 10.1, Python 3.6, and Ubuntu 16.04.
    You may need pip3 install torch==1.3.1.
    In the paper, experiments were performed with PyTorch 0.4.1 and CUDA 9.0.
  • requirements: lmdb, pillow, torchvision, nltk, natsort
pip3 install lmdb pillow torchvision nltk natsort

Download the lmdb dataset for training and evaluation from here

data_lmdb_release.zip contains the following:
training datasets: MJSynth (MJ)[1] and SynthText (ST)[2]
validation datasets: the union of the training sets of IC13[3], IC15[4], IIIT[5], and SVT[6]
evaluation datasets: benchmark evaluation datasets, consisting of IIIT[5], SVT[6], IC03[7], IC13[3], IC15[4], SVTP[8], and CUTE[9]

Run demo with pretrained model

  1. Download the pretrained model from here
  2. Add image files to test into demo_image/
  3. Run demo.py (add the --sensitive option if you use a case-sensitive model)
CUDA_VISIBLE_DEVICES=0 python3 demo.py \
--Transformation TPS --FeatureExtraction ResNet --SequenceModeling BiLSTM --Prediction Attn \
--image_folder demo_image/ \
--saved_model TPS-ResNet-BiLSTM-Attn.pth

prediction results (demo images omitted)

| TRBA (TPS-ResNet-BiLSTM-Attn) | TRBA (case-sensitive version) |
| available                     | Available                     |
| shakeshack                    | SHARESHACK                    |
| london                        | Londen                        |
| greenstead                    | Greenstead                    |
| toast                         | TOAST                         |
| merry                         | MERRY                         |
| underground                   | underground                   |
| ronaldo                       | RONALDO                       |
| bally                         | BALLY                         |
| university                    | UNIVERSITY                    |

Training and evaluation

  1. Train CRNN[10] model
CUDA_VISIBLE_DEVICES=0 python3 train.py \
--train_data data_lmdb_release/training --valid_data data_lmdb_release/validation \
--select_data MJ-ST --batch_ratio 0.5-0.5 \
--Transformation None --FeatureExtraction VGG --SequenceModeling BiLSTM --Prediction CTC
  2. Test the CRNN[10] model. If you want to evaluate IC15-2077, check the data filtering part.
CUDA_VISIBLE_DEVICES=0 python3 test.py \
--eval_data data_lmdb_release/evaluation --benchmark_all_eval \
--Transformation None --FeatureExtraction VGG --SequenceModeling BiLSTM --Prediction CTC \
--saved_model saved_models/None-VGG-BiLSTM-CTC-Seed1111/best_accuracy.pth
  3. Try to train and test our best accuracy model, TRBA (TPS-ResNet-BiLSTM-Attn), as well (download the pretrained model).
CUDA_VISIBLE_DEVICES=0 python3 train.py \
--train_data data_lmdb_release/training --valid_data data_lmdb_release/validation \
--select_data MJ-ST --batch_ratio 0.5-0.5 \
--Transformation TPS --FeatureExtraction ResNet --SequenceModeling BiLSTM --Prediction Attn
CUDA_VISIBLE_DEVICES=0 python3 test.py \
--eval_data data_lmdb_release/evaluation --benchmark_all_eval \
--Transformation TPS --FeatureExtraction ResNet --SequenceModeling BiLSTM --Prediction Attn \
--saved_model saved_models/TPS-ResNet-BiLSTM-Attn-Seed1111/best_accuracy.pth

Arguments

  • --train_data: folder path to training lmdb dataset.
  • --valid_data: folder path to validation lmdb dataset.
  • --eval_data: folder path to evaluation (with test.py) lmdb dataset.
  • --select_data: select training data. Default is MJ-ST, which means MJ and ST are used as training data.
  • --batch_ratio: assign the ratio of each selected dataset within the batch. Default is 0.5-0.5, which means 50% of the batch is filled with MJ and the other 50% with ST.
  • --data_filtering_off: skip data filtering when creating LmdbDataset.
  • --Transformation: select Transformation module [None | TPS].
  • --FeatureExtraction: select FeatureExtraction module [VGG | RCNN | ResNet].
  • --SequenceModeling: select SequenceModeling module [None | BiLSTM].
  • --Prediction: select Prediction module [CTC | Attn].
  • --saved_model: assign saved model to evaluation.
  • --benchmark_all_eval: evaluate with the 10 evaluation dataset versions, the same as Table 1 in our paper.

Download failure cases and cleansed labels from here

image_release.zip contains failure case images and benchmark evaluation images with cleansed labels.

When you need to train on your own dataset or on non-Latin language datasets

  1. Create your own lmdb dataset.
pip3 install fire
python3 create_lmdb_dataset.py --inputPath data/ --gtFile data/gt.txt --outputPath result/

The structure of the data folder is as below.

data
├── gt.txt
└── test
    ├── word_1.png
    ├── word_2.png
    ├── word_3.png
    └── ...

At this time, gt.txt should contain {imagepath}\t{label}\n on each line.
For example (a small helper for generating this file is sketched after the example):

test/word_1.png Tiredness
test/word_2.png kills
test/word_3.png A
...
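
For instance, a small helper script (not part of this repository; the labels dict is hypothetical) could generate gt.txt from a mapping of image paths to labels:

import os

# Hypothetical annotations; replace with your own.
labels = {
    "test/word_1.png": "Tiredness",
    "test/word_2.png": "kills",
    "test/word_3.png": "A",
}
os.makedirs("data", exist_ok=True)
with open(os.path.join("data", "gt.txt"), "w", encoding="utf-8") as f:
    for path, label in labels.items():
        f.write(f"{path}\t{label}\n")  # {imagepath}\t{label}\n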
  2. Modify --select_data, --batch_ratio, and opt.character; see this issue. A hypothetical invocation is sketched below.
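
For example, a hypothetical training invocation for such a custom dataset might look like the following; the flag values are illustrative (--select_data / and --batch_ratio 1 use all data under --train_data), and --character should list every character that appears in your labels.

CUDA_VISIBLE_DEVICES=0 python3 train.py \
--train_data result/ --valid_data result/ \
--select_data / --batch_ratio 1 \
--character "0123456789abcdefghijklmnopqrstuvwxyz" \
--Transformation None --FeatureExtraction VGG --SequenceModeling BiLSTM --Prediction CTC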

Acknowledgements

This implementation is based on these repositories: crnn.pytorch and ocr_attention.

Reference

[1] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Synthetic data and artificial neural networks for natural scene text recognition. In Workshop on Deep Learning, NIPS, 2014.
[2] A. Gupta, A. Vedaldi, and A. Zisserman. Synthetic data for text localisation in natural images. In CVPR, 2016.
[3] D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, L. G. i Bigorda, S. R. Mestre, J. Mas, D. F. Mota, J. A. Almazan, and L. P. De Las Heras. ICDAR 2013 robust reading competition. In ICDAR, pages 1484–1493, 2013.
[4] D. Karatzas, L. Gomez-Bigorda, A. Nicolaou, S. Ghosh, A. Bagdanov, M. Iwamura, J. Matas, L. Neumann, V. R. Chandrasekhar, S. Lu, et al. ICDAR 2015 competition on robust reading. In ICDAR, pages 1156–1160, 2015.
[5] A. Mishra, K. Alahari, and C. Jawahar. Scene text recognition using higher order language priors. In BMVC, 2012.
[6] K. Wang, B. Babenko, and S. Belongie. End-to-end scene text recognition. In ICCV, pages 1457–1464, 2011.
[7] S. M. Lucas, A. Panaretos, L. Sosa, A. Tang, S. Wong, and R. Young. ICDAR 2003 robust reading competitions. In ICDAR, pages 682–687, 2003.
[8] T. Q. Phan, P. Shivakumara, S. Tian, and C. L. Tan. Recognizing text with perspective distortion in natural scenes. In ICCV, pages 569–576, 2013.
[9] A. Risnumawan, P. Shivakumara, C. S. Chan, and C. L. Tan. A robust arbitrary text detection system for natural scene images. In ESWA, volume 41, pages 8027–8048, 2014.
[10] B. Shi, X. Bai, and C. Yao. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. In TPAMI, volume 39, pages 2298–2304, 2017.

Citation

Please consider citing this work in your publications if it helps your research.

@inproceedings{baek2019STRcomparisons,
  title={What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis},
  author={Baek, Jeonghun and Kim, Geewook and Lee, Junyeop and Park, Sungrae and Han, Dongyoon and Yun, Sangdoo and Oh, Seong Joon and Lee, Hwalsuk},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year={2019},
  pubstate={published},
  tppubtype={inproceedings}
}

Contact

Feel free to contact us if there are any questions:
for code/paper, Jeonghun Baek ([email protected]); for collaboration, [email protected] (our team leader).

License

Copyright (c) 2019-present NAVER Corp.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
