clovaai / CLEval
License: MIT
CLEval: Character-Level Evaluation for Text Detection and Recognition Tasks
Stars: ✭ 92
Official implementation of CLEval | paper
Overview
We propose CLEval, a Character-Level Evaluation metric for text detection and recognition. To assess results at a fine-grained level, an instance matching process resolves granularity differences between ground truth and predictions, and a scoring process then performs character-level evaluation. Please refer to the paper for details. This code is based on the official ICDAR15 evaluation code.
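The core idea can be illustrated with a toy sketch (this is NOT the official algorithm, just a simplified, axis-aligned illustration of the pseudo-character-center idea; all function names below are hypothetical):

```python
# Toy sketch of CLEval's core idea: each ground-truth word is split into
# pseudo-character centers (PCCs) spaced evenly along its box, and a
# detection earns credit for every PCC that falls inside it.
# Boxes here are axis-aligned LTRB tuples for simplicity.

def pseudo_char_centers(box, text):
    """Evenly spaced (x, y) centers, one per character of `text`, in an LTRB box."""
    xmin, ymin, xmax, ymax = box
    n = len(text)
    width, cy = xmax - xmin, (ymin + ymax) / 2
    return [(xmin + width * (i + 0.5) / n, cy) for i in range(n)]

def contains(box, point):
    xmin, ymin, xmax, ymax = box
    x, y = point
    return xmin <= x <= xmax and ymin <= y <= ymax

def char_recall(gt_boxes, gt_texts, det_boxes):
    """Fraction of ground-truth characters whose PCC lies inside some detection."""
    total = matched = 0
    for box, text in zip(gt_boxes, gt_texts):
        for pcc in pseudo_char_centers(box, text):
            total += 1
            if any(contains(d, pcc) for d in det_boxes):
                matched += 1
    return matched / total if total else 0.0

# A detection covering only the left half of "HELLO" catches 2 of its 5 PCCs.
print(char_recall([(0, 0, 100, 20)], ["HELLO"], [(0, 0, 45, 20)]))  # 0.4
```

The real metric also handles rotated/curved boxes, split and merged instances, and a character-level precision term; see the paper and source for those details.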
Simplified Method Description
Notification
- 15 Jun, 2020 | initial release
- The evaluation results reported in our paper were measured with the CASE_SENSITIVE option set to False.
Supported annotation types
- LTRB(xmin, ymin, xmax, ymax)
- QUAD(x1, y1, x2, y2, x3, y3, x4, y4)
- POLY(x1, y1, x2, y2, ..., x_2n, y_2n)
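As a quick illustration of how the three annotation types relate, an LTRB box maps to a clockwise QUAD, and any QUAD/POLY point list can be reduced to its axis-aligned bounding LTRB (the helper names below are hypothetical, not part of the CLEval API):

```python
# Convert between the annotation types listed above.

def ltrb_to_quad(xmin, ymin, xmax, ymax):
    """LTRB box as a QUAD point list, clockwise from the top-left corner."""
    return [xmin, ymin, xmax, ymin, xmax, ymax, xmin, ymax]

def poly_to_ltrb(points):
    """Axis-aligned bounding LTRB of a flat QUAD/POLY point list [x1, y1, ...]."""
    xs, ys = points[0::2], points[1::2]
    return min(xs), min(ys), max(xs), max(ys)

quad = ltrb_to_quad(10, 20, 110, 60)
# quad == [10, 20, 110, 20, 110, 60, 10, 60]
```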
Supported datasets
- ICDAR 2013 Focused Scene Text
- ICDAR 2015 Incidental Scene Text
- TotalText
- Any other dataset with a format similar to those above
Getting started
Clone repository
git clone https://github.com/clovaai/CLEval.git
Requirements
- Python 3.x
- See the requirements.txt file for package dependencies. To install them, run:
pip3 install -r requirements.txt
Instructions for the standalone scripts
Detection evaluation
python script.py -g=gt/gt_IC13.zip -s=[result.zip] --BOX_TYPE=LTRB # IC13
python script.py -g=gt/gt_IC15.zip -s=[result.zip] # IC15
python script.py -g=gt/gt_TotalText.zip -s=[result.zip] --BOX_TYPE=POLY # TotalText
- Notes
  - The default value of BOX_TYPE is QUAD; it can be set explicitly with --BOX_TYPE=QUAD when running evaluation on the IC15 dataset.
  - Add the --TRANSCRIPTION option if the result file contains transcriptions.
  - Add the --CONFIDENCES option if the result file contains confidence scores.
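For orientation, a QUAD result line is typically comma-separated coordinates, optionally followed by a confidence and a transcription. The exact column order expected by script.py should be verified against the repository; the sketch below (with a hypothetical parser name) only illustrates that layout:

```python
# Illustrative sketch of parsing one QUAD-format result line:
# "x1,y1,x2,y2,x3,y3,x4,y4[,confidence][,transcription]"
# The exact format expected by script.py is an assumption here; check the repo.

def parse_quad_line(line, with_confidence=False, with_transcription=False):
    parts = line.strip().split(",")
    coords = [float(v) for v in parts[:8]]      # 8 coordinates of the QUAD
    rest = parts[8:]
    confidence = float(rest.pop(0)) if with_confidence else None
    # Re-join in case the transcription itself contains commas.
    transcription = ",".join(rest) if with_transcription else None
    return coords, confidence, transcription
```

For example, a line like "1,2,3,4,5,6,7,8,0.9,HELLO" parsed with both flags yields the eight coordinates, the confidence 0.9, and the transcription "HELLO".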
End-to-end evaluation
python script.py -g=gt/gt_IC13.zip -s=[result.zip] --E2E --BOX_TYPE=LTRB # IC13
python script.py -g=gt/gt_IC15.zip -s=[result.zip] --E2E # IC15
python script.py -g=gt/gt_TotalText.zip -s=[result.zip] --E2E --BOX_TYPE=POLY # TotalText
- Notes
  - Adding --E2E automatically enables the --TRANSCRIPTION option; make sure the transcriptions are included in the result file.
  - Add the --CONFIDENCES option if the result file contains confidence scores.
Parameter list
name | type | default | description
---|---|---|---
--BOX_TYPE | string | QUAD | annotation type of box (LTRB, QUAD, POLY)
--TRANSCRIPTION | boolean | False | set True if the result file contains transcriptions
--CONFIDENCES | boolean | False | set True if the result file contains confidence scores
--E2E | boolean | False | measure end-to-end evaluation (otherwise, detection evaluation only)
--CASE_SENSITIVE | boolean | True | set True to evaluate case-sensitively (only used in end-to-end evaluation)
- Note: please refer to the arg_parser.py file for additional parameters and the default settings used internally.
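The options in the table above could be declared roughly as follows; this is a hedged argparse sketch mirroring the table, not the actual contents of arg_parser.py:

```python
# Minimal argparse sketch of the parameter list above (names and defaults
# taken from the table; the real definitions live in arg_parser.py).
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="CLEval options (sketch)")
    p.add_argument("--BOX_TYPE", choices=["LTRB", "QUAD", "POLY"], default="QUAD")
    p.add_argument("--TRANSCRIPTION", action="store_true")
    p.add_argument("--CONFIDENCES", action="store_true")
    p.add_argument("--E2E", action="store_true")
    # Boolean with default True: parse the string "True"/"False" explicitly.
    p.add_argument("--CASE_SENSITIVE", type=lambda s: s.lower() == "true",
                   default=True)
    return p

args = build_parser().parse_args(["--BOX_TYPE=LTRB", "--E2E"])
# args.BOX_TYPE == "LTRB", args.E2E is True, args.CASE_SENSITIVE is True
```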
Instructions for the webserver
Procedure
- Compress the GT file of the dataset you want to evaluate into gt.zip and the image files into images.zip.
- Copy the two files into the ./gt/ directory.
- Run web.py with the BOX_TYPE option:
python web.py --BOX_TYPE=[LTRB,QUAD,POLY] --PORT=8080
Parameters for the webserver
name | type | default | description
---|---|---|---
--BOX_TYPE | string | QUAD | annotation type of box (LTRB, QUAD, POLY)
--PORT | integer | 8080 | port number for web visualization
Web Server screenshots
TODO
- [ ] Support to run the webserver with the designated GT and image files
- [ ] Calculate the length of text based on graphemes for multi-lingual datasets
Citation
@article{baek2020cleval,
  title={CLEval: Character-Level Evaluation for Text Detection and Recognition Tasks},
  author={Baek, Youngmin and Nam, Daehyun and Park, Sungrae and Lee, Junyeop and Shin, Seung and Baek, Jeonghun and Lee, Chae Young and Lee, Hwalsuk},
  journal={arXiv preprint arXiv:2006.06244},
  year={2020}
}
Contact us
CLEval was proposed to enable fair evaluation in the OCR community, so we want to hear from many researchers. We welcome any feedback on our metric, and appreciate pull requests with comments or improvements.
License
Copyright (c) 2020-present NAVER Corp.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.