
autonise / Craft Remade

License: MIT
Implementation of CRAFT Text Detection

Programming Languages

Python

Projects that are alternatives of or similar to Craft Remade

Craft Pytorch
Official implementation of Character Region Awareness for Text Detection (CRAFT)
Stars: ✭ 2,220 (+1648.03%)
Mutual labels:  craft, text-detection, ocr, detection
East icpr
Forked from argman/EAST for the ICPR MTWI 2018 CHALLENGE
Stars: ✭ 154 (+21.26%)
Mutual labels:  text-detection, ocr, detection
craft-text-detector
Packaged, Pytorch-based, easy to use, cross-platform version of the CRAFT text detector
Stars: ✭ 151 (+18.9%)
Mutual labels:  ocr, craft, text-detection
Dbnet.pytorch
A pytorch re-implementation of Real-time Scene Text Detection with Differentiable Binarization
Stars: ✭ 435 (+242.52%)
Mutual labels:  text-detection, ocr
Craft Reimplementation
CRAFT-PyTorch: Character Region Awareness for Text Detection, re-implemented in PyTorch
Stars: ✭ 343 (+170.08%)
Mutual labels:  craft, text-detection
React Native Tesseract Ocr
Tesseract OCR wrapper for React Native
Stars: ✭ 384 (+202.36%)
Mutual labels:  text-detection, ocr
Text Detection Ctpn
Text detection mainly based on the CTPN model in TensorFlow; includes ID card detection (Connectionist Text Proposal Network)
Stars: ✭ 3,242 (+2452.76%)
Mutual labels:  text-detection, ocr
Keras Ocr
A packaged and flexible version of the CRAFT text detector and Keras CRNN recognition model.
Stars: ✭ 782 (+515.75%)
Mutual labels:  text-detection, ocr
Tensorflow psenet
A TensorFlow re-implementation of PSENet: Shape Robust Text Detection with Progressive Scale Expansion Network
Stars: ✭ 472 (+271.65%)
Mutual labels:  text-detection, ocr
Image Text Localization Recognition
A general list of resources for image text localization and recognition (a collection of papers and implementations for scene text localization and recognition)
Stars: ✭ 788 (+520.47%)
Mutual labels:  text-detection, ocr
Ctpn
Detecting Text in Natural Image with Connectionist Text Proposal Network (ECCV'16)
Stars: ✭ 1,220 (+860.63%)
Mutual labels:  text-detection, ocr
Awesome Ocr Resources
A collection of resources (including the papers and datasets) of OCR (Optical Character Recognition).
Stars: ✭ 335 (+163.78%)
Mutual labels:  text-detection, ocr
Megreader
A research project for text detection and recognition using PyTorch 1.2.
Stars: ✭ 332 (+161.42%)
Mutual labels:  text-detection, ocr
Psenet.pytorch
A pytorch re-implementation of PSENet: Shape Robust Text Detection with Progressive Scale Expansion Network
Stars: ✭ 416 (+227.56%)
Mutual labels:  text-detection, ocr
Chineseaddress ocr
OCR for Chinese addresses in photographed documents, implemented using CTPN + CTC + address correction
Stars: ✭ 309 (+143.31%)
Mutual labels:  text-detection, ocr
Seglink
An implementation of the SegLink algorithm from the paper Detecting Oriented Text in Natural Images by Linking Segments
Stars: ✭ 479 (+277.17%)
Mutual labels:  text-detection, ocr
Eyevis
Android based Vocal Vision for Visually Impaired. Object Detection, Voice Assistance, Optical Character Reader, Read Aloud, Face Recognition, Landmark Recognition, Image Labelling etc.
Stars: ✭ 48 (-62.2%)
Mutual labels:  ocr, detection
Keras Ctpn
A Keras re-implementation of the scene text detection network CTPN: "Detecting Text in Natural Image with Connectionist Text Proposal Network"; you are welcome to try it, follow the project, and report issues
Stars: ✭ 89 (-29.92%)
Mutual labels:  text-detection, ocr
Tabulo
Table Detection and Extraction Using Deep Learning (built in Python, using Luminoth, TensorFlow<2.0 and Sonnet)
Stars: ✭ 110 (-13.39%)
Mutual labels:  ocr, detection
PSENet-Tensorflow
TensorFlow implementation of the PSENet text detector (Shape Robust Text Detection with Progressive Scale Expansion Network)
Stars: ✭ 51 (-59.84%)
Mutual labels:  ocr, text-detection

Re-Implementing CRAFT: Character Region Awareness for Text Detection

Objective

  • [X] Reproduce weak-supervision training as described in the paper: https://arxiv.org/pdf/1904.01941.pdf
  • [ ] Generate character bounding boxes on all the popular datasets.
  • [ ] Expose pre-trained models through a command-line interface to synthesize results on custom images.

Clone the repository

git clone https://github.com/autonise/CRAFT-Remade.git
cd CRAFT-Remade

Option 1: Conda Environment Installation

conda env create -f environment.yml
conda activate craft

Option 2: Pip Installation

pip install -r requirements.txt
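
After either installation route, an optional sanity check (assuming the requirements include PyTorch, which the project builds on) confirms that the framework imports and reports whether a GPU is visible:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"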

Running on custom images

Put the images inside a folder.
Get a pre-trained model from the pre-trained model list below (currently only the strong-supervision model trained on SynthText is available).
Run the command -

python main.py synthesize --model=./model/final_model.pkl --folder=./input
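
For example, assuming the images have been copied into a folder named input next to the repository and the SynthText model has been saved as ./model/final_model.pkl (both paths simply mirror the command above):

mkdir -p input model
cp /path/to/your/images/*.jpg input/
python main.py synthesize --model=./model/final_model.pkl --folder=./input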

Results

Dataset      Recall  Precision  F-score
ICDAR2013    TBD     TBD        0.8201 (improving)
ICDAR2015    TBD     TBD        TBD (coming soon)
ICDAR2017    TBD     TBD        TBD (coming soon)
Total Text   TBD     TBD        TBD (coming soon)
MS COCO      TBD     TBD        TBD (coming soon)

Pre-trained models

Strong Supervision

SynthText (CRAFT model) - https://drive.google.com/open?id=1QH0B-iQ1Ob2HkWCQ2bVCsLPwVSmbcSgN
SynthText (ResNet-UNet model) - https://drive.google.com/file/d/1qnLM_iMnR1P_6OLoUoFtrReHe4bpFW3T
Original model by the authors - https://drive.google.com/open?id=1ZQE0tK9498RhLcXwYRgod4upmrYWdgl9
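
If you prefer fetching these from the command line, the third-party gdown tool (not part of this repository) can download Google Drive files by ID; the output path below merely mirrors the path used in the synthesize command above and is an assumption, not a requirement:

pip install gdown
gdown "https://drive.google.com/uc?id=1QH0B-iQ1Ob2HkWCQ2bVCsLPwVSmbcSgN" -O model/final_model.pkl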

Weak Supervision

Pre-generated on popular datasets

  • [ ] ICDAR 2013 - In Progress
  • [ ] ICDAR 2015 - In Progress
  • [ ] ICDAR 2017 - Yet to be completed
  • [ ] Total Text - Yet to be completed
  • [ ] MS-COCO - Yet to be completed

How to train the model from scratch

Strong Supervision on Synthetic dataset

Download the pre-trained model for the synthetic dataset at https://drive.google.com/open?id=1qnLM_iMnR1P_6OLoUoFtrReHe4bpFW3T
Otherwise, if you want to train from scratch, run the command -

python main.py train_synth


To test your model on SynthText, run the command -

python main.py test_synth --model /path/to/model

Weak Supervision

First, pre-process your dataset

Currently supported: IC13, IC15

The assumed structure of the dataset is

.
├── Generated (This folder will contain the weak-supervision intermediate targets)
└── Images
    ├── test
    │   ├── img_1.jpg
    │   ├── img_2.jpg
    │   ├── img_3.jpg
    │   ├── img_4.jpg
    │   ├── img_5.jpg
    │   └── ...
    ├── test_gt.json (This can be generated using the pre_process function described below)
    ├── train
    │   ├── img_1.jpg
    │   ├── img_2.jpg
    │   ├── img_3.jpg
    │   ├── img_4.jpg
    │   ├── img_5.jpg
    │   └── ...
    └── train_gt.json (This can be generated using the pre_process function described below)
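
If you are laying this structure out by hand, the skeleton (folder names taken directly from the tree above; the images and the two JSON files come from your dataset and the pre_process step) can be created with:

mkdir -p Generated Images/train Images/test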

To generate the JSON files for IC13 -

In config.py, change the corresponding values:

'ic13': {
	'train': {
		'target_json_path': None,  # path where the target JSON file will be written (Images/train_gt.json)
		'target_folder_path': None,  # path to the downloaded training ground truth (ch2_training_localization_transcription_gt)
	},
	'test': {
		'target_json_path': None,  # path where the target JSON file will be written (Images/test_gt.json)
		'target_folder_path': None,  # path to the downloaded test ground truth (Challenge2_Test_Task1_GT)
	},
},
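
For reference, a filled-in entry might look like the following; the absolute paths are hypothetical examples, not values shipped with the repository:

'ic13': {
	'train': {
		'target_json_path': '/data/ic13/Images/train_gt.json',  # hypothetical path
		'target_folder_path': '/data/ic13/ch2_training_localization_transcription_gt',  # hypothetical path
	},
	'test': {
		'target_json_path': '/data/ic13/Images/test_gt.json',  # hypothetical path
		'target_folder_path': '/data/ic13/Challenge2_Test_Task1_GT',  # hypothetical path
	},
},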

Run the command -

python main.py pre_process --dataset IC13

To generate the JSON files for IC15 -

In config.py, change the corresponding values:

'ic15': {
	'train': {
		'target_json_path': None,  # path where the target JSON file will be written (Images/train_gt.json)
		'target_folder_path': None,  # path to the downloaded training ground truth (ch4_training_localization_transcription_gt)
	},
	'test': {
		'target_json_path': None,  # path where the target JSON file will be written (Images/test_gt.json)
		'target_folder_path': None,  # path to the downloaded test ground truth (Challenge4_Test_Task1_GT)
	},
},

Run the command -

python main.py pre_process --dataset IC15

Second, train your model with weak supervision


Run the command -

python main.py weak_supervision --model /path/to/strong/supervision/model --iterations <num_of_iterations(20)>

This will train the weak-supervision model for the number of iterations you specified.
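
Putting the steps together, a typical end-to-end run (using only the commands documented above, with placeholder paths for the models) looks like:

python main.py train_synth
python main.py test_synth --model /path/to/model
python main.py pre_process --dataset IC13
python main.py weak_supervision --model /path/to/strong/supervision/model --iterations 20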
