
ufownl / alpr_utils

License: GPL-3.0
ALPR model in unconstrained scenarios for Chinese license plates

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to alpr_utils

vrpdr
Deep Learning Applied To Vehicle Registration Plate Detection and Recognition in PyTorch.
Stars: ✭ 36 (-77.22%)
Mutual labels:  license-plate-recognition, license-plate-detection
open-lpr
Open Source and Free License Plate Recognition Software
Stars: ✭ 74 (-53.16%)
Mutual labels:  license-plate-recognition, license-plate-detection
license-plate-detect-recoginition-pytorch
Deep-learning license plate detection and recognition; detection results include the plate bounding box and its 4 corner points; runs on the PyTorch framework.
Stars: ✭ 77 (-51.27%)
Mutual labels:  license-plate-recognition, license-plate-detection
sparql-transformer
A more handy way to use SPARQL data in your web app
Stars: ✭ 38 (-75.95%)
Mutual labels:  transformer
les-military-mrc-rank7
Les Cup: rank-7 solution for the 2nd national "Military Intelligent Machine Reading" challenge.
Stars: ✭ 37 (-76.58%)
Mutual labels:  transformer
dingo-serializer-switch
A middleware to switch fractal serializers in dingo
Stars: ✭ 49 (-68.99%)
Mutual labels:  transformer
YaEtl
Yet Another ETL in PHP
Stars: ✭ 60 (-62.03%)
Mutual labels:  transformer
TransBTS
This repo provides the official code for : 1) TransBTS: Multimodal Brain Tumor Segmentation Using Transformer (https://arxiv.org/abs/2103.04430) , accepted by MICCAI2021. 2) TransBTSV2: Towards Better and More Efficient Volumetric Segmentation of Medical Images(https://arxiv.org/abs/2201.12785).
Stars: ✭ 254 (+60.76%)
Mutual labels:  transformer
catr
Image Captioning Using Transformer
Stars: ✭ 206 (+30.38%)
Mutual labels:  transformer
Neural-Scam-Artist
Web Scraping, Document Deduplication & GPT-2 Fine-tuning with a newly created scam dataset.
Stars: ✭ 18 (-88.61%)
Mutual labels:  transformer
VideoTransformer-pytorch
PyTorch implementation of a collections of scalable Video Transformer Benchmarks.
Stars: ✭ 159 (+0.63%)
Mutual labels:  transformer
Variational-Transformer
Variational Transformers for Diverse Response Generation
Stars: ✭ 79 (-50%)
Mutual labels:  transformer
text simplification
Text Simplification Model based on Encoder-Decoder (includes Transformer and Seq2Seq) model.
Stars: ✭ 66 (-58.23%)
Mutual labels:  transformer
TokenLabeling
Pytorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers"
Stars: ✭ 385 (+143.67%)
Mutual labels:  transformer
TabFormer
Code & Data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP, 2021)
Stars: ✭ 209 (+32.28%)
Mutual labels:  transformer
sister
SImple SenTence EmbeddeR
Stars: ✭ 66 (-58.23%)
Mutual labels:  transformer
Video-Action-Transformer-Network-Pytorch-
Implementation of the paper Video Action Transformer Network
Stars: ✭ 126 (-20.25%)
Mutual labels:  transformer
transformer-ls
Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021).
Stars: ✭ 201 (+27.22%)
Mutual labels:  transformer
License-plate-recognition
License plate recognition using "Darknet yolov3-tiny".
Stars: ✭ 90 (-43.04%)
Mutual labels:  chinese-license-plate
transformer
A simple TensorFlow implementation of the Transformer
Stars: ✭ 25 (-84.18%)
Mutual labels:  transformer

ALPR utils

This is a deep learning model that detects and recognizes Chinese license plates in unconstrained scenarios.


Requirements

Download models

You can check out the pre-trained models in the pretrained/* branches.
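One way to fetch a pre-trained model is to list and check out those branches. This is a sketch with a placeholder branch name, assuming the repository is hosted on GitHub under ufownl/alpr_utils:

```shell
# Sketch; the GitHub URL is an assumption, and <name> is a placeholder.
git clone https://github.com/ufownl/alpr_utils.git
cd alpr_utils
git branch -r --list 'origin/pretrained/*'  # list the pre-trained model branches
git checkout pretrained/<name>              # substitute a branch from the list
```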

Run a CLI demo

Simplest:

python3 test.py /path/to/image

Details:

$ python3 test.py --help
usage: test.py [-h] [--dims DIMS] [--threshold THRESHOLD] [--plt_w PLT_W]
               [--plt_h PLT_H] [--seq_len SEQ_LEN] [--no_yolo] [--beam]
               [--beam_size BEAM_SIZE] [--device_id DEVICE_ID] [--gpu]
               IMG [IMG ...]

Start an ALPR tester.

positional arguments:
  IMG                   path of the image file[s]

optional arguments:
  -h, --help            show this help message and exit
  --dims DIMS           set the sample dimensions (default: 208)
  --threshold THRESHOLD
                        set the positive threshold (default: 0.9)
  --plt_w PLT_W         set the max width of output plate images (default:
                        144)
  --plt_h PLT_H         set the max height of output plate images (default:
                        48)
  --seq_len SEQ_LEN     set the max length of output sequences (default: 8)
  --no_yolo             do not extract automobiles using YOLOv3
  --beam                use beam search
  --beam_size BEAM_SIZE
                        set the size of beam (default: 5)
  --device_id DEVICE_ID
                        select the device the model runs on (default: 0)
  --gpu                 use GPU acceleration
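The `--beam` and `--beam_size` options enable beam-search decoding of the plate character sequence. As a rough illustration only (this is not the repo's decoder; `beam_search` and its inputs are hypothetical), beam search keeps the `beam_size` best-scoring partial sequences at each step instead of greedily committing to a single character:

```python
import math

def beam_search(step_log_probs, beam_size=5):
    """Decode the best token sequence from per-position log-probabilities.

    step_log_probs: list of {token: log_prob} dicts, one per output position.
    """
    beams = [([], 0.0)]  # (sequence, cumulative log-probability)
    for dist in step_log_probs:
        # Extend every surviving beam with every candidate token.
        candidates = [
            (seq + [tok], score + lp)
            for seq, score in beams
            for tok, lp in dist.items()
        ]
        # Keep only the `beam_size` highest-scoring partial sequences.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]
    return beams[0][0]  # best-scoring full sequence

# Toy example with two output positions.
steps = [
    {"A": math.log(0.6), "B": math.log(0.4)},
    {"C": math.log(0.7), "D": math.log(0.3)},
]
print(beam_search(steps, beam_size=2))  # -> ['A', 'C']
```

A larger `--beam_size` explores more alternatives at each position at the cost of more computation, which can recover sequences that a greedy (size-1) decoder would miss.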

Run a demo server

Simplest:

python3 server.py

Details:

$ python3 server.py --help
usage: server.py [-h] [--dims DIMS] [--threshold THRESHOLD] [--plt_w PLT_W]
                 [--plt_h PLT_H] [--seq_len SEQ_LEN] [--beam_size BEAM_SIZE]
                 [--no_yolo] [--addr ADDR] [--port PORT]
                 [--device_id DEVICE_ID] [--gpu]

Start an ALPR demo server.

optional arguments:
  -h, --help            show this help message and exit
  --dims DIMS           set the sample dimensions (default: 208)
  --threshold THRESHOLD
                        set the positive threshold (default: 0.9)
  --plt_w PLT_W         set the max width of output plate images (default:
                        144)
  --plt_h PLT_H         set the max height of output plate images (default:
                        48)
  --seq_len SEQ_LEN     set the max length of output sequences (default: 8)
  --beam_size BEAM_SIZE
                        set the size of beam (default: 5)
  --addr ADDR           set address of ALPR server (default: 0.0.0.0)
  --port PORT           set port of ALPR server (default: 80)
  --device_id DEVICE_ID
                        select the device the model runs on (default: 0)
  --gpu                 use GPU acceleration

References

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].