im2latex tensorflow implementation

This is a TensorFlow implementation of the HarvardNLP paper - What You Get Is What You See: A Visual Markup Decompiler.

This is also a potential solution to OpenAI's Requests for Research problem - im2latex.

The paper (http://arxiv.org/pdf/1609.04938v1.pdf) provides technical details of the model.

Original Torch implementation of the paper: https://github.com/harvardnlp/im2markup/blob/master/

What You Get Is What You See: A Visual Markup Decompiler  
Yuntian Deng, Anssi Kanervisto, and Alexander M. Rush
http://arxiv.org/pdf/1609.04938v1.pdf

This is a general-purpose, deep learning-based system to decompile an image into presentational markup. For example, we can infer the LaTeX or HTML source from a rendered image.

An example input is a rendered LaTeX formula:

The goal is to infer the LaTeX formula that can render such an image:

 d s _ { 1 1 } ^ { 2 } = d x ^ { + } d x ^ { - } + l _ { p } ^ { 9 } \frac { p _ { - } } { r ^ { 7 } } \delta ( x ^ { - } ) d x ^ { - } d x ^ { - } + d x _ { 1 } ^ { 2 } + \; \cdots \; + d x _ { 9 } ^ { 2 }

Sample results from this implementation


For more results, view the results_validset.html and results_testset.html files.

Prerequisites

Most of the code is written in TensorFlow, with Python used for preprocessing.

Preprocess

The preprocessing for this dataset exactly reproduces that of the original Torch implementation by the HarvardNLP group.

Python

  • Pillow
  • numpy

Optional: We use Node.js and KaTeX for preprocessing.

pdflatex Installation

pdflatex is used for rendering LaTeX during evaluation.

ImageMagick convert Installation

convert is used for rendering LaTeX during evaluation.

Webkit2png Installation

Webkit2png is used for rendering HTML during evaluation.

Preprocessing Instructions

The images in the dataset contain a LaTeX formula rendered on a full page. To accelerate training, we need to preprocess the images.

Please download the training data from https://zenodo.org/record/56198#.WFojcXV94jA and extract it into the source (master) folder.

cd im2markup
python scripts/preprocessing/preprocess_images.py --input-dir ../formula_images --output-dir ../images_processed

The above command will crop the formula area, and group images of similar sizes to facilitate batching.
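
The cropping step amounts to finding the bounding box of the formula's ink pixels and cutting with a small margin. A minimal sketch of the idea follows; preprocess_images.py above is the authoritative version, and the threshold and padding values here are assumptions.

# Sketch only: preprocess_images.py is the authoritative implementation.
from PIL import Image
import numpy as np

def crop_formula(in_path, out_path, pad=8, threshold=250):
    """Crop a rendered page to the formula's bounding box plus a margin."""
    img = Image.open(in_path).convert("L")        # grayscale
    arr = np.asarray(img)
    ys, xs = np.nonzero(arr < threshold)          # "ink" pixels (assumed cutoff)
    if xs.size == 0:                              # blank page: leave unchanged
        img.save(out_path)
        return
    box = (max(int(xs.min()) - pad, 0), max(int(ys.min()) - pad, 0),
           min(int(xs.max()) + pad, arr.shape[1]), min(int(ys.max()) + pad, arr.shape[0]))
    img.crop(box).save(out_path)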

Next, the LaTeX formulas need to be tokenized or normalized.

python scripts/preprocessing/preprocess_formulas.py --mode normalize --input-file ../im2latex_formulas.lst --output-file formulas.norm.lst

The above command will normalize the formulas. Note that this command will produce some error messages since some formulas cannot be parsed by the KaTeX parser.

Then we need to prepare the train, validation, and test files. We exclude large images from the training and validation sets, and we also ignore formulas with too many tokens or with grammar errors.

python scripts/preprocessing/preprocess_filter.py --filter --image-dir ../images_processed --label-path formulas.norm.lst --data-path ../im2latex_train.lst --output-path train.lst
python scripts/preprocessing/preprocess_filter.py --filter --image-dir ../images_processed --label-path formulas.norm.lst --data-path ../im2latex_validate.lst --output-path validate.lst
python scripts/preprocessing/preprocess_filter.py --no-filter --image-dir ../images_processed --label-path formulas.norm.lst --data-path ../im2latex_test.lst --output-path test.lst
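
Conceptually, the filter is a per-example size and length check. The sketch below illustrates the rule; the exact limits live in preprocess_filter.py, and the numbers used here are assumptions.

# Sketch only: preprocess_filter.py is the authoritative implementation.
# The limits below are illustrative assumptions, not the script's values.
MAX_WIDTH, MAX_HEIGHT, MAX_TOKENS = 500, 160, 150

def keep_example(image_width, image_height, formula):
    """Return True if the example survives the size/length filter."""
    return (image_width <= MAX_WIDTH
            and image_height <= MAX_HEIGHT
            and len(formula.split()) <= MAX_TOKENS)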

Finally, we generate the vocabulary from the training set. All tokens occurring at most once are excluded from the vocabulary.

python scripts/preprocessing/generate_latex_vocab.py --data-path train.lst --label-path formulas.norm.lst --output-file latex_vocab.txt
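
The vocabulary step is a token-frequency count with a cutoff. The sketch below shows the idea; generate_latex_vocab.py is the authoritative version, and for simplicity this counts over the whole normalized formula file rather than only the lines referenced by train.lst.

# Sketch only: generate_latex_vocab.py is the authoritative implementation.
from collections import Counter

counts = Counter()
with open("formulas.norm.lst") as f:
    for line in f:
        counts.update(line.split())   # normalized formulas are space-tokenized

vocab = sorted(tok for tok, n in counts.items() if n > 1)   # drop singletons
with open("latex_vocab.txt", "w") as f:
    f.write("\n".join(vocab))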

The train, validation, and test images need to be grouped into buckets based on image size (height, width) to facilitate batch processing.

train_buckets.npy, valid_buckets.npy, and test_buckets.npy can be generated using the DataProcessing.ipynb notebook.

### Run the individual cells from this notebook
ipython notebook DataProcessing.ipynb
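
The bucketing itself is just a grouping of file names by image dimensions. A minimal sketch of the idea follows; DataProcessing.ipynb is the authoritative version, and the file layout assumed here mirrors the preprocessing output above.

# Sketch only: DataProcessing.ipynb is the authoritative implementation.
import os
from collections import defaultdict

import numpy as np
from PIL import Image

buckets = defaultdict(list)
for name in os.listdir("images_processed"):
    with Image.open(os.path.join("images_processed", name)) as img:
        w, h = img.size
    buckets[(h, w)].append(name)

np.save("train_buckets.npy", dict(buckets))   # load with allow_pickle=True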

Train

python attention.py

Default hyperparameters used (see the shape sketch after this list):

  • BATCH_SIZE = 20
  • EMB_DIM = 80
  • ENC_DIM = 256
  • DEC_DIM = ENC_DIM*2
  • D = 512 (#channels in feature grid)
  • V = 502 (vocab size)
  • NB_EPOCHS = 50
  • H = 20 (Maximum height of feature grid)
  • W = 50 (Maximum width of feature grid)
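
To make the shapes concrete: the encoder produces an H x W feature grid with D channels, and at each decoding step the decoder attends over the H*W locations to form a context vector. The NumPy sketch below runs one such step with the dimensions above; the weight matrix and hidden state are random stand-ins, not the trained model's parameters.

# Sketch only: one soft-attention step over the feature grid.
import numpy as np

D, H, W = 512, 20, 50                        # channels and max grid size
DEC_DIM = 512                                # decoder hidden size (ENC_DIM * 2)

features = np.random.randn(H * W, D)         # stand-in for the CNN encoder output
h_t = np.random.randn(DEC_DIM)               # stand-in for the decoder state
W_att = 0.01 * np.random.randn(D, DEC_DIM)   # stand-in for a learned projection

scores = features @ (W_att @ h_t)            # one score per grid location, (H*W,)
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                         # softmax over the H*W locations

context = alpha @ features                   # attention-weighted average, (D,)
print(context.shape)                         # (512,)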

The training NLL drops to 0.08 after 18 epochs of training on a 24 GB Nvidia M40 GPU.

Test

The predict() function in the attention.py script can be called to predict from the validation or test sets.

The Predict.ipynb notebook displays and renders the results saved by the predict() function.
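
At test time the decoder emits one token per step until an end token (or a length cap) is reached. The schematic below shows the generic greedy loop; the names and signature are illustrative, not attention.py's actual API.

# Schematic only: names here are illustrative, not attention.py's API.
import numpy as np

def greedy_decode(step_fn, start_id, end_id, max_len=150):
    """step_fn(prev_id, state) -> (logits, state); returns decoded token ids."""
    tokens, state, prev = [], None, start_id
    for _ in range(max_len):
        logits, state = step_fn(prev, state)
        prev = int(np.argmax(logits))
        if prev == end_id:
            break
        tokens.append(prev)
    return tokens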

Evaluate

attention.py scores the training and validation sets after each epoch (measuring mean train NLL and perplexity).
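
For reference, perplexity here is the usual exponential of the mean per-token NLL, so the two scores move together:

import numpy as np
print(np.exp(0.08))   # perplexity at the training NLL reported above, ~1.083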

Scores from this implementation

(Score plots: results_1, results_2)

Weight files

Google Drive

Visualizing the attention mechanism

(Attention visualizations: att_1 through att_11)
