
ankanbhunia / AdverseBiNet

Licence: other
Improving Document Binarization via Adversarial Noise-Texture Augmentation

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to AdverseBiNet

Ali Pytorch
PyTorch implementation of Adversarially Learned Inference (BiGAN).
Stars: ✭ 61 (+79.41%)
Mutual labels:  generative-adversarial-network, adversarial-learning
Selectiongan
[CVPR 2019 Oral] Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation
Stars: ✭ 366 (+976.47%)
Mutual labels:  generative-adversarial-network, adversarial-learning
Adaptsegnet
Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018 (spotlight)
Stars: ✭ 654 (+1823.53%)
Mutual labels:  generative-adversarial-network, adversarial-learning
Lggan
[CVPR 2020] Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Stars: ✭ 97 (+185.29%)
Mutual labels:  generative-adversarial-network, adversarial-learning
Semantic Pyramid for Image Generation
PyTorch reimplementation of the paper: "Semantic Pyramid for Image Generation" [CVPR 2020].
Stars: ✭ 45 (+32.35%)
Mutual labels:  generative-adversarial-network, adversarial-learning
Awesome Tensorlayer
A curated list of dedicated resources and applications
Stars: ✭ 248 (+629.41%)
Mutual labels:  generative-adversarial-network, adversarial-learning
Gpnd
Generative Probabilistic Novelty Detection with Adversarial Autoencoders
Stars: ✭ 112 (+229.41%)
Mutual labels:  generative-adversarial-network, adversarial-learning
Adversarial-Learning-for-Generative-Conversational-Agents
This repository contains a new adversarial training method for Generative Conversational Agents
Stars: ✭ 71 (+108.82%)
Mutual labels:  generative-adversarial-network, adversarial-learning
BicycleGAN-pytorch
Pytorch implementation of BicycleGAN with implementation details
Stars: ✭ 99 (+191.18%)
Mutual labels:  generative-adversarial-network
tulip
Scaleable input gradient regularization
Stars: ✭ 19 (-44.12%)
Mutual labels:  adversarial-learning
tfjs-gan
Simple GAN example using tensorflow JS core
Stars: ✭ 56 (+64.71%)
Mutual labels:  generative-adversarial-network
CS231n
PyTorch/Tensorflow solutions for Stanford's CS231n: "CNNs for Visual Recognition"
Stars: ✭ 47 (+38.24%)
Mutual labels:  generative-adversarial-network
IrwGAN
Official pytorch implementation of the IrwGAN for unaligned image-to-image translation
Stars: ✭ 33 (-2.94%)
Mutual labels:  generative-adversarial-network
Improved-Wasserstein-GAN-application-on-MRI-images
Improved Wasserstein GAN (WGAN-GP) application on medical (MRI) images
Stars: ✭ 23 (-32.35%)
Mutual labels:  generative-adversarial-network
catgan pytorch
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
Stars: ✭ 50 (+47.06%)
Mutual labels:  generative-adversarial-network
BicycleGAN
Tensorflow implementation of the NIPS paper "Toward Multimodal Image-to-Image Translation"
Stars: ✭ 30 (-11.76%)
Mutual labels:  generative-adversarial-network
CWR
Code and dataset for Single Underwater Image Restoration by Contrastive Learning, IGARSS 2021, oral.
Stars: ✭ 43 (+26.47%)
Mutual labels:  generative-adversarial-network
Adventures-with-GANS
Showcasing various fun adventures with GANs
Stars: ✭ 13 (-61.76%)
Mutual labels:  generative-adversarial-network
Awesome-Text-to-Image
A Survey on Text-to-Image Generation/Synthesis.
Stars: ✭ 251 (+638.24%)
Mutual labels:  generative-adversarial-network
esrgan
Enhanced SRGAN. Champion PIRM Challenge on Perceptual Super-Resolution
Stars: ✭ 48 (+41.18%)
Mutual labels:  generative-adversarial-network

Improving Document Binarization via Adversarial Noise-Texture Augmentation [paper] [ICIP 2019]

This repository contains the full source code and instructions for using it on the datasets described in the paper. The paper revisits the binarization problem by introducing an adversarial learning approach.

The most significant contribution of our framework is that, unlike other deep-learning-based methods [Ronneberger et al., Vo et al., Konwer et al.], it does not require any paired training data. Such an approach has not been attempted before, making it the first of its kind in the Document Image Analysis community.

Results

In this paper, we propose a two-stage network that first learns to augment document images using a neural style transfer technique. For this purpose, we construct a Texture Augmentation Network that transfers the texture of a degraded reference document image onto a clean binary image.
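
In such style-transfer setups, the texture objective is typically driven by Gram-matrix statistics of VGG feature maps, which is consistent with the VGG weights required under Training below. A minimal sketch of such a loss, assuming TF1-style TensorFlow and pre-extracted VGG activations (the repository's actual loss terms, including its adversarial objectives, may differ):

    import tensorflow as tf

    def gram_matrix(feats):
        # feats: VGG feature map of shape [batch, height, width, channels]
        shape = tf.shape(feats)
        h, w, c = shape[1], shape[2], shape[3]
        flat = tf.reshape(feats, [-1, h * w, c])
        # Channel-wise correlations capture texture independently of layout.
        gram = tf.matmul(flat, flat, transpose_a=True)
        return gram / tf.cast(h * w * c, tf.float32)

    def style_loss(ref_feats, gen_feats):
        # ref_feats / gen_feats: lists of VGG activations (one per chosen
        # layer) for the degraded reference and the generated image.
        return tf.add_n([tf.reduce_mean(tf.square(gram_matrix(r) - gram_matrix(g)))
                         for r, g in zip(ref_feats, gen_feats)])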

[Architecture of the proposed two-stage network]

In this way, the network creates multiple versions of the same textual content with various noisy textures, thereby enlarging the available document binarization datasets. Finally, the newly generated images are passed through a Binarization network to recover the clean version.
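
A drastically simplified sketch of this two-stage wiring, with tiny stand-in convolutional generators (the real networks are adversarially trained encoder-decoders defined in train.py; all names here are illustrative):

    import tensorflow as tf

    def tiny_net(x, scope):
        # Stand-in for the paper's generators; far smaller than the real ones.
        with tf.variable_scope(scope):
            h = tf.layers.conv2d(x, 32, 3, padding='same', activation=tf.nn.relu)
            h = tf.layers.conv2d(h, 32, 3, padding='same', activation=tf.nn.relu)
            return tf.layers.conv2d(h, 1, 3, padding='same', activation=tf.nn.sigmoid)

    clean = tf.placeholder(tf.float32, [None, 256, 256, 1])      # clean binary patch
    noisy_ref = tf.placeholder(tf.float32, [None, 256, 256, 1])  # degraded reference patch

    # Stage 1: augment the clean content with the reference texture.
    augmented = tiny_net(tf.concat([clean, noisy_ref], axis=-1), 'texture_aug')
    # Stage 2: binarize the synthetic degraded image back to clean.
    recovered = tiny_net(augmented, 'binarization')

    # The binarized output should reconstruct the original clean input.
    recon_loss = tf.reduce_mean(tf.abs(recovered - clean))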

Pre-requisites

  • python 2.7
  • Tensorflow
  • OpenCV
  • matplotlib

Training

  • A total of 9 datasets are used in this work: DIBCO 2009, DIBCO 2011, DIBCO 2013, H-DIBCO 2010, H-DIBCO 2012, H-DIBCO 2014, Bickley diary, PHIDB, and S-MS.
  • Of these, the DIBCO 2013 dataset is held out for testing; the remaining datasets are used as the training set.
  • We convert the images from these datasets into patches of size 256 × 256 (a patch-extraction sketch follows this list).
  • Download the VGG weights from here and place them in the repository folder.
  • To train the model, run train.py.
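
A minimal patch-extraction helper along the lines of the third step above (illustrative; the repository's own preprocessing may crop with a different stride or with overlap):

    import cv2
    import numpy as np

    def extract_patches(image_path, patch=256, stride=256):
        # Cut a document image into non-overlapping 256 x 256 patches.
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        h, w = img.shape
        patches = []
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                patches.append(img[y:y + patch, x:x + patch])
        return np.array(patches)

Once the patches are generated and the VGG weights are in place, training is started with python train.py.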

Citation

If you find this code useful in your research, please consider citing:

@article{bhunia2018improving,
  title={Improving Document Binarization via Adversarial Noise-Texture Augmentation},
  author={Bhunia, Ankan Kumar and Bhunia, Ayan Kumar and Sain, Aneeshan and Roy, Partha Pratim},
  journal={arXiv preprint arXiv:1810.11120},
  year={2018}
}