
ArkaJU / Image-Colorization-CycleGAN

License: MIT
Colorization of grayscale images using CycleGAN in TensorFlow.

Programming Languages

Python
Jupyter Notebook

Projects that are alternatives to or similar to Image-Colorization-CycleGAN

Cyclegan Qp
Official PyTorch implementation of "Artist Style Transfer Via Quadratic Potential"
Stars: ✭ 59 (+268.75%)
Mutual labels:  generative-adversarial-network, cyclegan
CycleGAN-gluon-mxnet
This repo attempts to reproduce Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks (CycleGAN) as a Gluon reimplementation
Stars: ✭ 31 (+93.75%)
Mutual labels:  generative-adversarial-network, cyclegan
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+68231.25%)
Mutual labels:  generative-adversarial-network, cyclegan
Pytorch Cyclegan
A clean and readable Pytorch implementation of CycleGAN
Stars: ✭ 558 (+3387.5%)
Mutual labels:  generative-adversarial-network, cyclegan
multitask-CycleGAN
Pytorch implementation of multitask CycleGAN with auxiliary classification loss
Stars: ✭ 88 (+450%)
Mutual labels:  generative-adversarial-network, cyclegan
Contrastive Unpaired Translation
Contrastive unpaired image-to-image translation, faster and lighter training than cyclegan (ECCV 2020, in PyTorch)
Stars: ✭ 822 (+5037.5%)
Mutual labels:  generative-adversarial-network, cyclegan
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+102881.25%)
Mutual labels:  generative-adversarial-network, cyclegan
Gannotation
GANnotation (PyTorch): Landmark-guided face to face synthesis using GANs (And a triple consistency loss!)
Stars: ✭ 167 (+943.75%)
Mutual labels:  generative-adversarial-network, cyclegan
gans-2.0
Generative Adversarial Networks in TensorFlow 2.0
Stars: ✭ 76 (+375%)
Mutual labels:  generative-adversarial-network, cyclegan
publications-arruda-ijcnn-2019
Cross-Domain Car Detection Using Unsupervised Image-to-Image Translation: From Day to Night
Stars: ✭ 59 (+268.75%)
Mutual labels:  generative-adversarial-network, cyclegan
Von
[NeurIPS 2018] Visual Object Networks: Image Generation with Disentangled 3D Representation.
Stars: ✭ 497 (+3006.25%)
Mutual labels:  generative-adversarial-network, cyclegan
BicycleGAN-pytorch
Pytorch implementation of BicycleGAN with implementation details
Stars: ✭ 99 (+518.75%)
Mutual labels:  generative-adversarial-network, cyclegan
Cyclegan
Tensorflow implementation of CycleGAN
Stars: ✭ 348 (+2075%)
Mutual labels:  generative-adversarial-network, cyclegan
Cyclegan Tensorflow
An implementation of CycleGan using TensorFlow
Stars: ✭ 1,096 (+6750%)
Mutual labels:  generative-adversarial-network, cyclegan
Generative models tutorial with demo
Generative Models Tutorial with Demo: Bayesian Classifier Sampling, Variational Auto-Encoder (VAE), Generative Adversarial Networks (GANs), Popular GAN Architectures, Auto-Regressive Models, Important Generative Model Papers, Courses, etc.
Stars: ✭ 276 (+1625%)
Mutual labels:  generative-adversarial-network, cyclegan
pytorch-gans
PyTorch implementation of GANs (Generative Adversarial Networks). DCGAN, Pix2Pix, CycleGAN, SRGAN
Stars: ✭ 21 (+31.25%)
Mutual labels:  generative-adversarial-network, cyclegan
pytorch-CycleGAN
Pytorch implementation of CycleGAN.
Stars: ✭ 39 (+143.75%)
Mutual labels:  generative-adversarial-network, cyclegan
AdvSegLoss
Official Pytorch implementation of Adversarial Segmentation Loss for Sketch Colorization [ICIP 2021]
Stars: ✭ 24 (+50%)
Mutual labels:  generative-adversarial-network, image-colorization
gan tensorflow
Automatic feature engineering using Generative Adversarial Networks using TensorFlow.
Stars: ✭ 48 (+200%)
Mutual labels:  generative-adversarial-network
TriangleGAN
TriangleGAN, ACM MM 2019.
Stars: ✭ 28 (+75%)
Mutual labels:  generative-adversarial-network

Image-colorization-using-CycleGAN

Introduction

Automatic image colorization is a popular image-to-image translation problem with significant practical applications, including the restoration of aged or degraded images. This project uses CycleGANs to colorize grayscale images, restoring their colorful RGB form.

Overview

Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image, typically from a training set of aligned image pairs. For many tasks, however, paired training data is not available, and image colorization is one of them. This is where the power of CycleGAN becomes apparent: its strength has been demonstrated on several tasks where paired training data hardly exists, e.g., object transfiguration, painting style transfer, and season transfer.

Model

Generative Adversarial Networks (GANs) are composed of two models:

  1. Generator: aims to generate new data resembling the training data. The Generator is analogous to a human art forger who creates fake works of art.
  2. Discriminator: its goal is to recognize whether an input is 'real' (belongs to the original dataset) or 'fake' (produced by the Generator). The Discriminator is analogous to an art expert who tries to tell genuine artworks from forgeries.
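This adversarial game can be sketched as two loss terms. The snippet below is a minimal NumPy illustration of the binary cross-entropy losses that drive a GAN's discriminator and generator; it is a simplified sketch, not this project's TensorFlow implementation, and the discriminator scores are hypothetical.

```python
import numpy as np

def bce(preds, targets, eps=1e-7):
    """Binary cross-entropy, averaged over the batch."""
    preds = np.clip(preds, eps, 1 - eps)
    return float(-np.mean(targets * np.log(preds) + (1 - targets) * np.log(1 - preds)))

def discriminator_loss(d_real, d_fake):
    # The discriminator wants real samples scored 1 and generated samples scored 0.
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake):
    # The generator wants the discriminator to score its outputs as real (1).
    return bce(d_fake, np.ones_like(d_fake))

# Hypothetical discriminator outputs (probabilities of "real"):
d_real = np.array([0.9, 0.8])   # discriminator is confident on real images
d_fake = np.array([0.2, 0.1])   # discriminator is confident the fakes are fake
print(discriminator_loss(d_real, d_fake))  # small: the discriminator is winning
print(generator_loss(d_fake))              # large: the generator is losing
```

Training alternates updates to the two models, pushing each loss against the other until the forger's outputs become hard to distinguish from real data.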

A CycleGAN consists of two generators and two discriminators. One generator maps from domain A to domain B, and the other from B to A; each competes with its corresponding adversarial discriminator.

To regularize the model, the authors introduce a cycle-consistency constraint: if we translate a sample from the source domain to the target domain and then back again, we should recover (approximately) the original sample.
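This constraint is typically implemented as an L1 penalty between an input and its round-trip reconstruction through both generators. Below is a minimal sketch in which the "generators" are stand-in invertible array transforms (not learned networks), used only to illustrate the loss:

```python
import numpy as np

def cycle_consistency_loss(x, G_ab, G_ba):
    """L1 distance between x and its round-trip reconstruction G_ba(G_ab(x))."""
    return float(np.mean(np.abs(x - G_ba(G_ab(x)))))

# Stand-in "generators": an exactly invertible pair (scale up / scale down).
G_ab = lambda x: x * 2.0
G_ba = lambda x: x / 2.0

x = np.random.rand(4, 4)
print(cycle_consistency_loss(x, G_ab, G_ba))  # ~0: perfect reconstruction

# A pair that is not mutually inverse leaves a residual, which the loss penalizes.
G_ba_bad = lambda x: x / 2.0 + 0.1
print(cycle_consistency_loss(x, G_ab, G_ba_bad))  # ~0.1
```

In the full CycleGAN objective, this cycle loss (applied in both directions, A→B→A and B→A→B) is added to the two adversarial losses with a weighting coefficient.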

Data

The experiment was done on two datasets:

  1. Grayscale images of flowers (domain A) and their RGB versions (domain B): 2K images in each folder.
  2. Frames extracted from old black-and-white movies (domain A) and from recent movies (domain B): 24K images in each folder.

The second task is particularly interesting: the frames are taken from very old movies (1950s and earlier), so paired data cannot exist, making this a natural application for CycleGAN.
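For the flower dataset, the grayscale domain can in principle be derived from the RGB domain by a standard luminance conversion. The sketch below uses the common ITU-R BT.601 luma weights; it is illustrative only, since the project's actual preprocessing pipeline is not described here.

```python
import numpy as np

# ITU-R BT.601 luma weights, a common RGB-to-grayscale convention.
LUMA = np.array([0.299, 0.587, 0.114])

def rgb_to_gray(img):
    """img: (H, W, 3) float array in [0, 1] -> (H, W) grayscale array."""
    return img @ LUMA

rgb = np.random.rand(8, 8, 3)   # stand-in for a loaded RGB image
gray = rgb_to_gray(rgb)
print(gray.shape)               # (8, 8)
# Pure white maps to white, and values stay within [0, 1].
print(rgb_to_gray(np.ones((1, 1, 3))))
```

Note that training itself remains unpaired: even if domain A is derived this way, CycleGAN never sees which grayscale image corresponds to which RGB image.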

Training

The models were trained on a GPU. The first model took about 15 hours to train; the second took longer, about 20 hours, to achieve decent results. Sample results were monitored regularly through TensorBoard.

Results

The first model yielded good results. Some of the best ones are shown below:

The second model also produced good results, some of which are shown below:

References
