nipunmanral / Cross-Domain-Image-Translation-Using-CycleGAN

Licence: other
CycleGAN based neural network architecture to change the gender of a person’s face

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Cross-Domain-Image-Translation-Using-CycleGAN

Awesome Gan For Medical Imaging
Awesome GAN for Medical Imaging
Stars: ✭ 1,814 (+11993.33%)
Mutual labels:  generative-adversarial-network, deeplearning
Deep-Learning
It contains the coursework and the practice I have done while learning Deep Learning.🚀 👨‍💻💥 🚩🌈
Stars: ✭ 21 (+40%)
Mutual labels:  generative-adversarial-network, deeplearning
Articles-Bookmarked
No description or website provided.
Stars: ✭ 30 (+100%)
Mutual labels:  generative-adversarial-network, deeplearning
Contrastive Unpaired Translation
Contrastive unpaired image-to-image translation, with faster and lighter training than CycleGAN (ECCV 2020, in PyTorch)
Stars: ✭ 822 (+5380%)
Mutual labels:  generative-adversarial-network, deeplearning
Specgan
SpecGAN - generate audio with adversarial training
Stars: ✭ 92 (+513.33%)
Mutual labels:  generative-adversarial-network, deeplearning
Relativistic Average Gan Keras
The implementation of Relativistic average GAN with Keras
Stars: ✭ 36 (+140%)
Mutual labels:  generative-adversarial-network, deeplearning
Deep Learning Resources
A collection of deep learning materials for everyone, from beginner to advanced
Stars: ✭ 422 (+2713.33%)
Mutual labels:  generative-adversarial-network, deeplearning
Spectralnormalizationkeras
Spectral Normalization for Keras Dense and Convolution Layers
Stars: ✭ 100 (+566.67%)
Mutual labels:  generative-adversarial-network, deeplearning
gan deeplearning4j
Automatic feature engineering with Generative Adversarial Networks, using Deeplearning4j and Apache Spark.
Stars: ✭ 19 (+26.67%)
Mutual labels:  generative-adversarial-network, deeplearning
BPPNet-Back-Projected-Pyramid-Network
This is the official GitHub repository for ECCV 2020 Workshop paper "Single image dehazing for a variety of haze scenarios using back projected pyramid network"
Stars: ✭ 35 (+133.33%)
Mutual labels:  generative-adversarial-network
GraphCNN-GAN
Graph-convolutional GAN for point cloud generation. Code from ICLR 2019 paper Learning Localized Generative Models for 3D Point Clouds via Graph Convolution
Stars: ✭ 50 (+233.33%)
Mutual labels:  generative-adversarial-network
googlecodelabs
Train your artificial neural networks much faster with TPUs
Stars: ✭ 116 (+673.33%)
Mutual labels:  deeplearning
IMTA
No description or website provided.
Stars: ✭ 38 (+153.33%)
Mutual labels:  deeplearning
Printed-Chinese-Character-OCR
This is a Chinese character OCR system based on deep learning (a VGG-like CNN); the repo includes training-set generation, image preprocessing, and NN model optimisation based on the Keras high-level NN framework
Stars: ✭ 21 (+40%)
Mutual labels:  deeplearning
buildTensorflow
A lightweight deep learning framework made with ❤️
Stars: ✭ 28 (+86.67%)
Mutual labels:  deeplearning
lightweight-temporal-attention-pytorch
A PyTorch implementation of the Light Temporal Attention Encoder (L-TAE) for satellite image time series classification
Stars: ✭ 43 (+186.67%)
Mutual labels:  deeplearning
simplegan
Tensorflow-based framework to ease training of generative models
Stars: ✭ 19 (+26.67%)
Mutual labels:  generative-adversarial-network
DiscoGAN-TF
Tensorflow Implementation of DiscoGAN
Stars: ✭ 57 (+280%)
Mutual labels:  generative-adversarial-network
rembg-greenscreen
Rembg Video Virtual Green Screen Edition
Stars: ✭ 210 (+1300%)
Mutual labels:  deeplearning
TFDeepSurv
Cox proportional hazards model and survival analysis implemented in TensorFlow.
Stars: ✭ 75 (+400%)
Mutual labels:  deeplearning

Gender Change of People's Faces Using CycleGAN

Summary

We implement the CycleGAN architecture in Keras and train the model on the CelebA faces dataset to perform gender change on people's faces. There are two main scripts in the code: train.py and predict.py.
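
The key idea that lets CycleGAN train on unpaired male/female faces is the cycle-consistency loss: an image translated A→B and back B→A should match the original. As an illustrative sketch (the function and names here are ours, not the repository's actual API), the loss term looks like this:

```python
import numpy as np

# Illustrative sketch of CycleGAN's cycle-consistency loss: the L1 distance
# between a real image and its round-trip reconstruction G_BtoA(G_AtoB(x)),
# scaled by the cycle-loss weight lambda (10.0 in the original paper).
def cycle_consistency_loss(real, reconstructed, lam=10.0):
    return lam * float(np.mean(np.abs(real - reconstructed)))

# A perfect round trip gives zero loss.
x = np.random.rand(128, 128, 3)
assert cycle_consistency_loss(x, x) == 0.0
```

The full objective also includes the adversarial losses from the two discriminators; the cycle term is what keeps the translation faithful to the input face.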

Environment Setup

Download the codebase and open up a terminal in the root directory. Make sure Python 3.6 is installed in the current environment, then execute

pip install -r requirements.txt

This should install all the necessary packages for the code to run.

The data used in this project comes from the CelebA Dataset and has to be saved in the folder structure described below. For training, we pre-process the images by resizing them and removing low-quality photos. For testing, the code automatically resizes the image files to the appropriate size. The generated images are 128x128 RGB images.
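
A minimal sketch of that pre-processing step, assuming Pillow is installed (this helper is ours, not the repository's code): resize each photo to the 128x128 RGB format the model expects, and skip files that fail to open as a rough filter for corrupt images.

```python
from PIL import Image

# Illustrative pre-processing helper (not part of the repo): convert a
# source photo to 128x128 RGB, skipping unreadable files.
def preprocess_image(src, dst, size=(128, 128)):
    try:
        img = Image.open(src).convert("RGB")
    except OSError:
        return False  # corrupt / unreadable file: skip it
    img.resize(size).save(dst)
    return True
```

You would run this over every file in the training folders before starting training; the test pipeline does the equivalent resizing for you.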

Training

By default, the data is expected in the Code/data/male_female/ directory. The training and test data should be provided in four separate directories:

  • trainA - images of male faces for training
  • trainB - images of female faces for training
  • testA - images of male faces for testing
  • testB - images of female faces for testing
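
The layout above can be created with a few lines of Python (a convenience sketch; the default root matches the README, but the helper itself is ours):

```python
from pathlib import Path

# Create the trainA/trainB/testA/testB layout the training script expects.
def make_data_dirs(root="Code/data/male_female"):
    for split in ("trainA", "trainB", "testA", "testB"):
        Path(root, split).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in Path(root).iterdir())
```

Drop the male face images into trainA/testA and the female face images into trainB/testB.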

Run the code as

python train.py

If the data is in a different directory, you can specify the path at runtime by using the --data_dir flag:

python train.py --data_dir <path/to/data>

To see the parameters that can be changed, run

python train.py -h

The training losses are written to the logs folder, which can be loaded into TensorBoard to visualise the generator and discriminator losses in the browser. Run this command:

tensorboard --logdir=logs

At the end of training, the code outputs the model weights of the two generators and the two discriminators. These are saved as:

  • generatorAToB.h5 - 43.5MB
  • generatorBToA.h5 - 43.5MB
  • discriminatorA.h5 - 10.5MB
  • discriminatorB.h5 - 10.5MB
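
Before moving on to testing, it can be handy to verify that all four weight files were actually written. A small sanity-check helper (ours, illustrative; the filenames are the README's):

```python
from pathlib import Path

# The four weight files train.py is expected to produce.
required = [
    "generatorAToB.h5",
    "generatorBToA.h5",
    "discriminatorA.h5",
    "discriminatorB.h5",
]

def missing_weights(folder="."):
    """Return the list of expected .h5 files missing from `folder`."""
    return [f for f in required if not (Path(folder) / f).is_file()]
```

An empty return value means training completed and saved everything; only the two generator files are needed by predict.py.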

Testing

Once the model is trained, the weight files generatorAToB.h5 and generatorBToA.h5 should be in the same folder as the test script. The test data should be in the directory given by the --data_dir parameter, in the folder structure described above. Using the --batchSize flag, you can specify how many test images should get modified.

You can test the model by running:

python predict.py

The results will be generated in the Code/results folder. Create two folders, m2f and f2m, inside the results folder; the corresponding transformations will appear in these two folders. The results contain fake, real, reconstructed, and identity images.
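
The manual folder-creation step above can be done idempotently with a small helper (ours, illustrative; the paths are the README's):

```python
import os

# Create the m2f and f2m output folders predict.py writes into.
def make_results_dirs(root=os.path.join("Code", "results")):
    for sub in ("m2f", "f2m"):
        os.makedirs(os.path.join(root, sub), exist_ok=True)
    return sorted(os.listdir(root))
```

Running it more than once is harmless, since existing folders are left in place.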

Video

Project Video: https://www.youtube.com/watch?v=PgPfN3v4lG4&t=404s
