variant-of-CPPN-GAN

Based on https://github.com/kwj2104/CPPN-WGAN, but with Chinese fonts and an improved architecture.

Results for Chinese fonts
Results for MNIST
Enlarged image

Interpolation

To interpolate, run interpolator_casia.py. In main you can change which samples are displayed, the image size, and the size of the image grid. By default it loads the 21st (last) generator checkpoint from tmp\chinese, where generators from different stages of my training run are stored.
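The core of any such interpolation is walking a straight line between two latent points and rendering each intermediate vector. A minimal sketch of that step (all names here are hypothetical; the repo's interpolator handles grids and the CPPN coordinate inputs as well):

```python
import numpy as np

def interpolate_latents(z_start, z_end, steps=8):
    """Return latent vectors evenly spaced between two endpoints."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1.0 - a) * z_start + a * z_end for a in alphas]

# Hypothetical latent dimensionality; each frame would be fed to the generator.
z0 = np.zeros(128)
z1 = np.ones(128)
frames = interpolate_latents(z0, z1, steps=5)
```

Each element of `frames` would then be passed through the generator to produce one frame of the interpolation.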

Training

Before training, download the fonts from https://www.kaggle.com/dylanli/chinesecharacter and put them in the fonts folder. Then run chinese.py to generate the training images; by default it generates 8k 64x64 images. Periodic image samples, checkpoints, and loss graphs are saved by default in tmp/chinese_current. To train, run gan_cppn_chinese.py. It should give interesting results fairly quickly; in my experience, the variety of samples increases the longer it runs. You may want to decrease the learning rate further into training. The generator checkpoints from my run are saved in tmp/chinese; you can look at them for comparison (use interpolator_chinese).
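What makes a CPPN generator different from a standard GAN generator is that it is queried per pixel: each pixel's coordinates (and radius) plus a shared latent vector go in, and one pixel value comes out, so a single latent can be rendered at any resolution. A toy sketch of that input pipeline (the actual network in this repo is deeper; the one-layer `toy_cppn` below is only a stand-in):

```python
import numpy as np

def coordinate_grid(size):
    """Build per-pixel inputs (x, y, r) for a size x size image."""
    xs = np.linspace(-1.0, 1.0, size)
    x, y = np.meshgrid(xs, xs)
    r = np.sqrt(x ** 2 + y ** 2)
    return np.stack([x.ravel(), y.ravel(), r.ravel()], axis=1)

def toy_cppn(coords, z, w=None):
    """One-layer stand-in for the generator network (tanh activation)."""
    rng = np.random.default_rng(0)
    d = coords.shape[1] + z.shape[0]
    w = rng.standard_normal((d, 1)) if w is None else w
    # Every pixel sees its own coordinates plus the same latent vector z.
    inp = np.concatenate([coords, np.tile(z, (coords.shape[0], 1))], axis=1)
    return np.tanh(inp @ w)

grid = coordinate_grid(64)
image = toy_cppn(grid, np.zeros(8)).reshape(64, 64)
```

Rendering an enlarged image is then just `coordinate_grid(256)` with the same latent vector, which is why the enlarged samples above stay sharp.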

MNIST Dataset

There are also analogous scripts for the MNIST dataset; they use a slightly less complex architecture. The scripts gan_cppn_mnist3.py and interpolator_mnist3.py are an experiment with additional one-hot noise that forces the generator to use discrete variables, which are used to learn categories in an unsupervised way. The generator learned most digit classes, but got confused by two styles of 4 and was forced to put 2 and 3 in the same category. The one-hot representation used during training makes it possible to mix two categories of digits, which can be seen in the interpolation from interpolator_mnist3.py.
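The one-hot idea amounts to concatenating a discrete category indicator to the continuous noise before it enters the generator. A hedged sketch (dimensions and names are hypothetical, not the ones used in gan_cppn_mnist3.py):

```python
import numpy as np

def sample_latent(noise_dim=64, n_categories=10, rng=None):
    """Continuous noise + a one-hot category vector, concatenated."""
    rng = rng if rng is not None else np.random.default_rng()
    z = rng.standard_normal(noise_dim)
    one_hot = np.zeros(n_categories)
    one_hot[rng.integers(n_categories)] = 1.0
    return np.concatenate([z, one_hot])

latent = sample_latent()
```

Interpolating between two such latents linearly blends the one-hot block into a soft mixture of two categories, which is exactly the digit-mixing effect visible in the interpolations.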

Generator exploration

There are also scripts for more sophisticated image and GIF creation.

generator_exploration_main.py

An experimental method for finding semantically meaningful directions in the latent space. Each iteration produces interpolations along 4 directions starting from 9 common points in latent space (4*9 interpolations in total). The user decides which directions show a desired change of property and which do not; the algorithm then updates the probabilities with which directions are sampled. In the current implementation the visualization is saved as a GIF, and directions are chosen by typing a subset of the numbers {0, 1, 2, 3} denoting the desired directions (press Enter to select none). Directions are sampled from a linearly transformed normal distribution; every time a visualized direction is not chosen, the distribution is squished along that direction, which makes it less probable.
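The squishing step can be sketched as follows: directions are drawn as A·v with v standard normal, so shrinking A along a rejected direction d makes nearby directions less likely to be sampled again. This is only an illustration of the mechanism described above, not the repo's actual implementation:

```python
import numpy as np

def squish(A, d, factor=0.5):
    """Shrink the sampling transform A along the unit direction d."""
    d = d / np.linalg.norm(d)
    # Remove the d-component of A's rows and add it back scaled down.
    return A - (1.0 - factor) * np.outer(d, d @ A)

def sample_direction(A, rng):
    """Draw a unit direction from the transformed normal distribution."""
    v = A @ rng.standard_normal(A.shape[1])
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
A = np.eye(4)
# Rejecting the first axis direction shrinks its variance to factor^2.
A = squish(A, np.array([1.0, 0.0, 0.0, 0.0]), factor=0.25)
```

Repeated rejections compound, so consistently uninteresting directions fade out of the sampling distribution over time.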

image_enhancer_main.py

A simple script that optimizes a latent-space position to get a better image when the current position lies mid-transition between two states. The script moves the point in latent space to the closest minimum (with respect to the discriminator loss) within a randomly chosen subspace of the latent space (the dimensionality of the subspace can be chosen).
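The subspace search can be sketched like this: pick a random orthonormal k-dimensional basis, then descend on the loss using only coefficients in that subspace. A toy quadratic loss stands in for the real discriminator loss here, and the names and finite-difference gradient are my own illustration, not the script's code:

```python
import numpy as np

def optimize_in_subspace(z, loss, k=4, steps=100, lr=0.1, eps=1e-4, rng=None):
    """Move z to a nearby loss minimum inside a random k-dim subspace."""
    rng = rng if rng is not None else np.random.default_rng()
    basis, _ = np.linalg.qr(rng.standard_normal((z.size, k)))  # orthonormal
    coeffs = np.zeros(k)
    for _ in range(steps):
        # Finite-difference gradient w.r.t. the subspace coefficients only.
        grad = np.array([
            (loss(z + basis @ (coeffs + eps * e)) -
             loss(z + basis @ (coeffs - eps * e))) / (2 * eps)
            for e in np.eye(k)
        ])
        coeffs -= lr * grad
    return z + basis @ coeffs

# Toy stand-in for the discriminator loss, minimized at the ones vector.
target = np.ones(16)
z_best = optimize_in_subspace(
    np.zeros(16), lambda z: np.sum((z - target) ** 2), k=16, steps=200
)
```

With k smaller than the latent dimensionality, the point can only move within the chosen subspace, which keeps the adjustment local and cheap.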
