
tneumann / minimal_glo

License: MIT License
Minimal PyTorch implementation of Generative Latent Optimization from the paper "Optimizing the Latent Space of Generative Networks"

Programming Languages

python

Projects that are alternatives to or similar to minimal_glo

Stargan
StarGAN - Official PyTorch Implementation (CVPR 2018)
Stars: ✭ 4,946 (+4316.07%)
Mutual labels:  generative-models
PanoDR
Code and models for "PanoDR: Spherical Panorama Diminished Reality for Indoor Scenes" presented at the OmniCV workshop of CVPR21.
Stars: ✭ 22 (-80.36%)
Mutual labels:  generative-models
Autoregressive-models
Tensorflow 2.0 implementation of Deep Autoregressive Models
Stars: ✭ 18 (-83.93%)
Mutual labels:  generative-models
Stargan V2
StarGAN v2 - Official PyTorch Implementation (CVPR 2020)
Stars: ✭ 2,700 (+2310.71%)
Mutual labels:  generative-models
Generative Continual Learning
No description or website provided.
Stars: ✭ 51 (-54.46%)
Mutual labels:  generative-models
fewshot-font-generation
The unified repository for few-shot font generation methods. This repository includes FUNIT (ICCV'19), DM-Font (ECCV'20), LF-Font (AAAI'21) and MX-Font (ICCV'21).
Stars: ✭ 76 (-32.14%)
Mutual labels:  generative-models
DeepDream
Generative deep learning: DeepDream
Stars: ✭ 17 (-84.82%)
Mutual labels:  generative-models
multimodal-vae-public
A PyTorch implementation of "Multimodal Generative Models for Scalable Weakly-Supervised Learning" (https://arxiv.org/abs/1802.05335)
Stars: ✭ 98 (-12.5%)
Mutual labels:  generative-models
score sde pytorch
PyTorch implementation for Score-Based Generative Modeling through Stochastic Differential Equations (ICLR 2021, Oral)
Stars: ✭ 755 (+574.11%)
Mutual labels:  generative-models
overlord
Official pytorch implementation of "Scaling-up Disentanglement for Image Translation", ICCV 2021.
Stars: ✭ 35 (-68.75%)
Mutual labels:  generative-models
char-rnn
medium.com/@jctestud/yet-another-text-generation-project-5cfb59b26255
Stars: ✭ 20 (-82.14%)
Mutual labels:  generative-models
precision-recall-distributions
Assessing Generative Models via Precision and Recall (official repository)
Stars: ✭ 80 (-28.57%)
Mutual labels:  generative-models
simsg
Semantic Image Manipulation using Scene Graphs (CVPR 2020)
Stars: ✭ 49 (-56.25%)
Mutual labels:  generative-models
Zhusuan
A probabilistic programming library for Bayesian deep learning, generative models, based on Tensorflow
Stars: ✭ 2,093 (+1768.75%)
Mutual labels:  generative-models
paccmann rl
Code pipeline for the PaccMann^RL in iScience: https://www.cell.com/iscience/fulltext/S2589-0042(21)00237-6
Stars: ✭ 22 (-80.36%)
Mutual labels:  generative-models
Tensorflow Generative Model Collections
Collection of generative models in Tensorflow
Stars: ✭ 3,785 (+3279.46%)
Mutual labels:  generative-models
Generative-Model
Repository for implementation of generative models with Tensorflow 1.x
Stars: ✭ 66 (-41.07%)
Mutual labels:  generative-models
BIGPrior
(TIP 2022) Bayesian Integration of a Generative Prior for Image Restoration
Stars: ✭ 20 (-82.14%)
Mutual labels:  generative-models
cfg-gan
CFG-GAN: Composite functional gradient learning of generative adversarial models
Stars: ✭ 15 (-86.61%)
Mutual labels:  generative-models
lffont
Official PyTorch implementation of LF-Font (Few-shot Font Generation with Localized Style Representations and Factorization) AAAI 2021
Stars: ✭ 110 (-1.79%)
Mutual labels:  generative-models

Minimal PyTorch implementation of Generative Latent Optimization

This is a reimplementation of the paper

Piotr Bojanowski, Armand Joulin, David Lopez-Paz, Arthur Szlam:
Optimizing the Latent Space of Generative Networks

I'm not one of the authors; I reimplemented parts of the paper in PyTorch to learn about PyTorch and generative models. I also liked the idea in the paper and was surprised that the approach actually works.
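
The core idea is to drop the discriminator entirely: every training image gets its own learnable latent code, and the codes are optimized jointly with the generator weights by gradient descent on a reconstruction loss. Here is a minimal, self-contained sketch of that loop (illustrative only: the toy linear decoder, random data, learning rates, and the unit-ball projection are stand-ins of mine, not the actual glo.py code):

import torch
import torch.nn as nn

n_images, latent_dim = 1000, 100
images = torch.rand(n_images, 3, 64, 64) * 2 - 1  # stand-in data in [-1, 1]

# Stand-in decoder; the repo uses a DCGAN-style generator instead.
generator = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Tanh())

# One learnable code per image, initialized at random here (PCA in the repo).
Z = torch.randn(n_images, latent_dim)
Z /= Z.norm(dim=1, keepdim=True)
Z.requires_grad_(True)

# Arbitrary learning rates, just for the sketch.
opt = torch.optim.SGD([{'params': generator.parameters(), 'lr': 1.0},
                       {'params': [Z], 'lr': 10.0}], lr=1.0)

for step in range(100):
    idx = torch.randint(0, n_images, (128,))   # random mini-batch of image indices
    recon = generator(Z[idx]).view(-1, 3, 64, 64)
    loss = (recon - images[idx]).abs().mean()  # plain L1 here; the repo uses Lap-L1
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():  # project the updated codes back into the unit l2 ball
        Z[idx] /= Z[idx].norm(dim=1, keepdim=True).clamp(min=1.0)

Note that the optimizer treats the per-image codes Z exactly like network parameters, which is why the batches must also carry the indices of the images they contain.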

Implementation of the Laplacian pyramid L1 loss is inspired by https://github.com/mtyka/laploss. DCGAN network architecture follows https://github.com/pytorch/examples/tree/master/dcgan.
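
For intuition, here is a rough sketch of what a Laplacian pyramid L1 loss computes (illustrative only: the version used here, adapted from mtyka/laploss, uses a Gaussian low-pass kernel and per-level weighting, whereas this sketch uses average pooling and equal weights):

import torch.nn.functional as F

def lap_pyramid_l1(x, y, levels=3):
    # Sum of L1 distances between the band-pass (Laplacian) components of two
    # image batches, computed at successively coarser resolutions.
    loss = 0.0
    for _ in range(levels):
        x_lo = F.avg_pool2d(x, 2)
        y_lo = F.avg_pool2d(y, 2)
        # Band-pass component = image minus its upsampled low-pass version.
        x_hi = x - F.interpolate(x_lo, size=x.shape[-2:], mode='bilinear', align_corners=False)
        y_hi = y - F.interpolate(y_lo, size=y.shape[-2:], mode='bilinear', align_corners=False)
        loss = loss + (x_hi - y_hi).abs().mean()
        x, y = x_lo, y_lo
    # Also compare the coarsest low-pass residuals.
    return loss + (x - y).abs().mean()

And a generator in the spirit of the linked DCGAN example (again a sketch; the exact layer configuration in glo.py may differ), mapping a (N, latent_dim, 1, 1) code tensor to 64x64 images:

import torch.nn as nn

def dcgan_generator(latent_dim=100, channels=3, base=64):
    # 64x64 DCGAN-style generator, following the pytorch/examples dcgan layout.
    return nn.Sequential(
        nn.ConvTranspose2d(latent_dim, base * 8, 4, 1, 0, bias=False),
        nn.BatchNorm2d(base * 8), nn.ReLU(True),   # 4x4
        nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),
        nn.BatchNorm2d(base * 4), nn.ReLU(True),   # 8x8
        nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),
        nn.BatchNorm2d(base * 2), nn.ReLU(True),   # 16x16
        nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),
        nn.BatchNorm2d(base), nn.ReLU(True),       # 32x32
        nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),
        nn.Tanh(),                                 # 64x64, values in [-1, 1]
    )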

Running the code

First, install the required packages. For example, in Anaconda, you can simply do

conda install pytorch torchvision -c pytorch
conda install scikit-learn tqdm plac python-lmdb pillow

Download the LSUN dataset (only the bedroom training images are used here) into $LSUN_DIR. Then, simply run:

python glo.py $LSUN_DIR

You can learn more about the settings by running python glo.py --help.

Results

Unless mentioned otherwise, results are shown from a run over only a subset of the data (100,000 samples; the subset size can be specified via the -n argument). Optimization was performed for only 25 epochs. The images below show reconstructions from the optimized latent space.

Results with a 100-dimensional representation space look quite good, similar to the results shown in Fig. 1 of the paper.

python glo.py $LSUN_DIR -o d100 -gpu -d 100 -n 100000

Training for more epochs and on the whole dataset makes the images even sharper. Here are results (with a 100D latent space) from a longer run of 50 epochs on the full dataset.

python glo.py $LSUN_DIR -o d100_full -gpu -d 100 -e 50

I'm not sure how many pyramid levels the authors used for the Laplacian pyramid L1 loss (here, we use 3 levels, but more might be better ... or not). But these results seem close enough.


Results with a 512-dimensional representation space:

python glo.py $LSUN_DIR -o d512 -gpu -d 512 -n 100000

One of the main contributions of the paper is the use of the Laplacian pyramid L1 loss. Let's see how it compares to reconstructions using a simple L2 loss, again with a 100-d representation space:

python glo.py $LSUN_DIR -o d100_l2 -gpu -d 100 -n 100000 -l l2


Comparison to the L2 reconstruction loss, with a 512-d representation space:

python glo.py $LSUN_DIR -o d512_l2 -gpu -d 512 -n 100000 -l l2

I observed that initializing the latent vectors with PCA is crucial. Below are results from (normally distributed) random latent vectors. After 25 epochs, the loss has only come down to 0.31 (with PCA initialization, the loss is already at 0.23 after a single epoch), and the reconstructions look very blurry.

python glo.py $LSUN_DIR -o d100_rand -gpu -d 100 -n 100000 -i random -e 500

It gets better after 500 epochs, but convergence is very slow and the results are still not as sharp as with PCA initialization.
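
For completeness, here is one way the PCA initialization can be done (a sketch under my own assumptions about image shapes and code scaling, not necessarily what glo.py does):

import numpy as np
from sklearn.decomposition import PCA

def pca_init(images, latent_dim):
    # images: (n, c, h, w) array of (possibly downscaled) training images.
    flat = images.reshape(len(images), -1)
    codes = PCA(n_components=latent_dim).fit_transform(flat)
    # Scale by the largest norm so all codes lie inside the unit l2 ball
    # while their relative geometry is preserved.
    codes /= np.linalg.norm(codes, axis=1).max()
    return codes

Fitting PCA on every raw image of the full LSUN set is expensive; fitting on a random subsample (or using scikit-learn's IncrementalPCA) is a practical alternative.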
