
DmitryUlyanov / Age

Code for the paper "Adversarial Generator-Encoder Networks"

Programming Languages

python
139335 projects - #7 most used programming language

Labels

gan

Projects that are alternatives of or similar to Age

DLSS
Deep Learning Super Sampling with Deep Convolutional Generative Adversarial Networks.
Stars: ✭ 88 (-67.88%)
Mutual labels:  gan
TransGAN
This is a re-implementation of TransGAN: Two Pure Transformers Can Make One Strong GAN (CVPR 2021) in PyTorch.
Stars: ✭ 57 (-79.2%)
Mutual labels:  gan
Inpainting gmcnn
Image Inpainting via Generative Multi-column Convolutional Neural Networks, NeurIPS2018
Stars: ✭ 256 (-6.57%)
Mutual labels:  gan
EigenGAN-Tensorflow
EigenGAN: Layer-Wise Eigen-Learning for GANs (ICCV 2021)
Stars: ✭ 294 (+7.3%)
Mutual labels:  gan
lecam-gan
Regularizing Generative Adversarial Networks under Limited Data (CVPR 2021)
Stars: ✭ 127 (-53.65%)
Mutual labels:  gan
hifigan-denoiser
HiFi-GAN: High Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks
Stars: ✭ 88 (-67.88%)
Mutual labels:  gan
Unsupervised-Anomaly-Detection-with-Generative-Adversarial-Networks
Unsupervised Anomaly Detection with Generative Adversarial Networks on MIAS dataset
Stars: ✭ 95 (-65.33%)
Mutual labels:  gan
My Data Competition Experience
A summary of my experience from multiple Top-5 finishes in machine learning and big data competitions; full of practical tips.
Stars: ✭ 271 (-1.09%)
Mutual labels:  gan
AsymmetricGAN
[ACCV 2018 Oral] Dual Generator Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Stars: ✭ 42 (-84.67%)
Mutual labels:  gan
DCGAN-CelebA-PyTorch-CPP
DCGAN Implementation using PyTorch in both C++ and Python
Stars: ✭ 14 (-94.89%)
Mutual labels:  gan
dcgan anime avatars
Automatically generating anime avatars with a DCGAN, built on Keras.
Stars: ✭ 37 (-86.5%)
Mutual labels:  gan
Tensorflow DCGAN
Study Friendly Implementation of DCGAN in Tensorflow
Stars: ✭ 22 (-91.97%)
Mutual labels:  gan
Awesome-ICCV2021-Low-Level-Vision
A Collection of Papers and Codes for ICCV2021 Low Level Vision and Image Generation
Stars: ✭ 163 (-40.51%)
Mutual labels:  gan
VSGAN
VapourSynth Single Image Super-Resolution Generative Adversarial Network (GAN)
Stars: ✭ 124 (-54.74%)
Mutual labels:  gan
Singan
Official pytorch implementation of the paper: "SinGAN: Learning a Generative Model from a Single Natural Image"
Stars: ✭ 2,983 (+988.69%)
Mutual labels:  gan
cgan-face-generator
Face generator from sketches using cGAN (pix2pix) model
Stars: ✭ 52 (-81.02%)
Mutual labels:  gan
pytorch-ACSCP
Unofficial implementation of "Crowd Counting via Adversarial Cross-Scale Consistency Pursuit" with pytorch - CVPR 2018
Stars: ✭ 18 (-93.43%)
Mutual labels:  gan
Anime Face Dataset
🖼 A collection of high-quality anime faces.
Stars: ✭ 272 (-0.73%)
Mutual labels:  gan
Face Generator
Generate human faces with neural networks
Stars: ✭ 266 (-2.92%)
Mutual labels:  gan
UEGAN
[TIP2020] Pytorch implementation of "Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network"
Stars: ✭ 68 (-75.18%)
Mutual labels:  gan

This repository contains code for the paper

"Adversarial Generator-Encoder Networks" (AAAI'18) by Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky.

Pretrained models

Here is how to access the models used to generate the figures in the paper.

  1. First, install the dev version of PyTorch 0.2 and make sure you have Jupyter Notebook ready.

  2. Download the models with the script:

     bash download_pretrained.sh

  3. Run Jupyter Notebook and go through evaluate.ipynb.

Below are examples of samples and reconstructions for the ImageNet, CelebA, and CIFAR-10 datasets, generated with evaluate.ipynb.

CelebA

[samples image] [reconstructions image]

CIFAR-10

[samples image] [reconstructions image]

Tiny ImageNet

[samples image] [reconstructions image]
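
A minimal sketch of what producing such samples and reconstructions might boil down to, assuming the checkpoints serialize whole modules and a DCGAN-style (N, nz, 1, 1) generator input; the paths and shapes here are hypothetical, and evaluate.ipynb is the authoritative reference:

import torch
import torch.nn.functional as F
import torchvision.utils as vutils

# Hypothetical checkpoint paths; download_pretrained.sh defines the real ones.
netG = torch.load('pretrained/celeba/netG.pth')  # generator: z -> image
netE = torch.load('pretrained/celeba/netE.pth')  # encoder:   image -> z
netG.eval(); netE.eval()

with torch.no_grad():
    # Samples: latent codes uniform on the unit sphere (nz = 64 for CelebA).
    z = F.normalize(torch.randn(64, 64), dim=1)
    samples = netG(z.view(64, 64, 1, 1))
    vutils.save_image(samples, 'samples.png', normalize=True)

    # Reconstructions: encode a batch x of real images, then decode:
    # recon = netG(netE(x))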

Training

Use the age.py script to train a model. The most important parameters are:

  • --dataset: one of [celeba, cifar10, imagenet, svhn, mnist]
  • --dataroot: for datasets included in torchvision, the directory everything will be downloaded to; for the imagenet and celeba datasets, the path to a directory with folders train and val inside.
  • --image_size: size of the input images (64 for celeba and 32 for cifar10/imagenet in the example commands below)
  • --save_dir: path to a folder, where checkpoints will be stored
  • --nz: dimensionality of latent space
  • --batch_size: batch size. Default: 64.
  • --netG: .py file with the generator definition; searched for in the models directory
  • --netE: .py file with the encoder definition; searched for in the models directory
  • --netG_chp: path to a generator checkpoint to load from
  • --netE_chp: path to an encoder checkpoint to load from
  • --nepoch: number of epochs to run
  • --start_epoch: epoch number to start from; useful for fine-tuning.
  • --e_updates: update plan for the encoder, in the form <num steps>;KL_fake:<weight>,KL_real:<weight>,match_z:<weight>,match_x:<weight>.
  • --g_updates: update plan for the generator, in the form <num steps>;KL_fake:<weight>,match_z:<weight>,match_x:<weight>. (A parsing sketch follows this list.)
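
These plan strings are a step count followed by comma-separated name:weight pairs. A minimal parser sketch, assuming exactly this format (the repository's actual parsing code may differ):

def parse_update_plan(spec):
    # e.g. '1;KL_fake:1,KL_real:1,match_z:0,match_x:10'
    # ->   (1, {'KL_fake': 1.0, 'KL_real': 1.0, 'match_z': 0.0, 'match_x': 10.0})
    num_steps, pairs = spec.split(';')
    weights = dict((name, float(w)) for name, w in
                   (pair.split(':') for pair in pairs.split(',')))
    return int(num_steps), weights

For example, --g_updates '3;KL_fake:1,match_z:1000,match_x:0' requests three generator updates per iteration with the given loss weights.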

And misc arguments:

  • --workers: number of dataloader workers.
  • --ngf: controls the number of channels in the generator
  • --ndf: controls the number of channels in the encoder
  • --beta1: beta1 parameter for the Adam optimizer
  • --cpu: do not use GPU
  • --criterion: parametric (param) or non-parametric (nonparam) way to compute the KL term. The parametric criterion fits a Gaussian to the data; the non-parametric one is based on nearest neighbors. Default: param.
  • --KL: which KL divergence to compute: qp or pq. Default: qp.
  • --noise: sphere for uniform on the unit sphere, or gaussian. Default: sphere. (See the sketch after this list.)
  • --match_z: loss to use as reconstruction loss in latent space. One of L1|L2|cos. Default: cos.
  • --match_x: loss to use as reconstruction loss in data space. One of L1|L2|cos. Default: L1.
  • --drop_lr: the learning rate is dropped every drop_lr epochs.
  • --save_every: controls how often intermediate results are stored. Default: 50.
  • --manual_seed: random seed. Default: 123.
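
To make the noise and latent-matching options concrete, here is a minimal sketch of one way they can be interpreted (helper names are hypothetical; the exact expressions live in the repository):

import torch
import torch.nn.functional as F

def sample_noise(batch_size, nz, kind='sphere'):
    # --noise: Gaussian noise, optionally projected onto the unit sphere.
    z = torch.randn(batch_size, nz)
    return F.normalize(z, p=2, dim=1) if kind == 'sphere' else z

def match_z_cos(z_rec, z):
    # --match_z cos: cosine-based reconstruction loss in latent space.
    return (1 - F.cosine_similarity(z_rec, z, dim=1)).mean()

def kl_param(z):
    # --criterion param: fit a diagonal Gaussian to a batch of latents and
    # compute KL(q || N(0, I)) summed over dimensions. This is one standard
    # form; the repository's exact expression may differ.
    mu, var = z.mean(0), z.var(0)
    return ((var + mu ** 2) / 2 - var.log() / 2 - 0.5).sum()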

Here are commands you can start with:

CelebA

Let <data_root> be a directory with two folders, train and val, each containing the images for the corresponding split (a layout sketch follows the command below).

python age.py --dataset celeba --dataroot <data_root> --image_size 64 --save_dir <save_dir> --lr 0.0002 --nz 64 --batch_size 64 --netG dcgan64px --netE dcgan64px --nepoch 5 --drop_lr 5 --e_updates '1;KL_fake:1,KL_real:1,match_z:0,match_x:10' --g_updates '3;KL_fake:1,match_z:1000,match_x:0'
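
The <data_root> layout assumed above (file names hypothetical; if torchvision's ImageFolder is used underneath, each split may additionally need a class subfolder):

<data_root>/
  train/
    000001.jpg
    000002.jpg
    ...
  val/
    182638.jpg
    ...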

It is then beneficial to fine-tune the model with a larger batch_size and a stronger matching weight:

python age.py --dataset celeba --dataroot <data_root> --image_size 64 --save_dir <save_dir> --start_epoch 5 --lr 0.0002 --nz 64 --batch_size 256 --netG dcgan64px --netE dcgan64px --nepoch 6 --drop_lr 5   --e_updates '1;KL_fake:1,KL_real:1,match_z:0,match_x:15' --g_updates '3;KL_fake:1,match_z:1000,match_x:0' --netE_chp  <save_dir>/netE_epoch_5.pth --netG_chp <save_dir>/netG_epoch_5.pth

ImageNet

python age.py --dataset imagenet --dataroot /path/to/imagenet_dir/ --save_dir <save_dir> --image_size 32 --lr 0.0002 --nz 128 --netG dcgan32px --netE dcgan32px --nepoch 6 --drop_lr 3 --e_updates '1;KL_fake:1,KL_real:1,match_z:0,match_x:10' --g_updates '2;KL_fake:1,match_z:2000,match_x:0' --workers 12

It can be beneficial to switch to a batch size of 256 after several epochs, for example:
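
A hypothetical resume command in that spirit, mirroring the CelebA fine-tuning example above (the netE_epoch_N.pth / netG_epoch_N.pth names follow the checkpoint pattern used there; the epoch to resume from is up to you):

python age.py --dataset imagenet --dataroot /path/to/imagenet_dir/ --save_dir <save_dir> --image_size 32 --start_epoch 3 --lr 0.0002 --nz 128 --batch_size 256 --netG dcgan32px --netE dcgan32px --nepoch 6 --drop_lr 3 --e_updates '1;KL_fake:1,KL_real:1,match_z:0,match_x:10' --g_updates '2;KL_fake:1,match_z:2000,match_x:0' --workers 12 --netE_chp <save_dir>/netE_epoch_3.pth --netG_chp <save_dir>/netG_epoch_3.pth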

CIFAR-10

python age.py --dataset cifar10 --image_size 32 --save_dir <save_dir> --lr 0.0002 --nz 128 --netG dcgan32px --netE dcgan32px --nepoch 150 --drop_lr 40  --e_updates '1;KL_fake:1,KL_real:1,match_z:0,match_x:10' --g_updates '2;KL_fake:1,match_z:1000,match_x:0'

Tested with Python 2.7.

The implementation is based on the PyTorch DCGAN example code.
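
For orientation, a generator of the kind the dcgan32px model file presumably defines might look like the following (a sketch in the spirit of the PyTorch DCGAN example, not the repository's actual model code):

import torch.nn as nn

class Generator32(nn.Module):
    # Maps a (N, nz, 1, 1) latent tensor to a (N, 3, 32, 32) image
    # with the standard DCGAN ConvTranspose2d stack.
    def __init__(self, nz=128, ngf=64):
        super(Generator32, self).__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 4, 4, 1, 0, bias=False),       # 1x1  -> 4x4
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # 4x4  -> 8x8
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # 8x8  -> 16x16
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),            # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):
        return self.main(z)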

Citation

If you find this code useful, please cite our paper:

@inproceedings{DBLP:conf/aaai/UlyanovVL18,
  author    = {Dmitry Ulyanov and
               Andrea Vedaldi and
               Victor S. Lempitsky},
  title     = {It Takes (Only) Two: Adversarial Generator-Encoder Networks},
  booktitle = {{AAAI}},
  publisher = {{AAAI} Press},
  year      = {2018}
}