
edgarriba / Ali Pytorch

License: BSD-3-Clause
Adversarially Learned Inference in Pytorch

Programming Languages

python

Projects that are alternatives of or similar to Ali Pytorch

Texturize
🤖🖌️ Generate photo-realistic textures based on source images. Remix, remake, mashup! Useful if you want to create variations on a theme or elaborate on an existing texture.
Stars: ✭ 366 (+1255.56%)
Mutual labels:  generative-model
Seqgan
A simplified PyTorch implementation of "SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient." (Yu, Lantao, et al.)
Stars: ✭ 502 (+1759.26%)
Mutual labels:  generative-model
Cadgan
ICML 2019. Turn a pre-trained GAN model into a content-addressable model without retraining.
Stars: ✭ 19 (-29.63%)
Mutual labels:  generative-model
Tensorflow Generative Model Collections
Collection of generative models in Tensorflow
Stars: ✭ 3,785 (+13918.52%)
Mutual labels:  generative-model
Dancenet
DanceNet - 💃💃 Dance generator using Autoencoder, LSTM and Mixture Density Network. (Keras)
Stars: ✭ 469 (+1637.04%)
Mutual labels:  generative-model
Simulated Unsupervised Tensorflow
TensorFlow implementation of "Learning from Simulated and Unsupervised Images through Adversarial Training"
Stars: ✭ 558 (+1966.67%)
Mutual labels:  generative-model
Neuraldialog Cvae
Tensorflow Implementation of Knowledge-Guided CVAE for dialog generation ACL 2017. It is released by Tiancheng Zhao (Tony) from Dialog Research Center, LTI, CMU
Stars: ✭ 279 (+933.33%)
Mutual labels:  generative-model
Dcgan Tensorflow
A tensorflow implementation of "Deep Convolutional Generative Adversarial Networks"
Stars: ✭ 6,963 (+25688.89%)
Mutual labels:  generative-model
Pixel Rnn Tensorflow
in progress
Stars: ✭ 478 (+1670.37%)
Mutual labels:  generative-model
Delving Deep Into Gans
Generative Adversarial Networks (GANs) resources sorted by citations
Stars: ✭ 834 (+2988.89%)
Mutual labels:  generative-model
Awesome Vaes
A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (+1448.15%)
Mutual labels:  generative-model
Sentence Vae
PyTorch Re-Implementation of "Generating Sentences from a Continuous Space" by Bowman et al 2015 https://arxiv.org/abs/1511.06349
Stars: ✭ 462 (+1611.11%)
Mutual labels:  generative-model
Segan
Speech Enhancement Generative Adversarial Network in TensorFlow
Stars: ✭ 661 (+2348.15%)
Mutual labels:  generative-model
Curated List Of Awesome 3d Morphable Model Software And Data
The idea of this list is to collect shared data and algorithms around 3D Morphable Models. You are invited to contribute to this list by adding a pull request. The original list arose from the Dagstuhl seminar on 3D Morphable Models https://www.dagstuhl.de/19102 in March 2019.
Stars: ✭ 375 (+1288.89%)
Mutual labels:  generative-model
Began Tensorflow
Tensorflow implementation of "BEGAN: Boundary Equilibrium Generative Adversarial Networks"
Stars: ✭ 904 (+3248.15%)
Mutual labels:  generative-model
Gran
Efficient Graph Generation with Graph Recurrent Attention Networks, Deep Generative Model of Graphs, Graph Neural Networks, NeurIPS 2019
Stars: ✭ 312 (+1055.56%)
Mutual labels:  generative-model
Awesome Semi Supervised Learning
📜 An up-to-date & curated list of awesome semi-supervised learning papers, methods & resources.
Stars: ✭ 538 (+1892.59%)
Mutual labels:  generative-model
Simple Variational Autoencoder
A VAE written entirely in Numpy/Cupy
Stars: ✭ 20 (-25.93%)
Mutual labels:  generative-model
Mnist inception score
Training an MNIST classifier and using it to compute the Inception Score (ICP)
Stars: ✭ 25 (-7.41%)
Mutual labels:  generative-model
Generative Models
Collection of generative models, e.g. GAN, VAE in Pytorch and Tensorflow.
Stars: ✭ 6,701 (+24718.52%)
Mutual labels:  generative-model

Adversarially Learned Inference

Implementation of the paper "Adversarially Learned Inference" (Dumoulin et al., 2016) in PyTorch

main.py includes the training code for the following datasets:

  • [X] SVHN
  • [ ] CIFAR10
  • [ ] CelebA

models.py includes the network architectures for the different datasets as defined in the original paper.
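
For orientation, ALI sets up an adversarial game between an encoder Gz (images x to latents z), a decoder Gx (latents z to images x), and a discriminator that scores (x, z) pairs, trying to tell encoder pairs (x, Gz(x)) apart from decoder pairs (Gx(z), z). The following is a minimal sketch of one training step under that formulation; it uses a single joint discriminator netD(x, z) for brevity (the repository splits it into netDx, netDz and netDxz) and hypothetical module names, so it is not the repository's exact code.

    import torch
    import torch.nn.functional as F

    def ali_step(netGx, netGz, netD, optD, optG, x, nz):
        # One ALI update on a batch x. The discriminator tries to tell the
        # two joint distributions q(x, z) and p(x, z) apart; the encoder
        # and decoder try to make them indistinguishable.
        b = x.size(0)
        z = torch.randn(b, nz, 1, 1, device=x.device)   # prior sample z ~ p(z)
        z_hat = netGz(x)                                # encoder pair (x, z_hat)
        x_tilde = netGx(z)                              # decoder pair (x_tilde, z)

        # Discriminator step: label encoder pairs 1, decoder pairs 0.
        d_enc = netD(x, z_hat.detach())
        d_dec = netD(x_tilde.detach(), z)
        loss_d = (F.binary_cross_entropy_with_logits(d_enc, torch.ones_like(d_enc))
                  + F.binary_cross_entropy_with_logits(d_dec, torch.zeros_like(d_dec)))
        optD.zero_grad()
        loss_d.backward()
        optD.step()

        # Generator step: encoder and decoder try to swap the labels.
        # Gradients also flow into netD here, but optG only updates the
        # Gx/Gz parameters; netD's grads are cleared on the next D step.
        d_enc = netD(x, z_hat)
        d_dec = netD(x_tilde, z)
        loss_g = (F.binary_cross_entropy_with_logits(d_enc, torch.zeros_like(d_enc))
                  + F.binary_cross_entropy_with_logits(d_dec, torch.ones_like(d_dec)))
        optG.zero_grad()
        loss_g.backward()
        optG.step()
        return loss_d.item(), loss_g.item()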

Usage

usage: main.py [-h] --dataset DATASET --dataroot DATAROOT [--workers WORKERS]
               [--batch-size BATCH_SIZE] [--image-size IMAGE_SIZE] [--nc NC]
               [--nz NZ] [--epochs EPOCHS] [--lr LR] [--beta1 BETA1]
               [--beta2 BETA2] [--cuda] [--ngpu NGPU] [--gpu-id GPU_ID]
               [--netGx NETGX] [--netGz NETGZ] [--netDz NETDZ] [--netDx NETDX]
               [--netDxz NETDXZ] [--clamp_lower CLAMP_LOWER]
               [--clamp_upper CLAMP_UPPER] [--experiment EXPERIMENT]

optional arguments:
  -h, --help            show this help message and exit
  --dataset DATASET     cifar10 | svhn | celeba
  --dataroot DATAROOT   path to dataset
  --workers WORKERS     number of data loading workers
  --batch-size BATCH_SIZE
                        input batch size
  --image-size IMAGE_SIZE
                        the height / width of the input image to network
  --nc NC               input image channels
  --nz NZ               size of the latent z vector
  --epochs EPOCHS       number of epochs to train for
  --lr LR               learning rate for optimizer, default=0.00005
  --beta1 BETA1         beta1 for adam. default=0.5
  --beta2 BETA2         beta2 for adam. default=0.999
  --cuda                enables cuda
  --ngpu NGPU           number of GPUs to use
  --gpu-id GPU_ID       id(s) for CUDA_VISIBLE_DEVICES
  --netGx NETGX         path to netGx (to continue training)
  --netGz NETGZ         path to netGz (to continue training)
  --netDz NETDZ         path to netDz (to continue training)
  --netDx NETDX         path to netDx (to continue training)
  --netDxz NETDXZ       path to netDxz (to continue training)
  --clamp_lower CLAMP_LOWER
  --clamp_upper CLAMP_UPPER
  --experiment EXPERIMENT
                        Where to store samples and models
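
The --lr, --beta1 and --beta2 flags are the standard Adam hyperparameters, and the --clamp_lower/--clamp_upper flags suggest WGAN-style weight clipping on the discriminator. A plausible wiring (a sketch under those assumptions, with hypothetical variable names, not the script's actual code):

    import torch.optim as optim

    # Adam for both players, driven by the CLI flags.
    optimizerG = optim.Adam(list(netGx.parameters()) + list(netGz.parameters()),
                            lr=args.lr, betas=(args.beta1, args.beta2))
    optimizerD = optim.Adam(netD.parameters(),
                            lr=args.lr, betas=(args.beta1, args.beta2))

    # Clamp discriminator weights after each update (an assumption
    # inferred from the flag names, in the style of WGAN weight clipping):
    for p in netD.parameters():
        p.data.clamp_(args.clamp_lower, args.clamp_upper)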

Example

Command-line example for training on SVHN:

python main.py --dataset svhn --dataroot . --experiment svhn_ali --cuda --ngpu 1 --gpu-id 1 --batch-size 100 --epochs 100 --image-size 32 --nz 256 --lr 1e-4 --beta1 0.5 --beta2 10e-3
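
To resume an interrupted run, the --netGx/--netGz/--netDx/--netDz/--netDxz flags accept previously saved model files, for example (the checkpoint file names here are hypothetical):

python main.py --dataset svhn --dataroot . --experiment svhn_ali --cuda --netGx svhn_ali/netGx.pth --netGz svhn_ali/netGz.pth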

Cite

@article{DBLP:journals/corr/DumoulinBPLAMC16,
  author    = {Vincent Dumoulin and
               Ishmael Belghazi and
               Ben Poole and
               Alex Lamb and
               Mart{\'{\i}}n Arjovsky and
               Olivier Mastropietro and
               Aaron C. Courville},
  title     = {Adversarially Learned Inference},
  journal   = {CoRR},
  volume    = {abs/1606.00704},
  year      = {2016},
  url       = {http://arxiv.org/abs/1606.00704},
}