
seangal / dcgan_vae_pytorch

License: BSD-3-Clause
dcgan combined with vae in pytorch!

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to dcgan_vae_pytorch

Advanced Models
Provides several well-known neural network models (DCGAN, VAE, ResNet, etc.)
Stars: ✭ 48 (-56.36%)
Mutual labels:  dcgan, vae
benchmark VAE
Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Stars: ✭ 1,211 (+1000.91%)
Mutual labels:  vae, vae-gan
Generative-Model
Repository for implementation of generative models with Tensorflow 1.x
Stars: ✭ 66 (-40%)
Mutual labels:  dcgan, vae
Pycadl
Python package with source code from the course "Creative Applications of Deep Learning w/ TensorFlow"
Stars: ✭ 356 (+223.64%)
Mutual labels:  dcgan, vae
Deepnude An Image To Image Technology
Research on DeepNude's algorithm and on the theory and practice of general GAN (Generative Adversarial Network) image generation, including pix2pix, CycleGAN, UGATIT, DCGAN, SinGAN, ALAE, mGANprior, StarGAN-v2, and VAE models (TensorFlow 2 implementations).
Stars: ✭ 4,029 (+3562.73%)
Mutual labels:  dcgan, vae
Pytorch cpp
Deep Learning sample programs using PyTorch in C++
Stars: ✭ 114 (+3.64%)
Mutual labels:  dcgan, vae
Deeplearningmugenknock
An implementation cheat sheet for doing deep learning endlessly
Stars: ✭ 684 (+521.82%)
Mutual labels:  dcgan, vae
Deep Learning With Python
Example projects I completed to understand Deep Learning techniques with TensorFlow. Please note that I no longer maintain this repository.
Stars: ✭ 134 (+21.82%)
Mutual labels:  dcgan, vae
prediction gan
PyTorch Impl. of Prediction Optimizer (to stabilize GAN training)
Stars: ✭ 31 (-71.82%)
Mutual labels:  dcgan
yann
Yet Another Neural Network Library 🤔
Stars: ✭ 26 (-76.36%)
Mutual labels:  nn
EfficientMORL
EfficientMORL (ICML'21)
Stars: ✭ 22 (-80%)
Mutual labels:  vae
neat-python
Python implementation of the NEAT neuroevolution algorithm
Stars: ✭ 32 (-70.91%)
Mutual labels:  nn
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-62.73%)
Mutual labels:  vae
Densenet
MXNet implementation for DenseNet
Stars: ✭ 28 (-74.55%)
Mutual labels:  nn
Bagel
IPCCC 2018: Robust and Unsupervised KPI Anomaly Detection Based on Conditional Variational Autoencoder
Stars: ✭ 45 (-59.09%)
Mutual labels:  vae
Pytorch-Basic-GANs
Simple Pytorch implementations of most used Generative Adversarial Network (GAN) varieties.
Stars: ✭ 101 (-8.18%)
Mutual labels:  dcgan
molecular-VAE
Implementation of the paper - Automatic chemical design using a data-driven continuous representation of molecules
Stars: ✭ 36 (-67.27%)
Mutual labels:  vae
tensorflow-mnist-AAE
Tensorflow implementation of adversarial auto-encoder for MNIST
Stars: ✭ 86 (-21.82%)
Mutual labels:  vae
pytorch-gans
PyTorch implementation of GANs (Generative Adversarial Networks). DCGAN, Pix2Pix, CycleGAN, SRGAN
Stars: ✭ 21 (-80.91%)
Mutual labels:  dcgan
precision-recall-distributions
Assessing Generative Models via Precision and Recall (official repository)
Stars: ✭ 80 (-27.27%)
Mutual labels:  vae

dcgan_vae_pytorch

dcgan combined with vae in pytorch!

This code is based on pytorch/examples and staturecrane/dcgan_vae_torch.

The original article can be found here.
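
At a high level, the model pairs a convolutional encoder with a DCGAN-style generator that doubles as the VAE decoder: the encoder produces the mean and log-variance of q(z|x), a latent vector is sampled with the reparameterization trick, and the generator decodes it back to an image (the same generator can also be trained adversarially on z ~ N(0, I)). The sketch below only illustrates that idea; the class names, layer widths, and the 64x64 image assumption are illustrative and not taken from this repository's code.

# Minimal sketch of a VAE whose decoder is a DCGAN-style generator.
# Sizes and names are illustrative, not the repository's actual code.
import torch
import torch.nn as nn

nz, ngf, ndf, nc = 100, 64, 64, 3  # latent dim, generator/discriminator widths, image channels

class Encoder(nn.Module):
    """Maps a 64x64 image to the mean and log-variance of q(z|x)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(nc, ndf, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1), nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1), nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1), nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),
        )
        self.to_mu = nn.Conv2d(ndf * 8, nz, 4, 1, 0)
        self.to_logvar = nn.Conv2d(ndf * 8, nz, 4, 1, 0)

    def forward(self, x):
        h = self.conv(x)
        return self.to_mu(h), self.to_logvar(h)

class Decoder(nn.Module):
    """DCGAN-style generator used as the VAE decoder: z -> 64x64 image."""
    def __init__(self):
        super().__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0), nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1), nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1), nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1), nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.main(z)

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps so gradients flow through the encoder."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

# Forward pass: encode, sample, reconstruct.
enc, dec = Encoder(), Decoder()
x = torch.randn(8, nc, 64, 64)           # dummy batch of 64x64 RGB images
mu, logvar = enc(x)
x_rec = dec(reparameterize(mu, logvar))  # reconstruction, shape (8, 3, 64, 64)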

Requirements

  • torch
  • torchvision
  • visdom
  • (optional) lmdb
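
The dependencies can usually be installed with pip; the package names below are the standard PyPI names and may not match the exact versions this project was written against:

pip install torch torchvision visdom lmdb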

Usage

To start the visdom server:

python -m visdom.server

To start training:

usage: main.py [-h] --dataset DATASET --dataroot DATAROOT [--workers WORKERS]
               [--batchSize BATCHSIZE] [--imageSize IMAGESIZE] [--nz NZ]
               [--ngf NGF] [--ndf NDF] [--niter NITER] [--saveInt SAVEINT] [--lr LR]
               [--beta1 BETA1] [--cuda] [--ngpu NGPU] [--netG NETG]
               [--netD NETD]

optional arguments:
  -h, --help            show this help message and exit
  --dataset DATASET     cifar10 | lsun | imagenet | folder | lfw
  --dataroot DATAROOT   path to dataset
  --workers WORKERS     number of data loading workers
  --batchSize BATCHSIZE
                        input batch size
  --imageSize IMAGESIZE
                        the height / width of the input image to network
  --nz NZ               size of the latent z vector
  --ngf NGF             number of generator filters
  --ndf NDF             number of discriminator filters
  --niter NITER         number of epochs to train for
  --saveInt SAVEINT     number of epochs between checkpoints
  --lr LR               learning rate, default=0.0002
  --beta1 BETA1         beta1 for adam. default=0.5
  --cuda                enables cuda
  --ngpu NGPU           number of GPUs to use
  --netG NETG           path to netG (to continue training)
  --netD NETD           path to netD (to continue training)
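
For example, a typical run on CIFAR-10 with CUDA might look like the following; the dataroot path and hyperparameter values are illustrative, not prescribed by the project:

python main.py --dataset cifar10 --dataroot ./data --cuda --niter 25 --batchSize 64 --imageSize 64 --saveInt 5

Training can be resumed from saved checkpoints by passing --netG and --netD with the corresponding model paths.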