3DGAN-Pytorch

Projects that are alternatives of or similar to 3dgan Pytorch

Specgan
SpecGAN - generate audio with adversarial training
Stars: ✭ 92 (-12.38%)
Mutual labels:  gan
Tagan
An official PyTorch implementation of the paper "Text-Adaptive Generative Adversarial Networks: Manipulating Images with Natural Language", NeurIPS 2018
Stars: ✭ 97 (-7.62%)
Mutual labels:  gan
Zerospeech Tts Without T
A Pytorch implementation for the ZeroSpeech 2019 challenge.
Stars: ✭ 100 (-4.76%)
Mutual labels:  gan
Dped
Software and pre-trained models for automatic photo quality enhancement using Deep Convolutional Networks
Stars: ✭ 1,315 (+1152.38%)
Mutual labels:  gan
Doppelganger
[IMC 2020 (Best Paper Finalist)] Using GANs for Sharing Networked Time Series Data: Challenges, Initial Promise, and Open Questions
Stars: ✭ 97 (-7.62%)
Mutual labels:  gan
Lggan
[CVPR 2020] Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Stars: ✭ 97 (-7.62%)
Mutual labels:  gan
Deep Learning With Cats
Deep learning with cats (^._.^)
Stars: ✭ 1,290 (+1128.57%)
Mutual labels:  gan
Tbgan
Project Page of 'Synthesizing Coupled 3D Face Modalities by Trunk-Branch Generative Adversarial Networks'
Stars: ✭ 105 (+0%)
Mutual labels:  gan
Pytorch Generative Adversarial Networks
A very simple generative adversarial network (GAN) in PyTorch
Stars: ✭ 1,345 (+1180.95%)
Mutual labels:  gan
Lsd Seg
Learning from Synthetic Data: Addressing Domain Shift for Semantic Segmentation
Stars: ✭ 99 (-5.71%)
Mutual labels:  gan
Unified Gan Tensorflow
A Tensorflow implementation of GAN, WGAN and WGAN with gradient penalty.
Stars: ✭ 93 (-11.43%)
Mutual labels:  gan
Generative Models
Comparison of Generative Models in Tensorflow
Stars: ✭ 96 (-8.57%)
Mutual labels:  gan
Torchelie
Torchélie is a set of utility functions, layers, losses, models, trainers and other things for PyTorch.
Stars: ✭ 98 (-6.67%)
Mutual labels:  gan
Sprint gan
Privacy-preserving generative deep neural networks support clinical data sharing
Stars: ✭ 92 (-12.38%)
Mutual labels:  gan
Spectralnormalizationkeras
Spectral Normalization for Keras Dense and Convolution Layers
Stars: ✭ 100 (-4.76%)
Mutual labels:  gan
Awesome Computer Vision
Awesome Resources for Advanced Computer Vision Topics
Stars: ✭ 92 (-12.38%)
Mutual labels:  gan
3d Gan Superresolution
3D super-resolution using Generative Adversarial Networks
Stars: ✭ 97 (-7.62%)
Mutual labels:  gan
Fagan
A variant of the Self Attention GAN named: FAGAN (Full Attention GAN)
Stars: ✭ 105 (+0%)
Mutual labels:  gan
Tensorflow2.0 Examples
🙄 Difficult algorithm, Simple code.
Stars: ✭ 1,397 (+1230.48%)
Mutual labels:  gan
Exprgan
Facial Expression Editing with Controllable Expression Intensity
Stars: ✭ 98 (-6.67%)
Mutual labels:  gan

3DGAN-Pytorch

license arXiv Tag

PyTorch implementation of a 3D Generative Adversarial Network.

This is a PyTorch implementation of the paper "Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling". I referenced tf-3dgan and translated it to PyTorch. The differences are a sigmoid output layer, soft labels, and a learning-rate scheduler. Since all input voxels are binary (0 or 1), sigmoid is a better fit than tanh; soft labels smooth the loss, and learning-rate decay yields better-quality output.
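As a minimal sketch of the sigmoid-output choice (layer sizes and shapes here are illustrative assumptions, not the exact ones in this repo):

```python
import torch
import torch.nn as nn

# Hypothetical last block of a 3D-GAN generator; channel counts are assumed.
class GeneratorTail(nn.Module):
    def __init__(self):
        super().__init__()
        self.deconv = nn.ConvTranspose3d(64, 1, kernel_size=4, stride=2, padding=1)

    def forward(self, x):
        # Sigmoid maps logits into (0, 1), matching binary voxel occupancy;
        # tanh would output (-1, 1), which mismatches {0, 1} voxel targets.
        return torch.sigmoid(self.deconv(x))

feat = torch.randn(2, 64, 32, 32, 32)  # upstream feature map (illustrative shape)
voxels = GeneratorTail()(feat)
print(voxels.shape)  # torch.Size([2, 1, 64, 64, 64])
```

With stride 2, kernel 4, and padding 1, each spatial dimension doubles, and every output value is a valid occupancy probability.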

Requirements

  • pytorch
  • scipy
  • scikit-image

Usage

I use FloydHub to train the model.
FloydHub is a simple deep-learning training tool.
It offers a free tier with 100 hours of GPU server time.

pip install -U floyd-cli
# from the ./input directory
floyd data init chair
floyd data upload
# from the ./3D_GAN directory
floyd init 3dgan
floyd data status
floyd run --env pytorch --gpu --data [your data id] "python3 main.py"

This project's structure follows the FloydHub layout, so the parent directory contains the input, output, and 3D_GAN directories.

GAN Trick

I use a few more tricks for better results:

  • The standard loss to optimize G is min log(1 − D), but in practice people maximize log D instead.
  • Z is sampled from a Gaussian distribution N(0, 0.33).
  • Use soft labels: they smooth the loss function (without soft labels, I observed divergence after 500 epochs).
  • Learning-rate scheduler: after 500 epochs, the discriminator's learning rate is decayed.

If you want more tricks, see Soumith's ganhacks repo.
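The tricks above can be sketched as follows. The tiny stand-in discriminator, the 200-dim latent, and the decay factor are assumptions for illustration, not this repo's exact code:

```python
import torch
import torch.nn as nn
import torch.optim as optim

D = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())  # stand-in discriminator
opt_D = optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

# Trick: sample Z from a Gaussian N(0, 0.33) instead of a uniform distribution.
z = torch.randn(16, 200) * 0.33

# Trick: soft labels (0.9 for real, 0.1 for fake) instead of hard 1 / 0 targets.
real_labels = torch.full((16, 1), 0.9)

# Trick: decay the discriminator's learning rate after epoch 500
# (StepLR steps once per epoch; gamma here is an assumed value).
scheduler = optim.lr_scheduler.StepLR(opt_D, step_size=500, gamma=0.5)

real_scores = D(torch.randn(16, 8))   # scores for a dummy "real" batch
loss_real = bce(real_scores, real_labels)
loss_real.backward()
opt_D.step()
scheduler.step()  # call once per epoch after the optimizer step
```

Soft labels keep the discriminator from becoming overconfident, which is consistent with the divergence observed after 500 epochs without them.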

Result

1000 epochs

Reference

tf-3dgan
yunjey's pytorch-tutorial
