
yilei0620 / 3d_conditional_gan

License: MIT
Code for a VAE-GAN model for 3D shape reconstruction from depth data

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to 3d_conditional_gan

Srgan Tensorflow
Tensorflow implementation of the SRGAN algorithm for single image super-resolution
Stars: ✭ 754 (+1785%)
Mutual labels:  generative-adversarial-network
Multi Viewpoint Image Generation
Given an image and a target viewpoint, generate a synthetic image from the target viewpoint
Stars: ✭ 23 (-42.5%)
Mutual labels:  generative-adversarial-network
Image To Image Papers
🦓<->🦒 🌃<->🌆 A collection of image to image papers with code (constantly updating)
Stars: ✭ 949 (+2272.5%)
Mutual labels:  generative-adversarial-network
Instagan
InstaGAN: Instance-aware Image Translation (ICLR 2019)
Stars: ✭ 761 (+1802.5%)
Mutual labels:  generative-adversarial-network
Contrastive Unpaired Translation
Contrastive unpaired image-to-image translation, faster and lighter training than cyclegan (ECCV 2020, in PyTorch)
Stars: ✭ 822 (+1955%)
Mutual labels:  generative-adversarial-network
St Cgan
Dataset and Code for our CVPR'18 paper ST-CGAN: "Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal"
Stars: ✭ 13 (-67.5%)
Mutual labels:  generative-adversarial-network
Context Encoder
[CVPR 2016] Unsupervised Feature Learning by Image Inpainting using GANs
Stars: ✭ 731 (+1727.5%)
Mutual labels:  generative-adversarial-network
Conversational Ai
Conversational AI Reading Materials
Stars: ✭ 34 (-15%)
Mutual labels:  generative-adversarial-network
Cadgan
ICML 2019. Turn a pre-trained GAN model into a content-addressable model without retraining.
Stars: ✭ 19 (-52.5%)
Mutual labels:  generative-adversarial-network
Residual image learning gan
Tensorflow implementation for Paper "Learning Residual Images for Face Attribute Manipulation"
Stars: ✭ 29 (-27.5%)
Mutual labels:  generative-adversarial-network
Pytorch Pretrained Biggan
🦋A PyTorch implementation of BigGAN with pretrained weights and conversion scripts.
Stars: ✭ 779 (+1847.5%)
Mutual labels:  generative-adversarial-network
Musegan
An AI for Music Generation
Stars: ✭ 794 (+1885%)
Mutual labels:  generative-adversarial-network
Wgan Lp Tensorflow
Reproduction code for WGAN-LP
Stars: ✭ 13 (-67.5%)
Mutual labels:  generative-adversarial-network
Anime Inpainting
An application tool of edge-connect, which can do anime inpainting and drawing. Automatically repairs anime character images: removes mosaics, fills in missing regions, and removes blemishes.
Stars: ✭ 761 (+1802.5%)
Mutual labels:  generative-adversarial-network
Tensorflow Srgan
Tensorflow implementation of "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" (Ledig et al. 2017)
Stars: ✭ 33 (-17.5%)
Mutual labels:  generative-adversarial-network
Gans In Action
Companion repository to GANs in Action: Deep learning with Generative Adversarial Networks
Stars: ✭ 748 (+1770%)
Mutual labels:  generative-adversarial-network
Mnist inception score
Train an MNIST classifier and use it to compute the inception score (ICP)
Stars: ✭ 25 (-37.5%)
Mutual labels:  generative-adversarial-network
Relativistic Average Gan Keras
The implementation of Relativistic average GAN with Keras
Stars: ✭ 36 (-10%)
Mutual labels:  generative-adversarial-network
Deep Generative Models
Deep generative models implemented with TensorFlow 2.0: eg. Restricted Boltzmann Machine (RBM), Deep Belief Network (DBN), Deep Boltzmann Machine (DBM), Convolutional Variational Auto-Encoder (CVAE), Convolutional Generative Adversarial Network (CGAN)
Stars: ✭ 34 (-15%)
Mutual labels:  generative-adversarial-network
Udacity Deep Learning Nanodegree
A collection of projects made during my Deep Learning Nanodegree by Udacity
Stars: ✭ 15 (-62.5%)
Mutual labels:  generative-adversarial-network

3d_conditional_gan

We use Conditional Generative Adversarial Nets to reconstruct 3D shapes.

The dataset is ModelNet40 (http://3dshapenets.cs.princeton.edu/). We convert the mesh data into 64 × 64 × 64 voxel grids, which are used to train our model.
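A minimal voxelization sketch, assuming the `trimesh` library; this is not the repository's own converter, and the file path and normalization details below are illustrative only.

```python
import numpy as np
import trimesh  # assumption: trimesh is used here only for illustration

def mesh_to_voxels(path, resolution=64):
    """Load a mesh and rasterize it into a binary resolution^3 occupancy grid."""
    mesh = trimesh.load(path, force='mesh')
    # Center the mesh and scale its longest side to just under one unit.
    mesh.apply_translation(-mesh.bounds.mean(axis=0))
    mesh.apply_scale((1.0 - 1e-3) / mesh.extents.max())
    # Rasterize with a pitch giving roughly `resolution` cells along the longest axis.
    vox = mesh.voxelized(pitch=1.0 / resolution)
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    m = vox.matrix.astype(np.uint8)
    sx, sy, sz = (min(s, resolution) for s in m.shape)
    grid[:sx, :sy, :sz] = m[:sx, :sy, :sz]  # paste, clipping any off-by-one overflow
    return grid

# Example (path is hypothetical):
# vox = mesh_to_voxels('ModelNet40/chair/train/chair_0001.off')
```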

The first goal is to reconstruct 3D objects from an input label and a random 200-dimensional vector.

The code in lib is heavily borrowed from DCGAN.

The training process is:

Step 1: Pre-train an Encoder-Decoder model on the training set.
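A minimal sketch of Step 1, written in PyTorch for readability (the repository itself builds on the Theano-based DCGAN code in lib); the layer sizes, names, and reconstruction loss are illustrative assumptions, not the repo's actual architecture.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, z_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 64 -> 32
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 32 -> 16
            nn.Conv3d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2), # 16 -> 8
            nn.Flatten(), nn.Linear(128 * 8 * 8 * 8, z_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, z_dim=200):
        super().__init__()
        self.fc = nn.Linear(z_dim, 128 * 8 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8, 8)
        return self.net(h)

# Pre-training: binary cross-entropy between input and reconstructed voxels.
enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
loss_fn = nn.BCELoss()
# for voxels in loader:                    # voxels: (batch, 1, 64, 64, 64) in {0, 1}
#     recon = dec(enc(voxels))
#     loss = loss_fn(recon, voxels)
#     opt.zero_grad(); loss.backward(); opt.step()
```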

Step 2: Use the trained Encoder-Decoder model to encode all samples in the training set. Collect the latent vectors of the training samples and estimate their distribution in latent space by fitting a multivariate Gaussian. (BIG ASSUMPTION: the covariance matrix is diagonal, i.e. each dimension is independent of the others.)
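With a diagonal covariance, fitting the Gaussian reduces to a per-dimension mean and standard deviation. The sketch below assumes an (N, 200) array of encoder outputs and is illustrative only.

```python
import numpy as np

def fit_diagonal_gaussian(latents):
    """latents: (N, 200) array of encoder outputs for the whole training set."""
    mu = latents.mean(axis=0)
    sigma = latents.std(axis=0)  # diagonal covariance: per-dimension std only
    return mu, sigma

def sample_z(mu, sigma, n, rng=np.random):
    """Draw n noise vectors from the fitted latent distribution for GAN training."""
    return mu + sigma * rng.randn(n, mu.shape[0])
```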

Step 3: Train the GAN model. Initialize the discriminator's parameters randomly. Initialize the generator's parameters by COPYING the parameters of the decoder. Unlike a usual GAN, the noise vector z is drawn from the latent-space distribution found with the Encoder-Decoder model. Careful setting of the learning rates helps keep the generator and discriminator training at the same pace: if one gets far ahead of the other, the other's gradients stay large and do not decay.
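A sketch of Step 3 in the same illustrative PyTorch setting: the generator is a copy of the trained decoder, z is drawn from the Gaussian fitted in Step 2, and the generator and discriminator use different learning rates to keep them in pace. `Discriminator`, the chosen rates, and the (omitted) label conditioning are assumptions, not the repository's code.

```python
import copy
import torch

G = copy.deepcopy(dec)   # generator initialized by copying the decoder's parameters
D = Discriminator()      # hypothetical 3D conv classifier, randomly initialized
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))  # example rates only
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.999))
bce = torch.nn.BCELoss()

# for real in loader:                                   # real: (B, 1, 64, 64, 64)
#     z = torch.as_tensor(sample_z(mu, sigma, real.size(0)), dtype=torch.float32)
#     fake = G(z)
#     ones = torch.ones(real.size(0), 1)
#     zeros = torch.zeros(real.size(0), 1)
#     # Discriminator step: real vs. detached fake
#     d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
#     opt_d.zero_grad(); d_loss.backward(); opt_d.step()
#     # Generator step: fool the discriminator
#     g_loss = bce(D(fake), ones)
#     opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```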

Examples generated by our GAN model: [sample output image]
