
marian42 / Shapegan

Generative Adversarial Networks and Autoencoders for 3D Shapes

Programming Languages

python

Projects that are alternatives to or similar to Shapegan

Alae
[CVPR2020] Adversarial Latent Autoencoders
Stars: ✭ 3,178 (+2004.64%)
Mutual labels:  gan, generative-adversarial-network, autoencoder
Ad examples
A collection of anomaly detection methods (iid/point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
Stars: ✭ 641 (+324.5%)
Mutual labels:  gan, generative-adversarial-network, autoencoder
Tensorflow Tutorial
TensorFlow tutorial from basic to hard, with Chinese-language AI tutorials by 莫烦Python (Morvan)
Stars: ✭ 4,122 (+2629.8%)
Mutual labels:  gan, generative-adversarial-network, autoencoder
Gpnd
Generative Probabilistic Novelty Detection with Adversarial Autoencoders
Stars: ✭ 112 (-25.83%)
Mutual labels:  gan, generative-adversarial-network, autoencoder
Generative Models
Annotated, understandable, and visually interpretable PyTorch implementations of: VAE, BIRVAE, NSGAN, MMGAN, WGAN, WGANGP, LSGAN, DRAGAN, BEGAN, RaGAN, InfoGAN, fGAN, FisherGAN
Stars: ✭ 438 (+190.07%)
Mutual labels:  gan, generative-adversarial-network, autoencoder
Focal Frequency Loss
Focal Frequency Loss for Generative Models
Stars: ✭ 141 (-6.62%)
Mutual labels:  gan, generative-adversarial-network, autoencoder
P2pala
Page to PAGE Layout Analysis Tool
Stars: ✭ 147 (-2.65%)
Mutual labels:  gan, generative-adversarial-network
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+7140.4%)
Mutual labels:  gan, generative-adversarial-network
Ganimation
GANimation: Anatomically-aware Facial Animation from a Single Image (ECCV'18 Oral) [PyTorch]
Stars: ✭ 1,730 (+1045.7%)
Mutual labels:  gan, generative-adversarial-network
Generative adversarial networks 101
Keras implementations of Generative Adversarial Networks. GANs, DCGAN, CGAN, CCGAN, WGAN and LSGAN models with MNIST and CIFAR-10 datasets.
Stars: ✭ 138 (-8.61%)
Mutual labels:  gan, generative-adversarial-network
Capsule Gan
Code for my Master thesis on "Capsule Architecture as a Discriminator in Generative Adversarial Networks".
Stars: ✭ 120 (-20.53%)
Mutual labels:  gan, generative-adversarial-network
Gandissect
PyTorch-based tools for visualizing and understanding the neurons of a GAN. https://gandissect.csail.mit.edu/
Stars: ✭ 1,700 (+1025.83%)
Mutual labels:  gan, generative-adversarial-network
Stylegan.pytorch
A PyTorch implementation for StyleGAN with full features.
Stars: ✭ 150 (-0.66%)
Mutual labels:  gan, generative-adversarial-network
Mlds2018spring
Machine Learning and having it Deep and Structured (MLDS) in 2018 spring
Stars: ✭ 124 (-17.88%)
Mutual labels:  gan, generative-adversarial-network
Tensorflow Mnist Cgan Cdcgan
TensorFlow implementation of conditional Generative Adversarial Networks (cGAN) and conditional Deep Convolutional Generative Adversarial Networks (cDCGAN) for the MNIST dataset.
Stars: ✭ 122 (-19.21%)
Mutual labels:  gan, generative-adversarial-network
Awesome Gan For Medical Imaging
Awesome GAN for Medical Imaging
Stars: ✭ 1,814 (+1101.32%)
Mutual labels:  gan, generative-adversarial-network
Rectorch
rectorch is a PyTorch-based framework for state-of-the-art top-N recommendation
Stars: ✭ 121 (-19.87%)
Mutual labels:  generative-adversarial-network, autoencoder
Deep Learning With Python
Example projects I completed to understand deep learning techniques with TensorFlow. Please note that I no longer maintain this repository.
Stars: ✭ 134 (-11.26%)
Mutual labels:  gan, generative-adversarial-network
Human Video Generation
Human Video Generation Paper List
Stars: ✭ 139 (-7.95%)
Mutual labels:  3d, gan
Nice Gan Pytorch
Official PyTorch implementation of NICE-GAN: Reusing Discriminators for Encoding: Towards Unsupervised Image-to-Image Translation
Stars: ✭ 140 (-7.28%)
Mutual labels:  gan, generative-adversarial-network

Generative Adversarial Networks and Autoencoders for 3D Shapes

Shapes generated with our proposed GAN architecture and reconstructed using Marching Cubes

This repository provides code for the paper "Adversarial Generation of Continuous Implicit Shape Representations" and for my master's thesis on generative machine learning models for 3D shapes. It contains:

  • the networks proposed in the paper (GANs with a DeepSDF network as the generator and a 3D CNN or PointNet as the discriminator)
  • an autoencoder, variational autoencoder and GANs for SDF voxel volumes using 3D CNNs
  • an implementation of the DeepSDF autodecoder that learns implicit function representations of 3D shapes
  • a GAN that uses a DeepSDF network as the generator and a 3D CNN as the discriminator ("Hybrid GAN", as proposed in the paper, but without progressive growing and without gradient penalty)
  • a data preparation pipeline that can prepare SDF datasets from triangle meshes, such as the Shapenet dataset (based on my mesh_to_sdf project)
  • a ray marching renderer to render signed distance fields given by a neural network, as well as a classic rasterized renderer to render triangle meshes reconstructed with Marching Cubes
  • tools to visualize the results

Note that although the code provided here works, most of the scripts need some configuration for a specific task.

This project uses two ways to represent 3D shapes: voxel volumes and implicit functions. Both represent shapes with signed distances.

For both representations, there are networks that learn latent embeddings and then reconstruct objects from latent codes. These are the autoencoder and variational autoencoder for voxel volumes and the autodecoder for the DeepSDF network.

In addition, for both representations, there are generative adversarial networks that learn to generate novel objects from random latent codes. The GANs come in a classic and a Wasserstein flavor.
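To make the two representations concrete, here is a minimal sketch (using PyTorch; all names are placeholders, not identifiers from this repository). A voxel network outputs a fixed grid of signed distances, while a DeepSDF-style network is queried at arbitrary points:

import torch

# Voxel representation: a fixed grid of signed distance values, here at
# resolution 32, as used by the voxel-based networks in this project.
voxels = torch.randn(32, 32, 32)  # placeholder for a decoded SDF volume
inside = voxels < 0               # negative signed distance means inside the shape

# Implicit representation: a network maps (query point, latent code) to a
# signed distance, so the shape can be sampled at any resolution.
latent_code = torch.randn(128)        # latent shape embedding (size is an assumption)
points = torch.rand(1000, 3) * 2 - 1  # query points in [-1, 1]^3
# sdf_values = sdf_net(points, latent_code)  # sdf_net is a stand-in for a DeepSDF-style network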

Reproducing the paper

This section explains how to reproduce the paper "Adversarial Generation of Continuous Implicit Shape Representations".

Data preparation

To train the model, the meshes in the Shapenet dataset need to be voxelized for the voxel-based approach and converted to SDF point clouds for the point-based approach.

We provide ready-to-use datasets for the Chairs, Airplanes and Sofas categories of Shapenet as a download. The dataset is 71 GB.

To prepare the data yourself, follow these steps:

  1. Install the mesh_to_sdf pip module (see the sketch after this list).
  2. Download the Shapenet files to the data/shapenet/ directory or create an equivalent symlink.
  3. Review the settings at the top of prepare_shapenet_dataset.py. The default settings are configured for reproducing the GAN paper, so you shouldn't need to change anything. You can change the dataset category that will be prepared; the default is the chairs category. You can disable preparation of either the voxel or point datasets if you don't need both.
  4. Run prepare_shapenet_dataset.py. You can stop and resume this script and it will continue where it left off.
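For reference, this is roughly what the pipeline computes per mesh using the mesh_to_sdf module directly (a sketch; prepare_shapenet_dataset.py adds dataset traversal, the paper's resolution settings and resumption on top of this):

import trimesh
from mesh_to_sdf import mesh_to_voxels, sample_sdf_near_surface

mesh = trimesh.load('data/shapenet/example/model.obj')  # example path

# Voxel dataset: a grid of signed distances (resolution 32 for the non-GAN networks)
voxels = mesh_to_voxels(mesh, voxel_resolution=32)

# Point dataset: SDF samples concentrated near the surface, for the point-based approach
points, sdf = sample_sdf_near_surface(mesh, number_of_points=250000)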

Training

Voxel-based discriminator

To train the GAN with the 3D CNN discriminator, run

python3 train_hybrid_progressive_gan.py iteration=0
python3 train_hybrid_progressive_gan.py iteration=1
python3 train_hybrid_progressive_gan.py iteration=2
python3 train_hybrid_progressive_gan.py iteration=3

This runs the four steps of progressive growing. Each iteration starts from the result of the previous iteration, or from the most recent result of the current iteration if the continue parameter is supplied. Add the nogui parameter to disable the model viewer during training; use it when the script is run remotely.
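For example, to resume the second growing step on a machine without a display, combining the parameters described above:

python3 train_hybrid_progressive_gan.py iteration=1 continue nogui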

Point-based discriminator

TODO

Note that the PointNet-based approach currently has a separate implementation of the generator and doesn't work with the visualization scripts provided here. The two implementations will be merged soon so that the demos work.

Using pretrained generator models

In the examples directory, you'll find network parameters for the GAN generators trained on chairs, airplanes and sofas with the 3D CNN discriminator. You can use these by loading the generator from these files, e.g. in demo_sdf_net.py you can change sdf_net.filename accordingly.
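A minimal sketch of that change (the import and the file name are assumptions; mirror what demo_sdf_net.py actually imports and pick a real file from examples/):

from sdf_net import SDFNet  # hypothetical import path

sdf_net = SDFNet()
sdf_net.filename = 'examples/chairs-generator.to'  # hypothetical file name
sdf_net.load()  # assumed helper that reads the parameters from sdf_net.filename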

TODO: Examples for the PointNet-based GANs will be added soon.

Running other 3D deep learning models

Data preparation

Two data preparation scripts are available: prepare_shapenet_dataset.py is configured to work specifically with the Shapenet dataset, while prepare_data.py can be used with any folder of 3D meshes. Both need to be configured depending on what data you want to prepare. Most of the time, not all types of data need to be prepared: for the DeepSDF network, you need SDF point clouds; for the remaining networks, you need voxels at resolution 32. The "uniform" and "surface" datasets, as well as voxels at other resolutions, are only needed for the GAN paper (see the section above).

Training

Run any of the scripts starting with train_ to train the networks. train_autoencoder.py trains the variational autoencoder unless the classic argument is supplied. All training scripts take these command line arguments (see the examples after this list):

  • continue to load existing parameters
  • nogui to not show the model viewer, which is useful for VMs
  • show_slice to show a text representation of the learned shape
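For example, to resume training the variational autoencoder on a remote machine, or to train the classic autoencoder instead:

python3 train_autoencoder.py continue nogui
python3 train_autoencoder.py classic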

Progress is saved after each epoch. There is no stopping criterion; the longer you train, the better the result. You should have at least 8 GB of GPU RAM available. Use a datacenter GPU; on a desktop GPU, it will take several days of training to get good results. The classifiers take the least time to train and the GANs take the most.

Visualization

To visualize the results, run any of the scripts starting with demo_. They might need to be configured depending on the model that was trained and the visualizations needed. The create_plot.py script contains code to generate the figures for my thesis.

Using the pretrained DeepSDF model and recreating the latent space traversal animation

This section explains how to get a DeepSDF network model that was pretrained on the Shapenet dataset and how to use it to recreate this latent space traversal animation.

Since this model was trained, some of the network parameters in the code have changed. If you're training a new model, you can use the parameters on the master branch and it will work just as well. To be compatible with the pretrained model, you'll need the changes in the pretrained-deepsdf-shapenet branch.
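Switching is a standard git checkout (the branch name is given above):

git checkout pretrained-deepsdf-shapenet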

To generate the latent space animation, follow these steps:

  1. Switch to the pretrained-deepsdf-shapenet branch (see the command above).

  2. Move the contents of the examples/deepsdf-shapenet-pretrained directory to the project root directory. The scripts will look for the .to files in models/ and data/ relative to the project root.

  3. Run python3 demo_latent_space.py. This takes about 40 minutes on my machine. To make it faster, you can lower the values of SAMPLE_COUNT and TRANSITION_FRAMES in demo_latent_space.py.

  4. To render a video file from the frames, run ffmpeg -framerate 30 -i images/frame-%05d.png -c:v libx264 -profile:v high -crf 19 -pix_fmt yuv420p video.mp4.

Note that after completing steps 1 and 2, you can run python3 demo_sdf_net.py to show a real-time latent space interpolation.
