
houxianxu / Dfc Vae

Variational Autoencoder trained by Feature Perceptual Loss

Programming Languages

lua
6591 projects

Projects that are alternatives to or similar to Dfc Vae

vqvae-2
PyTorch implementation of VQ-VAE-2 from "Generating Diverse High-Fidelity Images with VQ-VAE-2"
Stars: ✭ 65 (-12.16%)
Mutual labels:  generative-model, vae
DiffuseVAE
A combination of VAE's and Diffusion Models for efficient, controllable and high-fidelity generation from low-dimensional latents
Stars: ✭ 81 (+9.46%)
Mutual labels:  generative-model, vae
style-vae
Implementation of VAE and Style-GAN Architecture Achieving State of the Art Reconstruction
Stars: ✭ 25 (-66.22%)
Mutual labels:  generative-model, vae
Tf Vqvae
Tensorflow Implementation of the paper [Neural Discrete Representation Learning](https://arxiv.org/abs/1711.00937) (VQ-VAE).
Stars: ✭ 226 (+205.41%)
Mutual labels:  generative-model, vae
Vae protein function
Protein function prediction using a variational autoencoder
Stars: ✭ 57 (-22.97%)
Mutual labels:  generative-model, vae
InpaintNet
Code accompanying ISMIR'19 paper titled "Learning to Traverse Latent Spaces for Musical Score Inpainting"
Stars: ✭ 48 (-35.14%)
Mutual labels:  generative-model, vae
generative deep learning
Generative Deep Learning Sessions led by Anugraha Sinha (Machine Learning Tokyo)
Stars: ✭ 24 (-67.57%)
Mutual labels:  generative-model, vae
char-VAE
Inspired by the neural style algorithm in the computer vision field, we propose a high-level language model with the aim of adapting the linguistic style.
Stars: ✭ 18 (-75.68%)
Mutual labels:  generative-model, vae
Awesome Vaes
A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (+464.86%)
Mutual labels:  generative-model, vae
Tensorflow Generative Model Collections
Collection of generative models in Tensorflow
Stars: ✭ 3,785 (+5014.86%)
Mutual labels:  generative-model, vae
Vae For Image Generation
Implemented Variational Autoencoder generative model in Keras for image generation and its latent space visualization on MNIST and CIFAR10 datasets
Stars: ✭ 87 (+17.57%)
Mutual labels:  generative-model, vae
Generative Models
Collection of generative models, e.g. GAN, VAE in Pytorch and Tensorflow.
Stars: ✭ 6,701 (+8955.41%)
Mutual labels:  generative-model, vae
Attend infer repeat
A TensorFlow implementation of Attend, Infer, Repeat
Stars: ✭ 82 (+10.81%)
Mutual labels:  generative-model, vae
eccv16 attr2img
Torch implementation of ECCV'16 paper: Attribute2Image
Stars: ✭ 93 (+25.68%)
Mutual labels:  torch, generative-model
srVAE
VAE with RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56 (-24.32%)
Mutual labels:  generative-model, vae
Sentence Vae
PyTorch Re-Implementation of "Generating Sentences from a Continuous Space" by Bowman et al 2015 https://arxiv.org/abs/1511.06349
Stars: ✭ 462 (+524.32%)
Mutual labels:  generative-model, vae
Pytorch Mnist Vae
Stars: ✭ 32 (-56.76%)
Mutual labels:  generative-model, vae
Csgnet
CSGNet: Neural Shape parser for Constructive Solid Geometry
Stars: ✭ 55 (-25.68%)
Mutual labels:  generative-model
Torch Models
Stars: ✭ 65 (-12.16%)
Mutual labels:  torch
Notes
The notes for Math, Machine Learning, Deep Learning and Research papers.
Stars: ✭ 53 (-28.38%)
Mutual labels:  generative-model

DFC-VAE

This is the code for the paper

Deep Feature Consistent Variational Autoencoder

The paper trains a Variational Autoencoder (VAE) for face image generation. In addition, it provides a method to manipulate facial attributes using attribute-specific vectors.

  • Pretrained model trained on the CelebA dataset
  • Code for training on GPU
  • Code for different analyses
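
For orientation, below is a minimal, self-contained sketch of the VAE encode/sample/decode path (the reparameterization trick). The tiny linear encoder and decoder are purely illustrative, not the convolutional models used in this repository; sizes and names are placeholders.

require 'torch'
require 'nn'

-- Illustrative sizes only: 3*64*64 flattened pixels and a guessed latent size of 100.
local nInput, nz = 3 * 64 * 64, 100

-- Toy encoder: outputs the latent mean and log-variance concatenated.
local encoder = nn.Sequential():add(nn.Linear(nInput, 2 * nz))
-- Toy decoder: maps a latent code back to pixel space.
local decoder = nn.Sequential():add(nn.Linear(nz, nInput)):add(nn.Sigmoid())

local x      = torch.rand(1, nInput)              -- stand-in for a flattened face image
local stats  = encoder:forward(x)
local mean   = stats:narrow(2, 1, nz)
local logvar = stats:narrow(2, nz + 1, nz)

-- Reparameterization: z = mean + sigma * eps, with eps drawn from N(0, I).
local eps = torch.randn(1, nz)
local z   = mean + torch.cmul(torch.exp(logvar * 0.5), eps)

local recon = decoder:forward(z)                  -- reconstruction to be scored by the loss
print(recon:size())

With pixel-by-pixel training (main_cvae.lua) the reconstruction is compared to the input directly; with feature perceptual loss (main_cvae_content.lua) the comparison happens in VGG-19 feature space, as described below.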

Installation

Our implementation is based on Torch and requires several dependencies.

After installing Torch according to this tutorial, use the following commands to install the dependencies:

sudo apt-get install libprotobuf-dev protobuf-compiler
luarocks install loadcaffe
luarocks install https://raw.githubusercontent.com/szym/display/master/display-scm-0.rockspec
luarocks install nngraph
sudo apt-get install libmatio2
luarocks install matio
luarocks install manifold
sudo apt-get install libatlas3-base # for manifold

We use an NVIDIA GPU for training and testing, so you also need to install the GPU-related packages:

luarocks install cutorch
luarocks install cunn
luarocks install cudnn

Dataset preparation

cd celebA

Download img_align_celeba.zip from http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html under the link "Align&Cropped Images".

Download list_attr_celeba.txt for annotation.

unzip img_align_celeba.zip; cd ..
DATA_ROOT=celebA th data/crop_celebA.lua

We need a pretrained VGG-19 to compute the feature perceptual loss (a sketch of the computation follows the download command below).

cd data/pretrained && bash download_models.sh && cd ../..
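
As a rough illustration of what the feature perceptual loss computes (not the exact code in main_cvae_content.lua), the input and the reconstruction are both passed through the downloaded VGG-19 and the MSE between intermediate feature maps is taken. The prototxt/caffemodel file names below are assumptions about what download_models.sh fetches; check data/pretrained for the actual names.

require 'torch'
require 'nn'
local loadcaffe = require 'loadcaffe'

-- Assumed file names; adjust to whatever download_models.sh actually placed here.
local vgg = loadcaffe.load('data/pretrained/VGG_ILSVRC_19_layers_deploy.prototxt',
                           'data/pretrained/VGG_ILSVRC_19_layers.caffemodel', 'nn')
vgg:evaluate()

-- Keep the network up to one perceptual layer (the paper uses relu1_1, relu2_1 and relu3_1);
-- loadcaffe keeps the caffe layer names, so we can stop at 'relu2_1'.
local featureNet = nn.Sequential()
for i = 1, #vgg.modules do
  local layer = vgg:get(i)
  featureNet:add(layer)
  if layer.name == 'relu2_1' then break end
end

local mse   = nn.MSECriterion()
local x     = torch.rand(1, 3, 64, 64)            -- placeholder input image
local recon = torch.rand(1, 3, 64, 64)            -- placeholder VAE reconstruction
-- Real code would also match VGG's expected preprocessing (BGR order, mean subtraction).
local fx = featureNet:forward(x):clone()          -- clone: forward() reuses its output buffer
local fr = featureNet:forward(recon)
print('feature perceptual loss at relu2_1: ' .. mse:forward(fr, fx))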

Training

Open a new terminal and start the server to display images in the browser:

th -ldisplay.start 8000 0.0.0.0

The images can be viewed at http://localhost:8000 in the browser.
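
Once the server is running, you can also push an image to it directly from a Torch session, which is a quick way to check that the setup works (this assumes the default port 8000 used above):

-- 'display' is the luarocks package installed earlier (szym/display).
require 'torch'
local display = require 'display'

-- Send a random 3x64x64 image; it should appear in the browser window.
display.image(torch.rand(3, 64, 64), {title = 'display test'})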

Training with feature perceptual loss.

DATA_ROOT=celebA th main_cvae_content.lua

Training with pixel-by-pixel loss.

DATA_ROOT=celebA th main_cvae.lua

Generate face images using pretrained Encoder and Decoder

Pretrained model

mkdir checkpoints; cd checkpoints

We provide both the Encoder and the Decoder:

cvae_content_123_encoder.t7 and cvae_content_123_decoder.t7, trained with the relu1_1, relu2_1 and relu3_1 layers of VGG-19. Place the downloaded files in the checkpoints directory created above.

Reconstruction with CelebA dataset:

DATA_ROOT=celebA reconstruction=1 th generate.lua

Face images randomly generated from latent variables:

DATA_ROOT=celebA reconstruction=0 th generate.lua
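
For reference, sampling can also be done by hand from a Torch session. The snippet below assumes the decoder maps a batch of latent vectors (latent size guessed as 100) straight to images; inspect the checkpoint with print(decoder) to confirm the expected input shape.

require 'torch'
require 'nn'
-- If the .t7 files contain CUDA tensors, load the GPU packages first:
-- require 'cutorch'; require 'cunn'

local decoder = torch.load('checkpoints/cvae_content_123_decoder.t7')
decoder:evaluate()

local nz = 100                          -- assumed latent dimension
local z = torch.randn(16, nz)           -- 16 latent vectors drawn from N(0, I)
local faces = decoder:forward(z)        -- 16 generated face images
print(faces:size())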

Following are some examples:

Linear interpolation between two face images

th linear_walk_two_images.lua
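
Conceptually, linear_walk_two_images.lua encodes two faces, blends their latent codes, and decodes each blend. The sketch below assumes the encoder returns the latent mean as its first output and that images are 64x64 crops; adjust if the checkpoints are structured differently.

require 'torch'
require 'nn'

local encoder = torch.load('checkpoints/cvae_content_123_encoder.t7')
local decoder = torch.load('checkpoints/cvae_content_123_decoder.t7')
encoder:evaluate(); decoder:evaluate()

-- Placeholders for two cropped face images; use real crops from celebA in practice.
local faceA = torch.rand(1, 3, 64, 64)
local faceB = torch.rand(1, 3, 64, 64)

-- clone() because forward() reuses its output buffer between calls.
local zA = encoder:forward(faceA)[1]:clone()
local zB = encoder:forward(faceB)[1]:clone()

for _, t in ipairs({0.0, 0.25, 0.5, 0.75, 1.0}) do
  local z = zA * (1 - t) + zB * t       -- linear blend of the two latent codes
  local face = decoder:forward(z)
  print(string.format('t = %.2f, generated size: %s', t, tostring(face:size())))
end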

Vector arithmetic for visual attributes

First, preprocess the CelebA annotations to split the dataset into two parts for each attribute, according to whether each image has that attribute or not.

cd celebA
python get_binary_attr.py
# should generate file: 'celebA/attr_binary_list.txt'
cd ..
th linear_walk_attribute_vector.lua
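
For intuition, the attribute-vector idea behind linear_walk_attribute_vector.lua can be sketched as follows: average the latent codes of faces with and without a given attribute, use the difference as an attribute vector, and add a multiple of it to another face's code before decoding. The {mean, log-variance} output structure and the 64x64 placeholder crops are assumptions; the real script works from the split produced by get_binary_attr.py.

require 'torch'
require 'nn'

local encoder = torch.load('checkpoints/cvae_content_123_encoder.t7')
local decoder = torch.load('checkpoints/cvae_content_123_decoder.t7')
encoder:evaluate(); decoder:evaluate()

-- Placeholders for mini-batches of faces with / without one attribute (e.g. "Smiling").
local withAttr    = torch.rand(32, 3, 64, 64)
local withoutAttr = torch.rand(32, 3, 64, 64)

-- Average the per-image latent means of a batch (assumes the encoder outputs {mean, log_variance}).
local function latentMean(batch)
  return encoder:forward(batch)[1]:mean(1)
end

local attrVector = latentMean(withAttr) - latentMean(withoutAttr)

-- Shift a target face along the attribute direction and decode the result.
local target   = torch.rand(1, 3, 64, 64)
local strength = 1.0                    -- manipulation strength
local z        = encoder:forward(target)[1] + attrVector * strength
local edited   = decoder:forward(z)
print(edited:size())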

Here are some examples:

Better face attribute manipulation by incorporating a GAN

Credits

The code is based on neural-style, dcgan.torch, VAE-Torch and texture_nets.

Citation

If you find this code useful for your research, please cite:

@inproceedings{hou2017deep,
  title={Deep Feature Consistent Variational Autoencoder},
  author={Hou, Xianxu and Shen, Linlin and Sun, Ke and Qiu, Guoping},
  booktitle={Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on},
  pages={1133--1141},
  year={2017},
  organization={IEEE}
}
@article{hou2019improving,
  title={Improving Variational Autoencoder with Deep Feature Consistent and Generative Adversarial Training},
  author={Hou, Xianxu and Sun, Ke and Shen, Linlin and Qiu, Guoping},
  journal={Neurocomputing},
  volume={341},
  pages={183--194},
  year={2019},
  publisher={Elsevier}
}