
DAN: Distributional Adversarial Networks

TensorFlow demo code for the paper Distributional Adversarial Networks by Chengtao Li*, David Alvarez-Melis*, Keyulu Xu, Stefanie Jegelka, and Suvrit Sra.

Summary

The main difference from the original GAN framework is that the discriminator operates on samples (of n > 1 examples) rather than on single points when discriminating between the real and generated distributions. In the paper we propose two such methods:

  • A single-sample classifier $M_S$, which classifies entire samples as fake or real (a sample-based analogue of the original GAN classifier)
  • A two-sample discriminator $M_{2S}$, which must decide whether two samples are drawn from the same distribution (reminiscent of two-sample tests in the statistics literature)

Both methods rely on a first-stage encoder (the Deep Mean Encoder), which embeds individual examples and aggregates the embeddings to obtain a fixed-size representation of the sample. These vectors are then fed to the two types of classifiers, as sketched below.
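
To make the idea concrete, here is a minimal numpy sketch (an illustration only, not the repository's actual architecture; all shapes and weights below are made up): a deep mean encoder embeds each example and averages the embeddings, $M_S$ then scores one aggregated sample, while $M_{2S}$ scores a pair of samples.

import numpy as np

rng = np.random.RandomState(0)

# Toy parameter shapes; the networks in the paper are deeper.
W_embed = rng.randn(2, 16) * 0.1   # per-example embedding (2-D inputs here)
w_s = rng.randn(16) * 0.1          # M_S head: real vs. fake, one sample
w_2s = rng.randn(32) * 0.1         # M_2S head: same vs. different, two samples

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def deep_mean_encoder(X):
    # Embed each example, then mean-aggregate into a fixed-size vector.
    return np.tanh(X @ W_embed).mean(axis=0)

def m_s(X):
    # Single-sample classifier: is the whole sample real or generated?
    return sigmoid(deep_mean_encoder(X) @ w_s)

def m_2s(X1, X2):
    # Two-sample discriminator: were X1 and X2 drawn from the same distribution?
    v = np.concatenate([deep_mean_encoder(X1), deep_mean_encoder(X2)])
    return sigmoid(v @ w_2s)

# Example: score a sample of 64 points, and a pair of samples.
X_real = rng.randn(64, 2)
X_fake = rng.randn(64, 2) + 3.0
print(m_s(X_real), m_2s(X_real, X_fake))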

Figure: schematic representation of the two methods.

Prerequisites

  • Python 2.7
  • tensorflow >= 1.0
  • numpy
  • scipy
  • matplotlib

Toy Experiments

A self-contained implementation of the two DAN models applied to a simple 2D mixture of Gaussians can be found in the notebook in the toy folder. Some of the visualization tools were borrowed from here.
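
For intuition about the toy data, here is a minimal sampler for a 2D mixture of Gaussians with modes arranged on a circle (a common setup for this experiment; the notebook's exact configuration may differ):

import numpy as np

def sample_mog(n, num_modes=8, radius=2.0, std=0.02):
    # Pick a random mode for each point, place modes evenly on a circle,
    # then add isotropic Gaussian noise around the chosen centers.
    angles = 2 * np.pi * np.random.randint(num_modes, size=n) / num_modes
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return centers + std * np.random.randn(n, 2)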

Visualization

Figure: samples from Vanilla GAN, DAN-S, DAN-2S, and the ground truth.

MNIST Digit Generation

This part of the code reproduces the experimental results on MNIST digit generation. It lives in the mnist folder and builds on the DCGAN Implementation.

Training

To train the adversarial network, run

python main_mnist.py --model_mode [MODEL_MODE] --is_train True

Here MODEL_MODE is one of gan (vanilla GAN), dan_s (DAN-S), or dan_2s (DAN-2S).
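
For example, to train the DAN-S variant:

python main_mnist.py --model_mode dan_s --is_train True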

Evaluation

To evaluate how well the model recovers the mode frequencies, one needs an accurate classifier on the MNIST dataset to serve as an approximate label indicator. The classifier code is in mnist_classifier.py and is adapted from Tensorflow-Examples. To train the classifier, run

python mnist_classifier.py

The classifier reaches an accuracy of ~97.6% on the test set after 10 epochs and is stored in the mnist_cnn folder for later evaluation. To use the classifier to estimate the label frequencies of the generated digits, run

python main_mnist.py --model_mode [MODEL_MODE] --is_train False

The results are saved to the file specified by savepath. A random run gives the following results for the different values of model_mode.

                              Vanilla GAN   DAN-S   DAN-2S
Entropy (higher is better)    1.623         2.295   2.288
TV Dist (lower is better)     0.461         0.047   0.061
L2 Dist (lower is better)     0.183         0.001   0.003
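
For reference, a minimal numpy sketch of how such metrics can be computed from the classifier's predicted labels. This is one plausible reading of the metrics, not the repository's exact evaluation code: the target distribution over the ten digits is assumed uniform, entropy is taken in nats (its maximum, ln 10 ≈ 2.303, is consistent with the table), and whether L2 is squared is an assumption.

import numpy as np

def mode_metrics(pred_labels, num_classes=10):
    # Empirical frequency of each predicted digit among generated images.
    freq = np.bincount(pred_labels, minlength=num_classes) / float(len(pred_labels))
    target = np.full(num_classes, 1.0 / num_classes)  # assumed uniform ground truth
    nz = freq > 0
    entropy = -np.sum(freq[nz] * np.log(freq[nz]))  # nats; max is ln(10) ~= 2.303
    tv = 0.5 * np.abs(freq - target).sum()          # total variation distance
    l2 = np.sum((freq - target) ** 2)               # squared L2 is an assumption
    return entropy, tv, l2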

Visualization

The following visualization shows how randomly generated digits evolve over 100 epochs for the different models. While the vanilla GAN mostly concentrates on ''easy-to-generate'' modes such as 1, models within the DAN framework generate digits with better coverage of the different modes.

Figure: generated digits over training for Vanilla GAN, DAN-S, and DAN-2S.

Domain Adaptation

This part of the code reproduces the experimental results on domain adaptation from MNIST to MNIST-M. It lives in the dann folder and builds on the DANN Implementation.
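
At the heart of DANN-style training is a gradient-reversal layer: identity in the forward pass, negated (scaled) gradients in the backward pass. A minimal TensorFlow sketch using the standard stop-gradient identity trick (an illustration; the dann folder may implement this differently):

import tensorflow as tf

def flip_gradient(x, scale=1.0):
    # Forward pass: (1 + scale) * x - scale * x == x, i.e. identity.
    # Backward pass: the stop_gradient term contributes no gradient,
    # so the gradient of the output w.r.t. x is -scale.
    return tf.stop_gradient((1.0 + scale) * x) - scale * x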

Build Dataset

Run the following commands to download and create the MNIST-M dataset.

curl -O http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/BSR_bsds500.tgz
python create_mnistm.py

(instructions from here)
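
For intuition, MNIST-M blends each grayscale digit with a random color patch cropped from BSDS500, via a per-pixel absolute difference. The sketch below follows the construction described in the DANN paper; create_mnistm.py is the authoritative script.

import numpy as np

def blend_digit(digit, patch):
    # digit: (28, 28) grayscale array in [0, 255];
    # patch: (28, 28, 3) color crop from a BSDS500 image.
    # MNIST-M takes the absolute difference per channel.
    out = np.abs(patch.astype(np.int32) - digit[:, :, None].astype(np.int32))
    return out.astype(np.uint8)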

Training

To train the adversarial network, run

python mnist_dann.py --model_mode [MODEL_MODE]

Here MODEL_MODE is one of gan (vanilla GAN), dan_s (DAN-S), or dan_2s (DAN-2S). A random run with the different modes gives the following prediction accuracy on MNIST-M for a classifier trained on MNIST.

           Vanilla GAN   DAN-S   DAN-2S
Accuracy   77.0%         78.8%   80.4%

Citation

If you use this code for your research, please cite our paper:

@article{li2017distributional,
  title={Distributional Adversarial Networks},
  author={Li, Chengtao and Alvarez-Melis, David and Xu, Keyulu and Jegelka, Stefanie and Sra, Suvrit},
  journal={arXiv preprint arXiv:1706.09549},
  year={2017}
}

Contact

Please email [email protected] with any questions, comments, or suggestions.
