
gplhegde / Theano Xnor Net

License: MIT
Theano implementation of XNOR-Net

Programming Languages

Python

Projects that are alternatives to or similar to Theano Xnor Net

image-defect-detection-based-on-CNN
TensorBasicModel
Stars: ✭ 17 (-26.09%)
Mutual labels:  mnist, cifar10
Deepo
Setup and customize deep learning environment in seconds.
Stars: ✭ 6,145 (+26617.39%)
Mutual labels:  lasagne, theano
deeplearning-mpo
Replace FC2, LeNet-5, VGG, ResNet, DenseNet's fully-connected layers with MPO
Stars: ✭ 26 (+13.04%)
Mutual labels:  mnist, cifar10
rnn benchmarks
RNN benchmarks of pytorch, tensorflow and theano
Stars: ✭ 85 (+269.57%)
Mutual labels:  lasagne, theano
Agentnet
Deep Reinforcement Learning library for humans
Stars: ✭ 298 (+1195.65%)
Mutual labels:  lasagne, theano
gans-2.0
Generative Adversarial Networks in TensorFlow 2.0
Stars: ✭ 76 (+230.43%)
Mutual labels:  mnist, cifar10
Csc deeplearning
3-day dive into deep learning at csc
Stars: ✭ 22 (-4.35%)
Mutual labels:  lasagne, theano
Label Embedding Network
Label Embedding Network
Stars: ✭ 69 (+200%)
Mutual labels:  mnist, cifar10
Cifar-Autoencoder
A look at some simple autoencoders for the Cifar10 dataset, including a denoising autoencoder. Python code included.
Stars: ✭ 42 (+82.61%)
Mutual labels:  mnist, cifar10
SymJAX
Documentation:
Stars: ✭ 103 (+347.83%)
Mutual labels:  lasagne, theano
Tf Vqvae
Tensorflow Implementation of the paper [Neural Discrete Representation Learning](https://arxiv.org/abs/1711.00937) (VQ-VAE).
Stars: ✭ 226 (+882.61%)
Mutual labels:  mnist, cifar10
Practical rl
A course in reinforcement learning in the wild
Stars: ✭ 4,741 (+20513.04%)
Mutual labels:  lasagne, theano
Nnpulearning
Non-negative Positive-Unlabeled (nnPU) and unbiased Positive-Unlabeled (uPU) learning reproductive code on MNIST and CIFAR10
Stars: ✭ 181 (+686.96%)
Mutual labels:  mnist, cifar10
2D-and-3D-Deep-Autoencoder
Convolutional AutoEncoder application on MRI images
Stars: ✭ 57 (+147.83%)
Mutual labels:  lasagne, theano
Generative adversarial networks 101
Keras implementations of Generative Adversarial Networks. GANs, DCGAN, CGAN, CCGAN, WGAN and LSGAN models with MNIST and CIFAR-10 datasets.
Stars: ✭ 138 (+500%)
Mutual labels:  mnist, cifar10
Improved-Wasserstein-GAN-application-on-MRI-images
Improved Wasserstein GAN (WGAN-GP) application on medical (MRI) images
Stars: ✭ 23 (+0%)
Mutual labels:  lasagne, theano
Randwire tensorflow
tensorflow implementation of Exploring Randomly Wired Neural Networks for Image Recognition
Stars: ✭ 29 (+26.09%)
Mutual labels:  mnist, cifar10
Relativistic Average Gan Keras
The implementation of Relativistic average GAN with Keras
Stars: ✭ 36 (+56.52%)
Mutual labels:  mnist, cifar10
PFL-Non-IID
The origin of the Non-IID phenomenon is the personalization of users, who generate the Non-IID data. With Non-IID (Not Independent and Identically Distributed) issues existing in the federated learning setting, a myriad of approaches has been proposed to crack this hard nut. In contrast, the personalized federated learning may take the advantage…
Stars: ✭ 58 (+152.17%)
Mutual labels:  mnist, cifar10
Capsnet
CapsNet (Capsules Net) in Geoffrey E Hinton paper "Dynamic Routing Between Capsules" - State Of the Art
Stars: ✭ 423 (+1739.13%)
Mutual labels:  mnist, lasagne

Theano Implementation of XNOR-Net


This is a Python-based implementation of XNOR-Net (this paper) using Theano. New derived layer classes for Lasagne are implemented to support the XNOR-Net convolution and fully-connected layers. The implementation is used to train and test convnets on the MNIST and CIFAR-10 classification tasks. This project is tested with Python 2.7.
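At the core of these layers, a real-valued weight matrix W is approximated as alpha * sign(W), where alpha is the per-filter (or per-unit) mean of |W|, following the XNOR-Net paper. The NumPy sketch below illustrates the approximation; it is illustrative only, not the actual Lasagne layer code in this repository.

import numpy as np

def binarize_weights(W):
    # Approximate W (num_units x fan_in) by alpha * sign(W), where alpha
    # is the per-unit mean absolute weight (the XNOR-Net scaling factor).
    alpha = np.abs(W).mean(axis=1, keepdims=True)
    Wb = np.where(W >= 0, 1.0, -1.0)  # +1/-1 weights; zeros map to +1
    return Wb, alpha

W = np.random.randn(10, 64).astype('float32')
x = np.random.randn(64).astype('float32')
Wb, alpha = binarize_weights(W)
y_approx = np.dot(Wb, x) * alpha.ravel()  # binary dot products, then rescale
y_exact = np.dot(W, x)                    # full-precision reference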

Major dependencies

  • Bleeding edge version of Lasagne. Installation instructions here
  • Bleeding edge version of Pylearn2. Installation instructions here
  • theano, numpy
  • Reference datasets (downloading is explained below).

Steps to download example datasets

  • Install pylearn2 as explained in the link above. Set the data path that pylearn2 uses to store datasets, as shown below; you can choose any directory.
export PYLEARN2_DATA_PATH=/opt/lisa/data
  • Execute the commands below to download the MNIST, CIFAR-10 and SVHN datasets, respectively.
python <pylearn2 install path>/pylearn2/scripts/datasets/download_mnist.py

bash  <pylearn2 install path>/pylearn2/scripts/datasets/download_cifar10.sh

bash  <pylearn2 install path>/pylearn2/scripts/datasets/download_svhn.sh

Before running

  • Make sure theano.config.floatX is set to 'float32'. Refer to the Theano configuration guidelines.
  • You can enable GPU mode for faster training; refer to the same Theano configuration guide to enable it. Training XNOR-Nets is slower than training their non-XNOR counterparts because of the extra computations (binarizing inputs and weights, computing scaling factors, and so on). A sample configuration covering both points follows this list.
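For reference, both settings can be made persistent in ~/.theanorc. A minimal sample configuration is shown below; note that the device name depends on your Theano version ('gpu' for the old CUDA backend, 'cuda' for the newer gpuarray backend).

[global]
floatX = float32
device = gpu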

Instructions to run

Training

To train three representative networks performing classification on the MNIST, CIFAR-10 and SVHN datasets, run the commands below from this directory.

bash ./train/train_mnist.sh

bash ./train/train_cifar.sh

bash ./train/train_svhn.sh

The MNIST and CIFAR-10 networks achieve error rates of around 3.2% and 13.8%, respectively.

Testing

Testing the above representative XNOR-Networks supports two modes: fixed point and floating point. Since these networks target embedded classification tasks, it is more efficient to implement them with fixed-point arithmetic. The scripts under ./test simply simulate fixed-point mode to show the effect of rounding, as sketched below.
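Conceptually, simulating fixed point means rounding every value to the nearest multiple of 2^-f (for f fractional bits) and saturating at the limits of the word length. A NumPy sketch of such a quantizer is shown below; the bit widths are illustrative assumptions, not the values used by the ./test scripts.

import numpy as np

def to_fixed(x, int_bits=7, frac_bits=8):
    # Round to the nearest multiple of 2^-frac_bits, then saturate to the
    # range of a signed (1 + int_bits + frac_bits)-bit word.
    step = 2.0 ** -frac_bits
    lo, hi = -(2.0 ** int_bits), 2.0 ** int_bits - step
    return np.clip(np.round(x / step) * step, lo, hi)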

To test the networks that you trained using above commands, run the following commands.

python test/mnist_test.py --model <model file path> --no <no of images to test>   --mode <fixed OR float>

python test/cifar10_test.py --model <model file path> --no <no of images to test>   --mode <fixed OR float>

The model file is saved during training in the .npz format; pass it via the --model argument. The default test mode is floating point; use fixed to enable fixed-point mode. Note that the four parameters of the batch normalization layer (mean, variance, gamma, beta) are merged into two parameters (referred to as scale and offset in these scripts). This reduces computation, since these parameters are constant during inference.
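The merge follows directly from the inference-time batch normalization formula gamma * (x - mean) / sqrt(var + eps) + beta, which is linear in x once the statistics are frozen. A short sketch of the folding (eps is assumed here to be Lasagne's default epsilon):

import numpy as np

def fold_batchnorm(mean, var, gamma, beta, eps=1e-4):
    # Rewrite gamma * (x - mean) / sqrt(var + eps) + beta as scale * x + offset.
    scale = gamma / np.sqrt(var + eps)
    offset = beta - mean * scale
    return scale, offset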

If you need the trained model for any of the above networks, let me know. Also, please contribute if you manage to train XNOR-Nets for other computer vision tasks using this project!

Misc

Similar Binary Networks

  1. BinaryNet: Paper, Repo
  2. BinaryConnect: Paper
