
aaron-xichen / Pytorch Playground

License: MIT
Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Pytorch Playground

tutorials-kr
🇰🇷 A repository for Korean translations of the official PyTorch tutorials. (Translate PyTorch tutorials into Korean 🇰🇷)
Stars: ✭ 271 (-87.69%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Pytorch Image Classification
Tutorials on how to implement a few key architectures for image classification using PyTorch and TorchVision.
Stars: ✭ 272 (-87.64%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+55.29%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Pytorch Sentiment Analysis
Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (+45.8%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Keras Pytorch Avp Transfer Learning
We pit Keras and PyTorch against each other, showing their strengths and weaknesses in action. We present a real problem, a matter of life-and-death: distinguishing Aliens from Predators!
Stars: ✭ 42 (-98.09%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Daily Deeplearning
🔥 Tutorials on machine learning, deep learning, Python, algorithm interviews, NLP, and the "Jianzhi Offer" coding-interview questions (machine learning / deep learning / Python / algorithm interview / NLP tutorial)
Stars: ✭ 381 (-82.69%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Deep-Learning-with-PyTorch-A-60-Minute-Blitz-cn
PyTorch 1.0 deep learning: a 60-minute introduction and hands-on practice (Chinese translation and study notes for Deep Learning with PyTorch: A 60 Minute Blitz)
Stars: ✭ 127 (-94.23%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Pytorch Pos Tagging
A tutorial on how to implement models for part-of-speech tagging using PyTorch and TorchText.
Stars: ✭ 96 (-95.64%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Deep Tutorials For Pytorch
In-depth tutorials for implementing deep learning models on your own with PyTorch.
Stars: ✭ 971 (-55.88%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Machine Learning Collection
A resource for learning about ML, DL, PyTorch and TensorFlow. Feedback always appreciated :)
Stars: ✭ 883 (-59.88%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Deep Learning Boot Camp
A community run, 5-day PyTorch Deep Learning Bootcamp
Stars: ✭ 1,270 (-42.3%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Pytorch Rl
Tutorials for reinforcement learning in PyTorch and Gym by implementing a few of the popular algorithms. [IN PROGRESS]
Stars: ✭ 121 (-94.5%)
Mutual labels:  pytorch-tutorial, pytorch-tutorials
Inq Pytorch
A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights"
Stars: ✭ 147 (-93.32%)
Mutual labels:  quantization
Frontalization
Pytorch deep learning face frontalization model
Stars: ✭ 160 (-92.73%)
Mutual labels:  pytorch-tutorial
Applied Dl 2018
Tel-Aviv Deep Learning Boot-camp: 12 Applied Deep Learning Labs
Stars: ✭ 146 (-93.37%)
Mutual labels:  pytorch-tutorial
Cnn Quantization
Quantization of Convolutional Neural networks.
Stars: ✭ 141 (-93.59%)
Mutual labels:  quantization
Terngrad
Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow)
Stars: ✭ 168 (-92.37%)
Mutual labels:  quantization
A Pytorch Tutorial To Super Resolution
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network | a PyTorch Tutorial to Super-Resolution
Stars: ✭ 157 (-92.87%)
Mutual labels:  pytorch-tutorial
Ctranslate2
Fast inference engine for OpenNMT models
Stars: ✭ 140 (-93.64%)
Mutual labels:  quantization
Awesome Edge Machine Learning
A curated list of awesome edge machine learning resources, including research papers, inference engines, challenges, books, meetups and others.
Stars: ✭ 139 (-93.68%)
Mutual labels:  quantization

This is a playground for PyTorch beginners, containing predefined models on popular datasets. Currently we support:

  • mnist, svhn
  • cifar10, cifar100
  • stl10
  • alexnet
  • vgg16, vgg16_bn, vgg19, vgg19_bn
  • resnet18, resnet34, resnet50, resnet101, resnet152
  • squeezenet_v0, squeezenet_v1
  • inception_v3

Here is an example for the MNIST dataset. It downloads the dataset and the pre-trained model automatically.

import torch
from torch.autograd import Variable
from utee import selector

# fetch the pre-trained model and the matching dataset loader
model_raw, ds_fetcher, is_imagenet = selector.select('mnist')
ds_val = ds_fetcher(batch_size=10, train=False, val=True)

# run the model over the validation split
for idx, (data, target) in enumerate(ds_val):
    data = Variable(torch.FloatTensor(data)).cuda()  # move the batch to the GPU
    output = model_raw(data)
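
The loop above only runs the forward pass. A minimal extension to report top-1 accuracy on the validation split might look as follows (a sketch, assuming the selector places the model on the GPU as in the snippet above):

import torch
from utee import selector

model_raw, ds_fetcher, is_imagenet = selector.select('mnist')
ds_val = ds_fetcher(batch_size=10, train=False, val=True)

correct, total = 0, 0
model_raw.eval()
with torch.no_grad():
    for idx, (data, target) in enumerate(ds_val):
        data = torch.as_tensor(data, dtype=torch.float).cuda()
        target = torch.as_tensor(target).cuda()
        output = model_raw(data)              # raw class scores per sample
        pred = output.argmax(dim=1)           # predicted class index
        correct += (pred == target).sum().item()
        total += target.size(0)
print('top-1 accuracy: %.4f' % (correct / total))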

Also, if you want to train the MLP model on MNIST, simply run python mnist/train.py.

Install

python3 setup.py develop --user

ImageNet dataset

We provide a precomputed ImageNet validation dataset with images of size 224x224x3. We first resize the shorter side of each image to 256, then crop a 224x224 patch from the center. The cropped images are then encoded as JPEG strings and dumped to a pickle file.
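
The preprocessing described above can be sketched with PIL. This is an illustration of the stated steps, not the exact script used to build the released pickle; file names and the pickle layout are placeholders:

import io
import pickle
from PIL import Image

def encode_val_image(path):
    # resize the shorter side to 256, keeping the aspect ratio
    img = Image.open(path).convert('RGB')
    w, h = img.size
    if w < h:
        new_w, new_h = 256, int(round(h * 256 / w))
    else:
        new_w, new_h = int(round(w * 256 / h)), 256
    img = img.resize((new_w, new_h), Image.BILINEAR)

    # crop a 224x224 patch from the center
    left, top = (new_w - 224) // 2, (new_h - 224) // 2
    img = img.crop((left, top, left + 224, top + 224))

    # encode the cropped image as a JPEG string
    buf = io.BytesIO()
    img.save(buf, format='JPEG')
    return buf.getvalue()

# dump the encoded images to a pickle file (paths are placeholders)
jpeg_strings = [encode_val_image(p) for p in ['val_0001.JPEG', 'val_0002.JPEG']]
with open('val224_compressed.pkl', 'wb') as f:
    pickle.dump(jpeg_strings, f)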

Quantization

We also provide a simple demo that quantizes these models to a specified bit-width with several methods, including the linear method, the minmax method, and non-linear methods (log and tanh).

python quantize.py --type cifar10 --quant_method linear --param_bits 8 --fwd_bits 8 --bn_bits 8 --ngpu 1
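
For intuition, here is a minimal sketch of linear quantization of a tensor to a given bit-width. The function name and the power-of-two scaling are our illustration of the idea, not the repo's quantize.py implementation:

import math
import torch

def linear_quantize(x, bits, overflow_rate=0.0):
    # pick a power-of-two scale so that at most `overflow_rate` of the
    # values fall outside the representable range
    abs_sorted = x.abs().view(-1).sort(descending=True)[0]
    idx = min(int(overflow_rate * abs_sorted.numel()), abs_sorted.numel() - 1)
    sf = math.ceil(math.log2(abs_sorted[idx].item() + 1e-12))

    # round to the nearest step, clamp to the signed range, map back to float
    delta = 2.0 ** (sf - (bits - 1))          # quantization step size
    bound = 2.0 ** (bits - 1)
    q = torch.clamp(torch.round(x / delta), -bound, bound - 1)
    return q * delta

w = torch.randn(4, 4)
print(linear_quantize(w, bits=8))             # 8-bit quantized copy of w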

Top1 Accuracy

We evaluate the performance of popular datasets and models with the linear quantization method. The bit-width of the running mean and running variance in BN is 10 bits for all results (except for 32-float). For the ImageNet models, accuracies are reported as Top-1/Top-5.

Model          32-float      12-bit        10-bit        8-bit         6-bit
MNIST          98.42         98.43         98.44         98.44         98.32
SVHN           96.03         96.03         96.04         96.02         95.46
CIFAR10        93.78         93.79         93.80         93.58         90.86
CIFAR100       74.27         74.21         74.19         73.70         66.32
STL10          77.59         77.65         77.70         77.59         73.40
AlexNet        55.70/78.42   55.66/78.41   55.54/78.39   54.17/77.29   18.19/36.25
VGG16          70.44/89.43   70.45/89.43   70.44/89.33   69.99/89.17   53.33/76.32
VGG19          71.36/89.94   71.35/89.93   71.34/89.88   70.88/89.62   56.00/78.62
ResNet18       68.63/88.31   68.62/88.33   68.49/88.25   66.80/87.20   19.14/36.49
ResNet34       72.50/90.86   72.46/90.82   72.45/90.85   71.47/90.00   32.25/55.71
ResNet50       74.98/92.17   74.94/92.12   74.91/92.09   72.54/90.44   2.43/5.36
ResNet101      76.69/93.30   76.66/93.25   76.22/92.90   65.69/79.54   1.41/1.18
ResNet152      77.55/93.59   77.51/93.62   77.40/93.54   74.95/92.46   9.29/16.75
SqueezeNetV0   56.73/79.39   56.75/79.40   56.70/79.27   53.93/77.04   14.21/29.74
SqueezeNetV1   56.52/79.13   56.52/79.15   56.24/79.03   54.56/77.33   17.10/32.46
InceptionV3    76.41/92.78   76.43/92.71   76.44/92.73   73.67/91.34   1.50/4.82

Note: ImageNet 32-float models are directly from torchvision

Selected Arguments

Here we give an overview of selected arguments of quantize.py

Flag            Default value   Description & Options
type            cifar10         dataset/model to quantize: mnist, svhn, cifar10, cifar100, stl10, alexnet, vgg16, vgg16_bn, vgg19, vgg19_bn, resnet18, resnet34, resnet50, resnet101, resnet152, squeezenet_v0, squeezenet_v1, inception_v3
quant_method    linear          quantization method: linear, minmax, log, tanh
param_bits      8               bit-width of weights and biases
fwd_bits        8               bit-width of activations
bn_bits         32              bit-width of running mean and running variance
overflow_rate   0.0             overflow rate threshold for the linear quantization method
n_samples       20              number of samples used to gather activation statistics
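
For example, to keep the 10-bit BN running statistics used for the accuracy table above while switching to the minmax method, the flags can be combined as follows (assuming the same command-line interface as the demo above):

python quantize.py --type cifar10 --quant_method minmax --param_bits 8 --fwd_bits 8 --bn_bits 10 --n_samples 20 --ngpu 1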