
chickenbestlover / ELM-pytorch

Licence: other
Extreme Learning Machine implemented in Pytorch

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to ELM-pytorch

Pytorch
PyTorch tutorials A to Z
Stars: ✭ 87 (+27.94%)
Mutual labels:  gpu, mnist
Tsne Cuda
GPU Accelerated t-SNE for CUDA with Python bindings
Stars: ✭ 1,120 (+1547.06%)
Mutual labels:  gpu, mnist
tensorflow-mnist-convnets
Neural nets for MNIST classification, simple single layer NN, 5 layer FC NN and convolutional neural networks with different architectures
Stars: ✭ 22 (-67.65%)
Mutual labels:  mnist
allgebra
Base container for developing C++ and Fortran HPC applications
Stars: ✭ 14 (-79.41%)
Mutual labels:  gpu
VAE-Gumbel-Softmax
An implementation of a Variational-Autoencoder using the Gumbel-Softmax reparametrization trick in TensorFlow (tested on r1.5 CPU and GPU) in ICLR 2017.
Stars: ✭ 66 (-2.94%)
Mutual labels:  mnist
Open-Set-Recognition
Open Set Recognition
Stars: ✭ 49 (-27.94%)
Mutual labels:  mnist
haskell-vae
Learning about Haskell with Variational Autoencoders
Stars: ✭ 18 (-73.53%)
Mutual labels:  mnist
deeplearning-mpo
Replace FC2, LeNet-5, VGG, Resnet, Densenet's full-connected layers with MPO
Stars: ✭ 26 (-61.76%)
Mutual labels:  mnist
RenderScriptOps
🚀 TensorFlow running with RenderScript on Android GPU
Stars: ✭ 15 (-77.94%)
Mutual labels:  gpu
MNIST-multitask
6️⃣6️⃣6️⃣ Reproduce ICLR '18 under-reviewed paper "MULTI-TASK LEARNING ON MNIST IMAGE DATASETS"
Stars: ✭ 34 (-50%)
Mutual labels:  mnist
coreos-gpu-installer
Scripts to build and use a container to install GPU drivers on CoreOS Container Linux
Stars: ✭ 21 (-69.12%)
Mutual labels:  gpu
mnist-challenge
My solution to TUM's Machine Learning MNIST challenge 2016-2017 [winner]
Stars: ✭ 68 (+0%)
Mutual labels:  mnist
chainer-ADDA
Adversarial Discriminative Domain Adaptation in Chainer
Stars: ✭ 24 (-64.71%)
Mutual labels:  mnist
fixmatch-pytorch
90%+ with 40 labels. please see the readme for details.
Stars: ✭ 27 (-60.29%)
Mutual labels:  gpu
AdaBound-tensorflow
An optimizer that trains as fast as Adam and as good as SGD in Tensorflow
Stars: ✭ 44 (-35.29%)
Mutual labels:  mnist
FGPU
No description or website provided.
Stars: ✭ 30 (-55.88%)
Mutual labels:  gpu
mnist-flask
A Flask web app for handwritten digit recognition using machine learning
Stars: ✭ 34 (-50%)
Mutual labels:  mnist
rust-simple-nn
Simple neural network implementation in Rust
Stars: ✭ 24 (-64.71%)
Mutual labels:  mnist
Python-TensorFlow-WebApp
Emerging Technologies Project - 4th Year 2017
Stars: ✭ 16 (-76.47%)
Mutual labels:  mnist
BifurcationKit.jl
A Julia package to perform Bifurcation Analysis
Stars: ✭ 185 (+172.06%)
Mutual labels:  gpu

ELM-pytorch

Extreme Learning Machine (ELM) implemented in PyTorch.

It is an MNIST tutorial covering the basic ELM algorithm, Online Sequential ELM (OS-ELM), and Convolutional ELM.

You can run the code in CPU or GPU mode.

Requirements

  • Python 3.5+
  • PyTorch 0.3.1+

Extreme Learning Machine

Usage:

cd mnist

GPU mode: python main_ELM.py

CPU mode: python main_ELM.py --no-cuda

The training completed in 2.0 sec and the accuracy reached 97.77%. (GeForce GTX 1080 Ti 11GB, #hidden neurons=7000)

In CPU mode, the training completed in 26.92 sec and the accuracy was the same. (Intel Core i7-6700K CPU @ 4.00GHz × 8, 64GB RAM, #hidden neurons=7000)

If you do not have enough memory for the training process, reduce the number of hidden neurons and try again.
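The basic ELM step the script performs — a fixed random hidden layer plus closed-form output weights via the Moore-Penrose pseudoinverse — can be sketched as below. This is a minimal sketch using the modern torch.linalg API rather than the PyTorch 0.3-era API the repo targets; the class and variable names are illustrative, not taken from the repo.

```python
import torch

class ELM:
    def __init__(self, n_in, n_hidden, n_out, device="cpu"):
        # Input weights and biases are random and never trained.
        self.w = torch.randn(n_in, n_hidden, device=device)
        self.b = torch.randn(n_hidden, device=device)
        self.beta = torch.zeros(n_hidden, n_out, device=device)

    def _hidden(self, x):
        # Hidden-layer activation matrix H.
        return torch.sigmoid(x @ self.w + self.b)

    def fit(self, x, t):
        # Only the output weights are learned: beta = pinv(H) @ T.
        h = self._hidden(x)
        self.beta = torch.linalg.pinv(h) @ t

    def predict(self, x):
        return self._hidden(x) @ self.beta

# Toy usage with MNIST-shaped inputs (784 features, 10 classes).
x = torch.randn(100, 784)
t = torch.eye(10)[torch.randint(0, 10, (100,))]  # one-hot targets
elm = ELM(784, 256, 10)
elm.fit(x, t)
pred = elm.predict(x).argmax(dim=1)
```

Because there is no gradient descent, "training" is a single least-squares solve, which is why the timings above are seconds rather than minutes; memory usage grows with the size of H, hence the advice to shrink the hidden layer.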

Online Sequential Extreme Learning Machine

Usage:

cd mnist

GPU mode: python main_OSELM.py

CPU mode: python main_OSELM.py --no-cuda

The training completed in 10.0 sec and the accuracy reached 97.77%. (GeForce GTX 1080 Ti 11GB, #hidden neurons=7000, batch_size=1000)

In CPU mode, the training completed in 100.92 sec and the accuracy was the same. (Intel Core i7-6700K CPU @ 4.00GHz × 8, 64GB RAM, #hidden neurons=7000, batch_size=1000)

If you do not have enough memory for the training process, reduce the number of hidden neurons and try again.
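OS-ELM processes the data batch by batch: an initial batch solves the output weights in closed form, and each later batch refines them with a recursive least-squares update instead of retraining from scratch. A minimal sketch, again assuming modern PyTorch (torch.linalg) with illustrative names not taken from the repo:

```python
import torch

class OSELM:
    def __init__(self, n_in, n_hidden, n_out):
        self.w = torch.randn(n_in, n_hidden)   # fixed random input weights
        self.b = torch.randn(n_hidden)
        self.beta = torch.zeros(n_hidden, n_out)
        self.P = None  # inverse correlation matrix, set by the first batch

    def _hidden(self, x):
        return torch.sigmoid(x @ self.w + self.b)

    def partial_fit(self, x, t):
        h = self._hidden(x)
        if self.P is None:
            # Initialization batch: regularized closed-form solve.
            self.P = torch.linalg.inv(h.T @ h + 1e-3 * torch.eye(h.shape[1]))
            self.beta = self.P @ h.T @ t
        else:
            # Sequential batch: recursive least-squares update of beta.
            k = torch.linalg.inv(torch.eye(h.shape[0]) + h @ self.P @ h.T)
            self.P = self.P - self.P @ h.T @ k @ h @ self.P
            self.beta = self.beta + self.P @ h.T @ (t - h @ self.beta)

    def predict(self, x):
        return self._hidden(x) @ self.beta
```

Feeding the 60,000 MNIST training images as 60 batches of batch_size=1000, as in the timings above, gives the same final solution as one large solve but with a much smaller peak memory footprint per step.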

Convolutional Extreme Learning Machine

Usage:

cd mnist

GPU mode: python main_CNNELM.py

CPU mode: python main_CNNELM.py --no-cuda

The training completed in 7.2 sec and the accuracy reached 98.01%. (GeForce GTX 1080 Ti 11GB; the run used almost all available memory.)

Network configuration

ConvLayer1: kernel_size=5, #channels=10, padding=1
PoolLayer1: kernel_size=2
ReLULayer1
ConvLayer2: kernel_size=4, #channels=80, padding=1
PoolLayer2: kernel_size=2
ReLULayer2
FCLayer

In CPU mode, the training completed in 177.92 sec and the accuracy was 98.80%. (Intel Core i7-6700K CPU @ 4.00GHz × 8, 64GB RAM; the run used almost all RAM.)

Network configuration

ConvLayer1: kernel_size=5, #channels=10, padding=1
PoolLayer1: kernel_size=2
ReLULayer1
ConvLayer2: kernel_size=4, #channels=450, padding=1
PoolLayer2: kernel_size=2
ReLULayer2
FCLayer

If you do not have enough memory for the training process, reduce the number of hidden neurons and try again.
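The GPU configuration above can be sketched as a convolutional ELM in which the convolutional layers keep their random initialization and only the final FC layer is solved in closed form. This is a hedged sketch assuming modern PyTorch; the class name CNNELM and the fit/forward split are illustrative, and the layer sizes follow the GPU configuration listed above.

```python
import torch
import torch.nn as nn

class CNNELM(nn.Module):
    def __init__(self, n_out=10):
        super().__init__()
        # Conv stack per the configuration above; weights stay random.
        self.features = nn.Sequential(
            nn.Conv2d(1, 10, kernel_size=5, padding=1), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(10, 80, kernel_size=4, padding=1), nn.MaxPool2d(2), nn.ReLU(),
        )
        for p in self.features.parameters():
            p.requires_grad_(False)  # no backprop through the conv layers
        self.beta = None  # FC weights, solved in closed form by fit()

    def _hidden(self, x):
        return self.features(x).flatten(1)  # random conv features as H

    def fit(self, x, t):
        # Only the FC layer is learned: beta = pinv(H) @ T.
        h = self._hidden(x)
        self.beta = torch.linalg.pinv(h) @ t

    def forward(self, x):
        return self._hidden(x) @ self.beta

# Toy usage with an MNIST-shaped batch.
x = torch.randn(64, 1, 28, 28)
t = torch.eye(10)[torch.randint(0, 10, (64,))]
model = CNNELM()
model.fit(x, t)
```

For a 28×28 input this stack yields 80 × 6 × 6 = 2880 features per image, so the pseudoinverse solve is over an N × 2880 matrix; that matrix is what exhausts memory, which is why reducing the channel count (or the hidden size) is the suggested workaround.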
