krocki / ArrayLSTM

Licence: other
GPU/CPU (CUDA) implementation of "Recurrent Memory Array Structures": Simple RNN, LSTM, Array LSTM.

Programming Languages

C++
Cuda
C
CMake
Makefile
Batchfile

Projects that are alternatives of or similar to ArrayLSTM

DrowsyDriverDetection
This is a project implementing Computer Vision and Deep Learning concepts to detect drowsiness of a driver and sound an alarm if drowsy.
Stars: ✭ 82 (+290.48%)
Mutual labels:  lstm, rnn
5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
RNN-LSTM that learns passwords from a starting list
Stars: ✭ 35 (+66.67%)
Mutual labels:  lstm, rnn
Pytorch Sentiment Analysis
Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (+15180.95%)
Mutual labels:  lstm, rnn
Crnn Audio Classification
UrbanSound classification using Convolutional Recurrent Networks in PyTorch
Stars: ✭ 235 (+1019.05%)
Mutual labels:  lstm, rnn
Deepjazz
Deep learning driven jazz generation using Keras & Theano!
Stars: ✭ 2,766 (+13071.43%)
Mutual labels:  lstm, rnn
EBIM-NLI
Enhanced BiLSTM Inference Model for Natural Language Inference
Stars: ✭ 24 (+14.29%)
Mutual labels:  lstm, rnn
Lightnet
Efficient, transparent deep learning in hundreds of lines of code.
Stars: ✭ 243 (+1057.14%)
Mutual labels:  lstm, rnn
Sign Language Gesture Recognition
Sign Language Gesture Recognition From Video Sequences Using RNN And CNN
Stars: ✭ 214 (+919.05%)
Mutual labels:  lstm, rnn
Har Stacked Residual Bidir Lstms
Using deep stacked residual bidirectional LSTM cells (RNN) with TensorFlow, we do Human Activity Recognition (HAR). Classifying the type of movement amongst 6 categories or 18 categories on 2 different datasets.
Stars: ✭ 250 (+1090.48%)
Mutual labels:  lstm, rnn
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+16176.19%)
Mutual labels:  lstm, rnn
Natural Language Processing With Tensorflow
Natural Language Processing with TensorFlow, published by Packt
Stars: ✭ 222 (+957.14%)
Mutual labels:  lstm, rnn
lstm har
LSTM based human activity recognition using smart phone sensor dataset
Stars: ✭ 20 (-4.76%)
Mutual labels:  lstm, rnn
Rnn ctc
Recurrent Neural Network and Long Short Term Memory (LSTM) with Connectionist Temporal Classification implemented in Theano. Includes a Toy training example.
Stars: ✭ 220 (+947.62%)
Mutual labels:  lstm, rnn
tf-ran-cell
Recurrent Additive Networks for Tensorflow
Stars: ✭ 16 (-23.81%)
Mutual labels:  lstm, rnn
Kprn
Reasoning Over Knowledge Graph Paths for Recommendation
Stars: ✭ 220 (+947.62%)
Mutual labels:  lstm, rnn
Caption generator
A modular library built on top of Keras and TensorFlow to generate a caption in natural language for any input image.
Stars: ✭ 243 (+1057.14%)
Mutual labels:  lstm, rnn
Chameleon recsys
Source code of CHAMELEON - A Deep Learning Meta-Architecture for News Recommender Systems
Stars: ✭ 202 (+861.9%)
Mutual labels:  lstm, rnn
Haste
Haste: a fast, simple, and open RNN library
Stars: ✭ 214 (+919.05%)
Mutual labels:  lstm, rnn
Nlstm
Nested LSTM Cell
Stars: ✭ 246 (+1071.43%)
Mutual labels:  lstm, rnn
Automatic speech recognition
End-to-end Automatic Speech Recognition for Mandarin and English in Tensorflow
Stars: ✭ 2,751 (+13000%)
Mutual labels:  lstm, rnn

Array-LSTM

Implementation of the recurrent memory arrays from the technical report "Recurrent Memory Array Structures":

https://arxiv.org/abs/1607.03085
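
For orientation, the core idea of the report is to replace the single LSTM memory cell per hidden unit with K memory lanes, each carrying its own input, forget, and output gates, and to sum the gated lane outputs into the hidden state. The sketch below is a minimal CPU-only illustration of that forward update under this reading of the report; all names (LaneParams, gate, array_lstm_step) are hypothetical and do not correspond to the classes or kernels in this repository.

// Minimal Array-LSTM forward step (illustrative sketch, not the repo's API).
// Each hidden unit keeps K memory lanes, each with its own gates; the hidden
// state is the sum of the gated lane outputs.
#include <cmath>
#include <vector>

using Vec = std::vector<float>;

static float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// Affine projection y = W*x + U*h + b for one gate of one lane.
static Vec gate(const std::vector<Vec>& W, const Vec& x,
                const std::vector<Vec>& U, const Vec& h, const Vec& b) {
    const size_t N = b.size();
    Vec y(N, 0.0f);
    for (size_t i = 0; i < N; ++i) {
        float s = b[i];
        for (size_t j = 0; j < x.size(); ++j) s += W[i][j] * x[j];
        for (size_t j = 0; j < h.size(); ++j) s += U[i][j] * h[j];
        y[i] = s;
    }
    return y;
}

struct LaneParams {                    // parameters of one memory lane k
    std::vector<Vec> Wi, Wf, Wo, Wc;   // input-to-gate weights
    std::vector<Vec> Ui, Uf, Uo, Uc;   // hidden-to-gate weights
    Vec bi, bf, bo, bc;                // biases
};

// One time step: updates the per-lane cells c[k] in place and returns
// the new hidden state h_t = sum_k o_k * tanh(c_k).
Vec array_lstm_step(const std::vector<LaneParams>& lanes,
                    const Vec& x, const Vec& h_prev,
                    std::vector<Vec>& c /* c[k], each of size N */) {
    const size_t N = h_prev.size();
    Vec h(N, 0.0f);
    for (size_t k = 0; k < lanes.size(); ++k) {
        const LaneParams& p = lanes[k];
        Vec i = gate(p.Wi, x, p.Ui, h_prev, p.bi);
        Vec f = gate(p.Wf, x, p.Uf, h_prev, p.bf);
        Vec o = gate(p.Wo, x, p.Uo, h_prev, p.bo);
        Vec g = gate(p.Wc, x, p.Uc, h_prev, p.bc);
        for (size_t n = 0; n < N; ++n) {
            c[k][n] = sigmoid(f[n]) * c[k][n] + sigmoid(i[n]) * std::tanh(g[n]);
            h[n]   += sigmoid(o[n]) * std::tanh(c[k][n]);
        }
    }
    return h;
}

An optimized implementation would fold these per-element products into dense matrix multiplies on the GPU; the loops above only spell out the per-lane recurrence.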

It is OK to use this code for research and other non-commercial purposes; please cite the report:

@article{rocki2016recurrent,
  title={Recurrent Memory Array Structures},
  author={Rocki, Kamil},
  journal={arXiv preprint arXiv:1607.03085},
  year={2016}
}

Getting Started

To compile (with CUDA):

make PRECISE_MATH=0 cuda

or, for precise FP64 math (useful for debugging):

make PRECISE_MATH=1 cuda
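
As a rough illustration of what such a flag usually toggles (a hypothetical sketch, not this repository's actual code), a single typedef plus math-function aliases is enough to switch an entire build between FP32 and FP64:

// Hypothetical PRECISE_MATH-style precision switch; the real code may
// wire the flag differently.
#if PRECISE_MATH
// PRECISE_MATH=1: full FP64 accuracy, slower but easier to debug
typedef double dtype;
#define EXP_FN(x)  exp(x)
#define TANH_FN(x) tanh(x)
#else
// PRECISE_MATH=0: FP32, faster on the GPU, may use fast-math approximations
typedef float dtype;
#define EXP_FN(x)  expf(x)
#define TANH_FN(x) tanhf(x)
#endif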

The CUDA version is the most recent; the CPU versions are outdated, and the OpenCL version is not really implemented.

(Waiting for the final CUDA 8 release and heterogeneous lambdas.)

If you still wish to compile the CPU version, run:

make cpu

Usage

Run it like this:

./deeplstm N S B GPU (test_every_seconds)

Example (N = 512 hidden nodes, S = 100 BPTT steps, B = 64 batch size, GPU id = 0, run a test every 1000 s):

./deeplstm 512 100 64 0 1000

Author

Kamil M Rocki ([email protected])

License

Copyright (c) 2016, IBM Corporation. All rights reserved.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
