
jpuigcerver / rnn2d

License: MIT License
CPU and GPU implementations of some 2D RNN layers

Programming Languages

  • C++
  • CUDA
  • Lua
  • CMake
  • Python
  • C

Projects that are alternatives to or similar to rnn2d

deep-improvisation
Easy-to-use deep LSTM neural network to generate song-like sounds containing improvisation.
Stars: ✭ 53 (+103.85%)
Mutual labels:  lstm, rnn
lstm-electric-load-forecast
Electric load forecast using Long-Short-Term-Memory (LSTM) recurrent neural network
Stars: ✭ 56 (+115.38%)
Mutual labels:  lstm, rnn
air writing
Online Hand Writing Recognition using BLSTM
Stars: ✭ 26 (+0%)
Mutual labels:  lstm, rnn
Base-On-Relation-Method-Extract-News-DA-RNN-Model-For-Stock-Prediction--Pytorch
A dual-stage attention model based on a relational news-extraction method, used for stock prediction
Stars: ✭ 33 (+26.92%)
Mutual labels:  lstm, rnn
ConvLSTM-PyTorch
ConvLSTM/ConvGRU (Encoder-Decoder) with PyTorch on Moving-MNIST
Stars: ✭ 202 (+676.92%)
Mutual labels:  lstm, rnn
Paper-Implementation-DSTP-RNN-For-Stock-Prediction-Based-On-DA-RNN
An experimental implementation (v1.0) of the DSTP-RNN paper, based on DA-RNN
Stars: ✭ 62 (+138.46%)
Mutual labels:  lstm, rnn
sequence-rnn-py
Sequence analysis using Recurrent Neural Networks (RNN) based on Keras
Stars: ✭ 28 (+7.69%)
Mutual labels:  lstm, rnn
tf-ran-cell
Recurrent Additive Networks for TensorFlow
Stars: ✭ 16 (-38.46%)
Mutual labels:  lstm, rnn
automatic-personality-prediction
[AAAI 2020] Modeling Personality with Attentive Networks and Contextual Embeddings
Stars: ✭ 43 (+65.38%)
Mutual labels:  lstm, rnn
Speech-Recognition
End-to-end Automatic Speech Recognition for Mandarin and English in TensorFlow
Stars: ✭ 21 (-19.23%)
Mutual labels:  lstm, rnn
ArrayLSTM
GPU/CPU (CUDA) implementation of "Recurrent Memory Array Structures": Simple RNN, LSTM, Array-LSTM, etc.
Stars: ✭ 21 (-19.23%)
Mutual labels:  lstm, rnn
medical-diagnosis-cnn-rnn-rcnn
Disease diagnosis from patient descriptions, implemented separately with RNN, CNN, and RCNN
Stars: ✭ 39 (+50%)
Mutual labels:  lstm, rnn
EBIM-NLI
Enhanced BiLSTM Inference Model for Natural Language Inference
Stars: ✭ 24 (-7.69%)
Mutual labels:  lstm, rnn
novel writer
Train an LSTM to write a novel (Hong Lou Meng here) in PyTorch.
Stars: ✭ 14 (-46.15%)
Mutual labels:  lstm, rnn
5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
RNN-LSTM that learns passwords from a starting list
Stars: ✭ 35 (+34.62%)
Mutual labels:  lstm, rnn
SpeakerDiarization RNN CNN LSTM
Speaker diarization is the problem of separating speakers in an audio recording: there can be any number of speakers, and the final result should state when each speaker starts and ends. In this project, we analyze an audio file with 2 channels and 2 speakers (on separate channels).
Stars: ✭ 56 (+115.38%)
Mutual labels:  lstm, rnn
dltf
Hands-on in-person workshop for Deep Learning with TensorFlow
Stars: ✭ 14 (-46.15%)
Mutual labels:  lstm, rnn
DrowsyDriverDetection
A project applying computer vision and deep learning to detect driver drowsiness and sound an alarm when the driver is drowsy.
Stars: ✭ 82 (+215.38%)
Mutual labels:  lstm, rnn
theano-recurrence
Recurrent Neural Networks (RNN, GRU, LSTM) and their Bidirectional versions (BiRNN, BiGRU, BiLSTM) for word & character level language modelling in Theano
Stars: ✭ 40 (+53.85%)
Mutual labels:  lstm, rnn
myDL
Deep Learning
Stars: ✭ 18 (-30.77%)
Mutual labels:  lstm, rnn

rnn2d

The purpose of this library is to provide an open source implementation of the most common 2D Recurrent Neural Network (RNN) layers, for both CPUs and GPUs.

2D RNNs are widely used in applications that manipulate 2D objects, such as images. For instance, 2D-LSTMs have become the state of the art in Handwritten Text Recognition, yet it is very hard to find a well optimized and parallelized open source CPU implementation, and even harder to find a GPU implementation.
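To make the idea concrete, here is a minimal, self-contained C++ sketch of a 2D recurrence. It uses a plain tanh cell rather than this library's actual LSTM layer, and the function name and weights are illustrative only: the point is that the hidden state at each pixel depends on the input there plus the hidden states of its top and left neighbours, so context propagates across the whole image.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of a 2D recurrence (vanilla tanh cell, scalar
// weights; NOT this library's API). The image x is an H x W grid
// stored row-major; h[y][x] depends on x[y][x], h[y-1][x] and
// h[y][x-1], so each state summarizes the whole top-left region.
std::vector<float> rnn2d_forward(const std::vector<float>& x,
                                 int H, int W,
                                 float wx, float wtop, float wleft) {
  std::vector<float> h(H * W, 0.0f);
  for (int y = 0; y < H; ++y) {
    for (int c = 0; c < W; ++c) {
      // Out-of-bounds neighbours (first row/column) contribute zero.
      const float top  = (y > 0) ? h[(y - 1) * W + c] : 0.0f;
      const float left = (c > 0) ? h[y * W + (c - 1)] : 0.0f;
      h[y * W + c] =
          std::tanh(wx * x[y * W + c] + wtop * top + wleft * left);
    }
  }
  return h;
}
```

Note that within one diagonal of the grid every cell is independent of the others, which is what makes these layers parallelizable on both CPUs and GPUs.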

I am also including bindings for Torch, since it is the Deep Learning framework that I am currently using.

Principles

  1. Open source: MIT License.
  2. CPU and GPU: BLAS and CUDA.
  3. Efficiency: both memory and speed, controlling the tradeoff if possible.
  4. Portability: you should be able to easily use the library from your favorite Deep Learning framework (e.g. TensorFlow, Theano, Torch).

Available layers

Requirements

  • GNU C++11 compiler (once the library is compiled, you can use it from C, C++03, etc)
  • CMake >= 3.0
  • Google Logging (Glog)
  • BLAS implementation (ATLAS, OpenBLAS, Intel MKL, etc)
  • If you want the GPU implementation:
    • CUDA toolkit
    • cuBLAS 2 (included with CUDA toolkit >= 6.0)
    • Thrust (included with CUDA toolkit >= 6.0)

It's also recommended (but not required) to have the following packages:

  • OpenMP, for faster CPU implementations.
  • Google Perftools, for faster memory allocation in the CPU.
  • Google Test and Google Mock, for testing.
  • Google Benchmark, for benchmarking.

Install

If you are going to use this library from Torch, I recommend installing it using the provided rock:

$ luarocks install https://raw.githubusercontent.com/jpuigcerver/rnn2d/master/torch/rnn2d-scm-1.rockspec

If you want to do a more customized install, clone the repository and cd into it. Then use CMake to compile and install the library, as with any CMake project.

$ mkdir build && cd build
$ cmake .. -DCMAKE_BUILD_TYPE=Release -DBLAS_VENDORS="ATLAS;GENERIC" -DWITH_CUDA=ON -DWITH_TORCH=ON
$ make -j8
$ make install

BLAS_VENDORS is a semicolon-separated list of BLAS implementations to search for, in order of preference. In the example above, CMake will first try to use the ATLAS implementation (recommended) if available and, otherwise, fall back to the generic BLAS implementation. Note that the list must be quoted, since an unquoted semicolon would terminate the shell command.
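For instance, to prefer Intel MKL and fall back to OpenBLAS and then the generic implementation, the invocation could look like the following. The exact vendor tokens (MKL, OPENBLAS here) are an assumption; check the cmake/Find*.cmake files for the names actually supported.

```shell
# Hypothetical vendor tokens -- verify against cmake/Find*.cmake.
$ cmake .. -DCMAKE_BUILD_TYPE=Release -DBLAS_VENDORS="MKL;OPENBLAS;GENERIC"
```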

WITH_CUDA indicates that the CUDA implementation of the layers should be compiled and installed. By default this is ON. Of course, if CMake does not find the CUDA toolkit, it will ignore this flag. You can use the variable CUDA_TOOLKIT_ROOT_DIR to help CMake find your CUDA installation.

WITH_TORCH indicates that the Torch bindings for the layers should also be compiled and installed. By default this is ON. Again, if CMake does not find a Torch installation in your PATH, it will ignore this flag. You can use the variable TORCH_ROOT to help CMake find the Torch installation.
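If CMake picks up the wrong CUDA toolkit or misses your Torch installation, both hint variables can be passed on the command line. The paths below are placeholders for your own installation, not defaults of the project:

```shell
# Placeholder paths -- substitute your actual CUDA and Torch locations.
$ cmake .. -DWITH_CUDA=ON -DWITH_TORCH=ON \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
    -DTORCH_ROOT=$HOME/torch/install
```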

There are other variables that CMake supports to help it find other required or recommended packages. If CMake can't find a dependency, take a look at the cmake/Find*.cmake files.
