
happyjin / ConvGRU-pytorch

License: MIT
Convolutional GRU

Programming Languages

Python

Projects that are alternatives of or similar to ConvGRU-pytorch

ConvLSTM-PyTorch
ConvLSTM/ConvGRU (Encoder-Decoder) with PyTorch on Moving-MNIST
Stars: ✭ 202 (+85.32%)
Mutual labels:  gru, convgru
Rnn ctc
Recurrent Neural Network and Long Short Term Memory (LSTM) with Connectionist Temporal Classification, implemented in Theano. Includes a toy training example.
Stars: ✭ 220 (+101.83%)
Mutual labels:  gru
Tensorflow Lstm Sin
TensorFlow 1.3 experiment with LSTM (and GRU) RNNs for sine prediction
Stars: ✭ 52 (-52.29%)
Mutual labels:  gru
Skip Thoughts.torch
Porting of Skip-Thoughts pretrained models from Theano to PyTorch & Torch7
Stars: ✭ 146 (+33.94%)
Mutual labels:  gru
Cikm analyticup 2017
CIKM AnalytiCup 2017 is an open competition sponsored by the Shenzhen Meteorological Bureau, Alibaba Group, and CIKM 2017. Our team took third place in the first phase and fourth place in the second phase.
Stars: ✭ 66 (-39.45%)
Mutual labels:  gru
Load forecasting
Load forecasting of electric power load in the Delhi area using ARIMA, RNN, LSTM, and GRU models
Stars: ✭ 160 (+46.79%)
Mutual labels:  gru
Tensorflow Sentiment Analysis On Amazon Reviews Data
Implementing different RNN models (LSTM, GRU) and convolution models (Conv1D, Conv2D) on a subset of Amazon Reviews data with TensorFlow on Python 3. A sentiment analysis project.
Stars: ✭ 34 (-68.81%)
Mutual labels:  gru
TF-NNLM-TK
A toolkit for neural language modeling using TensorFlow, including basic models like RNNs and LSTMs as well as more advanced models.
Stars: ✭ 20 (-81.65%)
Mutual labels:  gru
Haste
Haste: a fast, simple, and open RNN library
Stars: ✭ 214 (+96.33%)
Mutual labels:  gru
Hierarchical Attention Network
Implementation of Hierarchical Attention Networks in PyTorch
Stars: ✭ 120 (+10.09%)
Mutual labels:  gru
Rnn Text Classification Tf
Tensorflow Implementation of Recurrent Neural Network (Vanilla, LSTM, GRU) for Text Classification
Stars: ✭ 114 (+4.59%)
Mutual labels:  gru
Gru Svm
[ICMLC 2018] A Neural Network Architecture Combining Gated Recurrent Unit (GRU) and Support Vector Machine (SVM) for Intrusion Detection
Stars: ✭ 76 (-30.28%)
Mutual labels:  gru
Pytorch Kaldi
pytorch-kaldi is a project for developing state-of-the-art DNN/RNN hybrid speech recognition systems. The DNN part is managed by pytorch, while feature extraction, label computation, and decoding are performed with the kaldi toolkit.
Stars: ✭ 2,097 (+1823.85%)
Mutual labels:  gru
Gdax Orderbook Ml
Application of machine learning to the Coinbase (GDAX) orderbook
Stars: ✭ 60 (-44.95%)
Mutual labels:  gru
Trafficflowprediction
Traffic Flow Prediction with Neural Networks (SAEs, LSTM, GRU).
Stars: ✭ 242 (+122.02%)
Mutual labels:  gru
Rnn Notebooks
RNN (SimpleRNN, LSTM, GRU) TensorFlow 2.0 & Keras notebooks (workshop materials)
Stars: ✭ 48 (-55.96%)
Mutual labels:  gru
Pytorch Rnn Text Classification
Word Embedding + LSTM + FC
Stars: ✭ 112 (+2.75%)
Mutual labels:  gru
Speech Recognition Neural Network
An end-to-end speech recognition neural network, implemented in Keras. This was my final project for the Artificial Intelligence Nanodegree @Udacity.
Stars: ✭ 148 (+35.78%)
Mutual labels:  gru
rnn-theano
RNN (LSTM, GRU) in Theano with mini-batch training; character-level language models in Theano
Stars: ✭ 68 (-37.61%)
Mutual labels:  gru
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+3035.78%)
Mutual labels:  gru

ConvGRU-pytorch

This repository contains the implementation of Convolutional GRU in PyTorch.

How to Use

The ConvGRU module derives from nn.Module, so it can be used like any other PyTorch module.

The ConvGRU class supports an arbitrary number of stacked hidden layers. You can specify the hidden dimension (that is, the number of channels) and the kernel size of each layer. If several layers are stacked but only a single value is provided, that value is replicated for all layers. For example, in the following snippet each of the two layers has a different hidden dimension but the same kernel size.

Example usage:

# imports needed by this snippet; ConvGRU is assumed to be defined in the repo's convGRU.py
import os
import torch
from convGRU import ConvGRU

# set CUDA device
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

# detect whether CUDA is available
use_gpu = torch.cuda.is_available()
if use_gpu:
    dtype = torch.cuda.FloatTensor  # computation on the GPU
else:
    dtype = torch.FloatTensor       # computation on the CPU

height = width = 6
channels = 256
hidden_dim = [32, 64]
kernel_size = (3, 3)  # shared by both stacked layers; per-layer sizes also work, e.g. [(3, 3), (3, 3)]
num_layers = 2        # number of stacked hidden layers
model = ConvGRU(input_size=(height, width),
                input_dim=channels,
                hidden_dim=hidden_dim,
                kernel_size=kernel_size,
                num_layers=num_layers,
                dtype=dtype,
                batch_first=True,
                bias=True,
                return_all_layers=False)

batch_size = 1
time_steps = 1
input_tensor = torch.rand(batch_size, time_steps, channels, height, width)  # (b,t,c,h,w)
layer_output_list, last_state_list = model(input_tensor)
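The forward pass returns two lists: the output sequences of the layers and their final hidden states. The following sketch of how to inspect them is not part of the original README; the shapes are assumptions based on the batch_first=True and return_all_layers=False configuration above, under which both lists should hold only the last layer's results.

# a minimal sketch (assumptions noted above), not part of the original README
output = layer_output_list[0]     # assumed shape: (b, t, hidden_dim[-1], h, w)
print(output.shape)               # expected here: torch.Size([1, 1, 64, 6, 6])

# a GRU carries a single hidden state h (no cell state c as in an LSTM);
# the exact nesting of last_state_list depends on the implementation
last_hidden = last_state_list[0]

With return_all_layers=True, both lists would instead contain one entry per stacked layer.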

Disclaimer

This is still a work in progress and far from perfect: if you find any bugs, please don't hesitate to open an issue.

License

ConvGRU-pytorch is released under the MIT License (refer to the LICENSE file for details).

Acknowledgment

This repo borrows some code from ndrplz's ConvLSTM_pytorch.
