Shivanshu-Gupta / Pytorch-POS-Tagger

Licence: other
Part-of-Speech Tagger and custom implementations of LSTM, GRU and Vanilla RNN

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives of or similar to Pytorch-POS-Tagger

See Rnn
RNN and general weights, gradients, & activations visualization in Keras & TensorFlow
Stars: ✭ 102 (+325%)
Mutual labels:  lstm, gru, rnn
myDL
Deep Learning
Stars: ✭ 18 (-25%)
Mutual labels:  lstm, gru, rnn
Pytorch Rnn Text Classification
Word Embedding + LSTM + FC
Stars: ✭ 112 (+366.67%)
Mutual labels:  lstm, gru, rnn
Load forecasting
Load forecasting on Delhi-area electric power load using ARIMA, RNN, LSTM and GRU models
Stars: ✭ 160 (+566.67%)
Mutual labels:  lstm, gru, rnn
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+14141.67%)
Mutual labels:  lstm, gru, rnn
Rnn Notebooks
RNN(SimpleRNN, LSTM, GRU) Tensorflow2.0 & Keras Notebooks (Workshop materials)
Stars: ✭ 48 (+100%)
Mutual labels:  lstm, gru, rnn
Pytorch Kaldi
pytorch-kaldi is a project for developing state-of-the-art DNN/RNN hybrid speech recognition systems. The DNN part is managed by pytorch, while feature extraction, label computation, and decoding are performed with the kaldi toolkit.
Stars: ✭ 2,097 (+8637.5%)
Mutual labels:  lstm, gru, rnn
Easy Deep Learning With Keras
Keras tutorial for beginners (using TF backend)
Stars: ✭ 367 (+1429.17%)
Mutual labels:  lstm, gru, rnn
Rnn ctc
Recurrent Neural Network and Long Short-Term Memory (LSTM) with Connectionist Temporal Classification implemented in Theano. Includes a toy training example.
Stars: ✭ 220 (+816.67%)
Mutual labels:  lstm, gru, rnn
Haste
Haste: a fast, simple, and open RNN library
Stars: ✭ 214 (+791.67%)
Mutual labels:  lstm, gru, rnn
Eeg Dl
A Deep Learning library for EEG Tasks (Signals) Classification, based on TensorFlow.
Stars: ✭ 165 (+587.5%)
Mutual labels:  lstm, gru, rnn
theano-recurrence
Recurrent Neural Networks (RNN, GRU, LSTM) and their Bidirectional versions (BiRNN, BiGRU, BiLSTM) for word & character level language modelling in Theano
Stars: ✭ 40 (+66.67%)
Mutual labels:  lstm, gru, rnn
tf-ran-cell
Recurrent Additive Networks for Tensorflow
Stars: ✭ 16 (-33.33%)
Mutual labels:  lstm, gru, rnn
ConvLSTM-PyTorch
ConvLSTM/ConvGRU (Encoder-Decoder) with PyTorch on Moving-MNIST
Stars: ✭ 202 (+741.67%)
Mutual labels:  lstm, gru, rnn
Manhattan-LSTM
Keras and PyTorch implementations of the MaLSTM model for computing Semantic Similarity.
Stars: ✭ 28 (+16.67%)
Mutual labels:  lstm, gru
NoiseReductionUsingGRU
This is my graduation project in BIT. Title: Noise Reduction Using GRU.
Stars: ✭ 25 (+4.17%)
Mutual labels:  gru, rnn
question-pair
A siamese LSTM to detect sentence/question pairs.
Stars: ✭ 25 (+4.17%)
Mutual labels:  lstm, rnn
deep-improvisation
Easy-to-use deep LSTM neural network to generate songs that sound like they contain improvisation.
Stars: ✭ 53 (+120.83%)
Mutual labels:  lstm, rnn
Base-On-Relation-Method-Extract-News-DA-RNN-Model-For-Stock-Prediction--Pytorch
A dual-stage attention mechanism model based on a relational news extraction method, for stock prediction.
Stars: ✭ 33 (+37.5%)
Mutual labels:  lstm, rnn
dts
A Keras library for multi-step time-series forecasting.
Stars: ✭ 130 (+441.67%)
Mutual labels:  lstm, gru

Parts-of-Speech Tagger

The purpose of this project was to learn how to implement RNNs and to compare different kinds of RNNs on the task of Parts-of-Speech (POS) tagging, using a part of the CoNLL-2012 dataset with 42 possible tags. This repository contains:

  1. a custom implementation of the GRU cell (a simplified sketch of such a cell is shown after this list).
  2. a custom implementation of the RNN architecture that can be configured to act as an LSTM, GRU or Vanilla RNN.
  3. a Parts-of-Speech tagger that can be configured to use any of the above custom RNN implementations.
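
For reference, a minimal sketch of what such a custom GRU cell can look like in PyTorch is shown below. The class and attribute names are illustrative, not necessarily the identifiers used in this repository.

# Minimal GRU cell sketch (PyTorch); names here are illustrative only.
import torch
import torch.nn as nn

class GRUCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        # One linear map for the input and one for the previous hidden state,
        # each producing the reset (r), update (z) and candidate (n) pre-activations.
        self.x2h = nn.Linear(input_size, 3 * hidden_size)
        self.h2h = nn.Linear(hidden_size, 3 * hidden_size)

    def forward(self, x, h_prev):
        x_r, x_z, x_n = self.x2h(x).chunk(3, dim=-1)
        h_r, h_z, h_n = self.h2h(h_prev).chunk(3, dim=-1)
        r = torch.sigmoid(x_r + h_r)      # reset gate
        z = torch.sigmoid(x_z + h_z)      # update gate
        n = torch.tanh(x_n + r * h_n)     # candidate hidden state
        return (1 - z) * n + z * h_prev   # new hidden state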

Requirements

Organisation

The code in the repository is organised as follows:

The raw dataset is in RNN_Data_files/.

Usage

Preprocessing datasets

Use preprocess.sh to generate tsv datasets containing sentences and POS tags in the intended data_dir (RNN_Data_files/ here).

$ ./preprocess.sh RNN_Data_files/train/sentences.tsv RNN_Data_files/train/tags.tsv RNN_Data_files/train_data.tsv
$ ./preprocess.sh RNN_Data_files/val/sentences.tsv RNN_Data_files/val/tags.tsv RNN_Data_files/val_data.tsv
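
To sanity-check the output, the generated file can be previewed with a few lines of Python. The snippet below simply prints the first few rows; the exact column layout is whatever preprocess.sh produces, so nothing about it is assumed here.

# Preview the first few rows of the generated tsv (layout-agnostic check).
import csv

with open('RNN_Data_files/train_data.tsv') as f:
    for i, row in enumerate(csv.reader(f, delimiter='\t')):
        print(row)
        if i == 4:
            break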

Training/Testing

usage: main.py [-h] [--use_gpu] [--data_dir PATH] [--save_dir PATH]
                    [--rnn_class RNN_CLASS] [--reload PATH] [--test]
                    [--batch_size BATCH_SIZE] [--epochs EPOCHS] [--lr LR]
                    [--step_size N] [--gamma GAMMA] [--seed SEED]

PyTorch Parts-of-Speech Tagger

optional arguments:
  -h, --help            show this help message and exit
  --use_gpu
  --data_dir PATH       directory containing train_data.tsv and val_data.tsv (default=RNN_Data_files/)
  --save_dir PATH
  --rnn_class RNN_CLASS
                        class of underlying RNN to use
  --reload PATH         path to checkpoint to load (default: none)
  --test                test model on test set (use with --reload)
  --batch_size BATCH_SIZE
                        batchsize for optimizer updates
  --epochs EPOCHS       number of total epochs to run
  --lr LR               initial learning rate
  --step_size N
  --gamma GAMMA
  --seed SEED           random seed (default: 123)
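
For example, a training run followed by evaluating a saved checkpoint might look like the commands below. The value passed to --rnn_class and the checkpoint filename depend on what the code defines, so both are illustrative here.

$ python main.py --use_gpu --data_dir RNN_Data_files/ --save_dir checkpoints/ --rnn_class LSTM --batch_size 32 --epochs 20 --lr 0.01
$ python main.py --test --reload checkpoints/model_best.pth --rnn_class LSTM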

Results

Results.pdf compares the LSTM-, GRU- and Vanilla-RNN-based POS taggers on various metrics. The best accuracy, 96.12%, was obtained with the LSTM-based POS tagger. The pretrained model can be downloaded from here.