Relational Rnn Pytorch: An implementation of DeepMind's Relational Recurrent Neural Networks in PyTorch.
Stars: ✭ 236 (+21.65%)
Ctcdecoder: Connectionist Temporal Classification (CTC) decoding algorithms (best path, prefix search, beam search, and token passing), implemented in Python.
Stars: ✭ 529 (+172.68%)
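As a minimal illustration of the simplest of those algorithms, here is a sketch of CTC best-path (greedy) decoding: take the most likely symbol at each timestep, collapse repeats, then drop blanks. The probability matrix and alphabet below are made up for the example; they are not taken from the Ctcdecoder repository.

```python
# Sketch of CTC best-path decoding. Assumption: `probs` is a list of
# per-timestep probability rows over the alphabet, blank at index 0.
BLANK = 0

def ctc_best_path(probs, alphabet):
    """Pick the most likely symbol per timestep, collapse repeats,
    then drop blanks."""
    best = [max(range(len(row)), key=row.__getitem__) for row in probs]
    decoded = []
    prev = None
    for idx in best:
        if idx != prev and idx != BLANK:
            decoded.append(alphabet[idx - 1])  # alphabet excludes blank
        prev = idx
    return "".join(decoded)

# Example: alphabet "ab", four timesteps.
probs = [
    [0.1, 0.8, 0.1],  # 'a'
    [0.1, 0.8, 0.1],  # 'a' again (repeat, collapsed)
    [0.8, 0.1, 0.1],  # blank
    [0.1, 0.1, 0.8],  # 'b'
]
print(ctc_best_path(probs, "ab"))  # prints "ab"
```

Best path is the cheapest CTC decoder but can miss the truly most probable labeling; that is exactly the gap prefix search and beam search close.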
Attention Mechanisms: Implementations of a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with TensorFlow 2.0 and Keras.
Stars: ✭ 203 (+4.64%)
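The common core of that family is (scaled) dot-product attention. A pure-Python sketch for a single query, for illustration only (the vectors below are invented, not from the repository):

```python
# Minimal scaled dot-product attention: weight each value vector by
# softmax(query . key / sqrt(d)) and sum.
import math

def dot_product_attention(query, keys, values):
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

out, weights = dot_product_attention(
    query=[1.0, 0.0],
    keys=[[1.0, 0.0], [0.0, 1.0]],
    values=[[10.0, 0.0], [0.0, 10.0]],
)
# The first key aligns with the query, so it receives the larger weight.
```

Variants in such collections (additive/Bahdanau, multiplicative/Luong, multi-head) mainly differ in how the scores are computed before the softmax.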
Mead Baseline: Deep-learning model exploration and development for NLP.
Stars: ✭ 238 (+22.68%)
Ctcwordbeamsearch: Connectionist Temporal Classification (CTC) decoder with dictionary and language model for TensorFlow.
Stars: ✭ 398 (+105.15%)
Bit Rnn: Quantize weights and activations in recurrent neural networks.
Stars: ✭ 86 (-55.67%)
Electra pytorch: Pretrain and fine-tune ELECTRA with fastai and Hugging Face. (Results of the paper replicated!)
Stars: ✭ 149 (-23.2%)
Mss pytorch: Singing Voice Separation via Recurrent Inference and Skip-Filtering Connections, a PyTorch implementation with demo.
Stars: ✭ 165 (-14.95%)
Stock Price Predictor: This project uses deep learning models, specifically the Long Short-Term Memory (LSTM) neural network, to predict stock prices.
Stars: ✭ 146 (-24.74%)
Tupe: Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improves existing models like BERT.
Stars: ✭ 143 (-26.29%)
Optimus: The first large-scale pre-trained VAE language model.
Stars: ✭ 180 (-7.22%)
Hey Jetson: Deep-learning-based automatic speech recognition with attention for the Nvidia Jetson.
Stars: ✭ 161 (-17.01%)
Document Classifier Lstm: A bidirectional LSTM with attention for multiclass/multilabel text classification.
Stars: ✭ 136 (-29.9%)
Lrp for lstm: Layer-wise Relevance Propagation (LRP) for LSTMs.
Stars: ✭ 152 (-21.65%)
Gpt Neo: An implementation of model-parallel GPT-2- and GPT-3-like models, with the ability to scale up to full GPT-3 sizes (and possibly beyond), using the mesh-tensorflow library.
Stars: ✭ 1,252 (+545.36%)
Awd Lstm Lm: LSTM and QRNN language model toolkit for PyTorch.
Stars: ✭ 1,834 (+845.36%)
Bert Sklearn: A scikit-learn wrapper for Google's BERT model.
Stars: ✭ 182 (-6.19%)
Ld Net: Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling.
Stars: ✭ 148 (-23.71%)
Sru: SRU is a recurrent unit that can run over 10 times faster than the cuDNN LSTM, with no loss of accuracy across many tested tasks.
Stars: ✭ 2,009 (+935.57%)
Image Caption Generator: [DEPRECATED] A neural-network-based generative model for captioning images using TensorFlow.
Stars: ✭ 141 (-27.32%)
Emotion Recognition Using Speech: Building and training a speech emotion recognizer that predicts human emotions using Python, scikit-learn, and Keras.
Stars: ✭ 159 (-18.04%)
Chars2vec: A character-based word embedding model built on RNNs for handling real-world text.
Stars: ✭ 130 (-32.99%)
Deep Lyrics: Lyrics generator, aka character-level language modeling with a multi-layer LSTM recurrent neural network.
Stars: ✭ 127 (-34.54%)
Deep Spying: Spying using a smartwatch and deep learning.
Stars: ✭ 172 (-11.34%)
Keras Xlnet: Implementation of XLNet that can load pretrained checkpoints.
Stars: ✭ 159 (-18.04%)
Deepecg: ECG classification programs based on ML/DL methods.
Stars: ✭ 124 (-36.08%)
Speecht: An open-source speech-to-text program written in TensorFlow.
Stars: ✭ 152 (-21.65%)
Deep News Summarization: News summarization using a sequence-to-sequence model with attention in TensorFlow.
Stars: ✭ 167 (-13.92%)
Tfvos: Semi-supervised Video Object Segmentation (VOS) with TensorFlow. Includes an implementation of *MaskRNN: Instance Level Video Object Segmentation (NIPS 2017)* as part of the NIPS Paper Implementation Challenge.
Stars: ✭ 151 (-22.16%)
Bert As Language Model: BERT as a language model, forked from https://github.com/google-research/bert.
Stars: ✭ 185 (-4.64%)
Indic Bert: A BERT-based multilingual model for Indian languages.
Stars: ✭ 160 (-17.53%)
Char Rnn Chinese: Multi-layer recurrent neural networks (LSTM, GRU, RNN) for character-level language models in Torch. Based on the code of https://github.com/karpathy/char-rnn. Supports Chinese, among other additions.
Stars: ✭ 192 (-1.03%)
Speech Recognition Neural Network: An end-to-end speech recognition neural network built in Keras; the author's final project for the Artificial Intelligence Nanodegree @Udacity.
Stars: ✭ 148 (-23.71%)
Xlnet Gen: XLNet for language generation.
Stars: ✭ 164 (-15.46%)
Arc Pytorch: The first public PyTorch implementation of Attentive Recurrent Comparators.
Stars: ✭ 147 (-24.23%)
Keras Bert: Implementation of BERT that can load official pre-trained models for feature extraction and prediction.
Stars: ✭ 2,264 (+1067.01%)
Clue: Chinese Language Understanding Evaluation benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard.
Stars: ✭ 2,425 (+1150%)
Lazynlp: A library to scrape and clean web pages to create massive datasets.
Stars: ✭ 1,985 (+923.2%)
Crypto Rnn: Learning the Enigma with recurrent neural networks.
Stars: ✭ 139 (-28.35%)
Stockprediction: Plain stock close-price prediction via Graves LSTM RNNs.
Stars: ✭ 134 (-30.93%)
Lotclass: [EMNLP 2020] Text Classification Using Label Names Only: A Language Model Self-Training Approach.
Stars: ✭ 160 (-17.53%)
Electra: A Chinese pre-trained ELECTRA model, based on adversarial learning.
Stars: ✭ 132 (-31.96%)
Image Caption Generator: A neural network to generate captions for an image using a CNN and an RNN with beam search.
Stars: ✭ 126 (-35.05%)
Keras Lmu: Keras implementation of Legendre Memory Units.
Stars: ✭ 160 (-17.53%)
Rcnn Text Classification: TensorFlow implementation of "Recurrent Convolutional Neural Network for Text Classification" (AAAI 2015).
Stars: ✭ 127 (-34.54%)
Hdltex: HDLTex, Hierarchical Deep Learning for Text Classification.
Stars: ✭ 191 (-1.55%)
Kogpt2 Finetuning: 🔥 Fine-tuning Korean GPT-2 (KoGPT2), trained on Korean lyrics data. 🔥
Stars: ✭ 124 (-36.08%)
Brain.js: A GPU-accelerated library for neural networks written in JavaScript.
Stars: ✭ 12,358 (+6270.1%)
Rnn From Scratch: Uses TensorFlow's tf.scan to build vanilla, GRU, and LSTM RNNs.
Stars: ✭ 123 (-36.6%)
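The idea behind building RNNs with tf.scan is that scan folds a step function over the time axis while threading the hidden state through. A pure-Python stand-in, with a tiny vanilla RNN cell whose weights are fixed illustrative values (not taken from the repository):

```python
# Sketch of the scan pattern: apply `step` over `inputs`, carrying the
# state forward, and collect the state after every timestep.
import math

def scan(step, inputs, init_state):
    states, state = [], init_state
    for x in inputs:
        state = step(state, x)
        states.append(state)
    return states

def rnn_step(h, x, w_x=0.5, w_h=0.8):
    # Vanilla RNN cell (scalar form): h_t = tanh(w_x * x_t + w_h * h_{t-1})
    return math.tanh(w_x * x + w_h * h)

hidden = scan(rnn_step, inputs=[1.0, 0.0, -1.0], init_state=0.0)
# `hidden` holds the hidden state after each of the three timesteps.
```

Swapping in a GRU or LSTM cell changes only the step function; the scan over time stays identical, which is the point the repository demonstrates.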
Macbert: Revisiting Pre-trained Models for Chinese Natural Language Processing (Findings of EMNLP).
Stars: ✭ 167 (-13.92%)
F Lm: Language modeling.
Stars: ✭ 156 (-19.59%)
Robbert: A Dutch RoBERTa-based language model.
Stars: ✭ 120 (-38.14%)
Linear Attention Recurrent Neural Network: A recurrent attention module consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer network. The LARNN cell with attention can easily be used inside a loop over the cell state, just like any other RNN cell.
Stars: ✭ 119 (-38.66%)