Speecht: Open-source speech-to-text software written in TensorFlow.
Stars: ✭ 152 (-21.65%)
Keras Gpt 2: Load a GPT-2 checkpoint and generate text.
Stars: ✭ 113 (-41.75%)
Deep News Summarization: News summarization using a sequence-to-sequence model with attention in TensorFlow.
Stars: ✭ 167 (-13.92%)
Transformers: 🤗 Transformers: State-of-the-art machine learning for PyTorch, TensorFlow, and JAX.
Stars: ✭ 55,742 (+28632.99%)
Tfvos: Semi-supervised Video Object Segmentation (VOS) with TensorFlow. Includes an implementation of *MaskRNN: Instance Level Video Object Segmentation (NIPS 2017)* as part of the NIPS Paper Implementation Challenge.
Stars: ✭ 151 (-22.16%)
Openseq2seq: Toolkit for efficient experimentation with speech recognition, text-to-speech, and NLP.
Stars: ✭ 1,378 (+610.31%)
Bert As Language Model: BERT used as a language model; forked from https://github.com/google-research/bert.
Stars: ✭ 185 (-4.64%)
Top Deep Learning: Top 200 deep learning GitHub repositories sorted by number of stars.
Stars: ✭ 1,365 (+603.61%)
Chemgan Challenge: Code for the paper: Benhenda, M. 2017. ChemGAN challenge for drug discovery: can AI reproduce natural chemical diversity? arXiv preprint arXiv:1708.08227.
Stars: ✭ 98 (-49.48%)
Indic Bert: BERT-based multilingual model for Indian languages.
Stars: ✭ 160 (-17.53%)
Pytorch Pos Tagging: A tutorial on implementing part-of-speech tagging models using PyTorch and TorchText.
Stars: ✭ 96 (-50.52%)
Char Rnn Chinese: Multi-layer recurrent neural networks (LSTM, GRU, RNN) for character-level language models in Torch. Based on https://github.com/karpathy/char-rnn, with added support for Chinese, among other changes.
Stars: ✭ 192 (-1.03%)
Multitask sentiment analysis: Multitask deep learning for sentiment analysis using a character-level language model, with Bi-LSTMs for POS tagging, chunking, and unsupervised dependency parsing. Inspired by https://arxiv.org/abs/1611.01587.
Stars: ✭ 93 (-52.06%)
Speech Recognition Neural Network: An end-to-end speech recognition neural network implemented in Keras; the author's final project for the Artificial Intelligence Nanodegree at Udacity.
Stars: ✭ 148 (-23.71%)
Kerasr: R interface to the keras library.
Stars: ✭ 90 (-53.61%)
Xlnet Gen: XLNet for language generation.
Stars: ✭ 164 (-15.46%)
Tongrams: A C++ library providing fast language-model queries in compressed space.
Stars: ✭ 88 (-54.64%)
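To illustrate what a "language-model query" means in the Tongrams entry above, here is a minimal, uncompressed MLE n-gram sketch in plain Python. Tongrams itself answers such queries over compressed C++ data structures; none of the names below come from its API.

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count all n-grams (as tuples) in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bigram_prob(tokens, context, word):
    """MLE estimate of P(word | context) from raw counts."""
    bigrams = ngram_counts(tokens, 2)
    unigrams = ngram_counts(tokens, 1)
    return bigrams[(context, word)] / unigrams[(context,)]

tokens = "the cat sat on the mat".split()
print(bigram_prob(tokens, "the", "cat"))  # 0.5 ("the" occurs twice, followed by "cat" once)
```

A real n-gram toolkit adds smoothing and backoff on top of these raw counts; the point here is only the count-and-divide shape of the query.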
Arc Pytorch: The first public PyTorch implementation of Attentive Recurrent Comparators.
Stars: ✭ 147 (-24.23%)
Keras Bert: Implementation of BERT that can load the official pre-trained models for feature extraction and prediction.
Stars: ✭ 2,264 (+1067.01%)
Clue: Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard.
Stars: ✭ 2,425 (+1150%)
Bio embeddings: Get protein embeddings from protein sequences.
Stars: ✭ 86 (-55.67%)
Lazynlp: Library to scrape and clean web pages to create massive datasets.
Stars: ✭ 1,985 (+923.2%)
Tnn: Biologically realistic recurrent convolutional neural networks.
Stars: ✭ 83 (-57.22%)
Crypto Rnn: Learning the Enigma cipher with recurrent neural networks.
Stars: ✭ 139 (-28.35%)
Simplednn: A lightweight open-source machine learning library written in Kotlin, designed to support neural network architectures relevant to natural language processing tasks.
Stars: ✭ 81 (-58.25%)
Ai Reading Materials: A collection of ML and DL reading materials and research papers the author has read.
Stars: ✭ 79 (-59.28%)
Stockprediction: Plain stock close-price prediction via Graves LSTM RNNs.
Stars: ✭ 134 (-30.93%)
Keras Sru: Implementation of the Simple Recurrent Unit in Keras.
Stars: ✭ 76 (-60.82%)
Lotclass: [EMNLP 2020] Text Classification Using Label Names Only: A Language Model Self-Training Approach.
Stars: ✭ 160 (-17.53%)
Codegan: [Deprecated] Source code generation using sequence generative adversarial networks.
Stars: ✭ 73 (-62.37%)
Electra: Pre-trained Chinese ELECTRA model based on adversarial learning.
Stars: ✭ 132 (-31.96%)
Rnn Trajmodel: Source code for the IJCAI 2017 paper "Modeling Trajectory with Recurrent Neural Networks".
Stars: ✭ 72 (-62.89%)
Image Caption Generator: A neural network that generates captions for an image using a CNN and an RNN with beam search.
Stars: ✭ 126 (-35.05%)
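The entry above mentions beam search for caption decoding. A minimal, framework-free sketch of the idea follows; the `step_fn` interface and all names are purely illustrative, not the repository's API.

```python
import math

def beam_search(step_fn, start, beam_width=2, max_len=4):
    """Keep the `beam_width` best partial sequences at each decoding step.

    step_fn(seq) must return a dict mapping each candidate next token to its
    probability given the sequence so far.
    """
    beams = [([start], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            for token, prob in step_fn(seq).items():
                candidates.append((seq + [token], logp + math.log(prob)))
        # Keep only the top-scoring partial sequences.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

# Toy model: always prefers token "a" over "b".
best = beam_search(lambda seq: {"a": 0.6, "b": 0.4}, "<s>", beam_width=2, max_len=3)
print(best)  # ['<s>', 'a', 'a', 'a']
```

In a real captioner, `step_fn` would run the RNN decoder over the CNN image features and return next-word probabilities; beam search then trades off greedy decoding against full search.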
Lstm Ctc Ocr: Uses an RNN (LSTM or GRU) and CTC to convert a line image into text; based on Torch7 and warp-ctc.
Stars: ✭ 70 (-63.92%)
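The CTC decoding step that the entry above relies on reduces a per-frame label path to an output sequence by merging consecutive repeats and then dropping blanks. A minimal greedy best-path collapse (a sketch of the rule, not the repository's Torch code):

```python
def ctc_collapse(path, blank=0):
    """Collapse a CTC label path: merge consecutive repeats, then drop blanks."""
    out, prev = [], None
    for label in path:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# Frames predicting c, c, blank, a, t, t -> "cat" (ids 3, 1, 20 are arbitrary here)
print(ctc_collapse([3, 3, 0, 1, 20, 20]))  # [3, 1, 20]
```

Note that a blank between two identical labels keeps them distinct: `[1, 0, 1]` collapses to `[1, 1]`, which is how CTC can emit doubled letters.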
Keras Lmu: Keras implementation of Legendre Memory Units.
Stars: ✭ 160 (-17.53%)
Nezha chinese pytorch: NEZHA: Neural Contextualized Representation for Chinese Language Understanding.
Stars: ✭ 65 (-66.49%)
Rcnn Text Classification: TensorFlow implementation of "Recurrent Convolutional Neural Network for Text Classification" (AAAI 2015).
Stars: ✭ 127 (-34.54%)
Cross Domain ner: Cross-domain NER using cross-domain language modeling; code for an ACL 2019 paper.
Stars: ✭ 67 (-65.46%)
Hdltex: HDLTex: Hierarchical Deep Learning for Text Classification.
Stars: ✭ 191 (-1.55%)
Dogtorch: Who Let The Dogs Out? Modeling Dog Behavior From Visual Data. https://arxiv.org/pdf/1803.10827.pdf
Stars: ✭ 66 (-65.98%)
Kogpt2 Finetuning: 🔥 Fine-tuning of KoGPT2 (Korean GPT-2), trained on Korean song-lyrics data. 🔥
Stars: ✭ 124 (-36.08%)
Rcnn Relation Extraction: TensorFlow implementation of a recurrent convolutional neural network for relation extraction.
Stars: ✭ 64 (-67.01%)
Brain.js: A GPU-accelerated neural network library written in JavaScript.
Stars: ✭ 12,358 (+6270.1%)
Gpt2: PyTorch implementation of OpenAI GPT-2.
Stars: ✭ 64 (-67.01%)
Gdax Orderbook Ml: Application of machine learning to the Coinbase (GDAX) order book.
Stars: ✭ 60 (-69.07%)
Macbert: Revisiting Pre-trained Models for Chinese Natural Language Processing (Findings of EMNLP).
Stars: ✭ 167 (-13.92%)
Char rnn lm zh: A Chinese language model implemented following the official PyTorch documentation.
Stars: ✭ 57 (-70.62%)
Haystack: 🔍 An open-source NLP framework that leverages Transformer models, enabling developers to implement production-ready neural search, question answering, semantic document search, and summarization for a wide range of applications.
Stars: ✭ 3,409 (+1657.22%)
Brainforge: A neural network library based only on NumPy.
Stars: ✭ 114 (-41.24%)
Linear Attention Recurrent Neural Network: A recurrent attention module (LARNN) consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer network; the LARNN cell with attention can be used inside a loop on the cell state, just like any other RNN.
Stars: ✭ 119 (-38.66%)
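The querying of past cell states that the LARNN entry describes can be sketched as single-head scaled dot-product attention over a window of past states. This is a simplified NumPy illustration under assumed shapes, not the LARNN code itself (which uses multiple heads and learned projections):

```python
import numpy as np

def windowed_attention(query, past_states):
    """Single-head scaled dot-product attention over a window of past states.

    query:       shape (d,)   - the current hidden state acting as the query
    past_states: shape (w, d) - the last w cell states (the attention window)
    Returns a (d,) context vector: a softmax-weighted mix of the past states.
    """
    d = query.shape[0]
    scores = past_states @ query / np.sqrt(d)   # (w,) similarity scores
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()
    return weights @ past_states                # convex combination of states

rng = np.random.default_rng(0)
past = rng.normal(size=(4, 3))                  # window of 4 past states, d = 3
context = windowed_attention(past[-1], past)
print(context.shape)  # (3,)
```

Inside the recurrent loop, the resulting context vector would be fed back into the LSTM cell's update alongside the usual input, which is what lets the cell revisit states beyond its immediate predecessor.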
Gpt Scrolls: A collaborative collection of open-source, safe GPT-3 prompts that work well.
Stars: ✭ 195 (+0.52%)