Pytorch Openai Transformer Lm - 🐥 A PyTorch implementation of OpenAI's finetuned transformer language model, with a script to import the weights pre-trained by OpenAI
Stars: ✭ 1,268 (+437.29%)
Neural Sp - End-to-end ASR/LM implementation with PyTorch
Stars: ✭ 408 (+72.88%)
Gpt2 - PyTorch implementation of OpenAI GPT-2
Stars: ✭ 64 (-72.88%)
Awesome Bert Nlp - A curated list of NLP resources focused on BERT, attention mechanisms, Transformer networks, and transfer learning.
Stars: ✭ 567 (+140.25%)
Mead Baseline - Deep-Learning Model Exploration and Development for NLP
Stars: ✭ 238 (+0.85%)
Ctcdecoder - Connectionist Temporal Classification (CTC) decoding algorithms: best path, prefix search, beam search, and token passing, implemented in Python (a minimal best-path sketch follows this list).
Stars: ✭ 529 (+124.15%)
Tupe - Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training"; improves existing models like BERT.
Stars: ✭ 143 (-39.41%)
Bit Rnn - Quantize weights and activations in Recurrent Neural Networks.
Stars: ✭ 86 (-63.56%)
Gpt Scrolls - A collaborative collection of open-source, safe GPT-3 prompts that work well
Stars: ✭ 195 (-17.37%)
MinTL - Minimalist Transfer Learning for Task-Oriented Dialogue Systems
Stars: ✭ 61 (-74.15%)
Gpt2 French - GPT-2 French demo
Stars: ✭ 47 (-80.08%)
Bert Pytorch - Google AI 2018 BERT PyTorch implementation
Stars: ✭ 4,642 (+1866.95%)
Ctcwordbeamsearch - Connectionist Temporal Classification (CTC) decoder with dictionary and language model for TensorFlow.
Stars: ✭ 398 (+68.64%)
FNet-pytorch - Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms
Stars: ✭ 204 (-13.56%)
Attention Mechanisms - Implementations of a family of attention mechanisms, suitable for a wide range of natural language processing tasks and compatible with TensorFlow 2.0 and Keras.
Stars: ✭ 203 (-13.98%)
Ml In Tf - Get started with Machine Learning in TensorFlow with a selection of good reads and implemented examples!
Stars: ✭ 45 (-80.93%)
Transformers - 🤗 Transformers: state-of-the-art machine learning for PyTorch, TensorFlow, and JAX (a short generation example follows this list).
Stars: ✭ 55,742 (+23519.49%)
Nfnets Pytorch - NFNets and Adaptive Gradient Clipping for SGD, implemented in PyTorch
Stars: ✭ 215 (-8.9%)
Rnn Ctc - Recurrent neural network and long short-term memory (LSTM) with Connectionist Temporal Classification, implemented in Theano. Includes a toy training example.
Stars: ✭ 220 (-6.78%)
Graphtransformer - Graph Transformer architecture. Source code for "A Generalization of Transformer Networks to Graphs", DLG-AAAI'21.
Stars: ✭ 187 (-20.76%)
Nn - 🧑‍🏫 50+ implementations/tutorials of deep learning papers with side-by-side notes 📝, including transformers (original, XL, Switch, Feedback, ViT, ...), optimizers (Adam, AdaBelief, ...), GANs (CycleGAN, StyleGAN2, ...), 🎮 reinforcement learning (PPO, DQN), CapsNet, distillation, ... 🧠
Stars: ✭ 5,720 (+2323.73%)
Yin - The efficient and elegant JSON:API 1.1 server library for PHP
Stars: ✭ 214 (-9.32%)
Coursera Deep Learning Specialization - Notes, programming assignments, and quizzes from all courses within the Coursera Deep Learning specialization offered by deeplearning.ai: (i) Neural Networks and Deep Learning; (ii) Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization; (iii) Structuring Machine Learning Projects; (iv) Convolutional Neural Networks; (v) Sequence Models
Stars: ✭ 188 (-20.34%)
Char Rnn Chinese - Multi-layer recurrent neural networks (LSTM, GRU, RNN) for character-level language models in Torch. Based on the code of https://github.com/karpathy/char-rnn; supports Chinese, among other additions.
Stars: ✭ 192 (-18.64%)
Torchnlp - Easy-to-use NLP library built on PyTorch and TorchText
Stars: ✭ 233 (-1.27%)
Sttn - [ECCV 2020] STTN: Learning Joint Spatial-Temporal Transformations for Video Inpainting
Stars: ✭ 211 (-10.59%)
Hdltex - HDLTex: Hierarchical Deep Learning for Text Classification
Stars: ✭ 191 (-19.07%)
Kospeech - Open-Source Toolkit for End-to-End Korean Automatic Speech Recognition.
Stars: ✭ 190 (-19.49%)
Im2latex Tensorflow - TensorFlow implementation of the HarvardNLP paper "What You Get Is What You See: A Visual Markup Decompiler" (https://arxiv.org/pdf/1609.04938v1.pdf)
Stars: ✭ 207 (-12.29%)
Graph Transformer - Transformer for Graph Classification (PyTorch and TensorFlow)
Stars: ✭ 191 (-19.07%)
Sentimentanalysis - Sentiment analysis neural network trained by fine-tuning BERT, ALBERT, or DistilBERT on the Stanford Sentiment Treebank.
Stars: ✭ 186 (-21.19%)
Echotorch - A Python toolkit for Reservoir Computing and Echo State Network experimentation, based on PyTorch. EchoTorch is the only Python module available for easily creating Deep Reservoir Computing models.
Stars: ✭ 231 (-2.12%)
Xlnet Zh - Pre-trained Chinese XLNet model (XLNet_Large)
Stars: ✭ 207 (-12.29%)
Nlp Learning - Learning natural language processing (NLP) with Python: language models, HMM, PCFG, Word2vec, cloze-style reading comprehension, naive Bayes classifiers, TF-IDF, PCA, SVD
Stars: ✭ 188 (-20.34%)
Bert As Language Model - BERT as a language model; fork of https://github.com/google-research/bert
Stars: ✭ 185 (-21.61%)
Hardware Aware Transformers - [ACL 2020] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing
Stars: ✭ 206 (-12.71%)
Bert Sklearn - A scikit-learn wrapper for Google's BERT model
Stars: ✭ 182 (-22.88%)
Keras Bert - Implementation of BERT that can load the official pre-trained models for feature extraction and prediction
Stars: ✭ 2,264 (+859.32%)
Gpt2 Newstitle - Chinese news-title generation project using GPT-2, with extremely detailed code comments.
Stars: ✭ 235 (-0.42%)
Multigraph Transformer - Official code for the paper "Multi-Graph Transformer for Free-Hand Sketch Recognition"; covers multi-graph Transformers, graph classification, and free-hand sketch recognition.
Stars: ✭ 231 (-2.12%)
Pytorch Nce - Noise Contrastive Estimation for softmax outputs, written in PyTorch
Stars: ✭ 204 (-13.56%)
Optimus - The first large-scale pre-trained VAE language model
Stars: ✭ 180 (-23.73%)
Bert Chainer - Chainer implementation of "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
Stars: ✭ 205 (-13.14%)
Transformer Clinic - Understanding the Difficulty of Training Transformers
Stars: ✭ 179 (-24.15%)
Linear Attention Transformer - Transformer based on an attention variant with linear complexity with respect to sequence length (a minimal sketch follows this list).
Stars: ✭ 205 (-13.14%)
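To make the CTC decoder entries above concrete, here is a minimal best-path (greedy) CTC decoding sketch in plain Python. It is not code from the Ctcdecoder project itself; the blank index convention and the toy probability matrix are assumptions for illustration.

```python
import numpy as np

def ctc_best_path(probs, chars, blank=0):
    """Greedy (best-path) CTC decoding.

    probs: (T, C) matrix of per-timestep label probabilities,
           where column `blank` is the CTC blank label.
    chars: string mapping non-blank labels 1..C-1 to characters.
    """
    best = np.argmax(probs, axis=1)  # most likely label at each timestep
    # Collapse consecutive repeats, then drop blanks.
    collapsed = [best[0]] + [l for prev, l in zip(best, best[1:]) if l != prev]
    return "".join(chars[l - 1] for l in collapsed if l != blank)

# Toy example: 3 timesteps, labels = [blank, 'a', 'b'].
probs = np.array([[0.1, 0.8, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.1, 0.1, 0.8]])
print(ctc_best_path(probs, "ab"))  # -> "ab"
```

Best path is the cheapest of the listed algorithms; prefix search and beam search trade more computation for better approximations of the most probable labeling.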
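As a quick illustration of the 🤗 Transformers entry, the snippet below loads a pre-trained GPT-2 through the library's pipeline API and generates text. The model name and generation parameters are just illustrative defaults.

```python
from transformers import pipeline

# Download a pre-trained GPT-2 and wrap it in a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# Sample a continuation; max_length and num_return_sequences are illustrative.
out = generator("The transformer architecture", max_length=30, num_return_sequences=1)
print(out[0]["generated_text"])
```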
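Finally, the idea behind the Linear Attention Transformer entry can be sketched in a few lines of PyTorch: with a positive feature map phi (here elu(x) + 1, in the style of Katharopoulos et al., "Transformers are RNNs"), softmax attention is replaced by a kernelized form whose cost is linear in sequence length. This is a sketch of the general technique, not the repository's implementation; tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """O(N) attention: softmax(QK^T)V is approximated by
    phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1), with phi(x) = elu(x) + 1.

    q, k, v: (batch, seq_len, dim)
    """
    q = F.elu(q) + 1  # positive feature map phi
    k = F.elu(k) + 1
    kv = torch.einsum("bnd,bne->bde", k, v)  # sum_n phi(k_n) v_n^T
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)  # normalizer
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)

# Toy usage with assumed shapes.
q = torch.randn(2, 16, 8); k = torch.randn(2, 16, 8); v = torch.randn(2, 16, 8)
print(linear_attention(q, k, v).shape)  # torch.Size([2, 16, 8])
```

This is the non-causal form; a causal variant maintains running prefix sums of the key-value products instead of summing over the whole sequence.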