text2keywords: Trained T5 and T5-large models for creating keywords from text
Stars: ✭ 53 (+35.9%)
Mutual labels: transformer, t5
fastT5: ⚡ Boost inference speed of T5 models by 5x & reduce the model size by 3x.
Stars: ✭ 421 (+979.49%)
Mutual labels: transformer, t5
Insight: Repository for Project Insight, NLP as a Service
Stars: ✭ 246 (+530.77%)
Mutual labels: transformer
densecap: Dense video captioning in PyTorch
Stars: ✭ 37 (-5.13%)
Mutual labels: transformer
awesome-transformer-search: A curated list of awesome resources combining Transformers with Neural Architecture Search
Stars: ✭ 194 (+397.44%)
Mutual labels: transformer
Ner Bert Pytorch: PyTorch solution for the named entity recognition task using Google AI's pre-trained BERT model.
Stars: ✭ 249 (+538.46%)
Mutual labels: transformer
Transformers-RL: An easy PyTorch implementation of "Stabilizing Transformers for Reinforcement Learning"
Stars: ✭ 107 (+174.36%)
Mutual labels: transformer
Relational Rnn Pytorch: An implementation of DeepMind's Relational Recurrent Neural Networks in PyTorch.
Stars: ✭ 236 (+505.13%)
Mutual labels: transformer
TianChi AIEarth: Solution for the TianChi AIEarth contest
Stars: ✭ 57 (+46.15%)
Mutual labels: transformer
transformer: Build English-Vietnamese machine translation with ProtonX Transformer.
Stars: ✭ 41 (+5.13%)
Mutual labels: transformer
SegSwap: (CVPRW 2022) Learning Co-segmentation by Segment Swapping for Retrieval and Discovery
Stars: ✭ 46 (+17.95%)
Mutual labels: transformer
nested-transformer: Nested Hierarchical Transformer (https://arxiv.org/pdf/2105.12723.pdf)
Stars: ✭ 174 (+346.15%)
Mutual labels: transformer
pytorch-lr-scheduler: PyTorch implementation of some learning rate schedulers for deep learning researchers.
Stars: ✭ 65 (+66.67%)
Mutual labels: transformer
sb-nmt: Code for Synchronous Bidirectional Neural Machine Translation (SB-NMT)
Stars: ✭ 66 (+69.23%)
Mutual labels: transformer
Pytorch Seq2seq: Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+8664.1%)
Mutual labels: transformer
vietnamese-roberta: A Robustly Optimized BERT Pretraining Approach for Vietnamese
Stars: ✭ 22 (-43.59%)
Mutual labels: transformer
Bertviz: Tool for visualizing attention in the Transformer model (BERT, GPT-2, Albert, XLNet, RoBERTa, CTRL, etc.)
Stars: ✭ 3,443 (+8728.21%)
Mutual labels: transformer
SSE-PT: Code and datasets for the RecSys'20 paper "SSE-PT: Sequential Recommendation Via Personalized Transformer" and the NeurIPS'19 paper "Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers"
Stars: ✭ 103 (+164.1%)
Mutual labels: transformer
seq2seq-pytorch: Sequence-to-Sequence Models in PyTorch
Stars: ✭ 41 (+5.13%)
Mutual labels: transformer
MASTER-pytorch: Code for the paper "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021)
Stars: ✭ 263 (+574.36%)
Mutual labels: transformer