Pytorch Seq2seq: Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+11686.21%)
transformer: Neutron, a PyTorch-based implementation of the Transformer and its variants.
Stars: ✭ 60 (+106.9%)
Nmt Keras: Neural Machine Translation with Keras.
Stars: ✭ 501 (+1627.59%)
Joeynmt: Minimalist NMT for educational purposes.
Stars: ✭ 420 (+1348.28%)
NiuTrans.NMT: A fast neural machine translation system, developed in C++ and relying on NiuTensor for fast tensor APIs.
Stars: ✭ 112 (+286.21%)
transformer-slt: Sign Language Translation with Transformers (COLING 2020, ECCV 2020 SLRTP Workshop).
Stars: ✭ 92 (+217.24%)
text simplification: Text simplification model based on an encoder-decoder architecture (includes Transformer and Seq2Seq variants).
Stars: ✭ 66 (+127.59%)
pytorch basic nmt: A simple yet strong implementation of neural machine translation in PyTorch.
Stars: ✭ 66 (+127.59%)
VideoTransformer-pytorch: PyTorch implementation of a collection of scalable Video Transformer benchmarks.
Stars: ✭ 159 (+448.28%)
zero: Zero, a neural machine translation system.
Stars: ✭ 121 (+317.24%)
Sockeye: Sequence-to-sequence framework with a focus on neural machine translation, based on Apache MXNet.
Stars: ✭ 990 (+3313.79%)
Transformer Dynet: An implementation of the Transformer ("Attention Is All You Need") in DyNet.
Stars: ✭ 57 (+96.55%)
Njunmt Tf: An open-source neural machine translation system developed by the Natural Language Processing Group at Nanjing University.
Stars: ✭ 97 (+234.48%)
TabFormer: Code & data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP 2021).
Stars: ✭ 209 (+620.69%)
sparql-transformer: A handier way to use SPARQL data in your web app.
Stars: ✭ 38 (+31.03%)
TransBTS: This repo provides the official code for 1) TransBTS: Multimodal Brain Tumor Segmentation Using Transformer (https://arxiv.org/abs/2103.04430), accepted at MICCAI 2021, and 2) TransBTSV2: Towards Better and More Efficient Volumetric Segmentation of Medical Images (https://arxiv.org/abs/2201.12785).
Stars: ✭ 254 (+775.86%)
En-transformer: Implementation of the E(n)-Transformer, which extends the ideas of Welling's E(n)-Equivariant Graph Neural Network to attention.
Stars: ✭ 131 (+351.72%)
Colorization: PyTorch implementation of Colorful Image Colorization (ECCV 2016).
Stars: ✭ 34 (+17.24%)
cape: Continuous Augmented Positional Embeddings (CAPE) implementation for PyTorch.
Stars: ✭ 29 (+0%)
Cross-lingual-Summarization: Zero-Shot Cross-Lingual Abstractive Sentence Summarization through Teaching Generation and Attention.
Stars: ✭ 28 (-3.45%)
php-serializer: Serialize PHP variables, including objects, in any format; supports unserializing them too.
Stars: ✭ 47 (+62.07%)
fastT5: ⚡ Boost inference speed of T5 models by 5x and reduce model size by 3x.
Stars: ✭ 421 (+1351.72%)
kospeech: Open-source toolkit for end-to-end Korean automatic speech recognition, leveraging PyTorch and Hydra.
Stars: ✭ 456 (+1472.41%)
Transformer-MM-Explainability: [ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network; includes examples for DETR and VQA.
Stars: ✭ 484 (+1568.97%)
DCGCN: Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning (authors' MXNet implementation for the TACL 2019 paper).
Stars: ✭ 73 (+151.72%)
R-MeN: Transformer-based Memory Networks for Knowledge Graph Embeddings (ACL 2020) (PyTorch and TensorFlow).
Stars: ✭ 74 (+155.17%)
CharLM: Character-aware neural language model implemented in PyTorch.
Stars: ✭ 32 (+10.34%)
imdb-transformer: A simple neural network for sentiment analysis, embedding sentences with a Transformer network.
Stars: ✭ 26 (-10.34%)
TokenLabeling: PyTorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers".
Stars: ✭ 385 (+1227.59%)
TitleStylist: Source code for our "TitleStylist" paper at ACL 2020.
Stars: ✭ 72 (+148.28%)
sister: SImple SenTence EmbeddeR.
Stars: ✭ 66 (+127.59%)
catr: Image captioning using a Transformer.
Stars: ✭ 206 (+610.34%)
kaggle-champs: Code for the CHAMPS Predicting Molecular Properties Kaggle competition.
Stars: ✭ 49 (+68.97%)
ClusterTransformer: Topic clustering library built on Transformer embeddings and cosine similarity metrics. Compatible with all BERT base transformers from Hugging Face.
Stars: ✭ 36 (+24.14%)
libai: LiBai (李白), a toolbox for large-scale distributed parallel training.
Stars: ✭ 284 (+879.31%)
naacl2018-fever: Fact Extraction and VERification baseline published at NAACL 2018.
Stars: ✭ 109 (+275.86%)
ru-dalle: Generate images from text, in Russian.
Stars: ✭ 1,606 (+5437.93%)
ViTs-vs-CNNs: [NeurIPS 2021] Are Transformers More Robust Than CNNs? (PyTorch implementation & checkpoints).
Stars: ✭ 145 (+400%)
keyword-transformer: Official implementation of the Keyword Transformer (https://arxiv.org/abs/2104.00769).
Stars: ✭ 76 (+162.07%)
Neural-Scam-Artist: Web scraping, document deduplication & GPT-2 fine-tuning with a newly created scam dataset.
Stars: ✭ 18 (-37.93%)
svelte-jest: Jest Svelte component transformer.
Stars: ✭ 37 (+27.59%)
bytekit: A Java library for byte manipulation (not a bytecode library).
Stars: ✭ 40 (+37.93%)
t5-japanese: Code to pre-train Japanese T5 models.
Stars: ✭ 39 (+34.48%)
transformer-ls: Official PyTorch implementation of the Long-Short Transformer (NeurIPS 2021).
Stars: ✭ 201 (+593.1%)
MASTER-pytorch: Code for the paper "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021).
Stars: ✭ 263 (+806.9%)
alpr utils: ALPR (automatic license plate recognition) model for Chinese license plates in unconstrained scenarios.
Stars: ✭ 158 (+444.83%)
ConSSL: PyTorch implementation of state-of-the-art self-supervised learning (SSL) methods.
Stars: ✭ 61 (+110.34%)
query-selector: Long-Term Series Forecasting with Query Selector, an efficient model of sparse attention.
Stars: ✭ 63 (+117.24%)
vietnamese-roberta: A Robustly Optimized BERT Pretraining Approach for Vietnamese.
Stars: ✭ 22 (-24.14%)
Transformer Temporal Tagger: Code and data from the paper "BERT Got a Date: Introducing Transformers to Temporal Tagging".
Stars: ✭ 55 (+89.66%)