Neural-Machine-Translation: Several basic neural machine translation models implemented in PyTorch & TensorFlow.
Stars: ✭ 29 (-85.05%)
Eeg Dl: A deep learning library for EEG (signal) classification tasks, based on TensorFlow.
Stars: ✭ 165 (-14.95%)
ClusterTransformer: Topic clustering library built on Transformer embeddings and cosine similarity metrics. Compatible with all BERT-base transformers from Hugging Face.
Stars: ✭ 36 (-81.44%)
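As a rough illustration of the approach ClusterTransformer describes (not its actual API): embed sentences with any Hugging Face BERT checkpoint, then compare them by cosine similarity. A minimal sketch; the checkpoint name and the mean-pooling choice are assumptions:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Any BERT-base checkpoint works; "bert-base-uncased" is an arbitrary choice.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    """Mean-pool the last hidden states into one vector per sentence."""
    batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state     # (n, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)      # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)       # (n, dim)

sentences = ["The cat sat.", "A cat was sitting.", "Stocks fell sharply."]
emb = torch.nn.functional.normalize(embed(sentences), dim=-1)
sim = emb @ emb.T  # pairwise cosine similarities
print(sim)         # high for the two cat sentences, low against the finance one
```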
Transformer: A TensorFlow implementation of the Transformer from "Attention Is All You Need".
Stars: ✭ 3,646 (+1779.38%)
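The building block all of these Transformer implementations share is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A minimal PyTorch sketch of that operation (shapes and names are illustrative, not taken from the repository above):

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Compute softmax(Q K^T / sqrt(d_k)) V for batched inputs.

    q, k, v: tensors of shape (batch, heads, seq_len, d_k).
    mask: optional boolean tensor; True marks positions to hide.
    """
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (batch, heads, len_q, len_k)
    if mask is not None:
        scores = scores.masked_fill(mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)  # attention distribution over keys
    return weights @ v                   # weighted sum of values
```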
Gpt2 Chitchat: GPT-2 for Chinese chitchat (implements the MMI idea from DialoGPT).
Stars: ✭ 1,230 (+534.02%)
En-transformer: Implementation of the E(n)-Transformer, which extends the ideas of Welling's E(n)-Equivariant Graph Neural Network to attention.
Stars: ✭ 131 (-32.47%)
svelte-jest: Jest transformer for Svelte components.
Stars: ✭ 37 (-80.93%)
VT-UNet: [MICCAI 2022] Official PyTorch implementation of "A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation".
Stars: ✭ 151 (-22.16%)
transformer: A simple TensorFlow implementation of the Transformer.
Stars: ✭ 25 (-87.11%)
Contextualized Topic Models: A Python package for contextualized topic modeling. CTMs combine BERT with topic models to produce coherent topics, and multilingual tasks are also supported. The cross-lingual zero-shot model was published at EACL 2021.
Stars: ✭ 318 (+63.92%)
imdb-transformer: A simple neural network for sentiment analysis that embeds sentences with a Transformer network.
Stars: ✭ 26 (-86.6%)
catr: Image captioning using a Transformer.
Stars: ✭ 206 (+6.19%)
Transformer Tensorflow: TensorFlow implementation of "Attention Is All You Need" (June 2017).
Stars: ✭ 319 (+64.43%)
text simplification: Text simplification model based on an encoder-decoder architecture (includes Transformer and Seq2Seq variants).
Stars: ✭ 66 (-65.98%)
Medical Transformer: PyTorch code for "Medical Transformer: Gated Axial-Attention for Medical Image Segmentation".
Stars: ✭ 153 (-21.13%)
Neural-Scam-Artist: Web scraping, document deduplication & GPT-2 fine-tuning with a newly created scam dataset.
Stars: ✭ 18 (-90.72%)
Pyhgt: Code for "Heterogeneous Graph Transformer" (WWW'20), based on pytorch_geometric.
Stars: ✭ 313 (+61.34%)
transformer-ls: Official PyTorch implementation of the Long-Short Transformer (NeurIPS 2021).
Stars: ✭ 201 (+3.61%)
Distre: [ACL 19] Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction.
Stars: ✭ 75 (-61.34%)
Cognitive Speech Tts: Microsoft Text-to-Speech API sample code in several languages, part of Cognitive Services.
Stars: ✭ 312 (+60.82%)
TokenLabeling: PyTorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers".
Stars: ✭ 385 (+98.45%)
Yin: The efficient and elegant JSON:API 1.1 server library for PHP.
Stars: ✭ 214 (+10.31%)
sister: SImple SenTence EmbeddeR.
Stars: ✭ 66 (-65.98%)
Vedastr: A scene text recognition toolbox based on PyTorch.
Stars: ✭ 290 (+49.48%)
kaggle-champs: Code for the CHAMPS Predicting Molecular Properties Kaggle competition.
Stars: ✭ 49 (-74.74%)
Dialogpt: Large-scale pretraining for dialogue.
Stars: ✭ 1,177 (+506.7%)
libai: LiBai (李白), a toolbox for large-scale distributed parallel training.
Stars: ✭ 284 (+46.39%)
ru-dalle: Generate images from text, in Russian.
Stars: ✭ 1,606 (+727.84%)
Transformer: Easy Attributed String Creator.
Stars: ✭ 278 (+43.3%)
ViTs-vs-CNNs: [NeurIPS 2021] Are Transformers More Robust Than CNNs? (PyTorch implementation & checkpoints)
Stars: ✭ 145 (-25.26%)
Presento: Transformer & Presenter Package for PHP.
Stars: ✭ 71 (-63.4%)
Transformer: Implementation of the Transformer model (originally from "Attention Is All You Need") applied to time series.
Stars: ✭ 273 (+40.72%)
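Applying a Transformer to time series, as this repository does, usually amounts to projecting each time step into the model dimension and running a standard encoder over the sequence. A minimal sketch with PyTorch's built-in modules; all hyperparameters here are arbitrary:

```python
import torch
import torch.nn as nn

# Project 1-D observations to d_model, then encode the sequence.
d_model = 32
proj = nn.Linear(1, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d_model, 1)  # e.g. a per-position forecasting head

series = torch.randn(8, 50, 1)            # (batch, time, features)
print(head(encoder(proj(series))).shape)  # torch.Size([8, 50, 1])
```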
Bertviz: Tool for visualizing attention in Transformer models (BERT, GPT-2, ALBERT, XLNet, RoBERTa, CTRL, etc.).
Stars: ✭ 3,443 (+1674.74%)
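Bertviz renders these attention maps interactively; the raw per-layer attention weights it visualizes can be obtained from any Hugging Face transformers model by passing output_attentions=True. A minimal sketch (the checkpoint choice is illustrative):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tok("Attention weights can be inspected.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions is a tuple with one tensor per layer,
# each of shape (batch, heads, seq_len, seq_len).
first_layer = out.attentions[0]
print(len(out.attentions), first_layer.shape)
```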
t5-japanese: Code to pre-train Japanese T5 models.
Stars: ✭ 39 (-79.9%)
Remi"Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions", ACM Multimedia 2020
Stars: ✭ 273 (+40.72%)
Mixture Of Experts: A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models.
Stars: ✭ 68 (-64.95%)
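The sparsely-gated idea: a small gating network scores the experts for each token, only the top-k experts actually run, and their outputs are mixed by the renormalized gate weights. A minimal PyTorch sketch of that routing, not the repository's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Sparsely-gated mixture of experts: route each token to its top-k experts."""

    def __init__(self, dim, num_experts=4, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                     # x: (tokens, dim)
        scores = self.gate(x)                 # (tokens, num_experts)
        top_w, top_i = scores.topk(self.k, dim=-1)
        top_w = F.softmax(top_w, dim=-1)      # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                sel = top_i[:, slot] == e     # tokens routed to expert e in this slot
                if sel.any():
                    out[sel] += top_w[sel, slot, None] * expert(x[sel])
        return out

moe = TinyMoE(dim=16)
print(moe(torch.randn(8, 16)).shape)          # torch.Size([8, 16])
```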
vietnamese-roberta: A Robustly Optimized BERT Pretraining Approach for Vietnamese.
Stars: ✭ 22 (-88.66%)
Nlp Interview Notes: Study notes and materials for natural language processing (NLP) interview preparation, compiled by the authors from their own interviews and experience; currently a collection of interview questions across NLP subfields.
Stars: ✭ 207 (+6.7%)
SegSwap: (CVPRW 2022) Learning Co-segmentation by Segment Swapping for Retrieval and Discovery.
Stars: ✭ 46 (-76.29%)
Hrnet Semantic Segmentation: The OCR approach is rephrased as the Segmentation Transformer (https://arxiv.org/abs/1909.11065). An official implementation of semantic segmentation with HRNet (https://arxiv.org/abs/1908.07919).
Stars: ✭ 2,369 (+1121.13%)
transformer: Build English-Vietnamese machine translation with the ProtonX Transformer. :D
Stars: ✭ 41 (-78.87%)
SSE-PT: Code and datasets for the RecSys'20 paper "SSE-PT: Sequential Recommendation Via Personalized Transformer" and the NeurIPS'19 paper "Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers".
Stars: ✭ 103 (-46.91%)
Ner Bert Pytorch: PyTorch solution for the named entity recognition task using Google AI's pre-trained BERT model.
Stars: ✭ 249 (+28.35%)
Sentimentanalysis: Sentiment analysis neural network trained by fine-tuning BERT, ALBERT, or DistilBERT on the Stanford Sentiment Treebank.
Stars: ✭ 186 (-4.12%)
Bert ocr.pytorch: Unofficial PyTorch implementation of the 2D Attentional Irregular Scene Text Recognizer.
Stars: ✭ 101 (-47.94%)
Speech Transformer: A PyTorch implementation of the Speech Transformer, an end-to-end ASR model with a Transformer network, for Mandarin Chinese.
Stars: ✭ 565 (+191.24%)
PDN: The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21).
Stars: ✭ 44 (-77.32%)