508 open source projects that are alternatives to or similar to Graphtransformer

Wenet
Production First and Production Ready End-to-End Speech Recognition Toolkit
Stars: ✭ 617 (+229.95%)
Mutual labels:  transformer
Image-Captioning
Image Captioning with Keras
Stars: ✭ 60 (-67.91%)
Mutual labels:  attention
Asr syllable
Research on acoustic models for speech recognition based on convolutional neural networks
Stars: ✭ 127 (-32.09%)
Mutual labels:  attention
transformer-slt
Sign Language Translation with Transformers (COLING'2020, ECCV'20 SLRTP Workshop)
Stars: ✭ 92 (-50.80%)
Mutual labels:  transformer
Typescript Is
Stars: ✭ 595 (+218.18%)
Mutual labels:  transformer
Multihead Siamese Nets
Implementation of Siamese Neural Networks built on a multi-head attention mechanism for the text semantic similarity task.
Stars: ✭ 144 (-22.99%)
Mutual labels:  attention
Bert ocr.pytorch
Unofficial PyTorch implementation of 2D Attentional Irregular Scene Text Recognizer
Stars: ✭ 101 (-45.99%)
Mutual labels:  transformer
Jazz transformer
Transformer-XL for Jazz music composition. Paper: "The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-Composed Music through Quantitative Measures", ISMIR 2020
Stars: ✭ 36 (-80.75%)
Mutual labels:  transformer
linformer
Implementation of Linformer for Pytorch
Stars: ✭ 119 (-36.36%)
Mutual labels:  transformer
Typescript Transform Macros
Typescript Transform Macros
Stars: ✭ 85 (-54.55%)
Mutual labels:  transformer
laravel5-hal-json
Laravel 5 HAL+JSON API Transformer Package
Stars: ✭ 15 (-91.98%)
Mutual labels:  transformer
Attention Is All You Need Pytorch
A PyTorch implementation of the Transformer model in "Attention is All You Need".
Stars: ✭ 6,070 (+3145.99%)
Mutual labels:  attention
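Many of the entries on this page share the "attention" label; the mechanism they build on is the scaled dot-product attention from "Attention is All You Need". Below is a minimal, self-contained PyTorch sketch of that operation, illustrative only and not taken from any listed repository; the function name and tensor shapes are assumptions for the example.

```python
# Illustrative sketch of scaled dot-product attention:
# softmax(Q K^T / sqrt(d_k)) V
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k)
    d_k = q.size(-1)
    # Similarity of every query with every key, scaled by sqrt(d_k)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (batch, heads, seq, seq)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)  # attention distribution over keys
    return weights @ v                   # weighted sum of values
```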
Self Attentive Tensorflow
Tensorflow implementation of "A Structured Self-Attentive Sentence Embedding"
Stars: ✭ 189 (+1.07%)
Mutual labels:  attention
protein-transformer
Predicting protein structure through sequence modeling
Stars: ✭ 77 (-58.82%)
Mutual labels:  attention
graphtrans
Representing Long-Range Context for Graph Neural Networks with Global Attention
Stars: ✭ 45 (-75.94%)
Mutual labels:  transformer
Self Attention Classification
Document classification using LSTM + self-attention
Stars: ✭ 84 (-55.08%)
Mutual labels:  attention
Xpersona
XPersona: Evaluating Multilingual Personalized Chatbot
Stars: ✭ 54 (-71.12%)
Mutual labels:  transformer
Performer Pytorch
An implementation of Performer, a linear attention-based transformer, in Pytorch
Stars: ✭ 546 (+191.98%)
Mutual labels:  attention
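Performer's "linear attention" replaces the softmax with a kernel feature map so that attention cost grows linearly, rather than quadratically, with sequence length. The sketch below shows the underlying kernel trick using the simple elu(x) + 1 feature map from the linear-transformer literature, not Performer's FAVOR+ random features; all names and shapes are illustrative assumptions, not the library's API.

```python
# Minimal sketch of (non-causal) kernelized linear attention.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, seq_len, d)
    q = F.elu(q) + 1  # positive feature map phi(q)
    k = F.elu(k) + 1  # positive feature map phi(k)
    # Associativity: (phi(Q) phi(K)^T) V == phi(Q) (phi(K)^T V),
    # so the cost is O(n * d^2) instead of O(n^2 * d).
    kv = torch.einsum("bhnd,bhne->bhde", k, v)  # phi(K)^T V: (batch, heads, d, d)
    # Per-query normalizer: phi(q) . sum_n phi(k_n)
    z = 1.0 / (q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1) + eps)
    return torch.einsum("bhnd,bhde->bhne", q, kv) * z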
laravel-scene
Laravel Transformer
Stars: ✭ 27 (-85.56%)
Mutual labels:  transformer
Ccnet Pure Pytorch
Criss-Cross Attention for semantic segmentation in pure PyTorch, with a faster and more precise implementation.
Stars: ✭ 124 (-33.69%)
Mutual labels:  attention
tf2-transformer-chatbot
Transformer Chatbot in TensorFlow 2 with TPU support.
Stars: ✭ 94 (-49.73%)
Mutual labels:  transformer
Athena
An open-source sequence-to-sequence-based speech processing engine
Stars: ✭ 542 (+189.84%)
Mutual labels:  transformer
image-classification
A collection of SOTA Image Classification Models in PyTorch
Stars: ✭ 70 (-62.57%)
Mutual labels:  transformer
Gpt2 Chitchat
GPT2 for Chinese chitchat (implements the MMI idea from DialoGPT)
Stars: ✭ 1,230 (+557.75%)
Mutual labels:  transformer
stagin
STAGIN: Spatio-Temporal Attention Graph Isomorphism Network
Stars: ✭ 34 (-81.82%)
Mutual labels:  attention
Rust Bert
Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
Stars: ✭ 510 (+172.73%)
Mutual labels:  transformer
towhee
Towhee is a framework dedicated to making neural data processing pipelines simple and fast.
Stars: ✭ 821 (+339.04%)
Mutual labels:  transformer
Effective transformer
Running BERT without Padding
Stars: ✭ 169 (-9.63%)
Mutual labels:  transformer
YOLOv5-Lite
🍅🍅🍅 YOLOv5-Lite: lighter, faster, and easier to deploy. Evolved from YOLOv5; the model is only 930+ KB (int8) or 1.7 MB (fp16). It can reach 10+ FPS on a Raspberry Pi 4B with a 320×320 input.
Stars: ✭ 1,230 (+557.75%)
Mutual labels:  transformer
Lightseq
LightSeq: A High Performance Inference Library for Sequence Processing and Generation
Stars: ✭ 501 (+167.91%)
Mutual labels:  transformer
Transformers without tears
Transformers without Tears: Improving the Normalization of Self-Attention
Stars: ✭ 80 (-57.22%)
Mutual labels:  transformer
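Among other changes, "Transformers without Tears" advocates pre-norm residual connections (normalize before the sublayer, x + f(norm(x))) over the original post-norm placement (norm(x + f(x))), which makes deep transformers easier to train. A hypothetical minimal sketch of a pre-norm sublayer in PyTorch, not the paper's code, is shown below; the paper additionally proposes ScaleNorm and FixNorm, which this sketch omits.

```python
# Sketch of a pre-norm residual sublayer wrapper.
import torch.nn as nn

class PreNormResidual(nn.Module):
    def __init__(self, dim, sublayer):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.sublayer = sublayer  # e.g., an attention or feed-forward module

    def forward(self, x):
        # Pre-norm: normalize the input to the sublayer, then add the residual.
        return x + self.sublayer(self.norm(x))
```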
Awesome Visual Transformer
Collects papers about transformers for vision. Awesome Transformer with Computer Vision (CV).
Stars: ✭ 475 (+154.01%)
Mutual labels:  transformer
gnn-lspe
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
Stars: ✭ 165 (-11.76%)
Mutual labels:  attention
Transformer In Generating Dialogue
An implementation of "Attention Is All You Need" with a Chinese corpus
Stars: ✭ 121 (-35.29%)
Mutual labels:  transformer
datastories-semeval2017-task6
Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (-89.3%)
Mutual labels:  attention
Punctuator2
A bidirectional recurrent neural network model with attention mechanism for restoring missing punctuation in unsegmented text
Stars: ✭ 483 (+158.29%)
Mutual labels:  attention
cometa
Corpus of Online Medical EnTities: the cometA corpus
Stars: ✭ 31 (-83.42%)
Mutual labels:  transformer
Machine Learning
My Attempt(s) In The World Of ML/DL....
Stars: ✭ 78 (-58.29%)
Mutual labels:  attention
pytorch-gpt-x
Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism.
Stars: ✭ 21 (-88.77%)
Mutual labels:  transformer
Chinesenre
Chinese entity relation extraction, PyTorch, BiLSTM + attention
Stars: ✭ 463 (+147.59%)
Mutual labels:  attention
Fairseq Image Captioning
Transformer-based image captioning extension for pytorch/fairseq
Stars: ✭ 180 (-3.74%)
Mutual labels:  transformer
mtad-gat-pytorch
PyTorch implementation of MTAD-GAT (Multivariate Time-Series Anomaly Detection via Graph Attention Networks) by Zhao et al. (2020, https://arxiv.org/abs/2009.02040).
Stars: ✭ 85 (-54.55%)
Mutual labels:  attention
transformer-tensorflow2.0
Transformer in TensorFlow 2.0
Stars: ✭ 53 (-71.66%)
Mutual labels:  transformer
Structured Self Attention
A Structured Self-attentive Sentence Embedding
Stars: ✭ 459 (+145.45%)
Mutual labels:  attention
TransMorph Transformer for Medical Image Registration
TransMorph: Transformer for Unsupervised Medical Image Registration (PyTorch)
Stars: ✭ 130 (-30.48%)
Mutual labels:  transformer
Distre
[ACL 19] Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction
Stars: ✭ 75 (-59.89%)
Mutual labels:  transformer
jeelizGlanceTracker
JavaScript/WebGL library: detects whether the user is looking at the screen from the webcam video feed. Lightweight and robust to all lighting conditions. Useful for playing/pausing videos depending on whether the user is looking, or for person detection.
Stars: ✭ 68 (-63.64%)
Mutual labels:  attention
Omninet
Official Pytorch implementation of "OmniNet: A unified architecture for multi-modal multi-task learning" | Authors: Subhojeet Pramanik, Priyanka Agrawal, Aman Hussain
Stars: ✭ 448 (+139.57%)
Mutual labels:  transformer
Nlp Models Tensorflow
Gathers machine learning and TensorFlow deep learning models for NLP problems; 1.13 < TensorFlow < 2.0
Stars: ✭ 1,603 (+757.22%)
Mutual labels:  attention
Nlp Experiments In Pytorch
PyTorch repository for text categorization and NER experiments in Turkish and English.
Stars: ✭ 35 (-81.28%)
Mutual labels:  transformer
laravel-mutate
Mutate Laravel attributes
Stars: ✭ 13 (-93.05%)
Mutual labels:  transformer
PAML
Personalizing Dialogue Agents via Meta-Learning
Stars: ✭ 114 (-39.04%)
Mutual labels:  transformer
Etagger
Reference TensorFlow code for named entity tagging
Stars: ✭ 100 (-46.52%)
Mutual labels:  transformer
Attentioncluster
TensorFlow Implementation of "Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification"
Stars: ✭ 33 (-82.35%)
Mutual labels:  attention
Filipino-Text-Benchmarks
Open-source benchmark datasets and pretrained transformer models in the Filipino language.
Stars: ✭ 22 (-88.24%)
Mutual labels:  transformer
attention-is-all-you-need-paper
Implementation of Vaswani, Ashish, et al. "Attention is all you need." Advances in neural information processing systems. 2017.
Stars: ✭ 97 (-48.13%)
Mutual labels:  transformer
Attentive Neural Processes
Implementation of "recurrent attentive neural processes" to forecast power usage (with an LSTM baseline and MC Dropout)
Stars: ✭ 33 (-82.35%)
Mutual labels:  attention
trapper
State-of-the-art NLP through transformer models in a modular design and consistent APIs.
Stars: ✭ 28 (-85.03%)
Mutual labels:  transformer
Jukebox
Code for the paper "Jukebox: A Generative Model for Music"
Stars: ✭ 4,863 (+2500.53%)
Mutual labels:  transformer
301-360 of 508 similar projects