graphdeeplearning / Graphtransformer

Licence: MIT
Graph Transformer Architecture. Source code for "A Generalization of Transformer Networks to Graphs", DLG-AAAI'21.

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives of or similar to Graphtransformer

Pytorch Original Transformer
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Pretrained IWSLT models are currently included.
Stars: ✭ 411 (+119.79%)
Mutual labels:  attention, transformer
Deeplearning Nlp Models
A small, interpretable codebase containing the re-implementation of a few "deep" NLP models in PyTorch. Colab notebooks to run with GPUs. Models: word2vec, CNNs, transformer, gpt.
Stars: ✭ 64 (-65.78%)
Mutual labels:  attention, transformer
Speech Transformer
A PyTorch implementation of Speech Transformer, an End-to-End ASR with Transformer network on Mandarin Chinese.
Stars: ✭ 565 (+202.14%)
Mutual labels:  attention, transformer
Text Classification Models Pytorch
Implementation of State-of-the-art Text Classification Models in Pytorch
Stars: ✭ 379 (+102.67%)
Mutual labels:  attention, transformer
Bertqa Attention On Steroids
BertQA - Attention on Steroids
Stars: ✭ 112 (-40.11%)
Mutual labels:  attention, transformer
Nlp Tutorials
Simple implementations of NLP models. Tutorials are written in Chinese on my website https://mofanpy.com
Stars: ✭ 394 (+110.7%)
Mutual labels:  attention, transformer
Cell Detr
Official and maintained implementation of the paper Attention-Based Transformers for Instance Segmentation of Cells in Microstructures [BIBM 2020].
Stars: ✭ 26 (-86.1%)
Mutual labels:  attention, transformer
Visual-Transformer-Paper-Summary
Summary of Transformer applications for computer vision tasks.
Stars: ✭ 51 (-72.73%)
Mutual labels:  transformer, attention
Multiturndialogzoo
Multi-turn dialogue baselines written in PyTorch
Stars: ✭ 106 (-43.32%)
Mutual labels:  attention, transformer
Njunmt Tf
An open-source neural machine translation system developed by Natural Language Processing Group, Nanjing University.
Stars: ✭ 97 (-48.13%)
Mutual labels:  attention, transformer
Transformer Tensorflow
TensorFlow implementation of 'Attention Is All You Need (2017. 6)'
Stars: ✭ 319 (+70.59%)
Mutual labels:  attention, transformer
Medical Transformer
Pytorch Code for "Medical Transformer: Gated Axial-Attention for Medical Image Segmentation"
Stars: ✭ 153 (-18.18%)
Mutual labels:  attention, transformer
Keras Transformer
Transformer implemented in Keras
Stars: ✭ 273 (+45.99%)
Mutual labels:  attention, transformer
Neural sp
End-to-end ASR/LM implementation with PyTorch
Stars: ✭ 408 (+118.18%)
Mutual labels:  attention, transformer
ai challenger 2018 sentiment analysis
Fine-grained Sentiment Analysis of User Reviews --- AI CHALLENGER 2018
Stars: ✭ 16 (-91.44%)
Mutual labels:  transformer, attention
Awesome Fast Attention
list of efficient attention modules
Stars: ✭ 627 (+235.29%)
Mutual labels:  attention, transformer
Relation-Extraction-Transformer
NLP: Relation extraction with position-aware self-attention transformer
Stars: ✭ 63 (-66.31%)
Mutual labels:  transformer, attention
visualization
a collection of visualization function
Stars: ✭ 189 (+1.07%)
Mutual labels:  transformer, attention
Nlp Tutorial
Natural Language Processing Tutorial for Deep Learning Researchers
Stars: ✭ 9,895 (+5191.44%)
Mutual labels:  attention, transformer
Sightseq
Computer vision tools for fairseq, containing PyTorch implementation of text recognition and object detection
Stars: ✭ 116 (-37.97%)
Mutual labels:  attention, transformer

Graph Transformer Architecture

Source code for the paper "A Generalization of Transformer Networks to Graphs" by Vijay Prakash Dwivedi and Xavier Bresson, presented at the AAAI'21 Workshop on Deep Learning on Graphs: Methods and Applications (DLG-AAAI'21).

We propose a generalization of the transformer neural network architecture to arbitrary graphs: the Graph Transformer.
Compared to the standard Transformer, the highlights of the presented architecture are listed below, with illustrative code sketches following:

  • The attention mechanism is a function of neighborhood connectivity for each node in the graph.
  • The position encoding is represented by Laplacian eigenvectors, which naturally generalize the sinusoidal positional encodings often used in NLP.
  • The layer normalization is replaced by a batch normalization layer.
  • The architecture is extended to include edge representations, which can be critical for tasks rich in edge information, such as pairwise interactions (bond types in molecules, relationship types in knowledge graphs, etc.).
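
For concreteness, the Laplacian positional encodings mentioned above can be computed as the k eigenvectors of the symmetric normalized Laplacian with the smallest non-trivial eigenvalues. The following is a minimal NumPy sketch, not the repository's implementation (which operates on DGL graphs); the function name is illustrative, and an undirected graph without isolated nodes is assumed.

import numpy as np

def laplacian_positional_encoding(A, k):
    # A: (N, N) dense symmetric adjacency matrix; k: encoding dimension.
    # Builds L = I - D^{-1/2} A D^{-1/2} and returns one k-dim vector per node.
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)                # assumes deg > 0 for all nodes
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    eigval, eigvec = np.linalg.eigh(L)             # eigenvalues in ascending order
    return eigvec[:, 1:k + 1]                      # drop the trivial first eigenvector

Since eigenvectors are defined only up to sign, the paper randomly flips the sign of each eigenvector during training; at inference the encodings are fixed.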

Figure: Block diagram of the Graph Transformer architecture
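
To make the diagram concrete, below is a minimal dense-adjacency PyTorch sketch of a single layer combining the highlights above: attention restricted to each node's neighborhood, edge features injected into the attention scores, and batch normalization around the residual connections. It is a sketch under those assumptions, not the authors' implementation (which is sparse, DGL-based, and also updates the edge representations); all class and variable names are ours.

import torch
import torch.nn as nn

class GraphTransformerLayerSketch(nn.Module):
    def __init__(self, dim, num_heads):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.edge_bias = nn.Linear(dim, num_heads)   # edge features -> per-head score bias
        self.out = nn.Linear(dim, dim)
        self.bn1 = nn.BatchNorm1d(dim)               # batch norm in place of layer norm
        self.ffn = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(), nn.Linear(2 * dim, dim))
        self.bn2 = nn.BatchNorm1d(dim)

    def forward(self, h, e, adj):
        # h: (N, dim) node features; e: (N, N, dim) edge features;
        # adj: (N, N) bool adjacency, assumed True on the diagonal (self-loops),
        # so every softmax row has at least one unmasked entry.
        N, dim = h.shape
        q, k, v = (t.view(N, self.num_heads, self.head_dim)
                   for t in self.qkv(h).chunk(3, dim=-1))

        # Scaled dot-product scores per head: (heads, N, N).
        scores = torch.einsum('ihd,jhd->hij', q, k) / self.head_dim ** 0.5
        scores = scores + self.edge_bias(e).permute(2, 0, 1)  # inject edge information

        # Attention as a function of neighborhood connectivity:
        # non-neighbors are masked out before the softmax.
        scores = scores.masked_fill(~adj.unsqueeze(0), float('-inf'))
        attn = scores.softmax(dim=-1)

        agg = torch.einsum('hij,jhd->ihd', attn, v).reshape(N, dim)
        h = self.bn1(h + self.out(agg))   # residual + batch norm
        h = self.bn2(h + self.ffn(h))     # feed-forward sub-layer, same pattern
        return h

Stacking such layers on node features augmented with the Laplacian positional encodings yields the overall architecture shown in the figure.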

1. Repo installation

This project is based on the benchmarking-gnns repository.

Follow these instructions to install the benchmark and set up the environment.


2. Download datasets

Proceed as follows to download the datasets used to evaluate Graph Transformer.


3. Reproducibility

Use this page to run the code and reproduce the published results.


4. Reference

📃 Paper on arXiv
📝 Blog on Towards Data Science
🎥 Video on YouTube

@article{dwivedi2021generalization,
  title={A Generalization of Transformer Networks to Graphs},
  author={Dwivedi, Vijay Prakash and Bresson, Xavier},
  journal={AAAI Workshop on Deep Learning on Graphs: Methods and Applications},
  year={2021}
}



