Im2LaTeX: An implementation of the Show, Attend and Tell paper in TensorFlow, for the OpenAI Im2LaTeX suggested problem.
Stars: ✭ 16 (-56.76%)
LSTM-Attention: A Comparison of LSTMs and Attention Mechanisms for Forecasting Financial Time Series.
Stars: ✭ 53 (+43.24%)
kaggle-champs: Code for the CHAMPS Predicting Molecular Properties Kaggle competition.
Stars: ✭ 49 (+32.43%)
visual-compatibility: Context-Aware Visual Compatibility Prediction (https://arxiv.org/abs/1902.03646).
Stars: ✭ 92 (+148.65%)
SA-DL: Sentiment Analysis with Deep Learning models. Implemented with TensorFlow and Keras.
Stars: ✭ 35 (-5.41%)
3DInfomax: Making self-supervised learning work on molecules by using their 3D geometry to pre-train GNNs. Implemented in DGL and PyTorch Geometric.
Stars: ✭ 107 (+189.19%)
Optic-Disc-Unet: Attention U-Net model with post-processing for retinal optic disc segmentation.
Stars: ✭ 77 (+108.11%)
PyNets: A Reproducible Workflow for Structural and Functional Connectome Ensemble Learning.
Stars: ✭ 114 (+208.11%)
SelfGNN: A PyTorch implementation of the paper "SelfGNN: Self-supervised Graph Neural Networks without explicit negative sampling", which appeared in the International Workshop on Self-Supervised Learning for the Web (SSL'21) at the Web Conference 2021 (WWW'21).
Stars: ✭ 24 (-35.14%)
ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations (AAAI 2020).
Stars: ✭ 83 (+124.32%)
amta-net: Asymmetric Multi-Task Attention Network for Prostate Bed Segmentation in CT Images.
Stars: ✭ 26 (-29.73%)
hexia: Mid-level PyTorch-based framework for Visual Question Answering.
Stars: ✭ 24 (-35.14%)
grb: Graph Robustness Benchmark, a scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.
Stars: ✭ 70 (+89.19%)
rnn-text-classification-tf: TensorFlow implementation of attention-based bidirectional RNN text classification.
Stars: ✭ 26 (-29.73%)
how attentive are gats: Code for the paper "How Attentive are Graph Attention Networks?" (ICLR 2022).
Stars: ✭ 200 (+440.54%)
SimP-GCN: Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks".
Stars: ✭ 43 (+16.22%)
question-generation: Neural Models for Key Phrase Detection and Question Generation.
Stars: ✭ 29 (-21.62%)
CIAN: Implementation of the Character-level Intra Attention Network (CIAN) for Natural Language Inference (NLI) on the SNLI and MultiNLI corpora.
Stars: ✭ 17 (-54.05%)
lstm-attention: Attention-based bidirectional LSTM for classification tasks (ICASSP).
Stars: ✭ 87 (+135.14%)
SIAN: Code and data for the ECML-PKDD paper "Social Influence Attentive Neural Network for Friend-Enhanced Recommendation".
Stars: ✭ 25 (-32.43%)
Pro-GNN: Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks".
Stars: ✭ 202 (+445.95%)
Attentionalpoolingaction: Code and model release for the NIPS 2017 paper "Attentional Pooling for Action Recognition".
Stars: ✭ 248 (+570.27%)
uniformer-pytorch: Implementation of Uniformer, a simple attention and 3D convolutional net that achieved SOTA on a number of video classification tasks, debuted at ICLR 2022.
Stars: ✭ 90 (+143.24%)
awesome-efficient-gnn: Code and resources on scalable and efficient Graph Neural Networks.
Stars: ✭ 498 (+1245.95%)
Entity-Graph-VLN: Code for the NeurIPS 2021 paper "Language and Visual Entity Relationship Graph for Agent Navigation".
Stars: ✭ 34 (-8.11%)
GalaXC: Graph Neural Networks with Labelwise Attention for Extreme Classification.
Stars: ✭ 28 (-24.32%)
EgoCNN: Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019).
Stars: ✭ 16 (-56.76%)
ntds 2019: Material for the EPFL master course "A Network Tour of Data Science", edition 2019.
Stars: ✭ 62 (+67.57%)
STAM-pytorch: Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification.
Stars: ✭ 109 (+194.59%)
ChangeFormer: Official PyTorch implementation of the IGARSS'22 paper "A Transformer-Based Siamese Network for Change Detection".
Stars: ✭ 220 (+494.59%)
SubGNN: Subgraph Neural Networks (NeurIPS 2020).
Stars: ✭ 136 (+267.57%)
GNNLens2: Visualization tool for Graph Neural Networks.
Stars: ✭ 155 (+318.92%)
demo-routenet: Demo of RouteNet from ACM SIGCOMM'19.
Stars: ✭ 79 (+113.51%)
RETRO-pytorch: Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch.
Stars: ✭ 473 (+1178.38%)
AC-VRNN: PyTorch code for the CVIU paper "AC-VRNN: Attentive Conditional-VRNN for Multi-Future Trajectory Prediction".
Stars: ✭ 21 (-43.24%)
RioGNN: Reinforced Neighborhood Selection Guided Multi-Relational Graph Neural Networks.
Stars: ✭ 46 (+24.32%)
En-transformer: Implementation of E(n)-Transformer, which extends the ideas of Welling's E(n)-Equivariant Graph Neural Network to attention.
Stars: ✭ 131 (+254.05%)
DCGCN: Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning (authors' MXNet implementation for the TACL19 paper).
Stars: ✭ 73 (+97.3%)
Transformers-RL: An easy PyTorch implementation of "Stabilizing Transformers for Reinforcement Learning".
Stars: ✭ 107 (+189.19%)
TS3000 TheChatBOT: A social networking chatbot trained on a Reddit dataset. It supports open-ended queries, built on the concept of Neural Machine Translation. Beware of it being sarcastic, just like its creator 😝. It uses the PyTorch framework and Python 3.
Stars: ✭ 20 (-45.95%)
DARNN: A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction.
Stars: ✭ 90 (+143.24%)
NARRE: Our implementation of NARRE (Neural Attentional Regression with Review-level Explanations).
Stars: ✭ 100 (+170.27%)
LR-GCCF: "Revisiting Graph-Based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach" (AAAI 2020).
Stars: ✭ 99 (+167.57%)
dgcnn: Clean & documented TF2 implementation of "An end-to-end deep learning architecture for graph classification" (M. Zhang et al., 2018).
Stars: ✭ 21 (-43.24%)
pyg autoscale: Implementation of "GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings" in PyTorch.
Stars: ✭ 136 (+267.57%)
axial-attention: Implementation of axial attention, attending to multi-dimensional data efficiently.
Stars: ✭ 245 (+562.16%)
robust-gcn: Implementation of the paper "Certifiable Robustness and Robust Training for Graph Convolutional Networks".
Stars: ✭ 35 (-5.41%)
memory-compressed-attention: Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences".
Stars: ✭ 47 (+27.03%)
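Most of the repositories above build some variant of the same scaled dot-product attention primitive. As a rough orientation only, here is a minimal PyTorch sketch of that primitive; the function name and shapes are illustrative and not taken from any listed project:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: tensors of shape (batch, seq_len, dim).
    # mask: optional boolean tensor broadcastable to (batch, seq_len, seq_len),
    # True where a query is allowed to attend to a key.
    d_k = q.size(-1)
    # Pairwise query-key similarity, scaled so the softmax stays well-behaved.
    scores = torch.matmul(q, k.transpose(-2, -1)) / d_k ** 0.5
    if mask is not None:
        scores = scores.masked_fill(~mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)   # attention distribution per query
    return torch.matmul(weights, v)       # weighted mix of the values

# Example: 2 sequences of 10 tokens with 64-dim features attend to themselves.
x = torch.randn(2, 10, 64)
out = scaled_dot_product_attention(x, x, x)  # shape: (2, 10, 64)
```

The listed projects differ mainly in where this primitive is applied (graph neighborhoods, image axes, time steps, video patches) and in how the score matrix is structured or sparsified.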