
LeviBorodenko / dgcnn

Licence: MIT license
Clean & Documented TF2 implementation of "An end-to-end deep learning architecture for graph classification" (M. Zhang et al., 2018).

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives to or similar to dgcnn

resolutions-2019
A list of data mining and machine learning papers that I implemented in 2019.
Stars: ✭ 19 (-9.52%)
Mutual labels:  attention-mechanism, graph-embedding, graph-classification
Awesome Graph Classification
A collection of important graph embedding, classification and representation learning papers with implementations.
Stars: ✭ 4,309 (+20419.05%)
Mutual labels:  attention-mechanism, graph-embedding, graph-classification
GE-FSG
Graph Embedding via Frequent Subgraphs
Stars: ✭ 39 (+85.71%)
Mutual labels:  graph-embedding, graph-classification
PDN
The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21)
Stars: ✭ 44 (+109.52%)
Mutual labels:  graph-classification, gnn
FEATHER
The reference implementation of FEATHER from the CIKM '20 paper "Characteristic Functions on Graphs: Birds of a Feather, from Statistical Descriptors to Parametric Models".
Stars: ✭ 34 (+61.9%)
Mutual labels:  graph-embedding, graph-classification
memory-compressed-attention
Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia By Summarizing Long Sequences"
Stars: ✭ 47 (+123.81%)
Mutual labels:  attention-mechanism
multi-label-text-classification
Multi-label text classification using ConvNet and graph embedding (TensorFlow implementation)
Stars: ✭ 44 (+109.52%)
Mutual labels:  graph-embedding
spatio-temporal-brain
A Deep Graph Neural Network Architecture for Modelling Spatio-temporal Dynamics in rs-fMRI Data
Stars: ✭ 22 (+4.76%)
Mutual labels:  gnn
Neural-Chatbot
A Neural Network based Chatbot
Stars: ✭ 68 (+223.81%)
Mutual labels:  attention-mechanism
uniformer-pytorch
Implementation of Uniformer, a simple attention and 3d convolutional net that achieved SOTA in a number of video classification tasks, debuted in ICLR 2022
Stars: ✭ 90 (+328.57%)
Mutual labels:  attention-mechanism
hexia
Mid-level PyTorch Based Framework for Visual Question Answering.
Stars: ✭ 24 (+14.29%)
Mutual labels:  attention-mechanism
sparsebn
Software for learning sparse Bayesian networks
Stars: ✭ 41 (+95.24%)
Mutual labels:  graphical-models
Visual-Attention-Model
Chainer implementation of Deepmind's Visual Attention Model paper
Stars: ✭ 27 (+28.57%)
Mutual labels:  attention-mechanism
glsp-examples
Example diagram editors built with Eclipse GLSP
Stars: ✭ 28 (+33.33%)
Mutual labels:  graphical-models
ChangeFormer
Official PyTorch implementation of our IGARSS'22 paper: A Transformer-Based Siamese Network for Change Detection
Stars: ✭ 220 (+947.62%)
Mutual labels:  attention-mechanism
awesome-efficient-gnn
Code and resources on scalable and efficient Graph Neural Networks
Stars: ✭ 498 (+2271.43%)
Mutual labels:  gnn
NARRE
This is our implementation of NARRE: Neural Attentional Regression with Review-level Explanations
Stars: ✭ 100 (+376.19%)
Mutual labels:  attention-mechanism
organic-chemistry-reaction-prediction-using-NMT
organic chemistry reaction prediction using NMT with Attention
Stars: ✭ 30 (+42.86%)
Mutual labels:  attention-mechanism
Causing
Causing: CAUsal INterpretation using Graphs
Stars: ✭ 47 (+123.81%)
Mutual labels:  gnn
selective search
Python implementation of selective search
Stars: ✭ 40 (+90.48%)
Mutual labels:  paper-implementations

DGCNN [TensorFlow]

TensorFlow 2 implementation of "An end-to-end deep learning architecture for graph classification", based on the work by M. Zhang et al., 2018.

Moreover, we offer an attention-based modification of the above, utilising graph attention (Veličković et al., 2017) to learn edge weights.

Installation

Simply run pip install dgcnn. The only dependency is tensorflow>=2.0.0.
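To quickly verify the installation, one can import the package and check the TensorFlow version (nothing here is specific to dgcnn beyond the import itself):

import tensorflow as tf
import dgcnn  # installed via pip install dgcnn

# dgcnn only requires TensorFlow 2.x
print(tf.__version__)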

Usage

The core data structure is the graph signal. If we have N nodes in a graph, each with C observed features, then the graph signal is the tensor of shape (batch, N, C) containing the data produced by all nodes. Often we have sequences of graph signals in a time series; we call these temporal graph signals and assume a shape of (batch, timesteps, N, C). For each graph signal we also need the corresponding adjacency matrices of shape (batch, N, N) or (batch, timesteps, N, N) for non-temporal and temporal data, respectively. While DGCNNs can operate on graphs with different node counts, C should always be the same and each batch should only contain graphs with the same number of nodes.
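For example, the expected shapes can be sketched with NumPy (the batch size, number of timesteps and the all-ones adjacency matrices here are arbitrary illustrations):

import numpy as np

N, C = 10, 5              # nodes per graph and observed features per node
batch, timesteps = 32, 8  # arbitrary batch size and sequence length

# non-temporal graph signals and adjacency matrices
signal = np.random.normal(size=(batch, N, C))  # (batch, N, C)
adjacency = np.ones((batch, N, N))             # (batch, N, N)

# temporal variants
temporal_signal = np.random.normal(size=(batch, timesteps, N, C))  # (batch, timesteps, N, C)
temporal_adjacency = np.ones((batch, timesteps, N, N))             # (batch, timesteps, N, N)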

The DeepGraphConvolution Layer

This adaptable layer contains the whole DGCNN architecture and operates on both temporal and non-temporal data. It takes the graph signals and their corresponding adjacency matrices and performs the following steps (as described in the paper):

  1. It iteratively applies GraphConvolution layers h times with variable hidden feature dimensions c_1, ..., c_h.

  2. After that, it concatenates the outputs of all graph convolutions into one tensor of shape (..., N, c_1 + ... + c_h).

  3. Finally, it applies SortPooling as described in the paper to obtain the output tensor of shape (..., k, c_1 + ... + c_h).
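As a quick sanity check of the shapes above, here is a plain-Python sketch (not library code) using the hidden feature dimensions and k from the example further down:

# shape bookkeeping only; no TensorFlow required
c = [10, 5, 2]  # hidden feature dimensions c_1, ..., c_h
k = 5           # nodes kept by SortPooling
concat_features = sum(c)                      # 17 features per node after concatenation
output_shape = ("batch", k, concat_features)  # (batch, 5, 17)
print(output_shape)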

Import this layer with from dgcnn.components import DeepGraphConvolution.

Initialize it with the following parameters; the optional ones are listed with their defaults:

hidden_conv_units (required): List of the hidden feature dimensions used in the graph convolutions; c_1, ..., c_h in the paper.
k (required): Number of nodes to keep after SortPooling.
flatten_signals (default: False): If True, flattens the last two dimensions of the output tensor into one.
attention_heads (default: None): If given, the graph convolutions use an attention-based transition matrix (built with dgcnn.attention.AttentionMechanism) instead of the default transition matrix from the paper. This sets the number of attention heads used.
attention_units (default: None): Must also be provided if attention_heads is set. This is the size of the internal embedding used by the attention mechanism.
use_sortpooling (default: True): Whether or not to apply SortPooling at the end of the procedure. If False, the concatenated graph convolution outputs are returned directly.
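For instance, the attention-based transition matrix controlled by attention_heads and attention_units could be enabled like this (a minimal sketch; the head count and embedding size are arbitrary choices, not recommendations):

from dgcnn.components import DeepGraphConvolution

# same hidden dimensions and k as in the example below, but with an
# attention-based transition matrix; 4 heads and 16 units are arbitrary
attention_dgcnn = DeepGraphConvolution(
    [10, 5, 2],
    k=5,
    attention_heads=4,
    attention_units=16,
)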

Thus, suppose we have non-temporal graph signals with 10 nodes and 5 features each, and we would like to apply a DGCNN containing 3 graph convolutions with hidden feature dimensions of 10, 5 and 2, followed by SortPooling that keeps the 5 most relevant nodes. Then we would run:

from dgcnn.components import DeepGraphConvolution
from tensorflow.keras.layers import Input
from tensorflow.keras import Model
import numpy as np


# generating random graph signals as test data
graph_signal = np.random.normal(size=(100, 10, 5))

# corresponding fully connected adjacency matrices
adjacency = np.ones((100, 10, 10))

# inputs to the DGCNN
X = Input(shape=(10, 5), name="graph_signal")
E = Input(shape=(10, 10), name="adjacency")

# DGCNN
# Note that we pass the signals and adjacencies as a tuple.
# The graph signal always goes first!
output = DeepGraphConvolution([10, 5, 2], k=5)((X, E))

# defining model
model = Model(inputs=[X, E], outputs=output)
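The model above ends at the DGCNN output of shape (batch, 5, 17). To train an actual graph classifier on top of it, one could, for example, flatten that output and attach a small Dense head; the head, loss and random labels below are illustrative assumptions and not part of dgcnn:

from tensorflow.keras.layers import Dense, Flatten

# hypothetical binary-classification head on top of the DGCNN output
prediction = Dense(1, activation="sigmoid")(Flatten()(output))
classifier = Model(inputs=[X, E], outputs=prediction)
classifier.compile(optimizer="adam", loss="binary_crossentropy")

# random labels, only to demonstrate the expected input format
labels = np.random.randint(0, 2, size=(100, 1))
classifier.fit({"graph_signal": graph_signal, "adjacency": adjacency}, labels, epochs=1)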

Further layers and features

The documentation contains information on how to use the internal SortPooling, GraphConvolution and AttentionMechanism layers, and also describes further optional parameters such as regularisers, initialisers and constraints.

Contribute

Bug reports, fixes and additional features are always welcome! Make sure to run the tests with python setup.py test and to write your own tests for new features. Thanks.
