lrjconan / LanczosNetwork

License: MIT
Lanczos Network, Graph Neural Networks, Deep Graph Convolutional Networks, Deep Learning on Graph Structured Data, QM8 Quantum Chemistry Benchmark, ICLR 2019

Programming Languages

Python

Projects that are alternatives to or similar to LanczosNetwork

json-serialization-benchmarking
Miscellaneous benchmarks for JSON serialization on JVM/Android
Stars: ✭ 48 (-83.62%)
Mutual labels:  benchmark
Knowledge distillation via TF2.0
The codes for recent knowledge distillation algorithms and benchmark results via TF2.0 low-level API
Stars: ✭ 87 (-70.31%)
Mutual labels:  benchmark
Github Action Benchmark
GitHub Action for continuous benchmarking to keep performance
Stars: ✭ 264 (-9.9%)
Mutual labels:  benchmark
IGUANA
IGUANA is a benchmark execution framework for querying HTTP endpoints and CLI Applications such as Triple Stores. Contact: [email protected]
Stars: ✭ 22 (-92.49%)
Mutual labels:  benchmark
FewCLUE
FewCLUE: a few-shot learning evaluation benchmark for Chinese
Stars: ✭ 251 (-14.33%)
Mutual labels:  benchmark
PPM
A High-Quality Photography Portrait Matting Benchmark
Stars: ✭ 37 (-87.37%)
Mutual labels:  benchmark
autobench
Benchmark your application on CI
Stars: ✭ 16 (-94.54%)
Mutual labels:  benchmark
Web Tooling Benchmark
JavaScript benchmark for common web developer workloads
Stars: ✭ 290 (-1.02%)
Mutual labels:  benchmark
tls-perf
TLS handshakes benchmarking tool
Stars: ✭ 18 (-93.86%)
Mutual labels:  benchmark
Go Benchmark
Golang benchmarks used for optimizing code
Stars: ✭ 263 (-10.24%)
Mutual labels:  benchmark
glassbench
A micro-benchmark framework to use with cargo bench
Stars: ✭ 29 (-90.1%)
Mutual labels:  benchmark
criterion-compare-action
⚡️📊 Compare the performance of Rust project branches
Stars: ✭ 16 (-94.54%)
Mutual labels:  benchmark
grpc bench
Various gRPC benchmarks
Stars: ✭ 480 (+63.82%)
Mutual labels:  benchmark
liar
Flexible, stand-alone benchmarking
Stars: ✭ 16 (-94.54%)
Mutual labels:  benchmark
Superpixel Benchmark
An extensive evaluation and comparison of 28 state-of-the-art superpixel algorithms on 5 datasets.
Stars: ✭ 275 (-6.14%)
Mutual labels:  benchmark
iohk-monitoring-framework
This framework provides logging, benchmarking and monitoring.
Stars: ✭ 27 (-90.78%)
Mutual labels:  benchmark
Long-Map-Benchmarks
Benchmarking the best way to store long, Object value pairs in a map.
Stars: ✭ 32 (-89.08%)
Mutual labels:  benchmark
Pyaf
PyAF is an Open Source Python library for Automatic Time Series Forecasting built on top of popular pydata modules.
Stars: ✭ 289 (-1.37%)
Mutual labels:  benchmark
Bench
A generic latency benchmarking library.
Stars: ✭ 286 (-2.39%)
Mutual labels:  benchmark
Perfops Cli
A simple command line tool to interact with hundreds of servers around the world.
Stars: ✭ 263 (-10.24%)
Mutual labels:  benchmark

Lanczos Network

This is the PyTorch implementation of Lanczos Network as described in the following ICLR 2019 paper:

@inproceedings{liao2019lanczos,
  title={LanczosNet: Multi-Scale Deep Graph Convolutional Networks},
  author={Liao, Renjie and Zhao, Zhizhen and Urtasun, Raquel and Zemel, Richard},
  booktitle={ICLR},
  year={2019}
}

Visualization

Benchmark

We also provide our own implementations of 9 recent graph neural networks on the QM8 benchmark.

You should be able to reproduce the following results (weighted mean absolute error, MAE × 1.0e-3):

Methods         Validation MAE   Test MAE
GCN-FP          15.06 ± 0.04     14.80 ± 0.09
GGNN            12.94 ± 0.05     12.67 ± 0.22
DCNN            10.14 ± 0.05      9.97 ± 0.09
ChebyNet        10.24 ± 0.06     10.07 ± 0.09
GCN             11.68 ± 0.09     11.41 ± 0.10
MPNN            11.16 ± 0.13     11.08 ± 0.11
GraphSAGE       13.19 ± 0.04     12.95 ± 0.11
GPNN            12.81 ± 0.80     12.39 ± 0.77
GAT             11.39 ± 0.09     11.02 ± 0.06
LanczosNet       9.65 ± 0.19      9.58 ± 0.14
AdaLanczosNet   10.10 ± 0.22      9.97 ± 0.20

Note:

  • The above results are averaged over 3 runs with random seeds {1234, 5678, 9012}.
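
For reference, below is a minimal numpy sketch of how such a per-target weighted MAE can be computed and reported in the table's units of 1.0e-3. The arrays and the uniform weights are placeholders; the exact per-target weighting for QM8 is defined by this repo's evaluation code.

    import numpy as np

    pred = np.random.rand(100, 16)    # hypothetical predictions: 100 molecules, 16 QM8 targets
    true = np.random.rand(100, 16)    # hypothetical ground-truth targets
    weight = np.ones(16)              # hypothetical per-target weights (uniform here)

    mae_per_target = np.mean(np.abs(pred - true), axis=0)            # MAE for each target
    weighted_mae = np.sum(weight * mae_per_target) / np.sum(weight)  # weighted average over targets
    print("weighted MAE (x 1.0e-3):", weighted_mae / 1.0e-3)         # reported in the table's units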

Setup

To set up the experiments, we need to download the preprocessed QM8 data and build our customized operators by running the following script:

./setup.sh

Note:

  • We also provide the script dataset/get_qm8_data.py to preprocess the raw QM8 data; it requires DeepChem to be installed. Due to randomness in DeepChem, it produces a different train/dev/test split than the one used in the paper, so we suggest using our preprocessed data for a fair comparison.
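
If you do want to regenerate the split yourself, a hedged sketch of the invocation is below. It assumes DeepChem is installed and mirrors how the graph-data script is invoked later in this README, so the exact command may differ:

    pip install deepchem
    cd dataset
    PYTHONPATH=../ python get_qm8_data.py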

Dependencies

Python 3, PyTorch (1.0)

Other dependencies can be installed via

pip install -r requirements.txt
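
As an optional sanity check (a sketch, not part of the repo), you can confirm that your environment matches the PyTorch 1.x target before running experiments:

    import torch

    print("PyTorch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    assert torch.__version__.startswith("1."), "this code targets PyTorch 1.0"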

Run Demos

Train

  • To run the training of experiment X, where X is one of {qm8_lanczos_net, qm8_ada_lanczos_net, ...} (a batch launcher sketch follows the notes below):

    python run_exp.py -c config/X.yaml

Note:

  • Please check the folder config for a full list of configuration yaml files.
  • Most hyperparameters in the configuration yaml file are self-explanatory.
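
If you want to queue several experiments, here is a small launcher sketch (not part of the repo) that relies only on the run_exp.py interface documented above; the experiment names are examples and can be any configs from the config folder:

    import subprocess

    experiments = ["qm8_lanczos_net", "qm8_ada_lanczos_net"]  # example config names
    for name in experiments:
        # Equivalent to: python run_exp.py -c config/<name>.yaml
        subprocess.run(["python", "run_exp.py", "-c", f"config/{name}.yaml"], check=True)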

Test

  • After training, you can point the test_model field of the configuration yaml file to your best model snapshot, e.g.,

    test_model: exp/qm8_lanczos_net/LanczosNet_chemistry_2018-Oct-02-11-55-54_25460/model_snapshot_best.pth

  • To run the test of experiment X (a scripted sketch of both steps follows):

    python run_exp.py -c config/X.yaml -t
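
The two steps above can also be scripted. A hedged sketch follows, assuming the config is plain YAML with test_model at the top level (adjust the key lookup if your config nests it) and that PyYAML is available:

    import subprocess
    import yaml  # PyYAML

    cfg_path = "config/qm8_lanczos_net.yaml"
    with open(cfg_path) as f:
        cfg = yaml.safe_load(f)

    # Replace with the path to your own best model snapshot.
    cfg["test_model"] = "exp/qm8_lanczos_net/<your_run_dir>/model_snapshot_best.pth"

    with open(cfg_path, "w") as f:
        yaml.safe_dump(cfg, f)

    subprocess.run(["python", "run_exp.py", "-c", cfg_path, "-t"], check=True)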

Run on General Graph Datasets

I provide example code for a synthetic graph regression problem: given multiple graphs, each with its own node embeddings, the task is to predict a real-valued graph embedding vector per graph.

  • To generate the synthetic dataset:

    cd dataset

    PYTHONPATH=../ python get_graph_data.py

  • To run the training:

    python run_exp.py -c config/graph_lanczos_net.yaml

Note:

  • Please read dataset/get_graph_data.py for more information on how to adapt it to your own graph datasets.
  • I only add support for LanczosNet, by swapping the learnable node embedding for the input node embedding in model/lanczos_net_general.py. It should be straightforward to add support for other models if you are interested.
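
To give a concrete picture of the task described above, here is a toy sketch of what one synthetic sample could look like. Sizes and field names are illustrative only; see dataset/get_graph_data.py for the format actually used.

    import numpy as np

    num_nodes, node_dim, target_dim = 10, 16, 8   # illustrative sizes

    # Symmetric adjacency matrix without self-loops.
    adj = (np.random.rand(num_nodes, num_nodes) < 0.3).astype(np.float32)
    adj = np.triu(adj, 1)
    adj = adj + adj.T

    node_emb = np.random.randn(num_nodes, node_dim).astype(np.float32)  # input node embeddings
    target = np.random.randn(target_dim).astype(np.float32)             # graph-level regression target

    sample = {"adj": adj, "node_emb": node_emb, "target": target}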

Cite

Please cite our paper if you use this code in your research work.

Questions/Bugs

Please submit a GitHub issue or contact [email protected] if you have any questions or find any bugs.
