
Multi-Graph Transformer for Free-Hand Sketch Recognition

This repository contains the official source code for the paper "Multi-Graph Transformer for Free-Hand Sketch Recognition" (arXiv:1912.11258), by Peng Xu, Chaitanya K. Joshi, and Xavier Bresson.

Blog post in Chinese (中文讲解)

Requirements

Ubuntu 16.04.10

Anaconda 4.7.10

Python 3.7

PyTorch 1.2.0

Instructions for installing (cloning) our environment are given below. First of all, please install Anaconda.

Our hardware environment: 2 Intel(R) Xeon(R) CPUs (E5-2690 v4 @ 2.60GHz), 128 GB RAM, 4 GTX 1080 Ti GPUs.

All of the following code can run on a single GTX 1080 Ti GPU.

Usage (How to Train Our MGT)

# 1. Choose your workspace and download our repository.
cd ${CUSTOMIZED_WORKSPACE}
git clone https://github.com/PengBoXiangShang/multigraph_transformer

# 2. Enter the directory.
cd multigraph_transformer

# 3. Clone our environment, and activate it.
conda env create --name ${CUSTOMIZED_ENVIRONMENT_NAME} --file ./MGT_environment.yml
conda activate ${CUSTOMIZED_ENVIRONMENT_NAME}

# 4. Download our training/evaluation/testing datasets and the associated URL lists from our Google Drive folder, then extract them into the './dataloader' folder. data.tar.gz is 118 MB, and its MD5 checksum is 8ce7347dfcc9f02376319ce321bbdd31.
cd ./dataloader
chmod +x download.sh
./download.sh
# If the 'download.sh' script does not work for you, please manually download data.tar.gz to the current path via this link: https://drive.google.com/open?id=1I4XKajNP6wtCpek4ZoCoVr2roGSOYBbW .
tar -zxvf data.tar.gz
rm -f data.tar.gz
cd ..
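Before extracting, it is worth verifying that the archive downloaded intact. The repository does not ship such a helper; the following is a small illustrative Python snippet (the file name and expected digest are taken from the step above) that computes an MD5 checksum:

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in 1 MB chunks
    so large archives do not have to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage for the archive downloaded above:
#   assert md5sum("data.tar.gz") == "8ce7347dfcc9f02376319ce321bbdd31"
```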

# 5. Train our MGT. Please see details in our code annotations.
# Please set the input arguments based on your case.
# When the program starts running, a folder named 'experimental_results/${CUSTOMIZED_EXPERIMENT_NAME}' will be created automatically to save your logs, checkpoints, and TensorBoard curves.
python train_gra_transf_inpt5_new_dropout_2layerMLP_2nn4nnjnn_early_stop.py \
    --exp ${CUSTOMIZED_EXPERIMENT_NAME} \
    --batch_size ${CUSTOMIZED_SIZE} \
    --num_workers ${CUSTOMIZED_NUMBER} \
    --gpu ${CUSTOMIZED_GPU_NUMBER}

# We obtained the performance of MGT #17 (reported in Table 3 of our paper) by running the following command.
python train_gra_transf_inpt5_new_dropout_2layerMLP_2nn4nnjnn_early_stop.py \
    --exp train_gra_transf_inpt5_new_dropout_2layerMLP_2nn4nnjnn_early_stop_001 \
    --batch_size 192 \
    --num_workers 12 \
    --gpu 1
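The training commands above follow a conventional argparse-style interface. As a rough sketch of how such a script might consume these flags (the actual parsing code in the repository may differ in defaults and help text):

```python
import argparse

def parse_args(argv=None):
    """Parse the training flags used in the commands above (illustrative)."""
    parser = argparse.ArgumentParser(description="Train MGT (sketch, not the repo's exact parser)")
    parser.add_argument("--exp", type=str, required=True,
                        help="experiment name; outputs go to experimental_results/<exp>")
    parser.add_argument("--batch_size", type=int, default=192,
                        help="mini-batch size")
    parser.add_argument("--num_workers", type=int, default=12,
                        help="number of data-loading workers")
    parser.add_argument("--gpu", type=int, default=0,
                        help="index of the GPU to use")
    return parser.parse_args(argv)
```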

Our Experimental Results

In order to fully demonstrate the traits of our MGT to both graph and sketch researchers, we will provide the code for all the ablative models reported in our paper. We also provide our experimental results, including training log files, model checkpoints, and TensorBoard curves. The following table provides the download links (Google Drive), corresponding to Table 3 in our paper.

"GT #1" is the original Transformer [Vaswani et al.], representing each input graph as a fully-connected graph.
"GT #7" is a Transformer variant representing each input graph as a sparse graph, i.e., the A^{2-hop} structure defined in our paper.
"MGT #13" is an ablative variant of our MGT, representing each input graph as two sparse graphs, i.e., A^{2-hop} and A^{global}.
"MGT #17" is the full MGT model, representing each input graph as three sparse graphs, i.e., A^{1-hop}, A^{2-hop}, and A^{global}.
The following table and diagram show that multiple sparsely-connected graphs improve the performance of the Transformer.
Please see our arXiv paper for details.
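To make the three graph structures concrete, here is an illustrative pure-Python construction of the adjacency patterns, under the simplifying assumption that a sketch is a single sequence of n points in drawing order (the repository's actual implementation additionally handles stroke boundaries and padding): A^{k-hop} connects points within k steps along the sequence, and A^{global} connects every pair of points.

```python
def khop_adjacency(n, k):
    """n x n adjacency matrix connecting node i to every node within
    k steps along the point sequence (self-loops included)."""
    return [[1 if abs(i - j) <= k else 0 for j in range(n)] for i in range(n)]

def global_adjacency(n):
    """n x n fully-connected adjacency matrix (every pair of points)."""
    return [[1] * n for _ in range(n)]

# The full model (MGT #17) uses three graphs per sketch:
#   A_1hop   = khop_adjacency(n, 1)
#   A_2hop   = khop_adjacency(n, 2)
#   A_global = global_adjacency(n)
```

Note that this only illustrates the sparsity patterns; it is not the repository's exact graph construction.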

Network | acc. | log & ckpts & TensorBoard curves | training script
GT #1 | 0.5249 | link (50 MB, MD5 checksum 1f703a7aeb38a981bb430965a522b33a) | train_gra_transf_inpt5_new_dropout_2layerMLP_fully_connected_graph_early_stop.py
GT #7 | 0.7082 | link (50 MB, MD5 checksum 8615fd91d5291380b9c027ad6dd195d8) | train_gra_transf_inpt5_new_dropout_2layerMLP_4nn_early_stop.py
MGT #13 | 0.7237 | link (100 MB, MD5 checksum 12958648e3c392bf62d96ec30cf26b79) | train_gra_transf_inpt5_new_dropout_2layerMLP_4nnjnn_early_stop.py
MGT #17 | 0.7280 | link (141 MB, MD5 checksum 7afe439e34f55eb64aa7463134d67367) | train_gra_transf_inpt5_new_dropout_2layerMLP_2nn4nnjnn_early_stop.py

Citations

If you find this code useful for your research, please cite our paper with the following BibTeX entry:

@article{xu2019multigraph,
  title={Multi-Graph Transformer for Free-Hand Sketch Recognition},
  author={Xu, Peng and Joshi, Chaitanya K and Bresson, Xavier},
  journal={arXiv preprint arXiv:1912.11258},
  year={2019}
}

Usage (How to Run Our Baselines)

Run CNN Baselines

# This assumes the operations and environment configuration described above have been completed.
# For brevity, we only provide the code for the two best-performing CNN baselines, i.e., Inceptionv3 and MobileNetv2.

# 1. Enter the 'dataloader' directory.
cd ${CUSTOMIZED_WORKSPACE}/multigraph_transformer/dataloader/

# 2. Download the training/evaluation/testing datasets (.PNG files) and the associated URL lists from our Google Drive folder, then extract them into the 'dataloader' folder. data_4_cnnbaselines.tar.gz is 558 MB, and its MD5 checksum is 8f1132b400eb2bd9186f7f02d5c4d501.
chmod +x download_4_cnnbaselines.sh
./download_4_cnnbaselines.sh
tar -zxvf data_4_cnnbaselines.tar.gz
rm -f data_4_cnnbaselines.tar.gz
cd data_4_cnnbaselines
tar -zxvf tiny_train_set.tar.gz
rm -f tiny_train_set.tar.gz
tar -zxvf tiny_val_set.tar.gz
rm -f tiny_val_set.tar.gz
tar -zxvf tiny_test_set.tar.gz
rm -f tiny_test_set.tar.gz

# 3. Switch directory and run the scripts.
# Please set the input arguments based on your case.
# When the program starts running, a folder named 'experimental_results/${CUSTOMIZED_EXPERIMENT_NAME}' will be created automatically under ${CUSTOMIZED_WORKSPACE}/multigraph_transformer/baselines/cnn_baselines/ to save your logs, checkpoints, and TensorBoard curves.
python train_inceptionv3.py \
    --exp ${CUSTOMIZED_EXPERIMENT_NAME} \
    --batch_size ${CUSTOMIZED_SIZE} \
    --num_workers ${CUSTOMIZED_NUMBER} \
    --gpu ${CUSTOMIZED_GPU_NUMBER}
    
# We obtained the performance of Inceptionv3 (reported in Table 2 of our paper) by running the following command.
python train_inceptionv3.py \
    --exp train_inceptionv3_001 \
    --batch_size 64 \
    --num_workers 12 \
    --gpu 0

Run RNN Baselines

# This assumes the operations and environment configuration described above have been completed.
# For brevity, we only provide the code for the best-performing RNN baseline, i.e., the bidirectional GRU.

# 1. Switch directory and run the scripts.
cd ${CUSTOMIZED_WORKSPACE}/multigraph_transformer/baselines/rnn_baselines/
# Please set the input arguments based on your case.
# When the program starts running, a folder named 'experimental_results/${CUSTOMIZED_EXPERIMENT_NAME}' will be created automatically under ${CUSTOMIZED_WORKSPACE}/multigraph_transformer/baselines/rnn_baselines/ to save your logs, checkpoints, and TensorBoard curves.
python train_bigru.py \
    --exp ${CUSTOMIZED_EXPERIMENT_NAME} \
    --batch_size ${CUSTOMIZED_SIZE} \
    --num_workers ${CUSTOMIZED_NUMBER} \
    --gpu ${CUSTOMIZED_GPU_NUMBER}

# We obtained the performance of the bidirectional GRU (reported in Table 2 of our paper) by running the following command.
python train_bigru.py \
    --exp train_bigru_001 \
    --batch_size 256 \
    --num_workers 12 \
    --gpu 0

License

This project is licensed under the MIT License.

Acknowledgement

Many thanks to Google for releasing the great Quick, Draw! sketch dataset.

FAQ

Please see the FAQ via this link.
If you would like to discuss this code repository further, please feel free to email Peng Xu.
Email: peng.xu [AT] ntu.edu.sg

Q: How can I download your training/evaluation/testing datasets if I can not access Google Drive?

A: Currently, all our datasets, logs, and checkpoints are stored in Google Drive. We will try to upload them to Aliyun or Baidu Yun and update the download scripts and links accordingly. Thanks.
