
yuweihao / KERN

License: MIT License
Code for Knowledge-Embedded Routing Network for Scene Graph Generation (CVPR 2019)

Programming Languages

Python
139335 projects - #7 most used programming language
CUDA
1817 projects
C
50402 projects - #5 most used programming language
Shell
77523 projects

Projects that are alternatives to or similar to KERN

SGGpoint
[CVPR 2021] Exploiting Edge-Oriented Reasoning for 3D Point-based Scene Graph Analysis (official pytorch implementation)
Stars: ✭ 41 (-58.59%)
Mutual labels:  scene-graph, scene-graph-generation
SceneGraphFusion
No description or website provided.
Stars: ✭ 82 (-17.17%)
Mutual labels:  scene-graph, scene-graph-generation
MixGCF
MixGCF: An Improved Training Method for Graph Neural Network-based Recommender Systems, KDD2021
Stars: ✭ 73 (-26.26%)
Mutual labels:  graph-neural-network
Depth-VRD
Improving Visual Relation Detection using Depth Maps (ICPR 2020)
Stars: ✭ 33 (-66.67%)
Mutual labels:  scene-graph-generation
Social-Knowledge-Graph-Papers
A paper list of research about social knowledge graph
Stars: ✭ 27 (-72.73%)
Mutual labels:  graph-neural-network
egnn-pytorch
Implementation of E(n)-Equivariant Graph Neural Networks, in Pytorch
Stars: ✭ 249 (+151.52%)
Mutual labels:  graph-neural-network
PDN
The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21)
Stars: ✭ 44 (-55.56%)
Mutual labels:  graph-neural-network
Graph Neural Net
Graph Convolutional Networks, Graph Attention Networks, Gated Graph Neural Net, Mixhop
Stars: ✭ 27 (-72.73%)
Mutual labels:  graph-neural-network
DIG
A library for graph deep learning research
Stars: ✭ 1,078 (+988.89%)
Mutual labels:  graph-neural-network
Hyper-SAGNN
hypergraph representation learning, graph neural network
Stars: ✭ 53 (-46.46%)
Mutual labels:  graph-neural-network
NativeFX
Native Rendering integration for JavaFX (13 and beyond)
Stars: ✭ 125 (+26.26%)
Mutual labels:  scene-graph
recovering-unbiased-scene-graphs
Official implementation of "Recovering the Unbiased Scene Graphs from the Biased Ones" (ACMMM 2021)
Stars: ✭ 65 (-34.34%)
Mutual labels:  scene-graph-generation
STTran
Spatial-Temporal Transformer for Dynamic Scene Graph Generation, ICCV2021
Stars: ✭ 113 (+14.14%)
Mutual labels:  scene-graph
3-D-Scene-Graph
3D scene graph generator implemented in Pytorch.
Stars: ✭ 52 (-47.47%)
Mutual labels:  scene-graph
sg-risk-assessment
This repo includes the source code and dataset information for reproducing the results of our paper (https://arxiv.org/abs/2009.06435)
Stars: ✭ 35 (-64.65%)
Mutual labels:  scene-graph
GNNs-in-Network-Neuroscience
A review of papers proposing novel GNN methods with application to brain connectivity published in 2017-2020.
Stars: ✭ 92 (-7.07%)
Mutual labels:  graph-neural-network
proscene
Processing library for the creation of interactive scenes
Stars: ✭ 45 (-54.55%)
Mutual labels:  scene-graph
stagin
STAGIN: Spatio-Temporal Attention Graph Isomorphism Network
Stars: ✭ 34 (-65.66%)
Mutual labels:  graph-neural-network
Knowledge Graph based Intent Network
Learning Intents behind Interactions with Knowledge Graph for Recommendation, WWW2021
Stars: ✭ 116 (+17.17%)
Mutual labels:  graph-neural-network
ReFine
Official code of "Towards Multi-Grained Explainability for Graph Neural Networks" (2021 NeurIPS)
Stars: ✭ 40 (-59.6%)
Mutual labels:  graph-neural-network

Knowledge-Embedded Routing Network for Scene Graph Generation

Tianshui Chen*, Weihao Yu*, Riquan Chen, and Liang Lin, “Knowledge-Embedded Routing Network for Scene Graph Generation”, CVPR, 2019. (* co-first authors) [PDF]

Note: there is a typo in the final CVPR version of our paper: h_{iC}^o in eq. (6) should be corrected to f_{iC}^o.

This repository contains the trained models and the PyTorch code for the above paper. If the paper helps your research, please cite our work:

Bibtex

@inproceedings{chen2019knowledge,
  title={Knowledge-Embedded Routing Network for Scene Graph Generation},
  author={Chen, Tianshui and Yu, Weihao and Chen, Riquan and Lin, Liang},
  booktitle={Conference on Computer Vision and Pattern Recognition},
  year={2019}
}

Setup

In our paper, the strong baseline of our model is SMN (Stacked Motif Networks) introduced by @rowanz et al. To compare the two models fairly, the PyTorch code of our model is based on @rowanz's neural-motifs code. Thanks to @rowanz for sharing his nice code with the research community.

  1. Install Python 3.6 and PyTorch 0.3. We recommend the Anaconda distribution. To install PyTorch if you haven't already, use conda install pytorch=0.3.0 torchvision=0.2.0 cuda90 -c pytorch. We use TensorBoard to monitor results on the validation set; to use it with PyTorch, install TensorFlow and tensorboardX first. If you don't want to use TensorBoard, simply omit the -tb_log_dir option.

  2. Update the config file with the dataset paths. Specifically:

    • Visual Genome (the VG_100K folder, image_data.json, VG-SGG.h5, and VG-SGG-dicts.json). See data/stanford_filtered/README.md for the steps to download these.
    • You'll also need to fix your PYTHONPATH: export PYTHONPATH=/home/yuweihao/exp/KERN
  3. Compile everything. Update your CUDA path in the Makefile and run make in the main directory: this compiles the bilinear interpolation operation for the RoIs.

  4. Pretrain VG detection. To compare our model with neural-motifs fairly, we simply use their pretrained VG detector. You can download the pretrained detector checkpoint provided by @rowanz, or run ./scripts/pretrain_detector.sh to train the detector yourself. Note: you might have to adjust the learning rate and batch size according to the number of GPUs and the amount of GPU memory you have.

  5. Generate knowledge matrices: python prior_matrices/generate_knowledge.py, or download them from here: prior_matrices (Google Drive, OneDrive). (A conceptual sketch of what these matrices capture is given after this list.)

  6. Train our KERN model. There are three training phases. You need a GPU with 12 GB of memory.

    • Train VG relationship predicate classification: run CUDA_VISIBLE_DEVICES=YOUR_GPU_NUM ./scripts/train_kern_predcls.sh. This phase may last about 20-30 epochs.
    • Train scene graph classification: run CUDA_VISIBLE_DEVICES=YOUR_GPU_NUM ./scripts/train_kern_sgcls.sh. Before running this script, you need to set the path of the best checkpoint you trained in the predcls phase: -ckpt checkpoints/kern_predcls/vgrel-YOUR_BEST_EPOCH_RNUM.tar. This phase lasts about 8-13 epochs; you can then decrease the learning rate to 1e-6 to further improve performance. Like neural-motifs, we use a single trained checkpoint for both the predcls and sgcls tasks. You can also download our checkpoint here: kern_sgcls_predcls.tar (Google Drive, OneDrive).
    • Refine for detection: run CUDA_VISIBLE_DEVICES=YOUR_GPU_NUM ./scripts/train_kern_sgdet.sh, or download the checkpoint here: kern_sgdet.tar (Google Drive, OneDrive). If the validation performance plateaus, you can also decrease the learning rate to 1e-6 to improve performance.
  7. Evaluate: refer to the scripts CUDA_VISIBLE_DEVICES=YOUR_GPU_NUM ./scripts/eval_kern_[predcls/sgcls/sgdet].sh. You can conveniently find all our checkpoints, evaluation caches and results in this folder KERN_Download (Google Drive, OneDrive).
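
As mentioned in step 5, the knowledge matrices encode statistical priors over how object classes and relationships co-occur in the training annotations. Below is a minimal, hypothetical sketch of how such a prior could be built from (subject class, predicate class, object class) triplets; it is not the repository's prior_matrices/generate_knowledge.py, and the function name, array layout, and smoothing constant are illustrative assumptions only.

```python
import numpy as np

def build_predicate_prior(triplets, num_obj_classes, num_rel_classes):
    """For every ordered object-class pair (subject, object), count how often each
    predicate links them in the training set, then normalize the counts into a
    conditional distribution P(predicate | subject class, object class)."""
    counts = np.zeros((num_obj_classes, num_obj_classes, num_rel_classes))
    for subj_cls, pred_cls, obj_cls in triplets:    # label indices from training annotations
        counts[subj_cls, obj_cls, pred_cls] += 1
    counts += 1e-8                                  # smoothing so unseen pairs do not divide by zero
    return counts / counts.sum(axis=2, keepdims=True)

# Toy usage: 3 object classes and 4 predicate classes (index 0 taken to mean "no relationship").
toy_triplets = [(1, 2, 0), (1, 2, 0), (1, 3, 2), (2, 1, 0)]
prior = build_predicate_prior(toy_triplets, num_obj_classes=3, num_rel_classes=4)
print(prior[1, 0])  # distribution over predicates for subject class 1 and object class 0
```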

Evaluation metrics

In the validation/test dataset, assume there are $Y$ images. For each image, a model generates the top $X$ predicted relationship triplets. For image $I_y$, there are $G_y$ ground truth relationship triplets, of which $T_y^X$ are predicted successfully by the model. We can calculate:

$$R@X = \frac{1}{Y} \sum_{y=1}^{Y} \frac{T_y^X}{G_y}$$

For image $I_y$, among its $G_y$ ground truth relationship triplets, there are $G_{yk}$ ground truth triplets with relationship $k$ (except $k=1$, meaning no relationship; the number of relationship classes is $K$, including no relationship), of which $T_{yk}^X$ are predicted successfully by the model. Among the $Y$ images of the validation/test dataset, for relationship $k$ there are $Y_k$ images which contain at least one ground truth triplet with this relationship. The R@X of relationship $k$ can be calculated as:

$$R@X_k = \frac{1}{Y_k} \sum_{y=1}^{Y_k} \frac{T_{yk}^X}{G_{yk}}$$

Then we can calculate the mean recall:

$$mR@X = \frac{1}{K-1} \sum_{k=2}^{K} R@X_k$$
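
To make the definitions above concrete, here is a small, hypothetical Python sketch (not the repository's evaluation code) that computes R@X and mR@X from per-image lists of ground truth and predicted (subject, predicate, object) triplets; all names are illustrative.

```python
from collections import defaultdict

def recall_at_x(gt_triplets_per_image, pred_triplets_per_image, x):
    """R@X: average over images of the fraction of ground truth triplets that
    appear among the model's top-X predicted (subject, predicate, object) triplets."""
    recalls = []
    for gt, preds in zip(gt_triplets_per_image, pred_triplets_per_image):
        if not gt:                     # skip images with no ground truth triplets
            continue
        top_x = set(preds[:x])
        hits = sum(1 for t in gt if t in top_x)
        recalls.append(hits / len(gt))
    return sum(recalls) / len(recalls)

def mean_recall_at_x(gt_triplets_per_image, pred_triplets_per_image, x):
    """mR@X: compute R@X separately for each relationship class (averaged over the
    images that contain it), then average those per-relationship recalls."""
    per_rel = defaultdict(list)        # relationship class -> list of per-image recalls
    for gt, preds in zip(gt_triplets_per_image, pred_triplets_per_image):
        top_x = set(preds[:x])
        gt_by_rel = defaultdict(list)
        for t in gt:
            gt_by_rel[t[1]].append(t)  # t = (subject, predicate, object)
        for rel, rel_gt in gt_by_rel.items():
            hits = sum(1 for t in rel_gt if t in top_x)
            per_rel[rel].append(hits / len(rel_gt))
    return sum(sum(v) / len(v) for v in per_rel.values()) / len(per_rel)

# Toy example: one image, triplets written as (subject, predicate, object) tuples.
gt = [[("man", "riding", "horse"), ("horse", "on", "grass")]]
preds = [[("man", "riding", "horse"), ("man", "near", "horse"), ("horse", "on", "grass")]]
print(recall_at_x(gt, preds, x=2), mean_recall_at_x(gt, preds, x=2))
```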

Some results

Figure 1. The distribution of different relationships on the VG dataset. The training and test splits share a similar distribution.


Figure 2. The R@50 without constraint of our method and the SMN on the predicate classification task on the VG dataset.


Figure 3. The absolute R@50 improvement of our method over the SMN for different relationships. The R@50 is computed without constraint.


Figure 4. The relation between the R@50 improvement and the sample proportion on the predicate classification task on the VG dataset. The R@50 is computed without constraint.


|              | Method | SGGen mR@50 | SGGen mR@100 | SGCls mR@50 | SGCls mR@100 | PredCls mR@50 | PredCls mR@100 | Mean | Relative improvement |
|--------------|--------|-------------|--------------|-------------|--------------|---------------|----------------|------|----------------------|
| Constraint   | SMN    | 5.3         | 6.1          | 7.1         | 7.6          | 13.3          | 14.4           | 9.0  |                      |
| Constraint   | Ours   | 6.4         | 7.3          | 9.4         | 10.0         | 17.7          | 19.2           | 11.7 | ↑ 30.0%              |
| Unconstraint | SMN    | 9.3         | 12.9         | 15.4        | 20.6         | 27.5          | 37.9           | 20.6 |                      |
| Unconstraint | Ours   | 11.7        | 16.0         | 19.8        | 26.2         | 36.3          | 49.0           | 26.5 | ↑ 28.6%              |

Table 1. Comparison of the mR@50 and mR@100 in % with and without constraint on the three tasks of the VG dataset.
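
Reading the table this way, the Mean column appears to be the average of the six mR values in each row, and the relative improvement compares the two row means; for example, the constrained-setting numbers are consistent with:

$$\frac{6.4 + 7.3 + 9.4 + 10.0 + 17.7 + 19.2}{6} \approx 11.7, \qquad \frac{11.7 - 9.0}{9.0} \approx 30.0\%$$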

Acknowledgement

Thanks to @rowanz for generously releasing his nice neural-motifs code.

Help

Feel free to open an issue if you encounter trouble getting it to work.
