zilongzheng / visdial-gnn

License: MIT
PyTorch code for Reasoning Visual Dialogs with Structural and Partial Observations

Programming Languages

Python
139335 projects - #7 most used programming language
Lua
6591 projects
Shell
77523 projects

Projects that are alternatives of or similar to visdial-gnn

DIG
A library for graph deep learning research
Stars: ✭ 1,078 (+2664.1%)
Mutual labels:  graph-neural-network
Awesome-Federated-Learning-on-Graph-and-GNN-papers
Federated learning on graphs, especially with graph neural networks (GNNs), knowledge graphs, and private GNNs.
Stars: ✭ 206 (+428.21%)
Mutual labels:  graph-neural-network
visdial
Visual Dialog: Light-weight Transformer for Many Inputs (ECCV 2020)
Stars: ✭ 27 (-30.77%)
Mutual labels:  visual-dialog
KERN
Code for Knowledge-Embedded Routing Network for Scene Graph Generation (CVPR 2019)
Stars: ✭ 99 (+153.85%)
Mutual labels:  graph-neural-network
GP-GNN
Code and dataset of ACL2019 Paper: Graph Neural Networks with Generated Parameters for Relation Extraction.
Stars: ✭ 52 (+33.33%)
Mutual labels:  graph-neural-network
MixGCF
MixGCF: An Improved Training Method for Graph Neural Network-based Recommender Systems, KDD2021
Stars: ✭ 73 (+87.18%)
Mutual labels:  graph-neural-network
GNNs-in-Network-Neuroscience
A review of papers published in 2017-2020 that propose novel GNN methods applied to brain connectivity.
Stars: ✭ 92 (+135.9%)
Mutual labels:  graph-neural-network
stagin
STAGIN: Spatio-Temporal Attention Graph Isomorphism Network
Stars: ✭ 34 (-12.82%)
Mutual labels:  graph-neural-network
GraphDeeSmartContract
Smart contract vulnerability detection using graph neural network (DR-GCN).
Stars: ✭ 84 (+115.38%)
Mutual labels:  graph-neural-network
SuperGAT
[ICLR 2021] How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision
Stars: ✭ 122 (+212.82%)
Mutual labels:  graph-neural-network
chemicalx
A PyTorch and TorchDrug based deep learning library for drug pair scoring.
Stars: ✭ 176 (+351.28%)
Mutual labels:  graph-neural-network
GNN-Recommendation
Graduation project: heterogeneous graph representation learning and recommendation algorithms based on graph neural networks.
Stars: ✭ 52 (+33.33%)
Mutual labels:  graph-neural-network
egnn-pytorch
Implementation of E(n)-Equivariant Graph Neural Networks, in PyTorch
Stars: ✭ 249 (+538.46%)
Mutual labels:  graph-neural-network
Knowledge Graph based Intent Network
Learning Intents behind Interactions with Knowledge Graph for Recommendation, WWW2021
Stars: ✭ 116 (+197.44%)
Mutual labels:  graph-neural-network
Hyper-SAGNN
hypergraph representation learning, graph neural network
Stars: ✭ 53 (+35.9%)
Mutual labels:  graph-neural-network
ReFine
Official code of "Towards Multi-Grained Explainability for Graph Neural Networks" (NeurIPS 2021)
Stars: ✭ 40 (+2.56%)
Mutual labels:  graph-neural-network
Graph Neural Net
Graph Convolutional Networks, Graph Attention Networks, Gated Graph Neural Net, Mixhop
Stars: ✭ 27 (-30.77%)
Mutual labels:  graph-neural-network
PDN
The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21)
Stars: ✭ 44 (+12.82%)
Mutual labels:  graph-neural-network
Social-Knowledge-Graph-Papers
A paper list of research about social knowledge graph
Stars: ✭ 27 (-30.77%)
Mutual labels:  graph-neural-network
mmd
This repository contains the PyTorch implementation for our SCAI (EMNLP 2018) submission "A Knowledge-Grounded Multimodal Search-Based Conversational Agent"
Stars: ✭ 28 (-28.21%)
Mutual labels:  visual-dialog

Reasoning Visual Dialogs with Structural and Partial Observations

PyTorch implementation for the paper:

Reasoning Visual Dialogs with Structural and Partial Observations
Zilong Zheng*, Wenguan Wang*, Siyuan Qi*, Song-Chun Zhu (* equal contributions)
In CVPR 2019 (Oral)
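
At a high level, the paper models each dialog as a graph with partially observed nodes (the caption and past question-answer pairs are observed; the queried answer is not) and infers the missing node by passing messages along edges whose weights are themselves estimated. The snippet below is a minimal, hypothetical sketch of one such message-passing round, assuming dense node embeddings; the class and layer names are illustrative and do not come from this repository.

import torch
import torch.nn as nn

class MessagePassing(nn.Module):
    # One illustrative round of message passing over a fully connected dialog graph.
    def __init__(self, dim):
        super().__init__()
        # Predict a soft weight for every directed node pair (the inferred structure).
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())
        # Update each node's hidden state from its aggregated messages.
        self.node_gru = nn.GRUCell(dim, dim)

    def forward(self, h):  # h: (num_nodes, dim) node embeddings
        n, d = h.size()
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, d),
                           h.unsqueeze(0).expand(n, n, d)], dim=-1)
        w = self.edge_mlp(pairs).squeeze(-1)  # (n, n) soft edge weights
        msgs = w @ h                          # weighted sum of neighbor embeddings
        return self.node_gru(msgs, h)         # updated node states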

Getting Started

This codebase was tested on Ubuntu 16.04 with Python 3.5 and a single NVIDIA TITAN Xp GPU; similar configurations are recommended.

Installation

  • Clone this repo:
git clone https://github.com/zilongzheng/visdial-gnn.git
cd visdial-gnn
  • Install requirements:
    • PyTorch 0.4.1
    • For other Python dependencies, run:
      pip install -r requirements.txt
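
Since the code targets PyTorch 0.4.1, a quick environment check (a generic snippet, not part of this repo) can catch version mismatches before training:

# Verify the pinned PyTorch version and GPU visibility.
import torch
assert torch.__version__.startswith("0.4"), torch.__version__
print("CUDA available:", torch.cuda.is_available())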
      

Train/Evaluate VisDial v1.0

  • For VisDial v1.0, we use pre-extracted image features as specified here.

  • We use preprocessed dialog data as specified here.

  • To reproduce our results, download the preprocessed data and save it to $PROJECT_DIR/data/v1.0/ by running:

bash ./scripts/download_data_v1.sh faster_rcnn
  • To train a discriminative model, run:
#!./scripts/train_v1_faster_rcnn.sh
python train.py --dataroot ./data/v1.0/
  • To evaluate the model on the val split, run (a brief sketch of the standard metrics follows):
python evaluate.py --dataroot ./data/v1.0/ --split val --ckpt /path/to/checkpoint
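
evaluate.py follows the standard VisDial protocol of ranking 100 candidate answers per question. For reference, the usual retrieval metrics can be computed from the rank of the ground-truth answer as sketched below (illustrative only; the exact fields reported by evaluate.py may differ):

import torch

def retrieval_metrics(gt_ranks):  # gt_ranks: 1-based ranks of the ground-truth answers, shape (N,)
    gt_ranks = gt_ranks.float()
    return {
        "MRR":  (1.0 / gt_ranks).mean().item(),        # mean reciprocal rank
        "R@1":  (gt_ranks <= 1).float().mean().item(),
        "R@5":  (gt_ranks <= 5).float().mean().item(),
        "R@10": (gt_ranks <= 10).float().mean().item(),
        "Mean": gt_ranks.mean().item(),                # mean rank (lower is better)
    }

print(retrieval_metrics(torch.tensor([1, 3, 12, 2])))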

Train/Evaluate VisDial v0.9

  • We use pre-extracted image features from VGG-16 and VGG-19 as specified here.
  • To download preprocessed data (e.g. vgg19) and save it to $PROJECT_DIR/data/v0.9/, run (a sketch for sanity-checking the downloaded files appears at the end of this section):
bash ./scripts/download_data_v09.sh vgg19
  • To train a discriminative model using vgg19 pretrained image features, run:
#!./scripts/train_v09_vgg19.sh
python train.py --dataroot ./data/v0.9/ \
                --version 0.9 \
                --img_train data_img_vgg19_pool5.h5 \
                --visdial_data visdial_data.h5 \
                --visdial_params visdial_params.json \
                --img_feat_size 512
  • To evaluate the model on the val split, run:
python evaluate.py --dataroot ./data/v0.9/ \
                   --version 0.9 \
                   --split val \
                   --ckpt /path/to/checkpoint \
                   --img_val data_img_vgg19_pool5.h5 \
                   --visdial_data visdial_data.h5 \
                   --visdial_params visdial_params.json \
                   --img_feat_size 512
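
If training or evaluation cannot find the data, a quick check (hypothetical helper code, not part of this repo) that the downloaded v0.9 files are present and readable can save time:

import json
import h5py

root = "./data/v0.9/"
with h5py.File(root + "data_img_vgg19_pool5.h5", "r") as f:
    print("image feature keys:", list(f.keys()))
with h5py.File(root + "visdial_data.h5", "r") as f:
    print("dialog data keys:", list(f.keys()))
with open(root + "visdial_params.json") as f:
    print("param keys:", list(json.load(f).keys())[:5])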

Citation

If you use this code for your research, please cite our paper.

@inproceedings{zheng2019reasoning,
    title={Reasoning Visual Dialogs with Structural and Partial Observations},
    author={Zheng, Zilong and Wang, Wenguan and Qi, Siyuan and Zhu, Song-Chun},
    booktitle={Computer Vision and Pattern Recognition (CVPR), 2019 IEEE Conference on},
    year={2019}
}

Acknowledgments

We use the Visual Dialog Challenge Starter Code and GPNN as reference utility code.
