
kanchen-usc / Kac Net

License: MIT
Implementation of Knowledge Aided Consistency for Weakly Supervised Phrase Grounding in TensorFlow

Programming Languages

python

Projects that are alternatives of or similar to Kac Net

Describing a knowledge base
Code for Describing a Knowledge Base
Stars: ✭ 42 (-55.79%)
Mutual labels:  attention-mechanism
Sarcasm Detection
Detecting sarcasm on Twitter using both traditional machine learning and deep learning techniques.
Stars: ✭ 73 (-23.16%)
Mutual labels:  attention-mechanism
Attend infer repeat
A TensorFlow implementation of Attend, Infer, Repeat
Stars: ✭ 82 (-13.68%)
Mutual labels:  attention-mechanism
Attentional Interfaces
🔍 Attentional interfaces in TensorFlow.
Stars: ✭ 58 (-38.95%)
Mutual labels:  attention-mechanism
Group Level Emotion Recognition
Model submitted for the ICMI 2018 EmotiW Group-Level Emotion Recognition Challenge
Stars: ✭ 70 (-26.32%)
Mutual labels:  attention-mechanism
Hierarchical Attention Networks
TensorFlow implementation of the paper "Hierarchical Attention Networks for Document Classification"
Stars: ✭ 75 (-21.05%)
Mutual labels:  attention-mechanism
Sockeye
Sequence-to-sequence framework with a focus on Neural Machine Translation based on Apache MXNet
Stars: ✭ 990 (+942.11%)
Mutual labels:  attention-mechanism
Competitive Inner Imaging Senet
Source code of paper: (not available now)
Stars: ✭ 89 (-6.32%)
Mutual labels:  attention-mechanism
Se3 Transformer Pytorch
Implementation of SE3-Transformers for Equivariant Self-Attention, in Pytorch. This specific repository is geared towards integration with eventual Alphafold2 replication.
Stars: ✭ 73 (-23.16%)
Mutual labels:  attention-mechanism
Sturcture Inpainting
Source code of AAAI 2020 paper 'Learning to Incorporate Structure Knowledge for Image Inpainting'
Stars: ✭ 78 (-17.89%)
Mutual labels:  attention-mechanism
Global Self Attention Network
A Pytorch implementation of Global Self-Attention Network, a fully-attention backbone for vision tasks
Stars: ✭ 64 (-32.63%)
Mutual labels:  attention-mechanism
Pytorch Attention Guided Cyclegan
Pytorch implementation of Unsupervised Attention-guided Image-to-Image Translation.
Stars: ✭ 67 (-29.47%)
Mutual labels:  attention-mechanism
Deepaffinity
Protein-compound affinity prediction through unified RNN-CNN
Stars: ✭ 75 (-21.05%)
Mutual labels:  attention-mechanism
Ca Net
Code for Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation.
Stars: ✭ 56 (-41.05%)
Mutual labels:  attention-mechanism
Grounder
Implementation of Grounding of Textual Phrases in Images by Reconstruction in Tensorflow
Stars: ✭ 83 (-12.63%)
Mutual labels:  attention-mechanism
Attentional Neural Factorization Machine
Attention,Factorization Machine, Deep Learning, Recommender System
Stars: ✭ 39 (-58.95%)
Mutual labels:  attention-mechanism
Fake news detection deep learning
Fake News Detection using Deep Learning models in Tensorflow
Stars: ✭ 74 (-22.11%)
Mutual labels:  attention-mechanism
Eqtransformer
EQTransformer, a python package for earthquake signal detection and phase picking using AI.
Stars: ✭ 95 (+0%)
Mutual labels:  attention-mechanism
Attention unet
Raw implementation of attention gated U-Net by Keras
Stars: ✭ 85 (-10.53%)
Mutual labels:  attention-mechanism
Simplednn
SimpleDNN is a machine learning lightweight open-source library written in Kotlin designed to support relevant neural network architectures in natural language processing tasks
Stars: ✭ 81 (-14.74%)
Mutual labels:  attention-mechanism

KAC-Net

This repository contains the TensorFlow implementation of Knowledge Aided Consistency for Weakly Supervised Phrase Grounding (CVPR 2018).

Setup

Note: Please read the feature representation notes in the feature and annotation directories before using the code.

Platform: TensorFlow 1.1.0 (Python 2.7)
Visual features: We use a Faster-RCNN pre-trained on PASCAL VOC 2012 for Flickr30K Entities, and one pre-trained on ImageNet for Referit Game. Please put the visual features in the feature directory (more details can be found in the README.md in that directory).
Global features: We extract a global visual feature for each image in Flickr30K Entities using a Faster-RCNN pre-trained on PASCAL VOC 2012 and store the features in the global_feat folder.
Sentence features: We encode a one-hot vector for each query, as well as the annotation for each query-image pair. Please put the encoded features in the annotation directory (more details are provided in the README.md in that directory).
File list: We generate a file list of the images in Flickr30K Entities. If you would like to train and test on another dataset (e.g. Referit Game), please follow the same format as flickr_train_val.lst and flickr_test.lst.
Hyperparameters: Please check the Config class in train.py (an illustrative sketch follows this list).
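
For orientation, the sketch below shows the kind of fields such a Config class typically holds. The field names and values are illustrative assumptions, not the actual defaults in train.py.

class Config(object):
    """Illustrative hyperparameter container for train.py.
    Field names and values are assumptions for illustration only."""
    batch_size = 40        # query-image pairs per training batch
    lstm_dim = 500         # hidden size of the query (LSTM) encoder
    img_feat_dim = 4096    # dimension of each proposal's visual feature
    lr = 0.001             # initial learning rate
    max_epoch = 100        # number of training epochs
    model_dir = 'model'    # directory where checkpoints are saved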

Training & Test

Before training, we first pre-train a GroundeR model (unsupervised scenario) and save the pre-trained model in the folder model/ground_unsupervised_base (epoch 53). The implementation of GroundeR is provided in a separate repository.

For training, please enter the root folder of KAC-Net, then type

$ python train.py -m [Model Name] -g [GPU ID] -k [knowledge]

You can choose the type of knowledge used for the KBP values with the -k option: coco and hard_coco give soft and hard KBP values from a Faster-RCNN pre-trained on MSCOCO, respectively; pas and hard_pas give soft and hard KBP values from a VGG network pre-trained on PASCAL VOC 2012, respectively. More details can be found in the paper.
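
For example, to train with soft or hard KBP values (the model name kac_demo and the GPU ID below are placeholders, not names used by the repository):

$ python train.py -m kac_demo -g 0 -k coco       # soft KBP from a Faster-RCNN pre-trained on MSCOCO
$ python train.py -m kac_demo -g 0 -k hard_pas   # hard KBP from a VGG network pre-trained on PASCAL VOC 2012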

For testing, please enter the root folder of KAC-Net, then type

$ python evaluate.py -m [Model Name] -g [GPU ID] -k [knowledge] --restore_id [Restore epoch ID]

Make sure the model name used for evaluation matches the one used in training, and that a checkpoint for the given epoch ID exists.
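
For example, if the model above was trained with -k coco and a checkpoint was saved at epoch 25 (both the model name and the epoch ID are placeholders):

$ python evaluate.py -m kac_demo -g 0 -k coco --restore_id 25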

Reference

If you find this repository useful for your research, please consider citing the following work:

@inproceedings{Chen_2018_CVPR,
  title={Knowledge Aided Consistency for Weakly Supervised Phrase Grounding},
  author={Chen, Kan and Gao, Jiyang and Nevatia, Ram},
  booktitle={CVPR},
  year={2018}
}