rowanz / Neural Motifs

License: MIT
Code for Neural Motifs: Scene Graph Parsing with Global Context (CVPR 2018)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Neural Motifs

ML-Research-Made-Easy
Link of ML papers to their blogs/ supplementary material
Stars: ✭ 25 (-93.81%)
Mutual labels:  vision
Dest
🐼 One Millisecond Deformable Shape Tracking Library (DEST)
Stars: ✭ 276 (-31.68%)
Mutual labels:  vision
Pythonfromspace
Python Examples for Remote Sensing
Stars: ✭ 344 (-14.85%)
Mutual labels:  vision
sparse-scene-flow
This repo contains C++ code for a sparse scene flow method.
Stars: ✭ 23 (-94.31%)
Mutual labels:  vision
Facesvisiondemo
👀 iOS11 demo application for age and gender classification of facial images.
Stars: ✭ 273 (-32.43%)
Mutual labels:  vision
Imagedetect
✂️ Detect and crop faces, barcodes and texts in image with iOS 11 Vision api.
Stars: ✭ 286 (-29.21%)
Mutual labels:  vision
sim2real-docs
Synthesize image datasets of documents in natural scenes with Python+Blender3D
Stars: ✭ 39 (-90.35%)
Mutual labels:  vision
R2c
Recognition to Cognition Networks (code for the model in "From Recognition to Cognition: Visual Commonsense Reasoning", CVPR 2019)
Stars: ✭ 391 (-3.22%)
Mutual labels:  vision
Dirt
DIRT: a fast differentiable renderer for TensorFlow
Stars: ✭ 273 (-32.43%)
Mutual labels:  vision
Ios 11 By Examples
👨🏻‍💻 Examples of new iOS 11 APIs
Stars: ✭ 3,327 (+723.51%)
Mutual labels:  vision
pulse2percept
A Python-based simulation framework for bionic vision
Stars: ✭ 59 (-85.4%)
Mutual labels:  vision
Visionfacedetection
An example of use a Vision framework for face landmarks detection in iOS 11
Stars: ✭ 258 (-36.14%)
Mutual labels:  vision
Awesome Deep Vision Web Demo
A curated list of awesome deep vision web demo
Stars: ✭ 298 (-26.24%)
Mutual labels:  vision
Recogcis
Face detection & recognition AR app using the mlmodel to recognize company employees.
Stars: ✭ 28 (-93.07%)
Mutual labels:  vision
Multi sensor fusion
Multi-Sensor Fusion (GNSS, IMU, Camera): multi-source, multi-sensor fusion localization; GPS/INS integrated navigation; PPP/INS tight coupling
Stars: ✭ 357 (-11.63%)
Mutual labels:  vision
VisionLab
📺 A framework with common source code for demo projects that use Vision Framework
Stars: ✭ 32 (-92.08%)
Mutual labels:  vision
Apc Vision Toolbox
MIT-Princeton Vision Toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object segmentation and 6D object pose estimation.
Stars: ✭ 277 (-31.44%)
Mutual labels:  vision
Aravis
A vision library for GenICam-based cameras
Stars: ✭ 397 (-1.73%)
Mutual labels:  vision
Home Platform
HoME: a Household Multimodal Environment is a platform for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context.
Stars: ✭ 370 (-8.42%)
Mutual labels:  vision
Grip
Program for rapidly developing computer vision applications
Stars: ✭ 314 (-22.28%)
Mutual labels:  vision

neural-motifs

Like this work, or scene understanding in general? You might be interested in checking out my brand new dataset VCR: Visual Commonsense Reasoning, at visualcommonsense.com!

This repository contains data and code for the paper Neural Motifs: Scene Graph Parsing with Global Context (CVPR 2018). For the project page (as well as links to the baseline checkpoints), check out rowanzellers.com/neuralmotifs. If the paper significantly inspires you, we request that you cite our work:

Bibtex

@inproceedings{zellers2018scenegraphs,
  title={Neural Motifs: Scene Graph Parsing with Global Context},
  author={Zellers, Rowan and Yatskar, Mark and Thomson, Sam and Choi, Yejin},
  booktitle={Conference on Computer Vision and Pattern Recognition},
  year={2018}
}

Setup

  1. Install Python 3.6 and PyTorch 0.3. I recommend the Anaconda distribution. To install PyTorch if you haven't already, use conda install pytorch=0.3.0 torchvision=0.2.0 cuda90 -c pytorch.

  2. Update the config file with the dataset paths. Specifically:

    • Visual Genome (the VG_100K folder, image_data.json, VG-SGG.h5, and VG-SGG-dicts.json). See data/stanford_filtered/README.md for the steps I used to download these.
    • You'll also need to set your PYTHONPATH: export PYTHONPATH=/home/rowan/code/scene-graph
  3. Compile everything. Run make in the main directory; this compiles the bilinear interpolation operation for the RoIs as well as the highway LSTM.

  4. Pretrain VG detection. The old version involved pretraining on COCO as well, but we got rid of that for simplicity. Run ./scripts/pretrain_detector.sh. Note: you might have to modify the learning rate and batch size, particularly if you don't have 3 Titan X GPUs (which is what I used). You can also download the pretrained detector checkpoint here.

  5. Train VG scene graph classification: run ./scripts/train_models_sgcls.sh 2 (will run on GPU 2). OR, download the MotifNet-cls checkpoint here: Motifnet-SGCls/PredCls.

  6. Refine for detection: run ./scripts/refine_for_detection.sh 2 or download the Motifnet-SGDet checkpoint.

  7. Evaluate: Refer to the scripts ./scripts/eval_models_sg[cls/det].sh.

Help

Feel free to open an issue if you encounter trouble getting it to work!
