
airalcorn2 / Strike With A Pose

Licence: GPL-3.0
A simple GUI tool for generating adversarial poses of objects.

Programming Languages

Python: 139,335 projects; the #7 most used programming language

Projects that are alternatives to or similar to Strike With A Pose

Bidaf Keras
Bidirectional Attention Flow for Machine Comprehension implemented in Keras 2
Stars: ✭ 60 (-14.29%)
Mutual labels:  neural-networks
Deep Review
A collaboratively written review paper on deep learning, genomics, and precision medicine
Stars: ✭ 1,141 (+1530%)
Mutual labels:  neural-networks
Torchgan
Research framework for easy and efficient training of GANs based on PyTorch
Stars: ✭ 1,156 (+1551.43%)
Mutual labels:  neural-networks
Ai Platform
An open-source platform for automating tasks using machine learning models
Stars: ✭ 61 (-12.86%)
Mutual labels:  neural-networks
Interactive Classification
Interactive Classification for Deep Learning Interpretation
Stars: ✭ 65 (-7.14%)
Mutual labels:  neural-networks
Intent classifier
Stars: ✭ 67 (-4.29%)
Mutual labels:  neural-networks
Watchcarslearn
Self-driving cars using NEAT
Stars: ✭ 59 (-15.71%)
Mutual labels:  neural-networks
Get started with deep learning for text with allennlp
Getting started with AllenNLP and PyTorch by training a tweet classifier
Stars: ✭ 69 (-1.43%)
Mutual labels:  neural-networks
Outlace.github.io
Machine learning and data science blog.
Stars: ✭ 65 (-7.14%)
Mutual labels:  neural-networks
Deeplearning4j
All DeepLearning4j projects go here.
Stars: ✭ 68 (-2.86%)
Mutual labels:  neural-networks
Aorun
Deep Learning over PyTorch
Stars: ✭ 61 (-12.86%)
Mutual labels:  neural-networks
Speedrun
Research code need not be ugly.
Stars: ✭ 65 (-7.14%)
Mutual labels:  neural-networks
Graph 2d cnn
Code and data for the paper 'Classifying Graphs as Images with Convolutional Neural Networks' (new title: 'Graph Classification with 2D Convolutional Neural Networks')
Stars: ✭ 67 (-4.29%)
Mutual labels:  neural-networks
Gosom
Self-organizing maps in Go
Stars: ✭ 60 (-14.29%)
Mutual labels:  neural-networks
Ai Residency List
List of AI Residency & Research programs, Ph.D. Fellowships, and Research Internships
Stars: ✭ 69 (-1.43%)
Mutual labels:  neural-networks
Cyclegan Qp
Official PyTorch implementation of "Artist Style Transfer Via Quadratic Potential"
Stars: ✭ 59 (-15.71%)
Mutual labels:  neural-networks
Entity embeddings categorical
Discover relevant information about categorical data with entity embeddings using Neural Networks (powered by Keras)
Stars: ✭ 67 (-4.29%)
Mutual labels:  neural-networks
Ehcf
This is our implementation of EHCF: Efficient Heterogeneous Collaborative Filtering (AAAI 2020)
Stars: ✭ 70 (+0%)
Mutual labels:  neural-networks
Blinkdl
A minimalist deep learning library in JavaScript using WebGL + asm.js. Run convolutional neural networks in your browser.
Stars: ✭ 69 (-1.43%)
Mutual labels:  neural-networks
Lovaszsoftmax
Code for the Lovász-Softmax loss (CVPR 2018)
Stars: ✭ 1,148 (+1540%)
Mutual labels:  neural-networks

Strike (With) A Pose

This is the companion tool to the paper:

Michael A. Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, and Anh Nguyen. Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects. Conference on Computer Vision and Pattern Recognition (CVPR). 2019.

The tool provides a graphical user interface for generating adversarial poses of objects. Code to run experiments like those described in the paper can be found in the paper_code directory. Please note that the included jeep object does not meet the realism standards set in the paper. Unfortunately, the school bus object shown in the GIF is proprietary and cannot be distributed with the tool. A browser port of the tool (created by Zhitao Gong) can be found here.

If you use this tool for your own research, please cite:

@article{alcorn-2019-strike-with-a-pose,
    Author = {Alcorn, Michael A. and Li, Qi and Gong, Zhitao and Wang, Chengfei and Mai, Long and Ku, Wei-Shinn and Nguyen, Anh},
    Title = {{Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects}},
    Journal = {Conference on Computer Vision and Pattern Recognition (CVPR)},
    Year = {2019}
}

Table of Contents

  • Requirements
  • Install/Run
  • For Experts
  • Additional Features

Requirements

  • Git (Linux users only)
  • OpenGL ≥ 3.3 (many computers satisfy this requirement)
    • On Linux, you can check your OpenGL version with the following command (requires glx-utils on Fedora or mesa-utils on Ubuntu):
    glxinfo | grep "OpenGL version"
    
  • Python 3 (Mac, Windows); a quick version check is shown after this list
    • Mac users will also need to run the following command in the terminal so that Python can verify SSL certificates when downloading files (the Python from the official macOS installer cannot do this until the command is run):
    /Applications/Python\ 3.x/Install\ Certificates.command
    
    where x is the minor version of your particular Python 3 install. If you used the above link to install Python, the file will be at:
    /Applications/Python\ 3.6/Install\ Certificates.command
    
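If you are not sure which Python 3 you have, here is a quick check to run in the terminal (this assumes python3 is on your PATH):

python3 --version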

Install/Run

Note: the tool takes a little while to start the first time it's run because it has to download the neural network.

Linux

In the terminal, enter the following commands:

# Clone the strike-with-a-pose repository.
git clone https://github.com/airalcorn2/strike-with-a-pose.git
# Move to the strike-with-a-pose directory.
cd strike-with-a-pose
# Install strike-with-a-pose.
pip3 install .
# Run strike-with-a-pose.
strike-with-a-pose

You can also run the tool (after installing) by starting Python and entering the following:

from strike_with_a_pose import app
app.run_gui()

Mac

  1. Click here to download the tool ZIP.
  2. Extract the ZIP somewhere convenient (like your desktop).
  3. Double-click install.command in the strike-with-a-pose-master/ directory.
  4. Double-click strike-with-a-pose.command in the strike-with-a-pose-master/run/ directory.

Windows

  1. Click here to download the tool ZIP.
  2. Extract the ZIP somewhere convenient (like your desktop).
  3. Double-click install.bat in the strike-with-a-pose-master\ directory.
  • Note: you may need to click "More info" and then "Run anyway".
  4. Double-click strike-with-a-pose.bat in the strike-with-a-pose-master\run\ directory.

For Experts

Using Different Objects and Backgrounds

Users can test their own objects and backgrounds in Strike (With) A Pose by:

  1. Adding the appropriate files to the scene_files/ directory.
  2. Modifying the BACKGROUND_F, OBJ_F, and MTL_F variables in settings.py accordingly (a sketch of this edit appears after these steps).
  3. Running the following command inside the strike-with-a-pose/ directory:
PYTHONPATH=strike_with_a_pose python3 -m strike_with_a_pose.app
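A minimal sketch of the step 2 edit, assuming these variables hold the corresponding file names (the names below are hypothetical; use the names of the files you placed in scene_files/, and check the existing settings.py for the exact expected form):

# In strike_with_a_pose/settings.py (hypothetical file names).
BACKGROUND_F = "my_background.jpg"
OBJ_F = "my_object.obj"
MTL_F = "my_object.mtl"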

Using Different Machine Learning Models

Users can experiment with different machine learning models in Strike (With) A Pose by:

  1. Defining a model class that implements the get_gui_comps, init_scene_comps, predict, render, and clear functions (e.g., image_classifier.py, object_detector.py, image_captioner.py, and class_activation_mapper.py [with major contributions by Qi Li]); a bare-bones skeleton appears after these steps.
  2. Setting the MODEL variable in settings.py accordingly.
  3. Running the following command inside the strike-with-a-pose/ directory:
PYTHONPATH=strike_with_a_pose python3 -m strike_with_a_pose.app
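A bare-bones sketch of the step 1 model class. Only the five required method names come from this README; the signatures, arguments, and return values below are assumptions, so use the bundled models (e.g., image_classifier.py) as the real reference:

# my_model.py -- hypothetical custom model for Strike (With) A Pose.
class MyModel:
    def get_gui_comps(self):
        # Return the GUI components the tool should display for this model
        # (exact component types are an assumption).
        return []

    def init_scene_comps(self):
        # One-time setup of any scene components the model needs.
        pass

    def predict(self, image):
        # Run the model on the current render (the image argument is assumed).
        pass

    def render(self):
        # Draw any model-specific overlays on the scene.
        pass

    def clear(self):
        # Reset the model's state/output between predictions.
        pass

With the class in place, point the MODEL variable in settings.py at it (step 2); how MODEL is expected to reference the class is also best confirmed against the bundled models.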

To use the image captioner model, first download and install the COCO API:

git clone https://github.com/pdollar/coco.git
cd coco/PythonAPI/
make
python3 setup.py build
python3 setup.py install
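Once the build finishes, you can confirm the install with a quick import check (the COCO Python API installs as the pycocotools package):

python3 -c "from pycocotools.coco import COCO; print('COCO API installed')"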

Image Classifier

Object Detector

"The Elephant in the Room"-like (Rosenfeld et al., 2018) examples:

Image Captioner

Class Activation Mapper

"Learning Deep Features for Discriminative Localization"-like (Zhou et al., 2016) examples.

Additional Features

Press L to toggle Live mode. When on, the machine learning model continuously generates predictions. Note: Live mode can cause considerable lag unless you have (1) a powerful GPU, (2) CUDA installed, and (3) a PyTorch build with CUDA support.

Press X to toggle the object's teXture, which is useful for making sure the directional light is properly interacting with your object. If the light looks funny, swapping/negating the vertex normal coordinates can usually fix it. See the fix_normals.py script for an example.
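For instance, here is a minimal sketch of that kind of fix (this is not the repository's fix_normals.py; the file names are hypothetical, and which normal components you need to swap or negate depends on your particular model):

# Negate every vertex normal ("vn") line in a Wavefront .obj file.
in_path, out_path = "my_object.obj", "my_object_fixed.obj"  # hypothetical names
with open(in_path) as f_in, open(out_path, "w") as f_out:
    for line in f_in:
        if line.startswith("vn "):
            _, x, y, z = line.split()
            line = f"vn {-float(x)} {-float(y)} {-float(z)}\n"
        f_out.write(line)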

Press F to toggle back-Face culling, which is necessary when rendering certain models (like some found in ShapeNet).

Press I to bring up the Individual component selector. This feature allows you to display individual object components (as defined by each newmtl in the .mtl file) by themselves.

Press C to Capture a screenshot of the current render. Screenshots are saved in the directory where the tool is started.
