
srama2512 / sidekicks

License: MIT
Sidekick Policy Learning for Active Visual Exploration (ECCV 2018)

Programming Languages

Python
139335 projects - #7 most used programming language
Jupyter Notebook
11667 projects
Shell
77523 projects

Projects that are alternatives to or similar to sidekicks

contextual
Contextual Bandits in R - simulation and evaluation of Multi-Armed Bandit Policies
Stars: ✭ 72 (+200%)
Mutual labels:  exploration, reinforcement
rl
Reinforcement learning algorithms implemented using Keras and OpenAI Gym
Stars: ✭ 14 (-41.67%)
Mutual labels:  reinforcement
Visual-Attention-Model
Chainer implementation of DeepMind's Visual Attention Model paper
Stars: ✭ 27 (+12.5%)
Mutual labels:  visual
VisualBTC
Visual bitcoin private key generator - a tool for safe bitcoin private key generation with the physical coin, or create funny "patterns" keys for gifts to your friends.
Stars: ✭ 29 (+20.83%)
Mutual labels:  visual
caltech samaritan
🚁〰️ Drone SLAM project for Caltech's ME 134 Autonomy class.
Stars: ✭ 35 (+45.83%)
Mutual labels:  exploration
improviz
A live-coded visual performance tool
Stars: ✭ 85 (+254.17%)
Mutual labels:  visual
dify
A fast pixel-by-pixel image comparison tool in Rust
Stars: ✭ 41 (+70.83%)
Mutual labels:  visual
React-Visual-Novel
A visual novel application made with React.
Stars: ✭ 26 (+8.33%)
Mutual labels:  visual
brockly
A Visual Go code generator
Stars: ✭ 55 (+129.17%)
Mutual labels:  visual
VisualBasicObfuscator
Visual Basic Code universal Obfuscator intended to be used during penetration testing assignments.
Stars: ✭ 115 (+379.17%)
Mutual labels:  visual
bark-ml
Gym environments and agents for autonomous driving.
Stars: ✭ 68 (+183.33%)
Mutual labels:  reinforcement
DataStore
Visual develop tool of creating mocked Json
Stars: ✭ 30 (+25%)
Mutual labels:  visual
SettingsUI
Windows 11 settings page in WinUI 3 applications ported from Powertoys
Stars: ✭ 95 (+295.83%)
Mutual labels:  visual
SQLServerTools
This repo is the home of various SQL-Server-Tools
Stars: ✭ 28 (+16.67%)
Mutual labels:  visual
node-red-contrib-FIWARE official
FIWARE-Node-Red integration supporting NGSI-LD
Stars: ✭ 14 (-41.67%)
Mutual labels:  visual
VB.NET
🌐 This repository includes useful examples of Visual Basic completed in Visual Studio 2017 Enterprise Edition, along with diploma work dating back to 2013. 👔
Stars: ✭ 35 (+45.83%)
Mutual labels:  visual
Cerebrum
Crossmodal Supervised Learning Toolkit using High-Performance Extreme Learning Machines over the audio-visual-textual data
Stars: ✭ 41 (+70.83%)
Mutual labels:  visual
Deep-Learning-Mahjong---
Reinforcement learning (RL) implementation of the imperfect-information game Mahjong, using Markov decision processes to predict future game states
Stars: ✭ 45 (+87.5%)
Mutual labels:  reinforcement
code summarization public
source code for 'Improving automatic source code summarization via deep reinforcement learning'
Stars: ✭ 71 (+195.83%)
Mutual labels:  reinforcement
rubicon-ml
Capture all information throughout your model's development in a reproducible way and tie results directly to the model code!
Stars: ✭ 81 (+237.5%)
Mutual labels:  exploration

Emergence of exploratory look-around behaviors through active observation completion

A journal version of this work, together with our prior work Learning to Look Around: Intelligently Exploring Unseen Environments for Unknown Tasks, has been published in Science Robotics 2019.

Emergence of exploratory look-around behaviors through active observation completion
Santhosh K. Ramakrishnan, Dinesh Jayaraman, Kristen Grauman
Science Robotics 2019

A cleaned version of this codebase, along with new transfer tasks, is available at https://github.com/srama2512/visual-exploration.

Sidekick Policy Learning

This repository contains code and data for the paper

Sidekick Policy Learning for Active Visual Exploration
Santhosh K. Ramakrishnan, Kristen Grauman
ECCV 2018

Setup

  • Create and activate a conda environment with Python 2.7:
conda create -n spl python=2.7
source activate spl
  • Clone this project repository and set up requirements using pip.
git clone https://github.com/srama2512/sidekicks.git
cd sidekicks
pip install -r requirements.txt
  • Download preprocessed SUN360 and ModelNet data.
wget http://vision.cs.utexas.edu/projects/sidekicks/data.zip
unzip data.zip
  • Sidekick scores for ours-rew, ours-demo, and rnd-rewards on both datasets have been provided here. The one-view model used to generate them has also been provided.
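
If you want to sanity-check the downloaded HDF5 files, the standard HDF5 command-line tools can list their contents. This is only a convenience sketch: it assumes h5ls is installed (e.g., from the hdf5-tools package), and the paths mirror those used in the commands later in this README.

# List the top-level datasets in the preprocessed data and scores
h5ls data/sun360/sun360_processed.h5
h5ls scores/sun360/ours-rew-scores.h5
h5ls scores/sun360/ours-demo-scores.h5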

Evaluating pre-trained models

All the pre-trained models have been provided here. To evaluate them, download and extract them to the models directory. To reproduce the results from the paper:

wget http://vision.cs.utexas.edu/projects/sidekicks/models.zip
unzip models.zip
sh evaluation_script_final.sh
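
To confirm the archive extracted into the layout the commands below expect, you can list the per-dataset model directories (the expected filenames are taken from the evaluation examples in this README):

ls models/sun360          # expect one-view.net, ltla.net, rnd-actions.net
ls models/modelnet_hard   # expect one-view.net, among others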

Evaluation examples

  • Evaluating SUN360 one-view baseline on the test data with avg metric:
python eval.py --h5_path data/sun360/sun360_processed.h5 --dataset 0 \
				  --model_path models/sun360/one-view.net --T 1 --M 8 --N 4 \
				  --start_view 2 --save_path dummy/ 
  • Evaluating SUN360 ltla baseline on the test data with avg metric:
python eval.py --h5_path data/sun360/sun360_processed.h5 --dataset 0 \
				  --model_path models/sun360/ltla.net --T 4 --M 8 --N 4 \
				  --start_view 2 --save_path dummy/ 
  • Evaluating SUN360 ltla baseline on the test data with adv metric:
python eval.py --h5_path data/sun360/sun360_processed.h5 --dataset 0 \
				  --model_path models/sun360/ltla.net --T 4 --M 8 --N 4 \
				  --start_view 2 --save_path dummy/ 
  • Evaluating SUN360 rnd-actions baseline on the test data with avg metric (a loop over the three SUN360 runs above is sketched after this list):
python eval.py --h5_path data/sun360/sun360_processed.h5 --dataset 0 \
				  --model_path models/sun360/rnd-actions.net --T 4 --M 8 --N 4 \
				  --start_view 2 --actorType random --save_path dummy/ 
  • Evaluating ModelNet Hard one-view baseline on test (seen and unseen) data with avg metric:
python eval.py --h5_path data/modelnet_hard/modelnet30_processed.h5 \
				  --h5_path_unseen data/modelnet_hard/modelnet10_processed.h5 --dataset 1 \
				  --model_path models/modelnet_hard/one-view.net --T 1 --M 9 --N 5 \
				  --start_view 2 --save_path dummy/
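
Since the SUN360 examples above differ only in the model file, horizon T, and actor type, they can be batched with a small shell loop. This is just a convenience sketch: the model filenames come from the examples above, only rnd-actions needs --actorType random, and each run is given its own save path to avoid overwriting outputs.

for model in one-view ltla rnd-actions; do
    T=4; extra=""
    if [ "$model" = "one-view" ]; then T=1; fi
    if [ "$model" = "rnd-actions" ]; then extra="--actorType random"; fi
    python eval.py --h5_path data/sun360/sun360_processed.h5 --dataset 0 \
        --model_path models/sun360/${model}.net --T $T --M 8 --N 4 \
        --start_view 2 --save_path dummy/${model}/ $extra
done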

Training models

Ensure that the pre-trained models and pre-computed scores are downloaded and extracted.

  • Training one-view model on SUN360 with default settings:
python main.py --T 1 --training_setting 0 --epochs 100 \
				  --save_path saved_models/sun360/one-view
  • Training ltla baseline on SUN360 with default settings (starting from pre-trained one-view model):
python main.py --T 4 --training_setting 1 --epochs 1000 \
				  --save_path saved_models/sun360/ltla/  \
				  --load_model models/sun360/one-view.net
  • Training ours-rew on SUN360 with default settings (with pre-computed score):
python main.py --T 4 --training_setting 1 --epochs 1000 \
				  --save_path saved_models/sun360/ours-rew/ \
				  --load_model models/sun360/one-view.net --expert_rewards True \
				  --rewards_h5_path scores/sun360/ours-rew-scores.h5
  • Training ours-demo on SUN360 with default settings (with pre-computed score):
python main.py --T 4 --training_setting 1 --epochs 1000 \
				  --save_path saved_models/sun360/ours-demo/ \
				  --load_model models/sun360/one-view.net --expert_trajectories True \
				  --utility_h5_path scores/sun360/ours-demo-scores.h5
  • Training ltla baseline on ModelNet Hard with default settings (starting from pre-trained one-view model):
python main.py --h5_path data/modelnet_hard/modelnet30_processed.h5 \
				  --training_setting 1 --dataset 1 --T 4 --M 9 --N 5 \
				  --load_model models/modelnet_hard/one-view.net \
				  --save_path saved_models/modelnet_hard/ltla/

The other ModelNet Hard models can be trained similarly to the SUN360 models. To train actor-critic models, set --baselineType critic. To add full observability to the critic (for asymm-ac), set --critic_full_obs True, as in the sketch below.
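
For example, an asymm-ac training run on SUN360 would combine these flags with the ltla command above. This is a sketch only; the flag combination is assumed from the description above, and the save path is arbitrary:

python main.py --T 4 --training_setting 1 --epochs 1000 \
				  --save_path saved_models/sun360/asymm-ac/ \
				  --load_model models/sun360/one-view.net \
				  --baselineType critic --critic_full_obs True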

Visualization

From the repository directory, start jupyter notebook and open visualize_policy_paper.ipynb. Complete the TODOs mentioned in the comments (set the correct paths) and run the entire notebook. It will generate TensorBoard files containing visualized heatmaps for several examples.
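
The generated event files can then be viewed with TensorBoard. The log directory below is a placeholder; point it at whichever output path you configured in the notebook:

tensorboard --logdir <path-to-generated-tensorboard-files>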

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].