
ChanganVR / Awesome Embodied Vision

License: MIT
Reading list for research topics in embodied vision


Awesome Embodied Vision

A curated list of embodied vision resources.

Inspired by the awesome list thing and awesome-vln.

By Changan Chen ([email protected]), Department of Computer Science at the University of Texas at Austin, with help from Tushar Nagarajan and Santhosh Kumar Ramakrishnan. If you see papers missing from the list, please send me an email or open a pull request (see the format below).

Table of Contents

  • Contributing
  • Papers
    • PointGoal Navigation
    • Audio-Visual Navigation
    • ObjectGoal Navigation
    • ImageGoal Navigation
    • Vision-Language Navigation
    • Multiagent Navigation
    • Visual Exploration
    • Embodied Question Answering
    • Visual Interactions
    • Sim-to-real
  • Datasets
  • Environments
  • MISC

Contributing

When sending PRs, please add the new paper at the correct chronological position, using the following format:

* **Paper Title** <br>
*Author(s)* <br>
Conference, Year. [[Paper]](link) [[Code]](link) [[Website]](link)
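As a worked illustration of the entry format above, a contributor could sanity-check an entry before opening a PR. The script below is a hypothetical helper, not part of this repository; the regex simply mirrors the template (`* **Title** <br> *Authors* <br> Conference, Year.`), and it also checks that a list of entries keeps years in chronological order, as the section requests.

```python
import re

# Hypothetical pre-PR check (illustrative only, not part of the repository):
# verifies that a candidate entry matches the contribution format above and
# that a sequence of entries is in chronological (non-decreasing year) order.
ENTRY_RE = re.compile(
    r"\* \*\*(?P<title>.+?)\*\* <br>\s*"      # * **Paper Title** <br>
    r"\*(?P<authors>.+?)\* <br>\s*"           # *Author(s)* <br>
    r"(?P<venue>[^,]+), (?P<year>\d{4})\.",   # Conference, Year.
)

def parse_entry(entry):
    """Return (title, venue, year), or None if the entry is malformed."""
    m = ENTRY_RE.match(entry.strip())
    if m is None:
        return None
    return m.group("title"), m.group("venue").strip(), int(m.group("year"))

def in_chronological_order(entries):
    """True if every entry parses and the years never decrease."""
    parsed = [parse_entry(e) for e in entries]
    if any(p is None for p in parsed):
        return False
    years = [p[2] for p in parsed]
    return all(a <= b for a, b in zip(years, years[1:]))

# An entry from the list, written in the required format ("link" is the
# placeholder from the template, not a real URL).
habitat = (
    "* **Habitat: A Platform for Embodied AI Research** <br>\n"
    "*Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, et al.* <br>\n"
    "ICCV, 2019. [[Paper]](link)"
)
```

Running `parse_entry(habitat)` yields the title, venue, and year, and `in_chronological_order` can be applied to all entries of one section at once.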

Papers

PointGoal Navigation

  • Cognitive Mapping and Planning for Visual Navigation
    Saurabh Gupta, Varun Tolani, James Davidson, Sergey Levine, Rahul Sukthankar, Jitendra Malik
    CVPR, 2017. [Paper]

  • Habitat: A Platform for Embodied AI Research
    Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, Dhruv Batra
    ICCV, 2019. [Paper] [Code] [Website]

  • SplitNet: Sim2Sim and Task2Task Transfer for Embodied Visual Navigation
    Daniel Gordon, Abhishek Kadian, Devi Parikh, Judy Hoffman, Dhruv Batra
    ICCV, 2019. [Paper] [Code]

  • A Behavioral Approach to Visual Navigation with Graph Localization Networks
    Kevin Chen, Juan Pablo de Vicente, Gabriel Sepulveda, Fei Xia, Alvaro Soto, Marynel Vazquez, Silvio Savarese
    RSS, 2019. [Paper] [Code] [Website]

  • DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames
    Erik Wijmans, Abhishek Kadian, Ari Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, Dhruv Batra
    ICLR, 2020. [Paper] [Code] [Website]

  • Learning to Explore using Active Neural SLAM
    Devendra Singh Chaplot, Dhiraj Gandhi, Saurabh Gupta, Abhinav Gupta, Ruslan Salakhutdinov
    ICLR, 2020. [Paper] [Code] [Website]

  • Auxiliary Tasks Speed Up Learning PointGoal Navigation
    Joel Ye, Dhruv Batra, Erik Wijmans, Abhishek Das
    CoRL, 2020. [Paper] [Code]

  • Occupancy Anticipation for Efficient Exploration and Navigation
    Santhosh K. Ramakrishnan, Ziad Al-Halah, Kristen Grauman
    ECCV, 2020. [Paper] [Code] [Website]

Audio-Visual Navigation

  • Audio-Visual Embodied Navigation
    Changan Chen*, Unnat Jain*, Carl Schissler, Sebastia Vicenc Amengual Gari, Ziad Al-Halah, Vamsi Krishna Ithapu, Philip Robinson, Kristen Grauman
    ECCV, 2020. [Paper] [Website]

  • Look, Listen, and Act: Towards Audio-Visual Embodied Navigation
    Chuang Gan, Yiwei Zhang, Jiajun Wu, Boqing Gong, Joshua B. Tenenbaum
    ICRA, 2020. [Paper] [Website]

  • Learning to Set Waypoints for Audio-Visual Navigation
    Changan Chen, Sagnik Majumder, Ziad Al-Halah, Ruohan Gao, Santhosh K. Ramakrishnan, Kristen Grauman
    arXiv, 2020. [Paper] [Website]

  • Semantic Audio-Visual Navigation
    Changan Chen, Ziad Al-Halah, Kristen Grauman
    arXiv, 2020. [Paper] [Website]

ObjectGoal Navigation

  • Cognitive Mapping and Planning for Visual Navigation
    Saurabh Gupta, Varun Tolani, James Davidson, Sergey Levine, Rahul Sukthankar, Jitendra Malik
    CVPR, 2017. [Paper]

  • Visual Semantic Navigation using Scene Priors
    Wei Yang, Xiaolong Wang, Ali Farhadi, Abhinav Gupta, Roozbeh Mottaghi
    ICLR, 2019. [Paper]

  • Visual Representations for Semantic Target Driven Navigation
    Arsalan Mousavian, Alexander Toshev, Marek Fiser, Jana Kosecka, Ayzaan Wahid, James Davidson
    ICRA, 2019. [Paper] [Code]

  • Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning
    Mitchell Wortsman, Kiana Ehsani, Mohammad Rastegari, Ali Farhadi, Roozbeh Mottaghi
    CVPR, 2019. [Paper] [Code] [Website]

  • Bayesian Relational Memory for Semantic Visual Navigation
    Yi Wu, Yuxin Wu, Aviv Tamar, Stuart Russell, Georgia Gkioxari, Yuandong Tian
    ICCV, 2019. [Paper] [Code]

  • Situational Fusion of Visual Representation for Visual Navigation
    William B. Shen, Danfei Xu, Yuke Zhu, Leonidas J. Guibas, Li Fei-Fei, Silvio Savarese
    ICCV, 2019. [Paper]

  • Object Goal Navigation using Goal-Oriented Semantic Exploration
    Devendra Singh Chaplot, Dhiraj Gandhi, Abhinav Gupta*, Ruslan Salakhutdinov*
    NeurIPS, 2020. [Paper] [Website]

  • Learning Object Relation Graph and Tentative Policy for Visual Navigation
    Heming Du, Xin Yu, Liang Zheng
    ECCV, 2020. [Paper]

  • Semantic Visual Navigation by Watching YouTube Videos
    Matthew Chang, Arjun Gupta, Saurabh Gupta
    arXiv, 2020. [Paper] [Website]

  • ObjectNav Revisited: On Evaluation of Embodied Agents Navigating to Objects
    Dhruv Batra, Aaron Gokaslan, Aniruddha Kembhavi, Oleksandr Maksymets, Roozbeh Mottaghi, Manolis Savva, Alexander Toshev, Erik Wijmans
    arXiv, 2020. [Paper]

  • MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation
    Saim Wani*, Shivansh Patel*, Unnat Jain*, Angel X. Chang, Manolis Savva
    NeurIPS, 2020. [Paper] [Code] [Website]

  • Learning hierarchical relationships for object-goal navigation
    Yiding Qiu, Anwesan Pal, Henrik I. Christensen
    CoRL, 2020. [Paper]

ImageGoal Navigation

  • Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning
    Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, Ali Farhadi
    ICRA, 2017. [Paper] [Website]

  • Semi-Parametric Topological Memory for Navigation
    Nikolay Savinov*, Alexey Dosovitskiy*, Vladlen Koltun
    ICLR, 2018. [Paper] [Code] [Website]

  • Neural Topological SLAM for Visual Navigation
    Devendra Singh Chaplot, Ruslan Salakhutdinov, Abhinav Gupta, Saurabh Gupta
    CVPR, 2020. [Paper] [Website]

Vision-Language Navigation

  • Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments
    Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, Anton van den Hengel
    CVPR, 2018. [Paper] [Code] [Website]

  • Look Before You Leap: Bridging Model-Free and Model-Based Reinforcement Learning for Planned-Ahead Vision-and-Language Navigation
    Xin Wang, Wenhan Xiong, Hongmin Wang, William Yang Wang
    ECCV, 2018. [Paper]

  • Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction
    Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, Yoav Artzi
    EMNLP, 2018. [Paper]

  • Speaker-Follower Models for Vision-and-Language Navigation
    Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell
    NeurIPS, 2018. [Paper] [Code] [Website]

  • Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation
    Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, Lei Zhang
    CVPR, 2019. [Paper]

  • Self-Monitoring Navigation Agent via Auxiliary Progress Estimation
    Chih-Yao Ma, Jiasen Lu, Zuxuan Wu, Ghassan AlRegib, Zsolt Kira, Richard Socher, Caiming Xiong
    ICLR, 2019. [Paper] [Code] [Website]

  • The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation
    Chih-Yao Ma, Zuxuan Wu, Ghassan AlRegib, Caiming Xiong, Zsolt Kira
    CVPR, 2019. [Paper] [Code] [Website]

  • TOUCHDOWN: Natural Language Navigation and Spatial Reasoning in Visual Street Environments
    Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, Yoav Artzi
    CVPR, 2019. [Paper] [Code]

  • Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation
    Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, Siddhartha Srinivasa
    CVPR, 2019. [Paper] [Code] [Video]

  • Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention
    Khanh Nguyen, Debadeepta Dey, Chris Brockett, Bill Dolan
    CVPR, 2019. [Paper] [Code] [Video]

  • Help, Anna! Visual Navigation with Natural Multimodal Assistance via Retrospective Curiosity-Encouraging Imitation Learning
    Khanh Nguyen, Hal Daumé III
    EMNLP, 2019. [Paper] [Code] [Video]

  • Chasing Ghosts: Instruction Following as Bayesian State Tracking
    Peter Anderson, Ayush Shrivastava, Devi Parikh, Dhruv Batra, Stefan Lee
    NeurIPS, 2019. [Paper] [Code] [Video]

  • Embodied Vision-and-Language Navigation with Dynamic Convolutional Filters
    Federico Landi, Lorenzo Baraldi, Massimiliano Corsini, Rita Cucchiara
    BMVC, 2019. [Paper] [Code]

  • Transferable Representation Learning in Vision-and-Language Navigation
    Haoshuo Huang, Vihan Jain, Harsh Mehta, Alexander Ku, Gabriel Magalhaes, Jason Baldridge, Eugene Ie
    ICCV, 2019. [Paper]

  • Unsupervised Reinforcement Learning of Transferable Meta-Skills for Embodied Navigation
    Juncheng Li, Xin Wang, Siliang Tang, Haizhou Shi, Fei Wu, Yueting Zhuang, William Yang Wang
    CVPR, 2020. [Paper]

  • Vision-Language Navigation with Self-Supervised Auxiliary Reasoning Tasks
    Fengda Zhu, Yi Zhu, Xiaojun Chang, Xiaodan Liang
    CVPR, 2020. [Paper]

  • Perceive, Transform, and Act: Multi-Modal Attention Networks for Vision-and-Language Navigation
    Federico Landi, Lorenzo Baraldi, Marcella Cornia, Massimiliano Corsini, Rita Cucchiara
    arXiv, 2019. [Paper] [Code]

  • Just Ask: An Interactive Learning Framework for Vision and Language Navigation
    Ta-Chung Chi, Mihail Eric, Seokhwan Kim, Minmin Shen, Dilek Hakkani-tur
    AAAI, 2020. [Paper]

  • Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training
    Weituo Hao, Chunyuan Li, Xiujun Li, Lawrence Carin, Jianfeng Gao
    CVPR, 2020. [Paper] [Code]

  • Multi-View Learning for Vision-and-Language Navigation
    Qiaolin Xia, Xiujun Li, Chunyuan Li, Yonatan Bisk, Zhifang Sui, Jianfeng Gao, Yejin Choi, Noah A. Smith
    arXiv, 2020. [Paper]

  • Vision-Dialog Navigation by Exploring Cross-modal Memory
    Yi Zhu, Fengda Zhu, Zhaohuan Zhan, Bingqian Lin, Jianbin Jiao, Xiaojun Chang, Xiaodan Liang
    CVPR, 2020. [Paper] [Code]

  • Take the Scenic Route: Improving Generalization in Vision-and-Language Navigation
    Felix Yu, Zhiwei Deng, Karthik Narasimhan, Olga Russakovsky
    arXiv, 2020. [Paper]

  • Sub-Instruction Aware Vision-and-Language Navigation
    Yicong Hong, Cristian Rodriguez-Opazo, Qi Wu, Stephen Gould
    arXiv, 2020. [Paper]

  • Beyond the Nav-Graph: Vision-and-Language Navigation in Continuous Environments
    Jacob Krantz, Erik Wijmans, Arjun Majumdar, Dhruv Batra, Stefan Lee
    ECCV, 2020. [Paper] [Code] [Website]

  • Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling
    Tsu-Jui Fu, Xin Eric Wang, Matthew Peterson, Scott Grafton, Miguel Eckstein, William Yang Wang
    ECCV, 2020. [Paper] [Code] [Website]

  • Improving Vision-and-Language Navigation with Image-Text Pairs from the Web
    Arjun Majumdar, Ayush Shrivastava, Stefan Lee, Peter Anderson, Devi Parikh, Dhruv Batra
    ECCV, 2020. [Paper]

  • Soft Expert Reward Learning for Vision-and-Language Navigation
    Hu Wang, Qi Wu, Chunhua Shen
    ECCV, 2020. [Paper]

  • Active Visual Information Gathering for Vision-Language Navigation
    Hanqing Wang, Wenguan Wang, Tianmin Shu, Wei Liang, Jianbing Shen
    ECCV, 2020. [Paper] [Code]

  • Environment-agnostic Multitask Learning for Natural Language Grounded Navigation
    Xin Eric Wang, Vihan Jain, Eugene Ie, William Yang Wang, Zornitsa Kozareva, Sujith Ravi
    ECCV, 2020. [Paper]

  • Language and Visual Entity Relationship Graph for Agent Navigation
    Yicong Hong, Cristian Rodriguez, Yuankai Qi, Qi Wu, Stephen Gould
    NeurIPS, 2020. [Paper] [Code]

  • Counterfactual Vision-and-Language Navigation: Unravelling the Unseen
    Amin Parvaneh, Ehsan Abbasnejad, Damien Teney, Javen Qinfeng Shi, Anton van den Hengel
    NeurIPS, 2020. [Paper]

  • Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation
    Zhiwei Deng, Karthik Narasimhan, Olga Russakovsky
    NeurIPS, 2020. [Paper]

Multiagent Navigation

  • Two Body Problem: Collaborative Visual Task Completion
    Unnat Jain*, Luca Weihs*, Eric Kolve, Mohammad Rastegari, Svetlana Lazebnik, Ali Farhadi, Alexander Schwing, Aniruddha Kembhavi
    CVPR, 2019. [Paper] [Website]

  • A Cordial Sync: Going Beyond Marginal Policies For Multi-Agent Embodied Tasks
    Unnat Jain*, Luca Weihs*, Eric Kolve, Ali Farhadi, Svetlana Lazebnik, Aniruddha Kembhavi, Alexander Schwing
    ECCV, 2020. [Paper] [Code] [Website]

Visual Exploration

  • Curiosity-driven Exploration by Self-supervised Prediction
    Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell
    ICML, 2017. [Paper] [Code] [Website]

  • Learning to Look Around: Intelligently Exploring Unseen Environments for Unknown Tasks
    Dinesh Jayaraman, Kristen Grauman
    CVPR, 2018. [Paper]

  • Sidekick Policy Learning for Active Visual Exploration
    Santhosh K. Ramakrishnan, Kristen Grauman
    ECCV, 2018. [Paper] [Code] [Website]

  • Learning Exploration Policies for Navigation
    Tao Chen, Saurabh Gupta, Abhinav Gupta
    ICLR, 2019. [Paper] [Code] [Website]

  • Episodic Curiosity through Reachability
    Nikolay Savinov, Anton Raichuk, Damien Vincent, Raphael Marinier, Marc Pollefeys, Timothy Lillicrap, Sylvain Gelly
    ICLR, 2019. [Paper] [Code] [Website]

  • Emergence of Exploratory Look-Around Behaviors through Active Observation Completion
    Santhosh K. Ramakrishnan*, Dinesh Jayaraman*, Kristen Grauman
    Science Robotics, 2019. [Paper] [Code] [Website]

  • Scene Memory Transformer for Embodied Agents in Long-Horizon Tasks
    Kuan Fang, Alexander Toshev, Li Fei-Fei, Silvio Savarese
    CVPR, 2019. [Paper] [Website]

  • Learning to Explore using Active Neural SLAM
    Devendra Singh Chaplot, Dhiraj Gandhi, Saurabh Gupta, Abhinav Gupta, Ruslan Salakhutdinov
    ICLR, 2020. [Paper] [Code] [Website]

  • Semantic Curiosity for Active Visual Learning
    Devendra Singh Chaplot, Helen Jiang, Saurabh Gupta, Abhinav Gupta
    ECCV, 2020. [Paper] [Website]

  • See, Hear, Explore: Curiosity via Audio-Visual Association
    Victoria Dean, Shubham Tulsiani, Abhinav Gupta
    arXiv, 2020. [Paper] [Website]

  • Occupancy Anticipation for Efficient Exploration and Navigation
    Santhosh K. Ramakrishnan, Ziad Al-Halah, Kristen Grauman
    ECCV, 2020. [Paper] [Code] [Website]

Embodied Question Answering

  • Embodied Question Answering
    Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, Dhruv Batra
    CVPR, 2018. [Paper] [Code] [Website]

  • Multi-Target Embodied Question Answering
    Licheng Yu, Xinlei Chen, Georgia Gkioxari, Mohit Bansal, Tamara L. Berg, Dhruv Batra
    CVPR, 2019. [Paper]

  • Embodied Question Answering in Photorealistic Environments with Point Cloud Perception
    Erik Wijmans*, Samyak Datta*, Oleksandr Maksymets*, Abhishek Das, Georgia Gkioxari, Stefan Lee, Irfan Essa, Devi Parikh, Dhruv Batra
    CVPR, 2019. [Paper]

Visual Interactions

  • Visual Semantic Planning using Deep Successor Representations
    Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, Ali Farhadi
    ICCV, 2017. [Paper]

  • IQA: Visual Question Answering in Interactive Environments
    Daniel Gordon, Aniruddha Kembhavi, Mohammad Rastegari, Joseph Redmon, Dieter Fox, Ali Farhadi
    CVPR, 2018. [Paper] [Code] [Website]

  • ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
    Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox
    CVPR, 2020. [Paper] [Code] [Website]

  • Learning About Objects by Learning to Interact with Them
    Martin Lohmann, Jordi Salvador, Aniruddha Kembhavi, Roozbeh Mottaghi
    NeurIPS, 2020. [Paper]

  • Learning Affordance Landscapes for Interaction Exploration in 3D Environments
    Tushar Nagarajan, Kristen Grauman
    NeurIPS, 2020. [Paper]

Sim-to-real

  • Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World
    Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, Pieter Abbeel
    IROS, 2017. [Paper]

  • Sim-to-Real Transfer for Vision-and-Language Navigation
    Peter Anderson, Ayush Shrivastava, Joanne Truong, Arjun Majumdar, Devi Parikh, Dhruv Batra, Stefan Lee
    CoRL, 2020. [Paper]

Datasets

  • A Dataset for Developing and Benchmarking Active Vision
    Phil Ammirato, Patrick Poirson, Eunbyung Park, Jana Kosecka, Alexander C. Berg
    ICRA, 2017. [Paper] [Code] [Website]

  • AI2-THOR: An Interactive 3D Environment for Visual AI
    Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, Ali Farhadi
    arXiv, 2017. [Paper] [Code] [Website]

  • Matterport3D: Learning from RGB-D Data in Indoor Environments
    Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Nießner, Manolis Savva, Shuran Song, Andy Zeng, Yinda Zhang
    3DV, 2017. [Paper] [Code] [Website]

  • Gibson Env: Real-World Perception for Embodied Agents
    Fei Xia, Amir Zamir, Zhi-Yang He, Alexander Sax, Jitendra Malik, Silvio Savarese
    CVPR, 2018. [Paper] [Code] [Website]

  • The Replica Dataset: A Digital Replica of Indoor Spaces
    Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J. Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton Clarkson, Mingfei Yan, Brian Budge, Yajie Yan, Xiaqing Pan, June Yon, Yuyang Zou, Kimberly Leon, Nigel Carter, Jesus Briales, Tyler Gillingham, Elias Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat, Renzo De Nardi, Michael Goesele, Steven Lovegrove, Richard Newcombe
    arXiv, 2019. [Paper] [Code]

  • ActioNet: An Interactive End-to-End Platform for Task-Based Data Collection and Augmentation in 3D Environments
    Jiafei Duan, Samson Yu, Hui Li Tan, Cheston Tan
    ICIP, 2020. [Paper] [Code]

Environments

  • AI2-THOR: An Interactive 3D Environment for Visual AI
    Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, Ali Farhadi
    arXiv, 2017. [Paper] [Code] [Website]

  • Building Generalizable Agents with a Realistic and Rich 3D Environment (House3D)
    Yi Wu, Yuxin Wu, Georgia Gkioxari, Yuandong Tian
    arXiv, 2018. [Paper] [Code]

  • CHALET: Cornell House Agent Learning Environment
    Claudia Yan, Dipendra Misra, Andrew Bennett, Aaron Walsman, Yonatan Bisk, Yoav Artzi
    arXiv, 2018. [Paper] [Code]

  • RoboTHOR: An Open Simulation-to-Real Embodied AI Platform
    Matt Deitke, Winson Han, Alvaro Herrasti, Aniruddha Kembhavi, Eric Kolve, Roozbeh Mottaghi, Jordi Salvador, Dustin Schwenk, Eli VanderBilt, Matthew Wallingford, Luca Weihs, Mark Yatskar, Ali Farhadi
    CVPR, 2020. [Paper] [Website]

  • Gibson Env: Real-World Perception for Embodied Agents
    Fei Xia, Amir Zamir, Zhi-Yang He, Alexander Sax, Jitendra Malik, Silvio Savarese
    CVPR, 2018. [Paper] [Code] [Website]

  • Habitat: A Platform for Embodied AI Research
    Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, Dhruv Batra
    ICCV, 2019. [Paper] [Code] [Website]

MISC

  • Visual Learning and Embodied Agents in Simulation Environments Workshop
    ECCV, 2018. [Website]

  • Embodied-AI Workshop
    CVPR, 2020. [Website]

  • Gibson Sim2Real Challenge
    CVPR, 2020. [Website]

  • Embodied Vision, Actions & Language Workshop
    ECCV, 2020. [Website]

  • Closing the Reality Gap in Sim2Real Transfer for Robotics
    RSS, 2020. [Website]

  • On Evaluation of Embodied Navigation Agents
    Peter Anderson, Angel Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, Amir R. Zamir
    arXiv, 2018. [Paper]

  • PyRobot: An Open-source Robotics Framework for Research and Benchmarking
    Adithya Murali*, Tao Chen*, Kalyan Vasudev Alwala*, Dhiraj Gandhi*, Lerrel Pinto, Saurabh Gupta, Abhinav Gupta
    arXiv, 2019. [Paper] [Code] [Website]
