Code accompanying the CVPR 2019 paper: https://arxiv.org/abs/1812.04155


Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention

License: MIT

Authors: Khanh Nguyen, Debadeepta Dey, Chris Brockett, Bill Dolan.

This repo contains code and data-downloading scripts for the paper Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention (CVPR 2019). We present Vision-based Navigation with Language-based Assistance (VNLA, pronounced as "Vanilla"), a grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments.


Development system

Our instructions assume the following are installed:

See setup simulator for packages required to install the Matterport3D simulator.

The Ubuntu requirement is not mandatory. As long as you can successfully install Anaconda, PyTorch, and the other required packages, you are good!

Let's play with the code!

  1. Clone this repo: git clone --recursive https://github.com/debadeepta/vnla.git (don't forget the --recursive flag!)
  2. Download data.
  3. Setup simulator.
  4. Run experiments.
  5. Extend this project.

Please create a GitHub issue or email [email protected], [email protected] with any questions or feedback.

FAQ

Q: What's the difference between this task and the Room-to-Room task?

A: In R2R, the agent's task is given by a detailed language instruction (e.g., "Go to the table, turn left, walk to the stairs, wait there"). The agent must execute the instruction without additional assistance.

In VNLA (our task), the task is specified only as a high-level end-goal (e.g., "Find a cup in the kitchen"); the steps for accomplishing it are not described. The agent can actively request additional assistance (in the form of language subgoals) while trying to fulfill the task.
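To make the contrast concrete, here is a minimal toy sketch of the interaction pattern described above: an agent that knows only a high-level goal, has a limited help budget, and may emit a "request_help" action that an advisor answers with a language subgoal. All names here (Advisor, Agent, the action set) are illustrative assumptions for this sketch, not the repo's actual API.

```python
# Toy sketch of a VNLA-style episode: the agent is given only an end-goal
# and may spend a limited help budget on language subgoals from an advisor.

ACTIONS = ["forward", "turn_left", "turn_right", "stop", "request_help"]

class Advisor:
    """Oracle that answers help requests with a short language subgoal."""
    def __init__(self, subgoals):
        self.subgoals = list(subgoals)

    def advise(self):
        # Return the next subgoal on the route, or a stop hint when done.
        return self.subgoals.pop(0) if self.subgoals else "stop here"

class Agent:
    """Toy policy: ask for help whenever it has no current subgoal."""
    def __init__(self, help_budget=2):
        self.help_budget = help_budget
        self.subgoal = None

    def act(self):
        if self.subgoal is None and self.help_budget > 0:
            self.help_budget -= 1
            return "request_help"
        if self.subgoal == "stop here":
            return "stop"
        self.subgoal = None  # pretend the current subgoal was completed
        return "forward"

def run_episode(goal, advisor, agent, max_steps=10):
    # `goal` is the high-level end-goal; this toy agent never parses it,
    # which is exactly why it must rely on the advisor's subgoals.
    trajectory = []
    for _ in range(max_steps):
        action = agent.act()
        trajectory.append(action)
        if action == "request_help":
            agent.subgoal = advisor.advise()
        elif action == "stop":
            break
    return trajectory

advisor = Advisor(["go to the kitchen", "stop here"])
agent = Agent(help_budget=2)
trajectory = run_episode("Find a cup in the kitchen", advisor, agent)
# → ['request_help', 'forward', 'request_help', 'stop']
```

In the actual task the advisor's assistance is budgeted and the agent must learn (via imitation learning) when asking is worth the cost; here the ask-when-lost rule is hard-coded purely to show the action flow.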

Citation

If you want to cite this work, please use the following BibTeX entry:

@InProceedings{nguyen2019vnla,
author = {Nguyen, Khanh and Dey, Debadeepta and Brockett, Chris and Dolan, Bill},
title = {Vision-Based Navigation With Language-Based Assistance via Imitation Learning With Indirect Intervention},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}