Fine-Grained R2R

Code and data of the Fine-Grained R2R Dataset proposed in the EMNLP 2020 paper Sub-Instruction Aware Vision-and-Language Navigation.

This dataset enriches the benchmark Room-to-Room (R2R) dataset by dividing the instructions into sub-instructions and pairing each of those with their corresponding viewpoints in the path.

  • The copyright resides with the authors of the paper Sub-Instruction Aware Vision-and-Language Navigation.
  • This dataset is built upon the Room-to-Room (R2R) dataset; we refer readers to its repository for more details.

"The luxury of hope was given to me by the Terminator. Because if a machine can learn the value of human life, maybe we can too." --- Terminator: Judgment Day 1991.

Data

The Fine-Grained R2R data enriches the R2R dataset with sub-instructions and their corresponding paths. The overall instruction and trajectory of each sample remain the same.

  • For paths in the train, validation seen, and validation unseen splits, we add two new entries:

    • new_instructions: A list of sub-instructions produced by the Chunking Function from the complete instructions. The field is stored as a string, so use import ast and ast.literal_eval() to read it back as a list (see the sketch after this list).
    • chunk_view: A list of sub-paths corresponding to the sub-instructions, where each number in the list is an index of a viewpoint in the ground-truth path. The index starts at 1.
  • Some sub-instructions, such as those that refer to a camera rotation or a STOP action, may map to a single viewpoint.

  • For the test unseen split, we only provide the sub-instructions but not the sub-paths.
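
A minimal sketch of reading these fields (the file name FGR2R_train.json is an assumption, not necessarily the name shipped in this repository; the new_instructions, chunk_view, and path keys follow the description above):

import ast
import json

# Assumed file name for the enriched training split; adjust to the file
# actually provided with the dataset.
with open('FGR2R_train.json') as f:
    data = json.load(f)

item = data[0]

# Both new entries are stored as strings; ast.literal_eval recovers the
# nested Python lists they encode.
sub_instructions = ast.literal_eval(item['new_instructions'])
chunk_views = ast.literal_eval(item['chunk_view'])

print(sub_instructions)
# Each number in chunk_view indexes a viewpoint in the ground-truth path,
# starting at 1 (not 0).
print(chunk_views)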

Source

The code of the proposed Chunking Function for generating sub-instructions.

  • Install the StanfordNLP package (v0.1.2 in our experiment) and download the English models for the neural pipeline (a setup sketch follows this list).

  • Run make_subinstr.py to generate data with sub-instructions from the original R2R data.

  • The generated files were then sent to Amazon Mechanical Turk (AMT) for sub-path annotation.
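
A minimal setup sketch for the first step above (the download and Pipeline calls follow the stanfordnlp package's documented API; the exact pipeline configuration used inside make_subinstr.py may differ):

import stanfordnlp

# One-time download of the English models for the neural pipeline
# (install the package first, e.g. pip install stanfordnlp==0.1.2).
stanfordnlp.download('en')

# Default English pipeline: tokenization, POS tagging, lemmatization and
# dependency parsing.
nlp = stanfordnlp.Pipeline()
doc = nlp("Walk past the sofa and stop at the kitchen door.")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.dependency_relation)

With the models in place, run make_subinstr.py as described above to produce the sub-instruction files.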

Reference

If you use or discuss the Fine-Grained R2R dataset in your work, please cite our paper:

@article{hong2020sub,
  title={Sub-Instruction Aware Vision-and-Language Navigation},
  author={Hong, Yicong and Rodriguez-Opazo, Cristian and Wu, Qi and Gould, Stephen},
  journal={arXiv preprint arXiv:2004.02707},
  year={2020}
}

Contact

If you have any questions regarding the dataset or the publication, please create an issue in this repository or email [email protected].
