Visual Teach & Repeat 3 (VT&R3)

Update (2022-02-24)

For code related to "Should Radar Replace Lidar in All-Weather Mapping and Localization?" (Keenan Burnett, Yuchen Wu, David J. Yoon, Angela P. Schoellig, Timothy D. Barfoot), please see the radar_lidar_dev branch. The lidar teach-and-repeat code is located under main/src/vtr_lidar, and the radar teach-and-repeat code under main/src/vtr_radar. We are working on merging this code into the main branch.

What is VT&R3?

VT&R3 is a C++ implementation of the Teach and Repeat navigation framework. It enables a robot to be taught a network of traversable paths and then accurately repeat any network portion. VT&R3 is designed for easy adaptation to various sensor (camera/LiDAR/RaDAR/GPS) and robot combinations. The current implementation includes a feature-based visual odometry and localization pipeline that estimates the robot's motion from stereo camera images and a point-cloud-based odometry and localization pipeline for LiDAR sensors.
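
To make the teach-and-repeat concept concrete, the following minimal, self-contained 2D sketch shows the two phases in C++. It is an illustration only, under simplified assumptions: every type and function name below is hypothetical, and none of it is VT&R3's actual API. The teach pass records a chain of poses along the driven path; the repeat pass matches the live pose to the nearest taught vertex and computes the lateral error a path tracker would correct.

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Pose { double x, y, theta; };  // planar robot pose

// "Teach": record a chain of poses along the driven path.
std::vector<Pose> teach() {
  std::vector<Pose> path;
  for (int i = 0; i <= 10; ++i)
    path.push_back({0.5 * i, 0.0, 0.0});  // straight 5 m segment
  return path;
}

// "Repeat": find the taught vertex closest to the live pose.
std::size_t nearestVertex(const std::vector<Pose>& path, const Pose& live) {
  std::size_t best = 0;
  double bestD2 = 1e18;
  for (std::size_t i = 0; i < path.size(); ++i) {
    const double dx = path[i].x - live.x, dy = path[i].y - live.y;
    const double d2 = dx * dx + dy * dy;
    if (d2 < bestD2) { bestD2 = d2; best = i; }
  }
  return best;
}

int main() {
  const auto path = teach();
  Pose live{2.3, 0.15, 0.05};  // live pose drifted slightly off the path
  const auto v = nearestVertex(path, live);
  // Lateral error expressed in the frame of the matched teach vertex;
  // a path tracker would steer to drive this toward zero.
  const double lateral = -(live.x - path[v].x) * std::sin(path[v].theta) +
                         (live.y - path[v].y) * std::cos(path[v].theta);
  std::printf("matched vertex %zu, lateral error %.2f m\n", v, lateral);
  return 0;
}

In VT&R3 itself the taught network is far richer (a graph of relative poses, with localization performed by the sensor pipelines described above rather than against known poses), but the match-then-correct loop sketched here is the core idea.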

Installation and Getting Started

Installation and getting-started guides, along with further documentation, can be found on the project wiki.

Citation

Please cite the following paper when using VT&R3 for your research:

@article{paul2010vtr,
  author = {Furgale, Paul and Barfoot, Timothy D.},
  title = {Visual teach and repeat for long-range rover autonomy},
  journal = {Journal of Field Robotics},
  year = {2010},
  doi = {10.1002/rob.20342}
}

Additional Citations

  • Multi-Experience Localization

    @inproceedings{michael2016mel,
      author = {Paton, Michael and MacTavish, Kirk and Warren, Michael and Barfoot, Timothy D.},
      booktitle = {2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      title = {Bridging the appearance gap: Multi-experience localization for long-term visual teach and repeat},
      year = {2016},
      doi = {10.1109/IROS.2016.7759303}
    }
  • Lidar and Radar Teach and Repeat

    @inproceedings{burnett_radar22,
      author = {Burnett, Keenan and Wu, Yuchen and Yoon, David J. and Schoellig, Angela P. and Barfoot, Timothy D.},
      title = {Should Radar Replace Lidar in All-Weather Mapping and Localization?},
      year = {2022}
    }

License

VT&R3 is licensed under the Apache-2.0 License.