
wang-chen / AirLoop

License: BSD-3-Clause
AirLoop: Lifelong Loop Closure Detection

Programming Languages

Python

Projects that are alternatives to or similar to AirLoop

Remembering-for-the-Right-Reasons
Official Implementation of Remembering for the Right Reasons (ICLR 2021)
Stars: ✭ 27 (-44.9%)
Mutual labels:  lifelong-learning
reproducible-continual-learning
Continual learning baselines and strategies from popular papers, using Avalanche. We include EWC, SI, GEM, AGEM, LwF, iCarl, GDumb, and other strategies.
Stars: ✭ 118 (+140.82%)
Mutual labels:  lifelong-learning
Adam-NSCL
PyTorch implementation of our Adam-NSCL algorithm from our CVPR2021 (oral) paper "Training Networks in Null Space for Continual Learning"
Stars: ✭ 34 (-30.61%)
Mutual labels:  lifelong-learning
Generative Continual Learning
No description or website provided.
Stars: ✭ 51 (+4.08%)
Mutual labels:  lifelong-learning
life-disciplines-projects
Life-Disciplines-Projects (LDP) is a life-management framework built within Obsidian. Feel free to transform it for your own personal needs.
Stars: ✭ 130 (+165.31%)
Mutual labels:  lifelong-learning
class-norm
Class Normalization for Continual Zero-Shot Learning
Stars: ✭ 34 (-30.61%)
Mutual labels:  lifelong-learning
FACIL
Framework for Analysis of Class-Incremental Learning with 12 state-of-the-art methods and 3 baselines.
Stars: ✭ 411 (+738.78%)
Mutual labels:  lifelong-learning
SIGIR2021 Conure
One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (-53.06%)
Mutual labels:  lifelong-learning
Continual Learning Data Former
A pytorch compatible data loader to create sequence of tasks for Continual Learning
Stars: ✭ 32 (-34.69%)
Mutual labels:  lifelong-learning
CPG
Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen, "Compacting, Picking and Growing for Unforgetting Continual Learning," Thirty-third Conference on Neural Information Processing Systems, NeurIPS 2019
Stars: ✭ 91 (+85.71%)
Mutual labels:  lifelong-learning
MetaLifelongLanguage
Repository containing code for the paper "Meta-Learning with Sparse Experience Replay for Lifelong Language Learning".
Stars: ✭ 21 (-57.14%)
Mutual labels:  lifelong-learning
AI physicist
AI Physicist, a paradigm with algorithms for learning theories from data, by Wu and Tegmark (2019)
Stars: ✭ 23 (-53.06%)
Mutual labels:  lifelong-learning
cvpr clvision challenge
CVPR 2020 Continual Learning Challenge - Submit your CL algorithm today!
Stars: ✭ 57 (+16.33%)
Mutual labels:  lifelong-learning
CVPR21 PASS
PyTorch implementation of our CVPR2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning"
Stars: ✭ 55 (+12.24%)
Mutual labels:  lifelong-learning
lifelong-learning
lifelong learning: record and analysis of my knowledge structure
Stars: ✭ 18 (-63.27%)
Mutual labels:  lifelong-learning
HebbianMetaLearning
Meta-Learning through Hebbian Plasticity in Random Networks: https://arxiv.org/abs/2007.02686
Stars: ✭ 77 (+57.14%)
Mutual labels:  lifelong-learning

AirLoop

This repo contains the source code for the paper:

Dasong Gao, Chen Wang, Sebastian Scherer. "AirLoop: Lifelong Loop Closure Detection." International Conference on Robotics and Automation (ICRA), 2022.

Watch on YouTube

Demo

Examples of loop closure detection on each dataset. Note that our model is able to handle cross-environment loop closure detection despite being trained only in individual environments sequentially:

Improved loop closure detection on TartanAir after extended training:

Usage

Dependencies

  • Python >= 3.5
  • PyTorch < 1.8
  • OpenCV >= 3.4
  • NumPy >= 1.19
  • Matplotlib
  • ConfigArgParse
  • PyYAML
  • tqdm
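
The dependencies can be installed with pip. A minimal sketch, assuming a PyTorch 1.7.x build is available for your Python/CUDA combination (adjust package versions to your system):

$ pip install "torch<1.8" "opencv-python>=3.4" "numpy>=1.19" matplotlib ConfigArgParse PyYAML tqdm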

Data

We used the following subsets of datasets in our experiments:

  • TartanAir, download with tartanair_tools
    • Train/Test: abandonedfactory_night, carwelding, neighborhood, office2, westerndesert;
  • RobotCar, download with RobotCarDataset-Scraper
    • Train: 2014-11-28-12-07-13, 2014-12-10-18-10-50, 2014-12-16-09-14-09;
    • Test: 2014-06-24-14-47-45, 2014-12-05-15-42-07, 2014-12-16-18-44-24;
  • Nordland, download with gdown from Google Drive
    • Train/Test: All four seasons with recommended splits.
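
For the Nordland download, gdown can fetch a shared Google Drive file by its ID. In the sketch below, <GOOGLE_DRIVE_FILE_ID> and the archive name are placeholders rather than the actual links, and the extraction step assumes a zip archive (adapt it to the actual file format):

$ pip install gdown
$ gdown <GOOGLE_DRIVE_FILE_ID> -O nordland.zip
$ unzip nordland.zip -d $DATASET_ROOT/nordland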

The datasets are arranged as follows:

$DATASET_ROOT/
├── tartanair/
│   ├── abandonedfactory_night/
│   │   ├── Easy/
│   │   │   └── ...
│   │   └── Hard/
│   │       └── ...
│   └── ...
├── robotcar/
│   ├── train/
│   │   ├── 2014-11-28-12-07-13/
│   │   └── ...
│   └── test/
│       ├── 2014-06-24-14-47-45/
│       └── ...
└── nordland/
    ├── train/
    │   ├── fall_images_train/
    │   └── ...
    └── test/
        ├── fall_images_test/
        └── ...

Note: For TartanAir, only <ENVIRONMENT>/<DIFFICULTY>/<image|depth>_left.zip is required. After unzipping the downloaded zip files, make sure to remove the duplicate <ENVIRONMENT> directory level (tartanair/abandonedfactory/abandonedfactory/Easy/... -> tartanair/abandonedfactory/Easy/...).
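
The duplicate-level cleanup can be scripted. Below is a short Python sketch, assuming the extra directory is named exactly like its parent environment directory (the root path is a placeholder):

from pathlib import Path
import shutil

tartanair_root = Path("/path/to/datasets/tartanair")  # i.e. $DATASET_ROOT/tartanair

for env in tartanair_root.iterdir():
    dup = env / env.name                 # duplicated level, e.g. abandonedfactory/abandonedfactory
    if env.is_dir() and dup.is_dir():
        for item in dup.iterdir():       # move Easy/, Hard/, ... up one level
            shutil.move(str(item), str(env / item.name))
        dup.rmdir()                      # remove the now-empty duplicate directory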

Configuration

The following values in config/config.yaml need to be set:

  • dataset-root: The parent directory to all datasets ($DATASET_ROOT above);
  • catalog-dir: An (initially empty) directory for caching the processed dataset index;
  • eval-gt-dir: An (initially empty) directory for ground truth produced during evaluation.
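
For illustration, the corresponding section of config/config.yaml might look like the following (all paths are placeholders for your own directories):

dataset-root: /data/datasets          # $DATASET_ROOT above
catalog-dir: /data/airloop/catalog    # cache for the processed dataset index
eval-gt-dir: /data/airloop/eval_gt    # ground truth produced during evaluation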

Commandline

The following command trains the model with the specified method on TartanAir with the default configuration and evaluates the performance:

$ python main.py --method <finetune/si/ewc/kd/rkd/mas/rmas/airloop/joint>

Extra options*:

  • --dataset <tartanair/robotcar/nordland>: dataset to use.
  • --envs <LIST_OF_ENVIRONMENTS>: order of environments.**
  • --epochs <LIST_OF_EPOCHS>: number of epochs to train in each environment.**
  • --eval-save <PATH>: save path for predicted pairwise similarities generated during evaluation.
  • --out-dir <DIR>: output directory for model checkpoints and importance weights.
  • --log-dir <DIR>: TensorBoard log directory.
  • --skip-train: perform evaluation only.
  • --skip-eval: perform training only.

* See main_single.py for more settings.
** See main.py for defaults.
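
For example, a run combining several of the options above (directory names are illustrative) could look like:

$ python main.py --method airloop --dataset robotcar --log-dir runs/airloop_robotcar --out-dir models/airloop_robotcar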

Evaluation results (R@100P in each environment) will be logged to the console. --eval-save can be specified to save the predicted similarities in .npz format.
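
The saved similarities can be inspected with NumPy. A minimal sketch, assuming only that --eval-save produced a standard .npz archive (the file name below is a placeholder):

import numpy as np

with np.load("eval_similarities.npz") as data:    # path passed via --eval-save
    for name in data.files:                       # list every stored array
        print(name, data[name].shape, data[name].dtype)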
