
SaynaEbrahimi / Remembering-for-the-Right-Reasons

License: MIT
Official Implementation of Remembering for the Right Reasons (ICLR 2021)

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives of or similar to Remembering-for-the-Right-Reasons

FACIL
Framework for Analysis of Class-Incremental Learning with 12 state-of-the-art methods and 3 baselines.
Stars: ✭ 411 (+1422.22%)
Mutual labels:  lifelong-learning, continual-learning
Adam-NSCL
PyTorch implementation of our Adam-NSCL algorithm from our CVPR2021 (oral) paper "Training Networks in Null Space for Continual Learning"
Stars: ✭ 34 (+25.93%)
Mutual labels:  lifelong-learning, continual-learning
Generative Continual Learning
No description or website provided.
Stars: ✭ 51 (+88.89%)
Mutual labels:  lifelong-learning, continual-learning
cvpr clvision challenge
CVPR 2020 Continual Learning Challenge - Submit your CL algorithm today!
Stars: ✭ 57 (+111.11%)
Mutual labels:  lifelong-learning, continual-learning
Continual Learning Data Former
A PyTorch-compatible data loader to create sequences of tasks for Continual Learning
Stars: ✭ 32 (+18.52%)
Mutual labels:  lifelong-learning, continual-learning
reproducible-continual-learning
Continual learning baselines and strategies from popular papers, using Avalanche. We include EWC, SI, GEM, AGEM, LwF, iCaRL, GDumb, and other strategies.
Stars: ✭ 118 (+337.04%)
Mutual labels:  lifelong-learning, continual-learning
CVPR21 PASS
PyTorch implementation of our CVPR2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning"
Stars: ✭ 55 (+103.7%)
Mutual labels:  lifelong-learning, continual-learning
MetaLifelongLanguage
Repository containing code for the paper "Meta-Learning with Sparse Experience Replay for Lifelong Language Learning".
Stars: ✭ 21 (-22.22%)
Mutual labels:  lifelong-learning, continual-learning
CPG
Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen, "Compacting, Picking and Growing for Unforgetting Continual Learning," Thirty-third Conference on Neural Information Processing Systems, NeurIPS 2019
Stars: ✭ 91 (+237.04%)
Mutual labels:  lifelong-learning, continual-learning
SIGIR2021 Conure
One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (-14.81%)
Mutual labels:  lifelong-learning, continual-learning
class-norm
Class Normalization for Continual Zero-Shot Learning
Stars: ✭ 34 (+25.93%)
Mutual labels:  lifelong-learning, continual-learning
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (+51.85%)
Mutual labels:  xai
mindsdb native
Machine Learning in one line of code
Stars: ✭ 34 (+25.93%)
Mutual labels:  xai
ADER
(RecSys 2020) Adaptively Distilled Exemplar Replay towards Continual Learning for Session-based Recommendation [Best Short Paper]
Stars: ✭ 28 (+3.7%)
Mutual labels:  continual-learning
php-best-practices
What I consider the best practices for web and software development.
Stars: ✭ 60 (+122.22%)
Mutual labels:  continual-learning
mindsdb server
MindsDB server allows you to consume and expose MindsDB workflows over HTTP.
Stars: ✭ 3 (-88.89%)
Mutual labels:  xai
OCDVAEContinualLearning
Open-source code for our paper: Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition
Stars: ✭ 56 (+107.41%)
Mutual labels:  continual-learning
life-disciplines-projects
Life-Disciplines-Projects (LDP) is a life-management framework built within Obsidian. Feel free to transform it for your own personal needs.
Stars: ✭ 130 (+381.48%)
Mutual labels:  lifelong-learning
continual-knowledge-learning
[ICLR 2022] Towards Continual Knowledge Learning of Language Models
Stars: ✭ 77 (+185.19%)
Mutual labels:  continual-learning
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (+88.89%)
Mutual labels:  xai

Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting (ICLR 2021)

This is the PyTorch implementation of Remembering for the Right Reasons (RRR) published at ICLR 2021.

[Paper] [ICLR Talk] [Slides]

Citation

If you use this code, parts of it, or developments based on it, please cite our paper:

@inproceedings{ebrahimi2021remembering,
  title={Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting},
  author={Sayna Ebrahimi and Suzanne Petryk and Akash Gokul and William Gan and Joseph E. Gonzalez and Marcus Rohrbach and Trevor Darrell},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=tHgJoMfy6nI}
}

RRR

The goal of continual learning (CL) is to learn a sequence of tasks without suffering from the phenomenon of catastrophic forgetting. Previous work has shown that leveraging memory in the form of a replay buffer can reduce performance degradation on prior tasks. We hypothesize that forgetting can be further reduced when the model is encouraged to remember the evidence for previously made decisions. As a first step towards exploring this hypothesis, we propose a simple novel training paradigm, called Remembering for the Right Reasons (RRR), that additionally stores visual model explanations for each example in the buffer and ensures the model has "the right reasons" for its predictions by encouraging its explanations to remain consistent with those used to make decisions at training time. Without this constraint, there is a drift in explanations and an increase in forgetting as conventional continual learning algorithms learn new tasks. We demonstrate how RRR can be easily added to any memory- or regularization-based approach, resulting in reduced forgetting and, more importantly, improved model explanations. We evaluated our approach in the standard and few-shot settings and observed a consistent improvement across various CL approaches using different architectures and techniques to generate model explanations, demonstrating a promising connection between explainability and continual learning.
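
The penalty itself is compact enough to sketch. Below is a minimal, illustrative PyTorch version of the idea, not the code in ./src: it assumes vanilla gradient saliency as the explanation method (the paper also evaluates Grad-CAM), and the names saliency_map, rrr_loss, and lambda_rrr are hypothetical.

import torch
import torch.nn.functional as F

def saliency_map(model, x, y):
    # Vanilla gradient saliency: |d(target logit)/d(input)|, summed over
    # channels. RRR is agnostic to the explanation method; this particular
    # choice is purely illustrative.
    x = x.clone().requires_grad_(True)
    score = model(x).gather(1, y.view(-1, 1)).sum()
    # create_graph=True lets the RRR loss backpropagate through the
    # input-gradient computation to the model parameters.
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs().sum(dim=1)  # (B, H, W)

def rrr_loss(model, buf_x, buf_y, buf_expl):
    # L1 distance between the explanation the model produces now for a
    # replayed example and the one saved when the example entered the buffer.
    return F.l1_loss(saliency_map(model, buf_x, buf_y), buf_expl)

# Per training step, the RRR term is added to whatever the base CL method
# already optimizes (lambda_rrr is a hypothetical weight):
#   loss = base_cl_loss + lambda_rrr * rrr_loss(model, buf_x, buf_y, buf_expl)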

Prerequisites:

  • Linux-64
  • Python 3.6
  • PyTorch 1.3.1
  • NVIDIA GPU + CUDA 10 + CuDNN 7.5

Installation

  • Create a conda environment and install the required packages using the provided requirements.txt file:
conda create -n rrr python=3.6
conda activate rrr
pip install -r requirements.txt
  • The following structure is expected in the main directory:
./src                     : main directory containing all scripts
./data                    : datasets directory
./checkpoints             : directory where results are saved
./requirements.txt        : used to install the required packages

How to run:

CUB200 (base task 100 classes + 10 tasks = 11 tasks):

python main.py experiment.ntasks=11 --config ./configs/cub_fascil.yml

Datasets

  • The Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset can be downloaded from here. Once downloaded, it should be placed in a sub-folder of ./data named CUB_200_2011 (expected layout sketched below).
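
For reference, the standard CUB-200-2011 release unpacks to roughly the layout below; the file names come from the public dataset itself, not from this repo, so verify against your download:

./data/CUB_200_2011/images/                : one sub-folder of images per class
./data/CUB_200_2011/images.txt             : image id to file-path mapping
./data/CUB_200_2011/classes.txt            : class id to class-name mapping
./data/CUB_200_2011/image_class_labels.txt : image id to class id mapping
./data/CUB_200_2011/train_test_split.txt   : official train/test split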

Questions / Bugs

License

This source code is released under The MIT License found in the LICENSE file in the root directory of this source tree.
