Continual Learning

This is a PyTorch implementation of the continual learning experiments described in the following papers:

  • Three scenarios for continual learning (https://arxiv.org/abs/1904.07734)
  • Generative replay with feedback connections as a general strategy for continual learning (https://arxiv.org/abs/1809.10635)

Requirements

The current version of the code has been tested with:

  • pytorch 1.1.0
  • torchvision 0.2.2
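
For example, matching versions could be installed with pip (the exact command and available wheels depend on your platform and Python version; this is an illustration, not an official install recipe):

pip install torch==1.1.0 torchvision==0.2.2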

Running the experiments

Individual experiments can be run with main.py. Main options are:

  • --experiment: which task protocol? (splitMNIST|permMNIST)
  • --scenario: according to which scenario? (task|domain|class)
  • --tasks: how many tasks?
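
For example, the following call (flag values are illustrative) runs split MNIST with 5 tasks according to the class-incremental scenario:

./main.py --experiment=splitMNIST --scenario=class --tasks=5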

To run specific methods, use the following:

  • Context-dependent Gating (XdG): ./main.py --xdg=0.8
  • Elastic Weight Consolidation (EWC): ./main.py --ewc --lambda=5000
  • Online EWC: ./main.py --ewc --online --lambda=5000 --gamma=1
  • Synaptic Intelligence (SI): ./main.py --si --c=0.1
  • Learning without Forgetting (LwF): ./main.py --replay=current --distill
  • Generative Replay (GR): ./main.py --replay=generative
  • GR with distillation: ./main.py --replay=generative --distill
  • Replay-through-Feedback (RtF): ./main.py --replay=generative --distill --feedback
  • Experience Replay (ER): ./main.py --replay=exemplars --budget=2000
  • Averaged Gradient Episodic Memory (A-GEM): ./main.py --replay=exemplars --agem --budget=2000
  • iCaRL: ./main.py --icarl --budget=2000

For information on further options: ./main.py -h.
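
To give a sense of what one of these flags does under the hood, below is a minimal, self-contained PyTorch sketch of the diagonal-Fisher penalty behind --ewc. The helper names estimate_fisher and ewc_penalty are hypothetical; this is not the repository's actual code:

import torch
import torch.nn.functional as F

def estimate_fisher(model, data_loader, device="cpu"):
    # Diagonal Fisher information, estimated from squared gradients of the
    # log-likelihood on data from the task that was just finished.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        F.nll_loss(F.log_softmax(model(x), dim=1), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(data_loader) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params):
    # Quadratic penalty anchoring each parameter to its value after the
    # previous task ('old_params'), weighted by its estimated importance.
    return sum((fisher[n] * (p - old_params[n]) ** 2).sum()
               for n, p in model.named_parameters())

# During training on a new task, with ewc_lambda as set by --lambda=5000:
#   loss = task_loss + (ewc_lambda / 2) * ewc_penalty(model, fisher, old_params)

In online EWC (--online with --gamma), a single running Fisher estimate is decayed and updated after each task, rather than one estimate being stored per task.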

The code in this repository only supports MNIST-based experiments. An extension to more challenging problems (e.g., with natural images as inputs) can be found here: https://github.com/GMvandeVen/brain-inspired-replay.

Running comparisons from the papers

"Three CL scenarios"-paper

This paper describes three scenarios for continual learning (Task-IL, Domain-IL & Class-IL) and provides an extensive comparison of recently proposed continual learning methods. It uses the permuted and split MNIST task protocols, with both performed according to all three scenarios.

A comparison of all methods included in this paper can be run with compare_all.py (this script includes extra methods and reports additional metrics compared to the paper). The comparison in Appendix B can be run with compare_taskID.py, and Figure C.1 can be recreated with compare_replay.py.

"Replay-through-Feedback"-paper

The three continual learning scenarios were actually first identified in this paper, which then introduces the Replay-through-Feedback framework as a more efficient implementation of generative replay.

A comparison of all methods included in this paper can be run with compare_time.py. This includes a comparison of the time these methods take to train (Figures 4 and 5).

Note that the results reported in this paper were obtained with an earlier version of the code.

On-the-fly plots during training

With this code it is possible to track progress during training with on-the-fly plots. This feature requires visdom, which can be installed as follows:

pip install visdom

Before running the experiments, the visdom server should be started from the command line:

python -m visdom.server

The visdom server should now be alive and can be accessed at http://localhost:8097 in your browser (this is where the plots will appear). To run the experiments with on-the-fly plots, add the flag --visdom when calling ./main.py.
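
For example, a complete session might look like this (run the server in a separate terminal or in the background; flag values are illustrative):

python -m visdom.server &
./main.py --experiment=permMNIST --scenario=task --tasks=10 --visdom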

For more information on visdom see https://github.com/facebookresearch/visdom.

Citation

Please consider citing our papers if you use this code in your research:

@article{vandeven2019three,
  title={Three scenarios for continual learning},
  author={van de Ven, Gido M and Tolias, Andreas S},
  journal={arXiv preprint arXiv:1904.07734},
  year={2019}
}

@article{vandeven2018generative,
  title={Generative replay with feedback connections as a general strategy for continual learning},
  author={van de Ven, Gido M and Tolias, Andreas S},
  journal={arXiv preprint arXiv:1809.10635},
  year={2018}
}

Acknowledgments

The research projects from which this code originated have been supported by an IBRO-ISN Research Fellowship, by the Lifelong Learning Machines (L2M) program of the Defense Advanced Research Projects Agency (DARPA) via contract number HR0011-18-2-0025, and by the Intelligence Advanced Research Projects Activity (IARPA) via Department of the Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. Disclaimer: views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, IARPA, DoI/IBC, or the U.S. Government.
