
khurramjaved96 / Mrcl

Code for the NeurIPS19 paper "Meta-Learning Representations for Continual Learning"

Projects that are alternatives to or similar to Mrcl

Cryptoinscriber
📈 A live cryptocurrency historical trade data blotter. Download live historical trade data from any cryptoexchange, be it for machine learning, backtesting/visualizing trading strategies or for Quantopian/Zipline.
Stars: ✭ 27 (-81.63%)
Mutual labels:  machine
Hal
🔴 A non-deterministic finite-state machine for Android & JVM that won't let you down
Stars: ✭ 63 (-57.14%)
Mutual labels:  machine
Zetaipc
A tiny .NET library to do inter-process communication (IPC) between different processes on the same machine.
Stars: ✭ 111 (-24.49%)
Mutual labels:  machine
Rapping Neural Network
Rap song writing recurrent neural network trained on Kanye West's entire discography
Stars: ✭ 951 (+546.94%)
Mutual labels:  machine
Docker Machine Driver Pwd
Docker machine PWD driver
Stars: ✭ 54 (-63.27%)
Mutual labels:  machine
Ios11 Visionframework
Vision Framework, iOS, WWDC 2017
Stars: ✭ 85 (-42.18%)
Mutual labels:  machine
Pytorch Forecasting
Time series forecasting with PyTorch
Stars: ✭ 849 (+477.55%)
Mutual labels:  machine
Setup
Setup a new machine without sudo!
Stars: ✭ 130 (-11.56%)
Mutual labels:  machine
Dotfiles
🖥️ Automated Configuration, Preferences and Software Installation for macOS
Stars: ✭ 1,103 (+650.34%)
Mutual labels:  machine
Scenescoop
A tool to describe the content of videos and suggest similar scenes in other videos/films.
Stars: ✭ 103 (-29.93%)
Mutual labels:  machine
Gpt2 Telegram Chatbot
GPT-2 Telegram Chat bot
Stars: ✭ 41 (-72.11%)
Mutual labels:  machine
Php Ml
PHP-ML - Machine Learning library for PHP
Stars: ✭ 7,900 (+5274.15%)
Mutual labels:  machine
Sigma
Rocket powered machine learning. Create, compare, adapt, improve - artificial intelligence at the speed of thought.
Stars: ✭ 98 (-33.33%)
Mutual labels:  machine
Nexrender
📹 Data-driven render automation for After Effects
Stars: ✭ 946 (+543.54%)
Mutual labels:  machine
Coffeehack
Hack of our Jura coffee machine
Stars: ✭ 116 (-21.09%)
Mutual labels:  machine
Grind
Configure and maintain your machine
Stars: ✭ 14 (-90.48%)
Mutual labels:  machine
Makine Ogrenmesi
A Turkish-language machine learning resource (Makine Öğrenmesi Türkçe Kaynak)
Stars: ✭ 82 (-44.22%)
Mutual labels:  machine
Networm
Python network worm that spreads on the local network and gives the attacker control of these machines.
Stars: ✭ 135 (-8.16%)
Mutual labels:  machine
State
Finite state machine for TypeScript and JavaScript
Stars: ✭ 118 (-19.73%)
Mutual labels:  machine
Mit Deep Learning Book Pdf
MIT Deep Learning Book in PDF format (complete and parts) by Ian Goodfellow, Yoshua Bengio and Aaron Courville
Stars: ✭ 9,859 (+6606.8%)
Mutual labels:  machine

(05 July 2020) Major bug fix and refactoring log:

  1. Fixed a bug that resulted in incorrect meta-gradients.
  2. Refactored the code. It should be easier to understand and modify now.
  3. Significantly improved results on both the Omniglot and sine benchmarks by fixing the bug. Using a linear PLN layer -- as suggested by Beaulieu et al. (2020) -- it is possible to match the results of ANML (Beaulieu et al., 2020) without any neuromodulation layers (a sketch of this architecture follows this list).
  4. The bug fix also makes the optimization more robust to hyper-parameter changes. The Omniglot results hold for a wide range of meta-learning and inner-loop learning rates.
  5. Added new pretrained models on Google Drive; see mrcl_trained_models/Omniglot_updated. There are eight pre-trained models with different hyper-parameters; the hyper-parameters for each model are listed in its metadata.json file. The old models will no longer work with the new code. If you want to use the old models, check out an older commit of the repo.
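
For concreteness, here is a minimal sketch of the architecture point 3 refers to: a convolutional representation learning network (RLN) followed by a single linear prediction learning network (PLN). Layer sizes, the pooling step, and the class count are illustrative assumptions, not the repo's exact configuration.

# A minimal sketch, not the repo's exact network: layer sizes, the pooling
# step, and the class count are illustrative assumptions.
import torch
import torch.nn as nn

class OMLNetwork(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        # RLN: representation learning network, updated only by the outer (meta) loop.
        self.rln = nn.Sequential(
            nn.Conv2d(1, 256, 3, stride=2), nn.ReLU(),
            nn.Conv2d(256, 256, 3, stride=1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # PLN: a single linear layer, per Beaulieu et al. (2020), updated online.
        self.pln = nn.Linear(256, num_classes)

    def forward(self, x):
        return self.pln(self.rln(x))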

A discussion of the changes: https://github.com/khurramjaved96/mrcl/issues/15

Reference: Beaulieu, Shawn, et al. "Learning to continually learn." ECAI (2020).

OML (Online-aware Meta-learning) ~ NeurIPS19

Paper: https://arxiv.org/abs/1905.12588

Overall system architecture for learning representations

Learning OML Representations

To learn representations for Omniglot, run the following command:

python oml_omniglot.py --update_lr 0.03 --meta_lr 1e-4 --name OML_Omniglot/ --tasks 3 --update_step 5 --steps 700000 --rank 0

This will store the learned model at ../results/DDMonthYYYY/Omniglot/0.0001/oml_omniglot
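
What oml_omniglot.py optimizes is, at its core, a MAML-style objective with online (correlated) inner trajectories. Below is a minimal, illustrative sketch of one meta-update, assuming the OMLNetwork sketch above with a linear PLN; function and argument names here are assumptions, not the repo's API.

import torch
import torch.nn.functional as F

def oml_meta_step(net, trajectory, remember_batch, update_lr, meta_opt):
    # Inner loop: differentiable SGD on the PLN's fast weights only, over a
    # correlated online trajectory (e.g. samples of one class at a time).
    w, b = net.pln.weight, net.pln.bias
    for x, y in trajectory:
        loss = F.cross_entropy(F.linear(net.rln(x), w, b), y)
        gw, gb = torch.autograd.grad(loss, (w, b), create_graph=True)
        w, b = w - update_lr * gw, b - update_lr * gb
    # Outer loop: meta-loss on the trajectory plus random samples from other
    # classes; backprop reaches the RLN *through* the inner updates, shaping a
    # representation that supports online learning without forgetting.
    x_m, y_m = remember_batch
    meta_loss = F.cross_entropy(F.linear(net.rln(x_m), w, b), y_m)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    return meta_loss.item()

Under this reading, --update_lr 0.03 is the inner-loop learning rate and --meta_lr 1e-4 drives the outer optimizer (e.g. torch.optim.Adam(net.parameters(), lr=1e-4)).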

Evaluating Representations learned by OML

We provide trained models at https://drive.google.com/drive/folders/1vHHT5kxtgx8D4JHYg25iA-C31O5OjAQz?usp=sharing which can be used to evaluate performance on the continual learning benchmarks.

To evaluate performance on test trajectories of Omniglot, run:

python evaluate_omniglot.py --model-path path_to_model/learner.model --name Omniglot_evaluation/  --schedule 10:50:100:200:600

Exclude the --test argument to get results on training trajectories (used to measure forgetting).

Results will be stored in a JSON file at "../results/DDMonthYYYY/Omniglot/eval/Omni_test_traj_0"
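
The evaluation protocol is simple to state: freeze the meta-learned RLN, train only the PLN online over a trajectory of classes seen one at a time, then measure accuracy. A hedged sketch follows, reusing the rln/pln split from above; the real protocol, including the --schedule checkpoints, lives in evaluate_omniglot.py, and the learning rate here is a placeholder.

import torch
import torch.nn.functional as F

@torch.no_grad()
def accuracy(net, loader):
    correct = total = 0
    for x, y in loader:
        correct += (net(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def evaluate_online(net, class_trajectory, eval_loader, lr=0.003):
    # The representation stays fixed; only the prediction layer learns online.
    for p in net.rln.parameters():
        p.requires_grad_(False)
    opt = torch.optim.SGD(net.pln.parameters(), lr=lr)
    for x, y in class_trajectory:  # classes arrive strictly sequentially
        loss = F.cross_entropy(net(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return accuracy(net, eval_loader)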

Visualizing Representations

To visualize representations for different Omniglot models, run:

python visualize_representations.py --name OML_rep_study --model ./trained_models/split_omniglot_oml.model
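
As a rough picture of what such a visualization can show, one can push images through the frozen RLN and plot the resulting feature activations (the paper reports that OML representations tend to be sparse). This is an assumed sketch, not what visualize_representations.py actually renders.

import matplotlib.pyplot as plt
import torch

def plot_rln_activations(net, images, path="rln_activations.png"):
    # Feature activations of the frozen RLN, one row per input image.
    with torch.no_grad():
        feats = net.rln(images)  # shape: (num_images, num_features)
    plt.imshow(feats.cpu().numpy(), aspect="auto", cmap="viridis")
    plt.xlabel("feature index")
    plt.ylabel("input image")
    plt.colorbar(label="activation")
    plt.savefig(path, bbox_inches="tight")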

Results

Classification Results

The accuracy curve, averaged over 50 runs, as we learn more classes sequentially. The error bars represent 95% confidence intervals drawn using 1,000 bootstraps. We report results on both the training trajectory (left) and a held-out dataset containing the same classes as the training trajectory (right).

Online updates starting from OML are capable of learning 200 classes with little to no forgetting. Other representations, such as pretraining and SR-NN, suffer from noticeable forgetting. OML also generalizes better than the other methods on the unseen held-out set. Note that the Oracle, trained using multiple IID passes over the trajectory, represents an upper bound on performance, reflecting the inherent loss in accuracy as the number of classes grows.
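
For reference, the 95% intervals reported here and in the regression results below can be reproduced with a standard percentile bootstrap over the 50 runs; this is a sketch, not the repo's plotting code.

import numpy as np

def bootstrap_ci(per_run_scores, n_boot=1000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample the per-run scores with replacement,
    # record each resample's mean, and take the 2.5th/97.5th percentiles.
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_run_scores)
    means = np.array([
        rng.choice(scores, size=scores.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])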

Regression Results

Mean squared error across all 10 regression tasks. The x-axis in (a) corresponds to seeing all data points for task 1, then task 2, and so on. These learning curves are averaged over 50 runs, with error bars representing 95% confidence intervals drawn using 1,000 bootstraps.

The representation trained on IID data (pretraining) is not effective for online updating. Note that in the final prediction accuracy in (b), the pretraining and SR-NN representations predict task 10 accurately but have high error on earlier tasks. OML, on the other hand, shows a slight skew in error towards later tasks but is largely robust.

References

  1. Meta-learning code has been taken and modified from: https://github.com/dragen1860/MAML-Pytorch
  2. For the EWC, MER, and ER-Reservoir experiments, we modified the following implementation so that it can load our models: https://github.com/mattriemer/MER