
Nithin-Holla / MetaLifelongLanguage

License: MIT
Repository containing code for the paper "Meta-Learning with Sparse Experience Replay for Lifelong Language Learning".

Programming Languages

python

Projects that are alternatives of or similar to MetaLifelongLanguage

FACIL
Framework for Analysis of Class-Incremental Learning with 12 state-of-the-art methods and 3 baselines.
Stars: ✭ 411 (+1857.14%)
Mutual labels:  lifelong-learning, continual-learning
Adam-NSCL
PyTorch implementation of our Adam-NSCL algorithm from our CVPR2021 (oral) paper "Training Networks in Null Space for Continual Learning"
Stars: ✭ 34 (+61.9%)
Mutual labels:  lifelong-learning, continual-learning
class-norm
Class Normalization for Continual Zero-Shot Learning
Stars: ✭ 34 (+61.9%)
Mutual labels:  lifelong-learning, continual-learning
Meta Learning Bert
Meta learning with BERT as a learner
Stars: ✭ 52 (+147.62%)
Mutual labels:  text-classification, meta-learning
Relation-Classification
Relation Classification - SEMEVAL 2010 task 8 dataset
Stars: ✭ 46 (+119.05%)
Mutual labels:  text-classification, relation-extraction
Macadam
Macadam is a natural language processing toolkit based on Tensorflow (Keras) and bert4keras, focused on text classification, sequence labeling, and relation extraction. It supports embeddings such as RANDOM, WORD2VEC, FASTTEXT, BERT, ALBERT, ROBERTA, NEZHA, XLNET, ELECTRA, and GPT-2; text classification algorithms such as FineTune, FastText, TextCNN, CharCNN, BiRNN, RCNN, DCNN, CRNN, DeepMoji, SelfAttention, HAN, and Capsule; and sequence labeling algorithms such as CRF, Bi-LSTM-CRF, CNN-LSTM, DGCNN, Bi-LSTM-LAN, Lattice-LSTM-Batch, and MRC.
Stars: ✭ 149 (+609.52%)
Mutual labels:  text-classification, relation-extraction
FUSION
PyTorch code for NeurIPSW 2020 paper (4th Workshop on Meta-Learning) "Few-Shot Unsupervised Continual Learning through Meta-Examples"
Stars: ✭ 18 (-14.29%)
Mutual labels:  meta-learning, continual-learning
Continual Learning Data Former
A PyTorch-compatible data loader to create sequences of tasks for continual learning
Stars: ✭ 32 (+52.38%)
Mutual labels:  lifelong-learning, continual-learning
Remembering-for-the-Right-Reasons
Official Implementation of Remembering for the Right Reasons (ICLR 2021)
Stars: ✭ 27 (+28.57%)
Mutual labels:  lifelong-learning, continual-learning
reproducible-continual-learning
Continual learning baselines and strategies from popular papers, using Avalanche. We include EWC, SI, GEM, AGEM, LwF, iCarl, GDumb, and other strategies.
Stars: ✭ 118 (+461.9%)
Mutual labels:  lifelong-learning, continual-learning
Few Shot Text Classification
Few-shot binary text classification with Induction Networks and Word2Vec weights initialization
Stars: ✭ 32 (+52.38%)
Mutual labels:  text-classification, meta-learning
cvpr clvision challenge
CVPR 2020 Continual Learning Challenge - Submit your CL algorithm today!
Stars: ✭ 57 (+171.43%)
Mutual labels:  lifelong-learning, continual-learning
Lightnlp
A deep learning framework for natural language processing based on PyTorch and torchtext.
Stars: ✭ 739 (+3419.05%)
Mutual labels:  text-classification, relation-extraction
Marktool
A web-based general-purpose text annotation tool that supports large-scale entity annotation, relation annotation, event annotation, text classification, automatic annotation based on dictionary and regular-expression matching, and standard-name annotation for normalization; it also supports iterative annotation of text and nested entity annotation. Annotation schemes are customizable and can be created once and reused across tasks of the same type. Hierarchical entity sets expand the range of entity types, and a new, efficient annotation workflow improves user experience and labeling efficiency. In addition, a review stage checks and adjusts the consistency of multiple annotators' results, improving the accuracy and reliability of the annotated corpus.
Stars: ✭ 190 (+804.76%)
Mutual labels:  text-classification, relation-extraction
SIGIR2021 Conure
One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (+9.52%)
Mutual labels:  lifelong-learning, continual-learning
Generative Continual Learning
No description or website provided.
Stars: ✭ 51 (+142.86%)
Mutual labels:  lifelong-learning, continual-learning
CPG
Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen, "Compacting, Picking and Growing for Unforgetting Continual Learning," Thirty-third Conference on Neural Information Processing Systems, NeurIPS 2019
Stars: ✭ 91 (+333.33%)
Mutual labels:  lifelong-learning, continual-learning
Ask2Transformers
A Framework for Textual Entailment based Zero Shot text classification
Stars: ✭ 102 (+385.71%)
Mutual labels:  text-classification, relation-extraction
HebbianMetaLearning
Meta-Learning through Hebbian Plasticity in Random Networks: https://arxiv.org/abs/2007.02686
Stars: ✭ 77 (+266.67%)
Mutual labels:  meta-learning, lifelong-learning
CVPR21 PASS
PyTorch implementation of our CVPR2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning"
Stars: ✭ 55 (+161.9%)
Mutual labels:  lifelong-learning, continual-learning

MetaLifelongLanguage

This is the official code for the paper Meta-Learning with Sparse Experience Replay for Lifelong Language Learning.

Getting started

  • Clone the repository: git clone git@github.com:Nithin-Holla/MetaLifelongLanguage.git.
  • Create a virtual environment, e.g. with venv or conda (a minimal setup sketch follows this list).
  • Install the required packages: pip install -r MetaLifelongLanguage/requirements.txt.
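
The README does not prescribe a particular environment manager; as a minimal sketch using Python's built-in venv (the environment name metalifelong is purely illustrative), the setup steps amount to:

    git clone git@github.com:Nithin-Holla/MetaLifelongLanguage.git
    python3 -m venv metalifelong                          # create a virtual environment
    source metalifelong/bin/activate                      # activate it (Linux/macOS)
    pip install -r MetaLifelongLanguage/requirements.txt  # install the required packages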

Downloading the data

  • Create a directory for storing the data: mkdir data.
  • Navigate to the data directory: cd data.
  • Download the five datasets for text classification from here and unzip them in this directory.
  • Make a new directory for lifelong relation extraction and navigate into it: mkdir LifelongFewRel && cd LifelongFewRel.
  • Download the files using these commands:
    wget https://raw.githubusercontent.com/hongwang600/Lifelong_Relation_Detection/master/data/relation_name.txt
    wget https://raw.githubusercontent.com/hongwang600/Lifelong_Relation_Detection/master/data/training_data.txt
    wget https://raw.githubusercontent.com/hongwang600/Lifelong_Relation_Detection/master/data/val_data.txt
  • Navigate back to the top-level directory: cd ../..
  • The directory tree should then look like this (a consolidated shell sketch of the steps above follows the tree):
.
├── MetaLifelongLanguage
├── data
│   ├── ag_news_csv
│   │   ├── classes.txt
│   │   ├── readme.txt
│   │   ├── test.csv
│   │   └── train.csv
│   ├── amazon_review_full_csv
│   │   ├── readme.txt
│   │   ├── test.csv
│   │   └── train.csv
│   ├── dbpedia_csv
│   │   ├── classes.txt
│   │   ├── readme.txt
│   │   ├── test.csv
│   │   └── train.csv
│   ├── yahoo_answers_csv
│   │   ├── classes.txt
│   │   ├── readme.txt
│   │   ├── test.csv
│   │   └── train.csv
│   ├── yelp_review_full_csv
│   │   ├── readme.txt
│   │   ├── test.csv
│   │   └── train.csv
│   ├── LifelongFewRel
│   │   ├── relation_name.txt
│   │   ├── training_data.txt
│   │   └── val_data.txt
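
Put together, the data-preparation steps above amount to the following shell sketch (the five text classification archives still need to be downloaded and unzipped manually via the link referenced above):

    mkdir data && cd data
    # download and unzip the five text classification datasets into this directory
    mkdir LifelongFewRel && cd LifelongFewRel
    wget https://raw.githubusercontent.com/hongwang600/Lifelong_Relation_Detection/master/data/relation_name.txt
    wget https://raw.githubusercontent.com/hongwang600/Lifelong_Relation_Detection/master/data/training_data.txt
    wget https://raw.githubusercontent.com/hongwang600/Lifelong_Relation_Detection/master/data/val_data.txt
    cd ../..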

Text classification

train_text_cls.py contains the code for training and evaluation on the lifelong text classification benchmark (an illustrative invocation is given after the argument list below). The usage is:

python train_text_cls.py [-h] --order ORDER [--n_epochs N_EPOCHS] [--lr LR]
                         [--inner_lr INNER_LR] [--meta_lr META_LR]
                         [--model MODEL] [--learner LEARNER]
                         [--n_episodes N_EPISODES]
                         [--mini_batch_size MINI_BATCH_SIZE]
                         [--updates UPDATES] [--write_prob WRITE_PROB]
                         [--max_length MAX_LENGTH] [--seed SEED]
                         [--replay_rate REPLAY_RATE]
                         [--replay_every REPLAY_EVERY]

optional arguments:
  -h, --help            show this help message and exit
  --order ORDER         Order of datasets
  --n_epochs N_EPOCHS   Number of epochs (only for MTL)
  --lr LR               Learning rate (only for the baselines)
  --inner_lr INNER_LR   Inner-loop learning rate
  --meta_lr META_LR     Meta learning rate
  --model MODEL         Name of the model
  --learner LEARNER     Learner method
  --n_episodes N_EPISODES
                        Number of meta-training episodes
  --mini_batch_size MINI_BATCH_SIZE
                        Batch size of data points within an episode
  --updates UPDATES     Number of inner-loop updates
  --write_prob WRITE_PROB
                        Write probability for buffer memory
  --max_length MAX_LENGTH
                        Maximum sequence length for the input
  --seed SEED           Random seed
  --replay_rate REPLAY_RATE
                        Replay rate from memory
  --replay_every REPLAY_EVERY
                        Number of data points between replay
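
For reference, a single run might look like the command below. All values shown, including the --model and --learner names, are placeholders rather than documented defaults; the accepted choices are defined by the script's argparse configuration, so check train_text_cls.py before running:

    python train_text_cls.py --order 1 --model bert --learner oml \
                             --n_episodes 10000 --mini_batch_size 16 --updates 5 \
                             --replay_rate 0.01 --replay_every 9600 --seed 42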

Relation extraction

train_rel.py contains the code for training and evaluation on the lifelong relation extraction benchmark (an illustrative invocation is given after the argument list below). The usage is:

python train_rel.py [-h] [--n_epochs N_EPOCHS] [--lr LR] [--inner_lr INNER_LR]
                    [--meta_lr META_LR] [--model MODEL] [--learner LEARNER]
                    [--n_episodes N_EPISODES]
                    [--mini_batch_size MINI_BATCH_SIZE] [--updates UPDATES]
                    [--write_prob WRITE_PROB] [--max_length MAX_LENGTH]
                    [--seed SEED] [--replay_rate REPLAY_RATE] [--order ORDER]
                    [--num_clusters NUM_CLUSTERS]
                    [--replay_every REPLAY_EVERY]

optional arguments:
  -h, --help            show this help message and exit
  --n_epochs N_EPOCHS   Number of epochs (only for MTL)
  --lr LR               Learning rate (only for the baselines)
  --inner_lr INNER_LR   Inner-loop learning rate
  --meta_lr META_LR     Meta learning rate
  --model MODEL         Name of the model
  --learner LEARNER     Learner method
  --n_episodes N_EPISODES
                        Number of meta-training episodes
  --mini_batch_size MINI_BATCH_SIZE
                        Batch size of data points within an episode
  --updates UPDATES     Number of inner-loop updates
  --write_prob WRITE_PROB
                        Write probability for buffer memory
  --max_length MAX_LENGTH
                        Maximum sequence length for the input
  --seed SEED           Random seed
  --replay_rate REPLAY_RATE
                        Replay rate from memory
  --order ORDER         Number of task orders to run for
  --num_clusters NUM_CLUSTERS
                        Number of clusters to take
  --replay_every REPLAY_EVERY
                        Number of data points between replay
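
An illustrative invocation for relation extraction is sketched below; as before, the --model and --learner names and the numeric values are placeholders, not documented defaults, so check train_rel.py for the accepted choices:

    python train_rel.py --order 5 --num_clusters 10 --model bert --learner oml \
                        --mini_batch_size 16 --updates 5 \
                        --replay_rate 0.01 --replay_every 9600 --seed 42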

Citation

If you use this code repository, please consider citing the paper:

@article{holla2020lifelong,
  title={Meta-Learning with Sparse Experience Replay for Lifelong Language Learning},
  author={Holla, Nithin and Mishra, Pushkar and Yannakoudakis, Helen and Shutova, Ekaterina},
  journal={arXiv preprint arXiv:2009.04891},
  year={2020}
}