
fajieyuan / SIGIR2021_Conure

Licence: other
One Person, One Model, One World: Learning Continual User Representation without Forgetting

Programming Languages

python

Projects that are alternatives of or similar to SIGIR2021 Conure

awesome-graph-self-supervised-learning
Awesome Graph Self-Supervised Learning
Stars: ✭ 805 (+3400%)
Mutual labels:  transfer-learning, self-supervised-learning, pre-training
Kashgari
Kashgari is a production-level NLP Transfer learning framework built on top of tf.keras for text-labeling and text-classification, includes Word2Vec, BERT, and GPT2 Language Embedding.
Stars: ✭ 2,235 (+9617.39%)
Mutual labels:  transfer-learning, bert, bert-model
CVPR21 PASS
PyTorch implementation of our CVPR2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning"
Stars: ✭ 55 (+139.13%)
Mutual labels:  lifelong-learning, continual-learning, self-supervised-learning
Sigir2020 peterrec
Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation
Stars: ✭ 121 (+426.09%)
Mutual labels:  recommender-system, user, transfer-learning
Filipino-Text-Benchmarks
Open-source benchmark datasets and pretrained transformer models in the Filipino language.
Stars: ✭ 22 (-4.35%)
Mutual labels:  transformer, transfer-learning, bert
ParsBigBird
Persian Bert For Long-Range Sequences
Stars: ✭ 58 (+152.17%)
Mutual labels:  transfer-learning, bert
MinTL
MinTL: Minimalist Transfer Learning for Task-Oriented Dialogue Systems
Stars: ✭ 61 (+165.22%)
Mutual labels:  transformer, transfer-learning
Continual Learning Data Former
A pytorch compatible data loader to create sequence of tasks for Continual Learning
Stars: ✭ 32 (+39.13%)
Mutual labels:  lifelong-learning, continual-learning
MetaLifelongLanguage
Repository containing code for the paper "Meta-Learning with Sparse Experience Replay for Lifelong Language Learning".
Stars: ✭ 21 (-8.7%)
Mutual labels:  lifelong-learning, continual-learning
MetaHeac
This is an official implementation for "Learning to Expand Audience via Meta Hybrid Experts and Critics for Recommendation and Advertising"(KDD2021).
Stars: ✭ 36 (+56.52%)
Mutual labels:  recommendation, transfer-learning
transformer-models
Deep Learning Transformer models in MATLAB
Stars: ✭ 90 (+291.3%)
Mutual labels:  transformer, bert
bert-as-a-service TFX
End-to-end pipeline with TFX to train and deploy a BERT model for sentiment analysis.
Stars: ✭ 32 (+39.13%)
Mutual labels:  transformer, bert
SHOT-plus
code for our TPAMI 2021 paper "Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer"
Stars: ✭ 46 (+100%)
Mutual labels:  transfer-learning, self-supervised-learning
cvpr clvision challenge
CVPR 2020 Continual Learning Challenge - Submit your CL algorithm today!
Stars: ✭ 57 (+147.83%)
Mutual labels:  lifelong-learning, continual-learning
text-generation-transformer
text generation based on transformer
Stars: ✭ 36 (+56.52%)
Mutual labels:  transformer, bert
Self-Supervised-Embedding-Fusion-Transformer
The code for our IEEE ACCESS (2020) paper Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion.
Stars: ✭ 57 (+147.83%)
Mutual labels:  bert, self-supervised-learning
golgotha
Contextualised Embeddings and Language Modelling using BERT and Friends using R
Stars: ✭ 39 (+69.57%)
Mutual labels:  transformer, bert
semantic-document-relations
Implementation, trained models and result data for the paper "Pairwise Multi-Class Document Classification for Semantic Relations between Wikipedia Articles"
Stars: ✭ 21 (-8.7%)
Mutual labels:  transformer, bert
are-16-heads-really-better-than-1
Code for the paper "Are Sixteen Heads Really Better than One?"
Stars: ✭ 128 (+456.52%)
Mutual labels:  transformer, bert
CPG
Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen, "Compacting, Picking and Growing for Unforgetting Continual Learning," Thirty-third Conference on Neural Information Processing Systems, NeurIPS 2019
Stars: ✭ 91 (+295.65%)
Mutual labels:  lifelong-learning, continual-learning

SIGIR2021_Conure

One Person, One Model, One World: Learning Continual User Representation without Forgetting

Blog post (Zhihu): https://zhuanlan.zhihu.com/p/437671278

@inproceedings{yuan2021one,
  title={One person, one model, one world: Learning continual user representation without forgetting},
  author={Yuan, Fajie and Zhang, Guoxiao and Karatzoglou, Alexandros and Jose, Joemon and Kong, Beibei and Li, Yudong},
  booktitle={Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  pages={696--705},
  year={2021}
}

If you want to use Conure in a real production system with hundreds of millions of items, we strongly suggest the following:

(1) First understand the code released here.
(2) Replace feed_dict (which is slow) with TFRecord (tf.data.Dataset) and tf.estimator; this is several times faster (see https://zhuanlan.zhihu.com/p/53345706). A minimal input-pipeline sketch follows below.
(3) Contact [email protected] if you cannot achieve the expected results (e.g., no personalization for new-user recommendation).
(4) For simplicity, we store the binary mask matrix as a standard dense matrix; you can replace it with a sparse matrix, which is much more parameter-efficient.
(5) Note that the code attached here was rewritten by Fajie after he left Tencent. Although it is not the original code used for the paper, you should be able to reproduce all results reported in the paper.
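As a rough illustration of point (2), the sketch below shows a typical TFRecord + tf.data.Dataset input pipeline in TensorFlow 1.x. The feature names ("item_seq", "label"), sequence length, and file pattern are hypothetical, not taken from this repository; adapt them to your own preprocessing.

import tensorflow as tf

SEQ_LEN = 50  # assumed maximum session length

def parse_example(serialized):
    # Each TFRecord example is assumed to hold a fixed-length item-ID sequence and a next-item label.
    features = {
        "item_seq": tf.FixedLenFeature([SEQ_LEN], tf.int64),
        "label": tf.FixedLenFeature([1], tf.int64),
    }
    parsed = tf.parse_single_example(serialized, features)
    return parsed["item_seq"], parsed["label"]

def input_fn(file_pattern, batch_size=256):
    # tf.data pipeline: read, parse, shuffle, and batch without feed_dict.
    files = tf.data.Dataset.list_files(file_pattern)
    dataset = files.interleave(tf.data.TFRecordDataset, cycle_length=4)
    dataset = dataset.map(parse_example, num_parallel_calls=8)
    dataset = dataset.shuffle(10000).repeat().batch(batch_size)
    return dataset.prefetch(1)  # keep the GPU fed while the CPU prepares the next batch

# With tf.estimator: estimator.train(input_fn=lambda: input_fn("Data/Session/*.tfrecord"))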

NextItNet PyTorch version: https://github.com/syiswell/NextItNet-Pytorch


conure_tp_t1.py: trains Conure on Task 1; once training converges, the model is pruned.

conure_ret_t1.py: retrains the pruned architecture of Task 1.

conure_tp_t2.py: trains Conure on Task 2; once training converges, the model is pruned.

conure_ret_t2.py: retrains the pruned architecture of Task 2.
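As a rough illustration of the prune-then-retrain cycle performed by the scripts above, the sketch below builds a binary keep-mask by magnitude pruning and freezes the kept positions when a later task is trained. The function names, keep ratio, and update rule are illustrative assumptions, not the exact logic of conure_tp_*.py / conure_ret_*.py.

import numpy as np

def prune_by_magnitude(weights, keep_ratio=0.3):
    # Keep the largest-magnitude fraction of weights; return a binary mask of the same shape.
    threshold = np.percentile(np.abs(weights), (1.0 - keep_ratio) * 100)
    return (np.abs(weights) >= threshold).astype(np.float32)

def masked_update(weights, gradient, frozen_mask, lr=0.01):
    # Apply the gradient step only where no earlier task has claimed the weight.
    return weights - lr * gradient * (1.0 - frozen_mask)

# Toy usage: weights kept for Task 1 stay untouched while Task 2 trains.
w = np.random.randn(4, 4).astype(np.float32)
mask_t1 = prune_by_magnitude(w, keep_ratio=0.3)
grad_t2 = np.random.randn(4, 4).astype(np.float32)
w = masked_update(w, grad_t2, frozen_mask=mask_t1)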

Large-scale recommendation dataset for pre-training, transfer learning (cross-domain recommendation), and user representation learning:

Download the TTL dataset from: https://drive.google.com/file/d/1imhHUsivh6oMEtEW-RwVc4OsDqn-xOaP/view?usp=sharin or the ML dataset from: https://drive.google.com/file/d/1-_KmnZFaOdH11keLYVcgkf-kW_BaM266/view?usp=sharing

TTL dataset: it can be used for recommender-system pre-training, transfer learning, cross-domain recommendation, cold-start recommendation, user representation learning, self-supervised learning, and related tasks.
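If you want a quick look at the downloaded file before plugging it into the scripts, a small helper like the one below can report basic statistics. It assumes one user sequence of comma-separated item IDs per line, which is an assumption rather than a documented format, and the filename is hypothetical.

def peek(path, max_lines=1000):
    # Report rough statistics for the first few lines of an interaction file.
    lengths, items_seen = [], set()
    with open(path) as f:
        for i, line in enumerate(f):
            if i >= max_lines:
                break
            items = line.strip().split(",")  # assumed comma-separated item IDs
            lengths.append(len(items))
            items_seen.update(items)
    print("sequences sampled: %d" % len(lengths))
    print("average length: %.1f" % (sum(lengths) / max(float(len(lengths)), 1.0)))
    print("distinct items in sample: %d" % len(items_seen))

# peek("Data/Session/ttl_train.csv")  # hypothetical filename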

Running our code:

Put these datasets in Data/Session.

Follow these steps:

python conure_tp_t1.py: train until convergence (training takes more than 24 hours); we suggest 4 iterations. Parameters are saved automatically.

python conure_ret_t1.py: you can stop this job manually once the results are satisfactory (i.e., better than the results reported by conure_tp_t1.py). Parameters are saved automatically.

python conure_tp_t2.py

python conure_ret_t2.py

python conure_tp_t3.py

python conure_ret_t3.py

python conure_tp_t4.py

python conure_ret_t4.py
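If you prefer to launch the whole sequence from one place, a small driver like the one below simply shells out to the commands listed above in the same order. It is a convenience sketch, not part of the repository; note that the conure_ret_*.py jobs may need to be stopped manually as described above, so fully automated chaining is only approximate.

import subprocess

SCRIPTS = [
    "conure_tp_t1.py", "conure_ret_t1.py",
    "conure_tp_t2.py", "conure_ret_t2.py",
    "conure_tp_t3.py", "conure_ret_t3.py",
    "conure_tp_t4.py", "conure_ret_t4.py",
]

for script in SCRIPTS:
    print("running " + script)
    subprocess.check_call(["python", script])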

Environments

  • TensorFlow (version: 1.10.0)
  • Python 2.7

Related work:

[1]
@inproceedings{yuan2019simple,
  title={A simple convolutional generative network for next item recommendation},
  author={Yuan, Fajie and Karatzoglou, Alexandros and Arapakis, Ioannis and Jose, Joemon M and He, Xiangnan},
  booktitle={Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining},
  pages={582--590},
  year={2019}
}
[2]
@inproceedings{yuan2020parameter,
  title={Parameter-efficient transfer from sequential behaviors for user modeling and recommendation},
  author={Yuan, Fajie and He, Xiangnan and Karatzoglou, Alexandros and Zhang, Liguang},
  booktitle={Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval},
  pages={1469--1478},
  year={2020}
}
[3]
@inproceedings{yuan2020future,
  title={Future Data Helps Training: Modeling Future Contexts for Session-based Recommendation},
  author={Yuan, Fajie and He, Xiangnan and Jiang, Haochuan and Guo, Guibing and Xiong, Jian and Xu, Zhezhao and Xiong, Yilin},
  booktitle={Proceedings of The Web Conference 2020},
  pages={303--313},
  year={2020}
}
[4]
@article{sun2020generic,
  title={A Generic Network Compression Framework for Sequential Recommender Systems},
  author={Sun, Yang and Yuan, Fajie and Yang, Ming and Wei, Guoao and Zhao, Zhou and Liu, Duo},
  journal={Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining},
  year={2020}
}
[5]
@inproceedings{yuan2016lambdafm,
  title={Lambdafm: learning optimal ranking with factorization machines using lambda surrogates},
  author={Yuan, Fajie and Guo, Guibing and Jose, Joemon M and Chen, Long and Yu, Haitao and Zhang, Weinan},
  booktitle={Proceedings of the 25th ACM International on Conference on Information and Knowledge Management},
  pages={227--236},
  year={2016}
}
[6]
@article{wang2020stackrec,
  title={StackRec: Efficient Training of Very Deep Sequential Recommender Models by Layer Stacking},
  author={Wang, Jiachun and Yuan, Fajie and Chen, Jian and Wu, Qingyao and Li, Chengmin and Yang, Min and Sun, Yang and Zhang, Guoxiao},
  journal={arXiv preprint arXiv:2012.07598},
  year={2020}
}

Hiring

If you want to work with Fajie (https://fajieyuan.github.io/), please contact him by email at [email protected]. His lab is recruiting visiting students, interns, research assistants, postdocs, and research scientists. You can also contact him if you want to pursue a PhD degree at Westlake University. Feel free to talk to him (WeChat: wuxiangwangyuan) if you have ideas or papers for collaboration; he is open to various collaborations. Fajie Yuan's group at Westlake University is continuously recruiting research assistants, PhD students, postdocs, visiting scholars, and research scientists working on recommender systems and bioinformatics (especially protein-related topics).
