
RUCAIBox / CIKM2020-S3Rec

Licence: other
Code for CIKM2020 "S3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization"

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to CIKM2020-S3Rec

BossNAS
(ICCV 2021) BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search
Stars: ✭ 125 (-16.67%)
Mutual labels:  self-supervised-learning
FKD
A Fast Knowledge Distillation Framework for Visual Recognition
Stars: ✭ 49 (-67.33%)
Mutual labels:  self-supervised-learning
SimCLR-in-TensorFlow-2
(Minimally) implements SimCLR (https://arxiv.org/abs/2002.05709) in TensorFlow 2.
Stars: ✭ 75 (-50%)
Mutual labels:  self-supervised-learning
CVPR21 PASS
PyTorch implementation of our CVPR2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning"
Stars: ✭ 55 (-63.33%)
Mutual labels:  self-supervised-learning
MiniVox
Code for our ACML and INTERSPEECH papers: "Speaker Diarization as a Fully Online Bandit Learning Problem in MiniVox".
Stars: ✭ 15 (-90%)
Mutual labels:  self-supervised-learning
MINE
Mutual Information Neural Estimator implemented in Tensorflow
Stars: ✭ 43 (-71.33%)
Mutual labels:  mutual-information
newt
Natural World Tasks
Stars: ✭ 24 (-84%)
Mutual labels:  self-supervised-learning
GCA
[WWW 2021] Source code for "Graph Contrastive Learning with Adaptive Augmentation"
Stars: ✭ 69 (-54%)
Mutual labels:  self-supervised-learning
video repres mas
code for CVPR-2019 paper: Self-supervised Spatio-temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics
Stars: ✭ 63 (-58%)
Mutual labels:  self-supervised-learning
improving segmentation with selfsupervised depth
[CVPR21] Implementation of our work "Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation"
Stars: ✭ 189 (+26%)
Mutual labels:  self-supervised-learning
BYOL
Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
Stars: ✭ 102 (-32%)
Mutual labels:  self-supervised-learning
G-SimCLR
This is the code base for paper "G-SimCLR : Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling" by Souradip Chakraborty, Aritra Roy Gosthipaty and Sayak Paul.
Stars: ✭ 69 (-54%)
Mutual labels:  self-supervised-learning
FisheyeDistanceNet
FisheyeDistanceNet
Stars: ✭ 33 (-78%)
Mutual labels:  self-supervised-learning
MSF
Official code for "Mean Shift for Self-Supervised Learning"
Stars: ✭ 42 (-72%)
Mutual labels:  self-supervised-learning
point-cloud-prediction
Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks
Stars: ✭ 97 (-35.33%)
Mutual labels:  self-supervised-learning
simsiam-cifar10
Code to train the SimSiam model on cifar10 using PyTorch
Stars: ✭ 33 (-78%)
Mutual labels:  self-supervised-learning
SelfSupervisedLearning-DSM
code for AAAI21 paper "Enhancing Unsupervised Video Representation Learning by Decoupling the Scene and the Motion“
Stars: ✭ 26 (-82.67%)
Mutual labels:  self-supervised-learning
SelfTask-GNN
Implementation of paper "Self-supervised Learning on Graphs:Deep Insights and New Directions"
Stars: ✭ 78 (-48%)
Mutual labels:  self-supervised-learning
AdCo
AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries
Stars: ✭ 148 (-1.33%)
Mutual labels:  self-supervised-learning
sc depth pl
Pytorch Lightning Implementation of SC-Depth (V1, V2...) for Unsupervised Monocular Depth Estimation.
Stars: ✭ 86 (-42.67%)
Mutual labels:  self-supervised-learning

Code for our CIKM 2020 Paper "S3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization"

Overview

The major contributions of our paper are four self-supervised optimization objectives, which capture item-attribute, sequence-item, sequence-attribute, and sequence-subsequence correlations in the raw data, respectively. These objectives are formulated in a unified way via mutual information maximization.

[Figure: overview of the four self-supervised learning objectives in S3-Rec]
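The paper casts all four objectives as mutual information maximization; in practice such objectives are commonly optimized with a noise-contrastive (InfoNCE-style) lower bound, in which a positive pair is scored against sampled negatives and the model learns to rank the positive pair highest. The following minimal PyTorch-style sketch illustrates such a loss; it is our own simplified illustration with assumed tensor names and shapes, not the repository's exact implementation.

import torch
import torch.nn.functional as F

def info_nce_loss(query, positive_key, negative_keys, temperature=1.0):
    """InfoNCE-style lower bound on mutual information.

    query:         (batch, dim)        e.g. a sequence representation
    positive_key:  (batch, dim)        e.g. a masked item / attribute embedding
    negative_keys: (batch, n_neg, dim) sampled negative embeddings
    """
    # Score of the positive pair: (batch, 1)
    pos_score = torch.sum(query * positive_key, dim=-1, keepdim=True)
    # Scores of the negative pairs: (batch, n_neg)
    neg_score = torch.bmm(negative_keys, query.unsqueeze(-1)).squeeze(-1)
    # Softmax over [positive, negatives]; the correct class is always index 0.
    logits = torch.cat([pos_score, neg_score], dim=-1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)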

Reproduce

We conduct extensive experiments on six datasets under two evaluation scenarios (ranking against 99 sampled negative items or against all items). Please check the ./reproduce/ directory to reproduce the results for the dataset and evaluation scenario you are interested in.

Results

We compare the performance of our method against different baselines on six datasets. The best and second-best results are marked in bold and underlined fonts, respectively.

Since some recent researchers have questioned the effect of different ranking strategies on the evaluation of recommender systems, we conduct experiments with both mainstream evaluation approaches. The additional experiments are quite time-consuming, so please do not hold back your star :)

In the paper, we pair the ground-truth item with 99 randomly sampled negative items that the user has not interacted with, and report HR@{1, 5, 10}, NDCG@{5, 10}, and MRR. The corresponding test files are named

data-name_sample.txt
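For reference, all of these metrics can be computed from the rank of the ground-truth item among the 1 + 99 scored candidates. The sketch below is our own NumPy illustration (the score layout is an assumption), not the repository's evaluation code.

import numpy as np

def sampled_metrics(scores, k_list=(1, 5, 10)):
    """scores: (num_users, 100) array where column 0 is the ground-truth
    item and the remaining 99 columns are the sampled negatives."""
    # 0-based rank of the ground-truth item for each user.
    rank = (scores > scores[:, :1]).sum(axis=1)
    metrics = {}
    for k in k_list:
        hit = rank < k
        metrics[f"HR@{k}"] = hit.mean()
        metrics[f"NDCG@{k}"] = np.where(hit, 1.0 / np.log2(rank + 2), 0.0).mean()
    metrics["MRR"] = (1.0 / (rank + 1)).mean()
    return metrics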

The results are shown in the figure below.

[Figure: results of ranking against 99 sampled negative items]

We also rank the ground-truth item against all items. We omit FM and AutoInt because they need to enumerate all user-item pairs, which takes a very long time. The results are shown in the figure below.

[Figure: results of ranking against all items]

Requirements

pip install -r requirements.txt

Data format

Data preprocessing
./data/data_process.py

Generate negative items for testing
./data/generate_test.py


data-name.txt
one user per line
user_1 item_1 item_2 ...
user_2 item_1 item_2 ...

data-name_sample.txt
one user per line
user_1 neg_item_1 neg_item_2 ...
user_2 neg_item_1 neg_item_2 ...

data-name_item2attributes.json
{item_1:[attr, ...], item_2:[attr, ...], ... }
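As a small illustration, the files above can be parsed as follows; this is a hedged sketch with our own function names, not code taken from the repository.

import json

def load_sequences(path):
    # data-name.txt / data-name_sample.txt: one user per line,
    # followed by the (positive or negative) item ids.
    user_seqs = {}
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            user, *items = line.split()
            user_seqs[user] = items
    return user_seqs

def load_item2attributes(path):
    # data-name_item2attributes.json: item id -> list of attribute ids.
    with open(path) as f:
        return json.load(f)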

Pretrain

python run_pretrain.py \
--data_name data_name
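
For example, assuming the Amazon Beauty dataset has been preprocessed under the (hypothetical) name Beauty:

python run_pretrain.py \
--data_name Beauty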

Finetune

We support two evaluation methods. For more details, please check the ./reproduce directory.

  • Rank ground-truth item with 99 randomly sampled negative items
python run_finetune_sample.py \
--data_name data_name \
--ckp pretrain_epochs_num
  • Rank the ground-truth item with all the items
python run_finetune_full.py \
--data_name data_name \
--ckp pretrain_epochs_num
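
For example, to fine-tune with full ranking from the checkpoint saved after 100 pretraining epochs on the same hypothetical Beauty dataset:

python run_finetune_full.py \
--data_name Beauty \
--ckp 100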

Cite

If you find our code and datasets useful for your research or development, please cite our paper:

@inproceedings{CIKM2020-S3Rec,
  author    = {Kun Zhou and
               Hui Wang and
               Wayne Xin Zhao and
               Yutao Zhu and
               Sirui Wang and
               Fuzheng Zhang and
               Zhongyuan Wang and
               Ji{-}Rong Wen},
  title     = {S3-Rec: Self-Supervised Learning for Sequential Recommendation with
               Mutual Information Maximization},
  booktitle = {{CIKM} '20: The 29th {ACM} International Conference on Information
               and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020},
  pages     = {1893--1902},
  publisher = {{ACM}},
  year      = {2020}
}

Contact

If you have any questions about our paper or code, please send an email to [email protected].
