chenchongthu / Enmf

License: MIT
This is our implementation of ENMF: Efficient Neural Matrix Factorization (TOIS Vol. 38, 2020). It also provides a fair evaluation of existing state-of-the-art recommendation models.

Programming Languages

python

Projects that are alternatives to or similar to Enmf

Recsys2019 deeplearning evaluation
This is the repository of our article published in RecSys 2019 "Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches" and of several follow-up studies.
Stars: ✭ 780 (+712.5%)
Mutual labels:  recommender-system, collaborative-filtering, reproducible-research, reproducibility
Evalai
☁️ 🚀 📊 📈 Evaluating state of the art in AI
Stars: ✭ 1,087 (+1032.29%)
Mutual labels:  evaluation, reproducible-research, reproducibility
Polara
Recommender system and evaluation framework for top-n recommendation tasks that respects the polarity of feedback. Fast, flexible and easy to use. Written in Python, built on the scientific Python stack.
Stars: ✭ 205 (+113.54%)
Mutual labels:  recommender-system, collaborative-filtering, evaluation
Mrsr
MRSR - Matlab Recommender Systems Research is a software framework for evaluating collaborative filtering recommender systems in Matlab.
Stars: ✭ 13 (-86.46%)
Mutual labels:  collaborative-filtering, evaluation
Newsrecommendsystem
A personalized news recommendation system involving collaborative filtering, content-based recommendation and hot-news recommendation; it can easily be adapted for use in other settings.
Stars: ✭ 557 (+480.21%)
Mutual labels:  recommender-system, collaborative-filtering
Recsys19 hybridsvd
Accompanying code for reproducing experiments from the HybridSVD paper. Preprint is available at https://arxiv.org/abs/1802.06398.
Stars: ✭ 23 (-76.04%)
Mutual labels:  recommender-system, collaborative-filtering
Rrtools
rrtools: Tools for Writing Reproducible Research in R
Stars: ✭ 508 (+429.17%)
Mutual labels:  reproducible-research, reproducibility
Elliot
Comprehensive and Rigorous Framework for Reproducible Recommender Systems Evaluation
Stars: ✭ 49 (-48.96%)
Mutual labels:  recommender-system, collaborative-filtering
Steppy Toolkit
Curated set of transformers that make your work with steppy faster and more effective 🔭
Stars: ✭ 21 (-78.12%)
Mutual labels:  reproducible-research, reproducibility
Movie Recommender System
Basic Movie Recommendation Web Application using user-item collaborative filtering.
Stars: ✭ 85 (-11.46%)
Mutual labels:  recommender-system, collaborative-filtering
Drake Examples
Example workflows for the drake R package
Stars: ✭ 57 (-40.62%)
Mutual labels:  reproducible-research, reproducibility
Collaborative Deep Learning For Recommender Systems
A hybrid model combining a stacked denoising autoencoder with matrix factorization is applied to predict customer purchase behavior in the following month, based on purchase history and user information in the Santander dataset.
Stars: ✭ 60 (-37.5%)
Mutual labels:  recommender-system, collaborative-filtering
Neural collaborative filtering
Neural Collaborative Filtering
Stars: ✭ 1,243 (+1194.79%)
Mutual labels:  recommender-system, collaborative-filtering
Labnotebook
LabNotebook is a tool that allows you to flexibly monitor, record, save, and query all your machine learning experiments.
Stars: ✭ 526 (+447.92%)
Mutual labels:  reproducible-research, reproducibility
Neural graph collaborative filtering
Neural Graph Collaborative Filtering, SIGIR2019
Stars: ✭ 517 (+438.54%)
Mutual labels:  recommender-system, collaborative-filtering
Recoder
Large scale training of factorization models for Collaborative Filtering with PyTorch
Stars: ✭ 46 (-52.08%)
Mutual labels:  recommender-system, collaborative-filtering
Rankfm
Factorization Machines for Recommendation and Ranking Problems with Implicit Feedback Data
Stars: ✭ 71 (-26.04%)
Mutual labels:  recommender-system, collaborative-filtering
Sacred
Sacred, developed at IDSIA, is a tool to help you configure, organize, log and reproduce experiments.
Stars: ✭ 3,678 (+3731.25%)
Mutual labels:  reproducible-research, reproducibility
Gtsummary
Presentation-Ready Data Summary and Analytic Result Tables
Stars: ✭ 450 (+368.75%)
Mutual labels:  reproducible-research, reproducibility
Consimilo
A Clojure library for querying large data-sets on similarity
Stars: ✭ 54 (-43.75%)
Mutual labels:  recommender-system, collaborative-filtering

ENMF

This is our implementation of Efficient Neural Matrix Factorization (ENMF), the basic model of the paper:

Chong Chen, Min Zhang, Chenyang Wang, Weizhi Ma, Minming Li, Yiqun Liu and Shaoping Ma. 2019. An Efficient Adaptive Transfer Neural Network for Social-aware Recommendation. In SIGIR'19.

This is also the code for the TOIS paper:

Chong Chen, Min Zhang, Yongfeng Zhang, Yiqun Liu and Shaoping Ma. 2020. Efficient Neural Matrix Factorization without Sampling for Recommendation. In TOIS Vol. 38, No. 2, Article 14.
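
At its core, ENMF optimizes a weighted regression loss over the whole user-item matrix rather than over sampled negatives; the expensive sum over all pairs collapses into a product of two d x d Gram matrices, which is where the efficiency comes from. The following is a minimal NumPy sketch of that computation for the uniform-weight case (our own illustration, not code from ENMF.py; the function and variable names are ours):

import numpy as np

def enmf_style_loss(P, Q, interactions, neg_weight):
    """Whole-data weighted MSE loss without negative sampling (sketch).

    P: (num_users, d) user embeddings
    Q: (num_items, d) item embeddings
    interactions: observed (user, item) pairs, implicit feedback r_ui = 1
    neg_weight: uniform weight on unobserved entries (cf. --negative_weight)
    """
    # All-pair term neg_weight * sum_{u,i} (p_u . q_i)^2 decomposes into a
    # product of d x d Gram matrices: O((|U| + |I|) d^2) instead of O(|U||I|d).
    loss_all = neg_weight * np.sum((P.T @ P) * (Q.T @ Q))
    # Correction on observed entries so that they get weight 1 and target 1:
    # (1 - w) * r_hat^2 - 2 * r_hat + 1 per observed pair.
    users, items = map(list, zip(*interactions))
    r_hat = np.sum(P[users] * Q[items], axis=1)
    loss_obs = np.sum((1.0 - neg_weight) * r_hat ** 2 - 2.0 * r_hat + 1.0)
    return loss_all + loss_obs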

The slides of this work have been uploaded. A Chinese-language introduction can be found at Blog, and the video presentation can be found at Demo.

Please cite our SIGIR'19 paper or TOIS paper if you use our code. Thanks!

@inproceedings{chen2019efficient,
  title={An Efficient Adaptive Transfer Neural Network for Social-aware Recommendation},
  author={Chen, Chong and Zhang, Min and Wang, Chenyang and Ma, Weizhi and Li, Minming and Liu, Yiqun and Ma, Shaoping},
  booktitle={Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval},
  pages={225--234},
  year={2019},
  organization={ACM}
}
@article{10.1145/3373807,
  author = {Chen, Chong and Zhang, Min and Zhang, Yongfeng and Liu, Yiqun and Ma, Shaoping},
  title = {Efficient Neural Matrix Factorization without Sampling for Recommendation},
  year = {2020},
  issue_date = {January 2020},
  publisher = {Association for Computing Machinery},
  volume = {38},
  number = {2},
  issn = {1046-8188},
  url = {https://doi.org/10.1145/3373807},
  doi = {10.1145/3373807},
  journal = {ACM Trans. Inf. Syst.},
  month = jan,
  articleno = {14},
  numpages = {28}
}

Author: Chong Chen ([email protected])

Environments

  • Python
  • TensorFlow
  • NumPy
  • pandas

Example to run the code

Train and evaluate the model:

python ENMF.py
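
Because the hyperparameters are declared through argparse (see the snippets in the next section), they can presumably also be overridden from the command line, e.g.:

python ENMF.py --dropout 0.7 --negative_weight 0.1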

Suggestions for parameters

Two important parameters need to be tuned for different datasets:

parser.add_argument('--dropout', type=float, default=0.7,
                        help='dropout keep_prob')
parser.add_argument('--negative_weight', type=float, default=0.1,
                        help='weight of non-observed data')
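
Note that dropout here follows the TensorFlow 1.x convention of a keep probability rather than a drop rate, as the help string suggests: a keep_prob of 0.7 means 30% of activations are dropped. A minimal illustration (assuming the TensorFlow 1.x API, which this code base targets):

import tensorflow as tf  # TensorFlow 1.x API

h = tf.placeholder(tf.float32, [None, 64])   # e.g. a batch of embeddings
# Each element is kept with probability 0.7 and scaled by 1/0.7 (inverted
# dropout). In practice keep_prob is usually itself a placeholder, so it
# can be set to 1.0 at evaluation time.
h_dropped = tf.nn.dropout(h, keep_prob=0.7)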

Specifically, we suggest tuning negative_weight over [0.001, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5]. Generally, this parameter is related to the sparsity of the dataset: the sparser the dataset, the smaller the negative_weight value that tends to perform best.
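
A simple way to run this sweep is to launch the training script once per candidate value. A sketch, assuming ENMF.py accepts the flags exactly as declared above and prints its evaluation results:

import subprocess

# Hypothetical driver for the suggested negative_weight grid search.
for w in [0.001, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5]:
    subprocess.run(["python", "ENMF.py", "--negative_weight", str(w)], check=True)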

Generally, the performance of our ENMF is better than that of existing state-of-the-art recommendation models such as NCF, ConvNCF, CMN, and NGCF. You can also contact us if you cannot tune the parameters properly.

Comparison with the most recent methods (continuously updated)

Do the "state-of-the-art" recommendation models really perform well? If you want to see more comparisons between our ENMF and any "state-of-the-art" recommendation model, feel free to open an issue.

1. LightGCN (SIGIR 2020) LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation.

To be consistent with LightGCN, we use the same evaluation metrics (i.e., Recall@20 and NDCG@20) and the same Yelp2018 data released by LightGCN (https://github.com/kuandeng/LightGCN).
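
For reference, one common per-user definition of these two metrics is sketched below (our own illustrative helper, not code from ENMF.py or LightGCN; published protocols can differ in details such as IDCG truncation):

import numpy as np

def recall_ndcg_at_k(ranked_items, test_items, k=20):
    """Recall@k and NDCG@k for a single user (illustrative).

    ranked_items: item ids sorted by predicted score, best first
    test_items:   non-empty set of held-out ground-truth items for this user
    """
    top_k = ranked_items[:k]
    hits = [1.0 if item in test_items else 0.0 for item in top_k]
    recall = sum(hits) / len(test_items)
    # Binary-relevance DCG; the ideal ranking places all test items on top.
    dcg = sum(h / np.log2(rank + 2) for rank, h in enumerate(hits))
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(min(len(test_items), k)))
    return recall, dcg / idcg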

The parameters of our ENMF on Yelp2018 are as follows:

parser.add_argument('--dropout', type=float, default=0.7,
                        help='dropout keep_prob')
parser.add_argument('--negative_weight', type=float, default=0.05,
                        help='weight of non-observed data')

Dataset: Yelp2018

Model Recall@20 NDCG@20
NGCF 0.0579 0.0477
Mult-VAE 0.0584 0.0450
GRMF 0.0571 0.0462
LightGCN 0.0649 0.0530
ENMF 0.0650 0.0515

2. NBPO (SIGIR 2020) Sampler Design for Implicit Feedback Data by Noisy-label Robust Learning.

This paper designs an adaptive sampler based on noisy-label robust learning for implicit feedback data. To be consistent with NBPO, we use the same evaluation metrics (i.e., F1@K and NDCG@K at K = 5, 10, 20) and the same Amazon-14core data released by NBPO (https://github.com/Wenhui-Yu/NBPO). For a fair comparison, we also set the embedding size to 50, as used in the NBPO work.

The parameters of our ENMF on Amazon-14core are as follows:

parser.add_argument('--dropout', type=float, default=0.2,
                        help='dropout keep_prob')
parser.add_argument('--negative_weight', type=float, default=0.2,
                        help='weight of non-observed data')

Dataset: Amazon-14core

Model F1@5 F1@10 F1@20 NDCG@5 NDCG@10 NDCG@20
BPR 0.0326 0.0317 0.0275 0.0444 0.0551 0.0680
NBPO 0.0401 0.0357 0.0313 0.0555 0.0655 0.0810
ENMF 0.0419 0.0388 0.0314 0.0566 0.0698 0.0823

3. LCFN (ICML 2020) Graph Convolutional Network for Recommendation with Low-pass Collaborative Filters

To be consistent with LCFN, we use the same evaluation metrics (i.e., F1@K and NDCG@K at K = 5, 10, 20) and the same Movielens-1m data released by LCFN (https://github.com/Wenhui-Yu/LCFN). For a fair comparison, we also set the embedding size to 128, as used in the LCFN work.

The parameters of our ENMF on Movielens-1m (ml-lcfn) are as follows:

parser.add_argument('--dropout', type=float, default=0.5,
                        help='dropout keep_prob')
parser.add_argument('--negative_weight', type=float, default=0.5,
                        help='weight of non-observed data')

Dataset: Movielens-1m (ml-lcfn)

Model F1@5 F1@10 F1@20 NDCG@5 NDCG@10 NDCG@20
GCMC 0.1166 0.1437 0.1564 0.2411 0.2361 0.2496
NGCF 0.1153 0.1425 0.1582 0.2367 0.2347 0.2511
SCF 0.1189 0.1451 0.1600 0.2419 0.2398 0.2560
CGMC 0.1179 0.1431 0.1573 0.2408 0.2372 0.2514
LCFN 0.1213 0.1482 0.1625 0.2427 0.2429 0.2603
ENMF 0.1239 0.1512 0.1640 0.2457 0.2475 0.2656

4. DHCF (KDD 2020) Dual Channel Hypergraph Collaborative Filtering

To be consistent with DHCF, we use the same evaluation metrics (i.e., Recall@20 and NDCG@20) and the same CiteUlike-A data (thanks to the authors of DHCF for kindly providing the dataset). For a fair comparison, we also set the embedding size to 64, as used in the DHCF work.

The parameters of our ENMF on CiteUlike-A are as follows:

parser.add_argument('--dropout', type=float, default=0.5,
                        help='dropout keep_prob')
parser.add_argument('--negative_weight', type=float, default=0.02,
                        help='weight of non-observed data')

Dataset: CiteUlike-A

Model Recall@20 NDCG@20
BPR 0.0330 0.0124
GCMC 0.0317 0.0103
PinSage 0.0508 0.0194
NGCF 0.0517 0.0193
DHCF 0.0635 0.0249
ENMF 0.0748 0.0280

5. SRNS (NeurIPS 2020) Simplify and Robustify Negative Sampling for Implicit Collaborative Filtering

This work proposes SRNS, a simplified and robustified negative sampling approach for implicit CF. The authors compared SRNS with our ENMF in the original paper; however, we re-ran the experiment and obtained somewhat different results.

To be consistent with SRNS, we use the same evaluation metrics (i.e., Recall@K at K = 5, 10, 20) and the same Movielens-1m data released by SRNS (https://github.com/dingjingtao/SRNS). For a fair comparison, we also set the embedding size to 32, as used in the SRNS work.

The parameters of our ENMF on Movielens-1m (ml-srns) are as follows:

parser.add_argument('--dropout', type=float, default=0.9,
                        help='dropout keep_prob')
parser.add_argument('--negative_weight', type=float, default=0.3,
                        help='weight of non-observed data')

Dataset: Movielens-1m (ml-srns)

Model Recall@5 Recall@10 Recall@20
Uniform 0.1744 0.2846 0.3663
NNCF 0.0831 0.1428 0.1873
AOBPR 0.1782 0.2907 0.3749
IRGAN 0.1763 0.2878 0.3706
RNS-AS 0.1810 0.2950 0.3801
AdvIR 0.1792 0.2889 0.3699
ENMF (reported in the SRNS paper) 0.1846 0.2970 0.3804
SRNS 0.1911 0.3056 0.3907
ENMF (ours) 0.1917 0.3124 0.4016