
jinze1994 / Atrank

Licence: apache-2.0
An Attention-Based User Behavior Modeling Framework for Recommendation

Programming Languages

python

Projects that are alternatives to or similar to Atrank

JD2Skills-BERT-XMLC
Code and Dataset for the Bhola et al. (2020) Retrieving Skills from Job Descriptions: A Language Model Based Extreme Multi-label Classification Framework
Stars: ✭ 33 (-90.12%)
Mutual labels:  recommendation-system
deep recommenders
Deep Recommenders
Stars: ✭ 214 (-35.93%)
Mutual labels:  recommendation-system
Recdb Postgresql
RecDB is a recommendation engine built entirely inside PostgreSQL
Stars: ✭ 297 (-11.08%)
Mutual labels:  recommendation-system
TIFUKNN
kNN-based next-basket recommendation
Stars: ✭ 38 (-88.62%)
Mutual labels:  recommendation-system
WhySoMuch
knowledge graph recommendation
Stars: ✭ 67 (-79.94%)
Mutual labels:  recommendation-system
Deep-Learning-Model-for-Hybrid-Recommendation-Engine
A Hybrid recommendation engine built on deep learning architecture, which has the potential to combine content-based and collaborative filtering recommendation mechanisms using a deep learning supervisor
Stars: ✭ 19 (-94.31%)
Mutual labels:  recommendation-system
Recommendation-system
Collected notes and resources on recommender systems / Everything about Recommendation System. Topics / Books / Papers / Products / Demos
Stars: ✭ 169 (-49.4%)
Mutual labels:  recommendation-system
Reco Gym
Code for reco-gym: A Reinforcement Learning Environment for the problem of Product Recommendation in Online Advertising
Stars: ✭ 314 (-5.99%)
Mutual labels:  recommendation-system
Knowledge Graph based Intent Network
Learning Intents behind Interactions with Knowledge Graph for Recommendation, WWW2021
Stars: ✭ 116 (-65.27%)
Mutual labels:  recommendation-system
Recommend
recommendation system with python
Stars: ✭ 284 (-14.97%)
Mutual labels:  recommendation-system
toptal-recommengine
Prototype recommendation engine built to accompany an article on Toptal Blog
Stars: ✭ 109 (-67.37%)
Mutual labels:  recommendation-system
yelper recommendation system
Yelper recommendation system
Stars: ✭ 117 (-64.97%)
Mutual labels:  recommendation-system
recommender
NReco Recommender is a .NET port of Apache Mahout CF java engine (standalone, non-Hadoop version)
Stars: ✭ 35 (-89.52%)
Mutual labels:  recommendation-system
intergo
A package for interleaving / multileaving ranking generation in go
Stars: ✭ 30 (-91.02%)
Mutual labels:  recommendation-system
Cornac
A Comparative Framework for Multimodal Recommender Systems
Stars: ✭ 308 (-7.78%)
Mutual labels:  recommendation-system
listenbrainz-labs
A collection tools/scripts to explore the ListenBrainz data using Apache Spark.
Stars: ✭ 16 (-95.21%)
Mutual labels:  recommendation-system
Context-Aware-Recommender
Hybrid Recommender System
Stars: ✭ 16 (-95.21%)
Mutual labels:  recommendation-system
Caserecommender
Case Recommender: A Flexible and Extensible Python Framework for Recommender Systems
Stars: ✭ 318 (-4.79%)
Mutual labels:  recommendation-system
Recsys
A code implementation of Liang Xiang's book Recommender System Practice (《推荐系统实践》)
Stars: ✭ 306 (-8.38%)
Mutual labels:  recommendation-system
SLIM-recommendation
A simple recommendation evaluation system, the algorithm includes SLIM, LFM, ItemCF, UserCF
Stars: ✭ 39 (-88.32%)
Mutual labels:  recommendation-system

ATRank

An Attention-Based User Behavior Modeling Framework for Recommendation

Introduction

This is an implementation of the paper ATRank: An Attention-Based User Behavior Modeling Framework for Recommendation by Chang Zhou, Jinze Bai, Junshuai Song, Xiaofei Liu, Zhengchao Zhao, Xiusi Chen, and Jun Gao, AAAI 2018.

Bibtex:

@inproceedings{zhou2018atrank,
  author    = {Chang Zhou and Jinze Bai and Junshuai Song and Xiaofei Liu and Zhengchao Zhao and Xiusi Chen and Jun Gao},
  title     = {ATRank: An Attention-Based User Behavior Modeling Framework for Recommendation},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018}
}

This repository also contains all the competitors' methods mentioned in the paper. Some of the implementations draw on the Transformer and Text-CNN.
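
As a rough illustration of the attention mechanism this framework builds on, the sketch below implements plain scaled dot-product attention over a user's behavior embeddings in NumPy. It is a simplified stand-in for the TensorFlow blocks in the repository; the shapes and masking convention here are assumptions, not the actual model code.

import numpy as np

def scaled_dot_product_attention(queries, keys, values, mask=None):
    """Simplified attention over behavior embeddings (shapes are assumptions).

    queries: [batch, num_queries, dim]
    keys/values: [batch, seq_len, dim]
    mask: [batch, num_queries, seq_len], 1 for valid behaviors, 0 for padding
    """
    dim = queries.shape[-1]
    # Similarity between every query and every historical behavior.
    scores = queries @ keys.transpose(0, 2, 1) / np.sqrt(dim)
    if mask is not None:
        # Padding positions get a large negative score so softmax ignores them.
        scores = np.where(mask.astype(bool), scores, -1e9)
    # Softmax over the behavior axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum of behavior embeddings.
    return weights @ values

# Toy usage: one user, 5 historical behaviors, 32-dim embeddings,
# with the candidate item embedding acting as the single query.
history = np.random.randn(1, 5, 32)
candidate = np.random.randn(1, 1, 32)
print(scaled_dot_product_attention(candidate, history, history).shape)  # (1, 1, 32)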

Note that the heterogeneous behavior dataset used in the paper is private, so you cannot run the multi-behavior code directly. You can, however, run the code on the Amazon dataset directly and review the heterogeneous behavior code.

Requirements

  • Python >= 3.6.1
  • NumPy >= 1.12.1
  • Pandas >= 0.20.1
  • TensorFlow >= 1.4.0 (earlier versions may also work, but they have not been tested)
  • GPU with >= 10 GB of memory
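
If you want to sanity-check the environment before running anything, a small hypothetical helper like the one below prints the installed versions and whether TensorFlow sees a GPU; the version comments simply mirror the list above.

# check_env.py -- a hypothetical helper, not part of the repository.
import sys
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.python.client import device_lib

print("Python    :", sys.version.split()[0])  # expect >= 3.6.1
print("NumPy     :", np.__version__)          # expect >= 1.12.1
print("Pandas    :", pd.__version__)          # expect >= 0.20.1
print("TensorFlow:", tf.__version__)          # expect >= 1.4.0

# TF 1.x way of listing visible devices; count the GPUs among them.
gpus = [d for d in device_lib.list_local_devices() if d.device_type == "GPU"]
print("GPU devices found:", len(gpus))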

Download dataset and preprocess

  • Step 1: Download the Electronics category of the Amazon product dataset, which has 498,196 products and 7,824,482 records, and extract it to the raw_data/ folder.
mkdir raw_data/;
cd utils;
bash 0_download_raw.sh;
  • Step 2: Convert the raw data to pandas DataFrames and remap the categorical IDs (a minimal sketch of the remapping idea follows these commands).
python 1_convert_pd.py;
python 2_remap_id.py
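
The remapping step essentially replaces raw string IDs with contiguous integer indices so they can be fed to embedding lookups. A minimal pandas sketch of that idea is below; the column names reviewerID and asin follow the Amazon review format, but the actual schema handled by 2_remap_id.py may differ.

import pandas as pd

def build_map(df, col):
    # Map each distinct raw ID in `col` to a contiguous integer 0..N-1.
    keys = sorted(df[col].unique().tolist())
    key_to_idx = dict(zip(keys, range(len(keys))))
    df[col] = df[col].map(key_to_idx)
    return key_to_idx

reviews = pd.DataFrame({
    'reviewerID': ['A2XY', 'A2XY', 'B7QZ'],
    'asin':       ['item9', 'item3', 'item9'],
})
user_map = build_map(reviews, 'reviewerID')
item_map = build_map(reviews, 'asin')
print(reviews)                       # both columns now hold small integer IDs
print(len(user_map), len(item_map))  # 2 users, 2 items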

Training and Evaluation

This implementation contains not only the ATRank method but also all the competitors' methods, including BPR, CNN, RNN and RNN+Attention. The training procedure for every method is as follows:

  • Step 1: Choose a method and enter the folder.
cd atrank;

Alternatively, you can run the other competitors' methods by entering the corresponding folder (cd bpr, cd cnn, cd rnn, or cd rnn_att) and following the same instructions below.

Note that the heterogeneous behavior dataset used in the paper is private, so you cannot run this part of the code directly. However, you can review the neural network code used in the paper with cd multi.

  • Step 2: Build the dataset adapted to the chosen method (a rough sketch of the idea follows this list).
python build_dataset.py
  • Step 3: Start training and evaluation with the default arguments in background mode.
python train.py >log.txt 2>&1 &
  • Step 4: Check the training and evaluation progress.
tail -f log.txt
tensorboard --logdir=save_path
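
For intuition only: preprocessing for next-item prediction of this kind typically turns each user's time-ordered behavior list into (history, positive item, sampled negative item) examples, holding out the last interaction for testing. The sketch below shows that general idea; it is not the repository's build_dataset.py logic.

import random

def build_examples(user_histories, num_items, seed=1234):
    # user_histories: {user_id: [item_id, ...]} in chronological order.
    random.seed(seed)
    train, test = [], []
    for user, items in user_histories.items():
        for i in range(1, len(items)):
            pos = items[i]
            # Sample a negative item the user has not interacted with.
            neg = random.randint(0, num_items - 1)
            while neg in items:
                neg = random.randint(0, num_items - 1)
            example = (user, items[:i], pos, neg)
            # The last interaction of each user goes to the test split.
            (test if i == len(items) - 1 else train).append(example)
    return train, test

histories = {0: [5, 2, 9, 1], 1: [3, 7]}
train, test = build_examples(histories, num_items=10)
print(len(train), len(test))  # 2 training examples, 2 test examples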

Note that the evaluation procedure alternates with the training procedure, so running the command above may take five to ten hours to converge completely, depending on the method. If you need to kill the job immediately:

nvidia-smi  # Find the PID of the current training process.
kill -9 PID # Kill the target process (replace PID with the value found above).

You can change the training and network hyperparameters through command-line arguments, for example python train.py --learning_rate=0.1. To see all command-line arguments, run python train.py --help.
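
For reference, the sketch below shows a generic way such command-line hyperparameters are wired up with argparse; the real train.py may define its flags differently (for example through TensorFlow's flags module), and the names and defaults here are illustrative assumptions.

import argparse

# Generic hyperparameter flags (names and defaults are illustrative only).
parser = argparse.ArgumentParser(description='Train a user behavior model.')
parser.add_argument('--learning_rate', type=float, default=1.0,
                    help='initial learning rate')
parser.add_argument('--train_batch_size', type=int, default=32,
                    help='mini-batch size used for training')
parser.add_argument('--max_epochs', type=int, default=20,
                    help='number of passes over the training data')
args = parser.parse_args()
print(args.learning_rate, args.train_batch_size, args.max_epochs)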

Results

You can always run tensorboard --logdir=save_path to view the AUC curve and inspect the various embedding histograms. The collected AUC curve on the test set is shown below.

(Figure: AUC curve on the test set)
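
If you prefer to pull the raw numbers out of the event files instead of reading them off the TensorBoard UI, something like the sketch below works with the tensorboard package; the scalar tag name 'test_auc' is an assumption, so replace it with whichever tag actually appears in your scalar list.

from tensorboard.backend.event_processing import event_accumulator

# Load scalars written to the save_path event directory.
ea = event_accumulator.EventAccumulator('save_path')
ea.Reload()
print(ea.Tags()['scalars'])           # list all available scalar tags
for event in ea.Scalars('test_auc'):  # assumed tag name, see note above
    print(event.step, event.value)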