
sh0416 / bpr

License: GPL-3.0
Bayesian Personalized Ranking using PyTorch

Programming Languages

Python
139335 projects - #7 most used programming language
C++
36643 projects - #6 most used programming language
CUDA
1817 projects

Projects that are alternatives of or similar to bpr

RecSys Course 2017
DEPRECATED This is the official repository for the 2017 Recommender Systems course at Polimi.
Stars: ✭ 23 (-78.1%)
Mutual labels:  bpr, recommender-system
Causal Reading Group
We will keep updating the paper list about machine learning + causal theory. We also internally discuss related papers between NExT++ (NUS) and LDS (USTC) by week.
Stars: ✭ 339 (+222.86%)
Mutual labels:  recommender-system
RecSys PyTorch
PyTorch implementations of Top-N recommendation, collaborative filtering recommenders.
Stars: ✭ 125 (+19.05%)
Mutual labels:  recommender-system
Neural Bayesian Personalized Ranking
Representation Learning and Pairwise Ranking for Implicit Feedback in Top-N Item Recommendation
Stars: ✭ 23 (-78.1%)
Mutual labels:  recommender-system
mildnet
Visual Similarity research at Fynd. Contains code to reproduce 2 of our research papers.
Stars: ✭ 76 (-27.62%)
Mutual labels:  recommender-system
KG4Rec
Knowledge-aware recommendation papers.
Stars: ✭ 76 (-27.62%)
Mutual labels:  recommender-system
recommender system with Python
recommender system tutorial with Python
Stars: ✭ 106 (+0.95%)
Mutual labels:  recommender-system
recsys2019
The complete code and notebooks used for the ACM Recommender Systems Challenge 2019
Stars: ✭ 26 (-75.24%)
Mutual labels:  recommender-system
online-course-recommendation-system
Built on data from Pluralsight's course API fetched results. Works with model trained with K-means unsupervised clustering algorithm.
Stars: ✭ 31 (-70.48%)
Mutual labels:  recommender-system
chainRec
Mengting Wan, Julian McAuley, "Item Recommendation on Monotonic Behavior Chains", in Proc. of 2018 ACM Conference on Recommender Systems (RecSys'18), Vancouver, Canada, Oct. 2018.
Stars: ✭ 52 (-50.48%)
Mutual labels:  recommender-system
adversarial-recommender-systems-survey
The goal of this survey is two-fold: (i) to present recent advances on adversarial machine learning (AML) for the security of RS (i.e., attacking and defense recommendation models), (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative applications, thanks to their ability for learning (high-…
Stars: ✭ 110 (+4.76%)
Mutual labels:  recommender-system
recoreco
Fast item-to-item recommendations on the command line.
Stars: ✭ 33 (-68.57%)
Mutual labels:  recommender-system
STACP
Joint Geographical and Temporal Modeling based on Matrix Factorization for Point-of-Interest Recommendation - ECIR 2020
Stars: ✭ 19 (-81.9%)
Mutual labels:  recommender-system
SASRec.pytorch
PyTorch(1.6+) implementation of https://github.com/kang205/SASRec
Stars: ✭ 137 (+30.48%)
Mutual labels:  recommender-system
TIFUKNN
kNN-based next-basket recommendation
Stars: ✭ 38 (-63.81%)
Mutual labels:  recommender-system
auction-website
🏷️ An e-commerce marketplace template. An online auction and shopping website for buying and selling a wide variety of goods and services worldwide.
Stars: ✭ 44 (-58.1%)
Mutual labels:  recommender-system
recsim ng
RecSim NG: Toward Principled Uncertainty Modeling for Recommender Ecosystems
Stars: ✭ 106 (+0.95%)
Mutual labels:  recommender-system
Course-Recommendation-System
A system that will help in a personalized recommendation of courses for an upcoming semester based on the performance of previous semesters.
Stars: ✭ 14 (-86.67%)
Mutual labels:  recommender-system
Awesome-Machine-Learning-Papers
📖Notes and remarks on Machine Learning related papers
Stars: ✭ 35 (-66.67%)
Mutual labels:  recommender-system
Yue
A python library for music recommendation
Stars: ✭ 88 (-16.19%)
Mutual labels:  recommender-system

Bayesian Personalized Ranking from Implicit Feedback


This repository implements Bayesian Personalized Ranking (BPR) using PyTorch (https://arxiv.org/pdf/1205.2618).
Other repositories also implement this model, but their evaluation takes a long time.
So, I implemented this model in PyTorch with GPU acceleration for evaluation.
Implementation details are explained in the following sections.
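
As a rough illustration of why GPU acceleration helps, precision@k and recall@k for all users can be computed with a few batched tensor operations instead of per-user Python loops. The function below is only a sketch of that idea, not the code in this repository; the tensor names (scores, train_mask, test_mask) are assumptions.

import torch

def precision_recall_at_k(scores, train_mask, test_mask, k=10):
    """Batched P@k / R@k for all users at once.

    scores:     (num_users, num_items) predicted scores from the MF model.
    train_mask: (num_users, num_items) bool, True for items seen in training.
    test_mask:  (num_users, num_items) bool, True for held-out test items.
    """
    # Exclude training items from the ranking.
    scores = scores.masked_fill(train_mask, float('-inf'))
    # Top-k item indices per user, computed on the GPU in one call.
    _, topk = scores.topk(k, dim=1)
    # hits[u, j] is True if the j-th recommended item of user u is a test item.
    hits = test_mask.gather(1, topk).float()
    precision = hits.sum(dim=1) / k
    recall = hits.sum(dim=1) / test_mask.float().sum(dim=1).clamp(min=1)
    return precision.mean().item(), recall.mean().item()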

Environment

Hardware

  • AMD Ryzen 7 3700X 8-Core Processor
  • Samsung DDR4 32GB
  • NVIDIA TitanXp

Software

OS

I use both Windows and Linux (Ubuntu).

Python package

You have to install the following packages before executing this code.

  • python==3.6
  • pytorch==1.3.1
  • numpy==1.15.4
  • pandas==0.23.4

You can install these packages by executing the following command or through Anaconda.

pip install -r requirements.txt
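
For reference, a requirements.txt matching the versions above would look roughly like the lines below (note that the PyPI package name for PyTorch is torch); the actual file in the repository may differ.

torch==1.3.1
numpy==1.15.4
pandas==0.23.4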

Usage

0. Prepare data

This code supports the MovieLens 1M and MovieLens 20M datasets. You can download them from the MovieLens website (https://grouplens.org/datasets/movielens/).

After downloading the file, unzip it.
We refer to the path of the unzipped directory as $data_dir.

1. Preprocess data

For basic usage, execute the following command line to preprocess the data. It randomly splits the whole dataset into two parts: training data and test data.

python preprocess.py --dataset ml-1m --data_dir $data_dir --output_data preprocessed/ml-1m.pickle
python preprocess.py --dataset ml-20m --data_dir $data_dir --output_data preprocessed/ml-20m.pickle
python preprocess.py --dataset amazon-beauty --data_dir $data_dir --output_data preprocessed/amazon-beauty.pickle

If you want to split the training and test data in time order, execute the following command line instead. This code sorts each user's item list by time and then splits it into two parts: the first 80% of the item list becomes training data and the last 20% becomes test data (a rough sketch of this split follows the commands below).

python preprocess.py --dataset ml-1m --output_data preprocessed/ml-1m.pickle --time_order
python preprocess.py --dataset ml-20m --output_data preprocessed/ml-20m.pickle --time_order
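
The per-user time-ordered split described above can be sketched roughly as follows. The column names (user, timestamp) and the 80/20 ratio are taken from the description; everything else is an assumption about the preprocessing script, not its actual code.

import pandas as pd

def time_order_split(df, test_ratio=0.2):
    """Per-user split: the most recent 20% of each user's items become
    test data, the earlier 80% become training data."""
    df = df.sort_values(['user', 'timestamp'])
    train_parts, test_parts = [], []
    for _, items in df.groupby('user'):
        cut = int(len(items) * (1 - test_ratio))
        train_parts.append(items.iloc[:cut])
        test_parts.append(items.iloc[cut:])
    return pd.concat(train_parts), pd.concat(test_parts)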

The help message gives a more detailed description of the arguments.

python preprocess.py --help

2. Training MF model using BPR-OPT

Now, for the real show, let's train the MF model using the BPR-OPT loss. You can execute the following commands to train the MF model with BPR-OPT.

python train.py --data preprocessed/ml-1m.pickle
python train.py --data preprocessed/ml-20m.pickle

The help message gives a more detailed description of the arguments. You can train the MF model with different hyperparameters.

python train.py --help
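
For reference, the BPR-OPT objective from the paper maximizes ln σ(x̂_uij) with L2 regularization, where x̂_uij is the score difference between a positive item i and a sampled negative item j for user u. The snippet below is a minimal matrix-factorization sketch of one training step under that objective; it is not the code in train.py, and all names, sizes, and hyperparameters are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BPRMF(nn.Module):
    """Matrix factorization trained with the BPR-OPT loss."""
    def __init__(self, num_users, num_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        nn.init.normal_(self.user_emb.weight, std=0.01)
        nn.init.normal_(self.item_emb.weight, std=0.01)

    def forward(self, u, i, j):
        # x_uij = <p_u, q_i> - <p_u, q_j>
        p_u = self.user_emb(u)
        x_ui = (p_u * self.item_emb(i)).sum(dim=1)
        x_uj = (p_u * self.item_emb(j)).sum(dim=1)
        return x_ui - x_uj

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = BPRMF(num_users=6040, num_items=3706).to(device)  # illustrative sizes
# weight_decay provides the L2 regularization term of BPR-OPT.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

def train_step(u, i, j):
    # Maximizing ln sigma(x_uij) is the same as minimizing softplus(-x_uij).
    x_uij = model(u, i, j)
    loss = F.softplus(-x_uij).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()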

Implementation detail

Result

The evaluation benchmarks for MovieLens-1M and MovieLens-20M are shown in the following table. I think more tuning would produce better results, but these values are reasonably close to the expected statistics. I got some very strange statistics when training on MovieLens-1M, so I need to check my evaluation function more rigorously.

Dataset        Preprocess  P@1     P@5     P@10    R@1     R@5     R@10
MovieLens-1M   Random      0.3881  0.2987  0.2683  0.0178  0.0616  0.1018
MovieLens-1M   Time-order  0.1588  0.1348  0.1297  0.0071  0.0294  0.0519
MovieLens-20M  Random      0.2359  0.1790  0.1529  0.0118  0.0395  0.0652
MovieLens-20M  Time-order  0.1070  0.0887  0.0809  0.0059  0.0237  0.0431

Training loss curve

MovieLens 1M

[Training loss curves for MovieLens-1M, random and time-order splits]

Evaluation metric curve

MovieLens 1M

[Evaluation metric curves for MovieLens-1M, random and time-order splits]

More information can be found in the result directory.

FAQ

Continuous integration (Travis CI)

I use the pytest framework to make my functions reliable. The command for testing is pytest. You can find some useful code snippets in the tests directory. As I could not find a continuous integration service that freely supports GPUs, I could not test the CUDA code I implemented. So, for stable execution, run the tests manually with pytest to check that everything works in your environment.
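
As an example of the kind of test that fits this setup, the snippet below checks a precision@k computation against a hand-worked case. precision_at_k here is a hypothetical helper written for illustration, not necessarily a function in this repository.

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / k

def test_precision_at_k():
    recommended = [3, 1, 4, 2, 5]
    relevant = {1, 5}
    # The top-3 recommendations are [3, 1, 4]; only item 1 is relevant.
    assert precision_at_k(recommended, relevant, k=3) == 1 / 3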

Laboratory (Experimental development)

Brand new data structure VariableShapeList

I am working on a more elaborate approach to computing the evaluation metrics. For now, I have developed VariableShapeList, which can handle a list of tensors with different lengths. One might say it is equivalent to PackedSequence, which is already implemented in PyTorch, but I cannot use that data structure for the evaluation metrics. Some operations are implemented directly as C++ functions and will later be implemented as CUDA kernel functions.
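
The actual VariableShapeList API is not documented here, but a common way to represent a list of variable-length tensors on the GPU is a flat value tensor plus an offsets tensor. The sketch below shows that general idea only; it is an assumption about the design, not the real implementation.

import torch

class FlatList:
    """A list of 1-D tensors with different lengths, stored as one flat
    value tensor plus per-entry offsets (similar in spirit to VariableShapeList)."""
    def __init__(self, tensors):
        lengths = torch.tensor([t.numel() for t in tensors])
        self.offsets = torch.cat([torch.zeros(1, dtype=torch.long),
                                  lengths.cumsum(0)])
        self.values = torch.cat(tensors)

    def __getitem__(self, idx):
        return self.values[self.offsets[idx]:self.offsets[idx + 1]]

    def __len__(self):
        return self.offsets.numel() - 1

# Example: per-user test item lists of different lengths.
lists = FlatList([torch.tensor([1, 2, 3]), torch.tensor([7]), torch.tensor([4, 9])])
print(lists[1])  # tensor([7])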

CPP Build tools (Optional)

For Windows, the Visual Studio Build Tools are needed to compile the C++ extension. Install them from the Visual Studio downloads page.
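
For reference, a PyTorch C++ extension is typically built with torch.utils.cpp_extension. The setup.py sketch below assumes a hypothetical source file csrc/variable_shape_list.cpp and is not the build script of this repository.

from setuptools import setup
from torch.utils.cpp_extension import CppExtension, BuildExtension

setup(
    name='variable_shape_list',
    ext_modules=[
        # Hypothetical C++ source; replace with the real extension source file.
        CppExtension('variable_shape_list', ['csrc/variable_shape_list.cpp']),
    ],
    cmdclass={'build_ext': BuildExtension},
)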

Use IterableDataset for delivering fast data structure

I found that the setup time of the multiprocessing DataLoader was a major bottleneck in my training script. Therefore, I refactored my dataset using IterableDataset and got roughly a 10x speedup over the existing implementation. This implementation still needs to be tested.
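
A rough sketch of what an IterableDataset-based triplet stream could look like is shown below; it is an assumption about the refactoring described above, not the actual implementation.

import random
from torch.utils.data import IterableDataset, DataLoader

class TripletStream(IterableDataset):
    """Streams (user, positive item, negative item) triplets without the
    per-epoch indexing setup that a map-style Dataset needs."""
    def __init__(self, user_pos, num_items, num_samples):
        self.user_pos = user_pos          # dict: user -> set of positive items
        self.users = list(user_pos)
        self.num_items = num_items
        self.num_samples = num_samples

    def __iter__(self):
        for _ in range(self.num_samples):
            u = random.choice(self.users)
            i = random.choice(tuple(self.user_pos[u]))
            j = random.randrange(self.num_items)
            while j in self.user_pos[u]:  # resample until j is a true negative
                j = random.randrange(self.num_items)
            yield u, i, j

# Usage: DataLoader collates the streamed triplets into batched tensors.
stream = TripletStream({0: {1, 2}, 1: {0}}, num_items=5, num_samples=8)
loader = DataLoader(stream, batch_size=4)
for users, pos, neg in loader:
    print(users, pos, neg)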

Performance optimization

The larger batch size and speed optimizations improve the evaluation metrics. I will update all statistics for MovieLens-1M and MovieLens-20M.

Contact

If you have any problems or encounter anything mysterious while running this code, open an issue or contact me by email at [email protected].
