iemre / Mrsr

MRSR (MATLAB Recommender Systems Research) is a software framework for evaluating collaborative filtering recommender systems in MATLAB.


Projects that are alternatives to or similar to Mrsr

Polara
Recommender system and evaluation framework for top-n recommendations tasks that respects polarity of feedbacks. Fast, flexible and easy to use. Written in python, boosted by scientific python stack.
Stars: ✭ 205 (+1476.92%)
Mutual labels:  collaborative-filtering, matrix-factorization, evaluation
matrix-completion
Lightweight Python library for in-memory matrix completion.
Stars: ✭ 94 (+623.08%)
Mutual labels:  collaborative-filtering, matrix-factorization
Implicit
Fast Python Collaborative Filtering for Implicit Feedback Datasets
Stars: ✭ 2,569 (+19661.54%)
Mutual labels:  collaborative-filtering, matrix-factorization
recommender system with Python
recommender system tutorial with Python
Stars: ✭ 106 (+715.38%)
Mutual labels:  collaborative-filtering, matrix-factorization
Recotour
A tour through recommendation algorithms in python [IN PROGRESS]
Stars: ✭ 140 (+976.92%)
Mutual labels:  collaborative-filtering, matrix-factorization
Rsparse
Fast and accurate machine learning on sparse matrices - matrix factorizations, regression, classification, top-N recommendations.
Stars: ✭ 145 (+1015.38%)
Mutual labels:  collaborative-filtering, matrix-factorization
Recommendation.jl
Building recommender systems in Julia
Stars: ✭ 42 (+223.08%)
Mutual labels:  collaborative-filtering, matrix-factorization
Elliot
Comprehensive and Rigorous Framework for Reproducible Recommender Systems Evaluation
Stars: ✭ 49 (+276.92%)
Mutual labels:  collaborative-filtering, matrix-factorization
Awesome-Machine-Learning-Papers
📖Notes and remarks on Machine Learning related papers
Stars: ✭ 35 (+169.23%)
Mutual labels:  collaborative-filtering, matrix-factorization
Neural Collaborative Filtering
pytorch version of neural collaborative filtering
Stars: ✭ 263 (+1923.08%)
Mutual labels:  collaborative-filtering, matrix-factorization
Librec
LibRec: A Leading Java Library for Recommender Systems, see
Stars: ✭ 3,045 (+23323.08%)
Mutual labels:  collaborative-filtering, matrix-factorization
Rectorch
rectorch is a pytorch-based framework for state-of-the-art top-N recommendation
Stars: ✭ 121 (+830.77%)
Mutual labels:  collaborative-filtering, matrix-factorization
Metarec
PyTorch Implementations For A Series Of Deep Learning-Based Recommendation Models (IN PROGRESS)
Stars: ✭ 120 (+823.08%)
Mutual labels:  collaborative-filtering, matrix-factorization
Enmf
This is our implementation of ENMF: Efficient Neural Matrix Factorization (TOIS. 38, 2020). This also provides a fair evaluation of existing state-of-the-art recommendation models.
Stars: ✭ 96 (+638.46%)
Mutual labels:  collaborative-filtering, evaluation
Cornac
A Comparative Framework for Multimodal Recommender Systems
Stars: ✭ 308 (+2269.23%)
Mutual labels:  collaborative-filtering, matrix-factorization
Quick-Data-Science-Experiments-2017
Quick-Data-Science-Experiments
Stars: ✭ 19 (+46.15%)
Mutual labels:  collaborative-filtering, matrix-factorization
Deeprec
An Open-source Toolkit for Deep Learning based Recommendation with Tensorflow.
Stars: ✭ 954 (+7238.46%)
Mutual labels:  collaborative-filtering, matrix-factorization
Recoder
Large scale training of factorization models for Collaborative Filtering with PyTorch
Stars: ✭ 46 (+253.85%)
Mutual labels:  collaborative-filtering, matrix-factorization
Recommendation-System-Baseline
Some common recommendation system baseline, with description and link.
Stars: ✭ 34 (+161.54%)
Mutual labels:  collaborative-filtering, matrix-factorization
Daisyrec
A developing recommender system in pytorch. Algorithm: KNN, LFM, SLIM, NeuMF, FM, DeepFM, VAE and so on, which aims to fair comparison for recommender system benchmarks
Stars: ✭ 280 (+2053.85%)
Mutual labels:  collaborative-filtering, matrix-factorization

Please cite the original paper (https://link.springer.com/article/10.1007/s10115-018-1157-2) if you use this software in your work.

This repository contains MATLAB code used for the experiments in the M.Sc. thesis "A Recommender System Based on Sparse Dictionary Coding" by Ismail Emre Kartoglu (King's College London, 2014).

MRSR is a set of MATLAB classes for recommender systems research. The idea is to gather recommender system algorithms in one place and make reliable comparisons by separating the evaluation logic from the recommendation logic: the same evaluation logic is shared by all recommendation algorithms, so the user only implements the recommendation logic and the framework handles the evaluation. Users can test their own algorithm by inheriting from the AbstractExperiment class and implementing its abstract methods. Example use cases are described below.

License

MATLAB Recommender System Research Software Copyright (C) 2014 Ismail Emre Kartoglu

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

Recommender systems out of the box

  1. A predictive recommender based on sparse dictionary coding.

  2. A top-n recommender based on sparse dictionary coding.

  3. A predictive recommender based on k-NN.

  4. A top-n recommender based on k-NN.

  5. A predictive recommender based on the matrix factorisation method introduced by Koren et al.

  6. A random recommender (to check a given recommender does not perform worse than a random recommender!).

  7. MaxF top-n recommender (simply recommends the top-hit items to every user, works surprisingly well on some metrics).

Important notes:

  1. To be able to run a sparse coder experiment, the user must download the sparse coders encapsulated by the SparseCoder.m file.

    The user can download an implementation of the PC/BC-DIM algorithm using this link: http://www.inf.kcl.ac.uk/staff/mike/Code/sparse_classification.zip

    The SolveDALM.m, SolveFISTA.m, SolveOMP.m, SolvePFP.m, and SolveSpaRSA.m files can be downloaded from the following web pages (as of 24/08/2014):

    http://www.eecs.berkeley.edu/~yang/software/l1benchmark/
    http://sparselab.stanford.edu/

    Update: PC/BC-DIM and PFP algorithms are now included in this project.

  2. To run the unit tests, run the "runalltests.m" MATLAB file. MATLAB's xUnit unit test library might need to be installed.

  3. The program code assumes that 0 represents a missing value in the user-item matrix (see the short sketch below).
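
    As a quick illustration of this convention, the sketch below uses only standard MATLAB functions; the matrix and variable names are illustrative and not part of MRSR:

    % Minimal sketch of the 0-as-missing convention (illustrative values).
    userItemMatrix = [5 0 3; 0 0 4; 1 2 0];                  % 0 means "not rated"
    numberOfObservedRatings = nnz(userItemMatrix)             % counts the non-zero (observed) entries
    observedMask = userItemMatrix ~= 0;                       % logical mask of observed ratings
    meanObservedRating = mean(userItemMatrix(observedMask))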

Example 1 - Using the item-based KNN recommender:

test = ItemBasedKNN.createNewWithDatasets(baseSet, testSet)
test.k = 10
test.setSimilarityCalculatorTo(Similarity.COSINE);
test.calculatePredictiveAccuracy; % calculate MAE and RMSE
numberOfUsers = 943;
test.showPrecisionAndRecall(10, [1:numberOfUsers]);

Here, baseSet is the training user-item matrix and testSet is the test user-item matrix, containing the ratings that were removed (held out) from the training set.
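
For example, a simple random hold-out split can be built with standard MATLAB functions as in the sketch below. The synthetic ratings matrix, its MovieLens-100k-like dimensions, and the 20% hold-out ratio are illustrative assumptions, not part of MRSR:

% Minimal sketch: build a ratings matrix and hold out roughly 20% of the
% observed ratings as the test set. Replace the synthetic 'ratings' matrix
% with a real dataset such as MovieLens 100k.
ratings = zeros(943, 1682);                       % 0 marks a missing value
observed = randperm(numel(ratings), 100000);      % pick 100,000 random cells
ratings(observed) = randi(5, 1, 100000);          % fill them with ratings 1..5

baseSet = ratings;
testSet = zeros(size(ratings));
heldOut = observed(randperm(numel(observed), round(0.2 * numel(observed))));
testSet(heldOut) = ratings(heldOut);              % held-out ratings form the test set
baseSet(heldOut) = 0;                             % and are marked missing in the base set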

Example 2 - Using the item-based sparse coding recommender:

test = ItemBasedSparseCoderExperiment.createItemBasedExperiment(baseSet, testSet)
test.calculatePredictiveAccuracy; % calculate MAE and RMSE
numberOfUsers = 943;
test.showPrecisionAndRecall(10, [1:numberOfUsers]) 

% The above code will print the results.
% However, the user might want to access the results as follows:

mae = test.result.MAE
rmse = test.result.RMSE
cpp = test.result.cppRate
recall = test.result.recall
precision = test.result.precision
f1 = test.result.f1
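
Because every experiment shares the same evaluation logic, results from different recommenders can be compared directly. The sketch below uses only the classes and fields shown above and assumes that calculatePredictiveAccuracy populates the result struct, as in the examples:

% Compare the item-based KNN and sparse coding recommenders on the same split.
knn = ItemBasedKNN.createNewWithDatasets(baseSet, testSet);
knn.k = 10;
knn.setSimilarityCalculatorTo(Similarity.COSINE);
knn.calculatePredictiveAccuracy;

sparseCoder = ItemBasedSparseCoderExperiment.createItemBasedExperiment(baseSet, testSet);
sparseCoder.calculatePredictiveAccuracy;

fprintf('KNN          MAE = %.4f, RMSE = %.4f\n', knn.result.MAE, knn.result.RMSE);
fprintf('Sparse coder MAE = %.4f, RMSE = %.4f\n', sparseCoder.result.MAE, sparseCoder.result.RMSE);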

To measure the performance of the user's own algorithm:

Create a class that inherits from the AbstractExperiment class and implement the following abstract methods of AbstractExperiment:

       % Generate a top-n list for the given user. The list may contain
       % an item that is already rated in the base (training) set.
       topNList = generateTopNListForUser(obj, n, userIndex); 
       
       % Generate a top-n list for the given user. The list may contain
       % only unrated items (items that are not rated in the base (training) set).
       topNList = generateTopNListForTestSetForUser(obj, n, userIndex);
       
       % Predict the rating of the given user userIndex for the item with itemIndex.
       prediction = makePrediction(obj, userIndex, itemIndex);
       
       % Make initial calculations. This may be the similarity matrix
       % calculation for the k-NN algorithm, or the sparse reconstruction for
       % sparse coding. Some of the evaluation methods call this
       % function before they start. The user may leave this method
       % empty and instead add their initialization logic (if any)
       % to other methods such as generateTopNListForTestSetForUser.
       initialize(obj);       

After implementing these methods, the user can measure the personalisation, CPP, recall, precision, MAE, and RMSE of their own algorithm and safely compare the results with those of the other methods. Please see the SampleRecommender.m file (https://github.com/iemre/MRSR/blob/master/SampleRecommender.m) for an example.
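
As a concrete illustration, a minimal experiment class might look like the sketch below. It implements the four abstract methods with a trivial popularity-based strategy. The property names (baseSet, itemPopularity) and the way the data reaches the class are assumptions made for illustration only; consult AbstractExperiment and SampleRecommender.m for the actual interface.

classdef PopularityExperiment < AbstractExperiment
    % Illustrative sketch only: recommends and predicts using item popularity.
    % The baseSet property and data access below are assumptions; see
    % SampleRecommender.m for the real conventions used in MRSR.
    properties
        itemPopularity   % number of ratings each item has received
    end
    methods
        function initialize(obj)
            % Pre-compute item popularity from the base (training) set.
            obj.itemPopularity = sum(obj.baseSet ~= 0, 1);   % assumes an obj.baseSet matrix
        end

        function topNList = generateTopNListForUser(obj, n, userIndex)
            % Top-n most popular items, regardless of what the user has rated.
            [~, order] = sort(obj.itemPopularity, 'descend');
            topNList = order(1:n);
        end

        function topNList = generateTopNListForTestSetForUser(obj, n, userIndex)
            % Top-n most popular items among the items the user has NOT rated.
            unrated = find(obj.baseSet(userIndex, :) == 0);
            [~, order] = sort(obj.itemPopularity(unrated), 'descend');
            topNList = unrated(order(1:min(n, numel(order))));
        end

        function prediction = makePrediction(obj, userIndex, itemIndex)
            % Predict the mean observed rating of the item (a crude baseline).
            itemRatings = obj.baseSet(obj.baseSet(:, itemIndex) ~= 0, itemIndex);
            if isempty(itemRatings)
                prediction = 0;
            else
                prediction = mean(itemRatings);
            end
        end
    end
end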

Please contact the author by email if you have any questions or suggestions.
