awarebayes / Recnn

License: Apache-2.0
Reinforced Recommendation toolkit built around PyTorch 1.7

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Recnn

GNN-Recommender-Systems
An index of recommendation algorithms that are based on Graph Neural Networks.
Stars: ✭ 505 (+39.5%)
Mutual labels:  recommendation-system, recommender-system
mildnet
Visual Similarity research at Fynd. Contains code to reproduce 2 of our research papers.
Stars: ✭ 76 (-79.01%)
Mutual labels:  recommendation-system, recommender-system
NVTabular
NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte scale datasets used to train deep learning based recommender systems.
Stars: ✭ 797 (+120.17%)
Mutual labels:  recommendation-system, recommender-system
Kratos
A modular-designed and easy-to-use microservices framework in Go.
Stars: ✭ 15,844 (+4276.8%)
Mutual labels:  library, toolkit
recommender
NReco Recommender is a .NET port of Apache Mahout CF java engine (standalone, non-Hadoop version)
Stars: ✭ 35 (-90.33%)
Mutual labels:  recommendation-system, recommender-system
Genometools
GenomeTools genome analysis system.
Stars: ✭ 186 (-48.62%)
Mutual labels:  library, toolkit
auction-website
🏷️ An e-commerce marketplace template. An online auction and shopping website for buying and selling a wide variety of goods and services worldwide.
Stars: ✭ 44 (-87.85%)
Mutual labels:  recommendation-system, recommender-system
Catalyst
Accelerated deep learning R&D
Stars: ✭ 2,804 (+674.59%)
Mutual labels:  reinforcement-learning, recommender-system
Context-Aware-Recommender
Hybrid Recommender System
Stars: ✭ 16 (-95.58%)
Mutual labels:  recommendation-system, recommender-system
TIFUKNN
kNN-based next-basket recommendation
Stars: ✭ 38 (-89.5%)
Mutual labels:  recommendation-system, recommender-system
Chat Ui Kit React
Build your own chat UI with React components in few minutes. Chat UI Kit from chatscope is an open source UI toolkit for developing web chat applications.
Stars: ✭ 131 (-63.81%)
Mutual labels:  library, toolkit
Cornac
A Comparative Framework for Multimodal Recommender Systems
Stars: ✭ 308 (-14.92%)
Mutual labels:  recommender-system, recommendation-system
Awesome Ui Component Library
Curated list of framework component libraries for UI styles/toolkit
Stars: ✭ 702 (+93.92%)
Mutual labels:  library, toolkit
retailbox
🛍️RetailBox - eCommerce Recommender System using Machine Learning
Stars: ✭ 32 (-91.16%)
Mutual labels:  recommendation-system, recommender-system
Kooper
Kooper is a simple Go library to create Kubernetes operators and controllers.
Stars: ✭ 388 (+7.18%)
Mutual labels:  library, toolkit
recommender system with Python
recommender system tutorial with Python
Stars: ✭ 106 (-70.72%)
Mutual labels:  recommendation-system, recommender-system
Drl4recsys
Courses on Deep Reinforcement Learning (DRL) and DRL papers for recommender systems
Stars: ✭ 196 (-45.86%)
Mutual labels:  reinforcement-learning, recommender-system
Reco Papers
Classic papers and resources on recommendation
Stars: ✭ 2,804 (+674.59%)
Mutual labels:  reinforcement-learning, recommender-system
Tf-Rec
Tf-Rec is a python💻 package for building⚒ Recommender Systems. It is built on top of Keras and Tensorflow 2 to utilize GPU Acceleration during training.
Stars: ✭ 18 (-95.03%)
Mutual labels:  recommendation-system, recommender-system
Recdb Postgresql
RecDB is a recommendation engine built entirely inside PostgreSQL
Stars: ✭ 297 (-17.96%)
Mutual labels:  recommender-system, recommendation-system

Documentation Status | Code style: black

This is my school project. It focuses on Reinforcement Learning for personalized news recommendation. The main distinction is that it tries to solve online off-policy learning with dynamically generated item embeddings. The goal is a library of SOTA reinforcement learning recommendation algorithms that provides whatever level of abstraction you prefer.

recnn.readthedocs.io

📊 The features can be summed up as follows:

  • Abstract as you decide: you can import an entire algorithm (say DDPG) and simply call ddpg.learn(batch); you can import the networks and the learning function separately and create a custom loader for your task; or you can define everything yourself.

  • Examples contain no junk code or workarounds: pure model definition and the algorithm itself in a single file. I wrote a couple of articles explaining how it all works.

  • Learning is built around sequential or frame environments that support ML20M and the like. Seq and Frame determine the type of sequential data: seq is fully sequential with dynamic size (WIP), while frame is a static-size frame.

  • State representation module with various methods. For sequential state representation, you can use an LSTM/RNN/GRU (WIP).

  • Parallel data loading with Modin (Dask / Ray) and caching

  • PyTorch 1.7 support with TensorBoard visualization.

  • New datasets will be added in the future.
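The frame environment described in the list above can be illustrated with plain Python. This is a toy sketch of the windowing idea only; the function and variable names are made up for illustration and are not part of the recnn API.

```python
# Toy illustration of the "frame" environment described above: slide a
# static-size window over a user's interaction history. Names here are
# illustrative, not part of the recnn API.

def to_frames(interactions, frame_size):
    """Return (state, action) pairs: each fixed-size frame is a state,
    and the item that follows it is the action to predict."""
    frames = []
    for i in range(len(interactions) - frame_size):
        state = interactions[i:i + frame_size]   # static-size frame
        action = interactions[i + frame_size]    # the next item
        frames.append((state, action))
    return frames

history = [10, 42, 7, 3, 99, 15]
print(to_frames(history, frame_size=3))
# [([10, 42, 7], 3), ([42, 7, 3], 99), ([7, 3, 99], 15)]
# A "seq" environment would instead keep the full variable-length prefix.
```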

📚 Medium Articles

The repo consists of two parts: the library (./recnn), and the playground (./examples) where I explain how to work with certain things.

  • Pretty much what you need to get started with this library if you know recommenders but don't know much about reinforcement learning:

  • Top-K Off-Policy Correction for a REINFORCE Recommender System:

Algorithms that are/will be added

Algorithm Paper Code
Deep Q Learning (PoC) https://arxiv.org/abs/1312.5602 examples/0. Embeddings/ 1.DQN
Deep Deterministic Policy Gradients https://arxiv.org/abs/1509.02971 examples/1.Vanilla RL/DDPG
Twin Delayed DDPG (TD3) https://arxiv.org/abs/1802.09477 examples/1.Vanilla RL/TD3
Soft Actor-Critic https://arxiv.org/abs/1801.01290 examples/1.Vanilla RL/SAC
Batch Constrained Q-Learning https://arxiv.org/abs/1812.02900 examples/99.To be released/BCQ
REINFORCE Top-K Off-Policy Correction https://arxiv.org/abs/1812.02353 examples/2. REINFORCE TopK
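For the last entry in the table, the cited paper reweights the standard importance weight π(a|s)/β(a|s) by a top-K multiplier λ_K = K(1 − π(a|s))^(K−1). A minimal sketch of that weight, assuming the formula as given in the paper rather than recnn's actual implementation:

```python
# Sketch of the Top-K off-policy correction weight from the cited paper
# (https://arxiv.org/abs/1812.02353). This is my reading of the formula,
# not recnn's implementation.

def topk_correction(pi, beta, k):
    """Standard importance weight pi/beta, multiplied by the top-K
    multiplier lambda_K = K * (1 - pi)**(K - 1), where pi is the target
    policy's probability of the action and beta the behavior policy's."""
    importance = pi / beta
    lam = k * (1.0 - pi) ** (k - 1)
    return importance * lam

# With K = 1 the multiplier is 1, recovering plain off-policy correction:
print(topk_correction(0.2, 0.4, k=1))   # 0.5
print(topk_correction(0.2, 0.4, k=10))  # 0.5 * 10 * 0.8**9
```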

Repos I used code from

🤔 What is this

As described above, the project focuses on Reinforcement Learning for personalized news recommendation and tries to solve online off-policy learning with dynamically generated item embeddings. There is no exploration, since we are working with a fixed dataset. In the examples section, I use Google's BERT on the ML20M dataset to extract contextual information from the movie descriptions and form latent vector representations. The same transformation can later be applied to new, previously unseen items (hence, the embeddings are dynamically generated). If you don't want to bother with the embeddings pipeline, there is a DQN embeddings generator as a proof of concept.
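The "dynamically generated" part can be illustrated without BERT: any deterministic content-to-vector map lets you embed items that were never seen during training. Below is a toy hashing encoder standing in for BERT; it is purely illustrative and not part of recnn.

```python
# Toy stand-in for the BERT encoder: any deterministic content-to-vector
# map can embed items never seen during training. This hashing encoder is
# purely illustrative and is not part of recnn.
import hashlib

DIM = 8  # embedding dimensionality (BERT would give 768)

def embed(description):
    """Hash each word into one of DIM buckets; the bucket-count vector
    plays the role of a learned content embedding."""
    vec = [0.0] * DIM
    for word in description.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1.0
    return vec

# The same transformation applies to a movie absent from the training set:
print(embed("a space opera about smugglers and princesses"))
```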

✋ Getting Started

P.S. The image is clickable; here is a direct link:

To learn more about recnn, read the docs: recnn.readthedocs.io

⚙️ Installing

pip install git+https://github.com/awarebayes/RecNN.git

PyPI is on its way...

🚀 Try demo

I built a Streamlit demo to showcase the features; it includes a 'recommend me a movie' option. Note how the score changes as you rate the movies: when you start, with all movies at the default rating of 5/10, the Euclidean score is around 40, but as you rate them it drops below 10, indicating more personalized and precise predictions. You can also test diversity, check the correlation of recommendations, pairwise distances, and pinpoint accuracy.
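My reading of that score: it is roughly the Euclidean distance between the model's predicted preference vector and your actual ratings, so it shrinks as the two align. A hedged sketch of that behavior; the demo's exact scoring code may differ.

```python
# Sketch of the Euclidean score mentioned above: distance between a
# predicted preference vector and the user's actual ratings. Purely
# illustrative; the demo's exact scoring code may differ.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

default_ratings = [5.0] * 4            # every movie starts at 5/10
predicted       = [9.0, 2.0, 8.0, 1.0] # what the model thinks you like
print(euclidean(predicted, default_ratings))  # large: not yet personalized

rated = [9.0, 2.5, 7.5, 1.0]           # after you actually rate movies
print(euclidean(predicted, rated))            # small: personalized
```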

Run it:

git clone git@github.com:awarebayes/RecNN.git
cd RecNN && streamlit run examples/streamlit_demo.py

A Docker image is available here

📁 Downloads

📁 Download the Models

📄 Citing

If you find RecNN useful for an academic publication, then please use the following BibTeX to cite it:

@misc{RecNN,
  author = {M Scherbina},
  title = {RecNN: RL Recommendation with PyTorch},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/awarebayes/RecNN}},
}