
lca4 / Collaborative Rnn

Licence: other
A TensorFlow implementation of the collaborative RNN (Ko et al., 2016).

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Collaborative Rnn

Gru4rec tensorflow
TensorFlow implementation of the GRU4Rec model
Stars: ✭ 192 (+220%)
Mutual labels:  rnn, recommender-system
Dream
RNN-based model for recommendations
Stars: ✭ 77 (+28.33%)
Mutual labels:  rnn, recommender-system
Chameleon recsys
Source code of CHAMELEON - A Deep Learning Meta-Architecture for News Recommender Systems
Stars: ✭ 202 (+236.67%)
Mutual labels:  rnn, recommender-system
Rnn recsys
Our implementation of the paper "Embedding-based News Recommendation for Millions of Users"
Stars: ✭ 135 (+125%)
Mutual labels:  rnn, recommender-system
ML2017FALL
Machine Learning (EE 5184) at NTU
Stars: ✭ 66 (+10%)
Mutual labels:  rnn, recommender-system
Rnn Nmt
An encoder-decoder neural machine translation model based on a bidirectional RNN and an attention mechanism
Stars: ✭ 46 (-23.33%)
Mutual labels:  rnn
Time Attention
Implementation of RNN for Time Series prediction from the paper https://arxiv.org/abs/1704.02971
Stars: ✭ 52 (-13.33%)
Mutual labels:  rnn
Boilerplate Dynet Rnn Lm
Boilerplate code for quickly getting set up to run language modeling experiments
Stars: ✭ 37 (-38.33%)
Mutual labels:  rnn
Nlp overview
Overview of Modern Deep Learning Techniques Applied to Natural Language Processing
Stars: ✭ 1,104 (+1740%)
Mutual labels:  rnn
Rexy
Flexible and extendable recommender system based on an abstract User-Product-Tag schema
Stars: ✭ 57 (-5%)
Mutual labels:  recommender-system
Mxnet Seq2seq
Sequence-to-sequence learning with MXNet
Stars: ✭ 51 (-15%)
Mutual labels:  rnn
Recoder
Large scale training of factorization models for Collaborative Filtering with PyTorch
Stars: ✭ 46 (-23.33%)
Mutual labels:  recommender-system
Consimilo
A Clojure library for querying large data-sets on similarity
Stars: ✭ 54 (-10%)
Mutual labels:  recommender-system
Deep Speeling
Deep Learning neural network for correcting spelling
Stars: ✭ 45 (-25%)
Mutual labels:  rnn
Attention Over Attention Tf Qa
Implementation of the AoA model from the paper "Attention-over-Attention Neural Networks for Reading Comprehension"
Stars: ✭ 58 (-3.33%)
Mutual labels:  rnn
Attentional Neural Factorization Machine
Attention, Factorization Machine, Deep Learning, Recommender System
Stars: ✭ 39 (-35%)
Mutual labels:  recommender-system
Realtime Fall Detection For Rnn
Real-time ADLs and fall detection implemented in TensorFlow
Stars: ✭ 50 (-16.67%)
Mutual labels:  rnn
Char rnn lm zh
A Chinese language model, implemented following the official PyTorch documentation
Stars: ✭ 57 (-5%)
Mutual labels:  rnn
Deepseqslam
The Official Deep Learning Framework for Route-based Place Recognition
Stars: ✭ 49 (-18.33%)
Mutual labels:  rnn

Collaborative RNN

This is a TensorFlow implementation of the Collaborative RNN presented in the paper

Collaborative Recurrent Neural Networks for Dynamic Recommender Systems, Young-Jun Ko, Lucas Maystre, Matthias Grossglauser, ACML, 2016.

A PDF of the paper can be found here.

Requirements

The code is tested with

  • Python 2.7.12 and 3.5.1
  • NumPy 1.13.3
  • TensorFlow 1.4.0
  • CUDA 8.0
  • cuDNN 6.0
  • six 1.11.0
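
One way to recreate a matching environment is to pin these versions with pip. The following is a minimal sketch assuming a CUDA-enabled machine; on a CPU-only setup, substitute tensorflow==1.4.0 for tensorflow-gpu==1.4.0.

     pip install numpy==1.13.3 six==1.11.0 tensorflow-gpu==1.4.0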

If you are interested in quickly testing out our code, you might want to check out our step-by-step guide for running the collaborative RNN on an AWS EC2 p2.xlarge instance.

Quickstart

Reproducing the results of the paper should be as easy as following these three steps.

  1. Download the datasets.

    • The last.fm dataset is available on Òscar Celma's page. The relevant file is userid-timestamp-artid-artname-traid-traname.tsv.
    • The BrightKite dataset is available at SNAP. The relevant file is loc-brightkite_totalCheckins.txt.
  2. Preprocess the data (relabel users and items, remove degenerate cases, split into training and validation sets); a short sketch of this relabel-and-split idea follows the list below. This can be done using the script utils/preprocess.py. For example, for BrightKite:

     python utils/preprocess.py brightkite path/to/raw_file.txt
    

    This will create two files named brightkite-train.txt and brightkite-valid.txt.

  3. Run crnn.py on the preprocessed data. For example, for BrightKite, you might try running

     python -u crnn.py brightkite-{train,valid}.txt --hidden-size=32 \
         --learning-rate=0.0075 --rho=0.997 \
         --chunk-size=64 --batch-size=20 --num-epochs=25
    

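The preprocessing in step 2 boils down to two ideas: map raw user and item identifiers to contiguous integers, and hold out the tail of each user's event sequence for validation. Below is a minimal, hypothetical Python sketch of that idea; the function names, the (user, item, timestamp) input format, and the per-user chronological split are illustrative assumptions, not the actual code of utils/preprocess.py.

     from collections import defaultdict

     def relabel(raw_events):
         """Map raw user/item identifiers to contiguous integer ids.

         raw_events is an iterable of (user, item, timestamp) triples.
         """
         users, items = {}, {}
         events = []
         for user, item, ts in raw_events:
             uid = users.setdefault(user, len(users))  # new users get the next free id
             iid = items.setdefault(item, len(items))  # likewise for items
             events.append((uid, iid, ts))
         return events

     def split_per_user(events, valid_frac=0.1):
         """Hold out the most recent valid_frac of each user's sequence."""
         by_user = defaultdict(list)
         for uid, iid, ts in events:
             by_user[uid].append((ts, iid))
         train, valid = [], []
         for uid, seq in by_user.items():
             seq.sort()  # chronological order
             cut = max(1, int(len(seq) * (1 - valid_frac)))
             train += [(uid, iid) for _, iid in seq[:cut]]
             valid += [(uid, iid) for _, iid in seq[cut:]]
         return train, valid
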
Here is a table that summarizes the settings that gave us the results published in the paper. All the settings can be passed as command-line arguments to crnn.py.

Argument             BrightKite   last.fm
--batch-size         20           20
--chunk-size         64           64
--hidden-size        32           128
--learning-rate      0.0075       0.01
--max-train-chunks   (None)       80
--max-valid-chunks   (None)       8
--num-epochs         25           10
--rho                0.997        0.997
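
As an illustration, plugging the last.fm column of this table into the same command pattern as in step 3 gives something like the following. The lastfm-{train,valid}.txt file names are an assumption, inferred from how the preprocessing script names its BrightKite output.

     python -u crnn.py lastfm-{train,valid}.txt --hidden-size=128 \
         --learning-rate=0.01 --rho=0.997 \
         --chunk-size=64 --batch-size=20 --num-epochs=10 \
         --max-train-chunks=80 --max-valid-chunks=8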

On a modern server with an Nvidia Titan X (Maxwell generation) GPU, training takes around 40 seconds per epoch on the BrightKite dataset and around 14 minutes per epoch on last.fm.
