allenjack / GATE

License: MIT
The implementation of "Gated Attentive-Autoencoder for Content-Aware Recommendation"

Programming Languages

python
139335 projects - #7 most used programming language
Jupyter Notebook
11667 projects

Projects that are alternatives of or similar to GATE

SAE-NAD
The implementation of "Point-of-Interest Recommendation: Exploiting Self-Attentive Autoencoders with Neighbor-Aware Influence"
Stars: ✭ 48 (-26.15%)
Mutual labels:  autoencoder, attention-model
retailbox
🛍️RetailBox - eCommerce Recommender System using Machine Learning
Stars: ✭ 32 (-50.77%)
Mutual labels:  recommender-systems
Unsupervised-Classification-with-Autoencoder
Using Autoencoders for classification as unsupervised machine learning algorithms with Deep Learning.
Stars: ✭ 43 (-33.85%)
Mutual labels:  autoencoder
EZyRB
Easy Reduced Basis method
Stars: ✭ 49 (-24.62%)
Mutual labels:  autoencoder
pytorch integrated cell
Integrated Cell project implemented in pytorch
Stars: ✭ 40 (-38.46%)
Mutual labels:  autoencoder
tensorflow-mnist-AAE
Tensorflow implementation of adversarial auto-encoder for MNIST
Stars: ✭ 86 (+32.31%)
Mutual labels:  autoencoder
pdn-content-aware-fill
A Resynthesizer-based content aware fill Effect plugin for Paint.NET
Stars: ✭ 54 (-16.92%)
Mutual labels:  content-aware
Unsupervised Deep Learning
Unsupervised (Self-Supervised) Clustering of Seismic Signals Using Deep Convolutional Autoencoders
Stars: ✭ 36 (-44.62%)
Mutual labels:  autoencoder
dltf
Hands-on in-person workshop for Deep Learning with TensorFlow
Stars: ✭ 14 (-78.46%)
Mutual labels:  autoencoder
reasoning attention
Unofficial implementation algorithms of attention models on SNLI dataset
Stars: ✭ 34 (-47.69%)
Mutual labels:  attention-model
adversarial-autoencoder
Tensorflow 2.0 implementation of Adversarial Autoencoders
Stars: ✭ 17 (-73.85%)
Mutual labels:  autoencoder
seq2seq-autoencoder
Theano implementation of Sequence-to-Sequence Autoencoder
Stars: ✭ 12 (-81.54%)
Mutual labels:  autoencoder
Image-Retrieval
Image retrieval program made in Tensorflow supporting VGG16, VGG19, InceptionV3 and InceptionV4 pretrained networks and own trained Convolutional autoencoder.
Stars: ✭ 56 (-13.85%)
Mutual labels:  autoencoder
php-smartcrop-extension
smartcrop implementation in php extension
Stars: ✭ 17 (-73.85%)
Mutual labels:  content-aware
KBRD
Towards Knowledge-Based Recommender Dialog System @ EMNLP 2019
Stars: ✭ 123 (+89.23%)
Mutual labels:  recommender-systems
swin-transformer-pytorch
Implementation of the Swin Transformer in PyTorch.
Stars: ✭ 610 (+838.46%)
Mutual labels:  attention-model
G2P
Grapheme To Phoneme
Stars: ✭ 59 (-9.23%)
Mutual labels:  attention-model
eForest
This is the official implementation for the paper 'AutoEncoder by Forest'
Stars: ✭ 71 (+9.23%)
Mutual labels:  autoencoder
probabilistic nlg
Tensorflow Implementation of Stochastic Wasserstein Autoencoder for Probabilistic Sentence Generation (NAACL 2019).
Stars: ✭ 28 (-56.92%)
Mutual labels:  autoencoder
Face-Landmarking
Real time face landmarking using decision trees and NN autoencoders
Stars: ✭ 73 (+12.31%)
Mutual labels:  autoencoder

The GATE model for Content-aware Recommendation

The implementation of the paper:

Chen Ma, Peng Kang, Bin Wu, Qinglong Wang, and Xue Liu, "Gated Attentive-Autoencoder for Content-Aware Recommendation", in the 12th ACM International Conference on Web Search and Data Mining (WSDM 2019)

Arxiv: https://arxiv.org/abs/1812.02869

Please cite our paper if you use our code. Thanks!

Author: Chen Ma ([email protected])

Bibtex

@inproceedings{DBLP:conf/wsdm/MaKWWL19,
  author    = {Chen Ma and
               Peng Kang and
               Bin Wu and
               Qinglong Wang and
               Xue Liu},
  title     = {Gated Attentive-Autoencoder for Content-Aware Recommendation},
  booktitle = {{WSDM}},
  pages     = {519--527},
  publisher = {{ACM}},
  year      = {2019}
}

Environments

  • python 3.6
  • PyTorch (version: 0.4.0)
  • numpy (version: 1.15.0)
  • scipy (version: 1.1.0)
  • sklearn (version: 0.19.1)
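
One possible way to set up a matching environment (the pip commands below are an assumption and not part of the original instructions; PyTorch 0.4.0 in particular may need to be installed from the official PyTorch wheel archive rather than PyPI):

pip install numpy==1.15.0 scipy==1.1.0 scikit-learn==0.19.1
pip install torch==0.4.0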

Dataset

In our experiments, the citeulike-a dataset is from http://www.wanghao.in/CDL.htm, the movielens-20M dataset is from https://grouplens.org/datasets/movielens/20m/, and the Amazon-CDs and Amazon-Books datasets are from http://jmcauley.ucsd.edu/data/amazon/. (If you need the preprocessed data, please send me an email.)

The XXX_user_records.pkl file is a list of lists that stores the inner item ids of each user, e.g., user_records[0]=[item_id0, item_id1, item_id2,...].

The XXX_user_mapping.pkl file is a list that maps the user inner id to its original id, e.g., user_mapping[0]=A2SUAM1J3GNN3B.

The XXX_item_mapping.pkl file is similar to XXX_user_mapping.pkl.

The item_relation.pkl file is a list of lists that stores the neighbors of each item, e.g., item_relation[0]=[item_id0, item_id1, item_id2,...].

The review_word_sequence.pkl file is a list of lists that stores the word sequence of each item's description, e.g., review_word_sequence[0]=[word_id0, word_id1, word_id2,...]. Each word id corresponds to the line number (starting from 0) in the vocabulary.txt file.
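
For illustration, here is a minimal sketch of how these files could be loaded and inspected (XXX stands for the dataset name; the citeulike file-name prefix and the working-directory paths are assumptions):

import pickle

# Per-user interaction lists (inner item ids).
with open('citeulike_user_records.pkl', 'rb') as f:
    user_records = pickle.load(f)

# Inner-id -> original-id mappings for users and items.
with open('citeulike_user_mapping.pkl', 'rb') as f:
    user_mapping = pickle.load(f)
with open('citeulike_item_mapping.pkl', 'rb') as f:
    item_mapping = pickle.load(f)

# Item neighbors and item-description word sequences.
with open('item_relation.pkl', 'rb') as f:
    item_relation = pickle.load(f)
with open('review_word_sequence.pkl', 'rb') as f:
    review_word_sequence = pickle.load(f)

# vocabulary.txt maps a word id to the word on that line (0-based).
with open('vocabulary.txt', 'r') as f:
    vocabulary = [line.strip() for line in f]

print(user_mapping[0], user_records[0][:5])                   # original id and first items of user 0
print([vocabulary[w] for w in review_word_sequence[0][:10]])  # first words of item 0's description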

Example to run the code

Data preprocessing:

The code for data preprocessing is in the /preprocessing folder. Amazon_CDs.ipynb provides an example of how to transform the raw data into the .pickle files used in our program.
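
As a rough, hypothetical sketch (not the notebook's actual code) of how raw user-item interactions could be turned into the pickle format described above:

import pickle
from collections import defaultdict

# Hypothetical raw data: (original_user_id, original_item_id) pairs.
raw_interactions = [
    ('A2SUAM1J3GNN3B', 'item_a'),
    ('A2SUAM1J3GNN3B', 'item_b'),
    ('A3OTHERUSERID0', 'item_a'),
]

user_mapping, item_mapping = [], []  # inner id -> original id
user_index, item_index = {}, {}      # original id -> inner id
records = defaultdict(list)

for user_id, item_id in raw_interactions:
    if user_id not in user_index:
        user_index[user_id] = len(user_mapping)
        user_mapping.append(user_id)
    if item_id not in item_index:
        item_index[item_id] = len(item_mapping)
        item_mapping.append(item_id)
    records[user_index[user_id]].append(item_index[item_id])

user_records = [records[u] for u in range(len(user_mapping))]

with open('Amazon_CDs_user_records.pkl', 'wb') as f:
    pickle.dump(user_records, f)
with open('Amazon_CDs_user_mapping.pkl', 'wb') as f:
    pickle.dump(user_mapping, f)
with open('Amazon_CDs_item_mapping.pkl', 'wb') as f:
    pickle.dump(item_mapping, f)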

Train and evaluate the model (it is strongly recommended to run the program on a machine with a GPU):

python run.py