
xiangwang1223 / Kgpolicy

Licence: MIT
Reinforced Negative Sampling over Knowledge Graph for Recommendation, WWW2020

Programming Languages

python

Projects that are alternatives to or similar to Kgpolicy

Kbgan
Code for "KBGAN: Adversarial Learning for Knowledge Graph Embeddings" https://arxiv.org/abs/1711.04071
Stars: ✭ 186 (+124.1%)
Mutual labels:  knowledge-graph, reinforcement-learning
KG4Rec
Knowledge-aware recommendation papers.
Stars: ✭ 76 (-8.43%)
Mutual labels:  knowledge-graph, recommender-system
Multihopkg
Multi-hop knowledge graph reasoning learned via policy gradient with reward shaping and action dropout
Stars: ✭ 202 (+143.37%)
Mutual labels:  knowledge-graph, reinforcement-learning
Entity2rec
entity2rec generates item recommendation using property-specific knowledge graph embeddings
Stars: ✭ 159 (+91.57%)
Mutual labels:  knowledge-graph, recommender-system
Knowledge graph attention network
KGAT: Knowledge Graph Attention Network for Recommendation, KDD2019
Stars: ✭ 610 (+634.94%)
Mutual labels:  knowledge-graph, recommender-system
Nlp4rec Papers
Paper list of NLP for recommender systems
Stars: ✭ 162 (+95.18%)
Mutual labels:  knowledge-graph, recommender-system
Recommender System
A developing recommender system in tensorflow2. Algorithm: UserCF, ItemCF, LFM, SLIM, GMF, MLP, NeuMF, FM, DeepFM, MKR, RippleNet, KGCN and so on.
Stars: ✭ 227 (+173.49%)
Mutual labels:  knowledge-graph, recommender-system
Awesome Deep Learning Papers For Search Recommendation Advertising
Awesome Deep Learning papers for industrial Search, Recommendation and Advertising. They focus on Embedding, Matching, Ranking (CTR prediction, CVR prediction), Post Ranking, Transfer, Reinforcement Learning, Self-supervised Learning and so on.
Stars: ✭ 136 (+63.86%)
Mutual labels:  reinforcement-learning, recommender-system
Recsim
A Configurable Recommender Systems Simulation Platform
Stars: ✭ 461 (+455.42%)
Mutual labels:  reinforcement-learning, recommender-system
Recnn
Reinforced Recommendation toolkit built around pytorch 1.7
Stars: ✭ 362 (+336.14%)
Mutual labels:  reinforcement-learning, recommender-system
Catalyst
Accelerated deep learning R&D
Stars: ✭ 2,804 (+3278.31%)
Mutual labels:  reinforcement-learning, recommender-system
Elliot
Comprehensive and Rigorous Framework for Reproducible Recommender Systems Evaluation
Stars: ✭ 49 (-40.96%)
Mutual labels:  knowledge-graph, recommender-system
Reco Papers
Classic papers and resources on recommendation
Stars: ✭ 2,804 (+3278.31%)
Mutual labels:  reinforcement-learning, recommender-system
Crslab
CRSLab is an open-source toolkit for building Conversational Recommender System (CRS).
Stars: ✭ 183 (+120.48%)
Mutual labels:  knowledge-graph, recommender-system
Drl4recsys
Courses on Deep Reinforcement Learning (DRL) and DRL papers for recommender systems
Stars: ✭ 196 (+136.14%)
Mutual labels:  reinforcement-learning, recommender-system
Minerva
Meandering In Networks of Entities to Reach Verisimilar Answers
Stars: ✭ 205 (+146.99%)
Mutual labels:  knowledge-graph, reinforcement-learning
skywalkR
code for Gogleva et al manuscript
Stars: ✭ 28 (-66.27%)
Mutual labels:  knowledge-graph, recommender-system
Chatbot cn
A chatbot for the finance and judicial domains (with small-talk capability). Its main modules include information extraction, NLU, NLG, and a knowledge graph; the front end is integrated via Django, and RESTful interfaces for the NLP and KG modules are already provided.
Stars: ✭ 791 (+853.01%)
Mutual labels:  knowledge-graph, reinforcement-learning
Ml Surveys
📋 Survey papers summarizing advances in deep learning, NLP, CV, graphs, reinforcement learning, recommendations, graphs, etc.
Stars: ✭ 1,063 (+1180.72%)
Mutual labels:  reinforcement-learning, recommender-system
Recosystem
Recommender System Using Parallel Matrix Factorization
Stars: ✭ 74 (-10.84%)
Mutual labels:  recommender-system

Knowledge Graph Policy Network

This is our PyTorch implementation for the paper:

Xiang Wang, Yaokun Xu, Xiangnan He, Yixin Cao, Meng Wang and Tat-Seng Chua (2020). Reinforced Negative Sampling over Knowledge Graph for Recommendation. Paper in ACM DL or Paper in arXiv. In WWW'2020, Taipei, Taiwan, China, April 20–24, 2020.

Authors: Dr. Xiang Wang (xiangwang at u.nus.edu) and Mr. Yaokun Xu (xuyaokun98 at gmail.com)

Introduction

Knowledge Graph Policy Network (KGPolicy) is a new negative sampling framework tailored to knowledge-aware personalized recommendation. By exploiting the rich connections of a knowledge graph, KGPolicy discovers high-quality (i.e., informative and factual) items to serve as negative training instances, thus yielding better recommendations.
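As a rough illustration of the idea (a toy sketch under assumed data structures, not the authors' implementation), a KG-aware sampler restricts negative candidates to items reachable within a few knowledge-graph hops of a positive item, instead of sampling uniformly from all items:

```python
import random

def sample_negative(kg, pos_item, interacted, hops=2, seed=0):
    """Pick a negative item reachable in exactly `hops` KG steps from a
    positive item, excluding items the user has already interacted with.
    `kg` maps each entity to a list of its neighboring entities."""
    frontier = {pos_item}
    for _ in range(hops):
        frontier = {n for node in frontier for n in kg.get(node, ())}
    candidates = sorted(frontier - interacted - {pos_item})
    return random.Random(seed).choice(candidates) if candidates else None

# Toy KG: positive item i1 connects through entity e1 to items i2 and i3.
kg = {"i1": ["e1"], "e1": ["i2", "i3"], "i2": [], "i3": []}
neg = sample_negative(kg, "i1", interacted={"i1"})
```

Here `neg` is one of `i2` or `i3` — a KG-related, hence informative, negative. KGPolicy learns a policy over such multi-hop paths rather than choosing uniformly as this sketch does.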

Citation

If you want to use our codes and datasets in your research, please cite:

@inproceedings{KGPolicy20,
  author    = {Xiang Wang and
               Yaokun Xu and
               Xiangnan He and
               Yixin Cao and
               Meng Wang and
               Tat{-}Seng Chua},
  title     = {Reinforced Negative Sampling over Knowledge Graph for Recommendation},
  booktitle = {{WWW}},
  year      = {2020}
}

Reproducibility

To demonstrate the reproducibility of the best performance reported in our paper and to facilitate development and testing, we provide the following instructions. We will release the other baselines later.

1. Data and Source Code

We follow our previous work, KGAT; detailed information about the datasets can be found there.

i. Create a new directory for this repo

➜ mkdir KG-Policy
➜ cd KG-Policy

ii. Get dataset and pretrain model

➜ wget https://github.com/xiangwang1223/kgpolicy/releases/download/v1.0/Data.zip
➜ unzip Data.zip

iii. Get source code

➜ git clone https://github.com/xiangwang1223/kgpolicy.git

2. Environment

Please use conda to manage the environment.

i. Switch to source code dir

➜ cd kgpolicy

ii. Create a new environment

➜ conda create -n geo python=3.6
➜ conda activate geo

iii. Ensure the Python version is 3.6, then install all requirements for this project.

➜ bash setup.sh

Note: sometimes there is a mismatch between the CUDA version and the torch_geometric version. If you encounter this problem, please install a matching CUDA version. If you prefer to install all dependencies yourself, please ensure torch_geometric is installed properly. After that, you are ready to train the KG-Policy model.

3. Train

i. Train KG-Policy on last-fm. KG-Policy can also be trained on the other two datasets, amazon-book and yelp2018; see Data for details.

➜ python main.py

To run on the other two datasets:

➜ python main.py --regs 1e-4 --dataset yelp2018 --model_path model/best_yelp.ckpt 
➜ python main.py --regs 1e-4 --dataset amazon-book --model_path model/best_ab.ckpt 

Note: the default regs is 1e-5, while we use 1e-4 as regs when training on amazon-book and yelp2018. Some other parameters can also be tuned for better performance; see common/config/parser.py.
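The flags used above (--regs, --dataset, --model_path) are defined in common/config/parser.py; a minimal sketch of such a parser might look like the following (the flag names and the regs default of 1e-5 come from this README; the other defaults are illustrative assumptions):

```python
import argparse

def build_parser():
    # Illustrative subset of the flags mentioned in this README;
    # see common/config/parser.py for the full, authoritative list.
    p = argparse.ArgumentParser(description="KG-Policy training (sketch)")
    p.add_argument("--regs", type=float, default=1e-5,
                   help="regularization weight (1e-4 for yelp2018/amazon-book)")
    p.add_argument("--dataset", default="last-fm",
                   choices=["last-fm", "yelp2018", "amazon-book"])
    p.add_argument("--model_path", default=None,
                   help="pretrained checkpoint, e.g. model/best_yelp.ckpt")
    return p

args = build_parser().parse_args(["--regs", "1e-4", "--dataset", "yelp2018"])
```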

4. Experiment result

To be consistent with our KGAT, we use the same evaluation metrics (i.e., recall@20 and ndcg@20), use the same evaluation code released with KGAT, and report those numbers in our KGPolicy paper. Note that this implementation of ndcg@20 differs from the standard definition, although they reflect similar trends. Hence, here we also report results in terms of the standard ndcg@20; please check the implementation at common/test.py.
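For reference, the standard ndcg@K with binary relevance can be sketched as follows (a generic textbook implementation, not the code in common/test.py):

```python
import math

def ndcg_at_k(ranked, relevant, k=20):
    """Standard NDCG@k with binary relevance: the DCG of the ranked list
    divided by the DCG of an ideal ranking of the relevant items."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    idcg = sum(1.0 / math.log2(i + 2)
               for i in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0
```

A perfect top-k ranking scores 1.0; a ranking with no relevant items in the top k scores 0.0.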

i. Dataset: last-fm

Model      recall@20  ndcg@20
RNS        0.0687     0.0584
DNS        0.0874     0.0746
IRGAN      0.0755     0.0627
KG-Policy  0.0957     0.0837

ii. Dataset: yelp2018

Model      recall@20  ndcg@20
RNS        0.0465     0.0298
DNS        0.0666     0.0429
IRGAN      0.0538     0.0342
KG-Policy  0.0746     0.0489

iii. Dataset: amazon-book

Model      recall@20  ndcg@20
RNS        0.1239     0.0647
DNS        0.1460     0.0775
IRGAN      0.1330     0.0693
KG-Policy  0.1609     0.0890

Acknowledgement

Any scientific publications that use our codes and datasets should cite our paper as the reference; the BibTeX entry is given in the Citation section above.

Nobody guarantees the correctness of the data, its suitability for any particular purpose, or the validity of results based on the use of the data set. The data set may be used for any research purposes under the following conditions:

  • The user must acknowledge the use of the data set in publications resulting from the use of the data set.
  • The user may not redistribute the data without separate permission.
  • The user may not try to deanonymise the data.
  • The user may not use this information for any commercial or revenue-bearing purposes without first obtaining permission from us.

Funding Source Acknowledgement

This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
