EagleW / Describing_a_knowledge_base

License: MIT
Code for Describing a Knowledge Base


Describing a Knowledge Base


Accepted at the 11th International Conference on Natural Language Generation (INLG 2018)

[Slides]

Table of Contents

  • Model Overview
  • Requirements
  • Quickstart
  • Citation
  • Attention Visualization

Model Overview

(Figure: model overview)

Requirements

Environment:

  • PyTorch 0.4
  • Python 3.6 (caution: the model may not save and load properly under Python 3.5)

Data:

  • Wikipedia Person and Animal Dataset
    This dataset gathers 428,748 unfiltered person and 12,236 animal infoboxes, each paired with a description, extracted from the Wikipedia dump (2018/04/01) and Wikidata (2018/04/12).

Quickstart

Preprocessing:

Put the Wikipedia Person and Animal Dataset under the Describing_a_knowledge_base folder and unzip it.

Randomly split the data into train, dev, and test sets by running split.py under the utils folder:

python split.py
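The internals of split.py are not documented here, but a random three-way split can be sketched as follows; the split ratios, the fixed seed, and the function name are all assumptions for illustration:

```python
import random

def split_dataset(examples, dev_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle examples and cut them into train/dev/test lists.

    Ratios and seed are illustrative defaults, not the repo's actual values.
    """
    rng = random.Random(seed)
    examples = list(examples)
    rng.shuffle(examples)
    n = len(examples)
    n_test = int(n * test_frac)
    n_dev = int(n * dev_frac)
    test = examples[:n_test]
    dev = examples[n_test:n_test + n_dev]
    train = examples[n_test + n_dev:]
    return train, dev, test

train, dev, test = split_dataset(range(100))
```

Fixing the random seed keeps the split reproducible across runs, which matters when comparing model variants on the same dev/test partition.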

Run preprocess.py in the same folder.

You can choose person (type 0) or animal (type 1):

python preprocess.py --type 0

Training

Hyperparameters can be adjusted in the Config class of main.py; choose person (0) or animal (1) with the --type flag.

python main.py --cuda --mode 0 --type 0
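The actual hyperparameters live in main.py's Config class; a minimal sketch of what such a class might look like follows. Every field name and value here is an illustrative assumption, not the repository's real configuration:

```python
class Config:
    """Illustrative training hyperparameters (all values are assumptions)."""
    emb_dim = 256      # embedding size for words and infobox fields
    hidden_dim = 500   # encoder/decoder hidden state size
    batch_size = 32
    lr = 1e-3          # learning rate
    epochs = 20
    type = 0           # 0 = person, 1 = animal

cfg = Config()
```

Keeping hyperparameters in a single class (rather than scattering literals through the code) makes it easy to log the exact configuration alongside each saved checkpoint.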

Test

Compute score:

python main.py --cuda --mode 3

Predict a single entity:

python main.py --cuda --mode 1
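Taken together, the commands above suggest --mode selects among training (0), single-entity prediction (1), and scoring (3). A hedged argparse sketch of that command-line surface is shown below; mode 2 and the exact defaults are not documented here, so this is a reconstruction, not the repository's real parser:

```python
import argparse

def build_parser():
    """CLI surface matching the commands above; defaults are assumptions."""
    p = argparse.ArgumentParser(description="Describing a Knowledge Base")
    p.add_argument("--cuda", action="store_true", help="run on GPU")
    p.add_argument("--mode", type=int, default=0,
                   help="0 = train, 1 = predict single entity, 3 = compute score")
    p.add_argument("--type", type=int, default=0, choices=[0, 1],
                   help="0 = person, 1 = animal")
    return p

# e.g. scoring run: python main.py --cuda --mode 3
args = build_parser().parse_args(["--cuda", "--mode", "3"])
```

Unrecognized modes fail fast at parse time only if you add explicit choices; here --mode is left open since the full set of modes is not documented.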

Citation

@InProceedings{W18-6502,
  author = 	"Wang, Qingyun
		and Pan, Xiaoman
		and Huang, Lifu
		and Zhang, Boliang
		and Jiang, Zhiying
		and Ji, Heng
		and Knight, Kevin",
  title = 	"Describing a Knowledge Base",
  booktitle = 	"Proceedings of the 11th International Conference on Natural Language Generation",
  year = 	"2018",
  publisher = 	"Association for Computational Linguistics",
  pages = 	"10--21",
  location = 	"Tilburg University, The Netherlands",
  url = 	"http://aclweb.org/anthology/W18-6502"
}

Attention Visualization

(Attention visualization figures)
