
yuantiku / Commonsense Rc

License: MIT
Code for Yuanfudao at SemEval-2018 Task 11: Three-way Attention and Relational Knowledge for Commonsense Machine Comprehension

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to Commonsense Rc

Textaugmentation Gpt2
Fine-tuned pre-trained GPT2 for custom topic-specific text generation. Such a system can be used for text augmentation.
Stars: ✭ 104 (-7.14%)
Mutual labels:  natural-language-processing
Ua Gec
UA-GEC: Grammatical Error Correction and Fluency Corpus for the Ukrainian Language
Stars: ✭ 108 (-3.57%)
Mutual labels:  natural-language-processing
Detecting Scientific Claim
Extracting scientific claims from biomedical abstracts (powered by AllenNLP), demo:
Stars: ✭ 109 (-2.68%)
Mutual labels:  natural-language-processing
Easy Bert
A Dead Simple BERT API for Python and Java (https://github.com/google-research/bert)
Stars: ✭ 106 (-5.36%)
Mutual labels:  natural-language-processing
Allennlp
An open-source NLP research library, built on PyTorch.
Stars: ✭ 10,699 (+9452.68%)
Mutual labels:  natural-language-processing
Nuspell
🖋️ Fast and safe spellchecking C++ library
Stars: ✭ 108 (-3.57%)
Mutual labels:  natural-language-processing
Magnitude
A fast, efficient universal vector embedding utility package.
Stars: ✭ 1,394 (+1144.64%)
Mutual labels:  natural-language-processing
Nlp Papers
Papers and Book to look at when starting NLP 📚
Stars: ✭ 111 (-0.89%)
Mutual labels:  natural-language-processing
Transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Stars: ✭ 55,742 (+49669.64%)
Mutual labels:  natural-language-processing
Xlnet extension tf
XLNet Extension in TensorFlow
Stars: ✭ 109 (-2.68%)
Mutual labels:  natural-language-processing
Chatbot
A Russian-language chatbot
Stars: ✭ 106 (-5.36%)
Mutual labels:  natural-language-processing
Nltk
NLTK Source
Stars: ✭ 10,309 (+9104.46%)
Mutual labels:  natural-language-processing
Kadot
Kadot, the unsupervised natural language processing library.
Stars: ✭ 108 (-3.57%)
Mutual labels:  natural-language-processing
Ios ml
List of Machine Learning, AI, NLP solutions for iOS. The most recent version of this article can be found on my blog.
Stars: ✭ 1,409 (+1158.04%)
Mutual labels:  natural-language-processing
Awesome Emotion Recognition In Conversations
A comprehensive reading list for Emotion Recognition in Conversations
Stars: ✭ 111 (-0.89%)
Mutual labels:  natural-language-processing
Spokestack Python
Spokestack is a library that allows a user to easily incorporate a voice interface into any Python application.
Stars: ✭ 103 (-8.04%)
Mutual labels:  natural-language-processing
Papernotes
My personal notes and surveys on DL, CV and NLP papers.
Stars: ✭ 108 (-3.57%)
Mutual labels:  natural-language-processing
Opus Mt
Open neural machine translation models and web services
Stars: ✭ 111 (-0.89%)
Mutual labels:  natural-language-processing
Danlp
DaNLP is a repository for Natural Language Processing resources for the Danish Language.
Stars: ✭ 111 (-0.89%)
Mutual labels:  natural-language-processing
Awesome Embedding Models
A curated list of awesome embedding models tutorials, projects and communities.
Stars: ✭ 1,486 (+1226.79%)
Mutual labels:  natural-language-processing

Yuanfudao at SemEval-2018 Task 11: Three-way Attention and Relational Knowledge for Commonsense Machine Comprehension

Model Overview

We use attention-based LSTM networks.

For more technical details, please refer to our paper at https://arxiv.org/abs/1803.00191.

For more details about the task itself, please refer to the paper SemEval-2018 Task 11: Machine Comprehension Using Commonsense Knowledge.

The official leaderboard is available at https://competitions.codalab.org/competitions/17184#results (Evaluation Phase).

The overall model architecture is shown below:

Three-way Attentive Networks
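The core idea is that the passage, question, and answer choice all attend to one another at the word level. As a rough illustration of this kind of sequence-to-sequence dot-product attention (a simplified sketch, not the exact TriAN formulation from the paper; dimensions and names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def seq_attention(u, v):
    """Attend each word in u over all words in v.

    u: (len_u, d) embeddings of one sequence (e.g. the passage)
    v: (len_v, d) embeddings of another (e.g. the question)
    Returns a (len_u, d) matrix whose row i is a weighted sum of v's rows.
    """
    scores = u @ v.T                    # (len_u, len_v) similarity scores
    weights = softmax(scores, axis=-1)  # attention distribution per u-word
    return weights @ v                  # attended representation of v for each u-word

rng = np.random.default_rng(0)
passage = rng.normal(size=(30, 8))
question = rng.normal(size=(6, 8))
choice = rng.normal(size=(4, 8))

# three-way: passage attends to question and to choice; question attends to choice
p_q = seq_attention(passage, question)
p_c = seq_attention(passage, choice)
q_c = seq_attention(question, choice)
print(p_q.shape, p_c.shape, q_c.shape)
```

In the full model, the attended vectors are concatenated with the original embeddings and fed into the LSTM encoders.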

How to run

Prerequisite

PyTorch 0.2, 0.3, or 0.4 (you may see a few deprecation warnings, which are safe to ignore)

spaCy >= 2.0

Does not work with Python >= 3.7, where async became a reserved keyword.

A GPU machine is preferred; training on CPU will be much slower.
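One way to satisfy these constraints is a dedicated virtual environment (a sketch, assuming a Python 3.6 interpreter is available; the exact version pins are illustrative):

```shell
# hypothetical environment setup; version pins are illustrative
python3.6 -m venv venv
source venv/bin/activate
pip install "torch==0.4.1" "spacy>=2.0,<3.0"
python -m spacy download en   # English model used by the spaCy tokenizer
```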

Step 1:

Download the preprocessed data from Google Drive or Baidu Cloud Disk, unzip it, and put the files under the data/ folder.

If you prefer to preprocess the dataset yourself, run ./download.sh to download the GloVe embeddings and ConceptNet, then run ./run.sh to preprocess the dataset and train the model.

The official dataset can be downloaded from hidrive.

We transform the original XML data to JSON with xml2json by running: ./xml2json.py --pretty --strip_text -t xml2json -o test-data.json test-data.xml
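If the xml2json tool is unavailable, an equivalent XML-to-JSON conversion can be approximated with the standard library. This is a rough sketch only; the element and attribute names below are illustrative, not the actual SemEval schema:

```python
import json
import xml.etree.ElementTree as ET

def element_to_dict(elem):
    """Recursively convert an XML element into a plain dict."""
    node = {"@" + k: v for k, v in elem.attrib.items()}  # attributes get an @ prefix
    children = list(elem)
    if children:
        for child in children:
            # group repeated child tags into lists
            node.setdefault(child.tag, []).append(element_to_dict(child))
    elif elem.text and elem.text.strip():
        node["#text"] = elem.text.strip()
    return node

xml = '<data><instance id="1"><text>Some passage.</text></instance></data>'
root = ET.fromstring(xml)
doc = {root.tag: element_to_dict(root)}
print(json.dumps(doc, indent=2))
```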

Step 2:

Train the model with python3 src/main.py --gpu 0; accuracy on the development set should reach approximately 83% after 50 epochs.

How to reproduce our competition results

Following the instructions above yields a model with ~81.5% accuracy on the test set. For our official submission (~83.95% accuracy), we used two additional techniques:

  1. Pretrain the model on the RACE dataset for 10 epochs.

  2. Train 9 models with different random seeds and ensemble their outputs.
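Ensembling can be as simple as averaging the per-choice probabilities produced by the independently seeded models and taking the argmax. A minimal sketch (the actual submission scripts are not shown here, so the array shapes and model count are assumptions):

```python
import numpy as np

def ensemble_predict(model_probs):
    """Average per-choice probabilities across models and pick the best choice.

    model_probs: (n_models, n_questions, n_choices) array of probabilities.
    Returns an (n_questions,) array of predicted choice indices.
    """
    avg = np.asarray(model_probs).mean(axis=0)  # average over the ensemble
    return avg.argmax(axis=-1)

# e.g. 3 models (one per random seed), 2 questions, 2 answer choices
probs = np.array([
    [[0.6, 0.4], [0.3, 0.7]],
    [[0.3, 0.7], [0.4, 0.6]],
    [[0.7, 0.3], [0.1, 0.9]],
])
print(ensemble_predict(probs))  # → [0 1]
```

Averaging probabilities rather than majority-voting over hard labels lets confident models outweigh uncertain ones.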
