
ymcui / NLP-Review-Scorer

License: Apache-2.0
Score your NLP paper review

Programming Languages

Jupyter Notebook: 11667 projects
Python: 139335 projects (#7 most used programming language)

Projects that are alternatives to or similar to NLP-Review-Scorer

JointIDSF
BERT-based joint intent detection and slot filling with intent-slot attention mechanism (INTERSPEECH 2021)
Stars: ✭ 55 (+120%)
Mutual labels:  bert
PromptPapers
Must-read papers on prompt-based tuning for pre-trained language models.
Stars: ✭ 2,317 (+9168%)
Mutual labels:  bert
hard-label-attack
Natural Language Attacks in a Hard Label Black Box Setting.
Stars: ✭ 26 (+4%)
Mutual labels:  bert
KitanaQA
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (+132%)
Mutual labels:  bert
sticker2
Further developed as SyntaxDot: https://github.com/tensordot/syntaxdot
Stars: ✭ 14 (-44%)
Mutual labels:  bert
wechsel
Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
Stars: ✭ 39 (+56%)
Mutual labels:  bert
banglabert
This repository contains the official release of the model "BanglaBERT" and associated downstream finetuning code and datasets introduced in the paper titled "BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla" accepted in Findings of the Annual Conference of the North American Chap…
Stars: ✭ 186 (+644%)
Mutual labels:  bert
Kevinpro-NLP-demo
All NLP you Need Here. Personal implementations of some fun NLP demos; currently includes PyTorch implementations of 13 NLP applications.
Stars: ✭ 117 (+368%)
Mutual labels:  bert
gender-unbiased BERT-based pronoun resolution
Source code for the ACL workshop paper and Kaggle competition by Google AI team
Stars: ✭ 42 (+68%)
Mutual labels:  bert
NLP-paper
🎨🎨 NLP (natural language processing) tutorials 🎨🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-8%)
Mutual labels:  bert
SA-BERT
CIKM 2020: Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots
Stars: ✭ 71 (+184%)
Mutual labels:  bert
diversity-index
A curated Diversity-Index of grants, scholarships, and FA that encourage diversity in STEM fields, aimed at half the world's population: women!
Stars: ✭ 60 (+140%)
Mutual labels:  conference
NAG-BERT
[EACL'21] Non-Autoregressive Text Generation with Pretrained Language Models
Stars: ✭ 47 (+88%)
Mutual labels:  bert
Romanian-Transformers
This repo is the home of Romanian Transformers.
Stars: ✭ 60 (+140%)
Mutual labels:  bert
AnnA Anki neuronal Appendix
Using machine learning on your anki collection to enhance the scheduling via semantic clustering and semantic similarity
Stars: ✭ 39 (+56%)
Mutual labels:  bert
threads
THREADS Conference Archive
Stars: ✭ 17 (-32%)
Mutual labels:  conference
speakerline
Showcasing speakers' proposals and timelines in an effort to demystify the CFP process and help new speakers get started.
Stars: ✭ 57 (+128%)
Mutual labels:  conference
roberta-wwm-base-distill
A distilled RoBERTa-wwm-base model, distilled from RoBERTa-wwm-large.
Stars: ✭ 61 (+144%)
Mutual labels:  bert
sleepytimeconference
The conference that comes together while you sleep.
Stars: ✭ 17 (-32%)
Mutual labels:  conference
Cross-Lingual-MRC
Cross-Lingual Machine Reading Comprehension (EMNLP 2019)
Stars: ✭ 66 (+164%)
Mutual labels:  bert

NLP Review Scorer

Disclaimer: This is only a toy. You should still take your rebuttal seriously, regardless of the scores given below. I wish you good luck with your paper submission!

Also, since the notebook runs under YOUR CONTROL, please rest assured that your review won't be recorded in any form; I have no access to it.

I know some of you have been thinking about how to convert a paper review into a numerical score. Yes, the time has come.

In this notebook, you will be able to convert your paper review into an overall score (hopefully in the range 1~5) as well as a reviewer confidence score.

In my own experience, the prediction of reviewer confidence is not that accurate.

News

July 12, 2019: A new model trained on 5.7K reviews is available. It seems to be more accurate.

July 11, 2019: Initial version released, trained on 3K reviews.

Quick Introduction

The model is trained on real reviews from the PeerRead dataset as well as in-house collected reviews. Note that we only include openly accessible reviews; private reviews without author permission are not included. The implementation is based on run_classifier.py in the BERT repository, with slight modifications.

As the review data is rather private, I won't be able to release it.
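
For intuition, here is a minimal sketch of this kind of setup. It is not the released code: it uses the Hugging Face transformers library instead of the original BERT repository, and treating the two scores (recommendation and confidence) as a two-output regression head is my assumption about the design.

```python
# Minimal sketch (not the released training code): fine-tune BERT as a
# two-output regressor mapping review text to (recommendation, confidence).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# problem_type="regression" makes the model train with MSE loss, so the
# two "labels" become two real-valued score outputs.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2, problem_type="regression"
)

review = "The paper is well written and the evaluation is sound."
inputs = tokenizer(review, truncation=True, max_length=512, return_tensors="pt")
targets = torch.tensor([[4.0, 3.5]])  # made-up (recommendation, confidence)

outputs = model(**inputs, labels=targets)
outputs.loss.backward()   # an optimizer step would follow during training
print(outputs.logits)     # at inference time these are the predicted scores
```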

Prerequisites

How-To

  1. Copy (no need to download) one of the following models to your Google Drive.

     Model        Training Data  MAE @ Dev  Link
     v2 (latest)  5.7k reviews   0.35       Google Drive
     v1           3k reviews     0.5        Google Drive

  2. Then, go to Google Colab for further instructions; a rough sketch of the first steps follows below.
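
As an illustration of what the first cells of that Colab notebook do, here is a minimal sketch. The checkpoint path is a placeholder: the actual folder name depends on where you copied the model in step 1, and only the drive.mount() call is a fixed part of the Colab API.

```python
# Run inside Google Colab: mount your Drive so the notebook can read the
# model checkpoint copied in step 1.
from google.colab import drive

drive.mount('/content/drive')

# Placeholder path: adjust to wherever the model folder lives in your Drive.
MODEL_DIR = '/content/drive/MyDrive/nlp-review-scorer-v2'
# The notebook's remaining cells load the checkpoint from MODEL_DIR and
# score the review text you paste in.
```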

Sample Output (v2 version)

Note that in real situations, your input review will be much longer than these examples!

***********REVIEW**************
This is a very good paper, outstanding paper, brilliant paper.
I have never seen such a good paper before.
It was well-written and the models are novel.
The evaluations are sound and the results achieve state-of-the-art performance.
It should be definitely accepted or I will be angry.
***********SCORE***************
Paper	Recommendation	Confidence
EMNLP	4.5141506	3.8331783
********************************

***********REVIEW**************
The paper was rather bad that I don't want to see it again.
The idea was trivial and the evaluations are not convincing to me at all.
We should reject this paper or I won't review for this venue in the future.
***********SCORE***************
Paper	Recommendation	Confidence
EMNLP	1.3770846	4.0270653
********************************
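
As an aside, the banner layout above is easy to reproduce once you have the two predicted floats; a small illustrative sketch (print_report is my own helper name, not part of the notebook):

```python
# Print predictions in the REVIEW/SCORE banner format shown above.
def print_report(review: str, recommendation: float, confidence: float) -> None:
    print("***********REVIEW**************")
    print(review)
    print("***********SCORE***************")
    print("Paper\tRecommendation\tConfidence")
    print(f"EMNLP\t{recommendation:.7f}\t{confidence:.7f}")
    print("********************************")

# Example with made-up scores:
print_report("A solid paper with convincing experiments.", 4.2, 3.9)
```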

Disclaimer

This is not a product by Joint Laboratory of HIT and iFLYTEK Research (HFL).

Acknowledgement

I personally thank Google Colab for providing free computing resources for researchers.

Issue

If there is any problem, please submit a GitHub Issue.
