
howardhsu / Bert For Rrc Absa

License: Apache-2.0
code for our NAACL 2019 paper: "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis"


Projects that are alternatives of or similar to Bert For Rrc Absa

sentiment-thermometer
Measure the sentiment towards a word, name or sentence on social networks
Stars: ✭ 56 (-80.42%)
Mutual labels:  sentiment-analysis
hashformers
Hashformers is a framework for hashtag segmentation with transformers.
Stars: ✭ 18 (-93.71%)
Mutual labels:  sentiment-analysis
Customer satisfaction analysis
An opinion-mining project based on online homestay UGC data, covering data mining and NLP tasks: data collection, topic extraction, and sentiment analysis. The goal is to overcome the inconsistency between user ratings and review text and to evaluate homestay satisfaction in real time, with online review collection and sentiment visualization. It provides a Baidu Maps POI query entry point for automated batch POI lookups; builds an LDA topic-clustering model over the homestay corpus, using topic center words to derive attribute dictionaries per topic; uses user ratings as labels and litNlp's built-in character-level TextCNN for sentiment analysis, treating the predicted class probability distribution as a sentiment trend; and finally visualizes homestay satisfaction across regions as a POI heat map. See the link for software versions.
Stars: ✭ 262 (-8.39%)
Mutual labels:  sentiment-analysis
ML2017FALL
Machine Learning (EE 5184) in NTU
Stars: ✭ 66 (-76.92%)
Mutual labels:  sentiment-analysis
strtsmrt
Stock price trend prediction with news sentiment analysis using deep learning
Stars: ✭ 63 (-77.97%)
Mutual labels:  sentiment-analysis
laravel-nlp
Laravel wrapper for common NLP tasks
Stars: ✭ 41 (-85.66%)
Mutual labels:  sentiment-analysis
DeepSentiPers
Repository for the experiments described in the paper named "DeepSentiPers: Novel Deep Learning Models Trained Over Proposed Augmented Persian Sentiment Corpus"
Stars: ✭ 17 (-94.06%)
Mutual labels:  sentiment-analysis
Languagecrunch
LanguageCrunch NLP server docker image
Stars: ✭ 281 (-1.75%)
Mutual labels:  sentiment-analysis
sentimental
Sentiment analysis made easy; built on top of solid libraries.
Stars: ✭ 24 (-91.61%)
Mutual labels:  sentiment-analysis
Weibo terminator workflow
Update Version of weibo_terminator, This is Workflow Version aim at Get Job Done!
Stars: ✭ 259 (-9.44%)
Mutual labels:  sentiment-analysis
amazon-rekognition-engagement-meter
The Engagement Meter calculates and shows engagement levels of an audience participating in a meeting
Stars: ✭ 49 (-82.87%)
Mutual labels:  sentiment-analysis
sentR
Simple sentiment analysis framework for R
Stars: ✭ 31 (-89.16%)
Mutual labels:  sentiment-analysis
ai challenger 2018 sentiment analysis
Fine-grained Sentiment Analysis of User Reviews --- AI CHALLENGER 2018
Stars: ✭ 16 (-94.41%)
Mutual labels:  sentiment-analysis
TwEater
A Python Bot for Scraping Conversations from Twitter
Stars: ✭ 16 (-94.41%)
Mutual labels:  sentiment-analysis
Twitter Sent Dnn
Deep Neural Network for Sentiment Analysis on Twitter
Stars: ✭ 270 (-5.59%)
Mutual labels:  sentiment-analysis
NRC-Persian-Lexicon
NRC Word-Emotion Association Lexicon
Stars: ✭ 30 (-89.51%)
Mutual labels:  sentiment-analysis
neuralnets-semantics
Word semantics Deep Learning with Vanilla Python, Keras, Theano, TensorFlow, PyTorch
Stars: ✭ 15 (-94.76%)
Mutual labels:  sentiment-analysis
Bertweet
BERTweet: A pre-trained language model for English Tweets (EMNLP-2020)
Stars: ✭ 282 (-1.4%)
Mutual labels:  sentiment-analysis
Introduction Datascience Python Book
Introduction to Data Science: A Python Approach to Concepts, Techniques and Applications
Stars: ✭ 275 (-3.85%)
Mutual labels:  sentiment-analysis
Stock Analysis
Regression, Scrapers, and Visualization
Stars: ✭ 255 (-10.84%)
Mutual labels:  sentiment-analysis

BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis

code for our NAACL 2019 paper "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis", COLING 2020 paper "Understanding Pre-trained BERT for Aspect-based Sentiment Analysis" and (draft code of) Findings of EMNLP 2020 "DomBERT: Domain-oriented Language Model for Aspect-based Sentiment Analysis".

We found that BERT domain post-training (e.g., 1 day of training) is an economical way to boost BERT's performance, because learning general knowledge shared across domains is much harder (e.g., 10 days of training) and, meanwhile, loses the long-tailed domain-specific knowledge.
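Domain post-training here continues BERT's masked-language-model objective on in-domain review text. As a minimal illustration of the standard BERT masking scheme that such training relies on (a sketch of the 80/10/10 rule, not this repo's actual training script), in plain Python:

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, seed=1):
    """BERT-style MLM masking: ~15% of positions are selected as
    prediction targets; of those, 80% become [MASK], 10% are replaced
    by a random vocabulary token, and 10% are left unchanged."""
    rng = random.Random(seed)
    masked = list(tokens)
    labels = [None] * len(tokens)  # None = position not predicted
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok  # model must recover the original token
            roll = rng.random()
            if roll < 0.8:
                masked[i] = "[MASK]"
            elif roll < 0.9:
                masked[i] = rng.choice(vocab)
            # else: keep the original token unchanged
    return masked, labels

tokens = "the retina display on this laptop is great".split()
vocab = ["good", "bad", "screen", "battery"]  # toy vocabulary for illustration
masked, labels = mask_tokens(tokens, vocab)
```

Post-training simply runs this objective over a large review corpus starting from the released BERT weights, so the model picks up domain vocabulary (e.g., "retina display") without retraining from scratch.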

News

Code base for "Understanding Pre-trained BERT for Aspect-based Sentiment Analysis" is released.
Code base on huggingface transformers is under transformers, with more cross-domain models.
Preprocessing of ABSA XMLs is organized into a separate repo.
Want post-trained models for other review domains? Check out a cross-domain review BERT or download from HERE.
A conversational dataset for RRC can be found here.
If you only care about ASC, a more formal code base can be found in a similar repo focusing on ASC. Feedback on missing instructions is welcome.

Problem to Solve

We focus on 3 review-based tasks: review reading comprehension (RRC), aspect extraction (AE) and aspect sentiment classification (ASC).

RRC: given a question ("how is the retina display ?") and a review ("The retina display is great."), find an answer span ("great") within that review;

AE: given a review sentence ("The retina display is great."), find the aspects ("retina display");

ASC: given an aspect ("retina display") and a review sentence ("The retina display is great."), detect the polarity of that aspect (positive).

E2E-ABSA: the combination of AE and ASC as a single sequence labeling task.

We also study how a BERT model pre-trained on reviews can be prepared for these tasks.
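All of these tasks share BERT's two-segment input format and differ mainly in what is predicted. As a hedged sketch of the input and label conventions (whitespace tokenization and these helper names are illustrative assumptions, not taken from the repo's code):

```python
def rrc_input(question, review):
    # RRC is extractive QA over a review: [CLS] question [SEP] review [SEP];
    # the model predicts start/end positions of the answer span in the review.
    return ["[CLS]"] + question.split() + ["[SEP]"] + review.split() + ["[SEP]"]

def asc_input(aspect, sentence):
    # ASC pairs the aspect with the sentence: [CLS] aspect [SEP] sentence [SEP];
    # polarity is classified from the [CLS] representation.
    return ["[CLS]"] + aspect.split() + ["[SEP]"] + sentence.split() + ["[SEP]"]

def ae_labels(sentence_tokens, aspect_tokens):
    # AE is sequence labeling over the sentence: BIO tags mark aspect spans.
    labels = ["O"] * len(sentence_tokens)
    n = len(aspect_tokens)
    for i in range(len(sentence_tokens) - n + 1):
        if sentence_tokens[i:i + n] == aspect_tokens:
            labels[i] = "B"
            for j in range(i + 1, i + n):
                labels[j] = "I"
    return labels

review = "The retina display is great .".split()
print(rrc_input("how is the retina display ?", "The retina display is great ."))
print(ae_labels(review, ["retina", "display"]))  # ['O', 'B', 'I', 'O', 'O', 'O']
```

E2E-ABSA extends the AE tagging scheme by attaching the polarity to each aspect tag (e.g., B-POS, I-POS), so one tagger solves both subtasks.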

Code Base

For the post-training of the NAACL 2019 paper, the code base is split into two versions: transformers/ (instructions) and pytorch-pretrained-bert/ (instructions).

For the analysis of the pre-trained BERT model for ABSA (COLING 2020), see these instructions.

Please check the corresponding instructions for details.

Citation

If you find this work useful, please cite as follows.

@inproceedings{xu_bert2019,
    title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
    author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
    booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
    month = "jun",
    year = "2019",
}
@inproceedings{xu_understanding2020,
    title = "Understanding Pre-trained BERT for Aspect-based Sentiment Analysis",
    author = "Xu, Hu and Shu, Lei and Yu, Philip S. and Liu, Bing",
    booktitle = "The 28th International Conference on Computational Linguistics",
    month = "dec",
    year = "2020",
}
Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].