
yangheng95 / LC-ABSA

License: MIT
Training & Inferring & Reproducing SOTA ABSA models

Programming Languages

python

Projects that are alternatives to or similar to LC-ABSA

sentistrength id
Sentiment Strength Detection in Bahasa Indonesia
Stars: ✭ 32 (-57.89%)
Mutual labels:  sentiment-classification
billboard
🎤 Lyrics/associated NLP data for Billboard's Top 100, 1950-2015.
Stars: ✭ 53 (-30.26%)
Mutual labels:  sentiment-classification
Tensorflow Sentiment Analysis On Amazon Reviews Data
Implementing different RNN models (LSTM,GRU) & Convolution models (Conv1D, Conv2D) on a subset of Amazon Reviews data with TensorFlow on Python 3. A sentiment analysis project.
Stars: ✭ 34 (-55.26%)
Mutual labels:  sentiment-classification
nsmc-zeppelin-notebook
Movie review dataset Word2Vec & sentiment classification Zeppelin notebook
Stars: ✭ 26 (-65.79%)
Mutual labels:  sentiment-classification
NewsMTSC
Target-dependent sentiment classification in news articles reporting on political events. Includes a high-quality data set of over 11k sentences and a state-of-the-art classification model.
Stars: ✭ 54 (-28.95%)
Mutual labels:  sentiment-classification
ML2017FALL
Machine Learning (EE 5184) in NTU
Stars: ✭ 66 (-13.16%)
Mutual labels:  sentiment-classification
HSSC
Code for "A Hierarchical End-to-End Model for Jointly Improving Text Summarization and Sentiment Classification" (IJCAI 2018)
Stars: ✭ 23 (-69.74%)
Mutual labels:  sentiment-classification
Senta
Baidu's open-source Sentiment Analysis System.
Stars: ✭ 1,187 (+1461.84%)
Mutual labels:  sentiment-classification
wink-sentiment
Accurate and fast sentiment scoring of phrases with #hashtags, emoticons :) & emojis 🎉
Stars: ✭ 51 (-32.89%)
Mutual labels:  sentiment-classification
Text Classification Pytorch
Text classification using deep learning models in Pytorch
Stars: ✭ 683 (+798.68%)
Mutual labels:  sentiment-classification
NTUA-slp-nlp
💻Speech and Natural Language Processing (SLP & NLP) Lab Assignments for ECE NTUA
Stars: ✭ 19 (-75%)
Mutual labels:  sentiment-classification
Fast-Dawid-Skene
Code for the algorithms in the paper: Vaibhav B Sinha, Sukrut Rao, Vineeth N Balasubramanian. Fast Dawid-Skene: A Fast Vote Aggregation Scheme for Sentiment Classification. KDD WISDOM 2018
Stars: ✭ 35 (-53.95%)
Mutual labels:  sentiment-classification
hashformers
Hashformers is a framework for hashtag segmentation with transformers.
Stars: ✭ 18 (-76.32%)
Mutual labels:  sentiment-classification
pysenti
Chinese Sentiment Classification Tool. Sentiment polarity classification based on the HowNet, Tsinghua, and BosonNLP sentiment lexicons; easy to extend, provides baseline methods, and works out of the box.
Stars: ✭ 31 (-59.21%)
Mutual labels:  sentiment-classification
Twitter Sentiment Analysis
Sentiment analysis on tweets using Naive Bayes, SVM, CNN, LSTM, etc.
Stars: ✭ 978 (+1186.84%)
Mutual labels:  sentiment-classification
domain-attention
Code for the paper "Domain Attention Model for Multi-Domain Sentiment Classification"
Stars: ✭ 22 (-71.05%)
Mutual labels:  sentiment-classification
KoBERT-nsmc
Naver movie review sentiment classification with KoBERT
Stars: ✭ 57 (-25%)
Mutual labels:  sentiment-classification
Nlp Tutorial
A list of NLP (Natural Language Processing) tutorials
Stars: ✭ 1,188 (+1463.16%)
Mutual labels:  sentiment-classification
Absa Pytorch
Aspect Based Sentiment Analysis, PyTorch Implementations (aspect-based sentiment analysis implemented with PyTorch).
Stars: ✭ 1,181 (+1453.95%)
Mutual labels:  sentiment-classification
Awesome Sentiment Analysis
Repository with everything necessary for sentiment analysis and related areas
Stars: ✭ 459 (+503.95%)
Mutual labels:  sentiment-classification

LCF-based Aspect Polarity Classification (a library of aspect-level sentiment classification models based on the local context focus mechanism)

Training & Inferring & Reproducing SOTA models of ABSA

Aspect-based Sentiment Analysis (GloVe / BERT).

Chinese Aspect-based Sentiment Analysis (Chinese aspect-level sentiment classification)

PyTorch Implementations.

We hope this repository helps you, and we sincerely welcome bug reports and suggestions. If you like this repository, please star it or share it with others.

Code for our paper(s):

Requirement

  • Python 3.7+ (recommended)
  • PyTorch >= 1.0
  • transformers (pip install transformers or conda install transformers; a quick version check is sketched below)
  • Set batch_size=8 or max_seq_len=40 if an out-of-memory error occurs.
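A quick way to confirm your environment meets these requirements is to print the versions of the core dependencies. The sketch below only assumes the packages listed above are installed:

```python
# Minimal environment check for the requirements listed above.
import sys

import torch
import transformers

print("Python      :", sys.version.split()[0])   # 3.7+ recommended
print("PyTorch     :", torch.__version__)        # >= 1.0 required
print("transformers:", transformers.__version__)
```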

Before Training

  • Download the GloVe word vectors if you want to use the GloVe-based models.
  • Download the domain-adapted BERT if you want to use state-of-the-art BERT-based models.
  • Set use_bert_spc=True to use the BERT-SPC input format and improve model performance.
  • Set use_dual_bert=True to use dual BERTs that model the local context and the global context, respectively. BERT-based models need more computational resources, e.g. system memory (see the sketch below).
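The sketch below shows how these switches fit together in a single option object. The parameter names are taken from this README, but the actual option handling in train.py may be organized differently, so treat it as illustrative only:

```python
# Illustrative only: parameter names come from this README; the real
# train.py may expose and group its options differently.
from argparse import Namespace

opt = Namespace(
    use_bert_spc=True,    # use the BERT-SPC input format to improve performance
    use_dual_bert=False,  # True = separate BERTs for local and global context (needs more memory)
    batch_size=8,         # reduce to 8 if an out-of-memory error occurs
    max_seq_len=40,       # likewise, lower this to save memory
)
print(opt)
```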

Model Introduction

This repository provides a variety of APC (aspect polarity classification) models, especially those based on the local context focus (LCF) mechanism, including:

Our LCF-based APC models

Other famous APC models

Phan M H, Ogunbona P O. Modelling context and syntactical features for aspect-based sentiment analysis[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020: 3211-3220.

The following models are forked from ABSA-PyTorch.

Song Y, Wang J, Jiang T, et al. Attentional encoder network for targeted sentiment classification[J]. arXiv preprint arXiv:1902.09314, 2019.

Fan F, Feng Y, Zhao D. Multi-grained attention network for aspect-level sentiment classification[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 3433-3442.

Huang B, Ou Y, Carley K M. Aspect level sentiment classification with attention-over-attention neural networks[C]//International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation. Springer, Cham, 2018: 197-206.

Li X, Bing L, Lam W, et al. Transformation Networks for Target-Oriented Sentiment Classification[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2018: 946-956.

Liu Q, Zhang H, Zeng Y, et al. Content attention model for aspect based sentiment analysis[C]//Proceedings of the 2018 World Wide Web Conference. 2018: 1023-1032.

Chen P, Sun Z, Bing L, et al. Recurrent attention network on memory for aspect sentiment analysis[C]//Proceedings of the 2017 conference on empirical methods in natural language processing. 2017: 452-461.

Tang D, Qin B, Liu T. Aspect Level Sentiment Classification with Deep Memory Network[C]//Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2016: 214-224.

Ma D, Li S, Zhang X, et al. Interactive attention networks for aspect-level sentiment classification[C]//Proceedings of the 26th International Joint Conference on Artificial Intelligence. 2017: 4068-4074.

Wang Y, Huang M, Zhu X, et al. Attention-based LSTM for aspect-level sentiment classification[C]//Proceedings of the 2016 conference on empirical methods in natural language processing. 2016: 606-615.

Tang D, Qin B, Feng X, et al. Effective LSTMs for Target-Dependent Sentiment Classification[C]//Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. 2016: 3298-3307.

Datasets

  • ACL Twitter dataset
  • Chinese Review Datasets
  • Multilingual dataset (a combination of the above datasets)
  • SemEval-2014
  • SemEval-2015 (From ASGAN)
  • SemEval-2016 (From ASGAN)

Extra Hyperparameters

We list the valid extra hyperparameters for each model for reference (√ = the model uses this parameter, X = not applicable).

| Models    | srd | lcf | LCA | lcp | sigma(σ) | use_bert_spc |
|-----------|-----|-----|-----|-----|----------|--------------|
| BERT-BASE | X   | X   | X   | X   | X        | X            |
| BERT-SPC  | X   | X   | X   | X   | X        | X            |
| LCF-GloVe | √   | √   | X   | X   | X        | X            |
| LCF-BERT  | √   | √   | X   | X   | X        | √            |
| LCA-LSTM  | √   | X   | √   | √   | √        | X            |
| LCA-Glove | √   | √   | √   | √   | √        | X            |
| LCA-BERT  | √   | √   | √   | √   | √        | √            |
Although datasets and models can be combined in most scenarios, some combinations are not recommended, such as a Chinese dataset with BERT-base-uncased (an English model), or Chinese data with LCFS-BERT.

Performance of BERT-based Models

The state-of-the-art benchmarks of the ABSA task can be found at NLP-progress (see the SemEval-2014 subtask 4 section). "D", "S" and "A" denote dual-BERT, single-BERT and adapted-BERT, respectively. "N/A" means waiting to test.

| Models              | Laptop14 (acc) | Restaurant14 (acc) | Twitter (acc) | Memory Usage |
|---------------------|----------------|--------------------|---------------|--------------|
| LCF-BERT-CDM (D+A)  | 82.92          | 89.11              | 77.89         | < 8 GB       |
| LCF-BERT-CDW (D+A)  | 82.76          | 89.38              | 77.17         | < 8 GB       |
| LCF-BERT-CDM (S+A)  | 80.72          | 89.22              | 75.72         | < 5.5 GB     |
| LCF-BERT-CDW (S+A)  | 80.88          | 88.57              | 75.58         | < 5.5 GB     |
| LCF-BERT-CDM (S)    | 80.56          | 85.45              | 75.29         | < 5.5 GB     |
| LCF-BERT-CDW (S)    | 80.25          | 85.54              | 76.59         | < 5.5 GB     |
| LCA-BERT (S+A)      | 82.45          | 88.93              | 77.46         | < 5.5 GB     |
| LCA-BERT (S)        | 81.66          | 86.07              | 76.59         | < 5.5 GB     |
| AEN-BERT            | 79.93          | 83.12              | 74.71         | < 6 GB       |

We provide a training log of LCF-BERT based on domain-adapted BERT to guide reproduction. Try other random seeds to explore different results. See domain-adapted-atsc to learn how to train domain-adapted BERT pre-trained models, and place the pre-trained models in bert_pretrained_models.
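Once a domain-adapted checkpoint has been placed under bert_pretrained_models, it can be loaded with the transformers library as in the hedged sketch below; the subfolder name is hypothetical, so point it at whichever checkpoint you actually downloaded:

```python
# Hedged sketch: load a domain-adapted BERT checkpoint from a local folder.
# "bert_pretrained_models/laptop" is a hypothetical path; replace it with the
# directory of the checkpoint downloaded from domain-adapted-atsc.
from transformers import BertModel, BertTokenizer

checkpoint_dir = "bert_pretrained_models/laptop"
tokenizer = BertTokenizer.from_pretrained(checkpoint_dir)
bert = BertModel.from_pretrained(checkpoint_dir)
```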

Training

Train a single model from the command line:

python train.py

or run multiple experiments using a config file:

python train.py --config experiments_apc.json
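The config file presumably holds one entry per experiment. The sketch below writes such a file programmatically; the field names are illustrative (they mirror options mentioned in this README), so check the repository's experiments_apc.json for the exact schema that train.py expects:

```python
# Hypothetical sketch: generate a multi-experiment config file.
# Field names mirror options mentioned in this README and may not match
# the exact schema expected by train.py --config.
import json

experiments = [
    {"model_name": "lcf_bert", "dataset": "laptop", "use_bert_spc": True,
     "batch_size": 8, "max_seq_len": 40},
    {"model_name": "lca_bert", "dataset": "restaurant", "use_bert_spc": True,
     "batch_size": 8, "max_seq_len": 40},
]

with open("experiments_apc.json", "w", encoding="utf-8") as f:
    json.dump(experiments, f, indent=4)
```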

Inferring

We release universal batch inferring of aspect polarity for all listed APC models. Check here and follow the instructions to run batch inferring.

Acknowledgement

This work is based on the repositories of ABSA-PyTorch and pytorch-transformers. Thanks to the authors for their work, and thanks to everyone who offered assistance. Feel free to report any bugs or to start a discussion with us.

Contributions & Bug Reports

This repository is under development, and there may be unknown problems in the code. We welcome your help in making it easier to use and more stable.

Citation

If this repository is helpful to you, please cite our papers:

@article{yang2021multi,
    title={A multi-task learning model for Chinese-oriented aspect polarity classification and aspect term extraction},
    author={Yang, Heng and Zeng, Biqing and Yang, JianHao and Song, Youwei and Xu, Ruyang},
    journal={Neurocomputing},
    volume={419},
    pages={344--356},
    year={2021},
    publisher={Elsevier}
}
@article{zeng2019lcf,
    title={LCF: A Local Context Focus Mechanism for Aspect-Based Sentiment Classification},
    author={Zeng, Biqing and Yang, Heng and Xu, Ruyang and Zhou, Wu and Han, Xuli},
    journal={Applied Sciences},
    volume={9},
    number={16},
    pages={3389},
    year={2019},
    publisher={Multidisciplinary Digital Publishing Institute}
}
@misc{yang2020enhancing,
    title={Enhancing Fine-grained Sentiment Classification Exploiting Local Context Embedding}, 
    author={Heng Yang and Biqing Zeng},
    year={2020},
    eprint={2010.00767},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Related Repositories

ABSA-PyTorch

domain-adapted-atsc

LCFS-BERT
