
BinLiang-NLP / Scon-ABSA

Licence: other
[CIKM 2021] Enhancing Aspect-Based Sentiment Analysis with Supervised Contrastive Learning

Programming Languages

  • HTML
  • Python

Projects that are alternatives of or similar to Scon-ABSA

PBAN-PyTorch
A Position-aware Bidirectional Attention Network for Aspect-level Sentiment Analysis, PyTorch implementation.
Stars: ✭ 33 (+94.12%)
Mutual labels:  sentiment-analysis, aspect-based-sentiment-analysis
MemNet ABSA
No description or website provided.
Stars: ✭ 20 (+17.65%)
Mutual labels:  sentiment-analysis, aspect-based-sentiment-analysis
SA-DL
Sentiment Analysis with Deep Learning models. Implemented with Tensorflow and Keras.
Stars: ✭ 35 (+105.88%)
Mutual labels:  sentiment-analysis, aspect-based-sentiment-analysis
Real Time DataMining Software
Real-time review-mining software for Ctrip/Zhenguo homestay listings: a comprehensive demo covering real-time data collection, data cleaning, structured storage, UGC topic extraction, sentiment analysis, and post-structuring visualization. An opinion-mining project built on online homestay UGC data, covering data-mining and NLP tasks such as data collection, topic extraction, and sentiment analysis. It mainly tackles the mismatch between user ratings and review text, evaluates satisfaction with Ctrip and Meituan online homestays in real time, mines and visualizes the online UGC along multiple dimensions, and links to a demo video.
Stars: ✭ 43 (+152.94%)
Mutual labels:  sentiment-analysis
SoCo
[NeurIPS 2021 Spotlight] Aligning Pretraining for Detection via Object-Level Contrastive Learning
Stars: ✭ 125 (+635.29%)
Mutual labels:  contrastive-learning
char-cnn-text-classification-tensorflow
Simple Convolutional Neural Network (CNN) for sentiment classification of Chinese movie reviews.
Stars: ✭ 55 (+223.53%)
Mutual labels:  sentiment-analysis
phone-reviews-nlp
Modern NLP and sentiment analysis on amazon mobile phone reviews
Stars: ✭ 21 (+23.53%)
Mutual labels:  sentiment-analysis
HEAPUtil
Code for the RA-L (IROS) 2021 paper "A Hierarchical Dual Model of Environment- and Place-Specific Utility for Visual Place Recognition"
Stars: ✭ 46 (+170.59%)
Mutual labels:  contrastive-learning
pytorch-sentiment-analysis
char-rnn implementation for sentiment analysis on twitter data
Stars: ✭ 32 (+88.24%)
Mutual labels:  sentiment-analysis
Dataset-Sentimen-Analisis-Bahasa-Indonesia
This repository is a collection of datasets for Indonesian-language sentiment analysis. If you use the datasets in this repository for your research, please cite the journal articles associated with each dataset. The available datasets have been used in several studies whose results have been published…
Stars: ✭ 38 (+123.53%)
Mutual labels:  sentiment-analysis
mirror-bert
[EMNLP 2021] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels.
Stars: ✭ 56 (+229.41%)
Mutual labels:  contrastive-learning
SentimentAnalysis
Sentiment polarity analysis based on Sina Weibo data.
Stars: ✭ 43 (+152.94%)
Mutual labels:  sentiment-analysis
rosette-elasticsearch-plugin
Document Enrichment plugin for Elasticsearch
Stars: ✭ 25 (+47.06%)
Mutual labels:  sentiment-analysis
sentiment-analysis-imdb
This is a classifier focused on sentiment analysis of movie reviews
Stars: ✭ 11 (-35.29%)
Mutual labels:  sentiment-analysis
twitter mining
Twitter Mining in Java
Stars: ✭ 25 (+47.06%)
Mutual labels:  sentiment-analysis
GCL
List of Publications in Graph Contrastive Learning
Stars: ✭ 25 (+47.06%)
Mutual labels:  contrastive-learning
VarCLR
VarCLR: Variable Semantic Representation Pre-training via Contrastive Learning
Stars: ✭ 30 (+76.47%)
Mutual labels:  contrastive-learning
AirBnbPricePrediction
Training and Testing a Set of Machine Learning/Deep Learning Models to Predict Airbnb Prices for NYC
Stars: ✭ 47 (+176.47%)
Mutual labels:  sentiment-analysis
amazon-reviews
Sentiment Analysis & Topic Modeling with Amazon Reviews
Stars: ✭ 26 (+52.94%)
Mutual labels:  sentiment-analysis
COVID-19-Tweet-Classification-using-Roberta-and-Bert-Simple-Transformers
Rank 1 / 216
Stars: ✭ 24 (+41.18%)
Mutual labels:  sentiment-analysis

Enhancing Aspect-Based Sentiment Analysis with Supervised Contrastive Learning

This repository contains the PyTorch implementation of the paper Enhancing Aspect-Based Sentiment Analysis with Supervised Contrastive Learning.

Enhancing Aspect-Based Sentiment Analysis with Supervised Contrastive Learning
Bin Liang#, Wangda Luo#, Xiang Li, Lin Gui, Min Yang, Xiaoqi Yu, and Ruifeng Xu*. Proceedings of CIKM 2021

Please cite our paper and kindly give this repository a star if you use this code.

For any question, please email [email protected] or [email protected].

Model Overview

(Figure: model architecture overview)
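
The paper's key ingredient is a supervised contrastive objective used alongside cross-entropy. For orientation only, here is a minimal PyTorch sketch of a generic supervised contrastive loss (in the spirit of Khosla et al., 2020); it is not taken from this repository, and the function name sup_con_loss and the temperature default are illustrative assumptions.

import torch
import torch.nn.functional as F

def sup_con_loss(features, labels, temperature=0.07):
    # features: (batch, dim) representations; labels: (batch,) integer class ids
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature                  # pairwise cosine similarities
    off_diag = ~torch.eye(len(labels), dtype=torch.bool, device=features.device)
    # positives share a label (e.g. sentiment polarity, or the aspect-dependent /
    # aspect-invariant pattern), excluding the anchor itself
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & off_diag
    sim = sim.masked_fill(~off_diag, float('-inf'))              # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)   # log-softmax over the batch
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                                       # skip anchors with no positives
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(pos_log_prob[valid] / pos_counts[valid]).mean()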

Requirement

  • pytorch >= 0.4.0
  • numpy >= 1.13.3
  • sklearn
  • python 3.6 / 3.7
  • CUDA 9.0
  • transformers

To install requirements, run pip install -r requirements.txt.
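
If your copy does not include requirements.txt, a minimal file matching the list above might look like the following (the version pins mirror the list; they are not a tested configuration):

torch>=0.4.0
numpy>=1.13.3
scikit-learn
transformers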

Dataset

You can directly use the processed datasets located in datasets/.
Note that you first need to extract them: unzip datasets.zip

├── data
│   │   ├── semeval14(res14,laptop14)
│   │   ├── semeval15(res15)
│   │   ├── semeval16(res16)
│   │   ├── MAMS

Datasets whose names contain cl_2X3 were obtained after label augmentation (see the label enhancement step under Preparation). Each example consists of:
Context
Aspect
Aspect sentiment label (-1: negative; 0: neutral; 1: positive)
Contrastive label (aspect-dependent / aspect-invariant)
Contrastive aspect label (0: negative; 1: neutral; 2: positive)
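
The exact on-disk layout is not documented here, so the reader below is only a hypothetical sketch: it assumes each example occupies five consecutive non-empty lines in the order listed above, and read_cl_examples and its field names are illustrative, not part of data_utils.py.

def read_cl_examples(path):
    # Hypothetical reader: assumes one example per 5 lines, in the order
    # context / aspect / sentiment label / contrastive label / contrastive aspect label.
    examples = []
    with open(path, encoding='utf-8') as f:
        lines = [line.strip() for line in f if line.strip()]
    for i in range(0, len(lines), 5):
        context, aspect, senti, con_label, con_aspect = lines[i:i + 5]
        examples.append({
            'context': context,
            'aspect': aspect,
            'sentiment': int(senti),                 # -1 / 0 / 1
            'contrastive': con_label,                # aspect-dependent / aspect-invariant
            'contrastive_aspect': int(con_aspect),   # 0 / 1 / 2
        })
    return examples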

Preparation

a) Download the PyTorch version of the pre-trained bert-base-uncased model and vocabulary from the link provided by HuggingFace (https://github.com/huggingface/transformers), then set the --bert_model_dir parameter to the directory containing the BERT model.
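
For example, one way to fetch and cache the model locally with the transformers library (the target directory ./bert-base-uncased is only an example value for --bert_model_dir):

from transformers import BertModel, BertTokenizer

# download the pre-trained weights and vocabulary, then save them to a local directory
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
tokenizer.save_pretrained('./bert-base-uncased')
model.save_pretrained('./bert-base-uncased')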

b) Label enhancement. For new data, additional supervision signals need to be obtained through label enhancement (a rough sketch appears after this list):
    i) Overfit BERT on the training set until its accuracy exceeds 97%;
    ii) Replace the aspect with another aspect or a mask token, and obtain the sentiment label predicted for the replaced aspect;
    iii) Check whether the predicted label matches the original label, and assign the aspect-dependent / aspect-invariant label to the example accordingly.

c) Data-loading defaults are defined in data_utils.py; edit this file if you want to change how data are fed into the model.
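
As referenced in step b), the following is a rough outline of the label enhancement procedure; predict_polarity, the masking strategy, and the output fields are assumptions for illustration and do not mirror the repository's actual code.

def enhance_labels(examples, predict_polarity, mask_token='[MASK]'):
    # examples: list of dicts with 'context', 'aspect', 'sentiment' (gold label)
    # predict_polarity: a BERT classifier overfitted on the training set (step i)
    enhanced = []
    for ex in examples:
        # step ii: replace the aspect term and re-predict its sentiment
        masked_context = ex['context'].replace(ex['aspect'], mask_token)
        predicted = predict_polarity(masked_context, mask_token)
        # step iii: if the prediction still matches the gold label, the sentiment
        # does not depend on the aspect; otherwise it is aspect-dependent
        tag = 'aspect-invariant' if predicted == ex['sentiment'] else 'aspect-dependent'
        enhanced.append({**ex, 'contrastive': tag})
    return enhanced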

Training

  1. Adjust the parameters and set up the experiment.
    --model: model to use (bert_spc_cl)
    --dataset: dataset to use (acl14, res14, laptop14, res15, res16, mams, and so on)
    --num_epoch: number of training epochs
    --is_test: 0 trains the model, 1 runs verification only
    --type: task type (normal, cl2, cl6, cl2X3)
  2. Run the shell script to start the program.
./run.sh

The run.sh script contains:


CUDA_VISIBLE_DEVICES=3 \
  python train_cl.py \
  --model_name bert_spc_cl \
  --dataset cl_mams_2X3 \
  --num_epoch 50 \
  --is_test 0 \
  --type cl2X3

For the --dataset option, you can choose from: "cl_acl2014_2X3", "cl_res2014_2X3", "cl_laptop2014_2X3", "cl_res2015_2X3", "cl_res2016_2X3", "cl_mams_2X3".
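
For example, switching the same script to the Restaurant 2014 data only requires changing the --dataset value (the GPU index here is a placeholder):

CUDA_VISIBLE_DEVICES=0 \
  python train_cl.py \
  --model_name bert_spc_cl \
  --dataset cl_res2014_2X3 \
  --num_epoch 50 \
  --is_test 0 \
  --type cl2X3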

Testing

./run_test.sh
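
run_test.sh is not reproduced here; given that --is_test 1 switches the script to verification, it presumably mirrors run.sh with that flag flipped. A guessed equivalent (unverified against the repository) would be:

CUDA_VISIBLE_DEVICES=0 \
  python train_cl.py \
  --model_name bert_spc_cl \
  --dataset cl_mams_2X3 \
  --is_test 1 \
  --type cl2X3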

Citation

@inproceedings{10.1145/3459637.3482096,
author = {Liang, Bin and Luo, Wangda and Li, Xiang and Gui, Lin and Yang, Min and Yu, Xiaoqi and Xu, Ruifeng},
title = {Enhancing Aspect-Based Sentiment Analysis with Supervised Contrastive Learning},
year = {2021},
isbn = {9781450384469},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3459637.3482096},
doi = {10.1145/3459637.3482096},
abstract = {Most existing aspect-based sentiment analysis (ABSA) research efforts are devoted to extracting the aspect-dependent sentiment features from the sentence towards the given aspect. However, it is observed that about 60% of the testing aspects in commonly used public datasets are unknown to the training set. That is, some sentiment features carry the same polarity regardless of the aspects they are associated with (aspect-invariant sentiment), which props up the high accuracy of existing ABSA models when inevitably inferring sentiment polarities for those unknown testing aspects. Therefore, in this paper, we revisit ABSA from a novel perspective by deploying a novel supervised contrastive learning framework to leverage the correlation and difference among different sentiment polarities and between different sentiment patterns (aspect-invariant/-dependent). This allows improving the sentiment prediction for (unknown) testing aspects in the light of distinguishing the roles of valuable sentiment features. Experimental results on 5 benchmark datasets show that our proposed approach substantially outperforms state-of-the-art baselines in ABSA. We further extend existing neural network-based ABSA models with our proposed framework and achieve improved performance.},
booktitle = {Proceedings of the 30th ACM International Conference on Information & Knowledge Management},
pages = {3242–3247},
numpages = {6},
keywords = {sentiment analysis, contrastive learning, aspect sentiment analysis},
location = {Virtual Event, Queensland, Australia},
series = {CIKM '21}
}

or

@inproceedings{liang2021enhancing,
  title={Enhancing Aspect-Based Sentiment Analysis with Supervised Contrastive Learning},
  author={Liang, Bin and Luo, Wangda and Li, Xiang and Gui, Lin and Yang, Min and Yu, Xiaoqi and Xu, Ruifeng},
  booktitle={Proceedings of the 30th ACM International Conference on Information \& Knowledge Management},
  pages={3242--3247},
  year={2021}
}

Credits

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].