tae898 / erc

License: MIT
Emotion recognition in conversation

Programming Languages

Jupyter Notebook
Python
Makefile

Projects that are alternatives to or similar to erc

Clue
Chinese Language Understanding Evaluation benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
Stars: ✭ 2,425 (+7032.35%)
Mutual labels:  transformers, bert, roberta
converse
Conversational text Analysis using various NLP techniques
Stars: ✭ 147 (+332.35%)
Mutual labels:  transformers, emotion-recognition, huggingface
HugsVision
HugsVision is an easy-to-use HuggingFace wrapper for state-of-the-art computer vision
Stars: ✭ 154 (+352.94%)
Mutual labels:  transformers, bert, huggingface
Text-Summarization
Abstractive and Extractive Text summarization using Transformers.
Stars: ✭ 38 (+11.76%)
Mutual labels:  transformers, bert, roberta
policy-data-analyzer
Building a model to recognize incentives for landscape restoration in environmental policies from Latin America, the US and India. Bringing NLP to the world of policy analysis through an extensible framework that includes scraping, preprocessing, active learning and text analysis pipelines.
Stars: ✭ 22 (-35.29%)
Mutual labels:  transformers, bert, huggingface
COVID-19-Tweet-Classification-using-Roberta-and-Bert-Simple-Transformers
Rank 1 / 216
Stars: ✭ 24 (-29.41%)
Mutual labels:  transformers, bert, roberta
chef-transformer
Chef Transformer 🍲.
Stars: ✭ 29 (-14.71%)
Mutual labels:  transformers, huggingface
ganbert-pytorch
Enhancing BERT training with semi-supervised Generative Adversarial Networks in PyTorch/HuggingFace
Stars: ✭ 60 (+76.47%)
Mutual labels:  bert, huggingface
ParsBigBird
Persian BERT for long-range sequences
Stars: ✭ 58 (+70.59%)
Mutual labels:  transformers, bert
robo-vln
PyTorch code for the ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
Stars: ✭ 34 (+0%)
Mutual labels:  transformers, bert
golgotha
Contextualised embeddings and language modelling with BERT and friends in R
Stars: ✭ 39 (+14.71%)
Mutual labels:  transformers, bert
TorchBlocks
A PyTorch-based toolkit for natural language processing
Stars: ✭ 85 (+150%)
Mutual labels:  transformers, bert
Self-Supervised-Embedding-Fusion-Transformer
The code for our IEEE ACCESS (2020) paper Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion.
Stars: ✭ 57 (+67.65%)
Mutual labels:  bert, emotion-recognition
Pytorch-NLU
Pytorch-NLU: a Chinese text classification and sequence labeling toolkit. It supports multi-class and multi-label classification of long and short Chinese texts, as well as sequence labeling tasks such as Chinese named entity recognition, part-of-speech tagging, and word segmentation.
Stars: ✭ 151 (+344.12%)
Mutual labels:  transformers, bert
GoEmotions-pytorch
PyTorch implementation of GoEmotions 😍😢😱
Stars: ✭ 95 (+179.41%)
Mutual labels:  transformers, bert
Tianchi2020ChineseMedicineQuestionGeneration
2020 Alibaba Cloud Tianchi Big Data Competition: Traditional Chinese Medicine Literature Question Generation Challenge
Stars: ✭ 20 (-41.18%)
Mutual labels:  bert, roberta
KAREN
KAREN: Unifying Hatespeech Detection and Benchmarking
Stars: ✭ 18 (-47.06%)
Mutual labels:  bert, huggingface
text2text
Text2Text: Cross-lingual natural language processing and generation toolkit
Stars: ✭ 188 (+452.94%)
Mutual labels:  transformers, bert
text2class
Multi-class text categorization using state-of-the-art pre-trained contextualized language models, e.g. BERT
Stars: ✭ 15 (-55.88%)
Mutual labels:  transformers, bert
KLUE
📖 Korean NLU Benchmark
Stars: ✭ 420 (+1135.29%)
Mutual labels:  bert, roberta

Emotion Recognition in Conversation (ERC)

At the moment, we only use the text modality to classify the emotion of each utterance. The experiments were carried out on two datasets, MELD and IEMOCAP.
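
As a rough illustration of what this enables, a trained checkpoint can classify a single utterance with the Hugging Face pipeline API. The sketch below assumes the checkpoints are published on the Hugging Face Hub under the model ID tae898/emoberta-base; both the ID and the printed output are assumptions, so substitute whichever checkpoint you actually use.

    # Minimal inference sketch (text modality only). The model ID is an
    # assumption; point it at whichever trained checkpoint you have.
    from transformers import pipeline

    classifier = pipeline("text-classification", model="tae898/emoberta-base")

    # Classify the emotion of one utterance.
    print(classifier("I can't believe you did that for me, thank you!"))
    # e.g. [{'label': 'joy', 'score': 0.98}]  (illustrative output)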

Prerequisites

RoBERTa training

First configure the hyperparameters and the dataset in train-erc-text.yaml, and then run the commands below in this directory. I recommend running them in a virtualenv.

pip install -r requirements.txt
python train-erc-text.py

This will subsequently call train-erc-text-hp.py and train-erc-text-full.py.
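
Before launching, it can help to sanity-check the configuration. Below is a minimal sketch using PyYAML; the keys mentioned in the comment are illustrative assumptions, not the actual schema of train-erc-text.yaml.

    # Load and inspect the training configuration before running the scripts.
    import yaml

    with open("train-erc-text.yaml") as f:
        config = yaml.safe_load(f)

    # The exact keys depend on the repo; {'DATASET': 'MELD', ...} is a guess.
    print(config)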

Results on the test split (weighted F1 scores)

Model                                              MELD     IEMOCAP
EmoBERTa, no past and future utterances            63.46    56.09
EmoBERTa, only past utterances                     64.55    68.57
EmoBERTa, only future utterances                   64.23    66.56
EmoBERTa, both past and future utterances          65.61    67.42
  (as above, but without speaker names)            65.07    64.02

The numbers above are mean values over five runs with different random seeds.
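
For reference, the metric and the aggregation can be reproduced as follows. This is a minimal sketch using scikit-learn's weighted F1, which averages per-class F1 scores weighted by class support; all labels and scores below are made-up illustrative values, not real results.

    import numpy as np
    from sklearn.metrics import f1_score

    # Weighted F1 over hypothetical gold/predicted emotion labels.
    y_true = ["joy", "anger", "neutral", "joy"]
    y_pred = ["joy", "neutral", "neutral", "joy"]
    print(f1_score(y_true, y_pred, average="weighted"))

    # Each cell in the table is the mean over five random-seed runs.
    seed_scores = np.array([65.2, 65.9, 65.4, 65.8, 65.7])  # illustrative
    print(round(seed_scores.mean(), 2))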

If you want to see more training and test details, check out ./results/.

If you want to download the trained checkpoints and related artifacts, they are available as a single zip file (it's pretty big).

Troubleshooting

The best way to find and solve problems is to look through the GitHub issues tab. If you can't find what you're looking for, feel free to open an issue. We are pretty responsive.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Run make style && make quality in the root repo directory to ensure code quality
  4. Commit your Changes (git commit -m 'Add some AmazingFeature')
  5. Push to the Branch (git push origin feature/AmazingFeature)
  6. Open a Pull Request

Cite our work

Check out the paper.

@misc{kim2021emoberta,
      title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa}, 
      author={Taewoon Kim and Piek Vossen},
      year={2021},
      eprint={2108.12009},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Authors

Taewoon Kim and Piek Vossen

License

MIT
