yashsmehta / personality-prediction

License: MIT
Experiments for automated personality detection using Language Models and psycholinguistic features on various famous personality datasets including the Essays dataset (Big-Five)

Programming Languages

Python (139,335 projects; the #7 most used programming language)

Projects that are alternatives to or similar to personality-prediction

Protein Sequence Embedding Iclr2019
Source code for "Learning protein sequence embeddings using information from structure" - ICLR 2019
Stars: ✭ 194 (+77.98%)
Mutual labels:  language-model
PLBART
Official code of our work, Unified Pre-training for Program Understanding and Generation [NAACL 2021].
Stars: ✭ 151 (+38.53%)
Mutual labels:  language-model
ChineseNER
All about Chinese NER
Stars: ✭ 241 (+121.1%)
Mutual labels:  bert-fine-tuning
Attention Mechanisms
Implementations for a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with TensorFlow 2.0 and Keras.
Stars: ✭ 203 (+86.24%)
Mutual labels:  language-model
Mead Baseline
Deep-Learning Model Exploration and Development for NLP
Stars: ✭ 238 (+118.35%)
Mutual labels:  language-model
COCO-LM
[NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Stars: ✭ 109 (+0%)
Mutual labels:  language-model
Char Rnn Chinese
Multi-layer Recurrent Neural Networks (LSTM, GRU, RNN) for character-level language models in Torch. Based on the code of https://github.com/karpathy/char-rnn. Supports Chinese and more.
Stars: ✭ 192 (+76.15%)
Mutual labels:  language-model
calm
Context Aware Language Models
Stars: ✭ 29 (-73.39%)
Mutual labels:  language-model
Zeroth
Kaldi-based open-source Korean ASR (speech recognition) project
Stars: ✭ 248 (+127.52%)
Mutual labels:  language-model
Vaaku2Vec
Language Modeling and Text Classification in Malayalam Language using ULMFiT
Stars: ✭ 68 (-37.61%)
Mutual labels:  language-model
Pytorch Nce
The Noise Contrastive Estimation for softmax output written in Pytorch
Stars: ✭ 204 (+87.16%)
Mutual labels:  language-model
Relational Rnn Pytorch
An implementation of DeepMind's Relational Recurrent Neural Networks in PyTorch.
Stars: ✭ 236 (+116.51%)
Mutual labels:  language-model
pd3f
🏭 PDF text extraction pipeline: self-hosted, local-first, Docker-based
Stars: ✭ 132 (+21.1%)
Mutual labels:  language-model
Lingvo
Lingvo
Stars: ✭ 2,361 (+2066.06%)
Mutual labels:  language-model
KB-ALBERT
A Korean ALBERT model specialized for the economic/financial domain, provided by KB Kookmin Bank
Stars: ✭ 215 (+97.25%)
Mutual labels:  language-model
Gpt Scrolls
A collaborative collection of open-source safe GPT-3 prompts that work well
Stars: ✭ 195 (+78.9%)
Mutual labels:  language-model
TF-NNLM-TK
A toolkit for neural language modeling using Tensorflow including basic models like RNNs and LSTMs as well as more advanced models.
Stars: ✭ 20 (-81.65%)
Mutual labels:  language-model
CharLM
Character-aware Neural Language Model implemented by PyTorch
Stars: ✭ 32 (-70.64%)
Mutual labels:  language-model
asr24
24-hour Automatic Speech Recognition
Stars: ✭ 27 (-75.23%)
Mutual labels:  language-model
rnn-theano
RNN(LSTM, GRU) in Theano with mini-batch training; character-level language models in Theano
Stars: ✭ 68 (-37.61%)
Mutual labels:  language-model

Automated Personality Prediction using Pre-Trained Language Models

This repository contains code for the paper Bottom-Up and Top-Down: Predicting Personality with Psycholinguistic and Language Model Features, published at the 2020 IEEE International Conference on Data Mining (ICDM).

It provides a set of experiments, written in TensorFlow and PyTorch, that explore automated personality detection using language models on the Essays dataset (labelled with Big-Five personality traits) and the Kaggle MBTI dataset.

Setup

Pull this repository from GitLab via:

git clone git@gitlab.com:ml-automated-personality-detection/personality.git

Creating a new conda environment is recommended. Then install the PyTorch build (GPU or CPU) appropriate for your setup:

conda create -n mvenv python=3.8
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia

See requirements.txt for the list of dependencies, which can be installed via:

pip install -r requirements.txt

Usage

First, run the LM extractor, which passes the dataset through the language model and stores the embeddings (from all layers) in a pickle file. Creating this 'new dataset' saves a lot of compute time and allows an efficient hyperparameter search for the finetuning network. Before running the code, create a pkl_data folder in the repo folder. All arguments are optional; passing none runs the extractor with the default values.

python LM_extractor.py -dataset_type 'essays' -token_length 512 -batch_size 32 -embed 'bert-base' -op_dir 'pkl_data'
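To sanity-check the extractor output, you can load the pickle afterwards. A minimal sketch, assuming an illustrative file name; the exact name and schema are defined by LM_extractor.py and may differ:

import pickle

# Illustrative file name; LM_extractor.py determines the actual naming scheme.
with open("pkl_data/essays-bert-base.pkl", "rb") as f:
    embeddings = pickle.load(f)

# Inspect the structure before wiring it into a finetuning model.
print(type(embeddings))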

Next, run a finetuning model that takes the extracted features from the pickle file as input and trains on them. We find a shallow MLP to be the best performing model:

python finetune_models/MLP_LM.py
[Results table: language models vs. psycholinguistic features]
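For illustration, here is a minimal sketch of the kind of shallow MLP head that MLP_LM.py trains over the frozen LM features. The class name, hidden size, and training loop are assumptions rather than the repo's exact code; it assumes 768-dimensional bert-base features and five binary Big-Five labels:

import torch
import torch.nn as nn

class MLPHead(nn.Module):
    """Shallow MLP over pre-extracted LM features (illustrative)."""
    def __init__(self, in_dim=768, hidden=50, n_traits=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_traits),
        )

    def forward(self, x):
        return self.net(x)  # raw logits; apply sigmoid at evaluation time

model = MLPHead()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on stand-in data.
features = torch.randn(32, 768)                # a batch of extracted LM features
labels = torch.randint(0, 2, (32, 5)).float()  # binary OCEAN labels
optimizer.zero_grad()
loss = criterion(model(features), labels)
loss.backward()
optimizer.step()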

Predicting personality on unseen text

Follow the steps below to predict personality (e.g. the Big-Five OCEAN traits) for a new text/essay:

  1. You will have to train your model -- for that, first choose your training dataset (e.g. essays).
  2. Extract features for each of the essays by passing it through a language model of your choice (e.g. BERT) by running the LM_extractor.py file. This will create a pickle file containing the training features.
  3. Next, train the finetuning model. Let's say it is a simple MLP (this was the best performing one, as can be seen from Table 2 of the paper). Use the extracted features from the LM to train this model. Here, you can experiment with 1) different models (e.g. SVMs, Attention+RNNs, etc.) and 2) concatenating the corresponding psycholinguistic features for each of the essays.
  4. You will have to write code to save the optimal model parameters after the training is complete.
  5. For the new data, first pass it through the SAME language model feature-extraction pipeline and save the result. Then load your pre-trained model into memory and run it on these extracted features (see the sketch after this list).
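Putting steps 2 and 5 together, here is a minimal sketch of the inference flow. The head class, pooling choice, and file names are assumptions for illustration, not the repo's actual API:

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class MLPHead(nn.Module):
    # Same illustrative head as in the sketch above.
    def __init__(self, in_dim=768, hidden=50, n_traits=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_traits)
        )

    def forward(self, x):
        return self.net(x)

# Feature extraction must mirror the training pipeline exactly
# (same tokenizer, truncation length, layer choice, and pooling).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
lm = BertModel.from_pretrained("bert-base-uncased").eval()

def extract_features(text):
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = lm(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1)                    # mean-pool tokens -> (1, 768)

model = MLPHead()
model.load_state_dict(torch.load("mlp_head.pt"))  # weights saved in step 4 (hypothetical path)
model.eval()

with torch.no_grad():
    logits = model(extract_features("I love meeting new people and trying new things."))
print(torch.sigmoid(logits))  # per-trait probabilities (e.g. in OCEAN order)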

Note: The text pre-processing (e.g. tokenization) applied before passing text through the language model should be the SAME for training and testing.

Running Time

LM_extractor.py

On an RTX 2080 GPU, the -embed 'bert-base' extractor takes about 2m 30s and 'bert-large' about 5m 30s.

On a CPU, the 'bert-base' extractor takes about 25m.

python finetune_models/MLP_LM.py

On an RTX 2080 GPU, running for 15 epochs (with no cross-validation) takes 5s-60s, depending on the MLP architecture.

Literature

Deep Learning Based Personality Prediction [Literature Review] (Springer Artificial Intelligence Review, 2020)

@article{mehta2020recent,
  title={Recent Trends in Deep Learning Based Personality Detection},
  author={Mehta, Yash and Majumder, Navonil and Gelbukh, Alexander and Cambria, Erik},
  journal={Artificial Intelligence Review},
  pages={2313--2339},
  year={2020},
  doi={10.1007/s10462-019-09770-z},
  url={https://link.springer.com/article/10.1007/s10462-019-09770-z},
  publisher={Springer}
}

Language Model Based Personality Prediction (ICDM 2020)

If you find this repo useful for your research, please cite it using the following:

@inproceedings{mehta2020bottom,
  title={Bottom-up and top-down: Predicting personality with psycholinguistic and language model features},
  author={Mehta, Yash and Fatehi, Samin and Kazameini, Amirmohammad and Stachl, Clemens and Cambria, Erik and Eetemadi, Sauleh},
  booktitle={2020 IEEE International Conference on Data Mining (ICDM)},
  pages={1184--1189},
  year={2020},
  organization={IEEE}
}

License

The source code for this project is licensed under the MIT license.
