
dangz90 / Deep-Learning-for-Expression-Recognition-in-Image-Sequences

License: MIT License
The project uses state-of-the-art deep learning on collected data for automatic analysis of emotions.

Programming Languages

Python: 139,335 projects; the #7 most used programming language

Projects that are alternatives to or similar to Deep-Learning-for-Expression-Recognition-in-Image-Sequences

theano-recurrence
Recurrent Neural Networks (RNN, GRU, LSTM) and their Bidirectional versions (BiRNN, BiGRU, BiLSTM) for word & character level language modelling in Theano
Stars: ✭ 40 (+53.85%)
Mutual labels:  lstm, gru
ConvLSTM-PyTorch
ConvLSTM/ConvGRU (Encoder-Decoder) with PyTorch on Moving-MNIST
Stars: ✭ 202 (+676.92%)
Mutual labels:  lstm, gru
automatic-personality-prediction
[AAAI 2020] Modeling Personality with Attentive Networks and Contextual Embeddings
Stars: ✭ 43 (+65.38%)
Mutual labels:  lstm, cnn-keras
lmkit
language models toolkits with hierarchical softmax setting
Stars: ✭ 16 (-38.46%)
Mutual labels:  lstm, gru
sklearn-audio-classification
An in-depth analysis of audio classification on the RAVDESS dataset. Feature engineering, hyperparameter optimization, model evaluation, and cross-validation with a variety of ML techniques and MLP
Stars: ✭ 31 (+19.23%)
Mutual labels:  emotion, emotion-recognition
Emotion and Polarity SO
An emotion classifier of text containing technical content from the SE domain
Stars: ✭ 74 (+184.62%)
Mutual labels:  emotion, emotion-recognition
Manhattan-LSTM
Keras and PyTorch implementations of the MaLSTM model for computing Semantic Similarity.
Stars: ✭ 28 (+7.69%)
Mutual labels:  lstm, gru
tf-ran-cell
Recurrent Additive Networks for Tensorflow
Stars: ✭ 16 (-38.46%)
Mutual labels:  lstm, gru
myDL
Deep Learning
Stars: ✭ 18 (-30.77%)
Mutual labels:  lstm, gru
LSTM-GRU-from-scratch
LSTM, GRU cell implementation from scratch in tensorflow
Stars: ✭ 30 (+15.38%)
Mutual labels:  lstm, gru
Resnet-Emotion-Recognition
Identifies emotion(s) from user facial expressions
Stars: ✭ 21 (-19.23%)
Mutual labels:  emotion, emotion-recognition
dts
A Keras library for multi-step time-series forecasting.
Stars: ✭ 130 (+400%)
Mutual labels:  lstm, gru
RECCON
This repository contains the dataset and the PyTorch implementations of the models from the paper Recognizing Emotion Cause in Conversations.
Stars: ✭ 126 (+384.62%)
Mutual labels:  emotion, emotion-recognition
LearningMetersPoems
Official repo of the article: Yousef, W. A., Ibrahime, O. M., Madbouly, T. M., & Mahmoud, M. A. (2019), "Learning meters of arabic and english poems with recurrent neural networks: a step forward for language understanding and synthesis", arXiv preprint arXiv:1905.05700
Stars: ✭ 18 (-30.77%)
Mutual labels:  lstm, gru
emotion-recognition-GAN
This project is a semi-supervised approach to detect emotions on faces in-the-wild using GAN
Stars: ✭ 20 (-23.08%)
Mutual labels:  emotion, emotion-recognition
ntua-slp-semeval2018
Deep-learning models of NTUA-SLP team submitted in SemEval 2018 tasks 1, 2 and 3.
Stars: ✭ 79 (+203.85%)
Mutual labels:  lstm, emotion-recognition
Trafficflowprediction
Traffic Flow Prediction with Neural Networks (SAEs, LSTM, GRU).
Stars: ✭ 242 (+830.77%)
Mutual labels:  lstm, gru
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+13046.15%)
Mutual labels:  lstm, gru
keras-deep-learning
Various implementations and projects on CNN, RNN, LSTM, GAN, etc
Stars: ✭ 22 (-15.38%)
Mutual labels:  lstm, cnn-keras
SemEval2019Task3
Code for ANA at SemEval-2019 Task 3
Stars: ✭ 41 (+57.69%)
Mutual labels:  emotion, lstm

Deep Learning for Expression Recognition in Image Sequences

Facial expressions convey a great deal of information that can be used to identify emotions, and these expressions vary over time as they are performed. Recognizing certain emotions is a challenging task even for people. This thesis applies machine learning algorithms to recognize emotions in image sequences, using state-of-the-art deep learning on collected data for automatic analysis of emotions. Concretely, it compares current state-of-the-art learning strategies that can handle spatio-temporal data and adapts classical static approaches to deal with image sequences. Extended versions of CNN, 3D CNN, and recurrent approaches are evaluated and compared on two public datasets for universal emotion recognition; their performance is reported and their pros and cons are discussed.
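
As a rough illustration of the recurrent strategy compared in the thesis, the sketch below wraps a small per-frame CNN in Keras' TimeDistributed wrapper and aggregates the resulting frame features with an LSTM. The layer sizes, clip length, and input resolution are illustrative assumptions, not the configuration used in the thesis.

```python
# Hypothetical CNN + LSTM sketch; layer sizes, clip length, and resolution
# are placeholders, not the thesis configuration.
from keras.models import Sequential
from keras.layers import (TimeDistributed, Conv2D, MaxPooling2D,
                          Flatten, LSTM, Dense)

FRAMES, HEIGHT, WIDTH, CHANNELS = 16, 112, 112, 3   # assumed clip shape
NUM_CLASSES = 6                                      # six universal emotions

model = Sequential([
    # Apply the same small CNN to every frame of the clip.
    TimeDistributed(Conv2D(32, (3, 3), activation="relu"),
                    input_shape=(FRAMES, HEIGHT, WIDTH, CHANNELS)),
    TimeDistributed(MaxPooling2D((2, 2))),
    TimeDistributed(Conv2D(64, (3, 3), activation="relu")),
    TimeDistributed(MaxPooling2D((2, 2))),
    TimeDistributed(Flatten()),
    # Aggregate the per-frame features over time with a recurrent layer.
    LSTM(128),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```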

Built With

  • Keras - A high-level neural networks API, written in Python.
  • Tensorflow - An open-source software library for numerical computation using data flow graphs.
  • Theano - A Python library for defining, optimizing, and evaluating mathematical expressions involving multi-dimensional arrays efficiently (a backend-selection sketch follows this list).
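
Both TensorFlow and Theano are listed because the multi-backend Keras releases of that era could run on either; the backend is chosen before Keras is first imported, for instance via the KERAS_BACKEND environment variable. A minimal sketch (recent Keras versions support a different set of backends):

```python
# Select the Keras backend before the first `import keras`.
# With multi-backend Keras (1.x / 2.x era) valid values included
# "tensorflow" and "theano"; recent Keras releases accept a different set.
import os
os.environ["KERAS_BACKEND"] = "tensorflow"   # or "theano" on older Keras

import keras
print(keras.backend.backend())               # confirms which backend is active
```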

Datasets

  • SASE-FE - The SASE-FE real and fake emotion database contains videos of the six typical emotions, each performed both genuinely and faked.
  • Oulu-CASIA - The Oulu-CASIA NIR&VIS facial expression database contains videos of the six typical expressions (a clip-loading sketch follows this list).
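
Both datasets are distributed as videos / image sequences, so each sample has to be turned into a fixed-length clip tensor of shape (frames, height, width, channels) before it can be fed to the models above. A minimal loading sketch, assuming a hypothetical directory-of-JPEG-frames layout and placeholder sizes:

```python
# Minimal clip-loading sketch. The folder layout and sizes are hypothetical;
# adapt them to how SASE-FE / Oulu-CASIA are stored locally.
import glob
import numpy as np
from PIL import Image

def load_clip(frame_dir, num_frames=16, size=(112, 112)):
    """Read frames from `frame_dir`, sampled uniformly to `num_frames`,
    and return an array of shape (num_frames, height, width, 3) in [0, 1]."""
    paths = sorted(glob.glob(f"{frame_dir}/*.jpg"))
    if not paths:
        raise FileNotFoundError(f"no frames found in {frame_dir}")
    # Uniformly sample (or repeat) frames so every clip has the same length.
    idx = np.linspace(0, len(paths) - 1, num_frames).astype(int)
    frames = [np.asarray(Image.open(paths[i]).convert("RGB").resize(size),
                         dtype=np.float32) / 255.0
              for i in idx]
    return np.stack(frames)            # (num_frames, H, W, 3)

# Example (hypothetical path): clip = load_clip("OuluCASIA/P001/Happiness")
```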

Author

License

This project is licensed under the MIT License; see the LICENSE.md file for details.

Acknowledgments

  • The face frontalization [2] preprocessing was performed using Douglas Souza's [dougsouza] implementation.
  • The 3D CNN model is based on Alberto Montes's [albertomontesg] implementation of the C3D model.
  • The CNN model is based on Refik Can Malli's [rcmalli] implementation of VGG-Face (a feature-extraction sketch follows this list).
  • VGG-Face was first introduced by Omkar M. Parkhi, Andrea Vedaldi, and Andrew Zisserman of the University of Oxford.
  • The C3D model was first introduced by Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri of Facebook AI Research and Dartmouth College.
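
For instance, rcmalli's keras_vggface package exposes the VGG-Face weights as a Keras model, and a per-frame feature extractor along the lines below could feed the recurrent models. The pooling choice and preprocessing version here are assumptions, not necessarily what the thesis used.

```python
# Hedged sketch: per-frame VGG-Face features with the keras_vggface package
# (pip install keras_vggface). Pooling and preprocessing choices are assumptions.
import numpy as np
from keras_vggface.vggface import VGGFace
from keras_vggface.utils import preprocess_input

# VGG16-based VGG-Face without the classifier head; global average pooling
# turns each 224x224 face crop into a single feature vector.
extractor = VGGFace(model="vgg16", include_top=False,
                    input_shape=(224, 224, 3), pooling="avg")

def frame_features(frames):
    """frames: float array (num_frames, 224, 224, 3), RGB, 0-255 range."""
    x = preprocess_input(np.array(frames, dtype=np.float32), version=1)
    return extractor.predict(x)        # (num_frames, 512) feature matrix
```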

Bibliography

  • [1] Ofodile, I., Kulkarni, K., Corneanu, C. A., Escalera, S., Baro, X., Hyniewska, S., ... & Anbarjafari, G. (2017). Automatic recognition of deceptive facial expressions of emotion. arXiv preprint arXiv:1707.04061.
  • [2] Hassner, T., Harel, S., Paz, E., & Enbar, R. (2015). Effective face frontalization in unconstrained images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4295-4304).