ValentinRicher / emotion-recognition-GAN

License: MIT
This project is a semi-supervised approach to detecting emotions on faces in-the-wild using a GAN.


Emotion Recognition with semi-supervised GAN

The goal of this work was to combine two models used to describe emotions on faces: Action Units and Valence Arousal.
Action Units are facial muscle movements; combinations of Action Units can be interpreted as emotions. Valence Arousal is a 2D continuous scale where Valence represents how positive or negative a person feels during an emotion and Arousal represents how excited or calm the person is.
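As an illustration of how Action Units combine into emotions, here is a minimal, hypothetical Python sketch. The AU combinations below (e.g. AU6 + AU12 for happiness) follow the standard FACS convention; the exact AU set and emotion labels used by this project may differ.

```python
# A few well-known AU combinations from the Facial Action Coding System (FACS).
EMOTION_FROM_AUS = {
    frozenset({6, 12}): "happiness",      # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",     # inner brow raiser + brow lowerer + lip corner depressor
    frozenset({1, 2, 5, 26}): "surprise", # brow raisers + upper lid raiser + jaw drop
}

def emotion_from_action_units(active_aus):
    """Return the first emotion whose AU combination is contained in the active AUs."""
    active = set(active_aus)
    for combo, emotion in EMOTION_FROM_AUS.items():
        if combo <= active:
            return emotion
    return "unknown"

print(emotion_from_action_units([6, 12]))  # happiness
```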
It is now possible to run the code with the facemotion repository!

Prerequisites

Setup

  1. Clone the project to your environment :

    git clone https://github.com/ValentinRicher/emotion-recognition-GAN.git
    
  2. Create the virtual environment :

  • with virtualenv
    virtualenv <venv-name>
    
  • or with virtualenvwrapper
    mkvirtualenv <venv-name>
    
  3. Activate your virtual environment :
  • with virtualenv
    source <venv-name>/bin/activate
    
  • with virtualenvwrapper
    workon <venv-name>
    
  4. Install the libraries :
  • if you use a GPU (recommended)
    pip install -r gpu-requirements.txt
    
  • if you use a CPU
    pip install -r requirements.txt
    
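Before installing, it is worth confirming that the virtual environment from step 3 is actually active; otherwise pip will install into the global site-packages. A quick stdlib check (not part of the project):

```python
import sys

def in_virtualenv() -> bool:
    """True when running inside a virtualenv/venv.

    Classic virtualenv sets sys.real_prefix; venv (and recent virtualenv
    versions) make sys.prefix differ from sys.base_prefix.
    """
    return hasattr(sys, "real_prefix") or sys.prefix != getattr(sys, "base_prefix", sys.prefix)

if not in_virtualenv():
    print("Warning: no virtual environment active; pip will install globally.")
```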
    

Usage

  • Download the FaceMotion dataset

    python download.py --model xx --img_size yy
    

    This will download the images from the FaceMotion dataset into a ./datasets/facemotion directory (if not already done) and create the h5py files with the correct labels and image sizes.

  • Train the model

    python trainer.py --model xx --img_size yy
    
  • Evaluate the model

    • if you want to test a specific model

      python evaler.py --checkpoint_path ckpt_p
      

      ckpt_p should look like : BOTH-is_32-bs_64-lr_1.00E-04-ur_5-20190217_145915/train_dir/model-201

    • if you want to test the last model saved

      python evaler.py --train_dir tr_d
      

      tr_d should look like : BOTH-is_32-bs_64-lr_1.00E-04-ur_5-20190217_145915/train_dir/

    • if you want to test all the models in train_dir

      python evaler.py --train_dir tr_d --all
      

For the moment, it is only possible to work with 32*32 pixel images because the model architectures for 64*64 and 96*96 images are not ready yet.
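The run-directory names used above encode the hyperparameters (model, image size, batch size, learning rate, update rate, timestamp). A small, hypothetical helper to parse them back out of a checkpoint path:

```python
import re

# Pattern for names like BOTH-is_32-bs_64-lr_1.00E-04-ur_5-20190217_145915
RUN_DIR_RE = re.compile(
    r"(?P<model>[A-Z]+)-is_(?P<img_size>\d+)-bs_(?P<batch_size>\d+)"
    r"-lr_(?P<lr>[\d.E+-]+)-ur_(?P<update_rate>\d+)-(?P<timestamp>\d{8}_\d{6})"
)

def parse_run_dir(name):
    """Extract hyperparameters from a run-directory name; None if it doesn't match."""
    m = RUN_DIR_RE.match(name)
    if m is None:
        return None
    d = m.groupdict()
    d["img_size"] = int(d["img_size"])
    d["batch_size"] = int(d["batch_size"])
    d["lr"] = float(d["lr"])
    d["update_rate"] = int(d["update_rate"])
    return d

print(parse_run_dir("BOTH-is_32-bs_64-lr_1.00E-04-ur_5-20190217_145915"))
```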

Results

[default model] -> model : BOTH / image size : 32 / batch size : 64 / learning rate : 1e-4 / update rate : 5 / 1 000 000 epochs

In the following grid of 1 000 images (20 rows, 50 columns), one image is generated every 1 000 epochs.

Images created by the Generator during training : train fake images

Images created by the Generator during testing : test fake images

We can see that until epoch 400 000, the Generator creates fairly good faces before collapsing.

Real images used for training the Discriminator : train real images

Real images used for testing the Discriminator : test real images

To Do

  • Add metrics for the real or fake images
  • Connect the GAN repo with the dataset repo for automatic downloading
  • Re-organize facemotion.py to the same level as the other .py files
  • Use Google Cloud Platform -> impossible to use a GPU without paying
  • Use Google Colab -> impossible to download the dataset quickly
  • Convert the config file to YAML
  • Add a script to extract the info stored in the event files created for TensorBoard
  • Write a notice explaining the project and how to use it
  • Create an architecture for 64*64 images
  • Create an architecture for 96*96 images
  • Add an early stopping possibility
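The early-stopping item above could be sketched as a small patience-based tracker. This is a generic sketch, not the project's implementation; the class name and parameters are illustrative.

```python
class EarlyStopping:
    """Stop training when the monitored loss has not improved for `patience` checks."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_checks = 0

    def step(self, loss):
        """Record a new validation loss; return True when training should stop."""
        if loss < self.best - self.min_delta:
            self.best = loss
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience

# Usage: feed it the validation loss after each evaluation.
stopper = EarlyStopping(patience=3)
for epoch, val_loss in enumerate([1.0, 0.8, 0.9, 0.9, 0.9]):
    if stopper.step(val_loss):
        print(f"stopping at epoch {epoch}")
        break
```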

Acknowledgments

Parts of the code are adapted from https://github.com/gitlimlab/SSGAN-Tensorflow
