EmoPy

EmoPy is a Python toolkit with deep neural net classes that predict human emotional expression classifications given images of people's faces. The goal of this project is to explore the field of Facial Expression Recognition (FER) using existing public datasets, and to make neural network models that are free, open, easy to research, and easy to integrate into other projects.

[Figure: labeled FER images, from @Chen2014FacialER]

The behavior of the system is highly dependent on the available data, and the developers of EmoPy created and tested the system using only publicly-available datasets.

To get a better grounding in the project you may find these write-ups useful:

We aim to expand our development community, and we are open to suggestions and contributions. Usually these types of algorithms are used commercially, so we want to help open source the best possible version of them in order to improve public access and engagement in this area. Please contact an EmoPy maintainer (see below) to discuss.

Overview

EmoPy includes several modules that are plugged together to build a trained FER prediction model.

  • fermodel.py
  • neuralnets.py
  • dataset.py
  • data_loader.py
  • csv_data_loader.py
  • directory_data_loader.py
  • data_generator.py

The fermodel.py module uses pre-trained models for FER prediction, making it the easiest entry point to get a trained model up and running quickly.
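
For instance, drawing on the FERModel example shown later in this README, getting a prediction can be as short as this (the image path here is a placeholder):

from EmoPy.src.fermodel import FERModel

model = FERModel(['calm', 'anger', 'happiness'], verbose=True)  # choose target emotions
model.predict('path/to/face_image.png')  # prints emotion probabilities for the image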

Each of the modules contains one class, except for neuralnets.py, which has one interface and five subclasses. Each of these subclasses implements a different neural net architecture using the Keras framework with Tensorflow backend, allowing you to experiment and see which one performs best for your needs.

The EmoPy documentation contains detailed information on the classes and their interactions, and an overview of the different neural nets in this project appears below.

Operating Constraints

Commercial FER projects are regularly trained on millions of labeled images drawn from massive private datasets. By contrast, in order to remain free and open source, EmoPy was created to work only with public datasets, which places a major constraint on training for accurate results.

EmoPy was originally created and designed to fulfill the needs of the RIOT project, in which audience members' facial expressions are recorded in a controlled lighting environment.

For these two reasons, EmoPy functions best when the input image:

  • is evenly lit, with relatively few shadows, and/or
  • matches to some extent the style, framing and cropping of images from the training dataset

As of this writing, the best available public dataset we have found is Microsoft FER+, with around 30,000 images. Training on this dataset should yield best results when the input image relates to some extent to the style of the images in the set.

For a deeper analysis of the origin and operation of EmoPy, which will be useful to help evaluate its potential for your needs, please read our full write-up on EmoPy.

Choosing a Dataset

Try out the system using your own dataset or the small sample dataset we provide in the Emopy/examples/image_data subdirectory. The sample dataset will not yield good results due to its small size, but it is a great way to get started.

Predictions ideally perform well on a diversity of datasets, illumination conditions, and subsets of the standard 7 emotion labels (happiness, anger, fear, surprise, disgust, sadness, calm/neutral) seen in FER research. Some good example public datasets are the Extended Cohn-Kanade and Microsoft FER+.

Environment Setup

These instructions were written for macOS, but Python is compatible with multiple operating systems. If you would like to use EmoPy on another OS, please adapt these instructions to your target environment. Let us know how you get on, and we will try to support you and share your results.

Before beginning, if you do not have Homebrew installed run this command to install:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

EmoPy runs using Python 3.6 and up, theoretically on any Python-compatible OS. We tested EmoPy using Python 3.6.6 on OSX.

There are two ways you can install Python 3.6.6:

  1. Directly from the Python website (https://www.python.org/downloads/release/python-366/), or
  2. Using pyenv (https://github.com/pyenv/pyenv):
$ brew install pyenv
$ pyenv install 3.6.6

GraphViz is required for the visualization functions.

brew install graphviz

The next step is to set up a virtual environment. The commands below use Python's built-in venv module, so nothing extra is required; if you prefer the virtualenv package instead, install it with sudo:

sudo pip install virtualenv

Create and activate the virtual environment. Run:

python3.6 -m venv venv

Or if using pyenv:

$ pyenv exec python3.6 -m venv venv

Where the second venv is the name of your virtual environment. To activate, run from the same directory:

source venv/bin/activate

Your terminal command line should now be prefixed with (venv).

(To deactivate the virtual environment run deactivate in the command line. You'll know it has been deactivated when the prefix (venv) disappears.)

Installation

From PyPI

Once the virtual environment is activated, you may install EmoPy using

pip install EmoPy

From the source

Clone the repository and open it in your terminal.

git clone https://github.com/thoughtworksarts/EmoPy.git
cd EmoPy

Install the remaining dependencies using pip.

pip install -r requirements.txt

Now you're ready to go!

Running tests

You can run the tests with:

python EmoPy/tests/run_all.py

We encourage improvements and additions to these tests!

Running the examples

You can find example code for each of the current neural net classes in the examples directory. You may either download that directory to a location of your choice on your machine, or use the copy included in the installed package.

If you choose to use the installed package, you can find the examples directory by starting in the virtual environment directory you created and typing:

cd lib/python3.6/site-packages/EmoPy/examples

The best place to start is the FERModel example. Here is a listing of that code:

from EmoPy.src.fermodel import FERModel
from pkg_resources import resource_filename

target_emotions = ['calm', 'anger', 'happiness']
model = FERModel(target_emotions, verbose=True)

print('Predicting on happy image...')
model.predict(resource_filename('EmoPy.examples','image_data/sample_happy_image.png'))

print('Predicting on disgust image...')
model.predict(resource_filename('EmoPy.examples','image_data/sample_disgust_image.png'))

print('Predicting on anger image...')
model.predict(resource_filename('EmoPy.examples','image_data/sample_anger_image2.png'))

The code above loads a pre-trained model and then predicts an emotion on a sample image. As you can see, all you have to supply with this example is a set of target emotions and a sample image.

Once you have completed the installation, you can run this example from the examples folder by running the example script.

python fermodel_example.py

The first thing the example does is load and initialize the model. Next it prints out emotion probabilities for each sample image it is given. It should look like this:

[Screenshot: FERModel output]

To train your own neural net, use one of our FER neural net classes to get started. You can try the convolutional_model.py example:

python convolutional_model.py

The example first initializes the model. A summary of the model architecture will be printed out. This includes a list of all the neural net layers and the shape of their output. Our models are built using the Keras framework, which offers this visualization function.

[Screenshot: convolutional example output, part 1]
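
If you are curious what produces such a summary, here is a minimal, generic Keras sketch (the layers and sizes are illustrative, not EmoPy's actual architecture):

from tensorflow.keras import layers, models

# A toy CNN over 48x48 grayscale face images with 7 emotion classes.
model = models.Sequential([
    layers.Conv2D(10, (4, 4), activation='relu', input_shape=(48, 48, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(7, activation='softmax'),
])
model.summary()  # prints each layer and the shape of its output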

You will see the training and validation accuracies of the model being updated as it is trained on each sample image. The validation accuracy will be very low since we are only using three images for training and validation. It should look something like this:

[Screenshot: convolutional example output, part 2]

Comparison of neural network models

ConvolutionalNN

Convolutional Neural Networks (CNNs) are currently considered the go-to neural networks for Image Classification, because they pick up on patterns in small parts of an image, such as the curve of an eyebrow. EmoPy's ConvolutionalNN is trained on still images.

TimeDelayConvNN

The Time-Delayed 3D-Convolutional Neural Network model is inspired by the work described in this paper by Dr. Hongying Meng of Brunel University London. It uses temporal information as part of its training samples: instead of single still images, it uses past images from a series for additional context. One training sample contains n images from a series, and its emotion label is that of the most recent image. The idea is to capture the progression of a facial expression leading up to a peak emotion.

[Figure: facial expression image sequence from the Cohn-Kanade database, from @Jia2014]
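
To make the sampling scheme concrete, here is a small illustrative helper (hypothetical; not part of EmoPy's actual data loaders) that builds such windowed samples from a frame sequence:

import numpy as np

def make_time_delay_samples(frames, labels, n):
    # Slide a window of n consecutive frames over the sequence;
    # each window is labeled with the emotion of its most recent frame.
    samples, sample_labels = [], []
    for i in range(n - 1, len(frames)):
        samples.append(frames[i - n + 1 : i + 1])
        sample_labels.append(labels[i])
    return np.array(samples), np.array(sample_labels)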

ConvolutionalLstmNN

The Convolutional Long Short Term Memory neural net is a convolutional and recurrent neural network hybrid. Convolutional NNs use kernels, or filters, to find patterns in smaller parts of an image. Recurrent NNs (RNNs) take into account previous training examples, similar to the Time-Delay Neural Network, for context. This model is able to both extract local data from images and use temporal context.

The Time-Delay model and this model differ in how they use temporal context. The former only takes context from within video clips of a single face, as shown in the figure above. The ConvolutionalLstmNN is given still images that have no relation to each other. It looks for pattern differences between past image samples and the current sample, as well as their labels. A progression of the same face isn't necessary; different faces to compare are enough.
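
As a generic illustration of the hybrid idea (again, not EmoPy's exact architecture), Keras offers a ConvLSTM2D layer that convolves within each frame while carrying recurrent state across frames:

from tensorflow.keras import layers, models

# Input shape: (timesteps, height, width, channels)
model = models.Sequential([
    layers.ConvLSTM2D(10, (4, 4), input_shape=(4, 48, 48, 1)),
    layers.Flatten(),
    layers.Dense(7, activation='softmax'),
])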

[Figure: the 7 standard facial expressions, from @vanGent2016]

TransferLearningNN

This model uses a technique known as transfer learning, in which pre-trained deep neural net models are used as starting points. The pre-trained models it uses were originally trained to classify objects in images. The model adds a couple of top layers to the original network to match the number of target emotions, then reruns the training algorithm with a set of facial expression images labeled with emotion classifications rather than object classifications. It uses only still images, with no temporal context.
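
A minimal generic Keras sketch of the technique follows; the base network (InceptionV3) and layer sizes are assumptions for illustration, not necessarily EmoPy's actual choices:

from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# Start from a network pre-trained to classify objects, minus its top layers.
base = InceptionV3(weights='imagenet', include_top=False, pooling='avg')
base.trainable = False  # freeze the pre-trained weights

model = models.Sequential([
    base,
    layers.Dense(128, activation='relu'),
    layers.Dense(3, activation='softmax'),  # e.g. 3 target emotions
])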

ConvolutionalNNDropout

This model is the most recent addition to EmoPy. It is a 2D Convolutional Neural Network that implements dropout, batch normalization, and L2 regularization. It is currently performing with a training accuracy of 0.7045 and a validation accuracy of 0.6536 when classifying 7 emotions. Further training will be done to determine how it performs on smaller subsets of emotions.
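
For readers unfamiliar with those three techniques, a generic Keras sketch combining them might look like this (the layer sizes and rates are illustrative, not EmoPy's):

from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu',
                  kernel_regularizer=regularizers.l2(0.01),  # L2 penalty on weights
                  input_shape=(48, 48, 1)),
    layers.BatchNormalization(),  # normalize activations between layers
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.5),          # randomly drop units to reduce overfitting
    layers.Flatten(),
    layers.Dense(7, activation='softmax'),
])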

Performance

Before implementing the ConvolutionalNNDropout model, the ConvolutionalLstmNN model was performing best when classifying 7 emotions with a validation accuracy of 47.5%. The table below shows accuracy values of this model and the TransferLearningNN model when trained on all seven standard emotions and on a subset of three emotions (fear, happiness, neutral). They were trained on 5,000 images from the FER+ dataset.

Neural Net Model       7 emotions                         3 emotions
                       Training Acc.   Validation Acc.    Training Acc.   Validation Acc.
ConvolutionalLstmNN    0.6187          0.4751             0.9148          0.6267
TransferLearningNN     0.5358          0.2933             0.7393          0.4840

Both models are overfitting: their training accuracies are much higher than their validation accuracies. The models do a good job of recognizing and classifying patterns in the training images, but they generalize poorly and are less accurate when predicting emotions for new images.

If you would like to experiment with different parameters using our neural net classes, we recommend you use FloydHub, a platform for training and deploying deep learning models in the cloud. Let us know how your models are doing! The goal is to optimize the performance and generalizability of all the EmoPy models.

Guiding Principles

These are the principles we use to guide development and contributions to the project:

  • FER for Good. FER applications have the potential to be used for malicious purposes. We want to build EmoPy with a community that champions integrity, transparency, and awareness and hope to instill these values throughout development while maintaining an accessible, quality toolkit.

  • User Friendliness. EmoPy prioritizes user experience: it is designed to make getting an FER prediction model up and running as easy as possible, minimizing the requirements for basic use cases.

  • Experimentation to Maximize Performance. Optimal performance in FER prediction is a primary goal. The deep neural net classes are designed to easily modify training parameters, image pre-processing options, and feature extraction methods in the hopes that experimentation in the open-source community will lead to high-performing FER prediction.

  • Modularity. EmoPy contains four base modules (fermodel, neuralnets, imageprocessor, and featureextractor) that can be easily used together with minimal restrictions.

Contributing

  1. Fork it!
  2. Create your feature branch: git checkout -b my-new-feature
  3. Commit your changes: git commit -am 'Add some feature'
  4. Push to the branch: git push origin my-new-feature
  5. Submit a pull request :D

This is a new library that has a lot of room for growth. Check out the list of open issues that we need help addressing!

Contributors

Thanks goes to these wonderful people (emoji key):


angelicaperez37: 💻 📝 📖
sbriley: 💻
Sofia Tania: 💻
Andrew McWilliams: 📖 🤔
Webs: 💻
Sara GW: 💻
Megan Sullivan: 📖
sadnantw: 💻 ⚠️
Julien Deswaef: 💻 📖
Tanushri Chakravorty: 💻 💡
Linas Vepštas: 🔌
Emily Sachs: 💻
Diana Gamez: 💻
dtoakley: 📖 💻
Anju: 🚧
Satish Dash: 🚧

This project follows the all-contributors specification. Contributions of any kind welcome!

Projects built on EmoPy

Want to list your project here? Please file an issue (or pull request) and tell us how EmoPy is helping you.
