
atulapra / Emotion Detection

License: MIT
Real-time Facial Emotion Detection using deep learning

Programming Languages

python

Projects that are alternatives of or similar to Emotion Detection

Perceptron
A flexible artificial neural network builder to analyse performance and optimise the best model.
Stars: ✭ 370 (-15.53%)
Mutual labels:  opencv
Opencv Mingw Build
πŸ‘€ MinGW 32-bit and 64-bit builds of OpenCV compiled on Windows, including OpenCV 3.3.1, 3.4.1, 3.4.1-x64, 3.4.5, 3.4.6, 3.4.7, 3.4.8-x64, 3.4.9, 4.0.0-alpha-x64, 4.0.0-rc-x64, 4.0.1-x64, 4.1.0, 4.1.0-x64, 4.1.1-x64, 4.5.0-with-contrib
Stars: ✭ 401 (-8.45%)
Mutual labels:  opencv
Tensorflow Cmake
TensorFlow examples in C, C++, Go and Python without bazel but with cmake and FindTensorFlow.cmake
Stars: ✭ 418 (-4.57%)
Mutual labels:  opencv
Nowatermark
Remove watermarks from images.
Stars: ✭ 373 (-14.84%)
Mutual labels:  opencv
Movement Tracking
UP - DOWN - LEFT - RIGHT movement tracking.
Stars: ✭ 379 (-13.47%)
Mutual labels:  opencv
Tagui
Free RPA tool by AI Singapore
Stars: ✭ 4,257 (+871.92%)
Mutual labels:  opencv
Opencvforunity
OpenCV for Unity (Unity Asset Plugin)
Stars: ✭ 359 (-18.04%)
Mutual labels:  opencv
Csi Camera
Simple example of using a CSI-Camera (like the Raspberry Pi Version 2 camera) with the NVIDIA Jetson Nano Developer Kit
Stars: ✭ 433 (-1.14%)
Mutual labels:  opencv
Pibooth
The pibooth project provides an out-of-the-box photo booth application for the Raspberry Pi and OpenCV-compatible devices
Stars: ✭ 398 (-9.13%)
Mutual labels:  opencv
Image Processing Algorithm
Implementations of image processing papers
Stars: ✭ 415 (-5.25%)
Mutual labels:  opencv
Stereo Calibration
πŸ“· πŸ“· Stereo camera calibration using OpenCV and C++
Stars: ✭ 376 (-14.16%)
Mutual labels:  opencv
Python video stab
A Python package to stabilize videos using OpenCV
Stars: ✭ 377 (-13.93%)
Mutual labels:  opencv
Handwriting Ocr
OCR software for recognition of handwritten text
Stars: ✭ 411 (-6.16%)
Mutual labels:  opencv
Pythonsift
A clean and concise Python implementation of SIFT (Scale-Invariant Feature Transform)
Stars: ✭ 374 (-14.61%)
Mutual labels:  opencv
Pycair
Content-aware image resizing
Stars: ✭ 425 (-2.97%)
Mutual labels:  opencv
Cmake Templates
Some CMake templates (examples): Qt, Boost, OpenCV, C++11, etc.
Stars: ✭ 368 (-15.98%)
Mutual labels:  opencv
Gocv
Go package for computer vision using OpenCV 4 and beyond.
Stars: ✭ 4,511 (+929.91%)
Mutual labels:  opencv
Simple vehicle counting
Vehicle Detection, Tracking and Counting
Stars: ✭ 439 (+0.23%)
Mutual labels:  opencv
Deepbacksub
Virtual Video Device for Background Replacement with Deep Semantic Segmentation
Stars: ✭ 426 (-2.74%)
Mutual labels:  opencv
React Native Openalpr
An open-source React Native automatic license plate recognition package for OpenALPR
Stars: ✭ 415 (-5.25%)
Mutual labels:  opencv

Emotion detection using deep learning

Introduction

This project aims to classify the emotion on a person's face into one of seven categories using deep convolutional neural networks. The model is trained on the FER-2013 dataset, which was published at the International Conference on Machine Learning (ICML). The dataset consists of 35,887 grayscale, 48x48-pixel face images, each labeled with one of seven emotions: angry, disgusted, fearful, happy, neutral, sad, and surprised.
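For reference, a minimal sketch of how the class labels might be indexed in code, assuming the alphabetical order listed above (the exact mapping used in emotions.py may differ):

# Hypothetical index-to-label mapping, assuming the alphabetical order above.
EMOTIONS = ["Angry", "Disgusted", "Fearful", "Happy", "Neutral", "Sad", "Surprised"]

def label_for(class_index):
    # Map a predicted class index (0-6) to its emotion name.
    return EMOTIONS[class_index]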

Dependencies

  • Python 3, OpenCV, TensorFlow
  • To install the required packages, run pip install -r requirements.txt.

Basic Usage

The repository is currently compatible with tensorflow-2.0 and uses the Keras API through the tensorflow.keras module.

  • First, clone the repository and enter the folder:
git clone https://github.com/atulapra/Emotion-detection.git
cd Emotion-detection
  • Download the FER-2013 dataset from here and unzip it inside the src folder. This will create the folder data.

  • If you want to train this model, use:

cd src
python emotions.py --mode train
  • If you want to view the predictions without training again, you can download the pre-trained model from here and then run:
cd src
python emotions.py --mode display
  • The folder structure is of the form:
    src:

    • data (folder)
    • emotions.py (file)
    • haarcascade_frontalface_default.xml (file)
    • model.h5 (file)
  • This implementation detects emotions on all faces in the webcam feed by default. With a simple 4-layer CNN, the test accuracy reached 63.2% in 50 epochs; a minimal sketch of such a network is shown below.
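The following is an illustrative sketch of such a network using tensorflow.keras; the layer sizes and hyperparameters are assumptions, not necessarily the exact architecture in emotions.py.

# Illustrative small CNN for 48x48 grayscale face crops with seven softmax outputs.
# Layer sizes are assumptions, not necessarily the exact architecture in emotions.py.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

def build_model(num_classes=7):
    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
        Conv2D(64, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Dropout(0.25),
        Conv2D(128, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Conv2D(128, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Dropout(0.25),
        Flatten(),
        Dense(1024, activation="relu"),
        Dropout(0.5),
        Dense(num_classes, activation="softmax"),  # one score per emotion
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model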

Accuracy plot

Data Preparation (optional)

  • The original FER-2013 dataset on Kaggle is available as a single CSV file. I converted it into a dataset of PNG images for training/testing and provided it as the dataset in the previous section.

  • In case you are looking to experiment with new datasets, you may have to deal with data in the CSV format. The preprocessing code I wrote is provided in the dataset_prepare.py file and can be used as a reference; a minimal sketch of the conversion is shown below.
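As a reference, here is a minimal sketch of converting the CSV into folders of PNG images. The column names follow the Kaggle FER-2013 layout, while the output folder scheme and file naming are illustrative rather than exactly what dataset_prepare.py does.

# Minimal sketch: convert the FER-2013 CSV ("emotion", "pixels", "Usage" columns)
# into 48x48 PNG images grouped by train/test split and numeric emotion label.
# Folder layout and file names are illustrative, not dataset_prepare.py's exact scheme.
import os
import numpy as np
import pandas as pd
from PIL import Image

def csv_to_png(csv_path, out_dir):
    df = pd.read_csv(csv_path)
    for i, row in df.iterrows():
        # "pixels" is a space-separated string of 48*48 grayscale values
        pixels = np.array(row["pixels"].split(), dtype=np.uint8).reshape(48, 48)
        split = "train" if row["Usage"] == "Training" else "test"
        folder = os.path.join(out_dir, split, str(row["emotion"]))
        os.makedirs(folder, exist_ok=True)
        Image.fromarray(pixels).save(os.path.join(folder, "im{}.png".format(i)))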

Algorithm

  • First, the Haar cascade method is used to detect faces in each frame of the webcam feed.

  • The region of the image containing the face is resized to 48x48 and passed as input to the CNN.

  • The network outputs a list of softmax scores for the seven classes of emotions.

  • The emotion with the maximum score is displayed on the screen; a minimal sketch of this loop is shown below.
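The sketch below assumes a saved Keras model in model.h5 and the Haar cascade file from the folder structure above; the preprocessing details (e.g. pixel scaling) and label order are illustrative, not an exact copy of emotions.py.

# Minimal sketch of the webcam detection/inference loop.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Label order assumed to match the training folders; adjust if it differs.
EMOTIONS = ["Angry", "Disgusted", "Fearful", "Happy", "Neutral", "Sad", "Surprised"]

model = load_model("model.h5")  # assumes model.h5 holds a full saved model
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Haar cascade face detection on the grayscale frame
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        # Scaling to [0, 1] here is illustrative preprocessing
        roi = roi.astype("float32")[np.newaxis, :, :, np.newaxis] / 255.0
        scores = model.predict(roi)[0]  # softmax scores for the seven emotions
        label = EMOTIONS[int(np.argmax(scores))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 255, 255), 2)
    cv2.imshow("Emotion Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()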

Example Output

Multiface example output

References

  • "Challenges in Representation Learning: A report on three machine learning contests." I Goodfellow, D Erhan, PL Carrier, A Courville, M Mirza, B Hamner, W Cukierski, Y Tang, DH Lee, Y Zhou, C Ramaiah, F Feng, R Li,
    X Wang, D Athanasakis, J Shawe-Taylor, M Milakov, J Park, R Ionescu, M Popescu, C Grozea, J Bergstra, J Xie, L Romaszko, B Xu, Z Chuang, and Y. Bengio. arXiv 2013.