
Projects that are alternatives of or similar to Fer

Hic Data Analysis Bootcamp
Workshop on measuring, analyzing, and visualizing the 3D genome with Hi-C data.
Stars: ✭ 102 (-0.97%)
Mutual labels:  jupyter-notebook
Loads clustering
Data Science project to cluster loads coming from http://en.openei.org/datasets/files/961/pub/
Stars: ✭ 102 (-0.97%)
Mutual labels:  jupyter-notebook
Python Data Science Handbook
A Chinese translation of Jake Vanderplas' "Python Data Science Handbook". 《Python数据科学手册》在线Jupyter notebook中文翻译
Stars: ✭ 102 (-0.97%)
Mutual labels:  jupyter-notebook
Keras Openpose Reproduce
Keras implementation of Realtime Multi-Person Pose Estimation
Stars: ✭ 102 (-0.97%)
Mutual labels:  jupyter-notebook
Tsa
The Thalesians' Time Series Analysis (TSA) library
Stars: ✭ 102 (-0.97%)
Mutual labels:  jupyter-notebook
Advanced Machine Learning With Python
Code repository for Advanced Machine Learning with Python, published by Packt
Stars: ✭ 102 (-0.97%)
Mutual labels:  jupyter-notebook
Rnn Robinhood
Automated trading on Robinhood via RNN
Stars: ✭ 102 (-0.97%)
Mutual labels:  jupyter-notebook
Jumptutorials.jl
Tutorials on using JuMP for mathematical optimization in Julia
Stars: ✭ 103 (+0%)
Mutual labels:  jupyter-notebook
Sst
SST: Single-Stream Temporal Action Proposals (Official Repo)
Stars: ✭ 102 (-0.97%)
Mutual labels:  jupyter-notebook
Facemesh.pytorch
This is the PyTorch implementation of paper Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs (https://arxiv.org/pdf/1907.06724.pdf)
Stars: ✭ 101 (-1.94%)
Mutual labels:  jupyter-notebook
Models
DLTK Model Zoo
Stars: ✭ 101 (-1.94%)
Mutual labels:  jupyter-notebook
Scipy2017 Jupyter Widgets Tutorial
Notebooks for the SciPy 2017 tutorial "The Jupyter Interactive Widget Ecosystem"
Stars: ✭ 102 (-0.97%)
Mutual labels:  jupyter-notebook
End To End Time Series
This repository hosts code for my Time Series videos part of playlist here - https://www.youtube.com/playlist?list=PL3N9eeOlCrP5cK0QRQxeJd6GrQvhAtpBK
Stars: ✭ 103 (+0%)
Mutual labels:  jupyter-notebook
Dataminingnotesandpractice
记录我学习数据挖掘过程的笔记和见到的奇技,持续更新~
Stars: ✭ 103 (+0%)
Mutual labels:  jupyter-notebook
Ddn
Deep Declarative Networks
Stars: ✭ 103 (+0%)
Mutual labels:  jupyter-notebook
Storytelling With Data
Plots from the book "Storytelling with data" implementation using Python and matplotlib
Stars: ✭ 100 (-2.91%)
Mutual labels:  jupyter-notebook
Pokelyzer
A webhook listener and database schema for doing geospatial analysis and advanced analytics on Pokemon Go data.
Stars: ✭ 102 (-0.97%)
Mutual labels:  jupyter-notebook
Dl Workshop
Master gradient-based machine learning. Also secretly a JAX course in disguise!
Stars: ✭ 103 (+0%)
Mutual labels:  jupyter-notebook
Advanced Deep Learning With Python
Advanced Deep Learning with Python
Stars: ✭ 103 (+0%)
Mutual labels:  jupyter-notebook
Deep ctr
Stars: ✭ 102 (-0.97%)
Mutual labels:  jupyter-notebook

FER

Facial expression recognition.


INSTALLATION

FER currently supports Python 3.6 and newer. It can be installed with pip:

$ pip install fer

This implementation requires OpenCV>=3.2 and TensorFlow>=1.7.0 installed on the system, with bindings for Python 3.

They can be installed through pip (if pip version >= 9.0.1):

$ pip install "tensorflow>=1.7" opencv-contrib-python==3.3.0.9

or compiled directly from sources (OpenCV3, Tensorflow).

Note that tensorflow-gpu can be used instead if a GPU device is available on the system, which will speed up inference. It can be installed with pip:

$ pip install "tensorflow-gpu>=1.7.0"

USAGE

The following example illustrates the ease of use of this package:

from fer import FER
import cv2

# Load an image and score the emotions of each detected face
img = cv2.imread("justin.jpg")
detector = FER()
result = detector.detect_emotions(img)

Sample output:

[{'box': [277, 90, 48, 63], 'emotions': {'angry': 0.02, 'disgust': 0.0, 'fear': 0.05, 'happy': 0.16, 'neutral': 0.09, 'sad': 0.27, 'surprise': 0.41}}]

Pretty print it with import pprint; pprint.pprint(result).

Just want the top emotion? Try:

emotion, score = detector.top_emotion(img) # 'happy', 0.99

MTCNN Face Detection

By default, faces are detected using OpenCV's Haar Cascade classifier. To use the more accurate MTCNN network, pass mtcnn=True to the constructor:

detector = FER(mtcnn=True)

Video

For recognizing facial expressions in video, the Video class splits the video into frames. It can use a local Keras model (the default) or the Peltarion API as the backend:

from fer import Video
from fer import FER

video_filename = "tests/woman2.mp4"
video = Video(video_filename)

# Analyze video, displaying the output
detector = FER(mtcnn=True)
raw_data = video.analyze(detector, display=True)
df = video.to_pandas(raw_data)

The detector returns a list of dictionaries, one per detected face. Each dictionary contains two keys, 'box' and 'emotions' (a short parsing sketch follows the list):

  • The bounding box is formatted as [x, y, width, height] under the key 'box'.
  • The emotions are formatted into a dictionary with the keys 'angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', and 'neutral', each mapped to a confidence score.
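
As an illustration, here is a minimal sketch of iterating over that structure to pull out each face's bounding box and strongest emotion; it simply repeats the usage example above and then walks the returned list:

from fer import FER
import cv2

img = cv2.imread("justin.jpg")
detector = FER()
result = detector.detect_emotions(img)

for face in result:
    x, y, w, h = face["box"]                       # bounding box as [x, y, width, height]
    emotions = face["emotions"]                    # dict mapping emotion label -> score
    top_emotion = max(emotions, key=emotions.get)  # label with the highest score
    print(f"Face at ({x}, {y}, {w}, {h}): {top_emotion} ({emotions[top_emotion]:.2f})")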

Other good examples of usage can be found in the files example.py and video-example.py located in the root of this repository.
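
The exact layout of the DataFrame returned by to_pandas can vary between versions, so the following is only a sketch under that caveat: it assumes the per-frame emotion scores appear as numeric columns whose names start with the emotion labels, and plots whichever of those columns it finds.

import matplotlib.pyplot as plt

emotion_labels = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Column naming is an assumption; inspect df.columns to confirm before plotting.
emotion_cols = [c for c in df.columns if any(c.startswith(label) for label in emotion_labels)]

df[emotion_cols].plot(title="Emotion scores per frame")
plt.xlabel("Frame")
plt.ylabel("Score")
plt.show()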

MODEL

FER bundles a Keras model.

The model is a convolutional neural network with weights saved to an HDF5 file in the data folder relative to the module's path. It can be overridden by passing a custom model to the FER() constructor via the emotion_model parameter.
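
As a sketch of that override, the snippet below passes a custom weights file to the constructor. The file name is hypothetical, and whether emotion_model expects a file path or an already-loaded Keras model is an assumption here, so check the FER docstring before relying on it.

from fer import FER

# Hypothetical path to custom emotion-model weights; emotion_model is the
# constructor parameter described above (path vs. loaded model is an assumption).
detector = FER(emotion_model="my_emotion_model.hdf5")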

LICENSE

MIT License.

CREDIT

This code includes methods and package structure copied or derived from Iván de Paz Centeno's implementation of MTCNN and Octavio Arriaga's facial expression recognition repo.

REFERENCE

FER 2013 dataset curated by Pierre Luc Carrier and Aaron Courville, described in:

"Challenges in Representation Learning: A report on three machine learning contests," by Ian J. Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville, Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler, Dong-Hyun Lee, Yingbo Zhou, Chetan Ramaiah, Fangxiang Feng, Ruifan Li, Xiaojie Wang, Dimitris Athanasakis, John Shawe-Taylor, Maxim Milakov, John Park, Radu Ionescu, Marius Popescu, Cristian Grozea, James Bergstra, Jingjing Xie, Lukasz Romaszko, Bing Xu, Zhang Chuang, and Yoshua Bengio, arXiv:1307.0414.
