

FERAtt: Facial Expression Recognition with Attention Net

License: MIT

This repository is under construction ...

Paper | arXiv

Pedro D. Marrero Fernandez1, Fidel A. Guerrero-Peña1, Tsang Ing Ren1, Alexandre Cunha2

  • 1 Centro de Informatica (CIn), Universidade Federal de Pernambuco (UFPE), Brazil
  • 2 Center for Advanced Methods in Biological Image Analysis (CAMBIA), California Institute of Technology, USA

Introduction

PyTorch implementation of the FERAtt neural network. Facial Expression Recognition with Attention Net (FERAtt) is based on a dual-branch architecture and consists of four major modules: (i) an attention module $G_{att}$ to extract the attention feature map, (ii) a feature extraction module $G_{ft}$ to obtain essential features from the input image $I$, (iii) a reconstruction module $G_{rec}$ to estimate a good attention image $I_{att}$, and (iv) a representation module $G_{rep}$ that is responsible for the representation and classification of the facial expression image.
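The composition of the four modules can be sketched as follows. This is only an illustrative sketch: the layer choices, channel sizes, and the `FERAttSketch` class name are placeholders, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class FERAttSketch(nn.Module):
    """Illustrative dual-branch composition of the four FERAtt modules.
    Layer choices are placeholders, not the paper's exact design."""
    def __init__(self, num_classes=8):
        super().__init__()
        # (i) attention module G_att: produces a single-channel attention map
        self.g_att = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                   nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
        # (ii) feature extraction module G_ft: essential features from input I
        self.g_ft = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(8, 3, 3, padding=1))
        # (iii) reconstruction module G_rec: estimates the attention image I_att
        self.g_rec = nn.Conv2d(3, 3, 3, padding=1)
        # (iv) representation module G_rep: classifies the attended image
        self.g_rep = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(3, num_classes))

    def forward(self, x):
        att_map = self.g_att(x)               # attention feature map
        feats = self.g_ft(x)                  # extracted features
        i_att = self.g_rec(att_map * feats)   # attended / reconstructed image
        logits = self.g_rep(i_att)            # expression logits
        return i_att, logits

x = torch.randn(2, 3, 32, 32)
i_att, logits = FERAttSketch()(x)
print(i_att.shape, logits.shape)  # torch.Size([2, 3, 32, 32]) torch.Size([2, 8])
```

Note that the network returns both the attention image and the class logits, mirroring the paper's idea of jointly supervising reconstruction and classification.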

Prerequisites

  • Linux or macOS
  • Python 3
  • NVIDIA GPU + CUDA cuDNN
  • PyTorch 1.5

Installation

git clone https://github.com/pedrodiamel/pytorchvision.git
cd pytorchvision
python setup.py install
pip install -r installation.txt

Docker:

docker build -f "Dockerfile" -t feratt:latest .
./run_docker.sh

Visualize result with Visdom

We now support Visdom for real-time loss visualization during training!

To use Visdom in the browser:

# First install Python server and client
pip install visdom
# Start the server (probably in a screen or tmux)
python -m visdom.server -env_path runs/visdom/
# http://localhost:8097/
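Once the server is running, a training script can push loss values to it. A minimal sketch, assuming a Visdom server on the default port; the `log_loss` helper and the window/environment names are illustrative, not part of this repository:

```python
import numpy as np

def log_loss(vis, win, epoch, loss):
    """Append one (epoch, loss) point to a Visdom line plot.
    On the first epoch a new window is created; later calls append."""
    return vis.line(Y=np.array([loss]), X=np.array([epoch]),
                    win=win,
                    update='append' if epoch > 0 else None,
                    opts={'title': 'train loss'})

try:
    from visdom import Visdom
    vis = Visdom(env='feratt', raise_exceptions=True)
    for epoch, loss in enumerate([1.2, 0.9, 0.7]):
        log_loss(vis, 'loss', epoch, loss)
except Exception:
    # visdom not installed or no server running; skip plotting
    pass
```

The curves then appear live at http://localhost:8097/ under the chosen environment.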

How to use

Step 1: Train

./train_bu3dfe.sh
./train_ck.sh

Citation

If you find this useful for your research, please cite the following paper.

@InProceedings{Fernandez_2019_CVPR_Workshops,
author = {Marrero Fernandez, Pedro D. and Guerrero Pena, Fidel A. and Ing Ren, Tsang and Cunha, Alexandre},
title = {FERAtt: Facial Expression Recognition With Attention Net},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}

Acknowledgments

We gratefully acknowledge financial support from the Brazilian government agency FACEPE.
