Licence: MIT


facial_emotion_recognition__EMOJIFIER

Recognizes the facial emotion and overlays the matching emoji on the person's face.

Some results first!

(demo of the results)

Getting Started

  1. Get the code:

    • Using SSH: git clone git@github.com:vijuSR/facial_emotion_recognition__EMOJIFIER.git
      OR
    • Using HTTPS: git clone https://github.com/vijuSR/facial_emotion_recognition__EMOJIFIER.git
  2. Setup the Virtual Environment (Recommended):

    • Create the virtual environment
      • python3 -m venv </path/to/venv>
    • Activate your virtual-environment
      • Linux: source </path/to/venv>/bin/activate
      • Windows: cd </path/to/venv> then .\Scripts\activate
    • Install the requirements
      • cd <root-dir-of-project>
      • pip install --upgrade -I -r requirements.txt

      Install any missing requirement with pip install <package-name>

      That's all for the setup! 😃

Making it work for you:

There are 4 steps to go from nothing (not even a single image) to getting the results shown above.

And you don't need anything beyond this repo.

  • STEP 0 - define your EMOTION-MAP 😄 ❤️ 👏

    1. cd <to-repo-root-dir>
    2. Open the 'emotion_map.json'
    3. Change this mapping as you desire. You need to write the "emotion-name" keys; the numeric values assigned to them don't matter, as long as they are unique.
    4. There must be a .png emoji image file in the '/emoji' folder for every "emotion-name" mentioned in the emotion_map.json.
    5. Open the 'config.ini' file and set the path to the "haarcascade_frontalface_default.xml" file on your system. For example, on my system it's "G:/VENVIRONMENT/computer_vision/Lib/site-packages/cv2/data/haarcascade_frontalface_default.xml", where "G:/VENVIRONMENT/computer_vision" is my virtual-environment path.
    6. 'config.ini' also contains the hyperparameters of the model. These depend on the model and the dataset size; the defaults should work fine for the current model and a dataset of around 1.2k to 3k images. IT'S HIGHLY RECOMMENDED TO PLAY AROUND WITH THEM.
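As an illustration, emotion_map.json might look something like this (the emotion names and numbers below are examples, not necessarily the repo's defaults):

```json
{
    "smile": 0,
    "neutral": 1,
    "angry": 2
}
```

With this map, the '/emoji' folder would need to contain smile.png, neutral.png and angry.png.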
  • STEP 1 - generating the facial images

    1. cd </to/repo/root/dir>
    2. run python3 src/face_capture.py --emotion_name <emotion-name> --number_of_images <number>
      -- example: python3 src/face_capture.py --emotion_name smile --number_of_images 200

    This will open the camera, and all you need to do is make the smile expression.

    • NOTE: You must change emotion_map.json if you want a different set of emotions than what is already defined.
    • Do this step for all the different emotions in different lighting conditions.
    • For the above result, I used 300 images for each emotion, captured in 3 different lighting conditions (100 each).
    • You can see your images inside the 'images' folder, which contains a separate sub-folder for each emotion.
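The capture script's command line can be sketched with argparse; the flag names come from the usage above, but the help strings and everything else here are illustrative, not the repo's exact code:

```python
import argparse

# Sketch of the CLI for src/face_capture.py, using the flags shown in STEP 1.
parser = argparse.ArgumentParser(description="Capture face images for one emotion.")
parser.add_argument("--emotion_name", required=True,
                    help="an emotion key defined in emotion_map.json")
parser.add_argument("--number_of_images", type=int, required=True,
                    help="how many face images to capture")

# Parsing the example invocation from STEP 1:
args = parser.parse_args(["--emotion_name", "smile", "--number_of_images", "200"])
```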
  • STEP 2 - creating the dataset out of it

    1. run python3 src/dataset_creator.py
    • This will create the ready-to-use dataset as a python pickled file and will save it in the dataset folder.
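What "ready-to-use dataset as a python pickled file" means can be sketched as follows; the function name, the images/<emotion>/ folder layout, and the dict keys are assumptions for illustration, not the repo's exact code:

```python
import os
import pickle

def create_dataset(image_root, label_map):
    """Collect (image, label) pairs from <image_root>/<emotion>/ sub-folders."""
    data, labels = [], []
    for emotion, label in label_map.items():
        folder = os.path.join(image_root, emotion)
        if not os.path.isdir(folder):
            continue
        for name in sorted(os.listdir(folder)):
            data.append(os.path.join(folder, name))  # the real code loads pixel arrays here
            labels.append(label)
    return {"data": data, "labels": labels}

# Pickle round-trip, the way the file saved in the 'dataset' folder would be used:
dataset = create_dataset("images", {"smile": 0, "neutral": 1})
blob = pickle.dumps(dataset)
restored = pickle.loads(blob)
```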
  • STEP 3 - training the model on the dataset and saving it

    1. run python3 src/trainer.py
    • This will start the model training and, once training finishes, save the TensorFlow model in the 'model-checkpoints' folder.
    • It uses the parameters that worked well for me; feel free to change them and explore.
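For orientation, the hyperparameter section of 'config.ini' could look like the fragment below; the key names and values are purely illustrative, so check the actual file for the real ones:

```ini
; illustrative only -- see the repo's config.ini for the real keys and defaults
[HYPERPARAMETERS]
learning_rate = 0.001
batch_size = 32
epochs = 50
```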
  • STEP 4 - using the trained model to make prediction

    1. run python3 src/predictor.py
    • This will open the camera and start processing the video feed -- NOW YOU HAVE DONE IT ALL. 👏
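The overlay step itself is conceptually just an alpha blend of the RGBA emoji over the detected face region of the frame. The repo does this with OpenCV; the pure-Python sketch below, with hypothetical names and list-of-lists "images", only illustrates the math:

```python
def overlay_emoji(frame, emoji, x, y):
    """Alpha-blend an RGBA emoji (rows of [r, g, b, a]) onto an RGB frame in place."""
    for i, row in enumerate(emoji):
        for j, (r, g, b, a) in enumerate(row):
            alpha = a / 255.0  # 0 = fully transparent, 1 = fully opaque
            bg = frame[y + i][x + j]
            frame[y + i][x + j] = [
                round(alpha * fg + (1 - alpha) * bc)
                for fg, bc in zip((r, g, b), bg)
            ]
    return frame

frame = [[[0, 0, 0] for _ in range(4)] for _ in range(4)]   # black 4x4 "frame"
emoji = [[[255, 0, 0, 255], [0, 255, 0, 0]]]                # one opaque, one transparent pixel
overlay_emoji(frame, emoji, 1, 1)
```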

It's time to show your emotions ❤️

P.S. -- The model was trained on my facial images only, but was able to detect the expressions of my brother as well.

(result demo)
