huangyangyu / Noiseface

License: MIT
Noise-Tolerant Paradigm for Training Face Recognition CNNs

Programming Languages

Python

Projects that are alternatives to or similar to Noiseface

Seqface
SeqFace : Making full use of sequence information for face recognition
Stars: ✭ 125 (-5.3%)
Mutual labels:  caffe, face-recognition
Face Identification With Cnn Triplet Loss
Face identification with cnn+triplet-loss written by Keras.
Stars: ✭ 45 (-65.91%)
Mutual labels:  cnn, face-recognition
Dlib face recognition from camera
Detect and recognize faces from a camera; supports recognizing multiple faces simultaneously
Stars: ✭ 719 (+444.7%)
Mutual labels:  cnn, face-recognition
Deepface
Deep Learning Models for Face Detection/Recognition/Alignments, implemented in Tensorflow
Stars: ✭ 409 (+209.85%)
Mutual labels:  cnn, face-recognition
Mobilenet V2 Caffe
MobileNet-v2 experimental network description for caffe
Stars: ✭ 93 (-29.55%)
Mutual labels:  cnn, caffe
Liteflownet
LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation, CVPR 2018 (Spotlight paper, 6.6%)
Stars: ✭ 474 (+259.09%)
Mutual labels:  cnn, caffe
Facerecognition
OpenCV 3 & Keras implementation of face recognition for specific people.
Stars: ✭ 32 (-75.76%)
Mutual labels:  cnn, face-recognition
Facedetection
C++ project implementing MTCNN, a highly effective face detection algorithm, on different DL frameworks. The most popular frameworks (Caffe/MXNet/TensorFlow) are all supported
Stars: ✭ 255 (+93.18%)
Mutual labels:  cnn, caffe
Dispnet Flownet Docker
Dockerfile and runscripts for DispNet and FlowNet1 (estimation of disparity and optical flow)
Stars: ✭ 78 (-40.91%)
Mutual labels:  cnn, caffe
Haddoc2
Caffe to VHDL
Stars: ✭ 57 (-56.82%)
Mutual labels:  cnn, caffe
Largemargin softmax loss
Implementation for <Large-Margin Softmax Loss for Convolutional Neural Networks> in ICML'16.
Stars: ✭ 319 (+141.67%)
Mutual labels:  caffe, face-recognition
Sphereface
Implementation for <SphereFace: Deep Hypersphere Embedding for Face Recognition> in CVPR'17.
Stars: ✭ 1,483 (+1023.48%)
Mutual labels:  caffe, face-recognition
Caffe Mobile
Optimized (for size and speed) Caffe lib for iOS and Android with out-of-the-box demo APP.
Stars: ✭ 316 (+139.39%)
Mutual labels:  cnn, caffe
Face verification experiment
Original Caffe version of LightCNN-9. The PyTorch version (https://github.com/AlfredXiangWu/LightCNN) is highly recommended
Stars: ✭ 712 (+439.39%)
Mutual labels:  caffe, face-recognition
Caffe Hrt
Heterogeneous Run Time version of Caffe. It adds heterogeneous computing capabilities to Caffe, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms, while retaining all features of the original Caffe architecture so that users can deploy their applications seamlessly.
Stars: ✭ 271 (+105.3%)
Mutual labels:  cnn, caffe
Flownet2
FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
Stars: ✭ 938 (+610.61%)
Mutual labels:  cnn, caffe
Raspberrypi Facedetection Mtcnn Caffe With Motion
MTCNN with Motion Detection, on Raspberry Pi with Love
Stars: ✭ 204 (+54.55%)
Mutual labels:  cnn, caffe
Cnnforandroid
The Convolutional Neural Network(CNN) for Android
Stars: ✭ 245 (+85.61%)
Mutual labels:  cnn, caffe
Jacinto Ai Devkit
Training & Quantization of embedded friendly Deep Learning / Machine Learning / Computer Vision models
Stars: ✭ 49 (-62.88%)
Mutual labels:  cnn, caffe
Keras Oneclassanomalydetection
[5 FPS - 150 FPS] Learning Deep Features for One-Class Classification (anomaly detection). Supports Raspberry Pi 3; converts to TensorFlow, ONNX, Caffe, and PyTorch. Implemented in Python + OpenVINO/TensorFlow Lite.
Stars: ✭ 102 (-22.73%)
Mutual labels:  cnn, caffe

Noise-Tolerant Paradigm for Training Face Recognition CNNs

Paper link: https://arxiv.org/abs/1903.10357

Presented at CVPR 2019

This repository contains the code for the paper.

Contents

  1. Requirements
  2. Dataset
  3. How-to-use
  4. Diagram
  5. Performance
  6. Contact
  7. Citation
  8. License

Requirements

  1. Caffe

Dataset

Training dataset:

  1. CASIA-Webface clean
  2. IMDB-Face
  3. MS-Celeb-1M

Testing dataset:

  1. LFW
  2. AgeDB
  3. CFP
  4. MegaFace

Both the training and testing data are aligned using the method described in util.py.
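The alignment code in util.py is not reproduced here. As a rough sketch, face alignment commonly estimates a similarity transform (scale, rotation, translation) from detected facial landmarks to a canonical template via an Umeyama-style least-squares fit, then warps the image with it. The function below is illustrative only and is not the repository's implementation:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform mapping src landmarks to dst.

    src, dst: (K, 2) arrays of corresponding 2D points.
    Returns a 2x3 matrix M usable with cv2.warpAffine.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    src_c = src - src_mean
    dst_c = dst - dst_mean
    # cross-covariance between the centered point sets
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * (R @ src_mean)
    return np.hstack([scale * R, t[:, None]])
```

In practice the template points would be the five canonical landmark positions of the chosen crop size, and the source points would come from a landmark detector such as MTCNN.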

How-to-use

First, you can train the network on a noisy dataset with the following steps:

Step 1: add noise_tolerant_fr and the relevant layers (in the ./layers directory) to your Caffe project and recompile it.

Step 2: download a training dataset into the ./data directory and corrupt it with different noise ratios (see ./code/gen_noise.py for reference), then generate the LMDB file with the Caffe tools.

Step 3: configure the prototxt files in the ./deploy directory.

Step 4: run the Caffe training command on the noisy dataset.
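The corruption step above (step 2) amounts to randomly flipping a fraction of the labels. The sketch below is a minimal illustration of that idea, not the actual ./code/gen_noise.py; the function name and signature are hypothetical:

```python
import random

def corrupt_labels(samples, num_classes, noise_ratio, seed=0):
    """Randomly flip a fraction of labels to simulate label noise.

    samples: list of (image_path, label) pairs
    num_classes: total number of identity classes
    noise_ratio: fraction of samples whose label is replaced by a wrong one
    """
    rng = random.Random(seed)
    noisy = []
    for path, label in samples:
        if rng.random() < noise_ratio:
            # pick a wrong class uniformly from the other classes
            wrong = rng.randrange(num_classes - 1)
            if wrong >= label:
                wrong += 1
            noisy.append((path, wrong))
        else:
            noisy.append((path, label))
    return noisy
```

The resulting (path, label) list can then be written out and packed into an LMDB with Caffe's convert_imageset tool.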

After training, you can evaluate the model on the testing datasets with evaluate.py.
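evaluate.py itself is not reproduced here. As a rough sketch of what such an evaluation typically does for pair-based benchmarks like LFW: compute cosine similarities between embedding pairs and sweep a decision threshold for verification accuracy. All names below are hypothetical:

```python
import numpy as np

def verification_accuracy(emb1, emb2, same, thresholds):
    """Best pair-verification accuracy over a threshold sweep.

    emb1, emb2: (N, D) L2-normalized embeddings of the two images in each pair
    same: (N,) boolean array, True if the pair shares an identity
    thresholds: iterable of candidate similarity thresholds
    """
    # dot product of unit vectors == cosine similarity
    sims = np.sum(emb1 * emb2, axis=1)
    best = 0.0
    for t in thresholds:
        acc = float(np.mean((sims > t) == same))
        best = max(best, acc)
    return best
```

A full protocol (e.g. LFW's 10-fold evaluation) would choose the threshold on held-out folds rather than on the test pairs themselves.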

Diagram

The figure shows three strategies with different purposes: at the beginning of training, we focus on all samples; then on easy/clean samples; and finally on semi-hard clean samples.

[Figure: the three strategies]

The figure explains how the three strategies are fused. The left part shows the three functions α(δr), β(δr), and γ(δr). The right part shows two fusion examples: judging from ω, the easy/clean samples are emphasized in the first example (δr < 0.5), and the semi-hard clean samples are emphasized in the second (δr > 0.5). For more detail, please see the demo video.

[Figure: the fusion functions]
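The fusion idea can be illustrated schematically. The sketch below is not the paper's formulation: the functional forms of α, β, γ and the per-strategy sample weights are invented purely to show how three δr-dependent weights could be blended into a single per-sample weight ω:

```python
import math

def fuse_weight(dr, sample_score):
    """Blend three weighting strategies into one per-sample weight omega.

    dr: scalar in [0, 1] tracking training progress (illustrative stand-in
        for the paper's delta_r)
    sample_score: per-sample easiness score in [0, 1]
    All functional forms below are invented for illustration only.
    """
    a = max(0.0, 1.0 - 2.0 * dr)             # strategy 1: all samples (early)
    b = math.exp(-((dr - 0.5) ** 2) / 0.02)  # strategy 2: easy/clean (middle)
    c = max(0.0, 2.0 * dr - 1.0)             # strategy 3: semi-hard clean (late)
    total = a + b + c
    # per-strategy emphasis on this particular sample (illustrative)
    w_all = 1.0                                        # uniform weight
    w_easy = sample_score                              # easier => higher weight
    w_semi = 4.0 * sample_score * (1.0 - sample_score)  # peaks at mid-difficulty
    return (a * w_all + b * w_easy + c * w_semi) / total
```

Early in training (small dr) the weight is nearly uniform; mid-training it emphasizes easy samples; late it emphasizes semi-hard ones, mirroring the progression described above.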

The figure shows the 2D Histall of CNNcommon (top) and CNNm2 (bottom) under a 40% noise rate.

[Figure: 2D histograms]

The figure shows the 3D Histall of CNNcommon (left) and CNNm2 (right) under a 40% noise rate.

[Figure: 3D histograms]

Performance

The table compares accuracies (%) on LFW; ResNet-20 models are used. CNNclean is trained on the clean subset WebFace-Clean-Sub with the traditional method. CNNcommon is trained on the noisy dataset WebFace-All with the traditional method. CNNct is trained on WebFace-All with our implementation of Co-teaching (with pre-given noise rates). CNNm1 and CNNm2 are both trained on WebFace-All with the proposed approach, using the 1st and 2nd method to compute the loss, respectively. Note: WebFace-Clean-Sub is the clean part of WebFace-All; WebFace-All contains noisy data at the rates described below.

| Loss      | Actual Noise Rate | CNNclean | CNNcommon | CNNct | CNNm1 | CNNm2 | Estimated Noise Rate |
|-----------|-------------------|----------|-----------|-------|-------|-------|----------------------|
| L2softmax | 0%                | 94.65    | 94.65     | -     | 95.00 | 96.28 | 2%                   |
| L2softmax | 20%               | 94.18    | 89.05     | 92.12 | 92.95 | 95.26 | 18%                  |
| L2softmax | 40%               | 92.71    | 85.63     | 87.10 | 89.91 | 93.90 | 42%                  |
| L2softmax | 60%               | 91.15    | 76.61     | 83.66 | 86.11 | 87.61 | 56%                  |
| Arcface   | 0%                | 97.95    | 97.95     | -     | 97.11 | 98.11 | 2%                   |
| Arcface   | 20%               | 97.80    | 96.48     | 96.53 | 96.83 | 97.76 | 18%                  |
| Arcface   | 40%               | 96.53    | 92.33     | 94.25 | 95.88 | 97.23 | 36%                  |
| Arcface   | 60%               | 94.56    | 84.05     | 90.36 | 93.66 | 95.15 | 54%                  |

Contact

Wei Hu

Yangyu Huang

Citation

If you find this work useful in your research, please cite

@inproceedings{Hu2019NoiseFace,
  title = {Noise-Tolerant Paradigm for Training Face Recognition CNNs},
  author = {Hu, Wei and Huang, Yangyu and Zhang, Fan and Li, Ruirui},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  month = {June},
  year = {2019},
  address = {Long Beach, CA}
}

License

The project is released under the MIT License.