
Projects that are alternatives of or similar to Additive Margin Softmax

Deeplearning.ai Assignments
Stars: ✭ 268 (-42.24%)
Mutual labels:  jupyter-notebook, deeplearning
Action Recognition Visual Attention
Action recognition using soft attention based deep recurrent neural networks
Stars: ✭ 350 (-24.57%)
Mutual labels:  jupyter-notebook, deeplearning
Tensorwatch
Debugging, monitoring and visualization for Python Machine Learning and Data Science
Stars: ✭ 3,191 (+587.72%)
Mutual labels:  jupyter-notebook, deeplearning
Learningdl
Companion code for a three-month, from-zero introduction to deep learning (TensorFlow edition)
Stars: ✭ 238 (-48.71%)
Mutual labels:  jupyter-notebook, deeplearning
Pytorch Original Transformer
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Currently included IWSLT pretrained models.
Stars: ✭ 411 (-11.42%)
Mutual labels:  jupyter-notebook, deeplearning
Udemy derinogrenmeyegiris
Udemy Derin Öğrenmeye Giriş Kursunun Uygulamaları ve Daha Fazlası
Stars: ✭ 239 (-48.49%)
Mutual labels:  jupyter-notebook, deeplearning
Pytorch Tutorials Examples And Books
PyTorch 1.x tutorials, examples, and some books I found; a curated collection of up-to-date PyTorch 1.x tutorials, examples, and books (updated irregularly)
Stars: ✭ 346 (-25.43%)
Mutual labels:  jupyter-notebook, deeplearning
Text Classification
Text Classification through CNN, RNN & HAN using Keras
Stars: ✭ 216 (-53.45%)
Mutual labels:  jupyter-notebook, deeplearning
Text summurization abstractive methods
Multiple implementations of abstractive text summarization, using Google Colab
Stars: ✭ 359 (-22.63%)
Mutual labels:  jupyter-notebook, deeplearning
Portrait Segmentation
Real-time portrait segmentation for mobile devices
Stars: ✭ 358 (-22.84%)
Mutual labels:  jupyter-notebook, deeplearning
Deep Learning In Production
Develop production ready deep learning code, deploy it and scale it
Stars: ✭ 216 (-53.45%)
Mutual labels:  jupyter-notebook, deeplearning
Monk object detection
A one-stop repository for low-code easily-installable object detection pipelines.
Stars: ✭ 437 (-5.82%)
Mutual labels:  jupyter-notebook, deeplearning
Deeplearning cv notes
📓 Deep learning and CV notes.
Stars: ✭ 223 (-51.94%)
Mutual labels:  jupyter-notebook, deeplearning
Dl tutorial
Tutorials for deep learning
Stars: ✭ 247 (-46.77%)
Mutual labels:  jupyter-notebook, deeplearning
Paddlehelix
Bio-Computing Platform featuring Large-Scale Representation Learning and Multi-Task Deep Learning (the "PaddleHelix" bio-computing toolkit)
Stars: ✭ 213 (-54.09%)
Mutual labels:  jupyter-notebook, deeplearning
T81 558 deep learning
Washington University (in St. Louis) Course T81-558: Applications of Deep Neural Networks
Stars: ✭ 4,152 (+794.83%)
Mutual labels:  jupyter-notebook, deeplearning
Release
Deep Reinforcement Learning for de-novo Drug Design
Stars: ✭ 201 (-56.68%)
Mutual labels:  jupyter-notebook, deeplearning
Learnopencv
Learn OpenCV : C++ and Python Examples
Stars: ✭ 15,385 (+3215.73%)
Mutual labels:  jupyter-notebook, deeplearning
Magnet
Deep Learning Projects that Build Themselves
Stars: ✭ 351 (-24.35%)
Mutual labels:  jupyter-notebook, deeplearning
Deep Learning Resources
Deep learning resources from beginner to advanced; a collection of deep learning materials for everyone
Stars: ✭ 422 (-9.05%)
Mutual labels:  jupyter-notebook, deeplearning

Additive-Margin-Softmax

This is an implementation of the paper "Additive Margin Softmax for Face Verification".

The training logic is heavily inspired by David Sandberg's FaceNet; check it out if you are interested.

The model structure can be found in ./models/resface.py, and the loss head in AM-softmax.py.
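For readers new to the loss, the core idea of AM-softmax can be sketched in a few lines of NumPy. This is a minimal illustration of the paper's formulation (scaled cosine logits with an additive margin on the target class, defaults s=30, m=0.35), not this repo's TensorFlow implementation:

```python
import numpy as np

def am_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    """AM-softmax: logits are s*(cos(theta_y) - m) for the target class and
    s*cos(theta_j) otherwise, followed by ordinary softmax cross-entropy."""
    # L2-normalize embeddings (rows) and class weights (columns) so the
    # inner products are plain cosine similarities
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = x @ w                                   # shape (batch, num_classes)
    # subtract the additive margin m from the target-class cosine only
    onehot = np.eye(w.shape[1])[labels]
    logits = s * (cos - m * onehot)
    # numerically stable softmax cross-entropy
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()
```

Because the margin is subtracted from the target-class cosine, the loss with m > 0 is always at least as large as the plain softmax loss, which is what forces a margin between classes during training.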

Usage

Step1: Align Dataset

See the folder "align", which is forked entirely from insightface. The default image size is (112, 96); in this repository, all trained faces share that size. Use the alignment code to align both your training data and your validation data (e.g. LFW) first. align_lfw.py works for both; you can ignore the other scripts such as align_insight and align_dlib.

python align_lfw.py --input-dir [train data dir] --output-dir [aligned output dir]

Step2: Train AM-softmax

Read the parse_arguments() function carefully to configure the parameters. If you are new to face recognition, simply run the command below after aligning the dataset; the default settings will handle the rest.

python train.py --data_dir [aligned train data] --random_flip --learning_rate -1 --learning_rate_schedule_file ./data/learning_rate_AM_softmax.txt --lfw_dir [aligned lfw data] --keep_probability 0.8 --weight_decay 5e-4
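The --learning_rate -1 flag tells the script to read the rate from the schedule file instead of using a fixed value. A minimal parser sketch, assuming the file follows Sandberg facenet's "epoch: learning_rate" line convention with "#" comments (the exact file format is an assumption, not taken from this repo):

```python
def parse_lr_schedule(path):
    """Parse a schedule file with one '<epoch>: <lr>' pair per line.
    Assumed format based on Sandberg facenet's convention."""
    schedule = []
    with open(path) as f:
        for line in f:
            line = line.split('#', 1)[0].strip()  # drop comments/blanks
            if not line:
                continue
            epoch, lr = line.split(':')
            schedule.append((int(epoch), float(lr)))
    return schedule

def lr_for_epoch(schedule, epoch):
    """Return the rate of the last schedule entry whose epoch has started."""
    lr = schedule[0][1]
    for start, rate in schedule:
        if epoch >= start:
            lr = rate
    return lr
```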

Also note that the accuracy on LFW is not obtained via cross-validation. Read the source code for details. Thanks again to Sandberg for his extraordinary code.
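The simple (non-cross-validated) verification mentioned above amounts to sweeping a cosine-similarity threshold over embedding pairs. An illustrative sketch, not this repo's evaluation code:

```python
import numpy as np

def cosine_verify(emb_a, emb_b, same, num_thresholds=400):
    """Pick the cosine-similarity threshold that maximizes pair accuracy.
    emb_a, emb_b: (n, d) embeddings of the two images in each pair;
    same: boolean array, True when the pair shows the same identity."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = (a * b).sum(axis=1)                  # per-pair cosine similarity
    best_acc, best_t = 0.0, 0.0
    for t in np.linspace(-1.0, 1.0, num_thresholds):
        acc = ((sims > t) == same).mean()       # predict "same" above t
        if acc > best_acc:
            best_acc, best_t = acc, t
    return best_acc, best_t
```

Selecting the threshold on the same pairs it is scored on is exactly why such a number is optimistic compared to the standard 10-fold LFW protocol.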

News

Date Update
2018-02-11 Currently the model only reaches 97.6%. There may be some bugs or irregular preprocessing; once it reaches > 99%, the detailed configuration will be posted here.
2018-02-14 Accuracy on LFW now reaches 99.3% using only Resface36 and flipped-concatenate validation.
2018-02-15 After fixing bugs in the training code, Resface20 reaches 99.33% and takes only 4 hours to converge. Notice: this model is trained on VGGFace2 without removing the overlap between VGGFace2 and LFW, so the performance is a little higher than the 98.98% (m=0.35) reported in the original paper, which was trained on CASIA with the LFW overlaps removed.
2018-02-17 Using L-Resnet50E-IR, proposed in this paper, reaches 99.42%. I also noticed that the alignment method is crucial to accuracy; the quality of the alignment algorithm may be the bottleneck of a modern face recognition system.
2018-02-28 Just for fun, I tried m=0.2 with Resface20; accuracy on LFW reaches 99.47%. All experiments I have done used AdamOptimizer without weight decay; SGD (with/without momentum) and RMSProp actually performed really badly in my experiments. My assumption is that optimizer implementations differ between frameworks (e.g. Caffe and TensorFlow).
2018-03-05 Added training logic and alignment code.
2018-04-17 Fixed bugs in the evaluation code. Uploaded the new, deeper model "LResnet50E_IR" proposed in insightface, which performs better than Resface20 and Resface36.
2018-08-29 Recently I revisited this code and found that the weight_decay setting for the last fully connected layer was wrong, which led to the earlier odd experimental conclusions. It has now been fixed. To follow the standard evaluation protocol on LFW, the evaluation code has also been modified. The latest result: Resface20 (BN) + VGGFace2 + weight_decay 5e-4 + batch_size 256 + momentum achieves 0.995 +/- 0.003 on LFW. Furthermore, with this code it is easy to reach 99.7%+ on LFW with deeper models. One big problem with this code is that it loads the name list of all images into memory at the beginning, which takes a huge amount of memory. Also, the current dataset consists of many small image files, which makes loading and transferring them inefficient, so TFRecord is recommended to speed up training.
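The "flipped-concatenate validation" mentioned in the entries above can be sketched as follows. Here embed_fn is a hypothetical stand-in for the trained network's embedding function; this is a minimal sketch, not the repo's evaluation code:

```python
import numpy as np

def flipped_concat_embedding(image, embed_fn):
    """Concatenate the embeddings of an image and its horizontal flip,
    then renormalize, giving a single doubled-length test-time descriptor."""
    e1 = embed_fn(image)
    e2 = embed_fn(image[:, ::-1])   # horizontally flipped copy (H, W layout)
    v = np.concatenate([e1, e2])
    return v / np.linalg.norm(v)    # unit-normalize the concatenated vector
```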

lfw accuracy

Adam w/o weight_decay: [training-curve image]

Momentum with weight_decay: see ./tfboard/resface20_mom_weightdecay.png

My Chinese blog about face recognition systems

https://xraft.github.io/2018/03/21/FaceRecognition/
It covers the experimental details of this repo. Feedback and advice are welcome!
