meilfang / LMFD-PAD

Licence: other
Learnable Multi-level Frequency Decomposition and Hierarchical Attention Mechanism for Generalized Face Presentation Attack Detection

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to LMFD-PAD

DCAN
[AAAI 2020] Code release for "Domain Conditioned Adaptation Network" https://arxiv.org/abs/2005.06717
Stars: ✭ 27 (+0%)
Mutual labels:  attention-mechanism
CrabNet
Predict materials properties using only the composition information!
Stars: ✭ 57 (+111.11%)
Mutual labels:  attention-mechanism
resolutions-2019
A list of data mining and machine learning papers that I implemented in 2019.
Stars: ✭ 19 (-29.63%)
Mutual labels:  attention-mechanism
h-transformer-1d
Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
Stars: ✭ 121 (+348.15%)
Mutual labels:  attention-mechanism
Hierarchical-attention-network
My implementation of "Hierarchical Attention Networks for Document Classification" in Keras
Stars: ✭ 26 (-3.7%)
Mutual labels:  attention-mechanism
Hierarchical-Word-Sense-Disambiguation-using-WordNet-Senses
Word Sense Disambiguation using Word Specific models, All word models and Hierarchical models in Tensorflow
Stars: ✭ 33 (+22.22%)
Mutual labels:  attention-mechanism
SiGAT
source code for signed graph attention networks (ICANN2019) & SDGNN (AAAI2021)
Stars: ✭ 37 (+37.04%)
Mutual labels:  attention-mechanism
Multi-task-Conditional-Attention-Networks
A prototype version of our submitted paper: Conversion Prediction Using Multi-task Conditional Attention Networks to Support the Creation of Effective Ad Creatives.
Stars: ✭ 21 (-22.22%)
Mutual labels:  attention-mechanism
Compact-Global-Descriptor
Pytorch implementation of "Compact Global Descriptor for Neural Networks" (CGD).
Stars: ✭ 22 (-18.52%)
Mutual labels:  attention-mechanism
visdial
Visual Dialog: Light-weight Transformer for Many Inputs (ECCV 2020)
Stars: ✭ 27 (+0%)
Mutual labels:  attention-mechanism
Machine-Translation-Hindi-to-english-
Machine translation is the task of converting one language to other. Unlike the traditional phrase-based translation system which consists of many small sub-components that are tuned separately, neural machine translation attempts to build and train a single, large neural network that reads a sentence and outputs a correct translation.
Stars: ✭ 19 (-29.63%)
Mutual labels:  attention-mechanism
Brain-Tumor-Segmentation
Attention-Guided Version of 2D UNet for Automatic Brain Tumor Segmentation
Stars: ✭ 125 (+362.96%)
Mutual labels:  attention-mechanism
Attention mechanism-event-extraction
Attention mechanism in CNNs to extract events of interest
Stars: ✭ 17 (-37.04%)
Mutual labels:  attention-mechanism
efficient-attention
An implementation of the efficient attention module.
Stars: ✭ 191 (+607.41%)
Mutual labels:  attention-mechanism
long-short-transformer
Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch
Stars: ✭ 103 (+281.48%)
Mutual labels:  attention-mechanism
SequenceToSequence
A seq2seq with attention dialogue/MT model implemented by TensorFlow.
Stars: ✭ 11 (-59.26%)
Mutual labels:  attention-mechanism
NLP-paper
🎨🎨 NLP (Natural Language Processing) tutorial 🎨🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-14.81%)
Mutual labels:  attention-mechanism
dodrio
Exploring attention weights in transformer-based models with linguistic knowledge.
Stars: ✭ 233 (+762.96%)
Mutual labels:  attention-mechanism
stanford-cs231n-assignments-2020
This repository contains my solutions to the assignments for Stanford's CS231n "Convolutional Neural Networks for Visual Recognition" (Spring 2020).
Stars: ✭ 84 (+211.11%)
Mutual labels:  attention-mechanism
Linear-Attention-Mechanism
Attention mechanism
Stars: ✭ 27 (+0%)
Mutual labels:  attention-mechanism

LMFD-PAD


Note

This is the official repository of the paper accepted at WACV 2022: LMFD-PAD: Learnable Multi-level Frequency Decomposition and Hierarchical Attention Mechanism for Generalized Face Presentation Attack Detection. The paper can be found here.

Pipeline Overview

(Figure: pipeline overview)

Data preparation

Since all PAD datasets used in our work consist of videos, we sample 10 frames per video at evenly spaced time intervals. In addition, the ratio of bona fide to attack samples is balanced by simple duplication. Finally, CSV files are generated for training and evaluation (a sketch of this preprocessing is given after the CSV example below). The format of the dataset CSV file is:

image_path,label
/image_dir/image_file_1.png, bonafide
/image_dir/image_file_2.png, bonafide
/image_dir/image_file_3.png, attack
/image_dir/image_file_4.png, attack
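
As a rough illustration of this preprocessing, the sketch below samples 10 evenly spaced frames per video with OpenCV and writes a CSV in the format above. It is not part of this repository; the function names, directory layout, and use of OpenCV are assumptions.

import csv
import os

import cv2  # assumes OpenCV is installed (pip install opencv-python)

NUM_FRAMES = 10  # 10 frames per video, as described above

def sample_frames(video_path, out_dir):
    """Save NUM_FRAMES evenly spaced frames from a video; return saved paths."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // NUM_FRAMES, 1)
    saved = []
    for i in range(NUM_FRAMES):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * step)  # jump to the i-th sample position
        ok, frame = cap.read()
        if not ok:
            break
        path = os.path.join(out_dir, f"frame_{i}.png")
        cv2.imwrite(path, frame)
        saved.append(path)
    cap.release()
    return saved

def write_csv(rows, csv_path):
    """rows: iterable of (image_path, label), where label is 'bonafide' or 'attack'."""
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image_path", "label"])
        writer.writerows(rows)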

Training

The training code for intra-dataset and cross-dataset experiments is the same; intra_db_main.py and cross_db_main.py differ only in their evaluation metrics. (A sketch of how the CSV files might be consumed during training is given after the examples below.)

  1. Example of intra-dataset training and testing:
    python intra_db_main.py \
      --protocol_dir 'dir_containing_csv_files' \
      --backbone resnet50 \
      --pretrain True \
      --lr 0.001 \
      --batch_size 64 \
      --prefix 'custom_note'
  2. The cross-dataset training and testing command is similar:
    python cross_db_main.py \
      --protocol_dir 'dir_containing_csv_files' \
      --backbone resnet50 \
      --pretrain True \
      --lr 0.001 \
      --batch_size 64 \
      --prefix 'custom_note'
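
For orientation, here is a hypothetical sketch of how the generated CSV files could be loaded as a PyTorch dataset during training. The class name, the bonafide -> 1 / attack -> 0 label mapping, and the use of pandas and PIL are assumptions; the repository's actual data loader may differ.

import pandas as pd
import torch
from PIL import Image
from torch.utils.data import Dataset

class PADCsvDataset(Dataset):
    """Reads an image_path,label CSV in the format documented above."""

    def __init__(self, csv_file, transform=None):
        self.samples = pd.read_csv(csv_file)
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        row = self.samples.iloc[idx]
        image = Image.open(row["image_path"]).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        # Assumed label convention: bonafide -> 1, attack -> 0.
        label = 1 if row["label"].strip() == "bonafide" else 0
        return image, torch.tensor(label, dtype=torch.long)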

Results

The results of cross-dataset evaluation under different experimental settings on four face PAD datasets are shown below. More details can be found in the paper. (Figure: cross_db results)

Models

Four models, pre-trained under the four cross-dataset experimental settings, can be downloaded via Google Drive. Please use the following thresholds when testing these pre-trained weights: the thresholds of the icm_o, ocm_i, omi_c, and oci_m models are 0.7309441, 0.6971898, 0.613508, and 0.53312653, respectively. More information and a small test can be found in test.py. Please make sure to give the correct model path.
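
As a small, hypothetical illustration of applying these thresholds (see test.py for the actual evaluation code), assuming that higher scores indicate bona fide samples:

# Published per-setting decision thresholds (from the section above).
THRESHOLDS = {
    "icm_o": 0.7309441,
    "ocm_i": 0.6971898,
    "omi_c": 0.613508,
    "oci_m": 0.53312653,
}

def classify(score, setting):
    """Assumes higher scores indicate bona fide; see test.py for the real logic."""
    return "bonafide" if score >= THRESHOLDS[setting] else "attack"

print(classify(0.75, "icm_o"))  # -> bonafide
print(classify(0.60, "icm_o"))  # -> attack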

If you use the LMFD-HAM architecture in this repository, please cite the following paper:

@inproceedings{DBLP:conf/wacv/FangDKK22,
  author    = {Meiling Fang and
               Naser Damer and
               Florian Kirchbuchner and
               Arjan Kuijper},
  title     = {Learnable Multi-level Frequency Decomposition and Hierarchical Attention
               Mechanism for Generalized Face Presentation Attack Detection},
  booktitle = {{WACV}},
  pages     = {1131--1140},
  publisher = {{IEEE}},
  year      = {2022}
}

License

This project is licensed under the terms of the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. Copyright (c) 2020 Fraunhofer Institute for Computer Graphics Research IGD Darmstadt.
