RicherMans / Speaker-Anti-Spoofing-Classifiers

Licence: other
Baselines and Classifiers for speaker anti-spoofing detection



Speaker-Anti-Spoofing-Classifiers

This repository provides a basic speaker anti-spoofing system using neural networks in PyTorch 1.0+ and Python 3.6+.

Python requirements are:

pandas==0.25.3
tqdm==4.28.1
torch==1.3.1
matplotlib==3.1.1
numpy==1.17.4
fire==0.2.1
pytorch_ignite==0.2.1
h5py==2.10.0
six==1.13.0
adabound==0.0.5
ignite==1.1.0
librosa==0.7.1
metrics==0.3.3
pypeln==0.1.10
PyYAML==5.2
scikit_learn==0.22

Moreover, to download the data, you will need wget.

Evaluation scripts are taken directly from the baseline of the ASVspoof 2019 challenge.

Datasets

The most widely used datasets for spoofing detection are currently:

  • ASVspoof2019, encompassing both logical and physical attacks.
  • ASVspoof2017, encompassing only physical attacks, with a focus on in-the-wild recording devices and scenes.
  • ASVspoof2015, encompassing only logical attacks, with a focus on speech synthesis and voice conversion attacks.

For mixed logical and physical attacks, the AVspoof dataset (of which the BTAS16 dataset is a subpart) can also be used. The dataset is publicly available, but only for research purposes.

More recently, the Fake-or-Real (FoR) dataset was introduced; it uses openly available synthesizers such as Baidu, Google, and Amazon to create spoofs for logical access.

Feature extraction

Features are extracted using the librosa toolkit. We provide four commonly used features: CQT, (linear) log-spectrograms, log-Mel spectrograms, and raw waveforms.
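To illustrate what the (linear) log-spectrogram front-end computes, here is a numpy-only sketch; the repository itself relies on librosa, and the frame length and hop size below are illustrative assumptions rather than its actual settings:

```python
import numpy as np

def log_spectrogram(wave, n_fft=512, hop=160, eps=1e-10):
    """Log-magnitude spectrogram (frames x bins) of a mono waveform:
    windowed framing, real FFT, then a log for dynamic-range compression."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(wave) - n_fft) // hop
    frames = np.stack([wave[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) + eps)

# One second of a 440 Hz tone at 16 kHz
sr = 16000
t = np.arange(sr) / sr
feat = log_spectrogram(np.sin(2 * np.pi * 440 * t))
print(feat.shape)   # (97, 257): 97 frames, 257 frequency bins
```

The energy peak lands near bin 14, i.e. 440 Hz at this resolution (16000 / 512 = 31.25 Hz per bin).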

Models

Currently, the most popular model (in my view) is LightCNN, the winner of the ASVspoof2017 challenge. Another model, CGCNN, which replaces the MFM activation with gated linear unit (GLU) activations, has been successfully employed in the ASVspoof2019 challenge. My current implementations can be seen in models.py.
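The difference between the two activations can be sketched in a few lines of numpy; this is a conceptual re-implementation, not the code in models.py, which applies them inside convolutional layers:

```python
import numpy as np

def mfm(x):
    """Max-Feature-Map (LightCNN): split the channel axis in half and
    keep the element-wise maximum of the two halves."""
    a, b = np.split(x, 2, axis=1)
    return np.maximum(a, b)

def glu(x):
    """Gated Linear Unit (CGCNN): one half of the channels gates the
    other half through a sigmoid."""
    a, b = np.split(x, 2, axis=1)
    return a / (1.0 + np.exp(-b))   # a * sigmoid(b)

x = np.random.randn(4, 8, 10, 10)   # (batch, channels, freq, time)
print(mfm(x).shape, glu(x).shape)   # both halve the channel axis: (4, 4, 10, 10)
```

Both activations halve the channel count, so a layer with 2C output filters yields C activated feature maps.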

All experiments conducted in this repository use 90% of the training set as training data and 10% as cross-validation. Development data is not used at all. All results are only evaluated on the respective evaluation set.

The baseline results are as follows:

Dataset   Feature   Model      EER
ASV15     Spec      LightCNN    0.68
ASV15     Spec      CGCNN       0.33
ASV17     Spec      LightCNN   12.18
ASV17     Spec      CGCNN      11.16
BTAS16    Spec      LightCNN    2.04
BTAS16    Spec      CGCNN       3.26
FoR-norm  Spec      LightCNN    5.43
FoR-norm  Spec      CGCNN      18.64

Config

The framework uses a combination of Google Fire and YAML parsing to provide a convenient interface for anti-spoofing model training. By default, one passes a *.yaml configuration file to any of the command scripts. However, parameters of the YAML file can also be overwritten on the fly:

python3 run.py train config/asv17.yaml --model MYMODEL searches for the model MYMODEL in models.py and runs the experiment using that model.
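Under the hood this is just a dictionary merge of the parsed YAML with the command-line keyword arguments. A minimal sketch, assuming a hypothetical parse_config helper (the repository's actual loader may differ):

```python
def parse_config(defaults, **overrides):
    """Merge command-line overrides into a YAML-parsed config dict.
    Keys given on the command line win over the file's values."""
    config = dict(defaults)          # copy, do not mutate the original
    for key, value in overrides.items():
        config[key] = value
    return config

# config/asv17.yaml might define the model and batch size ...
yaml_config = {"model": "LightCNN", "batch_size": 64}
# ... while --model MYMODEL overrides it on the fly
merged = parse_config(yaml_config, model="MYMODEL")
print(merged)  # {'model': 'MYMODEL', 'batch_size': 64}
```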

Other notable arguments are:

  • --model_args '{filters:[60,60]}' sets the filter sizes of a convolutional model to 60, 60.
  • --batch_size 32 --num_workers 2 sets the training hyperparameter batch_size as well as the number of asynchronous workers for data loading.
  • --transforms '[timemask,freqmask]' applies augmentations to the training data, defined in augment.py.
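The timemask and freqmask transforms follow the SpecAugment idea of zeroing a random block of time frames or frequency bins. A hypothetical numpy sketch; the repository's implementations in augment.py may use different parameters:

```python
import numpy as np

def timemask(spec, max_width=8, rng=None):
    """Zero out a random contiguous block of time frames (rows)."""
    rng = np.random.default_rng() if rng is None else rng
    width = int(rng.integers(1, max_width + 1))
    start = int(rng.integers(0, spec.shape[0] - width + 1))
    out = spec.copy()
    out[start:start + width, :] = 0.0
    return out

def freqmask(spec, max_width=8, rng=None):
    """Zero out a random contiguous block of frequency bins (columns)."""
    rng = np.random.default_rng() if rng is None else rng
    width = int(rng.integers(1, max_width + 1))
    start = int(rng.integers(0, spec.shape[1] - width + 1))
    out = spec.copy()
    out[:, start:start + width] = 0.0
    return out

spec = np.ones((100, 257))              # frames x bins
masked = freqmask(timemask(spec))       # apply both masks in sequence
```

The masks are drawn independently per call, so the network sees a different corrupted view of each spectrogram every epoch.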

Commands

The main script of this repository is run.py. Four commands are available:

  • train e.g., python3 run.py train config/asv17.yaml trains a specified model on specified data.
  • score e.g., python3 run.py score EXPERIMENT_PATH OUTPUTFILE.tsv --testlabel data/filelists/asv17/eval.tsv --testdata data/hdf5/asv17/spec/eval.h5 scores a given experiment and produces OUTPUTFILE.tsv containing the respective scores. End-to-end scoring is used: the genuine-class score represents the model's belief that an utterance is genuine. Only a single dataset can be scored.
  • evaluate_eer uses the library contained in evaluation/ to calculate an EER. Example usage: python3 run.py evaluate_eer experiments/asv17/LightCNN/SOMEEXPERIMENT/scored.txt data/filelists/asv17/eval.tsv output.txt. output.txt is generated with the results (which are also printed to the console).
  • run e.g., python3 run.py run config/asv17.yaml trains, scores and evaluates an experiment. Multiple test sets are supported via --testlabel '[a.tsv,b.tsv]' or by updating the config file. It is effectively train, score and evaluate_eer in one, and the recommended way of running any experiment.
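The EER reported by evaluate_eer is the operating point where false-acceptance and false-rejection rates are equal. A compact numpy sketch of the metric, assuming labels with 1 = genuine and 0 = spoof (the challenge scripts use a more precise interpolation-based computation):

```python
import numpy as np

def compute_eer(scores, labels):
    """Equal Error Rate from genuine-class scores and binary labels."""
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    order = np.argsort(scores)[::-1]          # accept the highest scores first
    labels = labels[order]
    # After accepting the top-k trials: fraction of spoofs accepted (FAR)
    # and fraction of genuine trials rejected (FRR).
    far = np.cumsum(labels == 0) / max((labels == 0).sum(), 1)
    frr = 1 - np.cumsum(labels == 1) / max((labels == 1).sum(), 1)
    idx = int(np.argmin(np.abs(far - frr)))   # point where the rates meet
    return float((far[idx] + frr[idx]) / 2)

print(compute_eer([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 0.0, perfectly separable
```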

Usage

For a simple run on, e.g., the ASVspoof2017 dataset, execute the following:

git clone https://github.com/RicherMans/Speaker-Anti-Spoofing-Classifiers
cd Speaker-Anti-Spoofing-Classifiers
pip3 install -r requirements.txt
cd data/scripts
bash download_asv17.sh
bash prepare_asv17.sh
cd ../../features/
python3 extract_feature.py ../data/filelists/asv17/train.tsv -o hdf5/asv17/spec/train.h5 # Extracts spectrogram features
python3 extract_feature.py ../data/filelists/asv17/eval.tsv -o hdf5/asv17/spec/eval.h5 #Spectrogram features
cd ../
python3 run.py run config/asv17.yaml # Runs LightCNN Model. Results will be displayed in the console and a directory experiments/asv17 will be created.
python3 run.py run config/asv17.yaml --model CGCNN # Runs CGCNN Model