FaceLivenessDetection-SDK: 3D Passive Face Liveness Detection (Anti-Spoofing) & Deepfake Detection. A single image is needed to compute the liveness score. 99.67% accuracy on our dataset and perfect scores on multiple public datasets (NUAA, CASIA FASD, MSU...).
Stars: ✭ 85 (+123.68%)
Mutual labels: biometrics, face-recognition, face-detection, spoofing
Speaker-Recognition: This repo contains my attempt to create a Speaker Recognition and Verification system using SideKit-1.3.1.
Stars: ✭ 94 (+147.37%)
Mutual labels: gmm, speaker-recognition, speaker-verification, gmm-ubm
Face.evolve.pytorch: 🔥🔥 High-Performance Face Recognition Library on PaddlePaddle & PyTorch 🔥🔥
Stars: ✭ 2,719 (+7055.26%)
Mutual labels: feature-extraction, face-recognition, face-detection
Surfboard: Novoic's audio feature extraction library
Stars: ✭ 318 (+736.84%)
Mutual labels: signal-processing, feature-extraction, speech-processing
Huawei-Challenge-Speaker-Identification: Trained speaker embedding deep learning models and evaluation pipelines in PyTorch and TensorFlow for speaker recognition.
Stars: ✭ 34 (-10.53%)
Mutual labels: speaker-recognition, speaker-verification, speech-processing
Dynamic Graph Representation: Official PyTorch implementation of Dynamic Graph Representation for iris/face recognition
Stars: ✭ 22 (-42.11%)
Mutual labels: feature-extraction, biometrics, face-recognition
torchsubband: PyTorch implementation of subband decomposition
Stars: ✭ 63 (+65.79%)
Mutual labels: signal-processing, speech-processing
spafe: 🔉 Simplified Python Audio Features Extraction
Stars: ✭ 310 (+715.79%)
Mutual labels: signal-processing, speech-processing
pyssp: Python speech signal processing library
Stars: ✭ 18 (-52.63%)
Mutual labels: signal-processing, speech-processing
Speech Feature Extraction: Feature extraction of the speech signal is the initial stage of any speech recognition system.
Stars: ✭ 78 (+105.26%)
Mutual labels: signal-processing, feature-extraction
antropy: AntroPy, entropy and complexity of (EEG) time-series in Python
Stars: ✭ 111 (+192.11%)
Mutual labels: signal-processing, feature-extraction
Awesome Speech Enhancement: A tutorial for speech enhancement researchers and practitioners. The purpose of this repo is to organize the world's resources for speech enhancement and make them universally accessible and useful.
Stars: ✭ 257 (+576.32%)
Mutual labels: signal-processing, speech-processing
Php Ml: PHP-ML, a Machine Learning library for PHP
Stars: ✭ 7,900 (+20689.47%)
Mutual labels: regression, feature-extraction
Computer Vision Guide: 📖 This guide helps you understand the basics of computerized images and develop computer vision projects with OpenCV. Includes Python, Java, JavaScript, C#, and C++ examples.
Stars: ✭ 244 (+542.11%)
Mutual labels: feature-extraction, face-recognition
Shifter: Pitch shifter using WSOLA and resampling, implemented in Python 3
Stars: ✭ 22 (-42.11%)
Mutual labels: signal-processing, speech-processing
Speech signal processing and classification: Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification, that is, developing two-class classifiers which can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) could also be derived. These traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian Mixture Model based classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as Deep Neural Networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources will be used toward achieving our goal, such as KALDI. Comparisons will be made against [6-8].
Stars: ✭ 155 (+307.89%)
Mutual labels: feature-extraction, speech-processing
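The MFCC front-end described in the entry above (framing, windowing, spectrum, mel filterbank, log compression, DCT) can be sketched in plain NumPy. This is an illustrative sketch, not code from the repo; the frame length, hop size, filter count, and function names are assumed defaults, and production systems would use a library such as librosa or Kaldi instead.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_ceps=13):
    # 1. Split into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # 3. Mel filterbank energies with log compression.
    fb = mel_filterbank(n_filters, n_fft, sr)
    log_energies = np.log(power @ fb.T + 1e-10)
    # 4. DCT-II to decorrelate; keep the first n_ceps coefficients.
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps),
                                    (2 * n + 1) / (2.0 * n_filters)))
    return log_energies @ basis.T

# Example: 0.1 s of a 440 Hz tone at 16 kHz -> one feature row per frame.
t = np.arange(0, 0.1, 1 / 16000)
feats = mfcc(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # (8, 13): 8 frames, 13 cepstral coefficients each
```

The same framing-plus-transform skeleton underlies LPC and PLP features as well; only the spectral modeling step changes.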
Face-Recognition-Raspberry-Pi-64-bits: Recognize 2000+ faces on your Raspberry Pi 4 with database auto-fill and anti-spoofing
Stars: ✭ 48 (+26.32%)
Mutual labels: face-recognition, face-detection
Strugatzki: Algorithms for matching audio file similarities. Mirror of https://git.iem.at/sciss/Strugatzki
Stars: ✭ 38 (+0%)
Mutual labels: signal-processing, feature-extraction
DeepFaceRecognition: Face Recognition with Transfer Learning
Stars: ✭ 16 (-57.89%)
Mutual labels: face-recognition, face-detection
Face-Recognition: A Java application for face recognition under expressions, occlusions, and pose variations.
Stars: ✭ 55 (+44.74%)
Mutual labels: face-recognition, face-detection