
voithru / voice-activity-detection

License: MIT
PyTorch implementation of SELF-ATTENTIVE VAD, ICASSP 2021

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to voice-activity-detection

android-vad
This VAD library processes audio in real time using a GMM to identify the presence of human speech in audio that mixes speech and noise.
Stars: ✭ 64 (-21.95%)
Mutual labels:  vad, voice-activity-detection
Ffsubsync
Automagically synchronize subtitles with video.
Stars: ✭ 5,167 (+6201.22%)
Mutual labels:  vad, voice-activity-detection
spokestack-android
Extensible Android mobile voice framework: wakeword, ASR, NLU, and TTS. Easily add voice to any Android app!
Stars: ✭ 52 (-36.59%)
Mutual labels:  vad, voice-activity-detection
spokestack-ios
Spokestack: give your iOS app a voice interface!
Stars: ✭ 27 (-67.07%)
Mutual labels:  vad, voice-activity-detection
cobra
On-device voice activity detection (VAD) powered by deep learning.
Stars: ✭ 76 (-7.32%)
Mutual labels:  vad, voice-activity-detection
rVADfast
This is the Python library for an unsupervised, fast method for robust voice activity detection (rVAD), as in the paper rVAD: An Unsupervised Segment-Based Robust Voice Activity Detection Method.
Stars: ✭ 80 (-2.44%)
Mutual labels:  voice-activity-detection
Huawei-Challenge-Speaker-Identification
Trained speaker-embedding deep learning models and evaluation pipelines in PyTorch and TensorFlow for speaker recognition.
Stars: ✭ 34 (-58.54%)
Mutual labels:  voice-activity-detection
voice gender detection
♂️♀️ Detect a person's gender from a voice file (90.7% +/- 1.3% accuracy).
Stars: ✭ 51 (-37.8%)
Mutual labels:  voice-activity-detection
python-webrtc-audio-processing
Python bindings of WebRTC Audio Processing
Stars: ✭ 140 (+70.73%)
Mutual labels:  vad
react-native-spokestack
Spokestack: give your React Native app a voice interface!
Stars: ✭ 53 (-35.37%)
Mutual labels:  voice-activity-detection
Datadriven-GPVAD
The codebase for Data-driven general-purpose voice activity detection.
Stars: ✭ 81 (-1.22%)
Mutual labels:  voice-activity-detection
vad-plotter
Plotter for NEXRAD VWP retrievals.
Stars: ✭ 18 (-78.05%)
Mutual labels:  vad
Vad
Android VAD library based on WebRTC VAD.
Stars: ✭ 24 (-70.73%)
Mutual labels:  vad
object centric VAD
A TensorFlow re-implementation of the CVPR 2019 paper "Object-centric Auto-Encoders and Dummy Anomalies for Abnormal Event Detection in Video".
Stars: ✭ 89 (+8.54%)
Mutual labels:  vad
Noisetorch
Real-time microphone noise suppression on Linux.
Stars: ✭ 5,199 (+6240.24%)
Mutual labels:  voice-activity-detection
rVAD
Matlab and Python libraries for an unsupervised method for robust voice activity detection (rVAD), as in the paper rVAD: An Unsupervised Segment-Based Robust Voice Activity Detection Method.
Stars: ✭ 46 (-43.9%)
Mutual labels:  voice-activity-detection
open-speech-corpora
💎 A list of accessible speech corpora for ASR, TTS, and other Speech Technologies
Stars: ✭ 841 (+925.61%)
Mutual labels:  voice-activity-detection

SELF-ATTENTIVE VAD: CONTEXT-AWARE DETECTION OF VOICE FROM NOISE (ICASSP 2021)

PyTorch implementation of SELF-ATTENTIVE VAD | Paper | Dataset

Yong Rae Jo, Youngki Moon, Won Ik Cho, and Geun Sik Jo

Voithru Inc., Inha University, Seoul National University.

2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Abstract

Recent voice activity detection (VAD) schemes have aimed at leveraging the decent neural architectures, but few were successful with applying the attention network due to its high reliance on the encoder-decoder framework. This has often let the built systems have a high dependency on the recurrent neural networks, which are costly and sometimes less context-sensitive considering the scale and property of acoustic frames. To cope with this issue with the self-attention mechanism and achieve a simple, powerful, and environment-robust VAD, we first adopt the self-attention architecture in building up the modules for voice detection and boosted prediction. Our model surpasses the previous neural architectures in view of low signal-to-noise ratio and noisy real-world scenarios, at the same time displaying the robustness regarding the noise types. We make the test labels on movie data publicly available for the fair competition and future progress.
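
For intuition, the following is a minimal, hypothetical PyTorch sketch of a self-attentive frame-level VAD classifier. It is not the repository's exact model: the feature dimension, layer sizes, and window length are assumptions, and positional encoding as well as the boosted-prediction stage are omitted for brevity.

# A minimal sketch, not the repository's model: a Transformer encoder over
# acoustic frames with a per-frame speech/non-speech classifier. Feature
# dimension, layer sizes, and window length are illustrative; positional
# encoding and the boosted-prediction stage are omitted.
import torch
import torch.nn as nn

class SelfAttentiveVADSketch(nn.Module):
    def __init__(self, feat_dim=40, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(d_model, 1)  # one logit per frame

    def forward(self, frames):
        # frames: (batch, time, feat_dim) acoustic features, e.g. MFCCs
        x = self.encoder(self.input_proj(frames))              # self-attention over the context window
        return torch.sigmoid(self.classifier(x)).squeeze(-1)   # (batch, time) speech probabilities

model = SelfAttentiveVADSketch()
probs = model(torch.randn(1, 200, 40))                         # 200 frames of 40-dim features
print(probs.shape)                                             # torch.Size([1, 200])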

Getting started

Installation

$ git clone https://github.com/voithru/voice-activity-detection.git
$ cd voice-activity-detection

Linux

$ pip install -r requirements.txt

Main

$ python main.py --help

Training

$ python main.py train --help
Usage: main.py train [OPTIONS] CONFIG_PATH
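
For example, a training run might look like the following; the config path (and its .yaml extension) is only a placeholder for one of the repository's own configuration files:

$ python main.py train path/to/config.yaml   # CONFIG_PATH here is a placeholder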

Evaluation

$ python main.py evaluate --help
Usage: main.py evaluate [OPTIONS] EVAL_PATH CHECKPOINT_PATH
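
For example, with placeholder paths for the evaluation data and a trained checkpoint:

$ python main.py evaluate path/to/eval_data path/to/checkpoint.pth   # both paths are placeholders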

Inference

$ python main.py predict --help
Usage: main.py predict [OPTIONS] AUDIO_PATH CHECKPOINT_PATH
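
For example, with placeholder paths for an input audio file and a trained checkpoint:

$ python main.py predict path/to/audio.wav path/to/checkpoint.pth   # both paths are placeholders

The command's actual output format is defined by the repository; purely as an illustration of common VAD post-processing (not code from this project), the snippet below turns per-frame speech probabilities into (start, end) segments by thresholding and merging consecutive speech frames. The 0.5 threshold and 10 ms frame hop are assumptions.

# Illustrative only, not from this repository: convert per-frame speech
# probabilities into (start_sec, end_sec) segments. The 0.5 threshold and
# 10 ms frame hop are assumptions.
def probs_to_segments(probs, hop_sec=0.01, threshold=0.5):
    segments, start = [], None
    for i, p in enumerate(probs):
        if p >= threshold and start is None:
            start = i * hop_sec                        # a speech run begins
        elif p < threshold and start is not None:
            segments.append((start, i * hop_sec))      # the run ends
            start = None
    if start is not None:                              # run still open at the end
        segments.append((start, len(probs) * hop_sec))
    return segments

print(probs_to_segments([0.1, 0.8, 0.9, 0.2, 0.7]))    # speech in frames 1-2 and frame 4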

Overview

Figure. Overall architecture

Results

Figure. Test result - Noisex92

Figure. Test result - Real-world audio dataset

Citation

@INPROCEEDINGS{9413961,
  author={Jo, Yong Rae and Ki Moon, Young and Cho, Won Ik and Sik Jo, Geun},
  booktitle={ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, 
  title={Self-Attentive VAD: Context-Aware Detection of Voice from Noise}, 
  year={2021},
  volume={},
  number={},
  pages={6808-6812},
  doi={10.1109/ICASSP39728.2021.9413961}}