
drethage / Speech Denoising Wavenet

License: MIT
A neural network for end-to-end speech denoising


Projects that are alternatives of or similar to Speech Denoising Wavenet

Wavenet vocoder
WaveNet vocoder
Stars: ✭ 1,926 (+273.26%)
Mutual labels:  speech, speech-processing, wavenet
Vq Vae Speech
PyTorch implementation of VQ-VAE + WaveNet by [Chorowski et al., 2019] and VQ-VAE on speech signals by [van den Oord et al., 2017]
Stars: ✭ 187 (-63.76%)
Mutual labels:  speech, speech-processing, wavenet
hifigan-denoiser
HiFi-GAN: High Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks
Stars: ✭ 88 (-82.95%)
Mutual labels:  speech, wavenet, speech-processing
ttslearn
ttslearn: Library accompanying the book "Pythonで学ぶ音声合成" (Text-to-Speech Synthesis with Python)
Stars: ✭ 158 (-69.38%)
Mutual labels:  speech, wavenet, speech-processing
Speechbrain.github.io
The SpeechBrain project aims to build a novel speech toolkit fully based on PyTorch. With SpeechBrain, users can easily create speech processing systems ranging from speech recognition (both HMM/DNN and end-to-end) to speaker recognition, speech enhancement, speech separation, multi-microphone speech processing, and many others.
Stars: ✭ 242 (-53.1%)
Mutual labels:  neural-networks, speech, speech-processing
Neural Voice Cloning With Few Samples
Implementation of the "Neural Voice Cloning with Few Samples" research paper by Baidu
Stars: ✭ 211 (-59.11%)
Mutual labels:  speech, speech-processing
Gcc Nmf
Real-time GCC-NMF Blind Speech Separation and Enhancement
Stars: ✭ 231 (-55.23%)
Mutual labels:  speech, speech-processing
Numpy Ml
Machine learning, in numpy
Stars: ✭ 11,100 (+2051.16%)
Mutual labels:  neural-networks, wavenet
Kerasdeepspeech
A Keras CTC implementation of Baidu's DeepSpeech for model experimentation
Stars: ✭ 245 (-52.52%)
Mutual labels:  neural-networks, speech
Source Separation Wavenet
A neural network for end-to-end music source separation
Stars: ✭ 185 (-64.15%)
Mutual labels:  neural-networks, wavenet
IMS-Toucan
Text-to-Speech Toolkit of the Speech and Language Technologies Group at the University of Stuttgart. Objectives of the development are simplicity, modularity, controllability and multilinguality.
Stars: ✭ 295 (-42.83%)
Mutual labels:  speech, speech-processing
speech-transformer
A Transformer implementation specialized for speech recognition tasks, using PyTorch.
Stars: ✭ 40 (-92.25%)
Mutual labels:  end-to-end, speech
UniSpeech
UniSpeech - Large Scale Self-Supervised Learning for Speech
Stars: ✭ 224 (-56.59%)
Mutual labels:  speech, speech-processing
React Native Dialogflow
A React-Native Bridge for the Google Dialogflow (API.AI) SDK
Stars: ✭ 182 (-64.73%)
Mutual labels:  speech, speech-processing
Sincnet
SincNet is a neural architecture for efficiently processing raw audio samples.
Stars: ✭ 764 (+48.06%)
Mutual labels:  neural-networks, speech-processing
End2end Asr Pytorch
End-to-End Automatic Speech Recognition on PyTorch
Stars: ✭ 175 (-66.09%)
Mutual labels:  speech, end-to-end
Wavenet Enhancement
Speech Enhancement using Bayesian WaveNet
Stars: ✭ 86 (-83.33%)
Mutual labels:  speech, wavenet
Tfg Voice Conversion
Deep Learning-based Voice Conversion system
Stars: ✭ 115 (-77.71%)
Mutual labels:  speech, speech-processing
Shifter
Pitch shifter using WSOLA and resampling, implemented in Python 3
Stars: ✭ 22 (-95.74%)
Mutual labels:  speech, speech-processing
LIUM
Scripts for LIUM SpkDiarization tools
Stars: ✭ 28 (-94.57%)
Mutual labels:  speech, speech-processing

A Wavenet For Speech Denoising

A neural network for end-to-end speech denoising, as described in: "A Wavenet For Speech Denoising"

Listen to denoised samples under varying noise conditions and SNRs here

Installation

It is recommended to use a virtual environment.

  1. git clone https://github.com/drethage/speech-denoising-wavenet.git
  2. pip install -r requirements.txt
  3. Install pygpu

The project currently requires Keras 1.2 and Theano 0.9.0; the large dilations present in the architecture are not supported by the current version of TensorFlow (1.2.0).
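The large-dilation constraint comes from the stacked dilated convolutions in the WaveNet. As a rough, illustrative sketch (the dilation pattern and filter width below are assumptions for illustration, not values read from this repository), the receptive field of such a stack grows linearly with the sum of the dilations:

```python
# Sketch: receptive field of stacked non-causal dilated convolutions.
# Each layer with dilation d and filter width w adds (w - 1) * d samples of context.
def receptive_field(dilations, filter_width=3):
    return sum((filter_width - 1) * d for d in dilations) + 1

# Assumed pattern: dilations 1, 2, 4, ..., 512, repeated three times.
dilations = [2 ** i for i in range(10)] * 3
print(receptive_field(dilations))  # 6139 samples under these assumptions
```

Doubling dilations let the receptive field grow exponentially in depth, which is exactly the behavior that the TensorFlow version mentioned above could not express at the time.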

Usage

A pre-trained model (the best-performing model described in the paper) can be found in sessions/001/models and is ready to be used out-of-the-box. The parameterization of this model is specified in sessions/001/config.json.

Download the dataset as described below.

Denoising:

Example: THEANO_FLAGS=optimizer=fast_compile,device=gpu python main.py --mode inference --config sessions/001/config.json --noisy_input_path data/NSDTSEA/noisy_testset_wav --clean_input_path data/NSDTSEA/clean_testset_wav

Speedup

To achieve faster denoising, one can increase the target-field length via the optional --target_field_length argument. This defines the number of samples that are denoised in a single forward propagation, saving redundant computation. In the following example, the target-field length is increased to roughly 10x that used during training, and the batch_size is reduced to 4.

Faster Example: THEANO_FLAGS=device=gpu python main.py --mode inference --target_field_length 16001 --batch_size 4 --config sessions/001/config.json --noisy_input_path data/NSDTSEA/noisy_testset_wav --clean_input_path data/NSDTSEA/clean_testset_wav

Training:

THEANO_FLAGS=device=gpu python main.py --mode training --config config.json

Configuration

A detailed description of all configurable parameters can be found in config.md.

Optional command-line arguments:

| Argument | Valid Inputs | Default | Description |
| --- | --- | --- | --- |
| mode | [training, inference] | training | |
| config | string | config.json | Path to JSON-formatted config file |
| print_model_summary | bool | False | Prints verbose summary of the model |
| load_checkpoint | string | None | Path to hdf5 file containing a snapshot of model weights |

Additional arguments during inference:

| Argument | Valid Inputs | Default | Description |
| --- | --- | --- | --- |
| one_shot | bool | False | Denoises each audio file in a single forward propagation |
| target_field_length | int | as defined in config.json | Overrides the parameter in config.json to denoise with a different target-field length than used in training |
| batch_size | int | as defined in config.json | Number of samples per batch |
| condition_value | int | 1 | Corresponds to speaker identity |
| clean_input_path | string | None | If supplied, SNRs of denoised samples are computed |
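When a clean reference is available, an SNR of this kind is conventionally the ratio of clean-signal power to residual-noise power, in dB. A minimal sketch of such a computation (an illustration of the standard formula, not this project's actual implementation):

```python
import math

def snr_db(clean, denoised):
    # SNR in dB: clean-signal power over residual-noise power
    signal = sum(s * s for s in clean)
    noise = sum((s - d) ** 2 for s, d in zip(clean, denoised))
    return 10 * math.log10(signal / noise)

clean = [0.0, 1.0, 0.0, -1.0] * 100
noisy = [s + 0.1 for s in clean]  # constant offset as a toy "noise"
print(round(snr_db(clean, noisy), 2))  # 10*log10(0.5/0.01) = 16.99 dB
```

A higher value after denoising than before indicates that the model removed more noise than it introduced distortion.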

Dataset

The "Noisy speech database for training speech enhancement algorithms and TTS models" (NSDTSEA) is used for training the model. It is provided by the University of Edinburgh, School of Informatics, Centre for Speech Technology Research (CSTR).

  1. Download here
  2. Extract to data/NSDTSEA