
128 open source projects that are alternatives to or similar to awesome-speech-enhancement

Noise2Noise-audio denoising without clean training data
Source code for the paper titled "Speech Denoising without Clean Training Data: a Noise2Noise Approach", accepted at the INTERSPEECH 2021 conference. The paper tackles the heavy dependence of deep learning based audio denoising methods on clean speech data by showing that it is possible to train deep speech denoising networks using only noisy speech samples (a minimal sketch of the idea follows this entry).
Stars: ✭ 49 (+2.08%)
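A hedged illustration of the Noise2Noise idea described above, not the paper's actual code: the network, noise model, and shapes below are placeholder assumptions; the key point is that the loss compares the network output to a second noisy realization rather than to clean speech.

```python
import torch
import torch.nn as nn

# Hypothetical 1-D convolutional denoiser; the paper uses its own architecture.
denoiser = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=9, padding=4),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
criterion = nn.MSELoss()

def noise2noise_step(waveforms):
    """One training step in which both the input and the target are noisy.

    For this toy sketch we corrupt the same waveforms twice with independent
    noise; the essential point is that no clean target ever enters the loss.
    """
    noisy_input  = waveforms + 0.1 * torch.randn_like(waveforms)
    noisy_target = waveforms + 0.1 * torch.randn_like(waveforms)
    optimizer.zero_grad()
    loss = criterion(denoiser(noisy_input), noisy_target)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: a batch of 8 one-second 16 kHz waveforms, shape (batch, channels, samples).
loss = noise2noise_step(torch.randn(8, 1, 16000))
```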
UniSpeech
UniSpeech - Large Scale Self-Supervised Learning for Speech
Stars: ✭ 224 (+366.67%)
wavelet-denoiser
A wavelet-based audio denoiser written in Python
Stars: ✭ 29 (-39.58%)
Mutual labels:  speech-processing, denoising
Voice-Separation-and-Enhancement
A framework for quickly testing and comparing multi-channel speech enhancement and separation methods, such as DSB, MVDR, LCMV, and GEVD beamforming, as well as ICA, FastICA, IVA, AuxIVA, OverIVA, ILRMA, and FastMNMF (a minimal delay-and-sum sketch follows this entry).
Stars: ✭ 60 (+25%)
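As referenced above, a minimal sketch of the simplest of the listed methods, delay-and-sum beamforming (DSB); it is not taken from this framework, and the integer-sample delays and toy signals are assumptions.

```python
import numpy as np

def delay_and_sum(multichannel, delays_samples):
    """Time-domain delay-and-sum beamformer (DSB).

    multichannel:   array of shape (num_mics, num_samples)
    delays_samples: integer per-microphone delays (in samples) that time-align
                    the desired source across channels.
    np.roll wraps around at the edges, which is acceptable for a sketch; a real
    implementation would use fractional-delay filtering instead.
    """
    num_mics = multichannel.shape[0]
    aligned = [np.roll(multichannel[m], -delays_samples[m]) for m in range(num_mics)]
    return np.mean(aligned, axis=0)

# Toy example: 4 microphones observe the same target with different delays plus
# independent noise; averaging the aligned channels reinforces the target and
# attenuates the uncorrelated noise.
rng = np.random.default_rng(0)
target = rng.standard_normal(16000)
delays = [0, 3, 5, 8]
mics = np.stack([np.roll(target, d) + 0.5 * rng.standard_normal(16000) for d in delays])
enhanced = delay_and_sum(mics, delays)
```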
fdndlp
A speech dereverberation algorithm, also known as WPE (weighted prediction error)
Stars: ✭ 115 (+139.58%)
Espnet
End-to-End Speech Processing Toolkit
Stars: ✭ 4,533 (+9343.75%)
Speech Enhancement MMSE-STSA
Statistical model-based speech enhancement using the MMSE-STSA (minimum mean-square error short-time spectral amplitude) estimator (a minimal gain-function sketch follows this entry).
Stars: ✭ 54 (+12.5%)
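A minimal sketch of the classic Ephraim-Malah MMSE-STSA gain that this kind of enhancer applies per STFT bin, as referenced above; it is not this repository's code, and the noise-power and a priori SNR values below are crude placeholders (real systems track them over time, e.g., with decision-directed smoothing).

```python
import numpy as np
from scipy.special import i0e, i1e  # exponentially scaled modified Bessel functions

def mmse_stsa_gain(xi, gamma):
    """Ephraim-Malah MMSE-STSA spectral gain per frequency bin.

    xi:    a priori SNR estimate
    gamma: a posteriori SNR estimate
    The exponentially scaled Bessel functions i0e/i1e already include the
    exp(-v/2) factor, which avoids overflow for large v.
    """
    v = xi / (1.0 + xi) * gamma
    return (np.sqrt(np.pi) / 2.0) * (np.sqrt(v) / gamma) * (
        (1.0 + v) * i0e(v / 2.0) + v * i1e(v / 2.0)
    )

# Toy usage on one noisy STFT frame with placeholder estimates.
noisy_frame = np.fft.rfft(np.random.randn(512))
noise_power = 512.0                              # hypothetical flat noise-power estimate
gamma = np.abs(noisy_frame) ** 2 / noise_power   # a posteriori SNR
xi = np.maximum(gamma - 1.0, 1e-3)               # crude a priori SNR estimate
enhanced_frame = mmse_stsa_gain(xi, gamma) * noisy_frame
```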
Voice-Denoising-AN
A conditional generative adversarial network (cGAN) adapted for denoising noisy voice auditory images. The base architecture is adapted from Pix2Pix.
Stars: ✭ 42 (-12.5%)
Mutual labels:  denoising, speech-enhancement
ConvolutionaNeuralNetworksToEnhanceCodedSpeech
In this work we propose two postprocessing approaches applying convolutional neural networks (CNNs) either in the time domain or the cepstral domain to enhance the coded speech without any modification of the codecs. The time domain approach follows an end-to-end fashion, while the cepstral domain approach uses analysis-synthesis with cepstral domain features.
Stars: ✭ 25 (-47.92%)
torchsubband
Pytorch implementation of subband decomposition
Stars: ✭ 63 (+31.25%)
open-speech-corpora
💎 A list of accessible speech corpora for ASR, TTS, and other Speech Technologies
Stars: ✭ 841 (+1652.08%)
hifigan-denoiser
HiFi-GAN: High Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks
Stars: ✭ 88 (+83.33%)
Mutual labels:  speech-processing, denoising
Gcc Nmf
Real-time GCC-NMF Blind Speech Separation and Enhancement
Stars: ✭ 231 (+381.25%)
Mutual labels:  speech-processing
SpleeterRT
Real-time monaural source separation based on a fully convolutional neural network operating in the time-frequency domain.
Stars: ✭ 111 (+131.25%)
Mutual labels:  speech-enhancement
Vq Vae Speech
PyTorch implementation of VQ-VAE + WaveNet by [Chorowski et al., 2019] and VQ-VAE on speech signals by [van den Oord et al., 2017]
Stars: ✭ 187 (+289.58%)
Mutual labels:  speech-processing
Speech Enhancement
Deep neural network based speech enhancement toolkit
Stars: ✭ 167 (+247.92%)
Mutual labels:  speech-processing
DiViMe
ACLEW Diarization Virtual Machine
Stars: ✭ 28 (-41.67%)
Mutual labels:  speech-processing
rankpruning
🧹 Formerly for binary classification with noisy labels. Replaced by cleanlab.
Stars: ✭ 81 (+68.75%)
Mutual labels:  denoising
Speech signal processing and classification
Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step for any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification, i.e., developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the human speech production system suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system contribution (e.g., the vocal tract) and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These, so to speak, traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian mixture model classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary dataset (MEEI dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8]. (A minimal MFCC + GMM classification sketch follows this entry.)
Stars: ✭ 155 (+222.92%)
Mutual labels:  speech-processing
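As noted in the description, a minimal sketch of the MFCC front end plus a GMM back end for the two-class task; librosa and scikit-learn stand in for the repo's own feature extraction and classifiers, and the file names are hypothetical.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=16000, n_mfcc=13):
    """Load an utterance and return its per-frame MFCC vectors (frames x coefficients)."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_class_gmm(wav_paths, n_components=8):
    """Fit one diagonal-covariance GMM on the pooled frame-level MFCCs of a class."""
    feats = np.vstack([mfcc_frames(p) for p in wav_paths])
    return GaussianMixture(n_components=n_components, covariance_type="diag",
                           random_state=0).fit(feats)

def classify(path, gmm_healthy, gmm_disorder):
    """Pick the class whose GMM gives the higher average frame log-likelihood."""
    frames = mfcc_frames(path)
    return "healthy" if gmm_healthy.score(frames) > gmm_disorder.score(frames) else "disorder"

# Hypothetical usage (the repo targets the MEEI database, which is not freely redistributable):
# gmm_h = train_class_gmm(["healthy_01.wav", "healthy_02.wav"])
# gmm_d = train_class_gmm(["paralysis_01.wav", "paralysis_02.wav"])
# print(classify("test_utterance.wav", gmm_h, gmm_d))
```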
Dtln
TensorFlow 2.x implementation of the DTLN real-time speech denoising model, with TF-Lite, ONNX, and real-time audio processing support.
Stars: ✭ 147 (+206.25%)
Mutual labels:  speech-processing
react-native-spokestack
Spokestack: give your React Native app a voice interface!
Stars: ✭ 53 (+10.42%)
Mutual labels:  speech-processing
Wavenet vocoder
WaveNet vocoder
Stars: ✭ 1,926 (+3912.5%)
Mutual labels:  speech-processing
A Convolutional Recurrent Neural Network For Real Time Speech Enhancement
A minimal unofficial PyTorch implementation of "A Convolutional Recurrent Neural Network for Real-Time Speech Enhancement" (CRN)
Stars: ✭ 123 (+156.25%)
Mutual labels:  speech-processing
MITK-Diffusion
MITK Diffusion - Official part of the Medical Imaging Interaction Toolkit
Stars: ✭ 47 (-2.08%)
Mutual labels:  denoising
bLUe PYSIDE
bLUe - A simple and comprehensive image editor featuring automatic contrast enhancement, color correction, 3D LUT creation, raw postprocessing, exposure fusion and noise reduction
Stars: ✭ 41 (-14.58%)
Mutual labels:  noise-reduction
Deepvoice3 pytorch
PyTorch implementation of convolutional neural network-based text-to-speech synthesis models
Stars: ✭ 1,654 (+3345.83%)
Mutual labels:  speech-processing
Nonautoreggenprogress
Tracking the progress in non-autoregressive generation (translation, transcription, etc.)
Stars: ✭ 118 (+145.83%)
Mutual labels:  speech-processing
Speechbrain.github.io
The SpeechBrain project aims to build a novel speech toolkit fully based on PyTorch. With SpeechBrain, users can easily create speech processing systems, including speech recognition (both HMM/DNN and end-to-end), speaker recognition, speech enhancement, speech separation, multi-microphone speech processing, and many others.
Stars: ✭ 242 (+404.17%)
Mutual labels:  speech-processing
Neural Voice Cloning With Few Samples
Implementation of Neural Voice Cloning with Few Samples Research Paper by Baidu
Stars: ✭ 211 (+339.58%)
Mutual labels:  speech-processing
Shifter
A pitch shifter using WSOLA and resampling, implemented in Python 3
Stars: ✭ 22 (-54.17%)
Mutual labels:  speech-processing
React Native Dialogflow
A React-Native Bridge for the Google Dialogflow (API.AI) SDK
Stars: ✭ 182 (+279.17%)
Mutual labels:  speech-processing
mann-for-speech-separation
A Neural Turing Machine for source separation, implemented in TensorFlow
Stars: ✭ 18 (-62.5%)
Mutual labels:  speech-separation
Vocgan
VocGAN: A High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network
Stars: ✭ 158 (+229.17%)
Mutual labels:  speech-processing
Deep-Restore-PyTorch
Deep CNN for learning image restoration without clean data!
Stars: ✭ 59 (+22.92%)
Mutual labels:  noise-reduction
Tutorial separation
This repo summarizes tutorials, datasets, papers, code, and tools for the speech separation and speaker extraction tasks. Pull requests are welcome.
Stars: ✭ 151 (+214.58%)
Mutual labels:  speech-processing
spatial-temporal-LCMV
Multi-channel microphone array noise reduction
Stars: ✭ 43 (-10.42%)
Mutual labels:  noise-reduction
Zzz Retired openstt
RETIRED - OpenSTT is now retired. If you would like more information on Mycroft AI's open source STT projects, please visit:
Stars: ✭ 146 (+204.17%)
Mutual labels:  speech-processing
awesome-keyword-spotting
This repository is a curated list of awesome Speech Keyword Spotting (Wake-Up Word Detection).
Stars: ✭ 150 (+212.5%)
Mutual labels:  speech-processing
Pb bss
Collection of EM algorithms for blind source separation of audio signals
Stars: ✭ 127 (+164.58%)
Mutual labels:  speech-processing
deepbeam
Deep learning-based speech beamforming
Stars: ✭ 58 (+20.83%)
Mutual labels:  speech-enhancement
noise-synthesis
Rethinking Noise Synthesis and Modeling in Raw Denoising (ICCV2021)
Stars: ✭ 63 (+31.25%)
Mutual labels:  denoising
Tfg Voice Conversion
Deep Learning-based Voice Conversion system
Stars: ✭ 115 (+139.58%)
Mutual labels:  speech-processing
Tf Kaldi Speaker
Neural speaker recognition/verification system based on Kaldi and Tensorflow
Stars: ✭ 117 (+143.75%)
Mutual labels:  speech-processing
Wave U Net For Speech Enhancement
An implementation of Wave-U-Net in PyTorch, adapted to speech enhancement.
Stars: ✭ 106 (+120.83%)
Mutual labels:  speech-processing
Pytorch Kaldi Neural Speaker Embeddings
A lightweight neural speaker embedding extraction pipeline based on Kaldi and PyTorch.
Stars: ✭ 99 (+106.25%)
Mutual labels:  speech-processing
CResMD
(ECCV 2020) Interactive Multi-Dimension Modulation with Dynamic Controllable Residual Learning for Image Restoration
Stars: ✭ 92 (+91.67%)
Mutual labels:  denoising
bob
Bob is a free signal-processing and machine learning toolbox originally developed by the Biometrics group at Idiap Research Institute, in Switzerland. - Mirrored from https://gitlab.idiap.ch/bob/bob
Stars: ✭ 38 (-20.83%)
Mutual labels:  speech-processing
Vokaturiandroid
Emotion recognition from speech on Android.
Stars: ✭ 79 (+64.58%)
Mutual labels:  speech-processing
Sptk
A modified version of Speech Signal Processing Toolkit (SPTK)
Stars: ✭ 71 (+47.92%)
Mutual labels:  speech-processing
Gcommandspytorch
ConvNets for Audio Recognition using Google Commands Dataset
Stars: ✭ 65 (+35.42%)
Mutual labels:  speech-processing
audio noise clustering
An experiment with a variety of clustering (and clustering-like) techniques to reduce noise in an audio speech recording. Results: https://dodiku.github.io/audio_noise_clustering/results/
Stars: ✭ 24 (-50%)
Mutual labels:  noise-reduction
Deep-Clustering-for-Speech-Separation
A PyTorch implementation of "Deep Clustering: Discriminative Embeddings for Segmentation and Separation"
Stars: ✭ 99 (+106.25%)
Mutual labels:  speech-separation
aydin
Aydin — User-friendly, Fast, Self-Supervised Image Denoising for All.
Stars: ✭ 105 (+118.75%)
Mutual labels:  denoising
Dnc
Discriminative Neural Clustering for Speaker Diarisation
Stars: ✭ 60 (+25%)
Mutual labels:  speech-processing
Fullsubnet
PyTorch implementation of "A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech Enhancement."
Stars: ✭ 51 (+6.25%)
Mutual labels:  speech-processing
napari-hub
Discover, install, and share napari plugins
Stars: ✭ 44 (-8.33%)
Mutual labels:  denoising
Keras Sincnet
Keras (tensorflow) implementation of SincNet (Mirco Ravanelli, Yoshua Bengio - https://github.com/mravanelli/SincNet)
Stars: ✭ 47 (-2.08%)
Mutual labels:  speech-processing
Formant Analyzer
iOS application for finding formants in spoken sounds
Stars: ✭ 43 (-10.42%)
Mutual labels:  speech-processing
spafe
🔉 spafe: Simplified Python Audio Features Extraction
Stars: ✭ 310 (+545.83%)
Mutual labels:  speech-processing
Calculate-SNR-SDR
A script to calculate SNR and SDR using Python (a minimal sketch of both metrics follows this entry).
Stars: ✭ 76 (+58.33%)
Mutual labels:  speech-separation
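As referenced above, a minimal sketch of the two metrics: the plain SNR against a reference and the scale-invariant SDR (SI-SDR) commonly used in separation papers; whether these match the script's exact formulas is an assumption.

```python
import numpy as np

def snr_db(reference, estimate):
    """Signal-to-noise ratio in dB, treating (estimate - reference) as the noise."""
    noise = estimate - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

def si_sdr_db(reference, estimate):
    """Scale-invariant SDR: project the estimate onto the reference before measuring distortion."""
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    return 10.0 * np.log10(np.sum(target ** 2) / np.sum((estimate - target) ** 2))

# Toy check: an estimate that is the reference plus mild noise scores high on both metrics.
rng = np.random.default_rng(0)
ref = rng.standard_normal(16000)
est = ref + 0.1 * rng.standard_normal(16000)
print(snr_db(ref, est), si_sdr_db(ref, est))
```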
1-60 of 128 similar projects