Noise2Noise-audio denoising without clean training data: Source code for the paper "Speech Denoising without Clean Training Data: a Noise2Noise Approach", accepted at the INTERSPEECH 2021 conference. The paper tackles the heavy dependence of deep-learning-based audio denoising methods on clean speech data by showing that it is possible to train deep speech denoisi…
Stars: ✭ 49 (+2.08%)
UniSpeech: Large Scale Self-Supervised Learning for Speech
Stars: ✭ 224 (+366.67%)
Voice-Separation-and-Enhancement: A framework for quickly testing and comparing multi-channel speech enhancement and separation methods, such as DSB, MVDR, LCMV, and GEVD beamforming, as well as ICA, FastICA, IVA, AuxIVA, OverIVA, ILRMA, and FastMNMF.
Stars: ✭ 60 (+25%)
fdndlp: A speech dereverberation algorithm, also known as WPE (weighted prediction error).
Stars: ✭ 115 (+139.58%)
Espnet: End-to-End Speech Processing Toolkit
Stars: ✭ 4,533 (+9343.75%)
Voice-Denoising-AN: A Conditional Generative Adversarial Network (cGAN) adapted for the task of denoising noisy voice auditory images. The base architecture is adapted from Pix2Pix.
Stars: ✭ 42 (-12.5%)
ConvolutionaNeuralNetworksToEnhanceCodedSpeech: In this work we propose two postprocessing approaches that apply convolutional neural networks (CNNs) in either the time domain or the cepstral domain to enhance coded speech without any modification of the codecs. The time-domain approach follows an end-to-end fashion, while the cepstral-domain approach uses analysis-synthesis with cepstral d…
Stars: ✭ 25 (-47.92%)
torchsubband: PyTorch implementation of subband decomposition
Stars: ✭ 63 (+31.25%)
open-speech-corpora: 💎 A list of accessible speech corpora for ASR, TTS, and other speech technologies
Stars: ✭ 841 (+1652.08%)
hifigan-denoiser: HiFi-GAN: High Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks
Stars: ✭ 88 (+83.33%)
Gcc Nmf: Real-time GCC-NMF Blind Speech Separation and Enhancement
Stars: ✭ 231 (+381.25%)
SpleeterRT: Real-time monaural source separation based on a fully convolutional neural network operating in the time-frequency domain.
Stars: ✭ 111 (+131.25%)
Vq Vae Speech: PyTorch implementation of VQ-VAE + WaveNet by [Chorowski et al., 2019] and VQ-VAE on speech signals by [van den Oord et al., 2017]
Stars: ✭ 187 (+289.58%)
Speech Enhancement: Deep-neural-network-based speech enhancement toolkit
Stars: ✭ 167 (+247.92%)
DiViMe: ACLEW Diarization Virtual Machine
Stars: ✭ 28 (-41.67%)
rankpruning: 🧹 Formerly for binary classification with noisy labels. Replaced by cleanlab.
Stars: ✭ 81 (+68.75%)
Speech signal processing and classification: Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification, i.e., developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the human speech production system suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system contribution (e.g., vocal tract) and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian mixture model classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
Stars: ✭ 155 (+222.92%)
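The MFCC pipeline described in the entry above (framing, windowing, mel filterbank, log, DCT) can be sketched in plain NumPy/SciPy. This is a minimal illustration, not code from the listed repository; the parameter values (16 kHz sample rate, 25 ms frames with 10 ms hop, 26 filters, 13 coefficients) are common defaults chosen for the example.

```python
import numpy as np
from scipy.fftpack import dct

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel-spaced filters covering 0..sr/2."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    # 1. Split the utterance into overlapping frames, apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # 2. Magnitude spectrum of each frame.
    mag = np.abs(np.fft.rfft(frames, n_fft))
    # 3. Mel filterbank energies -> log -> DCT gives cepstral coefficients.
    energies = mag @ mel_filterbank(n_filters, n_fft, sr).T
    log_energies = np.log(energies + 1e-10)
    return dct(log_energies, type=2, axis=1, norm='ortho')[:, :n_ceps]

# Example: 1 second of noise as a stand-in utterance.
sig = np.random.randn(16000)
feats = mfcc(sig)
print(feats.shape)  # (98, 13): one 13-dim feature vector per frame
```

LPC or PLP features would replace step 3 with an all-pole fit (e.g., Levinson-Durbin on the autocorrelation sequence), but the framing and windowing front end stays the same.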
Dtln: TensorFlow 2.x implementation of the DTLN real-time speech denoising model, with TF-Lite, ONNX, and real-time audio processing support.
Stars: ✭ 147 (+206.25%)
MITK-Diffusion: MITK Diffusion - Official part of the Medical Imaging Interaction Toolkit
Stars: ✭ 47 (-2.08%)
bLUe PYSIDE: A simple and comprehensive image editor featuring automatic contrast enhancement, color correction, 3D LUT creation, raw postprocessing, exposure fusion, and noise reduction
Stars: ✭ 41 (-14.58%)
Deepvoice3 pytorch: PyTorch implementation of convolutional-neural-network-based text-to-speech synthesis models
Stars: ✭ 1,654 (+3345.83%)
Nonautoreggenprogress: Tracking the progress in non-autoregressive generation (translation, transcription, etc.)
Stars: ✭ 118 (+145.83%)
Speechbrain.github.io: The SpeechBrain project aims to build a novel speech toolkit fully based on PyTorch. With SpeechBrain, users can easily create speech processing systems for speech recognition (both HMM/DNN and end-to-end), speaker recognition, speech enhancement, speech separation, multi-microphone speech processing, and many other tasks.
Stars: ✭ 242 (+404.17%)
Shifter: Pitch shifter using WSOLA and resampling, implemented in Python 3
Stars: ✭ 22 (-54.17%)
Vocgan: VocGAN: A High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network
Stars: ✭ 158 (+229.17%)
Deep-Restore-PyTorch: Deep CNN for learning image restoration without clean data!
Stars: ✭ 59 (+22.92%)
Tutorial separation: This repo summarizes the tutorials, datasets, papers, code, and tools for the speech separation and speaker extraction tasks. Pull requests are welcome.
Stars: ✭ 151 (+214.58%)
Zzz Retired openstt: RETIRED - OpenSTT is now retired. If you would like more information on Mycroft AI's open source STT projects, please visit:
Stars: ✭ 146 (+204.17%)
awesome-keyword-spotting: A curated list of awesome Speech Keyword Spotting (Wake-Up Word Detection) resources.
Stars: ✭ 150 (+212.5%)
Pb bss: Collection of EM algorithms for blind source separation of audio signals
Stars: ✭ 127 (+164.58%)
deepbeam: Deep-learning-based speech beamforming
Stars: ✭ 58 (+20.83%)
noise-synthesis: Rethinking Noise Synthesis and Modeling in Raw Denoising (ICCV 2021)
Stars: ✭ 63 (+31.25%)
Tf Kaldi Speaker: Neural speaker recognition/verification system based on Kaldi and TensorFlow
Stars: ✭ 117 (+143.75%)
CResMD: (ECCV 2020) Interactive Multi-Dimension Modulation with Dynamic Controllable Residual Learning for Image Restoration
Stars: ✭ 92 (+91.67%)
bob: Bob is a free signal-processing and machine-learning toolbox originally developed by the Biometrics group at the Idiap Research Institute in Switzerland. Mirrored from https://gitlab.idiap.ch/bob/bob
Stars: ✭ 38 (-20.83%)
Vokaturiandroid: Emotion recognition from speech on Android.
Stars: ✭ 79 (+64.58%)
Sptk: A modified version of the Speech Signal Processing Toolkit (SPTK)
Stars: ✭ 71 (+47.92%)
Gcommandspytorch: ConvNets for audio recognition using the Google Commands Dataset
Stars: ✭ 65 (+35.42%)
audio noise clustering: An experiment with a variety of clustering (and clustering-like) techniques to reduce noise in an audio speech recording. Results: https://dodiku.github.io/audio_noise_clustering/results/
Stars: ✭ 24 (-50%)
aydin: Aydin, user-friendly, fast, self-supervised image denoising for all.
Stars: ✭ 105 (+118.75%)
Dnc: Discriminative Neural Clustering for Speaker Diarisation
Stars: ✭ 60 (+25%)
Fullsubnet: PyTorch implementation of "A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech Enhancement."
Stars: ✭ 51 (+6.25%)
napari-hub: Discover, install, and share napari plugins
Stars: ✭ 44 (-8.33%)
Keras Sincnet: Keras (TensorFlow) implementation of SincNet (Mirco Ravanelli, Yoshua Bengio; https://github.com/mravanelli/SincNet)
Stars: ✭ 47 (-2.08%)
Formant Analyzer: iOS application for finding formants in spoken sounds
Stars: ✭ 43 (-10.42%)
spafe: 🔉 Simplified Python Audio Features Extraction
Stars: ✭ 310 (+545.83%)