hifigan-denoiser - HiFi-GAN: High Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks
Stars: ✭ 88 (+203.45%)
awesome-speech-enhancement - A curated list of awesome Speech Enhancement papers, libraries, datasets, and other resources.
Stars: ✭ 48 (+65.52%)
Fullsubnet - PyTorch implementation of "A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech Enhancement."
Stars: ✭ 51 (+75.86%)
Tutorial separation - This repo summarizes tutorials, datasets, papers, code, and tools for the speech separation and speaker extraction tasks. Pull requests are kindly invited.
Stars: ✭ 151 (+420.69%)
Awesome Diarization - A curated list of awesome Speaker Diarization papers, libraries, datasets, and other resources.
Stars: ✭ 673 (+2220.69%)
Vokaturiandroid - Emotion recognition from speech on Android.
Stars: ✭ 79 (+172.41%)
Pyannote Audio - Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
Stars: ✭ 978 (+3272.41%)
CCWT - Complex Continuous Wavelet Transform
Stars: ✭ 136 (+368.97%)
Surfboard - Novoic's audio feature extraction library
Stars: ✭ 318 (+996.55%)
Pb bss - Collection of EM algorithms for blind source separation of audio signals
Stars: ✭ 127 (+337.93%)
Awesome Speech Enhancement - A tutorial for Speech Enhancement researchers and practitioners. The purpose of this repo is to organize the world’s resources for speech enhancement and make them universally accessible and useful.
Stars: ✭ 257 (+786.21%)
Gcommandspytorch - ConvNets for Audio Recognition using Google Commands Dataset
Stars: ✭ 65 (+124.14%)
WaveletCNN - Wavelet CNN, Texture Classification in Keras
Stars: ✭ 19 (-34.48%)
Formant Analyzer - iOS application for finding formants in spoken sounds
Stars: ✭ 43 (+48.28%)
Vocgan - VocGAN: A High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network
Stars: ✭ 158 (+444.83%)
Sincnet - SincNet is a neural architecture for efficiently processing raw audio samples.
Stars: ✭ 764 (+2534.48%)
IMS-Toucan - Text-to-Speech Toolkit of the Speech and Language Technologies Group at the University of Stuttgart. Its development goals are simplicity, modularity, controllability, and multilinguality.
Stars: ✭ 295 (+917.24%)
Uspeech - Speech recognition toolkit for Arduino
Stars: ✭ 448 (+1444.83%)
Zzz Retired openstt - RETIRED - OpenSTT is now retired. For more information on Mycroft AI's open source STT projects, please visit:
Stars: ✭ 146 (+403.45%)
Pysptk - A Python wrapper for the Speech Signal Processing Toolkit (SPTK).
Stars: ✭ 297 (+924.14%)
YANGstraight source - Analytic signal-based source information analysis for YANGstraight and real-time interactive tools
Stars: ✭ 31 (+6.9%)
SpeechTransProgress - Tracking the progress in end-to-end speech translation
Stars: ✭ 139 (+379.31%)
Deepvoice3 pytorch - PyTorch implementation of convolutional neural network-based text-to-speech synthesis models
Stars: ✭ 1,654 (+5603.45%)
speechportal - (1st place at HopHacks) A dynamic WebVR memory palace for speech training, utilizing natural language processing and the Google Street View API
Stars: ✭ 14 (-51.72%)
Gcc Nmf - Real-time GCC-NMF Blind Speech Separation and Enhancement
Stars: ✭ 231 (+696.55%)
naps - An experiment for building gateware for the axiom micro / beta using nmigen and yosys
Stars: ✭ 28 (-3.45%)
Sptk - A modified version of the Speech Signal Processing Toolkit (SPTK)
Stars: ✭ 71 (+144.83%)
Vq Vae Speech - PyTorch implementation of VQ-VAE + WaveNet by [Chorowski et al., 2019] and VQ-VAE on speech signals by [van den Oord et al., 2017]
Stars: ✭ 187 (+544.83%)
Dnc - Discriminative Neural Clustering for Speaker Diarisation
Stars: ✭ 60 (+106.9%)
Keras Sincnet - Keras (TensorFlow) implementation of SincNet (Mirco Ravanelli, Yoshua Bengio - https://github.com/mravanelli/SincNet)
Stars: ✭ 47 (+62.07%)
Speech Enhancement - Deep neural network-based speech enhancement toolkit
Stars: ✭ 167 (+475.86%)
Pncc - An implementation of Power-Normalized Cepstral Coefficients (PNCC)
Stars: ✭ 40 (+37.93%)
lens - The official network explorer for Wavelet.
Stars: ✭ 28 (-3.45%)
Rte Speech Generator - Natural Language Processing to generate new speeches for the President of Turkey.
Stars: ✭ 22 (-24.14%)
Speech signal processing and classification - Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification: developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian Mixture Model based classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
Stars: ✭ 155 (+434.48%)
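The entry above motivates LPCs from the all-pole model of speech production. As a minimal illustration of that idea (not code from the listed repo), here is a pure-Python sketch of the autocorrelation method with the Levinson-Durbin recursion; the function name and test signal are hypothetical:

```python
def lpc_coefficients(frame, order):
    """Estimate LPC coefficients a[0..order] (a[0] = 1) of one speech
    frame via the autocorrelation method and Levinson-Durbin recursion.
    Returns (coefficients, residual prediction error)."""
    n = len(frame)
    # Biased autocorrelation estimates r[0..order]
    r = [sum(frame[i] * frame[i + k] for i in range(n - k))
         for k in range(order + 1)]
    a = [1.0] + [0.0] * order
    err = r[0]
    for m in range(1, order + 1):
        # Reflection coefficient from the current prediction residual
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / err
        new_a = a[:]
        new_a[m] = k
        for i in range(1, m):
            new_a[i] = a[i] + k * a[m - i]
        a = new_a
        err *= (1.0 - k * k)  # error shrinks monotonically
    return a, err

# A decaying exponential is exactly predictable by a first-order
# all-pole model, so the order-1 LPC coefficient should be close to -0.9.
frame = [0.9 ** i for i in range(50)]
coeffs, residual = lpc_coefficients(frame, 1)
```

In a real pipeline these coefficients would feed cepstral conversion (LPC-derived cepstra) or be replaced by MFCC/PLP front ends as the entry describes.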
Audino - Open source audio annotation tool for humans™
Stars: ✭ 740 (+2451.72%)
aydin - Aydin: User-friendly, Fast, Self-Supervised Image Denoising for All.
Stars: ✭ 105 (+262.07%)
Dtln - TensorFlow 2.x implementation of the DTLN real-time speech denoising model, with TF-Lite, ONNX, and real-time audio processing support.
Stars: ✭ 147 (+406.9%)
Pase - Problem-Agnostic Speech Encoder
Stars: ✭ 348 (+1100%)
ITKIsotropicWavelets - External Module for ITK, implementing Isotropic Wavelets and Riesz Filter for multiscale phase analysis.
Stars: ✭ 12 (-58.62%)
Nnmnkwii - Library to build speech synthesis systems designed for easy and fast prototyping.
Stars: ✭ 308 (+962.07%)
UHV-OTS-Speech - A data annotation pipeline to generate high-quality, large-scale speech datasets with machine pre-labeling and fully manual auditing.
Stars: ✭ 94 (+224.14%)
vak - A neural network toolbox for animal vocalizations and bioacoustics
Stars: ✭ 21 (-27.59%)
nhwcodec - NHW: A Next-Generation Image Compression Codec
Stars: ✭ 56 (+93.1%)
scim - [WIP] Speech recognition toolbox written in Nim, based on Arraymancer.
Stars: ✭ 17 (-41.38%)
Nonautoreggenprogress - Tracking the progress in non-autoregressive generation (translation, transcription, etc.)
Stars: ✭ 118 (+306.9%)
bob - Bob is a free signal processing and machine learning toolbox originally developed by the Biometrics group at the Idiap Research Institute in Switzerland. Mirrored from https://gitlab.idiap.ch/bob/bob
Stars: ✭ 38 (+31.03%)
napari-hub - Discover, install, and share napari plugins
Stars: ✭ 44 (+51.72%)
Smile - Statistical Machine Intelligence & Learning Engine
Stars: ✭ 5,412 (+18562.07%)
Speechbrain.github.io - The SpeechBrain project aims to build a novel speech toolkit fully based on PyTorch. With SpeechBrain, users can easily create speech processing systems spanning speech recognition (both HMM/DNN and end-to-end), speaker recognition, speech enhancement, speech separation, multi-microphone speech processing, and more.
Stars: ✭ 242 (+734.48%)