Speechbrain.github.io: The SpeechBrain project aims to build a novel speech toolkit fully based on PyTorch. With SpeechBrain, users can easily create speech processing systems ranging from speech recognition (both HMM/DNN and end-to-end) to speaker recognition, speech enhancement, speech separation, multi-microphone speech processing, and many others.
Gcc Nmf: Real-time GCC-NMF Blind Speech Separation and Enhancement
Vq Vae Speech: PyTorch implementation of VQ-VAE + WaveNet by [Chorowski et al., 2019] and VQ-VAE on speech signals by [van den Oord et al., 2017]
Vocgan: VocGAN, a High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network
Speech signal processing and classification: Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification; that is, developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian mixture model classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
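The entry above describes the classic MFCC front end (framing, windowing, mel filterbank, cepstral transform). As an illustration only, here is a minimal NumPy sketch of that pipeline; the function names, frame sizes, and the synthetic 440 Hz test signal are all assumptions for the example, not code from the project:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters evenly spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(signal, sr, frame_len=0.025, hop=0.010,
         n_filters=26, n_ceps=13, n_fft=512):
    # 1. Slice the signal into overlapping, Hamming-windowed frames.
    flen, fhop = int(frame_len * sr), int(hop * sr)
    n_frames = 1 + (len(signal) - flen) // fhop
    frames = np.stack([signal[i * fhop: i * fhop + flen]
                       for i in range(n_frames)])
    frames = frames * np.hamming(flen)
    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3. Log mel-filterbank energies.
    log_e = np.log(np.maximum(power @ mel_filterbank(n_filters, n_fft, sr).T,
                              1e-10))
    # 4. DCT-II over the filterbank axis keeps the spectral-envelope shape.
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps),
                                    (2 * n + 1) / (2.0 * n_filters)))
    return log_e @ basis.T

sr = 16000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440.0 * t)  # one second of a 440 Hz tone
feats = mfcc(sig, sr)
print(feats.shape)  # -> (98, 13): 98 frames, 13 cepstral coefficients each
```

The resulting per-frame feature vectors are what would be fed to the GMM, K-nearest neighbor, or DNN classifiers mentioned above; a production system would instead use a tested implementation such as those in KALDI.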
Tutorial separation: This repo summarizes the tutorials, datasets, papers, code and tools for the speech separation and speaker extraction tasks. You are kindly invited to submit pull requests.
Dtln: Tensorflow 2.x implementation of the DTLN real-time speech denoising model, with TF-Lite, ONNX and real-time audio processing support.
Zzz Retired openstt: RETIRED - OpenSTT is now retired. If you would like more information on Mycroft AI's open source STT projects, please visit:
Pb bss: Collection of EM algorithms for blind source separation of audio signals
Deepvoice3 pytorch: PyTorch implementation of convolutional neural network-based text-to-speech synthesis models
Nonautoreggenprogress: Tracking the progress in non-autoregressive generation (translation, transcription, etc.)
Tf Kaldi Speaker: Neural speaker recognition/verification system based on Kaldi and Tensorflow
Sptk: A modified version of the Speech Signal Processing Toolkit (SPTK)
Dnc: Discriminative Neural Clustering for Speaker Diarisation
Fullsubnet: PyTorch implementation of "A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech Enhancement."
Keras Sincnet: Keras (Tensorflow) implementation of SincNet (Mirco Ravanelli, Yoshua Bengio - https://github.com/mravanelli/SincNet)
Pncc: An implementation of Power Normalized Cepstral Coefficients (PNCC)
Pyannote Audio: Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
Rte Speech Generator: Natural language processing to generate new speeches for the President of Turkey.
Sincnet: SincNet is a neural architecture for efficiently processing raw audio samples.
Audino: Open source audio annotation tool for humans™
Awesome Diarization: A curated list of awesome speaker diarization papers, libraries, datasets, and other resources.
Uspeech: Speech recognition toolkit for the Arduino
Pase: Problem Agnostic Speech Encoder
Surfboard: Novoic's audio feature extraction library
Nnmnkwii: Library to build speech synthesis systems, designed for easy and fast prototyping.
Pysptk: A Python wrapper for the Speech Signal Processing Toolkit (SPTK).
Awesome Speech Enhancement: A tutorial for speech enhancement researchers and practitioners. The purpose of this repo is to organize the world's resources for speech enhancement and make them universally accessible and useful.
hifigan-denoiser: HiFi-GAN, High Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks
vak: a neural network toolbox for animal vocalizations and bioacoustics
speechportal: (1st place at HopHacks) A dynamic WebVR memory palace for speech training, utilizing natural language processing and the Google Street View API
scim: [WIP] Speech recognition toolbox written in Nim, based on Arraymancer.
LIUM: Scripts for the LIUM SpkDiarization tools
ttslearn: Library for the book "Pythonで学ぶ音声合成" (Text-to-speech with Python)
UniSpeech: Large Scale Self-Supervised Learning for Speech
SpeechEnhancement: Combining Weighted Multi-resolution STFT Loss and Distance Fusion to Optimize Speech Enhancement Generative Adversarial Networks
pyssp: Python speech signal processing library
CNN-VAD: A Convolutional Neural Network based Voice Activity Detector for Smartphones
Speech-Backbones: The main repository of open-sourced speech technology by Huawei Noah's Ark Lab.
open-speech-corpora: 💎 A list of accessible speech corpora for ASR, TTS, and other speech technologies
QuantumSpeech-QCNN: IEEE ICASSP 21 - Quantum Convolution Neural Networks for Speech Processing and Automatic Speech Recognition