scim: [WIP] Speech recognition toolbox written in Nim, based on Arraymancer.
Stars: ✭ 17 (-32%)
awesome-speech-enhancement: A curated list of awesome Speech Enhancement papers, libraries, datasets, and other resources.
Stars: ✭ 48 (+92%)
spafe: 🔉 Simplified Python Audio Features Extraction
Stars: ✭ 310 (+1140%)
torchsubband: PyTorch implementation of subband decomposition
Stars: ✭ 63 (+152%)
openstt (retired): OpenSTT is now retired. If you would like more information on Mycroft AI's open source STT projects, please visit:
Stars: ✭ 146 (+484%)
Deepvoice3 pytorch: PyTorch implementation of convolutional neural network-based text-to-speech synthesis models
Stars: ✭ 1,654 (+6516%)
PyTelTools: Python Telemac Tools for post-processing tasks (includes a workflow)
Stars: ✭ 24 (-4%)
Gcommands pytorch: ConvNets for Audio Recognition using the Google Commands Dataset
Stars: ✭ 65 (+160%)
Tutorial separation: This repo summarizes tutorials, datasets, papers, code, and tools for the speech separation and speaker extraction tasks. Pull requests are welcome.
Stars: ✭ 151 (+504%)
Pb bss: Collection of EM algorithms for blind source separation of audio signals
Stars: ✭ 127 (+408%)
InstancedMotionVector: Shows how to render per-instance motion vectors with indirect instanced drawing in Unity.
Stars: ✭ 45 (+80%)
Tf Kaldi Speaker: Neural speaker recognition/verification system based on Kaldi and TensorFlow
Stars: ✭ 117 (+368%)
pytorch-mfcc: A PyTorch implementation of MFCC.
Stars: ✭ 30 (+20%)
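Several of the entries above (spafe, pytorch-mfcc) implement MFCC extraction, which rests on the mel scale's perceptual warping of frequency. As a point of reference, here is a minimal plain-Python sketch of the standard Hz↔mel conversion (O'Shaughnessy's formula, the convention most MFCC implementations use; the 10-band edge spacing below is only an illustration, not any library's API):

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """Map frequency in Hz to mels (O'Shaughnessy's formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    """Inverse mapping, used to place triangular mel filter edges."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Mel filterbank edges are spaced uniformly in mels, then mapped back to Hz,
# which packs filters densely at low frequencies and sparsely at high ones.
lo, hi = hz_to_mel(0.0), hz_to_mel(8000.0)
edges_hz = [mel_to_hz(lo + i * (hi - lo) / 10) for i in range(11)]
```

The compressive log keeps the scale roughly linear below ~700 Hz and logarithmic above, matching perceived pitch distance.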
Vokaturi android: Emotion recognition from speech on Android.
Stars: ✭ 79 (+216%)
IMS-Toucan: Text-to-Speech toolkit of the Speech and Language Technologies Group at the University of Stuttgart. Development objectives are simplicity, modularity, controllability, and multilinguality.
Stars: ✭ 295 (+1080%)
Speechbrain.github.io: The SpeechBrain project aims to build a novel speech toolkit fully based on PyTorch. With SpeechBrain, users can easily create speech processing systems ranging from speech recognition (both HMM/DNN and end-to-end) to speaker recognition, speech enhancement, speech separation, multi-microphone speech processing, and many others.
Stars: ✭ 242 (+868%)
Fullsubnet: PyTorch implementation of "A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech Enhancement."
Stars: ✭ 51 (+104%)
Formant Analyzer: iOS application for finding formants in spoken sounds
Stars: ✭ 43 (+72%)
Pyannote Audio: Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
Stars: ✭ 978 (+3812%)
gridpp: Software to post-process gridded weather forecasts
Stars: ✭ 33 (+32%)
Sincnet: SincNet is a neural architecture for efficiently processing raw audio samples.
Stars: ✭ 764 (+2956%)
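SincNet's key idea is constraining each first-layer filter to an ideal band-pass whose impulse response is a difference of two sinc functions, so only the two cutoff frequencies per filter are learned. A minimal plain-Python sketch of that parameterization under fixed cutoffs (in SincNet itself the cutoffs are trainable and the window choice differs; this is illustrative only):

```python
import math

def sinc(x: float) -> float:
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_bandpass(f1: float, f2: float, num_taps: int) -> list:
    """Band-pass FIR taps g[n] = 2*f2*sinc(2*f2*n) - 2*f1*sinc(2*f1*n),
    f1 < f2 normalized cutoffs (cycles/sample), Hamming-windowed."""
    mid = (num_taps - 1) // 2
    taps = []
    for n in range(num_taps):
        t = n - mid  # center the impulse response
        ideal = 2.0 * f2 * sinc(2.0 * f2 * t) - 2.0 * f1 * sinc(2.0 * f1 * t)
        window = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / (num_taps - 1))
        taps.append(ideal * window)
    return taps

def response_mag(taps: list, f: float) -> float:
    """Magnitude of the filter's DTFT at normalized frequency f."""
    re = sum(g * math.cos(2.0 * math.pi * f * n) for n, g in enumerate(taps))
    im = sum(g * math.sin(2.0 * math.pi * f * n) for n, g in enumerate(taps))
    return math.hypot(re, im)
```

The difference-of-sincs form means a filter bank needs only 2 parameters per filter instead of one weight per tap, which is the efficiency the description refers to.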
Speech signal processing and classification: Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification: developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features will be tested against agnostic features extracted by convolutive neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian mixture model classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
Stars: ✭ 155 (+520%)
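The entry above motivates LPCs as the natural parameters of an all-pole model of the short-term speech spectrum. In practice they are usually obtained from a frame's autocorrelation via the Levinson-Durbin recursion; here is a minimal plain-Python sketch of that standard computation (illustrative only, not the entry's planned library):

```python
def autocorr(frame, max_lag):
    """Biased autocorrelation r[0..max_lag] of one speech frame."""
    n = len(frame)
    return [sum(frame[i] * frame[i + lag] for i in range(n - lag))
            for lag in range(max_lag + 1)]

def lpc(frame, order):
    """Levinson-Durbin recursion: solve the Toeplitz normal equations for
    the prediction polynomial A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p.
    Returns (coefficients a[0..order], residual prediction error)."""
    r = autocorr(frame, order)
    a = [0.0] * (order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # reflection coefficient for this order
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)  # error shrinks monotonically with order
    return a, err
```

For a frame that actually follows a first-order all-pole model, e.g. the decaying exponential x[n] = 0.9^n, the recursion recovers a[1] ≈ -0.9, as the all-pole argument predicts.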
Awesome Diarization: A curated list of awesome Speaker Diarization papers, libraries, datasets, and other resources.
Stars: ✭ 673 (+2592%)
Dtln: TensorFlow 2.x implementation of the DTLN real-time speech denoising model, with TF-Lite, ONNX, and real-time audio processing support.
Stars: ✭ 147 (+488%)
Speaker-Identification: A program for automatic speaker identification using deep learning techniques.
Stars: ✭ 84 (+236%)
sonopy: A simple audio feature extraction library
Stars: ✭ 72 (+188%)
Nonautoreggenprogress: Tracking the progress in non-autoregressive generation (translation, transcription, etc.)
Stars: ✭ 118 (+372%)
VoxelTerrain: This project's main goal is to generate and visualize terrain built from voxels. Several approaches and computing technologies were used for the sake of performance and implementation comparison.
Stars: ✭ 37 (+48%)
Sptk: A modified version of the Speech Signal Processing Toolkit (SPTK)
Stars: ✭ 71 (+184%)
Dnc: Discriminative Neural Clustering for Speaker Diarisation
Stars: ✭ 60 (+140%)
Uspeech: Speech recognition toolkit for Arduino
Stars: ✭ 448 (+1692%)
Keras Sincnet: Keras (TensorFlow) implementation of SincNet (Mirco Ravanelli, Yoshua Bengio - https://github.com/mravanelli/SincNet)
Stars: ✭ 47 (+88%)
Pncc: An implementation of Power-Normalized Cepstral Coefficients (PNCC)
Stars: ✭ 40 (+60%)
Gcc Nmf: Real-time GCC-NMF blind speech separation and enhancement
Stars: ✭ 231 (+824%)
Rte Speech Generator: Natural language processing to generate new speeches for the President of Turkey.
Stars: ✭ 22 (-12%)
UHV-OTS-Speech: A data annotation pipeline to generate high-quality, large-scale speech datasets with machine pre-labeling and fully manual auditing.
Stars: ✭ 94 (+276%)
Audino: Open-source audio annotation tool for humans™
Stars: ✭ 740 (+2860%)
Vq Vae Speech: PyTorch implementation of VQ-VAE + WaveNet by [Chorowski et al., 2019] and VQ-VAE on speech signals by [van den Oord et al., 2017]
Stars: ✭ 187 (+648%)
Aubio: A library for audio and music analysis
Stars: ✭ 2,601 (+10304%)
Speech Enhancement: Deep neural network-based speech enhancement toolkit
Stars: ✭ 167 (+568%)
Pase: Problem-Agnostic Speech Encoder
Stars: ✭ 348 (+1292%)
bob: A free signal-processing and machine learning toolbox originally developed by the Biometrics group at the Idiap Research Institute in Switzerland. Mirrored from https://gitlab.idiap.ch/bob/bob
Stars: ✭ 38 (+52%)
Reshade: A generic post-processing injector for games and video software.
Stars: ✭ 2,285 (+9040%)
Numpy Ml: Machine learning, in numpy
Stars: ✭ 11,100 (+44300%)