71 Open source projects that are alternatives to or similar to Sptk

speechrec
A simple speech recognition app using the Web Speech API interfaces
Stars: ✭ 18 (-74.65%)
Mutual labels:  speech-processing
wavelet-denoiser
A wavelet audio denoiser written in Python
Stars: ✭ 29 (-59.15%)
Mutual labels:  speech-processing
speechportal
(1st place at HopHacks) A dynamic WebVR memory palace for speech training, using natural language processing and the Google Street View API
Stars: ✭ 14 (-80.28%)
Mutual labels:  speech-processing
open-speech-corpora
💎 A list of accessible speech corpora for ASR, TTS, and other speech technologies
Stars: ✭ 841 (+1084.51%)
Mutual labels:  speech-processing
React Native Dialogflow
A React Native bridge for the Google Dialogflow (API.AI) SDK
Stars: ✭ 182 (+156.34%)
Mutual labels:  speech-processing
Awesome Speech Enhancement
A tutorial for Speech Enhancement researchers and practitioners. The purpose of this repo is to organize the world’s resources for speech enhancement and make them universally accessible and useful.
Stars: ✭ 257 (+261.97%)
Mutual labels:  speech-processing
spafe
🔉 spafe: Simplified Python Audio Features Extraction
Stars: ✭ 310 (+336.62%)
Mutual labels:  speech-processing
Awesome Diarization
A curated list of awesome Speaker Diarization papers, libraries, datasets, and other resources.
Stars: ✭ 673 (+847.89%)
Mutual labels:  speech-processing
Speechbrain.github.io
The SpeechBrain project aims to build a novel speech toolkit fully based on PyTorch. With SpeechBrain, users can easily create speech processing systems, including speech recognition (both HMM/DNN and end-to-end), speaker recognition, speech enhancement, speech separation, multi-microphone speech processing, and many others.
Stars: ✭ 242 (+240.85%)
Mutual labels:  speech-processing
ttslearn
ttslearn: Library for the book "Pythonで学ぶ音声合成" (Text-to-Speech with Python)
Stars: ✭ 158 (+122.54%)
Mutual labels:  speech-processing
CNN-VAD
A Convolutional Neural Network based Voice Activity Detector for Smartphones
Stars: ✭ 60 (-15.49%)
Mutual labels:  speech-processing
Tutorial separation
This repo summarizes tutorials, datasets, papers, code, and tools for the speech separation and speaker extraction tasks. Pull requests are welcome.
Stars: ✭ 151 (+112.68%)
Mutual labels:  speech-processing
Pysptk
A Python wrapper for the Speech Signal Processing Toolkit (SPTK).
Stars: ✭ 297 (+318.31%)
Mutual labels:  speech-processing
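Since Pysptk wraps the same toolkit this page is about, a short usage sketch may help when comparing it with the alternatives above. The snippet below is a minimal illustration of extracting mel-cepstral coefficients from a single windowed frame; the frame length, order, and alpha values are assumptions for the example, not taken from the project description.

```python
# A minimal sketch, assuming pysptk and numpy are installed; the frame
# length (400 samples = 25 ms at 16 kHz), order, and alpha below are
# illustrative choices, not values from the listing.
import numpy as np
import pysptk

sr = 16000
# Stand-in for one speech frame: a 220 Hz tone plus a little noise.
t = np.arange(400) / sr
frame = np.sin(2 * np.pi * 220 * t) + 0.01 * np.random.randn(400)
# SPTK-style cepstral analysis expects a windowed frame.
windowed = frame * np.blackman(len(frame))

# Mel-cepstral analysis: order + 1 = 26 coefficients; alpha ≈ 0.42 is a
# commonly used all-pass constant for 16 kHz speech.
mcep = pysptk.mcep(windowed, order=25, alpha=0.42)
print(mcep.shape)  # (26,)
```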
QuantumSpeech-QCNN
IEEE ICASSP 2021 - Quantum Convolutional Neural Networks for Speech Processing and Automatic Speech Recognition
Stars: ✭ 71 (+0%)
Mutual labels:  speech-processing
Sincnet
SincNet is a neural architecture for efficiently processing raw audio samples.
Stars: ✭ 764 (+976.06%)
Mutual labels:  speech-processing
DiViMe
ACLEW Diarization Virtual Machine
Stars: ✭ 28 (-60.56%)
Mutual labels:  speech-processing
hifigan-denoiser
HiFi-GAN: High Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks
Stars: ✭ 88 (+23.94%)
Mutual labels:  speech-processing
ConvolutionaNeuralNetworksToEnhanceCodedSpeech
In this work we propose two postprocessing approaches applying convolutional neural networks (CNNs) either in the time domain or the cepstral domain to enhance the coded speech without any modification of the codecs. The time domain approach follows an end-to-end fashion, while the cepstral domain approach uses analysis-synthesis with cepstral d…
Stars: ✭ 25 (-64.79%)
Mutual labels:  speech-processing
Formant Analyzer
iOS application for finding formants in spoken sounds
Stars: ✭ 43 (-39.44%)
Mutual labels:  speech-processing
IMS-Toucan
Text-to-Speech Toolkit of the Speech and Language Technologies Group at the University of Stuttgart. Objectives of the development are simplicity, modularity, controllability and multilinguality.
Stars: ✭ 295 (+315.49%)
Mutual labels:  speech-processing
LIUM
Scripts for LIUM SpkDiarization tools
Stars: ✭ 28 (-60.56%)
Mutual labels:  speech-processing
Neural Voice Cloning With Few Samples
Implementation of the "Neural Voice Cloning with Few Samples" research paper by Baidu
Stars: ✭ 211 (+197.18%)
Mutual labels:  speech-processing
Uspeech
Speech recognition toolkit for the Arduino
Stars: ✭ 448 (+530.99%)
Mutual labels:  speech-processing
Vocgan
VocGAN: A High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network
Stars: ✭ 158 (+122.54%)
Mutual labels:  speech-processing
SpeechEnhancement
Combining Weighted Multi-resolution STFT Loss and Distance Fusion to Optimize Speech Enhancement Generative Adversarial Networks
Stars: ✭ 49 (-30.99%)
Mutual labels:  speech-processing
spokestack-ios
Spokestack: give your iOS app a voice interface!
Stars: ✭ 27 (-61.97%)
Mutual labels:  speech-processing
Dtln
TensorFlow 2.x implementation of the DTLN real-time speech denoising model, with TF Lite, ONNX, and real-time audio processing support.
Stars: ✭ 147 (+107.04%)
Mutual labels:  speech-processing
Nnmnkwii
Library to build speech synthesis systems designed for easy and fast prototyping.
Stars: ✭ 308 (+333.8%)
Mutual labels:  speech-processing
Speech-Backbones
This is the main repository of open-sourced speech technology by Huawei Noah's Ark Lab.
Stars: ✭ 205 (+188.73%)
Mutual labels:  speech-processing
Rte Speech Generator
Natural Language Processing to generate new speeches for the President of Turkey.
Stars: ✭ 22 (-69.01%)
Mutual labels:  speech-processing
Huawei-Challenge-Speaker-Identification
Trained speaker embedding deep learning models and evaluation pipelines in PyTorch and TensorFlow for speaker recognition.
Stars: ✭ 34 (-52.11%)
Mutual labels:  speech-processing
Neural Voice Cloning With Few Samples
This repository contains an implementation of "Neural Voice Cloning With Few Samples"
Stars: ✭ 262 (+269.01%)
Mutual labels:  speech-processing
awesome-speech-enhancement
A curated list of awesome Speech Enhancement papers, libraries, datasets, and other resources.
Stars: ✭ 48 (-32.39%)
Mutual labels:  speech-processing
Keras Sincnet
Keras (TensorFlow) implementation of SincNet (Mirco Ravanelli, Yoshua Bengio - https://github.com/mravanelli/SincNet)
Stars: ✭ 47 (-33.8%)
Mutual labels:  speech-processing
Shifter
Pitch shifter using WSOLA and resampling, implemented in Python 3
Stars: ✭ 22 (-69.01%)
Mutual labels:  speech-processing
SpeechTransProgress
Tracking the progress in end-to-end speech translation
Stars: ✭ 139 (+95.77%)
Mutual labels:  speech-processing
awesome-keyword-spotting
A curated list of awesome speech keyword spotting (wake-up word detection) resources.
Stars: ✭ 150 (+111.27%)
Mutual labels:  speech-processing
Audino
Open source audio annotation tool for humans™
Stars: ✭ 740 (+942.25%)
Mutual labels:  speech-processing
react-native-spokestack
Spokestack: give your React Native app a voice interface!
Stars: ✭ 53 (-25.35%)
Mutual labels:  speech-processing
vak
A neural network toolbox for animal vocalizations and bioacoustics
Stars: ✭ 21 (-70.42%)
Mutual labels:  speech-processing
torchsubband
PyTorch implementation of subband decomposition
Stars: ✭ 63 (-11.27%)
Mutual labels:  speech-processing
Dnc
Discriminative Neural Clustering for Speaker Diarisation
Stars: ✭ 60 (-15.49%)
Mutual labels:  speech-processing
bob
Bob is a free signal-processing and machine learning toolbox originally developed by the Biometrics group at the Idiap Research Institute in Switzerland. Mirrored from https://gitlab.idiap.ch/bob/bob
Stars: ✭ 38 (-46.48%)
Mutual labels:  speech-processing
scim
[WIP] Speech recognition toolbox written in Nim, based on Arraymancer.
Stars: ✭ 17 (-76.06%)
Mutual labels:  speech-processing
UHV-OTS-Speech
A data annotation pipeline to generate high-quality, large-scale speech datasets with machine pre-labeling and fully manual auditing.
Stars: ✭ 94 (+32.39%)
Mutual labels:  speech-processing
Speech Denoising Wavenet
A neural network for end-to-end speech denoising
Stars: ✭ 516 (+626.76%)
Mutual labels:  speech-processing
Gcc Nmf
Real-time GCC-NMF Blind Speech Separation and Enhancement
Stars: ✭ 231 (+225.35%)
Mutual labels:  speech-processing
awesome-multimodal-ml
Reading list for research topics in multimodal machine learning
Stars: ✭ 3,125 (+4301.41%)
Mutual labels:  speech-processing
Vq Vae Speech
PyTorch implementation of VQ-VAE + WaveNet by [Chorowski et al., 2019] and VQ-VAE on speech signals by [van den Oord et al., 2017]
Stars: ✭ 187 (+163.38%)
Mutual labels:  speech-processing
Pncc
An implementation of Power-Normalized Cepstral Coefficients (PNCC)
Stars: ✭ 40 (-43.66%)
Mutual labels:  speech-processing
Speech Enhancement
Deep neural network based speech enhancement toolkit
Stars: ✭ 167 (+135.21%)
Mutual labels:  speech-processing
UniSpeech
UniSpeech - Large Scale Self-Supervised Learning for Speech
Stars: ✭ 224 (+215.49%)
Mutual labels:  speech-processing
Speech signal processing and classification
Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification, that is, developing two-class classifiers which can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) could also be derived. The aforementioned traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian Mixture Model based classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
Stars: ✭ 155 (+118.31%)
Mutual labels:  speech-processing
Pase
Problem Agnostic Speech Encoder
Stars: ✭ 348 (+390.14%)
Mutual labels:  speech-processing
pyssp
Python speech signal processing library
Stars: ✭ 18 (-74.65%)
Mutual labels:  speech-processing
Gcommandspytorch
ConvNets for Audio Recognition using Google Commands Dataset
Stars: ✭ 65 (-8.45%)
Mutual labels:  speech-processing
Fullsubnet
PyTorch implementation of "A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech Enhancement."
Stars: ✭ 51 (-28.17%)
Mutual labels:  speech-processing
Pyannote Audio
Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
Stars: ✭ 978 (+1277.46%)
Mutual labels:  speech-processing
Surfboard
Novoic's audio feature extraction library
Stars: ✭ 318 (+347.89%)
Mutual labels:  speech-processing
BookLibrary
Book Library of P&W Studio
Stars: ✭ 13 (-81.69%)
Mutual labels:  speech-processing
1-60 of 71 similar projects