Walleclipse / Deep_speaker Speaker_recognition_system

Keras implementation of "Deep Speaker: an End-to-End Neural Speaker Embedding System" (speaker recognition)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Deep_speaker Speaker_recognition_system

Tts
Text-to-Speech for Arduino
Stars: ✭ 118 (-32.18%)
Mutual labels:  speech
Voice activity detection
Voice Activity Detection based on Deep Learning & TensorFlow
Stars: ✭ 132 (-24.14%)
Mutual labels:  speech
Aeneas
aeneas is a Python/C library and a set of tools to automagically synchronize audio and text (aka forced alignment)
Stars: ✭ 1,942 (+1016.09%)
Mutual labels:  speech
Pytorch Asr
ASR with PyTorch
Stars: ✭ 124 (-28.74%)
Mutual labels:  speech
Voc
A physical model of the human vocal tract using literate programming, based on Pink Trombone.
Stars: ✭ 129 (-25.86%)
Mutual labels:  speech
Diffwave
DiffWave is a fast, high-quality neural vocoder and waveform synthesizer.
Stars: ✭ 139 (-20.11%)
Mutual labels:  speech
Tfg Voice Conversion
Deep Learning-based Voice Conversion system
Stars: ✭ 115 (-33.91%)
Mutual labels:  speech
Pytorch Kaldi
pytorch-kaldi is a project for developing state-of-the-art DNN/RNN hybrid speech recognition systems. The DNN part is managed by pytorch, while feature extraction, label computation, and decoding are performed with the kaldi toolkit.
Stars: ✭ 2,097 (+1105.17%)
Mutual labels:  speech
Avpi
an open source voice command macro software
Stars: ✭ 130 (-25.29%)
Mutual labels:  speech
Wavenet vocoder
WaveNet vocoder
Stars: ✭ 1,926 (+1006.9%)
Mutual labels:  speech
Kaldi
kaldi-asr/kaldi is the official location of the Kaldi project.
Stars: ✭ 11,151 (+6308.62%)
Mutual labels:  speech
Asr audio data links
A list of publically available audio data that anyone can download for ASR or other speech activities
Stars: ✭ 128 (-26.44%)
Mutual labels:  speech
Wavegrad
A fast, high-quality neural vocoder.
Stars: ✭ 138 (-20.69%)
Mutual labels:  speech
Code Switching Papers
A curated list of research papers and resources on code-switching
Stars: ✭ 122 (-29.89%)
Mutual labels:  speech
Tts Papers
🐸 collection of TTS papers
Stars: ✭ 160 (-8.05%)
Mutual labels:  speech
Speech And Text Unity Ios Android
Speech to text in Unity iOS using Native Speech Recognition
Stars: ✭ 117 (-32.76%)
Mutual labels:  speech
Allosaurus
Allosaurus is a pretrained universal phone recognizer for more than 2000 languages
Stars: ✭ 135 (-22.41%)
Mutual labels:  speech
Chatbot Watson Android
An Android ChatBot powered by Watson Services - Assistant, Speech-to-Text and Text-to-Speech on IBM Cloud.
Stars: ✭ 169 (-2.87%)
Mutual labels:  speech
Tacotron asr
Speech Recognition Using Tacotron
Stars: ✭ 165 (-5.17%)
Mutual labels:  speech
Tacotron
A TensorFlow Implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model
Stars: ✭ 1,756 (+909.2%)
Mutual labels:  speech

Deep Speaker: speaker recognition system

Data Set: LibriSpeech
Reference paper: Deep Speaker: an End-to-End Neural Speaker Embedding System
Reference code: https://github.com/philipperemy/deep-speaker (Thanks to Philippe Rémy)

About the Code

train.py
The main file; contains the training, evaluation, and model-saving functions.
models.py
The neural networks used in the experiments. This file contains three models: a CNN model (same as the paper's CNN), a GRU model (same as the paper's GRU), and a simple_cnn model. The simple_cnn model performs similarly to the original CNN model, but the number of trainable parameters drops from 24M to 7M.
select_batch.py
Selects the optimal batches to feed to the network. This is one of the cores of the experiment.
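For illustration, batch selection of this kind typically means hard-negative mining: ranking candidate utterances from other speakers by their similarity to the anchor. The sketch below is a simplified stand-in, not the exact logic of select_batch.py; the function name and signature are hypothetical.

```python
import numpy as np

def select_hard_negatives(anchor_emb, candidate_embs, candidate_speakers, anchor_speaker):
    # Rank candidates from other speakers by cosine similarity to the anchor;
    # the most similar (hardest) negatives come first.
    a = anchor_emb / np.linalg.norm(anchor_emb)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    sims = c @ a                                   # cosine similarity per candidate
    neg_idx = np.where(np.array(candidate_speakers) != anchor_speaker)[0]
    order = np.argsort(-sims[neg_idx])             # descending similarity = hardest first
    return neg_idx[order]
```

Feeding the network such hard triplets is what makes triplet training converge; random negatives are mostly too easy to produce a useful gradient.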
triplet_loss.py
Computes the triplet loss for network training. The implementation follows the paper.
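Following the paper, the loss hinges on cosine similarities with a margin (the paper uses alpha = 0.2); a minimal NumPy sketch of the per-batch computation (the Keras version in triplet_loss.py operates on tensors):

```python
import numpy as np

def cosine_similarity(a, b):
    # Row-wise cosine similarity between two batches of embeddings.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

def triplet_loss(anchor, positive, negative, alpha=0.2):
    # Hinge loss: push s(anchor, positive) above s(anchor, negative) by margin alpha.
    s_ap = cosine_similarity(anchor, positive)
    s_an = cosine_similarity(anchor, negative)
    return np.mean(np.maximum(s_an - s_ap + alpha, 0.0))
```

The loss is zero once every positive pair is at least `alpha` more similar than its negative pair, which is why batch selection (above) matters: easy triplets contribute nothing.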
test_model.py
Evaluates (tests) the model in terms of EER.
eval_matrics.py
Calculates the equal error rate, F-measure, accuracy, and other metrics.
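The equal error rate is the operating point where the false-accept and false-reject rates coincide. A minimal sketch of computing it from pairwise similarity scores by scanning candidate thresholds (eval_matrics.py may differ in details such as interpolation):

```python
import numpy as np

def compute_eer(scores, labels):
    # scores: similarity scores per trial; labels: 1 = same speaker, 0 = different.
    pos, neg = labels == 1, labels == 0
    best_gap, eer = np.inf, 1.0
    for t in np.sort(np.unique(scores)):
        far = np.mean(scores[neg] >= t)   # false accepts among different-speaker pairs
        frr = np.mean(scores[pos] < t)    # false rejects among same-speaker pairs
        if abs(far - frr) < best_gap:     # keep the threshold where FAR and FRR meet
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

A production implementation would interpolate between thresholds (e.g. from an ROC curve) rather than only testing the observed scores.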
pretraining.py
Pre-trains the network with a softmax classification loss.
pre_process.py
Loads the utterances, filters out silence, extracts the fbank features, and saves them in .npy format.
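The silence-filtering step can be approximated with a simple energy-based check; a NumPy sketch under assumed parameters (non-overlapping 400-sample frames, i.e. 25 ms at 16 kHz; pre_process.py may use a different method and settings):

```python
import numpy as np

def remove_silence(signal, frame_len=400, energy_threshold=0.01):
    # Split the waveform into non-overlapping frames and drop frames whose
    # RMS energy falls below a fraction of the utterance's peak RMS energy.
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    keep = rms > energy_threshold * rms.max()
    return frames[keep].reshape(-1)
```

After silence removal, the fbank features extracted from the voiced audio would be saved with `np.save` so training never touches raw audio.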

Experimental Results

The model was trained on the librispeech-train-clean dataset and tested on the librispeech-test-clean dataset. The CNN model achieves ~5% EER on LibriSpeech.

More Details

For more details, please read deep_speaker_report.pdf (English) or deep_speaker实验报告.pdf (Chinese).

Simple Use

  1. Prepare data.
    Sample data is provided in audio/LibriSpeechSamples/, or you can download the full LibriSpeech data or prepare your own.

  2. Preprocessing.
    Extract features and preprocess: python pre_process.py.

  3. Training.
    If you want to train your model with triplet loss: python train.py.
    If you want to pretrain with softmax loss first: python pretraining.py, then python train.py.
    Note: Set the PRE_TRAIN flag in constants.py to True or False depending on whether you want to pretrain.

  4. Evaluation.
    Evaluate the model in terms of EER: python test_model.py.
    Note: During training, train.py also evaluates the model.

  5. Plot loss curve.
    Plot loss curve and EER curve with utils.py.

import constants as c
from utils import plot_loss
loss_file=c.CHECKPOINT_FOLDER+'/losses.txt' # loss file path
plot_loss(loss_file)