
philipperemy / Tensorflow Ctc Speech Recognition

Licence: apache-2.0
Application of Connectionist Temporal Classification (CTC) for Speech Recognition (Tensorflow 1.0 but compatible with 2.0).


Projects that are alternatives of or similar to Tensorflow Ctc Speech Recognition

Tensorflowasr
⚡️ TensorFlowASR: Almost State-of-the-art Automatic Speech Recognition in Tensorflow 2. Supported languages that can use characters or subwords
Stars: ✭ 400 (+214.96%)
Mutual labels:  speech-recognition, speech-to-text, ctc
Rnn ctc
Recurrent Neural Network and Long Short Term Memory (LSTM) with Connectionist Temporal Classification implemented in Theano. Includes a Toy training example.
Stars: ✭ 220 (+73.23%)
Mutual labels:  speech-recognition, speech-to-text, ctc
Asrt speechrecognition
A Deep-Learning-Based Chinese Speech Recognition System
Stars: ✭ 4,943 (+3792.13%)
Mutual labels:  speech-recognition, speech-to-text, ctc
Tensorflow end2end speech recognition
End-to-End speech recognition implementation base on TensorFlow (CTC, Attention, and MTL training)
Stars: ✭ 305 (+140.16%)
Mutual labels:  speech-recognition, speech-to-text, ctc
Eesen
The official repository of the Eesen project
Stars: ✭ 738 (+481.1%)
Mutual labels:  speech-recognition, speech-to-text, ctc
Casr Demo
A Flask-based web demo system for Chinese automatic speech recognition, covering speech recognition, speech synthesis, and speaker identification (voiceprint recognition).
Stars: ✭ 76 (-40.16%)
Mutual labels:  speech-to-text, ctc
Wav2letter
Speech recognition model based on the FAIR wav2letter research paper, built using PyTorch.
Stars: ✭ 78 (-38.58%)
Mutual labels:  speech-recognition, speech-to-text
Deepspeech Websocket Server
Server & client for DeepSpeech using WebSockets for real-time speech recognition in separate environments
Stars: ✭ 79 (-37.8%)
Mutual labels:  speech-recognition, speech-to-text
Vosk Api
Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
Stars: ✭ 1,357 (+968.5%)
Mutual labels:  speech-recognition, speech-to-text
Dragonfire
the open-source virtual assistant for Ubuntu based Linux distributions
Stars: ✭ 1,120 (+781.89%)
Mutual labels:  speech-recognition, speech-to-text
B.e.n.j.i.
B.E.N.J.I.- The Impossible Missions Force's digital assistant
Stars: ✭ 83 (-34.65%)
Mutual labels:  speech-recognition, speech-to-text
Pytorch Asr
ASR with PyTorch
Stars: ✭ 124 (-2.36%)
Mutual labels:  speech-recognition, ctc
Nativescript Speech Recognition
💬 Speech to text, using the awesome engines readily available on the device.
Stars: ✭ 72 (-43.31%)
Mutual labels:  speech-recognition, speech-to-text
Patter
speech-to-text in pytorch
Stars: ✭ 71 (-44.09%)
Mutual labels:  speech-recognition, speech-to-text
Deepspeech
A PaddlePaddle implementation of ASR.
Stars: ✭ 1,219 (+859.84%)
Mutual labels:  speech-recognition, speech-to-text
Openasr
A pytorch based end2end speech recognition system.
Stars: ✭ 69 (-45.67%)
Mutual labels:  speech-recognition, speech-to-text
Mongolian Speech Recognition
Mongolian speech recognition with PyTorch
Stars: ✭ 97 (-23.62%)
Mutual labels:  speech-recognition, speech-to-text
Spokestack Python
Spokestack is a library that allows a user to easily incorporate a voice interface into any Python application.
Stars: ✭ 103 (-18.9%)
Mutual labels:  speech-recognition, speech-to-text
Speech And Text
Speech to text (PocketSphinx, iFlytek API, Baidu API) and text to speech (pyttsx3)
Stars: ✭ 102 (-19.69%)
Mutual labels:  speech-recognition, speech-to-text
Kaldi
kaldi-asr/kaldi is the official location of the Kaldi project.
Stars: ✭ 11,151 (+8680.31%)
Mutual labels:  speech-recognition, speech-to-text

Tensorflow CTC Speech Recognition

  • Compatible with Tensorflow through v1 compat.
  • Application of Connectionist Temporal Classification (CTC) for Speech Recognition (Tensorflow 1.0)
  • Trained on the VCTK Corpus (the same corpus used by WaveNet).

How to get started?

git clone https://github.com/philipperemy/tensorflow-ctc-speech-recognition.git ctc-speech
cd ctc-speech
pip3 install -r requirements.txt # inside a virtualenv

# Download the full VCTK Corpus (~10GB!) here: http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html
wget http://homepages.inf.ed.ac.uk/jyamagis/release/VCTK-Corpus.tar.gz

# OR use this archive (~65MB) that contains all the utterances for speaker p225.
wget https://www.dropbox.com/s/xecprghgwbbuk3m/vctk-pc225.tar.gz
tar xvzf vctk-pc225.tar.gz && rm -f vctk-pc225.tar.gz
python generate_audio_cache.py --audio_dir vctk-p225

python3 ctc_tensorflow_example.py # runs the experiment defined in the section First Experiment.

You can also download only the relevant files here https://www.dropbox.com/s/xecprghgwbbuk3m/vctk-pc225.tar.gz?dl=1 (~69MB). Thanks to @Burak Bayramli.

Requirements

  • dill: improved version of pickle
  • librosa: library to interact with audio wav files
  • namedtupled: dictionary to named tuples
  • numpy: scientific library
  • python_speech_features: extracting relevant features from raw audio data
  • tensorflow: machine learning library
  • progressbar2: progression bar

First experiment

The code to reproduce this experiment is no longer in the latest commit; check out this revision first:

git checkout ba6c10fba2383cd4933d47896f95d30248458161

Set up

Speech Recognition is a very difficult topic. In this first experiment, we consider:

  • A very small subset of the VCTK Corpus composed of only one speaker: p225.
  • Only 5 sentences of this speaker, denoted as: 001, 002, 003, 004 and 005.

The network is defined as:

  • One LSTM layer (rnn.LSTMCell) with 100 units, followed by a softmax output layer.
  • Batch size of 1.
  • Momentum optimizer with a learning rate of 0.005 and a momentum of 0.9.
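As a rough sketch, the network described above can be written with the tf.compat.v1 API as follows. Note that num_features and num_classes are illustrative assumptions, not values taken from the repository:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

num_features = 13   # e.g. 13 MFCCs per frame (assumption)
num_classes = 28    # 26 letters + space + the CTC blank (assumption)

# [batch, max_time, num_features] inputs, sparse integer targets.
inputs = tf.compat.v1.placeholder(tf.float32, [None, None, num_features])
targets = tf.compat.v1.sparse_placeholder(tf.int32)
seq_len = tf.compat.v1.placeholder(tf.int32, [None])

# One LSTM layer with 100 units.
cell = tf.compat.v1.nn.rnn_cell.LSTMCell(100)
outputs, _ = tf.compat.v1.nn.dynamic_rnn(cell, inputs, seq_len, dtype=tf.float32)

# Linear projection onto the classes; the softmax is applied inside ctc_loss.
W = tf.compat.v1.get_variable('W', [100, num_classes])
b = tf.compat.v1.get_variable('b', [num_classes],
                              initializer=tf.zeros_initializer())
logits = tf.einsum('btu,uc->btc', outputs, W) + b
logits = tf.transpose(logits, [1, 0, 2])  # ctc_loss expects time-major logits

loss = tf.reduce_mean(tf.compat.v1.nn.ctc_loss(targets, logits, seq_len))
train_op = tf.compat.v1.train.MomentumOptimizer(0.005, 0.9).minimize(loss)
```

The CTC blank is the last class index by convention, which is why num_classes is one more than the number of characters.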

The validation set is obtained by randomly truncating the beginning of each audio file (by 0 to 125 ms at most), making sure we never cut into speech. Using 5 unseen sentences for validation would be more realistic; however, the network could hardly handle them, since a training set of only 5 sentences is far too small to cover all the phonemes of the English language. By randomly truncating the leading silences, we make sure that the network does not simply memorize a dumb mapping from audio file to text.
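The truncation step above can be sketched as follows. The 0–125 ms bound comes from the text; the function name and the sample rate used below are illustrative assumptions:

```python
import numpy as np

def random_truncate_start(signal, sample_rate, max_ms=125, seed=None):
    """Drop between 0 and max_ms milliseconds from the start of `signal`.

    Assumes the recording begins with at least max_ms of silence, so no
    speech is ever cut off.
    """
    rng = np.random.default_rng(seed)
    max_samples = int(sample_rate * max_ms / 1000)
    offset = int(rng.integers(0, max_samples + 1))
    return signal[offset:]
```

Applied to the same five utterances, this yields validation inputs whose leading silences never exactly match the ones seen during training.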

Results

Most of the time, the network guesses the correct sentence. Sometimes it misses a bit, but the results are still encouraging.

Example 1

Original training: diving is no part of football
Decoded training: diving is no part of football
Original validation: theres still a bit to go
Decoded validation: thers still a bl to go
Epoch 3074/10000, train_cost = 0.032, train_ler = 0.000, val_cost = 9.131, val_ler = 0.125, time = 1.648

Example 2

Original training: three hours later the man was free
Decoded training: three hours later the man was free
Original val: and they were being paid 
Decoded val: nand they ere being paid  
Epoch 3104/10000, train_cost = 0.075, train_ler = 0.000, val_cost = 2.945, val_ler = 0.077, time = 1.042

Example 3

Original training: theres still a bit to go
Decoded training: theres still a bit to go
Original val: three hours later the man was free
Decoded val: three hors late th man wasfree
Epoch 3108/10000, train_cost = 0.032, train_ler = 0.000, val_cost = 12.532, val_ler = 0.118, time = 0.859

CTC Loss

CTC Loss (Log scale)

CTC Loss is the raw loss defined in the paper by Alex Graves.

LER Loss

LER (Label Error Rate) measures the mismatch between the predicted text and the ground truth text.
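Concretely, LER is the character-level edit (Levenshtein) distance between the decoded and ground-truth strings, normalized by the length of the ground truth; in TensorFlow this is typically computed with tf.edit_distance on the sparse decoded output. A pure-Python sketch:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (insert/delete/substitute)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution / match
        prev = cur
    return prev[-1]

def label_error_rate(truth, decoded):
    """Edit distance normalized by the length of the ground truth."""
    return edit_distance(truth, decoded) / max(len(truth), 1)
```

For instance, label_error_rate('theres still a bit to go', 'thers still a bl to go') quantifies how far Example 1's decoded validation string is from its target.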

Clearly, the network learns very well on just 5 sentences! It's far from perfect, but quite promising for a first try.

Second experiment

  • LSTM with 256 units.
  • Only one speaker: p225.
  • The 15 shortest utterances are used as the testing set.
  • The rest are used as the training set.
  • The batch size can now be different from 1.
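Moving beyond a batch size of 1 means zero-padding the feature sequences to a common length, feeding the true lengths to the RNN, and encoding the targets as the sparse (indices, values, shape) triple expected by tf.nn.ctc_loss. A minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def pad_batch(sequences):
    """Zero-pad [time, features] arrays into a [batch, max_time, features] tensor."""
    lengths = np.array([len(s) for s in sequences], dtype=np.int32)
    num_features = sequences[0].shape[1]
    batch = np.zeros((len(sequences), lengths.max(), num_features), dtype=np.float32)
    for i, s in enumerate(sequences):
        batch[i, :len(s)] = s
    return batch, lengths

def to_sparse_tuple(labels):
    """Encode a list of label-id lists as the (indices, values, shape) triple."""
    indices = [(i, j) for i, seq in enumerate(labels) for j in range(len(seq))]
    values = [v for seq in labels for v in seq]
    shape = (len(labels), max(len(seq) for seq in labels))
    return np.array(indices), np.array(values, dtype=np.int32), np.array(shape)
```

The lengths array is what gets fed as the sequence_length input, so the RNN and the CTC loss both ignore the zero padding.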
Epoch 2723/3000, train_cost = 1.108, train_ler = 0.000, val_cost = 59.116, val_ler = 0.467, time = 2.559
- Original (training) : but the commission is on a collision course with the government 
- Decoded  (training) : but the commission is on a collision course with the government 
- Original (training) : this action reflects a slump in bookings 
- Decoded  (training) : this action reflects a slump in bookings 
- Original (training) : they had to learn to work from the consumer back 
- Decoded  (training) : they had to learn to work from the consumer back 
- Original (training) : it depends on the internal discussions in the ministry of defence 
- Decoded  (training) : it depends on the internal discussions in the ministry of defence 
- Original (training) : irvine said his company was intent on supporting the scottish dairy industry 
- Decoded  (training) : irvine said his company was intent on supporting the scottish dairy industry 
- Original (training) : the pain was almost too much to bear 
- Decoded  (training) : the pain was almost too much to bear 
- Original (training) : this is a very common type of bow one showing mainly red and yellow with little or no green or blue 
- Decoded  (training) : this is a very common type of bow one showing mainly red and yellow with little or no green or blue 
- Original (training) : in fact he should never have been in the field 
- Decoded  (training) : in fact he should never have been in the field 
- Original (training) : saddam is not the only example of evil in our world 
- Decoded  (training) : saddam is nat the only example of evil in our world 
- Original (training) : so did she meet him  
- Decoded  (training) : so did she meet him  
- Original (validation) : it is a court case 
- Decoded  (validation) : it is a cot ase    

LER Loss

This experiment is interesting: the network seems to generalize somewhat to unseen audio files from the same speaker. I didn't expect the network to perform this well on such a small dataset. However, keep in mind that the generalization power is still quite poor here, and the model is clearly overfitting.
