
kaituoxu / Listen Attend Spell

A PyTorch implementation of Listen, Attend and Spell (LAS), an End-to-End ASR framework.

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to Listen Attend Spell

Kospeech
Open-Source Toolkit for End-to-End Korean Automatic Speech Recognition.
Stars: ✭ 190 (+29.25%)
Mutual labels:  asr, end-to-end
Rnn Transducer
MXNet implementation of RNN Transducer (Graves 2012): Sequence Transduction with Recurrent Neural Networks
Stars: ✭ 114 (-22.45%)
Mutual labels:  asr, end-to-end
End2end Asr Pytorch
End-to-End Automatic Speech Recognition on PyTorch
Stars: ✭ 175 (+19.05%)
Mutual labels:  asr, end-to-end
End-to-End-Mandarin-ASR
End-to-end speech recognition on AISHELL dataset.
Stars: ✭ 20 (-86.39%)
Mutual labels:  end-to-end, asr
Tensorflow end2end speech recognition
End-to-end speech recognition implementation based on TensorFlow (CTC, attention, and MTL training)
Stars: ✭ 305 (+107.48%)
Mutual labels:  asr, end-to-end
Espresso
Espresso: A Fast End-to-End Neural Speech Recognition Toolkit
Stars: ✭ 808 (+449.66%)
Mutual labels:  asr, end-to-end
kospeech
Open-Source Toolkit for End-to-End Korean Automatic Speech Recognition leveraging PyTorch and Hydra.
Stars: ✭ 456 (+210.2%)
Mutual labels:  end-to-end, asr
speech-transformer
Transformer implementation specialized for speech recognition tasks, using PyTorch.
Stars: ✭ 40 (-72.79%)
Mutual labels:  end-to-end, asr
kosr
Transformer-based Korean speech recognition.
Stars: ✭ 25 (-82.99%)
Mutual labels:  end-to-end, asr
Speech Transformer
A PyTorch implementation of Speech Transformer, an End-to-End ASR with Transformer network on Mandarin Chinese.
Stars: ✭ 565 (+284.35%)
Mutual labels:  asr, end-to-end
E2e Asr
PyTorch Implementations for End-to-End Automatic Speech Recognition
Stars: ✭ 106 (-27.89%)
Mutual labels:  asr, end-to-end
Bigcidian
Pronunciation lexicon covering both English and Chinese languages for Automatic Speech Recognition.
Stars: ✭ 99 (-32.65%)
Mutual labels:  asr
Protractor
E2E test framework for Angular apps
Stars: ✭ 8,792 (+5880.95%)
Mutual labels:  end-to-end
Wav2letter
Speech recognition model based on the FAIR research paper, built using PyTorch.
Stars: ✭ 78 (-46.94%)
Mutual labels:  asr
Voicer
AGI-server voice recognizer for #Asterisk
Stars: ✭ 73 (-50.34%)
Mutual labels:  asr
Deepvoice3 pytorch
PyTorch implementation of convolutional neural network-based text-to-speech synthesis models
Stars: ✭ 1,654 (+1025.17%)
Mutual labels:  end-to-end
Delta
DELTA is a deep learning based natural language and speech processing platform.
Stars: ✭ 1,479 (+906.12%)
Mutual labels:  asr
Asr benchmark
Program to benchmark various speech recognition APIs
Stars: ✭ 71 (-51.7%)
Mutual labels:  asr
Openasr
A PyTorch-based end-to-end speech recognition system.
Stars: ✭ 69 (-53.06%)
Mutual labels:  asr
Tacotron Pytorch
A PyTorch implementation of Tacotron, an end-to-end text-to-speech deep learning model
Stars: ✭ 104 (-29.25%)
Mutual labels:  end-to-end

Listen, Attend and Spell

A PyTorch implementation of Listen, Attend and Spell (LAS) [1], an end-to-end automatic speech recognition framework that converts acoustic features directly to a character sequence using a single neural network.
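
For orientation, the sketch below shows the two components the LAS paper describes, in minimal PyTorch: a pyramidal BiLSTM listener (encoder) that downsamples the input frames, and an attention-based speller (decoder) that emits one character at a time. Class names, dimensions, and the dot-product attention are illustrative assumptions; this repo's actual modules differ in detail.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Listener(nn.Module):
    """Pyramidal BiLSTM encoder: halves the time resolution at each layer."""
    def __init__(self, idim=80, hdim=256, layers=3):
        super().__init__()
        self.rnns = nn.ModuleList(
            [nn.LSTM(idim if i == 0 else 4 * hdim, hdim,
                     batch_first=True, bidirectional=True)
             for i in range(layers)])

    def forward(self, x):                       # x: (B, T, idim)
        for i, rnn in enumerate(self.rnns):
            if i > 0:                           # pyramid: stack adjacent frames
                B, T, D = x.shape
                x = x[:, :T - T % 2].reshape(B, T // 2, 2 * D)
            x, _ = rnn(x)
        return x                                # (B, ~T/4, 2*hdim)

class Speller(nn.Module):
    """Attention-based LSTM decoder, teacher-forced over character targets."""
    def __init__(self, vocab, hdim=256, edim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, hdim)
        self.cell = nn.LSTMCell(hdim + edim, hdim)
        self.query = nn.Linear(hdim, edim)      # decoder state -> attention query
        self.out = nn.Linear(hdim + edim, vocab)

    def forward(self, enc, ys):                 # enc: (B, T, edim), ys: (B, L)
        B = enc.size(0)
        h = c = enc.new_zeros(B, self.cell.hidden_size)
        ctx = enc.new_zeros(B, enc.size(2))
        logits = []
        for t in range(ys.size(1)):
            e = self.embed(ys[:, t])
            h, c = self.cell(torch.cat([e, ctx], dim=-1), (h, c))
            score = torch.bmm(enc, self.query(h).unsqueeze(2))  # (B, T, 1)
            ctx = (F.softmax(score, dim=1) * enc).sum(dim=1)    # context vector
            logits.append(self.out(torch.cat([h, ctx], dim=-1)))
        return torch.stack(logits, dim=1)       # (B, L, vocab)

# Toy forward pass: 2 utterances, 100 frames of 80-dim features, 12 target chars.
xs = torch.randn(2, 100, 80)
ys = torch.randint(0, 30, (2, 12))
print(Speller(vocab=30)(Listener()(xs), ys).shape)  # torch.Size([2, 12, 30])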

Install

  • Python 3 (Anaconda recommended)
  • PyTorch 0.4.1+
  • Kaldi (used only for feature extraction)
  • pip install -r requirements.txt
  • cd tools; make KALDI=/path/to/kaldi
  • If you want to run egs/aishell/run.sh, download the AISHELL dataset (it is freely available).

Usage

  1. $ cd egs/aishell and edit the AISHELL data path in run.sh to point to your copy of the dataset.
  2. $ bash run.sh. That's all!

You can change hyper-parameters with $ bash run.sh --parameter_name parameter_value, e.g., $ bash run.sh --stage 3. See the parameter names in egs/aishell/run.sh before the line . utils/parse_options.sh.

More detail

$ cd egs/aishell/
$ . ./path.sh

Train

$ train.py -h

Decode

$ recognize.py -h

Workflow

Workflow of egs/aishell/run.sh:

  • Stage 0: Data Preparation
  • Stage 1: Feature Generation (see the sketch after this list)
  • Stage 2: Dictionary and Json Data Preparation
  • Stage 3: Network Training
  • Stage 4: Decoding
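
A note on Stage 1: run.sh extracts acoustic features with Kaldi's own tools. Purely for illustration, the rough Python analog below uses torchaudio's Kaldi-compatible frontend; torchaudio, the 80-bin setting, and the file name are assumptions, not what the script actually invokes.

# Illustrative only: run.sh calls Kaldi for feature extraction, not torchaudio.
import torchaudio
import torchaudio.compliance.kaldi as kaldi

# Hypothetical AISHELL utterance; any 16 kHz mono wav works.
waveform, sample_rate = torchaudio.load("BAC009S0002W0122.wav")
feats = kaldi.fbank(waveform, num_mel_bins=80,
                    sample_frequency=sample_rate)  # (num_frames, 80) log-mel filterbank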

Visualize loss

If you want to visualize the training loss, you can use visdom (a minimal sketch of the client calls involved follows the steps below):

  • Open a new terminal on your remote server (tmux recommended) and run $ visdom.
  • Open another terminal and run $ bash run.sh --visdom 1 --visdom_id "<any-string>" or $ train.py ... --visdom 1 --visdom_id "<any-string>".
  • Open your browser and go to <your-remote-server-ip>:8097, e.g., 127.0.0.1:8097.
  • On the visdom page, choose <any-string> under Environment to see your loss.
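
Under the hood, --visdom 1 presumably amounts to client calls like the ones below; this is a sketch, not the repo's exact code, and the environment name and loss values are placeholders.

# Minimal sketch of per-epoch loss plotting with the visdom client.
import numpy as np
import visdom

vis = visdom.Visdom(env="my-las-run")            # connects to localhost:8097 by default
win = None
for epoch, loss in enumerate([3.2, 2.1, 1.5]):   # stand-ins for real epoch losses
    win = vis.line(X=np.array([epoch]), Y=np.array([loss]), win=win,
                   update="append" if win else None,
                   opts=dict(title="training loss"))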

Results

Model                      CER (%)   Config
LSTMP                      9.85      4x(1024-512)
Listen, Attend and Spell   13.2      See egs/aishell/run.sh
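
CER is character error rate in percent: the character-level edit distance between hypothesis and reference, normalized by reference length. Below is a minimal sketch of how it is computed; the function is ours for illustration, not the repo's scoring code.

# Character error rate via Levenshtein distance, reported as a percentage.
def cer(ref: str, hyp: str) -> float:
    prev = list(range(len(hyp) + 1))             # distances against the empty reference
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,            # deletion
                         cur[j - 1] + 1,         # insertion
                         prev[j - 1] + (r != h)) # substitution or match
        prev = cur
    return 100.0 * prev[-1] / max(len(ref), 1)

print(cer("abcd", "abed"))  # 25.0: one substitution over four characters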

Reference

[1] W. Chan, N. Jaitly, Q. Le, and O. Vinyals, “Listen, attend and spell: A neural network for large vocabulary conversational speech recognition,” in ICASSP 2016. (https://arxiv.org/abs/1508.01211v2)
