
biyoml / End-to-End-Mandarin-ASR

Licence: other
End-to-end speech recognition on the AISHELL dataset.

Programming Languages

python

Projects that are alternatives of or similar to End-to-End-Mandarin-ASR

kospeech
Open-Source Toolkit for End-to-End Korean Automatic Speech Recognition leveraging PyTorch and Hydra.
Stars: ✭ 456 (+2180%)
Mutual labels:  end-to-end, speech-recognition, asr
End2end Asr Pytorch
End-to-End Automatic Speech Recognition on PyTorch
Stars: ✭ 175 (+775%)
Mutual labels:  end-to-end, speech-recognition, asr
E2e Asr
PyTorch Implementations for End-to-End Automatic Speech Recognition
Stars: ✭ 106 (+430%)
Mutual labels:  end-to-end, speech-recognition, asr
Rnn Transducer
MXNet implementation of RNN Transducer (Graves 2012): Sequence Transduction with Recurrent Neural Networks
Stars: ✭ 114 (+470%)
Mutual labels:  end-to-end, speech-recognition, asr
Espresso
Espresso: A Fast End-to-End Neural Speech Recognition Toolkit
Stars: ✭ 808 (+3940%)
Mutual labels:  end-to-end, speech-recognition, asr
Automatic speech recognition
End-to-end Automatic Speech Recognition for Mandarin and English in TensorFlow
Stars: ✭ 2,751 (+13655%)
Mutual labels:  end-to-end, speech-recognition, chinese-speech-recognition
kosr
Korean speech recognition based on the Transformer architecture
Stars: ✭ 25 (+25%)
Mutual labels:  end-to-end, speech-recognition, asr
Kospeech
Open-Source Toolkit for End-to-End Korean Automatic Speech Recognition.
Stars: ✭ 190 (+850%)
Mutual labels:  end-to-end, speech-recognition, asr
Tensorflow end2end speech recognition
End-to-end speech recognition implementation based on TensorFlow (CTC, attention, and MTL training)
Stars: ✭ 305 (+1425%)
Mutual labels:  end-to-end, speech-recognition, asr
Speech Transformer Tf2.0
Transformer for ASR systems (via TensorFlow 2.0)
Stars: ✭ 90 (+350%)
Mutual labels:  end-to-end, speech-recognition
ctc-asr
End-to-end trained speech recognition system, based on RNNs and the connectionist temporal classification (CTC) cost function.
Stars: ✭ 112 (+460%)
Mutual labels:  speech-recognition, asr
Wav2letter
Facebook AI Research's Automatic Speech Recognition Toolkit
Stars: ✭ 5,907 (+29435%)
Mutual labels:  end-to-end, speech-recognition
Listen Attend Spell
A PyTorch implementation of Listen, Attend and Spell (LAS), an End-to-End ASR framework.
Stars: ✭ 147 (+635%)
Mutual labels:  end-to-end, asr
megs
A merged version of multiple open-source German speech datasets.
Stars: ✭ 21 (+5%)
Mutual labels:  speech-recognition, asr
ASR-Audio-Data-Links
A list of publicly available audio data that anyone can download for ASR or other speech activities
Stars: ✭ 179 (+795%)
Mutual labels:  speech-recognition, asr
wav2vec2-live
Live speech recognition using Facebook's wav2vec 2.0 model.
Stars: ✭ 205 (+925%)
Mutual labels:  speech-recognition, asr
specAugment
Tensor2tensor experiment with SpecAugment
Stars: ✭ 46 (+130%)
Mutual labels:  speech-recognition, specaugment
opensource-voice-tools
A repo listing known open source voice tools, ordered by where they sit in the voice stack
Stars: ✭ 21 (+5%)
Mutual labels:  speech-recognition, asr
Espnet
End-to-End Speech Processing Toolkit
Stars: ✭ 4,533 (+22565%)
Mutual labels:  end-to-end, speech-recognition
Speech Transformer
A PyTorch implementation of Speech Transformer, an End-to-End ASR with Transformer network on Mandarin Chinese.
Stars: ✭ 565 (+2725%)
Mutual labels:  end-to-end, asr

End-to-End-Mandarin-ASR

Mandarin Chinese speech recognition

End-to-end speech recognition on the AISHELL dataset using PyTorch.

The entire system is an attention-based sequence-to-sequence model [1, 2]. The encoder is a bidirectional GRU network with BatchNorm, and the decoder is another GRU network that applies Luong-style attention [3].
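
A minimal PyTorch sketch of that layout is shown below; the layer sizes, number of layers, and module names are illustrative assumptions, not the repository's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Bidirectional GRU encoder with BatchNorm over the input features."""
    def __init__(self, input_dim=240, hidden_dim=256):
        super().__init__()
        self.bn = nn.BatchNorm1d(input_dim)
        self.rnn = nn.GRU(input_dim, hidden_dim, num_layers=3,
                          bidirectional=True, batch_first=True)

    def forward(self, x):                                  # x: (batch, time, feat)
        x = self.bn(x.transpose(1, 2)).transpose(1, 2)     # normalize the feature dim
        out, _ = self.rnn(x)                               # (batch, time, 2*hidden)
        return out

class LuongAttention(nn.Module):
    """Dot-product (Luong-style) attention over the encoder outputs."""
    def __init__(self, dec_dim, enc_dim):
        super().__init__()
        self.proj = nn.Linear(dec_dim, enc_dim)

    def forward(self, dec_state, enc_out):                 # dec_state: (batch, dec_dim)
        query = self.proj(dec_state).unsqueeze(1)          # (batch, 1, enc_dim)
        scores = torch.bmm(query, enc_out.transpose(1, 2)) # (batch, 1, time)
        weights = F.softmax(scores, dim=-1)
        context = torch.bmm(weights, enc_out)              # (batch, 1, enc_dim)
        return context.squeeze(1), weights.squeeze(1)

class Decoder(nn.Module):
    """GRU decoder that attends over the encoder outputs at every step."""
    def __init__(self, vocab_size, enc_dim=512, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.GRUCell(hidden_dim + enc_dim, hidden_dim)
        self.attn = LuongAttention(hidden_dim, enc_dim)
        self.out = nn.Linear(hidden_dim + enc_dim, vocab_size)

    def forward(self, prev_token, state, context, enc_out):
        emb = self.embed(prev_token)                       # (batch, hidden)
        state = self.rnn(torch.cat([emb, context], dim=-1), state)
        context, weights = self.attn(state, enc_out)
        logits = self.out(torch.cat([state, context], dim=-1))
        return logits, state, context, weights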

The acoustic features are 80-dimensional filter banks. We apply SpecAugment [4] to these features to improve generalization. Every 3 consecutive frames are also stacked, which reduces the time resolution by a factor of 3.
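
For reference, here is a sketch of such a feature pipeline using torchaudio; the torchaudio calls are real, but the mask widths and other parameters are assumptions, and the repository may implement the pipeline differently.

import torchaudio

def extract_features(wav_path, training=True):
    """80-dim log filter banks + SpecAugment-style masking + 3-frame stacking."""
    wav, sr = torchaudio.load(wav_path)
    fbank = torchaudio.compliance.kaldi.fbank(
        wav, num_mel_bins=80, sample_frequency=sr)         # (time, 80)

    if training:
        # SpecAugment-style frequency/time masking (mask widths are assumptions).
        spec = fbank.t().unsqueeze(0)                      # (1, 80, time)
        spec = torchaudio.transforms.FrequencyMasking(freq_mask_param=27)(spec)
        spec = torchaudio.transforms.TimeMasking(time_mask_param=40)(spec)
        fbank = spec.squeeze(0).t()

    # Stack every 3 consecutive frames -> time resolution reduced by a factor of 3.
    T = (fbank.size(0) // 3) * 3
    return fbank[:T].reshape(T // 3, 3 * 80)               # (time/3, 240)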

With this code you can reach roughly 10% character error rate (CER) on the test set after 100 epochs of training.
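
Here CER is the character error rate: the Levenshtein (edit) distance between the predicted and reference character sequences, divided by the reference length. A small self-contained sketch of that metric:

def cer(hyp: str, ref: str) -> float:
    """Character error rate = edit distance / reference length."""
    # Classic dynamic-programming Levenshtein distance, computed row by row.
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        curr = [i]
        for j, r in enumerate(ref, 1):
            curr.append(min(prev[j] + 1,                   # insertion (extra hyp char)
                            curr[j - 1] + 1,               # deletion (missing hyp char)
                            prev[j - 1] + (h != r)))       # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

# The inference example further below: 1 substituted character out of 9 -> ~0.11 CER.
print(cer("你电池市场也在向好", "锂电池市场也在向好"))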

Usage

Install requirements

$ pip install -r requirements.txt

Data

  1. Download the AISHELL dataset (data_aishell.tgz) from http://www.openslr.org/33/.
  2. Extract data_aishell.tgz:
$ python extract_aishell.py ${PATH_TO_FILE}
  3. Create lists (*.csv) of audio file paths along with their transcripts (see the sketch after this step for the general idea):
$ python prepare_data.py ${DIRECTORY_OF_AISHELL}
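
As a rough idea of what such lists contain, the hypothetical sketch below walks the standard AISHELL-1 layout and writes one CSV per split; the transcript file name, CSV columns, and directory layout are assumptions about what prepare_data.py actually does.

import csv
from pathlib import Path

def build_list(aishell_dir, split, out_csv):
    """Pair each wav file in a split with its transcript line (hypothetical layout)."""
    root = Path(aishell_dir)
    # AISHELL-1 ships a single transcript file keyed by utterance ID.
    trans_path = root / "transcript" / "aishell_transcript_v0.8.txt"
    transcripts = {}
    for line in trans_path.read_text(encoding="utf-8").splitlines():
        utt_id, *chars = line.split()
        transcripts[utt_id] = " ".join(chars)

    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "transcript"])            # assumed column names
        for wav in sorted((root / "wav" / split).rglob("*.wav")):
            if wav.stem in transcripts:                    # a few wavs lack transcripts
                writer.writerow([str(wav), transcripts[wav.stem]])

# build_list("data_aishell", "train", "train.csv")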

Train

Check available options:

$ python train.py -h

Use the default configuration for training:

$ python train.py exp/default.yaml

You can also write your own configuration file based on exp/default.yaml and train with it:

$ python train.py ${PATH_TO_YOUR_CONFIG}

Show loss curve

With the default configuration, the training log is stored in exp/default/history.csv. If you train with a different configuration, point the script at the corresponding log file.

$ python show_history.py exp/default/history.csv
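
show_history.py does this for you; a minimal equivalent with pandas and matplotlib would look roughly like the sketch below, which avoids assuming particular column names by plotting every loss/error column it finds.

import sys
import pandas as pd
import matplotlib.pyplot as plt

# Path to the training log; defaults to the default configuration's location.
history = pd.read_csv(sys.argv[1] if len(sys.argv) > 1 else "exp/default/history.csv")

for col in history.columns:
    # Plot whatever loss/error columns train.py actually logged.
    if "loss" in col.lower() or "error" in col.lower() or "cer" in col.lower():
        plt.plot(history[col], label=col)
plt.xlabel("epoch")
plt.legend()
plt.show()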

Test

During training, the program keeps monitoring the error rate on the development set. The checkpoint with the lowest error rate is saved in the logging directory (exp/default/best.pth by default).
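
The pattern behind that behavior is roughly the following sketch (not the repository's exact code; train_one_epoch and evaluate stand in for whatever the training loop actually calls):

import torch

def fit(model, train_loader, dev_loader, optimizer,
        train_one_epoch, evaluate, num_epochs,
        ckpt_path="exp/default/best.pth"):
    """Keep only the checkpoint with the lowest dev error rate."""
    best_cer = float("inf")
    for epoch in range(num_epochs):
        train_one_epoch(model, train_loader, optimizer)
        dev_cer = evaluate(model, dev_loader)              # e.g. CER on the dev set
        if dev_cer < best_cer:
            best_cer = dev_cer
            torch.save({"model": model.state_dict(),
                        "epoch": epoch,
                        "dev_cer": dev_cer}, ckpt_path)
    return best_cer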

To evaluate that checkpoint on the test set (with a beam width of 5), run:

$ python eval.py exp/default/best.pth --beams 5

Or you can run inference on a random utterance from the test set and visualize the attention weights:

$ python inference.py exp/default/best.pth --beams 5

Predict:
你 电 池 市 场 也 在 向 好
Ground-truth:
锂 电 池 市 场 也 在 向 好
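
inference.py renders the attention plot itself; if you want to inspect attention weights from your own code, a decoder-steps-by-encoder-frames matrix can be visualized with a generic matplotlib sketch like this (all names here are illustrative):

import matplotlib.pyplot as plt
import numpy as np

def plot_attention(weights, hyp_chars):
    """weights: (len(hyp_chars), num_encoder_frames) attention matrix."""
    fig, ax = plt.subplots()
    ax.imshow(np.asarray(weights), aspect="auto", origin="lower")
    ax.set_xlabel("encoder frames")
    ax.set_ylabel("decoder steps")
    ax.set_yticks(range(len(hyp_chars)))
    ax.set_yticklabels(hyp_chars)
    plt.show()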

TODO

  • Beam Search
  • Restore checkpoint and resume previous training
  • SpecAugment
  • LM Rescoring
  • Label Smoothing
  • Polyak Averaging

References

[1] W. Chan et al., "Listen, Attend and Spell", https://arxiv.org/abs/1508.01211

[2] J. Chorowski et al., "Attention-Based Models for Speech Recognition", https://arxiv.org/abs/1506.07503

[3] M. Luong et al., "Effective Approaches to Attention-based Neural Machine Translation", https://arxiv.org/abs/1508.04025

[4] D. Park et al., "SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition", https://arxiv.org/abs/1904.08779
