tugstugi / Mongolian Speech Recognition

Mongolian speech recognition with PyTorch


Projects that are alternatives to or similar to Mongolian Speech Recognition

Wav2letter
Speech recognition model based on the FAIR research paper, built using PyTorch.
Stars: ✭ 78 (-19.59%)
Mutual labels:  convolutional-neural-networks, speech-recognition, speech-to-text, asr
vosk-asterisk
Speech Recognition in Asterisk with Vosk Server
Stars: ✭ 52 (-46.39%)
Mutual labels:  speech-recognition, speech-to-text, asr
Keras Sincnet
Keras (tensorflow) implementation of SincNet (Mirco Ravanelli, Yoshua Bengio - https://github.com/mravanelli/SincNet)
Stars: ✭ 47 (-51.55%)
Mutual labels:  convolutional-neural-networks, speech-recognition, asr
sova-asr
SOVA ASR (Automatic Speech Recognition)
Stars: ✭ 123 (+26.8%)
Mutual labels:  speech-recognition, speech-to-text, asr
kaldi-long-audio-alignment
Long audio alignment using Kaldi
Stars: ✭ 21 (-78.35%)
Mutual labels:  speech-recognition, speech-to-text, asr
spokestack-ios
Spokestack: give your iOS app a voice interface!
Stars: ✭ 27 (-72.16%)
Mutual labels:  speech-recognition, speech-to-text, asr
Openasr
A pytorch based end2end speech recognition system.
Stars: ✭ 69 (-28.87%)
Mutual labels:  speech-recognition, speech-to-text, asr
ASR-Audio-Data-Links
A list of publicly available audio data that anyone can download for ASR or other speech activities
Stars: ✭ 179 (+84.54%)
Mutual labels:  speech-recognition, speech-to-text, asr
Tensorflow end2end speech recognition
End-to-End speech recognition implementation base on TensorFlow (CTC, Attention, and MTL training)
Stars: ✭ 305 (+214.43%)
Mutual labels:  speech-recognition, speech-to-text, asr
Cheetah
On-device streaming speech-to-text engine powered by deep learning
Stars: ✭ 383 (+294.85%)
Mutual labels:  speech-recognition, speech-to-text, asr
PCPM
Presenting Collection of Pretrained Models. Links to pretrained models in NLP and voice.
Stars: ✭ 21 (-78.35%)
Mutual labels:  speech-recognition, speech-to-text, asr
Eesen
The official repository of the Eesen project
Stars: ✭ 738 (+660.82%)
Mutual labels:  speech-recognition, speech-to-text, asr
speech-recognition-evaluation
Evaluate results from ASR/Speech-to-Text quickly
Stars: ✭ 25 (-74.23%)
Mutual labels:  speech-recognition, speech-to-text, asr
Syn Speech
Syn.Speech is a flexible speaker independent continuous speech recognition engine for Mono and .NET framework
Stars: ✭ 57 (-41.24%)
Mutual labels:  speech-recognition, speech-to-text, asr
react-native-spokestack
Spokestack: give your React Native app a voice interface!
Stars: ✭ 53 (-45.36%)
Mutual labels:  speech-recognition, speech-to-text, asr
speech-recognition
SDKs and docs for Skit's speech to text service
Stars: ✭ 20 (-79.38%)
Mutual labels:  speech-recognition, speech-to-text, asr
leopard
On-device speech-to-text engine powered by deep learning
Stars: ✭ 354 (+264.95%)
Mutual labels:  speech-recognition, speech-to-text, asr
wav2vec2-live
Live speech recognition using Facebook's wav2vec 2.0 model.
Stars: ✭ 205 (+111.34%)
Mutual labels:  speech-recognition, speech-to-text, asr
demo vietasr
Vietnamese Speech Recognition
Stars: ✭ 22 (-77.32%)
Mutual labels:  speech-recognition, speech-to-text, asr
Silero Models
Silero Models: pre-trained STT models and benchmarks made embarrassingly simple
Stars: ✭ 522 (+438.14%)
Mutual labels:  speech-recognition, speech-to-text, asr

An online demo trained on a proprietary Mongolian dataset (WER 8%): https://chimege.mn/.

In this repo, the following papers are implemented:

This repo is partially based on:

Training

  1. Install PyTorch>=1.3 with conda
  2. Install remaining dependencies: pip install -r requirements.txt
  3. Download the Mongolian Bible dataset: cd datasets && python dl_mbspeech.py
  4. Precompute the mel spectrograms: python preprop_dataset.py --dataset mbspeech (see the sketch after this list)
  5. Train: python train.py --model crnn --max-epochs 50 --dataset mbspeech --lr-warmup-steps 100
    • logs for TensorBoard are saved in the logdir folder
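
Step 4 caches log-mel spectrograms on disk so training does not have to recompute them on every epoch. The snippet below is only a rough sketch of such a precomputation step; the actual feature extraction lives in preprop_dataset.py, and the parameters used here (16 kHz sample rate, 1024-point FFT, 64 mel bands) are illustrative assumptions, not the repo's real values.

# Illustrative only: the real preprocessing is done by preprop_dataset.py.
# Sample rate, FFT size, hop length and number of mel bands are assumed values.
import librosa
import numpy as np

def compute_mel(wav_path, sr=16000, n_fft=1024, hop_length=256, n_mels=64):
    """Load a wav file and return a log-mel spectrogram as a NumPy array."""
    audio, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    return np.log(mel + 1e-9)  # log compression for numerical stability

# Cache the features next to the audio so the training loop can skip the STFT.
np.save("sample.mel.npy", compute_mel("sample.wav"))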

Results

During training, the ground truth and recognized texts are logged to TensorBoard. Because the dataset contains only a single speaker, the predicted texts from the validation set should already be recognizable after a few epochs:

EXPECTED:

аливаа цус хувцсан дээр үсрэхэд цус үсэрсэн хэсгийг та нар ариун газарт угаагтун

PREDICTED:

аливаа цус хувцсан дээр үсэрхэд цус усарсан хэсхийг та нар ариун газарт угаагтун
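
For reference, the word error rate (WER) between such a pair is the word-level edit distance divided by the number of reference words. The helper below is not part of this repo, just a minimal illustration of that computation.

# Minimal WER helper (not part of this repo): word-level Levenshtein distance
# divided by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

expected = "аливаа цус хувцсан дээр үсрэхэд цус үсэрсэн хэсгийг та нар ариун газарт угаагтун"
predicted = "аливаа цус хувцсан дээр үсэрхэд цус усарсан хэсхийг та нар ариун газарт угаагтун"
print(f"WER: {wer(expected, predicted):.2f}")  # 3 substitutions out of 13 words ≈ 0.23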

For fun, you can also generate audio with a Mongolian TTS and try to recognize it. The following commands generate audio with the TTS of the Mongolian National University and run speech recognition on the generated audio:

# generate audio for 'Миний төрсөн нутаг Монголын сайхан орон'
wget -O test.wav "http://172.104.34.197/nlp-web-demo/tts?voice=1&text=Миний төрсөн нутаг Монголын сайхан орон."
# speech recognition on that TTS generated audio
python transcribe.py --checkpoint=logdir/mbspeech_crnn_sgd_wd1e-05/epoch-0050.pth --model=crnn test.wav
# will output: 'миний төрсөн нут мөнголын сайхан оөрулн'

It is also possible to use a KenLM binary model. First download it from tugstugi/mongolian-nlp. After that, install parlance/ctcdecode. Now you can transcribe with the language model:

python transcribe.py --checkpoint=path/to/checkpoint --lm=mn_5gram.binary --alpha=0.3 test.wav
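
Under the hood, parlance/ctcdecode provides a CTCBeamDecoder that combines the acoustic model's per-frame character probabilities with the KenLM model during beam search. The sketch below shows how such a decoder is typically constructed; the label set, beta value, and tensor shapes are assumptions for illustration, and transcribe.py's actual wiring may differ.

# Sketch of KenLM-weighted beam search with parlance/ctcdecode.
# The character vocabulary and probability tensor here are placeholders.
import torch
from ctcdecode import CTCBeamDecoder

labels = ["_", " ", "а", "б", "в"]  # placeholder vocabulary; index 0 is the CTC blank
decoder = CTCBeamDecoder(
    labels,
    model_path="mn_5gram.binary",  # KenLM binary from tugstugi/mongolian-nlp
    alpha=0.3,                     # language model weight (matches --alpha above)
    beta=1.0,                      # word insertion bonus (assumed value)
    beam_width=100,
    blank_id=0,
    log_probs_input=False,
)

# probs: softmax output of the acoustic model, shape (batch, time, num_labels)
probs = torch.rand(1, 200, len(labels)).softmax(dim=-1)
beam_results, beam_scores, timesteps, out_lens = decoder.decode(probs)
best = beam_results[0][0][: out_lens[0][0]]  # best beam of the first utterance
print("".join(labels[int(i)] for i in best))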

Contribute

If you are Mongolian and want to help us, please record your voice on Common Voice.
