rwth-i6 / rasr
Licence: other
The RWTH ASR Toolkit.
Stars: ✭ 43
Projects that are alternatives of or similar to rasr
End2end Asr Pytorch
End-to-End Automatic Speech Recognition on PyTorch
Stars: ✭ 175 (+306.98%)
Mutual labels: asr
Chinese text normalization
Chinese text normalization for speech processing
Stars: ✭ 242 (+462.79%)
Mutual labels: asr
Listen Attend Spell
A PyTorch implementation of Listen, Attend and Spell (LAS), an End-to-End ASR framework.
Stars: ✭ 147 (+241.86%)
Mutual labels: asr
Mrcp Plugin With Freeswitch
Uses FreeSWITCH to accept phone calls from users, integrates an iFLYTEK open platform (xfyun) plugin via UniMRCP Server to perform speech recognition (ASR) on the caller's audio, and invokes text-to-speech (TTS) according to custom business logic, building a simple end-to-end voice call center.
Stars: ✭ 168 (+290.7%)
Mutual labels: asr
Asr Evaluation
Python module for evaluating ASR hypotheses (e.g. word error rate, word recognition rate).
Stars: ✭ 190 (+341.86%)
Mutual labels: asr
Rnn Transducer
MXNet implementation of RNN Transducer (Graves 2012): Sequence Transduction with Recurrent Neural Networks
Stars: ✭ 114 (+165.12%)
Mutual labels: asr
Pytorch Kaldi
pytorch-kaldi is a project for developing state-of-the-art DNN/RNN hybrid speech recognition systems. The DNN part is managed by pytorch, while feature extraction, label computation, and decoding are performed with the kaldi toolkit.
Stars: ✭ 2,097 (+4776.74%)
Mutual labels: asr
Edgedict
Working online speech recognition based on RNN Transducer. (A trained model is available in the releases.)
Stars: ✭ 205 (+376.74%)
Mutual labels: asr
Speech To Text Russian
A project for Russian speech recognition based on pykaldi.
Stars: ✭ 151 (+251.16%)
Mutual labels: asr
Py Kaldi Asr
Some simple wrappers around kaldi-asr intended to make using kaldi's (online) decoders as convenient as possible.
Stars: ✭ 156 (+262.79%)
Mutual labels: asr
Kospeech
Open-Source Toolkit for End-to-End Korean Automatic Speech Recognition.
Stars: ✭ 190 (+341.86%)
Mutual labels: asr
Asr audio data links
A list of publicly available audio data that anyone can download for ASR or other speech activities
Stars: ✭ 128 (+197.67%)
Mutual labels: asr
Kerasdeepspeech
A Keras CTC implementation of Baidu's DeepSpeech for model experimentation
Stars: ✭ 245 (+469.77%)
Mutual labels: asr
Hms Ml Demo
HMS ML Demo provides an example of integrating Huawei ML Kit service into applications. This example demonstrates how to integrate services provided by ML Kit, such as face detection, text recognition, image segmentation, asr, and tts.
Stars: ✭ 187 (+334.88%)
Mutual labels: asr
Wukong Robot
🤖 wukong-robot is a simple, flexible, and elegant Chinese voice dialogue robot / smart speaker project, and possibly the first open-source smart speaker project to support brain-computer interaction.
Stars: ✭ 3,110 (+7132.56%)
Mutual labels: asr
Zeroth
Kaldi-based Korean ASR (한국어 음성인식) open-source project
Stars: ✭ 248 (+476.74%)
Mutual labels: asr
Sprint - The RWTH Speech Recognition Framework

* Requirements
  (Debian package names given in brackets)
  - GCC >= 4.8 (gcc, g++)
  - GNU Bison (bison)
  - GNU Make (make)
  - libxml2 (libxml2, libxml2-dev)
  - libsndfile (libsndfile1, libsndfile1-dev)
  - libcppunit (libcppunit, libcppunit-dev)
  - LAPACK (lapack3, lapack3-dev)
  - BLAS (refblas3, refblas3-dev)

* Build
  - Adapt Config.make, Options.make, and config/os-linux.make to your environment
  - Check requirements:
      ./scripts/requirements.sh
  - Compile:
      make
  - Install (installs executables in arch/linux-intel-standard/;
    see INSTALL_TARGET in Options.make):
      make install

* Documentation
  - Code documentation (requires Doxygen): run 'doxygen' in src/;
    documentation is generated in src/api/html
  - http://www-i6.informatik.rwth-aachen.de/rwth-asr
    Wiki: http://www-i6.informatik.rwth-aachen.de/rwth-asr/manual
  - Manual: http://www-i6.informatik.rwth-aachen.de/sprintdoc
    (login on request)

* Signal Analysis
  Flow files for common acoustic features:
  http://www-i6.informatik.rwth-aachen.de/rwth-asr/files/flow-examples-0.2.tar.gz

--
(c) 2000-2020 RWTH Aachen University, Lehrstuhl fuer Informatik 6
http://www-i6.informatik.rwth-aachen.de/rwth-asr/
[email protected]
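The build steps above can be sketched as a single shell sequence. This is a hedged outline, not part of the official README: it assumes you have already cloned the repository and edited the listed make configuration files; package names follow the Debian names given in the requirements list.

```shell
#!/bin/sh
# Sketch of the RASR/Sprint build sequence described above.
# Run from the root of a cloned rasr checkout.
set -e

# 0. (Debian/Ubuntu) install the listed build dependencies; package
#    names may differ on newer distributions.
# sudo apt-get install gcc g++ bison make libxml2-dev libsndfile1-dev \
#     libcppunit-dev liblapack-dev libblas-dev

# 1. Verify that the required tools and libraries are present.
./scripts/requirements.sh

# 2. Config.make, Options.make, and config/os-linux.make should already
#    be adapted to your environment (compiler, optional modules) by now.

# 3. Compile the toolkit.
make

# 4. Install executables into arch/linux-intel-standard/
#    (controlled by INSTALL_TARGET in Options.make).
make install
```

The `set -e` line stops the sequence at the first failing step, so a missing dependency reported by requirements.sh aborts before the (long) compile starts.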