wenet-e2e / wenet

License: Apache-2.0
Production First and Production Ready End-to-End Speech Recognition Toolkit

Programming Languages

  • C++
  • Python
  • Shell
  • Perl
  • CMake
  • Java

Projects that are alternatives of or similar to wenet

kospeech
Open-Source Toolkit for End-to-End Korean Automatic Speech Recognition leveraging PyTorch and Hydra.
Stars: ✭ 456 (-80.87%)
Mutual labels:  transformer, speech-recognition, asr, conformer
sova-asr
SOVA ASR (Automatic Speech Recognition)
Stars: ✭ 123 (-94.84%)
Mutual labels:  speech-recognition, automatic-speech-recognition, asr
lightning-asr
Modular and extensible speech recognition library leveraging pytorch-lightning and hydra.
Stars: ✭ 36 (-98.49%)
Mutual labels:  speech-recognition, asr, conformer
Kospeech
Open-Source Toolkit for End-to-End Korean Automatic Speech Recognition.
Stars: ✭ 190 (-92.03%)
Mutual labels:  transformer, speech-recognition, asr
leopard
On-device speech-to-text engine powered by deep learning
Stars: ✭ 354 (-85.15%)
Mutual labels:  speech-recognition, automatic-speech-recognition, asr
kaldi-long-audio-alignment
Long audio alignment using Kaldi
Stars: ✭ 21 (-99.12%)
Mutual labels:  speech-recognition, automatic-speech-recognition, asr
demo vietasr
Vietnamese Speech Recognition
Stars: ✭ 22 (-99.08%)
Mutual labels:  speech-recognition, automatic-speech-recognition, asr
Athena
An open-source implementation of a sequence-to-sequence based speech processing engine
Stars: ✭ 542 (-77.27%)
Mutual labels:  transformer, speech-recognition, asr
Neural sp
End-to-end ASR/LM implementation with PyTorch
Stars: ✭ 408 (-82.89%)
Mutual labels:  transformer, speech-recognition, asr
kosr
Korean speech recognition based on transformer
Stars: ✭ 25 (-98.95%)
Mutual labels:  transformer, speech-recognition, asr
Openasr
A pytorch based end2end speech recognition system.
Stars: ✭ 69 (-97.11%)
Mutual labels:  transformer, speech-recognition, asr
End2end Asr Pytorch
End-to-End Automatic Speech Recognition on PyTorch
Stars: ✭ 175 (-92.66%)
Mutual labels:  transformer, speech-recognition, asr
obvi
A Polymer 3+ webcomponent / button for doing speech recognition
Stars: ✭ 54 (-97.73%)
Mutual labels:  speech-recognition, automatic-speech-recognition
wav2vec2-live
Live speech recognition using Facebook's wav2vec 2.0 model.
Stars: ✭ 205 (-91.4%)
Mutual labels:  speech-recognition, asr
rustfst
Rust re-implementation of OpenFST - library for constructing, combining, optimizing, and searching weighted finite-state transducers (FSTs). A Python binding is also available.
Stars: ✭ 104 (-95.64%)
Mutual labels:  speech-recognition, asr
megs
A merged version of multiple open-source German speech datasets.
Stars: ✭ 21 (-99.12%)
Mutual labels:  speech-recognition, asr
ASR-Audio-Data-Links
A list of publicly available audio data that anyone can download for ASR or other speech activities
Stars: ✭ 179 (-92.49%)
Mutual labels:  speech-recognition, asr
react-native-spokestack
Spokestack: give your React Native app a voice interface!
Stars: ✭ 53 (-97.78%)
Mutual labels:  speech-recognition, asr
opensource-voice-tools
A repo listing known open source voice tools, ordered by where they sit in the voice stack
Stars: ✭ 21 (-99.12%)
Mutual labels:  speech-recognition, asr

WeNet


Roadmap | Docs | Papers | Runtime (x86) | Runtime (Android) | Pretrained Models

We share Neural Net together.

The main motivation of WeNet is to close the gap between research and production end-to-end (E2E) speech recognition models, to reduce the effort of productionizing E2E models, and to explore better E2E models for production.

🔥 News

  • 2022.07.21: RNN-T is now supported; see rnnt for the benchmark.
  • 2022.07.03: The Python binding is stable; see python binding for usage.

Highlights

  • Production first and production ready: the core design principle of WeNet. WeNet provides full-stack solutions for speech recognition.

    • Unified solution for streaming and non-streaming ASR: the U2++ framework lets you develop, train, and deploy only once.
    • Runtime solution: built-in server (x86) and on-device (Android) runtime solutions.
    • Model exporting solution: built-in solution to export models to LibTorch/ONNX for inference.
    • LM solution: built-in production-level language model (LM) solution.
    • Other production solutions: built-in contextual biasing, timestamp, endpoint, and n-best solutions.
  • Accurate: WeNet achieves SOTA results on many public speech datasets.

  • Lightweight: WeNet is easy to install, easy to use, well designed, and well documented.
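The unified streaming/non-streaming design can be pictured with the chunk-based attention mask idea behind U2/U2++. The sketch below is illustrative plain Python, not WeNet's actual implementation: with a chunk size covering all frames the mask is full (non-streaming), while with a small chunk size each frame attends only up to the end of its own chunk, which bounds latency.

```python
def chunk_attention_mask(num_frames: int, chunk_size: int):
    """Return a num_frames x num_frames boolean mask; True = may attend."""
    mask = []
    for i in range(num_frames):
        # Each frame attends up to the last frame of its own chunk.
        limit = min(((i // chunk_size) + 1) * chunk_size, num_frames)
        mask.append([j < limit for j in range(num_frames)])
    return mask

# chunk_size=2 with 4 frames: frame 0 sees frames 0-1, frame 2 sees frames 0-3.
m = chunk_attention_mask(4, 2)
```

Because the same trained model accepts any chunk size at inference time, latency can be traded against accuracy without retraining, which is the "develop, train, and deploy only once" point above.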

Performance Benchmark

Please see examples/$dataset/s0/README.md for benchmark on different speech datasets.

Installation (Python Only)

If you just want to use WeNet as a Python package for speech recognition applications, install it with pip. Note that Python 3.6+ is required.

pip3 install wenetruntime

Please see the doc for usage.
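A minimal sketch of what that usage can look like, assuming the `wenetruntime` Decoder API from the project's binding examples; the language code `'chs'` and the WAV filename are placeholders, and the linked doc is authoritative. The snippet is guarded so it degrades gracefully where the package or audio file is unavailable.

```python
# Non-streaming recognition with the wenetruntime Python binding (sketch).
try:
    import wenetruntime as wenet
    decoder = wenet.Decoder(lang='chs')          # fetches a pretrained Chinese model
    transcript = decoder.decode_wav('test.wav')  # JSON string with the recognition result
except Exception:                                # e.g. package or WAV file missing
    transcript = None
```

The binding documentation also covers streaming input from raw audio chunks; consult the doc above for the exact API.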

Installation (Training and Development)

  • Clone the repo
git clone https://github.com/wenet-e2e/wenet.git
conda create -n wenet python=3.8
conda activate wenet
pip install -r requirements.txt
conda install pytorch=1.10.0 torchvision torchaudio=0.10.0 cudatoolkit=11.1 -c pytorch -c conda-forge
  • Optionally, if you want to use the x86 runtime or a language model (LM), build the runtime as follows; otherwise, you can skip this step.
# runtime build requires cmake 3.14 or above
cd runtime/libtorch
mkdir build && cd build && cmake -DFST_HAVE_BIN=ON .. && cmake --build .

Discussion & Communication

Please visit Discussions for further discussion.

For Chinese users, you can also scan the QR code on the left to follow our official WeNet account. We have created a WeChat group for better discussion and quicker responses; please scan the personal QR code on the right, and its owner will invite you to the chat group.

If you cannot access the QR image, please access it on Gitee.

Or you can directly discuss on Github Issues.

Contributors

Acknowledgments

  1. We borrowed a lot of code from ESPnet for transformer-based modeling.
  2. We borrowed a lot of code from Kaldi for WFST-based decoding for LM integration.
  3. We referred to EESEN for building the TLG-based graph for LM integration.
  4. We referred to OpenTransformer for Python batch inference of E2E models.

Citations

@inproceedings{yao2021wenet,
  title={WeNet: Production oriented Streaming and Non-streaming End-to-End Speech Recognition Toolkit},
  author={Yao, Zhuoyuan and Wu, Di and Wang, Xiong and Zhang, Binbin and Yu, Fan and Yang, Chao and Peng, Zhendong and Chen, Xiaoyu and Xie, Lei and Lei, Xin},
  booktitle={Proc. Interspeech},
  year={2021},
  address={Brno, Czech Republic},
  organization={IEEE}
}

@article{zhang2022wenet,
  title={WeNet 2.0: More Productive End-to-End Speech Recognition Toolkit},
  author={Zhang, Binbin and Wu, Di and Peng, Zhendong and Song, Xingchen and Yao, Zhuoyuan and Lv, Hang and Xie, Lei and Yang, Chao and Pan, Fuping and Niu, Jianwei},
  journal={arXiv preprint arXiv:2203.15455},
  year={2022}
}