
silversparro / Wav2letter.pytorch

Licence: MIT
A fully convolutional network for speech-to-text, built on PyTorch.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Wav2letter.pytorch

Wav2letter
Speech Recognition model based off of FAIR research paper built using Pytorch.
Stars: ✭ 78 (-25%)
Mutual labels:  convolutional-neural-networks, speech-recognition, speech-to-text
Mongolian Speech Recognition
Mongolian speech recognition with PyTorch
Stars: ✭ 97 (-6.73%)
Mutual labels:  convolutional-neural-networks, speech-recognition, speech-to-text
Openseq2seq
Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
Stars: ✭ 1,378 (+1225%)
Mutual labels:  speech-recognition, speech-to-text
Dragonfire
the open-source virtual assistant for Ubuntu based Linux distributions
Stars: ✭ 1,120 (+976.92%)
Mutual labels:  speech-recognition, speech-to-text
Vosk Api
Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
Stars: ✭ 1,357 (+1204.81%)
Mutual labels:  speech-recognition, speech-to-text
Syn Speech
Syn.Speech is a flexible speaker independent continuous speech recognition engine for Mono and .NET framework
Stars: ✭ 57 (-45.19%)
Mutual labels:  speech-recognition, speech-to-text
Audio Pretrained Model
A collection of Audio and Speech pre-trained models.
Stars: ✭ 61 (-41.35%)
Mutual labels:  speech-recognition, speech-to-text
Patter
speech-to-text in pytorch
Stars: ✭ 71 (-31.73%)
Mutual labels:  speech-recognition, speech-to-text
Discordspeechbot
A speech-to-text bot for discord with music commands and more using NodeJS. Ideally for controlling your Discord server using voice commands, can also be useful for hearing-impaired people.
Stars: ✭ 35 (-66.35%)
Mutual labels:  speech-recognition, speech-to-text
Deepspeech
A PaddlePaddle implementation of ASR.
Stars: ✭ 1,219 (+1072.12%)
Mutual labels:  speech-recognition, speech-to-text
Deepspeech Websocket Server
Server & client for DeepSpeech using WebSockets for real-time speech recognition in separate environments
Stars: ✭ 79 (-24.04%)
Mutual labels:  speech-recognition, speech-to-text
Keras Sincnet
Keras (tensorflow) implementation of SincNet (Mirco Ravanelli, Yoshua Bengio - https://github.com/mravanelli/SincNet)
Stars: ✭ 47 (-54.81%)
Mutual labels:  convolutional-neural-networks, speech-recognition
Spokestack Python
Spokestack is a library that allows a user to easily incorporate a voice interface into any Python application.
Stars: ✭ 103 (-0.96%)
Mutual labels:  speech-recognition, speech-to-text
Angle
⦠ Angle: new speakable syntax for python 💡
Stars: ✭ 61 (-41.35%)
Mutual labels:  speech-recognition, speech-to-text
Artyom.js
A voice control - voice commands - speech recognition and speech synthesis javascript library. Create your own siri,google now or cortana with Google Chrome within your website.
Stars: ✭ 1,011 (+872.12%)
Mutual labels:  speech-recognition, speech-to-text
Openasr
A pytorch based end2end speech recognition system.
Stars: ✭ 69 (-33.65%)
Mutual labels:  speech-recognition, speech-to-text
Stephanie Va
Stephanie is an open-source platform built specifically for voice-controlled applications as well as to automate daily tasks imitating much of an virtual assistant's work.
Stars: ✭ 772 (+642.31%)
Mutual labels:  speech-recognition, speech-to-text
Kur
Descriptive Deep Learning
Stars: ✭ 811 (+679.81%)
Mutual labels:  speech-recognition, speech-to-text
Nativescript Speech Recognition
💬 Speech to text, using the awesome engines readily available on the device.
Stars: ✭ 72 (-30.77%)
Mutual labels:  speech-recognition, speech-to-text
B.e.n.j.i.
B.E.N.J.I.- The Impossible Missions Force's digital assistant
Stars: ✭ 83 (-20.19%)
Mutual labels:  speech-recognition, speech-to-text

wav2Letter.pytorch

Implementation of Wav2Letter using Baidu Warp-CTC. Creates a network based on the Wav2Letter architecture, trained with the CTC loss function.
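
A minimal sketch of the kind of fully convolutional stack this architecture describes is shown below; the layer count, channel sizes, kernel sizes, and number of output classes are illustrative assumptions, not this repository's exact configuration.

import torch
import torch.nn as nn

class Wav2LetterSketch(nn.Module):
    # Illustrative only: a stack of 1D convolutions ending in a conv layer
    # that emits one channel per output label, suitable for a CTC loss.
    def __init__(self, num_features=161, num_classes=29):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(num_features, 250, kernel_size=48, stride=2, padding=23), nn.ReLU(),
            nn.Conv1d(250, 250, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(250, 2000, kernel_size=32, padding=16), nn.ReLU(),
            nn.Conv1d(2000, 2000, kernel_size=1), nn.ReLU(),
            nn.Conv1d(2000, num_classes, kernel_size=1),
        )

    def forward(self, x):      # x: (batch, num_features, time)
        return self.layers(x)  # (batch, num_classes, time'), consumed by the CTC loss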

Currently tested on PyTorch 1.3.1 with CUDA 10.1 and Python 3.7.

Branch selfAttentionExps: contains code with a self-attention layer between the starting layers and the final layer, which improves training time.

Branch trainableFrontEnd: contains work-in-progress code for training the model using raw audio samples only.

Branch python27: contains the same code as master, but for Python 2.7 and PyTorch 0.4.1.

The current checkpoint can be downloaded from: https://drive.google.com/file/d/1HH_4TkPUrfcfRSUp2wqgKUu72bfJ8y8t/view?usp=sharing

NOTE: The model achieves around 37 WER with the greedy decoder; performance can be improved by using a beam decoder and a language model.

Features

  • Train Wav2Letter.
  • Language model support using kenlm.
  • Noise injection for online training to improve noise robustness.
  • Audio augmentation to improve noise robustness.
  • Easy start/stop capabilities in the event of a crash or hard stop during training.
  • Visdom/Tensorboard support for visualizing training graphs.
  • Train the model directly on the raw waveform, removing the dependency on spectrogram creation. (The old code has been moved to the 'speechRecognitionSpectogram' branch.)

Installation

Several libraries need to be installed for training to work. The instructions below assume an Anaconda installation on Ubuntu.

Install PyTorch if you haven't already.

Install this fork for Warp-CTC bindings:

git clone https://github.com/SeanNaren/warp-ctc.git
cd warp-ctc
mkdir build; cd build
cmake ..
make
export CUDA_HOME="/usr/local/cuda"
cd ../pytorch_binding
python setup.py install

Install pytorch audio:

sudo apt-get install sox libsox-dev libsox-fmt-all
git clone https://github.com/pytorch/audio.git
cd audio
pip install cffi
python setup.py install

If you want decoding to support beam search with an optional language model, install ctcdecode:

git clone --recursive https://github.com/parlance/ctcdecode.git
cd ctcdecode
pip install .

Finally clone this repo and run this within the repo:

pip install -r requirements.txt

Usage

Custom Dataset

To create a custom dataset, you must create a CSV file containing the locations of the training data, in the following format:

/path/to/audio.wav,transcription
/path/to/audio2.wav,transcription
...

The first field is the path to the audio file, and the second is the transcript text on a single line. This manifest can then be used as described below.
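
If you have a folder of .wav files with matching .txt transcripts, a manifest like this can be generated with a short script such as the sketch below (the directory layout and file naming are assumptions for illustration, not a convention required by this repo):

import csv
import glob
import os

# Assumed layout: data/audio/utt001.wav with a matching data/txt/utt001.txt transcript.
wav_dir, txt_dir = 'data/audio', 'data/txt'
with open('data/train_manifest.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for wav_path in sorted(glob.glob(os.path.join(wav_dir, '*.wav'))):
        name = os.path.splitext(os.path.basename(wav_path))[0]
        with open(os.path.join(txt_dir, name + '.txt')) as t:
            transcript = t.read().strip()
        writer.writerow([os.path.abspath(wav_path), transcript])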

Training

python train.py --train-manifest data/train_manifest.csv --val-manifest data/val_manifest.csv

Use python train.py --help for more parameters and options.

There is also Visdom support for visualizing training. Once a Visdom server has been started, use:

python train.py --visdom

There is also Tensorboard support for visualizing training. Follow the Tensorboard setup instructions, then use:

python train.py --tensorboard --logdir log_dir/ # Make sure the Tensorboard instance points to this log directory

Multi-GPU support

python -m multiproc train.py --visdom --cuda # Add your parameters as normal, multiproc will scale to all GPUs automatically

For both visualisation tools, you can add your own name to the run by changing the --id parameter when training.

Testing

For testing, write all the file paths into a CSV (in the same manifest format as for training) and run:

python test.py

PS: For a speed improvement, try running test.py with the '--fuse-layers' flag. This option fuses all conv-bn operations and increases model inference speed.
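
Conv-bn fusion works by folding a BatchNorm layer's running statistics and affine parameters into the weights and bias of the preceding convolution, so inference only needs a single convolution. A minimal sketch of the idea for 1D convolutions follows (this is the general technique, not necessarily the exact implementation behind '--fuse-layers'):

import torch

def fuse_conv_bn(conv, bn):
    # Fold a BatchNorm1d into the preceding Conv1d (valid for inference/eval mode only).
    fused = torch.nn.Conv1d(conv.in_channels, conv.out_channels, conv.kernel_size,
                            stride=conv.stride, padding=conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused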

Noise Augmentation/Injection

There is support for two different types of noise handling: noise augmentation and noise injection.

Noise Augmentation

Applies small changes to the tempo and gain when loading audio to increase robustness. To enable it, pass the --augment flag when training.
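
Conceptually this is similar to applying small random sox tempo and gain perturbations to each utterance, for example (illustrative values only):

sox input.wav augmented.wav tempo 1.05 gain -2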

Noise Injection

Dynamically adds noise into the training data to increase robustness. To use, first fill a directory up with all the noise files you want to sample from. The dataloader will randomly pick samples from this directory.

To enable noise injection, use --noise-dir /path/to/noise/dir/ to specify where your noise files are. There are a few noise parameters to tweak, such as --noise_prob, which determines the probability that noise is added, and the --noise-min and --noise-max parameters, which determine the minimum and maximum amount of noise added during training.

Included is a script to inject noise into an audio file to hear what different noise levels/files would sound like. Useful for curating the noise dataset.

python noise_inject.py --input-path /path/to/input.wav --noise-path /path/to/noise.wav --output-path /path/to/input_injected.wav --noise-level 0.5 # a higher level means more noise
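
Under the hood, noise injection amounts to mixing a randomly chosen segment of a noise file into the utterance at a given level. A rough sketch of that mixing is below (not the repository's exact code; it assumes mono audio, matching sample rates, and uses the soundfile library purely for illustration):

import numpy as np
import soundfile as sf

speech, sr = sf.read('input.wav')
noise, _ = sf.read('noise.wav')
noise_level = 0.5                          # plays the same role as --noise-level above

start = np.random.randint(0, max(1, len(noise) - len(speech)))
segment = noise[start:start + len(speech)]
segment = np.pad(segment, (0, len(speech) - len(segment)))  # pad if the noise clip is short
mixed = speech + noise_level * segment
sf.write('input_injected.wav', mixed, sr)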

Checkpoints

Training supports saving checkpoints of the model so that training can be resumed should an error or early termination occur. To enable epoch checkpoints, use:

python train.py --checkpoint

To enable checkpoints every N batches through the epoch as well as epoch saving:

python train.py --checkpoint --checkpoint-per-batch N # N is the number of batches between checkpoint saves within an epoch.

Note that for the batch checkpointing system to work, you cannot change the batch size when loading a checkpointed model from its original training run.

To continue from a checkpointed model that has been saved:

python train.py --continue-from models/wav2Letter_checkpoint_epoch_N_iter_N.pth.tar

This continues from the same training state and, if enabled, recreates the Visdom graph to continue from.

If you would like to start from a previously checkpointed model but not continue its training state, add the --finetune flag to restart training from the --continue-from weights.

Choosing batch sizes

Included is a script that can be used to benchmark whether training can run on your hardware, and the limits on model size and batch size you can use. To use it:

python benchmark.py --batch-size 32

Use the flag --help to see other parameters that can be used with the script.

Model details

Saved models contain the metadata of their training process. To see the metadata, run the command below:

python model.py --model-path models/wav2Letter.pth.tar

Also note that there is no final softmax layer on the model, since warp-ctc applies the softmax internally during training. Anything built on top of the model, such as more complex decoders, will have to apply this softmax itself, so take this into consideration!
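
In practice this means any decoder working on the raw network output should normalize it itself, e.g. with a log-softmax over the class dimension (a minimal illustration with dummy values):

import torch
import torch.nn.functional as F

# Raw model output shaped (batch, num_classes, time); no softmax has been applied.
logits = torch.randn(1, 29, 200)
# Normalize over the class dimension before handing the output to a decoder.
log_probs = F.log_softmax(logits, dim=1)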

Testing/Inference

To evaluate a trained model on a test set (which has to be in the same format as the training set):

python test.py --model-path models/wav2Letter.pth --test-manifest /path/to/test_manifest.csv --cuda

Alternate Decoders

By default, test.py uses a GreedyDecoder, which picks the highest-likelihood output label at each timestep. Repeated labels and blank symbols are then filtered out to give the final output.
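
The greedy (best-path) rule can be sketched as follows, assuming for illustration that index 0 is the CTC blank and using a made-up label set with random scores:

import torch

labels = "_'abcdefghijklmnopqrstuvwxyz "         # assumption: index 0 ('_') is the CTC blank
log_probs = torch.randn(200, len(labels))        # (time, num_classes) scores for one utterance

best_path = torch.argmax(log_probs, dim=1).tolist()
decoded, prev = [], None
for idx in best_path:
    if idx != prev and idx != 0:                 # collapse repeats, then drop blanks
        decoded.append(labels[idx])
    prev = idx
print(''.join(decoded))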

A beam search decoder can optionally be used by installing the ctcdecode library as described in the Installation section. The test and transcribe scripts have a --decoder argument. To use the beam decoder, add --decoder beam. The beam decoder enables additional decoding parameters, listed below (see the example command after the list):

  • beam_width: how many beams to consider at each timestep
  • lm_path: optional binary KenLM language model to use for decoding
  • alpha: weight for the language model
  • beta: bonus weight for words
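
A beam-decoder run might then look like the following (the exact flag spellings are assumptions here; check python test.py --help for the real names):

python test.py --model-path models/wav2Letter.pth --test-manifest /path/to/test_manifest.csv --cuda --decoder beam --beam-width 10 --lm-path /path/to/lm.binary --alpha 0.8 --beta 1.0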

Time offsets

Use the --offsets flag to get positional information for each character in the transcription when using the transcribe.py script. The offsets are based on the size of the output tensor, and you need to convert them into whatever format you require. For example, with the default parameters you could multiply the offsets by a scalar (duration of the file in seconds / size of the output) to get the offsets in seconds.
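
As a worked example of that conversion (all values are made up for illustration):

# Assumed example values, not output from the repository's scripts.
duration_seconds = 4.0        # length of the audio file
output_size = 200             # number of timesteps in the model output
offsets = [12, 37, 38, 55]    # character offsets reported with --offsets

scale = duration_seconds / output_size               # seconds per output timestep
offsets_in_seconds = [round(o * scale, 3) for o in offsets]
print(offsets_in_seconds)                            # [0.24, 0.74, 0.76, 1.1]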

Acknowledgements

This work is inspired by Sean Naren's deepspeech.pytorch repository. It was done as part of Silversparro's project work on speech to text.
