
parlance / ctcdecode

License: MIT
PyTorch CTC Decoder bindings

Projects that are alternatives to or similar to ctcdecode

Tensorflow end2end speech recognition
End-to-end speech recognition implementation based on TensorFlow (CTC, Attention, and MTL training)
Stars: ✭ 305 (-31%)
Mutual labels:  beam-search, ctc
Ctc pytorch
CTC end-to-end ASR for the TIMIT and 863 corpora.
Stars: ✭ 161 (-63.57%)
Mutual labels:  ctc, decoder
Ctcwordbeamsearch
Connectionist Temporal Classification (CTC) decoder with dictionary and language model for TensorFlow.
Stars: ✭ 398 (-9.95%)
Mutual labels:  ctc, decoder
Ctcdecoder
Connectionist Temporal Classification (CTC) decoding algorithms: best path, prefix search, beam search and token passing. Implemented in Python.
Stars: ✭ 529 (+19.68%)
Mutual labels:  beam-search, ctc
Pytorch Asr
ASR with PyTorch
Stars: ✭ 124 (-71.95%)
Mutual labels:  ctc, decoder
Jlm
A fast LSTM language model for large-vocabulary languages like Japanese and Chinese
Stars: ✭ 105 (-76.24%)
Mutual labels:  beam-search, decoder
Seq2seq chatbot
A TensorFlow implementation of a simple seq2seq-based dialogue system, with embedding, attention, beam search, and other features; the dataset is Cornell Movie Dialogs.
Stars: ✭ 308 (-30.32%)
Mutual labels:  beam-search
Tf chatbot seq2seq antilm
Seq2seq chatbot with attention and an anti-language model to suppress generic responses, with an option for further improvement via deep reinforcement learning.
Stars: ✭ 369 (-16.52%)
Mutual labels:  beam-search
Image Captioning
Image Captioning using InceptionV3 and beam search
Stars: ✭ 290 (-34.39%)
Mutual labels:  beam-search
Jpegsnoop
JPEGsnoop: JPEG decoder and detailed analysis
Stars: ✭ 282 (-36.2%)
Mutual labels:  decoder
Neural sp
End-to-end ASR/LM implementation with PyTorch
Stars: ✭ 408 (-7.69%)
Mutual labels:  ctc
Json Rust
JSON implementation in Rust
Stars: ✭ 395 (-10.63%)
Mutual labels:  decoder
Cnn lstm ctc tensorflow
CNN+LSTM+CTC-based OCR implemented using TensorFlow.
Stars: ✭ 343 (-22.4%)
Mutual labels:  ctc
Ultrajson
Ultra fast JSON decoder and encoder written in C with Python bindings
Stars: ✭ 3,504 (+692.76%)
Mutual labels:  decoder
Flif
Free Lossless Image Format
Stars: ✭ 3,668 (+729.86%)
Mutual labels:  decoder
Crnn
A TensorFlow implementation of https://github.com/bgshih/crnn
Stars: ✭ 287 (-35.07%)
Mutual labels:  ctc
Im2latex
Image to LaTeX (Seq2seq + Attention with Beam Search) - Tensorflow
Stars: ✭ 342 (-22.62%)
Mutual labels:  beam-search
Ffmpegcommand
FFmpegCommand is an FFmpeg command library for Android that implements fast audio and video processing. Features include: audio/video cutting, transcoding, decoding to raw data, encoding, converting video to images or GIFs, adding watermarks, multi-view stitching, audio mixing, video brightness and contrast adjustment, audio fade-in and fade-out effects, and more.
Stars: ✭ 394 (-10.86%)
Mutual labels:  decoder
Pytorch Chatbot
Pytorch seq2seq chatbot
Stars: ✭ 336 (-23.98%)
Mutual labels:  beam-search
Megreader
A research project for text detection and recognition using PyTorch 1.2.
Stars: ✭ 332 (-24.89%)
Mutual labels:  ctc

ctcdecode

ctcdecode is an implementation of CTC (Connectionist Temporal Classification) beam search decoding for PyTorch. C++ code is borrowed liberally from PaddlePaddle's DeepSpeech. It includes swappable scorer support, enabling both standard beam search and KenLM-based decoding. If you are new to the concepts of CTC and beam search, please visit the Resources section, where we link a few tutorials explaining why they are needed.

Installation

The library is largely self-contained and requires only PyTorch. Building the C++ library requires gcc or clang. KenLM language modeling support is optional and enabled by default.

The installation below also works on Google Colab.

# get the code
git clone --recursive https://github.com/parlance/ctcdecode.git
cd ctcdecode && pip install .
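
If the build succeeded, importing the decoder should work. A minimal sanity check, assuming the steps above completed without errors:

# Minimal sanity check: this import fails if the C++ extension didn't build.
from ctcdecode import CTCBeamDecoder
print("ctcdecode imported successfully")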

How to Use

from ctcdecode import CTCBeamDecoder

decoder = CTCBeamDecoder(
    labels,
    model_path=None,
    alpha=0,
    beta=0,
    cutoff_top_n=40,
    cutoff_prob=1.0,
    beam_width=100,
    num_processes=4,
    blank_id=0,
    log_probs_input=False
)
beam_results, beam_scores, timesteps, out_lens = decoder.decode(output)
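
For a first run, a self-contained sketch with dummy data can be useful. The vocabulary and shapes here are illustrative assumptions, not part of the library:

import torch
from ctcdecode import CTCBeamDecoder

# Hypothetical 5-token vocabulary; '_' is the CTC blank at index 0.
labels = list("_abcd")
decoder = CTCBeamDecoder(labels, beam_width=25, blank_id=0)

# Dummy softmax output: batch of 2, 50 timesteps, len(labels) labels.
output = torch.randn(2, 50, len(labels)).softmax(dim=-1)
beam_results, beam_scores, timesteps, out_lens = decoder.decode(output)
print(beam_results.shape, beam_scores.shape, timesteps.shape, out_lens.shape)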

Inputs to CTCBeamDecoder

  • labels are the tokens you used to train your model. They should be in the same order as your outputs. For example, if your tokens are the English letters and you used 0 as your blank token, then you would pass in list("_abcdefghijklmnopqrstuvwxyz") as your argument to labels.
  • model_path is the path to your external KenLM language model (LM). Default is None.
  • alpha Weight associated with the LM's probabilities. A weight of 0 means the LM has no effect.
  • beta Weight associated with the number of words within our beam.
  • cutoff_top_n Cutoff number in pruning. Only the top cutoff_top_n characters with the highest probability in the vocab will be used in beam search.
  • cutoff_prob Cutoff probability in pruning. 1.0 means no pruning.
  • beam_width This controls how broad the beam search is. Higher values are more likely to find top beams, but they also make your beam search exponentially slower. Furthermore, the longer your outputs, the more time large beams will take. This is an important parameter that represents a tradeoff you need to make based on your dataset and needs.
  • num_processes Parallelize the batch using num_processes workers. You probably want to pass the number of CPUs your computer has; you can find this in Python with import multiprocessing, then n_cpus = multiprocessing.cpu_count(). Default 4.
  • blank_id This should be the index of the CTC blank token (probably 0).
  • log_probs_input If your outputs have passed through a softmax and represent probabilities, this should be False; if they passed through a LogSoftmax and represent log probabilities (negative numbers), you need to pass True. If you aren't sure, run print(output[0][0].sum()): if it's a negative number you've probably got log probabilities and need to pass True; if it sums to ~1.0 you should pass False. A sketch of this check follows the list. Default False.
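
As a minimal sketch of that check, assuming output is your model's BATCHSIZE x N_TIMESTEPS x N_LABELS tensor (the variable name follows the usage example above):

# Probabilities sum to ~1.0 across the label axis; log probabilities
# sum to a negative number. `output` is assumed to exist as above.
row_sum = output[0][0].sum().item()
log_probs_input = row_sum < 0  # heuristic from the note above
print(f"row sum = {row_sum:.4f} -> log_probs_input={log_probs_input}")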

Inputs to the decode method

  • output should be the output activations from your model. If your output has passed through a SoftMax layer, you shouldn't need to alter it (except maybe to transpose), but if your output represents log probabilities (e.g., it passed through a LogSoftmax), you either need to pass it through an additional torch.nn.functional.softmax or pass log_probs_input=True to the decoder. Your output should be BATCHSIZE x N_TIMESTEPS x N_LABELS, so you may need to transpose it before passing it to the decoder; a sketch follows below. Note that if you pass things in the wrong order, the beam search will probably still run; you'll just get back nonsense results.
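
A minimal sketch of that reshaping, assuming a model that emits (N_TIMESTEPS, BATCHSIZE, N_LABELS) logits, as many PyTorch ASR models do; model and features are assumptions, not part of ctcdecode:

import torch

# Hypothetical raw model output with shape (N_TIMESTEPS, BATCHSIZE, N_LABELS).
logits = model(features)
probs = torch.nn.functional.softmax(logits, dim=-1)  # normalize over labels
output = probs.transpose(0, 1)  # -> (BATCHSIZE, N_TIMESTEPS, N_LABELS)

beam_results, beam_scores, timesteps, out_lens = decoder.decode(output)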

Outputs from the decode method

Four things are returned from decode:

  1. beam_results - Shape: BATCHSIZE x N_BEAMS x N_TIMESTEPS. A batch containing the series of characters (these are ints; you still need to decode them back to your text) representing results from a given beam search. Note that the beams are almost always shorter than the total number of timesteps, and the additional data is nonsensical, so to see the top beam (as int labels) from the first item in the batch, you need to run beam_results[0][0][:out_lens[0][0]].
  2. beam_scores - Shape: BATCHSIZE x N_BEAMS. A batch with the approximate CTC score of each beam (look at the code for more info). Since the score is a negative log probability, you can get the model's confidence that the beam is correct with p=1/np.exp(beam_score); see the snippet after this list.
  3. timesteps - Shape: BATCHSIZE x N_BEAMS. The timestep at which the nth output character has peak probability. Can be used as alignment between the audio and the transcript.
  4. out_lens - Shape: BATCHSIZE x N_BEAMS. out_lens[i][j] is the length of the jth beam_result, of item i of your batch.
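
A short sketch of the confidence computation for the top beam of the first batch item, assuming beam_scores came from the decode call above:

import numpy as np

# beam_scores holds approximate negative log probabilities, so the
# model's confidence in the top beam is 1/exp(score) = exp(-score).
top_score = beam_scores[0][0].item()
confidence = 1 / np.exp(top_score)
print(f"top beam confidence: {confidence:.4f}")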

More examples

Get the top beam for the first item in your batch: beam_results[0][0][:out_lens[0][0]]

Get the top 50 beams for the first item in your batch

for i in range(50):
    print(beam_results[0][i][:out_lens[0][i]])

Note, these will be a list of ints that need decoding. You likely already have a function to decode from int to text, but if not you can do something like "".join(labels[n] for n in beam_results[0][0][:out_lens[0][0]]) using the labels you passed in to CTCBeamDecoder.
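
As a hedged sketch, a small helper that wraps this up; the name beam_to_text is hypothetical and not part of the library:

def beam_to_text(beam_results, out_lens, labels, batch=0, beam=0):
    # Hypothetical helper, not part of ctcdecode: convert one beam's int
    # labels back to a string using the labels passed to CTCBeamDecoder.
    length = int(out_lens[batch][beam])
    return "".join(labels[n] for n in beam_results[batch][beam][:length])

# For example, the top transcript for the first item in the batch:
# print(beam_to_text(beam_results, out_lens, labels))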

Resources

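One widely cited introduction to CTC and why its outputs need beam search decoding is Distill's "Sequence Modeling with CTC": https://distill.pub/2017/ctc/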