Tacotron

An implementation of Tacotron speech synthesis in TensorFlow.

Audio Samples

  • Audio Samples from models trained using this repo.
    • The first set was trained for 441K steps on the LJ Speech Dataset.
      • Speech started to become intelligible around 20K steps.
    • The second set was trained by @MXGray for 140K steps on the Nancy Corpus.

Recent Updates

  1. @npuichigo fixed a bug where dropout was not being applied in the prenet (see the prenet sketch after this list).

  2. @begeekmyfriend created a fork that adds location-sensitive attention and the stop token from the Tacotron 2 paper. This can greatly reduce the amount of data required to train a model.
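
For context on the first fix: the Tacotron prenet is a stack of fully connected layers with dropout that needs to actually be active during training. Here is a minimal TF 1.x sketch, with layer sizes taken from the Tacotron paper; it is an illustration, not this repo's exact code:

    import tensorflow as tf

    def prenet(inputs, is_training, layer_sizes=(256, 128)):
        """Dense layers with ReLU, each followed by dropout.

        Dropout is only applied when is_training is True; the fix
        referenced above ensured it was applied during training.
        """
        x = inputs
        for size in layer_sizes:
            x = tf.layers.dense(x, units=size, activation=tf.nn.relu)
            x = tf.layers.dropout(x, rate=0.5, training=is_training)
        return x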

Background

In April 2017, Google published a paper, Tacotron: Towards End-to-End Speech Synthesis, where they present a neural text-to-speech model that learns to synthesize speech directly from (text, audio) pairs. However, they didn't release their source code or training data. This is an independent attempt to provide an open-source implementation of the model described in their paper.

The quality isn't as good as Google's demo yet, but hopefully it will get there someday :-). Pull requests are welcome!

Quick Start

Installing dependencies

  1. Install Python 3.

  2. Install the latest version of TensorFlow for your platform. For better performance, install with GPU support if it's available. This code works with TensorFlow 1.3 and later. (A quick way to verify the install is shown after this list.)

  3. Install requirements:

    pip install -r requirements.txt
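
To verify the install, you can check the TensorFlow version and GPU visibility from Python (this assumes TF 1.x; tf.test.is_gpu_available was removed in TF 2):

    import tensorflow as tf

    print(tf.__version__)              # should print 1.3 or later
    print(tf.test.is_gpu_available())  # True if TensorFlow can see a GPU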
    

Using a pre-trained model

  1. Download and unpack a model:

    curl https://data.keithito.com/data/speech/tacotron-20180906.tar.gz | tar xzC /tmp
    
  2. Run the demo server:

    python3 demo_server.py --checkpoint /tmp/tacotron-20180906/model.ckpt
    
  3. Point your browser at localhost:9000

    • Type what you want to synthesize
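
The demo page calls the server's synthesis endpoint under the hood, so you can also fetch audio directly. A minimal sketch, assuming a GET /synthesize endpoint that takes a text parameter and returns WAV bytes (check demo_server.py for the actual route):

    from urllib.parse import quote
    from urllib.request import urlopen

    # Hypothetical direct request to the running demo server.
    text = 'Hello, world.'
    audio = urlopen('http://localhost:9000/synthesize?text=' + quote(text)).read()
    with open('output.wav', 'wb') as f:
        f.write(audio)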

Training

Note: you need at least 40GB of free disk space to train a model.

  1. Download a speech dataset.

    The following are supported out of the box:

      • LJ Speech
      • Blizzard 2012

    You can use other datasets if you convert them to the right format. See TRAINING_DATA.md for more info.

  2. Unpack the dataset into ~/tacotron

    After unpacking, your tree should look like this for LJ Speech:

    tacotron
      |- LJSpeech-1.1
          |- metadata.csv
          |- wavs
    

    or like this for Blizzard 2012:

    tacotron
      |- Blizzard2012
          |- ATrampAbroad
          |   |- sentence_index.txt
          |   |- lab
          |   |- wav
          |- TheManThatCorruptedHadleyburg
              |- sentence_index.txt
              |- lab
              |- wav
    
  3. Preprocess the data

    python3 preprocess.py --dataset ljspeech
    
    • Use --dataset blizzard for Blizzard data
  4. Train a model

    python3 train.py
    

    Tunable hyperparameters are found in hparams.py. You can adjust these at the command line using the --hparams flag, for example --hparams="batch_size=16,outputs_per_step=2". Hyperparameters should generally be set to the same values at both training and eval time. The default hyperparameters are recommended for LJ Speech and other English-language data. See TRAINING_DATA.md for other languages.
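
    Overrides use the same name=value syntax as TF 1.x's HParams.parse. A minimal sketch, assuming hparams.py builds a tf.contrib.training.HParams object (the default values below are illustrative; see hparams.py for the real ones):

    from tensorflow.contrib.training import HParams

    # Illustrative defaults in the style of hparams.py.
    hparams = HParams(batch_size=32, outputs_per_step=5, max_iters=200)
    hparams.parse('batch_size=16,outputs_per_step=2')  # same syntax as --hparams
    print(hparams.batch_size, hparams.outputs_per_step)  # 16 2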

  5. Monitor with Tensorboard (optional)

    tensorboard --logdir ~/tacotron/logs-tacotron
    

    The trainer dumps audio and alignments every 1000 steps. You can find these in ~/tacotron/logs-tacotron.

  6. Synthesize from a checkpoint

    python3 demo_server.py --checkpoint ~/tacotron/logs-tacotron/model.ckpt-185000
    

    Replace "185000" with the checkpoint number that you want to use, then open a browser to localhost:9000 and type what you want to speak. Alternatively, you can run eval.py at the command line:

    python3 eval.py --checkpoint ~/tacotron/logs-tacotron/model.ckpt-185000
    

    If you set the --hparams flag when training, set the same value here.

Notes and Common Issues

  • TCMalloc seems to improve training speed and avoids occasional slowdowns seen with the default allocator. You can enable it by installing it and setting LD_PRELOAD=/usr/lib/libtcmalloc.so. With TCMalloc, you can get around 1.1 sec/step on a GTX 1080Ti.

  • You can train with CMUDict by downloading the dictionary to ~/tacotron/training and then passing the flag --hparams="use_cmudict=True" to train.py. This will allow you to pass ARPAbet phonemes enclosed in curly braces at eval time to force a particular pronunciation, e.g. Turn left on {HH AW1 S S T AH0 N} Street.
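
    The curly-brace convention can be illustrated with the kind of regex a text frontend typically uses to separate plain text from ARPAbet spans (a sketch, not necessarily this repo's exact code):

    import re

    _curly_re = re.compile(r'(.*?)\{(.+?)\}(.*)')

    def split_arpabet(text):
        """Split text into ('text', str) and ('arpabet', [phonemes]) segments."""
        segments = []
        while text:
            m = _curly_re.match(text)
            if not m:
                segments.append(('text', text))
                break
            if m.group(1):
                segments.append(('text', m.group(1)))
            segments.append(('arpabet', m.group(2).split()))
            text = m.group(3)
        return segments

    print(split_arpabet('Turn left on {HH AW1 S S T AH0 N} Street.'))
    # [('text', 'Turn left on '), ('arpabet', ['HH', 'AW1', ...]), ('text', ' Street.')]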

  • If you pass a Slack incoming webhook URL as the --slack_url flag to train.py, it will send you progress updates every 1000 steps.
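
    Slack incoming webhooks accept a JSON payload with a text field, so the updates amount to something like the following (a sketch of the mechanism, not this repo's exact code):

    import json
    from urllib.request import Request, urlopen

    def post_to_slack(webhook_url, message):
        """Send a plain-text message to a Slack incoming webhook."""
        req = Request(webhook_url,
                      data=json.dumps({'text': message}).encode('utf-8'),
                      headers={'Content-Type': 'application/json'})
        urlopen(req)

    # post_to_slack('https://hooks.slack.com/services/...', 'Step 1000: loss=0.123')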

  • Occasionally, you may see a spike in loss and the model will forget how to attend (the alignments will no longer make sense). Although it will recover eventually, it may save time to restart at a checkpoint prior to the spike by passing the --restore_step=150000 flag to train.py (replacing 150000 with a step number prior to the spike). Update: a recent fix to gradient clipping by @candlewill may have fixed this.

  • During eval and training, audio length is limited to max_iters * outputs_per_step * frame_shift_ms milliseconds. With the defaults (max_iters=200, outputs_per_step=5, frame_shift_ms=12.5), this is 12.5 seconds.

    If your training examples are longer, you will see an error like this: Incompatible shapes: [32,1340,80] vs. [32,1000,80]

    To fix this, you can set a larger value of max_iters by passing --hparams="max_iters=300" to train.py (replace "300" with a value based on how long your audio is and the formula above).
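
    The formula is easy to check numerically (pure arithmetic, no repo code involved):

    import math

    def required_max_iters(audio_seconds, outputs_per_step=5, frame_shift_ms=12.5):
        # max audio length (ms) = max_iters * outputs_per_step * frame_shift_ms
        return math.ceil(audio_seconds * 1000 / (outputs_per_step * frame_shift_ms))

    print(required_max_iters(12.5))  # 200 -- the default covers 12.5 seconds
    print(required_max_iters(18.0))  # 288 -- so pass --hparams="max_iters=300"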

  • Here is the expected loss curve when training on LJ Speech with the default hyperparameters: [loss curve image in the repository]
