zceng / LVCNet
License: Apache-2.0
LVCNet: Efficient Condition-Dependent Modeling Network for Waveform Generation

Programming Languages: Python, Jupyter Notebook, Shell

Projects that are alternatives of or similar to LVCNet

Tensorflowtts
😝 TensorFlowTTS: Real-Time State-of-the-art Speech Synthesis for Tensorflow 2 (supported including English, French, Korean, Chinese, German and Easy to adapt for other languages)
Stars: ✭ 2,382 (+3455.22%)
Mutual labels:  text-to-speech, tts, speech-synthesis, vocoder, parallel-wavegan
Fre-GAN-pytorch
Fre-GAN: Adversarial Frequency-consistent Audio Synthesis
Stars: ✭ 73 (+8.96%)
Mutual labels:  text-to-speech, tts, speech-synthesis, vocoder
WaveGrad2
PyTorch Implementation of Google Brain's WaveGrad 2: Iterative Refinement for Text-to-Speech Synthesis
Stars: ✭ 55 (-17.91%)
Mutual labels:  text-to-speech, tts, speech-synthesis
react-native-spokestack
Spokestack: give your React Native app a voice interface!
Stars: ✭ 53 (-20.9%)
Mutual labels:  text-to-speech, tts, speech-synthesis
Parallel-Tacotron2
PyTorch Implementation of Google's Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling
Stars: ✭ 149 (+122.39%)
Mutual labels:  text-to-speech, tts, speech-synthesis
Wavegrad
Implementation of Google Brain's WaveGrad high-fidelity vocoder (paper: https://arxiv.org/pdf/2009.00713.pdf). First implementation on GitHub.
Stars: ✭ 245 (+265.67%)
Mutual labels:  text-to-speech, tts, speech-synthesis
IMS-Toucan
Text-to-Speech Toolkit of the Speech and Language Technologies Group at the University of Stuttgart. Objectives of the development are simplicity, modularity, controllability and multilinguality.
Stars: ✭ 295 (+340.3%)
Mutual labels:  text-to-speech, tts, speech-synthesis
Expressive-FastSpeech2
PyTorch Implementation of Non-autoregressive Expressive (emotional, conversational) TTS based on FastSpeech2, supporting English, Korean, and your own languages.
Stars: ✭ 139 (+107.46%)
Mutual labels:  text-to-speech, tts, speech-synthesis
Pytorch Dc Tts
Text to Speech with PyTorch (English and Mongolian)
Stars: ✭ 122 (+82.09%)
Mutual labels:  text-to-speech, tts, speech-synthesis
Zero-Shot-TTS
Unofficial Implementation of Zero-Shot Text-to-Speech for Text-Based Insertion in Audio Narration
Stars: ✭ 33 (-50.75%)
Mutual labels:  text-to-speech, tts, speech-synthesis
TensorVox
Desktop application for neural speech synthesis written in C++
Stars: ✭ 140 (+108.96%)
Mutual labels:  text-to-speech, tts, speech-synthesis
Cross-Speaker-Emotion-Transfer
PyTorch Implementation of ByteDance's Cross-speaker Emotion Transfer Based on Speaker Condition Layer Normalization and Semi-Supervised Training in Text-To-Speech
Stars: ✭ 107 (+59.7%)
Mutual labels:  text-to-speech, tts, speech-synthesis
Daft-Exprt
PyTorch Implementation of Daft-Exprt: Robust Prosody Transfer Across Speakers for Expressive Speech Synthesis
Stars: ✭ 41 (-38.81%)
Mutual labels:  text-to-speech, tts, speech-synthesis
melgan
MelGAN implementation with Multi-Band and Full Band supports...
Stars: ✭ 54 (-19.4%)
Mutual labels:  text-to-speech, speech-synthesis, vocoder
Marytts
MARY TTS -- an open-source, multilingual text-to-speech synthesis system written in pure java
Stars: ✭ 1,699 (+2435.82%)
Mutual labels:  text-to-speech, tts, speech-synthesis
vits
VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
Stars: ✭ 1,604 (+2294.03%)
Mutual labels:  text-to-speech, tts, speech-synthesis
AdaSpeech
AdaSpeech: Adaptive Text to Speech for Custom Voice
Stars: ✭ 108 (+61.19%)
Mutual labels:  text-to-speech, tts, speech-synthesis
Crystal
Crystal - C++ implementation of a unified framework for multilingual TTS synthesis engine with SSML specification as interface.
Stars: ✭ 108 (+61.19%)
Mutual labels:  text-to-speech, tts, speech-synthesis
Durian
Implementation of "Duration Informed Attention Network for Multimodal Synthesis" (https://arxiv.org/pdf/1909.01700.pdf) paper.
Stars: ✭ 111 (+65.67%)
Mutual labels:  text-to-speech, tts, speech-synthesis
StyleSpeech
Official implementation of Meta-StyleSpeech and StyleSpeech
Stars: ✭ 161 (+140.3%)
Mutual labels:  text-to-speech, tts, speech-synthesis

LVCNet: Efficient Condition-Dependent Modeling Network for Waveform Generation

Using LVCNet to design the generator of Parallel WaveGAN, and training it with the same strategy, the inference of the new vocoder is more than 5x faster than the original vocoder, without any degradation in audio quality.

Our current work [Paper] has been accepted by ICASSP 2021; our previous work is described in MelGlow.
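The core idea behind LVCNet is the location-variable convolution: instead of a single shared kernel, the conditioning features (the mel-spectrogram) predict a separate kernel for each short interval of the waveform. A minimal NumPy sketch of that idea follows; the function name, shapes, and looped implementation are illustrative only, not the repo's API (the real model runs batched on GPU).

```python
import numpy as np

def location_variable_conv(x, kernels, hop):
    """Apply a different 1-D kernel to each interval of x.

    x       : (T,) waveform-like signal
    kernels : (T // hop, K) one kernel per interval; in LVCNet these
              are predicted from the local conditioning features
    hop     : number of samples covered by each kernel
    """
    K = kernels.shape[1]
    pad = K // 2
    xp = np.pad(x, (pad, pad))
    y = np.empty_like(x)
    for i, k in enumerate(kernels):            # one kernel per interval
        for t in range(i * hop, (i + 1) * hop):
            y[t] = xp[t:t + K] @ k             # correlation centered at t
    return y
```

When every interval receives the same kernel, this reduces to an ordinary (location-invariant) convolution; the efficiency gain in the paper comes from sharing this structure across the generator instead of stacking large dilated-convolution blocks.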

Training and Test

  1. Prepare the data: download the LJSpeech dataset from https://keithito.com/LJ-Speech-Dataset/ and save it in data/LJSpeech-1.1. Then run

    python -m vocoder.preprocess --data-dir ./data/LJSpeech-1.1 --config configs/lvcgan.v1.yaml

    The mel-spectrograms are computed and saved in the folder temp/.

  2. Train LVCNet

    python -m vocoder.train --config configs/lvcgan.v1.yaml --exp-dir exps/exp.lvcgan.v1
  3. Test LVCNet

    python -m vocoder.test --config configs/lvcgan.v1.yaml --exp-dir exps/exp.lvcgan.v1
  4. The experimental results, including training logs, model checkpoints, and synthesized audio, are stored in the folder exps/exp.lvcgan.v1/.
    Similarly, you can use the config file configs/pwg.v1.yaml to train a Parallel WaveGAN model.

    # training
    python -m vocoder.train --config configs/pwg.v1.yaml --exp-dir exps/exp.pwg.v1
    # test
    python -m vocoder.test --config configs/pwg.v1.yaml --exp-dir exps/exp.pwg.v1
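The preprocessing step above extracts mel-spectrograms from the LJSpeech wavs. As a rough illustration of what such a preprocessor computes — the actual parameters live in configs/lvcgan.v1.yaml; the values below (22050 Hz, 1024-point FFT, hop 256, 80 mel bands) are common LJSpeech vocoder settings assumed here, not read from the repo:

```python
import numpy as np

def mel_filterbank(sr, n_fft, n_mels, fmin=0.0, fmax=None):
    """Triangular mel filterbank (HTK-style mel scale)."""
    fmax = fmax or sr / 2
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv(np.linspace(mel(fmin), mel(fmax), n_mels + 2))   # band edges in Hz
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)        # edges as FFT bins
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)     # rising slope
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)     # falling slope
    return fb

def log_mel(x, sr=22050, n_fft=1024, hop=256, n_mels=80):
    """Frame the signal, take the magnitude FFT, project onto mel bands."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))      # (n_frames, n_fft // 2 + 1)
    return np.log(mel_filterbank(sr, n_fft, n_mels) @ mag.T + 1e-10)
```

The resulting (n_mels, n_frames) array is the conditioning input the vocoder upsamples back to waveform resolution.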

Results

Tensorboard

Use TensorBoard to view the training process:

tensorboard --logdir exps

Training Loss

[training loss curves]

Evaluation Loss

[evaluation loss curves]

Audio Samples

Audio samples are saved in samples/, where

  • samples/*_lvc.wav are generated by LVCNet,
  • samples/*_pwg.wav are generated by Parallel WaveGAN,
  • samples/*_real.wav are the real audio.

Reference

  • LVCNet: Efficient Condition-Dependent Modeling Network for Waveform Generation, https://arxiv.org/abs/2102.10815
  • MelGlow: Efficient Waveform Generative Network Based on Location-Variable Convolution, https://arxiv.org/abs/2012.01684
  • Parallel WaveGAN: https://github.com/kan-bayashi/ParallelWaveGAN
  • DiffWave: https://github.com/lmnt-com/diffwave
