
r9y9 / Gantts

License: other
PyTorch implementation of GAN-based text-to-speech synthesis and voice conversion (VC)

Projects that are alternatives of or similar to Gantts

Tacotron pytorch
PyTorch implementation of Tacotron speech synthesis model.
Stars: ✭ 242 (-47.39%)
Mutual labels:  jupyter-notebook, speech-synthesis
Zhihu
This repo contains the source code from my personal column (https://zhuanlan.zhihu.com/zhaoyeyu), implemented in Python 3.6. It includes natural language processing and computer vision projects, such as text generation, machine translation, deep convolutional GANs, and other hands-on example code.
Stars: ✭ 3,307 (+618.91%)
Mutual labels:  jupyter-notebook, gan
Wavegrad
Implementation of Google Brain's WaveGrad high-fidelity vocoder (paper: https://arxiv.org/pdf/2009.00713.pdf). First implementation on GitHub.
Stars: ✭ 245 (-46.74%)
Mutual labels:  jupyter-notebook, speech-synthesis
Gan Tutorial
Simple Implementation of many GAN models with PyTorch.
Stars: ✭ 227 (-50.65%)
Mutual labels:  jupyter-notebook, gan
Sdv
Synthetic Data Generation for tabular, relational and time series data.
Stars: ✭ 360 (-21.74%)
Mutual labels:  jupyter-notebook, gan
Nemo
NeMo: a toolkit for conversational AI
Stars: ✭ 3,685 (+701.09%)
Mutual labels:  jupyter-notebook, speech-synthesis
Faceswap Gan
A denoising autoencoder + adversarial losses and attention mechanisms for face swapping.
Stars: ✭ 3,099 (+573.7%)
Mutual labels:  jupyter-notebook, gan
Gans From Theory To Production
Material for the tutorial: "Deep Diving into GANs: from theory to production"
Stars: ✭ 182 (-60.43%)
Mutual labels:  jupyter-notebook, gan
Advanced Tensorflow
Little More Advanced TensorFlow Implementations
Stars: ✭ 364 (-20.87%)
Mutual labels:  jupyter-notebook, gan
T81 558 deep learning
Washington University (in St. Louis) Course T81-558: Applications of Deep Neural Networks
Stars: ✭ 4,152 (+802.61%)
Mutual labels:  jupyter-notebook, gan
Gan steerability
On the "steerability" of generative adversarial networks
Stars: ✭ 225 (-51.09%)
Mutual labels:  jupyter-notebook, gan
Deep Learning Resources
Deep learning resources from beginner to advanced: a collection of deep learning materials for everyone
Stars: ✭ 422 (-8.26%)
Mutual labels:  jupyter-notebook, gan
Swapnet
Virtual Clothing Try-on with Deep Learning. PyTorch reproduction of SwapNet by Raj et al. 2018. Now with Docker support!
Stars: ✭ 202 (-56.09%)
Mutual labels:  jupyter-notebook, gan
Nn
🧑‍🏫 50! Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
Stars: ✭ 5,720 (+1143.48%)
Mutual labels:  jupyter-notebook, gan
Dragan
A stable algorithm for GAN training
Stars: ✭ 189 (-58.91%)
Mutual labels:  jupyter-notebook, gan
Pytorch Lesson Zh
A PyTorch tutorial (in Chinese): instruction guaranteed, mastery not included
Stars: ✭ 279 (-39.35%)
Mutual labels:  jupyter-notebook, gan
Catdcgan
A DCGAN that generates cat pictures 🐱‍💻
Stars: ✭ 177 (-61.52%)
Mutual labels:  jupyter-notebook, gan
Keraspp
Coding Chef's 3-Minute Deep Learning, Keras Flavor (a Korean Keras book)
Stars: ✭ 178 (-61.3%)
Mutual labels:  jupyter-notebook, gan
Hifi Gan
HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
Stars: ✭ 325 (-29.35%)
Mutual labels:  gan, speech-synthesis
Simgan Captcha
Solve captcha without manually labeling a training set
Stars: ✭ 405 (-11.96%)
Mutual labels:  jupyter-notebook, gan

GAN TTS

PyTorch implementation of Generative Adversarial Network (GAN)-based text-to-speech (TTS) synthesis and voice conversion (VC).

  1. Saito, Yuki, Shinnosuke Takamichi, and Hiroshi Saruwatari. "Statistical Parametric Speech Synthesis Incorporating Generative Adversarial Networks." IEEE/ACM Transactions on Audio, Speech, and Language Processing (2017).
  2. Shan Yang, Lei Xie, Xiao Chen, Xiaoyan Lou, Xuan Zhu, Dongyan Huang, and Haizhou Li. "Statistical Parametric Speech Synthesis Using Generative Adversarial Networks Under A Multi-task Learning Framework." arXiv:1707.01670, Jul 2017.

Generated audio samples

Audio samples are available in the Jupyter notebooks linked from the repository.

Notes on hyper parameters

  • adversarial_streams, which specifies the streams (mgc, lf0, vuv, bap) used to compute the adversarial loss, is a parameter to which speech quality is very sensitive. Computing the adversarial loss on mgc features (except for the first few dimensions) seems to work well.
  • If mask_nth_mgc_for_adv_loss > 0, the first mask_nth_mgc_for_adv_loss dimensions of mgc are ignored when computing the adversarial loss. As described in saito2017asja, I confirmed that using the 0th (and 1st) mgc coefficients for the adversarial loss degrades speech quality. From my experience, mask_nth_mgc_for_adv_loss = 1 for mgc order 25 and mask_nth_mgc_for_adv_loss = 2 for mgc order 59 work well (see the first sketch after this list).
  • F0 extracted by WORLD is spline-interpolated. Set f0_interpolation_kind to "slinear" if you want first-order spline interpolation, which is the same as Merlin's default (see the second sketch after this list).
  • Set use_harvest to True if you want to use the Harvest F0 estimation algorithm. If False, Dio and StoneMask are used to estimate/refine F0.
  • If you see cuda runtime error (2) : out of memory, try a smaller batch size. See https://github.com/r9y9/gantts/issues/3.
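
To make the masking note concrete, here is a hypothetical sketch (not this repository's actual training code) of computing an adversarial loss on mgc features with the first dimensions masked out; the function name, tensor shapes, and discriminator interface are assumptions:

import torch
import torch.nn.functional as F

def adversarial_loss_on_mgc(discriminator, generated_mgc, mask_nth_mgc_for_adv_loss=2):
    # generated_mgc: (batch, time, num_mgc_dims) tensor from the generator.
    # Drop the low-order coefficients, which hurt quality when included.
    masked = generated_mgc[:, :, mask_nth_mgc_for_adv_loss:]
    # Assume the discriminator outputs probabilities in [0, 1] (sigmoid output).
    d_out = discriminator(masked)
    # Non-saturating GAN loss: push D(G(x)) toward "real" (label 1).
    return F.binary_cross_entropy(d_out, torch.ones_like(d_out))

And a sketch of the F0 interpolation step: filling unvoiced (zero) frames of a WORLD-extracted F0 track by first-order ("slinear") spline interpolation. The helper name is hypothetical:

import numpy as np
from scipy.interpolate import interp1d

def interpolate_f0(f0, kind="slinear"):
    # f0: 1-D array where unvoiced frames are 0.
    f0 = np.asarray(f0, dtype=np.float64).copy()
    voiced = f0 > 0
    if voiced.sum() < 2:
        return f0  # not enough voiced frames to interpolate
    idx = np.arange(len(f0))
    interp = interp1d(idx[voiced], f0[voiced], kind=kind, bounds_error=False,
                      fill_value=(f0[voiced][0], f0[voiced][-1]))
    f0[~voiced] = interp(idx[~voiced])
    return f0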

Notes on [2]

Though I haven't obtained improvements over Saito's approach [1] yet, the GAN-based models described in [2] should be achievable with the following configuration (a minimal model sketch follows the list):

  • Set generator_add_noise to True. This enables the generator to take Gaussian noise as an additional input; linguistic features are concatenated with the noise vector.
  • Set discriminator_linguistic_condition to True. The discriminator is then conditioned on linguistic features.
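
As a rough illustration of what these two flags change, here is a minimal feed-forward sketch; layer sizes, class names, and the plain Linear layers are assumptions for illustration and differ from the repository's actual models:

import torch
import torch.nn as nn

class NoisyGenerator(nn.Module):
    # generator_add_noise = True: concatenate Gaussian noise with linguistic features.
    def __init__(self, linguistic_dim, noise_dim, acoustic_dim, hidden=256):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(linguistic_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, acoustic_dim),
        )

    def forward(self, linguistic):
        # Draw fresh noise per frame and append it to the linguistic input.
        z = torch.randn(*linguistic.shape[:-1], self.noise_dim, device=linguistic.device)
        return self.net(torch.cat([linguistic, z], dim=-1))

class ConditionalDiscriminator(nn.Module):
    # discriminator_linguistic_condition = True: condition D on linguistic features.
    def __init__(self, acoustic_dim, linguistic_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(acoustic_dim + linguistic_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, acoustic, linguistic):
        return self.net(torch.cat([acoustic, linguistic], dim=-1))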

Installation

Please install PyTorch, TensorFlow, and SRU (if needed) first. Once you have those, running

git clone --recursive https://github.com/r9y9/gantts && cd gantts
pip install -e ".[train]"

should install all other dependencies.

Repository structure

  • gantts/: Network definitions and utilities for sequence-loss optimization.
  • prepare_features_vc.py: Acoustic feature extraction script for voice conversion.
  • prepare_features_tts.py: Linguistic/duration/acoustic feature extraction script for TTS.
  • train.py: GAN-based training script. This is written to be generic so that it can be used for training voice conversion models as well as text-to-speech models (duration/acoustic).
  • train_gan.sh: Adversarial training wrapper script for train.py.
  • hparams.py: Hyper parameters for VC and TTS experiments.
  • evaluation_vc.py: Evaluation script for VC.
  • evaluation_tts.py: Evaluation script for TTS.

Feature extraction scripts are written for the CMU ARCTIC dataset but can easily be adapted for other datasets. The sketch below illustrates the kind of WORLD analysis they perform.
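
For concreteness, here is a hedged illustration of WORLD-based analysis with pyworld, matching the use_harvest note above; the exact parameters and post-processing in prepare_features_vc.py / prepare_features_tts.py may differ:

import numpy as np
import pyworld

def world_analysis(x, fs, use_harvest=True):
    # x: 1-D waveform, fs: sampling rate in Hz.
    x = np.ascontiguousarray(x, dtype=np.float64)
    if use_harvest:
        f0, timeaxis = pyworld.harvest(x, fs)
    else:
        # Dio estimates F0; StoneMask refines it.
        f0, timeaxis = pyworld.dio(x, fs)
        f0 = pyworld.stonemask(x, f0, timeaxis, fs)
    spectrogram = pyworld.cheaptrick(x, f0, timeaxis, fs)  # spectral envelope
    aperiodicity = pyworld.d4c(x, f0, timeaxis, fs)        # aperiodicity
    return f0, spectrogram, aperiodicity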

Run demos

Voice conversion (en)

vc_demo.sh is a clb-to-slt voice conversion demo script. Before running it, please download the wav files for clb and slt from CMU ARCTIC and check that you have all the data in a directory laid out as follows:

> tree ~/data/cmu_arctic/ -d -L 1
/home/ryuichi/data/cmu_arctic/
├── cmu_us_awb_arctic
├── cmu_us_bdl_arctic
├── cmu_us_clb_arctic
├── cmu_us_jmk_arctic
├── cmu_us_ksp_arctic
├── cmu_us_rms_arctic
└── cmu_us_slt_arctic

Once you have downloaded the data, run:

./vc_demo.sh ${experimental_id} ${your_cmu_arctic_data_root}

e.g.,

 ./vc_demo.sh vc_gan_test ~/data/cmu_arctic/

Model checkpoints will be saved at ./checkpoints/${experimental_id} and audio samples are saved at ./generated/${experimental_id}.
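
If the demo fails while loading data, a quick sanity check of the directory layout can help (plain Python, not part of the repository; the helper below is hypothetical):

from pathlib import Path

def check_arctic_root(root, speakers=("clb", "slt")):
    # vc_demo.sh needs the clb (source) and slt (target) speakers.
    root = Path(root).expanduser()
    missing = [s for s in speakers if not (root / f"cmu_us_{s}_arctic").is_dir()]
    if missing:
        raise FileNotFoundError(f"Missing CMU ARCTIC speakers under {root}: {missing}")

check_arctic_root("~/data/cmu_arctic/")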

Text-to-speech synthesis (en)

tts_demo.sh is a self-contained TTS demo script. The usage is:

./tts_demo.sh ${experimental_id}

This will download the slt_arctic_full_data used in Merlin's demo, perform feature extraction, train models, and synthesize audio samples for the eval/test sets. ${experimental_id} can be an arbitrary string, for example,

./tts_demo.sh tts_test

Model checkpoints will be saved at ./checkpoints/${experimental_id} and audio samples are saved at ./generated/${experimental_id}.

Hyper parameters

See hparams.py.

Monitoring training progress

tensorboard --logdir=log

Notice

The repository doesn't try to reproduce the same results reported in the papers because 1) the data is not publicly available and 2) hyper parameters depend highly on the data. Instead, I tried the same ideas on different data with different hyper parameters.
