
liusongxiang / ppg-vc

License: Apache-2.0
PPG-Based Voice Conversion


Projects that are alternatives of or similar to ppg-vc

YourTTS
YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for everyone
Stars: ✭ 217 (+40.91%)
Mutual labels:  speech-synthesis, voice-conversion
VQMIVC
Official implementation of VQMIVC: One-shot (any-to-any) Voice Conversion @ Interspeech 2021 + Online playing demo!
Stars: ✭ 278 (+80.52%)
Mutual labels:  voice-conversion, one-shot
voice-conversion
A tutorial implementation of voice conversion using PyTorch
Stars: ✭ 26 (-83.12%)
Mutual labels:  speech-synthesis, voice-conversion
SingleVC
Any-to-one voice conversion using the data augment strategy: pitch shifted and duration remained.
Stars: ✭ 25 (-83.77%)
Mutual labels:  speech-synthesis, voice-conversion
Espnet
End-to-End Speech Processing Toolkit
Stars: ✭ 4,533 (+2843.51%)
Mutual labels:  speech-synthesis, voice-conversion
MediumVC
Any-to-any voice conversion using synthetic specific-speaker speeches as intermediate features
Stars: ✭ 46 (-70.13%)
Mutual labels:  speech-synthesis, voice-conversion
sova-tts-tps
NLP-preprocessor for the SOVA-TTS project
Stars: ✭ 44 (-71.43%)
Mutual labels:  speech-synthesis
melgan
MelGAN implementation with Multi-Band and Full Band supports...
Stars: ✭ 54 (-64.94%)
Mutual labels:  speech-synthesis
QPPWG
Quasi-Periodic Parallel WaveGAN Pytorch implementation
Stars: ✭ 41 (-73.38%)
Mutual labels:  speech-synthesis
VAENAR-TTS
PyTorch Implementation of VAENAR-TTS: Variational Auto-Encoder based Non-AutoRegressive Text-to-Speech Synthesis.
Stars: ✭ 66 (-57.14%)
Mutual labels:  speech-synthesis
PPG
Code to estimate HR from PPG signals using Subspace Decomposition and Kalman filter for the dataset of 22 PPG recordings provided for the 2015 IEEE Signal Processing Cup (SP Cup) competition. The traces are stored in folder 'DATABASE'. Please cite this publication when referencing this material: "Measuring Heart Rate During Physical Exercise by …
Stars: ✭ 43 (-72.08%)
Mutual labels:  ppg
TinyCog
Small Robot, Toy Robot platform
Stars: ✭ 29 (-81.17%)
Mutual labels:  speech-synthesis
open-speech-corpora
💎 A list of accessible speech corpora for ASR, TTS, and other Speech Technologies
Stars: ✭ 841 (+446.1%)
Mutual labels:  speech-synthesis
CVC
CVC: Contrastive Learning for Non-parallel Voice Conversion (INTERSPEECH 2021, in PyTorch)
Stars: ✭ 45 (-70.78%)
Mutual labels:  voice-conversion
Daft-Exprt
PyTorch Implementation of Daft-Exprt: Robust Prosody Transfer Across Speakers for Expressive Speech Synthesis
Stars: ✭ 41 (-73.38%)
Mutual labels:  speech-synthesis
AFE4490 Oximeter
This pulse oximetry shield from ProtoCentral uses the AFE4490 IC to enable your Arduino to measure heart rate as well as SpO2 values.
Stars: ✭ 39 (-74.68%)
Mutual labels:  ppg
JD-NMF
Joint Dictionary Learning-based Non-Negative Matrix Factorization for Voice Conversion (TBME 2016)
Stars: ✭ 20 (-87.01%)
Mutual labels:  voice-conversion
Phomeme
Simple sentence mixing tool (work in progress)
Stars: ✭ 18 (-88.31%)
Mutual labels:  voice-conversion
deep-learning-german-tts
Thorsten-Voice: A free to use, offline working, high quality german TTS voice should be available for every project without any license struggling.
Stars: ✭ 268 (+74.03%)
Mutual labels:  speech-synthesis
spokestack-ios
Spokestack: give your iOS app a voice interface!
Stars: ✭ 27 (-82.47%)
Mutual labels:  speech-synthesis

One-shot Phonetic PosteriorGram (PPG)-Based Voice Conversion (PPG-VC): Any-to-Many Voice Conversion with Location-Relative Sequence-to-Sequence Modeling (TASLP 2021)

Paper | Pre-trained models | Paper Demo

This paper proposes an any-to-many location-relative, sequence-to-sequence (seq2seq) based, non-parallel voice conversion approach. In this approach, we combine a bottle-neck feature extractor (BNE) with a seq2seq based synthesis module. During the training stage, an encoder-decoder based hybrid connectionist-temporal-classification-attention (CTC-attention) phoneme recognizer is trained, whose encoder has a bottle-neck layer. A BNE is obtained from the phoneme recognizer and is utilized to extract speaker-independent, dense and rich linguistic representations from spectral features. Then a multi-speaker location-relative attention based seq2seq synthesis model is trained to reconstruct spectral features from the bottle-neck features, conditioned on speaker representations for speaker identity control in the generated speech. To mitigate the difficulties of using seq2seq based models to align long sequences, we down-sample the input spectral features along the temporal dimension and equip the synthesis model with a discretized mixture of logistic (MoL) attention mechanism. Since the phoneme recognizer is trained with a large speech recognition corpus, the proposed approach can conduct any-to-many voice conversion. Objective and subjective evaluations show that the proposed any-to-many approach has superior voice conversion performance in terms of both naturalness and speaker similarity. Ablation studies are conducted to confirm the effectiveness of feature selection and model design strategies in the proposed approach. The proposed VC approach can readily be extended to support any-to-any VC (also known as one/few-shot VC) and achieves high performance according to objective and subjective evaluations.

Diagram of the BNE-Seq2seqMoL system.

This repo implements an updated version of PPG-based VC models.
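As a concrete illustration of the temporal down-sampling mentioned above, here is a minimal NumPy sketch that stacks every r consecutive frames so the seq2seq model attends over a sequence r times shorter. The reduction factor r = 4 is an assumption suggested by the "r4" in the released config name, not a verified repo detail:

```python
import numpy as np

def downsample_frames(feats, r=4):
    """Stack every r consecutive frames along time; the feature
    dimension grows by r while the sequence length shrinks by r."""
    T, D = feats.shape
    T_trim = (T // r) * r                 # drop leftover frames
    return feats[:T_trim].reshape(T_trim // r, r * D)

ppg = np.random.randn(403, 144)           # e.g. 403 PPG frames, 144-dim
out = downsample_frames(ppg, r=4)
print(out.shape)                          # (100, 576)
```

The shorter sequence makes the MoL attention alignment problem easier, at the cost of a wider per-step input.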

Notes:

  • The PPG model provided in conformer_ppg_model is based on a hybrid CTC-attention phoneme recognizer, trained on LibriSpeech (960 hours). The PPGs have a frame shift of 10 ms and a dimensionality of 144. This model is very similar to the one used in this paper.

  • This repo uses HiFi-GAN V1 as the vocoder model; the sampling rate of the synthesized audio is 24 kHz.
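To make the numbers in these notes concrete, a quick back-of-the-envelope sketch relating utterance duration to PPG frame count and output sample count (the 10 ms frame shift and 24 kHz output rate come from the notes above):

```python
def ppg_frames(duration_s, frame_shift_ms=10):
    """Number of PPG frames for an utterance of the given duration."""
    return int(duration_s * 1000 / frame_shift_ms)

def output_samples(duration_s, sr=24000):
    """Number of audio samples the 24 kHz vocoder produces."""
    return int(duration_s * sr)

dur = 3.2                                  # a 3.2 s utterance
print(ppg_frames(dur))                     # 320 frames, each a 144-dim PPG vector
print(output_samples(dur))                 # 76800 samples
```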

Updates!

  • We provide an audio sample uttered by Barack Obama (link); you can convert any voice into Obama's voice using this sample as the reference. Please have a try!
  • The BNE-Seq2seqMoL one-shot VC model has been uploaded (link).
  • The BiLSTM-based one-shot VC model has been uploaded (link).

How to use

Setup with virtualenv

$ cd tools
$ make

Note: If you want to specify the Python version, CUDA version or PyTorch version, run, for example:

$ make PYTHON=3.7 CUDA_VERSION=10.1 PYTORCH_VERSION=1.6

Conversion with a pretrained model

  1. Download a model from here; we recommend first trying the model bneSeq2seqMoL-vctk-libritts460-oneshot. Put the config file and the checkpoint file in a folder <model_dir>.
  2. Prepare a source wav directory <source_wav_dir>, containing the wavs you want to convert.
  3. Prepare a reference audio sample (i.e., the target voice you want to convert to) <ref_wavpath>.
  4. Run test.sh as:
sh test.sh <model_dir>/seq2seq_mol_ppg2mel_vctk_libri_oneshotvc_r4_normMel_v2.yaml <model_dir>/best_loss_step_304000.pth \
  <source_wav_dir> <ref_wavpath>

The converted wavs are saved in the folder vc_gen_wavs.
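To convert the same sources toward several reference voices, test.sh can simply be invoked once per reference wav. A minimal Python sketch that builds those commands (all paths here are hypothetical placeholders; the argument order follows the test.sh invocation above):

```python
from pathlib import Path

# Hypothetical paths -- substitute your own model dir and wavs.
model_dir = Path("bneSeq2seqMoL-vctk-libritts460-oneshot")
config = model_dir / "seq2seq_mol_ppg2mel_vctk_libri_oneshotvc_r4_normMel_v2.yaml"
ckpt = model_dir / "best_loss_step_304000.pth"
source_dir = Path("source_wavs")
ref_wavs = [Path("refs/obama.wav"), Path("refs/reader2.wav")]

# One test.sh command per reference voice.
cmds = [["sh", "test.sh", str(config), str(ckpt), str(source_dir), str(ref)]
        for ref in ref_wavs]
for cmd in cmds:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually execute
```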

Data preprocessing

Activate the virtual env by running source tools/venv/bin/activate, then:

  • Please run 1_compute_ctc_att_bnf.py to compute PPG features.
  • Please run 2_compute_f0.py to compute fundamental frequency.
  • Please run 3_compute_spk_dvecs.py to compute speaker d-vectors.
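For intuition about what step 2 computes, here is a generic autocorrelation-based F0 estimator on a synthetic tone. This is an illustrative sketch only, not the implementation in 2_compute_f0.py:

```python
import numpy as np

def estimate_f0_autocorr(frame, sr, fmin=50.0, fmax=500.0):
    """Estimate F0 of a single frame from its autocorrelation peak
    within the plausible lag range [sr/fmax, sr/fmin]."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sr / fmax)                   # smallest plausible lag
    hi = int(sr / fmin)                   # largest plausible lag
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(sr) / sr                    # 1 second of audio
sine = np.sin(2 * np.pi * 220.0 * t)      # 220 Hz tone
f0 = estimate_f0_autocorr(sine[:1024], sr)
print(f0)                                 # close to 220.0
```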

Training

  • Please refer to run.sh

Citations

If you use this repo for your research, please consider citing the following related papers.

@ARTICLE{liu2021any,
  author={Liu, Songxiang and Cao, Yuewen and Wang, Disong and Wu, Xixin and Liu, Xunying and Meng, Helen},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, 
  title={Any-to-Many Voice Conversion With Location-Relative Sequence-to-Sequence Modeling}, 
  year={2021},
  volume={29},
  number={},
  pages={1717-1728},
  doi={10.1109/TASLP.2021.3076867}
}

@inproceedings{Liu2018,
  author={Songxiang Liu and Jinghua Zhong and Lifa Sun and Xixin Wu and Xunying Liu and Helen Meng},
  title={Voice Conversion Across Arbitrary Speakers Based on a Single Target-Speaker Utterance},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={496--500},
  doi={10.21437/Interspeech.2018-1504},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1504}
}