---
title: Ukrainian TTS
emoji: 📢
colorFrom: blue
colorTo: yellow
sdk: gradio
sdk_version: 3.3
python_version: 3.9
app_file: app.py
pinned: false
---

# Ukrainian TTS 📢 🤖

Ukrainian TTS (text-to-speech) using Coqui TTS.
Link to online demo -> https://huggingface.co/spaces/robinhad/ukrainian-tts

Note: the online demo saves user input to improve the service; by using it, you consent to the analysis of this data.

Link to source code and models -> https://github.com/robinhad/ukrainian-tts

Telegram bot -> https://t.me/uk_tts_bot

Code is licensed under the MIT License; models are under the GNU GPL v3 License.
## Support ❤️

If you like my work, please support it.

For collaboration and questions, please contact me here:

- Telegram: https://t.me/robinhad
- Twitter: https://twitter.com/robinhad

You're welcome to join the UA Speech Recognition and Synthesis community: Telegram https://t.me/speech_recognition_uk
## Examples 🤖

- Mykyta (male): `mykyta.mp4`
- Olena (female): `olena.mp4`
- Dmytro (male): `dmytro.mp4`
- Olha (female): `olha.mp4`
- Lada (female): `lada.mp4`
## How to use 📢

### As a package

- Install with:

  ```shell
  pip install git+https://github.com/robinhad/ukrainian-tts.git
  ```

- Run a code snippet:

  ```python
  from ukrainian_tts.tts import TTS, Voices, Stress

  tts = TTS(use_cuda=False)
  with open("test.wav", mode="wb") as file:
      _, text = tts.tts("Привіт", Voices.Dmytro.value, Stress.Model.value, file)
  print("Accented text:", text)
  ```
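For reference, the voice and stress arguments passed to `tts.tts()` resolve to plain strings. Below is a minimal sketch of how the `Voices` and `Stress` enums might look; the member names follow the speakers listed above and the lowercase values are an assumption based on the `--speaker_idx dmytro` CLI flag, so check the package source for the authoritative definitions.

```python
from enum import Enum

# Hypothetical reconstruction for illustration only; the real enums
# live in ukrainian_tts.tts and may differ.
class Voices(Enum):
    Mykyta = "mykyta"
    Olena = "olena"
    Dmytro = "dmytro"
    Olha = "olha"
    Lada = "lada"

class Stress(Enum):
    Dictionary = "dictionary"  # stress placement via a dictionary lookup
    Model = "model"            # stress placement via a neural accentor

print(Voices.Dmytro.value, Stress.Model.value)
```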
### Run manually

Caution: this won't use the normalizer and autostress the way the web demo does.

- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Download `model.pth` and `speakers.pth` from the "Releases" tab.
- Launch as a one-time command:

  ```shell
  tts --text "Text for TTS" \
      --model_path path/to/model.pth \
      --config_path path/to/config.json \
      --speaker_idx dmytro \
      --out_path folder/to/save/output.wav
  ```

  or alternatively launch a web server using:

  ```shell
  tts-server --model_path path/to/model.pth \
      --config_path path/to/config.json
  ```
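Once `tts-server` is running, it can be queried over HTTP. Below is a sketch of building a request from Python, assuming the server listens on Coqui's default port 5002 and serves the `/api/tts` endpoint; verify both against your installed Coqui TTS version before relying on them.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Build a request URL for the demo server's /api/tts endpoint.
# Port and endpoint are assumptions; confirm for your Coqui TTS version.
params = urlencode({"text": "Привіт, світе!"})
url = f"http://localhost:5002/api/tts?{params}"
print(url)

# Uncomment once the server is running to save the synthesized audio:
# with urlopen(url) as resp, open("output.wav", "wb") as f:
#     f.write(resp.read())
```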
## How to train 🏋️

- Refer to the "Nervous beginner guide" in the Coqui TTS docs.
- Instead of the provided `config.json`, use the one from this repo.
## Attribution 🤝

- Model training - Yurii Paniv @robinhad
- Mykyta, Olena, Lada, Dmytro, and Olha dataset - Yehor Smoliakov @egorsmkv
- Dmytro voice - Dmytro Chaplynskyi @dchaplinsky
- Silence cutting using HMM-GMM - Volodymyr Kyrylov @proger
- Autostress (with dictionary) using ukrainian-word-stress - Oleksiy Syvokon @asivokon
- Autostress (with model) using ukrainian-accentor - Bohdan Mykhailenko @NeonBohdan + Yehor Smoliakov @egorsmkv