
baxtree / wiki2ssml

Licence: Apache-2.0
Wiki2SSML provides the WikiVoice markup language used for fine-tuning synthesised voices.

Programming Languages

javascript
184084 projects - #8 most used programming language
PEG.js
56 projects

Projects that are alternatives of or similar to wiki2ssml

startup-starter-kit
The Structured Content Startup Starter Kit
Stars: ✭ 42 (+35.48%)
Mutual labels:  google-assistant, ssml
Jovo Framework
🔈 The Open Source Voice Layer: Build Voice Experiences for Alexa, Google Assistant, Samsung Bixby, Web Apps, and much more
Stars: ✭ 1,320 (+4158.06%)
Mutual labels:  amazon-alexa, google-assistant
Assistants Pi
Headless Google Assistant and Alexa on Raspberry Pi
Stars: ✭ 280 (+803.23%)
Mutual labels:  amazon-alexa, google-assistant
Assistantjs
TypeScript framework to build cross-platform voice applications (alexa, google home, ...).
Stars: ✭ 100 (+222.58%)
Mutual labels:  amazon-alexa, google-assistant
Assistantcomputercontrol
Control your computer with your Google Home or Amazon Alexa assistant!
Stars: ✭ 554 (+1687.1%)
Mutual labels:  amazon-alexa, google-assistant
Awesome Voice Apps
🕶 A curated list of awesome voice projects, tools, and resources for Amazon Alexa, Google Assistant, and more.
Stars: ✭ 138 (+345.16%)
Mutual labels:  amazon-alexa, google-assistant
Lingvo
Lingvo
Stars: ✭ 2,361 (+7516.13%)
Mutual labels:  speech-synthesis
ttsflow
tensorflow speech synthesis c++ inference for voicenet
Stars: ✭ 17 (-45.16%)
Mutual labels:  speech-synthesis
Naomi
The Naomi Project is an open source, technology agnostic platform for developing always-on, voice-controlled applications!
Stars: ✭ 171 (+451.61%)
Mutual labels:  speech-synthesis
Vocgan
VocGAN: A High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network
Stars: ✭ 158 (+409.68%)
Mutual labels:  speech-synthesis
sova-tts-engine
Tacotron2 based engine for the SOVA-TTS project
Stars: ✭ 63 (+103.23%)
Mutual labels:  speech-synthesis
sam
Software Automatic Mouth - Tiny Speech Synthesizer
Stars: ✭ 316 (+919.35%)
Mutual labels:  speech-synthesis
Wavegrad
Implementation of Google Brain's WaveGrad high-fidelity vocoder (paper: https://arxiv.org/pdf/2009.00713.pdf). First implementation on GitHub.
Stars: ✭ 245 (+690.32%)
Mutual labels:  speech-synthesis
Universalvocoding
A PyTorch implementation of "Robust Universal Neural Vocoding"
Stars: ✭ 197 (+535.48%)
Mutual labels:  speech-synthesis
voder
An emulation of the Voder Speech Synthesizer.
Stars: ✭ 19 (-38.71%)
Mutual labels:  speech-synthesis
Expressive tacotron
Tensorflow Implementation of Expressive Tacotron
Stars: ✭ 192 (+519.35%)
Mutual labels:  speech-synthesis
IMS-Toucan
Text-to-Speech Toolkit of the Speech and Language Technologies Group at the University of Stuttgart. Objectives of the development are simplicity, modularity, controllability and multilinguality.
Stars: ✭ 295 (+851.61%)
Mutual labels:  speech-synthesis
Cyclegan Vc2
Voice Conversion by CycleGAN (语音克隆/语音转换): CycleGAN-VC2
Stars: ✭ 158 (+409.68%)
Mutual labels:  speech-synthesis
Tacotron pytorch
PyTorch implementation of Tacotron speech synthesis model.
Stars: ✭ 242 (+680.65%)
Mutual labels:  speech-synthesis
idear
🎙️ Handsfree Audio Development Interface
Stars: ✭ 84 (+170.97%)
Mutual labels:  speech-synthesis


Wiki2SSML

wiki2ssml transforms WikiVoice markup into W3C SSML, which is widely supported by text-to-speech services and serves as an interchange format for tuning synthesised voices.

Install

$ npm install wiki2ssml

or

$ yarn add wiki2ssml

Introduction

wiki2ssml eases the burden on editors preparing scripts in SSML, which is widely understood by modern speech synthesisers including, but not limited to, Amazon Polly, Google TTS, IBM Watson TTS and Microsoft Azure TTS. It is developed in vanilla JavaScript and powered by WikiVoice, which provides an unobtrusive way of blending voice-tuning markup with free text, creating a seamless experience of editing scripts and voices in one go.

WikiVoice

Format

[[attribute(:value)?(,attribute:value)*(|target)?]]
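
As a rough illustration, a markup following this pattern can carry a single attribute, several comma-separated attributes, and an optional target text. The attribute values below are only examples and assume standard SSML prosody and break values are accepted; parseToSsml is documented further below.

var parser = require("wiki2ssml");

// One attribute with a value, applied to a target text.
console.log(parser.parseToSsml("[[volume:+2dB|Mind the gap.]]", "en-GB"));

// Several comma-separated attributes applied to the same target text.
console.log(parser.parseToSsml("[[volume:+2dB,speed:50%|Mind the gap.]]", "en-GB"));

// A markup with no target text, such as a pause between phrases.
console.log(parser.parseToSsml("Mind the gap.[[silence:500ms,strength:strong]]Stand clear.", "en-GB"));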

Supported Markups

Expressions Operations
[[volume:SCALE|TEXT]] Speaking volume
[[speed:SCALE|TEXT]] Speaking rate
[[pitch:SCALE|TEXT]] Speaking pitch
[[silence:DURATION,strength:STRENGTH]] Pause with duration and strength
[[emphasis:LEVEL|TEXT]] Emphasis with LEVEL
[[audio:AUDIO_URI]] Audio embedded into speech
[[lang:LANGUAGE|TEXT]] Language indicator
[[paragraph|TEXT]] Paragraph indicator
[[sentence|TEXT]] Sentence indicator
[[type:TYPE|TEXT]] Type it should be said as
[[voice:NAME|TEXT]] Voice name it should be said with
[[pos:POS|TEXT]] Part of speech it should be pronounced as
[[substitute:TEXT1|TEXT2]] Speak TEXT1 in place of TEXT2
[[alphabet:ALPHABET,pronunciation:PRONUNCIATION|TEXT]] Phonetic pronunciation
[[volume:SCALE,speed:SCALE,pitch:SCALE|TEXT]] Speaking volume, rate and pitch
[[type:TYPE,format:FORMAT,detail:DETAIL|TEXT]] Type it should be said as
[[mark:NAME]] Mark referencing a location
[[seeAlso:URI]] URI providing additional information about marked-up content
[[cacheControl:no-cache]] No caching on marked-up content
[[lexicon:URI,type:TEXT]] Location of the lexicon document and its media type
*[[...]][[...]]...[[...]]* <par> time container with one or more markups
#[[...]][[...]]...[[...]]# <seq> time container with one or more markups
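
For example, several of the markups above can be mixed with free text in one script and converted in a single call. This is a minimal sketch; the silence, emphasis and substitute values are illustrative SSML-style values rather than examples taken from the project's documentation.

var parser = require("wiki2ssml");

var script =
    "[[emphasis:strong|Please]] mind the gap." +
    "[[silence:1s,strength:strong]]" +
    "[[substitute:World Wide Web Consortium|W3C]] publishes the SSML specification.";

console.log(parser.parseToSsml(script, "en-GB", {pretty: true}));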

Vendor-Specific Markups

Expressions Operations
[[amzWhispered|TEXT]] Whispering
[[amzPhonation:PHONATION|TEXT]] Speaking Softly
[[amzTimbre:SCALE|TEXT]] Controlling Timbre
[[amzDRC|TEXT]] Dynamic Range Compression
[[amzBreathDuration:SCALE,amzBreathVolume:SCALE]] Breathing based on the manual model
[[amzDefaultBreath]] Default breathing based on the manual model
[[amzAutoBreathsVolume:SCALE,amzAutoBreathsFrequency:SCALE,amzAutoBreathsDuration:SCALE|TEXT]] Breathing based on the automated model
[[amzDefaultAutoBreaths]] Default breathing based on the automated model
[[amzSpeakingStyle:STYLE|TEXT]] Speaking style
[[amzEmotion:EMOTION,amzIntensity:SCALE|TEXT]] Speaking emotionally
[[amzMaxDuration:DURATION|TEXT]] Maximum speech duration
[[gglMediaSpeak|TEXT]] Media container for speech
[[gglMediaSpeakEnd:DURATION|TEXT]] Media container for speech with the ending time
[[gglMediaSpeakFadeIn:DURATION,gglMediaSpeakFadeOut:DURATION|TEXT]] Media container for speech with fade
[[gglMediaAudio:URI]] Media container for audio
[[gglMediaAudioFadeIn:DURATION,gglMediaAudioFadeOut:DURATION,gglMediaAudio:URI]] Media container for audio with fade
[[ibmExprType:TYPE|TEXT]] Expressiveness type
[[ibmTransType:TYPE,ibmTransStrength:SCALE|TEXT]] Voice transformation
[[ibmTransBreathiness:SCALE,ibmTransPitchRange:SCALE,ibmTransTimbre:SCALE|TEXT]] Voice custom transformation
[[voice:NAME|[[mstExprType:TYPE|TEXT]]]] Voice-specific speaking style
[[mstBackgroundAudio:URI,mstBackgroundAudioVolume:SCALE]] Background audio and its volume
[[mstBackgroundAudio:URI,mstBackgroundAudioFadeIn:SCALE,mstBackgroundAudioFadeOut:SCALE]] Background audio with fade-in and fade-out
[[mstExprStyle:STYLE,mstExprDegree:SCALE|TEXT]] Speaking style and its intensity
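
Vendor-specific markups are written the same way; whether the resulting SSML tags take effect depends on the target engine. A hedged sketch using the Amazon whisper markup from the table above (the generated tag is intended for engines that support the Amazon extension):

var parser = require("wiki2ssml");

// Amazon-specific whisper effect wrapped around the target text.
var whispered = parser.parseToSsml("[[amzWhispered|I can keep a secret.]]", "en-US", {pretty: true});
console.log(whispered);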

More details on canonical attribute values can be found in the W3C Speech Synthesis Markup Language (SSML) specification. For the ranges of vendor-specific values, please refer to the vendors' online documentation. Each attribute name in camel case can be rewritten in kebab case (e.g., firstSecondThird <=> first-second-third). Non-vendor-specific attributes can be abbreviated to their first three letters.
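
Assuming those naming rules, the following inputs should yield identical SSML; the equivalences are inferred from the rules above rather than copied from the project's documentation.

var parser = require("wiki2ssml");

// Full attribute name versus its three-letter abbreviation (non-vendor-specific only).
var full = parser.parseToSsml("[[volume:+2dB|Mind the gap.]]", "en-GB");
var abbreviated = parser.parseToSsml("[[vol:+2dB|Mind the gap.]]", "en-GB");
console.log(full === abbreviated); // expected: true

// Camel case versus kebab case for multi-word attribute names.
var camel = parser.parseToSsml("[[cacheControl:no-cache]]Mind the gap.", "en-GB");
var kebab = parser.parseToSsml("[[cache-control:no-cache]]Mind the gap.", "en-GB");
console.log(camel === kebab); // expected: true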

parseToSsml(input, languageCode, options)

  • input <string> (required)
  • languageCode <string> (required; an RFC 1766 language code)
  • options <object> (optional)
    • version <string> (default: "1.1")
    • pretty <boolean> (default: false)
    • encoding <string> (default: "UTF-8")
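
For instance, the defaults can be overridden to target SSML 1.0 with compact output. This is a sketch of the options listed above; the option names and default values are as documented.

var parser = require("wiki2ssml");

var ssml = parser.parseToSsml("[[speed:50%|Mind the gap.]]", "en-GB", {
    version: "1.0",    // SSML version attribute (default: "1.1")
    pretty: false,     // compact output (default)
    encoding: "UTF-8"  // XML declaration encoding (default)
});
console.log(ssml);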

Example

var parser = require("wiki2ssml");
try {
    var input = "[[volume:+2dB,speed:50%|Speak this with the volume increased by 2dB at half the default speech rate.]]";
    var ssml = parser.parseToSsml(input, "en-GB", {pretty: true});
    console.log(ssml);
} catch (e) {
    if (e instanceof parser.SyntaxError) {
        // The input does not have valid WikiVoice markups
    } else if (e instanceof parser.ArgumentError) {
        // Either the input or the language code is missing
    } else {
        // Handle any unspecified exceptions
    }
}

will print out:

<?xml version="1.0" encoding="UTF-8"?>
<speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/10/synthesis http://www.w3.org/TR/speech-synthesis/synthesis.xsd" xml:lang="en-GB">
  <prosody rate="50%" volume="+2dB">Speak this with the volume increased by 2dB at half the default speech rate.</prosody>
</speak>