
Shahabks / myprosody

License: MIT
A Python library for measuring the acoustic features of speech (simultaneous speech, high entropy) compared to those of native speech.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to myprosody

my-voice-analysis
My-Voice Analysis is a Python library for the analysis of voice (simultaneous speech, high entropy) without the need for a transcription. It breaks utterances and detects syllable boundaries, fundamental frequency contours, and formants.
Stars: ✭ 164 (+1.23%)
Mutual labels:  speech-analysis, acoustic-model
brasiltts
Brasil TTS is a set of Brazilian Portuguese speech synthesizers that read the screen for visually impaired users. It converts text into audio, giving blind and low-vision users access to the content displayed on screen. Although the main target audience of text-to-speech systems such as Brasil TTS is…
Stars: ✭ 34 (-79.01%)
Mutual labels:  voice-recognition
Vosk
VOSK Speech Recognition Toolkit
Stars: ✭ 182 (+12.35%)
Mutual labels:  voice-recognition
sam
Software Automatic Mouth - Tiny Speech Synthesizer
Stars: ✭ 316 (+95.06%)
Mutual labels:  phonemes
Project news alan ai
In this video, we're going to build a Conversational Voice Controlled React News Application using Alan AI. Alan AI is a revolutionary speech recognition software that allows you to add voice capabilities to your applications.
Stars: ✭ 202 (+24.69%)
Mutual labels:  voice-recognition
leopard
On-device speech-to-text engine powered by deep learning
Stars: ✭ 354 (+118.52%)
Mutual labels:  voice-recognition
Swiftspeech
A speech recognition framework designed for SwiftUI.
Stars: ✭ 149 (-8.02%)
Mutual labels:  voice-recognition
YANGstraight source
Analytic signal-based source information analysis for YANGstraight and real-time interactive tools
Stars: ✭ 31 (-80.86%)
Mutual labels:  speech-analysis
react-native-spokestack
Spokestack: give your React Native app a voice interface!
Stars: ✭ 53 (-67.28%)
Mutual labels:  voice-recognition
Caster
Dragonfly-Based Voice Programming and Accessibility Toolkit
Stars: ✭ 242 (+49.38%)
Mutual labels:  voice-recognition
Voicebook
🗣️ A book and repo to get you started programming voice computing applications in Python (10 chapters and 200+ scripts).
Stars: ✭ 236 (+45.68%)
Mutual labels:  voice-recognition
Speaker Recognition Py3
Based on MFCC and GMM (speech recognition based on MFCC and Gaussian mixture models)
Stars: ✭ 202 (+24.69%)
Mutual labels:  voice-recognition
Bayesian-Pitch-Tracking-Using-Harmonic-model
Pitch detection and pitch tracking, voiced/unvoiced detection (VAD)
Stars: ✭ 70 (-56.79%)
Mutual labels:  speech-analysis
Voice Overlay Android
🗣 An overlay that gets your user’s voice permission and input as text in a customizable UI
Stars: ✭ 189 (+16.67%)
Mutual labels:  voice-recognition
picovoice
The end-to-end platform for building voice products at scale
Stars: ✭ 316 (+95.06%)
Mutual labels:  voice-recognition
Angular Search Experience
Algolia + Angular = 🔥🔥🔥
Stars: ✭ 167 (+3.09%)
Mutual labels:  voice-recognition
Fdsoundactivatedrecorder
Start recording when the user speaks
Stars: ✭ 227 (+40.12%)
Mutual labels:  voice-recognition
Calculate-SNR-SDR
Script to calculate SNR and SDR using python
Stars: ✭ 76 (-53.09%)
Mutual labels:  speech-analysis
KeenASR-Android-PoC
A proof-of-concept app using KeenASR SDK on Android. WE ARE HIRING: https://keenresearch.com/careers.html
Stars: ✭ 21 (-87.04%)
Mutual labels:  voice-recognition
octopus
On-device speech-to-index engine powered by deep learning.
Stars: ✭ 30 (-81.48%)
Mutual labels:  voice-recognition


myprosody

** The Python script of myprosody has been revised and is now available here on the master branch **

*** Version-10 release: two new functions were added ***

NEW VERSION: the prosodic features of speech (simultaneous speech) compared to the features of native speech, plus a spoken-language proficiency level estimator

NOTE:

1- Both My-Voice-Analysis and Myprosody work on Python 3.7 
2- If you install My-Voice-Analysis through PyPI, please use: 
      mysp=__import__("my-voice-analysis") instead of import myspsolution as mysp
3- It is better to keep folder names as single tokens, for instance "Name_Folder" or "NameFolder", without spaces in the directory path

A Python library for measuring the acoustic features of speech (simultaneous speech, high entropy) compared to those of native speech.

Prosody is the study of the tune and rhythm of speech and how these features contribute to meaning. It concerns those aspects of speech that typically apply above the level of the individual phoneme, and very often to sequences of words (in prosodic phrases). Features above the level of the phoneme (or "segment") are referred to as suprasegmentals, so a phonetic study of prosody is a study of the suprasegmental features of speech. At the phonetic level, prosody is characterised by three correlates (a short extraction sketch follows this list):

  1. vocal pitch (fundamental frequency)
  2. acoustic intensity
  3. rhythm (phoneme and syllable duration)
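
These three correlates can be extracted directly with Parselmouth, the Praat interface cited in reference [5] below. This is a minimal sketch for illustration only, not part of myprosody itself; the file name "recording.wav" is a placeholder:

    import parselmouth  # Python interface to Praat (reference [5])

    snd = parselmouth.Sound("recording.wav")  # placeholder file name
    pitch = snd.to_pitch()                    # fundamental frequency (f0) contour
    intensity = snd.to_intensity()            # acoustic intensity contour (dB)

    f0 = pitch.selected_array["frequency"]    # Hz; 0 where unvoiced
    print("duration (s):", snd.get_total_duration())  # raw material for duration/rhythm measures
    print("mean f0 (Hz):", f0[f0 > 0].mean())
    print("mean intensity (dB):", intensity.values.mean())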

MyProsody is a Python library for measuring these acoustic features of speech (simultaneous speech, high entropy) compared to those of native speech. The acoustic features of native speech patterns have been observed and established by employing machine learning algorithms. An acoustic model (algorithm) breaks recorded utterances (48 kHz sampling rate, 32-bit depth) and detects syllable boundaries, fundamental frequency contours, and formants. Its built-in functions recognize/measure the following (a usage sketch follows the list):

                     o	Average_syll_pause_duration 
                     o	No._long_pause
                     o	Speaking-time 
                     o	No._of_words_in_minutes
                     o	Articulation_rate
                     o	No._words_in_minutes
                     o	formants_index
                     o	f0_index (f0 is for fundamental frequency)
                     o	f0_quantile_25_index 
                     o	f0_quantile_50_index 
                     o	f0_quantile_75_index 
                     o	f0_std 
                     o	f0_max 
                     o	f0_min 
                     o	No._detected_vowel 
                     o	perc%._correct_vowel
                     o	(f2/f1)_mean (1st and 2nd formant frequencies)
                     o	(f2/f1)_std
                     o	no._of_words
                     o	no._of_pauses
                     o	intonation_index 
                     o	(voiced_syll_count)/(no_of_pause)
                     o	TOEFL_Scale_Score 
                     o	Score_Shannon_index
                     o	speaking_rate
                     o	gender recognition
                     o	speech mood (semantic analysis)
                     o	pronunciation posterior score
                     o	articulation-rate 
                     o	speech rate 
                     o	filler words
                     o	f0 statistics
                     -------------
                     NEW
                     --------------
                     o level (CEFR level)
                     o prosody-aspects (comparison, native level)
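
A typical session looks like the sketch below. The function names (mysptotal, myspgend) follow the repository's EXAMPLES.pdf; treat them as assumptions if your installed version differs, and note that both the file name and the folder path are placeholders:

    import pickle            # needed by the proficiency-level functions
    import myprosody as mysp

    p = "my_recording"                       # audio file name, without the .wav extension
    c = r"C:\Users\you\Documents\myprosody"  # placeholder path to the downloaded myprosody folder

    mysp.mysptotal(p, c)   # overview of the measures listed above
    mysp.myspgend(p, c)    # gender recognition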

The library was developed based upon the ideas introduced by Klaus Zechner et al., "Automatic scoring of non-native spontaneous speech in tests of spoken English", Speech Communication, vol. 51, 2009; Nivja de Jong and Ton Wempe [1]; Paul Boersma and David Weenink [2]; Carlos Gussenhoven [3]; S.M. Witt and S.J. Young [4]; and Yannick Jadoul [5].

Peaks in intensity (dB) that are preceded and followed by dips in intensity are considered potential syllable cores. MyProsody is unique in its aim to provide a complete quantitative and analytical way to study the acoustic features of speech. Moreover, those features can be analysed further with Python's data-analysis ecosystem to yield deeper insights into speech patterns.
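
For intuition, the peak-dip heuristic can be sketched in a few lines with Parselmouth and SciPy. This illustrates the idea only and is not myprosody's actual implementation; the 2 dB prominence threshold and the median-intensity floor are assumptions chosen for the example:

    import numpy as np
    import parselmouth
    from scipy.signal import find_peaks

    snd = parselmouth.Sound("recording.wav")  # placeholder file name
    intensity = snd.to_intensity()
    db = intensity.values[0]                  # intensity contour in dB
    times = intensity.xs()                    # frame times in seconds

    # A peak counts as a potential syllable core only if the dips on both
    # sides are deep enough; "prominence" encodes exactly that requirement.
    peaks, _ = find_peaks(db, prominence=2.0)

    # Discard peaks that sit below the median intensity, treating them as
    # background noise rather than speech (an illustrative assumption).
    floor = np.median(db)
    nuclei = [times[i] for i in peaks if db[i] > floor]

    print(len(nuclei), "potential syllable nuclei detected")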

This library is for Linguists, scientists, developers, speech and language therapy clinics and researchers.

Please note that MyProsody is currently at an early stage, though under active development. While the amount of functionality currently present is not huge, more will be added over the next few months.

Installation

Myprosody can be installed like any other Python library, using (a recent version of) the Python package manager pip, on Linux, macOS, and Windows:

                                    pip install myprosody

or, to update your installed version to the latest release:

                                    pip install -U myprosody

You will also need

                                    import pickle

to run those functions of Myprosody that predict the spoken language proficiency level of your audio files; the pickle module loads the trained acoustic and language models.

NOTE:

After installing Myprosody, download the folder called:

                                  myprosody

from

https://github.com/Shahabks/myprosody

and save it on your computer. The folder includes an audio-files subfolder where you will put the recordings to be analysed.

Audio files must be in *.wav format, recorded at a 48 kHz sampling rate with 24-32 bit resolution.
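
A quick sanity check of an input file can be done with Python's built-in wave module (the file name is a placeholder; the module reads standard PCM WAV files):

    import wave

    # Verify that a recording matches the expected myprosody input format.
    with wave.open("my_recording.wav", "rb") as w:
        rate = w.getframerate()        # samples per second
        bits = 8 * w.getsampwidth()    # bit depth per sample
        print(f"sample rate: {rate} Hz, bit depth: {bits}-bit")
        if rate != 48000 or bits < 24:
            print("warning: myprosody expects 48 kHz, 24-32 bit WAV input")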

To check how the myprosody functions behave, please check

                                EXAMPLES.pdf
                                testpro.py

on

https://github.com/Shahabks/myprosody

Development

Myprosody was developed by MYSOLUTIONS Lab in Japan. It is part of the New Generation of Voice Recognition and Acoustic & Language Modelling Project at MYSOLUTIONS Lab, which plans to enrich the functionality of Myprosody by adding more advanced functions.

Pronunciation

My-Voice-Analysis and Myprosody are two encapsulated libraries from one of our main projects on speech scoring. The main project (in its early version) employed ASR and used the Hidden Markov Model framework to train simple Gaussian acoustic models for each phoneme for each speaker in the available audio datasets, then calculated all the symmetric K-L divergences for each pair of models for each speaker. What you see in these repos is just an approximation of those models, focusing on fluency rather than on the accuracy of each phoneme. In the project's machine learning model we considered audio files of speakers who possessed an appropriate degree of pronunciation, either in general or for a specific utterance, word, or phoneme (in effect, they had been rated by expert human graders). The figure below illustrates some of the factors that the expert human graders considered when assigning an overall score.
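
As background, the symmetric K-L divergence mentioned above has a closed form for the simple univariate Gaussian models described. The sketch below shows the computation; the means and variances are made up purely for illustration:

    import math

    def kl_gauss(mu0, var0, mu1, var1):
        """KL(N(mu0, var0) || N(mu1, var1)) for univariate Gaussians."""
        return 0.5 * (math.log(var1 / var0)
                      + (var0 + (mu0 - mu1) ** 2) / var1
                      - 1.0)

    def symmetric_kl(mu0, var0, mu1, var1):
        """Symmetric K-L divergence: KL(p||q) + KL(q||p)."""
        return kl_gauss(mu0, var0, mu1, var1) + kl_gauss(mu1, var1, mu0, var0)

    # Illustrative values only: comparing two phoneme models for one speaker.
    print(symmetric_kl(500.0, 80.0**2, 650.0, 120.0**2))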

[Figure: factors considered by expert human graders when assigning an overall pronunciation score]

S. M. Witt, 2012 “Automatic error detection in pronunciation training: Where we are and where we need to go,”

References and Acknowledgements

  1. De Jong, N.H., and Wempe, T. (2009); "Praat script to detect syllable nuclei and measure speech rate automatically"; Behavior Research Methods, 41(2), 385-390.

  2. Paul Boersma and David Weenink; http://www.fon.hum.uva.nl/praat/

  3. Gussenhoven, C. (2002); "Intonation and Interpretation: Phonetics and Phonology"; Centre for Language Studies, University of Nijmegen, The Netherlands.

  4. Witt, S.M., and Young, S.J. (2000); "Phone-level pronunciation scoring and assessment for interactive language learning"; Speech Communication, 30, 95-108.

  5. Jadoul, Y., Thompson, B., & de Boer, B. (2018). Introducing Parselmouth: A Python interface to Praat. Journal of Phonetics, 71, 1-15. https://doi.org/10.1016/j.wocn.2018.07.001 (https://parselmouth.readthedocs.io/en/latest/)

  6. "Automatic scoring of non-native spontaneous speech in tests of spoken English", Speech Communication, Volume 51, Issue 10, October 2009, Pages 883-895

  7. "A three-stage approach to the automated scoring of spontaneous spoken responses", Computer Speech & Language, Volume 25, Issue 2, April 2011, Pages 282-306

  8. "Automated Scoring of Nonnative Speech Using the SpeechRaterSM v. 5.0 Engine", ETS research report, Volume 2018, Issue 1, December 2018, Pages: 1-28

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].