Helsinki-NLP / Prosody

License: MIT
Helsinki Prosody Corpus and a System for Predicting Prosodic Prominence from Text

Predicting Prosodic Prominence from Text with Pre-trained Contextualized Word Representations

Update 30 October 2019:

  • Data files modified to include improved word boundary values.

Update 9 September 2019:

  • Data files have been modified to include information about the source file in LibriTTS: instead of an empty line before each sentence, there is now a line with <file> file_name.txt.
  • The code in prosody_dataset.py has been updated accordingly.

This repository contains the Helsinki Prosody Corpus and the code for the paper:

Aarne Talman, Antti Suni, Hande Celikkanat, Sofoklis Kakouros, Jörg Tiedemann and Martti Vainio. 2019. Predicting Prosodic Prominence from Text with Pre-trained Contextualized Word Representations. Proceedings of NoDaLiDa.

Abstract: In this paper we introduce a new natural language processing dataset and benchmark for predicting prosodic prominence from written text. To our knowledge this will be the largest publicly available dataset with prosodic labels. We describe the dataset construction and the resulting benchmark dataset in detail and train a number of different models ranging from feature-based classifiers to neural network systems for the prediction of discretized prosodic prominence. We show that pre-trained contextualized word representations from BERT outperform the other models even with less than 10% of the training data. Finally we discuss the dataset in light of the results and point to future research and plans for further improving both the dataset and methods of predicting prosodic prominence from text. The dataset and the code for the models are publicly available.

If you find the corpus or the system useful, please cite:

@inproceedings{talman_etal2019prosody,
  author = {Aarne Talman and Antti Suni and Hande Celikkanat and Sofoklis Kakouros 
            and J\"org Tiedemann and Martti Vainio},
  title = {Predicting Prosodic Prominence from Text with Pre-trained Contextualized 
           Word Representations},
  booktitle = {Proceedings of NoDaLiDa},
  year = {2019}
}

The Helsinki Prosody Corpus

License: CC BY 4.0

This repository contains the largest annotated English-language dataset with labels for prosodic prominence.

  • Download: The corpus is available in the data folder. Clone this repository or download the files separately.

The prosody corpus contains automatically generated, high-quality prosodic annotations for the recently published LibriTTS corpus (Zen et al. 2019), produced with the Continuous Wavelet Transform Annotation method (Suni et al. 2017) and the Wavelet Prosody Analyzer toolkit.

[Image: Continuous Wavelet Transform Annotation method]

Corpus statistics

Datasets    Speakers   Sentences   Words       Label: 0    Label: 1   Label: 2
train-100   247        33,041      570,592     274,184     155,849    140,559
train-360   904        116,262     2,076,289   1,003,454   569,769    503,066
dev         40         5,726       99,200      47,535      27,454     24,211
test        39         4,821       90,063      43,234      24,543     22,286
Total       1,230      159,850     2,836,144   1,368,407   777,615    690,122

Format

The corpus data comes as text files with one word per line; sentences are separated by a line <file> file_name.txt, where the file name refers to the source file in LibriTTS. Each line in a sentence has five tab-separated items in the following order:

  • word
  • discrete prominence label: 0 (non-prominent), 1 (prominent), 2 (highly prominent), (NA for punctuation)
  • discrete word boundary label: 0, 1, 2 (NA for punctuation)
  • real-valued prominence label (NA for punctuation)
  • real-valued word boundary label (NA for punctuation)

Example:

commercial    2    1    1.679    0.715
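
To read the corpus programmatically, a minimal Python sketch along these lines works. The helper below is not part of the repository (its own loader lives in prosody_dataset.py); the parsing details are assumptions based only on the format described above:

# Sketch: parse a corpus file into (source_file, tokens) sentences.
def read_corpus(path):
    sentences = []
    source_file, tokens = None, []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("<file>"):
                # A <file> line names the LibriTTS source and starts a new sentence.
                if tokens:
                    sentences.append((source_file, tokens))
                source_file, tokens = line.split()[-1], []
            elif line:
                # word, discrete prominence, discrete boundary,
                # real-valued prominence, real-valued boundary
                word, prom, boundary, prom_val, boundary_val = line.split("\t")
                tokens.append((word, prom, boundary, prom_val, boundary_val))
    if tokens:
        sentences.append((source_file, tokens))
    return sentences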

Tasks

The dataset can be used for two different prosody prediction tasks: 2-way and 3-way classification. As the dataset is annotated with three labels, 3-way classification can be done directly with the data. For the 2-way classification task, map label 2 to label 1 to get two discrete classes, 0 (non-prominent) and 1 (prominent), as in the sketch below.
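
The mapping is a one-liner; a sketch (labels are treated as the strings that appear in the files, with NA kept for punctuation):

# Sketch: collapse 3-way labels into 2-way labels (2 -> 1), keeping NA.
def to_two_way(label):
    if label == "NA":                 # punctuation stays unlabeled
        return "NA"
    return "1" if label == "2" else label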

System

License: MIT

This repository contains the code for our BERT and BiLSTM models for predicting prosodic prominence from written English text. To use the system, the following dependencies need to be installed:

  • Python 3
  • PyTorch>=1.0
  • argparse
  • pytorch_transformers
  • numpy

To install the requirements run:

pip3 install -r requirements.txt

To download the word embeddings for the LSTM model run:

./download_embeddings.sh

Models included:

  • BERT
  • LSTM
  • Majority class per word
  • See model.py for the complete list

To train the BERT model, run:

# Train BERT-Uncased
python3 main.py \
    --model BertUncased \
    --train_set train_360 \
    --batch_size 32 \
    --epochs 2 \
    --save_path results_bert.txt \
    --log_every 50 \
    --learning_rate 0.00005 \
    --weight_decay 0 \
    --gpu 0 \
    --fraction_of_train_data 1 \
    --optimizer adam \
    --seed 1234

To train the bidirectional LSTM model, run:

# Train 3-layer BiLSTM
python3 main.py \
    --model BiLSTM \
    --train_set train_360 \
    --layers 3 \
    --hidden_dim 600 \
    --batch_size 64 \
    --epochs 5 \
    --save_path results_bilstm.txt \
    --log_every 50 \
    --learning_rate 0.001 \
    --weight_decay 0 \
    --gpu 0 \
    --fraction_of_train_data 1 \
    --optimizer adam \
    --seed 1234

Output

The output of the system is a text file with the following structure:

<word> tab <label> tab <prediction>

Example output:

And    0     0
those  2     2
who    0     0
meet   1     2
in     0     0
the    0     0
great  1     1
hall   1     1
with   0     0
the    0     0
white  2     1
Atlas  2     2
?      NA    NA
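
With this format, accuracy can be computed by comparing the label and prediction columns while skipping punctuation. A minimal sketch (a hypothetical helper, not the repository's own evaluation code):

# Sketch: accuracy over a system output file, ignoring NA rows.
def accuracy(path):
    correct = total = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue                  # skip malformed or empty lines
            word, label, pred = parts
            if label == "NA":             # punctuation is excluded from scoring
                continue
            total += 1
            correct += int(label == pred)
    return correct / total if total else 0.0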

Baseline Results

Main experimental results from the paper using the train-360 dataset.

Model                     Test accuracy (2-way)   Test accuracy (3-way)
BERT-base                 83.2%                   68.6%
3-layer BiLSTM            82.1%                   66.4%
CRF (MarMoT)              81.8%                   66.4%
SVM+GloVe (Minitagger)    80.8%                   65.4%
Majority class per word   80.2%                   62.4%
Majority class            52.0%                   48.0%
Random                    49.0%                   39.5%
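
The majority-class rows are consistent with the test-set label counts in the corpus statistics above, as this quick check shows:

# Sanity check: majority-class baselines from the test-set label counts.
label0, label1, label2 = 43_234, 24_543, 22_286
total = label0 + label1 + label2       # 90,063 words
print((label1 + label2) / total)       # 2-way: ~0.520, always predict "prominent"
print(label0 / total)                  # 3-way: ~0.480, always predict label 0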

Contact

Aarne Talman: [email protected]

References

[1] Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J Weiss, Ye Jia, Zhifeng Chen and Yonghui Wu. 2019. LibriTTS: A corpus derived from LibriSpeech for text-to-speech. arXiv preprint arXiv:1904.02882.

[2] Antti Suni, Juraj Šimko, Daniel Aalto and Martti Vainio. 2017. Hierarchical representation and estimation of prosody using continuous wavelet transform. Computer Speech & Language. Volume 45. Pages 123-136. ISSN 0885-2308. https://doi.org/10.1016/j.csl.2016.11.001.
