The MTG-Jamendo Dataset

DOI

We present the MTG-Jamendo Dataset, a new open dataset for music auto-tagging. It is built using music available at Jamendo under Creative Commons licenses and tags provided by content uploaders. The dataset contains over 55,000 full audio tracks with 195 tags from genre, instrument, and mood/theme categories. We provide elaborated data splits for researchers and report the performance of a simple baseline approach on five different sets of tags: genre, instrument, mood/theme, top-50, and overall.

This repository contains metadata, scripts, instructions on how to download and use the dataset and reproduce baseline results.

A subset of the dataset is used in the Emotion and Theme Recognition in Music Task within MediaEval 2019-2021 (you are welcome to participate).

Structure

Metadata files in data

Pre-processing

  • raw.tsv (56,639) - raw file without postprocessing
  • raw_30s.tsv (55,701) - tracks with a duration of more than 30s
  • raw_30s_cleantags.tsv (55,701) - with tags merged according to tag_map.json
  • raw_30s_cleantags_50artists.tsv (55,609) - with only tags that have at least 50 unique artists
  • tag_map.json - map of tags that we merged
  • tags_top50.txt - list of the top 50 tags
  • autotagging.tsv = raw_30s_cleantags_50artists.tsv - base file for autotagging (after all postprocessing, 195 tags)
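These TSV files share one layout: a header row, then one track per row with fixed columns (track, artist, album, path, duration) followed by a variable number of tag columns. The repository's own commons.read_file handles this properly; the sketch below only illustrates the assumed format, and the read_tracks helper is not part of the repository:

```python
import csv

def read_tracks(path):
    """Read a metadata TSV whose rows are: track, artist, album, path,
    duration, then a variable number of tag columns (assumed layout)."""
    tracks = {}
    with open(path, newline='') as f:
        reader = csv.reader(f, delimiter='\t')
        next(reader)  # skip the header row
        for row in reader:
            # star-unpacking collects the variable-length tag columns
            track_id, artist_id, album_id, mp3_path, duration, *tags = row
            tracks[track_id] = {
                'artist_id': artist_id,
                'album_id': album_id,
                'path': mp3_path,
                'duration': float(duration),
                'tags': tags,
            }
    return tracks
```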

Subsets

  • autotagging_top50tags.tsv (54,380) - only top 50 tags according to tag frequency in terms of tracks
  • autotagging_genre.tsv (55,215) - only tracks with genre tags (95 tags), and only those tags
  • autotagging_instrument.tsv (25,135) - instrument tags (41 tags)
  • autotagging_moodtheme.tsv (18,486) - mood/theme tags (59 tags)

Splits

  • splits folder contains training/validation/testing sets for autotagging.tsv and subsets

Note: A few tags are discarded in the splits to guarantee the same list of tags across all splits. For autotagging.tsv, this results in 55,525 tracks annotated by 87 genre tags, 40 instrument tags, and 56 mood/theme tags available in the splits.

Splits are generated from autotagging.tsv, containing all tags. For each split, the related subsets (top50, genre, instrument, mood/theme) are built by filtering out unrelated tags and tracks without any tags.

Statistics in stats

Top 20 tags per category

Statistics of the number of tracks, albums, and artists per tag, sorted by the number of artists. Each directory contains statistics for the metadata file with the same name. Here are the statistics for the autotagging set. Statistics for the category-based subsets are not kept separately, since they are already included in autotagging.

Using the dataset

Requirements

  • Python 3.7+
  • Create a virtual environment and install the requirements
python -m venv venv
source venv/bin/activate
pip install -r scripts/requirements.txt

The original requirements are kept in requirements-orig.txt

Downloading the data

All audio is distributed in 320kbps MP3 format. In addition, we provide precomputed mel-spectrograms, distributed as NumPy arrays in NPY format. We also provide precomputed statistical features from Essentia (used in the AcousticBrainz music database) in JSON format. The audio files and the NPY/JSON files are split into folders packed into TAR archives. The dataset is hosted online at MTG UPF.

We provide the following data subsets:

  • raw_30s/audio - all available audio for raw_30s.tsv (508 GB)
  • raw_30s/melspecs - mel-spectrograms for raw_30s.tsv (229 GB)
  • autotagging-moodtheme/audio - audio for the mood/theme subset autotagging_moodtheme.tsv (152 GB)
  • autotagging-moodtheme/melspecs - mel-spectrograms for the autotagging_moodtheme.tsv subset (68 GB)

For faster downloads, we host a copy of the dataset on Google Drive. We provide a script to download and validate all files in the dataset. See its help message for more information:

python scripts/download/download.py -h
usage: download.py [-h] [--dataset {raw_30s,autotagging_moodtheme}]
                   [--type {audio,melspecs,acousticbrainz}]
                   [--from {gdrive,mtg}] [--unpack] [--remove]
                   outputdir

Download the MTG-Jamendo dataset

positional arguments:
  outputdir             directory to store the dataset

optional arguments:
  -h, --help            show this help message and exit
  --dataset {raw_30s,autotagging_moodtheme}
                        dataset to download (default: raw_30s)
  --type {audio,melspecs,acousticbrainz}
                        type of data to download (audio, mel-spectrograms,
                        AcousticBrainz features) (default: audio)
  --from {gdrive,mtg}   download from Google Drive (fast everywhere) or MTG
                        (server in Spain, slow) (default: gdrive)
  --unpack              unpack tar archives (default: False)
  --remove              remove tar archives while unpacking one by one (use to
                        save disk space) (default: False)

For example, to download audio for the autotagging_moodtheme.tsv subset, unpack and validate all tar archives:

mkdir /path/to/download
python3 scripts/download/download.py --dataset autotagging_moodtheme --type audio /path/to/download --unpack --remove

Unpacking runs after all tar archives have been downloaded and validated. In case of download errors, re-run the script to download the missing files.

Due to the large size of the dataset, it can be useful to include the --remove flag to save disk space: in this case, tar archives are unpacked and immediately removed one by one.
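Once unpacked, the mel-spectrogram files can be inspected directly with NumPy. In this sketch a synthetic array stands in for a real file, and the 96-band shape and file path are illustrative assumptions, not guarantees about the distributed data:

```python
import numpy as np
import os
import tempfile

# A synthetic array stands in for a real dataset file here; after unpacking,
# the .npy files mirror the audio folder layout (e.g. <outputdir>/56/1376256.npy)
# and each holds a 2-D mel-spectrogram of shape (mel bands, time frames).
demo = np.random.rand(96, 1024).astype(np.float32)
path = os.path.join(tempfile.gettempdir(), 'melspec_demo.npy')
np.save(path, demo)

melspec = np.load(path)
print(melspec.shape, melspec.dtype)  # (96, 1024) float32
```

Loading with np.load is enough to feed the arrays into any downstream model; no dataset-specific decoding is needed.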

Loading data in python

Assuming you are working in the scripts folder:

import commons

input_file = '../data/autotagging.tsv'
tracks, tags, extra = commons.read_file(input_file)

tracks is a dictionary with track_id as key and track data as value:

{
    1376256: {
        'artist_id': 490499,
        'album_id': 161779,
        'path': '56/1376256.mp3',
        'duration': 166.0,
        'tags': [
            'genre---easylistening',
            'genre---downtempo',
            'genre---chillout',
            'mood/theme---commercial',
            'mood/theme---corporate',
            'instrument---piano'
        ],
        'genre': {'chillout', 'downtempo', 'easylistening'},
        'mood/theme': {'commercial', 'corporate'},
        'instrument': {'piano'}
    },
    ...
}

tags contains a mapping of tags to sets of track_ids:

{
    'genre': {
        'easylistening': {1376256, 1376257, ...},
        'downtempo': {1376256, 1376257, ...},
        ...
    },
    'mood/theme': {...},
    'instrument': {...}
}

extra holds information that is useful for formatting the output file; pass it to write_file if you use that function, otherwise you can ignore it.
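With these structures in hand, common queries reduce to dictionary and set operations. A small sketch on toy data shaped like the output above (the track ids and tags are illustrative):

```python
# Toy data mirroring the structures returned by commons.read_file
# (values are illustrative, not real dataset entries).
tags = {
    'genre': {
        'chillout': {1376256},
        'downtempo': {1376257},
    },
    'instrument': {
        'piano': {1376256},
    },
}

# Tracks carrying a given tag are already indexed:
chillout_ids = tags['genre']['chillout']

# Set intersection finds tracks annotated with two tags at once:
chillout_piano = tags['genre']['chillout'] & tags['instrument']['piano']

# Per-tag track counts come directly from the index sizes:
genre_counts = {tag: len(ids) for tag, ids in tags['genre'].items()}

print(chillout_piano, genre_counts)  # {1376256} {'chillout': 1, 'downtempo': 1}
```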

Reproduce postprocessing & statistics

  • Recompute statistics for raw and raw_30s
python scripts/get_statistics.py data/raw.tsv stats/raw
python scripts/get_statistics.py data/raw_30s.tsv stats/raw_30s
  • Clean tags and recompute statistics (raw_30s_cleantags)
python scripts/clean_tags.py data/raw_30s.tsv data/tag_map.json data/raw_30s_cleantags.tsv
python scripts/get_statistics.py data/raw_30s_cleantags.tsv stats/raw_30s_cleantags
  • Filter out tags with low number of unique artists and recompute statistics (raw_30s_cleantags_50artists)
python scripts/filter_fewartists.py data/raw_30s_cleantags.tsv 50 data/raw_30s_cleantags_50artists.tsv --stats-directory stats/raw_30s_cleantags_50artists
  • The autotagging file in data and the folder in stats are symbolic links to raw_30s_cleantags_50artists

  • Visualize top 20 tags per category

python scripts/visualize_tags.py stats/autotagging 20  # generates top20.pdf figure

Recreate subsets

  • Create subset with only top50 tags by number of tracks
python scripts/filter_toptags.py data/autotagging.tsv 50 data/autotagging_top50tags.tsv --stats-directory stats/autotagging_top50tags --tag-list data/tags/tags_top50.txt
python scripts/split_filter_subset.py data/splits autotagging autotagging_top50tags --subset-file data/tags/top50.txt
  • Create subset with only mood/theme tags (or other category: genre, instrument)
python scripts/filter_category.py data/autotagging.tsv mood/theme data/autotagging_moodtheme.tsv --tag-list data/tags/moodtheme.txt
python scripts/split_filter_subset.py data/splits autotagging autotagging_moodtheme --category mood/theme 

Reproduce experiments

  • Preprocessing
python scripts/baseline/get_npy.py run 'your_path_to_spectrogram_npy'
  • Train
python scripts/baseline/main.py --mode 'TRAIN' 
  • Test
python scripts/baseline/main.py --mode 'TEST' 
optional arguments:
  --batch_size                batch size (default: 32)
  --mode {'TRAIN', 'TEST'}    train or test (default: 'TRAIN')
  --model_save_path           path to save trained models (default: './models')
  --audio_path                path of the dataset (default='/home')
  --split {0, 1, 2, 3, 4}     split of data to use (default=0)
  --subset {'all', 'genre', 'instrument', 'moodtheme', 'top50tags'}
                              subset to use (default='all')

Results

Related Datasets

The MTG-Jamendo Dataset can be linked to related datasets tailored to specific applications.

Music Classification Annotations

The Music Classification Annotations contains annotations for the split-0 test set according to the taxonomies of 15 existing music classification datasets including genres, moods, danceability, voice/instrumental, gender, and tonal/atonal. These labels are suitable for training individual classifiers or learning everything in a multi-label setup (auto-tagging). Most of the taxonomies were annotated by three different annotators. We provide the subset of annotations with perfect inter-annotator agreement ranging from 411 to 8756 tracks depending on the taxonomy.

Research challenges using the dataset

Citing the dataset

Please consider citing the following publication when using the dataset:

Bogdanov, D., Won, M., Tovstogan, P., Porter, A., & Serra, X. (2019). The MTG-Jamendo Dataset for Automatic Music Tagging. Machine Learning for Music Discovery Workshop, International Conference on Machine Learning (ICML 2019).

@conference {bogdanov2019mtg,
    author = "Bogdanov, Dmitry and Won, Minz and Tovstogan, Philip and Porter, Alastair and Serra, Xavier",
    title = "The MTG-Jamendo Dataset for Automatic Music Tagging",
    booktitle = "Machine Learning for Music Discovery Workshop, International Conference on Machine Learning (ICML 2019)",
    year = "2019",
    address = "Long Beach, CA, United States",
    url = "http://hdl.handle.net/10230/42015"
}

An expanded version of the paper describing the dataset and the baselines will be announced later.

License

  • The code in this repository is licensed under Apache 2.0
  • The metadata is licensed under a CC BY-NC-SA 4.0 license.
  • The audio files are licensed under Creative Commons licenses, see individual licenses for details in audio_licenses.txt.

Copyright 2019-2022 Music Technology Group

Acknowledgments

This work was funded by the predoctoral grant MDM-2015-0502-17-2 from the Spanish Ministry of Economy and Competitiveness linked to the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502).

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765068.

This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 688382 "AudioCommons".
