
cifkao / ismir2019-music-style-translation

Licence: other
The code for the ISMIR 2019 paper “Supervised symbolic music style translation using synthetic data”.

Programming Languages

Python
139335 projects - #7 most used programming language
Jupyter Notebook
11667 projects
Shell
77523 projects

Projects that are alternatives of or similar to ismir2019-music-style-translation

cunet
Control mechanisms added to the U-Net architecture for multi-instrument source separation
Stars: ✭ 36 (+33.33%)
Mutual labels:  music-information-retrieval, ismir
Gist
A C++ Library for Audio Analysis
Stars: ✭ 244 (+803.7%)
Mutual labels:  music-information-retrieval
Aca Code
Matlab scripts accompanying the book "An Introduction to Audio Content Analysis" (www.AudioContentAnalysis.org)
Stars: ✭ 67 (+148.15%)
Mutual labels:  music-information-retrieval
Omr Datasets
Collection of datasets used for Optical Music Recognition
Stars: ✭ 158 (+485.19%)
Mutual labels:  music-information-retrieval
Mad Twinnet
The code for the MaD TwinNet.
Stars: ✭ 99 (+266.67%)
Mutual labels:  music-information-retrieval
Awesome Deep Learning Music
List of articles related to deep learning applied to music
Stars: ✭ 2,195 (+8029.63%)
Mutual labels:  music-information-retrieval
Vocal Melody Extraction
Source code for "Vocal melody extraction with semantic segmentation and audio-symbolic domain transfer learning".
Stars: ✭ 44 (+62.96%)
Mutual labels:  music-information-retrieval
ForestCoverChange
Detecting and Predicting Forest Cover Change in Pakistani Areas Using Remote Sensing Imagery
Stars: ✭ 23 (-14.81%)
Mutual labels:  sequence-to-sequence
Tutorial
Tutorial covering Open Source tools for Source Separation.
Stars: ✭ 223 (+725.93%)
Mutual labels:  music-information-retrieval
Audioowl
Fast and simple music and audio analysis using RNN in Python 🕵️‍♀️ 🥁
Stars: ✭ 151 (+459.26%)
Mutual labels:  music-information-retrieval
Muspy
A toolkit for symbolic music generation
Stars: ✭ 151 (+459.26%)
Mutual labels:  music-information-retrieval
Fma
FMA: A Dataset For Music Analysis
Stars: ✭ 1,391 (+5051.85%)
Mutual labels:  music-information-retrieval
Dali
DALI: a large Dataset of synchronised Audio, LyrIcs and vocal notes.
Stars: ✭ 193 (+614.81%)
Mutual labels:  music-information-retrieval
Symbolic Musical Datasets
🎹 symbolic musical datasets
Stars: ✭ 79 (+192.59%)
Mutual labels:  music-information-retrieval
seq2seq-pytorch
Sequence to Sequence Models in PyTorch
Stars: ✭ 41 (+51.85%)
Mutual labels:  sequence-to-sequence
Music Synthesis With Python
Music Synthesis with Python talk, originally given at PyGotham 2017.
Stars: ✭ 48 (+77.78%)
Mutual labels:  music-information-retrieval
Essentia
C++ library for audio and music analysis, description and synthesis, including Python bindings
Stars: ✭ 1,985 (+7251.85%)
Mutual labels:  music-information-retrieval
Omnizart
Omniscient Mozart, being able to transcribe everything in the music, including vocal, drum, chord, beat, instruments, and more.
Stars: ✭ 165 (+511.11%)
Mutual labels:  music-information-retrieval
Neural-Chatbot
A Neural Network based Chatbot
Stars: ✭ 68 (+151.85%)
Mutual labels:  sequence-to-sequence
ACA-Slides
Slides and Code for "An Introduction to Audio Content Analysis," also taught at Georgia Tech as MUSI-6201. This introductory course on Music Information Retrieval is based on the text book "An Introduction to Audio Content Analysis", Wiley 2012/2022
Stars: ✭ 84 (+211.11%)
Mutual labels:  music-information-retrieval

Supervised symbolic music style translation

This is the code for the ISMIR 2019 paper ‘Supervised symbolic music style translation using synthetic data’. If you use the code in your research, please cite the paper as:

Ondřej Cífka, Umut Şimşekli, Gaël Richard. “Supervised Symbolic Music Style Translation Using Synthetic Data”, 20th International Society for Music Information Retrieval Conference, Delft, The Netherlands, 2019. doi:10.5281/zenodo.3527878.

Check out the 📻 example outputs and the accompanying 📝 blog post, which summarizes the paper. You might also be interested in our more recent paper [🧑‍💻 code, 🌎 website] on one-shot accompaniment style transfer.

The repository contains the following directories:

  • code – code for training and evaluating models
  • experiments – configuration files for the models from the paper
  • data – data preparation recipes

You can either download the trained models, or train your own by following the steps below. If you encounter any problems, please feel free to open an issue.

Installation

Clone the repository and make sure you have Python 3.6 or later. Then run the following commands; a combined example is shown after the steps.

  1. (optional) To make sure you have the right versions of the most important packages, run:

    pip install -r requirements.txt

    Alternatively, if you use conda, you can create your environment using

    conda env create -f environment.yml

    This will also install the correct versions of the CUDA and CuDNN libraries.

    If you wish to use different (more recent) package versions, you may skip this step; the code should still work.

  2. Install the package with:

    pip install './code[gpu]'

    Or for the non-GPU version (only if you skipped step 1):

    pip install './code[nogpu]'
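
Putting the steps together, a typical GPU install via conda might look like this (a sketch only; the repository URL is assumed from the project name, and the environment name is whatever environment.yml defines):

git clone https://github.com/cifkao/ismir2019-music-style-translation.git
cd ismir2019-music-style-translation
conda env create -f environment.yml   # installs the pinned packages, CUDA and cuDNN
conda activate <env-name>             # use the name defined in environment.yml
pip install './code[gpu]'             # or './code[nogpu]' on a CPU-only machine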

Data

See the data README for how to prepare the data.

Training a model

The scripts for training the models are in the ismir2019_cifka.models package.

The experiments directory has a subdirectory for each model from the paper. The model.yaml file in each directory contains all the hyperparameters and other settings required to train and use the model; the first line also tells you what type of model it is (i.e. seq2seq_style or roll2seq_style). For example, to train the all2bass model, run the following command inside the experiments directory:

python -m ismir2019_cifka.models.roll2seq_style --logdir all2bass train

You may need to adjust the paths in model.yaml to point to your dataset.
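
The same pattern applies to the other experiment directories (a sketch; MODEL_DIR stands for any subdirectory of experiments, and the module after ismir2019_cifka.models must match the model type named on the first line of its model.yaml):

cd experiments
head -n 1 MODEL_DIR/model.yaml   # shows the model type, e.g. roll2seq_style
python -m ismir2019_cifka.models.roll2seq_style --logdir MODEL_DIR train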

Running a model

Before running a trained model on some MIDI files, we need to use the chop_midi script to chop them up into segments and save them in the expected format (see the data README for more information), e.g.:

python -m ismir2019_cifka.data.chop_midi \
    --no-drums \
    --force-tempo 60 \
    --bars-per-segment 8 \
    --include-segment-id \
    song1.mid song2.mid songs.pickle
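
To sanity-check the result (an optional sketch; the exact structure of the segments is documented in the data README):

python -c "import pickle; segments = pickle.load(open('songs.pickle', 'rb')); print(type(segments), len(segments))"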

Then we can run the model, providing the input file, the output file and the target style. For example:

python -m ismir2019_cifka.models.roll2seq_style --logdir all2bass run songs.pickle output.pickle ZZREGGAE

To listen to the outputs, we need to convert them back to MIDI files, which involves time-stretching the music from 60 BPM to the desired tempo, assigning an instrument, and concatenating the segments of each song:

python -m ismir2019_cifka.data.notes2midi \
   --instrument 'Fretless Bass' \
   --stretch 60:115 \
   --group-by-name \
   --time-unit 4 \
   output.pickle outputs

Evaluation

To reproduce the results on the Bodhidharma dataset, first download the trained models and prepare the dataset, then change to the experiments directory and run ./evaluate_bodhidharma.sh. Note that this runs each model many times on the entire dataset (once per target style), so you may want to start with only a subset of the models or styles, or run several of them in parallel. The results will be stored in the results subdirectory; use the evaluation.ipynb Jupyter notebook to load and plot them.

To compute the metrics on your own data, use python -m ismir2019_cifka.evaluate directly. To better understand all the arguments, see how they are used in evaluate_bodhidharma.sh. The tricky ones are:

  • --data-prefix: where to look for the model outputs inside the model directory; for example, if you pass --data-prefix outputs/test_, then the outputs of model model1 in style A will be taken from model1/outputs/test_A.pickle
  • --style-profile-dir: a directory containing JSON files with reference style profiles; you can generate these using python -m ismir2019_cifka.eval.style_profile

Alternatively, you can import the evaluation metrics from the ismir2019_cifka.eval package and use them from your own code.
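
For example, to list what the package exposes (a sketch; check the package source for the actual metric names and signatures):

python -c "import ismir2019_cifka.eval as ev; print([name for name in dir(ev) if not name.startswith('_')])"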

Acknowledgment

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765068.

Copyright notice

Copyright 2019 Ondřej Cífka of Télécom Paris, Institut Polytechnique de Paris.
All rights reserved.
