cifkao / groove2groove

License: BSD-3-Clause
Code for "Groove2Groove: One-Shot Music Style Transfer with Supervision from Synthetic Data"

Groove2Groove

This is the source code for the IEEE TASLP paper:

Ondřej Cífka, Umut Şimşekli and Gaël Richard. "Groove2Groove: One-Shot Music Style Transfer with Supervision from Synthetic Data." IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:2638–2650, 2020. doi: 10.1109/TASLP.2020.3019642.

If you use the code in your research, please reference the paper.
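For convenience, here is one possible BibTeX entry, constructed from the citation above:

```bibtex
@article{cifka2020groove2groove,
  author  = {C{\'i}fka, Ond{\v{r}}ej and {\c{S}}im{\c{s}}ekli, Umut and Richard, Ga{\"e}l},
  title   = {{Groove2Groove}: One-Shot Music Style Transfer with Supervision from Synthetic Data},
  journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  volume  = {28},
  pages   = {2638--2650},
  year    = {2020},
  doi     = {10.1109/TASLP.2020.3019642}
}
```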

Links

🔬 Paper postprint [pdf]
🎹 Supplementary website with examples and a live demo
🎵 Examples on YouTube
📁 MIDI file dataset, containing almost 3000 different styles
🤖 Band-in-a-Box automation scripts for generating the dataset
🧠 Model parameters (to be extracted into experiments)

Looking around

  • code: the main codebase (a Python package called groove2groove)
  • data: scripts needed to prepare the datasets
  • experiments: experiment configuration files
  • experiments/eval: evaluation code (see the eval.ipynb notebook)
  • api: an API server for the web demo

Installation

Clone the repository, then run the following commands.

  1. Install the dependencies using one of the following options:

    • Create a new environment using conda:

      conda env create -f environment.yml

      This will also install the correct versions of Python and the CUDA and CuDNN libraries.

    • Using pip (a virtual environment is recommended):

      pip install -r requirements.txt

      You will need Python 3.6, since the code depends on a version of TensorFlow that is not available from PyPI for more recent Python versions.

    The code has been tested with TensorFlow 1.12, CUDA 9.0 and CuDNN 7.6.0. Other versions of TensorFlow (1.x) may work too.

  2. Install the package with:

    pip install './code[gpu]'

Usage

The main entry point of the package is the groove2groove.models.roll2seq_style_transfer module, which takes care of training and running the model. Run python -m groove2groove.models.roll2seq_style_transfer -h to see the available command line arguments.

The train command runs the training:

python -m groove2groove.models.roll2seq_style_transfer --logdir $LOGDIR train

Replace $LOGDIR with the model directory, which must contain the model.yaml configuration file (e.g. one of the directories under experiments).

To run a trained model on a single pair of MIDI files, use the run-midi command, e.g.:

python -m groove2groove.models.roll2seq_style_transfer --logdir $LOGDIR run-midi \
    --sample --softmax-temperature 0.6 \
    content.mid style.mid output.mid

To run it on a whole pre-processed dataset (e.g. the one in data/bodhidharma), use the run-test command, e.g.:

python -m groove2groove.models.roll2seq_style_transfer --logdir $LOGDIR run-test \
    --sample --softmax-temperature 0.6 --batch-size 128 \
    content.db style.db keypairs.tsv output.db 

Here, each line of keypairs.tsv contains a key from content.db and a key from style.db, specifying one input pair. Note that content.db and style.db may be the same file.
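Since the key-pair file is plain TSV, it is easy to generate programmatically. A minimal sketch that pairs every content key with every style key (the key names below are hypothetical; the actual keys are defined by the preprocessing scripts in data/):

```python
import itertools

def write_keypairs(content_keys, style_keys, path):
    """Write a key-pair file for run-test: one tab-separated
    (content key, style key) pair per line."""
    with open(path, "w") as f:
        for c, s in itertools.product(content_keys, style_keys):
            f.write(f"{c}\t{s}\n")

# Hypothetical keys for illustration only.
write_keypairs(["songA", "songB"], ["style1"], "keypairs.tsv")
```

When content.db and style.db are the same file, the same helper can be used with the same key list for both arguments, e.g. to re-render every piece in every style of the dataset.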

Acknowledgment

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765068.

Copyright notice

Copyright 2019–2020 Ondřej Cífka of Télécom Paris, Institut Polytechnique de Paris.
All rights reserved.
