Js-Mim / rl_singing_voice

Unsupervised Interpretable Representation Learning for Singing Voice Separation

This repository contains the PyTorch (1.4) implementation of our method for representation learning. The method uses (convolutional) neural networks to learn, from music signals, representations that can be used for singing voice separation. The key idea is that the proposed method employs cosine functions at the decoding stage. The resulting representation is non-negative and real-valued, and it can be employed fairly easily by current supervised models for music source separation. The proposed method is inspired by SincNet and DDSP.
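
As a rough, self-contained illustration of that decoding idea (a sketch of ours, not the repository's code — the frame length, component count, and frequency values below are made up), each time frame can be synthesized as a non-negatively weighted sum of cosine functions:

```python
import numpy as np

def cosine_decode(z, freqs, phases, frame_len=1024):
    """Toy decoder: synthesize each frame as a weighted sum of cosines.

    z      : (n_frames, n_components) non-negative representation
    freqs  : (n_components,) normalized frequencies in [0, 0.5)
    phases : (n_components,) phase offsets
    """
    t = np.arange(frame_len)
    # One cosine per component: (n_components, frame_len)
    basis = np.cos(2 * np.pi * freqs[:, None] * t[None, :] + phases[:, None])
    return z @ basis  # (n_frames, frame_len) time-domain frames

rng = np.random.default_rng(0)
n_comp = 64
z = np.abs(rng.standard_normal((10, n_comp)))  # non-negative by construction
freqs = rng.uniform(0.0, 0.5, n_comp)
frames = cosine_decode(z, freqs, np.zeros(n_comp))
print(frames.shape)  # (10, 1024)
```

Because the weights in z are non-negative and the cosine basis is real, the learned representation stays non-negative and real-valued, which is what makes it directly usable by magnitude-domain separation models.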

Authors

S.I. Mimilakis, K. Drossos, G. Schuller

What's inside?

  • Code for the neural architectures used in our study and their corresponding minimization objectives (nn_modules/)
  • Code for performing the unsupervised training (scripts/exp_rl_*)
  • Code for reconstructing the signal(s) (scripts/exp_fe_test.py)
  • Code for inspecting the outcome(s) of the training (scripts/make_plots.py)
  • Code for visualizing loss functions, reading/writing audio files, and creating batches (tools/)
  • Perks (unreported implementations/routines)
    • The discriminator-like objective, as a proxy to mutual information, reported here
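
As a generic, hypothetical illustration of how a discriminator-like objective can serve as a proxy to mutual information (this sketch uses the Jensen-Shannon estimator popularized by Deep-InfoMax-style methods; the exact variant in the repository is an assumption, see the linked report): a discriminator T scores matched pairs drawn from the joint distribution against mismatched pairs drawn from the product of marginals.

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)  # numerically stable log(1 + exp(x))

def jsd_mi_proxy(t_joint, t_marginal):
    """Jensen-Shannon proxy to mutual information.

    t_joint    : discriminator scores on matched (input, code) pairs
    t_marginal : scores on mismatched pairs (product of marginals)
    Higher values mean the discriminator separates the two, i.e. the
    code carries more information about the input.
    """
    return -softplus(-t_joint).mean() - softplus(t_marginal).mean()

confident = jsd_mi_proxy(np.full(8, 5.0), np.full(8, -5.0))  # near 0
chance = jsd_mi_proxy(np.zeros(8), np.zeros(8))              # -2*log(2)
print(confident > chance)  # True
```

Maximizing this proxy with respect to both the encoder and the discriminator encourages representations that are predictive of their inputs, without requiring any source labels.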

What's not inside!

How to use

Training

  1. Download the dataset and declare the path of the downloaded dataset in tools/helpers.py
  2. Apply any desired changes to the model by tweaking the parameters in settings/rl_experiment_settings.py
  3. Execute scripts/exp_rl_vanilla.py for the basic method
  4. Execute scripts/exp_rl_sinkhorn.py for the extended method, using Sinkhorn distances

Testing

  1. Download the dataset and declare the path of the downloaded dataset in tools/helpers.py
  2. Download the results and place them under the results folder
  3. Load up the desired model by declaring the experiment id in settings/rl_experiment_settings.py (e.g. r-mcos8)
  4. Execute scripts/exp_fe_test.py (some arguments for plotting and file writing are necessary)

Reference

If you find this code useful for your research, please cite our papers:

  @inproceedings{mim20_uirl_eusipco,
    author={S. I. Mimilakis and K. Drossos and G. Schuller},
    title={Unsupervised Interpretable Representation Learning for Singing Voice Separation},
    year={2020},
    booktitle={Proceedings of the 27th European Signal Processing Conference (EUSIPCO 2020)}
  }

  @misc{mimilakis2020revisiting,
    title={Revisiting Representation Learning for Singing Voice Separation with Sinkhorn Distances},
    author={S. I. Mimilakis and K. Drossos and G. Schuller},
    year={2020},
    eprint={2007.02780},
    archivePrefix={arXiv},
    primaryClass={cs.SD}
  }

Acknowledgements

Stylianos Ioannis Mimilakis is supported in part by the German Research Foundation (AB 675/2-1, MU 2686/11-1).

License

MIT
