
ValentinVignal / midiGenerator

Licence: other
Generate midi file with deep neural network 🎶

Programming Languages

python
139335 projects - #7 most used programming language
Jupyter Notebook
11667 projects

Projects that are alternatives of or similar to midiGenerator

osmid
osmid is a tool to bridge MIDI and OSC. It is currently in use in Sonic Pi
Stars: ✭ 63 (+110%)
Mutual labels:  midi
midiomatic
A collection of MIDI filter, generator and processor plugins
Stars: ✭ 23 (-23.33%)
Mutual labels:  midi
midiplayer
Play MIDI file right in your browser with the WebMIDIAPI
Stars: ✭ 53 (+76.67%)
Mutual labels:  midi
PySprint
Recreation of the Atari ST port of Super Sprint with Pygame
Stars: ✭ 19 (-36.67%)
Mutual labels:  pygame
teensy-midi-looper
teensy midi loop recorder
Stars: ✭ 30 (+0%)
Mutual labels:  midi
WeatherPi TFT
a weather display for a raspberry pi and a TFT display written in python3 and pygame
Stars: ✭ 66 (+120%)
Mutual labels:  pygame
MIDI.jl
A Julia library for handling MIDI files
Stars: ✭ 55 (+83.33%)
Mutual labels:  midi
AUSequencer
(WIP) MIDI Sequencer Audio Unit
Stars: ✭ 26 (-13.33%)
Mutual labels:  midi
music embedding
A package for representing music data based on music theory
Stars: ✭ 19 (-36.67%)
Mutual labels:  midi
AUParamsApp
An AUv3 MIDI plugin. See the blog post
Stars: ✭ 24 (-20%)
Mutual labels:  midi
Sudoku-Solver
🎯 This Python-based Sudoku Solver utilizes the PyGame Library and Backtracking Algorithm to visualize and solve Sudoku puzzles efficiently. With its intuitive interface, users can input and interact with the Sudoku board, allowing for a seamless solving experience.
Stars: ✭ 51 (+70%)
Mutual labels:  pygame
Python-Games
A collection of small python games made by me using pygame and tkinter libraries
Stars: ✭ 121 (+303.33%)
Mutual labels:  pygame
Miles
Swift Playground that creates jazz improvisations (WWDC 2018)
Stars: ✭ 31 (+3.33%)
Mutual labels:  midi
from-data-to-sound
🎵 Simple Node.js script for transforming data to a MIDI file
Stars: ✭ 33 (+10%)
Mutual labels:  midi
ManosOsc
(Eyebeam #13 of 13) Output OSC, MIDI, and After Effects/Maya animation scripts from the Leap Motion controller.
Stars: ✭ 53 (+76.67%)
Mutual labels:  midi
Ensembles
A digital arranger workstation powered by FluidSynth
Stars: ✭ 312 (+940%)
Mutual labels:  midi
minesweeper
💣 The classic minesweeper game in python
Stars: ✭ 34 (+13.33%)
Mutual labels:  pygame
flutter midi
Midi Playback in Flutter
Stars: ✭ 52 (+73.33%)
Mutual labels:  midi
wui
Collection of GUI widgets for the web
Stars: ✭ 44 (+46.67%)
Mutual labels:  midi
arduino-midi-footswitch
USB MIDI Pedal built with Arduino
Stars: ✭ 24 (-20%)
Mutual labels:  midi

Midi Generator

Introduction

This project aims to train different neural networks on MIDI datasets to generate music.

Context

This repository contains the code of my Master's thesis at the School of Computing of the National University of Singapore: "Music completion with deep probabilistic models".

The report, abstract, and presentation slides are available here

Files

bayesian-opt.py

python bayesian-opt.py

It is used to find the best hyperparameters to train a model.
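
For context, Bayesian optimization treats the validation loss as a black-box function of the hyperparameters and searches it with a probabilistic surrogate model. The minimal sketch below uses scikit-optimize; the search space, parameter names, and dummy objective are only illustrative, not the ones used by bayesian-opt.py:

# Illustrative sketch only: the real search space and objective live in bayesian-opt.py.
from skopt import gp_minimize
from skopt.space import Real, Integer

space = [
    Real(1e-4, 1e-2, prior='log-uniform', name='lr'),   # hypothetical learning-rate range
    Integer(16, 128, name='batch_size'),                # hypothetical batch-size range
]

def objective(params):
    lr, batch_size = params
    # Stand-in for "train the model and return the validation loss";
    # in practice this would launch a full training run.
    return (lr - 3e-3) ** 2 + abs(batch_size - 64) / 1000.0

result = gp_minimize(objective, space, n_calls=20, random_state=0)
print('Best hyperparameters found:', result.x)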

compute_data.py

python compute_data.py

It is used to process the .mid files of the dataset and create the numpy arrays used to train a model.
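
At a high level, this means reading each .mid file and turning it into a piano-roll-style tensor. The snippet below is a minimal illustration using pretty_midi; the repository's own encoding (note range, mono encoding, transposition) is more involved, and the folder and output names are hypothetical:

import glob
import numpy as np
import pretty_midi

arrays = []
for path in glob.glob('Dataset/MyDataset/*.mid'):     # hypothetical dataset folder
    pm = pretty_midi.PrettyMIDI(path)
    roll = pm.get_piano_roll(fs=4)                    # (128, time) piano roll, 4 frames per second
    arrays.append((roll > 0).astype(np.float32).T)    # binarize and reshape to (time, 128)

np.savez('dataset.npz', *arrays)                      # hypothetical output file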

generate.py

python generate.py

Loads a trained model and generates music from it.
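
The general idea is autoregressive sampling: feed the model a window of past steps, predict the next step, append it, and slide the window. The sketch below shows only this idea with a generic Keras model; the actual model interface, input shapes, and decoding used by generate.py differ:

import numpy as np
from tensorflow.keras.models import load_model

model = load_model('saved_models/my_model.h5')   # hypothetical path to a trained model
x = np.zeros((1, 8, 88), dtype=np.float32)       # hypothetical (batch, steps, notes) seed

generated = []
for _ in range(64):                               # generate 64 new steps
    probs = model.predict(x, verbose=0)[0, -1]    # predicted probabilities for the next step
    step = (probs > 0.5).astype(np.float32)       # threshold into a binary piano-roll frame
    generated.append(step)
    x = np.concatenate([x[:, 1:], step[None, None, :]], axis=1)  # slide the input window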

train.py

python train.py

Creates or loads a model, trains it, and generates music at the end.
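
Conceptually this is a standard Keras training loop over the pre-computed tensors. The sketch below uses random data and a stand-in architecture; the real models, losses, and callbacks are defined in the repository, and the epoch, batch-size, learning-rate, and validation settings mirror the -e, -b, --lr, and --validation options described later:

import numpy as np
from tensorflow.keras import layers, models, optimizers

x = np.random.rand(32, 8, 88).astype(np.float32)   # stand-in (songs, steps, notes) tensors
model = models.Sequential([
    layers.Input(shape=(8, 88)),
    layers.LSTM(64, return_sequences=True),          # stand-in for the project's architectures
    layers.Dense(88, activation='sigmoid'),
])
model.compile(optimizer=optimizers.Adam(learning_rate=1e-3), loss='binary_crossentropy')
model.fit(x, x, epochs=10, batch_size=4, validation_split=0.1)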

How to use it

Setup

First, get a MIDI dataset and save all the songs in a Dataset folder. This folder is expected to be located at ../../../../../storage1/valentin/. To change this location, modify the file src/GlobalVariables/path.py. The MIDI files have to be in a single folder, and the name of that folder is the name of the dataset.
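
For illustration, the path module might look something like the following; the actual variable name in src/GlobalVariables/path.py may differ, so check the file itself before editing:

# src/GlobalVariables/path.py (the variable name shown here is only illustrative)
# Points to the folder that contains the dataset folders.
dataset_path = '../../../../../storage1/valentin/'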

Get the information of the dataset

To get information about the dataset, run

python debug/check_dataset.py <DATANAME>

Main options:

  • data is the name of the dataset
  • -h or --help prints the options of the file
  • --notes-range the range of the notes. Default is 0:88
  • --instruments the instruments separated by a comma (ex: Piano,Trombone)
  • --bach for the Bach dataset (4 piano voices)
  • --mono to use the mono encoding for monophonic music
  • --no-transpose to not transpose the songs to C major or A minor

The script will print the number of available songs and the notes range to specify so as not to lose any data.
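
For example, for a hypothetical monophonic piano dataset stored in a folder named MyDataset, the call could look like:

python debug/check_dataset.py MyDataset --instruments Piano --mono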

Compute the dataset

To extract the tensors from a MIDI dataset and save them, run

python compute_data.py <DATANAME>

The main options are:

  • data is the name of the dataset
  • -h or --help will print the options of the file
  • --notes-range to specify the range of the notes to consider
  • --instruments the instruments separated by a comma (ex: Piano,Trombone)
  • --bach for the Bach dataset (4 piano voices)
  • --mono to use the mono encoding for monophonic music
  • --no-transpose to not transpose the songs to C major or A minor
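
For example, reusing the hypothetical MyDataset folder and a notes range reported by check_dataset (the 21:108 value below is only illustrative):

python compute_data.py MyDataset --instruments Piano --mono --notes-range 21:108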

Train and use a model

To train, evaluate, and create songs with a model, run

python train.py

The main options are:

  • -h or --help to print the options of the file
  • -d or --data the name of the dataset
  • --data-test the name of the test dataset
  • -m or --model to specify the model to use (modelName,nameParam,nbSteps)
  • -l or --load to load the id of the model (name-modelName,nameParam,nbSteps-epochs-id)
  • --mono to use a monophonic dataset
  • --no-transposed to use a dataset that has not been transposed (to C major or A minor)
  • -e or --epochs number of epochs
  • --gpu specifies the GPU to use
  • -b or --batch the batch size
  • --lr specifies the learning rate
  • --no-rpoe to not use the RPoE layer (Recurrent Product of Experts)
  • --validation the proportion of the data to use as a validation set
  • --predict-offset the offset of prediction (use 2 for the band player script)
  • --evaluate to evaluate the model on the test dataset
  • --generate do the generate task
  • --generate-fill do the fill task
  • --redo-generate do the redo task
  • --noise value of the noise in the inputs of the training data
  • -n or --name to give a name to the model
  • --use-binary to use the sigmoid loss for note_continue for monophonic music

To get the best results from the report, run:

python train.py -d DATANAME --data-test DATATESTNAME -m MRRMVAE,1,8 --use-binary --mono --evaluate --generate --generate-fill --redo-generate

Band Player

This script loads a trained model and uses it to play along with the user in real time.

python controller.py

The main options are:

  • -h or --help to print the options of the file
  • --inst to specify which instrument sound the user wants
  • --tempo to specify the tempo
  • -l or --load to load the id of the model (name-modelName,nameParam,nbSteps-epochs-id)
  • --played-voice the voice played by the user in the band
  • --inst-mask the mask used to choose the voices the user wants to play with (e.g. to play the first voice along with the second and last voices for a model with 5 voices: [1,1,0,0,1])
  • --nb-steps-shown the number of steps shown on the piano roll plot
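
For example, a hypothetical invocation for a 5-voice model (the model id and all option values below are only illustrative):

python controller.py --load name-MRRMVAE,1,8-100-0 --inst Piano --tempo 120 --played-voice 0 --inst-mask [1,1,0,0,1]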