
davda54 / generating-music

License: GPL-3.0
🎷 Artificial Composition of Multi-Instrumental Polyphonic Music

Programming Languages

C#
18002 projects
Python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to generating-music

Music-generation-cRNN-GAN
cRNN-GAN to generate music by training on instrumental music (midi)
Stars: ✭ 38 (+35.71%)
Mutual labels:  midi, music-generation
python-twelve-tone
🎶 12-tone matrix to generate dodecaphonic melodies 🎶
Stars: ✭ 68 (+142.86%)
Mutual labels:  midi, music-generation
MidiTok
A convenient MIDI / symbolic music tokenizer for Deep Learning networks, with multiple strategies 🎶
Stars: ✭ 180 (+542.86%)
Mutual labels:  midi, music-generation
genmusic
Generative Music- a stochastic modal music generator
Stars: ✭ 17 (-39.29%)
Mutual labels:  music-generation, music-generator
facet
Facet is a live coding system for algorithmic music
Stars: ✭ 72 (+157.14%)
Mutual labels:  midi, music-generation
LabMidi
Midi IN and OUT. Standard midi file parser and player. Midi Softsynth implementation.
Stars: ✭ 38 (+35.71%)
Mutual labels:  midi
NegativeHarmonizer
A python tool to invert the tonality (a.k.a negative harmony) of midi notation
Stars: ✭ 23 (-17.86%)
Mutual labels:  midi
syn2midi
Create pianobooster midi from youtube piano video lessons (Synthesia).
Stars: ✭ 42 (+50%)
Mutual labels:  midi
JZZ-midi-SMF
Standard MIDI Files: read / write / play
Stars: ✭ 28 (+0%)
Mutual labels:  midi
Focusrite-Midi-Control
APP DOWNLOAD LINK (Mac OSX)
Stars: ✭ 24 (-14.29%)
Mutual labels:  midi
Arduino-BLE-MIDI
MIDI over Bluetooth Low Energy (BLE-MIDI) 1.0 for Arduino
Stars: ✭ 133 (+375%)
Mutual labels:  midi
MIDIKit
🎹 Modern multi-platform Swift CoreMIDI wrapper with MIDI 2.0 support.
Stars: ✭ 26 (-7.14%)
Mutual labels:  midi
tune
Make xenharmonic music and create synthesizer tuning files for microtonal scales.
Stars: ✭ 73 (+160.71%)
Mutual labels:  midi
elektron-sysex-to-midi
A simple tool for generating MIDI-files based on Elektron MachineDrum sysex dumps.
Stars: ✭ 33 (+17.86%)
Mutual labels:  midi
kpop midi
MIDI transcriptions of kpop songs. Most examples focus on piano chord progressions.
Stars: ✭ 22 (-21.43%)
Mutual labels:  midi
MidiAnimImporter
A custom importer that imports a .mid file (SMF; Standard MIDI File) into an animation clip.
Stars: ✭ 69 (+146.43%)
Mutual labels:  midi
midi-grid
DIY midi controller project
Stars: ✭ 60 (+114.29%)
Mutual labels:  midi
midi degradation toolkit
A toolkit for generating datasets of midi files which have been degraded to be 'un-musical'.
Stars: ✭ 29 (+3.57%)
Mutual labels:  midi
auapp
Simple example of an AUv3 MIDI app
Stars: ✭ 18 (-35.71%)
Mutual labels:  midi
android-midisuite
Android MIDI test programs and examples.
Stars: ✭ 123 (+339.29%)
Mutual labels:  midi

Artificial Composition of Multi-Instrumental Polyphonic Music

We have tried to create a generative model capable of composing polyphonic music. In contrast to other work in the field, our goal was to generate music with multiple instruments playing simultaneously, covering a broader musical space.

The generative model consists of three modules based on LSTM neural networks; a lot of effort goes into reducing the high complexity of the multi-instrumental music representation through a thorough musical analysis.

(YouTube sample of the generated music)

 

Musical Analysis

We need to reduce the complexity of the input MIDI data without significantly compromising the possible musical expression. Training a neural network without this reduction would be infeasible for current state-of-the-art neural architectures because the space of raw MIDI files is too large. The analysis consists of three main procedures:

Meter detection

Capturing the rhythmic structure correctly is difficult, as the rhythm depends on time, which is inherently continuous. To discretize time with as little loss of information as possible, we first find the underlying metrical structure and then normalize the tempo according to it.
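The tempo-normalization step can be sketched as snapping note onsets to subdivisions of a detected beat grid. This is only an illustration of the idea, not the thesis code; the beat times and grid resolution are assumed inputs from the meter detector.

```python
def quantize_times(onsets, beat_times, divisions=4):
    """Snap continuous note onsets (in seconds) to the nearest subdivision
    of a detected beat grid (hypothetical helper, not the thesis code).

    onsets     -- list of note-onset times
    beat_times -- detected beat positions, ascending
    divisions  -- grid points per beat (e.g. 4 = sixteenth notes)
    """
    # Build the full grid of candidate positions between consecutive beats.
    grid = []
    for a, b in zip(beat_times, beat_times[1:]):
        step = (b - a) / divisions
        grid.extend(a + i * step for i in range(divisions))
    grid.append(beat_times[-1])
    # Snap every onset to the closest grid point.
    return [min(grid, key=lambda g: abs(g - t)) for t in onsets]
```

After this quantization, event times can be expressed as integer grid steps, which removes the continuous time dimension from the model's input.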

Key detection

When considering the most common major and minor modes, there are 24 different keys in total. A single pitch plays a different role in each of these keys, which adds a lot of complexity that a prediction model would otherwise have to learn. Hence, we classify the key of each piece (using a random forest model) and then transpose all pieces into a unified key.
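A typical input feature for such a key classifier is a normalized pitch-class histogram, and the subsequent transposition is a constant shift of all pitches. The exact feature set used by the thesis's random forest is not specified here, so the sketch below is an assumption:

```python
from collections import Counter

def pitch_class_histogram(pitches):
    """Normalized 12-bin pitch-class histogram over MIDI pitch numbers --
    a common feature for key classification (exact features used by the
    thesis's random forest are an assumption)."""
    counts = Counter(p % 12 for p in pitches)
    total = sum(counts.values()) or 1
    return [counts.get(pc, 0) / total for pc in range(12)]

def transpose_to_c(pitches, detected_tonic):
    """Shift every pitch down so the detected tonic becomes C (pitch class 0),
    giving all training pieces a unified key."""
    return [p - detected_tonic for p in pitches]
```

For example, a piece detected in D major (tonic pitch class 2) is shifted down by two semitones so that the model only ever sees music in one reference key.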

Chord sequences detection

We use the harmonic structure as a higher-level representation of music (above individual notes) to extend the effective scope of the LSTM generative model. We therefore estimate the chord progressions with the help of the detected meter and key.
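Per-beat chord estimation is often done by matching the sounding pitch classes against chord templates. The scoring below (matched notes minus unexplained notes) is a simplified stand-in for the thesis's detector, not its actual algorithm:

```python
# Candidate triad shapes as pitch-class sets relative to the chord root.
TEMPLATES = {"maj": {0, 4, 7}, "min": {0, 3, 7}}

def detect_chord(pitch_classes):
    """Pick the (root, quality) triad that best explains the pitch classes
    sounding during one beat. Scoring is an assumption: +1 per matched
    chord tone, -1 per sounding pitch class outside the chord."""
    best, best_score = None, None
    for root in range(12):
        for quality, template in TEMPLATES.items():
            chord = {(root + i) % 12 for i in template}
            score = len(chord & set(pitch_classes)) \
                    - len(set(pitch_classes) - chord)
            if best_score is None or score > best_score:
                best, best_score = (root, quality), score
    return best
```

Running this once per detected beat yields the chord sequence that the generative model later conditions on.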

 

Generative Model

The generative model consists of three modules: Chord Predictor, Note Predictor and Volume Predictor.

Illustration of the generative model

The Chord Predictor generates the underlying chords, one chord per beat; it is trained on data obtained from the chord detector. The most important module is the Note Predictor, which generates events – note-ons, note-offs, or shifts in time. It accepts a chord from the Chord Predictor together with the last event and predicts a new event. If the event starts a new note, the Volume Predictor assigns a volume to that note. Each module processes information sequentially – it predicts a new state based on all of the previous states – hence we use an LSTM network as the base of each module.
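An event stream of note-ons, note-offs, and time shifts is usually fed to an LSTM as a sequence of integer tokens. One plausible vocabulary layout is sketched below; the actual encoding and time-shift resolution used in the thesis may differ:

```python
# Hypothetical token vocabulary for the Note Predictor's event stream:
#   0..127   note-on  (MIDI pitch)
#   128..255 note-off (MIDI pitch)
#   256..287 time shift (in quantized grid steps; resolution is assumed)
NUM_PITCHES = 128
NUM_SHIFTS = 32

def encode_event(kind, value):
    if kind == "note_on":
        return value
    if kind == "note_off":
        return NUM_PITCHES + value
    if kind == "shift":
        return 2 * NUM_PITCHES + value
    raise ValueError(f"unknown event kind: {kind}")

def decode_event(token):
    if token < NUM_PITCHES:
        return ("note_on", token)
    if token < 2 * NUM_PITCHES:
        return ("note_off", token - NUM_PITCHES)
    return ("shift", token - 2 * NUM_PITCHES)
```

At generation time, the LSTM outputs a distribution over these 288 tokens, a token is sampled and decoded, and note-on events are additionally passed to the Volume Predictor.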

Illustration of Note Predictor architecture

 

Conclusion

In order to evaluate the performance of the proposed model, we conducted an online survey. The results show that there is still room for improvement, but overall the artificial music was rated almost as highly as the real music and was rather hard to distinguish from it (62.6% classification accuracy).

We hope the thesis shows that it is indeed possible to let computers compose music. Although the output is not perfect and is limited to a symbolic representation, it can create listenable compositions that are hard to distinguish from human-made music.

Survey ratings

 

Overview of the Repository

The repository is divided into the folders briefly described below. More details can be found in the readme file inside each folder.

Analyzer

GUI application used for the analysis, conversion, and visualization of .mid and internal .mus files. Note that the GUI front-end is a side project rather than an explicit part of the bachelor thesis, but it contains all the algorithms described in Chapter 5 on music analysis. We use it for an easier and more intuitive evaluation of the algorithms.

Generative Model

Source files for the generative model, implemented in PyTorch (a deep learning library for Python). The folder also contains already-trained models that can be used to generate new music.

Samples

Sample files output by the models contained in the Generative Model folder. The folder contains both good- and bad-sounding examples that illustrate the overall behaviour of the generative model.

Survey

Files related to the online questionnaire. Contains the 104 audio files used in the questionnaire together with a table of all answers from 293 users.

Thesis

Single-page version of the bachelor thesis.

 

BibTeX Citation

@thesis{SAMUEL18,
  author      = {David Samuel},
  title       = {Artificial Composition of Multi-Instrumental Polyphonic Music},
  year        = {2018},
  type        = {Bachelor's Thesis},
  institution = {Univerzita Karlova, Matematicko-fyzik{\'a}ln{\'\i} fakulta},
  url         = {https://is.cuni.cz/webapps/zzp/detail/194043/},
}