
brunnergino / Jambot

Licence: MIT

Programming Languages

Python
139,335 projects; #7 most used programming language

Projects that are alternatives of or similar to Jambot

Esp8266audio
Arduino library to play MOD, WAV, FLAC, MIDI, RTTTL, MP3, and AAC files on I2S DACs or with a software emulated delta-sigma DAC on the ESP8266 and ESP32
Stars: ✭ 972 (+1844%)
Mutual labels:  midi
Char Rnn Keras
TensorFlow implementation of multi-layer recurrent neural networks for training and sampling from texts
Stars: ✭ 40 (-20%)
Mutual labels:  lstm
Ssun
Spectral-Spatial Unified Networks for Hyperspectral Image Classification
Stars: ✭ 44 (-12%)
Mutual labels:  lstm
Sfz2bitwig
Convert .SFZ files into Bitwig Studio multisample instruments.
Stars: ✭ 37 (-26%)
Mutual labels:  midi
Beep
Beep sound library and utility for alerting end of a command execution. Beep can also play MIDI or text music score.
Stars: ✭ 39 (-22%)
Mutual labels:  midi
Fluttermidicommand
A Flutter plugin to send and receive MIDI
Stars: ✭ 41 (-18%)
Mutual labels:  midi
Neural Networks
All about Neural Networks!
Stars: ✭ 34 (-32%)
Mutual labels:  lstm
Theconductor
Toolset for making musical applications with Unity, Max and Ableton.
Stars: ✭ 48 (-4%)
Mutual labels:  midi
Mega Drive Midi Interface
Control the Sega Mega Drive's Yamaha YM2612 and PSG with MIDI
Stars: ✭ 39 (-22%)
Mutual labels:  midi
Pytorchtext
1st place solution for the Zhihu Machine Learning Challenge ("Zhihu Kanshan Cup"). Implementations of various text-classification models.
Stars: ✭ 1,022 (+1944%)
Mutual labels:  lstm
Language Modelling
Generating Text using Deep Learning in Python - LSTM, RNN, Keras
Stars: ✭ 38 (-24%)
Mutual labels:  lstm
Rnn Stocks Prediction
Another attempt to use Deep-Learning in the financial markets
Stars: ✭ 39 (-22%)
Mutual labels:  lstm
Sangita
A Natural Language Toolkit for Indian Languages
Stars: ✭ 43 (-14%)
Mutual labels:  lstm
Twitter Sentiment Analysis
Sentiment analysis on tweets using Naive Bayes, SVM, CNN, LSTM, etc.
Stars: ✭ 978 (+1856%)
Mutual labels:  lstm
Touchosc2midi
A (Linux-compatible) TouchOSC MIDI bridge written in Python
Stars: ✭ 44 (-12%)
Mutual labels:  midi
Midi shield
Midi shield product 9595, available from SparkFun Electronics
Stars: ✭ 34 (-32%)
Mutual labels:  midi
Deeptranslit
Efficient and easy to use transliteration for Indian languages
Stars: ✭ 41 (-18%)
Mutual labels:  lstm
Deepseqslam
The Official Deep Learning Framework for Route-based Place Recognition
Stars: ✭ 49 (-2%)
Mutual labels:  lstm
Rnn Notebooks
RNN(SimpleRNN, LSTM, GRU) Tensorflow2.0 & Keras Notebooks (Workshop materials)
Stars: ✭ 48 (-4%)
Mutual labels:  lstm
Avsr Deep Speech
Google Summer of Code 2017 Project: Development of Speech Recognition Module for Red Hen Lab
Stars: ✭ 43 (-14%)
Mutual labels:  lstm

JamBot: Music Theory Aware Chord Based Generation of Polyphonic Music with LSTMs

Paper

"JamBot: Music Theory Aware Chord Based Generation of Polyphonic Music with LSTMs", presented at ICTAI 2017.

https://arxiv.org/abs/1711.07682

Main Results

  • When JamBot is trained on an unshifted dataset, i.e., all songs are left in their respective keys, the learned chord embeddings form the circle of fifths.
Figure: 2D projection of the learned chord embeddings, arranged like the circle of fifths.
(Circle-of-fifths image: Just plain Bill, own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4463183)
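Such a 2D view can be reproduced from any trained embedding matrix. Below is a minimal PCA sketch (illustrative only; the paper's plot may use a different projection method), assuming an embedding matrix of shape (n_chords, dim):

```python
import numpy as np

def project_2d(embeddings):
    """Project embedding vectors onto their first two principal components."""
    centered = embeddings - embeddings.mean(axis=0)
    # SVD of the centered matrix yields the principal directions in vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T  # shape (n_chords, 2)

# Example with 12 random 10-dimensional "chord embeddings" (made-up sizes).
rng = np.random.default_rng(0)
emb = rng.normal(size=(12, 10))
coords = project_2d(emb)
print(coords.shape)  # (12, 2)
```

Plotting the two columns of `coords` against each other gives a scatter plot like the one above.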

Setup

Make sure you have the following Python packages installed. The version numbers in parentheses are the latest tested versions (last updated: 11 April 2018):

  • keras (2.1.5)
  • tensorflow-gpu (1.6.0)
  • numpy (1.14.2)
  • pretty_midi (0.2.8)
  • mido (1.2.8)
  • progressbar2 (3.37.0)
  • matplotlib (2.2.2)
  • h5py (2.7.1)

All tested with Python 3.5.4 (Last updated: 11th April 2018).
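The pinned versions above can be installed in one step (assuming pip; note that tensorflow-gpu 1.6.0 also expects a matching CUDA/cuDNN setup):

```shell
pip install keras==2.1.5 tensorflow-gpu==1.6.0 numpy==1.14.2 \
    pretty_midi==0.2.8 mido==1.2.8 progressbar2==3.37.0 \
    matplotlib==2.2.2 h5py==2.7.1
```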

Dataset

We used the lmd_matched subset of the Lakh Midi Dataset, which can be downloaded here: https://colinraffel.com/projects/lmd/

Data Processing

Create the data/original directory and put your MIDI files inside it. Run data_processing.py to adjust the tempo, shift the MIDI songs, and extract the chords and piano rolls. This can take some time, and invalid MIDI files may produce error messages. Afterwards there should be several new sub-directories in data/.
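To picture the chord-extraction step: a common approach (shown here as a hypothetical sketch, not the repository's actual code) folds a piano-roll segment onto the 12 pitch classes and keeps the most frequent ones as the chord.

```python
import numpy as np

def extract_chord(piano_roll_segment):
    """Return the 3 most active pitch classes in a (128, T) piano-roll segment.

    Hypothetical illustration of the chord-extraction idea; the real
    data_processing.py may differ in detail.
    """
    # Fold the 128 MIDI pitches onto 12 pitch classes, counting active frames.
    pitch_classes = np.zeros(12)
    active = piano_roll_segment > 0
    for pitch in range(128):
        pitch_classes[pitch % 12] += active[pitch].sum()
    # The chord is the set of the three most frequent pitch classes.
    return tuple(sorted(np.argsort(pitch_classes)[-3:]))

# A segment holding a sustained C major triad (C4=60, E4=64, G4=67).
seg = np.zeros((128, 16))
seg[[60, 64, 67], :] = 80
print(extract_chord(seg))  # (0, 4, 7)
```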

Training

First we train the Chord LSTM by running:

python chord_lstm_training.py

Model checkpoints will be periodically saved to the (default) directory /models/chords/.
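The chord LSTM is trained to predict the next chord from the chords that precede it. A hypothetical sketch of that windowing over a chord-index sequence (the actual preprocessing in chord_lstm_training.py may differ):

```python
import numpy as np

def make_training_pairs(chord_ids, context_len=4):
    """Build (context, target) pairs for next-chord prediction."""
    X, y = [], []
    for i in range(len(chord_ids) - context_len):
        X.append(chord_ids[i:i + context_len])   # preceding chords
        y.append(chord_ids[i + context_len])     # chord to predict
    return np.array(X), np.array(y)

# Toy chord-index sequence, e.g. a repeating I-IV-V progression.
seq = [0, 5, 7, 0, 5, 7, 0]
X, y = make_training_pairs(seq, context_len=3)
print(X.shape, y.shape)  # (4, 3) (4,)
```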

Now we can train the Polyphonic LSTM. First, adjust the chord_model_path string in polyphonic_lstm_training.py so that it points to a Chord LSTM checkpoint in /models/chords/ (the Polyphonic LSTM needs its learned chord embeddings). Then train the Polyphonic LSTM:

python polyphonic_lstm_training.py

Model checkpoints will be periodically saved to the (default) directory /models/chords_mldy/.
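One way to picture the chord conditioning: each piano-roll input frame is paired with the embedding vector of the current chord, taken from the trained chord model. The sketch below is hypothetical (made-up shapes and function name), not the repository's actual code:

```python
import numpy as np

def conditioned_input(frame, chord_id, embedding_matrix):
    """Concatenate a piano-roll frame with its chord's embedding vector.

    Hypothetical sketch of conditioning the polyphonic LSTM's input on the
    chord LSTM's learned embeddings.
    """
    return np.concatenate([frame, embedding_matrix[chord_id]])

# 128-dimensional frame, 24 chords embedded in 10 dimensions (made-up sizes).
rng = np.random.default_rng(1)
emb = rng.normal(size=(24, 10))
frame = np.zeros(128)
x = conditioned_input(frame, chord_id=3, embedding_matrix=emb)
print(x.shape)  # (138,)
```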

Generating

Adjust the chord_model_folder, chord_model_name, melody_model_folder, and melody_model_name strings in generate.py to point to trained chord and polyphonic LSTM models in models/. Adjust seed_path, seed_chord_path, and seed_name in generate.py to point to the extracted chords (in the data/shifted/chord_index folder) and piano roll (in the data/shifted/indroll folder) of the desired seed. Adjust the BPM, note_cap, and chord_temperature parameters if desired and run

python generate.py

to generate a song. The song will be saved with a few different instrumentations in midi_save_folder (default: predicted_midi/).
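A temperature parameter such as chord_temperature typically controls how random the sampling is: low values make generation near-greedy, high values push it toward uniform. A generic sketch of temperature sampling (generate.py may implement this differently):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample an index from logits sharpened or flattened by a temperature."""
    if rng is None:
        rng = np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.1]
# At temperature 0.1 the first chord dominates; at 5.0 choices are nearly uniform.
idx = sample_with_temperature(logits, temperature=0.1)
print(idx)
```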
