ruohoruotsi / Lstm Music Genre Classification
License: MIT
Music genre classification with LSTM Recurrent Neural Nets in Keras & PyTorch
Stars: ✭ 166
Projects that are alternatives of or similar to Lstm Music Genre Classification
Deep Music Genre Classification
🎵 Using Deep Learning to Categorize Music as Time Progresses Through Spectrogram Analysis
Stars: ✭ 23 (-86.14%)
Mutual labels: classification, lstm, rnn
Stylenet
A cute multi-layer LSTM that can perform like a human 🎶
Stars: ✭ 187 (+12.65%)
Mutual labels: lstm, rnn, music
Rnn Theano
Some RNN code implemented in Theano, including a basic RNN, LSTM, and several attention models such as the one from the MLSTM paper
Stars: ✭ 31 (-81.33%)
Mutual labels: classification, lstm, rnn
Deepjazz
Deep learning driven jazz generation using Keras & Theano!
Stars: ✭ 2,766 (+1566.27%)
Mutual labels: lstm, rnn, music
Machine Learning
My Attempt(s) In The World Of ML/DL....
Stars: ✭ 78 (-53.01%)
Mutual labels: classification, lstm, rnn
Pytorch Learners Tutorial
PyTorch tutorial for learners
Stars: ✭ 97 (-41.57%)
Mutual labels: lstm, rnn
Rnnvis
A visualization tool for understanding and debugging RNNs
Stars: ✭ 162 (-2.41%)
Mutual labels: lstm, rnn
Lstms.pth
PyTorch implementations of LSTM Variants (Dropout + Layer Norm)
Stars: ✭ 111 (-33.13%)
Mutual labels: lstm, rnn
Cnn lstm for text classify
Chinese text classification with CNN, LSTM, NBOW, and fastText
Stars: ✭ 90 (-45.78%)
Mutual labels: lstm, rnn
See Rnn
RNN and general weights, gradients, & activations visualization in Keras & TensorFlow
Stars: ✭ 102 (-38.55%)
Mutual labels: lstm, rnn
Pytorch Rnn Text Classification
Word Embedding + LSTM + FC
Stars: ✭ 112 (-32.53%)
Mutual labels: lstm, rnn
Load forecasting
Load forcasting on Delhi area electric power load using ARIMA, RNN, LSTM and GRU models
Stars: ✭ 160 (-3.61%)
Mutual labels: lstm, rnn
Pytorch Pos Tagging
A tutorial on how to implement models for part-of-speech tagging using PyTorch and TorchText.
Stars: ✭ 96 (-42.17%)
Mutual labels: lstm, rnn
Word Rnn Tensorflow
Multi-layer Recurrent Neural Networks (LSTM, RNN) for word-level language models in Python using TensorFlow.
Stars: ✭ 1,297 (+681.33%)
Mutual labels: lstm, rnn
Lstm Crypto Price Prediction
Predicting price trends in cryptomarkets using an lstm-RNN for the use of a trading bot
Stars: ✭ 136 (-18.07%)
Mutual labels: lstm, rnn
Chinese Chatbot
A Chinese chatbot trained on 100,000 dialogue pairs, using an attention mechanism; it generates a meaningful reply to most general questions. The trained model has been uploaded and can be run directly (and if it won't run, the author promises to livestream eating a keyboard).
Stars: ✭ 124 (-25.3%)
Mutual labels: lstm, rnn
Music Genre Classification with LSTMs
- Classify music files by genre, using the GTZAN music corpus
- The GTZAN corpus is included for ease of use
- Uses multiple layers of LSTM recurrent neural nets
- Implementations in PyTorch, Keras & Darknet
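The Keras and PyTorch implementations use those frameworks' built-in LSTM layers. As a rough sketch of the recurrence such a layer computes at each time step (the gate equations are standard; all dimensions below are illustrative, not the repo's actual hyperparameters):

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step; gate order in the stacked weights: i, f, o, g."""
    z = W @ x + U @ h_prev + b
    H = h_prev.size
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i = sig(z[0:H])          # input gate
    f = sig(z[H:2*H])        # forget gate
    o = sig(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])  # candidate cell update
    c = f * c_prev + i * g   # new cell state
    h = o * np.tanh(c)       # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 33, 64         # illustrative sizes, not the repo's real ones
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for _ in range(128):         # 128 feature frames per clip (assumed)
    x = rng.normal(size=n_in)
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)               # (64,)
```

The final hidden state `h` is what a classifier head (a dense softmax layer over the 10 GTZAN genres) would consume; stacking multiple such layers feeds each layer's `h` sequence into the next.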
Test trained LSTM model
In the ./weights/ directory you can find trained model weights and the model architecture.
To test the model on your custom audio file, run
python3 predict_example.py path/to/custom/file.mp3
or, to test the model on one of the provided audio files, run
python3 predict_example.py audio/classical_music.mp3
Audio features extracted
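The feature set itself is not listed here; extraction uses librosa (see Dependencies). As one hedged illustration of such a feature, not necessarily the repo's exact set, the spectral centroid of a Hann-windowed frame can be computed with plain numpy; librosa exposes the same measure as `librosa.feature.spectral_centroid`:

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency of one Hann-windowed audio frame."""
    win = np.hanning(len(frame))
    mag = np.abs(np.fft.rfft(frame * win))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

sr = 22050                               # librosa's default sample rate
t = np.arange(2048) / sr                 # one 2048-sample frame
tone = np.sin(2 * np.pi * 440.0 * t)     # pure 440 Hz test tone
c = spectral_centroid(tone, sr)
print(round(c))                          # ≈ 440 for a pure tone
```

A sequence of such per-frame features (centroid, MFCCs, etc.) over a clip forms the time series the LSTM consumes.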
Dependencies
- Python 3
- numpy
- librosa → for audio feature extraction
- Keras → pip install keras
- PyTorch → pip install torch torchvision (on macOS, also brew install libomp)
Ideas for improving accuracy:
- The GTZAN dataset has known quality problems; how do we account for them?
- Normalize MFCCs & other input features (recurrent BatchNorm?)
- Decay the learning rate
- How are we initializing the weights?
- Better optimization hyperparameters (dropout is currently too low)
- Do you have avoidable bias? How's your variance?
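The normalization idea above can be sketched as per-feature standardization, fitting statistics on the training split only so nothing leaks from validation or test data (the array shapes below are hypothetical, not the repo's actual dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical feature tensors: (clips, timesteps, features); shapes illustrative
train = rng.normal(loc=5.0, scale=3.0, size=(100, 128, 33))
test = rng.normal(loc=5.0, scale=3.0, size=(20, 128, 33))

# Fit per-feature mean/std on the training split ONLY, then apply
# the same transform everywhere, so no test statistics leak in.
mean = train.mean(axis=(0, 1))
std = train.std(axis=(0, 1)) + 1e-8      # epsilon guards constant features
train_n = (train - mean) / std
test_n = (test - mean) / std

print(train_n.mean(), train_n.std())     # ≈ 0.0 and ≈ 1.0
```

Zero-mean, unit-variance inputs generally make recurrent nets easier to optimize; recurrent BatchNorm, mentioned above, applies a related idea inside the recurrence itself.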
Accuracy
At Epoch 400, training on a TITAN X GPU (October 2017):
|            | Loss   | Accuracy |
|------------|--------|----------|
| Training   | 0.5801 | 0.7810   |
| Validation | 0.7345 | 0.7667   |
| Testing    | 0.9008 | 0.6833   |
At Epoch 400, training on a 2018 MacBook Pro CPU (May 2019):

|            | Loss   | Accuracy |
|------------|--------|----------|
| Training   | 0.3486 | 0.8738   |
| Validation | 1.0284 | 0.7000   |
| Testing    | 1.2097 | 0.6833   |