
rubensolozabal / BinPacking_Neural_Combinatorial_Optimization

License: MIT
Bin Packing Problem using Neural Combinatorial Optimization.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to BinPacking Neural Combinatorial Optimization

Sleepeegnet
SleepEEGNet: Automated Sleep Stage Scoring with Sequence to Sequence Deep Learning Approach
Stars: ✭ 89 (-4.3%)
Mutual labels:  sequence-to-sequence
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+3575.27%)
Mutual labels:  sequence-to-sequence
ismir2019-music-style-translation
The code for the ISMIR 2019 paper “Supervised symbolic music style translation using synthetic data”.
Stars: ✭ 27 (-70.97%)
Mutual labels:  sequence-to-sequence
Delta
DELTA is a deep learning based natural language and speech processing platform.
Stars: ✭ 1,479 (+1490.32%)
Mutual labels:  sequence-to-sequence
Npmt
Towards Neural Phrase-based Machine Translation
Stars: ✭ 175 (+88.17%)
Mutual labels:  sequence-to-sequence
seq2seq-pytorch
Sequence to Sequence Models in PyTorch
Stars: ✭ 41 (-55.91%)
Mutual labels:  sequence-to-sequence
Xmunmt
An implementation of RNNsearch using TensorFlow
Stars: ✭ 69 (-25.81%)
Mutual labels:  sequence-to-sequence
bin-packing-core
A rectangle-packing algorithm based on artificial intelligence (genetic algorithm + greedy max-rect algorithm)
Stars: ✭ 26 (-72.04%)
Mutual labels:  binpacking
Text summarization with tensorflow
Implementation of a seq2seq model for summarization of textual data. Demonstrated on amazon reviews, github issues and news articles.
Stars: ✭ 226 (+143.01%)
Mutual labels:  sequence-to-sequence
Neural-Chatbot
A Neural Network based Chatbot
Stars: ✭ 68 (-26.88%)
Mutual labels:  sequence-to-sequence
Mlds2018spring
Machine Learning and having it Deep and Structured (MLDS) in 2018 spring
Stars: ✭ 124 (+33.33%)
Mutual labels:  sequence-to-sequence
Seq2seq tutorial
Code For Medium Article "How To Create Data Products That Are Magical Using Sequence-to-Sequence Models"
Stars: ✭ 132 (+41.94%)
Mutual labels:  sequence-to-sequence
recursion-and-dynamic-programming
Julia and Python recursion algorithm, fractal geometry and dynamic programming applications including Edit Distance, Knapsack (Multiple Choice), Stock Trading, Pythagorean Tree, Koch Snowflake, Jerusalem Cross, Sierpiński Carpet, Hilbert Curve, Pascal Triangle, Prime Factorization, Palindrome, Egg Drop, Coin Change, Hanoi Tower, Cantor Set, Fibo…
Stars: ✭ 37 (-60.22%)
Mutual labels:  knapsack
Openseq2seq
Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
Stars: ✭ 1,378 (+1381.72%)
Mutual labels:  sequence-to-sequence
Keras-LSTM-Trajectory-Prediction
A Keras multi-input multi-output LSTM-based RNN for object trajectory forecasting
Stars: ✭ 88 (-5.38%)
Mutual labels:  sequence-to-sequence
Language Translation
Neural machine translator for English2German translation.
Stars: ✭ 82 (-11.83%)
Mutual labels:  sequence-to-sequence
Speech recognition with tensorflow
Implementation of a seq2seq model for Speech Recognition using the latest version of TensorFlow. Architecture similar to Listen, Attend and Spell.
Stars: ✭ 253 (+172.04%)
Mutual labels:  sequence-to-sequence
SSIM Seq2Seq
SSIM - A Deep Learning Approach for Recovering Missing Time Series Sensor Data
Stars: ✭ 32 (-65.59%)
Mutual labels:  sequence-to-sequence
cross-lingual-open-ie
MT/IE: Cross-lingual Open Information Extraction with Neural Sequence-to-Sequence Models
Stars: ✭ 22 (-76.34%)
Mutual labels:  sequence-to-sequence
ForestCoverChange
Detecting and Predicting Forest Cover Change in Pakistani Areas Using Remote Sensing Imagery
Stars: ✭ 23 (-75.27%)
Mutual labels:  sequence-to-sequence

BinPacking_Neural_Combinatorial_Optimization

Bin Packing Problem using Neural Combinatorial Optimization.

This TensorFlow model tackles the Bin Packing Problem using Reinforcement Learning. It trains multi-stacked LSTM cells acting as an RNN agent that embeds information from the environment and from variable-size sequences batched from the whole combinatorial input space.
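As a rough illustration only (not the repository's actual network), a multi-stacked LSTM encoder over variable-size item sequences could be written in TensorFlow 1.x as follows; num_layers, hidden_dim, items and seq_len are assumed names:

    import tensorflow as tf

    num_layers, hidden_dim = 2, 128
    # [batch, max_len, item_size] padded item sequences and their true lengths
    items = tf.placeholder(tf.float32, [None, None, 1], name="items")
    seq_len = tf.placeholder(tf.int32, [None], name="seq_len")

    # Stack several LSTM cells into a single multi-layer recurrent cell
    cells = [tf.nn.rnn_cell.LSTMCell(hidden_dim) for _ in range(num_layers)]
    stacked = tf.nn.rnn_cell.MultiRNNCell(cells)

    # dynamic_rnn masks the padded steps, handling variable-size sequences
    outputs, state = tf.nn.dynamic_rnn(stacked, items,
                                       sequence_length=seq_len, dtype=tf.float32)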

The agent is trained to behave like a first-fit algorithm. https://en.wikipedia.org/wiki/Bin_packing_problem#First-fit_algorithm
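For reference, a minimal first-fit sketch for one-dimensional bin packing (the heuristic the agent learns to imitate); the item list and bin capacity below are illustrative, not taken from the repository:

    def first_fit(items, bin_capacity):
        """Place each item into the first open bin with enough remaining space."""
        bins = []                                 # remaining capacity of each open bin
        for item in items:
            for i, remaining in enumerate(bins):
                if item <= remaining:             # first bin where the item fits
                    bins[i] -= item
                    break
            else:
                bins.append(bin_capacity - item)  # no bin fits: open a new one
        return len(bins)

    print(first_fit([4, 8, 1, 4, 2, 1], bin_capacity=10))  # -> 2 bins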

Special thanks to Michel Deudon (@mdeudon) and Pierre Cournut (@pcournut) for their inspirational TSP implementation. https://github.com/MichelDeudon/neural-combinatorial-optimization-rl-tensorflow

Requirements

  • Python 3.6
  • Tensorflow 1.8.0
  • Minizinc 2.1.1 (optional, required for --enable_performance)

Install the Python dependencies with:

    pip install -r requirements.txt

Usage

Test the pretrained model's performance:

    python main.py --train_mode=False --load_model=True (--enable_performance=True)

Train your own model from scratch:

    python main.py --train_mode=True --save_model=True

Continue training a previously saved model:

    python main.py --train_mode=True --save_model=True --load_model=True

Debug

To visualize training variables on Tensorboard:

    tensorboard --logdir=summary/repo

To activate the TensorFlow debugger in TensorBoard, uncomment the TensorBoard Debug Wrapper code in the model, then launch TensorBoard after running the model:

    tensorboard --logdir=summary/repo --debugger_port 6064

Results

Solutions are tested against the Gecode open-source constraint solver.

The performance obtained is over 80%.
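As an illustration only, assuming the figure refers to the ratio between the solver's optimal bin count and the bins used by the learned policy (names below are hypothetical, not the repository's API):

    # Illustrative only: assumes "performance" = optimal bins / model bins
    def performance_ratio(optimal_bins, model_bins):
        return optimal_bins / model_bins

    print(performance_ratio(optimal_bins=8, model_bins=10))  # -> 0.8, i.e. 80%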

Author

Ruben Solozabal, PhD student at the University of the Basque Country (UPV/EHU), Bilbao

Date: October 2018

Contact me: [email protected]

References

Bello, I., Pham, H., Le, Q. V., Norouzi, M., & Bengio, S. (2016). Neural combinatorial optimization with reinforcement learning. arXiv preprint arXiv:1611.09940.

Mirhoseini, A., Pham, H., Le, Q. V., Norouzi, M., Bengio, S., Steiner, B., Zhou, Y., Kumar, N., Larsen, R., & Dean, J. (2017). Device placement optimization with reinforcement learning.
