
dcmocanu / Sparse Evolutionary Artificial Neural Networks

License: MIT
Always sparse. Never dense. But never say never. A repository for the Adaptive Sparse Connectivity concept and its algorithmic instantiation, i.e. Sparse Evolutionary Training, to boost Deep Learning scalability on various aspects (e.g. memory and computational time efficiency, representation and generalization power).


sparse-evolutionary-artificial-neural-networks

  • Proof of concept implementations of various sparse artificial neural network models with adaptive sparse connectivity trained with the Sparse Evolutionary Training (SET) procedure.
  • The following implementations are distributed in the hope that they may be useful, but without any warranty; their use is entirely at the user's own risk.
Implementation 1 - SET-MLP with Keras and Tensorflow (SET-MLP-Keras-Weights-Mask)
  • Proof of concept implementation of Sparse Evolutionary Training (SET) for a Multi Layer Perceptron (MLP) on CIFAR10 using Keras and a mask over weights (a rough sketch of the rewiring step is given after this list).
  • This implementation can be used to test SET under varying conditions, exploiting the versatility of the Keras framework, e.g. various optimizers, activation layers, TensorFlow.
  • It can also be easily adapted for Convolutional Neural Networks or other models that have dense layers.
  • Variants of this implementation have been used to perform the experiments from Reference 1 with MLP and CNN.
  • However, because the weights are stored in the standard Keras format (dense matrices), this implementation cannot scale properly.
  • If you would like to build a SET-MLP with over 100,000 neurons, please use Implementation 2.
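
To give a feel for what "a mask over weights" means here, below is a minimal, self-contained NumPy sketch of one SET rewiring step applied to a masked dense kernel. The function name (evolve_mask) and the parameter names are illustrative, not the ones used in the repository; in the Keras implementation the rewired mask would be reapplied to every dense layer's kernel after each training epoch.

```python
import numpy as np

def evolve_mask(weights, mask, zeta=0.3, rng=None):
    """One SET rewiring step on a dense kernel controlled by a binary mask.

    Drops the fraction `zeta` of active connections with the smallest
    magnitude and regrows the same number at random inactive positions,
    so the total number of connections stays constant.
    """
    if rng is None:
        rng = np.random.default_rng()
    w_flat, m_flat = weights.ravel(), mask.ravel()

    active = np.flatnonzero(m_flat)
    n_rewire = int(zeta * active.size)
    if n_rewire == 0:
        return mask

    # prune: deactivate the weakest active connections
    weakest = active[np.argsort(np.abs(w_flat[active]))[:n_rewire]]
    m_flat[weakest] = 0

    # regrow: activate the same number of randomly chosen inactive positions
    # (in the full implementation the regrown weights are re-initialized
    # to small random values)
    inactive = np.flatnonzero(m_flat == 0)
    m_flat[rng.choice(inactive, size=n_rewire, replace=False)] = 1
    return m_flat.reshape(mask.shape)

# toy usage: a 784x1000 dense kernel kept at roughly 2% density
rng = np.random.default_rng(0)
kernel = rng.standard_normal((784, 1000)).astype(np.float32)
mask = (rng.random((784, 1000)) < 0.02).astype(np.float32)

mask = evolve_mask(kernel, mask, zeta=0.3, rng=rng)
kernel *= mask  # the masked kernel would then be written back into the layer
```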
Implementation 2 - SET-MLP using just sparse data structures from pure Python 3 (SET-MLP-Sparse-Python-Data-Structures)
  • An improved version of this implementation can be found here: https://github.com/SelimaC/Tutorial-SCADS-Summer-School-2020-Scalable-Deep-Learning

  • Proof of concept implementation of Sparse Evolutionary Training (SET) for a Multi Layer Perceptron (MLP) on the lung dataset using Python, SciPy sparse data structures, and (optionally) Cython.

  • This implementation was developed during the last stages of the reviewing process, and we briefly discuss it in the "Peer Review File", which can be downloaded from the Reference 1 website.

  • This implementation can be used to create SET-MLPs with hundreds of thousands of neurons on a standard laptop (a minimal sketch of such a truly sparse rewiring step is given after this list). It was built starting from the vanilla fully connected MLP implementation of Ritchie Vink (https://www.ritchievink.com/), and we would like to acknowledge his work and thank him. We would also like to thank Thomas Hagebols for analyzing the performance of SciPy sparse matrix operations, and Amarsagar Reddy Ramapuram Matavalam from Iowa State University ([email protected]), who provided us with a faster implementation of the "weightsEvolution" method after the initial release of this code.

  • If you would like to try large SET-MLP models, below are the expected running times, measured on my laptop (16 GB RAM) using the original implementation of the "weightsEvolution" method. I used exactly the model and the dataset from the file "set_mlp_sparse_data_structures.py" and changed only the number of hidden neurons per layer:

    • 3,000 neurons/hidden layer, 12,317 neurons in total
      0.3 minutes/epoch
    • 30,000 neurons/hidden layer, 93,317 neurons in total
      3 minutes/epoch
    • 300,000 neurons/hidden layer, 903,317 neurons in total
      49 minutes/epoch
    • 600,000 neurons/hidden layer, 1,803,317 neurons in total
      112 minutes/epoch
  • If you would like to try out SET-MLP with various activation functions, optimization methods and so on (at the expense of scalability), please use Implementation 1.
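
As a rough illustration of how such a rewiring step can operate directly on SciPy sparse matrices, so that no dense weight matrix is ever materialized, here is a minimal sketch. The function and parameter names (weights_evolution_sparse, zeta, sigma) are illustrative; the repository's actual "weightsEvolution" method follows the same prune-smallest / regrow-random idea but with its own, more optimized bookkeeping.

```python
import numpy as np
from scipy import sparse

def weights_evolution_sparse(w, zeta=0.3, sigma=0.01, rng=None):
    """Illustrative SET rewiring step on a truly sparse weight matrix.

    Keeps the (1 - zeta) fraction of connections with the largest magnitude
    and regrows the removed ones at random empty positions with small random
    initial values, so the number of connections stays constant.
    """
    if rng is None:
        rng = np.random.default_rng()
    coo = w.tocoo()
    n_keep = int((1 - zeta) * coo.nnz)

    # keep the largest-magnitude connections
    order = np.argsort(np.abs(coo.data))[::-1][:n_keep]
    rows, cols, vals = coo.row[order], coo.col[order], coo.data[order]

    # regrow connections at random positions that are not already occupied
    existing = set(zip(rows.tolist(), cols.tolist()))
    n_new = coo.nnz - n_keep
    new_rows, new_cols = [], []
    while len(new_rows) < n_new:
        i = int(rng.integers(0, w.shape[0]))
        j = int(rng.integers(0, w.shape[1]))
        if (i, j) not in existing:
            existing.add((i, j))
            new_rows.append(i)
            new_cols.append(j)
    new_rows = np.asarray(new_rows, dtype=coo.row.dtype)
    new_cols = np.asarray(new_cols, dtype=coo.col.dtype)
    new_vals = rng.normal(0.0, sigma, size=n_new)  # new weights start near zero

    w_new = sparse.coo_matrix(
        (np.concatenate([vals, new_vals]),
         (np.concatenate([rows, new_rows]), np.concatenate([cols, new_cols]))),
        shape=w.shape,
    )
    return w_new.tocsr()

# toy usage: a 3000x3000 layer with roughly 20 connections per neuron
w = sparse.random(3000, 3000, density=20 / 3000, format="csr", random_state=0)
w = weights_evolution_sparse(w, zeta=0.3)
```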

Implementation 3 - SET-RBM using just sparse data structures from pure Python 3 (SET-RBM-Sparse-Python-Data-Structures)
  • Proof of concept implementation of Sparse Evolutionary Training (SET) for a Restricted Boltzmann Machine (RBM) on the COIL-20 dataset using Python, SciPy sparse data structures, and (optionally) Cython.
  • This implementation can be used to create SET-RBMs with hundreds of thousands of neurons on a standard laptop and was developed just before the publication of Reference 1 (a minimal sketch of a truly sparse RBM update is given below).
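
For intuition, here is a minimal sketch of a truly sparse contrastive-divergence (CD-1) weight update, in which the gradient is evaluated only at the positions of existing connections. The function name cd1_sparse_update is illustrative; biases and the SET rewiring step (which follows the same prune/regrow idea sketched above) are omitted.

```python
import numpy as np
from scipy import sparse

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_sparse_update(w, v0, lr=0.01, rng=None):
    """One CD-1 step that only touches the existing connections of a sparse
    visible-to-hidden weight matrix w (biases omitted for brevity)."""
    if rng is None:
        rng = np.random.default_rng()
    coo = w.tocoo()

    # positive phase, hidden sampling, reconstruction, negative phase
    h0 = sigmoid(np.asarray((w.T @ v0.T).T))      # (batch, n_hidden)
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(np.asarray((w @ h_sample.T).T))  # (batch, n_visible)
    h1 = sigmoid(np.asarray((w.T @ v1.T).T))

    # gradient <v0 h0> - <v1 h1>, evaluated only where a connection exists
    pos = (v0[:, coo.row] * h0[:, coo.col]).mean(axis=0)
    neg = (v1[:, coo.row] * h1[:, coo.col]).mean(axis=0)
    coo.data += lr * (pos - neg)
    return coo.tocsr()

# toy usage: 1024 visible units, 2000 hidden units, ~1% density, batch of 32
rng = np.random.default_rng(0)
w = sparse.random(1024, 2000, density=0.01, format="csr", random_state=0)
v0 = (rng.random((32, 1024)) < 0.5).astype(float)
w = cd1_sparse_update(w, v0, rng=rng)
```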
Implementation 4 - IJCAI 2019 tutorial - light hands-on experience code (Tutorial-IJCAI-2019-Scalable-Deep-Learning)
  • Tutorial details - "Scalable Deep Learning: from theory to practice" https://sites.google.com/view/scalable-deep-learning-ijcai19
  • The code is based on Implementation 2 of SET-MLP to which Dropout is added.
  • In the "Pretrained_results" folder there is a nice animation "fashion_mnist_connections_evolution_per_input_pixel_rand0.gif" of the input layer connectivity evolution during training.
Implementation 5 - ECMLPKDD 2019 tutorial - light hands-on experience code (Tutorial-ECMLPKDD-2019-Scalable-Deep-Learning)
  • Tutorial details - "Scalable Deep Learning: from theory to practice" https://sites.google.com/view/sdl-ecmlpkdd-2019-tutorial
  • The code is based on Implementation 2 of SET-MLP to which Dropout is added.
  • In the "Pretrained_results" folder there is a nice animation "fashion_mnist_connections_evolution_per_input_pixel_rand0.gif" of the input layer connectivity evolution during training.
References

For an easy understanding of these implementations please read the following articles. Also, if you use parts of this code in your work, please cite the corresponding ones:

  1. @article{Mocanu2018SET, author = {Mocanu, Decebal Constantin and Mocanu, Elena and Stone, Peter and Nguyen, Phuong H. and Gibescu, Madeleine and Liotta, Antonio}, journal = {Nature Communications}, title = {Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science}, year = {2018}, doi = {10.1038/s41467-018-04316-3}, url = {https://www.nature.com/articles/s41467-018-04316-3 }}

  2. @article{Mocanu2016XBM, author={Mocanu, Decebal Constantin and Mocanu, Elena and Nguyen, Phuong H. and Gibescu, Madeleine and Liotta, Antonio}, title={A topological insight into restricted Boltzmann machines}, journal={Machine Learning}, year={2016}, volume={104}, number={2}, pages={243--270}, doi={10.1007/s10994-016-5570-z}, url={https://doi.org/10.1007/s10994-016-5570-z }}

  3. @phdthesis{Mocanu2017PhDthesis, title = {Network computations in artificial intelligence}, author = {Mocanu, Decebal Constantin}, year = {2017}, isbn = {978-90-386-4305-2}, publisher = {Eindhoven University of Technology}, url={https://pure.tue.nl/ws/files/69949254/20170629_CO_Mocanu.pdf } }

  4. @article{Liu2019onemillion, author = {Liu, Shiwei and Mocanu, Decebal Constantin and Ramapuram Matavalam, Amarsagar Reddy and Pei, Yulong and Pechenizkiy, Mykola}, journal = {arXiv:1901.09181}, title = {Sparse evolutionary Deep Learning with over one million artificial neurons on commodity hardware}, year = {2019}, url={https://arxiv.org/abs/1901.09181 } }

SET shows that large sparse neural networks can be built if topological sparsity is created from the design phase, before training. There are many algorithmic and implementation improvements that can be made. If you find this work interesting, please share the links to this GitHub page and to Reference 1. For any questions, suggestions, or feedback, please feel free to contact me by email.

Community

Some time ago, I had a very pleasant and unexpected surprise when I found out that Michael Klear released "Synapses". This library implements SET layers in PyTorch and, as Michael says, it is "truly sparse". For more details please read his article:

https://towardsdatascience.com/the-sparse-future-of-deep-learning-bce05e8e094a

And try out "Synapses" yourself:

https://github.com/AlliedToasters/synapses

Many things can be improved in "Synapses". If you are interested, please contact Michael and help him develop the project further.

Update 4 June 2020

Our paper "Topological insights into sparse neural networks" (https://arxiv.org/pdf/2006.14085.pdf) has been accepted at ECMLPKDD 2020. It proposes Neural Network Sparse Topology Distance (NNSTD) to measure the distance between different sparse neural networks. The code is here: https://github.com/Shiweiliuiiiiiii/Sparse_Topology_Distance. The paper also shows, in a principled manner, that sparse training easily unveils a plenitude of sparse sub-networks with very different topologies which outperform their dense counterparts.

Update 30 November 2020

For an interesting quick read about sparse training, please have a look at this blog post: https://numenta.com/blog/2020/10/30/case-for-sparsity-in-neural-networks-part-2-dynamic-sparsity

Update 14 December 2020

To see how sparse training can be used for feature selection please check our latest paper, titled "Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders", here: https://arxiv.org/abs/2012.00560

and the corresponding truly sparse implementation here: https://github.com/zahraatashgahi/QuickSelection

Many thanks,
Decebal
