
jinyyy666 / mm-bp-snn

Licence: other
No description or website provided.

Programming Languages

  • Cuda (1,817 projects)
  • C++ (36,643 projects; #6 most used programming language)
  • Matlab (3,953 projects)

Projects that are alternatives of or similar to mm-bp-snn

gpuhd
Massively Parallel Huffman Decoding on GPUs
Stars: ✭ 30 (+0%)
Mutual labels:  gpu-acceleration
Blur-and-Clear-Classification
Classifying the Blur and Clear Images
Stars: ✭ 88 (+193.33%)
Mutual labels:  backpropagation
Learning-Lab-C-Library
This library provides a set of basic functions for different types of deep learning (and other) algorithms in C. This deep learning library will be constantly updated
Stars: ✭ 20 (-33.33%)
Mutual labels:  backpropagation
Nexus
🖼️ Actionscript 3, GPU accelerated 2D game engine using Stage3D
Stars: ✭ 12 (-60%)
Mutual labels:  gpu-acceleration
NeuroFlow
Awesome deep learning crate
Stars: ✭ 69 (+130%)
Mutual labels:  backpropagation
SwiftSimpleNeuralNetwork
A simple multi-layer feed-forward neural network with backpropagation built in Swift.
Stars: ✭ 29 (-3.33%)
Mutual labels:  backpropagation
runtime
AnyDSL Runtime Library
Stars: ✭ 17 (-43.33%)
Mutual labels:  gpu-acceleration
Machine-Learning-in-Python-Workshop
My workshop on machine learning using python language to implement different algorithms
Stars: ✭ 89 (+196.67%)
Mutual labels:  backpropagation
Deep-Learning-Coursera
Projects from the Deep Learning Specialization from deeplearning.ai provided by Coursera
Stars: ✭ 123 (+310%)
Mutual labels:  backpropagation
dpnp
NumPy drop-in replacement for Intel(R) XPUs
Stars: ✭ 42 (+40%)
Mutual labels:  gpu-acceleration
KRS
The Kria Robotics Stack (KRS) is a ROS 2 superset for industry, an integrated set of robot libraries and utilities to accelerate the development, maintenance and commercialization of industrial-grade robotic solutions while using adaptive computing.
Stars: ✭ 26 (-13.33%)
Mutual labels:  gpu-acceleration
ai-backpropagation
The backpropagation algorithm explained and demonstrated.
Stars: ✭ 20 (-33.33%)
Mutual labels:  backpropagation
Medium-Python-Neural-Network
This code is part of my post on Medium.
Stars: ✭ 58 (+93.33%)
Mutual labels:  backpropagation
Jamais-Vu
Audio Fingerprinting and Recognition in Python using NVidia's CUDA
Stars: ✭ 24 (-20%)
Mutual labels:  gpu-acceleration
pytorch-gpu-data-science-project
Template repository for a Python 3-based (data) science project with GPU acceleration using the PyTorch ecosystem.
Stars: ✭ 16 (-46.67%)
Mutual labels:  gpu-acceleration
Scientific-Programming-in-Julia
Repository for B0M36SPJ
Stars: ✭ 32 (+6.67%)
Mutual labels:  gpu-acceleration
environments
Determined AI public environments
Stars: ✭ 22 (-26.67%)
Mutual labels:  gpu-acceleration
Obsidian
Obsidian Language Repository
Stars: ✭ 38 (+26.67%)
Mutual labels:  gpu-acceleration
Galaxia-Runtime
Galaxy generator for Unity 3D, with custom particle distributors, DirectX 11 particles, and highly customizable, curve-driven generation.
Stars: ✭ 36 (+20%)
Mutual labels:  gpu-acceleration
CS231n
PyTorch/Tensorflow solutions for Stanford's CS231n: "CNNs for Visual Recognition"
Stars: ✭ 47 (+56.67%)
Mutual labels:  backpropagation

Hybrid Macro/Micro Level Backpropagation for SNNs

This repo is the CUDA implementation of SNNs trained with hybrid macro/micro level backpropagation, modified from zhxfl's code to support spiking neural networks.

The paper Hybrid Macro/Micro Level Backpropagation for Training Deep Spiking Neural Networks was accepted at NeurIPS 2018.
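For background, the networks simulated here are built from spiking neurons. As an illustration only (the exact neuron model and its discretization are defined in the paper and the code), the standard leaky integrate-and-fire (LIF) dynamics are

$$\tau_m \frac{du(t)}{dt} = -u(t) + R\,I(t),$$

where \(u(t)\) is the membrane potential, \(\tau_m\) the membrane time constant, \(R\) the membrane resistance, and \(I(t)\) the synaptic input current; the neuron emits a spike when \(u(t)\) reaches the firing threshold \(\vartheta\) and then resets.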

Contact [email protected] if you have any questions or concerns.

Dependencies and Libraries

  • OpenCV
  • CUDA (CUDA 8.0 suggested)

You can compile the code on Windows or Linux.

SDK include path (-I)
  • Linux: /usr/local/cuda/samples/common/inc/ (for the include file "helper_cuda"); /usr/local/include/opencv/ (depends on your installation)
  • Windows: X:/Program Files (x86)/NVIDIA Corporation/CUDA Samples/v6.5/common/inc (for the include file "helper_cuda"); X:/Program Files/opencv/vs2010/install/include (depends on your installation)
Library search path (-L)
  • Linux: /usr/local/lib/
  • Windows: X:/Program Files/opencv/vs2010/install/x86/vc10/lib (depends on your installation)
Libraries (-l)
  • opencv_core
  • opencv_highgui
  • opencv_imgproc
  • opencv_imgcodecs (needed for OpenCV 3.0)
  • cublas
  • curand
  • cudadevrt
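For reference, the flags above combine into a compile line roughly like the one below. This is a minimal sketch for Linux, assuming default install locations and placeholder source file names; the CMake build described under Installation is the supported route.

$ nvcc -I/usr/local/cuda/samples/common/inc/ -I/usr/local/include/opencv/ \
       -L/usr/local/lib/ \
       -rdc=true -arch=sm_60 \
       -o CUDA-SNN *.cpp *.cu \
       -lopencv_core -lopencv_highgui -lopencv_imgproc -lopencv_imgcodecs \
       -lcublas -lcurand -lcudadevrt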

Installation

The repo requires CUDA 8.0+ to run.

Please install OpenCV and CUDA beforehand.

Install CMake and OpenCV

$ sudo apt-get install cmake libopencv-dev 

Check out and compile the code:

$ git clone https://github.com/jinyyy666/mm-bp-snn.git
$ cd mm-bp-snn
$ mkdir build
$ cd build
$ cmake ..
$ make -j
GPU compute capability
  • compute capability 6.0, for the Titan Xp the authors used (to target another GPU, see the configure sketch below).
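If your GPU has a different compute capability, the architecture flags need to match it. Assuming the build uses CMake's classic FindCUDA module (an assumption; check CMakeLists.txt for the actual variable it reads), the flags can be overridden at configure time, e.g. for a compute 6.1 card:

$ cd build
$ cmake -DCUDA_NVCC_FLAGS="-arch=sm_61" ..
$ make -j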

Get Dataset

Get the MNIST dataset:

$ cd mm-bp-snn/mnist/
$ ./get_mnist.sh

Get the N-MNIST dataset from the link. Then unzip "Test.zip" and "Train.zip".
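For example, assuming the two archives were downloaded into nmnist/ (the layout the converter expects is an assumption; check NMNIST_Converter.m for the paths it reads):

$ cd mm-bp-snn/nmnist/
$ unzip Test.zip -d Test
$ unzip Train.zip -d Train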

Run the Matlab script NMNIST_Converter.m in nmnist/.

Run the code

  • MNIST
$ cd mm-bp-snn
$ ./build/CUDA-SNN 6 1
  • N-MNIST
$ cd mm-bp-snn
$ ./build/CUDA-SNN 7 1
  • For Spiking-CNN, you need to enable #define SPIKING_CNN in main.cpp and recompile.
$ cd mm-bp-snn
$ ./build/CUDA-SNN 6 1
For Windows users

Do the following to set up the compilation environment.

  • Install Visual Studio and OpenCV.
  • When you create a new project in VS, you can find the NVIDIA CUDA project template; create a CUDA project.
  • View-> Property Pages-> Configuration Properties-> CUDA C/C++ -> Device-> Code Generation-> compute_60,sm_60
  • View-> Property Pages-> Configuration Properties-> CUDA C/C++ -> Common-> Generate Relocatable Device Code-> Yes(-rdc=true)
  • View-> Property Pages-> Configuration Properties-> Linker-> Input-> Additional Dependencies-> libraries (-l); see the example list after these steps
  • View-> Property Pages-> Configuration Properties-> VC++ Directories-> General-> Library search path(-L)
  • View-> Property Pages-> Configuration Properties-> VC++ Directories-> General-> Include Directories(-I)
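For the Additional Dependencies entry, list the Windows import libraries corresponding to the -l libraries above. The exact OpenCV names depend on your installed version; the 2413 suffix below is a hypothetical example for OpenCV 2.4.13:

opencv_core2413.lib
opencv_highgui2413.lib
opencv_imgproc2413.lib
cublas.lib
curand.lib
cudadevrt.lib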

Notes

  • The SNNs are implemented in terms of layers. Users can configure the SNNs via the configuration files in Config/.
  • The program saves the best test result and the network weights in the file "Result/checkPoint.txt". If the program exits accidentally, you can resume from this checkpoint.
  • The logs for the reported performance on the three datasets and the corresponding checkpoint files can be found in the Result folder.