roberthangu / snn_object_recognition

Licence: GPL-2.0 license


One-Shot Object Appearance Learning using Spiking Neural Networks

Note: This project is no longer actively maintained or supported. Questions and issues might be answered with delay or not at all.

This is a Spiking Neural Network used for testing one-shot object appearance learning, that is, the learning of new object features from only one or very few training instances.

It is written in Python and runs on the NEST neurosimulator, which gives the framework greater biological plausibility than other networks of this kind that use their own implementations of the neural mechanics.

Features

The network consists of 5 layers of spiking neurons. The first 4 are alternating simple and complex cell layers (S1, C1, S2, C2), and the 5th is a classifier (e.g. an SVM). The learning of the object features happens between the C1 and S2 layers using Spike-Timing-Dependent Plasticity (STDP). This architecture is inspired by the work of Masquelier et al. Some sample features learned by the network can be seen below.
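To illustrate the C1 to S2 learning rule, a minimal pair-based STDP weight update looks like the following. This is a generic textbook sketch with illustrative time constants and learning rates, not the project's exact NEST implementation:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.0105,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise. Times are in ms."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post: causal pairing, potentiation
        w += a_plus * math.exp(-dt / tau_plus)
    else:        # post before (or together with) pre: depression
        w -= a_minus * math.exp(dt / tau_minus)
    return min(max(w, w_min), w_max)  # keep the weight in its bounds

# A causal pre -> post pairing strengthens the synapse:
w_new = stdp_update(0.5, t_pre=10.0, t_post=15.0)
```

Repeated over many image presentations, updates of this kind make S2 neurons selective for spike patterns (i.e. image features) that reliably precede their firing.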

Table: Features extracted from motorbikes (top) and faces (bottom)

These features were extracted from the Motorbikes and Faces datasets of the Caltech 101 image training set. They were learned by presenting only pictures of the same dataset to the network. In contrast, the image below shows a set of smaller features extracted by presenting the network with images of three classes combined, namely Airplanes, Motorbikes and Faces.

These combined features are used for the One-Shot appearance learning, as the network tries to "find" these features in new, unseen object classes.

There are also videos showing the convergence of the weights during training with motorbikes and faces, and with airplanes, motorbikes and pianos.

Usage

Since running all the layers of the network at once is computationally very slow, the process is divided into several steps that are run separately, one after another. The basic data on which the computation relies are the spiketrains, and spikes are propagated from the first layer to the last. To speed up the computation, the simulation can therefore be "stopped" after a certain layer, the spiketrains dumped to a file and used as the input for the next layer in a later simulation. This avoids recomputing the same information when tuning or testing a single layer.
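The dump-and-reload workflow can be sketched with plain pickle files. The helper names and the file format here are hypothetical; the project's actual dump format may differ:

```python
import pickle

def dump_spiketrains(path, spiketrains):
    """Save per-neuron spike times (here a dict of neuron id -> list of
    spike times in ms) so a later run can start from this layer."""
    with open(path, "wb") as f:
        pickle.dump(spiketrains, f)

def load_spiketrains(path):
    """Reload previously dumped spiketrains to feed the next layer,
    e.g. as spike source inputs in a later simulation."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Dump after the C1 layer, reload when tuning the S2 layer:
dump_spiketrains("c1_spikes.pkl", {0: [12.5, 40.1], 1: [33.7]})
c1 = load_spiketrains("c1_spikes.pkl")
```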

For this purpose there are the following scripts:

  • dump-c1-spikes.py or dump-blanked-c1-spikes.py: Runs from the input images to the C1 layer and dumps the C1 spiketrains. The second script adds a blanktime between consecutive images, which is beneficial for the recognition later.
  • learn-features.py: Simulates the C1 and S2 layers. This is where the S2 weights, i.e. the "features", are learned and dumped to a file, from which they can later be used for classification. The filename is generated automatically from the given command line parameters and the name of the C1 spike dumpfile.
  • dump-c2-spikes.py: Runs from the C1 spiketrains to the C2 layer and dumps the C2 spiketrains.
  • classify-images.py or classify-images-one-shot.py: These scripts use the weights learned previously to learn images of new object classes in a one-shot manner. The first script uses an SVM for the classification of the images and does not rely on the dumped C2 spikes. The second script does "real" one-shot classification by training an extra fully connected neural layer with STDP instead of an SVM; it therefore uses the dumped C2 spikes to speed up the training of the last layer. Both scripts use S2 weights pre-learned on one set of classes and apply them to learn the characteristics of new classes.
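As an illustration of the SVM route, the C2 responses of each image can be flattened into a feature vector and fed to a linear classifier. The snippet below is a generic scikit-learn sketch with synthetic stand-in features, not the project's actual code:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical stand-in for C2 responses: one feature vector per image,
# with "faces" and "motorbikes" responding to different features.
n_features = 10
faces = rng.normal(0.8, 0.1, size=(20, n_features))
bikes = rng.normal(0.2, 0.1, size=(20, n_features))

X = np.vstack([faces, bikes])
y = np.array([1] * 20 + [0] * 20)  # 1 = face, 0 = motorbike

# A linear SVM on top of the spiking feature responses:
clf = LinearSVC().fit(X, y)
pred = clf.predict(rng.normal(0.8, 0.1, size=(1, n_features)))
```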

The usage of each file can be seen by running it with the --help command line argument. Below is also a minimal example for each script with some sane defaults.

  1. To dump the C1 spiketrains with a blanktime between consecutive images:

    ./dump-blanked-c1-spikes.py --
        --dataset-label <your label>
        --training-dir <training images>
    
  2. Train the C1 - S2 weights (i.e. extract the features). The filename of the weights dumpfile is automatically generated:

    ./learn-features.py --
        --c1-dumpfile <c1 spiketrain dumpfile> 
    
  3. [Optional. Used for accelerating the STDP learning in the one-shot classifier] Dump the C2 spiketrains:

    ./dump-c2-spikes.py --
        --training-c1-dumpfile <c1 spiketrain dumpfile>
        --weights-from <S2 weights dumpfile from step 2>
    
  4. Learn and classify new classes by using the weights of step 2 either with an SVM (first script) or with a fully connected end-layer using STDP:

    ./classify-images.py --
       --training-c1-dumpfile <c1 dumpfile of the training dataset>
       --validation-c1-dumpfile <c1 dumpfile of the validation dataset>
       --training-labels <text file containing the labels of the training images>
       --validation-labels <text file containing the labels of the validation images>
       --weights-from <S2 weights dumpfile from step 2>
    
    ./classify-images-one-shot.py --
       <same parameters as above>
    

Installation

NOTE: At the moment the network relies on a NEST extension which adds a shared-weights synapse type. This mechanism greatly speeds up the computation, since a weight change no longer has to be copied to all the other synapses; instead, each synapse reads its weight from a single shared table.
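The idea behind the shared-weights synapse can be sketched as many synapses holding only an index into one weight table, so a plasticity update is a single write instead of a copy to every synapse. This is an illustrative Python sketch, not the actual NEST extension:

```python
class SharedWeightSynapses:
    """Many synapses share one weight table: each synapse stores only an
    index, so updating table[i] updates every synapse mapped to i."""

    def __init__(self, weight_table):
        self.table = list(weight_table)
        self.synapses = []  # each entry is an index into the shared table

    def connect(self, feature_index):
        """Create a synapse that reads its weight from the shared table."""
        self.synapses.append(feature_index)

    def weight_of(self, synapse_id):
        return self.table[self.synapses[synapse_id]]

    def update(self, feature_index, dw):
        # One write; no per-synapse copies of the changed weight.
        self.table[feature_index] += dw

pool = SharedWeightSynapses([0.5, 0.5])
pool.connect(0)
pool.connect(0)  # a second synapse sharing the same weight entry
pool.update(0, 0.1)
# Both synapses now see the updated weight.
```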

In order to run the code, the user needs to install NEST 2.10.0 with PyNN 0.8.1. Please consult their corresponding web pages for installation instructions.

The code is written in Python 3, so a working Python 3 installation is also required.
