
derniercri / multilayer-perceptron

License: GPL-3.0
Library to make and train a concurrent multilayer perceptron

Programming Languages: erlang (1774 projects)

Projects that are alternatives of or similar to multilayer-perceptron

benchmark-http
No description or website provided.
Stars: ✭ 15 (-67.39%)
Mutual labels:  concurrency
Shift
Light-weight EventKit wrapper.
Stars: ✭ 31 (-32.61%)
Mutual labels:  concurrency
soabase-stages
A tiny library that makes staged/pipelined CompletableFutures much easier to create and manage
Stars: ✭ 23 (-50%)
Mutual labels:  concurrency
TAOMP
Example code implementations from The Art of Multiprocessor Programming, with comments and unit tests
Stars: ✭ 39 (-15.22%)
Mutual labels:  concurrency
await async
Provide await and async methods to Crystal Lang
Stars: ✭ 71 (+54.35%)
Mutual labels:  concurrency
aiorwlock
Read/Write Lock - synchronization primitive for asyncio
Stars: ✭ 90 (+95.65%)
Mutual labels:  concurrency
theater
Actor framework for Dart. This package makes it easier to work with isolates, create clusters of isolates.
Stars: ✭ 29 (-36.96%)
Mutual labels:  concurrency
csp.js
📺 CSP for vanilla JavaScript
Stars: ✭ 45 (-2.17%)
Mutual labels:  concurrency
sto
Software Transactional Objects
Stars: ✭ 40 (-13.04%)
Mutual labels:  concurrency
async-enumerable-dotnet
Experimental operators for C# 8 IAsyncEnumerables
Stars: ✭ 32 (-30.43%)
Mutual labels:  concurrency
treap
A thread-safe, persistent Treap (tree + heap) for ordered key-value mapping and priority sorting.
Stars: ✭ 23 (-50%)
Mutual labels:  concurrency
traffic
Massively real-time traffic streaming application
Stars: ✭ 25 (-45.65%)
Mutual labels:  concurrency
transit
Massively real-time city transit streaming application
Stars: ✭ 20 (-56.52%)
Mutual labels:  concurrency
pygolang
Go-like features for Python and Cython. (mirror of https://lab.nexedi.com/kirr/pygolang)
Stars: ✭ 37 (-19.57%)
Mutual labels:  concurrency
CacheLib
Pluggable in-process caching engine to build and scale high performance services
Stars: ✭ 637 (+1284.78%)
Mutual labels:  concurrency
wise-river
Object streaming the way it should be.
Stars: ✭ 33 (-28.26%)
Mutual labels:  concurrency
futex
File-based Ruby Mutex
Stars: ✭ 14 (-69.57%)
Mutual labels:  concurrency
BrainModels
Brain models implementation with BrainPy
Stars: ✭ 36 (-21.74%)
Mutual labels:  neurons
nativescript-http
The best way to do HTTP requests in NativeScript, a drop-in replacement for the core HTTP with important improvements and additions like proper connection pooling, form data support and certificate pinning
Stars: ✭ 32 (-30.43%)
Mutual labels:  concurrency
Async-Channel
Python async multi-task communication library. Used by OctoBot project.
Stars: ✭ 13 (-71.74%)
Mutual labels:  concurrency

Multilayer_perceptron_library

This library is a small experiment with neural networks and the actor model. The module is not used in production. The library provides functions to create and train multilayer perceptrons.

Thanks a lot to fh-d for this awesome logo!

Installation

This project uses rebar3, so:

  • rebar3 compile compiles the project;
  • rebar3 eunit runs the tests;
  • rebar3 edoc generates the documentation in doc/;
  • rebar3 dialyzer type-checks the library;
  • rebar3 shell runs an Erlang shell with the library loaded.

Neuron module

Link to the documentation

Here is how to create a neural network by initializing all the values in each neuron yourself. This is unnecessary if the network is to be trained, because the values will then be generated randomly.

The function neuron:make_layer_hard/3 creates a layer:

  • Rank: the ID of the layer (0 is the output layer, > 0 are input layers);
  • Pid list: the list of PIDs the layer's neurons will be connected to;
  • Neurons list: an ordered list of neurons.

Neuron representation

A neuron is a tuple with 4 fields:

  • Nb_inputs: the number of inputs connected to the neuron. This is the number of neurons in the previous layer, or the number of network inputs if the neuron is part of the input layer;
  • Weights: the list of weights applied to each input of the neuron (there must be as many weights as inputs). The weights must be sorted in the same order as the inputs, so the weight applied to input 1 must come first;
  • B: the threshold of the neuron;
  • F: the activation function of the neuron.
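
As a quick illustration (the values here are arbitrary, not prescribed by the library), a single two-input neuron written out field by field:

%% A neuron is {Nb_inputs, Weights, B, F}:
%% two inputs, one weight per input (sorted in input order),
%% a threshold of -1.5, and the threshold activation from the utils module.
N = {2, [1, 1], -1.5, utils:threshold(0)}.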

Example

Initialization of a network computing a xor. The activation function is the threshold function from the utils module. The current process is used as the output of the network.


%% Neuron's creation
N1 = {2, [2, 2], -1, utils:threshold(0)},
N2 = {2, [-1, -1], 1.5, utils:threshold(0)},
N3 = {2, [1, 1], -1.5, utils:threshold(0)},
%% Layer's creation
C1 = neuron:make_layer_hard(0, [self()], [N3]),
C2 = neuron:make_layer_hard(1, C1, [N1, N2]).

Create an empty network to be trained

For training, a network can be initialized with empty values using neuron:make_network/3:

  • Layer_values: a list of parameters for each layer (see layer_value for a more detailed description). The parameters must be arranged in ascending order (the output layer, with index 0, must therefore be placed at the head of the list);
  • Nb_inputs: the number of inputs of the network;
  • Nb_layer: the number of layers of the network.

A xor without values

F = fun (X) -> utils:sigmoid(X) end,
%% Layer parameters; the output layer (index 0) comes first.
L = [{1, 2, F},
     {2, 2, F}],
{Network, Input_list, Output_list, Network_size} = neuron:make_network(L, 2, 2).

The values returned by make_network will help us communicate with the network and train it.

The Output_list value contains the PIDs of the neurons in the output layer. This list must be passed as an argument to the neuron:connect_output/2 function to connect the network to an output. Be careful: only connect your outputs once the network has finished its training, otherwise your output process will be flooded with messages.

Network interaction

To send information to the network, you need an external process connected to the input layer of the network; send it the values to be transmitted and it will forward them to all the neurons of the input layer.

If the network was created using the make_network function, the list of PIDs of the processes used as inputs is returned in the value Input_list (see above).

If the network has been initialized manually, you must create the inputs yourself. Simply spawn a process running the neuron:input/3 function:

Input1 = spawn(fun () -> neuron:input(2, 1, C2) end),
Input2 = spawn(fun () -> neuron:input(2, 2, C2) end).

To send a value to an input, send the message {input, Value} to the created process. For example, to send the value 1 to the input Input1 created previously: Input1 ! {input, 1}.

Retrieve the output values of the network

Once the computations are complete, the network sends its results to the specified processes. (These processes are specified at creation if you built the neurons manually, or via the neuron:connect_output/2 function if you initialized an empty network.) The message sent is {done, Result, {PID, Layer, Rank}} with:

  • Result: the result of the computation;
  • PID: the PID of the neuron that sent the result;
  • Layer: the rank of the layer that sent the result;
  • Rank: the rank of the sender within its layer.
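
For example, here is a minimal sketch of how a connected output process might pattern-match this message (the printout is illustrative, not part of the library):

%% Wait for one result from the output layer and print it.
receive
    {done, Result, {Pid, Layer, Rank}} ->
        io:format("neuron ~p (layer ~p, rank ~p) -> ~p~n",
                  [Pid, Layer, Rank, Result])
end.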

Neuron training

To create the supervisor, we reuse the values returned by neuron:make_network:

Trainer = trainer:init(Network, Network_size, Input_list).

Trainer is the PID of the supervisor.

Launch a training

To start the training, several values have to be initialized: the training values and the training constants (see the training_value and training_constant documentation).

In our example, we want to recreate a xor. We will initialize our supervisor with a margin of error of 0, a speed of 1, and a maximum of 10000 iterations.

To start the training, we use trainer:launch/4:

Threshold = 0,
Primus_F = F,
Speed = 1,
Max_iter = 10000,

%% Training values
Training_list = [ {[1,0], [1]}, {[0,1], [1]}, {[0,0], [0]}, {[1,1], [0]}],

%% Start the training
trainer:launch(Trainer, Input_list, Training_list, {Threshold, Primus_F, Speed, Max_iter}).

Now that our network is trained, we can use the connect_output function to collect the results in the main process:

neuron:connect_output(self(), Output_list).
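
Putting the pieces together, here is a sketch of querying the trained network from the shell. It assumes Input_list holds the two input processes in network order, which is our reading of make_network's return value rather than documented behaviour:

%% Send the pair (1, 0) to the trained network and wait for the answer.
[In1, In2] = Input_list,
In1 ! {input, 1},
In2 ! {input, 0},
receive
    {done, Result, _From} -> Result  %% a xor should answer something close to 1
end.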