
MikhailKravets / NeuroFlow

License: MIT
Awesome deep learning crate

Programming Languages

rust

Projects that are alternatives to or similar to NeuroFlow

The Math Behind A Neural Network
📄 The math behind the neural network used for Olivia
Stars: ✭ 119 (+72.46%)
Mutual labels:  backpropagation
NMSIS
Nuclei Microcontroller Software Interface Standard Development Repo
Stars: ✭ 24 (-65.22%)
Mutual labels:  nn
ODTbrain
Python library for diffraction tomography with the Born and Rytov approximations
Stars: ✭ 19 (-72.46%)
Mutual labels:  backpropagation
Deeplearning Notes
Notes for Deep Learning Specialization Courses led by Andrew Ng.
Stars: ✭ 126 (+82.61%)
Mutual labels:  backpropagation
Neural Network From Scratch
Ever wondered how to code your Neural Network using NumPy, with no frameworks involved?
Stars: ✭ 230 (+233.33%)
Mutual labels:  backpropagation
Pytorch Book
PyTorch tutorials and fun projects including neural talk, neural style, poem writing, anime generation (《深度学习框架PyTorch:入门与实战》)
Stars: ✭ 9,546 (+13734.78%)
Mutual labels:  nn
Selfdrivingcar
A collection of all projects pertaining to different layers in the SDC software stack
Stars: ✭ 107 (+55.07%)
Mutual labels:  backpropagation
Character-recognition-by-neural-network
Back Propagation, Python
Stars: ✭ 32 (-53.62%)
Mutual labels:  backpropagation
Backpropagation
Implementing multilayer neural networks through backpropagation using Java.
Stars: ✭ 242 (+250.72%)
Mutual labels:  backpropagation
yann
Yet Another Neural Network Library 🤔
Stars: ✭ 26 (-62.32%)
Mutual labels:  nn
Teaching Monolith
Data science teaching materials
Stars: ✭ 126 (+82.61%)
Mutual labels:  backpropagation
Backprop
Heterogeneous automatic differentiation ("backpropagation") in Haskell
Stars: ✭ 154 (+123.19%)
Mutual labels:  backpropagation
Densenet
MXNet implementation for DenseNet
Stars: ✭ 28 (-59.42%)
Mutual labels:  nn
Nn
A tiny neural network 🧠
Stars: ✭ 119 (+72.46%)
Mutual labels:  backpropagation
dcgan vae pytorch
dcgan combined with vae in pytorch!
Stars: ✭ 110 (+59.42%)
Mutual labels:  nn
Simple Neural Networks
Simple neural networks based only on Numpy
Stars: ✭ 114 (+65.22%)
Mutual labels:  backpropagation
Credit-Card-Fraud
No description or website provided.
Stars: ✭ 17 (-75.36%)
Mutual labels:  nn
ai-backpropagation
The backpropagation algorithm explained and demonstrated.
Stars: ✭ 20 (-71.01%)
Mutual labels:  backpropagation
cheatsheets-ai-fork
Cheat Sheets for deep learning and machine learning.
Stars: ✭ 21 (-69.57%)
Mutual labels:  nn
neat-python
Python implementation of the NEAT neuroevolution algorithm
Stars: ✭ 32 (-53.62%)
Mutual labels:  nn



NeuroFlow is a fast neural network (deep learning) crate for Rust. It relies on three pillars: speed, reliability, and speed again.

Hello, everyone! Work on the crate is currently suspended because I am a bit too busy to continue it :( Thank you all!

How to use

Let's try to approximate the very simple function 0.5*sin(e^x) - cos(e^(-x)).

extern crate neuroflow;

use neuroflow::FeedForward;
use neuroflow::data::DataSet;
use neuroflow::activators::Type::Tanh;


fn main(){
    /*
        Define a neural network with 1 neuron in the input layer and 4 hidden layers.
        Since our function returns a single value, it is reasonable to have 1 neuron
        in the output layer as well.
    */
    let mut nn = FeedForward::new(&[1, 7, 8, 8, 7, 1]);
    
    /*
        Define a DataSet.
        
        DataSet is the type that significantly simplifies work with the neural network.
        The majority of its functionality is still under development :(
    */
    let mut data: DataSet = DataSet::new();
    let mut i = -3.0;
    
    // Push the data into the DataSet (the push method accepts two slices: input data and expected output)
    while i <= 2.5 {
        data.push(&[i], &[0.5*(i.exp().sin()) - (-i.exp()).cos()]);
        i += 0.05;
    }
    
    // Here we set the necessary parameters and train the neural network on our DataSet for 50,000 iterations
    nn.activation(Tanh)
        .learning_rate(0.01)
        .train(&data, 50_000);

    let mut res;
    
    // Let's check the result
    i = 0.0;
    while i <= 0.3{
        res = nn.calc(&[i])[0];
        println!("for [{:.3}], [{:.3}] -> [{:.3}]", i, 0.5*(i.exp().sin()) - (-i.exp()).cos(), res);
        i += 0.07;
    }
}

Expected output

for [0.000], [-0.120] -> [-0.119]
for [0.070], [-0.039] -> [-0.037]
for [0.140], [0.048] -> [0.050]
for [0.210], [0.141] -> [0.141]
for [0.280], [0.240] -> [0.236]
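
To get a rough aggregate measure of fit instead of spot checks, you could append something like the following to the end of main (a minimal sketch, not from the original README; it reuses only the DataSet and FeedForward calls already shown):

// Sketch: compute the mean squared error over the whole DataSet
let mut mse = 0.0;
for j in 0..data.len() {
    let (x, y) = data.get(j);      // input slice and expected output
    let out = nn.calc(x)[0];       // network prediction
    mse += (out - y[0]).powi(2);
}
mse /= data.len() as f64;
println!("MSE over the training data: {:.6}", mse);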

But we don't want to lose our trained network so easily, so there is functionality to save and restore neural networks from files.

    /*
        In order to save the neural network to a file, call the save function
        from the neuroflow::io module.
        
        The first argument is a mutable reference to the neural network being saved;
        the second argument is the path to the file.
    */
    neuroflow::io::save(&mut nn, "test.flow").unwrap();
    
    /*
        After we have saved the neural network to the file, we can restore it by
        calling the load function from the neuroflow::io module.
        
        We must specify the type of the new_nn variable.
        The only argument of the load function is the path to the file containing
        the neural network.
    */
    let mut new_nn: FeedForward = neuroflow::io::load("test.flow").unwrap();
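
Optionally, you could verify the round trip by comparing the outputs of the saved and restored networks on the same input (a minimal sketch, not part of the original example):

    // Sanity check (sketch): the restored network should reproduce
    // the outputs of the network we saved.
    let x = [0.14];
    let original = nn.calc(&x)[0];
    let restored = new_nn.calc(&x)[0];
    println!("original: {:.3}, restored: {:.3}", original, restored);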

Classic XOR problem (with a non-classic way of supplying the data)

Let's create a file named TerribleTom.csv in the project root. This file should have the following contents:

0,0,-,0
0,1,-,1
1,0,-,1
1,1,-,0

where - is the delimiter that separates the input vector from its desired output vector.

extern crate neuroflow;

use neuroflow::FeedForward;
use neuroflow::data::DataSet;
use neuroflow::activators::Type::Tanh;


fn main(){
    /*
        Define a neural network with 2 neurons in the input layer,
        1 hidden layer (with 2 neurons),
        and 1 neuron in the output layer.
    */
    let mut nn = FeedForward::new(&[2, 2, 1]);
    
    // Here we load data for XOR from the file `TerribleTom.csv`
    let mut data = DataSet::from_csv("TerribleTom.csv");
    
    // Set parameters and train the network
    nn.activation(Tanh)
        .learning_rate(0.1)
        .momentum(0.15)
        .train(&data, 20_000);

    let mut res;
    let mut d;
    for i in 0..data.len(){
        res = nn.calc(data.get(i).0)[0];
        d = data.get(i);
        println!("for [{:.3}, {:.3}], [{:.3}] -> [{:.3}]", d.0[0], d.0[1], d.1[0], res);
    }
}

Expected output

for [0.000, 0.000], [0.000] -> [0.000]
for [1.000, 0.000], [1.000] -> [1.000]
for [0.000, 1.000], [1.000] -> [1.000]
for [1.000, 1.000], [0.000] -> [0.000]
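
If you'd rather not create a CSV file at all, the same XOR data set could be built directly in code with the push method from the first example (a sketch equivalent to TerribleTom.csv):

// Sketch: build the XOR data in code instead of reading it from TerribleTom.csv
let mut data = DataSet::new();
data.push(&[0.0, 0.0], &[0.0]);
data.push(&[0.0, 1.0], &[1.0]);
data.push(&[1.0, 0.0], &[1.0]);
data.push(&[1.0, 1.0], &[0.0]);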

Installation

Add the following to your project's Cargo.toml:

[dependencies]
neuroflow = "0.1.3"

Then add this line to your crate root file:

extern crate neuroflow;
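
Note that on the 2018 edition of Rust and newer, the extern crate declaration is optional; importing items with use is enough:

use neuroflow::FeedForward; // extern crate neuroflow; is implied on Rust 2018+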

License

MIT License

Attribution

The origami bird in the logo was made by Freepik.
