jldbc / Numpy_neural_net

License: MIT
A simple neural network (multilayer perceptron) with backpropagation implemented in Python with NumPy

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Numpy neural net

Yolo Tf2
YOLO (all versions) implementation in Keras and TensorFlow 2.4
Stars: ✭ 695 (+2680%)
Mutual labels:  numpy
Automatic Watermark Detection
Project for Digital Image Processing
Stars: ✭ 754 (+2916%)
Mutual labels:  numpy
Ilearndeeplearning.py
This repository contains small projects related to Neural Networks and Deep Learning in general. Subjects are closely linked to articles I publish on Medium. I encourage you both to read them and to check how the code works in action.
Stars: ✭ 896 (+3484%)
Mutual labels:  numpy
Trax
Trax — Deep Learning with Clear Code and Speed
Stars: ✭ 6,666 (+26564%)
Mutual labels:  numpy
Machine learning refined
Notes, examples, and Python demos for the textbook "Machine Learning Refined" (published by Cambridge University Press).
Stars: ✭ 750 (+2900%)
Mutual labels:  numpy
Numba
NumPy aware dynamic Python compiler using LLVM
Stars: ✭ 7,090 (+28260%)
Mutual labels:  numpy
Chainer
A flexible framework of neural networks for deep learning
Stars: ✭ 5,656 (+22524%)
Mutual labels:  numpy
Skydetector
A Python implementation of Sky Region Detection in a Single Image for Autonomous Ground Robot Navigation (Shen and Wang, 2013)
Stars: ✭ 23 (-8%)
Mutual labels:  numpy
Human Detection And Tracking
Human-detection-and-Tracking
Stars: ✭ 753 (+2912%)
Mutual labels:  numpy
Ffmpeg
Docker build for FFmpeg on Ubuntu / Alpine / Centos 7 / Scratch
Stars: ✭ 828 (+3212%)
Mutual labels:  scratch
Madmom
Python audio and music signal processing library
Stars: ✭ 728 (+2812%)
Mutual labels:  numpy
Probabilistic robotics
Solutions to the exercises in the book "Probabilistic Robotics"
Stars: ✭ 734 (+2836%)
Mutual labels:  numpy
Numpy 100
100 numpy exercises (with solutions)
Stars: ✭ 7,681 (+30624%)
Mutual labels:  numpy
Notes Python
Python notes in Chinese
Stars: ✭ 6,127 (+24408%)
Mutual labels:  numpy
Naivecnn
A naive (very simple!) implementation of a convolutional neural network
Stars: ✭ 18 (-28%)
Mutual labels:  numpy
Mahotas
Computer Vision in Python
Stars: ✭ 673 (+2592%)
Mutual labels:  numpy
Pykaldi
A Python wrapper for Kaldi
Stars: ✭ 756 (+2924%)
Mutual labels:  numpy
Numpyro
Probabilistic programming with NumPy powered by JAX for autograd and JIT compilation to GPU/TPU/CPU.
Stars: ✭ 929 (+3616%)
Mutual labels:  numpy
Machinelearning
Machine learning algorithms implemented by pure numpy
Stars: ✭ 905 (+3520%)
Mutual labels:  numpy
Scratchapi
A library written in Java for accessing scratch.mit.edu via your Java application...
Stars: ✭ 5 (-80%)
Mutual labels:  scratch

NumPy Neural Network

This is a simple multilayer perceptron implemented from scratch in pure Python and NumPy.

This repo includes three- and four-layer neural networks (with one and two hidden layers, respectively), trained via batch gradient descent with backpropagation. The tunable parameters include (a minimal NumPy sketch of how they fit together follows this list):

  • Learning rate
  • Regularization lambda
  • Nodes per hidden layer
  • Number of output classes
  • Stopping criterion
  • Activation function
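
For illustration, a minimal sketch of how these pieces might fit together in pure NumPy is shown below. The names (`train_three_layer`, `relu`, and so on) are hypothetical and do not reflect this repo's actual code; the sketch covers only the three-layer (one hidden layer) case with ReLU activations and a softmax output.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def relu_grad(z):
    return (z > 0).astype(z.dtype)

def train_three_layer(X, y, hidden_nodes=32, num_classes=2,
                      learning_rate=0.01, reg_lambda=0.01, epochs=1000):
    """One-hidden-layer MLP trained with batch gradient descent and backpropagation."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=1.0 / np.sqrt(d), size=(d, hidden_nodes))
    b1 = np.zeros(hidden_nodes)
    W2 = rng.normal(scale=1.0 / np.sqrt(hidden_nodes), size=(hidden_nodes, num_classes))
    b2 = np.zeros(num_classes)
    losses = []

    for _ in range(epochs):
        # Forward pass: affine -> ReLU -> affine -> softmax
        z1 = X @ W1 + b1
        a1 = relu(z1)
        scores = a1 @ W2 + b2
        exp_scores = np.exp(scores - scores.max(axis=1, keepdims=True))
        probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)

        # Cross-entropy loss plus L2 regularization, tracked per epoch
        data_loss = -np.log(probs[np.arange(n), y]).mean()
        reg_loss = 0.5 * reg_lambda * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
        losses.append(data_loss + reg_loss)

        # Backward pass: softmax cross-entropy gradient, then chain rule through ReLU
        delta2 = probs.copy()
        delta2[np.arange(n), y] -= 1
        delta2 /= n
        dW2 = a1.T @ delta2 + reg_lambda * W2
        db2 = delta2.sum(axis=0)
        delta1 = (delta2 @ W2.T) * relu_grad(z1)
        dW1 = X.T @ delta1 + reg_lambda * W1
        db1 = delta1.sum(axis=0)

        # Full-batch gradient descent update
        W1 -= learning_rate * dW1
        b1 -= learning_rate * db1
        W2 -= learning_rate * dW2
        b2 -= learning_rate * db2

    return W1, b1, W2, b2, losses
```

The returned loss history is handy for producing the kind of tuning curves discussed below.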

A good starting point for this model is a learning rate of 0.01, a regularization lambda of 0.01, 32 nodes per hidden layer, and ReLU activations. These will differ according to the context in which the model is used.
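
As a hypothetical usage example, wiring those starting values into the sketch above might look like this (the toy data and `train_three_layer` are assumptions for illustration, not code from this repo):

```python
import numpy as np

# Toy two-class problem with a non-linear (XOR-like) boundary.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# Suggested starting point: lr=0.01, lambda=0.01, 32 hidden nodes, ReLU.
W1, b1, W2, b2, losses = train_three_layer(
    X, y, hidden_nodes=32, num_classes=2,
    learning_rate=0.01, reg_lambda=0.01, epochs=2000)
print(f"final training loss: {losses[-1]:.4f}")
```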

Here are some results from the three-layer model on some particularly tricky separation boundaries. The model generalizes well to non-linear patterns.

[Decision boundary plots]

Parameter tuning looked as follows:

[Parameter tuning plots]

As you can see, most of the patterns worked as expected. More data led to more stable training, more nodes led to a better model fit, increased regularization led to increased training loss, and a smaller learning rate caused a smoother but slower-moving training curve. Worth noting, however, is how extreme values of some of these parameters caused the model to become less stable. A very high number of observations or a very high learning rate, for example, caused erratic and sub-optimal behaviour during training. This indicates that there is still significant work to be done to optimize this model.
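
A sweep like the one behind these observations could be reproduced with the hypothetical sketch above, for example by varying the learning rate and comparing loss curves (again an illustration, not this repo's tuning code):

```python
# Learning-rate sweep on the toy data from the earlier example.
# Very large rates tend to produce the erratic, diverging curves described above.
for lr in (0.001, 0.01, 0.1, 1.0):
    *_, losses = train_three_layer(X, y, learning_rate=lr, epochs=500)
    print(f"lr={lr:<5} first loss {losses[0]:.3f}  final loss {losses[-1]:.3f}")
```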

Lessons learned:

  • Logistic activation functions really do complicate MLP training. Too low a learning rate, too many observations, and sigmoidal activation functions all made this model unstable, and even broke it in some cases (a short gradient comparison after this list illustrates the sigmoid issue).

  • These models are incredibly flexible. This simple network was able to approximate every function I threw its way.

  • Neural networks are hard. I have a newfound appreciation for the layers of abstraction that TensorFlow, Keras, etc. provide between programmer and network.
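
The first point above is essentially the vanishing-gradient problem. A quick numeric comparison (not part of this repo) shows why: the sigmoid's derivative is small almost everywhere, while ReLU's is 1 for any positive input.

```python
import numpy as np

z = np.array([-6.0, -2.0, 0.0, 2.0, 6.0])

# Sigmoid's derivative peaks at 0.25 and collapses toward 0 for large |z|,
# so repeated sigmoid layers can shrink the backpropagated gradient badly.
sig = 1.0 / (1.0 + np.exp(-z))
print(np.round(sig * (1 - sig), 4))   # ~[0.0025 0.105 0.25 0.105 0.0025]

# ReLU's derivative is 1 for every positive input, so the gradient is preserved there.
print((z > 0).astype(float))          # [0. 0. 0. 1. 1.]
```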

Thank you to WildML for providing a starting point for this project + code, and to Ian Goodfellow's book Deep Learning for background on the algorithms and parameter tuning.
