MNIST Handwritten Digit Classifier

An implementation of a multilayer neural network using the numpy library. The implementation is a modified version of Michael Nielsen's code from the book Neural Networks and Deep Learning.

Brief Background:

If you are familiar with the basics of neural networks, feel free to skip this section. For total beginners who landed here before reading anything about neural networks:

Sigmoid Neuron

  • Neural networks are made up of building blocks known as sigmoid neurons, named so because their output follows the sigmoid function.
  • The x_j are inputs, each weighted by a weight w_j, and the neuron has an intrinsic bias b. The output of the neuron is known as its activation, a.

Note: activation functions other than the sigmoid are also in use, but this much is sufficient background for beginners.

  • A neural network is built by stacking layers of neurons, and is defined by the weights of its connections and the biases of its neurons. The activations are the outputs the network produces for a given input; a minimal sketch of a single neuron follows this list.
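As a minimal illustrative sketch (the values and variable names here are made up, not taken from this repository), a single sigmoid neuron in numpy:

import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # inputs x_j
w = np.array([0.8, 0.1, -0.4])   # weights w_j
b = 0.2                          # bias b

a = sigmoid(np.dot(w, x) + b)    # activation a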

Why a modified implementation?

Michael Nielsen's book and Stanford's Machine Learning course by Prof. Andrew Ng are recommended as good resources for beginners. At times it got confusing for me while referring to both resources:

MATLAB data structures are 1-indexed, while numpy's are 0-indexed. Some parameters of a neural network are not defined for the input layer, which creates a mismatch between the mathematical equations in the book and the indices in code. For example, following the book, the bias vector of the second layer of the network is referred to as bias[0], since the input layer (the first layer) has no bias vector. I found that inconvenient to work with.
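For instance (an illustrative sketch of the indexing mismatch; neither snippet is code from the book or from this repository):

import numpy as np

b1 = np.random.randn(3, 1)   # biases of layer 1 (the first hidden layer)
b2 = np.random.randn(1, 1)   # biases of layer 2 (the output layer)

# Book-style storage skips the input layer, so layer 1's biases sit at index 0.
biases_book = [b1, b2]

# This implementation keeps a placeholder at index 0, so list index == layer index.
biases_here = [np.array([[]]), b1, b2]
assert biases_here[1] is b1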

I am fond of Scikit-Learn's API style, hence my class follows a similar code structure. While it theoretically resembles the book and Stanford's course, it exposes simple methods such as fit, predict, and validate to train, test, and validate the model respectively, along the lines of the sketch below.
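A hypothetical usage sketch (the class name NeuralNetwork and the constructor and argument names are assumptions for illustration; only fit, predict, and validate are named above):

# Hypothetical usage; class and argument names are assumptions.
net = NeuralNetwork(sizes=[784, 30, 10])  # e.g. MNIST: 784 inputs, 10 classes
net.fit(training_data)                    # train
net.validate(validation_data)             # validate
net.predict(x)                            # predict a single example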

Naming and Indexing Convention:

I have followed a particular convention in indexing quantities. The dimensions of the quantities are listed according to the figure below.

[Figure: a small labelled neural network]

Layers

  • The input layer is the 0th layer, and the output layer is the Lth layer. Number of layers: N_L = L + 1.
sizes = [2, 3, 1]
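For instance, with this sizes list (a small illustrative check, not code from the repository):

sizes = [2, 3, 1]        # 2 inputs, a hidden layer of 3 neurons, 1 output
num_layers = len(sizes)  # N_L = 3
L = num_layers - 1       # index of the output layer, L = 2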

Weights

  • Weights in this implementation are a list of matrices (numpy.ndarrays). weights[l] is the matrix of weights entering the lth layer of the network (denoted w^l).
  • An element of this matrix is denoted w^l_jk. It sits in the jth row, which collects the weights entering the jth neuron of layer l, one from each neuron k of the (l-1)th layer.
  • No weights enter the input layer, hence weights[0] is a redundant placeholder; weights[1] holds the weights entering layer 1, and so on.
weights = [ [[]],          # weights[0]: redundant, no weights enter the input layer
            [[a, b],       # weights[1]: 3 x 2, one row per neuron of layer 1
             [c, d],
             [e, f]],
            [[p, q, r]] ]  # weights[2]: 1 x 3, one row for the single output neuron
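One way to build such a list in numpy (a sketch under the convention above; the repository may initialize its weights differently):

import numpy as np

sizes = [2, 3, 1]
# weights[0] is a redundant placeholder; weights[l] has shape
# (sizes[l], sizes[l - 1]), one row per neuron of layer l.
weights = [np.array([[]])] + \
          [np.random.randn(sizes[l], sizes[l - 1]) for l in range(1, len(sizes))]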

Biases

  • Biases in this implementation are a list of vectors (numpy.ndarrays). biases[l] is the vector of biases of the neurons in the lth layer of the network (denoted b^l).
  • An element of this vector, denoted b^l_j, is the bias of the jth neuron in layer l.
  • The input layer has no biases, hence biases[0] is a redundant placeholder; biases[1] holds the biases of the neurons of layer 1, and so on.
biases = [ [[],     # biases[0]: redundant, the input layer has no biases
            []],
           [[0],    # biases[1]: one bias per neuron of layer 1
            [1],
            [2]],
           [[0]] ]  # biases[2]: the bias of the single output neuron
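The biases can be built the same way (again a sketch, assuming column vectors of shape (n, 1)):

import numpy as np

sizes = [2, 3, 1]
# biases[0] is a redundant placeholder; biases[l] is a column vector
# with one bias per neuron of layer l.
biases = [np.array([[]])] + \
         [np.random.randn(sizes[l], 1) for l in range(1, len(sizes))]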

'Z's

  • For an input vector x to a layer l, z is defined as: z^l = w^l · x + b^l.
  • The input layer provides the x vector as input to layer 1 and itself has no input, weight, or bias, hence zs[0] is redundant.
  • The dimensions of zs are the same as those of biases. (The zs are computed in the feedforward sketch after the Activations section.)

Activations

  • The activations of the lth layer are the outputs of the neurons of the lth layer, a^l = sigmoid(z^l), and serve as inputs to the (l+1)th layer. The dimensions of biases, zs, and activations are the same.
  • The input layer provides the x vector as input to layer 1, hence activations[0] is simply x, the input training example.
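Tying the conventions together, a feedforward pass might look like this (an illustrative sketch; the repository's actual method names may differ):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(x, weights, biases):
    # Index-0 entries are redundant placeholders, per the conventions above.
    zs = [np.array([[]])]
    activations = [x]          # activations[0] is the input vector x
    for l in range(1, len(weights)):
        z = np.dot(weights[l], activations[l - 1]) + biases[l]  # z^l = w^l . a^(l-1) + b^l
        zs.append(z)
        activations.append(sigmoid(z))                          # a^l = sigmoid(z^l)
    return zs, activations

Given weights and biases lists built as sketched above, feedforward(np.random.randn(2, 1), weights, biases) returns the full lists of zs and activations, with activations[-1] being the network's output.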

Execution of the Neural Network

# To train and test the neural network, run:
python main.py