FlorianMuellerklein / Machine Learning

Machine learning library written in readable python code

Projects that are alternatives to or similar to Machine Learning

Sas kernel
A Jupyter kernel for SAS. This opens up all the data manipulation and analytics capabilities of your SAS system within a notebook interface. Use the Jupyter Notebook interface to execute SAS code and view results inline.
Stars: ✭ 162 (-0.61%)
Mutual labels:  jupyter-notebook
Bitcoin trading bot
This is the code for "Bitcoin Trading Bot" By Siraj Raval on Youtube
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook
Imagecompletion Dcgan
Image completion using deep convolutional generative adversarial nets in tensorflow
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook
Nlp With Python
Scikit-Learn, NLTK, Spacy, Gensim, Textblob and more
Stars: ✭ 2,197 (+1247.85%)
Mutual labels:  jupyter-notebook
Bigdata docker
Big Data Ecosystem Docker
Stars: ✭ 161 (-1.23%)
Mutual labels:  jupyter-notebook
Repo 2018
Deep Learning Summer School + Tensorflow + OpenCV cascade training + YOLO + COCO + CycleGAN + AWS EC2 Setup + AWS IoT Project + AWS SageMaker + AWS API Gateway + Raspberry Pi3 Ubuntu Core
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook
Pix2pix Film
An implementation of Pix2Pix in Tensorflow for use with frames from films
Stars: ✭ 162 (-0.61%)
Mutual labels:  jupyter-notebook
Scientific graphics in python
An electronic textbook on scientific graphics in Python
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook
Coursera Machine Learning Solutions Python
A repository with solutions to the assignments on Andrew Ng's machine learning MOOC on Coursera
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook
Learnpythonforresearch
This repository provides everything you need to get started with Python for (social science) research.
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook
Fsdl Text Recognizer 2021 Labs
Complete deep learning project developed in Full Stack Deep Learning, Spring 2021
Stars: ✭ 158 (-3.07%)
Mutual labels:  jupyter-notebook
Julia tutorials
Tutorials on Julia topics
Stars: ✭ 162 (-0.61%)
Mutual labels:  jupyter-notebook
Ai Toolkit Iot Edge
AI Toolkit for Azure IoT Edge
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook
Neural Nets Are Weird
Stars: ✭ 162 (-0.61%)
Mutual labels:  jupyter-notebook
Neuralnets
Deep Learning libraries tested on images and time series
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook
Image keras
Building an image classifier using keras
Stars: ✭ 162 (-0.61%)
Mutual labels:  jupyter-notebook
Keraspersonlab
Keras-tensorflow implementation of PersonLab (https://arxiv.org/abs/1803.08225)
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook
Blog posts
Blog posts for matatat.org
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook
Notes
Contains Example Programs and Notebooks for some courses at Bogazici University, Department of Computer Engineering
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook
Data Science Template
A starter template for Equinor data science / data engineering projects
Stars: ✭ 163 (+0%)
Mutual labels:  jupyter-notebook

Machine-Learning

Various machine learning algorithms broken down into basic, readable Python code. Useful for studying and learning how the algorithms work.

  • MultiLayerPerceptron.py - A basic multilayer perceptron neural network written with numpy, featuring weight decay regularization, learning rate decay, a softmax or logistic sigmoid output layer, and a tanh hidden layer.

  • LinearRegression.py - Gradient descent linear regression with L2 regularization.

  • LogisticRegression.py - Gradient descent logistic regression with L2 regularization.
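
Both regression modules share the same core update rule. As a point of reference, one gradient descent step with L2 regularization looks roughly like the following numpy sketch (an illustration of the idea, not the actual code in LinearRegression.py):

import numpy as np

def gradient_step(w, X, y, learning_rate = 0.01, l2 = 0.0):
    # One gradient descent update for least-squares linear regression.
    predictions = X.dot(w)              # current model output
    error = predictions - y             # residuals
    gradient = X.T.dot(error) / len(y)  # gradient of the mean squared error
    gradient += l2 * w                  # L2 (weight decay) penalty
    return w - learning_rate * gradient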

Usage

MultiLayerPerceptron

Parameters

-input (int): Size of the input layer; must match the number of features in the input dataset.

-hidden (int): Size of the hidden layer; more hidden neurons can model more complex data, at the risk of overfitting.

-output (int): Size of the output layer; must match the number of possible classes. Use 1 for binary classification.

-iterations (int): Controls the number of passes over the training data (i.e., epochs). Defaults to 50.

-learning_rate (float): Constant that controls how much the weights are updated on each iteration. Defaults to 0.01.

-l2_in (float): Weight decay regularization term for the input layer weights; keeps weights low to avoid overfitting. Useful when the hidden layer is large. Defaults to 0 (off).

-l2_out (float): Weight decay regularization term for the hidden layer weights; keeps weights low to avoid overfitting. Useful when the hidden layer is large. Defaults to 0 (off).

-momentum (float): Adds a fraction of the previous weight update to the current weight update, which helps keep the network from settling into a local minimum. A high value can increase the learning speed but risks overshooting the minimum; a low value slows learning and can still get stuck in a local minimum. Defaults to 0 (off).

-rate_decay (float): How much to decrease the learning rate on each iteration. The idea is to start with a high learning rate to avoid local minima and then slow down as the global minimum is approached. Defaults to 0 (off).

-output_layer (string): Which activation function to use for the output layer. Currently accepts 'logistic' for logistic sigmoid or 'softmax' for softmax. Use softmax when the outputs are mutually exclusive. Defaults to 'logistic'.

-verbose (bool): Whether to print current error rate while training. Defaults to True.
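
For example, a fully configured network might look like this (a sketch that assumes the parameters above map directly onto MLP_Classifier keyword arguments):

NN = MLP_Classifier(64, 100, 10,
                    iterations = 100, learning_rate = 0.05,
                    momentum = 0.9, rate_decay = 0.0001,
                    l2_in = 0.001, l2_out = 0.001,
                    output_layer = 'softmax', verbose = False)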

Fitting and predicting

  1. Initialize the network, setting the size of each layer.
NN = MLP_Classifier(64, 100, 10)
  2. Train the network with the training dataset. The training dataset must be in the following format, with the y values one-hot encoded. The demo function of the MLP contains an example of importing data with numpy and getting it into the appropriate format.
	[[[x1, x2, x3, ..., xn], [y1, y2, ..., yn]],
	 [[x1, x2, x3, ..., xn], [y1, y2, ..., yn]],
	 ...
	 [[x1, x2, x3, ..., xn], [y1, y2, ..., yn]]]
NN.fit(train)
  3. Make predictions on the testing dataset, given in the same format as the training dataset but without the y values. Returns a list of predictions.
NN.predict(X_test)
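
Putting the three steps together, a minimal end-to-end sketch (the file name, shapes, and train/test split here are hypothetical, and MLP_Classifier is assumed to be importable from MultiLayerPerceptron.py):

import numpy as np
from MultiLayerPerceptron import MLP_Classifier

# Hypothetical CSV: 64 feature columns followed by an integer class label.
data = np.loadtxt('digits.csv', delimiter=',')
X, labels = data[:, :-1], data[:, -1].astype(int)

# One-hot encode the labels: row i gets a 1 in column labels[i].
y = np.zeros((labels.size, labels.max() + 1))
y[np.arange(labels.size), labels] = 1

# Pair each input vector with its one-hot target, as fit() expects.
split = int(0.8 * len(X))
train = [[list(X[i]), list(y[i])] for i in range(split)]
X_test = X[split:]

NN = MLP_Classifier(64, 100, 10)
NN.fit(train)
predictions = NN.predict(X_test)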

Linear and Logistic Regression

Parameters

-learning_rate (float): Constant that controls how much the weights are updated on each iteration. Defaults to 0.01.

-iterations (int): Controls the number of passes over the training data (i.e., epochs). Defaults to 50.

-intercept (bool): Whether or not to fit an intercept. Defaults to True.

-L2 (float): Weight decay regularization term for the weights; keeps weights low to avoid overfitting. Defaults to 0 (off).

-tolerance (float): The error value at which training stops early. Defaults to 0 (off).

-verbose (bool): Whether to print current error rate while training. Defaults to True.

Fitting and predicting

  1. Initialize the linear model.
linearReg = LinReg(learning_rate = 0.1, iterations = 500, verbose = True, l2 = 0.001)
  2. Train the model with the training dataset. The training dataset must be a numpy array, with the X and y values separated into two different arrays.
linearReg.fit(X = X_train, y = y_train)
  3. Make predictions on the testing dataset, given in the same format as the training dataset but without the array of y values. Returns a list of predictions.
linearReg.predict(X_test)
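
A complete toy run (the data here is made up, and LinReg is assumed to be importable from LinearRegression.py):

import numpy as np
from LinearRegression import LinReg

# Hypothetical toy data: y = 2x + 1 plus a little noise.
X_train = np.linspace(0, 10, 100).reshape(-1, 1)
y_train = 2 * X_train.ravel() + 1 + np.random.randn(100) * 0.1
X_test = np.array([[2.5], [7.5]])

linearReg = LinReg(learning_rate = 0.01, iterations = 500, verbose = False, l2 = 0.001)
linearReg.fit(X = X_train, y = y_train)
print(linearReg.predict(X_test))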

Logistic regression's .predict takes one extra parameter. If labels is set to True, the predicted class is returned; otherwise the probability of the label being 1 is returned.

logit.predict(X_test, labels = True)
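
For example, to get both views of the same test set (assuming logit is an already-fitted logistic regression model from LogisticRegression.py):

probs = logit.predict(X_test)                   # probability that each label is 1
classes = logit.predict(X_test, labels = True)  # hard 0/1 class predictions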