
anish-lakkapragada / Sealion

Licence: apache-2.0
The first machine learning framework that encourages learning ML concepts instead of memorizing class functions.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Sealion

Pyod
A Python Toolbox for Scalable Outlier Detection (Anomaly Detection)
Stars: ✭ 5,083 (+1728.42%)
Mutual labels:  data-science, data-analysis, neural-networks, unsupervised-learning
Igel
a delightful machine learning tool that allows you to train, test, and use models without writing code
Stars: ✭ 2,956 (+963.31%)
Mutual labels:  data-science, data-analysis, neural-networks
Ml Workspace
🛠 All-in-one web-based IDE specialized for machine learning and data science.
Stars: ✭ 2,337 (+740.65%)
Mutual labels:  data-science, data-analysis, neural-networks
Data Science Hacks
Data Science Hacks consists of tips, tricks to help you become a better data scientist. Data science hacks are for all - beginner to advanced. Data science hacks consist of python, jupyter notebook, pandas hacks and so on.
Stars: ✭ 273 (-1.8%)
Mutual labels:  data-science, data-analysis
Dash.jl
Dash for Julia - A Julia interface to the Dash ecosystem for creating analytic web applications in Julia. No JavaScript required.
Stars: ✭ 248 (-10.79%)
Mutual labels:  data-science, modeling
Darwinexlabs
Datasets, tools and more from Darwinex Labs - Prop Investing Arm & Quant Team @ Darwinex
Stars: ✭ 248 (-10.79%)
Mutual labels:  data-science, neural-networks
Datascience
Curated list of Python resources for data science.
Stars: ✭ 3,051 (+997.48%)
Mutual labels:  data-science, data-analysis
copulae
Multivariate data modelling with Copulas in Python
Stars: ✭ 96 (-65.47%)
Mutual labels:  modeling, data-analysis
Datacamp Python Data Science Track
All the slides, accompanying code and exercises all stored in this repo. 🎈
Stars: ✭ 250 (-10.07%)
Mutual labels:  data-science, neural-networks
Urs
Universal Reddit Scraper - A comprehensive Reddit scraping command-line tool written in Python.
Stars: ✭ 275 (-1.08%)
Mutual labels:  data-science, data-analysis
Flux.jl
Relax! Flux is the ML library that doesn't make you tensor
Stars: ✭ 3,358 (+1107.91%)
Mutual labels:  data-science, neural-networks
Cjworkbench
The data journalism platform with built in training
Stars: ✭ 244 (-12.23%)
Mutual labels:  data-science, data-analysis
computational-neuroscience
Short undergraduate course taught at University of Pennsylvania on computational and theoretical neuroscience. Provides an introduction to programming in MATLAB, single-neuron models, ion channel models, basic neural networks, and neural decoding.
Stars: ✭ 36 (-87.05%)
Mutual labels:  modeling, data-analysis
Xlearn
High performance, easy-to-use, and scalable machine learning (ML) package, including linear model (LR), factorization machines (FM), and field-aware factorization machines (FFM) for Python and CLI interface.
Stars: ✭ 2,968 (+967.63%)
Mutual labels:  data-science, data-analysis
Deep Learning Machine Learning Stock
Stock for Deep Learning and Machine Learning
Stars: ✭ 240 (-13.67%)
Mutual labels:  data-science, data-analysis
Ntm One Shot Tf
One Shot Learning using Memory-Augmented Neural Networks (MANN) based on Neural Turing Machine architecture in Tensorflow
Stars: ✭ 238 (-14.39%)
Mutual labels:  data-science, neural-networks
Keras
Deep Learning for humans
Stars: ✭ 53,476 (+19135.97%)
Mutual labels:  data-science, neural-networks
Awesome Distributed Deep Learning
A curated list of awesome Distributed Deep Learning resources.
Stars: ✭ 277 (-0.36%)
Mutual labels:  data-science, neural-networks
Tablesaw
Java dataframe and visualization library
Stars: ✭ 2,785 (+901.8%)
Mutual labels:  data-science, data-analysis
Deepgraph
Analyze Data with Pandas-based Networks. Documentation:
Stars: ✭ 232 (-16.55%)
Mutual labels:  data-science, data-analysis

SeaLion

Badges: Python version · license · total lines · issues · PyPI · repo size · Deploy to PyPI

SeaLion is designed to teach aspiring ML engineers today's popular machine learning concepts in a way that gives both intuition and ways of applying them. We do this through concise algorithms that do the job with the least jargon possible, and examples that guide you through every step of the way.

Quick Demo


SeaLion in Action

General Usage

For most classifiers you can just do the following (we'll use Logistic Regression as an example here):

from sealion.regression import LogisticRegression
log_reg = LogisticRegression()

to initialize, and then to train:

log_reg.fit(X_train, y_train) 

and for testing :

y_pred = log_reg.predict(X_test) 
evaluation = log_reg.evaluate(X_test, y_test) 
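
As a minimal end-to-end sketch of that flow (loading a dataset through scikit-learn purely for illustration - SeaLion itself does not depend on it):

# minimal sketch - the scikit-learn dataset loading is only for illustration
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sealion.regression import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)                    # train on the training split
y_pred = log_reg.predict(X_test)                 # predict on held-out data
evaluation = log_reg.evaluate(X_test, y_test)    # score the model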

For the unsupervised clustering algorithms, you may do:

from sealion.unsupervised_clustering import KMeans
kmeans = KMeans(k = 3)

and then to fit and predict:

predictions = kmeans.fit_predict(X) 

Neural networks are a bit more complicated, so you may want to check an example here.
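
For a taste of what that looks like, here is a rough sketch; the module paths, layer constructors, and method names below are assumptions based on the examples directory, so treat the linked example as the authoritative reference:

# rough sketch only - class and method names here are assumptions, not confirmed API
import sealion.neural_networks as nn

model = nn.models.NeuralNetwork()
model.add(nn.layers.Dense(64, 32))    # fully-connected layer (input size, output size)
model.add(nn.layers.ReLU())           # activation added as its own layer
model.add(nn.layers.Dense(32, 3))
model.add(nn.layers.Softmax())        # 3-class classification output
model.finalize(loss=nn.loss.CrossEntropy(), optimizer=nn.optimizers.Adam())

model.train(X_train, y_train, epochs=20)
y_pred = model.predict(X_test)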

The syntax of the APIs was designed to be easy to use and similar to that of most other ML libraries, to make sure both beginners and experts in the field can comfortably use SeaLion. Of course, none of the source code uses other ML frameworks.

Testimonials and Reddit Posts

"Super Expansive Python ML Library"

Analytics Vidhya calls SeaLion's algorithms beginner-friendly, efficient, and concise.

r/Python : r/Python Post

r/learnmachinelearning : r/learnmachinelearning Post

Installation

The package is available on PyPI. Install it like so:

pip install sealion

SeaLion only supports Python 3, so please make sure you are on the newest version.

General Information

SeaLion was built by Anish Lakkapragada, a freshman in high school, starting around Thanksgiving of 2020 and continuing into early 2021. The library is meant for beginners to use when working with standard datasets like iris, breast cancer, swiss roll, the moons dataset, MNIST, etc. The source code is much smaller than that of most other ML libraries (only about 4,000 lines), so it's pretty easy to contribute to. He hopes to spread machine learning to other high schoolers through this library.

Documentation

All documentation is currently being put on a website. However useful that may be, I highly recommend you check the examples posted on GitHub here to see how the APIs are used and how everything works.

Updates for v4.1 and up!

First things first - thank you for all of the support. The two reddit posts did much better than I expected (1.6k upvotes, about 200 comments) and I got a lot of feedback and advice. Thank you to anyone who participated in r/Python or r/learnmachinelearning.

SeaLion has also taken off with the posts. We have had 3 issues so far (1 closed) and have reached 195 stars and 20 forks. I wasn't expecting this and I am grateful to everyone who has shown their appreciation for this library.

Some issues have also popped up. Most of them can be solved by deleting sealion manually (going into the folder where the source is and just deleting it - not pip uninstall) and then reinstalling the usual way, but feel free to open an issue anytime.

In versions 4.1+ we are hoping to polish the library more. Currently 4.1 comes with Bernoulli Naive Bayes and we also have added precision, recall, and the f1 metric in the utils module. We are hoping to include Gaussian Mixture Models and Batch Normalization in the future. Code examples for these new algorithms will be created within a day or two after release. Thank you!
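
If you want to try the new metrics before those examples land, usage will presumably look something like the sketch below; the exact function names and argument order in the utils module are assumptions here, so check the examples once they are up:

import numpy as np
# hypothetical usage - metric function names and argument order are assumptions
from sealion.utils import precision, recall, f1

y_pred = np.array([1, 0, 0, 1, 1])
y_test = np.array([1, 0, 1, 1, 0])

print(precision(y_pred, y_test))   # fraction of predicted positives that are correct
print(recall(y_pred, y_test))      # fraction of actual positives that were found
print(f1(y_pred, y_test))          # harmonic mean of precision and recall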

Updates for v3.0.0!

SeaLion v3.0 and up has had a lot of major milestones.

The first thing is that code examples (in Jupyter notebooks) for basically all of the modules in sealion have been put into the examples directory. Most of them go over using actual datasets like iris, breast cancer, moons, blobs, MNIST, etc. These were all built using v3.0.8 - hopefully that clears up any confusion. I hope you enjoy them.

Perhaps the biggest change in v3.0 is how we have changed the Cython compilation. A quick primer on Cython if you are unfamiliar - you take your Python code (in .py files), add some return types and type declarations, put that in a .pyx file, and compile it to a .so file. The .so file is then imported by the Python module you use.

The main bug fixed was that a .so file is actually specific to the architecture of the user. I use macOS and compiled all my files to .so, so prior to v3.0 I would just give those .so files to everybody else. However, other architectures and OSs like Ubuntu would not be able to recognize those files. Instead, what we do now is just store the .pyx files (universal for all computers) in the source code, and the first time you import sealion all of those .pyx files get compiled into .so files (so they will work for whatever you are using). This means the first import will take about 40 seconds, but after that it will be as quick as any other import.
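
To illustrate the general idea (this is not SeaLion's exact build code, which may differ), Cython's pyximport can compile .pyx modules into machine-specific .so files the first time they are imported:

# general pattern only - SeaLion's actual first-import compilation step may differ
import pyximport
pyximport.install(language_level=3)   # hook the import system to build .pyx files on demand

import my_cython_module               # hypothetical .pyx module, compiled to .so for this machine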

Machine Learning Algorithms

The machine learning algorithms of SeaLion are listed below. Please note that the structure of the listing isn't meant to resemble that of SeaLion's APIs. Of course, new algorithms are being made right now.

  1. Deep Neural Networks

    • Optimizers
      • Gradient Descent (and mini-batch gradient descent)
      • Momentum Optimization w/ Nesterov Accelerated Gradient
      • Stochastic gradient descent (w/ momentum + nesterov)
      • AdaGrad
      • RMSprop
      • Adam
      • Nadam
    • Layers
      • Flatten (turn 2D+ data to 2D matrices)
      • Dense (fully-connected layers)
    • Regularization
      • Dropout
    • Activations
      • ReLU
      • Tanh
      • Sigmoid
      • Softmax
      • Leaky ReLU
      • ELU
      • SELU
      • Swish
    • Loss Functions
      • MSE (for regression)
      • CrossEntropy (for classification)
    • Transfer Learning
      • Save weights (in a pickle file)
      • reload them and then enter them into the same neural network
      • this is so you don't have to start training from scratch
  2. Regression

    • Linear Regression (Normal Equation, closed-form)
    • Ridge Regression (L2 regularization, closed-form solution)
    • Lasso Regression (L1 regularization)
    • Elastic-Net Regression
    • Logistic Regression
    • Softmax Regression
    • Exponential Regression
    • Polynomial Regression
  3. Dimensionality Reduction

    • Principal Component Analysis (PCA)
    • t-distributed Stochastic Neighbor Embedding (tSNE)
  4. Gaussian Mixture Models (GMMs)

    • unsupervised clustering with "soft" predictions
    • anomaly detection
    • AIC & BIC calculation methods
  5. Unsupervised Clustering

    • KMeans (w/ KMeans++)
    • DBSCAN
  6. Naive Bayes

    • Multinomial Naive Bayes
    • Gaussian Naive Bayes
    • Bernoulli Naive Bayes
  7. Trees

    • Decision Tree (with max_branches, min_samples regularization + CART training)
  8. Ensemble Learning

    • Random Forests
    • Ensemble/Voting Classifier
  9. Nearest Neighbors

    • k-nearest neighbors
  10. Utils (usage sketch below)

    • one_hot encoder function (one_hot())
    • plot confusion matrix function (confusion_matrix())
    • revert one hot encoding to 1D Array (revert_one_hot())
    • revert softmax predictions to 1D Array (revert_softmax())
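
Here is a quick sketch of how the utils functions listed above might be used; the exact argument names and order are assumptions, so check the examples directory for the real signatures:

import numpy as np
# one_hot(), revert_one_hot(), and confusion_matrix() are named above;
# the depth argument and the (y_pred, y_test) order are assumptions
from sealion.utils import one_hot, revert_one_hot, confusion_matrix

labels = np.array([0, 2, 1, 2])
encoded = one_hot(labels, depth=3)     # 1D class indices -> one-hot matrix
decoded = revert_one_hot(encoded)      # one-hot matrix -> 1D class indices

y_pred = np.array([0, 2, 1, 1])
confusion_matrix(y_pred, labels)       # plot the confusion matrix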

Algorithms in progress

Here are some of the algorithms we are working on right now.

  1. Batch Normalization
  2. AdaBelief Optimizer
  3. Barnes Hut t-SNE (please, please contribute to this one)

Contributing

First, install the required libraries:

pip install -r requirements.txt

If you feel you can do something better than how it is right now in SeaLion, please do! Believe me, you will find great joy in simplifying my code (probably using numpy) and speeding it up. The major problem right now is speed: some algorithms like PCA can handle 10,000+ data points, whereas tSNE doesn't scale because of its O(n^2) time complexity. We have addressed this with Cython + parallel processing (thanks, joblib), so algorithms (aside from neural networks) are working well with <1,000 points. Getting to the next level will need some help.

Most of the modules I use are numpy, pandas, joblib, and tqdm. I prefer using fewer dependencies in the code, so please keep them to a minimum.

Other than that, thanks for contributing!

Acknowledgements

Plenty of articles and people helped me along the way. One of the tougher questions I dealt with was automatic differentiation in neural networks, for which this tutorial helped me. I also got some help on the O(n^2) time complexity problem of the denominator of t-SNE from this article, and understood the mathematical derivation of the gradients (the original paper didn't go over it) from here. I also used the PCA method from handsonml, so thanks for that too, Aurélien Géron. Lastly, special thanks to Evan M. Kim and Peter Washington for helping make the normal equation and the Cauchy distribution in tSNE make sense. Also thanks to @Kento Nishi for helping me understand open source.

Feedback, comments, or questions

If you have any feedback or something you would like to tell me, please do not hesitate to share! Feel free to comment here on GitHub or reach out to me through [email protected]!

© Anish Lakkapragada 2021
