
mljar / Mljar Supervised

License: MIT
Automated Machine Learning Pipeline with Feature Engineering and Hyper-Parameters Tuning 🚀


Projects that are alternatives to or similar to Mljar Supervised

Auto ml
[UNMAINTAINED] Automated machine learning for analytics & production
Stars: ✭ 1,559 (+62.23%)
Mutual labels:  data-science, scikit-learn, automl, xgboost, hyperparameter-optimization, feature-engineering, lightgbm, automated-machine-learning
Tpot
A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
Stars: ✭ 8,378 (+771.8%)
Mutual labels:  data-science, scikit-learn, automl, xgboost, hyperparameter-optimization, random-forest, feature-engineering, automated-machine-learning
Hyperactive
A hyperparameter optimization and data collection toolbox for convenient and fast prototyping of machine-learning models.
Stars: ✭ 182 (-81.06%)
Mutual labels:  data-science, scikit-learn, xgboost, hyperparameter-optimization, feature-engineering, automated-machine-learning
My Data Competition Experience
A summary of my experience placing in the Top 5 of multiple machine learning and big data competitions, packed with practical tips. Enjoy!
Stars: ✭ 271 (-71.8%)
Mutual labels:  data-science, automl, xgboost, hyperparameter-optimization, feature-engineering, lightgbm
Hyperparameter hunter
Easy hyperparameter optimization and automatic result saving across machine learning algorithms and libraries
Stars: ✭ 648 (-32.57%)
Mutual labels:  data-science, scikit-learn, xgboost, hyperparameter-optimization, feature-engineering, lightgbm
Autogluon
AutoGluon: AutoML for Text, Image, and Tabular Data
Stars: ✭ 3,920 (+307.91%)
Mutual labels:  data-science, scikit-learn, automl, hyperparameter-optimization, automated-machine-learning
Autodl
Automated Deep Learning without ANY human intervention. 1st Solution for AutoDL [email protected]
Stars: ✭ 854 (-11.13%)
Mutual labels:  data-science, automl, feature-engineering, lightgbm, automated-machine-learning
Nni
An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Stars: ✭ 10,698 (+1013.22%)
Mutual labels:  data-science, automl, hyperparameter-optimization, feature-engineering, automated-machine-learning
Mlbox
MLBox is a powerful Automated Machine Learning python library.
Stars: ✭ 1,199 (+24.77%)
Mutual labels:  data-science, automl, xgboost, lightgbm, automated-machine-learning
AutoTabular
Automatic machine learning for tabular data. ⚡🔥⚡
Stars: ✭ 51 (-94.69%)
Mutual labels:  scikit-learn, xgboost, lightgbm, feature-engineering, automl
Lale
Library for Semi-Automated Data Science
Stars: ✭ 198 (-79.4%)
Mutual labels:  data-science, scikit-learn, automl, hyperparameter-optimization, automated-machine-learning
Featuretools
An open source python library for automated feature engineering
Stars: ✭ 5,891 (+513.01%)
Mutual labels:  data-science, scikit-learn, automl, feature-engineering, automated-machine-learning
Autoviz
Automatically Visualize any dataset, any size with a single line of code. Created by Ram Seshadri. Collaborators Welcome. Permission Granted upon Request.
Stars: ✭ 310 (-67.74%)
Mutual labels:  scikit-learn, automl, xgboost, automated-machine-learning
Machinejs
[UNMAINTAINED] Automated machine learning- just give it a data file! Check out the production-ready version of this project at ClimbsRocks/auto_ml
Stars: ✭ 412 (-57.13%)
Mutual labels:  data-science, scikit-learn, automl, automated-machine-learning
mindware
An efficient open-source AutoML system for automating the machine learning lifecycle, including feature engineering, neural architecture search, and hyper-parameter tuning.
Stars: ✭ 34 (-96.46%)
Mutual labels:  hyperparameter-optimization, feature-engineering, automl, automated-machine-learning
Xcessiv
A web-based application for quick, scalable, and automated hyperparameter tuning and stacked ensembling in Python.
Stars: ✭ 1,255 (+30.59%)
Mutual labels:  data-science, scikit-learn, hyperparameter-optimization, automated-machine-learning
Automl alex
State-of-the-art Automated Machine Learning python library for Tabular Data
Stars: ✭ 132 (-86.26%)
Mutual labels:  data-science, automl, xgboost, hyperparameter-optimization
Auptimizer
An automatic ML model optimization tool.
Stars: ✭ 166 (-82.73%)
Mutual labels:  data-science, automl, hyperparameter-optimization, automated-machine-learning
Lightautoml
LAMA - automatic model creation framework
Stars: ✭ 196 (-79.6%)
Mutual labels:  data-science, automl, feature-engineering, automated-machine-learning
Eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
Stars: ✭ 2,477 (+157.75%)
Mutual labels:  data-science, scikit-learn, xgboost, lightgbm

MLJAR Automated Machine Learning for Humans


Documentation: https://supervised.mljar.com/

Source Code: https://github.com/mljar/mljar-supervised


Automated Machine Learning 🚀

The mljar-supervised is an Automated Machine Learning Python package that works with tabular data. It is designed to save a data scientist's time. It abstracts the common way to preprocess the data, construct machine learning models, and perform hyper-parameters tuning to find the best model 🏆. It is not a black box: you can see exactly how the ML pipeline is constructed (with a detailed Markdown report for each ML model).

The mljar-supervised will help you with:

  • explaining and understanding your data (Automatic Exploratory Data Analysis),
  • trying many different machine learning models (Algorithm Selection and Hyper-Parameters tuning),
  • creating Markdown reports from the analysis, with details about all models (Automatic Documentation),
  • saving, re-running and loading the analysis and ML models.

It has four built-in modes of work:

  • Explain mode, which is ideal for explaining and understanding the data, with many data explanations: decision tree visualizations, linear model coefficients, permutation importances, and SHAP explanations,
  • Perform mode for building ML pipelines to use in production,
  • Compete mode that trains highly-tuned ML models with ensembling and stacking, intended for use in ML competitions,
  • Optuna mode that searches for highly-tuned ML models; use it when performance is the top priority and computation time is not limited (available from version 0.10.0).

Of course, you can further customize the details of each mode to meet your requirements, as sketched below.

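For example, a minimal customization sketch; it assumes the AutoML constructor parameters described in the mljar-supervised docs (algorithms, total_time_limit, eval_metric), and the values here are only illustrative:

from supervised.automl import AutoML

automl = AutoML(
    mode="Perform",                      # start from a built-in mode
    algorithms=["LightGBM", "Xgboost"],  # restrict the set of algorithms to try
    total_time_limit=600,                # overall training budget, in seconds
    eval_metric="logloss",               # metric to optimize
)
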
What's good in it? 💥

  • It uses many algorithms: Baseline, Linear, Random Forest, Extra Trees, LightGBM, Xgboost, CatBoost, Neural Networks, and Nearest Neighbors.
  • It can compute an Ensemble with the greedy algorithm from the Caruana paper.
  • It can stack models to build a level-2 ensemble (available in Compete mode or after setting the stack_models parameter; see the sketch after this list).
  • It can do feature preprocessing, like missing-values imputation and converting categoricals. What is more, it can also handle target preprocessing.
  • It can do advanced feature engineering, like Golden Features, Features Selection, and Text and Time Transformations.
  • It can tune hyper-parameters with a not-so-random search (random search over a defined set of values) and use hill climbing to fine-tune the final models.
  • It can compute a Baseline for your data, so you will know whether you need Machine Learning at all!
  • It has extensive explanations. This package trains simple Decision Trees with max_depth <= 5, so you can easily visualize them with the amazing dtreeviz to better understand your data.
  • The mljar-supervised uses simple linear regression and includes its coefficients in the summary report, so you can check which features are used the most in the linear model.
  • It cares about the explainability of models: for every algorithm, the feature importance is computed based on permutation. Additionally, for every algorithm the SHAP explanations are computed: feature importance, dependence plots, and decision plots (explanations can be switched off with the explain_level parameter).
  • There is automatic documentation for every ML experiment run with AutoML. The mljar-supervised creates Markdown reports from AutoML training, full of ML details, metrics, and charts.

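A short sketch of the two switches mentioned in the list above; stack_models and explain_level are AutoML constructor parameters described in the docs, and the values here are illustrative:

from supervised.automl import AutoML

# build a level-2 (stacked) ensemble even outside Compete mode;
# explain_level=1 keeps basic explanations, while 0 switches them off entirely
automl = AutoML(stack_models=True, explain_level=1)
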
Automatic Documentation

The AutoML Report

The report from running AutoML contains a table with information about each model's score and the time needed to train it. For each model there is a link, which you can click to see the model's details. The performance of all ML models is presented as scatter and box plots, so you can visually inspect which algorithms perform best 🏆.

AutoML leaderboard

The Decision Tree Report

An example of a Decision Tree summary with tree visualization. For classification tasks, additional metrics are provided:

  • confusion matrix
  • threshold (optimized in the case of binary classification task)
  • F1 score
  • Accuracy
  • Precision, Recall, MCC

Decision Tree summary

The LightGBM Report

An example of a LightGBM summary:

LightGBM summary

Available Modes 📚

Details about the AutoML modes are presented in a table in the docs.

Explain

automl = AutoML(mode="Explain")

It is aimed at users who want to explain and understand their data.

  • It uses a 75%/25% train/test split.
  • It uses: Baseline, Linear, Decision Tree, Random Forest, Xgboost, and Neural Network algorithms, plus an ensemble.
  • It has full explanations: learning curves, importance plots, and SHAP plots.

Perform

automl = AutoML(mode="Perform")

It should be used when you want to train a model for real-life use cases.

  • It uses 5-fold CV.
  • It uses: Linear, Random Forest, LightGBM, Xgboost, CatBoost, and Neural Network, with ensembling.
  • It has learning curves and importance plots in reports.

Compete

automl = AutoML(mode="Compete")

It should be used for machine learning competitions.

  • It adapts the validation strategy to the dataset size and total_time_limit (see the sketch after this list). It can be: an 80/20 train/test split, 5-fold CV, or 10-fold CV.
  • It uses: Linear, Decision Tree, Random Forest, Extra Trees, LightGBM, Xgboost, CatBoost, Neural Network, and Nearest Neighbors, with ensembling and stacking.
  • It has only learning curves in the reports.
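
A sketch of a longer Compete run; total_time_limit is the overall budget in seconds, and the value here is illustrative (X_train and y_train stand for your training data, as in the examples below):

from supervised.automl import AutoML

# a 4-hour Compete run; the validation strategy (train/test split,
# 5-fold or 10-fold CV) is picked from the data size and this budget
automl = AutoML(mode="Compete", total_time_limit=4 * 3600)
automl.fit(X_train, y_train)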

Optuna

automl = AutoML(mode="Optuna", optuna_time_budget=3600)

It should be used when performance is the most important and time is not limited.

  • It uses 10-fold CV.
  • It uses: Random Forest, Extra Trees, LightGBM, Xgboost, and CatBoost. These algorithms are tuned by the Optuna framework for optuna_time_budget seconds each (see the sketch after this list). They are tuned on the original data, without advanced feature engineering.
  • It uses advanced feature engineering, stacking, and ensembling; the hyperparameters found for the original data are reused in those steps.
  • It produces learning curves in the reports.
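
A sketch combining the per-algorithm Optuna budget with an overall limit; both parameters appear in the docs, and the values here are illustrative:

from supervised.automl import AutoML

# tune each algorithm with Optuna for 30 minutes,
# but cap the whole experiment at 8 hours
automl = AutoML(mode="Optuna", optuna_time_budget=1800, total_time_limit=8 * 3600)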

Examples

👉 Binary Classification Example

There is a simple interface available with fit and predict methods.

import pandas as pd
from sklearn.model_selection import train_test_split
from supervised.automl import AutoML

# load the Adult census dataset
df = pd.read_csv(
    "https://raw.githubusercontent.com/pplonski/datasets-for-start/master/adult/data.csv",
    skipinitialspace=True,
)
X_train, X_test, y_train, y_test = train_test_split(
    df[df.columns[:-1]], df["income"], test_size=0.25
)

# train models with default settings
automl = AutoML()
automl.fit(X_train, y_train)

# compute predictions on test data
predictions = automl.predict(X_test)

AutoML fit will print:

Create directory AutoML_1
AutoML task to be solved: binary_classification
AutoML will use algorithms: ['Baseline', 'Linear', 'Decision Tree', 'Random Forest', 'Xgboost', 'Neural Network']
AutoML will optimize for metric: logloss
1_Baseline final logloss 0.5519845471086654 time 0.08 seconds
2_DecisionTree final logloss 0.3655910192804364 time 10.28 seconds
3_Linear final logloss 0.38139916864708445 time 3.19 seconds
4_Default_RandomForest final logloss 0.2975204390214936 time 79.19 seconds
5_Default_Xgboost final logloss 0.2731086827200411 time 5.17 seconds
6_Default_NeuralNetwork final logloss 0.319812276905242 time 21.19 seconds
Ensemble final logloss 0.2731086821194617 time 1.43 seconds
  • the AutoML results in a Markdown report
  • the Xgboost Markdown report; take a look at the amazing dependence plots produced by the SHAP package 💖
  • the Decision Tree Markdown report; take a look at the beautiful tree visualization ✨
  • the Logistic Regression Markdown report; take a look at the coefficients table, and compare the SHAP plots between Xgboost, Decision Tree, and Logistic Regression ☕️

👉 Multi-Class Classification Example

Example code for classification on the optical recognition of handwritten digits dataset. This code runs in less than 30 minutes and reaches ~98% test accuracy.

import pandas as pd
# scikit-learn utilities
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
# mljar-supervised package
from supervised.automl import AutoML

# load the data
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    pd.DataFrame(digits.data), digits.target, stratify=digits.target, test_size=0.25,
    random_state=123
)

# train models with AutoML
automl = AutoML(mode="Perform")
automl.fit(X_train, y_train)

# compute the accuracy on test data
predictions = automl.predict_all(X_test)
print(predictions.head())
print("Test accuracy:", accuracy_score(y_test, predictions["label"].astype(int)))

👉 Regression Example

Regression example on the Boston house prices data. On test data it scores ~10.85 mean squared error (MSE). Note that load_boston was removed in scikit-learn 1.2, so this example requires an older scikit-learn version.

import numpy as np
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from supervised.automl import AutoML # mljar-supervised

# Load the data
housing = load_boston()
X_train, X_test, y_train, y_test = train_test_split(
    pd.DataFrame(housing.data, columns=housing.feature_names),
    housing.target,
    test_size=0.25,
    random_state=123,
)

# train models with AutoML
automl = AutoML(mode="Explain")
automl.fit(X_train, y_train)

# compute the MSE on test data
predictions = automl.predict(X_test)
print("Test MSE:", mean_squared_error(y_test, predictions))

👉 More Examples

Documentation 📚

For details please check mljar-supervised docs.

Installation 📦

From the PyPI repository:

pip install mljar-supervised

From source code:

git clone https://github.com/mljar/mljar-supervised.git
cd mljar-supervised
python setup.py install

Installation for development

git clone https://github.com/mljar/mljar-supervised.git
virtualenv venv --python=python3.6
source venv/bin/activate
pip install -r requirements.txt
pip install -r requirements_dev.txt

Running in Docker:

FROM python:3.7-slim-buster
RUN apt-get update && apt-get -y update
RUN apt-get install -y build-essential python3-pip python3-dev
RUN pip3 -q install pip --upgrade
RUN pip3 install mljar-supervised jupyter
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]

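Assuming the Dockerfile above is saved in an empty directory, a typical build-and-run sequence could look like this (the image name is just an example):

docker build -t mljar-supervised-notebook .
docker run -p 8888:8888 mljar-supervised-notebook
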
Contributing

To get started take a look at our Contribution Guide for information about our process and where you can fit in!

Contributors

License 👔

The mljar-supervised is provided under the MIT license.

MLJAR ❤️

The mljar-supervised is an open-source project created by MLJAR. We care about ease of use in Machine Learning. mljar.com provides a beautiful and simple user interface for building machine learning models.
