serengil / Chefboost

Licence: MIT
A Lightweight Decision Tree Framework supporting regular algorithms: ID3, C4.5, CART, CHAID and Regression Trees; some advanced techniques: Gradient Boosting (GBDT, GBRT, GBM), Random Forest and Adaboost w/ categorical features support for Python

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Chefboost

Orange3
🍊 📊 💡 Orange: Interactive data analysis
Stars: ✭ 3,152 (+1690.91%)
Mutual labels:  data-science, data-mining, random-forest, decision-trees
Awesome Fraud Detection Papers
A curated list of data mining papers about fraud detection.
Stars: ✭ 843 (+378.98%)
Mutual labels:  data-science, data-mining, random-forest, gradient-boosting
Lightgbm
A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
Stars: ✭ 13,293 (+7452.84%)
Mutual labels:  kaggle, data-mining, decision-trees, gradient-boosting
yggdrasil-decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Stars: ✭ 156 (-11.36%)
Mutual labels:  random-forest, cart, decision-trees, gradient-boosting
Lightautoml
LAMA - automatic model creation framework
Stars: ✭ 196 (+11.36%)
Mutual labels:  kaggle, data-science, gradient-boosting
Machine Learning With Python
Practice and tutorial-style notebooks covering wide variety of machine learning techniques
Stars: ✭ 2,197 (+1148.3%)
Mutual labels:  data-science, random-forest, decision-trees
Awesome Decision Tree Papers
A collection of research papers on decision, classification and regression trees with implementations.
Stars: ✭ 1,908 (+984.09%)
Mutual labels:  cart, random-forest, gradient-boosting
decision-trees-for-ml
Building Decision Trees From Scratch In Python
Stars: ✭ 61 (-65.34%)
Mutual labels:  random-forest, cart, gradient-boosting
Bike-Sharing-Demand-Kaggle
Top 5th percentile solution to the Kaggle knowledge problem - Bike Sharing Demand
Stars: ✭ 33 (-81.25%)
Mutual labels:  random-forest, kaggle, decision-trees
Tpot
A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
Stars: ✭ 8,378 (+4660.23%)
Mutual labels:  data-science, random-forest, gradient-boosting
Machine Learning In R
Workshop (6 hours): preprocessing, cross-validation, lasso, decision trees, random forest, xgboost, superlearner ensembles
Stars: ✭ 144 (-18.18%)
Mutual labels:  random-forest, decision-trees
Python Machine Learning Book
The "Python Machine Learning (1st edition)" book code repository and info resource
Stars: ✭ 11,428 (+6393.18%)
Mutual labels:  data-science, data-mining
Data Science Toolkit
Collection of stats, modeling, and data science tools in Python and R.
Stars: ✭ 169 (-3.98%)
Mutual labels:  data-science, data-mining
Efficient Apriori
An efficient Python implementation of the Apriori algorithm.
Stars: ✭ 145 (-17.61%)
Mutual labels:  data-science, data-mining
Fantasy Basketball
Scraping statistics, predicting NBA player performance with neural networks and boosting algorithms, and optimising lineups for Draft Kings with genetic algorithm. Capstone Project for Machine Learning Engineer Nanodegree by Udacity.
Stars: ✭ 146 (-17.05%)
Mutual labels:  data-science, data-mining
Machine learning for good
Machine learning fundamentals lesson in interactive notebooks
Stars: ✭ 142 (-19.32%)
Mutual labels:  data-science, data-mining
Open Solution Toxic Comments
Open solution to the Toxic Comment Classification Challenge
Stars: ✭ 154 (-12.5%)
Mutual labels:  kaggle, data-science
Benchm Ml
A minimal benchmark for scalability, speed and accuracy of commonly used open source implementations (R packages, Python scikit-learn, H2O, xgboost, Spark MLlib etc.) of the top machine learning algorithms for binary classification (random forests, gradient boosted trees, deep neural networks etc.).
Stars: ✭ 1,835 (+942.61%)
Mutual labels:  data-science, random-forest
Machine Learning Workflow With Python
This is a comprehensive ML techniques with python: Define the Problem- Specify Inputs & Outputs- Data Collection- Exploratory data analysis -Data Preprocessing- Model Design- Training- Evaluation
Stars: ✭ 157 (-10.8%)
Mutual labels:  kaggle, gradient-boosting
Pzad
Course "Applied Problems of Data Analysis" (CMC, Lomonosov Moscow State University)
Stars: ✭ 160 (-9.09%)
Mutual labels:  data-science, data-mining

chefboost

Chefboost is a lightweight decision tree framework with gradient boosting, random forest and adaboost support, including the regular ID3, C4.5, CART, CHAID and regression tree algorithms with categorical feature support. You need only a few lines of code to build a decision tree with Chefboost.

Installation - Demo

The easiest way to install the Chefboost framework is to download it from PyPI.

pip install chefboost

Usage - Demo

Basically, after importing Chefboost, you just need to pass the dataset as a pandas data frame, optionally along with a tree configuration, as illustrated below. The target label must be the rightmost column of the data frame. In contrast to its alternatives, Chefboost handles both numeric and nominal features and target values.

from chefboost import Chefboost as chef
import pandas as pd

df = pd.read_csv("dataset/golf.txt")
config = {'algorithm': 'C4.5'}
model = chef.fit(df, config = config)

Outcomes

Built decision trees are stored as Python if statements in the tests/outputs/rules directory. A sample set of decision rules is shown below.

def findDecision(Outlook, Temperature, Humidity, Wind):
   if Outlook == 'Rain':
      if Wind == 'Weak':
         return 'Yes'
      elif Wind == 'Strong':
         return 'No'
      else:
         return 'No'
   elif Outlook == 'Sunny':
      if Humidity == 'High':
         return 'No'
      elif Humidity == 'Normal':
         return 'Yes'
      else:
         return 'Yes'
   elif Outlook == 'Overcast':
      return 'Yes'
   else:
      return 'Yes'

Testing for custom instances

Decision rules will be stored in the outputs/rules/ folder when you build decision trees. You can run the built decision tree on new instances as illustrated below.

prediction = chef.predict(model, param = ['Sunny', 'Hot', 'High', 'Weak'])
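
For instance, a minimal sketch to check how the built tree performs on the training data itself - a hypothetical evaluation loop assuming the target is still the rightmost column of df, not part of the Chefboost API:

#hypothetical evaluation loop over the training set
correct = 0
for _, row in df.iterrows():
   features = row.values[:-1].tolist() #all columns except the target
   if chef.predict(model, param = features) == row.values[-1]:
      correct += 1
print("accuracy:", correct / len(df))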

You can also consume built decision trees directly. In this way, you can restore already built decision trees and skip the learning steps, or apply transfer learning. Loaded trees offer a findDecision method to test new instances.

moduleName = "outputs/rules/rules" #this will load outputs/rules/rules.py
tree = chef.restoreTree(moduleName)
prediction = tree.findDecision(['Sunny', 'Hot', 'High', 'Weak'])

tests/global-unit-test.py will guide you through building different decision trees and making predictions.

Model save and restoration

You can save your trained models. This makes your model ready for transfer learning.

chef.save_model(model, "model.pkl")

In this way, you can use the same model later to make predictions only, skipping the training steps. Restoration requires the .py and .pkl files to be stored under outputs/rules.

model = chef.load_model("model.pkl")
prediction = chef.predict(model, ['Sunny',85,85,'Weak'])

Sample configurations

Chefboost supports several decision tree, bagging and boosting algorithms. You just need to pass the configuration to use different algorithms.

Regular Decision Trees

Regular decision tree algorithms find the best feature and the best split point that maximize their splitting metric, such as information gain, and build the tree recursively in the child nodes.

config = {'algorithm': 'C4.5'} #Set algorithm to ID3, C4.5, CART, CHAID or Regression
model = chef.fit(df, config)

The following regular decision tree algorithms are wrapped in the library.

Algorithm    Metric                      Tutorial   Demo
ID3          Entropy, Information Gain   Tutorial   Demo
C4.5         Entropy, Gain Ratio         Tutorial   Demo
CART         GINI                        Tutorial   Demo
CHAID        Chi-square                  Tutorial   Demo
Regression   Standard Deviation          Tutorial   Demo
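
For intuition, here is a minimal sketch of the kind of metric these algorithms optimize - entropy and information gain as used by ID3. The function names are illustrative only, not part of the Chefboost API.

import math

def entropy(labels):
   #H = -sum(p * log2(p)) over the class distribution
   total = len(labels)
   return -sum((labels.count(c) / total) * math.log2(labels.count(c) / total) for c in set(labels))

def information_gain(rows, labels, feature_index):
   #gain = entropy of the parent minus the weighted entropy of the
   #children obtained by splitting on the feature at feature_index
   gain = entropy(labels)
   for value in set(row[feature_index] for row in rows):
      subset = [label for row, label in zip(rows, labels) if row[feature_index] == value]
      gain -= (len(subset) / len(labels)) * entropy(subset)
   return gain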

Gradient Boosting Tutorial, Demo

Gradient boosting is based on building a tree, and then building another one on the previous tree's errors. In this way, it boosts the results. The final prediction is the sum of each tree's prediction.

config = {'enableGBM': True, 'epochs': 7, 'learning_rate': 1, 'max_depth': 5}
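
As a rough illustration of the idea - a conceptual sketch, not Chefboost's actual implementation, where fit_tree and tree.predict are placeholders:

def boost(fit_tree, X, y, epochs = 7, learning_rate = 1.0):
   #each epoch fits a new regression tree to the current residuals
   trees, residuals = [], list(y)
   for _ in range(epochs):
      tree = fit_tree(X, residuals)
      trees.append(tree)
      for i, x in enumerate(X):
         residuals[i] -= learning_rate * tree.predict(x) #shrink the remaining error
   return trees

def predict_boosted(trees, x, learning_rate = 1.0):
   #the final prediction is the scaled sum of every tree's prediction
   return sum(learning_rate * tree.predict(x) for tree in trees)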

Random Forest Tutorial, Demo

Random forest basically splits the data set into several sub data sets and builds a different tree for each of them. The final prediction is the average of each tree's prediction.

config = {'enableRandomForest': True, 'num_of_trees': 5}
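
Conceptually, the procedure looks like the sketch below - hypothetical, not Chefboost's internals, with fit_tree as a placeholder:

import random

def random_forest(fit_tree, rows, num_of_trees = 5):
   #shuffle the data set and split it into one sub data set per tree,
   #then build a separate tree on each sub data set
   random.shuffle(rows)
   chunk = len(rows) // num_of_trees
   return [fit_tree(rows[i * chunk : (i + 1) * chunk]) for i in range(num_of_trees)]

def predict_forest(forest, x):
   #the final prediction is the average of each tree's prediction
   predictions = [tree.predict(x) for tree in forest]
   return sum(predictions) / len(predictions)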

Adaboost Tutorial, Demo

Adaboost applies decision stumps instead of decision trees. A stump is a weak classifier that aims to score at least 50%. Adaboost then increases the weights of misclassified instances and decreases the weights of correctly classified ones. In this way, it aims to reach a high score with weak classifiers.

config = {'enableAdaboost': True, 'num_of_weak_classifier': 4}
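
The weight update at the heart of the procedure can be sketched as follows - conceptual only, assuming labels and predictions are +1/-1:

import math

def adaboost_round(weights, predictions, labels):
   #error is the total weight of the misclassified instances
   error = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
   alpha = 0.5 * math.log((1 - error) / error) #the stump's vote in the final ensemble
   #misclassified instances (p * y = -1) gain weight, correct ones (p * y = +1) lose it
   weights = [w * math.exp(-alpha * p * y) for w, p, y in zip(weights, predictions, labels)]
   total = sum(weights)
   return [w / total for w in weights], alpha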

Feature Importance - Demo

Decision trees are naturally interpretable and explainable algorithms: a decision made by a single tree is clear. Still, we need some extra layers to understand the built models, and ensembles such as random forest and GBM are hard to explain. Herein, feature importance is one of the most common ways to see the big picture and understand built models.

df = chef.feature_importance("outputs/rules/rules.py")

feature        final_importance
Humidity       0.3688
Wind           0.3688
Outlook        0.2624
Temperature    0.0000
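
Assuming the returned object is a pandas data frame with the columns above, you could also visualize it directly:

df.plot.barh(x = "feature", y = "final_importance")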

Parallelism

Chefboost offers parallelism to speed up model building: branches of a decision tree are created in parallel. To enable it, set the enableParallelism argument to True in the configuration; its default value is False. When parallelism is enabled, Chefboost allocates half of the total number of cores in your environment unless num_cores is set explicitly.

if __name__ == '__main__':
   config = {'algorithm': 'C4.5', 'enableParallelism': True, 'num_cores': 2}
   model = chef.fit(df, config)

Notice that you have to place the training step in an if block that checks whether you are in the main module.

E-Learning

This playlist shows you how to use Chefboost step by step for different algorithms. You can also find tutorials about these core algorithms here.

Besides, you can enroll in the online course - Decision Trees for Machine Learning From Scratch - and follow its curriculum if you wonder about the theory of decision trees and how this framework was developed.

Contributing

Pull requests are welcome. You should run the unit tests locally by running tests/global-unit-test.py. Please share the unit test result logs in the PR.

Support

There are many ways to support a project - starring ⭐️ the GitHub repo is just one.

You can also support this project on Patreon 🙏

Citation

Please cite chefboost in your publications if it helps your research. Here is an example BibTeX entry:

@misc{serengil2019chefboost,
  abstract = {Lightweight Decision Trees Framework supporting Gradient Boosting (GBDT, GBRT, GBM), Random Forest and Adaboost w/categorical features support for Python},
  author={Serengil, Sefik Ilkin},
  title={chefboost},
  url={https://github.com/serengil/chefboost},
  year={2019}
}

Licence

Chefboost is licensed under the MIT License - see LICENSE for more details.
