
ArdalanM / Pylightgbm

Licence: other
Python binding for Microsoft LightGBM

Projects that are alternatives of or similar to Pylightgbm

Autoeq
Automatic headphone equalization from frequency responses
Stars: ✭ 5,989 (+1725.91%)
Mutual labels:  jupyter-notebook
Ml Art Colabs
A list of Machine Learning Art Colabs
Stars: ✭ 308 (-6.1%)
Mutual labels:  jupyter-notebook
Nlp fundamentals
📘 Contains a series of hands-on notebooks for learning the fundamentals of NLP
Stars: ✭ 328 (+0%)
Mutual labels:  jupyter-notebook
Bdci2019 Sentiment Classification
CCF BDCI 2019 Internet news sentiment analysis, top-1 solution of the final round
Stars: ✭ 317 (-3.35%)
Mutual labels:  jupyter-notebook
Caffe Speech Recognition
Speech Recognition with the Caffe deep learning framework, migrating to
Stars: ✭ 323 (-1.52%)
Mutual labels:  jupyter-notebook
Pytorchneuralstyletransfer
Implementation of Neural Style Transfer in Pytorch
Stars: ✭ 327 (-0.3%)
Mutual labels:  jupyter-notebook
Probability
Probabilistic reasoning and statistical analysis in TensorFlow
Stars: ✭ 3,550 (+982.32%)
Mutual labels:  jupyter-notebook
Cc150
Cracking the Coding Interview (cc150)
Stars: ✭ 326 (-0.61%)
Mutual labels:  jupyter-notebook
Node
Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data
Stars: ✭ 323 (-1.52%)
Mutual labels:  jupyter-notebook
Observations
Stars: ✭ 325 (-0.91%)
Mutual labels:  jupyter-notebook
Your First Machine Learning Project End To End In Python
A complete, end-to-end machine learning project, well suited for practice once you have the basics, to deepen your understanding of a full machine learning workflow
Stars: ✭ 323 (-1.52%)
Mutual labels:  jupyter-notebook
Autocrop
😌 Automatically detects and crops faces from batches of pictures.
Stars: ✭ 320 (-2.44%)
Mutual labels:  jupyter-notebook
Jupyter Edu Book
Teaching and Learning with Jupyter
Stars: ✭ 325 (-0.91%)
Mutual labels:  jupyter-notebook
Machine Learning For Trading
Code for Machine Learning for Algorithmic Trading, 2nd edition.
Stars: ✭ 4,979 (+1417.99%)
Mutual labels:  jupyter-notebook
Dota Doai
This repo is our team's codebase for DOTA-related competitions, covering both rotated and horizontal detection.
Stars: ✭ 326 (-0.61%)
Mutual labels:  jupyter-notebook
Adanet
Fast and flexible AutoML with learning guarantees.
Stars: ✭ 3,340 (+918.29%)
Mutual labels:  jupyter-notebook
Youtube Code Repository
Repository for most of the code from my YouTube channel
Stars: ✭ 317 (-3.35%)
Mutual labels:  jupyter-notebook
Tutorials
Jupyter notebook tutorials from QuantConnect website for Python, Finance and LEAN.
Stars: ✭ 323 (-1.52%)
Mutual labels:  jupyter-notebook
Python Data Analysis And Image Processing Tutorial
Data analysis and image processing with Python - lecture materials and source code repository.
Stars: ✭ 325 (-0.91%)
Mutual labels:  jupyter-notebook
Scipy Cookbook
Scipy Cookbook
Stars: ✭ 326 (-0.61%)
Mutual labels:  jupyter-notebook

pyLightGBM: Python binding for Microsoft LightGBM


Features:

  • Regression, Classification (binary, multi-class)
  • Feature importance (clf.feature_importance(); see the sketch after this list)
  • Early stopping (clf.best_round)
  • Works with scikit-learn: GridSearchCV, cross_val_score, etc.
  • Silent mode (verbose=False)
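
For instance, feature importance, the best round found by early stopping, and silent mode can be combined roughly as in the sketch below. This is not taken from the original README: the executable path is a placeholder, the data is synthetic, and the parameter values are illustrative only.

from sklearn import datasets, model_selection
from pylightgbm.models import GBMRegressor

# placeholder path to the lightgbm executable (adjust to your install)
exec_path = "~/Documents/apps/LightGBM/lightgbm"

# synthetic regression data, split into train and validation sets
X, y = datasets.make_regression(n_samples=500, n_features=10)
x_train, x_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2)

# verbose=False enables silent mode
clf = GBMRegressor(exec_path=exec_path, num_iterations=100,
                   early_stopping_round=10, verbose=False)
clf.fit(x_train, y_train, test_data=[(x_test, y_test)])

print(clf.feature_importance())  # importance score per feature
print(clf.best_round)            # best iteration found by early stopping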

Installation

Install the latest version of Microsoft LightGBM, then install the wrapper:

 pip install git+https://github.com/ArdalanM/pyLightGBM.git

Examples

  • Regression:
import numpy as np
from sklearn import datasets, metrics, model_selection
from pylightgbm.models import GBMRegressor

# full path to lightgbm executable (on Windows include .exe)
exec_path = "~/Documents/apps/LightGBM/lightgbm"

X, y = datasets.load_diabetes(return_X_y=True)
clf = GBMRegressor(exec_path=exec_path,
                   num_iterations=100, early_stopping_round=10,
                   num_leaves=10, min_data_in_leaf=10)

x_train, x_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2)

clf.fit(x_train, y_train, test_data=[(x_test, y_test)])
print("Mean Square Error: ", metrics.mean_squared_error(y_test, clf.predict(x_test)))
  • Binary Classification:
import numpy as np
from sklearn import datasets, metrics, model_selection
from pylightgbm.models import GBMClassifier

# full path to lightgbm executable (on Windows include .exe)
exec_path = "~/Documents/apps/LightGBM/lightgbm"

X, Y = datasets.make_classification(n_samples=200, n_features=10)
x_train, x_test, y_train, y_test = model_selection.train_test_split(X, Y, test_size=0.2)

clf = GBMClassifier(exec_path=exec_path, min_data_in_leaf=1)
clf.fit(x_train, y_train, test_data=[(x_test, y_test)])
y_pred = clf.predict(x_test)
print("Accuracy: ", metrics.accuracy_score(y_test, y_pred))
  • Grid Search:
import numpy as np
from sklearn import datasets, metrics, model_selection
from pylightgbm.models import GBMClassifier

# full path to lightgbm executable (on Windows include .exe)
exec_path = "~/Documents/apps/LightGBM/lightgbm"

X, Y = datasets.make_classification(n_samples=1000, n_features=10)

gbm = GBMClassifier(exec_path=exec_path,
                    metric='binary_error', early_stopping_round=10, bagging_freq=10)

param_grid = {'learning_rate': [0.1, 0.04], 'bagging_fraction': [0.5, 0.9]}

scorer = metrics.make_scorer(metrics.accuracy_score, greater_is_better=True)
clf = model_selection.GridSearchCV(gbm, param_grid, scoring=scorer, cv=2)

clf.fit(X, Y)

print("Best score: ", clf.best_score_)
print("Best params: ", clf.best_params_)

Notebooks

Available parameters (default values):

  • application="regression"
  • num_iterations=10
  • learning_rate=0.1
  • num_leaves=127
  • tree_learner="serial"
  • num_threads=1
  • min_data_in_leaf=100
  • metric='l2'
  • is_training_metric=False
  • feature_fraction=1.
  • feature_fraction_seed=2
  • bagging_fraction=1.
  • bagging_freq=0
  • bagging_seed=3
  • metric_freq=1
  • early_stopping_round=0
  • max_bin=255
  • is_unbalance=False
  • num_class=1
  • boosting_type='gbdt'
  • min_sum_hessian_in_leaf=10
  • drop_rate=0.01
  • drop_seed=4
  • max_depth=-1
  • lambda_l1=0.
  • lambda_l2=0.
  • min_gain_to_split=0.
  • verbose=True
  • model=None
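
These parameters are passed directly to the estimator constructors; anything left out keeps the default listed above. A hedged sketch (the values are arbitrary and the executable path is a placeholder):

from pylightgbm.models import GBMRegressor

# every keyword corresponds to a parameter listed above; values are illustrative only
clf = GBMRegressor(exec_path="~/Documents/apps/LightGBM/lightgbm",
                   application="regression",
                   num_iterations=500,
                   learning_rate=0.05,
                   num_leaves=31,
                   min_data_in_leaf=20,
                   metric='l2',
                   feature_fraction=0.8,
                   bagging_fraction=0.8,
                   bagging_freq=5,
                   early_stopping_round=25,
                   lambda_l1=0.1,
                   lambda_l2=0.1,
                   verbose=False)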