
ray-project / Tune Sklearn

Licence: apache-2.0
A drop-in replacement for Scikit-Learn’s GridSearchCV / RandomizedSearchCV -- but with cutting-edge hyperparameter tuning techniques.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Tune Sklearn

Auto Sklearn
Automated Machine Learning with scikit-learn
Stars: ✭ 5,916 (+2354.77%)
Mutual labels:  scikit-learn, automl, bayesian-optimization, hyperparameter-tuning
mindware
An efficient open-source AutoML system for automating machine learning lifecycle, including feature engineering, neural architecture search, and hyper-parameter tuning.
Stars: ✭ 34 (-85.89%)
Mutual labels:  bayesian-optimization, hyperparameter-tuning, automl
Smac3
Sequential Model-based Algorithm Configuration
Stars: ✭ 564 (+134.02%)
Mutual labels:  automl, bayesian-optimization, hyperparameter-tuning
Scikit Optimize
Sequential model-based optimization with a `scipy.optimize` interface
Stars: ✭ 2,258 (+836.93%)
Mutual labels:  bayesian-optimization, scikit-learn, hyperparameter-tuning
Lale
Library for Semi-Automated Data Science
Stars: ✭ 198 (-17.84%)
Mutual labels:  scikit-learn, automl, hyperparameter-tuning
Modal
A modular active learning framework for Python
Stars: ✭ 1,148 (+376.35%)
Mutual labels:  scikit-learn, bayesian-optimization
Nni
An open source AutoML toolkit for automate machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Stars: ✭ 10,698 (+4339%)
Mutual labels:  automl, bayesian-optimization
Adatune
Gradient based Hyperparameter Tuning library in PyTorch
Stars: ✭ 226 (-6.22%)
Mutual labels:  automl, hyperparameter-tuning
Milano
Milano is a tool for automating hyper-parameters search for your models on a backend of your choice.
Stars: ✭ 140 (-41.91%)
Mutual labels:  automl, hyperparameter-tuning
Mljar Supervised
Automated Machine Learning Pipeline with Feature Engineering and Hyper-Parameters Tuning 🚀
Stars: ✭ 961 (+298.76%)
Mutual labels:  scikit-learn, automl
Auto ml
[UNMAINTAINED] Automated machine learning for analytics & production
Stars: ✭ 1,559 (+546.89%)
Mutual labels:  scikit-learn, automl
Igel
a delightful machine learning tool that allows you to train, test, and use models without writing code
Stars: ✭ 2,956 (+1126.56%)
Mutual labels:  scikit-learn, automl
Tpot
A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
Stars: ✭ 8,378 (+3376.35%)
Mutual labels:  scikit-learn, automl
Mlprimitives
Primitives for machine learning and data science.
Stars: ✭ 46 (-80.91%)
Mutual labels:  automl, hyperparameter-tuning
Amla
AutoML frAmework for Neural Networks
Stars: ✭ 119 (-50.62%)
Mutual labels:  automl, hyperparameter-tuning
The Deep Learning With Keras Workshop
An Interactive Approach to Understanding Deep Learning with Keras
Stars: ✭ 34 (-85.89%)
Mutual labels:  scikit-learn, hyperparameter-tuning
Automl alex
State-of-the art Automated Machine Learning python library for Tabular Data
Stars: ✭ 132 (-45.23%)
Mutual labels:  automl, hyperparameter-tuning
Forecasting
Time Series Forecasting Best Practices & Examples
Stars: ✭ 2,123 (+780.91%)
Mutual labels:  automl, hyperparameter-tuning
Auptimizer
An automatic ML model optimization tool.
Stars: ✭ 166 (-31.12%)
Mutual labels:  automl, hyperparameter-tuning
Hyperparameter hunter
Easy hyperparameter optimization and automatic result saving across machine learning algorithms and libraries
Stars: ✭ 648 (+168.88%)
Mutual labels:  scikit-learn, hyperparameter-tuning

tune-sklearn


Tune-sklearn is a drop-in replacement for Scikit-Learn’s model selection module (GridSearchCV, RandomizedSearchCV) with cutting-edge hyperparameter tuning techniques.

Features

Here’s what tune-sklearn has to offer:

  • Consistency with Scikit-Learn API: Change less than 5 lines in a standard Scikit-Learn script to use the API [example].
  • Modern tuning techniques: tune-sklearn allows you to easily leverage Bayesian Optimization, HyperBand, BOHB, and other optimization techniques by simply toggling a few parameters.
  • Framework support: tune-sklearn is used primarily for tuning Scikit-Learn models, but it also supports and provides examples for many other frameworks with Scikit-Learn wrappers, such as Skorch (PyTorch) [example], KerasClassifier (Keras) [example], and XGBClassifier (XGBoost) [example].
  • Scale up: Tune-sklearn leverages Ray Tune, a library for distributed hyperparameter tuning, to parallelize cross-validation on multiple cores and even multiple machines without changing your code.

Check out our API Documentation and Walkthrough (for master branch).

Installation

Dependencies

  • numpy (>=1.16)
  • ray
  • scikit-learn (>=0.23)

User Installation

pip install tune-sklearn ray[tune]

or

pip install -U git+https://github.com/ray-project/tune-sklearn.git && pip install 'ray[tune]'

Tune-sklearn Early Stopping

For certain estimators, tune-sklearn can also immediately enable incremental training and early stopping. Such estimators include:

  • Estimators that implement 'warm_start' (except for ensemble classifiers and decision trees)
  • Estimators that implement partial_fit
  • XGBoost, LightGBM and CatBoost models (via incremental learning)

To read more about compatible scikit-learn models, see scikit-learn's documentation at section 8.1.1.3.

Early stopping algorithms that can be enabled include HyperBand and Median Stopping (see below for examples).

If the estimator does not support partial_fit, a warning will be shown saying early stopping cannot be done and it will simply run the cross-validation on Ray's parallel back-end.

Apart from early stopping scheduling algorithms, tune-sklearn also supports passing custom stoppers to Ray Tune. These can be passed via the stopper argument when instantiating TuneSearchCV or TuneGridSearchCV. See the Ray documentation for an overview of available stoppers.
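For illustration, here is a minimal sketch of both approaches. It is not taken from the repository's examples; the MaximumIterationStopper class and its max_iter argument are assumed from Ray Tune's stopper API and may differ across Ray versions.

from ray.tune.stopper import MaximumIterationStopper  # assumed import path; check your Ray version
from sklearn.linear_model import SGDClassifier
from tune_sklearn import TuneSearchCV

# Scheduler-based early stopping: name the scheduler via `early_stopping`.
# SGDClassifier supports partial_fit, so incremental training can be stopped early.
scheduler_search = TuneSearchCV(
    SGDClassifier(),
    param_distributions={"alpha": [1e-4, 1e-2, 1e-1]},
    early_stopping="MedianStoppingRule",
    n_trials=3,
    max_iters=10,
)

# Custom stopper: pass a Ray Tune stopper instance via `stopper`.
stopper_search = TuneSearchCV(
    SGDClassifier(),
    param_distributions={"alpha": [1e-4, 1e-2, 1e-1]},
    n_trials=3,
    stopper=MaximumIterationStopper(max_iter=5),
)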

Examples

TuneGridSearchCV

To start out, it’s as easy as changing our import statement to get Tune’s grid search cross validation interface, and the rest is almost identical!

TuneGridSearchCV accepts dictionaries in the format { param_name: str : distribution: list } or a list of such dictionaries, just like scikit-learn's GridSearchCV. The distribution can also be the output of Ray Tune's tune.grid_search.

# from sklearn.model_selection import GridSearchCV
from tune_sklearn import TuneGridSearchCV

# Other imports
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier

# Set training and validation sets
X, y = make_classification(n_samples=11000, n_features=1000, n_informative=50, n_redundant=0, n_classes=10, class_sep=2.5)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1000)

# Example parameters to tune from SGDClassifier
parameters = {
    'alpha': [1e-4, 1e-1, 1],
    'epsilon': [0.01, 0.1]
}

tune_search = TuneGridSearchCV(
    SGDClassifier(),
    parameters,
    early_stopping="MedianStoppingRule",
    max_iters=10
)

import time # Just to compare fit times
start = time.time()
tune_search.fit(X_train, y_train)
end = time.time()
print("Tune Fit Time:", end - start)
pred = tune_search.predict(X_test)
accuracy = np.count_nonzero(np.array(pred) == np.array(y_test)) / len(pred)
print("Tune Accuracy:", accuracy)

If you'd like to compare fit times with sklearn's GridSearchCV, run the following block of code:

from sklearn.model_selection import GridSearchCV
# n_jobs=-1 enables use of all cores like Tune does
sklearn_search = GridSearchCV(
    SGDClassifier(),
    parameters,
    n_jobs=-1
)

start = time.time()
sklearn_search.fit(X_train, y_train)
end = time.time()
print("Sklearn Fit Time:", end - start)
pred = sklearn_search.predict(X_test)
accuracy = np.count_nonzero(np.array(pred) == np.array(y_test)) / len(pred)
print("Sklearn Accuracy:", accuracy)

TuneSearchCV

TuneSearchCV is an upgraded version of scikit-learn's RandomizedSearchCV.

It also provides a wrapper for several search optimization algorithms from Ray Tune's tune.suggest, which in turn are wrappers for other libraries. The search algorithm is selected with the search_optimization parameter. To use an algorithm other than the built-in random search, install the library it depends on (see the pip install column in the table below). The search algorithms are as follows:

Algorithm       | search_optimization value | Summary                | Website           | pip install
(Random Search) | "random"                  | Randomized Search      |                   | built-in
SkoptSearch     | "bayesian"                | Bayesian Optimization  | [Scikit-Optimize] | scikit-optimize
HyperOptSearch  | "hyperopt"                | Tree-Parzen Estimators | [HyperOpt]        | hyperopt
TuneBOHB        | "bohb"                    | Bayesian Opt/HyperBand | [BOHB]            | hpbandster ConfigSpace
Optuna          | "optuna"                  | Tree-Parzen Estimators | [Optuna]          | optuna

All algorithms other than RandomListSearcher accept parameter distributions in the form of dictionaries in the format { param_name: str : distribution: tuple or list }.

Tuples represent real-valued distributions and should have two or three elements, in the format (lower_bound: float, upper_bound: float, Optional: "uniform" (default) or "log-uniform"). Lists represent categorical distributions. Ray Tune search spaces are also supported and provide a rich set of distributions; they allow users to specify complex, potentially nested search spaces and parameter distributions. Furthermore, each algorithm also accepts parameters in its own specific format. More information is available in the Tune documentation.

Random Search (default) accepts dictionaries in the format { param_name: str : distribution: list } or a list of such dictionaries, just like scikit-learn's RandomizedSearchCV.
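
Before the more advanced examples below, here is a minimal sketch of the default random search using the list format just described (no extra dependencies required; the specific values are arbitrary):

from sklearn.linear_model import SGDClassifier
from tune_sklearn import TuneSearchCV

# Default random search: each list is treated as a set of values to sample from.
random_tune_search = TuneSearchCV(
    SGDClassifier(),
    param_distributions={
        "loss": ["squared_hinge", "hinge"],
        "alpha": [1e-4, 1e-2, 1e-1],
    },
    n_trials=3,
    search_optimization="random",  # the default, shown here for clarity
)
# random_tune_search.fit(X_train, y_train)  # X_train / y_train as defined below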

from tune_sklearn import TuneSearchCV

# Other imports
import scipy
from ray import tune
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier

# Set training and validation sets
X, y = make_classification(n_samples=11000, n_features=1000, n_informative=50, n_redundant=0, n_classes=10, class_sep=2.5)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1000)

# Example parameter distributions to tune from SGDClassifier
# Note the use of tuples instead of lists if non-random optimization is desired
param_dists = {
    'loss': ['squared_hinge', 'hinge'], 
    'alpha': (1e-4, 1e-1, 'log-uniform'),
    'epsilon': (1e-2, 1e-1)
}

bohb_tune_search = TuneSearchCV(SGDClassifier(),
    param_distributions=param_dists,
    n_trials=2,
    max_iters=10,
    search_optimization="bohb"
)

bohb_tune_search.fit(X_train, y_train)

# Define `param_dists` using the Ray Tune search space API.
# This allows sampling from discrete, continuous, and categorical distributions
# (tune.loguniform, tune.uniform, and tune.choice are used below)
param_dists = {
    'loss': tune.choice(['squared_hinge', 'hinge']),
    'alpha': tune.loguniform(1e-4, 1e-1),
    'epsilon': tune.uniform(1e-2, 1e-1),
}


hyperopt_tune_search = TuneSearchCV(SGDClassifier(),
    param_distributions=param_dists,
    n_trials=2,
    early_stopping=True, # uses Async HyperBand if set to True
    max_iters=10,
    search_optimization="hyperopt"
)

hyperopt_tune_search.fit(X_train, y_train)

Other Machine Learning Libraries and Examples

Tune-sklearn also supports the use of other machine learning libraries such as PyTorch (via Skorch) and Keras. You can find these examples here.
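
As an illustration of the Skorch integration, here is a minimal, hypothetical sketch (not taken from the repository's examples), assuming torch and skorch are installed; the network architecture, the input width of 20 features, and the parameter values are placeholders:

import torch.nn as nn
from skorch import NeuralNetClassifier
from tune_sklearn import TuneGridSearchCV

class SimpleMLP(nn.Module):
    """Tiny feed-forward classifier used only for illustration."""
    def __init__(self, hidden_units=32):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(20, hidden_units),
            nn.ReLU(),
            nn.Linear(hidden_units, 2),
        )

    def forward(self, X):
        # Return raw logits; CrossEntropyLoss is set as the criterion below.
        return self.layers(X)

net = NeuralNetClassifier(
    SimpleMLP,
    criterion=nn.CrossEntropyLoss,
    max_epochs=10,
    verbose=0,
)

# Skorch exposes its own arguments (e.g. `lr`) and module arguments
# (prefixed with `module__`) through get_params/set_params, so they can be
# tuned like any other scikit-learn estimator parameter.
param_grid = {
    "lr": [0.01, 0.1],
    "module__hidden_units": [16, 32],
}

tune_search = TuneGridSearchCV(net, param_grid)
# tune_search.fit(X_train.astype("float32"), y_train.astype("int64"))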

More information

Ray Tune
