
MLBazaar / Btb

License: MIT
A simple, extensible library for developing AutoML systems

Projects that are alternatives to or similar to Btb

Autogluon
AutoGluon: AutoML for Text, Image, and Tabular Data
Stars: ✭ 3,920 (+2365.41%)
Mutual labels:  automl, hyperparameter-optimization
Auto Sklearn
Automated Machine Learning with scikit-learn
Stars: ✭ 5,916 (+3620.75%)
Mutual labels:  automl, hyperparameter-optimization
Hpbandster
a distributed Hyperband implementation on Steroids
Stars: ✭ 456 (+186.79%)
Mutual labels:  automl, hyperparameter-optimization
Mlmodels
mlmodels : Machine Learning and Deep Learning Model ZOO for Pytorch, Tensorflow, Keras, Gluon models...
Stars: ✭ 145 (-8.81%)
Mutual labels:  automl, hyperparameter-optimization
Milano
Milano is a tool for automating hyper-parameters search for your models on a backend of your choice.
Stars: ✭ 140 (-11.95%)
Mutual labels:  automl, hyperparameter-optimization
Awesome Automl Papers
A curated list of automated machine learning papers, articles, tutorials, slides and projects
Stars: ✭ 3,198 (+1911.32%)
Mutual labels:  automl, hyperparameter-optimization
Smac3
Sequential Model-based Algorithm Configuration
Stars: ✭ 564 (+254.72%)
Mutual labels:  automl, hyperparameter-optimization
FEDOT
Automated modeling and machine learning framework FEDOT
Stars: ✭ 312 (+96.23%)
Mutual labels:  hyperparameter-optimization, automl
Tpot
A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
Stars: ✭ 8,378 (+5169.18%)
Mutual labels:  automl, hyperparameter-optimization
Mljar Supervised
Automated Machine Learning Pipeline with Feature Engineering and Hyper-Parameters Tuning 🚀
Stars: ✭ 961 (+504.4%)
Mutual labels:  automl, hyperparameter-optimization
My Data Competition Experience
A summary of my experience placing in the Top 5 of multiple machine learning and big data competitions, packed with practical tips.
Stars: ✭ 271 (+70.44%)
Mutual labels:  automl, hyperparameter-optimization
Deephyper
DeepHyper: Scalable Asynchronous Neural Architecture and Hyperparameter Search for Deep Neural Networks
Stars: ✭ 117 (-26.42%)
Mutual labels:  automl, hyperparameter-optimization
Meta-SAC
Auto-tune the Entropy Temperature of Soft Actor-Critic via Metagradient - 7th ICML AutoML workshop 2020
Stars: ✭ 19 (-88.05%)
Mutual labels:  hyperparameter-optimization, automl
Automl alex
State-of-the-art Automated Machine Learning python library for Tabular Data
Stars: ✭ 132 (-16.98%)
Mutual labels:  automl, hyperparameter-optimization
mindware
An efficient open-source AutoML system for automating machine learning lifecycle, including feature engineering, neural architecture search, and hyper-parameter tuning.
Stars: ✭ 34 (-78.62%)
Mutual labels:  hyperparameter-optimization, automl
Atm
Auto Tune Models - A multi-tenant, multi-data system for automated machine learning (model selection and tuning).
Stars: ✭ 504 (+216.98%)
Mutual labels:  automl, hyperparameter-optimization
codeflare
Simplifying the definition and execution, scaling and deployment of pipelines on the cloud.
Stars: ✭ 163 (+2.52%)
Mutual labels:  hyperparameter-optimization, automl
hyper-engine
Python library for Bayesian hyper-parameters optimization
Stars: ✭ 80 (-49.69%)
Mutual labels:  hyperparameter-optimization, gaussian-processes
Awesome Automl And Lightweight Models
A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.
Stars: ✭ 691 (+334.59%)
Mutual labels:  automl, hyperparameter-optimization
Nni
An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Stars: ✭ 10,698 (+6628.3%)
Mutual labels:  automl, hyperparameter-optimization

BTB - An open source project from the Data to AI Lab at MIT.

A simple, extensible backend for developing auto-tuning systems.

Overview

BTB ("Bayesian Tuning and Bandits") is a simple, extensible backend for developing auto-tuning systems such as AutoML systems. It provides an easy-to-use interface for tuning models and selecting between models.

It is currently being used in several AutoML systems:

Try it out now!

If you want to quickly discover BTB, simply click the button below and follow the tutorials!

Binder

Install

Requirements

BTB has been developed and tested on Python 3.6, 3.7 and 3.8.

Although it is not strictly required, using a virtualenv is highly recommended in order to avoid interfering with other software installed on the system where BTB is run.
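
For instance, a virtualenv can be created and activated with the standard venv module before installing BTB (the environment name below is just an example):

python3 -m venv btb-env
source btb-env/bin/activate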

Install with pip

The easiest and recommended way to install BTB is using pip:

pip install baytune

This will pull and install the latest stable release from PyPI.
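
You can check that the installation worked by importing the library. Note that although the PyPI package is called baytune, the importable module is btb (the __version__ attribute is assumed to be exposed here, as in recent releases):

python -c "import btb; print(btb.__version__)"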

If you want to install from source or contribute to the project please read the Contributing Guide.

Quickstart

In this short tutorial we will guide you through the necessary steps to get started using BTB to select between models and tune a model to solve a Machine Learning problem.

In particular, in this example we will use BTBSession to solve the Wine classification problem by selecting between the DecisionTreeClassifier and SGDClassifier models from scikit-learn while also searching for their best hyperparameter configurations.

Prepare a scoring function

The first step in order to use the BTBSession class is to develop a scoring function.

This is a Python function that, given a model name and a hyperparameter configuration, evaluates the performance of the model on your data and returns a score.

from sklearn.datasets import load_wine
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier


dataset = load_wine()
models = {
    'DTC': DecisionTreeClassifier,
    'SGDC': SGDClassifier,
}

def scoring_function(model_name, hyperparameter_values):
    model_class = models[model_name]
    model_instance = model_class(**hyperparameter_values)
    scores = cross_val_score(
        estimator=model_instance,
        X=dataset.data,
        y=dataset.target,
        scoring=make_scorer(f1_score, average='macro')
    )
    return scores.mean()
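
Before handing the scoring function over to BTB, you can sanity-check it by calling it directly with a hand-picked configuration; the values below are arbitrary and used only for illustration:

# Arbitrary hyperparameter values, just to verify the function runs end to end
print(scoring_function('DTC', {'max_depth': 5, 'min_samples_split': 0.1}))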

Define the tunable hyperparameters

The second step is to define the hyperparameters that we want to tune for each model as Tunables.

from btb.tuning import Tunable
from btb.tuning import hyperparams as hp

tunables = {
    'DTC': Tunable({
        'max_depth': hp.IntHyperParam(min=3, max=200),
        'min_samples_split': hp.FloatHyperParam(min=0.01, max=1)
    }),
    'SGDC': Tunable({
        'max_iter': hp.IntHyperParam(min=1, max=5000, default=1000),
        'tol': hp.FloatHyperParam(min=1e-3, max=1, default=1e-3),
    })
}

Start the searching process

Once you have defined a scoring function and the tunable hyperparameters specification of your models, you can start searching for the best model and hyperparameter configuration by using btb.BTBSession.

All you need to do is create an instance, passing the tunable hyperparameters specification and the scoring function.

from btb import BTBSession

session = BTBSession(
    tunables=tunables,
    scorer=scoring_function
)

And then call the run method, indicating how many tuning iterations you want the BTBSession to perform:

best_proposal = session.run(20)

The result will be a dictionary indicating the name of the best model found and the hyperparameter configuration that was used:

{
    'id': '826aedc2eff31635444e8104f0f3da43',
    'name': 'DTC',
    'config': {
        'max_depth': 21,
        'min_samples_split': 0.044010284821858835
    },
    'score': 0.907229308339589
}
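
Since the result contains both the winning model name and its hyperparameter configuration, you can rebuild and refit the final model yourself. A minimal sketch, reusing the models dictionary and dataset loaded earlier:

# Rebuild the winning model with the best hyperparameters found
best_model = models[best_proposal['name']](**best_proposal['config'])

# Refit it on the full dataset
best_model.fit(dataset.data, dataset.target)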

How does BTB perform?

We have a comprehensive benchmarking framework that we use to evaluate the performance of our Tuners. For every release, we benchmark against hundreds of challenges, comparing the tuners against each other in terms of the number of wins. The leaderboard from the latest release is presented below:

Number of Wins on the Latest Version

tuner                 with ties   without ties
Ax.optimize              220           32
BTB.GCPEiTuner           139            2
BTB.GCPTuner             252           90
BTB.GPEiTuner            208           16
BTB.GPTuner              213           24
BTB.UniformTuner         177            1
HyperOpt.tpe             186            6
SMAC.HB4AC               180            4
SMAC.SMAC4HPO_EI         220           31
SMAC.SMAC4HPO_LCB        205           16
SMAC.SMAC4HPO_PI         221           35
  • Detailed results from which this summary emerged are available here.
  • If you want to compare your own tuner, follow the steps in our benchmarking framework here.
  • If you have a proposal for a tuner that we should include in our benchmarking, get in touch with us at [email protected].

More tutorials

  1. To just tune hyperparameters - see our tuning tutorial here and documentation here; a minimal standalone sketch follows this list.
  2. To see the types of hyperparameters we support see our documentation here.
  3. You can read about our benchmarking framework here.
  4. See our tutorial on selection here and documentation here.
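
For reference, here is how a tuner from item 1 can be used on its own, without BTBSession. This is a minimal sketch that reuses the scoring_function and the 'DTC' Tunable from the Quickstart and assumes the propose/record interface described in the tuning documentation:

from btb.tuning.tuners import GPTuner

tuner = GPTuner(tunables['DTC'])

best_score = None
for _ in range(20):
    proposal = tuner.propose()                 # candidate hyperparameter values
    score = scoring_function('DTC', proposal)  # evaluate the candidate
    tuner.record(proposal, score)              # feed the score back to the tuner
    if best_score is None or score > best_score:
        best_score = score
        best_config = proposal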

For more details about BTB and all its possibilities and features, please check the project documentation site!

Also do not forget to have a look at the notebook tutorials.

Citing BTB

If you use BTB, please consider citing the following paper:

@article{smith2019mlbazaar,
  author = {Smith, Micah J. and Sala, Carles and Kanter, James Max and Veeramachaneni, Kalyan},
  title = {The Machine Learning Bazaar: Harnessing the ML Ecosystem for Effective System Development},
  journal = {arXiv e-prints},
  year = {2019},
  eid = {arXiv:1905.08942},
  pages = {arXiv:1905.08942},
  archivePrefix = {arXiv},
  eprint = {1905.08942},
}
Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].