
PhilipMay / mltb

License: BSD-2-Clause
Machine Learning Tool Box

Programming Languages

  • Python: 139,335 projects (#7 most used programming language)
  • Shell: 77,523 projects

Projects that are alternatives to or similar to mltb

ml-pipeline
Using Kafka-Python to illustrate a ML production pipeline
Stars: ✭ 90 (+260%)
Mutual labels:  hyperparameter-optimization, lightgbm, hyperopt
Hyperparameter hunter
Easy hyperparameter optimization and automatic result saving across machine learning algorithms and libraries
Stars: ✭ 648 (+2492%)
Mutual labels:  hyperparameter-optimization, lightgbm, hyperparameter-tuning
publib
Produce publication-quality images on top of Matplotlib
Stars: ✭ 34 (+36%)
Mutual labels:  plot, matplotlib
sarviewer
Generate graphs with gnuplot or matplotlib (Python) from sar data
Stars: ✭ 60 (+140%)
Mutual labels:  plot, matplotlib
mlr3tuning
Hyperparameter optimization package of the mlr3 ecosystem
Stars: ✭ 44 (+76%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Rustplotlib
Rust bindings for matplotlib
Stars: ✭ 31 (+24%)
Mutual labels:  plot, matplotlib
Itermplot
An awesome iTerm2 backend for Matplotlib, so you can plot directly in your terminal.
Stars: ✭ 1,267 (+4968%)
Mutual labels:  plot, matplotlib
scikit-hyperband
A scikit-learn compatible implementation of hyperband
Stars: ✭ 68 (+172%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
nodeplotlib
NodeJS plotting library for JavaScript and TypeScript. On top of plotly.js. Inspired by matplotlib.
Stars: ✭ 115 (+360%)
Mutual labels:  plot, matplotlib
heatmaps
Better heatmaps in Python
Stars: ✭ 117 (+368%)
Mutual labels:  plot, matplotlib
naturalselection
A general-purpose pythonic genetic algorithm.
Stars: ✭ 17 (-32%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
differential-privacy-bayesian-optimization
This repo contains the underlying code for all the experiments from the paper: "Automatic Discovery of Privacy-Utility Pareto Fronts"
Stars: ✭ 22 (-12%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Adjusttext
A small library for automatic adjustment of text position in matplotlib plots to minimize overlaps.
Stars: ✭ 731 (+2824%)
Mutual labels:  plot, matplotlib
Vapeplot
matplotlib extension for vaporwave aesthetics
Stars: ✭ 483 (+1832%)
Mutual labels:  plot, matplotlib
Matplotlib4j
Matplotlib for Java: a simple graph plotting library for Java, backed by Python's powerful Matplotlib
Stars: ✭ 107 (+328%)
Mutual labels:  plot, matplotlib
Tensorflow Plot
📈 TensorFlow + Matplotlib as TF ops
Stars: ✭ 285 (+1040%)
Mutual labels:  plot, matplotlib
Hypernets
A General Automated Machine Learning framework to simplify the development of End-to-end AutoML toolkits in specific domains.
Stars: ✭ 221 (+784%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
matplotlib-haskell
Haskell bindings for Python's Matplotlib
Stars: ✭ 80 (+220%)
Mutual labels:  plot, matplotlib
bitrate-plotter
Plots a graph showing the bitrate every second or the bitrate of every GOP, for a video or audio file.
Stars: ✭ 15 (-40%)
Mutual labels:  plot, matplotlib
mango
Parallel Hyperparameter Tuning in Python
Stars: ✭ 241 (+864%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning


Machine Learning Tool Box

This is the Machine Learning Tool Box: a collection of useful machine learning tools intended for reuse and extension. The toolbox contains the following modules:

  • hyperopt - Hyperopt tool to save and restart evaluations
  • keras - Keras (tf.keras) callbacks for various metrics and other Keras tools
  • lightgbm - metric functions for LightGBM
  • metrics - several metric implementations
  • plot - plot and visualisation tools
  • tools - various tools, including statistical ones

Module: hyperopt

This module contains a tool function to save and restart Hyperopt evaluations. It does this by saving and loading the hyperopt.Trials object between runs. The usage looks like this:

from mltb.hyperopt import fmin
from hyperopt import tpe, hp, STATUS_OK


def objective(x):
    return {
        'loss': x ** 2,
        'status': STATUS_OK,
        'other_stuff': {'type': None, 'value': [0, 1, 2]},
        }


best, trials = fmin(objective,
    space=hp.uniform('x', -10, 10),
    algo=tpe.suggest,
    max_evals=100,
    filename='trials_file')

print('best:', best)
print('number of trials:', len(trials.trials))

Output of first run:

No trials file "trials_file" found. Created new trials object.
100%|██████████| 100/100 [00:00<00:00, 338.61it/s, best loss: 0.0007185087453453681]
best: {'x': 0.026805013436769026}
number of trials: 100

Output of second run:

100 evals loaded from trials file "trials_file".
100%|██████████| 100/100 [00:00<00:00, 219.65it/s, best loss: 0.00012259809712488858]
best: {'x': 0.011072402500130158}
number of trials: 200
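
Under the hood, a wrapper like this only needs to persist the hyperopt.Trials object between runs. The following is a minimal sketch of the idea, not mltb's actual implementation (error handling and the file format are simplified):

import os
import pickle

import hyperopt


def fmin(objective, space, algo, max_evals, filename):
    if os.path.exists(filename):
        # resume: load the pickled Trials object from the last run
        with open(filename, 'rb') as trials_file:
            trials = pickle.load(trials_file)
        print('%d evals loaded from trials file "%s".' % (len(trials.trials), filename))
    else:
        # first run: start with an empty Trials object
        trials = hyperopt.Trials()
        print('No trials file "%s" found. Created new trials object.' % filename)

    # hyperopt counts max_evals over *all* trials, so add the loaded ones
    best = hyperopt.fmin(objective,
                         space=space,
                         algo=algo,
                         max_evals=len(trials.trials) + max_evals,
                         trials=trials)

    # persist the updated Trials object for the next run
    with open(filename, 'wb') as trials_file:
        pickle.dump(trials, trials_file)

    return best, trials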

Module: lightgbm

This module implements metric functions that are not included in LightGBM itself. At the moment these are F1 and accuracy scores for binary and multi-class problems. The usage looks like this:

import lightgbm as lgb
import mltb.lightgbm

bst = lgb.train(param,
                train_data,
                valid_sets=[validation_data],
                early_stopping_rounds=10,
                evals_result=evals_result,
                feval=mltb.lightgbm.multi_class_f1_score_factory(num_classes, 'macro'),
                )
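
For reference, a feval callable for lgb.train takes the raw predictions and the validation Dataset and returns an (eval_name, eval_result, is_higher_better) tuple. Here is a minimal sketch of what such a factory might look like; the actual mltb implementation may differ:

import numpy as np
import sklearn.metrics


def multi_class_f1_score_factory(num_classes, average):
    def f1_score(preds, eval_data):
        y_true = eval_data.get_label()
        # older LightGBM versions pass multiclass predictions to feval as a
        # flat, class-major array; reshape before taking the argmax
        y_pred = np.argmax(np.asarray(preds).reshape(num_classes, -1), axis=0)
        value = sklearn.metrics.f1_score(y_true, y_pred, average=average)
        # LightGBM expects a (name, value, is_higher_better) tuple
        return 'f1_' + average, value, True
    return f1_score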

Module: keras (for tf.keras)

BinaryClassifierMetricsCallback

This module provides custom metrics in the form of a Keras callback. Because the callback writes these values into the internal logs dictionary, the EarlyStopping callback can be used to do early stopping on these metrics. The sketch below illustrates the mechanism.
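
For illustration only, a minimal sketch of the mechanism, not mltb's actual implementation (the class name here is hypothetical):

import sklearn.metrics
from tensorflow import keras


class MetricsToLogsCallback(keras.callbacks.Callback):
    def __init__(self, val_data, val_label):
        super().__init__()
        self.val_data = val_data
        self.val_label = val_label

    def on_epoch_end(self, epoch, logs=None):
        probs = self.model.predict(self.val_data).ravel()
        # anything written into `logs` here is seen by callbacks that run
        # later in the callback list (e.g. EarlyStopping) and also ends up
        # in the training history
        logs['val_roc_auc'] = sklearn.metrics.roc_auc_score(self.val_label, probs)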

Parameters

  • val_data (list): validation input
  • val_label (list): validation output
  • pos_label (Optional[int], default: 1): index of the positive label
  • metrics (List[Union[str, Callable]], default: ['val_roc_auc', 'val_average_precision', 'val_f1', 'val_acc']): list of supported metric names or custom metric functions

Available metrics

  • val_roc_auc: ROC-AUC
  • val_f1: F1-score
  • val_acc: Accuracy
  • val_average_precision: Average precision
  • val_mcc: Matthews correlation coefficient

The usage looks like this:

import mltb.keras
from tensorflow.keras import callbacks

bcm_callback = mltb.keras.BinaryClassifierMetricsCallback(val_data, val_labels)
es_callback = callbacks.EarlyStopping(monitor='val_roc_auc', patience=5, mode='max')

history = network.fit(train_data, train_labels,
                      epochs=1000,
                      batch_size=128,

                      # do not pass validation_data here, or validation will run twice
                      # validation_data=(val_data, val_labels),

                      # always place BinaryClassifierMetricsCallback before the EarlyStopping callback
                      callbacks=[bcm_callback, es_callback],
)

You can also define your own custom metric:

import numpy as np
import sklearn.metrics


def custom_average_recall_score(y_true, y_pred, pos_label):
    # round predicted probabilities to hard 0/1 labels
    rounded_pred = np.rint(y_pred)
    return sklearn.metrics.recall_score(y_true, rounded_pred, pos_label=pos_label)


bcm_callback = mltb.keras.BinaryClassifierMetricsCallback(val_data, val_labels, metrics=[custom_average_recall_score])
es_callback = callbacks.EarlyStopping(monitor='custom_average_recall_score', patience=5, mode='max')

history = network.fit(train_data, train_labels,
                      epochs=1000,
                      batch_size=128,

                      # do not pass validation_data here, or validation will run twice
                      # validation_data=(val_data, val_labels),

                      # always place BinaryClassifierMetricsCallback before the EarlyStopping callback
                      callbacks=[bcm_callback, es_callback],
)