
cerlymarco / keras-hypetune

License: MIT
A friendly Python package for Keras hyperparameter tuning based only on NumPy and Hyperopt.

Programming Languages

Jupyter Notebook
Python

Projects that are alternatives of or similar to keras-hypetune

mlrHyperopt
Easy Hyper Parameter Optimization with mlr and mlrMBO.
Stars: ✭ 30 (-36.17%)
Mutual labels:  hyperparameter-optimization, tuning-parameters
mango
Parallel Hyperparameter Tuning in Python
Stars: ✭ 241 (+412.77%)
Mutual labels:  hyperparameter-optimization, tuning-parameters
scikit-hyperband
A scikit-learn compatible implementation of hyperband
Stars: ✭ 68 (+44.68%)
Mutual labels:  hyperparameter-optimization
ml-pipeline
Using Kafka-Python to illustrate a ML production pipeline
Stars: ✭ 90 (+91.49%)
Mutual labels:  hyperparameter-optimization
cli
Polyaxon Core Client & CLI to streamline MLOps
Stars: ✭ 18 (-61.7%)
Mutual labels:  hyperparameter-optimization
go-bayesopt
A library for doing Bayesian Optimization using Gaussian Processes (blackbox optimizer) in Go/Golang.
Stars: ✭ 47 (+0%)
Mutual labels:  hyperparameter-optimization
multi-label-classification
A multi-label, multi-class classification model based on tf.keras
Stars: ✭ 72 (+53.19%)
Mutual labels:  tensorflow-keras
stremr
Streamlined Estimation for Static, Dynamic and Stochastic Treatment Regimes in Longitudinal Data
Stars: ✭ 33 (-29.79%)
Mutual labels:  tuning-parameters
ProxGradPytorch
PyTorch implementation of Proximal Gradient Algorithms a la Parikh and Boyd (2014). Useful for Auto-Sizing (Murray and Chiang 2015, Murray et al. 2019).
Stars: ✭ 28 (-40.43%)
Mutual labels:  hyperparameter-optimization
optuna-dashboard
Real-time Web Dashboard for Optuna.
Stars: ✭ 240 (+410.64%)
Mutual labels:  hyperparameter-optimization
maggy
Distribution transparent Machine Learning experiments on Apache Spark
Stars: ✭ 83 (+76.6%)
Mutual labels:  hyperparameter-optimization
prostateMR 3D-CAD-csPCa
Hierarchical probabilistic 3D U-Net, with attention mechanisms (Attention U-Net, SEResNet) and a nested decoder structure with deep supervision (UNet++). Built in TensorFlow 2.5. Configured for voxel-level clinically significant prostate cancer detection in multi-channel 3D bpMRI scans.
Stars: ✭ 32 (-31.91%)
Mutual labels:  tensorflow-keras
mlr3tuning
Hyperparameter optimization package of the mlr3 ecosystem
Stars: ✭ 44 (-6.38%)
Mutual labels:  hyperparameter-optimization
naturalselection
A general-purpose pythonic genetic algorithm.
Stars: ✭ 17 (-63.83%)
Mutual labels:  hyperparameter-optimization
optkeras
OptKeras: wrapper around Keras and Optuna for hyperparameter optimization
Stars: ✭ 29 (-38.3%)
Mutual labels:  hyperparameter-optimization
Ensemble-of-Multi-Scale-CNN-for-Dermatoscopy-Classification
Fully supervised binary classification of skin lesions from dermatoscopic images using an ensemble of diverse CNN architectures (EfficientNet-B6, Inception-V3, SEResNeXt-101, SENet-154, DenseNet-169) with multi-scale input.
Stars: ✭ 25 (-46.81%)
Mutual labels:  tensorflow-keras
Hypernets
A General Automated Machine Learning framework to simplify the development of End-to-end AutoML toolkits in specific domains.
Stars: ✭ 221 (+370.21%)
Mutual labels:  hyperparameter-optimization
bbai
Set model hyperparameters using deterministic, exact algorithms.
Stars: ✭ 19 (-59.57%)
Mutual labels:  hyperparameter-optimization
textlearnR
A simple collection of well working NLP models (Keras, H2O, StarSpace) tuned and benchmarked on a variety of datasets.
Stars: ✭ 16 (-65.96%)
Mutual labels:  hyperparameter-optimization
randopt
Streamlined machine learning experiment management.
Stars: ✭ 108 (+129.79%)
Mutual labels:  hyperparameter-optimization

keras-hypetune

A friendly Python package for Keras hyperparameter tuning based only on NumPy and Hyperopt.

Overview

A very simple wrapper for fast Keras hyperparameter optimization. keras-hypetune lets you use the power of Keras without having to learn a new syntax. All you need to do is create a Python dictionary with the parameter boundaries for the experiments and define your Keras model (in any format: Functional or Sequential) inside a callable function.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

def get_model(param):

    model = Sequential()
    model.add(Dense(param['unit_1'], activation=param['activ']))
    model.add(Dense(param['unit_2'], activation=param['activ']))
    model.add(Dense(1))
    model.compile(optimizer=Adam(learning_rate=param['lr']),
                  loss='mse', metrics=['mae'])

    return model

The optimization process is easily trackable using the callbacks provided by Keras. At the end of the search, you can access everything you need by querying the keras-hypetune searcher. The best solutions can be automatically saved in proper locations.
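
For example, here is a minimal sketch of querying a fitted searcher (e.g. the kgs instance from the KerasGridSearch example below); the attribute names (scores, best_score, best_params) are assumptions to verify against the installed keras-hypetune version:

print(kgs.scores)       # scores of all evaluated parameter combinations (assumed attribute)
print(kgs.best_score)   # best monitored score achieved (assumed attribute)
print(kgs.best_params)  # parameter combination that achieved it (assumed attribute)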

Installation

pip install --upgrade keras-hypetune

TensorFlow and Keras are not declared as requirements: keras-hypetune is designed specifically for tf.keras with TensorFlow 2.0, which you are expected to install yourself. GPU usage is available as in standard Keras.

Fixed Validation Set

This tuning modality runs the optimization on a fixed validation set: every parameter combination is evaluated on the same data. In this case, any input data format accepted by Keras is allowed.

KerasGridSearch

All the passed parameter combinations are created and evaluated.

from kerashypetune import KerasGridSearch

param_grid = {
    'unit_1': [128,64], 
    'unit_2': [64,32],
    'lr': [1e-2,1e-3], 
    'activ': ['elu','relu'],
    'epochs': 100, 
    'batch_size': 512
}

kgs = KerasGridSearch(get_model, param_grid, monitor='val_loss', greater_is_better=False)
kgs.search(x_train, y_train, validation_data=(x_valid, y_valid))
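
Since search forwards its extra keyword arguments to model.fit, standard Keras callbacks can be plugged into the search; a sketch assuming this pass-through behavior (the EarlyStopping settings are illustrative):

from tensorflow.keras.callbacks import EarlyStopping

# assumption: extra fit kwargs such as callbacks are forwarded to model.fit
es = EarlyStopping(patience=5, min_delta=0.001, monitor='val_loss')
kgs.search(x_train, y_train, validation_data=(x_valid, y_valid), callbacks=[es])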

KerasRandomSearch

A fixed number of randomly sampled parameter combinations is created and evaluated.

The number of parameter combinations that are tried is given by n_iter. If all parameters are presented as a list, sampling without replacement is performed. If at least one parameter is given as a distribution (from scipy.stats random variables), sampling with replacement is used.

from scipy import stats
from kerashypetune import KerasRandomSearch

param_grid = {
    'unit_1': [128,64], 
    'unit_2': stats.randint(32, 128),
    'lr': stats.uniform(1e-4, 0.1), 
    'activ': ['elu','relu'],
    'epochs': 100, 
    'batch_size': 512
}

krs = KerasRandomSearch(get_model, param_grid, monitor='val_loss', greater_is_better=False, 
                        n_iter=15, sampling_seed=33)
krs.search(x_train, y_train, validation_data=(x_valid, y_valid))

KerasBayesianSearch

The parameter values are chosen according to the Bayesian optimization algorithms provided by hyperopt (by default the Tree-structured Parzen Estimator, TPE).

The number of parameter combinations that are tried is given by n_iter. Parameters must be given as hyperopt distributions.

import numpy as np
from hyperopt import hp, Trials
from kerashypetune import KerasBayesianSearch

param_grid = {
    'unit_1': 64 + hp.randint('unit_1', 64),
    'unit_2': 32 + hp.randint('unit_2', 96),
    'lr': hp.loguniform('lr', np.log(0.001), np.log(0.02)), 
    'activ': hp.choice('activ', ['elu','relu']),
    'epochs': 100, 
    'batch_size': 512
}

kbs = KerasBayesianSearch(get_model, param_grid, monitor='val_loss', greater_is_better=False, 
                          n_iter=15, sampling_seed=33)
kbs.search(x_train, y_train, trials=Trials(), validation_data=(x_valid, y_valid))
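
To inspect the optimization history afterwards, keep a reference to the Trials object instead of constructing it inline; trials.losses() and trials.best_trial are standard hyperopt accessors, and it is an assumption (to verify) that keras-hypetune fills them the same way hyperopt's fmin does:

trials = Trials()
kbs.search(x_train, y_train, trials=trials, validation_data=(x_valid, y_valid))

print(trials.losses())     # loss recorded at each iteration (hyperopt accessor)
print(trials.best_trial)   # full record of the best iteration (hyperopt accessor)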

Cross Validation

This tuning modality runs the optimization with a cross-validation approach. The available CV strategies are the same provided by the scikit-learn splitter classes. The parameter combinations are evaluated on the mean score across the folds. In this case, only NumPy array data is allowed. For tasks involving multiple inputs/outputs, the arrays can be wrapped into a list or dict as in normal Keras (see the sketch after the KerasGridSearchCV example below).

KerasGridSearchCV

All the passed parameter combinations are created and evaluated.

from sklearn.model_selection import KFold
from kerashypetune import KerasGridSearchCV

param_grid = {
    'unit_1': [128,64], 
    'unit_2': [64,32],
    'lr': [1e-2,1e-3], 
    'activ': ['elu','relu'],
    'epochs': 100, 
    'batch_size': 512
}

cv = KFold(n_splits=3, random_state=33, shuffle=True)

kgs = KerasGridSearchCV(get_model, param_grid, cv=cv, monitor='val_loss', greater_is_better=False)
kgs.search(X, y)
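
As mentioned above, multi-input/output tasks wrap the arrays in a list or dict exactly as in plain Keras. A minimal sketch for a two-output model, assuming get_model here returns a model with two outputs (the data and shapes are illustrative):

import numpy as np

# illustrative data: a single input array and two target arrays wrapped
# in a list, passed exactly as they would be to model.fit in plain Keras
X = np.random.uniform(size=(1000, 10))
y = [np.random.uniform(size=(1000,)), np.random.uniform(size=(1000,))]

kgs.search(X, y)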

KerasRandomSearchCV

A fixed number of randomly sampled parameter combinations is created and evaluated.

The number of parameter combinations that are tried is given by n_iter. If all parameters are presented as a list, sampling without replacement is performed. If at least one parameter is given as a distribution (from scipy.stats random variables), sampling with replacement is used.

from scipy import stats
from sklearn.model_selection import KFold
from kerashypetune import KerasRandomSearchCV

param_grid = {
    'unit_1': [128,64], 
    'unit_2': stats.randint(32, 128),
    'lr': stats.uniform(1e-4, 0.1), 
    'activ': ['elu','relu'],
    'epochs': 100, 
    'batch_size': 512
}

cv = KFold(n_splits=3, random_state=33, shuffle=True)

krs = KerasRandomSearchCV(get_model, param_grid, cv=cv, monitor='val_loss', greater_is_better=False,
                          n_iter=15, sampling_seed=33)
krs.search(X, y)

KerasBayesianSearchCV

The parameter values are chosen according to the Bayesian optimization algorithms provided by hyperopt (by default the Tree-structured Parzen Estimator, TPE).

The number of parameter combinations that are tried is given by n_iter. Parameters must be given as hyperopt distributions.

import numpy as np
from hyperopt import hp, Trials
from sklearn.model_selection import KFold
from kerashypetune import KerasBayesianSearchCV

param_grid = {
    'unit_1': 64 + hp.randint('unit_1', 64),
    'unit_2': 32 + hp.randint('unit_2', 96),
    'lr': hp.loguniform('lr', np.log(0.001), np.log(0.02)), 
    'activ': hp.choice('activ', ['elu','relu']),
    'epochs': 100, 
    'batch_size': 512
}

cv = KFold(n_splits=3, random_state=33, shuffle=True)

kbs = KerasBayesianSearchCV(get_model, param_grid, cv=cv, monitor='val_loss', greater_is_better=False,
                            n_iter=15, sampling_seed=33)
kbs.search(X, y, trials=Trials())