
logicalclocks / maggy

License: Apache-2.0
Distribution transparent Machine Learning experiments on Apache Spark

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to maggy

Auto Sklearn
Automated Machine Learning with scikit-learn
Stars: ✭ 5,916 (+7027.71%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning, automl, hyperparameter-search
Auto-Surprise
An AutoRecSys library for Surprise. Automate algorithm selection and hyperparameter tuning 🚀
Stars: ✭ 19 (-77.11%)
Mutual labels:  hyperparameter-tuning, automl, hyperparameter-search
Automl alex
State-of-the art Automated Machine Learning python library for Tabular Data
Stars: ✭ 132 (+59.04%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning, automl
Auptimizer
An automatic ML model optimization tool.
Stars: ✭ 166 (+100%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning, automl
ultraopt
Distributed Asynchronous Hyperparameter Optimization better than HyperOpt.
Stars: ✭ 93 (+12.05%)
Mutual labels:  hyperparameter-optimization, automl, blackbox-optimization
mango
Parallel Hyperparameter Tuning in Python
Stars: ✭ 241 (+190.36%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning, blackbox-optimization
Lale
Library for Semi-Automated Data Science
Stars: ✭ 198 (+138.55%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning, automl
Milano
Milano is a tool for automating hyper-parameters search for your models on a backend of your choice.
Stars: ✭ 140 (+68.67%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning, automl
mindware
An efficient open-source AutoML system for automating machine learning lifecycle, including feature engineering, neural architecture search, and hyper-parameter tuning.
Stars: ✭ 34 (-59.04%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning, automl
Scikit Optimize
Sequential model-based optimization with a `scipy.optimize` interface
Stars: ✭ 2,258 (+2620.48%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning, hyperparameter-search
Hypernets
A General Automated Machine Learning framework to simplify the development of End-to-end AutoML toolkits in specific domains.
Stars: ✭ 221 (+166.27%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning, automl
bbopt
Black box hyperparameter optimization made easy.
Stars: ✭ 66 (-20.48%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning, blackbox-optimization
Smac3
Sequential Model-based Algorithm Configuration
Stars: ✭ 564 (+579.52%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning, automl
Ray
An open source framework that provides a simple, universal API for building distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library.
Stars: ✭ 18,547 (+22245.78%)
Mutual labels:  hyperparameter-optimization, automl, hyperparameter-search
Pbt
Population Based Training (in PyTorch with sqlite3). Status: Unsupported
Stars: ✭ 138 (+66.27%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Mlmodels
mlmodels : Machine Learning and Deep Learning Model ZOO for Pytorch, Tensorflow, Keras, Gluon models...
Stars: ✭ 145 (+74.7%)
Mutual labels:  hyperparameter-optimization, automl
Btb
A simple, extensible library for developing AutoML systems
Stars: ✭ 159 (+91.57%)
Mutual labels:  hyperparameter-optimization, automl
Rl Baselines3 Zoo
A collection of pre-trained RL agents using Stable Baselines3, training and hyperparameter optimization included.
Stars: ✭ 161 (+93.98%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Auto ml
[UNMAINTAINED] Automated machine learning for analytics & production
Stars: ✭ 1,559 (+1778.31%)
Mutual labels:  hyperparameter-optimization, automl
Coursera Deep Learning Specialization
Notes, programming assignments and quizzes from all courses within the Coursera Deep Learning specialization offered by deeplearning.ai: (i) Neural Networks and Deep Learning; (ii) Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization; (iii) Structuring Machine Learning Projects; (iv) Convolutional Neural Networks; (v) Sequence Models
Stars: ✭ 188 (+126.51%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning

Maggy


Maggy is a framework for distribution-transparent machine learning experiments on Apache Spark. It provides a unified way to write core ML training logic as oblivious training functions: you can reuse the same training code whether you are training small models on your laptop or scaling out hyperparameter tuning or distributed deep learning on a cluster. This replaces the current waterfall development process for distributed ML applications, in which code is rewritten at every stage to account for the different distribution context.

Maggy uses the same distribution-transparent training function in all steps of the machine learning development process.

Quick Start

Maggy uses PySpark as the engine to distribute the training processes. To get started, install Maggy in the Python environment used by your Spark cluster, or install Maggy in your local Python environment with the 'spark' extra to run on Spark in local mode:

pip install maggy
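
For Spark local mode, the 'spark' extra mentioned above would be installed with the standard pip extras syntax. The extra name is taken from the sentence above and should be treated as an assumption about the package metadata, not a verified install target:

# install with the 'spark' extra to run on Spark in local mode
pip install "maggy[spark]"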

The programming model consists of wrapping your model training code inside a function. That wrapper function should contain all imports and everything else that makes up your experiment.

Single run experiment:

def train_fn():
    # This is your training iteration loop
    for i in range(number_iterations):
        ...
        # Add the Maggy reporter to report the metric to be optimized
        reporter.broadcast(metric=accuracy)
        ...
    # Return the metric to be optimized or any metric to be logged
    return accuracy

from maggy import experiment
result = experiment.lagom(train_fn=train_fn, name='MNIST')

lagom is a Swedish word meaning "just the right amount". This is how Maggy uses your resources.
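
As a sketch of the reuse described above, the same training function could then be handed to a hyperparameter search. The Searchspace constructor and the optimizer, direction, and num_trials arguments below follow the Maggy documentation as best understood; treat the exact names and signatures as assumptions rather than a verified API:

from maggy import experiment, Searchspace

# Hyperparameters to search over; during tuning, sampled values are passed
# to the training function as arguments (assumed behaviour).
sp = Searchspace(learning_rate=('DOUBLE', [0.001, 0.1]))

result = experiment.lagom(
    train_fn=train_fn,          # the same oblivious training function as above
    searchspace=sp,
    optimizer='randomsearch',   # assumed optimizer name
    direction='max',            # maximize the reported metric
    num_trials=10,
    name='MNIST-tuning',
)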

Documentation

Full documentation is available at maggy.ai

Contributing

There are various ways to contribute, and any contribution is welcome. Please follow the CONTRIBUTING guide to get started.

Issues

Issues can be reported on the official GitHub repo of Maggy.

Citation

Please see our publications on maggy.ai to find out how to cite our work.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].