SuryaThiru / mlgauge

License: MIT
A simple library to benchmark the performance of machine learning methods across different datasets.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to mlgauge

micro-runner
Micro-Runner, a CLI playground for benchmarking your JavaScript code
Stars: ✭ 27 (+22.73%)
Mutual labels:  benchmark
glDelegateBench
quick and dirty inference time benchmark for TFLite gles delegate
Stars: ✭ 17 (-22.73%)
Mutual labels:  benchmark
Clamor
The Python Discord API Framework
Stars: ✭ 14 (-36.36%)
Mutual labels:  wrapper
MVDribbbleKit
A modern Objective-C wrapper for the Dribbble API.
Stars: ✭ 31 (+40.91%)
Mutual labels:  wrapper
nodemark
A modern benchmarking library for Node.js
Stars: ✭ 23 (+4.55%)
Mutual labels:  benchmark
arewefastyet
Nightly Benchmarks Project
Stars: ✭ 31 (+40.91%)
Mutual labels:  benchmark
rust-web-benchmarks
Benchmarking web frameworks written in rust with rewrk tool.
Stars: ✭ 97 (+340.91%)
Mutual labels:  benchmark
web-benchmarks
A set of HTTP server benchmarks for Golang, node.js and Python with proper CPU utilization and database connection pooling.
Stars: ✭ 22 (+0%)
Mutual labels:  benchmark
cache-bench
Explore the impact of virtual memory settings on caching efficiency on Linux systems under memory pressure
Stars: ✭ 25 (+13.64%)
Mutual labels:  benchmark
JikanKt
A Kotlin wrapper for Jikan REST API
Stars: ✭ 17 (-22.73%)
Mutual labels:  wrapper
WebDGap
WebDGap allows you to convert any website or HTML/CSS/JavaScript web application to a native Windows, Mac, Linux, PhoneGap, and Chrome application/extension.
Stars: ✭ 106 (+381.82%)
Mutual labels:  wrapper
word-benchmarks
Benchmarks for intrinsic word embeddings evaluation.
Stars: ✭ 45 (+104.55%)
Mutual labels:  benchmark
Xamarin-Android
PSPDFKit for Android wrapper for the Xamarin platform.
Stars: ✭ 18 (-18.18%)
Mutual labels:  wrapper
OptimisationAlgorithms
Searching for global optima with the firefly algorithm and solving the traveling salesman problem with a genetic algorithm
Stars: ✭ 20 (-9.09%)
Mutual labels:  benchmark
language-benchmarks
A simple benchmark system for compiled and interpreted languages.
Stars: ✭ 21 (-4.55%)
Mutual labels:  benchmark
SpotifyWebApi
A .net core wrapper for the Spotify Web API
Stars: ✭ 19 (-13.64%)
Mutual labels:  wrapper
SDGym
Benchmarking synthetic data generation methods.
Stars: ✭ 177 (+704.55%)
Mutual labels:  benchmark
snapraid-aio-script
The definitive all-in-one SnapRAID script. Diff, sync, scrub are things of the past. Manage SnapRAID and much, much more!
Stars: ✭ 92 (+318.18%)
Mutual labels:  wrapper
join-order-benchmark
Join Order Benchmark (JOB)
Stars: ✭ 174 (+690.91%)
Mutual labels:  benchmark
python3
Python 3 wrapper for Nim
Stars: ✭ 16 (-27.27%)
Mutual labels:  wrapper

mlgauge

[Badges: Build | Formatting | Code style: black | Documentation Status | License: MIT]

A simple library to benchmark the performance of machine learning methods across different datasets. mlgauge is also a wrapper around PMLB and OpenML, which provide benchmark datasets for machine learning.
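
For context, PMLB exposes its benchmark datasets through a small fetch API, and this is the kind of dataset interface mlgauge wraps (a minimal sketch using PMLB directly; the dataset name is only an illustration):

from pmlb import fetch_data, classification_dataset_names

# Names of the classification benchmark datasets shipped with PMLB.
print(classification_dataset_names[:5])

# Fetch a single dataset as a feature matrix X and a target vector y.
X, y = fetch_data("iris", return_X_y=True)
print(X.shape, y.shape)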

mlgauge can help you if

  • You are developing a machine learning method or an AutoML system and want to compare and analyze how it performs against other methods.
  • You are learning different machine learning methods and would like to understand how different methods behave under different conditions.

Check out the documentation to learn more.

Installation

pip install mlgauge

Usage

This is the workflow for setting up and running a comparison benchmark with mlgauge:

  1. Set up your methods by defining a Method class. If your method follows the sklearn API, you can directly use the SklearnMethod which provides a typical sklearn workflow for estimators.
  2. Set up the experiments with the Analysis class.
  3. Collect the results for further comparative analysis.

Example

from mlgauge import Analysis, SklearnMethod
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
import matplotlib.pyplot as plt

SEED = 42

methods = [
    ("xgboost", SklearnMethod(XGBClassifier(n_jobs=-1, verbose=0), ["accuracy", "f1_micro"])),
    ("lightgbm", SklearnMethod(LGBMClassifier(n_jobs=-1, verbose=0), ["accuracy", "f1_micro"])),
    ("catboost", SklearnMethod(CatBoostClassifier(thread_count=-1, verbose=0), ["accuracy", "f1_micro"])),
    ("gbm", SklearnMethod(GradientBoostingClassifier(verbose=0), ["accuracy", "f1_micro"])),
]

an = Analysis(
    methods=methods,
    metric_names=["accuracy", "f1 score"],
    datasets="classification",
    n_datasets=10,
    random_state=SEED,
)
an.run()

print(an.get_result_as_df("f1 score"))
                          xgboost  lightgbm  catboost       gbm
datasets
mfeat_morphological      0.674000  0.682000  0.698000  0.700000
labor                    0.800000  0.733333  0.866667  0.800000
analcatdata_aids         0.769231  0.384615  0.538462  0.692308
mofn_3_7_10              1.000000  0.990937  1.000000  1.000000
flags                    0.444444  0.377778  0.355556  0.400000
analcatdata_creditscore  1.000000  1.000000  1.000000  1.000000
mfeat_morphological      0.674000  0.682000  0.698000  0.700000
penguins                 0.988095  0.976190  0.988095  0.988095
glass                    0.730769  0.673077  0.692308  0.711538
iris                     0.973684  0.973684  0.973684  0.973684
an.plot_results("f1 score")

[Figure: plot comparing f1 scores of xgboost, lightgbm, catboost and gbm across the sampled datasets]
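
Since get_result_as_df returns a regular pandas DataFrame, the per-dataset scores can be summarized further with standard pandas, and the figure can be saved using the matplotlib import already present in the example (a sketch; the aggregations are illustrative, and it assumes plot_results renders through matplotlib):

import matplotlib.pyplot as plt

f1 = an.get_result_as_df("f1 score")

# Mean f1 score per method across the sampled datasets.
print(f1.mean().sort_values(ascending=False))

# Mean rank per method (1 = best on a given dataset).
print(f1.rank(axis=1, ascending=False).mean())

an.plot_results("f1 score")                # assumed to draw with matplotlib
plt.savefig("f1_comparison.png", dpi=150)  # or plt.show() in an interactive session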

More examples are available in the documentation.

Credits

Logo designed by the talented Neha Balasundaram.
