beringresearch / lab

License: Apache-2.0
A lightweight command line interface for the management of arbitrary machine learning tasks

Programming Languages

python
139,335 projects - #7 most used programming language

Projects that are alternatives of or similar to lab

Metaflow
πŸš€ Build and manage real-life data science projects with ease!
Stars: ✭ 5,108 (+29947.06%)
Mutual labels:  ml, model-management
Mlflow
Open source platform for the machine learning lifecycle
Stars: ✭ 10,898 (+64005.88%)
Mutual labels:  ml, model-management
Bentoml
Model Serving Made Easy
Stars: ✭ 3,064 (+17923.53%)
Mutual labels:  ml, model-management
char-rnn-tensorflow
A Char-RNN with the code tidied up and Korean comments added
Stars: ✭ 24 (+41.18%)
Mutual labels:  ml
informatica-public
Public code developed during my MSc study at University of Bologna
Stars: ✭ 79 (+364.71%)
Mutual labels:  ml
MixingBear
Package for automatic beat-mixing of music files in Python 🐻🎚
Stars: ✭ 73 (+329.41%)
Mutual labels:  ml
dashboard
Project for managing ML model and deploying ML module. It can deploy the Rekcurd service to Kubernetes cluster.
Stars: ✭ 27 (+58.82%)
Mutual labels:  ml
kaggle
Kaggle solutions
Stars: ✭ 17 (+0%)
Mutual labels:  ml
go-tensorflow
Tools and libraries for using Tensorflow (and Tensorflow Serving) in go
Stars: ✭ 25 (+47.06%)
Mutual labels:  ml
vs-mlrt
Efficient ML Filter Runtimes for VapourSynth (with built-in support for waifu2x, DPIR, RealESRGANv2, and Real-CUGAN)
Stars: ✭ 34 (+100%)
Mutual labels:  ml
Keras-Application-Zoo
Reference implementations of popular DL models missing from keras-applications & keras-contrib
Stars: ✭ 31 (+82.35%)
Mutual labels:  ml
Machine-Learning-Projects-2
No description or website provided.
Stars: ✭ 23 (+35.29%)
Mutual labels:  ml
vision-camera-image-labeler
VisionCamera Frame Processor Plugin to label images using MLKit Vision
Stars: ✭ 62 (+264.71%)
Mutual labels:  ml
SynapseML
Simple and Distributed Machine Learning
Stars: ✭ 3,355 (+19635.29%)
Mutual labels:  ml
100-days-of-ai
100 Days of Artificial Intelligence
Stars: ✭ 14 (-17.65%)
Mutual labels:  ml
oomstore
Lightweight and Fast Feature Store Powered by Go (and Rust).
Stars: ✭ 76 (+347.06%)
Mutual labels:  ml
pico-ml
A toy programming language which is a subset of OCaml.
Stars: ✭ 36 (+111.76%)
Mutual labels:  ml
deepchecks
Test Suites for Validating ML Models & Data. Deepchecks is a Python package for comprehensively validating your machine learning models and data with minimal effort.
Stars: ✭ 1,595 (+9282.35%)
Mutual labels:  ml
PuzzleLib
Deep Learning framework with NVIDIA & AMD support
Stars: ✭ 52 (+205.88%)
Mutual labels:  ml
yggdrasil-decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Stars: ✭ 156 (+817.65%)
Mutual labels:  ml

Machine Learning Lab

A lightweight command line interface for the management of arbitrary machine learning tasks.

Documentation is available at: https://bering-ml-lab.readthedocs.io/en/latest/

NOTE: Lab is in active development - expect a bumpy ride!

Installation

The latest stable version can be installed directly from PyPI:

pip install lab-ml

The development version can be installed from GitHub:

git clone https://github.com/beringresearch/lab
cd lab
pip install --editable .

Concepts

Lab employs three concepts: reproducible environments, logging, and model persistence. A typical machine learning workflow can be turned into a Lab Experiment by adding a single decorator.

Creating a new Lab Project

lab init --name [NAME]

Lab will look for a requirements.txt file in the working directory to generate a portable virtual environment for ML experiments.
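
For example, for the SVM experiment shown below, a minimal requirements.txt might simply pin scikit-learn (the version here is illustrative, not prescriptive):

scikit-learn>=0.22

With this file in the working directory, lab init can build an environment that matches the dependencies the experiment imports.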

Setting up a Lab Experiment

Here's a simple script that trains an SVM classifier on the iris data set:

from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score

C = 1.0
gamma = 0.7
iris = datasets.load_iris()
X = iris.data
y = iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.24, random_state=42)

clf = svm.SVC(C=C, kernel='rbf', gamma=gamma, probability=True)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred, average='macro')

It's trivial to create a Lab Experiment using a simple decorator:

from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score

from lab.experiment import Experiment ## New Line

e = Experiment() ## New Line

@e.start_run ## New Line
def train():
    C = 1.0
    gamma = 0.7
    iris = datasets.load_iris()
    X = iris.data
    y = iris.target

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.24, random_state=42)

    clf = svm.SVC(C=C, kernel='rbf', gamma=gamma, probability=True)
    clf.fit(X_train, y_train)

    y_pred = clf.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    precision = precision_score(y_test, y_pred, average='macro')

    e.log_metric('accuracy_score', accuracy) ## New Line
    e.log_metric('precision_score', precision) ## New Line

    e.log_parameter('C', C) ## New Line
    e.log_parameter('gamma', gamma) ## New Line

    e.log_model('svm', clf) ## New Line

Running an Experiment

Lab Experiments can be run as:

lab run <PATH/TO/TRAIN.py>
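
For example, assuming the decorated script above is saved as train.py inside the project directory:

lab run train.py

Each run is logged as a separate Experiment, which can then be compared as shown in the next section.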

Comparing models

Lab assumes that all Experiments associated with a Project log a consistent set of performance metrics. We can quickly assess the performance of each experiment by running:

lab ls

Experiment    Source              Date        accuracy_score    precision_score
------------  ------------------  ----------  ----------------  -----------------
49ffb76e      train_mnist_mlp.py  2019-01-15  0.97: β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ  0.97: β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ
261a34e4      train_mnist_cnn.py  2019-01-15  0.98: β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ  0.98: β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ

Pushing models to a centralised repository

Lab experiments can be pushed to a centralised filesystem through integration with MinIO. Lab assumes that you have set up MinIO on a private cloud.

Lab can be configured once to interface with a remote MinIO instance:

lab config minio --tag my-minio --endpoint [URL:PORT] --accesskey [STRING] --secretkey [STRING]

To push a local lab experiment to minio:

lab push --tag my-minio --bucket [BUCKETNAME] .
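
Putting both steps together with placeholder values (the endpoint address, credentials, and bucket name below are hypothetical):

lab config minio --tag my-minio --endpoint 192.168.1.100:9000 --accesskey my-access-key --secretkey my-secret-key
lab push --tag my-minio --bucket lab-experiments .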

Copyright 2020, Bering Limited
