
arita37 / Mlmodels

Licence: mpl-2.0
mlmodels : Machine Learning and Deep Learning Model ZOO for Pytorch, Tensorflow, Keras, Gluon models...

Programming Languages

python

Projects that are alternatives of or similar to Mlmodels

Auto ts
Automatically build ARIMA, SARIMAX, VAR, FB Prophet and XGBoost Models on Time Series data sets with a Single Line of Code. Now updated with Dask to handle millions of rows.
Stars: ✭ 195 (+34.48%)
Mutual labels:  jupyter-notebook, automl, sklearn
codeflare
Simplifying the definition and execution, scaling and deployment of pipelines on the cloud.
Stars: ✭ 163 (+12.41%)
Mutual labels:  sklearn, hyperparameter-optimization, automl
Aws Machine Learning University Accelerated Nlp
Machine Learning University: Accelerated Natural Language Processing Class
Stars: ✭ 1,695 (+1068.97%)
Mutual labels:  jupyter-notebook, sklearn, gluon
Autogluon
AutoGluon: AutoML for Text, Image, and Tabular Data
Stars: ✭ 3,920 (+2603.45%)
Mutual labels:  automl, hyperparameter-optimization, gluon
Automl alex
State-of-the art Automated Machine Learning python library for Tabular Data
Stars: ✭ 132 (-8.97%)
Mutual labels:  automl, sklearn, hyperparameter-optimization
Aws Machine Learning University Accelerated Tab
Machine Learning University: Accelerated Tabular Data Class
Stars: ✭ 718 (+395.17%)
Mutual labels:  jupyter-notebook, sklearn, gluon
Ko en neural machine translation
Korean English NMT(Neural Machine Translation) with Gluon
Stars: ✭ 55 (-62.07%)
Mutual labels:  jupyter-notebook, gluon
Coronavirus visualization and prediction
This repository tracks the spread of the novel coronavirus, also known as SARS-CoV-2. It is a contagious respiratory virus that first started in Wuhan in December 2019. On 2/11/2020, the disease is officially named COVID-19 by the World Health Organization.
Stars: ✭ 62 (-57.24%)
Mutual labels:  jupyter-notebook, sklearn
Mlatimperial2017
Materials for the course of machine learning at Imperial College organized by Yandex SDA
Stars: ✭ 71 (-51.03%)
Mutual labels:  jupyter-notebook, sklearn
Nni
An open source AutoML toolkit for automate machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Stars: ✭ 10,698 (+7277.93%)
Mutual labels:  automl, hyperparameter-optimization
Machine Learning
Machine learning for Project Cognoma
Stars: ✭ 30 (-79.31%)
Mutual labels:  jupyter-notebook, sklearn
Spark Nlp Models
Models and Pipelines for the Spark NLP library
Stars: ✭ 88 (-39.31%)
Mutual labels:  jupyter-notebook, nlu
Dogbreed gluon
kaggle Dog Breed Identification
Stars: ✭ 116 (-20%)
Mutual labels:  jupyter-notebook, gluon
Aws Machine Learning University Accelerated Cv
Machine Learning University: Accelerated Computer Vision Class
Stars: ✭ 1,068 (+636.55%)
Mutual labels:  jupyter-notebook, gluon
Tpot
A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
Stars: ✭ 8,378 (+5677.93%)
Mutual labels:  automl, hyperparameter-optimization
My Journey In The Data Science World
📢 Ready to learn or review your knowledge!
Stars: ✭ 1,175 (+710.34%)
Mutual labels:  jupyter-notebook, sklearn
Mljar Supervised
Automated Machine Learning Pipeline with Feature Engineering and Hyper-Parameters Tuning 🚀
Stars: ✭ 961 (+562.76%)
Mutual labels:  automl, hyperparameter-optimization
Advisor
Open-source implementation of Google Vizier for hyper parameters tuning
Stars: ✭ 1,359 (+837.24%)
Mutual labels:  jupyter-notebook, automl
Auto ml
[UNMAINTAINED] Automated machine learning for analytics & production
Stars: ✭ 1,559 (+975.17%)
Mutual labels:  automl, hyperparameter-optimization
Milano
Milano is a tool for automating hyper-parameters search for your models on a backend of your choice.
Stars: ✭ 140 (-3.45%)
Mutual labels:  automl, hyperparameter-optimization

mlmodels : Model ZOO

This repository is a model zoo for Pytorch, Tensorflow, Keras, Gluon, LightGBM, Sklearn and other frameworks, with a lightweight functional interface that wraps access to recent and state-of-the-art deep learning and ML models as well as hyper-parameter search. The interface works across platforms and follows the logic of sklearn: fit, predict, transform, metrics, save, load, etc. More than 60 recent models (published after 2018) are available in the domains listed below:

Main characteristics :

  • Functional-style interface: reduces boilerplate code; well suited to scientific computing (see the minimal sketch below).
  • JSON-based input: reduces boilerplate code; makes experiment management easy.
  • Focus on moving research/script code into benchmark batches.
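
As a quick illustration of the first two points, here is a minimal sketch of the sklearn-style, JSON-driven workflow. It reuses the model_tf.1_lstm wrapper and the example/1_lstm.json config that appear in the full examples further down; treat it as a sketch of the pattern rather than a complete recipe.

```python
# Minimal sketch: load a model wrapper, pull its parameters from a JSON config,
# then fit / predict / evaluate. Module name and JSON path are the ones used in
# the "Example Notebooks" section below.
from mlmodels.models import module_load

module = module_load(model_uri="model_tf.1_lstm.py")          # load the model wrapper

model_pars, data_pars, compute_pars, out_pars = module.get_params(param_pars={
    "choice": "json", "config_mode": "test",
    "data_path": "../mlmodels/example/1_lstm.json"})           # all experiment settings come from JSON

module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)
ypred       = module.predict(data_pars, compute_pars, out_pars)     # predictions
metrics_val = module.evaluate(data_pars, compute_pars, out_pars)    # evaluation metrics
```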


Benefits of mlmodels repo :


A simple framework for both machine learning and deep learning models, without boilerplate code.

A collection of models (a model zoo) in Pytorch, Tensorflow and Keras opens up richer possibilities for model re-use, model batching and benchmarking. A single, simple interface, zero boilerplate code, and recent state-of-the-art models and frameworks are the main strengths of mlmodels. Several domains are covered: computer vision, NLP, time series prediction, and tabular data classification.

How to Start :

guide

If you like the idea, we are looking for contributors:

contribution guide

Model List :


Time Series:
  1. Montreal AI, Nbeats: 2019, Advanced interpretable Time Series Neural Network, [Link]

  2. Amazon DeepAR: 2019, Multi-variate Time Series Neural Network, [Link]

  3. Facebook Prophet 2017, Time Series prediction [Link]

  4. ARMDN, Advanced Multi-variate Time series Prediction : 2019, Associative and Recurrent Mixture Density Networks for time series. [Link]

  5. LSTM Neural Network prediction : Stacked Bidirectional and Unidirectional LSTM Recurrent Neural Network for Network-wide Traffic Speed Prediction [Link]

NLP:
  1. Sentence Transformers : 2019, Embedding of full sentences using BERT, [Link]

  2. Transformers Classifier : Using Transformer for Text Classification, [Link]

  3. TextCNN Pytorch : 2016, Text CNN Classifier, [Link]

  4. TextCNN Keras : 2016, Text CNN Classifier, [Link]

  5. Bi-directional Conditional Random Field LSTM for Named Entity Recognition, [Link]

  6. DRMM: Deep Relevance Matching Model for Ad-hoc Retrieval.[Link]

  7. DRMMTKS: Deep Top-K Relevance Matching Model for Ad-hoc Retrieval. [Link]

  8. ARC-I: Convolutional Neural Network Architectures for Matching Natural Language Sentences [Link]

  9. ARC-II: Convolutional Neural Network Architectures for Matching Natural Language Sentences [Link]

  10. DSSM: Learning Deep Structured Semantic Models for Web Search using Clickthrough Data [Link]

  11. CDSSM: Learning Semantic Representations Using Convolutional Neural Networks for Web Search [Link]

  12. MatchLSTM: Machine Comprehension Using Match-LSTM and Answer Pointer [Link]

  13. DUET: Learning to Match Using Local and Distributed Representations of Text for Web Search [Link]

  14. KNRM: End-to-End Neural Ad-hoc Ranking with Kernel Pooling [Link]

  15. ConvKNRM: Convolutional neural networks for soft-matching n-grams in ad-hoc search [Link]

  16. ESIM: Enhanced LSTM for Natural Language Inference [Link]

  17. BiMPM: Bilateral Multi-Perspective Matching for Natural Language Sentences [Link]

  18. MatchPyramid: Text Matching as Image Recognition [Link]

  19. Match-SRNN: Modeling the Recursive Matching Structure with Spatial RNN [Link]

  20. aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model [Link]

  21. MV-LSTM: A Deep Architecture for Semantic Matching with Multiple Positional Sentence Representations [Link]

  22. DIIN: Natural Language Inference Over Interaction Space [Link]

  23. HBMP: Sentence Embeddings in NLI with Iterative Refinement Encoders [Link]

TABULAR:

LightGBM : Light Gradient Boosting

AutoML Gluon : 2020, AutoML in Gluon, MxNet using LightGBM, CatBoost

Auto-Keras : 2020, Automatic Keras model selection

All sklearn models (a selection sketch follows the list below):

linear_model.ElasticNet
linear_model.ElasticNetCV
linear_model.Lars
linear_model.LarsCV
linear_model.Lasso
linear_model.LassoCV
linear_model.LassoLars
linear_model.LassoLarsCV
linear_model.LassoLarsIC
linear_model.OrthogonalMatchingPursuit
linear_model.OrthogonalMatchingPursuitCV

svm.LinearSVC
svm.LinearSVR
svm.NuSVC
svm.NuSVR
svm.OneClassSVM
svm.SVC
svm.SVR
svm.l1_min_c

neighbors.KNeighborsClassifier
neighbors.KNeighborsRegressor
neighbors.KNeighborsTransformer
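
As a rough sketch of how one of the estimators above is selected through the generic sklearn wrapper, the snippet below follows the pattern of the RandomForest example later in this README. The "model_name" key and the other parameter dictionaries are copied from that example; the assumption that other estimator class names (here SVC) and their sklearn keyword arguments are accepted the same way is ours, not a documented guarantee.

```python
# Sketch only: selecting a different sklearn estimator through the generic wrapper.
# "model_name" mirrors the RandomForestClassifier example below; passing "SVC" plus
# its sklearn kwargs this way is an assumption rather than documented behaviour.
from mlmodels.models import module_load

module       = module_load(model_uri="model_sklearn.sklearn.py")

model_pars   = {"model_name": "SVC", "C": 1.0, "probability": True}   # estimator + its sklearn kwargs (assumed)
data_pars    = {"mode": "test", "path": "../mlmodels/dataset", "data_type": "pandas"}
compute_pars = {"return_pred_not": False}
out_pars     = {"path": "../ztest"}

module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)
ypred = module.predict(data_pars, compute_pars, out_pars)
```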

Binary Neural Prediction from tabular data:

  1. A Convolutional Click Prediction Model [Link]

  2. Deep Learning over Multi-field Categorical Data: A Case Study on User Response Prediction [Link]

  3. Product-based Neural Networks for User Response Prediction [Link]

  4. Wide & Deep Learning for Recommender Systems [Link]

  5. DeepFM: A Factorization-Machine based Neural Network for CTR Prediction [Link]

  6. Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction [Link]

  7. Deep & Cross Network for Ad Click Predictions [Link]

  8. Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks [Link]

  9. Neural Factorization Machines for Sparse Predictive Analytics [Link]

  10. xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems [Link]

  11. AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks [Link]

  12. Deep Interest Network for Click-Through Rate Prediction [Link]

  13. Deep Interest Evolution Network for Click-Through Rate Prediction [Link]

  14. Operation-aware Neural Networks for User Response Prediction [Link]

  15. Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction [Link]

  16. Deep Session Interest Network for Click-Through Rate Prediction [Link]

  17. FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction [Link]

VISION:

Vision Models (pre-trained):
  1. alexnet: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size [Link]

  2. densenet121: Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space [Link]

  3. densenet169: Classification of TrashNet Dataset Based on Deep Learning Models [Link]

  4. densenet201: Utilization of DenseNet201 for diagnosis of breast abnormality [Link]

  5. densenet161: Automated classification of histopathology images using transfer learning [Link]

  6. inception_v3: Menfish Classification Based on Inception_V3 Convolutional Neural Network [Link]

  7. resnet18: Leveraging the VTA-TVM Hardware-Software Stack for FPGA Acceleration of 8-bit ResNet-18 Inference [Link]

  8. resnet34: Automated Pavement Crack Segmentation Using Fully Convolutional U-Net with a Pretrained ResNet-34 Encoder [Link]

  9. resnet50: Extremely Large Minibatch SGD: Training ResNet-50 on ImageNet in 15 Minutes [Link]

  10. resnet101: Classification of Cervical MR Images using ResNet101 [Link]

  11. resnet152: Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network [Link]

  12. resnext50_32x4d: Automatic Grading of Individual Knee Osteoarthritis Features in Plain Radiographs using Deep Convolutional Neural Networks [Link]

  13. resnext101_32x8d: DEEP LEARNING BASED PLANT PART DETECTION IN GREENHOUSE SETTINGS [Link]

  14. wide_resnet50_2: Identification of Tree Species by Trunk Images Using Deep Machine Learning [Link]

  15. wide_resnet101_2: Identification of Tree Species by Trunk Images Using Deep Machine Learning [Link]

  16. squeezenet1_0: Classification of Ice Crystal Habits Observed From Airborne Cloud Particle Imager by Deep Transfer Learning [Link]

  17. squeezenet1_1: Benchmarking parts based face processing in-the-wild for gender recognition and head pose estimation [Link]

  18. vgg11: TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation [Link]

  19. vgg13: Convolutional Neural Network for Raindrop Detection [Link]

  20. vgg16: Automatic detection of lumen and media in the IVUS images using U-Net with VGG16 Encoder [Link]

  21. vgg19: A New Transfer Learning Based on VGG-19 Network for Fault Diagnosis [Link]

  22. vgg11_bn: Shifted Spatial-Spectral Convolution for Deep Neural Networks [Link]

  23. vgg13_bn: DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation [Link]

  24. vgg16_bn: Partial Convolution based Padding [Link]

  25. vgg19_bn: NeurIPS 2019 Disentanglement Challenge: Improved Disentanglement through Learned Aggregation of Convolutional Feature Maps [Link]

  26. googlenet: On the Performance of GoogLeNet and AlexNet Applied to Sketches [Link]

  27. shufflenet_v2_x0_5: Exemplar Normalization for Learning Deep Representation [Link]

  28. shufflenet_v2_x1_0: Tree Species Identification by Trunk Images Using Deep Machine Learning [Link]

  29. mobilenet_v2: MobileNetV2: Inverted Residuals and Linear Bottlenecks [Link]

More resources are available in the model list here

Contribution


Dev documentation: link

Start contributing: link

Colab creation: link

Model benchmarking : link

Add new models : link

Core compute : link

User Documentation


User-Documentation: link

Colab


Colab: link

Installation Guide:

(A) Using pre-installed Setup (one click run) :

Read-more

(B) Using Colab :

Read-more

Initialize template and Tests

This copies the templates, datasets, and examples to your working folder:

ml_models --init  /yourworkingFolder/

To test hyper-parameter search:
ml_optim

To test model fitting:
ml_models

Actual test runs

Read-more

test_fast_linux


Usage in Jupyter/Colab

Read-more


Command Line tools:

Read-more


Model List

Read-more


How to add a new model

Read-more


Index of functions/methods

Read-more


Testing

Read-more

Testing : debugging Process Read-more

Tutorial : Code Design, Testing Read-more

Tests: github actions to add Read-more


Research Papers

Read-more


Tutorials


Tutorial : New contributors Read-more

Tutorial : Code Design, Testing Read-more

Tutorial : Usage of dataloader Read-more

TUTORIAL : Use Colab for Code Development Read-more

TUTORIAL : Do a PR or add model in mlmodels Read-more

TUTORIAL : Using Online editor for mlmodels Read-more

Example Notebooks


Example Notebooks

LSTM example in TensorFlow (Example notebook)

Define model and data definitions

# import library
import mlmodels


model_uri    = "model_tf.1_lstm.py"
ncol_input, ncol_output = 6, 6      # number of input / output columns of your dataset (example values)
model_pars   =  {  "num_layers": 1,
                  "size": ncol_input, "size_layer": 128, "output_size": ncol_output, "timestep": 4,
                }
data_pars    =  {"data_path": "/folder/myfile.csv"  , "data_type": "pandas" }
compute_pars =  { "learning_rate": 0.001, }

out_pars     =  { "path": "ztest_1lstm/", "model_path" : "ztest_1lstm/model/"}
save_pars = { "path" : "ztest_1lstm/model/" }
load_pars = { "path" : "ztest_1lstm/model/" }


#### Load Parameters and Train
from mlmodels.models import module_load

module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(data_pars, compute_pars, out_pars)     # predict pipeline
---

AutoML example in Gluon (Example notebook)

# import library
import mlmodels
import autogluon as ag

#### Define model and data definitions
model_uri = "model_gluon.gluon_automl.py"
data_pars = {"train": True, "uri_type": "amazon_aws", "dt_name": "Inc"}

model_pars = {"model_type": "tabular",
              "learning_rate": ag.space.Real(1e-4, 1e-2, default=5e-4, log=True),
              "activation": ag.space.Categorical(*tuple(["relu", "softrelu", "tanh"])),
              "layers": ag.space.Categorical(
                          *tuple([[100], [1000], [200, 100], [300, 200, 100]])),
              'dropout_prob': ag.space.Real(0.0, 0.5, default=0.1),
              'num_boost_round': 10,
              'num_leaves': ag.space.Int(lower=26, upper=30, default=28)   # default must lie within [lower, upper]
             }

compute_pars = {
    "hp_tune": True,
    "num_epochs": 10,
    "time_limits": 120,
    "num_trials": 5,
    "search_strategy": "skopt"
}

out_pars = {
    "out_path": "dataset/"
}



#### Load Parameters and Train
from mlmodels.models import module_load

module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(data_pars, compute_pars, out_pars)     # predict pipeline


---

RandomForest example in Scikit-learn (Example notebook)

```python
# import library
import mlmodels

#### Define model and data definitions
model_uri    = "model_sklearn.sklearn.py"

model_pars   = {"model_name": "RandomForestClassifier", "max_depth": 4, "random_state": 0}

data_pars    = {'mode': 'test', 'path': "../mlmodels/dataset", 'data_type': 'pandas'}

compute_pars = {'return_pred_not': False}

out_pars     = {'path': "../ztest"}


#### Load Parameters and Train
from mlmodels.models import module_load

module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars)    # get stats
ypred         =  module.predict(data_pars, compute_pars, out_pars)     # predict pipeline
```


---

### TextCNN example in keras ([Example notebook](example/textcnn.ipynb))

<details>
<summary> TextCNN example in keras </summary>
<br>

```python
# import library
import mlmodels

#### Define model and data definitions
model_uri    = "model_keras.textcnn.py"

data_pars    = {"path" : "../mlmodels/dataset/text/imdb.csv", "train": 1, "maxlen":400, "max_features": 10}

model_pars   = {"maxlen":400, "max_features": 10, "embedding_dims":50}
                       
compute_pars = {"engine": "adam", "loss": "binary_crossentropy", "metrics": ["accuracy"] ,
                        "batch_size": 32, "epochs":1, 'return_pred_not':False}

out_pars     = {"path": "ztest/model_keras/textcnn/"}



#### Load Parameters and Train
from mlmodels.models import module_load

module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)

#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(data_pars, compute_pars, out_pars)     # predict pipeline
```

</details>

Using json config file for input (Example notebook, JSON file)

Import library and functions

# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri    = "model_tf.1_lstm.py"
module        =  module_load( model_uri= model_uri )                           # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(param_pars={
    'choice':'json',
    'config_mode':'test',
    'data_path':'../mlmodels/example/1_lstm.json'
})

module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(data_pars, compute_pars, out_pars)     # predict pipeline



Using Scikit-learn's SVM for Titanic Problem from json file (Example notebook, JSON file)

Import library and functions

# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri    = "model_sklearn.sklearn.py"
module        =  module_load( model_uri= model_uri )                           # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(param_pars={
    'choice':'json',
    'config_mode':'test',
    'data_path':'../mlmodels/example/sklearn_titanic_svm.json'
})

#### Load Parameters and Train

module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(data_pars, compute_pars, out_pars)     # predict pipeline


#### Check metrics
import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv('../mlmodels/dataset/tabular/titanic_train_preprocessed.csv')['Survived'].values
roc_auc_score(y, ypred)



Using Scikit-learn's Random Forest for Titanic Problem from json file (Example notebook, JSON file)

Import library and functions

# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri    = "model_sklearn.sklearn.py"
module        =  module_load( model_uri= model_uri )                           # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(param_pars={
    'choice':'json',
    'config_mode':'test',
    'data_path':'../mlmodels/example/sklearn_titanic_randomForest.json'
})


module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(data_pars, compute_pars, out_pars)     # predict pipeline

#### Check metrics
import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv('../mlmodels/dataset/tabular/titanic_train_preprocessed.csv')['Survived'].values
roc_auc_score(y, ypred)


Using Autogluon for Titanic Problem from json file (Example notebook, JSON file)

Import library and functions

# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri    = "model_gluon.gluon_automl.py"
module        =  module_load( model_uri= model_uri )                           # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(
    choice='json',
    config_mode= 'test',
    data_path= '../mlmodels/example/gluon_automl.json'
)


module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(data_pars, compute_pars, out_pars)     # predict pipeline



import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv('../mlmodels/dataset/tabular/titanic_train_preprocessed.csv')['Survived'].values
roc_auc_score(y, ypred)



Using hyper-params (optuna) for Titanic Problem from json file (Example notebook, JSON file)

Import library and functions

# import library
from mlmodels.models import module_load
from mlmodels.optim import optim
from mlmodels.util import params_json_load, path_norm


#### Load model and data definitions from json

###  hypermodel_pars, model_pars, ....
model_uri   = "model_sklearn.sklearn.py"
config_path = path_norm( 'example/hyper_titanic_randomForest.json'  )
config_mode = "test"  ### test/prod



#### Model Parameters
hypermodel_pars, model_pars, data_pars, compute_pars, out_pars = params_json_load(config_path, config_mode= config_mode)
print( hypermodel_pars, model_pars, data_pars, compute_pars, out_pars)


module            =  module_load( model_uri= model_uri )                      
model_pars_update = optim(
    model_uri       = model_uri,
    hypermodel_pars = hypermodel_pars,
    model_pars      = model_pars,
    data_pars       = data_pars,
    compute_pars    = compute_pars,
    out_pars        = out_pars
)


module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars_update, data_pars=data_pars, compute_pars=compute_pars)      # Create Model instance with the optimized parameters
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(data_pars, compute_pars, out_pars)     # predict pipeline


#### Check metrics
import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv( path_norm('dataset/tabular/titanic_train_preprocessed.csv') )
y = y['Survived'].values
roc_auc_score(y, ypred)

Using LightGBM for Titanic Problem from json file (Example notebook, JSON file)

Import library and functions

# import library
import mlmodels
from mlmodels.models import module_load
from mlmodels.util import path_norm_dict, path_norm
from jsoncomment import JsonComment ; json = JsonComment()

#### Load model and data definitions from json
# Model defination
model_uri    = "model_sklearn.model_lightgbm.py"
module        =  module_load( model_uri= model_uri)

# Path to JSON
data_path = '../dataset/json/lightgbm_titanic.json'  

# Model Parameters
pars = json.load(open( data_path , mode='r'))
for key, pdict in  pars.items() :
  globals()[key] = path_norm_dict( pdict   )   ###Normalize path

#### Load Parameters and Train
module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(data_pars, compute_pars, out_pars)     # predict pipeline


#### Check metrics
metrics_val = module.evaluate(data_pars, compute_pars, out_pars)
metrics_val 


Using Vision CNN RESNET18 for MNIST dataset (Example notebook, JSON file)

# import library
import mlmodels
from mlmodels.models import module_load
from mlmodels.util import path_norm_dict, path_norm, params_json_load
from jsoncomment import JsonComment ; json = JsonComment()


#### Model URI and Config JSON
model_uri   = "model_tch.torchhub.py"
config_path = path_norm( 'model_tch/torchhub_cnn.json'  )
config_mode = "test"  ### test/prod


#### Model Parameters
hypermodel_pars, model_pars, data_pars, compute_pars, out_pars = params_json_load(config_path, config_mode= config_mode)
print( hypermodel_pars, model_pars, data_pars, compute_pars, out_pars)


#### Load Parameters and Train
module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(data_pars, compute_pars, out_pars)     # predict pipeline


#### Check metrics
metrics_val = module.evaluate(data_pars, compute_pars, out_pars)
metrics_val 




---

Using ARMDN Time Series (Example notebook, JSON file)

# import library
import mlmodels
from mlmodels.models import module_load
from mlmodels.util import path_norm_dict, path_norm, params_json_load
from jsoncomment import JsonComment ; json = JsonComment()


#### Model URI and Config JSON
model_uri   = "model_keras.ardmn.py"
config_path = path_norm( 'model_keras/ardmn.json'  )
config_mode = "test"  ### test/prod




#### Model Parameters
hypermodel_pars, model_pars, data_pars, compute_pars, out_pars = params_json_load(config_path, config_mode= config_mode)
print( hypermodel_pars, model_pars, data_pars, compute_pars, out_pars)


#### Load Parameters and Train
module        =  module_load( model_uri= model_uri )                           # Load file definition
module.init(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.evaluate(data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(data_pars, compute_pars, out_pars)     # predict pipeline


#### Check metrics
metrics_val = module.evaluate(data_pars, compute_pars, out_pars)
metrics_val 



#### Save/Load
module.save(model, save_pars ={ 'path': out_pars['path'] +"/model/"})

model2 = module.load(load_pars ={ 'path': out_pars['path'] +"/model/"})



Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].