
perlogix / dama

License: GPL-3.0
A simplified machine learning container platform that helps teams get started with an automated workflow.

Programming Languages

Go
31211 projects - #10 most used programming language
Makefile
30231 projects
Dockerfile
14818 projects

Projects that are alternatives of or similar to dama

qaboard
Algorithm engineering is hard enough: don't spend your time with logistics. QA-Board organizes your runs and lets you visualize, compare and share results.
Stars: ✭ 48 (-36.84%)
Mutual labels:  mlops
actions-ml-cicd
A Collection of GitHub Actions That Facilitate MLOps
Stars: ✭ 181 (+138.16%)
Mutual labels:  mlops
cartpole-rl-remote
CartPole game by Reinforcement Learning, a journey from training to inference
Stars: ✭ 24 (-68.42%)
Mutual labels:  mlops
chitra
A multi-functional library for full-stack Deep Learning. Simplifies Model Building, API development, and Model Deployment.
Stars: ✭ 210 (+176.32%)
Mutual labels:  mlops
k3ai
A lightweight tool to get an AI infrastructure stack up in minutes, not days. K3ai will take care of setting up K8s for you, deploy the AI tool of your choice, and even run your code on it.
Stars: ✭ 105 (+38.16%)
Mutual labels:  mlops
ck
Portable automation meta-framework to manage, describe, connect and reuse any artifacts, scripts, tools and workflows on any platform with any software and hardware in a non-intrusive way and with minimal effort. Try it using this tutorial to modularize and automate ML Systems benchmarking from the Student Cluster Competition at SC'22.
Stars: ✭ 501 (+559.21%)
Mutual labels:  mlops
gockerfile
🐳 gockerfile is a YAML, Docker-compatible alternative to the Dockerfile, specializing in simple Go servers.
Stars: ✭ 44 (-42.11%)
Mutual labels:  moby
aml-compute
GitHub Action that allows you to attach, create and scale Azure Machine Learning compute resources.
Stars: ✭ 19 (-75%)
Mutual labels:  mlops
cli
Polyaxon Core Client & CLI to streamline MLOps
Stars: ✭ 18 (-76.32%)
Mutual labels:  mlops
charts
Helm charts for creating reproducible and maintainable deployments of Polyaxon with Kubernetes.
Stars: ✭ 32 (-57.89%)
Mutual labels:  mlops
mrmr
mRMR (minimum-Redundancy-Maximum-Relevance) for automatic feature selection at scale.
Stars: ✭ 170 (+123.68%)
Mutual labels:  mlops
great expectations action
A GitHub Action that makes it easy to use Great Expectations to validate your data pipelines in your CI workflows.
Stars: ✭ 66 (-13.16%)
Mutual labels:  mlops
mlreef
The collaboration workspace for Machine Learning
Stars: ✭ 1,409 (+1753.95%)
Mutual labels:  mlops
serving-pytorch-models
Serving PyTorch models with TorchServe 🔥
Stars: ✭ 91 (+19.74%)
Mutual labels:  mlops
lightning-hydra-template
PyTorch Lightning + Hydra. A very user-friendly template for rapid and reproducible ML experimentation with best practices. ⚡🔥⚡
Stars: ✭ 1,905 (+2406.58%)
Mutual labels:  mlops
monai-deploy
MONAI Deploy aims to become the de-facto standard for developing, packaging, testing, deploying and running medical AI applications in clinical production.
Stars: ✭ 56 (-26.32%)
Mutual labels:  mlops
neptune-client
📒 Experiment tracking tool and model registry
Stars: ✭ 348 (+357.89%)
Mutual labels:  mlops
mlops-workload-orchestrator
The MLOps Workload Orchestrator solution helps you streamline and enforce architecture best practices for machine learning (ML) model productionization. This solution is an extendable framework that provides a standard interface for managing ML pipelines for AWS ML services and third-party services.
Stars: ✭ 114 (+50%)
Mutual labels:  mlops
vertex-edge
A tool for training models to Vertex on Google Cloud Platform.
Stars: ✭ 24 (-68.42%)
Mutual labels:  mlops
benderopt
Black-box optimization library
Stars: ✭ 84 (+10.53%)
Mutual labels:  mlops

dama

CircleCI Go Report Card

A simplified machine learning container platform that helps teams get started with an automated workflow.

demo gif

DISCLAIMER: dama is currently in alpha due to its lack of security hardening and scaling, but it's still fun to try out!

Server Configuration

These default server configurations are loaded if not overridden in config.yml:

expire: "1300"
deployexpire: "86400"
uploadsize: 2000000000
envsize: 20
https:
  listen: "0.0.0.0"
  port: "8443"
  debug: false
  verifytls: false
db:
  db: 0
  maxretries: 20
docker:
  endpoint: "unix:///var/run/docker.sock"
  cpushares: 512
  memory: 1073741824  # 1 GiB (Docker memory limits are in bytes)
gotty:
  tls: false

These configurations must be set as environment variables:

# Server admin username and password
DamaUser       # example: DamaUser="tim"
DamaPassword   # example: DamaPassword="9e9692478ca848a19feb8e24e5506ec89"

# Redis database password if applicable
DBPassword     # example: DBPassword="9e9692478ca848a19feb8e24e5506ec89"

All configuration types

images: ["perlogix:minimal"]                # required / string array
expire: "1300"                             # string
deployexpire: "86400"                      # string
uploadsize: 2000000000                     # int
envsize: 20                                # int
https:
  listen: "0.0.0.0"                        # string
  port: "8443"                             # string
  pem: "/opt/dama.pem"                     # required / string
  key: "/opt/dama.key"                     # required / string
  debug: false                             # bool
  verifytls: false                         # bool
db:
  network: "unix"                          # required / string
  address: "./tmp/redis.sock"              # required / string
  db: 0                                    # int
  maxretries: 20                           # int
docker:
  endpoint: "unix:///var/run/docker.sock"  # string
  cpushares: 512                           # int
  memory: 1073741824                       # int
gotty:
  tls: false                               # bool
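
As a hypothetical minimal config.yml, keeping only the fields marked required above (the paths and image name are placeholders; everything omitted falls back to the defaults):

```yaml
# Minimal config.yml: only the required fields.
images: ["perlogix:minimal"]
https:
  pem: "/opt/dama.pem"
  key: "/opt/dama.key"
db:
  network: "unix"
  address: "./tmp/redis.sock"
```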

CLI Configuration

These environment variables need to be exported in order to use dama-cli.

DAMA_SERVER # example: export DAMA_SERVER="https://localhost:8443/"
DAMA_USER   # example: export DAMA_USER="tim"
DAMA_KEY    # example: export DAMA_KEY="9e9692478ca848a19feb8e24e5506ec89"

CLI Flags

Usage: dama [options] <args>

 -new           Create a new environment from scratch and delete the old one
 -run           Create environment and run with dama.yml
 -file          Run with dama.yml in different directory
 -env           Create an environment variable or secret for runtime
 -img           Specify a docker image to be used instead of the default image
 -dl            Download file from workspace in your environment to your local computer
 -up            Upload files from your local computer to workspace in your environment
 -deploy        Deploy API and get your unique URI
 -show-api      Show API details: URL, Health and Type
 -show-images   Show images available to use

CLI Examples

dama -new
dama -run
dama -run -file ../dama.yml
dama -env "AWS_ACCESS_KEY_ID=123,AWS_SECRET_ACCESS_KEY=234"
dama -deploy
dama -run -img tensorflow:lite
dama -show-images
dama -show-api
dama -up data.csv
dama -dl model.pkl

dama.yml File

This is a simple dama.yml to set up your environment and run a Flask API.

image: "perlogix:minimal"
port: "5000"
pip: |
  Flask==0.12.2
  scikit-learn==0.19.1
  numpy==1.14.2
  scipy==1.0.0
python: |
  from flask import Flask, request, jsonify
  from sklearn import datasets
  from sklearn.model_selection import train_test_split
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.externals import joblib

  X, y = datasets.load_iris(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
  model = RandomForestClassifier(random_state=101)
  model.fit(X_train, y_train)
  print("Score on the training set is: {:2}".format(model.score(X_train, y_train)))
  print("Score on the test set is: {:.2}".format(model.score(X_test, y_test)))
  model_filename = 'iris-rf-v1.0.pkl'
  print("Saving model to {}...".format(model_filename))
  joblib.dump(model, model_filename)
  app = Flask(__name__)
  MODEL = joblib.load('iris-rf-v1.0.pkl')
  MODEL_LABELS = ['setosa', 'versicolor', 'virginica']

  @app.route('/predict')
  def predict():
    # Request args arrive as strings; cast to float before predicting
    sepal_length = float(request.args.get('sepal_length'))
    sepal_width = float(request.args.get('sepal_width'))
    petal_length = float(request.args.get('petal_length'))
    petal_width = float(request.args.get('petal_width'))
    features = [[sepal_length, sepal_width, petal_length, petal_width]]
    label_index = MODEL.predict(features)
    label = MODEL_LABELS[label_index[0]]
    return jsonify(status='complete', label=label)

  if __name__ == '__main__':
    app.run(debug=False, host="0.0.0.0", threaded=True)

cURL the API in sandbox or deployment

curl -ks "https://localhost:8443/api/<insert sandbox key>/predict?sepal_length=5&sepal_width=3.1&petal_length=2.5&petal_width=1.2"
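
If you prefer Python to cURL, the same request can be sketched with the standard library alone (the sandbox key placeholder and feature values are illustrative):

```python
import json
import urllib.parse
import urllib.request

# Placeholder: substitute your server address and sandbox key
BASE = "https://localhost:8443/api/<insert sandbox key>/predict"
params = {
    "sepal_length": 5,
    "sepal_width": 3.1,
    "petal_length": 2.5,
    "petal_width": 1.2,
}
# urlencode handles the '&' separators that must be quoted in a shell
url = BASE + "?" + urllib.parse.urlencode(params)
print(url)

# Uncomment to send the request (requires a running dama server):
# with urllib.request.urlopen(url) as resp:
#     print(json.loads(resp.read())["label"])
```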

An even simpler environment setup with model training:

image: "perlogix:tensorflow"
checkout: "https://github.com/aymericdamien/TensorFlow-Examples.git"
cmd: |
  cd TensorFlow-Examples/examples/3_NeuralNetworks
  python neural_network.py

All YAML configuration option types.

project         # string       - project name
env             # string array - env variables
checkout        # string       - git checkout master branch
time_format     # string       - python time format used in container as env variable TIMESTAMP
setup_cmd       # string       - run setup/initial command before cmd or python
cmd             # string       - run BASH Linux command
python          # string       - run inline Python
pip             # string       - install pip packages
image           # string       - define container image for environment
port            # string       - port to expose for web service
git:
  url           # string       - git URL
  branch        # string       - git branch
  sha           # string       - git SHA
aws_s3:
  file          # string       - file to push or pull
  dir           # string       - directory to push or pull
  bucket_push   # string       - push file or dir to S3
  bucket_pull   # string       - pull file or dir from S3
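
As a hypothetical example combining several of these options (the project name, repository URL, env entry, and bucket value are placeholders, and the "KEY=value" env format is assumed from the CLI -env flag above):

```yaml
project: "iris-train"
image: "perlogix:minimal"
git:
  url: "https://github.com/example/iris-project.git"
  branch: "main"
env:
  - "MODEL_VERSION=1.0"
cmd: |
  python train.py
aws_s3:
  file: "iris-rf-v1.0.pkl"
  bucket_push: "example-bucket"
```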

Dockerfiles

Add these lines to your Dockerfiles so the CLI can connect via WebSockets:

RUN cd /usr/bin && curl -L https://github.com/yudai/gotty/releases/download/v1.0.1/gotty_linux_amd64.tar.gz | tar -xz
CMD ["/usr/bin/gotty", "--reconnect", "-w", "/bin/bash"]
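
A minimal custom image might look like the following sketch (the base image and the curl install step are illustrative choices, not dama requirements; only the two gotty lines come from the README):

```dockerfile
FROM python:3.6-slim

# Install curl so the gotty download below works
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# gotty provides the websocket terminal the dama CLI connects to
RUN cd /usr/bin && curl -L https://github.com/yudai/gotty/releases/download/v1.0.1/gotty_linux_amd64.tar.gz | tar -xz
CMD ["/usr/bin/gotty", "--reconnect", "-w", "/bin/bash"]
```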

Build

make build

To Do

  • Tokenize environment variables in DB
  • Write test suite
  • Provide Vagrant and Docker images
  • Add scheduler / resource manager for multi-host container serving
  • Rewrite auth middleware
  • Swap out stdlib flags package for third-party package
  • These docs stink!