deepakiim / Deploy-machine-learning-model

License: MIT License
Dockerize and deploy machine learning model as REST API using Flask

Programming Languages

Python: 139335 projects (#7 most used programming language)
Dockerfile: 14818 projects
Shell: 77523 projects

Projects that are alternatives to or similar to Deploy-machine-learning-model

Python-for-Remote-Sensing
Python code for remote sensing applications will be uploaded here. I will try to teach everything I learn during my projects here.
Stars: ✭ 20 (-67.21%)
Mutual labels:  scikit-learn
nyc-2019-scikit-sprint
NYC WiMLDS scikit-learn open source sprint (Aug 24, 2019)
Stars: ✭ 28 (-54.1%)
Mutual labels:  scikit-learn
machine learning
A gentle introduction to machine learning: data handling, linear regression, naive bayes, clustering
Stars: ✭ 22 (-63.93%)
Mutual labels:  scikit-learn
django-quick-start
Deploy a Django app on Render
Stars: ✭ 17 (-72.13%)
Mutual labels:  deployment
spot price machine learning
Machine Learning for Spot Prices
Stars: ✭ 25 (-59.02%)
Mutual labels:  scikit-learn
handson-ml
Jupyter notebooks containing the examples and exercises from the book "Hands-On Machine Learning".
Stars: ✭ 285 (+367.21%)
Mutual labels:  scikit-learn
serverless-model-aws
Deploy any Machine Learning model serverless in AWS.
Stars: ✭ 19 (-68.85%)
Mutual labels:  deployment
WOA-Deployer
WOA Deployer
Stars: ✭ 77 (+26.23%)
Mutual labels:  deployment
text-classification-cn
Chinese text classification practice based on the Sogou news corpus, using traditional machine learning methods as well as pre-trained models.
Stars: ✭ 81 (+32.79%)
Mutual labels:  scikit-learn
ci-docker-image
A Docker Image meant for use with CI/CD pipelines
Stars: ✭ 23 (-62.3%)
Mutual labels:  deployment
draughtsman
An in-cluster agent that handles Helm based deployments
Stars: ✭ 31 (-49.18%)
Mutual labels:  deployment
PracticalMachineLearning
A collection of ML related stuff including notebooks, code and a curated list of various useful resources such as books and software. Almost everything mentioned here is free (as in speech, not as in free food) or open-source.
Stars: ✭ 60 (-1.64%)
Mutual labels:  scikit-learn
AutoDeploy
AutoDeploy is a single configuration deployment library
Stars: ✭ 43 (-29.51%)
Mutual labels:  deployment
docker-wordmove
Docker image to run Wordmove
Stars: ✭ 16 (-73.77%)
Mutual labels:  deployment
mini-qml
Minimal Qt deployment for Linux, Windows, macOS and WebAssembly.
Stars: ✭ 44 (-27.87%)
Mutual labels:  deployment
running-redmine-on-puma
Tutorial for installing and running Redmine on Puma (Ubuntu/MySQL)
Stars: ✭ 20 (-67.21%)
Mutual labels:  deployment
clinica
Software platform for clinical neuroimaging studies
Stars: ✭ 153 (+150.82%)
Mutual labels:  scikit-learn
datascienv
datascienv is a package that helps you set up your data science environment in a single line of code with all dependencies; it also includes pyforest, which provides a single-line import of all required ML libraries.
Stars: ✭ 53 (-13.11%)
Mutual labels:  scikit-learn
MachineLearning
Implementations of machine learning algorithms in Python 3
Stars: ✭ 16 (-73.77%)
Mutual labels:  scikit-learn
kmeans-dbscan-tutorial
A clustering tutorial with scikit-learn for beginners.
Stars: ✭ 20 (-67.21%)
Mutual labels:  scikit-learn

Dockerize and deploy machine learning model as REST API using Flask

A simple Flask application that can serve predictions from a machine learning model. It reads a pickled scikit-learn model into memory when the Flask app is started and returns predictions through the /predict endpoint. You can also use the /train endpoint to train/retrain the model.
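
A minimal sketch of what such a Flask service could look like (the file name, the model path model.pkl and the get_dummies/reindex preprocessing are assumptions for illustration, not necessarily this repo's exact code):

    # flask_api.py / main.py -- illustrative sketch only
    import sys
    import joblib
    import pandas as pd
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # Assumed model location; the pickled scikit-learn model is read into memory at startup
    model = joblib.load("model.pkl")

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expect a JSON array of objects holding the independent variables
        data = pd.get_dummies(pd.DataFrame(request.get_json()))
        # If column names were attached to the estimator (step 3 below), align to the training order
        if hasattr(model, "columns"):
            data = data.reindex(columns=model.columns, fill_value=0)
        return jsonify({"prediction": model.predict(data).tolist()})

    if __name__ == "__main__":
        port = int(sys.argv[1]) if len(sys.argv) > 1 else 9999
        app.run(host="0.0.0.0", port=port)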

Steps for deploying ML model

  1. Install Flask and Docker

  2. Serialise your scikit-learn model (this can be done using pickle or joblib; see the serialisation sketch after this list)

  3. [optional] Add a list of column names to the scikit-learn object, e.g. rf.columns = ['Age', 'Sex', 'Embarked', 'Survived']

  4. Create a separate flask_api.py file which will build the web service using Flask

    1. To run: python flask_api.py
    2. Go to the HTTP address to check that it is working
  5. Create a Dockerfile which does the items below (see the sample Dockerfile sketch after this list)

    1. Install Ubuntu, Python and git
    2. Clone the code repo from git, or copy the local Python code to /app in the container
    3. Set WORKDIR to /app
    4. Install the packages in requirements.txt
    5. Expose the port for the Flask endpoint
    6. Define ENTRYPOINT as python main.py 9999
  6. Build docker image

  7. Run docker container

  8. Make an HTTP POST call with some data and receive the prediction back, using Postman or the Python requests library (see the requests sketch after this list).

  9. Push the docker container to docker registry / ship to production

  10. Install PIP requirements

    FYI: The code requires Python 3.6+ to run

    pip install -r requirements.txt
    
  11. Running API

    python main.py <port>
    
  12. Endpoints

    /predict (POST)

    Returns an array of predictions given a JSON array of objects representing the independent variables. Here's a sample input:

    [
        {"Age": 14, "Sex": "male", "Embarked": "S"},
        {"Age": 68, "Sex": "female", "Embarked": "C"},
        {"Age": 45, "Sex": "male", "Embarked": "C"},
        {"Age": 32, "Sex": "female", "Embarked": "S"}
    ]
    

    and sample output:

    {"prediction": [0, 1, 1, 0]}
    

    /train (GET)

    Trains the model. This is currently hard-coded to be a random forest model trained on a subset of columns of the Titanic dataset.

    /wipe (GET)

    Removes the trained model.
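
For steps 2 and 3 above, a sketch of how the model might be trained and serialised with joblib (the titanic.csv file name and the one-hot encoding are assumptions; pickle would work the same way):

    # train_and_serialise.py -- illustrative sketch for steps 2 and 3
    import pandas as pd
    import joblib
    from sklearn.ensemble import RandomForestClassifier

    # Assumed: a Titanic-style CSV with the columns used in step 3
    df = pd.read_csv("titanic.csv")[["Age", "Sex", "Embarked", "Survived"]].dropna()

    X = pd.get_dummies(df[["Age", "Sex", "Embarked"]])  # one-hot encode the categoricals
    y = df["Survived"]

    rf = RandomForestClassifier(n_estimators=100)
    rf.fit(X, y)

    # Step 3 (optional): remember the training column order on the estimator itself
    rf.columns = list(X.columns)

    # Step 2: serialise the fitted model
    joblib.dump(rf, "model.pkl")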
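
For step 5, a sample Dockerfile along those lines (base image, file layout and port are assumptions, not necessarily the repo's Dockerfile under docker-files/):

    # Dockerfile sketch matching the items in step 5
    FROM ubuntu:20.04

    # 1. Install Python and git
    RUN apt-get update && apt-get install -y python3 python3-pip git

    # 2. Copy the local Python code to /app (or clone the repo instead)
    COPY . /app

    # 3. Set the working directory
    WORKDIR /app

    # 4. Install the packages in requirements.txt
    RUN pip3 install -r requirements.txt

    # 5. Expose the port used by the Flask endpoint
    EXPOSE 9999

    # 6. Start the API
    ENTRYPOINT ["python3", "main.py", "9999"]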
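
For step 8, a call to the /predict endpoint with the Python requests library might look like this (host and port assume the docker run mapping shown in the Docker commands section below):

    # Sketch of a client call to /predict (step 8)
    import requests

    payload = [
        {"Age": 14, "Sex": "male", "Embarked": "S"},
        {"Age": 68, "Sex": "female", "Embarked": "C"},
        {"Age": 45, "Sex": "male", "Embarked": "C"},
        {"Age": 32, "Sex": "female", "Embarked": "S"},
    ]

    response = requests.post("http://localhost:9999/predict", json=payload)
    print(response.json())  # e.g. {"prediction": [0, 1, 1, 0]}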

Docker commands

Note: the Docker tag or ID should always be specified at the end of the docker command to avoid issues

  1. Build docker image from Dockerfile

    docker build -t "<app name>" -f docker-files/Dockerfile .
    e.g.: docker build -t "ml_app" -f docker-files/Dockerfile .

  2. Run the docker container after build

    docker run -p 9999:9999 ml_app # -p to make the port externally available for browsers

  3. Show all running containers

    docker ps

    a. Kill and remove running container

    docker rm <containerid> -f

  4. Open bash in a running docker container (optional)

    docker exec -ti <containerid> bash

  5. Docker entry point: the ENTRYPOINT specifies a command that will always be executed when the container starts. The CMD specifies arguments that will be fed to the ENTRYPOINT.

Docker has a default ENTRYPOINT, which is /bin/sh -c, but does not have a default CMD. Passing --entrypoint to docker run will override the default entry point, e.g. docker run -it --entrypoint /bin/bash <image>.
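
As an illustration of the ENTRYPOINT/CMD split (a sketch, not necessarily this repo's Dockerfile):

    # ENTRYPOINT is always executed; CMD supplies default arguments that can be
    # overridden at run time, e.g. docker run ml_app 8888
    ENTRYPOINT ["python3", "main.py"]
    CMD ["9999"]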

I want to connect from a container to a service on the host: the host has a changing IP address (or none, if you have no network access). From Docker 18.03 onwards the recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purposes and will not work in a production environment outside of Docker Desktop for Mac.

The gateway is also reachable as gateway.docker.internal.
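
For example, code running inside the container could reach a service listening on the host like this (the port and path are made up for illustration):

    import requests

    # host.docker.internal resolves to the host's internal IP (Docker Desktop, 18.03+)
    response = requests.get("http://host.docker.internal:8000/health")
    print(response.status_code)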

To avoid ModuleNotFoundError: No module named 'api'

add this line to the Dockerfile: ENV PYTHONPATH="$PYTHONPATH:/" and ensure that __init__.py exists in the necessary folders
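
A Dockerfile fragment showing that fix (the /app and /app/api paths are assumed from the layout in step 5):

    # Make the project root importable so `import api` resolves
    ENV PYTHONPATH="$PYTHONPATH:/"
    # __init__.py must exist in each package folder, e.g.
    RUN touch /app/__init__.py /app/api/__init__.py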
