
elliotwaite / pytorch-to-javascript-with-onnx-js

License: MIT License
Run PyTorch models in the browser using ONNX.js

Programming Languages

python
javascript
HTML
CSS

Projects that are alternatives of or similar to pytorch-to-javascript-with-onnx-js

pytorch2keras
PyTorch to Keras model converter
Stars: ✭ 788 (+386.42%)
Mutual labels:  pytorch-models, onnx
ai-serving
Serving AI/ML models in the open standard formats PMML and ONNX with both HTTP (REST API) and gRPC endpoints
Stars: ✭ 122 (-24.69%)
Mutual labels:  onnx, onnx-models
lego-art-remix
Powerful computer vision assisted Lego mosaic creator · Over 500,000 images created (so far!)
Stars: ✭ 148 (-8.64%)
Mutual labels:  onnx
AI-LAB
This repository contains a docker image that I use to develop my artificial intelligence applications in an uncomplicated fashion. Python, TensorFlow, PyTorch, ONNX, Keras, OpenCV, TensorRT, Numpy, Jupyter notebook... 🐋🔥
Stars: ✭ 44 (-72.84%)
Mutual labels:  onnx
optimum
🏎️ Accelerate training and inference of 🤗 Transformers with easy to use hardware optimization tools
Stars: ✭ 567 (+250%)
Mutual labels:  onnx
Fast Stacked Hourglass Network OpenVino
A fast stacked hourglass network for human pose estimation on OpenVino
Stars: ✭ 52 (-67.9%)
Mutual labels:  onnx
ai-deployment
Focused on putting AI models into production and on model deployment
Stars: ✭ 149 (-8.02%)
Mutual labels:  onnx
ONNX-HITNET-Stereo-Depth-estimation
Python scripts for performing stereo depth estimation using the HITNET model in ONNX.
Stars: ✭ 21 (-87.04%)
Mutual labels:  onnx
serving-runtime
Exposes a serialized machine learning model through an HTTP API.
Stars: ✭ 15 (-90.74%)
Mutual labels:  onnx
person-detection
TensorRT person tracking RFBNet300
Stars: ✭ 30 (-81.48%)
Mutual labels:  onnx
ONNX-Runtime-with-TensorRT-and-OpenVINO
Docker scripts for building ONNX Runtime with TensorRT and OpenVINO in manylinux environment
Stars: ✭ 15 (-90.74%)
Mutual labels:  onnx
wonnx
A GPU-accelerated ONNX inference run-time written 100% in Rust, ready for the web
Stars: ✭ 160 (-1.23%)
Mutual labels:  onnx
ONNX-ImageNet-1K-Object-Detector
Python scripts for performing object detection with the 1000 labels of the ImageNet dataset in ONNX. The repository combines a class agnostic object localizer to first detect the objects in the image, and next a ResNet50 model trained on ImageNet is used to label each box.
Stars: ✭ 18 (-88.89%)
Mutual labels:  onnx
YOLOX
YOLOX is a high-performance anchor-free YOLO, exceeding yolov3~v5 with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documentation: https://yolox.readthedocs.io/
Stars: ✭ 6,570 (+3955.56%)
Mutual labels:  onnx
ONNX-Mobile-Human-Pose-3D
Python scripts for performing 3D human pose estimation using the Mobile Human Pose model in ONNX.
Stars: ✭ 69 (-57.41%)
Mutual labels:  onnx
vs-mlrt
Efficient ML Filter Runtimes for VapourSynth (with built-in support for waifu2x, DPIR, RealESRGANv2, and Real-CUGAN)
Stars: ✭ 34 (-79.01%)
Mutual labels:  onnx
model-zoo-old
The ONNX Model Zoo is a collection of pre-trained, state-of-the-art deep learning models, available in the ONNX format
Stars: ✭ 38 (-76.54%)
Mutual labels:  onnx
aws-lambda-docker-serverless-inference
Serve scikit-learn, XGBoost, TensorFlow, and PyTorch models with AWS Lambda container images support.
Stars: ✭ 56 (-65.43%)
Mutual labels:  pytorch-models
sparsify
Easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint
Stars: ✭ 138 (-14.81%)
Mutual labels:  onnx
onnx2tensorRt
TensorRT inference with darknet2onnx, pytorch2onnx, and mxnet2onnx converters (Python version)
Stars: ✭ 14 (-91.36%)
Mutual labels:  onnx

Run PyTorch models in the browser using ONNX.js

Run PyTorch models in the browser with JavaScript by first converting your PyTorch model into the ONNX format and then loading that ONNX model in your website or app using ONNX.js. In the video tutorial below, I take you through this process using the demo example of a handwritten digit recognition model trained on the MNIST dataset.
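
The conversion step itself comes down to a single call to torch.onnx.export. Below is a minimal sketch of what a script like convert_to_onnx.py might look like; the MnistModel architecture and the export arguments here are illustrative assumptions, not this repo's exact code.

import torch
from torch import nn

# Hypothetical stand-in for the model defined in inference_mnist_model.py.
class MnistModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, 10),
        )

    def forward(self, x):
        return self.net(x)

model = MnistModel()
model.load_state_dict(torch.load('pytorch_model.pt'))
model.eval()

# ONNX export works by tracing the model with a dummy input, so the
# input only needs the right shape: a batch of one 1x28x28 grayscale image.
dummy_input = torch.zeros(1, 1, 28, 28)
torch.onnx.export(model, dummy_input, 'onnx_model.onnx')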

Tutorial

https://www.youtube.com/watch?v=Vs730jsRgO8

Live Demo and Code Sandbox

The files in this repo (and a description of what they do)

├── debug_demo
│   ├── debug.html (A debug test to make sure the generated ONNX model works.
│   │               Uses ONNX.js to load and run the generated ONNX model. A
│   │               Python-side check is also sketched after this file tree.)
│   │ 
│   └── onnx_model.onnx (A copy of the generated ONNX model that will be loaded
│                        for debugging.)
│
├── full_demo
│   ├── index.html (The full demo's HTML code.)
│   │ 
│   ├── onnx_model.onnx (A copy of the generated ONNX model. Used by script.js.)
│   │ 
│   ├── script.js (The full demo's JS code. Loads the onnx_model.onnx and 
│   │              predicts the drawn numbers.)
│   │ 
│   └── style.css (The full demo's CSS.)
│                            
├── convert_to_onnx.py (Converts a trained PyTorch model into an ONNX model.)
│
├── inference_mnist_model.py (The PyTorch model description. Used by
│                             convert_to_onnx.py to generate the ONNX model.)
│                             
├── inputs_batch_preview.png (A preview of a batch of augmented input data. 
│                             Generated by preview_mnist_dataset.py.)
│
├── onnx_model.onnx (The ONNX model generated by convert_to_onnx.py.)
│
├── preview_mnist_dataset.py (For testing out different types of data 
│                             augmentation.)
│
├── pytorch_model.pt (The trained PyTorch model parameters. Generated by 
│                     train_mnist_model.py and used by convert_to_onnx.py to
│                     generate the ONNX model.)
│
└── train_mnist_model.py (Trains the PyTorch model and saves the trained 
                          parameters as pytorch_model.pt.)
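
This repo debugs the exported model in the browser (debug_demo/debug.html), but you can also sanity-check it from Python before touching any JavaScript. The sketch below assumes the onnxruntime package is installed; it is not a dependency of this repo.

import numpy as np
import onnxruntime as ort  # assumed installed; not part of this repo

# Load the exported graph and run one dummy inference to confirm it is valid.
session = ort.InferenceSession('onnx_model.onnx',
                               providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name
dummy_input = np.zeros((1, 1, 28, 28), dtype=np.float32)
outputs = session.run(None, {input_name: dummy_input})
print('output shape:', outputs[0].shape)  # expect (1, 10): one score per digit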

The benefits of running a model in the browser:

  • Faster inference times with smaller models.
  • Easy to host and scale (only static files).
  • Offline support.
  • User privacy (can keep the data on the device).

The benefits of using a backend server:

  • Faster load times (don't have to download the model).
  • Faster and more consistent inference times with larger models (can take advantage of GPUs and other accelerators).
  • Model privacy (don't have to share your model if you want to keep it private).

License

MIT
