
doughtmw / HoloLens2-Machine-Learning

License: MIT
Using deep learning models for image classification directly on the HoloLens 2.

Programming Languages

C#

Projects that are alternatives to or similar to HoloLens2-Machine-Learning

Automl
Google Brain AutoML
Stars: ✭ 4,795 (+10323.91%)
Mutual labels:  efficientnet
ArUcoDetectionHoloLens-Unity
ArUco marker tracking on the HoloLens, implemented in Unity.
Stars: ✭ 79 (+71.74%)
Mutual labels:  hololens2
efficientnet-jax
EfficientNet, MobileNetV3, MobileNetV2, MixNet, etc in JAX w/ Flax Linen and Objax
Stars: ✭ 114 (+147.83%)
Mutual labels:  efficientnet
Pytorch Image Models
PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, EfficientNetV2, NFNet, Vision Transformer, MixNet, MobileNet-V3/V2, RegNet, DPN, CSPNet, and more
Stars: ✭ 15,232 (+33013.04%)
Mutual labels:  efficientnet
image embeddings
Using efficientnet to provide embeddings for retrieval
Stars: ✭ 107 (+132.61%)
Mutual labels:  efficientnet
awesome-computer-vision-models
A list of popular deep learning models related to classification, segmentation and detection problems
Stars: ✭ 419 (+810.87%)
Mutual labels:  efficientnet
flexible-yolov5
A more readable and flexible YOLOv5 with additional backbones (ResNet, ShuffleNet, MobileNet, EfficientNet, HRNet, Swin Transformer), extra modules (CBAM, DCN, and so on), and TensorRT support
Stars: ✭ 282 (+513.04%)
Mutual labels:  efficientnet
TensorMONK
A collection of deep learning models (PyTorch implementation)
Stars: ✭ 21 (-54.35%)
Mutual labels:  efficientnet
MixNet-PyTorch
A concise, modular, human-friendly PyTorch implementation of MixNet with pre-trained weights.
Stars: ✭ 16 (-65.22%)
Mutual labels:  efficientnet
efficientdet
PyTorch implementation of EfficientDet, a state-of-the-art model for object detection [pre-trained weights provided]
Stars: ✭ 21 (-54.35%)
Mutual labels:  efficientnet
Efficientnet
Implementation of EfficientNet model. Keras and TensorFlow Keras.
Stars: ✭ 1,920 (+4073.91%)
Mutual labels:  efficientnet
EfficientUNetPlusPlus
Decoder architecture based on the UNet++. Combining residual bottlenecks with depthwise convolutions and attention mechanisms, it outperforms the UNet++ in a coronary artery segmentation task, while being significantly more computationally efficient.
Stars: ✭ 37 (-19.57%)
Mutual labels:  efficientnet
food-detection-yolov5
🍔🍟🍗 Food analysis baseline with Theseus. Integrates object detection, image classification, and multi-class semantic segmentation. 🍞🍖🍕
Stars: ✭ 68 (+47.83%)
Mutual labels:  efficientnet
Yet Another Efficientdet Pytorch
A PyTorch re-implementation of the official EfficientDet, with SOTA real-time performance and pretrained weights.
Stars: ✭ 4,945 (+10650%)
Mutual labels:  efficientnet
make-a-little-progress-every-day
Learning never ends; push yourself to keep studying. Make a little progress every day: constant dripping wears away the stone, and persistence is what counts.
Stars: ✭ 75 (+63.04%)
Mutual labels:  hololens2
Segmentation models
Segmentation models with pretrained backbones. Keras and TensorFlow Keras.
Stars: ✭ 3,575 (+7671.74%)
Mutual labels:  efficientnet
LabAssistVision
Proof of concept of a mixed reality application for the Microsoft HoloLens 2 integrating object recognition using cloud computing and real-time on-device markerless object tracking. The augmented objects provide interaction using hand input, eye-gaze, and voice commands to verify the tracking result visually.
Stars: ✭ 70 (+52.17%)
Mutual labels:  hololens2
detectron2 backbone
detectron2 backbone: resnet18, efficientnet, hrnet, mobilenet v2, resnest, bifpn
Stars: ✭ 171 (+271.74%)
Mutual labels:  efficientnet
Ensemble-of-Multi-Scale-CNN-for-Dermatoscopy-Classification
Fully supervised binary classification of skin lesions from dermatoscopic images using an ensemble of diverse CNN architectures (EfficientNet-B6, Inception-V3, SEResNeXt-101, SENet-154, DenseNet-169) with multi-scale input.
Stars: ✭ 25 (-45.65%)
Mutual labels:  efficientnet
efficientnetv2.pytorch
PyTorch implementation of EfficientNetV2 family
Stars: ✭ 366 (+695.65%)
Mutual labels:  efficientnet

HoloLens-2-Machine-Learning

This sample uses the EfficientNetB0 model, trained on the 1000-class ImageNet dataset, for image classification. Model inference runs directly on the HoloLens 2 using its onboard CPU.
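
On-device inference in UWP apps like this one goes through the Windows ML (Windows.AI.MachineLearning) APIs. As a minimal sketch of that flow, the following shows loading an ONNX model from the packaged Assets folder and evaluating a single frame on the CPU; the asset path, feature names, and helper class are illustrative assumptions, not the exact code in this repository.

```csharp
using System;
using System.Threading.Tasks;
using Windows.AI.MachineLearning;   // WinML APIs (UWP)
using Windows.Media;                // VideoFrame
using Windows.Storage;

public class EfficientNetB0Inference
{
    private LearningModel _model;
    private LearningModelSession _session;

    // Load model.onnx from the packaged Assets folder (requires Content = True in the project).
    public async Task LoadModelAsync()
    {
        var modelFile = await StorageFile.GetFileFromApplicationUriAsync(
            new Uri("ms-appx:///Assets/model.onnx"));
        _model = await LearningModel.LoadFromStorageFileAsync(modelFile);

        // Run on the HoloLens 2 onboard CPU.
        var device = new LearningModelDevice(LearningModelDeviceKind.Cpu);
        _session = new LearningModelSession(_model, device);
    }

    // Evaluate a single video frame and return the raw output tensor.
    public async Task<TensorFloat> EvaluateAsync(VideoFrame frame)
    {
        var binding = new LearningModelBinding(_session);

        // Feature names depend on the exported ONNX graph; "input_1" / "output_1" are
        // placeholders and should be read from _model.InputFeatures / _model.OutputFeatures.
        binding.Bind("input_1", ImageFeatureValue.CreateFromVideoFrame(frame));

        var results = await _session.EvaluateAsync(binding, "frame");
        return results.Outputs["output_1"] as TensorFloat;
    }
}
```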

About

  • Optimal performance is achieved with version 19041 OS builds. This sample uses build 19041.1161 (Windows Holographic, version 20H2 - August 2021 Update), which can be downloaded from Microsoft via the following link and installed using the Advanced Recovery Companion
  • Tested with Unity 2019.4 LTS, Visual Studio 2019, and the HoloLens 2
  • Builds on the WinMLExperiments sample from Rene Schulte
  • Input video frames are resized to (224, 224) for online inference (see the preprocessing sketch after this list)
  • The pretrained TensorFlow-Keras implementation of EfficientNetB0 was converted directly to ONNX format for use in this sample
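
As a rough illustration of the (224, 224) input requirement, a captured camera frame can be converted and scaled into a fixed-size VideoFrame before binding. The sketch below shows one possible preprocessing path (SoftwareBitmap to BGRA8, then a scaled copy into a 224x224 frame); it is an assumption for illustration, not necessarily the exact approach used in this sample.

```csharp
using System.Threading.Tasks;
using Windows.Graphics.Imaging;   // SoftwareBitmap, BitmapPixelFormat
using Windows.Media;              // VideoFrame

public static class FramePreprocessing
{
    // Convert a captured SoftwareBitmap into a 224x224 BGRA8 VideoFrame for the model input.
    public static async Task<VideoFrame> ToModelInputAsync(SoftwareBitmap capturedBitmap)
    {
        // WinML image binding expects BGRA8; convert if the camera delivered NV12 or another format.
        SoftwareBitmap bgra = SoftwareBitmap.Convert(
            capturedBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Ignore);

        using (var sourceFrame = VideoFrame.CreateWithSoftwareBitmap(bgra))
        {
            // CopyToAsync converts and scales the source into the destination's 224x224 buffer.
            var resizedFrame = new VideoFrame(BitmapPixelFormat.Bgra8, 224, 224);
            await sourceFrame.CopyToAsync(resizedFrame);
            return resizedFrame;
        }
    }
}
```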

Run sample

  • Open sample in Unity
  • Switch the build platform to Universal Windows Platform, select HoloLens as the target device, and ARM64 as the target architecture
  • Build the Visual Studio project and open the generated .sln file
  • Copy the onnx-models\model.onnx file to the Builds\HoloLens-2-Machine-Learning\Assets folder
  • Import it into the Visual Studio project as an existing file and place it in the assets folder
  • In the asset properties window, confirm that the Content field has its boolean value set to True. This enables the ONNX model to be loaded at runtime from the Visual Studio assets folder

  • Build the sample in Release mode for ARM64 and deploy to the HoloLens 2 to test
  • Prediction labels are pulled from the parsed ImageNet labels .json file (which contains the 1000 ImageNet classes)
  • Output includes the predicted label, its associated probability, and the inference time in milliseconds (see the sketch after this list)
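
For context on the last two bullets, a hedged sketch of turning the raw output tensor into the reported values (label, probability, inference time) could look like the following. The label-file name, its JSON layout, and the explicit softmax step are assumptions for illustration, and the helper class reuses the hypothetical WinML wrapper sketched earlier in this README.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using Windows.AI.MachineLearning;
using Windows.Data.Json;
using Windows.Media;
using Windows.Storage;

public static class PredictionReporting
{
    // Assumed layout: a JSON object mapping class index ("0".."999") to label text.
    public static async Task<IList<string>> LoadLabelsAsync()
    {
        var file = await StorageFile.GetFileFromApplicationUriAsync(
            new Uri("ms-appx:///Assets/imagenet_labels.json"));   // hypothetical file name
        var json = JsonObject.Parse(await FileIO.ReadTextAsync(file));
        return Enumerable.Range(0, 1000)
            .Select(i => json.GetNamedString(i.ToString()))
            .ToList();
    }

    // Report the top-1 label, its probability, and the inference time in milliseconds.
    public static async Task<string> PredictAsync(
        EfficientNetB0Inference model, VideoFrame frame, IList<string> labels)
    {
        var stopwatch = Stopwatch.StartNew();
        TensorFloat output = await model.EvaluateAsync(frame);   // hypothetical helper from the earlier sketch
        stopwatch.Stop();

        // Softmax over the 1000 logits, then take the argmax.
        float[] logits = output.GetAsVectorView().ToArray();
        double[] exps = logits.Select(v => Math.Exp(v)).ToArray();
        double sum = exps.Sum();
        int best = Array.IndexOf(exps, exps.Max());

        return $"{labels[best]} ({exps[best] / sum:P1}) in {stopwatch.ElapsedMilliseconds} ms";
    }
}
```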

Model conversion to ONNX

  • See the sample conversion from the official TensorFlow EfficientNet weights to ONNX format in the README file