
microsoft / Microsoft Rocket Video Analytics Platform

License: MIT
A highly extensible software stack to empower everyone to build practical real-world live video analytics applications for object detection and counting with cutting edge machine learning algorithms.

Projects that are alternatives to, or similar to, Microsoft Rocket Video Analytics Platform

Trainyourownyolo
Train a state-of-the-art yolov3 object detector from scratch!
Stars: ✭ 399 (+146.3%)
Mutual labels:  object-detection, gpu, yolov3
Fastmot
High-performance multiple object tracking based on YOLO, Deep SORT, and optical flow
Stars: ✭ 284 (+75.31%)
Mutual labels:  object-detection, yolov3, edge-computing
Yolov3 tensorflow
Complete YOLO v3 TensorFlow implementation. Support training on your own dataset.
Stars: ✭ 1,498 (+824.69%)
Mutual labels:  object-detection, yolov3
Tensorflow Yolov4 Tflite
YOLOv4, YOLOv4-tiny, YOLOv3, YOLOv3-tiny implemented in TensorFlow 2.0 and Android. Convert YOLO v4 .weights to TensorFlow, TensorRT, and TFLite
Stars: ✭ 1,881 (+1061.11%)
Mutual labels:  object-detection, yolov3
Yolo V3 Iou
YOLOv3 anime face detection (based on Keras and TensorFlow), 2019-1-19
Stars: ✭ 116 (-28.4%)
Mutual labels:  object-detection, yolov3
Yolov3 Model Pruning
Model pruning (network slimming) of YOLOv3 on the Oxford Hand dataset
Stars: ✭ 1,386 (+755.56%)
Mutual labels:  object-detection, yolov3
Tensorflow2.0 Examples
🙄 Difficult algorithm, Simple code.
Stars: ✭ 1,397 (+762.35%)
Mutual labels:  object-detection, yolov3
Tensorflow Object Detection Tutorial
The purpose of this tutorial is to learn how to install and prepare TensorFlow framework to train your own convolutional neural network object detection classifier for multiple objects, starting from scratch
Stars: ✭ 113 (-30.25%)
Mutual labels:  object-detection, gpu
Aspnetboilerplate Core Ng
Tutorial for ASP.NET Boilerplate Core + Angular
Stars: ✭ 61 (-62.35%)
Mutual labels:  azure, dotnet-core
Spark
.NET for Apache® Spark™ makes Apache Spark™ easily accessible to .NET developers.
Stars: ✭ 1,721 (+962.35%)
Mutual labels:  azure, dotnet-core
Yolo label
GUI for marking bounded boxes of objects in images for training neural network Yolo v3 and v2 https://github.com/AlexeyAB/darknet, https://github.com/pjreddie/darknet
Stars: ✭ 128 (-20.99%)
Mutual labels:  object-detection, yolov3
Orleans.clustering.kubernetes
Orleans Membership provider for Kubernetes
Stars: ✭ 140 (-13.58%)
Mutual labels:  azure, dotnet-core
Azurestorageexplorer
☁💾 Manage your Azure Storage blobs, tables, queues and file shares from this simple and intuitive web application.
Stars: ✭ 88 (-45.68%)
Mutual labels:  azure, dotnet-core
Fastai
R interface to fast.ai
Stars: ✭ 85 (-47.53%)
Mutual labels:  object-detection, gpu
Yolov5
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Stars: ✭ 19,914 (+12192.59%)
Mutual labels:  object-detection, yolov3
People Counter Python
Create a smart video application using the Intel Distribution of OpenVINO toolkit. The toolkit uses models and inference to run single-class object detection.
Stars: ✭ 62 (-61.73%)
Mutual labels:  object-detection, edge-computing
Mobilenet Yolo
MobileNetV2-YoloV3-Nano: 0.5BFlops 3MB HUAWEI P40: 6ms/img, YoloFace-500k:0.1Bflops 420KB🔥🔥🔥
Stars: ✭ 1,566 (+866.67%)
Mutual labels:  object-detection, yolov3
Bmw Labeltool Lite
This repository provides an easy-to-use labeling tool for state-of-the-art deep learning training purposes.
Stars: ✭ 145 (-10.49%)
Mutual labels:  object-detection, yolov3
Computervision Recipes
Best Practices, code samples, and documentation for Computer Vision.
Stars: ✭ 8,214 (+4970.37%)
Mutual labels:  object-detection, azure
Developing Solutions Azure Exam
This repository contains resources for the Exam AZ-203: Developing Solutions for Microsoft Azure. You can find direct links to resources and practice materials to test yourself ☁️🎓📚
Stars: ✭ 59 (-63.58%)
Mutual labels:  azure, dotnet-core

Microsoft Rocket Video Analytics Platform

A highly extensible software stack to empower everyone to build practical real-world live video analytics applications for object detection and counting/alerting with cutting-edge machine learning algorithms. The repository features a hybrid edge-cloud video analytics pipeline (built on C# .NET Core) that supports plugging in TensorFlow DNN models, GPU/FPGA acceleration, Docker containerization/Kubernetes orchestration, and interactive querying for after-the-fact analysis. A brief summary of the Rocket platform can be found in 📝Rocket-features-and-pipelines.pdf.

Feel free to check out our 📝webinar on Rocket from Dec 2019.

How to run the code

Step 1: Set up environment

Setup on Windows

  • Microsoft Visual Studio (VS 2017 is preferred) is the recommended IDE for Rocket on Windows 10. While installing Visual Studio, please also add the C++ 2015.3 v14.00 (v140) toolset to your machine. The snapshot below shows how to include C++ 2015.3 v14.00 from the Visual Studio Installer.
    (screenshot: selecting the C++ 2015.3 v14.00 (v140) toolset in the Visual Studio Installer)

  • Follow instructions to install .NET Core 2.2 (2.2.102 is preferred).

  • To enable GPU support, install CUDA Toolkit and cuDNN. Please also make sure your NVIDIA driver is up-to-date.

    • CUDA 8.0 (e.g., cuda_8.0.61_win10_network.exe) is needed for Darknet (e.g., YOLO) models.

      After installation, please make sure files in C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\extras\visual_studio_integration\MSBuildExtensions are copied to C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\VC\VCTargets\BuildCustomizations

    • CUDA 9.1 (e.g., cuda_9.1.85_win10_network.exe) is needed to support TensorFlow models.

    • cuDNN v7 is preferred (e.g., cudnn-8.0-windows10-x64-v7.2.1.38.zip).

      Copy <installpath>\cuda\bin\cudnn64_7.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin.
      Copy <installpath>\cuda\include\cudnn.h to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\include.
      Copy <installpath>\cuda\lib\x64\cudnn.lib to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\lib\x64.
      Add Variable Name: CUDA_PATH with Variable Value: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0 to the Environment Variables (a command-line sketch of these steps appears after this list).
      (screenshot: CUDA_PATH environment variable)

    • Restart your computer after installing CUDA and cuDNN.
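    If you prefer to script the cuDNN steps above, a minimal Command Prompt sketch is shown below. It assumes the cuDNN archive was unzipped to C:\tools\cudnn (a hypothetical path) and that CUDA v8.0 is installed in its default location; run it from an elevated prompt.

      copy "C:\tools\cudnn\cuda\bin\cudnn64_7.dll" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin"
      copy "C:\tools\cudnn\cuda\include\cudnn.h" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\include"
      copy "C:\tools\cudnn\cuda\lib\x64\cudnn.lib" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\lib\x64"
      rem Make CUDA_PATH visible to later builds; open a new console afterwards.
      setx CUDA_PATH "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0"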

Setup on Linux

Docker is recommended for running Rocket on Linux. Below we use Ubuntu 16.04 as an example to walk through building the Rocket Docker image and running it with GPU acceleration.

  • Install .NET Core 2.2 SDK (2.2.301 is preferred).
  • Install docker-ce (version 18.09.7 is preferred).
  • Install NVIDIA driver based on your GPU model (e.g., 418.67 for Tesla GPU).
  • Install nvidia-docker2. The NVIDIA Container Toolkit allows users to build and run GPU-accelerated Docker containers. Note that you do NOT need to install the CUDA toolkit on the host, but the GPU driver does need to be installed (a quick sanity check for GPU passthrough is sketched after this list).
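Once these pieces are installed, you can sanity-check the setup before building anything. The CUDA base-image tag below is an assumption; substitute whichever nvidia/cuda tag matches your driver.

    dotnet --version                                                   # should report a 2.2.x SDK
    docker run --runtime=nvidia --rm nvidia/cuda:9.1-base nvidia-smi   # the GPU should be listed inside the container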

Step 2: Run the pipeline

Check out the repository.

Prepare video feeds and line configuration

  • Prepare video feeds. Rocket can be fed with either live video streams (e.g., rtsp://<url>:<port>/) or local video files (should be put into \media\). A sample video file sample.mp4 is already included in \media\.
  • Prepare a configuration file (placed in \cfg\) used for line-based counting/alerting and cascaded DNN calls. Each line in the file defines a line of interest with the format below.
    <line_name> <line_id> <x_1> <y_1> <x_2> <y_2> <overlap_threshold>
    A line configuration file sample.txt, manually created for sample.mp4, is also included in \cfg\ (screenshot: sample line drawn on sample.mp4). An illustrative entry is shown after this list.
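For reference, a single entry in such a file might look like the one below. The values are purely illustrative (a roughly horizontal line across a 1280x720 frame with a 0.4 overlap threshold) and are not taken from the repository's sample.txt.

    driveway_line 0 100 400 1100 400 0.4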

Build on Windows

  • Run Config.bat before the first time you run Rocket to download pre-compiled OpenCV and TensorFlow binaries as well as Darknet YOLO weights files. It may take a few minutes depending on your network connection. Proceed only when all downloads have finished. YOLOv3 and Tiny YOLOv3 are already included in Rocket; you can plug in other YOLO models as you wish.

  • Launch VAP.sln in src\VAP\ from Visual Studio.

  • Set the pipeline configuration PplConfig in VideoPipelineCore - App.config. Six configurations are pre-defined in the code (a hedged App.config sketch appears after this list); pipeline descriptions are also included in 📝Rocket-features-and-pipelines.pdf.

    • 0: Line-based counting
    • 1: Darknet Yolo v3 on every frame (slide #7)
    • 2: TensorFlow FastRCNN on every frame (slide #8)
    • 3: Background subtraction-based (BGS) early filtering -> Darknet Tiny Yolo -> Darknet Yolo v3 (slide #9)
    • 4: BGS early filtering -> Darknet Tiny Yolo -> Database (ArangoDB and blob storage on Azure) (slide #10)
    • 5: BGS early filtering -> TensorFlow Fast R-CNN -> Azure Machine Learning (cloud) (slide #11)
  • (Optional) Set up your own database and Azure Machine Learning service if PplConfig is set to 4 or 5.

    • Azure Database:
      • Deploy a SQL database (e.g., MySQL) or a NoSQL database (e.g., ArangoDB) on Azure by creating a VM.
      • Supply the database settings (e.g., server name, user name, credentials) to Rocket in App.Config.
      • You can also set up cloud storage (e.g., Azure Blob Storage) to store images/videos. In pipeline 4, Rocket sends detection images to an Azure storage account and metadata to an Azure database.
      (screenshot: database and storage settings in App.Config)
    • Azure Machine Learning:
      • Deploy your deep learning models to Azure (e.g., using Azure Kubernetes Service or AKS) for inference with GPU or FPGA.
      • After deploying your model as a web service, provide the host URL, key, and service ID to VAP in App.Config. Rocket will handle the communication between local modules and the cloud service.
      (screenshot: Azure Machine Learning settings in App.Config)
  • Build the solution.

  • Run the code.

    • Using Visual Studio: set VideoPipelineCore - Properties - Debug - Application arguments to <video_file/camera_url> <line_detection_config_file> <sampling_factor> <resolution_factor> <object_category>. To run Rocket on the sample video, for example, the arguments can be set to sample.mp4 sample.txt 1 1 car.
    • Using Command Line (CMD or PowerShell): run dotnet .\VideoPipelineCore.dll <video_file/camera_url> <line_detection_config_file> <sampling_factor> <resolution_factor> <object_category> in \src\VAP\VideoPipelineCore\bin\Debug\netcoreapp2.2. For instance, dotnet .\VideoPipelineCore.dll sample.mp4 sample.txt 1 1 car.
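For reference, a minimal sketch of the relevant App.config fragment is shown below. Only the PplConfig key comes from the steps above; the comments about database and Azure Machine Learning settings are assumptions about where those values live, so check the App.config shipped in the repository for the exact key names.

    <configuration>
      <appSettings>
        <!-- 0-5 selects one of the six pre-defined pipelines described above -->
        <add key="PplConfig" value="3" />
        <!-- database/blob-storage settings for pipeline 4 and the Azure Machine
             Learning host URL, key, and service ID for pipeline 5 also live in
             this file -->
      </appSettings>
    </configuration>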

Build on Linux

We have pre-built a Rocket Docker image from the docker branch with local processing only (slide #12 without the cloud parts). The image is hosted on Docker Hub, a public library and community for container images; you will be asked to log in before pulling or pushing images (sign up first if you don't have an account).

To test the pre-built Rocket image, run
docker pull ycshu086/rocket-sample-edgeonly:0.1

Once pulled, run the command below to start Rocket with NVIDIA GPU.
docker run --runtime=nvidia -v <local directory>:/app/output ycshu086/rocket-sample-edgeonly:0.1 sample.mp4 sample.txt 1 1 car

  • Build your own Rocket pipeline on Linux

    • Pull the base Docker image with the CUDA toolkit and OpenCV. This image is needed to build the Rocket Docker image.
      docker pull ycshu086/ubuntu-dotnetcore-opencv-opencvsharp-cuda-cudnn:<version>
    • Clone the docker branch for the source code used to dockerize Rocket on Linux.
    • Create line configuration file(s) inside \cfg. If you are running Rocket on a pre-recorded video, please also copy the video file into \media.
    • (Optional) Update \src\VAP\VideoPipelineCore\App.Config to set proper parameters for database and Azure Machine Learning service connection.
    • Run sudo chmod 744 Config.sh and sudo ./Config.sh before the first time you build the Rocket image to download pre-compiled TensorFlow binaries.
    • Run docker build to build the Rocket image using Dockerfile.VAP (an end-to-end example with concrete values appears after this list).
      docker build -t <repository>/<image>:<version> -f Dockerfile.VAP .
    • (Optional) Push the Rocket image to a cloud registry (e.g., Docker Hub, Azure Container Registry) if you need to run it somewhere else.
      docker push <repository>/<image>:<version>
  • Run Rocket image on Linux

    • Pull a pre-built Rocket docker image to the local machine. You can use docker images to check existing images.
      docker pull <repository>/<image>:<version>

    • Mount volume into the container and run Rocket image with NVIDIA GPU.
      docker run --runtime=nvidia -v <local directory>:/app/output <repository>/<image>:<version> sample.mp4 sample.txt 1 1 car
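Putting the placeholders together, an end-to-end sequence might look like the following. The registry/image name myregistry/rocket-vap, the tag 0.1, and the host directory /data/rocket-output are hypothetical examples.

    docker build -t myregistry/rocket-vap:0.1 -f Dockerfile.VAP .
    docker push myregistry/rocket-vap:0.1
    # on the machine that will run the pipeline
    docker pull myregistry/rocket-vap:0.1
    docker run --runtime=nvidia -v /data/rocket-output:/app/output myregistry/rocket-vap:0.1 sample.mp4 sample.txt 1 1 car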

Step 3: Results

Output images are generated in folders under \src\VAP\VideoPipelineCore\bin\ (Windows) or in the local directory you mount during docker run (Linux). Results from different modules are sent to different directories (e.g., output_bgsline for the background subtraction-based detector), whereas output_all contains images from all modules. The name of each file consists of the frame ID, module name, and confidence score. Below are a few sample results from running pipeline 3 and pipeline 5 on sample.mp4; you should also see results printed to the console while the pipeline runs.

(sample output images from pipelines 3 and 5)

The illustration for pipeline 3 shows that at frame 2679, background subtraction detected an object, the Tiny YOLO DNN confirmed it was a car with a confidence of 0.24, and the heavier YOLOv3 confirmed it with a confidence of 0.92. Likewise, for pipeline 5 the TensorFlow Fast R-CNN model reported a confidence of 0.55 and Azure ML (in the cloud) came back with a confidence of 0.76 for the same object.
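As a purely hypothetical illustration of that naming scheme, the frame-2679 detection described above might produce files along these lines; the exact format used by the code may differ.

    output_all\2679-TinyYolo-0.24.jpg
    output_all\2679-Yolo-0.92.jpg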
