
nathanrooy / rpi-urban-mobility-tracker

License: GPL-3.0
The easiest way to count pedestrians, cyclists, and vehicles on edge computing devices or live video feeds.

Programming Languages

Jupyter Notebook
11667 projects
Python
139335 projects - #7 most used programming language
Dockerfile
14818 projects

Projects that are alternatives to or similar to rpi-urban-mobility-tracker

smart-social-distancing
Social distancing detector using deep learning, capable of running on edge AI devices such as the NVIDIA Jetson, Google Coral, and more.
Stars: ✭ 129 (+72%)
Mutual labels:  edge-computing, edge-tpu, coral-tpu
yolo-deepsort-flask
Object detection and multi-target tracking platform based on YOLO, Deep SORT, and Flask.
Stars: ✭ 29 (-61.33%)
Mutual labels:  deep-sort, deepsort
deep sort realtime
A more real-time adaptation of Deep SORT.
Stars: ✭ 31 (-58.67%)
Mutual labels:  deepsort, deep-sort-tracking
Deep sort pytorch
MOT using Deep SORT and YOLOv3 with PyTorch.
Stars: ✭ 1,948 (+2497.33%)
Mutual labels:  deep-sort, deepsort
google-coral
Community gathering point for Google Coral dev board and dongle knowledge.
Stars: ✭ 81 (+8%)
Mutual labels:  edge-computing, tensorflow-lite
YOLOX deepsort tracker
Object tracking using YOLOX + Deep SORT.
Stars: ✭ 228 (+204%)
Mutual labels:  deep-sort, deepsort
zero-shot-object-tracking
Object tracking implemented with the Roboflow Inference API, DeepSort, and OpenAI CLIP.
Stars: ✭ 242 (+222.67%)
Mutual labels:  object-tracking, deep-sort
Yolov5-Deepsort
Object detection and tracking with the latest YOLOv5 + DeepSort; displays object classes and supports training your own dataset with version 5.0.
Stars: ✭ 201 (+168%)
Mutual labels:  object-tracking, deepsort
intruder-detector-python
Build an application that alerts you when someone enters a restricted area. Learn how to use models for multiclass object detection.
Stars: ✭ 16 (-78.67%)
Mutual labels:  edge-computing
glDelegateBench
A quick-and-dirty inference-time benchmark for the TFLite GLES delegate.
Stars: ✭ 17 (-77.33%)
Mutual labels:  tensorflow-lite
capture reid
Pedestrian detection (LFFD), tracking (Deep SORT), and person re-identification (ReID) on live camera feeds, recorded video, or static images.
Stars: ✭ 87 (+16%)
Mutual labels:  deep-sort
SiamFusion
No description or website provided.
Stars: ✭ 26 (-65.33%)
Mutual labels:  object-tracking
UniTrack
[NeurIPS'21] Unified tracking framework with a single appearance model. It supports Single Object Tracking (SOT), Video Object Segmentation (VOS), Multi-Object Tracking (MOT), Multi-Object Tracking and Segmentation (MOTS), Pose Tracking, Video Instance Segmentation (VIS), and class-agnostic MOT (e.g. TAO dataset).
Stars: ✭ 293 (+290.67%)
Mutual labels:  object-tracking
android tflite
GPU-accelerated TensorFlow Lite applications on the Android NDK: higher-accuracy face detection, age and gender estimation, human pose estimation, and artistic style transfer.
Stars: ✭ 105 (+40%)
Mutual labels:  tensorflow-lite
yolov5 deepsort tensorrt
This repo uses YOLOv5 and DeepSORT to implement an object tracking algorithm, uses TensorRTX to convert the model into an engine, and deploys everything on an NVIDIA Xavier with TensorRT.
Stars: ✭ 117 (+56%)
Mutual labels:  deepsort
faas-sim
A framework for trace-driven simulation of serverless Function-as-a-Service platforms
Stars: ✭ 33 (-56%)
Mutual labels:  edge-computing
nntrainer
NNtrainer is a software framework for training neural network models on devices.
Stars: ✭ 92 (+22.67%)
Mutual labels:  tensorflow-lite
objtrack
Implementations of commonly used object tracking algorithms.
Stars: ✭ 22 (-70.67%)
Mutual labels:  object-tracking
motor-defect-detector-python
Predict performance issues with manufacturing equipment motors. Perform local or cloud analytics of the issues found, and then display the data on a user interface to determine when failures might arise.
Stars: ✭ 24 (-68%)
Mutual labels:  edge-computing
object-size-detector-python
Monitor mechanical bolts as they move down a conveyor belt. When a bolt of an irregular size is detected, this solution emits an alert.
Stars: ✭ 26 (-65.33%)
Mutual labels:  edge-computing

Raspberry Pi Urban Mobility Tracker (DeepSORT + MobileNet)

The Raspberry Pi Urban Mobility Tracker is the simplest way to track and count pedestrians, cyclists, scooters, and vehicles. For more information, see the original blog post [here].

Hardware

Primary Components

  1. Raspberry Pi (ideally the 4 Model B)
  2. Raspberry Pi camera (ideally the v2 module)
  3. Google Coral USB Accelerator (not required, but strongly encouraged)

Secondary Components

  1. Ballhead mount: https://www.amazon.com/gp/product/B00DA38C3G
  2. Clear lens: https://www.amazon.com/gp/product/B079JW114G
  3. Weatherproof enclosure: https://www.amazon.com/gp/product/B005UPAN0W
  4. 30000mAh battery: https://www.amazon.com/gp/product/B01M5LKV4T

Notes

  • The mounts located in geometry/ are provided as STL files that are ready for 3D printing. I don't currently have a 3D printer, so I used the crowd-sourced printing service https://printathing.com/, which yielded great results (this sounds like a sales pitch, but I just like the service).
  • The original FreeCAD file is also included just in case you want to modify the geometry.
  • The only cutting necessary is through the plastic case to allow for the lens. This joint should then be sealed using silicone caulk to prevent any moisture from entering.
  • All the secondary components listed above are just suggestions that worked well for my build. Feel free to use whatever you want.
[Images: 3D printed mounts; mounts with attached hardware]
[Images: final setup (open); front (closed)]

Install (Raspberry Pi)

  1. UMT has been dockerized in order to minimize installation friction. Start off by installing Docker on your Raspberry Pi (or whatever device you plan on using). The instructions below assume a Raspberry Pi 4 with Raspberry Pi OS 2020-12-02. This is also a good time to add non-root users to the docker group. For example, to add the default Raspberry Pi user pi:
sudo usermod -aG docker pi
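Note that the group change only takes effect after logging out and back in. If Docker itself isn't installed yet, one common route on Raspberry Pi OS is Docker's convenience script (a sketch; check the official Docker documentation for your OS version):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh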
  2. Open a terminal and create a directory for the UMT output:
UMT_DIR=${HOME}/umt_output && mkdir -p ${UMT_DIR}
  3. Move into the new directory:
cd ${UMT_DIR}
  4. Download the Dockerfile and build it:
wget https://raw.githubusercontent.com/nathanrooy/rpi-urban-mobility-tracker/master/Dockerfile

docker build . -t umt
  5. Start the Docker container:
docker run --rm -it --privileged --mount type=bind,src=${UMT_DIR},dst=/root umt
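Here, --privileged gives the container access to hardware such as the Pi camera, and the bind mount exposes ${UMT_DIR} inside the container as /root, so anything written there persists on the host.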
  6. Test the install by downloading a video and running the tracker:
wget https://github.com/nathanrooy/rpi-urban-mobility-tracker/raw/master/data/videos/highway_01.mp4

umt -video highway_01.mp4

If everything worked correctly, you should see a directory named output/ filled with 10 annotated video frames.
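Since ${UMT_DIR} is bind-mounted into the container, the frames should also be visible from the host (assuming the container's working directory is the bind-mounted /root and the directory from step 2):

ls ${HOME}/umt_output/output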

Install (Ubuntu)

First, create and activate a new virtualenv, then install the TensorFlow Lite runtime package for Python.
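Creating the environment might look like this (a minimal sketch; the environment name umt-env is an arbitrary choice):

python3 -m venv umt-env
source umt-env/bin/activate

With the environment active, install the runtime: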

pip3 install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime

Then finish with the following:

pip install git+https://github.com/nathanrooy/rpi-urban-mobility-tracker

Lastly, test the install by running step #6 from the Raspberry Pi install instructions above.

Model Choice

The default deep learning model is MobileNet v1, which has been trained on the COCO dataset and quantized for faster performance on edge deployments. Another good choice is PedNet, also a quantized MobileNet v1, but optimized specifically for pedestrians, cyclists, and vehicles. To use PedNet, download or clone it from its repo: https://github.com/nathanrooy/ped-net

git clone https://github.com/nathanrooy/ped-net

Once the model and labels have been downloaded, use the -modelpath and -labelmap flags to specify a non-default model setup. For example:

umt -camera -modelpath pednet_20200326_tflite_graph.tflite -labelmap labels.txt
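To verify that a downloaded .tflite model loads correctly with the tflite_runtime package before pointing umt at it, here is a minimal Python sketch (the file name matches the PedNet example above):

from tflite_runtime.interpreter import Interpreter

# Load the quantized model and allocate its tensors
interpreter = Interpreter(model_path="pednet_20200326_tflite_graph.tflite")
interpreter.allocate_tensors()

# Inspect the expected input shape, e.g. [1, 300, 300, 3] for a 300x300 SSD
print(interpreter.get_input_details()[0]["shape"])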

Usage

Since this code is configured as a CLI, everything is accessible via the umt command in your terminal. To run using the Raspberry Pi camera (or a laptop camera) as the data source, run the following:

umt -camera

To run the tracker on an image sequence, append the -imageseq flag followed by a path to the images. Included in this repo are the first 300 frames from the MOT (Multiple Object Tracking) Challenge PETS09-S2L1 video. To use them, simply download/clone this repo and cd into the main directory.

umt -imageseq data/images/PETS09-S2L1/

To view the bounding boxes and tracking ability of the system, append the -display flag for a live feed. Note that this will greatly reduce the fps and is only recommended for testing purposes.

umt -imageseq data/images/PETS09-S2L1/ -display

By default, only the first 10 frames will be processed. To increase or decrease this value, append the -nframes flag followed by an integer value.

umt -imageseq data/images/PETS09-S2L1/ -display -nframes 20

To persist the image frames and detections, use the -save flag. Saved images are then available in the output/ directory.

umt -imageseq data/images/PETS09-S2L1/ -save -nframes 20

To run the tracker using a video file input, append the -video flag followed by a path to the video file. Included in this repo are two video clips of vehicle traffic.

umt -video data/videos/highway_01.mp4

In certain instances, you may want to override the default object detection threshold (default=0.5). To accomplish this, append the -threshold flag followed by a float value in the range [0, 1]. A value closer to one will yield fewer detections with higher certainty, while a value closer to zero will result in more detections with lower certainty. It's usually better to err on the side of lower certainty, since these objects can always be filtered out during post-processing.

umt -video data/videos/highway_01.mp4 -display -nframes 100 -threshold 0.4

To get the highest fps possible, append the -tpu flag to use the Coral USB Accelerator for inference.

umt -imageseq data/images/PETS09-S2L1/ -tpu
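The flags above can also be combined. For example, a hypothetical field deployment using the Pi camera, the Coral accelerator, and saved output for 600 frames might look like:

umt -camera -tpu -save -nframes 600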

References

@inproceedings{Wojke2017simple,
  title={Simple Online and Realtime Tracking with a Deep Association Metric},
  author={Wojke, Nicolai and Bewley, Alex and Paulus, Dietrich},
  booktitle={2017 IEEE International Conference on Image Processing (ICIP)},
  year={2017},
  pages={3645--3649},
  organization={IEEE},
  doi={10.1109/ICIP.2017.8296962}
}

@inproceedings{Wojke2018deep,
  title={Deep Cosine Metric Learning for Person Re-identification},
  author={Wojke, Nicolai and Bewley, Alex},
  booktitle={2018 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2018},
  pages={748--756},
  organization={IEEE},
  doi={10.1109/WACV.2018.00087}
}