DrewNF / Tensorflow_object_tracking_video

Licence: mit
Object Tracking in Tensorflow ( Localization Detection Classification ) developed to participate in the ImageNet VID competition


Projects that are alternatives of or similar to Tensorflow object tracking video

Rectlabel Support
RectLabel - An image annotation tool to label images for bounding box object detection and segmentation.
Stars: ✭ 338 (-31.16%)
Mutual labels:  object-detection, yolo, imagenet, detection
Caffe Model
Caffe models (including classification, detection and segmentation) and deploy files for famous networks
Stars: ✭ 1,258 (+156.21%)
Mutual labels:  classification, imagenet, inception, detection
Yolov5 ncnn
🍅 Deploy NCNN on mobile phones. Support Android and iOS. 移动端NCNN部署,支持Android与iOS。
Stars: ✭ 535 (+8.96%)
Mutual labels:  object-detection, yolo, detection
Android Yolo
Real-time object detection on Android using the YOLO network with TensorFlow
Stars: ✭ 604 (+23.01%)
Mutual labels:  object-detection, yolo, detection
Gfocal
Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection, NeurIPS2020
Stars: ✭ 376 (-23.42%)
Mutual labels:  object-detection, classification, detection
Pytorch Imagenet Cifar Coco Voc Training
Training examples and results for ImageNet(ILSVRC2012)/CIFAR100/COCO2017/VOC2007+VOC2012 datasets.Image Classification/Object Detection.Include ResNet/EfficientNet/VovNet/DarkNet/RegNet/RetinaNet/FCOS/CenterNet/YOLOv3.
Stars: ✭ 130 (-73.52%)
Mutual labels:  classification, imagenet, detection
Label Studio
Label Studio is a multi-type data labeling and annotation tool with standardized output format
Stars: ✭ 7,264 (+1379.43%)
Mutual labels:  dataset, yolo, imagenet
Yolo tensorflow
🚖 Object Detection (YOLOv1) implementation in tensorflow, with training, testing and video features.
Stars: ✭ 45 (-90.84%)
Mutual labels:  object-detection, classification, yolo
Yolo label
GUI for marking bounded boxes of objects in images for training neural network Yolo v3 and v2 https://github.com/AlexeyAB/darknet, https://github.com/pjreddie/darknet
Stars: ✭ 128 (-73.93%)
Mutual labels:  object-detection, yolo, detection
Map
mean Average Precision - This code evaluates the performance of your neural net for object recognition.
Stars: ✭ 2,324 (+373.32%)
Mutual labels:  object-detection, yolo, detection
Pine
🌲 Aimbot powered by real-time object detection with neural networks, GPU accelerated with Nvidia. Optimized for use with CS:GO.
Stars: ✭ 202 (-58.86%)
Mutual labels:  object-detection, yolo, detection
etiketai
Etiketai is an online tool designed to label images, useful for training AI models
Stars: ✭ 63 (-87.17%)
Mutual labels:  detection, yolo, imagenet
Tensornets
High level network definitions with pre-trained weights in TensorFlow
Stars: ✭ 982 (+100%)
Mutual labels:  object-detection, yolo, inception
Kaggle Rsna
Deep Learning for Automatic Pneumonia Detection, RSNA challenge
Stars: ✭ 74 (-84.93%)
Mutual labels:  object-detection, classification, detection
Caffe2 Ios
Caffe2 on iOS Real-time Demo. Test with Your Own Model and Photos.
Stars: ✭ 221 (-54.99%)
Mutual labels:  object-detection, classification, yolo
Tfjs Yolo Tiny
In-Browser Object Detection using Tiny YOLO on Tensorflow.js
Stars: ✭ 465 (-5.3%)
Mutual labels:  object-detection, yolo, detection
Lightnet
🌓 Bringing pjreddie's DarkNet out of the shadows #yolo
Stars: ✭ 322 (-34.42%)
Mutual labels:  object-detection, yolo
Vott
Visual Object Tagging Tool: An electron app for building end to end Object Detection Models from Images and Videos.
Stars: ✭ 3,684 (+650.31%)
Mutual labels:  object-detection, detection
Pytorch Randaugment
Unofficial PyTorch Reimplementation of RandAugment.
Stars: ✭ 323 (-34.22%)
Mutual labels:  classification, imagenet
Tianchi Medical Lungtumordetect
天池医疗AI大赛[第一季]:肺部结节智能诊断 UNet/VGG/Inception/ResNet/DenseNet
Stars: ✭ 314 (-36.05%)
Mutual labels:  classification, inception

Tensorflow_Object_Tracking_Video

(Version 0.3, Last Update 10-03-2017)


The project follows the index below:

  1. Introduction;
  2. Requirements & Installation;
  3. YOLO Script Usage
    1. Setting Parameters;
    2. Usage.
  4. VID TENSORBOX Script Usage
    1. Setting Parameters;
    2. Usage.
  5. TENSORBOX Tests Files;
  6. Dataset Scripts;
  7. Copyright;
  8. State of the Project;
  9. Downloads;
  10. Acknowledgements;
  11. Bibliography.

1.Introduction

This repository is my Master's thesis project, "Develop a Video Object Tracking with Tensorflow Technology", and it is still under development, so many updates will be made. In this work I used the architecture and problem-solving strategy of the paper T-CNN (arXiv), which won the ImageNet 2015 VID teaser challenge. The whole script architecture is therefore made of several components in cascade:

  1. Still Image Detection (returns tracking results on a single frame);
  2. Temporal Information Detection (introduces temporal information into the DET results);
  3. Context Information Detection (introduces context information into the DET results).

Notice that the Still Image Detection component can be a single block or be decomposed into two sub-components:

  1. First: determine "where" in the frame;
  2. Second: determine "what" in the frame.
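This two-step decomposition can be sketched as follows. The function names and return values below are hypothetical stand-ins; in the real pipeline a detector model answers "where" and a classifier model answers "what":

```python
# Sketch of the two-stage still-image detection: a localizer answers
# "where", a classifier answers "what". Both are toy stand-ins here.
def localize(frame):
    """Return candidate bounding boxes (x, y, w, h) — the 'where' step."""
    # a real detector (e.g. TENSORBOX) would run here
    return [(10, 20, 50, 80)]

def classify(frame, box):
    """Return a class label for one cropped box — the 'what' step."""
    # a real classifier (e.g. Inception) would run here
    return 'dog'

def still_image_detection(frame):
    """Chain the two sub-components over a single frame."""
    return [(box, classify(frame, box)) for box in localize(frame)]

print(still_image_detection('frame'))  # [((10, 20, 50, 80), 'dog')]
```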

My project makes use of many open-source TensorFlow projects.

2.Requirements & Installation

To install the script you only need to download the repository. To run the scripts you must have installed:

  • Tensorflow;
  • OpenCV;
  • Python;

All the necessary Python libraries can be installed easily through `pip install package-name`. If you want to follow a guide to installing the requirements, here is the link to a tutorial I wrote for myself and for a Deep Learning course at UPC.
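Assuming the standard PyPI package names (this project pre-dates current TensorFlow releases, so you may need to pin older versions), the installation could look like:

```shell
# Assumed PyPI package names; pin versions as needed for this
# TensorFlow 0.x-era project.
pip install tensorflow
pip install opencv-python
pip install numpy
```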

3.YOLO Script Usage

You only look once (YOLO) is a state-of-the-art, real-time object detection system.

i.Setting Parameters

These are the inline terminal arguments taken from the script. Most of them are not required; only the video path must be specified when calling the script:

  import argparse

  parser = argparse.ArgumentParser()
  parser.add_argument('--det_frames_folder', default='det_frames/', type=str)
  parser.add_argument('--det_result_folder', default='det_results/', type=str)
  parser.add_argument('--result_folder', default='summary_result/', type=str)
  parser.add_argument('--summary_file', default='results.txt', type=str)
  parser.add_argument('--output_name', default='output.mp4', type=str)
  parser.add_argument('--perc', default=5, type=int)
  parser.add_argument('--path_video', required=True, type=str)
  args = parser.parse_args()
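As a quick sanity check, the parser above can be exercised programmatically. A minimal sketch reproducing a subset of the arguments shows how the defaults resolve:

```python
import argparse

# Rebuild a subset of the script's parser to show how defaults resolve.
parser = argparse.ArgumentParser()
parser.add_argument('--perc', default=5, type=int)
parser.add_argument('--output_name', default='output.mp4', type=str)
parser.add_argument('--path_video', required=True, type=str)

# Only --path_video is required; everything else falls back to its default.
args = parser.parse_args(['--path_video', 'video.mp4'])
print(args.path_video, args.perc, args.output_name)  # video.mp4 5 output.mp4
```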

Now you have to download the weights for YOLO and put them into /YOLO_DET_Alg/weights/.

For more on YOLO, here you can find the original code (C implementation) and the paper.

ii.Usage

After setting the parameters, we can proceed and run the script:

  python VID_yolo.py --path_video video.mp4

You will see some terminal output reporting the detection progress.

You will see a real-time frame output, and then everything will be embedded into the video output. (I uploaded the first two tests I made to the folder /video_result; you can download them and take a look at the final result. The first one has problems with the frame order, which is why you will see so much flickering in the video; that problem was then solved, and the second shows no frame flickering.)

4.VID TENSORBOX Script Usage

i.Setting Parameters

These are the inline terminal arguments taken from the script, and most of them are not required. As before, only the video path must be specified when calling the script:

  import argparse

  parser = argparse.ArgumentParser()
  parser.add_argument('--output_name', default='output.mp4', type=str)
  parser.add_argument('--hypes', default='./hypes/overfeat_rezoom.json', type=str)
  parser.add_argument('--weights', default='./output/save.ckpt-1090000', type=str)
  parser.add_argument('--perc', default=2, type=int)
  parser.add_argument('--path_video', required=True, type=str)
  args = parser.parse_args()

I will soon put up a weights file for download. The training code and specs for the multiclass implementation will be added after the end of my thesis project.

ii.Usage

Download the .zip files linked in the Downloads section and replace the folders.

Then, after setting the parameters, we can proceed and run the script:

  python VID_tensorbox_multi_class.py --path_video video.mp4

5.Tensorbox Tests

In the folder video_result_OVT you can find the files resulting from runs of the VID TENSORBOX scripts.

6.Dataset Scripts

All the scripts below are for the VID classes, so if you want to adapt them to other classes you simply have to change the Classes.py file, where the correspondences between codes and names are defined. All the data on the images are made with respect to a specific image ratio; since TENSORBOX works only with 640x480 PNG images, you will have to change the code a little to adapt it to your needs. I provide four scripts:

  1. Process_Dataset_heavy.py: processes your dataset with a brute-force approach; you will obtain more bboxes and files for each class;
  2. Process_Dataset_lightweight.py: processes your dataset with a lightweight approach; you will obtain fewer bboxes and files for each class;
  3. Resize_Dataset.py: resizes your dataset to 640x480 PNG images;
  4. Test_Processed_Data.py: tests that the processing ended without errors.
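The resize step can be sketched with a simple nearest-neighbour scale. This is a toy stand-in, not the actual Resize_Dataset.py code, which would use an image library such as OpenCV:

```python
TARGET_W, TARGET_H = 640, 480  # TENSORBOX expects 640x480 PNG frames

def resize_nearest(pixels, out_w, out_h):
    """Nearest-neighbour resize of a 2D pixel grid (toy stand-in for
    the image-library resize the real script would perform)."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [[pixels[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

small = [[1, 2], [3, 4]]
big = resize_nearest(small, 4, 4)  # each source pixel becomes a 2x2 block
```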

I've also added some scripts to pre-process and prepare the dataset for training the last component, the Inception model; you can find them in a subfolder of the dataset scripts folder.
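The code-to-name correspondences kept in Classes.py can be pictured as a simple two-way mapping. The entries below are illustrative, not the file's actual contents:

```python
# Hypothetical sketch of the code <-> name correspondences defined in
# Classes.py; the synset codes shown here are illustrative examples.
CODE_TO_NAME = {
    'n02084071': 'dog',
    'n02121808': 'cat',
}
NAME_TO_CODE = {name: code for code, name in CODE_TO_NAME.items()}

def class_name(code):
    """Return the human-readable name for a VID class code."""
    return CODE_TO_NAME.get(code, 'unknown')

print(class_name('n02084071'))  # dog
```

Adapting the dataset scripts to a different dataset would then amount to swapping in that dataset's own code/name pairs.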

7.Copyright

According to the LICENSE file of the original code,

  • The original author and I hold no liability for any damages;
  • Do not use this commercially!

8.State of the Project

  • Supports the YOLO (single-class) DET algorithm;
  • Supports training ONLY for TENSORBOX and Inception;
  • USE OF TEMPORAL INFORMATION [retrieved through some post-processing algorithms I've implemented in the Utils_Video.py file; NOT TRAINABLE];
  • Modular architecture composed in cascade of: TENSORBOX (as general object detector), Tracker and Smoother, and Inception (as object classifier);
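One simple form such temporal post-processing can take (not the actual Utils_Video.py code, just an illustrative sketch) is a moving average over per-frame box coordinates, which damps frame-to-frame jitter:

```python
def smooth_boxes(boxes, window=3):
    """Moving-average smoothing of per-frame (x, y, w, h) boxes; one
    simple way to inject temporal information into per-frame detections."""
    smoothed = []
    for i in range(len(boxes)):
        chunk = boxes[max(0, i - window + 1):i + 1]
        smoothed.append(tuple(sum(b[k] for b in chunk) / len(chunk)
                              for k in range(4)))
    return smoothed

track = [(0, 0, 10, 10), (2, 2, 10, 10), (4, 4, 10, 10)]
print(smooth_boxes(track))  # jitter in x, y is averaged out
```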

9.Downloads

Below are the links to the weights files for Inception and TENSORBOX from my retraining experiments:

10.Acknowledgements

Thanks to Professors:

  • Elena Baralis from Politecnico di Torino Dipartimento di Automatica e Informatica;
  • Jordi Torres from BSC Department of Computer Science;
  • Xavi Giró-i-Nieto from UPC Department of Image Processing.

11.Bibliography

i.Course

ii.Classification

iii.Detection

iv.Tracking

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].