License: Apache-2.0

mAP (mean Average Precision)

This code will evaluate the performance of your neural net for object recognition.

In practice, a higher mAP value indicates better performance of your neural net, given your ground-truth and set of classes.

Citation

This project was developed for the following paper; please consider citing it:

@INPROCEEDINGS{8594067,
  author={J. {Cartucho} and R. {Ventura} and M. {Veloso}},
  booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, 
  title={Robust Object Recognition Through Symbiotic Deep Learning In Mobile Robots}, 
  year={2018},
  pages={2336-2341},
}

Explanation

The performance of your neural net will be judged using the mAP criterion defined in the PASCAL VOC 2012 competition. We simply adapted the official Matlab code into Python (in our tests they both give the same results).

First (1.), we calculate the Average Precision (AP), for each of the classes present in the ground-truth. Finally (2.), we calculate the mAP (mean Average Precision) value.

1. Calculate AP

For each class:

First, your neural net's detection results are sorted by decreasing confidence and assigned to ground-truth objects. We have "a match" when they share the same label and an IoU >= 0.5 (Intersection over Union greater than 50%). A match is counted as a true positive only if that ground-truth object has not already been matched, to avoid rewarding multiple detections of the same object.
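The IoU test can be sketched as follows. This is a minimal illustration, not the repo's actual code: the function name is made up, and the +1 terms assume the inclusive pixel-coordinate convention used by PASCAL VOC. Boxes use the (left, top, right, bottom) format described later in this README.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (left, top, right, bottom)."""
    # Coordinates of the intersection rectangle
    left = max(box_a[0], box_b[0])
    top = max(box_a[1], box_b[1])
    right = min(box_a[2], box_b[2])
    bottom = min(box_a[3], box_b[3])
    if right < left or bottom < top:
        return 0.0  # the boxes do not overlap
    # +1 because coordinates are inclusive pixel indices (PASCAL VOC convention)
    inter = (right - left + 1) * (bottom - top + 1)
    area_a = (box_a[2] - box_a[0] + 1) * (box_a[3] - box_a[1] + 1)
    area_b = (box_b[2] - box_b[0] + 1) * (box_b[3] - box_b[1] + 1)
    return inter / float(area_a + area_b - inter)
```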

Using this criterion, we calculate the precision/recall curve. E.g.:

Then we compute a version of the measured precision/recall curve with precision monotonically decreasing (shown in light red), by setting the precision for recall r to the maximum precision obtained for any recall r' >= r.

Finally, we compute the AP as the area under this curve (shown in light blue) by numerical integration. No approximation is involved since the curve is piecewise constant.
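The two steps above (the monotonically decreasing precision envelope, then the exact area under the piecewise-constant curve) can be sketched as follows; this is an illustrative re-implementation, not necessarily the repo's actual function:

```python
def average_precision(recall, precision):
    """AP from a precision/recall curve, computed as the exact area under the
    piecewise-constant curve (no 11-point approximation).
    `recall` must be sorted in increasing order."""
    # Add sentinel values at recall 0 and 1
    mrec = [0.0] + list(recall) + [1.0]
    mpre = [0.0] + list(precision) + [0.0]
    # Make precision monotonically decreasing, sweeping from right to left
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    # Sum the areas of the rectangles between consecutive recall values
    ap = 0.0
    for i in range(1, len(mrec)):
        ap += (mrec[i] - mrec[i - 1]) * mpre[i]
    return ap
```

For example, a curve with a single point at recall 0.5 and precision 1.0 gives an AP of 0.5: full precision is carried back to recall 0, and precision drops to 0 past the last measured recall.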

2. Calculate mAP

We calculate the mean of all the APs, resulting in an mAP value from 0 to 100%. E.g.:
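As a sketch, with made-up per-class AP values (the function name is hypothetical):

```python
def mean_average_precision(ap_per_class):
    """mAP as the mean of the per-class AP values, expressed in percent."""
    return 100.0 * sum(ap_per_class.values()) / len(ap_per_class)

# Hypothetical per-class results
print(mean_average_precision({"tvmonitor": 0.90, "book": 0.60, "chair": 0.75}))  # 75.0
```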

Prerequisites

You need to install Python.

Optional:

  • plot the results by installing Matplotlib - Linux, macOS and Windows:
    1. python -m pip install -U pip
    2. python -m pip install -U matplotlib
  • show animation by installing OpenCV:
    1. python -m pip install -U pip
    2. python -m pip install -U opencv-python

Quick-start

To start using the mAP you need to clone the repo:

git clone https://github.com/Cartucho/mAP

Running the code

Step by step:

  1. Create the ground-truth files
  2. Copy the ground-truth files into the folder input/ground-truth/
  3. Create the detection-results files
  4. Copy the detection-results files into the folder input/detection-results/
  5. Run the code: python main.py

Optional (if you want to see the animation):

  1. Insert the images into the folder input/images-optional/
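The steps above can also be scripted. A minimal sketch that prepares the folder layout main.py expects (the source paths in the comments are placeholders, not real files):

```python
import os

# Folders that main.py reads from (relative to the repo root)
for sub in ("ground-truth", "detection-results", "images-optional"):
    os.makedirs(os.path.join("input", sub), exist_ok=True)

# Copy your files in, e.g. with shutil.copy (paths below are placeholders):
#   shutil.copy("my_labels/image_1.txt", "input/ground-truth/")
#   shutil.copy("my_predictions/image_1.txt", "input/detection-results/")
# Then run: python main.py
```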

PASCAL VOC, Darkflow and YOLO users

In the scripts/extra folder you can find additional scripts to convert PASCAL VOC, darkflow and YOLO files into the required format.

Create the ground-truth files

  • Create a separate ground-truth text file for each image.
  • Use matching names for the files (e.g. image: "image_1.jpg", ground-truth: "image_1.txt").
  • In these files, each line should be in the following format:
    <class_name> <left> <top> <right> <bottom> [<difficult>]
    
  • The difficult parameter is optional; use it if you want the calculation to ignore that specific ground-truth object.
  • E.g. "image_1.txt":
    tvmonitor 2 10 173 238
    book 439 157 556 241
    book 437 246 518 351 difficult
    pottedplant 272 190 316 259
    
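A minimal sketch of reading one line in this format (illustrative only; the function name is made up and this is not the repo's actual parser):

```python
def parse_ground_truth_line(line):
    """Parse '<class_name> <left> <top> <right> <bottom> [<difficult>]'."""
    parts = line.split()
    difficult = parts[-1] == "difficult"
    if difficult:
        parts = parts[:-1]
    left, top, right, bottom = (int(v) for v in parts[1:5])
    return {"class": parts[0], "bbox": (left, top, right, bottom),
            "difficult": difficult}

gt = parse_ground_truth_line("book 437 246 518 351 difficult")
```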

Create the detection-results files

  • Create a separate detection-results text file for each image.
  • Use matching names for the files (e.g. image: "image_1.jpg", detection-results: "image_1.txt").
  • In these files, each line should be in the following format:
    <class_name> <confidence> <left> <top> <right> <bottom>
    
  • E.g. "image_1.txt":
    tvmonitor 0.471781 0 13 174 244
    cup 0.414941 274 226 301 265
    book 0.460851 429 219 528 247
    chair 0.292345 0 199 88 436
    book 0.269833 433 260 506 336
    
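A matching sketch for detection-results lines; note the extra <confidence> field, which is what the evaluation sorts on (see the Explanation section). Again illustrative only, with a made-up function name:

```python
def parse_detection_line(line):
    """Parse '<class_name> <confidence> <left> <top> <right> <bottom>'."""
    parts = line.split()
    left, top, right, bottom = (int(v) for v in parts[2:6])
    return {"class": parts[0], "confidence": float(parts[1]),
            "bbox": (left, top, right, bottom)}

lines = [
    "tvmonitor 0.471781 0 13 174 244",
    "cup 0.414941 274 226 301 265",
    "book 0.460851 429 219 528 247",
]
# Sort by decreasing confidence, as the evaluation does
detections = sorted((parse_detection_line(l) for l in lines),
                    key=lambda d: d["confidence"], reverse=True)
print([d["class"] for d in detections])  # ['tvmonitor', 'book', 'cup']
```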

Authors:

  • João Cartucho

    Feel free to contribute

