Parallel mAP_evaluation

This repo parallelizes mAP_evaluation using python's multiprocessing module.

As you may know, in Lyft's 3D object detection challenge, the evaluation metric mAP is calculated as the mean of the APs at IoU thresholds 0.5, 0.55, 0.6, ..., 0.95 (see here). Looping over these 10 thresholds one by one can be a time-consuming process. Here's how it looks when you do so:

Only one hyperthread is fully utilized; the rest sit idle.

In this repo, you can find a parallelized implementation of mAP evaluation (mAP_evaluation.py) that uses Python's built-in multiprocessing module to compute the APs for all 10 IoU thresholds in parallel. Here's how it looks with the parallelized version:

The parallel implementation is ~10x faster than the for-loop implementation.
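The idea can be sketched as follows. This is a minimal illustration using multiprocessing.Pool, where `average_precision_at_iou` is a hypothetical stand-in for the repo's real per-threshold AP computation, not the actual function from mAP_evaluation.py:

```python
import multiprocessing as mp

import numpy as np


def average_precision_at_iou(iou_threshold):
    # Hypothetical stand-in: the real code matches predicted boxes to
    # ground-truth boxes at the given IoU threshold and computes the AP.
    return max(0.0, 1.0 - iou_threshold)


def parallel_map_evaluation(iou_thresholds):
    # One worker per threshold, so all 10 APs are computed at once
    # instead of looping over the thresholds sequentially.
    with mp.Pool(processes=len(iou_thresholds)) as pool:
        aps = pool.map(average_precision_at_iou, iou_thresholds)
    return float(np.mean(aps))


if __name__ == "__main__":
    thresholds = [0.5 + 0.05 * i for i in range(10)]  # 0.5, 0.55, ..., 0.95
    print(parallel_map_evaluation(thresholds))
```

Since each threshold's AP is independent of the others, `Pool.map` is all that's needed; there is no shared state between workers.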

Requirements

  • Lyft's dataset devkit SDK
  • fire
  • pathlib
  • numpy

Instructions

Like the official mAP_evaluation script, this script expects the predictions and ground truth to be in the following format:

pred_file: json file, predictions in global frame, in the format of:

predictions = [{
    'sample_token': '0f0e3ce89d2324d8b45aa55a7b4f8207fbb039a550991a5149214f98cec136ac',
    'translation': [971.8343488872263, 1713.6816097857359, -25.82534357061308],
    'size': [2.519726579986132, 7.810161372666739, 3.483438286096803],
    'rotation': [0.10913582721095375, 0.04099572636992043, 0.01927712319721745, 1.029328402625659],
    'name': 'car',
    'score': 0.3077029437237213
}]

gt_file: ground truth annotations in global frame, in the format of:

gt = [{
    'sample_token': '0f0e3ce89d2324d8b45aa55a7b4f8207fbb039a550991a5149214f98cec136ac',
    'translation': [974.2811881299899, 1714.6815014457964, -23.689857123368846],
    'size': [1.796, 4.488, 1.664],
    'rotation': [0.14882026466054782, 0, 0, 0.9888642620837121],
    'name': 'car'
}]

output_dir: a directory to save the final results.
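Before running the evaluation, it can be handy to sanity-check your files against this schema. Here's a small sketch; `load_boxes` and the key sets are my own helpers for illustration, not part of the repo:

```python
import json

# Keys expected per box; ground-truth boxes have no 'score'.
PRED_KEYS = {"sample_token", "translation", "size", "rotation", "name", "score"}
GT_KEYS = PRED_KEYS - {"score"}


def load_boxes(path, required_keys):
    # Load a pred_file or gt_file and verify each box has the expected keys.
    with open(path) as f:
        boxes = json.load(f)
    for i, box in enumerate(boxes):
        missing = required_keys - box.keys()
        if missing:
            raise ValueError(f"box {i} in {path} is missing keys: {missing}")
    return boxes
```

For example, `load_boxes("tmp/pred_data.json", PRED_KEYS)` would catch a prediction file that is missing scores before the evaluation starts.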

I've provided a sample pred_file and gt_file in the tmp folder of this repository. Here's how you can run the evaluation script:

python mAP_evaluation.py --gt_file="tmp/gt_data.json" --pred_file="tmp/pred_data.json" --output_dir="tmp/"

After this command finishes, you'll find a metric_summary.json file in tmp containing the APs for all the IoU thresholds as well as the overall mAP.
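If you want to post-process the results programmatically, a minimal sketch follows. Note that the key name used here is an assumption; inspect your own metric_summary.json to see what the script actually writes:

```python
import json

import numpy as np


def overall_map(summary_path, per_threshold_key="average_precisions"):
    # Recompute the overall mAP as the mean of the per-threshold APs.
    # "average_precisions" is a hypothetical key name -- check the
    # actual metric_summary.json produced by mAP_evaluation.py.
    with open(summary_path) as f:
        summary = json.load(f)
    return float(np.mean(summary[per_threshold_key]))
```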

Enjoy!
