ipl-uw / 2019-CVPR-AIC-Track-1-UWIPL

Licence: other
Repository for 2019 CVPR AI City Challenge Track 1 from IPL@UW

Programming Languages

Python

Projects that are alternatives of or similar to 2019-CVPR-AIC-Track-1-UWIPL

cityflow-nl
Challenge Track 5 in 2021 AI City Challenge. (Baseline Model)
Stars: ✭ 25 (+31.58%)
Mutual labels:  aicitychallenge
Multi Object Tracking Paper List
Paper list and source code for multi-object-tracking
Stars: ✭ 1,508 (+7836.84%)
Mutual labels:  multi-object-tracking
Homography-Based-MOTDT
MOTDT with Homography Matrix for Multi-Object Tracking
Stars: ✭ 21 (+10.53%)
Mutual labels:  multi-object-tracking
awesome-3d-multi-object-tracking-autonomous-driving
A summary and list of open source 3D multi object tracking and datasets at this stage.
Stars: ✭ 16 (-15.79%)
Mutual labels:  multi-object-tracking
ssd sort
Multi-person tracking with SSD and Sort
Stars: ✭ 86 (+352.63%)
Mutual labels:  multi-object-tracking
Fairmot
[IJCV-2021] FairMOT: On the Fairness of Detection and Re-Identification in Multi-Object Tracking
Stars: ✭ 3,194 (+16710.53%)
Mutual labels:  multi-object-tracking
Craft Pytorch
Official implementation of Character Region Awareness for Text Detection (CRAFT)
Stars: ✭ 2,220 (+11584.21%)
Mutual labels:  cvpr2019
visual-compatibility
Context-Aware Visual Compatibility Prediction (https://arxiv.org/abs/1902.03646)
Stars: ✭ 92 (+384.21%)
Mutual labels:  cvpr2019
ByteTrack
ByteTrack: Multi-Object Tracking by Associating Every Detection Box
Stars: ✭ 1,991 (+10378.95%)
Mutual labels:  multi-object-tracking
siam-mot
SiamMOT: Siamese Multi-Object Tracking
Stars: ✭ 446 (+2247.37%)
Mutual labels:  multi-object-tracking
MPLT
Multi-person 3D panoramic localization tracking
Stars: ✭ 27 (+42.11%)
Mutual labels:  multi-object-tracking
multi-camera-pig-tracking
Official Implementation of "Tracking Grow-Finish Pigs Across Large Pens Using Multiple Cameras"
Stars: ✭ 25 (+31.58%)
Mutual labels:  multi-object-tracking
TA3N
[ICCV 2019 Oral] TA3N: https://github.com/cmhungsteve/TA3N (Most updated repo)
Stars: ✭ 45 (+136.84%)
Mutual labels:  cvpr2019
UniTrack
[NeurIPS'21] Unified tracking framework with a single appearance model. It supports Single Object Tracking (SOT), Video Object Segmentation (VOS), Multi-Object Tracking (MOT), Multi-Object Tracking and Segmentation (MOTS), Pose Tracking, Video Instance Segmentation (VIS), and class-agnostic MOT (e.g. TAO dataset).
Stars: ✭ 293 (+1442.11%)
Mutual labels:  multi-object-tracking
yolo deepsort
Fast MOT base on yolo+deepsort, support yolo3 and yolo4
Stars: ✭ 47 (+147.37%)
Mutual labels:  multi-object-tracking
zero virus
Zero-VIRUS: Zero-shot VehIcle Route Understanding System for Intelligent Transportation (CVPR 2020 AI City Challenge Track 1)
Stars: ✭ 25 (+31.58%)
Mutual labels:  aicitychallenge
Paddledetection
Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.
Stars: ✭ 5,799 (+30421.05%)
Mutual labels:  multi-object-tracking
obman
[cvpr19] Hands+Objects synthetic dataset, instructions to download and code to load the dataset
Stars: ✭ 120 (+531.58%)
Mutual labels:  cvpr2019
CrowdFlow
Optical Flow Dataset and Benchmark for Visual Crowd Analysis
Stars: ✭ 87 (+357.89%)
Mutual labels:  multi-object-tracking
CompenNet
[CVPR'19] End-to-end Projector Photometric Compensation
Stars: ✭ 35 (+84.21%)
Mutual labels:  cvpr2019

2019-CVPR-AIC-Track-1-UWIPL

1. Single Camera Tracking

Please download the source code and follow the instructions at https://github.com/GaoangW/TNT/tree/master/AIC19.

2. Deep Feature Re-identification

Please download the source code and follow the instructions at https://github.com/ipl-uw/2019-CVPR-AIC-Track-2-UWIPL. Then:

1. Create a video2img folder in the downloaded project (i.e., Video-Person-ReID/video2img/).
2. Put crop_img.py in the downloaded dataset (i.e., aic19-track1-mtmc/test) and run python crop_img.py there. You need to create a folder track1_test_img in the same path (i.e., aic19-track1-mtmc/test/track1_test_img).
3. Create a folder track1_sct_img_test_big and run python crop_img_big.py.
4. Create a folder log in the downloaded project (i.e., Video-Person-ReID/log) and put the downloaded Track 1 ReID model file in this folder.
5. Run python Graph_ModelDataGen.py to obtain the feature files (q_camids3_no_nms_big0510.npy, qf3_no_nms_big0510.npy, and q_pids3_no_nms_big0510.npy).
The code is based on Jiyang Gao's Video-Person-ReID code.
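A quick way to sanity-check the three feature files before moving on to the camera link models is to load them and confirm they align one entry per track. This is a sketch, not part of the repo: the loader function is hypothetical, and the (N, D) embedding shape and 1-D ID arrays are assumptions; only the filenames come from the steps above.

```python
import numpy as np

def load_reid_features(feature_dir="."):
    """Load the three feature files written by Graph_ModelDataGen.py.

    Filenames are from the README; the (N, D) embedding shape and the
    one-dimensional ID arrays are assumptions, not documented behavior.
    """
    qf = np.load(f"{feature_dir}/qf3_no_nms_big0510.npy")              # appearance embeddings, assumed (N, D)
    q_pids = np.load(f"{feature_dir}/q_pids3_no_nms_big0510.npy")      # per-track vehicle IDs, assumed (N,)
    q_camids = np.load(f"{feature_dir}/q_camids3_no_nms_big0510.npy")  # per-track camera IDs, assumed (N,)

    # One embedding, one vehicle ID, and one camera ID per track.
    assert len(qf) == len(q_pids) == len(q_camids), "feature/ID arrays must align"
    return qf, q_pids, q_camids
```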

3. Trajectory-Based Camera Link Models

Put the feature files (q_camids3_no_nms_big0510.npy, qf3_no_nms_big0510.npy, and q_pids3_no_nms_big0510.npy) in the Transition-Model folder of this project. Then run python main_in_transition_matrix.py and find the results in Transition-Model/transition_data/ICT-no_nms_big510/. The filename of the result should look like ict_greedy_iter1600_10.559_654_498.txt, where 1600 is the number of iterations, 10.559 is the embedding distance at the current iteration, 654 is the maximum global ID assigned to cross-camera vehicles, and 498 is the number of unique global IDs of cross-camera vehicles after merging.
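Since these four fields are encoded in the filename itself, they can be recovered programmatically when comparing runs. A small parser, assuming only the naming pattern described above (the helper name and return format are illustrative, not part of the repo):

```python
import re

def parse_ict_filename(name):
    """Split a result filename like ict_greedy_iter1600_10.559_654_498.txt
    into (iterations, embedding distance, max global ID, unique global IDs).
    """
    m = re.match(r"ict_greedy_iter(\d+)_([\d.]+)_(\d+)_(\d+)\.txt$", name)
    if m is None:
        raise ValueError(f"unexpected filename: {name}")
    iters, dist, max_gid, unique_gids = m.groups()
    return int(iters), float(dist), int(max_gid), int(unique_gids)
```

For example, `parse_ict_filename("ict_greedy_iter1600_10.559_654_498.txt")` returns `(1600, 10.559, 654, 498)`.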

4. NMS

Put the output result (e.g., ict_greedy_iter1600_10.559_654_498.txt) from the previous step in the NMS folder. Then run python NMS_filter.py to get the final Track 1 result (e.g., ict_greedy_iter1600_10.559_654_498_big.txt).
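The repo does not spell out what NMS_filter.py does internally, but non-maximum suppression in this context conventionally keeps the highest-scoring box among heavily overlapping ones. A generic IoU-based sketch of that standard technique (not the repo's actual implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: visit boxes from highest to lowest score, dropping any
    box whose IoU with an already-kept box exceeds thresh.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

For two near-duplicate boxes and one disjoint box, e.g. `nms([(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)], [0.9, 0.8, 0.7])` keeps indices `[0, 2]`: the second box overlaps the first with IoU ≈ 0.68 and is suppressed.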
