
NVlabs / Pamtri

License: other
PAMTRI: Pose-Aware Multi-Task Learning for Vehicle Re-Identification (ICCV 2019) - Official PyTorch Implementation

Projects that are alternatives of or similar to Pamtri

Object Detection And Location Realsensed435
Use the Intel RealSense D435 depth camera for target detection with YOLOv3 under the OpenCV DNN framework, and compute the 3D position of detected objects from the depth information, with real-time display of coordinates in the camera coordinate system. Also adds real-time object detection using a YOLOv5 TensorRT model on the AGX Xavier.
Stars: ✭ 36 (-32.08%)
Mutual labels:  cuda
Sixtyfour
How fast can we brute force a 64-bit comparison?
Stars: ✭ 41 (-22.64%)
Mutual labels:  cuda
Singularity Tutorial
Tutorial for using Singularity containers
Stars: ✭ 46 (-13.21%)
Mutual labels:  cuda
Nvidia libs test
Tests and benchmarks for cudnn (and in the future, other nvidia libraries)
Stars: ✭ 36 (-32.08%)
Mutual labels:  cuda
Nbody
N-body gravitational attraction problem solver
Stars: ✭ 40 (-24.53%)
Mutual labels:  cuda
Cuda Convnet2.torch
Torch7 bindings for cuda-convnet2 kernels!
Stars: ✭ 42 (-20.75%)
Mutual labels:  cuda
Deformable Convolution V2 Pytorch
Deformable ConvNets V2 (DCNv2) in PyTorch
Stars: ✭ 963 (+1716.98%)
Mutual labels:  cuda
Hungariangpu
A GPU/CUDA implementation of the Hungarian algorithm
Stars: ✭ 51 (-3.77%)
Mutual labels:  cuda
Octree Slam
Large octree map construction and rendering with CUDA and OpenGL
Stars: ✭ 40 (-24.53%)
Mutual labels:  cuda
Slic cuda
Superpixel SLIC for GPU (CUDA)
Stars: ✭ 45 (-15.09%)
Mutual labels:  cuda
Smallpt Parallel Bvh Gpu
A GPU implementation of smallpt (http://www.kevinbeason.com/smallpt/) with Bounding Volume Hierarchy (BVH) tree.
Stars: ✭ 36 (-32.08%)
Mutual labels:  cuda
Style Feature Reshuffle
caffe implementation of "Arbitrary Style Transfer with Deep Feature Reshuffle"
Stars: ✭ 38 (-28.3%)
Mutual labels:  cuda
Docs Pytorch
Deep Object Co-Segmentation
Stars: ✭ 43 (-18.87%)
Mutual labels:  cuda
Cure
Stars: ✭ 36 (-32.08%)
Mutual labels:  cuda
Hornet
Hornet data structure for sparse dynamic graphs and matrices
Stars: ✭ 49 (-7.55%)
Mutual labels:  cuda
Ktt
Kernel Tuning Toolkit
Stars: ✭ 33 (-37.74%)
Mutual labels:  cuda
Qualia2.0
Qualia is a deep learning framework deeply integrated with automatic differentiation and dynamic graphing with CUDA acceleration. Qualia was built from scratch.
Stars: ✭ 41 (-22.64%)
Mutual labels:  cuda
Carlsim3
CARLsim is an efficient, easy-to-use, GPU-accelerated software framework for simulating large-scale spiking neural network (SNN) models with a high degree of biological detail.
Stars: ✭ 52 (-1.89%)
Mutual labels:  cuda
Cs344
Introduction to Parallel Programming class code
Stars: ✭ 1,051 (+1883.02%)
Mutual labels:  cuda
Lyra
Stars: ✭ 43 (-18.87%)
Mutual labels:  cuda

PAMTRI: Pose-Aware Multi-Task Learning for Vehicle Re-Identification

This repo contains the official PyTorch implementation of PAMTRI: Pose-Aware Multi-Task Learning for Vehicle Re-Identification Using Highly Randomized Synthetic Data, ICCV 2019.

[Paper] [Poster]

Introduction

We address the problem of vehicle re-identification using multi-task learning and embedded pose representations. Since manually labeling images with detailed pose and attribute information is prohibitive, we train the network with a combination of real and randomized synthetic data.

The proposed framework consists of two convolutional neural networks (CNNs), which are shown in the figure below. Top: The pose estimation network is an extension of the High-Resolution Network (HRNet) for predicting keypoint coordinates (with confidence/visibility) and generating heatmaps/segments. Bottom: The multi-task network uses the embedded pose information from HRNet for joint vehicle re-identification and attribute classification.

Illustrating the architecture of PAMTRI
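
For illustration, here is a minimal PyTorch sketch of the idea behind the multi-task network: the pose heatmaps/segments predicted by HRNet are concatenated with the RGB image as extra input channels, and a shared backbone feeds separate heads for identity and attribute classification. The backbone choice, pose channel count, and layer names below are assumptions made for the sake of the example, not the exact modules used in this repository.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class PoseAwareMultiTaskNet(nn.Module):
    # Sketch: RGB image + embedded pose channels -> shared CNN backbone ->
    # re-id feature plus identity, color, and type logits.
    def __init__(self, num_ids, num_colors, num_types, num_pose_channels=36):
        super().__init__()
        backbone = models.densenet121(weights=None)
        # Widen the first convolution so it accepts the 3 RGB channels plus the
        # embedded pose heatmaps/segments (the channel count is an assumption).
        backbone.features.conv0 = nn.Conv2d(
            3 + num_pose_channels, 64, kernel_size=7, stride=2, padding=3, bias=False
        )
        self.backbone = backbone.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 1024  # DenseNet-121 output feature dimension
        self.id_head = nn.Linear(feat_dim, num_ids)        # vehicle identity
        self.color_head = nn.Linear(feat_dim, num_colors)  # attribute: color
        self.type_head = nn.Linear(feat_dim, num_types)    # attribute: type

    def forward(self, images, pose_maps):
        # Embed pose information by concatenating it with the image channels.
        x = torch.cat([images, pose_maps], dim=1)
        feat = self.pool(F.relu(self.backbone(x))).flatten(1)
        return feat, self.id_head(feat), self.color_head(feat), self.type_head(feat)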

Getting Started

Environment

The code was developed and tested with Python 3.6 on Ubuntu 16.04, using an NVIDIA GeForce RTX 2080 Ti GPU card. Other platforms or GPU cards may work but have not been fully tested.

Code Structure

Please refer to the README.md in each of the following directories for detailed instructions.

  • PoseEstNet directory: A modified version of HRNet for vehicle pose estimation. The code for training and testing, keypoint labels, and pre-trained models are provided.

  • MultiTaskNet directory: The multi-task network for joint vehicle re-identification and attribute classification using embedded pose representations. The code for training and testing, attribute labels, predicted keypoints, and pre-trained models are provided. A rough sketch of such a joint training objective is shown after this list.
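
As a rough sketch of the joint training objective for the multi-task network, the snippet below combines an identity classification loss, a metric-learning (triplet) loss on the re-id embedding, and cross-entropy losses on the color and type attributes. The specific loss terms, the triplet_fn placeholder, and the attr_weight value are illustrative assumptions; refer to the MultiTaskNet README for the options actually supported.

import torch.nn.functional as F

def multi_task_loss(feat, id_logits, color_logits, type_logits,
                    id_labels, color_labels, type_labels,
                    triplet_fn, attr_weight=0.5):
    # Hypothetical combined objective for joint re-identification and attribute
    # classification; triplet_fn stands in for any embedding-based metric loss.
    loss_id = F.cross_entropy(id_logits, id_labels)        # identity classification
    loss_tri = triplet_fn(feat, id_labels)                 # metric learning on re-id features
    loss_attr = (F.cross_entropy(color_logits, color_labels)
                 + F.cross_entropy(type_logits, type_labels))  # attribute classification
    return loss_id + loss_tri + attr_weight * loss_attr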

References

Please cite these papers if you use this code in your research:

@inproceedings{Tang19PAMTRI,
  author = {Zheng Tang and Milind Naphade and Stan Birchfield and Jonathan Tremblay and William Hodge and Ratnesh Kumar and Shuo Wang and Xiaodong Yang},
  title = { {PAMTRI}: {P}ose-aware multi-task learning for vehicle re-identification using highly randomized synthetic data},
  booktitle = {Proc. of the International Conference on Computer Vision (ICCV)},
  pages = {211--220},
  address = {Seoul, Korea},
  month = oct,
  year = 2019
}

@inproceedings{Tang19CityFlow,
  author = {Zheng Tang and Milind Naphade and Ming-Yu Liu and Xiaodong Yang and Stan Birchfield and Shuo Wang and Ratnesh Kumar and David Anastasiu and Jenq-Neng Hwang},
  title = {City{F}low: {A} city-scale benchmark for multi-target multi-camera vehicle tracking and re-identification},
  booktitle = {Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages = {8797--8806},
  address = {Long Beach, CA, USA},
  month = jun,
  year = 2019
}

License

Code in the repository, unless otherwise specified, is licensed under the NVIDIA Source Code License.

Contact

For any questions, please contact Zheng (Thomas) Tang.
