
CoinCheung / triplet-reid-pytorch

License: Apache-2.0
My implementation of the paper [In Defense of the Triplet Loss for Person Re-Identification]

Programming Languages

  • python
  • shell

Projects that are alternatives of or similar to triplet-reid-pytorch

DeepCD
[ICCV17] DeepCD: Learning Deep Complementary Descriptors for Patch Representations
Stars: ✭ 39 (-58.95%)
Mutual labels:  triplet
triplet-loss-pytorch
Highly efficient PyTorch version of the Semi-hard Triplet loss ⚡️
Stars: ✭ 79 (-16.84%)
Mutual labels:  triplet
triplet
Re-implementation of tripletloss function in FaceNet
Stars: ✭ 94 (-1.05%)
Mutual labels:  triplet
HiCMD
[CVPR2020] Hi-CMD: Hierarchical Cross-Modality Disentanglement for Visible-Infrared Person Re-Identification
Stars: ✭ 64 (-32.63%)
Mutual labels:  reid
reid baseline gluon
SOTA results for reid baseline model (Gluon implementation)
Stars: ✭ 14 (-85.26%)
Mutual labels:  reid
Ranked Person ReID
Person reID
Stars: ✭ 91 (-4.21%)
Mutual labels:  reid
CM-NAS
CM-NAS: Cross-Modality Neural Architecture Search for Visible-Infrared Person Re-Identification (ICCV2021)
Stars: ✭ 39 (-58.95%)
Mutual labels:  reid
capture reid
Pedestrian detection (LFFD), tracking (Deep SORT), and person re-identification (ReID) on live camera feeds, recorded video, or static images.
Stars: ✭ 87 (-8.42%)
Mutual labels:  reid
understand videobased reid
Annotations for the video_reid code, with a link to the original code.
Stars: ✭ 50 (-47.37%)
Mutual labels:  reid

triplet-ReID-pytorch

This is a simple implementation of the algorithm proposed in the paper In Defense of the Triplet Loss for Person Re-Identification.

This project is based on PyTorch 0.4.0 and Python 3.

To keep things straightforward and simple, only training a pretrained ResNet-50 with the batch-hard sampler (TriNet, according to the authors) is implemented.
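For reference, batch-hard mining picks, for each anchor in the batch, the farthest positive (same identity) and the closest negative (different identity). A minimal PyTorch sketch of the idea; the function name and margin value are illustrative, not the repository's actual code:

```python
import torch

def batch_hard_triplet_loss(embs, labels, margin=0.3):
    """Batch-hard triplet loss: for each anchor, take the hardest
    positive (farthest same-ID sample) and the hardest negative
    (closest different-ID sample) within the batch."""
    # pairwise Euclidean distance matrix, shape (B, B)
    dist = torch.cdist(embs, embs)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    # hardest positive: maximum distance among same-label pairs
    pos = dist.clone()
    pos[~same] = 0.0
    hardest_pos = pos.max(dim=1).values
    # hardest negative: minimum distance among different-label pairs
    neg = dist.clone()
    neg[same] = float('inf')
    hardest_neg = neg.min(dim=1).values
    # hinge on the margin, averaged over the batch
    return torch.clamp(hardest_pos - hardest_neg + margin, min=0).mean()
```

With well-separated identity clusters the loss is zero; when identities are mixed together it becomes positive, which is what drives the embedding apart during training.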

prepare dataset

Run the script datasets/download_market1501.sh to download and uncompress the Market1501 dataset.

    $ cd triplet-reid-pytorch/datasets
    $ sh download_market1501.sh 

train the model

  • To train on the Market1501 dataset, just run the training script:
    $ cd triplet-reid-pytorch
    $ python3 train.py

This will train an embedder model based on ResNet-50. The trained model will be saved as ./res/model.pkl.
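Note that batch-hard mining relies on batches built from P identities with K images each (the paper uses P=18, K=4). A hypothetical sketch of such a PK sampler, not the repository's actual one:

```python
import random
from collections import defaultdict

def pk_batches(labels, p=18, k=4):
    """Yield batches of dataset indices covering P identities with
    K images each, as batch-hard mining requires (illustrative
    sketch, not the repository's sampler)."""
    by_id = defaultdict(list)
    for idx, pid in enumerate(labels):
        by_id[pid].append(idx)
    pids = list(by_id)
    random.shuffle(pids)
    for start in range(0, len(pids) - p + 1, p):
        batch = []
        for pid in pids[start:start + p]:
            # sample with replacement, in case an identity has
            # fewer than K images
            batch.extend(random.choices(by_id[pid], k=k))
        yield batch
```

Each yielded batch then contains exactly P distinct identities with K samples apiece, so every anchor has both positives and negatives available for mining.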

embed the query and gallery dataset

  • To embed the gallery set and query set of Market1501, run the corresponding embedding scripts:
    $ python3 embed.py \
      --store_pth ./res/emb_gallery.pkl \
      --data_pth datasets/Market-1501-v15.09.15/bounding_box_test

    $ python3 embed.py \
      --store_pth ./res/emb_query.pkl \
      --data_pth datasets/Market-1501-v15.09.15/query

These scripts will use the trained embedder to embed the gallery and query sets of Market1501, and store the embeddings as ./res/emb_gallery.pkl and ./res/emb_query.pkl.

evaluate the embeddings

  • Then compute the rank-1 CMC and mAP:
    $ python3 eval.py --gallery_embs ./res/emb_gallery.pkl \
      --query_embs ./res/emb_query.pkl \
      --cmc_rank 1

This will evaluate the model on the query and gallery sets.
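For intuition: rank-1 CMC asks whether the single nearest gallery image shares the query's identity, while mAP averages precision over all correct matches in the ranking. A simplified sketch of the computation; it omits the same-camera filtering that the full Market1501 protocol applies:

```python
import numpy as np

def rank1_and_map(query_embs, query_ids, gallery_embs, gallery_ids):
    """Compute rank-1 CMC and mAP from L2 distances between
    embeddings (simplified: no camera-ID exclusion)."""
    # pairwise distances, shape (num_query, num_gallery)
    dist = np.linalg.norm(query_embs[:, None] - gallery_embs[None], axis=2)
    rank1_hits, aps = 0, []
    for i in range(len(query_embs)):
        order = np.argsort(dist[i])
        matches = gallery_ids[order] == query_ids[i]
        if matches[0]:
            rank1_hits += 1
        # average precision over the positions of all true matches
        hit_idx = np.where(matches)[0]
        precisions = [(k + 1) / (idx + 1) for k, idx in enumerate(hit_idx)]
        aps.append(np.mean(precisions))
    return rank1_hits / len(query_embs), float(np.mean(aps))
```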

Notes

After consulting some other papers and implementations, I got to know two tricks that help boost the performance:

  • adjust the stride of the last stage of ResNet from 2 to 1.
  • use the random erasing augmentation.

With these two tricks, the mAP and rank-1 CMC on the Market1501 dataset reach 76.04/88.27, noticeably higher than the results reported in the paper.
