
yihongXU / TransCenter

Licence: other
This is the official implementation of TransCenter. The code and pretrained models are now available here: https://gitlab.inria.fr/yixu/TransCenter_official.

Projects that are alternatives to or similar to TransCenter

deepconsensus
DeepConsensus uses gap-aware sequence transformers to correct errors in Pacific Biosciences (PacBio) Circular Consensus Sequencing (CCS) data.
Stars: ✭ 124 (+51.22%)
Mutual labels:  transformers
PyTorch-Model-Compare
Compare neural networks by their feature similarity
Stars: ✭ 119 (+45.12%)
Mutual labels:  transformers
molecule-attention-transformer
Pytorch reimplementation of Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules
Stars: ✭ 46 (-43.9%)
Mutual labels:  transformers
long-short-transformer
Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch
Stars: ✭ 103 (+25.61%)
Mutual labels:  transformers
xpandas
Universal 1d/2d data containers with Transformers functionality for data analysis.
Stars: ✭ 25 (-69.51%)
Mutual labels:  transformers
COVID-19-Tweet-Classification-using-Roberta-and-Bert-Simple-Transformers
Rank 1 / 216
Stars: ✭ 24 (-70.73%)
Mutual labels:  transformers
anonymisation
Anonymization of legal cases (Fr) based on Flair embeddings
Stars: ✭ 85 (+3.66%)
Mutual labels:  transformers
deepfrog
An NLP-suite powered by deep learning
Stars: ✭ 16 (-80.49%)
Mutual labels:  transformers
DocSum
A tool to automatically summarize documents abstractively using the BART or PreSumm Machine Learning Model.
Stars: ✭ 58 (-29.27%)
Mutual labels:  transformers
converse
Conversational text Analysis using various NLP techniques
Stars: ✭ 147 (+79.27%)
Mutual labels:  transformers
lightning-transformers
Flexible components pairing 🤗 Transformers with Pytorch Lightning
Stars: ✭ 551 (+571.95%)
Mutual labels:  transformers
WellcomeML
Repository for Machine Learning utils at the Wellcome Trust
Stars: ✭ 31 (-62.2%)
Mutual labels:  transformers
modules
The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We develop a method for analyzing emerging functional modularity in neural networks based on differentiable weight masks and use it to point out important issues in current-day neural networks.
Stars: ✭ 25 (-69.51%)
Mutual labels:  transformers
label-studio-transformers
Label data using HuggingFace's transformers and automatically get a prediction service
Stars: ✭ 117 (+42.68%)
Mutual labels:  transformers
pytorch-vit
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Stars: ✭ 250 (+204.88%)
Mutual labels:  transformers
Text-Summarization
Abstractive and Extractive Text summarization using Transformers.
Stars: ✭ 38 (-53.66%)
Mutual labels:  transformers
gnn-lspe
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
Stars: ✭ 165 (+101.22%)
Mutual labels:  transformers
course-content-dl
NMA deep learning course
Stars: ✭ 537 (+554.88%)
Mutual labels:  transformers
language-planner
Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"
Stars: ✭ 84 (+2.44%)
Mutual labels:  transformers
geometry-free-view-synthesis
Is a geometric model required to synthesize novel views from a single image?
Stars: ✭ 265 (+223.17%)
Mutual labels:  transformers

TransCenterV2: Transformers with Dense Representations for Multiple-Object Tracking

An update towards a more efficient and powerful TransCenter, plus a lighter version: TransCenter-Lite!

The code for TransCenterV2 and TransCenter-Lite is now available; you can find the code and pretrained models at https://gitlab.inria.fr/robotlearn/TransCenter_official.

TransCenter: Transformers with Dense Representations for Multiple-Object Tracking
Yihong Xu, Yutong Ban, Guillaume Delorme, Chuang Gan, Daniela Rus, Xavier Alameda-Pineda
[Paper] [Project]

Bibtex

If you find this code useful, please star the project and consider citing:

@misc{xu2021transcenter,
      title={TransCenter: Transformers with Dense Representations for Multiple-Object Tracking}, 
      author={Yihong Xu and Yutong Ban and Guillaume Delorme and Chuang Gan and Daniela Rus and Xavier Alameda-Pineda},
      year={2021},
      eprint={2103.15145},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

MOTChallenge Results

For TransCenterV2 (models pretrained on either COCO or CrowdHuman, abbreviated CH):

MOT17 public detections:

Pretrained  MOTA   MOTP   IDF1   FP      FN       IDS
COCO        71.9%  80.5%  64.1%  27,356  126,860  4,118
CH          75.9%  81.2%  65.9%  30,190  100,999  4,626

MOT20 public detections:

Pretrained  MOTA   MOTP   IDF1   FP      FN       IDS
COCO        67.7%  79.8%  58.9%  54,967  108,376  3,707
CH          72.8%  81.0%  57.6%  28,026  110,312  2,621

MOT17 private detections:

Pretrained  MOTA   MOTP   IDF1   FP      FN       IDS
COCO        72.7%  80.3%  64.0%  33,807  115,542  4,719
CH          76.5%  81.1%  65.5%  40,101  88,827   5,394

MOT20 private detections:

Pretrained  MOTA   MOTP   IDF1   FP      FN       IDS
COCO        67.7%  79.8%  58.7%  56,435  107,163  3,759
CH          72.9%  81.0%  57.7%  28,596  108,982  2,625

Note:

  • Results may differ slightly depending on the running environment.
  • We might keep updating the results in the near future.
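
For reference, the MOTA values in the tables above combine the three reported error counts (FP, FN, IDS) following the CLEAR MOT definition. Below is a minimal Python sketch; the ground-truth box count is not reported in the tables, so the 564,228 figure is the commonly cited MOT17 test-set total, used purely as an illustrative check:

def mota(fp: int, fn: int, ids: int, gt_total: int) -> float:
    # CLEAR MOT accuracy: MOTA = 1 - (FN + FP + IDS) / total ground-truth boxes.
    # Can be negative when the summed errors exceed the number of GT boxes.
    return 1.0 - (fn + fp + ids) / gt_total

# Example: the MOT17 public detections, CH-pretrained row above.
print(f"{mota(30_190, 100_999, 4_626, 564_228):.1%}")  # -> 75.9%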

Acknowledgement

The code for TransCenterV2 and TransCenter-Lite is modified from, and the network pre-trained weights are obtained from, the following repositories:

  1. The PVTv2 backbone pretrained models are from PVTv2.
  2. The data format conversion code is modified from CenterTrack.

Parts of the code are also adapted from CenterTrack, Deformable-DETR, and Tracktor. Their corresponding publications:

@inproceedings{zhou2020tracking,
  title={Tracking Objects as Points},
  author={Zhou, Xingyi and Koltun, Vladlen and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={ECCV},
  year={2020}
}

@inproceedings{tracktor_2019_ICCV,
  author={Bergmann, Philipp and Meinhardt, Tim and Leal{-}Taix{\'{e}}, Laura},
  title={Tracking Without Bells and Whistles},
  booktitle={The IEEE International Conference on Computer Vision (ICCV)},
  month={October},
  year={2019}
}

@article{zhu2020deformable,
  title={Deformable DETR: Deformable Transformers for End-to-End Object Detection},
  author={Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng},
  journal={arXiv preprint arXiv:2010.04159},
  year={2020}
}

@article{zhang2021bytetrack,
  title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box},
  author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang},
  journal={arXiv preprint arXiv:2110.06864},
  year={2021}
}

@article{wang2021pvtv2,
  title={PVTv2: Improved Baselines with Pyramid Vision Transformer},
  author={Wang, Wenhai and Xie, Enze and Li, Xiang and Fan, Deng-Ping and Song, Kaitao and Liang, Ding and Lu, Tong and Luo, Ping and Shao, Ling},
  journal={Computational Visual Media},
  volume={8},
  number={3},
  pages={1--10},
  year={2022},
  publisher={Springer}
}

Several modules are from:

MOT Metrics in Python: py-motmetrics (see the usage sketch after this list)

Soft-NMS: Soft-NMS

DETR: DETR

DCNv2: DCNv2

PVTv2: PVTv2

ByteTrack: ByteTrack
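
As an illustration, CLEAR MOT scores of the kind reported above can be computed with py-motmetrics along the following lines. This is a minimal sketch of the library's public API with made-up boxes for a single frame, not code from this repository; the 0.5 IoU matching threshold is an assumption:

import motmetrics as mm
import numpy as np

# Accumulate ground-truth/hypothesis associations frame by frame.
acc = mm.MOTAccumulator(auto_id=True)

# One frame: ground-truth and tracker ids with (x, y, w, h) boxes.
gt_ids, hyp_ids = [1, 2], [1, 2, 3]
gt_boxes = np.array([[10, 20, 50, 100], [200, 40, 60, 120]])
hyp_boxes = np.array([[12, 22, 50, 100], [195, 42, 60, 118], [400, 10, 30, 60]])

# IoU-based distances; pairs below 0.5 IoU are treated as non-matches.
dists = mm.distances.iou_matrix(gt_boxes, hyp_boxes, max_iou=0.5)
acc.update(gt_ids, hyp_ids, dists)

# Summarize the accumulated frames into MOTA/MOTP/IDF1.
mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['mota', 'motp', 'idf1'], name='demo')
print(summary)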
