
VITA-Group / ABD-Net

License: MIT
[ICCV 2019] "ABD-Net: Attentive but Diverse Person Re-Identification" https://arxiv.org/abs/1908.01114

Programming Languages

Python

Projects that are alternatives of or similar to ABD-Net

NTUA-slp-nlp
💻Speech and Natural Language Processing (SLP & NLP) Lab Assignments for ECE NTUA
Stars: ✭ 19 (-93.01%)
Mutual labels:  attention
Diverse-Structure-Inpainting
CVPR 2021: "Generating Diverse Structure for Image Inpainting With Hierarchical VQ-VAE"
Stars: ✭ 131 (-51.84%)
Mutual labels:  attention
ResUNetPlusPlus
Official code for ResUNetplusplus for medical image segmentation (TensorFlow implementation) (IEEE ISM)
Stars: ✭ 69 (-74.63%)
Mutual labels:  attention
interpretable-han-for-document-classification-with-keras
Keras implementation of hierarchical attention network for document classification with options to predict and present attention weights on both word and sentence level.
Stars: ✭ 18 (-93.38%)
Mutual labels:  attention
dhs summit 2019 image captioning
Image captioning using attention models
Stars: ✭ 34 (-87.5%)
Mutual labels:  attention
Visual-Transformer-Paper-Summary
Summary of Transformer applications for computer vision tasks.
Stars: ✭ 51 (-81.25%)
Mutual labels:  attention
AoA-pytorch
A Pytorch implementation of Attention on Attention module (both self and guided variants), for Visual Question Answering
Stars: ✭ 33 (-87.87%)
Mutual labels:  attention
Attentionwalk
A PyTorch Implementation of "Watch Your Step: Learning Node Embeddings via Graph Attention" (NeurIPS 2018).
Stars: ✭ 266 (-2.21%)
Mutual labels:  attention
SBR
⌛ Introducing Self-Attention to Target Attentive Graph Neural Networks (AISP '22)
Stars: ✭ 22 (-91.91%)
Mutual labels:  attention
mtad-gat-pytorch
PyTorch implementation of MTAD-GAT (Multivariate Time-Series Anomaly Detection via Graph Attention Networks) by Zhao et al. (2020, https://arxiv.org/abs/2009.02040).
Stars: ✭ 85 (-68.75%)
Mutual labels:  attention
CoVA-Web-Object-Detection
A Context-aware Visual Attention-based training pipeline for Object Detection from a Webpage screenshot!
Stars: ✭ 18 (-93.38%)
Mutual labels:  attention
attention-target-detection
[CVPR2020] "Detecting Attended Visual Targets in Video"
Stars: ✭ 105 (-61.4%)
Mutual labels:  attention
Attention
Code for several different attention mechanisms
Stars: ✭ 17 (-93.75%)
Mutual labels:  attention
Base-On-Relation-Method-Extract-News-DA-RNN-Model-For-Stock-Prediction--Pytorch
A dual-stage attention-based model for stock prediction using a relation-based news extraction method
Stars: ✭ 33 (-87.87%)
Mutual labels:  attention
ai challenger 2018 sentiment analysis
Fine-grained Sentiment Analysis of User Reviews --- AI CHALLENGER 2018
Stars: ✭ 16 (-94.12%)
Mutual labels:  attention
RNNSearch
An implementation of attention-based neural machine translation using Pytorch
Stars: ✭ 43 (-84.19%)
Mutual labels:  attention
Semantic-Aware-Attention-Based-Deep-Object-Co-segmentation
Semantic Aware Attention Based Deep Object Co-segmentation
Stars: ✭ 61 (-77.57%)
Mutual labels:  attention
Encoder decoder
Four styles of encoder decoder model by Python, Theano, Keras and Seq2Seq
Stars: ✭ 269 (-1.1%)
Mutual labels:  attention
Abcnn
Implementation of ABCNN(Attention-Based Convolutional Neural Network) on Tensorflow
Stars: ✭ 264 (-2.94%)
Mutual labels:  attention
Attention-Visualization
Visualization for simple attention and Google's multi-head attention.
Stars: ✭ 54 (-80.15%)
Mutual labels:  attention

ABD-Net: Attentive but Diverse Person Re-Identification


Code for the paper "ABD-Net: Attentive but Diverse Person Re-Identification".

Tianlong Chen, Shaojin Ding*, Jingyi Xie*, Ye Yuan, Wuyang Chen, Yang Yang, Zhou Ren, Zhangyang Wang

In ICCV 2019

Refer to the Training Guides README here, the original README here, the datasets README here, and the Model Zoo README here.

We provide complete usage instructions and pretrained models for our paper.

More models will come soon. If you want a pretrained model for a specific dataset, please feel free to open an issue in our repo.

Overview

Attention mechanisms have been shown to be effective for person re-identification (Re-ID). However, the learned attentive feature embeddings are often neither naturally diverse nor uncorrelated, which compromises retrieval performance based on the Euclidean distance. We advocate that enforcing diversity can greatly complement the power of attention. To this end, we propose an Attentive but Diverse Network (ABD-Net), which seamlessly integrates attention modules and diversity regularization throughout the entire network to learn features that are representative, robust, and more discriminative.
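
As an illustration of what a diversity (orthogonality) constraint looks like in code, below is a minimal sketch of a soft Gram-matrix penalty that pushes the channels of an activation map toward being uncorrelated. This is only a stand-in for the idea: the paper's actual regularizers (O.F./O.W.) use a spectral value difference orthogonality (SVDO) formulation, and the function name and weighting here are hypothetical.

```python
import torch
import torch.nn.functional as F

def soft_orthogonality_penalty(feat: torch.Tensor) -> torch.Tensor:
    """Penalize correlation between channels of a (N, C, H, W) feature map.

    Simplified stand-in for a diversity regularizer; not the paper's SVDO term.
    """
    n, c, h, w = feat.shape
    flat = F.normalize(feat.view(n, c, h * w), dim=2)   # unit-norm channel vectors
    gram = torch.bmm(flat, flat.transpose(1, 2))        # (N, C, C) channel correlations
    eye = torch.eye(c, device=feat.device).expand_as(gram)
    return ((gram - eye) ** 2).sum(dim=(1, 2)).mean()   # distance from the identity matrix

# Hypothetical usage inside a training step:
# loss = id_loss + beta * soft_orthogonality_penalty(attended_feature_map)
```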

Here is a visualization of the attention maps: (i) original images; (ii) attentive feature maps; (iii) attentive but diverse feature maps. Diversity can be observed to make attention "broader" in general and to correct some mistaken over-emphasis by attention (such as on clothing textures). (L: large values; S: small values.)
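
Overlays of this kind are usually produced by upsampling a 2-D activation map and alpha-blending a colormapped version over the input image. A minimal sketch, assuming OpenCV, a uint8 BGR image `img`, and a 2-D float activation map `attn` taken from the network (both names are placeholders), could look like this:

```python
import cv2
import numpy as np

def overlay_attention(img: np.ndarray, attn: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend a normalized attention heatmap over a uint8 BGR image."""
    attn = cv2.resize(attn, (img.shape[1], img.shape[0]))          # upsample to image size
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)  # rescale to [0, 1]
    heat = cv2.applyColorMap((attn * 255).astype(np.uint8), cv2.COLORMAP_JET)
    return cv2.addWeighted(heat, alpha, img, 1.0 - alpha, 0)
```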

Methods

We add a CAM (Channel Attention Module) and O.F. (the orthogonality regularizer on features) to the output of the res_conv_2 block. The regularized feature map is used as the input of res_conv_3. Next, after the res_conv_4 block, the network splits into a global branch and an attentive branch in parallel. We apply O.W. (the orthogonality regularizer on weights) to all conv layers in our ResNet-50 backbone, i.e., from res_conv_1 to res_conv_4 and the two res_conv_5 blocks in the two branches. The outputs of the two branches are concatenated to form the final feature embedding.
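
The branch layout described above can be summarized in a rough PyTorch sketch. This is not the authors' implementation: `cam_early`, `cam_late`, and `pam` are placeholder attention modules (a CAM sketch follows below), a torchvision ResNet-50 stands in for the backbone, and details such as reduction layers, classifiers, and the exact placement of the O.F./O.W. terms are omitted.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ABDLikeNet(nn.Module):
    """High-level sketch of the global/attentive branch split only."""

    def __init__(self, cam_early: nn.Module, cam_late: nn.Module, pam: nn.Module):
        super().__init__()
        r = resnet50(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.layer1, self.layer2, self.layer3 = r.layer1, r.layer2, r.layer3  # res_conv_2..4
        self.cam_early = cam_early                              # CAM on res_conv_2 output (O.F. applied here)
        self.global_layer4 = r.layer4                           # res_conv_5, global branch
        self.attentive_layer4 = resnet50(weights=None).layer4   # independent res_conv_5, attentive branch
        self.cam_late, self.pam = cam_late, pam
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        x = self.stem(x)
        x = self.cam_early(self.layer1(x))              # attended map feeds res_conv_3
        x = self.layer3(self.layer2(x))
        g = self.pool(self.global_layer4(x)).flatten(1)  # global branch embedding
        a = self.attentive_layer4(x)
        a = self.pam(a) + self.cam_late(a)               # attentive branch (PAM + CAM)
        a = self.pool(a).flatten(1)
        return torch.cat([g, a], dim=1)                  # concatenated final embedding
```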

Here are the detailed structures of CAM (Channel Attention Module) and PAM (Position Attention Module).
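
As a rough guide, a DANet-style channel attention module (which the CAM described here resembles) re-weights channels using a channel-by-channel affinity matrix. The sketch below is simplified and is not the repo's exact module:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Simplified DANet-style channel attention; a placeholder, not the repo's CAM."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))    # learnable residual weight, starts at 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        q = x.view(n, c, -1)                         # (N, C, HW)
        energy = torch.bmm(q, q.transpose(1, 2))     # (N, C, C) channel affinities
        attn = torch.softmax(energy, dim=-1)         # attention over channels
        out = torch.bmm(attn, q).view(n, c, h, w)    # re-weighted channels
        return self.gamma * out + x                  # residual connection
```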

Results

Our proposed ABD-Net achieves state-of-the-art (SOTA) performance on the Market-1501, DukeMTMC-Re-ID, and MSMT17 datasets. A detailed comparison with previous SOTA methods can be found in our paper.

| Dataset        | Top-1 (%) | mAP (%) |
|----------------|-----------|---------|
| Market-1501    | 95.60     | 88.28   |
| DukeMTMC-Re-ID | 89.00     | 78.59   |
| MSMT17         | 82.30     | 60.80   |
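
For context, Top-1 is the fraction of queries whose closest gallery embedding shares the query identity, and mAP averages the per-query average precision over the ranked gallery. A simplified sketch of that computation (it omits the same-camera filtering used in the full Re-ID evaluation protocol, and the function name is hypothetical) is shown below:

```python
import numpy as np

def top1_and_map(dist: np.ndarray, q_ids: np.ndarray, g_ids: np.ndarray):
    """dist: (num_query, num_gallery) distances between query and gallery embeddings."""
    order = np.argsort(dist, axis=1)             # gallery indices sorted by distance, per query
    matches = g_ids[order] == q_ids[:, None]     # True where the retrieved identity is correct
    top1 = matches[:, 0].mean()
    average_precisions = []
    for row in matches:
        hit_ranks = np.flatnonzero(row)          # zero-based ranks of correct matches
        if hit_ranks.size == 0:
            continue                             # skip queries with no positive in the gallery
        precision_at_hits = np.arange(1, hit_ranks.size + 1) / (hit_ranks + 1)
        average_precisions.append(precision_at_hits.mean())
    return float(top1), float(np.mean(average_precisions))
```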

Here are three Re-ID examples from ABD-Net (XE), Baseline + PAM + CAM, and Baseline on Market-1501. Left: query image. Right: (i) top-5 results of ABD-Net (XE); (ii) top-5 results of Baseline + PAM + CAM; (iii) top-5 results of Baseline. Images in red boxes are incorrect (negative) results.

Citation

If you use this code for your research, please cite our paper.

```
@InProceedings{Chen_2019_ICCV,
author = {Tianlong Chen and Shaojin Ding and Jingyi Xie and Ye Yuan and Wuyang Chen and Yang Yang and Zhou Ren and Zhangyang Wang},
title = {ABD-Net: Attentive but Diverse Person Re-Identification},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2019}
}
```