
Context-Transformer: Tackling Object Confusion for Few-Shot Detection


This repository contains the official implementation of the AAAI 2020 paper Context-Transformer: Tackling Object Confusion for Few-Shot Detection.

Introduction

To tackle the object confusion problem in few-shot detection, we propose a novel Context-Transformer within a concise deep transfer framework. Specifically, Context-Transformer can effectively leverage source-domain object knowledge as guidance, and automatically formulate relational context clues to enhance the detector's generalization capacity on the target domain. It can be flexibly embedded into popular SSD-style detectors, which makes it a plug-and-play module for end-to-end few-shot learning. For more details, please refer to our original paper.
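For intuition only, here is a minimal, hypothetical PyTorch sketch of what a plug-and-play attention block on an SSD-style feature map can look like. This is not the authors' implementation: the class name, pooling scheme, and attention layout are all illustrative assumptions; read the paper for the actual Context-Transformer design.

# Illustrative toy module, NOT the paper's implementation: every spatial
# location attends to a small pool of candidate context vectors.
import torch
import torch.nn as nn

class ToyContextBlock(nn.Module):
    def __init__(self, channels, pool_size=4):
        super().__init__()
        # Pool the feature map into a small grid of context candidates.
        self.pool = nn.AdaptiveAvgPool2d(pool_size)
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Linear(channels, channels)
        self.value = nn.Linear(channels, channels)

    def forward(self, x):                                # x: (B, C, H, W)
        b, c, h, w = x.shape
        ctx = self.pool(x).flatten(2).transpose(1, 2)    # (B, P*P, C)
        q = self.query(x).flatten(2).transpose(1, 2)     # (B, H*W, C)
        k, v = self.key(ctx), self.value(ctx)            # (B, P*P, C)
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                   # residual: plug-and-play

feat = torch.randn(2, 512, 38, 38)                       # one SSD source layer
print(ToyContextBlock(512)(feat).shape)                  # torch.Size([2, 512, 38, 38])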

Transfer Setting COCO60 to VOC20 (Novel Class mAP)

Method      1-shot   5-shot
Prototype   22.8     39.8
Imprinted   24.5     40.9
Non-local   25.2     41.0
Baseline    21.5     39.4
Ours        27.0     43.8

News: We now support instance shots for the COCO60-to-VOC20 transfer setting, denoted by the suffix -IS below.

Method        1-shot   5-shot
Baseline-IS   19.2     35.7
Ours-IS       27.1     40.4

Note:

  • The instance shots are kept the same as in the incremental setting, which differs from the image shots we originally used in the transfer setting. This is why the 1-shot result of Ours-IS (27.1) can be comparable to that of Ours (27.0).

Incremental Setting VOC15 to VOC20 (Novel Class mAP)

Method (1-shot)   Split1   Split2   Split3
Shmelkov2017      23.9     19.2     21.4
Kang2019          14.8     15.7     19.2
Ours              39.8     32.5     34.0

Method (5-shot)   Split1   Split2   Split3
Shmelkov2017      38.8     32.5     31.8
Kang2019          33.9     30.1     40.6
Ours              44.2     36.3     40.8

Note:

  • The results here are higher than those reported in the paper due to an adjustment of the training strategy.

License

Context-Transformer is released under the MIT License (refer to the LICENSE file for details).

Citing Context-Transformer

If you find Context-Transformer useful in your research, please consider citing:

@inproceedings{yang2020context,
  title={Context-Transformer: Tackling Object Confusion for Few-Shot Detection},
  author={Yang, Ze and Wang, Yali and Chen, Xianyu and Liu, Jianzhuang and Qiao, Yu},
  booktitle={AAAI},
  pages={12653--12660},
  year={2020}
}

 

Contents

  1. Installation
  2. Datasets
  3. Training
  4. Evaluation

Installation

  • Clone this repository. It is mainly based on RFBNet and Detectron2; many thanks to them.

  • Install Anaconda and the following requirements:

    • python 3.6

    • PyTorch 1.4.0

    • CUDA 10.0

    • gcc 5.4

    • cython

    • opencv

    • matplotlib

    • tabulate

    • termcolor

    • tensorboard

      You can set up the entire environment with the following lines:

      conda create -n CT python=3.6 && conda activate CT
      conda install pytorch=1.4.0 torchvision cudatoolkit=10.0 -c pytorch
      conda install cython matplotlib tabulate termcolor tensorboard
      pip install opencv-python
  • Compile the nms and coco tools:

sh make.sh

Note:

  • Check your GPU architecture support in utils/build.py, line 131 (a quick way to find the right flag for your GPU is shown after this list). The default is:
'nvcc': ['-arch=sm_61',
  • Ensure that the CUDA environment is fully installed, including the compiler, tools, and libraries. Also make sure the cudatoolkit version in the conda environment matches the one you compile with: check via nvcc -V and conda list | grep cudatoolkit; the reported versions should be the same.
  • We have tested the code with PyTorch 1.4.0 and Python 3.6. It may run on other versions, but with no guarantee.
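As a quick, illustrative way to find the right -arch value (assuming a CUDA-enabled PyTorch install), you can query the GPU's compute capability:

# Compute capability (major, minor) maps to the flag, e.g. (6, 1) -> sm_61.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"-arch=sm_{major}{minor}")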

Datasets

VOC Dataset

Download VOC2007 trainval & test

# Specify a directory for the dataset to be downloaded into; the default is ~/data/
sh data/scripts/VOC2007.sh # <directory>

Download VOC2012 trainval

# Specify a directory for the dataset to be downloaded into; the default is ~/data/
sh data/scripts/VOC2012.sh # <directory>

Create symlink for the VOC dataset:

ln -s /path/to/VOCdevkit data/VOCdevkit

Image shots and splits preparation

Move Main2007.zip and Main2012.zip from the data/ folder to data/VOCdevkit/VOC2007/ImageSets/ and data/VOCdevkit/VOC2012/ImageSets/ respectively, and unzip them. Make sure that the .txt files contained in each zip end up under the corresponding path/to/Main/ folder.
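If you prefer to script this step, here is an illustrative Python equivalent. It assumes the zips sit under data/ and that each archive contains a Main/ folder, per the note above:

# Extract the split files into the ImageSets directories.
import zipfile

for src, dst in [("data/Main2007.zip", "data/VOCdevkit/VOC2007/ImageSets"),
                 ("data/Main2012.zip", "data/VOCdevkit/VOC2012/ImageSets")]:
    with zipfile.ZipFile(src) as zf:
        zf.extractall(dst)  # the .txt files should land under .../ImageSets/Main/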

COCO Dataset

Download COCO benchmark

Download the MS COCO dataset from the official website to data/COCO/ (or make a symlink: ln -s /path/to/coco data/COCO). All annotation files (.json) should be placed under the COCO/annotations/ folder. It should have this basic structure:

$COCO/
$COCO/cache/
$COCO/annotations/
$COCO/images/
$COCO/images/train2014/
$COCO/images/val2014/

Note: The current COCO dataset has released new train2017 and val2017 sets, which are just new splits of the same image sets.
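As an optional sanity check (illustrative; it assumes the default data/COCO location), you can verify the layout with a few lines of Python:

# Verify the expected COCO directory structure from above.
import os

root = "data/COCO"
for sub in ("annotations", "images/train2014", "images/val2014"):
    status = "OK" if os.path.isdir(os.path.join(root, sub)) else "MISSING"
    print(f"{root}/{sub}: {status}")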

Image splits preparation

Run the following command to obtain nonvoc/voc split annotation files (.json):

python data/split_coco_dataset_voc_nonvoc.py

Training

First, download the fc-reduced VGG-16 PyTorch base network weights from https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth or from BaiduYun Drive, and place the file under the weights/ directory.
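If you would rather script the download, here is an illustrative helper; the URL and target directory are taken from the text above:

# Fetch the fc-reduced VGG-16 weights into weights/.
import os
import urllib.request

os.makedirs("weights", exist_ok=True)
urllib.request.urlretrieve(
    "https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth",
    "weights/vgg16_reducedfc.pth")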

Phase 1

Transfer Setting

To pretrain RFBNet on the source-domain dataset COCO60:

python train.py --save-folder weights/COCO60_pretrain -d COCO -p 1

Incremental Setting

To pretrain RFBNet on VOC split1 (simply change --split for other splits):

python train.py --save-folder weights/VOC_split1_pretrain -d VOC -p 1 -max 50000 --steps 30000 40000 --checkpoint-period 4000 --warmup-iter 1000 --setting incre --split 1

Note:

  • To ease reproduction, feel free to download the above pretrained RFBNet models directly via BaiduYun Drive or OneDrive.

Phase 2

Transfer Setting

To finetune on VOC dataset (1 shot):

python train.py --load-file weights/COCO60_pretrain/model_final.pth --save-folder weights/fewshot/transfer/VOC_1shot -d VOC -p 2 --shot 1 --method ours -max 2000 --steps 1500 1750 --checkpoint-period 200 --warmup-iter 0 --no-mixup-iter 750 -b 20

To finetune on VOC dataset (5 shot):

python train.py --load-file weights/COCO60_pretrain/model_final.pth --save-folder weights/fewshot/transfer/VOC_5shot -d VOC -p 2 --shot 5 --method ours -max 4000 --steps 3000 3500 --checkpoint-period 500 --warmup-iter 0 --no-mixup-iter 1500

Incremental Setting

To finetune on VOC dataset for split1 setting (1 shot):

python train.py -d VOC --split 1 --setting incre -p 2 -m ours --shot 1 --save-folder weights/fewshot/incre/VOC_split1_1shot --load-file weights/VOC_split1_pretrain/model_final.pth -max 200 --steps 150 --checkpoint-period 50 --warmup-iter 0 --no-mixup-iter 100

To finetune on VOC dataset for split1 setting (5 shot):

python train.py -d VOC --split 1 --setting incre -p 2 -m ours --shot 5 --save-folder weights/fewshot/incre/VOC_split1_5shot --load-file weights/VOC_split1_pretrain/model_final.pth -max 400 --steps 350 --checkpoint-period 50 --warmup-iter 0 --no-mixup-iter 100

Note:

  • Simply change --split for other split settings.
  • For other shot settings, feel free to adjust --shot, -max, --steps and --no-mixup-iter to obtain satisfactory results.

Evaluation

Phase 1

Transfer Setting

To evaluate the pretrained model on COCO minival set:

python test.py -d COCO -p 1 --save-folder weights/COCO60_pretrain --resume

Incremental setting

To evaluate the pretrained model on VOC2007 test set (specify your target split via --split):

python test.py -d VOC --split 1 --setting incre -p 1 --save-folder weights/VOC_split1_pretrain --resume

Phase 2

Transfer Setting

To evaluate the transferred model on VOC2007 test set:

python test.py -d VOC -p 2 --save-folder weights/fewshot/transfer/VOC_5shot --resume

Incremental setting

To evaluate the incremental model on VOC2007 test set (specify your target split via --split):

python test.py -d VOC --split 1 --setting incre -p 2 --save-folder weights/fewshot/incre/VOC_split1_5shot --resume

Note:

  • --resume: load the model from the last checkpoint in the --save-folder directory.

If you would like to manually specify the model path, use --load-file path/to/model.pth instead of --resume.

 

Should you have any questions regarding this repo, feel free to email me at [email protected].
