pengzhiliang / Conformer

License: Apache-2.0
Official code for Conformer: Local Features Coupling Global Representations for Visual Recognition

Programming Languages

Jupyter Notebook
11667 projects
Python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Conformer

HRFormer
This is an official implementation of our NeurIPS 2021 paper "HRFormer: High-Resolution Transformer for Dense Prediction".
Stars: ✭ 357 (+3.48%)
Mutual labels:  transformer, classification
verseagility
Ramp up your custom natural language processing (NLP) task, allowing you to bring your own data, use your preferred frameworks and bring models into production.
Stars: ✭ 23 (-93.33%)
Mutual labels:  transformer, classification
Text Classification Models Pytorch
Implementation of State-of-the-art Text Classification Models in Pytorch
Stars: ✭ 379 (+9.86%)
Mutual labels:  transformer, classification
COVID-19-Tweet-Classification-using-Roberta-and-Bert-Simple-Transformers
Rank 1 / 216
Stars: ✭ 24 (-93.04%)
Mutual labels:  transformer, classification
well-classified-examples-are-underestimated
Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks"
Stars: ✭ 21 (-93.91%)
Mutual labels:  transformer, classification
Demo Chinese Text Binary Classification With Bert
Stars: ✭ 276 (-20%)
Mutual labels:  transformer, classification
Nlp research
NLP research: TensorFlow-based NLP deep learning projects supporting four major tasks: text classification, sentence matching, sequence labeling, and text generation
Stars: ✭ 141 (-59.13%)
Mutual labels:  transformer, classification
paccmann proteomics
PaccMann models for protein language modeling
Stars: ✭ 28 (-91.88%)
Mutual labels:  transformer
Transformer-Transducer
PyTorch implementation of "Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss" (ICASSP 2020)
Stars: ✭ 61 (-82.32%)
Mutual labels:  transformer
ICON
(TPAMI2022) Salient Object Detection via Integrity Learning.
Stars: ✭ 125 (-63.77%)
Mutual labels:  transformer
SegFormer
Official PyTorch implementation of SegFormer
Stars: ✭ 1,264 (+266.38%)
Mutual labels:  transformer
set-transformer
A neural network architecture for prediction on sets
Stars: ✭ 18 (-94.78%)
Mutual labels:  transformer
Relation-Classification
Relation Classification - SEMEVAL 2010 task 8 dataset
Stars: ✭ 46 (-86.67%)
Mutual labels:  classification
Kevinpro-NLP-demo
All the NLP you need, here. A personal collection of fun NLP demos, currently including PyTorch implementations of 13 NLP applications
Stars: ✭ 117 (-66.09%)
Mutual labels:  transformer
vita
Vita - Genetic Programming Framework
Stars: ✭ 24 (-93.04%)
Mutual labels:  classification
NLP-paper
🎨 NLP (natural language processing) tutorials 🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-93.33%)
Mutual labels:  transformer
embeddings
Embeddings: state-of-the-art text representations for natural language processing tasks; an initial version of the library focuses on the Polish language
Stars: ✭ 27 (-92.17%)
Mutual labels:  classification
Context-Transformer
Context-Transformer: Tackling Object Confusion for Few-Shot Detection, AAAI 2020
Stars: ✭ 89 (-74.2%)
Mutual labels:  transformer
awesome-text-classification
Text classification meets word embeddings.
Stars: ✭ 27 (-92.17%)
Mutual labels:  classification
cnn-rnn-classifier
A practical example of how to combine a CNN and an RNN to classify images.
Stars: ✭ 47 (-86.38%)
Mutual labels:  classification

Conformer: Local Features Coupling Global Representations for Visual Recognition

Accepted to ICCV 2021!

This repository is built upon DeiT, timm, and mmdetection.

Introduction

Within a Convolutional Neural Network (CNN), convolution operations are good at extracting local features but have difficulty capturing global representations. Within a visual transformer, the cascaded self-attention modules can capture long-distance feature dependencies but unfortunately deteriorate local feature details. In this paper, we propose a hybrid network structure, termed Conformer, which takes advantage of both convolutional operations and self-attention mechanisms for enhanced representation learning. Conformer is rooted in the Feature Coupling Unit (FCU), which fuses local features and global representations at different resolutions in an interactive fashion. Conformer adopts a concurrent structure so that local features and global representations are retained to the maximum extent. Experiments show that, under comparable parameter complexity, Conformer outperforms the visual transformer (DeiT-B) by 2.3% on ImageNet. On MSCOCO, it outperforms ResNet-101 by 3.7% and 3.6% mAP for object detection and instance segmentation, respectively, demonstrating its great potential as a general backbone network.
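
To make the coupling idea concrete, the snippet below is a minimal, illustrative PyTorch sketch, not the repository's FCU implementation; the class name ToyFeatureCouplingUnit, the 14x14 patch grid, and the 1x1-convolution-plus-pooling/up-sampling alignment are assumptions made for clarity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyFeatureCouplingUnit(nn.Module):
    """Illustrative sketch only: exchanges information between a CNN feature map
    and transformer patch tokens that live at different resolutions."""
    def __init__(self, cnn_channels=256, embed_dim=384, patch_hw=14):
        super().__init__()
        self.patch_hw = patch_hw
        self.cnn_to_token = nn.Conv2d(cnn_channels, embed_dim, kernel_size=1)
        self.token_to_cnn = nn.Conv2d(embed_dim, cnn_channels, kernel_size=1)

    def forward(self, cnn_feat, tokens):
        # cnn_feat: (B, C, H, W); tokens: (B, N, D) with N = patch_hw * patch_hw
        B, C, H, W = cnn_feat.shape
        # CNN -> transformer: pool the feature map to the patch grid, project, flatten to tokens.
        pooled = F.adaptive_avg_pool2d(cnn_feat, self.patch_hw)
        fused_tokens = tokens + self.cnn_to_token(pooled).flatten(2).transpose(1, 2)
        # Transformer -> CNN: reshape tokens to a grid, project, up-sample back to (H, W).
        grid = fused_tokens.transpose(1, 2).reshape(B, -1, self.patch_hw, self.patch_hw)
        fused_cnn = cnn_feat + F.interpolate(self.token_to_cnn(grid), size=(H, W),
                                             mode="bilinear", align_corners=False)
        return fused_cnn, fused_tokens

fcu = ToyFeatureCouplingUnit()
cnn_feat, tokens = torch.randn(2, 256, 56, 56), torch.randn(2, 14 * 14, 384)
fused_cnn, fused_tokens = fcu(cnn_feat, tokens)
print(fused_cnn.shape, fused_tokens.shape)  # (2, 256, 56, 56) and (2, 196, 384)

Both branches keep their own resolution and interact only through this coupling step, which reflects the concurrent structure described above.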

The basic architecture of Conformer is shown as follows:

We also compare the feature maps of a CNN (ResNet-101), a visual transformer (DeiT-S), and the proposed Conformer, as follows. The patch embeddings of the transformer are reshaped to feature maps for visualization. While the CNN activates discriminative local regions (e.g., the peacock's head in (a) and tail in (e)), the CNN branch of Conformer takes advantage of global cues from the visual transformer and thereby activates the complete object (e.g., the full extent of the peacock in (b) and (f)). Compared with the CNN, local feature details of the visual transformer are deteriorated (e.g., (c) and (g)). In contrast, the transformer branch of Conformer retains the local feature details from the CNN while suppressing the background (e.g., the peacock contours in (d) and (h) are more complete than those in (c) and (g)).
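
The token-to-feature-map reshaping mentioned above can be sketched in a few lines of PyTorch; the 14x14 grid, embedding dimension, and leading class token below are assumptions for illustration and may differ from the repository's visualization code.

import torch

tokens = torch.randn(1, 1 + 14 * 14, 384)      # (batch, class token + patches, dim); assumed shape
patch_tokens = tokens[:, 1:, :]                 # drop the class token before reshaping
feature_map = patch_tokens.transpose(1, 2).reshape(1, 384, 14, 14)
saliency = feature_map.abs().mean(dim=1)        # (1, 14, 14) map suitable for plotting
print(saliency.shape)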

Getting started

Install

First, install PyTorch 1.7.0+, torchvision 0.8.1+, and pytorch-image-models (timm) 0.3.2:

conda install -c pytorch pytorch torchvision
pip install timm==0.3.2
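
A quick way to confirm the environment (a minimal check, assuming the three packages above are installed) is to run the following in Python:

import torch
import torchvision
import timm

# The instructions above target PyTorch 1.7.0+, torchvision 0.8.1+, and timm 0.3.2.
print(torch.__version__, torchvision.__version__, timm.__version__)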

Data preparation

Download and extract the ImageNet train and val images from http://image-net.org/. The directory structure is the standard layout expected by torchvision's datasets.ImageFolder, with the training and validation data in the train/ and val/ folders, respectively (a short loading sketch follows the layout below):

/path/to/imagenet/
  train/
    class1/
      img1.jpeg
    class2/
      img2.jpeg
  val/
    class1/
      img3.jpeg
    class2/
      img4.jpeg
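
The sketch below illustrates how torchvision's datasets.ImageFolder reads this layout; the path and transform are placeholders rather than the settings used by main.py.

from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
# "/path/to/imagenet" is a placeholder for the directory passed to --data-path.
val_dataset = datasets.ImageFolder("/path/to/imagenet/val", transform=transform)
print(len(val_dataset.classes), "classes,", len(val_dataset), "images")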

Training and test

Training

To train Conformer-S on ImageNet on a single node with 8 GPUs for 300 epochs, run the following (with --batch-size 128 per GPU, the effective batch size is 8 x 128 = 1024, matching the lr 1e-3 setting encoded in the output directory name):

export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
OUTPUT='./output/Conformer_small_patch16_batch_1024_lr1e-3_300epochs'

python -m torch.distributed.launch --master_port 50130 --nproc_per_node=8 --use_env main.py \
                                   --model Conformer_small_patch16 \
                                   --data-set IMNET \
                                   --batch-size 128 \
                                   --lr 0.001 \
                                   --num_workers 4 \
                                   --data-path /data/user/Dataset/ImageNet_ILSVRC2012/ \
                                   --output_dir ${OUTPUT} \
                                   --epochs 300

Test

To test Conformer-S on ImageNet on a single GPU, run:

CUDA_VISIBLE_DEVICES=0 python main.py --model Conformer_small_patch16 --eval --batch-size 64 \
                --input-size 224 \
                --data-set IMNET \
                --num_workers 4 \
                --data-path /data/user/Dataset/ImageNet_ILSVRC2012/ \
                --epochs 100 \
                --resume ../Conformer_small_patch16.pth

Model zoo

Model         Parameters   MACs     Top-1 Acc   Link
Conformer-Ti  23.5 M       5.2 G    81.3 %      baidu (code: hzhm), google
Conformer-S   37.7 M       10.6 G   83.4 %      baidu (code: qvu8), google
Conformer-B   83.3 M       23.3 G   84.1 %      baidu (code: b4z9), google
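
For evaluation outside of main.py, the sketch below shows one way to load a downloaded checkpoint; it assumes Conformer_small_patch16 is registered with timm's create_model by a repository module (here called models, a hypothetical import) and that the checkpoint stores its weights under a "model" key, so adjust to the actual code if these assumptions do not hold.

import torch
from timm.models import create_model

import models  # hypothetical: the repository module that registers the Conformer variants

model = create_model("Conformer_small_patch16", pretrained=False, num_classes=1000)
ckpt = torch.load("Conformer_small_patch16.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)  # some checkpoints wrap the weights under a "model" key
model.load_state_dict(state_dict)
model.eval()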

Citation

@article{peng2021conformer,
      title={Conformer: Local Features Coupling Global Representations for Visual Recognition}, 
      author={Zhiliang Peng and Wei Huang and Shanzhi Gu and Lingxi Xie and Yaowei Wang and Jianbin Jiao and Qixiang Ye},
      journal={arXiv preprint arXiv:2105.03889},
      year={2021},
}