GewelsJI / SINet-V2

License: Apache-2.0
Concealed Object Detection (SINet-V2, IEEE TPAMI 2022). Code using the Jittor framework is available.


Concealed Object Detection (IEEE TPAMI)

PyTorch implementation of our extended model, termed the Search and Identification Network (SINet-V2).

Authors: Deng-Ping Fan, Ge-Peng Ji, Ming-Ming Cheng & Ling Shao.

1. Features

  • Introduction. This repository contains the source code, prediction results, and evaluation toolbox of our Search and Identification Network, also called SINet-V2 (arXiv / SuppMaterial / ProjectPage), which is the journal extension of our conference paper SINet (github / pdf), published at CVPR-2020.

  • Highlights. Compared to our conference version, we achieve a new SOTA in the field of COD via two well-elaborated sub-modules: the neighbor connection decoder (NCD) and group-reversal attention (GRA). Please refer to our paper for more details; a simplified sketch of the reverse-attention idea behind GRA is shown below.
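
As a rough illustration only, the following PyTorch sketch shows the reverse-attention idea that GRA builds on: the coarse prediction from a deeper stage is inverted (1 - sigmoid) so the refinement branch focuses on the regions the current prediction misses, and the features are combined with this reverse map group by group. The module name, group count, and the single 3x3 refinement convolution are simplifications for illustration, not the repository's exact implementation; please refer to the paper and code for the real GRA design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttentionSketch(nn.Module):
    """Simplified group-wise reverse attention (illustrative, NOT the official GRA)."""

    def __init__(self, channels, groups=4):
        super().__init__()
        self.groups = groups
        # each group is concatenated with one reverse-attention channel,
        # so the refinement conv sees (channels + groups) input channels
        self.refine = nn.Conv2d(channels + groups, 1, kernel_size=3, padding=1)

    def forward(self, feat, coarse_pred):
        # coarse_pred: 1-channel logits predicted by a deeper (coarser) stage
        coarse_pred = F.interpolate(coarse_pred, size=feat.shape[2:],
                                    mode='bilinear', align_corners=False)
        reverse = 1.0 - torch.sigmoid(coarse_pred)        # attend to what is still missed
        chunks = torch.chunk(feat, self.groups, dim=1)    # split features into groups
        grouped = torch.cat([torch.cat([c, reverse], dim=1) for c in chunks], dim=1)
        residual = self.refine(grouped)                   # residual prediction
        return coarse_pred + residual                     # refined 1-channel logits
```

In the paper, several such refinement steps are cascaded from the deepest to the shallowest stage, each one sharpening the previous prediction.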

If you have any questions about our paper, feel free to contact me via e-mail ([email protected]). If you use our code and evaluation toolbox in your research, please cite this paper (BibTeX).

2. 🔥 NEWS 🔥

  • [2021/12/26] 🔥 Our < Concealed Object Detection > paper won the Excellent Jittor Paper Award (优秀计图论文奖) at the Jittor Developer Conference 2021.
  • [2021/12/14] 🔥 Congratulations to Prof. Keren Fu's team at Sichuan University: their project "A General Defect Detection Model for Industrial Quality Inspection", which uses SINet-V2 as its segmentation baseline, achieved excellent results in the "CITIC Bank Cup" 3rd China Graduate Artificial Intelligence Innovation Competition, ranking first overall on the enterprise-track topic in the preliminary round and winning the second prize in the final.
  • [2021/10/10] Delivering a spotlight presentation, "Camouflaged Object Detection Techniques and Applications" (伪装目标检测技术及其应用), at VALSE 2021. The poster file can be found at link (paper id-31).
  • [2021/10/09] Note that there are two images (COD10K-CAM-1-Aquatic-3-Crab-32.png and COD10K-CAM-2-Terrestrial-23-Cat-1506.png) that overlap between the training and test set of COD10K. You can either keep or discard those two images because they only slightly affect the final performance (~0.1% changes in terms of different metrics).
  • [2021/07/20] Talk at HUAWEI 藤蔓技术论坛 2021: "Camouflaged Object Detection Techniques and Applications" (伪装目标检测技术与应用), speaker: Deng-Ping Fan, 2021. (PPT download)
  • [2021/06/16] Updating the latest download links (PyTorch / Jittor) for the four testing datasets: CHAMELEON, CAMO, COD10K, and NC4K.
  • [2021/06/11] 🔥 Coverage by the 「图形与几何计算」 WeChat account: "Jittor open source: the new concealed object detection task gains a large inference speed-up under the Jittor framework".
  • [2021/06/05] The Jittor conversion of SINet-V2 (inference code) is now available. It offers improved inference efficiency compared to the PyTorch version; please enjoy it. Many thanks to Yu-Cheng Chou for the excellent conversion from the PyTorch framework.
  • [2021/06/01] 🔥 Our TPAMI paper is now available via early access on IEEE Xplore.
  • [2021/05/18] Sharing a video talk, "Camouflaged Object Detection: Challenges, Methods, and Applications" (伪装目标检测:挑战、方法和应用), in the 机器之心 (Synced) "Approaching the World's Top Labs" series (link).
  • [2021/05/16] Jittor code will come soon ...
  • [2021/05/01] Updating the download links of the training/testing datasets used in our experiments.
  • [2021/04/20] Releasing the inference maps on the CVPR-2021 NC4K test dataset, which can be downloaded from Google Drive.
  • [2021/02/21] Uploading the whole project.
  • [2021/01/16] Creating repository.

3. Overview


Figure 1: Task relationship. One of the most popular directions in computer vision is generic object detection. Note that generic objects can be either salient or camouflaged; camouflaged objects can be seen as difficult cases of generic objects. Typical generic object detection tasks include semantic segmentation and panoptic segmentation (see Fig. 2 b).


Figure 2: Given an input image (a), we present the ground-truth for (b) panoptic segmentation (which detects generic objects including stuff and things), (c) salient instance/object detection (which detects objects that grasp human attention), and (d) the proposed camouflaged object detection task, where the goal is to detect objects that have a similar pattern (e.g., edge, texture, or color) to the natural habitat. In this case, the boundaries of the two butterflies are blended with the bananas, making them difficult to identify. This task is far more challenging than the traditional salient object detection or generic object detection.

References of Salient Object Detection (SOD) benchmark works
[1] Video SOD: Shifting More Attention to Video Salient Object Detection. CVPR, 2019. (Project Page)
[2] RGB SOD: Salient Objects in Clutter: Bringing Salient Object Detection to the Foreground. ECCV, 2018. (Project Page)
[3] RGB-D SOD: Rethinking RGB-D Salient Object Detection: Models, Datasets, and Large-Scale Benchmarks. TNNLS, 2020. (Project Page)
[4] Co-SOD: Taking a Deeper Look at the Co-salient Object Detection. CVPR, 2020. (Project Page)

4. Proposed Framework

4.1. Training/Testing

The training and testing experiments were conducted in PyTorch on a single NVIDIA TITAN RTX GPU with 24 GB of memory.

Note that our model also runs on GPUs with less memory; simply lower the batch size.

  1. Prerequisites:

    Note that SINet-V2 has only been tested on Ubuntu with the following environment. It may also work on other operating systems (e.g., Windows), but we do not guarantee that it will.

    • Creating a virtual environment in the terminal: conda create -n SINet python=3.6.

    • Installing the necessary packages: PyTorch > 1.1 and opencv-python.

  2. Prepare the data:

    • Downloading the testing dataset and moving it into ./Dataset/TestDataset/; it can be found on Google Drive.

    • Downloading the training/validation dataset and moving it into ./Dataset/TrainValDataset/; it can be found on Google Drive.

    • Downloading the pretrained weights and moving them into ./snapshot/SINet_V2/Net_epoch_best.pth; they can be found at this download link (Google Drive).

    • Downloading the Res2Net weights pretrained on the ImageNet dataset: download link (Google Drive).

  3. Training Configuration:

    • Assigning your customized paths, e.g., --train_save and --train_path in MyTrain_Val.py.

    • Just enjoy it by running python MyTrain_Val.py in your terminal.

  4. Testing Configuration:

    • After downloading the pre-trained model and testing datasets, just run MyTesting.py to generate the final prediction maps, pointing --pth_path to your trained model. A minimal sketch of such an inference loop is given after this list.

    • Just enjoy it!
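
For readers who prefer to script inference directly, here is a minimal, illustrative PyTorch loop in the spirit of MyTesting.py. It is not the repository's script: the save_prediction_maps name, the 352x352 test size, the plain 0-1 scaling (no ImageNet normalization), and the assumption that the network returns 1-channel logits are all placeholders, so follow MyTesting.py for the exact pre- and post-processing.

```python
# Illustrative sketch only; the official inference script is MyTesting.py.
import os

import cv2
import numpy as np
import torch
import torch.nn.functional as F


@torch.no_grad()
def save_prediction_maps(model, image_dir, save_dir, testsize=352, device=None):
    """Run `model` on every image in `image_dir` and write grayscale maps to `save_dir`.

    `model` is any torch.nn.Module mapping a (1, 3, H, W) image tensor to
    1-channel logits (or a list of multi-stage logits, the last one finest).
    """
    device = device or ('cuda' if torch.cuda.is_available() else 'cpu')
    model = model.to(device).eval()
    os.makedirs(save_dir, exist_ok=True)

    for name in sorted(os.listdir(image_dir)):
        bgr = cv2.imread(os.path.join(image_dir, name))
        if bgr is None:                                   # skip non-image files
            continue
        h, w = bgr.shape[:2]
        rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
        x = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)
        x = F.interpolate(x, size=(testsize, testsize),
                          mode='bilinear', align_corners=False).to(device)

        pred = model(x)
        if isinstance(pred, (list, tuple)):               # keep the last (finest) output
            pred = pred[-1]
        pred = torch.sigmoid(F.interpolate(pred, size=(h, w),
                                           mode='bilinear', align_corners=False))
        out = (pred.squeeze().cpu().numpy() * 255).astype(np.uint8)
        cv2.imwrite(os.path.join(save_dir, name), out)

# Illustrative usage (paths follow this README; build the network as in MyTesting.py):
#   net = ...  # SINet-V2 network from the repository
#   net.load_state_dict(torch.load('./snapshot/SINet_V2/Net_epoch_best.pth', map_location='cpu'))
#   save_prediction_maps(net, './Dataset/TestDataset/CAMO/Imgs', './res/SINet_V2/CAMO')
```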

4.2. Evaluating your trained model:

One-key evaluation is written in MATLAB code (link). Please follow the instructions in ./eval/main.m and just run it to generate the evaluation results in ./res/. The complete evaluation toolbox (including data, map, eval code, and res): link.
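
The MATLAB toolbox is the reference implementation for the reported metrics. As a quick sanity check that does not require MATLAB, the sketch below computes one of the standard metrics, mean absolute error (MAE), over a folder of prediction maps and ground-truth masks; the sub-folder names in the usage comment are only assumptions, and ./eval/main.m remains the source of the official numbers.

```python
# Quick sanity check only; the MATLAB toolbox (./eval/main.m) is the official evaluation.
import os

import cv2
import numpy as np


def mean_absolute_error(pred_dir, gt_dir):
    """Average per-pixel |prediction - ground truth| over all matching filenames."""
    errors = []
    for name in sorted(os.listdir(gt_dir)):
        gt = cv2.imread(os.path.join(gt_dir, name), cv2.IMREAD_GRAYSCALE)
        pred = cv2.imread(os.path.join(pred_dir, name), cv2.IMREAD_GRAYSCALE)
        if gt is None or pred is None:                    # skip unmatched / non-image files
            continue
        if pred.shape != gt.shape:                        # maps may be saved at another resolution
            pred = cv2.resize(pred, (gt.shape[1], gt.shape[0]))
        errors.append(np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean() / 255.0)
    return float(np.mean(errors))

# Illustrative usage (hypothetical sub-folder names):
#   mae = mean_absolute_error('./res/SINet_V2/COD10K', './Dataset/TestDataset/COD10K/GT')
```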

4.3. Pre-computed maps:

They can be found via the download links (PyTorch / Jittor) for the four testing datasets: CHAMELEON, CAMO, COD10K, and NC4K.

5. Citation

If you find this project useful, please consider citing:

@article{fan2021concealed,
  title={Concealed Object Detection},
  author={Fan, Deng-Ping and Ji, Ge-Peng and Cheng, Ming-Ming and Shao, Ling},
  journal={IEEE TPAMI},
  year={2022}
}

@inproceedings{fan2020camouflaged,
  title={Camouflaged object detection},
  author={Fan, Deng-Ping and Ji, Ge-Peng and Sun, Guolei and Cheng, Ming-Ming and Shen, Jianbing and Shao, Ling},
  booktitle={IEEE CVPR},
  pages={2777--2787},
  year={2020}
}

6. FAQ

  1. If the images on this page cannot be loaded (mostly under domestic network conditions).

    Solution Link

7. License

The source code is free for research and educational use only. Any commercial use requires formal permission first.

