
shenyunhang / GAL-fWSD

Licence: other
Generative Adversarial Learning Towards Fast Weakly Supervised Detection

Projects that are alternatives of or similar to GAL-fWSD

concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (+127.78%)
Mutual labels:  weakly-supervised-learning
trove
Weakly supervised medical named entity classification
Stars: ✭ 55 (+205.56%)
Mutual labels:  weakly-supervised-learning
Awesome-Weak-Shot-Learning
A curated list of papers, code and resources pertaining to weak-shot classification, detection, and segmentation.
Stars: ✭ 142 (+688.89%)
Mutual labels:  weakly-supervised-learning
MLIC-KD-WSD
Multi-Label Image Classification via Knowledge Distillation from Weakly-Supervised Detection (ACM MM 2018)
Stars: ✭ 58 (+222.22%)
Mutual labels:  weakly-supervised-detection
WeSHClass
[AAAI 2019] Weakly-Supervised Hierarchical Text Classification
Stars: ✭ 83 (+361.11%)
Mutual labels:  weakly-supervised-learning
knodle
A PyTorch-based open-source framework that provides methods for improving weakly annotated data and allows researchers to efficiently develop and compare their own methods.
Stars: ✭ 76 (+322.22%)
Mutual labels:  weakly-supervised-learning
Advances-in-Label-Noise-Learning
A curated (most recent) list of resources for Learning with Noisy Labels
Stars: ✭ 360 (+1900%)
Mutual labels:  weakly-supervised-learning
Learning-Action-Completeness-from-Points
Official Pytorch Implementation of 'Learning Action Completeness from Points for Weakly-supervised Temporal Action Localization' (ICCV-21 Oral)
Stars: ✭ 53 (+194.44%)
Mutual labels:  weakly-supervised-learning
weasel
Weakly Supervised End-to-End Learning (NeurIPS 2021)
Stars: ✭ 117 (+550%)
Mutual labels:  weakly-supervised-learning
dcsp segmentation
No description or website provided.
Stars: ✭ 34 (+88.89%)
Mutual labels:  weakly-supervised-learning
WS3D
Official version of 'Weakly Supervised 3D object detection from Lidar Point Cloud'(ECCV2020)
Stars: ✭ 104 (+477.78%)
Mutual labels:  weakly-supervised-learning
WSDEC
Weakly Supervised Dense Event Captioning in Videos, i.e. generating multiple sentence descriptions for a video in a weakly-supervised manner.
Stars: ✭ 95 (+427.78%)
Mutual labels:  weakly-supervised-learning
SPML
Universal Weakly Supervised Segmentation by Pixel-to-Segment Contrastive Learning
Stars: ✭ 81 (+350%)
Mutual labels:  weakly-supervised-learning
Learning-From-Rules
Implementation of experiments in paper "Learning from Rules Generalizing Labeled Exemplars" to appear in ICLR2020 (https://openreview.net/forum?id=SkeuexBtDr)
Stars: ✭ 46 (+155.56%)
Mutual labels:  weakly-supervised-learning
Simple-does-it-weakly-supervised-instance-and-semantic-segmentation
Weakly supervised segmentation in TensorFlow. Implements semantic segmentation from "Simple Does It: Weakly Supervised Instance and Semantic Segmentation" by Khoreva et al. (CVPR 2017).
Stars: ✭ 46 (+155.56%)
Mutual labels:  weakly-supervised-learning
WSL4MIS
Scribbles or Points-based weakly-supervised learning for medical image segmentation, a strong baseline, and tutorial for research and application.
Stars: ✭ 100 (+455.56%)
Mutual labels:  weakly-supervised-learning
C2C
Implementation of the "Cluster-to-Conquer: A Framework for End-to-End Multi-Instance Learning for Whole Slide Image Classification" approach.
Stars: ✭ 30 (+66.67%)
Mutual labels:  weakly-supervised-learning
MetaCat
Minimally Supervised Categorization of Text with Metadata (SIGIR'20)
Stars: ✭ 52 (+188.89%)
Mutual labels:  weakly-supervised-learning
TS-CAM
Codes for TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization.
Stars: ✭ 96 (+433.33%)
Mutual labels:  weakly-supervised-learning
deviation-network
Source code of the KDD19 paper "Deep anomaly detection with deviation networks", weakly/partially supervised anomaly detection, few-shot anomaly detection
Stars: ✭ 94 (+422.22%)
Mutual labels:  weakly-supervised-learning

Generative Adversarial Learning Towards Fast Weakly Supervised Detection

By Yunhang Shen, Rongrong Ji, Shengchuan Zhang, Wangmeng Zuo, Yan Wang.

CVPR 2018 Paper.

Citing GAL-fWSD

If you find GAL-fWSD useful in your research, please consider citing:

@InProceedings{GAL-fWSD_2018_CVPR,
author = {Shen, Yunhang and Ji, Rongrong and Zhang, Shengchuan and Zuo, Wangmeng and Wang, Yan},
title = {Generative Adversarial Learning Towards Fast Weakly Supervised Detection},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2018}
}

Installation

Clone and install the CSC repository.
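
A minimal sketch of the setup, assuming the CSC repository is hosted at github.com/shenyunhang/CSC (the URL and build steps below are assumptions; follow the CSC repository's own README for the authoritative instructions):

git clone https://github.com/shenyunhang/CSC.git   # add --recursive if the repository uses submodules
cd CSC
# Build CSC as described in its own installation guide, then point CSC_ROOT
# at the checkout so the training scripts below can locate it:
export CSC_ROOT=$(pwd)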

Usage

Note that GAL-fWSD in the CSC repository does not contain the discriminator described in the paper.

Although this makes training less stable, it still approximates the performance reported in the paper and speeds up the training stage significantly.

To train and test a GAL-fWSD detector, use experiments/scripts/gan_300.sh or experiments/scripts/gan_512.sh. Output is written underneath $CSC_ROOT/output.

cd $CSC_ROOT
./experiments/scripts/gan_300.sh [GPU_ID] [NET] [DATASET] [--set ...]
# GPU_ID is the GPU you want to train on
# NET in {VGG_CNN_F, VGG_CNN_M_1024, VGG16} is the network arch to use
# DATASET in {pascal_voc, pascal_voc10, pascal_voc12, pascal_voc07+12, coco}
# --set ... allows you to specify configure options, e.g.
#   --set EXP_DIR seed_rng1701 RNG_SEED 1701

Example:

./experiments/scripts/gan_300.sh 0 VGG16 pascal_voc --set EXP_DIR gan_300

This approximately reproduces the VGG16 result reported in the paper.
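
The higher-resolution configuration should follow the same pattern with gan_512.sh; the invocation below is a sketch assuming it accepts the same arguments as gan_300.sh listed above:

./experiments/scripts/gan_512.sh 0 VGG16 pascal_voc --set EXP_DIR gan_512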

Trained GAL-fWSD networks are saved under:

output/<experiment directory>/<dataset name>/

Test outputs are saved under:

output/<experiment directory>/<dataset name>/<network snapshot name>/
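
A quick sketch for locating the results of the example run above (the directory names are assumptions based on the EXP_DIR value used; the per-dataset subdirectory name depends on your dataset configuration):

cd $CSC_ROOT
ls output/gan_300/        # experiment directory set via --set EXP_DIR gan_300
ls output/gan_300/*/      # one subdirectory per dataset, holding trained snapshots
ls output/gan_300/*/*/    # test outputs, grouped by network snapshot name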