
akshitac8 / tfvaegan

License: MIT
[ECCV 2020] Official PyTorch implementation of "Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification". State-of-the-art results for ZSL and GZSL.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to tfvaegan

Generative MLZSL
[TPAMI Under Submission] Generative Multi-Label Zero-Shot Learning
Stars: ✭ 37 (-65.42%)
Mutual labels:  pytorch-implementation, gzsl, zsl, clswgan
SOLAR
PyTorch code for "SOLAR: Second-Order Loss and Attention for Image Retrieval". In ECCV 2020
Stars: ✭ 150 (+40.19%)
Mutual labels:  eccv2020, eccv-2020
SPAN
Semantics-guided Part Attention Network (ECCV 2020 Oral)
Stars: ✭ 19 (-82.24%)
Mutual labels:  pytorch-implementation, eccv2020
ailia-models
The collection of pre-trained, state-of-the-art AI models for ailia SDK
Stars: ✭ 1,102 (+929.91%)
Mutual labels:  image-classification, action-recognition
ICCV2021-Paper-Code-Interpretation
A collection of ICCV 2021/2019/2017 papers, code, interpretations, and live sessions, compiled by the Jishi (极市) team
Stars: ✭ 2,022 (+1789.72%)
Mutual labels:  image-classification, action-recognition
gzsl-od
Out-of-Distribution Detection for Generalized Zero-Shot Action Recognition
Stars: ✭ 47 (-56.07%)
Mutual labels:  action-recognition, zero-shot-learning
JSTASR-DesnowNet-ECCV-2020
This is the project page of our paper which has been published in ECCV 2020.
Stars: ✭ 17 (-84.11%)
Mutual labels:  eccv2020, eccv-2020
synse-zsl
Official PyTorch code for the ICIP 2021 paper 'Syntactically Guided Generative Embeddings For Zero Shot Skeleton Action Recognition'
Stars: ✭ 14 (-86.92%)
Mutual labels:  action-recognition, zero-shot-learning
Gluon Cv
Gluon CV Toolkit
Stars: ✭ 5,001 (+4573.83%)
Mutual labels:  image-classification, action-recognition
ResNet-50-CBAM-PyTorch
Implementation of Resnet-50 with and without CBAM in PyTorch v1.8. Implementation tested on Intel Image Classification dataset from https://www.kaggle.com/puneet6060/intel-image-classification.
Stars: ✭ 31 (-71.03%)
Mutual labels:  image-classification, pytorch-implementation
Xception-with-Your-Own-Dataset
Easy-to-use scripts for training and inferencing with Xception on your own dataset
Stars: ✭ 51 (-52.34%)
Mutual labels:  image-classification
Openpose-based-GUI-for-Realtime-Pose-Estimate-and-Action-Recognition
GUI based on the python api of openpose in windows using cuda10 and cudnn7. Support body , hand, face keypoints estimation and data saving. Realtime gesture recognition is realized through two-layer neural network based on the skeleton collected from the gui.
Stars: ✭ 69 (-35.51%)
Mutual labels:  action-recognition
DCAN
[AAAI 2020] Code release for "Domain Conditioned Adaptation Network" https://arxiv.org/abs/2005.06717
Stars: ✭ 27 (-74.77%)
Mutual labels:  pytorch-implementation
Squeeze-and-Recursion-Temporal-Gates
Code for : [Pattern Recognit. Lett. 2021] "Learn to cycle: Time-consistent feature discovery for action recognition" and [IJCNN 2021] "Multi-Temporal Convolutions for Human Action Recognition in Videos".
Stars: ✭ 62 (-42.06%)
Mutual labels:  action-recognition
Zero-Shot-TTS
Unofficial Implementation of Zero-Shot Text-to-Speech for Text-Based Insertion in Audio Narration
Stars: ✭ 33 (-69.16%)
Mutual labels:  zero-shot-learning
image features
Extract deep learning features from images using simple python interface
Stars: ✭ 84 (-21.5%)
Mutual labels:  image-classification
BottleneckTransformers
Bottleneck Transformers for Visual Recognition
Stars: ✭ 231 (+115.89%)
Mutual labels:  image-classification
WS3D
Official version of 'Weakly Supervised 3D object detection from Lidar Point Cloud'(ECCV2020)
Stars: ✭ 104 (-2.8%)
Mutual labels:  eccv2020
stackml-js
Machine Learning platform in-browser for creators
Stars: ✭ 34 (-68.22%)
Mutual labels:  image-classification
PyTrx
PyTrx is a Python object-oriented programme created for the purpose of calculating real-world measurements from oblique images and time-lapse image series. Its primary purpose is to obtain velocities, surface areas, and distances from oblique, optical imagery of glacial environments.
Stars: ✭ 31 (-71.03%)
Mutual labels:  image-classification


Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification (ECCV 2020)

Sanath Narayan*, Akshita Gupta*, Fahad Shahbaz Khan, Cees G. M. Snoek, Ling Shao

(* denotes equal contribution)

Paper: https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123670477.pdf

Video Presentation: Short summary, Overview

Finetuned features: https://drive.google.com/drive/folders/13-eyljOmGwVRUzfMZIf_19HmCj1yShf1?usp=sharing

Webpage: https://akshitac8.github.io/tfvaegan/

Zero-shot learning strives to classify unseen categories for which no data is available during training. In the generalized variant, the test samples can further belong to seen or unseen categories. The state-of-the-art relies on Generative Adversarial Networks that synthesize unseen class features by leveraging class-specific semantic embeddings. During training, they generate semantically consistent features, but discard this constraint during feature synthesis and classification. We propose to enforce semantic consistency at all stages of (generalized) zero-shot learning: training, feature synthesis and classification. We first introduce a feedback loop, from a semantic embedding decoder, that iteratively refines the generated features during both the training and feature synthesis stages. The synthesized features together with their corresponding latent embeddings from the decoder are then transformed into discriminative features and utilized during classification to reduce ambiguities among categories. Experiments on (generalized) zero-shot object and action classification reveal the benefit of semantic consistency and iterative feedback, outperforming existing methods on six zero-shot learning benchmarks.
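The feedback loop described above can be pictured in a few lines of PyTorch. The sketch below is only an illustration of the idea, not the released code: the module names, layer sizes and the feedback scale (alpha = 1.0) are assumptions.

import torch
import torch.nn as nn

# Illustrative only: a generator synthesizes a feature from noise + class attribute, a semantic
# decoder maps the feature back to attribute space, and a feedback module converts the decoder's
# latent representation into an additive correction for a second generator pass.

class Generator(nn.Module):
    def __init__(self, attr_dim=85, noise_dim=85, hidden_dim=4096, feat_dim=2048):
        super().__init__()
        self.fc1 = nn.Linear(attr_dim + noise_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, feat_dim)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, noise, attr, feedback=None):
        h = self.act(self.fc1(torch.cat([noise, attr], dim=1)))
        if feedback is not None:  # second pass: apply the feedback correction
            h = h + feedback
        return torch.sigmoid(self.fc2(h))

class SemanticDecoder(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=4096, attr_dim=85):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, attr_dim)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, feat):
        latent = self.act(self.fc1(feat))  # latent embedding consumed by the feedback module
        return self.fc2(latent), latent

class Feedback(nn.Module):
    def __init__(self, hidden_dim=4096):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, decoder_latent):
        return self.fc(decoder_latent)

# One feedback iteration during feature synthesis:
G, Dec, F = Generator(), SemanticDecoder(), Feedback()
noise, attr = torch.randn(8, 85), torch.randn(8, 85)
x_hat = G(noise, attr)                                      # initial synthesized feature
_, latent = Dec(x_hat)                                      # decoder latent for that feature
x_hat_refined = G(noise, attr, feedback=1.0 * F(latent))    # refined feature after feedback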

Overall Architecture:

Figure: Overall framework of TF-VAEGAN.

  • A feedback module that utilizes the auxiliary decoder during both the training and feature-synthesis stages to improve the semantic quality of the synthesized features.

  • A discriminative feature transformation that utilizes the auxiliary decoder during the classification stage to enhance zero-shot classification.
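As a rough sketch of the second component (again an illustration rather than the released code, reusing the SemanticDecoder and the sizes assumed in the previous sketch), the final ZSL/GZSL classifier is trained on features concatenated with the decoder's latent embedding instead of the raw features:

import torch
import torch.nn as nn

# Illustrative only: real seen-class features and synthesized unseen-class features are both
# concatenated with the semantic decoder's latent embedding before the final softmax classifier.

feat_dim, latent_dim, num_classes = 2048, 4096, 50   # assumed sizes

def transform(features, decoder):
    """Append the decoder's latent embedding to each feature vector."""
    with torch.no_grad():
        _, latent = decoder(features)   # decoder returns (reconstructed attribute, latent)
    return torch.cat([features, latent], dim=1)

classifier = nn.Linear(feat_dim + latent_dim, num_classes)

# Usage with the SemanticDecoder from the previous sketch; the classifier is then trained with a
# standard cross-entropy loss on transformed seen (real) and unseen (synthesized) features.
decoder = SemanticDecoder()
features = torch.randn(8, feat_dim)
logits = classifier(transform(features, decoder))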

Prerequisites

  • Python 3.6
  • PyTorch 0.3.1
  • torchvision 0.2.0
  • h5py 2.10
  • scikit-learn 0.22.1
  • scipy 1.4.1
  • numpy 1.18.1
  • numpy-base 1.18.1
  • pillow 5.1.0

Installation

The model is built in PyTorch 0.3.1 and tested in an Ubuntu 16.04 environment (Python 3.6, CUDA 9.0, cuDNN 7.5).

To install, follow these instructions:

conda create -n tfvaegan python=3.6
conda activate tfvaegan
pip install https://download.pytorch.org/whl/cu90/torch-0.3.1-cp36-cp36m-linux_x86_64.whl
pip install torchvision==0.2.0 scikit-learn==0.22.1 scipy==1.4.1 h5py==2.10 numpy==1.18.1

Data preparation

Standard ZSL and GZSL datasets

Download CUB, AWA, FLO and SUN features from the drive link shared below.

link: https://drive.google.com/drive/folders/16Xk1eFSWjQTtuQivTogMmvL3P6F_084u?usp=sharing

Download UCF101 and HMDB51 features from the drive link shared below.

link: https://drive.google.com/drive/folders/1pNlnL3LFSkXkJNkTHNYrQ3-Ie4vvewBy?usp=sharing

Extract them in the datasets folder.

Custom datasets

  1. Download the custom dataset images into the datasets folder.
  2. Use a pretrained ResNet-101 as the feature extractor. For example, you can have a look here (a sketch covering steps 2-4 follows after this list).
  3. Extract features with the pretrained ResNet-101 and store them in a dictionary with the keys 'features', 'image_files', and 'labels'.
  4. Save the dictionary in .mat format using:
    import scipy.io as io
    # feat is the dictionary with keys 'features', 'image_files', 'labels'
    io.savemat('temp.mat', feat)
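The sketch below illustrates steps 2-4. It is only an example: the image paths, labels and output file name are placeholders, and it assumes a recent PyTorch/torchvision environment for feature extraction, which can be separate from the PyTorch 0.3.1 training environment.

import numpy as np
import scipy.io as io
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Extract 2048-d ResNet-101 features and save them in the expected dictionary format.
resnet = models.resnet101(pretrained=True)
backbone = torch.nn.Sequential(*list(resnet.children())[:-1])  # drop the classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image_files = ['datasets/custom/img_0001.jpg']    # placeholder image paths
labels = [0]                                      # placeholder integer labels

features = []
with torch.no_grad():
    for path in image_files:
        img = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
        features.append(backbone(img).view(-1).numpy())   # 2048-d feature per image

feat = {
    'features': np.stack(features),                        # (num_images, 2048)
    'image_files': np.array(image_files, dtype=object),
    'labels': np.array(labels),
}
io.savemat('datasets/custom/custom_features.mat', feat)    # placeholder output path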
    

Training

Zero-Shot Image Classification

  1. To train and evaluate ZSL and GZSL models on CUB, AWA, FLO and SUN, please run:
CUB : python scripts/run_cub_tfvaegan.py
AWA : python scripts/run_awa_tfvaegan.py
FLO : python scripts/run_flo_tfvaegan.py
SUN : python scripts/run_sun_tfvaegan.py

Zero-Shot Action Classification

  1. To train and evaluate ZSL and GZSL models on UCF101 and HMDB51, please run:
HMDB51 : python scripts/run_hmdb51_tfvaegan.py
UCF101 : python scripts/run_ucf101_tfvaegan.py

Results

Citation:

If you find this useful, please cite our work as follows:

@inproceedings{narayan2020latent,
	title={Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification},
	author={Narayan, Sanath and Gupta, Akshita and Khan, Fahad Shahbaz and Snoek, Cees GM and Shao, Ling},
	booktitle={ECCV},
	year={2020}
}