
IBM / bLVNet-TAM

License: Apache-2.0
The official code for the NeurIPS 2019 paper: Quanfu Fan, Chun-Fu (Richard) Chen, Hilde Kuehne, Marco Pistoia, David Cox, "More Is Less: Learning Efficient Video Representations by Temporal Aggregation Modules"

Programming Languages

Python

Projects that are alternatives to or similar to bLVNet-TAM

Ig65m Pytorch
PyTorch 3D video classification models pre-trained on 65 million Instagram videos
Stars: ✭ 217 (+301.85%)
Mutual labels:  action-recognition
Lintel
A Python module to decode video frames directly, using the FFmpeg C API.
Stars: ✭ 240 (+344.44%)
Mutual labels:  action-recognition
Keras-for-Co-occurrence-Feature-Learning-from-Skeleton-Data-for-Action-Recognition
Keras implementation for Co-occurrence-Feature-Learning-from-Skeleton-Data-for-Action-Recognition
Stars: ✭ 44 (-18.52%)
Mutual labels:  action-recognition
Ta3n
[ICCV 2019 (Oral)] Temporal Attentive Alignment for Large-Scale Video Domain Adaptation (PyTorch)
Stars: ✭ 217 (+301.85%)
Mutual labels:  action-recognition
Ms G3d
[CVPR 2020 Oral] PyTorch implementation of "Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition"
Stars: ✭ 225 (+316.67%)
Mutual labels:  action-recognition
TA3N
[ICCV 2019 Oral] TA3N: https://github.com/cmhungsteve/TA3N (Most updated repo)
Stars: ✭ 45 (-16.67%)
Mutual labels:  action-recognition
Mmskeleton
An OpenMMLab toolbox for human pose estimation, skeleton-based action recognition, and action synthesis.
Stars: ✭ 2,378 (+4303.7%)
Mutual labels:  action-recognition
MiCT-Net-PyTorch
Video Recognition using Mixed Convolutional Tube (MiCT) on PyTorch with a ResNet backbone
Stars: ✭ 48 (-11.11%)
Mutual labels:  action-recognition
Alphaction
Spatio-Temporal Action Localization System
Stars: ✭ 221 (+309.26%)
Mutual labels:  action-recognition
temporal-binding-network
Implementation of "EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition, ICCV, 2019" in PyTorch
Stars: ✭ 95 (+75.93%)
Mutual labels:  action-recognition
Paddlevideo
A comprehensive, up-to-date, and deployable video deep learning toolbox covering video recognition, action localization, and temporal action detection. It is a high-performance, lightweight codebase that provides practical models for video understanding research and applications.
Stars: ✭ 218 (+303.7%)
Mutual labels:  action-recognition
Action recognition zoo
Code for popular action recognition models, verified on the Something-Something dataset.
Stars: ✭ 227 (+320.37%)
Mutual labels:  action-recognition
sparseprop
Temporal action proposals
Stars: ✭ 46 (-14.81%)
Mutual labels:  action-recognition
Actionvlad
ActionVLAD for video action classification (CVPR 2017)
Stars: ✭ 217 (+301.85%)
Mutual labels:  action-recognition
two-stream-action-recognition-keras
Two-stream CNNs for video action recognition implemented in Keras
Stars: ✭ 116 (+114.81%)
Mutual labels:  action-recognition
Step
STEP: Spatio-Temporal Progressive Learning for Video Action Detection. CVPR'19 (Oral)
Stars: ✭ 196 (+262.96%)
Mutual labels:  action-recognition
Attentionalpoolingaction
Code/Model release for NIPS 2017 paper "Attentional Pooling for Action Recognition"
Stars: ✭ 248 (+359.26%)
Mutual labels:  action-recognition
C3D-tensorflow
Action recognition with C3D network implemented in tensorflow
Stars: ✭ 34 (-37.04%)
Mutual labels:  action-recognition
weakly-action-localization
No description or website provided.
Stars: ✭ 30 (-44.44%)
Mutual labels:  action-recognition
temporal-ssl
Video Representation Learning by Recognizing Temporal Transformations. In ECCV, 2020.
Stars: ✭ 46 (-14.81%)
Mutual labels:  action-recognition

bLVNet-TAM

This repository holds the code and models for our paper,

Quanfu Fan*, Chun-Fu (Richard) Chen*, Hilde Kuehne, Marco Pistoia, David Cox, "More Is Less: Learning Efficient Video Representations by Temporal Aggregation Modules"

If you use the code and models from this repo, please cite our work. Thanks!

```
@incollection{fan2019blvnet,
    title={{More Is Less: Learning Efficient Video Representations by Temporal Aggregation Modules}},
    author={Quanfu Fan and Chun-Fu (Richard) Chen and Hilde Kuehne and Marco Pistoia and David Cox},
    booktitle={Advances in Neural Information Processing Systems 32},
    year={2019}
}
```

Requirements

```
pip install -r requirement.txt
```

Pretrained Models on Something-Something

The results below (top-1 validation accuracy, %) are reported under the single-crop, single-clip setting. In the model names, aX and bY correspond to the alpha and beta parameters of bLNet, and fNx2 denotes the frame configuration (e.g. f8x2 matches --groups 16, i.e. 8 × 2 frames, in the commands below).

V1

| Name | Top-1 Val Acc. (%) |
| --- | --- |
| bLVNet-TAM-50-a2-b4-f8x2 | 46.4 |
| bLVNet-TAM-50-a2-b4-f16x2 | 48.4 |
| bLVNet-TAM-101-a2-b4-f8x2 | 47.8 |
| bLVNet-TAM-101-a2-b4-f16x2 | 49.6 |
| bLVNet-TAM-101-a2-b4-f24x2 | 52.2 |
| bLVNet-TAM-101-a2-b4-f32x2 | 53.1 |

V2

| Name | Top-1 Val Acc. (%) |
| --- | --- |
| bLVNet-TAM-50-a2-b4-f8x2 | 59.1 |
| bLVNet-TAM-50-a2-b4-f16x2 | 61.7 |
| bLVNet-TAM-101-a2-b4-f8x2 | 60.2 |
| bLVNet-TAM-101-a2-b4-f16x2 | 61.9 |
| bLVNet-TAM-101-a2-b4-f24x2 | 64.0 |
| bLVNet-TAM-101-a2-b4-f32x2 | 65.2 |

Data Preparation

We provide two scripts in the tools folder for preparing input data for model training. The scripts sample an image sequence from a video and then resize each image so that its shorter side is 256 pixels while keeping the aspect ratio. You may need to set folder_root accordingly to ensure the extraction works correctly. A sketch of this preprocessing is shown below.
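For illustration, here is a minimal sketch of that preprocessing written as a hypothetical OpenCV helper. It is not the actual code in tools, and the sampling policy (every frame, below) may differ from the provided scripts:

```python
import os
import cv2  # pip install opencv-python

def extract_frames(video_path, out_dir, short_side=256):
    """Decode a video and save each frame resized so its shorter side
    is `short_side` pixels, preserving the aspect ratio."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        h, w = frame.shape[:2]
        scale = short_side / min(h, w)
        # cv2.resize expects (width, height)
        frame = cv2.resize(frame, (round(w * scale), round(h * scale)))
        cv2.imwrite(os.path.join(out_dir, f"{idx:05d}.jpg"), frame)
        idx += 1
    cap.release()
```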

Training

To reproduce the results in our paper, the pretrained bLNet models are required; they are available here.

With the pretrained models placed in the pretrained folder, the following script can be used to train a bLVNet-101-TAM-a2-b4-f8x2 model on Something-Something V2:

```
python3 train.py --datadir /path/to/folder \
    --dataset st2stv2 -d 101 --groups 16 \
    --logdir /path/to/logdir --lr 0.01 -b 64 --dropout 0.5 -j 36 \
    --blending_frames 3 --epochs 50 --disable_scaleup --imagenet_blnet_pretrained
```
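The --blending_frames 3 flag sets how many neighbouring frames the temporal aggregation module (TAM) blends at each time step. As a schematic illustration only (a sketch of the idea, not the implementation in this repo), TAM-style aggregation with learnable per-channel weights can be written as a depthwise 1-D convolution over the time axis:

```python
import torch
import torch.nn as nn

class TemporalAggregation(nn.Module):
    """Schematic TAM-style aggregation (illustrative sketch): blend each
    frame's features with its temporal neighbours using learnable
    per-channel weights. blending_frames=3 mixes (t-1, t, t+1)."""

    def __init__(self, channels: int, blending_frames: int = 3):
        super().__init__()
        self.blend = nn.Conv1d(
            channels, channels,
            kernel_size=blending_frames,
            padding=blending_frames // 2,
            groups=channels,  # depthwise: one temporal filter per channel
            bias=False,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        y = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, t)
        y = self.blend(y)  # aggregate across neighbouring frames
        return y.reshape(b, h, w, c, t).permute(0, 4, 3, 1, 2)

tam = TemporalAggregation(channels=64, blending_frames=3)
out = tam(torch.randn(2, 16, 64, 56, 56))  # output shape == input shape
```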

Test

First, download the models and put them in the pretrained folder. Then follow the example below to evaluate a model, e.g. evaluating the bLVNet-101-TAM-a2-b4-f8x2 model on Something-Something V2:

```
python3 test.py --datadir /path/to/folder --dataset st2stv2 -d 101 --groups 16 \
    --alpha 2 --beta 4 --evaluate --pretrained --disable_scaleup \
    --logdir /path/to/logdir
```

You can add the num_crops and num_clips arguments to perform multi-crop and multi-clip evaluation for video-level accuracy.
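For example, to evaluate with three spatial crops and two temporal clips (flag spellings and values assumed from the sentence above; treat them as illustrative):

```
python3 test.py --datadir /path/to/folder --dataset st2stv2 -d 101 --groups 16 \
    --alpha 2 --beta 4 --evaluate --pretrained --disable_scaleup \
    --num_crops 3 --num_clips 2 --logdir /path/to/logdir
```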

Please feel free to let us know if you encounter any issues when using our code and models.
