two-stream-fusion-for-action-recognition-in-videos

We have implemented a convolutional two-stream network for action recognition in two configurations:

  • Two stream average fusion at softmax layer.
  • Two stream fusion at convolutional layer.
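The first configuration can be sketched in a few lines: each stream produces class scores, and the fused prediction is the mean of the two softmax distributions. This is an illustrative PyTorch snippet, not code from this repository (`average_fusion` is a hypothetical helper name):

```python
import torch
import torch.nn.functional as F

def average_fusion(spatial_logits, temporal_logits):
    """Average the class probabilities of the spatial and temporal streams."""
    p_spatial = F.softmax(spatial_logits, dim=1)
    p_temporal = F.softmax(temporal_logits, dim=1)
    return (p_spatial + p_temporal) / 2

# Toy example: a batch of 4 clips, 101 UCF101 classes.
spatial_scores = torch.randn(4, 101)
temporal_scores = torch.randn(4, 101)
fused = average_fusion(spatial_scores, temporal_scores)
```

Because each input to the average is a valid probability distribution, the fused output also sums to 1 per sample, so `argmax` over it directly gives the predicted class.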

1. Data

We use the UCF101 dataset for this project. Optical flow images supply the temporal information, while RGB frames supply the spatial information. The pre-processed RGB frames and flow images can be downloaded from feichtenhofer/twostreamfusion:

  • RGB images
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_jpegs_256.zip.001
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_jpegs_256.zip.002
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_jpegs_256.zip.003

cat ucf101_jpegs_256.zip* > ucf101_jpegs_256.zip
unzip ucf101_jpegs_256.zip
  • Optical Flow
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.001
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.002
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.003

cat ucf101_tvl1_flow.zip* > ucf101_tvl1_flow.zip
unzip ucf101_tvl1_flow.zip

For both cases we use a stack of 10 RGB frames as input to the spatial stream and a stack of 50 optical flow frames as input to the temporal stream. So, for a batch size of 4, a typical spatial loader looks like the image below (data_loading.jpeg).
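In terms of tensor shapes, stacking frames along the channel dimension gives batches like the following. This is a minimal sketch under one assumption not stated in the README: that RGB frames contribute 3 channels each and flow frames 1 channel each, with all frames stacked on the channel axis (224x224 crops are also an assumption):

```python
import torch

batch_size = 4
n_rgb = 10    # RGB frames per spatial sample, per the README
n_flow = 50   # optical flow frames per temporal sample, per the README

# Spatial stream: 10 RGB frames x 3 channels = 30 input channels.
spatial_batch = torch.zeros(batch_size, 3 * n_rgb, 224, 224)

# Temporal stream: one channel per flow frame (assumed layout) = 50 channels.
temporal_batch = torch.zeros(batch_size, n_flow, 224, 224)
```

The first convolutional layer of each stream must accept the corresponding channel count, which is why the weight transformation described in Section 3 is needed.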

2. Models

We use a VGG-19 model pre-trained on ImageNet for both streams.

3. Implementation details for both cases

  • Note: to adapt the first-layer weights of each ConvNet, we average the pre-trained weight values across the three RGB channels and replicate this average across the number of input channels of that ConvNet.
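This cross-modality weight transformation can be sketched as follows (a minimal illustration; `cross_modality_init` is a hypothetical helper name):

```python
import torch

def cross_modality_init(rgb_weight, n_channels):
    """Average pre-trained first-layer weights over the 3 RGB channels,
    then replicate that mean across the new stream's input channels.

    rgb_weight: tensor of shape (out_channels, 3, kH, kW)
    returns:    tensor of shape (out_channels, n_channels, kH, kW)
    """
    mean = rgb_weight.mean(dim=1, keepdim=True)   # (out, 1, kH, kW)
    return mean.repeat(1, n_channels, 1, 1)

# Example: adapt a 3-channel first layer to a 50-channel flow stack.
w_rgb = torch.randn(64, 3, 3, 3)
w_flow = cross_modality_init(w_rgb, 50)
```

Every input channel of the adapted layer starts from the same averaged kernel, so the network initially responds to flow input the way the pre-trained network responded to grayscale intensity.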

3.1 Two stream average fusion at softmax layer

The architecture for this case is shown in the figure below (average_fusion).

3.2 Two stream fusion at convolution layer

The architecture for this case is shown in the figure below (conv_fusion). The ConvNets are replaced by VGG models pre-trained on ImageNet.
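One common way to fuse the two streams at a convolutional layer is to concatenate their feature maps channel-wise and mix them with a 1x1 convolution. The sketch below shows that variant; the repo's exact fusion operator may differ (sum fusion and 3D-conv fusion are other options in the literature):

```python
import torch
import torch.nn as nn

class ConvFusion(nn.Module):
    """Fuse spatial and temporal feature maps at a conv layer
    via channel concatenation followed by a 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, spatial_feat, temporal_feat):
        return self.fuse(torch.cat([spatial_feat, temporal_feat], dim=1))

# Example: fusing 512-channel VGG conv5 feature maps of size 14x14.
fusion = ConvFusion(512)
fused_feat = fusion(torch.randn(2, 512, 14, 14),
                    torch.randn(2, 512, 14, 14))
```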

4. Training Models

  • Modify the dataset paths in the training scripts so they point to the UCF101 data on your device.
  • To change the number of frames in the RGB stack, modify the frame-selection code and choose which frames to include. If you want, you can also introduce randomness when choosing the frames for stacking.
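Frame selection for the stack can be sketched like this, with a flag toggling between evenly spaced and random sampling (an illustrative helper, not code from this repo):

```python
import random

def sample_frame_indices(n_total, n_stack, randomize=False):
    """Pick n_stack frame indices from a clip of n_total frames,
    either evenly spaced or uniformly at random (returned sorted)."""
    if randomize:
        return sorted(random.sample(range(n_total), n_stack))
    step = n_total / n_stack
    return [int(i * step) for i in range(n_stack)]

# Example: pick 10 frames from a 120-frame clip.
even_idx = sample_frame_indices(120, 10)
rand_idx = sample_frame_indices(120, 10, randomize=True)
```

Random sampling acts as a form of temporal data augmentation during training, while evenly spaced sampling gives deterministic behavior for evaluation.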

5. Performance

5.1 Performance for two stream average fusion

  • For the first 20 classes of the UCF101 dataset:

    Network          Acc.
    Spatial CNN      91.96%
    Motion CNN       97.30%
    Average fusion   99.01%

  • For all 101 classes of the UCF101 dataset:

    Network          Acc.
    Spatial CNN      48.64%
    Motion CNN       51.17%
    Average fusion   62.13%

5.2 Performance for two stream fusion at convolution layer

  • For the first 20 classes of the UCF101 dataset, we get an accuracy of 96.01%.
  • For all 101 classes of the UCF101 dataset, we get an accuracy of 68.23%.

6. Reference Paper

Feichtenhofer, C., Pinz, A., and Zisserman, A. "Convolutional Two-Stream Network Fusion for Video Action Recognition." CVPR 2016.
