
wushidonguc / two-stream-action-recognition-keras

Licence: MIT license
Two-stream CNNs for video action recognition implemented in Keras

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to two-stream-action-recognition-keras

MiCT-Net-PyTorch
Video Recognition using Mixed Convolutional Tube (MiCT) on PyTorch with a ResNet backbone
Stars: ✭ 48 (-58.62%)
Mutual labels:  action-recognition, ucf-101
TCE
This repository contains the code implementation used in the paper Temporally Coherent Embeddings for Self-Supervised Video Representation Learning (TCE).
Stars: ✭ 51 (-56.03%)
Mutual labels:  action-recognition, ucf-101
two-stream-fusion-for-action-recognition-in-videos
No description or website provided.
Stars: ✭ 80 (-31.03%)
Mutual labels:  action-recognition, two-stream-cnn
Amass
Data preparation and loader for AMASS
Stars: ✭ 180 (+55.17%)
Mutual labels:  action-recognition
Mmskeleton
An OpenMMLab toolbox for human pose estimation, skeleton-based action recognition, and action synthesis.
Stars: ✭ 2,378 (+1950%)
Mutual labels:  action-recognition
Ms G3d
[CVPR 2020 Oral] PyTorch implementation of "Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition"
Stars: ✭ 225 (+93.97%)
Mutual labels:  action-recognition
temporal-binding-network
Implementation of "EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition, ICCV, 2019" in PyTorch
Stars: ✭ 95 (-18.1%)
Mutual labels:  action-recognition
Vip
Video Platform for Action Recognition and Object Detection in Pytorch
Stars: ✭ 175 (+50.86%)
Mutual labels:  action-recognition
TA3N
[ICCV 2019 Oral] TA3N: https://github.com/cmhungsteve/TA3N (Most updated repo)
Stars: ✭ 45 (-61.21%)
Mutual labels:  action-recognition
Ican
[BMVC 2018] iCAN: Instance-Centric Attention Network for Human-Object Interaction Detection
Stars: ✭ 225 (+93.97%)
Mutual labels:  action-recognition
Paddlevideo
Comprehensive, up-to-date, and deployable video deep learning algorithms, covering video recognition, action localization, and temporal action detection tasks. It's a high-performance, light-weight codebase that provides practical models for video understanding research and applications.
Stars: ✭ 218 (+87.93%)
Mutual labels:  action-recognition
Step
STEP: Spatio-Temporal Progressive Learning for Video Action Detection. CVPR'19 (Oral)
Stars: ✭ 196 (+68.97%)
Mutual labels:  action-recognition
Alphaction
Spatio-Temporal Action Localization System
Stars: ✭ 221 (+90.52%)
Mutual labels:  action-recognition
Optical Flow Guided Feature
Implementation Code of the paper Optical Flow Guided Feature, CVPR 2018
Stars: ✭ 186 (+60.34%)
Mutual labels:  action-recognition
sparseprop
Temporal action proposals
Stars: ✭ 46 (-60.34%)
Mutual labels:  action-recognition
Hidden Two Stream
Caffe implementation for "Hidden Two-Stream Convolutional Networks for Action Recognition"
Stars: ✭ 179 (+54.31%)
Mutual labels:  action-recognition
Attentionalpoolingaction
Code/Model release for NIPS 2017 paper "Attentional Pooling for Action Recognition"
Stars: ✭ 248 (+113.79%)
Mutual labels:  action-recognition
Ta3n
[ICCV 2019 (Oral)] Temporal Attentive Alignment for Large-Scale Video Domain Adaptation (PyTorch)
Stars: ✭ 217 (+87.07%)
Mutual labels:  action-recognition
Actionvlad
ActionVLAD for video action classification (CVPR 2017)
Stars: ✭ 217 (+87.07%)
Mutual labels:  action-recognition
Action recognition zoo
Codes for popular action recognition models, verified on the something-something data set.
Stars: ✭ 227 (+95.69%)
Mutual labels:  action-recognition

Two-stream-action-recognition-keras

License: MIT

We use a spatial-stream and a temporal-stream CNN, implemented in the Keras framework, to reproduce published results on the UCF-101 action recognition dataset. This project was developed during a research internship with the Machine Intelligence team at IBM Research AI, Almaden Research Center, by Wushi Dong ([email protected]).

References

[1] K. Simonyan and A. Zisserman. Two-Stream Convolutional Networks for Action Recognition in Videos. NIPS 2014.

Data

Spatial input data -> RGB frames

First, download the dataset from UCF into the data folder: cd data && wget http://crcv.ucf.edu/data/UCF101/UCF101.rar

Then extract it with unrar e UCF101.rar; the extracted data takes about 5.9 GB of disk space.

We use split #1 for all of our experiments.

Motion input data -> stacked optical flows

Download the pre-computed TV-L1 optical flow dataset directly from https://github.com/feichtenhofer/twostreamfusion:

wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.001
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.002
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.003
cat ucf101_tvl1_flow.zip* > ucf101_tvl1_flow.zip
unzip ucf101_tvl1_flow.zip

Training

Spatial-stream CNN

  • We classify each video by looking at a single frame. We use ImageNet pre-trained models and transfer learning to retrain Inception on our data. We first fine-tune the top dense layers for 10 epochs and then retrain the top two inception blocks.
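
A rough sketch of this two-stage fine-tuning is shown below, assuming tf.keras and the stock InceptionV3 application; train_generator and val_generator are hypothetical data generators, and this is not the repository's exact code.

from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

NUM_CLASSES = 101  # UCF-101

# ImageNet-pretrained convolutional base plus a new classification head.
base = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
x = GlobalAveragePooling2D()(base.output)
x = Dense(1024, activation='relu')(x)
outputs = Dense(NUM_CLASSES, activation='softmax')(x)
model = Model(base.input, outputs)

# Stage 1: freeze the base and fine-tune only the new dense layers (e.g. 10 epochs).
for layer in base.layers:
    layer.trainable = False
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_generator, epochs=10, validation_data=val_generator)

# Stage 2: also unfreeze the top two inception blocks (layer 249 onward is the
# cut point used in the Keras applications example) and keep training with a
# lower learning rate.
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_generator, epochs=..., validation_data=val_generator)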

Temporal-stream CNN

  • We train the temporal-stream CNN from scratch. In every mini-batch, we randomly select 128 (the batch size) videos from the 9,537 training videos and further randomly select one optical flow stack from each video. Following the reference paper, we use 10 x-channels and 10 y-channels for each optical flow stack, resulting in an input shape of (224, 224, 20); see the sketch after this list.

  • Multiple workers are utilized in the data generator for faster training.
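
A minimal sketch of assembling one motion-stream input under these assumptions (hypothetical helper and variable names; the flow frames are assumed to be pre-resized 224x224 arrays, and this is not the repository's exact code):

import numpy as np

def stack_optical_flow(flow_x_frames, flow_y_frames, start, length=10):
    """Stack `length` x-flow and `length` y-flow frames (each 224x224) into one
    (224, 224, 2 * length) input; this channel ordering is one possible convention."""
    channels = list(flow_x_frames[start:start + length])
    channels += list(flow_y_frames[start:start + length])
    return np.stack(channels, axis=-1)  # shape (224, 224, 20) for length=10

# One random stack per sampled video, e.g.:
# start = np.random.randint(0, num_flow_frames - 10 + 1)
# x = stack_optical_flow(flow_x_frames, flow_y_frames, start)
# A generator yielding such stacks can be consumed with multiple workers, e.g.
# model.fit_generator(gen, workers=4, use_multiprocessing=True).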

Data augmentation

  • Both streams apply the same data augmentation techniques, such as corner cropping and random horizontal flipping. Temporally, we pick the starting frame among those early enough to guarantee the desired number of frames. For shorter videos, we loop the video as many times as necessary to satisfy each model's input interface. A sketch of this temporal sampling rule follows.
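
The temporal sampling rule can be illustrated as follows (a hypothetical helper for illustration, not the repository's exact code):

import random

def sample_frame_indices(num_frames, length):
    """Pick `length` consecutive frame indices from a video with `num_frames` frames."""
    if num_frames >= length:
        # Start early enough that `length` frames fit before the video ends.
        start = random.randint(0, num_frames - length)
        return list(range(start, start + length))
    # Shorter video: loop it by wrapping indices until `length` frames are collected.
    return [i % num_frames for i in range(length)]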

Testing

  • We fuse the two streams by averaging their softmax scores; see the sketch after this list.

  • We uniformly sample a number of frames from each video, and the video-level prediction is the voting result of all frame-level predictions. We pick the starting frame among those early enough to guarantee the desired number of frames. For shorter videos, we loop the video as many times as necessary to satisfy each model's input interface.
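
A minimal sketch of this score-level fusion (hypothetical variable names, not the repository's exact code; here frame-level softmax scores are simply averaged per stream, which stands in for the voting step described above):

import numpy as np

def fuse_two_streams(spatial_probs, temporal_probs):
    """spatial_probs, temporal_probs: (num_samples, num_classes) softmax outputs
    from the two streams on the same video (sampled frames / flow stacks)."""
    video_spatial = spatial_probs.mean(axis=0)    # aggregate frame-level scores
    video_temporal = temporal_probs.mean(axis=0)  # aggregate stack-level scores
    fused = (video_spatial + video_temporal) / 2.0
    return int(np.argmax(fused))

# e.g. label = fuse_two_streams(spatial_model.predict(sampled_frames),
#                               temporal_model.predict(sampled_flow_stacks))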

Results

Network    Simonyan et al. [1]    Ours
Spatial    72.7%                  73.1%
Temporal   81.0%                  78.8%
Fusion     85.9%                  82.0%