
demianzhang / weakly-action-localization

Licence: other
No description or website provided.

Programming Languages

Python, MATLAB, Shell

Projects that are alternatives to, or similar to, weakly-action-localization

Mmskeleton
An OpenMMLab toolbox for human pose estimation, skeleton-based action recognition, and action synthesis.
Stars: ✭ 2,378 (+7826.67%)
Mutual labels:  action-recognition
Ms G3d
[CVPR 2020 Oral] PyTorch implementation of "Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition"
Stars: ✭ 225 (+650%)
Mutual labels:  action-recognition
temporal-ssl
Video Representation Learning by Recognizing Temporal Transformations. In ECCV, 2020.
Stars: ✭ 46 (+53.33%)
Mutual labels:  action-recognition
Ig65m Pytorch
PyTorch 3D video classification models pre-trained on 65 million Instagram videos
Stars: ✭ 217 (+623.33%)
Mutual labels:  action-recognition
Ican
[BMVC 2018] iCAN: Instance-Centric Attention Network for Human-Object Interaction Detection
Stars: ✭ 225 (+650%)
Mutual labels:  action-recognition
Lintel
A Python module to decode video frames directly, using the FFmpeg C API.
Stars: ✭ 240 (+700%)
Mutual labels:  action-recognition
Amass
Data preparation and loader for AMASS
Stars: ✭ 180 (+500%)
Mutual labels:  action-recognition
two-stream-action-recognition-keras
Two-stream CNNs for video action recognition implemented in Keras
Stars: ✭ 116 (+286.67%)
Mutual labels:  action-recognition
Action recognition zoo
Codes for popular action recognition models, verified on the something-something data set.
Stars: ✭ 227 (+656.67%)
Mutual labels:  action-recognition
sparseprop
Temporal action proposals
Stars: ✭ 46 (+53.33%)
Mutual labels:  action-recognition
Actionvlad
ActionVLAD for video action classification (CVPR 2017)
Stars: ✭ 217 (+623.33%)
Mutual labels:  action-recognition
Paddlevideo
A comprehensive, up-to-date, and deployable video deep learning toolkit covering video recognition, action localization, and temporal action detection. A high-performance, lightweight codebase providing practical models for video understanding research and applications.
Stars: ✭ 218 (+626.67%)
Mutual labels:  action-recognition
Attentionalpoolingaction
Code/Model release for NIPS 2017 paper "Attentional Pooling for Action Recognition"
Stars: ✭ 248 (+726.67%)
Mutual labels:  action-recognition
Step
STEP: Spatio-Temporal Progressive Learning for Video Action Detection. CVPR'19 (Oral)
Stars: ✭ 196 (+553.33%)
Mutual labels:  action-recognition
temporal-binding-network
Implementation of "EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition, ICCV, 2019" in PyTorch
Stars: ✭ 95 (+216.67%)
Mutual labels:  action-recognition
Optical Flow Guided Feature
Implementation Code of the paper Optical Flow Guided Feature, CVPR 2018
Stars: ✭ 186 (+520%)
Mutual labels:  action-recognition
Alphaction
Spatio-Temporal Action Localization System
Stars: ✭ 221 (+636.67%)
Mutual labels:  action-recognition
SEC-tensorflow
A TensorFlow version of the SEC approach from the paper "Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation".
Stars: ✭ 35 (+16.67%)
Mutual labels:  weakly-supervised
Keras-for-Co-occurrence-Feature-Learning-from-Skeleton-Data-for-Action-Recognition
Keras implementation for Co-occurrence-Feature-Learning-from-Skeleton-Data-for-Action-Recognition
Stars: ✭ 44 (+46.67%)
Mutual labels:  action-recognition
TA3N
[ICCV 2019 Oral] TA3N: https://github.com/cmhungsteve/TA3N (Most updated repo)
Stars: ✭ 45 (+50%)
Mutual labels:  action-recognition

Weakly Supervised Action Localization by Sparse Temporal Pooling Network

Overview

This repository contains a reproduction of the method described in the paper "Weakly Supervised Action Localization by Sparse Temporal Pooling Network" by Phuc Nguyen, Ting Liu, et al. The paper was first posted on arXiv in December 2017 and was published at CVPR 2018.

Disclaimer: this is reproduced code, not the original implementation released by the paper's authors.

Running the code

See multi_run.sh. The pipeline has three stages:

1. Use Dense_Flow to extract RGB frames and optical flow.

2. Extract I3D features using thumos14-i3d.

3. Train and test each stream, then fuse the RGB and flow results.
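The last stage combines per-segment class scores from the two streams. A minimal sketch of late fusion by weighted averaging (the function name, the score layout, and the 0.5 weight are my assumptions, not taken from this repo):

```python
def fuse_scores(rgb_scores, flow_scores, flow_weight=0.5):
    """Late-fuse per-segment class scores from the RGB and flow streams.

    rgb_scores, flow_scores: nested lists of shape (num_segments, num_classes).
    flow_weight: contribution of the flow stream in [0, 1]; 0.5 = plain average.
    """
    fused = []
    for r_row, f_row in zip(rgb_scores, flow_scores):
        fused.append([(1.0 - flow_weight) * r + flow_weight * f
                      for r, f in zip(r_row, f_row)])
    return fused

# Example: two segments, two classes.
rgb = [[0.9, 0.1], [0.2, 0.7]]
flow = [[0.5, 0.3], [0.0, 0.9]]
fused = fuse_scores(rgb, flow, flow_weight=0.5)
```

Since the flow stream scores higher on its own (see the table below), a flow_weight above 0.5 may be worth trying when reproducing the two-stream row.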

Results

mAP at temporal IoU thresholds 0.1–0.5:

| Model | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
|---|---|---|---|---|---|
| RGB-I3D | 0.351 | 0.279 | 0.213 | 0.150 | 0.102 |
| Flow-I3D | 0.408 | 0.340 | 0.269 | 0.205 | 0.144 |
| Two-Stream I3D | - | - | - | - | - |

Please follow the preprocessing order described in the paper:

1. Sample at 10 fps: use OpenCV's cv2.VideoCapture to read each video and keep one frame out of every three (the original videos are 30 fps).

2. Keep the 340x180 aspect ratio and resize the short edge to 256: after resizing the frames, use cv2.VideoWriter to save the sampled frames back into a video.

3. Extract flow: use dense_flow to extract RGB frames and optical flow. Note that the shell script's default size parameter is 340x256; change it to each video's actual shape to preserve the aspect ratio.

4. Center-crop to 224x224 and feed each chunk of 16 frames to I3D.

Results obtained by following this order may be slightly higher than those reported above, because I did not follow it in my original runs.
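The sampling, resizing, and cropping arithmetic in the steps above can be sketched as follows. This covers only the index and geometry computations; the actual I/O uses cv2.VideoCapture / cv2.VideoWriter and dense_flow, and all helper names here are mine:

```python
def sampled_indices(num_frames, keep_every=3):
    """Frame indices kept when downsampling 30 fps video to 10 fps:
    keep one frame out of every three."""
    return list(range(0, num_frames, keep_every))

def short_edge_resize(width, height, short_edge=256):
    """New (width, height) after scaling the short edge to `short_edge`
    while preserving the aspect ratio."""
    scale = short_edge / min(width, height)
    return round(width * scale), round(height * scale)

def center_crop_box(width, height, crop=224):
    """(x0, y0, x1, y1) of a centered crop-by-crop window."""
    x0 = (width - crop) // 2
    y0 = (height - crop) // 2
    return x0, y0, x0 + crop, y0 + crop

def clip_starts(num_frames, clip_len=16):
    """Start indices of consecutive non-overlapping 16-frame clips for I3D."""
    return list(range(0, num_frames - clip_len + 1, clip_len))

# A 340x180 frame becomes 484x256 after short-edge resizing,
# then a 224x224 window is cropped from its center.
w, h = short_edge_resize(340, 180)
box = center_crop_box(w, h)
```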
