Video2tfrecord: Easily convert RGB video data (e.g. .avi) to the TensorFlow TFRecords file format for training, e.g., a neural network in TensorFlow. This implementation allows limiting the number of frames per video stored in the TFRecords.
Stars: ✭ 137 (-15.95%)
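The frame cap that Video2tfrecord describes can be sketched as uniform index sampling before serialization. This is a minimal illustration only; `sample_frame_indices` is a hypothetical helper, not the project's actual API.

```python
def sample_frame_indices(total_frames: int, max_frames: int) -> list[int]:
    """Return up to max_frames evenly spaced frame indices from a video.

    If the video is short enough, keep every frame; otherwise pick
    indices at a constant stride so the samples span the whole clip.
    """
    if total_frames <= max_frames:
        return list(range(total_frames))
    step = total_frames / max_frames
    return [int(i * step) for i in range(max_frames)]


# A 10-frame clip capped at 5 frames keeps every second frame:
print(sample_frame_indices(10, 5))  # [0, 2, 4, 6, 8]
```

The selected frames would then be encoded into `tf.train.Example` records and written with a TFRecord writer; the sampling step is what keeps each serialized video at a bounded size.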
Multiverse: Dataset, code, and model for the CVPR'20 paper "The Garden of Forking Paths: Towards Multi-Future Trajectory Prediction" and the ECCV'20 SimAug paper.
Stars: ✭ 131 (-19.63%)
Mmaction: An open-source toolbox for action understanding based on PyTorch
Stars: ✭ 1,711 (+949.69%)
I3d finetune: TensorFlow code for fine-tuning the I3D model on UCF101.
Stars: ✭ 128 (-21.47%)
Movienet Tools: Tools for movie and video research
Stars: ✭ 113 (-30.67%)
Temporal Shift Module: [ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding
Stars: ✭ 1,282 (+686.5%)
Temporally Language Grounding: A PyTorch implementation of some state-of-the-art models for "Temporally Language Grounding in Untrimmed Videos"
Stars: ✭ 73 (-55.21%)
Tdn: [CVPR 2021] TDN: Temporal Difference Networks for Efficient Action Recognition
Stars: ✭ 72 (-55.83%)
Tsn Pytorch: Temporal Segment Networks (TSN) in PyTorch
Stars: ✭ 895 (+449.08%)
Mmaction2: OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
Stars: ✭ 684 (+319.63%)
DEAR: [ICCV 2021 Oral] Deep Evidential Action Recognition
Stars: ✭ 36 (-77.91%)
PyAnomaly: A useful toolbox for anomaly detection
Stars: ✭ 95 (-41.72%)
DIN-Group-Activity-Recognition-Benchmark: A new codebase for group activity recognition. It contains code for the ICCV 2021 paper "Spatio-Temporal Dynamic Inference Network for Group Activity Recognition" and some other methods.
Stars: ✭ 26 (-84.05%)
just-ask: [TPAMI Special Issue on ICCV 2021 Best Papers, Oral] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
Stars: ✭ 57 (-65.03%)
STCNet: Spatio-Temporal Cross Network for Industrial Smoke Detection
Stars: ✭ 29 (-82.21%)
MTL-AQA: What and How Well You Performed? A Multitask Learning Approach to Action Quality Assessment [CVPR 2019]
Stars: ✭ 38 (-76.69%)
NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21)
Stars: ✭ 50 (-69.33%)
glimpse clouds: PyTorch implementation of the paper "Glimpse Clouds: Human Activity Recognition from Unstructured Feature Points", F. Baradel, C. Wolf, J. Mille, G. W. Taylor, CVPR 2018
Stars: ✭ 30 (-81.6%)
SSTDA: [CVPR 2020] Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation (PyTorch)
Stars: ✭ 150 (-7.98%)
Awesome Grounding: A curated list of research papers in visual grounding
Stars: ✭ 247 (+51.53%)
Paddlevideo: A comprehensive, up-to-date, and deployable collection of video deep learning algorithms, covering video recognition, action localization, and temporal action detection. It is a high-performance, lightweight codebase that provides practical models for video understanding research and applications.
Stars: ✭ 218 (+33.74%)
Actionvlad: ActionVLAD for video action classification (CVPR 2017)
Stars: ✭ 217 (+33.13%)
Step: STEP: Spatio-Temporal Progressive Learning for Video Action Detection. CVPR'19 (Oral)
Stars: ✭ 196 (+20.25%)
Youtube 8m: The 2nd-place solution to the YouTube-8M Video Understanding Challenge by Team Monkeytyping (based on TensorFlow)
Stars: ✭ 171 (+4.91%)