escorciav / Daps

License: MIT
This repo hosts the DAPs code for our ECCV 2016 publication

Programming Languages

python
139335 projects - #7 most used programming language
python27
39 projects

Projects that are alternatives of or similar to Daps

Rnn Theano
Some RNN code implemented with Theano, including the most basic RNN, LSTM, and some attention models such as MLSTM from the literature
Stars: ✭ 31 (-58.11%)
Mutual labels:  theano
Training toolbox caffe
Training Toolbox for Caffe
Stars: ✭ 51 (-31.08%)
Mutual labels:  action-recognition
Epic Kitchens 55 Action Models
EPIC-KITCHENS-55 baselines for Action Recognition
Stars: ✭ 68 (-8.11%)
Mutual labels:  action-recognition
Iris Python
Collection of iris classification programs for teaching purposes
Stars: ✭ 33 (-55.41%)
Mutual labels:  theano
Chinesetrafficpolicepose
Detects Chinese traffic police commanding poses
Stars: ✭ 49 (-33.78%)
Mutual labels:  action-recognition
Feel The Kern
Generating proportional fonts with deep learning
Stars: ✭ 59 (-20.27%)
Mutual labels:  theano
Video Classification 3d Cnn Pytorch
Video classification tools using 3D ResNet
Stars: ✭ 874 (+1081.08%)
Mutual labels:  action-recognition
Hake Action
As a part of the HAKE project, includes the reproduced SOTA models and the corresponding HAKE-enhanced versions (CVPR2020).
Stars: ✭ 72 (-2.7%)
Mutual labels:  action-recognition
Resgcnv1
ResGCN: an efficient baseline for skeleton-based human action recognition.
Stars: ✭ 50 (-32.43%)
Mutual labels:  action-recognition
Fight detection
Real time Fight Detection Based on 2D Pose Estimation and RNN Action Recognition
Stars: ✭ 65 (-12.16%)
Mutual labels:  action-recognition
Okutama Action
Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection
Stars: ✭ 36 (-51.35%)
Mutual labels:  action-recognition
Yann
This toolbox is supporting material for the book on CNNs (http://www.convolution.network).
Stars: ✭ 41 (-44.59%)
Mutual labels:  theano
Tars
A deep generative model library in Theano and Lasagne
Stars: ✭ 61 (-17.57%)
Mutual labels:  theano
Action Recognition Using 3d Resnet
Use 3D ResNet to extract features of UCF101 and HMDB51 and then classify them.
Stars: ✭ 32 (-56.76%)
Mutual labels:  action-recognition
Merlin
This is now the official location of the Merlin project.
Stars: ✭ 1,168 (+1478.38%)
Mutual labels:  theano
Theano Kaldi Rnn
THEANO-KALDI-RNNs is a project implementing various Recurrent Neural Networks (RNNs) for RNN-HMM speech recognition. The Theano Code is coupled with the Kaldi decoder.
Stars: ✭ 31 (-58.11%)
Mutual labels:  theano
Basic nns in frameworks
Several basic neural networks (MLP, autoencoder, CNN, recurrent NN, recursive NN) implemented in several NN frameworks (TensorFlow, PyTorch, Theano, Keras)
Stars: ✭ 58 (-21.62%)
Mutual labels:  theano
Tdn
[CVPR 2021] TDN: Temporal Difference Networks for Efficient Action Recognition
Stars: ✭ 72 (-2.7%)
Mutual labels:  action-recognition
Mlatimperial2017
Materials for the course of machine learning at Imperial College organized by Yandex SDA
Stars: ✭ 71 (-4.05%)
Mutual labels:  theano
Aiopen
AIOpen is a collection that categorizes AI open-source projects by the three elements of AI (data, algorithms, and compute). It aims to track current deep learning (DL) open-source projects, list as many of them as possible, and include some previously studied code, so that newcomers to AI can get a clearer and more complete picture of artificial intelligence (deep learning).
Stars: ✭ 62 (-16.22%)
Mutual labels:  theano

Deep Action Proposals for Videos

Temporal Action Proposals for long untrimmed videos.

The DAPs architecture retrieves segments of long videos that are likely to contain actions, quickly and with high recall.

[DAPs architecture figure]

Welcome

Welcome to our repo! This project hosts a simple, handy interface for generating segments of your videos that are likely to contain actions.

If you find any piece of code valuable for your research, please cite this work:

@Inbook{Escorcia2016,
author="Escorcia, Victor and Caba Heilbron, Fabian and Niebles, Juan Carlos and Ghanem, Bernard",
editor="Leibe, Bastian and Matas, Jiri and Sebe, Nicu and Welling, Max",
title="DAPs: Deep Action Proposals for Action Understanding",
bookTitle="Computer Vision -- ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III",
year="2016",
publisher="Springer International Publishing",
address="Cham",
pages="768--784",
isbn="978-3-319-46487-9",
doi="10.1007/978-3-319-46487-9_47",
url="http://dx.doi.org/10.1007/978-3-319-46487-9_47"
}

If you like this project, give us a ⭐️ in the GitHub banner 😉.

Installation

  1. Ensure that you have gcc, conda, CUDA, and cuDNN (optional).

  2. Clone our repo, git clone https://github.com/escorciav/daps/.

  3. Go to our project folder and type bash install.sh.
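
For reference, the whole installation boils down to something like the sketch below. This is a minimal sketch: the folder name daps is an assumption based on the clone URL, and install.sh is run as distributed.

  # Installation sketch: assumes gcc, conda, and CUDA/cuDNN are already available (step 1).
  git clone https://github.com/escorciav/daps/
  cd daps            # folder name assumed from the clone URL
  bash install.sh    # project installer, as described in step 3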

Notes

  • Our implementation uses Theano. It is tested with gcc, but as long as Theano supports your desired compiler, go ahead.

  • In case you don't want to use conda, our python dependencies are here. A complete list of dependencies is here.

  • Do you like environment-modules? We provide bash scripts to activate or deactivate the environment. Personalize them 😉.

What can you find?

  • Pre-trained models. Our generalization experiment suggests that you may expect decent results for other kinds of action classes with similar lengths. Check out the models trained on the validation set of THUMOS14.

  • Pre-computed action proposals. Take a look at our results if you are interested in comparisons or building cool algorithms on top of our outputs.

  • Code for retrieving proposals in new videos. Check out our program to retrieve proposals from your video.

Do you want to try?

  1. Download the C3D representation of a couple of videos from here.

  2. Download our model.

  3. Go to our project folder.

  4. Activate our conda environment. Pick one of the following options:

  • Execute source activate daps-eccv16, for conda users.

  • Execute ./activate.sh, for conda and environment-modules users.

  • Ensure that our package is in your PYTHONPATH, for Python users.

Note for new environment-modules users: You must personalize the script to activate the environment, otherwise it will fail.

  5. Execute: tools/generate_proposals.py -iv video_test_0000541 -ic3d [path-to-c3d-of-videos] -imd [path-our-model]
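
Putting the steps above together, a full run could look roughly like the following sketch. This is a minimal sketch, not the only way to do it: the conda activation is just one of the options listed above, the project folder name daps is an assumption based on the clone URL, and the bracketed paths are placeholders you must replace with your own locations.

  # End-to-end sketch: retrieve proposals for one of the example videos (steps 1-5).
  cd daps                        # project folder (name assumed from the clone URL)
  source activate daps-eccv16    # or ./activate.sh for conda + environment-modules users
  tools/generate_proposals.py -iv video_test_0000541 \
      -ic3d [path-to-c3d-of-videos] \
      -imd [path-our-model]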

Questions

Please visit our FAQs if you have any doubts. If your question is not there, send us an email.
