miquelmarti / Okutama Action

Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection

Projects that are alternatives to or similar to Okutama Action

Hand pose action
Dataset and code for the paper "First-Person Hand Action Benchmark with RGB-D Videos and 3D Hand Pose Annotations", CVPR 2018.
Stars: ✭ 173 (+380.56%)
Mutual labels:  dataset, action-recognition, benchmark
BIRL
BIRL: Benchmark on Image Registration methods with Landmark validations
Stars: ✭ 66 (+83.33%)
Mutual labels:  benchmark, dataset
Clue
Chinese Language Understanding Evaluation (CLUE) benchmark: datasets, baselines, pre-trained models, corpus and leaderboard
Stars: ✭ 2,425 (+6636.11%)
Mutual labels:  dataset, benchmark
MaskedFaceRepresentation
Masked face recognition focuses on identifying people using their facial features while they are wearing masks. We introduce benchmarks on face verification based on masked face images for the development of COVID-safe protocols in airports.
Stars: ✭ 17 (-52.78%)
Mutual labels:  benchmark, dataset
Hpatches Benchmark
Python & Matlab code for local feature descriptor evaluation with the HPatches dataset.
Stars: ✭ 129 (+258.33%)
Mutual labels:  dataset, benchmark
Hake
HAKE: Human Activity Knowledge Engine (CVPR'18/19/20, NeurIPS'20)
Stars: ✭ 132 (+266.67%)
Mutual labels:  dataset, action-recognition
Weatherbench
A benchmark dataset for data-driven weather forecasting
Stars: ✭ 227 (+530.56%)
Mutual labels:  dataset, benchmark
Pcam
The PatchCamelyon (PCam) deep learning classification benchmark.
Stars: ✭ 340 (+844.44%)
Mutual labels:  dataset, benchmark
Deeperforensics 1.0
[CVPR 2020] A Large-Scale Dataset for Real-World Face Forgery Detection
Stars: ✭ 338 (+838.89%)
Mutual labels:  dataset, benchmark
Datasets
A repository of pretty cool datasets that I collected for network science and machine learning research.
Stars: ✭ 302 (+738.89%)
Mutual labels:  dataset, benchmark
Epic Kitchens 55 Annotations
🍴 Annotations for the EPIC KITCHENS-55 Dataset.
Stars: ✭ 120 (+233.33%)
Mutual labels:  dataset, action-recognition
Mmaction2
OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
Stars: ✭ 684 (+1800%)
Mutual labels:  action-recognition, benchmark
Pglib Opf
Benchmarks for the Optimal Power Flow Problem
Stars: ✭ 114 (+216.67%)
Mutual labels:  dataset, benchmark
Sensaturban
🔥 Urban-scale point cloud dataset (CVPR 2021)
Stars: ✭ 135 (+275%)
Mutual labels:  dataset, benchmark
Core50
CORe50: a new Dataset and Benchmark for Continual Learning
Stars: ✭ 91 (+152.78%)
Mutual labels:  dataset, benchmark
Fashion Mnist
A MNIST-like fashion product database and benchmark
Stars: ✭ 9,675 (+26775%)
Mutual labels:  dataset, benchmark
Chinesetrafficpolicepose
Detects Chinese traffic police commanding poses
Stars: ✭ 49 (+36.11%)
Mutual labels:  dataset, action-recognition
Vidvrd Helper
To keep up to date with the VRU Grand Challenge, please use https://github.com/NExTplusplus/VidVRD-helper
Stars: ✭ 81 (+125%)
Mutual labels:  dataset, action-recognition
Tape
Tasks Assessing Protein Embeddings (TAPE), a set of five biologically relevant semi-supervised learning tasks spread across different domains of protein biology.
Stars: ✭ 295 (+719.44%)
Mutual labels:  dataset, benchmark
Medmnist
[ISBI'21] MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark for Medical Image Analysis
Stars: ✭ 338 (+838.89%)
Mutual labels:  dataset, benchmark

Abstract

We present Okutama-Action, a new video dataset for concurrent human action detection from aerial views. It consists of 43 fully-annotated, minute-long sequences covering 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transitions between actions, significant changes in scale and aspect ratio, abrupt camera movement, and multi-labeled actors. As a result, our dataset is more challenging than existing ones and will help push the field forward to enable real-world applications.
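Because actors are multi-labeled, a single person can carry several action labels in the same frame, so annotations are naturally read as per-frame, per-actor label *sets* rather than single classes. The sketch below is a minimal illustration of that idea, assuming VATIC-style annotation rows (track id, bounding box, frame index, visibility flags, object class, then zero or more quoted action labels); the exact column layout and file names are assumptions, not taken from the official release, so verify them against the dataset's annotation files before reusing this.

```python
# Minimal sketch: parse multi-label actor annotations into per-frame dicts.
# ASSUMPTION: VATIC-style rows of the form
#   track_id xmin ymin xmax ymax frame lost occluded generated "label" "action1" "action2" ...
# Check the actual Okutama-Action annotation format before relying on this.
from collections import defaultdict
from pathlib import Path

def load_annotations(path):
    """Return {frame_index: {track_id: (bbox, action_set)}}."""
    frames = defaultdict(dict)
    for line in Path(path).read_text().splitlines():
        parts = line.split()
        if len(parts) < 10:
            continue                                  # skip malformed rows
        track_id = int(parts[0])
        bbox = tuple(int(v) for v in parts[1:5])      # xmin, ymin, xmax, ymax
        frame = int(parts[5])
        if parts[6] == "1":                           # "lost": actor not visible
            continue
        actions = {p.strip('"') for p in parts[10:]}  # possibly several actions
        frames[frame][track_id] = (bbox, actions)
    return frames

if __name__ == "__main__":
    anns = load_annotations("1.1.1.txt")              # hypothetical file name
    for frame, actors in sorted(anns.items())[:3]:
        print(frame, actors)
```

Keeping the actions as a set per actor makes the multi-label property explicit: an evaluation or training loop can then treat detection (the box) and per-action classification as separate targets.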

Publications

Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection

Download

The training set (with labels) of Okutama-Action can be found at the following link. The test set can be found at the following link.

About

The creation of this dataset was supported by Prendinger Lab at the National Institute of Informatics, Tokyo, Japan.