License: MIT
HAKE: Human Activity Knowledge Engine (CVPR'18/19/20, NeurIPS'20)


HAKE: Human Activity Knowledge Engine

For more details, please refer to the HAKE website: http://hake-mvig.cn.

HAKE project:

  • HAKE-Data (CVPR'18/20): HAKE-HICO, HAKE-HICO-DET, HAKE-Large, Extra-40-verbs.
  • HAKE-A2V (CVPR'20): Activity2Vec, a general activity feature extractor built on HAKE data; it converts a human (box) into a fixed-size vector together with PaSta and action scores.
  • HAKE-Action-TF, HAKE-Action-Torch (CVPR'19/20, NeurIPS'20, TPAMI'21): SOTA action understanding methods and the corresponding HAKE-enhanced versions (TIN, IDN).
  • HAKE-3D (CVPR'20): 3D human-object representation for action understanding (DJ-RN).
  • HAKE-Object (CVPR'20): object knowledge learner to advance action understanding (SymNet).
  • Halpe: a joint project under AlphaPose and HAKE, full-body human keypoints (body, face, hand, 136 points) of 50,000 HOI images.
  • HOI Learning List: a list of recent HOI (Human-Object Interaction) papers, code, datasets, and leaderboards on widely used benchmarks. We hope it helps everyone interested in HOI.
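The Activity2Vec component above can be pictured as a function from an image plus a human box to a fixed-size vector with PaSta and action scores. Below is a minimal stub of that input/output contract; the dimensions, names, and computation are purely illustrative assumptions (the real Activity2Vec is a trained network, not this placeholder):

```python
import numpy as np

# Hypothetical sketch of the Activity2Vec contract: human box -> fixed-size
# vector + PaSta (part-state) and action scores. All sizes are assumptions.
FEATURE_DIM = 256   # assumed embedding size
NUM_PASTA = 93      # assumed number of part-state classes
NUM_ACTIONS = 117   # assumed number of action (verb) classes

def activity2vec_stub(image: np.ndarray, human_box: tuple) -> dict:
    """Crop the human box and return a dummy fixed-size representation."""
    x1, y1, x2, y2 = human_box
    crop = image[y1:y2, x1:x2]
    pooled = crop.mean(axis=(0, 1))           # stand-in for a CNN backbone
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((pooled.size, FEATURE_DIM))
    feature = pooled @ proj                   # fixed-size vector
    return {
        "feature": feature,
        "pasta_scores": np.zeros(NUM_PASTA),
        "action_scores": np.zeros(NUM_ACTIONS),
    }

out = activity2vec_stub(np.ones((100, 100, 3)), (10, 10, 60, 90))
print(out["feature"].shape)  # (256,)
```

The point is only the shape of the interface: one human box in, one fixed-size representation plus per-class scores out.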

News: (2021.2.7) Upgraded HAKE-Activity2Vec is released! Images/Videos --> human box + ID + skeleton + part states + action + representation. [Description]

Full demo: [YouTube], [bilibili]

(2021.1.15) Our extended version of TIN (Transferable Interactiveness Network) is accepted by TPAMI!

(2020.10.27) The code of IDN (Paper) in NeurIPS'20 is released!

(2020.6.16) Our larger version HAKE-Large (>122K images, activity and part state labels) and Extra-40-verbs (40 new actions) are released!

The image-level and instance-level part state annotations upon HICO and HICO-DET are available!

Note that:

  • Image-level means which Human-Object Interactions (HOIs) are included in an image; the corresponding task is HOI recognition (image-level multi-label classification from HICO).
  • Instance-level means which HOIs are performed by each person; the corresponding task is HOI detection (instance-level multi-label detection from HICO-DET).
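The two granularities can be illustrated with toy data structures; all field names and HOI ids below are hypothetical, not the released annotation schema:

```python
# Toy illustration of the two annotation granularities.
# Field names and HOI ids are hypothetical, not the real HAKE schema.

# Instance-level: each detected person carries their own HOI labels.
instance_level = {
    "image_id": 1,
    "persons": [
        {"box": [10, 20, 80, 200], "hois": [3, 17]},
        {"box": [120, 30, 190, 210], "hois": [42]},
    ],
}

def to_image_level(ann: dict) -> dict:
    """Collapse per-person HOI labels into one image-level label set."""
    hois = sorted({h for p in ann["persons"] for h in p["hois"]})
    return {"image_id": ann["image_id"], "hois": hois}

print(to_image_level(instance_level))
# {'image_id': 1, 'hois': [3, 17, 42]}
```

Note that the mapping only goes one way: image-level labels can be derived from instance-level ones, but not the reverse.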

If you find HAKE useful, please cite our papers:

@inproceedings{li2020pastanet,
    title={PaStaNet: Toward Human Activity Knowledge Engine},
    author={Li, Yong-Lu and Xu, Liang and Liu, Xinpeng and Huang, Xijie and Xu, Yue and Wang, Shiyi and Fang, Hao-Shu and Ma, Ze and Chen, Mingyang and Lu, Cewu},
    booktitle={CVPR},
    year={2020}
}

@inproceedings{lu2018beyond,
    title={Beyond holistic object recognition: Enriching image understanding with part states},
    author={Lu, Cewu and Su, Hao and Li, Yonglu and Lu, Yongyi and Yi, Li and Tang, Chi-Keung and Guibas, Leonidas J},
    booktitle={CVPR},
    year={2018}
}

HAKE-HICO (For Image-level HOI Recognition)

We have released image-level part state annotations on HICO. The HOI recognition task can be modeled as a multi-label classification problem with 600 HOI categories: given a still image, the model should report the HOI categories present in it.

All 38,116 images in the training set of the HICO dataset are annotated with fine-grained human body part states. For a better understanding of the HOI recognition task, you can refer to these works: HICO, Pair-wise, HAKE.
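Seen as multi-label classification, each image's ground truth is a 600-dimensional multi-hot vector. A minimal sketch of that encoding:

```python
import numpy as np

NUM_HOI = 600  # HICO defines 600 HOI categories

def multi_hot(hoi_ids, num_classes=NUM_HOI):
    """Encode a set of HOI category ids as a multi-hot target vector."""
    target = np.zeros(num_classes, dtype=np.float32)
    target[list(hoi_ids)] = 1.0
    return target

y = multi_hot([5, 123, 599])
print(int(y.sum()), y.shape)  # 3 (600,)
```

A model then outputs 600 independent scores per image, typically trained with a per-class binary loss rather than softmax.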

Dataset

The labels are packaged in Annotations/hico-image-level.tar.gz; you can use:

cd Annotations
tar zxvf hico-image-level.tar.gz

to unzip them and get hico-training-set-image-level.json for the HICO training set. More details about the format are shown in Dataset format.

The HICO dataset can be found here: HICO.

Code and Models

The corresponding code and models can be found here.

Results

We provide our current state-of-the-art result file on HICO.

Method                 | Few@1 | Few@5 | Few@10 | mAP   | result
Pairwise-Part+HAKE-ALL | 25.40 | 32.48 | 33.71  | 47.09 | hico_result_pairwise_hake_all.csv
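The mAP reported here is, in essence, average precision computed per HOI class over the test images and then averaged over classes. A simplified sketch of that computation (ignoring HICO's treatment of ambiguous labels):

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one class: precision averaged at each true positive's rank."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)
    ranks = np.arange(1, len(labels) + 1)
    precisions = hits / ranks
    return float((precisions * labels).sum() / labels.sum())

def mean_ap(score_matrix, label_matrix):
    """mAP over classes (columns): mean of per-class APs."""
    aps = [average_precision(score_matrix[:, c], label_matrix[:, c])
           for c in range(score_matrix.shape[1])]
    return float(np.mean(aps))

# 3 images, 2 HOI classes; rows are images, columns are classes.
scores = np.array([[0.2, 0.1], [0.9, 0.8], [0.7, 0.4]])
labels = np.array([[1, 0], [0, 1], [1, 1]])
print(round(mean_ap(scores, labels), 3))  # 0.792
```

The official MATLAB benchmark below should be used for reported numbers; this sketch only shows the shape of the metric.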

Evaluation

After downloading the above result file, you can use the following steps to evaluate:

  1. Download evaluation code here (It is a modification of this benchmark)
  2. Copy the result file to #/data/test-result.csv, where # means the folder of the evaluation code
  3. run matlab -nodesktop -nodisplay
  4. run eval_default_run

HAKE-HICO-DET (For Instance-level HOI Detection)

Instance-level part state annotations on HICO-DET are also available.

Dataset

The labels are packaged in Annotations/hico-det-instance-level.tar.gz; you can use:

cd Annotations
tar zxvf hico-det-instance-level.tar.gz

to unzip them and get hico-det-training-set-instance-level.json for the HICO-DET training set. More details about the format are shown in Dataset format.

The HICO-DET dataset can be found here: HICO-DET.

Code and Models

The corresponding code and models can be found here.

HAKE-Large (For Instance-level Action Understanding Pre-training)

Instance-level part state annotations on HAKE-Large are also available now!

Dataset

The labels are packaged in Annotations/hake_large_annotation.tar.gz; you can use:

cd Annotations
tar zxvf hake_large_annotation.tar.gz

to unzip them and get hake_large_annotation.json for the HAKE-Large training set. More details about the format are shown in Dataset format.

Images

You can download the corresponding images by following this.

Extra 40 verb categories

We also provide the image set and part-state labels for 40 extra verb categories (including both HOI and human-only actions). You can download them from Google Drive. The verb_list and part-state_list are attached in the zip file. For these 40 verb categories, objects also come from the 80 COCO categories, but object bounding boxes and categories are optional (e.g., dance has no interactive object).
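Because the object fields are optional for these verbs, loading code has to tolerate their absence. A hedged sketch, with hypothetical field names (not the released label format):

```python
# Parse annotations where the object is optional (e.g. "dance").
# Field names are hypothetical, not the actual released schema.

def parse_action(ann: dict) -> dict:
    """Normalize an annotation, filling None for absent object info."""
    return {
        "verb": ann["verb"],
        "human_box": ann["human_box"],
        "object_category": ann.get("object_category"),  # None if human-only
        "object_box": ann.get("object_box"),
    }

with_obj = parse_action({"verb": "ride", "human_box": [0, 0, 50, 100],
                         "object_category": "bicycle",
                         "object_box": [10, 40, 90, 110]})
human_only = parse_action({"verb": "dance", "human_box": [5, 5, 60, 120]})
print(human_only["object_box"])  # None
```

Whatever the real field names are, the design point stands: downstream code should branch on a missing object rather than assume every action has one.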

TODOs

  • [x] Image-level label results on HICO
  • [x] Image-level code and models
  • [x] Instance-level label results on HICO-DET
  • [x] Instance-level code and models
  • [x] HAKE-Large data
  • [x] HAKE-A2V, pipeline, model
  • [x] HAKE-Action in PyTorch
  • [ ] HAKE-AVA data and code (video-based)
  • [ ] New Benchmark