
nesl / Robust-Deep-Learning-Pipeline

License: BSD-3-Clause
Deep Convolutional Bidirectional LSTM for Complex Activity Recognition with Missing Data. Human Activity Recognition Challenge. Springer SIST (2020)

Programming Languages

Jupyter Notebook

Projects that are alternatives to or similar to Robust-Deep-Learning-Pipeline

Awesome-Human-Activity-Recognition
An up-to-date & curated list of Awesome IMU-based Human Activity Recognition (Ubiquitous Computing) papers, methods & resources. Please note that most of the collected research is based on IMU data.
Stars: ✭ 72 (+260%)
Mutual labels:  time-series, activity-recognition, human-activity-recognition
Lstm Human Activity Recognition
Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six activity categories - Guillaume Chevalier
Stars: ✭ 2,943 (+14615%)
Mutual labels:  activity-recognition, lstm, human-activity-recognition
dana
DANA: Dimension-Adaptive Neural Architecture (UbiComp'21) (ACM IMWUT)
Stars: ✭ 28 (+40%)
Mutual labels:  time-series, activity-recognition, human-activity-recognition
Lstm anomaly thesis
Anomaly detection for temporal data using LSTMs
Stars: ✭ 178 (+790%)
Mutual labels:  time-series, lstm
Motion Sense
MotionSense Dataset for Human Activity and Attribute Recognition (time-series data generated by smartphone sensors: accelerometer and gyroscope)
Stars: ✭ 159 (+695%)
Mutual labels:  time-series, activity-recognition
Sequitur
Library of autoencoders for sequential data
Stars: ✭ 162 (+710%)
Mutual labels:  time-series, lstm
Time Attention
Implementation of RNN for Time Series prediction from the paper https://arxiv.org/abs/1704.02971
Stars: ✭ 52 (+160%)
Mutual labels:  time-series, lstm
lstm har
LSTM-based human activity recognition using a smartphone sensor dataset
Stars: ✭ 20 (+0%)
Mutual labels:  lstm, human-activity-recognition
MSAF
Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"
Stars: ✭ 47 (+135%)
Mutual labels:  action-recognition, multimodal-deep-learning
Time-Series-Forecasting
Rainfall analysis of Maharashtra - Season/Month wise forecasting. Different methods have been used. The main goal of this project is to increase the performance of forecasted results during rainy seasons.
Stars: ✭ 27 (+35%)
Mutual labels:  time-series, lstm
LSTM-Time-Series-Analysis
Using LSTM network for time series forecasting
Stars: ✭ 41 (+105%)
Mutual labels:  time-series, lstm
Pytorch Gan Timeseries
GANs for time series generation in pytorch
Stars: ✭ 109 (+445%)
Mutual labels:  time-series, lstm
Deep Learning Based Ecg Annotator
Annotation of ECG signals using deep learning with TensorFlow's Keras
Stars: ✭ 110 (+450%)
Mutual labels:  time-series, lstm
Kaggle Competition Favorita
5th place solution for Kaggle competition Favorita Grocery Sales Forecasting
Stars: ✭ 169 (+745%)
Mutual labels:  time-series, lstm
Wdk
The Wearables Development Toolkit - a development environment for activity recognition applications with sensor signals
Stars: ✭ 68 (+240%)
Mutual labels:  time-series, activity-recognition
Squeeze-and-Recursion-Temporal-Gates
Code for : [Pattern Recognit. Lett. 2021] "Learn to cycle: Time-consistent feature discovery for action recognition" and [IJCNN 2021] "Multi-Temporal Convolutions for Human Action Recognition in Videos".
Stars: ✭ 62 (+210%)
Mutual labels:  activity-recognition, action-recognition
pose2action
experiments on classifying actions using poses
Stars: ✭ 24 (+20%)
Mutual labels:  lstm, action-recognition
Getting Things Done With Pytorch
Jupyter Notebook tutorials on solving real-world problems with Machine Learning & Deep Learning using PyTorch. Topics: Face detection with Detectron 2, Time Series anomaly detection with LSTM Autoencoders, Object Detection with YOLO v5, Build your first Neural Network, Time Series forecasting for Coronavirus daily cases, Sentiment Analysis with BERT.
Stars: ✭ 738 (+3590%)
Mutual labels:  time-series, lstm
Deep Learning Time Series
List of papers, code and experiments using deep learning for time series forecasting
Stars: ✭ 796 (+3880%)
Mutual labels:  time-series, lstm
MTL-AQA
What and How Well You Performed? A Multitask Learning Approach to Action Quality Assessment [CVPR 2019]
Stars: ✭ 38 (+90%)
Mutual labels:  lstm, action-recognition

Deep Convolutional Bidirectional LSTM for Complex Activity Recognition with Missing Data

This repo contains sample code for training deep learning pipelines on multimodal data with missing and misaligned samples, noisy artifacts, variable sampling rates, and timing errors, intended for complex event processing. We benchmark the pipeline on complex activity recognition using the Cooking Activity Recognition Dataset. [Paper] [Slides] [Presentation Video]

The proposed training pipeline placed 3rd out of 78 teams in the Cooking Activity Recognition Challenge and was awarded the 2020 IEEE Lance Stafford Larson Student Award by the IEEE Computer Society.

Summary

Complex activity recognition using multiple on-body sensors is challenging due to missing samples, misaligned data timestamps across sensors, and variations in sampling rates. In this paper, we introduce a robust training pipeline that handles sampling rate variability, missing data, and misaligned data timestamps using intelligent data augmentation techniques. Specifically, we use controlled jitter in window length and add artificial misalignments in data timestamps between sensors, along with masking representations of missing data. We evaluate our pipeline on the Cooking Activity Dataset with Macro and Micro Activities, benchmarking the performance of a deep convolutional bidirectional long short-term memory (DCBL) classifier. In our evaluations, DCBL achieves test accuracies of 88% and 72% for macro- and micro-activity classification, respectively, exceeding the performance of state-of-the-art vanilla activity classifiers.
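As a rough, self-contained sketch of the augmentation ideas above (not the exact implementation in the notebooks), the snippet below jitters the window length, shifts sensors against each other to mimic timestamp misalignment, and replaces missing samples with a sentinel value that a masking-aware model can ignore. The window length, jitter range, shift range, and mask value are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

MASK_VALUE = 0.0                      # illustrative sentinel for missing samples
RNG = np.random.default_rng(0)

def jitter_window(signal, base_len, jitter=0.1):
    """Crop a window whose length is randomly jittered around base_len,
    then pad or trim it back to base_len using the mask value."""
    length = min(int(base_len * RNG.uniform(1 - jitter, 1 + jitter)), signal.shape[0])
    start = RNG.integers(0, signal.shape[0] - length + 1)
    window = signal[start:start + length]
    pad_rows = max(0, base_len - window.shape[0])
    pad = np.full((pad_rows, signal.shape[1]), MASK_VALUE)
    return np.vstack([window, pad])[:base_len]

def misalign(sensor_windows, max_shift=5):
    """Shift each sensor's window by a random number of samples to mimic
    timestamp misalignment between devices; gaps are filled with MASK_VALUE."""
    shifted = []
    for w in sensor_windows:
        s = int(RNG.integers(-max_shift, max_shift + 1))
        out = np.full_like(w, MASK_VALUE)
        if s >= 0:
            out[s:] = w[:w.shape[0] - s]
        else:
            out[:s] = w[-s:]
        shifted.append(out)
    return shifted

def mask_missing(window):
    """Replace NaNs (missing samples) with the mask value."""
    return np.where(np.isnan(window), MASK_VALUE, window)

# Example: three hypothetical 3-axis sensors, ~4 s at 100 Hz each
raw = [RNG.normal(size=(400, 3)) for _ in range(3)]
windows = misalign([mask_missing(jitter_window(x, base_len=256)) for x in raw])
sample = np.concatenate(windows, axis=1)      # one (256, 9) training example
```

The notebooks apply this kind of preprocessing to the challenge data before training; the sketch only shows the shape of the transformations.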

[Figure: Device_Image]

Data Details

For the raw dataset and a usage guide for the data used for benchmarking, please visit the Cooking Activity Recognition Challenge website.

Code and Usage

  1. Requirements: The Jupyter notebooks used to train the models require Keras with a TensorFlow backend and scikit-learn. The notebooks with the training process are provided.
  2. Macro Classifier: The notebooks training macro classifiers are Macro_Activity_Masking_Classifier.ipynb and Macro_Activity_Without_Masking_Classifier.ipynb.
  3. Micro Classifier: The notebooks training micro classifiers are Micro_Activity_Masking_Classifier.ipynb and Micro_Activity_Without_Masking_Classifier.ipynb.
  4. Overall Pipeline: The overall pipeline fusing the macro and micro classifiers using weighted decision fusion is available as Final_Pipeline.ipynb (a generic sketch of the fusion step follows this list).
  5. Data and pre-trained checkpoints: The data and pre-trained classifier checkpoints are available in the repo. The classifier paths used in Final_Pipeline.ipynb point to the trained checkpoints.
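For orientation, here is a minimal, generic sketch of weighted decision fusion over class-probability outputs. The actual fusion weights, label handling, and checkpoint paths used by the pipeline live in Final_Pipeline.ipynb; the scores and weights below are placeholders only.

```python
import numpy as np

def weighted_decision_fusion(prob_list, weights):
    """Weighted sum of per-classifier class-probability arrays, followed by
    an argmax over the fused scores."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # normalise the fusion weights
    fused = sum(w * p for w, p in zip(weights, prob_list))
    return fused.argmax(axis=1)

# Placeholder softmax-like scores standing in for the macro and micro classifier
# outputs (assumed projected to a shared label space for the final decision).
rng = np.random.default_rng(0)
p_macro = rng.dirichlet(np.ones(10), size=32)
p_micro = rng.dirichlet(np.ones(10), size=32)
final_labels = weighted_decision_fusion([p_macro, p_micro], weights=[0.6, 0.4])
```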

Citation

Please cite this as:

Saha S.S., Sandha S.S., Srivastava M. (2021) Deep Convolutional Bidirectional LSTM for Complex Activity Recognition with Missing Data. In: Ahad M.A.R., Lago P., Inoue S. (eds) Human Activity Recognition Challenge. Smart Innovation, Systems and Technologies, vol 199. Springer, Singapore. https://doi.org/10.1007/978-981-15-8269-1_4

@Inbook{Saha2021,
author="Saha, Swapnil Sayan
and Sandha, Sandeep Singh
and Srivastava, Mani",
editor="Ahad, Md Atiqur Rahman
and Lago, Paula
and Inoue, Sozo",
title="Deep Convolutional Bidirectional LSTM for Complex Activity Recognition with Missing Data",
bookTitle="Human Activity Recognition Challenge",
year="2021",
publisher="Springer Singapore",
address="Singapore",
pages="39--53",
isbn="978-981-15-8269-1",
doi="10.1007/978-981-15-8269-1_4",
url="https://doi.org/10.1007/978-981-15-8269-1_4"
}