
smellslikeml / ActionAI

License: GPL-3.0
Custom human activity recognition modules by pose estimation and cascaded inference using the sklearn API

Programming Languages

Python

Projects that are alternatives to or similar to ActionAI

Libfaceid
libfaceid is a research framework for prototyping face recognition solutions. It seamlessly integrates multiple detection, recognition, and liveness models with speech synthesis and speech recognition.
Stars: ✭ 354 (-12.38%)
Mutual labels:  raspberry-pi, scikit-learn, pose-estimation
Rnn For Human Activity Recognition Using 2d Pose Input
Activity Recognition from 2D pose using an LSTM RNN
Stars: ✭ 165 (-59.16%)
Mutual labels:  lstm, pose-estimation
Ailearning
AiLearning: Machine Learning (ML), Deep Learning (DL), and Natural Language Processing (NLP)
Stars: ✭ 32,316 (+7899.01%)
Mutual labels:  lstm, scikit-learn
MachineLearning
Implementations of machine learning algorithm by Python 3
Stars: ✭ 16 (-96.04%)
Mutual labels:  scikit-learn, lstm
Text Classification
Machine Learning and NLP: Text Classification using python, scikit-learn and NLTK
Stars: ✭ 239 (-40.84%)
Mutual labels:  scikit-learn, machinelearning
Igel
a delightful machine learning tool that allows you to train, test, and use models without writing code
Stars: ✭ 2,956 (+631.68%)
Mutual labels:  scikit-learn, machinelearning
turbofan failure
Aircraft engine failure prediction model
Stars: ✭ 23 (-94.31%)
Mutual labels:  scikit-learn, lstm
Model Describer
model-describer : Making machine learning interpretable to humans
Stars: ✭ 22 (-94.55%)
Mutual labels:  scikit-learn, machinelearning
Forecasting-Solar-Energy
Forecasting Solar Power: Analysis of using an LSTM Neural Network
Stars: ✭ 23 (-94.31%)
Mutual labels:  lstm, machinelearning
Machine-Learning
The projects I do in Machine Learning with PyTorch, Keras, TensorFlow, scikit-learn, and Python.
Stars: ✭ 54 (-86.63%)
Mutual labels:  scikit-learn, lstm
GeneticAlgorithmForFeatureSelection
Search for the best feature subset for your classification model
Stars: ✭ 82 (-79.7%)
Mutual labels:  classifier, machinelearning
Bet On Sibyl
Machine Learning Model for Sport Predictions (Football, Basketball, Baseball, Hockey, Soccer & Tennis)
Stars: ✭ 190 (-52.97%)
Mutual labels:  scikit-learn, machinelearning
Emlearn
Machine Learning inference engine for Microcontrollers and Embedded devices
Stars: ✭ 154 (-61.88%)
Mutual labels:  scikit-learn, classifier
Datacamp Python Data Science Track
All the slides, accompanying code, and exercises are stored in this repo. 🎈
Stars: ✭ 250 (-38.12%)
Mutual labels:  scikit-learn, machinelearning
Data Umbrella Scikit Learn Sprint
Jun 2020 scikit-learn sprint
Stars: ✭ 93 (-76.98%)
Mutual labels:  scikit-learn, machinelearning
website-fingerprinting
Deanonymizing Tor or VPN users with website fingerprinting and machine learning.
Stars: ✭ 59 (-85.4%)
Mutual labels:  classifier, scikit-learn
Machinelearning
My blogs and code for machine learning. http://cnblogs.com/pinard
Stars: ✭ 5,984 (+1381.19%)
Mutual labels:  scikit-learn, machinelearning
ML-For-Beginners
12 weeks, 26 lessons, 52 quizzes, classic Machine Learning for all
Stars: ✭ 40,023 (+9806.68%)
Mutual labels:  scikit-learn, machinelearning
Code
Compilation of R and Python programming codes on the Data Professor YouTube channel.
Stars: ✭ 287 (-28.96%)
Mutual labels:  scikit-learn, machinelearning
Thesemicolon
This repository contains IPython notebooks and datasets for the data analytics YouTube tutorials on The Semicolon.
Stars: ✭ 345 (-14.6%)
Mutual labels:  lstm, scikit-learn

ActionAI 🤸


ActionAI is a Python library for training machine learning models to classify human actions. It is a generalization of our yoga smart personal trainer, which is included in this repo as an example.

Getting Started

These instructions will show you how to prepare your image data, train a model, and deploy the model to classify human actions from image samples. See Deployment below for notes on how to deploy the project on a live stream.

Prerequisites

Installing

We recommend using a virtual environment to avoid any conflicts with your system's global configuration. You can install the required dependencies via pip:

# Assuming your python path points to python 3.x
$ pip install -r requirements.txt

Jetson Nano Installation

We use the trt_pose repo to extract pose estimations. Please refer to that repo to install its required dependencies. You will also need to download these zipped model assets and unzip the package into the models/ directory.

All preprocessing, training, and deployment configuration variables are stored in the conf.py file in the config/ directory. You can create your own conf.py files and store them in this directory for fast experimentation.

The included conf.py file imports a LinearRegression model as our classifier by default.
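
Because the pipeline follows the scikit-learn API, swapping classifiers is just a matter of pointing the config at a different estimator class. A minimal sketch of what a custom conf.py might contain is shown below; the classifier and csv_path names match the Example section that follows, but the file path and the choice of estimator here are only illustrative assumptions:

# config/conf.py -- illustrative sketch, not the shipped default
from sklearn.ensemble import RandomForestClassifier

# Path to the CSV produced by preprocess.py (exact filename is an assumption)
csv_path = 'data/dataset.csv'

# Any scikit-learn classifier class works; actionModel() instantiates it
# via config.classifier()
classifier = RandomForestClassifier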

Example

After preprocessing your image data with the preprocess.py script, you can create a model by calling the actionModel() function, which creates a scikit-learn pipeline. Then, call the trainModel() function with your data to train:

# Stage your model
pipeline = actionModel(config.classifier())

# Train your model
model = trainModel(config.csv_path, pipeline)
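
Assuming trainModel() returns the fitted pipeline (as the example above suggests), you can persist it yourself with the standard pickle module; the output filename below is only an example, and train.py's own output name may differ:

import pickle

# Serialize the fitted scikit-learn pipeline to disk for later inference
with open('models/action_classifier.pkl', 'wb') as f:
    pickle.dump(model, f)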

Data processing

Arrange your image data as a directory of subdirectories, each subdirectory named as a label for the images contained in it. Your directory structure should look like this:

├── images_dir
│   ├── class_1
│   │   ├── sample1.png
│   │   ├── sample2.jpg
│   │   ├── ...
│   ├── class_2
│   │   ├── sample1.png
│   │   ├── sample2.jpg
│   │   ├── ...
.   .
.   .

Samples should be standard image files recognized by the Pillow library.

To generate a dataset from your images, run the preprocess.py script.

$ python preprocess.py

This will stage the labeled image dataset in a CSV file written to the data/ directory.
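
The labeling scheme is simply "subdirectory name equals class label." As a rough sketch of that mapping (not the actual preprocess.py implementation, which may store additional columns, and with an assumed output filename), the CSV could be staged like this:

# Illustrative directory-to-CSV mapping, not the repo's preprocess.py
import csv
from pathlib import Path

images_dir = Path('images_dir')
rows = []
for class_dir in sorted(p for p in images_dir.iterdir() if p.is_dir()):
    for image_path in sorted(class_dir.glob('*')):
        # the subdirectory name doubles as the label for its images
        rows.append((str(image_path), class_dir.name))

with open('data/labels.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['image', 'label'])
    writer.writerows(rows)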

Training

After reading the CSV file into a dataframe, a custom scikit-learn transformer estimates body keypoints to produce a low-dimensional feature vector for each sample image. This representation is fed into the scikit-learn classifier set in the config file. This approach works well for lightweight applications that require classifying a pose, like the YogAI use case.
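
This transformer-plus-classifier split follows the standard scikit-learn estimator pattern. A conceptual sketch of such a transformer is shown below; the class name and its pose_estimator argument are hypothetical stand-ins, not the repo's actual implementation:

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class PoseFeatureTransformer(BaseEstimator, TransformerMixin):
    """Map each sample image to a flat vector of estimated body keypoints."""

    def __init__(self, pose_estimator):
        # pose_estimator: any callable returning an array of keypoint coordinates
        self.pose_estimator = pose_estimator

    def fit(self, X, y=None):
        return self  # stateless; nothing to learn from the training set

    def transform(self, X):
        # X is an iterable of images; each becomes one low-dimensional feature row
        return np.array([self.pose_estimator(img).ravel() for img in X])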

Run the train.py script to train and save a classifier:

$ python train.py

The pickled model will be saved in the models/ directory.

To train a more complex model that classifies a sequence of poses culminating in an action (e.g. a squat or spin), use the train_sequential.py script. This script trains an LSTM model to classify movements.

$ python train_sequential.py
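
For reference, here is a minimal sketch of the kind of sequence model train_sequential.py could produce, written with tf.keras (an assumption, suggested by the .h5 demo model mentioned under Deployment); the sequence length, feature dimension, and layer sizes are likewise assumptions:

import tensorflow as tf

timesteps, num_features, num_classes = 32, 36, 2   # e.g. 18 keypoints x (x, y)

# One LSTM layer over per-frame pose features, followed by a softmax over actions
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(timesteps, num_features)),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(X_sequences, y_labels, epochs=20)
# model.save('models/my_lstm.h5')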

Deployment

We've provided a sample inference script, inference.py, that will read input from a webcam, an mp4 file, or an RTSP stream, run inference on each frame, and print the inference results.
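
The core of such a script is a read-predict-print loop over frames. A minimal sketch is shown below; classify_frame is a hypothetical stand-in for the pose-estimation and classification step, not a function from this repo:

import sys
import cv2

arg = sys.argv[1] if len(sys.argv) > 1 else '0'
source = int(arg) if arg.isdigit() else arg   # webcam index, mp4 path, or rtsp:// URL

cap = cv2.VideoCapture(source)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # label = classify_frame(frame)   # hypothetical: keypoints -> action label
    # print(label)
cap.release()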

If you are running on a Jetson Nano, you can try the iva.py script, which performs multi-person tracking and activity recognition like the demo GIF referenced above under Getting Started. Make sure you have followed the Jetson Nano installation instructions above, then simply run:

$ python iva.py 0

# or if you have a video file

$ python iva.py /path/to/file.mp4

If specified, this script will write a labeled video as out.mp4. This demo uses a sample model called lstm_spin_squat.h5 to classify spinning vs. squatting. Change the model and motion dictionary under the RUNSECONDARY flag to run your own classifier.
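
If you do swap in your own sequence model, the wiring amounts to loading the .h5 file and mapping class indices to action names. A sketch under assumptions (the model path and the index-to-label ordering below are guesses, not taken from the repo):

import tensorflow as tf

secondary_model = tf.keras.models.load_model('models/lstm_spin_squat.h5')
motion_dict = {0: 'spin', 1: 'squat'}   # index -> action label; ordering is assumed

# pose_sequence: array of shape (timesteps, features) collected per tracked person
# probs = secondary_model.predict(pose_sequence[None, ...])
# print(motion_dict[int(probs.argmax())])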

Teachable Machine

We've also included a script under the experimental folder, teachable_machine.py, that supports labeling samples via a PS3 controller on a Jetson Nano and training in real time from a webcam stream. This requires some extra dependencies.

To test it, run:

# Using a webcam
$ python experimental/teachable_machine.py /dev/video0  

# Using a video asset
$ python experimental/teachable_machine.py /path/to/file.mp4  

This script will also write the labeled data to a CSV file in the data/ directory and produce a video asset, out.mp4.
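
The "train in real time" idea can be illustrated with scikit-learn's incremental learning API. The snippet below is a conceptual sketch of folding newly labeled pose-feature vectors into a classifier on the fly, not the actual teachable_machine.py logic, and the controller-button label mapping is assumed:

import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()
classes = np.array([0, 1])   # labels assigned by controller buttons (assumed)

def update(features, label):
    # Fold one newly labeled pose-feature vector into the model
    clf.partial_fit(features.reshape(1, -1), [label], classes=classes)

def predict(features):
    return clf.predict(features.reshape(1, -1))[0]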

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

License

This project is licensed under the GNU General Public License v3.0 - see the LICENSE.md file for details.
