
loicmarie / Hands Detection

Licence: MIT
Hands video tracker using the Tensorflow Object Detection API and a Faster R-CNN model. The data used is the Hand Dataset from the University of Oxford.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Hands Detection

Taco
🌮 Trash Annotations in Context Dataset Toolkit
Stars: ✭ 243 (+179.31%)
Mutual labels:  object-detection, dataset
Shape Detection
🟣 Object detection of abstract shapes with neural networks
Stars: ✭ 170 (+95.4%)
Mutual labels:  object-detection, dataset
Epic Kitchens 55 Annotations
🍴 Annotations for the EPIC KITCHENS-55 Dataset.
Stars: ✭ 120 (+37.93%)
Mutual labels:  object-detection, dataset
Tju Dhd
A newly built high-resolution dataset for object detection and pedestrian detection (IEEE TIP 2020)
Stars: ✭ 75 (-13.79%)
Mutual labels:  object-detection, dataset
Maskrcnn Modanet
A Mask R-CNN Keras implementation with Modanet annotations on the Paperdoll dataset
Stars: ✭ 59 (-32.18%)
Mutual labels:  object-detection, dataset
Exclusively Dark Image Dataset
Exclusively Dark (ExDARK) dataset which to the best of our knowledge, is the largest collection of low-light images taken in very low-light environments to twilight (i.e 10 different conditions) to-date with image class and object level annotations.
Stars: ✭ 274 (+214.94%)
Mutual labels:  object-detection, dataset
Lacmus
Lacmus is a cross-platform application that helps to find people who are lost in the forest using computer vision and neural networks.
Stars: ✭ 142 (+63.22%)
Mutual labels:  object-detection, dataset
Tensorflow object tracking video
Object Tracking in Tensorflow ( Localization Detection Classification ) developed to participate in the ImageNET VID competition
Stars: ✭ 491 (+464.37%)
Mutual labels:  object-detection, dataset
Awesome machine learning solutions
A curated list of repositories for my book Machine Learning Solutions.
Stars: ✭ 65 (-25.29%)
Mutual labels:  object-detection, dataset
Vidvrd Helper
To keep updates with VRU Grand Challenge, please use https://github.com/NExTplusplus/VidVRD-helper
Stars: ✭ 81 (-6.9%)
Mutual labels:  object-detection, dataset
Gtavisionexport
Code to export full segmentations from GTA
Stars: ✭ 83 (-4.6%)
Mutual labels:  object-detection
Fog Google
Fog for Google Cloud Platform
Stars: ✭ 83 (-4.6%)
Mutual labels:  google-cloud
Frostnet
FrostNet: Towards Quantization-Aware Network Architecture Search
Stars: ✭ 85 (-2.3%)
Mutual labels:  object-detection
Cesi
WWW 2018: CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information
Stars: ✭ 85 (-2.3%)
Mutual labels:  dataset
Gossipnet
Non-maximum suppression for object detection in a neural network
Stars: ✭ 83 (-4.6%)
Mutual labels:  object-detection
Fastai
R interface to fast.ai
Stars: ✭ 85 (-2.3%)
Mutual labels:  object-detection
Labelme
automatic tagging data, the training data prepare for mask-rcnn
Stars: ✭ 83 (-4.6%)
Mutual labels:  object-detection
Crnn With Stn
implement CRNN in Keras with Spatial Transformer Network
Stars: ✭ 83 (-4.6%)
Mutual labels:  dataset
Fashion Mnist
A MNIST-like fashion product database. Benchmark 👇
Stars: ✭ 9,675 (+11020.69%)
Mutual labels:  dataset
Rotated iou
Differentiable IoU of rotated bounding boxes using Pytorch
Stars: ✭ 85 (-2.3%)
Mutual labels:  object-detection

Hands Detection

Hands video tracker using the Tensorflow Object Detection API and a Faster R-CNN model. The data used is the "Hand Dataset" from the University of Oxford. The dataset can be found here. For more information, see "Hand detection using multiple proposals", A. Mittal, A. Zisserman, P. H. S. Torr, British Machine Vision Conference, 2011.

You can find a demo here.

Demo

Installation

First, we need to install the Tensorflow Object Detection API. You can either install the dependencies yourself or run the provided Docker image.

Installing dependencies

Please follow the Tensorflow Object Detection API installation tutorial in the models/ directory.
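
In rough terms, the installation amounts to installing a few dependencies, compiling the protobuf definitions and exposing the API on the PYTHONPATH. The commands below are a minimal sketch assuming a standard checkout of the models repository; package names may differ on your system, so treat the tutorial as authoritative.

# install system and Python dependencies (a typical, not exhaustive, list)
sudo apt-get install protobuf-compiler python-pil python-lxml
pip install jupyter matplotlib

# from the models/ directory: compile the Object Detection protos and expose the API
protoc object_detection/protos/*.proto --python_out=.
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim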

Using Docker

We use the gcr.io/tensorflow/tensorflow image, so the Jupyter and Tensorboard services are already available. The Tensorflow Object Detection API is also already installed; the next step is to pull data from the Hands Dataset.

docker build -t hands-tracker .
docker run -it -p 8888:8888 -p 6006:6006 hands-tracker bash
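
The run command above publishes ports 8888 and 6006, so the bundled Jupyter and Tensorboard services can be started by hand from inside the container. A hedged example follows; the log directory is an arbitrary placeholder, point it at wherever your checkpoints land.

# inside the container: start Jupyter and Tensorboard on the published ports
jupyter notebook --ip=0.0.0.0 --allow-root &
tensorboard --logdir=/tmp/hands_train &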

Training on Google Cloud ML

Pulling data from the Oxford University Hands Dataset

To pull the data into the dataset/ directory, use the following Python script:

python create_inputs_from_dataset.py

If you need more information about pulling data from the University of Oxford, or about converting the MAT annotation files to TFRecord files, see the IPython notebook for generating inputs. The dataset folder should be structured as follows (a quick verification sketch is given after the tree):

dataset/
|---  test_dataset/
|------  test_data/
|----------  images/
|----------  annotations/
|---  training_dataset/
|------  training_data/
|----------  images/
|----------  annotations/
|---  validation_dataset/
...
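
One way to sanity-check that the pull succeeded is to count the files in each split; the .jpg and .mat extensions below are assumptions based on the original Oxford distribution.

# count images and annotation files in the training split
find dataset/training_dataset -name '*.jpg' | wc -l
find dataset/training_dataset -name '*.mat' | wc -l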

Deploy model to Google Cloud Storage

To make the following steps easier, set the following environment variables:

export GC_PROJECT_ID=<your_project_id>
export GCS_BUCKET=<your_gcs_bucket>

First, we have to log in with our Google Cloud account and set up the configuration:

gcloud auth login
gcloud config set project $GC_PROJECT_ID
gcloud auth application-default login

Next, we can deploy our project files to Google Cloud Storage using the following script:

./deploy_on_gcs.sh $GCS_BUCKET
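
The exact contents of deploy_on_gcs.sh are not reproduced here; conceptually, the deployment copies the generated TFRecord files, the pipeline config and a pre-trained checkpoint into the bucket. The sketch below is a rough, unverified equivalent, with file names that are assumptions; check the script for the real layout.

# illustrative only -- see deploy_on_gcs.sh for the actual file list
gsutil cp dataset/*.record gs://${GCS_BUCKET}/data/
gsutil cp faster_rcnn_resnet101_hands.config gs://${GCS_BUCKET}/data/
gsutil cp model.ckpt.* gs://${GCS_BUCKET}/data/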

Create training and eval jobs

Our project is ready for training. We can create our training job on Google Cloud ML:

gcloud ml-engine jobs submit training `whoami`_object_detection_`date +%s` \
    --job-dir=gs://${GCS_BUCKET}/train \
    --packages models/dist/object_detection-0.1.tar.gz,models/slim/dist/slim-0.1.tar.gz \
    --module-name object_detection.train \
    --region us-central1 \
    --scale-tier BASIC \
    -- \
    --train_dir=gs://${GCS_BUCKET}/train \
    --pipeline_config_path=gs://${GCS_BUCKET}/data/faster_rcnn_resnet101_hands.config
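
The object_detection-0.1.tar.gz and slim-0.1.tar.gz archives passed to --packages are the source distributions of the TOD API. If they are missing, they can usually be rebuilt along the following lines, assuming the models/ checkout used during installation.

# from the models/ directory: build the source distributions expected by --packages
python setup.py sdist
(cd slim && python setup.py sdist)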

The scale tier used here is 'BASIC', with which training takes a very long time; with the 'BASIC_GPU' tier, training takes approximately two hours. Be aware that you will be charged on your credit card once the job has begun.

Once the job has started, you can run an evaluation job as follows:

gcloud ml-engine jobs submit training `whoami`_object_detection_eval_`date +%s` \
    --job-dir=gs://${GCS_BUCKET}/train \
    --packages models/dist/object_detection-0.1.tar.gz,models/slim/dist/slim-0.1.tar.gz \
    --module-name object_detection.eval \
    --region us-central1 \
    --scale-tier BASIC_GPU \
    -- \
    --checkpoint_dir=gs://${GCS_BUCKET}/train \
    --eval_dir=gs://${GCS_BUCKET}/eval \
    --pipeline_config_path=gs://${GCS_BUCKET}/data/faster_rcnn_resnet101_hands.config

Monitoring

Finally, if you are using the provided Docker image, you can monitor your training job with Tensorboard:

tensorboard --logdir=gs://${GCS_BUCKET}
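
If you prefer the command line, the same jobs can also be followed with gcloud; for example (the job name is the one generated by the submit commands above):

# list submitted jobs and stream the logs of a running one
gcloud ml-engine jobs list
gcloud ml-engine jobs stream-logs <your_job_name>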