
whatrocks / cozmo-tensorflow

License: Apache-2.0
🤖 Cozmo the Robot recognizes objects with TensorFlow

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to cozmo-tensorflow

Assembled Cnn
Tensorflow implementation of "Compounding the Performance Improvements of Assembled Techniques in a Convolutional Neural Network"
Stars: ✭ 319 (+422.95%)
Mutual labels:  imagenet, transfer-learning
Skin Lesions Classification DCNNs
Transfer Learning with DCNNs (DenseNet, Inception V3, Inception-ResNet V2, VGG16) for skin lesions classification
Stars: ✭ 47 (-22.95%)
Mutual labels:  imagenet, transfer-learning
quick-start
FloydHub quick start project - train TensorFlow model with MNIST dataset
Stars: ✭ 23 (-62.3%)
Mutual labels:  floyd-cli, floydhub
super-gradients
Easily train or fine-tune SOTA computer vision models with one open source training library
Stars: ✭ 429 (+603.28%)
Mutual labels:  imagenet, transfer-learning
fastai-fall2018
🏃Notebooks from the USCF Deep Learning course (fast.ai v3)
Stars: ✭ 12 (-80.33%)
Mutual labels:  floyd-cli, floydhub
Rexnet
Official Pytorch implementation of ReXNet (Rank eXpansion Network) with pretrained models
Stars: ✭ 319 (+422.95%)
Mutual labels:  imagenet, transfer-learning
Big transfer
Official repository for the "Big Transfer (BiT): General Visual Representation Learning" paper.
Stars: ✭ 1,096 (+1696.72%)
Mutual labels:  imagenet, transfer-learning
named-entity-recognition-template
Build a deep learning model for predicting the named entities from text.
Stars: ✭ 51 (-16.39%)
Mutual labels:  floydhub
alexnet
Custom implementation of AlexNet with TensorFlow
Stars: ✭ 21 (-65.57%)
Mutual labels:  imagenet
save-and-resume
Checkpoint tutorial on FloydHub for Pytorch, Keras and Tensorflow.
Stars: ✭ 36 (-40.98%)
Mutual labels:  floydhub
Dawn Bench Entries
DAWNBench: An End-to-End Deep Learning Benchmark and Competition
Stars: ✭ 254 (+316.39%)
Mutual labels:  imagenet
language-identification-template
Detect the languages from short pieces of text
Stars: ✭ 20 (-67.21%)
Mutual labels:  floydhub
clean-net
Tensorflow source code for "CleanNet: Transfer Learning for Scalable Image Classifier Training with Label Noise" (CVPR 2018)
Stars: ✭ 86 (+40.98%)
Mutual labels:  transfer-learning
colornet-template
Colorizing B&W Photos with Neural Networks
Stars: ✭ 31 (-49.18%)
Mutual labels:  floydhub
pykale
Knowledge-Aware machine LEarning (KALE): accessible machine learning from multiple sources for interdisciplinary research, part of the 🔥PyTorch ecosystem
Stars: ✭ 381 (+524.59%)
Mutual labels:  transfer-learning
floyd-docs
FloydHub's documentation code. Contributions welcome!
Stars: ✭ 66 (+8.2%)
Mutual labels:  floydhub
mrnet
Building an ACL tear detector to spot knee injuries from MRIs with PyTorch (MRNet)
Stars: ✭ 98 (+60.66%)
Mutual labels:  transfer-learning
DeepFaceRecognition
Face Recognition with Transfer Learning
Stars: ✭ 16 (-73.77%)
Mutual labels:  transfer-learning
EffcientNetV2
EfficientNetV2 implementation using PyTorch
Stars: ✭ 94 (+54.1%)
Mutual labels:  imagenet
head-network-distillation
[IEEE Access] "Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-constrained Edge Computing Systems" and [ACM MobiCom HotEdgeVideo 2019] "Distilled Split Deep Neural Networks for Edge-assisted Real-time Systems"
Stars: ✭ 27 (-55.74%)
Mutual labels:  imagenet

cozmo-tensorflow

Cozmo the Robot learns to recognize everyday objects using TensorFlow and FloydHub.

[finder animation]

The setup

Install the Cozmo SDK

virtualenv ~/.env/cozmo -p python3
source ~/.env/cozmo/bin/activate
git clone https://www.github.com/whatrocks/cozmo-tensorflow
cd cozmo-tensorflow
pip install -r requirements.txt

Log in to the FloydHub CLI (you can sign up for a free account on the FloydHub website)

floyd login

1. Use Cozmo to generate training data

Getting enough training data for a deep learning project is often a pain. Thankfully, we have a robot who loves to run around and take photos with his camera, so let's just ask Cozmo to take pictures of the things we want him to learn. Let's start with a can of delicious, overpriced seltzer. Place Cozmo directly in front of the can, and make sure he has enough space to rotate around it while taking pictures. Be sure to enter the name of the object that Cozmo is photographing when you run the cozmo-paparazzi script:

python3 cozmo-paparazzi.py seltzer

[cozmo-paparazzi animation]

Repeat that step for as many objects (categories) as you want Cozmo to learn! You should now see all your image categories as subdirectories within the /data folder.
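
For a sense of what cozmo-paparazzi.py is doing, here's a minimal sketch of a paparazzi-style loop using the Cozmo SDK. This is not the repo's actual script - the shot count, turn angle, and the data/<label> folder layout are assumptions for illustration.

import os
import sys
import time

import cozmo
from cozmo.util import degrees

NUM_SHOTS = 36        # assumed number of photos per object
TURN_PER_SHOT = 10    # degrees to rotate between photos

def paparazzi(robot: cozmo.robot.Robot):
    label = sys.argv[1] if len(sys.argv) > 1 else "object"
    out_dir = os.path.join("data", label)
    os.makedirs(out_dir, exist_ok=True)

    robot.camera.image_stream_enabled = True
    for i in range(NUM_SHOTS):
        robot.turn_in_place(degrees(TURN_PER_SHOT)).wait_for_completed()
        time.sleep(0.5)                    # give the camera a moment to settle
        image = robot.world.latest_image   # cozmo.world.CameraImage or None
        if image is not None:
            image.raw_image.save(os.path.join(out_dir, f"{label}-{i:03d}.jpg"))

cozmo.run_program(paparazzi)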

Uploading dataset to FloydHub

Now, let's upload our images to FloydHub as a FloydHub Dataset so that we can reuse them across our model training and model serving jobs.

cd data
floyd data init cozmo-images
floyd data upload

2. Training our model on FloydHub

Make sure you're in the project's root directory, then initialize a FloydHub project so that we can train our model on a fully configured TensorFlow cloud GPU machine.

floyd init cozmo-tensorflow

Now we can kick off a deep learning training job on FloydHub. A couple of things to note:

  • We'll be doing some simple transfer learning with the Inception v3 model provided by Google. Instead of training a model from scratch, we can start with this pre-trained model, and then replace its final layer to teach it to recognize the objects we want Cozmo to learn (a rough sketch of this idea follows the run command below).
  • We're mounting the dataset that Cozmo created with the --data flag at the /data directory on our FloydHub machine.
  • I've edited this script (initially provided by the TensorFlow team) to write its output to the /output directory. This is important when you're using FloydHub, because FloydHub jobs always store their outputs in the /output directory. In our case, we'll be saving our retrained ImageNet model and the training labels to the /output folder.

floyd run \
  --gpu \
  --data whatrocks/datasets/cozmo-images:data \
  "python retrain.py --image_dir /data"

That's it! There's no need to configure anything on AWS or install TensorFlow or deal with GPU drivers or anything like that. If you'd like to use TensorBoard during your training jobs, just add --tensorboard to your run command.

Once your job is complete, you'll be able to see your newly retrained model in the job's output directory.

I recommend converting your job's output into a standalone FloydHub Dataset to make it easier for you to mount it in future jobs (which we're going to be doing in the next step). You can do this by clicking the 'Create Dataset' button on the job's output page.

3. Connecting Cozmo to our trained model on FloydHub

We can test our newly retrained model by running another job on FloydHub, this time in model-serving mode.

Model serving is an experimental feature on FloydHub - we'd love to hear your feedback on Twitter! You'll need to include a simple Flask app called app.py in your project's code for this feature to work. In our case, I've created a simple Flask app that evaluates an image using the model we trained in the last step.

floyd run \
  --data whatrocks/datasets/cozmo-imagenet:model \
  --mode serve
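
Something like the following sketch is the general shape of app.py. It is not the repo's exact code - the file names under /model, the /image route, the "file" upload key, and the tensor names all depend on how the model was retrained and are assumptions here.

import time

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

# Outputs of the retraining job, mounted at /model by the serve command (assumed names)
MODEL_PATH = "/model/retrained_graph.pb"
LABELS_PATH = "/model/retrained_labels.txt"

labels = [line.strip() for line in open(LABELS_PATH)]

# Load the frozen graph produced by retrain.py (uses TensorFlow's v1-compat APIs)
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(MODEL_PATH, "rb") as f:
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")
sess = tf.compat.v1.Session(graph=graph)

@app.route("/image", methods=["POST"])
def classify():
    start = time.time()
    img = Image.open(request.files["file"]).convert("RGB").resize((299, 299))
    batch = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)
    # Tensor names and normalization depend on the retraining script (assumed here)
    preds = sess.run("final_result:0", {"Placeholder:0": batch})[0]
    return jsonify({
        "answer": dict(zip(labels, preds.tolist())),
        "seconds": round(time.time() - start, 3),
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)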

Finally, let's run our cozmo-detective.py script to ask Cozmo to move around the office to find a specific object.

python3 cozmo-detective.py toothpaste

Every time Cozmo moves, he'll send a black-and-white image of whatever he's seeing to the model endpoint on FloydHub. FloydHub will run the model against the image and return the following payload with Cozmo's guesses and how long it took to compute them.

{
  'answer': 
    {
      'plant': 0.022327899932861328, 
      'seltzer': 0.9057837128639221, 
      'toothpaste': 0.07188836485147476
    }, 
  'seconds': 0.947
}

If Cozmo is at least 80% confident that he's looking at the object in question, then he'll run towards it victoriously!
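
For illustration, a rough sketch of that find-and-charge loop might look like the following. This is not the repo's cozmo-detective.py - the serving URL placeholder, the "file" upload key, the turn angle, and the drive distance are assumptions (only the 0.8 threshold comes from the text above).

import io
import sys

import cozmo
import requests
from cozmo.util import degrees, distance_mm, speed_mmps

SERVE_URL = "https://<your-floydhub-serve-endpoint>"  # placeholder for your serve job's URL

def detective(robot: cozmo.robot.Robot):
    target = sys.argv[1] if len(sys.argv) > 1 else "toothpaste"
    robot.camera.image_stream_enabled = True
    while True:
        robot.turn_in_place(degrees(15)).wait_for_completed()
        image = robot.world.latest_image
        if image is None:
            continue
        buf = io.BytesIO()
        image.raw_image.save(buf, format="JPEG")
        buf.seek(0)
        guesses = requests.post(SERVE_URL, files={"file": buf}).json()["answer"]
        if guesses.get(target, 0.0) >= 0.8:   # at least 80% confident
            robot.drive_straight(distance_mm(200), speed_mmps(50)).wait_for_completed()
            break

cozmo.run_program(detective)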

[finder animation]

Once you are done, don't forget to shut down your FloydHub serving job on the FloydHub website!

References

This project is an extension of @nheidloff's Cozmo visual recognition project and the Google Codelabs TensorFlow for Poets project.
