
felipessalvatore / Self_driving_pi_car

License: MIT
A deep-neural-network-based self-driving car that combines Lego Mindstorms NXT with the computational power of a Raspberry Pi 3.

Programming Languages

Python: 139,335 projects (#7 most used programming language)

Projects that are alternatives to, or similar to, Self_driving_pi_car

Donkeycar
Open source hardware and software platform to build a small scale self driving car.
Stars: ✭ 2,192 (+194.62%)
Mutual labels:  self-driving-car, raspberry-pi
Tensorrider self driving car
A self-driving model car based on a BP (backpropagation) neural network. Includes the programs needed for data collection, control-model generation, and online/offline autonomous driving.
Stars: ✭ 17 (-97.72%)
Mutual labels:  self-driving-car, raspberry-pi
Self drive
A Raspberry Pi based self-driving toy car that uses the Raspberry Pi and TensorFlow to drive itself around a track.
Stars: ✭ 749 (+0.67%)
Mutual labels:  self-driving-car, raspberry-pi
Gtsrb
Convolutional Neural Network for German Traffic Sign Recognition Benchmark
Stars: ✭ 65 (-91.26%)
Mutual labels:  self-driving-car, cnn
Vpilot
Scripts and tools to easily communicate with DeepGTAV. In the future a self-driving agent will be implemented.
Stars: ✭ 136 (-81.72%)
Mutual labels:  self-driving-car, cnn
Self Driving Car
An end-to-end CNN model that predicts the steering-wheel angle from video/images
Stars: ✭ 106 (-85.75%)
Mutual labels:  self-driving-car, cnn
True artificial intelligence
True AI (artificial intelligence)
Stars: ✭ 38 (-94.89%)
Mutual labels:  self-driving-car, raspberry-pi
Self Driving Car 3d Simulator With Cnn
Implements a self-driving car in a 3D driving simulator; a CNN is used for training.
Stars: ✭ 143 (-80.78%)
Mutual labels:  self-driving-car, cnn
Self Driving Toy Car
A self driving toy car using end-to-end learning
Stars: ✭ 494 (-33.6%)
Mutual labels:  self-driving-car, raspberry-pi
Text Classification
Implementation of papers for text classification task on DBpedia
Stars: ✭ 682 (-8.33%)
Mutual labels:  cnn
Jarvis
Jarvis.sh is a simple configurable multi-lang assistant.
Stars: ✭ 701 (-5.78%)
Mutual labels:  raspberry-pi
Carla
Open-source simulator for autonomous driving research.
Stars: ✭ 7,012 (+842.47%)
Mutual labels:  self-driving-car
Freenos
FreeNOS (Free Niek's Operating System) is an experimental microkernel based operating system for learning purposes written in C++. You may use the code as you wish under the terms of the GPLv3.
Stars: ✭ 683 (-8.2%)
Mutual labels:  raspberry-pi
Torchio
Medical image preprocessing and augmentation toolkit for deep learning
Stars: ✭ 708 (-4.84%)
Mutual labels:  cnn
Raspberry Pi Turnkey
How to make a Raspberry Pi image that can be deployed anywhere and assigned to a WiFi network without SSH 👌
Stars: ✭ 682 (-8.33%)
Mutual labels:  raspberry-pi
Raspberrycast
📺 Transform your Raspberry Pi into a streaming device. Videos can be sent from mobile devices or computers (Chrome extension).
Stars: ✭ 726 (-2.42%)
Mutual labels:  raspberry-pi
Joycontrol
Emulate Nintendo Switch Controllers over Bluetooth
Stars: ✭ 667 (-10.35%)
Mutual labels:  raspberry-pi
Prototypical Networks For Few Shot Learning Pytorch
Implementation of Prototypical Networks for Few Shot Learning (https://arxiv.org/abs/1703.05175) in Pytorch
Stars: ✭ 669 (-10.08%)
Mutual labels:  cnn
Bosssensor
Hide screen when boss is approaching.
Stars: ✭ 6,081 (+717.34%)
Mutual labels:  cnn
Dlib face recognition from camera
Detect and recognize faces from a camera; supports recognizing multiple faces at once.
Stars: ✭ 719 (-3.36%)
Mutual labels:  cnn

Self-Driving Pi Car

Build Status License

Introduction

Self-Driving Pi Car is a deep-neural-network-based self-driving car that combines Lego Mindstorms NXT with the computational power of a Raspberry Pi 3.

This repository was created by Paula Moraes and Felipe Salvatore.

Robot driving on a track

You can read more about the project on Medium.

Getting Started

Install

The first thing you need to do is install all the libraries for the Raspberry Pi. To do so, open a terminal on the Raspberry Pi and run:

$ cd raspi_utils/
$ bash install.sh

On the computer where you will perform the training -- pro tip: don't train the model on the Raspberry Pi! -- install all the requirements by running:

$ pip install -r requirements.txt

Usage

Attention: all Python code in the master branch is written for Python 2. If you would like to run this project with Python 3, please switch to the python3 branch of this repository.

Collecting data

Before doing any kind of training you need to collect the track data. On the Raspberry Pi -- with the assembled robot -- run the data collection script:

  $ cd self_driving/data_collection/ 
  $ python DataCollector.py -n <images_folder_name>

Pressing q will stop execution and save all images and the pickle file.

Inside the folder <images_folder_name> there will be subdirectories named by timestamp (e.g. 2018-02-17-23-27-02) containing the collected *.png images. All associated labels are saved in a pickle file (e.g. 2018-02-17-23-27-02_pickle) in <images_folder_name>.
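The exact structure of the pickle file is defined by DataCollector.py; purely as an illustration, assuming it stores one integer label per captured frame, it could be written and inspected like this (the filename and label encoding here are hypothetical):

```python
import pickle

# Illustrative only: write a labels file the way DataCollector.py might,
# then read it back. The real label encoding is defined by the project.
labels = [0, 1, 1, 2, 0]  # e.g. one command class per captured frame

with open("2018-02-17-23-27-02_pickle", "wb") as f:
    pickle.dump(labels, f)

with open("2018-02-17-23-27-02_pickle", "rb") as f:
    loaded = pickle.load(f)

print(loaded)       # [0, 1, 1, 2, 0]
print(len(loaded))  # number of frames collected
```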

Compress the <images_folder_name> directory and transfer it from the Raspberry Pi to the other computer (using the scp command, cloud storage, email, etc.):

  $ tar cvf <images_folder_name>.tar <images_folder_name>

Attention: please continue following the instructions on the computer that will be used for training.

Generating npy and tfrecords

Before generating tfrecords, you need to transform the untarred <images_folder_name> directory -- containing all the folders of images and pickles -- into a tuple of np.arrays. Running the following script creates the <npy_files_name>_90_160_3_data.npy and <npy_files_name>_90_160_3_labels.npy files:

  $ cd self_driving/data_manipulation/
  $ python img2array.py <images_folder_path> <npy_folder_path> <npy_files_name>
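The output of this step can be pictured as a pair of saved NumPy arrays, one for frames and one for labels. The sketch below is illustrative only: the 90_160_3 suffix suggests frames of height 90, width 160, and 3 channels, but the exact array layout and dtype are assumptions, not the script's documented behavior.

```python
import numpy as np

# Hypothetical illustration of the files img2array.py produces.
n_images = 4
data = np.zeros((n_images, 90, 160, 3), dtype=np.uint8)  # collected frames
labels = np.zeros((n_images,), dtype=np.int64)           # one label per frame

name = "run"  # corresponds to <npy_files_name>
np.save("{}_90_160_3_data.npy".format(name), data)
np.save("{}_90_160_3_labels.npy".format(name), labels)

print(np.load("run_90_160_3_data.npy").shape)  # (4, 90, 160, 3)
```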

To generate tfrecords from *.npy and augment or manipulate (e.g. binarize) the data, run:

 $ cd ../ml_training/ 
 $ python generate_tfrecords.py <npy_data_path> <npy_labels_path> -n <name_tfrecords> 

This results in the <name_tfrecords>_train.tfrecords, <name_tfrecords>_test.tfrecords, and <name_tfrecords>_valid.tfrecords files.
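The script handles the train/test/valid partitioning internally. Purely to illustrate the idea behind such a three-way split, here is a minimal sketch (the 80/10/10 ratios and the seeded shuffle are arbitrary choices, not the script's actual values):

```python
import random

def split_indices(n, train=0.8, test=0.1, seed=0):
    """Shuffle indices 0..n-1 and split them into train/test/valid partitions."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    n_train = int(n * train)
    n_test = int(n * test)
    return (idx[:n_train],
            idx[n_train:n_train + n_test],
            idx[n_train + n_test:])

train_idx, test_idx, valid_idx = split_indices(100)
print(len(train_idx), len(test_idx), len(valid_idx))  # 80 10 10
```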

Hyperparameters optimization

Attention: all code in this section can be run with both Python 2 and 3, with TensorFlow 1.2.1 (or above), and with GPU support if available.

Now it's time to test different architectures, learning rates and optimizers, in the hopes of improving accuracy.

Best architecture search

Running the following script will create the architecture_results.txt file with the results for a given configuration passed via optional arguments.

 $ python best_architecture.py -n <name_tfrecords>

Best learning rate search

Running the following script will create the learning_rate_results.txt file with the results for a given configuration passed via optional arguments.

 $ python best_learning_rate.py -n <name_tfrecords>

Best optimizer search

Running the following script will create the optimizer_results.txt file with the results for a given configuration passed via optional arguments.

 $ python best_optimizer.py -n <name_tfrecords>

Training the model

Attention: from this point on, we are back to Python 2.

After searching for an appropriate combination of hyperparameters, train the model by running this script with the additional arguments relevant to the model:

  $ python train.py -n <name_tfrecords> -v

The result will be a checkpoints directory containing all the files needed to deploy the model.

Accuracy test

You can test for accuracy with the script:

  $ python acc_test.py -n <name_tfrecords>

Simulation

Before going live, it's possible to simulate the model in action on track images. The simulation script uses the checkpoints and <images_folder_path> to generate new images, saved in <output_images_folder_path>, stamped with the probabilities for each class.

  $ cd ../
  $ python simulation.py <images_folder_path> <output_images_folder_path>
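The probability stamp reflects the model's output layer. Assuming the network emits raw per-class logits (an assumption about the architecture, not something the README states), the stamped probabilities would come from a softmax over them:

```python
import math

def softmax(logits):
    """Numerically stable softmax: shift by the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three steering classes.
probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])  # [0.659, 0.242, 0.099]
```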

Example:

Self-driving

Attention: this section must be run on Raspberry Pi.

After training the model and loading its checkpoints onto the Raspberry Pi, two modes are available: regular and debug.

In regular mode, the car takes an action based on the model's prediction for an image taken by the camera. Pressing q stops the execution:

$ python DiffController.py 
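The mapping from prediction to motor command is defined in DiffController.py; conceptually it reduces to an argmax over the class probabilities. The class names below are hypothetical, used only to make the idea concrete:

```python
# Hypothetical class names: the project's actual label set may differ.
ACTIONS = ["forward", "left", "right"]

def pick_action(probs):
    """Return the action whose class has the highest predicted probability."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return ACTIONS[best]

print(pick_action([0.1, 0.7, 0.2]))  # left
```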

Debug mode works the same way as regular mode, but also creates a debug-run directory containing all images taken during execution, stamped with the probabilities for each class. Pressing q stops the execution:

$ python DiffController.py -d

Running the tests

There are two kinds of tests: those for the Raspberry Pi and those for the training computer. On the Raspberry Pi, run:

$ python setup.py test 

These tests check whether the connection with the NXT robot is working.

And on the training computer:

  $ bash test_script.sh 

These last tests check whether the image manipulation functions and the TensorFlow model are doing what they are supposed to do.

Built With

Citation

  @misc{self_driving_pi_car2018,
    author = {Paula Moraes and Felipe Salvatore},
    title = {Self-Driving Pi Car},
    year = {2018},
    howpublished = {\url{https://github.com/felipessalvatore/self_driving_pi_car}},
    note = {commit xxxxxxx}
  }