
NVIDIA-AI-IOT / Formula1Epoch

Licence: other
An autonomous R.C. racecar which detects people.

Programming Languages

Makefile, CMake, Python, C++, Common Lisp, C

Projects that are alternatives of or similar to Formula1Epoch

dukes
Self Driving RC Car
Stars: ✭ 17 (-73.02%)
Mutual labels:  self-driving-car, rc-car
Autonomous-RC-Car
Self-driving RC Car ROS Software
Stars: ✭ 17 (-73.02%)
Mutual labels:  self-driving-car, rc-car
LiDAR-GTA-V
A plugin for Grand Theft Auto V that generates a labeled LiDAR point cloud from the game environment.
Stars: ✭ 127 (+101.59%)
Mutual labels:  self-driving-car
mpc
A software pipeline using the Model Predictive Control method to drive a car around a virtual track.
Stars: ✭ 119 (+88.89%)
Mutual labels:  self-driving-car
Light-Condition-Style-Transfer
Lane Detection in Low-light Conditions Using an Efficient Data Enhancement: Light Conditions Style Transfer (IV 2020)
Stars: ✭ 133 (+111.11%)
Mutual labels:  self-driving-car
Traffic-Sign-CNN
Deep learning network for traffic sign image classification
Stars: ✭ 15 (-76.19%)
Mutual labels:  self-driving-car
highway-path-planning
My path-planning pipeline to navigate a car safely around a virtual highway with other traffic.
Stars: ✭ 39 (-38.1%)
Mutual labels:  self-driving-car
mediapipe plus
The purpose of this project is to apply mediapipe to more AI chips.
Stars: ✭ 38 (-39.68%)
Mutual labels:  jetson
end-to-end-deep-learning
Autonomous driving simulation in the Unity engine.
Stars: ✭ 27 (-57.14%)
Mutual labels:  self-driving-car
dockerfile-yolov5-jetson
Dockerfile for yolov5 inference on NVIDIA Jetson
Stars: ✭ 30 (-52.38%)
Mutual labels:  jetson
jetson-monitor
🚨 Jetson is an HTTP monitoring service used to notify by various messaging platforms such as Slack and Telegram
Stars: ✭ 45 (-28.57%)
Mutual labels:  jetson
Model-Predictive-Control
Udacity Self-Driving Car Engineer Nanodegree. Project: Model Predictive Control
Stars: ✭ 50 (-20.63%)
Mutual labels:  self-driving-car
yolov4 trt ros
YOLOv4 object detector using TensorRT engine
Stars: ✭ 89 (+41.27%)
Mutual labels:  jetson
SelfDrivingRCCar
Autonomous RC Car using Neural Networks, Python and Open CV
Stars: ✭ 102 (+61.9%)
Mutual labels:  self-driving-car
DonkeyDrift
Open-source self-driving car based on DonkeyCar and programmable chassis
Stars: ✭ 15 (-76.19%)
Mutual labels:  self-driving-car
Articles-Bookmarked
No description or website provided.
Stars: ✭ 30 (-52.38%)
Mutual labels:  self-driving-car
dig-into-apollo
Apollo notes (Apollo学习笔记) - Apollo learning notes for beginners.
Stars: ✭ 1,786 (+2734.92%)
Mutual labels:  self-driving-car
Visualizing-lidar-data
Visualizing lidar data using Uber Autonomous Visualization System (AVS) and Jupyter Notebook Application
Stars: ✭ 75 (+19.05%)
Mutual labels:  self-driving-car
CAP augmentation
Cut and paste augmentation for object detection and instance segmentation
Stars: ✭ 93 (+47.62%)
Mutual labels:  self-driving-car
ROSE
ROSE project car race game
Stars: ✭ 24 (-61.9%)
Mutual labels:  self-driving-car

logo.png

Repository for the Formula 1 Epoch, or F1Epoch, Team.

The Problem

Emergency Applications

The F1Epoch RACECAR robots, Epoch and RaceX, are deployable in a wide variety of emergencies that can affect an office environment, from a fire to an earthquake to a severe storm. Meant to stand in for a fire marshal, these robots make repeated head-count assessments of the building: each robot tries to detect people who are motionless and, based on its sensors and a timestamp, sends a report of the location of each person spotted.
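As a rough illustration only (these names and the report format are our own invention here, not code from the repo), a head-count report could pair each detection with a timestamp and a position relative to the starting point:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x_m: float        # position relative to the start point, metres
    y_m: float
    timestamp: str    # ISO-8601 time of the sighting

def head_count_report(detections):
    """Summarize detections into a report for a rescue team."""
    lines = [f"People found: {len(detections)}"]
    for i, d in enumerate(detections, 1):
        lines.append(f"  #{i}: ({d.x_m:.1f} m, {d.y_m:.1f} m) at {d.timestamp}")
    return "\n".join(lines)

report = head_count_report([
    Detection(3.2, -1.5, "2018-07-01T14:03:22Z"),
    Detection(10.8, 4.0, "2018-07-01T14:05:10Z"),
])
print(report)
```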

The Principal Task Explained

The RACECARs apply deep learning - a handful of neural networks - to drive around a given map of a building and scan for people. Using an IMU, each person's location relative to the starting point is saved for use by a rescue team. Using a camera and LIDAR, the RACECARs learn how to steer; using the camera alone, they detect people.
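A minimal sketch of recovering a position relative to the starting point from IMU data by dead reckoning (a deliberate simplification - raw double integration drifts quickly, and a real robot would fuse the IMU with odometry or other sensors):

```python
def dead_reckon(accels, dt):
    """Double-integrate (ax, ay) samples to a position relative to start.

    accels: list of (ax, ay) readings in m/s^2, sampled every dt seconds.
    Returns (x, y) in metres. Pure dead reckoning accumulates error,
    so this is only the core idea, not a production localizer.
    """
    vx = vy = x = y = 0.0
    for ax, ay in accels:
        vx += ax * dt   # integrate acceleration -> velocity
        vy += ay * dt
        x += vx * dt    # integrate velocity -> position
        y += vy * dt
    return x, y
```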

car.png

The (Proposed) Solution

Hardware Requirements

For Epoch and RaceX, we used an NVIDIA Jetson TX1 and TX2, respectively, for reliable edge processing of our multiple neural networks. Each RACECAR carries a Logitech C720 webcam, which replaced the ZED stereo camera that originally came with the RACECARs, along with an RPLidar and a SparkFun 9DoF Razor IMU. We had to reconfigure the kernels to accommodate all the USB ports. We used < and info about hardware of car itself >

Software Requirements

For both cars, we flashed the Jetsons with JetPack, which should install most or all of the C++ dependencies you need for the project (note: flashing requires an external Ubuntu host machine). Verify that CUDA, the proper GPU drivers, and so on are installed by following https://github.com/dusty-nv/jetson-inference.

For many of the Python dependencies and the drivers for the robots' various parts, we strongly suggest checking out JetsonHacks.com for simple installation scripts and tutorials. ROS (Robot Operating System) is a brilliant tool for controlling the car, but it can be very finicky to handle at first - JetsonHacks even has scripts for that!

For much of the code behind our PeopleNet - which detects people - we drew on jetson-inference, a repository created by Dustin Franklin, which you can check out at https://github.com/dusty-nv/jetson-inference. We forked and modified his code to suit our needs for PeopleNet.

Software Process

We have a navigation network that generates a map using a custom recursive-backtracking algorithm we wrote in Python.
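The actual navigation code isn't reproduced here, but a generic recursive-backtracking map carver - a sketch of the technique, not the team's implementation - looks like this:

```python
import random

def carve_map(width, height, seed=None):
    """Carve a grid map with recursive backtracking (iterative DFS).

    Cells are marked visited as the walk advances; a passage is opened
    between each newly visited cell and its predecessor, and the stack
    backtracks out of dead ends. The result is a spanning tree of cells.
    """
    rng = random.Random(seed)
    visited = [[False] * width for _ in range(height)]
    passages = set()  # each entry is a frozenset of two connected cells
    stack = [(0, 0)]
    visited[0][0] = True
    while stack:
        x, y = stack[-1]
        neighbours = [(x + dx, y + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= x + dx < width and 0 <= y + dy < height
                      and not visited[y + dy][x + dx]]
        if neighbours:
            nx, ny = rng.choice(neighbours)
            visited[ny][nx] = True
            passages.add(frozenset({(x, y), (nx, ny)}))
            stack.append((nx, ny))
        else:
            stack.pop()  # dead end: backtrack
    return passages
```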

Our PeopleNet was built in C++ with the help of jetson-inference (https://github.com/dusty-nv/jetson-inference), using the Caffe + DIGITS neural-network stack to train a network on an external GPU server and then deploy it on our Jetsons. We used the DetectNet framework with GoogLeNet weights, fine-tuned on our own self-labeled images of people.
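At deployment time, a detector like DetectNet clusters many overlapping raw boxes into final detections. A generic non-maximum-suppression sketch (not the DIGITS implementation) illustrates that step:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def non_max_suppression(boxes, scores, thresh=0.5):
    """Keep only the highest-scoring box from each cluster of overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```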

Our SteerNet - which turns the robot at the appropriate time - runs in a separate Python process using Keras, a user-friendly neural-network library built on top of a TensorFlow backend. It trains on a GPU server with data comprising image and LiDAR inputs paired with a joystick output for controlling the steering. We aim to collect around 40,000 training samples in total.
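A hedged sketch of how one training sample might be assembled - hypothetical names and plain Python stand in for the actual Keras data pipeline, and the 12 m range cap is an assumption, not a measured spec:

```python
def make_sample(image_pixels, lidar_ranges, joystick_steering):
    """Fuse one camera frame and one LiDAR sweep into a training sample.

    image_pixels: flat list of pixel intensities in [0, 255]
    lidar_ranges: list of range readings in metres
    joystick_steering: raw steering command, used as the label
    """
    max_range = 12.0  # assumed LiDAR range cap for normalization
    features = [p / 255.0 for p in image_pixels]               # image -> [0, 1]
    features += [min(r, max_range) / max_range for r in lidar_ranges]
    label = max(-1.0, min(1.0, joystick_steering))             # clamp to [-1, 1]
    return features, label
```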

Get Started

Check out Dusty-NV's jetson-inference and its "Two Days to a Demo" guide for training and deploying models you might create.

https://github.com/dusty-nv/jetson-inference

Check out Keras and the usage of SteerNet in DonkeyCar

https://github.com/fchollet/keras https://wroscoe.github.io/keras-lane-following-autopilot.html

Check out JetsonHacks for all things installation and configuration

http://www.jetsonhacks.com/ http://www.github.com/jetsonhacks

Lastly, post issues if there's any further basic guidance you need or something that we've missed!
