neerajd12 / object-detection-with-svm-and-opencv

License: Apache-2.0
Detect objects using SVM and OpenCV

Programming Languages

Jupyter Notebook

Projects that are alternatives of or similar to object-detection-with-svm-and-opencv

Ml Cheatsheet
A constantly updated python machine learning cheatsheet
Stars: ✭ 136 (+466.67%)
Mutual labels:  sklearn, scipy
resolutions-2019
A list of data mining and machine learning papers that I implemented in 2019.
Stars: ✭ 19 (-20.83%)
Mutual labels:  sklearn, scipy
Data Analysis
Mainly summaries of web-crawler and data-analysis projects, plus modeling, machine learning, and model evaluation.
Stars: ✭ 142 (+491.67%)
Mutual labels:  sklearn, scipy
object-detection-with-deep-learning
demonstrating use of convolution neural networks to detect objects in a video
Stars: ✭ 17 (-29.17%)
Mutual labels:  sklearn, scipy
techloop-ml-plus
Archives and Tasks for ML+ sessions
Stars: ✭ 23 (-4.17%)
Mutual labels:  sklearn, scipy
skan
Python module to analyse skeleton (thin object) images
Stars: ✭ 92 (+283.33%)
Mutual labels:  scipy
trt pose hand
Real-time hand pose estimation and gesture classification using TensorRT
Stars: ✭ 137 (+470.83%)
Mutual labels:  sklearn
AIML-Projects
Projects I completed as a part of Great Learning's PGP - Artificial Intelligence and Machine Learning
Stars: ✭ 85 (+254.17%)
Mutual labels:  support-vector-machines
Igel
a delightful machine learning tool that allows you to train, test, and use models without writing code
Stars: ✭ 2,956 (+12216.67%)
Mutual labels:  sklearn
machine-learning-templates
Template codes and examples for Python machine learning concepts
Stars: ✭ 40 (+66.67%)
Mutual labels:  sklearn
compv
Insanely fast Open Source Computer Vision library for ARM and x86 devices (up to 50 times faster than OpenCV)
Stars: ✭ 155 (+545.83%)
Mutual labels:  support-vector-machines
merkalysis
A marketing tool that helps you to market your products using organic marketing. This tool can potentially save you 1000s of dollars every year. The tool predicts the reach of your posts on social media and also suggests you hashtags for captions in such a way that it increases your reach.
Stars: ✭ 28 (+16.67%)
Mutual labels:  sklearn
PyCannyEdge
Educational Python implementation of the Canny Edge Detector
Stars: ✭ 31 (+29.17%)
Mutual labels:  scipy
jupyter boilerplate
Adds a customizable menu item to Jupyter (IPython) notebooks to insert boilerplate snippets of code
Stars: ✭ 69 (+187.5%)
Mutual labels:  scipy
A-B-testing-with-Machine-Learning
Implemented an A/B Testing solution with the help of machine learning
Stars: ✭ 37 (+54.17%)
Mutual labels:  sklearn
ml
Base machine learning image and environment.
Stars: ✭ 15 (-37.5%)
Mutual labels:  sklearn
scipy con 2019
Tutorial Sessions for SciPy Con 2019
Stars: ✭ 262 (+991.67%)
Mutual labels:  scipy
CNCC-2019
Computational Neuroscience Crash Course (CNCC 2019)
Stars: ✭ 26 (+8.33%)
Mutual labels:  scipy
scipy-crash-course
Material for a 24 hours course on Scientific Python
Stars: ✭ 98 (+308.33%)
Mutual labels:  scipy
imbalanced-ensemble
Class-imbalanced / long-tailed ensemble learning in Python. Modular, flexible, and extensible.
Stars: ✭ 199 (+729.17%)
Mutual labels:  sklearn

object-detection-with-svm-and-opencv code

Feature Selection and Tuning

The skimage hog function is used to extract HOG features in cell 3 of the notebook (Vehicle-Detection-SVM.ipynb). Apart from HOG features, color histogram features and raw color features are also used.

HOG features are extracted for all three channels of the HSV color space. These were selected after trial and error; refer to cell 2 for the color spaces tested. Beyond that, I used 9 orientations with 8 pixels per cell and 2 cells per block, which gave the best results after testing a few combinations.

Refer to cell 8 in the notebook for the parameters used.
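As an illustrative sketch of that feature extraction (the parameter values match those stated above, but the function name and layout are assumptions, not the notebook's exact code):

```python
import numpy as np
from skimage.feature import hog

def extract_hog_features(hsv_image):
    """Extract HOG features from each of the 3 HSV channels and concatenate them."""
    features = []
    for channel in range(3):
        channel_features = hog(
            hsv_image[:, :, channel],
            orientations=9,           # 9 orientation bins, as described above
            pixels_per_cell=(8, 8),
            cells_per_block=(2, 2),
            feature_vector=True,
        )
        features.append(channel_features)
    return np.concatenate(features)

# Example on a synthetic 64x64 patch (the usual training-patch size for this
# kind of vehicle-detection pipeline; an assumption here):
patch = np.random.rand(64, 64, 3)
feats = extract_hog_features(patch)
print(feats.shape)  # (5292,) -> 3 channels * 7*7 blocks * 2*2 cells * 9 bins
```

For a 64x64 patch with these parameters, each channel yields 7x7 blocks of 2x2 cells with 9 bins, i.e. 1764 values per channel.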

Classifier Selection and Tuning

First, different classifiers such as SVC and decision trees were tested, and SVC was chosen because it gave better results with the default configuration. Refer to cells 5, 10, and 11.

I experimented with different parameters such as the kernel ('linear', 'rbf'), C, gamma, and probability. The linear kernel was chosen because it was the fastest without a sizeable loss in performance.
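A minimal sketch of that kind of parameter sweep, here using sklearn's GridSearchCV on toy data standing in for the real feature vectors (the grid values are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy stand-in for the real HOG/color feature matrix.
X, y = make_classification(n_samples=300, n_features=20, random_state=42)

# Small grid over the parameters mentioned above (kernel, C, gamma).
grid = GridSearchCV(
    SVC(),
    param_grid={
        "kernel": ["linear", "rbf"],
        "C": [0.1, 1, 10],
        "gamma": ["scale"],
    },
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_)
```

In practice, cross-validated accuracy alone does not capture the speed argument made above; timing each fit/predict separately is what motivates the linear kernel.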

Initially only the HOG features were used, which gave good performance but also produced a lot of false positives. Adding the histogram and color features helped reduce the false positives, and thresholding improved the false-positive situation further, especially in the videos.

Finding the Cars in the Image

A sliding window approach is implemented, where overlapping tiles in each test image are classified as vehicle or non-vehicle.

The sliding window technique used to find the cars is in cell 7 of the notebook. Initially all processing was done per window, but as noted in the class, the HOG features are instead extracted once for the whole search region to improve performance.

Processing is also restricted to the road portion of the image by cropping out the unwanted parts beforehand.
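The window enumeration itself can be sketched as follows. The y_start/y_stop crop values and the overlap are illustrative assumptions, and this sketch does not replicate the extract-HOG-once optimization described above:

```python
def slide_window(img_width, img_height, window=64, overlap=0.5,
                 y_start=400, y_stop=656):
    """Generate overlapping window coordinates over the road portion only.

    y_start/y_stop restrict the search to the lower part of the frame;
    the exact values here are assumptions, not taken from the notebook.
    """
    step = int(window * (1 - overlap))
    windows = []
    for y in range(y_start, y_stop - window + 1, step):
        for x in range(0, img_width - window + 1, step):
            windows.append((x, y, x + window, y + window))
    return windows

# For a 1280x720 frame this yields 39 columns x 7 rows of windows:
windows = slide_window(1280, 720)
print(len(windows))  # 273
```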

Training a Classifier

Cell 9 of the notebook sets up the training and testing data. Here we read the images from disk, extract color features, histogram features, and HOG features, and concatenate them using the wrapper function in cell 4 of the notebook.
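A rough sketch of such a wrapper, assuming 32x32 spatial binning and 32-bin histograms (both bin counts are assumptions, and the HOG extractor is stubbed out):

```python
import numpy as np

def bin_spatial(img, size=(32, 32)):
    """Raw color features: downsample to `size` and flatten."""
    ys = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
    return img[np.ix_(ys, xs)].ravel()

def color_hist(img, nbins=32):
    """Per-channel histograms, concatenated."""
    hists = [np.histogram(img[:, :, c], bins=nbins, range=(0, 1))[0]
             for c in range(3)]
    return np.concatenate(hists)

def extract_features(img, hog_fn):
    """Wrapper in the spirit of cell 4: spatial + histogram + HOG features."""
    return np.concatenate([bin_spatial(img), color_hist(img), hog_fn(img)])

patch = np.random.rand(64, 64, 3)
fake_hog = lambda im: np.zeros(5292)  # placeholder for the real HOG extractor
combined = extract_features(patch, fake_hog)
print(combined.shape)  # (8460,) = 3072 spatial + 96 histogram + 5292 HOG
```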

After this, the features are scaled using sklearn's StandardScaler and split into training and test sets in an 80-20 ratio.

The classifier is then trained on this data in cell 10 and finally tested on the test data in cell 11.
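A minimal sketch of the scale / split / train / test pipeline, with synthetic data standing in for the real feature vectors:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Toy features standing in for the concatenated HOG/color vectors.
X, y = make_classification(n_samples=500, n_features=50, random_state=1)

# Scale, then split 80-20 as described above.
scaler = StandardScaler().fit(X)
X_scaled = scaler.transform(X)
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.2, random_state=1)

clf = LinearSVC()  # linear kernel, as chosen above
clf.fit(X_train, y_train)
print(round(clf.score(X_test, y_test), 3))
```

Note that the scaler must be fit on the training data only in a strict setup; the same fitted scaler is then reused at prediction time on every window.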

Video Processing

The detections from the video frames are added to a queue, and the mean over the last 10 frames is thresholded to keep only strong detections. This removes false positives and reinforces the windows that contain a car.
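A sketch of that smoothing, assuming a fixed-length deque of per-frame detection heatmaps and an illustrative threshold of 0.5:

```python
from collections import deque
import numpy as np

class HeatmapSmoother:
    """Average per-frame detection heatmaps over the last N frames and
    threshold the mean to suppress transient false positives."""
    def __init__(self, n_frames=10, threshold=0.5):
        self.frames = deque(maxlen=n_frames)  # old frames drop off automatically
        self.threshold = threshold

    def update(self, heatmap):
        self.frames.append(heatmap)
        mean = np.mean(self.frames, axis=0)
        return mean >= self.threshold  # boolean mask of "good" detections

smoother = HeatmapSmoother(n_frames=10, threshold=0.5)
flicker = np.zeros((4, 4)); flicker[0, 0] = 1  # detected in one frame only
steady = np.zeros((4, 4)); steady[2, 2] = 1    # detected in every frame
mask = None
for i in range(10):
    frame = steady.copy()
    if i == 0:
        frame += flicker
    mask = smoother.update(frame)
print(mask[2, 2], mask[0, 0])  # True False
```

The one-frame flicker averages to 0.1 over the window and is suppressed, while the steady detection survives the threshold.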

[Images: vehicle detection test 1 and its result, vehicle detection test 2 and its result]

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].