
HandsomeHans / Svm Classification Localization

HoG, PCA, PSO, Hard Negative Mining, Sliding Window, Edge Boxes, NMS

Programming Languages

python

Projects that are alternatives to or similar to Svm Classification Localization

Machine-Learning-Models
In this repository I implement machine learning methods ranging from simple to complex, and I try to keep the code in a template style.
Stars: ✭ 30 (-76.92%)
Mutual labels:  svm, pca
Clandmark
Open Source Landmarking Library
Stars: ✭ 204 (+56.92%)
Mutual labels:  svm, detection
Scene Text Recognition
Scene text detection and recognition based on Extremal Region (ER)
Stars: ✭ 146 (+12.31%)
Mutual labels:  svm, detection
Ailearning
AiLearning: Machine Learning (ML), Deep Learning (DL), Natural Language Processing (NLP)
Stars: ✭ 32,316 (+24758.46%)
Mutual labels:  svm, pca
NIDS-Intrusion-Detection
A simple implementation of a Network Intrusion Detection System. The KDD Cup '99 dataset is used: kdd_cup_10_percent for training and the "correct" set for testing. PCA is used for dimensionality reduction; SVM and KNN are the supervised classification algorithms. Accuracy: 83.5% for SVM, 80% for KNN.
Stars: ✭ 45 (-65.38%)
Mutual labels:  svm, pca
VisualML
Interactive Visual Machine Learning Demos.
Stars: ✭ 104 (-20%)
Mutual labels:  svm, pca
Ml Course
Starter code for Prof. Andrew Ng's machine learning MOOC in the R statistical language
Stars: ✭ 154 (+18.46%)
Mutual labels:  svm, pca
ml
Minimal implementations of classic machine learning algorithms
Stars: ✭ 130 (+0%)
Mutual labels:  svm, pca
Machine Learning With Python
Python code for common Machine Learning Algorithms
Stars: ✭ 3,334 (+2464.62%)
Mutual labels:  svm, pca
Patternrecognition matlab
Feature reduction projections and classifier models are learned from the training dataset and applied to classify the testing dataset. Several feature reduction approaches are compared in this paper: principal component analysis (PCA), linear discriminant analysis (LDA), and their kernel variants (KPCA, KLDA). Correspondingly, several classification algorithms are implemented: Support Vector Machine (SVM), Gaussian quadratic maximum likelihood, K-nearest neighbors (KNN), and Gaussian Mixture Model (GMM).
Stars: ✭ 33 (-74.62%)
Mutual labels:  svm, pca
Model Quantization
Collections of model quantization algorithms
Stars: ✭ 118 (-9.23%)
Mutual labels:  detection
Sinaweibo Emotion Classification
A sentiment analysis application for Sina Weibo
Stars: ✭ 118 (-9.23%)
Mutual labels:  svm
Opennpd
C++ detection and training code for "A Fast and Accurate Unconstrained Face Detector".
Stars: ✭ 126 (-3.08%)
Mutual labels:  detection
Awesome Gan For Medical Imaging
Awesome GAN for Medical Imaging
Stars: ✭ 1,814 (+1295.38%)
Mutual labels:  detection
Mobilenet
MobileNet built with TensorFlow
Stars: ✭ 1,531 (+1077.69%)
Mutual labels:  detection
Mylearn
machine learning algorithm
Stars: ✭ 125 (-3.85%)
Mutual labels:  svm
Dstl unet
Dstl Satellite Imagery Feature Detection
Stars: ✭ 117 (-10%)
Mutual labels:  detection
Ac Fpn
Implementation of the paper "Attention-guided Context Feature Pyramid Network for Object Detection"
Stars: ✭ 117 (-10%)
Mutual labels:  detection
Sfd.pytorch
S3FD: single shot face detector in pytorch
Stars: ✭ 116 (-10.77%)
Mutual labels:  detection
Pytorch Imagenet Cifar Coco Voc Training
Training examples and results for the ImageNet (ILSVRC2012)/CIFAR100/COCO2017/VOC2007+VOC2012 datasets. Image classification/object detection. Includes ResNet/EfficientNet/VovNet/DarkNet/RegNet/RetinaNet/FCOS/CenterNet/YOLOv3.
Stars: ✭ 130 (+0%)
Mutual labels:  detection

SVM-classification-detection (Python 2.7)

HoG, PCA, PSO, Hard Negative Mining, Sliding Window, NMS


The best way to do detection is:

HoG (features) -> PCA (fewer features) + PSO (best C & gamma) -> initial SVM -> HNM (more features) -> better SVM -> SW -> NMS (bbox regression)
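
As a hedged illustration of the first stage of this pipeline (HoG feature extraction), here is a minimal sketch using scikit-image; the 64x64 window size and the HoG parameters are assumptions for illustration, not necessarily the settings used by this repository's scripts.

```python
# A minimal sketch of HoG feature extraction (step 1), using scikit-image.
# The window size and HoG parameters are illustrative assumptions.
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.io import imread
from skimage.transform import resize

def extract_hog(image_path, window_size=(64, 64)):
    """Load an image, resize it to a fixed window, and return its HoG descriptor."""
    img = imread(image_path)
    if img.ndim == 3:
        img = rgb2gray(img)
    img = resize(img, window_size, anti_aliasing=True)
    return hog(img,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm='L2-Hys',
               feature_vector=True)

# Stack the descriptors of positive and negative training windows into X and y
# before handing them to the SVM.
```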

Sorry for my laziness.

I think I should clarify the steps of the program.

  1. Extract HoG features (script 1)

  2. Train an initial model for PSO (script 2)

  3. Apply PCA and run PSO to find better parameters C and gamma (script 6); see the sketch right after this list

  4. Use the non-PCA features and the best parameters to train the second model (script 2)

  5. To increase accuracy, use the second model for hard negative mining (HNM) and obtain the final model (script 7); a hard-negative-mining sketch follows the notes below

  6. Finally, choose whichever localization algorithm you like (script 8, 9, or 10); a sliding-window + NMS sketch appears at the end of this README
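
A rough sketch of step 3, assuming scikit-learn and a hand-rolled particle swarm (the repository's own scripts 2 and 6 may differ): PCA shrinks the HoG features so that the many SVM evaluations inside PSO stay affordable, and the swarm searches C and gamma in log10 space. Swarm size, iteration count, and search ranges are illustrative assumptions.

```python
# Hedged sketch: PCA-reduced features + a simple particle swarm over (C, gamma).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_svm_params(X, y, n_particles=10, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Search in log10 space: C in [1e-2, 1e3], gamma in [1e-5, 1e1] (assumed ranges).
    lo, hi = np.array([-2.0, -5.0]), np.array([3.0, 1.0])
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights

    def fitness(p):
        C, gamma = 10.0 ** p
        clf = SVC(C=C, gamma=gamma, kernel='rbf')
        return cross_val_score(clf, X, y, cv=3).mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()

    return 10.0 ** gbest   # best (C, gamma) found

# PCA is only used here to make the many fitness evaluations affordable;
# per step 4, the second model is trained on the full (non-PCA) features
# with the parameters found above, e.g.:
# pca = PCA(n_components=0.95).fit(X_hog)
# C, gamma = pso_svm_params(pca.transform(X_hog), y_labels)
```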

PS:

  1. The reason I use PCA is to speed up PSO. To be honest, PSO is really slow.

  2. For step 4, you can also use the PCA-processed features, but I strongly advise keeping as many features as possible: more features generally mean higher accuracy.
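
A hedged sketch of step 5 (hard negative mining), not the repository's actual script 7: scan images that contain no objects with the current SVM, collect every window it wrongly scores as positive, and add those "hard" negatives to the training set before retraining. The feature function, window size, and stride below are assumptions.

```python
# Hedged sketch of hard negative mining with a sliding window over negative images.
import numpy as np

def sliding_windows(image, window=(64, 64), stride=16):
    """Yield window-sized crops of a grayscale image on a regular grid."""
    h, w = image.shape[:2]
    for y in range(0, h - window[1] + 1, stride):
        for x in range(0, w - window[0] + 1, stride):
            yield image[y:y + window[1], x:x + window[0]]

def hard_negative_mining(clf, negative_images, hog_fn):
    """Return HoG vectors of windows the current classifier wrongly calls positive."""
    hard = []
    for img in negative_images:                     # images known to contain no objects
        for win in sliding_windows(img):
            feat = hog_fn(win)                      # HoG descriptor of this window
            if clf.decision_function([feat])[0] > 0:  # false positive -> hard negative
                hard.append(feat)
    return np.array(hard)

# Retrain: append the mined vectors (with the negative label) to the original
# training set and fit a new SVC with the C and gamma found by PSO.
```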

Chinese write-up: http://blog.csdn.net/renhanchi/article/category/7007663

I strongly recommend reading all six articles carefully before running the code, or reading them while you run it. There is not much content, but it will greatly help you understand the algorithms and the code.
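
Finally, a minimal sketch of step 6 (sliding window plus NMS), assuming the same HoG feature function and a trained SVC; the window size, stride, score threshold, and IoU threshold are illustrative assumptions, and no bounding-box regression is included.

```python
# Hedged sketch: sliding-window detection followed by greedy non-maximum suppression.
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy NMS; boxes are (x1, y1, x2, y2). Returns indices of kept boxes."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou < iou_thresh]
    return keep

def detect(image, clf, hog_fn, window=(64, 64), stride=16, score_thresh=0.5):
    """Score every window with the SVM and keep non-overlapping high-scoring boxes."""
    boxes, scores = [], []
    h, w = image.shape[:2]
    for y in range(0, h - window[1] + 1, stride):
        for x in range(0, w - window[0] + 1, stride):
            crop = image[y:y + window[1], x:x + window[0]]
            score = clf.decision_function([hog_fn(crop)])[0]
            if score > score_thresh:
                boxes.append((x, y, x + window[0], y + window[1]))
                scores.append(score)
    keep = nms(boxes, scores) if boxes else []
    return [boxes[i] for i in keep]
```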
