Xtra-Computing / Thundersvm

Licence: apache-2.0
ThunderSVM: A Fast SVM Library on GPUs and CPUs

Projects that are alternatives of or similar to Thundersvm

Pycaret
An open-source, low-code machine learning library in Python
Stars: ✭ 4,594 (+258.35%)
Mutual labels:  gpu, regression, classification
Benchmarks
Comparison tools
Stars: ✭ 139 (-89.16%)
Mutual labels:  classification, gpu, regression
Arboretum
Gradient Boosting powered by GPU(NVIDIA CUDA)
Stars: ✭ 64 (-95.01%)
Mutual labels:  gpu, cuda
Sru Deeplearning Workshop
A 12-hour deep learning course with the Keras framework
Stars: ✭ 66 (-94.85%)
Mutual labels:  classification, regression
Mlbox
MLBox is a powerful Automated Machine Learning python library.
Stars: ✭ 1,199 (-6.47%)
Mutual labels:  classification, regression
Ml
A high-level machine learning and deep learning library for the PHP language.
Stars: ✭ 1,270 (-0.94%)
Mutual labels:  classification, regression
Tsne Cuda
GPU Accelerated t-SNE for CUDA with Python bindings
Stars: ✭ 1,120 (-12.64%)
Mutual labels:  gpu, cuda
Parenchyma
An extensible HPC framework for CUDA, OpenCL and native CPU.
Stars: ✭ 71 (-94.46%)
Mutual labels:  gpu, cuda
Heteroflow
Concurrent CPU-GPU Programming using Task Models
Stars: ✭ 57 (-95.55%)
Mutual labels:  gpu, cuda
Pytsetlinmachine
Implements the Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, Weighted Tsetlin Machine, and Embedding Tsetlin Machine, with support for continuous features, multigranularity, and clause indexing
Stars: ✭ 80 (-93.76%)
Mutual labels:  classification, regression
Cuda Design Patterns
Some CUDA design patterns and a bit of template magic for CUDA
Stars: ✭ 78 (-93.92%)
Mutual labels:  gpu, cuda
Openml R
R package to interface with OpenML
Stars: ✭ 81 (-93.68%)
Mutual labels:  classification, regression
Pycuda
CUDA integration for Python, plus shiny features
Stars: ✭ 1,112 (-13.26%)
Mutual labels:  gpu, cuda
Neuralnetplayground
A MATLAB implementation of the TensorFlow Neural Networks Playground seen on http://playground.tensorflow.org/
Stars: ✭ 60 (-95.32%)
Mutual labels:  classification, regression
Ggnn
GGNN: State of the Art Graph-based GPU Nearest Neighbor Search
Stars: ✭ 63 (-95.09%)
Mutual labels:  gpu, cuda
Optix Path Tracer
OptiX Path Tracer
Stars: ✭ 60 (-95.32%)
Mutual labels:  gpu, cuda
Metriculous
Measure and visualize machine learning model performance without the usual boilerplate.
Stars: ✭ 71 (-94.46%)
Mutual labels:  classification, regression
Mpr
Reference implementation for "Massively Parallel Rendering of Complex Closed-Form Implicit Surfaces" (SIGGRAPH 2020)
Stars: ✭ 84 (-93.45%)
Mutual labels:  gpu, cuda
Php Ml
PHP-ML - Machine Learning library for PHP
Stars: ✭ 7,900 (+516.22%)
Mutual labels:  classification, regression
Carlsim3
CARLsim is an efficient, easy-to-use, GPU-accelerated software framework for simulating large-scale spiking neural network (SNN) models with a high degree of biological detail.
Stars: ✭ 52 (-95.94%)
Mutual labels:  gpu, cuda


What's new

  • We have recently released ThunderGBM, a fast GBDT and Random Forest library on GPUs.
  • Added a scikit-learn interface; see here.

Overview

The mission of ThunderSVM is to help users easily and efficiently apply SVMs to solve problems. ThunderSVM exploits GPUs and multi-core CPUs to achieve high efficiency. Key features of ThunderSVM are as follows.

  • Supports all functionalities of LibSVM, such as one-class SVMs, SVC, SVR and probabilistic SVMs (a short Python sketch follows below).
  • Uses the same command line options as LibSVM.
  • Provides Python, R, MATLAB and Ruby interfaces.
  • Supported operating systems: Linux, Windows and macOS.
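
To give a feel for the Python interface beyond classification, here is a minimal sketch covering regression and one-class estimators. It assumes the thundersvm package exposes SVR and OneClassSVM with scikit-learn-style constructors, mirroring the SVC example in the Quick Install section below; check the API Reference for the exact signatures.

import numpy as np
from thundersvm import SVR, OneClassSVM  # assumed exports, analogous to SVC

x = np.random.rand(100, 5)   # toy feature matrix
y = np.random.rand(100)      # toy regression targets

reg = SVR(kernel='rbf', C=10, gamma=0.5)   # epsilon-SVR with an RBF kernel
reg.fit(x, y)
y_pred = reg.predict(x)

ocs = OneClassSVM(nu=0.1)    # one-class SVM for novelty detection
ocs.fit(x)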

Why accelerate SVMs: a survey conducted by Kaggle in 2017 shows that 26% of data mining and machine learning practitioners use SVMs.

Documentation | Installation | API Reference (doxygen)

Getting Started

Prerequisites

  • cmake 2.8 or above
  • gcc 4.8 or above for Linux and macOS
  • Visual C++ for Windows

If you want to use GPUs, you also need to install CUDA.

Quick Install

Download the Python wheel file (for Python 3 or later).

Install the Python wheel file.

pip install thundersvm-cu90-0.2.0-py3-none-linux_x86_64.whl
Example
from thundersvm import SVC
clf = SVC()
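# x: training feature matrix, y: labels (e.g., NumPy arrays prepared beforehand)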
clf.fit(x, y)
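
A slightly fuller sketch, trained on the repository's bundled test data; load_svmlight_file from scikit-learn is just one convenient way to read LibSVM-format files and is not required by ThunderSVM itself:

from sklearn.datasets import load_svmlight_file  # optional helper for LibSVM-format data
from thundersvm import SVC

x, y = load_svmlight_file("dataset/test_dataset.txt")  # path relative to the repository root
clf = SVC(C=100, gamma=0.5)   # same C and gamma as the Quick Start command below
clf.fit(x, y)
predictions = clf.predict(x)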

Download

git clone https://github.com/Xtra-Computing/thundersvm.git

Build on Linux (build instructions for macOS and Windows are available in the documentation)

ThunderSVM on GPUs
cd thundersvm
mkdir build && cd build && cmake .. && make -j

If you run into issues that can be traced back to your version of gcc, invoke cmake with compiler flags that force gcc 6:

cmake -DCMAKE_C_COMPILER=gcc-6 -DCMAKE_CXX_COMPILER=g++-6 ..
ThunderSVM on CPUs
# in thundersvm root directory
git submodule init eigen && git submodule update
mkdir build && cd build && cmake -DUSE_CUDA=OFF .. && make -j

If make -j doesn't work, simply use make. The number of CPU cores to use can be specified with the -o option (e.g., -o 10); refer to Parameters for more information.
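
For example, a CPU training run limited to 10 cores might look like this (assuming, as an illustration, that -o is passed to thundersvm-train alongside the other training options shown in the Quick Start):

./bin/thundersvm-train -o 10 -c 100 -g 0.5 ../dataset/test_dataset.txt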

Quick Start

./bin/thundersvm-train -c 100 -g 0.5 ../dataset/test_dataset.txt
./bin/thundersvm-predict ../dataset/test_dataset.txt test_dataset.txt.model test_dataset.predict

You should see Accuracy = 0.98 after a successful run.

How to cite ThunderSVM

If you use ThunderSVM in your paper, please cite our work (full version).

@article{wenthundersvm18,
  author  = {Wen, Zeyi and Shi, Jiashuai and Li, Qinbin and He, Bingsheng and Chen, Jian},
  title   = {{ThunderSVM}: A Fast {SVM} Library on {GPUs} and {CPUs}},
  journal = {Journal of Machine Learning Research},
  volume  = {19},
  pages   = {797--801},
  year    = {2018}
}

Other publications

  • Zeyi Wen, Jiashuai Shi, Bingsheng He, Yawen Chen, and Jian Chen. Efficient Multi-Class Probabilistic SVMs on GPUs. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2018.
  • Zeyi Wen, Bingsheng He, Kotagiri Ramamohanarao, Shengliang Lu, and Jiashuai Shi. Efficient Gradient Boosted Decision Tree Training on GPUs. The 32nd IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 234-243, 2018.

Acknowledgement

  • We acknowledge NVIDIA for their hardware donations.
  • This project is hosted by NUS in collaboration with Prof. Jian Chen (South China University of Technology). The initial work on this project was done while Zeyi Wen was at The University of Melbourne.
  • This work is partially supported by a MoE AcRF Tier 1 grant (T1 251RES1610) in Singapore.
  • We also thank the authors of LibSVM and OHD-SVM, which inspired our algorithmic design.

Selected projects that use ThunderSVM

[1] Scene Graphs for Interpretable Video Anomaly Classification (published in NeurIPS18)

[2] 3D semantic segmentation for high-resolution aerial survey derived point clouds using deep learning. (published in ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, 2018).

[3] Performance Comparison of Machine Learning Models for DDoS Attacks Detection. (published in IEEE International Computer Science and Engineering Conference (ICSEC), 2018).

[4] Kernel machines that adapt to GPUs for effective large batch training. (in arXiv preprint arXiv:1806.06144, 2018).

[5] Sampling Bias in Deep Active Classification: An Empirical Study. (in arXiv preprint arXiv:1909.09389, 2019).

[6] Machine Learning-Based Fast Banknote Serial Number Recognition Using Knowledge Distillation and Bayesian Optimization. (published in Sensors 19.19:4218, 2019).

[7] Classification for Device-free Localization based on Deep Neural Networks. (in Diss. The University of Aizu, 2019).

[8] An accurate and robust approach of device-free localization with convolutional autoencoder. (published in IEEE Internet of Things Journal 6.3:5825-5840, 2019).

[9] Accounting for part pose estimation uncertainties during trajectory generation for part pick-up using mobile manipulators. (published in IEEE International Conference on Robotics and Automation (ICRA), 2019).

[10] Genetic improvement of GPU code. (published in IEEE/ACM International Workshop on Genetic Improvement (GI), 2019). The source code of ThunderSVM is used as a benchmark.

[11] Dynamic Multi-Resolution Data Storage. (published in IEEE/ACM International Symposium on Microarchitecture, 2019). The source code of ThunderSVM is used as a benchmark.

[12] Hyperparameter Estimation in SVM with GPU Acceleration for Prediction of Protein-Protein Interactions. (published in IEEE International Conference on Big Data, 2019).

[13] Texture Selection for Automatic Music Genre Classification. (published in Applied Soft Computing, 2020).

[14] Evolving Switch Architecture toward Accommodating In-Network Intelligence. (published in IEEE Communications Magazine 58.1: 33-39, 2020).

[15] Block-Sparse Coding Based Machine Learning Approach for Dependable Device-Free Localization in IoT Environment. (published in IEEE Internet of Things Journal, 2020).

[16] An adaptive trust boundary protection for IIoT networks using deep-learning feature extraction based semi-supervised model. (published in IEEE Transactions on Industrial Informatics, 2020).

[17] Performance Prediction for Multi-Application Concurrency on GPUs. (published in IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2020).

[18] Tensorsvm: accelerating kernel machines with tensor engine. (published in ACM International Conference on Supercomputing (ICS), 2020).

[19] GEVO: GPU Code Optimization Using Evolutionary Computation. (published in ACM Transactions on Architecture and Code Optimization (TACO), 2020).

[20] CRISPRpred (SEQ): a sequence-based method for sgRNA on target activity prediction using traditional machine learning. (published in BMC bioinformatics, 2020).

[21] Prediction of gas concentration using gated recurrent neural networks. (published in IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), 2020).

[22] Design powerful predictor for mRNA subcellular location prediction in Homo sapiens. (published in Briefings in Bioinformatics, 2021).
