
stephenbeckr / SparsifiedKMeans

License: MIT
KMeans for big data using preconditioning and sparsification; Matlab implementation.


Projects that are alternatives of or similar to SparsifiedKMeans

skmeans
Super fast simple k-means implementation for unidimensional and multidimensional data.
Stars: ✭ 59 (+18%)
Mutual labels:  kmeans, k-means
ClusterAnalysis.jl
Cluster Algorithms from Scratch with Julia Lang. (K-Means and DBSCAN)
Stars: ✭ 22 (-56%)
Mutual labels:  k-means
topic modelling financial news
Topic modelling on financial news with Natural Language Processing
Stars: ✭ 51 (+2%)
Mutual labels:  k-means
ClusterR
Gaussian mixture models, k-means, mini-batch-kmeans and k-medoids clustering
Stars: ✭ 69 (+38%)
Mutual labels:  kmeans
gouda
Golang Utilities for Data Analysis
Stars: ✭ 18 (-64%)
Mutual labels:  kmeans
MoTIS
Mobile(iOS) Text-to-Image search powered by multimodal semantic representation models(e.g., OpenAI's CLIP). Accepted at NAACL 2022.
Stars: ✭ 60 (+20%)
Mutual labels:  k-means
clustering-python
Different clustering approaches applied on different problemsets
Stars: ✭ 36 (-28%)
Mutual labels:  kmeans
breathing-k-means
The "breathing k-means" algorithm with datasets and example notebooks
Stars: ✭ 74 (+48%)
Mutual labels:  k-means
Feature-Engineering-for-Fraud-Detection
Implementation of feature engineering from Feature engineering strategies for credit card fraud
Stars: ✭ 31 (-38%)
Mutual labels:  kmeans
AnnA Anki neuronal Appendix
Using machine learning on your anki collection to enhance the scheduling via semantic clustering and semantic similarity
Stars: ✭ 39 (-22%)
Mutual labels:  kmeans
KMeans elbow
Code for determining optimal number of clusters for K-means algorithm using the 'elbow criterion'
Stars: ✭ 35 (-30%)
Mutual labels:  kmeans
data-science-popular-algorithms
Data Science algorithms and topics that you must know. (Newly Designed) Recommender Systems, Decision Trees, K-Means, LDA, RFM-Segmentation, XGBoost in Python, R, and Scala.
Stars: ✭ 65 (+30%)
Mutual labels:  kmeans
kmeans-dbscan-tutorial
A clustering tutorial with scikit-learn for beginners.
Stars: ✭ 20 (-60%)
Mutual labels:  kmeans
ml-simulations
Animated Visualizations of Popular Machine Learning Algorithms
Stars: ✭ 33 (-34%)
Mutual labels:  kmeans
Clustering-Python
Python Clustering Algorithms
Stars: ✭ 23 (-54%)
Mutual labels:  kmeans
R-stats-machine-learning
Misc Statistics and Machine Learning codes in R
Stars: ✭ 33 (-34%)
Mutual labels:  k-means
Genetic-Algorithm-on-K-Means-Clustering
Implementing Genetic Algorithm on K-Means and compare with K-Means++
Stars: ✭ 37 (-26%)
Mutual labels:  k-means
osm-data-classification
Migrated to: https://gitlab.com/Oslandia/osm-data-classification
Stars: ✭ 23 (-54%)
Mutual labels:  kmeans
online-course-recommendation-system
Built on data from Pluralsight's course API fetched results. Works with model trained with K-means unsupervised clustering algorithm.
Stars: ✭ 31 (-38%)
Mutual labels:  k-means
MachineLearning
Implementations of machine learning algorithm by Python 3
Stars: ✭ 16 (-68%)
Mutual labels:  kmeans

SparsifiedKMeans

KMeans for big data using preconditioning and sparsification, Matlab implementation. Uses the KMeans clustering algorithm (also known as Lloyd's Algorithm or "K Means" or "K-Means") but sparsifies the data in a special manner to achieve significant (and tunable) savings in computation time and memory.
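As a rough illustration of the precondition-then-sparsify idea, here is a NumPy sketch (the package itself is Matlab; the sign-flip-plus-Hadamard mixing and uniform per-column sampling below are simplified stand-ins for the paper's construction, and all function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard(m):
    """Build an orthonormal Hadamard matrix of size m (m a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < m:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(H.shape[0])

def precondition_and_sparsify(X, keep):
    """Mix each column with random sign flips and a Hadamard transform,
    then keep only `keep` entries per column (simplified sketch)."""
    p, n = X.shape
    signs = rng.choice([-1.0, 1.0], size=p)
    H = hadamard(p)
    # Preconditioning spreads each column's energy across coordinates,
    # so a small random sample of entries is informative.
    Y = H @ (signs[:, None] * X)
    Xs = np.zeros_like(Y)
    for j in range(n):
        idx = rng.choice(p, size=keep, replace=False)
        # Rescale so the sparsified column is an unbiased estimate.
        Xs[idx, j] = Y[idx, j] * (p / keep)
    return Xs

X = rng.standard_normal((64, 200))   # 64-dimensional data, 200 points
Xs = precondition_and_sparsify(X, keep=8)
```

Clustering then runs on the sparse columns, which is where the tunable savings in time and memory come from.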

The code provides kmeans_sparsified, which is used much like the kmeans function from Matlab's Statistics toolbox. There are three benefits:

  1. The basic implementation is much faster than the Statistics toolbox version. We also have a few modern options that the toolbox version lacks; e.g., we implement K-means++ for initialization. (Update: Since 2015, Matlab has improved the speed of their routine and initialization, and now their version and ours are comparable).
  2. We provide a new variant, called sparsified KMeans, which preconditions and then samples the data. This version can be thousands of times faster and is designed for big data sets that are otherwise unmanageable.
  3. The code also allows a big-data option. Instead of passing in a matrix of data, you give it the location of a .mat file, and the code will break the data into chunks. This is useful when the data is, say, 10 TB and your computer only has 6 GB of RAM. The data is loaded in smaller chunks (e.g., less than 6 GB); each chunk is preconditioned, sampled, and discarded from RAM before the next chunk is processed. The entire algorithm makes one pass over the dataset.
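The chunked, one-pass flow above can be sketched as follows (plain Python/NumPy rather than Matlab; the chunk loader, function names, and sampling scheme are illustrative stand-ins, not the package's actual API):

```python
import numpy as np

rng = np.random.default_rng(1)

def load_chunks(n_total, chunk_size, p):
    """Stand-in for reading successive column chunks of a huge .mat file."""
    for start in range(0, n_total, chunk_size):
        n = min(chunk_size, n_total - start)
        yield rng.standard_normal((p, n))

def sparsify_chunk(X, keep):
    """Keep `keep` random entries per column, zero the rest (sketch)."""
    p, n = X.shape
    Xs = np.zeros_like(X)
    for j in range(n):
        idx = rng.choice(p, size=keep, replace=False)
        Xs[idx, j] = X[idx, j]
    return Xs

# One pass: each chunk is loaded, sparsified, and only its sparse
# version retained; the dense chunk is discarded before the next load,
# so dense data never accumulates in memory.
sparse_chunks = []
for chunk in load_chunks(n_total=1000, chunk_size=250, p=32):
    sparse_chunks.append(sparsify_chunk(chunk, keep=4))
    del chunk

X_sparse = np.concatenate(sparse_chunks, axis=1)
print(X_sparse.shape)  # (32, 1000)
```

Only the sparsified representation (a small fraction of the original entries) is kept, which is what makes the 10 TB / 6 GB scenario feasible.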

Note: if you use our code in an academic paper, we appreciate it if you cite us: "Preconditioned Data Sparsification for Big Data with Applications to PCA and K-means", F. Pourkamali-Anaraki and S. Becker, IEEE Trans. Info. Theory, 2017.

Why use it?

For moderate to large data, we believe this is one of the fastest ways to run k-means. For extremely large data that cannot all fit into core memory of your computer, we believe there are almost no good alternatives (in theory and practice) to this code.

Installation

Every time you start a new Matlab session, run setup_kmeans and it will correctly set the paths. The first time you run it, it may also compile some mex files; for this, you need a valid C compiler (see http://www.mathworks.com/support/compilers/R2015a/index.html).

Version

Current version is 2.1

Authors

Farhad Pourkamali-Anaraki and Stephen Becker (see the reference below).

Reference

Preconditioned Data Sparsification for Big Data with Applications to PCA and K-means, F. Pourkamali Anaraki and S. Becker, IEEE Trans. Info. Theory, 2017. See also the arXiv version

Bibtex:

@article{SparsifiedKmeans,
    title   = {Preconditioned Data Sparsification for Big Data with Applications to {PCA} and {K}-means},
    author  = {Pourkamali-Anaraki, F. and Becker, S.},
    year    = 2017,
    doi     = {10.1109/TIT.2017.2672725},
    journal = {IEEE Trans. Info. Theory},
    volume  = 63,
    number  = 5,
    pages   = {2954--2974}
}

Related projects

  • sparsekmeans by Eric Kightley is our joint project implementing the algorithm in Python, with support for out-of-memory operation. sparseklearn generalizes the same idea to other types of machine learning algorithms (also Python).

Further information

Some images are taken from the paper or from presentation slides; see the journal paper for full explanations.

Figures:

  • Example on synthetic data
  • Main idea
  • MNIST experiment; MNIST accuracy
  • Infinite MNIST big data experiment; MNIST2 accuracy
  • Two-pass mode for increased accuracy
  • Theory
