
erfanMhi / A-quantum-inspired-genetic-algorithm-for-k-means-clustering

License: other
Implementation of the quantum-inspired genetic algorithm proposed in the paper "A quantum-inspired genetic algorithm for k-means clustering".

Programming Languages

Jupyter Notebook


A comparison of a quantum-inspired genetic algorithm with a genetic algorithm for k-means clustering

Abstract

In this project we compare two different algorithms for k-means clustering: a quantum-inspired genetic algorithm, implemented from the quantum-inspired genetic algorithm article, and a simple genetic algorithm from another article.

Contents

  • Introduction
  • Quantum-inspired genetic algorithm for k-means clustering implementation
  • Genetic algorithm for k-means clustering implementation
  • Comparison of the two algorithms
  • Conclusion

Introduction

Clustering plays an important role in many unsupervised learning areas, such as pattern recognition, data mining, and knowledge discovery. The clustering problem can be summarized as follows: given n points in R^d and an integer k, find a set of k points, called centroids, such that the sum of the distances from each of the n points to its nearest centroid is minimized.

Conventional clustering algorithms fall into two main categories: hierarchical and partitional. A hierarchical clustering algorithm outputs a dendrogram, a tree structure showing a sequence of clusterings in which each clustering is a partition of the dataset. Partitional clustering algorithms, by contrast, divide the dataset into a number of clusters and output a single partition, which most of them obtain by maximizing or minimizing some criterion function. Recent research shows that partitional algorithms are well suited to clustering large datasets because of their relatively low computational requirements; their time complexity is almost linear, which makes them widely used.

The best-known partitional clustering algorithm is k-means. K-means first generates k initial cluster centroids at random. After several iterations, the criterion function assigns the data to clusters so that points in the same cluster are close to each other while different clusters are widely separated. However, the traditional k-means algorithm has two drawbacks.
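The criterion function and the iteration described above can be sketched in a few lines of NumPy. This is only an illustrative sketch of standard k-means (Lloyd's algorithm), not the code in this repository; the function and parameter names are placeholders:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Lloyd's algorithm: random initial centroids, then alternating
    assignment and update steps until the centroids stop moving."""
    rng = np.random.default_rng(seed)
    # Random initialisation: pick k distinct data points as centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its cluster
        # (an empty cluster keeps its old centroid).
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    # Criterion function: total distance of points to their nearest centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return centroids, dists.argmin(axis=1), dists.min(axis=1).sum()
```

The final line computes exactly the criterion stated above: the sum, over all n points, of the distance to the nearest of the k centroids.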
One is that the number of clusters has to be known in advance; the other is that the clustering result is sensitive to the selection of the initial cluster centroids, which may cause the algorithm to converge to a local optimum. Different datasets have different numbers of clusters, which is difficult to know beforehand, and because the initial centroids are selected randomly the algorithm may converge to different local optima. Much research effort has therefore gone into mitigating these two drawbacks of the conventional k-means algorithm. The genetic algorithm (GA) is one method for avoiding local optima and discovering good initial centroids that lead to superior partitions under k-means.
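To illustrate the GA idea, one can evolve candidate centroid sets directly against the k-means criterion. This is a generic toy sketch, not the quantum-inspired algorithm from the paper or the GA from the second article; all names and hyperparameters are hypothetical:

```python
import numpy as np

def ga_centroids(X, k, pop_size=20, n_gens=30, mut_rate=0.1, seed=0):
    """Toy GA: each chromosome is a flattened set of k candidate centroids;
    fitness is the sum of point-to-nearest-centroid distances (lower = better)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    def sse(chrom):
        cents = chrom.reshape(k, d)
        dists = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
        return dists.min(axis=1).sum()

    # Initialise each chromosome with k randomly chosen data points.
    pop = np.array([X[rng.choice(n, k, replace=False)].ravel()
                    for _ in range(pop_size)])
    for _ in range(n_gens):
        fit = np.array([sse(c) for c in pop])
        parents = pop[fit.argsort()[:pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.choice(len(parents), 2, replace=False)]
            cut = rng.integers(1, k * d)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(k * d) < mut_rate        # Gaussian mutation
            child[mask] += rng.normal(0, X.std(), mask.sum())
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.array([sse(c) for c in pop]).argmin()]
    return best.reshape(k, d)
```

The returned centroid set can then seed a standard k-means run, which is the general strategy (evolving initializations rather than trusting a single random draw) that the compared papers build on.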

Results

(Results figure: comparison of the two algorithms.)

Reference

"A quantum-inspired genetic algorithm for k-means clustering" — https://www.sciencedirect.com/science/article/pii/S095741740901063X
